Repository containing the backend code for our Tyndale AI system.
Latest commit: 2f172ddaf9 by Danny (2026-01-30 10:47:03 -06:00)
feat: auto-build FAISS index on startup if missing

Add lifespan handler that checks for FAISS index at startup and automatically builds it if not found. This ensures the service works on fresh deployments without manual indexing.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
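
The commit describes the startup behaviour only at a high level. As an illustration, a minimal sketch of such a lifespan handler is shown below, assuming FastAPI and faiss; the index path, embedding dimension, and build_faiss_index helper are assumptions, not the repository's actual code.

# Hypothetical sketch of the startup behaviour described in the commit above.
# INDEX_PATH, EMBEDDING_DIM, and build_faiss_index are assumed names, not the real code.
from contextlib import asynccontextmanager
from pathlib import Path

import faiss
import numpy as np
from fastapi import FastAPI

INDEX_PATH = Path("embeddings/index.faiss")  # assumed location
EMBEDDING_DIM = 384                          # assumed dimension

def build_faiss_index() -> None:
    # Placeholder vectors; the real service would index its embeddings artifacts.
    vectors = np.random.rand(10, EMBEDDING_DIM).astype("float32")
    index = faiss.IndexFlatL2(EMBEDDING_DIM)
    index.add(vectors)
    INDEX_PATH.parent.mkdir(parents=True, exist_ok=True)
    faiss.write_index(index, str(INDEX_PATH))

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Build the index on first boot so fresh deployments work without manual indexing.
    if not INDEX_PATH.exists():
        build_faiss_index()
    yield

app = FastAPI(lifespan=lifespan)
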
.claude/agents feat: add OpenAI integration with dependency injection support 2026-01-13 15:17:44 -06:00
app feat: auto-build FAISS index on startup if missing 2026-01-30 10:47:03 -06:00
artifacts feat: yaml files for llm reasoning 2026-01-30 10:37:00 -06:00
embeddings feat: yaml files for llm reasoning 2026-01-30 10:41:12 -06:00
schemas feat: yaml files for llm reasoning 2026-01-29 12:39:41 -06:00
.env.example feat: add CORS middleware and SSE streaming endpoint 2026-01-16 12:43:21 -06:00
Dockerfile feat: add OpenAI integration with dependency injection support 2026-01-13 15:17:44 -06:00
README.md feat: add FastAPI skeleton for LLM chat service 2026-01-07 19:32:57 -06:00
requirements.txt feat: replace Redis with in-memory conversation storage 2026-01-30 10:34:47 -06:00

README.md

Tyndale AI Service

LLM Chat Service for algorithmic trading support - codebase Q&A, P&L summarization, and strategy enhancement suggestions.

Quick Start

Local Development

# Install dependencies
pip install -r requirements.txt

# Run the server
uvicorn app.main:app --reload --port 8080

Docker

# Build
docker build -t tyndale-ai-service .

# Run
docker run -p 8080:8080 -e LLM_MODE=local tyndale-ai-service

API Endpoints

Health Check

curl http://localhost:8080/health

Response:

{"status": "ok"}

Chat

curl -X POST http://localhost:8080/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello, how are you?"}'

Response:

{
  "conversation_id": "uuid-generated-if-not-provided",
  "response": "...",
  "mode": "local",
  "sources": []
}

With conversation ID:

curl -X POST http://localhost:8080/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Follow up question", "conversation_id": "my-conversation-123"}'

Environment Variables

Variable           Description                    Default
LLM_MODE           local or remote                local
LLM_REMOTE_URL     Remote LLM endpoint URL        (empty)
LLM_REMOTE_TOKEN   Bearer token for remote LLM    (empty)
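
These variables are read at startup by app/config.py. A minimal sketch of how that module might look is given below; the Settings class name and field layout are assumptions, not a copy of the actual file.

# Hypothetical sketch of app/config.py; only the variable names and defaults come from the table above.
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class Settings:
    llm_mode: str = os.getenv("LLM_MODE", "local")             # "local" or "remote"
    llm_remote_url: str = os.getenv("LLM_REMOTE_URL", "")      # remote LLM endpoint URL
    llm_remote_token: str = os.getenv("LLM_REMOTE_TOKEN", "")  # bearer token for the remote LLM

settings = Settings()
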

Remote Mode Setup

export LLM_MODE=remote
export LLM_REMOTE_URL=https://your-llm-service.com/generate
export LLM_REMOTE_TOKEN=your-api-token

uvicorn app.main:app --reload --port 8080

The remote adapter expects the LLM service to accept:

{"conversation_id": "...", "message": "..."}

And return:

{"response": "..."}

Project Structure

tyndale-ai-service/
├── .claude/agents/
├── app/
│   ├── __init__.py
│   ├── main.py          # FastAPI app + routes
│   ├── schemas.py       # Pydantic models
│   ├── config.py        # Environment config
│   └── llm/
│       ├── __init__.py
│       └── adapter.py   # LLM adapter interface + implementations
├── artifacts/
├── embeddings/
├── schemas/
├── requirements.txt
├── Dockerfile
├── .env.example
└── README.md
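
The adapter layer referenced above is what enables the dual-mode operation listed under Features below. A hedged sketch of what the interface and local stub in app/llm/adapter.py might look like follows; the names (LLMAdapter, LocalStubAdapter, get_adapter) are assumptions, not the repository's code.

# Illustrative sketch of the adapter interface and local stub described in the structure above.
from abc import ABC, abstractmethod

class LLMAdapter(ABC):
    @abstractmethod
    async def generate(self, conversation_id: str, message: str) -> str:
        """Produce a reply for the given conversation."""

class LocalStubAdapter(LLMAdapter):
    async def generate(self, conversation_id: str, message: str) -> str:
        # Deterministic stub reply so the service runs without a real LLM backend.
        return f"[local stub] echo: {message}"

def get_adapter(mode: str, url: str = "", token: str = "") -> LLMAdapter:
    # Select the implementation based on LLM_MODE.
    if mode == "remote":
        return RemoteLLMAdapter(url, token)  # see the remote adapter sketch above
    return LocalStubAdapter()
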

Features

  • Dual-mode operation: Local stub or remote LLM
  • Conversation tracking: UUID generation for new conversations
  • Security: 10,000 character message limit, no content logging (request-schema sketch after this list)
  • Cloud Run ready: Port 8080, stateless design
  • Async: Full async/await support with httpx
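
As a concrete illustration of the conversation tracking and message-limit features above, the request/response models in app/schemas.py could look roughly like this; the field names follow the API examples earlier in this README, and everything else is assumed.

# Hedged sketch of the request/response models; field names follow the API examples above,
# the 10,000 character limit follows the Security bullet, and the rest is assumed.
import uuid
from typing import List, Optional

from pydantic import BaseModel, Field

class ChatRequest(BaseModel):
    message: str = Field(..., max_length=10_000)   # reject oversized messages
    conversation_id: Optional[str] = None          # generated server-side when omitted

class ChatResponse(BaseModel):
    conversation_id: str
    response: str
    mode: str                                      # "local" or "remote"
    sources: List[str] = []

def ensure_conversation_id(req: ChatRequest) -> str:
    # New conversations get a fresh UUID; follow-ups keep the id supplied by the client.
    return req.conversation_id or str(uuid.uuid4())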