# Tyndale AI Service

LLM chat service for algorithmic trading support: codebase Q&A, P&L summarization, and strategy enhancement suggestions.
## Quick Start

### Local Development
```bash
# Install dependencies
pip install -r requirements.txt

# Run the server
uvicorn app.main:app --reload --port 8080
```
### Docker
```bash
# Build
docker build -t tyndale-ai-service .

# Run
docker run -p 8080:8080 -e LLM_MODE=local tyndale-ai-service
```
## API Endpoints

### Health Check
```bash
curl http://localhost:8080/health
```

Response:

```json
{"status": "ok"}
```
### Chat
```bash
curl -X POST http://localhost:8080/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello, how are you?"}'
```
Response:

```json
{
  "conversation_id": "uuid-generated-if-not-provided",
  "response": "...",
  "mode": "local",
  "sources": []
}
```
With a conversation ID:

```bash
curl -X POST http://localhost:8080/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Follow up question", "conversation_id": "my-conversation-123"}'
```
## Environment Variables
| Variable | Description | Default |
|---|---|---|
| `LLM_MODE` | `local` or `remote` | `local` |
| `LLM_REMOTE_URL` | Remote LLM endpoint URL | (empty) |
| `LLM_REMOTE_TOKEN` | Bearer token for remote LLM | (empty) |
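As a rough sketch (the real `app/config.py` may be structured differently), these variables could be read once at import time with defaults matching the table:

```python
import os
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Settings:
    # Defaults mirror the table above; unset variables fall back cleanly.
    llm_mode: str = field(default_factory=lambda: os.getenv("LLM_MODE", "local"))
    llm_remote_url: str = field(default_factory=lambda: os.getenv("LLM_REMOTE_URL", ""))
    llm_remote_token: str = field(default_factory=lambda: os.getenv("LLM_REMOTE_TOKEN", ""))

settings = Settings()
```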
## Remote Mode Setup
```bash
export LLM_MODE=remote
export LLM_REMOTE_URL=https://your-llm-service.com/generate
export LLM_REMOTE_TOKEN=your-api-token
uvicorn app.main:app --reload --port 8080
```
The remote adapter expects the LLM service to accept:

```json
{"conversation_id": "...", "message": "..."}
```

and return:

```json
{"response": "..."}
```
## Project Structure
```
tyndale-ai-service/
├── app/
│   ├── __init__.py
│   ├── main.py        # FastAPI app + routes
│   ├── schemas.py     # Pydantic models
│   ├── config.py      # Environment config
│   └── llm/
│       ├── __init__.py
│       └── adapter.py # LLM adapter interface + implementations
├── requirements.txt
├── Dockerfile
├── .env.example
└── README.md
```
## Features
- **Dual-mode operation**: local stub or remote LLM
- **Conversation tracking**: UUIDs generated for new conversations
- **Security**: 10,000-character message limit (enforced as in the sketch below), no content logging
- **Cloud Run ready**: port 8080, stateless design
- **Async**: full async/await support with httpx
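As a closing sketch, the message limit and conversation tracking could be enforced at the schema layer roughly as follows; the field names match the API examples above, but the validator details are assumptions about `app/schemas.py`.

```python
import uuid
from pydantic import BaseModel, Field

class ChatRequest(BaseModel):
    # Enforce the documented 10,000-character cap at validation time.
    message: str = Field(..., max_length=10_000)
    # Optional; the service assigns a fresh UUID when omitted.
    conversation_id: str | None = None

def ensure_conversation_id(req: ChatRequest) -> str:
    # Illustrative helper: mint a UUID for new conversations.
    return req.conversation_id or str(uuid.uuid4())
```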