Tyndale AI Service

LLM Chat Service for algorithmic trading support - codebase Q&A, P&L summarization, and strategy enhancement suggestions.

Quick Start

Local Development

# Install dependencies
pip install -r requirements.txt

# Run the server
uvicorn app.main:app --reload --port 8080

Docker

# Build
docker build -t tyndale-ai-service .

# Run
docker run -p 8080:8080 -e LLM_MODE=local tyndale-ai-service

API Endpoints

Health Check

curl http://localhost:8080/health

Response:

{"status": "ok"}

Chat

curl -X POST http://localhost:8080/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello, how are you?"}'

Response:

{
  "conversation_id": "uuid-generated-if-not-provided",
  "response": "...",
  "mode": "local",
  "sources": []
}

With conversation ID:

curl -X POST http://localhost:8080/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Follow up question", "conversation_id": "my-conversation-123"}'

Environment Variables

Variable           Description                    Default
LLM_MODE           local or remote                local
LLM_REMOTE_URL     Remote LLM endpoint URL        (empty)
LLM_REMOTE_TOKEN   Bearer token for remote LLM    (empty)
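
app/config.py presumably loads these into a settings object; a sketch along these lines, assuming pydantic-settings (field names mirror the table above, everything else is an assumption):

from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env")

    # Field names mirror the environment variables above; matching is case-insensitive.
    llm_mode: str = "local"
    llm_remote_url: str = ""
    llm_remote_token: str = ""

settings = Settings()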

Remote Mode Setup

export LLM_MODE=remote
export LLM_REMOTE_URL=https://your-llm-service.com/generate
export LLM_REMOTE_TOKEN=your-api-token

uvicorn app.main:app --reload --port 8080

The remote adapter expects the LLM service to accept:

{"conversation_id": "...", "message": "..."}

And return:

{"response": "..."}

Project Structure

tyndale-ai-service/
├── app/
│   ├── __init__.py
│   ├── main.py          # FastAPI app + routes
│   ├── schemas.py       # Pydantic models
│   ├── config.py        # Environment config
│   └── llm/
│       ├── __init__.py
│       └── adapter.py   # LLM adapter interface + implementations
├── requirements.txt
├── Dockerfile
├── .env.example
└── README.md

Features

  • Dual-mode operation: local stub or remote LLM
  • Conversation tracking: UUID generation for new conversations (see the sketch below)
  • Security: 10,000-character message limit, no content logging
  • Cloud Run ready: port 8080, stateless design
  • Async: full async/await support with httpx
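
A sketch of how conversation tracking and the message limit fit together in the /chat route (handler and helper names are assumptions; the schema classes reuse the sketch from the Chat section above):

import uuid

from fastapi import FastAPI

# Class names as sketched in the Chat section above; the real app/schemas.py may differ.
from app.schemas import ChatRequest, ChatResponse

app = FastAPI()

async def local_stub(conversation_id: str, message: str) -> str:
    # Placeholder for the configured adapter (local stub or remote LLM).
    return "..."

@app.post("/chat", response_model=ChatResponse)
async def chat(request: ChatRequest) -> ChatResponse:
    # Reuse the caller's ID, or mint a UUID for a new conversation.
    # The message content itself is never logged (see the Security bullet above).
    conversation_id = request.conversation_id or str(uuid.uuid4())
    answer = await local_stub(conversation_id, request.message)
    return ChatResponse(
        conversation_id=conversation_id,
        response=answer,
        mode="local",
        sources=[],
    )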