- Add OpenAIAdapter class using the official OpenAI SDK with async support (illustrative sketch after this list)
- Create custom exception hierarchy for LLM errors (authentication, rate limit, connection, configuration, response errors; hierarchy sketch below)
- Refactor adapter factory to use FastAPI Depends() for dependency injection (factory sketch below)
- Update configuration to support 'openai' mode with API key and model settings (settings sketch below)
- Add HTTP error mapping for all LLM exception types (handler sketch below)
- Update Dockerfile with default OPENAI_MODEL environment variable
- Update .env.example with OpenAI configuration options
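
A minimal sketch of what the async adapter could look like on top of the official SDK's `AsyncOpenAI` client. Only the `OpenAIAdapter` name and the use of the official SDK come from this change set; the `complete()` method, its signature, and the constructor parameters are illustrative assumptions, not the PR's exact code.

```python
# Sketch only: OpenAIAdapter built on the official SDK's async client.
# The complete() method name and signature are assumptions for illustration.
from openai import AsyncOpenAI


class OpenAIAdapter:
    def __init__(self, api_key: str, model: str) -> None:
        self._client = AsyncOpenAI(api_key=api_key)
        self._model = model

    async def complete(self, prompt: str) -> str:
        # Standard async chat-completion call from the official SDK.
        response = await self._client.chat.completions.create(
            model=self._model,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content or ""
```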
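One plausible layout for the exception hierarchy. The base-class name `LLMError` and the subclass names are assumptions; only the five error categories come from the list above.

```python
# Assumed names; only the five error categories are taken from this change set.
class LLMError(Exception):
    """Base class for all LLM adapter errors."""


class LLMAuthenticationError(LLMError):
    """The provider rejected the configured API key."""


class LLMRateLimitError(LLMError):
    """The provider reported a rate limit or quota error."""


class LLMConnectionError(LLMError):
    """The provider could not be reached."""


class LLMConfigurationError(LLMError):
    """Required settings (mode, API key, model) are missing or invalid."""


class LLMResponseError(LLMError):
    """The provider returned an empty or malformed response."""
```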
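The configuration change might be modelled with a settings object along these lines. The use of pydantic-settings, the field names (`llm_mode`, `openai_api_key`, `openai_model`), and the default model are assumptions, not the project's actual settings.

```python
# Assumed settings model; field names and defaults are illustrative.
from functools import lru_cache

from pydantic_settings import BaseSettings


class Settings(BaseSettings):
    llm_mode: str = "openai"           # e.g. "openai" vs. a mock/local mode
    openai_api_key: str = ""           # read from OPENAI_API_KEY
    openai_model: str = "gpt-4o-mini"  # read from OPENAI_MODEL


@lru_cache
def get_settings() -> Settings:
    # Cache so every Depends(get_settings) call reuses one parsed instance.
    return Settings()
```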
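The dependency-injected factory could then look like the following. It reuses the `Settings`/`get_settings`, `OpenAIAdapter`, and `LLMConfigurationError` names from the sketches above; the function and route names are hypothetical.

```python
# Assumes Settings/get_settings, OpenAIAdapter, and LLMConfigurationError
# from the sketches above are available in the same application.
from fastapi import Depends, FastAPI

app = FastAPI()


def get_llm_adapter(settings: Settings = Depends(get_settings)) -> OpenAIAdapter:
    # FastAPI resolves get_settings first, then passes the result in here.
    if settings.llm_mode == "openai":
        return OpenAIAdapter(
            api_key=settings.openai_api_key,
            model=settings.openai_model,
        )
    raise LLMConfigurationError(f"Unsupported LLM mode: {settings.llm_mode!r}")


@app.post("/complete")
async def complete(prompt: str, adapter: OpenAIAdapter = Depends(get_llm_adapter)) -> dict:
    # Route handlers never construct adapters directly; they declare the dependency.
    return {"text": await adapter.complete(prompt)}
```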
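One way to express the HTTP error mapping is a FastAPI exception handler keyed on the hierarchy's base class. The status-code choices below (401 for authentication, 429 for rate limits, 502/503 for upstream problems, 500 for configuration) are conventional assumptions, not necessarily the mapping used in this PR.

```python
# Assumed mapping from the LLM exception hierarchy to HTTP responses;
# exception classes come from the hierarchy sketch above.
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

app = FastAPI()

STATUS_BY_ERROR = {
    LLMAuthenticationError: 401,
    LLMRateLimitError: 429,
    LLMConnectionError: 503,
    LLMConfigurationError: 500,
    LLMResponseError: 502,
}


@app.exception_handler(LLMError)
async def llm_error_handler(request: Request, exc: LLMError) -> JSONResponse:
    # Fall back to 500 for any LLMError subclass not listed explicitly.
    status = STATUS_BY_ERROR.get(type(exc), 500)
    return JSONResponse(status_code=status, content={"detail": str(exc)})
```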