I wanted to ask if there's any plan to support alternative LLM providers, such as:
- Local models via vLLM, llama.cpp, or Ollama
- Other cloud providers like Mistral, Together AI, or Cohere
This would provide more flexibility for users who prefer self-hosted or non-OpenAI options. Would you be open to adding support for this? I've put a rough sketch of what I have in mind below, and I'd be happy to help test or contribute if needed.
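For what it's worth, many of these (Ollama, vLLM, llama.cpp's server mode) expose an OpenAI-compatible endpoint, so a configurable base URL might already cover most of the local cases. Here's a minimal sketch, assuming the project talks to the API through the official `openai` Python client; the URL, model name, and API key below are just placeholders for whatever the user's server expects:

```python
from openai import OpenAI

# Point the standard OpenAI client at a local, OpenAI-compatible server.
# Example: an Ollama instance serving its compatibility endpoint on port 11434.
client = OpenAI(
    base_url="http://localhost:11434/v1",  # placeholder: local server URL
    api_key="not-needed-locally",          # local servers typically ignore the key
)

# The rest of the call path stays identical to the hosted OpenAI case.
response = client.chat.completions.create(
    model="llama3",  # placeholder: whatever model the local server is serving
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

Cloud providers like Mistral or Together AI would likely need a bit more (their own keys and model names), but exposing the base URL and model as configuration seems like a small first step that unlocks a lot.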
Looking forward to your thoughts!