- Add Moonshot AI provider with 11 models, including:
* Legacy moonshot-v1 series (8k, 32k, 128k context)
* Latest Kimi K2 models (K2 Preview, Turbo, Thinking)
* Vision-enabled models for multimodal capabilities
* Auto-selecting model variants
- Update xAI provider with the latest Grok models (see the model-entry sketch after this list):
* Add Grok 4 (256K context) and Grok 4 (07-09) variant
* Add Grok 3 Mini Beta and Mini Fast Beta variants
* Update context limits to match actual model capabilities
* Remove outdated grok-beta and grok-2-1212 models
- Add MOONSHOT_API_KEY to environment configuration
- Register Moonshot provider in service status monitoring
- Full OpenAI-compatible API integration via api.moonshot.ai (see the client sketch after this list)
- Fix TypeScript errors in GitHub provider
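Below is a minimal sketch of that OpenAI-compatible wiring, assuming the Vercel AI SDK's `createOpenAI` is used as for the other OpenAI-compatible providers; the `/v1` path suffix, the model id, and the call pattern are illustrative assumptions, not the provider class as committed.

```typescript
// Illustrative only: an OpenAI-compatible Moonshot client via the Vercel AI SDK.
// The /v1 path and the model id are assumptions; MOONSHOT_API_KEY matches the
// environment variable added above.
import { createOpenAI } from '@ai-sdk/openai';
import { generateText } from 'ai';

const moonshot = createOpenAI({
  baseURL: 'https://api.moonshot.ai/v1', // OpenAI-compatible endpoint
  apiKey: process.env.MOONSHOT_API_KEY,
});

// Example call against one of the legacy moonshot-v1 models listed above.
const { text } = await generateText({
  model: moonshot('moonshot-v1-8k'),
  prompt: 'Reply with a one-line greeting.',
});
console.log(text);
```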
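For the xAI side, here is a sketch of how the refreshed Grok entries might look as static model metadata; the model ids, the Grok 3 Mini limits, and the field names are assumptions, and only the 256K Grok 4 context comes from the list above.

```typescript
// Hypothetical static entries for the updated Grok line-up; the field names
// follow a simplified model-info shape, and values marked as assumed are guesses.
const xaiStaticModels = [
  { name: 'grok-4', label: 'Grok 4', provider: 'xAI', maxTokenAllowed: 256000 }, // 256K context per the list above
  { name: 'grok-4-0709', label: 'Grok 4 (07-09)', provider: 'xAI', maxTokenAllowed: 256000 },
  { name: 'grok-3-mini-beta', label: 'Grok 3 Mini Beta', provider: 'xAI', maxTokenAllowed: 131072 }, // assumed limit
  { name: 'grok-3-mini-fast-beta', label: 'Grok 3 Mini Fast Beta', provider: 'xAI', maxTokenAllowed: 131072 }, // assumed limit
];
// grok-beta and grok-2-1212 are simply dropped from the list.
```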
- Updated constants in app/lib/.server/llm/constants.ts
- Modified stream-text functionality in app/lib/.server/llm/stream-text.ts
- Updated Anthropic provider in app/lib/modules/llm/providers/anthropic.ts
- Modified GitHub provider in app/lib/modules/llm/providers/github.ts
- Updated Google provider in app/lib/modules/llm/providers/google.ts
- Modified OpenAI provider in app/lib/modules/llm/providers/openai.ts
- Updated LLM types in app/lib/modules/llm/types.ts
- Modified API route in app/routes/api.llmcall.ts
Added the gemini-2.0-flash-thinking-exp-01-21 model to the GoogleProvider's static model configuration. The entry raises maxTokenAllowed to 65,536 tokens, a significant increase over the existing Gemini models (previously capped at 8k), letting it handle much larger context windows. The model is labeled "Gemini 2.0 Flash-thinking-exp-01-21" for clear identification in UI dropdowns.
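For reference, a sketch of the corresponding entry in the GoogleProvider's static model list; the field names are a simplified guess at the provider's model-info shape, not the exact type used in the repo.

```typescript
// Sketch of the new Google static model entry described above.
const geminiFlashThinking = {
  name: 'gemini-2.0-flash-thinking-exp-01-21', // id sent to the API
  label: 'Gemini 2.0 Flash-thinking-exp-01-21', // shown in UI dropdowns
  provider: 'Google',
  maxTokenAllowed: 65536, // older Gemini entries were capped at 8k
};
```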