Stijnus
b5d9055851
🔧 Fix Token Limits & Invalid JSON Response Errors (#1934)
ISSUES FIXED:
- ❌ Invalid JSON response errors during streaming
- ❌ Incorrect token limits causing API rejections
- ❌ Outdated hardcoded model configurations
- ❌ Poor error messages for API failures
SOLUTIONS IMPLEMENTED:
🎯 ACCURATE TOKEN LIMITS & CONTEXT SIZES
- OpenAI GPT-4o: 128k context (was 8k)
- OpenAI GPT-3.5-turbo: 16k context (was 8k)
- Anthropic Claude 3.5 Sonnet: 200k context (was 8k)
- Anthropic Claude 3 Haiku: 200k context (was 8k)
- Google Gemini 1.5 Pro: 2M context (was 8k)
- Google Gemini 1.5 Flash: 1M context (was 8k)
- Groq Llama models: 128k context (was 8k)
- Together models: Updated with accurate limits
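A minimal sketch of how these limits might be captured in a static model table; the ModelInfo shape and exact model identifiers are assumptions for illustration (only maxTokenAllowed appears elsewhere in this log):

```typescript
// Hypothetical static model metadata; field names are assumptions,
// except maxTokenAllowed, which a later commit in this log references.
interface ModelInfo {
  name: string;
  provider: string;
  maxTokenAllowed: number; // context window size, in tokens
}

// Context sizes matching the list above.
const staticModels: ModelInfo[] = [
  { name: 'gpt-4o', provider: 'OpenAI', maxTokenAllowed: 128_000 },
  { name: 'gpt-3.5-turbo', provider: 'OpenAI', maxTokenAllowed: 16_000 },
  { name: 'claude-3-5-sonnet', provider: 'Anthropic', maxTokenAllowed: 200_000 },
  { name: 'claude-3-haiku', provider: 'Anthropic', maxTokenAllowed: 200_000 },
  { name: 'gemini-1.5-pro', provider: 'Google', maxTokenAllowed: 2_000_000 },
  { name: 'gemini-1.5-flash', provider: 'Google', maxTokenAllowed: 1_000_000 },
];
```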
🔄 DYNAMIC MODEL FETCHING ENHANCED
- Smart context detection from provider APIs
- Automatic fallback to known limits when API unavailable
- Safety caps to prevent token overflow (100k max)
- Intelligent model filtering and deduplication
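A rough sketch of the fetch-then-fallback flow with the 100k safety cap; all names here are illustrative, and the provider API shape (a /models listing with a context_length field) is an assumption:

```typescript
// Illustrative shape for a provider's /models response entry.
interface RemoteModel {
  id: string;
  context_length?: number;
}

// Fallback limits used when the provider API is unavailable (assumed values).
const KNOWN_LIMITS: Record<string, number> = {
  'gpt-4o': 128_000,
  'claude-3-5-sonnet': 200_000,
};

const SAFETY_CAP = 100_000; // hard ceiling to prevent token overflow

async function resolveContextSize(
  modelName: string,
  fetchModels: () => Promise<RemoteModel[]>, // injected provider API call
): Promise<number> {
  try {
    const models = await fetchModels();
    const match = models.find((m) => m.id === modelName);
    if (match?.context_length) {
      return Math.min(match.context_length, SAFETY_CAP); // detected from API
    }
  } catch {
    // API unavailable: fall through to the known limits below
  }
  return Math.min(KNOWN_LIMITS[modelName] ?? 8_192, SAFETY_CAP);
}
```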
🛡️ IMPROVED ERROR HANDLING
- Specific error messages for Invalid JSON responses
- Token limit exceeded warnings with solutions
- API key validation with clear guidance
- Rate limiting detection and user guidance
- Network timeout handling
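One way such error categorization could look, with messages paraphrasing the list above; the matching heuristics and wording are assumptions, not the actual implementation:

```typescript
// Map raw provider/network failures to specific, actionable messages.
function describeApiError(err: unknown): string {
  const msg = err instanceof Error ? err.message : String(err);

  if (msg.includes('Unexpected token') || msg.includes('Invalid JSON')) {
    return 'Invalid JSON response from the provider. Retry, or switch models.';
  }
  if (msg.includes('context length') || msg.includes('max_tokens')) {
    return 'Token limit exceeded. Shorten the conversation or choose a model with a larger context window.';
  }
  if (msg.includes('401') || msg.toLowerCase().includes('api key')) {
    return 'API key rejected. Check that a valid key is configured for this provider.';
  }
  if (msg.includes('429') || msg.toLowerCase().includes('rate limit')) {
    return 'Rate limited by the provider. Wait a moment and retry.';
  }
  if (msg.toLowerCase().includes('timeout')) {
    return 'Network timeout while contacting the provider. Check connectivity and retry.';
  }
  return `Unexpected API error: ${msg}`;
}
```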
⚡ PERFORMANCE OPTIMIZATIONS
- Reduced static models from 40+ to 12 essential entries
- Enhanced streaming error detection
- Better API response validation
- Improved context window display (shows M/k units)
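The M/k display presumably reduces raw token counts to compact units; a small sketch of such a formatter (the function name is hypothetical):

```typescript
// Render a context window size in M/k units, e.g. 2_000_000 -> '2M'.
function formatContextWindow(tokens: number): string {
  if (tokens >= 1_000_000) {
    const millions = tokens / 1_000_000;
    return `${Number.isInteger(millions) ? millions : millions.toFixed(1)}M`;
  }
  if (tokens >= 1_000) {
    return `${Math.round(tokens / 1_000)}k`;
  }
  return String(tokens);
}

// formatContextWindow(2_000_000) === '2M'
// formatContextWindow(128_000)   === '128k'
```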
🔧 TECHNICAL IMPROVEMENTS
- Dynamic model context detection from APIs
- Enhanced streaming reliability
- Better token limit enforcement
- Comprehensive error categorization
- Smart model validation before API calls
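Validation before an API call could be as simple as checking the requested name against the known list and failing early with a clear message; a hedged sketch with hypothetical names:

```typescript
// Reject unknown model names before any tokens are spent on a doomed request.
function assertKnownModel(modelName: string, knownModels: string[]): void {
  if (!knownModels.includes(modelName)) {
    throw new Error(
      `Unknown model '${modelName}'. Available: ${knownModels.join(', ')}`,
    );
  }
}
```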
IMPACT:
✅ Eliminates Invalid JSON response errors
✅ Prevents token limit API rejections
✅ Provides accurate model capabilities
✅ Improves user experience with clear errors
✅ Enables full utilization of modern LLM context windows
2025-08-29 20:53:57 +02:00
xKevIsDev
e9e117c62f
fix: update maxTokenAllowed calculation to enforce upper limit
- Changed the maxTokenAllowed property to use Math.min, capping the value at 16384 tokens for tighter control over context window size.
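In effect the change clamps whatever the model metadata reports; roughly (only Math.min and the 16384 cap come from the commit, the 8192 default and function shape are assumptions):

```typescript
// Clamp the reported limit to a 16384-token ceiling.
function maxTokenAllowed(reported?: number): number {
  return Math.min(reported ?? 8_192, 16_384);
}
```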
2025-07-17 23:26:19 +01:00
Anirban Kar
32bfdd9c24
feat: added more dynamic models, sorted and removed duplicate models (#1206)
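A sketch of what sorting plus de-duplication of a merged model list might look like; the entry shape and the first-occurrence-wins policy are assumptions:

```typescript
interface ModelEntry {
  name: string;
  provider: string;
}

// Drop duplicate names (first occurrence wins), then sort alphabetically.
function dedupeAndSort(models: ModelEntry[]): ModelEntry[] {
  const byName = new Map<string, ModelEntry>();
  for (const m of models) {
    if (!byName.has(m.name)) {
      byName.set(m.name, m);
    }
  }
  return [...byName.values()].sort((a, b) => a.name.localeCompare(b.name));
}
```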
2025-01-29 02:33:23 +05:30
Mohammad Saif Khan
68bbbd0a67
feat: add deepseek-r1-distill-llama-70b to groq provider (#1187)
This PR introduces a new model, deepseek-r1-distill-llama-70b, to the staticModels array and ensures compatibility with the Groq API. The changes include:
- Adding the deepseek-r1-distill-llama-70b model to the staticModels array with its relevant metadata.
- Updating the Groq API call to use the new model for chat completions.
These changes enable the application to support the deepseek-r1-distill-llama-70b model, expanding the range of available models for users.
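The staticModels addition might look roughly like this; every field except the model name is an assumed placeholder:

```typescript
// Hypothetical staticModels entry for the new Groq-hosted model.
const deepseekEntry = {
  name: 'deepseek-r1-distill-llama-70b',
  label: 'DeepSeek R1 Distill Llama 70B', // assumed display label
  provider: 'Groq',
  maxTokenAllowed: 8_000, // placeholder; the actual limit is not stated here
};
```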
2025-01-27 18:08:46 +05:30
Anirban Kar
7295352a98
refactor: refactored LLM providers, adopting a modular approach (#832)
* refactor: restructured each provider as its own module
* updated package and lock file
* added grok model back
* updated registry system
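A sketch of the modular shape the refactor describes: each provider as a module exposing its models, registered centrally. All names and shapes here are illustrative, not the project's actual API:

```typescript
// Minimal provider-module contract (illustrative).
interface ProviderModule {
  name: string;
  staticModels: { name: string; maxTokenAllowed: number }[];
  getDynamicModels?: (
    apiKey: string,
  ) => Promise<{ name: string; maxTokenAllowed: number }[]>;
}

const registry = new Map<string, ProviderModule>();

function registerProvider(provider: ProviderModule): void {
  registry.set(provider.name, provider);
}

function getProvider(name: string): ProviderModule {
  const provider = registry.get(name);
  if (!provider) {
    throw new Error(`Provider '${name}' is not registered`);
  }
  return provider;
}
```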
2024-12-21 11:45:17 +05:30