35 Commits

Author SHA1 Message Date
Stijnus
4ca535b9d1 feat: comprehensive service integration refactor with enhanced tabs architecture (#1978)
* feat: add service tabs refactor with GitHub, GitLab, Supabase, Vercel, and Netlify integration

This commit introduces a comprehensive refactor of the connections system,
replacing the single connections tab with dedicated service integration tabs:

New Service Tabs:
- GitHub Tab: Complete integration with repository management, stats, and API
- GitLab Tab: GitLab project integration and management
- Supabase Tab: Database project management with comprehensive analytics
- Vercel Tab: Project deployment management and monitoring
- Netlify Tab: Site deployment and build management

🔧 Supporting Infrastructure:
- Enhanced store management for each service with auto-connect via env vars
- API routes for secure server-side token handling and data fetching
- Updated TypeScript types with missing properties and interfaces
- Comprehensive hooks for service connections and state management
- Security utilities for API endpoint validation

🎨 UI/UX Improvements:
- Individual service tabs with tailored functionality
- Motion animations and improved loading states
- Connection testing and health monitoring
- Advanced analytics dashboards for each service
- Consistent design patterns across all service tabs

🛠️ Technical Changes:
- Removed legacy connection tab in favor of individual service tabs
- Updated tab configuration and routing system
- Added comprehensive error handling and loading states
- Enhanced type safety with extended interfaces
- Implemented environment variable auto-connection features
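
In sketch form, the env-var auto-connect pattern reduces to: read a token from the environment, validate it against the service, and only then mark the connection active. A minimal illustration, assuming a hypothetical per-service `validate` callback (names here are illustrative, not the app's actual API):

```ts
interface ServiceConnection {
  token: string;
  isConnected: boolean;
}

// Try to auto-connect a service when its env token is present.
async function autoConnect(
  envToken: string | undefined,
  validate: (token: string) => Promise<boolean>, // hypothetical per-service check
): Promise<ServiceConnection | null> {
  if (!envToken) {
    return null; // no env var set; the user connects manually via the tab
  }

  // Validate against the service API before marking the tab as connected.
  const valid = await validate(envToken);

  return valid ? { token: envToken, isConnected: true } : null;
}
```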

Note: Some TypeScript errors remain and will need to be resolved in follow-up commits.
The dev server runs successfully and the service tabs are functional.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* feat: comprehensive service integration refactor with enhanced tabs architecture

Major architectural improvements to service integrations:

**Service Integration Refactor:**
- Complete restructure of service connection tabs (GitHub, GitLab, Vercel, Netlify, Supabase)
- Migrated from centralized ConnectionsTab to dedicated service-specific tabs
- Added shared service integration components for consistent UX
- Implemented auto-connection feature using environment variables

**New Components & Architecture:**
- ServiceIntegrationLayout for consistent service tab structure
- ConnectionStatus, ServiceCard components for reusable UI patterns
- BranchSelector component for repository branch management
- Enhanced authentication dialogs with improved error handling

**API & Backend Enhancements:**
- New API endpoints: github-branches, gitlab-branches, gitlab-projects, vercel-user
- Enhanced GitLab API service with comprehensive project management
- Improved connection testing hooks (useConnectionTest; sketched at the end of this message)
- Better error handling and rate limiting across all services

**Configuration & Environment:**
- Updated .env.example with comprehensive service integration guides
- Added auto-connection support for all major services
- Improved development and production environment configurations
- Enhanced tab management with proper service icons

**Code Quality & TypeScript:**
- Fixed all TypeScript errors across service integration components
- Enhanced type definitions for Vercel, Supabase, and other service integrations
- Improved type safety with proper optional chaining and type assertions
- Better separation of concerns between UI and business logic

**Removed Legacy Code:**
- Removed redundant connection components and consolidated into service tabs
- Cleaned up unused imports and deprecated connection patterns
- Streamlined authentication flows across all services

This refactor provides a more maintainable, scalable architecture for service integrations
while significantly improving the user experience for managing external connections.
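
The connection-test hook likely follows the usual async-status pattern; a hedged sketch, not the actual useConnectionTest implementation:

```ts
import { useCallback, useState } from 'react';

type TestStatus = 'idle' | 'testing' | 'success' | 'error';

// Sketch of a connection-test hook: run a lightweight authenticated call
// and surface its status to the service tab UI.
export function useConnectionTest(testFn: () => Promise<void>) {
  const [status, setStatus] = useState<TestStatus>('idle');
  const [error, setError] = useState<string | null>(null);

  const runTest = useCallback(async () => {
    setStatus('testing');
    setError(null);

    try {
      await testFn();
      setStatus('success');
    } catch (e) {
      setStatus('error');
      setError(e instanceof Error ? e.message : 'Connection test failed');
    }
  }, [testFn]);

  return { status, error, runTest };
}
```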

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* refactor: clean up dead code and consolidate utilities

- Remove legacy .eslintrc.json (replaced by flat config)
- Remove duplicate app/utils/types.ts (unused type definitions)
- Remove app/utils/cn.ts and consolidate with classNames utility
- Clean up unused ServiceErrorHandler class implementation
- Enhance classNames utility to support boolean values
- Update GlowingEffect.tsx to use consolidated classNames utility

Removes roughly 150 lines of unused code while maintaining all functionality.
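
A minimal sketch of a classNames utility with boolean support, assuming string and record argument shapes (the app's real signature may differ):

```ts
type ClassNamesArg = string | boolean | undefined | null | Record<string, boolean>;

// Consolidated classNames utility; boolean/nullish args are simply skipped,
// which lets callers write classNames('btn', isActive && 'btn-active').
export function classNames(...args: ClassNamesArg[]): string {
  const classes: string[] = [];

  for (const arg of args) {
    if (!arg || typeof arg === 'boolean') {
      continue; // skip false/true/undefined/null produced by short-circuiting
    }

    if (typeof arg === 'string') {
      classes.push(arg);
    } else {
      for (const [key, enabled] of Object.entries(arg)) {
        if (enabled) {
          classes.push(key);
        }
      }
    }
  }

  return classes.join(' ');
}
```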

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Simplify terminal health checks and improve project setup

Removed aggressive health checking and reconnection logic from TerminalManager, which had been causing terminal responsiveness issues. Updated TerminalTabs to remove onReconnect handlers. Enhanced the projectCommands utility to generate non-interactive setup commands and detect shadcn projects, improving the automation and reliability of project setup.
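
As an illustration, the detection and command generation might look like the sketch below; the components.json heuristic for recognizing shadcn projects is an assumption, not a confirmed detail of projectCommands:

```ts
interface ProjectSetup {
  command: string;
  isShadcn: boolean;
}

// Sketch: build a non-interactive setup command and flag shadcn projects.
function getProjectSetup(files: string[]): ProjectSetup {
  return {
    // CI=true makes most install tooling run without interactive prompts
    command: 'CI=true npm install',
    // shadcn setups carry a components.json at the root (an assumption here)
    isShadcn: files.includes('components.json'),
  };
}
```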

* fix: resolve GitLab deployment issues and enhance GitHub deployment reliability

GitLab Deployment Fixes:
- Fix COEP header issue for avatar images by adding crossOrigin and referrerPolicy attributes
- Implement repository name sanitization to handle special characters and ensure GitLab compliance (sketched at the end of this message)
- Enhance error handling with detailed validation error parsing and user-friendly messages
- Add explicit path field and description to project creation requests
- Improve URL encoding and project path resolution for proper API calls
- Add graceful file commit handling with timeout and error recovery

GitHub Deployment Enhancements:
- Add comprehensive repository name validation and sanitization
- Implement real-time feedback for invalid characters in repository name input
- Enhance error handling with specific error types and retry suggestions
- Improve user experience with better error messages and validation feedback
- Add repository name length limits and character restrictions
- Show sanitized name preview to users before submission

General Improvements:
- Add GitLabAuthDialog component for improved authentication flow
- Enhance logging and debugging capabilities for deployment operations
- Improve accessibility with proper dialog titles and descriptions
- Add better user notifications for name sanitization and validation issues
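
A sketch of the sanitization step along the lines described, assuming GitLab's lowercase alphanumeric/dash/dot/underscore path rules (the exact rules applied by the app may differ):

```ts
// Sanitize a user-supplied repository name into a GitLab-compliant path.
function sanitizeRepoName(name: string): string {
  return name
    .toLowerCase()
    .replace(/[^a-z0-9._-]+/g, '-') // replace disallowed characters with dashes
    .replace(/^[._-]+|[._-]+$/g, '') // paths must start and end alphanumerically
    .replace(/-{2,}/g, '-') // collapse runs of dashes
    .slice(0, 100); // illustrative length cap
}

// sanitizeRepoName('My Cool App!!') => 'my-cool-app'
```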

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-08 19:29:12 +02:00
Stijnus
9e01e5c0bc feat: add support for OPENAI_LIKE_API_MODELS
- Add OPENAI_LIKE_API_MODELS environment variable support
- Enable fallback model parsing when /models endpoint fails
- Support providers like Fireworks AI that don't allow /models requests
- Format: path/to/model1:limit;path/to/model2:limit;path/to/model3:limit (parser sketched below)
- Update IProviderSetting interface to include OPENAI_LIKE_API_MODELS property
- Fix all linting errors and code formatting issues
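
Given the documented format, a plausible fallback parser looks like the sketch below; the ParsedModel shape and the 8192 default are assumptions:

```ts
interface ParsedModel {
  name: string;
  maxTokenAllowed: number;
}

// Parse OPENAI_LIKE_API_MODELS, e.g.
// "accounts/fireworks/models/llama-v3p1-70b-instruct:8192;some/other-model:4096"
function parseOpenAILikeModels(raw: string | undefined): ParsedModel[] {
  if (!raw) {
    return [];
  }

  return raw
    .split(';')
    .map((entry) => entry.trim())
    .filter(Boolean)
    .map((entry) => {
      // the token limit follows the last colon; everything before is the path
      const idx = entry.lastIndexOf(':');
      const name = idx > 0 ? entry.slice(0, idx) : entry;
      const limit = idx > 0 ? Number(entry.slice(idx + 1)) : NaN;

      return { name, maxTokenAllowed: Number.isFinite(limit) ? limit : 8192 };
    });
}
```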
2025-09-07 01:07:24 +02:00
Stijnus
df242a7935 feat: add Moonshot AI (Kimi) provider and update xAI Grok models (#1953)
- Add comprehensive Moonshot AI provider with 11 models including:
  * Legacy moonshot-v1 series (8k, 32k, 128k context)
  * Latest Kimi K2 models (K2 Preview, Turbo, Thinking)
  * Vision-enabled models for multimodal capabilities
  * Auto-selecting model variants

- Update xAI provider with latest Grok models:
  * Add Grok 4 (256K context) and Grok 4 (07-09) variant
  * Add Grok 3 Mini Beta and Mini Fast Beta variants
  * Update context limits to match actual model capabilities
  * Remove outdated grok-beta and grok-2-1212 models

- Add MOONSHOT_API_KEY to environment configuration
- Register Moonshot provider in service status monitoring
- Full OpenAI-compatible API integration via api.moonshot.ai
- Fix TypeScript errors in GitHub provider

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-08-31 18:54:14 +02:00
Stijnus
ff8b0d7af1 fix: maxCompletionTokens Implementation for All Providers (#1938)
* Update LLM providers and constants

- Updated constants in app/lib/.server/llm/constants.ts
- Modified stream-text functionality in app/lib/.server/llm/stream-text.ts
- Updated Anthropic provider in app/lib/modules/llm/providers/anthropic.ts
- Modified GitHub provider in app/lib/modules/llm/providers/github.ts
- Updated Google provider in app/lib/modules/llm/providers/google.ts
- Modified OpenAI provider in app/lib/modules/llm/providers/openai.ts
- Updated LLM types in app/lib/modules/llm/types.ts
- Modified API route in app/routes/api.llmcall.ts

* Fix maxCompletionTokens Implementation for All Providers

- Cohere: Added maxCompletionTokens: 4000 to all 10 static models
- DeepSeek: Added maxCompletionTokens: 8192 to all 3 static models
- Groq: Added maxCompletionTokens: 8192 to both static models
- Mistral: Added maxCompletionTokens: 8192 to all 9 static models
- Together: Added maxCompletionTokens: 8192 to both static models

- Groq: Fixed getDynamicModels to include maxCompletionTokens: 8192
- Together: Fixed getDynamicModels to include maxCompletionTokens: 8192
- OpenAI: Fixed getDynamicModels with proper logic for reasoning models (o1: 16384, o1-mini: 8192) and standard models
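
In sketch form, that per-model logic might read as below; the prefix matching and the standard-model default are assumptions about the implementation:

```ts
// Choose maxCompletionTokens for a dynamically fetched OpenAI model.
function maxCompletionTokensFor(modelId: string): number {
  if (modelId.startsWith('o1-mini')) {
    return 8192; // smaller reasoning model
  }

  if (modelId.startsWith('o1')) {
    return 16384; // full reasoning model
  }

  return 8192; // assumed default for standard models
}
```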
2025-08-29 23:13:58 +02:00
Stijnus
38c13494c2 Update LLM providers and constants (#1937)
- Updated constants in app/lib/.server/llm/constants.ts
- Modified stream-text functionality in app/lib/.server/llm/stream-text.ts
- Updated Anthropic provider in app/lib/modules/llm/providers/anthropic.ts
- Modified GitHub provider in app/lib/modules/llm/providers/github.ts
- Updated Google provider in app/lib/modules/llm/providers/google.ts
- Modified OpenAI provider in app/lib/modules/llm/providers/openai.ts
- Updated LLM types in app/lib/modules/llm/types.ts
- Modified API route in app/routes/api.llmcall.ts
2025-08-29 22:55:02 +02:00
Stijnus
b5d9055851 🔧 Fix Token Limits & Invalid JSON Response Errors (#1934)
ISSUES FIXED:
- Invalid JSON response errors during streaming
- Incorrect token limits causing API rejections
- Outdated hardcoded model configurations
- Poor error messages for API failures

SOLUTIONS IMPLEMENTED:

🎯 ACCURATE TOKEN LIMITS & CONTEXT SIZES
- OpenAI GPT-4o: 128k context (was 8k)
- OpenAI GPT-3.5-turbo: 16k context (was 8k)
- Anthropic Claude 3.5 Sonnet: 200k context (was 8k)
- Anthropic Claude 3 Haiku: 200k context (was 8k)
- Google Gemini 1.5 Pro: 2M context (was 8k)
- Google Gemini 1.5 Flash: 1M context (was 8k)
- Groq Llama models: 128k context (was 8k)
- Together models: Updated with accurate limits

DYNAMIC MODEL FETCHING ENHANCED
- Smart context detection from provider APIs
- Automatic fallback to known limits when API unavailable
- Safety caps to prevent token overflow (100k max)
- Intelligent model filtering and deduplication

🛡️ IMPROVED ERROR HANDLING
- Specific error messages for Invalid JSON responses
- Token limit exceeded warnings with solutions
- API key validation with clear guidance
- Rate limiting detection and user guidance
- Network timeout handling

PERFORMANCE OPTIMIZATIONS
- Reduced static models from 40+ to 12 essential
- Enhanced streaming error detection
- Better API response validation
- Improved context window display (shows M/k units; sketched at the end of this message)

🔧 TECHNICAL IMPROVEMENTS
- Dynamic model context detection from APIs
- Enhanced streaming reliability
- Better token limit enforcement
- Comprehensive error categorization
- Smart model validation before API calls

IMPACT:
- Eliminates Invalid JSON response errors
- Prevents token limit API rejections
- Provides accurate model capabilities
- Improves user experience with clear errors
- Enables full utilization of modern LLM context windows
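
The M/k context-window display reduces to a small formatter; a sketch:

```ts
// Format a context window size for display, e.g. 2_000_000 -> "2M",
// 128_000 -> "128k".
function formatContextWindow(tokens: number): string {
  if (tokens >= 1_000_000) {
    return `${tokens / 1_000_000}M`;
  }

  if (tokens >= 1_000) {
    return `${Math.round(tokens / 1_000)}k`;
  }

  return String(tokens);
}
```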
2025-08-29 20:53:57 +02:00
xKevIsDev
e9e117c62f fix: update maxTokenAllowed calculation to enforce upper limit
- Changed the maxTokenAllowed property to use Math.min, capping the value at 16384 tokens for better control over the context window size.
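
In essence (the 8000 fallback for models that report no limit is an assumption):

```ts
// Cap the usable context at 16384 tokens regardless of what the model reports.
function clampMaxTokens(reported: number | undefined): number {
  return Math.min(reported ?? 8000, 16384);
}
```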
2025-07-17 23:26:19 +01:00
Dino Hensen
208ba2a54b feat: increase max token limit for Claude model claude-3-7-sonnet-20250219
- Added logging for dynamic max tokens based on model details.
- Increased the max token limit for the Claude model from 8000 to 128000.
- Included a beta header for the Anthropic API call.
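
A hedged sketch of that call: the header value shown is Anthropic's documented 128k-output beta for Claude 3.7 Sonnet at the time of the commit, and the actual provider presumably wires this through its SDK rather than raw fetch:

```ts
// Raw-fetch illustration of sending the beta header with a 128k max_tokens.
async function callClaude37Sonnet(apiKey: string, prompt: string): Promise<Response> {
  return fetch('https://api.anthropic.com/v1/messages', {
    method: 'POST',
    headers: {
      'x-api-key': apiKey,
      'anthropic-version': '2023-06-01',
      'anthropic-beta': 'output-128k-2025-02-19', // enables the raised output limit
      'content-type': 'application/json',
    },
    body: JSON.stringify({
      model: 'claude-3-7-sonnet-20250219',
      max_tokens: 128000,
      messages: [{ role: 'user', content: prompt }],
    }),
  });
}
```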
2025-05-13 07:20:38 +02:00
KevIsDev
d5ced7e305 refactor: update prompt to be more specific with install and run commands
remove static Gemini model entry, as models are now fetched dynamically
2025-04-25 00:54:01 +01:00
KevIsDev
adcdc8efdf feat(llm): add new models for xAI and Google providers
Add 'grok-3-beta' to xAI provider and 'gemini-2.5-flash-preview-04-17' to Google provider. Also, ensure file saving when content is updated in WorkbenchStore and update streaming indicator styling in chat messages.
2025-04-18 13:45:11 +01:00
Stijnus
50dd74de07 fix: settings bugfix error building my application issue #1414 (#1436)
* Fix: error building my application #1414

* fix for vite

* Update vite.config.ts

* Update root.tsx

* fix the root.tsx and the debugtab

* lm studio fix and fix for the api key

* Update api.enhancer for prompt enhancement

* bugfixes

* Revert api.enhancer.ts back to original code

* Update api.enhancer.ts

* Update api.git-proxy.$.ts

* Update api.git-proxy.$.ts

* Update api.enhancer.ts
2025-03-09 01:07:56 +05:30
Burhanuddin Khatri
20722a108c feat: add Claude 3.7 Sonnet model as static list and update API key reference (#1449) 2025-03-05 19:12:52 +05:30
Anirban Kar
dc20bbc81f feat: added anthropic dynamic models (#1374) 2025-02-26 22:04:46 +05:30
Filipe Giácomo
4b817ebdce Update amazon-bedrock.ts
New Claude 3.5 Sonnet v2 Anthropic model
2025-02-12 11:39:37 -03:00
Anirban Kar
3be18e3f9d feat: added dynamic model support for openAI provider (#1241) 2025-02-01 15:29:54 +05:30
Anirban Kar
32bfdd9c24 feat: added more dynamic models, sorted and remove duplicate models (#1206) 2025-01-29 02:33:23 +05:30
Mohammad Saif Khan
39a0724ef3 feat: add Gemini 2.0 Flash-thinking-exp-01-21 model with 65k token support (#1202)
Added the new gemini-2.0-flash-thinking-exp-01-21 model to the GoogleProvider's static model configuration. This model supports a significantly increased maxTokenAllowed limit of 65,536 tokens, enabling it to handle larger context windows compared to existing Gemini models (previously capped at 8k tokens). The model is labeled as "Gemini 2.0 Flash-thinking-exp-01-21" for clear identification in the UI/dropdowns.
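
Roughly, the new static entry would look like this; the field names assume the ModelInfo shape commonly used for static model lists and are illustrative:

```ts
// Illustrative static entry for the Google provider's model list.
const geminiFlashThinking = {
  name: 'gemini-2.0-flash-thinking-exp-01-21',
  label: 'Gemini 2.0 Flash-thinking-exp-01-21',
  provider: 'Google',
  maxTokenAllowed: 65536, // up from the 8k cap on earlier Gemini entries
};
```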
2025-01-28 23:30:50 +05:30
Mohammad Saif Khan
68bbbd0a67 feat: add deepseek-r1-distill-llama-70b to groq provider (#1187)
This PR introduces a new model, deepseek-r1-distill-llama-70b, to the staticModels array and ensures compatibility with the Groq API. The changes include:

Adding the deepseek-r1-distill-llama-70b model to the staticModels array with its relevant metadata.

Updating the Groq API call to use the new model for chat completions.

These changes enable the application to support the deepseek-r1-distill-llama-70b model, expanding the range of available models for users.
2025-01-27 18:08:46 +05:30
Anirban Kar
df766c98d4 feat: added support for reasoning content (#1168) 2025-01-25 16:16:19 +05:30
Anirban Kar
660353360f fix: docker prod env variable fix (#1170)
* fix: docker prod env variable fix

* lint and typecheck

* removed hardcoded tag
2025-01-25 03:52:26 +05:30
Anirban Kar
3c56346e83 feat: enhance context handling by adding code context selection and implementing summary generation (#1091) #release
* feat: add context annotation types and enhance file handling in LLM processing

* feat: enhance context handling by adding chatId to annotations and implementing summary generation

* removed useless changes

* feat: updated token counts to include optimization requests

* prompt fix

* logging added

* useless logs removed
2025-01-22 22:48:13 +05:30
Anirban Kar
0ad4aa56d3 feat: added deepseek reasoner model in deepseek provider (#1151) 2025-01-22 01:58:31 +05:30
Stijnus
b732f20233 bug fix for Open preview in a new tab. 2025-01-18 19:25:01 +01:00
Oliver Jägle
e19644268c feat: configure dynamic providers via .env (#1108)
* Use backend API route to fetch dynamic models

# Conflicts:
#	app/components/chat/BaseChat.tsx

* Override ApiKeys if provided in frontend

* Remove obsolete artifact

* Transport api keys from client to server in header

* Cache static provider information

* Restore reading provider settings from cookie

* Reload only a single provider on api key change

* Transport apiKeys and providerSettings via cookies.

While doing this, introduce a simple helper function for cookies
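
A simple cookie helper of the kind described might look like this sketch (illustrative, not the exact implementation):

```ts
// Parse a Cookie request header into a name/value map.
function parseCookies(cookieHeader: string | null): Record<string, string> {
  const cookies: Record<string, string> = {};

  if (!cookieHeader) {
    return cookies;
  }

  for (const pair of cookieHeader.split(';')) {
    const [name, ...rest] = pair.trim().split('=');

    if (name && rest.length > 0) {
      cookies[name] = decodeURIComponent(rest.join('='));
    }
  }

  return cookies;
}

// Server-side usage: parseCookies(request.headers.get('Cookie'))['apiKeys']
```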
2025-01-18 03:39:19 +05:30
Ngô Tấn Tài
c7738243ca feat: added Github provider (#1109) 2025-01-17 13:22:51 +05:30
GaryStimson
6aaff63ca7 fix: bugfix in fetching API Key on base llm provider. (#1063) 2025-01-12 21:54:45 +05:30
Anirban Kar
49c7129ded fix: ollama and lm studio url issue fix for docker and build (#1008)
* fix: ollama and lm studio url issue fix for docker and build

* vite config fix
2025-01-06 19:18:42 +05:30
kunjabijukchhe
3ecac25a35 feat: implement Claude 3, Claude 3.5, Nova Pro, Nova Lite and Mistral model integration with AWS Bedrock (#974)
* feat: Integrate AWS Bedrock with Claude 3.5 Sonnet, Claude 3 Sonnet, and Claude 3.5 Haiku

* update Dockerfile for AWS Bedrock configuration

* feat: add new Bedrock model 'Mistral' and update Haiku to version 3

* feat: add new bedrock model Nova Lite and Nova Pro

* Update README documentation to reflect the latest changes

* Add the icon for aws bedrock

* add support for serialized AWS Bedrock configuration in api key
2025-01-06 17:49:16 +05:30
Gaurav-Wankhede
e9852bfb22 Update hyperbolic.ts
Updated the Hyperbolic settings link
2025-01-01 17:59:11 +05:30
Anirban Kar
6494f5ac2e fix: updated logger and model caching minor bugfix #release (#895)
* fix: updated logger and model caching

* usage token stream issue fix

* minor changes

* updated starter template change to fix the app title

* starter template bugfix

* fixed hydration errors and raw logs

* removed raw log

* made auto select template false by default

* cleaner logs and updated logic to call dynamicModels only if a model is not found in static models

* updated starter template instructions

* browser console log improved for firefox

* fixed provider icons
2024-12-31 22:47:32 +05:30
Anirban Kar
389eedcac4 fix: better model loading ui feedback and model list update (#954)
* fix: better model loading feedback and model list update

* added model list reload on provider settings update
2024-12-31 19:22:46 +05:30
Arsalaan Ahmed
e00264236e feat: added hyperbolic llm models (#943)
* Added Hyperbolic Models

* Fix: resolved a problem connecting to Hyperbolic models

* added dynamic models for hyperbolic

* removed logs
2024-12-30 23:26:33 +05:30
Eduard Ruzga
4c81cb02e1 fix: add defaults for LMStudio to work out of the box (#928) 2024-12-30 17:50:13 +05:30
Anirban Kar
8b58c7a0fb fix: ollama provider module base url hotfix for docker (#863)
* fix: ollama base url hotfix

* cleanup logic
2024-12-22 01:25:48 +05:30
Anirban Kar
7295352a98 refactor: refactored LLM Providers: Adapting Modular Approach (#832)
* refactor: Refactoring Providers to have providers as modules

* updated package and lock file

* added grok model back

* updated registry system
2024-12-21 11:45:17 +05:30