feat: add Moonshot AI (Kimi) provider and update xAI Grok models (#1953)
- Add comprehensive Moonshot AI provider with 11 models, including:
  * Legacy moonshot-v1 series (8k, 32k, 128k context)
  * Latest Kimi K2 models (K2 Preview, Turbo, Thinking)
  * Vision-enabled models for multimodal capabilities
  * Auto-selecting model variants
- Update xAI provider with latest Grok models:
  * Add Grok 4 (256K context) and Grok 4 (07-09) variant
  * Add Grok 3 Mini Beta and Mini Fast Beta variants
  * Update context limits to match actual model capabilities
  * Remove outdated grok-beta and grok-2-1212 models
- Add MOONSHOT_API_KEY to environment configuration
- Register Moonshot provider in service status monitoring
- Full OpenAI-compatible API integration via api.moonshot.ai
- Fix TypeScript errors in GitHub provider

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>
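The commit advertises "full OpenAI-compatible API integration via api.moonshot.ai". A minimal sketch of what such a request could look like; only the base URL and the MOONSHOT_API_KEY variable come from this commit, while the helper name and model id are illustrative, not bolt.diy's actual provider code:

```typescript
// Sketch of calling Moonshot's OpenAI-compatible chat endpoint.
// buildMoonshotRequest and the model id below are hypothetical names.
const MOONSHOT_BASE_URL = 'https://api.moonshot.ai/v1';

interface ChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

// Build the URL and fetch init for a chat completion request;
// callers would pass the value of MOONSHOT_API_KEY as apiKey.
function buildMoonshotRequest(apiKey: string, model: string, messages: ChatMessage[]) {
  return {
    url: `${MOONSHOT_BASE_URL}/chat/completions`,
    init: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({ model, messages }),
    },
  };
}
```

The same request shape works for any OpenAI-compatible provider; only the base URL and key differ, which is why the provider could reuse existing OpenAI-style plumbing.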
This commit is contained in:

219	.env.example
@@ -1,131 +1,142 @@
# Rename this file to .env once you have filled in the below environment variables!
# ======================================
# Environment Variables for Bolt.diy
# ======================================
# Copy this file to .env.local and fill in your API keys
# See README.md for setup instructions

# Get your GROQ API Key here -
# https://console.groq.com/keys
# You only need this environment variable set if you want to use Groq models
GROQ_API_KEY=
# ======================================
# AI PROVIDER API KEYS
# ======================================

# Get your HuggingFace API Key here -
# https://huggingface.co/settings/tokens
# You only need this environment variable set if you want to use HuggingFace models
HuggingFace_API_KEY=
# Anthropic Claude
# Get your API key from: https://console.anthropic.com/
ANTHROPIC_API_KEY=your_anthropic_api_key_here

# OpenAI GPT models
# Get your API key from: https://platform.openai.com/api-keys
OPENAI_API_KEY=your_openai_api_key_here

# Get your Open AI API Key by following these instructions -
# https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key
# You only need this environment variable set if you want to use GPT models
OPENAI_API_KEY=
# GitHub Models (OpenAI models hosted by GitHub)
# Get your Personal Access Token from: https://github.com/settings/tokens
# - Select "Fine-grained tokens"
# - Set repository access to "All repositories"
# - Enable "GitHub Models" permission
GITHUB_API_KEY=github_pat_your_personal_access_token_here

# Get your Anthropic API Key in your account settings -
# https://console.anthropic.com/settings/keys
# You only need this environment variable set if you want to use Claude models
ANTHROPIC_API_KEY=
# Perplexity AI (Search-augmented models)
# Get your API key from: https://www.perplexity.ai/settings/api
PERPLEXITY_API_KEY=your_perplexity_api_key_here

# Get your OpenRouter API Key in your account settings -
# https://openrouter.ai/settings/keys
# You only need this environment variable set if you want to use OpenRouter models
OPEN_ROUTER_API_KEY=
# DeepSeek
# Get your API key from: https://platform.deepseek.com/api_keys
DEEPSEEK_API_KEY=your_deepseek_api_key_here

# Get your Google Generative AI API Key by following these instructions -
# https://console.cloud.google.com/apis/credentials
# You only need this environment variable set if you want to use Google Generative AI models
GOOGLE_GENERATIVE_AI_API_KEY=
# Google Gemini
# Get your API key from: https://makersuite.google.com/app/apikey
GOOGLE_GENERATIVE_AI_API_KEY=your_google_gemini_api_key_here

# You only need this environment variable set if you want to use oLLAMA models
# DONT USE http://localhost:11434 due to IPV6 issues
# USE EXAMPLE http://127.0.0.1:11434
OLLAMA_API_BASE_URL=
# Cohere
# Get your API key from: https://dashboard.cohere.ai/api-keys
COHERE_API_KEY=your_cohere_api_key_here

# You only need this environment variable set if you want to use OpenAI Like models
OPENAI_LIKE_API_BASE_URL=
# Groq (Fast inference)
# Get your API key from: https://console.groq.com/keys
GROQ_API_KEY=your_groq_api_key_here

# You only need this environment variable set if you want to use Together AI models
TOGETHER_API_BASE_URL=
# Mistral
# Get your API key from: https://console.mistral.ai/api-keys/
MISTRAL_API_KEY=your_mistral_api_key_here

# You only need this environment variable set if you want to use DeepSeek models through their API
DEEPSEEK_API_KEY=
# Together AI
# Get your API key from: https://api.together.xyz/settings/api-keys
TOGETHER_API_KEY=your_together_api_key_here

# Get your OpenAI Like API Key
OPENAI_LIKE_API_KEY=
# X.AI (Elon Musk's company)
# Get your API key from: https://console.x.ai/
XAI_API_KEY=your_xai_api_key_here

# Get your Together API Key
TOGETHER_API_KEY=
# Moonshot AI (Kimi models)
# Get your API key from: https://platform.moonshot.ai/console/api-keys
MOONSHOT_API_KEY=your_moonshot_api_key_here

# You only need this environment variable set if you want to use Hyperbolic models
#Get your Hyperbolics API Key at https://app.hyperbolic.xyz/settings
#baseURL="https://api.hyperbolic.xyz/v1/chat/completions"
HYPERBOLIC_API_KEY=
HYPERBOLIC_API_BASE_URL=
# Hugging Face
# Get your API key from: https://huggingface.co/settings/tokens
HuggingFace_API_KEY=your_huggingface_api_key_here

# Get your Mistral API Key by following these instructions -
# https://console.mistral.ai/api-keys/
# You only need this environment variable set if you want to use Mistral models
MISTRAL_API_KEY=
# Hyperbolic
# Get your API key from: https://app.hyperbolic.xyz/settings
HYPERBOLIC_API_KEY=your_hyperbolic_api_key_here

# Get the Cohere Api key by following these instructions -
# https://dashboard.cohere.com/api-keys
# You only need this environment variable set if you want to use Cohere models
COHERE_API_KEY=
# OpenRouter (Meta routing for multiple providers)
# Get your API key from: https://openrouter.ai/keys
OPEN_ROUTER_API_KEY=your_openrouter_api_key_here

# Get LMStudio Base URL from LM Studio Developer Console
# Make sure to enable CORS
# DONT USE http://localhost:1234 due to IPV6 issues
# Example: http://127.0.0.1:1234
LMSTUDIO_API_BASE_URL=
# ======================================
# CUSTOM PROVIDER BASE URLS (Optional)
# ======================================

# Get your xAI API key
# https://x.ai/api
# You only need this environment variable set if you want to use xAI models
XAI_API_KEY=
# Ollama (Local models)
# DON'T USE http://localhost:11434 due to IPv6 issues
# USE: http://127.0.0.1:11434
OLLAMA_API_BASE_URL=http://127.0.0.1:11434

# Get your Perplexity API Key here -
# https://www.perplexity.ai/settings/api
# You only need this environment variable set if you want to use Perplexity models
PERPLEXITY_API_KEY=
# OpenAI-like API (Compatible providers)
OPENAI_LIKE_API_BASE_URL=your_openai_like_base_url_here
OPENAI_LIKE_API_KEY=your_openai_like_api_key_here

# Get your AWS configuration
# https://console.aws.amazon.com/iam/home
# The JSON should include the following keys:
# - region: The AWS region where Bedrock is available.
# - accessKeyId: Your AWS access key ID.
# - secretAccessKey: Your AWS secret access key.
# - sessionToken (optional): Temporary session token if using an IAM role or temporary credentials.
# Example JSON:
# {"region": "us-east-1", "accessKeyId": "yourAccessKeyId", "secretAccessKey": "yourSecretAccessKey", "sessionToken": "yourSessionToken"}
AWS_BEDROCK_CONFIG=
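The AWS_BEDROCK_CONFIG comments above describe a JSON value with three required keys and one optional key. A minimal sketch of how a consumer could parse and validate that value; parseBedrockConfig is illustrative, not the app's actual code:

```typescript
// Shape described by the .env.example comments above.
interface BedrockConfig {
  region: string;
  accessKeyId: string;
  secretAccessKey: string;
  sessionToken?: string; // only needed for temporary credentials
}

// Parse the env var's JSON string and fail loudly on missing keys.
function parseBedrockConfig(raw: string): BedrockConfig {
  const parsed = JSON.parse(raw) as Partial<BedrockConfig>;
  const required = ['region', 'accessKeyId', 'secretAccessKey'] as const;
  for (const key of required) {
    const value = parsed[key];
    if (typeof value !== 'string' || value.length === 0) {
      throw new Error(`AWS_BEDROCK_CONFIG is missing required key: ${key}`);
    }
  }
  return parsed as BedrockConfig;
}
```

Validating eagerly at startup surfaces a malformed JSON string immediately rather than as an opaque Bedrock auth failure later.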
# Together AI Base URL
TOGETHER_API_BASE_URL=your_together_base_url_here

# Include this environment variable if you want more logging for debugging locally
VITE_LOG_LEVEL=debug
# Hyperbolic Base URL
HYPERBOLIC_API_BASE_URL=https://api.hyperbolic.xyz/v1/chat/completions

# Get your GitHub Personal Access Token here -
# https://github.com/settings/tokens
# This token is used for:
# 1. Importing/cloning GitHub repositories without rate limiting
# 2. Accessing private repositories
# 3. Automatic GitHub authentication (no need to manually connect in the UI)
#
# For classic tokens, ensure it has these scopes: repo, read:org, read:user
# For fine-grained tokens, ensure it has Repository and Organization access
VITE_GITHUB_ACCESS_TOKEN=
# LMStudio (Local models)
# Make sure to enable CORS in LMStudio
# DON'T USE http://localhost:1234 due to IPv6 issues
# USE: http://127.0.0.1:1234
LMSTUDIO_API_BASE_URL=http://127.0.0.1:1234

# Specify the type of GitHub token you're using
# Can be 'classic' or 'fine-grained'
# Classic tokens are recommended for broader access
# ======================================
# CLOUD SERVICES CONFIGURATION
# ======================================

# AWS Bedrock Configuration (JSON format)
# Get your credentials from: https://console.aws.amazon.com/iam/home
# Example: {"region": "us-east-1", "accessKeyId": "yourAccessKeyId", "secretAccessKey": "yourSecretAccessKey"}
AWS_BEDROCK_CONFIG=your_aws_bedrock_config_json_here

# ======================================
# GITHUB INTEGRATION
# ======================================

# GitHub Personal Access Token
# Get from: https://github.com/settings/tokens
# Used for importing/cloning repositories and accessing private repos
VITE_GITHUB_ACCESS_TOKEN=your_github_personal_access_token_here

# GitHub Token Type ('classic' or 'fine-grained')
VITE_GITHUB_TOKEN_TYPE=classic

# Bug Report Configuration (Server-side only)
# GitHub token for creating bug reports - requires 'public_repo' scope
# This token should be configured on the server/deployment environment
# GITHUB_BUG_REPORT_TOKEN=ghp_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# ======================================
# DEVELOPMENT SETTINGS
# ======================================

# Repository where bug reports will be created
# Format: "owner/repository"
# BUG_REPORT_REPO=stackblitz-labs/bolt.diy
# Development Mode
NODE_ENV=development

# Example Context Values for qwen2.5-coder:32b
#
# DEFAULT_NUM_CTX=32768 # Consumes 36GB of VRAM
# DEFAULT_NUM_CTX=24576 # Consumes 32GB of VRAM
# DEFAULT_NUM_CTX=12288 # Consumes 26GB of VRAM
# DEFAULT_NUM_CTX=6144 # Consumes 24GB of VRAM
DEFAULT_NUM_CTX=
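DEFAULT_NUM_CTX trades context length against VRAM, as the table above shows. For Ollama, such a value would typically be forwarded as the options.num_ctx field of a chat request; a sketch, assuming the env value arrives as a string (buildOllamaPayload is illustrative, not the app's actual code):

```typescript
// Fallback when DEFAULT_NUM_CTX is unset or not a positive number.
const FALLBACK_NUM_CTX = 32768;

// Build an Ollama-style chat payload, mapping the env value
// onto Ollama's per-request options.num_ctx setting.
function buildOllamaPayload(model: string, prompt: string, defaultNumCtx?: string) {
  const parsed = Number(defaultNumCtx);
  return {
    model,
    messages: [{ role: 'user', content: prompt }],
    options: {
      num_ctx: Number.isFinite(parsed) && parsed > 0 ? parsed : FALLBACK_NUM_CTX,
    },
  };
}
```

Guarding the parse this way keeps a typo in the env var from silently producing NaN in the request.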
# Application Port (optional, defaults to 3000)
PORT=3000

# Logging Level (debug, info, warn, error)
VITE_LOG_LEVEL=debug

# Default Context Window Size (for local models)
DEFAULT_NUM_CTX=32768

# ======================================
# INSTRUCTIONS
# ======================================
# 1. Copy this file to .env.local: cp .env.example .env.local
# 2. Fill in the API keys you want to use
# 3. Restart your development server: npm run dev
# 4. Go to Settings > Providers to enable/configure providers