bolt-diy/.env.example
Stijnus df242a7935 feat: add Moonshot AI (Kimi) provider and update xAI Grok models (#1953)
- Add comprehensive Moonshot AI provider with 11 models including:
  * Legacy moonshot-v1 series (8k, 32k, 128k context)
  * Latest Kimi K2 models (K2 Preview, Turbo, Thinking)
  * Vision-enabled models for multimodal capabilities
  * Auto-selecting model variants

- Update xAI provider with latest Grok models:
  * Add Grok 4 (256K context) and Grok 4 (07-09) variant
  * Add Grok 3 Mini Beta and Mini Fast Beta variants
  * Update context limits to match actual model capabilities
  * Remove outdated grok-beta and grok-2-1212 models

- Add MOONSHOT_API_KEY to environment configuration
- Register Moonshot provider in service status monitoring
- Full OpenAI-compatible API integration via api.moonshot.ai
- Fix TypeScript errors in GitHub provider

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-08-31 18:54:14 +02:00
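Since the Moonshot provider is described as a plain OpenAI-compatible integration against api.moonshot.ai keyed by MOONSHOT_API_KEY, any OpenAI-style client pointed at that host should work. A minimal sketch, assuming the `openai` npm package, a `/v1` base path, and the legacy `moonshot-v1-8k` model id (the path and model id are illustrative assumptions, not taken from the commit):

```typescript
// Minimal sketch: chat completion against Moonshot's OpenAI-compatible API.
// Assumes the `openai` npm package; the `/v1` path and the `moonshot-v1-8k`
// model id are assumptions based on typical OpenAI-compatible endpoints.
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.MOONSHOT_API_KEY, // set in .env.local (see the file below)
  baseURL: 'https://api.moonshot.ai/v1', // Moonshot's OpenAI-compatible host
});

async function main() {
  const completion = await client.chat.completions.create({
    model: 'moonshot-v1-8k', // legacy 8k-context model from the moonshot-v1 series
    messages: [{ role: 'user', content: 'Hello from bolt.diy!' }],
  });
  console.log(completion.choices[0].message.content);
}

main().catch(console.error);
```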


# ======================================
# Environment Variables for Bolt.diy
# ======================================
# Copy this file to .env.local and fill in your API keys
# See README.md for setup instructions
# ======================================
# AI PROVIDER API KEYS
# ======================================
# Anthropic Claude
# Get your API key from: https://console.anthropic.com/
ANTHROPIC_API_KEY=your_anthropic_api_key_here
# OpenAI GPT models
# Get your API key from: https://platform.openai.com/api-keys
OPENAI_API_KEY=your_openai_api_key_here
# GitHub Models (OpenAI models hosted by GitHub)
# Get your Personal Access Token from: https://github.com/settings/tokens
# - Select "Fine-grained tokens"
# - Set repository access to "All repositories"
# - Enable "GitHub Models" permission
GITHUB_API_KEY=github_pat_your_personal_access_token_here
# Perplexity AI (Search-augmented models)
# Get your API key from: https://www.perplexity.ai/settings/api
PERPLEXITY_API_KEY=your_perplexity_api_key_here
# DeepSeek
# Get your API key from: https://platform.deepseek.com/api_keys
DEEPSEEK_API_KEY=your_deepseek_api_key_here
# Google Gemini
# Get your API key from: https://makersuite.google.com/app/apikey
GOOGLE_GENERATIVE_AI_API_KEY=your_google_gemini_api_key_here
# Cohere
# Get your API key from: https://dashboard.cohere.ai/api-keys
COHERE_API_KEY=your_cohere_api_key_here
# Groq (Fast inference)
# Get your API key from: https://console.groq.com/keys
GROQ_API_KEY=your_groq_api_key_here
# Mistral
# Get your API key from: https://console.mistral.ai/api-keys/
MISTRAL_API_KEY=your_mistral_api_key_here
# Together AI
# Get your API key from: https://api.together.xyz/settings/api-keys
TOGETHER_API_KEY=your_together_api_key_here
# xAI (Grok models)
# Get your API key from: https://console.x.ai/
XAI_API_KEY=your_xai_api_key_here
# Moonshot AI (Kimi models)
# Get your API key from: https://platform.moonshot.ai/console/api-keys
MOONSHOT_API_KEY=your_moonshot_api_key_here
# Hugging Face
# Get your API key from: https://huggingface.co/settings/tokens
HuggingFace_API_KEY=your_huggingface_api_key_here
# Hyperbolic
# Get your API key from: https://app.hyperbolic.xyz/settings
HYPERBOLIC_API_KEY=your_hyperbolic_api_key_here
# OpenRouter (routes requests across multiple providers)
# Get your API key from: https://openrouter.ai/keys
OPEN_ROUTER_API_KEY=your_openrouter_api_key_here
# ======================================
# CUSTOM PROVIDER BASE URLS (Optional)
# ======================================
# Ollama (Local models)
# DON'T USE http://localhost:11434 due to IPv6 issues
# USE: http://127.0.0.1:11434
OLLAMA_API_BASE_URL=http://127.0.0.1:11434
# OpenAI-like API (Compatible providers)
OPENAI_LIKE_API_BASE_URL=your_openai_like_base_url_here
OPENAI_LIKE_API_KEY=your_openai_like_api_key_here
# Together AI Base URL
TOGETHER_API_BASE_URL=your_together_base_url_here
# Hyperbolic Base URL
HYPERBOLIC_API_BASE_URL=https://api.hyperbolic.xyz/v1/chat/completions
# LMStudio (Local models)
# Make sure to enable CORS in LMStudio
# DON'T USE http://localhost:1234 due to IPv6 issues
# USE: http://127.0.0.1:1234
LMSTUDIO_API_BASE_URL=http://127.0.0.1:1234
# ======================================
# CLOUD SERVICES CONFIGURATION
# ======================================
# AWS Bedrock Configuration (JSON format)
# Get your credentials from: https://console.aws.amazon.com/iam/home
# Example: {"region": "us-east-1", "accessKeyId": "yourAccessKeyId", "secretAccessKey": "yourSecretAccessKey"}
AWS_BEDROCK_CONFIG=your_aws_bedrock_config_json_here
# ======================================
# GITHUB INTEGRATION
# ======================================
# GitHub Personal Access Token
# Get from: https://github.com/settings/tokens
# Used for importing/cloning repositories and accessing private repos
VITE_GITHUB_ACCESS_TOKEN=your_github_personal_access_token_here
# GitHub Token Type ('classic' or 'fine-grained')
VITE_GITHUB_TOKEN_TYPE=classic
# ======================================
# DEVELOPMENT SETTINGS
# ======================================
# Development Mode
NODE_ENV=development
# Application Port (optional, defaults to 3000)
PORT=3000
# Logging Level (debug, info, warn, error)
VITE_LOG_LEVEL=debug
# Default Context Window Size (for local models)
DEFAULT_NUM_CTX=32768
# ======================================
# INSTRUCTIONS
# ======================================
# 1. Copy this file to .env.local: cp .env.example .env.local
# 2. Fill in the API keys you want to use
# 3. Restart your development server: npm run dev
# 4. Go to Settings > Providers to enable/configure providers
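
Once .env.local is in place, the local providers (Ollama and LMStudio, which must be reached via 127.0.0.1 rather than localhost) can be sanity-checked with a small connectivity probe before enabling them in Settings > Providers. A minimal sketch, assuming Node 18+ for the global fetch and the standard model-listing endpoints /api/tags (Ollama) and /v1/models (LMStudio):

```typescript
// Minimal connectivity probe for the local providers configured in .env.local.
// Assumes Node 18+ (global fetch). /api/tags is Ollama's model-listing endpoint;
// /v1/models is LMStudio's OpenAI-compatible model-listing endpoint.
const targets: Array<{ name: string; base: string | undefined; path: string }> = [
  { name: 'Ollama', base: process.env.OLLAMA_API_BASE_URL, path: '/api/tags' },
  { name: 'LMStudio', base: process.env.LMSTUDIO_API_BASE_URL, path: '/v1/models' },
];

async function probe() {
  for (const { name, base, path } of targets) {
    if (!base) {
      console.log(`${name}: no base URL configured, skipping`);
      continue;
    }
    try {
      const res = await fetch(`${base}${path}`);
      console.log(`${name}: ${res.ok ? 'reachable' : `HTTP ${res.status}`} at ${base}`);
    } catch {
      console.log(`${name}: unreachable at ${base} (is it running? prefer 127.0.0.1 over localhost)`);
    }
  }
}

probe();
```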