docs: update index.md and FAQ.md documentation

- Update provider count from '20+' to '19' to match actual implementation
- Update API key configuration instructions to reflect new modern UI
- Update provider navigation paths to match current interface
- Fix Moonshot provider configuration path
- Ensure all documentation accurately reflects current codebase state
Stijnus
2025-09-05 01:42:23 +02:00
parent a06161a0e1
commit 23c0a8aaae
2 changed files with 24 additions and 15 deletions

FAQ.md

@@ -3,7 +3,7 @@
## Models and Setup
??? question "What are the best models for bolt.diy?"
-For the best experience with bolt.diy, we recommend using the following models from our 20+ supported providers:
+For the best experience with bolt.diy, we recommend using the following models from our 19 supported providers:
**Top Recommended Models:**
- **Claude 3.5 Sonnet** (Anthropic): Best overall coder, excellent for complex applications
@@ -42,10 +42,13 @@ You can configure API keys in two ways:
```
**Option 2: In-App Configuration**
-- Go to Settings → Providers
-- Select a provider
-- Click the pencil icon next to the provider
-- Enter your API key directly in the interface
+- Click the settings icon (⚙️) in the sidebar
+- Navigate to the "Providers" tab
+- Switch between "Cloud Providers" and "Local Providers" tabs
+- Click on a provider card to expand its configuration
+- Click on the "API Key" field to enter edit mode
+- Paste your API key and press Enter to save
+- Look for the green checkmark to confirm proper configuration
!!! note "Security Note"
Never commit API keys to version control. The `.env.local` file is already in `.gitignore`.
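
The `.env.local` workflow above can be sketched as a minimal dotenv-style loader. This is a hypothetical illustration only — bolt.diy's actual startup code is not part of this commit, and the variable names besides `MOONSHOT_API_KEY` are placeholders:

```typescript
// Minimal dotenv-style parser (illustrative sketch; real projects typically
// rely on the framework's built-in .env.local support).
function parseEnvFile(contents: string): Record<string, string> {
  const vars: Record<string, string> = {};
  for (const line of contents.split("\n")) {
    const trimmed = line.trim();
    // Skip blank lines and comments
    if (trimmed === "" || trimmed.startsWith("#")) continue;
    const eq = trimmed.indexOf("=");
    if (eq === -1) continue;
    vars[trimmed.slice(0, eq).trim()] = trimmed.slice(eq + 1).trim();
  }
  return vars;
}

// Example .env.local contents (placeholder values, never real keys)
const envLocal = `
# Provider keys — keep this file out of version control
MOONSHOT_API_KEY=your_key_here
OPENAI_API_KEY=sk-placeholder
`;

const parsed = parseEnvFile(envLocal);
console.log(parsed.MOONSHOT_API_KEY); // "your_key_here"
```

Because `.env.local` is listed in `.gitignore`, keys parsed this way stay on the local machine.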
@@ -67,7 +70,7 @@ Moonshot AI provides access to advanced Kimi models with excellent reasoning cap
1. Visit [Moonshot AI Platform](https://platform.moonshot.ai/console/api-keys)
2. Create an account and generate an API key
3. Add `MOONSHOT_API_KEY=your_key_here` to your `.env.local` file
-4. Or configure it directly in Settings → Providers → Moonshot
+4. Or configure it directly in Settings → Providers → Cloud Providers → Moonshot
**Available Models:**
- **Kimi K2 Preview**: Latest Kimi model with 128K context
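
With the key in place, a request to Moonshot might be shaped as in the sketch below. Both the base URL and the model id are assumptions based on Moonshot's OpenAI-compatible API — they are not taken from this commit, so check the provider's own reference before relying on them:

```typescript
// Hypothetical sketch of an OpenAI-compatible chat request to Moonshot.
// The endpoint URL and model id are assumptions, not from these docs.
interface ChatRequest {
  url: string;
  headers: Record<string, string>;
  body: string;
}

function buildKimiRequest(apiKey: string, prompt: string): ChatRequest {
  return {
    url: "https://api.moonshot.ai/v1/chat/completions", // assumed endpoint
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "kimi-k2-preview", // assumed id for "Kimi K2 Preview"
      messages: [{ role: "user", content: prompt }],
    }),
  };
}

// The request would then be sent with fetch(req.url, { method: "POST", ... }).
const req = buildKimiRequest("your_key_here", "Hello, Kimi");
console.log(req.headers.Authorization); // "Bearer your_key_here"
```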
@@ -200,7 +203,7 @@ bolt.diy began as a small showcase project on @ColeMedin's YouTube channel to ex
Recent major additions to bolt.diy include:
**Advanced AI Capabilities:**
-- **20+ LLM Providers**: Support for Anthropic, OpenAI, Google, DeepSeek, Cohere, and more
+- **19 LLM Providers**: Support for Anthropic, OpenAI, Google, DeepSeek, Cohere, and more
- **MCP Integration**: Model Context Protocol for enhanced AI tool calling
- **Dynamic Model Loading**: Automatic model discovery from provider APIs

index.md

@@ -1,6 +1,6 @@
# Welcome to bolt diy
-bolt.diy allows you to choose the LLM that you use for each prompt! Currently, you can use models from 20+ providers including OpenAI, Anthropic, Ollama, OpenRouter, Google/Gemini, LMStudio, Mistral, xAI, HuggingFace, DeepSeek, Groq, Cohere, Together AI, Perplexity AI, Hyperbolic, Moonshot AI (Kimi), Amazon Bedrock, GitHub Models, and more - with easy extensibility to add any other model supported by the Vercel AI SDK! See the instructions below for running this locally and extending it to include more models.
+bolt.diy allows you to choose the LLM that you use for each prompt! Currently, you can use models from 19 providers including OpenAI, Anthropic, Ollama, OpenRouter, Google/Gemini, LMStudio, Mistral, xAI, HuggingFace, DeepSeek, Groq, Cohere, Together AI, Perplexity AI, Hyperbolic, Moonshot AI (Kimi), Amazon Bedrock, GitHub Models, and more - with easy extensibility to add any other model supported by the Vercel AI SDK! See the instructions below for running this locally and extending it to include more models.
## Table of Contents
@@ -40,7 +40,7 @@ Also [this pinned post in our community](https://thinktank.ottomator.ai/t/videos
## Features
- **AI-powered full-stack web development** directly in your browser with live preview
-- **Support for 20+ LLM providers** with an extensible architecture to integrate additional models
+- **Support for 19 LLM providers** with an extensible architecture to integrate additional models
- **Attach images and files to prompts** for better contextual understanding
- **Integrated terminal** with WebContainer sandbox for running commands and testing
- **Version control with Git** - import/export projects, connect to GitHub repositories
@@ -123,14 +123,20 @@ Once you've set your keys, you can proceed with running the app. You will set th
#### 2. Configure API Keys Directly in the Application
-Alternatively, you can configure your API keys directly in the application once it's running. To do this:
+Alternatively, you can configure your API keys directly in the application using the modern settings interface:
-1. Launch the application and navigate to the provider selection dropdown.
-2. Select the provider you wish to configure.
-3. Click the pencil icon next to the selected provider.
-4. Enter your API key in the provided field.
+1. **Open Settings**: Click the settings icon (⚙️) in the sidebar to access the settings panel
+2. **Navigate to Providers**: Select the "Providers" tab from the settings menu
+3. **Choose Provider Type**: Switch between "Cloud Providers" and "Local Providers" tabs
+4. **Select Provider**: Browse the grid of available providers and click on the provider card you want to configure
+5. **Configure API Key**: Click on the "API Key" field to enter edit mode, then paste your API key and press Enter
+6. **Verify Configuration**: Look for the green checkmark indicator showing the provider is properly configured
-This method allows you to easily add or update your keys without needing to modify files directly.
+The interface provides:
+- **Real-time validation** with visual status indicators
+- **Bulk operations** to enable/disable multiple providers at once
+- **Secure storage** of API keys in browser cookies
+- **Environment variable auto-detection** for server-side configurations
Once you've configured your keys, the application will be ready to use the selected LLMs.
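
The "environment variable auto-detection" behaviour added above can be pictured roughly as follows. This is a sketch under the assumption that server-side keys follow a `<PROVIDER>_API_KEY` naming convention (as `MOONSHOT_API_KEY` suggests); the actual resolution logic in bolt.diy may differ:

```typescript
// Sketch: resolve a provider's API key, preferring a key entered through the
// settings panel and falling back to a server-side environment variable named
// <PROVIDER>_API_KEY. The naming convention is an assumption, not confirmed
// by this commit.
function resolveApiKey(
  provider: string,
  uiKeys: Record<string, string>,
  env: Record<string, string | undefined>,
): string | undefined {
  // 1. A key configured in the UI takes precedence
  if (uiKeys[provider]) return uiKeys[provider];
  // 2. Otherwise fall back to e.g. MOONSHOT_API_KEY in the environment
  return env[`${provider.toUpperCase()}_API_KEY`];
}

const fromUi = resolveApiKey("moonshot", { moonshot: "ui-key" }, {});
console.log(fromUi); // "ui-key"

const fromEnv = resolveApiKey("moonshot", {}, { MOONSHOT_API_KEY: "env-key" });
console.log(fromEnv); // "env-key"
```

Either source is enough: a provider works once the app can resolve a key for it from the UI or from the environment.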