Merge pull request #1962 from Stijnus/BOLTDIY_DOCS

feat: update readme and documentation
This commit is contained in:
Stijnus
2025-09-05 01:59:03 +02:00
committed by GitHub
6 changed files with 1006 additions and 172 deletions

README.md

@@ -2,10 +2,10 @@
[![bolt.diy: AI-Powered Full-Stack Web Development in the Browser](./public/social_preview_index.jpg)](https://bolt.diy)
Welcome to bolt.diy, the official open source version of Bolt.new, which allows you to choose the LLM that you use for each prompt! Currently, you can use OpenAI, Anthropic, Ollama, OpenRouter, Gemini, LMStudio, Mistral, xAI, HuggingFace, DeepSeek, Groq, Cohere, Together, Perplexity, Moonshot (Kimi), Hyperbolic, GitHub Models, Amazon Bedrock, and OpenAI-like providers - and it is easily extended to use any other model supported by the Vercel AI SDK! See the instructions below for running this locally and extending it to include more models.
-----
Check the [bolt.diy Docs](https://stackblitz-labs.github.io/bolt.diy/) for more official installation instructions and additional information.
-----
Also [this pinned post in our community](https://thinktank.ottomator.ai/t/videos-tutorial-helpful-content/3243) has a bunch of incredible resources for running and deploying bolt.diy yourself!
@@ -17,10 +17,13 @@ bolt.diy was originally started by [Cole Medin](https://www.youtube.com/@ColeMed
## Table of Contents
- [Join the Community](#join-the-community)
- [Recent Major Additions](#recent-major-additions)
- [Features](#features)
- [Setup](#setup)
- [Quick Installation](#quick-installation)
- [Manual Installation](#manual-installation)
- [Configuring API Keys and Providers](#configuring-api-keys-and-providers)
- [Setup Using Git (For Developers only)](#setup-using-git-for-developers-only)
- [Available Scripts](#available-scripts)
- [Contributing](#contributing)
- [Roadmap](#roadmap)
@@ -38,73 +41,52 @@ you to understand where the current areas of focus are.
If you want to know what we are working on, what we are planning to work on, or if you want to contribute to the
project, please check the [project management guide](./PROJECT.md) to get started easily.
## Recent Major Additions

### ✅ Completed Features
- **19+ AI Provider Integrations** - OpenAI, Anthropic, Google, Groq, xAI, DeepSeek, Mistral, Cohere, Together, Perplexity, HuggingFace, Ollama, LM Studio, OpenRouter, Moonshot, Hyperbolic, GitHub Models, Amazon Bedrock, OpenAI-like
- **Electron Desktop App** - Native desktop experience with full functionality
- **Advanced Deployment Options** - Netlify, Vercel, and GitHub Pages deployment
- **Supabase Integration** - Database management and query capabilities
- **Data Visualization & Analysis** - Charts, graphs, and data analysis tools
- **MCP (Model Context Protocol)** - Enhanced AI tool integration
- **Search Functionality** - Codebase search and navigation
- **File Locking System** - Prevents conflicts during AI code generation
- **Diff View** - Visual representation of AI-made changes
- **Git Integration** - Clone, import, and deployment capabilities
- **Expo App Creation** - React Native development support
- **Voice Prompting** - Audio input for prompts
- **Bulk Chat Operations** - Delete multiple chats at once
- **Project Snapshot Restoration** - Restore projects from snapshots on reload

### 🔄 In Progress / Planned
- **File Locking & Diff Improvements** - Enhanced conflict prevention
- **Backend Agent Architecture** - Move from single model calls to an agent-based system
- **LLM Prompt Optimization** - Better performance for smaller models
- **Project Planning Documentation** - LLM-generated project plans in markdown
- **VSCode Integration** - Git-like confirmations and workflows
- **Document Upload for Knowledge** - Reference materials and coding style guides
- **Additional Provider Integrations** - Azure OpenAI, Vertex AI, and Granite
## Features
- **AI-powered full-stack web development** for **NodeJS based applications** directly in your browser.
- **Support for 19+ LLMs** with an extensible architecture to integrate additional models.
- **Attach images to prompts** for better contextual understanding.
- **Integrated terminal** to view output of LLM-run commands.
- **Revert code to earlier versions** for easier debugging and quicker changes.
- **Download projects as ZIP** for easy portability and sync to a folder on the host.
- **Integration-ready Docker support** for a hassle-free setup.
- **Deploy directly** to **Netlify**, **Vercel**, or **GitHub Pages**.
- **Electron desktop app** for native desktop experience.
- **Data visualization and analysis** with integrated charts and graphs.
- **Git integration** with clone, import, and deployment capabilities.
- **MCP (Model Context Protocol)** support for enhanced AI tool integration.
- **Search functionality** to search through your codebase.
- **File locking system** to prevent conflicts during AI code generation.
- **Diff view** to see changes made by the AI.
- **Supabase integration** for database management and queries.
- **Expo app creation** for React Native development.
## Setup
@@ -114,10 +96,13 @@ Let's get you up and running with the stable version of Bolt.DIY!
## Quick Installation
[![Download Latest Release](https://img.shields.io/github/v/release/stackblitz-labs/bolt.diy?label=Download%20Bolt&sort=semver)](https://github.com/stackblitz-labs/bolt.diy/releases/latest) ← Click here to go to the latest release version!
- Download the binary for your platform (available for Windows, macOS, and Linux)
- **Note**: For macOS, if you get the error "This app is damaged", run:
```bash
xattr -cr /path/to/Bolt.app
```
## Manual installation
@@ -192,38 +177,159 @@ This option requires some familiarity with Docker but provides a more isolated e
docker compose --profile development up
```
### Option 3: Desktop Application (Electron)
For users who prefer a native desktop experience, bolt.diy is also available as an Electron desktop application:
1. **Download the Desktop App**:
- Visit the [latest release](https://github.com/stackblitz-labs/bolt.diy/releases/latest)
- Download the appropriate binary for your operating system
- For macOS: Extract and run the `.dmg` file
- For Windows: Run the `.exe` installer
- For Linux: Extract and run the AppImage or install the `.deb` package
2. **Alternative**: Build from Source:
```bash
# Install dependencies
pnpm install
# Build the Electron app
pnpm electron:build:dist # For all platforms
# OR platform-specific:
pnpm electron:build:mac # macOS
pnpm electron:build:win # Windows
pnpm electron:build:linux # Linux
```
The desktop app provides the same full functionality as the web version with additional native features.
## Configuring API Keys and Providers
Bolt.diy features a modern, intuitive settings interface for managing AI providers and API keys. The settings are organized into dedicated panels for easy navigation and configuration.

### Accessing Provider Settings
1. **Open Settings**: Click the settings icon (⚙️) in the sidebar to access the settings panel
2. **Navigate to Providers**: Select the "Providers" tab from the settings menu
3. **Choose Provider Type**: Switch between "Cloud Providers" and "Local Providers" tabs

### Cloud Providers Configuration
The Cloud Providers tab displays all cloud-based AI services in an organized card layout:

#### Adding API Keys
1. **Select Provider**: Browse the grid of available cloud providers (OpenAI, Anthropic, Google, etc.)
2. **Toggle Provider**: Use the switch to enable/disable each provider
3. **Set API Key**:
   - Click the provider card to expand its configuration
   - Click on the "API Key" field to enter edit mode
   - Paste your API key and press Enter to save
   - The interface shows real-time validation with green checkmarks for valid keys

#### Advanced Features
- **Bulk Toggle**: Use "Enable All Cloud" to toggle all cloud providers at once
- **Visual Status**: Green checkmarks indicate properly configured providers
- **Provider Icons**: Each provider has a distinctive icon for easy identification
- **Descriptions**: Helpful descriptions explain each provider's capabilities

### Local Providers Configuration
The Local Providers tab manages local AI installations and custom endpoints:

#### Ollama Configuration
1. **Enable Ollama**: Toggle the Ollama provider switch
2. **Configure Endpoint**: Set the API endpoint (defaults to `http://127.0.0.1:11434`)
3. **Model Management**:
   - View all installed models with size and parameter information
   - Update models to the latest versions with one click
   - Delete unused models
   - Install new models by entering model names

#### Other Local Providers
- **LM Studio**: Configure custom base URLs for LM Studio endpoints
- **OpenAI-like**: Connect to any OpenAI-compatible API endpoint
- **Auto-detection**: The system automatically detects environment variables for base URLs
### Environment Variables vs UI Configuration
Bolt.diy supports both methods for maximum flexibility:
#### Environment Variables (Recommended for Production)
Set API keys and base URLs in your `.env.local` file:
```bash
# API Keys
OPENAI_API_KEY=your_openai_key_here
ANTHROPIC_API_KEY=your_anthropic_key_here
# Custom Base URLs
OLLAMA_BASE_URL=http://127.0.0.1:11434
LMSTUDIO_BASE_URL=http://127.0.0.1:1234
```
#### UI-Based Configuration
- **Real-time Updates**: Changes take effect immediately
- **Secure Storage**: API keys are stored securely in browser cookies
- **Visual Feedback**: Clear indicators show configuration status
- **Easy Management**: Edit, view, and manage keys through the interface
### Provider-Specific Features
#### OpenRouter
- **Free Models Filter**: Toggle to show only free models when browsing
- **Pricing Information**: View input/output costs for each model
- **Model Search**: Fuzzy search through all available models
#### Ollama
- **Model Installer**: Built-in interface to install new models
- **Progress Tracking**: Real-time download progress for model updates
- **Model Details**: View model size, parameters, and quantization levels
- **Auto-refresh**: Automatically detects newly installed models
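Under the hood, features like auto-refresh and the model list come from Ollama's local REST API. The sketch below shows the kind of call involved; `/api/tags` is Ollama's standard model-listing endpoint, while the helper names are illustrative and not bolt.diy's actual internals:

```typescript
// Minimal sketch: query a local Ollama instance for its installed models,
// the same information the settings UI displays. Helper names are illustrative.
type OllamaTags = { models: { name: string; size: number }[] };

// Pure helper: extract model names from an /api/tags response payload.
function modelNames(tags: OllamaTags): string[] {
  return tags.models.map((m) => m.name);
}

// Network call against a running Ollama install (default port 11434).
async function listOllamaModels(baseUrl = 'http://127.0.0.1:11434'): Promise<string[]> {
  const res = await fetch(`${baseUrl}/api/tags`); // Ollama's model-listing endpoint
  if (!res.ok) throw new Error(`Ollama not reachable: HTTP ${res.status}`);
  return modelNames((await res.json()) as OllamaTags);
}

// Example payload shape, mirroring what /api/tags returns:
const sample: OllamaTags = {
  models: [{ name: 'qwen2.5-coder:32b', size: 19_851_349_856 }],
};
console.log(modelNames(sample)); // -> [ 'qwen2.5-coder:32b' ]
```

If the endpoint is changed in the settings UI, the same base URL substitution applies to every call.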
#### Search & Navigation
- **Fuzzy Search**: Type-ahead search across all providers and models
- **Keyboard Navigation**: Use arrow keys and Enter to navigate quickly
- **Clear Search**: Press `Cmd+K` (Mac) or `Ctrl+K` (Windows/Linux) to clear search
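The type-ahead behavior described above can be approximated with a simple in-order subsequence check; this is an illustrative sketch of the technique, not the actual matcher bolt.diy ships:

```typescript
// Illustrative subsequence-style fuzzy matching, the core idea behind
// type-ahead provider/model search. Not the actual bolt.diy implementation.
function fuzzyMatch(query: string, candidate: string): boolean {
  const q = query.toLowerCase();
  const c = candidate.toLowerCase();
  let i = 0;
  // Every query character must appear in the candidate, in order.
  for (const ch of c) {
    if (i < q.length && ch === q[i]) i++;
  }
  return i === q.length;
}

const providers = ['OpenAI', 'Anthropic', 'OpenRouter', 'Ollama', 'LM Studio'];
console.log(providers.filter((p) => fuzzyMatch('or', p))); // -> [ 'OpenRouter' ]
```

Real fuzzy matchers also rank results (e.g. by match compactness), but the filtering step is the same shape.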
### Troubleshooting
#### Common Issues
- **API Key Not Recognized**: Ensure you're using the correct API key format for each provider
- **Base URL Issues**: Verify the endpoint URL is correct and accessible
- **Model Not Loading**: Check that the provider is enabled and properly configured
- **Environment Variables Not Working**: Restart the application after adding new environment variables
#### Status Indicators
- 🟢 **Green Checkmark**: Provider properly configured and ready to use
- 🔴 **Red X**: Configuration missing or invalid
- 🟡 **Yellow Indicator**: Provider enabled but may need additional setup
- 🔵 **Blue Pencil**: Click to edit configuration
### Supported Providers Overview
#### Cloud Providers
- **OpenAI** - GPT-4, GPT-3.5, and other OpenAI models
- **Anthropic** - Claude 3.5 Sonnet, Claude 3 Opus, and other Claude models
- **Google (Gemini)** - Gemini 1.5 Pro, Gemini 1.5 Flash, and other Gemini models
- **Groq** - Fast inference with Llama, Mixtral, and other models
- **xAI** - Grok models including Grok-2 and Grok-2 Vision
- **DeepSeek** - DeepSeek Coder and other DeepSeek models
- **Mistral** - Mixtral, Mistral 7B, and other Mistral models
- **Cohere** - Command R, Command R+, and other Cohere models
- **Together AI** - Various open-source models
- **Perplexity** - Sonar models for search and reasoning
- **HuggingFace** - Access to HuggingFace model hub
- **OpenRouter** - Unified API for multiple model providers
- **Moonshot (Kimi)** - Kimi AI models
- **Hyperbolic** - High-performance model inference
- **GitHub Models** - Models available through GitHub
- **Amazon Bedrock** - AWS managed AI models
#### Local Providers
- **Ollama** - Run open-source models locally with advanced model management
- **LM Studio** - Local model inference with LM Studio
- **OpenAI-like** - Connect to any OpenAI-compatible API endpoint
> **💡 Pro Tip**: Start with OpenAI or Anthropic for the best results, then explore other providers based on your specific needs and budget considerations.
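"OpenAI-like" works because such servers speak the OpenAI chat-completions wire format, so only the base URL changes. A hedged sketch (the base URL and model name are placeholders you must supply, and whether the base URL already includes `/v1` depends on the server):

```typescript
// Sketch (not the bolt.diy implementation): calling any OpenAI-compatible
// endpoint. Base URL, API key, and model name are placeholders.
function buildChatBody(model: string, prompt: string) {
  return { model, messages: [{ role: 'user', content: prompt }] };
}

async function chatCompletion(baseUrl: string, apiKey: string, model: string, prompt: string): Promise<string> {
  // OpenAI-compatible servers expose POST {baseUrl}/v1/chat/completions
  const res = await fetch(`${baseUrl}/v1/chat/completions`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(buildChatBody(model, prompt)),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const data = (await res.json()) as { choices: { message: { content: string } }[] };
  return data.choices[0].message.content;
}

console.log(buildChatBody('local-model', 'Hello'));
```

This is also why LM Studio and Ollama's OpenAI-compatibility mode can be wired up as OpenAI-like providers.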
## Setup Using Git (For Developers only)
@@ -272,7 +378,7 @@ This method is recommended for developers who want to:
Hint: Be aware that this can include beta features and is more likely to have bugs than the stable release.
>**Open the WebUI to test (Default: http://localhost:5173)**
> - Beginners:
> - Try to use a sophisticated provider/model like Anthropic with Claude Sonnet 3.x models to get the best results
> - Explanation: The system prompt currently implemented in bolt.diy can't deliver the best performance for every provider and model out there, so it works better with some models than with others, even if the models themselves are perfectly suited to programming
> - Future: A plugin/extensions library is planned, so different system prompts can be used for different models, which will help produce better results
@@ -341,7 +447,25 @@ Remember to always commit your local changes or stash them before pulling update
- **`pnpm run typecheck`**: Runs TypeScript type checking.
- **`pnpm run typegen`**: Generates TypeScript types using Wrangler.
- **`pnpm run deploy`**: Deploys the project to Cloudflare Pages.
- **`pnpm run lint`**: Runs ESLint to check for code issues.
- **`pnpm run lint:fix`**: Automatically fixes linting issues.
- **`pnpm run clean`**: Cleans build artifacts and cache.
- **`pnpm run prepare`**: Sets up husky for git hooks.
- **Docker Scripts**:
- **`pnpm run dockerbuild`**: Builds the Docker image for development.
- **`pnpm run dockerbuild:prod`**: Builds the Docker image for production.
- **`pnpm run dockerrun`**: Runs the Docker container.
- **`pnpm run dockerstart`**: Starts the Docker container with proper bindings.
- **Electron Scripts**:
- **`pnpm electron:build:deps`**: Builds Electron main and preload scripts.
- **`pnpm electron:build:main`**: Builds the Electron main process.
- **`pnpm electron:build:preload`**: Builds the Electron preload script.
- **`pnpm electron:build:renderer`**: Builds the Electron renderer.
- **`pnpm electron:build:unpack`**: Creates an unpacked Electron build.
- **`pnpm electron:build:mac`**: Builds for macOS.
- **`pnpm electron:build:win`**: Builds for Windows.
- **`pnpm electron:build:linux`**: Builds for Linux.
- **`pnpm electron:build:dist`**: Builds for all platforms.
---


@@ -5,12 +5,6 @@ import { classNames } from '~/utils/classNames';
import { profileStore } from '~/lib/stores/profile';
import type { TabType, Profile } from './types';
const BetaLabel = () => (
<span className="px-1.5 py-0.5 rounded-full bg-purple-500/10 dark:bg-purple-500/20 text-[10px] font-medium text-purple-600 dark:text-purple-400 ml-2">
BETA
</span>
);
interface AvatarDropdownProps {
onSelectTab: (tab: TabType) => void;
}
@@ -117,22 +111,6 @@ export const AvatarDropdown = ({ onSelectTab }: AvatarDropdownProps) => {
</DropdownMenu.Item>
<div className="my-1 border-t border-gray-200/50 dark:border-gray-800/50" />
<DropdownMenu.Item
className={classNames(
'flex items-center gap-2 px-4 py-2.5',
'text-sm text-gray-700 dark:text-gray-200',
'hover:bg-purple-50 dark:hover:bg-purple-500/10',
'hover:text-purple-500 dark:hover:text-purple-400',
'cursor-pointer transition-all duration-200',
'outline-none',
'group',
)}
onClick={() => onSelectTab('service-status')}
>
<div className="i-ph:heartbeat w-4 h-4 text-gray-400 group-hover:text-purple-500 dark:group-hover:text-purple-400 transition-colors" />
Service Status
<BetaLabel />
</DropdownMenu.Item>
<DropdownMenu.Item
className={classNames(
@@ -151,6 +129,22 @@ export const AvatarDropdown = ({ onSelectTab }: AvatarDropdownProps) => {
<div className="i-ph:bug w-4 h-4 text-gray-400 group-hover:text-purple-500 dark:group-hover:text-purple-400 transition-colors" />
Report Bug
</DropdownMenu.Item>
<DropdownMenu.Item
className={classNames(
'flex items-center gap-2 px-4 py-2.5',
'text-sm text-gray-700 dark:text-gray-200',
'hover:bg-purple-50 dark:hover:bg-purple-500/10',
'hover:text-purple-500 dark:hover:text-purple-400',
'cursor-pointer transition-all duration-200',
'outline-none',
'group',
)}
onClick={() => window.open('https://stackblitz-labs.github.io/bolt.diy/', '_blank')}
>
<div className="i-ph:question w-4 h-4 text-gray-400 group-hover:text-purple-500 dark:group-hover:text-purple-400 transition-colors" />
Help & Documentation
</DropdownMenu.Item>
</DropdownMenu.Content>
</DropdownMenu.Portal>
</DropdownMenu.Root>


@@ -4,7 +4,7 @@ import { toast } from 'react-toastify';
import { Dialog, DialogButton, DialogDescription, DialogRoot, DialogTitle } from '~/components/ui/Dialog';
import { ThemeSwitch } from '~/components/ui/ThemeSwitch';
import { ControlPanel } from '~/components/@settings/core/ControlPanel';
import { SettingsButton, HelpButton } from '~/components/ui/SettingsButton';
import { Button } from '~/components/ui/Button';
import { db, deleteById, getAll, chatId, type ChatHistoryItem, useChatHistory } from '~/lib/persistence';
import { cubicEasingFn } from '~/utils/easings';
@@ -340,6 +340,7 @@ export const Menu = () => {
<div className="h-12 flex items-center justify-between px-4 border-b border-gray-100 dark:border-gray-800/50 bg-gray-50/50 dark:bg-gray-900/50 rounded-tr-2xl">
<div className="text-gray-900 dark:text-white font-medium"></div>
<div className="flex items-center gap-3">
<HelpButton onClick={() => window.open('https://stackblitz-labs.github.io/bolt.diy/', '_blank')} />
<span className="font-medium text-sm text-gray-900 dark:text-white truncate">
{profile?.username || 'Guest User'}
</span>
@@ -525,7 +526,9 @@ export const Menu = () => {
</DialogRoot>
</div>
<div className="flex items-center justify-between border-t border-gray-200 dark:border-gray-800 px-4 py-3">
<div className="flex items-center gap-3">
<SettingsButton onClick={handleSettingsClick} />
</div>
<ThemeSwitch />
</div>
</div>


@@ -16,3 +16,20 @@ export const SettingsButton = memo(({ onClick }: SettingsButtonProps) => {
/>
);
});
interface HelpButtonProps {
onClick: () => void;
}
export const HelpButton = memo(({ onClick }: HelpButtonProps) => {
return (
<IconButton
onClick={onClick}
icon="i-ph:question"
size="xl"
title="Help & Documentation"
data-testid="help-button"
className="text-[#666] hover:text-bolt-elements-textPrimary hover:bg-bolt-elements-item-backgroundActive/10 transition-colors"
/>
);
});


@@ -3,32 +3,185 @@
## Models and Setup
??? question "What are the best models for bolt.diy?"
For the best experience with bolt.diy, we recommend using the following models from our 19 supported providers:
**Top Recommended Models:**
- **Claude 3.5 Sonnet** (Anthropic): Best overall coder, excellent for complex applications
- **GPT-4o** (OpenAI): Strong alternative with great performance across all use cases
- **Claude 4 Opus** (Anthropic): Latest flagship model with enhanced capabilities
- **Gemini 2.0 Flash** (Google): Exceptional speed for rapid development
- **DeepSeekCoder V3** (DeepSeek): Best open-source model for coding tasks
**Self-Hosting Options:**
- **DeepSeekCoder V2 236b**: Powerful self-hosted option
- **Qwen 2.5 Coder 32b**: Best for moderate hardware requirements
- **Ollama models**: Local inference with various model sizes
**Latest Specialized Models:**
- **Moonshot AI (Kimi)**: Kimi K2 models with advanced reasoning capabilities
- **xAI Grok 4**: Latest Grok model with 256K context window
- **Anthropic Claude 4 Opus**: Latest flagship model from Anthropic
!!! tip "Model Selection Tips"
- Use larger models (7B+ parameters) for complex applications
- Claude models excel at structured code generation
- GPT-4o provides excellent general-purpose coding assistance
- Gemini models offer the fastest response times
??? question "How do I configure API keys for different providers?"
You can configure API keys in two ways:
**Option 1: Environment Variables (Recommended for production)**
Create a `.env.local` file in your project root:
```bash
ANTHROPIC_API_KEY=your_anthropic_key_here
OPENAI_API_KEY=your_openai_key_here
GOOGLE_GENERATIVE_AI_API_KEY=your_google_key_here
MOONSHOT_API_KEY=your_moonshot_key_here
XAI_API_KEY=your_xai_key_here
```
**Option 2: In-App Configuration**
- Click the settings icon (⚙️) in the sidebar
- Navigate to the "Providers" tab
- Switch between "Cloud Providers" and "Local Providers" tabs
- Click on a provider card to expand its configuration
- Click on the "API Key" field to enter edit mode
- Paste your API key and press Enter to save
- Look for the green checkmark to confirm proper configuration
!!! note "Security Note"
Never commit API keys to version control. The `.env.local` file is already in `.gitignore`.
??? question "How do I add a new LLM provider?"
bolt.diy uses a modular provider architecture. To add a new provider:
1. **Create a Provider Class** in `app/lib/modules/llm/providers/your-provider.ts`
2. **Implement the BaseProvider interface** with your provider's specific logic
3. **Register the provider** in `app/lib/modules/llm/registry.ts`
4. **The system automatically detects** and registers your new provider
See the [Adding New LLMs](../#adding-new-llms) section for complete implementation details.
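The four steps above read roughly like the sketch below. The real `BaseProvider` interface and registry live in `app/lib/modules/llm/` and have more members than shown here, so these simplified shapes are illustrative stand-ins only:

```typescript
// Simplified stand-ins for the real types in app/lib/modules/llm/ --
// illustrative only; consult the existing provider files for the actual interface.
interface ModelInfo {
  name: string;
  label: string;
  provider: string;
  maxTokenAllowed: number;
}

// Steps 1-2: a provider class declaring its name, env-var key, and models.
// The env var name below is hypothetical.
class ExampleProvider {
  name = 'ExampleProvider';
  config = { apiTokenKey: 'EXAMPLE_PROVIDER_API_KEY' };

  staticModels: ModelInfo[] = [
    {
      name: 'example-model-v1',
      label: 'Example Model v1',
      provider: 'ExampleProvider',
      maxTokenAllowed: 8000,
    },
  ];
}

// Step 3: the registry is a module that re-exports every provider class;
// step 4: the provider manager scans those exports and registers each one.
const registry = { ExampleProvider };
const registered = Object.values(registry).map((P) => new P().name);
console.log(registered); // -> [ 'ExampleProvider' ]
```

Because registration is export-driven, no central switch statement needs editing when a provider is added.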
??? question "How do I set up Moonshot AI (Kimi) provider?"
Moonshot AI provides access to advanced Kimi models with excellent reasoning capabilities:
**Setup Steps:**
1. Visit [Moonshot AI Platform](https://platform.moonshot.ai/console/api-keys)
2. Create an account and generate an API key
3. Add `MOONSHOT_API_KEY=your_key_here` to your `.env.local` file
4. Or configure it directly in Settings → Providers → Cloud Providers → Moonshot
**Available Models:**
- **Kimi K2 Preview**: Latest Kimi model with 128K context
- **Kimi K2 Turbo**: Fast inference optimized version
- **Kimi Thinking**: Specialized for complex reasoning tasks
- **Moonshot v1 series**: Legacy models with vision capabilities
!!! tip "Moonshot AI Features"
- Excellent for Chinese language tasks
- Strong reasoning capabilities
- Vision-enabled models available
- Competitive pricing
??? question "What are the latest xAI Grok models?"
xAI has released several new Grok models with enhanced capabilities:
**Latest Models:**
- **Grok 4**: Most advanced model with 256K context window
- **Grok 4 (07-09)**: Specialized variant for specific tasks
- **Grok 3 Beta**: Previous generation with 131K context
- **Grok 3 Mini variants**: Optimized for speed and efficiency
**Setup:**
1. Get your API key from [xAI Platform](https://docs.x.ai/docs/quickstart#creating-an-api-key)
2. Add `XAI_API_KEY=your_key_here` to your `.env.local` file
3. Models will be available in the provider selection
## Best Practices

??? question "How do I access help and documentation?"
bolt.diy provides multiple ways to access help and documentation:

**Help Icon in Sidebar:**
- Look for the question mark (?) icon in the sidebar
- Click it to open the full documentation in a new tab
- Provides instant access to guides, troubleshooting, and FAQs

**Documentation Resources:**
- **Main Documentation**: Complete setup and feature guides
- **FAQ Section**: Answers to common questions
- **Troubleshooting**: Solutions for common issues
- **Best Practices**: Tips for optimal usage

**Community Support:**
- **GitHub Issues**: Report bugs and request features
- **Community Forum**: [thinktank.ottomator.ai](https://thinktank.ottomator.ai)
??? question "How do I get the best results with bolt.diy?"
Follow these proven strategies for optimal results:
**Project Setup:**
- **Be specific about your stack**: Mention frameworks/libraries (Astro, Tailwind, ShadCN, Next.js) in your initial prompt
- **Choose appropriate templates**: Use our 15+ project templates for quick starts
- **Configure providers properly**: Set up your preferred LLM providers before starting
**Development Workflow:**
- **Use the enhance prompt icon**: Click the enhance icon to let AI refine your prompts before submitting
- **Scaffold basics first**: Build foundational structure before adding advanced features
- **Batch simple instructions**: Combine tasks like *"Change colors, add mobile responsiveness, restart dev server"*
**Advanced Features:**
- **Leverage MCP tools**: Use Model Context Protocol for enhanced AI capabilities
- **Connect databases**: Integrate Supabase for backend functionality
- **Use Git integration**: Version control your projects with GitHub
- **Deploy easily**: Use built-in Vercel, Netlify, or GitHub Pages deployment
??? question "How do I use MCP (Model Context Protocol) tools?"
MCP extends bolt.diy's AI capabilities with external tools:
**Setting up MCP:**
1. Go to Settings → MCP tab
2. Add MCP server configurations
3. Configure server endpoints and authentication
4. Enable/disable servers as needed
**Available MCP Capabilities:**
- Database connections and queries
- File system operations
- API integrations
- Custom business logic tools
The MCP integration allows the AI to interact with external services and data sources during conversations.
??? question "How do I deploy my bolt.diy projects?"
bolt.diy supports one-click deployment to multiple platforms:
**Supported Platforms:**
- **Vercel**: Go to Settings → Connections → Vercel, then deploy with one click
- **Netlify**: Connect your Netlify account and deploy instantly
- **GitHub Pages**: Push to GitHub and enable Pages in repository settings
**Deployment Features:**
- Automatic build configuration for popular frameworks
- Environment variable management
- Custom domain support
- Preview deployments for testing
??? question "How do I use Git integration features?"
bolt.diy provides comprehensive Git and GitHub integration:
**Basic Git Operations:**
- Import existing repositories by URL
- Create new repositories on GitHub
- Automatic commits for major changes
- Push/pull changes seamlessly
**Advanced Features:**
- Connect GitHub account in Settings → Connections
- Import from your connected repositories
- Version control with diff visualization
- Collaborative development support
## Project Information

We're forming a team of maintainers to manage demand and streamline issue resolution. The maintainers are rockstars, and we're also exploring partnerships to help the project thrive.
## New Features & Technologies
??? question "What's new in bolt.diy?"
Recent major additions to bolt.diy include:
**Advanced AI Capabilities:**
- **19 LLM Providers**: Support for Anthropic, OpenAI, Google, DeepSeek, Cohere, and more
- **MCP Integration**: Model Context Protocol for enhanced AI tool calling
- **Dynamic Model Loading**: Automatic model discovery from provider APIs
**Development Tools:**
- **WebContainer**: Secure sandboxed development environment
- **Live Preview**: Real-time application previews without leaving the editor
- **Project Templates**: 15+ starter templates for popular frameworks
**Version Control & Collaboration:**
- **Git Integration**: Import/export projects with GitHub
- **Automatic Commits**: Smart version control for project changes
- **Diff Visualization**: See code changes clearly
**Backend & Database:**
- **Supabase Integration**: Built-in database and authentication
- **API Integration**: Connect to external services and databases
**Deployment & Production:**
- **One-Click Deployment**: Vercel, Netlify, and GitHub Pages support
- **Environment Management**: Production-ready configuration
- **Build Optimization**: Automatic configuration for popular frameworks
??? question "How do I use the new project templates?"
bolt.diy offers templates for popular frameworks and technologies:
**Getting Started:**
1. Start a new project in bolt.diy
2. Browse available templates in the starter selection
3. Choose your preferred technology stack
4. The AI will scaffold your project with best practices
**Available Templates:**
- **Frontend**: React, Vue, Angular, Svelte, SolidJS
- **Full-Stack**: Next.js, Astro, Qwik, Remix, Nuxt
- **Mobile**: Expo, React Native
- **Content**: Slidev presentations, Astro blogs
- **Vanilla**: Vite with TypeScript/JavaScript
Templates include pre-configured tooling, linting, and build processes.
??? question "How does WebContainer work?"
WebContainer provides a secure development environment:
**Features:**
- **Isolated Environment**: Secure sandbox for running code
- **Full Node.js Support**: Run npm, build tools, and dev servers
- **Live File System**: Direct manipulation of project files
- **Terminal Integration**: Execute commands with real-time output
**Supported Technologies:**
- All major JavaScript frameworks (React, Vue, Angular, etc.)
- Build tools (Vite, Webpack, Parcel)
- Package managers (npm, pnpm, yarn)
??? question "How do I connect external databases?"
Use Supabase for backend database functionality:
**Setup Process:**
1. Create a Supabase project at supabase.com
2. Get your project URL and API keys
3. Configure the connection in your bolt.diy project
4. Use Supabase tools to interact with your database
**Available Features:**
- Real-time subscriptions
- Built-in authentication
- Row Level Security (RLS)
- Automatic API generation
- Database migrations
## Model Comparisons

??? question "How do local LLMs compare to larger models like Claude 3.5 Sonnet for bolt.diy?"
While local LLMs are improving rapidly, larger models still offer the best results for complex applications. Here's the current landscape:
**Recommended for Production:**
- **Claude 4 Opus**: Latest flagship model with enhanced reasoning (200K context)
- **Claude 3.5 Sonnet**: Proven excellent performance across all tasks
- **GPT-4o**: Strong general-purpose coding with great reliability
- **xAI Grok 4**: Latest Grok with 256K context window
**Fast & Efficient:**
- **Gemini 2.0 Flash**: Exceptional speed for rapid development
- **Claude 3 Haiku**: Cost-effective for simpler tasks
- **xAI Grok 3 Mini Fast**: Optimized for speed and efficiency
**Advanced Reasoning:**
- **Moonshot AI Kimi K2**: Advanced reasoning with 128K context
- **Moonshot AI Kimi Thinking**: Specialized for complex reasoning tasks
**Open Source & Self-Hosting:**
- **DeepSeek Coder V3**: Best open-source model available
- **DeepSeek Coder V2 236b**: Powerful self-hosted option
- **Qwen 2.5 Coder 32b**: Good balance of performance and resource usage
**Local Models (Ollama):**
- Best for privacy and offline development
- Use 7B+ parameter models for reasonable performance
- Still experimental for complex, large-scale applications
!!! tip "Model Selection Guide"
- Use Claude/GPT-4o for complex applications
- Use Gemini for fast prototyping
- Use local models for privacy/offline development
- Always test with your specific use case
## Troubleshooting

??? error "There was an error processing this request"
This generic error message means something went wrong. Check these locations:

- **Terminal output**: If you started with Docker or `pnpm`
- **Browser developer console**: Press `F12` → Console tab
- **Server logs**: Check for any backend errors
- **Network tab**: Verify API calls are working
??? error "x-api-key header missing"
This authentication error can be resolved by:

- **Restarting the container**: `docker compose restart` (if using Docker)
- **Switching run methods**: Try `pnpm` if using Docker, or vice versa
- **Checking API keys**: Verify your API keys are properly configured
- **Clearing browser cache**: Sometimes cached authentication causes issues
??? error "Blank preview when running the app"
Blank previews usually indicate code generation issues:

- **Check developer console** for JavaScript errors
- **Verify WebContainer is running** properly
- **Try refreshing** the preview pane
- **Check for hallucinated code** in the generated files
- **Restart the development server** if issues persist

??? error "MCP server connection failed"
If you're having trouble with MCP integrations:
- **Verify server configuration** in Settings → MCP
- **Check server endpoints** and authentication credentials
- **Test server connectivity** outside of bolt.diy
- **Review MCP server logs** for specific error messages
- **Ensure server supports** the MCP protocol version
??? error "Git integration not working"
Common Git-related issues and solutions:
- **GitHub connection failed**: Verify your GitHub token has correct permissions
- **Repository not found**: Check repository URL and access permissions
- **Push/pull failed**: Ensure you have write access to the repository
- **Merge conflicts**: Resolve conflicts manually or use the diff viewer
- **Large files blocked**: Check GitHub's file size limits
??? error "Deployment failed"
Deployment issues can be resolved by:
- **Checking build logs** for specific error messages
- **Verifying environment variables** are set correctly
- **Testing locally** before deploying
- **Checking platform-specific requirements** (Node version, build commands)
- **Reviewing deployment configuration** in platform settings
??? error "Everything works, but the results are bad"
For suboptimal AI responses, try these solutions:

- **Switch to a more capable model**: Use Claude 3.5 Sonnet, GPT-4o, or Claude 4 Opus
- **Be more specific** in your prompts about requirements and technologies
- **Use the enhance prompt feature** to refine your requests
- **Break complex tasks** into smaller, focused prompts
- **Provide context** about your project structure and goals
??? error "WebContainer preview not loading"
If the live preview isn't working:
- **Check WebContainer status** in the terminal
- **Verify Node.js compatibility** with your project
- **Restart the development environment**
- **Clear browser cache** and reload
- **Check for conflicting ports** (default is 5173)
??? error "Received structured exception #0xc0000005: access violation"
**Windows-specific issue**: Update the [Visual C++ Redistributable](https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170)

??? error "Miniflare or Wrangler errors in Windows"
**Windows development environment**: Install Visual Studio C++ (version 14.40.33816 or later). More details in [GitHub Issues](https://github.com/stackblitz-labs/bolt.diy/issues/19)
??? error "Provider not showing up after adding it"
If your custom LLM provider isn't appearing:
- **Restart the development server** to reload providers
- **Check the provider registry** in `app/lib/modules/llm/registry.ts`
- **Verify the provider class** extends `BaseProvider` correctly
- **Check browser console** for provider loading errors
- **Ensure proper TypeScript compilation** without errors
---

# Welcome to bolt.diy

bolt.diy allows you to choose the LLM that you use for each prompt! Currently, you can use models from 19 providers including OpenAI, Anthropic, Ollama, OpenRouter, Google/Gemini, LMStudio, Mistral, xAI, HuggingFace, DeepSeek, Groq, Cohere, Together AI, Perplexity AI, Hyperbolic, Moonshot AI (Kimi), Amazon Bedrock, GitHub Models, and more - with easy extensibility to add any other model supported by the Vercel AI SDK! See the instructions below for running this locally and extending it to include more models.

## Table of Contents
- [Option 2: With Docker](#option-2-with-docker)
- [Update Your Local Version to the Latest](#update-your-local-version-to-the-latest)
- [Adding New LLMs](#adding-new-llms)
- [MCP (Model Context Protocol) Integration](#mcp-model-context-protocol-integration)
- [Git Integration and Version Control](#git-integration-and-version-control)
- [Deployment Options](#deployment-options)
- [Supabase Integration](#supabase-integration)
- [WebContainer and Live Preview](#webcontainer-and-live-preview)
- [Project Templates](#project-templates)
- [Available Scripts](#available-scripts)
- [Development](#development)
- [Tips and Tricks](#tips-and-tricks)

## Features
- **AI-powered full-stack web development** directly in your browser with live preview
- **Support for 19 LLM providers** with an extensible architecture to integrate additional models
- **Attach images and files to prompts** for better contextual understanding
- **Integrated terminal** with WebContainer sandbox for running commands and testing
- **Version control with Git** - import/export projects, connect to GitHub repositories
- **MCP (Model Context Protocol)** integration for enhanced AI capabilities and tool calling
- **Database integration** with Supabase for backend development
- **One-click deployments** to Vercel, Netlify, and GitHub Pages
- **Project templates** for popular frameworks (React, Vue, Angular, Next.js, Astro, etc.)
- **Real-time collaboration** and project sharing
- **Code diff visualization** and version history
- **Download projects as ZIP** or push directly to GitHub
- **Docker support** for containerized development environments
- **Electron app** for native desktop experience
- **Theme customization** and accessibility features
- **Help icon** in sidebar linking to comprehensive documentation
---

Clone the repository using Git:

```bash
git clone https://github.com/stackblitz-labs/bolt.diy
cd bolt.diy
```

---

#### 2. Configure API Keys Directly in the Application

Alternatively, you can configure your API keys directly in the application using the modern settings interface:
1. **Open Settings**: Click the settings icon (⚙️) in the sidebar to access the settings panel
2. **Navigate to Providers**: Select the "Providers" tab from the settings menu
3. **Choose Provider Type**: Switch between "Cloud Providers" and "Local Providers" tabs
4. **Select Provider**: Browse the grid of available providers and click on the provider card you want to configure
5. **Configure API Key**: Click on the "API Key" field to enter edit mode, then paste your API key and press Enter
6. **Verify Configuration**: Look for the green checkmark indicator showing the provider is properly configured

The interface provides:
- **Real-time validation** with visual status indicators
- **Bulk operations** to enable/disable multiple providers at once
- **Secure storage** of API keys in browser cookies
- **Environment variable auto-detection** for server-side configurations
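The precedence between these two sources can be pictured with a small, self-contained sketch. The helper below is illustrative only (bolt.diy's real resolution logic lives inside the provider classes); the idea is simply that a key entered in the UI wins over a server environment variable:

```typescript
// Hypothetical sketch of API-key resolution: keys saved in the
// Settings UI take precedence over server environment variables.
// Names and shapes here are illustrative, not bolt.diy's real API.

type KeySources = {
  userKeys?: Record<string, string>;  // keys entered in Settings → Providers
  serverEnv?: Record<string, string>; // e.g. values loaded from .env.local
};

function resolveApiKey(envVarName: string, sources: KeySources): string | undefined {
  // 1. A key configured in the app takes precedence...
  const fromUser = sources.userKeys?.[envVarName];
  if (fromUser && fromUser.length > 0) return fromUser;
  // 2. ...otherwise fall back to the server environment.
  return sources.serverEnv?.[envVarName];
}

// Example: the UI key wins over the environment variable.
const key = resolveApiKey('OPENAI_API_KEY', {
  userKeys: { OPENAI_API_KEY: 'sk-from-ui' },
  serverEnv: { OPENAI_API_KEY: 'sk-from-env' },
});
console.log(key); // 'sk-from-ui'
```
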
Once you've configured your keys, the application will be ready to use the selected LLMs.

---

## Adding New LLMs

bolt.diy supports a modular architecture for adding new LLM providers and models. The system is designed to be easily extensible while maintaining consistency across all providers.

### Understanding the Provider Architecture

Each LLM provider is implemented as a separate class that extends the `BaseProvider` class. The provider system includes:
- **Static Models**: Pre-defined models that are always available
- **Dynamic Models**: Models that can be loaded from the provider's API at runtime
- **Configuration**: API key management and provider-specific settings
### Adding a New Provider
To add a new LLM provider, you need to create multiple files:
#### 1. Create the Provider Class
Create a new file in `app/lib/modules/llm/providers/your-provider.ts`:
```typescript
import { BaseProvider } from '~/lib/modules/llm/base-provider';
import type { ModelInfo } from '~/lib/modules/llm/types';
import type { LanguageModelV1 } from 'ai';
import type { IProviderSetting } from '~/types/model';
import { createYourProvider } from '@ai-sdk/your-provider';
export default class YourProvider extends BaseProvider {
  name = 'YourProvider';
  getApiKeyLink = 'https://your-provider.com/api-keys';

  config = {
    apiTokenKey: 'YOUR_PROVIDER_API_KEY',
  };

  staticModels: ModelInfo[] = [
    {
      name: 'your-model-name',
      label: 'Your Model Label',
      provider: 'YourProvider',
      maxTokenAllowed: 100000,
      maxCompletionTokens: 4000,
    },
  ];

  async getDynamicModels(
    apiKeys?: Record<string, string>,
    settings?: IProviderSetting,
    serverEnv?: Record<string, string>,
  ): Promise<ModelInfo[]> {
    // Implement dynamic model loading if supported
    return [];
  }

  getModelInstance(options: {
    model: string;
    serverEnv: Record<string, string>;
    apiKeys?: Record<string, string>;
    providerSettings?: Record<string, IProviderSetting>;
  }): LanguageModelV1 {
    const { apiKeys, model } = options;
    const apiKey = apiKeys?.[this.config.apiTokenKey] || '';

    return createYourProvider({
      apiKey,
      // other configuration options
    })(model);
  }
}
```
#### 2. Register the Provider
Add your provider to `app/lib/modules/llm/registry.ts`:
```typescript
import YourProvider from './providers/your-provider';
// ... existing imports ...

export {
  // ... existing exports ...
  YourProvider,
};
```
#### 3. Update the Manager (if needed)
The provider will be automatically registered by the `LLMManager` through the registry. The manager scans for all classes that extend `BaseProvider` and registers them automatically.
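The scan-and-register pattern can be sketched in a self-contained way. `BaseProvider` and `LLMManager` below are minimal stand-ins for the real classes in `app/lib/modules/llm/`, so treat this as an illustration of the mechanism rather than the actual implementation:

```typescript
// Simplified sketch of registry-driven auto-registration: the manager
// instantiates whatever provider classes the registry exports, so a
// new provider needs no wiring beyond its export.

abstract class BaseProvider {
  abstract name: string;
}

class OpenAIProvider extends BaseProvider { name = 'OpenAI'; }
class YourProvider extends BaseProvider { name = 'YourProvider'; }

// registry.ts re-exports every provider class...
const registry = { OpenAIProvider, YourProvider };

// ...and the manager walks the registry's exports at startup.
class LLMManager {
  providers = new Map<string, BaseProvider>();
  constructor() {
    for (const ProviderClass of Object.values(registry)) {
      const instance = new ProviderClass();
      this.providers.set(instance.name, instance);
    }
  }
}

const manager = new LLMManager();
console.log([...manager.providers.keys()]); // ['OpenAI', 'YourProvider']
```
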
### Adding Models to Existing Providers
To add new models to an existing provider:
1. **Edit the provider file** (e.g., `app/lib/modules/llm/providers/openai.ts`)
2. **Add to the `staticModels` array**:
```typescript
staticModels: ModelInfo[] = [
  // ... existing models ...
  {
    name: 'gpt-4o-mini-new',
    label: 'GPT-4o Mini (New)',
    provider: 'OpenAI',
    maxTokenAllowed: 128000,
    maxCompletionTokens: 16000,
  },
];
```
### Provider-Specific Configuration
Each provider can have its own configuration options:
- **API Key Environment Variables**: Define in the `config` object
- **Base URL Support**: Add `baseUrlKey` for custom endpoints
- **Provider Settings**: Custom settings in the UI
- **Dynamic Model Loading**: Implement `getDynamicModels()` for API-based model discovery
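For the dynamic-loading case, the usual shape is: call the provider's model-listing endpoint, then map the raw response into bolt.diy's `ModelInfo` records. The endpoint and response fields below are hypothetical (each provider's API differs); the mapping step is shown as a pure function so it is easy to test:

```typescript
// Hypothetical sketch of dynamic model discovery. Many providers expose
// a /models endpoint; its entries get mapped into ModelInfo. The raw
// field names (id, context_length) are illustrative assumptions.

interface ModelInfo {
  name: string;
  label: string;
  provider: string;
  maxTokenAllowed: number;
}

function toModelInfo(raw: { id: string; context_length?: number }): ModelInfo {
  return {
    name: raw.id,
    label: raw.id,
    provider: 'YourProvider',
    maxTokenAllowed: raw.context_length ?? 8192, // fallback when the API omits it
  };
}

// In getDynamicModels() you would fetch the list, then:
const models = [
  { id: 'your-model-large', context_length: 128000 },
  { id: 'your-model-small' },
].map(toModelInfo);

console.log(models[0].maxTokenAllowed); // 128000
console.log(models[1].maxTokenAllowed); // 8192
```
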
### Testing Your New Provider
1. **Restart the development server** after making changes
2. **Check the provider appears** in the Settings → Providers section
3. **Configure API keys** in the provider settings
4. **Test the models** in a chat session
### Best Practices
- **Follow the naming conventions** used by existing providers
- **Include proper error handling** for API failures
- **Add comprehensive documentation** for your provider
- **Test with both static and dynamic models**
- **Ensure proper API key validation**
The modular architecture makes it easy to add new providers while maintaining consistency and reliability across the entire system.
---
## MCP (Model Context Protocol) Integration
bolt.diy supports MCP (Model Context Protocol) servers to extend AI capabilities with external tools and services. MCP allows you to connect various tools and services that the AI can use during conversations.
### Setting up MCP Servers
1. Navigate to Settings → MCP tab
2. Add MCP server configurations
3. Configure server endpoints and authentication
4. Enable/disable servers as needed
MCP servers can provide:
- Database connections and queries
- File system operations
- API integrations
- Custom business logic tools
- And much more...
The MCP integration enhances the AI's ability to perform complex tasks by giving it access to external tools and data sources.
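The exact configuration format is whatever the Settings → MCP form accepts; many MCP clients describe servers with a JSON map along these lines, where each entry names a command that launches the server (the server names, packages, and paths below are illustrative, not a bolt.diy-specific schema):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    },
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost/mydb"]
    }
  }
}
```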
---
## Git Integration and Version Control
bolt.diy provides comprehensive Git integration for version control, collaboration, and project management.
### GitHub Integration
1. **Connect your GitHub account** in Settings → Connections → GitHub
2. **Import existing repositories** by URL or from your connected account
3. **Push projects directly to GitHub** with automatic repository creation
4. **Sync changes** between local development and remote repositories
### Version Control Features
- **Automatic commits** for major changes
- **Diff visualization** to see code changes
- **Branch management** and merge conflict resolution
- **Revert to previous versions** for debugging
- **Collaborative development** with team members
### Export Options
- **Download as ZIP** for easy sharing
- **Push to GitHub** for version control and collaboration
- **Import from GitHub** to continue working on existing projects
---
## Deployment Options
bolt.diy provides one-click deployment to popular hosting platforms, making it easy to share your projects with the world.
### Supported Platforms
#### Vercel Deployment
1. Connect your Vercel account in Settings → Connections → Vercel
2. Click the deploy button in your project
3. bolt.diy automatically builds and deploys your project
4. Get a live URL instantly with Vercel's global CDN
#### Netlify Deployment
1. Connect your Netlify account in Settings → Connections → Netlify
2. Deploy with a single click
3. Automatic build configuration and optimization
4. Preview deployments for every change
#### GitHub Pages
1. Connect your GitHub account
2. Push your project to a GitHub repository
3. Enable GitHub Pages in repository settings
4. Automatic deployment from your repository
### Deployment Features
- **Automatic build configuration** for popular frameworks
- **Environment variable management** for production
- **Custom domain support** through platform settings
- **Deployment previews** for testing changes
- **Rollback capabilities** for quick issue resolution
---
## Supabase Integration
bolt.diy integrates with Supabase to provide backend database functionality, authentication, and real-time features for your applications.
### Setting up Supabase
1. Create a Supabase project at [supabase.com](https://supabase.com)
2. Get your project URL and API keys from the Supabase dashboard
3. Configure the connection in your bolt.diy project
4. Use the Supabase tools to interact with your database
### Database Features
- **Real-time subscriptions** for live data updates
- **Authentication** with built-in user management
- **Row Level Security (RLS)** policies for data protection
- **Built-in API** for CRUD operations
- **Database migrations** and schema management
### Integration with AI Development
The AI can help you:
- **Design database schemas** for your applications
- **Write SQL queries** and database functions
- **Implement authentication flows**
- **Create API endpoints** for your frontend
- **Set up real-time features** for collaborative apps
Supabase integration makes it easy to build full-stack applications with a robust backend infrastructure.
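As a concrete sketch of what a Supabase data access looks like under the hood: every table is exposed over REST (PostgREST) at `/rest/v1/<table>`, authenticated with the project's anon key. The URL, key, and table name below are placeholders, and in a generated app you would normally use the `@supabase/supabase-js` client rather than raw `fetch`:

```typescript
// Sketch of assembling a Supabase REST read request. Placeholders only:
// substitute your real project URL and anon key from the dashboard.

const SUPABASE_URL = 'https://your-project.supabase.co';
const SUPABASE_ANON_KEY = 'your-anon-key';

function buildSelectRequest(table: string, columns = '*') {
  return {
    url: `${SUPABASE_URL}/rest/v1/${table}?select=${columns}`,
    headers: {
      apikey: SUPABASE_ANON_KEY,
      Authorization: `Bearer ${SUPABASE_ANON_KEY}`,
    },
  };
}

const req = buildSelectRequest('todos', 'id,title');
console.log(req.url);
// In a real app: const res = await fetch(req.url, { headers: req.headers });
```
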
---
## WebContainer and Live Preview
bolt.diy uses WebContainer technology to provide a secure, isolated development environment with live preview capabilities.
### WebContainer Features
- **Secure sandbox environment** - Run code in isolated containers
- **Live preview** - See your changes instantly without leaving the editor
- **Full Node.js environment** - Run npm scripts, build tools, and development servers
- **File system access** - Direct manipulation of project files
- **Terminal integration** - Execute commands and see real-time output
### Development Workflow
1. **Write code** in the integrated editor
2. **Run development servers** directly in WebContainer
3. **Preview your application** in real-time
4. **Test functionality** with the integrated terminal
5. **Debug issues** with live error reporting
### Supported Technologies
WebContainer supports all major JavaScript frameworks and tools:
- React, Vue, Angular, Svelte
- Next.js, Nuxt, Astro, Remix
- Vite, Webpack, Parcel
- Node.js, npm, pnpm, yarn
- And many more...
The WebContainer integration provides a seamless development experience without the need for local setup.
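The file system a WebContainer runs against is mounted from an in-memory tree. A minimal mount payload, following the `FileSystemTree` shape from `@webcontainer/api`, looks roughly like this; the boot/mount calls only work inside a browser, so they are sketched as comments:

```typescript
// Minimal example of the FileSystemTree shape used to mount files into
// a WebContainer (per @webcontainer/api): files carry { file: { contents } },
// folders carry { directory: { ... } }. Contents here are illustrative.

const files = {
  'package.json': {
    file: {
      contents: JSON.stringify({ name: 'demo', scripts: { dev: 'vite' } }, null, 2),
    },
  },
  src: {
    directory: {
      'main.js': { file: { contents: "console.log('hello from WebContainer');" } },
    },
  },
};

// In the browser, this tree is mounted and commands run against it:
//   const wc = await WebContainer.boot();
//   await wc.mount(files);
//   const proc = await wc.spawn('npm', ['run', 'dev']);

console.log(Object.keys(files)); // ['package.json', 'src']
```
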
---
## Project Templates
bolt.diy comes with a comprehensive collection of starter templates to help you quickly bootstrap your projects. Choose from popular frameworks and technologies:
### Frontend Frameworks
- **React + Vite** - Modern React setup with TypeScript
- **Vue.js** - Progressive JavaScript framework
- **Angular** - Enterprise-ready framework
- **Svelte** - Compiler-based framework for fast apps
- **SolidJS** - Reactive framework with fine-grained updates
### Full-Stack Frameworks
- **Next.js with shadcn/ui** - React framework with UI components
- **Astro** - Static site generator for content-focused sites
- **Qwik** - Resumable framework for instant loading
- **Remix** - Full-stack React framework
- **Nuxt** - Vue.js meta-framework
### Mobile & Cross-Platform
- **Expo App** - React Native with Expo
- **React Native** - Cross-platform mobile development
### Presentation & Content
- **Slidev** - Developer-friendly presentations
- **Astro Basic** - Lightweight static sites
### Vanilla JavaScript
- **Vanilla Vite** - Minimal JavaScript setup
- **Vite TypeScript** - TypeScript without framework
### Getting Started with Templates
1. Start a new project in bolt.diy
2. Browse available templates in the starter selection
3. Select your preferred technology stack
4. The AI will scaffold your project with best practices
5. Begin development immediately with live preview
All templates are pre-configured with modern tooling, linting, and build processes for immediate productivity.
---

## Available Scripts

### Development Scripts

- `pnpm run dev`: Starts the development server with hot reloading
- `pnpm run build`: Builds the project for production
- `pnpm run start`: Runs the built application locally using Wrangler Pages. This script uses `bindings.sh` to set up the necessary bindings so you don't have to duplicate environment variables.
- `pnpm run preview`: Builds and starts locally for production testing. Note: HTTP streaming currently doesn't work as expected with `wrangler pages dev`.
- `pnpm test`: Runs the test suite using Vitest
- `pnpm run test:watch`: Runs tests in watch mode
- `pnpm run lint`: Runs ESLint with auto-fix
- `pnpm run typecheck`: Runs TypeScript type checking
- `pnpm run typegen`: Generates TypeScript types using Wrangler

### Docker Scripts

- `pnpm run dockerbuild`: Builds the Docker image for development
- `pnpm run dockerbuild:prod`: Builds the Docker image for production
- `pnpm run dockerrun`: Runs the Docker container
- `docker compose --profile development up`: Runs with Docker Compose (development)

### Electron Scripts

- `pnpm electron:build:mac`: Builds for macOS
- `pnpm electron:build:win`: Builds for Windows
- `pnpm electron:build:linux`: Builds for Linux
- `pnpm electron:build:dist`: Builds for all platforms (Mac, Windows, Linux)
- `pnpm electron:build:unpack`: Creates an unpacked build for testing

### Deployment Scripts

- `pnpm run deploy`: Builds and deploys to Cloudflare Pages
- `npm run dockerbuild`: Alternative Docker build command

### Utility Scripts

- `pnpm run clean`: Cleans build artifacts
- `pnpm run prepare`: Sets up Husky for git hooks
---
## Getting Help & Resources
### Help Icon in Sidebar
bolt.diy includes a convenient help icon (?) in the sidebar that provides quick access to comprehensive documentation. Simply click the help icon to open the full documentation in a new tab.
The documentation includes:
- **Complete setup guides** for all supported providers
- **Feature explanations** for advanced capabilities
- **Troubleshooting guides** for common issues
- **Best practices** for optimal usage
- **FAQ section** with detailed answers
### Community Support
- **GitHub Issues**: Report bugs and request features
- **Community Forum**: Join discussions at [thinktank.ottomator.ai](https://thinktank.ottomator.ai)
- **Contributing Guide**: Learn how to contribute to the project
## Tips and Tricks

Here are some tips to get the most out of bolt.diy:

- **Scaffold the basics first, then add features**: Make sure the basic structure of your application is in place before diving into more advanced functionality. This helps Bolt understand the foundation of your project and ensures everything is wired up correctly before building out advanced features.
- **Batch simple instructions**: Save time by combining simple instructions into one message. For example, you can ask Bolt to change the color scheme, add mobile responsiveness, and restart the dev server, all in one go, saving you time and reducing API credit consumption significantly.
- **Access documentation quickly**: Use the help icon (?) in the sidebar for instant access to guides, troubleshooting, and best practices.