import React from 'react';
import { Button } from '~/components/ui/Button';
import { Card, CardContent, CardHeader } from '~/components/ui/Card';
import {
  Cpu, Server, Settings, ExternalLink, Package, Code, Database, CheckCircle,
  AlertCircle, Activity, Cable, ArrowLeft, Download, Shield, Globe, Terminal,
  Monitor, Wifi,
} from 'lucide-react';

// Setup Guide Component
function SetupGuide({ onBack }: { onBack: () => void }) {
  return (
{/* Header with Back Button */}

Local Provider Setup Guide

Complete setup instructions for running AI models locally

{/* Hardware Requirements Overview */}

System Requirements

Recommended hardware for optimal performance

CPU

8+ cores, modern architecture

RAM

16GB minimum, 32GB+ recommended

GPU

NVIDIA RTX 30xx+ or AMD RX 6000+

{/* Ollama Setup Section */}

Ollama Setup

The most popular choice for running open-source models locally, now with a desktop app

Recommended
{/* Installation Options */}

1. Choose Installation Method

{/* Desktop App - New and Recommended */}
🆕 Desktop App (Recommended)

A new, user-friendly desktop application with built-in model management and a web interface.

Built-in Web Interface

The desktop app includes a web interface at http://localhost:11434
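The same port also serves Ollama's REST API, so the installation can be verified from code. A minimal TypeScript sketch, assuming the default port 11434 and Node 18+ with built-in fetch; the helper name is just for illustration:

// List locally installed models via GET /api/tags.
async function listOllamaModels(): Promise<void> {
  const res = await fetch('http://localhost:11434/api/tags');
  if (!res.ok) throw new Error(`Ollama not reachable: ${res.status}`);
  const data = await res.json();
  // Each entry carries the model name and its size on disk.
  for (const model of data.models) {
    console.log(model.name, model.size);
  }
}

listOllamaModels().catch(console.error);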

{/* CLI Installation */}
Command Line (Advanced)
Windows
winget install Ollama.Ollama
macOS
brew install ollama
Linux
curl -fsSL https://ollama.com/install.sh | sh
{/* Latest Model Recommendations */}

2. Download Latest Models

Code & Development
# Latest coding models
ollama pull llama3.2:3b
ollama pull codellama:13b
ollama pull deepseek-coder-v2
ollama pull qwen2.5-coder:7b
General Purpose & Chat
# Latest general models
ollama pull llama3.2:3b
ollama pull mistral:7b
ollama pull phi3.5:3.8b
ollama pull qwen2.5:7b
Performance Optimized
  • Llama 3.2: 3B - Fastest, 8GB RAM
  • Phi-3.5: 3.8B - Great balance
  • Qwen2.5: 7B - Excellent quality
  • Mistral: 7B - Popular choice
Pro Tips
  • Start with 3B-7B models for best performance
  • Use quantized versions for faster loading
  • Desktop app auto-manages model storage
  • Web UI and REST API available at localhost:11434 (see the sketch below)
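Once a model from the lists above is pulled, it can be queried programmatically through the same local API. A short TypeScript sketch using Ollama's /api/chat endpoint, assuming llama3.2:3b is installed (non-streaming for brevity; the helper name is illustrative):

// One non-streaming chat turn against a locally pulled model.
async function chatWithOllama(prompt: string): Promise<string> {
  const res = await fetch('http://localhost:11434/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'llama3.2:3b',
      messages: [{ role: 'user', content: prompt }],
      stream: false, // one JSON object instead of NDJSON chunks
    }),
  });
  const data = await res.json();
  return data.message.content;
}

chatWithOllama('Write a TypeScript hello world.').then(console.log);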
{/* Desktop App Features */}

3. Desktop App Features

🖥️ User Interface
  • Model library browser
  • One-click model downloads
  • Built-in chat interface
  • System resource monitoring
🔧 Management Tools
  • Automatic updates
  • Model size optimization
  • GPU acceleration detection
  • Cross-platform compatibility
{/* Troubleshooting */}

4. Troubleshooting & Commands

Common Issues
  • Desktop app not starting: Restart system
  • GPU not detected: Update drivers
  • Port 11434 blocked: Change port in settings
  • Models not loading: Check available disk space
  • Slow performance: Use smaller models or enable GPU
Useful Commands
# Check installed models
ollama list
# Remove unused models
ollama rm model_name
# Check GPU usage
ollama ps
# View server logs (Linux, systemd install)
journalctl -u ollama
{/* LM Studio Setup Section */}

LM Studio Setup

User-friendly GUI for running local models with excellent model management

{/* Installation */}

1. Download & Install

Download LM Studio for Windows, macOS, or Linux from the official website.

{/* Configuration */}

2. Configure Local Server

Start Local Server
  1. Download a model from the "My Models" tab
  2. Go to "Local Server" tab
  3. Select your downloaded model
  4. Set port to 1234 (default)
  5. Click "Start Server" (see the verification sketch below)
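To verify the server is actually listening before wiring it into other tools, you can query LM Studio's OpenAI-compatible model listing from code. A minimal TypeScript sketch, assuming the default port 1234:

// List the models LM Studio currently exposes (GET /v1/models).
async function listLmStudioModels(): Promise<void> {
  const res = await fetch('http://localhost:1234/v1/models');
  const body = await res.json();
  // OpenAI-style list response: { data: [{ id: '...', ... }] }
  console.log(body.data.map((m: { id: string }) => m.id));
}

listLmStudioModels().catch(console.error);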
Critical: Enable CORS

To work with Bolt DIY, you MUST enable CORS in LM Studio:

  1. In Server Settings, check "Enable CORS"
  2. Set Network Interface to "0.0.0.0" for external access
  3. Alternatively, use the CLI: lms server start --cors (a test request is sketched below)
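With the server running (and CORS enabled for browser-based callers such as Bolt DIY), requests follow the standard OpenAI chat format. A hedged TypeScript sketch against the default port; the model id below is a placeholder, so substitute the id reported by /v1/models:

// Chat completion against LM Studio's OpenAI-compatible server.
async function chatWithLmStudio(prompt: string): Promise<string> {
  const res = await fetch('http://localhost:1234/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'local-model', // placeholder; use the id from /v1/models
      messages: [{ role: 'user', content: prompt }],
      temperature: 0.7,
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

chatWithLmStudio('Summarize what CORS does in one sentence.').then(console.log);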
{/* Advantages */}
LM Studio Advantages
  • Built-in model downloader with search
  • Easy model switching and management
  • Built-in chat interface for testing
  • GGUF format support (most compatible)
  • Regular updates with new features
{/* LocalAI Setup Section */}

LocalAI Setup

Self-hosted OpenAI-compatible API server with extensive model support

{/* Installation */}

Installation Options

Quick Install
# One-line install
curl https://localai.io/install.sh | sh
Docker (Recommended)
docker run -p 8080:8080 quay.io/go-skynet/local-ai:latest
{/* Configuration */}

Configuration

LocalAI supports many model formats and provides a full OpenAI-compatible API.

# Example configuration
models:
  - name: llama3.1
    backend: llama
    parameters:
      model: llama3.1.gguf
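Because the API is OpenAI-compatible, a request looks the same as for LM Studio, only on port 8080 and with the model name from the configuration above. A TypeScript sketch (the prompt and helper name are illustrative):

// Chat completion against LocalAI's OpenAI-compatible endpoint.
async function chatWithLocalAI(prompt: string): Promise<string> {
  const res = await fetch('http://localhost:8080/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'llama3.1', // matches the "name" field in the YAML above
      messages: [{ role: 'user', content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

chatWithLocalAI('Which model formats do you support?').then(console.log);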
{/* Advantages */}
LocalAI Advantages
  • Full OpenAI API compatibility
  • Supports multiple model formats
  • Docker deployment option
  • Built-in model gallery
  • REST API for model management
{/* Performance Optimization */}

Performance Optimization

Tips to improve local AI performance

Hardware Optimizations

  • Use NVIDIA GPU with CUDA for 5-10x speedup
  • Increase RAM for larger context windows
  • Use SSD storage for faster model loading
  • Close other applications to free up RAM

Software Optimizations

  • Use smaller models for faster responses
  • Enable quantization (4-bit, 8-bit models)
  • Reduce context length for chat applications
  • Use streaming responses for better UX (see the sketch below)
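As a sketch of the last tip: Ollama streams newline-delimited JSON when stream is true, so tokens can be shown as they arrive instead of after the full response. A TypeScript example, assuming Node 18+ and the llama3.2:3b model from earlier (parsing kept deliberately minimal):

// Stream tokens from Ollama's /api/generate endpoint (NDJSON chunks).
async function streamFromOllama(prompt: string): Promise<void> {
  const res = await fetch('http://localhost:11434/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'llama3.2:3b', prompt, stream: true }),
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffered = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffered += decoder.decode(value, { stream: true });
    const lines = buffered.split('\n');
    buffered = lines.pop() ?? ''; // keep any partial line for the next chunk
    for (const line of lines) {
      if (!line.trim()) continue;
      // Each line is one JSON object with a partial "response" field.
      process.stdout.write(JSON.parse(line).response ?? '');
    }
  }
}

streamFromOllama('Explain quantization in two sentences.').catch(console.error);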
{/* Alternative Options */}

Alternative Options

Other local AI solutions and cloud alternatives

Other Local Solutions

Jan.ai

Modern interface with built-in model marketplace

Oobabooga

Advanced text generation web UI with extensions

KoboldAI

Focused on creative writing and storytelling

Cloud Alternatives

OpenRouter

Access to 100+ models through a unified, OpenAI-compatible API (see the sketch below)

Together AI

Fast inference with open-source models

Groq

Ultra-fast LPU inference for Llama models
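Most of these cloud options also expose OpenAI-compatible endpoints, so moving off a local provider is usually just a base URL and API key change. A hedged TypeScript sketch against OpenRouter; the model id is only an example, and OPENROUTER_API_KEY must be set in the environment:

// OpenAI-style chat completion routed through OpenRouter's unified API.
async function chatViaOpenRouter(prompt: string): Promise<string> {
  const res = await fetch('https://openrouter.ai/api/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
    },
    body: JSON.stringify({
      model: 'meta-llama/llama-3.1-8b-instruct', // example model id
      messages: [{ role: 'user', content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

chatViaOpenRouter('Compare local and hosted inference in one paragraph.').then(console.log);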

  );
}

export default SetupGuide;