How to Use Local AI Models with Shakespeare

    Derek Ross

    Want complete control over your AI-powered website building? Local AI models offer enhanced privacy, faster responses, and dramatically reduced costs in your development workflow. With Shakespeare's support for local models such as GPT-OSS, DeepSeek-R1, and Gemma 3 running on your own machine, you can build websites entirely offline while keeping your projects and data completely private.

    Why Choose Local AI Models?

    🔒 Complete Privacy

    • No data ever leaves your machine
    • Perfect for sensitive business projects
    • Full control over your intellectual property
    • No concerns about terms of service changes

    💰 Zero Ongoing Costs

    • Pay once for hardware, use forever
    • No per-request charges
    • No monthly subscriptions
    • Unlimited usage without budget worries

    🎛️ Full Customization

    • Fine-tune models for your specific needs (see the Modelfile sketch below)
    • Complete control over model behavior
    • No rate limits or usage restrictions
    • Experiment freely without costs
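
    For a concrete taste of that customization, Ollama's Modelfile format lets you layer your own system prompt and sampling parameters over any base model. A minimal sketch (the webdev name, prompt, and temperature are illustrative, not part of Shakespeare's setup):

    # Modelfile: wrap a base model with a web-development system prompt
    FROM gpt-oss
    PARAMETER temperature 0.7
    SYSTEM You are a focused assistant that writes clean, semantic HTML and CSS.

    # Build and run the customized model
    ollama create webdev -f Modelfile
    ollama run webdev "Draft a landing page hero section"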

    Getting Started: Installing Ollama

    System Requirements

    • Minimum: 8GB RAM, modern CPU
    • Recommended: 16GB+ RAM, GPU with 8GB+ VRAM
    • Optimal: 32GB+ RAM, high-end GPU
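
    Not sure where your machine falls? On Linux, two quick commands report memory and VRAM (the second assumes an NVIDIA GPU with nvidia-smi installed):

    # Total and available system memory
    free -h

    # GPU model and VRAM
    nvidia-smi --query-gpu=name,memory.total --format=csv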

    Installation Steps

    🍎 macOS Installation

    # Install via Homebrew
    brew install ollama
    
    # Or download the app directly from ollama.ai

    🐧 Linux Installation

    # Install via curl
    curl -fsSL https://ollama.ai/install.sh | sh
    
    # Or use your distro's package manager where packaged
    sudo pacman -S ollama   # Arch Linux

    🪟 Windows Installation

    1. Download the installer from ollama.ai
    2. Run the installer as administrator
    3. Restart your system after installation
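
    Whichever platform you're on, a quick terminal check confirms the install worked:

    ollama --version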

    Choosing the Right Model

    For Website Building: Top Local Model Recommendations

    GPT-OSS - OpenAI's Open-Weight Powerhouse

    ollama pull gpt-oss

    Best for:

    Powerful reasoning, agentic tasks, versatile developer use cases

    Trade-offs:

    ✓ Excellent at complex web development

    ⚠ Requires decent hardware

    DeepSeek-R1 - Enterprise-Grade Reasoning

    ollama pull deepseek-r1

    Best for:

    Open reasoning with performance approaching OpenAI o3 and Gemini 2.5 Pro

    Trade-offs:

    ✓ Leading-edge reasoning capabilities

    ⚠ Slower inference due to reasoning depth

    Gemma 3 - Google's Single-GPU Solution

    ollama pull gemma3

    Best for:

    Google's most capable model that runs on a single GPU

    Trade-offs:

    ✓ Excellent performance on consumer hardware

    ⚠ May need more guidance for complex tasks

    Quick Model Comparison

    Model        Size  RAM Required  Speed      Code Quality  Content Quality
    GPT-OSS      8GB   16GB          Fast       Excellent     Excellent
    DeepSeek-R1  7GB   16GB          Medium     Excellent     Very Good
    Gemma 3      4GB   8GB           Very Fast  Very Good     Good
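
    Once you've pulled a model, ollama list shows everything installed locally along with its size on disk:

    ollama list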

    Configuring CORS for Browser Access

    ⚠️ Important: CORS Configuration Required

    Since Shakespeare runs in your browser, you need to configure Ollama to accept cross-origin requests (CORS). This is a crucial step that allows Shakespeare to communicate with your local Ollama instance.

    Checking CORS Status

    First, verify if CORS is already enabled:

    curl -X OPTIONS http://localhost:11434 -H "Origin: http://example.com" -H "Access-Control-Request-Method: GET" -I

    If you see HTTP/1.1 403 Forbidden, CORS is not enabled and needs configuration.

    Enabling CORS by Platform

    🍎 Enabling CORS on macOS

    # Allow all origins (easiest for local development)
    launchctl setenv OLLAMA_ORIGINS "*"
    
    # Or specify specific origins for better security
    launchctl setenv OLLAMA_ORIGINS "localhost:3000,localhost:5173,shakespeare.app"
    
    # Optional: Make Ollama accessible on your network
    launchctl setenv OLLAMA_HOST "0.0.0.0"
    
    # Restart Ollama for changes to take effect
    # (note: launchctl setenv does not survive a reboot; re-run these after restarting)

    🐧 Enabling CORS on Linux

    Edit the Ollama service configuration:

    sudo systemctl edit ollama.service

    Add these environment variables:

    [Service]
    Environment="OLLAMA_HOST=0.0.0.0"
    Environment="OLLAMA_ORIGINS=*"

    Then restart the service:

    sudo systemctl restart ollama
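
    To confirm the override took effect, print the merged unit definition and look for the Environment lines:

    systemctl cat ollama.service | grep Environment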

    🪟 Enabling CORS on Windows

    1. Open System Properties → Environment Variables
    2. Add new system variables:
       • OLLAMA_ORIGINS with value * (or specific origins)
       • OLLAMA_HOST with value 0.0.0.0 (optional, for network access)
    3. Restart Ollama from the system tray
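
    If you prefer the command line, the same variables can be set from an administrator Command Prompt with setx (the /M flag makes them system-wide):

    setx OLLAMA_ORIGINS "*" /M
    setx OLLAMA_HOST "0.0.0.0" /M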

    Verifying CORS Configuration

    After configuration, test again:

    curl -X OPTIONS http://localhost:11434 -H "Origin: http://example.com" -H "Access-Control-Request-Method: GET" -I

    Success looks like:

    HTTP/1.1 204 No Content
    Access-Control-Allow-Origin: *
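
    As a final end-to-end check, send a real request to Ollama's generate endpoint (substitute whichever model you pulled for gpt-oss):

    curl http://localhost:11434/api/generate \
      -H "Origin: http://example.com" \
      -d '{"model": "gpt-oss", "prompt": "Say hello", "stream": false}'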

    Configuring Local Models in Shakespeare

    Step 1: Start Ollama Service

    # Start Ollama server (with CORS already configured)
    ollama serve
    
    # The service will run on http://localhost:11434
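
    If you want to be sure the server is listening before moving on, its version endpoint answers a plain GET:

    curl http://localhost:11434/api/version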

    Step 2: Test Your Model

    ollama run gpt-oss "Write a simple HTML page with a header"

    Step 3: Configure in Shakespeare

    1. Open Shakespeare and go to Settings > AI Settings
    2. Scroll to "Add Custom Provider"
    3. Click to expand the custom provider section
    4. Enter the following configuration:
       • Provider Name: "Ollama Local"
       • API Endpoint: http://localhost:11434
       • Model Name: gpt-oss (or your chosen model)
       • API Type: "Ollama"

    Step 4: Test the Connection

    1. Click "Test Connection" to verify the setup
    2. If successful, click "Add" to save the provider
    3. Select your local model from the configured providers list

    The Bottom Line

    🚀 Local Models with Shakespeare Provide

    • Enhanced privacy - your code never leaves your machine
    • Faster responses - no network latency
    • Reduced costs - zero API fees after initial setup
    • Complete control - run any compatible model

    Ready to supercharge your development workflow? Start with GPT-OSS and experience the benefits of local AI models in Shakespeare.
