How to Use Local AI Models with Shakespeare
Want complete control over your AI-powered website building? Local AI models offer enhanced privacy, faster responses, and dramatically reduced costs in your development workflow. With Shakespeare's support for local models including GPT-OSS, DeepSeek-R1, Gemma 3, and other models running on your machine, you can build websites entirely offline while keeping your projects and data completely private.
Why Choose Local AI Models?
🔒 Complete Privacy
- No data ever leaves your machine
- Perfect for sensitive business projects
- Full control over your intellectual property
- No concerns about terms of service changes
💰 Zero Ongoing Costs
- Pay once for hardware, use forever
- No per-request charges
- No monthly subscriptions
- Unlimited usage without budget worries
🎛️ Full Customization
- Fine-tune models for your specific needs
- Complete control over model behavior
- No rate limits or usage restrictions
- Experiment freely without costs
Getting Started: Installing Ollama
System Requirements
- Minimum: 8GB RAM, modern CPU
- Recommended: 16GB+ RAM, GPU with 8GB+ VRAM
- Optimal: 32GB+ RAM, high-end GPU
Installation Steps
🍎 macOS Installation
# Install via Homebrew
brew install ollama
# Or download the macOS app directly from ollama.ai
# (the curl install script is intended for Linux)
🐧 Linux Installation
# Install via curl
curl -fsSL https://ollama.ai/install.sh | sh
# Or use your distribution's package manager where one exists
sudo pacman -S ollama # Arch Linux
# Note: there is no official apt package; on Ubuntu/Debian use the install script above
🪟 Windows Installation
1. Download the installer from ollama.ai
2. Run the installer as administrator
3. Restart your system after installation
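On any platform, a quick way to confirm the installation succeeded is to ask the CLI for its version:
# Should print something like "ollama version is 0.x.x"
ollama --version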
Choosing the Right Model
For Website Building: Top Local Model Recommendations
GPT-OSS - OpenAI's Open-Weight Powerhouse
ollama pull gpt-oss
Best for:
Powerful reasoning, agentic tasks, versatile developer use cases
Trade-offs:
✓ Excellent at complex web development
⚠ Requires decent hardware
DeepSeek-R1 - Enterprise-Grade Reasoning
ollama pull deepseek-r1
Best for:
Open reasoning with performance approaching OpenAI o3 and Gemini 2.5 Pro
Trade-offs:
✓ Leading-edge reasoning capabilities
⚠ Slower inference due to reasoning depth
Gemma 3 - Google's Single-GPU Solution
ollama pull gemma3
Best for:
Google's most capable model that runs on a single GPU
Trade-offs:
✓ Excellent performance on consumer hardware
⚠ May need more guidance for complex tasks
Quick Model Comparison
| Model | Size | RAM Required | Speed | Code Quality | Content Quality |
|---|---|---|---|---|---|
| GPT-OSS | 8GB | 16GB | Fast | Excellent | Excellent |
| DeepSeek-R1 | 7GB | 16GB | Medium | Excellent | Very Good |
| Gemma 3 | 4GB | 8GB | Very Fast | Very Good | Good |
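Sizes above are approximate and depend on the quantization you pull; after downloading, ollama list shows what each model actually occupies on disk:
# Pull a model, then list everything installed locally with sizes
ollama pull gemma3
ollama list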
Configuring CORS for Browser Access
Important: CORS Configuration Required
Since Shakespeare runs in your browser, Ollama must be configured to accept cross-origin requests (CORS); without this, the browser blocks Shakespeare's requests to your local Ollama instance.
Checking CORS Status
First, verify if CORS is already enabled:
curl -X OPTIONS http://localhost:11434 -H "Origin: http://example.com" -H "Access-Control-Request-Method: GET" -I
If you see HTTP/1.1 403 Forbidden, CORS is not enabled and needs configuration.
Enabling CORS by Platform
🍎 Enabling CORS on macOS
# Allow all origins (easiest for local development)
launchctl setenv OLLAMA_ORIGINS "*"
# Or list specific origins for tighter security (include the scheme)
launchctl setenv OLLAMA_ORIGINS "http://localhost:3000,http://localhost:5173,https://shakespeare.app"
# Optional: Make Ollama accessible on your network
launchctl setenv OLLAMA_HOST "0.0.0.0"
# Restart Ollama for changes to take effect
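# One way to restart, if you installed via Homebrew:
brew services restart ollama
# If you run the desktop app instead, quit it from the menu bar and relaunch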
🐧 Enabling CORS on Linux
Edit the Ollama service configuration:
sudo systemctl edit ollama.service
Add these environment variables:
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_ORIGINS=*"
Then reload systemd and restart the service:
sudo systemctl daemon-reload
sudo systemctl restart ollama
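To confirm systemd picked up the override, you can inspect the unit's effective environment:
# Should list OLLAMA_HOST and OLLAMA_ORIGINS among the Environment= values
systemctl show ollama --property=Environment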
🪟 Enabling CORS on Windows
1. Open System Properties → Environment Variables
2. Add new system variables (or set them from the terminal, as shown below):
   - OLLAMA_ORIGINS with value * (or specific origins)
   - OLLAMA_HOST with value 0.0.0.0 (optional, for network access)
3. Restart Ollama from the system tray
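If you'd rather skip the GUI, the same variables can be set from a Command Prompt with setx; only processes started after the change will see the new values:
:: Sets user-level variables; add /M in an elevated prompt for system-wide scope
setx OLLAMA_ORIGINS "*"
setx OLLAMA_HOST "0.0.0.0"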
Verifying CORS Configuration
After configuration, test again:
curl -X OPTIONS http://localhost:11434 -H "Origin: http://example.com" -H "Access-Control-Request-Method: GET" -I
Success looks like:
HTTP/1.1 204 No Content
Access-Control-Allow-Origin: *
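As an end-to-end check, you can send a real generation request with an Origin header set. A minimal sketch, assuming you've already pulled gpt-oss:
curl http://localhost:11434/api/generate \
  -H "Origin: http://example.com" \
  -d '{"model": "gpt-oss", "prompt": "Say hello", "stream": false}'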
Configuring Local Models in Shakespeare
Start Ollama Service
# Start Ollama server (with CORS already configured)
ollama serve
# The service will run on http://localhost:11434
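If you're unsure whether the server is actually listening, Ollama's version endpoint makes a cheap health check:
curl http://localhost:11434/api/version
# Expected: a small JSON payload like {"version":"0.x.x"}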
Test Your Model
ollama run gpt-oss "Write a simple HTML page with a header"
Configure in Shakespeare
1. Open Shakespeare and go to Settings > AI Settings
2. Scroll to "Add Custom Provider"
3. Click to expand the custom provider section
4. Enter the following configuration:
   - Provider Name: "Ollama Local"
   - API Endpoint: http://localhost:11434
   - Model Name: gpt-oss (or your chosen model; see the check below)
   - API Type: "Ollama"
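The Model Name you enter must match a model you've actually pulled. Ollama's tags endpoint lists everything available locally, which makes a quick double-check:
curl http://localhost:11434/api/tags
# Returns a JSON "models" array; match the "name" field (e.g. "gpt-oss:latest")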
Test the Connection
1. Click "Test Connection" to verify setup
2. If successful, click "Add" to save the provider
3. Select your local model from the configured providers list
The Bottom Line
Local models give you the complete privacy, zero ongoing costs, and full customization covered above, with no rate limits between you and your next project. Ready to supercharge your development workflow? Start with GPT-OSS and experience the benefits of local AI models in Shakespeare.