# Ollama
Use Ollama to run AI models locally on your Mac and connect them to RewriteBar.
## Set up Ollama with RewriteBar
- Install Ollama from ollama.com/download.
- Open Terminal and confirm the installation:

  ```shell
  ollama --version
  ```

- Download a model (example):

  ```shell
  ollama run llama3.2
  ```

- Confirm the model exists:

  ```shell
  ollama list
  ```
- In RewriteBar, open Preferences → AI Provider → Ollama and use:
  - Base URL: `http://localhost:11434/v1`
  - Default Model: a model from `ollama list` (for example `llama3.2:latest`)
- Click Verify and Enable.
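The Base URL above points at Ollama's OpenAI-compatible API. Before clicking Verify, you can sanity-check it from Terminal; this is a minimal sketch assuming the default port 11434 (the `/v1/models` route belongs to Ollama's OpenAI compatibility layer):

```shell
#!/bin/sh
# Probe the endpoint RewriteBar will call. BASE_URL matches the value
# entered in Preferences; change it if you use a non-default port.
BASE_URL="${BASE_URL:-http://localhost:11434/v1}"

if curl -sf --max-time 2 "$BASE_URL/models" >/dev/null 2>&1; then
  STATUS="reachable"
else
  STATUS="unreachable"   # Ollama not running, or listening elsewhere
fi
echo "Ollama endpoint $STATUS at $BASE_URL"
```

If the endpoint is unreachable, start the Ollama app (or run `ollama serve`) and try again.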
## Model Selection Guide
Start with a smaller model, then move up only if you need better quality.
| Model | Best For | Size | Notes |
|---|---|---|---|
| `llama3.2` | Everyday rewriting | ~2GB | Best starting point for most users |
| `llama3.1:8b` | Higher-quality rewrites | ~4.7GB | Better output, more RAM needed |
| `mistral:7b` | Balanced quality and speed | ~4.1GB | Good general-purpose alternative |
| `codellama:7b` | Technical and code-heavy text | ~3.8GB | Useful for developer-focused writing |
### How to choose
- Prioritize speed: pick `llama3.2`.
- Prioritize quality: try `llama3.1:8b`.
- Working on technical docs: test `codellama:7b`.
- On limited RAM: stick to smaller models first.
You can switch models anytime in RewriteBar settings without changing your workflow.
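The decision rules above can be sketched as a small helper. The RAM thresholds here are rough assumptions for illustration, not official requirements:

```shell
#!/bin/sh
# Hypothetical picker: suggest a model from the table above based on
# available RAM in GB. Thresholds are assumptions, not vendor guidance.
suggest_model() {
  ram_gb="$1"
  if [ "$ram_gb" -ge 16 ]; then
    echo "llama3.1:8b"   # higher quality, needs the most RAM
  elif [ "$ram_gb" -ge 8 ]; then
    echo "mistral:7b"    # balanced quality and speed
  else
    echo "llama3.2"      # smallest, best starting point
  fi
}

suggest_model 8   # → mistral:7b
suggest_model 4   # → llama3.2
```

Whichever it suggests, you can always pull a different model later with `ollama run <name>` and update the Default Model field.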
## Troubleshooting
- Connection refused: Ollama is not running. Start Ollama and retry.
- Model not found: run `ollama list` and select a model that is installed.
- Slow responses: choose a smaller model or close memory-heavy apps.
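For the "model not found" case, a quick Terminal check confirms whether the name in RewriteBar's settings matches an installed model. This sketch assumes the `ollama` CLI is on your PATH; the default model name is just an example:

```shell
#!/bin/sh
# Check that a model name appears in `ollama list` before using it in
# RewriteBar. Pass the model name as the first argument.
MODEL="${1:-llama3.2:latest}"

if ! command -v ollama >/dev/null 2>&1; then
  MSG="ollama CLI not found; install it first"
elif ollama list | awk 'NR > 1 {print $1}' | grep -qx "$MODEL"; then
  MSG="$MODEL is installed"
else
  MSG="$MODEL is not installed; run: ollama pull $MODEL"
fi
echo "$MSG"
```

If the model is missing, `ollama pull <name>` downloads it without opening an interactive session.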