Ollama

Use Ollama to run AI models locally on your Mac and connect them to RewriteBar.

Set up Ollama with RewriteBar

  1. Install Ollama from ollama.com/download.
  2. Open Terminal and confirm the installation:
     ollama --version
  3. Download a model (example):
     ollama run llama3.2
  4. Confirm the model is installed:
     ollama list
  5. In RewriteBar, open Preferences → AI Provider → Ollama and enter:
    • Base URL: http://localhost:11434/v1
    • Default Model: a model from ollama list (for example llama3.2:latest)
  6. Click Verify and Enable. To test the same endpoint from Terminal first, see the check below.
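The check below exercises the same OpenAI-compatible endpoint that RewriteBar connects to. This is a minimal sketch, assuming Ollama's default port 11434 and the example model pulled above; substitute your own model name from ollama list.

# List available models through the OpenAI-compatible endpoint:
curl http://localhost:11434/v1/models

# Send a test chat completion, the same kind of request RewriteBar makes:
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.2",
    "messages": [{"role": "user", "content": "Rewrite this sentence more formally: hey, what is up?"}]
  }'

If both commands return JSON rather than a connection error, RewriteBar's Verify step should succeed with the same Base URL.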

Model Selection Guide

Start with a smaller model, then move up only if you need better quality.

Model          Best For                        Size     Notes
llama3.2       Everyday rewriting              ~2GB     Best starting point for most users
llama3.1:8b    Higher-quality rewrites         ~4.7GB   Better output, more RAM needed
mistral:7b     Balanced quality and speed      ~4.1GB   Good general-purpose alternative
codellama:7b   Technical and code-heavy text   ~3.8GB   Useful for developer-focused writing

How to choose

  • Prioritize speed: pick llama3.2.
  • Prioritize quality: try llama3.1:8b.
  • Working on technical docs: test codellama:7b.
  • On limited RAM: stick to smaller models first.

You can switch models anytime in RewriteBar settings without changing your workflow.
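For example, to try the higher-quality model from the table above without interrupting your current setup, pull it first and switch to it in RewriteBar once it appears in ollama list (llama3.1:8b is just the example from the table; any installed model works):

# Download the model without starting an interactive session:
ollama pull llama3.1:8b

# Confirm it now appears alongside your existing models:
ollama list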

Troubleshooting

  • Connection refused: Ollama is not running. Start Ollama and retry.
  • Model not found: run ollama list and select a model that is installed.
  • Slow responses: choose a smaller model or close memory-heavy apps.
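If you are not sure which of these applies, a quick check from Terminal usually narrows it down. A minimal sketch, assuming the default port 11434 (ollama ps requires a recent Ollama release):

# 1. Is the server reachable? "Connection refused" here means Ollama is not running.
curl http://localhost:11434/api/tags

# 2. If it is not running, start it (or launch the Ollama app):
ollama serve

# 3. See what is installed, and what is loaded into memory right now:
ollama list
ollama ps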