AI Providers
RewriteBar supports multiple AI providers, so you can pick the setup that fits your workflow.
Choose the Right Model
For guidance on speed vs. quality trade-offs, local model considerations, and per-command overrides, see the Choose the Right Model guide.
Provider Directory
Use this table to find:
- whether a provider is cloud-hosted or local
- quick links such as API console and provider website
- whether provider-specific RewriteBar docs are available
| Provider | Type | Links | Provider Docs |
|---|---|---|---|
| OpenAI | Cloud | Guide | |
| Anthropic | Cloud | Guide | |
| Ollama | Local | Guide | |
| Groq | Cloud | Guide | |
| DeepSeek | Cloud | - | |
| OpenRouter | Cloud | Guide | |
| Perplexity | Cloud | See provider docs | Guide |
| GitHub Copilot | Cloud | Guide | |
| Mistral | Cloud | - | |
| XAI | Cloud | - | |
| TogetherAI | Cloud | - | |
| Azure AI Foundry | Cloud | - | |
| LM Studio | Local | - | |
| Apple Intelligence | Local | - | |
| Osaurus | Local | Guide | |
| 302.AI | Cloud | - | |
| Amazon Bedrock | Cloud | - | |
| Baseten | Cloud | - | |
| Cloudflare AI Gateway | Cloud | - | |
| Cortex | Cloud | - | |
| DeepInfra | Cloud | - | |
| Firmware | Cloud | - | |
| Fireworks AI | Cloud | - | |
| Hugging Face | Cloud | - | |
| io.net | Cloud | - | |
| Moonshot AI | Cloud | - | |
| MiniMax | Cloud | - | |
| Nebius Token Factory | Cloud | - | |
| OpenCode Zen | Cloud | - | |
| OVHcloud AI Endpoints | Cloud | - | |
| Scaleway | Cloud | - | |
| Venice AI | Cloud | - | |
| Vercel AI Gateway | Cloud | - | |
| Z.AI | Cloud | - | |
| ZenMux | Cloud | - | |
| Cerebras | Cloud | - | |
Some providers (for example Ollama and Apple Intelligence) can be used without a cloud API key.
Switching Providers
You can easily switch between providers:
- Go to Settings → AI Providers
- Select your preferred provider
- The change takes effect immediately
Troubleshooting
API Key Issues
- Ensure your API key is valid and has sufficient credits
- Check that the API key has the correct permissions
Local AI Issues
- Verify that Ollama/LM Studio is running
- Check the server URL and port
- Ensure the model is loaded and accessible
Performance Issues
- Try a different model
- Check your internet connection (for cloud providers)
- Restart the local AI service (for local providers)