How to Refresh Models in LM Studio (Fix "No model loaded")
A practical guide to refresh models in LM Studio, rescan your model list, and fix the "No model loaded" error in apps like RewriteBar.

Why this matters
You already downloaded a model in LM Studio, but your app still says "No model loaded" or shows an empty model list.
That is frustrating, and it blocks your workflow right away.
This guide gives you a direct path to fix it in a few minutes.
Quick answer: how to refresh models in LM Studio
If your app does not show newly downloaded models, run this checklist in order:
- Open LM Studio and confirm the model is downloaded.
- Start the local server in LM Studio (default: http://localhost:1234).
- In your app, verify the base URL points to http://localhost:1234/v1.
- Click your app's Refresh Models button.
- Re-select the model in your app settings.
- Send a small test prompt.
If it still fails, go through the troubleshooting steps below.
What “refresh models” actually does
Most apps that integrate with LM Studio use the OpenAI-compatible endpoint:
GET /v1/models
When you click Refresh Models, the app usually calls that endpoint and rebuilds its dropdown from the response.
That means model refresh problems are often one of these:
- LM Studio server is not running
- app is using the wrong base URL
- model list changed but app cache was not refreshed
- model is downloaded but not selected for generation
- JIT model loading is off, so only models already loaded into memory appear in /v1/models
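The refresh flow above can be sketched in code. This is a minimal illustration, not how any particular app is implemented: it parses a sample GET /v1/models response in the OpenAI-compatible shape and extracts the model IDs an app would use to rebuild its dropdown. The model names in the sample are placeholders, not a real server reply.

```python
import json

def model_ids(models_response: str) -> list[str]:
    """Extract model IDs from an OpenAI-style GET /v1/models response body."""
    payload = json.loads(models_response)
    return [entry["id"] for entry in payload.get("data", [])]

# Placeholder response in the OpenAI-compatible list shape.
sample = '''
{
  "object": "list",
  "data": [
    {"id": "llama-3.2-3b-instruct", "object": "model"},
    {"id": "qwen2.5-7b-instruct", "object": "model"}
  ]
}
'''

# An app's "Refresh Models" button effectively re-runs this and
# repopulates its model picker from the result.
print(model_ids(sample))
```

If this list is empty, nothing the app does with caching or re-selection will help; the fix has to happen on the LM Studio side.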
Step-by-step fix for “No model loaded”
1. Confirm LM Studio server is running
In LM Studio, open the local server panel and start the server.
Default address:
http://localhost:1234
If you changed the port, use your custom one in every connected app.
2. Check the base URL in your app
For OpenAI-compatible integrations, use:
http://localhost:1234/v1
A common mismatch is using /api/v1 in an app that expects OpenAI-style endpoints.
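To make the mismatch concrete, here is a small hypothetical helper (not part of LM Studio or any app) that flags the two most common base-URL mistakes described above; it assumes the default /v1 prefix convention.

```python
def check_base_url(base_url: str) -> str:
    """Flag common base-URL mistakes for OpenAI-compatible clients.

    Heuristic only: assumes the app expects an OpenAI-style /v1 prefix.
    """
    url = base_url.rstrip("/")
    if url.endswith("/api/v1"):
        # LM Studio's native REST prefix, not the OpenAI-compatible one.
        return "uses /api/v1; OpenAI-style apps expect /v1"
    if not url.endswith("/v1"):
        return "missing /v1 suffix; try " + url + "/v1"
    return "looks OK"

print(check_base_url("http://localhost:1234/api/v1"))
print(check_base_url("http://localhost:1234"))
print(check_base_url("http://localhost:1234/v1"))
```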
3. Verify models are visible from the API
Run in Terminal:
curl http://localhost:1234/v1/models
If you get a valid model list, the issue is app-side configuration.
If the response is empty or the request fails, the issue is inside LM Studio setup.
4. Load or auto-load a model
Depending on your LM Studio setup, a request can fail if no model is ready for generation. Pick a model in LM Studio, then test again.
If your app can only see some models, check LM Studio Server Settings:
- Just in Time Model Loading ON: /v1/models can include all downloaded models
- Just in Time Model Loading OFF: /v1/models lists only models already loaded into memory
5. Refresh inside your app
Back in your app (for example RewriteBar):
- click Refresh Models
- choose the model again
- save provider settings
- run a short rewrite test
6. Restart only what is needed
If the model list still looks stale:
- stop and start LM Studio server
- refresh model list in app
- restart the app only if the list still does not change
“Refresh models” vs “rescan models”
You might see different labels in different apps:
- refresh models
- refresh model list
- rescan models
They usually mean the same thing:
- fetch an up-to-date list from GET /v1/models
- rebind app state to the current model IDs
So if one button does not work, try the equivalent action in the same settings screen.
Advanced check (LM Studio native API)
LM Studio now has native REST endpoints too. If you need richer model state details, check:
GET /api/v1/models
This can help debug loaded instances and model metadata when OpenAI-compatible integrations feel opaque.
RewriteBar users: fastest path
If you are connecting RewriteBar to LM Studio:
- Open RewriteBar provider settings.
- Set the base URL to http://localhost:1234/v1.
- Click Refresh Models.
- Select your preferred local model.
- Click Verify and Enable.
If the model dropdown is empty, run curl http://localhost:1234/v1/models in Terminal first. That tells you right away whether the root issue is server-side or app-side.
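The server-side vs app-side decision can be written down as a small triage function. This is a hypothetical helper for illustration only: it takes whether the curl request succeeded plus the response body, and classifies the likely culprit the same way the manual check does.

```python
import json

def diagnose(curl_succeeded: bool, body: str) -> str:
    """Classify a GET /v1/models check as a server-side or app-side issue."""
    if not curl_succeeded:
        return "server-side: LM Studio server is not reachable"
    try:
        data = json.loads(body).get("data", [])
    except json.JSONDecodeError:
        return "server-side: unexpected response from LM Studio"
    if not data:
        return "server-side: no models visible (check downloads and JIT loading)"
    return "app-side: models are visible; re-check base URL and refresh in the app"

print(diagnose(False, ""))
print(diagnose(True, '{"object": "list", "data": [{"id": "some-model"}]}'))
```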
FAQ
Why do I see “No model loaded” even after download?
A downloaded model is not always active for generation. Start the local server, select a model, then refresh models in your app.
Does LM Studio “refresh models” read local files directly?
Most client apps do not read your model folder. They call LM Studio API endpoints and trust that response.
Which endpoint should I test first?
Start with:
GET /v1/models
If that works, your app can usually populate the model list.
My app shows old model names. What should I do?
Refresh the model list in-app, then reselect the model and save settings. If stale names remain, restart LM Studio server and refresh again.
Final takeaway
Most refresh issues come down to three checks:
- LM Studio server is running.
- Your app uses http://localhost:1234/v1.
- Your app refreshes the list after model changes.
If those three are correct, model selection usually works right away.
If you want a local-first writing workflow after setup, you can download RewriteBar and run your text rewrites with LM Studio models in your daily apps.