
How to Refresh Models in LM Studio (Fix "No model loaded")

A practical guide to refresh models in LM Studio, rescan your model list, and fix the "No model loaded" error in apps like RewriteBar.


Why this matters

You already downloaded a model in LM Studio, but your app still says “No model loaded” or shows an empty model list.

That is frustrating, and it blocks your workflow right away.

This guide gives you a direct path to fix it in a few minutes.


Quick answer: how to refresh models in LM Studio

If your app does not show newly downloaded models, run this checklist in order:

  1. Open LM Studio and confirm the model is downloaded.
  2. Start the local server in LM Studio (default: http://localhost:1234).
  3. In your app, verify base URL points to http://localhost:1234/v1.
  4. Click your app's Refresh Models button.
  5. Re-select the model in your app settings.
  6. Send a small test prompt.

If it still fails, go through the troubleshooting steps below.


What “refresh models” actually does

Most apps that integrate with LM Studio use the OpenAI-compatible endpoint:

  • GET /v1/models

When you click Refresh Models, the app usually calls that endpoint and rebuilds its dropdown from the response.
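As a sketch, the parsing half of that refresh can look like this. The response shape is the standard OpenAI-compatible model list; the helper name is illustrative, not any app's actual code:

```python
import json

def parse_model_ids(body: str) -> list[str]:
    """Extract model IDs from a GET /v1/models response body.

    An OpenAI-compatible response looks like:
    {"object": "list", "data": [{"id": "...", ...}, ...]}
    """
    payload = json.loads(body)
    return [entry["id"] for entry in payload.get("data", [])]

# Example response body in the OpenAI-compatible format:
sample = '{"object": "list", "data": [{"id": "llama-3.2-1b-instruct", "object": "model"}]}'
print(parse_model_ids(sample))  # ['llama-3.2-1b-instruct']
```

If this list comes back empty, the dropdown is rebuilt empty, which is exactly the symptom you see in the app.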

That means model refresh problems are often one of these:

  • LM Studio server is not running
  • app is using the wrong base URL
  • model list changed but app cache was not refreshed
  • model is downloaded but not selected for generation
  • JIT model loading is off, so only loaded-in-memory models appear in /v1/models

Step-by-step fix for “No model loaded”

1. Confirm LM Studio server is running

In LM Studio, open the local server panel and start the server.

Default address:

  • http://localhost:1234

If you changed the port, use your custom one in every connected app.

2. Check the base URL in your app

For OpenAI-compatible integrations, use:

  • http://localhost:1234/v1

A common mismatch is using /api/v1 in an app that expects OpenAI-style endpoints.
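A small helper can catch that mismatch before any request is sent. This is a sketch based only on the two URL rules above; it is not part of LM Studio or any particular app:

```python
def normalize_base_url(url: str) -> str:
    """Coerce a user-entered base URL into the OpenAI-style .../v1 form.

    Repairs the two common mistakes described above:
      http://localhost:1234         -> http://localhost:1234/v1
      http://localhost:1234/api/v1  -> http://localhost:1234/v1
    """
    url = url.rstrip("/")
    if url.endswith("/api/v1"):
        # Native-API base configured where an OpenAI-style app expects /v1
        url = url[: -len("/api/v1")]
    if not url.endswith("/v1"):
        url += "/v1"
    return url

print(normalize_base_url("http://localhost:1234"))  # http://localhost:1234/v1
```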

3. Verify models are visible from the API

Run in Terminal:

curl http://localhost:1234/v1/models

If you get a valid model list, the issue is app-side configuration.

If the response is empty or the request fails, the issue is inside LM Studio setup.
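That server-side vs. app-side decision can be scripted. This sketch probes the server with Python's standard library (the default URL and timeout are assumptions; `diagnose` holds the decision logic from the two sentences above):

```python
import json
import urllib.error
import urllib.request

def probe_models(base: str = "http://localhost:1234") -> tuple[bool, list[str]]:
    """Return (server_reachable, model_ids) for an LM Studio server."""
    try:
        with urllib.request.urlopen(base + "/v1/models", timeout=3) as resp:
            data = json.load(resp)
        return True, [m["id"] for m in data.get("data", [])]
    except (urllib.error.URLError, OSError):
        return False, []

def diagnose(reachable: bool, model_ids: list[str]) -> str:
    """Map the probe result onto the two failure buckets above."""
    if not reachable:
        return "server-side: LM Studio server is not running, or wrong port"
    if not model_ids:
        return "server-side: no models visible (check downloads and JIT loading)"
    return "app-side: server is fine, check the base URL and refresh in the app"

if __name__ == "__main__":
    print(diagnose(*probe_models()))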

4. Load or auto-load a model

Depending on your LM Studio setup, a request can fail if no model is ready for generation. Pick a model in LM Studio, then test again.

If your app can only see some models, check LM Studio Server Settings:

  • Just in Time Model Loading ON: /v1/models can include all downloaded models
  • Just in Time Model Loading OFF: /v1/models lists only models already loaded into memory
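The difference between the two settings can be illustrated with a toy model. This is purely an illustration of the rule above, not LM Studio's implementation:

```python
def visible_models(downloaded: set[str], loaded: set[str], jit: bool) -> set[str]:
    """Toy model of which IDs /v1/models reports under each JIT setting."""
    # JIT ON: any downloaded model can be listed, since it loads on demand.
    # JIT OFF: only models already loaded into memory appear.
    return set(downloaded) if jit else set(loaded)

downloaded = {"llama-3.2-1b", "qwen2.5-7b"}
loaded = {"llama-3.2-1b"}
print(visible_models(downloaded, loaded, jit=True))   # both models
print(visible_models(downloaded, loaded, jit=False))  # only the loaded one
```

This is why an app can "see" a model one day and not the next: the downloaded set did not change, but the loaded set did.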

5. Refresh inside your app

Back in your app (for example RewriteBar):

  • click Refresh Models
  • choose the model again
  • save provider settings
  • run a short rewrite test

6. Restart only what is needed

If the model list still looks stale:

  • stop and start LM Studio server
  • refresh model list in app
  • restart the app only if the list still does not change

“Refresh models” vs “rescan models”

You might see different labels in different apps:

  • refresh models
  • refresh model list
  • rescan models

They usually mean the same thing:

  • fetch an up-to-date list from GET /v1/models
  • rebind app state to the current model IDs

So if one button does not work, try the equivalent action in the same settings screen.


Advanced check (LM Studio native API)

LM Studio now has native REST endpoints too. If you need richer model state details, check:

  • GET /api/v1/models

This can help debug loaded instances and model metadata when OpenAI-compatible integrations feel opaque.


RewriteBar users: fastest path

If you are connecting RewriteBar to LM Studio:

  1. Open RewriteBar provider settings.
  2. Set URL to http://localhost:1234/v1.
  3. Click Refresh Models.
  4. Select your preferred local model.
  5. Click Verify and Enable.

If the model dropdown is empty, test curl http://localhost:1234/v1/models first. That tells you right away if the root issue is server-side or app-side.


FAQ

Why do I see “No model loaded” even after download?

A downloaded model is not always active for generation. Start the local server, select a model, then refresh models in your app.

Does LM Studio “refresh models” read local files directly?

Most client apps do not read your model folder directly. They call the LM Studio API endpoints and trust the response.

Which endpoint should I test first?

Start with:

  • GET /v1/models

If that works, your app can usually populate the model list.

My app shows old model names. What should I do?

Refresh the model list in-app, then reselect the model and save settings. If stale names remain, restart LM Studio server and refresh again.


Final takeaway

Most refresh issues come down to three checks:

  1. LM Studio server is running.
  2. Your app uses http://localhost:1234/v1.
  3. Your app refreshes the list after model changes.

If those three are correct, model selection usually works right away.

If you want a local-first writing workflow after setup, you can download RewriteBar and run your text rewrites with LM Studio models in your daily apps.