@BretFisher
Last active April 11, 2025 05:27
Use Open WebUI with Docker Model Runner and Compose

How to use this Compose file to run Open WebUI against a local LLM served by Docker Model Runner

  1. Enable Docker Model Runner (Docker Desktop v4.40 or newer) in Settings, or run:
    • docker desktop enable model-runner --no-tcp
  2. Download some models from https://hub.docker.com/u/ai
    • docker model pull ai/qwen2.5:0.5B-F16
    • docker model pull ai/smollm2:latest
    • Only download models you have enough VRAM to run :)
  3. Run the compose.yaml here to start Open WebUI on port 3000
    • You can run my published compose file directly (without saving the YAML locally) with docker compose -f oci://bretfisher/openwebui up
  4. Create an admin user and log in at http://localhost:3000
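Once the stack is up, a quick sanity check can confirm something is actually listening on port 3000 before you open the browser. This sketch uses bash's built-in /dev/tcp redirection so it needs no extra tools; the host and port match the compose file below, but adjust them if you remapped the port.

```shell
# Sanity check: is Open WebUI answering on port 3000?
# Uses bash's /dev/tcp, so no curl or nc required.
HOST=localhost
PORT=3000
if (exec 3<>"/dev/tcp/$HOST/$PORT") 2>/dev/null; then
  exec 3>&- 3<&-   # close the probe connection
  echo "Open WebUI is reachable at http://$HOST:$PORT"
else
  echo "Nothing listening on $HOST:$PORT (is the compose stack running?)"
fi
```

Either message names the host and port, so the script is safe to run at any point during setup.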
```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    environment:
      - OPENAI_API_BASE_URL=http://model-runner.docker.internal:80/engines/llama.cpp/v1
      - OPENAI_API_KEY=na
    volumes:
      - open-webui:/app/backend/data
    restart: always

volumes:
  open-webui:
```
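The OPENAI_API_BASE_URL above points Open WebUI at Model Runner's OpenAI-compatible API. For reference, here is roughly the JSON body Open WebUI sends to that endpoint's /chat/completions route for a chat turn; the model name is assumed from the pull step above, and a container on the same network could POST this body to the same URL.

```shell
# Sketch of an OpenAI-style chat-completions request body, as sent
# to $OPENAI_API_BASE_URL/chat/completions (model from step 2).
cat <<'EOF'
{
  "model": "ai/smollm2:latest",
  "messages": [{"role": "user", "content": "Hello!"}]
}
EOF
```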