@brccabral
Last active March 12, 2025 07:08
Ollama OpenWebUI Raspberry Pi


  • Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
  • Install the models
ollama run deepseek-r1:1.5b
ollama stop deepseek-r1:1.5b
ollama run llama3.2:1b
ollama stop llama3.2:1b
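Once a model is pulled, the server can also be exercised over its HTTP API; a minimal sketch, assuming the default port 11434 and the llama3.2:1b model from above:

```shell
# Build a request body for Ollama's /api/generate endpoint
BODY='{"model": "llama3.2:1b", "prompt": "Why is the sky blue?", "stream": false}'

# Sanity-check that the payload is well-formed JSON before sending it
echo "$BODY" | python3 -m json.tool >/dev/null && echo "payload ok"

# With the ollama service running, send the request (uncomment to use):
# curl -s http://localhost:11434/api/generate -d "$BODY"
```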
  • Edit the Ollama service to point to an env file
sudo systemctl edit ollama.service
[Service]
EnvironmentFile=/path/to/envs/file
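For reference, `systemctl edit` stores this snippet as a drop-in file, typically at /etc/systemd/system/ollama.service.d/override.conf; the resulting file should look like:

```ini
[Service]
EnvironmentFile=/path/to/envs/file
```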
  • Create /path/to/envs/file
nano /path/to/envs/file
  • Set env variables
OLLAMA_HOST=0.0.0.0:11434
OLLAMA_MODELS=/path/to/models
  • If you change the env variables, remember to export them in your shell before calling ollama run manually (the service itself reads them from the env file)
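For a manual session, exporting the same variables from the env file above would look like this (the models path is a placeholder, as in the env file):

```shell
# Mirror the env file in the current shell so the CLI sees the same settings
export OLLAMA_HOST=0.0.0.0:11434
export OLLAMA_MODELS=/path/to/models

# ollama run llama3.2:1b   # now picks up the exported variables
```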
  • Restart services
sudo systemctl daemon-reload
sudo systemctl restart ollama
  • Set up Open WebUI as the web interface for the AI chat. It runs as a Docker image.
mkdir ~/OpenWebUI
cd ~/OpenWebUI
nano compose.yml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    volumes:
      - ./data:/app/backend/data
    ports:
      - 3030:8080
    extra_hosts:
      - host.docker.internal:host-gateway
    restart: unless-stopped
docker compose up -d
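Open WebUI finds the Ollama server through its OLLAMA_BASE_URL environment variable. With the extra_hosts entry above, a fragment like this (added under the open-webui service in compose.yml) points it at the host's Ollama explicitly:

```yaml
    environment:
      - OLLAMA_BASE_URL=http://host.docker.internal:11434
```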
  • Using Podman (note: the :cuda image below targets machines with an NVIDIA GPU, not the Raspberry Pi itself)
podman network create open-webui

podman run -d \
    -p 3030:8080 \
    --gpus all \
    -v open-webui:/app/backend/data \
    --name open-webui \
    --restart always \
    --network open-webui \
    ghcr.io/open-webui/open-webui:cuda
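On a Raspberry Pi (no NVIDIA GPU), a CPU-only variant would drop --gpus and use the :main image. Sketched here as a small helper script; the /tmp path and script name are placeholders:

```shell
# Write a helper script with a CPU-only Podman invocation (placeholder path)
cat > /tmp/run-open-webui.sh <<'EOF'
#!/bin/sh
podman run -d \
    -p 3030:8080 \
    -v open-webui:/app/backend/data \
    --name open-webui \
    --restart always \
    --network open-webui \
    ghcr.io/open-webui/open-webui:main
EOF
chmod +x /tmp/run-open-webui.sh
echo "script written"
```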
  • Allow ports in firewall
sudo ufw allow 11434 # ollama
sudo ufw allow 3030  # open-webui docker
  • Useful commands
ollama list
ollama ps
ollama show llama3.2:1b