@n0ctu
Created April 2, 2025 21:00
WIP: Ollama + Open WebUI Setup
---
services:
  ollama-server:
    image: ollama/ollama:latest
    container_name: ollama-server
    ports:
      - "11434:11434"
    volumes:
      - ./ollama-data:/root/.ollama
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "ollama --version || exit 1"]
      interval: 5s
      timeout: 2s
      retries: 5
    command: serve
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ['all']
              capabilities: ['compute', 'graphics', 'utility']
        limits:
          memory: 16g
          cpus: "16"
    environment:
      - OLLAMA_CUDA_DEVICES=all
    security_opt:
      - no-new-privileges
    cap_drop:
      - ALL
    cap_add:
      - SYS_NICE
    read_only: true
    networks:
      - network_ollama

  ollama-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: ollama-webui
    restart: unless-stopped
    depends_on:
      - ollama-server
    environment:
      - OLLAMA_BASE_URL=http://ollama-server:11434
      #- WEBUI_AUTH=False # No login required if uncommented
    volumes:
      - ./open-webui-data:/app/backend/data
    ports:
      - "8080:8080"
    networks:
      - network_ollama

networks:
  network_ollama:
    driver: bridge
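
A minimal usage sketch, assuming the file above is saved as docker-compose.yml in the current directory and the NVIDIA Container Toolkit is already installed (the model name llama3.2 is just an example):

# Start both containers in the background
docker compose up -d

# The Ollama API should answer on the mapped port (prints "Ollama is running")
curl http://localhost:11434

# Pull a model into ./ollama-data via the running container
docker exec -it ollama-server ollama pull llama3.2

Open WebUI is then reachable at http://localhost:8080 and talks to Ollama over the internal network_ollama bridge.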
n0ctu commented Apr 2, 2025

This was tested in WSL (Ubuntu 24.04) with Docker (version 28.0.4, build b8034c0) and the NVIDIA Container Toolkit. Refer to https://hub.docker.com/r/ollama/ollama for installation instructions for the toolkit.
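
If GPU passthrough is in doubt, a quick sanity check (assuming the toolkit and an NVIDIA driver are installed on the host) is to run nvidia-smi in a throwaway container, and again inside the running server:

# Should list the host GPU(s); the toolkit injects nvidia-smi into the container
docker run --rm --gpus all ubuntu nvidia-smi

# Same check inside the ollama-server container defined above
docker exec -it ollama-server nvidia-smi

If either command fails, Ollama will fall back to CPU inference.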
