A docker-compose.yml file defines an entire application stack in one place: every container, how they connect, and where they store data. You start everything with a single command instead of a chain of docker run flags you have to remember and type correctly every time.
Here’s how it’s structured.
The Basic Structure
A Compose file has three top-level sections. Only services is required.
```yaml
services:   # your containers
networks:   # how they communicate
volumes:    # where they store data
```
Each entry under services defines one container. The name you give it (ollama, db, web) becomes both the container's identifier and its hostname on the internal network. That second part matters more than it might seem.
Defining a Service
Here’s a minimal service definition:
```yaml
services:
  ollama:
    image: ollama/ollama
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama
    restart: unless-stopped
```
image tells Docker which image to pull. ports maps your machine's port to the container's port, using the same HOST:CONTAINER pattern as docker run -p. volumes mounts persistent storage so data survives the container being stopped or removed. restart: unless-stopped restarts the container after a crash or reboot unless you explicitly stopped it.
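For comparison, starting the same service by hand would mean carrying all of those settings as docker run flags. A sketch of the equivalent commands (assuming the Compose definition above):

```shell
# Roughly what the Compose service replaces, typed by hand:
docker volume create ollama_data
docker run -d \
  --name ollama \
  -p 11434:11434 \
  -v ollama_data:/root/.ollama \
  --restart unless-stopped \
  ollama/ollama
```

With one service this is merely tedious; with several services plus a shared network, the Compose file is the only sane option.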
How Services Find Each Other
This is the part that makes Compose genuinely useful. When you run docker compose up, Docker creates a shared network for all services in the file. Every service is reachable from every other service using its service name as the hostname.
```yaml
services:
  ollama:
    image: ollama/ollama
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama
```
open-webui connects to Ollama at http://ollama:11434. Not localhost, not an IP address, just the service name. Docker resolves it automatically. This is service discovery, and it's what makes multi-container stacks manageable.
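You can check the resolution yourself once the stack is running, by making a request from inside one container to the other by service name (this assumes the image ships a shell and wget, which may not hold for every image):

```shell
# From inside the open-webui container, the hostname "ollama"
# resolves to the ollama service on the shared Compose network.
docker compose exec open-webui sh -c 'wget -qO- http://ollama:11434'
# If Ollama is up, this prints its status message.
```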
depends_on controls startup order: Open WebUI waits for the ollama container to start before it launches. Note that by default it only waits for the container to start, not for the service inside it to be ready to accept connections.
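If a dependency needs to be actually ready, not just started, Compose lets you pair a healthcheck with a condition on depends_on. A sketch (the test command, and curl being available inside the image, are assumptions):

```yaml
services:
  ollama:
    image: ollama/ollama
    healthcheck:
      # Assumption: a successful response from the API root means ready.
      test: ["CMD-SHELL", "curl -f http://localhost:11434 || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 5
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    depends_on:
      ollama:
        condition: service_healthy
```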
Volumes at the Top Level
Named volumes referenced inside services need to be declared at the top level of the file:
```yaml
services:
  ollama:
    volumes:
      - ollama_data:/root/.ollama

volumes:
  ollama_data:
```
The top-level declaration tells Docker to manage that volume. Without it, the file won’t run. Docker stores the data outside the container so it persists across restarts, updates, and even container removal.
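You can see the volumes Docker is managing, and where the data actually lives on the host, with the volume subcommands (the mountpoint path varies by platform):

```shell
docker volume ls                   # list all named volumes
docker volume inspect ollama_data  # show the mountpoint on the host
```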
A Complete Working Example
Here’s a two-service stack, Ollama and Open WebUI, as a complete, runnable Compose file:
```yaml
services:
  ollama:
    image: ollama/ollama
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama
    restart: unless-stopped

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    ports:
      - "3000:8080"
    volumes:
      - open-webui_data:/app/backend/data
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama
    restart: unless-stopped

volumes:
  ollama_data:
  open-webui_data:
```
Save this as docker-compose.yml and run:
```shell
docker compose up -d
```
Both containers start, connect, and persist their data. docker compose down stops and removes them. The volumes stay intact until you explicitly remove them.
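A few everyday commands for managing the running stack (all standard docker compose subcommands):

```shell
docker compose ps        # list the stack's containers and their state
docker compose logs -f   # follow logs from all services
docker compose pull      # fetch newer versions of the images
docker compose up -d     # recreate any containers whose images changed
docker compose down -v   # stop everything AND delete the named volumes
```

Note the -v flag on down: that is the explicit step that removes the volumes along with the containers.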
Reading Any Compose File
When you encounter an unfamiliar Compose file, read it in this order: check the service names to understand what’s in the stack, look at the image lines to see what’s running, trace the environment variables to see how services reference each other, and check the volumes section to understand what data persists.
That pattern covers most of what you need to understand any Compose file you come across.
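When skimming an unfamiliar file, docker compose config is a useful companion: it validates the YAML and prints the fully resolved configuration with variables expanded:

```shell
docker compose config             # validate and print the resolved file
docker compose config --services  # just the service names
```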
The Takeaway
A docker-compose.yml file replaces a chain of docker run commands with a single readable definition. Service names double as hostnames, which is how containers find each other without hardcoded IPs. Volumes declared at the top level persist data outside the container lifecycle. Once you can read and write a Compose file, managing a multi-container AI homelab stack becomes a matter of one command in, one command out.
