Docker Compose
In this series (8 parts)
Running one container is simple. Running four that talk to each other, share volumes, and start in the right order is not. Docker Compose solves this. You describe your stack in one YAML file and bring it up with a single command.
Compose file structure
A compose.yml has three top-level keys that matter: services, networks, and volumes. Services define your containers. Networks control communication. Volumes handle persistence. Compose creates a default bridge network automatically, so containers reach each other by service name. If web needs Postgres, it connects to postgres:5432. No IP addresses, no manual linking.
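As a minimal sketch, a compose.yml with two hypothetical services (web and postgres) looks like this; the service names double as DNS hostnames on the default network:

```yaml
services:
  web:
    build: .            # build from the Dockerfile in this directory
    ports:
      - "8000:8000"     # host:container
  postgres:
    image: postgres:16-alpine

# web reaches the database at postgres:5432 by service name.
# No networks or volumes are declared, so Compose uses its defaults.
```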
Service dependencies
Your web server cannot accept requests until Postgres is ready. A worker cannot process jobs until Redis is available.
graph LR
    web["web (app server)"] --> postgres["postgres (database)"]
    web --> redis["redis (cache)"]
    worker["worker (background)"] --> postgres
    worker --> redis
Service dependency graph. Both web and worker depend on postgres and redis.
The depends_on key alone only waits for the container to start, not for the service inside to be ready. A Postgres container can be “running” for seconds before it accepts connections. Use healthchecks with condition: service_healthy for actual readiness.
Full compose.yml
This stack runs a Python web app, a background worker, PostgreSQL, and Redis.
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8000:8000"
    environment:
      - DATABASE_URL=postgresql://app:secret@postgres:5432/myapp
      - REDIS_URL=redis://redis:6379/0
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    networks:
      - backend

  worker:
    build:
      context: .
      dockerfile: Dockerfile
    command: celery -A tasks worker --loglevel=info
    environment:
      - DATABASE_URL=postgresql://app:secret@postgres:5432/myapp
      - REDIS_URL=redis://redis:6379/0
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    networks:
      - backend

  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: myapp
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app -d myapp"]
      interval: 5s
      timeout: 3s
      retries: 5
    networks:
      - backend

  redis:
    image: redis:7-alpine
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5
    networks:
      - backend

networks:
  backend:
    driver: bridge

volumes:
  pgdata:
Postgres data lives in the named volume pgdata. Destroying the containers with docker compose down preserves this volume. Only docker compose down -v removes it.
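You can verify this yourself by listing volumes after tearing the stack down. Compose prefixes the volume name with the project name, which defaults to the directory name (myapp_pgdata below is an assumed project name):

```shell
docker compose down          # containers and networks are gone...
docker volume ls             # ...but the named volume survives
# DRIVER    VOLUME NAME
# local     myapp_pgdata

docker compose down -v       # only this removes pgdata too
```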
Environment variable handling
Hardcoding secrets in the compose file works for local dev but not production. Three approaches exist.
Inline values using the environment key, as shown above. Fine for non-sensitive configuration.
The .env file sits next to your compose.yml. Compose loads it automatically.
# .env
POSTGRES_PASSWORD=secret
APP_PORT=8000
Reference these in your compose file with variable substitution:
services:
  postgres:
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}

  web:
    ports:
      - "${APP_PORT}:8000"
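Compose interpolation also supports defaults and required-variable errors, with the same syntax as POSIX shell expansion: ${APP_PORT:-8000} falls back to 8000 when the variable is unset or empty, and ${POSTGRES_PASSWORD:?msg} aborts with an error when it is missing. A quick sketch in plain shell, since the expansion rules match:

```shell
# ${VAR:-default}: use the default when VAR is unset or empty
unset APP_PORT
echo "${APP_PORT:-8000}"     # prints 8000

APP_PORT=9000
echo "${APP_PORT:-8000}"     # prints 9000
```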
The env_file key loads variables from a named file directly into the container.
services:
  web:
    env_file:
      - ./web.env
.env is for Compose-level interpolation (filling in ${VAR} in YAML). env_file passes variables straight to the container.
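For example, a hypothetical web.env might hold the app's runtime settings. Every variable in it lands in the container's environment, but none of them are available for ${VAR} substitution in the YAML itself:

```shell
# web.env (assumed contents)
DATABASE_URL=postgresql://app:secret@postgres:5432/myapp
LOG_LEVEL=info
```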
Override files
Compose automatically merges compose.yml with compose.override.yml if it exists. Keep production config in the base file and layer dev tweaks on top.
# compose.override.yml
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - ./src:/app/src
    ports:
      - "8000:8000"
      - "5678:5678"
    environment:
      - DEBUG=true
      - FLASK_ENV=development
    command: python -m debugpy --listen 0.0.0.0:5678 -m flask run --host=0.0.0.0 --port=8000 --reload

  worker:
    volumes:
      - ./src:/app/src
    environment:
      - DEBUG=true
    command: watchmedo auto-restart --directory=./src --pattern=*.py -- celery -A tasks worker --loglevel=debug
In development, docker compose up merges both files automatically. You get bind mounts for live reloading, a debug port, and verbose logging. In CI or production, use docker compose -f compose.yml up to skip the override.
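If you are ever unsure what the merged result looks like, docker compose config prints the fully resolved configuration, with overrides applied and variables interpolated:

```shell
# show the effective config after merging and interpolation
docker compose config

# the same, but ignoring the override file (as CI/production would)
docker compose -f compose.yml config
```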
Profiles
Not every service should run every time. Monitoring tools and debug utilities are useful during development but unnecessary in production. Profiles group optional services.
services:
  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    profiles:
      - monitoring
    networks:
      - backend

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    profiles:
      - monitoring
    networks:
      - backend
Services without a profiles key always start. Services with profiles only start when you activate their profile.
# start everything including monitoring
docker compose --profile monitoring up
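The COMPOSE_PROFILES environment variable does the same thing as the flag, which is handy in a .env file or in CI:

```shell
# equivalent to --profile monitoring
COMPOSE_PROFILES=monitoring docker compose up -d

# several profiles, comma-separated
COMPOSE_PROFILES=monitoring,debug docker compose up -d
```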
Essential commands
These are the commands you use daily.
# start all services (detached)
docker compose up -d
# stop and remove containers, networks
docker compose down
# remove everything including volumes
docker compose down -v
# view running services
docker compose ps
# tail logs for a specific service
docker compose logs -f web
# run a command inside a running container
docker compose exec postgres psql -U app -d myapp
# rebuild images before starting
docker compose up -d --build
# scale a service
docker compose up -d --scale worker=3
Use exec for database access, migrations, and debugging. It attaches to a running container. Use run instead for a fresh, temporary container.
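As a sketch, assuming the project used Alembic for migrations (not shown in this stack), run spins up a one-off container that --rm removes when the command exits:

```shell
# one-off container, removed afterwards (hypothetical migration tool)
docker compose run --rm web alembic upgrade head

# compare: exec runs the same command inside the already-running web container
docker compose exec web alembic upgrade head
```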
What comes next
You now have a complete multi-service stack defined in code. The next step is taking this to production: building optimized images, pushing them to a registry, and deploying across multiple hosts. Single-host Compose works for development, testing, and small deployments. For anything larger, you need container orchestration that handles scheduling, scaling, and failover across a cluster.