Docker fundamentals
You know that containers use namespaces and cgroups to isolate processes. Now you need to actually run them. Docker wraps those kernel primitives behind a clean CLI that handles image management, container lifecycle, networking, and storage.
Images vs containers
An image is a read-only template containing a filesystem snapshot: your application code, runtime, libraries, and configuration. Think of it as a class definition. It describes what should exist but does not run anything.
A container is a running instance of an image. Docker creates a thin writable layer on top of the image’s read-only layers. Writes live only in that layer. Remove the container, the layer is gone. One image can spawn hundreds of containers, each with its own writable layer, process namespace, and network stack. They share the underlying image layers, saving disk space.
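To see the class/instance relationship in practice, you can start two containers from the same image, a sketch assuming a local Docker daemon and illustrative names web-a and web-b:

```shell
# Two containers, one image: each gets its own name, writable layer, and network stack
docker run -d --name web-a nginx
docker run -d --name web-b nginx

# Both rows show the same image in the IMAGE column
docker ps --filter name=web-

# A file written in web-a's writable layer does not appear in web-b
docker exec web-a touch /tmp/only-in-a
docker exec web-b ls /tmp
```

Removing both containers discards both writable layers; the shared nginx image layers stay on disk untouched.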
Docker daemon architecture
Docker uses a client-server model. Three components work together.
graph LR
    CLI["Docker CLI<br/>(client)"] -->|REST API| Daemon["Docker Daemon<br/>(dockerd)"]
    Daemon --> Images["Images"]
    Daemon --> Containers["Containers"]
    Daemon --> Volumes["Volumes"]
    Daemon --> Networks["Networks"]
    Daemon -->|pull / push| Registry["Container Registry<br/>(Docker Hub, ECR, GCR)"]
The Docker CLI sends commands to the daemon over a REST API. The daemon manages all Docker objects locally and communicates with remote registries for image distribution.
The Docker CLI translates your commands into REST API calls to the daemon. The Docker daemon (dockerd) runs as a background process, managing images, containers, networks, and volumes. A container registry (Docker Hub, AWS ECR, Google GCR) stores and distributes images.
When you run docker pull nginx, the CLI tells the daemon to fetch the image from the registry. When you run docker run nginx, the daemon creates a container from that local image.
The docker run command
docker run is the command you will use most. It creates a container and starts it in a single step.
docker run nginx
That starts an Nginx container in the foreground. Your terminal is attached to its output. Not very useful for a web server. Here are the flags that matter.
Common flags
# Detached mode with a custom name
docker run -d --name web-server nginx
# Map host port 8080 to container port 80
docker run -d -p 8080:80 --name web-server nginx
# Mount a host directory into the container
docker run -d -p 8080:80 -v /home/user/html:/usr/share/nginx/html --name web-server nginx
# Set environment variables for configuration
docker run -d \
--name app-db \
-p 5432:5432 \
-e POSTGRES_USER=admin \
-e POSTGRES_PASSWORD=secret123 \
-e POSTGRES_DB=myapp \
postgres:16
# Custom network and auto-remove on stop
docker run -d \
--name redis-cache \
--network app-network \
--rm \
-p 6379:6379 \
redis:7-alpine
Flag breakdown:
-d runs the container in the background (detached).
--name assigns a readable name instead of Docker’s random ones like eager_tesla.
-p 8080:80 maps host port 8080 to container port 80.
-v host:container mounts a host directory into the container. Edits on the host appear inside immediately.
-e or --env sets environment variables. Most database images use these for initial setup.
--network attaches the container to a Docker network. Containers on the same network resolve each other by name.
--rm removes the container when it stops. Good for throwaway dev containers.
Container lifecycle
Containers move through a predictable set of states.
stateDiagram-v2
    [*] --> Created: docker create
    Created --> Running: docker start
    Running --> Paused: docker pause
    Paused --> Running: docker unpause
    Running --> Stopped: docker stop
    Stopped --> Running: docker start
    Stopped --> Removed: docker rm
    Running --> Removed: docker rm -f
    Removed --> [*]
A container moves from created to running to stopped. You can pause and unpause a running container. Removal is final.
Most of the time you use docker run, which combines create and start. But understanding the full lifecycle helps when debugging.
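The lifecycle above can be walked through step by step with individual commands, a sketch assuming a local daemon and an illustrative container name lifecycle-demo:

```shell
docker create --name lifecycle-demo nginx    # state: Created
docker start lifecycle-demo                  # state: Running
docker pause lifecycle-demo                  # state: Paused
docker unpause lifecycle-demo                # state: Running again

# Check the current state at any point
docker inspect --format '{{.State.Status}}' lifecycle-demo

docker stop lifecycle-demo                   # state: Exited (stopped)
docker rm lifecycle-demo                     # removed; inspect now fails
```

Separating create from start is occasionally useful, for example to copy files into a container with docker cp before its process ever runs.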
Inspecting containers
Listing containers
docker ps # running containers
docker ps -a # all containers, including stopped
docker ps -q # only IDs (useful for scripting)
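The -q output composes well with other commands via command substitution. A few hypothetical cleanup one-liners, assuming at least one container exists:

```shell
# Stop every running container
docker stop $(docker ps -q)

# Remove all containers, running or not
docker rm -f $(docker ps -aq)

# Count running containers
docker ps -q | wc -l
```

Note that $(docker ps -q) expands to nothing when no containers are running, which makes docker stop complain; that is harmless in interactive use but worth guarding against in scripts.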
Reading logs
docker logs web-server # all logs
docker logs -f web-server # follow in real time
docker logs --tail 50 -f web-server # last 50 lines, then follow
docker logs -t web-server # include timestamps
Combine -f with --tail to skip history and stream only recent entries.
Executing commands inside a running container
docker exec -it web-server /bin/bash # interactive shell
docker exec web-server cat /etc/nginx/nginx.conf # single command
docker exec web-server ps aux # check processes
docker exec -it is your primary debugging tool. -i keeps stdin open, -t allocates a pseudo-TTY. Together they give you an interactive shell inside the container without stopping anything.
Stopping and removing containers
# Stop a running container (SIGTERM, then SIGKILL after 10s)
docker stop web-server
# Force stop immediately (SIGKILL)
docker kill web-server
# Remove a stopped container
docker rm web-server
# Force remove a running container
docker rm -f web-server
# Remove all stopped containers
docker container prune
Prefer stop over kill so your application gets a chance to close connections and flush buffers before it is forced down.
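The 10-second grace period before SIGKILL is configurable. If your application needs longer to drain connections, raise it with -t (sketch, reusing the web-server name from above):

```shell
# Give the app 30 seconds to exit cleanly before SIGKILL
docker stop -t 30 web-server
```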
Image lifecycle
Images go through their own lifecycle: pull, build, tag, push, and remove.
Pulling images
docker pull nginx
docker pull nginx:1.25-alpine
docker pull registry.example.com/myapp:v2.1.0
Always pin image versions in production. Omitting the tag defaults to latest, which is mutable: the image it points to changes as new versions are published, so builds stop being reproducible.
Building images
docker build -t myapp:1.0 .
docker build -t myapp:1.0 -f Dockerfile.prod .
-t tags the image. The . sets the build context directory sent to the daemon.
Tagging and pushing
docker tag myapp:1.0 registry.example.com/myapp:1.0
docker push registry.example.com/myapp:1.0
Tagging creates another reference to the same image layers. Pushing uploads only layers the registry lacks.
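You can verify that a tag is just another pointer: both names report the same image ID. A sketch, assuming the myapp:1.0 image built above exists locally:

```shell
docker tag myapp:1.0 registry.example.com/myapp:1.0

# Both repository:tag entries show the same IMAGE ID
docker images --format '{{.ID}}  {{.Repository}}:{{.Tag}}' | grep myapp
```

Because the tag adds no layers, it costs nothing in disk space, and docker rmi on one name leaves the other intact.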
Removing images
docker rmi nginx:1.25-alpine
docker image prune
docker system prune -a
docker system prune -a removes every image not referenced by an existing container, along with stopped containers, unused networks, and the build cache. Use it on development machines, not production hosts.
Practical example
Here is a session tying everything together.
# Create a custom network
docker network create app-net
# Start a PostgreSQL database
docker run -d \
--name db \
--network app-net \
-e POSTGRES_USER=admin \
-e POSTGRES_PASSWORD=secret \
-e POSTGRES_DB=myapp \
-v pgdata:/var/lib/postgresql/data \
-p 5432:5432 \
postgres:16
# Verify the database is running
docker ps
docker logs --tail 20 db
# Connect to the database container and run a query
docker exec -it db psql -U admin -d myapp -c "SELECT version();"
# Stop and clean up
docker stop db
docker rm db
docker network rm app-net
docker volume rm pgdata
Every command maps to a concept from this article. The network lets containers find each other by name. The volume persists data across restarts. The exec command opens psql for debugging.
What comes next
You can run containers, inspect them, debug them, and clean them up. The next step is building your own images. Dockerfiles define how your application gets packaged, from choosing a base image to copying source code to setting the startup command.
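As a preview, here is a minimal Dockerfile sketch for a static site served by Nginx; the ./html path is illustrative:

```dockerfile
# Base image: pick a pinned version, not latest
FROM nginx:1.25-alpine

# Copy static files into the directory Nginx serves by default
COPY ./html /usr/share/nginx/html

# The base image already declares port 80 and the startup command
```

docker build -t mysite:1.0 . would turn this into an image you can run with every command from this article.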