Docker networking
Every container runs in its own network namespace with its own IP address, routing table, and interfaces. Docker networking controls how containers discover each other, communicate, and expose services to the outside world.
Network drivers
Docker ships with several network drivers. Each provides different isolation and connectivity guarantees.
docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
a1b2c3d4e5f6   bridge    bridge    local
f6e5d4c3b2a1   host      host      local
0f9e8d7c6b5a   none      null      local
These three exist by default. You cannot remove them.
Bridge (default)
When you run a container without specifying a network, Docker attaches it to the default bridge network. The daemon creates a virtual bridge called docker0 on the host. Each container gets a virtual Ethernet pair: one end inside the container (eth0), the other end attached to docker0 as a veth interface.
graph TB
subgraph Host
docker0[docker0 bridge<br/>172.17.0.1]
veth1[veth1a2b3c]
veth2[veth4d5e6f]
docker0 --- veth1
docker0 --- veth2
eth_host[eth0 Host NIC]
docker0 -.->|NAT via iptables| eth_host
end
subgraph Container_A[Container A]
eth0a[eth0<br/>172.17.0.2]
end
subgraph Container_B[Container B]
eth0b[eth0<br/>172.17.0.3]
end
veth1 --- eth0a
veth2 --- eth0b
Containers on the default bridge network. Each container has a veth pair connecting its eth0 to docker0. Outbound traffic is NATed through the host.
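You can observe these pieces directly on a Linux host. A quick sketch (interface names and addresses will differ on your machine):

```shell
# The bridge itself, holding the gateway address (172.17.0.1 by default)
ip addr show docker0

# One veth* interface appears per running container, attached to docker0
ip link show type veth

# Inside a container, the other end of the pair shows up as eth0
docker run --rm alpine ip addr show eth0
```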
Containers on the default bridge can reach each other by IP but not by name.
docker run -d --name web nginx
docker run -it --rm alpine ping 172.17.0.2 # works
docker run -it --rm alpine ping web # fails
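Rather than hard-coding 172.17.0.2, look the address up. This inspect template is stable for containers on the default bridge:

```shell
# Read the container's IP from its network settings
WEB_IP=$(docker inspect -f '{{.NetworkSettings.IPAddress}}' web)

# Ping it by IP from a second container on the same default bridge
docker run --rm alpine ping -c 1 "$WEB_IP"
```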
User-defined bridge networks
User-defined bridge networks fix the DNS problem and provide better isolation. Containers on different user-defined networks cannot communicate unless explicitly connected to both.
# Create a network
docker network create my-app
# Run containers on it
docker run -d --name api --network my-app nginx
docker run -d --name db --network my-app postgres:16
Now api can reach db by name:
docker exec api ping db
# PING db (172.18.0.3): 56 data bytes
# 64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.098 ms
Docker achieves this with an embedded DNS server at 127.0.0.11 inside every container on a user-defined network. When a container performs a DNS lookup, the request hits this server first. If the name matches another container on the same network, Docker resolves it directly. Otherwise it forwards the query to the host’s DNS servers.
# Check the DNS config inside a container
docker exec api cat /etc/resolv.conf
# nameserver 127.0.0.11
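Both resolution paths are easy to observe. A sketch using getent, which is available in glibc-based images like the default nginx image (example.com stands in for any external name):

```shell
# Same-network container name: answered by the embedded server at 127.0.0.11
docker exec api getent hosts db

# External name: the embedded server forwards to the host's resolvers
docker exec api getent hosts example.com
```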
Host network
The host driver removes network isolation entirely. The container shares the host’s network namespace.
docker run -d --network host nginx
Nginx now listens on port 80 of the host directly. No port mapping, no bridge, no NAT. This is useful for performance-sensitive workloads where virtual networking adds measurable latency. The trade-offs: two containers cannot both bind to the same host port, and the driver only works as described on Linux, where the container and the host genuinely share a network namespace.
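To confirm no NAT is involved, check the host's listening sockets while the container runs (a sketch; ss may need sudo to show process names):

```shell
# nginx appears as a listener on the host itself; no DNAT rule is created
sudo ss -tlnp | grep ':80 '

# Requests to the host port hit the container process directly
curl -sI http://localhost/ | head -n 1
```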
None network
The none driver gives a container only a loopback interface. No external connectivity.
docker run -d --network none alpine sleep 3600
Useful for containers that process data from mounted volumes and should never make network calls.
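A quick check that the isolation is real (the ping is expected to fail):

```shell
# Only the loopback interface exists
docker run --rm --network none alpine ip addr

# Outbound traffic has nowhere to go
docker run --rm --network none alpine ping -c 1 -W 1 1.1.1.1 \
  || echo "unreachable, as expected"
```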
Overlay network
Overlay networks span multiple Docker hosts. They use VXLAN tunneling to encapsulate container traffic inside UDP packets that travel between nodes.
# Initialize swarm mode (required for overlay)
docker swarm init
# Create an overlay network
docker network create --driver overlay --attachable backend
# Deploy services on it
docker service create --name api --network backend nginx
docker service create --name cache --network backend redis
The --attachable flag lets standalone containers join the overlay. Without it, only swarm services can connect. Containers on an overlay get DNS resolution across hosts.
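You can verify the driver and exercise the --attachable behavior with a throwaway container (assumes the swarm and services above are running):

```shell
# Driver and scope of the overlay
docker network inspect backend -f '{{.Driver}} {{.Scope}}'

# A standalone container joins the overlay and resolves a service by name
docker run --rm --network backend alpine nslookup api
```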
Network management
# Inspect subnet, gateway, and connected containers
docker network inspect my-app
# Connect a running container to another network
docker network connect my-app existing-container
# Disconnect it
docker network disconnect my-app existing-container
A container can belong to multiple networks simultaneously, bridging communication between otherwise isolated segments.
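A minimal multi-homed sketch (hypothetical network and container names):

```shell
docker network create net-a
docker network create net-b

# Start on net-a, then attach to net-b as well
docker run -d --name dual --network net-a alpine sleep 3600
docker network connect net-b dual

# One address per attached network, each on its own interface
docker exec dual ip -4 addr show
```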
Network aliases
Aliases let multiple containers respond to the same DNS name.
docker run -d --name search-1 --network my-app --network-alias search elasticsearch:8
docker run -d --name search-2 --network my-app --network-alias search elasticsearch:8
Any container on my-app that resolves search gets back both IPs. Docker’s embedded DNS rotates the order of the answers, so clients that take the first result spread load across the containers in rough round-robin fashion.
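You can watch the alias resolve to multiple addresses (assumes two containers carrying the alias are running on my-app):

```shell
# BusyBox nslookup prints every A record the embedded DNS returns
docker run --rm --network my-app alpine nslookup search
```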
Exposing vs publishing ports
These two concepts are frequently confused.
EXPOSE (documentation only)
EXPOSE in a Dockerfile is metadata. It documents which ports the application listens on.
FROM nginx:alpine
EXPOSE 80
EXPOSE 443
EXPOSE does not open any ports. It is a signal to anyone reading the Dockerfile or running docker inspect that the application expects traffic on those ports.
Publishing ports with -p
Publishing maps a host port to a container port, making the container reachable from outside.
# Map host port 8080 to container port 80
docker run -d -p 8080:80 nginx
Traffic to host:8080 is forwarded to the container’s port 80 via iptables NAT rules.
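The rule is visible in the nat table's DOCKER chain (Linux host, root required; the container address will differ on your machine):

```shell
# The DNAT rule that rewrites host:8080 to the container's port 80
sudo iptables -t nat -L DOCKER -n
# Look for a line like: DNAT tcp ... dpt:8080 to:172.17.0.2:80
```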
Port mapping variations
# Explicit host:container mapping
docker run -d -p 8080:80 nginx
# Bind to a specific host IP
docker run -d -p 127.0.0.1:8080:80 nginx
# Random host port assigned by Docker
docker run -d -p 80 nginx
# Check what port was assigned
docker port $(docker ps -q -l) 80
# 0.0.0.0:32768
# Publish all EXPOSE'd ports with random host ports
docker run -d -P nginx
The 127.0.0.1:8080:80 form is important for security. It binds only to localhost, making the port unreachable from other machines on the network.
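The difference shows up in the listening sockets. A sketch running both variants side by side (hypothetical container names, port 8081 chosen to avoid a conflict):

```shell
# Default: bound on all interfaces
docker run -d --name open -p 8080:80 nginx
sudo ss -tln | grep 8080      # shows 0.0.0.0:8080

# Loopback-only: unreachable from other machines
docker run -d --name local-only -p 127.0.0.1:8081:80 nginx
sudo ss -tln | grep 8081      # shows 127.0.0.1:8081
```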
Practical example
Putting it together with a frontend, API, and database:
docker network create frontend
docker network create backend
# Database: only on backend
docker run -d --name postgres --network backend \
-e POSTGRES_PASSWORD=secret postgres:16
# API: connected to both networks
docker run -d --name api --network backend \
-e DATABASE_URL=postgresql://postgres:secret@postgres:5432/app my-api:latest
docker network connect frontend api
# Web server: on frontend, published to host
docker run -d --name web --network frontend -p 80:3000 my-frontend:latest
Here, web reaches api by name over frontend. api reaches postgres over backend. But web cannot reach postgres directly because they share no network.
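A throwaway container on frontend confirms the isolation (the second lookup is expected to fail):

```shell
# api is attached to frontend, so its name resolves there
docker run --rm --network frontend alpine nslookup api

# postgres is only on backend; from frontend the name does not exist
docker run --rm --network frontend alpine nslookup postgres \
  || echo "not resolvable from frontend"
```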
Troubleshooting
# Check which networks a container is on
docker inspect --format='{{json .NetworkSettings.Networks}}' api | python3 -m json.tool
# Test DNS resolution inside a container
docker exec api nslookup postgres
# Check published port mappings
docker port web
# List all containers on a network
docker network inspect frontend --format='{{range .Containers}}{{.Name}} {{end}}'
What comes next
You now understand how Docker isolates and connects containers. Next up is persistent data. Containers are ephemeral, so anything written to the container filesystem disappears on removal. Docker volumes and bind mounts provide storage that outlives individual containers.