Docker in CI/CD pipelines
Running docker build on your laptop works fine. Running it inside a CI runner, where there may be no Docker daemon or persistent filesystem, is a different problem. This post covers building images in CI, caching layers, pushing to registries, and producing multi-platform images.
Pipeline overview
A typical container CI/CD pipeline flows through these stages:
```mermaid
graph LR
    A[Checkout] --> B[Build Image]
    B --> C[Scan]
    C --> D[Test]
    D --> E[Push to Registry]
    E --> F[Deploy]
```
Each stage gates the next. A failed vulnerability scan blocks the push to any registry.
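As a sketch, this gating falls out naturally from stage ordering in GitLab CI (job names and the image reference below are placeholders, not from a real pipeline):

```yaml
# Minimal .gitlab-ci.yml sketch: a failed job in any stage
# stops the pipeline before later stages run.
stages:
  - build
  - scan
  - test
  - push

scan:
  stage: scan
  script:
    # A non-zero exit here blocks the "test" and "push" stages.
    - trivy image myapp:${CI_COMMIT_SHA}
```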
Building images in CI
Three main options exist for building container images in CI. Each trades off differently on security, speed, and complexity.
Docker-in-Docker (DinD)
DinD runs a full Docker daemon inside your CI container. It works with every Docker feature but requires privileged mode.
```yaml
# GitLab CI example
services:
  - docker:dind

variables:
  DOCKER_HOST: tcp://docker:2376
  # Port 2376 is the TLS port; dind generates certificates here.
  DOCKER_TLS_CERTDIR: "/certs"

build:
  image: docker:latest
  script:
    - docker build -t myapp:${CI_COMMIT_SHA} .
```
Pros: Full Docker compatibility, familiar workflow.
Cons: Requires --privileged mode, which is a security risk. Slower cold starts. Not available on many managed CI runners.
Kaniko
Kaniko builds images in userspace. No Docker daemon, no privileged mode. It parses the Dockerfile and executes each instruction directly in the filesystem.
```yaml
# GitLab CI with Kaniko
build:
  image:
    name: gcr.io/kaniko-project/executor:v1.21.0-debug
    entrypoint: [""]
  script:
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "registry.example.com/myapp:${CI_COMMIT_SHA}"
      --cache=true
```
Pros: No daemon, runs unprivileged, works in any container runtime. Built-in registry caching.
Cons: Does not support every Dockerfile instruction. Debugging build failures is harder without an interactive shell.
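By default Kaniko stores cached layers in a repository derived from the destination. A dedicated cache repository (the name below is a placeholder) keeps cache layers out of your main image repo:

```shell
/kaniko/executor \
  --context "${CI_PROJECT_DIR}" \
  --destination "registry.example.com/myapp:${CI_COMMIT_SHA}" \
  --cache=true \
  --cache-repo "registry.example.com/myapp/cache"
```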
Buildah
Buildah is a daemonless OCI build tool. It produces standard images and can run rootless.
```shell
buildah bud --layers --tag myapp:latest .
buildah push myapp:latest docker://registry.example.com/myapp:latest
```
Pros: Rootless builds, OCI-native, scriptable without a Dockerfile. Integrates well with Podman. Cons: Smaller ecosystem than Docker. Some CI platforms lack first-class Buildah support.
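To illustrate the "scriptable without a Dockerfile" point, here is a sketch of an imperative Buildah build; the base image and package are arbitrary examples:

```shell
# Build an image step by step, no Dockerfile required.
ctr=$(buildah from alpine:3.19)              # start a working container
buildah run "$ctr" -- apk add --no-cache curl
buildah config --entrypoint '["curl"]' "$ctr"
buildah commit "$ctr" myapp:scripted         # commit the container as an image
buildah rm "$ctr"                            # clean up the working container
```

Because each step is a shell command, you can branch, loop, or inject logic between instructions in ways a Dockerfile cannot express.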
Layer caching in CI
Without caching, every CI build reinstalls dependencies from scratch. That turns a 30-second build into a 10-minute one.
Registry-based caching with --cache-from
When the CI filesystem is ephemeral, push cache layers to a registry and pull them on the next build.
```shell
# Pull previous cache, build, then push updated cache
docker buildx build \
  --cache-from type=registry,ref=ghcr.io/myorg/myapp:buildcache \
  --cache-to type=registry,ref=ghcr.io/myorg/myapp:buildcache,mode=max \
  --tag ghcr.io/myorg/myapp:${GITHUB_SHA} \
  --push .
```
The mode=max flag exports all layers, not just those in the final image. This maximizes cache hits for multi-stage builds.
GitHub Actions cache backend
GitHub Actions has a native cache backend for BuildKit that avoids round-trips to a registry.
```yaml
- uses: docker/build-push-action@v5
  with:
    cache-from: type=gha
    cache-to: type=gha,mode=max
```
This stores layers in the GitHub Actions cache (10 GB limit per repo). It is the fastest option for GitHub-hosted runners.
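One nuance: the gha backend groups cache entries by a scope, and builds that omit it share a default scope, so parallel builds of different images in one repo can evict each other's layers. Giving each image its own scope (the value below is an arbitrary label) keeps them separate:

```yaml
- uses: docker/build-push-action@v5
  with:
    cache-from: type=gha,scope=myapp
    cache-to: type=gha,mode=max,scope=myapp
```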
Pushing to registries
Authentication varies by registry. Always pipe credentials through stdin.
```shell
# Docker Hub
echo "${DOCKERHUB_TOKEN}" | docker login -u "${DOCKERHUB_USER}" --password-stdin

# Amazon ECR
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789.dkr.ecr.us-east-1.amazonaws.com

# GitHub Container Registry
echo "${GITHUB_TOKEN}" | docker login ghcr.io -u "${GITHUB_ACTOR}" --password-stdin
```
Never put credentials in your Dockerfile or pass them as build args; both are recorded in the image metadata and visible to anyone who can pull the image.
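If a build genuinely needs a credential, say to install private packages, BuildKit secret mounts are the usual alternative: the secret is available during the RUN step but never written to a layer. A sketch, assuming a hypothetical .npmrc file:

```shell
# Dockerfile side:
#   RUN --mount=type=secret,id=npmrc,target=/root/.npmrc npm ci
docker buildx build --secret id=npmrc,src=${HOME}/.npmrc -t myapp:dev .
```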
Multi-platform builds with buildx
Buildx uses QEMU emulation or native builder nodes to produce images for multiple architectures in a single command.
```shell
# Create a builder that supports multi-platform builds
docker buildx create --name multiarch --use
docker buildx inspect --bootstrap

# Build for amd64 and arm64, push directly
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --tag ghcr.io/myorg/myapp:1.4.0 \
  --push .
```
The registry stores a manifest list. When a user pulls the image, Docker selects the correct platform variant automatically. ARM builds under QEMU are slower, so consider native ARM runners for large projects.
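You can confirm what the registry stored with buildx imagetools, using the image from the example above:

```shell
# Lists each platform variant behind the single tag
docker buildx imagetools inspect ghcr.io/myorg/myapp:1.4.0
```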
Tagging strategies
Tags communicate what an image contains. Use a consistent scheme.
| Strategy | Example Tag | When to Use |
|---|---|---|
| Semantic version | 1.4.0 | Releases, customer-facing versions |
| Git SHA | abc1234 | Every commit, full traceability |
| Branch name | main, feature-x | Development, preview environments |
| latest | latest | Convenience only, never in production |
A good practice is to apply multiple tags per image:
```shell
docker buildx build \
  --tag ghcr.io/myorg/myapp:1.4.0 \
  --tag ghcr.io/myorg/myapp:abc1234 \
  --tag ghcr.io/myorg/myapp:latest \
  --push .
```
Pin deployments to the semver or SHA tag. The latest tag is mutable and will drift. For artifact management, consistent tagging is critical to trace a running container back to the exact commit that produced it.
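The multi-tag scheme above is easy to derive in plain shell. A sketch, using hypothetical stand-ins for values a CI system would provide:

```shell
# Stand-ins for CI-provided values
# (e.g. ${CI_COMMIT_TAG} and $(git rev-parse --short HEAD))
GIT_TAG="v1.4.0"
SHORT_SHA="abc1234"

VERSION="${GIT_TAG#v}"   # strip the leading "v": v1.4.0 -> 1.4.0
echo "myapp:${VERSION} myapp:${SHORT_SHA} myapp:latest"
# prints: myapp:1.4.0 myapp:abc1234 myapp:latest
```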
Full GitHub Actions workflow
This workflow builds a multi-platform image, scans it with Trivy, and pushes to GHCR.
```yaml
name: Build and Push

on:
  push:
    branches: [main]
    tags: ["v*"]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-qemu-action@v3
      - uses: docker/setup-buildx-action@v3
      - uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/metadata-action@v5
        id: meta
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=semver,pattern={{version}}
            type=sha,format=long,prefix=
      # Build a single-platform image and load it into the local daemon
      # for scanning; load: true cannot import a multi-platform manifest
      # with the default image store.
      - uses: docker/build-push-action@v5
        with:
          context: .
          platforms: linux/amd64
          push: false
          load: true
          tags: ${{ steps.meta.outputs.tags }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
      - name: Scan with Trivy
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}
          exit-code: "1"
          severity: CRITICAL,HIGH
      # Rebuild for both platforms and push; layers come from the cache.
      - uses: docker/build-push-action@v5
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
```
The metadata-action generates tags from git context. On a tagged push like v1.4.0, it produces 1.4.0. On a branch push, it uses the commit SHA.
Choosing your build tool
- DinD if you need full Docker compatibility and your runners allow privileged containers.
- Kaniko if you run on Kubernetes or any environment without a Docker socket.
- Buildah if you want rootless, OCI-native builds with fine-grained control.
What comes next
Building and pushing images is one piece of the pipeline. The next challenge is managing the artifacts those pipelines produce: tracking which images are deployed, cleaning up old tags, and enforcing promotion policies. That is covered in artifact management.