GitLab CI/CD in depth
GitLab CI/CD is built into GitLab. There is no separate service to enable. Add a .gitlab-ci.yml file to the root of your repository and GitLab starts running pipelines. The integration goes deep: pipelines show up on merge requests, environments are tracked in the GitLab UI, and review apps spin up automatically for every branch.
File structure
A .gitlab-ci.yml file defines the entire pipeline. It sits at the repository root.
```yaml
stages:
  - build
  - test
  - deploy

variables:
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"

build-app:
  stage: build
  image: python:3.12-slim
  script:
    - pip install -r requirements.txt
    - python -m build
  artifacts:
    paths:
      - dist/
    expire_in: 1 hour

run-tests:
  stage: test
  image: python:3.12-slim
  script:
    - pip install -r requirements.txt
    - pytest --cov=src --cov-report=xml
  coverage: '/(?i)total.*? (100(?:\.0+)?\%|[1-9]?\d(?:\.\d+)?\%)$/'
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml

deploy-staging:
  stage: deploy
  script:
    - ./deploy.sh staging
  environment:
    name: staging
    url: https://staging.example.com
```
The top-level keys:
- stages: ordered list of stage names.
- variables: global variables available to all jobs.
- Job definitions: each key that is not a reserved keyword becomes a job.
Stages and job ordering
Stages define execution order. All jobs in a stage run in parallel. The next stage starts only when every job in the current stage succeeds.
```yaml
stages:
  - build
  - test
  - security
  - deploy
```
This creates four sequential phases. Within each phase, jobs run concurrently. If you need finer control, use the needs keyword to create a DAG:
```yaml
unit-tests:
  stage: test
  needs: [build-app]
  script:
    - pytest tests/unit/

integration-tests:
  stage: test
  needs: [build-app]
  script:
    - pytest tests/integration/

deploy-staging:
  stage: deploy
  needs: [unit-tests, integration-tests]
  script:
    - ./deploy.sh staging
```
With needs, deploy-staging starts as soon as both test jobs finish, even if other jobs in the test stage are still running.
Images and services
Each job can specify a Docker image to run in:
```yaml
run-tests:
  image: python:3.12-slim
  services:
    - name: postgres:16
      alias: db
  variables:
    POSTGRES_PASSWORD: postgres
    DATABASE_URL: "postgresql://postgres:postgres@db:5432/testdb"
  script:
    - pip install -r requirements.txt
    - pytest
```
The services keyword starts additional containers alongside the job container. This is how you run integration tests against a real database without managing external infrastructure.
Common service patterns:
- PostgreSQL or MySQL for database tests
- Redis for cache-dependent tests
- Elasticsearch for search integration tests
- A custom API mock container
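As an illustration of the second pattern, a cache-dependent test job might wire up Redis like this (the job name and the REDIS_URL variable are assumptions; your test suite decides which variable it reads):

```yaml
cache-tests:
  stage: test
  image: python:3.12-slim
  services:
    - name: redis:7
      alias: cache       # the service is reachable by its alias hostname
  variables:
    REDIS_URL: "redis://cache:6379/0"   # hypothetical variable read by the tests
  script:
    - pip install -r requirements.txt
    - pytest tests/cache/
```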
Artifacts
Artifacts persist files between jobs and after the pipeline finishes.
```yaml
build-app:
  stage: build
  script:
    - python -m build
  artifacts:
    paths:
      - dist/
    expire_in: 1 week
    when: always
```
The when: always option saves artifacts even if the job fails. This is useful for test reports: you want the report especially when tests fail.
GitLab has special artifact types for reports:
```yaml
artifacts:
  reports:
    junit: report.xml
    coverage_report:
      coverage_format: cobertura
      path: coverage.xml
    sast: gl-sast-report.json
```
JUnit reports show test results directly on merge requests. Coverage reports show line-by-line coverage in the diff view. These integrations are one of the strongest features of GitLab CI.
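For the JUnit integration to work, the test runner must actually write the XML file. With pytest that is the --junitxml flag; a minimal sketch (the job name is illustrative):

```yaml
run-tests:
  stage: test
  image: python:3.12-slim
  script:
    - pip install -r requirements.txt
    - pytest --junitxml=report.xml   # writes the file the junit report expects
  artifacts:
    reports:
      junit: report.xml
    when: always   # keep the report even when tests fail
```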
Cache
Caching speeds up jobs by reusing downloaded dependencies across pipeline runs.
```yaml
run-tests:
  image: python:3.12-slim
  cache:
    key:
      files:
        - requirements.txt
    paths:
      - .cache/pip
      - venv/
  variables:
    PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"
  before_script:
    - python -m venv venv
    - source venv/bin/activate
    - pip install -r requirements.txt
  script:
    - pytest
```
Cache key strategies:
- File-based keys (key: { files: [requirements.txt] }): the cache invalidates when the file changes.
- Branch-based keys (key: $CI_COMMIT_REF_SLUG): each branch gets its own cache.
- Combined keys (key: "$CI_COMMIT_REF_SLUG-$CI_JOB_NAME"): per-branch, per-job caching.
The before_script and after_script keywords run commands before and after the main script block. They are useful for setup and cleanup that applies to many jobs.
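A sketch of how the two hooks wrap a job's main commands (the cleanup script path is a placeholder); note that after_script runs even when the main script fails:

```yaml
integration-tests:
  stage: test
  before_script:
    - pip install -r requirements.txt   # setup shared with sibling jobs
  script:
    - pytest tests/integration/
  after_script:
    - ./scripts/cleanup.sh              # placeholder; runs even on failure
```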
Rules vs only/except
The only and except keywords are the old way to control when jobs run. The rules keyword replaces them with a more powerful syntax.
```yaml
# Old way (avoid in new pipelines)
deploy-production:
  only:
    - main
  except:
    - schedules

# New way
deploy-production:
  rules:
    - if: $CI_COMMIT_BRANCH == "main" && $CI_PIPELINE_SOURCE != "schedule"
      when: manual
    - when: never
```
Rules are evaluated top to bottom. The first matching rule determines whether and how the job runs. Common when values:
- on_success: run if previous stages succeeded (default).
- manual: show a play button in the UI.
- delayed: run after a specified delay.
- never: skip the job.
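For example, a delayed rollout job pairs when: delayed with start_in (the job name and delay value here are arbitrary):

```yaml
deploy-canary:
  stage: deploy
  script:
    - ./deploy.sh canary
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: delayed
      start_in: 30 minutes   # job starts automatically after the delay
```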
Rules can also check file changes:
```yaml
build-docs:
  rules:
    - changes:
        - docs/**/*
        - README.md
      when: on_success
    - when: never
```
This job only runs when documentation files change.
GitLab Runners
A runner is the agent that executes jobs. GitLab provides shared runners on GitLab.com, but many teams register their own.
Runner types:
- Shared runners: available to all projects in a GitLab instance. Managed by GitLab on GitLab.com.
- Group runners: available to all projects in a group. Managed by the group owner.
- Project runners: dedicated to a single project.
Runner executors determine how jobs are isolated:
| Executor | Isolation | Use case |
|---|---|---|
| Docker | Container per job | Most common, good isolation |
| Kubernetes | Pod per job | Scalable, cloud-native |
| Shell | None (runs on host) | Simple, fast, risky |
| VirtualBox | VM per job | Maximum isolation, slow |
For most teams, the Docker executor is the right choice. It provides good isolation, fast startup, and the ability to use any Docker image.
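For reference, a minimal Docker-executor entry in a runner's config.toml might look like the sketch below; the name, URL, and token are placeholders:

```toml
[[runners]]
  name = "docker-runner"
  url = "https://gitlab.example.com"   # placeholder instance URL
  token = "RUNNER_TOKEN"               # placeholder
  executor = "docker"
  [runners.docker]
    image = "python:3.12-slim"   # default image when a job specifies none
    privileged = false
```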
Environments and review apps
GitLab tracks deployment environments as first-class objects.
```yaml
deploy-staging:
  stage: deploy
  script:
    - ./deploy.sh staging
  environment:
    name: staging
    url: https://staging.example.com

deploy-production:
  stage: deploy
  script:
    - ./deploy.sh production
  environment:
    name: production
    url: https://app.example.com
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: manual
```
The GitLab UI shows a list of environments with their current deployment status, the commit that is deployed, and links to the running application.
Review apps take this further. They create a temporary environment for each merge request:
```yaml
deploy-review:
  stage: deploy
  script:
    - ./deploy-review.sh $CI_MERGE_REQUEST_IID
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://$CI_COMMIT_REF_SLUG.review.example.com
    on_stop: stop-review
    auto_stop_in: 1 week
  rules:
    - if: $CI_MERGE_REQUEST_IID

stop-review:
  stage: deploy
  script:
    - ./teardown-review.sh $CI_MERGE_REQUEST_IID
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
  rules:
    - if: $CI_MERGE_REQUEST_IID
      when: manual
```
Each merge request gets its own running instance of the application. Reviewers can click through the actual UI instead of just reading code. The auto_stop_in setting cleans up abandoned review apps.
```mermaid
graph TD
  A[Developer pushes branch] --> B[Pipeline runs]
  B --> C[Build]
  C --> D[Unit tests]
  C --> E[Integration tests]
  D --> F[Deploy review app]
  E --> F
  F --> G[Review app live at branch.review.example.com]
  G --> H{MR merged?}
  H -->|Yes| I[Deploy to staging]
  H -->|No| J[Review app auto-stops after 1 week]
  I --> K[Smoke tests]
  K --> L[Manual deploy to production]
```
Review app lifecycle. Each merge request gets its own environment. Merged branches promote to staging.
Complete pipeline: Python service
Here is a full .gitlab-ci.yml for a Python web service with test coverage enforcement, Docker image build, and staged deployment.
```yaml
stages:
  - build
  - test
  - security
  - package
  - deploy

variables:
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"
  DOCKER_IMAGE: "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

build-app:
  stage: build
  image: python:3.12-slim
  cache:
    key:
      files:
        - requirements.txt
    paths:
      - .cache/pip
      - venv/
  script:
    - python -m venv venv
    - source venv/bin/activate
    - pip install --upgrade pip
    - pip install -r requirements.txt
    - pip install -r requirements-dev.txt
    - python -m build
  artifacts:
    paths:
      - dist/
      - venv/
    expire_in: 1 hour

unit-tests:
  stage: test
  image: python:3.12-slim
  needs: [build-app]
  cache:
    key:
      files:
        - requirements.txt
    paths:
      - .cache/pip
  script:
    - source venv/bin/activate
    - pytest tests/unit/ --cov=src --cov-report=xml --cov-report=term --cov-fail-under=80 --junitxml=report.xml -v
  coverage: '/(?i)total.*? (100(?:\.0+)?\%|[1-9]?\d(?:\.\d+)?\%)$/'
  artifacts:
    reports:
      junit: report.xml
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml
    when: always

integration-tests:
  stage: test
  image: python:3.12-slim
  needs: [build-app]
  services:
    - name: postgres:16
      alias: db
  variables:
    POSTGRES_DB: testdb
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: postgres
    DATABASE_URL: "postgresql://postgres:postgres@db:5432/testdb"
  script:
    - source venv/bin/activate
    - pytest tests/integration/ --junitxml=integration-report.xml -v
  artifacts:
    reports:
      junit: integration-report.xml
    when: always

lint:
  stage: test
  image: python:3.12-slim
  needs: [build-app]
  script:
    - source venv/bin/activate
    - ruff check src/ tests/
    - ruff format --check src/ tests/
    - mypy src/

dependency-scan:
  stage: security
  image: python:3.12-slim
  needs: [build-app]
  script:
    - source venv/bin/activate
    - pip-audit --require-hashes -r requirements.txt
  allow_failure: true

build-docker:
  stage: package
  image: docker:24
  services:
    - docker:24-dind
  needs: [unit-tests, integration-tests, lint]
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker build -t $DOCKER_IMAGE -t $CI_REGISTRY_IMAGE:latest .
    - docker push $DOCKER_IMAGE
    - docker push $CI_REGISTRY_IMAGE:latest
  rules:
    - if: $CI_COMMIT_BRANCH == "main"

deploy-staging:
  stage: deploy
  image: alpine:latest
  needs: [build-docker]
  before_script:
    - apk add --no-cache curl
  script:
    - echo "Deploying $DOCKER_IMAGE to staging"
    - curl -X POST -H "Authorization: Bearer $STAGING_DEPLOY_TOKEN"
      "$DEPLOY_API_URL/deploy?env=staging&image=$DOCKER_IMAGE"
  environment:
    name: staging
    url: https://staging.example.com
  rules:
    - if: $CI_COMMIT_BRANCH == "main"

deploy-production:
  stage: deploy
  image: alpine:latest
  needs: [deploy-staging]
  before_script:
    - apk add --no-cache curl
  script:
    - echo "Deploying $DOCKER_IMAGE to production"
    - curl -X POST -H "Authorization: Bearer $PROD_DEPLOY_TOKEN"
      "$DEPLOY_API_URL/deploy?env=production&image=$DOCKER_IMAGE"
  environment:
    name: production
    url: https://app.example.com
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: manual
```
Job durations vary. The critical path runs through build, integration tests, Docker build, and deploy. Parallelism in the test stage saves time.
Tips for GitLab CI
Use extends for shared configuration. Hidden jobs (prefixed with .) can be inherited:
```yaml
.python-base:
  image: python:3.12-slim
  cache:
    key:
      files:
        - requirements.txt
    paths:
      - .cache/pip

unit-tests:
  extends: .python-base
  stage: test
  script:
    - pytest tests/unit/
```
Use include for shared pipelines. Pull in templates from other files or repositories:
```yaml
include:
  - project: 'my-org/ci-templates'
    ref: main
    file: '/templates/python-ci.yml'
```
Use interruptible: true on jobs that can be safely cancelled when a new pipeline starts on the same branch. This prevents wasted runner minutes.
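A minimal sketch of the keyword in place (the job name is illustrative; deployment jobs should generally not be interruptible):

```yaml
unit-tests:
  stage: test
  interruptible: true   # safe to cancel if a newer pipeline supersedes this one
  script:
    - pytest tests/unit/
```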
What comes next
The next article covers Jenkins fundamentals, including Jenkinsfile syntax, the controller/agent architecture, and shared libraries. If your team does not use Jenkins, you can skip ahead to articles about testing strategies and deployment patterns in CI/CD pipelines.