Jenkins fundamentals
In this series (10 parts)
Jenkins has been around since 2011 (as a fork of Hudson, which started in 2004). It runs CI/CD for thousands of organizations, from startups to Fortune 500 companies. It is open source, endlessly configurable, and has a plugin for nearly everything. It is also complex to operate, painful to upgrade, and easy to misconfigure. Understanding Jenkins means understanding both its power and its trade-offs.
Architecture: controller and agents
Jenkins uses a controller/agent architecture. The controller is the central server that manages configuration, schedules jobs, and serves the web UI. Agents are the machines that actually execute the work.
graph TD
    A[Jenkins Controller] --> B[Agent: Linux Docker]
    A --> C[Agent: Linux Bare Metal]
    A --> D[Agent: macOS]
    A --> E[Agent: Windows]
    B --> F[Job: Build and Test]
    C --> G[Job: Integration Tests]
    D --> H[Job: iOS Build]
    E --> I[Job: .NET Build]
The controller distributes jobs to agents based on labels and availability. Each agent can have different capabilities.
Key concepts:
- Controller: runs the Jenkins application, stores configuration, manages the job queue, and records build history. Should not run build jobs itself in production.
- Agents: connect to the controller and execute jobs. Can be permanent (always running) or ephemeral (spun up on demand).
- Labels: tags assigned to agents. A job can require a specific label, so it only runs on agents that have the right tools installed.
- Executors: each agent has a number of executor slots. Each slot can run one job at a time.
In production the controller should run builds never, which in practice means setting its executor count to zero. The controller process holds every credential and all of Jenkins' configuration, so any build job running on it effectively runs untrusted code with full administrative access.
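Labels compose into boolean expressions, so a job can demand a combination of capabilities rather than a single tag. A minimal sketch (the label names here are illustrative):

```groovy
pipeline {
    // No default agent; each stage declares where it runs
    agent none
    stages {
        stage('Build') {
            // Runs only on agents carrying BOTH labels
            agent { label 'linux && docker' }
            steps {
                sh 'make build'
            }
        }
        stage('iOS Build') {
            // Runs only on macOS agents
            agent { label 'macos' }
            steps {
                sh 'xcodebuild -scheme MyApp build'
            }
        }
    }
}
```

With `agent none` at the top, a stage without its own `agent` block will fail, which makes the placement of every piece of work explicit.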
Jenkinsfile: declarative syntax
Modern Jenkins pipelines are defined in a Jenkinsfile at the root of the repository. There are two syntaxes: scripted (raw Groovy, maximally flexible, harder to read and review) and declarative (a structured, opinionated subset of Groovy, recommended for most teams).
pipeline {
    agent any
    environment {
        MAVEN_OPTS = '-Xmx512m'
        APP_VERSION = "${env.BUILD_NUMBER}"
    }
    options {
        timeout(time: 30, unit: 'MINUTES')
        disableConcurrentBuilds()
        buildDiscarder(logRotator(numToKeepStr: '20'))
    }
    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }
        stage('Build') {
            steps {
                sh 'mvn clean compile -B'
            }
        }
        stage('Test') {
            parallel {
                stage('Unit Tests') {
                    steps {
                        sh 'mvn test -B'
                    }
                    post {
                        always {
                            junit 'target/surefire-reports/*.xml'
                        }
                    }
                }
                stage('Integration Tests') {
                    steps {
                        sh 'mvn verify -B -Dskip.unit.tests=true'
                    }
                    post {
                        always {
                            junit 'target/failsafe-reports/*.xml'
                        }
                    }
                }
            }
        }
        stage('Package') {
            steps {
                sh 'mvn package -B -DskipTests'
                archiveArtifacts artifacts: 'target/*.jar', fingerprint: true
            }
        }
        stage('Build Docker Image') {
            when {
                branch 'main'
            }
            steps {
                script {
                    def image = docker.build("myorg/myapp:${APP_VERSION}")
                    docker.withRegistry('https://registry.example.com', 'docker-registry-creds') {
                        image.push()
                        image.push('latest')
                    }
                }
            }
        }
        stage('Deploy to Staging') {
            when {
                branch 'main'
            }
            steps {
                withCredentials([string(credentialsId: 'staging-deploy-token', variable: 'DEPLOY_TOKEN')]) {
                    sh './scripts/deploy.sh staging'
                }
            }
        }
        stage('Deploy to Production') {
            when {
                branch 'main'
            }
            input {
                message 'Deploy to production?'
                ok 'Yes, deploy'
                submitter 'admin,deployers'
            }
            steps {
                withCredentials([string(credentialsId: 'prod-deploy-token', variable: 'DEPLOY_TOKEN')]) {
                    sh './scripts/deploy.sh production'
                }
            }
        }
    }
    post {
        success {
            slackSend(channel: '#deployments', message: "Build ${env.BUILD_NUMBER} succeeded")
        }
        failure {
            slackSend(channel: '#deployments', message: "Build ${env.BUILD_NUMBER} failed", color: 'danger')
        }
        always {
            cleanWs()
        }
    }
}
Declarative syntax breakdown
pipeline: the top-level block. Everything lives inside it.
agent: where the pipeline runs. agent any means any available agent. You can also specify a label (agent { label 'linux' }) or a Docker image (agent { docker { image 'maven:3.9' } }).
environment: sets environment variables. Supports credentials:
environment {
    DB_PASSWORD = credentials('db-password-id')
}
options: pipeline-wide settings like timeout, build retention, and concurrency limits.
stages: the ordered list of stages. Each stage contains steps.
parallel: runs multiple stages concurrently within a parent stage.
when: conditional execution. Common conditions:
when { branch 'main' }
when { changeset '**/*.java' }
when { expression { return params.DEPLOY_TO_PROD == true } }
when { allOf { branch 'main'; environment name: 'DEPLOY', value: 'true' } }
input: pauses the pipeline and waits for human approval. The submitter field restricts who can approve.
post: runs after stages complete. Conditions include always, success, failure, unstable, and changed.
Shared libraries
As the number of Jenkinsfiles grows, duplication becomes a problem. Shared libraries let you extract common logic into a separate Git repository that Jenkins loads automatically.
A shared library repository has this structure:
vars/
    buildAndTest.groovy
    deployToEnv.groovy
src/
    org/
        mycompany/
            DockerHelper.groovy
resources/
    deploy-template.sh
The vars/ directory contains global functions callable from any Jenkinsfile:
// vars/buildAndTest.groovy
def call(Map config = [:]) {
    pipeline {
        agent { label config.agent ?: 'linux' }
        stages {
            stage('Build') {
                steps {
                    sh 'mvn clean compile -B'
                }
            }
            stage('Test') {
                steps {
                    sh 'mvn test -B'
                }
                post {
                    always {
                        junit 'target/surefire-reports/*.xml'
                    }
                }
            }
            stage('Package') {
                steps {
                    sh 'mvn package -B -DskipTests'
                    archiveArtifacts artifacts: 'target/*.jar'
                }
            }
        }
    }
}
Using it from a Jenkinsfile:
@Library('my-shared-library') _
buildAndTest(agent: 'linux')
The @Library annotation loads the shared library. The trailing underscore is an unused placeholder for the annotation to attach to: Groovy requires every annotation to annotate something, and the underscore fills that role when you call the library's global variables directly instead of following the annotation with an import statement.
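Classes under src/ are placed on the classpath and imported like any Groovy class. As a sketch, using the DockerHelper class from the layout above (its constructor and buildAndPush method are assumptions for illustration, not a fixed API):

```groovy
@Library('my-shared-library') _
import org.mycompany.DockerHelper

pipeline {
    agent any
    stages {
        stage('Build Image') {
            steps {
                script {
                    // Pass the pipeline context (this) so the class
                    // can invoke steps like sh() and echo()
                    def helper = new DockerHelper(this)
                    helper.buildAndPush("myorg/myapp:${env.BUILD_NUMBER}")
                }
            }
        }
    }
}
```

Functions in vars/ suit simple, step-like helpers; classes in src/ suit logic that benefits from state, inheritance, or unit testing outside Jenkins.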
Key plugins
Jenkins without plugins is a bare-bones job scheduler. Plugins add almost every useful feature. Here are the ones most teams need:
| Plugin | Purpose |
|---|---|
| Pipeline | Enables Jenkinsfile support |
| Git | Git integration and checkout |
| Credentials | Secure storage for secrets |
| Docker Pipeline | Build and use Docker images in pipelines |
| Blue Ocean | Modern UI for pipeline visualization |
| JUnit | Test result reporting |
| Slack Notification | Send build notifications to Slack |
| Pipeline Utility Steps | File operations, JSON/YAML parsing |
| Lockable Resources | Prevent concurrent access to shared resources |
| Job DSL | Programmatic job creation |
The plugin ecosystem is both a strength and a weakness. Plugins can conflict with each other, and outdated plugins are a frequent source of security vulnerabilities. Keep your plugin list minimal and update regularly.
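As a sketch of the last entry in the table, a Job DSL seed script generates jobs from code instead of manual UI configuration (the repository URL and job name here are illustrative):

```groovy
// Seed script processed by the Job DSL plugin:
// running it creates or updates the 'myapp-ci' pipeline job
pipelineJob('myapp-ci') {
    definition {
        cpsScm {
            scm {
                git {
                    remote { url('https://github.com/myorg/myapp.git') }
                    branch('main')
                }
            }
            // Load the pipeline definition from the repository
            scriptPath('Jenkinsfile')
        }
    }
}
```

A single seed job running scripts like this can recreate hundreds of jobs from version control, which also mitigates the configuration-drift problem discussed later.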
Blue Ocean
Blue Ocean is a modern UI for Jenkins pipelines. It provides a visual pipeline editor, a cleaner build log view, and a branch-aware interface that works well with multibranch pipelines.
Blue Ocean is useful for teams transitioning to Jenkins pipelines from freestyle jobs. It makes the pipeline structure visible and the logs easier to navigate. However, Blue Ocean development has slowed, and many teams use the classic UI or third-party dashboards instead.
Jenkins in practice
Multibranch pipelines
A multibranch pipeline automatically discovers branches in your repository and creates a pipeline for each one. When a branch is deleted, the pipeline is removed. This is the closest Jenkins gets to the behavior of GitHub Actions or GitLab CI, where every branch can have its own pipeline.
// Jenkinsfile that behaves differently per branch
pipeline {
    agent any
    stages {
        stage('Build and Test') {
            steps {
                sh 'mvn clean verify -B'
            }
        }
        stage('Deploy') {
            when {
                branch 'main'
            }
            steps {
                sh './deploy.sh'
            }
        }
    }
}
Credential management
Jenkins stores credentials in an encrypted store. Access them in pipelines with withCredentials:
withCredentials([
    usernamePassword(credentialsId: 'docker-hub', usernameVariable: 'DOCKER_USER', passwordVariable: 'DOCKER_PASS'),
    string(credentialsId: 'api-key', variable: 'API_KEY'),
    file(credentialsId: 'kubeconfig', variable: 'KUBECONFIG')
]) {
    // --password-stdin keeps the secret out of the process list
    sh 'echo "$DOCKER_PASS" | docker login -u "$DOCKER_USER" --password-stdin'
    sh 'kubectl --kubeconfig="$KUBECONFIG" apply -f deployment.yaml'
}
Credentials can be scoped globally, to the system, or to folders. Folder scoping is the usual way to partition secrets by team or project: jobs in one folder cannot see credentials defined in another.
graph LR
    A[Developer pushes to branch] --> B[Multibranch pipeline detects change]
    B --> C[Jenkins loads Jenkinsfile from branch]
    C --> D[Controller assigns job to agent]
    D --> E[Agent runs stages]
    E --> F[Results reported to controller]
    F --> G[Status shown in Jenkins UI and GitHub]
The flow from push to result in a multibranch pipeline setup. Jenkins polls the repository or receives a webhook.
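When webhooks cannot reach the controller (a common situation behind corporate firewalls), a standalone pipeline job can declare its own polling schedule in the Jenkinsfile:

```groovy
pipeline {
    agent any
    triggers {
        // 'H' spreads polling load across the hour;
        // this checks the repository roughly every 5 minutes
        pollSCM('H/5 * * * *')
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn clean verify -B'
            }
        }
    }
}
```

Webhooks are still preferable where possible: polling adds load on both Jenkins and the Git server and delays feedback by up to one polling interval.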
Why teams move away from Jenkins
Jenkins is powerful, but it carries operational overhead that newer platforms avoid:
Infrastructure management. You own the controller and agents. You handle upgrades, backups, scaling, and security patches. GitHub Actions and GitLab CI manage this for you.
Plugin maintenance. Plugins need updating. Plugin conflicts cause outages. A plugin author abandoning their project can leave you stuck on an insecure version.
Groovy complexity. Jenkinsfile syntax is Groovy, which is a full programming language. This flexibility becomes a liability when pipelines turn into 500-line scripts with custom class hierarchies and error handling.
Configuration drift. Jenkins instances accumulate manual configuration over years. Rebuilding a Jenkins instance from scratch is difficult without disciplined use of configuration-as-code.
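The Configuration as Code (JCasC) plugin is the standard answer to drift: the controller's configuration lives in a version-controlled YAML file that Jenkins applies at startup. A minimal sketch (the values shown are illustrative, and the admin password is injected from an environment variable rather than committed):

```yaml
# jenkins.yaml, loaded by the Configuration as Code plugin
jenkins:
  systemMessage: "This instance is configured from code"
  numExecutors: 0          # controller runs no builds itself
  securityRealm:
    local:
      allowsSignup: false
      users:
        - id: admin
          password: "${ADMIN_PASSWORD}"
```

With JCasC plus Job DSL for the jobs themselves, rebuilding a controller from scratch becomes a matter of pointing a fresh installation at the repository that holds its configuration.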
Jenkins requires more maintenance as teams grow. Managed platforms scale with less operational effort.
When to stay on Jenkins
Jenkins is still the right choice in some situations:
- Complex build requirements. If your builds need custom hardware, specific OS versions, or specialized tooling, Jenkins agents give you full control.
- On-premises requirements. If your code cannot leave your network, self-hosted Jenkins may be simpler than self-hosting a GitLab instance.
- Existing investment. If your team has hundreds of pipelines and shared libraries on Jenkins, migration has a real cost. Improve what you have before rewriting everything.
- Orchestration beyond CI/CD. Jenkins can orchestrate arbitrary automation: infrastructure provisioning, data pipeline triggers, scheduled maintenance tasks. Its flexibility extends beyond code pipelines.
What comes next
This concludes the platform-specific articles in the CI/CD series. Future articles will cover testing strategies for CI/CD pipelines, deployment patterns like blue-green and canary releases, pipeline security, and managing secrets at scale across multiple environments.