CI/CD Pipelines · Part 1

What CI/CD actually means

In this series (10 parts)
  1. What CI/CD actually means
  2. Pipeline anatomy and design
  3. GitHub Actions in depth
  4. GitLab CI/CD in depth
  5. Jenkins fundamentals
  6. Testing in CI pipelines
  7. Artifact management
  8. Pipeline security and supply chain
  9. Progressive delivery
  10. Self-hosted runners and pipeline scaling

Most teams say they do CI/CD. What they usually mean is they have a script that runs tests when someone pushes code. That is a start, but it misses the point. CI/CD is a set of practices that change how teams ship software. The tooling supports the practices. Not the other way around.

Continuous integration is a team practice

Continuous integration means every developer integrates their work into a shared branch at least once a day. The key word is “continuous.” A feature branch that lives for two weeks and gets merged in a 400-line pull request is not continuous integration. It is delayed integration with a CI server watching.

The actual requirements:

  1. Developers commit to the main branch (or merge short-lived branches) frequently.
  2. Every commit triggers an automated build and test run.
  3. If the build breaks, the team fixes it within minutes. Not hours. Not “after lunch.”
  4. The build includes enough automated tests to catch real problems.
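The second requirement, a build on every commit, can be sketched as a minimal GitHub Actions workflow. The `make` targets below are placeholders for whatever your project actually uses:

```yaml
# .github/workflows/ci.yml — a minimal sketch, assuming a GitHub-hosted project
name: ci

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Replace these with your project's real commands
      - name: Install dependencies
        run: make deps
      - name: Build
        run: make build
      - name: Test
        run: make test
```

Every push to main and every pull request targeting it gets the same treatment, so the shared branch is validated continuously, not just at merge time.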

The third point is the one most teams skip. CI only works when a broken main branch is treated as an emergency. If your team routinely sees red builds and keeps working, you do not have continuous integration. You have a notification system everyone ignores.

Why broken main is a team problem

When main is broken, every developer who pulls the latest code inherits that breakage. They cannot test their own changes against a known-good baseline. Merge conflicts multiply. Bugs hide behind other bugs.

A broken main branch is not the fault of the person whose commit triggered the failure. It is a team problem because the team agreed to share a branch. The fix should come from whoever can resolve it fastest.

Some teams use a simple rule: if the build is red for more than ten minutes, the commit gets reverted. No blame, no discussion. Revert first, investigate later. This sounds harsh until you experience the alternative: a main branch that stays broken for a day while three people argue about whose change caused the failure.

CI is not just running tests

A CI pipeline does more than execute a test suite. A proper CI build validates that the software can be built, tested, and packaged from a clean state.

A minimal CI pipeline should:

  • Install dependencies from scratch (no stale local caches)
  • Compile the code (if applicable)
  • Run unit tests
  • Run integration tests
  • Run linters and static analysis
  • Produce a build artifact

Each of these steps serves as a quality gate. If any gate fails, the pipeline stops and the team gets notified. The artifact at the end proves that the code in this commit can be built and tested successfully. That artifact becomes the thing you deploy.

graph LR
  A[Developer pushes commit] --> B[Install dependencies]
  B --> C[Compile / build]
  C --> D[Lint and static analysis]
  D --> E[Unit tests]
  E --> F[Integration tests]
  F --> G[Build artifact]
  G --> H[Publish artifact]
  H --> I[Deploy to staging]
  I --> J[Smoke tests]
  J --> K[Deploy to production]
  K --> L[Post-deploy verification]

A pipeline from commit to production. Each box is a quality gate that can stop the process.
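In GitHub Actions terms, the CI half of that diagram might look like the sketch below. It assumes a hypothetical Node.js project; each job is a gate, and a failed step stops everything downstream:

```yaml
# Sketch of the CI stages as dependent jobs; commands assume a Node.js project
name: pipeline

on:
  push:
    branches: [main]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci                       # install from a clean state
      - run: npm run lint
  unit-tests:
    needs: lint
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm test
  integration-tests:
    needs: unit-tests
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run test:integration     # assumed script name
  build-artifact:
    needs: integration-tests
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run build
      - uses: actions/upload-artifact@v4
        with:
          name: app-build
          path: dist/
```

The uploaded artifact is the proof the commit passed every gate; later deploy jobs download that same artifact rather than rebuilding.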

Continuous delivery vs continuous deployment

These two terms sound similar and get confused constantly. They describe different levels of automation.

Continuous delivery means every commit that passes the pipeline could be deployed to production. The artifact is built, tested, and ready. A human makes the final decision to release. The deploy button exists and works, but someone has to press it.

Continuous deployment means every commit that passes the pipeline is deployed to production automatically. No human approval step. If the tests pass, the code is live.

The difference matters:

Aspect             | Continuous delivery     | Continuous deployment
-------------------|-------------------------|------------------------
Human approval     | Yes, before production  | No
Deploy frequency   | When someone decides    | Every passing commit
Rollback trigger   | Human or automated      | Automated
Test requirements  | High                    | Very high
Risk tolerance     | Moderate                | Low (must trust tests)

Most teams practice continuous delivery. Continuous deployment requires a level of test coverage and monitoring maturity that takes time to build. Starting with continuous delivery and moving toward continuous deployment as confidence grows is a reasonable path.
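One common way to get the “deploy button” of continuous delivery is a protected environment. In GitHub Actions, for example, a job that targets an environment with required reviewers pauses until someone approves it. The job and script names below are hypothetical, and the reviewer list lives in the repository settings, not in the file:

```yaml
# Sketch: continuous delivery via a protected environment.
# "production" must be configured with required reviewers in the
# repository settings for the approval pause to happen.
deploy-production:
  needs: build-artifact           # hypothetical job that produced the artifact
  runs-on: ubuntu-latest
  environment: production         # job waits here for human approval
  steps:
    - uses: actions/download-artifact@v4
      with:
        name: app-build
    - name: Deploy
      run: ./scripts/deploy.sh production   # hypothetical deploy script
```

Removing the required-reviewers setting turns this same file into continuous deployment: nothing else in the pipeline changes, only the level of trust you have granted it.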

The trust equation

Continuous deployment only works if you trust your pipeline. That trust comes from:

  • Comprehensive tests that catch real bugs, not just superficial checks
  • Monitoring that detects problems in production within seconds
  • Automated rollback mechanisms that revert bad deploys without human intervention
  • Feature flags that decouple deployment from release

If any of these are missing, a human approval step is not a bottleneck. It is a safety net you still need.

The pipeline as a quality gate

Think of a pipeline as a series of checkpoints. Each checkpoint answers a question:

  • Build stage: Can this code compile and produce an artifact?
  • Lint stage: Does this code follow the team’s standards?
  • Unit test stage: Do the individual components work correctly?
  • Integration test stage: Do the components work together?
  • Security scan stage: Are there known vulnerabilities in the dependencies?
  • Deploy to staging: Does the application start and respond in an environment close to production?
  • Smoke tests: Do the critical user paths work end to end?

A pipeline is only as strong as its weakest gate. If your integration tests are flaky and the team habitually re-runs them until they pass, that gate is broken. If your linter has so many exceptions that it catches nothing meaningful, remove it or fix it.
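A smoke-test gate can be as simple as hitting the critical endpoints after the staging deploy. The job name, URL, and paths below are placeholders:

```yaml
# Sketch: smoke tests as a gate after the staging deploy
smoke-tests:
  needs: deploy-staging          # hypothetical upstream deploy job
  runs-on: ubuntu-latest
  steps:
    - name: Check critical paths
      run: |
        # --fail makes curl exit non-zero on HTTP errors, failing the gate
        curl --fail --max-time 10 https://staging.example.com/health
        curl --fail --max-time 10 https://staging.example.com/api/login
```

A handful of checks like this will not catch subtle bugs, but they reliably catch the worst failure mode: an application that deployed but does not actually serve traffic.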

Fast feedback matters

A pipeline that takes 45 minutes to run will get ignored. Developers will push code, switch to another task, and forget about the build. When it finally fails, the context is gone.

Aim for these targets:

  • Lint and unit tests: under 5 minutes
  • Full pipeline including integration tests: under 15 minutes
  • Deploy to staging: under 5 minutes after artifact is built

If your pipeline is slow, parallelize. Run linting and unit tests at the same time. Split integration tests across multiple runners. Cache dependencies aggressively. Speed is not a nice-to-have. It directly affects whether the team actually practices CI.
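In GitHub Actions, for example, jobs with no `needs` relationship between them run in parallel by default, and the setup actions can cache dependencies. A sketch for a hypothetical Node.js project:

```yaml
# Sketch: lint and unit tests in parallel, with a dependency cache
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          cache: npm            # reuses the npm download cache between runs
      - run: npm ci
      - run: npm run lint
  unit-tests:                   # no "needs", so this runs alongside lint
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          cache: npm
      - run: npm ci
      - run: npm test
```

The same idea scales up: a matrix strategy can shard a slow integration suite across several runners, so wall-clock time drops even when total compute stays the same.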

[Chart: developer attention by pipeline duration. Approximate attention rates based on industry surveys; faster pipelines get more attention.]

What CI/CD is not

CI/CD is not a tool. Jenkins, GitHub Actions, GitLab CI, CircleCI: these are pipeline runners. You can install all of them and still not practice CI/CD if your team merges week-old branches and ignores red builds.

CI/CD is not just for deployment. The “CD” part gets all the attention, but the real value starts with CI. If your team integrates frequently and keeps the build green, you have already solved the hardest coordination problem in software development.

CI/CD is not a one-time setup. Pipelines need maintenance. Tests get slow. Dependencies change. New services get added. Treat your pipeline configuration as production code. Review it. Test it. Refactor it when it gets unwieldy.

Getting started

If your team does not have CI/CD today, start here:

  1. Set up a pipeline that runs on every push to main. It should install dependencies, build the code, and run whatever tests you have.
  2. Make the rule: if the build is red, someone fixes it immediately. No exceptions.
  3. Shorten your branch lifetimes. Aim for branches that live less than a day.
  4. Add a linter. Automate the style arguments so code reviews can focus on logic.
  5. Add integration tests for your most critical paths.
  6. Set up a staging environment and deploy to it automatically on every green build.

Each step builds on the previous one. Do not try to jump from zero to continuous deployment in a week. The practices matter more than the tooling.

What comes next

The next article covers pipeline anatomy and design. We will look at how pipelines are structured: triggers, stages, jobs, steps, artifacts, and caching. Understanding the building blocks lets you design pipelines that are fast, reliable, and maintainable.
