The software delivery lifecycle
In this series (10 parts)
- What DevOps actually is
- The software delivery lifecycle
- Agile, Scrum, and Kanban for DevOps teams
- Trunk-based development and branching strategies
- Environments and promotion strategies
- Configuration management
- Secrets management
- Deployment strategies
- On-call culture and incident management
- DevOps metrics and measuring maturity
Software delivery is a loop, not a line. Traditional waterfall treated it as a sequence of handoffs: requirements, design, implementation, testing, deployment. Each phase had an owner. Each transition lost context. DevOps replaces this with a continuous cycle where feedback from production flows back into planning within hours, not months.
The eight stages
Every piece of software moves through the same fundamental stages. DevOps compresses the time between them.
```mermaid
graph LR
    Plan --> Code --> Build --> Test --> Release --> Deploy --> Operate --> Monitor
    Monitor -->|"feedback"| Plan
```
The DevOps infinity loop. Each stage feeds forward into the next, and monitoring feeds back into planning.
Plan
Product managers, designers, and engineers decide what to build. Work is broken into small, deliverable increments. The critical discipline here is keeping batch sizes small. A feature that takes six months to plan will take six months to deliver. A feature scoped to a week can ship in days.
Code
Developers write code in short-lived branches or directly on trunk. Version control is non-negotiable. Code review happens before merge. The key metric is how quickly code moves from “started” to “merged.”
Build
Source code is compiled, bundled, or packaged into a deployable artifact. This happens automatically on every commit through a CI server. Build times matter. A 30-minute build is a 30-minute feedback delay. Fast builds enable fast iteration.
Test
Automated tests run against the build artifact. Unit tests catch logic errors. Integration tests verify component interactions. End-to-end tests validate user workflows. The test suite must be fast enough to run on every commit and reliable enough that failures signal real problems, not flaky infrastructure.
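As a toy illustration of the unit-test layer (the helper and its behavior are invented for this sketch), the fastest tests are plain assertions on pure functions, cheap enough to run on every commit:

```python
# Hypothetical helper: normalize a release tag like "v1.2.3" to "1.2.3".
def normalize_version(tag: str) -> str:
    tag = tag.strip()
    return tag[1:] if tag.startswith("v") else tag

# Unit tests: fast, deterministic, and failure means a real logic error.
assert normalize_version("v1.2.3") == "1.2.3"
assert normalize_version("2.0.0") == "2.0.0"
assert normalize_version("  v0.9.0 ") == "0.9.0"
```

Integration and end-to-end tests sit above this layer; they are slower, so the pyramid should be widest at the bottom.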
Release
The artifact is versioned, tagged, and made available for deployment. In some models this is a manual approval gate. In others it is fully automated. The release stage answers: “Is this artifact ready for production?”
Deploy
The artifact moves to a target environment. This can be a rolling update, a blue-green deployment, or a canary release. The deploy stage should be boring. If deploying feels risky, something earlier in the pipeline is broken.
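A canary release can be sketched in a few lines. Everything here is illustrative: `check_health` stands in for a real probe (an HTTP check or an error-rate query against monitoring), and the fleet state is a plain dictionary rather than a real orchestrator:

```python
deployed: dict[str, str] = {}  # instance -> version; stand-in for real fleet state

def deploy(instance: str, version: str) -> None:
    deployed[instance] = version

def check_health(instance: str) -> bool:
    # A real implementation would query monitoring for error rates or probe HTTP.
    return True

def canary_release(instances: list[str], new_version: str, old_version: str) -> bool:
    """Deploy to one instance first; promote to the rest only if it stays healthy."""
    canary, rest = instances[0], instances[1:]
    deploy(canary, new_version)
    if not check_health(canary):
        deploy(canary, old_version)  # roll the canary back, abort the release
        return False
    for instance in rest:
        deploy(instance, new_version)
    return True

ok = canary_release(["web-1", "web-2", "web-3"], "v2", "v1")
```

The design point is the gate between the canary and the rest of the fleet: a bad release burns one instance, not all of them.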
Operate
The system runs in production. Operations concerns include scaling, failover, incident response, and capacity planning. In a DevOps culture, the team that built the service also operates it. This ownership drives better design decisions.
Monitor
Metrics, logs, and traces flow from production into observability systems. Alerts fire when SLOs are at risk. Dashboards show system health. Monitoring closes the loop: it turns production behavior into planning input.
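One way to make "SLOs are at risk" concrete is an error-budget check. As a toy sketch with invented numbers, a 99.9% availability target implies a budget of 0.1% failed requests; alerting on budget consumption rather than raw error counts ties monitoring directly back to the SLO:

```python
# Toy error-budget check against a 99.9% availability SLO. The request
# counts are made up for illustration.
slo_target = 0.999
total_requests = 1_000_000
failed_requests = 1_800

error_budget = 1 - slo_target                            # fraction allowed to fail
observed_failure_rate = failed_requests / total_requests
budget_consumed = observed_failure_rate / error_budget   # > 1.0 means budget blown

alert = budget_consumed > 1.0
```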
Lead time vs deployment frequency
Two dimensions define delivery speed.
Lead time for changes measures the elapsed time from first commit to production deployment. It captures the efficiency of your entire pipeline: code review speed, build time, test suite duration, deployment automation.
Deployment frequency measures how often you push to production. Daily? Weekly? On every commit?
Both metrics are really measures of batch size. Small changes flow through the pipeline faster (shorter lead time) and can ship more often (higher deployment frequency). Large changes get stuck.
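Both metrics fall out of the same raw data: a timestamp for each change's first commit and its production deploy. A minimal sketch, with made-up records:

```python
from datetime import datetime, timedelta

# Hypothetical change records: (first_commit_time, production_deploy_time).
changes = [
    (datetime(2024, 3, 1, 9, 0),  datetime(2024, 3, 1, 11, 30)),
    (datetime(2024, 3, 2, 14, 0), datetime(2024, 3, 2, 14, 45)),
    (datetime(2024, 3, 4, 10, 0), datetime(2024, 3, 5, 9, 0)),
]

# Lead time for changes: commit -> production, averaged across changes.
lead_times = [deployed - committed for committed, deployed in changes]
mean_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Deployment frequency: deploys per day over the observation window.
deploy_times = [deployed for _, deployed in changes]
window_days = (max(deploy_times) - min(deploy_times)).days + 1
deploys_per_day = len(deploy_times) / window_days
```

In practice the commit and deploy timestamps come from version control and the CD system; the arithmetic stays this simple.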
| Performance level | Deployment frequency | Lead time |
|---|---|---|
| Elite | On demand (multiple/day) | Less than 1 hour |
| High | Weekly to monthly | 1 day to 1 week |
| Medium | Monthly to every 6 months | 1 month to 6 months |
| Low | Less than once every 6 months | More than 6 months |
The four DORA metrics
The DevOps Research and Assessment (DORA) team identified four metrics that predict software delivery performance. These are not vanity metrics. They correlate with organizational performance, team wellbeing, and even profitability.
Deployment frequency
How often does your team deploy to production? Elite teams deploy on demand, multiple times per day. This is only possible when the pipeline is fully automated and each deployment is low risk.
Lead time for changes
How long from commit to production? This metric exposes bottlenecks. If code review takes three days, your lead time has a three-day floor regardless of how fast everything else is.
Change failure rate
What percentage of deployments cause a failure in production? A failure is anything that requires remediation: a rollback, a hotfix, a patch. Elite teams keep this below 5%.
Mean time to recovery (MTTR)
When a failure occurs, how long until service is restored? Fast recovery depends on good monitoring (to detect the problem), good deployment tooling (to roll back), and good incident response practices.
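Change failure rate and MTTR come from the same deployment log. As a sketch over invented records, each entry notes whether the deploy needed remediation and, if so, when the failure was detected and when service was restored:

```python
from datetime import datetime, timedelta

# Hypothetical deployment log. "failed" means the deploy required remediation
# (rollback, hotfix, or patch).
deployments = [
    {"failed": False},
    {"failed": True, "detected": datetime(2024, 3, 2, 10, 0),
     "restored": datetime(2024, 3, 2, 10, 40)},
    {"failed": False},
    {"failed": False},
    {"failed": True, "detected": datetime(2024, 3, 9, 15, 0),
     "restored": datetime(2024, 3, 9, 15, 20)},
]

failures = [d for d in deployments if d["failed"]]

# Change failure rate: fraction of deployments needing remediation.
change_failure_rate = len(failures) / len(deployments)

# MTTR: mean time from detection to restoration, over failed deployments only.
mttr = sum((f["restored"] - f["detected"] for f in failures), timedelta()) / len(failures)
```

The 40% failure rate in this toy data is far from the elite sub-5% band; the point is only how the two metrics are derived.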
Elite performers deploy 208x more frequently than low performers and recover from failures 12x faster. Data derived from the Accelerate State of DevOps reports.
How DevOps compresses the cycle
Without DevOps, each stage has a queue. Code waits for review. Builds wait for a shared CI server. Tests wait for a manual QA pass. Releases wait for a change advisory board. Deployments wait for a maintenance window.
DevOps eliminates these queues through:
- Automation. CI/CD pipelines run builds, tests, and deployments without human intervention for the happy path.
- Small batches. Fewer changes per deployment means less risk per deployment.
- Shift left. Security scanning, performance testing, and compliance checks move earlier in the pipeline where they are cheaper to fix.
- Fast feedback. Developers know within minutes if their change broke something, not days.
The compounding effect is dramatic. A team that deploys weekly and adopts these practices often reaches daily deployments within months. The ceiling is deploying on every commit.
Common anti-patterns
The “DevOps team” anti-pattern. Creating a DevOps team between dev and ops just adds a third silo. DevOps is a practice embedded in every team, not a team unto itself.
Automating a bad process. If your deployment process requires 47 manual steps, automating all 47 steps gives you a fragile 47-step automated process. Simplify first, then automate.
Vanity metrics. Tracking lines of code, number of commits, or story points completed tells you nothing about delivery performance. Stick to the DORA four.
Ignoring MTTR. Teams obsess over preventing failures but underinvest in recovery speed. Failures are inevitable. Recovery speed is a choice.
What comes next
The next article covers how Agile methodologies like Scrum and Kanban connect to the delivery lifecycle and why sprint cadence should align with your deployment pipeline maturity.