Helm and package management
In this series (14 parts)
- Why Kubernetes exists
- Kubernetes architecture
- Core Kubernetes objects
- Kubernetes networking
- Storage in Kubernetes
- Kubernetes configuration and secrets
- Resource management and autoscaling
- Kubernetes workload types
- Kubernetes observability
- Kubernetes security
- Helm and package management
- GitOps with ArgoCD
- Kubernetes cluster operations
- Service mesh concepts
A production cluster runs dozens of services. Each service needs a Deployment, a Service, a ConfigMap, maybe an Ingress, maybe a PodDisruptionBudget. Copy those manifests across environments and you end up with hundreds of YAML files that drift apart over time. Helm exists to solve this.
The problem Helm solves
Kubernetes has no built-in concept of an “application.” It understands individual resources. You apply them one at a time or in bulk with kubectl apply, but there is no grouping, no versioning, no rollback as a unit. If your Deployment update breaks something, you manually hunt down the previous manifest and reapply it.
Helm introduces three ideas that fix this:
- Charts package related manifests into a single distributable unit.
- Values parameterize those manifests so one chart works across dev, staging, and production.
- Releases track each installation of a chart, giving you version history and one-command rollback.
Think of Helm as apt or yum for Kubernetes. You install a chart, upgrade it when a new version ships, and roll back if something breaks.
Chart structure
A Helm chart is a directory with a specific layout.
```
mychart/
  Chart.yaml           # metadata: name, version, dependencies
  values.yaml          # default configuration values
  charts/              # dependency charts
  templates/           # Go template files that produce manifests
    deployment.yaml
    service.yaml
    ingress.yaml
    _helpers.tpl       # reusable template snippets
    NOTES.txt          # post-install instructions shown to user
```
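You rarely build this layout by hand. `helm create` scaffolds a working chart with example templates, and `helm lint` checks a chart for syntax and best-practice problems:

```shell
# Scaffold a new chart directory with the standard layout
helm create mychart

# Validate the chart before packaging or installing it
helm lint mychart
```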
Chart.yaml holds the chart’s identity:
```yaml
apiVersion: v2
name: webapp
description: A web application chart
type: application
version: 0.3.0
appVersion: "1.2.0"
dependencies:
  - name: postgresql
    version: "12.1.9"
    repository: "https://charts.bitnami.com/bitnami"
```
The version field tracks the chart itself. The appVersion field tracks the software the chart deploys. They evolve independently.
values.yaml defines defaults that templates consume:
```yaml
replicaCount: 2

image:
  repository: myregistry.io/webapp
  tag: "1.2.0"
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: false
  host: ""

resources:
  limits:
    cpu: 500m
    memory: 256Mi
  requests:
    cpu: 100m
    memory: 128Mi
```
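Per-environment files only need the keys that differ from the defaults; Helm merges them over values.yaml at install time. A hypothetical production-values.yaml for this chart might look like:

```yaml
# production-values.yaml -- only the overrides, not a full copy of values.yaml
replicaCount: 4
ingress:
  enabled: true
  host: webapp.example.com
resources:
  limits:
    cpu: "1"
    memory: 512Mi
```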
Go templating basics
Helm templates use Go’s text/template package. The double curly brace syntax injects values into your manifests at render time.
Here is a Deployment template that references the values above:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-webapp
  labels:
    app: {{ .Release.Name }}-webapp
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-webapp
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-webapp
    spec:
      containers:
        - name: webapp
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - containerPort: 80
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
```
The .Release.Name object comes from the release, not from values.yaml. Helm provides several built-in objects: .Release, .Chart, .Values, .Capabilities, and .Template.
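A quick way to see the built-ins side by side is a ConfigMap template that stamps release and chart metadata into a manifest (a sketch; the data keys are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-info
  namespace: {{ .Release.Namespace }}
data:
  # .Chart exposes everything in Chart.yaml
  chartVersion: {{ .Chart.Version | quote }}
  appVersion: {{ .Chart.AppVersion | quote }}
  # .Capabilities describes the cluster the chart is being rendered for
  kubeVersion: {{ .Capabilities.KubeVersion.Version | quote }}
```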
Conditionals and loops work as expected:
```yaml
{{- if .Values.ingress.enabled }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Release.Name }}-ingress
spec:
  rules:
    - host: {{ .Values.ingress.host }}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: {{ .Release.Name }}-webapp
                port:
                  number: {{ .Values.service.port }}
{{- end }}
```
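Loops use range. Assuming a hypothetical env map in values.yaml, this fragment renders one environment variable per entry inside a container spec:

```yaml
# Assumed in values.yaml:
#   env:
#     LOG_LEVEL: info
#     CACHE_ENABLED: "true"
env:
  {{- range $key, $value := .Values.env }}
  - name: {{ $key }}
    value: {{ $value | quote }}
  {{- end }}
```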
The _helpers.tpl file defines reusable named templates. Reference them with the include function:
```yaml
{{- define "webapp.fullname" -}}
{{ .Release.Name }}-{{ .Chart.Name }}
{{- end -}}
```
Use it in any template with {{ include "webapp.fullname" . }}.
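include is preferred over Go's built-in template action because its output can be piped into other functions, which matters when a name must satisfy Kubernetes label rules:

```yaml
metadata:
  name: {{ include "webapp.fullname" . }}
  labels:
    # Label values are capped at 63 characters; truncate and trim safely
    app: {{ include "webapp.fullname" . | trunc 63 | trimSuffix "-" }}
```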
Release management
```mermaid
flowchart LR
    A[helm install] --> B[Release v1]
    B --> C[helm upgrade]
    C --> D[Release v2]
    D --> E[helm upgrade]
    E --> F[Release v3]
    F --> G{Problem?}
    G -->|Yes| H[helm rollback]
    H --> I[Release v4<br/>config from v2]
    G -->|No| J[Running]
    style A fill:#3498db,color:#fff
    style H fill:#e74c3c,color:#fff
    style J fill:#2ecc71,color:#fff
```
Helm release lifecycle. Each upgrade creates a new revision. Rollback creates a new revision with the configuration of a previous one.
Install a chart as a named release:
```shell
helm install my-webapp ./mychart \
  --namespace production \
  --values production-values.yaml
```
Override individual values on the command line:
```shell
helm install my-webapp ./mychart --set replicaCount=5
```
Upgrade a running release:
```shell
helm upgrade my-webapp ./mychart \
  --values production-values.yaml \
  --set image.tag="1.3.0"
```
Check history and roll back:
```shell
helm history my-webapp
# REVISION  STATUS      DESCRIPTION
# 1         superseded  Install complete
# 2         superseded  Upgrade complete
# 3         deployed    Upgrade complete

helm rollback my-webapp 2
```
Preview changes before applying them:
```shell
helm upgrade my-webapp ./mychart --dry-run --debug
helm template my-webapp ./mychart   # render locally without a cluster
```
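To inspect what a release is actually running, helm get retrieves the stored state of the current revision:

```shell
helm get values my-webapp     # user-supplied values for this revision
helm get manifest my-webapp   # rendered manifests as applied to the cluster
helm status my-webapp         # release status and the chart's NOTES.txt output
```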
Chart repositories
Public charts live in repositories. Add one, search it, and install:
```shell
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm search repo bitnami/nginx
helm install my-nginx bitnami/nginx --version 15.1.0
```
OCI registries also work as chart storage since Helm 3.8:
```shell
helm push mychart-0.3.0.tgz oci://myregistry.io/charts
helm install my-webapp oci://myregistry.io/charts/mychart --version 0.3.0
```
Pin chart versions in production. Floating versions lead to surprises.
Helm hooks
Hooks let you run Jobs or other resources at specific points in a release lifecycle. Annotate a resource to make it a hook:
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-db-migrate
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-weight": "0"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      containers:
        - name: migrate
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          command: ["python", "manage.py", "migrate"]
      restartPolicy: Never
  backoffLimit: 3
```
Available hook points:
| Hook | Fires when |
|---|---|
| pre-install | After templates render, before any resources are created |
| post-install | After all resources are loaded into Kubernetes |
| pre-upgrade | After templates render, before any resources are updated |
| post-upgrade | After all resources are upgraded |
| pre-rollback | Before a rollback is executed |
| post-rollback | After rollback completes |
| pre-delete | Before a release is deleted |
| post-delete | After a release is deleted |
| test | When helm test is invoked |
The hook-weight annotation controls ordering when multiple hooks fire at the same point. Lower weights run first. The hook-delete-policy controls cleanup: hook-succeeded deletes the resource after success, hook-failed deletes on failure, and before-hook-creation deletes the previous hook resource before creating a new one.
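The test hook from the table deserves an example. A sketch of a smoke-test Pod, assuming the chart's Service is named after the release as in the templates above; helm test my-webapp runs it against the live release:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: {{ .Release.Name }}-smoke-test
  annotations:
    "helm.sh/hook": test
spec:
  containers:
    - name: smoke
      image: busybox:1.36
      # Hypothetical check: the webapp Service answers over HTTP
      command: ["wget", "-qO-", "http://{{ .Release.Name }}-webapp:{{ .Values.service.port }}"]
  restartPolicy: Never
```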
Helmfile for multi-chart orchestration
Real clusters run many charts together. Helmfile declares your entire cluster state in a single file:
```yaml
repositories:
  - name: bitnami
    url: https://charts.bitnami.com/bitnami

environments:
  production:
    values:
      - environments/production.yaml
  staging:
    values:
      - environments/staging.yaml

releases:
  - name: postgres
    namespace: database
    chart: bitnami/postgresql
    version: 12.1.9
    values:
      - values/postgres.yaml
      - values/postgres-{{ .Environment.Name }}.yaml

  - name: redis
    namespace: cache
    chart: bitnami/redis
    version: 17.3.7
    values:
      - values/redis.yaml

  - name: webapp
    namespace: app
    chart: ./charts/webapp
    needs:
      - database/postgres
      - cache/redis
    values:
      - values/webapp.yaml
      - values/webapp-{{ .Environment.Name }}.yaml
```
The needs field defines dependency ordering. Helmfile deploys postgres and redis first, then webapp.
Apply the entire stack with one command:
```shell
helmfile -e production sync
```
Diff before applying to see what will change:
```shell
helmfile -e production diff
```
Destroy everything:
```shell
helmfile -e production destroy
```
Helmfile also supports selectors for partial deploys:
```shell
helmfile -e staging -l name=webapp sync
```
What comes next
Helm manages chart packaging and release lifecycle, but someone still has to run helm upgrade. In a GitOps workflow, a tool like ArgoCD watches your Git repository and automatically reconciles the cluster state with what is committed. No manual commands, no CI pipeline pushing to the cluster. The next post covers GitOps with ArgoCD, where your Git repository becomes the single source of truth for everything running in your cluster.