# Kubernetes security in depth
*In this series (10 parts)*
Kubernetes manages containers at scale. It also manages the attack surface at scale. A misconfigured pod can escalate to cluster-wide compromise. A missing network policy allows lateral movement. A permissive RBAC role grants access beyond what any workload needs.
## The CIS Kubernetes Benchmark
The Center for Internet Security publishes a comprehensive benchmark for Kubernetes hardening. It covers the API server, etcd, controller manager, scheduler, and worker nodes.
### kube-bench
kube-bench automates CIS benchmark checks:
```bash
# Run all checks
kube-bench run

# Run checks for a specific component
kube-bench run --targets master

# Output as JSON
kube-bench run --json > benchmark-results.json
```
Common failures include:
| Check | Issue | Fix |
|---|---|---|
| 1.2.6 | Anonymous auth enabled | `--anonymous-auth=false` |
| 1.2.16 | Audit logging disabled | Configure an audit policy |
| 4.2.1 | Kubelet anonymous auth enabled | `--anonymous-auth=false` in the kubelet config |
| 4.2.6 | Kernel defaults not protected | `--protect-kernel-defaults=true` |
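The JSON output lends itself to automated triage. A minimal sketch in Python, assuming kube-bench's `Controls` → `tests` → `results` JSON layout (verify the field names against the kube-bench version you run):

```python
def failed_checks(report: dict) -> list[dict]:
    """Collect failed checks from a kube-bench JSON report.

    Assumes the Controls/tests/results layout; field names
    should be checked against your kube-bench version.
    """
    failures = []
    for control in report.get("Controls", []):
        for section in control.get("tests", []):
            for result in section.get("results", []):
                if result.get("status") == "FAIL":
                    failures.append({
                        "id": result.get("test_number"),
                        "desc": result.get("test_desc"),
                        "remediation": result.get("remediation"),
                    })
    return failures

# Hand-written report fragment for illustration:
sample = {
    "Controls": [{
        "tests": [{
            "results": [
                {"test_number": "1.2.6", "test_desc": "Anonymous auth",
                 "status": "FAIL", "remediation": "--anonymous-auth=false"},
                {"test_number": "1.2.7", "test_desc": "Authorization mode",
                 "status": "PASS"},
            ]
        }]
    }]
}
print([f["id"] for f in failed_checks(sample)])  # ['1.2.6']
```

Feeding each week's `benchmark-results.json` through a script like this turns the CronJob below into an alerting source rather than a report that nobody reads.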
Run kube-bench whenever a cluster is provisioned, and again on a schedule:
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: kube-bench
spec:
  schedule: "0 6 * * 1"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: kube-bench
            image: aquasec/kube-bench:latest
            command: ["kube-bench", "run", "--json"]
          restartPolicy: Never
          hostPID: true
```
## PodSecurity Standards

The Kubernetes PodSecurity admission controller enforces three security profiles:
**Privileged**: Unrestricted. Used only for system-level workloads such as CNI plugins and storage drivers.

**Baseline**: Prevents known privilege escalations. Blocks `hostNetwork`, `hostPID`, privileged containers, and the most dangerous volume types.

**Restricted**: Maximum hardening. Requires non-root execution and a `RuntimeDefault` (or `Localhost`) seccomp profile, drops all capabilities, and disallows privilege escalation. (A read-only root filesystem, as in the example below, is good practice but not mandated by the profile itself.)
Apply standards at the namespace level:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
```
A restricted pod spec:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure-app
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: myapp:v1.0
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      runAsUser: 1000
      capabilities:
        drop: ["ALL"]
    volumeMounts:
    - name: tmp
      mountPath: /tmp
  volumes:
  - name: tmp
    emptyDir: {}
```
The `emptyDir` volume for `/tmp` is necessary because the root filesystem is read-only. Applications that write temporary files need explicit writable mount points.
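The restricted profile boils down to a handful of checks over the pod spec. A simplified sketch of that logic in Python (my own illustration, not the actual PodSecurity admission code, which covers many more fields):

```python
def restricted_violations(pod_spec: dict) -> list[str]:
    """Simplified subset of the Restricted profile checks.

    Illustrative only: the real PodSecurity admission plugin
    also checks initContainers, host namespaces, volume types,
    and container-level overrides of pod-level settings.
    """
    problems = []
    pod_sc = pod_spec.get("securityContext", {})
    if not pod_sc.get("runAsNonRoot"):
        problems.append("pod must set runAsNonRoot: true")
    if pod_sc.get("seccompProfile", {}).get("type") not in ("RuntimeDefault", "Localhost"):
        problems.append("pod must set a seccomp profile")
    for c in pod_spec.get("containers", []):
        sc = c.get("securityContext", {})
        if sc.get("allowPrivilegeEscalation") is not False:
            problems.append(f"{c['name']}: allowPrivilegeEscalation must be false")
        if "ALL" not in sc.get("capabilities", {}).get("drop", []):
            problems.append(f"{c['name']}: must drop ALL capabilities")
    return problems

secure = {
    "securityContext": {"runAsNonRoot": True,
                        "seccompProfile": {"type": "RuntimeDefault"}},
    "containers": [{"name": "app",
                    "securityContext": {"allowPrivilegeEscalation": False,
                                        "capabilities": {"drop": ["ALL"]}}}],
}
print(restricted_violations(secure))  # []
print(restricted_violations({"containers": [{"name": "app"}]}))
```

Running a check like this in CI catches violations before the admission controller rejects the deployment at rollout time.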
## OPA Gatekeeper
Open Policy Agent (OPA) Gatekeeper extends admission control with custom policies written in Rego.
### Constraint templates
Define what the policy checks:
```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
  - target: admission.k8s.gatekeeper.sh
    rego: |
      package k8srequiredlabels

      violation[{"msg": msg}] {
        provided := {l | input.review.object.metadata.labels[l]}
        required := {l | l := input.parameters.labels[_]}
        missing := required - provided
        count(missing) > 0
        msg := sprintf("Missing required labels: %v", [missing])
      }
```
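The Rego rule is pure set arithmetic: required labels minus provided labels, and a violation fires if the difference is non-empty. The same logic traced in plain Python, for readers new to Rego:

```python
def required_label_violation(object_labels: dict, required: list[str]):
    """Mirror of the Rego rule: report missing required labels.

    Returns the violation message, or None when the object is compliant.
    """
    provided = set(object_labels)          # keys of metadata.labels
    missing = set(required) - provided     # the Rego set difference
    if missing:                            # count(missing) > 0
        return f"Missing required labels: {sorted(missing)}"
    return None

print(required_label_violation({"team": "payments"}, ["team", "cost-center"]))
# Missing required labels: ['cost-center']
print(required_label_violation({"team": "payments", "cost-center": "x"},
                               ["team", "cost-center"]))
# None
```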
### Constraints
Apply the template with specific parameters:
```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: require-team-label
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Namespace"]
  parameters:
    labels: ["team", "cost-center"]
```
Common Gatekeeper policies for security:

- Block images from untrusted registries
- Require resource limits on all containers
- Prevent use of the `latest` tag
- Enforce network policy existence per namespace
- Block services of type `LoadBalancer` without an annotation
## Network policies
By default, every pod can communicate with every other pod. Network policies implement microsegmentation.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: database
    ports:
    - port: 5432
  - to:
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - port: 53
      protocol: UDP
```
This policy allows the API pod to receive traffic only from the frontend on port 8080 and send traffic only to the database on port 5432 and DNS on port 53. All other traffic is denied.
```mermaid
graph LR
  FE[Frontend] -->|8080| API[API Server]
  API -->|5432| DB[(Database)]
  API -->|53/UDP| DNS[kube-dns]
  ATK[Attacker Pod] -.-x|Blocked| API
  API -.-x|Blocked| EXT[External Service]
  style ATK fill:#e74c3c,color:#fff
  style EXT fill:#e74c3c,color:#fff
  style FE fill:#2ecc71
  style API fill:#3498db
  style DB fill:#f39c12
```
Network policy enforcement. Solid arrows show allowed traffic. Dotted arrows show denied traffic.
Start with a default-deny policy for each namespace:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
```
Then add specific allow rules for each workload. This inverts the security model from “allow everything, deny some” to “deny everything, allow some.”
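The inverted model is easy to state precisely: a connection is allowed only if some policy selecting the pod explicitly permits it; absence of a matching rule means denial. A toy sketch of that evaluation (not how a real CNI plugin implements enforcement):

```python
from dataclasses import dataclass, field

@dataclass
class AllowRule:
    """One ingress allow entry: a permitted peer label and port."""
    peer_app: str
    port: int

@dataclass
class PodPolicy:
    """Deny-everything baseline plus explicit allow rules."""
    ingress: list = field(default_factory=list)

def ingress_allowed(policy: PodPolicy, peer_app: str, port: int) -> bool:
    # No matching rule means denied: deny everything, allow some.
    return any(r.peer_app == peer_app and r.port == port
               for r in policy.ingress)

# Model of the api-policy above: only frontend on 8080 is allowed.
api_policy = PodPolicy(ingress=[AllowRule("frontend", 8080)])
print(ingress_allowed(api_policy, "frontend", 8080))  # True
print(ingress_allowed(api_policy, "attacker", 8080))  # False
print(ingress_allowed(PodPolicy(), "frontend", 8080))  # False: default deny
```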
## Audit log analysis
The Kubernetes API server logs every request. Audit logs reveal who did what, when, and to which resource.
Configure an audit policy:
```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Metadata only for secrets: RequestResponse would record
# the secret payloads themselves in the audit log.
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets", "configmaps"]
- level: RequestResponse
  resources:
  - group: ""
    resources: ["pods", "services"]
- level: None
  resources:
  - group: ""
    resources: ["endpoints", "events"]
```
Key events to monitor:

- Secret access from unexpected service accounts
- `exec` into running pods
- RBAC role or binding modifications
- Namespace creation or deletion
- Admission webhook configuration changes
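The first item on that list can be hunted offline by filtering exported audit events. A sketch, assuming the standard `audit.k8s.io` event fields (`verb`, `user.username`, `objectRef.resource`); adjust the field paths to your audit backend's output:

```python
import json

# System identities expected to read secrets; an assumed
# allowlist, extend it for your cluster.
TRUSTED = {"system:kube-controller-manager", "system:kube-scheduler"}

def suspicious_secret_reads(audit_lines):
    """Yield audit events where secrets are read by non-system identities."""
    for line in audit_lines:
        event = json.loads(line)
        if (event.get("verb") in ("get", "list")
                and event.get("objectRef", {}).get("resource") == "secrets"
                and event.get("user", {}).get("username") not in TRUSTED):
            yield event

# Two hand-written audit events for illustration:
log = [
    '{"verb": "get", "user": {"username": "system:kube-scheduler"},'
    ' "objectRef": {"resource": "secrets"}}',
    '{"verb": "get", "user": {"username": "dev-alice"},'
    ' "objectRef": {"resource": "secrets", "name": "db-creds"}}',
]
hits = list(suspicious_secret_reads(log))
print([e["user"]["username"] for e in hits])  # ['dev-alice']
```

This is the batch-analysis counterpart of the Falco rule in the next section, which applies the same filter in real time.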
## Falco for Kubernetes

Falco can also consume Kubernetes audit logs, detecting suspicious API server activity:
```yaml
- rule: K8s Secret Access
  desc: Detect access to Kubernetes secrets
  condition: >
    ka.verb in (get, list) and
    ka.target.resource = secrets and
    not ka.user.name in (system:kube-controller-manager,
      system:kube-scheduler)
  output: >
    Secret accessed (user=%ka.user.name
    secret=%ka.target.name ns=%ka.target.namespace)
  priority: WARNING
  source: k8s_audit
```
Combined with runtime Falco rules monitoring system calls, you get visibility into both the Kubernetes control plane and the container runtime.
## What comes next
The next article on secrets management covers the practical patterns for storing, distributing, and rotating secrets across Kubernetes workloads. You will learn Vault deployment patterns, the external-secrets-operator, and emergency response procedures for leaked credentials.