Compliance as code
Compliance audits consume weeks of engineering time when done manually. Screenshots, spreadsheets, and interviews produce evidence that is stale before the audit concludes. Compliance as code replaces this with policies that run continuously, producing machine-readable evidence that is always current.
OPA and Rego basics
Open Policy Agent (OPA) is a general-purpose policy engine. It decouples policy decisions from application logic. Rego is its declarative query language.
How OPA works
OPA evaluates policies against structured data (JSON). Applications send a query with input data. OPA returns a decision.
Input (JSON) + Policy (Rego) = Decision (JSON)
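For intuition, the evaluation model can be sketched in plain Python: a policy is a pure function from structured input to a structured decision. This is illustrative only; real deployments query OPA over its REST API, CLI, or as an embedded library.

```python
# Illustrative sketch of OPA's model: input (dict) + policy (function) = decision (dict).
def policy(input_doc: dict) -> dict:
    buckets = input_doc.get("resource", {}).get("aws_s3_bucket", {})
    # One violation message per bucket missing an encryption block
    deny = [
        f"S3 bucket '{name}' lacks server-side encryption"
        for name, cfg in buckets.items()
        if "server_side_encryption_configuration" not in cfg
    ]
    return {"allow": len(deny) == 0, "deny": deny}

decision = policy({"resource": {"aws_s3_bucket": {"data": {}}}})
# decision["allow"] is False; decision["deny"] holds one violation message
```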
Writing Rego policies
Rego reads like a set of logical assertions. If all assertions in a rule body are true, the rule’s head is true.
package terraform.s3

# Deny S3 buckets without encryption
deny[msg] {
    resource := input.resource.aws_s3_bucket[name]
    not resource.server_side_encryption_configuration
    msg := sprintf("S3 bucket '%s' lacks server-side encryption", [name])
}
# Deny S3 buckets without versioning
deny[msg] {
    resource := input.resource.aws_s3_bucket[name]
    not versioning_enabled(resource)
    msg := sprintf("S3 bucket '%s' does not have versioning enabled", [name])
}

# Helper rule: negating an expression containing a wildcard is unsafe
# in Rego, so the existence check lives in its own rule.
versioning_enabled(resource) {
    resource.versioning[_].enabled == true
}
# Deny S3 buckets without logging
deny[msg] {
    resource := input.resource.aws_s3_bucket[name]
    not resource.logging
    msg := sprintf("S3 bucket '%s' does not have access logging enabled", [name])
}
Each deny rule produces a violation message. An empty set of messages means the configuration is compliant.
Testing Rego policies
Rego policies are code. Test them like code:
package terraform.s3_test

import data.terraform.s3

test_deny_unencrypted_bucket {
    result := s3.deny with input as {
        "resource": {
            "aws_s3_bucket": {
                "my-bucket": {}
            }
        }
    }
    count(result) > 0
}

test_allow_encrypted_bucket {
    result := s3.deny with input as {
        "resource": {
            "aws_s3_bucket": {
                "my-bucket": {
                    "server_side_encryption_configuration": {
                        "rule": {
                            "apply_server_side_encryption_by_default": {
                                "sse_algorithm": "aws:kms"
                            }
                        }
                    },
                    "versioning": [{"enabled": true}],
                    "logging": {"target_bucket": "logs"}
                }
            }
        }
    }
    count(result) == 0
}
Run tests with:
opa test policies/ -v
Conftest
Conftest wraps OPA for configuration file testing. It understands Terraform, Kubernetes YAML, Dockerfiles, and many other formats.
# Test Terraform plan
terraform plan -out=tfplan.binary
terraform show -json tfplan.binary > tfplan.json
conftest test tfplan.json -p policies/terraform/
# Test Kubernetes manifests
conftest test deployment.yaml -p policies/kubernetes/
# Test Dockerfiles
conftest test Dockerfile -p policies/docker/
A Kubernetes policy for Conftest:
package kubernetes

deny[msg] {
    input.kind == "Deployment"
    container := input.spec.template.spec.containers[_]
    not container.securityContext.readOnlyRootFilesystem == true
    msg := sprintf("Container '%s' must use readOnlyRootFilesystem", [container.name])
}

deny[msg] {
    input.kind == "Deployment"
    container := input.spec.template.spec.containers[_]
    not container.resources.limits.memory
    msg := sprintf("Container '%s' must set memory limits", [container.name])
}

deny[msg] {
    input.kind == "Deployment"
    container := input.spec.template.spec.containers[_]
    endswith(container.image, ":latest")
    msg := sprintf("Container '%s' must not use :latest tag", [container.name])
}
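The same three checks can be mirrored in plain Python, which is handy for unit-testing the logic outside OPA or for teams not yet running Conftest. This is a hedged sketch, not Conftest's own behavior; field names follow the standard Kubernetes Deployment schema.

```python
# Python mirror of the three Conftest rules above (illustrative sketch).
def deny_messages(manifest: dict) -> list:
    msgs = []
    if manifest.get("kind") != "Deployment":
        return msgs
    containers = (manifest.get("spec", {})
                          .get("template", {})
                          .get("spec", {})
                          .get("containers", []))
    for c in containers:
        name = c.get("name", "<unnamed>")
        # Rule 1: read-only root filesystem must be explicitly true
        if c.get("securityContext", {}).get("readOnlyRootFilesystem") is not True:
            msgs.append(f"Container '{name}' must use readOnlyRootFilesystem")
        # Rule 2: a memory limit must be set
        if not c.get("resources", {}).get("limits", {}).get("memory"):
            msgs.append(f"Container '{name}' must set memory limits")
        # Rule 3: no mutable :latest tags
        if c.get("image", "").endswith(":latest"):
            msgs.append(f"Container '{name}' must not use :latest tag")
    return msgs
```

An empty return value means the manifest passes, matching the "empty set of messages" semantics of the Rego rules.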
Checkov
Checkov scans infrastructure-as-code for security and compliance issues. It ships with 1000+ built-in rules mapped to compliance frameworks.
# Scan Terraform
checkov -d ./terraform/ --framework terraform
# Scan with specific checks
checkov -d ./terraform/ --check CKV_AWS_18,CKV_AWS_19,CKV_AWS_21
# Output as JUnit XML for CI
checkov -d ./terraform/ -o junitxml > checkov-results.xml
# Scan Kubernetes manifests
checkov -d ./k8s/ --framework kubernetes
Checkov maps findings directly to compliance frameworks:
Check: CKV_AWS_18: "Ensure the S3 bucket has access logging enabled"
PASSED for resource: aws_s3_bucket.logs
FAILED for resource: aws_s3_bucket.data
Compliance frameworks:
- SOC 2 Type II: CC6.1, CC7.2
- ISO 27001: A.12.4.1
- PCI DSS: 10.2
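The JUnit XML output is straightforward to gate on in CI. A stdlib-only sketch, counting failures so the pipeline can exit nonzero; the element names follow the general JUnit XML convention, so confirm them against your Checkov version's actual output:

```python
import xml.etree.ElementTree as ET

def count_failures(junit_xml: str) -> int:
    # Failed checks appear as <failure> children of <testcase> elements
    root = ET.fromstring(junit_xml)
    return sum(1 for _ in root.iter("failure"))

# Hypothetical sample in the JUnit XML shape described above
sample = """<testsuites>
  <testsuite name="checkov">
    <testcase name="CKV_AWS_18 data"><failure message="no access logging"/></testcase>
    <testcase name="CKV_AWS_18 logs"/>
  </testsuite>
</testsuites>"""
```

A CI step would call `count_failures` on `checkov-results.xml` and fail the build when the count is above zero.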
CIS benchmarks automated
The Center for Internet Security publishes benchmarks for AWS, Azure, GCP, Kubernetes, and operating systems. Automating these benchmarks is the foundation of most compliance programs.
# AWS CIS benchmark with Prowler
pip install prowler
prowler aws --compliance cis_2.0_aws
# Kubernetes CIS benchmark
kube-bench run --benchmark cis-1.8
# Generate compliance report
prowler aws --compliance cis_2.0_aws -M json-ocsf -o ./reports/
Prowler maps each check to the CIS control number, provides remediation guidance, and outputs machine-readable results.
Pipeline integration
graph LR
    A[Developer Push] --> B[Conftest:<br/>IaC policies]
    B --> C[Checkov:<br/>CIS benchmarks]
    C --> D[Terraform Plan]
    D --> E[OPA:<br/>Plan validation]
    E --> F{All policies pass?}
    F -->|Yes| G[Terraform Apply]
    F -->|No| H[Block + Notify]
    B --> B1[Evidence stored]
    C --> C1[Evidence stored]
    E --> E1[Evidence stored]
    style H fill:#e74c3c,color:#fff
    style G fill:#2ecc71,color:#fff
    style B1 fill:#f0f0f0
    style C1 fill:#f0f0f0
    style E1 fill:#f0f0f0
Policy enforcement pipeline. Each stage produces audit evidence. Failures block deployment and notify the team.
A GitHub Actions workflow combining these tools:
name: Compliance Checks

on:
  pull_request:
    paths: ["terraform/**", "k8s/**"]

jobs:
  policy-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Conftest - Kubernetes policies
        run: |
          conftest test k8s/*.yaml -p policies/kubernetes/ -o json > conftest-results.json

      - name: Checkov - Terraform
        uses: bridgecrewio/checkov-action@master
        with:
          directory: terraform/
          output_format: junitxml
          output_file_path: checkov-results.xml

      - name: OPA - Terraform plan
        run: |
          cd terraform && terraform init && terraform plan -out=plan.binary
          terraform show -json plan.binary > plan.json
          opa eval -d policies/terraform/ -i plan.json "data.terraform.deny" --format pretty

      - name: Archive evidence
        uses: actions/upload-artifact@v4
        with:
          name: compliance-evidence
          path: |
            conftest-results.json
            checkov-results.xml
Audit evidence for SOC 2 and ISO 27001
Auditors need evidence that controls are operating effectively over time. Compliance as code provides this through:
Continuous evidence generation. Every pipeline run produces timestamped results. Store these in an immutable evidence repository:
# Upload evidence to S3 with versioning
aws s3 cp compliance-results.json \
s3://audit-evidence/$(date +%Y/%m/%d)/pipeline-${BUILD_ID}.json
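A small helper can derive the dated key and a content digest before upload; storing the digest next to the object lets auditors verify evidence integrity later. The key layout mirrors the shell example above and is an assumption, not a fixed convention:

```python
import hashlib
from datetime import date

def evidence_key(build_id: str, day: date) -> str:
    # Mirrors the $(date +%Y/%m/%d)/pipeline-${BUILD_ID}.json layout above
    return f"{day:%Y/%m/%d}/pipeline-{build_id}.json"

def evidence_digest(payload: bytes) -> str:
    # SHA-256 of the evidence file, stored alongside it for integrity checks
    return hashlib.sha256(payload).hexdigest()
```

The actual upload (for example with `aws s3 cp` or an SDK client) stays unchanged; this only makes the naming and integrity scheme explicit and testable.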
Control mapping. Map each automated check to a specific audit control:
| Check | SOC 2 Control | ISO 27001 Control |
|---|---|---|
| S3 encryption enabled | CC6.1 (Encryption) | A.10.1.1 (Cryptographic controls) |
| IAM MFA required | CC6.1 (Logical access) | A.9.4.2 (Secure log-on) |
| Audit logging enabled | CC7.2 (System monitoring) | A.12.4.1 (Event logging) |
| Vulnerability scanning | CC7.1 (Detection) | A.12.6.1 (Technical vuln mgmt) |
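The mapping table can also live in code, so pipeline findings arrive pre-annotated with the controls they evidence. A sketch whose mapping mirrors the table above (the check identifiers are illustrative names, not tool-specific IDs):

```python
# Control map mirroring the table above; extend per framework as needed.
CONTROL_MAP = {
    "s3_encryption": {"soc2": "CC6.1", "iso27001": "A.10.1.1"},
    "iam_mfa": {"soc2": "CC6.1", "iso27001": "A.9.4.2"},
    "audit_logging": {"soc2": "CC7.2", "iso27001": "A.12.4.1"},
    "vuln_scanning": {"soc2": "CC7.1", "iso27001": "A.12.6.1"},
}

def annotate(finding: dict) -> dict:
    # Attach framework references so each result doubles as audit evidence
    return {**finding, "controls": CONTROL_MAP.get(finding["check"], {})}
```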
Exception management. Not every finding requires immediate remediation. Document exceptions with justification, owner, and review date:
# compliance-exceptions.yaml
exceptions:
  - check: CKV_AWS_18
    resource: aws_s3_bucket.legacy_app
    justification: "Legacy application scheduled for decommission in Q2"
    owner: "@platform-team"
    review_date: "2026-06-30"
    approved_by: "@security-lead"
Track exceptions over time. Expiring exceptions trigger reviews.
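The review-date check itself is a few lines. A sketch that flags expired entries; field names match compliance-exceptions.yaml above, and YAML loading (for example with PyYAML) is omitted to keep this stdlib-only:

```python
from datetime import date

def expired_exceptions(exceptions: list, today: date) -> list:
    # An exception whose review_date has passed must be re-approved or remediated
    return [e for e in exceptions
            if date.fromisoformat(e["review_date"]) < today]

open_exceptions = [
    {"check": "CKV_AWS_18", "resource": "aws_s3_bucket.legacy_app",
     "review_date": "2026-06-30"},
]
```

Run on a schedule, this turns the exception register into an active queue instead of a document that quietly goes stale.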
Full automation reduces audit preparation from weeks to hours. Evidence is always available, always current, and always machine-readable.
What comes next
The final article in this series covers incident response for DevSecOps. You will learn the differences between security and operational incidents, how to collect forensic evidence in cloud environments, and how to use infrastructure as code for post-incident remediation.