SAST and DAST
Static analysis reads code. Dynamic analysis runs code. Neither is sufficient alone. SAST catches structural flaws like SQL injection patterns and insecure deserialization. DAST finds runtime issues like authentication bypass and server misconfiguration. A mature pipeline uses both.
Static Application Security Testing (SAST)
SAST tools analyze source code, bytecode, or binaries without executing the application. They trace data flows from user input (sources) to dangerous operations (sinks) and flag paths that lack proper sanitization.
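The source-to-sink tracing can be pictured on a toy example. The function names and schema below are invented for illustration; the tainted path runs from the `user_id` parameter (source) into `conn.execute` (sink) via string interpolation, and the parameterized variant is exactly the sanitizing step that breaks the path:

```python
import sqlite3

def get_user(conn: sqlite3.Connection, user_id: str) -> list:
    # SOURCE: user_id arrives from an HTTP request, attacker-controlled.
    # SINK: interpolated straight into the query -- a SAST tool flags this path.
    query = f"SELECT * FROM users WHERE id = {user_id}"
    return conn.execute(query).fetchall()

def get_user_safe(conn: sqlite3.Connection, user_id: str) -> list:
    # Parameterized query: the driver binds the value, so no tainted
    # data reaches the SQL parser and the flagged path disappears.
    return conn.execute("SELECT * FROM users WHERE id = ?", (user_id,)).fetchall()
```

Passing `"1 OR 1=1"` to the first function dumps every row; the second treats it as an opaque value that matches nothing.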
Semgrep
Semgrep matches code patterns using a syntax that mirrors the target language. Unlike regex-based tools, it understands language structure.
# .semgrep.yml
rules:
  - id: sql-injection
    patterns:
      - pattern: |
          cursor.execute($QUERY)
      - pattern-not: |
          cursor.execute($QUERY, $PARAMS)
    message: "Possible SQL injection. Use parameterized queries."
    languages: [python]
    severity: ERROR
This rule catches cursor.execute(f"SELECT * FROM users WHERE id = {user_id}") but allows cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,)).
Run Semgrep in CI:
# GitHub Actions
- name: Semgrep scan
  uses: returntocorp/semgrep-action@v1
  with:
    config: >-
      p/default
      p/owasp-top-ten
      .semgrep.yml
The p/default and p/owasp-top-ten registries contain community-maintained rules covering common vulnerability classes. Custom rules in .semgrep.yml handle project-specific patterns.
Bandit for Python
Bandit specializes in Python security analysis. It categorizes findings by severity and confidence:
bandit -r src/ -ll -ii
The -ll flag shows only medium and high severity issues. The -ii flag shows only medium and high confidence results. This combination drastically reduces noise.
Common Bandit findings:
# B608: SQL injection via string formatting
query = "SELECT * FROM users WHERE name = '%s'" % name
# B605: Starting process with shell=True
subprocess.call(cmd, shell=True)
# B303: Use of insecure MD5 hash
hashlib.md5(data).hexdigest()
# B324: Use of insecure SHA1 hash
hashlib.sha1(data).hexdigest()
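Each of these findings has a direct remediation. A sketch of the safe counterparts — the `run_tool` and `digest` helper names are invented here, and the DB-API placeholder style varies by driver:

```python
import hashlib
import subprocess

# B608 fix: bind values instead of formatting them into the query string.
# (cursor is any DB-API cursor; '?' is the sqlite3 placeholder style)
# cursor.execute("SELECT * FROM users WHERE name = ?", (name,))

# B605 fix: pass an argument list so no shell ever parses the input.
def run_tool(path: str) -> subprocess.CompletedProcess:
    return subprocess.run(["ls", "-l", path], capture_output=True, text=True)

# B303/B324 fix: use a modern hash for integrity checks.
def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()
```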
ESLint security plugin
For JavaScript and TypeScript, the ESLint security plugin catches patterns specific to the Node.js ecosystem:
// detect-eval-with-expression
eval(userInput); // flagged
// detect-non-literal-require
require(variable); // flagged - potential path traversal
// detect-unsafe-regex
const re = /^(a+)+$/; // flagged - ReDoS vulnerable
// detect-no-csrf-before-method-override
app.use(express.csrf());
app.use(express.methodOverride()); // flagged - method override can bypass the CSRF check
CodeQL
GitHub’s CodeQL treats code as data. It compiles source into a database that you query with a SQL-like language:
import javascript
from CallExpr call, StringLiteral arg
where call.getCalleeName() = "eval"
and arg = call.getArgument(0)
and arg.getValue().regexpMatch(".*\\$\\{.*")
select call, "eval() called with template literal"
CodeQL excels at finding complex vulnerabilities that span multiple files and function calls. It is free for public repositories on GitHub.
Dynamic Application Security Testing (DAST)
DAST tools probe running applications by sending crafted requests and analyzing responses. They discover vulnerabilities that only manifest at runtime: misconfigured headers, authentication flaws, and server-side injection.
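The passive half of that probing amounts to inspecting every response the scanner sees. A minimal sketch in the spirit of a baseline header check — the required-header set here is illustrative, not ZAP's actual rule list:

```python
# Headers a passive scan might require; illustrative, not exhaustive.
REQUIRED_HEADERS = {
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "Strict-Transport-Security",
}

def missing_security_headers(headers: dict[str, str]) -> set[str]:
    # Header names are case-insensitive, so normalize before comparing.
    present = {name.title() for name in headers}
    return {h for h in REQUIRED_HEADERS if h not in present}
```

A real scanner runs dozens of such checks per response, plus the active attacks that passive inspection cannot replace.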
OWASP ZAP
ZAP is the standard open-source DAST tool. It operates in multiple modes:
Baseline scan for CI pipelines:
docker run -t ghcr.io/zaproxy/zaproxy:stable zap-baseline.py \
-t https://staging.example.com \
-c zap-config.conf \
-J zap-report.json
The baseline scan checks for passive issues: missing security headers, cookie flags, information disclosure. It completes in minutes and is safe for every build.
Full scan for scheduled security testing:
docker run -t ghcr.io/zaproxy/zaproxy:stable zap-full-scan.py \
-t https://staging.example.com \
-c zap-config.conf \
-J zap-report.json
The full scan includes active attacks: SQL injection probes, XSS payloads, directory traversal attempts. Run this against staging environments only, never production.
API scan for REST and GraphQL endpoints:
docker run -t ghcr.io/zaproxy/zaproxy:stable zap-api-scan.py \
-t https://staging.example.com/openapi.json \
-f openapi
ZAP reads the OpenAPI spec and generates targeted tests for each endpoint, respecting parameter types and authentication requirements.
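The expansion from spec to targeted tests can be sketched in a few lines. The spec fragment and payload list below are invented for illustration; ZAP's real generator handles request bodies, schemas, authentication, and far more payload classes:

```python
# Toy OpenAPI fragment: one path parameter, one query parameter.
SPEC = {
    "paths": {
        "/users/{id}": {"get": {"parameters": [{"name": "id", "in": "path"}]}},
        "/search": {"get": {"parameters": [{"name": "q", "in": "query"}]}},
    }
}

INJECTION_PAYLOADS = ["' OR '1'='1", "<script>alert(1)</script>"]

def generate_probes(spec: dict) -> list[str]:
    """Expand each documented parameter into one probe per payload."""
    probes = []
    for path, methods in spec["paths"].items():
        for method, op in methods.items():
            for param in op.get("parameters", []):
                for payload in INJECTION_PAYLOADS:
                    if param["in"] == "path":
                        url = path.replace("{" + param["name"] + "}", payload)
                        probes.append(f"{method.upper()} {url}")
                    else:
                        probes.append(f"{method.upper()} {path}?{param['name']}={payload}")
    return probes
```

Because the probes come from the spec, endpoints a crawler would never discover still get tested.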
Integrating into CI
The key decision is where in the pipeline each tool runs and whether failures block the build.
graph LR
    A[Push] --> B[SAST]
    B --> C[Build]
    C --> D[Deploy to staging]
    D --> E[DAST baseline]
    E --> F[DAST full<br/>scheduled]
    B -->|Critical: block| B1[Fail build]
    B -->|Medium: warn| B2[PR comment]
    E -->|High: block| E1[Fail pipeline]
    E -->|Medium: log| E2[Dashboard]
    style B1 fill:#e74c3c,color:#fff
    style E1 fill:#e74c3c,color:#fff
    style B2 fill:#f39c12
    style E2 fill:#f39c12
SAST runs on every push, blocking on critical findings. DAST baseline runs after staging deployment. Full DAST scans run on a schedule.
A practical GitHub Actions workflow combining both:
name: Security Scans
on: [push, pull_request]

jobs:
  sast:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Semgrep
        uses: returntocorp/semgrep-action@v1
        with:
          config: p/default

  dast:
    needs: [deploy-staging]
    runs-on: ubuntu-latest
    steps:
      - name: ZAP Baseline
        uses: zaproxy/action-baseline@v0.10.0
        with:
          target: "https://staging.example.com"
          fail_action: true
Managing false positives
Every SAST tool produces false positives. Unmanaged, they create alert fatigue and erode trust in the tooling. Handle them systematically.
Inline suppression with justification:
# nosemgrep: sql-injection - query uses allowlisted table names only
cursor.execute(f"SELECT * FROM {ALLOWED_TABLES[table_key]}")
Centralized suppression files:
# .semgrep-ignore.yml
- id: sql-injection
  paths:
    - src/migrations/  # Generated migration files
    - tests/           # Test fixtures
Triage workflow:
- New finding appears in CI
- Security champion reviews within 48 hours
- Classify as: true positive (fix), false positive (suppress with reason), or accepted risk (document and track)
- Review accepted risks quarterly
Track your false positive rate over time. A rate above 30% indicates the rules need tuning. A sustained rate below 10% may mean the rules are scoped too narrowly and are missing real issues.
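Those thresholds are easy to automate as part of triage. A sketch, assuming each finding carries one of the three classification labels from the workflow above (`"tp"`, `"fp"`, `"accepted"`):

```python
def false_positive_rate(triaged: list[str]) -> float:
    """triaged holds one label per finding: 'tp', 'fp', or 'accepted'."""
    if not triaged:
        return 0.0
    return triaged.count("fp") / len(triaged)

def tuning_signal(rate: float) -> str:
    # Thresholds mirror the 30% / 10% guidance above.
    if rate > 0.30:
        return "tune rules: too noisy"
    if rate < 0.10:
        return "review rules: possibly too narrow"
    return "healthy"
```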
Tuning reduces false positives dramatically. Invest time in customizing rules for your codebase rather than accepting default configurations.
SAST vs DAST comparison
| Aspect | SAST | DAST |
|---|---|---|
| Input | Source code | Running application |
| Speed | Fast (minutes) | Slow (minutes to hours) |
| Coverage | All code paths | Only reachable endpoints |
| False positives | Higher | Lower |
| Environment needed | None | Deployed application |
| Best at | Injection, hardcoded secrets | Auth issues, misconfig |
Use both. SAST catches what DAST cannot reach. DAST catches what SAST cannot simulate.
What comes next
The next article on software supply chain security covers the risks hiding in your dependencies. You will learn about dependency confusion attacks, SBOMs, artifact signing with Sigstore, and the SLSA framework for build integrity.