Incident response
In this series (16 parts)
- How attackers think: the attacker mindset
- Networking fundamentals for security
- Cryptography fundamentals
- Public key infrastructure and certificates
- Authentication and authorization
- Web application security: OWASP Top 10
- Network attacks and defenses
- Linux privilege escalation
- Windows security fundamentals
- Malware types and analysis basics
- Reconnaissance and OSINT
- Exploitation basics and CVEs
- Post-exploitation and persistence
- Defensive security: hardening and monitoring
- Incident response
- CTF skills and practice labs
When a security incident happens, and it will, the difference between a minor disruption and a catastrophe is how you respond. Incident response (IR) is a structured process that ensures you contain the damage, preserve evidence, and recover operations in a methodical way. Panic and ad-hoc firefighting make things worse.
Prerequisites
You should understand defensive security, post-exploitation techniques, and Linux logs and monitoring.
The IR lifecycle
graph LR
  A[Preparation] --> B[Detection & Analysis]
  B --> C[Containment]
  C --> D[Eradication]
  D --> E[Recovery]
  E --> F[Lessons Learned]
  F -->|Improve| A
  style A fill:#64b5f6,stroke:#1976d2,color:#000
  style C fill:#ffb74d,stroke:#f57c00,color:#000
  style F fill:#81c784,stroke:#388e3c,color:#000
Each phase has specific objectives and actions, and skipping one has predictable consequences: skip eradication and you invite reinfection; rush containment and you risk destroying evidence.
Phase 1: Preparation
Preparation happens before an incident. If you are building your IR capability during an active breach, you are already behind.
What to prepare:
- IR plan: documented procedures, escalation paths, communication templates
- Contact list: who to call (IR team, management, legal, PR, law enforcement)
- Tools: forensic toolkit ready to deploy (disk imaging, memory acquisition, network capture)
- Logging: centralized log collection, sufficient retention (90+ days)
- Backups: tested, offline backups that cannot be reached by ransomware
- Baseline: know what “normal” looks like so you can spot anomalies
# Preparation: document your normal baseline
mkdir -p /root/baseline
# Save a snapshot of normal processes
ps aux > /root/baseline/processes.txt
# Save normal network connections
ss -tlnp > /root/baseline/listening-ports.txt
# Save normal cron jobs
for user in $(cut -d: -f1 /etc/passwd); do
    crontab -l -u "$user" 2>/dev/null >> /root/baseline/crontabs.txt
done
# Save a known-good copy of authorized SSH keys (used later during eradication)
cp /root/.ssh/authorized_keys /root/baseline/authorized_keys
# Save file hashes of critical binaries
sha256sum /usr/bin/ssh /usr/sbin/sshd /usr/bin/sudo > /root/baseline/binary-hashes.txt
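During an incident, the value of these baselines is the diff. A small helper like the following (a sketch; `compare_snapshots` is a hypothetical name, not a standard tool) prints whatever appears in a fresh snapshot but not in the baseline:

```shell
#!/bin/bash
# compare_snapshots BASELINE CURRENT
# Print lines present in CURRENT but not in BASELINE, i.e. new processes,
# new listening ports, or new cron entries since the baseline was taken.
compare_snapshots() {
    # comm needs sorted input; -13 keeps lines unique to the second file
    comm -13 <(sort "$1") <(sort "$2")
}

# Example: what is listening now that was not listening at baseline time?
# ss -tlnp > /tmp/listening-now.txt
# compare_snapshots /root/baseline/listening-ports.txt /tmp/listening-now.txt
```

Anything this prints is not automatically malicious, but it is exactly the short list you want to investigate first.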
Phase 2: Detection and analysis
Detection sources:
- SIEM alerts
- EDR alerts
- User reports (“my files are encrypted,” “I got a weird email”)
- Third-party notification (law enforcement, security researchers)
- Log anomalies
Initial triage questions:
- What is the scope? One machine or many?
- When did it start? (check timestamps in logs)
- What type of incident? (malware, unauthorized access, data breach, DDoS)
- Is the attack still active?
- What data or systems are at risk?
# Quick triage on a suspicious Linux system
# 1. Who is logged in right now?
w
who
# 2. What processes are running?
ps auxf
# 3. What network connections are active?
ss -tnp
# 4. What changed recently?
find / -mtime -1 -type f 2>/dev/null | grep -v "/proc\|/sys\|/run" | head -50
# 5. Check auth logs for suspicious activity
grep "Accepted\|Failed" /var/log/auth.log | tail -30
# 6. Check for known persistence mechanisms
crontab -l
cat /etc/crontab
ls -la /etc/cron.d/
cat /root/.ssh/authorized_keys
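The triage commands above are easy to fumble under pressure. A small collection script (a sketch; the function name and directory layout are assumptions) runs them all and saves the output before anything changes:

```shell
#!/bin/bash
# One-shot triage collector: run the checks above and keep the output.
collect_triage() {
    outdir="${1:-/tmp/triage-$(date +%Y%m%d-%H%M%S)}"
    mkdir -p "$outdir"
    w                   > "$outdir/logged-in.txt"   2>&1
    ps auxf             > "$outdir/processes.txt"   2>&1
    ss -tnp             > "$outdir/connections.txt" 2>&1
    ls -la /etc/cron.d/ > "$outdir/cron-d.txt"      2>&1
    echo "$outdir"
}
```

Collect first, analyze second: if the attacker notices you and cleans up, you still have the data.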
Severity classification:
| Severity | Criteria | Response time |
|---|---|---|
| Critical | Active data breach, ransomware spreading, production down | Immediate |
| High | Confirmed compromise, no active spread | Within 1 hour |
| Medium | Suspicious activity, unconfirmed | Within 4 hours |
| Low | Policy violation, minor anomaly | Next business day |
Phase 3: Containment
Stop the bleeding without destroying evidence.
Short-term containment:
# Isolate the compromised system from the network
# Option 1: Firewall rules (keeps system running for analysis)
# Add the IR team exceptions FIRST: if you append the DROP rules before the
# ACCEPT rules exist, you will cut off your own remote session
sudo iptables -I INPUT -s <IR_TEAM_IP> -j ACCEPT
sudo iptables -I OUTPUT -d <IR_TEAM_IP> -j ACCEPT
sudo iptables -A INPUT -j DROP
sudo iptables -A OUTPUT -j DROP
# Option 2: Disable the network interface (more aggressive)
sudo ip link set eth0 down
⚠ Do NOT shut down the system unless absolutely necessary. Shutting down destroys volatile evidence in memory (running processes, network connections, encryption keys).
Preserve evidence before making changes:
# Create a working directory first (ideally on external media, not the
# compromised disk)
mkdir -p /tmp/incident
# Capture running processes
ps auxf > /tmp/incident/processes.txt
# Capture network connections
ss -tnp > /tmp/incident/connections.txt
ss -tlnp > /tmp/incident/listening.txt
# Capture memory (if tools are available)
# Using LiME: sudo insmod lime.ko "path=/tmp/incident/memory.lime format=lime"
# Create a disk image for forensic analysis
# Using dd: sudo dd if=/dev/sda of=/mnt/forensic/disk-image.dd bs=4M status=progress
Long-term containment:
- Block attacker IPs at the firewall
- Reset compromised credentials
- Revoke compromised SSH keys and certificates
- Increase monitoring on systems that communicated with the compromised host
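When there are many attacker IPs to block, generating the firewall commands as a reviewable dry run avoids typos at 3 AM. This is a sketch; `block_ip_cmds` is a hypothetical helper, and the IPs are the documentation-range examples used elsewhere in this article:

```shell
#!/bin/bash
# Emit (but do not execute) DROP rules for a list of attacker IPs.
# Review the output, then pipe it to "sudo sh" to apply.
block_ip_cmds() {
    for ip in "$@"; do
        echo "iptables -I INPUT  -s $ip -j DROP"
        echo "iptables -I OUTPUT -d $ip -j DROP"
    done
}

block_ip_cmds 203.0.113.5 198.51.100.23
# block_ip_cmds 203.0.113.5 198.51.100.23 | sudo sh   # apply after review
```

The review step matters: a mistyped rule here can block legitimate traffic or, worse, fail silently while you believe containment is in place.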
Phase 4: Eradication
Remove the attacker’s access and tools completely.
# Remove persistence mechanisms
# 1. Malicious cron jobs
#    (note: -r wipes the user's ENTIRE crontab; use crontab -e -u <user>
#    to remove individual entries if legitimate jobs must survive)
crontab -r -u compromised_user
# 2. Unauthorized SSH keys
# Compare against known-good authorized_keys
diff /root/.ssh/authorized_keys /root/baseline/authorized_keys
# 3. Malicious systemd services
systemctl list-unit-files --state=enabled | grep suspicious
systemctl disable --now suspicious.service
rm /etc/systemd/system/suspicious.service
# 4. Backdoor binaries
# Compare hashes against baseline
sha256sum /usr/bin/ssh /usr/sbin/sshd /usr/bin/sudo
# Compare with /root/baseline/binary-hashes.txt
# If different, reinstall from package: sudo apt install --reinstall openssh-server
# 5. Unauthorized user accounts
# List accounts that can actually log in (shell is not nologin/false);
# anything you do not recognize is suspect
grep -vE '(nologin|false)$' /etc/passwd
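The hash comparison in step 4 can be automated: `sha256sum -c` reads the manifest written during preparation and checks every file listed in it. A minimal sketch, assuming the manifest path from the Preparation phase:

```shell
#!/bin/bash
# verify_baseline MANIFEST: check files against a sha256sum manifest.
# --quiet prints only mismatches; exit status is non-zero if anything changed.
verify_baseline() {
    sha256sum --quiet -c "$1"
}

# Example: verify_baseline /root/baseline/binary-hashes.txt
# Any FAILED line is a binary that changed since baseline; reinstall it from
# the package manager rather than trusting the file on disk.
```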
Phase 5: Recovery
Bring systems back to normal operation.
- Rebuild compromised systems from known-good images or backups (preferred over cleaning, because you might miss something)
- Reset all credentials that could have been exposed
- Patch the vulnerability that was exploited for initial access
- Monitor closely for reinfection (the attacker may try again)
- Restore from backups if data was lost or encrypted
# Verify the system is clean before returning to production
# Run a full security audit
sudo lynis audit system
# Check all listening ports match expected services
ss -tlnp
# Verify file integrity
debsums -c 2>/dev/null # Debian/Ubuntu: check installed package files
Phase 6: Lessons learned
The most valuable phase, and the most often skipped.
Within 1-2 weeks of the incident, hold a post-incident review:
- Timeline: what happened, when, and in what order?
- Detection: how was it discovered? How long was the attacker in the network?
- Response: what worked? What did not?
- Root cause: what vulnerability or weakness was exploited?
- Improvements: what changes will prevent this from happening again?
Document everything. Update the IR plan based on what you learned.
Chain of custody
If the incident may involve law enforcement or legal proceedings, evidence must be handled properly:
- Document everything: who collected what, when, where, and how
- Create forensic copies (bit-for-bit disk images), work on copies, never originals
- Hash everything: SHA-256 hash of disk images, memory dumps, and log exports
- Secure storage: evidence stored in a locked location with access logs
- Minimize handling: the fewer people who touch evidence, the better
# Create a forensic disk image with hash verification
sudo dd if=/dev/sda of=/mnt/evidence/disk.dd bs=4M status=progress
sha256sum /mnt/evidence/disk.dd > /mnt/evidence/disk.dd.sha256
# Verify later
sha256sum -c /mnt/evidence/disk.dd.sha256
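A minimal custody log can be kept with a helper like this. It is a sketch: the record format, the `log_custody` name, and the default log path are assumptions, so check your organization's evidence-handling policy before relying on it:

```shell
#!/bin/bash
# log_custody ITEM ACTION [LOGFILE]: append one attributable custody record.
log_custody() {
    item="$1"; action="$2"; logfile="${3:-/mnt/evidence/custody.log}"
    # Timestamp (UTC), handler, action, and item on one pipe-delimited line
    printf '%s | %s | %s | %s\n' \
        "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$(whoami)" "$action" "$item" \
        >> "$logfile"
}

# Example:
# log_custody /mnt/evidence/disk.dd "acquired via dd from /dev/sda"
```

An append-only text file is not tamper-proof, but it satisfies the core requirement: every handling event is recorded with who, what, and when.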
Example 1: Triage a suspected compromise
You receive an alert: a server is making unusual outbound connections.
# Step 1: Check current connections
ss -tnp | grep ESTAB
Output:
ESTAB 0 0 192.168.1.100:22 10.0.0.50:54321 users:(("sshd",pid=1234))
ESTAB 0 0 192.168.1.100:43210 203.0.113.5:443 users:(("curl",pid=5678))
ESTAB 0 0 192.168.1.100:43211 198.51.100.23:8080 users:(("python3",pid=9012))
The SSH connection to 10.0.0.50 is expected (admin access). But curl connecting to 203.0.113.5 and python3 connecting to 198.51.100.23 are suspicious.
# Step 2: Investigate the suspicious processes
ps aux | grep -E "5678|9012"
Output:
www-data 5678 0.0 0.0 12345 2345 ? S 03:15 0:00 curl -s https://203.0.113.5/beacon
www-data 9012 0.1 0.5 67890 12345 ? S 03:15 0:05 python3 /tmp/.hidden/agent.py
Red flags:
- Running as www-data (web server compromise)
- Hidden directory in /tmp
- Names like “beacon” and “agent” suggest C2 communication
- Started at 03:15, outside normal working hours
# Step 3: Examine the malicious files
ls -la /tmp/.hidden/
cat /tmp/.hidden/agent.py | head -20
sha256sum /tmp/.hidden/agent.py
# Step 4: Check how the attacker got in
grep "www-data" /var/log/auth.log
journalctl -u nginx --since "03:00" --until "03:30"
# Step 5: Contain
# Block C2 IPs
sudo ufw deny out to 203.0.113.5
sudo ufw deny out to 198.51.100.23
# Kill the malicious processes
kill 5678 9012
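Killing the processes destroys volatile evidence. If time allows, grab what `/proc` exposes first; this is a sketch (`preserve_proc` is a hypothetical helper, and the PID matches this example):

```shell
#!/bin/bash
# preserve_proc PID OUTDIR: save volatile process evidence before killing.
preserve_proc() {
    pid="$1"; out="$2"
    mkdir -p "$out"
    # Full command line (NUL-separated in /proc, so convert to spaces)
    tr '\0' ' ' < "/proc/$pid/cmdline" > "$out/cmdline.txt"
    # Open files and sockets often reveal C2 endpoints and dropped payloads
    ls -l "/proc/$pid/fd" > "$out/fds.txt" 2>&1
    # The on-disk binary may be deleted; /proc/<pid>/exe still points at it
    cp "/proc/$pid/exe" "$out/binary" 2>/dev/null || true
}

# Example: preserve_proc 9012 /tmp/incident/pid-9012 && kill 9012
```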
Example 2: The incident response report
After containment, document the incident:
Incident Report: Web Server Compromise
Date: 2026-07-15
Severity: High
Status: Contained
Timeline:
- 03:12 UTC: Web application exploited via SQL injection in /api/search
- 03:14 UTC: Attacker uploaded web shell to /var/www/html/uploads/cmd.php
- 03:15 UTC: Reverse shell established to 198.51.100.23:8080
- 03:16 UTC: Attacker downloaded C2 agent to /tmp/.hidden/
- 03:17 UTC: C2 beacon started, connecting to 203.0.113.5:443
- 06:30 UTC: Alert triggered by EDR for unusual outbound connections
- 06:45 UTC: IR team begins investigation
- 07:00 UTC: Malicious processes killed, C2 IPs blocked
- 07:30 UTC: Web shell removed, SQL injection patched
Root cause: Unparameterized SQL query in /api/search endpoint
Impact: No data exfiltration confirmed. C2 agent was in reconnaissance phase.
Remediation:
1. Patched SQL injection vulnerability
2. Removed all malicious files
3. Reset www-data service credentials
4. Added WAF rules for SQL injection patterns
5. Scheduled code review for all API endpoints
What comes next
The final article in this series covers CTF skills and practice labs, where you will learn how to practice everything from this series in a safe, legal environment.
For more on the technical detection of attacks, review Defensive security.