Linux from Scratch · Part 8

Standard I/O, pipes, and redirection

In this series (15 parts)
  1. What is Linux and how it differs from other OSes
  2. Installing Linux and setting up your environment
  3. The Linux filesystem explained
  4. Users, groups, and permissions
  5. Essential command line tools
  6. Shell scripting fundamentals
  7. Processes and job control
  8. Standard I/O, pipes, and redirection
  9. The Linux networking stack
  10. Package management and software installation
  11. Disk management and filesystems
  12. Logs and system monitoring
  13. SSH and remote access
  14. Cron jobs and task scheduling
  15. Linux security basics for sysadmins

Every process in Linux has three standard streams: standard input (stdin), standard output (stdout), and standard error (stderr). Pipes and redirection let you connect these streams between processes and files, which is how you build powerful command pipelines from simple tools.

Prerequisites

You should understand processes and be comfortable with command line tools.

The three standard streams

graph LR
  IN["stdin (fd 0)"] --> P[Process]
  P --> OUT["stdout (fd 1)"]
  P --> ERR["stderr (fd 2)"]
  style IN fill:#64b5f6,stroke:#1976d2,color:#000
  style P fill:#f9a825,stroke:#f57f17,color:#000
  style OUT fill:#81c784,stroke:#388e3c,color:#000
  style ERR fill:#ef5350,stroke:#c62828,color:#fff
Stream   File descriptor   Default destination   Purpose
stdin    0                 Keyboard              Input to the program
stdout   1                 Terminal              Normal output
stderr   2                 Terminal              Error messages

By default, both stdout and stderr go to your terminal, which is why you see errors mixed with normal output. Redirection lets you separate them.
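You can watch the two output streams diverge with a one-liner: the grouped commands below write one line to each stream, and the redirection catches only stdout.

```shell
# One line to stdout, one to stderr (>&2 sends that echo's output to fd 2)
{ echo "normal output"; echo "an error" >&2; } > stdout.txt

cat stdout.txt        # only "normal output" was redirected; the error hit the terminal
rm -f stdout.txt
```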

Output redirection

Redirect stdout to a file

# Write to a file (creates or overwrites)
echo "hello" > output.txt

# Append to a file
echo "world" >> output.txt

cat output.txt

Output:

hello
world

Redirect stderr to a file

# stderr only (fd 2)
ls /nonexistent 2> errors.txt

cat errors.txt

Output:

ls: cannot access '/nonexistent': No such file or directory

Redirect both stdout and stderr

# Both to the same file
ls /etc/hostname /nonexistent &> all-output.txt

cat all-output.txt

Output:

/etc/hostname
ls: cannot access '/nonexistent': No such file or directory

# Or equivalently (older syntax)
ls /etc/hostname /nonexistent > all-output.txt 2>&1

The 2>&1 means “send stderr (2) to wherever stdout (1) is going.” The order matters: the stdout redirect must come first.
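To see why order matters, compare the two placements: in the wrong-order version, stderr is copied to the terminal before stdout is moved to the file.

```shell
# Correct: stdout goes to the file first, then stderr follows it
ls /etc/hostname /nonexistent > both.txt 2>&1
# both.txt contains the listing AND the error message

# Wrong order: 2>&1 points stderr at the terminal (stdout's target
# at that moment), and only THEN is stdout redirected to the file
ls /etc/hostname /nonexistent 2>&1 > only-stdout.txt
# the error still prints to the terminal; the file has only the listing

rm -f both.txt only-stdout.txt
```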

Redirect stdout and stderr to different files

ls /etc/hostname /nonexistent > success.txt 2> errors.txt

cat success.txt

Output:

/etc/hostname

cat errors.txt

Output:

ls: cannot access '/nonexistent': No such file or directory

Input redirection

# Feed a file as stdin
wc -l < /etc/passwd

Output:

35

Here-documents

# Multi-line input
cat << EOF
Line 1
Line 2
Current user: $USER
EOF
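Quoting the delimiter controls expansion: an unquoted EOF expands variables like $USER inside the body, while a quoted 'EOF' passes the text through literally.

```shell
# Unquoted delimiter: $USER expands
cat << EOF
user: $USER
EOF

# Quoted delimiter: no expansion, the body is taken literally
cat << 'EOF'
user: $USER
EOF
```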

Here-strings

# Single-line input
grep "world" <<< "hello world"

Output:

hello world

Pipes

A pipe | connects the stdout of one command to the stdin of the next. This is the most powerful feature of the Unix philosophy: small tools that do one thing well, connected together.

# Count lines in /etc/passwd
cat /etc/passwd | wc -l

Output:

35

Pipes can be chained indefinitely:

# Find the 5 most common shells used by users
cat /etc/passwd | cut -d: -f7 | sort | uniq -c | sort -rn | head -5

Output:

     23 /usr/sbin/nologin
      7 /bin/bash
      2 /bin/false
      1 /bin/sync
      1 /usr/bin/zsh

Each command in a pipeline runs as a separate process, and all of them run concurrently. Data flows through the pipe as it is produced, not all at once.
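One consequence: by default a pipeline's exit status is that of its last command, so earlier failures can go unnoticed. In bash (these features are bash-specific) you can inspect every stage with PIPESTATUS, or make the whole pipeline fail with pipefail:

```shell
#!/bin/bash
# The pipeline "succeeds" overall even though ls fails:
ls /nonexistent 2>/dev/null | wc -l
stages=("${PIPESTATUS[@]}")    # capture immediately; any later command resets it
echo "per-stage statuses: ${stages[*]}"    # e.g. "2 0": ls failed, wc succeeded

# pipefail makes the pipeline report the first failure instead
set -o pipefail
ls /nonexistent 2>/dev/null | wc -l
echo "exit status with pipefail: $?"    # nonzero, propagated from ls
```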

/dev/null: the black hole

/dev/null discards everything written to it and returns nothing when read.

# Discard stdout
ls /etc > /dev/null

# Discard stderr only (keep stdout)
ls /etc/hostname /nonexistent 2>/dev/null

Output:

/etc/hostname

# Discard everything
command_that_might_fail &> /dev/null
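A common pattern is to branch only on a command's exit status while hiding all of its output; `command -v` is the standard way to check whether a program exists:

```shell
# Silence everything, keep only the exit status
if command -v git > /dev/null 2>&1; then
    echo "git is installed"
else
    echo "git is not installed"
fi
```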

tee: write to file AND stdout

tee copies stdin to both a file and stdout. Useful when you want to see output and save it.

# Show output and save to file
ls /etc | tee file-list.txt | head -5

Output:

adduser.conf
alternatives
apt
bash.bashrc
cron.d

# Append instead of overwrite
echo "new entry" | tee -a file-list.txt

Real-world use: save the output of a build while watching it:

make 2>&1 | tee build.log
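tee also solves a classic permissions problem: in `sudo echo ... > file`, the redirection is performed by your unprivileged shell, not by sudo. Piping into `sudo tee` lets the root process open the file (the target path here is just an example):

```shell
# Fails: the shell tries to open /etc/motd before sudo ever runs
#   sudo echo "maintenance at 22:00" > /etc/motd

# Works: tee, running as root, opens the file itself
echo "maintenance at 22:00" | sudo tee -a /etc/motd > /dev/null
```

The trailing `> /dev/null` suppresses tee's copy to stdout, since here we only want the file write.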

Process substitution

Process substitution <(command), a bash and zsh feature not available in plain sh, lets you use a command’s output as if it were a file. This is useful for commands that expect file arguments.

# Compare the output of two commands
diff <(ls /usr/bin | head -10) <(ls /usr/sbin | head -10)

Output:

1,10c1,10
< 2to3-3.12
< addpart
< apt
< apt-cache
...

# Feed multiple "files" to a command
paste <(cut -d: -f1 /etc/passwd) <(cut -d: -f7 /etc/passwd) | head -5

Output:

root	/bin/bash
daemon	/usr/sbin/nologin
bin	/usr/sbin/nologin
sys	/usr/sbin/nologin
sync	/bin/sync
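Bash also offers the writable counterpart, >(command), which connects a "file" to a command's stdin. A typical use is giving tee a second consumer (numbers.gz is just an example name):

```shell
#!/bin/bash
# tee writes one copy of the stream into gzip and passes the other copy on
seq 1 5 | tee >(gzip > numbers.gz) | wc -l    # prints 5

rm -f numbers.gz
```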

Example 1: Chain 4 commands to answer a real question

Question: “What are the top 10 IP addresses making requests to our web server in the last hour?”

# Step 1: Look at the log format
head -1 /var/log/nginx/access.log

Output:

192.168.1.100 - - [04/May/2026:09:15:01 +0000] "GET / HTTP/1.1" 200 612

The IP is the first field. Let’s build the pipeline step by step:

# Step 2: Extract just the IPs
awk '{print $1}' /var/log/nginx/access.log | head -5

Output:

192.168.1.100
10.0.0.50
192.168.1.100
203.0.113.5
10.0.0.50

# Step 3: Sort them (required for uniq)
awk '{print $1}' /var/log/nginx/access.log | sort | head -5

Output:

10.0.0.50
10.0.0.50
10.0.0.50
172.16.0.5
192.168.1.100

# Step 4: Count unique IPs and sort by count
awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -10

Output:

   4523 192.168.1.100
   2341 10.0.0.50
    987 203.0.113.5
    456 172.16.0.5
    123 198.51.100.23
     45 10.0.0.99
     12 192.168.1.200
      5 172.16.0.10
      3 10.10.10.1
      1 127.0.0.1

The full pipeline: four commands, each doing one simple thing, combined to answer a complex question. This is the Unix philosophy in action.
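The pipeline above counts the whole file; to honor the "last hour" part of the question, you can filter on the timestamp field first. This is a sketch that assumes the default nginx time format shown above and matches the current clock hour:

```shell
#!/bin/bash
# Build a pattern for the current hour, e.g. \[04/May/2026:09
# (assumes the nginx log format shown above; adjust the path for your setup)
HOUR_PATTERN=$(date +'\[%d/%b/%Y:%H')
grep "$HOUR_PATTERN" /var/log/nginx/access.log \
  | awk '{print $1}' | sort | uniq -c | sort -rn | head -10
```

Note this matches the current clock hour, not a sliding 60-minute window; a sliding window would need per-line timestamp comparison (e.g. in awk).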

Example 2: Redirect all output correctly

Let’s write a deployment script that logs everything properly:

cat > deploy.sh << 'SCRIPT'
#!/bin/bash

LOG_FILE="/tmp/deploy-$(date +%Y%m%d-%H%M%S).log"

echo "Deployment started at $(date)" | tee "$LOG_FILE"
echo "Log file: $LOG_FILE"
echo ""

# Function that logs to both terminal and file
run_step() {
    local step_name="$1"
    shift
    echo "=== Step: $step_name ===" | tee -a "$LOG_FILE"
    
    # Run the command, capture both stdout and stderr
    if "$@" >> "$LOG_FILE" 2>&1; then
        echo "  ✓ $step_name succeeded" | tee -a "$LOG_FILE"
    else
        echo "  ✗ $step_name FAILED (exit code: $?)" | tee -a "$LOG_FILE"
        echo "  Check $LOG_FILE for details" >&2
        return 1
    fi
}

# Simulate deployment steps
run_step "Check disk space" df -h /
run_step "Check system load" uptime
run_step "List running services" systemctl list-units --type=service --state=running --no-pager
run_step "This will fail" ls /nonexistent-path

echo ""
echo "=== Deployment Summary ===" | tee -a "$LOG_FILE"
echo "Full log: $LOG_FILE" | tee -a "$LOG_FILE"
SCRIPT

chmod +x deploy.sh
./deploy.sh

Output:

Deployment started at Sun May  4 10:30:00 UTC 2026
Log file: /tmp/deploy-20260504-103000.log

=== Step: Check disk space ===
  ✓ Check disk space succeeded
=== Step: Check system load ===
  ✓ Check system load succeeded
=== Step: List running services ===
  ✓ List running services succeeded
=== Step: This will fail ===
  ✗ This will fail FAILED (exit code: 2)
  Check /tmp/deploy-20260504-103000.log for details

=== Deployment Summary ===
Full log: /tmp/deploy-20260504-103000.log

Now check the log file for the error details:

grep -A5 "This will fail" /tmp/deploy-*.log

Output:

=== Step: This will fail ===
ls: cannot access '/nonexistent-path': No such file or directory
  ✗ This will fail FAILED (exit code: 2)

The terminal shows a clean summary, while the log file has the full error details. This pattern is used in real deployment scripts, CI/CD pipelines, and cron jobs.
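If a script has many steps, piping every echo through tee gets tedious. A bash alternative is to redirect the whole script once at the top, combining exec with process substitution:

```shell
#!/bin/bash
LOG_FILE="/tmp/myscript.log"

# From here on, everything printed on stdout or stderr goes
# both to the terminal and to the log file
exec > >(tee -a "$LOG_FILE") 2>&1

echo "this appears on screen and in $LOG_FILE"
ls /nonexistent    # error messages are captured too
```

One trade-off: tee runs asynchronously here, so line ordering on the terminal can differ slightly from the per-line `| tee -a` pattern used in deploy.sh.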

Clean up:

rm -f deploy.sh /tmp/deploy-*.log

Quick reference

Operator    Meaning
>           Redirect stdout to file (overwrite)
>>          Redirect stdout to file (append)
2>          Redirect stderr to file
&>          Redirect both stdout and stderr
2>&1        Redirect stderr to wherever stdout goes
<           Redirect file to stdin
|           Pipe stdout to next command’s stdin
tee         Copy stdin to file and stdout
<(cmd)      Process substitution (use output as file)
/dev/null   Discard output

What comes next

The next article covers The Linux networking stack, where you will learn how Linux handles network interfaces, routing, DNS, and firewalls.

For a practical look at combining pipes with text tools for log analysis, see article 12 in this series.
