Pipes, redirection, and streams in Linux

Maximilian B.

Linux command lines look simple, but every command moves data through streams. If you understand streams, pipes, and redirection, you can build commands that are clean, safe, and easy to debug. If you ignore them, you risk hiding errors, losing logs, and wasting time during incidents. This article explains the model in plain language first, then covers the technical details you need in production.

stdin, stdout, and stderr: the three streams

Each Linux process starts with three open file descriptors:

  • 0 is standard input (stdin)
  • 1 is standard output (stdout)
  • 2 is standard error (stderr)

By default, all three point to your terminal. A command reads input from stdin, writes normal results to stdout, and writes problems to stderr. The separation matters. In automation, stdout is often consumed by another command or saved as data, while stderr is kept for diagnostics.

#!/usr/bin/env bash
# demo-streams.sh

echo "backup started"            # stdout (fd 1)
echo "disk almost full" >&2      # stderr (fd 2)

Make the script executable and run it:

chmod +x demo-streams.sh
./demo-streams.sh

Production consequence: if a script mixes errors into stdout, downstream parsing can break. For example, JSON output plus random error text can make API consumers fail. Keep data and error messages separated.
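
That rule can be sketched with a tiny shell function. Here emit_report, the JSON shape, and the file paths are illustrative stand-ins, not from any real tool:

```shell
# Hypothetical report script: JSON data on stdout, warnings on stderr
emit_report() {
  echo '{"status": "ok", "items": 3}'     # machine-readable data -> fd 1
  echo "warning: cache was stale" >&2     # human diagnostic -> fd 2
}

# Consumers get clean JSON; diagnostics land in their own file
emit_report > /tmp/report.json 2> /tmp/report.err

grep -q '"status": "ok"' /tmp/report.json   # data file is still valid JSON
grep -q 'cache was stale' /tmp/report.err   # warning was captured separately
```

Because the warning went to fd 2, nothing downstream that parses /tmp/report.json ever sees it.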

Redirection operators and what they really do

Redirection changes where a file descriptor points. This is shell behavior, not a feature of the command itself.

# overwrite file with stdout
command > output.log

# append stdout
command >> output.log

# overwrite file with stderr only
command 2> error.log

# append stderr only
command 2>> error.log

# send both stdout and stderr to one file (portable style)
command > all.log 2>&1

# discard both streams
command > /dev/null 2>&1
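
A quick sketch of the difference between > and >> (the file name is arbitrary):

```shell
echo "first"  > /tmp/demo.log    # '>' truncates the file, then writes
echo "second" > /tmp/demo.log    # previous contents are gone
echo "third" >> /tmp/demo.log    # '>>' appends without truncating

cat /tmp/demo.log
# The file now holds "second" and "third"; "first" was overwritten.
```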

The order of redirections is important because the shell processes them left to right.

# Case A: both streams go to out.log
command > out.log 2>&1

# Case B: stderr stays on terminal, stdout goes to out.log
command 2>&1 > out.log

In Case B, 2>&1 copies stderr to the current stdout (the terminal) before stdout is moved to the file. This detail causes many "why is error still on screen?" tickets.
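
You can verify the difference yourself; ls /nonexistent is just a convenient command that writes only to stderr:

```shell
# Case A: the error text lands in the file
ls /nonexistent > /tmp/caseA.log 2>&1 || true
test -s /tmp/caseA.log && echo "error captured in file"

# Case B: 2>&1 pointed stderr at the terminal before stdout moved,
# so the file stays empty and the error appears on screen instead
ls /nonexistent 2>&1 > /tmp/caseB.log || true
test ! -s /tmp/caseB.log && echo "file is empty"
```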

For beginners, start with explicit forms like 1> and 2> when learning. For operators, standardize logging patterns in scripts so everyone can read and review them quickly.

Pipes connect commands, not files

A pipe (|) connects stdout of one process to stdin of the next process. This lets you process data step by step without temp files.

# Show recent SSH failures from the journal, then count source IPs
# (the unit is "ssh" on Debian/Ubuntu, "sshd" on Fedora/RHEL)
journalctl -u ssh --since "1 hour ago" | \
  grep -E "Failed password" | \
  sed -E 's/.*from ([0-9.]+).*/\1/' | \
  sort | uniq -c | sort -nr

Under the hood, the kernel provides a finite pipe buffer. If the reader is slow or stops, the writer can block. Most of the time this is fine, but in heavy pipelines it can affect timing and job duration.
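
One observable effect: when the reader exits early, the kernel closes the pipe and the writer is killed with SIGPIPE. A small Bash-specific sketch using the PIPESTATUS array:

```shell
#!/usr/bin/env bash
# head exits after three lines; the pipe closes and yes is killed by
# SIGPIPE, which Bash reports as exit status 141 (128 + signal 13)
yes | head -n 3 > /dev/null
status=("${PIPESTATUS[@]}")    # copy before the next command resets it
echo "yes:  ${status[0]}"      # 141
echo "head: ${status[1]}"      # 0
```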

Another key point is exit status. By default, many shells return the exit code of the last command in a pipeline. That can hide failures earlier in the chain.

# Safer in scripts: fail if any pipeline stage fails
set -o pipefail

# Example: gzip error will fail the pipeline even if tee succeeds
cat /var/log/app.log | gzip | tee /tmp/app.log.gz > /dev/null

Production consequence: without pipefail, backup or export jobs may look successful while one stage actually failed.
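
A minimal demonstration of what pipefail changes, with false and true standing in for a failing and a succeeding stage:

```shell
#!/usr/bin/env bash
false | true
echo "without pipefail: $?"    # 0 -- only the last stage is reported

set -o pipefail
false | true
echo "with pipefail:    $?"    # 1 -- the failing stage is reported
```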

Practical incident pattern: keep output visible and logged

During an outage, you often need two things at once: see output live and keep a file for later analysis. tee is the standard tool.

# Capture package upgrade output in a timestamped log file
sudo dnf upgrade -y 2>&1 | tee /var/log/ops/dnf-upgrade-$(date +%F-%H%M).log

Equivalent pattern for Debian and Ubuntu families:

sudo apt-get update 2>&1 | tee /var/log/ops/apt-update-$(date +%F-%H%M).log
sudo apt-get dist-upgrade -y 2>&1 | tee /var/log/ops/apt-upgrade-$(date +%F-%H%M).log

This pattern sends stderr into stdout with 2>&1, then tee writes to screen and file. You keep operator visibility and a post-incident record.

For cron jobs, a safer pattern is separate files per stream:

/usr/local/bin/nightly-sync \
  > /var/log/ops/nightly-sync.out.log \
  2> /var/log/ops/nightly-sync.err.log

Now your monitoring can alert on non-empty .err.log without parsing normal output.
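
The check itself can be as simple as test -s (true for a file that exists and is non-empty). The helper name and paths below are illustrative, not from any monitoring product:

```shell
# Hypothetical check: succeed only when the error log is empty or absent
err_log_clean() {
  [ ! -s "$1" ]    # -s is true for a file that exists and is non-empty
}

: > /tmp/nightly-sync.err.log    # truncate: simulate a clean run
err_log_clean /tmp/nightly-sync.err.log && echo "clean run"

echo "rsync: connection refused" >> /tmp/nightly-sync.err.log
err_log_clean /tmp/nightly-sync.err.log || echo "needs attention"
```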

Compatibility notes for Debian, Ubuntu, Fedora, and RHEL

The stream and file descriptor model is POSIX and stable across current releases. The commands in this article are compatible with Debian 13.3, Ubuntu 24.04.3 LTS, Ubuntu 25.10, Fedora 43, and RHEL 10.1. They also work on RHEL 9.7 in normal Bash-based administration workflows.

  • On Debian and Ubuntu, scripts run with /bin/sh usually use dash. Keep scripts POSIX if the shebang is #!/bin/sh.
  • Operators on Fedora and RHEL usually use Bash interactively. Bash features like &> work there, but command >file 2>&1 is more portable across shells.
  • journalctl, tee, grep, and basic redirection syntax are consistent across these distributions.
  • SELinux on Fedora/RHEL can block writes to unexpected log paths. If a redirect fails with permission errors, verify context and policy, not only Unix mode bits.

These notes come together in a standard script header:

#!/usr/bin/env bash
# Portable shebang; Bash-specific pipeline options follow
set -o errexit
set -o nounset
set -o pipefail

Common mistakes and how to avoid them

  • Using 2>&1 in the wrong place. Check ordering carefully.
  • Silencing all output with /dev/null during troubleshooting. You lose evidence when you need it most.
  • Assuming pipelines fail when any stage fails. Add set -o pipefail in Bash scripts.
  • Writing logs to directories that do not exist. Redirection fails before the command runs.

# Create log directory first, then run the command
install -d -m 0750 /var/log/ops
/usr/local/bin/report > /var/log/ops/report.log 2>&1
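
The last point is easy to demonstrate: the shell opens the target file before running the command, so a missing directory aborts the whole line and the command never runs:

```shell
# /no/such/dir is assumed not to exist on this machine
if echo "hello" > /no/such/dir/report.log 2>/dev/null; then
  echo "redirect succeeded"
else
  echo "redirect failed; the command never ran"
fi
```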

Summary

Pipes and redirection are basic Linux skills, but they have direct production impact. Treat stdout as data, stderr as diagnostics, and control both intentionally. Use pipelines to build readable command chains, and use pipefail when failures must stop the job. These habits work the same way on Debian 13.3, Ubuntu 24.04.3 LTS and 25.10, Fedora 43, RHEL 10.1, and RHEL 9.7, so your scripts stay predictable across mixed environments.
