Enterprise backup strategies are the one thing everyone agrees is important and nobody wants to test until the disk fails at 2 AM. A solid Linux backup strategy involves picking the right tool for the right scenario, automating the schedule, verifying that restores actually work, and rotating old archives so you do not fill your storage. This article covers five essential backup tools that handle different parts of the problem: tar for archives, dd for disk images, rsync for efficient mirroring, and restic and borg for modern encrypted and deduplicated backups. If you are new to rsync and restic, our Level 1 guide on backups and recovery with rsync, restic, and snapshots covers the fundamentals.
tar: incremental archives with snapshot files on Linux
tar is one of the oldest backup tools on Unix systems and still useful when you need portable archives. For enterprise use, the key feature is incremental backups using snapshot files. A level-0 (full) backup captures everything. Subsequent runs capture only files changed since the last snapshot.
# Level 0 full backup with gzip compression
sudo tar --listed-incremental=/var/backups/etc.snar \
-czf /var/backups/etc-full-$(date +%Y%m%d).tar.gz /etc
# Level 1 incremental — only changed files since last snapshot
sudo tar --listed-incremental=/var/backups/etc.snar \
-czf /var/backups/etc-incr-$(date +%Y%m%d%H%M).tar.gz /etc
The .snar file records directory contents and file metadata (modification times, device and inode numbers) so tar can detect what changed. To start a new full backup cycle, delete or rename the snapshot file. One important production detail: restoring an incremental chain requires replaying the full backup first, then each incremental in order. Lose one file in the middle and everything after it is useless. For this reason, many teams do weekly full backups with daily incrementals, limiting the chain to seven files at most.
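The replay order is easy to rehearse end to end in a scratch directory before you rely on it. All paths in this sketch are throwaway, not production:

```shell
# Throwaway demo of a full + incremental cycle and its restore order
work=$(mktemp -d)
mkdir "$work/data"
echo "v1" > "$work/data/a.txt"

# Level 0: full backup, creates the .snar snapshot file
tar --listed-incremental="$work/data.snar" -czf "$work/full.tar.gz" -C "$work" data

# Change one file, add another, then take a level 1 incremental
echo "v2" > "$work/data/a.txt"
echo "new" > "$work/data/b.txt"
tar --listed-incremental="$work/data.snar" -czf "$work/incr1.tar.gz" -C "$work" data

# Restore: replay the full archive first, then each incremental in order.
# --listed-incremental=/dev/null tells tar to apply incremental semantics on extract.
mkdir "$work/restore"
tar --listed-incremental=/dev/null -xzf "$work/full.tar.gz" -C "$work/restore"
tar --listed-incremental=/dev/null -xzf "$work/incr1.tar.gz" -C "$work/restore"

cat "$work/restore/data/a.txt"   # v2 — the incremental's version wins
```

Skipping the incremental in the replay would leave a.txt at v1 and b.txt missing, which is exactly the broken-chain failure mode described above.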
On Debian 13.3, Ubuntu 24.04.3 LTS, Fedora 43, and RHEL 10.1, tar ships in the base install. Consider using zstd compression (--zstd) instead of gzip for better compression ratios and speed on modern hardware.
dd: block-level disk imaging and forensic copies
dd copies raw blocks from one device to another. It does not understand filesystems; it copies every block, including empty space. This makes it suitable for full disk images, partition cloning, and forensic copies where you need bit-for-bit accuracy.
dd has no safety checks. Writing to the wrong device will silently destroy data. Always double-check device names with lsblk before running dd. The nickname "disk destroyer" is well earned.
# Create a compressed disk image of /dev/sda
sudo dd if=/dev/sda bs=4M status=progress | gzip -1 > /mnt/backup/sda-$(date +%Y%m%d).img.gz
# Restore the image to a same-size or larger disk
gunzip -c /mnt/backup/sda-20260227.img.gz | sudo dd of=/dev/sdb bs=4M status=progress
# Clone a partition with progress
sudo dd if=/dev/sda1 of=/dev/sdb1 bs=4M status=progress conv=fsync
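Because dd just moves bytes, the image-and-restore round trip can be rehearsed safely with regular files instead of block devices; the mechanics are identical, only the if=/of= targets change:

```shell
# Rehearse the image/restore cycle on plain files — no real devices touched
work=$(mktemp -d)
dd if=/dev/urandom of="$work/disk.img" bs=1M count=4 status=none

# "Image" through gzip, exactly like the /dev/sda example above
dd if="$work/disk.img" bs=1M status=none | gzip -1 > "$work/disk.img.gz"

# Restore, then verify bit-for-bit with cmp
gunzip -c "$work/disk.img.gz" | dd of="$work/restore.img" bs=1M status=none
cmp "$work/disk.img" "$work/restore.img" && echo "bit-for-bit identical"
```

The cmp step is the habit worth keeping: after writing a real disk image, comparing checksums of source and image is the cheapest proof that the copy is usable.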
dd is slow for routine backups because it copies unused blocks too. Use it for disaster recovery images of boot disks, firmware partitions, or when you need to hand an auditor an exact disk copy. Before creating disk images, verify drive health using SMART diagnostics as described in our guide on disk health and capacity monitoring with df, du, and smartctl. For regular file-level backups, rsync or tar are faster and more flexible.
rsync: efficient file synchronization and backup over SSH
rsync compares source and destination and transfers only the differences. For daily backups of large directory trees, this is dramatically faster than tar because unchanged files are skipped entirely. rsync over SSH provides encryption in transit without additional setup.
# Mirror /var/www to a remote backup server, preserving permissions and deleting removed files
rsync -avz --delete \
-e "ssh -i /root/.ssh/backup_key -o StrictHostKeyChecking=accept-new" \
/var/www/ backup@storage.example.com:/backups/web-frontend-03/www/
# Backup with a bandwidth limit (--bwlimit is in KiB/s, so 10000 is roughly 10 MB/s)
rsync -avz --bwlimit=10000 \
/var/lib/postgresql/ backup@storage.example.com:/backups/db-primary/pgdata/
# Dry run to preview what would be transferred
rsync -avzn --delete /etc/ backup@storage.example.com:/backups/etc/
A few production considerations. The --delete flag removes files on the destination that no longer exist on the source. This keeps the mirror accurate but means accidental deletions propagate to the backup. To protect against this, combine rsync with snapshot-capable storage (LVM snapshots, ZFS snapshots, or Btrfs subvolumes on the backup server). Alternatively, use --backup --backup-dir to move deleted files to a timestamped directory instead of removing them.
# rsync with a safety net: deleted files go to a dated directory
rsync -avz --delete \
--backup --backup-dir="/backups/deleted/$(date +%Y%m%d)" \
/var/lib/app/ backup@storage.example.com:/backups/app/current/
restic: encrypted deduplicated backups with S3 and SFTP backends
restic is a modern backup tool written in Go. It provides encryption by default (AES-256), deduplication at the chunk level, and supports multiple storage backends: local directories, SFTP, Amazon S3, Backblaze B2, Azure Blob, and Google Cloud Storage. On Debian 13.3 and Ubuntu 24.04.3 LTS, install with apt install restic. On Fedora 43 and RHEL 10.1, dnf install restic.
# Initialize a restic repository on S3
export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."
restic -r s3:s3.eu-west-1.amazonaws.com/company-backups/db-primary init
# Create a backup
restic -r s3:s3.eu-west-1.amazonaws.com/company-backups/db-primary \
backup /var/lib/postgresql /etc/postgresql \
--tag postgresql --tag production
# Initialize a repository over SFTP
restic -r sftp:backup@storage.example.com:/restic-repo init
# Backup to the SFTP repo
restic -r sftp:backup@storage.example.com:/restic-repo \
backup /etc /var/lib/app --exclude-caches
Restore and verification
Backups you never test are not backups. restic makes verification straightforward:
# List all snapshots
restic -r sftp:backup@storage.example.com:/restic-repo snapshots
# Check repository integrity (reads all data packs)
restic -r sftp:backup@storage.example.com:/restic-repo check --read-data
# Restore a specific snapshot to a target directory
restic -r sftp:backup@storage.example.com:/restic-repo \
restore abc12345 --target /tmp/restore-test
# Restore only specific paths from a snapshot
restic -r sftp:backup@storage.example.com:/restic-repo \
restore latest --target /tmp/restore-test --include /etc/postgresql
Retention and rotation
# Apply retention policy: keep 7 daily, 4 weekly, 6 monthly snapshots
restic -r sftp:backup@storage.example.com:/restic-repo forget \
--keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
BorgBackup: deduplication and encrypted archives
BorgBackup (borg) is another modern backup tool with chunk-level deduplication and optional encryption. It is particularly efficient for backing up large datasets with many similar files, such as mail stores or virtual machine images. borg has been around since 2015 (a fork of Attic) and is mature in production use. Install with apt install borgbackup on Debian/Ubuntu or dnf install borgbackup on Fedora/RHEL.
# Initialize an encrypted borg repository
borg init --encryption=repokey-blake2 backup@storage.example.com:/borg-repo
# Create a backup with compression
borg create --compression zstd,6 --stats \
backup@storage.example.com:/borg-repo::'{hostname}-{now:%Y%m%d-%H%M}' \
/etc /var/lib/app /var/log/app
# List archives in the repository
borg list backup@storage.example.com:/borg-repo
# Extract (restore) a specific archive
borg extract backup@storage.example.com:/borg-repo::db-primary-20260227-0200 \
var/lib/postgresql
borg retention with prune
# Keep 7 daily, 4 weekly, 6 monthly, 2 yearly backups
borg prune --stats \
--keep-daily=7 --keep-weekly=4 --keep-monthly=6 --keep-yearly=2 \
backup@storage.example.com:/borg-repo
# Compact the repository after pruning (reclaim disk space)
borg compact backup@storage.example.com:/borg-repo
The main difference between borg and restic in practice: borg repositories are locked during operations, so you cannot run two borg processes against the same repo simultaneously. restic handles concurrent access better. However, borg's deduplication is slightly more efficient for certain workloads. Choose based on your backend requirements: if you need S3-native storage, restic has better built-in support. If you need maximum deduplication over SSH, borg is a strong choice.
Scheduling automated backups with systemd timers
Cron works, but systemd timers give you better logging, dependency management, and the ability to catch up on missed runs. For a broader comparison of scheduling approaches, see our guide on scheduled tasks with cron, anacron, and systemd timers. Here is a complete example for a nightly restic backup:
# /etc/systemd/system/restic-backup.service
[Unit]
Description=Restic backup of application data
After=network-online.target
Wants=network-online.target
[Service]
Type=oneshot
# /etc/restic/env provides RESTIC_REPOSITORY and RESTIC_PASSWORD (or RESTIC_PASSWORD_FILE),
# so ExecStart needs no -r flag or inline secrets
EnvironmentFile=/etc/restic/env
ExecStart=/usr/bin/restic backup /etc /var/lib/app --tag nightly
ExecStartPost=/usr/bin/restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
IOSchedulingClass=idle
Nice=10
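EnvironmentFile keeps credentials out of the unit itself. A minimal /etc/restic/env might look like this sketch; all values are placeholders, and restic reads these variables natively:

```shell
# /etc/restic/env — placeholder values; chmod 600, owned by root
RESTIC_REPOSITORY=sftp:backup@storage.example.com:/restic-repo
RESTIC_PASSWORD_FILE=/etc/restic/password
# For an S3 backend, add the credentials here as well:
# AWS_ACCESS_KEY_ID=...
# AWS_SECRET_ACCESS_KEY=...
```

Restrict the file to root (chmod 600) since it points at the repository password.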
# /etc/systemd/system/restic-backup.timer
[Unit]
Description=Run restic backup nightly at 02:00
[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true
RandomizedDelaySec=900
[Install]
WantedBy=timers.target
# Enable and start the timer
sudo systemctl daemon-reload
sudo systemctl enable --now restic-backup.timer
# Check timer status
systemctl list-timers restic-backup.timer
# View last run logs
journalctl -u restic-backup.service --since yesterday
The Persistent=true setting means if the server was off at 02:00, the backup runs as soon as the machine boots. RandomizedDelaySec=900 spreads backup start times across a 15-minute window if you deploy the same timer to a fleet, avoiding a thundering herd hitting your backup storage simultaneously.
Linux backup tools quick reference cheat sheet
| Tool | Best For | Key Command |
|---|---|---|
| tar | Portable archives, incremental chains | tar --listed-incremental=snap.snar -czf backup.tar.gz /path |
| dd | Disk images, forensic copies | dd if=/dev/sdX bs=4M status=progress \| gzip > img.gz |
| rsync | Daily mirror, large trees, SSH transfer | rsync -avz --delete /src/ user@host:/dst/ |
| restic | Encrypted, deduplicated, S3/SFTP | restic -r s3:bucket backup /path --tag label |
| borg | Max deduplication, SSH repos | borg create --compression zstd repo::archive /path |
| restic check | Repository integrity check | restic check --read-data |
| restic forget | Rotation policy | restic forget --keep-daily 7 --keep-weekly 4 --prune |
| systemd timer | Scheduling with logging | OnCalendar=*-*-* 02:00:00 + Persistent=true |
Summary
Each tool has its place. tar remains useful for simple, portable archives and incremental chains. dd handles raw disk images when you need block-level fidelity. rsync is the workhorse for daily file synchronization where speed matters. restic and borg add encryption, deduplication, and cloud storage support that enterprise environments require. Schedule everything with systemd timers for reliable execution and proper logging. The most important practice is testing restores regularly. A backup you have never restored is a hypothesis, not a safety net. Run quarterly restore drills on every critical backup repository, and verify the data is complete and usable.