Back Up and Restore PostgreSQL on Linux the Right Way
A pragmatic guide to production-grade PostgreSQL backups: pg_dump with the custom format, a cron schedule, sensible file rotation, offsite upload, and a restore you have actually tested.
The honest rule of backups
A backup you have never restored is not a backup. It is a hope. The single most useful habit you can build is to restore a production dump into a disposable database at least once a month and run a smoke query. Everything in this guide is designed to keep that test restore boring.
Step 1: Install the right client version
Match your client version to your server version. A newer pg_dump can dump an older server, but never the reverse. On Debian/Ubuntu the PostgreSQL project ships its own repository.
sudo apt install -y postgresql-common
sudo sh /usr/share/postgresql-common/pgdg/apt.postgresql.org.sh -y
sudo apt install -y postgresql-client-16
Replace 16 with whatever major version your server runs. pg_dump --version must report a version equal to or higher than the server's.
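The check can be scripted. A minimal sketch with demo values; in practice you would read the client number from pg_dump --version and the server number from psql -Atc "SHOW server_version_num" (divide by 10000 to get the major version):

```shell
# Demo values. In practice:
#   client_major=$(pg_dump --version | grep -oE '[0-9]+' | head -1)
#   server_major=$(( $(psql -h 127.0.0.1 -U appuser -d appdb -Atc "SHOW server_version_num") / 10000 ))
client_major=16
server_major=16
if [ "$client_major" -ge "$server_major" ]; then
  echo "client ok"
else
  echo "client too old: upgrade postgresql-client" >&2
fi
```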
Step 2: Use the custom format (Fc), always
There are four dump formats: plain, custom, directory, and tar. Use the custom format. It is compressed, it supports parallel restore with -j, and it lets you restore just one schema or table later if you need to.
pg_dump -Fc -h 127.0.0.1 -U appuser -d appdb \
-f /var/backups/pg/appdb.dump
Pass the password via the standard ~/.pgpass file so it never appears in shell history or cron logs. Create it with mode 600 and put one line in it:
127.0.0.1:5432:appdb:appuser:supersecret
chmod 600 ~/.pgpass
Step 3: A real backup script
Create /usr/local/bin/pg-backup.sh. It is short, strict, logs meaningfully, and bails on any error.
#!/usr/bin/env bash
set -euo pipefail

DB=appdb
USER=appuser
HOST=127.0.0.1
DEST=/var/backups/pg
KEEP_DAYS=14

STAMP=$(date -u +%Y%m%dT%H%M%SZ)
FILE="$DEST/$DB-$STAMP.dump"

mkdir -p "$DEST"
pg_dump -Fc -h "$HOST" -U "$USER" -d "$DB" -f "$FILE"
find "$DEST" -type f -name "$DB-*.dump" -mtime "+$KEEP_DAYS" -delete
echo "backup ok: $FILE ($(stat -c%s "$FILE") bytes)"
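The rotation line is worth sanity-checking before you trust it with real dumps. A throwaway test against dummy files in a temp directory (GNU touch and find assumed):

```shell
# Exercise the rotation logic with one stale file and one fresh file.
DEST=$(mktemp -d)
KEEP_DAYS=14
touch -d "20 days ago" "$DEST/appdb-old.dump"   # outside the retention window
touch "$DEST/appdb-new.dump"                    # created just now
find "$DEST" -type f -name "appdb-*.dump" -mtime "+$KEEP_DAYS" -delete
ls "$DEST"   # only appdb-new.dump should remain
```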
sudo chmod +x /usr/local/bin/pg-backup.sh
Step 4: Run it every night from cron
Edit the postgres user's crontab (or the user that owns ~/.pgpass).
15 2 * * * /usr/local/bin/pg-backup.sh >> /var/log/pg-backup.log 2>&1
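If a dump ever takes longer than a day, two runs can overlap. One way to guard against that is to wrap the command in flock from util-linux (the lock file path here is an arbitrary choice):

```shell
# Crontab variant: -n makes flock exit immediately if a previous run still holds the lock.
15 2 * * * flock -n /var/lock/pg-backup.lock /usr/local/bin/pg-backup.sh >> /var/log/pg-backup.log 2>&1
```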
Pick a quiet hour. Redirect stdout and stderr to a log file so you can actually see what happened. Tail it tomorrow morning before you drink coffee:
tail -n 20 /var/log/pg-backup.log
Step 5: Push backups offsite
Local backups only protect you from mistakes, not disasters. Push every dump to a bucket the database server cannot delete on its own. rclone works well with every major provider and is easy to lock down with a restricted API key.
rclone copy "$FILE" remote:pg-backups/$DB/ --transfers=2 --retries=3
For extra safety, enable server-side encryption at the bucket level and set an object-lock / immutability policy so the bucket itself rejects delete requests for a minimum retention window. If ransomware ever reaches the server, it cannot wipe the backups.
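What that looks like depends on the provider. A hedged sketch for AWS S3 (the bucket name is hypothetical, and Object Lock must already have been enabled when the bucket was created):

```shell
# Set a 30-day default retention window; COMPLIANCE mode means even the
# account root cannot delete objects before the window expires.
aws s3api put-object-lock-configuration \
  --bucket my-pg-backups \
  --object-lock-configuration '{
    "ObjectLockEnabled": "Enabled",
    "Rule": { "DefaultRetention": { "Mode": "COMPLIANCE", "Days": 30 } }
  }'
```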
Step 6: Restore into a fresh database
Never restore on top of production without a drill first. Create an empty target database and use pg_restore.
createdb -h 127.0.0.1 -U appuser appdb_restore_check
pg_restore -h 127.0.0.1 -U appuser -d appdb_restore_check \
  --no-owner --no-privileges -j 4 \
  /var/backups/pg/appdb-YYYYMMDDTHHMMSSZ.dump
The -j 4 flag runs four restore jobs in parallel. Drop it if your disk is slow. --no-owner and --no-privileges avoid errors if the target does not have the same roles as production — useful on a laptop.
psql -h 127.0.0.1 -U appuser -d appdb_restore_check -c "SELECT COUNT(*) FROM users;"
If the number matches today's production count within reason, your backup is real. Drop the test database.
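The "within reason" check can be scripted too. A sketch with demo counts and an arbitrary tolerance of 100 rows; in practice you would read each number via psql -Atc:

```shell
# Demo values. In practice:
#   prod_count=$(psql -h 127.0.0.1 -U appuser -d appdb -Atc "SELECT COUNT(*) FROM users")
#   restore_count=$(psql -h 127.0.0.1 -U appuser -d appdb_restore_check -Atc "SELECT COUNT(*) FROM users")
prod_count=10342
restore_count=10298
gap=$(( prod_count - restore_count ))
gap=${gap#-}                   # absolute value
if [ "$gap" -le 100 ]; then    # tolerance covers rows written since the dump
  echo "restore looks sane (gap: $gap rows)"
else
  echo "gap too large: $gap rows" >&2
fi
```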
Going further: base backups and PITR
For high-value databases, pg_dump alone is not enough: it gives you daily snapshots but nothing in between. For point-in-time recovery you want base backups (via pg_basebackup or pgBackRest) plus archived WAL segments. Set that up once the logical dumps above are running and proven. Most SME workloads are perfectly fine with nightly logical dumps for the first year.
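When you get there, taking the base backup itself is a one-liner. A sketch only — the replication role name is hypothetical and needs the REPLICATION attribute, and continuous WAL archiving is configured separately on the server:

```shell
# -Ft: tar format, -z: gzip, -X stream: include the WAL needed for consistency,
# -P: show progress.
pg_basebackup -h 127.0.0.1 -U replicator -D /var/backups/pg/base \
  -Ft -z -X stream -P
```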
Frequently Asked Questions
What format should I use for pg_dump?
The custom format (-Fc) is almost always right. It is compressed, supports parallel restore, and lets you restore selectively later.
Should I store the backup files locally?
Locally and offsite. Local for fast restores, offsite for disaster recovery. Never trust a single location.
How do I know my backup actually works?
Restore it into a disposable database on a schedule, at least monthly, and run a smoke query. A backup you have never restored is only a wish.
Does pg_dump lock my database?
No. It takes a consistent snapshot and does not block normal reads and writes. It does hold locks that conflict with schema changes (DDL) for the duration of the dump, and it uses disk IO and CPU; run it during quiet hours for large databases.