

Overview

PostgreSQL, like any other database, has to be backed up from time to time: partly to guarantee recovery, partly to let you sleep better at night :). To simplify things, there are a couple of major concepts to get straight first:

  • Restore - The physical restore of files (data files, configuration files, logs and so on)
  • Recovery - The synchronization and adjustment of those files. For example, replaying a WAL (XLOG) file is a recovery operation, while putting back the base files that the WAL is applied to is a restore operation.
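As a concrete illustration of the split, here is a sketch for PostgreSQL versions up to 11, which use a recovery.conf file; the archive path is hypothetical:

```
# recovery.conf -- fetching an archived WAL file from the archive is the
# *restore* step; PostgreSQL replaying its contents afterwards is the
# *recovery* step
restore_command = 'cp /mnt/server/archivedir/%f %p'
```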

Backup

Basically, there are two ways a PostgreSQL database can be backed up: logically and physically.

A logical backup is a simple export of the data, produced by pg_dump. It is important to note that a logical backup cannot be used in conjunction with WAL (XLOG) archiving or other incremental features, but it is very convenient for data migration.

A logical backup can be configured in many ways, the simplest being to dump a particular database into an SQL script file. In a nutshell, there are two things to remember:

  1. Dump global data, such as roles and users, with pg_dumpall
  2. Dump a particular database with pg_dump
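The two steps can be sketched as follows (the database name hunter_dev matches the examples below; both commands assume a running server and sufficient privileges):

```shell
# 1. dump global objects only (roles, tablespaces) with the -g flag
pg_dumpall -g > globals.sql

# 2. dump one particular database as a plain SQL script
pg_dump hunter_dev > hunter_dev.sql
```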

Furthermore, the best choice is usually the custom format ("-Fc"), which preserves consistency across versions:

-bash-4.2$ pg_dump -Fc -C hunter_dev > hunter_dev_backup.sql   <- This dumps database "hunter_dev" in the "c"ustom format, including the "C"reate statement.

-bash-4.2$
-bash-4.2$ ls -lart
-rw-r--r--.  1 postgres postgres 17187218 Aug  1 12:43 hunter_dev_backup.sql

Note that the custom-format backup is no longer readable in a text editor:

-bash-4.2$ head hunter_dev_backup.sql                                            +
PGDMP
w
hunter_dev10.310.3p0ENCODINENCODINGSET client_encoding = 'UTF8';
falseq00
STDSTRINGS
STDSTRINGS(SET standard_conforming_strings = 'on';
falser00
SEARCHPATH
SEARCHPATH8SELECT pg_catalog.set_config('search_path', '', false);
falses126232979
hunter_deDATABASE|CREATE DATABASE hunter_dev WITH TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 'en_GB.UTF-8' LC_CTYPE = 'en_GB.UTF-8';
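To inspect or restore a custom-format dump, use pg_restore rather than a text editor; a sketch, assuming a running server:

```shell
# list the table of contents of the dump without restoring anything
pg_restore -l hunter_dev_backup.sql

# recreate the database (thanks to -C) and restore into it,
# connecting first to the maintenance database "postgres"
pg_restore -C -d postgres hunter_dev_backup.sql
```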

pg_dump can also write the dump into a directory, here using 5 parallel worker jobs:

-bash-4.1$ pg_dump -Fd sales -j 5 -f salesdir
-bash-4.1$
-bash-4.1$ ls -lart
total 20
dr-xr-xr-x. 23 root     root     4096 Mar 20 09:53 ..
-rw-r--r--.  1 postgres postgres 1313 Mar 20 10:04 sales.sql
drwx------.  2 postgres postgres 4096 Mar 20 10:05 dumpdir
drwxr-xr-x.  4 postgres postgres 4096 Mar 20 10:05 .
drwx------.  2 postgres postgres 4096 Mar 20 10:05 salesdir
-bash-4.1$ cd salesdir/
-bash-4.1$ ls -lart
total 20
-rw-r--r--. 1 postgres postgres 2048 Mar 20 10:05 toc.dat
-rw-r--r--. 1 postgres postgres   27 Mar 20 10:05 2993.dat.gz
-rw-r--r--. 1 postgres postgres   25 Mar 20 10:05 2992.dat.gz
drwx------. 2 postgres postgres 4096 Mar 20 10:05 .
drwxr-xr-x. 3 postgres postgres 4096 Mar 20 10:05 ..
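A directory-format dump can be restored in parallel as well; a minimal sketch, assuming the target database sales_copy already exists:

```shell
# restore the directory dump with 5 parallel jobs
pg_restore -j 5 -d sales_copy salesdir
```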

The recovery method depends on the backup method; as in any other database, backup and recovery tools come in pairs. So let's see how to restore a plain-text SQL file generated by pg_dump:

-bash-4.2$ psql -a -f hunter_dev_pgsql.sql
--
-- PostgreSQL database dump
--
-- Dumped from database version 10.0
-- Dumped by pg_dump version 10.0
SET statement_timeout = 0;
SET
SET lock_timeout = 0;
SET
SET idle_in_transaction_session_timeout = 0;
SET
SET client_encoding = 'UTF8';
SET
SET standard_conforming_strings = on;
SET
SET check_function_bodies = false;
SET
SET client_min_messages = warning;
SET
SET row_security = off;
SET

This command executes the file hunter_dev_pgsql.sql and, thanks to the -a flag, echoes every statement as it is run.
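In practice it is safer to make psql stop at the first error and wrap the whole restore in a single transaction; a sketch, assuming the target database already exists:

```shell
# -1 runs the script in one transaction; ON_ERROR_STOP aborts on the first
# error, so a failed restore leaves no half-loaded objects behind
psql --set ON_ERROR_STOP=on -1 -d hunter_dev -f hunter_dev_pgsql.sql
```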

Appendix


Here is a handy script that automates the export of all databases on a server:

#!/bin/bash
# => location to place backups
backup_dir="/dati/backups/"
backup_file="/var/lib/pgsql/croncmd/backup/logs/backup.log"
# => string to append to the name of the backup files
backup_date=`date +%F`
# => number of days you want to keep copies of your databases
#number_of_days=2
number_of_days=1

# remove the old log file
rm -f "$backup_file"

# get a list of databases to back up
databases=`psql -l -t | egrep -v "^ ( |template[01])" | cut -d'|' -f1 | sed -e 's/ //g' -e '/^$/d'`

echo $databases >> $backup_file

# starting time
echo [`date`] START PostgreSQL backups >> $backup_file

# back up each database
numero_database=0
for i in $databases; do
    numero_database=$(( numero_database + 1 ))
    outfile=$backup_dir$backup_date-$i.dump.gz
    echo -n "-> Dumping $i to $outfile" >> $backup_file
    /usr/pgsql-9.1/bin/pg_dump -Fc $i | gzip > $outfile
    outfilesize=`ls -lh $outfile | awk '{ print $5 }'`
    echo " ($outfilesize)" >> $backup_file
done

# clean old backups
echo "-> Cleaning up backups older than $number_of_days days" >> $backup_file
rm -f `find $backup_dir -type f -prune -mtime +$number_of_days`

# finish time
echo [`date`] STOP PostgreSQL backups >> $backup_file

data=`date +%Y-%m-%d`
backup_file_today=$backup_dir$data"*"
backup_creati=`find $backup_file_today -maxdepth 1 -type f -print | wc -l`

# check that the number of backups created equals the number of databases
if [ $numero_database -eq $backup_creati ] ; then
        echo "BACKUP PROCEDURE COMPLETED WITHOUT ERRORS" >> $backup_file
else
        echo "ERROR IN BACKUP PROCEDURE" >> $backup_file
fi

Which can be put in crontab as follows:

# Minute   Hour   Day of Month       Month          Day of Week        Command
# (0-59)  (0-23)     (1-31)    (1-12 or Jan-Dec)  (0-6 or Sun-Sat)
    0       4          *             *                   *              /path/startCron.sh
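The retention step in the script relies on `find -mtime`; a quick way to verify the expression before trusting it with real backups (scratch directory, hypothetical file names, GNU touch assumed):

```shell
# scratch directory with one fresh and one three-day-old backup file
backup_dir=$(mktemp -d)
touch "$backup_dir/2019-08-01-sales.dump.gz"
touch -d "3 days ago" "$backup_dir/2019-07-29-sales.dump.gz"
number_of_days=1
# same expression the script uses to expire old backups
rm -f $(find "$backup_dir" -type f -prune -mtime +$number_of_days)
ls "$backup_dir"    # only 2019-08-01-sales.dump.gz should remain
```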

Last modified: 2019/10/18 20:04