postgresql_backup_recovery_logical

Logical backup refers to a plain export of the data, produced by pg_dump. Logical backups are widely used to move data between different versions, for example transferring a database from PostgreSQL 9.6 → 11. It is important to note that a logical backup cannot be used in conjunction with XLOG (WAL) shipping or other incremental features, but it is very convenient for data migration.

With a logical backup we can back up:

  • Entire Server
  • Particular Database
  • All roles and privileges

To make sure there won't be any problems on the destination, we can dump all roles within the cluster as follows:

Dump all roles

-bash-4.2$ pg_dumpall -p 5432 -v --roles-only -f "/var/lib/pgsql/migrationToDev/roles.sql"

This generates a file called roles.sql in the given directory, which we can execute on the destination server. If you also want to generate a dump of the tablespaces, use the following:

Dump all globals

-bash-4.2$ pg_dumpall -p 5432 -v --globals-only -f "/var/lib/pgsql/migrationToDev/globals.sql"
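Both roles.sql and globals.sql are plain SQL, so they can simply be replayed on the destination server with psql. A minimal sketch, assuming the files have been copied across and you connect as a superuser (the port here is just an example):

```shell
# Replay the dumped roles on the destination cluster
psql -p 5432 -U postgres -f /var/lib/pgsql/migrationToDev/roles.sql
# Or, if tablespaces were dumped as well, replay the globals dump instead
psql -p 5432 -U postgres -f /var/lib/pgsql/migrationToDev/globals.sql
```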

A logical backup can be configured in many ways, from something as simple as dumping a particular database into an SQL script file. In a nutshell, there are two things which you should remember:

  1. Dump global data: like roles and users with pg_dumpall
  2. Dump a particular database: with pg_dump

Furthermore, the best practice is to use the custom format “-Fc” in order to preserve consistency across versions:

-bash-4.2$ pg_dump -Fc -C hunter_dev > hunter_dev_backup.sql   <- This will dump database "hunter_dev" with "c"ustom format and it will include "C"reate statement.

-bash-4.2$
-bash-4.2$ ls -lart
-rw-r--r--.  1 postgres postgres 17187218 Aug  1 12:43 hunter_dev_backup.sql

-bash-4.2$ head hunter_dev_backup.sql
PGDMP
w
hunter_dev10.310.3p0ENCODINENCODINGSET client_encoding = 'UTF8';
falseq00
STDSTRINGS
STDSTRINGS(SET standard_conforming_strings = 'on';
falser00
SEARCHPATH
SEARCHPATH8SELECT pg_catalog.set_config('search_path', '', false);
falses126232979
hunter_deDATABASE|CREATE DATABASE hunter_dev WITH TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 'en_GB.UTF-8' LC_CTYPE = 'en_GB.UTF-8';
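As the PGDMP header above shows, the custom format is binary, not plain SQL. Its contents can still be inspected without restoring anything, via pg_restore's table-of-contents listing:

```shell
# List the archive's table of contents (no database connection needed)
pg_restore -l hunter_dev_backup.sql
```

This is also handy for building a list file (restored selectively with -L) when you only need certain objects.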

Or we can go even further and dump the database into a directory, using 5 parallel jobs:

-bash-4.1$ pg_dump -Fd sales -j 5 -f salesdir
-bash-4.1$
-bash-4.1$ ls -lart
total 20
dr-xr-xr-x. 23 root     root     4096 Mar 20 09:53 ..
-rw-r--r--.  1 postgres postgres 1313 Mar 20 10:04 sales.sql
drwx------.  2 postgres postgres 4096 Mar 20 10:05 dumpdir
drwxr-xr-x.  4 postgres postgres 4096 Mar 20 10:05 .
drwx------.  2 postgres postgres 4096 Mar 20 10:05 salesdir
-bash-4.1$ cd salesdir/
-bash-4.1$ ls -lart
total 20
-rw-r--r--. 1 postgres postgres 2048 Mar 20 10:05 toc.dat
-rw-r--r--. 1 postgres postgres   27 Mar 20 10:05 2993.dat.gz
-rw-r--r--. 1 postgres postgres   25 Mar 20 10:05 2992.dat.gz
drwx------. 2 postgres postgres 4096 Mar 20 10:05 .
drwxr-xr-x. 3 postgres postgres 4096 Mar 20 10:05 ..
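A directory-format dump can later be restored with the same degree of parallelism. A sketch, assuming we restore into a freshly created database (sales_new is a hypothetical name):

```shell
# Create the target database, then restore the directory dump with 5 jobs
createdb sales_new
pg_restore -j 5 -d sales_new salesdir
```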

We can migrate tables between databases, given the structure is already there:

Migrate tables

pg_dump -Fc --data-only --table config --table config_properties wl_sportsbook | pg_restore --data-only -d wl_config
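After such a copy it is worth sanity-checking the row counts on both sides. A sketch using the table and database names from the example above:

```shell
# Compare row counts between source and destination
psql -d wl_sportsbook -c "SELECT count(*) FROM config;"
psql -d wl_config     -c "SELECT count(*) FROM config;"
```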

To back up users and roles, we use an approach similar to backing up the entire server, except we dump only the globals, as follows:

Dump Roles & Users

pg_dumpall -h localhost -p 5432 -v --globals-only > hunter_dev_globals.sql

This way the output is pure SQL and can simply be re-run on another server.

In terms of logical backup, restore and recovery are the same thing: we simply execute the script in order to REDO the changes.

However, BEAR IN MIND ONE PROBLEM :)

  • Backup taken with: pg_dump: Restore with psql

-bash-4.2$ psql -a -f hunter_dev_pgsql.sql
--
-- PostgreSQL database dump
--
-- Dumped from database version 10.0
-- Dumped by pg_dump version 10.0

  • Backup taken with: pg_dump -Fc: Restore with pg_restore

-bash-4.2$ pg_restore -d postgres -C -Fc hunter_dev_backup.sql <- pg_restore will connect to the postgres DB, create hunter_dev and switch to it
-bash-4.2$ psql
psql (9.6.14)
Type "help" for help.

postgres=# \l
                                      List of databases
    Name     |    Owner    | Encoding |   Collate   |    Ctype    | Access privileges
-------------+-------------+----------+-------------+-------------+-------------------
 hunter_dev  | hunter_dev  | UTF8     | en_GB.UTF-8 | en_GB.UTF-8 |

If you have many databases, you can restore them via a bash script.

#!/bin/bash
# Restore every custom-format dump file found in the current directory.
outfile=importAll.log
for database in *; do
    echo "Started import for $database at $(date)" >> $outfile
    pg_restore -d postgres -C "$database" >> "import_${database}.log"
    echo "Import for $database done at $(date)" >> $outfile
done

The script above restores all database dumps found in the current directory.

Here is a nice script to automate the export of all databases on a server:

#!/bin/bash
# => location to place backups
backup_dir="/dati/backups/"
backup_file="/var/lib/pgsql/croncmd/backup/logs/backup.log"
# => string to append to the name of the backup files
backup_date=`date +%F`
# => number of days you want to keep copies of your databases
#number_of_days=2
number_of_days=1

# remove the old log file
rm -rf $backup_file

# get a list of databases to back up
databases=`psql -l -t | egrep -v "^ ( |template[01])" | cut -d'|' -f1 | sed -e 's/ //g' -e '/^$/d'`

echo $databases >> $backup_file

# starting time
echo [`date`] START PostgreSQL backups >> $backup_file

# back up each database
numero_database=0
for i in $databases; do
numero_database=$(( $numero_database + 1 ))
outfile=$backup_dir$backup_date-$i.dump.gz
echo -n "--> Dumping $i to $outfile" >> $backup_file
/usr/pgsql-9.1/bin/pg_dump -Fc $i | gzip > $outfile
outfilesize=`ls -lh $outfile | awk '{ print $5 }'`
echo " ($outfilesize)"  >> $backup_file
done

# clean old backups
echo "--> Cleaning up backups older than $number_of_days days" >> $backup_file
rm -f `find $backup_dir -type f -prune -mtime +$number_of_days`

# finish time
echo [`date`] STOP PostgreSQL backups >> $backup_file

data=`date +%Y-%m-%d`
backup_file_today=$backup_dir$data"*"
backup_creati=`find $backup_file_today -maxdepth 1 -type f -print | wc -l`

if [ "$numero_database" -eq "$backup_creati" ] ; then
        echo "Backup procedure completed without error" >> $backup_file
else
        echo "Error during backup procedure execution" >> $backup_file
fi

Which can be put in crontab as follows:

# Minute   Hour   Day of Month       Month          Day of Week        Command
# (0-59)  (0-23)     (1-31)    (1-12 or Jan-Dec)  (0-6 or Sun-Sat)
    0       4          *             *                   *              /path/startCron.sh

pg_dump (options)
-p, --port=PORT database server port number
-i, --ignore-version proceed even when server version mismatches
-h, --host=HOSTNAME database server host or socket directory
-U, --username=NAME connect as specified database user
-W, --password force password prompt (should happen automatically)
-d, --dbname=NAME connect to database name
-v, --verbose verbose mode
-F, --format=c|t|p output file format (custom, tar, plain text)
-c, --clean clean (drop) schema prior to create
-b, --blobs include large objects in dump
-f, --file=FILENAME output file name


pg_restore (options)
-C, --create create the target database before restoring into it; combined with --clean it performs a DROP + CREATE
-p, --port=PORT database server port number
-i, --ignore-version proceed even when server version mismatches
-h, --host=HOSTNAME database server host or socket directory
-U, --username=NAME connect as specified database user
-W, --password force password prompt (should happen automatically)
-d, --dbname=NAME connect to database name
-v, --verbose verbose mode

  • postgresql_backup_recovery_logical.txt
  • Last modified: 2021/06/24 09:51
  • by andonovj