
PostgreSQL continuous archiving example

Thank you for visiting this page. There is an updated version of this article: see PostgreSQL continuous archiving and recovery how to.
Before you start, I recommend reading the archiving document first.

The documentation explains how continuous archiving works, but not how to configure it, so here is my own working example.

1. Prepare disk space for the archived files.

    By default, PostgreSQL maintains a write-ahead log (WAL) in the pg_xlog subdirectory of the server's data directory. These WAL files are recycled and eventually overwritten. With archiving enabled, PostgreSQL instead copies each completed WAL file to an archive directory before recycling it. Note that you have to manage the archive directory yourself. In my case, I use /home/backups/archivelogs.
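A minimal sketch of preparing that directory (the path matches the archive_command below; the function name is my own, and the chown step needs root):

```shell
# Sketch: prepare the WAL archive directory for step 1.
prepare_archive_dir() {
  dir=$1
  mkdir -p "$dir"        # create the archive directory
  chmod 700 "$dir"       # restrict access to the owner
  # chown postgres:postgres "$dir"   # run as root: postgres must be able to write here
  df -h "$dir"           # confirm there is enough free space for the WAL files
}

# e.g., as root:
# prepare_archive_dir /home/backups/archivelogs
```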

2. PostgreSQL settings (a restart is needed to enable the changes)

The following three settings are essential; you can even call a script in archive_command:
wal_level = archive
archive_mode = on
archive_command = '/bin/cp -p %p /home/backups/archivelogs/%f </dev/null'

The following two are optional; I suggest you test your system before you change them. You can find the details in the documentation: http://www.postgresql.org/docs/9.2/static/continuous-archiving.html
archive_timeout = 60
checkpoint_timeout = 1h
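Since archive_command can also call a script, here is a minimal sketch of such a wrapper (the script name and location are my own assumptions; adjust to your layout):

```shell
#!/bin/sh
# Hypothetical /usr/local/bin/archive_wal.sh; configure it as:
#   archive_command = '/usr/local/bin/archive_wal.sh %p %f'
ARCHIVE_DIR=${ARCHIVE_DIR:-/home/backups/archivelogs}

archive_wal() {
  # $1 = full path of the WAL segment (%p), $2 = its file name (%f).
  # Never overwrite an existing archive file, as the documentation requires;
  # a non-zero exit tells PostgreSQL that the archiving attempt failed.
  test ! -f "$ARCHIVE_DIR/$2" && /bin/cp -p "$1" "$ARCHIVE_DIR/$2"
}

if [ $# -eq 2 ]; then
  archive_wal "$1" "$2"
fi
```

After editing postgresql.conf, restart the server (for example, service postgresql-9.1 restart, or pg_ctl -D $PGDATA restart) so archive_mode takes effect.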

3. Continuous backup script

#!/bin/bash
#set -v        # debugging tools
#set -x

cmd=`basename $0`
EMAIL='dba@abc.com'; export EMAIL

BKUPDIR=/home/backups/pgsql_data; export BKUPDIR
LOG=$BKUPDIR/backuplog.`date +%m%d%y%H`.log; export LOG
LOG1=$BKUPDIR/backuplog.$$.tmp; export LOG1   # temporary log for error reporting

typeset -i res

function exitMsg() {
  mylog=$1
  mymsg=$2
  echo "$cmd: Unsuccessful PostgreSQL database hotbackup at `/bin/date`" >>$mylog
  echo "$mymsg" >>$mylog
  cat $mylog >> $LOG
  cat "$LOG" | mail -s "PostgreSQL database hotbackup failed (`/bin/date`)" $EMAIL
  exit 1
}

cat /dev/null >$LOG1 2>/dev/null

echo "" >> $LOG
echo "On `/bin/date` Daily PostgreSQL database hot backup:" >> $LOG

/usr/bin/psql -U postgres -c "SELECT pg_start_backup('/home/backups/pgsql_data/label');" postgres &>backup.log

nice tar -zcf $BKUPDIR/data.`date +%m%d%y%H`.tgz /var/lib/pgsql/9.1/data >>backup.log 2>&1
res=$?    # remember the tar exit status; it is checked below

/usr/bin/psql -U postgres -c "SELECT pg_stop_backup();" postgres &>backup1.log

if [ $res -ne 0 ] ; then
  exitMsg $LOG1 "ERROR: PostgreSQL database hotbackup failed."
fi

# compress archived WAL files that have not been compressed yet
nice /usr/bin/find /home/backups/archivelogs -type f -not -name "*.bz2" | xargs -n 1 -P 5 bzip2

cat $LOG1 >> $LOG
rm $LOG1 2>/dev/null
cat backup1.log >> backup.log
cat backup.log >> $LOG
rm backup.log backup1.log

echo "On `/bin/date` End of PostgreSQL hotbackup:" >> $LOG
cat "$LOG" | mail -s "PostgreSQL Hotbackup SUCCEEDED! (`/bin/date`)" $EMAIL

exit 0

4. Cleanup old backups script

set -x
#  Cleanup old database backup and log files daily.
cmd=`basename $0`
if [ `whoami` != 'postgres' ]; then
  echo "$cmd: you must be logged in as the postgres user"
  exit 1
fi

nice /usr/bin/find /home/backups/pgsql_data -name "*.log*" \
-mtime +1 -exec rm {} \;
nice /usr/bin/find /home/backups/pgsql_data -name "*.tgz" \
-mtime +1 -exec rm {} \;
nice /usr/bin/find /home/backups/archivelogs -name "*.bz2" \
-mtime +1 -exec rm {} \;
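Both scripts are meant to run from cron as the postgres user. Assuming they are saved as pg_hotbackup.sh and pg_cleanup.sh under /home/backups/scripts (the names and location are my own assumptions), the crontab might contain:

```
# crontab -e  (as postgres)
0 2 * * * /home/backups/scripts/pg_hotbackup.sh
0 5 * * * /home/backups/scripts/pg_cleanup.sh
```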

5. Recovery. I have tested this procedure but never used it in a real incident; here it is.

Okay, the worst has happened and you need to recover from your backup. Here is the procedure:

  1. Stop the server, if it's running.

  2. If you have the space to do so, copy the whole cluster data directory and any tablespaces to a temporary location in case you need them later. Note that this precaution will require that you have enough free space on your system to hold two copies of your existing database. If you do not have enough space, you should at least save the contents of the cluster's pg_xlog subdirectory, as it might contain logs which were not archived before the system went down.

  3. Remove all existing files and subdirectories under the cluster data directory and under the root directories of any tablespaces you are using.

  4. Restore the database files from your file system backup. Be sure that they are restored with the right ownership (the database system user, not root!) and with the right permissions. If you are using tablespaces, you should verify that the symbolic links in pg_tblspc/ were correctly restored.

  5. Remove any files present in pg_xlog/; these came from the file system backup and are therefore probably obsolete rather than current. If you didn't archive pg_xlog/ at all, then recreate it with proper permissions, being careful to ensure that you re-establish it as a symbolic link if you had it set up that way before.

  6. If you have unarchived WAL segment files that you saved in step 2, copy them into pg_xlog/. (It is best to copy them, not move them, so you still have the unmodified files if a problem occurs and you have to start over.)

  7. Create a recovery command file recovery.conf in the cluster data directory (see Chapter 26). You might also want to temporarily modify pg_hba.conf to prevent ordinary users from connecting until you are sure the recovery was successful.

  8. Start the server. The server will go into recovery mode and proceed to read through the archived WAL files it needs. Should the recovery be terminated because of an external error, the server can simply be restarted and it will continue recovery. Upon completion of the recovery process, the server will rename recovery.conf to recovery.done (to prevent accidentally re-entering recovery mode later) and then commence normal database operations.

  9. Inspect the contents of the database to ensure you have recovered to the desired state. If not, return to step 1. If all is well, allow your users to connect by restoring pg_hba.conf to normal.

One important thing: sometimes you need to specify a recovery target time in recovery.conf:
recovery_target_time = ''  # for example '2013-11-06 10:00:05'
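Putting the pieces together for this setup, a minimal recovery.conf might look like the following sketch. The bunzip2 step is needed here because the backup script above compresses archived WAL files with bzip2; the paths are from my example, so adjust them to yours:

```
restore_command = 'bunzip2 -c /home/backups/archivelogs/%f.bz2 > %p'
#recovery_target_time = '2013-11-06 10:00:05'   # uncomment to stop recovery at a given time
```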