This article looks at the processes involved in master-slave failover in PostgreSQL and the options and commands that are available for configuring it.
In order to understand how failover works, we need to understand the processes involved. In this post we will go through the following basics:
1. Recovery process in the background
2. Replication options
3. Failover commands
Recovery process in the background
Slave servers in a PostgreSQL architecture help with load balancing and high availability. A slave can serve read-only transactions, so the hardware we have invested in does useful work, and when the master server fails it can take over to ensure minimum downtime. To stay up to date for read-only use or failover, the slave runs a recovery process in the background: it receives WAL files from the master and replays them to apply the changes. Depending on the setup, the slave reads WAL either from the archive or from streaming replication.
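Once replication is running, you can confirm that the slave is receiving and replaying WAL. A quick check, assuming streaming replication (the view and function below are standard PostgreSQL, though some column names vary between versions):

```sql
-- On the master: one row per connected slave, with its replication state
SELECT client_addr, state, sent_lsn, replay_lsn
FROM pg_stat_replication;

-- On the slave: returns true while the server is in recovery (standby) mode
SELECT pg_is_in_recovery();
```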
Once failover happens, the slave uses the recovery.conf file to bring itself to the latest state and then becomes the master.
The basic configuration for setting up a master-slave requires the following changes to the postgresql.conf:
wal_level = replica   # use 'logical' if logical replication is also needed; 'hot_standby' is the pre-9.6 name for 'replica'
max_wal_senders = 4
max_replication_slots = 4
In addition to the above, the following need to be configured on the slave:
hot_standby = on
hot_standby_feedback = on
We also need to modify the pg_hba.conf file to ensure that the two are able to communicate with each other.
host replication repuser <slave IP Address>/32 md5
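For completeness, here is a sketch of how the slave's data directory is typically seeded, assuming a replication user named repuser as in the pg_hba.conf line above (the password, paths, and addresses are illustrative):

```shell
# On the master: create the replication role (run as a superuser)
psql -c "CREATE ROLE repuser WITH REPLICATION LOGIN PASSWORD 'secret';"

# On the slave: take a base backup over the replication connection
pg_basebackup -h 192.168.1.50 -p 5432 -U repuser \
    -D /var/lib/pgsql/data -X stream -P
```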
On the slave, we also need to create a recovery.conf file, which the slave uses during recovery and promotion. A simple example of a recovery.conf is given below:
standby_mode = 'on'
primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass'
restore_command = 'cp /path/to/archive/%f %p'
archive_cleanup_command = 'pg_archivecleanup /path/to/archive %r'
For more information, see the PostgreSQL documentation: https://www.postgresql.org/docs/12/warm-standby.html.
There may be a few more settings based on the setup and configuration that you are using, so verify the setup once it’s done.
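Note that on PostgreSQL 12 and later, recovery.conf no longer exists: the equivalent setup (using the same illustrative host and paths as the example above) is to put the recovery parameters in postgresql.conf and create an empty standby.signal file in the data directory:

```
# postgresql.conf on the slave (PostgreSQL 12+)
primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass'
restore_command = 'cp /path/to/archive/%f %p'
archive_cleanup_command = 'pg_archivecleanup /path/to/archive %r'
```

The presence of the standby.signal file (for example, created with `touch /var/lib/pgsql/data/standby.signal`) is what tells the server to start as a standby.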
Depending on the requirements and their configuration, slaves can play multiple roles: they can help with high availability, load balancing, and so on. For example:
1. Depending on when a commit is confirmed relative to the slave, you can have two kinds of replication: asynchronous (data is committed on the master first and then transferred to the slave) and synchronous (the commit succeeds only after the data has been confirmed on both).
Failover Results: There is a possibility of loss of data in asynchronous replication.
2. Depending on the role: Warm Standby (available only for promoting to master in-case of failure and does not have any active role) and Hot Standby (available for read only transactions).
Failover Results: Either kind can be promoted; neither offers a failover advantage over the other.
3. Depending on how data is shipped: physical replication (WAL is copied across the network, as archived files or via streaming, and replayed on the slave) and logical replication (row-level changes are streamed in real time).
Failover Results: With WAL file shipping, if some WAL files have not yet reached the slave there may be data loss, in proportion to the number of missing WAL files.
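As an illustration of the async/sync distinction above, synchronous replication is enabled on the master by naming the standby(s) whose confirmation a commit must wait for (the standby name below is illustrative):

```
# postgresql.conf on the master
synchronous_standby_names = 'standby1'   # must match the slave's application_name
synchronous_commit = on
```

The slave identifies itself by setting application_name in its primary_conninfo, e.g. adding `application_name=standby1` to the connection string.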
There are many other options available. For more information, see the PostgreSQL documentation.
PostgreSQL itself does not provide an automatic failover solution, so we can either perform failover manually (and later script it) or use one of the many third-party solutions available. We will look at the manual option, which, once understood, can also be scripted. The slave exits recovery mode once a pg_ctl promote command is executed.
./pg_ctl promote -D /usr/edb/as12/data
Once the command is executed, it verifies any files available in the pg_wal folder, replays them, and then starts as a master node. It does not try to connect to the (old) master server, and hence the promotion is very fast.
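After running pg_ctl promote, you can verify from a client session that the server has left recovery mode, using the standard pg_is_in_recovery() function:

```sql
SELECT pg_is_in_recovery();
-- returns false once promotion has completed and the server accepts writes
```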
If you are doing failover manually, then there is no requirement for a trigger file, and you can run the command from the terminal itself.
Another note of caution: once the old master node comes back online, we have to ensure that it does not start acting as a master again (a split-brain situation), which could cause data loss. This can easily be controlled by either taking it off the network or ensuring that the IP address visible to the applications has also been moved over to the promoted slave.
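If the old master is to rejoin the cluster as a slave, its data directory must first be resynchronized with the new master. pg_rewind is the standard tool for this; a sketch, assuming the data directory from the promote example above and an illustrative address for the new master (pg_rewind also requires that the old cluster ran with wal_log_hints = on or data checksums enabled, and that it was shut down cleanly):

```shell
pg_rewind --target-pgdata=/usr/edb/as12/data \
    --source-server='host=192.168.1.60 port=5432 user=postgres'
```

After the rewind, configure the old master as a standby pointing at the new master before starting it.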