It’s time.
About a year ago, we published PostgreSQL 10 with support for native logical replication. One of the uses of logical replication is to allow low- or no-downtime upgrading between PostgreSQL major versions. Until now, PostgreSQL 10 was the only PostgreSQL release with native logical replication, so there weren’t many opportunities for upgrading in this way. (Logical replication can also be used for moving data between instances on different operating systems or CPU architectures or with different low-level configuration settings such as block size or locale — sidegrading if you will.) Now that PostgreSQL 11 is near, there will be more reasons to make use of this functionality.
Let’s first compare the three main ways to upgrade a PostgreSQL installation:
- pg_dump and restore
- pg_upgrade
- logical replication
We can compare these methods in terms of robustness, speed, required downtime, and restrictions (and more, but we have to stop somewhere for this article).
pg_dump and restore is arguably the most robust method, since it’s the most tested and has been in use for decades. It also has very few restrictions in terms of what it can handle. It is possible to construct databases that cannot be dumped and restored, mostly involving particular object dependency relationships, but those are rare and usually involve discouraged practices.
The problem with the dump and restore method is of course that it effectively requires downtime for the whole time the dump and restore operations run. While the source database is still readable and writable while the process runs, any updates to the source database after the start of the dump will be lost.
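For reference, the basic procedure looks something like this (host and port values are placeholders for your environment):

pg_dumpall -h oldhost -p oldport >dump.sql
psql -h newhost -p newport -d postgres -f dump.sql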
pg_upgrade improves on the pg_dump process by moving over the data files directly without having to dump them out into a logical textual form. Note that pg_upgrade still uses pg_dump internally to copy the schema, but not the data. When pg_upgrade was new, its robustness was questioned, and it did upgrade some databases incorrectly. But pg_upgrade is now quite mature and well tested, so one does not need to hesitate about using it for that reason anymore. While pg_upgrade runs, the database system is down. But one can make a choice about how long pg_upgrade runs. In the default copy mode, the total run time is composed of the time to dump and restore the schema (which is usually very fast, unless one has thousands of tables or other objects) plus the time to copy the data files, which depends on how big the database is (and the I/O system, file system, etc.).
In the optional link mode, the data files are instead hard-linked to the new data directory, so that the time is merely the time to perform a short kernel operation per file instead of copying every byte. The drawback is that if anything goes wrong with the upgrade or you need to fall back to the old installation, this operation will have destroyed your old database. (I’m working on a best-of-both-worlds solution for PostgreSQL 12 using reflinks or file clone operations on supported file systems.)
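For reference, a typical pg_upgrade invocation looks something like this (the binary and data directory paths are placeholders for your installation; add --link for link mode):

pg_upgrade -b /usr/lib/postgresql/10/bin -B /usr/lib/postgresql/11/bin \
    -d /var/lib/postgresql/10/data -D /var/lib/postgresql/11/data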
Logical replication is the newest of the bunch here, so it will probably take some time to work out the kinks. If you don’t have time to explore and investigate, this might not be the way to go right now. (Of course, people have been using other non-core logical replication solutions such as Slony, Londiste, and pglogical for upgrading PostgreSQL for many years, so there is a lot of experience with the principles, if not with the particulars.)
The advantage of using logical replication to upgrade is that the application can continue to run against the old instance while the data synchronization happens. There only needs to be a small outage while the client connections are switched over. So while an upgrade using logical replication is probably slower from start to finish than using pg_upgrade in copy mode (and definitely slower than using link mode), that matters little, since the actual downtime can be much shorter.
Note that logical replication currently doesn’t replicate schema changes. In this proposed upgrade procedure, the schema is still copied over via pg_dump, but subsequent schema changes are not carried over. Upgrading with logical replication also has a few other restrictions. Certain operations are not captured by logical replication: large objects, TRUNCATE, sequence changes. We will discuss workarounds for these issues later.
If you have any physical standbys (and if not, why don’t you?), there are also some differences to consider between the methods. With either method, you need to build new physical standbys for the upgraded instance. With dump and restore as well as with logical replication, they can be put in place before the upgrade starts so that the standby will be mostly ready once the restore or logical replication initial sync is complete, subject to replication delay.
With pg_upgrade, the new standbys have to be created after the upgrade of the primary is complete. (The pg_upgrade documentation describes this in further detail.) If you rely on physical standbys for high-availability, the standbys ought to be in place before you switch to the new instance, so the setup of the standbys could affect your overall timing calculations.
But back to logical replication. Here is how upgrading with logical replication can be done:
0. The old instance must be prepared for logical replication. This requires some configuration settings as described under http://www.postgresql.org/docs/10/static/logical-replication-config.html (mainly wal_level = logical). If it turns out you need to make those changes, they will require a server restart. So check this well ahead of time. Also check that pg_hba.conf on the old instance is set up to accept connections from the new instance. (Changing that only requires a reload.)
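For example, the relevant settings on the old instance might end up looking like this (the client address and authentication method here are placeholders; depending on your existing settings, max_wal_senders and max_replication_slots may also need raising, per the linked documentation):

# postgresql.conf on the old instance
wal_level = logical

# pg_hba.conf on the old instance: allow connections from the new instance
host    all    all    192.0.2.10/32    md5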
1. Install the new PostgreSQL version. You need at least the server package and the client package that contains pg_dump. Many packagings now allow installing multiple versions side by side. If you are running virtual machines or cloud instances, it’s worth considering installing the new instance on a new host.
2. Set up a new instance, that is, run initdb. The new instance can have different settings than the old one, for example locale, WAL segment size, or checksumming. (Why not use this opportunity to turn on data checksums?)
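A minimal sketch, with the data directory path as a placeholder:

initdb -D /var/lib/postgresql/11/data --data-checksums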
3. Before you start the new instance, you might need to change some configuration settings. If the instance runs on the same host as the old instance, you need to set a different port number. Also, carry over any custom changes you have made in postgresql.conf on your old instance, such as memory settings, max_connections, etc. Similarly, make pg_hba.conf settings appropriate to your environment. You can usually start by copying over the pg_hba.conf file from the old instance. If you want to use SSL, set that up now.
4. Start the new (empty) instance and check that it works to your satisfaction. If you set up the new instance on a new host, check at this point that you can make a database connection (using psql) from the new host to the old database instance. We will need that in the subsequent steps.
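A quick connectivity check might look like this, using the same placeholder connection values as in the subscription command further below:

psql 'host=oldhost port=oldport dbname=postgres' -c 'SELECT version();'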
5. Copy over the schema definitions with pg_dumpall. (Or you can do it with pg_dump for each database separately, but then don’t forget global objects such as roles.)
pg_dumpall -s >schemadump.sql         # run against the old instance
psql -d postgres -f schemadump.sql    # run against the new instance
Any schema changes after this point will not be migrated. You would have to manage those yourself. In many cases, you can just apply the same DDL on both hosts, but running commands that change the table structure during an upgrade is probably a challenge too far.
6. In each database in the source instance, create a publication that captures all tables:
CREATE PUBLICATION p_upgrade FOR ALL TABLES;
Logical replication works separately in each database, so this needs to be repeated in each database. On the other hand, you don’t have to upgrade all databases at once, so you can do this one database at a time or even not upgrade some databases.
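If you have more than a few databases, a small shell loop can help. This is a sketch that assumes simple database names and a superuser connection, run on the old host:

for db in $(psql -At -c 'SELECT datname FROM pg_database WHERE NOT datistemplate'); do
    psql -d "$db" -c 'CREATE PUBLICATION p_upgrade FOR ALL TABLES;'
done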
7. In each database in the target instance, create a subscription that subscribes to the just-created publication. Be sure to match the source and target databases correctly.
CREATE SUBSCRIPTION s_upgrade CONNECTION 'host=oldhost port=oldport dbname=dbname ...' PUBLICATION p_upgrade;
Set the connection parameters as appropriate.
8. Now you wait until the subscriptions have copied over the initial data and have fully caught up with the publisher. You can check the initial sync status of each table in a subscription in the system catalog pg_subscription_rel (look for r = ready in column srsubstate). The overall status of the replication can be checked in pg_stat_replication on the sending side and pg_stat_subscription on the receiving side.
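For example, you could check progress with queries along these lines:

-- on the new instance, in each subscribed database
SELECT srrelid::regclass, srsubstate FROM pg_subscription_rel;

-- on the old instance
SELECT application_name, state, replay_lag FROM pg_stat_replication;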
9. As mentioned above, sequence changes are not replicated. One possible workaround for this is to copy over the sequence values using pg_dump. You can get a dump of the current sequence values using something like this:
pg_dump -d dbname --data-only -t '*_seq' >seq-data.sql
(This assumes that the sequence names all match *_seq and no tables match that name. In more complicated cases you could also go the route of creating a full dump and extracting the sequence data from the dump’s table of contents.)
Since the sequences might advance as you do this, perhaps munge the seq-data.sql file to add a bit of slack to the numbers. Then restore that file to the new database using psql.
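For example:

psql -d dbname -f seq-data.sql    # run against the new instance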
10. Showtime: Switch the applications to the new instance. This requires some thinking ahead of time. In the simplest scenario, you stop your application programs, change the connection settings, and restart. If you use a connection proxy, you can switch over the connection there. You can also switch client applications one by one, perhaps to test things out a bit or to ease the load on the new system. This will work as long as the applications that still point to the old server and those that point to the new server don’t make conflicting writes. (In that case you would be running a multimaster system, at least for a short time, and that is another order of complexity.)
11. When the upgrade is complete, you can tear down the replication setup. In each database on the new instance, run
DROP SUBSCRIPTION s_upgrade;
If you have already shut down the old instance, this will fail because it won’t be able to reach the remote server to drop the replication slot. See the DROP SUBSCRIPTION man page for how to proceed in this situation.
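In short, per that documentation, you would first disassociate the subscription from its replication slot before dropping it:

ALTER SUBSCRIPTION s_upgrade DISABLE;
ALTER SUBSCRIPTION s_upgrade SET (slot_name = NONE);
DROP SUBSCRIPTION s_upgrade;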
You can also drop the publications on the source instance, but that is not necessary since a publication does not retain any resources.
12. Finally, remove the old instances if you don’t need them any longer.
Here are some additional comments on workarounds for things that logical replication does not support. If you are using large objects, you can move them over using pg_dump, as long as they don’t change during the upgrade process. This is a significant limitation, so if you are a heavy user of large objects, then this method might not be for you. If your application issues TRUNCATE during the upgrade process, those statements will not be replicated. Perhaps you can tweak your application to prevent it from doing that for the duration of the upgrade, or you can substitute a DELETE instead. PostgreSQL 11 will support replicating TRUNCATE, but that will only work if both the source and the destination instance are PostgreSQL 11 or newer.
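For the large object case, one possible approach, sketched here and untested, is to take a custom-format dump and restore only the large object entries from its table of contents (the file names are placeholders):

pg_dump -Fc -d dbname -f db.dump            # run against the old instance
pg_restore -l db.dump | grep BLOB >lo.list
pg_restore -d dbname -L lo.list db.dump     # run against the new instance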
Some closing comments that really apply to all upgrade undertakings:
- Applications and all database client programs should be tested against a new major PostgreSQL version before being put into production.
- To that end, you should also test the upgrade procedure before executing it in the production environment.
- Write things down, or better yet, script and automate as much as possible.
- Make sure your backup setup, monitoring systems, and any maintenance tools and scripts are adjusted appropriately during the upgrade procedure. Ideally, these should be in place and verified before the switchover is done.
With that in mind, good luck and please share your experiences.