Webinar: Highway to Zero Downtime PostgreSQL Upgrades [Follow Up]

October 05, 2020

In the past, major upgrades of terabyte-sized PostgreSQL databases required either extended downtime or a great amount of work. With logical decoding, a whole new world of opportunities has opened up for upgrades that require only a very small amount of downtime.

This webinar gave a comprehensive overview of how to perform a “near” zero downtime upgrade using pglogical, but it also covered the small details which, if not taken care of in time, can result in an extended upgrade or, worse, a failed one.

In order to explore this topic in detail, 2ndQuadrant arranged a live webinar, “Highway to Zero Downtime PostgreSQL Upgrades”, which was hosted by Martín Marqués, Technical Support Lead, Americas at 2ndQuadrant.

Below is a preview of the webinar. Those who weren’t able to attend the live webinar can now view the recording here.

Due to limited time, some of the questions were not answered during the live webinar, so answers by the host are provided below:

Question: Can I use pglogical on AWS Aurora 9.6 to do an upgrade to version 11?

Answer: I can’t give an answer to that, as Aurora does not share the same storage as community PostgreSQL.


Question: Can I upgrade a cluster from Postgres 10 to Postgres 13 with a single pg_upgrade command, or should I upgrade through the intermediate versions as well?

Answer: You do not have to upgrade in steps of one version jump. It’s perfectly fine to jump from 10 to 13 with one run of pg_upgrade.


Question: Is there a way to calculate max_wal_senders, max_worker_processes… max_parallel_workers…?

Answer: There is no formula. In the case of replication, it depends on how many connections you’ll have open at a given time. Each physical or logical replication connection uses a WAL sender, so you need to keep that in mind. max_worker_processes should be set to a value sensibly larger than max_wal_senders.
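As a rough illustration, these settings can be changed with ALTER SYSTEM; the values below are arbitrary examples rather than recommendations:

    -- arbitrary example values; size them to the number of expected
    -- replication connections plus some headroom. max_wal_senders and
    -- max_worker_processes only take effect after a restart.
    ALTER SYSTEM SET max_wal_senders = 10;
    ALTER SYSTEM SET max_worker_processes = 16;
    ALTER SYSTEM SET max_parallel_workers = 8;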


Question: When you mention “Large Objects”, what do you mean? Something like files in bytea?

Answer: No, Large Objects are generally binary data inserted using the lo_* functions. This data is stored in a catalog table called pg_largeobject, which is not logically replicated. If, on the other hand, you use bytea fields in your user tables, those do get replicated, and so you can use logical replication or pglogical to upgrade just like I explained in the webinar.
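To make the distinction concrete, here is a minimal sketch (the table names are hypothetical):

    -- a large object: the data lands in the pg_largeobject catalog
    -- and is NOT carried over by logical replication
    CREATE TABLE docs (id int PRIMARY KEY, content oid);
    INSERT INTO docs VALUES (1, lo_from_bytea(0, '\xdeadbeef'::bytea));

    -- a bytea column: the data lives in the user table (TOASTed when
    -- large) and IS replicated normally
    CREATE TABLE docs_bytea (id int PRIMARY KEY, content bytea);
    INSERT INTO docs_bytea VALUES (1, '\xdeadbeef'::bytea);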


Question: A question on logical replication: what happens when a DDL statement is executed on the provider node before the subscriber node has completed the catchup?

Answer: Nothing, really. The DDL will be logged in the WAL, but it will be ignored by the logical replication stream. A problem could come up if the DDL changes the schema in a way that breaks replication of DML on the table involved in that DDL.
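For example, a sketch of how this can break, assuming a hypothetical table that is already being replicated:

    -- executed on the provider only:
    ALTER TABLE public.orders ADD COLUMN discount numeric;

    -- subsequent DML now ships a column the subscriber doesn't have,
    -- so the subscriber's apply worker will fail until the subscriber's
    -- schema is updated to match
    INSERT INTO public.orders (id, discount) VALUES (42, 0.1);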


Question: Is the pglogical extension also available for PostgreSQL 13? (This extension seems to be missing from the Debian repositories.)

Answer: We are releasing a version for Postgres 13 very soon.

Question: Is it possible to use streaming replication and then enable logical replication (not pglogical) with no need for initial data sync?

Answer: Yes, that is possible, but it’s not documented. You would have to do something similar to what pglogical_create_subscriber does; a rough sketch follows.
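Very roughly, and using hypothetical names, the native building blocks look like this. The hard part, making sure the slot’s start point lines up exactly with the point at which the physical replica was promoted, is what pglogical_create_subscriber automates and is glossed over here:

    -- on the provider, before promoting the streaming replica:
    CREATE PUBLICATION upgrade_pub FOR ALL TABLES;
    SELECT pg_create_logical_replication_slot('upgrade_slot', 'pgoutput');

    -- on the promoted replica, skipping the initial data copy entirely:
    CREATE SUBSCRIPTION upgrade_sub
        CONNECTION 'host=old-primary dbname=appdb'
        PUBLICATION upgrade_pub
        WITH (copy_data = false, create_slot = false,
              slot_name = 'upgrade_slot');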


Question: How do you actually deal with sequences in logical replication?

Answer: You will need to synchronize them manually during the switchover; see the sequence-reset question further below for a sketch.


Question: Are there plans to add automatic DDL replication to either pglogical or native logical replication? replicate_ddl_command() is a manual step. The pgl_ddl_deploy extension, from a different source, offers this functionality when used with pglogical, but it has issues in my experience.

Answer: pglogical 3 will have transparent DDL replication.
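In the meantime, DDL can be queued explicitly with replicate_ddl_command; note that objects in the command should be schema-qualified (the table and replication set names below are examples):

    -- run the DDL locally and queue it for execution on subscribers
    -- in the given replication sets
    SELECT pglogical.replicate_ddl_command(
        'ALTER TABLE public.orders ADD COLUMN note text;',
        '{default}'
    );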


Question: Since we have to change the Postgres config file, does using pglogical technically mean some downtime if it was not planned for during the initial provisioning of the original/master database?

Answer: It really depends. You can change the configuration at any time and restart the target node. Replication will resume from where it stopped before the shutdown.
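One thing worth checking after changing settings is which of them actually require a restart; for example (wal_level is shown here as a setting that only takes effect after a restart):

    ALTER SYSTEM SET wal_level = 'logical';
    SELECT pg_reload_conf();

    -- settings that have been changed but need a restart to take effect
    SELECT name, setting, pending_restart
    FROM pg_settings
    WHERE pending_restart;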


Question: I am looking to upgrade from AWS RDS PG10 to a version below 13. I need mature features like partitioning and logical replication. Would you recommend 12, or waiting until 13 is available in AWS?

Answer: I can’t really answer that, as I don’t know what the PG13 roadmap looks like in RDS. It also depends on whether there are features introduced in PG13 that you would like to use.


Question: If we go with a multiple-database approach, do we get better resiliency?

Answer: Multiple databases don’t by themselves mean a more resilient system.


Question: Is there a notification or some sort of query to determine if the initial sync is complete on logical replication?

Answer: You need to use the logical replication views available in Postgres. There is a section about monitoring logical replication in the documentation.
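For native logical replication, a quick check on the subscriber could look like this; pglogical has its own pglogical.show_subscription_status() function for the same purpose:

    -- one row per replicated table; srsubstate = 'r' (ready) means the
    -- initial sync for that table has finished
    SELECT srrelid::regclass AS table_name, srsubstate
    FROM pg_subscription_rel;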


Question: When you say "Large Objects are not replicated", does that mean TOAST tables? We have many hundreds of GB in JSONB that are actually stored in TOAST tables.

Answer: No, Large Objects are those created using the lo_* functions in Postgres. The binary data of the large objects is stored in a catalog table, and so it’s not replicated. TOASTed data is replicated without issues.


Question: As a ‘best practices’ question, would the ideal approach for upgrading a server + standby be to set up a new server with logical replication, and then a standby of that new server?

Answer: Yes, that would be ideal.


Question: When you say primary key, do you mean a unique index or an actual primary key clause?

Answer: It has to be a primary key constraint; a unique index is not enough. pglogical 3 will be able to use REPLICA IDENTITY FULL, and also REPLICA IDENTITY USING INDEX over a unique index that covers NOT NULL fields.
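For reference, this is how a replica identity is changed in plain SQL (the table and index names are hypothetical; the index must be unique, non-partial, and on NOT NULL columns):

    -- use a specific unique index to identify old rows
    ALTER TABLE public.orders REPLICA IDENTITY USING INDEX orders_order_no_key;

    -- or log the whole old row for UPDATE/DELETE (more WAL, no index needed)
    ALTER TABLE public.orders REPLICA IDENTITY FULL;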


Question: If we want to use pg_upgrade and have pg_largeobject data, does the time for the upgrade increase?

Answer: Yes, and the added time can be very large if the number of large objects is big.
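To get a rough idea of what you are dealing with, the large objects in a database can be counted like this:

    -- one row per large object in the current database
    SELECT count(*) FROM pg_largeobject_metadata;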


Question: How do you move the open connections from the old server to the new server?

Answer: You don’t. Those connections need to be dropped. One option, so you don’t drop the connections on the app side, is to use pgbouncer. You can redirect connections with pgbouncer (this process is documented if I recall correctly).
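A rough sketch of what that redirect can look like from pgbouncer’s admin console (connect with something like psql -p 6432 pgbouncer), assuming a pool named appdb whose entry in pgbouncer.ini has been edited to point at the new server. PAUSE waits for active transactions to finish and holds new queries, RELOAD picks up the edited pgbouncer.ini, and RESUME releases the held clients against the new server:

    PAUSE appdb;
    RELOAD;
    RESUME appdb;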


Question: How do we reset the sequences once the upgrade is completed and the sequences are copied over? Will they start from 0?

Answer: You will need to loop over the sequences on the source node, looking at their current value, and set the next one on the destination node.
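A minimal sketch of that loop, using the pg_sequences view (available from PostgreSQL 10); the +1000 safety margin is an arbitrary choice:

    -- run on the source node to generate one setval() statement per
    -- sequence, then execute the generated statements on the destination
    SELECT format(
        'SELECT setval(%L, %s);',
        schemaname || '.' || sequencename,
        coalesce(last_value, 0) + 1000
    )
    FROM pg_sequences;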


Question: Can we use repmgr between different versions to set up a replica and then set up logical replication?

Answer: No, repmgr only works with physical replication.

To be the first to know about upcoming PostgreSQL webinars by 2ndQuadrant, visit our Webinars page.

For any questions, comments, or feedback, please visit our website or send an email to webinar@2ndquadrant.com.

