# EDB Postgres Distributed 6.3.0 release notes
Released: 26 March 2026
EDB Postgres Distributed (PGD) 6.3.0 includes new features, enhancements, and bug fixes focused on improving stability and reliability.
## Highlights
- Connection Manager on subscriber-only nodes: Connection Manager now starts by default on subscriber-only nodes, providing access to advanced connection pooling and session management features. It also re-routes all read-only traffic to the local node to reduce latency.
- Consensus management via PGD CLI: The PGD CLI now includes commands to enable or disable Raft consensus on specific nodes or groups. The commands `pgd raft enable` and `pgd raft disable` provide granular control over node behavior.
- Platform support update: PGD is now supported on SUSE Linux Enterprise Server (SLES) 15 SP7 beginning with 6.3.0.
## Features
| Description | Addresses |
|---|---|
| Connection Manager starts by default on subscriber-only nodes and includes new health check endpoints. Connection Manager now initializes automatically on subscriber-only nodes. This release also adds new health check endpoints. | |
| Added the `pgd raft set-leader` command. Introduced the `pgd raft set-leader` command. | |
| Added `pgd raft enable` and `pgd raft disable` commands. Added commands to enable or disable consensus (Raft) across different nodes, groups, or cluster levels. | |
## Enhancements
| Description | Addresses |
|---|---|
| Improved the group deletion process to automatically drop associated replication sets. Enhanced the node group removal logic to automatically drop replication sets of the same name. | 57531 |
| Improved the consistency of Connection Manager against | |
| Added support for user name mapping (`pg_ident.conf`). | |
| Improved the | |
| Improved conflict handling for origin changes in group commit. Enhanced group commit to perform eager conflict detection and resolution when the origin of a transaction changes. Conflict handling now relies on timestamp-based detection rather than replication-progress speculation. To ensure optimal performance and avoid increased commit latency, you must synchronize system clocks across all nodes. | |
| Added the | |
| Improved write leader election logic via two new | |
| Added support for | |
| Added the | |
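The timestamp-based conflict resolution described in the group commit enhancement above can be sketched as a last-update-wins rule. This is an illustrative model only: `Version` and `resolve` are hypothetical names, not PGD APIs, and PGD's actual algorithm does more than is shown here.

```python
from dataclasses import dataclass

@dataclass
class Version:
    """A hypothetical row version carrying its origin node and commit timestamp."""
    origin: str       # node that produced this version
    commit_ts: float  # commit timestamp (meaningful only with synchronized clocks)
    value: str

def resolve(local: Version, incoming: Version) -> Version:
    """Last-update-wins resolution keyed on commit timestamps.

    When the origin of a transaction changes, eager conflict detection
    compares commit timestamps instead of speculating from replication
    progress. Ties break deterministically by origin name so every node
    picks the same winner.
    """
    if incoming.commit_ts != local.commit_ts:
        return incoming if incoming.commit_ts > local.commit_ts else local
    return incoming if incoming.origin > local.origin else local

# The version with the newer commit timestamp wins, whichever origin produced it.
a = Version("node-a", 100.0, "old")
b = Version("node-b", 101.5, "new")
print(resolve(a, b).value)  # prints "new"
```

The sketch also shows why the release note insists on synchronized clocks: if clocks drift between nodes, "newer" no longer reflects real commit order, and every node will deterministically pick the wrong winner.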
## Changes
| Description | Addresses |
|---|---|
| Standalone macOS PGD CLI is now ARM64 only. The standalone PGD CLI for macOS is now available exclusively for the Apple Silicon (ARM64) architecture. This change applies only to the standalone macOS binary; the availability and support for standalone PGD CLI binaries on GNU/Linux remain unchanged. | |
## Bug Fixes
| Description | Addresses |
|---|---|
| Fixed a replication crash loop caused by interrupted partition detaching. Resolved an issue where logical replication would enter an infinite crash loop if a partition detach operation was interrupted. | 53628, 59291 |
| Fixed a | 53628, 59291 |
| Fixed | 54906 |
| Fixed encoding charset issue with | |
| Fixed the in-line help for the | |
| Fixed the matrix view output for the | 56295 |
| Fixed a memory growth in the consensus shared memory queue. Prevented shared memory queues used by consensus from growing while a node is unreachable, ensuring that memory remains stable during node outages. | |
| Fixed a memory leak in the consensus process when subscriber-only nodes are down. Resolved a malloc-based memory leak related to the improper freeing of connection strings. | 57094 |
| Fixed an issue where the | |
| Fixed a Connection Manager memory leak occurring during configuration reloads. Resolved a memory leak in the Connection Manager caused by incorrect handling of Postgres memory contexts during configuration reloads. | |
| Fixed a segmentation fault occurring during | 58060 |
| Fixed an issue where setting | |
| Fixed an issue with the analytics replicator that could cause redundant replication processes. In cluster topologies with subgroups and analytics enabled in the parent group, a bug caused each subgroup's write leader to start its own analytics replicator instance, resulting in duplicate rows. This fix ensures that only one analytics replicator per PGD group runs across the cluster. | |
| Fixed an infinite loop in the Connection Manager that occurred when a server connection was unexpectedly closed. Resolved a race condition in the Connection Manager where an unexpected server disconnection while retrieving a connection from a pool would trigger an infinite loop. This issue was limited to clusters operating in session pooling mode. | |
| Improved lock handling during change application to prevent data corruption from concurrent backends. Table and index locks are now held until transaction commit when applying changes. This ensures correct serialization and prevents data consistency issues during concurrent operations, specifically addressing risks introduced by parallel apply. | |
| Parallel Apply can now be enabled when PGD is deployed with community Postgres. Resolved an issue where enabling Parallel Apply on community PostgreSQL caused writer hangs and stuck subscriptions due to frequent lock timeouts. This fix allows Parallel Apply to be used reliably without requiring it to be disabled as a workaround. | 54565, 57005, 57327 |
| Improved Connection Manager resilience during file descriptor exhaustion. Connection Manager now automatically recovers from file descriptor exhaustion. If the system limit is reached, the process closes its listening sockets and automatically restarts after 30 seconds to restore connectivity. | |
| Fixed false error reporting during node group creation. Resolved a race condition where | |
| Fixed proxy configuration loss after | 58319 |
| Fixed stale | |
| Fixed | |
| Prevented duplicate data replication during analytics write leader transitions. Fixed an issue where the analytics replicator on a previous write leader could continue running simultaneously with the replicator on the new leader. This synchronization improvement ensures the old process stops completely before the new one initializes, preventing duplicate data inserts. This fix includes the addition of the `bdr.group_lease` catalog table and the `bdr.group_lease_override` function. | |
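The lease-based handoff behind the last fix above can be sketched as follows. The `Lease` class and `try_acquire`/`release` names are hypothetical illustrations of the idea behind the `bdr.group_lease` catalog table, not its actual implementation; timestamps are passed explicitly to keep the example deterministic.

```python
class Lease:
    """A minimal lease sketch: only the current holder may run the replicator,
    and a new holder must wait until the old lease is released or expires."""

    def __init__(self, ttl: float):
        self.ttl = ttl
        self.holder = None
        self.expires_at = 0.0

    def try_acquire(self, node: str, now: float) -> bool:
        # A node may take the lease only if it is free, its own, or expired.
        if self.holder in (None, node) or now >= self.expires_at:
            self.holder = node
            self.expires_at = now + self.ttl
            return True
        return False

    def release(self, node: str) -> None:
        # The outgoing holder releases before its replicator process exits.
        if self.holder == node:
            self.holder = None

def run_replicator(lease: Lease, node: str, now: float) -> bool:
    """Start replicating only if this node holds the group lease."""
    return lease.try_acquire(node, now)

lease = Lease(ttl=30.0)
assert run_replicator(lease, "old-leader", now=0.0)       # old leader replicates
assert not run_replicator(lease, "new-leader", now=10.0)  # blocked: lease still held
lease.release("old-leader")                               # old process stops first
assert run_replicator(lease, "new-leader", now=11.0)      # handoff completes
```

The key property is that the new leader's replicator cannot start while the old leader still holds the lease, which is exactly the overlap the fix eliminates; the expiry time bounds how long a crashed leader can block handoff.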