Choosing a geo-replication deployment pattern

Before configuring geo-replication, decide on a deployment pattern. The choice affects how you configure routing, how the cluster behaves during a location failure, and how much operational complexity you take on.

The full description of each pattern is in Deployment patterns. This page covers the decisions that matter specifically for geo-replication.

Deciding on the number of active write locations

The number of active write locations is the most fundamental decision. More active write locations means lower write latency for users in each region, but cross-region write conflicts become inevitable and conflict resolution must be part of your application design.

PGD uses the Raft consensus algorithm to coordinate decisions across nodes, such as electing a write leader and performing DDL operations. Raft requires a quorum (a majority of nodes) to be available before it can make these decisions. In a geo-distributed cluster, this means that the number of active locations directly affects whether the cluster can maintain quorum during a location failure.

  • Two active write locations: The simplest geo-replication setup. Each location serves local reads and writes, but losing one location breaks Raft majority, which blocks DDL and global sequences until the location recovers. Adding a witness node in a third location resolves this limitation without the cost of a full third active location. The relevant patterns are Two data groups, active-active and its witness variant.

  • Three or more active write locations: Maintains a natural Raft majority even when one location fails, eliminating the quorum concern; the sketch after this list illustrates the arithmetic. The trade-off is higher operational complexity. The relevant deployment patterns are Three data groups, active-active-active and, for the variant where sensitive data must stay within its origin region, Multiple locations, data residency.
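The sketch below (Python, illustrative only) shows the quorum arithmetic behind these options. The node counts per location are assumptions; PGD's actual Raft membership also depends on how witness nodes and subgroups are configured.

```python
# Minimal sketch of the Raft quorum arithmetic discussed above.
# Node counts per location are illustrative, not prescriptive.

def has_quorum(nodes_per_location: dict[str, int], failed: set[str]) -> bool:
    """Return True if the surviving nodes still form a Raft majority."""
    total = sum(nodes_per_location.values())
    surviving = sum(n for loc, n in nodes_per_location.items() if loc not in failed)
    return surviving > total // 2

# Two active write locations: losing either one breaks the majority.
print(has_quorum({"eu": 2, "us": 2}, failed={"us"}))                 # False

# Two data groups plus a witness in a third location keep quorum.
print(has_quorum({"eu": 2, "us": 2, "witness": 1}, failed={"us"}))   # True

# Three active write locations tolerate the loss of any single location.
print(has_quorum({"eu": 2, "us": 2, "ap": 2}, failed={"ap"}))        # True
```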

Choosing between local and global routing

PGD supports two routing modes for write traffic. You configure your preferred routing mode by setting the enable_routing configuration option for your node group.

  • Local routing: Each location has its own write leader and routes writes locally. This approach gives the lowest write latency and means each location operates independently for most operations. However, cross-region conflicts are possible, and losing a location with only two groups breaks global consensus.

  • Global routing: The entire cluster has a single write leader regardless of location. All write traffic is routed to that leader, which eliminates cross-region conflicts at the cost of added latency for writes originating in other regions: even when multiple locations are capable of accepting writes, global routing constrains them to the single leader. This mode works with three or more active write locations, or with two data groups plus a witness location. If the location hosting the write leader fails, the two remaining locations still hold a Raft majority and can elect a new write leader.

For configuration steps, see Configuring routing.
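The sketch below (Python with psycopg2, illustrative only) shows one way a routing mode could be applied. It assumes the bdr.alter_node_group_option() SQL function and the enable_routing option named above; the exact option key, the group it applies to, and whether routing is enabled per data group (local routing) or on the top-level group (global routing) depend on your PGD version, so verify against Configuring routing. Host names and group names are hypothetical.

```python
# Illustrative sketch: setting the routing mode per node group.
# Assumes psycopg2 and the bdr.alter_node_group_option() SQL function;
# group names, host names, and the exact option key are assumptions.
import psycopg2

def set_group_option(dsn: str, group: str, key: str, value: str) -> None:
    """Set a PGD node group option on the cluster reached through dsn."""
    conn = psycopg2.connect(dsn)
    try:
        with conn, conn.cursor() as cur:
            cur.execute(
                "SELECT bdr.alter_node_group_option(%s, %s, %s)",
                (group, key, value),
            )
    finally:
        conn.close()

DSN = "host=pgd-eu.internal dbname=appdb"

# Local routing: each data group routes writes to its own write leader.
set_group_option(DSN, "dc_eu", "enable_routing", "true")
set_group_option(DSN, "dc_us", "enable_routing", "true")

# Global routing: route through a single write leader for the whole cluster
# by enabling routing on the top-level group instead.
set_group_option(DSN, "mycluster", "enable_routing", "true")
```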

Enforcing data residency requirements

Regulations such as GDPR, CCPA, and national data localization laws can require that personal or sensitive data stays within its origin region.

Data residency enforcement is built into the Multiple locations, data residency deployment pattern. With local routing, each region's subgroup has its own write leader and applications connect to a local endpoint, so sensitive data stays in-region by default. Cross-region replication is selective: only global reference data (product catalogs, configuration, anonymized analytics) replicates between groups. Personal data, financial records, and other regulated data remain local to their origin group.

You must design your schema to clearly separate local from global data, and configure your application to always write regional data to the correct local endpoint.
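The sketch below (Python with psycopg2, illustrative only) shows one way an application can enforce the second half of that rule: route each write for regulated data to the endpoint of its origin region. The endpoint names, table name, and region-to-endpoint mapping are assumptions, not part of PGD.

```python
# Illustrative sketch: region-aware write routing in the application.
# Endpoint names, the table, and the region mapping are assumptions.
import psycopg2

# Region -> local PGD endpoint mapping (hypothetical hosts).
LOCAL_ENDPOINTS = {
    "eu": "host=pgd-eu.internal dbname=appdb",
    "us": "host=pgd-us.internal dbname=appdb",
}

def write_customer_record(region: str, customer_id: int, payload: str) -> None:
    """Write regulated, region-bound data only via its origin region's endpoint."""
    conn = psycopg2.connect(LOCAL_ENDPOINTS[region])  # never another region's endpoint
    try:
        with conn, conn.cursor() as cur:
            cur.execute(
                "INSERT INTO customer_records (customer_id, region, payload) "
                "VALUES (%s, %s, %s)",
                (customer_id, region, payload),
            )
    finally:
        conn.close()

# Regulated rows stay in their origin group; only tables you designate as global
# reference data (for example, a product catalog) replicate between groups.
write_customer_record("eu", 42, "example payload")
```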

Comparison

Each geo-replication deployment pattern makes different trade-offs across quorum resilience, routing options, conflict handling, and data residency.

| Deployment pattern | Active write locations | Quorum during location failure | Supports global routing | Conflict resolution required | Data residency enforcement |
|---|---|---|---|---|---|
| Two data groups, active-active | 2 | No (Yes with witness) | No (Yes with witness) | Yes | No |
| Three data groups, active-active-active | 3+ | Yes | Yes | Yes | No |
| Multiple locations, data residency | 2+ | Yes | Yes | Yes | Yes |