Deployment of PostgreSQL Replica Cluster via Barman Cloud Plugin on CloudNativePG - Part 1
This blog outlines the step-by-step process for setting up a CloudNativePG (CNPG) replica cluster using the Barman Cloud Plugin for backups and WAL archiving.
Environment Details:
- Operator: CloudNativePG (CNPG) 1.28.1
- Database: EDB Postgres Advanced Server 18 (EPAS)
- Backup Setup: Barman Cloud Plugin v0.11.0
- Storage: AWS S3
1. Pre-requisite: Create Namespaces and Credentials
First, we establish separate namespaces for the primary and replica clusters and store the S3 credentials required for the Barman plugin to access the object store.
user% kubectl create ns primary
namespace/primary created
user% kubectl create ns replica
namespace/replica created

Create the AWS credentials in the primary namespace
user% kubectl create secret generic aws-creds \
--from-literal=ACCESS_KEY_ID=xxxxxxxxN3GE5FSxxxxxx \
--from-literal=ACCESS_SECRET_KEY=xxxxxxxxGrS+xlfTlCZTaTxxxxxx -n primary
secret/aws-creds created

Create the same AWS credentials in the replica namespace
user% kubectl create secret generic aws-creds \
--from-literal=ACCESS_KEY_ID=xxxxxxxxN3GE5FSxxxxxx \
--from-literal=ACCESS_SECRET_KEY=xxxxxxxxGrS+xlfTlCZTaTxxxxxx -n replica
secret/aws-creds created

2. Set up the Barman Plugin
The Barman plugin requires cert-manager for secure communication. We install cmctl, deploy cert-manager, confirm its API is ready, and then apply the Barman Cloud plugin manifest.
user% brew install cmctl
:
:
user% kubectl create namespace cert-manager
namespace/cert-manager created

user% kubectl apply -f https://github.com/cert-manager/cert-manager/releases/latest/download/cert-manager.yaml
Warning: resource namespaces/cert-manager is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
namespace/cert-manager configured
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io configured
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io configured
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io configured
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io configured
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io unchanged
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io unchanged
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-cluster-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
role.rbac.authorization.k8s.io/cert-manager-tokenrequest created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-tokenrequest created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
service/cert-manager-cainjector created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created

user% kubectl get pods -n cert-manager
NAME READY STATUS RESTARTS AGE
cert-manager-6bfcb455c7-q75dr 1/1 Running 1 (37m ago) 107m
cert-manager-cainjector-84d45cd8f4-j4zvg 1/1 Running 0 107m
cert-manager-webhook-5bb447875c-lsn7s 1/1 Running 0 107m

user% cmctl check api
The cert-manager API is ready

Apply the plugin manifest to the operator's namespace
user% kubectl apply -f https://github.com/cloudnative-pg/plugin-barman-cloud/releases/download/v0.11.0/manifest.yaml
customresourcedefinition.apiextensions.k8s.io/objectstores.barmancloud.cnpg.io created
serviceaccount/plugin-barman-cloud created
role.rbac.authorization.k8s.io/barman-plugin-leader-election-role created
clusterrole.rbac.authorization.k8s.io/barman-plugin-metrics-auth-role created
clusterrole.rbac.authorization.k8s.io/barman-plugin-metrics-reader created
clusterrole.rbac.authorization.k8s.io/barman-plugin-objectstore-editor-role created
clusterrole.rbac.authorization.k8s.io/barman-plugin-objectstore-viewer-role created
clusterrole.rbac.authorization.k8s.io/plugin-barman-cloud created
rolebinding.rbac.authorization.k8s.io/barman-plugin-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/barman-plugin-metrics-auth-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/plugin-barman-cloud-binding created
secret/plugin-barman-cloud-52ggkcd52d created
service/barman-cloud created
deployment.apps/barman-cloud created
certificate.cert-manager.io/barman-cloud-client created
certificate.cert-manager.io/barman-cloud-server created
issuer.cert-manager.io/selfsigned-issuer created

user% kubectl rollout status deployment -n postgresql-operator-system barman-cloud
deployment "barman-cloud" successfully rolled out

3. Create ObjectStore Resources
We define the `ObjectStore` resource in both namespaces. This tells the plugin where to store and retrieve the backups and WAL files.
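The manifests in this post keep the configuration minimal (just WAL compression). The barman-cloud configuration block also accepts optional tuning and retention knobs; a sketch of commonly used ones is below. Treat the exact field names and placement as assumptions based on CNPG's in-tree `barmanObjectStore` schema, and verify them against the v0.11.0 `ObjectStore` CRD before use:

```yaml
# Hypothetical tuning additions to the ObjectStore spec (verify against the CRD)
spec:
  configuration:
    wal:
      compression: gzip
      maxParallel: 4        # parallel WAL upload jobs
    data:
      compression: gzip     # compress base backups as well
      jobs: 2               # parallel upload jobs for base backups
  retentionPolicy: "30d"    # prune backups older than 30 days
```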
Create ObjectStore for the primary cluster in the primary namespace
user% vi ObjectStore-primary.yaml
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: s3-store
  namespace: primary
spec:
  configuration:
    destinationPath: s3://swapnil-backup/cnp/
    s3Credentials:
      accessKeyId:
        name: aws-creds
        key: ACCESS_KEY_ID
      secretAccessKey:
        name: aws-creds
        key: ACCESS_SECRET_KEY
    wal:
      compression: gzip

user% kubectl apply -f ObjectStore-primary.yaml
objectstore.barmancloud.cnpg.io/s3-store created

Create an ObjectStore for the replica cluster in the replica namespace
user% vi ObjectStore-replica.yaml
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: s3-store
  namespace: replica
spec:
  configuration:
    destinationPath: s3://swapnil-backup/cnp/
    s3Credentials:
      accessKeyId:
        name: aws-creds
        key: ACCESS_KEY_ID
      secretAccessKey:
        name: aws-creds
        key: ACCESS_SECRET_KEY
    wal:
      compression: gzip

user% kubectl apply -f ObjectStore-replica.yaml
objectstore.barmancloud.cnpg.io/s3-store created

4. Deploy and Verify Primary Cluster
Deploy the primary cluster in the primary namespace and verify the WAL archiving status through the Barman plugin.
user% vi cluster-primary.yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-primary
  namespace: primary
spec:
  instances: 3
  imageName: docker.enterprisedb.com/k8s/edb-postgres-advanced:18-standard-ubi9
  primaryUpdateStrategy: unsupervised
  storage:
    size: 1G
  replica:
    primary: cluster-primary
    source: cluster-primary
  plugins:
    - name: barman-cloud.cloudnative-pg.io
      isWALArchiver: true
      parameters:
        barmanObjectName: s3-store
  externalClusters:
    - name: cluster-primary
      plugin:
        name: barman-cloud.cloudnative-pg.io
        parameters:
          barmanObjectName: s3-store
          serverName: cluster-primary
    - name: cluster-replica
      plugin:
        name: barman-cloud.cloudnative-pg.io
        parameters:
          barmanObjectName: s3-store
          serverName: cluster-replica

user% kubectl apply -f cluster-primary.yaml
cluster.postgresql.k8s.enterprisedb.io/cluster-primary created

user% kubectl get pods -L role -n primary
NAME READY STATUS RESTARTS AGE ROLE
cluster-primary-1 2/2 Running 0 3m11s primary
cluster-primary-2 2/2 Running 0 2m10s replica
cluster-primary-3 2/2 Running 0 102s replica

Check CNP cluster status and verify WAL archiving is "OK."
user% kubectl cnp status cluster-primary -n primary
Cluster Summary
Name: primary/cluster-primary
System ID: 7621517035557347356
PostgreSQL Image: docker.enterprisedb.com/k8s/edb-postgres-advanced:18-standard-ubi9
Primary instance: cluster-primary-1
Primary promotion time: 2026-03-26 10:59:23 +0000 UTC (3m8s)
Status: Cluster in healthy state
Instances: 3
Ready instances: 3
Size: 169M
Current Write LSN: 0/8000060 (Timeline: 1 - WAL File: 000000010000000000000008)
Continuous Backup status (Barman Cloud Plugin)
No recovery window information found in ObjectStore 's3-store' for server 'cluster-primary'
Working WAL archiving: OK
WALs waiting to be archived: 0
Last Archived WAL: 000000010000000000000007.00000028.backup @ 2026-03-26T11:00:11.577402Z
Last Failed WAL: -
Streaming Replication status
Replication Slots Enabled
Name Sent LSN Write LSN Flush LSN Replay LSN Write Lag Flush Lag Replay Lag State Sync State Sync Priority Replication Slot
---- -------- --------- --------- ---------- --------- --------- ---------- ----- ---------- ------------- ----------------
cluster-primary-2 0/8000060 0/8000060 0/8000060 0/8000060 00:00:00 00:00:00 00:00:00 streaming async 0 active
cluster-primary-3 0/8000060 0/8000060 0/8000060 0/8000060 00:00:00 00:00:00 00:00:00 streaming async 0 active
Instances status
Name Current LSN Replication role Status QoS Manager Version Node
---- ----------- ---------------- ------ --- --------------- ----
cluster-primary-1 0/8000060 Primary OK BestEffort 1.28.1 replicacluster-control-plane
cluster-primary-2 0/8000060 Standby (async) OK BestEffort 1.28.1 replicacluster-control-plane
cluster-primary-3 0/8000060 Standby (async) OK BestEffort 1.28.1 replicacluster-control-plane
Plugins status
Name Version Status Reported Operator Capabilities
---- ------- ------ ------------------------------
barman-cloud.cloudnative-pg.io 0.11.0 N/A Reconciler Hooks, Lifecycle Service

5. Execute Manual Backup
Perform a manual backup through the Barman plugin to ensure the S3 object store is populated for the replica cluster bootstrap.
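In production, a manual backup like this is usually complemented by a declarative `ScheduledBackup` resource so backups run on a cron schedule. A sketch, assuming the EDB operator's API group used in this post and that your operator version supports the `plugin` backup method (the resource name below is hypothetical):

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: ScheduledBackup
metadata:
  name: cluster-primary-nightly   # hypothetical name
  namespace: primary
spec:
  schedule: "0 0 2 * * *"         # six-field cron: daily at 02:00
  backupOwnerReference: self
  cluster:
    name: cluster-primary
  method: plugin
  pluginConfiguration:
    name: barman-cloud.cloudnative-pg.io
```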
Trigger a backup using the plugin method
user% kubectl cnp backup cluster-primary --method=plugin --plugin-name=barman-cloud.cloudnative-pg.io -n primary
backup/cluster-primary-20260326163523 created

Verify backup completion
user% kubectl get backup -n primary
NAME AGE CLUSTER METHOD PHASE ERROR
cluster-primary-20260326163523 33s cluster-primary plugin completed

Verify the backup's First Point of Recoverability and Last Successful Backup:
user% kubectl cnp status cluster-primary -n primary
Cluster Summary
Name: primary/cluster-primary
System ID: 7621517035557347356
PostgreSQL Image: docker.enterprisedb.com/k8s/edb-postgres-advanced:18-standard-ubi9
Primary instance: cluster-primary-1
Primary promotion time: 2026-03-26 10:59:23 +0000 UTC (7m27s)
Status: Cluster in healthy state
:
:
Continuous Backup status (Barman Cloud Plugin)
ObjectStore / Server name: s3-store/cluster-primary
First Point of Recoverability: 2026-03-26 16:35:25 IST
Last Successful Backup: 2026-03-26 16:35:25 IST
Last Failed Backup: -
Working WAL archiving: OK
WALs waiting to be archived: 0
Last Archived WAL: 000000010000000000000008 @ 2026-03-26T11:05:08.855493Z
Last Failed WAL: -

6. Deploy and Verify Replica Cluster
The replica cluster is configured to bootstrap via recovery from the primary's object store and then keep itself synchronized, pointing at the `cluster-primary` external cluster as its source.
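Both clusters write to the same bucket, and the plugin keeps them apart by `serverName`. As a quick orientation, barman-cloud conventionally lays out base backups and WAL under per-server prefixes of the `destinationPath`; the sketch below only derives the expected prefixes for this topology (it does not query S3):

```shell
# Derive the expected S3 prefixes for each server in this topology.
# barman-cloud conventionally stores base backups under <dest>/<server>/base
# and archived WAL segments under <dest>/<server>/wals.
dest="s3://swapnil-backup/cnp"
prefixes=""
for server in cluster-primary cluster-replica; do
  prefixes="$prefixes $dest/$server/base $dest/$server/wals"
done
echo "$prefixes"
```

This is why the `externalClusters` entries above and below carry explicit `serverName` parameters: they select which of these prefixes a cluster reads from.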
user% vi cluster-replica.yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-replica
  namespace: replica
spec:
  instances: 3
  imageName: docker.enterprisedb.com/k8s/edb-postgres-advanced:18-standard-ubi9
  storage:
    size: 1G
  bootstrap:
    recovery:
      source: cluster-primary
  replica:
    primary: cluster-primary
    source: cluster-primary
  plugins:
    - name: barman-cloud.cloudnative-pg.io
      isWALArchiver: true
      parameters:
        barmanObjectName: s3-store
  externalClusters:
    - name: cluster-primary
      plugin:
        name: barman-cloud.cloudnative-pg.io
        parameters:
          barmanObjectName: s3-store
          serverName: cluster-primary
    - name: cluster-replica
      plugin:
        name: barman-cloud.cloudnative-pg.io
        parameters:
          barmanObjectName: s3-store
          serverName: cluster-replica

user% kubectl apply -f cluster-replica.yaml
cluster.postgresql.k8s.enterprisedb.io/cluster-replica created

user% kubectl get pods -L role -n replica
NAME READY STATUS RESTARTS AGE ROLE
cluster-replica-1 2/2 Running 0 116s primary
cluster-replica-2 2/2 Running 0 88s replica
cluster-replica-3 2/2 Running 0 60s replica

Verify replica status and connection to the source cluster
user% kubectl cnp status cluster-replica -n replica
Replica Cluster Summary
Name replica/cluster-replica
System ID: 7621517035557347356
PostgreSQL Image: docker.enterprisedb.com/k8s/edb-postgres-advanced:18-standard-ubi9
Designated primary: cluster-replica-1
Source cluster: cluster-primary
Primary promotion time: 2026-03-26 11:08:43 +0000 UTC (2m8s)
Status: Cluster in healthy state
Instances: 3
Ready instances: 3
Size: 88M
Continuous Backup status (Barman Cloud Plugin)
No recovery window information found in ObjectStore 's3-store' for server 'cluster-replica'
Working WAL archiving: OK
WALs waiting to be archived: 0
Last Archived WAL: 000000010000000000000008 @ 2026-03-26T11:08:49.722535Z
Last Failed WAL: -
Instances status
Name Current LSN Replication role Status QoS Manager Version Node
---- ----------- ---------------- ------ --- --------------- ----
cluster-replica-1 0/9000000 Designated primary OK BestEffort 1.28.1 replicacluster-control-plane
cluster-replica-2 0/9000000 Standby (in Replica Cluster) OK BestEffort 1.28.1 replicacluster-control-plane
cluster-replica-3 0/9000000 Standby (in Replica Cluster) OK BestEffort 1.28.1 replicacluster-control-plane
Plugins status
Name Version Status Reported Operator Capabilities
---- ------- ------ ------------------------------
barman-cloud.cloudnative-pg.io 0.11.0 N/A Reconciler Hooks, Lifecycle Service

7. Verify Replication
Create data in the primary cluster and check if it is correctly replicated to the replica cluster.
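The `pg_switch_wal()` calls below force segment boundaries so the current WAL is shipped to the archive immediately. The 24-hex-digit segment names that then appear (e.g. 000000010000000000000008) encode three 8-digit parts; a small POSIX-shell sketch decoding one:

```shell
# Decode a 24-hex-character WAL segment name into its three 8-digit fields:
# timeline ID, log number (high 32 bits of the LSN), and segment within the log.
wal=000000010000000000000008
tli=$(printf '%d' 0x"$(echo "$wal" | cut -c1-8)")
log=$(printf '%d' 0x"$(echo "$wal" | cut -c9-16)")
seg=$(printf '%d' 0x"$(echo "$wal" | cut -c17-24)")
echo "timeline=$tli log=$log segment=$seg"
# prints: timeline=1 log=0 segment=8
```

Segment 8 on timeline 1 lines up with the "Current Write LSN: 0/8000060 (Timeline: 1 ...)" shown in the primary's status output earlier.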
Write to Primary:
user% kubectl cnp psql cluster-primary -n primary
psql (18.3.0)
Type "help" for help.
postgres=# create table test(id int);
CREATE TABLE
postgres=#
postgres=# insert into test values (1);
INSERT 0 1
postgres=# select * from test;
id
----
1
(1 row)
postgres=# checkpoint;
CHECKPOINT
postgres=# select * from pg_switch_wal();
pg_switch_wal
---------------
0/9020208
(1 row)
postgres=# select * from pg_switch_wal();
pg_switch_wal
---------------
0/A000078
(1 row)

Read from Replica:
user% kubectl cnp psql cluster-replica -n replica
psql (18.3.0)
Type "help" for help.
postgres=# \dt
List of relations
Schema | Name | Type | Owner
--------+------+-------+----------
public | test | table | postgres
(1 row)
postgres=# select * from test;
id
----
1
(1 row)

Conclusion
The successful deployment of the cluster-replica in the replica namespace, bootstrapped from the cluster-primary in the primary namespace, demonstrates the robustness of the Barman Cloud Plugin within a CloudNativePG environment.
By leveraging S3-compatible storage for both full base backups and continuous WAL archiving, we achieve a decoupled architecture in which the replica cluster can be initialized and synchronized without a direct network dependency on the primary's active pods during the bootstrap phase.
Advanced Operations: Lifecycle Management
Now that the distributed topology is established and data is synchronizing via the Barman Cloud Plugin, you can perform manual or automated traffic shifts between clusters.
Next Phase: Refer to the runbook Switchover and Switchback of CloudNativePG Replica Clusters in a Distributed Topology (K8s) Part 2 to learn how to promote the replica cluster to primary status and demote the original primary without data loss.