In PostgreSQL terminology, recovery is the process of starting a PostgreSQL instance using a previously taken backup. The PostgreSQL recovery mechanism is very solid and rich. It also supports Point In Time Recovery, which allows you to restore a given cluster up to any point in time from the first available backup in your catalog to the last archived WAL (as you can see, the WAL archive is mandatory in this case).
In EDB Postgres for Kubernetes, recovery cannot be performed "in-place" on an existing cluster. Recovery is rather a way to bootstrap a new Postgres cluster starting from an available physical backup.
For details on the `bootstrap` stanza, please refer to the "Bootstrap" section. The `recovery` bootstrap mode lets you create a new cluster from an existing physical base backup, and then reapply the WAL files containing the REDO log from the archive.
WAL files are pulled from the defined recovery object store.
Base backups may be taken either on object stores or using volume snapshots (from version 1.21).
Recovery using volume snapshots was initially released in 1.20.1. Given the amount of progress on this feature in 1.21.0, we strongly advise that you upgrade to 1.21.0 or a later release to use volume snapshots.
Recovery from an object store can be achieved in two ways:

- using a recovery object store, that is, a backup of another cluster created by Barman Cloud and defined via the `barmanObjectStore` option in the `externalClusters` section
- using an existing `Backup` object in the same namespace (this was the only option available before version 1.8.0).
Both recovery methods enable either full recovery (up to the last available WAL) or recovery up to a point in time.
When performing a full recovery, the cluster can also be started
in replica mode (see replica clusters for reference).
If using replica mode, make sure that the PostgreSQL configuration (`.spec.postgresql.parameters`) of the recovered cluster is compatible, from a physical replication standpoint, with the original one.
For recovery using volume snapshots:

- use a consistent set of `VolumeSnapshot` objects that all belong to the same backup and are identified by the same `k8s.enterprisedb.io/backupName` label, then recover through the `volumeSnapshots` option in the `.spec.bootstrap.recovery` stanza, as described in "Recovery from `VolumeSnapshot` objects".
You can recover from a backup created by Barman Cloud and stored on a supported object store. Once you have defined the external cluster, including all the required configuration in the `barmanObjectStore` section, you need to reference it in the `.spec.bootstrap.recovery.source` option. The following example defines a recovery object store in a blob container in Azure:
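A manifest along these lines should work; the cluster name, the `clusterBackup` source name, the Azure storage coordinates, and the `recovery-object-store-secret` credentials secret are placeholders to adapt to your environment:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-restore
spec:
  instances: 3

  storage:
    size: 5Gi

  bootstrap:
    recovery:
      source: clusterBackup

  externalClusters:
    - name: clusterBackup
      barmanObjectStore:
        # Placeholder Azure blob container holding the backup data
        destinationPath: https://STORAGEACCOUNTNAME.blob.core.windows.net/CONTAINERNAME/
        azureCredentials:
          storageAccountName:
            name: recovery-object-store-secret
            key: storage_account_name
          storageAccountKey:
            name: recovery-object-store-secret
            key: storage_account_key
        wal:
          # Fetch up to 8 WAL files concurrently during recovery
          maxParallel: 8
```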
By default, the `recovery` method strictly uses the `name` of the cluster in the `externalClusters` section as the name of the main folder of the backup data within the object store, which is normally reserved for the name of the server. You can specify a different folder name using the `barmanObjectStore.serverName` property.
In the above example, we are taking advantage of the parallel WAL restore feature, dedicating up to 8 jobs to concurrently fetch the required WAL files from the archive. This feature can appreciably reduce the recovery time. Make sure that you plan ahead for this scenario and correctly tune the value of this parameter for your environment. It will certainly make a difference when (not if) you need it.
When creating replicas after having recovered the primary instance from the volume snapshot, the operator might end up using `pg_basebackup` to synchronize them, resulting in a slower process depending on the size of the database. This limitation will be lifted in the future when support for online backups and PVC cloning is introduced.
EDB Postgres for Kubernetes can create a new cluster from a `VolumeSnapshot` of a PVC of an existing `Cluster` that's been taken using the declarative API for volume snapshot backups. You will need to specify the name of the snapshot, as in the following example:
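A sketch of such a manifest; the snapshot name `cluster-example-snapshot-1` is a placeholder:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-restore
spec:
  instances: 3

  storage:
    size: 10Gi

  bootstrap:
    recovery:
      volumeSnapshots:
        storage:
          # Placeholder: the VolumeSnapshot containing the PGDATA
          name: cluster-example-snapshot-1
          kind: VolumeSnapshot
          apiGroup: snapshot.storage.k8s.io
```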
In case the backed-up cluster was using a separate PVC to store the WAL files, the recovery must include that too:
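Again as a sketch, with placeholder snapshot names for both the `PGDATA` and WAL volumes:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-restore
spec:
  instances: 3

  storage:
    size: 10Gi
  walStorage:
    size: 1Gi

  bootstrap:
    recovery:
      volumeSnapshots:
        storage:
          name: cluster-example-snapshot-1
          kind: VolumeSnapshot
          apiGroup: snapshot.storage.k8s.io
        walStorage:
          # Snapshot of the PVC holding the WAL files (PGWAL)
          name: cluster-example-snapshot-1-wal
          kind: VolumeSnapshot
          apiGroup: snapshot.storage.k8s.io
```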
If bootstrapping a replica-mode cluster from snapshots, to leverage snapshots for the standby instances and not just the primary, it would be advisable to:
- start with a single instance replica cluster. The primary instance will be recovered using the snapshot and the available WALs from the source cluster
- take a snapshot of the primary in the replica cluster
- increase the number of instances in the replica cluster as desired
In case a `Backup` resource is already available in the namespace in which the cluster should be created, you can specify its name through `.spec.bootstrap.recovery.backup.name`, as in the following example:
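For instance, assuming a `Backup` named `backup-example` in the same namespace (both names here are placeholders):

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example-restore
spec:
  instances: 3

  storage:
    size: 5Gi

  bootstrap:
    recovery:
      backup:
        # Placeholder: name of the existing Backup resource
        name: backup-example
```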
This bootstrap method allows you to specify just a reference to the backup that needs to be restored.
The previous example implies that the application database and its owning user are the default ones, `app`. If the PostgreSQL cluster being restored was using different names, they can be specified as documented in the "Configure the application database" section.
Whether you recover from a recovery object store, a volume snapshot, or a `Backup` resource, the following considerations apply:
- The application database name and the application database user are preserved from the backup that is being restored. The operator does not currently attempt to back up the underlying secrets, as this is part of the usual maintenance activity of the Kubernetes cluster itself.
- To preserve the original `postgres` user password, you need to properly configure `enableSuperuserAccess` and supply a superuser secret (see the sketch after this list).
- By default, the recovery will continue up to the latest available WAL on the default target timeline (`current` for PostgreSQL up to 11, `latest` for version 12 and above). You can optionally specify a `recoveryTarget` to perform a point in time recovery (see the "Point in time recovery" section).
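A minimal sketch of the superuser configuration; the secret name `superuser-secret` is a placeholder for a `kubernetes.io/basic-auth` secret you create containing the original `postgres` password:

```yaml
# Fragment of the recovered Cluster's spec
spec:
  enableSuperuserAccess: true
  superuserSecret:
    # Placeholder: basic-auth secret holding the original postgres password
    name: superuser-secret
```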
Consider using the `barmanObjectStore.wal.maxParallel` option to speed up WAL fetching from the archive by concurrently downloading the transaction logs from the recovery object store.
Instead of replaying all the WALs up to the latest one, we can ask PostgreSQL to stop replaying WALs at any given point in time, after having extracted a base backup. PostgreSQL uses this technique to achieve point-in-time recovery (PITR). The presence of a WAL archive is mandatory.
PITR requires you to specify a recovery target, by using the options described in the "Recovery targets" section below.
The operator will generate the configuration parameters required for this feature to work in case a recovery target is specified.
The example below uses a recovery object store in Azure that contains both the base backups and the WAL archive. The recovery target is based on a requested timestamp:
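A sketch of such a manifest; the names, the Azure coordinates, the credentials secret, and the timestamp are illustrative:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-restore-pitr
spec:
  instances: 3

  storage:
    size: 5Gi

  bootstrap:
    recovery:
      source: clusterBackup
      recoveryTarget:
        # Stop replaying WALs at this point in time
        targetTime: "2023-08-11 11:14:21.00000+02"

  externalClusters:
    - name: clusterBackup
      barmanObjectStore:
        destinationPath: https://STORAGEACCOUNTNAME.blob.core.windows.net/CONTAINERNAME/
        azureCredentials:
          storageAccountName:
            name: recovery-object-store-secret
            key: storage_account_name
          storageAccountKey:
            name: recovery-object-store-secret
            key: storage_account_key
        wal:
          maxParallel: 8
```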
You might have noticed that in the above example you only had to specify `targetTime` in the form of a timestamp, without having to worry about specifying the base backup from which to start the recovery. The `backupID` option is the one that allows you to specify the base backup from which to initiate the recovery process. By default, this value is empty. If you assign a value to it (in the form of a Barman backup ID), the operator will use that backup as the base for the recovery. You need to make sure that such a backup exists and is accessible.
If the backup ID is not specified, the operator will automatically detect the base backup for the recovery as follows:

- when you use `targetTime` or `targetLSN`, the operator selects the closest backup that was completed before that target
- otherwise, the operator selects the last available backup in chronological order.
The example below uses:

- a Kubernetes volume snapshot for the `PGDATA` containing the base backup from which to start the recovery process, identified in the `recovery.volumeSnapshots` section and called `test-snapshot-1`
- a recovery object store in MinIO containing the WAL archive, identified by the `recovery.source` option in the form of an external cluster definition
The recovery target is based on a requested timestamp.
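A sketch combining the two; the snapshot name `test-snapshot-1`, the MinIO endpoint, the `minio` credentials secret, and the timestamp are illustrative:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example-snapshot
spec:
  instances: 3

  storage:
    size: 5Gi

  bootstrap:
    recovery:
      source: cluster-example-with-backup
      volumeSnapshots:
        storage:
          name: test-snapshot-1
          kind: VolumeSnapshot
          apiGroup: snapshot.storage.k8s.io
      recoveryTarget:
        targetTime: "2023-07-06T08:00:39"

  externalClusters:
    - name: cluster-example-with-backup
      barmanObjectStore:
        destinationPath: s3://backups/
        endpointURL: http://minio:9000
        s3Credentials:
          accessKeyId:
            name: minio
            key: ACCESS_KEY_ID
          secretAccessKey:
            name: minio
            key: ACCESS_SECRET_KEY
```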
In case the backed-up `Cluster` had `walStorage` enabled, you also must specify the volume snapshot containing the `PGWAL` directory, as mentioned in the "Recovery from VolumeSnapshot objects" section.
It is your responsibility to ensure that the end time of the base backup in the volume snapshot is prior to the recovery target timestamp.
Here are the recovery target criteria you can use:

- `targetTime`: time stamp up to which recovery will proceed, expressed in RFC 3339 format (the precise stopping point is also influenced by the `exclusive` option)
- `targetXID`: transaction ID up to which recovery will proceed (the precise stopping point is also influenced by the `exclusive` option); keep in mind that while transaction IDs are assigned sequentially at transaction start, transactions can complete in a different numeric order. The transactions that will be recovered are those that committed before (and optionally including) the specified one
- `targetName`: named restore point (created with `pg_create_restore_point()`) to which recovery will proceed
- `targetLSN`: LSN of the write-ahead log location up to which recovery will proceed (the precise stopping point is also influenced by the `exclusive` option)
- `targetImmediate`: recovery should end as soon as a consistent state is reached, that is, as early as possible. When restoring from an online backup, this means the point where taking the backup ended
While the operator is able to automatically retrieve the closest backup when either `targetTime` or `targetLSN` is specified, this is not possible for the remaining targets: `targetName`, `targetXID`, and `targetImmediate`. In such cases, it is important to specify `backupID`, unless you are OK with the last available backup in the catalog.
The example below uses a `targetName`-based recovery target:
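A sketch, combining `targetName` with an explicit `backupID`; the backup ID, restore point name, and object store coordinates are illustrative:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-restore-pitr
spec:
  instances: 3

  storage:
    size: 5Gi

  bootstrap:
    recovery:
      source: clusterBackup
      recoveryTarget:
        # Barman backup ID of the base backup to start from
        backupID: 20220616T142236
        # Named restore point created with pg_create_restore_point()
        targetName: "restore_point_1"

  externalClusters:
    - name: clusterBackup
      barmanObjectStore:
        destinationPath: https://STORAGEACCOUNTNAME.blob.core.windows.net/CONTAINERNAME/
        azureCredentials:
          storageAccountName:
            name: recovery-object-store-secret
            key: storage_account_name
          storageAccountKey:
            name: recovery-object-store-secret
            key: storage_account_key
```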
You can choose only a single one among the targets above in each `recoveryTarget` configuration. Additionally, you can specify `targetTLI` to force recovery to a specific timeline.
By default, the previous parameters are considered to be inclusive, stopping just after the recovery target, matching the behavior of PostgreSQL. You can request exclusive behavior, stopping right before the recovery target, by setting the `exclusive` parameter to `true`, as in the following example relying on a blob container in Azure for both base backups and the WAL archive:
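A sketch; names and values are illustrative:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-restore-pitr
spec:
  instances: 3

  storage:
    size: 5Gi

  bootstrap:
    recovery:
      source: clusterBackup
      recoveryTarget:
        backupID: 20220616T142236
        targetName: "maintenance-activity"
        # Stop right before the restore point instead of just after it
        exclusive: true

  externalClusters:
    - name: clusterBackup
      barmanObjectStore:
        destinationPath: https://STORAGEACCOUNTNAME.blob.core.windows.net/CONTAINERNAME/
        azureCredentials:
          storageAccountName:
            name: recovery-object-store-secret
            key: storage_account_name
          storageAccountKey:
            name: recovery-object-store-secret
            key: storage_account_key
        wal:
          maxParallel: 8
```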
For the recovered cluster, we can configure the application database name and credentials with additional configuration. To update the application database credentials, we can generate our own passwords, store them as secrets, and update the database to use the secrets. Alternatively, we can let the operator generate a secret with a randomly secure password. Please refer to the "Bootstrap an empty cluster" section for more information about secrets.
The following example configures the application database `app` with owner `app`, and the supplied secret `app-secret`:
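A sketch of the relevant bootstrap configuration, here recovering from a `Backup` object; `backup-example` is a placeholder name, and `app-secret` is assumed to be a `kubernetes.io/basic-auth` secret holding `username` and `password`:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-restore
spec:
  instances: 3

  storage:
    size: 5Gi

  bootstrap:
    recovery:
      database: app
      owner: app
      secret:
        # Placeholder: basic-auth secret with the application credentials
        name: app-secret
      backup:
        name: backup-example
```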
With the above configuration, the following will happen after recovery is completed:

- if database `app` does not exist, a new database `app` will be created
- if user `app` does not exist, a new user `app` will be created
- if user `app` is not the owner of the database, user `app` will be granted ownership of database `app`
- if the value of `username` matches the value of `owner` in the secret, the password of the application database will be changed to the value of `password` in the secret
For a replica cluster with replica mode enabled, the operator will not create any database or user in the PostgreSQL instance, as these will be recovered from the original cluster.
You can use the data uploaded to the object storage to bootstrap a
new cluster from a previously taken backup.
The operator will orchestrate the recovery process using the `barman-cloud-restore` tool (for the base backup) and the `barman-cloud-wal-restore` tool (for WAL files, including parallel support, if requested). For details and instructions on the `recovery` bootstrap method, please refer to the "Bootstrap from a backup" section.
If you are not familiar with how PostgreSQL PITR works, we suggest that you configure the recovery cluster identically to the original one when it comes to `.spec.postgresql.parameters`. Once the new cluster is restored, you can then change the settings as desired.
Under the hood, the operator will inject an init container in the first instance of the new cluster, and the init container will start recovering the backup from the object storage.
The duration of the base backup copy in the new PVC depends on the size of the backup, as well as the speed of both the network and the storage.
When the base backup recovery process is completed, the operator starts the Postgres instance in recovery mode: in this phase, PostgreSQL is up, albeit not able to accept connections, and the pod is healthy according to the liveness probe. Through the `restore_command`, PostgreSQL starts fetching WAL files from the archive (you can speed up this phase by setting the `maxParallel` option and enabling the parallel WAL restore capability).
This phase terminates when PostgreSQL reaches the target (either the end of the WAL or the required target in case of Point In Time Recovery). Indeed, you can optionally specify a `recoveryTarget` to perform a point in time recovery. If left unspecified, the recovery will continue up to the latest available WAL on the default target timeline (`current` for PostgreSQL up to 11, `latest` for version 12 and above).
Once the recovery is complete, the operator will set the required superuser password into the instance. The new primary instance will start as usual, and the remaining instances will join the cluster as replicas.
The process is transparent to the user and is managed by the instance manager running in the pods.
A manifest for a cluster restore may include a `backup` section. This means that the new cluster, after recovery, will start archiving WALs and taking backups if configured to do so.
For example, the section below could be part of a manifest for a `Cluster` bootstrapping from the Cluster `cluster-example-backup`, and would create a new folder in the storage bucket named `recoveredCluster`, where the base backups and WALs of the recovered cluster would be stored:
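A sketch of such a section; the MinIO endpoint and credentials secret are illustrative:

```yaml
spec:
  backup:
    barmanObjectStore:
      destinationPath: s3://backups/
      endpointURL: http://minio:9000
      # Store this cluster's base backups and WALs under a new folder
      serverName: "recoveredCluster"
      s3Credentials:
        accessKeyId:
          name: minio
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: minio
          key: ACCESS_SECRET_KEY

  bootstrap:
    recovery:
      source: cluster-example-backup

  externalClusters:
    - name: cluster-example-backup
      barmanObjectStore:
        destinationPath: s3://backups/
        endpointURL: http://minio:9000
        s3Credentials:
          accessKeyId:
            name: minio
            key: ACCESS_KEY_ID
          secretAccessKey:
            name: minio
            key: ACCESS_SECRET_KEY
```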
You should not re-use the exact same `barmanObjectStore` configuration for different clusters. There could be cases where the existing information in the storage buckets could be overwritten by the new cluster.
The operator includes a safety check to ensure a cluster will not overwrite a storage bucket that contained information. A cluster that would overwrite existing storage will remain in state `Setting up primary` with pods in an `Error` state.
The pod logs will show:

`ERROR: WAL archive check failed for server recoveredCluster: Expected empty archive`
If you set the `k8s.enterprisedb.io/skipEmptyWalArchiveCheck` annotation to `enabled` in the recovered cluster, you can skip the above check. This is not recommended, as for the general use case the above check works fine. Please don't do this unless you are familiar with the PostgreSQL recovery system, as it can lead to severe data loss.