EDB Postgres for Kubernetes Plugin v1
EDB Postgres for Kubernetes provides a plugin for `kubectl` to manage a cluster in Kubernetes. The plugin also works with `oc` in an OpenShift environment.
You can install the `cnp` plugin using a variety of methods. For air-gapped systems, installation via package managers, using previously downloaded files, may be a good option.
In the releases section of the GitHub repository, you can navigate to any release of interest (pick the same release as, or a newer one than, your EDB Postgres for Kubernetes operator). Each release has an Assets section containing pre-built packages for a variety of systems, so you can follow your platform's standard practices to install them.
For example, let's install the 1.18.1 release of the plugin on an Intel-based 64-bit server. First, download the right `.deb` package, then install it from the local file using `dpkg`.
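A sketch of those two steps; the repository URL and asset file name are assumptions, so verify the exact name in the release's Assets section:

```shell
# Download the .deb asset for the 1.18.1 release
# (URL and file name are assumptions; check the release's Assets section)
curl -LO https://github.com/EnterpriseDB/kubectl-cnp/releases/download/v1.18.1/kubectl-cnp_1.18.1_linux_x86_64.deb

# Install from the local file with dpkg
sudo dpkg -i kubectl-cnp_1.18.1_linux_x86_64.deb
```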
As in the `.deb` example, let's install the 1.18.1 release for an Intel 64-bit machine. Note the `--output` flag used to provide a file name. Then install with `yum`, and you're ready to use it.
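A sketch of the RPM flow; the URL and asset file name are assumptions to be checked against the release's Assets section:

```shell
# Download the .rpm asset, naming the local file with --output
# (URL and file name are assumptions; check the release's Assets section)
curl -L https://github.com/EnterpriseDB/kubectl-cnp/releases/download/v1.18.1/kubectl-cnp_1.18.1_linux_x86_64.rpm \
  --output kube-plugin.rpm

# Install with yum from the local file
sudo yum --disablerepo=* localinstall -y kube-plugin.rpm
```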
EDB Postgres for Kubernetes Plugin is currently built for the following operating systems and architectures:
- arm 5/6/7
Once the plugin is installed, you can start using it like this:
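The general invocation shape (individual subcommands are covered in the sections below):

```shell
kubectl cnp <command> <args...>
```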
The `cnp` plugin can be used to generate the YAML manifest for the installation of the operator. This option is typically used when you want to override some default configurations, such as the number of replicas, the installation namespace, or the namespaces to watch.
For details and available options, run:
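A sketch of the help invocation, assuming the sub-command is named `install generate` as in the example later in this section:

```shell
kubectl cnp install generate --help
```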
The main options are:

- `-n`: the namespace in which to install the operator (by default: `postgresql-operator-system`)
- `--replicas`: number of replicas in the deployment
- `--version`: minor version of the operator to be installed, such as `1.17`. If a minor version is specified, the plugin installs the latest patch version of that minor version. If no version is supplied, the plugin installs the latest `MAJOR.MINOR.PATCH` version of the operator.
- `--watch-namespace`: comma-separated string containing the namespaces to watch (by default: all namespaces)
An example of the `generate` command, which produces a YAML manifest that will install the operator, is as follows:
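A sketch of such an invocation, using the flags explained below (the redirect to a file is an assumption):

```shell
kubectl cnp install generate \
  -n king \
  --version 1.17 \
  --replicas 3 \
  --watch-namespaces "albert, bb, freddie" \
  > operator.yaml
```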
The flags in the above command have the following meaning:

- `-n king`: install the CNP operator into the `king` namespace
- `--version 1.17`: install the latest patch version for minor version 1.17
- `--replicas 3`: install the operator with 3 replicas
- `--watch-namespaces "albert, bb, freddie"`: have the operator watch for changes in the `albert`, `bb` and `freddie` namespaces only
The `status` command provides an overview of the current status of your cluster, including:
- general information: name of the cluster, PostgreSQL's system ID, number of instances, current timeline and position in the WAL
- backup: point of recoverability, and WAL archiving status as returned by the `pg_stat_archiver` view from the primary - or designated primary in the case of a replica cluster
- streaming replication: information taken directly from the `pg_stat_replication` view on the primary instance
- instances: information about each Postgres instance, taken directly by each instance manager; in the case of a standby, the `Current LSN` field corresponds to the latest write-ahead log location that has been replayed during recovery (replay LSN)
The status information above is taken at different times and at different locations, resulting in slightly inconsistent returned values. For example, the `Current Write LSN` location in the main header might differ from the `Current LSN` field in the instances status, as they are sampled at two different times.
You can also get a more verbose version of the status by adding `--verbose` or just `-v`. The command also supports output in JSON and YAML format.
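For example (the cluster name is illustrative):

```shell
# Overview of the cluster status
kubectl cnp status cluster-example

# More detailed output
kubectl cnp status cluster-example --verbose
```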
Use the `promote` command to promote a pod in the cluster to primary, so you can start maintenance work or test a switch-over situation in your cluster. You can pass the pod name, or use the instance node number to promote.
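For example (cluster and pod names are illustrative):

```shell
# Promote by pod name
kubectl cnp promote cluster-example cluster-example-2

# Or promote by instance node number
kubectl cnp promote cluster-example 2
```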
Clusters created using the EDB Postgres for Kubernetes operator work with a CA to sign a TLS authentication certificate.
To get a certificate, you need to provide a name for the secret that will store the credentials, the cluster name, and a user for this certificate.
After the secret is created, you can get it using `kubectl get secret`, and view its content in plain text using the following commands:
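A sketch of the whole flow; the `--cnp-cluster`/`--cnp-user` flag names and the secret and user names are assumptions to be checked against `kubectl cnp certificate --help`:

```shell
# Create a TLS client certificate for user "appuser" of cluster "cluster-example",
# stored in a secret named "cluster-cert" (flag names are assumptions)
kubectl cnp certificate cluster-cert \
  --cnp-cluster cluster-example \
  --cnp-user appuser

# Retrieve the secret
kubectl get secret cluster-cert

# View the certificate and key in plain text
kubectl get secret cluster-cert -o jsonpath="{.data['tls\.crt']}" | base64 -d
kubectl get secret cluster-cert -o jsonpath="{.data['tls\.key']}" | base64 -d
```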
The `kubectl cnp restart` command can be used in two ways:

- to request that the operator orchestrate a rollout restart for a certain cluster. This is useful to apply configuration changes to cluster-dependent objects, such as ConfigMaps containing custom monitoring queries.
- to request a single instance restart, either in-place if the instance is the cluster's primary, or by deleting and recreating the pod if it is a replica.

If an in-place restart is requested but the change cannot be applied without a switchover, the switchover will take precedence over the in-place restart. A common case for this is a minor upgrade of the PostgreSQL image.
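A sketch of both uses; the exact instance specifier should be checked against `kubectl cnp restart --help`:

```shell
# Rollout restart of the whole cluster
kubectl cnp restart cluster-example

# Restart a single instance, here instance 2 (specifier is an assumption)
kubectl cnp restart cluster-example 2
```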
If you want ConfigMaps and Secrets to be automatically reloaded by instances, you can add a label with the key `k8s.enterprisedb.io/reload` to them.
The `kubectl cnp reload` command requests that the operator trigger a reconciliation loop for a certain cluster. This is useful to apply configuration changes to cluster-dependent objects, such as ConfigMaps containing custom monitoring queries.

The following command will reload all configurations for a given cluster:
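For example (the cluster name is illustrative):

```shell
kubectl cnp reload cluster-example
```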
The `kubectl cnp maintenance` command helps you modify one or more clusters across namespaces and set the maintenance window values. It changes the following fields:

- `.spec.nodeMaintenanceWindow.inProgress`
- `.spec.nodeMaintenanceWindow.reusePVC`

It accepts `set` or `unset` as an argument, setting `inProgress` to `true` in case of `set` and to `false` in case of `unset`.

`reusePVC` is always set to `false` unless the `--reusePVC` flag is passed.
The plugin will ask for confirmation, showing a list of the clusters to modify and their new values. If you accept, the change will be applied to all the clusters in the list.
If you want to set maintenance on all the PostgreSQL clusters in your Kubernetes cluster, just run the following command, and you'll be shown the list of all the clusters to update:
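A sketch, assuming the `--all-namespaces` flag (check `kubectl cnp maintenance --help`):

```shell
kubectl cnp maintenance set --all-namespaces
```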
The `kubectl cnp report` command bundles various pieces of information into a ZIP file. It aims to provide the context needed to debug problems with clusters in production.
It has two sub-commands: `operator` and `cluster`.
The `operator` sub-command requests the operator to provide information regarding the operator deployment, configuration, and events.

All confidential information in Secrets and ConfigMaps is REDACTED: the Data map will show the keys, but the values will be empty. The `--stopRedaction` flag defeats the redaction and shows the values. Use it only at your own risk, as this will share private data.
By default, operator logs are not collected, but you can enable operator log collection with the `--logs` flag.
- deployment information: the operator Deployment and operator Pod
- configuration: the Secrets and ConfigMaps in the operator namespace
- events: the Events in the operator namespace
- webhook configuration: the mutating and validating webhook configurations
- webhook service: the webhook service
- logs: logs for the operator Pod (optional, off by default) in JSON-lines format
The command will generate a ZIP file containing various manifests in YAML format (by default, but settable to JSON with the `-o` flag). Use the `-f` flag to name a result file explicitly. If the `-f` flag is not used, a default time-stamped filename is created for the zip file.
The report plugin obeys `kubectl` conventions and will look for objects constrained by namespace. The CNP Operator will generally not be installed in the same namespace as the clusters. For example, the default installation namespace is `postgresql-operator-system`. The following is an invocation with the `-f` flag set:
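A sketch of that invocation (the output file name is illustrative):

```shell
kubectl cnp report operator -n postgresql-operator-system -f reportRedacted.zip
```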
Unzipping the file will produce a time-stamped top-level folder that keeps the directory tidy. For example, unzipping the file above will result in:
If you activated the `--logs` option, you'd see an extra subdirectory:
The plugin will try to get the PREVIOUS operator's logs, which is helpful when investigating restarted operators. In all cases, it will also try to get the CURRENT operator logs. If current and previous logs are available, it will show them both.
If the operator hasn't been restarted, you'll still see the `====== Begin …` and `====== End …` guards, with no content inside.
You can verify that the confidential information is REDACTED by default. With the `-S` (`--stopRedaction`) option activated, secrets are shown:
You'll get a reminder that you're about to view confidential information:
The `cluster` sub-command gathers the following:
- cluster resources: the cluster information, same as `kubectl get cluster -o yaml`
- cluster pods: pods in the cluster namespace matching the cluster name
- cluster jobs: jobs, if any, in the cluster namespace matching the cluster name
- events: events in the cluster namespace
- pod logs: logs for the cluster Pods (optional, off by default) in JSON-lines format
- job logs: logs for the Pods created by jobs (optional, off by default) in JSON-lines format
The `cluster` sub-command accepts the `-f` and `-o` flags, as the `operator` sub-command does. If the `-f` flag is not used, a default timestamped report name will be used.
Note that the cluster information does not contain configuration Secrets / ConfigMaps, so the `-S` option is disabled.
By default, cluster logs are not collected, but you can enable cluster log collection with the `--logs` flag.
Note that, unlike the `operator` sub-command, for the `cluster` sub-command you need to provide the cluster name, and very likely the namespace, unless the cluster is in the default one.
Remember that you can use the `--logs` flag to add the pod and job logs to the ZIP.
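A sketch of such an invocation (cluster name, namespace, and output file are illustrative):

```shell
kubectl cnp report cluster cluster-example -n example-namespace -f report.zip --logs
```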
This will result in:
The `report operator` directive will automatically detect whether the cluster is running on OpenShift, and will get the Cluster Service Version and the Install Plan, adding them automatically to the zip under the `openshift` sub-folder.
Note that the namespace becomes very important on OpenShift. The default namespace for OpenShift in CNP is "openshift-operators". Many (most) clients will use a different namespace for the CNP operator.
You can find the OpenShift-related files in the `openshift` sub-folder of the zip:
The `kubectl cnp destroy` command helps remove an instance and all the associated PVCs from a Kubernetes cluster.
The `--keep-pvc` flag, if specified, allows you to keep the PVCs, while removing all the `metadata.ownerReferences` that were set by the instance. The `k8s.enterprisedb.io/pvcStatus` label on the PVCs will change to `detached`, to signify that they are no longer in use.
Running the command again without the `--keep-pvc` flag will remove the detached PVCs.
The following example removes the `cluster-example-2` pod and the associated PVCs:
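A sketch of that invocation; the cluster-name-plus-instance-number form is an assumption to be checked against `kubectl cnp destroy --help`:

```shell
kubectl cnp destroy cluster-example 2
```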
Sometimes you may want to suspend the execution of an EDB Postgres for Kubernetes cluster while retaining its data, then resume its activity at a later time. We've called this feature cluster hibernation.
Hibernation is only available via the `kubectl cnp hibernate [on|off]` commands.
Hibernating an EDB Postgres for Kubernetes cluster means destroying all the resources generated by the cluster, except the PVCs that belong to the PostgreSQL primary instance.
You can hibernate a cluster with:
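For example (the cluster name is illustrative):

```shell
kubectl cnp hibernate on cluster-example
```

This will: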
- shut down every PostgreSQL instance
- detach the PVCs containing the data of the primary instance, and annotate them with the latest database status and the latest cluster configuration
- delete the `Cluster` resource, including every generated resource - except the aforementioned PVCs
When hibernated, an EDB Postgres for Kubernetes cluster is represented by just a group of PVCs, in which the one containing the `PGDATA` is annotated with the latest available status, including content from `pg_controldata`.
A cluster having fenced instances cannot be hibernated, as fencing is part of the hibernation procedure too.
In case of error, the operator will not be able to revert the procedure. You can still force the operation with:
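A sketch, assuming a `--force` flag (check `kubectl cnp hibernate --help`):

```shell
kubectl cnp hibernate on cluster-example --force
```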
A hibernated cluster can be resumed with:
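For example:

```shell
kubectl cnp hibernate off cluster-example
```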
Once the cluster has been hibernated, it's possible to show the last configuration and the status that PostgreSQL had after it was shut down. That can be done with:
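A sketch, assuming a `hibernate status` sub-command:

```shell
kubectl cnp hibernate status cluster-example
```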
Pgbench can be run against an existing PostgreSQL cluster with the following command:
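A sketch, passing pgbench's own options after `--` (cluster name and options are illustrative):

```shell
kubectl cnp pgbench cluster-example -- --time 30 --client 1 --jobs 1
```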
Refer to the Benchmarking pgbench section for more details.
fio can be run on an existing storage class with the following command:
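A minimal sketch; the job name is an assumption, and storage-class related options should be checked with `kubectl cnp fio --help`:

```shell
# Run a fio job named "fio-run" (name is an assumption)
kubectl cnp fio fio-run
```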
Refer to the Benchmarking fio section for more details.
The `kubectl cnp backup` command requests a new physical base backup for an existing Postgres cluster by creating a new `Backup` resource.
The following example requests an on-demand backup for a given cluster:
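For example (the cluster name is illustrative):

```shell
kubectl cnp backup cluster-example
```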
The created backup will be named after the request time:
By default, the newly created backup will use the backup target policy defined in the cluster to choose which instance to run on. You can use the `--backup-target` option to override this policy. Please refer to the Backup and Recovery section for more information about backup targets.
The `kubectl cnp psql` command starts a new PostgreSQL interactive front-end process (psql) connected to an existing Postgres cluster, as if you were running it from the actual pod. This means that you will be using the `postgres` user.
As you will be connecting as the `postgres` user, in production environments this method should be used with extreme care, by authorized personnel only.
By default, the command will connect to the primary instance. The user can select to work against a replica by using the `--replica` option.
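For example (the cluster name is illustrative):

```shell
# Connect to the primary
kubectl cnp psql cluster-example

# Connect to a replica instead
kubectl cnp psql --replica cluster-example
```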
This command will start `kubectl exec`, and the `kubectl` executable must be reachable in your `PATH` for it to work correctly.
When connecting to instances running on OpenShift, you must explicitly pass a username to the `psql` command, because of a security measure built into OpenShift:
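A sketch, passing psql's own `-U` option after `--`:

```shell
kubectl cnp psql cluster-example -- -U postgres
```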
The `kubectl cnp snapshot` command creates consistent snapshots of a Postgres cluster by:
- choosing a replica Pod to work on
- fencing the replica
- taking the snapshot
- unfencing the replica
A cluster already having a fenced instance cannot be snapshotted.
At the moment, this command can be used only for clusters having at least one replica: that replica will be shut down by the fencing procedure to ensure that the snapshot is consistent (cold backup). As the development of declarative support for Kubernetes' `VolumeSnapshot` API continues, this limitation will be removed, allowing you to take online backups as business continuity requires.
Even though the procedure shuts down a replica, the primary Pod will not be involved.
The `kubectl cnp snapshot` command requires the cluster name:
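For example (the cluster name is illustrative):

```shell
kubectl cnp snapshot cluster-example
```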
By default, the `VolumeSnapshot` resource will be created with an empty `VolumeSnapshotClass` reference, meaning it is intended to be used with the `VolumeSnapshotClass` configured as default. A specific `VolumeSnapshotClass` can be requested via the `-c` option:
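A sketch; the snapshot class name is an example of what your storage provider might register:

```shell
kubectl cnp snapshot cluster-example -c longhorn
```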