EDB Postgres for Kubernetes provides a plugin for kubectl to manage a cluster in Kubernetes.
The plugin also works with oc in an OpenShift environment.
You can install the plugin in your system with:
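For example, one common method is the installation script published with the plugin releases (the URL below reflects the project's GitHub repository; verify it against the official documentation for your version):

```shell
# Download and run the installer, placing the binary in /usr/local/bin
curl -sSfL \
  https://github.com/EnterpriseDB/kubectl-cnp/raw/main/install.sh | \
  sudo sh -s -- -b /usr/local/bin
```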
The EDB Postgres for Kubernetes Plugin is currently built for the following
operating systems and architectures:
Once the plugin is installed and deployed, you can start using it like this:
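Assuming the plugin binary is named `kubectl-cnp` and is on your `PATH`, the general invocation form is:

```shell
kubectl cnp <command> <args...>
```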
The status command provides an overview of the current status of your
cluster, including:

- general information: name of the cluster, PostgreSQL's system ID, number of
  instances, current timeline and position in the WAL
- backup: point of recoverability, and WAL archiving status as returned by
  the pg_stat_archiver view from the primary - or designated primary in the
  case of a replica cluster
- streaming replication: information taken directly from the pg_stat_replication
  view on the primary instance
- instances: information about each Postgres instance, taken directly by each
  instance manager; in the case of a standby, the Current LSN field corresponds
  to the latest write-ahead log location that has been replayed during recovery
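For example, for a hypothetical cluster named `cluster-example`:

```shell
kubectl cnp status cluster-example
```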
The status information above is taken at different times and from different
locations, resulting in slightly inconsistent returned values. For example,
the Current Write LSN location in the main header might differ from
the Current LSN field in the instances status, as they are sampled at
two different times.
You can also get a more verbose version of the status by adding
--verbose (or just -v):
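For example, again using the hypothetical `cluster-example` cluster:

```shell
kubectl cnp status cluster-example --verbose
```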
The command also supports output in YAML and JSON format.
This command promotes a pod in the cluster to primary, so you
can start maintenance work or test a switchover scenario in your cluster.
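For example, to promote the (hypothetical) replica pod `cluster-example-2` of the `cluster-example` cluster:

```shell
kubectl cnp promote cluster-example cluster-example-2
```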
Or you can use the instance node number to promote:
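For example, to promote instance number 2 of the hypothetical `cluster-example` cluster:

```shell
kubectl cnp promote cluster-example 2
```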
Clusters created using the EDB Postgres for Kubernetes operator work with a CA to sign
a TLS authentication certificate.
To get a certificate, you need to provide a name for the secret that will store
the credentials, the cluster name, and a user for this certificate:
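For example, assuming a cluster named `cluster-example`, a secret named `cluster-cert`, and a database user `appuser` (the `--cnp-cluster` and `--cnp-user` flag names should be verified against your installed plugin version):

```shell
kubectl cnp certificate cluster-cert \
  --cnp-cluster cluster-example \
  --cnp-user appuser
```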
After the secret is created, you can get it using kubectl:
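Continuing with the hypothetical `cluster-cert` secret:

```shell
kubectl get secret cluster-cert
```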
You can view its content in plain text using the following commands:
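For example, using `jq` and `base64` to extract and decode the certificate and key (assuming the conventional `tls.crt` and `tls.key` keys in the secret's Data map):

```shell
# Decode the TLS certificate
kubectl get secret cluster-cert -o json | jq -r '.["data"]["tls.crt"]' | base64 -d

# Decode the TLS private key
kubectl get secret cluster-cert -o json | jq -r '.["data"]["tls.key"]' | base64 -d
```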
The kubectl cnp restart command can be used in two cases:

- requesting the operator to orchestrate a rollout restart
  for a certain cluster. This is useful to apply
  configuration changes to cluster dependent objects, such as ConfigMaps
  containing custom monitoring queries.
- requesting a single instance restart, either in-place if the instance is
  the cluster's primary, or by deleting and recreating the pod if
  it is a replica.

If an in-place restart is requested but the change cannot be applied without
a switchover, the switchover will take precedence over the in-place restart. A
common case for this is a minor upgrade of the PostgreSQL image.
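For example, assuming the `cluster-example` cluster with a pod named `cluster-example-1` (both names are illustrative), the two forms look like:

```shell
# Rollout restart of the whole cluster
kubectl cnp restart cluster-example

# Restart of a single instance, identified by its pod name
kubectl cnp restart cluster-example cluster-example-1
```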
If you want ConfigMaps and Secrets to be automatically reloaded
by instances, you can add a label with the key k8s.enterprisedb.io/reload to them.
The kubectl cnp reload command requests the operator to trigger a reconciliation
loop for a certain cluster. This is useful to apply configuration changes
to cluster dependent objects, such as ConfigMaps containing custom monitoring queries.
The following command will reload all configurations for a given cluster:
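For example, for the hypothetical `cluster-example` cluster:

```shell
kubectl cnp reload cluster-example
```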
The kubectl cnp maintenance command helps you modify one or more clusters
across namespaces and set the maintenance window values. It will change
the following fields:
It accepts set or unset as an argument: set sets
inProgress to true, while unset sets it to false.
By default, reusePVC is always set to false unless the --reusePVC flag is passed.
The plugin will ask for confirmation, showing a list of the clusters to modify
and their new values. If you accept, the action will be applied to
all the clusters in the list.
If you want to set all the PostgreSQL clusters in your Kubernetes cluster in maintenance mode,
you just need to run the following command:
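For example:

```shell
kubectl cnp maintenance set --all-namespaces
```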
You'll then be shown the list of all the clusters to update.
The kubectl cnp report command bundles various pieces
of information into a ZIP file.
It aims to provide the needed context to debug problems
with clusters in production.
It has two sub-commands: operator and cluster.
The operator sub-command requests the operator to provide information
regarding the operator deployment, configuration and events.
All confidential information in Secrets and ConfigMaps is REDACTED.
The Data map will show the keys but the values will be empty.
The -S / --stopRedaction flag disables the redaction and shows the
values. Use it only at your own risk: this will share private data.
By default, operator logs are not collected, but you can enable operator
log collection with the --logs flag.
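For example, to collect the operator report including logs (the namespace below is illustrative; use the one where your operator is installed):

```shell
kubectl cnp report operator -n postgresql-operator-system --logs
```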
The operator sub-command gathers the following:

- deployment information: the operator Deployment and operator Pod
- configuration: the Secrets and ConfigMaps in the operator namespace
- events: the Events in the operator namespace
- webhook configuration: the mutating and validating webhook configurations
- webhook service: the webhook service
- logs: logs for the operator Pod (optional, off by default) in JSON-lines format
The command will generate a ZIP file containing various manifests, in YAML format
by default (settable to JSON with the -o flag).
Use the -f flag to name a result file explicitly. If the -f flag is not used, a
default time-stamped filename is created for the zip file.
With the -f flag set:
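For example (the namespace and file name below are illustrative):

```shell
kubectl cnp report operator -n postgresql-operator-system -f reportRedacted.zip
```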
Unzipping the file will produce a time-stamped top-level folder to keep the
directory tidy. Unzipping the file above will result in:
You can verify that the confidential information is REDACTED:
With the -S (--stopRedaction) option activated, secrets are shown:
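For example (again with illustrative namespace and file name):

```shell
kubectl cnp report operator -n postgresql-operator-system -f reportNonRedacted.zip -S
```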
You'll get a reminder that you're about to view confidential information:
The report operator directive will automatically detect whether the cluster is
running on OpenShift, retrieve the Cluster Service Version and the
Install Plan, and add them to the zip under the openshift sub-folder.
Note that the namespace becomes very important on OpenShift. The default namespace
for the CNP operator on OpenShift is "openshift-operators". Many (most) clients will use
a different namespace for the CNP operator.
You can find the OpenShift-related files in the openshift sub-folder:
The cluster sub-command gathers the following:
cluster resources: the cluster information, same as kubectl get cluster -o yaml
cluster pods: pods in the cluster namespace matching the cluster name
cluster jobs: jobs, if any, in the cluster namespace matching the cluster name
events: events in the cluster namespace
pod logs: logs for the cluster Pods (optional, off by default) in JSON-lines format
job logs: logs for the Pods created by jobs (optional, off by default) in JSON-lines format
The cluster sub-command accepts the -f and -o flags, as the operator sub-command does.
If the -f flag is not used, a default timestamped report name will be used.
Note that the cluster information does not contain configuration Secrets / ConfigMaps,
so the -S flag is disabled.
By default, cluster logs are not collected, but you can enable cluster
log collection with the --logs flag.
Note that, unlike the operator sub-command, for the cluster sub-command you
need to provide the cluster name and, very likely, the namespace, unless the cluster
is in the default one.
Remember that you can use the --logs flag to add the pod and job logs to the ZIP.
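Putting it together, a cluster report with logs for the hypothetical `cluster-example` cluster in an illustrative `demo` namespace might look like:

```shell
kubectl cnp report cluster cluster-example -n demo --logs -f clusterReport.zip
```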