Cloud Native PostgreSQL provides a plugin for kubectl to manage a cluster in Kubernetes.
The plugin also works with oc in an OpenShift environment.
You can install the plugin in your system as follows.
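A sketch, assuming the installer script is published in the plugin's GitHub repository; the URL and target directory below should be verified against the official documentation:

```sh
# Download and run the kubectl-cnp installer script
# (illustrative URL and install path; check the project's releases)
curl -sSfL \
  https://github.com/EnterpriseDB/kubectl-cnp/raw/main/install.sh | \
  sudo sh -s -- -b /usr/local/bin
```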
Once the plugin is installed, you can start using it.
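A minimal example, checking the status of a cluster (cluster-example is an assumed cluster name):

```sh
# Show the overall status of the cluster named cluster-example
kubectl cnp status cluster-example
```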
The status command provides an overview of the current status of your cluster, including:

* general information: name of the cluster, PostgreSQL's system ID, number of instances, current timeline and position in the WAL
* backup: point of recoverability, and WAL archiving status as returned by the pg_stat_archiver view from the primary - or designated primary in the case of a replica cluster
* streaming replication: information taken directly from the pg_stat_replication view on the primary instance
* instances: information about each Postgres instance, taken directly by each instance manager; in the case of a standby, the Current LSN field corresponds to the latest write-ahead log location that has been replayed during recovery
The status information above is collected at different times and from different locations, resulting in slightly inconsistent returned values. For example, the Current Write LSN location in the main header might differ from the Current LSN field in the instances status, as the two values are sampled at different moments.
You can also get a more verbose version of the status by adding `--verbose` or just `-v`. The command also supports output in YAML and JSON format.
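For example, with the assumed cluster name from above:

```sh
# Verbose status output for the cluster
kubectl cnp status cluster-example --verbose
```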
This command promotes a pod in the cluster to primary, so you can start maintenance work or test a switch-over situation in your cluster.
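A minimal sketch, assuming a cluster named cluster-example whose pod cluster-example-2 should become the new primary:

```sh
# Promote the pod cluster-example-2 to primary
kubectl cnp promote cluster-example cluster-example-2
```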
Alternatively, you can use the instance node number to promote.
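Under the same naming assumptions:

```sh
# Promote instance number 2 of the cluster to primary
kubectl cnp promote cluster-example 2
```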
Clusters created using the Cloud Native PostgreSQL operator work with a CA to sign
a TLS authentication certificate.
To get a certificate, you need to provide a name for the secret to store the credentials, the cluster name, and a user for this certificate.
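A minimal sketch, assuming a secret named cluster-cert, a cluster named cluster-example, and a database user named appuser:

```sh
# Create a TLS client certificate for user appuser,
# stored in the secret cluster-cert
kubectl cnp certificate cluster-cert \
  --cnp-cluster cluster-example \
  --cnp-user appuser
```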
After the secret has been created, you can get it using kubectl.
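Continuing the example above, with the secret named cluster-cert:

```sh
# Confirm the secret exists
kubectl get secret cluster-cert
```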
You can also output the content of the secret in plain text with the following commands.
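A sketch using standard kubectl jsonpath queries against the assumed cluster-cert secret:

```sh
# Print the certificate in plain text
kubectl get secret cluster-cert -o jsonpath="{.data['tls\.crt']}" | base64 -d

# Print the private key in plain text
kubectl get secret cluster-cert -o jsonpath="{.data['tls\.key']}" | base64 -d
```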
The kubectl cnp restart command requests the operator to orchestrate a rollout restart for a certain cluster. This is useful to apply configuration changes to cluster-dependent objects, such as ConfigMaps containing custom monitoring queries.
The following command restarts a given cluster in a rollout fashion.
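A sketch, assuming a cluster named cluster-example:

```sh
# Trigger a rollout restart of all instances in the cluster
kubectl cnp restart cluster-example
```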
If you want ConfigMaps and Secrets to be automatically reloaded by instances, you can add a label with key k8s.enterprisedb.io/reload to them.
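A sketch of labeling an existing ConfigMap; the ConfigMap name is illustrative, and an empty label value is an assumption:

```sh
# Mark the ConfigMap for automatic reload by the instances
kubectl label configmap custom-monitoring-queries k8s.enterprisedb.io/reload=""
```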
The kubectl cnp reload command requests the operator to trigger a reconciliation loop for a certain cluster. This is useful to apply configuration changes to cluster-dependent objects, such as ConfigMaps containing custom monitoring queries.
The following command reloads all configurations for a given cluster.
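Again assuming a cluster named cluster-example:

```sh
# Trigger a reconciliation loop to reload the cluster's configuration
kubectl cnp reload cluster-example
```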