Interactive Demo

Installation, Configuration and Deployment Demo


Want to see what it takes to get the Cloud Native PostgreSQL Operator up and running? This section will demonstrate the following:

  1. Installing the Cloud Native PostgreSQL Operator
  2. Deploying a three-node PostgreSQL cluster
  3. Installing and using the kubectl-cnp plugin
  4. Testing failover to verify the resilience of the cluster

It will take roughly 5-10 minutes to work through.

This demo is interactive

You can follow along right in your browser by clicking the button below. Once the environment initializes, you'll see a terminal open at the bottom of the screen.

```
Clicking Start Now will load an interactive terminal in this window
```

Minikube is already installed in this environment; we just need to start the cluster:

```
minikube start
```

Output
```
* minikube v1.8.1 on Ubuntu 18.04
* Using the none driver based on user configuration
* Running on localhost (CPUs=2, Memory=2460MB, Disk=145651MB) ...
* OS release is Ubuntu 18.04.4 LTS
* Preparing Kubernetes v1.17.3 on Docker 19.03.6 ...
  - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
* Launching Kubernetes ...
* Enabling addons: default-storageclass, storage-provisioner
* Configuring local host environment ...
* Waiting for cluster to come online ...
* Done! kubectl is now configured to use "minikube"
```

This creates the Kubernetes cluster and configures kubectl to talk to it. Verify that it's working with the following command:

```
kubectl get nodes
```

Output
```
NAME       STATUS   ROLES    AGE   VERSION
minikube   Ready    master   66s   v1.17.3
```

You will see one node called minikube. If the status isn't "Ready" yet, wait a few seconds and run the command above again.
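If you'd rather not poll by hand, kubectl can block until the node reports Ready; this is just a convenience, not something the demo requires:

```
# Returns as soon as the minikube node reaches the Ready condition
# (or fails after two minutes)
kubectl wait --for=condition=Ready node/minikube --timeout=120s
```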

Install Cloud Native PostgreSQL

Now that the Minikube cluster is running, you can proceed with Cloud Native PostgreSQL installation as described in the "Installation" section:

```
kubectl apply -f https://get.enterprisedb.io/cnp/postgresql-operator-1.7.0.yaml
```

Output
```
namespace/postgresql-operator-system created
customresourcedefinition.apiextensions.k8s.io/backups.postgresql.k8s.enterprisedb.io created
customresourcedefinition.apiextensions.k8s.io/clusters.postgresql.k8s.enterprisedb.io created
customresourcedefinition.apiextensions.k8s.io/scheduledbackups.postgresql.k8s.enterprisedb.io created
mutatingwebhookconfiguration.admissionregistration.k8s.io/postgresql-operator-mutating-webhook-configuration created
serviceaccount/postgresql-operator-manager created
clusterrole.rbac.authorization.k8s.io/postgresql-operator-manager created
clusterrolebinding.rbac.authorization.k8s.io/postgresql-operator-manager-rolebinding created
service/postgresql-operator-webhook-service created
deployment.apps/postgresql-operator-controller-manager created
validatingwebhookconfiguration.admissionregistration.k8s.io/postgresql-operator-validating-webhook-configuration created
```

And then verify that it was successfully installed:

```
kubectl get deploy -n postgresql-operator-system postgresql-operator-controller-manager
```

Output
```
NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
postgresql-operator-controller-manager   1/1     1            1           52s
```
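Alternatively, `kubectl rollout status` will watch the deployment and return once it is fully available, which is handy if you're scripting the installation; this is standard kubectl, nothing operator-specific:

```
# Blocks until the operator deployment reports all replicas available
kubectl rollout status deployment/postgresql-operator-controller-manager \
  -n postgresql-operator-system
```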

Deploy a PostgreSQL cluster

As with any other deployment in Kubernetes, to deploy a PostgreSQL cluster you need to apply a configuration file that defines your desired Cluster.

The cluster-example.yaml sample file defines a simple Cluster using the default storage class to allocate disk space:

```
cat <<EOF > cluster-example.yaml
# Example of PostgreSQL cluster
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3

  # Example of rolling update strategy:
  # - unsupervised: automated update of the primary once all
  #                 replicas have been upgraded (default)
  # - supervised: requires manual supervision to perform
  #               the switchover of the primary
  primaryUpdateStrategy: unsupervised

  # Require 1Gi of space
  storage:
    size: 1Gi
EOF
```

There's more

For more detailed information about the available options, please refer to the "API Reference" section.
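As one illustration of those options, the same manifest could carry explicit PostgreSQL settings. This is only a sketch: the postgresql.parameters key mirrors what you'll see in the cluster status later on, and the value here is chosen purely for illustration, not a recommendation:

```
# Hypothetical variation of cluster-example.yaml with a custom
# PostgreSQL parameter (value is illustrative only)
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  primaryUpdateStrategy: unsupervised
  postgresql:
    parameters:
      max_parallel_workers: "48"
  storage:
    size: 1Gi
```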

To create the 3-node PostgreSQL cluster, run the following command:

```
kubectl apply -f cluster-example.yaml
```

Output
```
cluster.postgresql.k8s.enterprisedb.io/cluster-example created
```

You can check that the pods are being created with the `kubectl get pods` command. The cluster takes a little while to initialize, so if you run this immediately after applying the configuration, you'll see the status as `Init:` or `PodInitializing`:

```
kubectl get pods
```

Output
```
NAME                             READY   STATUS            RESTARTS   AGE
cluster-example-1-initdb-kq2vw   0/1     PodInitializing   0          18s
```
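Rather than re-running the command by hand, you can also leave a watch running and see the pods change state as the operator works (press Ctrl+C to stop):

```
kubectl get pods --watch
```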

...give it a minute, and then check on it again:

```
kubectl get pods
```

Output
```
NAME                READY   STATUS    RESTARTS   AGE
cluster-example-1   1/1     Running   0          56s
cluster-example-2   1/1     Running   0          35s
cluster-example-3   1/1     Running   0          19s
```
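At this point you could also check that PostgreSQL itself accepts connections. The sketch below assumes the operator's default naming conventions: an application secret named cluster-example-app holding the generated password for the app user, and the cluster-example-rw read-write service (both conventions are visible in the cluster status below):

```
# Print the generated password for the "app" user
# (secret name assumed from the default <cluster>-app convention)
kubectl get secret cluster-example-app \
  -o jsonpath='{.data.password}' | base64 -d; echo

# Launch a throwaway psql pod and connect to the primary via the
# read-write service; psql prompts for the password printed above
kubectl run psql-client --rm -it --image=postgres:13 --restart=Never -- \
  psql -h cluster-example-rw -U app app
```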

Now we can check the status of the cluster:

```
kubectl get cluster cluster-example -o yaml
```

Output
```
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"postgresql.k8s.enterprisedb.io/v1","kind":"Cluster","metadata":{"annotations":{},"name":"cluster-example","namespace":"default"},"spec":{"instances":3,"primaryUpdateStrategy":"unsupervised","storage":{"size":"1Gi"}}}
  creationTimestamp: "2021-07-13T06:48:56Z"
  generation: 1
  name: cluster-example
  namespace: default
  resourceVersion: "2270"
  selfLink: /apis/postgresql.k8s.enterprisedb.io/v1/namespaces/default/clusters/cluster-example
  uid: 405c5b71-6e9b-4baf-a186-2086bbb23a2b
spec:
  affinity:
    podAntiAffinityType: preferred
    topologyKey: ""
  bootstrap:
    initdb:
      database: app
      owner: app
  imageName: quay.io/enterprisedb/postgresql:13.3
  instances: 3
  postgresql:
    parameters:
      cluster_name: ""
      log_destination: csvlog
      log_directory: /controller/log
      log_filename: postgres
      log_rotation_age: "0"
      log_rotation_size: "0"
      log_truncate_on_rotation: "false"
      logging_collector: "on"
      max_parallel_workers: "32"
      max_replication_slots: "32"
      max_worker_processes: "32"
      shared_preload_libraries: ""
      wal_keep_size: 512MB
  primaryUpdateStrategy: unsupervised
  resources: {}
  storage:
    size: 1Gi
status:
  certificates:
    clientCASecret: cluster-example-ca
    expirations:
      cluster-example-ca: 2022-07-13 06:43:56 +0000 UTC
      cluster-example-replication: 2022-07-13 06:43:56 +0000 UTC
      cluster-example-server: 2022-07-13 06:43:56 +0000 UTC
    replicationTLSSecret: cluster-example-replication
    serverAltDNSNames:
    - cluster-example-rw
    - cluster-example-rw.default
    - cluster-example-rw.default.svc
    - cluster-example-r
    - cluster-example-r.default
    - cluster-example-r.default.svc
    - cluster-example-ro
    - cluster-example-ro.default
    - cluster-example-ro.default.svc
    serverCASecret: cluster-example-ca
    serverTLSSecret: cluster-example-server
  currentPrimary: cluster-example-1
  healthyPVC:
  - cluster-example-3
  - cluster-example-1
  - cluster-example-2
  instances: 3
  instancesStatus:
    healthy:
    - cluster-example-1
    - cluster-example-2
    - cluster-example-3
  latestGeneratedNode: 3
  licenseStatus:
    isImplicit: true
    isTrial: true
    licenseExpiration: "2021-08-12T06:48:56Z"
    licenseStatus: Implicit trial license
    repositoryAccess: false
    valid: true
  phase: Cluster in healthy state
  pvcCount: 3
  readService: cluster-example-r
  readyInstances: 3
  secretsResourceVersion:
    applicationSecretVersion: "957"
    clientCaSecretVersion: "953"
    replicationSecretVersion: "955"
    serverCaSecretVersion: "953"
    serverSecretVersion: "954"
    superuserSecretVersion: "956"
  targetPrimary: cluster-example-1
  writeService: cluster-example-rw
```

Note

By default, the operator installs the latest available minor release of the latest major version of PostgreSQL as of when that version of the operator was released. You can override this by setting the imageName key in the spec section of the Cluster definition.

Important

The immutable infrastructure paradigm requires that you always point to a specific version of the container image. Never use tags like latest or 13 in a production environment, as they can lead to unpredictable update behavior and version inconsistency within the cluster.
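Following that advice, a production manifest would pin the exact image explicitly; here's a minimal sketch, reusing the tag this demo happens to run:

```
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  # Pin a full, immutable version; never "latest" or a bare "13"
  imageName: quay.io/enterprisedb/postgresql:13.3
  storage:
    size: 1Gi
```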

Install the kubectl-cnp plugin

Cloud Native PostgreSQL provides a plugin for kubectl to manage a cluster in Kubernetes, along with a script to install it:

```
curl -sSfL \
  https://github.com/EnterpriseDB/kubectl-cnp/raw/main/install.sh | \
  sudo sh -s -- -b /usr/local/bin
```

Output
```
EnterpriseDB/kubectl-cnp info checking GitHub for latest tag
EnterpriseDB/kubectl-cnp info found version: 1.7.0 for v1.7.0/linux/x86_64
EnterpriseDB/kubectl-cnp info installed /usr/local/bin/kubectl-cnp
```
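You can confirm that kubectl has picked the plugin up from your PATH before using it:

```
# Lists every kubectl-* executable kubectl can find;
# /usr/local/bin/kubectl-cnp should be among them
kubectl plugin list
```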

The cnp command is now available in kubectl:

```
kubectl cnp status cluster-example
```

Output
```
Cluster in healthy state   
Name:              cluster-example
Namespace:         default
PostgreSQL Image:  quay.io/enterprisedb/postgresql:13.3
Primary instance:  cluster-example-1
Instances:         3
Ready instances:   3

Instances status
Pod name           Current LSN  Received LSN  Replay LSN  System ID            Primary  Replicating  Replay paused  Pending restart  Status
--------           -----------  ------------  ----------  ---------            -------  -----------  -------------  ---------------  ------
cluster-example-1  0/5000060                              6984299688308645905  ✓        ✗            ✗              ✗                OK
cluster-example-2               0/5000060     0/5000060   6984299688308645905  ✗        ✓            ✗              ✗                OK
cluster-example-3               0/5000060     0/5000060   6984299688308645905  ✗        ✓            ✗              ✗                OK
```

There's more

See the Cloud Native PostgreSQL Plugin page for more commands and options.

Testing failover

As our status checks show, we're running two replicas - if something happens to the primary instance of PostgreSQL, the cluster will fail over to one of them. Let's demonstrate this by killing the primary pod:

```
kubectl delete pod --wait=false cluster-example-1
```

Output
```
pod "cluster-example-1" deleted

This simulates a hard shutdown of the server - a scenario where something has gone wrong.
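If you want to watch the operator react as it happens, you can follow events in a second terminal while the failover plays out; the field selector simply narrows the stream to objects named cluster-example:

```
kubectl get events \
  --field-selector involvedObject.name=cluster-example --watch
```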

Now if we check the status...

```
kubectl cnp status cluster-example
```

Output
```
Failing over   Failing over to cluster-example-2
Name:              cluster-example
Namespace:         default
PostgreSQL Image:  quay.io/enterprisedb/postgresql:13.3
Primary instance:  cluster-example-2
Instances:         3
Ready instances:   2

Instances status
Pod name           Current LSN  Received LSN  Replay LSN  System ID            Primary  Replicating  Replay paused  Pending restart  Status
--------           -----------  ------------  ----------  ---------            -------  -----------  -------------  ---------------  ------
cluster-example-1  -            -             -           -                    -        -            -              -                pod not available
cluster-example-2  0/6000230                              6984299688308645905  ✓        ✗            ✗              ✗                OK
cluster-example-3               0/60000A0     0/60000A0   6984299688308645905  ✗        ✗            ✗              ✗                OK
```

...the failover process has begun, with the second pod promoted to primary. Once the failed pod has restarted, it will become a replica of the new primary:

```
kubectl cnp status cluster-example
```

Output
```
Cluster in healthy state   
Name:              cluster-example
Namespace:         default
PostgreSQL Image:  quay.io/enterprisedb/postgresql:13.3
Primary instance:  cluster-example-2
Instances:         3
Ready instances:   3

Instances status
Pod name           Current LSN  Received LSN  Replay LSN  System ID            Primary  Replicating  Replay paused  Pending restart  Status
--------           -----------  ------------  ----------  ---------            -------  -----------  -------------  ---------------  ------
cluster-example-1               0/6004230     0/6004230   6984299688308645905  ✗        ✓            ✗              ✗                OK
cluster-example-2  0/6004230                              6984299688308645905  ✓        ✗            ✗              ✗                OK
cluster-example-3               0/6004230     0/6004230   6984299688308645905  ✗        ✓            ✗              ✗                OK
```
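The failover you just watched was fully automatic. If you'd rather move the primary role deliberately (say, back to cluster-example-1), the plugin provides a promote subcommand for planned switchovers; run kubectl cnp --help to confirm it's available in the version you installed:

```
# Planned switchover: make cluster-example-1 the primary again
kubectl cnp promote cluster-example cluster-example-1
```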

Further reading

This is all it takes to get a PostgreSQL cluster up and running, but of course there's a lot more you can do - and certainly much more that would be prudent to put in place before you ever deploy in a production environment!
