Red Hat OpenShift v1.28.1
EDB Postgres® AI for CloudNativePG™ Cluster is certified to run on Red Hat OpenShift Container Platform (OCP) version 4.x and is available directly from the Red Hat Catalog.
The goal of this section is to help you decide the best installation method for EDB Postgres® AI for CloudNativePG™ Cluster based on your organization's security and access control policies.
The first and critical step is to design the architecture of your PostgreSQL clusters in your OpenShift environment.
Once the architecture is clear, you can proceed with the installation. EDB Postgres® AI for CloudNativePG™ Cluster can be installed and managed via:
- OpenShift web console
- OpenShift command-line interface (CLI) called oc, for full control
EDB Postgres® AI for CloudNativePG™ Cluster supports all available install modes defined by OpenShift:
- cluster-wide, in all namespaces
- local, in a single namespace
- local, watching multiple namespaces (only available using oc)
Note
A project is a Kubernetes namespace with additional annotations, and is the central vehicle by which access to resources for regular users is managed.
In most cases, the default cluster-wide installation of EDB Postgres® AI for CloudNativePG™ Cluster is the recommended one, with either central management of PostgreSQL clusters or delegated management (limited to specific users/projects according to RBAC definitions - see "Important OpenShift concepts" and "Users and Permissions" below).
Important
Both the installation and upgrade processes require access to an OpenShift
Container Platform cluster using an account with cluster-admin permissions.
From "Default cluster roles",
a cluster-admin is "a super-user that can perform any action in any
project. When bound to a user with a local binding, they have full control over
quota and every action on every resource in the project".
Architecture
The same concepts covered in the generic Kubernetes/PostgreSQL architecture page apply to OpenShift as well.
Here as well, the critical factor is the number of availability zones or data centers for your OpenShift environment.
As outlined in the "Disaster Recovery Strategies for Applications Running on OpenShift" blog article written by Raffaele Spazzoli back in 2020 about stateful applications, in order to fully exploit EDB Postgres® AI for CloudNativePG™ Cluster, you need to plan, design and implement an OpenShift cluster spanning 3 or more availability zones. While this doesn't pose an issue in most of the public cloud provider deployments, it is definitely a challenge in on-premise scenarios.
If your OpenShift cluster has only one availability zone, that zone is your Single Point of Failure (SPoF) from a High Availability standpoint - provided that you have adopted a shared-nothing architecture, making sure that your PostgreSQL clusters have at least one standby (two if using synchronous replication), and that each PostgreSQL instance runs on a different Kubernetes worker node using different storage. Additionally, make sure that continuous backup data is stored in a storage service outside the OpenShift cluster, allowing you to perform Disaster Recovery operations beyond your data center.
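As an illustration only, the following minimal manifest sketches these recommendations, assuming the in-tree barmanObjectStore backup configuration (newer releases may prefer the Barman Cloud plugin) and a hypothetical S3-compatible object store outside the cluster; the cluster name, bucket, endpoint, and secret names are placeholders:

apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3                # one primary plus two standbys, spread across worker nodes
  storage:
    size: 10Gi
  backup:
    barmanObjectStore:
      # continuous backup shipped to storage outside the OpenShift cluster
      destinationPath: s3://pg-backups/cluster-example/
      endpointURL: https://s3.example.com
      s3Credentials:
        accessKeyId:
          name: backup-creds
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: backup-creds
          key: ACCESS_SECRET_KEY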
Most likely you will have another OpenShift cluster in another data center, either in the same metropolitan area or in another region, in an active/passive strategy. You can set up an independent "Replica cluster", with the understanding that this is primarily a Disaster Recovery solution - very effective but with some limitations that require manual intervention, as explained in the feature page. The same solution can be applied to additional OpenShift clusters, even in a cascading manner.
On the other hand, if your OpenShift cluster spans multiple availability zones in a region, you can fully leverage the capabilities of the operator for resilience and self-healing, and the region can become your SPoF, i.e. it would take a full region outage to bring down your cluster. Moreover, you can take advantage of multiple OpenShift clusters in different regions by setting up replica clusters, as previously mentioned.
Reserving Nodes for PostgreSQL Workloads
For optimal performance and resource allocation in your PostgreSQL database
operations, it is highly recommended to isolate PostgreSQL workloads by
dedicating specific worker nodes solely to PostgreSQL in production. This
applies whether you're operating in a single availability zone or a
multi-availability zone environment.
A worker node in OpenShift that is dedicated to running PostgreSQL workloads is
commonly referred to as a Postgres node or postgres node.
This dedicated approach ensures that your PostgreSQL workloads are not competing for resources with other applications, leading to enhanced stability and performance.
For further details, please refer to the "Reserving Nodes for PostgreSQL Workloads" section within the broader "Architecture" documentation. The primary difference when working in OpenShift involves how labels and taints are applied to the nodes, as described below.
To label a node as a postgres node, execute the following command:
oc label node <NODE-NAME> node-role.kubernetes.io/postgres=
To apply a postgres taint to a node, use the following command:
oc adm taint node <NODE-NAME> node-role.kubernetes.io/postgres=:NoSchedule
By correctly labeling and tainting your nodes, you ensure that only PostgreSQL workloads are scheduled on these dedicated nodes via affinity and tolerations, reinforcing the stability and performance of your database environment.
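As a sketch only, a Cluster resource can then request the dedicated nodes through its affinity configuration; the label and taint match the commands above, while the cluster name and storage size are arbitrary examples:

apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  storage:
    size: 10Gi
  affinity:
    # schedule instance pods only on nodes labeled as postgres nodes
    nodeSelector:
      node-role.kubernetes.io/postgres: ""
    # tolerate the NoSchedule taint applied to those nodes
    tolerations:
      - key: node-role.kubernetes.io/postgres
        operator: Exists
        effect: NoSchedule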
Important OpenShift concepts
To understand how the EDB Postgres® AI for CloudNativePG™ Cluster operator fits in an OpenShift environment, you must familiarize yourself with the following Kubernetes-related topics:
- Operators
- Authentication
- Authorization via Role-based Access Control (RBAC)
- Service Accounts and Users
- Rules, Roles and Bindings
- Cluster RBAC vs local RBAC through projects
This is especially true if you are not comfortable with the elevated permissions required by the default cluster-wide installation of the operator.
We have also selected the diagram below from the OpenShift documentation, as it clearly illustrates the relationships between cluster roles, local roles, cluster role bindings, local role bindings, users, groups and service accounts.
The "Predefined RBAC objects" section
below contains important information about how EDB Postgres® AI for CloudNativePG™ Cluster adheres
to Kubernetes and OpenShift RBAC implementation, covering default installed
cluster roles, roles, service accounts.
If you are familiar with the above concepts, you can proceed directly to the selected installation method. Otherwise, we recommend that you read the following resources taken from the OpenShift documentation and the Red Hat blog:
- "Operator Lifecycle Manager (OLM) concepts and resources"
- "Understanding authentication"
- "Role-based access control (RBAC)", covering rules, roles and bindings for authorization, as well as cluster RBAC vs local RBAC through projects
- "Default project service accounts and roles"
- "With Kubernetes Operators comes great responsibility" blog article
Cluster Service Version (CSV)
Technically, the operator is designed to run in OpenShift via the Operator Lifecycle Manager (OLM), according to the Cluster Service Version (CSV) defined by EDB.
The CSV is a YAML manifest that defines not only the user interfaces (available
through the web dashboard), but also the RBAC rules required by the operator
and the custom resources defined and owned by the operator (such as the
Cluster one, for example). The CSV also defines the available installModes
for the operator, namely: AllNamespaces (cluster-wide), SingleNamespace
(single project), MultiNamespace (multi-project), and OwnNamespace.
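For example, once the operator is installed you can inspect the install modes declared by its CSV; the commands below are one possible way to do so, and the CSV name must be taken from the output of the first command:

oc get csv -n openshift-operators
oc get csv <CSV-NAME> -n openshift-operators -o jsonpath='{.spec.installModes}'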
There's more ...
You can find out more about CSVs and install modes by reading "Operator group membership" and "Defining cluster service versions (CSVs)" from the OpenShift documentation.
Limitations for multi-tenant management
Red Hat OpenShift Container Platform provides limited support for simultaneously installing different variations of an operator on a single cluster. Like any other operator, EDB Postgres® AI for CloudNativePG™ Cluster becomes an extension of the control plane. As the control plane is shared among all tenants (projects) of an OpenShift cluster, operators too become shared resources in a multi-tenant environment.
Operator Lifecycle Manager (OLM) can install operators multiple times in different namespaces, with one important limitation: they all need to share the same API version of the operator.
For more information, please refer to "Operator groups" in OpenShift documentation.
Channels
EDB Postgres® AI for CloudNativePG™ Cluster is distributed through the following OLM channels, each serving a distinct purpose:
- candidate: this channel provides early access to the next potential fast release. It includes the latest pre-release versions with new features and fixes, but is considered experimental and not supported. Use this channel only for testing and validation purposes, not in production environments. Versions in candidate may not appear in other channels if no further updates are recommended.
- fast: designed for users who want timely access to the latest stable features and patches. The head of the fast channel always points to the latest patch release of the latest minor release of EDB Postgres for Kubernetes.
- stable: similar to fast, but restricted to the latest minor release currently under EDB's Long Term Support (LTS) policy. Designed for users who require predictable updates and official support while benefiting from ongoing stability and maintenance.
- stable-vX.Y: tracks the latest patch release within a specific minor version (e.g., stable-v1.26). These channels are ideal for environments that require version pinning and predictable updates within a stable minor release.
The fast and stable channels may span multiple minor versions, whereas
each stable-vX.Y channel is limited to patch updates within a specific minor
release.
EDB Postgres® AI for CloudNativePG™ Cluster follows trunk-based development and
continuous delivery principles. As a result, we generally recommend using the
fast channel to stay current with the latest stable improvements and fixes.
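If in doubt, you can list the channels currently published for the operator, for example with the following command (one possible approach; the output format may vary):

oc get packagemanifest cloud-native-postgresql -n openshift-marketplace -o jsonpath='{.status.channels[*].name}'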
Installation via web console
Ensuring access to EDB private registry
Important
You'll need access to the private EDB repository where both the operator and operand images are stored. Access requires a valid EDB subscription plan. Please refer to "Accessing EDB private image registries" for further details.
CRITICAL WARNING: UPGRADING OPERATORS
OpenShift users, or any customer attempting an operator upgrade, MUST configure the new unified repository pull secret (docker.enterprisedb.com/k8s) before running the upgrade. If the old, deprecated repository path is still in use during the upgrade process, image pull failure will occur, leading to deployment failure and potential downtime. Follow the Central Migration Guide first.
The OpenShift installation uses pull secrets to access the operand and operator images, which are held in a private repository.
Once you have credentials to the private repository, you will need to create a pull secret named postgresql-operator-pull-secret in the openshift-operators namespace for the EDB Postgres® AI for CloudNativePG™ Cluster operator images.
You can create this secret using the oc create command by replacing <TOKEN> with
the repository token for your EDB account, as explained in
Get your token.
oc create secret docker-registry postgresql-operator-pull-secret \
  -n openshift-operators \
  --docker-server=docker.enterprisedb.com \
  --docker-username=k8s \
  --docker-password="<TOKEN>"
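You can then confirm that the secret exists, for example:

oc get secret postgresql-operator-pull-secret -n openshift-operators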
The EDB Postgres® AI for CloudNativePG™ Cluster operator can be found in the Red Hat OperatorHub directly from your OpenShift dashboard.
Navigate in the web console to the Operators -> OperatorHub page:
Scroll to the Database section or type a keyword into the Filter by keyword box (in this case, "PostgreSQL") to find the EDB Postgres® AI for CloudNativePG™ Cluster Operator, then select it:
Read the information about the Operator and select Install.
The following Operator installation page expects you to choose:
- the installation mode: cluster-wide or single namespace installation
- the update channel (see the "Channels" section for more information - if unsure, pick fast)
- the approval strategy, following the availability on the marketplace of a new release of the operator, certified by Red Hat:
  - Automatic: OLM automatically upgrades the running operator with the new version
  - Manual: OpenShift waits for human intervention, by requiring an approval in the Installed Operators section
Important
The process of the operator upgrade is described in the "Upgrades" section.
Important
It is possible to install the operator in a single project
(technically speaking: OwnNamespace install mode) multiple times
in the same cluster. There will be an operator installation in each selected
namespace, with potentially different upgrade policies, as long as the API
version is the same (see "Limitations for multi-tenant management").
Note
If you are running with OpenShift 4.20 or later, OperatorHub has been integrated into the
Software Catalog. In the web console, navigate to Operators -> Software Catalog
and select a Project to view the software catalog.
Choosing cluster-wide vs local installation of the operator is a critical turning point. Trying to install the operator globally when a local installation already exists is blocked and results in an error. If you want to proceed, you need to remove every local installation of the operator first.
Cluster-wide installation
With cluster-wide installation, you are asking OpenShift to install the
Operator in the default openshift-operators namespace and to make it
available to all the projects in the cluster. This is the default and normally
recommended approach to install EDB Postgres® AI for CloudNativePG™ Cluster.
Warning
This doesn't mean that every user in the OpenShift cluster can use the EDB Postgres® AI for CloudNativePG™ Cluster Operator, deploy a Cluster object or even see the Cluster objects that
are running in their own namespaces. There are some special roles that users must
have in the namespace in order to interact with EDB Postgres® AI for CloudNativePG™ Cluster's managed
custom resources - primarily the Cluster one. Please refer to the
"Users and Permissions" section below for details.
From the web console, select All namespaces on the cluster (default) as
Installation mode:
As a result, the operator will be visible in all namespaces. In case of problems, as with any
other OpenShift operator, check the logs of any pods in the openshift-operators
project on the Workloads → Pods page that are reporting issues to troubleshoot further.
Beware
By choosing the cluster-wide installation you cannot easily move to a single project installation at a later time.
Single project installation
With single project installation, you are asking OpenShift to install the Operator in a given namespace, and to make it available to that project only.
Warning
This doesn't mean that every user in the namespace can use the EDB Postgres® AI for CloudNativePG™ Cluster Operator, deploy a Cluster object or even see the Cluster objects that
are running in the namespace. Similarly to the cluster-wide installation mode,
there are some special roles that users must have in the namespace in order to
interact with EDB Postgres® AI for CloudNativePG™ Cluster's managed custom resources - primarily the Cluster
one. Please refer to the "Users and Permissions" section below
for details.
From the web console, select A specific namespace on the cluster as
Installation mode, then pick the target namespace (in our example
proj-dev):
As a result, the operator will be visible in the selected namespace only. You
can verify this from the Installed operators page:
In case of a problem, from the Workloads → Pods page check the logs in any
pods in the selected installation namespace that are reporting issues to
troubleshoot further.
Beware
By choosing the single project installation you cannot easily move to a cluster-wide installation at a later time.
This installation process can be repeated in multiple namespaces in the same OpenShift cluster, enabling independent installations of the operator in different projects. In this case, make sure you read "Limitations for multi-tenant management".
Installation via the oc CLI
Important
Please refer to the "Installing the OpenShift CLI" section below
for information on how to install the oc command-line interface.
CRITICAL WARNING: UPGRADING OPERATORS
OpenShift users, or any customer attempting an operator upgrade, MUST configure the new unified repository pull secret (docker.enterprisedb.com/k8s) before running the upgrade. If the old, deprecated repository path is still in use during the upgrade process, image pull failure will occur, leading to deployment failure and potential downtime. Follow the Central Migration Guide first.
Instead of using the OpenShift Container Platform web console, you can install
the EDB Postgres® AI for CloudNativePG™ Cluster Operator from the OperatorHub and create a
subscription using the oc command-line interface. Through the oc CLI you
can install the operator in all namespaces, a single namespace or multiple
namespaces.
Warning
Multiple namespace installation is currently supported by OpenShift. However, definition of multiple target namespaces for an operator may be removed in future versions of OpenShift.
This section primarily covers the installation of the operator in multiple
projects with a simple example, by creating OperatorGroup and
Subscription objects.
Info
In our example, we will install the operator in the my-operators
namespace and make it only available in the web-staging, web-prod,
bi-staging, and bi-prod namespaces. Feel free to change the names of the
projects as you like or add/remove some namespaces.
1. Check that the cloud-native-postgresql operator is available from the OperatorHub:

   oc get packagemanifests -n openshift-marketplace cloud-native-postgresql
2. Inspect the operator to verify the installation modes (MultiNamespace in particular) and the available channels:

   oc describe packagemanifests -n openshift-marketplace cloud-native-postgresql
3. Create an OperatorGroup object in the my-operators namespace so that it targets the web-staging, web-prod, bi-staging, and bi-prod namespaces:

   apiVersion: operators.coreos.com/v1
   kind: OperatorGroup
   metadata:
     name: cloud-native-postgresql
     namespace: my-operators
   spec:
     targetNamespaces:
       - web-staging
       - web-prod
       - bi-staging
       - bi-prod
Important
Alternatively, you can list namespaces using a label selector, as explained in "Target namespace selection".
4. Create a Subscription object in the my-operators namespace to subscribe to the fast channel of the cloud-native-postgresql operator that is available in the certified-operators source of the openshift-marketplace (as previously located in steps 1 and 2):

   apiVersion: operators.coreos.com/v1alpha1
   kind: Subscription
   metadata:
     name: cloud-native-postgresql
     namespace: my-operators
   spec:
     channel: fast
     name: cloud-native-postgresql
     source: certified-operators
     sourceNamespace: openshift-marketplace
5. Use oc apply -f with the above YAML file definitions for the OperatorGroup and Subscription objects, as shown in the example below.
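   For example, assuming you saved the two manifests as operator-group.yaml and subscription.yaml (the file names are arbitrary):

   oc apply -f operator-group.yaml -f subscription.yaml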
The method described in this section can be very powerful in conjunction with
proper RoleBinding objects, as it enables mapping EDB Postgres® AI for CloudNativePG™ Cluster's
predefined ClusterRoles to specific users in selected namespaces.
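As a minimal, hypothetical sketch (the ClusterRole name, user, and namespace below are placeholders; check the ClusterRoles actually installed by the operator in your cluster):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cnp-cluster-edit
  namespace: web-prod
subjects:
  - kind: User
    name: alice
    apiGroup: rbac.authorization.k8s.io
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  # hypothetical name of a predefined ClusterRole shipped with the operator
  name: clusters.postgresql.k8s.enterprisedb.io-v1-edit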
Info
The above instructions can also be used for single project binding. The only difference is the number of specified target namespaces (one) and, possibly, the namespace of the operator group (ideally, the same as the target namespace).
The result of the above operation can also be verified from the web console, as shown in the image below.
Cluster-wide installation with oc
If you prefer, you can also use oc to install the operator globally, by
taking advantage of the default OperatorGroup called global-operators in
the openshift-operators namespace, and creating a new Subscription object for
the cloud-native-postgresql operator in the same namespace:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cloud-native-postgresql
  namespace: openshift-operators
spec:
  channel: fast
  name: cloud-native-postgresql
  source: certified-operators
  sourceNamespace: openshift-marketplace
Once you run oc apply -f with the above YAML file, the operator will be available in all namespaces.
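You can then verify that the operator has been installed and that its pods are running, for example:

oc get csv -n openshift-operators
oc get pods -n openshift-operators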
Installing the OpenShift CLI (oc)
The oc command is the OpenShift command-line interface (CLI). It is
highly recommended to install it on your system. Below is a basic set of
instructions for installing oc from your OpenShift dashboard.
First, select the question mark at the top right corner of the dashboard:
Then follow the instructions you are given, by downloading the binary that suits your needs in terms of operating system and architecture:
OpenShift CLI
For more detailed and updated information, please refer to the official OpenShift CLI documentation directly maintained by Red Hat.
Predefined RBAC objects
EDB Postgres® AI for CloudNativePG™ Cluster comes with a predefined set of resources that play an important role when it comes to RBAC policy configuration.
Custom Resource Definitions (CRD)
The EDB Postgres® AI for CloudNativePG™ Cluster operator owns the following custom resource definitions (CRD):
- Backup
- Cluster
- Pooler
- ScheduledBackup
- ImageCatalog
- ClusterImageCatalog
You can verify this by running:
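One possible way, assuming the postgresql.k8s.enterprisedb.io API group used by the operator's CRDs:

oc get crd | grep postgresql.k8s.enterprisedb.io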