For each PostgreSQL instance, the operator provides an exporter of metrics for
Prometheus via HTTP, on port 9187, named `metrics`.
The operator comes with a predefined set of metrics, as well as a highly
configurable and customizable system to define additional queries via one or
more ConfigMap or Secret resources (see the
"User defined metrics" section below for details).
Starting from version 1.11, EDB Postgres for Kubernetes already installs
by default a set of predefined metrics in a ConfigMap named `default-monitoring`.
Metrics can be accessed as follows:
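For example (a sketch; replace `<pod_ip>` with the IP of the instance pod you want to query):

```sh
curl http://<pod_ip>:9187/metrics
```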
All monitoring queries that are performed on PostgreSQL are:
- transactionally atomic (one transaction per query)
- executed with the `pg_monitor` role
- executed with `application_name` set to `cnp_metrics_exporter`
- executed as user `postgres`
Please refer to the "Default roles" section in the PostgreSQL documentation
for details on the `pg_monitor` role.
Queries, by default, are run against the main database, as defined by the
specified bootstrap method of the Cluster resource, according
to the following logic:

- initdb: queries will be run by default against the specified database in `initdb.database`, or `app` if not specified
- recovery: queries will be run by default against the specified database in `recovery.database`, or `postgres` if not specified
- pg_basebackup: queries will be run by default against the specified database in `pg_basebackup.database`, or `postgres` if not specified
The default database can always be overridden for a given user-defined metric,
by specifying a list of one or more databases in the `target_databases` option.
If you are interested in evaluating the integration of EDB Postgres for Kubernetes with Prometheus and Grafana, please look at cnp-sandbox.
A specific PostgreSQL cluster can be monitored using the
Prometheus Operator's resource PodMonitor.
A PodMonitor correctly pointing to a Cluster can be automatically created by the operator by setting
`.spec.monitoring.enablePodMonitor` to `true` in the Cluster resource itself (default: `false`).
Any change to the
PodMonitor created automatically will be overridden by the Operator at the next reconciliation
cycle. If you need to customize it, you can do so as described below.
To deploy a
PodMonitor for a specific Cluster manually, you can just define it as follows, changing it as needed:
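A minimal sketch, assuming a Cluster named `cluster-example` (the selector label may vary depending on the operator version):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: cluster-example
spec:
  selector:
    matchLabels:
      "k8s.enterprisedb.io/cluster": cluster-example
  podMetricsEndpoints:
    - port: metrics
```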
Make sure you modify the example above with a unique name, as well as the
correct cluster's namespace and labels (we are using `cluster-example`).
Every PostgreSQL instance exporter automatically exposes a set of predefined metrics, which can be classified in two major categories:
PostgreSQL related metrics, starting with `cnp_`, including:
- number of WAL files and total size on disk
- number of `.ready` and `.done` files in the archive status folder
- requested minimum and maximum number of synchronous replicas, as well as the expected and actually observed values
- flag indicating if replica cluster mode is enabled or disabled
- flag indicating if a manual switchover is required
Go runtime related metrics, starting with `go_`.
Below is a sample of the metrics returned by the `metrics`
endpoint of an instance. As you can see, the Prometheus format is self-documenting:
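An abridged, illustrative excerpt (metric values and labels will differ in your environment):

```text
# HELP cnp_collector_collection_duration_seconds Collection time duration in seconds
# TYPE cnp_collector_collection_duration_seconds gauge
cnp_collector_collection_duration_seconds{collector="Collect.up"} 0.002
# HELP cnp_collector_last_collection_error 1 if the last collection ended in error, 0 otherwise
# TYPE cnp_collector_last_collection_error gauge
cnp_collector_last_collection_error 0
# HELP cnp_collector_postgres_version Postgres version
# TYPE cnp_collector_postgres_version gauge
cnp_collector_postgres_version{cluster="cluster-example",full="15.3"} 15.3
```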
`cnp_collector_postgres_version` is a GaugeVec metric containing the
Major.Minor version of Postgres (either PostgreSQL or EPAS). The full
semantic version Major.Minor.Patch can be found inside one of its labels: `full`.
This feature is currently in beta state and the format is inspired by the queries.yaml file of the PostgreSQL Prometheus Exporter.
Custom metrics can be defined by users by referring to the created
ConfigMap or Secret in a
`customQueriesConfigMap` or `customQueriesSecret` section, as in the following example:
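For instance, a Cluster referencing a ConfigMap (the `cluster-example` and `example-monitoring` names and the `custom-queries` key are illustrative):

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
  namespace: test
spec:
  instances: 3
  storage:
    size: 1Gi
  monitoring:
    customQueriesConfigMap:
      - name: example-monitoring
        key: custom-queries
```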
The `customQueriesConfigMap`/`customQueriesSecret` sections contain a list of
ConfigMap/Secret references specifying the key in which the custom queries are defined.
Note that the referred resources have to be created in the same namespace as the Cluster resource.
If you want ConfigMaps and Secrets to be automatically reloaded by instances, you can
add a label with key `k8s.enterprisedb.io/reload` to them; otherwise, you will have to reload
the instances using the `kubectl cnp reload` subcommand.
When a user defined metric overwrites an already existing metric, the instance manager prints a JSON warning log
containing the message
`Query with the same name already found. Overwriting the existing one.`
and a key `queryName` containing the overwritten query name.
Here you can see an example of a
ConfigMap containing a single custom query,
referenced by the
Cluster example above:
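A sketch of such a ConfigMap, using the hypothetical `example-monitoring` name and `custom-queries` key:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-monitoring
  namespace: test
  labels:
    k8s.enterprisedb.io/reload: ""
data:
  custom-queries: |
    pg_replication:
      query: "SELECT CASE WHEN NOT pg_is_in_recovery()
              THEN 0
              ELSE GREATEST (0,
                EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp())))
              END AS lag"
      primary: true
      metrics:
        - lag:
            usage: "GAUGE"
            description: "Replication lag behind primary in seconds"
```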
A list of basic monitoring queries can be found in the `default-monitoring.yaml` file.
If the `target_databases` option lists more than one database,
the metric is collected from each of them.
Database auto-discovery can be enabled for a specific query by specifying a
shell-like pattern (i.e., containing `*`, `?` or `[]`) in the list of
`target_databases`. If provided, the operator will expand the list of target
databases by adding all the databases returned by the execution of `SELECT
datname FROM pg_database WHERE datallowconn AND NOT datistemplate` and matching
the pattern according to `path.Match()` rules.
The `*` character has a special meaning in YAML,
so you need to quote (`"*"`) the
`target_databases` value when it includes such a pattern.
It is recommended that you always include the name of the database
in the returned labels, for example using the `current_database()` function,
as in the following example:
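A sketch of such a query (the `some_query` name, the `some_table` table and the database names are hypothetical):

```yaml
some_query:
  query: |
    SELECT
      current_database() AS datname,
      count(*) AS rows
    FROM some_table
  metrics:
    - datname:
        usage: "LABEL"
        description: "Name of current database"
    - rows:
        usage: "GAUGE"
        description: "number of rows"
  target_databases:
    - albert
    - bb
    - freddie
```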
This will result in the following metrics being exposed:
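For instance, a custom query named `some_query` exposing a `rows` gauge with a `datname` label, targeting three hypothetical databases, could be rendered as (values are illustrative):

```text
cnp_some_query_rows{datname="albert"} 2
cnp_some_query_rows{datname="bb"} 5
cnp_some_query_rows{datname="freddie"} 10
```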
Here is an example of a query with auto-discovery enabled which also
runs on the `template1` database (otherwise not returned by the aforementioned query):
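A sketch of such a query (the `some_query` name and the `some_table` table are hypothetical):

```yaml
some_query:
  query: |
    SELECT
      current_database() AS datname,
      count(*) AS rows
    FROM some_table
  metrics:
    - datname:
        usage: "LABEL"
        description: "Name of current database"
    - rows:
        usage: "GAUGE"
        description: "number of rows"
  target_databases:
    - "*"
    - "template1"
```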
The above example will produce the following metrics (provided the databases exist):
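With hypothetical databases `albert`, `bb` and `freddie` present in the instance, the output could look like (values are illustrative):

```text
cnp_some_query_rows{datname="albert"} 2
cnp_some_query_rows{datname="bb"} 5
cnp_some_query_rows{datname="freddie"} 10
cnp_some_query_rows{datname="template1"} 7
cnp_some_query_rows{datname="postgres"} 42
```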
Every custom query has the following basic structure:
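In outline (placeholders in angle brackets):

```yaml
<MetricName>:
  query: "<SQLQuery>"
  metrics:
    - <ColumnName>:
        usage: "<MetricType>"
        description: "<MetricDescription>"
```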
Here is a short description of all the available fields:
- `<MetricName>`: the name of the Prometheus metric
- `query`: the SQL query to run on the target database to generate the metrics
- `primary`: whether to run the query only on the primary instance
- `master`: same as `primary` (for compatibility with the Prometheus PostgreSQL exporter's syntax - deprecated)
- `runonserver`: a semantic version range to limit the versions of PostgreSQL the query should run on (e.g. `">=11.0.0"`)
- `target_databases`: a list of databases to run the `query` against, or a shell-like pattern to enable auto discovery. Overwrites the default database if provided.
- `metrics`: section containing a list of all exported columns, defined as follows:
    - `<ColumnName>`: the name of the column returned by the query
        - `usage`: one of the values described below
        - `description`: the metric's description
        - `metrics_mapping`: the optional column mapping when `usage` is set to `MAPPEDMETRIC`
The possible values for `usage` are:

| Column Usage Label | Description                                               |
|:-------------------|:----------------------------------------------------------|
| `DISCARD`          | this column should be ignored                             |
| `LABEL`            | use this column as a label                                |
| `COUNTER`          | use this column as a counter                              |
| `GAUGE`            | use this column as a gauge                                |
| `MAPPEDMETRIC`     | use this column with the supplied mapping of text values  |
| `DURATION`         | use this column as a text duration (in milliseconds)      |
| `HISTOGRAM`        | use this column as a histogram                            |
Please visit the "Metric Types" page from the Prometheus documentation for more information.
Custom defined metrics are returned by the Prometheus exporter endpoint (`:9187/metrics`)
with the following format:
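The general shape, following the conventions of the PostgreSQL Prometheus Exporter, is:

```text
cnp_<MetricName>_<ColumnName>{<LabelColumnName>=<LabelValue> ... } <ColumnValue>
```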
`LabelColumnName` entries are the columns with
`usage` set to `LABEL`; their values become the corresponding `LabelValue`. In the
`pg_replication` example above, the exporter's endpoint would
return the following output when invoked:
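Assuming the `pg_replication` query exposes a single `lag` gauge, a sketch of that output:

```text
# HELP cnp_pg_replication_lag Replication lag behind primary in seconds
# TYPE cnp_pg_replication_lag gauge
cnp_pg_replication_lag 0
```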
The operator can be configured to automatically inject in a Cluster a set of
monitoring queries defined in a ConfigMap or a Secret, inside the operator's namespace.
You have to set the `MONITORING_QUERIES_CONFIGMAP` or
`MONITORING_QUERIES_SECRET` key in the "operator configuration",
respectively to the name of the ConfigMap or the Secret;
the operator will then use the content of the `queries` key.
Any change to the
queries content will be immediately reflected on all the
deployed Clusters using it.
The operator installation manifests come with a predefined ConfigMap,
postgresql-operator-default-monitoring, to be used by all Clusters.
The `MONITORING_QUERIES_CONFIGMAP` parameter is set by default to
`postgresql-operator-default-monitoring` in the operator configuration.
If you want to disable the default set of metrics, you can:
- disable it at operator level: set the `MONITORING_QUERIES_CONFIGMAP`/`MONITORING_QUERIES_SECRET` key to
  `""` (empty string) in the operator ConfigMap. Changes to the operator ConfigMap require an operator restart.
- disable it for a specific Cluster: set `.spec.monitoring.disableDefaultQueries` to
  `true` in the Cluster.
The ConfigMap or Secret specified via `MONITORING_QUERIES_CONFIGMAP`/`MONITORING_QUERIES_SECRET`
will always be copied to the Cluster's namespace with a fixed name: `postgresql-operator-default-monitoring`.
So, if you intend to have default metrics, you should not create a ConfigMap with this name in the cluster's namespace.
The EDB Postgres for Kubernetes exporter is inspired by the PostgreSQL Prometheus Exporter, but
presents some differences. In particular, the `cache_seconds` field is not implemented
in EDB Postgres for Kubernetes' exporter.
The operator internally exposes Prometheus metrics
via HTTP on port 8080, named `metrics`.
Metrics can be accessed as follows:
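For example (a sketch; replace `<pod_ip>` with the IP of the operator pod):

```sh
curl http://<pod_ip>:8080/metrics
```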
Currently, the operator exposes default
`kubebuilder` metrics; see the
kubebuilder documentation for more details.
Starting on OpenShift 4.6 there is a complete monitoring stack called
"Monitoring for user-defined projects",
which can be enabled by cluster administrators. Cloud Native PostgreSQL will
automatically create a
PodMonitor object if the option
`spec.monitoring.enablePodMonitor` of the
Cluster definition is set to `true`.
To enable cluster-wide
user-defined monitoring, you must first create a
ConfigMap with the name
`cluster-monitoring-config` in the
`openshift-monitoring` namespace/project with the following content:
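A minimal version, following the OpenShift user-workload monitoring documentation:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
```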
If the ConfigMap already exists, just add the variable `enableUserWorkload: true`.
This will enable monitoring for the whole cluster. If it is needed only for one namespace/project, please refer to the official Red Hat documentation or talk with your cluster administrator.
After that, just create the proper PodMonitor in the namespace/project with something similar to this:
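A sketch, assuming a Cluster named `cluster-example` (the selector label may vary depending on the operator version):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: cluster-example
spec:
  selector:
    matchLabels:
      "k8s.enterprisedb.io/cluster": cluster-example
  podMetricsEndpoints:
    - port: metrics
```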
We currently don’t use
ServiceMonitor because our service doesn’t define
a port pointing to the metrics. If we added a metric port, this could expose sensitive data.