For each PostgreSQL instance, the operator provides an exporter of metrics for
Prometheus via HTTP, on port 9187, named `metrics`.
The operator comes with a predefined set of metrics, as well as a highly
configurable and customizable system to define additional queries via one or
more `ConfigMap` or `Secret` resources (see the
"User defined metrics" section below for details).
Metrics can be accessed by querying the `/metrics` path on port 9187 of the
instance pod, for example with `curl http://<pod-ip>:9187/metrics`.
All monitoring queries that are performed on PostgreSQL are:

- transactionally atomic (one transaction per query)
- executed with the `pg_monitor` role
- executed with `application_name` set to `cnp_metrics_exporter`
- executed as user `postgres`

Please refer to the "Default Roles" section in the PostgreSQL documentation
for details on the `pg_monitor` role.
Queries, by default, are run against the *main database*, as defined by the
specified bootstrap method of the `Cluster` resource, according
to the following logic:

- using `initdb`: queries will be run against the specified database by default, i.e. the value passed as `initdb.database`, defaulting to `app` if not specified
- not using `initdb`: queries will run against the `postgres` database, by default
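For example, with an `initdb` bootstrap along these lines (a minimal sketch; names and sizes are illustrative), default queries would target the `app` database:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  storage:
    size: 1Gi
  bootstrap:
    initdb:
      database: app
      owner: app
```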
The default database can always be overridden for a given user-defined metric,
by specifying a list of one or more databases in the `target_databases` option.
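If you use the Prometheus Operator, the exporter can be scraped with a `PodMonitor` along these lines (a sketch; the name, namespace, and labels are illustrative):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: cluster-example
spec:
  selector:
    matchLabels:
      postgresql: cluster-example
  podMetricsEndpoints:
    - port: metrics
```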
Make sure you modify the example above with a unique name, as well as the
correct cluster's namespace and labels (we are using `cluster-example`).
Every PostgreSQL instance exporter automatically exposes a set of predefined metrics, which can be classified in two major categories:

- PostgreSQL related metrics, starting with `cnp_collector_*`, including:
  - number of WAL files and total size on disk
  - number of `.done` files in the archive status folder
  - requested minimum and maximum number of synchronous replicas, as well as the expected and actually observed values
  - flag indicating if replica cluster mode is enabled or disabled
  - flag indicating if a manual switchover is required
- Go runtime related metrics, starting with `go_*`
Below is a sample of the metrics returned by the `metrics`
endpoint of an instance. As you can see, the Prometheus format is
self-documenting:
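An abridged, illustrative excerpt (actual metric names and values will vary with the operator version and cluster state):

```
# HELP cnp_collector_collection_duration_seconds Collection time duration in seconds
# TYPE cnp_collector_collection_duration_seconds gauge
cnp_collector_collection_duration_seconds{collector="Collect.up"} 0.0031393
# HELP cnp_collector_up 1 if PostgreSQL is up, 0 otherwise.
# TYPE cnp_collector_up gauge
cnp_collector_up 1
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 25
```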
This feature is currently in beta state and the format is inspired by the `queries.yaml` file of the PostgreSQL Prometheus Exporter.
Custom metrics can be defined by users by referring to the created
`ConfigMap` or `Secret` in a `Cluster` definition, listing them in the
`customQueriesConfigMap`/`customQueriesSecret` sections, as in the following example:
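A sketch of such a `Cluster` definition (resource names are illustrative):

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
  namespace: test
spec:
  instances: 3
  storage:
    size: 1Gi
  monitoring:
    customQueriesConfigMap:
      - name: example-monitoring
        key: custom-queries
```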
The `customQueriesConfigMap`/`customQueriesSecret` sections contain a list of
`ConfigMap`/`Secret` references, each specifying the key in which the custom queries are defined.
Take care that the referred resources have to be created in the same namespace as the `Cluster` resource.
If you want ConfigMaps and Secrets to be automatically reloaded by the instances, you can
add a label with key `k8s.enterprisedb.io/reload` to them; otherwise you will have to reload
the instances using the `kubectl cnp reload` subcommand.
Here you can see an example of a `ConfigMap` containing a single custom query,
referenced by the `Cluster` example above:
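A sketch, defining a hypothetical `pg_replication` query modeled on the exporter's `queries.yaml` format (the SQL measures replication lag on replicas):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-monitoring
  namespace: test
  labels:
    k8s.enterprisedb.io/reload: ""
data:
  custom-queries: |
    pg_replication:
      query: "SELECT CASE WHEN NOT pg_is_in_recovery()
                THEN 0
                ELSE GREATEST (0,
                  EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp())))
                END AS lag"
      metrics:
        - lag:
            usage: "GAUGE"
            description: "Replication lag behind primary in seconds"
```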
A list of basic monitoring queries can be found in the default monitoring
configuration shipped with the operator. If the `target_databases` option
lists more than one database, the metric is collected from each of them.
Database auto-discovery can be enabled for a specific query by specifying a
shell-like pattern (i.e., containing `*`, `?`, or `[]`) in the list of
`target_databases`. If provided, the operator will expand the list of target
databases by adding all the databases returned by the execution of `SELECT
datname FROM pg_database WHERE datallowconn AND NOT datistemplate` and matching
the pattern according to `path.Match()` rules.

Note: the `*` character has a special meaning in YAML,
so you need to quote (`"*"`) the
`target_databases` value when it includes such a pattern.
It is recommended that you always include the name of the database
in the returned labels, for example using the `current_database()` function,
as in the following example:
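A sketch, with hypothetical databases `albert`, `bb`, and `freddie` and an illustrative `some_table`:

```yaml
some_query:
  target_databases:
    - albert
    - bb
    - freddie
  query: |
    SELECT
      current_database() AS datname,
      count(*) AS rows
    FROM some_table
  metrics:
    - datname:
        usage: "LABEL"
        description: "Name of the current database"
    - rows:
        usage: "GAUGE"
        description: "Number of rows in some_table"
```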
This will result in the following metrics being exposed:
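With the illustrative query above, the exposed metrics would take this shape (values are hypothetical):

```
cnp_some_query_rows{datname="albert"} 2
cnp_some_query_rows{datname="bb"} 5
cnp_some_query_rows{datname="freddie"} 10
```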
Here is an example of a query with auto-discovery enabled, which also
runs on the `template1` database (otherwise not returned by the
discovery query above):
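A sketch (the metric name and SQL are illustrative; note the quoted `"*"` pattern):

```yaml
tables_count:
  target_databases:
    - "*"
    - "template1"
  query: |
    SELECT
      current_database() AS datname,
      count(*) AS total
    FROM pg_stat_user_tables
  metrics:
    - datname:
        usage: "LABEL"
        description: "Name of the current database"
    - total:
        usage: "GAUGE"
        description: "Number of user tables"
```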
The above example will produce the following metrics (provided the databases exist):
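With hypothetical databases `app`, `postgres`, and `template1`, the output might look like:

```
cnp_tables_count_total{datname="app"} 12
cnp_tables_count_total{datname="postgres"} 0
cnp_tables_count_total{datname="template1"} 0
```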
Every custom query has the following basic structure:
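In outline (placeholders in angle brackets):

```yaml
<MetricName>:
  query: "<SQLQuery>"
  metrics:
    - <ColumnName>:
        usage: "<MetricType>"
        description: "<MetricDescription>"
```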
Here is a short description of all the available fields:

- `<MetricName>`: the name of the Prometheus metric
- `query`: the SQL query to run on the target database to generate the metrics
- `primary`: whether to run the query only on the primary instance
- `master`: same as `primary` (for compatibility with the Prometheus PostgreSQL exporter's syntax; deprecated)
- `runonserver`: a semantic version range to limit the versions of PostgreSQL the query should run on (e.g. `">=10.0.0"`)
- `target_databases`: a list of databases to run the `query` against, or a shell-like pattern to enable auto discovery. Overwrites the default database if provided.
- `metrics`: section containing a list of all exported columns, defined as follows:
  - `<ColumnName>`: the name of the column returned by the query
    - `usage`: one of the values described below
    - `description`: the metric's description
    - `metrics_mapping`: the optional column mapping when `usage` is set to `MAPPEDMETRIC`
The possible values for `usage` are:

| Column Usage Label | Description |
|:-------------------|:------------|
| `DISCARD` | this column should be ignored |
| `LABEL` | use this column as a label |
| `COUNTER` | use this column as a counter |
| `GAUGE` | use this column as a gauge |
| `MAPPEDMETRIC` | use this column with the supplied mapping of text values |
| `DURATION` | use this column as a text duration (in milliseconds) |
| `HISTOGRAM` | use this column as a histogram |
Please visit the "Metric Types" page from the Prometheus documentation for more information.
Custom defined metrics are returned by the Prometheus exporter endpoint
(`:9187/metrics`) with metric names of the form
`cnp_<MetricName>_<ColumnName>`, where columns with
`usage` set to
`LABEL` become Prometheus labels whose values are taken from the query results. Considering the
`pg_replication` example above, the exporter's endpoint would
return the following output when invoked:
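Assuming the `pg_replication` metric defined earlier, the output would look like this (the lag value is illustrative):

```
# HELP cnp_pg_replication_lag Replication lag behind primary in seconds
# TYPE cnp_pg_replication_lag gauge
cnp_pg_replication_lag 0
```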
Cloud Native PostgreSQL is inspired by the PostgreSQL Prometheus Exporter, but presents some differences. In particular, the following fields of a metric that are defined in the official Prometheus exporter are not implemented in Cloud Native PostgreSQL's exporter:

- `cache_seconds`: number of seconds to cache the result of the query

Similarly, the `pg_version` field of a column definition is not implemented.
The operator internally exposes Prometheus metrics
via HTTP on port 8080, on an endpoint named `metrics`.
Metrics can be accessed by querying the `/metrics` path on port 8080 of the
operator pod, for example with `curl http://<pod-ip>:8080/metrics`.
Currently, the operator exposes the default
kubebuilder metrics; see the
kubebuilder documentation for more details.