Configuring HARP
TPA installs and configures HARP when `failover_manager` is set to `harp`. This value is the default for BDR-Always-ON clusters.

You must provide the `harp-manager` and `harp-proxy` packages. Contact EDB to obtain access to these packages.

See the HARP documentation for more details on HARP configuration.
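For example, a minimal config.yml fragment that selects HARP explicitly (this is already the default for BDR-Always-ON, so the line is usually implicit) might look like this:

```yaml
cluster_vars:
  failover_manager: harp
```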
| Variable | Description |
|----------|-------------|
| `cluster_name` | The name of the cluster. |
| `harp_consensus_protocol` | The consensus layer to use (`etcd` or `bdr`). |
| `harp_location` | The location of this instance (defaults to the instance's `location`). |
| `harp_ready_status_duration` | Amount of time in seconds the node's readiness status persists if not refreshed. |
| `harp_leader_lease_duration` | Amount of time in seconds the Lead Master lease persists if not refreshed. |
| `harp_lease_refresh_interval` | Amount of time in milliseconds between refreshes of the Lead Master lease. |
| `harp_dcs_reconnect_interval` | The interval, in milliseconds, between attempts that a disconnected node makes to reconnect to the DCS. |
| `harp_dcs_priority` | An additional ranking value used to prioritize one node over another when two nodes have an equal amount of lag and otherwise equally qualify to take the Lead Master lease. |
| `harp_stop_database_when_fenced` | Rather than removing a node from all possible routing, stop the database on a node when it's fenced. |
| `harp_fenced_node_on_dcs_failure` | If HARP is unable to reach the DCS, fence the node. |
| `harp_maximum_lag` | Highest allowable variance (in bytes) between the last recorded LSN of the previous Lead Master and this node before the node is allowed to take the Lead Master lock. |
| `harp_maximum_camo_lag` | Highest allowable variance (in bytes) between the last received LSN and the applied LSN between this node and its CAMO partners. |
| `harp_camo_enforcement` | Whether to strictly enforce CAMO queue state. |
| `harp_use_unix_socket` | Use a Unix domain socket for manager database access. |
| `harp_request_timeout` | Time in milliseconds to allow a query to the DCS to succeed. |
| `harp_watch_poll_interval` | Milliseconds to sleep between polls of the DCS. Applies only when the consensus layer is `bdr`. |
| `harp_proxy_timeout` | Builtin proxy connection timeout, in seconds, to the Lead Master. |
| `harp_proxy_keepalive` | Amount of time the builtin proxy waits on an idle connection to the Lead Master before sending a keepalive ping. |
| `harp_proxy_max_client_conn` | Maximum number of client connections accepted by harp-proxy (`max_client_conn`). |
| `harp_ssl_password_command` | A custom command that receives the obfuscated `sslpassword` on stdin and provides the handled `sslpassword` via stdout. |
| `harp_db_request_timeout` | Similar to `dcs -> request_timeout`, but for connections to the database. |
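As an illustrative sketch, several of these parameters can be set together under `cluster_vars`. The variable names and values below are examples only; check the variable reference for your TPA version before relying on them:

```yaml
cluster_vars:
  harp_leader_lease_duration: 6        # seconds before an unrefreshed lease expires (illustrative)
  harp_lease_refresh_interval: 2000    # milliseconds between lease refreshes (illustrative)
  harp_dcs_reconnect_interval: 1000    # milliseconds between DCS reconnect attempts (illustrative)
  harp_stop_database_when_fenced: false
```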
You can use the `harp-config` hook to execute tasks after the HARP configuration files are installed, for example, to install additional configuration files.
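As a sketch, assuming the hook follows the usual TPA hook convention of an Ansible task list placed in the cluster directory (the filename, source file, and paths below are illustrative assumptions):

```yaml
# hooks/harp-config.yml -- hypothetical example hook
- name: Install an additional HARP configuration file
  copy:
    src: extra-harp-settings.yml            # hypothetical file in the cluster directory
    dest: /etc/harp/extra-harp-settings.yml # hypothetical target path
    owner: postgres
    group: postgres
    mode: "0644"
```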
The `--harp-consensus-protocol` argument to `tpaexec configure` is mandatory for the BDR-Always-ON architecture.

If the `--harp-consensus-protocol etcd` option is given to `tpaexec configure`, then TPA sets `harp_consensus_protocol: etcd` in config.yml and gives the `etcd` role to a suitable subset of the instances, depending on your chosen layout.
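The resulting config.yml then looks roughly like the following excerpt (the instance name and role combination are illustrative; which instances receive the role depends on your layout):

```yaml
cluster_vars:
  harp_consensus_protocol: etcd

instances:
- Name: first            # illustrative instance name
  role:
  - bdr
  - etcd                 # this instance also runs the consensus layer
```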
HARP v2 requires etcd v3.5.0 or later, which is available in the products/harp/release package repositories provided by EDB.
You can configure the following parameters for etcd:

| Variable | Description |
|----------|-------------|
| `etcd_peer_port` | The port used by etcd for peer communication. |
| `etcd_client_port` | The port used by clients to connect to etcd. |
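If the defaults conflict with other services, the ports can be overridden under `cluster_vars`. The variable names follow the table above; the values shown are etcd's conventional defaults:

```yaml
cluster_vars:
  etcd_peer_port: 2380     # peer communication
  etcd_client_port: 2379   # client connections
```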
If the `--harp-consensus-protocol bdr` option is given to `tpaexec configure`, then TPA sets `harp_consensus_protocol: bdr` in config.yml. In this case, the existing PGD instances are used for consensus, and no further configuration is required.
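The equivalent config.yml fragment is just the one setting, since the PGD instances themselves provide the consensus layer:

```yaml
cluster_vars:
  harp_consensus_protocol: bdr
```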
If you want HARP proxy to use a separate read-only user, you can specify that by setting `harp_dcs_user: username` under `cluster_vars`. TPA uses the `harp_dcs_user` setting to create a read-only user and set it up in the DCS.
If you want HARP manager to use a separate user, you can specify that by setting `harp_manager_user: username` under `cluster_vars`. TPA uses that setting to create a new user and grant it the `bdr_superuser` role.
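For example (the usernames below are illustrative, not defaults):

```yaml
cluster_vars:
  harp_dcs_user: harp_dcs_reader   # hypothetical read-only user for HARP proxy
  harp_manager_user: harp_manager  # hypothetical user for HARP manager
```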
The command provided by `harp_ssl_password_command` is used by HARP to de-obfuscate the `sslpassword` given in the connection string. If `sslpassword` isn't present, or if it isn't obfuscated, then `harp_ssl_password_command` isn't required and shouldn't be specified.
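As a minimal sketch, assuming the password was obfuscated simply by base64-encoding it (your obfuscation scheme will likely differ), the command only needs to read the obfuscated value from stdin and write the clear password to stdout:

```shell
# Hypothetical de-obfuscation: the obfuscated sslpassword arrives on stdin;
# the handled (clear) password must be written to stdout.
obfuscated="c2VjcmV0"                  # "secret", base64-encoded (illustrative)
printf '%s' "$obfuscated" | base64 -d
```

The script configured in `harp_ssl_password_command` would wrap exactly this stdin-to-stdout transformation, whatever the actual obfuscation scheme is.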
You can configure the following parameters for the HARP service:

| Variable | Description |
|----------|-------------|
| `harp_manager_restart_on_failure` | When `true`, the `harp-manager` service is overridden so that it's restarted on failure. The default is `false`, to comply with the service file installed by the package. |
You can enable and configure the http(s) service for HARP that provides API endpoints to monitor the service's health.

| Variable | Description |
|----------|-------------|
| `harp_http_options` | Configures the `http` section of HARP config.yml that defines the http(s) API settings. |

The variable can contain these keys:

```yaml
enable: false
secure: false
host: <inventory_hostname>
port: 8080
probes:
  timeout: 10s
endpoint: "host=<proxy_name> port=<6432> dbname=<bdrdb> user=<username>"
```
The `cert_file` and `key_file` keys are both required if you use `secure: true` and want to use your own certificate and key. You must ensure that both certificate and key are available at the given location on the target node before running `tpaexec deploy`.

Leave `cert_file` and `key_file` empty if you want TPA to generate a certificate and key for you using a cluster-specific CA certificate.
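For example, to enable the secured API with your own certificate (the file paths are illustrative):

```yaml
cluster_vars:
  harp_http_options:
    enable: true
    secure: true
    cert_file: /etc/harp/harp-api.crt   # illustrative path; must exist before tpaexec deploy
    key_file: /etc/harp/harp-api.key    # illustrative path; must exist before tpaexec deploy
    port: 8080
```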
The TPA CA certificate isn't well-known, so you need to add it to the trust store of each machine that probes the endpoints. The CA certificate can be found in the cluster directory on the TPA node.
See the HARP documentation for more information about the available API endpoints.