A standard installation of HARP includes two system services:
- HARP Manager (`harp-manager`) on the node being managed
- HARP Proxy (`harp-proxy`) on any node that routes client connections to the cluster
There are two ways to install and configure these services to manage Postgres for proper quorum-based connection routing.
HARP has dependencies on external software. Each dependency must meet a minimum version requirement.
The easiest way to install and configure HARP is to use the EDB TPAexec utility for cluster deployment and management. For details on this software, see the TPAexec product page.
TPAExec is currently available only through an EULA specifically dedicated to EDB Postgres Distributed cluster deployments. If you can't access the TPAExec URL, contact your sales or account representative.
Configure TPAexec to recognize that cluster routing is managed through HARP by ensuring the TPA `config.yml` file contains these attributes:
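A minimal sketch of the relevant fragment, assuming a `cluster_vars` section with a `failover_manager` key; confirm the exact attribute names against your TPAexec version's documentation:

```yaml
# config.yml — tell TPAexec that HARP manages failover and routing
cluster_vars:
  failover_manager: harp
```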
Versions of TPAexec earlier than 21.1 require a slightly different approach:
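For those older versions, the fragment might instead look like the following; the `enable_harp` key name is an assumption and should be verified against the release notes for your TPAexec version:

```yaml
# config.yml — pre-21.1 style (key name assumed)
cluster_vars:
  enable_harp: true
```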
After this, install HARP by invoking the `tpaexec` commands for making cluster modifications:
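The usual provision-and-deploy sequence can be sketched as follows; `${CLUSTER_DIR}` is a placeholder for your cluster configuration directory:

```bash
# Re-provision to pick up the config.yml changes, then deploy them
tpaexec provision ${CLUSTER_DIR}
tpaexec deploy ${CLUSTER_DIR}
```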
No other modifications are necessary apart from cluster-specific considerations.
Currently CentOS/RHEL packages are provided by the EDB packaging infrastructure. For details, see the HARP product page.
Currently, `etcd` packages for many popular Linux distributions aren't available in their standard public repositories. EDB has therefore packaged `etcd` for RHEL and CentOS versions 7 and 8, Debian, and variants such as Ubuntu LTS. You need access to the EDB HARP package repository to use these packages.
HARP requires a distributed consensus layer to operate. Currently this must be either `bdr` or `etcd`. If using fewer than three BDR nodes, you might need to rely on `etcd`. Otherwise, any BDR service outage reduces the consensus layer to a single node and thus prevents node consensus and disables routing.
If you're using `etcd` as the consensus layer, `etcd` must be installed either directly on the Postgres nodes or in a separate location they can access. To set `etcd` as the consensus layer, include this code in the HARP `config.yml` configuration file:
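A sketch of what that fragment might look like, assuming a `dcs` section with `driver` and `endpoints` keys; the hostnames are placeholders, and 2379 is etcd's default client port:

```yaml
# HARP config.yml — point the DCS at the etcd cluster
dcs:
  driver: etcd
  endpoints:
    - host1:2379
    - host2:2379
    - host3:2379
```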
When using TPAExec, all configured etcd endpoints are entered here automatically.
The `bdr` native consensus layer is available from BDR 3.6.21 and 3.7.3. This consensus layer model requires no supplementary software when managing routing for an EDB Postgres Distributed cluster.
To ensure quorum is possible in the cluster, always use more than two nodes so that BDR's consensus layer remains responsive during node maintenance or outages.
To set BDR as the consensus layer, include this in the HARP `config.yml` configuration file:
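A sketch of the corresponding fragment, again assuming a `dcs` section with `driver` and `endpoints` keys; the hostnames, database name, and user shown here are placeholders:

```yaml
# HARP config.yml — use BDR itself as the DCS; endpoints are Postgres DSNs
dcs:
  driver: bdr
  endpoints:
    - host=host1 dbname=bdrdb user=harp_user
    - host=host2 dbname=bdrdb user=harp_user
    - host=host3 dbname=bdrdb user=harp_user
```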
The endpoints for a BDR consensus layer follow the standard Postgres DSN connection format.