harpctl command-line tool v4

harpctl is a command-line tool for directly manipulating the consensus layer contents to fit the desired cluster geometry. You can use it, for example, to examine node status, "promote" a node to lead master, disable or enable cluster management, bootstrap cluster settings, and so on.

Synopsis

$ harpctl --help

Usage:
  harpctl [command]

Available Commands:
  apply       Command to set up a cluster
  completion  generate the autocompletion script for the specified shell
  fence       Fence specified node
  get         Command to get resources
  help        Help about any command
  manage      Manage Cluster
  promote     Promotes the specified node to be primary
  set         Command to set resources
  unfence     Unfence specified node
  unmanage    Unmanage Cluster
  version     Command to get version information

Flags:
  -c, --cluster string       name of cluster.
  -f, --config-file string   config file (default is /etc/harp/config.yml)
  -h, --help                 help for harpctl
  -o, --output string        output format ('json' or 'yaml')
  -t, --toggle               Help message for toggle

Use "harpctl [command] --help" for more information about a command.

In addition to this basic synopsis, each of the available commands has its own series of allowed subcommands and flags.

Configuration

harpctl must interact with the consensus layer to operate, so a minimum set of parameters must be defined in config.yml for successful execution. These include:

  • dcs.driver
  • dcs.endpoints
  • cluster.name

As an example using etcd:

cluster:
  name: mycluster
dcs:
  driver: etcd
  endpoints:
    - host1:2379
    - host2:2379
    - host3:2379

See Configuration for details.
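
If the configuration file isn't in the default location, use the -f (config file) and -c (cluster name) flags from the synopsis to point harpctl at the right cluster. The path and cluster name here are illustrative:

harpctl -f /etc/harp/myconfig.yml -c mycluster get cluster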

Execution

Execute harpctl like this:

harpctl command [flags]

Each command has its own series of subcommands and flags. Further help for these is available by executing a command of this form:

harpctl <command> --help

harpctl apply

You must use an apply command to "bootstrap" a HARP cluster using a file that defines various attributes of the intended cluster.

Execute an apply command like this:

harpctl apply <filename>

This essentially creates all of the initial cluster metadata, along with default or custom management settings. These settings are stored in the DCS because it acts as the ultimate source of truth for managing the cluster, which makes it possible to change them dynamically.

You can do this once to bootstrap the entire cluster, once per type of object, or even on a per-node basis for the sake of simplicity.
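
For example, assuming hypothetical bootstrap files split by object type, a stepwise bootstrap might consist of several apply invocations:

harpctl apply cluster.yml
harpctl apply node1.yml
harpctl apply node2.yml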

This is an example of a bootstrap file for a single node:

cluster:
  name: mycluster
nodes:
  - name: firstnode
    dsn: host=host1 dbname=bdrdb user=harp_user
    location: dc1
    priority: 100
    db_data_dir: /db/pgdata

As seen here, it is good practice to always include a cluster name preamble to ensure all changes target the correct HARP cluster, in case several are operating in the same environment.

Once apply completes without error, the node is integrated with the rest of the cluster.

Note

You can also use this command to bootstrap the entire cluster at once since all defined sections are applied at the same time. However, we don't encourage this use for anything but testing as it increases the difficulty of validating each portion of the cluster during initial definition.
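
For illustration, a combined bootstrap file simply concatenates the sections. This sketch extends the single-node example above with a second hypothetical node; all values are illustrative:

cluster:
  name: mycluster
nodes:
  - name: firstnode
    dsn: host=host1 dbname=bdrdb user=harp_user
    location: dc1
    priority: 100
    db_data_dir: /db/pgdata
  - name: secondnode
    dsn: host=host2 dbname=bdrdb user=harp_user
    location: dc1
    priority: 200
    db_data_dir: /db/pgdata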

harpctl fence

Marks the local or specified node as fenced. A fenced node is excluded from the cluster entirely: HARP Proxy doesn't send it traffic, its representative HARP Manager doesn't claim the lead master lease, and other exclusions apply. If running, HARP Manager stops Postgres on the node as well.

Execute a fence command like this:

harpctl fence (<node-name>)

The node-name is optional; if omitted, harpctl uses the name of the locally configured node.
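
For example, to fence a node named mynode and then confirm its state with the get node command described below (the node name is illustrative):

harpctl fence mynode
harpctl get node mynode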

harpctl get

Fetches information stored in the consensus layer for various elements of the cluster. This includes nodes, locations, the cluster, and so on. The full list includes:

  • cluster Returns the cluster state.
  • leader Returns the current or specified location leader.
  • location Returns current or specified location information.
  • locations Returns list of all locations.
  • node Returns the specified Postgres node.
  • nodes Returns list of all Postgres nodes.
  • proxy Returns current or specified proxy information.
  • proxies Returns list of all proxy nodes.

harpctl get cluster

Fetches information stored in the consensus layer for the current cluster:

> harpctl get cluster

Name      Enabled
----      -------
mycluster true

harpctl get leader

Fetches node information for the current lead master stored in the DCS for the specified location. Use harpctl get locations to list the defined locations.

Example:

> harpctl get leader dc1

Cluster   Name   Ready Role    Type Location Fenced Lock Duration
-------   ----   ----- ----    ---- -------- ------ -------------
mycluster mynode true  primary bdr  dc1      false  30

harpctl get location

Fetches location information for the specified location. Use harpctl get locations to list the defined locations.

Example:

> harpctl get location dc1

Cluster   Location Leader Previous Leader Target Leader Lease Renewals
-------   -------- ------ --------------- ------------- --------------
mycluster dc1      mynode mynode                        <nil>

harpctl get locations

Fetches information for all locations currently present in the DCS.

Example:

> harpctl get locations

Cluster   Location Leader   Previous Leader Target Leader Lease Renewals
-------   -------- ------   --------------- ------------- --------------
mycluster dc1      mynode   mynode                        <nil>
mycluster dc2      thatnode thatnode                      <nil>

harpctl get node

Fetches node information stored in the DCS for the specified node.

Example:

> harpctl get node mynode

Cluster    Name   Location Ready Fenced Allow Routing Routing Status Role    Type Lock Duration
-------    ----   -------- ----- ------ ------------- -------------- ----    ---- -------------
mycluster  mynode dc1      true  false  true          ok             primary bdr  30

harpctl get nodes

Fetches node information stored in the DCS for all nodes in the cluster.

Example:

> harpctl get nodes

Cluster    Name  Location Ready Fenced Allow Routing Routing Status Role    Type Lock Duration
-------    ----  -------- ----- ------ ------------- -------------- ----    ---- -------------
mycluster  bdra1 dc1      true  false  true          ok             primary bdr  30
mycluster  bdra2 dc1      true  false  false         N/A            primary bdr  30
mycluster  bdra3 dc1      true  false  false         N/A            primary bdr  30

harpctl get proxy

Fetches proxy information stored in the DCS for the specified proxy. Specify global to see proxy defaults for this cluster.

Example:

> harpctl get proxy proxy1

Cluster   Name   Pool Mode Auth Type Max Client Conn Default Pool Size
-------   ----   --------- --------- --------------- -----------------
mycluster proxy1 session   md5       1000            20

harpctl get proxies

Fetches proxy information stored in the DCS for all proxies in the cluster. Additionally, lists the global pseudo-proxy for default proxy settings.

Example:

> harpctl get proxies

Cluster   Name   Pool Mode Auth Type Max Client Conn Default Pool Size
-------   ----   --------- --------- --------------- -----------------
mycluster global session   md5       500             25
mycluster proxy1 session   md5       1000            20
mycluster proxy2 session   md5       1500            30

harpctl manage

If a cluster isn't in a managed state, instructs all HARP Manager services to resume monitoring Postgres and updating the consensus layer. Do this after maintenance is complete following HARP software updates or other significant changes that might affect the whole cluster.

Execute a manage command like this:

harpctl manage cluster

Note

Currently you can enable or disable cluster management only at the cluster level. Later versions will also make it possible to do this for individual nodes or proxies.

harpctl promote

Promotes the next available node that meets leadership requirements to lead master in the current location. Since this is a requested event, it invokes a smooth handover where:

  1. The existing lead master releases the lead master lease, provided:
    • If CAMO is enabled, the promoted node must be up to date and CAMO ready, and the CAMO queue must have less than node.maximum_camo_lag bytes remaining to be applied.
    • Replication lag between the old lead master and the promoted node is less than node.maximum_lag.
  2. The promoted node is the only valid candidate to take the lead master lease and does so as soon as it is released by the current holder. All other nodes ignore the unset lead master lease.
    • If CAMO is enabled, the promoted node temporarily disables client traffic until the CAMO queue is fully applied, even though it holds the lead master lease.
  3. HARP Proxy, if using PgBouncer, issues a PAUSE so ongoing transactions can complete. Once the promoted node claims the lead master lease, HARP Proxy reconfigures PgBouncer for the new connection target and resumes database traffic. If HARP Proxy is using the built-in proxy, it terminates existing connections and creates new connections to the lead master as clients request new connections.

Execute a promote command like this:

harpctl promote (<node-name>)

Provide the --force option to forcibly set a node to lead master, even if it doesn't meet the usual promotion criteria. This circumvents any verification of CAMO status or replication lag and causes an immediate transition to the promoted node. This is the only way to specify an exact node for promotion.

The node must be online and operational for this to succeed. Use this option with care.
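
A sketch of both forms follows; the exact placement of the --force flag relative to the node name is an assumption, so check harpctl promote --help on your installation:

# Promote the next qualified candidate in the current location
harpctl promote

# Force promotion of a specific node (flag placement is an assumption)
harpctl promote mynode --force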

harpctl set

Sets a specific attribute in the cluster to the supplied value. This is used to tweak configuration settings for a specific node, proxy, location, or the cluster rather than using apply. You can use this for the following object types:

  • cluster Sets cluster-related attributes.
  • location Sets specific location attributes.
  • node Sets specific node attributes.
  • proxy Sets specific proxy attributes.

harpctl set cluster

Sets cluster-related attributes only.

Example:

harpctl set cluster event_sync_interval=200

harpctl set node

Sets node-related attributes for the named node. Any options mentioned in Node directives are valid here.

Example:

harpctl set node mynode priority=500

harpctl set proxy

Sets proxy-related attributes for the named proxy. Any options mentioned in the Proxy directives are valid here. Properties set this way require a restart of the proxy before the new value takes effect.

Example:

harpctl set proxy proxy1 max_client_conn=750

Use global for cluster-wide proxy defaults:

harpctl set proxy global default_pool_size=10
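
After restarting the proxy, you can confirm the stored value with the get proxy command shown earlier:

harpctl get proxy proxy1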

harpctl unfence

Removes the fenced attribute from the local or specified node. This removes all previously applied cluster exclusions from the node so that it can again receive traffic or hold the lead master lease. Postgres is also started if it isn't running.

Execute an unfence command like this:

harpctl unfence (<node-name>)

The node-name is optional. If you omit it, harpctl uses the name of the locally configured node.

harpctl unmanage

Instructs all HARP Manager services in the cluster to remain running but to stop actively monitoring Postgres or modifying the contents of the consensus layer. This means that an ordinary failover event, such as a node outage, doesn't result in a leadership migration. This is intended for system or HARP maintenance prior to making changes to HARP software or other significant changes to the cluster.

Execute an unmanage command like this:

harpctl unmanage cluster

Note

Currently you can enable or disable cluster management only at the cluster level. Later versions will also make it possible to do this for individual nodes or proxies.
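
A typical maintenance sequence using the unmanage and manage commands might look like this (the intervening steps depend on your environment):

harpctl unmanage cluster
# ... perform HARP software updates or other maintenance ...
harpctl manage cluster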