Running Multiple Agents on a Single Node

You can monitor multiple database clusters that reside on the same host by running multiple Master or Standby agents on that Failover Manager node. You may also run multiple Witness agents on a single node. To configure Failover Manager to monitor more than one database cluster, while ensuring that Failover Manager agents from different clusters do not interfere with each other, you must:

  1. Create a cluster properties file for each member of each cluster that defines a unique set of properties and the role of the node within the cluster.

  2. Create a cluster members file for each member of each cluster that lists the members of the cluster.

  3. Customize the service script (on a RHEL or CentOS 6.x system) or the unit file (on a RHEL or CentOS 7.x system) for each cluster to specify the names of the cluster properties and the cluster members files.

  4. Start the services for each cluster.
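
For example, with the acctg and sales clusters used in this section, and assuming the default Failover Manager configuration directory of /etc/edb/efm-3.6 (adjust the path to match your installation), the cluster-specific files would be:

/etc/edb/efm-3.6/acctg.properties
/etc/edb/efm-3.6/acctg.nodes

/etc/edb/efm-3.6/sales.properties
/etc/edb/efm-3.6/sales.nodes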

The examples that follow use two database clusters (acctg and sales) running on the same node:

  • Data for acctg resides in /opt/pgdata1; its server listens on port 5444.

  • Data for sales resides in /opt/pgdata2; its server listens on port 5445.

To run a Failover Manager agent for both of these database clusters, use the efm.properties.in template to create two properties files. Each cluster properties file must have a unique name. For this example, we create acctg.properties and sales.properties to match the acctg and sales database clusters.
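
For example, assuming the efm.properties.in template resides in the default configuration directory of /etc/edb/efm-3.6, you might create the two files by copying the template:

# cp /etc/edb/efm-3.6/efm.properties.in /etc/edb/efm-3.6/acctg.properties

# cp /etc/edb/efm-3.6/efm.properties.in /etc/edb/efm-3.6/sales.properties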

The following parameters must be unique in each cluster properties file:

  • admin.port

  • bind.address

  • db.port

  • db.recovery.conf.dir

  • virtualIp (if used)

  • virtualIp.interface (if used)

Within each cluster properties file, the db.port parameter should specify a unique value for each cluster, while the db.user and db.database parameters may have the same or different values in each file. For example, the acctg.properties file might specify:

db.user=efm_user
db.password.encrypted=7c801b32a05c0c5cb2ad4ffbda5e8f9a
db.port=5444
db.database=acctg_db

The sales.properties file might instead specify:

db.user=efm_user
db.password.encrypted=e003fea651a8b4a80fb248a22b36f334
db.port=5445
db.database=sales_db

Some parameters require special attention when setting up more than one Failover Manager cluster agent on the same node. If multiple agents reside on the same node, each agent must listen on its own ports: the admin.port value, and the port portion of the bind.address value, must be unique for each cluster. Any two free ports will work, but choosing well-separated port numbers can make it easier to tell the clusters apart.
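
As a sketch, assuming the host's address is 192.168.1.10 and using the example ports 7800 and 7810 (all hypothetical values), acctg.properties might specify:

admin.port=7800
bind.address=192.168.1.10:7800

while sales.properties might specify:

admin.port=7810
bind.address=192.168.1.10:7810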

When creating the cluster properties file for each cluster, the db.recovery.conf.dir parameter must also specify a value that is unique to each database cluster.
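
For example, using the data directories given above (and assuming each cluster's recovery configuration resides in its data directory), acctg.properties might specify:

db.recovery.conf.dir=/opt/pgdata1

while sales.properties might specify:

db.recovery.conf.dir=/opt/pgdata2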

The following parameters are used when assigning the virtual IP address to a node. If your Failover Manager cluster does not use a virtual IP address, leave these parameters blank.

  • virtualIp

  • virtualIp.interface

  • virtualIp.prefix

These parameter values are determined by the virtual IP addresses in use, and may or may not be the same in acctg.properties and sales.properties.
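
As a sketch with hypothetical addresses, if each cluster is assigned its own virtual IP on the eth0 interface, acctg.properties might specify:

virtualIp=192.168.1.50
virtualIp.interface=eth0
virtualIp.prefix=24

while sales.properties might specify:

virtualIp=192.168.1.51
virtualIp.interface=eth0
virtualIp.prefix=24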

After creating the acctg.properties and sales.properties files, create a service script or unit file for each cluster that points to the respective properties file; this step is platform specific. If you are using RHEL 6.x or CentOS 6.x, see RHEL 6.x or CentOS 6.x; if you are using RHEL 7.x or CentOS 7.x, see RHEL 7.x or CentOS 7.x.

Please note: If you are using a custom service script or unit file, you must manually update the file to reflect the new service name when you upgrade Failover Manager.

RHEL 6.x or CentOS 6.x

If you are using RHEL 6.x or CentOS 6.x, you should copy the efm-3.6 service script to a new file with a name that is unique for each cluster. For example:

# cp /etc/init.d/efm-3.6 /etc/init.d/efm-acctg

# cp /etc/init.d/efm-3.6 /etc/init.d/efm-sales

Then, within each copy, edit the CLUSTER variable, changing the cluster name from efm to acctg or sales as appropriate.
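
For example, in /etc/init.d/efm-acctg the assignment would become:

CLUSTER=acctg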

After creating the service scripts, run:

# chkconfig efm-acctg on

# chkconfig efm-sales on

Then, use the new service scripts to start the agents. For example, you can start the acctg agent with the command:

# service efm-acctg start
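
You can start the sales agent the same way. Assuming the default installation path of /usr/edb/efm-3.6, you can then confirm that each agent is running with the efm cluster-status command:

# service efm-sales start

# /usr/edb/efm-3.6/bin/efm cluster-status acctg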

RHEL 7.x or CentOS 7.x

If you are using RHEL 7.x or CentOS 7.x, you should copy the efm-3.6 unit file to a new file with a name that is unique for each cluster. For example, if you have two clusters (named acctg and sales), the unit file names might be:

/etc/systemd/system/efm-acctg.service

/etc/systemd/system/efm-sales.service

Then, edit the CLUSTER variable within each unit file, changing the specified cluster name from efm to the new cluster name. For example, for a cluster named acctg, the value would specify:

Environment=CLUSTER=acctg

You must also update the value of the PIDFile parameter to specify the new cluster name. For example:

PIDFile=/var/run/efm-3.6/acctg.pid
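
Taken together, the modified lines of the [Service] section in efm-acctg.service would read as follows (a sketch; the remaining lines from the copied efm-3.6 unit file are unchanged):

[Service]
Environment=CLUSTER=acctg
PIDFile=/var/run/efm-3.6/acctg.pid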

After editing the unit files, enable the services as follows.
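
First, reload the systemd configuration so that systemd picks up the newly created unit files (a standard systemd step, not specific to Failover Manager):

# systemctl daemon-reload

Then enable each service: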

# systemctl enable efm-acctg.service

# systemctl enable efm-sales.service

Then, use the new services to start the agents. For example, you can start the acctg agent with the command:

# systemctl start efm-acctg
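
You can start the sales agent the same way, and confirm that a service is running with systemctl status:

# systemctl start efm-sales

# systemctl status efm-acctg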

For information about customizing a unit file, please visit:

http://fedoraproject.org/wiki/Systemd#How_do_I_customize_a_unit_file.2F_add_a_custom_unit_file.3F