You can monitor multiple database clusters that reside on the same host by running multiple Master or Standby agents on that Failover Manager node. You may also run multiple Witness agents on a single node. To configure Failover Manager to monitor more than one database cluster while ensuring that agents from different clusters do not interfere with one another, you must give each cluster its own properties file and its own service script, and make sure that the parameter values described below are unique to each cluster.
The examples that follow use two database clusters (acctg and sales) running on the same node:
• Data for acctg resides in /opt/pgdata1; its server is monitoring port 5444.
• Data for sales resides in /opt/pgdata2; its server is monitoring port 5445.

To run a Failover Manager agent for both of these database clusters, use the efm.properties.in template to create two properties files. Each cluster properties file must have a unique name. For this example, we create acctg.properties and sales.properties to match the acctg and sales database clusters.

Within each cluster properties file, the following parameters must specify a value that is unique to that cluster:

• admin.port
• script.fence (if used)
• virtualIp (if used)
• virtualIp.interface (if used)

Within each cluster properties file, the db.port parameter should specify a unique value for each cluster, while the db.user and db.database parameters may have the same or unique values. For example, the acctg.properties file may specify:

db.user=efm_user
db.database=acctg_db

While the sales.properties file may specify:

db.user=efm_user
db.database=sales_db

Some parameters require special attention when setting up more than one Failover Manager cluster agent on the same node. If multiple agents reside on the same node, each port must be unique. Any two available ports will work, but it may be easier to keep the configuration clear if you use ports that are not too close to each other.

Remember, the database user specified in the cluster properties file must have read access to the database.

When creating the cluster properties file for each cluster, the db.recovery.conf.dir parameter must also specify a value that is unique for each respective database cluster.

If you are using a fencing script, use the script.fence parameter to identify a fencing script that is unique to each cluster. In the event of a failover, Failover Manager does not pass any information to the fencing script that could identify which master has failed.

If a Linux firewall is enabled on the host of a Failover Manager node, you may need to add rules to the firewall configuration that allow tcp communication between the EFM processes in the cluster.

The following parameters are used when assigning the virtual IP address to a node. If your Failover Manager cluster does not use a virtual IP address, leave these parameters blank. You must specify a unique virtual IP address for each cluster. If the same address is used, a failure of one database cluster would cause the address to be released from the master, breaking existing connections to the remaining database cluster.

virtualIp.interface

You must specify a unique interface name for each cluster.
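Putting the uniqueness rules above together, the two properties files might contain values like the following. This is a minimal sketch: the db.port and interface values come from the example in this section, while the admin.port, virtualIp, and db.recovery.conf.dir values are illustrative assumptions, not documented defaults.

```shell
# Write the cluster-specific parameters for each cluster (values for
# admin.port, virtualIp, and db.recovery.conf.dir are assumptions).
cat > acctg.properties <<'EOF'
db.user=efm_user
db.database=acctg_db
db.port=5444
admin.port=7800
db.recovery.conf.dir=/opt/pgdata1
virtualIp=192.168.10.101
virtualIp.interface=eth0:0
EOF

cat > sales.properties <<'EOF'
db.user=efm_user
db.database=sales_db
db.port=5445
admin.port=7810
db.recovery.conf.dir=/opt/pgdata2
virtualIp=192.168.10.102
virtualIp.interface=eth0:1
EOF

# Quick check that the admin ports really differ between the two files:
grep -H '^admin.port=' acctg.properties sales.properties
```

Note that db.user and db.database may legitimately repeat across clusters; the ports, recovery directories, and virtual IP values must not.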
For example, acctg.properties might include a value of eth0:0, while sales.properties might specify eth0:1.

virtualIp.netmask

This parameter value is determined by the virtual IP addresses being used and may or may not be the same for both acctg.properties and sales.properties.

After creating the acctg.properties and sales.properties files, create a service script for each cluster that points to the respective properties file; this step is platform specific. If you are using RHEL 6.x or CentOS 6.x, see Section 4.3.1; if you are using RHEL 7.x or CentOS 7.x, see Section 4.3.2.

4.3.1 RHEL 6.x or CentOS 6.x

If you are using RHEL 6.x or CentOS 6.x, copy the efm-2.0 service script to a new file with a name that is unique for each cluster. Then edit the CLUSTER variable in each copy, changing the cluster name from efm to acctg or sales. Finally, use the new service scripts to start the agents; for example, you can start the acctg agent with its new service script.

4.3.2 RHEL 7.x or CentOS 7.x

If you are using RHEL 7.x or CentOS 7.x, copy the efm-2.0 service script to a new file with a name that is unique for each cluster. Then edit the CLUSTER variable in each copy, changing the cluster name from efm to acctg or sales. Finally, use the new service scripts to start the agents; for example, you can start the acctg agent with its new service script.
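The copy-and-edit step for the service scripts can be sketched as follows. A local stand-in file is used here in place of the installed efm-2.0 service script, whose actual path depends on your platform and packaging (an assumption, not something this guide specifies):

```shell
# Stand-in for the installed efm-2.0 service script; on a real host you
# would copy the actual script from its installed location (path varies
# by platform, so it is not assumed here).
printf 'CLUSTER=efm\n' > efm-2.0

# One uniquely named copy per cluster:
cp efm-2.0 efm-acctg
cp efm-2.0 efm-sales

# Point each copy at its own cluster by editing the CLUSTER variable:
sed -i 's/^CLUSTER=efm$/CLUSTER=acctg/' efm-acctg
sed -i 's/^CLUSTER=efm$/CLUSTER=sales/' efm-sales

# Confirm each copy now names its own cluster:
grep -H '^CLUSTER=' efm-acctg efm-sales
```

On a real RHEL 6.x or CentOS 6.x host you would then start each agent through its service script (for example, with the service command); on RHEL 7.x or CentOS 7.x, through systemctl.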