You can monitor multiple database clusters that reside on the same host by running multiple Master or Standby agents on that Failover Manager node. You may also run multiple Witness agents on a single node. To configure Failover Manager to monitor more than one database cluster, while ensuring that Failover Manager agents from different clusters do not interfere with each other, you must:

1. Create a cluster properties file for each database cluster, with a set of property values that is unique to that cluster.
2. Create a service script (RHEL 6.x or CentOS 6.x) or unit file (RHEL 7.x or CentOS 7.x) for each cluster that points to its properties file.
3. Start the service for each cluster.

The examples that follow use two database clusters (acctg and sales) running on the same node:
• Data for acctg resides in /opt/pgdata1; its server listens on port 5444.
• Data for sales resides in /opt/pgdata2; its server listens on port 5445.

To run a Failover Manager agent for both of these database clusters, use the efm.properties.in template to create two properties files. Each cluster properties file must have a unique name; for this example, we create acctg.properties and sales.properties to match the acctg and sales database clusters.

Within each cluster properties file, the following parameters must have a value that is unique to that cluster:

admin.port
virtualIp (if used)
virtualIp.interface (if used)

Within each cluster properties file, the db.port parameter should specify a unique value for each cluster, while the db.user and db.database parameters may have the same or different values. For example, the acctg.properties file may specify:

db.user=efm_user
db.database=acctg_db

while the sales.properties file may specify:

db.user=efm_user
db.database=sales_db

Some parameters require special attention when setting up more than one Failover Manager cluster agent on the same node. If multiple agents reside on the same node, each port must be unique. Any two ports will work, but it may be easier to keep the information clear if you use ports that are not close together.

When creating the cluster properties file for each cluster, the db.recovery.conf.dir parameter must also specify a value that is unique for each respective database cluster.

The following parameters are used when assigning the virtual IP address to a node. If your Failover Manager cluster does not use a virtual IP address, leave these parameters blank:

virtualIp
virtualIp.prefix

This parameter value is determined by the virtual IP addresses being used and may or may not be the same for both acctg.properties and sales.properties.

After creating the acctg.properties and sales.properties files, create a service script or unit file for each cluster that points to the respective properties file; this step is platform specific. If you are using RHEL 6.x or CentOS 6.x, see Section 4.3.1; if you are using RHEL 7.x or CentOS 7.x, see Section 4.3.2.

Please note: If you are using a custom service script or unit file, you must manually update the file to reflect the new service name when you upgrade Failover Manager.

4.3.1 RHEL 6.x or CentOS 6.x

If you are using RHEL 6.x or CentOS 6.x, copy the efm-3.4 service script to a new file with a name that is unique for each cluster (for example, one script for acctg and one for sales). Then edit the CLUSTER variable in each copy, changing the cluster name from efm to acctg or sales. Finally, use the new service scripts to start the agents; for example, start the acctg agent with its own service script.

4.3.2 RHEL 7.x or CentOS 7.x

If you are using RHEL 7.x or CentOS 7.x, copy the efm-3.4 unit file to a new file with a name that is unique for each cluster (for example, one unit file for acctg and one for sales). Then edit the CLUSTER variable within each unit file, changing the specified cluster name from efm to the new cluster name. You must also update the value of the PIDfile parameter to specify the new cluster name. Finally, use the new unit files to start the agents; for example, start the acctg agent with its new unit file.
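The platform-specific steps above can be sketched as shell commands. This is a minimal sketch, not the vendor's literal procedure: the file locations (/etc/init.d/efm-3.4, /usr/lib/systemd/system/efm-3.4.service, /var/run/efm-3.4) and the derived service names efm-acctg and efm-sales are assumptions based on a default Failover Manager 3.4 installation; verify them against the files your package actually installed before running anything.

```shell
# --- RHEL 6.x / CentOS 6.x (assumed default script path) ---
# Copy the stock service script once per cluster:
cp /etc/init.d/efm-3.4 /etc/init.d/efm-acctg
cp /etc/init.d/efm-3.4 /etc/init.d/efm-sales

# In each copy, edit the CLUSTER variable, e.g. in /etc/init.d/efm-acctg:
#   CLUSTER=acctg

# Start each agent with its own service script:
service efm-acctg start
service efm-sales start

# --- RHEL 7.x / CentOS 7.x (assumed default unit-file path) ---
# Copy the stock unit file once per cluster:
cp /usr/lib/systemd/system/efm-3.4.service /usr/lib/systemd/system/efm-acctg.service
cp /usr/lib/systemd/system/efm-3.4.service /usr/lib/systemd/system/efm-sales.service

# In each unit file, update the cluster name and pid file, e.g. for acctg:
#   Environment=CLUSTER=acctg
#   PIDFile=/var/run/efm-3.4/acctg.pid

# Reload systemd so it picks up the new unit files, then start each agent:
systemctl daemon-reload
systemctl start efm-acctg
systemctl start efm-sales
```

Because these commands modify system service definitions, run them as root; the properties files referenced by each script or unit file are the acctg.properties and sales.properties files created earlier.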