This tutorial describes quickly configuring a Failover Manager cluster in a test environment. Other sections in this guide provide key information that you should read and understand before configuring Failover Manager for a production deployment.

The tutorial assumes that:
• A database server is running, and streaming replication is set up between a master and one or two standby nodes.
• You have installed Failover Manager on each node. For more information about installing Failover Manager, see Section 3.

You should start the configuration process on a master or standby node and then copy the configuration files to the other nodes to save time. Create the configuration files from the templates provided with Failover Manager, and change their ownership to the efm user:

cd /etc/edb/efm-3.2
cp efm.properties.in efm.properties
cp efm.nodes.in efm.nodes
chown efm:efm efm.properties
chown efm:efm efm.nodes

The cluster_name.properties file contains parameters that specify connection properties and behaviors for your Failover Manager cluster. Modifications to property settings are applied when Failover Manager starts.

The following are the minimal properties required to configure a Failover Manager cluster; if you are configuring a production system, see Section 3.5 for a complete list of properties. A sample excerpt showing these settings follows the list.
• Database connection properties. These are needed even on the witness node so that it can connect to the other databases when needed.
• The owner of the data directory (usually postgres or enterprisedb).
• Either the database service name or the location of the Postgres bin directory; only one of the two is needed. If you provide the service name, EFM will use a service command to control the database server when necessary; if you provide the location of the Postgres bin directory, EFM will use pg_ctl to control the database server.
• The data directory in which EFM will find or create recovery.conf files.
• The bind.address property, which holds the local address of the node and the port to use for EFM. Other nodes will use this address to reach the agent, and the agent will also use this address for connecting to the local database (as opposed to connecting to localhost). An example of the format appears in the excerpt below.
• The is.witness property. Set it to true on a witness node and false if the node is a master or standby.
• An address that EFM can reach to verify network availability. If you are running on a network without access to the Internet, change this to an address that is available on your network.
• Two startup-related properties that, when configuring a production cluster, can be either true or false depending on your system configuration and usage. Set them both to true to simplify startup if you're configuring an EFM test cluster.
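As an illustration, a minimal efm.properties excerpt for a master or standby node in a test cluster might look like the following. The property names correspond to entries in the efm.properties.in template shipped with Failover Manager 3.2 (verify them against your own template); the values shown here (user, password, port, paths, and addresses) are placeholders that you must replace with values from your own environment.

# Database connection properties; needed on every node, including the witness.
db.user=efm
# Encrypted password generated with the efm encrypt utility.
db.password.encrypted=1c01a0efcabf4f1d
db.port=5444
db.database=edb

# Owner of the data directory.
db.service.owner=enterprisedb

# Provide the service name OR the Postgres bin directory, not both.
db.service.name=edb-as-10
# db.bin=/usr/edb/as10/bin

# Directory in which EFM finds or creates recovery.conf files.
db.recovery.conf.dir=/var/lib/edb/as10/data

# Local address of this node and the port used by the EFM agent.
bind.address=192.168.91.12:7800

# true only on a witness node.
is.witness=false

# Change this if your network has no Internet access.
ping.server.ip=8.8.8.8

# Set both to true to simplify startup of a test cluster.
auto.allow.hosts=true
stable.nodes.file=true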
The cluster_name.nodes file is read at startup to tell an agent how to find the rest of the cluster or, in the case of the first node started, can be used to simplify authorization of subsequent nodes.

Add the address and port of each node in the cluster to this file. One node will act as the membership coordinator; the list should include at least the membership coordinator's address, as in the example below.
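For example, in a test cluster with a master, a standby, and a witness, the efm.nodes file might contain a single line listing the address:port pair of each node, separated by whitespace. The addresses and the port 7800 below are placeholders; each entry must match the bind.address value configured on the corresponding node.

192.168.91.12:7800 192.168.91.13:7800 192.168.91.14:7800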
Please note that the Failover Manager agent will not verify the content of the efm.nodes file; the agent expects that some of the addresses in the file may not be reachable yet (for example, because another agent hasn't been started). For more information about the efm.nodes file, see Section 3.5.2.

Copy the efm.properties and efm.nodes files to the /etc/edb/efm-3.2 directory on the other nodes in your sample cluster. After copying the files, change the file ownership so that the files are owned by efm:efm. The efm.properties file can be the same on every node, except for the following properties:
• Modify the bind.address property to use the node's local address.
• Set is.witness to true if the node is a witness node. If the node is a witness node, the properties relating to a local database installation will be ignored.

On any node, start the Failover Manager agent. The agent is named efm-3.2; you can use your platform-specific service command to control the service. For example, on a CentOS or RHEL 7.x host you can use systemctl, as shown in the command summary at the end of this section.

After the agent starts, run the efm cluster-status efm command to see the status of the single-node cluster. You should see the addresses of the other nodes in the Allowed node host list.

Start the agent on the other nodes, then run the efm cluster-status efm command on any node to see the cluster status.

If the cluster status output shows that the master and standby(s) are in sync, you can perform a switchover with the switchover command shown in the summary below. That command will promote a standby and reconfigure the master database as a new standby in the cluster. To switch back, run the command again.
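As a quick reference, the commands used in the last few steps are summarized below for a CentOS or RHEL 7.x host. The cluster name is assumed to be efm (the name used by the configuration files above), and the last command assumes the efm promote cluster_name -switchover syntax of the 3.2 efm utility; adjust the commands if your environment differs.

# Start the Failover Manager agent; repeat on each node.
systemctl start efm-3.2

# Check the cluster status from any node; the other nodes should appear
# in the Allowed node host list.
efm cluster-status efm

# When the master and standby(s) are in sync, perform a switchover.
efm promote efm -switchover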