Once configured, a Failover Manager cluster requires no regular maintenance. The following sections describe the management tasks that may occasionally be required by a Failover Manager cluster.

To start the Failover Manager cluster on RHEL 6.x or CentOS 6.x, assume superuser privileges, and invoke the command:

service efm-2.0 start

To start the Failover Manager cluster on RHEL 7.x or CentOS 7.x, assume superuser privileges, and invoke the command:

systemctl start efm-2.0

If the cluster properties file for the node specifies that is.witness is true, the node will start as a Witness node.

If the node is not a dedicated Witness node, Failover Manager will connect to the local database and invoke the pg_is_in_recovery() function. If the server responds false, the agent assumes the node is a Master node, and assigns a virtual IP address to the node (if applicable). If the server responds true, the Failover Manager agent assumes that the node is a Standby server.

After joining the cluster, the Failover Manager agent checks the supplied database credentials to ensure that it can connect to all of the databases within the cluster. If the agent cannot connect, it will shut down.

You can add a node to a Failover Manager cluster at any time. To be a useful Standby for the current node, the node must be a standby in the PostgreSQL Streaming Replication scenario.
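The startup role decision described above can be sketched as follows. This is an illustration only: the is.witness value and the pg_is_in_recovery() result are hard-coded stand-ins for what the agent actually reads from the cluster properties file and the local database.

```shell
#!/bin/sh
# Sketch of the agent's startup role decision (illustrative only).
# IS_WITNESS stands in for the is.witness cluster property; IN_RECOVERY
# stands in for the result of SELECT pg_is_in_recovery() on the local
# database. Both values are hard-coded assumptions here.
IS_WITNESS=false
IN_RECOVERY=true

if [ "$IS_WITNESS" = "true" ]; then
    ROLE="Witness"      # dedicated Witness node
elif [ "$IN_RECOVERY" = "false" ]; then
    ROLE="Master"       # agent assigns the virtual IP address, if applicable
else
    ROLE="Standby"
fi
echo "$ROLE"
```

With the values shown (not a witness, database in recovery), the sketch prints Standby, matching the behavior described above.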
1. Assume the identity of efm or the OS superuser on any existing node (that is currently part of the running cluster), and invoke the efm add-node command, adding the IP address of the new node to the Failover Manager Allowed node host list. When invoking the command, specify the cluster name, the IP address of the new node, and, if applicable, the failover priority of the new node:

efm add-node cluster_name ip_address [priority]

For more information about using the efm add-node command or controlling a Failover Manager service, see Section 5.
4. Assume superuser privileges on the new node, and use the service efm-2.0 start command to start the Failover Manager agent.

When the new node joins the cluster, Failover Manager will send a notification to the administrator email provided in the user.email parameter in the cluster properties file.

If your Failover Manager cluster includes more than one Standby server, you can use the efm add-node command to influence the promotion priority of the Standby nodes. Invoke the command on any existing member of the Failover Manager cluster, and specify a priority value after the IP address of the member. For example, the following command instructs Failover Manager that the acctg cluster member that is monitoring 10.0.1.9:7800 is the primary Standby (1):

efm add-node acctg 10.0.1.9 1

In the event of a failover, Failover Manager will first retrieve information from Postgres streaming replication to confirm which Standby node has the most recent data, and promote the node with the least chance of data loss. If two Standby nodes contain equally up-to-date data, the node with a higher user-specified priority value will be promoted to Master. To check the priority value of your Standby nodes, use the command:

efm cluster-status cluster_name

Please note: the promotion priority may change if a node becomes isolated from the cluster, and later re-joins the cluster.

You can invoke efm promote on any node of a Failover Manager cluster to start a manual promotion of a Standby database to Master database. Manual promotion should only be performed during a maintenance window for your database cluster. If you do not have an up-to-date Standby database available, you will be prompted before continuing. To start a manual promotion, assume the identity of efm or the OS superuser, and invoke the command:

efm promote cluster_name

During a manual promotion, the Master agent releases the virtual IP address before creating a recovery.conf file in the directory specified by the db.recovery.conf.dir parameter.
The Master agent remains running, and assumes a status of Idle.

The Standby agent confirms that the virtual IP address is no longer in use before pinging a well-known address to ensure that the agent is not isolated from the network. The Standby agent runs the fencing script and promotes the Standby database to Master. The Standby agent then assigns the virtual IP address to the Standby node, and runs the post-promotion script (if applicable).

Failover Manager currently does not provide fallback functionality to restore the old Master database; you must perform this configuration manually.

Please note that the efm promote command instructs the service to ignore the value specified in the auto.failover parameter in the cluster properties file.

When you stop an agent, Failover Manager will remove the node's address from the cluster members list on all of the running nodes of the cluster, but will not remove the address from the Failover Manager Allowed node host list.

To stop the Failover Manager agent on RHEL 6.x or CentOS 6.x, assume superuser privileges, and invoke the command:

service efm-2.0 stop

To stop the Failover Manager agent on RHEL 7.x or CentOS 7.x, assume superuser privileges, and invoke the command:

systemctl stop efm-2.0

Until you invoke the efm remove-node command (removing the node's address from the Allowed node host list), you can use the service efm-2.0 start command to restart the node at a later date without first running the efm add-node command again.

To stop a Failover Manager cluster, connect to any node of the cluster, assume the identity of efm or the OS superuser, and invoke the command:

efm stop-cluster cluster_name

The command will cause all Failover Manager agents to exit. Terminating the Failover Manager agents completely disables all failover functionality.

The efm remove-node command removes the IP address of a node from the Failover Manager Allowed node host list.
Assume the identity of efm or the OS superuser on any existing node (that is currently part of the running cluster), and invoke the efm remove-node command, specifying the cluster name and the IP address of the node:

efm remove-node cluster_name ip_address

The efm remove-node command will not stop a running agent; the service will continue to run on the node until you stop the agent (for information about controlling the agent, see Section 5). If the agent or cluster is subsequently stopped, the node will not be allowed to rejoin the cluster, and will be removed from the failover priority list (and will be ineligible for promotion).
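Putting the stop and removal steps together, permanently decommissioning a node might look like the following sketch. The cluster name acctg and the address 10.0.1.9 are hypothetical, and the commands are only echoed here rather than executed:

```shell
#!/bin/sh
# Hypothetical values; substitute your own cluster name and node address.
CLUSTER=acctg
NODE_IP=10.0.1.9

# 1. On the node being removed: stop the agent as superuser
#    (use "systemctl stop efm-2.0" on RHEL/CentOS 7.x).
STOP_CMD="service efm-2.0 stop"

# 2. On any remaining cluster member, as efm or the OS superuser:
#    remove the node from the Allowed node host list so that it
#    cannot rejoin the cluster.
REMOVE_CMD="efm remove-node $CLUSTER $NODE_IP"

echo "$STOP_CMD"
echo "$REMOVE_CMD"
```

Because stopping the agent alone leaves the address on the Allowed node host list, both steps are needed for a permanent removal; omit step 2 if you intend to restart the node later with service efm-2.0 start.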