Upgrading Failover Manager v4

Failover Manager provides a utility to assist you when upgrading a cluster managed by Failover Manager. To upgrade an existing cluster, you must:

  1. Install Failover Manager 4.8 on each node of the cluster. For detailed information about installing Failover Manager, see Installing Failover Manager.

  2. After installing Failover Manager, invoke the efm upgrade-conf utility to create the .properties and .nodes files for Failover Manager 4.8. The Failover Manager installer places the upgrade utility (efm upgrade-conf) in the /usr/edb/efm-4.8/bin directory. To invoke the utility, assume root privileges and enter:

    efm upgrade-conf <cluster_name>

    The efm upgrade-conf utility locates the .properties and .nodes files of preexisting clusters and copies the parameter values to a new configuration file for use by Failover Manager. The utility saves the updated copy of the configuration files in the /etc/edb/efm-4.8 directory.

  3. Modify the .properties and .nodes files for Failover Manager 4.8, specifying any new preferences. Use your choice of editor to modify any additional properties in the properties file (located in the /etc/edb/efm-4.8 directory) before starting the service for that node. For detailed information about property settings, see The cluster properties file.

  4. If you're using Eager Failover, you must disable it before stopping the Failover Manager cluster. For more information, see Disabling Eager Failover.

  5. Use a version-specific command to stop the old Failover Manager cluster. For example, you can use the following command to stop a version 4.7 cluster:

    /usr/edb/efm-4.7/bin/efm stop-cluster efm

    Note

    The primary agent doesn't drop the virtual IP address (if used) when it's stopped. The database remains up and accessible on the VIP during the EFM upgrade. See also Using Failover Manager with virtual IP addresses.

  6. Start the new Failover Manager service (edb-efm-4.8) on each node of the cluster.
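
For example, on a host where the edb-efm-4.8 unit is managed by systemd (an assumption about your platform), step 6 might look like this on each node:

systemctl start edb-efm-4.8
systemctl enable edb-efm-4.8     # optional: also start the service at boot
systemctl status edb-efm-4.8     # confirm the agent is running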

The following example shows invoking the upgrade utility to create the .properties and .nodes files for a Failover Manager installation:

[root@hostname ~]# /usr/edb/efm-4.8/bin/efm upgrade-conf efm
Checking directory /etc/edb/efm-4.7
Processing efm.properties file
Checking directory /etc/edb/efm-4.7
Processing efm.nodes file

Upgrade of files is finished. The owner and group for properties and nodes files have been set as 'efm'.
[root@hostname ~]#

The optional -source flag

You can use the -source flag to explicitly specify the directory containing the files to process. If the directory is a Failover Manager configuration location, that is, /etc/edb/efm-<earlier_version>, the utility writes the new files in the default configuration directory. This behavior allows upgrading from a specific earlier version if desired.

If the source directory is any other directory, the utility creates the new files in the directory where the command was invoked, owned by the user who ran the command. This approach is typically used when running Failover Manager without sudo access, and it doesn't require root privileges.
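
For example, a non-root user might point the utility at a saved copy of the old configuration files and generate the new files in the current directory. This is only a sketch: the directory paths are illustrative, and the placement of the -source flag after the cluster name is an assumption to verify against the utility's usage output on your installation.

cd ~/efm-upgrade
/usr/edb/efm-4.8/bin/efm upgrade-conf efm -source ~/efm-backup/efm-4.7-config
# The new efm.properties and efm.nodes files are written to ~/efm-upgrade
# and are owned by the invoking user; root privileges aren't required.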

Summary:

  • The -source flag isn't used. The utility searches previous installation directories for configuration files. The new files are generated in the current default configuration directory and are owned by the efm user. Root privileges are required.

  • The -source flag is set to a previous installation's configuration directory. The utility looks only in the specified directory for configuration files. The new files are generated in the current default configuration directory and are owned by the efm user. Root privileges are required.

  • The -source flag is set to any other directory. The utility looks only in the specified directory for configuration files. The new files are generated in the directory from which the command was invoked and are owned by the user invoking the command. Root privileges aren't required.

Note

In all cases, if a <cluster_name>.properties or <cluster_name>.nodes file already exists in the target directory, it's renamed with a timestamp before the new file is saved.

Uninstalling Failover Manager

Note

If you are using custom scripts, check whether they call any Failover Manager scripts. For example, a custom script might run after promotion to perform various tasks and then call Failover Manager's efm_address script to acquire a virtual IP address. If any of your custom scripts call Failover Manager scripts, update them to use the newly installed version of those scripts before uninstalling the older version.
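
For example, before removing the old packages, you might locate custom scripts that still reference the old installation path and point them at the new one. This is only a sketch: the /opt/custom-scripts location and the post_promotion.sh file name are illustrative, and you should review each match before changing it.

# Find custom scripts that still call Failover Manager 4.7 scripts
grep -rl '/usr/edb/efm-4.7/bin' /opt/custom-scripts

# After reviewing the matches, update them to use the 4.8 path
sed -i 's|/usr/edb/efm-4.7/bin|/usr/edb/efm-4.8/bin|g' /opt/custom-scripts/post_promotion.sh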

After upgrading to Failover Manager 4.8, you can use your native package manager to remove previous installations of Failover Manager. For example, use the command for your platform to remove Failover Manager 4.7 and any unneeded dependencies:

  • On RHEL or CentOS 7.x:

    yum remove edb-efm47

  • On RHEL, Rocky Linux, or AlmaLinux 8.x:

    dnf remove edb-efm47

  • On Debian or Ubuntu:

    apt-get remove edb-efm47

  • On SLES:

    zypper remove edb-efm47

Performing a maintenance task

You can perform maintenance activities such as applying an OS patch or a minor database version upgrade. For example, you can upgrade from one minor version to another (such as from 10.1.5 to 10.2.7) or apply a patch release for a version.

First, update the database server on each standby node of the Failover Manager cluster. Then perform a switchover, promoting a standby node to the role of primary in the Failover Manager cluster. Finally, perform a database update on the old primary node.

On each node of the cluster, perform the following steps to update the database server:

  1. Stop the Failover Manager agent.
  2. Stop the database server.
  3. Update the database server.
  4. Start the database service.
  5. Start the Failover Manager agent.
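
For example, on a systemd-based standby node the sequence might look like the following. The database service name (edb-as-15) and the update command are assumptions; substitute the service and packages that match your database installation.

systemctl stop edb-efm-4.8      # 1. stop the Failover Manager agent
systemctl stop edb-as-15        # 2. stop the database server (service name assumed)
dnf update edb-as15-server      # 3. update the database server (package name assumed)
systemctl start edb-as-15       # 4. start the database service
systemctl start edb-efm-4.8     # 5. start the Failover Manager agent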

For detailed information about controlling the EDB Postgres Advanced Server service or upgrading your version of EDB Postgres Advanced Server, see the EDB Postgres Advanced Server Guide.

When your updates are complete, you can use the efm set-priority command to add the old primary to the front of the standby list (if needed) and then perform a switchover to return the cluster to its original state.
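
For example, assuming a cluster named efm and an old primary whose agent address is 10.0.1.10:7800 (both illustrative), the sequence might look like this; verify the exact set-priority arguments and the -switchover option against your installation's efm usage output.

# Move the old primary to the front of the standby list
/usr/edb/efm-4.8/bin/efm set-priority efm 10.0.1.10:7800 1

# Perform a switchover so the original node becomes primary again
/usr/edb/efm-4.8/bin/efm promote efm -switchover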