Modifying an Ark Cluster
To review or modify cluster properties, you can:
- Right-click on a cluster name and select Properties… from the context menu.
- Highlight the cluster name and select Properties… from the Object menu.
Please note that not all cluster properties are modifiable.
Use fields on the General tab to review or modify general details about the cluster:
- The Name field displays the name of the cluster; this field is not modifiable.
- The Owner field displays the name of the cluster owner; an administrative user may use the drop-down listbox to reassign ownership of a cluster to a user that has previously authenticated with the console host.
- The Email field displays the notification email for the cluster.
Select the Maintenance tab to continue.
Use fields on the Maintenance tab to manage cluster availability behaviors:
- If Monitor database health is set to Yes, EDB Ark monitors the health of the database to ensure that service is not interrupted. If the state of the database server changes to any state other than running while monitoring is enabled, Ark first attempts to restart the database. If that restart fails, Ark restores the configuration files to their original settings and attempts another restart. If the server still fails to restart after the configuration is restored, Ark fails over to a new instance. Set Monitor database health to No to instruct Ark not to automatically restart the database if it stops.
- When Monitor load balancer health is set to Yes, EDB Ark monitors the health of the load balancer (pgpool) to ensure that service is not interrupted; if the load balancer fails while monitoring is enabled, pgpool is automatically restarted. Set Monitor load balancer health to No if you do not want the load balancer to be monitored and automatically restarted when an interruption in service is detected.
- Use the Cluster healing mode radio buttons to specify the type of failover that should be employed:
- Select the Replace failed master with a new master radio button to specify that the cluster manager should create a new master to replace a failed master node. When replacing a failed master node with a new master node, the data volumes from the failed instance are attached to the new master node, preserving data integrity, while the replicas continue serving client queries.
- Select the Replace failed master with existing replica radio button to specify that the cluster manager should promote a replica node to be the new master node for the cluster. When replacing a failed master node with an existing replica, one replica node is marked for promotion to master node, while the other replica nodes are re-configured to replicate data from the new master node. Because replica nodes use asynchronous replication, any data that was committed to the old master node but not yet pushed to a replica before the node failure will be lost.
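The database-health recovery sequence described above (restart, then restore the configuration and restart, then fail over) can be sketched as follows. This is an illustrative outline only, not Ark's actual implementation; the restart, restore, and failover operations are passed in as hypothetical callables.

```python
def recover_database(state, restart_database, restore_config_files,
                     failover_to_new_instance):
    """Sketch of Ark's monitored recovery sequence for a database server.

    state: current server state reported by monitoring (e.g. "running").
    The three callables stand in for Ark's internal operations.
    Returns a label describing the outcome.
    """
    if state == "running":
        return "healthy"                    # nothing to do while running
    if restart_database():                  # step 1: attempt a plain restart
        return "restarted"
    restore_config_files()                  # step 2: restore original settings
    if restart_database():                  # ...and try the restart again
        return "restarted-after-restore"
    failover_to_new_instance()              # step 3: last resort, fail over
    return "failed-over"
```

Which failover path step 3 takes (new master vs. promoted replica) is governed by the Cluster healing mode setting described above.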
Select the Auto Scale tab to continue.
Use the fields on the Auto Scale tab to specify automatic replica and storage scaling preferences:
- When Automatic replica scaling? is set to Yes, the server will automatically add replica nodes when the number of connections reaches the value specified in the # of server connections field.
- When Automatic storage scaling? is set to Yes, the server will automatically increase the available storage when the amount of storage used reaches the value specified in the % of storage used field.
- Use the # of server connections field to specify the connection threshold that will trigger automatic replica scaling.
- Use the % of storage used field to specify the storage threshold that will trigger automatic storage scaling.
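The threshold behavior of the Auto Scale settings can be summarized in a short sketch. The parameter names mirror the console fields, but the function itself is hypothetical, not Ark's actual scaling logic.

```python
def scaling_actions(connections, conn_threshold,
                    pct_storage_used, storage_threshold,
                    replica_scaling=True, storage_scaling=True):
    """Return the scaling actions the configured thresholds would trigger.

    conn_threshold corresponds to the '# of server connections' field;
    storage_threshold corresponds to the '% of storage used' field.
    """
    actions = []
    if replica_scaling and connections >= conn_threshold:
        actions.append("add-replica")       # connection threshold reached
    if storage_scaling and pct_storage_used >= storage_threshold:
        actions.append("grow-storage")      # storage threshold reached
    return actions
```

For example, with a connection threshold of 100 and a storage threshold of 90%, a cluster at 120 connections and 50% storage would trigger only replica scaling.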
Select the Backup tab to continue.
Use fields on the Backup tab to manage backup preferences for the cluster:
- Use the Backup retention selector to specify the number of backups that should be stored for the selected cluster.
- Use the Backup window drop-down listbox to select an optimal time to process cluster backups; specify a time when the number of clients accessing the database is minimal.
- Set Continuous archiving to Yes to enable point-in-time recovery for a cluster. When enabled, a base backup is automatically performed that can be used to restore to a specific point in time; all subsequent automatic scheduled backups also support point-in-time recovery. When point-in-time recovery is enabled, the value specified in the Backup retention field determines the duration of the point-in-time recovery backup window. For example, if you specify a value of 7, the backup window will be 7 calendar days long. When the backup retention threshold is reached, the oldest base backup is removed, along with any WAL files required to perform a recovery with that backup.
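The retention behavior above can be sketched as a simple pruning routine: once the number of base backups exceeds the retention setting, the oldest base backup is dropped together with the WAL segments that only it needs. The data shape here is hypothetical, not Ark's actual backup catalog.

```python
def prune_backups(base_backups, retention):
    """Prune base backups beyond the retention threshold.

    base_backups: list ordered oldest-first; each entry is a dict with a
    'name' and the 'wals' (WAL segments) required to recover from it.
    Returns (kept_backups, removed_wals).
    """
    kept = list(base_backups)
    removed_wals = []
    while len(kept) > retention:
        oldest = kept.pop(0)                 # drop the oldest base backup
        removed_wals.extend(oldest["wals"])  # and the WALs only it required
    return kept, removed_wals
```

With a retention of 7, for instance, an eighth scheduled backup causes the oldest base backup and its associated WAL files to be removed, keeping the recovery window at roughly 7 days.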