

6 Multi-Master Replication Operation : 6.6 Conflict Resolution

Conflict resolution covers the types of conflicts that can occur in a multi-master replication system, the strategies for dealing with them, and the options available for resolving conflicts automatically.
Uniqueness Conflict. A uniqueness conflict occurs when the same value is used for a primary key or unique column in an insert transaction on two or more master nodes. This is also referred to as an insert/insert conflict.
Update Conflict. An update transaction modifies a column value in the same row on two or more master nodes. For example, an employee address column is updated on master node A, and another user updates the address column for the same employee on master node B. The timestamps of when the transactions occur on each node could be different, but both transactions occur in a time interval during which synchronization has not yet occurred. Thus when synchronization does take place, both conflicting transactions are to be applied. This is also referred to as an update/update conflict.
Delete Conflict. The row corresponding to an update transaction on the source node is not found on the target node because the row has already been deleted there. This is referred to as an update/delete conflict. Conversely, if there is a delete transaction on the source node and an update transaction for the same row on the target node, the case is referred to as a delete/update conflict. Finally, the case in which the row corresponding to a delete transaction on the source node is not found on the target node, because the row has already been deleted there, is referred to as a delete/delete conflict.
Uniqueness conflict example:

Node A: INSERT INTO addrbook (name, address) VALUES ('A', 'ADDR A');
Node A: INSERT INTO addrbook (name, address) VALUES ('B', 'ADDR B');
Node B: INSERT INTO addrbook (name, address) VALUES ('C', 'ADDR C');

The insert on Node B generates the row id = 1, name = 'C', address = 'ADDR C'. Since each node populates the id column from its own sequence, this id value collides with the id already generated for the first insert on Node A, producing a uniqueness conflict at synchronization.

Update conflict example:

Node A: UPDATE addrbook SET address = 'ADDR B1' WHERE id = 2;
Node B: UPDATE addrbook SET address = 'ADDR B2' WHERE id = 2;

Both nodes modify the address column of the same row before synchronization takes place.

Delete conflict example:

Node A: UPDATE addrbook SET address = 'ADDR B1' WHERE id = 2;
Node B: DELETE FROM addrbook WHERE id = 2;

The row with id = 2 is already deleted on target Node B, hence the update from Node A fails.
Note: For clarity of the explanations and examples that follow, it is assumed that parallel synchronization is not in use (that is, each replication process occurs serially).
If no conflict is detected, the transactional change is replicated to the target master node, and the transaction status for that target node is marked as completed in the source master node control schema. A transaction status mapping for each target master node is maintained on all master nodes. For example, node A contains two status mappings: one for node B and another for node C.
Earliest Timestamp. When the earliest timestamp option is selected, the relevant rows involved in an update conflict from the source and target master nodes are compared based on the timestamp of when the update occurred on that particular node. The row change that occurred earliest is applied. The row changes with the later timestamps are discarded.
Latest Timestamp. Same approach as earliest timestamp except the row change with the latest timestamp is accepted. The row changes with earlier timestamps are discarded.
Node Priority. The row change of the master node with the highest node priority level is applied, while the changes from master nodes with lower priority levels are discarded. The node priority level is an integer in the range of 1 to 10, inclusive, where 1 is the highest priority level and 10 is the lowest.
Custom. Custom conflict handling applies to update/update conflicts only. You must supply a PL/pgSQL program to resolve any conflicts that occur resulting from an update/update conflict. See Section 6.6.6 for information on using custom conflict handling.
Node specific sequence range. A sequence range is reserved for each master node. For example, master node A would have MINVALUE = 1 and MAXVALUE = 1000, master node B would have MINVALUE = 1001 and MAXVALUE = 2000, and so on for other nodes. This ensures that a unique ID is always generated across all master nodes.
Start value variation. Each node is assigned a different start value. For example, master node A would have a START value of 1, node B would have 2, and node C would have 3. An increment greater than or equal to the number of nodes guarantees unique IDs, as shown in Table 6 (see also the sketch after these options).
Common sequence. All nodes share a common sequence object; however, this has the major disadvantage of slowing down transaction processing due to the network round-trip associated with each ID generation.
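The following is a minimal sketch of the first two approaches for a three-node system; the sequence name addrbook_id_seq is illustrative.

-- Node-specific sequence range (range boundaries are illustrative):
-- On master node A:
CREATE SEQUENCE addrbook_id_seq MINVALUE 1 MAXVALUE 1000;
-- On master node B:
CREATE SEQUENCE addrbook_id_seq MINVALUE 1001 MAXVALUE 2000;

-- Start value variation with an increment of at least the number of nodes:
-- On master node A (generates 1, 4, 7, ...):
CREATE SEQUENCE addrbook_id_seq START WITH 1 INCREMENT BY 3;
-- On master node B (generates 2, 5, 8, ...):
CREATE SEQUENCE addrbook_id_seq START WITH 2 INCREMENT BY 3;
-- On master node C (generates 3, 6, 9, ...):
CREATE SEQUENCE addrbook_id_seq START WITH 3 INCREMENT BY 3;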
Node A: UPDATE addrbook SET address = 'ADDR A' WHERE id = 2;
Node C: UPDATE addrbook SET address = 'ADDR C' WHERE id = 2;
Synchronization pushes the Node A change to Node C. The current address on Node C does not match the old value on Node A ('ADDR C' <> 'ADDR'), hence a conflict is detected. The latest change, made on Node C, is accepted, and the Node A change is discarded.
First, an UPDATE statement of the following form is given in the master definition node (the statements in this example are reconstructed for illustration; the deptno value 40 is assumed):
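UPDATE dept SET dname = 'OPERATIONS' WHERE deptno = 40;  -- dname is already 'OPERATIONS'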
Note that the original value, OPERATIONS, of column dname is the same as the value to which it is changed in the UPDATE statement.
The following UPDATE statement is then given in a second master node:
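UPDATE dept SET dname = 'LOGISTICS', loc = 'CAMBRIDGE' WHERE deptno = 40;  -- deptno 40 is assumed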
However, the value of column dname in the second master node remains set to LOGISTICS. It is not reverted back to the value OPERATIONS from the master definition node, as would normally be expected on a conflicting column. Note that, as expected, the value in column loc is reverted from CAMBRIDGE back to the master definition node value of BEDFORD.
A column is considered a conflicting column if it is updated on more than one master node in the same synchronization. Even if the new, updated value for the column is identical in the conflicting update transactions, the fact that the same column was updated on more than one master node makes it a conflicting column.
Columns are to be set to the source node. When the resolution_code parameter of the function is set to a value of 1, the resultant setting of all columns in both conflicting nodes is obtained from the source node of the replication.
Columns are to be set to the target node. When the resolution_code parameter of the function is set to a value of 2, the resultant setting of all columns in both conflicting nodes is obtained from the target node of the replication.
The function logic sets the column. When the resolution_code parameter of the function is set to a value of 3, the resultant setting of the first conflicting column is obtained from the value that the function logic assigns to the source parameter. The resultant setting of all other column values is obtained from the source node of the replication.
INOUT parameter of the record type of the shadow table in schema _edb_replicator_pub of the master definition node on which conflicts are to be resolved. The input values are the column values from the source node. When resolution_code is set to a value of 3, set the columns in this parameter to the values that are to be used for the final outcome.
IN parameter of the record type of the shadow table in schema _edb_replicator_pub of the master definition node on which conflicts are to be resolved. The input values are the column values from the target node.
IN parameter of type VARCHAR(255) containing the name of the column on which the update/update conflict has occurred. If more than one column is involved in the conflict, the name of the first conflicting column is returned.
OUT parameter of type VARCHAR(255) containing any informative message to be written to the publication server log file. The publication server configuration option logging.level must be set to at least the INFO level in order for the messages to appear in the publication server log file. See Section 3.3 for the location of the publication server log file.
OUT parameter of type INTEGER that you set to one of the following values to determine how to resolve the conflict: 1 to use the column values of the source node of the replication for the final outcome, 2 to use the column values of the target node of the replication for the final outcome, or 3 to use the value set for the source INOUT parameter of the first conflicting column as the final outcome for that column.
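The following is a minimal sketch of such a function for the dept table example used later in this section, based on the parameter descriptions above. The parameter names other than source and resolution_code, the schema qualification, and the record type _edb_replicator_pub.rrst_edb_dept (which follows the rrst_schema_table shadow table naming convention described later in this section) are assumptions.

CREATE OR REPLACE FUNCTION edb.custom_conflict_dept (
    INOUT source             _edb_replicator_pub.rrst_edb_dept,
    IN    target             _edb_replicator_pub.rrst_edb_dept,
    IN    conflict_column    VARCHAR(255),
    OUT   resolution_message VARCHAR(255),
    OUT   resolution_code    INTEGER
)
AS
$$
BEGIN
    -- Resolve every update/update conflict in favor of the target node.
    resolution_code := 2;
    resolution_message := 'Target node wins on column ' || conflict_column;
END;
$$ LANGUAGE plpgsql;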
Step 1: The publication under the master definition node must exist before adding the function to the master definition node. See Section 6.2.3 for information on creating the publication.
Step 2: Add the function to the master definition node. The following example shows the addition of the function using PSQL.
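A hedged sketch of this step, assuming the CREATE FUNCTION statement has been saved to a file named custom_conflict_dept.sql and that the master definition node database is named edb with database user enterprisedb:

psql -U enterprisedb -d edb -f custom_conflict_dept.sql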
Step 3: Open the Conflict Resolution Options tab.
Step 4: For the table on which you want to use custom conflict handling, select Custom from the appropriate drop-down list. In the Custom Handler text box, enter the schema and function name used in the CREATE FUNCTION statement.
Step 5: Click the Update button, and then click OK in response to the Conflict Resolution Options Updated Successfully confirmation message.
Note: If the multi-master replication system uses custom conflict handling, and you subsequently switch the role of the master definition node to another master node, you must re-add the functions to the new master definition node. That is, you must repeat Step 2 on the new master definition node.
Note: If you wish to delete the multi-master replication system, before removing the publication you must drop all custom conflict handling functions from the master definition node.
The following example shows the effect of custom conflict handling using the custom conflict handling function named custom_conflict_dept shown in Section 6.6.6.1. This function sets the target node as the winner of update/update conflicts on the dept table.
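The conflicting transactions in this example take the following general form (a hedged illustration; the loc values are assumptions):

Source node: UPDATE dept SET loc = 'PORTLAND' WHERE deptno = 50;
Target node: UPDATE dept SET loc = 'BOSTON' WHERE deptno = 50;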
In the source master node the loc column of department 50 loses the value set in its UPDATE statement. The column is reset to the value from the target master node.
In the target master node the loc column of department 50 retains the value set from its UPDATE statement.
The target node wins the conflict as determined by the setting of the resolution_code parameter to a value of 2 in the custom conflict handling function.
The following example shows the effect of custom conflict handling using the custom conflict handling function named custom_conflict_emp shown in Section 6.6.6.1. This function sets values coded in the function as the winner of update/update conflicts on the emp table.
The following is the row from the emp table prior to the update:
After the synchronization replication the master node, edb, contains the following values for the conflicting row:
After the synchronization replication the master node, mmrnode, contains the following values for the conflicting row:
The following example uses master definition node, edb, and a second master node, mmrnode. Initially, the inventory table has the same contents on both master nodes.
For an update transaction, the shadow table contains the column values before the update was made on the publication table (columns with names rrep_old_column_name) and the values after the update was applied (columns named identically to the publication table column names).
The custom conflict handling function uses both the current and old values of the quantity columns from the source and target shadow tables as shown by the following.
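The following is a minimal sketch of such a function. The reconciliation arithmetic is an assumption based on this example: the final quantity is the old value minus the sum of the decrements made on the source and target nodes. The function name and the record type _edb_replicator_pub.rrst_edb_inventory are illustrative.

CREATE OR REPLACE FUNCTION edb.custom_conflict_inventory (
    INOUT source             _edb_replicator_pub.rrst_edb_inventory,
    IN    target             _edb_replicator_pub.rrst_edb_inventory,
    IN    conflict_column    VARCHAR(255),
    OUT   resolution_message VARCHAR(255),
    OUT   resolution_code    INTEGER
)
AS
$$
BEGIN
    -- Combine the decrements from both nodes:
    -- 50 - (50 - 48) - (50 - 49) = 47 in this example.
    source.quantity := source.rrep_old_quantity
        - (source.rrep_old_quantity - source.quantity)
        - (target.rrep_old_quantity - target.quantity);
    resolution_code := 3;  -- use the values set in the source parameter
    resolution_message := 'Quantity reconciled from both nodes';
END;
$$ LANGUAGE plpgsql;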
Assume two items with item_id of 1 are purchased on the master definition node:
Also assume one item with item_id of 1 is purchased from the second master node:
After the synchronization replication and invocation of the custom conflict handling function, the quantity column for item_id 1 is correctly set to 47 in both master nodes:
As discussed in Section 6.6.4 there is no built-in, automatic conflict resolution strategy for the uniqueness (insert/insert) conflict. If a uniqueness conflict occurs, then you must modify rows in the publication tables containing the conflict as well as modify rows in the metadata tables in the master nodes and in the xDB Control database to resolve the conflict.
Finding Conflicts. Locating unresolved conflicts
Conflict Resolution Preparation. Helpful setup steps to aid in the manual conflict resolution process
Overview of Correction Strategies. Overview of the methods you can use to perform the corrections
Manual Publication Table Correction. Manual correction of the publication tables
Correction Using New Transactions. Using new transactions to bring all master nodes to a consistent state
Correction Using Shadow Table Transactions. Using existing shadow table transactions to bring all master nodes to a consistent state
Conflicts can be found using the Conflict History tab as described in Section 6.7. Click the Refresh button in that tab to reveal all of the latest conflicts.
Step 1: While connected as a database superuser to a master node, create a database link to enable remote access to the xDB Control database. The Advanced Server database link feature is used for this purpose.
In the following example xdb is the name given to the database link. The CONNECT TO clause supplies the database user name and password that is used to connect to the xDB Control database. The USING libpq clause supplies the connection information to the host, port, and database name of the xDB Control database.
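A hedged sketch of such a statement; the user name, password, and connection values are illustrative:

CREATE DATABASE LINK xdb
    CONNECT TO rruser IDENTIFIED BY 'password'
    USING libpq 'host=localhost port=5444 dbname=xdbcontroldb';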
Step 2: The following query can then be run from the master node to display information regarding pending (unresolved) conflicts:
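A hedged sketch of such a query, selecting the pending entries from the erep_conflicts metadata table (described later in this section) through the database link:

SELECT * FROM _edb_replicator_pub.erep_conflicts@xdb
    WHERE resolution_status = 'P';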
To prevent the triggers on the publication tables from firing during the session in which you modify the publication table rows, the database server configuration parameter session_replication_role must be set to replica. (The default setting of session_replication_role is origin, in which case the triggers fire.)
The suggested method to ensure the replica setting is in effect is to create a database user with a default session setting of replica for this parameter. Whenever you connect to a database with this database user, the replica setting will be in effect during this session.
In the following example database superuser mmrmaint is created and altered for this purpose:
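A hedged sketch; the password is illustrative:

CREATE ROLE mmrmaint WITH LOGIN SUPERUSER PASSWORD 'password';
ALTER ROLE mmrmaint SET session_replication_role TO 'replica';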
Note: If you have implemented a high availability solution for the xDB Control database as described in Section 6.11, be certain you do not use this database user with session_replication_role set to replica when making modifications to the metadata tables in the xDB Control database. Parameter session_replication_role must be set to the default value of origin when modifying tables in the xDB Control database. Changes you make to the tables in the xDB Control database must be replicated to the backup xDB Control database in the standby server by firing the Slony triggers.
The Conflict History tab and the SQL query described in Section 6.6.7.1 can help determine the source of an initial conflict.
Therefore, when you have discovered that a conflict has occurred, it is strongly recommended that you stop the publication server. Use the stop option of the Linux scripts or Windows services described in Step 1 of Section 5.2.1.
You must also determine which transactions on the publication tables have occurred and been recorded in the shadow tables following the initial conflict, and whether these transactions have been applied completely and correctly to the publication tables across all master nodes. These transactions may not be marked as pending. Instead, their rrep_tx_conflict_status column could be set to null, meaning that no specific conflict was detected during replication or that the transaction has not yet been replicated. These transactions can be identified because they have a later rrep_tx_timestamp value than the transactions causing the initial conflict.
Step 1: Make the necessary manual corrections to the rows in the publication tables across all master nodes to get them into an initial, consistent state so each publication table has the same set of identical rows across master nodes. This may be to a state before the conflicting transactions occurred, depending upon what you determine to be the easiest course of action for fully resolving the conflict.
Step 2: Apply or reapply transactions (either from your application or from the shadow tables) so that all publication tables across all master nodes are updated consistently according to the desired, expected result of what has been recorded in the shadow tables.
Step 3: In the shadow tables, update certain indicators for conflicting entries to show that these were resolved in Step 2.
Step 4: In the xDB Control database, update certain indicators for the conflicting entries to show that these conflicts have been resolved. This update changes the Resolution Status of these entries to Resolved in the Conflict History tab. These entries will no longer appear in the SQL query described in Section 6.6.7.1.
Step 5: Resume operation of your replication system. Start the publication server and recreate the replication schedule if you were using one.
Manual Publication Table Correction. Use a utility such as PSQL or pgAdmin (Postgres Enterprise Manager Client in Advanced Server) to manually correct the rows in the publication tables across all master nodes without replicating these changes. Use the database user with session_replication_role set to replica for this purpose.
Correction Using New Transactions. Rerun your application on one master node to create new transactions that you will allow to replicate to all other master nodes. Use this method after you have ensured that all publication tables are in a consistent state across all master nodes.
Correction Using Shadow Table Transactions. Force the synchronization of transactions already recorded in the shadow tables. Use this method if there are many shadow table transactions that need to be applied, and it is simpler to force the synchronization of these transactions rather than reissuing the transactions from your application.
A 3-node multi-master replication system has been established. The master node names are mmrnode_a (the master definition node), mmrnode_b, and mmrnode_c.
The publication is named emp_pub and uses the dept and emp tables that have been used as examples throughout this document.
The conflict used to illustrate the first two conflict resolution methods is a uniqueness conflict on the dept table, occurring on primary key column deptno with value 50, resulting from the INSERT statements shown by the following:
On mmrnode_a, the following statement is run:
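A hedged reconstruction; the dname and loc values are illustrative:

INSERT INTO dept VALUES (50, 'FINANCE', 'CHICAGO');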
On mmrnode_b, the following statement is run:
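A hedged reconstruction with different illustrative non-key values:

INSERT INTO dept VALUES (50, 'MARKETING', 'LOS ANGELES');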
Master nodes mmrnode_a and mmrnode_b each contain a row with primary key value 50, but the other column values in the row are different.
Master node mmrnode_c does not have a row with primary key value 50.
Assuming that the correct state of the dept table should be the one in mmrnode_b, the following options are available to correct the state of all master nodes:
Manually correct the dept table in mmrnode_a and mmrnode_c. That is, update the row in mmrnode_a so it has the correct values, and insert the missing row in mmrnode_c. The dept table on all nodes is now consistent and up-to-date.
Manually delete the row with primary key value 50 from the table on both mmrnode_a and mmrnode_b. This brings the dept table on all master nodes back to a prior, consistent state. Then, with the multi-master replication system running, perform the insert transaction again using the correct column values on any one of the master nodes.
Manually delete the incorrect row with primary key value 50 from the table on mmrnode_a. Leave the correct row in the table in mmrnode_b. This simulates the state where the correct transaction was run on mmrnode_b, is recorded in the shadow table, but has not yet been replicated, and the incorrect transaction was never run on mmrnode_a. Update the shadow table entry in mmrnode_a to indicate that it is discarded and to ensure it is not included in any future synchronizations. Update the metadata for the shadow table entry in mmrnode_b to force its inclusion in the next synchronization. Perform a synchronization replication so the accepted shadow table entry in mmrnode_b is replicated to mmrnode_a and mmrnode_c.
Step 1: Manually correct the rows in the publication tables with session_replication_role set to replica.
On mmrnode_a, correct the erroneous row:
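A hedged sketch using the illustrative values of this example, matching the row on mmrnode_b:

UPDATE dept SET dname = 'MARKETING', loc = 'LOS ANGELES' WHERE deptno = 50;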
On mmrnode_c, insert the missing row:
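Again using the illustrative values:

INSERT INTO dept VALUES (50, 'MARKETING', 'LOS ANGELES');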
The dept table on mmrnode_a and on mmrnode_c now matches the content of the table on mmrnode_b.
Step 2: Update the shadow table entries for the conflicting transactions in the master nodes to indicate that the conflict has been resolved.
Shadow tables are located in each master node in schema _edb_replicator_pub. Shadow tables follow the naming convention rrst_schema_table where schema is the name of the schema containing the publication table and table is the name of the publication table. For example, the shadow table for the dept table in schema edb is _edb_replicator_pub.rrst_edb_dept.
A row in a shadow table corresponds to an INSERT, UPDATE, or DELETE statement that is applied to the corresponding publication tables in the other master nodes. A shadow table row does not necessarily correspond to the SQL statement issued by the user application. For example, a SQL statement issued by a user application that includes a WHERE clause using a range such as greater than or less than, results in multiple, individual entries in the shadow table for each individual row in the result set of the application’s SQL statement.
The primary key of a shadow table is a program generated, positive integer in column rrep_sync_id. The rrep_sync_id values are unique amongst all shadow tables within a given master node. Therefore, the rrep_sync_id values for conflicting transactions may or may not have the same value across master nodes as this depends upon how many prior transactions were recorded in the shadow tables of each master node.
A shadow table entry for a transaction involved in a conflict that has not yet been resolved contains a value of P (pending) in column rrep_tx_conflict_status. If a transaction is not involved in a conflict, this column is set to null. (The vast majority of shadow table entries should have null in this column.) If a transaction was involved in a conflict that was resolved automatically by the publication server, and this transaction was accepted as being correct, this column contains C (complete/accepted). If a transaction was involved in a conflict that was resolved automatically, and this transaction was deemed incorrect, this column contains D (discarded).
The following query is performed on the shadow table for the dept table in mmrnode_a on rrep_sync_id value 1 obtained from field src_rrep_sync_id in the preceding output.
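A hedged sketch of such a query (the shadow table name rrst_edb_dept assumes the dept table resides in schema edb):

SELECT rrep_sync_id, rrep_tx_conflict_status
    FROM _edb_replicator_pub.rrst_edb_dept
    WHERE rrep_sync_id = 1;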
A similar query can locate the pending shadow table entry in mmrnode_b by querying on the key value obtained from field target_rrep_sync_id.
Note: To be certain no pending transactions are overlooked, you should examine the shadow tables in all master nodes that may have been involved in the conflict and search for entries where rrep_tx_conflict_status is set to P.
Viewed in the Postgres Enterprise Manager Client, this entry shows the rrep_tx_conflict_status column marked P (pending).
Modify column rrep_tx_conflict_status by changing the value to D (discarded) to show that the pending conflict has been resolved. A value of D also ensures that the shadow table entry will not be replicated during any future synchronization replications.
Be sure to qualify the row with the correct rrep_sync_id value if you perform the update using a SQL statement such as in the following:
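A hedged example, run on mmrnode_a with the illustrative shadow table name and the rrep_sync_id value from this example:

UPDATE _edb_replicator_pub.rrst_edb_dept
    SET rrep_tx_conflict_status = 'D'
    WHERE rrep_sync_id = 1;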
There is no shadow table entry in mmrnode_c, since an insert transaction was not performed in that master node by the application.
Step 3: In the xDB Control database, modify the entries in the erep_conflicts table to indicate the conflict has been resolved. Table erep_conflicts is located in schema _edb_replicator_pub.
Note: The entries in table erep_conflicts only affect the data that appears in the Conflict History tab and the SQL query described in Section 6.6.7.1. Changing entries in erep_conflicts has no effect on future replication operations, but provides a way to keep a record of how past conflicts were resolved.
A row in the erep_conflicts table appears as an entry in the Conflict History tab.
The primary key of the erep_conflicts table is comprised of columns src_db_id, target_db_id, src_rrep_sync_id, and target_rrep_sync_id. Column src_db_id contains a unique identifier for the master node in which a transaction occurred that results in a conflict when replicated to the master node identified by target_db_id. src_rrep_sync_id is the shadow table identifier of the transaction on the source master node involved in the conflict while target_rrep_sync_id is the shadow table identifier of the transaction on the target master node that is involved in the conflict.
Table erep_pub_database in the xDB Control database associates the database identifiers src_db_id and target_db_id with the master node attributes such as the database name, IP address, and port.
Column table_id is the identifier of the publication table on which the conflict occurred. Association of the table_id value with the publication table attributes such as its name, schema, and shadow table is found in each master node in _edb_replicator_pub.rrep_tables.
Column resolution_status indicates the status of the conflict. Possible values are P (pending) or C (completed – the conflict has been resolved). This status appears in the Resolution Status column of the Conflict History tab.
Column winning_db_id can be used to record the database identifier of the master node that contains the “winning” (accepted) transaction. This information appears in the Winning DB column of the Conflict History tab.
Column winning_rrep_sync_id can be used to record the shadow table identifier of the winning transaction.
This entry can be located in erep_conflicts with the following query for this example:
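A hedged sketch, run in the xDB Control database; the key value 1 is from the preceding example:

SELECT * FROM _edb_replicator_pub.erep_conflicts
    WHERE src_rrep_sync_id = 1 AND resolution_status = 'P';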
Change the value in column resolution_status from P (pending) to C (completed) to indicate this conflict has been resolved. The value in winning_db_id is changed to 4 to indicate master node mmrnode_b contains the winning transaction. The value in winning_rrep_sync_id is changed to the value of rrep_sync_id for the shadow table entry of the transaction in mmrnode_b since this is the one deemed to be correct.
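A hedged sketch of the update; the winning_rrep_sync_id value and the WHERE qualification are illustrative:

UPDATE _edb_replicator_pub.erep_conflicts
    SET resolution_status = 'C',
        winning_db_id = 4,
        winning_rrep_sync_id = 1
    WHERE src_rrep_sync_id = 1 AND resolution_status = 'P';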
The erep_conflicts entry now contains these updated values.
When viewed in the Conflict History tab, the entry now shows Resolved instead of Pending in the Resolution Status column, and the Winning DB column shows the address of master node mmrnode_b.
Referring back to the uniqueness conflict on the dept table, instead of correcting the erroneous row and inserting the row into the master node where it is missing as described in Section 6.6.7.4, you can delete the conflicting rows from all master nodes, then insert the correct row in one master node and let the multi-master replication system synchronize the correct row to all master nodes.
Step 1: Manually delete the inserted row from the publication tables in all master nodes with session_replication_role set to replica.
On mmrnode_a, delete the erroneous row:
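For example:

DELETE FROM dept WHERE deptno = 50;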
On mmrnode_b, delete the row even though the transaction created the correct result:
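Likewise:

DELETE FROM dept WHERE deptno = 50;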
On mmrnode_c, no changes are required, as the conflicting transaction did not insert a new row into the table on this node.
Step 2: Rerun the transaction on one master node with the multi-master replication system running and with session_replication_role set to the default (origin).
For this example, the correct INSERT statement is executed on mmrnode_a:
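Using the illustrative values assumed for the correct row on mmrnode_b:

INSERT INTO dept VALUES (50, 'MARKETING', 'LOS ANGELES');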
The dept table on mmrnode_a now contains the correct row.
Step 3: Perform synchronization replication.
After the synchronization, the dept table contains the correct, replicated row on mmrnode_a, mmrnode_b, and mmrnode_c.
Step 4: Update the shadow table entries for the conflicting transactions in the master nodes to indicate that the conflict has been resolved as in Step 2 of Section 6.6.7.4.
Change the rrep_tx_conflict_status column from P (pending) to D (discarded) on all master nodes.
Note the second entry, for the accepted transaction you ran in Step 2, where rrep_tx_conflict_status is set to null, indicating there was no conflict.
There is no shadow table entry in mmrnode_c, since an insert transaction was not performed in that master node by the application.
Step 5: In the xDB Control database, modify the entry in the erep_conflicts table to indicate the conflict has been resolved as in Step 3 of Section 6.6.7.4.
The following example illustrates correction using shadow table transactions. In mmrnode_b, the following row is inserted:
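A hedged reconstruction; the non-key column values are illustrative:

INSERT INTO emp (empno, ename, deptno) VALUES (9001, 'SMITH', 20);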
In mmrnode_c, the following row is inserted with the same primary key value 9001 in the empno column:
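Again a hedged reconstruction with illustrative non-key values:

INSERT INTO emp (empno, ename, deptno) VALUES (9001, 'JONES', 20);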
In mmrnode_c, this is followed by a series of updates to the newly inserted row:
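A hedged reconstruction of the three update transactions referenced later in this example; the columns and values are illustrative:

UPDATE emp SET job = 'ANALYST' WHERE empno = 9001;
UPDATE emp SET sal = 8000 WHERE empno = 9001;
UPDATE emp SET comm = 3000 WHERE empno = 9001;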
On mmrnode_a, the conflicting row has not been replicated.
On mmrnode_b, the conflicting row inserted on this node remains, but it is updated with the transactions replicated from mmrnode_c.
On mmrnode_c, the conflicting row inserted on this node remains, along with the updates performed on this node.
The following are the steps to propagate the correct row, currently on mmrnode_c, to the other master nodes by synchronizing the shadow table entries that resulted from the original insert and updates to this row on mmrnode_c.
Step 1: Manually delete the inserted row from the publication tables on all master nodes except for mmrnode_c, which has the correct row. Be sure session_replication_role is set to replica.
On mmrnode_a, this row does not exist, so there is nothing to delete.
On mmrnode_b, delete the erroneous row:
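For example:

DELETE FROM emp WHERE empno = 9001;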
On mmrnode_c, the correct, accepted row is left intact.
Step 2: On the master nodes containing the conflicting row that is to be discarded, mark the shadow table entry for that row as discarded. This indicates the conflict on this row has been resolved and ensures this shadow table entry is not replicated in the future.
Change the rrep_tx_conflict_status column from P (pending) to D (discarded) on the losing node, mmrnode_b as shown by the following:
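A hedged sketch; the rrep_sync_id value of the pending entry on mmrnode_b is illustrative:

UPDATE _edb_replicator_pub.rrst_edb_emp
    SET rrep_tx_conflict_status = 'D'
    WHERE rrep_sync_id = 1;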
Step 3: On winning node mmrnode_c, inspect the shadow table for the emp publication table.
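A hedged sketch of such an inspection query:

SELECT rrep_sync_id, rrep_tx_conflict_status
    FROM _edb_replicator_pub.rrst_edb_emp
    ORDER BY rrep_sync_id;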
Make note of the rrep_sync_id values for these four entries, which are 1, 2, 3, and 4 in this example.
Make sure the rrep_tx_conflict_status column is null for these four entries. In this case, for the insert transaction, you will need to change the P (pending) value to null.
The change to the rrep_tx_conflict_status column in the shadow table on mmrnode_c can be made as shown by the following:
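A hedged sketch, assuming the insert transaction is the entry with rrep_sync_id 1:

UPDATE _edb_replicator_pub.rrst_edb_emp
    SET rrep_tx_conflict_status = NULL
    WHERE rrep_sync_id = 1;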
Step 4: In order to replicate these four shadow table entries during the next synchronization, one or more entries must be added to the metadata table _edb_replicator_pub.rrep_mmr_txset on mmrnode_c. These entries indicate pending status for synchronization of the four shadow table entries (identified by the rrep_sync_id values of 1, 2, 3, and 4 noted in Step 3) to the target master nodes mmrnode_a and mmrnode_b.
First, you must identify the pub_id and target db_id values that are to be associated with the pending transactions. To do so, invoke the following query substituting the rrep_sync_id values for sync_id in the query:
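A hedged sketch of such a query:

-- Run once for each rrep_sync_id value (1, 2, 3, and 4),
-- substituting that value for sync_id:
SELECT DISTINCT pub_id, db_id
    FROM _edb_replicator_pub.rrep_mmr_txset
    WHERE sync_id BETWEEN start_rrep_sync_id AND end_rrep_sync_id;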
The results indicate that the shadow table transactions identified by the rrep_sync_id values of 1, 2, 3, and 4 all belong to the publication identified by pub_id of 2. The target master nodes are identified by db_id of 1 (mmrnode_a) and db_id of 3 (mmrnode_b).
Thus, at least two entries must be inserted into the metadata table _edb_replicator_pub.rrep_mmr_txset on mmrnode_c. At least one entry is required for the target db_id of 1 and at least one entry for the target db_id of 3.
Since each entry in _edb_replicator_pub.rrep_mmr_txset consists of a range of rrep_sync_id values (identified by columns start_rrep_sync_id and end_rrep_sync_id) and the desired shadow table rrep_sync_id values happen to be contiguous (1 thru 4), a single entry can encompass the four rrep_sync_id values for a single target database.
Thus, in this example, a total of two entries can be added to _edb_replicator_pub.rrep_mmr_txset – one for each target database.
Note: If there were multiple, non-contiguous rrep_sync_id values required for synchronization (for example, 1, 2, 5, and 6), then multiple entries would be required for each target database. The entries would specify rrep_sync_id ranges to collectively cover all of the non-contiguous values, but omitting rrep_sync_id values that are not to be included in the synchronization (for example, one entry for 1 through 2 and a second entry for 5 through 6).
Step 5: Insert the entries into the _edb_replicator_pub.rrep_mmr_txset metadata table as identified in the preceding step.
The two INSERT statements invoked on mmrnode_c are the following:
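A hedged sketch; only the columns documented in this section are shown, and the status column name is an assumption:

INSERT INTO _edb_replicator_pub.rrep_mmr_txset
    (pub_id, db_id, status, start_rrep_sync_id, end_rrep_sync_id)
    VALUES (2, 1, 'P', 1, 4);

INSERT INTO _edb_replicator_pub.rrep_mmr_txset
    (pub_id, db_id, status, start_rrep_sync_id, end_rrep_sync_id)
    VALUES (2, 3, 'P', 1, 4);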
A query of the _edb_replicator_pub.rrep_mmr_txset metadata table displays the following:
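A hedged sketch of the query (the status column name is an assumption):

SELECT pub_id, db_id, status, start_rrep_sync_id, end_rrep_sync_id
    FROM _edb_replicator_pub.rrep_mmr_txset
    ORDER BY start_rrep_sync_id;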
There are now two new entries with pending status (P), one for target db_id 1, the other for target db_id 3. Both entries cover the rrep_sync_id range of 1 through 4.
The two entries with completed status (C) are from the synchronization attempt that initially produced the conflict.
Step 6: Perform synchronization replication.
The insert and three update transactions recorded in the rrst_edb_emp shadow table on mmrnode_c are replicated to the other master nodes.
On mmrnode_a and mmrnode_b, the emp table now contains the row as it exists on mmrnode_c.
Step 7: In the xDB Control database, modify the entry in the erep_conflicts table to indicate the conflict has been resolved as in Step 3 of Section 6.6.7.4.

