10.4 Miscellaneous xDB Replication Server Processing Topics
The publication server and the subscription server support various configuration options for purposes such as the following:
• Optimize synchronization performance based on the types of transactions affecting the publication. (See Section 5.8.2 for details on these particular options.)
• Utilize alternate loading methods in snapshot replications. (See Section 5.8.1 for details on these particular options.)
• Special configuration options for multi-master replication. (See Section 6.12 for details on these particular options.)
• Special configuration options for the log-based method of synchronization replication. (See Section 10.4.1.15 for details on these particular options.)

Most options apply to the publication server only, although a few are used by the subscription server.

The configuration options for the publication server are set and passed in a text file called the publication server configuration file, with the file name xdb_pubserver.conf. The configuration options for the subscription server are set and passed in a text file called the subscription server configuration file, with the file name xdb_subserver.conf.

Modified publication server configuration options take effect after the publication server is restarted. Similarly, modified subscription server configuration options take effect after the subscription server is restarted.

The configuration options that have been explicitly put into effect by overriding their defaults in the configuration files are logged in the publication server log file and the subscription server log file. Section 3.5 contains the directory locations of these log files.

The following is a description of how to set the configuration options.
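Both configuration files consist of name=value entries, with the comment symbol (#) marking options left at their defaults. The excerpt below is an illustrative sketch of xdb_pubserver.conf, not the exact installed layout; the option names shown are discussed later in this section, and nullReplacementChar is assumed here as the companion option to replaceNullChar:

```
# Null character replacement during snapshot replication (uncommented,
# so these settings are in effect):
replaceNullChar=true
nullReplacementChar=?

# Logging options still commented out, so their defaults apply:
#logging.level=WARNING
#logging.file.size=50
#logging.file.count=20
```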
This is followed by sections describing the purpose of each option.

Step 1: The publication and subscription server configuration files are created during xDB Replication Server installation and already contain all of the configuration options as comments with their default settings.

To change the setting of a configuration option, edit the publication server or subscription server configuration file by removing the comment symbol (#) from the option and substituting the desired value in place of the currently coded value. For example, replacement of null characters in the publication data is activated, with the replacement character set to the question mark character, by uncommenting the relevant options in the publication server configuration file and setting their values accordingly.

Step 2: Restart the publication or subscription server.

Note: The options described in this section apply to the publication server and the subscription server unless otherwise specified.

The following options control various aspects of message logging in the publication server log file, the subscription server log file, and the Migration Toolkit log file. See Section 10.3.2.4 for additional information on the publication and subscription server log files.

Set the logging.level option to control the severity of messages written to the publication server log file and the subscription server log file. The default value is WARNING.

Set the logging.file.size option to control the maximum file size (in megabytes) of the publication server log file and the subscription server log file.

Note: If logging.file.count is set to 0, the setting of logging.file.size is ignored.
The log file is allowed to grow without limit. The default value is 50 megabytes.

Set the logging.file.count option to control the number of files in the log file rotation history of the publication server log file and the subscription server log file. The default value for n is 20. A non-zero value of n specifies the maximum number of log files that are to be created.

Note: In the remaining discussion, the publication server log file named pubserver.log is used as an example. For the subscription server, the log file is named subserver.log.
• Specify a value of 0 to disable log file rotation and create a single, unlimited size log file named pubserver.log. This log file will grow to an unlimited size ignoring any setting of logging.file.size.
• Specify a value of 1 to disable log file rotation and create a single, limited size log file named pubserver.log. The log file is deleted and a new one is created each time the log file reaches the size limit set by logging.file.size.
• Specify a value of 2 or greater to enable log file rotation. All log file names have an integer suffix (for example, pubserver.log.0, pubserver.log.1, pubserver.log.2).

When log file rotation is enabled, the log file with the greatest integer suffix contains the oldest messages. When there are enough messages to generate every file in the history rotation, the oldest messages are in pubserver.log.n-1, where n is the setting of logging.file.count. Log file pubserver.log.0 is the current, active log file containing the most recent messages.

When log file rotation is enabled and the current, active log file (pubserver.log.0) reaches the size specified by logging.file.size, the following events occur:
• The log file containing the oldest messages (pubserver.log.n-1) is deleted.
• Each remaining log file is renamed with the next greater integer suffix (pubserver.log.m is renamed to pubserver.log.m+1 with m varying from 0 to n-2).
• A new, active log file is created (pubserver.log.0).

Set the mtk.logging.file.size option to control the maximum file size (in megabytes) of the Migration Toolkit log file. The default value is 50 megabytes.

Set the mtk.logging.file.count option to control the number of files in the log file rotation history of the Migration Toolkit log file. The default value for n is 20. A non-zero value of n specifies the maximum number of history log files that are to be created.
• Specify a value of 0 to disable log file rotation and create a single, limited size log file named mtk.log. The log file is deleted and a new one is created each time the log file reaches the size limit set by mtk.logging.file.size.
• Specify a value of 1 or greater to enable log file rotation. All log file names have an integer suffix (for example, mtk.log.1, mtk.log.2).

When log file rotation is enabled, the log file with the greatest integer suffix contains the oldest messages. When there are enough messages to generate every file in the history rotation, the oldest messages are in mtk.log.n, where n is the setting of mtk.logging.file.count. Log file mtk.log is the current, active log file containing the most recent messages.

When the current, active log file (mtk.log) reaches the size specified by mtk.logging.file.size, the following events occur:
• The log file containing the oldest messages (mtk.log.n) is deleted.
• Each remaining log file with a suffix is renamed with the next greater integer suffix (mtk.log.m is renamed to mtk.log.m+1 with m varying from 1 to n-1).
• Log file mtk.log is renamed to mtk.log.1.
• A new, active log file is created (mtk.log).

10.4.1.2 Replacing Null Characters

Note: The options described in this section apply to the publication server only.

A character consisting of binary zeros (also called the null character string), represented as 000 in octal or 0x00 in hexadecimal, can result in errors when attempting to load such data into a Postgres character column. You may get an error in the Migration Toolkit log file when performing a snapshot replication of an Oracle table that contains the null character string.

If such an error occurs, you can set an option that converts each null character encountered in character columns of the source tables to a space character, or to any other character of your choice, before loading the target tables.

Note: This option does not alter null characters encountered in columns with binary data types such as the Oracle RAW and BLOB data types.

Setting the replaceNullChar option to true results in the substitution of a space character for each null character encountered in the source character data. If you want to use a character other than a space character to replace each null character, use the nullReplacementChar option in addition to replaceNullChar=true. Set nullReplacementChar=char, where char is the single character you want to substitute for the null character. For example, the combination replaceNullChar=true and nullReplacementChar=# replaces each null character with the hash symbol (#).

10.4.1.3 Schema Migration Options

Note: The option described in this section applies to the subscription server only.

The option in this section controls how certain aspects of the publication database schema are migrated to the subscription database. By default, column CHECK constraints from publication tables are migrated to the subscription table definitions when the subscription is created.
Set this option to true if you do not want CHECK constraints as part of the subscription table definitions. Setting this option to true is useful if a CHECK constraint is based on a built-in function supported by the publication database server, and this built-in function does not exist in the subscription database server.

Note: The option described in this section must be set to the same value for both the publication server and the subscription server.

Note: This feature applies only for subscriptions in an Advanced Server database. It does not apply to subscriptions in a PostgreSQL database.

In Oracle, table partitioning provides the capability to store table rows in different physical locations (tablespaces) according to a rule defined on the table.
• Range Partitioning. Ranges of values defined on a column determine the tablespace in which a row is stored.
• List Partitioning. A list of values defined on a column determines the tablespace in which a row is stored.
• Hash Partitioning. An algorithm on a column generates a hash key, which determines the tablespace in which a row is stored.

Note: If you are using Advanced Server, table partitioning using Oracle compatible table partitioning syntax is an available feature. See the section on table partitioning in the Database Compatibility for Oracle Developer's Guide for information. See Section 7.10 for information on including Postgres partitioned tables in a replication system. The importPartitionAsTable option described in this section applies only to table partitioning in an Oracle database.

The importPartitionAsTable option controls what happens when an Oracle partitioned table is part of the publication. Depending upon the Oracle partitioned table type and the setting of the importPartitionAsTable option, one of the following may occur:
• When importPartitionAsTable=false (the default setting), supported Oracle partitioned table types are replicated as sets of Postgres inherited tables, simulating the partitions through inheritance.
Note: If there are subscription tables created as sets of Advanced Server inherited tables, then you must also set the enableConstBeforeDataLoad option in the publication server configuration file to true. See Section 10.4.1.6 for information on the enableConstBeforeDataLoad option.

• When importPartitionAsTable=true, the Oracle partitioned table is replicated as a single, normal table. This setting allows you to replicate a broader range of Oracle partitioned table types, but as normal Advanced Server tables, without simulating partitions by using inheritance.

By default, xDB Replication Server supports the basic thin client URL pattern for an Oracle JDBC connection. If there is a requirement to specify custom connectivity credentials, specify the advanced URL using the following option:

oraJDBCCustomURL=customURL_string

For example:

oraJDBCCustomURL=jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=$HOST)(PORT=$PORT))(CONNECT_DATA=(SERVICE_NAME=$SERVICE_NAME)(SERVER=DEDICATED)))

The parameters prefixed with a dollar sign ($) are dynamically replaced based on the actual connection values specified when adding the Oracle publication database (see Section 5.2.2). Alternatively, the parameters prefixed with a dollar sign can be replaced by hardcoded values in the URL string, in which case these hardcoded values override what is specified when adding the publication database.

10.4.1.6 Snapshot Replication Options

Note: The options described in this section apply to the publication server only unless otherwise specified.

When JDBC COPY is used in snapshot replication, the data delimiter between column values is an escaped tab character (\t). Set this option to false if you do not want to escape the tab delimiter character.

When JDBC COPY is used in snapshot replication, the data delimiter between column values is an escaped tab character (\t).
Set this option to change the data delimiter character, where c denotes the single replacement character for the data delimiter.

The enableConstBeforeDataLoad option controls whether or not table constraints, including triggers, are re-enabled before loading data into target tables. The default process is that the tables are loaded first, and then the constraints are enabled afterwards. If there are target tables created as sets of Postgres inherited tables resulting from partitioned Oracle source tables, then this option must be enabled.

Note: The option described in this section applies to the publication server and the subscription server.

An alternative method to modifying the /etc/hosts file so that the host name is associated with a non-loopback IP address as discussed in Section 184.108.40.206 is to specify the network IP address using the java.rmi.server.hostname option. In the publication server configuration file, set this option to the network IP address of the host running the publication server. In the subscription server configuration file, set this option to the network IP address of the host running the subscription server:

java.rmi.server.hostname=xxx.xxx.xx.xxx

For example, for a publication or subscription server running on host 192.168.2.19, set java.rmi.server.hostname=192.168.2.19 instead of modifying the /etc/hosts file.

10.4.1.8 Using pgAgent Job Scheduling

Note: Using pgAgent job scheduling has significance only if Postgres is the publication database.

Note: You must have pgAgent installed and running on the host where the publication database resides.

When the pgdbschedule option is set to true, xDB Replication Server uses the pgAgent job scheduler instead of the default Quartz job scheduler.
• Scheduling shadow table history cleanup in the publication database. See Section 7.5.1 for information on scheduling shadow table history cleanup.
• Unlike the Quartz scheduler, pgAgent can still run and perform its tasks even if the publication server is not running.

A cleanup job is provided that can be run on demand or on a schedule to remove dead (processed) tuples from the shadow tables (see Section 7.5). However, to perform even quicker cleanup scheduling, turn on this option to force the cleanup of shadow tables after every synchronization replication.

10.4.1.10 Setting Event History Cleanup Threshold

The event history cleanup job is scheduled to run every day at 12 AM to remove completed, historical event and replication history data from the control schema tables xdb_events, xdb_events_status, xdb_pub_replog, and xdb_pub_table_replog that are older than n days. By default, history data older than seven days is removed. Specify a value of 0 to clean up all completed event history and replication history data, regardless of its age. The default value is 7 days.

10.4.1.11 DDL Change Replication Table Locking

When the DDL change replication process is invoked, an exclusive lock is acquired on each affected table in the replication system, in turn, before the DDL change is applied to the table.

Set ddlChangeTableLock to false if you do not want an exclusive lock placed on the table before applying the DDL change. This option should be set to false only if there are no write transactions expected on the target table. If write transactions do occur, they may not be recorded by the replication system.

If you wish to maintain zero transaction count records in the replication history after the publication server is restarted, set persistZeroTxRepEvent to true.
Otherwise, zero transaction count records are no longer available once the publication server is restarted.

When creating non-MDN nodes in a multi-master replication system, the publication server creates the publication tables and their corresponding shadow tables in the non-MDN node database. When skipTablePrivileges is set to false, which is the default value, the database user privileges on the publication tables in the master definition node are granted to the same database users on the publication tables in the newly created non-MDN node. The required privileges are also granted to these database users on the corresponding shadow tables in the non-MDN node, so that these database users can perform updates on the publication tables and the changes are recorded in the corresponding shadow tables. This enables proper synchronization replication of any such changes.

This granting of privileges occurs only for database users with privileges on the master definition node publication tables at the time the non-MDN node is defined using the xDB Replication Console or the xDB Replication Server CLI.

If you do not want the publication server to grant these database user privileges on the non-MDN publication tables and shadow tables when defining the non-MDN node, set skipTablePrivileges to true. In this case, you must explicitly grant the privileges on the publication tables and corresponding shadow tables in the non-MDN node for any database user that you wish to provide update access to on these tables.
See Step 2 of Section 220.127.116.11 for information regarding the required privileges. This usage would typically be for the case where different database users are to access the non-MDN node publication tables than the database users who access the master definition node publication tables.

Note: This option applies to the subscription server only.

Note: This option applies only when both the publication database and the subscription database are Postgres databases.

When creating a subscription in a single-master replication system, the subscription server creates the subscription tables in the subscription database. When skipTablePrivileges is set to true, which is the default value, no database user privileges are granted on these subscription tables to any database user. By default, the subscription database user specified when the subscription database definition is created (see Section 5.3.2) is the owner of the subscription tables. This is the typical, expected scenario, since the data in subscription tables should not be updated by user applications other than xDB Replication Server. Database users that require access to the subscription tables must be explicitly granted such privileges.

If, however, you do want the subscription server to grant database user privileges on the subscription tables for the same database users that already have access privileges to the publication tables, set skipTablePrivileges to false in the subscription server configuration file.
(The setting of skipTablePrivileges in the publication server configuration file is ignored for this process in a single-master replication system.) In this case, the same access privileges are granted on the subscription tables to database users with privileges on the publication tables at the time the subscription is defined using the xDB Replication Console or the xDB Replication Server CLI.

When using the log-based method of synchronization replication, the walTxSetCreationInterval option controls the time interval between creations of the transaction sets, which affects the size of the transaction set (that is, the batch size). The default setting results in the creation of a transaction set every 5,000 milliseconds (5 seconds), assuming changes to the publication tables to be replicated are available. This value should be adjusted based on the workload (that is, the transactions per minute (TPM) rate) on the target publication tables. If the TPM rate is on the higher end, the walTxSetCreationInterval option should be set to a relatively low value. If the TPM rate is on the lower end, the option should be set to a higher value to ensure that a transaction set is large enough to allow an average batch size in the range of 100 to 500 transactions. The default value is 5000 milliseconds.

The walStreamQueueLimit option defines the upper limit for the number of WAL entries that can be held in the queue pending processing at a point in time. Once the queue becomes full, the WAL stream receiver blocks additions until space becomes available in the queue as transaction entries are popped out of the queue for processing. A value of 0 indicates there is no upper limit. Note that too high a setting may result in Java heap space out-of-memory errors.
See Section 5.1.1 for information on adjusting the Java heap memory size. The default value is 10000.

The pendingTxSetThreshold option defines the upper threshold limit for the number of pending transaction sets that, when reached, causes the extraction of transaction data from the WAL stream and its parsing to be put on hold until the pending transactions are processed. This avoids a situation where data is continuously pushed over the WAL stream channel but is not being processed and applied due to some failure or lack of scheduling of the synchronization process, which may result in a Java heap space out-of-memory error. See Section 5.1.1 for information on adjusting the Java heap memory size.

The Apache Commons Database Connection Pooling (DBCP) component is the Apache pooling framework used by the publication server for establishing JDBC connections. The jdbc.pool.validationQueryTimeout option controls the timeout setting when a validation query is executed at the time of allocating a connection from the pool. This is the amount of time, in seconds, before an exception is returned if the connection validation query does not succeed. The default timeout value is 30 seconds. In situations where network connections are not reliable, the timeout value can be increased accordingly to allow more time for the connection attempt. Specify a value of 0 if no timeout is desired.

If you need to change the password in the xDB Replication Configuration file, you must first encrypt the password. Use the encrypt command of the xDB Replication Server CLI to generate the encrypted form of the password from its plain text form given in an input file.

Step 1: Create a text file with the password you wish to encrypt.
Do not leave any white space before or after the password. For example, the input file passfile might contain only the text newpassword.

Step 2: Use the edb-repcli.jar file to execute the xDB Replication Server CLI with the encrypt command by first including the Java bin directory in your PATH environment variable and making XDB_HOME/bin your current working directory (for example, with /usr/bin containing the java executable program and xDB Replication Server installed into the POSTGRES_INSTALL_HOME directory).

Step 3: Copy and paste the encrypted password into the xDB Replication Configuration file.

10.4.3 Writing a Cron Expression

A cron expression is a text string used to express a schedule of dates and times. The Linux cron tool uses a cron expression to schedule the execution of a job. xDB Replication Server uses the Quartz job scheduling system for scheduling replications.

When creating a schedule for an xDB Replication Server replication system, a cron expression can be specified. There are a number of formats for cron expressions. You must use the cron expression format supported by Quartz.

The remainder of this section provides an overview of most of the types of cron expressions that can be used for an xDB Replication Server schedule. For a more comprehensive treatment of cron expressions, refer to the Quartz documentation.

A Quartz cron expression consists of six mandatory fields followed by one optional field. Each field is separated from its neighbors by one or more consecutive space characters. The fields are order dependent and must appear as follows:

ss mi hr dd mm dow [ yyyy ]
The allowable values for each field are the following:

ss (seconds): 0 - 59
mi (minutes): 0 - 59
hr (hours): 0 - 23
dd (day of the month): 1 - 31 or ? (if dow is given, then dd must be specified as ?)
mm (month): 1 - 12 or JAN - DEC
dow (day of the week): 1 - 7 or SUN - SAT or ? (if dd is given, then dow must be specified as ?; 3-letter day of the week abbreviations are not case sensitive)
yyyy (year, optional): 1970 - 2099

There are a number of characters that have special meaning that can be utilized in all fields unless noted.
, (comma) – Specifies a list of values. MON,WED,FRI – Every Monday, Wednesday, and Friday

- (hyphen) – Specifies a range of values. MON-FRI – Every Monday through Friday

* (asterisk) – Specifies all allowable values for the field. 0 10 14 * * ? – Every day of every month at 2:10 PM

x/i – Specifies an increment, i, starting with x. 0 0/10 * * * ? – Every 10 minutes starting on the hour for every day of every month (e.g., 8:00:00, 8:10:00, 8:20:00)

L – When used in the day of the month (dd) field, means the last day of the month. 0 30 15 L 8 ? – Every August 31st at 3:30 PM. When used by itself in the day of the week (dow) field, means Saturday. 30 0 12 ? AUG L – The next Saturday in August at 30 seconds past 12:00 noon

xxxL – When used in the day of the week (dow) field following a day of the week, means the last xxx day of the month. 30 0 12 ? AUG 6L – The last Friday in August at 30 seconds past 12:00 noon

xW – Used in the day of the month (dd) field following a day of the month, x, to specify the weekday closest to x without going over into the next or previous month. 1W – The weekday closest to the 1st of the month. If the 1st is a Wednesday, the result is Wednesday the 1st. If the 1st is a Sunday, the result is Monday the 2nd. If the 1st is a Saturday, the result is Monday the 3rd, because the result does not go into the previous or following month.

xxx#n – Used in the day of the week (dow) field to specify the nth xxx day of the month. 2#3 – The third Monday of the month (2 = Monday, 3 = third occurrence)
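Combining the fields and special characters above, a few complete six-field Quartz expressions (standard illustrative examples; verify any expression against the Quartz documentation before use):

```
0 0 12 * * ?          Every day at 12:00:00 noon
0 15 10 ? * MON-FRI   Every Monday through Friday at 10:15:00 AM
0 0/5 14 * * ?        Every 5 minutes from 2:00 PM through 2:55 PM, every day
```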
For example, a schedule of every half hour from 8:00 AM through 10:30 AM on the 15th and the last day of every month yields run times of 8:00:00 AM, 8:30:00 AM, 9:00:00 AM, 9:30:00 AM, 10:00:00 AM, and 10:30:00 AM.

In a snapshot replication, the publication server calls EnterpriseDB's Migration Toolkit, which disables foreign key constraints on tables so it can truncate the target tables before loading rows. In Postgres, foreign key constraints are implemented using triggers, so in actuality, the Migration Toolkit disables triggers on the target tables by setting the column relhastriggers of pg_catalog.pg_class to false for each target table.

No user (not even a superuser) is allowed to directly modify the data in a Postgres system catalog table unless the following conditions are satisfied:
• In the Postgres system catalog table pg_catalog.pg_authid, the column rolcatupdate is set to true for the row identifying the superuser attempting to update a system catalog table. This requirement applies only to Postgres version 9.4 or earlier; the column rolcatupdate no longer exists in Postgres 9.5 or later.

To verify that a user has the privilege to update the system catalog tables, select the user name under the Login Roles node in pgAdmin (Postgres Enterprise Manager Client in Advanced Server). The Update Catalogs property should be set to Yes. If the Update Catalogs property is set to No, click the secondary mouse button on the user name in the Object Browser and choose Properties from the menu. Select the Role Privileges tab, check the Can Modify Catalog Directly box, and click the OK button.

A quoted identifier is an identifier created with its name enclosed within double quote characters ("). The text enclosed within double quotes is stored as the object identifier name exactly as given, with no default case translation of alphabetic characters. Quoted identifiers occur in both Oracle and Postgres. For example, CREATE TABLE "MyTable" … produces a table name that is stored in the database system's data dictionary as MyTable. References to this table must be made using an uppercase M, an uppercase T, and lowercase letters for the rest of the name.

If a database object is created without the double quotes surrounding its identifier name, default case translation of alphabetic characters occurs. In Oracle, the default case translation is to uppercase. For example, CREATE TABLE MyTable … results in an object identifier name of MYTABLE. In Postgres, the default case translation is to lowercase.
For example, CREATE TABLE MyTable … results in an object identifier name of mytable.

This section discusses how to replicate a table containing the SQL Server SQL_VARIANT data type. The SQL_VARIANT data type defines a column so that the individual values in that column may be of different data types. For example, the same SQL_VARIANT column can store values that have been explicitly cast as character, integer, numeric, and date/time.

However, if a table containing a SQL_VARIANT column is to be replicated to a Postgres database, the usage of the column in Postgres is restricted to a single data type to which all the values in the SQL_VARIANT column are implicitly convertible (that is, without the use of explicit casting). For example, an integer value is implicitly convertible to a FLOAT data type, but a floating point value is not implicitly convertible to an INTEGER data type.

The following restrictions apply in order to use replication on tables with the SQL_VARIANT data type:
• The values stored within the SQL_VARIANT columns of the table to be replicated must be implicitly convertible to the same data type in Postgres.
• If there is more than one table with SQL_VARIANT columns to be replicated to the same Postgres database, then all such SQL_VARIANT columns must contain values that are implicitly convertible to the same data type in Postgres.

In the Postgres subscription database, you define a domain named sql_variant that maps to an underlying data type to which all values in the SQL_VARIANT columns are implicitly convertible.

For example, consider a table containing a SQL_VARIANT column used to store numeric values of different data types. A query using the SQL Server function SQL_VARIANT_PROPERTY can show the values stored in the SQL_VARIANT column and their underlying data types. In the Postgres subscription database, create a domain named sql_variant with an underlying data type that is compatible with the values stored in the SQL Server SQL_VARIANT column. After replication occurs, the subscription table is created using the sql_variant domain in place of the SQL_VARIANT data type of the publication table, and the sql_variant domain appears under the Domains node of the public schema in the subscription database.
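The setup described above can be sketched as follows. The table and column names are hypothetical, and NUMERIC is assumed as the single type to which all stored values are implicitly convertible; the first statements run on SQL Server, and the CREATE DOMAIN statement runs on the Postgres subscription database:

```sql
-- SQL Server publication table (hypothetical): a SQL_VARIANT column
-- holding only numeric values, though of varying underlying types.
CREATE TABLE variant_tbl (
    f1 INT PRIMARY KEY,
    f2 SQL_VARIANT
);
INSERT INTO variant_tbl VALUES (1, CAST(100 AS INT));
INSERT INTO variant_tbl VALUES (2, CAST(2.50 AS NUMERIC(4,2)));

-- Inspect each stored value and its underlying data type:
SELECT f2, SQL_VARIANT_PROPERTY(f2, 'BaseType') AS base_type
FROM variant_tbl;

-- On the Postgres subscription database, before creating the subscription,
-- define the domain to which all of the above values implicitly convert:
CREATE DOMAIN sql_variant AS NUMERIC;
```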