Use the -loaderCount option to specify the number of parallel threads that Migration Toolkit should use when importing data. This option is particularly useful if the source database contains a large volume of data and the Postgres host (the system running Migration Toolkit) has ample CPU and RAM resources. The value may be any positive, non-zero integer, but we recommend that it not exceed the number of CPU cores; for example, the optimal value for a dual-core CPU is 2.
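For example, a sketch of an invocation on a quad-core host might look like the following (the script path, option combination, and schema name are illustrative placeholders, not taken from this document):

```shell
# Illustrative only: use four parallel loader threads
# (one per core on a quad-core host) for the data load.
./runMTK.sh -loaderCount 4 -dataOnly HR
```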
Include the -enableConstBeforeDataLoad option if a non-partitioned source table is mapped to a partitioned table. This option enables all triggers on the target table (including any triggers that redirect data to individual partitions) before the data migration. -enableConstBeforeDataLoad is valid only if the -truncLoad parameter is also specified.
If you include the -safeMode option, Migration Toolkit commits each row as it is migrated; if the migration fails to transfer all records, rows inserted before the point of failure remain in the target database.
Include the -fastCopy option to specify that Migration Toolkit should bypass write-ahead logging (WAL) and perform the COPY operation in an optimized way; this option is disabled by default. If you choose to use the -fastCopy option, migrated data may not be recoverable (in the target database) if the migration is interrupted.
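The tradeoff between the two modes can be sketched as follows (the script path and schema name are illustrative placeholders):

```shell
# Illustrative only: per-row commits preserve already-loaded rows
# if the migration fails partway through.
./runMTK.sh -safeMode HR

# Illustrative only: fastest load, but interrupted migrations
# may leave unrecoverable data on the target.
./runMTK.sh -fastCopy HR
```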
However, Migration Toolkit does not support importing NULL character values (embedded binary zeros, 0x00) over the JDBC connection protocol. If you are importing data that includes the NULL character, use the -replaceNullChar option to replace the NULL character with a single, non-NULL replacement character. Do not enclose the replacement character in quotes or apostrophes.
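As an illustrative sketch (the script path, schema name, and choice of '+' as the replacement character are placeholders, not taken from this document):

```shell
# Illustrative only: replace embedded 0x00 bytes with the unquoted
# character '+' while loading data.
./runMTK.sh -dataOnly -replaceNullChar + HR
```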
Include the -analyze option to invoke the Postgres ANALYZE operation against the target database. The optimizer consults the statistics collected by the ANALYZE operation to construct efficient query plans.
Include the -vacuumAnalyze option to invoke both the VACUUM and ANALYZE operations against the target database. The optimizer consults the statistics collected by the ANALYZE operation to construct efficient query plans, while the VACUUM operation reclaims any storage space occupied by dead tuples in the target database.
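As an illustrative sketch of both options (the script path and schema name are placeholders):

```shell
# Illustrative only: refresh planner statistics after the migration.
./runMTK.sh -analyze HR

# Illustrative only: also reclaim space held by dead tuples.
./runMTK.sh -vacuumAnalyze HR
```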
Use the -fetchSize option to specify the number of rows fetched in each result set. If the fetch size is too large, you may encounter out-of-memory exceptions; specify a smaller -fetchSize to avoid this pitfall when migrating large tables. The default fetch size is specific to the JDBC driver implementation and varies by database.
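For example (the script path, schema name, and fetch size shown are illustrative placeholders):

```shell
# Illustrative only: cap each result-set round trip at 10,000 rows
# to bound client-side memory use when migrating large tables.
./runMTK.sh -fetchSize 10000 HR
```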
specifies the name of a file that contains constraints in key=value pairs. Each record read from the source database is evaluated against the constraints; those records that satisfy the constraints are migrated. The left side of each pair lists a table name; note that the table name should not be schema-qualified. The right side specifies a condition that must be true for each migrated row. For example, you might include the following constraints in the property file:
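The original example does not appear in this excerpt; the following entry is an illustrative sketch consistent with the format described above (the column name and condition are hypothetical):

```text
EMP=SAL > 5000
```

With this entry, only those rows of the EMP table whose SAL value exceeds 5000 are migrated.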
The '\' character acts as an escape character; since '.' is a reserved character in regular expressions, on Linux use '\\.' to represent the '.' character. For example, to use a custom mapping that selects the EMP_ID column in the EMP table, specify the following custom mapping entry:
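The original entry does not appear in this excerpt; an illustrative sketch, with the target data type chosen arbitrarily, might be:

```text
.*EMP\\.EMP_ID=INTEGER
```

Note the doubled backslash before the '.' separator, per the escaping rule described above.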
You can include multiple custom type mappings in a property_file; specify each entry in the file on a separate line, as a key=value pair. The left side of each pair selects columns with a regular expression; the right side of each pair names the data type that those columns should assume.
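For instance, a property_file holding several mappings, one per line, might look like this (all column selectors and target types here are hypothetical):

```text
.*EMP\\.EMP_ID=INTEGER
.*EMP\\.HIRE_DATE=TIMESTAMP
.*DEPT\\..*_ID=BIGINT
```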