By default, Migration Toolkit reads the definition of each selected object and creates an equivalent object in Advanced Server. If you specify the -offlineMigration option in the command line, Migration Toolkit performs an offline migration. During an offline migration, Migration Toolkit reads the definition of each selected object and creates an SQL script that, when executed at a later time, replicates each object in Postgres Plus Advanced Server.
To perform an offline migration, enter the command:
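For example (the schema name HR is illustrative), combining -offlineMigration with the required -schemaOnly option:

```shell
$./runMTK.sh -offlineMigration -schemaOnly HR
```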
Each database object definition is saved in a separate file whose name is derived from the schema name and object type (e.g. mtk_hr_table_ddl.sql) in the user's home folder. To specify an alternative file destination, include a directory name after the -offlineMigration option:
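For instance, to write the generated scripts to a specific directory (the path and schema name shown are examples):

```shell
$./runMTK.sh -offlineMigration /home/user/mtk_scripts -schemaOnly HR
```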
Note: You must include the -schemaOnly option in any commands that contain the -offlineMigration option (at this time, data cannot be migrated offline).
5.4.2 Import Options
By default, Migration Toolkit imports both the data and the object definitions (DDL) when migrating a schema; alternatively, you can choose to import only the data or only the DDL.
This option imports the schema definition and creates all selected schema objects in the target (Advanced Server) database. This option cannot be used in conjunction with the -dataOnly option.
This option imports the table data only. When used with the -tables option, Migration Toolkit imports data only for the selected tables (see usage details below). This option cannot be used with the -schemaOnly option.
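As an illustration (schema and table names are examples), the following commands migrate only the DDL, and only the data for two selected tables, respectively:

```shell
$./runMTK.sh -schemaOnly HR
$./runMTK.sh -dataOnly -tables emp,dept HR
```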
5.4.3 Schema Creation Options
By default, Migration Toolkit imports the source schema objects and/or data into a schema of the same name. If the target schema does not exist, Migration Toolkit creates a new schema. Alternatively, you may specify a custom schema name via the -targetSchema option. You can choose to drop the existing schema and create a new schema using the following option:
When set to true, Migration Toolkit drops the existing schema (and any objects within that schema) and creates a new schema. (By default, -dropSchema is false).
Use the -targetSchema option to specify the name of the migrated schema. If you are migrating multiple schemas, specify a name for each schema in a comma-separated list. If the command line does not include the -targetSchema option, the name of the new schema will be the same as the name of the source schema.
You cannot specify 'information_schema', 'dbo', 'sys' or 'pg_catalog' as a target schema name. These schema names are reserved for metadata storage in Advanced Server.
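For example (the schema names are illustrative), to migrate the source schema HR into a schema named HR_NEW, dropping any existing schema of that name first:

```shell
$./runMTK.sh -dropSchema true -targetSchema HR_NEW HR
```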
5.4.4 Schema Object Selection Options
Use the following options to select specific schema objects to migrate:
Import all tables from the source schema.
Import the selected tables from the source schema. table_list is a comma-separated list of table names (e.g. -tables emp,dept,acctg).
Import the table constraints. This option is valid only when importing an entire schema or when the -allTables or -tables table_list option is specified.
Omit the migration of foreign key constraints. This option is valid only when importing an entire schema or when the -allTables or -tables table_list option is specified.
Import the table indexes. This option is valid only when importing an entire schema or when the -allTables or -tables table_list option is specified.
Import the table triggers. This option is valid only when importing an entire schema or when the -allTables or -tables table_list option is specified.
Import the views from the source schema.
Import the selected views from the source schema. view_list is a comma-separated list of view names (e.g. -views all_emp,mgmt_list,acct_list).
Import all sequences from the source schema.
Import the selected sequences from the source schema. sequence_list is a comma-separated list of sequence names.
Import all stored procedures from the source schema.
Import the selected stored procedures from the source schema. procedures_list is a comma-separated list of procedure names.
Import all functions from the source schema.
Import the selected functions from the source schema. function_list is a comma-separated list of function names.
Import all packages from the source schema.
Import the selected packages from the source schema. package_list is a comma-separated list of package names.
Import the user-defined object types from the source schema.
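The selection options above can be combined in a single command; for instance (schema and object names are examples), to migrate two tables and one view from the HR schema:

```shell
$./runMTK.sh -tables emp,dept -views all_emp HR
```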
5.4.5 Migration Options
Truncate the data from the table before importing new data. This option can only be used in conjunction with the -dataOnly option.
If you include the -safeMode option, Migration Toolkit commits each row as it is migrated; if the migration fails to transfer all records, rows inserted before the point of failure remain in the target database.
Include the -fastCopy option to specify that Migration Toolkit should bypass WAL logging and perform the COPY operation in an optimized way; this option is disabled by default. If you choose to use the -fastCopy option, migrated data may not be recoverable (in the target database) if the migration is interrupted.
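As an illustration (the schema name is an example), the two modes might be invoked as follows:

```shell
$./runMTK.sh -safeMode -allTables HR
$./runMTK.sh -fastCopy -allTables HR
```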
Specify a single character to be used as a delimiter in the copy command when loading table data. The default value is '\t' (tab).
Specify the batch size of bulk inserts. Valid values are 1-1000. The default batch size is 1000; reduce the value of -batchSize if Out of Memory exceptions occur.
Specify the batch size (in MB) to be used in the COPY command. Any value greater than 0 is valid; the default batch size is 8 MB.
Use the -fetchSize option to specify the number of rows fetched in a result set. If the designated -fetchSize is too large, you may encounter Out of Memory exceptions; include the -fetchSize option to avoid this pitfall when migrating large tables. The default fetch size is specific to the JDBC driver implementation, and varies by database.
Specify the default datetime value (in yyyy-MM-dd_hh:mm:ss format) when migrating tables that include incompatible datetime values.
Note: This option applies only when migrating from a MySQL database. Unlike Advanced Server, MySQL supports datetime values of 0. To avoid migration errors when importing MySQL data, use -defaultDateTime to replace zero datetime values with PostgreSQL-compatible values.
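For example (the substitute value is illustrative), to replace zero datetime values with midnight on January 1, 2000:

```shell
$./runMTK.sh -defaultDateTime 2000-01-01_00:00:00 -allTables HR
```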
file_name specifies the name of a file that contains constraints in key=value pairs. Each record read from the database is evaluated against the constraints; those that satisfy the constraints are migrated. The left side of the pair lists a qualified table name; the right side specifies a condition that must be true for each row migrated. For example, including the following constraints in the property file:
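Based on the description that follows, the property-file entry might take a form like this (the exact operator syntax is a sketch, not confirmed by this manual):

```
HR.COUNTRIES=COUNTRY_ID<>'AR'
```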
migrates only those countries with a country_id value that is not equal to AR; this constraint applies to the HR.countries table.
Use custom type mapping to change the data type of migrated columns. The left side of each pair selects the columns with a regular expression; the right side of each pair names the data type those columns should assume. You can include multiple pairs in a semicolon-separated column_list. For example, to map any column whose name ends in ID to type INTEGER, use the following custom mapping entry:
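One plausible form of that entry (regular-expression syntax assumed, where .* matches any prefix) is:

```
.*ID=INTEGER
```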
Custom mapping is applied to all table columns that match the criteria unless the column is table-qualified.
The '\\' characters act as an escape sequence; since '.' is a reserved character in regular expressions, use '\\.' to represent the '.' character. For example, to apply a custom mapping to the EMP_ID column of the EMP table, specify the following custom mapping entry:
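A sketch of such a table-qualified entry (the target type INTEGER is illustrative, not stated by this manual):

```
EMP\\.EMP_ID=INTEGER
```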
You can include multiple custom type mappings in a property_file; specify each entry in the file on a separate line, in a key=value pair. The left side of each pair selects the columns with a regular expression; the right side of each pair names the data type that column should assume.
5.4.6 Oracle Specific Options
The following options apply only when the source database is Oracle.
Import all users and roles from the source database.
Import the selected users or roles from the source Oracle database. user_list is a comma-separated list of user/role names (e.g. -users MTK,SAMPLE,acctg)
The dblink_ora module provides Advanced Server-to-Oracle connectivity at the SQL level. dblink_ora is bundled and installed as part of the Advanced Server database installation. dblink_ora utilizes the COPY API method to transfer data between databases. This method is considerably faster than the JDBC COPY method.
The following example uses the dblink_ora COPY API to migrate all tables from the HR schema:
$./runMTK.sh -copyViaDBLinkOra -allTables HR
The target Advanced Server database must have dblink_ora installed and configured. For installation details, refer to the dblink_ora setup guide (README-DBLINK_ORA_SETUP.txt), in the /dbserver/lib/postgresql/contrib subfolder under the Postgres Plus Advanced Server installation home directory.
Choose this option to migrate Oracle database links. The password information for each link connection in the source database is encrypted, so unless specified, a dummy password (edb) is substituted.
To migrate all database links using edb as the dummy password for the connected user:
$./runMTK.sh -allDBLinks HR
You can alternatively specify the password for each of the database links through a comma-separated list of name=value pairs. Specify the link name on the left side of the pair and the password value on the right side.
To migrate all database links with the actual passwords specified on the command-line:
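Assuming two links named link1 and link2 (the link names and passwords shown are illustrative), the name=value pairs follow the -allDBLinks option:

```shell
$./runMTK.sh -allDBLinks link1=password1,link2=password2 HR
```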
Migration Toolkit migrates only the database link types that are currently supported by EnterpriseDB; this includes Fixed User Links of Public and Private type.
5.4.7 Miscellaneous Options
Display the application command-line usage information.
Display the Migration Toolkit version.
Display application log messages on standard output (By default, verbose is on).
Include this option to specify where the log files will be written; log_path represents the path where application log files are saved. By default, the log files are saved in the logs sub-folder under the application root path (/opt/PostgresPlus/8.3R2AS/migrationstudio/logs).