How to Move an Oracle Database into a Cloud Database Cluster

Version 1.4.2

August 21, 2015

by EnterpriseDB Corporation

Copyright © 2012 - 2015 EnterpriseDB Corporation. All rights reserved.

Introduction

This tutorial walks through the steps required to use Migration Toolkit to migrate an on-premises Oracle database into an Advanced Server database hosted in Postgres Plus Cloud Database.

Migration Toolkit facilitates migration of database objects and data into an Advanced Server or PostgreSQL database that resides in a Cloud Database cluster. Migration Toolkit can assist you in migrating from:

    ● Oracle

    ● MySQL

    ● Sybase

    ● SQL Server

You can also use Migration Toolkit to migrate between PostgreSQL and Advanced Server.

In this EnterpriseDB tutorial we will demonstrate:

    ● How to use Migration Toolkit to migrate an Oracle schema into a Cloud Database cluster.

This tutorial assumes:

    ● You have Migration Toolkit installed on a local workstation.

    ● You have a Cloud Database cluster.

    ● You have opened port 22 on the Cloud Database cluster for ssh connections.

By default, only port 9999 on the master server node of a Cloud Database cluster is open for client connections.

Before connecting with ssh or scp, you must first modify the Cloud Database security group, opening port 22. When you are no longer using ssh or scp, you can safely disable this security rule.

For detailed information about modifying a security group, please see Section 12.3, Securing a Cloud Database Cluster in the Cloud Database Getting Started Guide, available through the Cloud Database Dashboard tab.

This tutorial uses the term Postgres when referring to either PostgreSQL or Postgres Plus Advanced Server. Please note that the examples shown are on a Linux system; Migration Toolkit is also available for Windows platforms.

Migration Toolkit

Migration Toolkit is a powerful command-line tool that offers granular control of the migration process. Migration Toolkit can migrate directly into a Postgres database (online migration), or it can generate scripts that you can use at a later time to recreate object definitions and data in a Postgres database (offline migration).

By default, Migration Toolkit performs an online migration, creating database objects and copying data directly into a Postgres database. If you include the -offlineMigration option when invoking Migration Toolkit, Migration Toolkit will instead generate SQL scripts that reproduce the migrated objects or data in a new database. You can alter the migrated objects by customizing the migration scripts before you execute them, or schedule the actual migration for a time that best suits your workload.

If you are only migrating schema objects to a Cloud Database cluster, and use an ssh tunnel (with compression enabled), online migration of database object definitions may be a viable option. If you are migrating large amounts of data, network overhead may make an online migration prohibitively slow; Migration Toolkit's -offlineMigration option might provide a better migration path.

Migration Toolkit is distributed with, and installed by default by, the Advanced Server installer; it can also be installed as a standalone utility. Migration Toolkit is available to PostgreSQL users through Application StackBuilder.

For complete installation instructions for Migration Toolkit, please refer to the Postgres Plus Migration Guide, available at:

http://www.enterprisedb.com/documentation/english

Installing a Source-Specific Driver

Before invoking Migration Toolkit, you must download and install a source-specific JDBC driver on your workstation; JDBC drivers are available from the EnterpriseDB website at:

http://www.enterprisedb.com/downloads/third-party-jdbc-drivers

After downloading the source-specific driver, move the driver file into the JAVA_HOME/jre/lib/ext directory.
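
For example, if you downloaded the Oracle JDBC driver (shown here with the hypothetical file name ojdbc6.jar) to your Downloads directory, a command such as the following copies it into place; the location of JAVA_HOME varies by installation:

    $ cp ~/Downloads/ojdbc6.jar $JAVA_HOME/jre/lib/ext/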

Create an ssh Tunnel to the Cloud Database Cluster

Forwarding an unused port on your local workstation through an ssh tunnel to the target Cloud Database cluster allows you to take advantage of ssh compression and encryption during the migration process. To forward a port through an ssh tunnel, use the command:

ssh -i /path/ssh_key.pem -C -L local_port:127.0.0.1:target_port root@host_name

Where:

    path specifies the location of your Cloud Database ssh certificate on the system from which you are connecting.

    ssh_key.pem specifies the name of the Cloud Database ssh private key file.

    -C instructs ssh to use compression.

    -L instructs ssh to forward the local_port to the remote host and target_port.

    local_port specifies an otherwise unused port on your local workstation.

    target_port specifies the port on which the target Postgres server is listening on the Cloud Database cluster.

    host_name specifies the host name or IP address of the Cloud Database cluster to which you wish to connect.

You can find the target_port and host_name on the Details panel of the Clusters tab in the Cloud Database console (see Figure 1.1). Please note that there is no performance benefit gained by using the load balancing ports during a migration; specify the port number shown in the DBPORT column.

Figure 1.1 - The Details panel of the Cloud Database Console.
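
For example, the following command (using placeholder values) forwards local port 5444 through a compressed ssh tunnel to the default database port (9999) on a hypothetical master node; substitute your own key file, port numbers, and host name:

    $ ssh -i ~/.ssh/ppcd_key.pem -C -L 5444:127.0.0.1:9999 root@ec2-xx-xx-xx-xx.compute-1.amazonaws.com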

Specify Connections in the toolkit.properties File

Migration Toolkit uses connection information specified in the toolkit.properties file (see Figure 1.2) to connect to the source and target databases. The toolkit.properties file resides in the etc directory, under the Postgres installation on the workstation from which you will invoke Migration Toolkit.

Figure 1.2 - The toolkit.properties file.

The first three parameters listed in the toolkit.properties file provide connection information for the source database:

    SRC_DB_URL contains a JDBC URL that specifies the type and location of the source database.

    SRC_DB_USER specifies a privileged user of the source database.

    SRC_DB_PASSWORD specifies the password of the source database user.

The last three parameters listed in the toolkit.properties file provide connection information for the target (your Cloud Database cluster):

    TARGET_DB_URL contains a JDBC URL that specifies the type and location of the target database. If you are performing an online migration through an ssh tunnel, specify localhost and the forwarded local port number in the URL. If you are performing an offline migration, specify the address and port number of your Cloud Database master node when forming the URL.

    TARGET_DB_USER specifies the name of a privileged target database user.

    TARGET_DB_PASSWORD specifies the password of the target database user.
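
For example, the following target parameters (placeholder values only) describe an online migration through an ssh tunnel whose local port is 5444, into an Advanced Server database named acctg, connecting as the enterprisedb superuser; an Advanced Server target uses the jdbc:edb sub-protocol:

    TARGET_DB_URL=jdbc:edb://localhost:5444/acctg
    TARGET_DB_USER=enterprisedb
    TARGET_DB_PASSWORD=password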

Migrating from an Oracle Source

When migrating from an Oracle database, SRC_DB_URL takes the form of a JDBC URL. An Oracle URL can take one of two forms:

    jdbc:oracle:thin:@host_name:port:database_id

    or

    jdbc:oracle:thin:@//host_name:port/database_id|service_name

Where:

    jdbc identifies the connection protocol.

    The sub-protocol identifies the migration source as an Oracle database.

    When migrating from an Oracle database, the URL should include a driver type of thin.

    The host_name parameter specifies the name or IP address of the host where the Oracle server is running.

    port specifies the port number that the Oracle database listener is monitoring.

    database_id specifies the database SID of the Oracle database.

    service_name is the name of the Oracle service.

For example, the following parameters describe an Oracle source database (named mgmt) that resides on host 192.168.2.88, and is listening on port 1521. Migration Toolkit will attempt to connect using a user name of system, and password of password.

    SRC_DB_URL=jdbc:oracle:thin:@//192.168.2.88:1521/mgmt
    SRC_DB_USER=system
    SRC_DB_PASSWORD=password

Invoking Migration Toolkit

After specifying connection properties in the toolkit.properties file, Migration Toolkit is ready to perform an online migration to the master node of the Cloud Database cluster. To migrate a complete schema, navigate to the Migration Toolkit bin directory and invoke Migration Toolkit with the command:

    $ ./runMTK.sh schema_name

Where:

    schema_name is the name of the schema within the source database (specified in the toolkit.properties file) that you wish to migrate. You must include at least one schema_name.

Unless otherwise specified on the command line, Migration Toolkit expects the source database to be Oracle and the target database to be Advanced Server. You can include the -sourcedbtype and -targetdbtype keywords to specify an alternate source or target type. For more information about migrating non-Oracle databases, please see the Postgres Plus Migration Guide at:

http://www.enterprisedb.com/documentation/english
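
For example, the following hypothetical command migrates the hr schema from the (default) Oracle source into a PostgreSQL target rather than an Advanced Server target; consult the Postgres Plus Migration Guide for the keyword values supported by your Migration Toolkit version:

    $ ./runMTK.sh -targetdbtype postgres hr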

As the migration progresses, Migration Toolkit lists the objects migrated, and any error messages that result from the migration. When the migration completes, the Migration Summary will provide a count of the objects migrated (as shown in Figure 1.3).

Figure 1.3 - The migration summary.

You can use a client application (such as Postgres Enterprise Manager) to confirm that Migration Toolkit has moved the specified objects to the Cloud Database cluster (see Figure 1.4).

Figure 1.4 - The migrated schema now resides in a Cloud Database Cluster.

Performing an Offline Migration

When you perform an offline migration, Migration Toolkit generates scripts that you can use to recreate object definitions and/or data in a Cloud Database cluster. The initial steps required to perform an offline migration are similar to the initial steps in an online migration:

    1. Install Migration Toolkit and the appropriate source-specific driver on your local workstation.

    2. Edit the toolkit.properties file, specifying the source and target database information.

Figure 1.5 - The toolkit.properties file.

When performing an offline migration, you should specify the address and port of the master node of the Cloud Database cluster in the TARGET_DB_URL (see Figure 1.5). You must also provide the name and password of the Cloud Database superuser.

Please note that Migration Toolkit will connect to the target database before performing an offline migration to confirm the type of the target database; your workstation must be able to connect to the Cloud Database cluster when you invoke Migration Toolkit.
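
For example, the following target parameters (placeholder values only) point directly at the master node of a hypothetical cluster, using the host name from the DNSNAME column, the port from the DBPORT column, and the default edb database:

    TARGET_DB_URL=jdbc:edb://ec2-xx-xx-xx-xx.compute-1.amazonaws.com:9999/edb
    TARGET_DB_USER=enterprisedb
    TARGET_DB_PASSWORD=password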

Invoking Migration Toolkit to Perform an Offline Migration

Navigate to the directory that contains the Migration Toolkit binary, and invoke Migration Toolkit, specifying the -offlineMigration keyword and the name of the source schema.

      $ ./runMTK.sh -offlineMigration schema_name

By default, when you perform an offline migration that includes table data, a separate file is created for each table. To create a single file that contains the data from multiple tables, specify the -singleDataFile and -safeMode keywords:

      $ ./runMTK.sh -offlineMigration -singleDataFile -safeMode schema_name

You can also specify a file destination when invoking Migration Toolkit. In the example that follows, the migration files for the objects in the hr schema are written to the /tmp/migration directory:

    $ ./runMTK.sh -offlineMigration /tmp/migration -singleDataFile -safeMode hr

After executing the command shown in the example, the /tmp/migration directory contains:

    mtk_hr_constraint_ddl.sql
    mtk_hr_procedure_ddl.sql
    mtk_hr_trigger_ddl.sql
    mtk_hr_data.sql
    mtk_hr_schema_ddl.sql
    mtk_hr_view_ddl.sql
    mtk_hr_ddl.sql
    mtk_hr_sequence_ddl.sql
    mtk_hr_index_ddl.sql
    mtk_hr_table_ddl.sql

If you inspect the file contents, you will find that the files contain the SQL commands required to recreate the migrated objects on the Cloud Database cluster. The mtk_hr_ddl.sql file contains all of the DDL commands required to recreate the database objects, while the mtk_hr_data.sql file contains the commands that restore the data into the database.

Copy the Migration Files to the Cloud

Navigate into the directory that contains the migration files, and use the scp command to copy the files to the master node of the Cloud Database cluster (see Figure 1.6). The syntax of the scp command is:

    scp -i /path/ssh_key.pem file_name root@host_name:

Where:

    path specifies the location of your Cloud Database ssh certificate on the system from which you are connecting.

    ssh_key.pem specifies the name of the Cloud Database ssh private key file.

    file_name specifies the name of the file you are copying to Cloud Database. By default, each database object definition is saved in a separate file (with a name derived from the schema name and object type) in your home directory; use an asterisk (*) to copy all of the files in the directory with a single command.

    host_name specifies the host name of the master node of the Cloud Database cluster; the host name is located in the DNSNAME column, on the Details panel of the Clusters tab in the Cloud Database console.

The colon (:) at the end of this command specifies that the files will be copied to the root user's home directory on the cluster's master node; please note that you can specify a different destination by adding a path after the colon.

Figure 1.6 - Moving the migration files to Cloud Database.
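
For example, the following hypothetical command copies all of the migration files generated earlier in /tmp/migration to the root user's home directory on the master node; substitute your own key file and the host name shown in the DNSNAME column:

    $ scp -i ~/.ssh/ppcd_key.pem /tmp/migration/mtk_hr_* root@ec2-xx-xx-xx-xx.compute-1.amazonaws.com: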

Execute the Migration Scripts

Use ssh to connect to the master node of your Cloud Database cluster as root, specifying the location of the ssh key (on your workstation) in the command:

    ssh -i /path/ssh_key.pem root@host_name

Where:

    path specifies the location of your Cloud Database ssh certificate on the system from which you are connecting.

    ssh_key.pem specifies the name of the Cloud Database ssh private key file.

    host_name specifies the host name of the node to which you wish to connect.

Once connected, configure your Cloud Database environment by executing the pgplus_env.sh (on Advanced Server) or pg_env.sh (on PostgreSQL) script:

    source /opt/PostgresPlus/9.2AS/pgplus_env.sh

Create a database into which you will restore the migrated database objects:

    createdb -U enterprisedb db_name

Where:

    db_name specifies the name of the new database.
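
For example, the following command creates a database named acctg (the database name used in the examples that follow), connecting as the enterprisedb superuser:

    createdb -U enterprisedb acctg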

Connect to the new database with edb-psql (on Advanced Server) or psql (on PostgreSQL):

    edb-psql -U user_name db_name

Where:

    user_name specifies the name of the Cloud Database superuser.

    db_name specifies the name of the new database.
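
Continuing the example, the following command connects to the acctg database as the enterprisedb superuser:

    edb-psql -U enterprisedb acctg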

Use the \i meta-command to invoke the migration script that creates the objects you wish to migrate. For example, to create all of the database objects in the example schema, we invoke the mtk_hr_ddl.sql script:

    acctg=# \i ./mtk_hr_ddl.sql

We can then restore the data into the database with the command:

    acctg=# \i ./mtk_hr_data.sql

Confirm the Migration Results

After restoring the objects and the data, you can connect to Cloud Database with a client application (such as Postgres Enterprise Manager), and query the migrated database (as shown in Figure 1.7).

Figure 1.7 - The migrated data now resides on Cloud Database
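
As a quick check from the command line, you can also query a migrated table with edb-psql; the table name below (hr.employees) is only an illustration, so substitute a table from your own migrated schema:

    acctg=# SELECT count(*) FROM hr.employees;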

For detailed information about migrating to Advanced Server, or using Migration Toolkit options, please refer to the Postgres Plus Migration Guide, available from the EnterpriseDB website at:

http://www.enterprisedb.com/documentation/english