Preparing your Oracle source database v1.2

To prepare your Oracle source for the migration, you need sysdba privileges.

Complete the following steps to configure an Oracle source database:

Step 1: Enable Archivelog Mode

For Debezium to capture all database changes in Oracle, the database must be configured in Archivelog Mode. This is essential because Debezium reads the database's Redo Logs to track changes.

In Archivelog Mode, Oracle saves a copy of each Redo Log file before it's overwritten. This ensures a continuous and complete history of every transaction, which Debezium needs to maintain a reliable stream of change data without any gaps.

Log in to Oracle Container Database (CDB) as sysdba, or if you are using a non-CDB architecture, log in to your non-CDB database as sysdba:

sqlplus <DBA_USER>/<PASSWD>@<HOST>:<PORT>/<DB_IDENTIFIER> as sysdba

Where:

  • <DBA_USER> is the Oracle user with sysdba privileges.
  • <PASSWD> is the password for the DBA_USER.
  • <HOST> is the hostname of your Oracle database.
  • <PORT> is the port of your Oracle database.
  • <DB_IDENTIFIER> is the database identifier: for a CDB architecture, the container database name (e.g., CDB1); for a non-CDB architecture, either the service name (recommended) or the SID (e.g., ORCL).

Run the following SQL command to check whether Archivelog Mode is enabled:

archive log list;

The returned content indicates the database mode:

-- The following output indicates archivelog mode is enabled:
Database log mode 	Archive Mode -- (shown as "Archive Log Mode" in newer versions of Oracle)
-- The following output indicates archivelog mode is not enabled:
Database log mode 	No Archive Mode
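
If you script this check (for example, in automation wrapped around sqlplus), the relevant line of the output can be parsed as in the following sketch. This helper is illustrative only and not part of the EDB tooling; the exact whitespace between label and value varies between versions:

```python
# Illustrative helper for scripted checks: parses the output of
# `archive log list` to decide whether archivelog mode is enabled.
def archivelog_enabled(sqlplus_output: str) -> bool:
    for line in sqlplus_output.splitlines():
        stripped = line.strip()
        if stripped.startswith("Database log mode"):
            mode = stripped[len("Database log mode"):].strip()
            # Newer Oracle versions print "Archive Log Mode" instead of "Archive Mode"
            return mode in ("Archive Mode", "Archive Log Mode")
    raise ValueError("no 'Database log mode' line found in output")
```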

If Archivelog Mode is already enabled, you can skip the rest of this step. If not, log out from your current SQL session, then run the following commands to enable it.

Connect to the SQL session:

ORACLE_SID=<DB_IDENTIFIER> sqlplus /nolog

Where:

  • <DB_IDENTIFIER> is the database identifier: for a CDB architecture, the container database name (e.g., CDB1); for a non-CDB architecture, either the service name (recommended) or the SID (e.g., ORCL).

Run the following SQL commands:

CONNECT <DBA_USER>/<PASSWD> AS sysdba;
-- Set the location and size for Fast Recovery Area (FRA)
ALTER SYSTEM SET db_recovery_file_dest_size = <RECOVERY_FILE_DEST_SIZE>;
ALTER SYSTEM SET db_recovery_file_dest = '<RECOVERY_FILE_DEST>' scope=spfile;
-- Restart the database in the mount phase to enable archiving
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
-- Enable archive log mode and open the database for use
ALTER DATABASE archivelog;
ALTER DATABASE open;
-- Verify the change was successful
archive log list;
exit;

Where:

  • <DBA_USER> is the Oracle user with sysdba privileges.
  • <PASSWD> is the password for the DBA_USER.
  • <RECOVERY_FILE_DEST> is the directory path of your Oracle Fast Recovery Area (FRA), where Oracle Redo Logs are stored.
  • <RECOVERY_FILE_DEST_SIZE> is the space allocated to FRA, for example, 100G for 100 gigabytes.
Note

Setting db_recovery_file_dest and db_recovery_file_dest_size configures Oracle's Fast Recovery Area (FRA), which is a required storage location for the archived logs Debezium reads.

These parameters are critical for database stability. Always consult your DBA before setting them in a production environment.

Step 2: Enable supplemental logging for the database and tables of interest

Supplemental logging refers to the capture of additional information in the Oracle Redo Logs, such as the "before" state of changed rows. This extra Redo Log information is needed by some log-based applications, such as EDB DMS, to capture change events. See Oracle's Supplemental Logging documentation for more information.

To use Debezium for data migration, you must enable supplemental logging at both the database and table levels. This is crucial for Debezium to correctly capture change data:

  • Database-level supplemental logging is the foundation for change data capture (CDC). It ensures that every change record includes enough identifying information, typically the primary key of the modified row, to uniquely identify which specific record was changed. Without this, Debezium wouldn't be able to link a change event to its source row.

  • Table-level supplemental logging provides the detailed information. It ensures that the change message includes a "before" image of the entire row, a complete copy of all columns before the change occurred. This is especially useful for understanding exactly what changed, as Debezium can compare the "before" and "after" states. Without this, Debezium would capture only the updated columns and not the full context of the change.

Log in to Oracle Container Database (CDB) as sysdba, or if you are using a non-CDB architecture, log in to your non-CDB database as sysdba:

sqlplus <DBA_USER>/<PASSWD>@<HOST>:<PORT>/<DB_IDENTIFIER> as sysdba

Where:

  • <DBA_USER> is the Oracle user with sysdba privileges.
  • <PASSWD> is the password for the DBA_USER.
  • <HOST> is the hostname of your Oracle database.
  • <PORT> is the port of your Oracle database.
  • <DB_IDENTIFIER> is the database identifier: for a CDB architecture, the container database name (e.g., CDB1); for a non-CDB architecture, either the service name (recommended) or the SID (e.g., ORCL).

Run the following SQL command to enable supplemental logging at the database level:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

Then if you're using a CDB architecture, switch to the PDB that you want to migrate:

ALTER SESSION SET CONTAINER = <PDB_NAME>;

Where:

  • <PDB_NAME> is the name of the Oracle PDB you want to migrate.

If you're using a non-CDB architecture, you can skip the above session switching operation.

Then enable supplemental logging for each table that you want to migrate:

ALTER TABLE <SCHEMA_NAME>.<TABLE_NAME> ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

Where:

  • <SCHEMA_NAME> is the schema to which your table belongs.
  • <TABLE_NAME> is the name of a table you want to migrate.
Note
  • Make sure supplemental logging is enabled on every table that you want to migrate. Debezium requires it; without it, synchronization will not work.

  • For Oracle, the terms schema and user are used interchangeably. When you run the SQL commands for data migration, be sure to set the schema name to the correct user account to avoid errors.
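
When many tables are involved, generating the per-table statements with a short script reduces typing errors. The following is a minimal sketch; the schema and table names are hypothetical examples:

```python
# Generate the per-table supplemental logging statements for a list of
# tables. Schema and table names below are hypothetical examples.
def supplemental_log_ddl(schema: str, tables: list[str]) -> list[str]:
    return [
        f"ALTER TABLE {schema}.{t} ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;"
        for t in tables
    ]

for stmt in supplemental_log_ddl("HR", ["EMPLOYEES", "DEPARTMENTS"]):
    print(stmt)
```

Run the generated statements in the same session you used for the session-container switch above.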

Step 3: Prepare tablespaces for migration user

To perform data migration, both DMS and its underlying Debezium library require a dedicated user. Before creating this user, set up the required tablespace(s).

For a CDB architecture, two tablespaces are needed:

  • CDB-level tablespace: for Debezium's internal processes. It's used for log mining and other management operations that must run at the container database level.

  • PDB-level tablespace: for Debezium operations that are specific to your pluggable database. It handles processes related to the individual tables you're capturing within that PDB.

For a non-CDB architecture, only one tablespace is needed; it combines both functions (log mining and table operations), since there’s no container/PDB separation.

Log in to Oracle Container Database (CDB) as sysdba, or if you are using a non-CDB architecture, log in to your non-CDB database as sysdba:

sqlplus <DBA_USER>/<PASSWD>@<HOST>:<PORT>/<DB_IDENTIFIER> as sysdba

Where:

  • <DBA_USER> is the Oracle user with sysdba privileges.
  • <PASSWD> is the password for the DBA_USER.
  • <HOST> is the hostname of your Oracle database.
  • <PORT> is the port of your Oracle database.
  • <DB_IDENTIFIER> is the database identifier: for a CDB architecture, the container database name (e.g., CDB1); for a non-CDB architecture, either the service name (recommended) or the SID (e.g., ORCL).

Create a tablespace with the following SQL command. For a CDB architecture, this tablespace belongs to the CDB; for a non-CDB architecture, it is the single tablespace for the database:

CREATE TABLESPACE <TABLESPACE_NAME> DATAFILE '<TABLESPACE_PATH>.dbf' SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;

Where:

  • <TABLESPACE_NAME> is the name of the tablespace. You can use any valid name here.
  • <TABLESPACE_PATH>.dbf is the file path of this tablespace. Always consult your DBA to determine the correct and appropriate file path for your environment.

Next, switch to the Pluggable Database (PDB) you want to migrate if you're using a CDB architecture:

ALTER SESSION SET CONTAINER = <PDB_NAME>;

Where:

  • <PDB_NAME> is the name of the Oracle PDB you want to migrate.

Create a tablespace for the PDB:

CREATE TABLESPACE <TABLESPACE_NAME> DATAFILE '<PDB_TABLESPACE_PATH>.dbf' SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
-- 25M is a minimal example size. Production systems typically require larger
-- initial allocations (e.g., 512M+). Consult your DBA for organization-specific
-- sizing policies.
-- Continue adding more datafiles if necessary:
-- ALTER TABLESPACE <TABLESPACE_NAME> ADD DATAFILE '<PDB_TABLESPACE_PATH2>.dbf' SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;

Where:

  • <TABLESPACE_NAME> is the name of the PDB tablespace. For consistency, you can use the same name you used for the CDB tablespace.

  • <PDB_TABLESPACE_PATH>.dbf is the file path for the tablespace. Always consult your DBA to determine the correct path for your specific environment.

If you're using a non-CDB Oracle database, you can skip the PDB tablespace setup.

Step 4: Create the migration user and grant privileges

As mentioned previously, DMS needs a dedicated migration user (a common user for CDB architectures, a regular user for non-CDB architectures).

Log in to Oracle Container Database (CDB) as sysdba, or if you are using a non-CDB architecture, log in to your non-CDB database as sysdba:

sqlplus <DBA_USER>/<PASSWD>@<HOST>:<PORT>/<DB_IDENTIFIER> as sysdba

Where:

  • <DBA_USER> is the Oracle user with sysdba privileges.
  • <PASSWD> is the password for the DBA_USER.
  • <HOST> is the hostname of your Oracle database.
  • <PORT> is the port of your Oracle database.
  • <DB_IDENTIFIER> is the database identifier: for a CDB architecture, the container database name (e.g., CDB1); for a non-CDB architecture, either the service name (recommended) or the SID (e.g., ORCL).

If you're using a CDB architecture, run the following SQL statements one by one to create a migration user (which must be a common user) and grant the necessary permissions:

CREATE USER <MIGRATION_USER> IDENTIFIED BY <MIGRATION_USER_PASSWORD>
    DEFAULT TABLESPACE <TABLESPACE_NAME>
    QUOTA UNLIMITED ON <TABLESPACE_NAME>
    CONTAINER=ALL;
GRANT CREATE SESSION TO <MIGRATION_USER> CONTAINER=ALL;
GRANT SET CONTAINER TO <MIGRATION_USER> CONTAINER=ALL;
GRANT SELECT ON V_$DATABASE TO <MIGRATION_USER> CONTAINER=ALL;
GRANT FLASHBACK ANY TABLE TO <MIGRATION_USER> CONTAINER=ALL;
GRANT SELECT ANY TABLE TO <MIGRATION_USER> CONTAINER=ALL;
GRANT SELECT_CATALOG_ROLE TO <MIGRATION_USER> CONTAINER=ALL;
GRANT EXECUTE_CATALOG_ROLE TO <MIGRATION_USER> CONTAINER=ALL;
GRANT SELECT ANY TRANSACTION TO <MIGRATION_USER> CONTAINER=ALL;
GRANT SELECT ANY DICTIONARY TO <MIGRATION_USER> CONTAINER=ALL;
GRANT LOGMINING TO <MIGRATION_USER> CONTAINER=ALL;
GRANT CREATE TABLE TO <MIGRATION_USER> CONTAINER=ALL;
GRANT LOCK ANY TABLE TO <MIGRATION_USER> CONTAINER=ALL;
GRANT CREATE SEQUENCE TO <MIGRATION_USER> CONTAINER=ALL;
GRANT EXECUTE ON DBMS_LOGMNR TO <MIGRATION_USER> CONTAINER=ALL;
GRANT EXECUTE ON DBMS_LOGMNR_D TO <MIGRATION_USER> CONTAINER=ALL;
GRANT SELECT ON V_$LOGMNR_LOGS TO <MIGRATION_USER> CONTAINER=ALL;
GRANT SELECT ON V_$LOGMNR_CONTENTS TO <MIGRATION_USER> CONTAINER=ALL;
GRANT SELECT ON V_$LOGFILE TO <MIGRATION_USER> CONTAINER=ALL;
GRANT SELECT ON V_$ARCHIVED_LOG TO <MIGRATION_USER> CONTAINER=ALL;
GRANT SELECT ON V_$ARCHIVE_DEST_STATUS TO <MIGRATION_USER> CONTAINER=ALL;
GRANT SELECT ON V_$TRANSACTION TO <MIGRATION_USER> CONTAINER=ALL;

Where:

  • <MIGRATION_USER> is the name of the common user you're creating for DMS.

  • <MIGRATION_USER_PASSWORD> is the password for the new migration user.

  • <TABLESPACE_NAME> is the name of the CDB tablespace you created earlier. For simplicity, we recommend using the same name for both the CDB and PDB tablespaces.

If you used different names for your CDB and PDB tablespaces in the previous step, you must also log in to the PDB and run the following command to set the default tablespace for the user:

ALTER USER <MIGRATION_USER> DEFAULT TABLESPACE <PDB_TABLESPACE_NAME>;

If you are using a non-CDB database, use the following SQL commands instead:

CREATE USER <MIGRATION_USER> IDENTIFIED BY <MIGRATION_USER_PASSWORD>
    DEFAULT TABLESPACE <TABLESPACE_NAME>
    QUOTA UNLIMITED ON <TABLESPACE_NAME>;
GRANT CREATE SESSION TO <MIGRATION_USER>;
GRANT SELECT ON V_$DATABASE TO <MIGRATION_USER>;
GRANT FLASHBACK ANY TABLE TO <MIGRATION_USER>;
GRANT SELECT ANY TABLE TO <MIGRATION_USER>;
GRANT SELECT_CATALOG_ROLE TO <MIGRATION_USER>;
GRANT EXECUTE_CATALOG_ROLE TO <MIGRATION_USER>;
GRANT SELECT ANY TRANSACTION TO <MIGRATION_USER>;
GRANT SELECT ANY DICTIONARY TO <MIGRATION_USER>;
GRANT LOGMINING TO <MIGRATION_USER>;
GRANT CREATE TABLE TO <MIGRATION_USER>;
GRANT LOCK ANY TABLE TO <MIGRATION_USER>;
GRANT CREATE SEQUENCE TO <MIGRATION_USER>;
GRANT EXECUTE ON DBMS_LOGMNR TO <MIGRATION_USER>;
GRANT EXECUTE ON DBMS_LOGMNR_D TO <MIGRATION_USER>;
GRANT SELECT ON V_$LOGMNR_LOGS TO <MIGRATION_USER>;
GRANT SELECT ON V_$LOGMNR_CONTENTS TO <MIGRATION_USER>;
GRANT SELECT ON V_$LOGFILE TO <MIGRATION_USER>;
GRANT SELECT ON V_$ARCHIVED_LOG TO <MIGRATION_USER>;
GRANT SELECT ON V_$ARCHIVE_DEST_STATUS TO <MIGRATION_USER>;
GRANT SELECT ON V_$TRANSACTION TO <MIGRATION_USER>;

Where:

  • <MIGRATION_USER> is the name of the user to create for CDC migration table access.
  • <MIGRATION_USER_PASSWORD> is the password for the migration user.
  • <TABLESPACE_NAME> is the tablespace for <MIGRATION_USER>.

Step 5: Grant SELECT on source tables

Log in to PDB as sysdba, or if you are using a non-CDB architecture, log in to your non-CDB database as sysdba:

sqlplus <DBA_USER>/<PASSWD>@<HOST>:<PORT>/<DB_IDENTIFIER> as sysdba

Where:

  • <DBA_USER> is an Oracle user that can log in as sysdba.
  • <PASSWD> is the password of <DBA_USER>.
  • <HOST> is the Oracle database hostname.
  • <PORT> is the Oracle database port.
  • <DB_IDENTIFIER> is the database identifier: for a CDB architecture, the pluggable database name (e.g., PDB1); for a non-CDB architecture, either the service name (recommended) or the SID (e.g., ORCL).

For each table you want to migrate, grant SELECT permission to the migration user you created:

GRANT SELECT ON <SCHEMA_NAME>.<TABLE_NAME> TO <MIGRATION_USER>;

Where:

  • <SCHEMA_NAME> is the name of the schema that contains the table you want to migrate.
  • <TABLE_NAME> is the name of the specific table you want to migrate.
  • <MIGRATION_USER> is the name of the common user you created in the previous step.
Note

For Oracle, the terms schema and user are interchangeable. Be sure to use the correct schema name to prevent migration errors.
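
When many tables need the grant, a short script can generate the statements. This is an illustrative sketch; the schema, table, and user names below are hypothetical examples:

```python
# Generate per-table GRANT statements for the migration user.
# Schema, table, and user names are hypothetical examples.
def grant_select_ddl(schema: str, tables: list[str], user: str) -> list[str]:
    return [f"GRANT SELECT ON {schema}.{t} TO {user};" for t in tables]

for stmt in grant_select_ddl("HR", ["EMPLOYEES", "DEPARTMENTS"], "C##DMSUSER"):
    print(stmt)
```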

(Optional) Step 6: Ensure adequate Redo Log space is available for better performance

The data migration process consists of two main phases: a consistent snapshot of the data, followed by a continuous stream of changes. This change stream is powered by LogMiner and the Oracle database Redo Logs.

The lifetime of a change in the Redo Logs is limited. This is determined by the size and number of the logs, as well as the database's change throughput. If the Redo Logs are too small, it can cause frequent log switching, which negatively impacts migration performance.

To check the state of your database's Redo Logs, log in to Oracle Container Database (CDB) as sysdba, or if you are using a non-CDB architecture, log in to your non-CDB database as sysdba:

sqlplus <DBA_USER>/<PASSWD>@<HOST>:<PORT>/<DB_IDENTIFIER> as sysdba

Where:

  • <DBA_USER> is the Oracle user with sysdba privileges.
  • <PASSWD> is the password for the DBA_USER.
  • <HOST> is the hostname of your Oracle database.
  • <PORT> is the port of your Oracle database.
  • <DB_IDENTIFIER>: for CDB this is the container database name (e.g., CDB1), for non-CDB: either the service name (recommended) or SID (e.g., ORCL)

Then, run the following SQL queries to view the current log file configuration:

SELECT GROUP#, TYPE, MEMBER FROM V_$LOGFILE;
Output
    GROUP# TYPE    MEMBER
---------- ------- --------------------------------------------------
         1 ONLINE  /opt/oracle/oradata/ORCLCDB/redo01.log
         2 ONLINE  /opt/oracle/oradata/ORCLCDB/redo02.log
         3 ONLINE  /opt/oracle/oradata/ORCLCDB/redo03.log
SELECT GROUP#, ARCHIVED, BYTES/1024/1024 MB, STATUS FROM V_$LOG;
Output
    GROUP# ARC    MB STATUS
---------- --- ----- ----------------
         1 YES  2000 INACTIVE
         2 YES  2000 INACTIVE
         3 NO   2000 CURRENT

For example, the output might show three log groups, each with a size of 2000 MB, which may be too small for many production environments.
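
As a rough capacity check, you can estimate how long a change survives in the online redo logs before the oldest group is recycled by dividing total redo capacity by the redo generation rate. The figures below are illustrative assumptions; obtain a real redo generation rate from AWR reports or V$SYSMETRIC:

```python
# Rough estimate of how long a change remains in the online redo logs
# before the oldest group is recycled. All figures are illustrative.
def redo_retention_seconds(groups: int, group_size_mb: float,
                           redo_mb_per_sec: float) -> float:
    return (groups * group_size_mb) / redo_mb_per_sec

# Three 2000 MB groups at a sustained 20 MB/s redo rate cycle in ~5 minutes.
retention_min = redo_retention_seconds(3, 2000, 20.0) / 60
```

A short retention window under load means frequent log switching, which is the performance symptom described above.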

If needed, you can adjust the Redo Logs using commands like these:

ALTER DATABASE ADD LOGFILE GROUP 4 ('/opt/oracle/oradata/ORCLCDB/redo04.log') SIZE 8G;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/opt/oracle/oradata/ORCLCDB/redo05.log') SIZE 8G;
ALTER DATABASE ADD LOGFILE GROUP 6 ('/opt/oracle/oradata/ORCLCDB/redo06.log') SIZE 8G;
ALTER DATABASE ADD LOGFILE GROUP 7 ('/opt/oracle/oradata/ORCLCDB/redo07.log') SIZE 8G;
ALTER SYSTEM ARCHIVE LOG CURRENT;
ALTER SYSTEM CHECKPOINT;
ALTER SYSTEM ARCHIVE LOG CURRENT;
ALTER SYSTEM CHECKPOINT;
ALTER SYSTEM ARCHIVE LOG CURRENT;
ALTER SYSTEM CHECKPOINT;
ALTER DATABASE DROP LOGFILE GROUP 1;
ALTER DATABASE DROP LOGFILE GROUP 2;
ALTER DATABASE DROP LOGFILE GROUP 3;

These commands create four new 8 GB Redo Log groups, each with a single Redo Log file, and then drop the original three groups. The alternating ARCHIVE LOG CURRENT and CHECKPOINT commands cycle the database onto the new groups so that the old groups become INACTIVE and can be dropped.

Note

Always consult your DBA to determine the appropriate redo log size for your production environment.

Step 7: Validate Configuration

The EDB DMS Reader installation (packaged as cdcreader) comes with a helper script that validates the Oracle configuration and helps you identify any issues. After configuring the database, we recommend running the script to ensure all checks pass.

Run the script without arguments to print the usage:

# Run this command if you're using cdcreader:
/opt/cdcreader/oracleConfigValidation.sh

SSL configuration

Ensure you configure your source database server to accept SSL connections so that the EDB DMS Reader can connect to it. To enable this configuration, you must create a server certificate and a server private key, for example with OpenSSL.

Your database is ready for CDC migration.

More information