Data Transmission Service: Migrate data from a self-managed Oracle database to an ApsaraMQ for Kafka instance

Last Updated: Apr 09, 2024

This topic describes how to migrate data from a self-managed Oracle database to an ApsaraMQ for Kafka instance by using Data Transmission Service (DTS).

Prerequisites

  • The source self-managed Oracle database and the destination ApsaraMQ for Kafka instance are created.

    Note

    For more information about the supported versions of the source database and the destination instance, see Overview of data migration scenarios.

  • The self-managed Oracle database is running in ARCHIVELOG mode. Archived log files are accessible and a suitable retention period is set for archived log files. For more information, see Managing Archived Redo Log Files.

  • The supplemental logging feature is enabled for the self-managed Oracle database, and the SUPPLEMENTAL_LOG_DATA_PK and SUPPLEMENTAL_LOG_DATA_UI parameters are set to Yes (a verification sketch follows this list). For more information, see Supplemental Logging.

  • The available storage space of the destination ApsaraMQ for Kafka instance is larger than the total size of the data in the self-managed Oracle database.

  • In the destination ApsaraMQ for Kafka instance, a topic is created to receive the migrated data. For more information, see the Step 1: Create a topic section of the "Step 3: Create resources" topic.
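You can check and enable the ARCHIVELOG and supplemental logging prerequisites directly on the source database. The following is a minimal sketch, run in SQL*Plus as a user with SYSDBA privileges; exact steps vary by Oracle version and deployment, so treat it as an illustration rather than a definitive runbook.

```sql
-- Check the current log mode (expect ARCHIVELOG).
SELECT LOG_MODE FROM V$DATABASE;

-- Enable ARCHIVELOG mode if the query returns NOARCHIVELOG.
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

-- Enable supplemental logging for primary key and unique key columns.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;

-- Verify the settings: both values should be YES.
SELECT SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI FROM V$DATABASE;
```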

Limits

Note

DTS does not migrate foreign keys in the source database to the destination database. Therefore, the cascade and delete operations of the source database are not migrated to the destination database.

Limits on the source database

  • Bandwidth requirements: The server to which the source database belongs must have sufficient outbound bandwidth. Otherwise, the data migration speed decreases.

  • If the source database is an Oracle RAC database connected over Express Connect, VPN Gateway, Smart Access Gateway, Database Gateway, or Cloud Enterprise Network (CEN), you must specify a virtual IP address (VIP) rather than a Single Client Access Name (SCAN) IP address when you configure the source database information. After you specify the VIP, node failover is not supported for the Oracle RAC database.

  • If a field in the source Oracle database contains an empty string of the VARCHAR2 type, which is evaluated as null in the Oracle database, and the corresponding field in the destination database has a NOT NULL constraint, the migration task fails.

  • Requirements for the objects to migrate:

    • The tables to be migrated must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate data records.

    • If the version of your Oracle database is 12c or later, the names of the tables to be migrated cannot exceed 30 bytes in length.

    • If you select tables as the objects to be migrated and you need to modify the tables in the destination database, such as renaming tables or columns, up to 1,000 tables can be migrated in a single data migration task. If you run a task to migrate more than 1,000 tables, a request error occurs. In this case, we recommend that you split the tables and configure multiple tasks to migrate the tables, or configure a task to migrate the entire database.

  • If you want to migrate incremental data, you must make sure that the following requirements are met:

    • Redo logging and archive logging must be enabled.

    • If you perform only incremental data migration, the redo logs and archive logs of the source database must be stored for more than 24 hours. If you perform both full data migration and incremental data migration, the redo logs and archive logs must be stored for at least seven days; after full data migration is complete, you can reduce the retention period to more than 24 hours. Otherwise, DTS may fail to obtain the redo logs and archive logs and the task may fail; in extreme cases, data may be inconsistent or lost. If you do not follow these retention requirements, the service reliability and performance stated in the Service Level Agreement (SLA) of DTS cannot be guaranteed. (A query sketch for checking log availability follows this list.)

  • Limits on operations to be performed on the source database:

    • During schema migration and full data migration, do not perform data definition language (DDL) operations to change the schemas of databases or tables. Otherwise, the data migration task fails.

    • If you perform only full data migration, do not write data to the source database during data migration. Otherwise, data inconsistency between the source and destination databases may occur. To ensure data consistency, we recommend that you select Schema Migration, Full Data Migration, and Incremental Data Migration as the migration types.

    • During data migration, do not update large text fields, such as fields of the LONG type. Otherwise, the data migration task fails.
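You can check the log requirements above with queries against the dynamic performance views. A minimal sketch, assuming the account has SELECT access to V$DATABASE and V$ARCHIVED_LOG:

```sql
-- Confirm that archive logging is enabled (expect ARCHIVELOG).
SELECT LOG_MODE FROM V$DATABASE;

-- Check how far back the archived logs that are still on disk reach.
-- The oldest FIRST_TIME should satisfy the retention requirement:
-- more than 24 hours, or at least 7 days for full plus incremental migration.
SELECT MIN(FIRST_TIME) AS OLDEST_AVAILABLE_LOG
FROM V$ARCHIVED_LOG
WHERE DELETED = 'NO';
```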

Other limits

  • Before you migrate data, evaluate the impact of data migration on the performance of the source database and destination cluster. We recommend that you migrate data during off-peak hours. During full data migration, DTS uses the read and write resources of the source database and destination cluster. This may increase the loads on the database servers.

  • During full data migration, concurrent INSERT operations cause fragmentation in the tables of the destination cluster. After full data migration is complete, the size of used tablespace of the destination cluster is larger than that of the source database.

  • DTS attempts to resume data migration tasks that failed within the last seven days. Before you switch workloads to the destination cluster, you must stop or release the failed tasks. You can also execute the REVOKE statement to revoke the write permissions from the accounts that are used by DTS to access the destination database. Otherwise, the data in the source database overwrites the data in the destination database after the failed task is resumed.

  • If the destination ApsaraMQ for Kafka instance is upgraded or downgraded during data migration, you must restart the instance.

Billing

Schema migration and full data migration

  • Instance configuration fee: Free of charge.

  • Internet traffic fee: Charged only when data is migrated from Alibaba Cloud over the Internet. For more information, see Billing overview.

Incremental data migration

  • Instance configuration fee: Charged. For more information, see Billing overview.

  • Internet traffic fee: Charged only when data is migrated from Alibaba Cloud over the Internet. For more information, see Billing overview.

Migration types


Schema migration

DTS migrates the schemas of the required objects from the source Oracle database to the destination ApsaraMQ for Kafka instance.

Note

In this scenario, DTS does not support schema migration for triggers. We recommend that you delete the triggers of the source database to prevent data inconsistency caused by triggers. For more information, see Configure a data synchronization task for a source database that contains a trigger.

Full data migration

DTS migrates historical data of the required objects from the source Oracle database to the destination ApsaraMQ for Kafka instance.

Note

During schema migration and full data migration, do not perform DDL operations on the objects to be migrated. Otherwise, the objects may fail to be migrated.

Incremental data migration

DTS retrieves redo log files from the source Oracle database. Then, DTS migrates incremental data from the source Oracle database to the destination ApsaraMQ for Kafka instance. Incremental data migration ensures service continuity when you migrate data from a self-managed Oracle database.

SQL operations that can be incrementally migrated

DML: INSERT, UPDATE, and DELETE

DDL:

  • CREATE TABLE, ALTER TABLE, DROP TABLE, RENAME TABLE, and TRUNCATE TABLE

  • CREATE VIEW, ALTER VIEW, and DROP VIEW

  • CREATE PROCEDURE, ALTER PROCEDURE, and DROP PROCEDURE

  • CREATE FUNCTION, DROP FUNCTION, CREATE TRIGGER, and DROP TRIGGER

  • CREATE INDEX and DROP INDEX
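For illustration, the following sketch shows source-side statements that fall within the supported operation types. The orders table and its columns are hypothetical placeholders, not objects from this topic.

```sql
-- DML operations on a migrated table are captured from the redo logs.
INSERT INTO orders (order_id, status) VALUES (1001, 'NEW');
UPDATE orders SET status = 'PAID' WHERE order_id = 1001;
DELETE FROM orders WHERE order_id = 1001;

-- Supported DDL operations are also captured, for example:
ALTER TABLE orders ADD (remark VARCHAR2(200));
CREATE INDEX idx_orders_status ON orders (status);
```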

Preparations

Log on to the self-managed Oracle database, create an account to be used for data migration, and grant the required permissions to the account.

Note

If you have created an account that is granted the permissions listed in the following table, you can skip this step.

Source Oracle database

  • Schema migration: permissions of the schema owner

  • Full data migration: permissions of the schema owner

  • Incremental data migration: permissions of the database administrator (DBA)

To create a database account and grant permissions to the account in a self-managed Oracle database, use the CREATE USER and GRANT statements, as shown in the sketch below.

Important

If you need to migrate incremental data, you must also enable archive logging and supplemental logging to capture incremental changes. For more information, see Configure an Oracle database.
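The following is a minimal sketch of the account setup. The account name dts_user, the password, and the broad grants are illustrative placeholders; grant only the permissions listed in the table above for the migration types that you plan to use.

```sql
-- Create the migration account. The name and password are placeholders.
CREATE USER dts_user IDENTIFIED BY "YourStrongPassw0rd";

-- Allow the account to connect.
GRANT CREATE SESSION TO dts_user;

-- Schema migration and full data migration: read access to the objects to
-- migrate (shown here as a broad grant; scope it to the schemas you need).
GRANT SELECT ANY TABLE TO dts_user;

-- Incremental data migration: this topic requires DBA permissions.
GRANT DBA TO dts_user;
```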

Procedure

  1. Go to the Data Migration Tasks page.

    1. Log on to the Data Management (DMS) console.

    2. In the top navigation bar, click DTS.

    3. In the left-side navigation pane, choose DTS (DTS) > Data Migration.

  2. From the drop-down list next to Data Migration Tasks, select the region in which the data migration instance resides.

    Note

    If you use the new DTS console, you must select the region in which the data migration instance resides in the upper-left corner.

  3. Click Create Task. In the Create Task wizard, configure the source and destination databases.

    Warning

    After you configure the source and destination databases, we recommend that you read the limits that are displayed in the upper part of the page. Otherwise, the task may fail or data inconsistency may occur.

    Task Name

    The name of the task. DTS automatically assigns a name to the task. We recommend that you specify a descriptive name that makes the task easy to identify. You do not need to specify a unique task name.

    Source Database

    Select a DMS database instance

    The database instance that you want to use. You can choose whether to use an existing instance based on your business requirements.

    • If you select an existing instance, DTS automatically populates the parameters for the database.

    • If you do not select an existing instance, you must configure parameters for the source database.

    Database Type

    The type of the source database. Select Oracle.

    Access Method

    The access method of the source database. Select the access method based on the location in which the source database is deployed. In this example, Self-managed Database on ECS is selected.

    Note

    If your source database is a self-managed database, you must deploy the network environment for the database. For more information, see Preparation overview.

    Instance Region

    The region where the source Oracle database resides.

    ECS Instance ID

    The ID of the Elastic Compute Service (ECS) instance that hosts the self-managed Oracle database.

    Port Number

    The service port number of the self-managed Oracle database. Default value: 1521.

    Oracle Type

    The architecture of the source database. In this example, Non-RAC Instance is selected.

    • If you select Non-RAC Instance, you must configure the SID parameter.

    • If you select RAC or PDB Instance, you must configure the Service Name parameter.

    Database Account

    The account of the source Oracle database. For more information about the permissions that are required for the account, see the Preparations section of this topic.

    Database Password

    The password of the database account.

    Destination Database

    Select a DMS database instance

    The database instance that you want to use. You can choose whether to use an existing instance based on your business requirements.

    • If you select an existing instance, DTS automatically populates the parameters for the instance.

    • If you do not select an existing instance, you must configure parameters for the destination database.

    Database Type

    The type of the destination database. Select Kafka.

    Access Method

    The access method of the destination database. Select Express Connect, VPN Gateway, or Smart Access Gateway.

    Note

    You cannot select ApsaraMQ for Kafka as the access method. You can use the ApsaraMQ for Kafka instance as a self-managed Kafka database to configure data migration.

    Instance Region

    The region in which the destination ApsaraMQ for Kafka instance resides.

    Connected VPC

    The ID of the virtual private cloud (VPC) to which the destination ApsaraMQ for Kafka instance belongs. To obtain the VPC ID, perform the following operations: Log on to the ApsaraMQ for Kafka console and go to the Instance Details page of the ApsaraMQ for Kafka instance. In the Configuration Information section of the Instance Information tab, view the VPC ID.

    IP Address or Domain Name

    An IP address of the default endpoint of the destination ApsaraMQ for Kafka instance.

    Note

    To obtain an IP address, perform the following operations: Log on to the ApsaraMQ for Kafka console and go to the Instance Details page of the ApsaraMQ for Kafka instance. In the Endpoint Information section of the Instance Information tab, obtain an IP address from the Default Endpoint parameter.

    Port Number

    The service port number of the ApsaraMQ for Kafka instance. Default value: 9092.

    Database Account

    The database account of the ApsaraMQ for Kafka instance.

    Note

    If the instance type of the ApsaraMQ for Kafka instance is VPC Type, you do not need to specify the Database Account and Database Password parameters.

    Database Password

    The password of the database account.

    Kafka Version

    The version of the destination ApsaraMQ for Kafka instance.

    Encryption

    Specifies whether to encrypt the connection to the destination instance. Select Non-encrypted or SCRAM-SHA-256 based on your business and security requirements.

    Topic

    The topic that is used to receive the migrated data. Select a topic from the drop-down list.

    Topic That Stores DDL Information

    The topic that is used to store the DDL information. Select a topic from the drop-down list. If you do not configure this parameter, the DDL information is stored in the topic that is specified by the Topic parameter.

    Use Kafka Schema Registry

    Specifies whether to use Kafka Schema Registry. Kafka Schema Registry provides a serving layer for your metadata. It provides a RESTful API to store and retrieve your Avro schemas. Valid values:

    • No

    • Yes. If you select Yes, you must enter the URL or IP address that is registered in Kafka Schema Registry for your Avro schemas.

  4. If an IP address whitelist is configured for your self-managed database, add the CIDR blocks of DTS servers to the IP address whitelist. Then, click Test Connectivity and Proceed.

    Warning

    If the public CIDR blocks of DTS servers are automatically or manually added to the whitelist of a database instance or to the security group rules of an ECS instance, security risks may arise. Therefore, before you use DTS to migrate data, you must understand and acknowledge the potential risks and take preventive measures, including but not limited to the following: enhance the security of your username and password, limit the exposed ports, authenticate API calls, regularly check the whitelist or security group rules and forbid unauthorized CIDR blocks, and connect the database instance to DTS over Express Connect, VPN Gateway, or Smart Access Gateway.

  5. Configure the objects to be migrated and advanced settings.

    Parameter

    Description

    Migration Types

    • To perform only full data migration, select Schema Migration and Full Data Migration.

    • To ensure service continuity during data migration, select Schema Migration, Full Data Migration, and Incremental Data Migration.

    Note

    If you do not select Incremental Data Migration, we recommend that you do not write data to the source database during data migration. This ensures data consistency between the source and destination databases.

    Processing Mode of Conflicting Tables

    • Precheck and Report Errors: checks whether the destination database contains tables that use the same names as tables in the source database. If the source and destination databases do not contain tables that have identical table names, the precheck is passed. Otherwise, an error is returned during the precheck and the data migration task cannot be started.

      Note

      If the source and destination databases contain identical table names and the tables in the destination database cannot be deleted or renamed, you can use the object name mapping feature to rename the tables that are migrated to the destination database. For more information, see Map object names.

    • Ignore Errors and Proceed: skips the precheck for identical table names in the source and destination databases.

      Warning

      If you select Ignore Errors and Proceed, data inconsistency may occur and your business may be exposed to the following potential risks:

      • If the source and destination databases have the same schema, DTS does not migrate data records that have the same primary keys as data records in the destination database.

      • If the source and destination databases have different schemas, only specific columns are migrated or the data migration task fails. Proceed with caution.

    Data Format in Kafka

    The format in which data is stored in the destination ApsaraMQ for Kafka instance.

    • If you select DTS Avro, data is parsed based on the schema definition of DTS Avro. For more information, visit GitHub.

    • If you select SharePlex JSON, data is stored in the SharePlex JSON format. For more information, see the Shareplex Json section of the "Data formats of a Kafka cluster" topic.

    Policy for Shipping Data to Kafka Partitions

    The policy for migrating data to Kafka partitions. Select a policy based on your business requirements. For more information, see Specify the policy for migrating data to Kafka partitions.

    Source Objects

    Select one or more objects from the Source Objects section and click the rightwards arrow icon to add the objects to the Selected Objects section.

    Selected Objects

    • To rename an object that you want to migrate to the destination instance, right-click the object in the Selected Objects section. For more information, see Map the name of a single object.

    • To rename multiple objects at a time, click Batch Edit in the upper-right corner of the Selected Objects section. For more information, see Map multiple object names at a time.

      Note

      If you use the object name mapping feature to rename an object, other objects that depend on the object may fail to be migrated.

    • To specify WHERE conditions to filter data, right-click a table in the Selected Objects section. In the dialog box that appears, specify the conditions (see the example after this list). For more information, see Set filter conditions.

    • To select the SQL operations that are performed on a specific database or table, right-click the object in the Selected Objects section. In the dialog box that appears, select the SQL operations that you want to incrementally migrate. For more information, see the SQL operations that can be incrementally migrated section of this topic.
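    A filter condition is entered as a WHERE-clause fragment that DTS applies to the table. The following fragment is a hypothetical sketch for an orders table; the table and column names are placeholders, and the exact accepted syntax is described in Set filter conditions.

```sql
-- Entered in the filter condition field; only matching rows are migrated.
order_date >= DATE '2024-01-01' AND status = 'PAID'
```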

  6. Click Next: Advanced Settings to configure advanced settings. The following table describes the parameters.

    Parameter

    Description

    Select the dedicated cluster used to schedule the task

    By default, DTS schedules the task to a shared cluster. You do not need to configure this parameter. You can also purchase and specify a dedicated cluster of the required specifications to run the data migration task. For more information, see What is a DTS dedicated cluster.

    Set Alerts

    Specifies whether to configure alerting for the data migration task. If the task fails or the migration latency exceeds the specified threshold, the alert contacts receive notifications. Valid values:

    • No: does not configure alerting.

    • Yes: configures alerting. If you select Yes, you must also specify the alert threshold and alert contacts. For more information, see the Configure monitoring and alerting for a new DTS task section of the "Configure monitoring and alerting" topic.

    Retry Time for Failed Connections

    The retry time range for failed connections. If the source or destination database fails to be connected after the data migration task is started, DTS immediately retries a connection within the retry time range. Valid values: 10 to 1440. Unit: minutes. Default value: 720. We recommend that you set the parameter to a value greater than 30. If DTS is reconnected to the source and destination databases within the specified retry time range, DTS resumes the data migration task. Otherwise, the data migration task fails.

    Note
    • If you specify different retry time ranges for multiple data migration tasks that share the same source or destination database, the value that is specified later takes precedence.

    • When DTS retries a connection, you are charged for the DTS instance. We recommend that you specify the retry time range based on your business requirements. You can also release the DTS instance at the earliest opportunity after the source database and destination instance are released.

    The wait time before a retry when other issues occur in the source and destination databases.

    The retry time range for other issues. For example, if DDL or DML operations fail to be performed after the data migration task is started, DTS immediately retries the operations within the retry time range. Valid values: 1 to 1440. Unit: minutes. Default value: 10. We recommend that you set the parameter to a value greater than 10. If the failed operations are successfully performed within the specified retry time range, DTS resumes the data migration task. Otherwise, the data migration task fails.

    Important

    The value of the The wait time before a retry when other issues occur in the source and destination databases parameter must be smaller than the value of the Retry Time for Failed Connections parameter.

    Enable Throttling for Full Data Migration

    Specifies whether to enable throttling for full data migration. During full data migration, DTS uses the read and write resources of the source and destination databases. This may increase the loads of the database servers. You can enable throttling for full data migration based on your business requirements. To configure throttling, you must configure the Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s) parameters. This reduces the loads of the destination database server.

    Note

    You can configure this parameter only when you select Full Data Migration for the Migration Types parameter.

    Enable Throttling for Incremental Data Migration

    Specifies whether to enable throttling for incremental data migration. To configure throttling, you must configure the RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s) parameters. This reduces the loads of the destination database server.

    Note

    You can configure this parameter only when you select Incremental Data Migration for the Migration Types parameter.

    Environment Tag

    The environment tag that is used to identify the DTS instance. You can select an environment tag based on your business requirements.

    Configure ETL

    Specifies whether to enable the extract, transform, and load (ETL) feature. For more information, see What is ETL? Valid values:

    • Yes: enables the ETL feature. You can then enter data processing statements in the code editor.

    • No: does not enable the ETL feature.

    Actual Write Code

    The encoding format in which data is written to the destination database. Select a format based on your business requirements.

  7. In the lower part of the page, click Next: Save Task Settings and Precheck.

    You can move the pointer over Next: Save Task Settings and Precheck and click Preview API Call to view the parameter settings of the API operation that is called to configure the instance.

    Note
    • Before you can start the data migration task, DTS performs a precheck. You can start the data migration task only after the task passes the precheck.

    • If the task fails to pass the precheck, click View Details next to each failed item. After you troubleshoot the issues based on the causes, run a precheck again.

    • If an alert is generated for an item during the precheck, perform the following operations based on the scenario:

      • If the alert item cannot be ignored, click View Details next to the failed item and troubleshoot the issues. Then, run a precheck again.

      • If the alert item can be ignored, click Confirm Alert Details. In the View Details dialog box, click Ignore. In the message that appears, click OK. Then, click Precheck Again to run a precheck again. If you ignore the alert item, data inconsistency may occur and your business may be exposed to potential risks.

  8. Wait until the success rate becomes 100%. Then, click Next: Purchase Instance.

  9. On the Purchase Instance page, configure the Instance Class parameter for the data migration instance. The following table describes the parameters.

    Section

    Parameter

    Description

    New Instance Class

    Resource Group

    The resource group to which the data migration instance belongs. Default value: default resource group. For more information, see What is Resource Management?

    Instance Class

    DTS provides instance classes that vary in the migration speed. You can select an instance class based on your business scenario. For more information, see Specifications of data migration instances.

  10. Read and agree to Data Transmission Service (Pay-as-you-go) Service Terms by selecting the check box.

  11. Click Buy and Start to start the data migration task. You can view the progress of the task on the Data Migration Tasks page.