
Data Transmission Service:Migrate Amazon RDS Oracle to Alibaba Cloud RDS MySQL

Last Updated:Mar 19, 2025

This topic describes how to migrate data from an Amazon RDS Oracle database to an Alibaba Cloud RDS MySQL instance by using Data Transmission Service (DTS). DTS supports schema migration, full data migration, and incremental data migration, which allows you to migrate databases smoothly without interrupting your applications.

Prerequisites

  • To enable DTS to connect to Amazon RDS Oracle over the public network, set the Public Availability of Amazon RDS Oracle to Yes.

  • The Amazon RDS Oracle version is 9i, 10g, 11g, or 12c and later (non-multitenant architecture).

  • The Alibaba Cloud RDS MySQL version should be 5.6 or 5.7.

  • The Alibaba Cloud RDS MySQL instance must have at least twice the storage space occupied by the Amazon RDS Oracle objects to be migrated.

    Note

    The binary logs (binlog) generated during migration take up some space, but they are automatically cleared after the migration is complete.

  • You are familiar with the capabilities and limits of DTS when it is used to migrate data from an Oracle database. You can use Advanced Database & Application Migration (ADAM) to evaluate the database, which helps you migrate data to the cloud smoothly. For more information, see Prepare an Oracle database and Overview.

Limits

  • DTS uses read and write resources of the source and destination databases during full data migration. This may increase the loads of the database servers. If the database performance is unfavorable, the specification is low, or the data volume is large, database services may become unavailable. For example, DTS occupies a large amount of read and write resources in the following cases: a large number of slow SQL queries are performed on the source database, the tables have no primary keys, or a deadlock occurs in the destination database. Before you migrate data, evaluate the impact of data migration on the performance of the source and destination databases. We recommend that you migrate data during off-peak hours. For example, you can migrate data when the CPU utilization of the source and destination databases is less than 30%.

  • The tables to be migrated in the source database must have PRIMARY KEY or UNIQUE constraints and all fields must be unique. Otherwise, the destination database may contain duplicate data records.
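
    Before you configure the task, you can list the tables that lack such constraints. The following query is a sketch that uses the standard Oracle dictionary views USER_TABLES and USER_CONSTRAINTS; run it as the schema owner:

      -- List tables that have neither a PRIMARY KEY nor a UNIQUE constraint
      SELECT t.table_name
      FROM   user_tables t
      WHERE  NOT EXISTS (
               SELECT 1
               FROM   user_constraints c
               WHERE  c.table_name = t.table_name
               AND    c.constraint_type IN ('P', 'U')
             );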

  • DTS uses the ROUND(COLUMN,PRECISION) function to retrieve values from columns of the FLOAT or DOUBLE data type. If you do not specify a precision, DTS sets the precision for the FLOAT data type to 38 digits and the precision for the DOUBLE data type to 308 digits. You must check whether the precision settings meet your business requirements.
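
    As a quick illustration of why the precision matters, you can compare a rounded value with the full-precision value. The following query is illustrative only and can be run against any Oracle database:

      -- If the rounded value no longer matches the stored value at your chosen
      -- precision, the migrated data may differ from the source.
      SELECT ROUND(1/3, 10) AS rounded_value, 1/3 AS full_value FROM DUAL;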

  • DTS automatically creates a destination database in the ApsaraDB RDS for MySQL instance. However, if the name of the source database is invalid, you must manually create a database in the ApsaraDB RDS for MySQL instance before you configure the data migration task.

    Note

    For more information about the database naming conventions of ApsaraDB RDS for MySQL databases and how to create a database, see Manage databases.

  • If a data migration task fails, DTS automatically resumes the task. Before you switch your workloads to the destination instance, stop or release the data migration task. Otherwise, the data in the source database overwrites the data in the destination instance after the task is resumed.

Billing rules

  • Schema migration and full data migration

    • Task configuration fee: Free of charge.

    • Internet traffic fee: Charged only when data is migrated from Alibaba Cloud over the Internet. For more information, see Billing overview.

  • Incremental data migration

    • Task configuration fee: Charged. For more information, see Billing overview.

    • Internet traffic fee: Charged only when data is migrated from Alibaba Cloud over the Internet. For more information, see Billing overview.

Migration types

  • Schema migration

    DTS supports schema migration for tables, indexes, constraints, and sequences. Views, synonyms, triggers, stored procedures, stored functions, packages, and user-defined types are not supported.

  • Full data migration

    DTS migrates all historical data of the migration objects from the Amazon RDS Oracle database to the Alibaba Cloud RDS MySQL instance database.

  • Incremental data migration

    Following full migration, DTS polls and captures redo logs generated by the Amazon RDS Oracle database and synchronizes incremental update data to the Alibaba Cloud RDS MySQL instance database. Incremental data migration maintains service continuity during Oracle database migration without stopping local applications.

SQL operations supported by incremental data migration

  • INSERT, DELETE, UPDATE

  • CREATE TABLE

    Note

    Partitioned tables and CREATE TABLE statements that contain functions are not supported.

  • ALTER TABLE (includes ADD COLUMN, DROP COLUMN, RENAME COLUMN, and ADD INDEX)

  • DROP TABLE

  • RENAME TABLE, TRUNCATE TABLE, CREATE INDEX
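
  For example, of the following two DDL statements, only the first is replicated during incremental data migration; the second creates a partitioned table, which is excluded (the table names are hypothetical):

    ALTER TABLE orders ADD (status VARCHAR2(20));

    CREATE TABLE orders_p (id NUMBER)
      PARTITION BY RANGE (id)
      (PARTITION p1 VALUES LESS THAN (100));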

Permissions required for database accounts

  • Amazon RDS Oracle database

    • Schema migration: permissions of the schema owner

    • Full data migration: permissions of the schema owner

    • Incremental data migration: permissions of the Master User

  • Alibaba Cloud RDS MySQL instance

    • Schema migration, full data migration, and incremental data migration: read and write permissions on the database to be migrated

For more information about how to create a database account and grant permissions to it, see the account management documentation for Amazon RDS Oracle and ApsaraDB RDS for MySQL.

Data type mapping relationships

For more information, see the data type mapping relationships between heterogeneous databases.
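
The exact mappings are defined in that reference. As an illustration only (confirm each mapping against the reference), an Oracle table and a typical MySQL equivalent produced by schema migration might look as follows:

  -- Oracle source
  CREATE TABLE orders (
    id     NUMBER(10)    PRIMARY KEY,
    note   VARCHAR2(200),
    amount NUMBER(12,2),
    ts     DATE
  );

  -- Typical MySQL destination (for example, NUMBER -> DECIMAL,
  -- VARCHAR2 -> VARCHAR, DATE -> DATETIME)
  CREATE TABLE orders (
    id     DECIMAL(10,0) PRIMARY KEY,
    note   VARCHAR(200),
    amount DECIMAL(12,2),
    ts     DATETIME
  );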

Preparations before data migration

  1. Log on to the Amazon RDS Management Console.

  2. Navigate to the Basic Information page for the Amazon RDS Oracle instance.

  3. In the Security group rules section, click the name of the security group to which the existing inbound rule belongs.


  4. On the Security Group Settings page, you can add the DTS server address for the respective region to the inbound rules. For details on the IP address segment, see Add the IP address segment of DTS servers.


    Note
    • You need to add only the CIDR blocks of DTS servers that reside in the same region as the destination database. For example, the source database resides in the Singapore region and the destination database resides in the China (Hangzhou) region. You need to add only the CIDR blocks of DTS servers that reside in the China (Hangzhou) region.

    • You can add all of the required CIDR blocks to the inbound rule at a time.

    • If you have other questions, see the official documentation of Amazon or contact technical support.

  5. Modify the log configuration for Amazon RDS Oracle. You can skip this step if incremental data migration isn't necessary.

    • If the Amazon RDS Oracle version is 12c or later (non-multitenant architecture), configure the logs as follows:

      1. Connect to the Amazon RDS Oracle database using the Master User account through the SQL*Plus tool.

      2. Enable archive logging and supplemental logging.

        Log type

        Steps to enable

        Archive log

        1. Execute the following command to check whether archive logging is enabled:

          SELECT LOG_MODE FROM v$database;
        2. Check and set the retention period of archive logs.

          Note

          We recommend that you retain archive logs for at least 72 hours. The following example sets the retention period to 72 hours.

          exec rdsadmin.rdsadmin_util.show_configuration;
          exec rdsadmin.rdsadmin_util.set_configuration('archivelog retention hours', 72); 

        Supplemental log

        Enable database-level or table-level supplemental logging based on your business requirements:

        • Enable database-level supplemental logging

          1. Execute the following command to check whether database-level supplemental logging is enabled

            SELECT supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_ui FROM v$database;
          2. Enable supplemental logging for primary keys and unique keys at the database level

            exec rdsadmin.rdsadmin_util.alter_supplemental_logging('ADD', 'PRIMARY KEY');
            exec rdsadmin.rdsadmin_util.alter_supplemental_logging('ADD', 'UNIQUE');
        • Enable table-level supplemental logging (choose one of the following)

          • Enable table-level supplemental logging for all columns

            exec rdsadmin.rdsadmin_util.alter_supplemental_logging('ADD', 'ALL');
          • Enable table-level supplemental logging for primary keys

            exec rdsadmin.rdsadmin_util.alter_supplemental_logging('ADD', 'PRIMARY KEY');
      3. Grant more fine-grained permissions to the Amazon RDS Oracle database account.

        Fine-grained authorization

        -- Create a database account (RDSDT_DTSACCT is used as an example) and grant permissions
        create user RDSDT_DTSACCT IDENTIFIED BY RDSDT_DTSACCT;
        grant create session to RDSDT_DTSACCT;
        grant connect to RDSDT_DTSACCT;
        grant resource to RDSDT_DTSACCT;
        exec rdsadmin.rdsadmin_util.grant_sys_object('V_$LOGMNR_LOGS','RDSDT_DTSACCT','SELECT');
        exec rdsadmin.rdsadmin_util.grant_sys_object('ALL_OBJECTS','RDSDT_DTSACCT','SELECT');
        exec rdsadmin.rdsadmin_util.grant_sys_object('ALL_TAB_COLS','RDSDT_DTSACCT','SELECT');
        exec rdsadmin.rdsadmin_util.grant_sys_object('DBA_REGISTRY','RDSDT_DTSACCT','SELECT');
        grant select any table to RDSDT_DTSACCT;
        grant select any transaction to RDSDT_DTSACCT;
        -- v$log privileges
        exec rdsadmin.rdsadmin_util.grant_sys_object('V_$LOG','RDSDT_DTSACCT','SELECT');
        -- v$logfile privileges
        exec rdsadmin.rdsadmin_util.grant_sys_object('V_$LOGFILE','RDSDT_DTSACCT','SELECT');
        -- v$archived_log privileges
        exec rdsadmin.rdsadmin_util.grant_sys_object('V_$ARCHIVED_LOG','RDSDT_DTSACCT','SELECT');
        -- v$parameter privileges
        exec rdsadmin.rdsadmin_util.grant_sys_object('V_$PARAMETER','RDSDT_DTSACCT','SELECT');
        -- v$database privileges
        exec rdsadmin.rdsadmin_util.grant_sys_object('V_$DATABASE','RDSDT_DTSACCT','SELECT');
        -- v$active_instances privileges
        exec rdsadmin.rdsadmin_util.grant_sys_object('V_$ACTIVE_INSTANCES','RDSDT_DTSACCT','SELECT');
        -- v$instance privileges
        exec rdsadmin.rdsadmin_util.grant_sys_object('V_$INSTANCE','RDSDT_DTSACCT','SELECT');
        -- v$logmnr_contents privileges
        exec rdsadmin.rdsadmin_util.grant_sys_object('V_$LOGMNR_CONTENTS','RDSDT_DTSACCT','SELECT');
        exec rdsadmin.rdsadmin_util.grant_sys_object('USER$','RDSDT_DTSACCT','SELECT');
        exec rdsadmin.rdsadmin_util.grant_sys_object('OBJ$','RDSDT_DTSACCT','SELECT');
        exec rdsadmin.rdsadmin_util.grant_sys_object('COL$','RDSDT_DTSACCT','SELECT');
        exec rdsadmin.rdsadmin_util.grant_sys_object('IND$','RDSDT_DTSACCT','SELECT');
        exec rdsadmin.rdsadmin_util.grant_sys_object('ICOL$','RDSDT_DTSACCT','SELECT');
        exec rdsadmin.rdsadmin_util.grant_sys_object('CDEF$','RDSDT_DTSACCT','SELECT');
        exec rdsadmin.rdsadmin_util.grant_sys_object('CCOL$','RDSDT_DTSACCT','SELECT');
        exec rdsadmin.rdsadmin_util.grant_sys_object('TABPART$','RDSDT_DTSACCT','SELECT');
        exec rdsadmin.rdsadmin_util.grant_sys_object('TABSUBPART$','RDSDT_DTSACCT','SELECT');
        exec rdsadmin.rdsadmin_util.grant_sys_object('TABCOMPART$','RDSDT_DTSACCT','SELECT');
        grant LOGMINING TO RDSDT_DTSACCT;
        grant EXECUTE_CATALOG_ROLE to RDSDT_DTSACCT;
        exec rdsadmin.rdsadmin_util.grant_sys_object('DBMS_LOGMNR','RDSDT_DTSACCT','EXECUTE');
        grant select on v$database to rdsdt_dtsacct;
        grant select on dba_objects to rdsdt_dtsacct;
        grant select on DBA_TAB_COMMENTS to rdsdt_dtsacct;
        grant select on dba_tab_cols to rdsdt_dtsacct;
        grant select_catalog_role TO rdsdt_dtsacct;
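
        After the script runs, you can optionally verify the account from a new session. These checks are a sketch; they assume you connect as RDSDT_DTSACCT:

          -- Privileges available to the current session
          SELECT * FROM session_privs;
          -- Object grants received by the account
          SELECT table_name, privilege FROM all_tab_privs WHERE grantee = 'RDSDT_DTSACCT';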
    • If the Amazon RDS Oracle version is 9i, 10g, or 11g, configure the logs as follows:

      1. Connect to the Amazon RDS Oracle database using the Master User account through the SQL*Plus tool.

      2. Run the archive log list; command to verify that Amazon RDS Oracle is in archive mode.

        Note

        If the instance remains in NOARCHIVELOG mode, you should enable archiving. For more information, see Managing Archived Redo Logs.

      3. Enable force logging mode.

        exec rdsadmin.rdsadmin_util.force_logging(p_enable => true);
      4. Enable supplemental logging for primary keys.

        begin
          rdsadmin.rdsadmin_util.alter_supplemental_logging(p_action => 'ADD', p_type => 'PRIMARY KEY');
        end;
        /
      5. Enable supplemental logging for unique keys.

        begin
          rdsadmin.rdsadmin_util.alter_supplemental_logging(p_action => 'ADD', p_type => 'UNIQUE');
        end;
        /
      6. Set the retention period of archive logs.

        Note

        We recommend that you retain archive logs for at least 24 hours. Adjust the retention period based on your business requirements.

        begin
          rdsadmin.rdsadmin_util.set_configuration(name => 'archivelog retention hours', value => '24');
        end;
        /
      7. Commit the changes.

        commit;
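
      The preceding steps can be combined into a single SQL*Plus session. This sketch repeats the same commands; adjust the retention period to your own recovery window:

        exec rdsadmin.rdsadmin_util.force_logging(p_enable => true);

        begin
          rdsadmin.rdsadmin_util.alter_supplemental_logging(p_action => 'ADD', p_type => 'PRIMARY KEY');
        end;
        /

        begin
          rdsadmin.rdsadmin_util.alter_supplemental_logging(p_action => 'ADD', p_type => 'UNIQUE');
        end;
        /

        begin
          rdsadmin.rdsadmin_util.set_configuration(name => 'archivelog retention hours', value => '24');
        end;
        /

        commit;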

Procedure (in the new DTS console)

  1. Use one of the following methods to go to the Data Migration page and select the region in which the data migration instance resides.

    DTS console

    1. Log on to the DTS console.

    2. In the left-side navigation pane, click Data Migration.

    3. In the upper-left corner of the page, select the region in which the data migration instance resides.

    DMS console

    Note

    The actual operation may vary based on the mode and layout of the DMS console. For more information, see Simple mode and Customize the layout and style of the DMS console.

    1. Log on to the DMS console.

    2. In the top navigation bar, move the pointer over Data + AI > DTS (DTS) > Data Migration.

    3. From the drop-down list to the right of Data Migration Tasks, select the region in which the data migration instance resides.

  2. Click Create Task to go to the task configuration page.

  3. Configure the source and destination databases. The following table describes the parameters.

    Warning

    After you configure the source and destination databases, we recommend that you read the Limits that are displayed in the upper part of the page. Otherwise, the task may fail or data inconsistency may occur.

    N/A

    • Task Name: The name of the DTS task. DTS automatically generates a task name. We recommend that you specify an informative name that makes it easy to identify the task. You do not need to specify a unique task name.

    Source Database Information

    • Select a DMS database instance: The instance that you want to use. You can choose whether to use an existing instance based on your business requirements.

      • If you select an existing instance, DTS automatically populates the parameters for the database.

      • If you do not select an existing instance, you must configure the following database information.

    • Database Type: Select Oracle.

    • Access Method: Select Public IP.

    • Instance Region: Select the region where the Amazon RDS Oracle database resides.

      Note

      If the region where the Amazon RDS Oracle database resides is not available in the options, select the region closest to the database.

    • Domain Name or IP: Enter the endpoint of the Amazon RDS Oracle database.

      Note

      You can obtain the connection information of the database on the Basic Information page of Amazon RDS Oracle.

    • Port: Enter the service port of the Amazon RDS Oracle database. The default port is 1521.

    • Oracle Type:

      • Non-rac Instance: If you select this option, you must also specify the SID.

      • RAC Or PDB Instance: If you select this option, you must also specify the ServiceName.

      In this example, Non-rac Instance is selected.

    • Database Account: Enter the database account of Amazon RDS Oracle. For permission requirements, see Permissions required for database accounts.

    • Database Password: The password that is used to access the database instance.

    Destination Database Information

    • Select a DMS database instance: The instance that you want to use. You can choose whether to use an existing instance based on your business requirements.

      • If you select an existing instance, DTS automatically populates the parameters for the database.

      • If you do not select an existing instance, you must configure the following database information.

    • Database Type: Select MySQL.

    • Access Method: Select Cloud Instance.

    • Instance Region: Select the region where the destination RDS MySQL instance resides.

    • RDS Instance ID: Select the ID of the destination RDS MySQL instance.

    • Database Account: Enter the database account of the destination RDS MySQL instance. For permission requirements, see Permissions required for database accounts.

    • Database Password: The password that is used to access the database instance.

    • Encryption: Specifies whether to encrypt the connection to the database instance. Select Non-encrypted or SSL-encrypted based on your business requirements. If you set this parameter to SSL-encrypted, you must enable SSL encryption for the ApsaraDB RDS for MySQL instance before you configure the DTS task. For more information, see Use a cloud certificate to enable SSL encryption.

  4. In the lower part of the page, click Test Connectivity and Proceed, and then click Test Connectivity in the CIDR Blocks of DTS Servers dialog box that appears.

    Note

    Make sure that the CIDR blocks of DTS servers can be automatically or manually added to the security settings of the source and destination databases to allow access from DTS servers. For more information, see Add the CIDR blocks of DTS servers.

  5. Configure the objects to be migrated.

    1. On the Configure Objects page, configure the objects that you want to migrate.

      Configuration

      Description

      Migration Types

      • To perform only full data migration, select Schema Migration and Full Data Migration.

      • To ensure service continuity during data migration, select Schema Migration, Full Data Migration, and Incremental Data Migration.

      Note
      • If you do not select Schema Migration, make sure a database and a table are created in the destination database to receive data and the object name mapping feature is enabled in Selected Objects.

      • If you do not select Incremental Data Migration, we recommend that you do not write data to the source database during data migration. This ensures data consistency between the source and destination databases.

      Processing Mode of Conflicting Tables

      • Precheck and Report Errors: checks whether the destination database contains tables that use the same names as tables in the source database. If the source and destination databases do not contain tables that have identical table names, the precheck is passed. Otherwise, an error is returned during the precheck and the data migration task cannot be started.

        Note

        If the source and destination databases contain tables with identical names and the tables in the destination database cannot be deleted or renamed, you can use the object name mapping feature to rename the tables that are migrated to the destination database. For more information, see Map object names.

      • Ignore Errors and Proceed: skips the precheck for identical table names in the source and destination databases.

        Warning

        If you select Ignore Errors and Proceed, data inconsistency may occur and your business may be exposed to the following potential risks:

        • If the source and destination databases have the same schema, and a data record has the same primary key as an existing data record in the destination database, the following scenarios may occur:

          • During full data migration, DTS does not migrate the data record to the destination database. The existing data record in the destination database is retained.

          • During incremental data migration, DTS migrates the data record to the destination database. The existing data record in the destination database is overwritten.

        • If the source and destination databases have different schemas, only specific columns are migrated or the data migration task fails. Proceed with caution.

      Source Objects

      Select one or more objects from the Source Objects section. Click the rightwards arrow icon to add the objects to the Selected Objects section.

      Note

      The granularity of selecting migration objects is database, table, or column.

      Selected Objects

      • To rename an object that you want to migrate to the destination instance, right-click the object in the Selected Objects section. For more information, see Map the name of a single object.

      • To rename multiple objects at a time, click Batch Edit in the upper-right corner of the Selected Objects section. For more information, see Map multiple object names at a time.

      Note
      • If you use the object name mapping feature, other objects that depend on this object may fail to be migrated.

      • If you need to set WHERE conditions to filter data, right-click the table to be migrated in Selected Objects and set the filter conditions in the dialog box that appears. For more information, see Filter task data by SQL conditions.

      • If you need to select SQL operations for incremental migration at the database or table level, right-click the migration object in Selected Objects and select the required SQL operations for incremental migration in the dialog box that appears.

    2. Click Next: Advanced Settings to configure advanced settings.

      Configuration

      Description

      Dedicated Cluster for Task Scheduling

      By default, DTS schedules tasks on shared clusters without the need for parameter configuration. However, you can purchase a dedicated cluster with specific specifications for running DTS migration tasks. For more information, see What is a DTS dedicated cluster.

      Retry Time for Failed Connections

      The retry time range for failed connections. If the source or destination database fails to be connected after the data migration task is started, DTS immediately retries a connection within the retry time range. Valid values: 10 to 1,440. Unit: minutes. Default value: 720. We recommend that you set the parameter to a value greater than 30. If DTS is reconnected to the source and destination databases within the specified retry time range, DTS resumes the data migration task. Otherwise, the data migration task fails.

      Note
      • If you specify different retry time ranges for multiple data migration tasks that share the same source or destination database, the value that is specified later takes precedence.

      • When DTS retries a connection, you are charged for the DTS instance. We recommend that you specify the retry time range based on your business requirements. You can also release the DTS instance at the earliest opportunity after the source database and destination instance are released.

      Retry Time for Other Issues

      The retry time range for other issues. For example, if DDL or DML operations fail to be performed after the data migration task is started, DTS immediately retries the operations within the retry time range. Valid values: 1 to 1440. Unit: minutes. Default value: 10. We recommend that you set the parameter to a value greater than 10. If the failed operations are successfully performed within the specified retry time range, DTS resumes the data migration task. Otherwise, the data migration task fails.

      Important

      The value of the Retry Time for Other Issues parameter must be smaller than the value of the Retry Time for Failed Connections parameter.

      Enable Throttling for Full Data Migration

      Specifies whether to enable throttling for full data migration. During full data migration, DTS uses the read and write resources of the source and destination databases. This may increase the loads of the database servers. You can enable throttling for full data migration based on your business requirements. To configure throttling, you must configure the Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s) parameters. This reduces the loads of the destination database server.

      Note

      You can configure this parameter only if you select Full Data Migration for the Migration Types parameter.

      Enable Throttling for Incremental Data Migration

      Specifies whether to enable throttling for incremental data migration. To configure throttling, you must configure the RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s) parameters. This reduces the loads of the destination database server.

      Note

      You can configure this parameter only if you select Incremental Data Migration for the Migration Types parameter.

      Environment Tag

      You can select an environment tag to identify the instance based on your actual situation. In this example, no environment tag is selected.

      Actual Write Code

      You can select the encoding type for data written to the destination based on your actual situation.

      Configure ETL

      Specifies whether to enable the extract, transform, and load (ETL) feature. For more information, see What is ETL?

      Monitoring and Alerting

      Specifies whether to configure alerting for the data migration task. If the task fails or the migration latency exceeds the specified threshold, the alert contacts receive notifications.

    3. Click Next Step: Data Verification to configure the data verification task.

      For more information about how to use the data verification feature, see Configure a data verification task.

  6. Save the task settings and run a precheck.

    • To view the parameters to be specified when you call the relevant API operation to configure the DTS task, move the pointer over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.

    • If you do not need to view or have viewed the parameters, click Next: Save Task Settings and Precheck in the lower part of the page.

    Note
    • Before you can start the data migration task, DTS performs a precheck. You can start the data migration task only after the task passes the precheck.

    • If the task fails to pass the precheck, click View Details next to each failed item. After you analyze the causes based on the check results, troubleshoot the issues. Then, run a precheck again.

    • If an alert is triggered for an item during the precheck:

      • If an alert item cannot be ignored, click View Details next to the failed item and troubleshoot the issues. Then, run a precheck again.

      • If the alert item can be ignored, click Confirm Alert Details. In the View Details dialog box, click Ignore. In the message that appears, click OK. Then, click Precheck Again to run a precheck again. If you ignore the alert item, data inconsistency may occur, and your business may be exposed to potential risks.

  7. Purchase an instance.

    1. Wait until Success Rate becomes 100%. Then, click Next: Purchase Instance.

    2. On the Purchase Instance page, configure the Instance Class parameter for the data migration instance. The following table describes the parameters.

      Section

      Parameter

      Description

      New Instance Class

      Resource Group

      The resource group to which the data migration instance belongs. Default value: default resource group. For more information, see What is Resource Management?

      Instance Class

      DTS provides instance classes that vary in the migration speed. You can select an instance class based on your business scenario. For more information, see Instance classes of data migration instances.

    3. Read and agree to Data Transmission Service (Pay-as-you-go) Service Terms by selecting the check box.

    4. Click Buy and Start. In the message that appears, click OK.

      You can view the progress of the task on the Data Migration page.

Procedure (in the old DTS console)

  1. Log on to the DTS console.

    Note

    If you are redirected to the Data Management (DMS) console, you can click the icon for the old version to go to the previous version of the DTS console.

  2. In the left-side navigation pane, click Data Migration.

  3. At the top of the Migration Tasks page, select the region where the destination cluster resides.

  4. In the upper-right corner of the page, click Create Migration Task.

  5. Set up the source and destination database details for the migration task.


    Category

    Configuration

    Description

    None

    Task Name

    DTS automatically generates a task name. We recommend that you specify a descriptive name that makes it easy to identify the task. You do not need to specify a unique task name.

    Source Database Information

    Instance Type

    Select Self-managed Database With Public IP.

    Instance Region

    When the instance type is set to Self-managed Database With Public IP, Instance Region does not need to be set.

    Database Type

    Select Oracle.

    Hostname or IP Address

    Enter the endpoint of the Amazon RDS Oracle database.

    Note

    You can obtain the connection information of the database on the Basic Information page of Amazon RDS Oracle.


    Port

    Enter the service port of the Amazon RDS Oracle database. The default is 1521.

    Instance Type

    • Non-RAC Instance: If you select this option, you must also specify the SID.

    • RAC Instance: If you select this option, you must also specify the ServiceName.

    In this example, Non-RAC Instance is selected and the SID is specified.

    Database Account

    Enter the database account of Amazon RDS Oracle. For permission requirements, see Permissions required for database accounts.

    Database Password

    Enter the password of the preceding account.

    Note

    After you configure the source database parameters, click Test Connectivity next to Database Password to verify whether the configured parameters are valid. If the configured parameters are valid, the Passed message is displayed. If the Failed message is displayed, click Check next to Failed to modify the source database parameters based on the check results.

    Destination Database Information

    Instance Type

    Select RDS Instance.

    Instance Region

    Select the region where the Alibaba Cloud RDS instance resides.

    RDS Instance ID

    Select the ID of the Alibaba Cloud RDS instance.

    Database Account

    Enter the database account of Alibaba Cloud RDS. For permission requirements, see Permissions required for database accounts.

    Database Password

    Enter the password of the preceding account.

    Note

    After you configure the destination database parameters, click Test Connectivity next to Database Password to verify whether the configured parameters are valid. If the configured parameters are valid, the Passed message is displayed. If the Failed message is displayed, click Check next to Failed to modify the destination database parameters based on the check results.

  6. In the lower-right corner of the page, click Set Whitelist and Next.

    If the source or destination database instance is an Alibaba Cloud database instance, such as an ApsaraDB RDS for MySQL or ApsaraDB for MongoDB instance, or is a self-managed database hosted on Elastic Compute Service (ECS), DTS automatically adds the CIDR blocks of DTS servers to the whitelist of the database instance or ECS security group rules. If the source or destination database is a self-managed database on data centers or is from other cloud service providers, you must manually add the CIDR blocks of DTS servers to allow DTS to access the database. For more information about the CIDR blocks of DTS servers, see the "CIDR blocks of DTS servers" section of the Add the CIDR blocks of DTS servers to the security settings of on-premises databases topic.

    Warning

    If the CIDR blocks of DTS servers are automatically or manually added to the whitelist of the database or instance, or to the ECS security group rules, security risks may arise. Therefore, before you use DTS to migrate data, you must understand and acknowledge the potential risks and take preventive measures, including but not limited to the following measures: enhance the security of your username and password, limit the ports that are exposed, authenticate API calls, regularly check the whitelist or ECS security group rules and forbid unauthorized CIDR blocks, or connect the database to DTS by using Express Connect, VPN Gateway, or Smart Access Gateway.
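    If you maintain the whitelist manually, you can sanity-check that a given server IP address is actually covered by the CIDR blocks you added. The sketch below uses Python's standard ipaddress module; the CIDR blocks shown are placeholders, not the actual DTS server ranges, which you must look up for your region.

    ```python
    import ipaddress

    def is_allowed(ip: str, cidr_blocks: list[str]) -> bool:
        """Return True if the IP address falls within any whitelisted CIDR block."""
        addr = ipaddress.ip_address(ip)
        return any(addr in ipaddress.ip_network(block) for block in cidr_blocks)

    # Placeholder values; substitute the actual DTS server CIDR blocks for your region.
    whitelist = ["10.151.12.0/24", "100.104.0.0/16"]

    print(is_allowed("100.104.5.9", whitelist))   # True: covered by 100.104.0.0/16
    print(is_allowed("203.0.113.7", whitelist))   # False: outside both blocks
    ```

    A check like this is also useful when you periodically audit the whitelist for unauthorized CIDR blocks, as the warning above recommends.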

  7. Select the migration types and the objects to be migrated.

    • Migration Type:

      • To perform only a full migration, select Schema Migration and Full Data Migration.

      • To migrate data without interrupting your services, select Schema Migration, Full Data Migration, and Incremental Data Migration.

      Note

      If you do not select Incremental Data Migration, do not write new data to the source database during the migration. This ensures data consistency.

    • Migration Objects: In the Migration Objects section, select the objects to be migrated, and then click the rightwards arrow icon to move them to the Selected Objects section.

      Note

      • You can select databases, tables, or columns as the objects to be migrated.

      • By default, the names of migrated objects remain unchanged after the migration. If you want an object to use a different name in Alibaba Cloud RDS MySQL, use the object name mapping feature provided by DTS. For more information, see Database and table column mapping.

      • If you use the object name mapping feature, other objects that depend on the renamed object may fail to be migrated.

    • Rename Mapping: If you want the migration objects to use different names in the destination instance, use the object name mapping feature. For more information, see Database and table column mapping.

    • Retry Time for Failed Connections to the Source or Destination Database: By default, if DTS fails to connect to the source or destination database, it retries the connection for up to 12 hours. You can specify a retry time range based on your business requirements. If DTS reconnects to the source and destination databases within the specified period, the migration task automatically resumes. Otherwise, the task fails.

      Note

      Because the DTS instance continues to incur fees during the retry period, we recommend that you set the retry time based on your business needs and release the DTS instance promptly after the source and destination database instances are released.
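    Conceptually, the object name mapping configured in the console behaves like a source-to-destination rename table: mapped objects get the new name, and unmapped objects keep their original name. The sketch below only illustrates that behavior; the object names are hypothetical, and the actual mapping is configured in the DTS console, not in code.

    ```python
    # Hypothetical rename map: source Oracle object -> destination MySQL object.
    name_mapping = {
        "ORDERS": "orders_v2",
        "CUSTOMERS": "customers",
    }

    def destination_name(source_name: str) -> str:
        """Objects without an explicit mapping keep their original name (the DTS default)."""
        return name_mapping.get(source_name, source_name)

    print(destination_name("ORDERS"))     # mapped, so renamed to orders_v2
    print(destination_name("PRODUCTS"))   # not mapped, so the name is unchanged
    ```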

  8. In the lower-right corner of the page, click Precheck.

    Note
    • Before you can start the data migration task, DTS performs a precheck. You can start the data migration task only after the task passes the precheck.

    • If the task fails to pass the precheck, you can click the Info icon next to each failed item to view details.

      • You can troubleshoot the issues based on the causes and run a precheck again.

      • If you do not need to troubleshoot the issues, you can ignore failed items and run a precheck again.

  9. After the task passes the precheck, click Next.

  10. In the Confirm Settings dialog box, specify the Channel Specification parameter and select Data Transmission Service (Pay-As-You-Go) Service Terms.

  11. Click Buy and Start to start the data migration task.

    • Schema migration and full data migration

      We recommend that you do not manually stop the task during full data migration. Otherwise, the data migrated to the destination database may be incomplete. You can wait until the data migration task automatically stops.

    • Schema migration, full data migration, and incremental data migration

      An incremental data migration task does not automatically stop. You must manually stop the task.

      Important

      We recommend that you select an appropriate time to manually stop the data migration task. For example, you can stop the task during off-peak hours or before you switch your workloads to the destination cluster.

      1. Wait until Incremental Data Migration and The migration task is not delayed appear in the progress bar of the migration task. Then, stop writing data to the source database for a few minutes. The latency of incremental data migration may be displayed in the progress bar.

      2. Wait until the status of incremental data migration changes to The migration task is not delayed again. Then, manually stop the migration task.
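      Before you switch your workloads to the destination instance, it is common to sanity-check that the source and destination data agree. The sketch below only illustrates the comparison logic over per-table row counts that you would collect yourself on each side (for example, with SELECT COUNT(*)); the table names and counts are hypothetical, and matching row counts alone do not prove full consistency.

      ```python
      def find_mismatches(source_counts: dict[str, int], dest_counts: dict[str, int]) -> list[str]:
          """Return tables whose row counts differ or that are missing on the destination."""
          return sorted(
              table for table, count in source_counts.items()
              if dest_counts.get(table) != count
          )

      # Hypothetical counts gathered with SELECT COUNT(*) on each database.
      source = {"orders": 120_000, "customers": 8_421, "products": 512}
      dest = {"orders": 120_000, "customers": 8_421, "products": 510}

      print(find_mismatches(source, dest))  # ['products']
      ```

      Any table reported here warrants investigation before you proceed to the next step.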

  12. Migrate your workloads to Alibaba Cloud RDS for MySQL.