
Data Transmission Service:Migrate data from a MaxCompute project to an ApsaraDB RDS for MySQL instance

Last Updated: Mar 28, 2026

Data Transmission Service (DTS) lets you run a one-time full migration from a MaxCompute project to an ApsaraDB RDS for MySQL instance, covering both schema migration and data migration. DTS does not support incremental migration from MaxCompute—stop all writes to the source project before starting the task.

Supported migration types

  • Schema migration: Yes

  • Full data migration: Yes

  • Incremental data migration: No

Prerequisites

Before you begin, ensure that you have:

  • A source MaxCompute project that contains the data to migrate.

  • A destination ApsaraDB RDS for MySQL instance with available storage larger than the total size of the source data.

Billing

  • Migration type: Schema migration and full data migration

  • Task configuration fee: Free

  • Data transfer fee: Free in this example. Fees apply only when data is transferred from Alibaba Cloud over the Internet. See Billing overview.

Limitations

  • No incremental migration. DTS supports only full data migration from MaxCompute. Do not write new data to the source project after the task starts; data written after the task begins is not migrated, which causes inconsistency between the source and destination.

  • Duplicate records risk. MaxCompute does not support primary key constraints. If network errors occur and DTS retries the task, duplicate records may be written to destination tables that have no primary keys.

  • Auto-resume risk. If a migration task fails, DTS automatically resumes it. Stop or release the task before switching workloads to the destination database. Otherwise, the resumed task overwrites data already in the destination.

  • Schema inconsistency. MaxCompute and ApsaraDB RDS for MySQL are heterogeneous databases. DTS does not guarantee schema consistency after schema migration. Evaluate the impact of data type conversion before starting migration. See Data type mappings between heterogeneous databases.

  • Destination tablespace growth. Concurrent INSERT operations during full migration cause fragmentation in destination tables, so after migration completes the destination tablespace is larger than the source. Run OPTIMIZE TABLE to reclaim the space; ANALYZE TABLE updates table statistics but does not reclaim space.

    Note

    OPTIMIZE TABLE locks the table during execution. Schedule it during off-peak hours.

  • Performance impact. DTS uses read and write resources on both source and destination during migration. Run migrations during off-peak hours when CPU utilization on both databases is below 30%.

  • Destination database naming. DTS automatically creates the destination database. If the source database name does not meet ApsaraDB RDS for MySQL naming conventions, create the destination database manually before configuring the migration task. See Create accounts and databases.
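The post-migration cleanup described in the tablespace-growth limitation can be sketched as a small maintenance script. This is a minimal sketch, assuming the pymysql client library and placeholder connection details; the `maintenance_statements` helper is local to this example, not part of DTS.

```python
# Sketch: post-migration maintenance for the destination instance.
# ANALYZE TABLE refreshes statistics without locking the table;
# OPTIMIZE TABLE rebuilds the table to reclaim space but locks it,
# so run it only during off-peak hours.

def maintenance_statements(tables, optimize=False):
    """Build one maintenance statement per table (helper local to this sketch)."""
    verb = "OPTIMIZE" if optimize else "ANALYZE"
    return [f"{verb} TABLE `{t}`" for t in tables]


if __name__ == "__main__":
    import pymysql  # assumes the pymysql package is installed

    # Placeholder connection details; replace with your destination instance's.
    conn = pymysql.connect(host="rm-example.mysql.rds.aliyuncs.com",
                           user="migrator", password="***", database="dest_db")
    with conn.cursor() as cur:
        cur.execute("SHOW TABLES")
        tables = [row[0] for row in cur.fetchall()]
        for stmt in maintenance_statements(tables):  # ANALYZE by default
            cur.execute(stmt)
    conn.close()
```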

Migrate data

Step 1: Go to the Data Migration Tasks page

  1. Log on to the Data Management (DMS) console.

  2. In the top navigation bar, click DTS.

  3. In the left-side navigation pane, choose DTS (DTS) > Data Migration.

    Note

    Steps may vary based on the DMS console mode and layout. See Simple mode and Customize the layout and style of the DMS console. Alternatively, go directly to the Data Migration Tasks page in the new DTS console.

Step 2: Select the region

From the drop-down list next to Data Migration Tasks, select the region where the migration instance resides.

Note

In the new DTS console, select the region in the upper-left corner instead.

Step 3: Create a migration task

  1. Click Create Task.

  2. Optional: In the upper-right corner, click New Configuration Page to switch to the new configuration page.

    Note

    Skip this step if Back to Previous Version is displayed instead. The new configuration page is recommended.

Step 4: Configure source and destination databases

Configure the following parameters for the source and destination databases.

Source database (MaxCompute)

  • Task Name: A name for the task. DTS auto-assigns a name; specify a descriptive name for easy identification. The name does not need to be unique.

  • Select a DMS database instance: Select an existing DMS database instance, or configure the source database parameters manually. If you select an existing instance, DTS auto-populates the remaining parameters.

  • Database Type: Select MaxCompute.

  • Access Method: Select Alibaba Cloud Instance.

  • Instance Region: The region where the source MaxCompute project resides.

  • Project: The name of the source MaxCompute project.

  • AccessKey ID of Alibaba Cloud Account: The AccessKey ID of the Alibaba Cloud account that owns the MaxCompute project.

  • AccessKey Secret of Alibaba Cloud Account: The AccessKey secret of the Alibaba Cloud account that owns the MaxCompute project.

Destination database (ApsaraDB RDS for MySQL)

  • Select a DMS database instance: Select an existing DMS database instance, or configure the destination database parameters manually.

  • Database Type: Select MySQL.

  • Access Method: Select Alibaba Cloud Instance.

  • Instance Region: The region where the destination ApsaraDB RDS for MySQL instance resides.

  • Replicate Data Across Alibaba Cloud Accounts: Select No to use an instance under the current Alibaba Cloud account.

  • RDS Instance ID: The ID of the destination ApsaraDB RDS for MySQL instance.

  • Database Account: The database account with read and write permissions on the destination database.

  • Database Password: The password for the database account.

Note

To register a database with DMS, click Create Template in the DMS console. See Register an Alibaba Cloud database instance and Register a database hosted on a third-party cloud service or a self-managed database. To register a database directly with DTS, use the Database Connections page or the new configuration page. See Manage database connections.

Step 5: Test connectivity

Click Test Connectivity and Proceed at the bottom of the page.

DTS handles network access based on where the database is hosted:

  • Alibaba Cloud database instance: DTS automatically adds its server CIDR blocks to the IP address whitelist of the instance.

  • Self-managed database on a single ECS instance: DTS automatically adds the CIDR blocks of DTS servers to the security group rules of the ECS instance. You must make sure that the ECS instance can access the database.

  • Self-managed database on multiple ECS instances: Manually add the CIDR blocks of DTS servers to the security group rules of each ECS instance.

  • Self-managed database in a data center or on a third-party cloud service: Manually add the CIDR blocks of DTS servers to the database whitelist.

See CIDR blocks of DTS servers.

Warning

Adding DTS server CIDR blocks to a database whitelist or ECS security group rules introduces security risks. Before using DTS, take the following precautions: strengthen your username and password, limit exposed ports, authenticate API calls, regularly review whitelist and security group rules, and remove unauthorized CIDR blocks. For network-level isolation, connect the database to DTS using Express Connect, VPN Gateway, or Smart Access Gateway.
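When you maintain the whitelist yourself (for example, for a self-managed database), merging the DTS server CIDR blocks into an existing comma-separated whitelist can be sketched as follows. The CIDR values shown are placeholders, not real DTS ranges; obtain the real ones from CIDR blocks of DTS servers.

```python
# Sketch: merge DTS server CIDR blocks into an existing IP whitelist.
# The comma-separated format matches typical whitelist settings; the
# CIDR values used below are placeholders, not real DTS ranges.

def merged_whitelist(current: str, dts_cidrs: list[str]) -> str:
    """Append the DTS CIDR blocks to the whitelist, deduplicated,
    preserving the order of existing entries."""
    entries = [e.strip() for e in current.split(",") if e.strip()]
    for cidr in dts_cidrs:
        if cidr not in entries:
            entries.append(cidr)
    return ",".join(entries)


print(merged_whitelist("10.0.0.1,192.168.0.0/24",
                       ["100.104.0.0/16", "10.0.0.1"]))
# 10.0.0.1,192.168.0.0/24,100.104.0.0/16
```

Remember to remove these entries again when the migration task is released, as the Warning above recommends.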

Step 6: Grant permissions and verify connectivity

Click OK to grant the DTS built-in account access to the MaxCompute project, then click Test Connectivity.

Step 7: Select migration objects

On the Select Objects page, configure the following parameters.

  • Migration Types: Select both Schema Migration and Full Data Migration.

  • Processing Mode of Conflicting Tables: Precheck and Report Errors (default) checks for tables with identical names in the source and destination; the precheck fails if conflicts are found, preventing the task from starting. To resolve conflicts, use object name mapping to rename migrated tables. Ignore Errors and Proceed skips the identical-name check; if the source and destination have the same schema, records with matching primary keys are skipped, and if the schemas differ, only certain columns migrate or the task fails entirely. Use this mode with caution.

  • Capitalization of Object Names in Destination Instance: The capitalization policy for database names, table names, and column names in the destination. Default: DTS default policy. See Specify the capitalization of object names.

  • Source Objects: Select projects or tables to migrate, then click the arrow icon to add them to Selected Objects.

  • Selected Objects: To rename a single object, right-click it. See Map the name of a single object. To rename multiple objects at once, click Batch Edit. See Map multiple object names. To filter data by condition, right-click the table and specify filter conditions. See Set filter conditions.

Note

If you rename an object using object name mapping, other objects that depend on it may fail to migrate.

Step 8: Configure advanced settings

Click Next: Advanced Settings and configure the following parameters.

  • Select the dedicated cluster used to schedule the task: DTS uses a shared cluster by default. To improve migration stability, purchase a dedicated cluster. See What is a DTS dedicated cluster.

  • Retry Time for Failed Connections: How long DTS retries a failed connection after the task starts. Valid values: 10 to 1440 minutes. Default: 720 minutes. We recommend a value greater than 30 minutes. If DTS reconnects within this window, the task resumes; otherwise, the task fails. If multiple tasks share the same source or destination, the most recently set value takes precedence.

  • The wait time before a retry when other issues occur in the source and destination databases: How long DTS waits before retrying failed DML or DDL operations. Valid values: 1 to 1440 minutes. Default: 10 minutes. We recommend a value greater than 10 minutes and always less than the Retry Time for Failed Connections value.

  • Enable Throttling for Full Data Migration: Limits DTS read/write resource usage during full migration to reduce load on the destination. When enabled, configure QPS (queries per second) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected.

  • Environment Tag: An optional tag to identify the DTS instance.

  • Configure ETL: Whether to enable extract, transform, and load (ETL). Select Yes to enter data processing statements. See Configure ETL. Select No to skip. For an overview, see What is ETL?

  • Monitoring and Alerting: Whether to set up alerts for the migration task. Select Yes to configure alert thresholds and notification contacts. See Configure monitoring and alerting.
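The two retry windows above interact: the wait time for other issues should stay below the connection retry time. A minimal sanity check for the values you enter, using local names that are not DTS API parameters:

```python
# Sketch: sanity-check the two retry windows before saving the task.
# Values are in minutes; the ranges and the "less than" rule come from
# the parameter descriptions above.

def validate_retry_settings(connection_retry: int, issue_retry: int) -> list[str]:
    """Return a list of problems; an empty list means the settings look sane."""
    problems = []
    if not 10 <= connection_retry <= 1440:
        problems.append("connection retry must be 10-1440 minutes")
    if not 1 <= issue_retry <= 1440:
        problems.append("issue retry must be 1-1440 minutes")
    if issue_retry >= connection_retry:
        problems.append("issue retry should be less than connection retry")
    return problems


assert validate_retry_settings(720, 10) == []  # the defaults pass
assert validate_retry_settings(30, 60) == [
    "issue retry should be less than connection retry"]
```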

Step 9: Run the precheck

Click Next: Save Task Settings and Precheck.

Note

To view the API parameters DTS would use for this configuration, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.

DTS runs a precheck before starting the migration task. The task can only start after the precheck passes.

  • If the precheck fails, click View Details next to each failed item, fix the issues, and run the precheck again.

  • If the precheck generates an alert, review it before proceeding. If the alert can be safely ignored, click Confirm Alert Details, then click Ignore in the dialog, confirm, and click Precheck Again.

Wait until the success rate reaches 100%, then click Next: Purchase Instance.

Step 10: Purchase a migration instance and start the task

  1. On the Purchase Instance page, configure the following parameters.

    • Resource Group Settings (in the New Instance Class section): The resource group for the migration instance. Default: default resource group. See What is Resource Management?

    • Instance Class (in the New Instance Class section): The instance class determines migration speed. Select a class based on your data volume and performance requirements. See Specifications of data migration instances.

  2. Read and agree to Data Transmission Service (Pay-as-you-go) Service Terms.

  3. Click Buy and Start to start the migration task. Monitor progress in the task list.

What's next

After migration completes:

  • Verify data consistency between the source MaxCompute project and the destination ApsaraDB RDS for MySQL instance.

  • Stop or release the DTS migration task before switching workloads to the destination database.

  • If the destination tablespace is significantly larger than the source, run OPTIMIZE TABLE to reclaim space; ANALYZE TABLE refreshes statistics but does not reclaim space. Schedule OPTIMIZE TABLE during off-peak hours to avoid table locks.
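A simple way to start the consistency check is to compare per-table row counts. The sketch below shows only the comparison step; fetching the counts (for example with PyODPS on the MaxCompute side and pymysql on the MySQL side) is environment-specific and omitted.

```python
# Sketch: post-migration consistency check by row count. The helper
# compares two {table: row_count} mappings, one gathered from the
# source MaxCompute project and one from the destination MySQL database.

def count_mismatches(source_counts: dict, dest_counts: dict) -> dict:
    """Return {table: (source_count, dest_count)} for every table whose
    row counts differ, or which exists on only one side (count is None)."""
    mismatches = {}
    for table in set(source_counts) | set(dest_counts):
        s, d = source_counts.get(table), dest_counts.get(table)
        if s != d:
            mismatches[table] = (s, d)
    return mismatches


src = {"orders": 1000, "users": 250}
dst = {"orders": 1000, "users": 249}
print(count_mismatches(src, dst))  # {'users': (250, 249)}
```

Matching row counts do not prove value-level consistency, so spot-check important columns as well before switching workloads.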