
Data Transmission Service:Migrate data from an RDS for MySQL instance to a Tablestore instance

Last Updated:Mar 28, 2026

Migrating from a relational database to a NoSQL store typically requires a CSV export, an extended maintenance window, and a manual cutover. DTS eliminates this by using binary log-based replication, so you can run schema migration, full data migration, and incremental data migration in sequence — keeping your applications online throughout.

Prerequisites

Before you begin, make sure you have:

  • A source ApsaraDB RDS for MySQL instance.

  • A destination Tablestore instance created in the target region.

Migration types

DTS supports three migration types that you can combine based on your requirements:

  • Schema migration: Migrates schema definitions from the source database to the destination.

  • Full data migration: Migrates all existing data from the source database to the destination.

  • Incremental data migration: After full data migration completes, continuously syncs data changes from the source to the destination. Use this to keep both databases in sync without stopping your applications.

For zero-downtime migration, select all three types. If you select only schema migration and full data migration, do not write new data to the source database during migration to prevent data inconsistency.

Billing

  • Schema migration and full data migration: The task configuration fee is free. Data transfer is charged if data transfers over the Internet. See Billing overview.

  • Incremental data migration: The task configuration fee is charged. See Billing overview.

SQL operations supported for incremental migration

Incremental migration captures DML operations: INSERT, UPDATE, and DELETE.

Permissions required

Required permissions for the source RDS for MySQL account:

  • Schema migration: SELECT

  • Full data migration: SELECT

  • Incremental data migration: REPLICATION SLAVE, REPLICATION CLIENT, and SELECT on migration objects (DTS grants these automatically)

For instructions on creating an account and granting permissions, see Create an account and Modify account permissions.
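
The grants above can be sketched as follows for a dedicated migration account. This is illustrative only: the account name, host, password, and database name are placeholders, and on RDS for MySQL the replication privileges are granted by DTS automatically.

```sql
-- Hypothetical migration account (all names and the password are placeholders).
CREATE USER 'dts_migrator'@'%' IDENTIFIED BY '<password>';

-- Schema migration and full data migration require SELECT on the migration objects.
GRANT SELECT ON mydb.* TO 'dts_migrator'@'%';

-- Incremental data migration additionally requires the replication privileges.
GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'dts_migrator'@'%';
```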

Limitations

Review these limitations before configuring the task to avoid failures mid-migration.

Source database requirements

  • Primary keys or UNIQUE constraints: Tables must have primary keys or UNIQUE constraints with unique fields. Tables without these may produce duplicate data in the destination. Ensure all migration objects have primary keys before starting.

  • Table limit per task: When selecting individual tables and editing them (such as mapping column names), a single task supports up to 1,000 tables. If you exceed this limit, split tables across multiple tasks or configure the task to migrate the entire database.

  • Binary log requirements for incremental migration: Binary logging must be enabled on the source database and the logs must be retained long enough for DTS to read them; otherwise the precheck fails and the task cannot start. If DTS cannot obtain binary logs due to insufficient retention, the task fails and data inconsistency or loss may occur. The Service Level Agreement (SLA) guarantee does not apply in this case.

  • DDL operations during migration: Do not run DDL operations on the source database during schema migration or full data migration — the task will fail.

    During full data migration, DTS queries the source database, which creates metadata locks that may block DDL operations on the source.
  • Data changes not recorded in binary logs: Changes that bypass binary logging, such as data restored from a physical backup or data written by a cascade operation, are not captured and are not migrated to the destination database while the data migration instance is running.

    If such changes are missing from the destination, you can run a full data migration again, provided your business is not affected.
  • Read-only RDS for MySQL V5.6: Instances that do not record transaction logs (such as read-only ApsaraDB RDS for MySQL V5.6) cannot be used as the source for incremental migration.

  • Invisible columns (MySQL 8.0.23 and later): DTS cannot read invisible columns, which causes data loss. Make a column visible before migrating:

    ALTER TABLE <table_name> ALTER COLUMN <column_name> SET VISIBLE;

    Tables without explicit primary keys automatically generate invisible primary keys. Make these visible before migrating. See Invisible Columns and Generated Invisible Primary Keys.

  • EncDB feature: Full data migration is not supported for RDS for MySQL instances with the EncDB feature enabled. Instances with Transparent Data Encryption (TDE) support schema, full, and incremental migration.
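
As a pre-flight check for the primary key requirement above, a query like the following lists tables that have neither a PRIMARY KEY nor a UNIQUE constraint. This is a sketch: replace mydb with the name of the database you plan to migrate.

```sql
-- Lists base tables in mydb without a PRIMARY KEY or UNIQUE constraint.
SELECT DISTINCT t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
  ON c.table_schema = t.table_schema
 AND c.table_name = t.table_name
 AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_schema = 'mydb'
  AND t.table_type = 'BASE TABLE'
  AND c.constraint_name IS NULL;
```

Add primary keys to any tables the query returns before you start the task.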

Destination database requirements

  • Table limit: The destination Tablestore instance supports a maximum of 64 tables. If your migration exceeds this limit, contact Alibaba Cloud support to raise the quota for the destination instance.

  • Naming conventions: Table and column names must follow Tablestore naming rules, or schema migration fails:

    • Allowed characters: letters, digits, and underscores (_). Names must start with a letter or underscore.

    • Length: 1–255 characters.

  • Data Write Mode and Batch Write Mode: Configure these Tablestore-specific settings carefully. See the parameter descriptions in Configure the objects to migrate.
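
The naming rules above can also be verified up front. The following sketch (the schema name mydb is a placeholder) lists column names in the source that would fail Tablestore's naming check:

```sql
-- Finds column names that violate Tablestore naming rules:
-- letters, digits, and underscores only; must start with a letter
-- or underscore; at most 255 characters.
SELECT table_name, column_name
FROM information_schema.columns
WHERE table_schema = 'mydb'
  AND (column_name NOT REGEXP '^[A-Za-z_][A-Za-z0-9_]*$'
       OR CHAR_LENGTH(column_name) > 255);
```

Rename offending columns in the source, or use column name mapping when you configure the objects to migrate.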

Other limitations

  • Do not use pt-online-schema-change or similar tools for online DDL on migration objects during the task.

  • For FLOAT and DOUBLE columns, DTS reads values using ROUND(COLUMN, PRECISION). Default precision: FLOAT = 38, DOUBLE = 308. Confirm these values meet your requirements before starting.

  • Run migrations during off-peak hours. Full data migration consumes read and write resources on both databases and may increase loads.

  • After full data migration, the destination table storage is larger than the source due to fragmentation from concurrent INSERT operations.

  • DTS automatically retries failed tasks for up to 7 days. Before switching your business to the destination, stop or release the DTS instance — or run REVOKE to remove write permissions from the DTS account — to prevent automatic retries from overwriting destination data with source data.

  • DTS executes CREATE DATABASE IF NOT EXISTS `test` in the source database periodically to advance the binary log position.

  • If a DTS instance fails, support will attempt to recover it within 8 hours. During recovery, the instance may restart and parameters may be adjusted (only DTS instance parameters, not database parameters). See Modify instance parameters.
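
For the FLOAT and DOUBLE handling described above, DTS effectively reads values as if by the following query. This is an illustration of the documented behavior, not DTS's actual internal SQL; the table and column names are placeholders.

```sql
-- DTS reads floating-point values with ROUND(column, precision);
-- the default precision is 38 for FLOAT and 308 for DOUBLE.
SELECT ROUND(float_col, 38)   AS float_val,
       ROUND(double_col, 308) AS double_val
FROM mydb.mytable;
```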

Self-managed MySQL

  • If a primary/secondary switchover occurs on the source self-managed MySQL database while the task is running, the task fails.

  • Migration latency is calculated from the timestamp of the latest migrated record in the destination against the current source timestamp. If no DML operations occur on the source for a long time, the displayed latency may be inaccurate. To correct it, perform a DML operation on the source — or create a heartbeat table that is updated every second if you are migrating an entire database.
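
One way to implement the heartbeat table suggested above is a scheduled event on the source. This is a sketch: all object names are placeholders, and it requires the MySQL event scheduler to be enabled.

```sql
-- Requires: SET GLOBAL event_scheduler = ON;
CREATE TABLE mydb.dts_heartbeat (
  id TINYINT PRIMARY KEY,
  ts TIMESTAMP(6) NOT NULL
);

INSERT INTO mydb.dts_heartbeat (id, ts) VALUES (1, NOW(6));

-- Touch the row every second so the source always has fresh DML,
-- keeping the displayed migration latency accurate.
CREATE EVENT mydb.dts_heartbeat_tick
ON SCHEDULE EVERY 1 SECOND
DO UPDATE mydb.dts_heartbeat SET ts = NOW(6) WHERE id = 1;
```

Include the heartbeat table in the migration objects (or migrate the entire database) so its updates flow through the task.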

Configure the migration task

A DTS migration task connects a source RDS for MySQL endpoint to a destination Tablestore endpoint and moves data through three phases: schema migration, full data migration, and incremental data migration. The steps below walk you through creating and starting the task.

Step 1: Go to the Data Migration page

Open the Data Migration page using either the DTS console or DMS console.

DTS console

  1. Log on to the DTS console.

  2. In the left-side navigation pane, click Data Migration.

  3. In the upper-left corner, select the region where the data migration instance resides.

DMS console

The actual steps may vary based on the DMS console mode and layout. See Simple mode and Customize the layout and style of the DMS console.
  1. Log on to the DMS console.

  2. In the top navigation bar, move the pointer over Data + AI > DTS (DTS) > Data Migration.

  3. From the drop-down list next to Data Migration Tasks, select the region where the data migration instance resides.

Step 2: Create a task and configure endpoints

Click Create Task to open the task configuration page, then configure the parameters below.

Task name

DTS auto-generates a name. Specify a descriptive name for easy identification — the name does not need to be unique.

Source database

  • Select Existing Connection: If the source instance is already registered with DTS, select it from the list and DTS populates the remaining fields automatically. See Manage database connections. Otherwise, fill in the fields below.

  • Database Type: Select MySQL.

  • Access Method: Select Cloud Instance.

  • Instance Region: Select the region where the source RDS for MySQL instance resides.

  • Replicate Data Across Alibaba Cloud Accounts: Select No for same-account migration.

  • RDS Instance ID: Select the source RDS for MySQL instance.

  • Database Account: Enter the database account. See Permissions required.

  • Database Password: Enter the password for the database account.

  • Encryption: Select Non-encrypted or SSL-encrypted. To use SSL encryption, enable it on the RDS for MySQL instance first. See Use a cloud certificate to enable SSL encryption.

Destination database

  • Select Existing Connection: If the destination instance is already registered with DTS, select it from the list. See Manage database connections. Otherwise, fill in the fields below.

  • Database Type: Select Tablestore.

  • Access Method: Select Cloud Instance.

  • Instance Region: Select the region where the destination Tablestore instance resides.

  • Instance ID: Select the destination Tablestore instance.

  • AccessKey ID Of Alibaba Cloud Account: Enter the AccessKey ID of the account that owns the Tablestore instance. See Create an AccessKey pair.

  • AccessKey Secret Of Alibaba Cloud Account: Enter the corresponding AccessKey secret. See Create an AccessKey pair.

Step 3: Test connectivity

Click Test Connectivity and Proceed at the bottom of the page.

DTS server CIDR blocks must be added to the security settings of both the source and destination databases. See Add DTS server IP addresses to a whitelist. For self-managed databases, click Test Connectivity in the CIDR Blocks of DTS Servers dialog box.

Step 4: Configure the objects to migrate

On the Configure Objects page, set the migration parameters.

Migration types and object selection

  • Migration Types: Select Schema Migration and Full Data Migration for a one-time migration. Select all three (Schema Migration, Full Data Migration, and Incremental Data Migration) to keep the source and destination in sync during migration.

  • Source Objects: Select one or more objects from the Source Objects section, then click the rightwards arrow to move them to Selected Objects. Migration granularity is at the database and table level. You can select tables from only a single database per task.

  • Selected Objects: Right-click an object to rename it in the destination. For bulk renaming, click Batch Edit. To filter rows, right-click a table and set a WHERE clause. See Map object names and Set filter conditions. Hover over a table and click the Edit icon to set column data types for the Tablestore instance.

    If you skip Schema Migration, create the target tables in Tablestore manually before starting and enable object name mapping in Selected Objects. Only table names support mapping. Mapping objects that other objects depend on may cause migration failures for those dependents.

Conflict handling

  • Processing Mode of Conflicting Tables: Precheck and Report Errors fails the precheck if the destination contains tables with the same names as source tables; use Map object names to rename tables if you cannot delete or rename the conflicting destination tables. Ignore Errors and Proceed skips the name conflict check; during full migration, existing destination records with the same primary key are kept, and during incremental migration, they are overwritten. If schemas differ, only matching columns are migrated, or the task fails.

  • Dirty Data Handling Policy: Skip skips rows with write errors. Block stops the task on write errors.

Tablestore write settings

  • Synchronization Operation Types: Select the DML operation types to replicate. All types are selected by default.

  • Data Write Mode: Update Row uses UpdateRowChange for row-level updates. Overwrite Row uses PutRowChange for row-level overwrites.

  • Batch Write Mode: BulkImportRequest (recommended) performs offline writes with higher throughput and lower cost. BatchWriteRowRequest performs standard batch writes.

  • More Settings: Queue Size sets the write queue length for the Tablestore instance. Thread Count sets the number of callback processing threads for the Tablestore write process. Concurrency sets the concurrent request limit for the Tablestore instance. Bucket Count sets the number of concurrent buckets for sequential incremental writes; it must be less than or equal to Concurrency, and a higher value improves concurrent write throughput.

  • Case Sensitivity For Destination Object Names: Controls capitalization of database, table, and column names in the destination. Default is the DTS default policy. See Specify the capitalization of object names in the destination instance.

Step 5: Configure advanced settings

Click Next: Advanced Settings and configure these parameters as needed.

  • Dedicated Cluster for Task Scheduling: By default, DTS schedules tasks to the shared cluster. Purchase a dedicated cluster to improve task stability. See What is a DTS dedicated cluster.

  • Retry Time for Failed Connections: How long DTS retries after a connection failure. Range: 10 to 1,440 minutes. Default: 720 minutes. Set it to at least 30 minutes. If DTS reconnects within this window, the task resumes; otherwise, it fails. If multiple tasks share the same source or destination, the most recently set value takes effect.

    Note: While DTS retries a connection, you are still charged for the DTS instance. Specify the retry time range based on your business requirements, and release the DTS instance at the earliest opportunity after the source database and destination instance are released.

  • Retry Time for Other Issues: How long DTS retries after DDL or DML operation failures. Range: 1 to 1,440 minutes. Default: 10 minutes. Set it to at least 10 minutes. This value must be less than Retry Time for Failed Connections.

  • Enable Throttling for Full Data Migration: Limits the read/write load during full migration by capping Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when full data migration is selected.

  • Enable Throttling for Incremental Data Migration: Limits load during incremental migration by capping RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when incremental data migration is selected.

  • Environment Tag: Attach an environment label to the instance for identification.

  • Whether to delete SQL operations on heartbeat tables of forward and reverse tasks: Yes means DTS does not write heartbeat SQL to the source, but migration latency may appear elevated. No means DTS writes heartbeat SQL to the source, which may affect physical backup and cloning of the source.

  • Configure ETL: Yes enables the extract, transform, and load (ETL) feature; enter processing statements in the code editor. See Configure ETL in a data migration or data synchronization task. No disables ETL.

  • Monitoring and Alerting: Yes lets you configure alert thresholds and notification contacts to receive alerts on task failure or high migration latency. See Configure monitoring and alerting. No means no alerting is configured.

After configuring advanced settings, click Next: Configure Database And Table Fields to set primary key columns for Tablestore tables.

Step 6: Run the precheck

Click Next: Save Task Settings and Precheck.

To preview the OpenAPI parameters for this task configuration, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters before proceeding.

DTS runs a precheck before starting the task. If the precheck fails:

  • Click View Details next to each failed item, fix the issue, then click Precheck Again.

  • For alert items (non-blocking): click Confirm Alert Details > Ignore > OK > Precheck Again. Ignoring alerts may lead to data inconsistency.

Step 7: Purchase the instance and start the task

  1. Wait until Success Rate reaches 100%, then click Next: Purchase Instance.

  2. On the Purchase Instance page, set the following:

    • Resource Group: The resource group for the migration instance. Default: default resource group. See What is Resource Management?

    • Instance Class: Select an instance class based on your required migration speed. See Instance classes of data migration instances.
  3. Read and accept the Data Transmission Service (Pay-as-you-go) Service Terms.

  4. Click Buy and Start, then click OK in the confirmation dialog.

Verify the migration

On the Data Migration page, monitor task progress:

  • Schema migration and full data migration tasks stop automatically when complete. The status shows Completed.

  • Incremental data migration tasks run continuously and do not stop automatically. The status shows Running.

Important

Before switching your business to the destination instance, stop or release the DTS instance — or run REVOKE to remove the DTS account's write permissions on the destination — to prevent the task from resuming automatically and overwriting destination data.

What's next