
Data Transmission Service: Migrate from RDS SQL Server to RDS MySQL

Last Updated: Mar 28, 2026

Data Transmission Service (DTS) supports heterogeneous data migration from ApsaraDB RDS for SQL Server to ApsaraDB RDS for MySQL, with options for schema migration, full data migration, and near-zero-downtime incremental migration.

Prerequisites

Before you begin, check the scale of the source instance. If it meets any of the following conditions, split the migration into multiple tasks:

  • The number of databases exceeds 10

  • Log backups are performed on a single database more than once per hour

  • DDL operations are performed on a single database more than 100 times per hour

  • The log volume of a single database exceeds 20 MB/s

  • Change Data Capture (CDC) needs to be enabled for more than 1,000 tables
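
To estimate whether you are near these limits, you can run catalog queries such as the following on the source instance. This is a sketch; run the second query in each database that you plan to migrate.

```sql
-- Count user databases (system databases master, tempdb, model, and msdb
-- have database_id 1-4 and are excluded).
SELECT COUNT(*) AS user_db_count
FROM sys.databases
WHERE database_id > 4;

-- Count the user tables in the current database that would need CDC
-- if you migrate the entire database.
SELECT COUNT(*) AS user_table_count
FROM sys.tables
WHERE is_ms_shipped = 0;
```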

Billing

  • Schema migration and full data migration: the link configuration fee is free. Data transfer is also free, unless Access Method for the destination is set to Public IP Address. See Billing overview.

  • Incremental data migration: the link configuration fee is charged. See Billing overview.

Permissions required

  • RDS SQL Server: read permissions on the objects to be migrated for schema migration and full data migration; owner permissions on the objects to be migrated for incremental migration.

  • RDS MySQL: read and write permissions for all migration types.

Create a database account on the source and destination instances and grant it the permissions listed above.
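
As a sketch, the grants above might look like the following. The account name dts_user, the schema dbo, and the database name mydb are illustrative; adapt them to your environment.

```sql
-- On the source RDS SQL Server database: read access for schema migration
-- and full data migration.
GRANT SELECT ON SCHEMA::dbo TO dts_user;

-- For incremental migration the account needs owner permissions on the
-- migrated objects, for example via the db_owner role.
ALTER ROLE db_owner ADD MEMBER dts_user;

-- On the destination RDS MySQL instance: read and write permissions.
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, INDEX
  ON mydb.* TO 'dts_user'@'%';
```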

Choose an incremental synchronization mode

This setting applies only when you select Incremental Data Migration for Migration Types.

  • Hybrid log parsing: Supports heap tables, tables without primary keys, compressed tables, tables with computed columns, and tables with sparse columns. Source database impact: moderate (creates a trigger, a heartbeat table, and a DDL storage table; enables CDC). DDL support: complete (full DDL statements). Supported source types: RDS SQL Server. Incremental latency: low. CDC table limit: 1,000 (recommended).

  • Log-based (non-heap tables only): Does not support heap tables, tables without primary keys, compressed tables, tables with computed columns, or tables with sparse columns. Source database impact: non-intrusive. DDL support: limited. Supported source types: RDS SQL Server; this mode is required for Web Edition instances. Incremental latency: low. CDC table limit: not applicable.

  • Polling CDC: Supports heap tables, tables without primary keys, compressed tables, tables with computed columns, and tables with sparse columns. Source database impact: moderate (enables CDC; requires CDC permissions). DDL support: limited. Supported source types: Amazon RDS for SQL Server, Azure SQL Database, Azure SQL Managed Instance, Azure SQL Server on VM, and Google Cloud SQL for SQL Server. Incremental latency: approximately 10 seconds. CDC table limit: 1,000 (recommended).

How to choose:

  • Hybrid log parsing — use this for most RDS SQL Server migrations. It handles all table types and provides the broadest DDL support.

  • Log-based (non-heap tables only) — use this when you need zero intrusion to the source database and all tables have clustered indexes with primary key columns. Not compatible with heap tables, tables without primary keys, compressed tables, tables with computed columns, or tables with sparse columns.

  • Polling CDC — use this when the source database is hosted on a third-party cloud platform (Amazon RDS for SQL Server, Azure SQL Database, etc.).

To check whether your source tables are compatible with log-based mode, see How to view information about heap tables, tables without primary keys, compressed tables, tables with computed columns, and tables with sparse columns in SQL Server.
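
As a starting point, catalog queries like the following (a sketch) can list tables that are incompatible with log-based mode:

```sql
-- Heap tables (tables without a clustered index; index type 0 = heap).
SELECT SCHEMA_NAME(t.schema_id) AS schema_name, t.name AS table_name
FROM sys.tables t
JOIN sys.indexes i ON i.object_id = t.object_id AND i.type = 0;

-- Tables without a primary key.
SELECT SCHEMA_NAME(t.schema_id) AS schema_name, t.name AS table_name
FROM sys.tables t
WHERE NOT EXISTS (
    SELECT 1 FROM sys.key_constraints k
    WHERE k.parent_object_id = t.object_id AND k.type = 'PK'
);

-- Tables with computed or sparse columns.
SELECT DISTINCT OBJECT_SCHEMA_NAME(c.object_id) AS schema_name,
       OBJECT_NAME(c.object_id) AS table_name
FROM sys.columns c
WHERE c.is_computed = 1 OR c.is_sparse = 1;

-- Tables with data compression on any partition (0 = NONE).
SELECT DISTINCT OBJECT_SCHEMA_NAME(p.object_id) AS schema_name,
       OBJECT_NAME(p.object_id) AS table_name
FROM sys.partitions p
WHERE p.data_compression > 0;
```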

SQL operations supported for incremental migration

  • DML: INSERT, UPDATE, and DELETE. Note: if an UPDATE operation updates only large fields, DTS does not migrate it.

  • DDL: CREATE TABLE; ALTER TABLE (ADD COLUMN and DROP COLUMN only); DROP TABLE; CREATE INDEX; DROP INDEX.

DDL operations not supported:

  • DDL operations that involve user-defined types

  • Online DDL operations

  • Transactional DDL operations — for example, adding multiple columns in a single SQL statement, or combining DDL and DML in a single statement (may cause data loss)

  • DDL operations using reserved keywords as attribute names

  • DDL operations performed by system stored procedures

  • TRUNCATE TABLE

  • Partitions and functions within table definitions

Limits and notes

During schema migration, DTS migrates foreign keys from the source database to the destination database.
During full data migration and incremental data migration, DTS temporarily disables constraint checks and foreign key cascade operations at the session level. If cascade update or delete operations occur in the source database while the task is running, data inconsistency may occur.
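
On the MySQL side, the session-level behavior described above corresponds roughly to the following statements. This is illustrative only; DTS manages these settings on its own connections.

```sql
-- Illustrative: what a migration session effectively does on the destination.
SET SESSION foreign_key_checks = 0;  -- disable foreign key constraint checks
-- ... migrated rows are written here in arbitrary parent/child order ...
SET SESSION foreign_key_checks = 1;  -- the setting is per session and ends with it
```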

Source database limits

  • Bandwidth: The server hosting the source database must have sufficient outbound bandwidth; otherwise, migration speed is affected.

  • Primary keys: Tables to be migrated must have primary keys or UNIQUE constraints on unique fields; otherwise, duplicate data may appear in the destination database.

  • Table limit: A single task supports a maximum of 1,000 tables when migrating table-level objects with name mapping. Exceeding this limit causes an error after you submit the task. Either split the tables across multiple tasks or configure a task to migrate the entire database.

  • Database limit: A single task supports a maximum of 10 databases. Exceeding this limit may cause stability and performance issues.

  • Schema mapping: When migrating specific objects (not an entire database), you cannot migrate tables that have the same name but different schema names to the same destination database.

  • Read-only instances: DDL operations cannot be migrated from a read-only instance.

  • Azure SQL Database: A single DTS task can migrate only one database.

  • TDE: If the source is an RDS for SQL Server instance and the task includes incremental migration, disable Transparent Data Encryption (TDE) to ensure task stability. See Disable TDE.

  • Object renaming: Do not use sp_rename to rename objects such as stored procedures before the initial schema synchronization task runs. Use the ALTER command instead.

  • READ_COMMITTED_SNAPSHOT: During full data migration, make sure that READ_COMMITTED_SNAPSHOT is enabled for the source database. This prevents shared locks from affecting data writes. Exceptions caused by this setting not being enabled are not covered by the DTS SLA.
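
You can check and, if needed, enable READ_COMMITTED_SNAPSHOT with statements like the following. This is a sketch: your_database is a placeholder, and the ALTER DATABASE statement requires exclusive access, so run it in a maintenance window.

```sql
-- Check whether READ_COMMITTED_SNAPSHOT is enabled.
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = N'your_database';

-- Enable it; WITH ROLLBACK IMMEDIATE disconnects active sessions.
ALTER DATABASE your_database
SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
```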

Incremental migration log requirements:

  • The recovery model of the source database must be set to Full so that log backups are available, and a full physical backup must have been completed.

  • Log retention: more than 24 hours for incremental-only tasks; at least 7 days for tasks that include both full and incremental migration. You can change the log retention period after full migration completes. Issues caused by a log retention period shorter than required are not covered by the DTS SLA.

  • DTS uses the fn_log function to obtain source database logs. Do not clear source database logs too early; otherwise, the DTS task may fail.
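
You can verify the recovery model and the most recent full backup with queries like these (a sketch; your_database is a placeholder):

```sql
-- The recovery model must be FULL.
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'your_database';

-- Finish time of the most recent full backup (type 'D' = full database backup).
SELECT TOP 1 backup_finish_date
FROM msdb.dbo.backupset
WHERE database_name = N'your_database' AND type = 'D'
ORDER BY backup_finish_date DESC;
```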

CDC prerequisites:

  • The value of the srvname field in the sys.sysservers view must match the return value of the SERVERPROPERTY function.

  • Database owner: sa for self-managed SQL Server; sqlsa for RDS for SQL Server.

  • Enterprise Edition: SQL Server 2008 or later.

  • Standard Edition: SQL Server 2016 SP1 or later.

  • SQL Server 2017 (Standard or Enterprise Edition): upgrade the database version before configuring CDC.
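
You can verify the first prerequisite with a query like the following; the two returned values must match:

```sql
-- srvid = 0 identifies the local server entry in sys.sysservers.
SELECT srvname FROM sys.sysservers WHERE srvid = 0;
SELECT SERVERPROPERTY('ServerName') AS server_property_name;
```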

Operation restrictions during migration:

  • During initial schema synchronization and full data migration, do not perform DDL operations on database or table schemas; otherwise, the task fails.

  • If you perform only full data migration, do not write new data to the source instance during migration; otherwise, data inconsistency occurs. To maintain real-time data consistency, select Initial Schema Synchronization, Full Data Migration, and Incremental Data Migration.

  • If a task includes incremental data migration, do not reindex; otherwise, the task may fail or data may be lost. You cannot change the primary keys of tables with CDC enabled.

Web Edition RDS for SQL Server:

Set SQL Server Incremental Synchronization Mode to Incremental Synchronization Based on Logs of Source Database (Heap tables are not supported) when configuring the task.

Other limits

  • Complex DDL operations cannot be migrated.

  • The following data types cannot be migrated: CURSOR, ROWVERSION, SQL_VARIANT, HIERARCHYID, POLYGON, GEOMETRY, GEOGRAPHY, and user-defined data types created with CREATE TYPE.

  • The following object types cannot be migrated: PROCEDURE, FUNCTION, TRIGGER, DATATYPE, SYNONYM, CATALOG, PLAN_GUIDE, and SEQUENCE.

  • If the number of CDC-enabled tables in a single task exceeds 1,000, the precheck fails.

  • If data contains four-byte characters (uncommon characters), the destination database and tables must use the utf8mb4 charset. If DTS performs schema migration, also set the character_set_server parameter to utf8mb4.

  • If incremental data written to a single field of a CDC-enabled table exceeds 64 KB, run exec sp_configure 'max text repl size', -1; on the source database in advance. By default, a CDC job can process a maximum of 64 KB per field.

  • Migrate data during off-peak hours. DTS consumes read and write resources on both source and destination databases during full data migration, which increases database load.

  • If field column names differ only in case in a table written to the destination MySQL database, the migration result may not be as expected, because column names in MySQL are case-insensitive.

  • If a DDL statement fails to write to the destination database, the DTS task continues running. Check the task logs for the failed statement. See Query task logs.

  • If multiple DTS instances use the same SQL Server database as the source, their incremental data ingestion modules are independent of each other.

  • If a task fails, DTS support staff attempt to restore it within 8 hours. They may restart the task or adjust task parameters (not database parameters). For adjustable parameters, see Modify instance parameters.

  • Full data migration uses concurrent INSERT operations, which causes table fragmentation in the destination database. After full data migration, the table storage space in the destination database is larger than that in the source instance. After migration completes (the instance Status changes to Completed), run analyze table <table_name> to confirm all data has been written to the destination table. For example, if an HA switchover is triggered in the destination MySQL database, data may be written only to the memory, causing data loss.
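
On the destination RDS MySQL instance, the character set and post-migration checks above can be sketched as follows (mydb and your_table are placeholders):

```sql
-- The destination must use utf8mb4 if the data contains four-byte characters.
SHOW VARIABLES LIKE 'character_set_server';

-- If you create the destination database yourself, use utf8mb4 explicitly.
CREATE DATABASE mydb CHARACTER SET utf8mb4;

-- After the task status changes to Completed, confirm that all data has been
-- written and refresh table statistics.
ANALYZE TABLE your_table;
```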

Polling CDC mode limits

  • The source database account must have permissions to enable CDC. To enable database-level CDC, the account needs the sysadmin role. To enable table-level CDC, a privileged account is required. The privileged account provided by Azure SQL Database meets these requirements: for vCore-based databases, all instance types support CDC; for DTU-based databases, only S3 and later instance types support CDC. The privileged account of Amazon RDS for SQL Server meets the requirements for enabling database-level CDC through stored procedures. Clustered columnstore index tables do not support CDC.

  • DTS polls the CDC instance of each table to obtain incremental data. Migrate no more than 1,000 tables; otherwise, task latency or instability may occur.

  • By default, incremental data in the CDC component is retained for 3 days. To adjust the retention period, run:

    exec console.sys.sp_cdc_change_job @job_type = 'cleanup', @retention= <time>;

    <time> is in minutes. If a single table has more than 10 million incremental change SQL statements per day, set <time> to 1440.

  • Do not consecutively add or remove columns. For example, do not perform more than two DDL operations to add or remove columns within one minute; otherwise, the task may fail.

  • Do not change the CDC instance of the source database; otherwise, the task may fail or data may be lost.

Hybrid log parsing mode limits

  • Incremental migration depends on the CDC component. Make sure the CDC job in the source database is running; otherwise, the DTS task fails.

  • By default, incremental data in the CDC component is retained for 3 days. To adjust, run:

    exec console.sys.sp_cdc_change_job @job_type = 'cleanup', @retention= <time>;

    <time> is in minutes. Set <time> to 1440 if a single table has more than 10 million incremental change SQL statements per day.

  • Enable CDC for no more than 1,000 tables in a single task; otherwise, task latency or instability may occur.

  • The prerequisite module of an incremental migration task enables CDC for the source database. During this process, the source database may be briefly locked due to SQL Server database kernel limits.

  • Do not consecutively add or remove columns within an interval of less than 10 minutes. For example, running the following SQL statements consecutively causes an error:

    ALTER TABLE test_table DROP COLUMN Flag;
    ALTER TABLE test_table ADD Remark nvarchar(50) not null default('');
  • DTS creates the dts_cdc_sync_ddl trigger, the dts_sync_progress heartbeat table, and the dts_cdc_ddl_history DDL storage table in the source database. In hybrid incremental synchronization mode, DTS also enables database-level CDC and CDC for some tables. Keep the data change rate of CDC-enabled tables at or below 1,000 records per second (RPS).

  • You cannot execute SELECT INTO, TRUNCATE, or RENAME COLUMN statements on tables with CDC enabled in the source database. You cannot manually delete triggers created by DTS in the source database.

Special cases

If the source instance is an RDS for SQL Server instance, DTS creates an rdsdt_dtsacct account in the source instance for data migration. Do not delete this account or change its password while the task is running; otherwise, the task may fail. See System accounts.

SQL Server as a source — important note

SQL Server is a commercial closed-source database. Due to known or unknown format-specific limits, issues may occur when DTS performs CDC and log parsing. Before enabling incremental synchronization or migration for a SQL Server source in production, perform a comprehensive proof of concept (POC) test covering all business change types, table schema changes, and peak-hour stress tests. Make sure the business logic in the production environment matches that in the POC test to guarantee the efficiency and stability of DTS.

Create a migration task

Step 1: Go to the migration task list

Use one of the following methods:

From the DTS console

  1. Log on to the Data Transmission Service (DTS) console.

  2. In the left navigation pane, click Data Migration.

  3. In the upper-left corner, select the region where the migration instance is located.

From the DMS console

Note

The actual operations may vary based on the mode and layout of the DMS console. For more information, see Simple mode console and Customize the layout and style of the DMS console.

  1. Log on to the Data Management (DMS) console.

  2. In the top menu bar, choose Data + AI > Data Transmission (DTS) > Data Migration.

  3. To the right of Data Migration Tasks, select the region where the migration instance is located.

Step 2: Configure source and destination databases

Click Create Task to open the task configuration page, then configure the following:

Warning

After selecting source and destination instances, review the limits displayed at the top of the page carefully. Ignoring these limits may cause the task to fail or data inconsistency to occur.

Task Name: DTS auto-generates a task name. Specify a descriptive name for easy identification. The name does not need to be unique.

Source Database:

  • Select Existing Connection: Select a database instance added to the system from the drop-down list to auto-fill the database information. In the DMS console, this parameter is named Select a DMS database instance. If no registered instance is available, configure the database information manually.

  • Database Type: Select SQL Server.

  • Access Method: Select Alibaba Cloud Instance.

  • Instance Region: Select the region where the source RDS SQL Server instance resides.

  • Replicate Data Across Alibaba Cloud Accounts: Select No for same-account migrations.

  • RDS Instance ID: Select the ID of the source RDS SQL Server instance.

  • Database Account: Enter the database account. See Permissions required.

  • Database Password: Enter the password for the database account.

  • Encryption: Select Non-encrypted if SSL encryption is disabled on the source database. Select SSL-encrypted if SSL encryption is enabled; DTS trusts the server certificate by default.

Destination Database:

  • Select Existing Connection: Same as the source database.

  • Database Type: Select MySQL.

  • Access Method: Select Alibaba Cloud Instance.

  • Instance Region: Select the region where the destination RDS MySQL instance resides.

  • Replicate Data Across Alibaba Cloud Accounts: Select No for same-account migrations.

  • RDS Instance ID: Select the ID of the destination RDS MySQL instance.

  • Database Account: Enter the database account. See Permissions required.

  • Database Password: Enter the password for the database account.

  • Encryption: Select Non-encrypted or SSL-encrypted. If you select SSL-encrypted, enable SSL encryption on the RDS MySQL instance first. See Quickly enable SSL encryption using a cloud certificate.

After completing the configuration, click Test Connectivity and Proceed at the bottom of the page.

Make sure the IP address range of the DTS service is added (automatically or manually) to the security settings of the source and destination databases. See Add DTS server IP addresses to a whitelist. If the source or destination database is self-managed (Access Method is not Alibaba Cloud Instance), also click Test Connectivity in the CIDR Blocks of DTS Servers dialog box.

Step 3: Configure task objects

On the Configure Objects page, set the following parameters:

  • Migration Types: Select Schema Migration and Full Data Migration for a full migration only. To minimize downtime, also select Incremental Data Migration. Note: without Schema Migration, create the target database and tables manually, or use the object name mapping feature. Without Incremental Data Migration, do not write new data to the source during migration.

  • Schema Mapping Mode of Source and Destination Databases: Select a schema mapping mode to map schemas between the source and destination databases. Warning: tables in different schemas of the source database cannot share the same name; otherwise, data inconsistency or task failure may occur.

  • SQL Server Incremental Synchronization Mode: Select the incremental synchronization mode. See Choose an incremental synchronization mode. This parameter is available only when Incremental Data Migration is selected.

  • The maximum number of tables for which CDC is enabled that DTS supports: Set the maximum number of CDC-enabled tables per DTS task. Default: 1,000. Not available when SQL Server Incremental Synchronization Mode is set to Incremental Synchronization Based on Logs of Source Database (Heap tables are not supported).

  • Processing Mode of Conflicting Tables: Precheck and Report Errors checks for tables with the same names in the destination database before migration starts; if conflicts exist, DTS reports an error and stops the task. To rename conflicting tables in the destination, see Object name mapping. Ignore Errors and Proceed skips the conflict check. Warning: this option may cause data inconsistency. During full migration, DTS keeps the destination record; during incremental migration, the source record overwrites the destination record. If table schemas are inconsistent, only some columns may migrate, or the migration may fail.

  • Capitalization of Object Names in Destination Instance: Configure the case sensitivity policy for migrated object names (databases, tables, and columns). Default: DTS default policy. See Case sensitivity of object names.

  • Source Objects: Click the objects to migrate in the Source Objects box, then click the right arrow to move them to Selected Objects. Select objects at the database, table, or column level. Selecting tables or columns excludes views, triggers, and stored procedures.

  • Selected Objects: To rename a single object in the destination, right-click it in Selected Objects. See Individual table column mapping. To rename multiple objects at once, click Batch Edit in the upper-right corner. See Map multiple object names at a time. Note: object name mapping may cause migration failures for dependent objects. To filter data with a WHERE clause, right-click the table in Selected Objects and set the filter condition. See Set filter conditions. To select SQL operations for incremental migration at the database or table level, right-click the object and select the desired operations.

Step 4: Configure advanced settings

Click Next: Advanced Settings to configure the following:

  • Dedicated Cluster for Task Scheduling: By default, DTS schedules tasks on a shared cluster. For more stable tasks, purchase a dedicated cluster.

  • Retry Time for Failed Connections: After the task starts, if the connection to the source or destination fails, DTS retries immediately. Default: 720 minutes. Valid range: 10–1,440 minutes. Set this value to more than 30 minutes. If DTS reconnects within the specified duration, the task resumes automatically. Note: for multiple DTS instances that share the same source or destination, the retry time is determined by the last created task. Because you are charged during the retry period, set the retry time based on your business needs and release the DTS instance promptly after the source and destination instances are released.

  • Retry Time for Other Issues: After the task starts, if a non-connectivity issue occurs (such as a DDL or DML execution exception), DTS retries immediately. Default: 10 minutes. Valid range: 1–1,440 minutes. Set this value to more than 10 minutes. Important: this value must be less than Retry Time for Failed Connections.

  • Enable Throttling for Full Data Migration: During full migration, DTS consumes read and write resources on the source and destination databases. If needed, set Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s) to reduce the load. Available only when Full Data Migration is selected. You can also adjust the full migration speed after the task starts.

  • Enable Throttling for Incremental Data Migration: Set RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s) to reduce the load. Available only when Incremental Data Migration is selected. You can also adjust the incremental migration speed after the task starts.

  • Environment Tag: Select an environment tag to identify the instance. Optional.

  • Configure ETL: Choose whether to enable the extract, transform, and load (ETL) feature. Select Yes to enter data processing statements. See Configure ETL in a data migration or data synchronization task. Select No to disable ETL. For an overview of ETL, see What is ETL?

  • Monitoring and Alerting: Select Yes to set an alert threshold and alert notifications. If the migration fails or latency exceeds the threshold, the system sends an alert. Select No to disable alerts.

Step 5: Configure data validation

Click Next: Data Validation to configure a data validation task. For details, see Configure data validation.

Step 6: Save and run the precheck

Click Next: Save Task Settings and Precheck.

To view the API parameters for this configuration, hover over the Next: Save Task Settings and Precheck button and click Preview OpenAPI parameters.

Before the task starts, DTS runs a precheck. The task starts only after it passes the precheck.

  • If the precheck fails, click View Details next to the failed item, fix the issue, and rerun the precheck.

  • If a warning is reported:

    • For items that cannot be ignored, click View Details, fix the issue, and rerun the precheck.

    • For ignorable items, click Confirm Alert Details > Ignore > OK > Precheck Again. Ignoring warnings may cause data inconsistency and pose business risks.

Step 7: Purchase the instance

  1. When the Success Rate reaches 100%, click Next: Purchase Instance.

  2. On the Purchase page, select the link specification for the migration instance:

    Under New Instance Class, configure the following:

    • Resource Group Settings: Select the resource group for the instance. Default: the default resource group. See What is Resource Management?

    • Instance Class: Select a specification based on your migration scenario. The link specification affects migration speed. See Data migration link specifications.
  3. Read and select Data Transmission Service (Pay-as-you-go) Service Terms.

  4. Click Buy and Start, then click OK in the confirmation dialog box.

Monitor the migration task

After the task starts, view its progress on the Data Migration Tasks list page.

  • Full migration only: The task stops automatically after full migration completes. The Status changes to Completed.

  • Includes incremental migration: The task does not stop automatically. The incremental migration task continues running, and the Status remains Running.