
Data Transmission Service: Migrate RDS for SQL Server to PolarDB for MySQL

Last Updated: Mar 28, 2026

Data Transmission Service (DTS) supports heterogeneous data migration from an RDS for SQL Server instance to a PolarDB for MySQL cluster, including schema migration, full data migration, and incremental data migration.

Prerequisites

Before you begin, assess the scale of the source instance. If the source instance meets any of the following conditions, split the migration into multiple tasks:

  • More than 10 databases

  • Log backups on a single database more than once per hour

  • DDL operations on a single database more than 100 times per hour

  • Log volume of a single database exceeding 20 MB/s

  • Change Data Capture (CDC) required for more than 1,000 tables

Billing

| Migration type | Link configuration fee | Internet traffic fee |
| --- | --- | --- |
| Schema migration and full data migration | Free | Free |
| Incremental data migration | Charged. For details, see Billing overview. | |

Permissions required

| Database | Schema migration | Full migration | Incremental migration |
| --- | --- | --- | --- |
| RDS for SQL Server instance | Read | Read | Owner |
| PolarDB for MySQL cluster | Read and write | Read and write | Read and write |

Create the required accounts on the source and destination instances and grant them the permissions listed above before you configure the migration task.

SQL operations supported by incremental migration

| Operation type | Supported SQL operations |
| --- | --- |
| DML | INSERT, UPDATE, DELETE. Note: UPDATE statements that modify only large object columns are not supported. |
| DDL | CREATE TABLE; ALTER TABLE (ADD COLUMN and DROP COLUMN only); DROP TABLE; CREATE INDEX; DROP INDEX |

The following DDL operations are not supported:

  • DDL operations that involve user-defined types

  • Online DDL operations

  • Transactional DDL operations (for example, adding multiple columns in a single statement, or combining DDL and DML in a single statement)

  • DDL operations that use reserved keywords as attribute names

  • DDL operations performed by system stored procedures

  • TRUNCATE TABLE

  • Partitions and functions within table definitions

Limitations

Source database requirements

  • Tables to be migrated must have primary keys or UNIQUE constraints with unique fields. Otherwise, duplicate data may appear in the destination database.

  • A single migration task supports a maximum of 1,000 tables when you edit table-level objects (such as mapping table or column names). If you exceed this limit, split the tables into multiple migration tasks or configure the task to migrate an entire database.

  • A single migration task supports a maximum of 10 databases. Exceeding this limit may cause stability and performance issues.

  • If you migrate specific objects instead of an entire database, tables with the same name but different schema names cannot be migrated to the same destination database.

  • The source database server must have sufficient outbound bandwidth. Insufficient bandwidth reduces migration speed.

  • READ_COMMITTED_SNAPSHOT must be enabled on the source database during full data migration. If it is disabled, shared locks taken by readers can block data writes and may cause data inconsistency or instance failures. Issues caused by a disabled READ_COMMITTED_SNAPSHOT are not covered by the DTS Service-Level Agreement (SLA).
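
The primary-key and READ_COMMITTED_SNAPSHOT requirements above can be verified on the source instance before you configure the task. The following T-SQL is a sketch; the database name mydb is a placeholder for your own database:

```sql
-- List tables that have neither a PRIMARY KEY nor a UNIQUE constraint
-- (candidates for duplicate rows in the destination).
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON t.schema_id = s.schema_id
WHERE t.object_id NOT IN (
    SELECT parent_object_id
    FROM sys.key_constraints
    WHERE type IN ('PK', 'UQ')
);

-- Check whether READ_COMMITTED_SNAPSHOT is enabled (1 = on).
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'mydb';

-- Enable it if needed. This statement requires that no other
-- sessions are connected to the database, so run it in a
-- maintenance window.
ALTER DATABASE mydb SET READ_COMMITTED_SNAPSHOT ON;
```

Note that unique indexes created with CREATE UNIQUE INDEX rather than as constraints do not appear in sys.key_constraints; if you rely on such indexes, also check sys.indexes with is_unique = 1.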

Incremental migration requirements

For incremental migration, the source database must meet these conditions:

  • Logs must be enabled with the backup mode set to Full, and a full physical backup must have completed successfully.

  • For incremental-only tasks, DTS requires logs to be retained for more than 24 hours. For tasks that include both full and incremental migration, logs must be retained for at least 7 days. You can increase the retention period to more than 24 hours after full migration completes. Insufficient log retention may cause the DTS task to fail, and in extreme cases, data inconsistency or data loss may occur. Issues caused by insufficient log retention are not covered by the DTS SLA.

  • DTS uses the fn_log function to obtain source database logs. Do not clear source database logs prematurely. Otherwise, the DTS task may fail.

  • Do not perform DDL schema changes during initial schema synchronization and full data migration. Otherwise, the migration task fails.

  • Do not write new data to the source instance if you perform only full data migration. To ensure real-time data consistency, select Schema Migration, Full Data Migration, and Incremental Data Migration together.

  • If a migration task includes incremental data migration, do not reindex. Otherwise, the task may fail or data may be lost.

You cannot change the primary keys of tables with CDC enabled.
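
The log and backup requirements above can be checked with a few standard queries. This is a sketch for a self-managed SQL Server instance; mydb is a placeholder, and on RDS for SQL Server the recovery model and backup schedule are managed from the console rather than with ALTER DATABASE:

```sql
-- Confirm the source database uses the FULL recovery model.
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'mydb';

-- Switch to FULL if necessary (self-managed SQL Server only).
ALTER DATABASE mydb SET RECOVERY FULL;

-- Verify that a full backup (type 'D') has completed and that
-- log backups (type 'L') are being taken regularly.
SELECT TOP 10 database_name, type, backup_finish_date
FROM msdb.dbo.backupset
WHERE database_name = 'mydb'
ORDER BY backup_finish_date DESC;
```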

CDC enablement conditions

To enable CDC for tables in the source database, all of the following conditions must be met. Otherwise, the precheck fails.

  • The value of the srvname field in the sys.sysservers view must match the return value of the SERVERPROPERTY function.

  • For self-managed SQL Server: the database owner must be sa.

  • For RDS for SQL Server: the database owner must be sqlsa.

  • Enterprise Edition: SQL Server 2008 or later.

  • Standard Edition: SQL Server 2016 SP1 or later.

  • SQL Server 2017 (Standard or Enterprise Edition): upgrade the database version before enabling CDC.
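
The server-name and owner conditions above can be inspected before the precheck runs. A sketch in T-SQL:

```sql
-- The srvname recorded in sys.sysservers must match the actual
-- server name; a mismatch (for example, after a host rename)
-- causes CDC enablement to fail.
SELECT srvname FROM sys.sysservers;
SELECT SERVERPROPERTY('ServerName') AS actual_server_name;

-- Check each database owner (must be sa for self-managed
-- SQL Server, sqlsa for RDS for SQL Server).
SELECT name, SUSER_SNAME(owner_sid) AS owner_name
FROM sys.databases;
```

On self-managed instances, a stale srvname entry is commonly corrected with sp_dropserver followed by sp_addserver ... , 'local' and a service restart; on RDS for SQL Server, contact support instead.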

Data types and objects that cannot be migrated

The following SQL Server data types cannot be migrated: CURSOR, ROWVERSION, SQL_VARIANT, HIERARCHYID, POLYGON, GEOMETRY, GEOGRAPHY, and user-defined types created with the CREATE TYPE command.

The following object types cannot be migrated: PROCEDURE, FUNCTION, TRIGGER, DATATYPE, SYNONYM, CATALOG, PLAN_GUIDE, and SEQUENCE.

Complex DDL operations cannot be migrated.
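
To find affected columns before migration, you can query the catalog on the source database. A sketch; note that ROWVERSION columns are reported as the legacy type name timestamp in INFORMATION_SCHEMA, and CURSOR is not a column type, so it cannot appear here:

```sql
-- Find columns whose data types DTS cannot migrate.
SELECT c.TABLE_SCHEMA, c.TABLE_NAME, c.COLUMN_NAME, c.DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS AS c
WHERE c.DATA_TYPE IN ('timestamp', 'sql_variant', 'hierarchyid',
                      'geometry', 'geography');

-- List user-defined types created with the CREATE TYPE command.
SELECT name FROM sys.types WHERE is_user_defined = 1;
```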

Special scenarios

| Scenario | Limitation |
| --- | --- |
| Four-byte characters (rare characters or emojis) | The destination database and table must use the utf8mb4 character set. Set the instance-level parameter character_set_server to utf8mb4 in the destination database. |
| Single CDC-enabled field exceeds 64 KB | Run exec sp_configure 'max text repl size', -1; on the source database in advance. By default, a CDC job can process a single field with a maximum length of 64 KB. |
| Read-only source instance | DDL operations cannot be migrated. |
| Azure SQL Database source | A single DTS task can migrate only one database. |
| RDS for SQL Server source with incremental migration | Disable transparent data encryption (TDE) to ensure task stability. For details, see Disable TDE. |
| Web Edition RDS for SQL Server source | Set SQL Server Incremental Synchronization Mode to Incremental Synchronization Based on Logs of Source Database (Heap tables are not supported). |
| Multiple DTS instances sharing the same SQL Server source | Their incremental data ingestion modules operate independently. |
| RDS for SQL Server source | DTS creates an rdsdt_dtsacct account in the source instance. Do not delete this account or change its password while the task is running. For details, see System accounts. |
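
The first two scenarios above can be verified up front. This sketch mixes dialects: the first statement runs on the destination PolarDB for MySQL cluster, the rest on the source SQL Server instance. On managed services, character_set_server is usually changed through console parameter settings rather than SQL:

```sql
-- Destination (MySQL dialect): confirm the server character set
-- is utf8mb4 so four-byte characters survive migration.
SHOW VARIABLES LIKE 'character_set_server';

-- Source (T-SQL): allow CDC to replicate fields larger than
-- 64 KB (-1 removes the limit); RECONFIGURE applies the change.
EXEC sp_configure 'max text repl size', -1;
RECONFIGURE;
```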

Other considerations

  • If you use the sp_rename command to rename objects (such as stored procedures) before the initial schema synchronization task runs, the task may not work as expected or may fail. Use the ALTER command to rename objects instead.

  • Full data migration uses concurrent INSERT operations, which cause table fragmentation in the destination database. After full data migration completes, the storage space occupied by the destination tables is larger than that occupied by the source tables.

  • If a DDL statement fails to write to the destination database, the DTS task continues running. View the failed statement in the task logs. For details, see View task logs.

  • Schedule migration during off-peak hours. DTS consumes read and write resources on both source and destination databases during full data migration, which increases database load.

  • If a task fails, DTS support staff will attempt to restore it within eight hours. During restoration, they may restart the task or adjust DTS task parameters (not database parameters). For adjustable parameters, see Modify instance parameters.

Important

SQL Server is a commercial closed-source database. Due to format-specific constraints, issues may occur when DTS performs CDC and log parsing on SQL Server. Before enabling incremental migration for a SQL Server source in production, run a comprehensive proof of concept (POC) test that covers all business change types, schema changes, and peak-hour stress scenarios. Make sure the business logic in the production environment matches the POC test environment.

Schema migration note

During schema migration, DTS migrates foreign keys from the source to the destination database. During full and incremental data migration, DTS temporarily disables constraint checks and cascade operations on foreign keys at the session level. Performing cascade updates or deletes on the source during migration may cause data inconsistency.

Hybrid log parsing mode (additional limits)

In hybrid log parsing mode, do not consecutively add or remove columns within an interval of less than 10 minutes. For example, the following consecutive statements cause a task error:

ALTER TABLE test_table DROP COLUMN Flag;
ALTER TABLE test_table ADD Remark nvarchar(50) not null default('');

DTS creates the dts_cdc_sync_ddl trigger, the dts_sync_progress heartbeat table, and the dts_cdc_ddl_history DDL storage table in the source database. In hybrid incremental synchronization mode, DTS additionally enables database-level CDC and CDC for some tables. Keep the data change rate for CDC-enabled tables at or below 1,000 records per second (RPS).
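
In hybrid mode you can inspect which tables DTS has placed under CDC, and therefore which tables are subject to the SELECT INTO, TRUNCATE, and RENAME COLUMN restrictions. A sketch, with mydb as a placeholder:

```sql
-- Check whether database-level CDC is enabled (1 = on).
SELECT name, is_cdc_enabled
FROM sys.databases
WHERE name = 'mydb';

-- List tables currently tracked by CDC.
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON t.schema_id = s.schema_id
WHERE t.is_tracked_by_cdc = 1;
```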

Choose an incremental synchronization mode

This setting is available only when Incremental Data Migration is selected.

| Mode | Best for | Limitations |
| --- | --- | --- |
| Log-based Parsing for Non-heap Tables and CDC-based Incremental Synchronization for Heap Tables (Hybrid Log-based Parsing) | Heap tables, tables without primary keys, compressed tables, and tables with computed columns. Provides high link stability and supports a wide range of DDL scenarios. | DTS creates the dts_cdc_sync_ddl trigger, the dts_sync_progress heartbeat table, and the dts_cdc_ddl_history DDL storage table in the source database, and enables database-level CDC. You cannot run SELECT INTO, TRUNCATE, or RENAME COLUMN on tables with CDC enabled, or manually delete DTS-created triggers. |
| Incremental Synchronization Based on Logs of Source Database (Heap tables are not supported) | Scenarios with no heap tables, tables without primary keys, compressed tables, or tables with computed columns. Non-intrusive to the source database. | Does not support heap tables, tables without primary keys, compressed tables, or tables with computed columns. Tables to be migrated must have a clustered index that contains the primary key columns. |
| Polling and querying CDC instances for incremental synchronization | Sources that are Amazon RDS for SQL Server, Azure SQL Database, Azure SQL Managed Instance, Azure SQL Server on Virtual Machine, or Google Cloud SQL for SQL Server. Uses the native CDC component to obtain incremental data, improving stability and reducing network bandwidth usage. | Requires the source database account to have CDC enablement permissions. Incremental migration latency is approximately 10 seconds. Migrating multiple tables across multiple databases may cause stability and performance issues. |

Additional limits for Polling and querying CDC instances for incremental synchronization:

  • For Azure SQL Database: the privileged account (server administrator) from the Azure SQL Database console meets the permissions requirement. For vCore-based databases, all instance types support CDC. For DTU-based databases, only S3 and later support CDC. Clustered columnstore index tables do not support CDC.

  • For Amazon RDS for SQL Server: the privileged account can enable database-level CDC for stored procedures.

  • Do not consecutively add or remove columns. For example, do not perform more than two DDL operations that add or remove columns within one minute. Otherwise, the task may fail.

  • Do not change the CDC instance of the source database. Otherwise, the task may fail or data may be lost.

CDC data retention (for hybrid and polling modes):

By default, incremental data stored in the CDC component is retained for 3 days. Adjust the retention period using:

exec console.sys.sp_cdc_change_job @job_type = 'cleanup', @retention= <time>;

Where <time> is in minutes. If a single table generates more than 10 million incremental changes per day, set <time> to 1440.
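
For standard SQL Server CDC jobs, the current retention can be read from msdb before you change it; the default of 4320 minutes corresponds to the 3-day retention mentioned above (1440 minutes is one day). This is a sketch and assumes direct access to msdb, which may not apply to every managed source:

```sql
-- View the current retention (in minutes) of the CDC cleanup job.
SELECT job_type, retention
FROM msdb.dbo.cdc_jobs
WHERE job_type = 'cleanup';
```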

Limit CDC to no more than 1,000 tables per migration task. Exceeding this limit may cause task latency or instability.

The prerequisite module of an incremental migration task enables CDC for the source database. During this process, the source database may be briefly locked due to SQL Server database kernel constraints.

For tables in Incremental Synchronization Based on Logs of Source Database (Heap tables are not supported) mode, see How to view information about heap tables, tables without primary keys, compressed tables, tables with computed columns, and tables with sparse columns in SQL Server.

Create a migration task

Step 1: Open the migration task list

Use one of the following methods to navigate to the migration task list for the destination region.

From the DTS console

  1. Log on to the Data Transmission Service (DTS) console.

  2. In the left navigation pane, click Data Migration.

  3. In the upper-left corner, select the region where the migration instance is located.

From the DMS console

Note

The actual operations may vary based on the mode and layout of the DMS console. For details, see Simple mode console. To change the layout and style, see Customize the layout and style of the DMS console.

  1. Log on to the Data Management (DMS) console.

  2. In the top menu bar, choose Data + AI > Data Transmission (DTS) > Data Migration.

  3. To the right of Data Migration Tasks, select the region where the migration instance is located.

Step 2: Configure the source and destination databases

  1. Click Create Task.

  2. Configure the source database.

    Warning

    After you select the source and destination instances, carefully read the limits displayed at the top of the page. Otherwise, the task may fail or data inconsistency may occur.

    • Task Name: DTS generates a name automatically. Specify a descriptive name for easy identification. The name does not need to be unique.

    • Select Existing Connection: Select a registered database instance from the drop-down list to auto-fill the connection details. If the instance is not registered, configure the connection manually. Note: In the DMS console, this parameter is named Select a DMS database instance.

    • Database Type: Select SQL Server.

    • Access Method: Select Alibaba Cloud Instance.

    • Instance Region: Select the region of the source RDS for SQL Server instance.

    • Replicate Data Across Alibaba Cloud Accounts: Select No (same-account migration).

    • RDS Instance ID: Select the ID of the source RDS for SQL Server instance.

    • Database Account: Enter the database account. For permission requirements, see Permissions required.

    • Database Password: Enter the account password.

    • Encryption: Select Non-encrypted if SSL encryption is disabled on the source database. Select SSL-encrypted if SSL encryption is enabled. DTS trusts the server certificate by default.
  3. Configure the destination database.

    • Select Existing Connection: Select a registered database instance from the drop-down list to auto-fill the connection details. If the instance is not registered, configure the connection manually. Note: In the DMS console, this parameter is named Select a DMS database instance.

    • Database Type: Select PolarDB for MySQL.

    • Access Method: Select Alibaba Cloud Instance.

    • Instance Region: Select the region of the destination PolarDB for MySQL cluster.

    • PolarDB Cluster ID: Select the ID of the destination PolarDB for MySQL cluster.

    • Database Account: Enter the database account. For permission requirements, see Permissions required.

    • Database Password: Enter the account password.

    • Encryption: Select a connection method as needed. For details about SSL encryption, see Enable SSL encryption.
  4. Click Test Connectivity and Proceed at the bottom of the page.

    DTS automatically adds its IP address ranges to the security settings of the source and destination databases. If the source or destination is a self-managed database (Access Method is not Alibaba Cloud Instance), also click Test Connectivity in the CIDR Blocks of DTS Servers dialog box. For details, see Add DTS server IP addresses to a whitelist.

Step 3: Configure migration objects

On the Configure Objects page, configure the following settings.

  • Migration Types: Select the migration types for your scenario. For full migration only, select Schema Migration and Full Data Migration. For zero-downtime migration, also select Incremental Data Migration. Note: If you skip Schema Migration, make sure the destination database already has the target databases and tables. If you skip Incremental Data Migration, do not write new data to the source instance during migration.

  • Schema Mapping Mode of Source and Destination Databases: Select a schema mapping mode to map schemas between the source and destination databases. Warning: Tables in different schemas of the source database cannot share the same name. Otherwise, data inconsistency or task failure may occur.

  • SQL Server Incremental Synchronization Mode: Select a mode based on your source database type and table structure. See Choose an incremental synchronization mode. This parameter appears only when Incremental Data Migration is selected.

  • The maximum number of tables for which CDC is enabled: Specify the maximum number of CDC-enabled tables for this migration instance. Default: 1,000. This option does not appear when Incremental Synchronization Based on Logs of Source Database (Heap tables are not supported) is selected.

  • Processing Mode of Conflicting Tables: Select one of the following options. Precheck and Report Errors: checks for tables with the same names in the destination database before migration starts; if matches exist, the precheck fails and migration does not start. Note: If a conflicting table cannot be deleted or renamed, use Object name mapping to rename it. Ignore Errors and Proceed: skips the duplicate table check. Warning: This may cause data inconsistency. During full migration, DTS keeps the destination record; during incremental migration, the source record overwrites the destination record.

  • Capitalization of Object Names in Destination Instance: Configure the case sensitivity policy for migrated object names (databases, tables, columns). Default: DTS default policy. For details, see Case sensitivity of object names in the destination database.

  • Source Objects: In the Source Objects box, click the objects to migrate, then click the right arrow to move them to the Selected Objects box. Note: The migration granularity is schema, table, or column. Selecting only tables or columns excludes views, triggers, and stored procedures.

  • Selected Objects: To rename an object in the destination, right-click it in the Selected Objects box. For details, see Individual table column mapping. To rename multiple objects at once, click Batch Edit. For details, see Map multiple object names at a time. To filter rows, right-click a table and configure the WHERE condition. For details, see Configure filter conditions. To select specific incremental SQL operations, right-click the object and select the desired operations. Note: Object name mapping may cause migration failures for other objects that depend on the renamed object.

Step 4: Configure advanced settings

Click Next: Advanced Settings and configure the following parameters.

  • Dedicated Cluster for Task Scheduling: By default, DTS schedules tasks on a shared cluster. To improve task stability, purchase a dedicated cluster.

  • Retry Time for Failed Connections: The duration for which DTS retries after a connection failure. Default: 720 minutes. Range: 10–1,440 minutes. Set this value to more than 30 minutes. If DTS reconnects within this duration, the task resumes automatically. Note: For multiple DTS instances that share the same source or destination, the retry time is determined by the last created task. Charges apply during the retry period.

  • Retry Time for Other Issues: The duration for which DTS retries after non-connectivity errors (such as DDL or DML exceptions). Default: 10 minutes. Range: 1–1,440 minutes. Set this value to more than 10 minutes. This value must be less than Retry Time for Failed Connections.

  • Enable Throttling for Full Data Migration: Limit the read/write load during full migration by setting Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected. You can adjust the speed after the task starts. For details, see Enable throttling for data migration.

  • Enable Throttling for Incremental Data Migration: Limit the load during incremental migration by setting RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected.

  • Environment Tag: Select a tag to identify the instance. Optional.

  • Configure ETL: Choose whether to enable extract, transform, and load (ETL). Select Yes to enter data processing statements. For details, see Configure ETL in a data migration or data synchronization task.

  • Monitoring and Alerting: Select Yes to configure alert thresholds and notifications. DTS sends an alert if migration fails or latency exceeds the threshold.

Step 5: Configure data validation

Click Next: Data Validation to configure a data validation task. For details, see Configure data validation.

Step 6: Save the task and run the precheck

Click Next: Save Task Settings and Precheck.

To preview the API parameters for this configuration, hover over the Next: Save Task Settings and Precheck button and click Preview OpenAPI parameters.

DTS runs a precheck before the migration starts. The task starts only after it passes all precheck items.

  • If the precheck fails, click View Details next to the failed item, fix the issue, and run the precheck again.

  • If a warning appears:

    • For warnings that cannot be ignored: click View Details, fix the issue, and run the precheck again.

    • For warnings that can be ignored: click Confirm Alert Details > Ignore > OK > Precheck Again to skip the item. Ignoring warnings may cause data inconsistency or business risks.

Step 7: Purchase the instance and start migration

  1. When the Success Rate reaches 100%, click Next: Purchase Instance.

  2. On the Purchase page, select the link specification for the migration instance.

    • Resource Group Settings: Select the resource group for the instance. Default: default resource group. For details, see What is Resource Management?

    • Instance Class: Select a specification based on your migration scenario. The link specification affects migration speed. For details, see Data migration link specifications.
  3. Read and select Data Transmission Service (Pay-as-you-go) Service Terms.

  4. Click Buy and Start. In the dialog box, click OK.

Monitor migration progress

After the task starts, go to the Data Migration Tasks list page to monitor progress.

  • If the task does not include incremental migration, it stops automatically after full migration completes and the Status changes to Completed.

  • If the task includes incremental migration, it runs continuously. The Status remains Running until you manually stop the task.