
Data Transmission Service: Migrate RDS for SQL Server to AnalyticDB for MySQL 3.0

Last Updated: Mar 30, 2026

Data Transmission Service (DTS) migrates SQL Server databases—including RDS for SQL Server and self-managed SQL Server instances—to AnalyticDB for MySQL for real-time analytics workloads.

This topic uses an RDS for SQL Server instance as the source. The configuration steps are the same for other supported source types.

Supported source databases

  • RDS for SQL Server instance

  • Self-managed SQL Server with a public IP address

  • Self-managed SQL Server hosted on Elastic Compute Service (ECS)

  • Self-managed SQL Server connected over Express Connect, VPN Gateway, or Smart Access Gateway

  • Self-managed SQL Server connected over Database Gateway

Prerequisites

Before you begin, review the following requirements and limits.

Important

RDS for SQL Server 2008 and 2008 R2 do not support incremental migration.

If any of the following conditions apply to your source instance, split the migration into multiple tasks:

  • The instance has more than 10 databases

  • Log backups run more than once per hour on a single database

  • DDL operations occur more than 100 times per hour on a single database

  • The log volume of a single database exceeds 20 MB/s

  • Change Data Capture (CDC) must be enabled for more than 1,000 tables
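To gauge whether these thresholds apply, you can run quick counts on the source instance. This is an informal sketch; `your_db` is a placeholder for each database you plan to migrate.

```sql
-- Number of user databases on the instance (database_id <= 4 are system databases)
SELECT COUNT(*) AS user_db_count
FROM sys.databases
WHERE database_id > 4;

-- Number of user tables in the current database (run inside each database)
USE your_db;
SELECT COUNT(*) AS user_table_count
FROM sys.tables;
```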

Billing

  • Schema migration and full data migration: the instance configuration fee is free. Internet traffic fees are charged only when Access Method is set to Public IP Address. See Billing overview.

  • Incremental data migration: the instance configuration fee is charged. See Billing overview.

Migration types

DTS supports three migration types for this scenario:

  • Schema migration — Migrates schema definitions from the source to the destination.

    • Supported objects: Schema, Table, View, Function, Procedure.

    • Not supported: Assemblies, Service Broker, Full-text Index, Full-text Catalog, Distributed Schema, Distributed Function, CLR Stored Procedure, CLR Scalar Function, CLR Table-valued Function, Internal Table, System, Aggregate Functions.

    Warning

    This is a heterogeneous database migration. Data types may not map one-to-one, which can cause task failure or data loss. Review Data type mapping between heterogeneous databases before you start.

  • Full data migration — Migrates all historical data from the selected objects to the destination database.

  • Incremental data migration — After full migration completes, continuously replicates data changes from the source to the destination. Use this to migrate data with minimal service interruption.

Required permissions

  • RDS for SQL Server: Read permission for schema migration and full migration; Owner permission for incremental migration.

  • AnalyticDB for MySQL cluster: Read and write permissions for schema, full, and incremental migration.

Create the accounts and grant the permissions listed above before you configure the task.

Choose an incremental synchronization mode

When your migration includes incremental data migration, select one of the three SQL Server incremental synchronization modes based on your source database type and table structure.

Quick selection guide:

  • If your source is Amazon RDS for SQL Server, Azure SQL Database, Azure SQL Managed Instance, Azure SQL Server on Virtual Machine, or Google Cloud SQL for SQL Server — use Polling CDC.

  • If your source is a Web Edition RDS for SQL Server instance — use Log-based (non-heap tables).

  • If your source has heap tables, tables without primary keys, compressed tables, or tables with computed columns — use Hybrid Log-based Parsing.

  • If none of the above applies and you want zero impact on the source database — use Log-based (non-heap tables).

Comparison of incremental synchronization modes:

| Feature | Hybrid Log-based Parsing | Log-based (non-heap tables) | Polling CDC |
| --- | --- | --- | --- |
| Supports heap tables | Yes | No | Yes |
| Supports tables without primary keys | Yes | No | Yes |
| Supports compressed tables | Yes | No | Yes |
| Supports tables with computed columns | Yes | No | Yes |
| Intrusive to source database | Yes (creates triggers and tables, enables CDC) | No | Yes (enables CDC) |
| DDL coverage | High (complete DDL statements) | Standard | Standard |
| Supports Amazon RDS / Azure / Google Cloud SQL Server | No | No | Yes |
| Incremental latency | Low | Low | ~10 seconds |

SQL operations supported for incremental migration

  • DML: INSERT, UPDATE, DELETE

  • DDL: CREATE TABLE; ALTER TABLE (ADD COLUMN and DROP COLUMN only); DROP TABLE; CREATE INDEX; DROP INDEX

DML notes:

  • UPDATE statements that modify only large object (LOB) columns are not supported.

  • When data is written to AnalyticDB for MySQL, UPDATE statements are converted to REPLACE INTO. If the primary key is updated, it is converted to DELETE followed by INSERT.
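The conversion can be illustrated with a hypothetical table `t(id, val)`, where `id` is the primary key. This is a sketch of the behavior described above, not DTS's literal output:

```sql
-- Source (SQL Server): UPDATE that does not touch the primary key
UPDATE t SET val = 'new' WHERE id = 1;
-- Applied to AnalyticDB for MySQL as (MySQL dialect):
REPLACE INTO t (id, val) VALUES (1, 'new');

-- Source: UPDATE that changes the primary key
UPDATE t SET id = 2 WHERE id = 1;
-- Applied to the destination as a delete followed by an insert;
-- the row's other columns keep their current values
DELETE FROM t WHERE id = 1;
INSERT INTO t (id, val) VALUES (2, 'old_val');
```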

DDL not supported:

  • DDL operations referencing user-defined types

  • Transactional DDL (for example, adding multiple columns in one statement, or mixing DDL and DML)

  • Online DDL operations

  • DDL operations using reserved keywords as property names

  • DDL operations executed by system stored procedures

  • TRUNCATE TABLE

  • Partitions or table definitions containing functions

Limitations

Schema migration

  • DTS does not migrate foreign keys from the source to the destination.

  • During full and incremental migration, DTS temporarily disables constraint checks and foreign key cascade operations at the session level. If cascade update or delete operations occur in the source database while the task is running, data inconsistency may occur.

  • During schema migration, do not perform DDL operations on the databases or tables being migrated. Otherwise, the task fails.

Source database requirements

  • Tables must have primary keys or UNIQUE constraints with unique fields. Otherwise, duplicate data may appear in the destination.

  • A single task supports a maximum of 1,000 tables when table-level objects are edited (for example, when mapping table or column names). If you exceed this limit, split the tables into multiple tasks or migrate the entire database.

  • A single task supports a maximum of 10 databases. Exceeding this limit may cause stability and performance issues.

  • Tables with the same name but different schema names cannot be migrated to the same destination database.

  • If the source is a read-only instance, DDL operations cannot be migrated.

  • If the source is an Azure SQL Database, a single DTS task can migrate only one database.

  • Do not use the sp_rename command to rename objects (such as stored procedures) before the initial schema synchronization runs. Use the ALTER command instead.
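To find tables that would violate the primary key / UNIQUE constraint requirement above, a query along these lines can help. It is a sketch against the system catalog; run it in each source database:

```sql
-- Tables that have neither a primary key nor a unique constraint
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE NOT EXISTS (
    SELECT 1 FROM sys.indexes AS i
    WHERE i.object_id = t.object_id
      AND (i.is_primary_key = 1 OR i.is_unique_constraint = 1)
);
```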

Incremental migration requirements

Log requirements:

  • Backup mode must be set to Full, and a full physical backup must have completed successfully.

  • For incremental-only tasks, retain data logs for more than 24 hours.

  • For tasks that include both full and incremental migration, retain data logs for at least 7 days. After full migration completes, you can reduce the retention period to more than 24 hours. Retaining logs for less than the required period is not covered by the DTS Service-Level Agreement (SLA) and may cause task failure or data inconsistency.
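You can verify the backup mode and the last completed full backup with queries like the following (a sketch; `your_db` is a placeholder):

```sql
-- Recovery model must be FULL for incremental migration
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'your_db';

-- Most recent completed full backup (type 'D' = full database backup)
SELECT TOP 1 backup_finish_date
FROM msdb.dbo.backupset
WHERE database_name = 'your_db' AND type = 'D'
ORDER BY backup_finish_date DESC;
```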

CDC requirements for incremental migration:

  • The value of the srvname field in the sys.sysservers view must match the return value of the SERVERPROPERTY function.

  • For self-managed SQL Server: the database owner must be sa.

  • For RDS for SQL Server: the database owner must be sqlsa.

  • Enterprise Edition: SQL Server 2008 or later.

  • Standard Edition: SQL Server 2016 SP1 or later.

  • SQL Server 2017 (Standard or Enterprise): upgrade the database version before migrating.
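The checks above can be performed with standard catalog queries, for example:

```sql
-- The server name registered in sys.sysservers (srvid = 0 is the local server)
-- must match the value returned by SERVERPROPERTY
SELECT srvname FROM sys.sysservers WHERE srvid = 0;
SELECT SERVERPROPERTY('ServerName') AS server_name;

-- Database owners (must be sa for self-managed, sqlsa for RDS)
SELECT name, SUSER_SNAME(owner_sid) AS owner_name
FROM sys.databases;

-- Edition and version
SELECT SERVERPROPERTY('Edition') AS edition,
       SERVERPROPERTY('ProductVersion') AS version;
```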

Additional requirements by mode:

Hybrid Log-based Parsing:

  • DTS creates the dts_cdc_sync_ddl trigger, the dts_sync_progress heartbeat table, and the dts_cdc_ddl_history DDL storage table in the source database, and enables database-level CDC.

  • SELECT INTO, TRUNCATE, and RENAME COLUMN are not supported on tables with CDC enabled.

  • Do not manually delete DTS-created triggers.

  • Do not consecutively add or remove columns in the source database within 10 minutes. For example, running the following statements in sequence causes a task error:

    ALTER TABLE test_table DROP COLUMN Flag;
    ALTER TABLE test_table ADD Remark nvarchar(50) not null default('');
  • Recommended maximum: 1,000 CDC-enabled tables per task.

  • Recommended maximum data change rate for CDC-enabled tables: 1,000 records per second (RPS).

  • By default, the CDC component retains incremental data for 3 days. To extend the retention period, run:

    exec sys.sp_cdc_change_job @job_type = 'cleanup', @retention = <time>;

    <time> is in minutes. If a single table generates more than 10 million incremental records per day, set <time> to 1440.

  • The CDC prerequisite module may briefly lock the source database during initialization.

Log-based (non-heap tables):

  • No mode-specific requirements beyond the log and CDC requirements above. This mode does not modify the source database; however, heap tables, tables without primary keys, compressed tables, and tables with computed columns are not supported.

Polling CDC:

  • The DTS account must have permission to enable CDC. Database-level CDC requires the sysadmin role. Table-level CDC requires a privileged account.

    Note

    The privileged accounts for Azure SQL Database (console), Amazon RDS for SQL Server, and vCore-based Azure instances meet these requirements. For vCore-based Azure SQL databases, all instance types support CDC. For DTU-based Azure SQL databases, only instance types of S3 and later support CDC. Clustered columnstore index tables are not supported with CDC.

  • Do not consecutively add or remove columns. For example, more than two DDL operations within one minute may cause the task to fail.

  • Do not change the CDC instance of the source database. This may cause data loss.

  • Recommended maximum: 1,000 tables per task. Migrating more tables across multiple databases may cause stability and performance issues.

  • By default, the CDC component retains incremental data for 3 days. To extend the retention period, run:

    exec sys.sp_cdc_change_job @job_type = 'cleanup', @retention = <time>;

    <time> is in minutes. If a single table generates more than 10 million incremental records per day, set <time> to 1440.

  • The CDC prerequisite module may briefly lock the source database during initialization.

  • Incremental migration latency is approximately 10 seconds.

Full data migration

  • Enable READ_COMMITTED_SNAPSHOT on the source database to prevent shared locks from affecting data writes. Disabling this is not covered by the DTS SLA and may cause data inconsistency or instance failures.

  • Do not write new data to the source during full data migration if incremental migration is not enabled. Otherwise, data inconsistency occurs between source and destination.

  • Full data migration uses concurrent INSERT operations, which cause table fragmentation in the destination. After migration completes, the table storage size in the destination will be larger than in the source.

  • If the source is an RDS for SQL Server instance with incremental migration enabled, disable transparent data encryption (TDE) to maintain task stability. See Disable TDE.
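The READ_COMMITTED_SNAPSHOT setting mentioned above can be checked and enabled as follows (a sketch; `your_db` is a placeholder):

```sql
-- Check whether READ_COMMITTED_SNAPSHOT is already enabled
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'your_db';

-- Enable it; ROLLBACK IMMEDIATE disconnects active sessions so the
-- setting can take effect, so schedule this during a quiet window
ALTER DATABASE your_db SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
```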

Destination database requirements

  • Each destination table must have a primary key, or you must set Primary Key Column in the Configurations for Databases, Tables, and Columns step. Otherwise, migration may fail.

  • If disk space usage on any node in the destination cluster exceeds 80%, write performance slows and DTS task latency increases. If usage exceeds 90%, writes fail and the task fails. Estimate required storage before starting.

  • If the destination AnalyticDB for MySQL 3.0 cluster runs a backup while the DTS task is active, the task fails.

Data type limitations

Only basic data types can be migrated. The following types cannot be migrated: CURSOR, ROWVERSION, SQL_VARIANT, HIERARCHYID, POLYGON, GEOMETRY, GEOGRAPHY, and user-defined types created with CREATE TYPE.

DTS reads FLOAT and DOUBLE values using ROUND(COLUMN, PRECISION). The default precision is 38 for FLOAT and 308 for DOUBLE. Confirm that this meets your business requirements before starting.
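To locate columns that would block migration, a catalog query such as the following can help (a sketch; note that ROWVERSION columns appear as `timestamp` in INFORMATION_SCHEMA):

```sql
-- Columns whose types DTS cannot migrate
SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE DATA_TYPE IN ('sql_variant', 'hierarchyid', 'geometry', 'geography', 'timestamp')
ORDER BY TABLE_SCHEMA, TABLE_NAME;

-- User-defined types created with CREATE TYPE
SELECT name FROM sys.types WHERE is_user_defined = 1;
```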

Other limitations

  • If a DDL statement fails to write to the destination, the DTS task continues running. View failed DDL statements in the task logs. See View task logs.

  • Do not reindex during incremental migration. Do not change the primary keys of CDC-enabled tables.

  • If a single field in a CDC-enabled table receives data larger than 64 KB, run the following command on the source database before starting the task:

    exec sp_configure 'max text repl size', -1;
    reconfigure;
  • If multiple DTS instances use the same SQL Server database as the source, their incremental data ingestion modules operate independently.

  • DTS attempts to resume a failed task for up to 7 days. Before switching production traffic to the destination, end or release the task, or revoke write permissions from the DTS account using the revoke command. This prevents source data from overwriting destination data if the task resumes automatically.

  • If a task fails, DTS support will attempt to restore it within 8 hours. During restoration, the task may be restarted or its parameters adjusted. Only DTS task parameters are modified—not database parameters. See Modify instance parameters.

  • SQL Server is a closed-source commercial database. Before enabling incremental migration in a production environment, run a comprehensive proof of concept (POC) test that covers all business change types, table schema changes, and peak-hour stress scenarios. Business logic in production must match the POC test environment exactly.
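The revoke step mentioned above might look like the following on the destination cluster (MySQL dialect; `dts_user` and `your_db` are placeholders for the account used by the DTS task and the destination database):

```sql
-- Remove write access from the DTS account so an auto-resumed task
-- cannot overwrite data after the traffic switch
REVOKE INSERT, UPDATE, DELETE ON your_db.* FROM 'dts_user'@'%';
```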

Special cases

If the source is an RDS for SQL Server instance, DTS creates an rdsdt_dtsacct account in the source instance. Do not delete this account or change its password while the task is running. See System accounts.

Note
  • When AnalyticDB for MySQL is the destination, DTS supports writing the basic data types and complex data types (ARRAY, MAP, and JSON) of AnalyticDB for MySQL. If the destination table contains columns of data types that are not supported by DTS, such as MULTIVALUE, the data synchronization or migration task may fail. For more information, see Basic data types.

Create a migration task

Step 1: Open the migration task list

Use one of the following methods:

From the DTS console

  1. Log on to the Data Transmission Service (DTS) console.

  2. In the left navigation pane, click Data Migration.

  3. In the upper-left corner, select the region where the migration instance is located.

From the DMS console

Note

The exact navigation path may differ based on your DMS console mode and layout. See Simple mode console and Customize the layout and style of the DMS console.

  1. Log on to the Data Management (DMS) console.

  2. In the top menu bar, choose Data + AI > Data Transmission (DTS) > Data Migration.

  3. To the right of Data Migration Tasks, select the destination region.

Step 2: Configure source and destination databases

  1. Click Create Task.

  2. On the task configuration page, configure the source database.

    Warning

    After selecting source and destination instances, review the limits displayed at the top of the page. Skipping this step may cause task failure or data inconsistency.

    • Task Name: DTS generates a name automatically. Specify a descriptive name for easy identification. Names do not need to be unique.

    • Select Existing Connection (Source): To use a previously registered database instance, select it from the drop-down list. Otherwise, configure the fields below manually. In the DMS console, this parameter is named Select a DMS database instance.

    • Database Type: Select SQL Server.

    • Connection Type: Select Cloud Instance.

    • Instance Region: Select the region where your RDS for SQL Server instance resides.

    • Replicate Data Across Alibaba Cloud Accounts: Select No for same-account migration.

    • RDS Instance ID: Select your source RDS for SQL Server instance ID.

    • Database Account: Enter the database account. See Required permissions.

    • Database Password: Enter the account password.

    • Encryption: Select Non-encrypted if SSL is disabled on the source; select SSL-encrypted if SSL is enabled. DTS trusts the server certificate by default.
  3. Configure the destination database.

    • Select Existing Connection (Destination): To use a previously registered database instance, select it from the drop-down list. Otherwise, configure the fields below manually. In the DMS console, this parameter is named Select a DMS database instance.

    • Database Type: Select AnalyticDB MySQL 3.0.

    • Connection Type: Select Cloud Instance.

    • Instance Region: Select the region where your AnalyticDB for MySQL 3.0 cluster resides.

    • Instance ID: Select the ID of your AnalyticDB for MySQL 3.0 cluster.

    • Database Account: Enter the database account. See Required permissions.

    • Database Password: Enter the account password.
  4. Click Test Connectivity and Proceed.

    Note

    DTS automatically adds its IP address ranges to the security settings of the source and destination databases. If the source or destination is a self-managed database (where Access Method is not Alibaba Cloud Instance), also click Test Connectivity in the CIDR Blocks of DTS Servers dialog. For details, see Add DTS server IP addresses to a whitelist.

Step 3: Configure task objects

On the Configure Objects page, configure the following settings:

  • Migration Types: Select Schema Migration and Full Migration for a full migration. To migrate without service interruption, also select Incremental Migration. If Incremental Migration is not selected, do not write new data to the source during migration.

  • Processing Mode of Conflicting Tables:

    • Precheck and Report Errors: checks for tables with the same name in the destination. If duplicates exist, the precheck fails and the task does not start. To resolve naming conflicts without deleting destination tables, see Object name mapping.

    • Ignore Errors and Proceed: skips the duplicate name check. During full migration, conflicting records in the destination are kept. During incremental migration, source records overwrite destination records. Use with caution: schema inconsistencies may cause partial migration or failure.

  • Schema Mapping Mode of Source and Destination Databases: Select a schema mapping mode to map schemas between source and destination. Tables in different schemas cannot share the same name; otherwise, data inconsistency or task failure may occur.

  • SQL Server Incremental Synchronization Mode: Select a mode based on your source database. See Choose an incremental synchronization mode. This setting appears only when Incremental Migration is selected.

  • The maximum number of tables for which CDC is enabled that DTS supports: Default: 1,000. If the number of CDC-enabled tables in the task exceeds this value, the precheck fails. This option does not appear for Log-based (non-heap tables) mode.

  • Select DDL and DML to Sync at the Instance Level: Select the SQL operations for incremental migration. See SQL operations supported for incremental migration. To select operations at the database or table level, right-click a migration object in the Selected Objects box.

  • Source Objects: Click objects in Source Objects and move them to Selected Objects. For heterogeneous migrations, only tables can be selected; views, triggers, and stored procedures are not migrated.

  • Selected Objects: To rename a single object in the destination, right-click it. See Individual table column mapping. To rename multiple objects at once, click Batch Edit. See Map multiple object names at a time. Object name mapping may prevent dependent objects from being migrated. To filter rows with a WHERE clause, right-click the table and set conditions. See Set filter conditions.

Step 4: Configure advanced settings

Click Next: Advanced Settings and configure the following:

  • Dedicated Cluster for Task Scheduling: By default, DTS schedules tasks on a shared cluster. To improve stability, purchase a dedicated cluster.

  • Retry Time for Failed Connections: Default: 720 minutes. Range: 10–1440 minutes. Set this to more than 30 minutes. If DTS reconnects within the retry period, the task resumes automatically. After this period, the task fails. Charges apply during the retry period.

  • Retry Time for Other Issues: Default: 10 minutes. Range: 1–1440 minutes. Set this to more than 10 minutes. The value must be less than Retry Time for Failed Connections.

  • Enable Throttling for Full Data Migration: Limits QPS to the source, RPS, and migration speed (MB/s) to reduce load during full migration. Available only when Full Data Migration is selected. Can also be adjusted after the task starts. See Enable throttling.

  • Enable Throttling for Incremental Data Migration: Limits RPS and migration speed (MB/s) for incremental migration. Available only when Incremental Migration is selected. Can also be adjusted after the task starts.

  • Environment Tag: Optional. Select an environment label to identify the instance.

  • Configure ETL: Select Yes to enable the extract, transform, and load (ETL) feature and enter processing statements. See What is ETL? and Configure ETL.

  • Monitoring and Alerting: Select Yes to configure an alert threshold and notification. If migration fails or latency exceeds the threshold, the system sends an alert. See Configure monitoring and alerting.

Step 5: Configure data validation

Click Next: Data Validation to optionally set up a data validation task. For details, see Configure data validation.

Step 6: Configure destination table fields (optional)

Click Next: Configure Database and Table Fields to set the Type, Primary Key Column, Distribution Key, and partition key information (Partition Key, Partitioning Rules, Partition Lifecycle) for the destination tables.

Note

This step is available only when Schema Migration is selected. Set Definition Status to All to see all tables. For Primary Key Column, you can select multiple columns to form a composite primary key. At least one primary key column must also be set as the Distribution Key and Partition Key. See CREATE TABLE.

Step 7: Run the precheck

Click Next: Save Task Settings and Precheck.

To preview the OpenAPI parameters for this task, hover over the button and click Preview OpenAPI parameters before proceeding.

DTS runs a precheck before the task starts. If the precheck fails:

  • Click View Details next to the failed item, fix the issue, and run the precheck again.

  • For warnings that can be ignored: click Confirm Alert Details > Ignore > OK > Precheck Again. Ignoring warnings may cause data inconsistency.

Step 8: Purchase the instance and start migration

  1. When the Success Rate reaches 100%, click Next: Purchase Instance.

  2. On the Purchase page, select the resource group and instance class. For specification details, see Data migration link specifications.

  3. Read and accept the Data Transmission Service (Pay-as-you-go) Service Terms.

  4. Click Buy and Start, then click OK in the confirmation dialog.

View the migration task progress on the Data Migration Tasks list page.

Note

If the task does not include incremental migration, it stops automatically after full migration completes and its status changes to Completed. If the task includes incremental migration, it continues running and its status remains Running.

What's next

  • Monitor the task on the Data Migration Tasks page. Check latency and throughput to confirm the destination is keeping up.

  • Before switching production traffic to the destination: end or release the DTS task, or revoke write permissions from the DTS account using the revoke command. This prevents source data from overwriting destination data if the task resumes unexpectedly.

  • After switching traffic, verify that application behavior is correct on the AnalyticDB for MySQL cluster.