Data Transmission Service (DTS) migrates data between ApsaraDB RDS for SQL Server instances with minimal downtime, supporting schema migration, full data migration, and incremental data migration.
Prerequisites
Before you begin, make sure that:
Source and destination RDS for SQL Server instances are created. For supported versions, see Migration overview.
The destination instance has more storage than the source instance currently uses.
Incremental data migration is not supported if the source RDS for SQL Server version is 2008 or 2008 R2.
When SQL Server Incremental Synchronization Mode is set to Log-based Parsing for Non-heap Tables and CDC-based Incremental Synchronization for Heap Tables (Hybrid Log-based Parsing), the following source database versions are supported:
Enterprise or Enterprise Evaluation Edition: 2012, 2014, 2016, 2019, or 2022
Standard Edition: 2016, 2019, or 2022
Split the migration into multiple tasks if any of the following apply to the source instance:
More than 10 databases
Log backups run more than once per hour on a single database
DDL operations exceed 100 per hour on a single database
Log volume exceeds 20 MB/s for a single database
Change Data Capture (CDC) must be enabled for more than 1,000 tables
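To check whether a source database approaches these thresholds, you can count its CDC-tracked tables directly. This is a sketch; run it in each source database you plan to migrate:

```sql
-- Count tables with CDC enabled in the current database.
-- sys.tables.is_tracked_by_cdc = 1 marks CDC-tracked tables.
SELECT COUNT(*) AS cdc_table_count
FROM sys.tables
WHERE is_tracked_by_cdc = 1;
```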
Billing
| Migration type | Instance fee | Internet traffic fee |
|---|---|---|
| Schema migration and full data migration | Free | Charged when Access Method is set to Public IP Address. See Billing overview. |
| Incremental data migration | Charged. See Billing overview. | Charged when Access Method is set to Public IP Address. See Billing overview. |
Migration types
Schema migration copies schema definitions of selected objects from the source to the destination database.
Full data migration copies all existing data from selected objects to the destination database.
Incremental data migration continuously applies changes from the source to the destination after the full migration completes. This keeps both databases in sync, enabling cutover with minimal downtime.
DTS does not migrate foreign keys. Cascade and delete operations defined through foreign keys are not replicated to the destination database.
SQL operations supported for incremental migration
| Operation type | Supported statements |
|---|---|
| DML | INSERT, UPDATE, DELETE |
| DDL | ALTER TABLE (ADD COLUMN, DROP COLUMN, RENAME COLUMN only), CREATE TABLE, CREATE INDEX, DROP TABLE, RENAME TABLE, TRUNCATE TABLE |
DML limitations:
UPDATE statements that modify only large object columns are not supported.
DDL limitations:
Transactional DDL operations are not supported (for example, adding multiple columns in one statement or mixing DDL and DML in one statement).
DDL operations involving custom data types are not supported.
Online DDL operations are not supported.
DDL operations that use reserved keywords as property names are not supported.
DDL operations executed by system stored procedures are not supported.
TRUNCATE TABLE operations are not supported.
DDL operations involving partitioning, or table definitions that contain functions, are not supported.
Database account permissions
| Database | Schema migration | Full data migration | Incremental data migration |
|---|---|---|---|
| Source instance | Read | Read | Owner permission on the objects to migrate |
| Destination instance | Read and write | Read and write | Read and write |
For instructions on creating and authorizing a database account, see Create a database and account.
Limitations
Source database requirements
The source server must have sufficient outbound bandwidth. Low bandwidth reduces migration speed.
Tables to be migrated must have primary keys or UNIQUE constraints with unique field values. Without these, duplicate data may appear in the destination database.
For table-level migration with object mapping (such as renaming tables or columns), a single task supports a maximum of 1,000 tables. Exceeding this limit causes an error. Split the tables into multiple tasks, or configure the task to migrate entire databases instead.
A single task supports a maximum of 10 databases. Exceeding this limit may cause stability and performance issues. Split the databases into multiple tasks.
If you select specific objects rather than an entire database, you cannot migrate tables with the same name but different schema names to the same destination database.
Do not use the `sp_rename` command to rename objects such as stored procedures before the initial schema synchronization runs. Use the `ALTER` command instead.
If the source database is a read-only instance, DDL operations cannot be migrated.
If the source is a Web Edition RDS for SQL Server instance, set SQL Server Incremental Synchronization Mode to Incremental Synchronization Based on Logs of Source Database (Heap tables are not supported).
If the source is an Azure SQL Database, a single DTS task can migrate only one database.
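As a quick check for the primary key and UNIQUE constraint requirement above, a query along these lines lists user tables that have neither. This is a sketch; table and schema names in your environment will differ:

```sql
-- List user tables with no primary key and no unique index/constraint.
-- Such tables risk duplicate rows in the destination after migration.
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id
WHERE t.is_ms_shipped = 0
  AND NOT EXISTS (
      SELECT 1
      FROM sys.indexes i
      WHERE i.object_id = t.object_id
        AND (i.is_primary_key = 1 OR i.is_unique = 1)
  );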
Log requirements for incremental migration:
Logs must be enabled with the backup mode set to Full, and at least one full physical backup must have completed successfully.
For an incremental-only task, the source database logs must be retained for more than 24 hours. For a task combining full and incremental migration, retain logs for at least 7 days. Log retention periods shorter than the minimum are not covered by the DTS Service-Level Agreement (SLA).
DTS uses the `fn_log` function to read source database logs. Do not purge source logs prematurely, as this may cause the task to fail.
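The recovery model and backup history can be verified before starting the task. A sketch, assuming a source database named `your_db`:

```sql
-- Confirm the database uses the FULL recovery model.
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'your_db';

-- Find the most recent completed full backup (type 'D' = full database backup).
SELECT TOP 1 backup_finish_date
FROM msdb.dbo.backupset
WHERE database_name = 'your_db' AND type = 'D'
ORDER BY backup_finish_date DESC;
```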
CDC prerequisites for incremental migration:
To enable change data capture (CDC) for the tables to be migrated from the source database, the following conditions must be met. Otherwise, the precheck fails:
The value of the `srvname` field in the `sys.sysservers` view must be the same as the return value of the `SERVERPROPERTY` function.
If the source database is a self-managed SQL Server instance, the database owner must be `sa`. If the source database is an RDS for SQL Server instance, the database owner must be `sqlsa`.
If the source database is Enterprise Edition, it must be SQL Server 2008 or later.
If the source database is Standard Edition, it must be SQL Server 2016 SP1 or later.
If the source database is SQL Server 2017 (Standard or Enterprise Edition), upgrade the database version.
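The server-name and owner conditions above can be checked with standard catalog queries before the precheck runs. A sketch, with `your_db` as a placeholder database name:

```sql
-- These two values must match for CDC to work.
SELECT srvname FROM sys.sysservers WHERE srvid = 0;
SELECT SERVERPROPERTY('ServerName') AS server_name;

-- Check the database owner (must be sa on self-managed, sqlsa on RDS).
SELECT name, SUSER_SNAME(owner_sid) AS owner_name
FROM sys.databases
WHERE name = 'your_db';
```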
During migration:
Do not perform DDL operations to change the schemas of databases or tables during initial schema synchronization and full data migration. This causes the task to fail.
If you run only full data migration, do not write new data to the source instance. To maintain real-time data consistency, enable schema migration, full data migration, and incremental data migration together.
Disable Transparent Data Encryption (TDE) on the source RDS for SQL Server instance when the task includes incremental migration. For more information, see Disable TDE.
During full data migration, enable the
READ_COMMITTED_SNAPSHOTtransaction isolation parameter on the source database. Shared locks may otherwise interfere with data writes, causing data inconsistency or instance failures. Such exceptions are not covered by the DTS SLA.To migrate triggers from the source database, the database account must have Owner permissions on the destination database.
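Enabling `READ_COMMITTED_SNAPSHOT` requires a brief moment with no other active connections to the database. A sketch, assuming a database named `your_db`:

```sql
-- ROLLBACK IMMEDIATE terminates open transactions so the option can be set;
-- schedule this during a low-traffic window.
ALTER DATABASE your_db SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

-- Verify the setting took effect.
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'your_db';
```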
For incremental migration, disable triggers and foreign keys on the destination database. Leaving them enabled causes the migration task to fail.
Unsupported data types
The following data types cannot be migrated: CURSOR, ROWVERSION, SQL_VARIANT, HIERARCHYID, POLYGON, GEOMETRY, GEOGRAPHY.
If a field of the TIMESTAMP data type in the destination database cannot be written to, DTS does not support full or incremental migration for that table. This may cause data inconsistency or task failures.
Float and double precision
DTS reads FLOAT and DOUBLE columns using ROUND(COLUMN, PRECISION). Without explicit precision configuration, FLOAT values are migrated with a precision of 38 and DOUBLE values with a precision of 308. Confirm that this precision meets your requirements before starting the migration.
Cross-version migration
Check version compatibility before migrating data across different SQL Server versions.
Incremental synchronization mode limitations
Log-based parsing (heap tables are not supported):
Tables to be migrated must have a clustered index that contains primary key columns. Tables without clustered indexes, heap tables, tables without primary keys, compressed tables, tables with computed columns, and tables with sparse columns are not supported.
For information on how to identify these table types, see How to view information about heap tables, tables without primary keys, compressed tables, tables with computed columns, and tables with sparse columns in SQL Server.
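Heap tables, for example, can be identified directly from the catalog views: a heap is any table whose `sys.indexes` row has `index_id = 0`. A sketch:

```sql
-- List heap tables (no clustered index) in the current database.
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id
JOIN sys.indexes i ON i.object_id = t.object_id AND i.index_id = 0
WHERE t.is_ms_shipped = 0;
```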
Hybrid log-based parsing:
DTS creates the `dts_cdc_sync_ddl` trigger, the `dts_sync_progress` heartbeat table, and the `dts_cdc_ddl_history` DDL storage table in the source database. It also enables database-level CDC and CDC for some tables.
Do not run `SELECT INTO`, `TRUNCATE`, or `RENAME COLUMN` on tables with CDC enabled. Do not manually delete triggers that DTS creates in the source database.
Do not perform consecutive add-column or drop-column operations within an interval of less than 10 minutes. For example, the following consecutive statements cause a task error: `ALTER TABLE test_table DROP COLUMN Flag; ALTER TABLE test_table ADD Remark nvarchar(50) not null default('');`
The CDC job in the source database must be running. If it stops, the DTS task fails.
The recommended maximum number of CDC-enabled tables per task is 1,000. Exceeding this may cause task latency or instability.
By default, incremental data in the CDC component is retained for 3 days. To adjust the retention period, run `exec console.sys.sp_cdc_change_job @job_type = 'cleanup', @retention = <time>;`, where `<time>` is in minutes. If a single table has more than 10 million incremental change SQL statements per day, set `<time>` to 1440.
The prerequisite module of an incremental migration task enables CDC on the source database. The source database may be briefly locked during this process.
If a table has a CDC-enabled field with data exceeding 64 KB, run the following command on the source database before starting the task:
exec sp_configure 'max text repl size', -1;
Polling CDC instances:
The source database account must have permission to enable CDC. To enable database-level CDC, the account requires the sysadmin role. To enable table-level CDC, a privileged account is sufficient.
The privileged account from the Azure SQL Database console meets this requirement. For vCore-based databases, all instance types support CDC. For DTU-based databases, only S3 and later instance types support CDC. The privileged account for Amazon RDS for SQL Server can enable database-level CDC through stored procedures.
Clustered columnstore index tables do not support CDC.
Incremental data migration has a latency of approximately 10 seconds.
The recommended maximum number of tables per task is 1,000. Migrating more may cause task latency or instability.
Do not perform consecutive add-column or drop-column operations (more than two DDL operations within one minute). This may cause the task to fail.
Do not change the CDC instance of the source database while the task is running. This may cause task failure or data loss.
By default, incremental data is retained for 3 days. To adjust the retention period, run `exec console.sys.sp_cdc_change_job @job_type = 'cleanup', @retention = <time>;`.
The prerequisite module of an incremental migration task enables CDC on the source database. The source database may be briefly locked during this process.
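On a self-managed or standard SQL Server instance, enabling CDC at the database and table level uses the native stored procedures shown below (a sketch; `dbo.orders` is a placeholder table, and managed cloud services such as Amazon RDS may require provider-specific wrapper procedures instead):

```sql
-- Enable database-level CDC (requires the sysadmin role).
EXEC sys.sp_cdc_enable_db;

-- Enable CDC for one table; @role_name = NULL skips gating access by role.
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'orders',
    @role_name     = NULL;
```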
General limits
We recommend that the data change rate of tables with CDC enabled in the source database does not exceed 1,000 records per second (RPS).
Full data migration uses concurrent INSERT operations, which causes table fragmentation in the destination database. After full migration completes, the destination table storage will be larger than the source.
DTS attempts to resume a failed task for up to seven days. Before switching your business to the destination instance, end or release the task, or revoke the write permissions of the DTS database account using the `revoke` command. Otherwise, auto-resume may overwrite data in the destination instance.
If the task includes incremental migration, do not reindex tables with CDC enabled. Reindexing may cause task failure or data loss. Do not change the primary keys of these tables.
If the number of CDC-enabled tables exceeds the value set for The maximum number of tables for which CDC is enabled that DTS supports, the precheck fails.
If multiple DTS tasks use the same SQL Server database as the source, their incremental data ingestion modules operate independently.
SQL Server is a commercial closed-source database. Due to known or unknown format-specific limitations, issues may occur when DTS performs CDC and log parsing. Before enabling incremental synchronization or migration for a SQL Server source in a production environment, run a thorough proof of concept (POC) test covering all business change types, table schema changes, and peak-hour stress tests. The business logic in production must be consistent with the POC test environment.
If a task fails, DTS support staff will attempt to restore it within eight hours. They may restart the task or adjust its parameters. Only DTS task parameters are modified—database parameters are not changed. For a list of adjustable parameters, see Modify instance parameters.
Special cases
Source is an RDS for SQL Server instance: DTS creates an `rdsdt_dtsacct` account in the source instance. Do not delete this account or change its password while the task is running. For more information, see System accounts.
Destination is an RDS for SQL Server instance: DTS automatically creates the database in the destination instance. If the database name does not comply with RDS for SQL Server naming conventions, create the database manually before configuring the migration task. For more information, see Create a database.
Choose an incremental synchronization mode
This mode is available only when Incremental Data Migration is selected. Choose based on your source database and table types:
| Mode | Advantages | Limitations | Use when |
|---|---|---|---|
| Log-based parsing for non-heap tables and CDC-based incremental synchronization for heap tables (hybrid log-based parsing) | Supports heap tables, tables without primary keys, compressed tables, and tables with computed columns. Provides high link stability with complete DDL support. | Intrusive to the source database: creates triggers, a heartbeat table, and a DDL storage table; enables CDC. Cannot run SELECT INTO, TRUNCATE, or RENAME COLUMN on CDC-enabled tables. | Source database has heap tables, tables without primary keys, compressed tables, or computed columns. |
| Incremental synchronization based on logs of source database (heap tables are not supported) | Non-intrusive to the source database. | Does not support heap tables, tables without primary keys, compressed tables, tables with computed columns, or tables with sparse columns. | Source tables all have clustered indexes with primary key columns. |
| Polling and querying CDC instances for incremental synchronization | Supports Amazon RDS for SQL Server, Azure SQL Database, Azure SQL Managed Instance, Azure SQL Server on Virtual Machine, and Google Cloud SQL for SQL Server. Uses the native CDC component for improved stability and lower network bandwidth usage. | Requires CDC permissions on the source database account. Incremental latency is approximately 10 seconds. May encounter stability issues with many tables across multiple databases. | Source database is hosted on a third-party cloud (AWS, Azure, Google Cloud). |
Migrate data between RDS for SQL Server instances
Step 1: Open the data migration task list
From the DTS console
Log on to the Data Transmission Service (DTS) console.
In the left navigation pane, click Data Migration.
In the upper-left corner, select the region where the migration instance is located.
From the DMS console
The actual steps vary based on the DMS console mode and layout. See Simple mode console and Customize the layout and style of the DMS console.
Log on to the Data Management (DMS) console.
In the top menu bar, choose Data + AI > Data Transmission (DTS) > Data Migration.
To the right of Data Migration Tasks, select the region where the migration instance is located.
Step 2: Create a task
Click Create Task to open the task configuration page.
(Optional) Click New Configuration Page in the upper-right corner. If the button already reads Back to Previous Version, you are already on the new configuration page. The new and old pages have different parameters; the new page is recommended.
Step 3: Configure source and destination databases
| Category | Parameter | Description |
|---|---|---|
| — | Task Name | DTS generates a name automatically. Specify a descriptive name for easy identification. Names do not need to be unique. |
| Source Database | Select Existing Connection | Select a registered database instance from the drop-down list to auto-populate the configuration, or configure the database manually. In the DMS console, this parameter is named Select a DMS database instance. |
| Database Type | Select SQL Server. | |
| Connection Type | Select Cloud instance. | |
| Instance Region | Select the region of the source RDS for SQL Server instance. | |
| Cross-account | Select Not cross-account for same-account migration. | |
| RDS instance ID | Select the source RDS for SQL Server instance ID. | |
| Database Account | Enter the database account. For required permissions, see Database account permissions. | |
| Database Password | Enter the password for the database account. | |
| Encryption | Select Non-encrypted if SSL encryption is disabled on the source database, or SSL-encrypted if SSL is enabled. DTS trusts the server certificate by default. | |
| Destination Database | Select Existing Connection | Select a registered database instance from the drop-down list to auto-populate the configuration, or configure the database manually. In the DMS console, this parameter is named Select a DMS database instance. |
| Database Type | Select SQL Server. | |
| Connection Type | Select Cloud instance. | |
| Instance Region | Select the region of the destination RDS for SQL Server instance. | |
| Instance ID | Select the destination RDS for SQL Server instance ID. | |
| Database Account | Enter the database account. For required permissions, see Database account permissions. | |
| Database Password | Enter the password for the database account. | |
| Encryption | Select Non-encrypted or SSL-encrypted based on your environment. |
After completing the configuration, click Test Connectivity and Proceed.
DTS service IP address ranges must be added to the security settings (whitelist) of both source and destination databases. For more information, see Add DTS server IP addresses to a whitelist. If the source or destination database uses a non-Alibaba Cloud connection method, click Test Connectivity in the CIDR Blocks of DTS Servers dialog box.
Step 4: Configure migration objects
On the Configure Objects page, set the following parameters:
| Parameter | Description |
|---|---|
| Migration Types | Select Schema Migration and Full Data Migration for a full migration. Add Incremental Data Migration for a near-zero-downtime migration. If you skip Schema Migration, create the destination tables manually or use object name mapping. If you skip Incremental Data Migration, do not write to the source instance during migration. |
| Method to Migrate Triggers in Source Database | Select a trigger migration method if your migration objects include triggers. See Configure trigger migration methods. Available only when both Schema Migration and Incremental Data Migration are selected. |
| SQL Server Incremental Synchronization Mode | Select the mode that matches your source database and table types. See Choose an incremental synchronization mode. Available only when Incremental Data Migration is selected. |
| The maximum number of tables for which CDC is enabled that DTS supports. | Set the maximum number of CDC-enabled tables for this task. Default: 1,000. Not available for log-based parsing mode. |
| Processing Mode of Conflicting Tables | Precheck and Report Errors: Checks for tables with the same names in the destination. If duplicates exist, the precheck fails and the task does not start. To resolve naming conflicts, see Object name mapping. Ignore Errors and Proceed: Skips the check. During full migration, existing records in the destination are kept and conflicting source records are skipped. During incremental migration, source records overwrite destination records. Use with caution—this may cause data inconsistency. |
| Source Objects | Click objects in the Source Objects box, then click the arrow to move them to the Selected Objects box. Migration granularity is schema, table, or column. Selecting only tables or columns excludes views, triggers, and stored procedures. |
| Selected Objects | Right-click an object to rename it. See Individual table column mapping. To rename multiple objects at once, click Batch Edit. See Map multiple object names at a time. To filter rows with a WHERE condition, right-click a table and set the filter condition. See Set filter conditions. |
Click Next: Advanced Settings.
| Parameter | Description |
|---|---|
| Dedicated Cluster for Task Scheduling | DTS uses a shared cluster by default. For more stable tasks, purchase a dedicated cluster. |
| Retry Time for Failed Connections | Time DTS retries after a connection failure. Default: 720 minutes. Range: 10–1,440 minutes. Set to at least 30 minutes. If reconnection succeeds within this period, the task resumes automatically. |
| Retry Time for Other Issues | Time DTS retries after non-connectivity errors (such as DDL or DML exceptions). Default: 10 minutes. Range: 1–1,440 minutes. Set to at least 10 minutes. This value must be less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Migration | Limits QPS to the source database, RPS, and migration speed (MB/s) to reduce database load. Available only when Full Data Migration is selected. You can also adjust the speed after the task starts. |
| Enable Throttling for Incremental Data Migration | Limits RPS and migration speed (MB/s) for incremental migration. Available only when Incremental Data Migration is selected. You can also adjust the speed after the task starts. |
| Environment Tag | Optional label to identify the instance's environment. |
| Configure ETL | Select Yes to configure ETL (extract, transform, and load) and enter data processing statements. Select No to skip. |
| Monitoring and Alerting | Select Yes to set an alert threshold and alert notifications. DTS sends an alert if the migration fails or latency exceeds the threshold. |
Click Next: Data Validation to configure data validation. For more information, see Configure data validation.
Step 5: Save the task and run a precheck
Click Next: Save Task Settings and Precheck.
To preview the API parameters for this configuration, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.
DTS runs a precheck before starting the task. If the precheck fails:
Click View Details next to the failed item, fix the issue, and run the precheck again.
For warnings on non-ignorable items, click View Details and fix the issue.
For ignorable warnings, click Confirm Alert Details > Ignore > OK > Precheck Again to skip the item. Ignoring a warning may cause data inconsistency.
Step 6: Purchase and start the migration instance
When the precheck Success Rate reaches 100%, click Next: Purchase Instance.
On the Purchase page, configure the instance:
| Category | Parameter | Description |
|---|---|---|
| New Instance Class | Resource Group Settings | Select the resource group for the instance. Default: default resource group. See What is Resource Management? |
| New Instance Class | Instance Class | Select the link specification based on your migration speed requirements. See Data migration link specifications. |

Read and select Data Transmission Service (Pay-as-you-go) Service Terms.
Click Buy and Start, then click OK in the confirmation dialog box.
The migration task appears on the Data Migration Tasks list page. Use the following table to interpret the task status:
| Status | Meaning |
|---|---|
| Running | The task is in progress. For tasks with incremental migration, this is the normal state—the task runs continuously until you stop it manually. |
| Completed | The task finished successfully. Tasks without incremental migration stop automatically after the full migration completes. |
| Failed | The task encountered an error. DTS attempts to resume failed tasks automatically for up to seven days. |