Data Transmission Service (DTS) migrates SQL Server databases from Microsoft Azure — including Azure SQL Database, Azure SQL Managed Instance, and SQL Server on Azure Virtual Machines — to ApsaraDB RDS for SQL Server.
Choose a migration method
Two entry points are available. Choose based on how much control you need over migration configuration.
| | ApsaraDB RDS console | DTS console |
|---|---|---|
| Migration types | Schema migration + full data migration + incremental data migration (all three, by default) | Select individually: schema migration, full data migration, or incremental data migration |
| Advanced settings | Throttling only | Throttling, retry policies, ETL, monitoring, dedicated cluster, and more |
| Best for | Standard migrations that need all three migration types | Migrations requiring custom type selection or fine-grained control |
Prerequisites
Before you begin, make sure you have:
- A source SQL Server database on Microsoft Azure connected to Alibaba Cloud and accessible by DTS. See Connect a Microsoft Azure database to Alibaba Cloud.
- A destination ApsaraDB RDS for SQL Server instance with available storage larger than the total size of the source data. See Create an ApsaraDB RDS for SQL Server instance.
- Database accounts with the permissions listed in Permissions required.
Billing
| Migration type | Instance configuration fee | Internet traffic fee |
|---|---|---|
| Schema migration and full data migration | Free | Charged when Access Method is set to Public IP Address. See Billing overview. |
| Incremental data migration | Charged. See Billing overview. | Charged when Access Method is set to Public IP Address. See Billing overview. |
Migration types
Schema migration — DTS migrates the schemas of selected objects from the source database to the destination.
Supported objects: table, view, trigger, synonym, SQL stored procedure, SQL function, plan guide, user-defined type, rule, default, and sequence.
Not migrated: assemblies, service brokers, full-text indexes, full-text catalogs, distributed schemas, distributed functions, Common Language Runtime (CLR) stored procedures, CLR scalar-valued functions, CLR table-valued functions, internal tables, systems, and aggregate functions.
Full data migration — DTS migrates all existing data from the selected objects in the source database.
Incremental data migration — After full data migration completes, DTS continuously replicates new changes from the source. This keeps services running during migration without interruption.
SQL operations supported for incremental data migration
| Type | Operations |
|---|---|
| DML | INSERT, UPDATE, DELETE. Note: UPDATE operations that modify only large fields are not migrated. |
| DDL | CREATE TABLE (partitioned tables and tables containing functions are not migrated); ALTER TABLE (ADD COLUMN and DROP COLUMN only); DROP TABLE; CREATE INDEX and DROP INDEX |
DDL operations that are not migrated:
- Transactional DDL (for example, DDL on multiple columns, or SQL containing both DDL and DML)
- DDL with user-defined types
- Online DDL
- DDL on objects with reserved keyword names
- DDL in system stored procedures
- TRUNCATE TABLE
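To make the ALTER TABLE restriction concrete, here is a hedged sketch; `dbo.Orders` and the column names are placeholders:

```sql
-- Replicated: a single-column ADD matches the supported ALTER TABLE forms.
ALTER TABLE dbo.Orders ADD note NVARCHAR(200) NULL;

-- Not replicated: changing multiple columns in one statement counts as
-- transactional DDL (see the list above), which DTS does not migrate.
ALTER TABLE dbo.Orders ADD col_a INT NULL, col_b INT NULL;
```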
Permissions required
| Database | Schema migration | Full data migration | Incremental data migration |
|---|---|---|---|
| SQL Server database on Microsoft Azure | SELECT | SELECT | Permissions of the sysadmin role. The server admin account in Azure SQL Database already has the required permissions. Change Data Capture (CDC) can be enabled for all databases purchased based on the vCore model. For databases purchased based on the database transaction unit (DTU) model, CDC can be enabled only if the service objective is S3 or greater. |
| ApsaraDB RDS for SQL Server instance | Read and write | Read and write | Read and write. See Create a privileged account or a standard account and Modify the permissions of an account. |
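Before configuring the task, you can verify the source account and, on Azure SQL Database, the service objective. This is a hedged pre-flight sketch run on the source, not part of the DTS configuration itself:

```sql
-- Returns 1 if the current login has the sysadmin role (SQL Server on an
-- Azure VM); the Azure SQL Database server admin account already qualifies.
SELECT IS_SRVROLEMEMBER('sysadmin') AS is_sysadmin;

-- On Azure SQL Database, check the service objective: CDC on the DTU model
-- requires S3 or greater.
SELECT DATABASEPROPERTYEX(DB_NAME(), 'ServiceObjective') AS service_objective;
```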
Usage notes
DTS does not migrate foreign keys. As a result, cascade and delete operations in the source database are not replicated to the destination.
Source database limits
- If the source is an Azure virtual machine (VM) running SQL Server Enterprise Edition, the version must be SQL Server 2008 or later. For Standard Edition, the version must be SQL Server 2016 SP1 or later. SQL Server 2017 is not supported.
- The source server must have sufficient outbound bandwidth. Insufficient bandwidth reduces migration speed.
- Tables must have PRIMARY KEY or UNIQUE constraints, with all fields unique. Without these constraints, the destination database may contain duplicate records.
- A single migration task can migrate up to 10 databases. For more than 10 databases, configure multiple tasks to maintain performance and stability.
- If the source is an Azure SQL Database, only one database can be migrated per task.
- Enable the `READ_COMMITTED_SNAPSHOT` transaction processing mode on the source database during full data migration. Without it, shared locks may affect write operations, cause data inconsistency, or cause instance failures. Issues that occur in such circumstances are not covered by the DTS service level agreement (SLA).
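The last two requirements above can be checked and applied with T-SQL before you start. A sketch; `MySourceDb` is a placeholder database name:

```sql
-- Find user tables with no primary key or unique index; these risk
-- producing duplicate records in the destination.
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE NOT EXISTS (
    SELECT 1
    FROM sys.indexes AS i
    WHERE i.object_id = t.object_id
      AND (i.is_primary_key = 1 OR i.is_unique = 1)
);

-- Enable READ_COMMITTED_SNAPSHOT before full data migration (on Azure SQL
-- Database it is already on by default).
ALTER DATABASE MySourceDb SET READ_COMMITTED_SNAPSHOT ON;

-- Verify the setting.
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'MySourceDb';
```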
Log retention requirements:
| Migration type | Log retention period |
|---|---|
| Incremental data migration only | More than 24 hours |
| Full data migration + incremental data migration | At least 7 days |
After full data migration completes, you can set the retention period to more than 24 hours. If you do not meet these requirements, DTS may fail to obtain the required logs, and the DTS SLA does not guarantee service reliability or performance.
Operations restricted during migration:
- During schema migration and full data migration, do not execute DDL statements that change database or table schemas. Doing so causes the migration task to fail.
- If you run full data migration only, do not write data to the source database during migration. To maintain data consistency, select all three migration types: schema migration, full data migration, and incremental data migration.
- Read-only instances cannot be used as the source database.
Other limits
- DTS does not migrate data of these types: CURSOR, ROWVERSION, SQL_VARIANT, HIERARCHYID, POLYGON, GEOMETRY, and GEOGRAPHY.
- Before incremental data migration begins, DTS enables CDC on the source database. This causes tables to be locked for a few seconds due to SQL Server behavior.
- Incremental data migration uses CDC. The following limits apply:
  - The source database can have at most 1,000 tables to be migrated. More than 1,000 tables may cause migration delays or instability.
  - The CDC component stores incremental data for 3 days by default. To change the retention period, run `exec sys.sp_cdc_change_job @job_type = 'cleanup', @retention = <time>;`, where `<time>` is the retention time in minutes. If the average daily volume of incremental data exceeds 10 million records, set `<time>` to 1,440.
  - Do not execute DDL statements that add or remove columns more than twice per minute. Exceeding this limit may cause the migration task to fail.
  - Do not modify CDC instances in the source database during migration. Doing so may cause the task to fail or data loss.
  - Migrating multiple tables across multiple databases in a single task may cause stability and performance issues.
  - Incremental data migration has a latency of approximately 10 seconds.
- When migrating between different database versions, make sure the versions are compatible.
- For migration tasks that include incremental data migration, do not run reindex operations in the destination database, and disable triggers and foreign keys there. Otherwise, the task may fail or data may be lost.
- DTS cannot migrate DDL operations related to the primary key of a table for which CDC is enabled.
- If the number of CDC-enabled tables in a single migration task exceeds 1,000, or exceeds the limit set by the parameter The maximum number of tables for which CDC is enabled that DTS supports, the precheck fails.
- If a table has CDC enabled and a single field holds more than 64 KB of data, run the following command on the source database before starting the task: `exec sp_configure 'max text repl size', -1;`. By default, a CDC job processes up to 64 KB of data per field.
- DTS automatically creates the destination database in the ApsaraDB RDS for SQL Server instance. If the source database name does not comply with ApsaraDB RDS for SQL Server naming conventions, manually create the database in the destination instance before configuring the migration task. See Create a database.
- In Incremental Synchronization Based on Logs of Source Database mode, DTS creates the following objects in the source database: a trigger named `dts_cdc_sync_ddl`, a heartbeat table named `dts_sync_progress`, and a DDL history table named `dts_cdc_ddl_history`. In hybrid log-based parsing mode, DTS creates the same objects and also enables CDC for the source database and specific tables. Set the maximum number of records per second to 1,000 for CDC-enabled tables.
- Before migration, evaluate the impact on source database performance. Schedule migration during off-peak hours. Full data migration uses read and write resources on both the source and destination databases, which increases server load.
- After full data migration, concurrent INSERT operations cause table fragmentation in the destination database. The destination tablespace will be larger than the source.
- Make sure precision settings for FLOAT and DOUBLE columns meet your requirements. DTS uses the `ROUND(COLUMN, PRECISION)` function to read these values. If no precision is specified, DTS defaults to 38 digits for FLOAT and 308 digits for DOUBLE.
- DTS retries failed migration tasks for up to 7 days. Before switching workloads to the destination, stop or release any failed tasks, or run the `REVOKE` statement to revoke write permissions from the DTS accounts on the destination database. Otherwise, the source data may overwrite the destination data when the task resumes.
- CDC collection modules for multiple migration instances that share a source SQL Server database operate independently.
- If a DTS task fails, DTS technical support will try to restore it within 8 hours. During restoration, the task may be restarted and task parameters may be modified. Database parameters are not modified. Modified parameters may include those in the Modify instance parameters section.
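Several of the commands referenced in the limits above can be grouped into one pre-migration script. This is a hedged sketch, not a DTS-managed procedure: object and account names are placeholders, and `sp_configure` is unavailable on Azure SQL Database (use it on SQL Server running on an Azure VM):

```sql
-- Inspect CDC jobs; the cleanup job's retention is reported in minutes
-- (default 4320 minutes = 3 days).
EXEC sys.sp_cdc_help_jobs;

-- Raise CDC retention to 1,440 minutes if the average daily incremental
-- volume exceeds 10 million records.
EXEC sys.sp_cdc_change_job @job_type = 'cleanup', @retention = 1440;

-- Allow CDC to capture fields larger than 64 KB.
EXEC sp_configure 'max text repl size', -1;
RECONFIGURE;

-- After cutover, revoke the DTS account's write permissions on the
-- destination database so a resumed task cannot overwrite data.
REVOKE INSERT, UPDATE, DELETE ON DATABASE::MyDestDb FROM dts_account;
```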
Migrate data using the ApsaraDB RDS console
- Go to the Data Migration tab of the destination ApsaraDB RDS for SQL Server instance.
  - Go to the Instances page in the ApsaraDB RDS console.
  - In the upper part of the page, select the region where the destination instance resides.
  - Click the ID of the destination instance.
  - In the left-side navigation pane, click Data Migration and Synchronization.
  - Click the Data Migration tab.
- Click One-stop Cloud.
- Configure the source and destination databases.
Source database

| Parameter | Description |
|---|---|
| Task Name | The name of the DTS task. DTS generates a name automatically. Specify a descriptive name to identify the task easily. The name does not need to be unique. |
| Database Type | Default value: SQL Server. No configuration needed. |
| Type | Select Azure. |
| Access Method | Select Public IP Address or Express Connect, VPN Gateway, or Smart Access Gateway based on how the source database connects to Alibaba Cloud. In this example, Public IP Address is selected. |
| Instance Region | If Access Method is Public IP Address, select the region where the Azure SQL Server database resides. If the region is not available, select the nearest region. If Access Method is Express Connect, VPN Gateway, or Smart Access Gateway, select the region of the Alibaba Cloud virtual private cloud (VPC) to which the database is connected. |
| Connected VPC | The ID of the Alibaba Cloud VPC to which the source database is connected. This parameter is available only if Access Method is Express Connect, VPN Gateway, or Smart Access Gateway. |
| Domain Name or IP | The domain name or IP address of the source database. Using the domain name is recommended. |
| Port Number | The service port of the source database. |
| Database Account | The account for the source database. For Azure SQL Database accounts, use the format `<Administrator username>@<Server name>`. For example, if the admin account is `testuser` and the server name is `dtstest`, enter `testuser@dtstest`. Run `SELECT @@SERVERNAME AS ServerName` to query the server name. See Permissions required for required permissions. |
| Database Password | The password for the database account. |
| Encryption | Select Non-encrypted if SSL encryption is disabled on the source. Select SSL-encrypted if SSL encryption is enabled. DTS trusts the server certificate by default. |
Destination database

| Parameter | Description |
|---|---|
| Database Type | Default value: SQL Server. No configuration needed. |
| Access Method | Default value: Alibaba Cloud Instance. No configuration needed. |
| Instance Region | The region of the current ApsaraDB RDS for SQL Server instance. This value is fixed and cannot be changed. |
| Instance ID | The ID of the current ApsaraDB RDS for SQL Server instance. This value is fixed and cannot be changed. |
| Database Account | The database account for the destination instance. See Permissions required for required permissions. |
| Database Password | The password for the database account. |
| Encryption | Select Non-encrypted if SSL encryption is disabled on the destination. Select SSL-encrypted if SSL encryption is enabled. DTS trusts the server certificate by default. |
- Click Test Connectivity and Proceed. Make sure the CIDR blocks of DTS servers are added to the whitelist of the source database. Then click Test Connectivity.
  Important: Adding public CIDR blocks of DTS servers to your database whitelist creates security risks. Before using DTS, understand and acknowledge these risks and take preventive measures, including strengthening username and password security, limiting exposed ports, authenticating API calls, and regularly auditing and removing unauthorized CIDR blocks. See Add the CIDR blocks of DTS servers.
- Select a data migration plan. The system shows available plans based on the source database type:
  - Full and incremental data migration: Click Configure Objects and proceed to the next step.
  - Any other plan: Skip the next step.
  Click View Migration Documentation to see details about other migration plans.
- On the Configure Objects page, configure the objects to migrate.
| Parameter | Description |
|---|---|
| Method to Migrate Triggers in Source Database | Triggers migrated before incremental migration completes can cause data inconsistency. Select Manual Migration to control when triggers are applied. See Synchronize or migrate triggers from the source database. If you do not want to migrate triggers, skip this parameter. |
| Source Objects | Select objects and click the right-arrow icon to add them to Selected Objects. You can select columns, tables, or databases. If you select tables or columns, DTS does not migrate other objects such as views, triggers, and stored procedures. |
| Selected Objects | Right-click an object to rename it in the destination instance, remove it from the selection, specify WHERE filter conditions, or select specific SQL operations to migrate. See Database, table, and column name mapping and Specify filter conditions. Renaming an object may cause dependent objects to fail migration. Click the left-arrow icon to move objects back to Source Objects. |
- (Optional) Configure advanced settings by clicking Advanced Settings.

  | Parameter | Description |
  |---|---|
  | Enable Throttling for Full Data Migration | Limits the read and write load during full data migration. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s) to reduce the load on the destination server. |
  | Enable Throttling for Incremental Data Migration | Limits the load during incremental migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s) to reduce the load on the destination server. |
  | Data Verification Mode | Select Full Data Verification to check data integrity. Specify Maximum Number of Rows Can be Read per Second in the Full Verification Mode and Maximum Data Volume Can be Read per Second (Byte/s) in the Full Verification Mode to limit verification speed. A value of 0 for both fields disables throttling for verification. |
- Save the task settings and run a precheck.
  - To preview the API parameters for this task, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.
  - Otherwise, click Next: Save Task Settings and Precheck.
  - The migration task starts only after passing the precheck.
  - If the precheck fails, click View Details next to each failed item, fix the issues, and run the precheck again.
  - For alert items: if the alert cannot be ignored, fix the issue and rerun the precheck. If it can be ignored, click Confirm Alert Details, then Ignore, confirm with OK, and click Precheck Again. Ignoring alerts may result in data inconsistency or business risk.
- Wait until Success Rate reaches 100%, then click Next: Purchase Instance.
- Purchase a data migration instance. Monitor progress on the Data Migration page.
  - On the Purchase Instance page, configure the instance class.

    | Parameter | Description |
    |---|---|
    | Resource Group | The resource group for the migration instance. Default: default resource group. See What is Resource Management? |
    | Instance Class | Select based on the required migration speed. See Instance classes of data migration instances. |

  - Select the Data Transmission Service (Pay-as-you-go) Service Terms check box.
  - Click Buy and Start. In the confirmation dialog, click OK.
    - If the task does not include incremental data migration, it stops automatically. The Status shows Completed.
    - If the task includes incremental data migration, it runs continuously and does not stop automatically. The Status shows Running.
Migrate data using the DTS console
- Go to the Data Migration page.
  - Log on to the DTS console.
  - In the left-side navigation pane, click Data Migration.
  - In the top navigation bar, select the region where the migration instance resides.
- Click Create Task.
- Configure the source and destination databases.
Source database

| Parameter | Description |
|---|---|
| Task Name | The name of the DTS task. DTS generates a name automatically. Specify an informative name to identify the task easily. The name does not need to be unique. |
| Select Existing Connection | Choose an existing registered instance to auto-populate parameters, or configure the source manually. To register a database, see Manage database connections. |
| Database Type | Select SQL Server. |
| Access Method | Select Public IP Address or Express Connect, VPN Gateway, or Smart Access Gateway based on how the source connects to Alibaba Cloud. In this example, Public IP Address is selected. |
| Instance Region | If Access Method is Public IP Address, select the region of the Azure SQL Server database. If unavailable, select the nearest region. If Access Method is Express Connect, VPN Gateway, or Smart Access Gateway, select the region of the Alibaba Cloud VPC to which the database is connected. |
| Replicate Data Across Alibaba Cloud Accounts | Select No (this example uses an instance in the current account). This parameter is available only if Access Method is Express Connect, VPN Gateway, or Smart Access Gateway. |
| Connected VPC | The ID of the Alibaba Cloud VPC to which the source database is connected. Available only if Access Method is Express Connect, VPN Gateway, or Smart Access Gateway. |
| Domain Name or IP | The domain name or IP address of the source database. Using the domain name is recommended. |
| Port Number | The service port of the source database. |
| Database Account | The account for the source database. For Azure SQL Database, use the format `<Administrator username>@<Server name>`. Run `SELECT @@SERVERNAME AS ServerName` to query the server name. See Permissions required. |
| Database Password | The password for the database account. |
| Encryption | Select Non-encrypted or SSL-encrypted based on your source database configuration. DTS trusts the server certificate by default. |
Destination database

| Parameter | Description |
|---|---|
| Select Existing Connection | Choose an existing registered instance to auto-populate parameters, or configure the destination manually. To register a database, see Manage database connections. |
| Database Type | Select SQL Server. |
| Access Method | Select Alibaba Cloud Instance. |
| Instance Region | The region of the destination ApsaraDB RDS for SQL Server instance. |
| Instance ID | The instance ID of the destination ApsaraDB RDS for SQL Server instance. |
| Database Account | The database account for the destination instance. See Permissions required. |
| Database Password | The password for the database account. |
| Encryption | Select Non-encrypted or SSL-encrypted based on your destination database configuration. DTS trusts the server certificate by default. |
- Click Test Connectivity and Proceed. Make sure the CIDR blocks of DTS servers are added to the whitelist of the source database. Then click Test Connectivity.
  Important: Adding public CIDR blocks of DTS servers to your database whitelist creates security risks. Before using DTS, understand and acknowledge these risks and take preventive measures, including strengthening username and password security, limiting exposed ports, authenticating API calls, and regularly auditing and removing unauthorized CIDR blocks. See Add the CIDR blocks of DTS servers.
- Configure migration objects.
  - On the Configure Objects page, set the following parameters.

    | Parameter | Description |
    |---|---|
    | Migration Types | Select Schema Migration and Full Data Migration for full migration only. Select all three (Schema Migration, Full Data Migration, and Incremental Data Migration) for migrations without service interruption. If Schema Migration is not selected, make sure the destination database has the required tables and enable object name mapping. If Incremental Data Migration is not selected, do not write to the source during migration. |
    | Method to Migrate Triggers in Source Database | Select Manual Migration to prevent triggers from being applied before incremental migration completes. This parameter is available only if both Schema Migration and Incremental Data Migration are selected. See Synchronize or migrate triggers from the source database. |
    | SQL Server Incremental Synchronization Mode | Select Polling and querying CDC instances for incremental synchronization. Available only if Incremental Data Migration is selected. |
    | The maximum number of tables for which CDC is enabled that DTS supports | Retain the default value. |
    | Processing Mode of Conflicting Tables | Precheck and Report Errors: fails the precheck if the destination has tables with identical names to the source. Use object name mapping to rename conflicting tables. Ignore Errors and Proceed: skips the identical name check. During full migration, DTS retains existing destination records with the same primary key. During incremental migration, DTS overwrites them. If schemas differ, only matching columns are migrated or the task fails. |
    | Capitalization of Object Names in Destination Instance | Controls capitalization of database, table, and column names in the destination. Default: DTS default policy. See Specify the capitalization of object names in the destination instance. |
    | Source Objects | Select objects and click the right-arrow icon to add them to Selected Objects. You can select columns, tables, or databases. If you select tables or columns, DTS does not migrate other objects such as views, triggers, and stored procedures. |
    | Selected Objects | Right-click an object to rename it in the destination, remove it from the selection, specify WHERE filter conditions, or select specific SQL operations to migrate. See Database, table, and column name mapping and Specify filter conditions. Renaming an object may cause dependent objects to fail migration. Click the left-arrow icon to move objects back to Source Objects. |
  - Click Next: Advanced Settings.

    | Parameter | Description |
    |---|---|
    | Dedicated Cluster for Task Scheduling | DTS schedules tasks to the shared cluster by default. Purchase a dedicated cluster for improved stability. See What is a DTS dedicated cluster. |
    | Retry Time for Failed Connections | How long DTS retries after a connection failure. Valid values: 10–1,440 minutes. Default: 720. Set to at least 30. If reconnection succeeds within this period, the task resumes. Otherwise, the task fails. Note: If multiple tasks share a source or destination database, the most recent retry time takes precedence. DTS charges for the instance during retries. |
    | Retry Time for Other Issues | How long DTS retries after DDL or DML operation failures. Valid values: 1–1,440 minutes. Default: 10. Set to at least 10. Must be less than Retry Time for Failed Connections. |
    | Enable Throttling for Full Data Migration | Limits load during full migration. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only if Full Data Migration is selected. |
    | Enable Throttling for Incremental Data Migration | Limits load during incremental migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only if Incremental Data Migration is selected. |
    | Environment Tag | An optional tag to identify the DTS instance by environment type. |
    | Configure ETL | Select Yes to enable extract, transform, and load (ETL) and enter data processing statements. Select No to skip. See What is ETL? and Configure ETL in a data migration or data synchronization task. |
    | Monitoring and Alerting | Select Yes to receive notifications when the task fails or migration latency exceeds a threshold. Configure the alert threshold and notification settings. See Configure monitoring and alerting when you create a DTS task. |

  - Click Next Step: Data Verification to configure data verification. See Configure data verification.
- Save the task settings and run a precheck.
  - To preview the API parameters for this task, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.
  - Otherwise, click Next: Save Task Settings and Precheck.
  - The migration task starts only after passing the precheck.
  - If the precheck fails, click View Details next to each failed item, fix the issues, and run the precheck again.
  - For alert items: if the alert cannot be ignored, fix the issue and rerun the precheck. If it can be ignored, click Confirm Alert Details, then Ignore, confirm with OK, and click Precheck Again. Ignoring alerts may result in data inconsistency or business risk.
- Purchase a data migration instance. Monitor progress on the Data Migration page.
  - Wait until Success Rate reaches 100%, then click Next: Purchase Instance.
  - On the Purchase Instance page, configure the instance class.

    | Parameter | Description |
    |---|---|
    | Resource Group | The resource group for the migration instance. Default: default resource group. See What is Resource Management? |
    | Instance Class | Select based on the required migration speed. See Instance classes of data migration instances. |

  - Select the Data Transmission Service (Pay-as-you-go) Service Terms check box.
  - Click Buy and Start. In the confirmation dialog, click OK.
    - If the task does not include incremental data migration, it stops automatically. The Status shows Completed.
    - If the task includes incremental data migration, it runs continuously and does not stop automatically. The Status shows Running.
FAQ
Can I select which objects to verify when configuring a migration task in the ApsaraDB RDS console?
No. The objects to be verified are the same as the objects selected for migration.
What is the full data verification mode for tasks configured in the ApsaraDB RDS console?
Hash values of migrated data are checked based on a 100% sampling ratio.
What does full data verification check?
Data consistency between the source and destination databases is checked by comparing all data across both.