Data Transmission Service (DTS) supports heterogeneous data migration from an RDS for SQL Server instance to a PolarDB for MySQL cluster, including schema migration, full data migration, and incremental data migration.
Prerequisites
Before you begin, ensure that you have:
A source RDS for SQL Server instance running a version supported by DTS. For supported versions, see Migration solutions.
A destination PolarDB for MySQL cluster with storage capacity that exceeds the storage space used by the source instance. To create a cluster, see Create a cluster using the custom purchase method or Create a subscription cluster.
Database accounts with the required permissions on both source and destination. See Permissions required.
If the source instance meets any of the following conditions, split the migration into multiple tasks:
More than 10 databases
Log backups on a single database more than once per hour
DDL operations on a single database more than 100 times per hour
Log volume of a single database exceeding 20 MB/s
Change Data Capture (CDC) required for more than 1,000 tables
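Several of these thresholds can be checked on the source instance before you decide whether to split the task. The following is a sketch using standard SQL Server catalog and msdb views; it assumes you can run queries against the source instance:

```sql
-- Count user databases (system databases have database_id <= 4).
SELECT COUNT(*) AS user_database_count
FROM sys.databases
WHERE database_id > 4;

-- Count log backups per database over the last 24 hours
-- (type = 'L' marks transaction log backups in msdb history).
SELECT database_name, COUNT(*) AS log_backups_last_24h
FROM msdb.dbo.backupset
WHERE type = 'L'
  AND backup_finish_date > DATEADD(HOUR, -24, GETDATE())
GROUP BY database_name;
```

If the counts exceed the limits above (more than 10 databases, or more than one log backup per hour on a database), plan multiple migration tasks.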
Billing
| Migration type | Link configuration fee | Internet traffic fee |
|---|---|---|
| Schema migration and full data migration | Free | Free |
| Incremental data migration | Charged. For details, see Billing overview. | — |
Permissions required
| Database | Schema migration | Full migration | Incremental migration |
|---|---|---|---|
| RDS for SQL Server instance | Read | Read | Owner |
| PolarDB for MySQL cluster | Read and write | Read and write | Read and write |
To create accounts and grant permissions:
RDS for SQL Server: see Create a standard account, a privileged account, or a global read-only account. To adjust permissions after creation, see Modify account permissions.
PolarDB for MySQL: see Create and manage database accounts. To update passwords, see Manage database account passwords.
SQL operations supported by incremental migration
| Operation type | Supported SQL operations |
|---|---|
| DML | INSERT, UPDATE, DELETE. Note UPDATE statements that modify only large object columns are not supported. |
| DDL | CREATE TABLE; ALTER TABLE (ADD COLUMN and DROP COLUMN only); DROP TABLE; CREATE INDEX; DROP INDEX |
The following DDL operations are not supported:
DDL operations that involve user-defined types
Online DDL operations
Transactional DDL operations (for example, adding multiple columns in a single statement, or combining DDL and DML in a single statement)
DDL operations that use reserved keywords as attribute names
DDL operations performed by system stored procedures
TRUNCATE TABLE
Partitions and functions within table definitions
Limitations
Source database requirements
Tables to be migrated must have primary keys or UNIQUE constraints with unique fields. Otherwise, duplicate data may appear in the destination database.
A single migration task supports a maximum of 1,000 tables when you edit table-level objects (such as mapping table or column names). If you exceed this limit, split the tables into multiple migration tasks or configure the task to migrate an entire database.
A single migration task supports a maximum of 10 databases. Exceeding this limit may cause stability and performance issues.
If you migrate specific objects instead of an entire database, tables with the same name but different schema names cannot be migrated to the same destination database.
The source database server must have sufficient outbound bandwidth. Insufficient bandwidth reduces migration speed.
READ_COMMITTED_SNAPSHOT must be enabled on the source database during full data migration. If it is disabled, shared locks taken during migration can block data writes and may cause data inconsistency or instance failures. Failures caused by this are not covered by the DTS Service-Level Agreement (SLA).
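Both source-side requirements above can be verified with catalog queries. A sketch, where `your_database` is a placeholder for the source database name:

```sql
-- Tables with neither a primary key nor a unique index/constraint
-- (candidates for duplicate rows in the destination).
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id
WHERE NOT EXISTS (
    SELECT 1 FROM sys.indexes i
    WHERE i.object_id = t.object_id
      AND (i.is_primary_key = 1 OR i.is_unique = 1)
);

-- Check whether READ_COMMITTED_SNAPSHOT is enabled (1 = on).
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = N'your_database';

-- Enable it if needed. This may require that no other
-- connections are active in the database.
ALTER DATABASE your_database SET READ_COMMITTED_SNAPSHOT ON;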
Incremental migration requirements
For incremental migration, the source database must meet these conditions:
Logs must be enabled with the backup mode set to Full, and a full physical backup must have completed successfully.
For incremental-only tasks, DTS requires logs to be retained for more than 24 hours. For tasks that include both full and incremental migration, logs must be retained for at least 7 days. You can increase the retention period to more than 24 hours after full migration completes. Insufficient log retention may cause the DTS task to fail, and in extreme cases, data inconsistency or data loss may occur. Issues caused by insufficient log retention are not covered by the DTS SLA.
DTS uses the fn_log function to obtain source database logs. Do not clear source database logs prematurely; otherwise, the DTS task may fail.
Do not perform DDL schema changes during initial schema synchronization or full data migration. Otherwise, the migration task fails.
Do not write new data to the source instance if you perform only full data migration. To ensure real-time data consistency, select Schema Migration, Full Data Migration, and Incremental Data Migration together.
If a migration task includes incremental data migration, do not reindex. Otherwise, the task may fail or data may be lost.
You cannot change the primary keys of tables with CDC enabled.
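The log and backup requirements above can be checked with a few statements. This is a sketch for a self-managed instance (on RDS for SQL Server, backups are managed through the console, not T-SQL); `your_database` and the backup path are placeholders:

```sql
-- Confirm the recovery model is FULL (required for log-based migration).
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'your_database';

-- Switch to the full recovery model if needed.
ALTER DATABASE your_database SET RECOVERY FULL;

-- Take a full backup so that subsequent log backups form a valid chain.
BACKUP DATABASE your_database
TO DISK = N'/var/opt/mssql/backup/your_database.bak';
```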
CDC enablement conditions
To enable CDC for tables in the source database, all of the following conditions must be met. Otherwise, the precheck fails.
The value of the srvname field in the sys.sysservers view must match the return value of the SERVERPROPERTY function.
For self-managed SQL Server, the database owner must be sa. For RDS for SQL Server, the database owner must be sqlsa.
Enterprise Edition: SQL Server 2008 or later.
Standard Edition: SQL Server 2016 SP1 or later.
SQL Server 2017 (Standard or Enterprise Edition): upgrade the database version before enabling CDC.
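These conditions can be checked ahead of the precheck with catalog queries. A sketch:

```sql
-- These two values must match before CDC can be enabled
-- (srvid = 0 identifies the local server entry).
SELECT srvname FROM sys.sysservers WHERE srvid = 0;
SELECT SERVERPROPERTY('ServerName') AS server_name;

-- Check each database's owner (must be sa for self-managed,
-- sqlsa for RDS for SQL Server).
SELECT d.name AS database_name, sp.name AS owner_name
FROM sys.databases d
JOIN sys.server_principals sp ON sp.sid = d.owner_sid;

-- Check the edition and version against the CDC requirements above.
SELECT SERVERPROPERTY('Edition') AS edition,
       SERVERPROPERTY('ProductVersion') AS version;
```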
Data types and objects that cannot be migrated
The following SQL Server data types cannot be migrated: CURSOR, ROWVERSION, SQL_VARIANT, HIERARCHYID, POLYGON, GEOMETRY, GEOGRAPHY, and user-defined types created with the CREATE TYPE command.
The following object types cannot be migrated: PROCEDURE, FUNCTION, TRIGGER, DATATYPE, SYNONYM, CATALOG, PLAN_GUIDE, and SEQUENCE.
Complex DDL operations cannot be migrated.
Special scenarios
| Scenario | Limitation |
|---|---|
| Four-byte characters (rare characters or emojis) | The destination database and table must use the utf8mb4 charset. Set the instance-level parameter character_set_server to utf8mb4 in the destination database. |
| Single CDC-enabled field exceeds 64 KB | Run exec sp_configure 'max text repl size', -1; on the source database in advance. By default, a CDC job can process a single field with a maximum length of 64 KB. |
| Read-only source instance | DDL operations cannot be migrated. |
| Azure SQL Database source | A single DTS task can migrate only one database. |
| RDS for SQL Server source with incremental migration | Disable transparent data encryption (TDE) to ensure task stability. For details, see Disable TDE. |
| Web Edition RDS for SQL Server source | Set SQL Server Incremental Synchronization Mode to Incremental Synchronization Based on Logs of Source Database (Heap tables are not supported). |
| Multiple DTS instances sharing the same SQL Server source | Their incremental data ingestion modules operate independently. |
| RDS for SQL Server source | DTS creates an rdsdt_dtsacct account in the source instance. Do not delete this account or change its password while the task is running. For details, see System accounts. |
Other considerations
If you use the sp_rename command to rename objects (such as stored procedures) before the initial schema synchronization task runs, the task may not work as expected or may fail. Use the ALTER command to rename objects instead.
Full data migration uses concurrent INSERT operations, which cause table fragmentation in the destination database. After full data migration completes, the destination tables occupy more storage space than the source tables.
If a DDL statement fails to write to the destination database, the DTS task continues running. View the failed statement in the task logs. For details, see View task logs.
Schedule migration during off-peak hours. DTS consumes read and write resources on both source and destination databases during full data migration, which increases database load.
If a task fails, DTS support staff will attempt to restore it within eight hours. During restoration, they may restart the task or adjust DTS task parameters (not database parameters). For adjustable parameters, see Modify instance parameters.
SQL Server is a commercial closed-source database. Due to format-specific constraints, issues may occur when DTS performs CDC and log parsing on SQL Server. Before enabling incremental migration for a SQL Server source in production, run a comprehensive proof of concept (POC) test that covers all business change types, schema changes, and peak-hour stress scenarios. Make sure the business logic in the production environment matches the POC test environment.
Schema migration note
During schema migration, DTS migrates foreign keys from the source to the destination database. During full and incremental data migration, DTS temporarily disables constraint checks and cascade operations on foreign keys at the session level. Performing cascade updates or deletes on the source during migration may cause data inconsistency.
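For illustration only, session-level disabling of foreign key checks on a MySQL-family destination looks like the following. This is a sketch of the kind of setting DTS applies internally, not DTS's exact commands:

```sql
-- MySQL / PolarDB for MySQL syntax. DTS manages this per session;
-- shown here only to illustrate why source-side cascade updates or
-- deletes during migration can leave the destination inconsistent.
SET SESSION foreign_key_checks = 0;
-- ... rows are written without firing FK checks or cascade actions ...
SET SESSION foreign_key_checks = 1;
```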
Hybrid log parsing mode (additional limits)
In hybrid log parsing mode, do not consecutively add or remove columns within an interval of less than 10 minutes. For example, the following consecutive statements cause a task error:
```sql
ALTER TABLE test_table DROP COLUMN Flag;
ALTER TABLE test_table ADD Remark nvarchar(50) NOT NULL DEFAULT('');
```

DTS creates the dts_cdc_sync_ddl trigger, the dts_sync_progress heartbeat table, and the dts_cdc_ddl_history DDL storage table in the source database. In hybrid incremental synchronization mode, DTS additionally enables database-level CDC and CDC for some tables. Keep the data change rate for CDC-enabled tables at or below 1,000 records per second (RPS).
Choose an incremental synchronization mode
This setting is available only when Incremental Data Migration is selected.
| Mode | Best for | Limitations |
|---|---|---|
| Log-based Parsing for Non-heap Tables and CDC-based Incremental Synchronization for Heap Tables (Hybrid Log-based Parsing) | Heap tables, tables without primary keys, compressed tables, tables with computed columns. Provides high link stability and supports a wide range of DDL scenarios. | DTS creates the dts_cdc_sync_ddl trigger, dts_sync_progress heartbeat table, and dts_cdc_ddl_history DDL storage table in the source database, and enables database-level CDC. You cannot run SELECT INTO, TRUNCATE, or RENAME COLUMN on tables with CDC enabled, or manually delete DTS-created triggers. |
| Incremental Synchronization Based on Logs of Source Database (Heap tables are not supported) | Scenarios with no heap tables, tables without primary keys, compressed tables, or tables with computed columns. Non-intrusive to the source database. | Does not support heap tables, tables without primary keys, compressed tables, or tables with computed columns. For tables to be migrated, a clustered index containing primary key columns is required. |
| Polling and querying CDC instances for incremental synchronization | Sources that are Amazon RDS for SQL Server, Azure SQL Database, Azure SQL Managed Instance, Azure SQL Server on Virtual Machine, or Google Cloud SQL for SQL Server. Uses the native CDC component to obtain incremental data, improving stability and reducing network bandwidth usage. | Requires the source database account to have CDC enablement permissions. Incremental migration latency is approximately 10 seconds. Migrating multiple tables across multiple databases may cause stability and performance issues. |
Additional limits for Polling and querying CDC instances for incremental synchronization:
For Azure SQL Database: the privileged account (server administrator) from the Azure SQL Database console meets the permissions requirement. For vCore-based databases, all instance types support CDC. For DTU-based databases, only S3 and later support CDC. Clustered columnstore index tables do not support CDC.
For Amazon RDS for SQL Server: the privileged account can enable database-level CDC for stored procedures.
Do not consecutively add or remove columns — for example, do not perform more than two DDL operations to add or remove columns within one minute. Otherwise, the task may fail.
Do not change the CDC instance of the source database. Otherwise, the task may fail or data may be lost.
CDC data retention (for hybrid and polling modes):
By default, incremental data stored in the CDC component is retained for 3 days. Adjust the retention period with the following command, where <time> is in minutes:

```sql
exec sys.sp_cdc_change_job @job_type = 'cleanup', @retention = <time>;
```

If a single table generates more than 10 million incremental changes per day, set <time> to 1440.
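Before changing the retention period, you can inspect the current value of the CDC cleanup job. A sketch, run in the context of the CDC-enabled database:

```sql
-- Current CDC cleanup job settings; retention is in minutes.
SELECT job_type, retention
FROM msdb.dbo.cdc_jobs
WHERE job_type = 'cleanup';

-- Example: extend retention to one day (1440 minutes) for
-- high-churn tables, per the guidance above.
EXEC sys.sp_cdc_change_job @job_type = 'cleanup', @retention = 1440;
```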
Limit CDC to no more than 1,000 tables per migration task. Exceeding this limit may cause task latency or instability.
The prerequisite module of an incremental migration task enables CDC for the source database. During this process, the source database may be briefly locked due to SQL Server database kernel constraints.
For tables in Incremental Synchronization Based on Logs of Source Database (Heap tables are not supported) mode, see How to view information about heap tables, tables without primary keys, compressed tables, tables with computed columns, and tables with sparse columns in SQL Server.
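The table structures that this mode does not support can be identified with catalog queries. A sketch:

```sql
-- Heap tables: a table is a heap when it has an index_id = 0 entry.
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id
JOIN sys.indexes i ON i.object_id = t.object_id AND i.index_id = 0;

-- Tables containing computed columns.
SELECT DISTINCT OBJECT_SCHEMA_NAME(object_id) AS schema_name,
       OBJECT_NAME(object_id) AS table_name
FROM sys.computed_columns;

-- Tables with row or page compression on any partition.
SELECT DISTINCT OBJECT_NAME(object_id) AS table_name
FROM sys.partitions
WHERE data_compression > 0;
```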
Create a migration task
Step 1: Open the migration task list
Use one of the following methods to navigate to the migration task list for the destination region.
From the DTS console
Log on to the Data Transmission Service (DTS) console.
In the left navigation pane, click Data Migration.
In the upper-left corner, select the region where the migration instance is located.
From the DMS console
The actual operations may vary based on the mode and layout of the DMS console. For details, see Simple mode console. To change the layout and style, see Customize the layout and style of the DMS console.
Log on to the Data Management (DMS) console.
In the top menu bar, choose .
To the right of Data Migration Tasks, select the region where the migration instance is located.
Step 2: Configure the source and destination databases
Click Create Task.
Configure the source database.
Warning: After you select the source and destination instances, carefully read the limits displayed at the top of the page. Otherwise, the task may fail or data inconsistency may occur.
| Parameter | Value |
|---|---|
| Task Name | DTS generates a name automatically. Specify a descriptive name for easy identification. The name does not need to be unique. |
| Select Existing Connection | Select a registered database instance from the drop-down list to auto-fill the connection details. If the instance is not registered, configure the connection manually. Note: In the DMS console, this parameter is named Select a DMS database instance. |
| Database Type | Select SQL Server. |
| Access Method | Select Alibaba Cloud Instance. |
| Instance Region | Select the region of the source RDS for SQL Server instance. |
| Replicate Data Across Alibaba Cloud Accounts | Select No (same-account migration). |
| RDS Instance ID | Select the ID of the source RDS for SQL Server instance. |
| Database Account | Enter the database account. For permission requirements, see Permissions required. |
| Database Password | Enter the account password. |
| Encryption | Select Non-encrypted if SSL encryption is disabled on the source database. Select SSL-encrypted if SSL encryption is enabled. DTS trusts the server certificate by default. |

Configure the destination database.

| Parameter | Value |
|---|---|
| Select Existing Connection | Select a registered database instance from the drop-down list to auto-fill the connection details. If the instance is not registered, configure the connection manually. Note: In the DMS console, this parameter is named Select a DMS database instance. |
| Database Type | Select PolarDB for MySQL. |
| Access Method | Select Alibaba Cloud Instance. |
| Instance Region | Select the region of the destination PolarDB for MySQL cluster. |
| PolarDB Cluster ID | Select the ID of the destination PolarDB for MySQL cluster. |
| Database Account | Enter the database account. For permission requirements, see Permissions required. |
| Database Password | Enter the account password. |
| Encryption | Select a connection method as needed. For details about SSL encryption, see Enable SSL encryption. |

Click Test Connectivity and Proceed at the bottom of the page.
DTS automatically adds its IP address ranges to the security settings of the source and destination databases. If the source or destination is a self-managed database (Access Method is not Alibaba Cloud Instance), also click Test Connectivity in the CIDR Blocks of DTS Servers dialog box. For details, see Add DTS server IP addresses to a whitelist.
Step 3: Configure migration objects
On the Configure Objects page, configure the following settings.
| Parameter | Description |
|---|---|
| Migration Types | Select the migration types for your scenario. For full migration only, select Schema Migration and Full Data Migration. For zero-downtime migration, also select Incremental Data Migration. Note If you skip Schema Migration, make sure the destination database already has the target databases and tables. If you skip Incremental Data Migration, do not write new data to the source instance during migration. |
| Schema Mapping Mode of Source and Destination Databases | Select a schema mapping mode to map schemas between source and destination databases. Warning Tables in different schemas of the source database cannot share the same name. Otherwise, data inconsistency or task failure may occur. |
| SQL Server Incremental Synchronization Mode | Select a mode based on your source database type and table structure. See Choose an incremental synchronization mode. This parameter appears only when Incremental Data Migration is selected. |
| The maximum number of tables for which CDC is enabled | Specify the maximum number of CDC-enabled tables for this migration instance. Default: 1,000. This option does not appear when Incremental Synchronization Based on Logs of Source Database (Heap tables are not supported) is selected. |
| Processing Mode of Conflicting Tables | Precheck and Report Errors: checks for tables with the same names in the destination database before starting migration. If matches exist, the precheck fails and migration does not start. Note If a conflicting table cannot be deleted or renamed, use Object name mapping to rename it. Ignore Errors and Proceed: skips the duplicate table check. Warning This may cause data inconsistency. During full migration, DTS keeps the destination record; during incremental migration, the source record overwrites the destination record. |
| Capitalization of Object Names in Destination Instance | Configure the case sensitivity policy for migrated object names (databases, tables, columns). Default: DTS default policy. For details, see Case sensitivity of object names in the destination database. |
| Source Objects | In the Source Objects box, select the objects to migrate and move them to the Selected Objects box. Note: The migration granularity is schema, table, or column. Selecting only tables or columns excludes views, triggers, and stored procedures. |
| Selected Objects | To rename an object in the destination, right-click it in the Selected Objects box. For details, see Individual table column mapping. To rename multiple objects at once, click Batch Edit. For details, see Map multiple object names at a time. To filter rows, right-click a table and configure the WHERE condition. For details, see Configure filter conditions. To select specific incremental SQL operations, right-click the object and select the desired operations. Note Object name mapping may cause migration failures for other objects that depend on the renamed object. |
Step 4: Configure advanced settings
Click Next: Advanced Settings and configure the following parameters.
| Parameter | Description |
|---|---|
| Dedicated Cluster for Task Scheduling | By default, DTS schedules tasks on a shared cluster. To improve task stability, purchase a dedicated cluster. |
| Retry Time for Failed Connections | The duration DTS retries after a connection failure. Default: 720 minutes. Range: 10–1,440 minutes. Set to more than 30 minutes. If DTS reconnects within this duration, the task resumes automatically. Note For multiple DTS instances sharing the same source or destination, the retry time is determined by the last created task. Charges apply during the retry period. |
| Retry Time for Other Issues | The duration DTS retries after non-connectivity errors (such as DDL or DML exceptions). Default: 10 minutes. Range: 1–1,440 minutes. Set to more than 10 minutes. This value must be less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Migration | Limit read/write load during full migration by setting Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected. Adjust the speed after the task starts. For details, see Enable throttling for data migration. |
| Enable Throttling for Incremental Data Migration | Limit load during incremental migration by setting RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected. |
| Environment Tag | Select a tag to identify the instance. Optional. |
| Configure ETL | Choose whether to enable extract, transform, and load (ETL). Select Yes to enter data processing statements. For details, see Configure ETL in a data migration or data synchronization task. |
| Monitoring and Alerting | Select Yes to configure alert thresholds and notifications. DTS sends an alert if migration fails or latency exceeds the threshold. |
Step 5: Configure data validation
Click Next: Data Validation to configure a data validation task. For details, see Configure data validation.
Step 6: Save the task and run the precheck
Click Next: Save Task Settings and Precheck.
To preview the API parameters for this configuration, hover over the Next: Save Task Settings and Precheck button and click Preview OpenAPI parameters.
DTS runs a precheck before the migration starts. The task starts only after it passes all precheck items.
If the precheck fails, click View Details next to the failed item, fix the issue, and run the precheck again.
If a warning appears:
For warnings that cannot be ignored: click View Details, fix the issue, and run the precheck again.
For warnings that can be ignored: click Confirm Alert Details > Ignore > OK > Precheck Again to skip the item. Ignoring warnings may cause data inconsistency or business risks.
Step 7: Purchase the instance and start migration
When the Success Rate reaches 100%, click Next: Purchase Instance.
On the Purchase page, select the link specification for the migration instance.
| Parameter | Description |
|---|---|
| Resource Group Settings | Select the resource group for the instance. Default: default resource group. For details, see What is Resource Management? |
| Instance Class | Select a specification based on your migration scenario. The link specification affects migration speed. For details, see Data migration link specifications. |

Read and select Data Transmission Service (Pay-as-you-go) Service Terms.
Click Buy and Start. In the dialog box, click OK.
Monitor migration progress
After the task starts, go to the Data Migration Tasks list page to monitor progress.
If the task does not include incremental migration, it stops automatically after full migration completes and the Status changes to Completed.
If the task includes incremental migration, it runs continuously. The Status remains Running until you manually stop the task.