Data Transmission Service (DTS) synchronizes data from RDS SQL Server to AnalyticDB for MySQL 3.0, enabling real-time analytics on operational data.
Prerequisites
Before you begin, ensure that you have:
- An RDS SQL Server instance. For supported versions, see Synchronization overview. To create one, see Quickly create and use an RDS SQL Server instance.
- An AnalyticDB for MySQL 3.0 cluster. To create one, see Create a cluster. The cluster's storage space must exceed the source RDS SQL Server instance's storage.
- Split the synchronization into multiple tasks if any of the following conditions apply to the source instance:
  - The number of databases exceeds 10.
  - Log backups are performed on a single database more than once per hour.
  - DDL operations are performed on a single database more than 100 times per hour.
  - The log volume of a single database exceeds 20 MB/s.
  - Change Data Capture (CDC) needs to be enabled for more than 1,000 tables.
- In hybrid log parsing mode, where SQL Server Incremental Synchronization Mode is set to Log-based Parsing for Non-heap Tables and CDC-based Incremental Synchronization for Heap Tables (Hybrid Log-based Parsing), the following source database versions are supported:
  - Enterprise or Enterprise Evaluation Edition: versions 2012, 2014, 2016, 2019, or 2022.
  - Standard Edition: versions 2016, 2019, or 2022.
Limitations
Schema and DML behavior
- During schema synchronization, DTS does not synchronize foreign keys from the source database to the destination database.
- During full and incremental synchronization, DTS temporarily disables constraint checks and foreign key cascade operations at the session level. If cascade update or delete operations occur in the source database while the task is running, data inconsistency may occur.
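To see whether the cascade limitation applies to your workload, you can list the foreign keys in the source database that define cascading actions. This is a minimal sketch that only identifies risk; it does not change any behavior:

```sql
-- List foreign keys whose delete or update rule cascades.
-- Any rows returned indicate tables where cascade operations
-- during synchronization could cause data inconsistency.
SELECT name AS foreign_key_name,
       delete_referential_action_desc,
       update_referential_action_desc
FROM sys.foreign_keys
WHERE delete_referential_action > 0
   OR update_referential_action > 0;
```

Run this in each source database you plan to synchronize.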
Source database limits
| Category | Limit |
|---|---|
| Primary key requirement | Tables to synchronize must have a primary key or a UNIQUE constraint, and the fields in that key or constraint must be unique. Otherwise, duplicate data may appear in the destination database. |
| Table count per task | For table-level synchronization with column mapping, a single task supports a maximum of 5,000 tables. If the count exceeds this, split the tables into multiple tasks, or configure a task to synchronize the entire database. |
| Database count per task | A single task supports a maximum of 10 databases. Exceeding this limit risks stability and performance issues. |
| Object naming | When synchronizing specific objects to the same destination database, objects with the same table name but different schema names cannot be selected together. |
| Log access | DTS uses the fn_log function to get logs from the source database. This function has performance bottlenecks. Do not clear the source database logs too early, or the task may fail. |
| Log retention (incremental only) | Data logs must be enabled with backup mode set to Full, and a full physical backup must have been performed. DTS requires logs to be retained for more than 24 hours. |
| Log retention (full + incremental) | DTS requires the source database to retain data logs for at least 7 days. After full synchronization completes, reduce the retention period to more than 24 hours. If the retention period is too short, the DTS task may fail, or data inconsistency or data loss may occur. Such issues are not covered by the DTS Service-Level Agreement (SLA). |
| Transparent Data Encryption (TDE) | If the source database is an RDS for SQL Server instance, disable the Transparent Data Encryption (TDE) feature to ensure the stability of the sync instance. For more information, see Disable TDE. |
| Read-only instances | DDL operations cannot be synchronized from a read-only source instance. |
| Azure SQL Database | A single task can synchronize only one database when the source is Azure SQL Database. |
| Hybrid mode: consecutive DDL | Do not run consecutive add or drop column operations within a 10-minute interval. For example, the following consecutive statements cause the task to fail: `ALTER TABLE test_table DROP COLUMN Flag;` followed by `ALTER TABLE test_table ADD Remark nvarchar(50) NOT NULL DEFAULT('');` |
| sp_rename | Using sp_rename to rename objects (such as stored procedures) before a schema synchronization task runs may produce unexpected results or cause the task to fail. Use the ALTER command to rename database objects instead. |
| DDL during sync | Do not run DDL operations that change database or table schemas during schema synchronization or full synchronization. Otherwise, the synchronization task fails. During full synchronization, DTS queries the source database, creating metadata locks that may block DDL operations. |
| Web-based RDS SQL Server | Set SQL Server Incremental Synchronization Mode to Incremental Synchronization Based on Logs of Source Database (Heap tables are not supported) when the source is a web-based RDS SQL Server. |
| READ_COMMITTED_SNAPSHOT | Keep the READ_COMMITTED_SNAPSHOT transaction processing mode enabled during full data sync. Disabling it causes shared locks that may block data writes and can lead to data inconsistency or instance failures. Such issues are not covered by the DTS SLA. |
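Two of the limits above (the Full backup mode requirement and the READ_COMMITTED_SNAPSHOT requirement) can be checked up front with a single catalog query. This is a minimal sketch, assuming a source database named `testdb` (a placeholder; substitute your own name):

```sql
-- Check the recovery model and READ_COMMITTED_SNAPSHOT state
-- of the source database ('testdb' is a placeholder name).
SELECT name,
       recovery_model_desc,          -- must be FULL for log backups
       is_read_committed_snapshot_on -- keep enabled during full sync
FROM sys.databases
WHERE name = N'testdb';
```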
CDC requirements
If Change Data Capture (CDC) needs to be enabled for tables in the source database, the following conditions must be met. Otherwise, the precheck fails.
- The `srvname` field in the `sys.sysservers` view must match the return value of the `SERVERPROPERTY` function.
- Self-managed SQL Server: the database owner must be `sa`.
- RDS for SQL Server: the database owner must be `sqlsa`.
- Enterprise Edition: SQL Server 2008 or later.
- Standard Edition: SQL Server 2016 SP1 or later.
- SQL Server 2017 (Standard or Enterprise Edition) is not supported. Upgrade the version before configuring the task.
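The server-name condition above can be verified directly. In this minimal sketch, the two queries must return the same server name for the precheck to pass (`srvid = 0` identifies the local server entry):

```sql
-- Both values must match; otherwise the DTS precheck fails.
SELECT srvname FROM sys.sysservers WHERE srvid = 0;
SELECT SERVERPROPERTY('ServerName') AS server_name;
```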
Other limits
Supported objects for initial schema synchronization: Schema, Table, View, Function, and Procedure.
This scenario involves data synchronization between heterogeneous databases. Data types cannot be mapped one-to-one, which may cause the task to fail or result in data loss. Evaluate how data type mapping affects your business before proceeding. For more information, see Data type mappings for initial schema synchronization.
Objects not supported for initial schema synchronization: assemblies, service broker, full-text indexes, full-text catalogs, distributed schemas, distributed functions, CLR stored procedures, CLR scalar functions, CLR table-valued functions, internal tables, system objects, and aggregate functions.
Unsupported data types: CURSOR, ROWVERSION, SQL_VARIANT, HIERARCHYID, POLYGON, GEOMETRY, GEOGRAPHY, and user-defined types created with the CREATE TYPE command.
Tables with computed columns cannot be synchronized.
Destination Database: The destination database must contain a custom primary key. Alternatively, configure a Primary Key Column in the Configurations for Databases, Tables, and Columns step. Otherwise, synchronization may fail.
Log-based parsing mode restriction: If SQL Server Incremental Synchronization Mode is set to Incremental Synchronization Based on Logs of Source Database (Heap tables are not supported), the tables to synchronize must have a clustered index that contains the primary key column. Heap tables, tables without a primary key, compressed tables, tables with computed columns, and tables with sparse columns are not supported.
For information about how to view these table types in SQL Server, see How to view information about heap tables, tables without a primary key, compressed tables, tables with computed columns, and tables with sparse columns in SQL Server.
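As one example of such a check, heap tables can be listed from the catalog views: a heap is any table whose row in `sys.indexes` has `index_id = 0`. This sketch covers heaps only; the referenced topic describes how to find the other unsupported table types:

```sql
-- List user tables stored as heaps; these are not supported
-- in the log-based parsing incremental synchronization mode.
SELECT s.name AS schema_name,
       t.name AS table_name
FROM sys.tables t
JOIN sys.schemas s ON t.schema_id = s.schema_id
JOIN sys.indexes i ON t.object_id = i.object_id
WHERE i.index_id = 0;
```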
Hybrid log parsing mode additional limits:

- The DTS incremental synchronization task depends on the CDC component. Ensure the CDC job in the source database is running correctly. Otherwise, the DTS task will fail.
- By default, CDC retains incremental data for 3 days. Adjust the retention period using `exec sys.sp_cdc_change_job @job_type = 'cleanup', @retention = <time>;`, where `<time>` is in minutes. If the average number of daily incremental change SQL statements for a single table exceeds 10 million, set `<time>` to 1440.
- The prerequisite module for a DTS incremental synchronization task enables CDC at the database and table levels. During this process, the source database may be briefly locked.
- A single task supports a maximum of 1,000 tables with CDC enabled. Exceeding this limit may cause latency or instability.
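To check the CDC jobs and apply the retention guidance above, you can run the following in the CDC-enabled source database. A minimal sketch; 1440 minutes corresponds to one day of retention:

```sql
-- Run in the CDC-enabled source database.
EXEC sys.sp_cdc_help_jobs;  -- inspect capture and cleanup job settings
EXEC sys.sp_cdc_change_job  -- set cleanup retention to one day
    @job_type  = N'cleanup',
    @retention = 1440;
```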
Polling and querying CDC instances mode additional limits:

- The database account used by the DTS instance must have permission to enable CDC. Enabling database-level CDC requires the `sysadmin` role. Enabling table-level CDC requires a privileged account. The privileged account in the Azure SQL Database console meets the requirements. For vCore-based databases, all specifications support CDC. For DTU-based databases, the specification must be S3 or higher. The privileged account for Amazon RDS for SQL Server supports enabling database-level CDC for stored procedures. CDC cannot be enabled for tables with clustered columnstore indexes.
- DTS polls the CDC instance of each table to get incremental data. Do not synchronize more than 1,000 tables. Otherwise, the task may experience latency or become unstable.
- By default, CDC retains incremental data for 3 days. Adjust the retention period using `exec sys.sp_cdc_change_job @job_type = 'cleanup', @retention = <time>;`, where `<time>` is in minutes. If daily incremental change SQL statements for a single table exceed 10 million, set `<time>` to 1440.
- Running more than two consecutive add or drop column DDL operations within one minute is not supported. Otherwise, the task may fail.
- Do not modify the CDC instance in the source database. Otherwise, the task may fail or data may be lost.
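For reference, enabling CDC manually looks like the following. This is a minimal sketch with placeholder names (`dbo`, `orders`); in practice the DTS precheck module performs these steps for you, given an account with sufficient permissions:

```sql
-- Enable CDC for the current database, then for one table.
EXEC sys.sp_cdc_enable_db;
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',     -- placeholder schema name
    @source_name   = N'orders',  -- placeholder table name
    @role_name     = NULL;       -- no gating role for change data access
```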
Heartbeat and trigger objects: To ensure accurate latency tracking, DTS creates the dts_cdc_sync_ddl trigger, the dts_sync_progress heartbeat table, and the dts_cdc_ddl_history DDL storage table in the source database. In hybrid incremental synchronization mode, DTS also enables database-level CDC and table-level CDC. The data change volume for tables with CDC enabled should not exceed 1,000 records per second (RPS).
AnalyticDB for MySQL disk usage: If disk usage on a node exceeds 80%, write performance slows, causing DTS task latency. If disk usage exceeds 90%, writes are blocked and the DTS task becomes abnormal. Estimate the required storage space before starting the task.
Off-peak synchronization: Evaluate the performance of both databases before synchronizing. Run the task during off-peak hours to avoid increased database load.
Table fragmentation: Initial full synchronization runs concurrent INSERT operations, causing table fragmentation in the destination database. The tablespace of the destination instance will be larger than that of the source instance after full synchronization completes.
Exclusive write access: Do not write data to the destination database from any source other than DTS during synchronization. Otherwise, data inconsistency will occur. For example, using DMS for online DDL operations while another source writes to the destination database may cause data loss.
Reindexing: Reindexing is not supported for a synchronization instance. This operation can cause the task to fail or lead to data loss. Changes related to the primary key are not supported for tables with CDC enabled.
CDC table limit: If the number of tables with CDC enabled in a single task exceeds the value set for The maximum number of tables for which CDC is enabled that DTS supports, the precheck will fail.
Large field size: If a single field in a table with CDC enabled needs to store more than 64 KB of data, run `exec sp_configure 'max text repl size', -1;` in advance to adjust the source database configuration.
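The configuration change above takes effect only after a `RECONFIGURE`; a minimal sketch:

```sql
-- Remove the 64 KB limit on replicated large fields,
-- then apply the changed server setting.
EXEC sp_configure 'max text repl size', -1;
RECONFIGURE;
```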
DDL write failures: If a DDL statement fails to be written to the destination database, the DTS task continues to run. Check the task logs for the failed statement. For more information, see Query task logs.
Modify synchronized objects: When you use the feature to modify synchronized objects, you cannot remove a database from the task.
AnalyticDB backup conflict: If the destination AnalyticDB for MySQL 3.0 cluster is backing up while the DTS task runs, the task fails.
Multiple sync instances: If multiple sync instances use the same SQL Server database as the source, their incremental data ingestion modules operate independently.
Task failure recovery: If a task fails, DTS support staff will attempt to restore it within eight hours. During restoration, they may restart the task or adjust its parameters. Only DTS task parameters are modified, not database parameters. Parameters that may be adjusted are listed in Modify instance parameters.
SQL Server log format: SQL Server is a commercial, closed-source database. Its log format can cause unavoidable issues during incremental CDC and parsing. Before using DTS for incremental or migration synchronization from SQL Server in a production environment, perform a comprehensive proof of concept (POC) covering all business change types, table schema adjustments, and peak-hour stress tests. Make sure your production business logic is consistent with what you tested during the POC.
Special cases
If the source instance is an RDS for SQL Server instance, DTS creates an rdsdt_dtsacct account in the source instance for data synchronization. Do not delete this account or change its password while the task is running. Otherwise, the task may fail. For more information, see System accounts.
Billing
| Synchronization type | Pricing |
|---|---|
| Schema synchronization and full data synchronization | Free |
| Incremental data synchronization | Charged. For more information, see Billing overview. |
Supported synchronization topologies
- One-way one-to-one synchronization
- One-way one-to-many synchronization
- One-way cascade synchronization
- One-way many-to-one synchronization
For details on each topology, see Synchronization topologies.
Supported SQL operations
| Operation type | SQL statements |
|---|---|
| DML | INSERT, UPDATE, DELETE |
| DDL | CREATE TABLE, ALTER TABLE (ADD COLUMN and DROP COLUMN only), DROP TABLE, CREATE INDEX, DROP INDEX |
DML notes:

- Incremental synchronization of UPDATE statements that modify only large object fields is not supported.
- When data is written to AnalyticDB for MySQL, UPDATE statements are automatically converted to REPLACE INTO statements. If the primary key is updated, they are converted to DELETE+INSERT statements.
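To illustrate the UPDATE conversion described above, using a hypothetical table `t` with primary key `id` and a single value column `val`:

```sql
-- Captured from the source SQL Server database:
UPDATE t SET val = 'new' WHERE id = 1;

-- Written to AnalyticDB for MySQL as an idempotent upsert:
REPLACE INTO t (id, val) VALUES (1, 'new');
```

Because REPLACE INTO is keyed on the primary key, replaying the same change is harmless, which is why updates that change the primary key itself must instead become DELETE+INSERT pairs.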
DDL notes: The following DDL operations are not supported:

- Custom data types
- Transactional DDL (for example, adding multiple columns in a single statement, or mixing DDL and DML in a single statement)
- Online DDL
- Reserved keywords as property names
- DDL executed by system stored procedures
- TRUNCATE TABLE
- Partition definitions or table definitions that contain functions
Create a synchronization task
- Go to the data synchronization task list page in the destination region. You can do this in one of two ways.

  DTS console

  - Log on to the DTS console.
  - In the navigation pane on the left, click Data Synchronization.
  - In the upper-left corner of the page, select the region where the synchronization instance is located.

  DMS console

  Note: The actual steps may vary depending on the mode and layout of the DMS console. For more information, see Simple mode console and Customize the layout and style of the DMS console.

  - Log on to the DMS console.
  - In the top menu bar, choose Data + AI > DTS (DTS) > Data Synchronization.
  - To the right of Data Synchronization Tasks, select the region of the synchronization instance.

- Click Create Task to open the task configuration page.
- Configure the source and destination databases.

  Warning: After selecting the source and destination instances, review the Limits at the top of the page before proceeding. Otherwise, the task may fail or data inconsistency may occur.

  | Category | Configuration | Description |
  |---|---|---|
  | (none) | Task Name | DTS automatically generates a task name. Specify a descriptive name for easy identification. The name does not need to be unique. |
  | Source Database | Select Existing Connection | Select a registered database instance from the drop-down list to auto-fill the database information. If you haven't registered the instance, configure the fields below manually. Note: In the DMS console, this field is Select a DMS database instance. |
  | Source Database | Database Type | Select SQL Server. |
  | Source Database | Connection Type | Select Cloud instance. |
  | Source Database | Instance Region | Select the region where the source RDS SQL Server instance resides. |
  | Source Database | Replicate Data Across Alibaba Cloud Accounts | Select No if using the same Alibaba Cloud account. |
  | Source Database | RDS Instance ID | Select the source RDS SQL Server instance ID. |
  | Source Database | Database Account | Enter the database account for the source instance. This account must have ownership permissions on the objects to synchronize. |
  | Source Database | Database Password | Enter the password for the database account. |
  | Source Database | Encryption | Select Non-encrypted if SSL encryption is not enabled. Select SSL-encrypted if SSL encryption is enabled. DTS trusts the server-side certificate by default. |
  | Destination Database | Select Existing Connection | Select a registered database instance from the drop-down list to auto-fill the database information. If you haven't registered the instance, configure the fields below manually. Note: In the DMS console, this field is Select a DMS database instance. |
  | Destination Database | Database Type | Select AnalyticDB MySQL 3.0. |
  | Destination Database | Connection Type | Select Cloud instance. |
  | Destination Database | Instance Region | Select the region where the destination AnalyticDB for MySQL cluster resides. |
  | Destination Database | Instance ID | Select the destination AnalyticDB for MySQL cluster ID. |
  | Destination Database | Database Account | Enter the database account for the destination cluster. This account must have read and write permissions. |
  | Destination Database | Database Password | Enter the password for the database account. |

- Click Test Connectivity and Proceed at the bottom of the page.

  Add the CIDR blocks of DTS servers to the security settings of both the source and destination databases to allow access. For more information, see Add the IP address whitelist of DTS servers. If either database is self-managed (where Access Method is not Alibaba Cloud Instance), click Test Connectivity in the CIDR Blocks of DTS Servers dialog box as well.
- Configure the task objects.

  - On the Configure Objects page, configure synchronization options.

    | Configuration | Description |
    |---|---|
    | Synchronization Types | DTS always selects Incremental Data Synchronization. By default, also select Schema Synchronization and Full Data Synchronization. After the precheck, DTS initializes the destination cluster with the full data of the selected objects as the baseline for incremental synchronization. Note: Selecting Full Data Synchronization synchronizes both the schema and data for tables that have a CREATE TABLE statement. |
    | Processing Mode of Conflicting Tables | Precheck and Report Errors: checks for tables with the same names in the destination database. If found, an error is reported during the precheck and the task does not start. Note: If you cannot delete or rename the conflicting table, map it to a different name. See Database Table Column Name Mapping. Ignore Errors and Proceed: skips the check for tables with the same name. Warning: This option may cause data inconsistency. During full data synchronization, if a record with the same primary or unique key exists in the destination, DTS retains the destination record and skips the source record. During incremental synchronization, DTS overwrites the destination record. If table schemas are inconsistent, data initialization may fail. Use with caution. |
    | Schema Mapping Mode of Source and Destination Databases | Select a schema mapping mode to map schemas between the source and destination databases. Warning: Tables in different schemas of the source database cannot have the same name. Otherwise, data inconsistency or task failure may occur. |
    | SQL Server Incremental Synchronization Mode | Select based on your source database setup. See the comparison table below. |
    | The maximum number of tables for which CDC is enabled that DTS supports | Set the maximum number of tables for which CDC can be enabled for the current synchronization instance. The default value is 1000. Note: This option is unavailable when SQL Server Incremental Synchronization Mode is set to Incremental Synchronization Based on Logs of Source Database (Heap tables are not supported). |
    | Select DDL and DML for Instance-Level Synchronization | Select SQL operations to synchronize at the instance level. For supported operations, see Supported SQL operations. Note: To select SQL operations at the database or table level, right-click a synchronization object in the Selected Objects box. |
    | Source Objects | In the Source Objects box, click the objects, and then click the move icon to move them to the Selected Objects box. Note: This scenario involves synchronization between heterogeneous databases, so you can select objects only at the table level. Views, triggers, and stored procedures are not synchronized. |
    | Selected Objects | To rename a single object in the destination instance, right-click it in the Selected Objects box. See Map a single object name. To rename multiple objects in bulk, click Batch Edit. See Map multiple object names in bulk. Note: To filter data using a WHERE condition, right-click a table and set the filter condition. See Set filter conditions. Object name mapping may cause dependent objects to fail synchronization. |

    Choosing an incremental synchronization mode:

    | Feature | Log-based parsing (non-heap tables only) | Hybrid log-based parsing | CDC polling |
    |---|---|---|---|
    | Heap tables | Not supported | Supported | Supported |
    | Tables without a primary key | Not supported | Supported | Supported |
    | Compressed tables | Not supported | Supported | — |
    | Tables with computed columns | Not supported | Supported | — |
    | Source database intrusiveness | Non-intrusive | Creates trigger, heartbeat table, DDL storage table; enables CDC | Enables CDC at database and table levels |
    | DDL support | Limited | Full DDL statements; wide range of DDL scenarios | Limited |
    | Supported source types | RDS SQL Server (non-web) | RDS SQL Server | Amazon RDS for SQL Server, Azure SQL Database, Azure SQL Managed Instance, Azure SQL Server on Virtual Machine, Google Cloud SQL for SQL Server |
    | Incremental data latency | Low | Low | ~10 seconds |

  - Click Next: Advanced Settings.

    | Configuration | Description |
    |---|---|
    | Dedicated Cluster for Task Scheduling | By default, DTS uses a shared cluster. For greater stability, purchase a dedicated cluster. For more information, see What is a DTS dedicated cluster?. |
    | Retry Time for Failed Connections | If the connection to the source or destination fails after the task starts, DTS retries immediately. The default retry duration is 720 minutes. Set a value from 10 to 1,440 minutes. A duration of 30 minutes or more is recommended. If the connection is restored within this period, the task resumes. Note: If multiple DTS instances share a source or destination, DTS uses the shortest configured retry duration across all instances. DTS charges for task runtime during connection retries. |
    | Retry Time for Other Issues | For non-connection issues (for example, DDL or DML execution errors), DTS retries immediately. The default retry duration is 10 minutes. Set a value from 1 to 1,440 minutes. A duration of 10 minutes or more is recommended. Important: This value must be less than Retry Time for Failed Connections. |
    | Enable Throttling for Full Data Synchronization | Limit the migration rate by setting Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s) to reduce load on both databases. Note: This option is available only if Full Data Synchronization is selected. You can also adjust the rate while the instance is running. |
    | Enable Throttling for Incremental Data Synchronization | Limit the incremental synchronization rate by setting RPS of Incremental Data Synchronization and Data synchronization speed for incremental synchronization (MB/s). |
    | Environment Tag | Select an environment label to identify the instance. |
    | Configure ETL | Choose whether to enable the extract, transform, and load (ETL) feature. Select Yes to enable ETL and enter data processing statements in the code editor. See Configure ETL in a data migration or data synchronization task. Select No to disable ETL. For more information on ETL, see What is ETL?. |
    | Monitoring and Alerting | Select Yes to configure alerts for task failures or latency exceeding a threshold. Set the alert threshold and notifications. See Configure monitoring and alerting during task configuration. Select No for no alerts. |
- Click Data Verification to configure a data verification task. For details, see Configure data verification.

- (Optional) Click Next: Configure Database and Table Fields to set the Type, Primary Key Column, Distribution Key, and partition key information (Partition Key, Partitioning Rules, and Partition Lifecycle) for tables in the destination database.

  Note: This step is available only if Schema Synchronization is selected in Synchronization Types. Set Definition Status to All to make modifications. The Primary Key Column supports composite primary keys consisting of multiple columns. Select one or more columns from the Primary Key Column to serve as the Distribution Key and Partition Key. For more information, see CREATE TABLE.

- Save the task and run the precheck.

  - To view the API parameters for this instance configuration, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters in the tooltip.

  - Click Next: Save Task Settings and Precheck at the bottom of the page.

    Before a synchronization task starts, DTS performs a precheck. The task starts only if the precheck passes. If the precheck fails, click View Details next to the failed item, fix the issue as prompted, and rerun the precheck. For non-ignorable warnings, fix the issue and run the precheck again. For ignorable warnings, click Confirm Alert Details, then Ignore, then OK, and finally Precheck Again to skip the warning. Ignoring precheck warnings may lead to data inconsistencies and other business risks. Proceed with caution.
- Purchase the instance.

  - When the Success Rate reaches 100%, click Next: Purchase Instance.

  - On the Purchase page, select the billing method and instance specifications.

    | Category | Parameter | Description |
    |---|---|---|
    | New Instance Class | Billing Method | Subscription: pay upfront for a fixed duration. Cost-effective for long-term, continuous tasks. Pay-as-you-go: billed hourly for actual usage. Suitable for short-term or test tasks. |
    | New Instance Class | Resource Group Settings | The resource group for the instance. Defaults to the default resource group. For more information, see What is Resource Management?. |
    | New Instance Class | Instance Class | Different specifications affect the synchronization rate. Select based on your business requirements. For more information, see Data synchronization link specifications. |
    | New Instance Class | Subscription Duration | Available only in subscription mode. Monthly options range from 1 to 9 months. Yearly options include 1, 2, 3, or 5 years. |

  - Read and select the checkbox for Data Transmission Service (Pay-as-you-go) Service Terms.

  - Click Buy and Start, then click OK in the confirmation dialog box.

  - Monitor the task progress on the data synchronization page.