
Data Transmission Service: Synchronize data from a self-managed SQL Server database to an RDS PostgreSQL instance

Last Updated: Mar 30, 2026

Data Transmission Service (DTS) synchronizes data from a self-managed SQL Server database to an ApsaraDB RDS for PostgreSQL instance, supporting schema synchronization, full data synchronization, and incremental data synchronization. This topic describes the setup for a SQL Server database hosted on an Elastic Compute Service (ECS) instance.

Important

This feature is available only in the China (Qingdao), China (Beijing), and China (Hong Kong) regions.

Prerequisites

Before you begin, confirm that the source and destination database versions are supported. For details, see Overview of data synchronization scenarios.

If any of the following conditions apply to the source instance, split the task into multiple smaller tasks:

  • Data spans multiple schemas.

  • A single database backs up its logs more than once per hour.

  • A single database executes more than 100 DDL statements per hour.

  • Logs are written at more than 20 MB/s for a single database.

  • Change data capture (CDC) must be enabled for more than 1,000 tables.

Billing

Synchronization type Fee
Schema synchronization and full data synchronization Free
Incremental data synchronization Charged. See Billing overview.

SQL operations that can be synchronized

Operation type SQL operations
DML INSERT, UPDATE, and DELETE

Note

If an UPDATE modifies only large object (LOB) columns, DTS does not synchronize the operation.

Database account permissions

Database Required permissions How to create and authorize
Source self-managed SQL Server sysadmin CREATE USER and User Permission Management
Destination ApsaraDB RDS for PostgreSQL instance Owner of the destination database Create an account and Create a database

Limitations

Source database

  • Tables must have a PRIMARY KEY or UNIQUE constraint with all fields unique. Without this, the destination database may contain duplicate records.

  • When selecting table-level objects with destination editing (such as renaming tables or columns), a single task can synchronize up to 5,000 tables. For more than 5,000 tables, configure multiple tasks or synchronize at the database level.

  • A single task can synchronize from up to 10 databases. For more than 10 databases, configure multiple tasks to avoid performance and stability issues.

  • DTS uses the fn_log function to read source logs. Do not clear source logs before the task completes, or the task may fail.

  • Log backup requirements:

    • Enable data logging. Set the backup mode to Full and perform a full physical backup.

    • For incremental synchronization only: retain logs for more than 24 hours.

    • For full and incremental synchronization combined: retain logs for at least 7 days. After full synchronization completes, reduce the retention period to more than 24 hours. If these requirements are not met, the DTS service level agreement (SLA) may not be guaranteed.

  • For CDC to be enabled on source tables, the following conditions must all be met (otherwise the precheck fails):

    • The srvname field in sys.sysservers must match the return value of the SERVERPROPERTY function.

    • Self-managed SQL Server: the database owner must be the sa user.

    • ApsaraDB RDS for SQL Server: the database owner must be the sqlsa user.

    • Enterprise edition: SQL Server 2008 or later is required.

    • Standard edition: SQL Server 2016 SP1 or later is required.

    • SQL Server 2017 (Standard or Enterprise edition): upgrade to a newer version.

  • Read-only source instances: DDL operations cannot be synchronized.

  • Azure SQL Database source: a single task can synchronize data from only one database.

  • ApsaraDB RDS for SQL Server source: disable Transparent Data Encryption (TDE) for task stability. See the Disable TDE section in "Configure TDE".

  • During full data synchronization, enable the READ_COMMITTED_SNAPSHOT transaction isolation level in the source database to prevent shared locks from interfering with data writes. Without this setting, data inconsistency or instance failures may occur, and such issues are not covered by the DTS SLA.

  • During schema synchronization and full data synchronization, do not execute DDL statements. Doing so causes the task to fail.

  • ApsaraDB RDS for SQL Server Web edition: set SQL Server Incremental Synchronization Mode to Incremental Synchronization Based on Logs of Source Database (Heap tables are not supported) when configuring the task.
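Two of the requirements above can be checked or applied directly with T-SQL. The following is a sketch; mytestdata is a placeholder database name:

```sql
-- Check that srvname in sys.sysservers matches the return value of
-- SERVERPROPERTY; the two values must be identical for CDC to be enabled.
SELECT srvname FROM sys.sysservers WHERE srvid = 0;
SELECT SERVERPROPERTY('ServerName') AS server_name;

-- Enable the READ_COMMITTED_SNAPSHOT isolation level before full data
-- synchronization to prevent shared locks from interfering with writes.
ALTER DATABASE mytestdata SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
```

Run the ALTER DATABASE statement during off-peak hours: WITH ROLLBACK IMMEDIATE rolls back open transactions in the database.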

Other limits

  • Unsupported data types: CURSOR, ROWVERSION, SQL_VARIANT, HIERARCHYID, POLYGON, GEOMETRY, GEOGRAPHY, and user-defined types created with CREATE TYPE. Because the source and destination databases are heterogeneous, data types do not map one-to-one. Evaluate the impact of type conversion on your workload. For the type mapping table, see Data type mappings for schema synchronization.

  • Only tables can be selected as synchronization objects, and all selected tables must be in the same schema. Otherwise, data inconsistency or task failure may occur.

  • If you use Incremental Synchronization Based on Logs of Source Database (Heap tables are not supported) mode, tables must have clustered indexes that contain primary key columns. The tables cannot be heap tables, tables without primary keys, compressed tables, or tables with computed columns.

  • If the task involves incremental data synchronization, do not perform reindexing operations. Doing so may cause the task to fail and data loss to occur. DTS cannot synchronize DDL operations related to the primary key of a CDC-enabled table.

  • If the number of tables for which CDC must be enabled exceeds the value of The maximum number of tables for which CDC is enabled that DTS supports, the precheck fails.

  • If a single field written to a CDC-enabled table exceeds 64 KB, run the following statements to remove the size limit (RECONFIGURE applies the change):

    exec sp_configure 'max text repl size', -1;
    GO
    RECONFIGURE;
    GO
  • When modifying synchronization objects, databases cannot be removed.

  • The incremental data collection modules of multiple synchronization tasks sharing the same source SQL Server database operate independently.

  • If a DTS task fails, DTS support restores it within 8 hours. During restoration, the task may be restarted and task parameters may be modified. Database parameters are not modified.

Hybrid log-based parsing mode

  • DTS uses the CDC component to synchronize incremental data. Make sure that the CDC job in the source database runs as expected. Otherwise, the DTS task fails.

  • DTS creates the trigger dts_cdc_sync_ddl, the heartbeat table dts_sync_progress, and the DDL history table dts_cdc_ddl_history in the source database, and enables CDC for the source database and specific tables.

  • Do not execute SELECT INTO, TRUNCATE, or RENAME COLUMN on CDC-enabled tables. Triggers created by DTS cannot be deleted manually.

  • Do not add or remove columns more than once within 10 minutes. For example, running the following two statements within 10 minutes causes a task error:

    ALTER TABLE test_table DROP COLUMN Flag;
    ALTER TABLE test_table ADD Remark nvarchar(50) not null default('');
  • We recommend that you set the maximum number of records per second to 1,000 for the tables for which CDC is enabled in the source database.

  • The CDC component stores incremental data for 3 days by default. To adjust the retention period, run:

    exec sys.sp_cdc_change_job @job_type = 'cleanup', @retention = <time>;

    <time> is in minutes. If the average daily volume of incremental data exceeds 10 million records, set <time> to 1440 (one day).

  • A single task supports up to 1,000 CDC-enabled tables. Exceeding this limit may cause delays or instability.

  • When the precheck module of the task enables database-level and table-level CDC in the source database, SQL Server briefly holds table-level locks for a few seconds.
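To confirm that the CDC jobs in the source database run as expected, checks like the following can be used (a sketch; mytestdata is a placeholder database name, and these checks are not part of the DTS setup itself):

```sql
-- Confirm that CDC is enabled for the source database.
SELECT name, is_cdc_enabled FROM sys.databases WHERE name = 'mytestdata';

-- List the CDC capture and cleanup jobs, including the current
-- retention period (in minutes) used by the cleanup job.
EXEC sys.sp_cdc_help_jobs;
```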

CDC-based incremental synchronization (polling mode)

  • The source database account must have permission to enable CDC. To enable database-level CDC, use an account with the sysadmin role. To enable table-level CDC, use a privileged account.

    • Azure SQL Database (vCore model): all databases support CDC. For DTU-model databases, a service tier of S3 or higher is required.

    • Amazon RDS for SQL Server: a privileged account has the required permissions; CDC can be enabled for stored procedures at the database level.

    • CDC cannot be enabled for clustered columnstore indexes.

  • Incremental data synchronization has a latency of 10 seconds.

  • A single task supports up to 1,000 tables. Exceeding this limit may cause delays or instability.

  • Do not add or remove columns more than twice within one minute, or the task may fail.

  • Do not modify CDC instances in the source database during synchronization. Doing so may cause the task to fail or data loss to occur.

  • The CDC component stores incremental data for 3 days by default. To adjust the retention period, run:

    exec sys.sp_cdc_change_job @job_type = 'cleanup', @retention = <time>;

    <time> is in minutes. If the average daily volume of incremental data exceeds 10 million records, set <time> to 1440 (one day).

  • When the precheck module of the task enables database-level and table-level CDC in the source database, SQL Server briefly holds table-level locks for a few seconds.
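DTS enables CDC automatically in this mode, but the underlying steps can be illustrated as follows (a sketch; dbo.test_table is a placeholder table name):

```sql
-- Enable CDC at the database level (requires the sysadmin role,
-- or a privileged account on managed services such as Amazon RDS).
EXEC sys.sp_cdc_enable_db;

-- Enable CDC for a single table; @role_name = NULL means access to the
-- change data is not gated by a database role.
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'test_table',
    @role_name     = NULL;
```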

Foreign keys

  • During schema synchronization, DTS synchronizes foreign keys from the source to the destination.

  • During full data synchronization and incremental data synchronization, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. If you run cascade updates or deletes on the source during synchronization, data inconsistency may occur.
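On a PostgreSQL destination, session-level disabling of constraint enforcement is commonly done with the session_replication_role parameter. The following sketch illustrates the mechanism; it is not necessarily DTS's exact implementation:

```sql
-- Skip trigger-based enforcement, including foreign key checks,
-- for the current session only; other sessions are unaffected.
SET session_replication_role = replica;

-- ... bulk-load or replay rows here ...

-- Restore normal enforcement for the session.
SET session_replication_role = DEFAULT;
```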

Special cases

If the source is an ApsaraDB RDS for SQL Server instance, DTS automatically creates an account named rdsdt_dtsacct on the instance. Do not delete this account or change its password while the synchronization task is running. For more information, see System accounts.

Storage impact

After initial full data synchronization completes, the destination tables may use more storage than the source. Concurrent INSERT operations during full synchronization cause table fragmentation in the destination.
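To quantify the extra storage after full synchronization, table sizes on the PostgreSQL destination can be inspected as follows (a sketch; public.test_table is a placeholder table name):

```sql
-- Total on-disk size of a destination table,
-- including indexes and TOAST data.
SELECT pg_size_pretty(pg_total_relation_size('public.test_table'));
```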

During initial full data synchronization, DTS uses some read and write resources of the source and destination databases, which may increase the database load. Evaluate the performance of the source and destination databases before you perform data synchronization. We recommend that you perform data synchronization during off-peak hours, for example, when the CPU load of the source and destination databases is below 30%.

Choose an incremental synchronization mode

DTS provides three modes for SQL Server incremental synchronization. Select a mode based on your source database type and table characteristics.

Quick selection guide:

  • Source is a third-party cloud SQL Server (Amazon RDS, Azure SQL Database, Azure SQL Managed Instance, Azure SQL Server on Virtual Machine, or Google Cloud SQL for SQL Server): use Polling and querying CDC instances.

  • Source tables include heap tables, tables without primary keys, compressed tables, or tables with computed columns: use Hybrid log-based parsing. This mode modifies the source database (creates objects and enables CDC).

  • All tables have clustered indexes with primary key columns and you want zero modifications to the source database: use Log-based parsing (non-heap tables only).

Mode Supports heap tables Modifies source database Supported sources
Hybrid log-based parsing Yes Yes (creates trigger, heartbeat table, DDL history table; enables CDC) Self-managed SQL Server, ApsaraDB RDS for SQL Server
Log-based parsing (non-heap tables only) No No Self-managed SQL Server, ApsaraDB RDS for SQL Server
Polling and querying CDC instances Yes Yes (enables CDC) Amazon RDS for SQL Server, Azure SQL Database, Azure SQL Managed Instance, Azure SQL Server on Virtual Machine, Google Cloud SQL for SQL Server

Preparations

Before configuring the task, set the source SQL Server database to full recovery mode and create a log backup.

Important

If synchronizing from multiple databases, repeat the following steps for each database.

  1. Set the recovery model to full. Run the following statement on the source database, or use SQL Server Management Studio (SSMS). For details, see View or Change the Recovery Model of a Database (SQL Server).

    use master;
    GO
    ALTER DATABASE <database_name> SET RECOVERY FULL WITH ROLLBACK IMMEDIATE;
    GO

    Replace <database_name> with the name of the source database. For example:

    use master;
    GO
    ALTER DATABASE mytestdata SET RECOVERY FULL WITH ROLLBACK IMMEDIATE;
    GO
  2. Create a full backup of the source database. Skip this step if a full backup already exists.

    Placeholder Description
    <database_name> Name of the source database
    <physical_backup_device_name> Storage path and filename of the backup file
    BACKUP DATABASE <database_name> TO DISK='<physical_backup_device_name>';
    GO

    Example:

    BACKUP DATABASE mytestdata TO DISK='D:\backup\dbdata.bak';
    GO
  3. Create a log backup for the source database.

    BACKUP LOG <database_name> TO DISK='<physical_backup_device_name>' WITH init;
    GO

    Example:

    BACKUP LOG mytestdata TO DISK='D:\backup\dblog.bak' WITH init;
    GO
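After completing the steps above, the recovery model and backup history can be verified as follows (a sketch; mytestdata is the example database from the steps):

```sql
-- The recovery model must report FULL.
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'mytestdata';

-- Most recent backups: type 'D' is a full backup, 'L' is a log backup.
SELECT TOP 5 database_name, type, backup_finish_date
FROM msdb.dbo.backupset
WHERE database_name = 'mytestdata'
ORDER BY backup_finish_date DESC;
```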

Configure the synchronization task

Step 1: Open the Data Synchronization page

Use one of the following methods:

DTS console

  1. Log on to the DTS console.

  2. In the left-side navigation pane, click Data Synchronization.

  3. In the upper-left corner, select the region where the synchronization task resides.

DMS console

The steps may vary based on the DMS console mode and layout. See Simple mode and Customize the layout and style of the DMS console.
  1. Log on to the DMS console.

  2. In the top navigation bar, move the pointer over Data + AI and choose DTS (DTS) > Data Synchronization.

  3. From the drop-down list to the right of Data Synchronization Tasks, select the region where the instance resides.

Step 2: Configure source and destination databases

Click Create Task. On the task configuration page, set the following parameters.

Task name

Parameter Description
Task Name DTS generates a name automatically. Specify a descriptive name to identify the task. A unique name is not required.

Source database

Parameter Description
Select Existing Connection If the source is registered with DTS, select it from the list. DTS pre-fills the remaining fields. Otherwise, configure the fields below. In the DMS console, select the instance from Select a DMS database instance.
Database Type Select SQL Server.
Access Method Select Self-managed Database on ECS. For other access types, see Preparation overview.
Instance Region Region where the ECS instance hosting the source SQL Server database resides.
ECS Instance ID ID of the ECS instance hosting the source SQL Server database.
Port Number Service port of the source SQL Server database. Default: 1433.
Database Account Account for the source SQL Server database. For required permissions, see Database account permissions.
Database Password Password for the database account.
Encryption Select Non-encrypted if SSL encryption is not enabled on the source. Select SSL-encrypted if it is. DTS trusts the server certificate by default.

Destination database

Parameter Description
Select Existing Connection If the destination is registered with DTS, select it from the list. DTS pre-fills the remaining fields. Otherwise, configure the fields below.
Database Type Select PostgreSQL.
Access Method Select Alibaba Cloud Instance.
Instance Region Region where the destination RDS PostgreSQL instance resides.
Instance ID ID of the destination ApsaraDB RDS for PostgreSQL instance.
Database Name Name of the database in the destination instance that receives the synchronized data.
Database Account Account for the destination instance. For required permissions, see Database account permissions.
Database Password Password for the database account.
Encryption Specifies whether to encrypt the connection. For SSL-encrypted, upload CA Certificate, Client Certificate, and Private Key of Client Certificate as needed, then specify Private Key Password of Client Certificate. For a self-managed PostgreSQL destination with SSL encryption, uploading CA Certificate is required. For SSL configuration on an ApsaraDB RDS for PostgreSQL instance, see SSL encryption.

Step 3: Test connectivity

At the bottom of the page, click Test Connectivity and Proceed. In the CIDR Blocks of DTS Servers dialog box, click Test Connectivity.

Make sure that DTS server CIDR blocks are added to the security settings of both the source and destination databases. See Add the CIDR blocks of DTS servers.

Step 4: Configure synchronization objects

In the Configure Objects step, set the following parameters.

Parameter Description
Synchronization Types By default, Incremental Data Synchronization is selected. Also select Schema Synchronization and Full Data Synchronization. After the precheck, DTS synchronizes historical data from the source to establish the baseline for incremental synchronization.
SQL Server Incremental Synchronization Mode Select a mode based on your source and table characteristics. See Choose an incremental synchronization mode.
The maximum number of tables for which CDC is enabled that DTS supports Maximum number of tables for which the synchronization instance can enable CDC. Default: 1,000. Not available when SQL Server Incremental Synchronization Mode is set to Incremental Synchronization Based on Logs of Source Database (Heap tables are not supported).
Processing Mode of Conflicting Tables Precheck and Report Errors: the precheck verifies that destination tables do not share names with source tables. If identical names exist, the precheck fails. To resolve, use object name mapping to rename destination tables. See Map object names. Ignore Errors and Proceed: skips the identical-name precheck. If schemas match and a record in the destination has the same primary key or unique key as a source record: during full synchronization, the existing destination record is kept; during incremental synchronization, the destination record is overwritten. If schemas differ, initialization may fail. Proceed with caution.
Capitalization of Object Names in Destination Instance Controls the capitalization of database, table, and column names in the destination. Default: DTS default policy. See Specify the capitalization of object names in the destination instance.
Source Objects In the Source Objects box, click a table, then click the arrow icon to move it to Selected Objects. All selected tables must be in the same schema.
Selected Objects By default, source schema and table names are used as destination names. To rename a destination schema, right-click the schema in Selected Objects and edit Schema Name. To rename tables or columns, set WHERE filter conditions, or select SQL operations for incremental synchronization, right-click a table in Selected Objects. For details, see Specify filter conditions and Map object names.

Step 5: Configure advanced settings

Click Next: Advanced Settings.

Parameter Description
Dedicated Cluster for Task Scheduling By default, DTS schedules the task to the shared cluster. For higher stability, purchase a dedicated cluster. See What is a DTS dedicated cluster.
Retry Time for Failed Connections How long DTS retries failed connections after the task starts. Valid values: 10–1440 minutes. Default: 720. Set to more than 30 minutes. If multiple tasks share the same source or destination, the shortest retry time takes precedence. DTS charges for the instance during retry periods.
Retry Time for Other Issues How long DTS retries failed DDL or DML operations. Valid values: 1–1440 minutes. Default: 10. Set to more than 10 minutes. Must be less than Retry Time for Failed Connections.
Enable Throttling for Full Data Synchronization Limits read/write resource usage during full synchronization. Configure QPS to the source database, RPS of Full Data Migration, and Data migration speed (MB/s). Available only when Full Data Synchronization is selected.
Enable Throttling for Incremental Data Synchronization Limits resource usage during incremental synchronization. Configure RPS of Incremental Data Synchronization and Data synchronization speed (MB/s).
Environment Tag Optional. Tag the instance with an environment label.
Configure ETL Whether to enable the extract, transform, and load (ETL) feature. Select Yes to write data processing statements in the code editor. See Configure ETL in a data migration or data synchronization task.
Monitoring and Alerting Whether to configure alerts. Select Yes to set an alert threshold and notification contacts. Alerts are sent when the task fails or synchronization latency exceeds the threshold. See Configure monitoring and alerting when you create a DTS task.

Step 6: Configure data verification (optional)

Click Next Step: Data Verification. For details, see Configure a data verification task.

Step 7: Run the precheck

Click Next: Save Task Settings and Precheck.

  • To preview the API parameters for configuring this task, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.

  • The task must pass the precheck before it can start.

  • If the precheck fails, click View Details next to each failed item, fix the issues, and rerun the precheck.

  • If an alert is triggered: for alerts that cannot be ignored, fix the issue and rerun. For ignorable alerts, click Confirm Alert Details, then Ignore in the details dialog, click OK, and then click Precheck Again.

Step 8: Purchase the instance

  1. Wait until Success Rate reaches 100%, then click Next: Purchase Instance.

  2. On the purchase page, configure the following parameters.

    Parameter Description
    Billing Method Subscription: pay upfront for a period. More cost-effective for long-term use. Options: 1–9 months, 1 year, 2 years, 3 years, or 5 years. Pay-as-you-go: billed hourly. Suitable for short-term use. Release the instance when no longer needed to avoid ongoing charges.
    Resource Group Settings Resource group for the synchronization instance. Default: default resource group. See What is Resource Management?
    Instance Class Determines synchronization speed. Select based on workload requirements. See Instance classes of data synchronization instances.
  3. Read and select Data Transmission Service (Pay-as-you-go) Service Terms.

  4. Click Buy and Start. In the dialog box, click OK.

The task appears in the task list. Monitor its progress from there.

What's next