
Data Transmission Service:Synchronize data from RDS for SQL Server to AnalyticDB for PostgreSQL

Last Updated:Mar 30, 2026

When you need to feed SQL Server transactional data into AnalyticDB for PostgreSQL for real-time analytics or large-scale data processing, Data Transmission Service (DTS) provides a managed pipeline that handles schema conversion, full data loading, and ongoing incremental replication — without requiring you to build custom ETL infrastructure.

Prerequisites

Before you begin, ensure that you have:

  • A source ApsaraDB RDS for SQL Server instance running a version supported by DTS. For supported versions, see Overview of data synchronization scenarios

  • A destination AnalyticDB for PostgreSQL instance with available storage greater than the total data size of the source instance. For setup instructions, see Create an instance

  • A database created in the destination instance to receive the synchronized data. For instructions, see the CREATE DATABASE section of the SQL syntax topic

  • (Recommended) Split the synchronization task into multiple subtasks if any of the following apply to the source instance:

    • More than 10 databases

    • A single database backs up logs more frequently than once per hour

    • A single database executes more than 100 DDL statements per hour

    • A single database generates logs at more than 20 MB/s

    • Change data capture (CDC) must be enabled for more than 1,000 tables

Billing

  • Schema synchronization and full data synchronization: Free.

  • Incremental data synchronization: Charged. For details, see Billing overview.

SQL operations that can be synchronized

  • DML: INSERT, UPDATE, DELETE

  • DDL: CREATE TABLE (partitioned tables and tables containing functions are excluded), ADD COLUMN, DROP COLUMN, DROP TABLE, CREATE INDEX, DROP INDEX

DTS does not synchronize DDL operations that reference user-defined types, or transactional DDL operations.

Required permissions

  • Source ApsaraDB RDS for SQL Server: Owner permissions on the objects to be synchronized. For instructions, see Create a privileged account or standard account and Modify account permissions.

  • Destination AnalyticDB for PostgreSQL: LOGIN; SELECT, CREATE, INSERT, UPDATE, and DELETE on destination tables; CONNECT and CREATE on the destination database; CREATE on destination schemas; and COPY (memory-based batch copy). Alternatively, use the initial database account or an account with the RDS_SUPERUSER permission. For instructions, see Create and manage database accounts and Manage users and permissions.
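As an illustration, the destination-side grants might be issued as follows. This is a sketch only: the account name dts_user, database name dtsdb, schema public, and password are placeholder assumptions, not values from this topic.

```sql
-- Hypothetical account setup on the destination AnalyticDB for PostgreSQL instance.
-- dts_user, dtsdb, and the password are placeholders; adjust to your environment.
CREATE USER dts_user WITH PASSWORD '********';
GRANT CONNECT, CREATE ON DATABASE dtsdb TO dts_user;
GRANT CREATE ON SCHEMA public TO dts_user;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO dts_user;
```

Alternatively, use the initial database account, which already holds these privileges.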

Data type mappings

ApsaraDB RDS for SQL Server and AnalyticDB for PostgreSQL use different type systems with no direct one-to-one mapping. During schema synchronization, DTS converts source types to destination types automatically. Review the mapping table before you start to understand potential precision loss or format changes. For the full mapping table, see Data type mappings for initial schema synchronization.

Limitations

Source database limitations

  • Tables to be synchronized must have a PRIMARY KEY or UNIQUE constraint, and the constrained columns must contain unique values. Without such a constraint, the destination database may contain duplicate records. To find tables that do not meet this requirement, run the following query on the source database:

    USE [your_database_name];
    SELECT SCHEMA_NAME(schema_id) AS schema_name, name AS table_name
    FROM sys.tables
    WHERE OBJECTPROPERTY(object_id, 'TableHasPrimaryKey') = 0
      AND OBJECTPROPERTY(object_id, 'TableHasUniqueCnst') = 0
    ORDER BY schema_name, table_name;
  • When synchronizing tables with destination-side renaming, a single task supports up to 5,000 tables. For more than 5,000 tables, configure multiple tasks or synchronize at the database level.

  • A single task supports up to 10 source databases. For more than 10 databases, configure multiple tasks.

  • DTS uses the fn_log function to read source database logs. This function has performance limits — do not clear the logs before the task completes.

  • Data logging requirements:

    • Set backup mode to Full and perform full physical backup.

    • For incremental-only synchronization: retain logs for more than 24 hours.

    • For full plus incremental synchronization: retain logs for at least 7 days. After full synchronization completes, you can reduce the retention period to more than 24 hours.

    • Failure to meet these requirements may cause the task to fail, or result in data inconsistency or data loss, which are not covered by the DTS service level agreement (SLA).
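To check whether a source database uses the required Full backup mode, you can query its recovery model; a sketch, assuming the database is named your_database_name:

```sql
-- FULL recovery is required for log-based synchronization
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'your_database_name';
```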

  • For CDC to be enabled on tables, all of the following conditions must be met:

    • The srvname field in the sys.sysservers view must match the return value of the SERVERPROPERTY function.

    • Self-managed SQL Server: the database owner must be the sa user. ApsaraDB RDS for SQL Server: the database owner must be the sqlsa user.

    • Enterprise edition: SQL Server 2008 or later.

    • Standard edition: SQL Server 2016 SP1 or later.

    • Standard or Enterprise edition running SQL Server 2017: update to a newer version.
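The first condition can be checked with a query like the following; if the two values differ, CDC cannot be enabled until the server name registration is corrected:

```sql
-- The registered server name (srvid = 0 is the local server entry)
SELECT srvname FROM sys.sysservers WHERE srvid = 0;
-- The actual server name reported by the instance
SELECT SERVERPROPERTY('ServerName') AS server_name;
```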

  • Read-only source instances: DDL operations cannot be synchronized.

  • ApsaraDB RDS for SQL Server instances running the Web edition: set SQL Server Incremental Synchronization Mode to Incremental Synchronization Based on Logs of Source Database (Heap tables are not supported) when configuring the task.

  • Azure SQL Database sources: a single task can synchronize from only one database.

  • ApsaraDB RDS for SQL Server sources: disable Transparent Data Encryption (TDE) to ensure task stability. For instructions, see Disable TDE.

  • During schema synchronization and full data synchronization: do not execute DDL statements that change database or table schemas.

  • During full data synchronization: use the READ_COMMITTED_SNAPSHOT transaction isolation level in the source database to prevent shared locks from blocking data writes. Failure to set this may cause data inconsistency or instance failures, which are not covered by the DTS SLA.
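The isolation level can be checked and enabled with statements like the following; your_database_name is a placeholder, and the ALTER DATABASE statement requires that no other sessions are connected to the database:

```sql
-- 1 in is_read_committed_snapshot_on means the level is already enabled
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'your_database_name';

-- Enable READ_COMMITTED_SNAPSHOT during a maintenance window
ALTER DATABASE [your_database_name] SET READ_COMMITTED_SNAPSHOT ON;
```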

DTS automatically creates an account named rdsdt_dtsacct on the source ApsaraDB RDS for SQL Server instance. Do not delete this account or change its password while the task is running. For details, see System accounts.

Schema synchronization limitations

  • Supported object types for schema synchronization: schema, table, view, function, procedure.

    Warning

    SQL Server and AnalyticDB for PostgreSQL use different type systems. Schema synchronization may fail or cause data loss for some types. Review Data type mappings for initial schema synchronization before proceeding.

  • Object types not supported for schema synchronization: assemblies, service brokers, full-text indexes, full-text catalogs, distributed schemas, distributed functions, CLR stored procedures, CLR scalar-valued functions, CLR table-valued functions, internal tables, system tables, and aggregate functions.

  • Data types not supported: CURSOR, ROWVERSION, SQL_VARIANT, HIERARCHYID, POLYGON, GEOMETRY, GEOGRAPHY, and custom types created with CREATE TYPE.

  • Tables with computed columns are not synchronized.
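Before starting the task, you can scan a source database for columns that schema synchronization will reject. The following is a sketch that flags the unsupported built-in types listed above (ROWVERSION appears as timestamp in sys.types), user-defined types, and computed columns:

```sql
-- Run in each source database to find columns DTS cannot synchronize
SELECT SCHEMA_NAME(o.schema_id) AS schema_name,
       o.name AS table_name,
       c.name AS column_name,
       t.name AS type_name
FROM sys.columns c
JOIN sys.objects o ON c.object_id = o.object_id
JOIN sys.types t ON c.user_type_id = t.user_type_id
WHERE o.type = 'U'
  AND (t.name IN ('timestamp', 'sql_variant', 'hierarchyid', 'geometry', 'geography')
       OR t.is_user_defined = 1
       OR c.is_computed = 1)
ORDER BY schema_name, table_name, c.column_id;
```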

  • During schema synchronization, DTS synchronizes foreign keys from source to destination. During full and incremental data synchronization, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. Cascade updates and deletes on the source during synchronization may cause data inconsistency.

Incremental synchronization limitations

DTS offers three incremental synchronization modes for SQL Server. Choose a mode based on your table types and source database environment.

  • Hybrid log-based parsing (Log-based Parsing for Non-heap Tables and CDC-based Incremental Synchronization for Heap Tables)

    • Best for: heap tables, tables without primary keys, compressed tables, and tables with computed columns.

    • Limitations: DTS creates the dts_cdc_sync_ddl, dts_sync_progress, and dts_cdc_ddl_history tables in the source database and enables CDC. You cannot execute SELECT INTO, TRUNCATE, or RENAME COLUMN on CDC-enabled tables. DTS-created triggers cannot be manually deleted.

  • Log-based parsing (Incremental Synchronization Based on Logs of Source Database)

    • Best for: standard tables with clustered indexes and primary keys.

    • Limitations: Does not modify source database settings. Does not support heap tables, tables without primary keys, compressed tables, or tables with computed columns. Tables must have clustered indexes containing primary key columns.

  • Polling CDC (Polling and querying CDC instances for incremental synchronization)

    • Best for: Amazon RDS for SQL Server, Azure SQL Database, Azure SQL Managed Instance, Azure SQL Server on Virtual Machine, and Google Cloud SQL for SQL Server.

    • Limitations: Requires an account with permission to enable CDC. Incremental synchronization has a latency of about 10 seconds. Performance may degrade when synchronizing multiple tables across multiple databases.

Limits common to all incremental synchronization modes:

  • Do not write data from other sources to the destination during synchronization — this causes data inconsistency.

  • Reindexing operations are not allowed during incremental synchronization. Performing reindexing may cause the task to fail and result in data loss.

  • DTS cannot synchronize DDL operations related to primary key changes on CDC-enabled tables.

  • When modifying synchronized objects, you cannot remove databases.

  • Incremental data collection modules for multiple DTS instances sharing a source SQL Server database operate independently.

Hybrid log-based parsing mode — additional limits:

  • The CDC job in the source database must run as expected. Otherwise, the DTS task fails.

  • Multiple column additions or removals in the source database within 10 minutes cause an error. For example, executing the following statements within 10 minutes triggers a task failure:

    ALTER TABLE test_table DROP COLUMN Flag;
    ALTER TABLE test_table ADD Remark nvarchar(50) not null default('');
  • The CDC component retains incremental data for 3 days by default. To change the retention period, run the following command on the source database:

    exec sys.sp_cdc_change_job @job_type = 'cleanup', @retention = <time>;

    <time> is the retention period in minutes. If average daily incremental data exceeds 10 million rows, set <time> to 1440.

  • Enabling database-level and table-level CDC causes a brief table lock (a few seconds) in the source database due to SQL Server constraints.

  • Keep the number of CDC-enabled tables per task at or below 1,000. Otherwise, the task may be delayed or unstable.

  • Keep the write rate to CDC-enabled tables at or below 1,000 records per second.
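To confirm a task stays under the CDC table limit, you can count the CDC-enabled tables in each source database:

```sql
-- Tables currently tracked by CDC in the current database
SELECT COUNT(*) AS cdc_table_count
FROM sys.tables
WHERE is_tracked_by_cdc = 1;
```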

Polling CDC mode — additional limits:

  • Account permission requirements:

    • Database-level CDC: requires the sysadmin role.

    • Table-level CDC: requires a privileged account.

  • CDC availability on third-party platforms:

    • Azure SQL Database (vCore model): all databases support CDC.

    • Azure SQL Database (DTU model): CDC is available only for service tier S3 or higher.

    • Amazon RDS for SQL Server: CDC can be enabled via stored procedures at the database level.

  • CDC cannot be enabled for clustered columnstore indexes.

  • Maximum 1,000 tables per task. Exceeding this limit causes delays or instability.

  • The CDC component retains incremental data for 3 days by default. To change the retention period, run the following command on the source database:

    exec sys.sp_cdc_change_job @job_type = 'cleanup', @retention = <time>;

    <time> is the retention period in minutes. If average daily incremental data exceeds 10 million rows, set <time> to 1440.

  • Do not execute DDL statements to add or remove columns more than twice within a minute.

  • Do not modify CDC instances during synchronization. Otherwise, the task may fail or data loss may occur.

  • Enabling database-level and table-level CDC causes a brief table lock (a few seconds) in the source database due to SQL Server constraints.

Large field data:

If a single field written to CDC-enabled tables exceeds 64 KB (the default CDC maximum), run the following command on the source database:

exec sp_configure 'max text repl size', -1;
RECONFIGURE;

Primary key and distribution key mappings:

  • If the source table has a primary key, the destination table uses the same primary key column.

  • If the source table has no primary key, the destination table uses the distribution key as its primary key.

  • The unique key of the destination table (including the primary key) must contain all columns of the distribution key.
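For example, a destination table that satisfies these rules might look as follows; the table and column names are hypothetical:

```sql
-- The distribution key (order_id) is contained in the primary key,
-- as AnalyticDB for PostgreSQL requires
CREATE TABLE public.orders (
    order_id  bigint NOT NULL,
    buyer_id  bigint,
    amount    numeric(12, 2),
    PRIMARY KEY (order_id)
)
DISTRIBUTED BY (order_id);
```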

If the number of CDC-enabled tables in a task exceeds the DTS limit, the precheck fails.

Set up a synchronization task

If a DTS task fails, DTS support will attempt to restore it within 8 hours. During restoration, the task may be restarted and task parameters may be modified. Database parameters are never changed.

The setup consists of seven steps: navigate to the tasks page, configure source and destination databases, select objects and synchronization settings, configure advanced settings, optionally configure table fields, run the precheck, and purchase the synchronization instance.

Step 1: Go to the Data Synchronization Tasks page

  1. Log on to the Data Management (DMS) console.

  2. In the top navigation bar, click Data + AI.

  3. In the left-side navigation pane, choose DTS (DTS) > Data Synchronization.

You can also go directly to the Data Synchronization Tasks page. DMS console layout may vary — for details, see Simple mode and Customize the layout and style of the DMS console.

Step 2: Configure source and destination databases

  1. On the right side of the Data Synchronization Tasks page, select the region where your synchronization instance will reside.

    In the new DTS console, select the region in the top navigation bar.
  2. Click Create Task.

  3. Configure the source database:

    • Task Name: Auto-generated. Specify a descriptive name to identify the task easily. The name does not need to be unique.

    • Select an existing DMS database instance (optional): Select an existing database to have DTS auto-fill the parameters below, or configure them manually.

    • Database Type: Select SQL Server.

    • Access Method: Select Alibaba Cloud Instance.

    • Instance Region: The region of the source ApsaraDB RDS for SQL Server instance.

    • Replicate Data Across Alibaba Cloud Accounts: Select No for same-account synchronization.

    • RDS Instance ID: The ID of the source instance.

    • Database Account: The account for the source instance. See Required permissions.

    • Database Password: The password for the database account.
  4. Configure the destination database:

    • Select an existing DMS database instance (optional): Select an existing database to have DTS auto-fill the parameters below, or configure them manually.

    • Database Type: Select AnalyticDB for PostgreSQL.

    • Access Method: Select Alibaba Cloud Instance.

    • Instance Region: The region of the destination AnalyticDB for PostgreSQL instance.

    • Instance ID: The ID of the destination instance.

    • Database Name: The name of the database that receives the synchronized data.

    • Database Account: The account for the destination instance. See Required permissions.

    • Database Password: The password for the database account.
  5. Click Test Connectivity and Proceed. DTS automatically adds its server CIDR blocks to the whitelist of Alibaba Cloud database instances and to the security group rules of Elastic Compute Service (ECS) instances hosting self-managed databases. For self-managed databases in data centers or hosted by third-party cloud providers, add the DTS server CIDR blocks manually. For the full list of CIDR blocks, see CIDR blocks of DTS servers.

    Warning

    Adding DTS server CIDR blocks to a whitelist or security group introduces security exposure. Before proceeding, take preventive measures such as strengthening credentials, restricting exposed ports, auditing API calls, reviewing whitelist rules regularly, and considering private connectivity options like Express Connect, VPN Gateway, or Smart Access Gateway.

Step 3: Configure objects and synchronization settings

  • Synchronization Types: Select Schema Synchronization, Full Data Synchronization, and Incremental Data Synchronization. Full data synchronization migrates historical data as the baseline for incremental synchronization.

  • SQL Server Incremental Synchronization Mode: See Incremental synchronization limitations for a mode comparison.

  • DDL and DML Operations to Be Synchronized: The SQL operations to synchronize at the instance level. To configure per-database or per-table operations, right-click an object in Selected Objects and select the operations.

  • Processing Mode of Conflicting Tables: Precheck and Report Errors fails the precheck if identically named tables exist in the destination; use object name mapping to resolve conflicts. Ignore Errors and Proceed skips the check: during full synchronization, matching records in the destination are kept; during incremental synchronization, matching records are overwritten. Schema mismatches may cause partial synchronization or task failure. Use with caution.

  • Capitalization of Object Names in Destination Instance: Controls capitalization of database, table, and column names in the destination. Default is DTS default policy. For details, see Specify the capitalization of object names.

  • Source Objects: Select objects from the Source Objects list and click the arrow icon to move them to Selected Objects. In this heterogeneous scenario, only tables are synchronized; views, triggers, and stored procedures are excluded.

  • Selected Objects: Right-click an object to rename it, configure per-object SQL operations, or set WHERE filter conditions. For bulk renaming, click Batch Edit. Renaming an object may break dependent objects.

Step 4: Configure advanced settings

Click Next: Advanced Settings to configure the following options.

To enable data verification after synchronization completes, see Configure a data verification task.

  • Dedicated Cluster for Task Scheduling: By default, DTS schedules tasks to the shared cluster. For improved stability, purchase a dedicated cluster. See What is a DTS dedicated cluster.

  • Set Alerts: No disables alerting. Yes sends alerts when the task fails or synchronization latency exceeds the threshold; configure alert thresholds and notification contacts. See Configure monitoring and alerting.

  • Retry Time for Failed Connections: The time window for retrying failed connections after the task starts. Range: 10–1440 minutes. Default: 720 minutes. Set to at least 30 minutes. If reconnection succeeds within this window, the task resumes; otherwise, the task fails. If multiple tasks share a source or destination database, the shortest retry window applies. DTS charges continue during retries.

  • Retry Time for Other Issues: The time window for retrying failed DDL or DML operations. Range: 1–1440 minutes. Default: 10 minutes. Set to at least 10 minutes. This value must be less than Retry Time for Failed Connections.

  • Enable Throttling for Full Data Synchronization: Limits source read QPS, destination write RPS, and data transfer speed (MB/s) during full synchronization to reduce load on the destination. Available only when Full Data Synchronization is selected.

  • Enable Throttling for Incremental Data Synchronization: Limits incremental synchronization RPS and data transfer speed (MB/s).

  • Environment Tag: Tags the DTS instance with an environment label. Optional.

  • Enclose Object Names in Quotation Marks: When set to Yes, DTS encloses object names in single or double quotation marks during schema and incremental synchronization if the source database is case-sensitive and object names contain mixed case, a table name does not start with a letter or contains non-standard characters, or object names are reserved keywords or invalid characters in the destination. After synchronization completes, reference objects using quoted names.

  • Configure ETL: Yes enables extract, transform, and load (ETL); enter transformation statements in the code editor. See Configure ETL in a data migration or data synchronization task. No disables ETL.
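For example, if quoting is enabled and a mixed-case table is synchronized, queries against the destination must quote the name; OrderDetails is a hypothetical table name:

```sql
-- Matches the mixed-case name preserved by quoting
SELECT * FROM "OrderDetails";
-- Without quotation marks, PostgreSQL lower-cases the identifier
-- and would look for a table named orderdetails instead
```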

Step 5: Configure database and table fields (optional)

Click Next: Configure Database and Table Fields to set the Type, Primary Key Column, and Distribution Key for each table synchronized to AnalyticDB for PostgreSQL.

This step is available only when Schema Synchronization is selected. Set Definition Status to All to view and edit all tables.

Specify multiple columns in the Primary Key Column field to form a composite primary key. One or more primary key columns must also be designated as Distribution Key columns. For details, see Manage tables and Define table distribution.

Step 6: Run the precheck

Click Next: Save Task Settings and Precheck.

To preview the API parameters for this task configuration, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.

DTS runs a precheck before the task can start.

  • If the precheck fails, click View Details next to each failed item, fix the issue, then rerun the precheck.

  • If an alert is triggered:

    • For alerts that cannot be ignored: fix the issue, then rerun the precheck.

    • For ignorable alerts: click Confirm Alert Details, then click Ignore in the View Details dialog, click OK, then click Precheck Again. Ignoring alerts may cause data inconsistency.

Step 7: Purchase a synchronization instance

Wait until the Success Rate reaches 100%, then click Next: Purchase Instance.

  • Billing Method: Subscription: pay upfront for a fixed period; more cost-effective for long-term use. Pay-as-you-go: billed hourly; release the instance when no longer needed to stop charges.

  • Resource Group Settings: The resource group for the synchronization instance. Default: default resource group. See What is Resource Management?

  • Instance Class: Determines synchronization throughput. Select based on your data volume and performance requirements. See Instance classes of data synchronization instances.

  • Subscription Duration: Available only for the Subscription billing method. Options: 1–9 months, or 1, 2, 3, or 5 years.

Read and accept the Data Transmission Service (Pay-as-you-go) Service Terms, then click Buy and Start. In the confirmation dialog, click OK.

Track synchronization progress in the task list.

What's next