
Data Transmission Service: Migrate a self-managed SQL Server database to RDS for SQL Server

Last Updated: Mar 28, 2026

Use Data Transmission Service (DTS) to migrate schema, historical data, and ongoing changes from a self-managed SQL Server database to an ApsaraDB RDS for SQL Server instance — with zero or minimal downtime.

When to use DTS (and when not to)

DTS works well for most self-managed SQL Server migrations. Use the RDS for SQL Server backup feature instead if your source database meets any of the following conditions:

  • More than 10 databases in the source instance

  • A single database backs up logs more than once per hour

  • A single database executes more than 100 DDL statements per hour

  • Logs are written at more than 20 MB/s for a single database

  • Change data capture (CDC) must be enabled for more than 1,000 tables

  • Database logs contain heap tables, tables without primary keys, compressed tables, or tables with computed columns

To check whether your source database contains these table types, run the SQL queries in the appendix.

Migration types

DTS supports three migration types, which you can combine:

  • Schema migration: Migrates schema definitions — tables, views, table triggers, synonyms, SQL stored procedures, SQL functions, plan guides, user-defined types, rules, defaults, and sequences. Objects not supported include assemblies, service brokers, full-text indexes, full-text catalogs, distributed schemas, distributed functions, CLR stored procedures, CLR scalar functions, CLR table-valued functions, internal tables, system objects, and aggregate functions.

  • Full data migration: Migrates all historical data from the source database to the destination database.

  • Incremental data migration: After full migration completes, continuously replicates incremental changes. This lets you migrate without taking applications offline.

To resolve object dependencies and improve success rates, DTS migrates SQL Server schema and data in this order:

  1. Schema migration for tables, views, synonyms, user-defined types, rules, defaults, and plan guides

  2. Full data migration

  3. Schema migration for SQL stored procedures, SQL functions, triggers, and foreign keys

  4. Incremental data migration

Prerequisites

Before you begin, make sure you have:

  • Created a destination RDS for SQL Server instance. See Create an ApsaraDB RDS for SQL Server instance

  • Verified that the destination instance storage exceeds the storage used by the source database (not total allocated storage)

  • Verified that your source SQL Server version is supported. See Migration solutions

  • Confirmed that the server hosting the source database has sufficient outbound bandwidth (insufficient bandwidth reduces migration speed)

  • Configured database accounts with the required permissions (see Permissions required)
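To verify the storage prerequisite, you can check the space actually used by the source database with the standard `sp_spaceused` procedure. The database name below is a placeholder:

```sql
USE mytestdata;   -- replace with the name of the database to migrate
GO
-- Reports database size and reserved/data/index space for the current database
EXEC sp_spaceused;
GO
```

Compare the reported used space against the available storage of the destination RDS for SQL Server instance.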

Billing

| Migration type | Instance configuration fee | Internet traffic fee |
| --- | --- | --- |
| Schema migration + full data migration | Free | Charged when Access Method is set to Public IP Address. See Billing overview. |
| Incremental data migration | Charged. See Billing overview. | Charged when Access Method is set to Public IP Address. See Billing overview. |

Permissions required

| Database | Schema migration | Full data migration | Incremental data migration |
| --- | --- | --- | --- |
| Self-managed SQL Server | SELECT | SELECT | sysadmin |
| RDS for SQL Server | Read and write permissions | Read and write permissions | Read and write permissions |

Create the required accounts and grant these permissions before you configure the migration task.

To migrate triggers from the source database, the database account used for the task must have Owner permissions on the destination database.
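For the self-managed source, creating an account with the SELECT permission required for schema and full data migration might look like the following sketch. The login name, password, and database name are placeholders; incremental migration requires the sysadmin role instead:

```sql
USE master;
GO
-- Hypothetical login for the DTS task
CREATE LOGIN dts_migrator WITH PASSWORD = 'YourStrongP@ssw0rd!';
GO
USE mytestdata;   -- database to migrate
GO
CREATE USER dts_migrator FOR LOGIN dts_migrator;
GRANT SELECT TO dts_migrator;   -- sufficient for schema + full data migration
GO
-- For incremental data migration, grant the sysadmin server role instead:
-- ALTER SERVER ROLE sysadmin ADD MEMBER dts_migrator;
```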

Limitations

General

  • DTS does not migrate foreign keys. Cascade and delete operations in the source database are not replicated to the destination.

  • Tables must have primary keys or UNIQUE constraints with unique fields. Without these, duplicate data may appear in the destination.

  • A single migration task supports a maximum of 1,000 tables when migrating table-level objects with column or name mapping. Split larger sets into multiple tasks, or migrate the entire database without table-level mapping.

  • A single migration task supports a maximum of 10 databases. Split larger sets into multiple tasks to avoid stability and performance issues.

  • When migrating specific objects (not an entire database), tables with the same name but different schema names cannot be migrated to the same destination database.

  • If the source database is read-only, DDL operations cannot be migrated.

  • If the source database is an Azure SQL Database, one DTS instance can migrate only one database.

  • If the source is an RDS for SQL Server instance with incremental migration, disable Transparent Data Encryption (TDE) first. See Disable TDE.

  • Do not use sp_rename to rename objects before the initial schema synchronization task runs. Use the ALTER command instead.

Incremental migration requirements

For incremental migration, the source database must meet these log requirements:

  • Backup mode must be set to Full. A full physical backup must have already been performed.

  • Log retention period: at least 24 hours for incremental-only tasks; at least 7 days for tasks that include both full and incremental migration. You can extend the retention period after full migration completes. Failures caused by insufficient log retention are not covered by the DTS Service-Level Agreement (SLA).

  • DTS uses the fn_log function to read source database logs. Because this function has performance bottlenecks, do not clear the source database logs too early; otherwise, the DTS task may fail.

  • During full data migration, enable the READ_COMMITTED_SNAPSHOT transaction processing mode in the source database to prevent shared locks from affecting data writes. Exceptions caused by this issue are not covered by the DTS SLA.
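The recovery model and READ_COMMITTED_SNAPSHOT settings described above can be checked and adjusted with statements like the following. The database name is a placeholder:

```sql
-- Check the current recovery model and snapshot setting
SELECT name, recovery_model_desc, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'mytestdata';   -- hypothetical database name

-- Enable READ_COMMITTED_SNAPSHOT before full data migration
ALTER DATABASE mytestdata
SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
```

WITH ROLLBACK IMMEDIATE terminates open transactions so the setting can take effect; run it in a maintenance window if the database is under load.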

For CDC enablement in the source database, the following conditions must all be met:

  • The srvname field in sys.sysservers must match the return value of SERVERPROPERTY('ServerName')

  • Database owner must be sa (self-managed) or sqlsa (RDS for SQL Server)

  • Enterprise Edition: SQL Server 2008 or later

  • Standard Edition: SQL Server 2016 SP1 or later

  • SQL Server 2017 (Standard or Enterprise Edition): you may need to upgrade the database version
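The server-name condition above can be checked with a query like the following; if the two values differ, CDC cannot be enabled:

```sql
-- Compare the registered local server name with the actual server name.
-- CDC enablement requires these two values to match.
SELECT srvname FROM sys.sysservers WHERE srvid = 0;
SELECT SERVERPROPERTY('ServerName') AS server_property_name;
```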

Additional restrictions during migration:

  • During initial schema synchronization and full data migration, do not perform DDL operations that change database or table schemas.

  • For full-migration-only tasks, do not write to the source database during migration. To avoid this constraint, include incremental data migration.

Unsupported data types

The following data types cannot be migrated: CURSOR, ROWVERSION, SQL_VARIANT, HIERARCHYID, POLYGON, GEOMETRY, GEOGRAPHY.
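As a convenience (not part of the official procedure), columns that use most of these types can be located with a catalog query in the same style as the appendix:

```sql
-- List columns whose data types DTS cannot migrate.
-- CURSOR is not a column data type, so it is not checked here.
SELECT s.name AS schema_name, t.name AS table_name,
       c.name AS column_name, ty.name AS type_name
FROM sys.columns c
INNER JOIN sys.tables t ON c.object_id = t.object_id
INNER JOIN sys.schemas s ON t.schema_id = s.schema_id
INNER JOIN sys.types ty ON c.user_type_id = ty.user_type_id
WHERE ty.name IN ('rowversion', 'timestamp', 'sql_variant',
                  'hierarchyid', 'geometry', 'geography');
```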

If data cannot be written to a TIMESTAMP column in the destination database, DTS does not support full or incremental migration for that table — this may cause data inconsistency or task failures.

DTS reads FLOAT and DOUBLE values using ROUND(COLUMN, PRECISION). Without a specified precision, FLOAT migrates at precision 38 and DOUBLE at precision 308. Verify this meets your business requirements before starting.

SQL operations supported for incremental migration

| Operation type | Supported operations |
| --- | --- |
| DML | INSERT, UPDATE, DELETE. UPDATE statements that only update large objects (LOBs) are not supported. |
| DDL | CREATE TABLE; ALTER TABLE (ADD COLUMN and DROP COLUMN only); DROP TABLE; CREATE INDEX; DROP INDEX |

DDL limitations:

  • Transactional DDL is not supported — for example, adding multiple columns in a single statement, or mixing DDL and DML in one statement

  • DDL that uses user-defined types is not supported

  • Online DDL is not supported

  • DDL that uses reserved keywords as property names is not supported

  • DDL executed by system stored procedures is not supported

  • TRUNCATE TABLE is not supported

  • Partitions and table definitions containing functions are not supported

Incremental synchronization mode limitations

DTS offers three incremental synchronization modes. The table below summarizes which mode handles which table types — choose based on your source database before configuring the task.

| Mode | Supported table types | Key constraints |
| --- | --- | --- |
| Hybrid log-based parsing (log-based for non-heap tables + CDC for heap tables) | Heap tables, tables without primary keys, compressed tables, tables with computed columns | DTS creates objects in the source database (the dts_cdc_sync_ddl trigger, the dts_sync_progress heartbeat table, and the dts_cdc_ddl_history DDL storage table). SELECT INTO, TRUNCATE, and RENAME COLUMN are restricted on CDC-enabled tables. |
| Log-based parsing (heap tables not supported) | Standard tables with clustered indexes and primary key columns | Non-intrusive. Does not support heap tables, tables without primary keys, compressed tables, computed columns, or sparse columns. |
| CDC polling mode | Cloud-hosted SQL Server (Amazon RDS, Azure SQL Database, Azure SQL Managed Instance, Azure SQL Server on Virtual Machine, Google Cloud SQL for SQL Server) | Requires CDC permissions; approximately 10-second latency. |

Hybrid log-based parsing — additional limits:

  • Do not add or remove columns in consecutive DDL statements less than 10 minutes apart. For example, the following sequence causes a task error:

    ALTER TABLE test_table DROP COLUMN Flag;
    ALTER TABLE test_table ADD Remark nvarchar(50) not null default('');
  • Keep the CDC job running in the source database. If the CDC job stops, the DTS task fails.

  • The default incremental data retention period for the CDC component is 3 days. Adjust it with:

    exec sys.sp_cdc_change_job @job_type = 'cleanup', @retention = <time>;

    Where <time> is in minutes. If a single table exceeds 10 million incremental change SQL statements per day, set <time> to 1440.

  • Keep the data change rate for CDC-enabled tables at or below 1,000 records per second (RPS).

  • Enable CDC for no more than 1,000 tables in a single task to avoid latency and instability.

  • During task initialization, the source database may be briefly locked while CDC is being enabled — this is a SQL Server kernel behavior.

Log-based parsing — additional limits:

  • Tables must have a clustered index containing primary key columns.

  • Web Edition RDS for SQL Server instances must use this mode.

CDC polling mode — additional limits:

  • The source database account must have permissions to enable CDC. Enabling database-level CDC requires the sysadmin role; enabling table-level CDC requires a privileged account.

    • For Azure SQL Database: the privileged account (server administrator) provided by the console meets the requirements. For vCore-based databases, all instance types support CDC. For DTU-based databases, only instance types of S3 and later support CDC.

    • For Amazon RDS for SQL Server: the privileged account meets the requirements and can be used to enable database-level CDC for stored procedures.

  • Clustered columnstore index tables do not support CDC.

  • Incremental migration latency is approximately 10 seconds.

  • Migrate no more than 1,000 tables per task to avoid latency and instability.

  • Do not add or remove columns in rapid succession. Performing more than two such DDL operations within one minute may cause the task to fail.

  • Do not change the CDC instance of the source database — this may cause task failure or data loss.

  • Adjust the CDC retention period using the same command as hybrid log-based parsing mode (see above).

  • During task initialization, the source database may be briefly locked while CDC is being enabled.
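For reference, enabling CDC with the native SQL Server procedures looks like the following sketch. The database and table names are hypothetical, and on managed cloud services the provider's wrapper procedures may be required instead:

```sql
USE mytestdata;          -- hypothetical database name
GO
-- Enable CDC at the database level (requires the sysadmin role)
EXEC sys.sp_cdc_enable_db;
GO
-- Enable CDC for one table (hypothetical schema and table names)
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'orders',
    @role_name     = NULL;
GO
```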

Cross-version migration:

Check compatibility in advance when migrating across different SQL Server versions.

Multiple DTS instances sharing the same source:

If multiple DTS instances use the same SQL Server database as the source, their incremental data ingestion modules run independently.

Task failure recovery:

DTS technical support will attempt to recover a failed task within 8 hours. During recovery, task parameters may be adjusted — only DTS task parameters are changed, not database parameters. DTS automatically retries failed tasks for up to 7 days. Before switching to the destination instance, end or release the migration task, or revoke write permissions from the DTS account using the revoke command. This prevents the source data from overwriting the destination after an automatic task resume.

SQL Server log format notice:

SQL Server is a commercial closed-source database. Due to the proprietary log format, CDC and log parsing may encounter unpredictable issues. Before enabling incremental migration in production, run a comprehensive proof of concept (POC) test that covers all business change types, table schema changes, and peak-hour load. Keep your production business logic consistent with what was tested.

Prepare the source database

Important

For incremental migration, complete these steps before configuring the DTS task. If migrating multiple databases, repeat steps 1–3 for each database. Otherwise, data inconsistency may occur.

Step 1: Set the recovery model to full

Run the following command in the source database:

use master;
GO
ALTER DATABASE <database_name> SET RECOVERY FULL WITH ROLLBACK IMMEDIATE;
GO

Replace <database_name> with the name of the database to migrate.

Example:

use master;
GO
ALTER DATABASE mytestdata SET RECOVERY FULL WITH ROLLBACK IMMEDIATE;
GO

Step 2: Back up the database

Skip this step if a logical backup has already been performed.

BACKUP DATABASE <database_name> TO DISK='<physical_backup_device_name>';
GO
| Placeholder | Description |
| --- | --- |
| <database_name> | Name of the database to migrate |
| <physical_backup_device_name> | Path and file name of the backup file |

Example:

BACKUP DATABASE mytestdata TO DISK='D:\backup\dbdata.bak';
GO

Step 3: Back up the transaction log

BACKUP LOG <database_name> TO DISK='<physical_backup_device_name>' WITH INIT;
GO

Example:

BACKUP LOG mytestdata TO DISK='D:\backup\dblog.bak' WITH INIT;
GO

Create a migration task

Step 1: Open the Data Migration page

From the DTS console

  1. Log on to the Data Transmission Service (DTS) console.

  2. In the left navigation pane, click Data Migration.

  3. In the upper-left corner, select the region where the migration instance will be located.

From the DMS console

Note

Navigation may vary based on the DMS console mode and layout. See Simple mode console and Customize the layout and style of the DMS console.

  1. Log on to the Data Management (DMS) console.

  2. In the top menu bar, choose Data + AI > Data Transmission (DTS) > Data Migration.

  3. To the right of Data Migration Tasks, select the region where the migration instance will be located.

Step 2: Configure source and destination databases

Click Create Task, then configure the source and destination databases.

Warning

Read the limits displayed at the top of the configuration page before selecting source and destination instances. Skipping this may cause task failure or data inconsistency.

Task Name

DTS generates a task name automatically. Specify a descriptive name for easy identification. The name does not need to be unique.

Source database (self-managed SQL Server)

| Parameter | Value |
| --- | --- |
| Select Existing Connection | Select a registered database instance, or configure the connection manually. In the DMS console, this parameter is labeled Select a DMS database instance. |
| Database Type | SQL Server |
| Connection Type | Public IP |
| Instance Region | Region where the source database resides |
| Hostname or IP Address | Public endpoint of the source database |
| Port | Service port of the source database (default: 1433) |
| Database Account | Account with the required permissions (see Permissions required) |
| Database Password | Password for the database account |
| Encryption | Non-encrypted if SSL is disabled on the source; SSL-encrypted if SSL is enabled. DTS trusts the server certificate by default. |

When using a self-managed database, additional preparations may be required. See Preparations overview.

Destination database (RDS for SQL Server)

| Parameter | Value |
| --- | --- |
| Select Existing Connection | Select a registered database instance, or configure the connection manually |
| Database Type | SQL Server |
| Connection Type | Cloud Instance |
| Instance Region | Region where the destination RDS for SQL Server instance resides |
| Instance ID | ID of the destination RDS for SQL Server instance |
| Database Account | Account with read and write permissions |
| Database Password | Password for the database account |
| Encryption | Non-encrypted or SSL-encrypted, based on your configuration |

Step 3: Test connectivity and configure objects

  1. Click Test Connectivity and Proceed. In the CIDR Blocks of DTS Servers dialog box, click Test Connectivity.

    Add the DTS server IP address ranges to the security settings of both source and destination databases before testing. See Add DTS server IP addresses to a whitelist.
  2. On the Configure Objects page, configure the following settings:

Migration types

| Scenario | Selection |
| --- | --- |
| Full migration only | Schema Migration + Full Data Migration |
| Migration with minimal downtime | Schema Migration + Full Data Migration + Incremental Data Migration |

Without Schema Migration, create the target databases and tables manually before starting the task, or use object name mapping in Selected Objects.
Without Incremental Data Migration, do not write to the source database during migration.

Incremental sync mode

This setting appears only when Incremental Data Migration is selected. The three modes differ in which underlying mechanism they use to capture changes:

  • Hybrid log-based parsing: Uses log parsing for tables with clustered indexes, and CDC for heap tables and tables without primary keys. Choose this when your source has heap tables, tables without primary keys, compressed tables, or computed columns.

  • Log-based parsing (heap tables not supported): Uses only log parsing. Choose this to minimize impact on the source database and when your source has no heap tables or tables without primary keys.

  • CDC polling mode: Uses the native SQL Server CDC component. Choose this when migrating from cloud-hosted SQL Server (Amazon RDS, Azure SQL Database, Azure SQL Managed Instance, Google Cloud SQL for SQL Server).

| Mode | Best for | Trade-offs |
| --- | --- | --- |
| Log-based Parsing for Non-heap Tables and CDC-based Incremental Synchronization for Heap Tables (Hybrid Log-based Parsing) | Sources with heap tables, tables without primary keys, compressed tables, or computed columns | DTS creates objects in the source database; SELECT INTO, TRUNCATE, and RENAME COLUMN are restricted on CDC-enabled tables |
| Incremental Synchronization Based on Logs of Source Database (Heap tables are not supported) | Standard tables with clustered indexes and primary keys; minimizing source impact | Does not support heap tables, tables without primary keys, compressed tables, or computed columns |
| Polling and Querying CDC Instances for Incremental Synchronization | Cloud-hosted SQL Server (Amazon RDS, Azure SQL Database, Azure SQL Managed Instance, Google Cloud SQL) | Requires CDC permissions; approximately 10-second latency |

Other object settings

| Setting | Description |
| --- | --- |
| Method to Migrate Triggers in Source Database | Select a trigger migration method, or skip this setting if no triggers are involved. Available only when both Schema Migration and Incremental Data Migration are selected. See Configure how to synchronize or migrate triggers. |
| The maximum number of tables for which CDC is enabled that DTS supports | Set a maximum based on your requirements. Default: 1,000. Not available in log-based parsing mode. |
| Processing Mode of Conflicting Tables | Precheck and Report Errors: fails the precheck if tables with the same names exist in the destination. Ignore Errors and Proceed: skips the check (risk of data inconsistency; use with caution). |
| Source Objects | Click the objects to migrate in Source Objects, then click the right arrow to move them to Selected Objects. Selecting only tables or columns excludes views, triggers, and stored procedures. |
| Selected Objects | Right-click an object to rename it (see Individual table column mapping). Click Batch Edit to rename multiple objects at once (see Map multiple object names at a time). Right-click to add WHERE filters (see Configure filter conditions) or select specific SQL operations to migrate. Note: object name mapping may cause dependent objects to fail migration. |

Step 4: Configure advanced settings

Click Next: Advanced Settings and configure the following:

| Setting | Description |
| --- | --- |
| Dedicated Cluster for Task Scheduling | DTS uses a shared cluster by default. Purchase a dedicated cluster for more stable task execution. |
| Retry Time for Failed Connections | How long DTS retries after a connection failure. Default: 720 minutes. Range: 10–1440 minutes. Set this to more than 30 minutes. Note: you are charged during the retry period. |
| Retry Time for Other Issues | How long DTS retries after non-connectivity failures (DDL or DML errors). Default: 10 minutes. Range: 1–1440 minutes. Set this to more than 10 minutes. Must be less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Migration | Limit QPS, RPS, or throughput (MB/s) to reduce load on the source and destination. Available only when Full Data Migration is selected. Also adjustable after the task starts. |
| Enable Throttling for Incremental Data Migration | Limit RPS or throughput (MB/s) for incremental migration. Available only when Incremental Data Migration is selected. Also adjustable after the task starts. |
| Environment Tag | Optional tag to identify the instance. |
| Configure ETL | Enable the extract, transform, and load (ETL) feature if needed. See What is ETL? and Configure ETL in a data migration task. |
| Monitoring and Alerting | Set an alert threshold and notification contacts. When a task fails or latency exceeds the threshold, the system sends an alert. See Configure monitoring and alerting. |

Step 5: Configure data validation

Click Next: Data Validation to set up a data validation task. See Configure data validation.

Step 6: Run the precheck

Click Next: Save Task Settings and Precheck.

To preview API parameters for this configuration, hover over the button and click Preview OpenAPI parameters.

DTS runs a precheck before starting the task. The task starts only after the precheck passes.

  • If the precheck fails, click View Details next to the failed item, fix the issue, and run the precheck again.

  • If a warning appears for an item that cannot be ignored, fix the issue and rerun. For ignorable warnings, click Confirm Alert Details > Ignore > OK > Precheck Again to proceed. Ignored warnings may cause data inconsistency.

Step 7: Purchase the instance and start the task

  1. When Success Rate reaches 100%, click Next: Purchase Instance.

  2. Select the instance class for the migration task.

    | Setting | Description |
    | --- | --- |
    | Resource Group Settings | Resource group for the instance. Default: default resource group. See What is Resource Management? |
    | Instance Class | Affects migration speed. See Data migration link specifications. |
  3. Read and accept Data Transmission Service (Pay-as-you-go) Service Terms.

  4. Click Buy and Start, then click OK in the confirmation dialog.

Monitor the migration task

View task progress on the Data Migration Tasks list page.

| Task status | Meaning |
| --- | --- |
| Running | Full data migration or incremental data migration is in progress. Tasks that include incremental data migration keep running until you stop them manually. |
| Completed | Tasks that include only schema migration and full data migration stop automatically and show Completed when migration finishes. |

Stop tasks that include incremental data migration manually when you are ready to cut over to the destination instance.

Before switching your application to the destination instance, end or release the DTS task — or revoke write permissions from the DTS account using the revoke command. This prevents the source from overwriting the destination if the task auto-resumes after failure.
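As a sketch, revoking write permissions from a hypothetical DTS account (dts_migrator; substitute the account your task actually uses) might look like this:

```sql
USE mytestdata;   -- hypothetical destination database name
GO
-- Revoke database-level write permissions from the DTS account
REVOKE INSERT, UPDATE, DELETE FROM dts_migrator;
GO
```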

Special cases

Source is RDS for SQL Server: DTS creates an rdsdt_dtsacct account in the source instance. Do not delete this account or change its password while the task runs. See System accounts.

Destination is RDS for SQL Server: DTS automatically creates the target database. If the database name does not follow RDS for SQL Server naming conventions, create the database manually before configuring the task. See Create a database.

Triggers in destination database: Disable triggers and foreign keys in the destination database for incremental migration. Otherwise, the task fails.

Reindexing: Do not reindex during incremental migration — this may cause task failure or data loss. Do not change the primary keys of tables with CDC enabled.

Field size for CDC-enabled tables: If data written to a single field in a CDC-enabled table exceeds 64 KB, run the following commands on the source database before starting. RECONFIGURE is required for the sp_configure change to take effect:

exec sp_configure 'max text repl size', -1;
GO
RECONFIGURE;
GO

Appendix: Check table types in source database

Run these SQL queries on the source database to identify table types that may affect your migration approach.

Heap tables (tables without a clustered index):

SELECT s.name AS schema_name, t.name AS table_name
FROM sys.schemas s
INNER JOIN sys.tables t ON s.schema_id = t.schema_id
  AND t.type = 'U'
  AND s.name NOT IN ('cdc', 'sys')
  AND t.name NOT IN ('systranschemas')
  AND t.object_id IN (
    SELECT object_id FROM sys.indexes WHERE index_id = 0
  );

Tables without primary keys:

SELECT s.name AS schema_name, t.name AS table_name
FROM sys.schemas s
INNER JOIN sys.tables t ON s.schema_id = t.schema_id
  AND t.type = 'U'
  AND s.name NOT IN ('cdc', 'sys')
  AND t.name NOT IN ('systranschemas')
  AND t.object_id NOT IN (
    SELECT parent_object_id FROM sys.objects WHERE type = 'PK'
  );

Tables where primary key columns are not included in clustered index columns:

SELECT s.name schema_name, t.name table_name
FROM sys.schemas s
INNER JOIN sys.tables t ON s.schema_id = t.schema_id
WHERE t.type = 'U'
  AND s.name NOT IN('cdc', 'sys')
  AND t.name NOT IN('systranschemas')
  AND t.object_id IN (
    SELECT pk_colums_counter.object_id AS object_id
    FROM (
      SELECT pk_colums.object_id, sum(pk_colums.column_id) column_id_counter
      FROM (
        SELECT sic.object_id object_id, sic.column_id
        FROM sys.index_columns sic, sys.indexes sis
        WHERE sic.object_id = sis.object_id
          AND sic.index_id = sis.index_id
          AND sis.is_primary_key = 'true'
      ) pk_colums
      GROUP BY object_id
    ) pk_colums_counter
    INNER JOIN (
      SELECT cluster_colums.object_id, sum(cluster_colums.column_id) column_id_counter
      FROM (
        SELECT sic.object_id object_id, sic.column_id
        FROM sys.index_columns sic, sys.indexes sis
        WHERE sic.object_id = sis.object_id
          AND sic.index_id = sis.index_id
          AND sis.index_id = 1
      ) cluster_colums
      GROUP BY object_id
    ) cluster_colums_counter
    ON pk_colums_counter.object_id = cluster_colums_counter.object_id
      AND pk_colums_counter.column_id_counter != cluster_colums_counter.column_id_counter
  );

Compressed tables:

SELECT s.name AS schema_name, t.name AS table_name
FROM sys.objects t, sys.schemas s, sys.partitions p
WHERE s.schema_id = t.schema_id
  AND t.type = 'U'
  AND s.name NOT IN ('cdc', 'sys')
  AND t.name NOT IN ('systranschemas')
  AND t.object_id = p.object_id
  AND p.data_compression != 0;

Tables with computed columns:

SELECT s.name AS schema_name, t.name AS table_name
FROM sys.schemas s
INNER JOIN sys.tables t ON s.schema_id = t.schema_id
  AND t.type = 'U'
  AND s.name NOT IN ('cdc', 'sys')
  AND t.name NOT IN ('systranschemas')
  AND t.object_id IN (
    SELECT object_id FROM sys.columns WHERE is_computed = 1
  );

Tables with sparse columns:

SELECT s.name AS schema_name, t.name AS table_name
FROM sys.schemas s
INNER JOIN sys.tables t ON s.schema_id = t.schema_id
  AND t.type = 'U'
  AND s.name NOT IN ('cdc', 'sys')
  AND t.name NOT IN ('systranschemas')
  AND t.object_id IN (
    SELECT object_id FROM sys.columns WHERE is_sparse = 1
  );

If any of these queries return results, review the limitations to determine whether DTS can handle these table types, or whether you should switch to the RDS for SQL Server backup feature.

What's next