
ApsaraDB RDS: Migrate data from a self-managed SQL Server database to an ApsaraDB RDS for SQL Server instance

Last Updated: Jun 24, 2025

This topic describes how to migrate data from a self-managed SQL Server database to an Alibaba Cloud ApsaraDB RDS for SQL Server instance by using the Data Transmission Service (DTS) console. You can flexibly configure schema migration, full data migration, and incremental data migration. When these three migration types are configured together, you can migrate data without service interruption.

Prerequisites

You have created a destination ApsaraDB RDS for SQL Server instance whose storage space is larger than the storage space used by the source database. If the space is insufficient, increase the instance storage space in advance.

Usage notes

Before you start the migration, pay attention to the following key notes. Ignoring them may cause the task to fail or produce errors:

  • Database quantity limit: A single migration task cannot migrate more than 10 databases. Otherwise, stability and performance risks may occur.

  • Table quantity limit: When incremental migration is included, the number of tables to be synchronized from the source database cannot exceed 1000. Otherwise, task delay or instability may occur.

  • Source database operation restrictions: During schema migration and full migration phases, do not execute DDL operations (such as modifying database or table structures). Otherwise, the task will fail.

  • Table structure requirements: Tables to be migrated must have primary keys or unique constraints, and the constrained fields must be unique. Otherwise, duplicate data may appear in the destination database.

  • Foreign keys and triggers: If the migration task includes incremental data migration, you need to disable triggers and foreign keys that have been enabled in the destination database. Otherwise, the task may fail or data may be lost.

  • Database name standards: If the name of the database to be migrated does not comply with the definition standards of RDS SQL Server, you need to manually create a database in RDS SQL Server in advance. Otherwise, the task may not run properly.

  • Data log retention time: Incremental migration tasks require the source database's data logs to be retained for more than 24 hours. Full + incremental migration tasks require data logs to be retained for at least 7 days. Otherwise, the task may fail or data inconsistency may occur.


Source database limits

  • Bandwidth requirements

    The source database server must have sufficient outbound bandwidth. Otherwise, it will affect the data migration rate.

  • Table structure requirements

    Tables to be migrated must have primary keys or unique constraints, and the constrained fields must be unique. Otherwise, duplicate data may appear in the destination database.

  • Migration quantity limits

    • Table-level migration (with column name mapping): A single task supports a maximum of 1000 tables. If this limit is exceeded, you need to split the task or configure database-level migration. Otherwise, a request error will occur.

    • Database quantity limit: A single task supports a maximum of 10 databases. If this limit is exceeded, you need to split the task. Otherwise, stability and performance issues will occur.

  • Incremental migration log requirements

    • Data logs must be enabled, backup mode set to FULL, and relevant backups performed.

    • Data log retention time: Incremental migration tasks require the source database's data logs to be retained for more than 24 hours. Full + incremental migration tasks require data logs to be retained for at least 7 days. Otherwise, the task may fail or data inconsistency may occur.

      Important

      If problems occur because you set the data log retention time lower than the time required by DTS, such situations will not be covered by the DTS Service-Level Agreement (SLA).
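To confirm that the source database meets these log requirements before you configure the task, you can check its recovery model and most recent log backup. The following is a sketch; `mytestdata` is a placeholder database name:

```sql
-- Check the recovery model of the database to be migrated
-- ('FULL' is required for incremental migration).
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'mytestdata';

-- Check when the most recent log backup finished, to verify
-- that logs are backed up and retained frequently enough.
SELECT TOP 1 database_name, backup_finish_date
FROM msdb.dbo.backupset
WHERE database_name = 'mytestdata' AND type = 'L'
ORDER BY backup_finish_date DESC;
```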

  • CDC enabling conditions

    If CDC needs to be enabled for tables to be migrated from the source database, the following conditions must be met. Otherwise, the precheck will fail:

    • The srvname field in the sys.sysservers view must be consistent with the return value of the SERVERPROPERTY function.

    • If the source database is a self-managed SQL Server, the database owner must be sa.

    • Source database version requirements:

      • Enterprise Edition: Must be version 2008 or later.

      • Standard Edition: Must be version 2016 SP1 or later.

      • SQL Server 2017 (Standard and Enterprise editions): We recommend that you upgrade to a later version.
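The first two CDC conditions above can be verified with queries like the following. This is a sketch; `mytestdata` is a placeholder database name:

```sql
-- These two values must match for CDC to be enabled.
SELECT srvname FROM sys.sysservers WHERE srvid = 0;
SELECT SERVERPROPERTY('ServerName');

-- Check the database owner; for a self-managed source it must be sa.
SELECT name, SUSER_SNAME(owner_sid) AS owner_name
FROM sys.databases
WHERE name = 'mytestdata';

-- If the owner is not sa, change it:
ALTER AUTHORIZATION ON DATABASE::mytestdata TO sa;
```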

  • Log cleanup time

    DTS obtains source database logs through the fn_log function, which has performance bottlenecks. Do not clean up source database logs too early to avoid task failure.

Other limits

  • Unsupported data types

    Migration of CURSOR, ROWVERSION, SQL_VARIANT, HIERARCHYID, POLYGON, GEOMETRY, and GEOGRAPHY data types is not supported.
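To locate columns of unsupported types before you configure the task, a catalog query along these lines can help. This is a sketch; note that CURSOR values cannot be stored in table columns, and POLYGON values are stored in GEOMETRY columns, so the check covers the storable types:

```sql
-- List user-table columns whose data types DTS cannot migrate.
SELECT s.name AS schema_name, t.name AS table_name,
       c.name AS column_name, ty.name AS type_name
FROM sys.columns c
JOIN sys.tables t ON c.object_id = t.object_id
JOIN sys.schemas s ON t.schema_id = s.schema_id
JOIN sys.types ty ON c.user_type_id = ty.user_type_id
WHERE ty.name IN ('rowversion', 'timestamp', 'sql_variant',
                  'hierarchyid', 'geometry', 'geography');
```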

  • Other incremental migration limits

    • Index rebuilding operations are not supported. Otherwise, task failure or data loss may occur. Tables with CDC enabled do not support primary key-related changes.

    • If the number of tables with CDC enabled in a single task exceeds the maximum limit supported by DTS, the precheck will fail.

    • If the instance includes an incremental task and CDC tables need to write more than 64 KB of data to a single field, you need to run exec sp_configure 'max text repl size', -1; in advance to adjust the source database configuration. By default, the CDC job processes at most 64 KB for a single field.
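The configuration change mentioned above can be applied as follows (run on the source SQL Server instance; -1 removes the limit):

```sql
-- Remove the 64 KB per-field limit for CDC/replication.
EXEC sp_configure 'max text repl size', -1;
RECONFIGURE;

-- Verify the new setting.
EXEC sp_configure 'max text repl size';
```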

  • Destination database limits

    If incremental migration is needed, you must disable triggers and foreign keys that have been enabled in the destination database. Otherwise, the task will fail.
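One way to disable foreign keys and triggers across all tables is the undocumented but widely available sp_MSforeachtable helper. This is a sketch; run it in the destination database, and re-enable the objects after you switch your workload over:

```sql
-- Disable all foreign key constraints and all triggers.
EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';
EXEC sp_MSforeachtable 'ALTER TABLE ? DISABLE TRIGGER ALL';

-- After the migration, re-enable them
-- (WITH CHECK revalidates existing data against the constraints):
-- EXEC sp_MSforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL';
-- EXEC sp_MSforeachtable 'ALTER TABLE ? ENABLE TRIGGER ALL';
```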

  • Multiple migration instances

    Multiple migration instances that use the same SQL Server database as the source database have independent incremental data collection modules.

  • Instance recovery

    • If an instance fails to run, DTS technical support personnel will attempt to recover it within 8 hours.

    • During the recovery process, the instance may be restarted or parameters may be adjusted, but only instance parameters will be modified, not database parameters.

Notes

  • Notes for schema migration and full migration phases

    • DDL operation restrictions: DDL operations that change database or table structures are prohibited during schema migration and full migration phases. Otherwise, the data migration task will fail.

    • Read-only instance restrictions: If the source database is a read-only instance, migration of DDL operations is not supported.

    • DDL restrictions in hybrid log parsing mode: The source database does not support continuous execution of multiple column addition/removal operations (time interval less than 10 minutes). For example, continuously executing the following SQL will cause task errors:

      ALTER TABLE test_table DROP COLUMN Flag;
      ALTER TABLE test_table ADD Remark nvarchar(50) not null default('');
  • Cross-version migration

    If cross-version migration is needed, please confirm compatibility in advance.

  • DTS operations in the source database

    • Source log parsing for incremental synchronization mode: DTS creates the trigger dts_cdc_sync_ddl, heartbeat table dts_sync_progress, and DDL storage table dts_cdc_ddl_history in the source database.

    • Hybrid incremental synchronization mode: DTS creates the trigger dts_cdc_sync_ddl, heartbeat table dts_sync_progress, and DDL storage table dts_cdc_ddl_history in the source database. It also enables database-level CDC and partial table CDC. It is recommended that the data change volume of tables with CDC enabled at the source does not exceed 1000 records per second (RPS).

  • Data consistency and migration stability

    • Data consistency during full data migration: If you only perform full data migration, do not write new data to the source instance. Otherwise, it will cause inconsistency between source and destination data. It is recommended to select schema migration, full data migration, and incremental data migration to maintain real-time consistency.

    • Transaction processing mode parameter requirements: It is recommended to ensure that the source database's transaction processing mode parameter READ_COMMITTED_SNAPSHOT is enabled during the full data migration task to avoid the impact of shared locks on data writing. Otherwise, it may lead to data inconsistency, instance running failure, and other abnormal situations. Abnormal situations caused by this are not covered by the DTS SLA.

    • Task recovery mechanism: DTS will attempt to recover migration tasks that have failed within seven days. Therefore, before switching business to the destination instance, please end or release the task, or revoke the write permission of the DTS access to the destination instance account using the REVOKE command to prevent source data from overwriting destination instance data after the task is automatically recovered.
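The two recommendations above can be sketched in T-SQL as follows; `mytestdata` and `dts_user` are placeholder names for the database and the account that DTS uses:

```sql
-- Check whether READ_COMMITTED_SNAPSHOT is enabled (1 = on).
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'mytestdata';

-- Enable it if necessary (requires briefly exclusive access).
ALTER DATABASE mytestdata
SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

-- Before switching business to the destination, strip DML privileges
-- from the DTS account so a recovered task cannot overwrite data.
USE mytestdata;
GO
REVOKE INSERT, UPDATE, DELETE FROM dts_user;
GO
```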

  • Performance and resource notes

    • Pre-migration assessment: Evaluate the performance of source and destination databases before migration, and it is recommended to perform migration during business off-peak hours.

    • Resource occupation during migration: During full migration, DTS will occupy read and write resources of the source and destination databases, which may cause database load to increase.

    • Storage space changes after migration: After full migration is completed, the storage space of tables in the destination database may be larger than that in the source database due to increased fragmentation caused by concurrent INSERT operations.

  • FLOAT and DOUBLE column precision description

    Please confirm whether the migration precision of DTS for columns of the FLOAT or DOUBLE data type meets your business expectations. DTS reads the values of these columns by using ROUND(COLUMN, PRECISION). If the precision is not explicitly defined, DTS uses a migration precision of 38 digits for FLOAT and 308 digits for DOUBLE.
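In other words, assuming a hypothetical table t with a FLOAT column f whose precision is not explicitly defined, the value DTS reads is equivalent to:

```sql
SELECT ROUND(f, 38) FROM t;
```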

  • Database name standards

    If the name of the database to be migrated does not comply with the definition standards of RDS SQL Server, you need to manually create a database in RDS SQL Server in advance. Otherwise, the task may not run properly.

Billing

  • Schema migration and full data migration: The instance configuration fee is free of charge.

  • Incremental data migration: The instance configuration fee is charged. For more information, see Billing overview.

  • Internet traffic fee: When the Access Method parameter of the destination database is set to Public IP Address, you are charged for Internet traffic. For more information, see Billing overview.

Permissions required for database accounts

To successfully complete the data migration task, ensure that the database accounts of the source and destination databases have the following permissions:

  • Self-managed SQL Server database (source):

    • Schema migration: SELECT permission

    • Full migration: SELECT permission

    • Incremental migration: sysadmin

  • ApsaraDB RDS for SQL Server instance (destination): read and write permissions on the destination database
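An account with the source-side permissions could be prepared roughly as follows. This is a sketch; `dts_user`, `mytestdata`, and the password are placeholders:

```sql
-- Create a login and a database user for DTS to use.
CREATE LOGIN dts_user WITH PASSWORD = '<strong password>';
GO
USE mytestdata;
GO
CREATE USER dts_user FOR LOGIN dts_user;
GO
-- Schema migration and full migration only need SELECT.
GRANT SELECT TO dts_user;
GO
-- Incremental migration additionally requires sysadmin.
ALTER SERVER ROLE sysadmin ADD MEMBER dts_user;
GO
```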

Preparations

If you need to perform incremental migration, before formally configuring the data migration task, you need to set the recovery mode of the specified database in the self-managed SQL Server database to full mode (FULL) to ensure that transaction logs are completely recorded. You also need to save full data and incremental data through logical backups and log backups respectively, providing a foundation for subsequent data migration.

Important

If you need to migrate multiple databases, repeat Steps 1 to 3 below for each database. Otherwise, data inconsistency may occur.

  1. Execute the following command in the self-managed SQL Server database to change the recovery mode of the database to be migrated to full mode.

    use master;
    GO
    ALTER DATABASE <database name to be migrated> SET RECOVERY FULL WITH ROLLBACK IMMEDIATE;
    GO

    Example:

    use master;
    GO
    ALTER DATABASE mytestdata SET RECOVERY FULL WITH ROLLBACK IMMEDIATE;
    GO
  2. Execute the following command to perform a logical backup of the database to be migrated. If you have already performed a logical backup, you can skip this step.

    BACKUP DATABASE <database name to be migrated> TO DISK='<specify the storage path and file name of the backup file>';
    GO

    Example:

    BACKUP DATABASE mytestdata TO DISK='D:\backup\dbdata.bak';
    GO
  3. Execute the following command to back up the logs of the database to be migrated.

    BACKUP LOG <database name to be migrated> TO DISK='<specify the storage path and file name of the backup file>' WITH INIT;
    GO

    Example:

    BACKUP LOG mytestdata TO DISK='D:\backup\dblog.bak' WITH INIT;
    GO

Procedure

  1. Visit the Data Transmission Service (DTS) console.

  2. In the left-side navigation pane, click Data Migration, and select a region at the top.

  3. Click Create Task, and configure the source and destination database information.

    Category

    Parameter

    Description

    N/A

    Task Name

    Configure a name that has business meaning (uniqueness is not required) for easy identification, or keep the system-generated task name.

    Source Database

    Select Existing Connection

    If you have already entered the source database information on the DTS Data Connection Management page, you can select that database here and skip manually entering the source database information below.

    Database Type

    Select SQL Server.

    Access Method

    Select Public IP Address.

    Note

    When selecting a self-managed database, you also need to perform corresponding preparations.

    Instance Region

    Select the region to which the self-managed SQL Server database belongs.

    Hostname Or IP Address

    Enter the access address of the self-managed SQL Server database. In this example, enter the public IP address.

    Port Number

    Enter the service port of the self-managed SQL Server database. The default is 1433.

    Database Account

    Enter the database account of the self-managed SQL Server. For permission requirements, see Permissions required for database accounts.

    Database Password

    The password that is used to access the database instance.

    Encryption

    • If the source database has not enabled SSL encryption, select Non-encrypted.

    • If the source database has enabled SSL encryption, select SSL-encrypted, and DTS will trust the server certificate by default.

    Destination Database

    Select Existing Connection

    If you have already entered the destination database information on the DTS Data Connection Management page, you can select that database here and skip manually entering the destination database information below.

    Database Type

    Select SQL Server.

    Access Method

    Select Alibaba Cloud Instance.

    Instance Region

    Select the region to which the destination ApsaraDB RDS for SQL Server instance belongs.

    Instance ID

    Select the destination ApsaraDB RDS for SQL Server instance ID.

    Database Account

    Enter the database account of the destination ApsaraDB RDS for SQL Server instance. For permission requirements, see Permissions required for database accounts.

    Database Password

    The password that is used to access the database instance.

    Encryption

    • If the destination database has not enabled SSL encryption, select Non-encrypted.

    • If the destination database has enabled SSL encryption, select SSL-encrypted, and DTS will trust the server certificate by default.

  4. After configuration is complete, click Test Connectivity and Proceed at the bottom of the page. In the CIDR Blocks of DTS Servers dialog box that appears, click Test Connectivity.

    Important

    Please ensure that you have added the IP address ranges of the DTS service to the security settings of the source database to allow access from DTS servers.

  5. Configure the objects to be migrated.

    1. On the Configure Objects page, configure the objects that you want to migrate.

      Parameter

      Description

      Migration Types

      • For full data migration: We recommend that you select Schema Migration and Full Data Migration.

      • For migration without downtime: We recommend that you select Schema Migration, Full Data Migration, and Incremental Data Migration.


      Method to Migrate Triggers in Source Database

      Select the method for migrating triggers as needed. If the objects you want to migrate do not involve triggers, you do not need to configure this parameter. For more information, see Configure the method for synchronizing or migrating triggers.

      Note

      You can configure this parameter only when you select both Schema Migration and Incremental Data Migration for the Migration Types parameter.

      SQL Server Incremental Synchronization Mode

      • Log-based Parsing for Non-heap Tables and CDC-based Incremental Synchronization for Heap Tables (Hybrid Log-based Parsing)

        Limits

        • DTS incremental migration depends on the CDC component. Make sure that the CDC job in the source database runs properly. Otherwise, the task fails.

        • CDC incremental data is retained for three days by default. We recommend that you run the exec sys.sp_cdc_change_job @job_type = 'cleanup', @retention = <time>; command to adjust the retention period. <time> is measured in minutes. If the number of incremental SQL operations on a single table in the source database exceeds 10 million per day, we recommend that you set the value to 1440.

        • You can enable CDC for a maximum of 1,000 tables in a single migration task. Otherwise, latency or instability may occur.

        • The incremental migration module enables CDC for the source database. Due to the limitations of the SQL Server database kernel, tables are temporarily locked.

        • DTS creates the dts_cdc_sync_ddl trigger, the dts_sync_progress heartbeat table, and the dts_cdc_ddl_history DDL history table in the source database. DTS also enables database-level CDC and table-level CDC for some tables.

        • You cannot execute the SELECT INTO, TRUNCATE, or RENAME COLUMN statement on tables for which CDC is enabled in the source database. You cannot manually delete the triggers that DTS creates in the source database.

        Benefits

        • This mode supports heap tables, tables without primary keys, compressed tables, and tables with computed columns in the source database.

        • This mode provides high link stability. Complete DDL statements can be obtained, and a wide range of DDL scenarios are supported.
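For example, to extend CDC retention to one day as suggested in the limits above, the following can be run in the source database; the job settings can then be checked in msdb:

```sql
-- Set CDC cleanup retention to 1440 minutes (1 day).
EXEC sys.sp_cdc_change_job
    @job_type = 'cleanup',
    @retention = 1440;

-- Verify the cleanup job settings.
SELECT job_type, retention
FROM msdb.dbo.cdc_jobs;
```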

      • Incremental Synchronization Based on Logs of Source Database (Heap tables are not supported)

        Limits

        • The tables to be migrated must have clustered indexes, and the clustered indexes must contain primary key columns.

        • This mode does not support heap tables, tables without primary keys, compressed tables, or tables with computed columns. You can run the following SQL statements to check whether such tables exist in the source database:

          1. Execute the following SQL statement to check for heap tables:

            SELECT s.name AS schema_name, t.name AS table_name FROM sys.schemas s INNER JOIN sys.tables t ON s.schema_id = t.schema_id AND t.type = 'U' AND s.name NOT IN ('cdc', 'sys') AND t.name NOT IN ('systranschemas') AND t.object_id IN (SELECT object_id FROM sys.indexes WHERE index_id = 0);
          2. Execute the following SQL statement to check for tables without primary keys:

            SELECT s.name AS schema_name, t.name AS table_name FROM sys.schemas s INNER JOIN sys.tables t ON s.schema_id = t.schema_id AND t.type = 'U' AND s.name NOT IN ('cdc', 'sys') AND t.name NOT IN ('systranschemas') AND t.object_id NOT IN (SELECT parent_object_id FROM sys.objects WHERE type = 'PK');
          3. Execute the following SQL statement to check for primary key columns that are not contained in clustered index columns:

            SELECT s.name schema_name, t.name table_name FROM sys.schemas s INNER JOIN sys.tables t ON s.schema_id = t.schema_id WHERE t.type = 'U' AND s.name NOT IN('cdc', 'sys') AND t.name NOT IN('systranschemas') AND t.object_id IN ( SELECT pk_colums_counter.object_id AS object_id FROM (select pk_colums.object_id, sum(pk_colums.column_id) column_id_counter from (select sic.object_id object_id, sic.column_id FROM sys.index_columns sic, sys.indexes sis WHERE sic.object_id = sis.object_id AND sic.index_id = sis.index_id AND sis.is_primary_key = 'true') pk_colums group by object_id) pk_colums_counter inner JOIN ( select cluster_colums.object_id, sum(cluster_colums.column_id) column_id_counter from (SELECT sic.object_id object_id, sic.column_id FROM sys.index_columns sic, sys.indexes sis WHERE sic.object_id = sis.object_id AND sic.index_id = sis.index_id AND sis.index_id = 1) cluster_colums group by object_id ) cluster_colums_counter ON pk_colums_counter.object_id = cluster_colums_counter.object_id and pk_colums_counter.column_id_counter != cluster_colums_counter.column_id_counter);
          4. Execute the following SQL statement to check for compressed tables:

            SELECT s.name AS schema_name, t.name AS table_name FROM sys.objects t, sys.schemas s, sys.partitions p WHERE s.schema_id = t.schema_id AND t.type = 'U' AND s.name NOT IN ('cdc', 'sys') AND t.name NOT IN ('systranschemas') AND t.object_id = p.object_id AND p.data_compression != 0;
          5. Execute the following SQL statement to check for tables with computed columns:

            SELECT s.name AS schema_name, t.name AS table_name FROM sys.schemas s INNER JOIN sys.tables t ON s.schema_id = t.schema_id AND t.type = 'U' AND s.name NOT IN ('cdc', 'sys') AND t.name NOT IN ('systranschemas') AND t.object_id IN (SELECT object_id FROM sys.columns WHERE is_computed = 1);

        Benefits

        This mode does not intrude on the source database.

      • Polling and querying CDC instances for incremental synchronization

        Limits

        • The source database account must have the permissions to enable CDC: The sysadmin role is required for database-level CDC, and a privileged account is required for table-level CDC.

        • CDC cannot be enabled for tables with clustered columnstore indexes.

        • The incremental migration module enables CDC for the source database. Due to the limitations of the SQL Server database kernel, tables are temporarily locked.

        • The number of tables to be migrated from the source database cannot exceed 1,000. Otherwise, latency or instability may occur.

        • CDC incremental data is retained for three days by default. We recommend that you run the exec sys.sp_cdc_change_job @job_type = 'cleanup', @retention = <time>; command to adjust the retention period. <time> is measured in minutes. If the number of incremental SQL operations on a single table exceeds 10 million per day, we recommend that you set the value to 1440.

        • Consecutive column addition and deletion operations (more than two DDL operations within one minute) are not supported. Otherwise, the task may fail.

        • Changes to the CDC instance in the source database are not supported. Otherwise, the task may fail or data may be lost.

        Benefits

        • This mode supports full data migration and incremental data migration when the source database is Amazon RDS for SQL Server, Azure SQL Database, Azure SQL Managed Instance, Azure SQL Server on Virtual Machine, or Google Cloud SQL for SQL Server.

        • This mode uses the native CDC component of SQL Server to obtain incremental data, which makes incremental migration more stable and consumes less network bandwidth.

      Note

      This parameter is available only when Migration Types includes Incremental Data Migration.

      Maximum number of tables for which CDC can be enabled

      Set an appropriate upper limit on the number of tables for which CDC can be enabled in the current migration instance. The default value is 1000.

      Note

      When SQL Server Incremental Synchronization Mode is set to Incremental Synchronization Based on Logs of Source Database (Heap tables are not supported), this configuration item will not appear.

      Processing Mode of Conflicting Tables

      • Precheck and Report Errors: DTS checks whether tables with the same names exist in the destination database. If no tables with the same names exist in the destination database, the precheck is passed and the data migration task starts. Otherwise, an error is returned during the precheck and the data migration task does not start.

        Solution: If you cannot delete or rename the tables with the same names in the destination database, you can change the names of the tables in the destination database by configuring object name mapping.

      • Ignore Errors and Proceed: DTS skips the precheck for tables with the same names in the destination database.

        Warning

        If you select Ignore Errors and Proceed, data inconsistency may occur and your business may be exposed to risks. For example:

        • If the source and destination tables have the same schema and a record in the destination table has the same primary key value as a record in the source table:

          • During full data migration, DTS retains the existing records in the destination table and does not migrate the records with the same primary key values from the source table to the destination table.

          • During incremental data migration, the data in the destination table may be overwritten by the new data from the source table, resulting in data loss in the destination table.

        • If the source and destination tables have different schemas, only some columns can be migrated or the data migration task may fail. Proceed with caution.

      Source Objects

      Select one or more objects from the Source Objects section. Click the rightwards arrow icon to add the objects to the Selected Objects section.

      Note

      You can select columns, tables, or schemas as the objects to be migrated. If you select tables or columns as the objects to be migrated, DTS does not migrate other objects, such as views, triggers, or stored procedures, to the destination database.

      Selected Objects

      • To rename an object that you want to migrate to the destination instance, right-click the object in the Selected Objects section. For more information, see Map the name of a single object.

      • To rename multiple objects at a time, click Batch Edit in the upper-right corner of the Selected Objects section. For more information, see Map multiple object names at a time.

      Note
      • If you use the object name mapping feature, objects that depend on the renamed objects may fail to be migrated.

      • To set WHERE conditions to filter data, right-click the table to be migrated in Selected Objects and set filter conditions in the dialog box that appears.

      • To select SQL operations to be migrated at the database or table level, right-click the object to be migrated in Selected Objects and select the SQL operations that you want to migrate in the dialog box that appears.

    2. Click Next: Advanced Settings to configure advanced settings.

      Parameter

      Description

      Dedicated Cluster for Task Scheduling

      By default, DTS schedules the data migration task to the shared cluster if you do not specify a dedicated cluster. If you want to improve the stability of data migration tasks, purchase a dedicated cluster. For more information, see What is a DTS dedicated cluster.

      Retry Time for Failed Connections

      The retry time range for failed connections. If the source or destination database fails to be connected after the data migration task is started, DTS immediately retries a connection within the retry time range. Valid values: 10 to 1,440. Unit: minutes. Default value: 720. We recommend that you set the parameter to a value greater than 30. If DTS is reconnected to the source and destination databases within the specified retry time range, DTS resumes the data migration task. Otherwise, the data migration task fails.

      Note
      • If you specify different retry time ranges for multiple data migration tasks that share the same source or destination database, the value that is specified later takes precedence.

      • When DTS retries a connection, you are charged for the DTS instance. We recommend that you specify the retry time range based on your business requirements. You can also release the DTS instance at the earliest opportunity after the source database and destination instance are released.

      Retry Time for Other Issues

      The retry time range for other issues. For example, if DDL or DML operations fail to be performed after the data migration task is started, DTS immediately retries the operations within the retry time range. Valid values: 1 to 1440. Unit: minutes. Default value: 10. We recommend that you set the parameter to a value greater than 10. If the failed operations are successfully performed within the specified retry time range, DTS resumes the data migration task. Otherwise, the data migration task fails.

      Important

      The value of the Retry Time for Other Issues parameter must be smaller than the value of the Retry Time for Failed Connections parameter.

      Enable Throttling for Full Data Migration

      Specifies whether to enable throttling for full data migration. During full data migration, DTS uses the read and write resources of the source and destination databases. This may increase the loads of the database servers. You can enable throttling for full data migration based on your business requirements. To configure throttling, you must configure the Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s) parameters. This reduces the loads of the destination database server.

      Note

      You can configure this parameter only if you select Full Data Migration for the Migration Types parameter.

      Enable Throttling for Incremental Data Migration

      Specifies whether to enable throttling for incremental data migration. To configure throttling, you must configure the RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s) parameters. This reduces the loads of the destination database server.

      Note

      You can configure this parameter only if you select Incremental Data Migration for the Migration Types parameter.

      Environment Tag

      The environment tag that is used to identify the DTS instance. You can select an environment tag based on your business requirements. In this example, you do not need to configure this parameter.

      Configure ETL

      Specifies whether to enable the extract, transform, and load (ETL) feature. For more information, see What is ETL?

      Monitoring and Alerting

      Specifies whether to configure alerting for the data migration task. If the task fails or the migration latency exceeds the specified threshold, the alert contacts receive notifications.

    3. Click Next Step: Data Verification to configure the data verification task.

      For more information about how to use the data verification feature, see Configure a data verification task.

  6. Save the task settings and run a precheck.

    • To view the parameters to be specified when you call the relevant API operation to configure the DTS task, move the pointer over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.

    • If you do not need to view or have viewed the parameters, click Next: Save Task Settings and Precheck in the lower part of the page.

    Note
    • Before you can start the data migration task, DTS performs a precheck. You can start the data migration task only after the task passes the precheck.

    • If the task fails to pass the precheck, click View Details next to each failed item. After you analyze the causes based on the check results, troubleshoot the issues. Then, run a precheck again.

    • If an alert is triggered for an item during the precheck:

      • If an alert item cannot be ignored, click View Details next to the failed item and troubleshoot the issues. Then, run a precheck again.

      • If the alert item can be ignored, click Confirm Alert Details. In the View Details dialog box, click Ignore. In the message that appears, click OK. Then, click Precheck Again to run a precheck again. If you ignore the alert item, data inconsistency may occur, and your business may be exposed to potential risks.

  7. Purchase an instance.

    1. Wait until Success Rate becomes 100%. Then, click Next: Purchase Instance.

    2. On the Purchase Instance page, configure the Instance Class parameter for the data migration instance. The following table describes the parameters.

      Section

      Parameter

      Description

      New Instance Class

      Resource Group

      The resource group to which the data migration instance belongs. Default value: default resource group. For more information, see What is Resource Management?

      Instance Class

      DTS provides instance classes that vary in the migration speed. You can select an instance class based on your business scenario. For more information, see Instance classes of data migration instances.

    3. Read the Data Transmission Service (Pay-as-you-go) Service Terms and select the check box to agree to them.

    4. Click Buy and Start. In the message that appears, click OK.

      You can view the progress of the task on the Data Migration page.

      Note
      • If the data migration task does not include incremental data migration, the task automatically stops after it is complete, and Completed is displayed in the Status section.

      • If the data migration task includes incremental data migration, the task does not automatically stop. Incremental data migration runs continuously, and Running is displayed in the Status section.
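This difference matters if you script status monitoring: a task with incremental migration never reaches a terminal state on its own, so a naive "wait until Completed" loop would block forever. The sketch below illustrates the point; `get_status` is a hypothetical callable standing in for however you query task status (for example, via the DTS OpenAPI) and is not itself a DTS API:

```python
import time

def wait_for_completion(get_status, includes_incremental: bool,
                        max_polls: int = 10, poll_interval: float = 0.0) -> str:
    """Poll a migration task's status.

    get_status is a hypothetical stand-in for a real DTS status query.
    Tasks without incremental migration eventually report 'Completed';
    tasks with incremental migration stay 'Running' indefinitely, so
    the loop gives up after max_polls instead of blocking forever.
    """
    status = get_status()
    for _ in range(max_polls):
        if status == "Completed":
            return status
        if includes_incremental and status == "Running":
            # Incremental tasks never reach Completed on their own;
            # keep polling only up to max_polls.
            pass
        time.sleep(poll_interval)
        status = get_status()
    return status
```

For an incremental task, ending the migration is a deliberate action (stopping the task in the console) rather than a state the task reaches by itself.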

Appendix 1: SQL operations that support incremental migration

DML operations

INSERT, UPDATE, DELETE

Note

If an UPDATE operation updates only large fields, DTS does not migrate the operation.

DDL operations

  • CREATE TABLE

    Note

    If a CREATE TABLE operation creates a partitioned table or a table that contains functions, DTS does not migrate the operation.

  • ALTER TABLE

    ALTER TABLE operations include only ADD COLUMN and DROP COLUMN.

  • DROP TABLE

  • CREATE INDEX, DROP INDEX

Note
  • Transactional DDL statements cannot be migrated. For example, DTS does not migrate an SQL statement that contains DDL operations on multiple columns, or a statement that contains both DDL and DML operations. If such statements are migrated, data loss may occur.

  • DTS does not migrate DDL operations that contain user-defined types.

  • DTS does not migrate online DDL operations.

  • DTS does not migrate DDL operations performed on objects whose names contain reserved keywords.

  • DTS does not migrate DDL operations performed in system stored procedures.

  • DTS does not migrate the TRUNCATE TABLE operation.
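If you want to pre-screen your own DDL change scripts against the restrictions above, the rules can be approximated mechanically. The following Python sketch applies the listed restrictions by plain string matching; it is not a T-SQL parser and is not how DTS itself decides (for example, it does not detect partitioned tables or function usage in CREATE TABLE):

```python
import re

# DDL statement types listed above as eligible for incremental migration.
SUPPORTED_PREFIXES = ("CREATE TABLE", "ALTER TABLE", "DROP TABLE",
                      "CREATE INDEX", "DROP INDEX")

def dts_migrates_ddl(statement: str) -> bool:
    """Rough check of the incremental-migration DDL rules above.
    String matching only; a sketch, not DTS's actual logic."""
    sql = " ".join(statement.upper().split())
    if sql.startswith("TRUNCATE TABLE"):
        return False                      # TRUNCATE TABLE is not migrated
    if not sql.startswith(SUPPORTED_PREFIXES):
        return False                      # unsupported statement type
    if sql.startswith("ALTER TABLE"):
        # Only ADD COLUMN / DROP COLUMN variants are migrated.
        return bool(re.search(r"\bADD\b|\bDROP COLUMN\b", sql))
    return True
```

For example, `dts_migrates_ddl("ALTER TABLE t ADD c INT")` passes the filter, while `ALTER TABLE t ALTER COLUMN c BIGINT` and `TRUNCATE TABLE t` do not.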

Appendix 2: Objects that support structure migration

  • DTS supports schema migration for the following types of objects: table, view, trigger, synonym, SQL stored procedure, SQL function, plan guide, user-defined type, rule, default, and sequence.

  • DTS does not migrate the schemas of assemblies, service brokers, full-text indexes, full-text catalogs, distributed schemas, distributed functions, Common Language Runtime (CLR) stored procedures, CLR scalar-valued functions, CLR table-valued functions, internal tables, systems, or aggregate functions.
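When you inventory a source database before migration, it can help to flag objects that fall outside the supported list above so you can recreate them on the destination manually. A minimal lookup, using the object-type names exactly as listed in this appendix:

```python
# Object types for which DTS supports schema migration, per the list above.
SCHEMA_MIGRATION_SUPPORTED = {
    "table", "view", "trigger", "synonym", "sql stored procedure",
    "sql function", "plan guide", "user-defined type", "rule",
    "default", "sequence",
}

def supports_schema_migration(object_type: str) -> bool:
    """True if DTS migrates the schema of this object type."""
    return object_type.strip().lower() in SCHEMA_MIGRATION_SUPPORTED
```

Anything that returns False here (assemblies, full-text indexes, CLR objects, and so on) must be deployed to the ApsaraDB RDS for SQL Server instance by other means, such as running the original creation scripts.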