This topic describes how to migrate data from a self-managed SQL Server database to an ApsaraDB RDS for SQL Server instance by using Data Transmission Service (DTS).
Prerequisites
- The version of the self-managed SQL Server database is supported by DTS. For more information, see Overview of data migration scenarios.
- The destination ApsaraDB RDS for SQL Server instance is created. For more information, see Create an ApsaraDB RDS for SQL Server instance.
- The available storage space of the ApsaraDB RDS for SQL Server instance is larger than the total size of the data in the self-managed SQL Server database.
Limits
Category | Description |
---|---|
Limits on the source database | |
Other limits | |
Special cases | If the destination instance is an ApsaraDB RDS for SQL Server instance, take note of the following limits: DTS automatically creates a destination database in the ApsaraDB RDS for SQL Server instance. However, if the name of the source database is invalid, you must create a database in the ApsaraDB RDS for SQL Server instance before you configure the data migration task. For more information, see Create a database on an ApsaraDB RDS for SQL Server instance. |
Migration types
- Schema migration
DTS migrates the schemas of required objects from the source database to the destination database.
- DTS supports schema migration for the following types of objects: table, view, trigger, synonym, SQL stored procedure, SQL function, plan guide, user-defined type, rule, default, and sequence.
- DTS does not migrate the schemas of assemblies, service brokers, full-text indexes, full-text catalogs, distributed schemas, distributed functions, CLR stored procedures, CLR scalar-valued functions, CLR table-valued functions, internal tables, systems, or aggregate functions.
- Full data migration
DTS migrates historical data of required objects from the source database to the destination database.
- Incremental data migration
After full data migration is complete, DTS migrates incremental data from the source database to the destination database. Incremental data migration ensures service continuity during the migration process.
SQL operations that can be migrated
Operation type | SQL statement |
---|---|
DML | INSERT, UPDATE, and DELETE. Note: If an UPDATE operation updates only the large fields, DTS does not migrate the operation. |
DDL | Note: For source instances in mixed log-based parsing mode, all common DDL operations can be migrated. |
Permissions required for database accounts
Database | Schema migration | Full data migration | Incremental data migration |
---|---|---|---|
Self-managed SQL Server database | The SELECT permission | The SELECT permission | Permissions of the system administrator |
ApsaraDB RDS for SQL Server instance | Read and write permissions | Read and write permissions | Read and write permissions |
For more information about how to create a database account and grant permissions to the account, see the following topics:
- Self-managed SQL Server databases: CREATE USER
- ApsaraDB RDS for SQL Server instances: Create an account for an RDS SQL Server instance and Modify the permissions of a standard account on an ApsaraDB RDS for SQL Server instance
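As an illustration, the permissions described in the table above could be granted on the self-managed source database as follows. This is only a sketch: `dts_migrator` and `mytestdata` are hypothetical names, and you should substitute your own login, password, and database.

```sql
-- dts_migrator and mytestdata are hypothetical names used for illustration.
USE master;
GO
CREATE LOGIN dts_migrator WITH PASSWORD = '<strong_password>';
GO
USE mytestdata;
GO
CREATE USER dts_migrator FOR LOGIN dts_migrator;
GO
-- Schema migration and full data migration require only the SELECT permission.
GRANT SELECT TO dts_migrator;
GO
-- Incremental data migration requires system administrator permissions.
ALTER SERVER ROLE sysadmin ADD MEMBER dts_migrator;
GO
```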
Data migration process
To prevent data migration failures caused by dependencies among objects, DTS migrates the schemas and data from the source SQL Server database in the following order:
1. Migrate the schemas of tables, views, synonyms, user-defined types, rules, defaults, and plan guides.
2. Perform full data migration.
3. Migrate the schemas of SQL stored procedures, SQL functions, triggers, and foreign keys.
4. Perform incremental data migration.
Preparations
- Run the following command in the self-managed SQL Server database to change the recovery model to full:

  ```sql
  USE master;
  GO
  ALTER DATABASE <database_name> SET RECOVERY FULL WITH ROLLBACK IMMEDIATE;
  GO
  ```

  Parameters:
  - `<database_name>`: the name of the source database.

  Example:

  ```sql
  USE master;
  GO
  ALTER DATABASE mytestdata SET RECOVERY FULL WITH ROLLBACK IMMEDIATE;
  GO
  ```
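To confirm that the recovery model change took effect, you can query the `sys.databases` catalog view (`mytestdata` is the example database name from above):

```sql
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'mytestdata';
-- recovery_model_desc should be FULL
```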
- Run the following command to create a full backup of the source database. Skip this step if you have already created one:

  ```sql
  BACKUP DATABASE <database_name> TO DISK='<physical_backup_device_name>';
  GO
  ```

  Parameters:
  - `<database_name>`: the name of the source database.
  - `<physical_backup_device_name>`: the storage path and file name of the backup file.

  Example:

  ```sql
  BACKUP DATABASE mytestdata TO DISK='D:\backup\dbdata.bak';
  GO
  ```
- Run the following command to back up the transaction log of the source database:

  ```sql
  BACKUP LOG <database_name> TO DISK='<physical_backup_device_name>' WITH INIT;
  GO
  ```

  Parameters:
  - `<database_name>`: the name of the source database.
  - `<physical_backup_device_name>`: the storage path and file name of the backup file.

  Example:

  ```sql
  BACKUP LOG mytestdata TO DISK='D:\backup\dblog.bak' WITH INIT;
  GO
  ```
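To verify that the backup files were written correctly, you can inspect their headers with `RESTORE HEADERONLY` (the paths below are the example paths from the previous steps):

```sql
RESTORE HEADERONLY FROM DISK = 'D:\backup\dbdata.bak';
RESTORE HEADERONLY FROM DISK = 'D:\backup\dblog.bak';
```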
- Incremental data migration by parsing the logs of the source database is not supported for heap tables, tables without primary keys, compressed tables, or tables with computed columns. Execute the following SQL statements to check whether the source database contains these table types:

  - Check for heap tables:

    ```sql
    SELECT s.name, t.name
    FROM sys.schemas s
    INNER JOIN sys.tables t ON s.schema_id = t.schema_id
    WHERE t.type = 'U'
      AND t.object_id IN (SELECT object_id FROM sys.indexes WHERE index_id = 0);
    ```

  - Check for tables without primary keys:

    ```sql
    SELECT s.name, t.name
    FROM sys.schemas s
    INNER JOIN sys.tables t ON s.schema_id = t.schema_id
    WHERE t.type = 'U'
      AND t.object_id NOT IN (SELECT parent_object_id FROM sys.objects WHERE type = 'PK');
    ```

  - Check for compressed tables:

    ```sql
    SELECT s.name AS schema_name, t.name AS table_name
    FROM sys.objects t, sys.schemas s, sys.partitions i
    WHERE s.schema_id = t.schema_id
      AND t.object_id = i.object_id
      AND i.data_compression != 0;
    ```

  - Check for tables with computed columns:

    ```sql
    SELECT s.name, t.name
    FROM sys.schemas s
    INNER JOIN sys.tables t ON s.schema_id = t.schema_id
    WHERE t.type = 'U'
      AND t.object_id IN (SELECT object_id FROM sys.columns WHERE is_computed = 1);
    ```
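The four checks above can also be combined into a single query that lists every unsupported table together with the reason it cannot be migrated incrementally. This is a sketch assembled from the individual queries:

```sql
-- Lists tables that incremental data migration cannot handle, with the reason.
SELECT s.name AS schema_name, t.name AS table_name, 'heap table' AS reason
FROM sys.schemas s INNER JOIN sys.tables t ON s.schema_id = t.schema_id
WHERE t.type = 'U'
  AND t.object_id IN (SELECT object_id FROM sys.indexes WHERE index_id = 0)
UNION ALL
SELECT s.name, t.name, 'no primary key'
FROM sys.schemas s INNER JOIN sys.tables t ON s.schema_id = t.schema_id
WHERE t.type = 'U'
  AND t.object_id NOT IN (SELECT parent_object_id FROM sys.objects WHERE type = 'PK')
UNION ALL
SELECT s.name, t.name, 'compressed'
FROM sys.schemas s INNER JOIN sys.tables t ON s.schema_id = t.schema_id
WHERE t.type = 'U'
  AND t.object_id IN (SELECT object_id FROM sys.partitions WHERE data_compression != 0)
UNION ALL
SELECT s.name, t.name, 'computed column'
FROM sys.schemas s INNER JOIN sys.tables t ON s.schema_id = t.schema_id
WHERE t.type = 'U'
  AND t.object_id IN (SELECT object_id FROM sys.columns WHERE is_computed = 1);
```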