This topic describes how to migrate data from a self-managed SQL Server database to an AnalyticDB for PostgreSQL instance by using Data Transmission Service (DTS). The data migration feature allows you to transfer data with ease and analyze data in real time.
Prerequisites
- You can configure a data migration task in this scenario only in the new DTS console.
- The version of the self-managed SQL Server database is supported by DTS. For more information, see Overview of data migration scenarios.
- The destination AnalyticDB for PostgreSQL instance is created. For more information, see Create an AnalyticDB for PostgreSQL instance.
- The available storage space of the destination AnalyticDB for PostgreSQL instance is larger than the total size of the data in the self-managed SQL Server database.
Limits
Category | Description |
---|---|
Limits on the source database | |
Other limits | |
Migration types
- Schema migration
DTS migrates the schemas of required objects from the source database to the destination database.
- DTS supports schema migration for the following types of objects: schema, table, view, function, and procedure.
- DTS does not migrate the schemas of assemblies, service brokers, full-text indexes, full-text catalogs, distributed schemas, distributed functions, Common Language Runtime (CLR) stored procedures, CLR scalar-valued functions, CLR table-valued functions, internal tables, systems, or aggregate functions.
Warning SQL Server and AnalyticDB for PostgreSQL are heterogeneous databases. Their data types do not have one-to-one correspondence. We recommend that you evaluate the impact of data type conversion on your business. For more information, see Data type mappings between heterogeneous databases.
- Full data migration
DTS migrates historical data of required objects from the source database to the destination database.
- Incremental data migration
After full data migration is complete, DTS migrates incremental data from the source database to the destination database. Incremental data migration ensures service continuity during the migration.
SQL operations that can be migrated
Operation type | SQL statement |
---|---|
DML | INSERT, UPDATE, and DELETE |
DDL | |
Permissions required for database accounts
Database | Schema migration | Full data migration | Incremental data migration |
---|---|---|---|
Self-managed SQL Server database | The SELECT permission | The SELECT permission | sysadmin |
AnalyticDB for PostgreSQL instance | | | |

Note: You can use the initial account of the AnalyticDB for PostgreSQL instance.
For more information about how to create a database account and grant permissions to it, see the following topics:
- Self-managed SQL Server databases: CREATE USER and GRANT (Transact-SQL)
- AnalyticDB for PostgreSQL instances: Create a database account and Manage users and permissions
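The permissions listed above can be granted on the source instance with standard Transact-SQL. The following is a minimal sketch, assuming a hypothetical login named dts_migrator and the source database mytestdata; ALTER SERVER ROLE requires SQL Server 2012 or later (use sp_addsrvrolemember on earlier versions):

```sql
-- Run on the self-managed SQL Server instance.
-- dts_migrator, the password, and mytestdata are example values; replace them with your own.
CREATE LOGIN dts_migrator WITH PASSWORD = 'YourStrong!Passw0rd';
GO
USE mytestdata;
GO
CREATE USER dts_migrator FOR LOGIN dts_migrator;
GO
-- The SELECT permission is sufficient for schema migration and full data migration.
GRANT SELECT TO dts_migrator;
GO
-- Incremental data migration additionally requires the sysadmin server role.
ALTER SERVER ROLE sysadmin ADD MEMBER dts_migrator;
GO
```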
Preparations
- Run the following statements in the self-managed SQL Server database to change the recovery model to full:

      use master;
      GO
      ALTER DATABASE <database_name> SET RECOVERY FULL WITH ROLLBACK IMMEDIATE;
      GO

  Parameter:
  - <database_name>: the name of the source database.

  Example:

      use master;
      GO
      ALTER DATABASE mytestdata SET RECOVERY FULL WITH ROLLBACK IMMEDIATE;
      GO
- Run the following statements to create a logical backup for the source database. Skip this step if you have already created a logical backup.

      BACKUP DATABASE <database_name> TO DISK='<physical_backup_device_name>';
      GO

  Parameters:
  - <database_name>: the name of the source database.
  - <physical_backup_device_name>: the storage path and file name of the backup file.

  Example:

      BACKUP DATABASE mytestdata TO DISK='D:\backup\dbdata.bak';
      GO
- Run the following statements to back up the log entries of the source database:

      BACKUP LOG <database_name> TO DISK='<physical_backup_device_name>' WITH INIT;
      GO

  Parameters:
  - <database_name>: the name of the source database.
  - <physical_backup_device_name>: the storage path and file name of the backup file.

  Example:

      BACKUP LOG mytestdata TO DISK='D:\backup\dblog.bak' WITH INIT;
      GO
- Incremental data migration, which parses the logs of the source database, is not supported for heap tables, tables without primary keys, compressed tables, or tables with computed columns. Execute the following SQL statements to check whether the source database contains these table types:
- Check for heap tables.
select s.name, t.name from sys.schemas s inner join sys.tables t on s.schema_id = t.schema_id where t.type = 'U' and t.object_id in (select object_id from sys.indexes where index_id = 0)
- Check for tables without primary keys.
select s.name, t.name from sys.schemas s inner join sys.tables t on s.schema_id = t.schema_id where t.type = 'U' and t.object_id not in (select parent_object_id from sys.objects where type = 'PK')
- Check for compressed tables.
select s.name as schema_name,t.name as table_name from sys.objects t, sys.schemas s, sys.partitions i where s.schema_id=t.schema_id and t.object_id = i.object_id and i.data_compression != 0
- Check for tables with computed columns.
select s.name, t.name from sys.schemas s inner join sys.tables t on s.schema_id = t.schema_id where t.type = 'U' and t.object_id in (select object_id from sys.columns where is_computed = 1)
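If any of the checks above return rows, those tables are not eligible for incremental data migration. Depending on your schema, you may be able to make a table eligible before you configure the task. The following is a hedged sketch, assuming a hypothetical table dbo.mytable that is a heap without a primary key and uses data compression; computed columns cannot be removed without changing the schema, so evaluate that impact separately:

```sql
-- dbo.mytable and the id column are example names; adjust them to your schema.
-- Adding a clustered primary key resolves both the heap and the missing-primary-key checks.
ALTER TABLE dbo.mytable ADD CONSTRAINT pk_mytable PRIMARY KEY CLUSTERED (id);
GO
-- Rebuild the table without data compression to resolve the compression check.
ALTER TABLE dbo.mytable REBUILD WITH (DATA_COMPRESSION = NONE);
GO
```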