You can use Data Transmission Service (DTS) to migrate a PolarDB for PostgreSQL (Compatible with Oracle) cluster to an AnalyticDB for MySQL 3.0 cluster.
Prerequisites
A destination AnalyticDB for MySQL V3.0 cluster is created. For more information, see Create a cluster.
In the source PolarDB for PostgreSQL (Compatible with Oracle) cluster, the wal_level parameter is set to logical. This adds the information required for logical replication to the write-ahead log (WAL). For more information, see Set cluster parameters.
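The parameter itself is changed on the console parameter page, but you can confirm the current value from any client connected to the source cluster. A minimal check, using standard PostgreSQL syntax:

```sql
-- Run from a client connected to the source cluster.
-- The query should return 'logical' once the parameter change has taken effect.
SHOW wal_level;
```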
Precautions
During schema migration, DTS does not migrate foreign keys from the source database to the destination database.
During full data migration and incremental data migration, DTS temporarily disables constraint checks and foreign key cascade operations at the session level. If cascade update or delete operations occur in the source database while the task is running, data inconsistency may occur.
Billing
Migration type | Instance configuration fee | Internet traffic fee
Schema migration and full data migration | Free of charge. | When the Access Method parameter of the destination database is set to Public IP Address, you are charged for Internet traffic. For more information, see Billing overview.
Incremental data migration | Charged. For more information, see Billing overview. | Same as above.
Migration types
Migration type | Note |
Schema migration | DTS migrates the schema definitions of migration objects to the destination database. Currently, DTS supports schema migration for tables. |
Full data migration | DTS migrates all historical data of the migration objects from the source database to the destination database. Note: Before schema migration and full data migration are complete, do not perform DDL operations on the migration objects. Otherwise, the migration may fail. |
Incremental data migration | After the full data migration, DTS polls and captures redo logs from the source database and migrates the incremental data to the destination database. Incremental data migration lets you smoothly migrate data without stopping your applications. |
SQL operations that can be incrementally migrated
Operation type | SQL operation statement |
DML | INSERT, UPDATE, DELETE Note: When data is written to the destination AnalyticDB for MySQL cluster, an UPDATE statement is automatically converted to a REPLACE INTO statement. If an UPDATE statement modifies the primary key, it is converted to DELETE and INSERT statements. |
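The conversion rule above can be sketched as follows. This is an illustrative Python model of the documented behavior, not DTS internals; the function name and row shapes are hypothetical.

```python
# Illustrative sketch (not DTS code): how an incremental UPDATE from the source
# is rewritten for AnalyticDB for MySQL, following the rule described above.

def convert_update(table, pk_cols, old_row, new_row):
    """Rewrite an UPDATE as REPLACE INTO, or as DELETE + INSERT when the
    primary key itself changes. Returns a list of (statement_kind, row) pairs."""
    pk_changed = any(old_row[c] != new_row[c] for c in pk_cols)
    if pk_changed:
        # The primary key changes, so the old row is deleted and the new row inserted.
        return [("DELETE", old_row), ("INSERT", new_row)]
    # The primary key is unchanged, so a single REPLACE INTO suffices.
    return [("REPLACE INTO", new_row)]

print(convert_update("orders", ["id"], {"id": 1, "qty": 2}, {"id": 1, "qty": 5}))
print(convert_update("orders", ["id"], {"id": 1, "qty": 2}, {"id": 9, "qty": 2}))
```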
Permissions required for database accounts
Database | Required permissions | Account creation and authorization |
PolarDB for PostgreSQL (Compatible with Oracle) cluster | A privileged account. | For more information, see Create a database account. |
AnalyticDB for MySQL V3.0 | Read and write permissions on the destination database that contains the migration objects. | For more information, see Create a database account. |
Steps
Go to the migration task list page of the destination region. You can use one of the following methods.
From the DTS console
Log on to the Data Transmission Service (DTS) console.
In the navigation pane on the left, click Data Migration.
In the upper-left corner of the page, select the region where the migration instance is located.
From the DMS console
Note: The actual operations may vary based on the mode and layout of the DMS console. For more information, see Simple mode console and Customize the layout and style of the DMS console.
Log on to the Data Management (DMS) console.
In the top navigation bar, choose .
To the right of Data Migration Tasks, select the region where the migration instance is located.
Click Create Task to go to the task configuration page.
Configure the source and destination databases.
Warning: After you select the source and destination instances, we recommend that you carefully read the limits displayed at the top of the page. Otherwise, the task may fail or data inconsistency may occur.
Category | Configuration | Note
N/A | Task Name | DTS automatically generates a task name. We recommend that you specify a descriptive name for easy identification. The name does not need to be unique.
Source Database | Database Type | Select PolarDB (Compatible with Oracle).
Source Database | Connection Type | Select Cloud Instance.
Source Database | Instance Region | Select the region where the source PolarDB for PostgreSQL (Compatible with Oracle) cluster resides.
Source Database | Instance ID | Select the instance ID of the source PolarDB for PostgreSQL (Compatible with Oracle) cluster.
Source Database | Database Name | Enter the name of the database to which the migration objects belong in the source PolarDB for PostgreSQL (Compatible with Oracle) cluster.
Source Database | Database Account | Enter the database account of the source PolarDB for PostgreSQL (Compatible with Oracle) cluster. For information about the required permissions, see Permissions required for database accounts.
Source Database | Database Password | Enter the password for the database account.
Destination Database | Database Type | Select AnalyticDB for MySQL 3.0.
Destination Database | Connection Type | Select Cloud Instance.
Destination Database | Instance Region | Select the region where the destination AnalyticDB for MySQL 3.0 cluster is located.
Destination Database | Instance ID | Select the ID of the destination AnalyticDB for MySQL 3.0 cluster.
Destination Database | Database Account | Enter the database account of the destination AnalyticDB for MySQL 3.0 cluster. For information about the required permissions, see Permissions required for database accounts.
Destination Database | Database Password | Enter the password for the database account.
After you complete the configuration, click Test Connectivity and Proceed at the bottom of the page.
Note: Ensure that the IP address segments of the DTS service are automatically or manually added to the security settings of the source and destination databases to allow access from DTS servers. For more information, see Add DTS server IP addresses to a whitelist.
If the source or destination database is a self-managed database (the Access Method is not Alibaba Cloud Instance), you must also click Test Connectivity in the CIDR Blocks of DTS Servers dialog box that appears.
Configure the task objects.
On the Configure Objects page, configure the objects to be migrated.
Configuration
Note
Migration Types
Select migration types based on your requirements and the types supported by each engine.
If you need to perform only a full migration, select both Schema Migration and Full Data Migration.
To perform a migration with no downtime, select Schema Migration, Full Data Migration, and Incremental Data Migration.
Note: If you do not select Schema Migration, ensure that a database and tables to receive the data exist in the destination database. Also, use the object name mapping feature in the Selected Objects section as needed.
If you do not select Incremental Data Migration, do not write new data to the source instance during data migration to ensure data consistency.
Processing Mode of Conflicting Tables
Precheck and Report Errors: Checks whether tables with the same names exist in the destination database. If no tables with the same names exist, the precheck item is passed. If tables with the same names exist, an error is reported during the precheck phase, and the data migration task does not start.
Note: If a table in the destination database has the same name as a table to be migrated and cannot be easily deleted or renamed, you can use the object name mapping feature to rename the table that is migrated to the destination database. For more information, see Object name mapping.
Ignore Errors and Proceed: Skips the check for tables with the same names.
Warning: Selecting Ignore Errors and Proceed may cause data inconsistency and business risks. For example:
If the table schemas are consistent and a record in the destination database has the same primary key value as a record in the source database:
During full migration, DTS keeps the record in the destination cluster. The record from the source database is not migrated to the destination database.
During incremental migration, DTS does not keep the record in the destination cluster. The record from the source database overwrites the record in the destination database.
If the table schemas are inconsistent, only some columns of data may be migrated, or the migration may fail. Proceed with caution.
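The two conflict behaviors above can be summarized in a short sketch. This is an illustrative Python model of the documented behavior (names and row shapes are hypothetical), not DTS code:

```python
# Illustrative sketch: what happens when a source row and a destination row
# share the same primary key value under Ignore Errors and Proceed.

def apply_row(dest, row, phase):
    """dest: destination table modeled as a dict keyed by primary key.
    Full migration keeps the existing destination row; incremental
    migration overwrites it with the source row."""
    pk = row["id"]
    if phase == "full" and pk in dest:
        return  # existing destination record wins; the source row is skipped
    dest[pk] = row  # incremental migration (or no conflict): source row wins

table = {1: {"id": 1, "val": "dest"}}
apply_row(table, {"id": 1, "val": "src"}, phase="full")
print(table[1]["val"])  # the destination record is kept
apply_row(table, {"id": 1, "val": "src"}, phase="incremental")
print(table[1]["val"])  # the source record overwrites it
```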
DDL and DML Operations to Be Synchronized
Select the DDL or DML operations to be migrated at the instance level. For information about supported operations, see SQL operations that can be incrementally migrated.
Note: To select SQL operations for incremental migration at the table level, right-click the migration object in the Selected Objects section. In the dialog box that appears, select the SQL operations that you want to migrate.
Merge Tables
Yes: DTS adds a __dts_data_source column to each table to record the data source. For more information, see Enable multi-table merge.
No: This is the default option.
Note: The table merging feature works at the task level, not at the table level. To merge some tables but not others, create two separate data migration tasks.
Warning: Do not perform DDL operations to change the schemas of the source databases or tables. Otherwise, data inconsistency or task failure may occur.
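The effect of the table merging feature can be sketched as follows. This is an illustrative Python model in which the table names and row shapes are hypothetical, but the __dts_data_source column matches the behavior described above:

```python
# Illustrative sketch: merge rows from several source tables into one
# destination table, tagging each row with its source table name in a
# __dts_data_source column, as the multi-table merge feature does.

def merge_tables(sources):
    """sources: {source_table_name: [row_dict, ...]} -> one merged row list."""
    merged = []
    for name, rows in sources.items():
        for row in rows:
            merged.append({**row, "__dts_data_source": name})
    return merged

rows = merge_tables({
    "db1.orders_0": [{"id": 1}],
    "db1.orders_1": [{"id": 2}],
})
print(rows)
```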
Capitalization of Object Names in Destination Instance
You can configure the capitalization policy for the names of migrated objects, such as databases, tables, and columns, in the destination instance. The default value is DTS default policy. You can also choose to keep the capitalization consistent with the default policy of the source or destination database. For more information, see Case sensitivity of object names in the destination database.
Source Objects
In the Source Objects box, click the objects that you want to migrate, and then click the right arrow to move them to the Selected Objects box.
Important: If you select Incremental Data Migration for Migration Types, you can select only a single table.
If you do not select Incremental Data Migration for Migration Types, you can select databases, tables, and columns.
If the migration object is an entire database, the default behavior is as follows:
If the table to be migrated in the source database has a primary key, such as a single-column primary key or a composite primary key, the primary key columns are used as the distribution keys.
If the table to be migrated in the source database does not have a primary key, an auto-increment primary key column is automatically generated in the destination table. This may cause data inconsistency between the source and destination databases.
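The default distribution-key rule above can be sketched as follows. This is an illustrative Python model; the __auto_id column name is hypothetical (DTS generates its own auto-increment column):

```python
# Illustrative sketch of the default behavior described above: primary key
# columns become the distribution key, or a generated auto-increment key is
# used when the table has no primary key.

def choose_distribution_key(pk_cols):
    """pk_cols: list of primary key column names (possibly empty)."""
    if pk_cols:  # single-column or composite primary key
        return {"distribution_key": pk_cols, "generated_pk": False}
    # No primary key: an auto-increment column is generated in the destination
    # table, which may cause source/destination data inconsistency.
    return {"distribution_key": ["__auto_id"], "generated_pk": True}

print(choose_distribution_key(["order_id", "order_time"]))
print(choose_distribution_key([]))
```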
Selected Objects
To rename an object that you want to migrate to the destination instance, right-click the object in the Selected Objects section. For more information, see Individual table column mapping.
To rename multiple objects at a time, click Batch Edit in the upper-right corner of the Selected Objects section. For more information, see Map multiple object names at a time.
Note: If you use the object name mapping feature, the migration of other objects that depend on the renamed object may fail.
To set a WHERE clause to filter data, right-click a table to be migrated in the Selected Objects section. In the dialog box that appears, set the filter condition. For more information, see Set filter conditions.
To select SQL operations for migration at the database or table level, right-click the object to be migrated in the Selected Objects section. In the dialog box that appears, select the SQL operations that you want to migrate.
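The effect of a WHERE-based filter condition can be pictured with a small sketch. This is an illustrative Python model, not DTS code; the predicate stands in for a real WHERE clause such as amount > 100:

```python
# Illustrative sketch: a filter condition keeps only the rows that satisfy
# the WHERE-style predicate, so only those rows are migrated.

def filter_rows(rows, predicate):
    """Keep only rows that satisfy the filter condition."""
    return [r for r in rows if predicate(r)]

rows = [{"id": 1, "amount": 50}, {"id": 2, "amount": 150}]
# Equivalent in spirit to the WHERE clause: amount > 100
migrated = filter_rows(rows, lambda r: r["amount"] > 100)
print(migrated)
```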
Click Next: Advanced Settings to configure advanced parameters.
Configuration
Description
Dedicated Cluster for Task Scheduling
By default, DTS schedules tasks on a shared cluster. You do not need to select one. If you want more stable tasks, you can purchase a dedicated cluster to run DTS migration tasks.
Retry Time for Failed Connections
After the migration task starts, if the connection to the source or destination database fails, DTS reports an error and immediately starts continuous retry attempts. The default retry duration is 720 minutes. You can also customize the retry time within a range of 10 to 1440 minutes. We recommend that you set it to more than 30 minutes. If DTS reconnects to the source and destination databases within the set time, the migration task automatically resumes. Otherwise, the task fails.
Note: For multiple DTS instances that share the same source or destination database, the network retry time is determined by the setting of the most recently created task.
Because you are charged for the task during the connection retry period, we recommend that you customize the retry time based on your business needs, or release the DTS instance as soon as possible after the source and destination database instances are released.
Retry Time for Other Issues
After the migration task starts, if other non-connectivity issues occur in the source or destination database (such as a DDL or DML execution exception), DTS reports an error and immediately starts continuous retry attempts. The default retry duration is 10 minutes. You can also customize the retry time within a range of 1 to 1440 minutes. We recommend that you set it to more than 10 minutes. If the related operations succeed within the set retry time, the migration task automatically resumes. Otherwise, the task fails.
Important: The value of Retry Time for Other Issues must be less than the value of Retry Time for Failed Connections.
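The ranges and ordering rule for the two retry settings can be captured in a small validation sketch. This is illustrative Python (the function name is hypothetical), not part of DTS:

```python
# Illustrative sketch: validate the two retry settings against the documented
# ranges (10-1440 and 1-1440 minutes) and the rule that the "other issues"
# retry time must be less than the connection retry time.

def validate_retry_settings(conn_retry_min, other_retry_min):
    if not (10 <= conn_retry_min <= 1440):
        raise ValueError("Retry Time for Failed Connections must be 10-1440 minutes")
    if not (1 <= other_retry_min <= 1440):
        raise ValueError("Retry Time for Other Issues must be 1-1440 minutes")
    if other_retry_min >= conn_retry_min:
        raise ValueError("Retry Time for Other Issues must be less than "
                         "Retry Time for Failed Connections")

validate_retry_settings(720, 10)  # the default values pass validation
```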
Enable Throttling for Full Data Migration
During the full migration phase, DTS consumes some read and write resources of the source and destination databases, which may increase the database load. As needed, you can choose whether to set speed limits for the full migration task. You can set Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s) to reduce the pressure on the destination database.
Note: This configuration item is available only if you select Full Data Migration for Migration Types.
You can also adjust the full migration speed after the migration instance is running.
Enable Throttling for Incremental Data Migration
As needed, you can also choose whether to set speed limits for the incremental migration task. You can set RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s) to reduce the pressure on the destination database.
Note: This configuration item is available only if you select Incremental Data Migration for Migration Types.
You can also adjust the incremental migration speed after the migration instance is running.
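The effect of an RPS limit can be pictured with a fixed-window counter. This is an illustrative Python sketch of the concept only, not the limiter that DTS actually uses:

```python
# Illustrative sketch: a fixed-window counter that admits at most `max_rps`
# requests per one-second window, showing the effect of an RPS throttle.

class RpsThrottle:
    def __init__(self, max_rps):
        self.max_rps = max_rps
        self.window = None  # current one-second window
        self.count = 0      # requests admitted in the current window

    def allow(self, now):
        """now: current time in seconds. Returns True if the request may run."""
        window = int(now)
        if window != self.window:
            self.window, self.count = window, 0  # new window, reset the counter
        if self.count < self.max_rps:
            self.count += 1
            return True
        return False

t = RpsThrottle(max_rps=2)
admitted = [t.allow(100.0), t.allow(100.3), t.allow(100.6), t.allow(101.1)]
print(admitted)  # [True, True, False, True]
```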
Environment Tag
You can select an environment tag to identify the instance. This is not required for this example.
Configure ETL
Choose whether to enable the extract, transform, and load (ETL) feature. For more information, see What is ETL? Valid values:
Yes: Enables the ETL feature. Enter data processing statements in the code editor. For more information, see Configure ETL in a data migration or data synchronization task.
No: Disables the ETL feature.
Monitoring and Alerting
Select whether to set alerts and receive alert notifications based on your business needs.
No: Does not set an alert.
Yes: Sets an alert. You must also set the alert threshold and alert notifications. The system sends an alert notification if the migration fails or the latency exceeds the threshold.
Click Next: Data Validation to configure a data validation task.
For more information about the data validation feature, see Configure data validation.
Optional: After you complete the previous configurations, click Next: Configure Database and Table Fields. Then, configure the Type, Primary Key Column, Distribution Key, and partition key parameters for the tables to be migrated to the destination database. The partition key parameters include Partition Key, Partitioning Rules, and Partition Lifecycle.
Note: This step is available only if you select Schema Migration for Migration Types. To modify these parameters, select All for Definition Status.
In the Primary Key Column field, you can select multiple columns to form a composite primary key. In this case, you must select one or more of the primary key columns to serve as the Distribution Key and the Partition Key. For more information, see CREATE TABLE.
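For reference, the following hedged DDL sketch shows how the Distribution Key, Partition Key, and Partition Lifecycle appear in an AnalyticDB for MySQL 3.0 CREATE TABLE statement. The table and column names are illustrative; see CREATE TABLE in the AnalyticDB for MySQL documentation for the authoritative syntax.

```sql
-- Illustrative DDL (table and column names are hypothetical): a composite
-- primary key whose columns supply the distribution key and the partition key.
CREATE TABLE orders (
  order_id   BIGINT   NOT NULL,
  order_time DATETIME NOT NULL,
  amount     DECIMAL(15, 2),
  PRIMARY KEY (order_id, order_time)
)
DISTRIBUTED BY HASH(order_id)                          -- distribution key
PARTITION BY VALUE(DATE_FORMAT(order_time, '%Y%m%d'))  -- partition key
LIFECYCLE 30;                                          -- partition lifecycle
```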
Save the task and run a precheck.
To view the parameters for configuring this instance when you call the API operation, move the pointer over the Next: Save Task Settings and Precheck button and click Preview OpenAPI parameters in the bubble.
If you do not need to view or have finished viewing the API parameters, click Next: Save Task Settings and Precheck at the bottom of the page.
Note: Before the migration task starts, a precheck is performed. The task starts only after it passes the precheck.
If the precheck fails, click View Details next to the failed check item, fix the issue based on the prompt, and then run the precheck again.
If a warning is reported during the precheck:
For check items that cannot be ignored, click View Details next to the failed item, fix the issue based on the prompt, and then run the precheck again.
For check items that can be ignored and do not need to be fixed, click Confirm Alert Details, then click Ignore, OK, and Precheck Again in sequence to skip the check item and run the precheck again. Ignoring an alert item may cause issues such as data inconsistency and pose risks to your business.
Purchase the instance.
When the Success Rate is 100%, click Next: Purchase Instance.
On the Purchase page, select the link specification for the data migration instance. For more information, see the following table.
Category | Parameter | Description
New Instance Class | Resource Group Settings | Select the resource group to which the instance belongs. The default value is default resource group. For more information, see What is Resource Management?
New Instance Class | Instance Class | DTS provides migration specifications with different performance levels. The link specification affects the migration speed. You can select a specification based on your business scenario. For more information, see Data migration link specifications.
After the configuration is complete, read and select Data Transmission Service (Pay-as-you-go) Service Terms.
Click Buy and Start, and in the OK dialog box that appears, click OK.
You can view the progress of the migration instance on the Data Migration Tasks list page.
Note: If the migration instance does not include an incremental migration task, it stops automatically. After the instance stops, its Status is Completed.
If the migration instance includes an incremental migration task, it does not stop automatically, and the incremental migration task continues to run. While the incremental migration task is running normally, the Status of the instance is Running.