Use Data Transmission Service (DTS) to migrate data from a Google Cloud SQL for MySQL instance to an ApsaraDB RDS for MySQL instance with minimal downtime. DTS supports schema migration, full data migration, and incremental data migration.
In this guide, you will:
- Review prerequisites and limits for both the source and destination databases
- Configure binary logging on Google Cloud SQL for MySQL (required for incremental migration)
- Create and run a DTS migration task
- Complete post-migration cleanup
Prerequisites
Before you begin, ensure that you have:
Source: Google Cloud SQL for MySQL
- Public access enabled on the instance, with the public endpoint and port available
- A privileged account created on the instance
For details, see the Google Cloud SQL for MySQL documentation.
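The exact privileges you need depend on the migration types you select. As a sketch, a dedicated migration account could be created as follows; the account name `dts_migrator` and the password placeholder are examples, not values from this guide, and the replication privileges are only needed for incremental migration:

```sql
-- Example only: run against the source Cloud SQL instance.
CREATE USER 'dts_migrator'@'%' IDENTIFIED BY '<strong-password>';

-- Full data migration needs read access to the objects being migrated.
GRANT SELECT, SHOW VIEW ON *.* TO 'dts_migrator'@'%';

-- Incremental migration additionally needs to read the binary log.
GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'dts_migrator'@'%';
```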
Destination: ApsaraDB RDS for MySQL
- An ApsaraDB RDS for MySQL instance created. See Create an ApsaraDB RDS for MySQL instance.
- An account with read and write permissions created. See Create databases and accounts for an ApsaraDB RDS for MySQL instance.
Limits
- Schema migration does not support events.
- DTS reads FLOAT and DOUBLE column values using the `round(column,precision)` function. If precision is not specified, FLOAT values are read with 38-digit precision and DOUBLE values with 308-digit precision. Verify that these precision levels meet your requirements before migration.
- If object name mapping is applied to an object, other objects that depend on it may fail to migrate.
For incremental data migration only:
- Binary logging must be enabled on the Google Cloud SQL for MySQL instance.
- The `binlog_format` parameter must be set to `row`.
- For MySQL 5.6 or later: the `binlog_row_image` parameter must be set to `full`.
- If cross-host migration or reconstruction occurs on the source instance during incremental migration, binary log file IDs may become disordered and incremental data may be lost.
For instructions on modifying these parameters, see the Google Cloud SQL for MySQL documentation.
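Before configuring the task, you can confirm the current values yourself from any MySQL client connected to the source instance, for example:

```sql
-- Check the binary log settings DTS requires for incremental migration.
SHOW VARIABLES LIKE 'log_bin';           -- ON means binary logging is enabled
SHOW VARIABLES LIKE 'binlog_format';     -- must be ROW
SHOW VARIABLES LIKE 'binlog_row_image';  -- must be FULL on MySQL 5.6 or later
```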
Configure binary logging for incremental migration
Skip this section if you are performing full data migration only.
For incremental data migration (CDC), enable binary logging on your Google Cloud SQL for MySQL instance and set the following parameters:
| Parameter | Required value | Why this matters |
|---|---|---|
| `binlog_format` | `ROW` | Captures row-level changes, ensuring accurate replication |
| `binlog_row_image` | `FULL` | Required for MySQL 5.6 and later; captures complete row images so DTS can reconstruct changes |
Google Cloud SQL provides a built-in option to enable binary logging. In the Google Cloud console, go to your Cloud SQL instance settings and enable Binary logging under the Backups section.
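If you prefer the CLI, the same change can be sketched with gcloud. The instance name, backup window, and flag value below are placeholders, and you should verify flag support for your MySQL version in the Cloud SQL documentation before running:

```shell
# Enabling binary logging requires automated backups, hence the backup window.
gcloud sql instances patch INSTANCE_NAME \
  --backup-start-time=02:00 \
  --enable-bin-log

# --database-flags replaces ALL existing flags on the instance,
# so include every flag you need in one call.
gcloud sql instances patch INSTANCE_NAME \
  --database-flags=binlog_row_image=full
```

Cloud SQL sets `binlog_format` to `ROW` by default; check the current value with `SHOW VARIABLES` before changing any flags.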
Migrate data
DTS automatically resumes abnormal migration tasks that have been running within the last seven days. As a result, data from the source database may overwrite service data already written to the destination instance. After migration is complete, run the REVOKE statement to revoke the write permissions of the DTS account on the ApsaraDB RDS for MySQL instance.
Step 1: Go to the Data Migration page
Use one of the following methods:
DTS console
- Log on to the DTS console.
- In the left-side navigation pane, click Data Migration.
- In the upper-left corner, select the region where the migration instance resides.
Data Management Service (DMS) console
The steps may vary depending on the DMS console mode and layout. See Simple mode and Customize the layout and style of the DMS console.
- Log on to the DMS console.
- In the top navigation bar, move the pointer over Data + AI > DTS > Data Migration.
- From the drop-down list to the right of Data Migration Tasks, select the region.
Step 2: Configure the source and destination databases
Click Create Task. On the task configuration page, fill in the following:
After configuring the source and destination databases, review the Limits displayed at the top of the page before proceeding. Ignoring these limits may cause the task to fail or result in data inconsistency.
Source Database
| Parameter | Value |
|---|---|
| Select a DMS database instance. | Select an existing instance, or configure manually |
| Database Type | MySQL |
| Connection Type | Public IP Address |
| Instance Region | Region closest to the source instance |
| Domain name or IP | Public IP address of the Google Cloud SQL instance. To find it: in the left-side navigation pane of your Cloud SQL instance, click Connections. On the SUMMARY tab, under Networking, copy the Public IP address value. |
| Port | 3306 (default) |
| Database Account | The privileged account on the source instance |
| Database Password | Password for the account |
Destination Database
| Parameter | Value |
|---|---|
| Select a DMS database instance. | Select an existing instance, or configure manually |
| Database Type | MySQL |
| Connection Type | Alibaba Cloud Instance |
| Instance Region | Region of the destination RDS instance |
| Replicate Data Across Alibaba Cloud Accounts | No |
| RDS Instance ID | ID of the destination ApsaraDB RDS for MySQL instance |
| Database Account | The account with read and write permissions |
| Database Password | Password for the account |
| Connection Method | Select Non-encrypted or SSL-encrypted. If using SSL encryption, enable SSL on the RDS instance first. See Use a cloud certificate to enable SSL encryption. |
Step 3: Test connectivity
Click Test Connectivity and Proceed.
DTS needs access to both databases. If your source database has an IP address whitelist configured, add the CIDR blocks of DTS servers to the whitelist. See Add the CIDR blocks of DTS servers.
Adding DTS server CIDR blocks to a database whitelist or ECS security group rules may introduce security risks. Before proceeding, take precautions such as enforcing strong credentials, restricting exposed ports, auditing API calls, and reviewing whitelist rules regularly. Alternatively, connect the source database to DTS using Express Connect, VPN Gateway, or Smart Access Gateway.
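On the Google Cloud side, the whitelist change and a manual reachability check can be sketched as follows. The instance name, CIDR blocks, and the `dts_migrator` account are placeholders; substitute the actual DTS CIDR blocks for your region:

```shell
# --authorized-networks replaces the existing list; include any networks
# you already allow alongside the DTS CIDR blocks.
gcloud sql instances patch INSTANCE_NAME \
  --authorized-networks=DTS_CIDR_1,DTS_CIDR_2

# Confirm the source is reachable on its public endpoint before the DTS test.
mysql -h SOURCE_PUBLIC_IP -P 3306 -u dts_migrator -p -e "SELECT 1;"
```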
Step 4: Select objects to migrate
Configure the following parameters:
Migration Types
Choose based on your requirements:
- Full migration only: Select Schema Migration and Full Data Migration. We recommend that you do not write data to the source database during migration to ensure data consistency.
- Migration with minimal downtime: Select Schema Migration, Full Data Migration, and Incremental Data Migration.
If Schema Migration is not selected, create the target database and table manually before starting, and enable object name mapping in Selected Objects.
If Incremental Data Migration is not selected, avoid writing data to the source database during migration.
Processing Mode for Existing Destination Tables
| Option | Behavior |
|---|---|
| Precheck and Report Errors | Checks for tables with identical names in source and destination. The task fails precheck if conflicts exist. Use object name mapping to rename conflicting tables. See Map object names. |
| Ignore Errors and Proceed | Skips the precheck for name conflicts. During full migration, existing records in the destination are retained. During incremental migration, existing records are overwritten. If schemas differ, only specific columns are migrated or the task may fail. |
Other parameters
| Parameter | Description |
|---|---|
| Method to Migrate Triggers in Source Database | Select a method based on your requirements. Configure only when Schema Migration is selected. See Synchronize or migrate triggers from the source database. |
| Enable Migration Assessment | When set to Yes, DTS checks whether the source and destination schemas meet migration requirements, such as index length, stored procedures, and dependent tables. Configurable only when Schema Migration is selected. Results do not affect the precheck outcome. |
| Capitalization of Object Names in Destination Instance | Controls capitalization of database, table, and column names in the destination. Default: DTS default policy. See Specify the capitalization of object names in the destination instance. |
| Source Objects | Select the objects to migrate and move them to the Selected Objects section |
| Selected Objects | Right-click an object to rename it or set a WHERE condition for row filtering. Click Batch Edit to rename multiple objects at once. See Map object names and Specify filter conditions. |
Step 5: Configure advanced settings
Click Next: Advanced Settings.
Data verification settings
To verify data integrity after migration, configure a data verification task. See Configure a data verification task.
Advanced settings
| Parameter | Description |
|---|---|
| Dedicated Cluster for Task Scheduling | By default, DTS schedules the task to a shared cluster. To use a dedicated cluster, purchase one first. See What is a DTS dedicated cluster? |
| Monitoring and Alerting | Configure alerts for task failures or latency exceeding a threshold. See Configure monitoring and alerting. |
| Copy the temporary table of the Online DDL tool that is generated in the source table to the destination database. | Controls handling of temporary tables from online DDL operations (DMS or gh-ost only). Important: pt-online-schema-change is not supported and will cause the task to fail. |
| Retry Time for Failed Connections | How long DTS retries failed connections before marking the task as failed. Range: 10–1,440 minutes. Default: 720 minutes. Set to 30 or higher. If multiple tasks share the same source or destination, the shortest retry time applies. |
| Retry Time for Other Issues | How long DTS retries failed DDL or DML operations. Range: 1–1,440 minutes. Default: 10 minutes. Set to a value greater than 10. Must be less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Migration | Limits queries per second (QPS) to the source database, rows per second (RPS), and migration speed (MB/s) during full migration. Available only when Full Data Migration is selected. |
| Enable Throttling for Incremental Data Migration | Limits RPS and migration speed (MB/s) during incremental migration. Available only when Incremental Data Migration is selected. |
| Environment Tag | Optional. Tag the migration task with an environment label. |
| Configure ETL | Enable the extract, transform, and load (ETL) feature to process data during migration. See Configure ETL in a data migration or data synchronization task. |
| Whether to delete SQL operations on heartbeat tables of forward and reverse tasks | Controls whether DTS writes SQL operations on heartbeat tables to the source database. Selecting Yes prevents writes but may display migration latency. Selecting No enables writes but may affect physical backup and cloning of the source. |
Step 6: Run the precheck
Click Next: Save Task Settings and Precheck.
To review the API parameters for this task, move the pointer over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.
DTS runs a precheck before the migration task can start. After the precheck completes:
- Passed items: No action needed.
- Failed items: Click View Details next to the failed item. Fix the issue, then click Precheck Again.
- Alert items:
  - If the alert cannot be ignored: click View Details, fix the issue, then rerun the precheck.
  - If the alert can be ignored: click Confirm Alert Details > Ignore > OK > Precheck Again. Note that ignoring alerts may cause data inconsistency.
Step 7: Purchase the migration instance
Wait until Success Rate reaches 100%, then click Next: Purchase Instance.
On the Purchase Instance page, configure the following:
| Parameter | Description |
|---|---|
| Resource Group | The resource group for the migration instance. Default: default resource group. See What is Resource Management? |
| Instance Class | The migration speed varies by instance class. Select based on your data volume and time requirements. See Instance classes of data migration instances. |
Step 8: Start migration
- Read and select the Data Transmission Service (Pay-as-you-go) Service Terms check box.
- Click Buy and Start, then click OK in the confirmation dialog.
Track progress on the Data Migration page.
Next steps
After the migration task completes:
- Run the `REVOKE` statement to revoke the write permissions of the DTS account on the ApsaraDB RDS for MySQL instance. This prevents accidental overwrites if DTS resumes the task.
- Verify data integrity using the data verification feature. See Configure a data verification task.
- Update your application connection strings to point to the ApsaraDB RDS for MySQL instance.
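The first cleanup step can be sketched as follows. The account name `dts_account` is a placeholder for whatever account you supplied to DTS, and the privilege list and scope should mirror what you originally granted:

```sql
-- Example only: run against the destination ApsaraDB RDS for MySQL instance.
-- Revoke write privileges from the account DTS used for the migration.
REVOKE INSERT, UPDATE, DELETE ON *.* FROM 'dts_account'@'%';
FLUSH PRIVILEGES;
```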