Use Data Transmission Service (DTS) to migrate a self-managed MariaDB database to an RDS for MariaDB instance with minimal downtime. DTS supports schema migration, full data migration, and incremental data migration. Run all three types together to keep your application online throughout the migration.
To migrate to an RDS for MySQL instance instead, follow the same steps but select your RDS for MySQL instance as the destination.
Prerequisites
Before you begin, make sure you have:
- An RDS for MariaDB instance created as the destination
- A destination instance with more storage space than the source database
Limits
Source database limits
| Limit | Detail |
|---|---|
| Bandwidth | The server hosting the source database must have enough outbound bandwidth. Insufficient bandwidth slows migration. |
| Primary keys | Tables to be migrated must have primary keys or UNIQUE constraints with unique field values. Without these, duplicate data may appear in the destination database. |
| Large-scale table-level migration | If you are migrating more than 1,000 tables at the table level with object name mapping, split them across multiple migration tasks or migrate at the database level instead. |
| Binary logging for incremental migration | Enable binary logging (binlog) and set binlog_row_image to FULL. If these are not configured, the precheck fails and the task cannot start. Retain binary logs for at least 7 days. If DTS cannot retrieve the logs, the task fails. In extreme cases, data inconsistency or data loss may occur. Issues caused by insufficient log retention are not covered by the DTS Service-Level Agreement (SLA). |
| DDL during full migration | Do not perform DDL operations that change database or table schemas during the full data migration phase. DTS queries the source database during this phase, which creates metadata locks that may block DDL operations on the source. |
| Full migration only | If you run only full data migration, do not write new data to the source database during migration. For continuous data consistency, select both full data migration and incremental data migration. |
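To find tables that would violate the primary-key requirement before you start, you can query information_schema on the source. A sketch, assuming a source database named mydb (replace the schema name with yours):

```sql
-- List base tables in mydb that have neither a PRIMARY KEY nor a UNIQUE constraint
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
WHERE t.table_schema = 'mydb'
  AND t.table_type = 'BASE TABLE'
  AND NOT EXISTS (
    SELECT 1
    FROM information_schema.table_constraints c
    WHERE c.table_schema = t.table_schema
      AND c.table_name = t.table_name
      AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
  );
```

Any table this query returns is at risk of producing duplicate rows in the destination; add a primary key or unique constraint before migrating it.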
Binlog configuration reference
When enabling binary logging for incremental migration, confirm these parameters are set correctly:
| Parameter | Required value | Reason |
|---|---|---|
| binlog (binary logging) | Enabled | DTS reads binary logs to capture incremental changes. |
| binlog_row_image | FULL | Ensures DTS captures complete row images; partial images cause the precheck to fail. |
| Log retention period | At least 7 days | DTS must be able to retrieve historical logs; shorter retention can cause task failure or data loss. |
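You can verify these settings on the source before the precheck. A sketch (note that log_bin cannot be changed at runtime and must be enabled in the server configuration; the expire_logs_days variable is the classic MariaDB retention setting, and newer releases may also expose binlog_expire_logs_seconds):

```sql
-- Binary logging must be ON
SHOW GLOBAL VARIABLES LIKE 'log_bin';
-- Row image must be FULL
SHOW GLOBAL VARIABLES LIKE 'binlog_row_image';
-- Retention should be 7 days or more
SHOW GLOBAL VARIABLES LIKE 'expire_logs_days';
```

If binlog_row_image is not FULL, set it with `SET GLOBAL binlog_row_image = 'FULL';` and persist the change in the server configuration file so it survives a restart.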
Other limits
| Limit | Detail |
|---|---|
| Invisible columns | Make sure no invisible columns exist in the data being migrated or in the destination columns. Invisible columns may cause the DTS instance to fail or result in data loss. |
| Database load | DTS consumes read and write resources on both the source and destination databases during full data migration. Evaluate database performance before starting. Run the migration during off-peak hours when CPU load is below 30%. |
| Table fragmentation | Full data migration runs concurrent INSERT operations, which causes table fragmentation in the destination database. After migration, the table storage space in the destination is larger than in the source. |
| Concurrent writes to destination | If data is written to the destination from sources other than DTS during migration, data inconsistency or task failure may occur. After migration, use Data Management (DMS) to perform online DDL operations. |
| Failed DDL statements | If a DDL statement fails to write to the destination, the DTS task continues. Check the failed statement in the task logs. |
| Instance recovery | If a DTS instance fails, DTS helpdesk attempts recovery within 8 hours. During recovery, the instance may be restarted or its parameters adjusted. Only DTS instance parameters are modified—database parameters are not changed. For details on which parameters may be modified, see Modify instance parameters. |
| Foreign key migration | During schema migration, DTS migrates foreign keys from the source database to the destination database. |
| Foreign key cascade operations | During full and incremental data migration, DTS temporarily disables constraint checks and foreign key cascade operations at the session level. Data inconsistency may occur if cascade update or delete operations are performed on the source database while the task is running. |
Special cases
| Case | Detail |
|---|---|
| Primary/secondary switchover | If a primary/secondary switchover occurs in the source database during migration, the migration task fails. |
| Migration latency | DTS calculates latency by comparing the timestamp of the last migrated data with the current time. If no DML operations are performed on the source for an extended period, the latency shown may be inaccurate. Run a DML operation on the source to refresh the latency. |
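If the displayed latency looks stale because the source is idle, any small DML on a migrated object refreshes it. A hypothetical example (the dts_probe table and its columns are placeholders you create yourself, not something DTS provides):

```sql
-- Touch a throwaway row so DTS observes a fresh binlog event
CREATE TABLE IF NOT EXISTS mydb.dts_probe (id INT PRIMARY KEY, ts DATETIME);
INSERT INTO mydb.dts_probe (id, ts) VALUES (1, NOW())
  ON DUPLICATE KEY UPDATE ts = NOW();
```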
Billing
| Migration type | Link configuration fee | Data transfer cost |
|---|---|---|
| Schema migration and full data migration | Free | Free |
| Incremental data migration | Charged. See Billing overview. | |
Migration types
| Type | Description |
|---|---|
| Schema migration | DTS migrates the schemas of selected objects—tables, views, triggers, stored procedures, and stored functions—from the source to the destination. See notes below. |
| Full data migration | DTS migrates all existing data from the selected objects in the source database to the destination. |
| Incremental data migration | After full data migration completes, DTS continuously migrates new changes from the source to the destination. This keeps your application running without interruption during the migration window. |
Schema migration behavior:
- The routine_body of stored procedures and stored functions, and the select_statement of views, cannot be modified during migration.
- For views, stored procedures, and functions, DTS changes the SECURITY attribute from DEFINER to INVOKER and sets the DEFINER to the destination database account used in the migration. The SECURITY attribute and DEFINER cannot be modified during migration.
- DTS does not migrate user information. To call a view, stored procedure, or stored function in the destination database, grant read and write permissions to INVOKER.
SQL operations supported for incremental migration
| Type | Operations |
|---|---|
| DML | INSERT, UPDATE, DELETE |
| DDL | ALTER TABLE, ALTER VIEW, CREATE FUNCTION, CREATE INDEX, CREATE PROCEDURE, CREATE TABLE, CREATE VIEW, DROP INDEX, DROP TABLE, RENAME TABLE, TRUNCATE TABLE |
RENAME TABLE may cause data inconsistency. If you rename a table that is selected as a migration object, its data will not be migrated. To prevent this, select the database (not just the table) as the migration object, and make sure both the pre-rename and post-rename databases are included in the migration objects.
Permissions required
Grant the following permissions to the database accounts used by DTS before starting the migration.
| Database | Schema migration | Full data migration | Incremental data migration |
|---|---|---|---|
| Self-managed MariaDB | SELECT | SELECT | SELECT on objects to be migrated; REPLICATION CLIENT; REPLICATION SLAVE; SHOW VIEW; permissions to create databases and tables |
| RDS for MariaDB | Read and write permissions | Read and write permissions | Read and write permissions |
For incremental data migration, DTS creates a database named test to record heartbeat data. The source database account must have permission to create databases and tables.
To grant the required permissions on the self-managed MariaDB source, run the following SQL statements. Replace <username> and <password> with your actual values.
```sql
-- Create the account (if not already created)
CREATE USER '<username>'@'%' IDENTIFIED BY '<password>';
-- Grant permissions for schema migration and full data migration
GRANT SELECT ON *.* TO '<username>'@'%';
-- Additional permissions required for incremental data migration
GRANT REPLICATION CLIENT, REPLICATION SLAVE, SHOW VIEW ON *.* TO '<username>'@'%';
GRANT CREATE ON *.* TO '<username>'@'%';
FLUSH PRIVILEGES;
```
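To confirm the grants took effect before running the precheck, list them for the same account:

```sql
-- Shows every privilege currently held by the DTS account
SHOW GRANTS FOR '<username>'@'%';
```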
To create accounts and grant permissions using the documentation:
- Self-managed MariaDB: See Create an account and Grant permissions to an account in the MariaDB documentation.
- RDS for MariaDB: See Create an account and Modify or reset account permissions.
Create a data migration task
Step 1: Open the Data Migration page
Use one of the following consoles to open the Data Migration page.
DTS console
1. Log on to the DTS console.
2. In the left-side navigation pane, click Data Migration.
3. In the upper-left corner, select the region where the migration instance will reside.
DMS console
Steps may vary based on the DMS console mode and layout. See Simple mode and Customize the layout and style of the DMS console.
1. Log on to the DMS console.
2. In the top navigation bar, go to Data + AI > DTS (DTS) > Data Migration.
3. From the drop-down list to the right of Data Migration Tasks, select the region.
Step 2: Create the task
1. Click Create Task.
2. If the New Configuration Page button appears in the upper-right corner, click it to switch to the new configuration page. Skip this step if Back to Previous Version is shown instead.
Step 3: Configure the source and destination databases
After configuring the source and destination databases, read the Limits displayed at the top of the page. Skipping this may cause task failure or data inconsistency.
Configure the following parameters:
| Category | Parameter | Description |
|---|---|---|
| — | Task Name | A name for the DTS task. DTS generates one automatically. Use a descriptive name for easier identification. The name does not need to be unique. |
| Source Database | Select Existing Connection | If the source instance is registered with DTS, select it from the list—DTS fills in the database parameters automatically. Otherwise, configure the parameters below manually. In the DMS console, select the instance from Select a DMS database instance. |
| | Database Type | Select MariaDB. |
| | Connection Type | Select the option that matches the deployment of your source database. This topic uses Self-managed Database On ECS as an example. |
| | Instance Region | The region where the source MariaDB database is located. |
| | ECS Instance ID | The ID of the ECS instance hosting the source database. |
| | Port | The service port of the source MariaDB database. Default: 3306. |
| | Database Account | The account for the source MariaDB database. See Permissions required. |
| | Database Password | The password for the source database account. |
| | Connection Method | Select Non-encrypted Connection. |
| Destination Database | Select Existing Connection | If the destination instance is registered with DTS, select it from the list. Otherwise, configure the parameters below manually. |
| | Database Type | Select MariaDB. |
| | Connection Type | Select Alibaba Cloud Instance. |
| | Instance Region | The region where the destination RDS for MariaDB instance is located. |
| | RDS Instance ID | The ID of the destination RDS for MariaDB instance. |
| | Database Account | The account for the destination RDS instance. See Permissions required. |
| | Database Password | The password for the destination database account. |
| | Connection Method | Select Unencrypted Connection. |
Step 4: Test connectivity
Click Test Connectivity and Proceed.
DTS server CIDR blocks must be added to the security settings of the source and destination databases. DTS can do this automatically, or you can add them manually. See Add DTS server IP addresses to a whitelist.
If the source or destination database is self-managed and Connection Type is not Alibaba Cloud Instance, click Test Connectivity in the CIDR Blocks of DTS Servers dialog box.
Step 5: Configure migration objects
On the Configure Objects page, set the following parameters:
| Parameter | Description |
|---|---|
| Migration Types | Select the migration types to run: <br>- Full migration only: Select Schema Migration and Full Data Migration. <br>- Zero-downtime migration: Select Schema Migration, Full Data Migration, and Incremental Data Migration. <br><br>Note: If you skip Schema Migration, create the databases and tables in the destination before starting. If you skip Incremental Data Migration, do not write to the source during migration. |
| Method to Migrate Triggers in Source Database | Select a trigger migration method if needed. See Configure a method to synchronize or migrate triggers. This parameter is available only when both Schema Migration and Incremental Data Migration are selected. |
| Processing Mode for Existing Destination Tables | Choose how DTS handles tables in the destination with the same name as tables in the source: <br>- Precheck and Report Errors: The precheck fails if conflicting table names are found. Use object name mapping to rename tables before retrying. See Map object names. <br>- Ignore Errors and Proceed: Skips the precheck for conflicting names. <br><br>Warning: This may cause data inconsistency. During full migration, conflicting records are not migrated and existing destination records are kept. During incremental migration, conflicting records overwrite existing destination records. If schemas differ, only specific columns may be migrated or the task may fail. |
| Capitalization of Object Names in Destination Instance | Controls the capitalization of database, table, and column names in the destination. Default: DTS default policy. See Specify the capitalization of object names. |
| Source Objects | Select the databases, tables, or columns to migrate, then click the rightwards arrow icon to move them to Selected Objects. <br><br>Note: If you select tables or columns, DTS does not migrate other object types such as views, triggers, or stored procedures. |
| Selected Objects | To rename a single object before migration, right-click it. See Map the name of a single object. To rename multiple objects at once, click Batch Edit. See Map multiple object names. <br><br>Note: Renaming objects may cause dependent objects to fail migration. <br><br>To filter data using a WHERE condition, right-click the table and configure the condition. See Filter the data to be migrated. To select specific SQL operations at the database or table level, right-click the object and select the operations. |
Step 6: Configure advanced settings
Click Next: Advanced Settings, then configure the following:
| Parameter | Description |
|---|---|
| Dedicated Cluster for Task Scheduling | By default, DTS schedules tasks to a shared cluster. For higher stability, purchase a dedicated cluster. See What is a DTS dedicated cluster. |
| Retry Time for Failed Connections | How long DTS retries after a connection failure. Valid values: 10 to 1,440 minutes. Default: 720 minutes. Set a value greater than 30 minutes. If DTS reconnects within this period, the task resumes; otherwise, the task fails. <br><br>Note: If multiple tasks share the same source or destination database but use different retry settings, the most recently set value applies. DTS charges for the instance during retries. |
| Retry Time for Other Issues | How long DTS retries after DDL or DML failures. Valid values: 1–1,440 minutes. Default: 10 minutes. Set to more than 10 minutes. Must be less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Migration | Limits DTS resource usage during full data migration to reduce load on the source and destination. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected. |
| Enable Throttling for Incremental Data Migration | Limits DTS resource usage during incremental data migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected. |
| Environment Tag | (Optional) Tag the instance for identification. |
| Configure ETL | Whether to enable extract, transform, and load (ETL). Select Yes to enter data processing statements in the code editor. See Configure ETL and What is ETL?. |
| Monitoring and Alerting | Whether to configure alerts. Select Yes to set an alert threshold and notification contacts. DTS notifies contacts if the task fails or migration latency exceeds the threshold. See Configure monitoring and alerting. |
Step 7: Run the precheck
Click Next: Save Task Settings and Precheck.
To view the API parameters for this task configuration, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.
DTS runs a precheck before the task can start.
- If any item fails, click View Details next to the failed item, fix the issue, then click Precheck Again.
- If an alert is triggered:
  - If the alert cannot be ignored, click View Details, fix the issue, then run the precheck again.
  - If the alert can be ignored, click Confirm Alert Details, then click Ignore in the dialog box, click OK, and click Precheck Again. Ignoring precheck alerts may result in data inconsistency or other issues.
Step 8: Purchase the instance and start the task
1. Wait until Success Rate reaches 100%, then click Next: Purchase Instance.
2. On the Purchase Instance page, configure the instance:

   | Section | Parameter | Description |
   |---|---|---|
   | New Instance Class | Resource Group | The resource group for the migration instance. Default: default resource group. See What is Resource Management? |
   | New Instance Class | Instance Class | Select an instance class based on the required migration speed. See Instance classes of data migration instances. |

3. Select the Data Transmission Service (Pay-as-you-go) Service Terms check box.
4. Click Buy and Start, then click OK in the confirmation dialog.
Monitor the task
After the task starts, monitor its progress on the Data Migration page.
- Full migration only: The task stops automatically when complete. The Status column shows Completed.
- With incremental migration: The task runs continuously and does not stop automatically. The Status column shows Running. Stop the task manually after you have verified that all data has been migrated and cut over your application.
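Before cutting over, a quick spot-check can build confidence that source and destination match. A sketch (the table name is an example; run each statement on both databases and compare the results):

```sql
-- Row counts should match once incremental migration has caught up
SELECT COUNT(*) AS row_count FROM mydb.orders;
-- CHECKSUM TABLE gives a cheap content-level comparison
CHECKSUM TABLE mydb.orders;
```

Pause writes to the source while comparing, since in-flight incremental changes would otherwise make the numbers differ transiently.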