
Data Transmission Service: Migrate data from an OceanBase database in MySQL mode to an RDS for MySQL instance

Last Updated: Mar 30, 2026

Data Transmission Service (DTS) migrates data from an OceanBase database in MySQL mode to an ApsaraDB RDS for MySQL instance. Use this guide to configure and run a migration task end to end.

Prerequisites

Before you begin, make sure you have:

  • A source OceanBase database running OceanBase Community Edition V4.x

  • A destination ApsaraDB RDS for MySQL instance with available storage space larger than the total data size of the source database. For setup instructions, see Create an ApsaraDB RDS for MySQL instance

  • Database accounts with the required permissions. See Permissions required below

Billing

| Migration type | Task configuration fee | Data transfer fee |
| --- | --- | --- |
| Schema migration + full data migration | Free | Free of charge in this example |
| Incremental data migration | Charged | |

For incremental data migration pricing, see Billing overview.

Limitations

Hard limits

| Category | Limitation |
| --- | --- |
| Source database type | OceanBase Community Edition V4.x only |
| ApsaraDB for OceanBase source | Must be a cluster instance in China (Shenzhen) or China (Shanghai). Tenant instances are not supported. |
| Table constraints | Tables must have PRIMARY KEY or UNIQUE constraints with all fields unique. Tables without these constraints can result in duplicate records in the destination. |
| Table rename task limit | When selecting tables as migration objects and renaming tables or columns in the destination, a single task supports a maximum of 1,000 tables. For more than 1,000 tables, run multiple tasks in batches, or migrate the entire database instead. |
| GEOMETRY data type | Full data migration only. Incremental migration of GEOMETRY data is not supported. |
| Foreign keys | DTS migrates foreign keys during schema migration. During full and incremental migration, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. Cascade delete operations on the source during migration can cause data inconsistency. |
| Column name capitalization | If column names in the same destination table differ only in capitalization, migration results may be unexpected because MySQL column names are case-insensitive by default. |
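The capitalization pitfall can be illustrated with a quick pre-migration check. This is a hypothetical sketch (the column names are made up): lowercasing every source column name and looking for duplicates flags columns that would collide in a case-insensitive MySQL destination.

```python
# Hypothetical check: MySQL compares column names case-insensitively,
# so source columns that differ only in case collide in the destination.
source_columns = ["orderId", "ORDERID", "customer_id"]  # example names
lowered = [c.lower() for c in source_columns]
collisions = len(lowered) - len(set(lowered))
print(collisions)  # 1: "orderId" and "ORDERID" map to the same name
```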

Data type handling

| Data type | Behavior |
| --- | --- |
| GEOMETRY | Full data migration only. Incremental migration is not supported. |
| FLOAT | DTS uses ROUND(COLUMN, PRECISION). If no precision is specified, DTS defaults to 38 digits. |
| DOUBLE | DTS uses ROUND(COLUMN, PRECISION). If no precision is specified, DTS defaults to 308 digits. |

Verify that the default precision settings for FLOAT and DOUBLE meet your requirements before starting the migration.
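As an unofficial illustration of the ROUND(COLUMN, PRECISION) behavior described above, the sketch below mimics SQL ROUND for positive precision in Python: an explicit low precision can alter a value, while the high defaults (38 or 308 digits) leave typical values unchanged.

```python
# Sketch only: mimics SQL ROUND(value, precision) for positive precision,
# to illustrate why the default precision usually preserves values.
def round_like_sql(value: float, precision: int) -> float:
    return round(value, precision)

print(round_like_sql(3.14159, 2))   # 3.14 (an explicit low precision changes the value)
print(round_like_sql(3.14159, 38))  # 3.14159 (the high default is a no-op here)
```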

Allowed operations on the source database during migration

| Migration phase | Allowed source operations |
| --- | --- |
| Schema migration + full data migration | Read-only. Do not run DDL statements that change database or table schemas. The task fails if DDL statements are executed. |
| Full data migration only (no incremental) | Read-only. Do not write data to the source database. Write operations cause data inconsistency. |

Considerations

  • Database naming: If the source database name does not follow ApsaraDB RDS for MySQL naming conventions, create the destination database manually before configuring the task. Then use object name mapping to rename it during the Configure Objects and Advanced Settings step.

  • Failed task retry: DTS retries failed migration tasks for up to seven days. Before switching workloads to the destination, stop or release any failed tasks — or run REVOKE to remove DTS write permissions on the destination — to prevent source data from overwriting destination data when the task resumes.

  • DDL failures in destination: If DDL statements fail to execute in the destination, the DTS task continues running. Check task logs for details. See View task logs.

  • Post-migration verification: After migration completes, run ANALYZE TABLE <table_name> to confirm data was written successfully. In high-availability (HA) switchover scenarios, data may exist only in memory and can be lost if not persisted.

  • Source database performance: Full data migration increases load on both source and destination. Run migrations during off-peak hours and enable throttling if needed.

  • Destination tablespace size: Concurrent INSERT operations during full migration cause table fragmentation. The destination tablespace will be larger than the source after migration completes.
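The REVOKE and ANALYZE TABLE statements mentioned in the considerations above can be sketched as follows. The database, account, and table names are placeholders, and the privilege list you revoke should match what you originally granted to the DTS account:

```python
# Hypothetical statement builders for the post-migration steps above.
# All identifiers (mydb, dts_user, orders, customers) are placeholders.
def revoke_dts_write(db: str, user: str, host: str = "%") -> str:
    """Remove DTS write privileges on the destination database."""
    return (f"REVOKE INSERT, UPDATE, DELETE, CREATE, DROP, ALTER "
            f"ON `{db}`.* FROM '{user}'@'{host}';")

def analyze_tables(tables):
    """Build ANALYZE TABLE statements to confirm data was persisted."""
    return [f"ANALYZE TABLE `{t}`;" for t in tables]

print(revoke_dts_write("mydb", "dts_user"))
for stmt in analyze_tables(["orders", "customers"]):
    print(stmt)
```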

SQL operations supported for incremental migration

| Operation type | Supported statements |
| --- | --- |
| DML | INSERT, UPDATE, DELETE |
| DDL | ALTER TABLE, ALTER VIEW, CREATE FUNCTION, CREATE INDEX, CREATE PROCEDURE, CREATE TABLE, CREATE VIEW, DROP INDEX, DROP TABLE, TRUNCATE TABLE, RENAME TABLE |
Important

RENAME TABLE operations can cause data inconsistency. If you rename a table during migration and that table was added as a migration object by name, the renamed table's data will not be migrated. To prevent this, add the database (not the individual table) as the migration object, and make sure both the pre-rename and post-rename databases are included.
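Why the scope of the migration object matters here can be modeled in a few lines. This is only a conceptual sketch (the object and table names are placeholders): a table selected by name drops out of scope after a RENAME TABLE, while selecting the whole database keeps the renamed table covered.

```python
# Hypothetical model of migration scope. A table added by name drops out
# of scope when renamed; a database-level object keeps it in scope.
def in_scope(table: str, objects: set) -> bool:
    db = table.rsplit(".", 1)[0]
    return table in objects or db in objects

table_level = {"mydb.orders"}  # table added as a migration object by name
print(in_scope("mydb.orders", table_level))     # True
print(in_scope("mydb.orders_v2", table_level))  # False: renamed table is not migrated

db_level = {"mydb"}            # whole database added as the migration object
print(in_scope("mydb.orders_v2", db_level))     # True: renamed table stays in scope
```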

Permissions required

| Database | Schema migration | Full data migration | Incremental data migration |
| --- | --- | --- | --- |
| OceanBase (user tenant or regular tenant) | SELECT | SELECT | Regular tenant |
| ApsaraDB RDS for MySQL | Read and write on the destination database | Read and write on the destination database | Read and write on the destination database |
Important

For incremental data migration, install oblogproxy on the server hosting the source OceanBase database and configure the sys tenant. oblogproxy is a proxy service for managing incremental logs. See Install and deploy oblogproxy or Install and deploy oblogproxy using the installation package.

Create the database accounts and grant them the required permissions before you configure the migration task.

Create a migration task

The migration workflow has seven steps: navigate to the Data Migration Tasks page, configure source and destination databases, test connectivity, select migration objects, configure advanced settings, run a precheck, and purchase the instance to start the task.

Step 1: Go to the Data Migration Tasks page

  1. Log on to the Data Management (DMS) console.

  2. In the top navigation bar, click DTS.

  3. In the left-side navigation pane, choose DTS (DTS) > Data Migration.

    You can also go directly to the Data Migration Tasks page. Console layout may vary — see Simple mode and Customize the layout and style of the DMS console for navigation options.
  4. From the drop-down list next to Data Migration Tasks, select the region where the migration instance will reside.

    In the new DTS console, select the region in the upper-left corner instead.

Step 2: Configure source and destination databases

Click Create Task. On the Create Data Migration Task page, configure the following parameters.

Source database

| Parameter | Description |
| --- | --- |
| Select an existing DMS database instance | Optional. If selected, DTS automatically fills in the database parameters. |
| Database type | Select ApsaraDB OceanBase for MySQL. |
| Access method | Select based on where the source database is deployed. This example uses Public IP Address. For self-managed databases, prepare the environment first. See Preparation overview. |
| Instance region | The region where the source OceanBase database resides. |
| IP Address or Domain Name | The endpoint of the source OceanBase database. |
| Port number | The service port. Default: 2881. |
| IP address in log proxy (domain name not supported) | The IP address of oblogproxy for the source OceanBase database. |
| Port in log proxy | The listening port of oblogproxy. Default: 2983. |
| Database account | The source OceanBase database account. See Permissions required. |
| Database password | The password for the database account. |

Destination database

| Parameter | Description |
| --- | --- |
| Select an existing DMS database instance | Optional. If selected, DTS automatically fills in the database parameters. |
| Database type | Select MySQL. |
| Access method | Select Alibaba Cloud Instance. |
| Instance region | The region where the destination ApsaraDB RDS for MySQL instance resides. |
| Replicate data across Alibaba Cloud accounts | Select No for same-account migration. |
| RDS instance ID | The ID of the destination ApsaraDB RDS for MySQL instance. |
| Database account | The destination database account. See Permissions required. |
| Database password | The password for the database account. |
| Encryption | Select Non-encrypted or SSL-encrypted. To use SSL, enable SSL encryption on the RDS instance first. See Configure SSL encryption. |

Step 3: Test connectivity and configure the IP address whitelist

Click Test connectivity and proceed.

If your source or destination database uses an IP address whitelist, add the CIDR blocks of DTS servers to that whitelist.

| Database location | How CIDR blocks are added |
| --- | --- |
| Alibaba Cloud database instances (ApsaraDB RDS for MySQL, ApsaraDB for MongoDB, etc.) | DTS adds the CIDR blocks automatically. |
| Self-managed databases on Elastic Compute Service (ECS) | DTS automatically adds CIDR blocks to the ECS security group rules. If the database is hosted across multiple ECS instances, manually add the CIDR blocks to each instance's security group. |
| Self-managed databases in data centers or on third-party clouds | Manually add the CIDR blocks. See CIDR blocks of DTS servers. |
Warning

Adding public CIDR blocks to a database whitelist or ECS security group creates security risks. Before proceeding, take preventive measures: use strong credentials, limit exposed ports, authenticate API calls, audit whitelist rules regularly, and remove unauthorized CIDR blocks. Consider using Express Connect, VPN Gateway, or Smart Access Gateway for a more secure connection.

Important

If the source database is an ApsaraDB for OceanBase instance, manually add the CIDR blocks of DTS servers to the IP address whitelist of ApsaraDB for OceanBase. Use the same CIDR blocks as those for databases whose Access method is Express Connect, VPN Gateway, or Smart Access Gateway. See Create a whitelist group and CIDR blocks of DTS servers.

Step 4: Select migration types and objects

Configure the following parameters.

Migration types

Select the migration type based on your downtime requirements.

| Type | When to use | Notes |
| --- | --- | --- |
| Schema migration + full data migration | One-time migration with a planned downtime window | Do not write to the source database during migration. |
| Schema migration + full data migration + incremental data migration | Near-zero downtime migration; keeps source and destination in sync during cutover | Requires oblogproxy installed on the source server. |

Object and advanced settings

| Parameter | Description |
| --- | --- |
| Method to migrate triggers in source database | Available when schema migration is selected. Choose based on your requirements. See Synchronize or migrate triggers. |
| Processing mode of conflicting tables | Precheck and report errors: fails the precheck if the source and destination share table names. Ignore errors and proceed: skips the check, but risks data inconsistency or partial migration failure. |
| Capitalization of object names in destination instance | Controls the case of database names, table names, and column names in the destination. Default: DTS default policy. See Specify object name capitalization. |
| Source objects | Select objects from the Source objects section and move them to Selected objects using the rightwards arrow icon. Selectable at the column, table, or database level. |
| Selected objects | Right-click an object to rename it. Click Batch edit to rename multiple objects at once. See Map object names. To set row filter conditions, right-click a table and specify WHERE conditions. See Set filter conditions. |

Note: Object name mapping may cause dependent objects to fail migration. Column mapping for non-full table migration, or schema mismatches between source and destination, can result in data loss for unmapped columns.

Step 5: Configure advanced task settings

Click Next: Advanced settings and configure the following.

| Parameter | Description |
| --- | --- |
| Dedicated cluster for task scheduling | By default, DTS uses a shared cluster. Purchase a dedicated cluster for higher stability. See What is a DTS dedicated cluster. |
| Set alerts | Configure alerting for task failures or migration latency exceeding a threshold. Select Yes to specify an alert threshold and notification contacts. See Configure monitoring and alerting. |
| Retry time for failed connections | Range: 10–1440 minutes. Default: 720 minutes. We recommend a value greater than 30 minutes. DTS resumes the task if it reconnects within this window; otherwise, the task fails. If multiple tasks share the same source or destination, the most recently set value applies. |
| Retry time for other issues | Range: 1–1440 minutes. Default: 10 minutes. We recommend a value greater than 10 minutes. Must be less than the Retry time for failed connections value. |
| Enable throttling for full data migration | Limits read/write load on the source and destination during full migration. Set QPS to the source database, RPS of full data migration, and Data migration speed (MB/s). Available only when full data migration is selected. |
| Enable throttling for incremental data migration | Limits load during incremental migration. Set RPS of incremental data migration and Data migration speed (MB/s). Available only when incremental data migration is selected. |
| Environment tag | Optional. Tag the DTS instance for environment identification. |
| Configure ETL | Enable the extract, transform, and load (ETL) feature to apply data transformations during migration. See What is ETL? and Configure ETL. |

Step 6: Run the precheck

Click Next: Save task settings and precheck.

To preview the OpenAPI parameters for configuring this task via API, hover over Next: Save task settings and precheck and click Preview OpenAPI parameters.

DTS runs a precheck before the migration task can start.

  • If a check item fails: click View details, resolve the issue, and click Precheck again.

  • If a check item shows an alert: determine whether it can be safely ignored. To ignore it, click View details > Ignore > OK > Precheck again. Ignoring alerts may cause data inconsistency.

Wait until the success rate reaches 100%.

Step 7: Purchase the migration instance and start the task

  1. Click Next: Purchase instance.

  2. On the Purchase instance page, configure the following:

    | Section | Parameter | Description |
    | --- | --- | --- |
    | New instance class | Resource group settings | The resource group for the migration instance. Default: default resource group. See What is Resource Management? |
    | | Instance class | The instance class determines migration speed. See Specifications of data migration instances. |
  3. Read and select the Data Transmission Service (Pay-as-you-go) Service Terms check box.

  4. Click Buy and start.

The migration task starts. Track progress in the task list.

FAQ

What do I enter for oblogproxy fields if oblogproxy is not installed?

Use the default values for IP address in log proxy (domain name not supported) and Port in log proxy, and do not select Incremental data migration for Migration types. Selecting incremental migration without oblogproxy installed causes an error.

The region of my source OceanBase database is not in the Instance region drop-down list. What do I select?

Select the region nearest to your source OceanBase database.

My source OceanBase database is deployed in a cluster. What do I enter for the IP Address or Domain Name parameter?

Set this parameter to the value that you specified for OBServer Node when you created the cluster.

How do I set Port number for a clustered OceanBase deployment?

If the source OceanBase database is in standalone mode, use the default port. If it is deployed in a cluster, use the value you specified for SQL Port when creating the cluster.

How do I format the Database account field for the source OceanBase database?

Use the format <username>@<tenant_name>. For example, if the user is dtstest and the tenant is dts, enter dtstest@dts.
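The account format described above can be sketched as a small helper, using the example values from this FAQ:

```python
# Build the source database account in <username>@<tenant_name> format,
# using the example values from the FAQ above.
def source_account(user: str, tenant: str) -> str:
    return f"{user}@{tenant}"

print(source_account("dtstest", "dts"))  # dtstest@dts
```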