Data Transmission Service: Migrate data from a MySQL-compatible OceanBase database to Lindorm

Last Updated: Mar 28, 2026

Data Transmission Service (DTS) migrates data from a self-managed MySQL-compatible OceanBase database or an ApsaraDB for OceanBase instance to the LindormTable engine of a Lindorm instance. This topic walks through migrating a self-managed OceanBase database accessible over the Internet.

Prerequisites

Before you begin, create namespaces, tables, and columns in the Lindorm instance with the same names as the corresponding objects in the source OceanBase database.
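As a sketch of what matching objects might look like, the following creates a source table and a destination table with identical names. The database, table, and column names are hypothetical, and the exact Lindorm SQL syntax may differ from the MySQL dialect shown here:

```sql
-- Source (OceanBase, MySQL mode); names are illustrative only.
CREATE DATABASE demo;
CREATE TABLE demo.orders (
  order_id   BIGINT NOT NULL,
  amount     DOUBLE,
  created_at TIMESTAMP,
  PRIMARY KEY (order_id)
);

-- Destination (LindormTable): the namespace, table, and column names
-- must match the source objects above.
CREATE DATABASE demo;
CREATE TABLE demo.orders (
  order_id   BIGINT,
  amount     DOUBLE,
  created_at TIMESTAMP,
  PRIMARY KEY (order_id)
);
```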

Permissions required

Grant the following permissions to the accounts used by DTS before you configure the migration task.

  • Self-managed OceanBase (database account): SELECT permission for both full and incremental data migration.

  • Self-managed OceanBase (tenant): a regular tenant for both full and incremental data migration.

  • ApsaraDB for OceanBase: SELECT permission for both full and incremental data migration.

  • Lindorm instance: read and write permissions on the destination namespace for both full and incremental data migration.
Important

For incremental data migration from a self-managed OceanBase database, install oblogproxy on the source server and configure the system tenant. oblogproxy is a proxy service that manages incremental logs. For more information, see Install and deploy oblogproxy using the installation package.

For instructions on creating accounts and granting permissions, see the account management documentation for the source and destination database services.
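As an illustrative sketch, granting the SELECT permission in a MySQL-compatible OceanBase tenant might look as follows. The account name, password placeholder, and database name are hypothetical:

```sql
-- Run in the source tenant (MySQL mode) as a privileged account.
-- 'dts_user' and 'demo' are hypothetical names; use your own.
CREATE USER 'dts_user'@'%' IDENTIFIED BY '<your_password>';
GRANT SELECT ON demo.* TO 'dts_user'@'%';
```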

Limitations

Review these limitations before configuring the migration task.

Source database

  • For an ApsaraDB for OceanBase source, manually add the CIDR blocks of DTS servers to the IP address whitelist of the instance. For more information, see Create a whitelist group and Add the CIDR blocks of DTS servers.

  • The source server must have enough outbound bandwidth. Insufficient bandwidth reduces migration speed.

  • Tables to be migrated must have PRIMARY KEY or UNIQUE constraints, and the constrained fields must contain unique values. Otherwise, the destination database may contain duplicate records.

  • When selecting tables as objects to migrate with object name mapping, a single task supports up to 1,000 tables. Exceeding this limit causes a request error. Split the work into multiple tasks, or migrate the entire database in a single task.

  • During full data migration, do not perform DDL operations that change database or table schemas. Otherwise, the migration task fails.

  • If you run full data migration only (without incremental migration), do not write data to the source during migration. To ensure data consistency, select both Full Data Migration and Incremental Data Migration.

  • GEOMETRY data can only be migrated using full data migration. Incremental migration of GEOMETRY data is not supported.
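To check the primary key requirement above before you configure the task, you can run a query of the following shape against the source database. This is a sketch; replace the hypothetical schema name `demo` with your own:

```sql
-- List base tables that have neither a PRIMARY KEY nor a UNIQUE constraint
-- and therefore risk producing duplicate records in the destination.
SELECT t.table_name
FROM information_schema.tables t
WHERE t.table_schema = 'demo'
  AND t.table_type = 'BASE TABLE'
  AND NOT EXISTS (
    SELECT 1
    FROM information_schema.table_constraints c
    WHERE c.table_schema = t.table_schema
      AND c.table_name = t.table_name
      AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
  );
```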

General

  • Schema migration is not supported.

  • Tables must contain at least one non-primary key field. Migrating only primary key fields is not supported.

  • DTS writes data only to the LindormTable engine of the Lindorm instance.

  • Full data migration uses read and write resources on both the source and destination databases, which may increase server load. Run full data migration during off-peak hours when CPU load is below 30%.

  • After full data migration completes, the destination tablespace is larger than the source due to fragmentation caused by concurrent INSERT operations.

  • DTS retrieves FLOAT and DOUBLE column values using ROUND(COLUMN,PRECISION). The default precision is 38 digits for FLOAT and 308 digits for DOUBLE. Verify that these precision settings meet your requirements.

  • DTS attempts to resume failed tasks for up to seven days. Before switching workloads to the destination, stop or release the migration task, or revoke the write permissions of the DTS account on the destination. Otherwise, source data may overwrite destination data when the task resumes.

  • During full and incremental data migration, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. If you perform cascade update or delete operations on the source during migration, data inconsistency may occur.
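The FLOAT and DOUBLE handling described above can be approximated by a query of the following shape. The table and column names are hypothetical; run a similar query against your own data to verify that the default precision is acceptable:

```sql
-- Roughly how DTS reads floating-point columns (38 digits for FLOAT,
-- 308 digits for DOUBLE); names are illustrative only.
SELECT ROUND(float_col, 38)   AS float_value,
       ROUND(double_col, 308) AS double_value
FROM demo.orders;
```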

Billing

  • Full data migration: the link configuration fee is free. Data transfer is charged when data is migrated out of Alibaba Cloud over the Internet. For more information, see Billing overview.

  • Incremental data migration: the link configuration fee is charged. For more information, see Billing overview.

SQL operations supported for incremental migration

  • DML: INSERT, UPDATE, DELETE

  • DDL: CREATE TABLE, DROP TABLE, ADD COLUMN
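The supported incremental DDL statements can be illustrated with a short sketch; the table and column names are hypothetical:

```sql
-- Statement shapes that DTS replicates during incremental migration:
CREATE TABLE demo.events (
  id      BIGINT NOT NULL,
  payload VARCHAR(255),
  PRIMARY KEY (id)
);
ALTER TABLE demo.events ADD COLUMN created_at TIMESTAMP;
DROP TABLE demo.events;
```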

Create a migration task

  1. Log on to the DTS console.

  2. In the left-side navigation pane, click Data Migration.

  3. In the top navigation bar, select the region where your DTS instance resides.

  4. Click Create Task.

  5. (Optional) Click New Configuration Page in the upper-right corner.

    Skip this step if the Back to Previous Version button is displayed. Use the new configuration page when available, as specific parameters may differ between versions.
  6. Configure the source and destination databases.

    Source database is a self-managed OceanBase database

    Source database

    • Task Name: A name for the DTS task. DTS generates a name automatically. Specify a descriptive name to identify the task. Uniqueness is not required.

    • Select Existing Connection: Select an existing registered instance to reuse its connection settings, or leave this parameter blank to configure the connection manually. You can register a database with DTS on the Database Connections page or the new configuration page. For more information, see Manage database connections. If you are using the DMS console, select an existing database from the Select a DMS database instance drop-down list, or click Add DMS Database Instance to register a database. For more information, see Register an Alibaba Cloud database instance and Register a database hosted on a third-party cloud service or a self-managed database.

    • Database Type: Select ApsaraDB OceanBase for MySQL.

    • Access Method: Select the access method based on where the source database is deployed. This example uses Public IP Address. For self-managed databases, complete the environment setup before migration. For more information, see Preparation overview.

    • Instance Region: The region where the source OceanBase database resides. If Public IP Address is selected and the region is not listed, select the geographically closest region.

    • Domain Name or IP: The endpoint of the source OceanBase database.

    • Port Number: The service port of the source OceanBase database. Default: 2881.

    • IP Address in Log Proxy (Domain Name Not Supported): The IP address of oblogproxy for the source OceanBase database.

    • Port in Log Proxy: The listening port of oblogproxy. Default: 2983.

    • Database Account: The source database account. For required permissions, see Permissions required.

    • Database Password: The password for the source database account.

    Destination database

    • Select Existing Connection: Select an existing registered instance to reuse its connection settings, or leave this parameter blank to configure the connection manually. You can register a database with DTS on the Database Connections page or the new configuration page. For more information, see Manage database connections. If you are using the DMS console, select an existing database from the Select a DMS database instance drop-down list, or click Add DMS Database Instance to register a database. For more information, see Register an Alibaba Cloud database instance and Register a database hosted on a third-party cloud service or a self-managed database.

    • Database Type: Select Lindorm.

    • Access Method: Select Alibaba Cloud Instance.

    • Instance Region: The region where the destination Lindorm instance resides.

    • Instance ID: The ID of the destination Lindorm instance.

    • Database Account: The destination database account. For required permissions, see Permissions required.

    • Database Password: The password for the destination database account.
  7. Click Test Connectivity and Proceed. Add the CIDR blocks of DTS servers to the OceanBase whitelist before clicking Test Connectivity.

    Important

    Adding public CIDR blocks to a database whitelist carries security risks. Before using DTS to migrate data, take preventive measures including strengthening account credentials, restricting exposed ports, authenticating API calls, and regularly auditing the whitelist. For more information, see Add the CIDR blocks of DTS servers.

  8. Configure the objects to migrate. On the Configure Objects page, set the following parameters:

    • Migration Types: Select Full Data Migration for a one-time migration. Select both Full Data Migration and Incremental Data Migration to keep the destination synchronized during migration. If you select only Full Data Migration, do not write to the source database during migration.

    • Processing Mode of Conflicting Tables:
      Precheck and Report Errors: DTS checks for tables with identical names in the source and destination. If matches are found, the precheck fails and the task cannot start. Use object name mapping to rename the conflicting tables in the destination.
      Ignore Errors and Proceed: DTS skips the check. During full data migration, existing destination records are retained. During incremental data migration, existing destination records are overwritten. If the schemas differ, only specific columns are migrated or the task fails. Use this mode with caution.

    • Capitalization of Object Names in Destination Instance: The capitalization policy for database, table, and column names in the destination. Default: DTS default policy. For more information, see Specify the capitalization of object names in the destination instance.

    • Source Objects: Select objects from the Source Objects section and click the arrow icon to move them to the Selected Objects section. You can select columns, tables, or databases. Selecting tables or columns excludes other objects such as views, triggers, and stored procedures.

    • Selected Objects: Right-click an object to rename it (object name mapping), add a WHERE filter condition, or select specific SQL operations. To remove an object, click it and then click the remove icon to move it back to Source Objects. Renaming an object may cause dependent objects to fail to migrate.
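A WHERE filter condition attached to a selected object is an expression over the source table's columns. As a hypothetical sketch (the column name is illustrative), a filter that migrates only recent rows could be:

```sql
-- Entered in the filter dialog for a selected table; migrates only rows
-- whose (hypothetical) created_at column is on or after the given date.
created_at >= '2025-01-01 00:00:00'
```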
  9. Click Next: Advanced Settings and configure the following parameters:

    • Dedicated Cluster for Task Scheduling: By default, DTS schedules tasks to a shared cluster. Purchase a dedicated cluster for improved stability. For more information, see What is a DTS dedicated cluster.

    • Retry Time for Failed Connections: The period during which DTS retries failed connections. Valid values: 10 to 1,440 minutes. Default: 720. Set a value greater than 30. If reconnection succeeds within this period, the task resumes automatically. When multiple tasks share the same source or destination, the most recently specified value takes effect. DTS charges for instances during retry periods.

    • Retry Time for Other Issues: The period during which DTS retries failed DDL or DML operations. Valid values: 1 to 1,440 minutes. Default: 10. Set a value greater than 10 and less than Retry Time for Failed Connections.

    • Enable Throttling for Full Data Migration: Throttle full data migration to reduce the load on the source and destination databases. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected.

    • Enable Throttling for Incremental Data Migration: Throttle incremental data migration to reduce the load on the destination database. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected.

    • Environment Tag: (Optional) A tag to identify the DTS instance.

    • Configure ETL: Select Yes to enable extract, transform, and load (ETL) and enter data processing statements, or select No to skip this feature. For more information, see Configure ETL in a data migration or data synchronization task.

    • Monitoring and Alerting: Select Yes to receive alerts when the task fails or migration latency exceeds the configured threshold, and configure the alert threshold and notification settings. For more information, see Configure monitoring and alerting.
  10. Save the task settings and run a precheck.

    • To preview the API parameters for configuring this task, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.

    • Click Next: Save Task Settings and Precheck.

    The task runs a precheck before starting. The task can start only after the precheck passes. If the precheck fails, click View Details next to each failed item, troubleshoot the issue, and run the precheck again. For alert items that can be ignored, click Confirm Alert Details, click Ignore in the dialog box, click OK, and then click Precheck Again. Ignoring alert items may cause data inconsistency.
  11. Wait until Success Rate reaches 100%, then click Next: Purchase Instance.

  12. Purchase a data migration instance.

    1. On the Purchase Instance page, configure the following parameters:

      • Resource Group: The resource group to which the data migration instance belongs. Default: the default resource group. For more information, see What is Resource Management?

      • Instance Class: The instance class determines the migration speed. Select a class based on your requirements. For more information, see Instance classes of data migration instances.
    2. Read and select the Data Transmission Service (Pay-as-you-go) Service Terms check box.

    3. Click Buy and Start, then click OK.

View the task progress on the Data Migration page.