Data Transmission Service (DTS) migrates data from a self-managed MySQL-compatible OceanBase database or an ApsaraDB for OceanBase instance to the LindormTable engine of a Lindorm instance. This topic walks through migrating a self-managed OceanBase database accessible over the Internet.
Prerequisites
Before you begin, ensure that you have:
A self-managed OceanBase database running OceanBase Database Community Edition V4.x, or an ApsaraDB for OceanBase instance with binary logging enabled. For more information, see Operations related to the binlog service.
A Lindorm instance created with the LindormTable engine, with available storage larger than the total size of data in the source OceanBase database. For more information, see Create an instance.
The MySQL compatibility feature enabled on the destination Lindorm instance. For more information, see Enable the MySQL compatibility feature.
A namespace and a wide table created in the destination Lindorm instance. The wide table must be pre-partitioned based on full data. For more information, see Use a MySQL client to connect to and use LindormTable, Use Lindorm-cli to connect to and use LindormTable, Use Lindorm Shell to connect to LindormTable, CREATE TABLE, and Destination database: a Lindorm instance.
Namespaces, tables, and columns created in the Lindorm instance with the same names as the corresponding objects in the source OceanBase database.
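As an illustration only, the following sketch shows how the destination objects might be created through the MySQL-compatible endpoint of the Lindorm instance. The database name `migration_db`, the table name `orders`, and the column definitions are placeholders, and the exact pre-partitioning clause depends on your LindormTable version; see CREATE TABLE for the supported syntax.

```sql
-- Hypothetical names; replace them with the objects that exist in your source database.
CREATE DATABASE migration_db;
USE migration_db;

-- Column names and types must match the source table. Pre-partition the table
-- based on the distribution of the full data; consult the CREATE TABLE topic
-- for the partitioning options supported by your LindormTable version.
CREATE TABLE orders (
  id     VARCHAR,
  status VARCHAR,
  amount DOUBLE,
  PRIMARY KEY (id)
);
```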
Permissions required
Grant the following permissions to the accounts used by DTS before you configure the migration task.
| Database | Full data migration | Incremental data migration |
|---|---|---|
| Self-managed OceanBase – user | SELECT | SELECT |
| Self-managed OceanBase – tenant | Regular tenant | Regular tenant |
| ApsaraDB for OceanBase | SELECT | SELECT |
| Lindorm instance | Read and write on the destination namespace | Read and write on the destination namespace |
For incremental data migration from a self-managed OceanBase database, install oblogproxy on the source server and configure the system tenant. oblogproxy is a proxy service that manages incremental logs. For more information, see Install and deploy oblogproxy using the installation package.
For instructions on creating accounts and granting permissions, see:
Self-managed OceanBase: Create a tenant, Create a user, and Grant privileges.
ApsaraDB for OceanBase: Create a tenant, Create a database user, and User privileges.
Lindorm: Manage users.
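For a self-managed OceanBase database in MySQL mode, the account setup can be sketched as follows. The user name, password, and database name are placeholders; run the statements in the regular (MySQL-compatible) tenant.

```sql
-- Placeholders: dts_user, Your_Passw0rd, and migration_db are examples only.
CREATE USER 'dts_user'@'%' IDENTIFIED BY 'Your_Passw0rd';

-- SELECT is sufficient for both full and incremental data migration.
GRANT SELECT ON migration_db.* TO 'dts_user'@'%';
```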
Limitations
Review these limitations before configuring the migration task.
Source database
For an ApsaraDB for OceanBase source, manually add the CIDR blocks of DTS servers to the IP address whitelist of the instance. For more information, see Create a whitelist group and Add the CIDR blocks of DTS servers.
The source server must have enough outbound bandwidth. Insufficient bandwidth reduces migration speed.
Tables to be migrated must have PRIMARY KEY or UNIQUE constraints, and all fields covered by those constraints must be unique. Otherwise, the destination database may contain duplicate records.
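Assuming the source exposes a MySQL-compatible information_schema, a query such as the following can flag tables that lack PRIMARY KEY or UNIQUE constraints before you configure the task. The schema name `migration_db` is a placeholder.

```sql
-- List base tables in migration_db that have no PRIMARY KEY or UNIQUE constraint.
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
  ON  t.table_schema = c.table_schema
  AND t.table_name   = c.table_name
  AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_schema = 'migration_db'
  AND t.table_type   = 'BASE TABLE'
  AND c.constraint_name IS NULL;
```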
When selecting tables as objects to migrate with object name mapping, a single task supports up to 1,000 tables. Exceeding this limit causes a request error. Split the work into multiple tasks, or migrate the entire database in a single task.
During full data migration, do not perform DDL operations that change database or table schemas. Otherwise, the migration task fails.
If you run full data migration only (without incremental migration), do not write data to the source during migration. To ensure data consistency, select both Full Data Migration and Incremental Data Migration.
GEOMETRY data can only be migrated using full data migration. Incremental migration of GEOMETRY data is not supported.
General
Schema migration is not supported.
Tables must contain at least one non-primary key field. Migrating only primary key fields is not supported.
DTS writes data only to the LindormTable engine of the Lindorm instance.
Full data migration uses read and write resources on both the source and destination databases, which may increase server load. Run full data migration during off-peak hours when CPU load is below 30%.
After full data migration completes, the destination tablespace is larger than the source due to fragmentation caused by concurrent INSERT operations.
DTS retrieves FLOAT and DOUBLE column values using ROUND(COLUMN, PRECISION). The default precision is 38 digits for FLOAT and 308 digits for DOUBLE. Verify that these precision settings meet your requirements.
DTS attempts to resume failed tasks for up to seven days. Before switching workloads to the destination, stop or release the migration task, or revoke the write permissions of the DTS account on the destination. Otherwise, source data may overwrite destination data when the task resumes.
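For example, DTS reads floating-point columns roughly as shown below; the table and column names are placeholders used for illustration.

```sql
-- float_col is read with up to 38 digits of precision, double_col with up to 308.
SELECT ROUND(float_col, 38), ROUND(double_col, 308) FROM migration_db.orders;
```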
During full and incremental data migration, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. If you perform cascade update or delete operations on the source during migration, data inconsistency may occur.
Billing
| Migration type | Link configuration fee | Data transfer cost |
|---|---|---|
| Full data migration | Free | Charged when data exits Alibaba Cloud over the Internet. For more information, see Billing overview. |
| Incremental data migration | Charged. For more information, see Billing overview. | — |
SQL operations supported for incremental migration
| Operation type | SQL statements |
|---|---|
| DML | INSERT, UPDATE, DELETE |
| DDL | CREATE TABLE, DROP TABLE, ADD COLUMN |
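The statement types in the table above can be illustrated with a few examples; the table and column names are hypothetical.

```sql
-- Supported DML operations:
INSERT INTO orders (id, status) VALUES ('1001', 'paid');
UPDATE orders SET status = 'shipped' WHERE id = '1001';
DELETE FROM orders WHERE id = '1001';

-- Supported DDL operations (ADD COLUMN is issued through ALTER TABLE):
ALTER TABLE orders ADD COLUMN tracking_no VARCHAR;
```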
Create a migration task
Log on to the DTS console.
In the left-side navigation pane, click Data Migration.
In the top navigation bar, select the region where your DTS instance resides.
Click Create Task.
(Optional) Click New Configuration Page in the upper-right corner.
Skip this step if the Back to Previous Version button is displayed. Use the new configuration page when available, as specific parameters may differ between versions.
Configure the source and destination databases.
Source database is a self-managed OceanBase database
Source database
| Parameter | Description |
|---|---|
| Task Name | A name for the DTS task. DTS generates a name automatically. Specify a descriptive name to identify the task. Uniqueness is not required. |
| Select Existing Connection | Select an existing registered instance to reuse its connection settings, or leave blank to configure connection settings manually. You can register a database with DTS on the Database Connections page or the new configuration page. For more information, see Manage database connections. If you are using the DMS console, select an existing database from the Select a DMS database instance drop-down list, or click Add DMS Database Instance to register a database. For more information, see Register an Alibaba Cloud database instance and Register a database hosted on a third-party cloud service or a self-managed database. |
| Database Type | Select ApsaraDB OceanBase for MySQL. |
| Access Method | Select the access method based on where the source database is deployed. This example uses Public IP Address. For self-managed databases, complete the environment setup before migration. For more information, see Preparation overview. |
| Instance Region | The region where the source OceanBase database resides. If Public IP Address is selected and the region is not listed, select the geographically closest region. |
| Domain Name or IP | The endpoint of the source OceanBase database. |
| Port Number | The service port of the source OceanBase database. Default: 2881. |
| IP Address in Log Proxy (Domain Name Not Supported) | The IP address of oblogproxy for the source OceanBase database. |
| Port in Log Proxy | The listening port of oblogproxy. Default: 2983. |
| Database Account | The source database account. For required permissions, see Permissions required. |
| Database Password | The password for the source database account. |
Destination database
| Parameter | Description |
|---|---|
| Select Existing Connection | Select an existing registered instance to reuse its connection settings, or leave blank to configure connection settings manually. You can register a database with DTS on the Database Connections page or the new configuration page. For more information, see Manage database connections. If you are using the DMS console, select an existing database from the Select a DMS database instance drop-down list, or click Add DMS Database Instance to register a database. For more information, see Register an Alibaba Cloud database instance and Register a database hosted on a third-party cloud service or a self-managed database. |
| Database Type | Select Lindorm. |
| Access Method | Select Alibaba Cloud Instance. |
| Instance Region | The region where the destination Lindorm instance resides. |
| Instance ID | The ID of the destination Lindorm instance. |
| Database Account | The destination database account. For required permissions, see Permissions required. |
| Database Password | The password for the destination database account. |
Click Test Connectivity and Proceed. Add the CIDR blocks of DTS servers to the OceanBase whitelist before clicking Test Connectivity.
Important: Adding public CIDR blocks to a database whitelist carries security risks. Before using DTS to migrate data, take preventive measures including strengthening account credentials, restricting exposed ports, authenticating API calls, and regularly auditing the whitelist. For more information, see Add the CIDR blocks of DTS servers.
Configure the objects to migrate. On the Configure Objects page, set the following parameters:
| Parameter | Description |
|---|---|
| Migration Types | Select Full Data Migration for a one-time migration. Select both Full Data Migration and Incremental Data Migration to keep the destination synchronized during migration. If you select full data migration only, do not write to the source during migration. |
| Processing Mode of Conflicting Tables | Precheck and Report Errors: DTS checks for tables with identical names in the source and destination. If matches are found, the precheck fails and the task cannot start. Use object name mapping to rename conflicting tables in the destination. Ignore Errors and Proceed: DTS skips the check. During full data migration, existing destination records are retained. During incremental data migration, existing destination records are overwritten. If schemas differ, only specific columns are migrated or the task fails. Use with caution. |
| Capitalization of Object Names in Destination Instance | The capitalization policy for database names, table names, and column names in the destination. Default: DTS default policy. For more information, see Specify the capitalization of object names in the destination instance. |
| Source Objects | Select objects from Source Objects and click the arrow icon to move them to Selected Objects. You can select columns, tables, or databases. Selecting tables or columns excludes other objects such as views, triggers, and stored procedures. |
| Selected Objects | Right-click an object to rename it (object name mapping), add WHERE filter conditions, or select specific SQL operations. To remove an object, click it and then click the remove icon to move it back to Source Objects. Renaming an object may cause dependent objects to fail migration. |
Click Next: Advanced Settings and configure the following parameters:
| Parameter | Description |
|---|---|
| Dedicated Cluster for Task Scheduling | By default, DTS schedules tasks to a shared cluster. Purchase a dedicated cluster for improved stability. For more information, see What is a DTS dedicated cluster. |
| Retry Time for Failed Connections | The period during which DTS retries failed connections. Valid values: 10–1,440 minutes. Default: 720. Set a value greater than 30. If reconnection succeeds within this period, the task resumes automatically. When multiple tasks share the same source or destination, the most recently specified value takes effect. DTS charges for instances during retry periods. |
| Retry Time for Other Issues | The period during which DTS retries failed DDL or DML operations. Valid values: 1–1,440 minutes. Default: 10. Set a value greater than 10 and less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Migration | Throttle full data migration to reduce load on the source and destination databases. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected. |
| Enable Throttling for Incremental Data Migration | Throttle incremental data migration to reduce load on the destination database. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected. |
| Environment Tag | (Optional) A tag to identify the DTS instance. |
| Configure ETL | Select Yes to enable extract, transform, and load (ETL) and enter data processing statements. Select No to skip. For more information, see Configure ETL in a data migration or data synchronization task. |
| Monitoring and Alerting | Select Yes to receive alerts when the task fails or migration latency exceeds the configured threshold. Configure the alert threshold and notification settings. For more information, see Configure monitoring and alerting. |
Save the task settings and run a precheck.
To preview the API parameters for configuring this task, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.
Click Next: Save Task Settings and Precheck.
The task runs a precheck before starting. The task can start only after the precheck passes. If the precheck fails, click View Details next to each failed item, troubleshoot the issue, and run the precheck again. For alert items that can be ignored, click Confirm Alert Details, click Ignore in the dialog box, click OK, and then click Precheck Again. Ignoring alert items may cause data inconsistency.
Wait until Success Rate reaches 100%, then click Next: Purchase Instance.
Purchase a data migration instance.
On the Purchase Instance page, configure the following parameters:
| Parameter | Description |
|---|---|
| Resource Group | The resource group for the data migration instance. Default: default resource group. For more information, see What is Resource Management? |
| Instance Class | The instance class determines migration speed. Select based on your requirements. For more information, see Instance classes of data migration instances. |
Read and select the Data Transmission Service (Pay-as-you-go) Service Terms check box.
Click Buy and Start, then click OK.
View the task progress on the Data Migration page.