
Data Transmission Service: Migrate data from PolarDB for MySQL to Elasticsearch

Last Updated: Mar 28, 2026

Use Data Transmission Service (DTS) to migrate data from a PolarDB for MySQL cluster to an Elasticsearch instance. DTS supports schema migration, full historical data migration, and ongoing incremental replication — enabling near-zero-downtime cutover.

Prerequisites

Before you begin, make sure that you have:

  • A source PolarDB for MySQL cluster. If you plan to use incremental data migration, enable binary logging on the cluster.

  • A destination Elasticsearch instance.

Note: Enabling binary logging on a PolarDB for MySQL cluster consumes storage space and incurs storage fees.

Choose a migration type

DTS supports three migration types, which you can combine based on your requirements.

Migration type | What it does | Use when
Schema migration | Migrates schema definitions (tables, indexes) from the source to the destination | Always include this unless you have pre-created indexes in Elasticsearch
Full data migration | Migrates all existing data at the time the task starts | You need a one-time copy of historical data
Incremental data migration | Continuously replicates data changes after the full migration completes | You need near-zero downtime and want to keep the destination in sync with the source

Recommended combination for production migrations: Select all three types. This lets you complete the migration without interrupting your applications — the full migration copies existing data, and incremental migration keeps the destination up to date while you prepare to switch over.

If you only need a point-in-time copy, select Schema migration and Full data migration only. In this case, do not write new data to the source during migration to avoid inconsistency.

Supported DML operations for incremental migration: INSERT, UPDATE, DELETE. UPDATE operations that remove fields are not supported.
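To illustrate the restriction above, the following Python sketch shows a hypothetical check (not a DTS API) for UPDATE operations that remove fields, which incremental migration does not support:

```python
# Hypothetical helper for illustration only, not part of DTS:
# detect an UPDATE that drops a field from a row.
def removes_fields(old_row: dict, new_row: dict) -> bool:
    """Return True if the update removes any key present in the old row."""
    return not set(old_row).issubset(new_row)

# An UPDATE that changes a value is supported ...
print(removes_fields({"id": 1, "extra": "x"}, {"id": 1, "extra": "y"}))  # False
# ... but an UPDATE that removes the "extra" field is not.
print(removes_fields({"id": 1, "extra": "x"}, {"id": 1}))                # True
```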

Billing

Migration type | Instance configuration fee | Internet traffic fee
Schema migration and full data migration | Free of charge | Charged when Access Method is set to Public IP Address. See Billing overview.
Incremental data migration | Charged. See Billing overview. | Charged when Access Method is set to Public IP Address. See Billing overview.

Limitations

Source database

  • Tables must have primary keys or UNIQUE constraints, and all fields in those tables must be unique. Without this, duplicate records may appear in the destination.

  • When migrating at the table level with object editing (such as column name mapping), a single task can migrate a maximum of 1,000 tables. To migrate more, split the tables across multiple tasks, or configure a task for the entire database. Otherwise, a request error may be reported after you submit the task.
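The 1,000-table limit above can be worked around by batching. This Python sketch (an illustration, not a DTS API) splits a table list so each batch fits in one table-level task:

```python
# Illustrative sketch: split a list of tables into batches of at most
# 1,000 so that each batch can be configured as one migration task.
def split_into_tasks(tables, limit=1000):
    return [tables[i:i + limit] for i in range(0, len(tables), limit)]

# 2,500 tables become three tasks: 1,000 + 1,000 + 500.
tables = [f"orders_{n}" for n in range(2500)]
batches = split_into_tasks(tables)
print([len(b) for b in batches])  # [1000, 1000, 500]
```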

  • DDL operations that change database or table schemas are not allowed during schema migration.

  • Data from read-only nodes of the source PolarDB for MySQL instance cannot be migrated.

  • DTS does not support migration of Object Storage Service (OSS) external tables from PolarDB for MySQL.

  • The following object types are not supported: INDEX, PARTITION, VIEW, PROCEDURE, FUNCTION, TRIGGER, and FOREIGN KEY.

Incremental migration

  • Binary logs must be retained for at least 3 days (7 days recommended). If DTS cannot obtain the binary logs, the task fails. In extreme cases, this can cause data inconsistency or loss. Issues caused by a retention period shorter than 3 days are not covered by the DTS Service Level Agreement (SLA). To set the retention period, see Modify the retention period.

  • Do not use tools such as pt-online-schema-change for online DDL operations on migration objects in the source database during migration.

Destination (Elasticsearch)

  • You cannot migrate data to an index that has a parent-child relationship or a Join field type mapping. Doing so may cause task errors or query failures.

  • Development and test specifications of Elasticsearch instances are not supported.

Other

  • During full data migration, DTS consumes read and write resources on both databases, which may increase server load. Run full migrations during off-peak hours.

  • DTS does not support primary/standby switchover scenarios during full data migration. If a switchover occurs, reconfigure the migration task.

  • For FLOAT and DOUBLE columns, DTS reads values using ROUND(COLUMN, PRECISION). If you have not defined precision, DTS uses 38 for FLOAT and 308 for DOUBLE. Verify that these precision values meet your requirements.
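As a rough illustration of the ROUND behavior above, the following Python sketch mirrors rounding a column value to a given precision before it is read (the helper name and defaults-as-constants are illustrative, not DTS internals; the 38/308 defaults come from this guide):

```python
# Defaults stated in this guide when no column precision is defined.
FLOAT_DEFAULT_PRECISION = 38
DOUBLE_DEFAULT_PRECISION = 308

def read_float(value: float, precision: int = FLOAT_DEFAULT_PRECISION) -> float:
    """Rough analog of ROUND(COLUMN, PRECISION) applied when reading."""
    return round(value, precision)

# With a column precision of 2, the value is rounded before migration.
print(read_float(3.14159, precision=2))  # 3.14
# With the default precision, the value is effectively unchanged.
print(read_float(3.14159))               # 3.14159
```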

  • To add a column to a table being migrated: first update the mapping in the destination Elasticsearch instance, then run the DDL operation in the source database, then pause and restart the migration task.

  • DTS tries to resume tasks that failed within the past seven days. Before switching your business to the destination instance, stop or release the migration instance — or run REVOKE to remove write permissions from the DTS database account — to prevent an automatic resume from overwriting data in the destination.

  • If a task fails, DTS support staff will attempt to restore it within eight hours. During this process, they may restart the task or adjust task parameters. Only DTS task parameters are modified, not database parameters. For a list of adjustable parameters, see Modify instance parameters.

Data type mappings

PolarDB for MySQL and Elasticsearch support different data types, so types cannot always be mapped one-to-one. During schema migration, DTS maps source types to the closest supported types in Elasticsearch. See Data type mappings for initial schema synchronization.

The following table shows how Elasticsearch concepts map to relational database concepts.

Elasticsearch | Relational database
Index | Database
Type | Table
Document | Row
Field | Column
Mapping | Database schema
DTS does not set the `dynamic` mapping parameter during schema migration. This parameter's behavior depends on your Elasticsearch instance settings. If your source data is in JSON format, make sure that values for the same key have the same data type across all rows in a table; otherwise, DTS may report synchronization errors. See dynamic.
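The type-consistency rule above can be checked before migration. This Python sketch (an illustration, not a DTS feature) verifies that values for the same key share one type across all rows:

```python
# Sketch of the rule described above: for JSON source data, values for
# the same key should have the same type across all rows, or dynamic
# mapping in Elasticsearch may reject later documents.
def consistent_types(rows):
    seen = {}  # key -> type observed in the first row that contains it
    for row in rows:
        for key, value in row.items():
            if key in seen and not isinstance(value, seen[key]):
                return False
            seen.setdefault(key, type(value))
    return True

print(consistent_types([{"price": 10}, {"price": 12}]))    # True
print(consistent_types([{"price": 10}, {"price": "12"}]))  # False
```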

Create a migration task

Step 1: Open the migration task list

Navigate to the migration task list using one of the following methods.

From the DTS console

  1. Log on to the Data Transmission Service (DTS) console.

  2. In the left navigation pane, click Data Migration.

  3. In the upper-left corner, select the region where the migration instance will be located.

From the DMS console

Note: The actual operations may vary based on your DMS console mode and layout. See Simple mode console and Customize the layout and style of the DMS console.

  1. Log on to the Data Management (DMS) console.

  2. In the top menu bar, choose Data + AI > Data Transmission (DTS) > Data Migration.

  3. To the right of Data Migration Tasks, select the region where the migration instance will be located.

Step 2: Configure source and destination databases

Click Create Task, then configure the source and destination database connections.

Task settings

Parameter | Description
Task Name | DTS generates a name automatically. Specify a descriptive name for easy identification. The name does not need to be unique.

Source database

Parameter | Description
Select Existing Connection | To reuse a registered database instance, select it from the list; the fields below are filled automatically. In the DMS console, this parameter is named Select a DMS database instance.
Database Type | Select PolarDB for MySQL.
Connection Type | Select Cloud Instance.
Instance Region | Select the region where the source PolarDB for MySQL instance is located.
Cross-account | Select Within The Same Account for same-account migrations.
PolarDB Instance ID | Select the source PolarDB for MySQL instance ID.
Database Account | Enter the database account for the source instance. The account must have read permissions on the migration objects.
Database Password | Enter the password for the database account.
Encryption | Select an option that meets your security requirements. For SSL encryption details, see Set SSL encryption.

Destination database

Parameter | Description
Select Existing Connection | To reuse a registered database instance, select it from the list; the fields below are filled automatically. In the DMS console, this parameter is named Select a DMS database instance.
Database Type | Select Elasticsearch.
Connection Type | Select Cloud Instance.
Instance Region | Select the region where the destination Elasticsearch instance is located.
Type | Select Cluster Edition or Serverless as needed.
Instance ID | Select the destination Elasticsearch instance ID.
Database Account | Enter the Elasticsearch account. The default is elastic.
Database Password | Enter the password you set when creating the Elasticsearch instance.
Encryption | Select HTTP or HTTPS as needed.

After completing the configuration, click Test Connectivity and Proceed.

Important

Make sure DTS server IP addresses are added to the security whitelists of both the source and destination databases. See Add DTS server IP addresses to a whitelist. If the source or destination is a self-managed database (where Access Method is not Alibaba Cloud Instance), also click Test Connectivity in the CIDR Blocks of DTS Servers dialog.

Step 3: Configure migration objects

On the Configure Objects page, set the following options.

Parameter | Description
Migration Types | Select the migration types that match your scenario. See Choose a migration type. If you omit Schema Migration, make sure the destination database already has tables to receive the data.
Processing Mode for Existing Destination Tables | Precheck and Report an Error: fails the precheck if a table with the same name exists in the destination. Ignore the Error and Continue: skips the check. During full migration, existing records in the destination are retained; during incremental migration, they are overwritten.
Index Name | Table Name: the destination index uses the source table name. DatabaseName_TableName: the destination index follows the DatabaseName_TableName convention.
Case Policy for Destination Object Names | Controls how DTS handles case sensitivity for object names. The default is DTS Default Policy. See Case-sensitivity policy for object names in the destination database.
Source Objects | Select databases or tables to migrate, then click the rightwards arrow icon to move them to Selected Objects. If you select tables, objects such as views, triggers, and stored procedures are not migrated.
Selected Objects | To rename a single object in the destination, right-click it. See Individual table column mapping. To rename multiple objects at once, click Batch Edit. See Map multiple object names at a time.

Note: Only underscores (_) are supported as special characters in index names and type names. To apply WHERE clause filtering or to set the index name, type name, or column names, right-click the table in Selected Objects and configure the settings. See Set filter conditions.
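The two Index Name conventions can be sketched in Python (an illustration of the naming rule only, not how DTS computes names internally):

```python
# Illustrative sketch of the Index Name conventions described above.
def index_name(database: str, table: str, convention: str) -> str:
    if convention == "Table Name":
        return table
    if convention == "DatabaseName_TableName":
        return f"{database}_{table}"
    raise ValueError(f"unknown convention: {convention}")

print(index_name("sales", "orders", "Table Name"))              # orders
print(index_name("sales", "orders", "DatabaseName_TableName"))  # sales_orders
```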

Step 4: Configure advanced settings

Click Next: Advanced Settings to configure the following options.

Parameter | Description
Dedicated Cluster for Task Scheduling | By default, DTS runs tasks on a shared cluster. To improve stability, purchase a dedicated cluster.
Retry Time for Failed Connections | How long DTS retries after a connection failure. Default: 720 minutes. Range: 10-1,440 minutes. Set this to at least 30 minutes. If DTS reconnects within this window, the task resumes automatically.
Retry Time for Other Issues | How long DTS retries after non-connectivity errors (such as DDL or DML exceptions). Default: 10 minutes. Range: 1-1,440 minutes. We recommend setting this to more than 10 minutes. This value must be less than Retry Time for Failed Connections.
Enable Throttling for Full Data Migration | Limits read/write load during full migration. Set Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected. You can also adjust the full migration speed after the task starts.
Enable Throttling for Incremental Data Migration | Limits load during incremental migration. Set RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected. You can also adjust the incremental migration speed after the task starts.
Environment Tag | Optional. Tag the instance by environment (for example, production or staging).
Sharding Configuration | Set the number of primary shards and replica shards for the destination index, within the limits of your Elasticsearch instance.
String Index | How string values are indexed in Elasticsearch: analyzed (analyzed before indexing; select an analyzer, see Analyzers), not analyzed (indexed as-is), or no (not indexed).
Time Zone | Time zone to apply when migrating time-related types such as DATETIME and TIMESTAMP. If time-zone information is not needed in the destination, configure the document type for these fields in Elasticsearch before starting the migration.
DOCID | Maps to the _id field in Elasticsearch. By default, DOCID corresponds to the table's primary key. If the table has no primary key, Elasticsearch auto-generates an ID.
Whether to delete SQL operations on heartbeat tables of forward and reverse tasks | Controls whether DTS writes heartbeat SQL to the source database. Yes: no heartbeat writes (the instance may show latency). No: heartbeat writes are enabled (may interfere with operations like physical backups and cloning).
Configure ETL | Select Yes to configure ETL (extract, transform, and load) processing, then enter data processing statements. Select No to skip ETL.
Monitoring and Alerting | Select Yes to configure an alert threshold and notification. DTS sends alerts if a migration fails or latency exceeds the threshold.

Step 5: Configure table and field settings

Click Next: Configure Table and Field Settings to define routing and document ID settings for each table being migrated to Elasticsearch.

Parameter | Description
Is _routing Set? | Controls which shard documents are stored on. Yes: use custom columns for routing. No: use _id for routing. If the destination Elasticsearch instance is version 7.x, select No. See _routing.
_id Value | Primary Key Column: composite primary keys are merged into a single column. Business Primary Key: select this and specify the Business Primary Key Column.
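To make the Primary Key Column option concrete, this Python sketch merges composite primary key values into one document ID. The joining delimiter here is an assumption for illustration; it is not necessarily the delimiter DTS uses:

```python
# Hypothetical sketch: merge composite primary key values into a single
# _id value. The "_" delimiter is assumed for illustration only.
def doc_id(row: dict, pk_columns: list) -> str:
    return "_".join(str(row[col]) for col in pk_columns)

row = {"order_id": 1001, "line_no": 3, "sku": "A7"}
print(doc_id(row, ["order_id", "line_no"]))  # 1001_3
```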

Step 6: Save settings and run a precheck

Click Next: Save Task Settings and Precheck.

To preview the API parameters for this task configuration, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.

DTS runs a precheck before starting the task. If the precheck fails:

  • Click View Details next to the failed item, fix the issue, then click Precheck Again.

  • For warning items that can be ignored, click Confirm Alert Details > Ignore > OK > Precheck Again. Ignoring warnings may cause data inconsistency.

Step 7: Purchase the instance and start migration

  1. When Success Rate reaches 100%, click Next: Purchase Instance.

  2. On the Purchase page, select an instance class. The class determines migration speed. See Data migration link specifications and What is Resource Management? for resource group options.

  3. Read and accept Data Transmission Service (Pay-as-you-go) Service Terms.

  4. Click Buy and Start, then click OK in the confirmation dialog.

Verify the migration

After starting the task, go to the Data Migration Tasks list to monitor progress.

  • Full migration only: The task stops automatically after the full migration completes. The task Status changes to Completed.

  • Incremental migration included: The task continues running. The Status remains Running. Monitor the replication latency to confirm the destination is in sync before switching your application traffic to Elasticsearch.
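Before cutover, one simple sanity check is to compare a source row count against the destination document count, allowing a small lag while incremental migration is running. This is an illustrative helper, not a DTS feature:

```python
# Illustrative consistency check: compare a source table's row count
# with the destination index's document count, within a tolerance that
# accounts for in-flight incremental changes.
def in_sync(source_rows: int, es_docs: int, tolerance: int = 0) -> bool:
    return abs(source_rows - es_docs) <= tolerance

print(in_sync(1_000_000, 1_000_000))               # True
print(in_sync(1_000_000, 999_950, tolerance=100))  # True
print(in_sync(1_000_000, 999_000, tolerance=100))  # False
```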

Usage notes

  • DTS periodically runs CREATE DATABASE IF NOT EXISTS `test` on the source database to advance the binary log offset. This is expected behavior.

  • The server hosting the source database must have sufficient outbound bandwidth. Insufficient bandwidth reduces migration speed.

  • For multiple DTS instances sharing the same source or destination, the network retry time is determined by the setting of the most recently created task.

  • During the connection retry period, DTS charges apply. Release the DTS instance promptly if the source and destination database instances are released.

What's next