Use mysqldump to export your self-managed MySQL database and import it into a PolarDB for MySQL cluster. This approach supports any MySQL version and covers schema migration, full data migration, and database objects such as stored procedures, triggers, and functions. Because mysqldump does not support incremental or hot migration, it requires a maintenance window during which writes to the source database are stopped.
Prerequisites
Before you begin, ensure that the destination PolarDB for MySQL cluster has:
- A database created. See Database management.
- A database account with privileged access or read and write permissions. See Create and manage a database account.
- An IP whitelist configured to allow connections from the migration host. See Configure an IP whitelist.
- A public endpoint applied for. See Manage the endpoints of a cluster.
Table names imported into the PolarDB for MySQL cluster are case-insensitive and converted to lowercase.
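Because of this conversion, two source tables whose names differ only by case would collide after import. The following query is a sketch you can run on the source database (`testdb` is a placeholder) to detect such collisions before migrating:

```sql
-- On the source: list table names that differ only by case, which would
-- collide after the forced lowercase conversion on the PolarDB cluster.
SELECT LOWER(table_name) AS lowered, COUNT(*) AS n
FROM information_schema.tables
WHERE table_schema = 'testdb'
GROUP BY LOWER(table_name)
HAVING COUNT(*) > 1;
```

If the query returns any rows, rename the affected tables before exporting.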
Choose a migration method
| Feature | mysqldump | DTS |
|---|---|---|
| Self-managed MySQL version | No limits | 5.1, 5.5, 5.6, 5.7, or 8.0 |
| Schema migration and full data migration | Supported | Supported |
| Incremental data migration | Not supported | Supported |
| Hot migration | Not supported | Supported |
Use mysqldump when:
- Your self-managed MySQL version is not 5.1, 5.5, 5.6, 5.7, or 8.0 (Data Transmission Service (DTS) does not support it).
- Your database is small enough to tolerate downtime during migration.
- You do not need ongoing replication after the initial import.
Use DTS when you need incremental data migration or hot migration with minimal downtime.
Export data from the self-managed MySQL database
The following steps use a self-managed MySQL 8.0 database running on Linux as an example.
Stop all write operations to the source database before exporting. Updating data during export can result in an inconsistent dump.
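One way to pause writes, sketched below, is to put the source instance in read-only mode for the duration of the export. Note that accounts with the SUPER privilege bypass `read_only`, so also stop application traffic:

```sql
-- Block writes from non-privileged accounts during the export.
SET GLOBAL read_only = ON;

-- After the export finishes and is verified, re-enable writes:
-- SET GLOBAL read_only = OFF;
```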
Step 1: Export table data
Run the following command in the Linux CLI to export the database schema and table data:
mysqldump -h <source-endpoint> -u <username> -p \
--opt --default-character-set=utf8 --hex-blob \
--skip-triggers --skip-lock-tables \
<database-name> > /tmp/<database-name>.sql
Replace the following placeholders:
| Placeholder | Description | Example |
|---|---|---|
| `<source-endpoint>` | Endpoint of the self-managed MySQL database. Use 127.0.0.1 if the database is on an Elastic Compute Service (ECS) instance; use the public endpoint if it is an on-premises database. | 127.0.0.1 |
| `<username>` | MySQL user with export permissions. | user |
| `<database-name>` | Name of the database to export. | testdb |
The following table describes the key parameters:
| Parameter | Effect |
|---|---|
| `--opt` | Enables a group of options (on by default in mysqldump) that optimize dump speed and produce output suited to large databases. |
| `--default-character-set=utf8` | Exports data using UTF-8 character set encoding to preserve multi-byte characters. |
| `--hex-blob` | Exports binary column values (BINARY, VARBINARY, BLOB) as hexadecimal strings to prevent corruption during text-mode transfer. |
| `--skip-triggers` | Excludes triggers from this dump. Export triggers separately in Step 2. |
| `--skip-lock-tables` | Skips locking tables during the export. This is safe here because writes to the source database are already stopped. |
Example:
mysqldump -h 127.0.0.1 -u user -p --opt --default-character-set=utf8 --hex-blob --skip-triggers --skip-lock-tables testdb > /tmp/testdb.sql
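A successful mysqldump export ends with a `-- Dump completed` comment on its last line, so a quick way to catch a truncated dump before importing it is to check for that marker. The `check_dump` helper below is an illustrative sketch, demonstrated on a stand-in file:

```shell
# Returns success if the file ends with mysqldump's completion marker.
check_dump() {
  tail -n 1 "$1" | grep -q 'Dump completed'
}

# Demo on a stand-in file; in practice run: check_dump /tmp/testdb.sql
printf -- '-- Dump completed on 2024-01-01 12:00:00\n' > /tmp/demo_dump.sql
check_dump /tmp/demo_dump.sql && echo "dump looks complete"
```

If the check fails, rerun the export before proceeding to the import.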
Step 2: Export stored procedures, triggers, and functions (optional)
Skip this step if the database has no stored procedures, triggers, or functions.
The following command exports all routines and triggers with DEFINER clauses stripped. Stripping DEFINER prevents `Access denied; you need (at least one of) the SUPER privilege(s) for this operation` errors when importing into PolarDB for MySQL.
mysqldump -h <source-endpoint> -u <username> -p \
--opt --default-character-set=utf8 --hex-blob \
-R <database-name> \
| sed -e 's/DEFINER[ ]*=[ ]*[^*]*\*/\*/' \
> /tmp/<database-name>Trigger.sql
Example:
mysqldump -h 127.0.0.1 -u user -p --opt --default-character-set=utf8 --hex-blob -R testdb | sed -e 's/DEFINER[ ]*=[ ]*[^*]*\*/\*/' > /tmp/testdbTrigger.sql
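To see what the `sed` filter does, here is the same substitution applied to a sample line of the kind mysqldump emits for a trigger (the trigger name and account are made up for illustration):

```shell
# Sample version-comment line as mysqldump emits it for a trigger
# (hypothetical trigger and account names).
line='/*!50003 CREATE*/ /*!50017 DEFINER=`admin`@`%`*/ /*!50003 TRIGGER trg BEFORE INSERT ON t FOR EACH ROW SET @x = 1 */'

# The substitution rewrites "DEFINER=`admin`@`%`*/" down to just "*/",
# removing the clause while keeping the version comment intact.
stripped=$(printf '%s\n' "$line" | sed -e 's/DEFINER[ ]*=[ ]*[^*]*\*/\*/')
echo "$stripped"
```

The output keeps the `CREATE TRIGGER` statement but no longer names a definer, so the importing account's identity is used instead.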
Import data into PolarDB for MySQL
Run the following commands to import the exported files into the destination PolarDB cluster:
mysql -h <polardb-endpoint> -P <port> -u <account> -p <polardb-database> < /tmp/<database-name>.sql
mysql -h <polardb-endpoint> -P <port> -u <account> -p <polardb-database> < /tmp/<database-name>Trigger.sql
Replace the following placeholders:
| Placeholder | Description | Example |
|---|---|---|
| `<polardb-endpoint>` | Public endpoint of the PolarDB cluster. | polardbtest.mysql.polardb.rds.aliyuncs.com |
| `<port>` | Port of the PolarDB cluster. | 3306 |
| `<account>` | PolarDB database account. Must be a privileged account or have read and write permissions. | testuser |
| `<polardb-database>` | Name of an existing database on the PolarDB cluster. | testdb |
Example:
mysql -h polardbtest.mysql.polardb.rds.aliyuncs.com -P 3306 -u testuser -p testdb < /tmp/testdb.sql
mysql -h polardbtest.mysql.polardb.rds.aliyuncs.com -P 3306 -u testuser -p testdb < /tmp/testdbTrigger.sql
Verify the migration
After the import completes, log in to the PolarDB cluster database and confirm that the data is correct. See Connect to a cluster.
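A simple consistency check is to compare per-table row counts between the source and the destination. The query below is a sketch using `testdb` as the database name; note that `table_rows` is an estimate for InnoDB tables, so run `SELECT COUNT(*)` against individual tables when you need exact figures:

```sql
-- Run against both the source database and the PolarDB cluster;
-- the results should match.
SELECT table_name, table_rows
FROM information_schema.tables
WHERE table_schema = 'testdb'
ORDER BY table_name;
```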
FAQ
What do I do if the error `Access denied; you need (at least one of) the SUPER privilege(s) for this operation` appears?
The exported SQL file contains statements that require the SUPER privilege, typically DEFINER clauses in stored procedures, triggers, or views. Remove the DEFINER clauses from the script, then run the import again.
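If you already have a dump that contains DEFINER clauses, you do not have to edit it by hand: the same `sed` substitution used during export can strip them from the file. A sketch, demonstrated on a stand-in file (for a real dump, back up the file first and run the `sed -i` line against it):

```shell
# Stand-in for a real dump file; the content is illustrative only.
printf '/*!50017 DEFINER=`admin`@`%%`*/ CREATE PROCEDURE p() SELECT 1\n' > /tmp/dump_demo.sql

# Strip DEFINER clauses in place (GNU sed; on BSD/macOS use: sed -i '' ...).
sed -i -e 's/DEFINER[ ]*=[ ]*[^*]*\*/\*/' /tmp/dump_demo.sql

grep -q 'DEFINER' /tmp/dump_demo.sql || echo "no DEFINER clauses remain"
```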