
ApsaraDB for ClickHouse:Migrate self-managed ClickHouse to ApsaraDB for ClickHouse Enterprise Edition

Last Updated:Mar 23, 2026

Learn how to migrate data from a self-managed ClickHouse cluster to an ApsaraDB for ClickHouse Enterprise Edition cluster using the console or manually.

Prerequisites

  • Self-managed cluster: A database account and password have been created. The account must have read permissions on the databases and tables to be migrated and permission to execute SYSTEM commands. If you need to migrate foreign tables whose definitions contain account passwords, the account must also have the displaySecretsInShowAndSelect permission.

  • Target cluster: A database account and password have been created. The account must have the highest privileges.

  • Network Connectivity

    • If the self-managed cluster and the target cluster are in the same virtual private cloud (VPC), add the IP addresses of all nodes in the target cluster and the IPv4 CIDR block of its vSwitch to the allowlist of the self-managed cluster.

      • To configure an allowlist for ApsaraDB for ClickHouse, see Configure an Allowlist.

      • To configure the allowlist for your self-managed cluster, see its product documentation.

      • You can run the query SELECT * FROM system.clusters WHERE internal_replication = 1; to view the IP addresses of all nodes in an ApsaraDB for ClickHouse cluster.

    • If the self-managed cluster and the target cluster are in different VPCs, or if the self-managed cluster is located on-premises or on another cloud platform, you must first establish network connectivity. For instructions, see How to establish network connectivity between a target cluster and a data source.

      Note

      In this scenario, you may configure IP mapping to avoid CIDR block conflicts between VPCs. If you configure IP mapping, you must also add the mapped IP addresses to the allowlists of both clusters.

Migration validation

Before you begin data migration, create a test environment to verify business compatibility and performance, and to confirm that the migration will succeed. After the validation is complete, start the data migration in the production environment. This crucial step helps you identify and resolve potential issues early, ensuring a smooth migration and minimizing impact on the production environment.

  1. Create a migration task.

  2. Perform a performance bottleneck analysis to verify that the migration will succeed.

  3. Validate cloud compatibility using one of the following methods:

    1. Manual validation. For more information, see Compatibility analysis and resolution.

    2. Console validation. For more information, see (Optional) Check SQL Compatibility.

Migration approaches

Console migration

  • Advantages: Provides a visual interface and eliminates manual metadata migration.

  • Disadvantages: Supports only full migration and incremental migration of an entire cluster. Migrating specific databases, tables, or partial historical data is not supported.

  • Use cases: Migrating an entire cluster.

Manual migration

  • Advantages: Provides fine-grained control over which databases and tables to migrate.

  • Disadvantages: Involves complex operations and requires manual metadata migration.

  • Use cases:

    • Migrating specific databases and tables.

    • Migrating a single node that has over 1 TB of cold storage data.

    • Migrating a single node that has over 10 TB of hot data.

    • Migrating an entire cluster when console migration requirements are not met.

Procedure

Console migration

Considerations

During migration

  • The merge process for databases and tables involved in the migration is paused on the target cluster, but it continues on the self-managed cluster.

    Note

    If a migration task runs for too long, excessive metadata can accumulate on the target cluster. We recommend that migration tasks not exceed 5 days. Tasks that run longer than 5 days are automatically canceled.

  • The target cluster uses the cluster name default. If your self-managed cluster uses a different cluster name, the system automatically converts the cluster definition in distributed tables to default.

Migration scope

Note

The migration process converts the database and table schemas for some engines. For more information, see the tables below.

  • Database schema: The following database engines are supported.

    • Atomic: Replaced with a Replicated database.

    • Replicated: No change.

    • Ordinary: Replaced with a Replicated database.

  • Table schema: The following table engines are supported.

    • No change: MaterializedView, View, GenerateRandom, Buffer, URL, Null, Merge, SharedMergeTree, SharedVersionedCollapsingMergeTree, SharedSummingMergeTree, SharedReplacingMergeTree, SharedAggregatingMergeTree, SharedCollapsingMergeTree, SharedGraphiteMergeTree.

    • Replaced with SharedMergeTree: MergeTree, ReplicatedMergeTree.

    • Replaced with SharedVersionedCollapsingMergeTree: VersionedCollapsingMergeTree, ReplicatedVersionedCollapsingMergeTree.

    • Replaced with SharedSummingMergeTree: SummingMergeTree, ReplicatedSummingMergeTree.

    • Replaced with SharedReplacingMergeTree: ReplacingMergeTree, ReplicatedReplacingMergeTree.

    • Replaced with SharedAggregatingMergeTree: AggregatingMergeTree, ReplicatedAggregatingMergeTree.

    • Replaced with SharedCollapsingMergeTree: CollapsingMergeTree, ReplicatedCollapsingMergeTree.

    • Replaced with SharedGraphiteMergeTree: GraphiteMergeTree, ReplicatedGraphiteMergeTree.

  • Data: Data in MergeTree family tables is migrated incrementally.

Important
  • The database and table schemas listed above are migrated automatically. All other database and table schemas must be handled manually based on any resulting Warning or Error messages.

  • If your data does not meet these requirements, use manual migration.

Cluster impact

  • Self-managed cluster

    • When the migration task reads data from the self-managed cluster, CPU and memory usage increases.

    • DDL operations are not allowed.

  • Target cluster

    • When the migration task writes data, CPU and memory usage increases in the target cluster.

    • DDL operations are not allowed on the databases and tables involved in the migration. This restriction does not apply to databases and tables that are not part of the migration.

    • The system stops the merge process for the databases and tables being migrated, but not for others.

    • After the migration is complete, the cluster performs frequent merge operations for a period, increasing I/O usage and potentially increasing business request latency. We recommend that you calculate the merge time after the migration to plan for the potential impact on request latency.

Step 1: Check self-managed cluster and enable system tables

For an incremental migration, you must update the config.xml file on your self-managed cluster to configure system.part_log and system.query_log.

If system.part_log and system.query_log are not enabled

If you have not enabled system.part_log and system.query_log, add the following configurations to the config.xml file.

system.part_log
<part_log>
    <database>system</database>
    <table>part_log</table>
    <partition_by>event_date</partition_by>
    <order_by>event_time</order_by>
    <ttl>event_date + INTERVAL 15 DAY DELETE</ttl>
    <flush_interval_milliseconds>7500</flush_interval_milliseconds>
</part_log>
system.query_log
<query_log>
    <database>system</database>
    <table>query_log</table>
    <partition_by>event_date</partition_by>
    <order_by>event_time</order_by>
    <ttl>event_date + INTERVAL 15 DAY DELETE</ttl>
    <flush_interval_milliseconds>7500</flush_interval_milliseconds>
</query_log>

If system.part_log and system.query_log are enabled

  1. Compare the configurations of system.part_log and system.query_log in the config.xml file with the following content. If there are any inconsistencies, modify the configurations to match. Otherwise, the migration may fail or proceed slowly.

    system.part_log
    <part_log>
        <database>system</database>
        <table>part_log</table>
        <partition_by>event_date</partition_by>
        <order_by>event_time</order_by>
        <ttl>event_date + INTERVAL 15 DAY DELETE</ttl>
        <flush_interval_milliseconds>7500</flush_interval_milliseconds>
    </part_log>
    system.query_log
    <query_log>
        <database>system</database>
        <table>query_log</table>
        <partition_by>event_date</partition_by>
        <order_by>event_time</order_by>
        <ttl>event_date + INTERVAL 15 DAY DELETE</ttl>
        <flush_interval_milliseconds>7500</flush_interval_milliseconds>
    </query_log>
  2. After you modify the configuration, run the DROP TABLE system.part_log and DROP TABLE system.query_log statements. The system.part_log and system.query_log tables are automatically recreated when you insert data into a business table.
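If you manage many nodes, the comparison in step 1 can be scripted. The following Python sketch is illustrative only: it parses a config.xml string with the standard library and reports any part_log settings that differ from the values shown above (query_log can be checked the same way):

```python
import xml.etree.ElementTree as ET

# Required <part_log> settings, taken from the snippet above.
REQUIRED = {
    "database": "system",
    "table": "part_log",
    "partition_by": "event_date",
    "order_by": "event_time",
    "ttl": "event_date + INTERVAL 15 DAY DELETE",
    "flush_interval_milliseconds": "7500",
}

def check_part_log(config_xml: str) -> list[str]:
    """Return a list of missing or mismatched <part_log> settings."""
    root = ET.fromstring(config_xml)
    node = root.find("part_log")
    if node is None:
        return ["<part_log> section is missing"]
    problems = []
    for key, expected in REQUIRED.items():
        child = node.find(key)
        actual = child.text if child is not None else None
        if actual != expected:
            problems.append(f"{key}: expected {expected!r}, found {actual!r}")
    return problems

# Minimal example with a wrong TTL (30 days instead of 15):
sample = """<clickhouse><part_log>
    <database>system</database><table>part_log</table>
    <partition_by>event_date</partition_by><order_by>event_time</order_by>
    <ttl>event_date + INTERVAL 30 DAY DELETE</ttl>
    <flush_interval_milliseconds>7500</flush_interval_milliseconds>
</part_log></clickhouse>"""
print(check_part_log(sample))
```

Run the script against the config.xml of each node; an empty list means the section already matches the required configuration.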

Step 2: Configure target cluster compatibility

To ensure that the behavior of the target cluster is as compatible as possible with the self-managed cluster, connect to the target cluster and set the compatibility parameter to the version number of the self-managed cluster.

Important

If you set compatibility to an earlier version, some new features, such as ParallelReplica, become unavailable.

Example:

SELECT currentProfiles(); -- Get the profiles that apply to the current user.
SELECT
    profile_name,
    setting_name,
    value
FROM system.settings_profile_elements
WHERE (setting_name = 'compatibility') AND (profile_name = 'xxxx'); -- Query the compatibility setting value.
ALTER PROFILE XXXX SETTINGS compatibility = '23.8'; -- Modify the profile.

Step 3: Create a migration task

  1. Log on to the ApsaraDB for ClickHouse console. On the Clusters page, select Enterprise Edition Clusters, and then click the ID of the target cluster.

  2. In the left-side navigation pane, click Data Migration and Synchronization > Migration from ClickHouse.

  3. Click Create Migration Task.

  4. Select Source and Target Instances.

    • Task Name: The name of the migration task. It can contain only uppercase letters, lowercase letters, and digits. The name must be unique and is case-insensitive. Example: MigrationTask1229

    • Source Cluster Name: The cluster name of the self-managed instance. Obtain it by running SELECT * FROM system.clusters;. Example: default

    • VPC IP Address: The IP address and port of each shard in the cluster, separated by commas, in the format IP:PORT,IP:PORT,.... Example: 192.168.0.5:9000,192.168.0.6:9000

      You can use the following SQL statement to get the IP addresses and ports of the self-managed cluster:

      SELECT shard_num, replica_num, host_address AS ip, port FROM system.clusters WHERE cluster = '<cluster_name>' AND replica_num = 1;

      Parameters:

      • cluster_name: The name of the self-managed cluster.

      • replica_num = 1 selects the first replica. You can also select other replicas, or choose one replica from each shard.

      Important
      • You cannot use the VPC domain name or SLB address of ClickHouse.

      • If you use Network Address Translation (NAT), provide the public-facing IP address and port of the shard.

    • Database Account: The database account of the self-managed cluster. Example: test

    • Database Password: The password for the database account of the self-managed cluster. Example: test******

    • Source Instance Kernel Version: Click Get Version to retrieve it. Example: 22.8.5.29
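If the cluster has many shards, you can assemble the VPC IP Address value programmatically from the rows returned by the system.clusters query. A minimal Python sketch; the endpoint list below is illustrative sample data, not real cluster output:

```python
# Each tuple is (host_address, port) for one replica per shard, as returned
# by the system.clusters query above. Sample values for illustration only.
shard_endpoints = [("192.168.0.5", 9000), ("192.168.0.6", 9000)]

# Build the comma-separated "IP:PORT,IP:PORT,..." value for the migration task.
vpc_ip_address = ",".join(f"{ip}:{port}" for ip, port in shard_endpoints)
print(vpc_ip_address)  # 192.168.0.5:9000,192.168.0.6:9000
```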

  5. Based on the retrieved source instance version, perform one of the following operations:

    • If the source instance version is 22.10 or later, click Next.

    • If the source instance version is earlier than 22.10, enter the Destination Instance Information as prompted, and then click Next.

    • If version retrieval fails: Retrieval may fail due to incorrect source instance information or network connectivity issues. Resolve the issue based on the prompt, and then click Get Version again.

    Note

    Due to parameter incompatibility between earlier Community versions and the Enterprise Edition, if the source instance version is earlier than 22.10, you must synchronize data by pushing data from the source to the target. This requires making the target cluster's IP address routable from the self-managed network. If the self-managed network and the Enterprise Edition instance are in the same VPC, or if they are connected through a VPC peering connection, you can use the original IP address directly.

  6. Check connectivity and configuration

    1. Click Start Check.

      Click to view check items.

      • Connectivity Verification: Ensure there is full network connectivity between the self-managed instance and the target instance, allowing mutual access between all nodes.

      • Account Permission Verification: Verifies that the source account and password are correct and can be used to connect to the source instance.

      • Check source instance system tables: A self-managed instance must have the system.query_log, system.parts, and system.part_log system tables.

      • Check the configuration: The self-managed instance and the Target Instance have the same Time Zone, and the Target Instance's compatibility parameter is the same as the source version.

    2. During the check, you can click the image icon in the upper-right corner to view real-time progress.

    3. After the check is complete, proceed with the subsequent operations based on the results.

      You can select a Result Level and check item, then click the image button to view the corresponding check results. The results are described as follows.

      • Success: If all checks pass, click Next to continue.

      • Warning: Indicates a non-blocking issue. You must determine if the warning affects your business or the migration. You can either ignore the warning or resolve the issue based on the warning message and then click Start Check again.

      • Error: This indicates a blocking issue. You must resolve the error based on the error message and then click Start Check again.

        For error messages and solutions, see FAQ.

  7. Check database and table schema

    1. Click Start Check.

    2. During the check, you can click the image icon in the upper-right corner to view real-time progress.

    3. After the check is complete, proceed with the subsequent operations based on the results.

      For a description of the check results, see step 6.

  8. Database and table schema migration

    1. Click Start Migration.

    2. During the migration, you can click the image icon in the upper-right corner to view real-time progress.

    3. After the migration is complete, proceed with the subsequent operations based on the results.

      For a description of the check results, see step 6.

  9. (Optional) Check SQL compatibility

    The SQL compatibility check verifies syntax compatibility between different kernel versions by replaying SQL statements from the self-managed instance on the target instance. You can decide whether to perform this step based on your needs.

    • To skip this step, click Skip.

    • To perform this step, select a Request Replay Time and then click Start Check. If the check passes, click Next. If the check fails, see step 6 for how to handle it.

      Important
      • This check only verifies syntax compatibility and does not require data in the instance's databases and tables. If you need data, you can first proceed to the next step to migrate some data.

      • False positives may occur if the client version used for replaying SQL does not match the target instance. If you encounter an exception, run the SQL statement manually to verify the error.

  10. Start synchronization

    1. Click Start Sync.

    2. During synchronization, you can click the image icon in the upper-right corner to view real-time progress.

      During the synchronization process, you can use the Stop, Restart, and Cancel Migration operations to control the migration flow. Click to view a description and the impact of each operation.

      Stop

      • Functionality: Immediately stops data migration and migrates the remaining database and table schemas.

      • Impact:

        • Data may not be fully migrated.

        • Before you restart the migration, you must clear the migrated data from the target cluster to avoid data duplication.

      • Use cases:

        • To proactively stop the migration task after all data has been migrated.

        • To test the migration after a portion of the data has been migrated, without stopping writes to the self-managed cluster.

      Restart

      • Functionality: If an exception occurs while checking and cleaning other databases or tables, migrating data, or migrating remaining schemas, you can resolve the issue based on the error message. This action retries the current step and then proceeds with the subsequent steps.

      • Impact: None.

      • Use case: To resume the migration from a breakpoint after resolving an exception that occurred during the process.

      Cancel Migration

      • Functionality: Forcibly cancels the task and skips all subsequent steps.

        Important
        After you cancel the migration, the migration task is locked, preventing you from modifying the migration flow. You can use the Previous, Next, or Refresh buttons to view the execution results of the migration steps.

      • Impact:

        • The migration task is forcibly terminated. The target instance's database and table schemas and configurations may be incomplete and cannot be used for business operations.

        • Before you restart the migration, you must clear the migrated data from the target cluster to avoid data duplication.

      • Use case: To quickly end the migration and re-enable writes if the migration task is affecting the self-managed cluster.

    3. When the Migrate Data step is running, switch to the Migrate Data tab and click the image button to view the Migration Progress and Estimated Time Remaining.

      Click to see how to evaluate if the migration can be completed.

      Whether the migration can succeed depends on the relationship between the migration speed and the write speed of the self-managed cluster.

      • The following table shows migration speed data from our tests:

        Test 1:

        • Average part size: 402.54 MB

        • Source instance type: 8C32G; source disk type: PL1

        • Target instance type: 16CCU; target storage medium: OSS

        • Number of cluster nodes: 16

        • Single-node migration speed: 47 MB/s; overall migration speed: 752.34 MB/s

        Test 2:

        • Average part size: 402.54 MB

        • Source instance type: 80C384G; source disk type: PL3

        • Target instance type: 48CCU; target storage medium: ESSD_L2

        • Number of cluster nodes: 8

        • Single-node migration speed: 197.74 MB/s; overall migration speed: 1581.95 MB/s

      • Determine the relationship between the write speeds of the target and self-managed clusters:

        Data migration speed depends on factors such as part size (in our tests, migration speeds were higher when the average part size was between 100 MB and 10 GB), instance type, disk type, and business data characteristics. Therefore, the test data is for reference only. To determine the actual write speed of the target cluster, you need to check its disk throughput. For instructions on how to check disk throughput, see View cluster monitoring information.

        • If the write speed of the target cluster is slower than the write speed of the self-managed cluster, the migration is likely to fail. We recommend that you cancel the migration task and use manual migration instead.

        • If the write speed of the target cluster is faster than the write speed of the self-managed cluster, you can continue with the subsequent steps. To improve the likelihood of a successful migration, we recommend that the time required for the migration, calculated as Data Volume / (Migration Speed - Self-managed Instance Write Speed), be 5 days or less.

      Important
      • You must closely monitor the Migration Progress of the target task. Based on the Estimated Time Remaining, stop writes to the self-managed cluster and handle tables with the Kafka and RabbitMQ engines.

      • The current maximum migration time threshold is set to 5 days. If the migration continues for 5 days, the system will automatically cancel the migration task. If your migration task requires more time, submit a ticket to contact technical support to adjust the threshold.
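      The recommendation above can be turned into a quick back-of-the-envelope calculation. The following Python sketch applies the formula Data Volume / (Migration Speed - Self-managed Instance Write Speed); all figures are illustrative assumptions, not measurements:

```python
# Estimated migration time = data volume / (migration speed - source write speed).
# All figures below are illustrative assumptions.
data_volume_mb = 10 * 1024 * 1024      # 10 TB of data to migrate, in MB
migration_speed_mbps = 200.0           # observed target-side write throughput, MB/s
source_write_speed_mbps = 50.0         # ongoing writes to the self-managed cluster, MB/s

effective_speed = migration_speed_mbps - source_write_speed_mbps
if effective_speed <= 0:
    # Migration can never catch up with incoming writes.
    raise ValueError("Migration cannot catch up; consider manual migration.")

days = data_volume_mb / effective_speed / 86400
print(f"Estimated migration time: {days:.1f} days")  # should be 5 days or less
```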

      Click to see how to estimate when to stop writing to the self-managed cluster.

      Before switching over your business traffic, you must ensure that no new data is written to the self-managed instance to guarantee data integrity after migration. To do this, stop business writes and delete the Kafka and RabbitMQ tables. The procedure is as follows:

      1. Log on to the self-managed cluster and run the following statement to query the tables that need to be handled:

        SELECT * FROM system.tables WHERE engine IN ('RabbitMQ', 'Kafka');
      2. View the CREATE TABLE statement for the target table.

        SHOW CREATE TABLE <aim_table_name>;
      3. Connect to the target cluster and run the CREATE TABLE statement you obtained in the previous step.

      4. Log on to the self-managed cluster and delete the migrated Kafka and RabbitMQ engine tables.

        Important

        When you delete a Kafka table, you must also delete any materialized views that reference it. Otherwise, the materialized views cannot be migrated, which will cause the entire migration to fail.

    4. When the Migration Progress reaches 100% and you have confirmed that the source instance has stopped receiving writes, click the Stop button to end the migration process and proceed to the next steps.


    5. After the synchronization is complete, click Completed.

      Important

      After the "Start Synchronization" step is complete, the migration task is locked, which means you are not allowed to modify the migration flow. You can use the Previous, Next, or Refresh buttons to view the execution results of the migration steps.

Step 4: Migrate data from non-MergeTree tables

For non-MergeTree tables, the migration task migrates only the table schema (for example, MySQL tables) or does not support them at all (for example, Log tables). Therefore, after the migration task is complete, the target cluster may contain tables that have a schema but no business data. You must migrate this business data manually by following these steps:

  1. Log on to the self-managed cluster and view the non-MergeTree tables whose data needs to be migrated.

    SELECT
        `database` AS database_name,
        `name` AS table_name,
        `engine`
    FROM `system`.`tables`
    WHERE (`engine` NOT LIKE '%MergeTree%') AND (`engine` != 'Distributed') AND (`engine` != 'MaterializedView') AND (`engine` NOT IN ('Kafka', 'RabbitMQ')) AND (`database` NOT IN ('system', 'INFORMATION_SCHEMA', 'information_schema')) AND (`database` NOT IN (
        SELECT `name`
        FROM `system`.`databases`
        WHERE `engine` IN ('MySQL', 'MaterializedMySQL', 'MaterializeMySQL', 'Lazy', 'PostgreSQL', 'MaterializedPostgreSQL', 'SQLite')
    ))
  2. Log on to the target cluster and perform data migration using the remote function.
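If many such tables exist, generating the remote() statements to run on the target cluster can be scripted. The following Python sketch is for illustration; the host, credentials, and table list are placeholder values, and the generated statement follows the remote() form used elsewhere in this topic:

```python
# (database, table) pairs as returned by the system.tables query above.
# Sample values for illustration only.
tables = [("db1", "log_table"), ("db2", "mysql_table")]

SOURCE_HOST = "source-hostname:9000"          # placeholder: self-managed endpoint
USER, PASSWORD = "exporter", "password-here"  # placeholder credentials

def remote_insert_sql(db: str, table: str) -> str:
    """Render an INSERT ... SELECT ... FROM remote(...) statement for one table."""
    return (f"INSERT INTO {db}.{table} SELECT * FROM "
            f"remote('{SOURCE_HOST}', {db}, {table}, '{USER}', '{PASSWORD}')")

for db, table in tables:
    print(remote_insert_sql(db, table))
```

Review each generated statement, then execute it on the target cluster.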

Manual migration


Note

In ApsaraDB for ClickHouse Enterprise Edition, regardless of whether your source table has shards or replicas, you only need to create a corresponding target table. In this table, you can omit the Engine parameter because the system will automatically use the SharedMergeTree table engine. The ApsaraDB for ClickHouse Enterprise Edition cluster automatically handles vertical and horizontal scaling, so you do not need to worry about the specific implementation of replication and sharding.

Overview

The process for migrating from a self-managed ClickHouse cluster to an ApsaraDB for ClickHouse Enterprise Edition cluster is as follows.

  1. Add a read-only user to the source cluster.

  2. Replicate the source table schema on the target cluster.

  3. If the source cluster is publicly accessible, you can pull its data to the target cluster. Otherwise, you must push the data from the source cluster.

  4. (Optional) Delete the IP address of the source cluster from the target cluster.

  5. Delete the read-only user from the source cluster.

Steps

  1. On the source cluster (where the source table already contains data), perform the following operations:

    1. Add a read-only user to the table db.table.

      CREATE USER exporter
      IDENTIFIED WITH SHA256_PASSWORD BY 'password-here'
      SETTINGS readonly = 1;
      GRANT SELECT ON db.table TO exporter;
    2. Copy the source table schema.

      SELECT create_table_query
      FROM system.tables
      WHERE database = 'db' and table = 'table'
  2. On the target cluster, perform the following operations.

    1. Create a database.

      CREATE DATABASE db
    2. Use the CREATE TABLE statement from the source data table to create the destination data table.

      Note

      When you run the CREATE TABLE statement, change the ENGINE to SharedMergeTree but do not include any parameters. This is because the ApsaraDB for ClickHouse Enterprise Edition cluster always replicates tables and provides the correct parameters. The ORDER BY, PRIMARY KEY, PARTITION BY, SAMPLE BY, TTL, and SETTINGS clauses define the table's structure and metadata. Retain these clauses to ensure that the table is created correctly in the target ApsaraDB for ClickHouse Enterprise Edition cluster.

      CREATE TABLE db.table ...
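Rewriting the ENGINE clause by hand is error-prone when many tables are involved. The following Python sketch is a hedged, regex-based illustration (it assumes a single ENGINE clause without nested parentheses in its parameters; review the output before running it):

```python
import re

def to_shared_merge_tree(ddl: str) -> str:
    """Replace 'ENGINE = <AnyMergeTree>(...)' (parameters included) with a
    parameter-free SharedMergeTree, keeping ORDER BY / PARTITION BY etc. intact."""
    return re.sub(r"ENGINE\s*=\s*\w+(\([^)]*\))?", "ENGINE = SharedMergeTree",
                  ddl, count=1)

source_ddl = ("CREATE TABLE db.table (id UInt64, v String) "
              "ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/table', '{replica}') "
              "ORDER BY id")
print(to_shared_merge_tree(source_ddl))
```

The converted statement keeps the ORDER BY clause, as required, while dropping the replication parameters that the Enterprise Edition cluster supplies automatically.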
    3. Use the Remote function to read data or push data.

      Note

      If the source ClickHouse server is not accessible from the external network, you can push data instead of pulling it, because the remote function supports both SELECT and INSERT operations.

      • In the target cluster, use the Remote function to read data from the source table in the source cluster.


        INSERT INTO db.table SELECT * FROM
        remote('source-hostname:9000', db, table, 'exporter', 'password-here')
      • Use the Remote function to push data from the source cluster to the target cluster.


        Note

        To allow the Remote function to connect to your ApsaraDB for ClickHouse Enterprise Edition cluster, you need to add the source cluster's IP address to the target cluster's allowlist. For more information, see Configure an allowlist.

        INSERT INTO FUNCTION
        remote('target-hostname:9000', 'db.table',
        'default', 'PASS') SELECT * FROM db.table

FAQ

Connectivity and configuration check errors

  • Error message: Tcp connectivity check failed for '{host}:{port}':{error}.
    Description: The network connection to the self-managed cluster timed out.
    Solution: Troubleshoot the network issue based on the error message.

  • Error message: No such cluster: {cluster}, please run 'SELECT DISTINCT(cluster) FROM system.clusters;' to check
    Description: The cluster specified in the migration task does not exist in the self-managed cluster.
    Solution: Run the SQL statement to query the clusters in the self-managed cluster and update the migration task configuration.

  • Error message: not exists
    Description: One or more of the following system tables are missing from the self-managed cluster: system.query_log, system.parts, or system.part_log.
    Solution: Create the required system tables in the self-managed cluster.

  • Error message: Timezone mismatch with source, which may cause time data anomalies.
    Description: The time zone of the self-managed cluster does not match that of the target cluster.
    Solution: Adjust the time zone settings to ensure both clusters use the same time zone.

  • Error message: Compatibility mismatch with source version, which may cause incompatibility.
    Description: The compatibility setting of the target cluster does not match the version of the self-managed cluster.
    Solution: Adjust the compatibility setting on the target cluster to match the version of the self-managed cluster.

    Important
    If you set the compatibility parameter to an earlier version, some new features, such as ParallelReplica, become unavailable.

Database and table schema check errors

  • Error message: ERROR: Not consistent across nodes.
    Description: The database or table definitions are inconsistent across nodes in the self-managed cluster.
    Solution: Check the databases and tables on each node of the self-managed cluster and resolve any inconsistencies.

  • Error message: ERROR: Cannot get secrets (shown as [HIDDEN]), please set display_secrets_in_show_and_select=1 (restart required).
    Description: The passwords defined in the database or table schemas are hidden.
    Solution: Set display_secrets_in_show_and_select=1 and restart the cluster. Note: This operation requires the display_secrets_in_show_and_select permission.

  • Error message: ERROR: Unsupported engine.
    Description: The database engine in the self-managed cluster is not supported for migration.
    Solution: Consider changing the database engine to one that is supported by the target cluster.

  • Error message: WARN: Unsupported engine, it will be automatically replaced with a Replicated database to bypass migration exceptions.
    Description: The database engine in the self-managed cluster is not supported for migration.
    Solution: The system automatically replaces the unsupported database engine with a Replicated database to bypass this check.

  • Error message: WARN: Unsupported engine, please replace the data synchronization capability with DTS, or create a same-name database to bypass migration exceptions.
    Description: The database engine in the self-managed cluster is not supported for migration.
    Solution: Use Data Transmission Service (DTS) for data synchronization, or create a database with the same name in the target cluster to bypass this check.

  • Error message: WARN: Unsupported engine, it will be automatically ignored during migration.
    Description: The database engine in the self-managed cluster is not supported for migration.
    Solution: The database using this unsupported engine is automatically ignored during migration.

  • Error message: WARN: It's not recommended to use the Distributed engine as it will cause scaling issues in enterprise instances. Please drop this table and query the underlying MergeTree table directly.
    Description: Using the Distributed table engine is not recommended for ApsaraDB for ClickHouse Enterprise Edition instances.
    Solution: Drop the distributed table in the self-managed cluster. After migration, query the underlying MergeTree table directly.

  • Error message: WARN: Please confirm referenced IP addresses are accessible.
    Description: This warning indicates a potential accessibility issue with an external database engine.
    Solution: Confirm that the referenced IP addresses are accessible from the target cluster. If they are not, establish a network connection and add the IP addresses to the cluster's allowlist.

  • Error message: WARN: Only structure, does not support data migration.
    Description: Certain database engines support only schema migration, not data migration.
    Solution: Migrate the data manually by using tools such as the remote() function.

  • Error message: WARN: Unsupported engine, please create a same-name MergeTree table manually to bypass migration exceptions.
    Description: Tables that use certain database engines are not supported for migration.
    Solution: Manually create a MergeTree table with the same name in the target cluster and then migrate the data.

  • Error message: WARN: Ignored engine, please create table manually.
    Description: Tables that use certain database engines are not supported for migration.
    Solution: See Step 4 in the "Procedure" section.

  • Error message: ERROR: Table has data in destination cluster.
    Description: During the database and table schema check, the corresponding table in the target cluster must be empty.
    Solution: Delete all data from the corresponding table in the target cluster.

  • Error message: ERROR: Unsupported function origin.
    Description: Only user-defined functions (function.origin = "SQLUserDefined") can be migrated.
    Solution: Create the required function manually in the target cluster.

Other issues

For solutions to other migration issues, see FAQ.