ApsaraDB for ClickHouse Community-compatible Edition supports one-click major engine version upgrades from the console. The upgrade process differs based on your cluster architecture, so identify which architecture your cluster uses before proceeding.
How it works
The upgrade mechanism differs between old-architecture and new-architecture clusters:
New-architecture clusters: The system upgrades the cluster in place by cloning the instance and upgrading the kernel. No data is migrated to a new cluster. The cluster restarts multiple times and is unavailable for read and write operations during the upgrade.
Old-architecture clusters: The system creates a new cluster running the target version and migrates data from the source cluster to the new cluster. The cluster remains available for read and write operations except for the last 10 minutes of migration, when writes are paused.
Confirm the cluster architecture
The cluster architecture determines the upgrade procedure and validation method.
On the Clusters page, click the cluster ID to go to the Cluster Information page.
In the Cluster Properties section, click Major Version Upgrade next to Version.
In the Major Version Upgrade dialog box, check the third configuration item:
If the dialog shows Version Upgrade Execution Time, the cluster uses the new architecture. See Upgrade a new-architecture cluster.
If the dialog shows Time of Stopping Data Writing, the cluster uses the old architecture. See Upgrade an old-architecture cluster.
Upgrade a new-architecture cluster
Prerequisites
Before you begin, ensure that you have:
A cluster in the Running state
Key constraints
Review these constraints before starting the upgrade.
The upgrade cannot be canceled once started.
Version rollback is not supported after the upgrade completes. To preserve rollback capability, use the clone-based upgrade method instead. See Upgrade the major engine version by cloning.
Stop write operations from your application before starting the upgrade. The cluster automatically stops writes during the upgrade, but stopping writes in advance prevents excessively long synchronization wait times.
If Tiered Storage of Hot and Cold Data is enabled and you use clone validation, expect the actual upgrade to take longer than the clone validation suggests, because the clone operation includes only hot data.
Impact on the cluster
The cluster restarts multiple times during the upgrade and is unavailable for read and write operations throughout. Schedule the upgrade during off-peak hours.
Validate version compatibility
Because this upgrade applies to a single cluster and cannot be rolled back, validate compatibility before proceeding. This step confirms that the new version supports your application's queries and estimates the upgrade duration.
If Tiered Storage of Hot and Cold Data is enabled, the clone operation includes only hot data. Cold data is not queryable in the cloned cluster.
Go to the Cluster Information page of the target cluster. In the left navigation pane, click Backup and Restoration. After the cluster is backed up, click Restore Instance.
Choose to clone from a real-time replica, and set the destination kernel version to the target upgrade version.
Create the cloned cluster.
Validate compatibility:
Test your application queries against the new version. See SQL compatibility validation.
Run regression tests on your application features.
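As a starting point for validation on the cloned cluster, you can compare the table inventory against the source cluster and spot-check representative application queries. This is a generic sketch; the database and table names in the second query are placeholders:

```sql
-- List user tables and their engines on the cloned cluster;
-- run the same query on the source cluster and compare the results.
SELECT database, name, engine
FROM system.tables
WHERE database NOT IN ('system', 'information_schema', 'INFORMATION_SCHEMA')
ORDER BY database, name;

-- Spot-check a representative application query (placeholder names):
SELECT count() FROM my_db.my_table WHERE event_date >= today() - 7;
```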
Upgrade the cluster
Log on to the ApsaraDB for ClickHouse console.
In the upper-left corner, select the region where the target cluster is located.
In the left navigation pane, click Clusters of Community-compatible Edition.
Click the cluster ID to go to the Cluster Information page.
In the Cluster Properties section, click Major Version Upgrade next to Version.
Configure the parameters and click OK.
| Parameter | Description | Example |
|---|---|---|
| Upgrade Cluster Kernel Version to | The target version. Currently, only version 25.3 is supported. The version cannot be rolled back after the upgrade. | 25.3 (LTS Version) |
| Version Upgrade Execution Time | When to run the upgrade. Choose based on your situation: Specified Time — schedule the upgrade for a specific future time. Upgrade Within Maintenance Window — use the cluster's existing maintenance window (selected by default). Update Now — start the upgrade immediately. After you select Specified Time or Upgrade Within Maintenance Window, the cluster status changes from Running to Upgrading. Before the scheduled time arrives, the cluster continues to serve read and write requests, but O&M operations such as upgrades, scaling, and migrations are not available. | 2024-05-29 14:46 |
| Whether to Perform Clone Validation | Select Clone Validation Performed or Skip Clone Verification (Not Recommended). | Clone Validation Performed |
Upgrade an old-architecture cluster
Prerequisites
Before you begin, ensure that you have:
A cluster in the Running state
Key constraints
Review these constraints before starting the upgrade.
Version rollback is not supported after the upgrade completes. To preserve rollback capability, use the migration-based upgrade method instead. See Upgrade by migration.
Kafka and RabbitMQ engine tables are not supported for migration. Delete these tables before upgrading.
After the upgrade, inner node IP addresses change. If your application connects using node IP addresses, retrieve the Virtual Private Cloud (VPC) CIDR block again. See Obtain the VPC CIDR block of a cluster.
Data migration behavior
Note the following about databases and tables during the upgrade:
| Object type | Migration behavior |
|---|---|
| MergeTree-family engine tables | Historical data is migrated and automatically redistributed |
| Tables not using MergeTree-family (external tables, Log tables) | Only table schemas are migrated; data is not migrated |
| Materialized views | Only schemas are migrated; data is not migrated |
| Kafka or RabbitMQ engine tables | Migration is not supported; delete these tables before upgrading |
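To see which of your tables fall into each category before upgrading, you can query the system tables. This is a sketch, not an official pre-check; adjust the excluded databases to your environment:

```sql
-- Tables whose data will NOT be migrated (engines outside the MergeTree family),
-- including Kafka and RabbitMQ tables that must be dropped before the upgrade.
SELECT database, name, engine
FROM system.tables
WHERE database NOT IN ('system', 'information_schema', 'INFORMATION_SCHEMA')
  AND engine NOT LIKE '%MergeTree%'
ORDER BY engine, database, name;
```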
Impact on the cluster
During the upgrade, the cluster remains available for read and write operations, except for the last 10 minutes of migration, when writes are paused. To check the remaining migration time, see View the upgrade progress.
Validate version compatibility
Because this upgrade applies to a single cluster and cannot be rolled back, validate compatibility before proceeding.
Purchase a new cluster to run migration validation. See Upgrade the major engine version by data migration.
In the new cluster, run SQL compatibility validation. See SQL compatibility validation.
Run regression tests on your application features.
Upgrade the cluster
Log on to the ApsaraDB for ClickHouse console.
In the upper-left corner, select the region where the target cluster is located.
In the left navigation pane, click Clusters of Community-compatible Edition.
Click the cluster ID to go to the Cluster Information page.
In the Cluster Properties section, click Major Version Upgrade next to Version.
Configure the parameters and click OK.
| Parameter | Description | Example |
|---|---|---|
| Upgrade Instance Kernel Version To | The target version. Currently, only version 23.8 is supported. The version cannot be rolled back after the upgrade. | 23.8 (LTS Version) |
| Time of Stopping Data Writing | The time window during which the cluster stops write operations to ensure data consistency. Rules: set the window to at least 30 minutes; the end date must be no later than 5 days from today. To reduce business impact, choose a low-traffic period. | 2025-03-20 10:08 – 2025-03-25 10:08 |
| Perform Instance Migration Validation | Select Verification Performed or Skip Instance Migration Validation (Not Recommended). | Verification Performed |
After submitting:
To monitor progress, see View the upgrade progress.
If the write suspension window passes but the cluster is still in the Upgrading state, see Modify the write suspension time.
To stop the upgrade if it affects your services, see Cancel the upgrade.
View the upgrade progress
Log on to the ApsaraDB for ClickHouse console.
In the upper-left corner, select the region where the target cluster is located.
In the left navigation pane, click Clusters of Community-compatible Edition.
Click the cluster ID to go to the Cluster Information page.
In the Cluster Status section, click View Progress next to Status.
The Modify Data Write-Stop Time Window dialog box shows the current upgrade progress, including MergeTree schema migration status, data migration progress, estimated remaining time, and other schema migration progress.
Modify the write suspension time
If the write suspension window has passed but the cluster is still in the Upgrading state, data migration is not yet complete. Extend the write suspension time to let the upgrade finish.
Log on to the ApsaraDB for ClickHouse console.
In the upper-left corner, select the region where the target cluster is located.
In the left navigation pane, click Clusters of Community-compatible Edition.
Click the cluster ID to go to the Cluster Information page.
In the Cluster Status section, click View Progress next to Status.
In the Modify Data Write-Stop Time Window dialog box, update the Time of Stopping Data Writing and click Confirm.
The same rules apply as when originally setting the Time of Stopping Data Writing: minimum 30 minutes, end date no later than 5 days from today.
Cancel the upgrade
If the upgrade affects your services, cancel it to stop the process.
Log on to the ApsaraDB for ClickHouse console.
In the upper-left corner, select the region where the target cluster is located.
In the left navigation pane, click Clusters of Community-compatible Edition.
Click the cluster ID to go to the Cluster Information page.
In the Cluster Status section, click View Progress next to Status.
In the Modify Data Write-Stop Time Window dialog box, click Cancel Upgrade.
The upgrade task stops completely about 5 minutes after you click Cancel Upgrade.
FAQ
What do I do if the error Unsupported Kafka table definition appears?
Kafka tables in the current version do not support the DEFAULT keyword for defining default field values, which causes the kernel package to fail to start. To fix this:
Run the following query to find all Kafka tables:

```sql
SELECT create_table_query FROM system.tables WHERE engine = 'Kafka';
```

Back up the Data Definition Language (DDL) statements for the affected tables.
Delete the tables.
Re-create the tables without the `DEFAULT` keyword.
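A re-created table might look like the following. All table, column, and Kafka setting values here are placeholders for illustration, not values from your cluster:

```sql
-- Hypothetical example: re-creating a Kafka table without DEFAULT.
CREATE TABLE default.kafka_events
(
    id      UInt64,
    message String          -- was `message String DEFAULT ''`; DEFAULT removed
)
ENGINE = Kafka
SETTINGS kafka_broker_list = 'broker1:9092',
         kafka_topic_list  = 'events',
         kafka_group_name  = 'clickhouse_consumer',
         kafka_format      = 'JSONEachRow';
```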
What do I do if the error Unsupported MaterializedMySQL table definition appears?
The MaterializedMySQL engine configuration parameters in the target version are incompatible with those in the source version. To fix this:
Run the following query to find affected databases:

```sql
SELECT name FROM system.databases WHERE engine = 'MaterializedMySQL';
```

Back up the DDL statements for the affected databases.
Delete the databases.
Upgrade the kernel version.
Adjust the backed-up DDL statements to match the target version's parameter format, then re-create the databases using the MaterializedMySQL engine.
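The back-up-and-re-create sequence might look like the following. The database name and MySQL connection parameters are placeholders; consult the target version's MaterializedMySQL documentation for the exact parameter format:

```sql
-- Capture the current DDL before dropping (save the output somewhere safe).
SHOW CREATE DATABASE my_mysql_db;

-- Remove the incompatible database so the kernel can start on the new version.
DROP DATABASE my_mysql_db;

-- After the upgrade, re-create it with parameters adjusted to the target version:
CREATE DATABASE my_mysql_db
ENGINE = MaterializedMySQL('mysql-host:3306', 'source_db', 'user', 'password');
```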
What do I do if the error Unsupported table definition other than 20.3: Nullable(Array(*))/SecondaryIndex(KEY definition exists) appears?
This error affects clusters on version 20.3 that use features Alibaba Cloud added but did not merge into open-source ClickHouse: the Nullable(Array(*)) field type and secondary indexes defined with the KEY keyword. Versions later than 20.8 do not include these features.
The version span from 20.3 to the current version is large. Perform thorough compatibility validation before upgrading to avoid service disruption.
To resolve this, adjust the affected tables before upgrading:
Run SQL compatibility validation to identify all affected objects.
For `Nullable(Array(*))` fields: drop the field and add it again using a supported type.
For `KEY`-based secondary indexes: drop the index. After the upgrade, add a skipping index as the replacement.
Skipping indexes and KEY-based secondary indexes have different implementation principles and may differ in performance.
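The adjustments might be sketched as follows. The table, column, and index names are placeholders, the skipping index type is an example choice, and the exact DROP INDEX syntax may differ in the 20.3 custom build:

```sql
-- Replace a Nullable(Array(*)) column with a supported type.
-- Dropping the column discards its data; back it up first if you need it.
ALTER TABLE my_db.my_table DROP COLUMN tags;
ALTER TABLE my_db.my_table ADD COLUMN tags Array(String);

-- Drop the KEY-based secondary index before the upgrade.
ALTER TABLE my_db.my_table DROP INDEX idx_tags;

-- After the upgrade, add a skipping index as the replacement
-- and build it for existing data parts.
ALTER TABLE my_db.my_table
    ADD INDEX idx_tags tags TYPE bloom_filter GRANULARITY 4;
ALTER TABLE my_db.my_table MATERIALIZE INDEX idx_tags;
```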