ApsaraDB for ClickHouse: Disk downgrade

Last Updated: Dec 02, 2025

This topic describes how to reduce disk space for ApsaraDB for ClickHouse community-compatible clusters.

Prerequisites

  • The cluster is of Community-compatible Edition.

  • The cluster is in the Running state.

  • The cluster has no unpaid renewal orders.

    Note

    Log on to the ApsaraDB for ClickHouse console. In the upper-right corner of the page, choose Expenses > Expenses and Costs. In the left-side navigation pane, choose Orders > My Orders. You can then pay for or cancel the unpaid orders.

Notes

  • The storage space of a single node ranges from 100 GB to 32,000 GB.

  • After the disk is downgraded, historical data of MergeTree engine tables will be migrated to the new cluster and automatically redistributed.

    The following items are supported for migration:

    • Databases, data dictionaries, and materialized views.

    • Table schema: All table schemas except for tables that use the Kafka or RabbitMQ engine.

    • Data: Incremental migration of data from MergeTree family tables.

    The following items are not supported for migration:

    • Tables that use the Kafka or RabbitMQ engine and their data.

      Important

      When you change the configuration, data is migrated to a new cluster, and traffic is eventually switched to the new cluster. To prevent Kafka and RabbitMQ data from being split between the two clusters, delete the Kafka and RabbitMQ engine tables from the source cluster before the change, and recreate them after the change is complete.

    • Data from tables that are not of the MergeTree type, such as external tables and Log tables.

    Important

    You must manually handle the content that cannot be migrated during the disk downgrade by following the procedure in this topic.

  • Do not perform DDL operations during the disk downgrade. Otherwise, data verification at the end of the process may fail and cause the downgrade to fail.

  • After the disk is downgraded, the internal node IP addresses will change. If you rely on node IP addresses for data writing and access, you must obtain the VPC CIDR block IP addresses of the cluster again. For more information, see Obtain the VPC CIDR block IP addresses of a cluster. A query example is provided after this list.

  • After you change the cluster configuration, frequent merge operations occur for a period of time. These operations increase I/O usage and can increase the latency of business requests. Plan for this potential impact. For information about how to calculate the duration of merge operations, see Calculate the merge duration after migration.

  • During the disk downgrade process, the CPU and memory usage of the cluster will increase. The estimated resource usage per node is less than 5 cores and 20 GB.
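
The following query is a minimal sketch of how to view the node IP addresses from within the cluster after the change. It assumes that the cluster name is default; the system.clusters system table lists the host address of each shard and replica.

    -- View the node addresses of the cluster. Adjust the cluster name if yours is not 'default'.
    SELECT cluster, shard_num, replica_num, host_name, host_address, port
    FROM `system`.`clusters`
    WHERE cluster = 'default';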

Expenses

After the cluster configuration is changed, the fees also change. The actual fees are displayed in the console. For more information, see Billing for configuration changes.

Procedure

Step 1: Handle tables with the Kafka and RabbitMQ engines

Migration is not supported for tables that use the Kafka or RabbitMQ engine. You must handle these tables manually.

  1. Log on to the cluster and run the following statement to query for the tables that you need to handle. For more information, see Connect to a ClickHouse cluster using DMS.

    SELECT * FROM `system`.`tables` WHERE engine IN ('RabbitMQ', 'Kafka');
  2. View and back up the `CREATE TABLE` statement for each target table.

    SHOW CREATE TABLE <target_table_name>;
  3. Delete the tables that use the Kafka and RabbitMQ engines.

    Important

    When you delete a Kafka table, you must also delete the materialized views that reference it. Otherwise, the configuration change fails.
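
    The following statements are a minimal sketch of the deletion. The database, table, and materialized view names are placeholders. Depending on how the tables are distributed, you may need to append ON CLUSTER default so that the statements take effect on all nodes.

    -- Drop the materialized views that reference the Kafka or RabbitMQ table first.
    DROP VIEW IF EXISTS <database_name>.<materialized_view_name>;
    -- Then drop the Kafka or RabbitMQ engine table itself.
    DROP TABLE IF EXISTS <database_name>.<kafka_or_rabbitmq_table_name>;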

Step 2: Back up business data from non-MergeTree tables

  1. Log on to the cluster and run the following statement to identify the non-MergeTree tables whose data requires migration.

    SELECT
        `database` AS database_name,
        `name` AS table_name,
        `engine`
    FROM `system`.`tables`
    WHERE (`engine` NOT LIKE '%MergeTree%')
        AND (`engine` != 'Distributed')
        AND (`engine` != 'MaterializedView')
        AND (`engine` NOT IN ('Kafka', 'RabbitMQ'))
        AND (`database` NOT IN ('system', 'INFORMATION_SCHEMA', 'information_schema'))
        AND (`database` NOT IN (
            SELECT `name`
            FROM `system`.`databases`
            WHERE `engine` IN ('MySQL', 'MaterializedMySQL', 'MaterializeMySQL', 'Lazy', 'PostgreSQL', 'MaterializedPostgreSQL', 'SQLite')
        ));
  2. Back up the data.

    You must back up the data from the non-MergeTree tables that you identified. For more information, see Back up data to OSS.
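
    If you also want to keep a manual copy of a table and your cluster version supports the community s3 table function, the following statement is a minimal sketch of exporting a table to a CSV file over OSS's S3-compatible endpoint. The file URL, AccessKey pair, column structure, and table name are placeholders.

    -- Export a table to a CSV object in OSS. Replace the placeholders with your own values.
    INSERT INTO FUNCTION s3('https://<bucket_name>.<oss_endpoint>/<backup_path>/data.csv',
        '<access_key_id>', '<access_key_secret>', 'CSV',
        'id UInt64, message String')
    SELECT * FROM <database_name>.<table_name>;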

Step 3: Perform disk downgrade operations in the console

  1. Log on to the ApsaraDB for ClickHouse console.

  2. In the upper-left corner of the page, select the region where the cluster is located.

  3. On the Clusters page, select Clusters of Community-compatible Edition.

  4. Find the target cluster and click Change Configurations in the Actions column.

  5. In the Change Configurations dialog box, select Downgrade Disk Specification and click OK.

  6. In the disk downgrade detection window that appears, check the detection status.

    • If the detection is successful, click Next.

    • If the detection fails, fix the issues as prompted on the page, and then click Retry Detection. After the detection is successful, click Next.

      During the disk downgrade process, the main reasons for detection failure are as follows:

      • Missing unique distributed table: A local table does not have a corresponding distributed table. You need to create one (see the sketch after this list).

      • Corresponding distributed table is not unique: A local table has more than one distributed table. Delete the extra distributed tables and keep only one.

      • Kafka/RabbitMQ engine tables are not supported: Kafka or RabbitMQ engine tables exist. Delete them.

      • A primary-replica instance has non-replicated *MergeTree tables: Data may be inconsistent between replicas, which causes an exception during data migration for the configuration change.

      • The columns of the distributed table and the local table are inconsistent: You must ensure that the columns of the distributed table and the local table are consistent. Otherwise, an exception occurs during data migration for the configuration change.

      • The table is missing on some nodes: You need to create tables with the same name on different shards. For the inner table of a materialized view, rename the inner table and then rebuild the materialized view to point to the renamed inner table. For more information, see The inner table of a materialized view is inconsistent across shards.
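
      The following statement is a minimal sketch of how to create a missing distributed table. It assumes a hypothetical local table default.events_local that is sharded by a random key, and that the cluster name is default; replace the names and the sharding key with your own.

      -- Create a distributed table over the local table default.events_local on every node.
      CREATE TABLE default.events ON CLUSTER default
      AS default.events_local
      ENGINE = Distributed('default', 'default', 'events_local', rand());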

  7. On the downgrade page, configure Storage Space and Write Suspension Time according to your business requirements.

    1. Storage Space: The storage space of a single node ranges from 100 GB to 32,000 GB.

    2. Write Suspension Time: Disk downgrade involves data migration. To ensure that the migration succeeds, the cluster must suspend write operations during this period.

      Note

      The write suspension time has the following requirements:

      • We recommend that you set the write suspension time to at least 30 minutes.

      • The disk downgrade must be completed within 5 days after the configuration change is created. Therefore, the end time of the write suspension period must be no later than 5 days after the current date.

      • To reduce the impact of migration on your business, we recommend that you set the write suspension period to your business off-peak hours.

  8. Click Buy Now and complete the payment according to the page prompts.

  9. On the Purchase Completed page, click Console.

  10. On the Clusters of Community-compatible Edition tab, view the status of the target cluster in the Status column. When the cluster status changes to Running, the disk downgrade is successful.

Note

The disk downgrade is expected to take more than 30 minutes. The actual duration depends on the data volume. You can check the task progress based on the cluster status displayed in the console.

Step 4: Recreate tables with the Kafka and RabbitMQ engines

Log on to the cluster and execute the `CREATE TABLE` statements that you backed up in Step 1: Handle tables with the Kafka and RabbitMQ engines. For more information, see Connect to a ClickHouse cluster using DMS.
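
The following statement is only a hypothetical example of what a backed-up Kafka engine table definition might look like; execute your own saved CREATE TABLE statements instead. The column definitions, broker address, topic, consumer group, and format are placeholders.

    -- Hypothetical Kafka engine table. Replace all placeholders with your own values.
    CREATE TABLE default.kafka_source_table
    (
        `id` UInt64,
        `message` String
    )
    ENGINE = Kafka
    SETTINGS kafka_broker_list = '<broker_host>:9092',
             kafka_topic_list = '<topic_name>',
             kafka_group_name = '<consumer_group_name>',
             kafka_format = 'JSONEachRow';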

Step 5: Migrate business data from non-MergeTree tables

Log on to the cluster and use OSS to migrate the data backed up in Step 2: Back up business data from non-MergeTree tables. For more information, see Import data from OSS.
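
If your cluster version supports the community s3 table function and the backup files are accessible over OSS's S3-compatible endpoint, the following statement is a minimal sketch of importing a CSV backup file into a table. The table name, file URL, AccessKey pair, and column structure are placeholders; adjust them to match the data that you exported.

    -- Import a CSV object from OSS into a table. Replace the placeholders with your own values.
    INSERT INTO default.log_table
    SELECT *
    FROM s3('https://<bucket_name>.<oss_endpoint>/<backup_path>/data.csv',
        '<access_key_id>', '<access_key_secret>', 'CSV',
        'id UInt64, message String');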