
Elasticsearch: Node configuration for Alibaba Cloud Elasticsearch clusters

Last Updated: Mar 31, 2026

When creating an Alibaba Cloud Elasticsearch cluster, you must configure the specifications and storage for different node types to meet your business requirements. An Elasticsearch cluster consists of data nodes, Kibana nodes, dedicated master nodes, warm nodes, frozen nodes, and coordinating nodes, each with distinct responsibilities.

Data node

Data nodes store index data and perform indexing, searching, and aggregation. Data nodes have high CPU, memory, and I/O requirements. If resources are insufficient, we recommend that you add new data nodes to your cluster.

Note
  • If a cluster has dedicated master nodes, data nodes function only as data nodes.

  • If a cluster does not have dedicated master nodes, data nodes serve as both data nodes and dedicated master nodes.

  • Alibaba Cloud Elasticsearch clusters use one of two control plane architectures: a basic control plane (v2) or a cloud-native control plane (v3). In the v2 architecture, scaling up a cluster that lacks dedicated master nodes triggers a cluster restart. If the overall cluster load is low and your indexes have replica shards, the cluster can remain available during the restart. However, in certain scenarios, such as high write and query loads, access timeouts may occur during the restart. Therefore, we recommend performing these operations during off-peak hours.
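Before you perform such a change, you can confirm that your indexes have replica shards and that the cluster is healthy. The following requests use standard Elasticsearch catalog and health APIs and can be run from the Kibana console:

```
# List indexes with their replica counts (rep column) and health.
GET _cat/indices?v&h=index,rep,health

# Check the overall cluster status before starting the change.
GET _cluster/health
```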

Data node specifications

We recommend that you use data nodes with 2 vCPUs and 4 GiB of memory in testing environments and use data nodes with higher specifications in production environments.

Data node disk type

  • ESSD (Default): Provides low latency, high throughput, and fast response times. Ideal for latency-sensitive applications or I/O-intensive workloads. For more information about ESSD specifications, see Node specifications. For more information about performance, see ESSDs.

  • Ultra Disk: Provides cost-effective storage. Suitable for logging and analytics scenarios that involve large volumes of data.

  • Standard SSD: Offers high IOPS and responsiveness. Suitable for online analytics and search scenarios.

Note
  • You can view the supported disk types on the buy page.

  • After a cluster is created, you cannot change the disk types of nodes in the cluster.

ESSD performance level

This parameter is required only if you set the Data Node Disk Type parameter to ESSD. It specifies the performance level (PL) of the ESSD.

Data node disk encryption

  • Disk encryption provides strong data security without requiring changes to your business systems or applications. However, it may have a small impact on cluster performance.

  • Disk encryption is free of charge. No additional fees are generated when you read data from or write data to encrypted disks.

Storage space per data node

The storage space of each data node depends on the disk type. Unit: GiB.

  • ESSD: Supports up to 6 TiB of storage space.

  • Ultra Disk: Supports up to 20 TiB of storage space for Elasticsearch clusters of V6.7, V7.7, and later. For other versions, the maximum storage space is 5 TiB.

  • Standard SSD: Supports up to 6 TiB of storage space for Elasticsearch clusters of V6.7, V7.7, and later. For other versions, the maximum storage space is 2 TiB.

Note

An ultra disk with more than 2,560 GiB of storage space is implemented as a disk array (RAID 0). Therefore, such a disk can be resized only through a blue-green update.

Number of data nodes

The number of nodes that you purchase must be a multiple of the number of zones.

Important

A cluster that contains only two data nodes has a high split-brain risk and low stability. In clusters of earlier versions, such as V5.X or V6.X, a master node may fail to be elected when a node restart is required, and the cluster may become unable to provide services. Therefore, configure this parameter based on your business requirements.

Kibana node

  • The value of the Kibana Node parameter can be only Yes.

  • A Kibana node with 1 vCPU and 2 GiB of memory is free of charge, but we recommend this specification only for testing purposes.

  • For production workloads, we recommend that you purchase a Kibana node with 2 vCPUs and 4 GiB of memory or higher specifications to avoid affecting cluster performance and stability.

Dedicated master node

Dedicated master nodes perform cluster management operations, such as creating and deleting indexes, tracking nodes, and allocating shards. Their stability is critical to cluster health. By default, any node in a cluster can be elected as the master node. Because indexing, search, and query operations consume large amounts of CPU, memory, and I/O resources, we recommend that you purchase dedicated master nodes to separate cluster management from data operations and ensure cluster stability.

Important
  • If the dedicated master nodes in your cluster are free of charge, you are charged for these nodes after you upgrade the configuration of the cluster.

  • If you perform a blue-green change for a cluster that does not contain dedicated master nodes and is deployed in the basic control plane (v2) architecture, data nodes will be restarted the next time you perform a change on the cluster. Therefore, we recommend that you purchase dedicated master nodes.
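To check which node is currently the elected master and the roles of each node, you can query the standard `_cat/nodes` API from the Kibana console (the master column marks the elected master with an asterisk):

```
GET _cat/nodes?v&h=name,node.role,master
```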

Dedicated master node

  • To improve the stability of your services, we recommend that you purchase dedicated master nodes.

  • For a multi-zone Elasticsearch cluster, the default value of this parameter is Yes, and you cannot change the value.

Note
  • You cannot release the dedicated master nodes that you have purchased.

  • After a cluster is created, you can purchase dedicated master nodes when you upgrade the configuration of the cluster.

Dedicated master node specifications

You can view the supported specifications on the buy page.

Dedicated master node disk type

  • ESSD (Default): Provides low latency, high throughput, and fast response times. Ideal for latency-sensitive applications or I/O-intensive workloads. For more information about ESSD specifications, see Node specifications. For more information about performance, see ESSDs.

  • Ultra Disk: Provides cost-effective storage. Suitable for logging and analytics scenarios that involve large volumes of data.

  • Standard SSD: Offers high IOPS and responsiveness. Suitable for online analytics and search scenarios.

You can view the supported disk types on the buy page.

Dedicated master node storage space

The value of this parameter can be only 20 GiB.

Number of dedicated master nodes

The value of this parameter can be only 3.

Warm node

If your workload involves both of the following types of data, we recommend using a hot-warm architecture, which combines high-performance hot nodes with large-capacity warm nodes:

  • Hot data: Indexes that are frequently queried, have high write loads, and are latency-sensitive.

  • Warm data: Indexes that are infrequently queried and are mostly read-only or have minimal writes. These are typically historical data.

By deploying hot and warm data on different node types, you can prevent resource contention caused by warm data from affecting hot data performance, significantly reduce storage costs, and improve overall cluster efficiency and stability.

For more information, see "Hot-Warm" Architecture in Elasticsearch 5.x.

Note

  • If dedicated master nodes are purchased, warm nodes are used only as warm nodes.

  • If dedicated master nodes are not purchased, warm nodes are also used as dedicated master nodes.

Warm node

You can disable purchased warm nodes. If the cluster gets stuck when you disable a warm node, see Cluster stuck after disabling a warm node in the FAQ section.

Warm node specifications

For information about the supported specifications, see the buy page.

For scenarios with high I/O and large storage requirements, you can also use cost-effective local disk instance types, such as the 20 vCPUs, 88 GiB memory (SATA: 8 × 7300 GiB) instance type. The following limits apply to local disk instance types:

  • Only kernel-enhanced instances of V7.17 that are deployed in the cloud-native control plane (v3) architecture across two or three availability zones support local disk instance types.

  • For instances that use the v3 architecture, node-level blue-green changes, such as node scale-out, are not supported for local disk instance types.

Note
  • Configure at least one replica when you use a local disk instance type to prevent data loss on local disks.

  • You cannot change a local disk instance type to a cloud disk instance type.

  • If your application architecture cannot ensure data reliability, we recommend that you create a cluster by using a cloud disk instance type. Machine-level snapshots are not supported for cloud disk instance types.

Warm node disk type

Ultra Disk and ESSD are supported.

Warm node disk encryption

  • Disk encryption provides strong data security without requiring changes to your business systems or applications. However, it may have a small impact on cluster performance.

  • Disk encryption is free of charge. No additional fees are generated when you read data from or write data to encrypted disks.

Note
  • You cannot disable disk encryption for encrypted disks.

  • You cannot enable disk encryption for existing disks, including when you upgrade the configuration of a cluster. However, you can enable disk encryption for new cloud disks that you purchase during a configuration upgrade.

Warm node storage space

The minimum value of this parameter is 500. Unit: GiB.

Number of warm nodes

The number of nodes that you purchase must be a multiple of the number of zones.

After you purchase warm nodes, the system adds the -Enode.attr.box_type parameter to the startup parameters of each node:

  • Data node: -Enode.attr.box_type=hot

  • Warm node: -Enode.attr.box_type=warm
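With these node attributes in place, you can require that an index be allocated to warm nodes by setting index.routing.allocation.require.box_type, as the FAQ below also shows. The index name my_index is illustrative:

```
PUT my_index/_settings
{
  "index.routing.allocation.require.box_type": "warm"
}
```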

Frozen node

A frozen node is the compute layer for the Searchable Snapshot feature. It maintains index metadata, manages a local shared cache, and pulls required data blocks from Object Storage Service (OSS) on demand for queries. By storing historical data as snapshots in Alibaba Cloud OSS, frozen nodes can reduce storage costs by up to 90% while retaining search capabilities.

  • Frozen nodes are supported only for instances of V8.17.0 and later. After you enable frozen nodes, you cannot disable them yourself. To disable them, contact technical support.

  • Frozen nodes do not require large-capacity local disks because data is stored in OSS. We recommend configuring sufficient memory to improve the cache hit ratio.

  • For detailed examples, see Searchable Snapshot.
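As a sketch of the underlying open source workflow, historical data enters the frozen tier by mounting a snapshot as a partially cached searchable snapshot index (storage=shared_cache). The repository, snapshot, and index names below are hypothetical, and the exact procedure on Alibaba Cloud Elasticsearch may differ; see the Searchable Snapshot topic:

```
POST /_snapshot/my_oss_repo/my_snapshot/_mount?storage=shared_cache
{
  "index": "my_historical_index"
}
```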

Frozen node

When you purchase a new instance of V8.17.0 or later, you can select the check box in the instance specification section to enable this feature.

Frozen node specifications

We recommend an instance type with 4 vCPUs and 16 GiB of memory or higher. A frozen node's memory maintains index metadata and manages the shared cache. Sufficient memory can improve the cache hit ratio. For information about the supported specifications, see the buy page.

Frozen node disk type

Ultra Disk and ESSD are supported. Local disks serve only as the shared cache, and data is persisted in OSS.

Frozen node storage space

We recommend 500 GiB or more. The local disk space serves as a shared cache. The cache uses 90% of the total disk space of a node or the total space minus 100 GiB, whichever is smaller. A Least Recently Used (LRU) policy evicts cold data blocks. The larger the disk space, the higher the cache hit ratio.
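The cache sizing rule above can be sketched as follows (a minimal illustration based only on the figures in this description):

```python
def frozen_cache_gib(disk_gib: float) -> float:
    # Effective shared-cache size on a frozen node: 90% of the total
    # disk space, or the total space minus 100 GiB, whichever is smaller.
    return min(0.9 * disk_gib, disk_gib - 100)

print(frozen_cache_gib(500))   # 400.0: 500 - 100 is smaller than 0.9 * 500 = 450
print(frozen_cache_gib(2000))  # 1800.0: 0.9 * 2000 is smaller than 2000 - 100 = 1900
```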

Number of frozen nodes

The number of nodes that you purchase must be a multiple of the number of availability zones.

After you purchase frozen nodes, the system adds the -Enode.attr.box_type parameter to the startup parameters of each node:

  • Data node: -Enode.attr.box_type=hot

  • Warm node: -Enode.attr.box_type=warm

  • Frozen node: -Enode.attr.box_type=frozen

Client node

You can purchase client nodes, also called coordinating nodes, to offload CPU overhead from data nodes. This improves the processing performance and service stability of a cluster. For CPU-intensive workloads, such as those with a large number of aggregate queries, we recommend that you purchase client nodes.

Coordinating node

For a cluster that is deployed in the cloud-native control plane (v3) architecture (V7.16 or later), you cannot release the client nodes that you have purchased. You can check on the buy page whether the client nodes in your cluster can be released.

Coordinating node specifications

You can view the supported specifications on the buy page.

Coordinating node disk type

The value of this parameter can be only Ultra Disk.

Coordinating node storage space

The value of this parameter can be only 20 GiB.

Number of coordinating nodes

The number of nodes that you purchase must be a multiple of the number of zones.

FAQ

Cluster stuck after disabling a warm node

1. Check whether node allocation rules based on box_type are manually configured in the cluster (that is, whether indexes are forcibly allocated to nodes tagged as warm).

  • Query the index.routing.allocation.require.box_type setting for all existing indexes.

    GET */_settings/index.routing.allocation.require.box_type

    If the output is {"index.routing.allocation.require.box_type": "warm"}, the index must be allocated to nodes where box_type=warm.

  • Check if a box_type allocation rule is configured in any index templates.

    Query all index templates to check whether a box_type allocation rule is configured. If a template returns "index.routing.allocation.require.box_type": "warm", all new indices are allocated to warm nodes by default.

    When a new index is created, it inherits the configuration from the template. If this value is set in the template, all subsequent indices automatically apply this rule.

  • Check the node allocation configuration for the warm phase of all Index Lifecycle Management (ILM) policies.

    GET _ilm/policy?filter_path=*.policy.phases.warm.actions.allocate.require.box_type

If an index is forcibly allocated to warm nodes, shutting down those warm nodes through a scale-in operation causes the cluster change to enter the Change is blocked state.

2. Solution

  • Remove the box_type configuration from the policy

    # First, stop ILM.
    POST _ilm/stop
    
    # View the specific ILM policy.
    GET _ilm/policy/your_policy_name
    
    # Update the ILM policy and remove the box_type requirement from the warm
    # phase (and from the hot phase, if it is present there as well). If the
    # require field under allocate becomes empty, delete the entire allocate
    # action, or retain only other necessary allocation rules, such as the
    # number of replicas.
    PUT _ilm/policy/your_policy_name
    {
      "policy": {
        "phases": {
          "warm": {
            "actions": {
              "allocate": {
                "require": {
                  "box_type": null
                }
              }
            }
          }
        }
      }
    }
    
    # Restart ILM after the policy is updated.
    POST _ilm/start
  • Remove the box_type configuration from the index template

    # View the names of templates that configure box_type.
    GET _template?filter_path=*.settings.index.routing.allocation.require.box_type
    
    # Update the template and remove the box_type configuration. Note that
    # PUT _template replaces the entire template, so resubmit the complete
    # template definition (index_patterns, mappings, and other settings)
    # without the box_type field. The body below only illustrates the
    # setting to remove.
    PUT _template/your_template_name
    {
      "settings": {
        "index.routing.allocation.require.box_type": null
      }
    }
  • Remove the box_type configuration from the index

    # Remove the box_type configuration for a specific index.
    PUT /your_index_name/_settings
    {
      "index.routing.allocation.require.box_type": null
    }
    # Remove the configuration for all indexes in a batch.
    PUT */_settings
    {
      "index.routing.allocation.require.box_type": null
    }
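After you remove the box_type rules, you can verify that shard allocation is no longer blocked, for example with the standard allocation explain API. If no shard is unassigned, the request returns an error, which in this case indicates that allocation is healthy:

```
GET _cluster/allocation/explain
```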