Elasticsearch: Cross-region ES data replication

Last Updated: Mar 28, 2026

Use Cloud Enterprise Network (CEN), Network Load Balancer (NLB), and PrivateLink to establish a private network connection between two Elasticsearch clusters in different regions. You can then configure Cross-Cluster Replication (CCR) to achieve near-real-time synchronization of index data. This setup supports use cases such as cross-region disaster recovery and read-write splitting.

Prerequisites

  • You have created two Elasticsearch instances in different regions, such as China (Shanghai) and China (Hangzhou), as the leader cluster and the follower cluster. For more information, see Create an Alibaba Cloud Elasticsearch instance.

  • The management and deployment mode for both clusters is cloud-native control plane (v3). If a cluster uses the v1 or v2 architecture, you must upgrade its architecture first. For more information, see Upgrade the architecture of an instance.

  • Both clusters are running Elasticsearch 7.10.0 or later, and the follower cluster's version must not be older than the leader cluster's version.

Overview

Alibaba Cloud Elasticsearch instances are deployed in a dedicated management VPC, not in your VPC. Even if you use CEN to connect the VPCs in the two regions, the clusters cannot communicate privately by default. You must also use NLB and PrivateLink to bridge the clusters' management VPCs. The overall process is as follows:

  1. Use CEN to connect the VPCs where the leader and follower clusters are located.

  2. In the follower cluster's VPC, create an NLB instance that uses an IP-based server group to forward traffic across regions to the leader cluster's private IP address.

  3. Create a PrivateLink endpoint service for the NLB instance.

  4. In the Elasticsearch console of the follower cluster, configure a private connection to obtain the PrivateLink domain name.

  5. In the Kibana console of the follower cluster, add a remote cluster and configure CCR.

Procedure

Step 1: Connect cross-region VPCs

Use CEN to connect the VPCs where the leader and follower clusters are located. For detailed steps, see Connect VPCs across regions.

Important

A CEN Transit Router requires the VPC to have at least two VSwitches in different availability zones. If your VPC has only one VSwitch, you must create a new VSwitch in another availability zone before you can attach the VPC to the Transit Router.

Step 2: Obtain the leader cluster's private IP

  1. Log on to the Alibaba Cloud Elasticsearch console.

  2. On the Basic Information page of the leader cluster, find the Internal Endpoint field and copy the private domain name.

  3. From an ECS instance in the same VPC, run the following command to resolve the private IP address of the leader cluster:

    ping <private_domain_name_of_the_leader_cluster>

    Record the resolved IP address for later use.

Step 3: Create an NLB instance and server group

Create an NLB instance in the VPC where the follower cluster is located to forward traffic to the leader cluster.

  1. Log on to the Network Load Balancer (NLB) console.

  2. Create a server group.

    1. In the left-side navigation pane, click Server Group.

    2. Click Create Server Group and configure the following parameters:

      • Server group type: Select IP.

      • Forwarding port: Enable All-port Forwarding.

      • Health check: Set the port to 9300.

    3. In the server group that you created, click Add Backend Server. Add the private IP address of the leader cluster that you obtained in Step 2. Keep the default port setting.

  3. Create an NLB instance and a listener.

    If you already have an NLB instance, you can create a listener on it directly. If not, create an NLB instance first; you can skip the listener configuration during instance creation and add the listener by following the steps below.

    1. In the left-side navigation pane, click Instances, and then select an existing NLB instance or create a new one.

    2. Go to the instance details page, click the Listeners tab, and then click Create Listener.

    3. Enable the All Ports feature and set the listener port range to 9200-9300.

    4. For Server Group, select the IP server group you created in the previous step.

Step 4: Configure PrivateLink

Use PrivateLink to establish a network connection from the management VPC of the follower cluster to the leader cluster.

  1. Log on to the PrivateLink console.

  2. Create an endpoint service.

    1. In the left-side navigation pane, click Endpoint Services.

    2. Click Create Endpoint Service and configure the following parameters:

      • Service resource type: Select NLB.

      • Service resource: Select the NLB instance that you created or used in Step 3.

      • Availability zone: Select the availability zone where the NLB instance is located.

      • Automatically accept endpoint connection: Select Yes.

  3. Add a private connection in the follower cluster.

    1. Log on to the Alibaba Cloud Elasticsearch console and go to the details page of the follower cluster instance.

    2. In the left-side navigation pane, choose Configuration and Management > Security Settings.

    3. In the Network Settings section, click Configure Private Connection.

    4. Click Add Private Connection and select the endpoint service that you created in the previous step.

    5. Wait until the connection status changes to Connected.

  4. Obtain the PrivateLink domain name.

    After the connection is successful, return to the PrivateLink console. On the Endpoint Connection Status tab of the endpoint service, find the automatically created endpoint and copy its domain name. This is the PrivateLink domain name you will need to configure the remote cluster.

Step 5: Configure CCR

  1. Log on to the Kibana console of the follower cluster. For more information, see Log on to Kibana by using the public endpoint (for v2/v3 deployment architectures).

    On the instance details page of the follower cluster, click Visualization Control in the left-side navigation pane, and then click Go to Kibana.

  2. Add a remote cluster.

    1. In the left-side menu of Kibana, click Stack Management.

    2. In the Data section, click Remote clusters.

    3. Click Add a remote cluster and configure the following parameters:

      • Name: Enter the instance ID of the leader cluster.

      • Proxy mode: Enable proxy mode.

      • Proxy address: Enter the PrivateLink domain name that you obtained in Step 4, in the format <domain_name>:9300.

    4. Click Save and confirm that the connection status is Connected.
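
    If you prefer the API to the Kibana UI, an equivalent configuration can be applied from the Kibana Dev Tools console of the follower cluster. In this sketch, leader-cluster is an example name for the remote cluster; replace the placeholder with the PrivateLink domain name that you obtained in Step 4:

      PUT /_cluster/settings
      {
          "persistent": {
              "cluster.remote.leader-cluster.mode": "proxy",
              "cluster.remote.leader-cluster.proxy_address": "<privatelink_domain_name>:9300"
          }
      }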

  3. Configure CCR replication.

    CCR supports two modes:

    • Follower index: Replicates a single, specified index.

    • Auto-follow pattern: Automatically replicates indices that match a specified name pattern. This mode is suitable for bulk synchronization.

    After an auto-follow pattern is created, data from new indices in the leader cluster is automatically synchronized to the follower cluster. Existing indices are not automatically synchronized. To synchronize an existing index, you must manually create a follower index in the follower cluster.

    The following example shows how to configure an auto-follow pattern:

    1. In Stack management, click Cross-cluster replication.

    2. Select the Auto-follow patterns tab, and then click Create an auto-follow pattern.

    3. For Remote cluster, select the remote cluster that you added in the previous step. For Index patterns, enter * to replicate all indices. To replicate specific indices, enter a name pattern, such as logs-*.

    4. Click Create.
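
    The same configuration can be created with the CCR API in the Kibana Dev Tools console of the follower cluster. In this sketch, leader-cluster, logs-pattern, and the index names are example values:

      PUT /_ccr/auto_follow/logs-pattern
      {
          "remote_cluster": "leader-cluster",
          "leader_index_patterns": ["logs-*"]
      }

    To replicate an index that already exists on the leader cluster, create a follower index manually:

      PUT /logs-existing/_ccr/follow
      {
          "remote_cluster": "leader-cluster",
          "leader_index": "logs-existing"
      }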

Step 6: Verify data synchronization

  1. Verify existing data synchronization.

    In the Kibana console of the follower cluster, run the following command to query a follower index:

    GET /<index_name>/_search

    If the results match the data in the corresponding index on the leader cluster, the existing data has been synchronized.

  2. Verify incremental data synchronization.

    1. In the Kibana console of the leader cluster, create a new index and write a document to it:

      PUT /test-increment-index
      
      POST /test-increment-index/_doc
      {
          "title": "increment test",
          "content": "This is a test document for CCR incremental sync."
      }
    2. In the Kibana console of the follower cluster, query the index:

      GET /test-increment-index/_search

      If the returned document content matches what you wrote, the incremental data has been synchronized in near-real time.
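
  3. Check replication lag.

    You can also monitor replication progress with the CCR stats API in the Kibana console of the follower cluster:

      GET /_ccr/stats

    In the response, compare the leader_global_checkpoint and follower_global_checkpoint fields for each follower shard. When the two values match, the follower cluster has caught up with the leader.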

FAQ

Why is the remote cluster connection status not "connected"?

Check the following items:

  • Ensure the leader cluster's private access whitelist includes the CIDR block of the VPC where the NLB instance is located. In a cross-region setup, NLB forwards traffic, including health check probes, to the leader cluster through CEN. The source IP addresses for this traffic are from the NLB's VPC. If this CIDR block is not whitelisted, NLB health checks fail and the connection cannot be established.

  • Verify that cross-region bandwidth is allocated for CEN and that the network connection between the two VPCs is established.

  • Ensure that the NLB listener port range includes 9200-9300.

  • Verify that the private IP address of the leader cluster in the server group is correct.

  • Ensure that the health check port for the server group is 9300.

  • Verify that the status of the PrivateLink endpoint connection is "Connected".
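
You can also check the connection from inside the follower cluster. In the Kibana Dev Tools console of the follower cluster, run:

  GET /_remote/info

In the response, verify that the remote cluster's connected field is true and that its proxy_address matches the PrivateLink domain name and port 9300.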

Is cross-region CCR latency higher than same-region latency?

The synchronization latency for cross-region CCR is affected by network latency and is typically slightly higher than in a same-region scenario, but can still be kept within seconds. The actual latency depends on the CEN bandwidth configuration, data volume, and network conditions. We recommend that you configure the CEN cross-region bandwidth based on your business requirements.

What are the version requirements for CCR?

The follower cluster's version cannot be older than the leader cluster's version. Both clusters must be version 7.10.0 or later and use the cloud-native control plane (v3) management and deployment mode.