
Alibaba Cloud Service Mesh:Reduce push latency using ASM remote control plane

Last Updated: Mar 10, 2026

When data plane clusters run outside Alibaba Cloud -- in another cloud provider or an on-premises data center -- sidecar proxies must reach the Alibaba Cloud Service Mesh (ASM) control plane over the Internet. As the number of pods grows, network connections and bandwidth consumption scale linearly, and frequent configuration or service changes increase push latency. The ASM remote control plane solves this by deploying a local control plane instance inside the external cluster, so sidecar proxies receive xDS configurations locally instead of over constrained network links.

How it works

When a data plane cluster runs inside an Alibaba Cloud Virtual Private Cloud (VPC), sidecar proxies connect to the managed ASM control plane through the VPC network. Latency is low, configuration push is smooth, and the remote control plane is unnecessary.

When a data plane cluster runs outside Alibaba Cloud, the situation changes. All pods connect to the managed control plane over the Internet, and the number of connections and bandwidth usage scale with the pod count. If configurations or services change frequently, push latency increases.

The remote control plane addresses this by running a control plane instance inside the external cluster:

Remote control plane architecture

With the remote control plane enabled:

  • xDS configurations are pushed to sidecar proxies locally, within the cluster.

  • Only a small number of connections remain between the managed ASM control plane and the external cluster, carrying control plane component updates and service discovery data.

  • The dependency on low-latency, high-bandwidth network links is greatly reduced.

Constraints

Before you enable the remote control plane, review the following constraints:

  • Create ASM resources with the ASM kubeconfig. After you enable the remote control plane, create all ASM-related Kubernetes resources using the kubeconfig of the ASM instance. If you use the remote cluster's kubeconfig, resources may be overwritten.

  • Global service discovery applies. Workloads managed by the ASM managed control plane can access services managed by the remote control plane. Communication uses mTLS by default, and east-west ASM gateways are supported.

  • Disable the data plane Kubernetes API access feature first. The remote control plane conflicts with the Use the Kubernetes API of clusters on the data plane to access Istio resources feature. Disable that feature before you enable the remote control plane.
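As a concrete illustration of the first constraint, the following sketch applies an Istio resource through the ASM instance's kubeconfig context rather than the remote cluster's. The file name virtual-service.yaml is a placeholder, and CTX_ASM and CTX_CLUSTER2 refer to the context variables defined later in this guide:

```shell
# Correct: create ASM-related Istio resources through the ASM instance's
# kubeconfig context (CTX_ASM is set in "Set up cluster context variables").
kubectl --context="${CTX_ASM}" apply -f virtual-service.yaml

# Risky: applying the same resource with the remote cluster's kubeconfig
# may result in the resource being overwritten.
# kubectl --context="${CTX_CLUSTER2}" apply -f virtual-service.yaml
```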

Prerequisites

Before you begin, make sure that you have:

  • An ASM instance with an external data plane cluster (cluster-2) added to it.

  • The kubeconfig files for both the ASM instance and cluster-2.

  • Disabled the Use the Kubernetes API of clusters on the data plane to access Istio resources feature, as described in Constraints.

Set up cluster context variables

To avoid errors when switching between clusters, set environment variables for each kubeconfig context before you start:

export CTX_ASM=<kubeconfig-context-for-asm-instance>
export CTX_CLUSTER2=<kubeconfig-context-for-cluster-2>

Replace the placeholders with actual values:

  • <kubeconfig-context-for-asm-instance>: the kubeconfig context name for your ASM instance. Example: asm-mesh-xxx.

  • <kubeconfig-context-for-cluster-2>: the kubeconfig context name for the external cluster. Example: cluster-2-context.

All subsequent commands in this guide use these variables.
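Because every later command depends on these variables, it can help to fail fast when one is unset. A minimal guard using standard shell parameter expansion (the example values and error messages are illustrative):

```shell
# Example values; in practice these come from the exports above.
export CTX_ASM="asm-mesh-xxx"
export CTX_CLUSTER2="cluster-2-context"

# Abort early with a clear message if either variable is unset or empty.
: "${CTX_ASM:?Set CTX_ASM to the kubeconfig context of your ASM instance}"
: "${CTX_CLUSTER2:?Set CTX_CLUSTER2 to the kubeconfig context of cluster-2}"

echo "ASM context:     ${CTX_ASM}"
echo "Cluster context: ${CTX_CLUSTER2}"
```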

Enable the remote control plane

  1. Open the ASMMeshConfig resource for editing:

       kubectl --context="${CTX_ASM}" edit ASMMeshConfig
  2. Add the externalIstiodConfigurations section under .spec, replacing <cluster-2-id> with the ClusterID of cluster-2:

       apiVersion: istio.alibabacloud.com/v1beta1
       kind: ASMMeshConfig
       metadata:
         name: default
       spec:
         # ... existing configuration ...
         externalIstiodConfigurations:
           <cluster-2-id>:
             replicas: 2
             # Optional: specify resource requests and limits.
             # The structure matches standard Kubernetes pod resource fields.
             # If omitted, ASM uses default resource settings.
    Note

    Enabling the remote control plane restarts the ASM gateway in the target cluster. Assess the impact on your traffic before you proceed.
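After the change is saved, you can check that a local control plane instance has come up in cluster-2. The istio-system namespace and the app=istiod label below are assumptions based on standard Istio conventions; adjust them to match your ASM version:

```shell
# List control plane pods in the external cluster (namespace and label
# are assumptions; adjust to match your environment).
kubectl --context="${CTX_CLUSTER2}" -n istio-system get pods -l app=istiod
```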

Deploy test applications and verify

  1. Deploy the sleep and httpbin applications to cluster-2. See Deploy the httpbin application.

  2. Confirm that both pods are running with sidecar injection:

       kubectl --context="${CTX_CLUSTER2}" get pod

     Expected output:

       NAME                       READY   STATUS    RESTARTS   AGE
       httpbin-7df7fxxxxx-xxxxx   2/2     Running   0          3h15m
       sleep-6b7f9xxxxx-xxxxx     2/2     Running   0          3h15m

     The 2/2 in the READY column confirms that a sidecar proxy has been injected alongside each application container.
  3. Send a test request from sleep to httpbin:

       kubectl --context="${CTX_CLUSTER2}" exec deploy/sleep -it -- curl httpbin:8000/status/418

     Expected output:

       -=[ teapot ]=-

              _...._
            .'  _ _ `.
           | ."` ^ `". _,
           \_;`"---"`|//
             |       ;/
             \_     _/
               `"""`

     The ASCII teapot (HTTP 418) confirms that traffic flows through the sidecar proxy and that the remote control plane is serving configurations correctly.
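To confirm that the sidecars now receive xDS configuration from the local control plane rather than over the Internet, you can inspect proxy sync state with istioctl. This sketch assumes istioctl is installed and that your kubeconfig contains the cluster-2 context:

```shell
# Show each proxy's xDS sync state; SYNCED means the sidecar has
# received its latest configuration from the control plane.
istioctl --context="${CTX_CLUSTER2}" proxy-status
```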

Configure cross-cluster access

By default, workloads managed by the ASM managed control plane can access services managed by the remote control plane. The reverse does not apply -- services behind the remote control plane cannot reach services behind the managed control plane.

Choose the option that matches your requirements: