
Container Service for Kubernetes: Use LoadBalancer Services to expose applications in ACK Edge clusters

Last Updated: Mar 26, 2026

This topic covers three deployment scenarios—pods on on-cloud ECS nodes, pods on edge ENS nodes, and hybrid setups using Network Load Balancer (NLB)—and explains how to choose between the Local and Cluster external traffic policies.

For a full list of LoadBalancer Service configuration options, see:

How it works

The following sections describe how traffic flows in each scenario.

On-cloud pods (ECS + CLB)

Pods run on Elastic Compute Service (ECS) instances inside a virtual private cloud (VPC). The on-cloud control plane automatically manages Classic Load Balancer (CLB) listeners and backend servers, and distributes requests evenly across backend pods.


Edge pods (ENS + ECS forwarding)

Pods run on edge servers in an Edge Node Service (ENS) node pool. On-cloud ECS instances forward incoming requests across the Express Connect circuit to edge pods.


Hybrid pods (NLB — recommended)

Pods run in both on-cloud and on-premises data centers. NLB distributes traffic across both node pools, with pods registered directly as IP-type backend servers.


Prerequisites

Before you begin, ensure that you have:

  • An ACK Edge cluster

  • The appropriate network plug-in for your scenario (Terway Edge or Flannel VXLAN)

For the ENS scenario only:

  • An Express Connect circuit linking the on-cloud VPC to the edge data center

  • externalTrafficPolicy set to Cluster on the LoadBalancer Service (the ECS forwarding path does not work with Local policy)

Important

Without the forwarding performed by on-cloud ECS instances, load-balanced traffic cannot reach edge pods.

For the NLB scenario only:

  • An Express Connect circuit connecting on-cloud and edge node pools

  • Terway Edge as the network plug-in

Expose on-cloud pods with CLB

When pods run on ECS instances in a VPC, create a standard LoadBalancer Service. The control plane provisions a CLB instance and keeps its listeners and backend servers in sync with pod lifecycle events—no additional annotations are required for basic load balancing.
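For example, a minimal LoadBalancer Service of this kind might look as follows. The Service name, selector, and ports are illustrative assumptions, not values mandated by ACK:

```yaml
# Minimal sketch: expose on-cloud pods through an automatically
# provisioned CLB instance. Name, selector, and ports are examples.
apiVersion: v1
kind: Service
metadata:
  name: nginx-clb          # hypothetical Service name
spec:
  type: LoadBalancer       # triggers CLB provisioning by the control plane
  selector:
    app: nginx             # hypothetical pod label
  ports:
    - port: 80             # port exposed on the CLB listener
      targetPort: 80       # container port of the backend pods
      protocol: TCP
```

No annotations are needed here; the control plane keeps the CLB listeners and backend servers synchronized with the pods on its own.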

Expose edge pods via ECS forwarding

When pods run in an ENS node pool, traffic enters a CLB listener on an on-cloud ECS instance and is forwarded over the Express Connect circuit to the edge pod. Set externalTrafficPolicy: Cluster so that all cluster nodes are eligible as backend servers; traffic can then be forwarded even when the receiving node has no local pod.
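A Service for this scenario could be sketched as below; the key line is `externalTrafficPolicy: Cluster`, while the name, selector, and ports are illustrative assumptions:

```yaml
# Sketch: expose edge (ENS) pods via ECS forwarding over Express Connect.
# externalTrafficPolicy must be Cluster so that on-cloud nodes without a
# local pod can still forward requests to edge pods.
apiVersion: v1
kind: Service
metadata:
  name: edge-app-clb           # hypothetical Service name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster   # required for the ECS forwarding path
  selector:
    app: edge-app              # hypothetical pod label
  ports:
    - port: 80
      targetPort: 8080         # illustrative container port
      protocol: TCP
```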

Expose hybrid pods with NLB (recommended)

When pods span on-cloud and edge node pools, use NLB to reach pods directly by IP. Add the following annotations to the Service manifest to register pods as backend servers:

| Annotation | Purpose |
| --- | --- |
| `service.beta.kubernetes.io/backend-type: "eni"` | Add pods to the NLB instance as backend servers |
| `service.beta.kubernetes.io/alibaba-cloud-loadbalancer-server-group-type: "Ip"` | Set the backend server group type to IP |
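Putting the two annotations together, a Service for the hybrid scenario might be sketched like this; the Service name and selector are illustrative assumptions:

```yaml
# Sketch: NLB-backed Service that registers pods directly as
# IP-type backend servers, so traffic can reach both on-cloud
# and edge pods. Name and selector are examples.
apiVersion: v1
kind: Service
metadata:
  name: hybrid-app-nlb         # hypothetical Service name
  annotations:
    service.beta.kubernetes.io/backend-type: "eni"
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-server-group-type: "Ip"
spec:
  type: LoadBalancer
  selector:
    app: hybrid-app            # hypothetical pod label
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
```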

Choose an external traffic policy

Configure externalTrafficPolicy on any LoadBalancer or NodePort Service to control how external requests reach backend pods. The setting applies to both Terway Edge and Flannel Virtual Extensible Local Area Network (VXLAN) plug-ins.

| | Local | Cluster |
| --- | --- | --- |
| Backend servers | Only nodes running a backend pod are added to the SLB instance | All nodes in the cluster are added to the SLB instance |
| SLB resource quota | Low: fewer backend entries | High: every node is a backend entry. See Quotas |
| SLB IP access | Only nodes with a local backend pod | All nodes |
| Pod-level load balancing | Disabled by default. To enable, add `service.beta.kubernetes.io/alibaba-cloud-loadbalancer-scheduler: "wrr"` to set weighted round-robin (WRR) scheduling | Enabled by default |
| Source IP preservation | Supported | Not supported |
| Session persistence | Supported | Not supported |
| When to use | Apps that must log or act on client IP addresses | High-availability apps that do not need client IP preservation, such as large web application clusters |
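As an example of the Local policy with pod-level load balancing enabled, a Service could be sketched as follows; the name and selector are illustrative assumptions:

```yaml
# Sketch: Local traffic policy to preserve client source IPs,
# with WRR scheduling enabled via annotation for pod-level
# load balancing. Name and selector are examples.
apiVersion: v1
kind: Service
metadata:
  name: client-ip-app          # hypothetical Service name
  annotations:
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-scheduler: "wrr"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local     # preserves client source IPs
  selector:
    app: client-ip-app         # hypothetical pod label
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
```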

What's next