In ACK Edge clusters, you can connect computing resources across multiple edge regions to the cloud and expose them through a single LoadBalancer Service. This topic describes how to use Edge Load Balancer (ELB) instances to expose Services deployed in Edge Node Service (ENS) node pools across multiple regions.
How it works
Node pools in ACK Edge clusters are classified as on-cloud node pools or edge node pools. ELB instances handle load balancing for edge node pools, with each ELB instance scoped to a single region.
When you create a LoadBalancer Service, the edge-controller-manager (ECM) automatically creates a PoolService for each matched node pool. Each PoolService manages the lifecycle of the ELB instance in its region. As a result, a single LoadBalancer Service maps to endpoints in multiple data centers — one ELB instance per region.
The following diagram illustrates this architecture. ENS node pools are deployed in two regions (China (Hefei) and China (Chengdu)), and a LoadBalancer Service exposes the application across both regions through separate ELB instances.
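To make the Service-to-PoolService mapping concrete, the following is a rough sketch of a PoolService object as the ECM might create it for one node pool. The `apiVersion` and `spec` fields shown are assumptions based on the OpenYurt PoolService CRD; inspect a real object with `kubectl get ps -o yaml` to see the exact schema in your cluster.

```yaml
# Hypothetical PoolService created by the ECM for one node pool.
# apiVersion and spec fields are assumptions; verify against your cluster.
apiVersion: network.openyurt.io/v1alpha1
kind: PoolService
metadata:
  name: cube-svc-np-hefei        # <Service name>-<node pool name>
  namespace: default
spec:
  loadBalancerClass: alibabacloud.com/elb
  poolName: np-hefei             # the node pool this PoolService serves
  serviceName: cube-svc          # the LoadBalancer Service it belongs to
```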
Prerequisites
Before you begin, make sure you have:
- An ACK Edge cluster with ENS node pools in the target regions
- ECM version 2.1.0 or later
- The `kubectl` CLI configured to connect to your cluster
- (For self-managed ELB) Existing ELB instances in each target region
Usage notes
- The ECM configures ELB instances only for Services with `type: LoadBalancer`.
- ELB instances managed by the ECM are named in the `k8s/${Service_Name}/${Service_Namespace}/${NodePool_Id}/${Cluster_Id}` format. Avoid duplicate names to prevent accidental deletion.
- Elastic IP addresses (EIPs) managed by the ECM follow the same naming format. Avoid duplicate names to prevent accidental deletion.
- To share one ELB instance across multiple Services, use self-managed EIPs and ELB instances, and set `externalTrafficPolicy` to `Cluster`.
- For ENS instances without elastic network interfaces (ENIs), create edge networks and use ELB instances to expose them. To enable internet access, assign EIPs to the ENS instances or configure NAT.
- For ENS instances with ENIs, add routing rules to the host networks:

  ```shell
  # 10.0.0.3: internal network interface controller; 10.0.0.1: internal gateway address
  ip rule add from 10.0.0.3 lookup 4
  ip route add default via 10.0.0.1 table 4
  ```
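As an illustration of the managed-resource naming format described above, the following shell sketch composes the name from placeholder values (all IDs are hypothetical; real values come from your Service, node pool, and cluster):

```shell
# Placeholder values; substitute the real identifiers from your cluster.
Service_Name="cube-svc"
Service_Namespace="default"
NodePool_Id="np-xxx"
Cluster_Id="c-xxx"

# The name the ECM would assign to the ELB instance (and its EIP).
elb_name="k8s/${Service_Name}/${Service_Namespace}/${NodePool_Id}/${Cluster_Id}"
echo "${elb_name}"   # prints: k8s/cube-svc/default/np-xxx/c-xxx
```

Because the ECM matches resources by this name, creating another ELB instance or EIP with the same name by hand risks it being deleted when the Service or node pool is removed.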
Choose an ELB management approach
Two approaches are available depending on whether you want the ECM to fully manage the ELB lifecycle or retain control of existing ELB instances.
| Criteria | Auto-managed ELB | Self-managed ELB |
|---|---|---|
| EIP management | Automatically created and deleted per region | Manual; EIPs are not auto-created or deleted |
| ELB instance control | ECM creates and names instances automatically | You specify an existing ELB instance ID per node pool |
| Multiple Services sharing one ELB | Not supported | Supported (requires externalTrafficPolicy: Cluster) |
| Cleanup on Service or node pool deletion | ELB instances and EIPs are automatically deleted | ELB instances and EIPs are retained; manual cleanup required |
| Use when | You want zero-configuration load balancing | You need precise control over ELB instances or share one ELB across multiple Services |
Deleting an auto-managed ELB Service or its associated node pools also deletes the corresponding ELB instances and EIPs. Updating the node pool selector or node pool labels may also trigger ELB instance deletion if node pools no longer match the selector.
Step 1: Deploy an application
Deploy a DaemonSet named cube on all nodes in the ENS node pools.
- Create a file named `cube.yaml` with the following content:

  ```yaml
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: cube
    labels:
      app: cube
  spec:
    selector:
      matchLabels:
        app: cube
    template:
      metadata:
        labels:
          app: cube
      spec:
        containers:
        - name: cube
          image: registry.cn-hangzhou.aliyuncs.com/acr-toolkit/ack-cube:1.0
          ports:
          - containerPort: 80
  ```

- Apply the manifest:

  ```shell
  kubectl apply -f cube.yaml
  ```

- Verify that the DaemonSet is running:

  ```shell
  kubectl get ds cube
  ```

  Expected output:

  ```
  NAME   DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
  cube   4         4         4       4            4           <none>          3d1h
  ```
Step 2: Annotate the node pools
Add network annotations and a Service label to each ENS node pool. Repeat for each region — in this example, the China (Hefei) and China (Chengdu) node pools.
- Get the node pool names:

  ```shell
  kubectl get nodepool
  ```

- Add the network ID annotation:

  ```shell
  kubectl annotate nodepool np-xxx alibabacloud.com/ens-network-id=n-xxx
  ```

- Add the region ID annotation:

  ```shell
  kubectl annotate nodepool np-xxx alibabacloud.com/ens-region-id=cn-xxx-xxx
  ```

- Add the vSwitch ID annotation:

  ```shell
  kubectl annotate nodepool np-xxx alibabacloud.com/ens-vswitch-id=vsw-xxx,vsw-xxx
  ```

- Add the Service label so the LoadBalancer Service can select this node pool:

  ```shell
  kubectl label nodepool np-xxx k8s-svc=cube
  ```
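After these commands, the node pool object carries metadata roughly like the following. The IDs are placeholders, and the `apiVersion` shown is an assumption based on the OpenYurt NodePool CRD; confirm the real layout with `kubectl get nodepool np-xxx -o yaml`.

```yaml
apiVersion: apps.openyurt.io/v1beta1   # assumed; verify against your cluster
kind: NodePool
metadata:
  name: np-xxx
  labels:
    k8s-svc: cube                                    # selected by the Service
  annotations:
    alibabacloud.com/ens-network-id: n-xxx           # edge network ID
    alibabacloud.com/ens-region-id: cn-xxx-xxx       # ENS region ID
    alibabacloud.com/ens-vswitch-id: vsw-xxx,vsw-xxx # one or more vSwitch IDs
```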
Step 3: Expose the application with a multi-region ELB Service
Two options are available. If you don't have existing ELB instances, use Option A. If you have existing ELB instances and need to retain control over them, use Option B.
Option A: Auto-managed ELB
The ECM automatically creates and manages ELB instances and EIPs in each matched region.
- Create a file named `cube-svc.yaml` with the following content:

  ```yaml
  apiVersion: v1
  kind: Service
  metadata:
    name: cube-svc
    labels:
      app: cube
    annotations:
      openyurt.io/topologyKeys: openyurt.io/nodepool             # Enable Service topology
      service.openyurt.io/nodepool-labelselector: k8s-svc=cube   # Select ENS node pools
  spec:
    selector:
      app: cube
    type: LoadBalancer
    loadBalancerClass: alibabacloud.com/elb
    externalTrafficPolicy: Local
    ports:
    - name: cube
      port: 80
      protocol: TCP
      targetPort: 80
  ```

- Apply the manifest:

  ```shell
  kubectl apply -f cube-svc.yaml
  ```

- Verify that the Service is created and has external IPs assigned:

  ```shell
  kubectl get svc cube-svc
  ```

  Expected output:

  ```
  NAME       TYPE           CLUSTER-IP        EXTERNAL-IP                  PORT(S)        AGE
  cube-svc   LoadBalancer   192.168.xxx.xxx   39.106.XX.XX,144.121.XX.XX   80:30081/TCP   5m
  ```

  The `EXTERNAL-IP` field lists one IP per region, separated by commas.

- Verify access to the application:

  ```shell
  curl http://<EXTERNAL-IP>:80
  ```

  Replace `<EXTERNAL-IP>` with one of the IP addresses from the previous step.
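Because `EXTERNAL-IP` is a comma-separated list, you can iterate over every regional endpoint instead of testing one by hand. A minimal sketch with hard-coded placeholder IPs (in practice, read the value from the Service status):

```shell
# Placeholder value; on a live cluster you could fetch it with, e.g.:
#   kubectl get svc cube-svc -o jsonpath='{.status.loadBalancer.ingress[*].ip}'
external_ips="39.106.0.10,144.121.0.20"

# Split the comma-separated list and probe each regional endpoint.
for ip in $(echo "$external_ips" | tr ',' ' '); do
  echo "checking http://${ip}:80"
  # curl -sS --max-time 5 "http://${ip}:80"   # uncomment against a live cluster
done
```

Checking every IP confirms that each region's ELB instance is serving traffic, not just the first one in the list.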
Option B: Self-managed ELB
Specify your own ELB instances per node pool. The ECM creates PoolServices automatically but does not manage ELB instance lifecycle.
- Create a file named `cube-svc.yaml` with the following content:

  ```yaml
  apiVersion: v1
  kind: Service
  metadata:
    name: cube-svc
    labels:
      app: cube
    annotations:
      openyurt.io/topologyKeys: openyurt.io/nodepool             # Enable Service topology
      service.openyurt.io/nodepool-labelselector: k8s-svc=cube   # Select ENS node pools
      service.beta.kubernetes.io/alibaba-cloud-loadbalancer-managed-by-user: "true"   # Use self-managed ELB
  spec:
    selector:
      app: cube
    type: LoadBalancer
    loadBalancerClass: alibabacloud.com/elb
    externalTrafficPolicy: Local
    ports:
    - name: cube
      port: 80
      protocol: TCP
      targetPort: 80
  ```

- Apply the manifest:

  ```shell
  kubectl apply -f cube-svc.yaml
  ```

- Verify that PoolServices are created for each node pool:

  ```shell
  kubectl get ps
  ```

  Expected output:

  ```
  NAME                  AGE
  cube-svc-np-hefei     32s
  cube-svc-np-chengdu   32s
  ```

  Each PoolService corresponds to one node pool. At this point, no ELB instance is attached yet.

- Bind an existing ELB instance to each PoolService:

  - China (Hefei) node pool:

    ```shell
    kubectl annotate ps cube-svc-np-hefei service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id=lb-xxx
    ```

  - China (Chengdu) node pool:

    ```shell
    kubectl annotate ps cube-svc-np-chengdu service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id=lb-xxx
    ```

- Verify that the Service has external IPs assigned:

  ```shell
  kubectl get svc cube-svc
  ```

  Expected output:

  ```
  NAME       TYPE           CLUSTER-IP        EXTERNAL-IP                  PORT(S)        AGE
  cube-svc   LoadBalancer   192.168.xxx.xxx   39.106.XX.XX,144.121.XX.XX   80:30081/TCP   5m
  ```

- Verify access to the application:

  ```shell
  curl http://<EXTERNAL-IP>:80
  ```

  Replace `<EXTERNAL-IP>` with one of the IP addresses from the previous step.
ELB update policy
The following table describes how ELB resources are managed based on whether you use auto-managed or self-managed ELB instances.
| Resource | Self-managed ELB | Auto-managed ELB |
|---|---|---|
| ELB attributes | Create: specify the node pool selector with `service.openyurt.io/nodepool-labelselector`, mark the Service as self-managed with `service.beta.kubernetes.io/alibaba-cloud-loadbalancer-managed-by-user`, and bind an ELB instance ID to each PoolService with `service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id`. Update: attributes cannot be updated. Delete: ELB instances are not automatically released. | Create: specify the node pool selector using `service.openyurt.io/nodepool-labelselector`. Update: attributes cannot be updated. Delete: ELB instances are automatically deleted. |
| Backend server groups | Create: updated based on Service and pod status. Update: backend servers are dynamically added or removed (local mode). Delete: backend server groups are not automatically deleted; manual deletion required. | Create: updated based on Service and pod status. Update: backend servers are dynamically added or removed (local mode). Delete: all backend server groups are automatically deleted. |
| Listeners | Create: automatically added based on `spec.ports`. Update: automatically added, updated, and deleted based on port changes. Delete: listeners are not automatically deleted; manual deletion required. | Create: automatically added based on `spec.ports`. Update: automatically added, updated, and deleted based on port changes. Delete: all listeners are automatically deleted. |
| EIP attributes | Create: EIPs are not automatically created; manual management required. Update: EIP attributes cannot be updated. Delete: EIPs are not automatically deleted. | Create: EIPs are automatically created in each region. Update: EIP bandwidth can be increased or decreased. Delete: EIPs are automatically deleted. |
What's next
- To configure advanced load balancing options using annotations, see Use annotations to configure ELB instances.
- For more information about the ELB product, see What is ELB?