
Container Service for Kubernetes:Edge cluster Ingress overview

Last Updated: Mar 26, 2026

ACK Edge clusters support two ways to deploy Ingress controllers, each suited to a different network topology. Choose the deployment method that matches your cloud-edge connectivity and traffic routing requirements.

Key concepts

Ingress is a Kubernetes API object that manages Layer 7 HTTP and HTTPS routing from outside the cluster to Services running inside it. By defining forwarding rules on an Ingress resource, you can direct external requests to the correct backend pods. For background on Ingress principles, see Ingress management.

Ingress resources handle only HTTP and HTTPS routing rules. Advanced features such as load balancing algorithms and session affinity must be configured on the Ingress controller, not on the Ingress resource.
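To make this split concrete, here is a minimal sketch of an Ingress resource. The host, path, and Service name are placeholders, and the annotation shown assumes the NGINX Ingress controller: routing rules live in `spec.rules`, while a controller-specific feature such as session affinity is configured through an annotation rather than the rules themselves.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    # Session affinity is a controller feature, not part of the Ingress
    # routing rules. This annotation is specific to the NGINX Ingress
    # controller.
    nginx.ingress.kubernetes.io/affinity: "cookie"
spec:
  ingressClassName: nginx
  rules:
    - host: demo.example.com        # placeholder hostname
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service   # placeholder backend Service
                port:
                  number: 80
```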

Ingress controller is the component that watches Ingress resources and forwards incoming HTTP/HTTPS requests to the appropriate backend pods.

ACK Edge cluster structure

An ACK Edge cluster extends an ACK Pro cluster so that edge nodes in your data center can join the same cluster as cloud resources. Each cluster consists of two parts:

  • Cloud node pool: Contains resources such as Elastic Compute Service (ECS) instances within the cluster VPC.

  • Edge node pool: One or more pools of edge nodes that connect to the data center.

For more information, see Node pools.

Choose a deployment method

Deployment method     Supported Ingress types         Network requirement            Service topology
Node pool deployment  NGINX Ingress only              Leased line or public network  Optional (leased line); mandatory (public network)
Cloud deployment      NGINX Ingress and ALB Ingress   Leased line required           Not used

Node pool deployment: Use this method when your edge nodes connect over a public network, or when you need each node pool to handle its own traffic locally without routing through the cloud.

Cloud deployment: Use this method when your cloud node pool and edge node pools are connected through a leased line, and you do not need traffic to stay local within each edge node pool.

Node pool deployment


In node pool deployment, an Ingress controller runs in each node pool, including the cloud node pool and every edge node pool.

  • The Ingress controller in the cloud node pool uses a LoadBalancer Service. The endpoint is the IP address of a Classic Load Balancer (CLB) instance.

  • The Ingress controller in each edge node pool uses a NodePort Service. The endpoint is the IP address of any node within that node pool.

  • Service topology: Configure a Service topology to ensure requests are forwarded to backend pods within the same node pool. This is optional when edge nodes connect over a leased line, and mandatory when they connect over the public network. See Configure a Service topology.
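As a sketch of the last point, Service topology in an ACK Edge cluster can be expressed with an annotation on the Ingress controller's Service. The Service name and selector below are placeholders, and the `openyurt.io/topologyKeys` annotation key is assumed from the OpenYurt service topology feature that ACK Edge is built on:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb       # placeholder name for the edge controller's Service
  annotations:
    # Keep traffic within the same edge node pool so requests are only
    # forwarded to backends in that pool (annotation key assumed from
    # OpenYurt service topology).
    openyurt.io/topologyKeys: openyurt.io/nodepool
spec:
  type: NodePort               # edge node pools expose the controller via NodePort
  selector:
    app: ingress-nginx         # placeholder selector
  ports:
    - name: http
      port: 80
      targetPort: 80
```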

For installation instructions, see Install the NGINX Ingress controller.

Cloud deployment


In cloud deployment, the Ingress controller runs only in the cloud node pool.

Important

The cloud node pool and edge node pools must be connected through a leased line for intranet communication, including both host-level and container network interconnection.

  • The Ingress controller provides Internet-facing access through a LoadBalancer Service, using a Classic Load Balancer (CLB) instance IP as the endpoint.

  • External traffic enters through the Ingress controller in the cloud node pool and is routed to backend pods with load balancing applied. Service topology is not used in this method.
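The Internet-facing entry point described above can be sketched as a LoadBalancer Service in the cloud node pool. The Service name and selector are placeholders, and the address-type annotation is assumed from the Alibaba Cloud cloud controller manager's Service annotations:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb       # placeholder name for the cloud controller's Service
  annotations:
    # Expose the CLB instance on the Internet (annotation key assumed
    # from the Alibaba Cloud cloud controller manager).
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: internet
spec:
  type: LoadBalancer           # provisions a CLB instance; its IP is the endpoint
  selector:
    app: ingress-nginx         # placeholder selector
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```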

For installation instructions, see the following topics:

Next steps