Application Load Balancer (ALB) Ingresses serve as cloud-native gateways and can deeply integrate with other cloud-native services. This topic describes the concept, benefits, and scenarios of ALB Ingresses and how ALB Ingresses work.
An ALB Ingress can route Layer 7 traffic across Services in Container Service for Kubernetes (ACK), Serverless Kubernetes (ASK), or self-managed Kubernetes clusters. The ALB Ingress controller is deployed in Kubernetes clusters to retrieve changes to Ingresses from the API server and dynamically generate AlbConfig objects when Ingress changes are detected. For more information about the ALB Ingress controller, see ALB Ingress Controller.
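For example, a minimal Ingress that the controller would pick up and translate into ALB forwarding rules might look like the following sketch; the host, Service name, and port are illustrative placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  ingressClassName: alb    # handled by the ALB Ingress controller
  rules:
  - host: demo.example.com # placeholder host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-service # placeholder backend Service
            port:
              number: 80
```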
Application Load Balancer (ALB) Ingresses are compatible with NGINX Ingresses and provide improved traffic management based on ALB instances. ALB Ingresses support complex routing, automatic certificate discovery, and the HTTP, HTTPS, and Quick UDP Internet Connection (QUIC) protocols. These features fully meet the requirements of cloud-native applications for ultra-high elasticity and balancing of heavy traffic loads at Layer 7.
Work with ALB Ingresses
ALB Ingresses are deeply integrated with cloud-native services, provide multiple features, and are easy to use. The following workflow shows how to use an ALB Ingress in an ACK or an ASK cluster.
1. Install the ALB Ingress controller in the console. ASK clusters provide the managed ALB Ingress controller, which implements Layer 7 forwarding rules based on ALB. You can install the controller when you create an ASK cluster, or install it later on the Components page. For more information, see Manage the ALB Ingress controller.
2. Grant permissions to the ALB Ingress controller. If you want to access Services by using an ALB Ingress in an ACK dedicated cluster, you must grant the required permissions to the ALB Ingress controller before you deploy the Services. For more information, see Grant permissions to the ALB Ingress controller in an ACK dedicated cluster.
3. Create an AlbConfig object and an IngressClass. After the permissions are granted, create an AlbConfig object and an IngressClass, associate the IngressClass with the AlbConfig object, and then add annotations to the ALB Ingress to configure forwarding rules. This way, clients can use the ALB Ingress to access resources in Kubernetes. For more information, see the related topics.
4. Create resources such as Ingresses, Services, and Deployments in Kubernetes. After you complete the preceding steps, you can create resources such as Ingresses, Services, and Deployments in Kubernetes to allow access from clients. For more information, see Access Services by using an ALB Ingress.
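The AlbConfig-to-IngressClass association described in the workflow can be sketched as follows. The resource names, vSwitch IDs, and listener settings below are illustrative placeholders, and the field layout follows the AlbConfig examples in the ALB Ingress documentation; verify it against your controller version:

```yaml
# AlbConfig: describes the ALB instance and its listeners.
apiVersion: alibabacloud.com/v1
kind: AlbConfig
metadata:
  name: alb-demo          # placeholder object name
spec:
  config:
    name: alb-demo        # name of the ALB instance to manage
    addressType: Internet # Internet-facing ALB instance
    zoneMappings:         # vSwitches in different zones (placeholder IDs)
    - vSwitchId: vsw-aaaaaaaa
    - vSwitchId: vsw-bbbbbbbb
  listeners:
  - port: 80
    protocol: HTTP
---
# IngressClass: binds Ingresses that set ingressClassName "alb"
# to the AlbConfig above.
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb
spec:
  controller: ingress.k8s.alibabacloud/alb
  parameters:
    apiGroup: alibabacloud.com
    kind: AlbConfig
    name: alb-demo
```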
How ALB Ingresses work
- AlbConfig: An AlbConfig object configures the ALB instance and its listeners. Each AlbConfig object corresponds to one ALB instance.
- Annotations: You can add annotations to ALB Ingresses to configure forwarding rules. HTTP and HTTPS requests are then forwarded to Services based on these rules.
- Service: A Service is an abstraction of a backend application that runs on a set of replicated pods.
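The components above fit together roughly as follows: a Deployment runs the replicated pods, a Service abstracts them, and annotations on the Ingress configure forwarding behavior. The canary annotations shown here are an assumption based on commonly documented ALB Ingress annotations; check the annotation reference for your controller version, and treat all names as placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: nginx:1.25   # placeholder application image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo-service       # abstraction over the pods above
spec:
  selector:
    app: demo-app
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-canary
  annotations:
    # Assumed annotations: route 10% of matching traffic to this backend.
    alb.ingress.kubernetes.io/canary: "true"
    alb.ingress.kubernetes.io/canary-weight: "10"
spec:
  ingressClassName: alb
  rules:
  - host: demo.example.com # placeholder host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-service
            port:
              number: 80
```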
ALB Ingresses are fully managed and provide ultra-high processing capabilities, whereas NGINX Ingresses require manual maintenance and are better suited to scenarios that require highly customizable gateways. The two also differ in service scope, architecture, and processing and security capabilities. For more information about the differences between NGINX Ingresses and ALB Ingresses, see Comparison of NGINX Ingresses and ALB Ingresses.
ALB Ingresses outperform NGINX Ingresses in the following scenarios:
- Persistent connections
Persistent connections are ideal for scenarios in which frequent interaction is required, such as Internet of Things (IoT), Internet finance, and online gaming. When configuration changes are made, NGINX Ingresses must reload processes and temporarily close the persistent connections. This may cause service interruptions. ALB Ingresses are free of this issue.
- High QPS
Internet services are expected to withstand high queries per second (QPS) in most scenarios, such as promotional activities and breaking events. ALB supports automatic scaling and adds virtual IP addresses as QPS increases. ALB Ingresses respond faster than NGINX Ingresses because each ALB instance supports up to one million QPS. In addition, NGINX Ingresses do not use physical processing units (PPUs) when they process non-persistent connections. As a result, NGINX Ingresses support lower QPS per server than ALB Ingresses and require more servers in high QPS scenarios.
- High concurrency
IoT services are expected to maintain a large number of concurrent connections initiated from terminal devices. ALB Ingresses integrate with Cloud Network Management to converge sessions, and each ALB instance supports up to tens of millions of connections. NGINX Ingresses require manual maintenance and support fewer sessions per server, so a large number of NGINX servers is required even if network-enhanced virtual machines (VMs) are used.
- Large workload fluctuations
ALB provides pricing models that are ideal for services with large workload fluctuations, such as e-commerce and gaming services. ALB supports the pay-as-you-go billing method, so fewer Load Balancer Capacity Units (LCUs) are consumed during off-peak hours. In addition, ALB supports automatic scaling, so you do not need to monitor traffic flows in real time. By contrast, NGINX does not automatically release idle resources during off-peak hours, and you must manually adjust the quantity and specifications of machines to cope with workload fluctuations. NGINX also requires buffer capacity for disaster recovery, which increases costs.
Benefits of ALB Ingresses
- Supports load balancing and scheduling at multiple levels and can process up to one million QPS.
- Integrates software and hardware to provide ultra-high routing capabilities.
- Supports auto scaling to simplify O&M and guarantees 99.995% service uptime.
- Provides a customizable system to route complex workloads.