
Container Service for Kubernetes: ALB Ingress FAQ

Last Updated: Mar 26, 2026

This page answers common questions about Application Load Balancer (ALB) Ingresses in Container Service for Kubernetes (ACK).

Why do ALB Ingress rules fail to take effect?

ALB Ingresses maintain routing rules in inline mode. When multiple Ingresses share the same ALB instance, a configuration error in any one of them prevents all other Ingresses on that instance from taking effect.

Warning

A single misconfigured Ingress blocks all other Ingresses on the same ALB instance. When rules stop working, check all Ingresses on the instance — not just the one you most recently created.

To fix this, find the Ingress with the error — typically one created before the affected Ingresses — and correct its configuration. The other Ingresses will resume working once the error is resolved.

What is the difference between ALB Ingresses and NGINX Ingresses?

ALB Ingresses are built on ALB, a fully managed cloud service that requires no manual maintenance. NGINX Ingresses require you to manage the controller yourself.

For a detailed comparison, see Comparison among NGINX Ingresses, ALB Ingresses, and MSE Ingresses.

What is the kube-system-fake-svc-80 server group?

ALB requires a default forwarding rule before a listener can be created, and each forwarding rule must be associated with exactly one server group. The kube-system-fake-svc-80 server group fulfills this requirement — it is a placeholder used by the default forwarding rule. It does not process any requests and cannot be deleted.

Can I enable both internal and external access for an ALB Ingress?

Yes. Create an Internet-facing ALB instance. The instance automatically provisions an elastic IP address (EIP) in each zone for internet traffic, and is also assigned a private virtual IP address (VIP) for internal network access — so both access modes work simultaneously.

If you only need internal access, create an internal-facing ALB instance instead.
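The access mode is set on the AlbConfig object that creates the instance. A minimal sketch, assuming the AlbConfig schema used by the ALB Ingress controller (the object name and vSwitch IDs are placeholders; check the AlbConfig reference for your controller version):

```yaml
apiVersion: alibabacloud.com/v1
kind: AlbConfig
metadata:
  name: alb-demo                # placeholder name
spec:
  config:
    name: alb-demo
    addressType: Internet       # set to "Intranet" for internal-only access
    zoneMappings:               # vSwitch IDs below are placeholders
      - vSwitchId: vsw-example-zone-a
      - vSwitchId: vsw-example-zone-b
```

With addressType set to Internet, the instance still receives a private VIP, so internal clients can reach it as well.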

Why can't I see the ALB Ingress controller pod in my cluster?

The ALB Ingress controller pod is visible in the kube-system namespace only in ACK dedicated clusters. In ACK Basic, ACK Pro, and ACK Serverless clusters, the ALB Ingress controller runs as a fully managed component and is not exposed as a pod in your cluster.

Cluster type      ALB Ingress controller pod visible?
ACK dedicated     Yes (in the kube-system namespace)
ACK Basic         No (fully managed component)
ACK Pro           No (fully managed component)
ACK Serverless    No (fully managed component)

If you want to migrate from an ACK dedicated cluster, see Hot migration from ACK dedicated clusters to ACK Pro clusters.

How do I keep the ALB domain name from changing?

The domain name stays stable as long as you don't modify the IngressClass or the AlbConfig object it references. When you use an AlbConfig object to create an ALB instance, the Ingress references that instance through an IngressClass — the domain name is tied to the ALB instance, not to the Ingress itself.

Is an ALB instance automatically created when I enable ALB Ingress during cluster creation?

No. Selecting ALB Ingress during ACK managed cluster creation installs the ALB Ingress controller only. The ALB instance itself must be created separately.

Why are my ALB console changes being overwritten?

ALB Ingress configuration is managed through the ALB Ingress or AlbConfig object on the cluster's API server. Changes made directly in the ALB console are not synced back to the API server, so any internal call or cluster operation will overwrite the console changes with the configuration stored in the cluster.

Important

Always modify the ALB Ingress or AlbConfig object in the cluster — not the configuration in the ALB console.

What do I do if I get HTTP 503 after deleting a forwarding rule I just created?

Check whether the ALB Ingress for the forwarding rule carries the canary annotation (alb.ingress.kubernetes.io/canary: "true"). In a canary release, traffic is redirected from the old Service version to the canary version, so add the annotation only to the canary Ingress, not to the Ingress of the old Service.

Canary releases support only two Ingresses and a limited number of forwarding conditions. For more flexible traffic routing, use custom forwarding rules instead. See Use ALB Ingresses to perform canary releases and Configure custom forwarding rules for ALB Ingresses.
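A sketch of a canary Ingress carrying the canary annotation, assuming the alb.ingress.kubernetes.io annotation prefix used by the ALB Ingress controller (the host, Service name, and weight are examples):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-canary
  annotations:
    alb.ingress.kubernetes.io/canary: "true"          # only on the canary Ingress
    alb.ingress.kubernetes.io/canary-weight: "20"     # example: 20% of traffic
spec:
  ingressClassName: alb
  rules:
    - host: demo.example.com          # example host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-service-canary   # example canary Service
                port:
                  number: 80
```

The Ingress for the stable Service version stays unchanged and must not carry the canary annotation.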

Why don't my ALB Ingress changes take effect even though there are no errors?

The IngressClass may be pointing to the wrong AlbConfig object. Check that the parameters field in your IngressClass is correctly configured. See Use an IngressClass to associate an AlbConfig object with an Ingress.
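A correctly associated IngressClass looks like the following sketch (the IngressClass and AlbConfig names are examples); the parameters field must name the AlbConfig object the Ingress is meant to use:

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb
spec:
  controller: ingress.k8s.alibabacloud/alb
  parameters:
    apiGroup: alibabacloud.com
    kind: AlbConfig
    name: alb-demo        # must match the intended AlbConfig object
```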

Why were some listeners deleted after I ran kubectl apply to update the AlbConfig?

kubectl apply overwrites the entire AlbConfig resource. If your YAML file omits any listeners that were previously configured, those listeners are deleted.

Warning

Before running kubectl apply on an AlbConfig, run kubectl diff to preview the changes and confirm that all existing listeners are included in your YAML file.

Use kubectl edit when possible — it opens the full current resource for editing and avoids accidental omissions:

kubectl -n <namespace> edit AlbConfig <albconfig-name>

If listeners were already deleted, add them back using the same kubectl edit command above.

How do I reduce reconciliation time during pod scaling?

Reconciliation time increases with the number of Ingresses associated with a Service. Use either of these approaches:

  • Limit Ingresses per Service: Keep no more than 30 Ingresses associated with any single Service.

  • Consolidate Ingress rules: Associate multiple Services with the same Ingress and define separate routing rules within it, rather than creating one Ingress per Service.
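The consolidation approach can be sketched as one Ingress routing two paths to two Services (the host and Service names are examples):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: consolidated-ingress
spec:
  ingressClassName: alb
  rules:
    - host: demo.example.com        # example host
      http:
        paths:
          - path: /api              # routes to one Service
            pathType: Prefix
            backend:
              service:
                name: api-service   # example Service
                port:
                  number: 80
          - path: /web              # routes to another Service
            pathType: Prefix
            backend:
              service:
                name: web-service   # example Service
                port:
                  number: 80
```

One Ingress with several rules keeps the number of Ingresses per Service low, which shortens reconciliation during pod scaling.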