This page answers common questions about Application Load Balancer (ALB) Ingresses in Container Service for Kubernetes (ACK) and Container Compute Service (ACS).
Quick reference: why isn't my configuration taking effect?
Several distinct root causes produce a similar symptom — an ALB Ingress that appears healthy but whose changes don't apply. Use this table to locate your situation first.
| Symptom | Most likely cause | Fix |
|---|---|---|
| Rules on a newly created Ingress don't apply | Another Ingress sharing the same ALB instance has a configuration error | Find and fix the errored Ingress |
| Changes not applied, no errors in the Ingress | IngressClass points to the wrong AlbConfig | Verify the parameters field in IngressClass |
| Changes made in the ALB console are overwritten | Console changes aren't persisted to the API server | Edit the AlbConfig or ALB Ingress directly via kubectl |
| Some listeners disappeared after kubectl apply | kubectl apply overwrote the AlbConfig with an incomplete YAML | Restore missing listeners with kubectl edit |
Why do ALB Ingress rules fail to take effect?
ALB Ingresses maintain routing rules in inline mode: when multiple Ingresses share the same ALB instance, they all write to a single rule set. If any one Ingress contains a configuration error, that error blocks the entire rule set, and every other Ingress on the instance stops taking effect.
If your newly created Ingress isn't working, check every Ingress associated with the same ALB instance, not just the one you most recently created. Fix the misconfigured Ingress first; your new Ingress will then take effect.
To locate the error, run:
```shell
kubectl describe ingress <ingress-name> -n <namespace>
```
Look for events with Error or Warning status. To check whether the ALB Ingress controller is reporting errors, run:
```shell
kubectl logs -n kube-system -l app=alb-ingress-controller --tail=50
```
What are the differences between ALB Ingresses and NGINX Ingresses?
ALB Ingress is a fully managed cloud service with no infrastructure to maintain. NGINX Ingress requires you to manage the controller deployment and its underlying resources yourself.
ALB Ingresses listen for requests sent to the kube-system-fake-svc-80 server group by default. What is the purpose of this server group?
Every listener requires a default forwarding rule, and each forwarding rule must be associated with exactly one server group. The kube-system-fake-svc-80 server group is a placeholder used by that default forwarding rule. It doesn't process any requests and cannot be deleted.
Can I enable internal access and external access at the same time?
Yes. Create an Internet-facing ALB instance. The instance automatically creates an elastic IP address (EIP) in each zone for Internet access, and is also assigned a private virtual IP address (VIP) for internal network access.
For internal-only access, create an internal-facing ALB instance instead.
Why can't I see the ALB Ingress controller pod in my cluster?
The ALB Ingress controller pod is visible in the kube-system namespace only in ACK dedicated clusters. In ACK Basic, ACK Pro, and Alibaba Cloud Container Compute Service (ACS) clusters, the ALB Ingress controller runs as a fully managed component and is not visible in the cluster.
How do I keep the ALB domain name stable?
The ALB domain name is tied to the ALB instance, which the ALB Ingress references through an IngressClass -> AlbConfig chain. As long as you don't modify the IngressClass or the AlbConfig, the domain name remains unchanged after the instance is created.
Is an ALB instance automatically created when I create an ACK managed cluster with ALB Ingress enabled?
No. Selecting ALB Ingress during cluster creation installs the ALB Ingress controller automatically, but does not create an ALB instance. Create the ALB instance separately after the cluster is ready.
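In practice, you create the instance by applying an AlbConfig. The sketch below is illustrative: the resource name, instance name, and vSwitch IDs are placeholders you must replace with your own values.

```yaml
apiVersion: alibabacloud.com/v1
kind: AlbConfig
metadata:
  name: default            # illustrative name
spec:
  config:
    name: alb-demo         # display name of the new ALB instance
    addressType: Internet  # or Intranet for internal-only access
    zoneMappings:          # vSwitches in at least two zones
      - vSwitchId: vsw-example-a
      - vSwitchId: vsw-example-b
```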
Why do my ALB console changes get overwritten?
The ALB Ingress controller treats the ALB Ingress and AlbConfig resources on the cluster's API server as the source of truth. Changes made directly in the ALB console are not persisted to the API server, so the next reconciliation (triggered by any internal call or cluster operation) overwrites them with the configuration stored in the API server.
To make persistent changes, edit the AlbConfig or ALB Ingress resource directly:
```shell
kubectl -n <namespace> edit AlbConfig <albconfig-name>
```
What do I do when HTTP 503 is returned right after I delete a forwarding rule?
Check whether any of the Ingresses involved in the forwarding rule have the canary:true annotation. The canary:true annotation belongs only on the canary-version Ingress, not on the original Service's Ingress. Placing it on the wrong Ingress causes the controller to remove the forwarding rule for the original Service, which produces 503 errors.
Also note that canary releases support only two Ingresses and a limited number of forwarding conditions. For more flexible traffic routing, use custom forwarding rules instead. For the canary release setup, see Use ALB Ingresses to perform canary releases.
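As a sketch, the canary annotations belong only on the canary-version Ingress. The annotation keys below follow the alb.ingress.kubernetes.io prefix used by the ALB Ingress controller; verify them against your controller version, and treat all names as placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-canary                                # canary-version Ingress only
  annotations:
    alb.ingress.kubernetes.io/canary: "true"       # never on the original Ingress
    alb.ingress.kubernetes.io/canary-weight: "20"  # send 20% of traffic to the canary
spec:
  ingressClassName: alb
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-svc-canary
                port:
                  number: 80
```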
What do I do when the ALB Ingress shows no errors but changes don't take effect?
If reconciliation events for an AlbConfig aren't processed, the most common cause is that the IngressClass references the wrong AlbConfig. Check the parameters field in the IngressClass:
```shell
kubectl get ingressclass <ingressclass-name> -o yaml
```
Verify that the parameters field points to the correct AlbConfig. For the correct configuration, see Use an AlbConfig to configure an ALB instance.
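For reference, a sketch of an IngressClass whose parameters field references an AlbConfig (the resource names are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb
spec:
  controller: ingress.k8s.alibabacloud/alb
  parameters:
    apiGroup: alibabacloud.com
    kind: AlbConfig
    name: default   # must match the AlbConfig you intend to manage
```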
Why do some listeners get deleted after I run kubectl apply to update my AlbConfig?
kubectl apply updates an AlbConfig by replacing its entire spec. If the YAML you apply doesn't include all existing listener configurations, those listeners are deleted.
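For example, if the live AlbConfig has both HTTP and HTTPS listeners, the YAML you apply must list both. A sketch with illustrative values:

```yaml
apiVersion: alibabacloud.com/v1
kind: AlbConfig
metadata:
  name: default
spec:
  config:
    name: alb-demo
    addressType: Internet
  listeners:           # kubectl apply replaces this entire list
    - port: 80
      protocol: HTTP
    - port: 443
      protocol: HTTPS  # omitting this entry would delete the HTTPS listener
```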
Always run kubectl diff before kubectl apply to preview exactly what will change. Use kubectl edit for targeted changes that don't risk omitting existing listeners.
```shell
kubectl diff -f <your-albconfig.yaml>
```
To restore deleted listeners, add the missing listener configurations back using kubectl edit:
```shell
kubectl -n <namespace> edit AlbConfig <albconfig-name>
```
How do I reduce server reconciliation time during pod scaling?
Reconciliation time increases with the number of Ingresses associated with each Service. Two approaches help:
- Limit Ingresses per Service: keep the number of Ingresses associated with a single Service to 30 or fewer.
- Consolidate Ingress rules: associate multiple Services with the same Ingress and define routing rules within that Ingress, rather than creating a separate Ingress per Service.
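As a sketch of consolidation, one Ingress can route different paths to different Services (all names below are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: consolidated
spec:
  ingressClassName: alb
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /api          # one Ingress, first Service
            pathType: Prefix
            backend:
              service:
                name: api-svc
                port:
                  number: 80
          - path: /web          # same Ingress, second Service
            pathType: Prefix
            backend:
              service:
                name: web-svc
                port:
                  number: 80
```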
How do I enable automatic node weight assignment with the Flannel plugin and Local mode?
This feature requires ALB Ingress controller version 2.13.1-aliyun.1 or later. Update the controller before enabling this feature.
When the Flannel network plugin is installed and the Local mode is enabled for a Service, the ALB Ingress controller automatically calculates node weights based on pod distribution:
| Pods per Service | Weight calculation |
|---|---|
| 100 pods or fewer | Weight = number of pods on each node. For example, nodes with 1, 2, and 3 pods get weights of 1, 2, and 3, distributing traffic at a 1:2:3 ratio. |
| More than 100 pods | Weight = percentage of pods on each node (rounded down). For example, nodes with 100, 200, and 300 pods out of 600 total get weights of 16, 33, and 50. |
Traffic is distributed proportionally to pod count, so all pods receive an even load regardless of which node they run on.
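The weight arithmetic from the table can be sketched in a few lines of Python. This is an illustration of the documented rules, not the controller's actual code:

```python
def node_weights(pods_per_node):
    """Compute per-node weights as the table describes.

    100 pods or fewer in total: weight = pod count on each node.
    More than 100 pods: weight = percentage of pods on each node, rounded down.
    """
    total = sum(pods_per_node)
    if total <= 100:
        return list(pods_per_node)
    return [pods * 100 // total for pods in pods_per_node]

print(node_weights([1, 2, 3]))        # small Service: weights equal pod counts
print(node_weights([100, 200, 300]))  # large Service: 16, 33, 50
```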