
Container Service for Kubernetes: ALB Ingress FAQ

Last Updated: Sep 09, 2025

This topic lists frequently asked questions about ALB Ingress.

Index

  • General questions

  • Troubleshooting

  • Performance optimization

General questions

What are the differences between ALB Ingress and Nginx Ingress?

ALB Ingress offers several advantages over Nginx Ingress. ALB Ingress is built on Alibaba Cloud Application Load Balancer (ALB), a fully managed service, so it requires no operations and maintenance on your part, whereas Nginx Ingress runs inside your cluster and you must deploy, operate, and upgrade it yourself. ALB Ingress provides powerful Ingress traffic management and a high-performance gateway service. For a detailed comparison of Nginx Ingress, ALB Ingress, and MSE Ingress, see Comparison of Nginx Ingress, ALB Ingress, and MSE Ingress.

Does ALB Ingress support both public and internal network access?

Symptom

In some business scenarios, you may need to access an ALB Ingress from both the internet and an internal network.

Solution

Yes, it does. To access an ALB Ingress from both the internet and an internal network, you can create a public-facing ALB instance. This instance creates a public elastic IP address (EIP) in each zone, which allows the ALB instance to communicate with the internet. The instance also provides an internal-facing virtual IP address (VIP). You can use the internal-facing VIP to access the ALB instance from an internal network. If you only need to access the ALB Ingress from an internal network, you can create an internal-facing ALB instance.
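For reference, the address type is specified in the AlbConfig that creates the ALB instance. The following is a minimal sketch, assuming placeholder vSwitch IDs that you must replace with your own:

apiVersion: alibabacloud.com/v1
kind: AlbConfig
metadata:
  name: alb-demo
spec:
  config:
    name: alb-internet-demo
    addressType: Internet        # Public-facing ALB instance. Use Intranet for internal-only access.
    zoneMappings:
      - vSwitchId: vsw-zone-a-xxxxxx   # Placeholder vSwitch ID for the first zone.
      - vSwitchId: vsw-zone-b-xxxxxx   # Placeholder vSwitch ID for the second zone.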

How can I ensure that ALB Ingress uses a fixed ALB domain name?

After you create an ALB instance using an AlbConfig, the ALB Ingress references the AlbConfig through an IngressClass to use the domain name of the corresponding ALB instance. The domain name remains unchanged as long as the IngressClass and AlbConfig associated with the ALB Ingress are not modified.
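To view the ALB domain name that an Ingress currently uses, you can check the ADDRESS column of the Ingress, assuming the ALB Ingress Controller has already populated the Ingress status:

kubectl -n <namespace> get ingress <ingress-name>   # The ADDRESS column shows the DNS name of the associated ALB instance.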

Why can't I find the ALB Ingress Controller pod in my cluster?

Symptom

When you search for the ALB Ingress Controller pod in your cluster, you cannot find any related pods in the kube-system namespace.

Solution

You can find the ALB Ingress Controller pod in the kube-system namespace only in ACK dedicated clusters. For ACK standard clusters, ACK Pro clusters, and ACK Serverless clusters, the ALB Ingress Controller component is managed by Alibaba Cloud. Therefore, you cannot find the pod in the cluster. For more information about how to upgrade an ACK dedicated cluster to an ACK Pro cluster, see Hot migrate an ACK dedicated cluster to an ACK Pro cluster.

How do I mount IP-based servers?

Symptom

You need to mount backend pods to an ALB instance as IP-based servers. However, with the default configurations, the service cannot automatically create an IP-based server group. This prevents traffic from being distributed to backend services.

Solution

You can add the alb.ingress.kubernetes.io/server-group-type: Ip annotation to the service's annotations. This creates an IP-based server group for the service and lets you register backend pods to the ALB instance by IP address.

Note
  • The server group type cannot be changed after it is created. In Flannel network mode, if you change the service type, such as by switching between ClusterIP and NodePort, the backend attachment type switches between IP and ECS. This prevents the backends from being added to the original server group. Therefore, you cannot directly modify the service type.

  • To change the server group type, you can create a new service and specify the server-group-type: Ip annotation. This avoids affecting the nodes attached to the existing server group.

  • After you set the server-group-type annotation, do not delete it. If you delete the annotation, the server group type of the service becomes inconsistent. This causes reconciliation to fail and prevents backend nodes from being added to the server group.

apiVersion: v1
kind: Service
metadata:
  annotations:
    alb.ingress.kubernetes.io/server-group-type: Ip # Create an IP-based server group so that pod IPs are registered as backend servers.
  name: tea-svc
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    app: tea
  type: ClusterIP

If I select ALB Ingress when creating an ACK managed cluster, is an ALB instance automatically created?

No, it is not. If you select ALB Ingress when you create an ACK managed cluster, only the ALB Ingress Controller is installed. An ALB instance is not automatically created.

The ALB backend listener forwards requests to the kube-system-fake-svc-80 server group by default. What is the purpose of this server group?

When you create a listener, you must specify a default forwarding rule. The forwarding rule can only forward requests to a server group. Therefore, the kube-system-fake-svc-80 server group is created to enable the listener. This server group is not involved in business processing and must not be deleted.

Troubleshooting

Why are my ALB Ingress rules not taking effect?

Symptom

After you create an ALB Ingress rule, the routing rule does not take effect as expected. Requests are not forwarded to the corresponding backend service.

Cause

An ALB instance maintains routing rules in a serial manner. This means that if multiple ALB Ingresses use the same ALB instance, a configuration error in one ALB Ingress prevents changes to other ALB Ingresses from taking effect.

Solution

If an ALB Ingress that you create does not take effect, a previously created ALB Ingress may have an error. You must correct the faulty ALB Ingress before the new ALB Ingress can take effect.
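To locate the faulty Ingress, one approach is to review the events recorded on each Ingress that shares the ALB instance, assuming the controller reports reconciliation errors as Kubernetes events:

kubectl get ingress -A                                   # List all Ingresses that may share the ALB instance.
kubectl -n <namespace> describe ingress <ingress-name>   # Check the Events section for reconciliation errors.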

What should I do if my changes to ALB Ingress do not take effect but no anomalous activity is reported?

Symptom

After you change the configuration of an ALB Ingress or associate it with an AlbConfig, the changes do not take effect, but no anomalous activity is reported.

Solution

If reconciliation events related to the AlbConfig are not executed or change events are not processed, the binding between the IngressClass and the AlbConfig may be incorrect. Check whether the parameters field specified in the IngressClass is correct. For more information, see Use an IngressClass to associate an AlbConfig with an Ingress.
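For reference, a correctly bound IngressClass might look like the following sketch, which assumes an AlbConfig named alb-demo:

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb
spec:
  controller: ingress.k8s.alibabacloud/alb   # Controller identifier used by the ALB Ingress Controller.
  parameters:
    apiGroup: alibabacloud.com
    kind: AlbConfig
    name: alb-demo                           # Must match the name of your AlbConfig resource.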

What should I do if an ALB Ingress forwarding rule is deleted immediately after creation and a 503 status code is returned?

Symptom

An ALB Ingress forwarding rule is deleted immediately after it is created. As a result, requests to the service return a 503 status code and traffic is not distributed.

Solution

Check whether the canary: true annotation is added to all Ingresses that correspond to the forwarding rule. A canary release requires a primary (non-canary) version to route traffic, so do not add the canary: true annotation to the Ingress for the primary version. For more information about how to implement a phased release for a service using ALB Ingress, see Implement a phased release using ALB Ingress.
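For illustration, the canary Ingress (and only the canary Ingress) might carry annotations similar to the following sketch; the annotation keys shown here are assumptions and should be verified against the phased release documentation:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress-canary
  annotations:
    alb.ingress.kubernetes.io/canary: "true"        # Marks this Ingress as the canary version; omit on the primary Ingress.
    alb.ingress.kubernetes.io/canary-weight: "20"   # Route 20% of traffic to the canary backend.
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-svc-canary
                port:
                  number: 80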

The canary release method supports only two Ingresses and has limitations. You can use custom forwarding rules for more flexible traffic routing solutions. For more information, see Customize forwarding rules for an ALB Ingress.

Why does the AlbConfig resource report the "listener is not exist in alb, port: xxx" error?

Symptom

When you try to access ports other than port 80, the requests fail to connect. The AlbConfig resource reports the "listener is not exist in alb, port: xxx" error. This error indicates that the relevant ports are not being listened on and traffic is not being forwarded.

Solution

By default, an AlbConfig contains a listener only for port 80. To listen on other ports, you must add listener configurations for those ports to the AlbConfig. For more information, see Create a listener.
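For example, the listeners section of an AlbConfig might be extended as in the following sketch (only the relevant fields are shown), which adds an HTTPS listener on port 443 alongside the default HTTP listener on port 80:

apiVersion: alibabacloud.com/v1
kind: AlbConfig
metadata:
  name: alb-demo
spec:
  listeners:
    - port: 80
      protocol: HTTP     # Default listener.
    - port: 443
      protocol: HTTPS    # Additional listener that must be declared before port 443 can accept traffic.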

After I configure HTTP and HTTPS listeners in AlbConfig, why can't I access the HTTP and HTTPS listener ports?

Symptom

You have configured HTTP and HTTPS listener ports in the AlbConfig. However, when you try to access the service, neither the HTTP nor the HTTPS port is listening for or forwarding traffic. The service is inaccessible through these ports.

Solution

Confirm that you have added the alb.ingress.kubernetes.io/listen-ports annotation to the Ingress resource's annotations. This annotation specifies that the ALB Ingress listens on both the HTTP (80) and HTTPS (443) ports. For example:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: https-ingress
  annotations:
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80},{"HTTPS": 443}]' # Add this annotation to ensure that the ALB Ingress works correctly when multiple listeners are used.
spec:
  #...

Why are manual configuration changes made in the ALB console lost, added rules deleted, and access logs disabled?

Symptom

After you manually modify configurations in the ALB console, you notice that the changes are lost or automatically deleted. The access log feature is also disabled.

Solution

To change the configuration of an ALB Ingress, you must modify the resources in the cluster. The corresponding configurations are saved in the API Server of the cluster as an ALB Ingress or AlbConfig. Manual changes made in the ALB console do not modify the resources in the API Server. Therefore, the changes do not take effect. If a reconciliation operation is triggered in the cluster, the configurations in the Ingress or AlbConfig overwrite the configurations in the console, which causes your manual changes to be lost. You can modify the ALB Ingress or AlbConfig to change the corresponding configurations.

Why do I receive the "Specified parameter array contains too many items, up to 15 items, Certificates is not valid" error during reconciliation?

Symptom

During reconciliation, the following error message appears: "Specified parameter array contains too many items, up to 15 items, Certificates is not valid". This prevents the ALB Ingress from being associated with the required certificates.

Solution

Starting from version v2.11.0-aliyun.1, the ALB Ingress Controller component supports certificate paging. If you receive the "Specified parameter array contains too many items, up to 15 items, Certificates is not valid" error during reconciliation, it means that your ALB Ingress Controller version does not support certificate paging and you are trying to associate more than 15 certificates in a single reconciliation. To resolve this issue, you can upgrade the ALB Ingress Controller component to the latest version. For more information about component versions, see ALB Ingress Controller. For more information about how to upgrade the component, see Manage the ALB Ingress Controller component.

After I configure an ALB instance in the console, why are some listeners deleted when I run the kubectl apply command to update the access control list (ACL) configuration of the AlbConfig?

Symptom

After you create and configure an ALB instance in the console, you run the kubectl apply command to update the access control list (ACL) configuration of the AlbConfig. As a result, some listeners are unexpectedly deleted, which causes the related ports or rules to become invalid.

Solution

Note

You can use the kubectl edit command to directly update the resource configuration. If you must use the kubectl apply command, you can run the kubectl diff command to preview the changes before you run the kubectl apply command. Make sure that the changes meet your expectations. Then, you can run the kubectl apply command to apply the changes to the Kubernetes cluster.

The kubectl apply command overwrites the AlbConfig. Therefore, when you use the kubectl apply command to update the ACL configuration of an AlbConfig, you must make sure that the YAML file contains the complete listener configuration. This prevents unlisted listeners from being deleted.
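As a precaution, you can preview the effect of an apply before running it, assuming your AlbConfig manifest is stored in a local file such as albconfig.yaml:

kubectl -n <namespace> diff -f albconfig.yaml    # Preview the changes and verify that no existing listeners would be removed.
kubectl -n <namespace> apply -f albconfig.yaml   # Apply only after the diff matches your expectations.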

If you find that a listener is deleted after you run the kubectl apply command, you can recover the listener as follows:

  1. Check whether the YAML file specifies the complete list of listeners.

    If the deleted listener is missing from the list, proceed to the next step. Otherwise, no action is required.

  2. Run the following command to edit the AlbConfig and add the deleted listener configuration.

    kubectl -n <namespace> edit AlbConfig <albconfig-name> # Replace <namespace> and <albconfig-name> with the namespace and name of the AlbConfig resource.

Performance optimization

Optimize server reconciliation time for pod scaling in a service

Problem

In a Kubernetes environment, when pods associated with a service scale in or out, the server reconciliation process can take a long time. This delay affects the real-time performance of elastic scaling. The reconciliation time increases significantly as the number of associated Ingresses grows.

Solution

To improve server reconciliation efficiency, you can apply the following optimizations:

  • Limit the number of Ingresses: Do not attach more than 30 Ingresses to a single service.

  • Merge Ingress rules: If you have many Ingresses, you can attach multiple services to a single Ingress. Then, you can define multiple forwarding rules within that Ingress resource to improve server reconciliation performance.
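For example, instead of creating one Ingress per service, a single Ingress can route different paths to different services, as in the following sketch that assumes two services named tea-svc and coffee-svc:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: merged-ingress
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /tea
            pathType: Prefix
            backend:
              service:
                name: tea-svc      # First backend service.
                port:
                  number: 80
          - path: /coffee
            pathType: Prefix
            backend:
              service:
                name: coffee-svc   # Second backend service handled by the same Ingress.
                port:
                  number: 80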

Automatically assign node weights when using the Flannel plugin and Local mode services

Problem

When you use the Flannel network plugin and configure a service in Local mode, traffic is not distributed evenly across nodes. This imbalance causes high loads on some nodes and prevents balanced traffic distribution. The goal is to automatically assign weights to nodes based on the number of pods on each node for more effective traffic distribution.

Solution

Note

Starting from version v2.13.1-aliyun.1, ALB Ingress Controller supports automatic node weight assignment. Ensure you upgrade to the latest version to use this feature. For more information, see Upgrade the ALB Ingress Controller component.

In a cluster that uses the Flannel plugin, node weights are calculated as follows when a service is set to Local mode. The following example assumes that application pods (app=nginx) are deployed on three ECS instances and exposed through Service A.

(Figure: application pods with the app=nginx label distributed across three ECS instances and exposed through Service A)

The weight calculation depends on the total number of backend pods for the service:

  • Number of pods <= 100: The ALB Ingress Controller sets the weight of each node to the number of pods on that node.

    For example, if the three ECS instances in the preceding example have 1, 2, and 3 pods, their weights are set to 1, 2, and 3 respectively. Traffic is then distributed to the instances in a 1:2:3 ratio, which results in a more balanced load across the pods.

  • Number of pods > 100: The ALB Ingress Controller calculates each node's weight based on the percentage of the total pods that run on that node.

    For example, if the three ECS instances in the preceding example have 100, 200, and 300 pods, their weights are set to 16, 33, and 50. Traffic is then distributed to these instances in a 16:33:50 ratio to achieve a more balanced pod load distribution.
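In this context, Local mode refers to setting externalTrafficPolicy to Local on the service. The following is a minimal sketch, assuming an application labeled app=nginx:

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: NodePort
  externalTrafficPolicy: Local   # Local mode: each node forwards traffic only to the pods running on it.
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP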