This topic provides answers to some frequently asked questions about Services in Container Service for Kubernetes (ACK) clusters.
Table of contents
FAQ about Server Load Balancer (SLB)
Which external traffic policy should I use when I create a Service, Local or Cluster?
Why are no events collected during the synchronization between a Service and an SLB instance?
How do I handle an SLB instance that remains in the Pending state?
What do I do if the vServer groups of an SLB instance are not updated?
What do I do if the annotations of a Service do not take effect?
Why does the cluster fail to access the IP address of the SLB instance?
If I delete a Service, is the SLB instance associated with the Service automatically deleted?
How do I rename an SLB instance if the CCM version is V1.9.3.10 or earlier?
FAQ about using existing SLB instances
Other issues
FAQ about SLB
Which external traffic policy should I use when I create a Service, Local or Cluster?
The features of the Local and Cluster external traffic policies vary based on the network plug-in that is used by the cluster. For more information about the differences between the Local external traffic policy and the Cluster external traffic policy, see Differences between external traffic policies.
Why are no events collected during the synchronization between a Service and an SLB instance?
If no event is generated after you run the kubectl -n {your-namespace} describe svc {your-svc-name} command, check the version of the Cloud Controller Manager (CCM).
If the CCM version is earlier than V1.9.3.276-g372aa98-aliyun, no event is generated for the synchronization between a Service and an SLB instance. We recommend that you update the CCM version. For more information about how to view and update the CCM version, see Manually update the CCM.
If the CCM version is V1.9.3.276-g372aa98-aliyun or later, submit a ticket.
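One way to check the CCM version is to inspect the image tag of the cloud-controller-manager pods in the kube-system namespace. The exact pod and registry names vary by cluster, so the snippet below only sketches how to extract the version tag from an image line; the image line itself is a made-up example.

```shell
# Hypothetical image line as printed by, for example,
# `kubectl get pods -n kube-system -o yaml | grep image:`.
image_line='image: registry.aliyuncs.com/acs/cloud-controller-manager-amd64:v1.9.3.276-g372aa98-aliyun'
# Strip everything up to the last colon to obtain the version tag.
echo "$image_line" | sed 's/.*://'
```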
How do I handle an SLB instance that remains in the Pending state?
Run the kubectl -n {your-namespace} describe svc {your-svc-name} command to view the events, and troubleshoot the errors that are reported in the events. For more information, see Errors and solutions.
If no errors are reported in the events, see Why are no events collected during the synchronization between a Service and an SLB instance?
What do I do if the vServer groups of an SLB instance are not updated?
Run the kubectl -n {your-namespace} describe svc {your-svc-name} command to view the events, and troubleshoot the errors that are reported in the events. For more information, see Errors and solutions.
If no errors are reported in the events, see Why are no events collected during the synchronization between a Service and an SLB instance?
What do I do if the annotations of a Service do not take effect?
Perform the following steps to view the errors:
Run the kubectl -n {your-namespace} describe svc {your-svc-name} command to view the events, and troubleshoot the errors that are reported in the events. For more information, see Errors and solutions.
If no errors are reported, you can resolve the issue based on the following scenarios:
Make sure that the CCM version meets the requirements of the annotations. For more information about the correlation between annotations and CCM versions, see Common annotations.
On the Services page, find the Service that you want to manage and click View in YAML in the Actions column. In the panel that appears, check whether annotations are configured for the Service. If annotations are not configured for the Service, you must configure annotations for the Service.
For more information about how to configure annotations, see Add annotations to the YAML file of a Service to configure CLB instances.
For more information about how to view the list of Services, see Use Services to expose applications.
Verify that the annotations are valid.
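As an illustration of how annotations are attached to a Service, the following sketch sets the service.beta.kubernetes.io/alibaba-cloud-loadbalancer-spec annotation described in Common annotations. The Service name, selector, and the spec value slb.s1.small are examples only; confirm that your CCM version supports the annotation before using it.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app  # hypothetical name
  annotations:
    # Requires a CCM version that supports this annotation; see Common annotations.
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-spec: "slb.s1.small"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
```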
Why is the configuration of an SLB instance modified?
When specific conditions are met, the CCM calls a declarative API to update the configuration of an SLB instance based on the Service configuration. If you modify the configurations of an SLB instance in the SLB console, the CCM may overwrite the changes. We recommend that you use annotations to configure an SLB instance. For more information about how to configure annotations for an SLB instance, see Add annotations to the YAML file of a Service to configure CLB instances.
If the SLB instance is created and managed by the CCM, we recommend that you do not modify the configuration of the SLB instance in the SLB console. Otherwise, the CCM may overwrite the configuration and the Service may be unavailable.
Why does the cluster fail to access the IP address of the SLB instance?
Scenario 1: The SLB instance uses a private IP address and is not automatically created for a Service. In this case, the backend pods of the SLB instance and the client pod are deployed on the same node. Consequently, the client pod cannot access the private IP address of the SLB instance.
This is because Layer 4 SLB does not allow a backend server of an SLB instance to act as a client that accesses the same SLB instance. To resolve this issue, use one of the following methods to avoid deploying the client pod and the backend pods on the same node:
Change the IP address of the SLB instance to a public IP address.
Create a Service that automatically creates an SLB instance. When you configure the Service, set the external traffic policy to Cluster. This way, requests from within the cluster are forwarded by kube-proxy instead of the SLB instance.
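To illustrate the second method, the following is a minimal sketch of a Service that lets the CCM create a new SLB instance and sets the external traffic policy to Cluster. The name, selector, and port are hypothetical.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # hypothetical name
spec:
  type: LoadBalancer      # the CCM automatically creates an SLB instance
  externalTrafficPolicy: Cluster  # in-cluster requests are forwarded by kube-proxy
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
```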
Scenario 2: The external traffic policy of the Service that is used to expose your application is set to Local. As a result, pods in the cluster cannot access the IP address of the SLB instance.
For more information about the issue and how to resolve the issue, see What do I do if I cannot access the IP address of the SLB instance associated with a LoadBalancer Service from within the cluster.
When is the SLB instance automatically deleted?
The system automatically deletes the SLB instance of the LoadBalancer Service under specific conditions if the SLB instance is created by the CCM. The following table describes the conditions under which the SLB instance is automatically deleted.
Condition | When an SLB instance created by the CCM is used | When an existing SLB instance is used |
The LoadBalancer Service is deleted | The SLB instance is deleted | The SLB instance is retained |
The type of the LoadBalancer Service is changed | The SLB instance is deleted | The SLB instance is retained |
If I delete a Service, is the SLB instance associated with the Service automatically deleted?
If the SLB instance is reused, it is not deleted together with the Service. If the SLB instance is not reused, it is deleted together with the Service. A Service reuses an SLB instance if the Service contains the service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id: {your-slb-id} annotation.
If you change the type of the Service, for example, from LoadBalancer to NodePort, and the SLB instance is created by the CCM, the SLB instance is automatically deleted.
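For reference, a Service that reuses an existing SLB instance carries the loadbalancer-id annotation; a minimal sketch follows, in which the Service name, selector, and SLB instance ID are placeholders.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app  # hypothetical name
  annotations:
    # Reuse an existing SLB instance; deleting the Service retains the instance.
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id: "lb-xxxxxxxx"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
```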
What do I do if I accidentally delete an SLB instance?
Scenario 1: What do I do if I accidentally delete the SLB instance of the API server?
The deleted SLB instance cannot be restored. You must create a new SLB instance. For more information, see Create an ACK Pro cluster.
Scenario 2: What do I do if I delete the SLB instance of an Ingress?
Perform the following steps to recreate the SLB instance:
Log on to the ACK console.
In the left-side navigation pane of the ACK console, click Clusters.
On the Clusters page, find the cluster that you want to manage. Then, click the name of the cluster or click Details in the Actions column of the cluster.
In the left-side navigation pane of the cluster details page, open the Services page. At the top of the Services page, select kube-system from the Namespace drop-down list. Then, find nginx-ingress-lb in the Services list and click View in YAML in the Actions column.
If you cannot find nginx-ingress-lb in the Services list, use the following template to create a Service named nginx-ingress-lb:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-ingress-lb
  name: nginx-ingress-lb
  namespace: kube-system
spec:
  externalTrafficPolicy: Local
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: ingress-nginx
  type: LoadBalancer
In the Edit YAML dialog box, delete the content in the status field. Then, click Update. This way, the CCM creates a new SLB instance.
Scenario 3: What do I do if I delete an SLB instance that is configured to handle workloads?
If you no longer need the Service that is associated with the SLB instance, delete the Service.
If you want to keep the Service, perform the following steps:
Log on to the ACK console.
In the left-side navigation pane of the ACK console, click Clusters.
On the Clusters page, find the cluster that you want to manage. Then, click the name of the cluster or click Details in the Actions column of the cluster.
In the left-side navigation pane of the cluster details page, open the Services page. At the top of the Services page, select All Namespaces from the Namespace drop-down list. Then, find the Service in the Services list and click View in YAML in the Actions column.
In the Edit YAML dialog box, delete the content in the status field. Then, click Update. This way, the CCM creates a new SLB instance.
How do I rename an SLB instance if the CCM version is V1.9.3.10 or earlier?
For CCM versions later than V1.9.3.10, a tag is automatically added to the SLB instances in the cluster, and you only need to change the tag value to rename an SLB instance. For CCM V1.9.3.10 and earlier, you must manually add a specific tag to an SLB instance before you can rename the SLB instance.
This method applies only if the CCM version is V1.9.3.10 or earlier and the type of the Service is LoadBalancer.
Log on to a master node in an ACK cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
Run the kubectl get svc -n ${namespace} ${service} command to view the type and IP address of the Service. Note: Replace namespace with the namespace of the cluster and service with the name of the Service.
Run the following command to create the tag that you want to add to the SLB instance:
kubectl get svc -n ${namespace} ${service} -o jsonpath="{.metadata.uid}"|awk -F "-" '{print "kubernetes.do.not.delete: "substr("a"$1$2$3$4$5,1,32)}'
Log on to the SLB console, select the region where the SLB instance is deployed, and then find the specified SLB instance based on the IP address that is returned in Step 2.
Add the tag that is generated in Step 3 to the SLB instance. In the generated output, the part before the colon (kubernetes.do.not.delete) is the tag key, and the part after the colon is the tag value. For more information, see Manage tags.
How does the CCM calculate node weights in Local mode?
In this example, pods with the app=nginx label are deployed on three ECS instances. In the following figure, when externalTrafficPolicy is set to Local, the pods provide services for external users by using Service A. The following sections describe how node weights are calculated.

For CCM versions earlier than V1.9.3.164-g2105d2e-aliyun
For CCM versions that are earlier than V1.9.3.164-g2105d2e-aliyun, the following figure shows that the weight of each ECS instance in Local mode is 100. This indicates that traffic loads are evenly distributed to the ECS instances. However, the load amounts of the pods are different because the pods are unevenly deployed on the ECS instances. For example, the pod on ECS 1 takes the heaviest load and the pods on ECS 3 take the lightest load.
For CCM versions that are later than V1.9.3.164-g2105d2e-aliyun but earlier than V1.9.3.276-g372aa98-aliyun
For CCM versions that are later than V1.9.3.164-g2105d2e-aliyun but earlier than V1.9.3.276-g372aa98-aliyun, the node weights are calculated based on the number of pods deployed on each node, as shown in the following figure. The weights of the ECS instances are 16, 33, and 50 based on this calculation. Therefore, traffic loads are distributed to the ECS instances at the ratio of approximately 1:2:3.
The node weight is calculated based on the following formula: Weight = (Number of pods of the Service on the node / Total number of pods of the Service) × 100.
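The 16, 33, and 50 weights described above can be reproduced with integer arithmetic, assuming 6 pods in total spread 1, 2, and 3 across the three nodes:

```shell
# Weight per node = pods on node * 100 / total pods (integer division).
total=6
for pods in 1 2 3; do
  echo $(( pods * 100 / total ))
done
```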
For CCM V1.9.3.276-g372aa98-aliyun and later
In the preceding calculation, pod loads are slightly imbalanced due to rounding in the formula. For CCM V1.9.3.276-g372aa98-aliyun and later, the weight of each node equals the number of pods deployed on the node. In the following figure, the weights of the ECS instances are 1, 2, and 3. Traffic loads are distributed to the ECS instances at the ratio of 1:2:3. This way, the pods have a more balanced load than in the preceding example.
The node weight is calculated based on the following formula: Weight = Number of pods of the Service on the node.
FAQ about using existing SLB instances
Why does the system fail to use an existing SLB instance for more than one Service?
Check the version of the CCM. If the version is earlier than v1.9.3.105-gfd4e547-aliyun, the CCM cannot use an existing SLB instance for more than one Service. For more information about how to view and update the CCM version, see Manually update the CCM.
Check whether the reused SLB instance is created by the cluster. The SLB instance cannot be reused if it is created by the cluster.
Check whether the SLB instance is used by the API server. The SLB instance cannot be reused if it is used by the API server.
If the SLB instance is an internal-facing SLB instance, check whether the SLB instance and the cluster are deployed in the same virtual private cloud (VPC). The SLB instance cannot be reused if they are deployed in different VPCs.
Why is no listener created when I reuse an existing SLB instance?
Make sure that the service.beta.kubernetes.io/alibaba-cloud-loadbalancer-force-override-listeners annotation of the Service is set to true. If you do not set the value to true, no listener is automatically created.
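The following sketch shows how the two annotations are combined on a Service that reuses an existing SLB instance and allows the CCM to create listeners. The Service name, selector, and SLB instance ID are placeholders.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app  # hypothetical name
  annotations:
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id: "lb-xxxxxxxx"  # existing instance
    # Allow the CCM to create and overwrite listeners on the reused instance.
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-force-override-listeners: "true"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
```

Note that setting force-override-listeners to true overwrites existing listeners on the instance, so use it only when the listener ports are no longer in use, as described below.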
The following list explains why the CCM does not overwrite the listeners of an existing CLB instance:
If the listeners of the CLB instance are associated with applications, service interruptions may occur after the configurations of the listeners are overwritten.
The CCM supports limited backend configurations and cannot handle complex configurations. If you require complex backend configurations, you can manually configure listeners in the CLB console without overwriting the existing listeners.
In both cases, we recommend that you do not overwrite the listeners of existing CLB instances. However, you can overwrite an existing listener if the port of the listener is no longer in use.
Other issues
How do I troubleshoot failures to update the CCM?
For more information about how to troubleshoot failures to update the CCM, see Troubleshoot a check failure that occurs before you update the CCM.
What do I do if errors occur in Services?
Error message | Description and solution |
The backend server number has reached to the quota limit of this load balancers | The quota of backend servers is insufficient. Solution: You can use the following methods to resolve this issue. |
The loadbalancer does not support backend servers of eni type | Shared-resource SLB instances do not support elastic network interfaces (ENIs). Solution: If you want to specify an ENI as a backend server, create a high-performance SLB instance by adding the service.beta.kubernetes.io/alibaba-cloud-loadbalancer-spec: "slb.s1.small" annotation to the Service. Important: Make sure that the annotations that you add meet the requirements of the CCM version. For more information about the correlation between annotations and CCM versions, see Common annotations. |
There are no available nodes for LoadBalancer | No backend server is associated with the SLB instance. Solution: Check whether pods are associated with the Service and whether the pods run as expected. |
| The system fails to associate a Service with the SLB instance. Solution: Log on to the SLB console, select the region where the SLB instance is deployed, and then find the SLB instance based on the EXTERNAL-IP of the Service. |
ORDER.ARREARAGE Message: The account is arrearage. | Your account has overdue payments. |
PAY.INSUFFICIENT_BALANCE Message: Your account does not have enough balance. | The account balance is insufficient. |
Status Code: 400 Code: Throttlingxxx | API throttling is triggered for SLB. Solution: Try again later. |
Status Code: 400 Code: RspoolVipExist Message: there are vips associating with this vServer group. | The vServer group cannot be deleted because it is still associated with a listener. Solution: Delete the listener that is associated with the vServer group first. |
Status Code: 400 Code: NetworkConflict | The reused internal-facing SLB instance and the cluster are not deployed in the same virtual private cloud (VPC). Solution: Make sure that your SLB instance and the cluster are deployed in the same VPC. |
Status Code: 400 Code: VSwitchAvailableIpNotExist Message: The specified VSwitch has no available ip. | The idle IP addresses in the vSwitch are insufficient. Solution: Specify another vSwitch in the same VPC by using the |
The specified Port must be between 1 and 65535. | The targetPort field does not support STRING type values in ENI mode. Solution: Set the |
Status Code: 400 Code: ShareSlbHaltSales Message: The share instance has been discontinued. | By default, earlier versions of CCM automatically create shared-resource SLB instances, which are no longer available for purchase. Solution: Manually update the CCM. |
can not change ResourceGroupId once created | You cannot modify the resource group of an SLB instance after the resource group is created. Solution: Delete the |
can not find eniid for ip x.x.x.x in vpc vpc-xxxx | The specified IP address of the ENI cannot be found in the VPC. Solution: Check whether the |
| You cannot change the billing method of the SLB instance used by a Service from pay-as-you-go to pay-by-specification. Solution: |
SyncLoadBalancerFailed the loadbalancer xxx can not be reused, can not reuse loadbalancer created by kubernetes. | The SLB instance created by the CCM is reused. Solution: |
alicloud: can not change LoadBalancer AddressType once created. delete and retry | You cannot change the type of an SLB instance after it is created. Solution: Recreate the related Service. |
the loadbalancer lb-xxxxx can not be reused, service has been associated with ip [xxx.xxx.xxx.xxx], cannot be bound to ip [xxx.xxx.xxx.xxx] | You cannot associate an SLB instance with a Service that is already associated with another SLB instance. Solution: You cannot reuse an existing SLB instance by modifying the value of the annotation |
How do I configure listeners for a NodePort Service?
You can use the CCM to configure listeners only for LoadBalancer Services. To configure listeners for the Service, change the type of the Service from NodePort to LoadBalancer.
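As a sketch, the change amounts to editing one field in the Service manifest; the fragment below shows only the relevant field, and the rest of the Service spec stays unchanged.

```yaml
# Fragment of the Service manifest: changing spec.type from NodePort to
# LoadBalancer lets the CCM create an SLB instance and configure listeners.
spec:
  type: LoadBalancer   # previously: NodePort
```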
How do I access a NodePort Service?
You can access a NodePort Service from within the cluster by sending requests to the cluster IP address plus the Service port, or to a node IP address plus the node port. You can access a NodePort Service from outside the cluster by sending requests to a node IP address plus the node port. By default, the node port exposed by a NodePort Service falls within the range of 30000 to 32767.
If you want to access the NodePort Service over the Internet or VPCs other than the one in which the cluster resides, you must create a LoadBalancer Service and use the external endpoint of the LoadBalancer Service to expose the NodePort Service.
Note: If the external traffic policy of your Service is set to Local, make sure that at least one backend pod of the Service runs on the node whose IP address you use to access the Service. For more information about the external traffic policies supported by Services, see Differences between external traffic policies.
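For illustration, the following is a minimal NodePort Service sketch; the name, selector, and ports are hypothetical, and nodePort must fall within the cluster's node port range.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app  # hypothetical name
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80          # reachable inside the cluster as <cluster IP>:80
    targetPort: 80
    nodePort: 30080   # reachable as <node IP>:30080; must be in the node port range
    protocol: TCP
```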
How do I configure a proper node port range?
The Kubernetes API server allows you to configure the --service-node-port-range parameter to specify the node port range for NodePort and LoadBalancer Services. The default port range is 30000 to 32767. In an ACK Pro cluster, you can specify a custom port range by configuring the parameters of control plane components. For more information, see Customize the parameters of control plane components in ACK Pro clusters.
Exercise caution when you specify a custom node port range. Make sure that the node port range does not overlap with the port range specified by the net.ipv4.ip_local_port_range kernel parameter of Linux on the nodes in the cluster. The ip_local_port_range kernel parameter specifies the local port range for all Linux programs on a node. The default value of ip_local_port_range is 32768 to 60999.
The default values of --service-node-port-range and ip_local_port_range do not overlap with each other. If the two port ranges overlap after you modify one of them, network errors may occasionally occur on nodes, health checks may fail for your applications, and nodes may be disconnected from your cluster. In this case, we recommend that you reset the parameters to the default values or modify the values so that the port ranges do not overlap.
After you modify the node port range, some existing NodePort or LoadBalancer Services may still use ports that belong to the range specified by the ip_local_port_range kernel parameter. In this case, you must change the ports used by these Services. You can run the kubectl edit service <service-name> command and change the value of the spec.ports.nodePort field to an idle node port.
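As a quick sanity check, two ranges overlap when each range's start does not lie beyond the other's end. The sketch below tests the default values, which are hardcoded assumptions taken from the text above.

```shell
# Default --service-node-port-range and ip_local_port_range values.
svc_min=30000; svc_max=32767
local_min=32768; local_max=60999
# The ranges overlap if svc_min <= local_max and local_min <= svc_max.
if [ "$svc_min" -le "$local_max" ] && [ "$local_min" -le "$svc_max" ]; then
  echo overlap
else
  echo ok
fi
```

With the defaults, the check prints ok, because 32768 (the start of ip_local_port_range) lies just above 32767 (the end of the node port range).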