
Alibaba Cloud Service Mesh:Resolve pods failing to access the CLB IP address of an ingress gateway

Last Updated: Mar 11, 2026

When externalTrafficPolicy is set to Local on the Classic Load Balancer (CLB) instance of an ingress gateway, pods on certain Kubernetes nodes cannot reach the CLB IP address. This topic explains why this happens and provides three solutions.

Symptoms

After you add a Kubernetes cluster to a Service Mesh (ASM) instance and configure a CLB instance with externalTrafficPolicy: Local for the ingress gateway:

  • Pods on some nodes can access the CLB IP address of the ingress gateway.

  • Pods on other nodes cannot.

Diagnose the issue

Before you apply a fix, verify that your issue matches this scenario.

  1. Check the externalTrafficPolicy value of the ingress gateway service. If the following command outputs Local, your issue likely matches this scenario.

    kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.spec.externalTrafficPolicy}'
  2. Identify which nodes run ingress gateway pods. In the output of the following command, compare the NODE column against the nodes where your failing pods run. Pods that cannot reach the CLB IP address typically run on nodes that have no ingress gateway pod.

    kubectl get pods -n istio-system -l app=istio-ingressgateway -o wide
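The two checks above can be combined into a single sketch that lists the nodes expected to exhibit the failure, that is, the nodes that run no ingress gateway pod. This assumes the default `istio-system` namespace and `app=istio-ingressgateway` label shown earlier; adjust them if your gateway differs.

```shell
# Sketch: list nodes that run NO ingress gateway pod.
# Pods scheduled on these nodes are the ones expected to fail.
NS=istio-system
LABEL=app=istio-ingressgateway

# comm -23 prints lines present only in the first input:
# all node names minus the node names that host a gateway pod.
comm -23 \
  <(kubectl get nodes -o name | sed 's|node/||' | sort) \
  <(kubectl get pods -n "$NS" -l "$LABEL" \
      -o jsonpath='{range .items[*]}{.spec.nodeName}{"\n"}{end}' | sort -u)
```

Any node printed by this sketch has no local endpoint for the gateway service, which is the precondition for the failure described in the next section.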

Cause

When externalTrafficPolicy is set to Local, kube-proxy (in iptables or IP Virtual Server (IPVS) mode) only programs forwarding rules for the CLB external IP on nodes that run a backend pod of the service. On nodes without a backend pod, no iptables or IPVS rule exists for the CLB IP address, so requests from local pods fail.

  • Node runs an ingress gateway pod: kube-proxy has a local forwarding rule. Pods on this node can reach the CLB IP address.

  • Node does not run an ingress gateway pod: no forwarding rule exists. Pods on this node cannot reach the CLB IP address.
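If you have shell access to a node, you can confirm the missing rule directly. This is a hedged sketch: the chain name `KUBE-SERVICES` is standard kube-proxy behavior in iptables mode, and `ipvsadm` must be installed for the IPVS check; replace `47.xx.xx.xx` with your CLB IP address.

```shell
# Run ON a node that has no ingress gateway pod.
CLB_IP=47.xx.xx.xx

# iptables mode: kube-proxy programs the external IP into KUBE-SERVICES.
# On an affected node, the grep finds nothing and the fallback prints.
sudo iptables -t nat -S KUBE-SERVICES | grep "$CLB_IP" \
  || echo "no iptables rule for $CLB_IP"

# IPVS mode: the CLB IP appears as a virtual server only when a local
# backend pod exists.
sudo ipvsadm -Ln | grep "$CLB_IP" \
  || echo "no IPVS virtual server for $CLB_IP"
```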

This is standard Kubernetes behavior. For background, see Why does kube-proxy add the external LB address to node-local iptables rules?

Solutions

Choose a solution based on whether you need to preserve source IP addresses:

| Solution | Preserves source IP | Prerequisites | Complexity |
| --- | --- | --- | --- |
| Use the cluster-internal service name | N/A (internal traffic) | None | Low |
| Set externalTrafficPolicy to Cluster | No | None | Low |
| Enable ENI direct connection | Yes | Terway CNI or inclusive ENI mode | Medium |

Solution 1 (recommended): Use the cluster-internal service name

For traffic that originates inside the cluster, access the ingress gateway through its ClusterIP address or its cluster-internal DNS name instead of the external CLB IP address:

istio-ingressgateway.istio-system

This works on every node regardless of the externalTrafficPolicy setting, because it routes through the ClusterIP rather than the external load balancer.
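As a quick check, an in-cluster pod can curl the gateway by its service name. The pod name `my-app-pod` below is a placeholder; the short name `istio-ingressgateway.istio-system` resolves through the cluster DNS search domains, and the fully qualified form shown here works from any namespace.

```shell
# From any pod in the cluster (my-app-pod is a placeholder).
kubectl exec -it my-app-pod -- \
  curl -I http://istio-ingressgateway.istio-system.svc.cluster.local
```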

Note

Accessing the external CLB IP address from inside the cluster is an anti-pattern. External IPs are designed for traffic entering from outside the cluster. For internal traffic, always use the service DNS name or ClusterIP.

Solution 2: Set externalTrafficPolicy to Cluster

If your workloads must reach the CLB IP address from inside the cluster and you do not need source IP preservation, change externalTrafficPolicy to Cluster. This tells kube-proxy to program forwarding rules on all nodes.

Trade-off: Source IP addresses are lost because kube-proxy performs SNAT when forwarding cross-node traffic.

Update the IstioGateway custom resource:

apiVersion: istio.alibabacloud.com/v1beta1
kind: IstioGateway
metadata:
  name: ingressgateway
  namespace: istio-system
  ...
spec:
  externalTrafficPolicy: Cluster
  ...

For the full list of configurable fields, see CRD fields for a gateway.
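After editing the IstioGateway resource, the ASM controller reconciles the underlying Kubernetes Service; the sketch below waits for that and confirms the policy took effect. The service and namespace names match the defaults used throughout this topic.

```shell
# Confirm the Service now carries the Cluster policy. Once it does,
# kube-proxy programs forwarding rules on every node.
POLICY=$(kubectl get svc istio-ingressgateway -n istio-system \
  -o jsonpath='{.spec.externalTrafficPolicy}')
if [ "$POLICY" = "Cluster" ]; then
  echo "externalTrafficPolicy is Cluster"
else
  echo "policy is still $POLICY; wait for the controller to reconcile"
fi
```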

Solution 3: Enable ENI direct connection

If you use the Terway CNI plugin or your cluster runs in inclusive elastic network interface (ENI) mode, you can access the CLB IP address from inside the cluster while preserving source IP addresses.

Set externalTrafficPolicy to Cluster and add the service.beta.kubernetes.io/backend-type: eni annotation. This bypasses kube-proxy entirely: the CLB sends traffic directly to pod ENIs, so no SNAT occurs and source IPs are preserved.

Update the IstioGateway custom resource:

apiVersion: istio.alibabacloud.com/v1beta1
kind: IstioGateway
metadata:
  name: ingressgateway
  namespace: istio-system
  ...
spec:
  externalTrafficPolicy: Cluster
  maxReplicas: 5
  minReplicas: 2
  ports:
    - name: status-port
      port: 15020
      targetPort: 15020
    - name: http2
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
    - name: tls
      port: 15443
      targetPort: 15443
  replicaCount: 2
  resources:
    limits:
      cpu: '2'
      memory: 2G
    requests:
      cpu: 200m
      memory: 256Mi
  runAsRoot: false
  serviceAnnotations:
    service.beta.kubernetes.io/backend-type: eni
  serviceType: LoadBalancer

For the full list of configurable fields, see CRD fields for a gateway.
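To confirm the annotation was propagated to the Kubernetes Service, you can read it back with jsonpath. Note that literal dots inside an annotation key must be escaped with `\.` in a kubectl jsonpath expression.

```shell
# Read the backend-type annotation off the gateway Service.
# Expected value after applying this solution: eni
kubectl get svc istio-ingressgateway -n istio-system \
  -o jsonpath='{.metadata.annotations.service\.beta\.kubernetes\.io/backend-type}'
```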

Verify the fix

After you apply a solution, confirm that previously unreachable pods can now access the ingress gateway:

# From a pod on a node that previously could not reach the CLB IP address
kubectl exec -it <pod-name> -- curl -I http://<clb-ip>
| Placeholder | Description | Example |
| --- | --- | --- |
| `<pod-name>` | Name of the pod to test from | my-app-pod-abc12 |
| `<clb-ip>` | CLB IP address of the ingress gateway | 47.xx.xx.xx |

A successful response (such as HTTP/1.1 200 OK or HTTP/1.1 404 Not Found) confirms that the pod can reach the ingress gateway. A connection timeout indicates the issue persists.
