Container Compute Service:Use readiness gates to seamlessly launch pods that are associated with ALB Ingresses during rolling updates

Last Updated: Mar 25, 2026

During a rolling update, new pods can pass their Kubernetes readiness probes and be marked Ready before the Application Load Balancer (ALB) registers them in the backend server group. If traffic reaches an unregistered pod, requests fail with 502 errors. Readiness gates solve this by keeping a pod in the not-ready state until the ALB Ingress controller confirms the pod is registered and healthy in the backend server group.

This topic demonstrates the difference by using two Deployments, one without readiness gates and one with them, so that you can compare rolling update behavior before and after the feature is enabled.

How it works

Standard Kubernetes readiness probes check whether a pod's containers are healthy from the kubelet's perspective. They don't account for registration in an external load balancer. This creates a timing gap:

  1. A rolling update starts and new pods pass their container readiness probes.

  2. Kubernetes marks the pods as Ready and begins terminating old pods.

  3. The ALB Ingress controller hasn't finished registering the new pods in the backend server group.

  4. During this window, the backend server group contains only pods in an initializing or draining state, causing service outages.

Readiness gates close this gap. When you add a readiness gate with conditionType: target-health.alb.k8s.alibabacloud to the pod template of a Deployment (.spec.template.spec.readinessGates), the ALB Ingress controller sets a custom condition on each pod. Kubernetes holds the pod in the not-ready state until that condition is True, which happens only after the ALB Ingress controller registers the pod in the backend server group and the pod passes health checks. Old pods are not terminated until new pods are confirmed healthy in ALB.
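In a Deployment, the gate is declared in the pod template. The following is a minimal sketch that shows only the fields relevant to the readiness gate; the name and image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # placeholder
spec:
  template:
    spec:
      # The ALB Ingress controller sets this condition on each pod.
      readinessGates:
      - conditionType: target-health.alb.k8s.alibabacloud
      containers:
      - name: my-app            # placeholder
        image: my-image         # placeholder
```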

Prerequisites

Before you begin, make sure you have:

  - A Container Compute Service (ACS) cluster in which the ALB Ingress controller is installed.
  - A kubectl client that is connected to the cluster.

Step 1: Deploy the tea application

  1. Create tea-service.yaml with the following content. This creates a Deployment named tea and a Service named tea-svc.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: tea
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: tea
      template:
        metadata:
          labels:
            app: tea
        spec:
          containers:
          - name: tea
            image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginxdemos:latest
            ports:
            - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: tea-svc
    spec:
      ports:
      - port: 80
        targetPort: 80
        protocol: TCP
      selector:
        app: tea
      type: ClusterIP
  2. Deploy the Deployment and Service:

    kubectl apply -f tea-service.yaml
  3. Check pod status:

    kubectl get pods -o wide

    Expected output:

    NAME                   READY   STATUS    RESTARTS   AGE    IP                NODE  NOMINATED NODE   READINESS GATES
    tea-5cb56xxxxx-xxxxx   1/1     Running   0          1m4s   192.168.xxx.xxx   xxx   <none>           <none>

    The READINESS GATES column shows <none>, confirming readiness gates are not configured.

  4. Create tea-ingress.yaml with the following content:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: tea-ingress
    spec:
      ingressClassName: alb
      rules:
      - host: www.example.com
        http:
          paths:
          - path: /tea
            pathType: Prefix
            backend:
              service:
                name: tea-svc
                port:
                  number: 80
  5. Create the Ingress:

    kubectl apply -f tea-ingress.yaml
  6. Confirm the ALB Ingress is ready:

    kubectl get ingress

    Expected output:

    NAME          CLASS   HOSTS             ADDRESS                                              PORTS   AGE
    tea-ingress   alb     www.example.com   alb-qu066wzmi5fbixxxxx.cn-xxxxxxx.alb.aliyuncs.com   80      6m47s
    Note: www.example.com is used as an example. For the actual domain name, see Configure domain name resolution.

Step 2: Verify that rolling updates interrupt traffic without readiness gates

  1. Create a test script named test.sh. The script continuously sends HTTP requests to the application and prints each HTTP status code.

    #!/bin/bash
    HOST="www.example.com"
    DNS="alb-qu066wzmi5fbixxxxx.cn-xxxxxxx.alb.aliyuncs.com"   # Replace with the ADDRESS value from your ALB Ingress.
    while true; do
      RESPONSE=$(curl -s -o /dev/null -w "%{http_code}" -m 1 -H "Host: $HOST" "http://$DNS/tea")
      TIMESTAMP=$(date +%Y-%m-%d_%H:%M:%S)
      echo "$TIMESTAMP - $RESPONSE"
    done
  2. Run the test script:

    bash test.sh
  3. In a separate terminal, trigger a rolling update:

    kubectl rollout restart deploy tea
  4. Observe the test script output. A small number of 502 (Bad Gateway) responses appear during the update. This confirms that new pods received traffic before being registered in the ALB backend server group.

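To quantify the interruption, you can count the non-200 responses in a saved copy of the script's output. The following is a minimal sketch; it assumes the log lines have the `TIMESTAMP - CODE` format that test.sh prints, and the sample log below is fabricated for illustration:

```shell
#!/bin/bash
# Count responses whose HTTP status code is not 200 in a saved test log.
# Each log line has the form "TIMESTAMP - CODE", as printed by test.sh.
count_failures() {
  awk '$3 != "200" { n++ } END { print n + 0 }' "$1"
}

# Fabricated sample log for illustration:
cat > /tmp/sample.log <<'EOF'
2026-03-25_10:00:01 - 200
2026-03-25_10:00:02 - 502
2026-03-25_10:00:02 - 502
2026-03-25_10:00:03 - 200
EOF

count_failures /tmp/sample.log   # prints 2
```

To use this against a real run, redirect the test script's output to a file (for example, `bash test.sh | tee rollout.log`) and pass the file to count_failures, so you can compare the two deployments numerically.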

Step 3: Deploy the tea-readiness application with readiness gates

  1. Create tea-readiness-service.yaml with the following content. The key addition is the readinessGates field in the pod template (.spec.template.spec.readinessGates) with conditionType: target-health.alb.k8s.alibabacloud, which enables the ALB readiness gate for each pod.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: tea-readiness
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: tea-readiness
      template:
        metadata:
          labels:
            app: tea-readiness
        spec:
          containers:
          - name: tea-readiness
            image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginxdemos:latest
            ports:
            - containerPort: 80
          # Configure readiness gates.
          readinessGates:
            - conditionType: target-health.alb.k8s.alibabacloud
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: tea-readiness-svc
    spec:
      ports:
      - port: 80
        targetPort: 80
        protocol: TCP
      selector:
        app: tea-readiness
      type: ClusterIP
  2. Deploy the Deployment and Service:

    kubectl apply -f tea-readiness-service.yaml
  3. Check pod status:

    kubectl get pods -o wide

    Expected output:

    NAME                             READY   STATUS    RESTARTS   AGE    IP                NODE   NOMINATED NODE   READINESS GATES
    tea-5cb56xxxxx-xxxxx             1/1     Running   0          11m    192.168.xxx.xxx   xxx    <none>           <none>
    tea-readiness-5cb56xxxxx-xxxxx   1/1     Running   0          4m7s   192.168.xxx.xxx   xxx    <none>           0/1

    The tea-readiness pod shows 0/1 in the READINESS GATES column because no Ingress references its Service yet, so the ALB Ingress controller has not registered the pod in any backend server group.

  4. Create tea-readiness-ingress.yaml with the following content:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: tea-readiness-ingress
    spec:
      ingressClassName: alb
      rules:
      - host: www.example.com
        http:
          paths:
          - path: /tea-readiness
            pathType: Prefix
            backend:
              service:
                name: tea-readiness-svc
                port:
                  number: 80
  5. Create the Ingress:

    kubectl apply -f tea-readiness-ingress.yaml
  6. Confirm both Ingresses are ready:

    kubectl get ingress

    Expected output:

    NAME                    CLASS   HOSTS             ADDRESS                                              PORTS   AGE
    tea-ingress             alb     www.example.com   alb-qu066wzmi5fbi5lg85.cn-beijing.alb.aliyuncs.com   80      12m
    tea-readiness-ingress   alb     www.example.com   alb-qu066wzmi5fbi5lg85.cn-beijing.alb.aliyuncs.com   80      65s
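If a pod stays at 0/1 in the READINESS GATES column, you can inspect the custom condition that the ALB Ingress controller sets on the pod. The following is a sketch; replace the pod name with the name of your actual pod:

```
kubectl get pod tea-readiness-5cb56xxxxx-xxxxx \
  -o jsonpath='{.status.conditions[?(@.type=="target-health.alb.k8s.alibabacloud")].status}'
```

The command prints True once the pod is registered and healthy in the backend server group.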

Step 4: Verify that rolling updates are seamless with readiness gates

  1. Confirm the readiness gate is active. The READINESS GATES column should show 1/1.

    kubectl get pods -o wide

    Expected output:

    NAME                             READY   STATUS    RESTARTS   AGE    IP                NODE   NOMINATED NODE   READINESS GATES
    tea-5cb56xxxxx-xxxxx             1/1     Running   0          13m    192.168.xxx.xxx   xxx    <none>           <none>
    tea-readiness-5cb56xxxxx-xxxxx   1/1     Running   0          6m2s   192.168.xxx.xxx   xxx    <none>           1/1
  2. Create a test script named test-readiness.sh:

    #!/bin/bash
    HOST="www.example.com"
    DNS="alb-qu066wzmi5fbixxxxx.cn-xxxxxxx.alb.aliyuncs.com"   # Replace with the ADDRESS value from your ALB Ingress.
    while true; do
      RESPONSE=$(curl -s -o /dev/null -w "%{http_code}" -m 1 -H "Host: $HOST" "http://$DNS/tea-readiness")
      TIMESTAMP=$(date +%Y-%m-%d_%H:%M:%S)
      echo "$TIMESTAMP - $RESPONSE"
    done
  3. Run the test script:

    bash test-readiness.sh
  4. In a separate terminal, trigger a rolling update:

    kubectl rollout restart deploy tea-readiness
  5. Observe the test script output. Only 200 responses appear throughout the rolling update. The readiness gate held new pods in the not-ready state until they were registered and healthy in the ALB backend server group, preventing any traffic interruption.
