Container Service for Kubernetes:Advanced ALB Ingress configurations

Last Updated: May 19, 2025

An Application Load Balancer (ALB) Ingress is an API object that provides Layer 7 load balancing to manage external access to Services in a Kubernetes cluster. This topic describes how to use ALB Ingresses to forward requests to backend server groups based on domain names and URL paths, redirect HTTP requests to HTTPS, and implement canary releases.

Table of contents

This topic provides examples in the following categories:

  • ALB Ingress configurations

  • Configure health checks

  • Port and protocol configurations

  • Forwarding rule configurations

  • Advanced configuration

Prerequisites

Forward requests based on domain names

Perform the following steps to create an Ingress with a domain name and an Ingress without a domain name, and then use the Ingresses to forward requests.

Valid domain names

  1. Use the following template to create a Deployment, a Service, and an Ingress. Requests to the domain name of the Ingress are forwarded to the Service.

    Clusters that run Kubernetes 1.19 or later

    apiVersion: v1
    kind: Service
    metadata:
      name: demo-service
      namespace: default
    spec:
      ports:
        - name: port1
          port: 80
          protocol: TCP
          targetPort: 8080
      selector:
        app: demo
      sessionAffinity: None
      type: NodePort
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: demo
      namespace: default
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: demo
      template:
        metadata:
          labels:
            app: demo
        spec:
          containers:
            - image: registry.cn-hangzhou.aliyuncs.com/alb-sample/cafe:v1
              imagePullPolicy: IfNotPresent
              name: demo
              ports:
                - containerPort: 8080
                  protocol: TCP
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: demo
      namespace: default
    spec:
      ingressClassName: alb
      rules:
        - host: demo.domain.ingress.top
          http:
            paths:
              - backend:
                  service:
                    name: demo-service
                    port: 
                      number: 80
                path: /hello
                pathType: ImplementationSpecific

    Clusters that run Kubernetes versions earlier than 1.19

    apiVersion: v1
    kind: Service
    metadata:
      name: demo-service
      namespace: default
    spec:
      ports:
        - name: port1
          port: 80
          protocol: TCP
          targetPort: 8080
      selector:
        app: demo
      sessionAffinity: None
      type: NodePort
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: demo
      namespace: default
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: demo
      template:
        metadata:
          labels:
            app: demo
        spec:
          containers:
            - image: registry.cn-hangzhou.aliyuncs.com/alb-sample/cafe:v1
              imagePullPolicy: IfNotPresent
              name: demo
              ports:
                - containerPort: 8080
                  protocol: TCP
    ---
    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: demo
      namespace: default
    spec:
      ingressClassName: alb
      rules:
        - host: demo.domain.ingress.top
          http:
            paths:
              - backend:
                  serviceName: demo-service
                  servicePort: 80
                path: /hello
                pathType: ImplementationSpecific
  2. Run the following command to access the application by using the specified domain name.

    Replace ADDRESS with the IP address of the related ALB instance. You can query the IP address by running the kubectl get ing command.

    curl -H "host: demo.domain.ingress.top" <ADDRESS>/hello

    Expected output:

    {"hello":"coffee"}

Empty domain names

  1. The following template shows the configuration of the Ingress:

    Clusters that run Kubernetes 1.19 or later

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: demo
      namespace: default
    spec:
      ingressClassName: alb
      rules:
        - host: ""
          http:
            paths:
              - backend:
                  service:
                    name: demo-service
                    port: 
                      number: 80
                path: /hello
                pathType: ImplementationSpecific

    Clusters that run Kubernetes versions earlier than 1.19

    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: demo
      namespace: default
    spec:
      ingressClassName: alb
      rules:
        - host: ""
          http:
            paths:
              - backend:
                  serviceName: demo-service
                  servicePort: 80
                path: /hello
                pathType: ImplementationSpecific
  2. Run the following command to access the application without using a domain name.

    Replace ADDRESS with the IP address of the related ALB instance. You can query the IP address by running the kubectl get ing command.

    curl <ADDRESS>/hello

    Expected output:

    {"hello":"coffee"}

Forward requests based on URL paths

ALB Ingresses can forward requests based on URL paths. You can use the pathType parameter to configure different URL match policies. The valid values of pathType are Exact, ImplementationSpecific, and Prefix.

Note

URL match policies may conflict with each other. When conflicting URL match policies exist, requests are matched against the policies in descending order of priority. For more information, see Configure forwarding rule priorities.

| Match mode | Rule | URL path | Whether the URL path matches the rule |
| --- | --- | --- | --- |
| Prefix | / | (All paths) | Yes |
| Prefix | /foo | /foo, /foo/ | Yes |
| Prefix | /foo/ | /foo, /foo/ | Yes |
| Prefix | /aaa/bb | /aaa/bbb | No |
| Prefix | /aaa/bbb | /aaa/bbb | Yes |
| Prefix | /aaa/bbb/ | /aaa/bbb | Yes. The trailing forward slash (/) of the rule is ignored. |
| Prefix | /aaa/bbb | /aaa/bbb/ | Yes. The rule matches the URL path, including its trailing forward slash (/). |
| Prefix | /aaa/bbb | /aaa/bbb/ccc | Yes. The rule matches the subpath of the URL path. |
| Prefix | Two rules configured at the same time: / and /aaa | /aaa/ccc | Yes. The URL path matches the / rule. |
| Prefix | Two rules configured at the same time: /aaa and / | /aaa/ccc | Yes. The URL path matches the /aaa rule. |
| Prefix | Two rules configured at the same time: /aaa and / | /ccc | Yes. The URL path matches the / rule. |
| Prefix | /aaa | /ccc | No |
| Exact or ImplementationSpecific | /foo | /foo | Yes |
| Exact or ImplementationSpecific | /foo | /bar | No |
| Exact or ImplementationSpecific | /foo | /foo/ | No |
| Exact or ImplementationSpecific | /foo/ | /foo | No |

You can perform the following steps to configure different URL match policies.

Exact

  1. The following template shows the configuration of the Ingress:

    Clusters that run Kubernetes 1.19 or later

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: demo-path
      namespace: default
    spec:
      ingressClassName: alb
      rules:
        - http:
            paths:
            - path: /hello
              backend:
                service:
                  name: demo-service
                  port: 
                    number: 80
              pathType: Exact

    Clusters that run Kubernetes versions earlier than 1.19

    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: demo-path
      namespace: default
    spec:
      ingressClassName: alb
      rules:
        - http:
            paths:
            - path: /hello
              backend:
                serviceName: demo-service
                servicePort: 80
              pathType: Exact
  2. Run the following command to access the application.

    Replace ADDRESS with the IP address of the related ALB instance. You can query the IP address by running the kubectl get ing command.

    curl <ADDRESS>/hello

    Expected output:

    {"hello":"coffee"}

ImplementationSpecific: the default match policy

In ALB Ingresses, the ImplementationSpecific match policy is handled in the same way as Exact. The configuration is the same as that of the Exact match policy except that pathType is set to ImplementationSpecific.

  1. The following template shows the configuration of the Ingress:

    Clusters that run Kubernetes 1.19 or later

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: demo-path
      namespace: default
    spec:
      ingressClassName: alb
      rules:
        - http:
            paths:
            - path: /hello
              backend:
                service:
                  name: demo-service
                  port:
                    number: 80
              pathType: ImplementationSpecific

    Clusters that run Kubernetes versions earlier than 1.19

    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: demo-path
      namespace: default
    spec:
      ingressClassName: alb
      rules:
        - http:
            paths:
            - path: /hello
              backend:
                serviceName: demo-service
                servicePort: 80
              pathType: ImplementationSpecific
  2. Run the following command to access the application.

    Replace ADDRESS with the IP address of the related ALB instance. You can query the IP address by running the kubectl get ing command.

    curl <ADDRESS>/hello

    Expected output:

    {"hello":"coffee"}

Prefix

Matches based on a URL path prefix. URL paths are split into elements by forward slashes (/), and matching is case-sensitive and performed element by element.

  1. The following template shows the configuration of the Ingress:

    Clusters that run Kubernetes 1.19 or later

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: demo-path-prefix
      namespace: default
    spec:
      ingressClassName: alb
      rules:
        - http:
            paths:
            - path: /
              backend:
                service:
                  name: demo-service
                  port:
                    number: 80
              pathType: Prefix

    Clusters that run Kubernetes versions earlier than 1.19

    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: demo-path-prefix
      namespace: default
    spec:
      ingressClassName: alb
      rules:
        - http:
            paths:
            - path: /
              backend:
                serviceName: demo-service
                servicePort: 80
              pathType: Prefix
  2. Run the following command to access the application.

    Replace ADDRESS with the IP address of the related ALB instance. You can query the IP address by running the kubectl get ing command.

    curl <ADDRESS>/hello

    Expected output:

    {"hello":"coffee"}

Configure health checks

You can configure health checks for ALB Ingresses by using the following annotations.

The following YAML template provides an example of how to create an Ingress for which health checks are enabled.

Clusters that run Kubernetes 1.19 or later

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    alb.ingress.kubernetes.io/healthcheck-enabled: "true"
    alb.ingress.kubernetes.io/healthcheck-path: "/"
    alb.ingress.kubernetes.io/healthcheck-protocol: "HTTP"
    alb.ingress.kubernetes.io/healthcheck-httpversion: "HTTP1.1"
    alb.ingress.kubernetes.io/healthcheck-method: "HEAD"
    alb.ingress.kubernetes.io/healthcheck-code: "http_2xx"
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: "5"
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: "2"
    alb.ingress.kubernetes.io/healthy-threshold-count: "3"
    alb.ingress.kubernetes.io/unhealthy-threshold-count: "3"
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      # Configure a context path.
      - path: /tea
        backend:
          service:
            name: tea-svc
            port:
              number: 80
      # Configure a context path.
      - path: /coffee
        backend:
          service:
            name: coffee-svc
            port:
              number: 80

Clusters that run Kubernetes versions earlier than 1.19

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    alb.ingress.kubernetes.io/healthcheck-enabled: "true"
    alb.ingress.kubernetes.io/healthcheck-path: "/"
    alb.ingress.kubernetes.io/healthcheck-protocol: "HTTP"
    alb.ingress.kubernetes.io/healthcheck-method: "HEAD"
    alb.ingress.kubernetes.io/healthcheck-httpcode: "http_2xx"
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: "5"
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: "2"
    alb.ingress.kubernetes.io/healthy-threshold-count: "3"
    alb.ingress.kubernetes.io/unhealthy-threshold-count: "3"
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      # Configure a context path. 
      - path: /tea
        backend:
          serviceName: tea-svc
          servicePort: 80
      # Configure a context path. 
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80

The health check annotations are described as follows:

alb.ingress.kubernetes.io/healthcheck-enabled

Specifies whether to enable health checks for backend server groups.

  • true: enables health checks.

  • false: disables health checks.

Default value: false.

alb.ingress.kubernetes.io/healthcheck-path

The URL path based on which health checks are performed.

Default value: /.

alb.ingress.kubernetes.io/healthcheck-protocol

The protocol that is used for health checks.

  • HTTP: performs HTTP health checks by sending HEAD or GET requests to a backend server to check whether the backend server is healthy.

  • HTTPS: performs HTTPS health checks by sending HEAD or GET requests to a backend server to check whether the backend server is healthy.

  • TCP: performs TCP health checks by sending SYN packets to the backend server to check whether the port of the backend server is available to receive requests.

  • GRPC: performs gRPC health checks by sending POST or GET requests to a backend server to check whether the backend server is healthy.

Default value: HTTP.

alb.ingress.kubernetes.io/healthcheck-httpversion

The HTTP version that is used for health checks. This parameter takes effect only when healthcheck-protocol is set to HTTP or HTTPS.

  • HTTP1.0

  • HTTP1.1

Default value: HTTP1.1.

alb.ingress.kubernetes.io/healthcheck-method

The request method that is used for health checks.

  • HEAD

  • POST

  • GET

Default value: HEAD.

Important

If you set healthcheck-protocol to GRPC, you must set this parameter to POST or GET.

alb.ingress.kubernetes.io/healthcheck-httpcode

The status codes returned for health checks. A backend server is considered healthy only when the health check request is successful and one of the specified status codes is returned.

You can select one or more of the following status codes, and separate multiple status codes with commas (,):

  • http_2xx

  • http_3xx

  • http_4xx

  • http_5xx

Default value: http_2xx.

alb.ingress.kubernetes.io/healthcheck-code

The status codes returned for health checks. A backend server is considered healthy only when the health check request is successful and one of the specified status codes is returned.

If you specify both this parameter and healthcheck-httpcode, this parameter takes precedence.

Values for this parameter depend on the value specified in healthcheck-protocol:

  • HTTP or HTTPS: You can select one or more of the following status codes, and separate multiple status codes with commas (,):

    • http_2xx

    • http_3xx

    • http_4xx

    • http_5xx

    Default value: http_2xx.

  • GRPC:

    Valid values: 0 to 99.

    Default value: 0.

    Value ranges are supported. You can enter up to 20 value ranges and must separate each range with a comma (,).

alb.ingress.kubernetes.io/healthcheck-timeout-seconds

The timeout period of a health check.

Unit: seconds.

Valid values: 1 to 300.

Default value: 5.

alb.ingress.kubernetes.io/healthcheck-interval-seconds

The interval between two consecutive health checks.

Unit: seconds.

Valid values: 1 to 50.

Default value: 2.

alb.ingress.kubernetes.io/healthy-threshold-count

The number of times that a backend server must consecutively pass health checks before the server is considered healthy.

Valid values: 2 to 10.

Default value: 3.

alb.ingress.kubernetes.io/unhealthy-threshold-count

The number of times that a backend server must consecutively fail health checks before the server is considered unhealthy.

Valid values: 2 to 10.

Default value: 3.

alb.ingress.kubernetes.io/healthcheck-connect-port

The port that is used for health checks.

Default value: 0.

Note

A value of 0 indicates that the port on a backend server is used for health checks.
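
The example above uses HTTP health checks against the backend port. The following sketch combines the healthcheck-protocol and healthcheck-connect-port annotations from the table to run TCP health checks against a fixed port. The Ingress and Service names are illustrative; verify the behavior against your ALB Ingress controller version.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tcp-healthcheck-ingress
  annotations:
    alb.ingress.kubernetes.io/healthcheck-enabled: "true"
    # Send TCP SYN packets instead of HTTP requests to check the port.
    alb.ingress.kubernetes.io/healthcheck-protocol: "TCP"
    # 0 uses the backend server port; a non-zero value pins health checks to that port.
    alb.ingress.kubernetes.io/healthcheck-connect-port: "8080"
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - path: /tcp
        pathType: Prefix
        backend:
          service:
            name: tcp-svc
            port:
              number: 80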

Configure a redirection from HTTP requests to HTTPS requests

You can configure an ALB Ingress to redirect HTTP requests to HTTPS port 443 by adding the annotation alb.ingress.kubernetes.io/ssl-redirect: "true".

Important

You cannot create listeners by using an ALB Ingress. To ensure that an ALB Ingress can work as expected, you need to specify the ports and the protocols of listeners in an AlbConfig, and then associate the listeners with Services in the ALB Ingress.
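
The listeners referenced in this topic must already be defined on the ALB instance. The following AlbConfig sketch declares an HTTP listener on port 80 and an HTTPS listener on port 443; the AlbConfig name, instance name, address type, and vSwitch IDs are placeholders, so check them against the AlbConfig documentation for your controller version before use.

apiVersion: alibabacloud.com/v1
kind: AlbConfig
metadata:
  name: alb # Must match the AlbConfig that the alb IngressClass references.
spec:
  config:
    name: alb-demo # Placeholder name for the ALB instance.
    addressType: Internet
    zoneMappings:
    - vSwitchId: vsw-placeholder-1 # Replace with vSwitches in your VPC.
    - vSwitchId: vsw-placeholder-2
  listeners:
  - port: 80
    protocol: HTTP # Receives the HTTP requests that are redirected to HTTPS.
  - port: 443
    protocol: HTTPS # Serves the redirected HTTPS traffic.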

Example:

Clusters that run Kubernetes 1.19 or later

apiVersion: v1
kind: Service
metadata:
  name: demo-service-ssl
  namespace: default
spec:
  ports:
    - name: port1
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: demo-ssl
  sessionAffinity: None
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-ssl
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-ssl
  template:
    metadata:
      labels:
        app: demo-ssl
    spec:
      containers:
        - image: registry.cn-hangzhou.aliyuncs.com/alb-sample/cafe:v1
          imagePullPolicy: IfNotPresent
          name: demo-ssl
          ports:
            - containerPort: 8080
              protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/ssl-redirect: "true"
  name: demo-ssl
  namespace: default
spec:
  ingressClassName: alb
  tls:
  - hosts:
    - ssl.alb.ingress.top
  rules:
    - host: ssl.alb.ingress.top
      http:
        paths:
          - backend:
              service:
                name: demo-service-ssl
                port: 
                  number: 80
            path: /
            pathType: Prefix

Clusters that run Kubernetes versions earlier than 1.19

apiVersion: v1
kind: Service
metadata:
  name: demo-service-ssl
  namespace: default
spec:
  ports:
    - name: port1
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: demo-ssl
  sessionAffinity: None
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-ssl
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-ssl
  template:
    metadata:
      labels:
        app: demo-ssl
    spec:
      containers:
        - image: registry.cn-hangzhou.aliyuncs.com/alb-sample/cafe:v1
          imagePullPolicy: IfNotPresent
          name: demo-ssl
          ports:
            - containerPort: 8080
              protocol: TCP
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/ssl-redirect: "true"
  name: demo-ssl
  namespace: default
spec:
  ingressClassName: alb
  tls:
  - hosts:
    - ssl.alb.ingress.top
  rules:
    - host: ssl.alb.ingress.top
      http:
        paths:
          - backend:
              serviceName: demo-service-ssl
              servicePort: 80
            path: /
            pathType: Prefix

Configure HTTPS or gRPC as the backend protocol

ALB Ingresses support HTTPS or gRPC as the backend protocol. To configure HTTPS or gRPC, add the alb.ingress.kubernetes.io/backend-protocol: "grpc" or alb.ingress.kubernetes.io/backend-protocol: "https" annotation. If you want to use an Ingress to distribute requests to a gRPC service, you must configure an SSL certificate for the gRPC service and use the TLS protocol to communicate with the gRPC service. Example:

Note

After an Ingress is created, you cannot modify its backend protocol. To change the protocol, delete the Ingress and create a new one.

Clusters that run Kubernetes 1.19 or later

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol: "grpc"
  name: lxd-grpc-ingress
spec:
  ingressClassName: alb
  tls:
  - hosts:
    - demo.alb.ingress.top
  rules:
  - host: demo.alb.ingress.top
    http:
      paths:  
      - path: /
        pathType: Prefix
        backend:
          service:
            name: grpc-demo-svc
            port:
              number: 9080

Clusters that run Kubernetes versions earlier than 1.19

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol: "grpc"
  name: lxd-grpc-ingress
spec:
  ingressClassName: alb
  tls:
  - hosts:
    - demo.alb.ingress.top
  rules:
    - host: demo.alb.ingress.top
      http:
        paths:
          - backend:
              serviceName: grpc-demo-svc
              servicePort: 9080
            path: /
            pathType: Prefix

Configure regular expressions

You can specify regular expressions as routing conditions in the path field. Example:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/use-regex: "true" # Enable regular expressions in the path field. 
    # The Service specified in the conditions annotation must be an existing Service in the cluster, and the Service name must be the same as the Service name in the backend of the rule field. 
    # In pathConfig.values, a value that starts with the regular expression flag ~* is matched as a regular expression. A value without ~* is an exact match.
    alb.ingress.kubernetes.io/conditions.service-a: |
      [{
        "type": "Path",
        "pathConfig": {
            "values": [
               "~*^/pathvalue1",
               "/pathvalue2"
            ]
        }
       }]
  name: ingress-example
spec:
  ingressClassName: alb
  rules:
   - http:
      paths:
      - path: /test
        pathType: Prefix
        backend:
          service:
            name: service-a
            port:
              number: 88

Configure rewrite rules

ALB Ingresses support rewrite rules. To configure rewrite rules, add the annotation alb.ingress.kubernetes.io/rewrite-target: /path/${2}. The following rules apply:

  • If the rewrite-target annotation contains variables of the ${number} type (such as ${2}), the pathType field must be set to Prefix.

  • By default, the path parameter does not support characters that are supported by regular expressions, such as asterisks (*) and question marks (?). To specify characters that are used by regular expressions in the path parameter, you must add the alb.ingress.kubernetes.io/use-regex: "true" annotation.

  • The value of the path parameter must start with a forward slash (/).

Note

If you want to specify regular expressions in rewrite rules, take note of the following items:

  • You can specify one or more regular expressions in the path field of an ALB Ingress and use parentheses () to enclose the regular expressions. However, you can use up to three variables (${1}, ${2}, and ${3}) in the rewrite-target annotation to form the path that overwrites the original path.

  • Variables that match the regular expressions are concatenated to form the path that overwrites the original path.

  • The original path is overwritten only when both of the following requirements are met: the path field contains regular expressions that are enclosed in parentheses (), and the rewrite-target annotation uses one or more of the variables ${1}, ${2}, and ${3}.

Assume that the path parameter of an ALB Ingress is set to /sys/(.*)/(.*)/aaa and the rewrite-target annotation is set to /${1}/${2}. If a client sends a request to the /sys/ccc/bbb/aaa path, the request matches the regular expression /sys/(.*)/(.*)/aaa. The rewrite-target annotation takes effect: ${1} is replaced with ccc and ${2} is replaced with bbb. As a result, the request is forwarded to the backend with the rewritten path /ccc/bbb.

Clusters that run Kubernetes 1.19 or later

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/use-regex: "true" # Supports regular expressions in the path field. 
    alb.ingress.kubernetes.io/rewrite-target: /path/${2} # Variables that match the regular expressions are concatenated to form the path that overwrites the original path. 
  name: rewrite-ingress
spec:
  ingressClassName: alb
  rules:
  - host: demo.alb.ingress.top
    http:
      paths:
      - path: /something(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: rewrite-svc
            port:
              number: 9080

Clusters that run Kubernetes versions earlier than 1.19

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/use-regex: "true" # Supports regular expressions in the path field. 
    alb.ingress.kubernetes.io/rewrite-target: /path/${2} # Variables that match the regular expressions are concatenated to form the path that overwrites the original path. 
  name: rewrite-ingress
spec:
  ingressClassName: alb
  rules:
    - host: demo.alb.ingress.top
      http:
        paths:
          - backend:
              serviceName: rewrite-svc
              servicePort: 9080
            path: /something(/|$)(.*)
            pathType: Prefix
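
To check the rewrite behavior, you can send a request whose path matches the regular expression in the example above. Replace ADDRESS as in the earlier steps; the response body depends on the application behind rewrite-svc, which is not defined in this topic.

# /something/abc matches /something(/|$)(.*), so ${2} captures "abc" and the
# request is forwarded to the backend with the rewritten path /path/abc.
curl -H "host: demo.alb.ingress.top" <ADDRESS>/something/abc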

Configure custom listening ports

ALB Ingresses allow you to configure custom listening ports, so that a Service can be exposed through ports 80 and 443 at the same time.

Important

You cannot create listeners by using an ALB Ingress. To ensure that an ALB Ingress can work as expected, you need to specify the ports and protocols of listeners in an AlbConfig, and then associate the listeners with Services in the ALB Ingress.

Clusters that run Kubernetes 1.19 or later

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
   alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80},{"HTTPS": 443}]'
spec:
  ingressClassName: alb
  tls:
  - hosts:
    - demo.alb.ingress.top
  rules:
  - host: demo.alb.ingress.top
    http:
      paths:
      - path: /tea
        pathType: ImplementationSpecific
        backend:
          service:
            name: tea-svc
            port:
              number: 80

Clusters that run Kubernetes versions earlier than 1.19

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80},{"HTTPS": 443}]'
  name: cafe-ingress
spec:
  ingressClassName: alb
  tls:
  - hosts:
    - demo.alb.ingress.top
  rules:
    - host: demo.alb.ingress.top
      http:
        paths:
          - backend:
              serviceName: tea-svc
              servicePort: 80
            path: /tea-svc
            pathType: ImplementationSpecific

Configure forwarding rule priorities

By default, forwarding rules are prioritized based on the following rules:

  • Forwarding rules of different ALB Ingresses are prioritized in the lexicographical order of their namespace/name values. The forwarding rule whose namespace/name value comes first in lexicographical order has the highest priority.

  • Within an ALB Ingress, forwarding rules are prioritized in the order in which they appear in the rules field. A rule that appears earlier has a higher priority.

To prioritize forwarding rules without modifying the namespace/name values of your ALB Ingresses, configure the following annotation:

Note

The priority of each forwarding rule within a listener must be unique. You can use the annotation alb.ingress.kubernetes.io/order to specify the priorities of the forwarding rules of an ALB Ingress. Valid values: 1 to 1000. A smaller value indicates a higher priority. The default value is 10.

Clusters that run Kubernetes 1.19 or later

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
   alb.ingress.kubernetes.io/order: "2"
spec:
  ingressClassName: alb
  rules:
  - host: demo.alb.ingress.top
    http:
      paths:
      - path: /tea
        pathType: ImplementationSpecific
        backend:
          service:
            name: tea-svc
            port:
              number: 80

Clusters that run Kubernetes versions earlier than 1.19

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/order: "2" 
  name: cafe-ingress
spec:
  ingressClassName: alb
  rules:
    - host: demo.alb.ingress.top
      http:
        paths:
          - backend:
              serviceName: tea-svc
              servicePort: 80
            path: /tea-svc
            pathType: ImplementationSpecific
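
The order annotation also controls the relative priority of forwarding rules that belong to different ALB Ingresses on the same listener. The following sketch is illustrative: the Ingress names and the /coffee path are assumptions, while tea-svc and coffee-svc follow the earlier examples. The more specific /coffee rule is assigned order 1 so that it is evaluated before the catch-all / rule that is defined in a separate Ingress.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: coffee-ingress
  annotations:
    # A smaller value indicates a higher priority, so this rule is evaluated first.
    alb.ingress.kubernetes.io/order: "1"
spec:
  ingressClassName: alb
  rules:
  - host: demo.alb.ingress.top
    http:
      paths:
      - path: /coffee
        pathType: Prefix
        backend:
          service:
            name: coffee-svc
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tea-ingress
  annotations:
    # The catch-all rule is evaluated after coffee-ingress because 2 > 1.
    alb.ingress.kubernetes.io/order: "2"
spec:
  ingressClassName: alb
  rules:
  - host: demo.alb.ingress.top
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tea-svc
            port:
              number: 80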

Use annotations to perform phased releases

ALB allows you to configure canary releases based on request headers, cookies, and weights to handle complex traffic routing. You can add the annotation alb.ingress.kubernetes.io/canary: "true" to enable the canary release feature. Then, you can use the following annotations to configure different canary release rules.

Note
  • Canary releases that use different rules take effect in the following order: header-based > cookie-based > weight-based.

  • When you perform canary releases to test a new application version, do not modify the original Ingress rules. Otherwise, access to the application may be interrupted. After the new application version passes the test, replace the backend Service used by the earlier application version with the backend Service used by the new application version. Then, delete the Ingress rules for implementing canary releases.

  • alb.ingress.kubernetes.io/canary-by-header and alb.ingress.kubernetes.io/canary-by-header-value: This rule matches the headers and header values of requests. You must add both annotations if you want to use this rule.

    • If the header and header value of a request match the rule, the request is routed to the new application version.

    • If the header of a request does not match the header-based rule, the request is matched against other types of rules based on the priorities of the rules.

    In the following example, the alb.ingress.kubernetes.io/canary-by-header annotation is set to location and the canary-by-header-value annotation is set to hz. Requests that carry the location: hz header are routed to the new application version, and requests that fail to match the rule are routed based on weight-based rules. Verification commands for the header-based and cookie-based rules are provided after this list. Example:

    Clusters that run Kubernetes 1.19 or later

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      annotations:
        alb.ingress.kubernetes.io/order: "1"
        alb.ingress.kubernetes.io/canary: "true"
        alb.ingress.kubernetes.io/canary-by-header: "location"
        alb.ingress.kubernetes.io/canary-by-header-value: "hz"
      name: demo-canary
      namespace: default
    spec:
      ingressClassName: alb
      rules:
        - http:
            paths:
              - backend:
                  service:
                    name: demo-service-hello
                    port: 
                      number: 80
                path: /hello
                pathType: ImplementationSpecific

    Clusters that run Kubernetes versions earlier than 1.19

    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      annotations:
        alb.ingress.kubernetes.io/order: "1"
        alb.ingress.kubernetes.io/canary: "true"
        alb.ingress.kubernetes.io/canary-by-header: "location"
        alb.ingress.kubernetes.io/canary-by-header-value: "hz"
      name: demo-canary
      namespace: default
    spec:
      ingressClassName: alb
      rules:
        - http:
            paths:
              - backend:
                  serviceName: demo-service-hello
                  servicePort: 80
                path: /hello
                pathType: ImplementationSpecific
  • alb.ingress.kubernetes.io/canary-by-cookie: This rule matches the cookies of requests.

    • If the value of the specified cookie in a request is set to always, the request is routed to the new application version.

    • If the value of the specified cookie in a request is set to never, the request is routed to the earlier application version.

    Note

    Cookie-based canary release rules do not support other settings. The cookie value must be always or never.

    Requests that contain the demo=always cookie are routed to the new application version. Example:

    Clusters that run Kubernetes 1.19 or later

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      annotations:
        alb.ingress.kubernetes.io/order: "2"
        alb.ingress.kubernetes.io/canary: "true"
        alb.ingress.kubernetes.io/canary-by-cookie: "demo"
      name: demo-canary-cookie
      namespace: default
    spec:
      ingressClassName: alb
      rules:
        - http:
            paths:
              - backend:
                  service:
                    name: demo-service-hello
                    port: 
                      number: 80
                path: /hello
                pathType: ImplementationSpecific

    Clusters that run Kubernetes versions earlier than 1.19

    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      annotations:
        alb.ingress.kubernetes.io/order: "2"
        alb.ingress.kubernetes.io/canary: "true"
        alb.ingress.kubernetes.io/canary-by-cookie: "demo"
      name: demo-canary-cookie
      namespace: default
    spec:
      ingressClassName: alb
      rules:
        - http:
            paths:
              - backend:
                  serviceName: demo-service-hello
                  servicePort: 80
                path: /hello
                pathType: ImplementationSpecific
  • alb.ingress.kubernetes.io/canary-weight: This rule allows you to set the percentage of requests that are sent to a specified Service. You can enter an integer from 0 to 100.

    In the following example, the percentage of requests that are routed to the new application version is set to 50%:

    Clusters that run Kubernetes 1.19 or later

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      annotations:
        alb.ingress.kubernetes.io/order: "3"
        alb.ingress.kubernetes.io/canary: "true"
        alb.ingress.kubernetes.io/canary-weight: "50"
      name: demo-canary-weight
      namespace: default
    spec:
      ingressClassName: alb
      rules:
        - http:
            paths:
              - backend:
                  service:
                    name: demo-service-hello
                    port: 
                      number: 80
                path: /hello
                pathType: ImplementationSpecific

    Clusters that run Kubernetes versions earlier than 1.19

    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      annotations:
        alb.ingress.kubernetes.io/order: "3"
        alb.ingress.kubernetes.io/canary: "true"
        alb.ingress.kubernetes.io/canary-weight: "50"
      name: demo-canary-weight
      namespace: default
    spec:
      ingressClassName: alb
      rules:
        - http:
            paths:
              - backend:
                  serviceName: demo-service-hello
                  servicePort: 80
                path: /hello
                pathType: ImplementationSpecific
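
The following commands are a sketch for verifying the header-based and cookie-based rules above. Replace ADDRESS as in the earlier steps; the responses depend on the application behind demo-service-hello.

# The request carries the location: hz header, matches the header-based rule,
# and is routed to the new application version.
curl -H "location: hz" <ADDRESS>/hello

# The request carries the demo=always cookie, matches the cookie-based rule,
# and is routed to the new application version.
curl --cookie "demo=always" <ADDRESS>/hello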

Configure session persistence by using annotations

ALB Ingresses allow you to configure session persistence by using the following annotations:

  • alb.ingress.kubernetes.io/sticky-session: specifies whether to enable session persistence. Valid values: true and false. Default value: false.

  • alb.ingress.kubernetes.io/sticky-session-type: the method that is used to handle a cookie. Valid values: Insert and Server. Default value: Insert.

    • Insert: inserts a cookie. ALB inserts a cookie (SERVERID) into the first HTTP or HTTPS response packet that is sent to a client. The next request from the client contains this cookie and the listener distributes this request to the recorded backend server.

    • Server: rewrites a cookie. When ALB detects a user-defined cookie, it overwrites the original cookie with the user-defined cookie. The next request from the client will contain the user-defined cookie, and the listener will distribute this request to the recorded backend server.

    Note

    This parameter takes effect when the StickySessionEnabled parameter is set to true for the server group.

  • alb.ingress.kubernetes.io/cookie-timeout: specifies the timeout period of cookies. Valid values: 1 to 86400. Default value: 1000. Unit: seconds.

  • alb.ingress.kubernetes.io/cookie: specifies a custom cookie. Type: string. Default value: "".

Clusters that run Kubernetes 1.19 or later

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress-v3
  annotations:
    alb.ingress.kubernetes.io/sticky-session: "true"
    alb.ingress.kubernetes.io/sticky-session-type: "Insert" # To use a custom cookie (specified by the cookie annotation), set this value to Server.
    alb.ingress.kubernetes.io/cookie-timeout: "1800"
    alb.ingress.kubernetes.io/cookie: "test"
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      # Configure a context path. 
      - path: /tea2
        backend:
          service:
            name: tea-svc
            port:
              number: 80
      # Configure a context path. 
      - path: /coffee2
        backend:
          service:
            name: coffee-svc
            port:
              number: 80

Clusters that run Kubernetes versions earlier than 1.19

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress-v3
  annotations:
    alb.ingress.kubernetes.io/sticky-session: "true"
    alb.ingress.kubernetes.io/sticky-session-type: "Insert" # To use a custom cookie (specified by the cookie annotation), set this value to Server.
    alb.ingress.kubernetes.io/cookie-timeout: "1800"
    alb.ingress.kubernetes.io/cookie: "test"
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      # Configure a context path. 
      - path: /tea2
        backend:
          serviceName: tea-svc
          servicePort: 80
      # Configure a context path. 
      - path: /coffee2
        backend:
          serviceName: coffee-svc
          servicePort: 80

Specify a load balancing algorithm for backend server groups

ALB Ingresses allow you to specify a load balancing algorithm for backend server groups by using the annotation alb.ingress.kubernetes.io/backend-scheduler. Example:

Clusters that run Kubernetes 1.19 or later

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    alb.ingress.kubernetes.io/backend-scheduler: "uch" # Replace uch with wrr, sch, or wlc based on your business requirements. 
    alb.ingress.kubernetes.io/backend-scheduler-uch-value: "test" # This parameter is required only if the load balancing algorithm is uch. You do not need to configure this parameter if the scheduling algorithm is wrr, sch, or wlc. 
spec:
  ingressClassName: alb
  rules:
  - host: demo.alb.ingress.top
    http:
      paths:
      - path: /tea
        pathType: ImplementationSpecific
        backend:
          service:
            name: tea-svc
            port:
              number: 80

Clusters that run Kubernetes versions earlier than 1.19

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-scheduler: "uch" # Replace uch with wrr, sch, or wlc based on your business requirements. 
    alb.ingress.kubernetes.io/backend-scheduler-uch-value: "test" # This parameter is required only when the load balancing algorithm is uch. You do not need to configure this parameter when the scheduling algorithm is wrr, sch, or wlc. 
  name: cafe-ingress
spec:
  ingressClassName: alb
  rules:
    - host: demo.alb.ingress.top
      http:
        paths:
          - backend:
              serviceName: tea-svc
              servicePort: 80
            path: /tea-svc
            pathType: ImplementationSpecific

Set the annotation alb.ingress.kubernetes.io/backend-scheduler based on the following description:

  • wrr: Backend servers that have higher weights receive more requests than those that have lower weights. This is the default value.

  • wlc: Requests are distributed based on the weight and load of each backend server. The load refers to the number of connections to a backend server. If multiple backend servers have the same weight, requests are forwarded to the backend server with the least connections.

  • sch: consistent hashing that is based on source IP addresses.

  • uch: consistent hashing that is based on URL parameters. When uch is used, add the annotation alb.ingress.kubernetes.io/backend-scheduler-uch-value to the ALB Ingress to specify the URL parameter that is used for consistent hashing.

Configure CORS

The following code block shows an example of the Cross-Origin Resource Sharing (CORS) configuration supported by the ALB Ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alb-ingress
  annotations:
    alb.ingress.kubernetes.io/enable-cors: "true"
    alb.ingress.kubernetes.io/cors-expose-headers: ""
    alb.ingress.kubernetes.io/cors-allow-methods: "GET,POST"
    alb.ingress.kubernetes.io/cors-allow-credentials: "true"
    alb.ingress.kubernetes.io/cors-max-age: "600"

spec:
  ingressClassName: alb
  rules:
  - host: demo.alb.ingress.top
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: cloud-nodeport
            port:
              number: 80

The CORS annotations are described as follows:

alb.ingress.kubernetes.io/cors-allow-origin

The URLs that can be used to access resources on the origin server from a browser. Separate multiple URLs with commas (,). Each URL must start with http:// or https:// and contain a valid domain name or a top-level wildcard domain name.

Default value: *. Example: alb.ingress.kubernetes.io/cors-allow-origin: "https://example.com:4443, http://aliyundoc.com, https://example.org:1199".

alb.ingress.kubernetes.io/cors-allow-methods

The HTTP methods that are allowed. The values are not case-sensitive. Separate multiple HTTP methods with commas (,).

Default value: GET, PUT, POST, DELETE, PATCH, OPTIONS. Example: alb.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS".

alb.ingress.kubernetes.io/cors-allow-headers

The request headers that are allowed. The request headers can contain letters, digits, underscores (_), and hyphens (-). Separate multiple request headers with commas (,).

Default value: DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization. Example: alb.ingress.kubernetes.io/cors-allow-headers: "X-Forwarded-For, X-app123-XPTO".

alb.ingress.kubernetes.io/cors-expose-headers

The headers that can be exposed. The headers can contain letters, digits, underscores (_), hyphens (-), and asterisks (*). Separate multiple headers with commas (,).

Default value: empty. Example: alb.ingress.kubernetes.io/cors-expose-headers: "*, X-CustomResponseHeader".

alb.ingress.kubernetes.io/cors-allow-credentials

Specifies whether to include credentials in CORS requests.

Default value: true. Example: alb.ingress.kubernetes.io/cors-allow-credentials: "false".

alb.ingress.kubernetes.io/cors-max-age

The maximum duration, in seconds, for which the result of an OPTIONS preflight request for a non-simple request can be cached. Valid values: -1 (caching disabled) to 172800 (48 hours).

Default value: 172800.
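
The example above omits cors-allow-origin and cors-allow-headers, so their default values apply. The following sketch adds both annotations alongside the others; the origins and header names are illustrative values taken from the descriptions above.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alb-ingress-cors
  annotations:
    alb.ingress.kubernetes.io/enable-cors: "true"
    # Only these origins may send cross-origin requests.
    alb.ingress.kubernetes.io/cors-allow-origin: "https://example.com:4443, http://aliyundoc.com"
    # Request headers that the browser may include in cross-origin requests.
    alb.ingress.kubernetes.io/cors-allow-headers: "Content-Type, Authorization, X-CustomHeader"
    alb.ingress.kubernetes.io/cors-allow-methods: "GET,POST,OPTIONS"
spec:
  ingressClassName: alb
  rules:
  - host: demo.alb.ingress.top
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: cloud-nodeport
            port:
              number: 80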

Configure persistent connections

Traditional load balancers access backend servers over short-lived connections: a connection is created and then closed each time a request is forwarded to a backend server, so connection setup can become a performance bottleneck. To reduce the resources used to establish connections and improve forwarding performance, you can enable the persistent TCP connection feature by adding the alb.ingress.kubernetes.io/backend-keepalive annotation to the ALB Ingress. Example:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alb-ingress
  annotations:
    alb.ingress.kubernetes.io/backend-keepalive: "true"
spec:
  ingressClassName: alb
  rules:
  - host: demo.alb.ingress.top
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: cloud-nodeport
            port:
              number: 80

Enable IPv6 for backend server groups

After enabling IPv6 for a backend server group, if you need to attach dual-stack pods to the server group, ensure the cluster has IPv4/IPv6 dual-stack enabled and that the corresponding Service has ipFamilies and ipFamilyPolicy defined. In a dual-stack configuration:

  • ipFamilies must be set to either IPv4 or IPv6.

  • ipFamilyPolicy must be set to RequireDualStack or PreferDualStack.

After you create a dual-stack ALB instance, you can enable IPv6 for specific server groups by adding the annotation alb.ingress.kubernetes.io/enable-ipv6: "true" to the ALB Ingress. A server group with IPv6 enabled can contain both IPv4 and IPv6 backend servers.

Note

The restrictions for enabling IPv6 are as follows:

  • If the virtual private cloud (VPC) associated with a server group does not have IPv6 enabled, IPv6 cannot be enabled for that server group.

  • IPv6 cannot be enabled for IP-type server groups mounted through custom forwarding or for Function Compute-type server groups.

  • IPv6 cannot be enabled for Ingress when associated with IPv4-only ALB instances.

apiVersion: v1
kind: Service
metadata:
  name: tea-svc
spec:
  # For dual-stack configuration: set ipFamilies to IPv4 or IPv6, and ipFamilyPolicy to RequireDualStack or PreferDualStack.
  ipFamilyPolicy: RequireDualStack
  ipFamilies:
  - IPv4
  - IPv6
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: tea
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tea
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tea
  template:
    metadata:
      labels:
        app: tea
    spec:
      containers:
      - name: tea
        image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginxdemos:latest
        ports:
        - containerPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    alb.ingress.kubernetes.io/enable-ipv6: "true"
spec:
  ingressClassName: alb
  rules:
   - host: demo.alb.ingress.top
     http:
      paths:
      - path: /tea
        pathType: Prefix
        backend:
          service:
            name: tea-svc
            port:
              number: 80

Configure QPS throttling

ALB natively supports QPS throttling for forwarding rules, with valid values ranging from 1 to 1,000,000. To enable this in ALB Ingress, set the alb.ingress.kubernetes.io/traffic-limit-qps annotation. Example:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    alb.ingress.kubernetes.io/traffic-limit-qps: "50"
spec:
  ingressClassName: alb
  rules:
   - host: demo.alb.ingress.top
     http:
      paths:
      - path: /tea
        pathType: ImplementationSpecific
        backend:
          service:
            name: tea-svc
            port:
              number: 80
      - path: /coffee
        pathType: ImplementationSpecific
        backend:
          service:
            name: coffee-svc
            port:
              number: 80

Backend slow start

When a new pod is added to the backend of a Service, immediately routing full traffic to the pod may cause a sudden spike in CPU or memory usage and lead to access issues. In slow start mode, the ALB Ingress gradually shifts traffic to the new pod to ease the impact of sudden traffic surges. The following sample code shows an example:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/slow-start-enabled: "true"
    alb.ingress.kubernetes.io/slow-start-duration: "100"
  name: alb-ingress
spec:
  ingressClassName: alb
  rules:
  - host: alb.ingress.alibaba.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tea-svc
            port:
              number: 80

The slow start annotations are described as follows:

alb.ingress.kubernetes.io/slow-start-enabled

Specifies whether to enable the slow start mode.

  • true: enables slow start mode.

  • false: disables slow start mode.

By default, this mode is disabled.

alb.ingress.kubernetes.io/slow-start-duration

The duration of the slow start (traffic ramp-up) period. Unit: seconds. A longer duration results in a slower increase in traffic to the new backend.

Valid values: 30 to 900.

Default value: 30.

Connection draining

If a pod enters the Terminating state, the ALB Ingress will remove it from the backend. However, ongoing requests may still exist in the established connections, and errors may occur if the ALB Ingress immediately closes all connections. With connection draining enabled, ALB Ingress keeps connections open for a specified period after the pod is removed, ensuring a smooth shutdown once current requests are completed. The connection draining modes are:

  • Disabled: When a pod enters the Terminating state, the ALB Ingress removes the pod from the backend and immediately closes all connections.

  • Enabled: When a pod enters the Terminating state, the ALB Ingress stops sending new requests to the pod but keeps existing connections open so that ongoing requests can complete:

    • If requests are still in progress when the timeout period ends, the ALB Ingress closes all connections and removes the pod.

    • If the pod completes all requests before the timeout period ends, the ALB Ingress removes the pod immediately.

Important

Before the end of connection draining, ALB Ingress will not terminate its connections with the pod, but it cannot ensure the pod remains running. You can control the availability of the pod in the Terminating state by configuring spec.terminationGracePeriodSeconds or using a preStop hook.
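
As a sketch of the pod-side configuration mentioned in the note above, the backend Deployment can combine a preStop sleep with terminationGracePeriodSeconds so that the pod stays alive while connections drain. The timeout values are illustrative and should exceed the drain timeout that you configure, and the sleep command assumes that the container image provides a shell.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tea
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tea
  template:
    metadata:
      labels:
        app: tea
    spec:
      # Give the pod enough time to outlive the connection draining timeout.
      terminationGracePeriodSeconds: 210
      containers:
      - name: tea
        image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginxdemos:latest
        ports:
        - containerPort: 80
        lifecycle:
          preStop:
            exec:
              # Keep the container serving in-flight requests while connections drain.
              command: ["/bin/sh", "-c", "sleep 200"]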

The following sample code shows an example to configure connection draining:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/connection-drain-enabled: "true"
    alb.ingress.kubernetes.io/connection-drain-timeout: "199"
  name: alb-ingress
spec:
  ingressClassName: alb
  rules:
  - host: alb.ingress.alibaba.com
    http:
      paths:
      - path: /test
        pathType: Prefix
        backend:
          service:
            name: tea-svc
            port:
              number: 80

The connection draining annotations are described as follows:

alb.ingress.kubernetes.io/connection-drain-enabled

Specifies whether to enable connection draining.

  • true: enables connection draining.

  • false: disables connection draining.

By default, connection draining is disabled.

alb.ingress.kubernetes.io/connection-drain-timeout

The timeout period of connection draining. Unit: seconds.

Valid values: 0 to 900.

Default value: 300.

Disable cross-zone forwarding

By default, ALB enables cross-zone forwarding, which distributes traffic evenly across backend services in different availability zones within the same region. If this feature is disabled for an ALB server group, traffic will only be distributed to backend services within the same zone in the region.

Important

Before disabling cross-zone forwarding:

  • Ensure that ALB has available backend services configured in each zone.

  • Verify that backend servers in each zone have sufficient resources.

  • Proceed with caution to avoid service disruptions.

The following sample code shows how to disable cross-zone forwarding:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/cross-zone-enabled: "false"
  name: alb-ingress
spec:
  ingressClassName: alb
  rules:
  - host: alb.ingress.alibaba.com
    http:
      paths:
      - path: /test
        pathType: Prefix
        backend:
          service:
            name: tea-svc
            port:
              number: 80

The annotation is described as follows:

alb.ingress.kubernetes.io/cross-zone-enabled

Specifies whether to enable cross-zone forwarding.

  • true: enables cross-zone forwarding.

  • false: disables cross-zone forwarding.

By default, cross-zone forwarding is enabled.