In a Kubernetes cluster, an ALB Ingress exposes Services to external access and provides Layer 7 load balancing. This topic describes how to use ALB Ingress to forward requests from different domain names or URL paths to different backend server groups, redirect HTTP requests to HTTPS, and implement features such as grayscale (canary) release.
Index
- ALB Ingress Configuration
- Port/Protocol Configuration
- Forwarding Policy Configuration
- Advanced Configuration
Prerequisites
The ALB Ingress controller is installed in the cluster. For more information, see Manage the ALB Ingress controller.
Note: To use an ALB Ingress to access Services deployed in an ACK dedicated cluster, you must first grant the cluster the permissions required by the ALB Ingress controller. For more information, see Authorize an ACK dedicated cluster to access the ALB Ingress controller.
An AlbConfig is created. For more information, see Create AlbConfig.
Forward Requests Based on Domain Name
To forward requests based on a specified domain name (or an empty domain name), create a simple Ingress. The following examples demonstrate this process.
Forward Requests Based on Normal Domain Name
Deploy the template below to create a Service, a Deployment, and an Ingress. Requests are routed to the Service through the Ingress domain name.
Clusters of Version 1.19 and Later
apiVersion: v1
kind: Service
metadata:
  name: demo-service
  namespace: default
spec:
  ports:
    - name: port1
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: demo
  sessionAffinity: None
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - image: registry.cn-hangzhou.aliyuncs.com/alb-sample/cafe:v1
          imagePullPolicy: IfNotPresent
          name: demo
          ports:
            - containerPort: 8080
              protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
  namespace: default
spec:
  ingressClassName: alb
  rules:
    - host: demo.domain.ingress.top
      http:
        paths:
          - backend:
              service:
                name: demo-service
                port:
                  number: 80
            path: /hello
            pathType: ImplementationSpecific
Clusters Before Version 1.19
apiVersion: v1
kind: Service
metadata:
  name: demo-service
  namespace: default
spec:
  ports:
    - name: port1
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: demo
  sessionAffinity: None
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - image: registry.cn-hangzhou.aliyuncs.com/alb-sample/cafe:v1
          imagePullPolicy: IfNotPresent
          name: demo
          ports:
            - containerPort: 8080
              protocol: TCP
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: demo
  namespace: default
spec:
  ingressClassName: alb
  rules:
    - host: demo.domain.ingress.top
      http:
        paths:
          - backend:
              serviceName: demo-service
              servicePort: 80
            path: /hello
            pathType: ImplementationSpecific
Use the following command to access the service through the specified domain name. Replace <ADDRESS> with the domain name of the ALB instance, which you can obtain by running kubectl get ing.
curl -H "host: demo.domain.ingress.top" <ADDRESS>/hello
Expected output:
{"hello":"coffee"}
Forward Requests Based on Empty Domain Name
Deploy the template below to create an Ingress.
Clusters of Version 1.19 and Later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
  namespace: default
spec:
  ingressClassName: alb
  rules:
    - host: ""
      http:
        paths:
          - backend:
              service:
                name: demo-service
                port:
                  number: 80
            path: /hello
            pathType: ImplementationSpecific
Clusters Before Version 1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: demo
  namespace: default
spec:
  ingressClassName: alb
  rules:
    - host: ""
      http:
        paths:
          - backend:
              serviceName: demo-service
              servicePort: 80
            path: /hello
            pathType: ImplementationSpecific
Use the following command to access the service through the empty domain name. Replace <ADDRESS> with the domain name of the ALB instance, which you can obtain by running kubectl get ing.
curl <ADDRESS>/hello
Expected output:
{"hello":"coffee"}
Forward Requests Based on URL Path
ALB Ingress supports URL-based request forwarding through the pathType field, which supports three match methods: exact match (Exact), default (ImplementationSpecific), and prefix match (Prefix).
URL match policies may conflict; in that case, requests are forwarded based on rule priority. For more information, see Configure Forwarding Rule Priority.
Matching Method | Rule Path | Request Path | Whether Path and Request Path Match |
Prefix | / | (All paths) | Yes |
Prefix | /foo | /foo, /foo/ | Yes |
Prefix | /foo/ | /foo, /foo/ | Yes |
Prefix | /aaa/bb | /aaa/bbb | No |
Prefix | /aaa/bbb | /aaa/bbb | Yes |
Prefix | /aaa/bbb/ | /aaa/bbb | Yes, the request path ignores the trailing slash of the rule path |
Prefix | /aaa/bbb | /aaa/bbb/ | Yes, the rule path matches the trailing slash of the request path |
Prefix | /aaa/bbb | /aaa/bbb/ccc | Yes, the rule path matches the subpath of the request path |
Prefix | Set multiple rule paths simultaneously: /, /aaa | /aaa/ccc | Yes, the request path matches the /aaa prefix |
Prefix | Set multiple rule paths simultaneously: /, /aaa, /aaa/bbb | /aaa/ccc | Yes, the request path matches the /aaa prefix |
Prefix | Set multiple rule paths simultaneously: /, /aaa, /aaa/bbb | /ccc | Yes, the request path matches the / prefix |
Prefix | /aaa | /ccc | No, prefix not matched |
Exact or ImplementationSpecific | /foo | /foo | Yes |
Exact or ImplementationSpecific | /foo | /bar | No |
Exact or ImplementationSpecific | /foo | /foo/ | No |
Exact or ImplementationSpecific | /foo/ | /foo | No |
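The multi-rule-path rows above can be reproduced with a single Ingress that declares several Prefix paths. The following is a minimal sketch; the Ingress name is hypothetical, and demo-service is assumed to be an existing Service:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multi-prefix-demo # hypothetical name
  namespace: default
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          # A request for /ccc falls through to this catch-all prefix.
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-service # assumed existing Service
                port:
                  number: 80
          # A request for /aaa/ccc matches this longer prefix.
          - path: /aaa
            pathType: Prefix
            backend:
              service:
                name: demo-service
                port:
                  number: 80
```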
The examples for the three matching methods are as follows:
Exact
Deploy the template below to create an Ingress.
Clusters of Version 1.19 and Later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-path
  namespace: default
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /hello
            backend:
              service:
                name: demo-service
                port:
                  number: 80
            pathType: Exact
Clusters Before Version 1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: demo-path
  namespace: default
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /hello
            backend:
              serviceName: demo-service
              servicePort: 80
            pathType: Exact
Use the following command to access the service. Replace <ADDRESS> with the domain name of the ALB instance, which you can obtain by running kubectl get ing.
curl <ADDRESS>/hello
Expected output:
{"hello":"coffee"}
(Default) ImplementationSpecific
In ALB Ingress, ImplementationSpecific is treated the same as Exact.
Deploy the template below to create an Ingress, then use the curl command that follows to access the service. Replace <ADDRESS> with the domain name of the ALB instance, which you can obtain by running kubectl get ing.
Clusters of Version 1.19 and Later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-path
  namespace: default
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /hello
            backend:
              service:
                name: demo-service
                port:
                  number: 80
            pathType: ImplementationSpecific
Clusters Before Version 1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: demo-path
  namespace: default
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /hello
            backend:
              serviceName: demo-service
              servicePort: 80
            pathType: ImplementationSpecific
curl <ADDRESS>/hello
Expected output:
{"hello":"coffee"}
Prefix
Prefix matching is case-sensitive and matches URL paths element by element, where elements are separated by slashes (/).
Deploy the template below to create an Ingress, then use the curl command that follows to access the service. Replace <ADDRESS> with the domain name of the ALB instance, which you can obtain by running kubectl get ing.
Clusters of Version 1.19 and Later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-path-prefix
  namespace: default
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            backend:
              service:
                name: demo-service
                port:
                  number: 80
            pathType: Prefix
Clusters Before Version 1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: demo-path-prefix
  namespace: default
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: demo-service
              servicePort: 80
            pathType: Prefix
curl <ADDRESS>/hello
Expected output:
{"hello":"coffee"}
Configure Health Check
ALB Ingress supports configuring health checks through specific annotations. Below is a YAML example for setting up health checks:
Clusters of Version 1.19 and Later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    alb.ingress.kubernetes.io/healthcheck-enabled: "true"
    alb.ingress.kubernetes.io/healthcheck-path: "/"
    alb.ingress.kubernetes.io/healthcheck-protocol: "HTTP"
    alb.ingress.kubernetes.io/healthcheck-httpversion: "HTTP1.1"
    alb.ingress.kubernetes.io/healthcheck-method: "HEAD"
    alb.ingress.kubernetes.io/healthcheck-code: "http_2xx"
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: "5"
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: "2"
    alb.ingress.kubernetes.io/healthy-threshold-count: "3"
    alb.ingress.kubernetes.io/unhealthy-threshold-count: "3"
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          # Configure the context path.
          - path: /tea
            pathType: ImplementationSpecific
            backend:
              service:
                name: tea-svc
                port:
                  number: 80
          # Configure the context path.
          - path: /coffee
            pathType: ImplementationSpecific
            backend:
              service:
                name: coffee-svc
                port:
                  number: 80
Clusters Before Version 1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    alb.ingress.kubernetes.io/healthcheck-enabled: "true"
    alb.ingress.kubernetes.io/healthcheck-path: "/"
    alb.ingress.kubernetes.io/healthcheck-protocol: "HTTP"
    alb.ingress.kubernetes.io/healthcheck-method: "HEAD"
    alb.ingress.kubernetes.io/healthcheck-httpcode: "http_2xx"
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: "5"
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: "2"
    alb.ingress.kubernetes.io/healthy-threshold-count: "3"
    alb.ingress.kubernetes.io/unhealthy-threshold-count: "3"
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          # Configure the context path.
          - path: /tea
            backend:
              serviceName: tea-svc
              servicePort: 80
          # Configure the context path.
          - path: /coffee
            backend:
              serviceName: coffee-svc
              servicePort: 80
Parameter | Description |
alb.ingress.kubernetes.io/healthcheck-enabled | Specifies whether to enable health checks for the backend server group. Default value: true |
alb.ingress.kubernetes.io/healthcheck-path | The health check path. Default value: / |
alb.ingress.kubernetes.io/healthcheck-protocol | The protocol used for health checks. Default value: HTTP |
alb.ingress.kubernetes.io/healthcheck-httpversion | The HTTP version used for health checks. Takes effect only when the health check protocol is HTTP. Default value: HTTP1.1 |
alb.ingress.kubernetes.io/healthcheck-method | The method used for health checks. Default value: HEAD |
alb.ingress.kubernetes.io/healthcheck-httpcode | The health check status codes. The backend server is considered healthy only when the probe request succeeds and returns one of the specified status codes. You can specify one or more of http_2xx, http_3xx, http_4xx, and http_5xx, separated by commas (,). Default value: http_2xx |
alb.ingress.kubernetes.io/healthcheck-code | The health check status codes. The backend server is considered healthy only when the probe request succeeds and returns one of the specified status codes. When this parameter is used together with healthcheck-httpcode, this parameter takes precedence. The valid values depend on the value of healthcheck-protocol. |
alb.ingress.kubernetes.io/healthcheck-timeout-seconds | The health check timeout period, in seconds. Value range: [1, 300]. Default value: 5 |
alb.ingress.kubernetes.io/healthcheck-interval-seconds | The health check interval, in seconds. Value range: [1, 50]. Default value: 2 |
alb.ingress.kubernetes.io/healthy-threshold-count | The number of consecutive successful health checks required to declare a backend server healthy. Value range: [2, 10]. Default value: 3 |
alb.ingress.kubernetes.io/unhealthy-threshold-count | The number of consecutive failed health checks required to declare a backend server unhealthy. Value range: [2, 10]. Default value: 3 |
alb.ingress.kubernetes.io/healthcheck-connect-port | The port used for health checks. Default value: 0, which means the port of the backend server is used. |
Configure HTTP Redirection to HTTPS
ALB Ingress can redirect HTTP requests to HTTPS by setting the annotation alb.ingress.kubernetes.io/ssl-redirect: "true".
ALB does not support direct listener creation in Ingress. To ensure proper Ingress functionality, create necessary listener ports and protocols in AlbConfig first, then link these listeners with the service in Ingress. For details on creating ALB listeners, see Configure ALB Listeners Through AlbConfig.
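As a hedged sketch of the prerequisite above, an AlbConfig that declares both the HTTP and HTTPS listeners for the redirect scenario might look like the following. The AlbConfig name, load balancer name, and vSwitch IDs are placeholders; see Configure ALB Listeners Through AlbConfig for the authoritative fields:

```yaml
apiVersion: alibabacloud.com/v1
kind: AlbConfig
metadata:
  name: default # assumed AlbConfig name
spec:
  config:
    name: alb-demo # assumed load balancer name
    addressType: Internet
    zoneMappings:
      - vSwitchId: vsw-xxxxxxxx # replace with your vSwitch IDs
      - vSwitchId: vsw-yyyyyyyy
  listeners:
    - port: 80
      protocol: HTTP # HTTP listener; requests arriving here are redirected
    - port: 443
      protocol: HTTPS # HTTPS listener that serves the redirected traffic
```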
Here is an example configuration:
Clusters of Version 1.19 and Later
apiVersion: v1
kind: Service
metadata:
  name: demo-service-ssl
  namespace: default
spec:
  ports:
    - name: port1
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: demo-ssl
  sessionAffinity: None
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-ssl
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-ssl
  template:
    metadata:
      labels:
        app: demo-ssl
    spec:
      containers:
        - image: registry.cn-hangzhou.aliyuncs.com/alb-sample/cafe:v1
          imagePullPolicy: IfNotPresent
          name: demo-ssl
          ports:
            - containerPort: 8080
              protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/ssl-redirect: "true"
  name: demo-ssl
  namespace: default
spec:
  ingressClassName: alb
  tls:
    - hosts:
        - ssl.alb.ingress.top
  rules:
    - host: ssl.alb.ingress.top
      http:
        paths:
          - backend:
              service:
                name: demo-service-ssl
                port:
                  number: 80
            path: /
            pathType: Prefix
Clusters Before Version 1.19
apiVersion: v1
kind: Service
metadata:
  name: demo-service-ssl
  namespace: default
spec:
  ports:
    - name: port1
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: demo-ssl
  sessionAffinity: None
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-ssl
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-ssl
  template:
    metadata:
      labels:
        app: demo-ssl
    spec:
      containers:
        - image: registry.cn-hangzhou.aliyuncs.com/alb-sample/cafe:v1
          imagePullPolicy: IfNotPresent
          name: demo-ssl
          ports:
            - containerPort: 8080
              protocol: TCP
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/ssl-redirect: "true"
  name: demo-ssl
  namespace: default
spec:
  ingressClassName: alb
  tls:
    - hosts:
        - ssl.alb.ingress.top
  rules:
    - host: ssl.alb.ingress.top
      http:
        paths:
          - backend:
              serviceName: demo-service-ssl
              servicePort: 80
            path: /
            pathType: Prefix
Support Backend HTTPS and gRPC Protocols
ALB backend protocols include HTTPS and gRPC. To use them with an Ingress, configure alb.ingress.kubernetes.io/backend-protocol: "https" or alb.ingress.kubernetes.io/backend-protocol: "grpc" in the annotations. For gRPC services, ensure the domain name has an SSL certificate and uses TLS for communication. Below is a configuration example for the gRPC protocol.
Note: The backend protocol cannot be modified after creation. To change the protocol, delete and recreate the Ingress.
Clusters of Version 1.19 and Later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol: "grpc"
  name: lxd-grpc-ingress
spec:
  ingressClassName: alb
  tls:
    - hosts:
        - demo.alb.ingress.top
  rules:
    - host: demo.alb.ingress.top
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: grpc-demo-svc
                port:
                  number: 9080
Clusters Before Version 1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol: "grpc"
  name: lxd-grpc-ingress
spec:
  ingressClassName: alb
  tls:
    - hosts:
        - demo.alb.ingress.top
  rules:
    - host: demo.alb.ingress.top
      http:
        paths:
          - backend:
              serviceName: grpc-demo-svc
              servicePort: 9080
            path: /
            pathType: Prefix
Configure Regular Expression
To customize forwarding conditions, write regular expressions in the path field. Below is a configuration example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/use-regex: "true" # Allow the path field to use regular expressions.
    # service-a must be an existing Service in the cluster, and the name must match the
    # Service name in the backend of the rule below. A value prefixed with ~* is treated
    # as a regular expression (the content after ~* is the effective expression); a value
    # without ~* is an exact match. Comments are not allowed inside the JSON value itself.
    alb.ingress.kubernetes.io/conditions.service-a: |
      [{
          "type": "Path",
          "pathConfig": {
              "values": [
                  "~*^/pathvalue1",
                  "/pathvalue2"
              ]
          }
      }]
  name: ingress-example
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /test
            pathType: Prefix
            backend:
              service:
                name: service-a
                port:
                  number: 88
Support Rewrite
ALB supports rewrites. Configure alb.ingress.kubernetes.io/rewrite-target: /path/${2} in the annotations. The rules are as follows:
- In the rewrite-target annotation, variables of the ${number} type must be configured on a path whose pathType is Prefix.
- By default, the path cannot include regular expression symbols such as * and ?. To use these symbols, configure the rewrite-target annotation.
- The path must start with /.
ALB rewrites support regular expression replacement with the following rules:
- In the path field of an Ingress, you can specify one or more regular expressions, each of which may contain multiple () capture groups. The rewrite-target annotation can reference the variables ${1}, ${2}, and ${3}, up to three in total.
- Rewrites can use the results of regular expression matching as parameters to compose the rewrite rule you want.
- Regular expression replacement applies when the client request matches the regular expression, the expression contains multiple () groups, and the rewrite-target annotation references one or more of the variables ${1}, ${2}, and ${3}.
For example, suppose the path of the Ingress is /sys/(.*)/(.*)/aaa and the rewrite-target annotation is /${1}/${2}. When a client sends a request with the path /sys/ccc/bbb/aaa, the path matches /sys/(.*)/(.*)/aaa, so the rewrite-target annotation takes effect: ${1} is replaced with ccc and ${2} with bbb. The backend server receives the request path /ccc/bbb.
Clusters of Version 1.19 and Later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/use-regex: "true" # Allow the path field to use regular expressions.
    alb.ingress.kubernetes.io/rewrite-target: /path/${2} # This annotation supports regular expression replacement.
  name: rewrite-ingress
spec:
  ingressClassName: alb
  rules:
    - host: demo.alb.ingress.top
      http:
        paths:
          - path: /something(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: rewrite-svc
                port:
                  number: 9080
Clusters Before Version 1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/use-regex: "true" # Allow the path field to use regular expressions.
    alb.ingress.kubernetes.io/rewrite-target: /path/${2} # This annotation supports regular expression replacement.
  name: rewrite-ingress
spec:
  ingressClassName: alb
  rules:
    - host: demo.alb.ingress.top
      http:
        paths:
          - backend:
              serviceName: rewrite-svc
              servicePort: 9080
            path: /something(/|$)(.*)
            pathType: Prefix
Configure Custom Listener Port
Ingress now supports custom listener port configuration, allowing services to be exposed on both port 80 and port 443 simultaneously. Here is an example configuration:
ALB does not support direct listener creation in Ingress. To ensure proper Ingress functionality, create necessary listener ports and protocols in AlbConfig first, then link these listeners with the service in Ingress. For details on creating ALB listeners, see Configure ALB Listeners Through AlbConfig.
Clusters of Version 1.19 and Later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80},{"HTTPS": 443}]'
spec:
  ingressClassName: alb
  tls:
    - hosts:
        - demo.alb.ingress.top
  rules:
    - host: demo.alb.ingress.top
      http:
        paths:
          - path: /tea
            pathType: ImplementationSpecific
            backend:
              service:
                name: tea-svc
                port:
                  number: 80
Clusters Before Version 1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80},{"HTTPS": 443}]'
  name: cafe-ingress
spec:
  ingressClassName: alb
  tls:
    - hosts:
        - demo.alb.ingress.top
  rules:
    - host: demo.alb.ingress.top
      http:
        paths:
          - backend:
              serviceName: tea-svc
              servicePort: 80
            path: /tea-svc
            pathType: ImplementationSpecific
Configure Forwarding Rule Priority
By default, Ingress prioritizes ALB forwarding rules as follows:
- Ingresses are sorted in the dictionary order of namespace/name; a smaller dictionary order means a higher priority.
- Within the same Ingress, rules are sorted in the order of the rule field; rules listed first have higher priority.
To set ALB forwarding rule priorities without modifying the namespace/name of an Ingress, configure the following Ingress annotation:
Rule priorities within the same listener must be unique. Use alb.ingress.kubernetes.io/order to set the priority between Ingresses. Value range: 1 to 1000; the lower the value, the higher the priority. Default value: 10.
Clusters of Version 1.19 and Later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    alb.ingress.kubernetes.io/order: "2"
spec:
  ingressClassName: alb
  rules:
    - host: demo.alb.ingress.top
      http:
        paths:
          - path: /tea
            pathType: ImplementationSpecific
            backend:
              service:
                name: tea-svc
                port:
                  number: 80
Clusters Before Version 1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/order: "2"
  name: cafe-ingress
spec:
  ingressClassName: alb
  rules:
    - host: demo.alb.ingress.top
      http:
        paths:
          - backend:
              serviceName: tea-svc
              servicePort: 80
            path: /tea-svc
            pathType: ImplementationSpecific
Implement Grayscale Release Through Annotation
ALB offers advanced routing capabilities, including grayscale (canary) release based on request headers, cookies, and weights. Enable canary routing by setting the annotation alb.ingress.kubernetes.io/canary: "true", then select a strategy through the annotations below.
Grayscale release priority, from highest to lowest: header-based, cookie-based, weight-based.
- alb.ingress.kubernetes.io/canary-by-header and alb.ingress.kubernetes.io/canary-by-header-value: match requests by the value of a request header. canary-by-header-value must be used together with canary-by-header.
  - When the request's header and header value match the configured values, traffic is routed to the grayscale service endpoint.
  - For other header values, the header is ignored and traffic is routed to the grayscale service according to the priority of the other rules.

  In the following example, when the request header includes location: hz, traffic is routed to the grayscale service; for other headers, traffic allocation to the grayscale service is determined by the grayscale weight.

  Clusters of Version 1.19 and Later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/order: "1"
    alb.ingress.kubernetes.io/canary: "true"
    alb.ingress.kubernetes.io/canary-by-header: "location"
    alb.ingress.kubernetes.io/canary-by-header-value: "hz"
  name: demo-canary
  namespace: default
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - backend:
              service:
                name: demo-service-hello
                port:
                  number: 80
            path: /hello
            pathType: ImplementationSpecific
Clusters Before Version 1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/order: "1"
    alb.ingress.kubernetes.io/canary: "true"
    alb.ingress.kubernetes.io/canary-by-header: "location"
    alb.ingress.kubernetes.io/canary-by-header-value: "hz"
  name: demo-canary
  namespace: default
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - backend:
              serviceName: demo-service-hello
              servicePort: 80
            path: /hello
            pathType: ImplementationSpecific
- alb.ingress.kubernetes.io/canary-by-cookie: splits traffic according to cookie values.
  - When the cookie value is always, request traffic is routed to the grayscale service endpoint.
  - When the cookie value is never, request traffic is not routed to the grayscale service endpoint.

  Note: Cookie-based grayscale release supports only the values always and never.

  In the following example, when the request cookie matches demo=always, traffic is routed to the grayscale service.

  Clusters of Version 1.19 and Later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/order: "2"
    alb.ingress.kubernetes.io/canary: "true"
    alb.ingress.kubernetes.io/canary-by-cookie: "demo"
  name: demo-canary-cookie
  namespace: default
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - backend:
              service:
                name: demo-service-hello
                port:
                  number: 80
            path: /hello
            pathType: ImplementationSpecific
Clusters Before Version 1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/order: "2"
    alb.ingress.kubernetes.io/canary: "true"
    alb.ingress.kubernetes.io/canary-by-cookie: "demo"
  name: demo-canary-cookie
  namespace: default
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - backend:
              serviceName: demo-service-hello
              servicePort: 80
            path: /hello
            pathType: ImplementationSpecific
- alb.ingress.kubernetes.io/canary-weight: specifies the percentage (an integer from 0 to 100) of requests to route to the specified service.

  The following example sets a canary service weight of 50%.
Clusters of Version 1.19 and Later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/order: "3"
    alb.ingress.kubernetes.io/canary: "true"
    alb.ingress.kubernetes.io/canary-weight: "50"
  name: demo-canary-weight
  namespace: default
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - backend:
              service:
                name: demo-service-hello
                port:
                  number: 80
            path: /hello
            pathType: ImplementationSpecific
Clusters Before Version 1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/order: "3"
    alb.ingress.kubernetes.io/canary: "true"
    alb.ingress.kubernetes.io/canary-weight: "50"
  name: demo-canary-weight
  namespace: default
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - backend:
              serviceName: demo-service-hello
              servicePort: 80
            path: /hello
            pathType: ImplementationSpecific
Implement Session Persistence Through Annotation
ALB Ingress enables session persistence configuration via annotations:
- alb.ingress.kubernetes.io/sticky-session: specifies whether to enable session persistence. Valid values: true and false. Default value: false.
- alb.ingress.kubernetes.io/sticky-session-type: the method used to handle cookies. Valid values: Insert and Server. Default value: Insert.
  - Insert: inserts a cookie. On the client's first access, the load balancer inserts a cookie into the HTTP or HTTPS response. Subsequent requests that carry this cookie are directed to the same backend server.
  - Server: rewrites the cookie. When a custom cookie is detected, the load balancer rewrites the original cookie. The client's next request that carries the new cookie is directed to the same backend server.

  Note: This setting takes effect only when StickySessionEnabled is set to true for the server group.
- alb.ingress.kubernetes.io/cookie-timeout: the cookie expiration time, in seconds. Value range: 1 to 86400. Default value: 1000.
- alb.ingress.kubernetes.io/cookie: a custom cookie value. Type: string. Default value: "".
Clusters of Version 1.19 and Later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress-v3
  annotations:
    alb.ingress.kubernetes.io/sticky-session: "true"
    alb.ingress.kubernetes.io/sticky-session-type: "Insert" # To use a custom cookie, the type must be Server.
    alb.ingress.kubernetes.io/cookie-timeout: "1800"
    alb.ingress.kubernetes.io/cookie: "test"
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          # Configure the context path.
          - path: /tea2
            pathType: ImplementationSpecific
            backend:
              service:
                name: tea-svc
                port:
                  number: 80
          # Configure the context path.
          - path: /coffee2
            pathType: ImplementationSpecific
            backend:
              service:
                name: coffee-svc
                port:
                  number: 80
Clusters Before Version 1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress-v3
  annotations:
    alb.ingress.kubernetes.io/sticky-session: "true"
    alb.ingress.kubernetes.io/sticky-session-type: "Insert" # To use a custom cookie, the type must be Server.
    alb.ingress.kubernetes.io/cookie-timeout: "1800"
    alb.ingress.kubernetes.io/cookie: "test"
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          # Configure the context path.
          - path: /tea2
            backend:
              serviceName: tea-svc
              servicePort: 80
          # Configure the context path.
          - path: /coffee2
            backend:
              serviceName: coffee-svc
              servicePort: 80
Specify Server Group Load Balancing Algorithm
You can specify the server group load balancing algorithm for ALB Ingress by setting the annotation alb.ingress.kubernetes.io/backend-scheduler
. Below is a configuration example:
Clusters of Version 1.19 and Later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    alb.ingress.kubernetes.io/backend-scheduler: "uch" # Set to wrr, sch, wlc, or uch as needed.
    alb.ingress.kubernetes.io/backend-scheduler-uch-value: "test" # Required only when the algorithm is uch; not needed for wrr, sch, or wlc.
spec:
  ingressClassName: alb
  rules:
    - host: demo.alb.ingress.top
      http:
        paths:
          - path: /tea
            pathType: ImplementationSpecific
            backend:
              service:
                name: tea-svc
                port:
                  number: 80
Clusters Before Version 1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-scheduler: "uch" # Set to wrr, sch, wlc, or uch as needed.
    alb.ingress.kubernetes.io/backend-scheduler-uch-value: "test" # Required only when the algorithm is uch; not needed for wrr, sch, or wlc.
  name: cafe-ingress
spec:
  ingressClassName: alb
  rules:
    - host: demo.alb.ingress.top
      http:
        paths:
          - backend:
              serviceName: tea-svc
              servicePort: 80
            path: /tea-svc
            pathType: ImplementationSpecific
The values of the alb.ingress.kubernetes.io/backend-scheduler annotation are described as follows:
- wrr: the default value. The higher a backend server's weight, the more likely it is to be selected.
- wlc: selection considers both each backend server's weight and its current load (the number of active connections). When weights are equal, the server with fewer connections is more likely to be selected.
- sch: consistent hashing based on the source IP address.
- uch: consistent hashing based on a URL parameter. Specify the parameter with the annotation alb.ingress.kubernetes.io/backend-scheduler-uch-value; it takes effect when the server group's load balancing algorithm is uch.
Cross-Domain Configuration
ALB Ingress supports the following cross-origin resource sharing (CORS) configurations:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alb-ingress
  annotations:
    alb.ingress.kubernetes.io/enable-cors: "true"
    alb.ingress.kubernetes.io/cors-expose-headers: ""
    alb.ingress.kubernetes.io/cors-allow-methods: "GET,POST"
    alb.ingress.kubernetes.io/cors-allow-credentials: "true"
    alb.ingress.kubernetes.io/cors-max-age: "600"
spec:
  ingressClassName: alb
  rules:
    - host: demo.alb.ingress.top
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: cloud-nodeport
                port:
                  number: 80
Parameter | Description |
alb.ingress.kubernetes.io/cors-allow-origin | Sites allowed to access server resources through the browser. Separate multiple sites with commas (,). Each value must start with http:// or https://, followed by a valid domain name or a first-level wildcard domain name. Default value: |
alb.ingress.kubernetes.io/cors-allow-methods | Allowed cross-origin methods, case-insensitive. Separate multiple methods with commas (,). Default value: |
alb.ingress.kubernetes.io/cors-allow-headers | Request headers allowed in cross-origin requests. Only letters, digits, underscores (_), and hyphens (-) are allowed. Separate multiple headers with commas (,). Default value: |
alb.ingress.kubernetes.io/cors-expose-headers | Headers allowed to be exposed to the browser. Letters, digits, underscores (_), hyphens (-), and asterisks (*) are allowed. Separate multiple headers with commas (,). Default value: |
alb.ingress.kubernetes.io/cors-allow-credentials | Whether credentials may be carried in cross-origin requests. Default value: |
alb.ingress.kubernetes.io/cors-max-age | For non-simple requests, the maximum time (in seconds) that the browser may cache the result of an OPTIONS preflight request. Value range: [-1, 172800]. Default value: |
Backend Persistent Connection
Traditional load balancing connects to backend server groups over short-lived connections, establishing and tearing down a TCP connection for every request, which can make network connectivity a bottleneck in high-performance systems. Enabling backend persistent connections lets the load balancer reuse connections, significantly reducing connection-layer resource consumption and improving processing performance. In ALB Ingress, enable backend persistent connections with the annotation alb.ingress.kubernetes.io/backend-keepalive. An example of how to use this annotation is provided below:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: alb-ingress
annotations:
alb.ingress.kubernetes.io/backend-keepalive: "true"
spec:
ingressClassName: alb
rules:
- host: demo.alb.ingress.top
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: cloud-nodeport
port:
number: 80
Server Group Supports IPv6 Mounting
ALB Ingress can activate IPv6 support for a designated server group via the annotation alb.ingress.kubernetes.io/enable-ipv6: "true"
. Upon setting up a dual-stack ALB instance, IPv6 can be enabled for the associated backend server group, allowing it to simultaneously support both IPv4 and IPv6 backend servers.
There are certain limitations to enabling IPv6 mounting:
- IPv6 mounting cannot be enabled for server groups in a VPC that lacks IPv6 support.
- IPv6 mounting is not available when IP-type or Function Compute-type server groups are mounted through custom forwarding actions.
- Server groups associated with IPv4-only ALB instances cannot have IPv6 mounting enabled.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: cafe-ingress
annotations:
alb.ingress.kubernetes.io/enable-ipv6: "true"
spec:
ingressClassName: alb
rules:
- host: demo.alb.ingress.top
http:
paths:
- path: /tea
pathType: Prefix
backend:
service:
name: tea-svc
port:
number: 80
Support QPS Throttling
ALB natively supports QPS (Queries Per Second) throttling for forwarding rules, allowing you to specify a throttling value between 1 and 100,000. Within ALB Ingress, setting the QPS limit is straightforward; simply apply the annotation alb.ingress.kubernetes.io/traffic-limit-qps
to your configuration. An example is provided below:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: cafe-ingress
annotations:
alb.ingress.kubernetes.io/traffic-limit-qps: "50"
spec:
ingressClassName: alb
rules:
- host: demo.alb.ingress.top
http:
paths:
- path: /tea
pathType: ImplementationSpecific
backend:
service:
name: tea-svc
port:
number: 80
- path: /coffee
pathType: ImplementationSpecific
backend:
service:
name: coffee-svc
port:
number: 80
Backend Slow Start
Introducing a new pod to the Service backend can result in immediate CPU or memory strain if the ALB Ingress directs traffic to it too quickly, potentially causing access issues. To mitigate this, slow start allows for a gradual increase in traffic to the new pod, easing the effects of sudden traffic spikes. Below is an example of how to configure this feature:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
alb.ingress.kubernetes.io/slow-start-enabled: "true"
alb.ingress.kubernetes.io/slow-start-duration: "100"
name: alb-ingress
spec:
ingressClassName: alb
rules:
- host: alb.ingress.alibaba.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: tea-svc
port:
number: 80
Parameter | Description |
alb.ingress.kubernetes.io/slow-start-enabled | Whether to enable slow start. Disabled by default. |
alb.ingress.kubernetes.io/slow-start-duration | The time over which traffic to a newly added backend gradually increases to its full share. The longer the duration, the slower the ramp-up. Unit: seconds (s). Value range: [30, 900]. Default value: |
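Conceptually, slow start ramps a new backend's share of traffic up over the configured duration. The linear ramp below is only a sketch of that idea (the exact curve ALB uses is not specified in this topic):

```python
def slow_start_fraction(elapsed_s: float, duration_s: float) -> float:
    """Fraction of a backend's full traffic share after `elapsed_s`
    seconds of a slow start lasting `duration_s` seconds.
    Linear ramp, for illustration only."""
    if duration_s <= 0:
        return 1.0
    return min(1.0, max(0.0, elapsed_s / duration_s))

# With slow-start-duration: "100", a pod added 25 s ago would receive
# roughly a quarter of its eventual share under a linear ramp.
```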
Connection Draining
Connection draining ensures smooth service decommissioning: after a pod enters the Terminating state and is removed from the backend, existing connections are kept open for a configured period so that in-flight requests can complete instead of failing. Connection draining works as follows:
- Without connection draining, ALB Ingress removes the pod from the backend and immediately terminates all of its connections when the pod enters the Terminating state.
- With connection draining enabled, once the pod enters the Terminating state, ALB Ingress keeps connections open for ongoing requests but stops accepting new ones:
  - If requests are still in flight, ALB Ingress closes the connections and removes the pod when the draining period expires.
  - If the pod completes all of its requests before the draining period ends, ALB Ingress removes it promptly.
- ALB Ingress does not actively close connections to the pod before the draining period ends, but it cannot guarantee that the pod itself stays operational. To keep a pod available while it is Terminating, configure spec.terminationGracePeriodSeconds or use a preStop hook.
To configure connection draining, refer to the following example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
alb.ingress.kubernetes.io/connection-drain-enabled: "true"
alb.ingress.kubernetes.io/connection-drain-timeout: "199"
name: alb-ingress
spec:
ingressClassName: alb
rules:
- host: alb.ingress.alibaba.com
http:
paths:
- path: /test
pathType: Prefix
backend:
service:
name: tea-svc
port:
number: 80
Parameter | Description |
alb.ingress.kubernetes.io/connection-drain-enabled | Whether to enable connection draining. Disabled by default. |
alb.ingress.kubernetes.io/connection-drain-timeout | Connection draining timeout period. Unit: seconds (s). Value range: [0, 900]. Default value: |
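As noted above, connection draining does not keep the pod itself alive. To hold the pod through the drain window, a preStop hook and a matching termination grace period can be added to the workload. The sketch below is illustrative (the workload name and sleep length are assumptions); the sleep should cover the configured drain timeout, and the grace period should exceed the sleep:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tea
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tea
  template:
    metadata:
      labels:
        app: tea
    spec:
      # Give the pod longer than the drain timeout before it is killed.
      terminationGracePeriodSeconds: 210
      containers:
      - name: tea
        image: registry.cn-hangzhou.aliyuncs.com/alb-sample/cafe:v1
        lifecycle:
          preStop:
            exec:
              # Keep the container serving while ALB drains connections;
              # here the sleep covers a connection-drain-timeout of 199 s.
              command: ["sleep", "200"]
```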