Server Load Balancer: Use an ALB Ingress to expose Services in an ACK cluster

Last Updated: Dec 04, 2023

Application Load Balancer (ALB) provides an Ingress implementation that enables external access to Services and is well suited to handling traffic fluctuations. This topic describes how to use an ALB Ingress to expose Services in a Container Service for Kubernetes (ACK) cluster.

Usage notes

  • The Kubernetes version of the ACK cluster must be 1.18 or later.

  • If you use the Flannel network plug-in, the backend Services of the ALB Ingress must be of the NodePort or LoadBalancer type.
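
You can verify both requirements from any machine on which kubectl is configured for the cluster. The following commands are a minimal sketch; the grep pattern for the network plug-in pods is illustrative and may differ in your cluster:

  # Check the Kubernetes version of the cluster (must be 1.18 or later).
  kubectl version

  # Check which network plug-in is installed. If Flannel pods are listed, make sure
  # that the backend Services use the NodePort or LoadBalancer type.
  kubectl -n kube-system get pods | grep -iE 'flannel|terway'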

Prerequisites

Procedure

Step 1: Install the ALB Ingress controller

Method 1: Install the ALB Ingress controller when you create an ACK cluster

When you create an ACK managed cluster or ACK dedicated cluster, select ALB Ingress in the Ingress section. For more information, see Create an ACK Pro cluster.

Method 2: Install the ALB Ingress controller from the Add-ons page in the ACK console

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, click the name of the cluster that you want to manage and choose Operations > Add-ons in the left-side navigation pane.

  3. On the Add-ons page, click the Networking tab. On the ALB Ingress Controller card, click Install.

    Note

    The ALB Ingress controller is not supported in the China (Hohhot) and China (Heyuan) regions.

  4. In the Install ALB Ingress Controller message, click OK.
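
Regardless of which method you use, you can confirm that the controller is running before you continue. A minimal check from a machine with kubectl access to the cluster (the controller runs in the kube-system namespace, as shown in Step 2):

  # List the ALB Ingress controller pods and confirm that they are in the Running state.
  kubectl -n kube-system get pods | grep alb-ingress-controller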

Step 2: (Optional) Grant permissions to the ALB Ingress controller

Important

You need to grant permissions to the ALB Ingress controller only in ACK dedicated clusters. In other types of ACK clusters, no additional authorization is required and you can use ALB Ingresses to access Services directly.

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, click the name of the cluster that you want to manage and click the Cluster Resources tab.

  3. On the Cluster Resources tab, click KubernetesWorkerRole-**** on the right side of Worker RAM Role.

  4. In the Resource Access Management (RAM) console, modify the trust policy and the RAM policy.

    1. On the KubernetesWorkerRole-**** page, click the Trust Policy Management tab.

    2. Check whether the content of the trust policy is the same as the following content. If not, click Edit Trust Policy. In the Edit Trust Policy panel, copy and paste the following content to the editor and click OK.

      {
        "Statement": [
          {
            "Action": "sts:AssumeRole",
            "Effect": "Allow",
            "Principal": {
              "Service": [
                "ecs.aliyuncs.com"
              ]
            }
          }
        ],
        "Version": "1"
      }
    3. On the KubernetesWorkerRole-**** page, click the Permissions tab and click the name of the K8sWorkerRolePolicy-**** policy.

    4. On the details page of the policy, check whether the following ALB Ingress permissions are included in the policy. If the following ALB Ingress permissions are not included in the policy, click Modify Policy Document. In the Modify Policy Document panel, add the following content and then click OK:

      {
                  "Action": [
                      "alb:TagResources",
                      "alb:UnTagResources",
                      "alb:ListServerGroups",
                      "alb:ListServerGroupServers",
                      "alb:AddServersToServerGroup",
                      "alb:RemoveServersFromServerGroup",
                      "alb:ReplaceServersInServerGroup",
                      "alb:CreateLoadBalancer",
                      "alb:DeleteLoadBalancer",
                      "alb:UpdateLoadBalancerAttribute",
                      "alb:UpdateLoadBalancerEdition",
                      "alb:EnableLoadBalancerAccessLog",
                      "alb:DisableLoadBalancerAccessLog",
                      "alb:EnableDeletionProtection",
                      "alb:DisableDeletionProtection",
                      "alb:ListLoadBalancers",
                      "alb:GetLoadBalancerAttribute",
                      "alb:ListListeners",
                      "alb:CreateListener",
                      "alb:GetListenerAttribute",
                      "alb:UpdateListenerAttribute",
                      "alb:ListListenerCertificates",
                      "alb:AssociateAdditionalCertificatesWithListener",
                      "alb:DissociateAdditionalCertificatesFromListener",
                      "alb:DeleteListener",
                      "alb:CreateRule",
                      "alb:DeleteRule",
                      "alb:UpdateRuleAttribute",
                      "alb:CreateRules",
                      "alb:UpdateRulesAttribute",
                      "alb:DeleteRules",
                      "alb:ListRules",
                      "alb:CreateServerGroup",
                      "alb:DeleteServerGroup",
                      "alb:UpdateServerGroupAttribute",
                      "alb:DescribeZones",
                      "alb:CreateAcl",
                      "alb:DeleteAcl",
                      "alb:ListAcls",
                      "alb:AddEntriesToAcl",
                      "alb:AssociateAclsWithListener",
                      "alb:ListAclEntries",
                      "alb:RemoveEntriesFromAcl",
                      "alb:DissociateAclsFromListener",
                      "alb:EnableLoadBalancerIpv6Internet",
                      "alb:DisableLoadBalancerIpv6Internet"
                  ],
                  "Resource": "*",
                  "Effect": "Allow"
              },
              {
                  "Action": "ram:CreateServiceLinkedRole",
                  "Resource": "*",
                  "Effect": "Allow",
                  "Condition": {
                      "StringEquals": {
                          "ram:ServiceName": [
                              "alb.aliyuncs.com",
                              "audit.log.aliyuncs.com",
                              "logdelivery.alb.aliyuncs.com"
                          ]
                      }
                  }
              },
              {
                  "Action": [
                      "yundun-cert:DescribeSSLCertificateList",
                      "yundun-cert:DescribeSSLCertificatePublicKeyDetail",
                      "yundun-cert:CreateSSLCertificateWithName",
                      "yundun-cert:DeleteSSLCertificate"
                  ],
                  "Resource": "*",
                  "Effect": "Allow"
              }
      Note

      When you specify multiple actions, separate them with commas (,): add a comma to the end of each action before you enter the next one.

  5. Check whether the RAM role of the Elastic Compute Service (ECS) instance is in a normal state.

    1. In the left-side navigation pane of the details page, choose Nodes > Nodes.

    2. On the Nodes page, click the ID of the node that you want to manage, such as i-2ze5d2qi9iy90pzb****.

    3. On the page that appears, click the Instance Details tab. In the Other Information section, check whether a RAM role exists in the RAM Role field.

      If no RAM role exists, assign a RAM role to the ECS instance. For more information, see Step 2: Create an ECS instance and attach the RAM role to the instance.

  6. Delete the pod of alb-ingress-controller and check the status of the recreated pod.

    1. Use kubectl to connect to the cluster and run the following command to query the pod of alb-ingress-controller:

      kubectl -n kube-system get pod | grep alb-ingress-controller

      Expected output:

      NAME                          READY   STATUS    RESTARTS   AGE
      alb-ingress-controller-***    1/1     Running   0          60s
    2. Run the following command to delete the pod of alb-ingress-controller:

      Replace alb-ingress-controller-*** with the pod name that you obtained in the previous step.

      kubectl -n kube-system delete pod alb-ingress-controller-***

      Expected output:

      pod "alb-ingress-controller-***" deleted
    3. Wait for a few minutes and then run the following command to query the recreated pod:

      kubectl -n kube-system get pod

      Expected output:

      NAME                          READY   STATUS    RESTARTS   AGE
      alb-ingress-controller-***2    1/1     Running   0          60s

      The output indicates that the recreated pod named alb-ingress-controller-***2 is in the Running state.
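
      To further confirm that the granted permissions took effect, you can inspect the logs of the recreated pod and check for authorization errors. A minimal sketch, in which alb-ingress-controller-***2 is the placeholder pod name from the previous output:

      # Print recent controller logs and search for permission-related errors.
      kubectl -n kube-system logs alb-ingress-controller-***2 --tail=100 | grep -iE 'forbidden|denied|error'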

Step 3: Create an AlbConfig

  1. Create a file named alb-test.yaml and copy and paste the following content to the file. The file is used to create an AlbConfig.

    apiVersion: alibabacloud.com/v1
    kind: AlbConfig
    metadata:
      name: alb-demo
    spec:
      config:
        name: alb-test
        addressType: Internet
        zoneMappings:
        - vSwitchId: vsw-wz9e2usil7e5an1xi****
        - vSwitchId: vsw-wz92lvykqj1siwvif****
      listeners:
        - port: 80
          protocol: HTTP

    The following describes the parameters in the AlbConfig:

    • spec.config.name: The name of the ALB instance. This parameter is optional.

    • spec.config.addressType: The network type of the ALB instance. This parameter is required. Valid values:

      • Internet (default): Internet-facing. If you create an Internet-facing ALB instance, a public IP address and a private IP address are assigned to each zone. By default, Internet-facing ALB instances use elastic IP addresses (EIPs) to provide services over the Internet. If you create an Internet-facing ALB instance, you are charged instance fees and bandwidth fees or data transfer fees for the EIPs.

        • EIPs are used to provide services over the Internet and expose ALB instances to the Internet.

        • Private IP addresses allow ECS instances in virtual private clouds (VPCs) to access ALB instances.

      • Intranet: internal-facing. If you create an internal-facing ALB instance, a private IP address is assigned to each zone. The ALB instance can be accessed only over the internal network of Alibaba Cloud.

    • spec.config.zoneMappings: The IDs of the vSwitches that are used by the ALB Ingress. This parameter is required. You must specify at least two vSwitch IDs, and the vSwitches must be deployed in different zones that are supported by ALB Ingresses. For more information about the supported regions and zones, see Supported regions and zones.

  2. Run the following command to create an AlbConfig:

    kubectl apply -f alb-test.yaml

    Expected output:

    AlbConfig.alibabacloud.com/alb-demo created
  3. Create an alb.yaml file that contains the following content. The file is used to create an IngressClass.

    Clusters that run Kubernetes versions earlier than 1.19

    apiVersion: networking.k8s.io/v1beta1
    kind: IngressClass
    metadata:
      name: alb
    spec:
      controller: ingress.k8s.alibabacloud/alb
      parameters:
        apiGroup: alibabacloud.com
        kind: AlbConfig
        name: alb-demo

    Clusters that run Kubernetes 1.19 or later

    apiVersion: networking.k8s.io/v1
    kind: IngressClass
    metadata:
      name: alb
    spec:
      controller: ingress.k8s.alibabacloud/alb
      parameters:
        apiGroup: alibabacloud.com
        kind: AlbConfig
        name: alb-demo
  4. Run the following command to create an IngressClass:

    kubectl apply -f alb.yaml

    Expected output:

    ingressclass.networking.k8s.io/alb created
  5. View the status of the ALB instance.

    • Method 1: Run the following command to query the ID of the ALB instance:

      kubectl get albconfig alb-demo

      The output shows the ID of the ALB instance.

    • Method 2: Log on to the ALB console and view the ALB instance in the ALB console.
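
    If you need more details, such as the DNS name or the listener configuration, you can also print the full resource, including the status section that the controller reports. A minimal sketch:

      # Print the AlbConfig, including its status.
      kubectl get albconfig alb-demo -o yaml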

Step 4: Deploy Services

  1. Create a file named cafe-service.yaml and copy and paste the following content to the file. The file is used to deploy two Deployments named coffee and tea and two Services named coffee-svc and tea-svc.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: coffee
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: coffee
      template:
        metadata:
          labels:
            app: coffee
        spec:
          containers:
          - name: coffee
            image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginxdemos:latest
            ports:
            - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: coffee-svc
    spec:
      ports:
      - port: 80
        targetPort: 80
        protocol: TCP
      selector:
        app: coffee
      type: NodePort
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: tea
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: tea
      template:
        metadata:
          labels:
            app: tea
        spec:
          containers:
          - name: tea
            image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginxdemos:latest
            ports:
            - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: tea-svc
    spec:
      ports:
      - port: 80
        targetPort: 80
        protocol: TCP
      selector:
        app: tea
      type: NodePort
  2. Run the following command to deploy the Deployments and Services:

    kubectl apply -f cafe-service.yaml

    Expected output:

    deployment "coffee" created
    service "coffee-svc" created
    deployment "tea" created
    service "tea-svc" created
  3. Run the following command to query the status of the Services:

    kubectl get svc,deploy

    Expected output:

    NAME         TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
    coffee-svc   NodePort   172.16.231.169   <none>        80:31124/TCP   6s
    tea-svc      NodePort   172.16.38.182    <none>        80:32174/TCP   5s

    NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    deploy/coffee   2         2         2            2           1m
    deploy/tea      1         1         1            1           1m
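
    Before you create the Ingress, you can optionally confirm that both Services select ready pods and therefore have endpoints. A minimal sketch:

    # Confirm that the coffee and tea Services have endpoints.
    kubectl get endpoints coffee-svc tea-svc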

Step 5: Configure an Ingress

  1. Create a file named cafe-ingress.yaml and copy and paste the following content to the file:

    Clusters that run Kubernetes versions earlier than 1.19

    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: cafe-ingress
    spec:
      ingressClassName: alb
      rules:
       - host: demo.aliyundoc.com
         http:
          paths:
          # Specify a context path. 
          - path: /tea
            backend:
              serviceName: tea-svc
              servicePort: 80
          # Specify a context path. 
          - path: /coffee
            backend:
              serviceName: coffee-svc
              servicePort: 80

    Clusters that run Kubernetes 1.19 or later

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: cafe-ingress 
    spec:
      ingressClassName: alb
      rules:
       - host: demo.aliyundoc.com
         http:
          paths:
          # Specify a context path.
          - path: /tea
            pathType: ImplementationSpecific
            backend:
              service:
                name: tea-svc
                port:
                  number: 80
          # Specify a context path.
          - path: /coffee
            pathType: ImplementationSpecific
            backend:
              service:
                name: coffee-svc
                port: 
                  number: 80
  2. Run the following command to configure an accessible domain name and a path for the coffee and tea Services:

    kubectl apply -f cafe-ingress.yaml

    Expected output:

    ingress "cafe-ingress" created
  3. Run the following command to obtain the domain name of the ALB instance:

    kubectl get ing

    Expected output:

    NAME           CLASS   HOSTS                ADDRESS                                               PORTS   AGE
    cafe-ingress   alb     demo.aliyundoc.com   alb-3lzokczr3c******z7.cn-hangzhou.alb.aliyuncs.com   80      50s
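
    If the ADDRESS column remains empty, you can inspect the events that the ALB Ingress controller records on the Ingress to find the cause. A minimal sketch:

    # Show the Ingress details, including events from the ALB Ingress controller.
    kubectl describe ingress cafe-ingress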

Step 6: Access Services

Method 1: Access Services by using the domain name of the ALB instance

  1. Run the following command to use the domain name of the ALB instance to access the coffee Service:

    curl -H "Host: demo.aliyundoc.com" http://alb-3lzokczr3c******z7.cn-hangzhou.alb.aliyuncs.com/coffee
  2. Run the following command to use the domain name of the ALB instance to access the tea Service:

    curl -H "Host: demo.aliyundoc.com" http://alb-3lzokczr3c******z7.cn-hangzhou.alb.aliyuncs.com/tea
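
    If you only need to confirm that routing works, you can check the HTTP status code instead of printing the full response body. A minimal sketch that reuses the placeholder ALB domain name from the previous steps:

    # Expect an HTTP 200 status code from each path if the Ingress rules match.
    curl -s -o /dev/null -w "%{http_code}\n" -H "Host: demo.aliyundoc.com" http://alb-3lzokczr3c******z7.cn-hangzhou.alb.aliyuncs.com/coffee
    curl -s -o /dev/null -w "%{http_code}\n" -H "Host: demo.aliyundoc.com" http://alb-3lzokczr3c******z7.cn-hangzhou.alb.aliyuncs.com/tea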

Method 2: Access Services by using a custom domain name

  1. Use a CNAME record to map a custom domain name to the domain name of the ALB instance created in Step 3: Create an AlbConfig.

    For more information, see (Optional) Step 4: Configure a CNAME record. In this example, the custom domain name demo.aliyundoc.com is mapped to the public domain name of the ALB instance.

  2. Run the following command to use the ALB instance to access the coffee Service:

    curl http://demo.aliyundoc.com/coffee
  3. Run the following command to use the ALB instance to access the tea Service:

    curl http://demo.aliyundoc.com/tea
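
    To troubleshoot name resolution, you can confirm that the CNAME record of the custom domain name points to the ALB domain name. A minimal sketch using dig (nslookup works as well):

    # The answer should be the domain name of the ALB instance.
    dig +short demo.aliyundoc.com CNAME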

References