
Container Service for Kubernetes:Authorize an ACK dedicated cluster to access the ALB Ingress controller

Last Updated:Jun 24, 2024

To use an Application Load Balancer (ALB) Ingress to access Services deployed in an ACK dedicated cluster, you need to first grant the cluster permissions on the ALB Ingress controller. This topic describes how to authorize an ACK dedicated cluster to access the ALB Ingress controller.

Usage notes

You need to authorize a cluster to access the ALB Ingress controller only if the cluster is an ACK dedicated cluster. You can skip this step if the cluster is an ACK managed cluster or ACK Serverless cluster.

Procedure

  1. Create a custom Resource Access Management (RAM) policy.

    1. Log on to the RAM console. In the left-side navigation pane, choose Permissions > Policies.

    2. On the Policies page, click Create Policy.

    3. On the Create Policy page, click the JSON tab.

    4. Copy the following policy document into the code editor and click Next to edit the policy information.

      {
          "Version": "1",
          "Statement": [
              {
                  "Action": [
                      "alb:TagResources",
                      "alb:UnTagResources",
                      "alb:ListServerGroups",
                      "alb:ListServerGroupServers",
                      "alb:AddServersToServerGroup",
                      "alb:RemoveServersFromServerGroup",
                      "alb:ReplaceServersInServerGroup",
                      "alb:CreateLoadBalancer",
                      "alb:DeleteLoadBalancer",
                      "alb:UpdateLoadBalancerAttribute",
                      "alb:UpdateLoadBalancerEdition",
                      "alb:EnableLoadBalancerAccessLog",
                      "alb:DisableLoadBalancerAccessLog",
                      "alb:EnableDeletionProtection",
                      "alb:DisableDeletionProtection",
                      "alb:ListLoadBalancers",
                      "alb:GetLoadBalancerAttribute",
                      "alb:ListListeners",
                      "alb:CreateListener",
                      "alb:GetListenerAttribute",
                      "alb:UpdateListenerAttribute",
                      "alb:ListListenerCertificates",
                      "alb:AssociateAdditionalCertificatesWithListener",
                      "alb:DissociateAdditionalCertificatesFromListener",
                      "alb:DeleteListener",
                      "alb:CreateRule",
                      "alb:DeleteRule",
                      "alb:UpdateRuleAttribute",
                      "alb:CreateRules",
                      "alb:UpdateRulesAttribute",
                      "alb:DeleteRules",
                      "alb:ListRules",
                      "alb:CreateServerGroup",
                      "alb:DeleteServerGroup",
                      "alb:UpdateServerGroupAttribute",
                      "alb:DescribeZones",
                      "alb:CreateAcl",
                      "alb:DeleteAcl",
                      "alb:ListAcls",
                      "alb:AddEntriesToAcl",
                      "alb:AssociateAclsWithListener",
                      "alb:ListAclEntries",
                      "alb:RemoveEntriesFromAcl",
                      "alb:DissociateAclsFromListener",
                      "alb:EnableLoadBalancerIpv6Internet",
                      "alb:DisableLoadBalancerIpv6Internet"
                  ],
                  "Resource": "*",
                  "Effect": "Allow"
              },
              {
                  "Action": "ram:CreateServiceLinkedRole",
                  "Resource": "*",
                  "Effect": "Allow",
                  "Condition": {
                      "StringEquals": {
                          "ram:ServiceName": [
                              "alb.aliyuncs.com",
                              "audit.log.aliyuncs.com",
                              "logdelivery.alb.aliyuncs.com"
                          ]
                      }
                  }
              },
              {
                  "Action": [
                      "yundun-cert:DescribeSSLCertificateList",
                      "yundun-cert:DescribeSSLCertificatePublicKeyDetail",
                      "yundun-cert:CreateSSLCertificateWithName",
                      "yundun-cert:DeleteSSLCertificate"
                  ],
                  "Resource": "*",
                  "Effect": "Allow"
              }
          ]
      }
      Note

      To specify multiple actions, separate the actions with commas (,).

    5. Specify the Name and Description fields.

    6. Click OK.
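
    Before you paste the policy into the console, you can check locally that the document is well-formed JSON; a stray trailing comma after the last action is the most common copy-paste error. The following sketch writes a trimmed copy of the policy (shortened here for illustration; in practice, use the full action list from the step above) to a hypothetical file named alb-policy.json and validates it:

```shell
# Write a trimmed copy of the custom RAM policy to a local file.
# (alb-policy.json is a hypothetical file name; paste the full action
# list from the step above in place of the two sample actions.)
cat > alb-policy.json <<'EOF'
{
    "Version": "1",
    "Statement": [
        {
            "Action": [
                "alb:ListLoadBalancers",
                "alb:GetLoadBalancerAttribute"
            ],
            "Resource": "*",
            "Effect": "Allow"
        }
    ]
}
EOF

# Validate that the policy document is well-formed JSON before you paste
# it into the RAM console code editor.
python3 -m json.tool alb-policy.json > /dev/null && echo "policy JSON is valid"
```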

  2. Attach the RAM policy to the worker RAM role used by your cluster.

    1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

    2. On the Clusters page, find the cluster that you want to manage and click its name. On the page that appears, click the Cluster Resources tab.

    3. On the Cluster Resources tab, click the hyperlink next to the Worker RAM Role field to log on to the RAM console.

    4. On the Permissions tab, click Grant Permission. In the Grant Permission panel, select Custom Policy from the drop-down list and select the custom policy that you created in the previous step.

    5. Click Grant permissions.

    6. Click Close.
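
    You can also confirm the attachment from the command line. This is a sketch that assumes the Alibaba Cloud CLI (aliyun) is installed and configured; KubernetesWorkerRole-example is a placeholder — substitute the worker RAM role name displayed on the Cluster Resources tab:

```shell
# List the policies attached to the worker RAM role to confirm that the
# custom policy from the previous step is among them.
# KubernetesWorkerRole-example is a placeholder role name.
if command -v aliyun >/dev/null 2>&1; then
    aliyun ram ListPoliciesForRole --RoleName KubernetesWorkerRole-example
else
    echo "aliyun CLI not found; check the role's Permissions tab in the RAM console instead"
fi
```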

  3. Check whether a RAM role is attached to the Elastic Compute Service (ECS) instance.

    1. In the left-side navigation pane of the details page, choose Nodes > Nodes.

    2. On the Nodes page, click the ID of the node that you want to manage, such as i-2ze5d2qi9iy90pzb****.

    3. On the page that appears, click the Instance Details tab. Then, check whether the RAM Role parameter in the Other Information section displays the RAM role of the ECS instance.

      If no RAM role exists, assign a RAM role to the ECS instance. For more information, see Step 2: Create an ECS instance and attach the RAM role to the instance.
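
    As a quick check from the node itself, the ECS instance metadata service (reachable at the fixed address 100.100.100.200 on Alibaba Cloud instances) also reports the attached RAM role. A minimal sketch, intended to be run on a cluster node; outside Alibaba Cloud the endpoint is unreachable and the command reports that instead:

```shell
# Query the ECS instance metadata service for the RAM role attached to
# this node. On an instance with a role, this prints the role name; the
# fallback message is printed when the metadata endpoint is not reachable
# (for example, when the command is run outside Alibaba Cloud).
METADATA=http://100.100.100.200/latest/meta-data
curl -s --connect-timeout 2 "$METADATA/ram/security-credentials/" \
    || echo "metadata service not reachable from this host"
```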

  4. Delete the alb-ingress-controller pod and check the status of the controller after the pod is recreated.

    Important

    We recommend that you perform this step during off-peak hours.

    1. Run the following command to query the name of the alb-ingress-controller pod:

      kubectl -n kube-system get pod | grep alb-ingress-controller

      Expected output:

      NAME                          READY   STATUS    RESTARTS   AGE
      alb-ingress-controller-***    1/1     Running   0          60s
    2. Run the following command to delete the pod of alb-ingress-controller:

      Replace alb-ingress-controller-*** with the pod name that you obtained in the previous step.

      kubectl -n kube-system delete pod alb-ingress-controller-***

      Expected output:

      pod "alb-ingress-controller-***" deleted
    3. Wait a few minutes and run the following command to query the status of the recreated pod:

      kubectl -n kube-system get pod

      Expected output:

      NAME                          READY   STATUS    RESTARTS   AGE
      alb-ingress-controller-***2   1/1     Running   0          60s

      The output indicates that the recreated pod is in the Running state.
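
    The three substeps above can also be collapsed into a Deployment restart. This sketch assumes the controller runs as a Deployment named alb-ingress-controller in the kube-system namespace (inferred from the pod names shown above); like the manual deletion, it briefly interrupts the controller, so run it during off-peak hours. The guard on the first line makes the script a no-op on machines without kubectl:

```shell
# Restart the ALB Ingress controller by rolling its Deployment, then wait
# until the recreated pod is Ready. The Deployment name is an assumption
# inferred from the pod names shown above.
if command -v kubectl >/dev/null 2>&1; then
    kubectl -n kube-system rollout restart deployment alb-ingress-controller
    kubectl -n kube-system rollout status deployment alb-ingress-controller --timeout=300s
else
    echo "kubectl not found; run this on a machine with access to the cluster"
fi
```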

What to do next

For more information about how to use an ALB Ingress to access Services in an ACK dedicated cluster, see Access Services by using an ALB Ingress.