How to use RAM Role to authenticate an ACK container cluster on Alibaba Cloud - Alibaba Cloud Developer Community

If you administer an ACK cluster (Alibaba Cloud Container Service for Kubernetes), you often need to create RAM sub-accounts for developers and grant them permissions. When multiple developers need the same ACK cluster permissions, creating and authorizing a separate sub-account for each of them is tedious and error-prone.

This topic describes how to configure an ACK cluster to use RAM Role for authentication based on the ack-ram-authenticator project.

0. Step overview

(1) In the RAM console, create the RAM users kubernetes-dev, dev01, dev02 ... devN and the RAM Role KubernetesDev, and authorize kubernetes-dev.
(2) Deploy and run the ack-ram-authenticator server in the ACK cluster.
(3) Configure the ACK cluster API server to use the ack-ram-authenticator server.
(4) Configure kubectl to use the authentication token provided by ack-ram-authenticator.

1. Create sub-accounts and RAM Role in the RAM console

1.1 Create the sub-accounts kubernetes-dev, dev01, dev02 ... devN

Grant the AliyunSTSAssumeRoleAccess system policy to dev01, dev02 ... devN:

1.2 Grant developer permissions to the sub-account kubernetes-dev in the ACK cluster

Follow the instructions in the console to complete the authorization:
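Console authorization ultimately materializes as RBAC objects in the cluster, keyed by the sub-account UID (the verification output at the end of this article shows users identified that way). As a rough illustration only, such a binding looks like this; the ClusterRole name here is hypothetical, since ACK manages the actual role names itself:

```yaml
# Illustrative only: ACK creates bindings of roughly this shape when you
# authorize a sub-account in the console. The ClusterRole name "cs:dev"
# is a hypothetical placeholder; ACK manages the real role names.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dev-binding
subjects:
- kind: User
  name: "2377xxxx"   # the kubernetes-dev sub-account UID
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cs:dev       # hypothetical developer-level role
  apiGroup: rbac.authorization.k8s.io
```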

1.3 Create the RAM Role KubernetesDev
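When creating the KubernetesDev role, set its trust policy so that RAM users under your account can assume it via STS. A minimal sketch, with the account UID as a placeholder:

```json
{
  "Version": "1",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Effect": "Allow",
      "Principal": {
        "RAM": [
          "acs:ram::<your main account uid>:root"
        ]
      }
    }
  ]
}
```

With this trust policy in place, any sub-account that also holds the AliyunSTSAssumeRoleAccess policy (granted in 1.1) can assume KubernetesDev.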

2. Deploy and run the ack-ram-authenticator server

$ git clone https://github.com/haoshuwei/ack-ram-authenticator

The ConfigMap configuration (from the example.yaml file in the repository) is as follows:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: kube-system
  name: ack-ram-authenticator
  labels:
    k8s-app: ack-ram-authenticator
data:
  config.yaml: |
    # a unique-per-cluster identifier to prevent replay attacks
    # (good choices are a random token or a domain name that will be unique to your cluster)
    clusterID: <your cluster id>
    server:
      # each mapRoles entry maps a RAM role to a username and set of groups
      # Each username and group can optionally contain template parameters:
      #  1) "{{AccountID}}" is the 16 digit RAM ID.
      #  2) "{{SessionName}}" is the role session name.
      mapRoles:
      # statically map acs:ram::<your main account uid>:role/KubernetesDev to the kubernetes-dev user
      - roleARN: acs:ram::<your main account uid>:role/KubernetesDev
        username: 2377xxxx # <your subaccount kubernetes-dev uid>
$ kubectl apply -f ack-ram-authenticator-cm.yaml

Deploy the DaemonSet:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  namespace: kube-system
  name: ack-ram-authenticator
  labels:
    k8s-app: ack-ram-authenticator
spec:
  # apps/v1 (extensions/v1beta1 was removed in Kubernetes 1.16)
  # requires an explicit selector matching the pod template labels
  selector:
    matchLabels:
      k8s-app: ack-ram-authenticator
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
      labels:
        k8s-app: ack-ram-authenticator
    spec:
      # run on the host network (don't depend on CNI)
      hostNetwork: true

      # run on each master node
      nodeSelector:
        node-role.kubernetes.io/master: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      - key: CriticalAddonsOnly
        operator: Exists
      containers:
      - name: ack-ram-authenticator
        image: registry.cn-hangzhou.aliyuncs.com/acs/ack-ram-authenticator:v1.0.1
        imagePullPolicy: Always
        args:
        - server
        - --config=/etc/ack-ram-authenticator/config.yaml
        - --state-dir=/var/ack-ram-authenticator
        - --generate-kubeconfig=/etc/kubernetes/ack-ram-authenticator/kubeconfig.yaml

        resources:
          requests:
            memory: 20Mi
            cpu: 10m
          limits:
            memory: 20Mi
            cpu: 100m

        volumeMounts:
        - name: config
          mountPath: /etc/ack-ram-authenticator/
        - name: state
          mountPath: /var/ack-ram-authenticator/
        - name: output
          mountPath: /etc/kubernetes/ack-ram-authenticator/

      volumes:
      - name: config
        configMap:
          name: ack-ram-authenticator
      - name: output
        hostPath:
          path: /etc/kubernetes/ack-ram-authenticator/
      - name: state
        hostPath:
          path: /var/ack-ram-authenticator/
$ kubectl apply -f ack-ram-authenticator-ds.yaml

Check that ack-ram-authenticator is running properly on the three master nodes of the ACK cluster:

$ kubectl -n kube-system get po|grep ram
ack-ram-authenticator-7m92f                         1/1     Running   0          42s
ack-ram-authenticator-fqhn8                         1/1     Running   0          42s
ack-ram-authenticator-xrxbs                         1/1     Running   0          42s

3. Configure the ACK cluster Apiserver to use the ack-ram-authenticator server

The Kubernetes API server integrates with ACK RAM Authenticator through a token authentication webhook. When the ACK RAM Authenticator server starts, it generates a webhook configuration file and saves it on the host file system, so we need to point the API server at this file:
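Concretely, the API server POSTs a TokenReview object to the webhook for each bearer token it cannot verify itself; the authenticator validates the token against STS and answers with the mapped identity. An illustrative response, with placeholder values, would look like this:

```json
{
  "apiVersion": "authentication.k8s.io/v1beta1",
  "kind": "TokenReview",
  "status": {
    "authenticated": true,
    "user": {
      "username": "2377xxxx",
      "uid": "acs:ram::<your main account uid>:role/KubernetesDev",
      "groups": []
    }
  }
}
```

The username here is whatever the mapRoles entry in the ConfigMap maps the role to, which is why it matches the kubernetes-dev sub-account UID.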

Modify the API server configuration /etc/kubernetes/manifests/kube-apiserver.yaml on each of the three masters and add the following fields:

spec.containers.command:

--authentication-token-webhook-config-file=/etc/kubernetes/ack-ram-authenticator/kubeconfig.yaml

spec.containers.volumeMounts:

- mountPath: /etc/kubernetes/ack-ram-authenticator/kubeconfig.yaml
  name: ack-ram-authenticator
  readOnly: true

spec.volumes:

- hostPath:
    path: /etc/kubernetes/ack-ram-authenticator/kubeconfig.yaml
    type: FileOrCreate
  name: ack-ram-authenticator

Restart kubelet for the change to take effect:

$ systemctl restart kubelet.service

4. Set kubectl to use the authentication token provided by ack-ram-authenticator

Configure a kubeconfig file for developers to use: take the kubeconfig of the kubernetes-dev sub-account (obtained from the console) and modify it as follows:

apiVersion: v1
clusters:
- cluster:
    server: https://xxx:6443
    certificate-authority-data: xxx
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: "2377xxx"
  name: 2377xxx-xxx
current-context: 2377xxx-xxx
kind: Config
preferences: {}
# the section below is the modified part
users:
- name: "2377xxx"
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: ack-ram-authenticator
      args:
        - "token"
        - "-i"
        - "<your cluster id>"
        - "-r"
        - "acs:ram::xxxxxx:role/KubernetesDev"

The kubeconfig file can now be shared with developers to download and use.
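When kubectl uses this kubeconfig, it invokes the exec plugin, which is expected to print an ExecCredential object on stdout; kubectl then sends status.token as the bearer token on every request. Illustratively (the token value is a placeholder):

```json
{
  "apiVersion": "client.authentication.k8s.io/v1alpha1",
  "kind": "ExecCredential",
  "spec": {},
  "status": {
    "token": "<pre-signed STS token generated by ack-ram-authenticator>"
  }
}
```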

Before using the shared kubeconfig file, developers must install the ack-ram-authenticator client binary in their own environment:

$ go get -u -v github.com/AliyunContainerService/ack-ram-authenticator/cmd/ack-ram-authenticator

The dev01, dev02 ... devN sub-accounts each use their own AccessKey pair in the configuration file ~/.acs/credentials:

{
  "AcsAccessKeyId": "xxxxxx",
  "AcsAccessKeySecret": "xxxxxx"
}
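To put this file in place, a developer can run the following (the key values are placeholders; substitute a real AK pair):

```shell
# Create the credentials file that ack-ram-authenticator reads.
# "xxxxxx" values are placeholders for the developer's own AK pair.
mkdir -p "$HOME/.acs"
cat > "$HOME/.acs/credentials" <<'EOF'
{
  "AcsAccessKeyId": "xxxxxx",
  "AcsAccessKeySecret": "xxxxxx"
}
EOF
# Restrict permissions, since the file holds a secret
chmod 600 "$HOME/.acs/credentials"
```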

Now dev01, dev02 ... devN can use the shared kubeconfig to access cluster resources, and all of them are mapped to the kubernetes-dev permissions.

See the ACK authorization documentation for a description of the access permissions of the different roles in ACK.

Verification:

$ kubectl get po
NAME                      READY   STATUS    RESTARTS   AGE
busybox-c5bd49fb9-n26zj   1/1     Running   0          3d3h
nginx-5966f7d8c5-rtzb6    1/1     Running   0          3d2h
$ kubectl get no
Error from server (Forbidden): nodes is forbidden: User "237753164652952730" cannot list resource "nodes" in API group "" at the cluster scope