ECS node initialization configuration after ACK registration

Expand Alibaba Cloud ECS nodes for self-built Kubernetes clusters
Users need to set --provider-id=${ALIBABA_CLOUD_PROVIDE_ID} and append --node-labels=${ALIBABA_CLOUD_LABELS} in the node initialization script.

The values of the ALIBABA_CLOUD_PROVIDE_ID and ALIBABA_CLOUD_LABELS variables are as follows:

# ID of the ACK registration cluster
$ clusterID=xxxxx
# Query the ECS metadata service for the region ID and instance ID
$ aliyunRegionID=$(curl 100.100.100.200/latest/meta-data/region-id)
$ aliyunInstanceID=$(curl 100.100.100.200/latest/meta-data/instance-id)

$ ALIBABA_CLOUD_PROVIDE_ID=${aliyunRegionID}.${aliyunInstanceID}
$ ALIBABA_CLOUD_LABELS="ack.aliyun.com=${clusterID},alibabacloud.com/instance-id=${aliyunInstanceID},alibabacloud.com/external=true"
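For illustration only, the following sketch shows one way to wire these values into the kubelet flags from a node initialization script. The /etc/sysconfig/kubelet path and the KUBELET_EXTRA_ARGS mechanism are assumptions that depend on the OS and on how kubelet is installed and managed on the node:

# Assumed example: inject the flags through KUBELET_EXTRA_ARGS
# (the file path and mechanism vary with the OS and bootstrap tooling)
$ cat << EOF > /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--provider-id=${ALIBABA_CLOUD_PROVIDE_ID} --node-labels=${ALIBABA_CLOUD_LABELS}"
EOF
# Restart kubelet if it is already running so the new flags take effect
$ systemctl restart kubelet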

Mark existing nodes of self-built Kubernetes clusters in batches

After the self-built Kubernetes cluster is connected to the ACK registration cluster, you need to add node labels to the existing nodes. The labels serve the following purposes (a manual labeling example is sketched after the list):

1. ack.aliyun.com=${clusterID}: lets the ACK control plane identify Alibaba Cloud ECS nodes in the self-built Kubernetes cluster at the cluster level.
2. alibabacloud.com/instance-id=${aliyunInstanceID}: lets the ACK control plane identify Alibaba Cloud ECS nodes in the self-built Kubernetes cluster at the node level.
3. alibabacloud.com/external=true: identifies Alibaba Cloud ECS nodes in the self-built Kubernetes cluster for components such as Terway and CSI.
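For reference, the same three labels could also be applied to a single node by hand with kubectl; the node name below is a placeholder, and the batch method via GlobalJob is what the rest of this section describes:

$ kubectl label node <node-name> \
    ack.aliyun.com=${clusterID} \
    alibabacloud.com/instance-id=${aliyunInstanceID} \
    alibabacloud.com/external=true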

Deploy global-job-controller

$ cat << EOF > global-job-controller.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: global-job-controller
  namespace: kube-system
  labels:
    app: global-job-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: global-job-controller
  template:
    metadata:
      labels:
        app: global-job-controller
    spec:
      restartPolicy: Always
      serviceAccount: jobs
      containers:
      - name: global-job-controller
        image: registry.cn-hangzhou.aliyuncs.com/acs/global-job:v1.0.0.36-g0d1ac97-aliyun
        env:
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: WATCH_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: jobs
rules:
- apiGroups:
  - jobs.aliyun.com
  resources:
  - globaljobs
  verbs:
  - "*"
- apiGroups:
  - "*"
  resources:
  - pods
  - events
  - configmaps
  verbs:
  - "*"
- apiGroups:
  - "*"
  resources:
  - nodes
  verbs:
  - "*"
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - customresourcedefinitions
  verbs:
  - get
  - list
  - create

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: jobs-role-bind
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jobs
subjects:
- kind: ServiceAccount
  name: jobs
  namespace: kube-system

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jobs
  namespace: kube-system
EOF
$ kubectl apply -f global-job-controller.yaml
Wait for the global-job-controller to run normally.
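For example, one way to confirm that the controller is up is with standard kubectl status checks:

$ kubectl -n kube-system rollout status deployment/global-job-controller
$ kubectl -n kube-system get pods -l app=global-job-controller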

Deploy globaljob
$ export CLUSTER_ID=xxxxxx
$ cat << EOF > globaljob.yaml
apiVersion: jobs.aliyun.com/v1alpha1
kind: GlobalJob
metadata:
  name: globaljob
  namespace: kube-system
spec:
  maxParallel: 100
  terminalStrategy:
    type: Never
  template:
    spec:
      serviceAccountName: ack
      restartPolicy: Never
      containers:
      - name: globaljob
        image: registry.cn-hangzhou.aliyuncs.com/acs/marking-agent:v1.13.1.39-g4186808-aliyun
        imagePullPolicy: Always
        env:
        - name: REGISTRY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: CLUSTER_ID
          value: "$CLUSTER_ID"
EOF
$ kubectl apply -f globaljob.yaml

After the job has run, check whether the ECS nodes have been labeled correctly, and then release the resources created above.
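For example, one way to spot-check the labels, using the cluster ID exported earlier:

$ kubectl get nodes -l ack.aliyun.com=${CLUSTER_ID} --show-labels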

$ kubectl delete -f globaljob.yaml -f global-job-controller.yaml
