The cloud controller manager (CCM) enables integration between Kubernetes and Alibaba Cloud services such as Classic Load Balancer (CLB) and Virtual Private Cloud (VPC). You can also use the CCM to associate a CLB instance with both nodes inside a Kubernetes cluster and Elastic Compute Service (ECS) instances outside the cluster. This prevents traffic interruptions during service migrations and allows you to distribute traffic across multiple Kubernetes clusters. Data backup and disaster recovery are also supported to ensure high availability of your services. This topic describes how to deploy the CCM in a self-managed Kubernetes cluster.
Prerequisites
Virtual nodes (vNodes) are deployed in a self-managed Kubernetes cluster.
If your Kubernetes cluster is deployed in an on-premises data center, the data center is able to communicate with Alibaba Cloud.
Background information
The CCM is a component provided by Alibaba Cloud that allows you to integrate Kubernetes with Alibaba Cloud services. The CCM has the following features:
Manage Classic Load Balancer (CLB) instances
If the service is of the LoadBalancer type, the CCM automatically creates a CLB instance for the service and configures listeners and backend server groups. When the ECS instance endpoints or cluster nodes in the vServer group of the service change, the CCM automatically updates the vServer groups of the CLB instance.
Enable cross-node communication
If Flannel is used as the network plug-in of a Kubernetes cluster, the CCM enables network connections between containers and nodes by writing the pod CIDR block to the route table of the VPC where the cluster is deployed. This enables cross-node communication. The feature is available immediately after the CCM is installed, without additional configuration.
For more information, see Cloud Controller Manager.
The CCM is open source. For more information about the project, see cloud-provider-alibaba-cloud.
Preparations
If ECS instances are not used as the nodes in your self-managed Kubernetes cluster, skip the preparations. If ECS instances are used as the nodes in your self-managed Kubernetes cluster, perform the following steps to change the value of providerID of the ECS instances so that the CCM can manage the routes of these nodes.
Change the value of providerID of the ECS instances.
kubectl patch node <node-name> -p '{"spec":{"providerID": "<region-id>.<ecs-id>"}}'
Notice: The value of providerID can be set only once. After providerID is set to a non-empty value, it cannot be changed. Proceed with caution and make sure that the value of providerID is correct.
Set the value of providerID in the <region-id>.<ecs-id> format. For example, if the hostname of an ECS instance that resides in the China (Beijing) region is k8s-node1 and the instance ID is i-2ze0thsxmrgawfzo****, run the following command:
kubectl patch node k8s-node1 -p '{"spec":{"providerID": "cn-beijing.i-2ze0thsxmrgawfzo****"}}'
Check whether the value of providerID is changed.
kubectl get nodes <node-name> -o yaml
In the command output, check whether the value of providerID in the spec field is changed. Sample command output:
spec:
  podCIDR: 10.XX.0.0/24
  podCIDRs:
  - 10.XX.0.0/24
  providerID: cn-beijing.i-2ze0thsxmrgawfzo****
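If the cluster contains multiple ECS nodes, you can also list the providerID values of all nodes at a time to confirm that none were missed. The following command is a minimal sketch that uses kubectl custom columns:
kubectl get nodes -o custom-columns=NAME:.metadata.name,PROVIDER_ID:.spec.providerID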
Procedure
Create a ConfigMap.
Save the AccessKey pair of your Alibaba Cloud account to environment variables.
export ACCESS_KEY_ID=LTAI********************
export ACCESS_KEY_SECRET=HAeS**************************
For more information about how to obtain an AccessKey ID and AccessKey secret, see Obtain an AccessKey pair.
Run the following script to create a ConfigMap.
Save the following content to configmap-ccm.sh and then run the script:
#!/bin/bash
## create ConfigMap kube-system/cloud-config for CCM.
accessKeyIDBase64=`echo -n "$ACCESS_KEY_ID" | base64 -w 0`
accessKeySecretBase64=`echo -n "$ACCESS_KEY_SECRET" | base64 -w 0`
cat <<EOF >cloud-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cloud-config
  namespace: kube-system
data:
  cloud-config.conf: |-
    {
        "Global": {
            "accessKeyID": "$accessKeyIDBase64",
            "accessKeySecret": "$accessKeySecretBase64"
        }
    }
EOF
kubectl create -f cloud-config.yaml
bash configmap-ccm.sh
After the script is run, a ConfigMap named cloud-config is created in the kube-system namespace.
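You can optionally verify that the ConfigMap exists and contains the cloud-config.conf key before you continue. For example:
kubectl -n kube-system get configmap cloud-config -o yaml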
Deploy the CCM.
Modify ${ImageVersion} and ${ClusterCIDR} in the following content, and then save the content to ccm.yaml.
- The CCM is open source. View the release notes of the CCM in GitHub to obtain ImageVersion. For more information, see CCM release notes.
- You can run the kubectl cluster-info dump | grep -m1 cluster-cidr command to view ClusterCIDR.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:cloud-controller-manager
rules:
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs:
      - get
      - list
      - update
      - create
  - apiGroups:
      - ""
    resources:
      - persistentvolumes
      - services
      - secrets
      - endpoints
      - serviceaccounts
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
      - list
      - watch
      - delete
      - patch
      - update
  - apiGroups:
      - ""
    resources:
      - services/status
    verbs:
      - update
      - patch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
      - update
  - apiGroups:
      - ""
    resources:
      - events
      - endpoints
    verbs:
      - create
      - patch
      - update
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cloud-controller-manager
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:cloud-controller-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:cloud-controller-manager
subjects:
  - kind: ServiceAccount
    name: cloud-controller-manager
    namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:shared-informers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:cloud-controller-manager
subjects:
  - kind: ServiceAccount
    name: shared-informers
    namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:cloud-node-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:cloud-controller-manager
subjects:
  - kind: ServiceAccount
    name: cloud-node-controller
    namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:pvl-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:cloud-controller-manager
subjects:
  - kind: ServiceAccount
    name: pvl-controller
    namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:route-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:cloud-controller-manager
subjects:
  - kind: ServiceAccount
    name: route-controller
    namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: cloud-controller-manager
    tier: control-plane
  name: cloud-controller-manager
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: cloud-controller-manager
      tier: control-plane
  template:
    metadata:
      labels:
        app: cloud-controller-manager
        tier: control-plane
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: cloud-controller-manager
      tolerations:
        - effect: NoSchedule
          operator: Exists
          key: node-role.kubernetes.io/master
        - effect: NoSchedule
          operator: Exists
          key: node.cloudprovider.kubernetes.io/uninitialized
      nodeSelector:
        node-role.kubernetes.io/master: ""
      containers:
        - command:
            - /cloud-controller-manager
            - --leader-elect=true
            - --cloud-provider=alicloud
            - --use-service-account-credentials=true
            - --cloud-config=/etc/kubernetes/config/cloud-config.conf
            - --configure-cloud-routes=true
            - --route-reconciliation-period=3m
            - --leader-elect-resource-lock=endpoints
            # replace ${ClusterCIDR} with your own cluster cidr
            # example: 172.16.0.0/16
            - --cluster-cidr=${ClusterCIDR}
          # replace ${ImageVersion} with the latest release version
          # example: v2.1.0
          image: registry.cn-hangzhou.aliyuncs.com/acs/cloud-controller-manager-amd64:${ImageVersion}
          livenessProbe:
            failureThreshold: 8
            httpGet:
              host: 127.0.0.1
              path: /healthz
              port: 10258
              scheme: HTTP
            initialDelaySeconds: 15
            timeoutSeconds: 15
          name: cloud-controller-manager
          resources:
            requests:
              cpu: 200m
          volumeMounts:
            - mountPath: /etc/kubernetes/
              name: k8s
            - mountPath: /etc/ssl/certs
              name: certs
            - mountPath: /etc/pki
              name: pki
            - mountPath: /etc/kubernetes/config
              name: cloud-config
      hostNetwork: true
      volumes:
        - hostPath:
            path: /etc/kubernetes
          name: k8s
        - hostPath:
            path: /etc/ssl/certs
          name: certs
        - hostPath:
            path: /etc/pki
          name: pki
        - configMap:
            defaultMode: 420
            items:
              - key: cloud-config.conf
                path: cloud-config.conf
            name: cloud-config
          name: cloud-config
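If you prefer not to edit the file by hand, you can substitute the two placeholders from the command line. The following sketch assumes GNU sed and uses v2.1.0 and 172.16.0.0/16 only as example values; replace them with the release version and the cluster CIDR of your own cluster:
# Example values only; replace with your own release version and cluster CIDR.
sed -i "s|\${ImageVersion}|v2.1.0|g; s|\${ClusterCIDR}|172.16.0.0/16|g" ccm.yaml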
Run the following command to deploy the CCM:
kubectl create -f ccm.yaml
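You can then check whether the CCM pods are running on the master nodes and inspect their logs. The following commands are a sketch that uses the labels and names defined in the preceding ccm.yaml:
kubectl -n kube-system get pods -l app=cloud-controller-manager -o wide
kubectl -n kube-system logs daemonset/cloud-controller-manager | tail -n 20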
Result verification
Create a LoadBalancer service and the endpoints of the service.
Save the following content to ccm-test.yaml.
Replace the image address with an image address in the region where the vNodes reside to prevent image pull failures.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
  annotations:
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: "intranet"
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: registry-vpc.cn-beijing.aliyuncs.com/eci_open/nginx:1.14.2
Run the following command to create the service and deployment:
kubectl create -f ccm-test.yaml
After the service and the deployment are created, the CCM automatically creates a CLB instance for the service and configures listeners and backend server groups.
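To obtain the address that the CCM assigned to the service, you can query the service and check the EXTERNAL-IP column. For example:
kubectl -n default get service nginx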
Check whether the service works normally.
Run the curl command to access the service address. If the NGINX welcome page is returned, the backend can be accessed through the service address and the service works as expected.
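The following commands are a sketch of this check. The jsonpath expression reads the address from the service status; because the preceding example creates an internal-facing CLB instance, run the commands from a machine inside the same VPC:
SERVICE_IP=$(kubectl -n default get service nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl "http://$SERVICE_IP"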