Container Service for Kubernetes: Build a hybrid cloud cluster and add ECS instances to the cluster

Last Updated: Sep 28, 2023

You can build a hybrid cloud cluster by using a registered cluster to register a self-managed Kubernetes cluster in a data center to Container Service for Kubernetes (ACK). After you build a hybrid cloud cluster, you can add Elastic Compute Service (ECS) instances to the cluster and centrally manage the cloud resources and on-premises resources that belong to the cluster. This topic describes how to build a hybrid cloud cluster by registering a self-managed Kubernetes cluster that uses the Calico network plug-in to ACK.

Prerequisites

  • The data center in which the self-managed cluster resides is connected to the virtual private cloud (VPC) in which the registered cluster is deployed. The computing nodes and containers in the data center can communicate with the computing nodes and containers in the VPC. You can use Cloud Enterprise Network (CEN) to establish the network connection. For more information, see Overview. A sample connectivity check is provided after this list.

  • The self-managed Kubernetes cluster is registered to ACK by using the registered cluster.

  • The cloud computing nodes that are added to the registered cluster can access the API server of the self-managed Kubernetes cluster in the data center.

  • A kubectl client is connected to the registered cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
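
Before you proceed, you may want to confirm the connectivity that the preceding prerequisites describe. The following check is a minimal sketch that assumes the CIDR plan used in this topic; the node and pod IP addresses are hypothetical examples.

  # Run on a cloud ECS node. The addresses below are hypothetical examples
  # based on the CIDR plan in this topic.

  # Verify that an on-premises node (in 192.168.0.0/24) is reachable:
  ping -c 3 192.168.0.10

  # Verify that the on-premises pod CIDR (10.100.0.0/16) is routable:
  ping -c 3 10.100.0.23

  # Verify that the API server of the self-managed cluster is reachable.
  # Replace <port> with the port of your API server:
  curl -k https://10.200.1.253:<port>/healthz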

Hybrid cloud cluster architecture

Calico is commonly used in self-managed Kubernetes clusters. In this topic, the self-managed cluster uses the route reflector mode of the Calico network plug-in. We recommend that you choose a network plug-in that is optimized for your cloud platform to manage container networks. ACK uses the Terway network plug-in to manage container networks. The following figure shows the networking architecture of the hybrid cloud cluster.

[Figure: networking architecture of the hybrid cloud cluster]

The private CIDR block of the data center is 192.168.0.0/24 and the CIDR block of the container network is 10.100.0.0/16. The CIDR block of the VPC is 10.0.0.0/8, the vSwitch CIDR block of the computing nodes is 10.10.24.0/24, and the vSwitch CIDR block of the pods is 10.10.25.0/24. The registered cluster uses the one ENI for multi-pod mode of the Terway network plug-in.

Use a registered cluster to build a hybrid cloud cluster

  1. Configure a network plug-in that runs in the data center and a network plug-in that runs on the cloud.

    To build a hybrid cloud cluster, you must configure a network plug-in that runs only in the data center and a network plug-in that runs only on the cloud.

    The alibabacloud.com/external=true label is automatically added to the ECS instances that are added to a registered cluster. Therefore, you must configure node affinity settings for the Calico pods to prevent the pods from being scheduled to the nodes in the cloud. Example:

    cat <<EOF > calico-ds.patch
    spec:
      template:
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: alibabacloud.com/external
                    operator: NotIn
                    values:
                    - "true"
                  - key: type
                    operator: NotIn
                    values:
                    - "virtual-kubelet"
    EOF
    kubectl -n kube-system patch ds calico-node -p "$(cat calico-ds.patch)"
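
    After the patch is applied, you can verify that the Calico pods are not scheduled to cloud nodes. The following check is a sketch that assumes the standard Calico manifest, in which the calico-node pods carry the k8s-app=calico-node label:

    # List the calico-node pods and the nodes they run on. No pod should be
    # placed on a node that has the alibabacloud.com/external=true label.
    kubectl -n kube-system get pods -l k8s-app=calico-node -o wide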
  2. Configure RAM permissions for the Terway network plug-in.

    Use onectl

    1. Install and configure onectl on your on-premises machine. For more information, see Use onectl to manage registered clusters.

    2. Run the following command to configure RAM permissions for the Terway plug-in:

      onectl ram-user grant --addon terway-eniip

      Expected output:

      Ram policy ack-one-registered-cluster-policy-terway-eniip granted to ram user ack-one-user-ce313528c3 successfully.

    Use the console

    Use the following policy to grant RAM permissions to the Terway network plug-in. For more information, see Grant permissions to RAM users.

    {
        "Version": "1",
        "Statement": [
            {
                "Action": [
                    "ecs:CreateNetworkInterface",
                    "ecs:DescribeNetworkInterfaces",
                    "ecs:AttachNetworkInterface",
                    "ecs:DetachNetworkInterface",
                    "ecs:DeleteNetworkInterface",
                    "ecs:DescribeInstanceAttribute",
                    "ecs:AssignPrivateIpAddresses",
                    "ecs:UnassignPrivateIpAddresses",
                    "ecs:DescribeInstances",
                    "ecs:ModifyNetworkInterfaceAttribute"
                ],
                "Resource": [
                    "*"
                ],
                "Effect": "Allow"
            },
            {
                "Action": [
                    "vpc:DescribeVSwitches"
                ],
                "Resource": [
                    "*"
                ],
                "Effect": "Allow"
            }
        ]
    }
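
    Alternatively, you can create and attach the policy by using the Alibaba Cloud CLI. The following sketch is a hypothetical example; the policy name, the file name terway-policy.json, and the RAM user name are placeholders that you must replace with your own values:

    # Create a custom RAM policy from the JSON document above, which is
    # assumed to be saved as terway-policy.json:
    aliyun ram CreatePolicy --PolicyName terway-eniip-policy --PolicyDocument "$(cat terway-policy.json)"

    # Attach the custom policy to the RAM user that the registered cluster uses:
    aliyun ram AttachPolicyToUser --PolicyType Custom --PolicyName terway-eniip-policy --UserName <your-ram-user>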
  3. Install the Terway plug-in.

    Use onectl

    Run the following command to install the Terway plug-in:

    onectl addon install terway-eniip

    Expected output:

    Addon terway-eniip, version **** installed.

    Use the console

    1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

    2. On the Clusters page, click the name of the cluster that you want to manage and choose Operations > Add-ons in the left-side navigation pane.

    3. On the Add-ons page, search for the terway-eniip component, click Install in the lower-right corner of the card, and then click OK.

  4. After you connect to the cluster by using kubectl, run the following command in the registered cluster to view the DaemonSet that is created for Terway.

    The Terway pods can be scheduled only to nodes in the cloud.

    kubectl -n kube-system get ds | grep terway

    Expected output:

    terway-eniip   0         0         0       0            0           alibabacloud.com/external=true      16s

    The output indicates that the Terway pods are scheduled only to ECS instances with the alibabacloud.com/external=true label.
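
    To confirm which nodes the DaemonSet selects, you can list the nodes that carry the label. This is a plain kubectl label selector query:

    # List the ECS instances that were added to the registered cluster. The
    # alibabacloud.com/external=true label is added to them automatically.
    kubectl get nodes -l alibabacloud.com/external=true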

  5. Run the following command to modify the eni-config ConfigMap and configure the eni_conf.access_key and eni_conf.access_secret fields:

    kubectl -n kube-system edit cm eni-config

    The following template provides an example of eni-config:

    kind: ConfigMap
    apiVersion: v1
    metadata:
     name: eni-config
     namespace: kube-system
    data:
     eni_conf: |
      {
       "version": "1",
       "max_pool_size": 5,
       "min_pool_size": 0,
       "vswitches": {{.PodVswitchId}},
       "eni_tags": {"ack.aliyun.com":"{{.ClusterID}}"},
       "service_cidr": "{{.ServiceCIDR}}",
       "security_group": "{{.SecurityGroupId}}",
       "access_key": "",
       "access_secret": "",
       "vswitch_selection_policy": "ordered"
      }
     10-terway.conf: |
      {
       "cniVersion": "0.3.0",
       "name": "terway",
       "type": "terway"
      }
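
    The updated configuration typically takes effect only after the Terway pods are recreated. The following sketch assumes the terway-eniip DaemonSet name shown in the previous step:

    # Recreate the Terway pods so that they load the modified eni-config:
    kubectl -n kube-system rollout restart ds terway-eniip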
  6. Create a custom node initialization script.

    1. Create a custom node initialization script based on the original node initialization script of the self-managed Kubernetes cluster.

      In this example, the self-managed Kubernetes cluster is initialized by using the kubeadm tool. The following code block is an example of the original initialization script that is used to add nodes to the self-managed Kubernetes cluster. The script is named init-node.sh.

      #!/bin/bash
      
      export K8S_VERSION=1.24.3
      
      export REGISTRY_MIRROR=https://registry.cn-hangzhou.aliyuncs.com
      cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
      overlay
      br_netfilter
      EOF
      modprobe overlay
      modprobe br_netfilter
      cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
      net.bridge.bridge-nf-call-iptables  = 1
      net.ipv4.ip_forward                 = 1
      net.bridge.bridge-nf-call-ip6tables = 1
      EOF
      sysctl --system
      yum remove -y containerd.io
      yum install -y yum-utils device-mapper-persistent-data lvm2
      yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
      yum install -y containerd.io-1.4.3
      mkdir -p /etc/containerd
      containerd config default > /etc/containerd/config.toml
      sed -i "s#k8s.gcr.io#registry.aliyuncs.com/k8sxio#g"  /etc/containerd/config.toml
      sed -i '/containerd.runtimes.runc.options/a\ \ \ \ \ \ \ \ \ \ \ \ SystemdCgroup = true' /etc/containerd/config.toml
      sed -i "s#https://registry-1.docker.io#${REGISTRY_MIRROR}#g"  /etc/containerd/config.toml
      systemctl daemon-reload
      systemctl enable containerd
      systemctl restart containerd
      yum install -y nfs-utils
      yum install -y wget
      cat <<EOF > /etc/yum.repos.d/kubernetes.repo
      [kubernetes]
      name=Kubernetes
      baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
      enabled=1
      gpgcheck=0
      repo_gpgcheck=0
      gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
             http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
      EOF
      yum remove -y kubelet kubeadm kubectl
      yum install -y kubelet-$K8S_VERSION kubeadm-$K8S_VERSION kubectl-$K8S_VERSION
      crictl config runtime-endpoint /run/containerd/containerd.sock
      systemctl daemon-reload
      systemctl enable kubelet && systemctl start kubelet
      containerd --version
      kubelet --version
      
      kubeadm join 10.200.1.253:XXXX --token cqgql5.1mdcjcvhszol**** --discovery-token-unsafe-skip-ca-verification
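
      The bootstrap token in the last line of the script expires after a period of time. If it has expired, you can regenerate the full join command on a control plane node of the self-managed cluster. This is standard kubeadm usage:

      # Create a new bootstrap token and print the matching kubeadm join command:
      kubeadm token create --print-join-command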

      Create a custom node initialization script named init-node-ecs.sh for the registered cluster based on the init-node.sh script. The custom script receives and configures the following environment variables that are issued by the registered cluster: ALIBABA_CLOUD_PROVIDER_ID, ALIBABA_CLOUD_NODE_NAME, ALIBABA_CLOUD_LABELS, and ALIBABA_CLOUD_TAINTS. The following code block is an example of the custom script:

      #!/bin/bash
      
      export K8S_VERSION=1.24.3
      
      export REGISTRY_MIRROR=https://registry.cn-hangzhou.aliyuncs.com
      cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
      overlay
      br_netfilter
      EOF
      modprobe overlay
      modprobe br_netfilter
      cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
      net.bridge.bridge-nf-call-iptables  = 1
      net.ipv4.ip_forward                 = 1
      net.bridge.bridge-nf-call-ip6tables = 1
      EOF
      sysctl --system
      yum remove -y containerd.io
      yum install -y yum-utils device-mapper-persistent-data lvm2
      yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
      yum install -y containerd.io-1.4.3
      mkdir -p /etc/containerd
      containerd config default > /etc/containerd/config.toml
      sed -i "s#k8s.gcr.io#registry.aliyuncs.com/k8sxio#g"  /etc/containerd/config.toml
      sed -i '/containerd.runtimes.runc.options/a\ \ \ \ \ \ \ \ \ \ \ \ SystemdCgroup = true' /etc/containerd/config.toml
      sed -i "s#https://registry-1.docker.io#${REGISTRY_MIRROR}#g"  /etc/containerd/config.toml
      systemctl daemon-reload
      systemctl enable containerd
      systemctl restart containerd
      yum install -y nfs-utils
      yum install -y wget
      cat <<EOF > /etc/yum.repos.d/kubernetes.repo
      [kubernetes]
      name=Kubernetes
      baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
      enabled=1
      gpgcheck=0
      repo_gpgcheck=0
      gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
             http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
      EOF
      yum remove -y kubelet kubeadm kubectl
      yum install -y kubelet-$K8S_VERSION kubeadm-$K8S_VERSION kubectl-$K8S_VERSION
      crictl config runtime-endpoint /run/containerd/containerd.sock
      systemctl daemon-reload
      systemctl enable kubelet && systemctl start kubelet
      containerd --version
      kubelet --version
      
      ####### The following content is added.
      # Configure node labels, taints, the node name, and the Provider ID.
      #KUBEADM_CONFIG_FILE="/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf"
      KUBELET_CONFIG_FILE="/etc/sysconfig/kubelet"
      #KUBELET_CONFIG_FILE="/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
      if [[ $ALIBABA_CLOUD_LABELS != "" ]];then
        option="--node-labels"
        if grep -- "${option}=" $KUBELET_CONFIG_FILE &> /dev/null;then
          sed -i "s@${option}=@${option}=${ALIBABA_CLOUD_LABELS},@g" $KUBELET_CONFIG_FILE
        elif grep "KUBELET_EXTRA_ARGS=" $KUBELET_CONFIG_FILE &> /dev/null;then
          sed -i "s@KUBELET_EXTRA_ARGS=@KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_LABELS} @g" $KUBELET_CONFIG_FILE
        else
          sed -i "/^\[Service\]/a\Environment=\"KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_LABELS}\"" $KUBELET_CONFIG_FILE
        fi
      fi
      
      if [[ $ALIBABA_CLOUD_TAINTS != "" ]];then
        option="--register-with-taints"
        if grep -- "${option}=" $KUBELET_CONFIG_FILE &> /dev/null;then
          sed -i "s@${option}=@${option}=${ALIBABA_CLOUD_TAINTS},@g" $KUBELET_CONFIG_FILE
        elif grep "KUBELET_EXTRA_ARGS=" $KUBELET_CONFIG_FILE &> /dev/null;then
          sed -i "s@KUBELET_EXTRA_ARGS=@KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_TAINTS} @g" $KUBELET_CONFIG_FILE
        else
          sed -i "/^\[Service\]/a\Environment=\"KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_TAINTS}\"" $KUBELET_CONFIG_FILE
        fi
      fi
      
      if [[ $ALIBABA_CLOUD_NODE_NAME != "" ]];then
        option="--hostname-override"
        if grep -- "${option}=" $KUBELET_CONFIG_FILE &> /dev/null;then
          sed -i "s@${option}=@${option}=${ALIBABA_CLOUD_NODE_NAME},@g" $KUBELET_CONFIG_FILE
        elif grep "KUBELET_EXTRA_ARGS=" $KUBELET_CONFIG_FILE &> /dev/null;then
          sed -i "s@KUBELET_EXTRA_ARGS=@KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_NODE_NAME} @g" $KUBELET_CONFIG_FILE
        else
          sed -i "/^\[Service\]/a\Environment=\"KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_NODE_NAME}\"" $KUBELET_CONFIG_FILE
        fi
      fi
      
      if [[ $ALIBABA_CLOUD_PROVIDER_ID != "" ]];then
        option="--provider-id"
        if grep -- "${option}=" $KUBELET_CONFIG_FILE &> /dev/null;then
          sed -i "s@${option}=@${option}=${ALIBABA_CLOUD_PROVIDER_ID},@g" $KUBELET_CONFIG_FILE
        elif grep "KUBELET_EXTRA_ARGS=" $KUBELET_CONFIG_FILE &> /dev/null;then
          sed -i "s@KUBELET_EXTRA_ARGS=@KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_PROVIDER_ID} @g" $KUBELET_CONFIG_FILE
        else
          sed -i "/^\[Service\]/a\Environment=\"KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_PROVIDER_ID}\"" $KUBELET_CONFIG_FILE
        fi
      fi
      
      # Reload systemd and restart the kubelet to apply the new configuration.
      systemctl daemon-reload
      systemctl enable kubelet && systemctl start kubelet
      
      ####### The preceding content is added.
      
      kubeadm join 10.200.1.253:XXXX --token cqgql5.1mdcjcvhszol**** --discovery-token-unsafe-skip-ca-verification
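
      The following values illustrate the shape of the environment variables that the custom script consumes. All of the values are hypothetical examples; the registered cluster issues the real values when a node is added:

      # Hypothetical examples only. Do not set these variables manually; the
      # registered cluster injects them when it runs the script.
      export ALIBABA_CLOUD_PROVIDER_ID="cn-hangzhou.i-bp1example****"
      export ALIBABA_CLOUD_NODE_NAME="cn-hangzhou.10.10.24.5"
      export ALIBABA_CLOUD_LABELS="alibabacloud.com/external=true,workload=cloud"
      export ALIBABA_CLOUD_TAINTS="workload=cloud:NoSchedule"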
    2. Save and configure the custom script.

      Save the custom script to an HTTP file server, such as an OSS bucket. In this example, the OSS path of the custom script is https://kubelet-****.oss-cn-hangzhou-internal.aliyuncs.com/init-node-ecs.sh.

      Set the addNodeScriptPath parameter in the following code block to https://kubelet-****.oss-cn-hangzhou-internal.aliyuncs.com/init-node-ecs.sh and save the change:

      apiVersion: v1
      data:
        addNodeScriptPath: https://kubelet-****.oss-cn-hangzhou-internal.aliyuncs.com/init-node-ecs.sh
      kind: ConfigMap
      metadata:
        name: ack-agent-config
        namespace: kube-system
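
      You can modify the ConfigMap directly with kubectl, in the same way as the eni-config ConfigMap in the earlier step:

      # Open the ConfigMap for editing and set the addNodeScriptPath field:
      kubectl -n kube-system edit cm ack-agent-config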

    After the preceding steps are complete, you can create node pools in the registered cluster and add ECS instances to the node pools.

  7. Create a node pool and add ECS instances to the node pool.

    1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

    2. On the Clusters page, click the name of the cluster that you want to manage and choose Nodes > Node Pools in the left-side navigation pane.

    3. On the Node Pools page, create a node pool and add ECS instances to the node pool. For more information, see Create a node pool.

References

  • Plan the container network for a cluster that uses the Terway network plug-in. For more information, see Plan CIDR blocks for an ACK cluster.

  • Connect a data center to a VPC. For more information, see Functions and features.

  • Create a registered cluster in a VPC and register a self-managed Kubernetes cluster to ACK. For more information, see Create a registered cluster in the ACK console.