A hybrid cluster connects your on-premises self-managed Kubernetes cluster to Alibaba Cloud through a registered cluster. This setup enables you to scale your self-managed Kubernetes cluster with cloud-based compute nodes and manage both on-premises and cloud resources. This topic uses a self-managed Kubernetes cluster in a data center that uses the Calico container network plugin as an example to demonstrate how to create a hybrid cluster.
Prerequisites
- The network of your on-premises self-managed Kubernetes cluster and the virtual private cloud (VPC) used by the registered cluster must be connected. This includes connectivity between both the compute node networks and the container networks. You can use Cloud Enterprise Network (CEN) to establish this connection. For more information, see Establish multi-VPC connections in different scenarios.
- The self-managed cluster must be connected to the registered cluster by using the import agent configuration for private clusters that the registered cluster provides.
- Cloud-based compute nodes added through the registered cluster must be able to access the API server of your on-premises self-managed Kubernetes cluster.
- You have connected to the registered cluster by using kubectl. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
Hybrid elastic container cluster architecture
Self-managed Kubernetes clusters often use Calico for routing. This topic uses a self-managed cluster in a data center that uses Calico's route reflector mode as an example. For cloud environments, use network plugins customized for the specific cloud platform. Alibaba Cloud Container Service for Kubernetes (ACK) uses the Terway plugin to manage container networks. The following figure shows the network topology of a hybrid cluster.
In your data center, the private CIDR block is 192.168.0.0/24 and the container network CIDR block is 10.100.0.0/16. The Calico network plugin uses route reflector mode. On the cloud side, the VPC CIDR block is 10.0.0.0/8. The virtual switch CIDR block for compute nodes is 10.10.24.0/24, and the virtual switch CIDR block for pods is 10.10.25.0/24. The Terway network component uses shared mode.
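When planning these CIDR blocks, it helps to verify that the on-premises ranges and the cloud-side vSwitch ranges you choose do not overlap. The following pure-shell helper is a minimal sketch of such a check; the function names `ip2int` and `cidr_overlap` are ours for illustration, not part of any Alibaba Cloud tooling:

```shell
# Sketch: check whether two IPv4 CIDR blocks overlap.
ip2int() {
  # Convert a dotted-quad address to a 32-bit integer.
  old_ifs=$IFS; IFS=.
  set -- $1
  IFS=$old_ifs
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

cidr_overlap() {
  # Two CIDRs overlap if their network addresses match under the shorter prefix.
  net1=${1%/*}; p1=${1#*/}; net2=${2%/*}; p2=${2#*/}
  p=$(( p1 < p2 ? p1 : p2 ))
  mask=$(( p == 0 ? 0 : (0xFFFFFFFF << (32 - p)) & 0xFFFFFFFF ))
  if [ $(( $(ip2int "$net1") & mask )) -eq $(( $(ip2int "$net2") & mask )) ]; then
    echo yes
  else
    echo no
  fi
}

# On-premises node CIDR vs. cloud node vSwitch CIDR from the example topology:
cidr_overlap 192.168.0.0/24 10.10.24.0/24   # prints "no"
```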
Create a hybrid container cluster using a registered ACK cluster
- Configure the on-premises and cloud-based container network plugins.
In a hybrid cluster, ensure that the on-premises Calico plugin runs only on-premises and the cloud-based Terway component runs only in the cloud.
Cloud-based Elastic Compute Service (ECS) nodes added to a registered ACK cluster are automatically labeled with alibabacloud.com/external=true. To ensure that Calico pods in the data center run only on-premises, configure NodeAffinity for them. The following example demonstrates how to do this:
```shell
cat <<EOF > calico-ds.patch
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: alibabacloud.com/external
                operator: NotIn
                values:
                - "true"
              - key: type
                operator: NotIn
                values:
                - "virtual-kubelet"
EOF
kubectl -n kube-system patch ds calico-node -p "$(cat calico-ds.patch)"
```
- Configure RAM permissions for the Terway plugin.
Configure using onectl
- Install and configure onectl on your local machine. For more information, see Manage registered clusters using onectl.
- Run the following command to configure RAM permissions for the Terway plugin:

```shell
onectl ram-user grant --addon terway-eniip
```

Expected output:

```
Ram policy ack-one-registered-cluster-policy-terway-eniip granted to ram user ack-one-user-ce313528c3 successfully.
```
Configure in the console
Configure the Resource Access Management (RAM) permissions required by the AccessKey pair that the Terway network component uses. The following policy document lists the required permissions. For more information, see Manage RAM user permissions.
```json
{
  "Version": "1",
  "Statement": [
    {
      "Action": [
        "ecs:CreateNetworkInterface",
        "ecs:DescribeNetworkInterfaces",
        "ecs:AttachNetworkInterface",
        "ecs:DetachNetworkInterface",
        "ecs:DeleteNetworkInterface",
        "ecs:DescribeInstanceAttribute",
        "ecs:AssignPrivateIpAddresses",
        "ecs:UnassignPrivateIpAddresses",
        "ecs:DescribeInstances",
        "ecs:ModifyNetworkInterfaceAttribute"
      ],
      "Resource": ["*"],
      "Effect": "Allow"
    },
    {
      "Action": ["vpc:DescribeVSwitches"],
      "Resource": ["*"],
      "Effect": "Allow"
    }
  ]
}
```

- Install the Terway plugin.
Install using onectl
Run the following command to install the Terway plugin:

```shell
onectl addon install terway-eniip
```

Expected output:

```
Addon terway-eniip, version **** installed.
```

Install in the console
1. Log on to the Container Service Management Console. In the navigation pane on the left, click Clusters.
2. On the Clusters page, click the name of your cluster. In the navigation pane on the left, click Add-ons.
3. On the Add-ons page, search for the terway-eniip component. In the lower-right corner of the component card, click Install, and then click OK.
- After connecting to the cluster by using kubectl, run the following command in the registered cluster to view the DaemonSet of the Terway network component.
Before you scale out the hybrid cluster with cloud-based nodes, the Terway pods are not scheduled to any on-premises nodes.
```shell
kubectl -n kube-system get ds | grep terway
```

Expected output:

```
terway-eniip   0   0   0   0   0   alibabacloud.com/external=true   16s
```

The output shows that Terway pods run only on cloud-based ECS nodes labeled with alibabacloud.com/external=true.
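The restriction visible in the output corresponds to a node selector on the Terway DaemonSet. Conceptually, the DaemonSet spec limits scheduling along the following lines; this is an illustrative excerpt, not the full manifest shipped by the add-on:

```yaml
# Illustrative excerpt: the Terway DaemonSet selects only cloud nodes that
# carry the label applied automatically by the registered cluster.
spec:
  template:
    spec:
      nodeSelector:
        alibabacloud.com/external: "true"
```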
- Run the following command to edit the eni-config ConfigMap and configure eni_conf.access_key and eni_conf.access_secret:

```shell
kubectl -n kube-system edit cm eni-config
```

The following example shows the eni-config configuration:

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: eni-config
  namespace: kube-system
data:
  eni_conf: |
    {
      "version": "1",
      "max_pool_size": 5,
      "min_pool_size": 0,
      "vswitches": {"AZoneID":["VswitchId"]},
      "eni_tags": {"ack.aliyun.com":"{{.ClusterID}}"},
      "service_cidr": "{{.ServiceCIDR}}",
      "security_group": "{{.SecurityGroupId}}",
      "access_key": "",
      "access_secret": "",
      "vswitch_selection_policy": "ordered"
    }
  10-terway.conf: |
    {
      "cniVersion": "0.3.0",
      "name": "terway",
      "type": "terway"
    }
```
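For reference, the vswitches field maps a zone ID to the vSwitch IDs from which Terway allocates pod IP addresses. With the example topology above (pod vSwitch 10.10.25.0/24), a filled-in value could look like the following; the zone and vSwitch IDs here are placeholders, not values from a real account:

```json
{"vswitches": {"cn-hangzhou-h": ["vsw-****"]}}
```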
- Configure a custom node initialization script.
- Modify the original node initialization script of the self-managed Kubernetes cluster.
This example uses a self-managed Kubernetes cluster in a data center that was initialized with the kubeadm tool. The original initialization script, `init-node.sh`, adds new nodes to the cluster in the data center.
The custom node initialization script required by the registered ACK cluster, `init-node-ecs.sh`, is based on `init-node.sh`. It receives and applies the ALIBABA_CLOUD_PROVIDER_ID, ALIBABA_CLOUD_NODE_NAME, ALIBABA_CLOUD_LABELS, and ALIBABA_CLOUD_TAINTS environment variables passed by the registered cluster.
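The scripts themselves are not reproduced here. As a rough illustration only, the following bash sketch shows one way `init-node-ecs.sh` could translate those environment variables into kubelet registration flags before invoking the original join logic; the flag mapping and the `build_kubelet_args` helper are our assumptions, not the actual script:

```shell
#!/bin/bash
# Hypothetical sketch, not the official init-node-ecs.sh: build kubelet flags
# from the environment variables passed by the registered cluster.
build_kubelet_args() {
  args=""
  # --provider-id ties the node to its ECS instance.
  if [ -n "${ALIBABA_CLOUD_PROVIDER_ID:-}" ]; then
    args="$args --provider-id=${ALIBABA_CLOUD_PROVIDER_ID}"
  fi
  # --hostname-override registers the node under the cloud-assigned name.
  if [ -n "${ALIBABA_CLOUD_NODE_NAME:-}" ]; then
    args="$args --hostname-override=${ALIBABA_CLOUD_NODE_NAME}"
  fi
  # Labels (e.g. alibabacloud.com/external=true) and taints from the node pool.
  if [ -n "${ALIBABA_CLOUD_LABELS:-}" ]; then
    args="$args --node-labels=${ALIBABA_CLOUD_LABELS}"
  fi
  if [ -n "${ALIBABA_CLOUD_TAINTS:-}" ]; then
    args="$args --register-with-taints=${ALIBABA_CLOUD_TAINTS}"
  fi
  echo "${args# }"
  # The original init-node.sh join logic would run afterwards, passing these
  # flags to kubelet (for example via KUBELET_EXTRA_ARGS).
}

# Example invocation with placeholder values:
ALIBABA_CLOUD_NODE_NAME="cn-hangzhou.10.10.24.5" \
ALIBABA_CLOUD_LABELS="alibabacloud.com/external=true" \
  build_kubelet_args
```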
- Save and configure the custom script.
Save the custom script on an HTTP file server, such as in an Object Storage Service (OSS) bucket. The following example shows a sample URL:
https://kubelet-****.oss-cn-hangzhou-internal.aliyuncs.com/init-node-ecs.sh

In the ack-agent-config ConfigMap, set the addNodeScriptPath field to the path of the custom node addition script and save the configuration. The following example shows how to do this:

```yaml
apiVersion: v1
data:
  addNodeScriptPath: https://kubelet-****.oss-cn-hangzhou-internal.aliyuncs.com/init-node-ecs.sh
kind: ConfigMap
metadata:
  name: ack-agent-config
  namespace: kube-system
```
After completing these configurations, you can create a node pool and scale out ECS nodes in the destination registered ACK cluster.
- Create a node pool and scale out ECS nodes.
1. Log on to the Container Service Management Console. In the navigation pane on the left, click Clusters.
2. On the Clusters page, click the name of your cluster. In the navigation pane on the left, click Node Pools.
3. On the Node Pools page, create a node pool and scale out nodes as needed. For more information, see Create and manage a node pool.
References
- Plan the container network for the Terway scenario. For more information, see Network planning for ACK managed clusters.
- Connect the network of a Kubernetes cluster in a data center to a VPC in the cloud. For more information, see Features.
- Create a registered cluster and connect it to a self-managed Kubernetes cluster in a data center. For more information, see Create an ACK One registered cluster.