
Container Service for Kubernetes: Deploy and configure Terway

Last Updated: Jul 22, 2024

The container network plug-ins used in a hybrid cluster consist of two parts: the network plug-ins that run in the data center and the network plug-ins that run on cloud compute nodes. This topic describes how to deploy and configure Terway in a hybrid cluster.

Prerequisites

Scenario 1: The data center uses an overlay network for container networking

In this scenario, the data center uses an overlay network for container networking. Cloud compute nodes can also use this network mode. You only need to make sure that the cloud compute nodes can pull the container image used by the DaemonSet of the container network plug-in.

The following overlay network modes are commonly used:

  • Flannel VXLAN

  • Calico IPIP

  • Cilium VXLAN
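
To confirm that the cloud compute nodes can pull the plug-in image, you can read the image from the plug-in's DaemonSet and pull it manually on a cloud node. The following is a minimal sketch that assumes Flannel deployed as the kube-flannel-ds DaemonSet in the kube-system namespace; substitute the DaemonSet name and namespace of your plug-in.

# Read the image used by the container network plug-in's DaemonSet.
# kube-flannel-ds is an assumption; use your plug-in's DaemonSet name.
kubectl -n kube-system get ds kube-flannel-ds \
  -o jsonpath='{.spec.template.spec.containers[0].image}'

# On a cloud compute node, pull the image manually to confirm access.
crictl pull <image-from-previous-command>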

Scenario 2: The data center uses a BGP network for container networking

In this scenario, the data center uses a Border Gateway Protocol (BGP) network for container networking. You must use the Terway network plug-in on cloud compute nodes. For more information about how to connect an on-premises network to the cloud over BGP, see Configure and manage BGP.

In this scenario, make sure that the following conditions are met:

  • The DaemonSet of the on-premises container network plug-in, such as the BGP route reflector in Calico, is not scheduled to cloud compute nodes.

  • The DaemonSet of the Terway network plug-in is not scheduled to on-premises compute nodes.

Each compute node that is added from a node pool in a registered cluster has the alibabacloud.com/external=true label. You can use this label to distinguish cloud compute nodes from on-premises compute nodes.
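
For example, run the following command to list the cloud compute nodes in the cluster:

kubectl get nodes -l alibabacloud.com/external=true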

For example, you can configure node affinity to ensure that the DaemonSet of the on-premises Calico network plug-in is not scheduled to nodes that have the alibabacloud.com/external=true label. You can use the same method to ensure that other on-premises workloads are not scheduled to cloud compute nodes. Run the following command to update the Calico network plug-in:

cat <<EOF > calico-ds.patch
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: alibabacloud.com/external
                operator: NotIn
                values:
                - "true"
EOF
kubectl -n kube-system patch ds calico-node -p "$(cat calico-ds.patch)"

By default, the DaemonSet of Terway is scheduled to nodes that have the alibabacloud.com/external=true label.
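
After you install Terway later in this topic, you can confirm this scheduling rule by inspecting the node affinity of the Terway DaemonSet. A quick check, assuming the DaemonSet is named terway-eniip as in the installation step:

kubectl -n kube-system get ds terway-eniip \
  -o jsonpath='{.spec.template.spec.affinity.nodeAffinity}'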

Scenario 3: The data center uses the host network for container networking

In this scenario, the data center uses the host network for container networking. You only need to make sure that the DaemonSet of the Terway network plug-in is not scheduled to on-premises compute nodes. By default, the DaemonSet of the Terway network plug-in is scheduled only to nodes that have the alibabacloud.com/external=true label.

Install and configure the Terway network plug-in

In Scenario 2 and Scenario 3, you must install and configure the Terway network plug-in on the cloud compute nodes of the hybrid cluster.

Step 1: Grant permissions to the Terway network plug-in

Use onectl

  1. Install onectl on your on-premises machine. For more information, see Use onectl to manage registered clusters.

  2. Run the following command to grant Resource Access Management (RAM) permissions to Terway:

    onectl ram-user grant --addon terway-eniip

    Expected output:

    Ram policy ack-one-registered-cluster-policy-terway-eniip granted to ram user ack-one-user-ce313528c3 successfully.

Use the RAM console

Create a RAM user and attach the following policy to the RAM user. For more information, see Create a custom RAM policy.


{
    "Version": "1",
    "Statement": [
        {
            "Action": [
                "ecs:CreateNetworkInterface",
                "ecs:DescribeNetworkInterfaces",
                "ecs:AttachNetworkInterface",
                "ecs:DetachNetworkInterface",
                "ecs:DeleteNetworkInterface",
                "ecs:DescribeInstanceAttribute",
                "ecs:AssignPrivateIpAddresses",
                "ecs:UnassignPrivateIpAddresses",
                "ecs:DescribeInstances",
                "ecs:ModifyNetworkInterfaceAttribute"
            ],
            "Resource": [
                "*"
            ],
            "Effect": "Allow"
        },
        {
            "Action": [
                "vpc:DescribeVSwitches"
            ],
            "Resource": [
                "*"
            ],
            "Effect": "Allow"
        }
    ]
}
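
If you prefer the command line to the RAM console, you can also create and attach the policy with the Alibaba Cloud CLI. The following is a minimal sketch; the policy name, file name, and RAM user name are placeholders, and it assumes that the aliyun CLI is installed and configured with sufficient permissions.

# Save the policy document above as terway-policy.json, then create a custom policy.
aliyun ram CreatePolicy \
  --PolicyName terway-eniip-policy \
  --PolicyDocument "$(cat terway-policy.json)"

# Attach the custom policy to the RAM user that Terway uses.
aliyun ram AttachPolicyToUser \
  --PolicyType Custom \
  --PolicyName terway-eniip-policy \
  --UserName <your-ram-user-name>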

Step 2: Install the Terway plug-in

Use onectl

Run the following command to install the Terway plug-in:

onectl addon install terway-eniip

Expected output:

Addon terway-eniip, version **** installed.

Use the ACK console

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, find the cluster that you want to manage and click its name. In the left-side navigation pane, choose Operations > Add-ons.

  3. On the Add-ons page, click the Networking tab. Select the terway-eniip component and then click Install.

Step 3: Configure the Terway plug-in

Run the following command to modify the eni-config ConfigMap and configure the eni_conf.access_key and eni_conf.access_secret parameters. Set them to the AccessKey ID and AccessKey secret of the RAM user that you authorized in Step 1:

kubectl -n kube-system edit cm eni-config

The following sample code provides an example of the eni-config ConfigMap:

kind: ConfigMap
apiVersion: v1
metadata:
  name: eni-config
  namespace: kube-system
data:
  eni_conf: |
    {
      "version": "1",
      "max_pool_size": 5,
      "min_pool_size": 0,
      "vswitches": {"AZoneID":["VswitchId"]},
      "eni_tags": {"ack.aliyun.com":"{{.ClusterId}}"},
      "service_cidr": "{{.ServiceCIDR}}",
      "security_group": "{{.SecurityGroupId}}",
      "access_key": "",
      "access_secret": "",
      "vswitch_selection_policy": "ordered"
    }
  10-terway.conf: |
    {
      "cniVersion": "0.3.0",
      "name": "terway",
      "type": "terway"
    }
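
In eni_conf, the vswitches field maps each zone ID to the vSwitch IDs from which Terway allocates pod IP addresses, and security_group is the security group that is attached to the elastic network interfaces (ENIs). A filled-in fragment, with hypothetical IDs:

"vswitches": {"cn-hangzhou-h":["vsw-bp1aaaa****","vsw-bp1bbbb****"]},
"security_group": "sg-bp1cccc****",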

You can use a kubeconfig file to connect to the registered cluster and query the DaemonSet that is created for the Terway network plug-in. Before cloud compute nodes are added to the hybrid cluster, the DaemonSet is not scheduled to any node because it targets only nodes that have the alibabacloud.com/external=true label.

Run the following command to query the Terway DaemonSet:

kubectl -n kube-system get ds | grep terway

Expected output:

terway-eniip   0         0         0       0            0           alibabacloud.com/external=true      16s

Enable the NetworkPolicy feature of Terway

By default, the NetworkPolicy feature of Terway is disabled in a registered cluster. If you do not want to enable the feature, skip this section. For more information, see Use network policies in ACK clusters.

Important

The NetworkPolicy feature of Terway depends on the CustomResourceDefinitions (CRDs) of Calico. If you enable the NetworkPolicy feature of Terway in a cluster that uses Calico, errors may occur in the existing Calico networks. If you have any questions, submit a ticket.

  1. Use the following template to deploy the required CRDs:


    kubectl apply -f - <<EOF
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: felixconfigurations.crd.projectcalico.org
    spec:
      scope: Cluster
      group: crd.projectcalico.org
      version: v1
      names:
        kind: FelixConfiguration
        plural: felixconfigurations
        singular: felixconfiguration
    ---
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: bgpconfigurations.crd.projectcalico.org
    spec:
      scope: Cluster
      group: crd.projectcalico.org
      version: v1
      names:
        kind: BGPConfiguration
        plural: bgpconfigurations
        singular: bgpconfiguration
    ---
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: ippools.crd.projectcalico.org
    spec:
      scope: Cluster
      group: crd.projectcalico.org
      version: v1
      names:
        kind: IPPool
        plural: ippools
        singular: ippool
    ---
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: hostendpoints.crd.projectcalico.org
    spec:
      scope: Cluster
      group: crd.projectcalico.org
      version: v1
      names:
        kind: HostEndpoint
        plural: hostendpoints
        singular: hostendpoint
    ---
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: clusterinformations.crd.projectcalico.org
    spec:
      scope: Cluster
      group: crd.projectcalico.org
      version: v1
      names:
        kind: ClusterInformation
        plural: clusterinformations
        singular: clusterinformation
    ---
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: globalnetworkpolicies.crd.projectcalico.org
    spec:
      scope: Cluster
      group: crd.projectcalico.org
      version: v1
      names:
        kind: GlobalNetworkPolicy
        plural: globalnetworkpolicies
        singular: globalnetworkpolicy
    ---
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: globalnetworksets.crd.projectcalico.org
    spec:
      scope: Cluster
      group: crd.projectcalico.org
      version: v1
      names:
        kind: GlobalNetworkSet
        plural: globalnetworksets
        singular: globalnetworkset
    ---
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: networkpolicies.crd.projectcalico.org
    spec:
      scope: Namespaced
      group: crd.projectcalico.org
      version: v1
      names:
        kind: NetworkPolicy
        plural: networkpolicies
        singular: networkpolicy
    EOF
  2. Run the following command to modify the eni-config ConfigMap by adding settings that enable the NetworkPolicy feature:

    kubectl -n kube-system edit cm eni-config

    The following sample code provides an example of the eni-config ConfigMap:

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: eni-config
      namespace: kube-system
    data:
      eni_conf: |
        {
          ...
          ...
        }
      10-terway.conf: |
        {
          ...
          ...
        }
      disable_network_policy: "false"
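
    After you save the ConfigMap, you can verify the feature with a standard Kubernetes NetworkPolicy. The following is a minimal sketch that denies all ingress traffic to pods in the default namespace; apply it, confirm that connections to those pods are blocked, and then delete it.

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-ingress
      namespace: default
    spec:
      # An empty podSelector selects all pods in the namespace.
      podSelector: {}
      policyTypes:
      - Ingress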