Container Service for Kubernetes: User guide for Terway Edge

Last Updated: Jan 24, 2025

Terway Edge is a network plug-in that Alibaba Cloud provides to help you build underlay networks for containers in ACK Edge clusters. Terway Edge is developed based on Terway and Flannel (Route mode). To use Terway Edge as the network plug-in of an ACK Edge cluster, you must select Terway Edge when you create the cluster. After the cluster is created, you cannot change the network plug-in. This topic describes how to configure Terway Edge for an ACK Edge cluster.

Install Terway Edge when you create a cluster

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, click Create Kubernetes Cluster.

  3. Click the ACK Edge tab and configure the key network parameters for Terway Edge. The following list describes these parameters.

    VPC: Specify the ID of the virtual private cloud (VPC) where you want to deploy the cluster.

    Network Plug-in: Select Terway Edge.

    vSwitch: IP addresses are assigned from the CIDR blocks of the selected vSwitches to the nodes in the cluster. We recommend that you select at least three vSwitches in different zones to ensure high availability.

    Pod vSwitch: IP addresses are assigned from the CIDR blocks of the selected vSwitches to the pods in the cluster. The CIDR blocks of pod vSwitches can overlap with the CIDR blocks of node vSwitches.

    Service CIDR: The Service CIDR block cannot overlap with the node or pod CIDR block.

    Pod CIDR Block: The CIDR block of pods at the edge. For more information, see Plan the network of an ACK cluster.

    Number of Pods per Node: The maximum number of pods supported by each edge node.
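
    The following sketch summarizes how these parameters fit together. It is not a file that you apply anywhere; the CIDR values are illustrative examples only and must be replaced with your own network plan.

    # Illustrative network plan for an ACK Edge cluster that uses Terway Edge.
    # These values are examples, not defaults; adjust them to your own VPC design.
    vpc: 192.168.0.0/16           # VPC in which the cluster is deployed
    nodeVSwitches:                # select vSwitches in at least three zones for high availability
      - 192.168.0.0/20
      - 192.168.16.0/20
      - 192.168.32.0/20
    podVSwitches:                 # used to assign IP addresses to pods on cloud nodes
      - 192.168.64.0/19
      - 192.168.96.0/19
      - 192.168.128.0/19
    serviceCIDR: 172.21.0.0/20    # must not overlap with the node or pod CIDR blocks
    edgePodCIDR: 10.0.0.0/8       # CIDR block of pods at the edge
    maxPodsPerNode: 64            # maximum number of pods on each edge node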

Network infrastructure management in private networks

You can establish secure, stable, and fast private connections to connect data centers to the VPCs where ACK Edge clusters are deployed. In the following example, Express Connect circuits are used to demonstrate how to configure the network:

  1. The following figure shows the CIDR blocks: the data center CIDR block is 172.16.0.0/16, the VPC CIDR block is 192.168.0.0/16, and the edge pod CIDR block is 10.0.0.0/8. Routes that forward packets from the edge or data center to the VPC are inbound routes, and routes that forward packets from the VPC to the edge or data center are outbound routes.

  2. Configure inbound routes on the switch, gateway device, and virtual border router (VBR) in the data center to route packets to the VPC (192.168.0.0/16), and configure inbound routes on the Express Connect router (ECR) and Cloud Enterprise Network (CEN) instance to route packets to the VPC. This way, requests can be sent from the data center to the control planes and Elastic Compute Service (ECS) instances in the ACK Edge cluster deployed in the VPC.

  3. Configure outbound routes to ensure that requests can be sent from the control planes, ECS instances, and containers of the ACK Edge cluster to the data center (172.16.0.0/16) and to the containers (10.0.0.0/8) that run in the data center.

  4. Configure connections over Express Connect circuits based on the actual scenario. For more information, see Express Connect.

[Figure: Express Connect topology connecting the data center (172.16.0.0/16), the VPC (192.168.0.0/16), and the edge pod CIDR block (10.0.0.0/8)]
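
The following sketch summarizes the routes described above in YAML form. It is only a planning aid, not a resource that you apply anywhere; the device and next-hop names are illustrative, and the actual configuration depends on your Express Connect, VBR, ECR, and CEN setup.

    # Illustrative route plan for the Express Connect example above.
    # Device names and next hops are placeholders; CIDR blocks match the example figure.
    inboundRoutes:                      # data center / VBR / ECR / CEN -> VPC
      - device: data-center-gateway
        destination: 192.168.0.0/16     # VPC CIDR block
        nextHop: express-connect-circuit
      - device: vbr-and-ecr-route-tables
        destination: 192.168.0.0/16
        nextHop: vpc
    outboundRoutes:                     # VPC -> data center and containers in the data center
      - device: vpc-route-table
        destination: 172.16.0.0/16      # data center CIDR block
        nextHop: vbr-or-cen
      - device: vpc-route-table
        destination: 10.0.0.0/8         # edge pod CIDR block
        nextHop: vbr-or-cen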

Terway Edge runs as Terway in the cloud

Terway Edge runs as Terway in cloud node pools, in the same way as Terway runs in ACK Pro clusters. For more information, see Terway.

Note

Terway in ACK Edge clusters can run only in shared ENI mode and does not support DataPath acceleration, Kubernetes network policies, or elastic network interface (ENI) trunking.

Terway Edge runs as Terway in the ENS network

Terway Edge provides container networking on Edge Node Service (ENS) nodes based on ENIs of ENS. For more information, see Use Terway in the ENS network.

Terway Edge runs as Flannel at the edge

Terway Edge runs as Flannel (Route mode) at the edge. After an edge node is connected to an ACK Edge cluster, the control planes of the cluster automatically assign a pod CIDR block to the node and add container routes to the host route table.

Use BGP to advertise container routes

When a container on a node communicates with a container on another node, packets are forwarded based on the host network stack.

  • If the two nodes reside in the same LAN, the source node can find the IP address of the destination node in its host routes, which are configured by Flannel. This way, packets can be forwarded to the destination node.

  • If the two nodes reside in different LANs, packets are sent from the source node to external network devices, on which no routes are configured for the container CIDR blocks. As a result, the packets cannot reach the destination container.

[Figure: Use Flannel to advertise container routes]
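
As an illustration of the same-LAN case, you can inspect the host routes that Flannel (Route mode) configures on a node. The following commands and output are a hypothetical example; the pod CIDR blocks and node IP addresses are placeholders.

    # Hypothetical example: list the host routes for the pod CIDR blocks of other nodes.
    # The 10.0.x.0/24 CIDR blocks and the next-hop node IP addresses are placeholders.
    ip route show | grep '^10\.'
    # Example output on a node (illustrative only):
    # 10.0.1.0/24 via 172.16.0.11 dev eth0   # pod CIDR block assigned to another node in the same LAN
    # 10.0.2.0/24 via 172.16.0.12 dev eth0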

To configure container routes on external network devices, Flannel (Route mode) launches a Border Gateway Protocol (BGP) service that establishes BGP sessions with the network devices. This way, container routes in the LAN are dynamically advertised to the network devices.

[Figure: Use BGP to advertise container routes]

Note

The external network devices (Layer 3 switches) must support BGP, and you must have the permissions required to configure these devices.

You can create a custom resource of the BGPPeer type to configure BGP peers.

Step 1: Configure a BGP speaker in the cluster

  1. Create a file named bgppeer.yaml and copy the following content to the file. Modify the file based on your business requirements.

    apiVersion: network.openyurt.io/v1alpha1
    kind: BGPPeer
    metadata:
      name: peer
    spec:
      localSpeakers:        # nodes in the cluster that act as BGP speakers
        - node-1
        - node-2
      localAsNumber: 65010  # ASN of the AS to which the BGP speakers belong
      peerIP: 172.16.0.1    # IP address of the LAN gateway or Layer 3 switch
      peerAsNumber: 65001   # ASN of the LAN gateway or Layer 3 switch
      nodeSelector: alibabacloud.com/nodepool-id=npxxx  # select the nodes that belong to this AS
      # Optional: the password used to establish BGP sessions.
      authPassword:
        secretKeyRef:
          name: bgp-secret
          key: password
    ---
    # Optional: the Secret referenced by authPassword. It must reside in the kube-system namespace.
    apiVersion: v1
    kind: Secret
    metadata:
      name: bgp-secret
      namespace: kube-system
    type: Opaque
    data:
      password: bXlTZWNyZXRWYWx1ZQ==  # The value is encoded in Base64.

    The following list describes the parameters.

    metadata.name (required): The name of the BGP peer.

    spec.localSpeakers (required): The nodes in the cluster that act as BGP speakers. The BGP speakers advertise the container routes of all nodes in the LAN. We recommend that you select at least two nodes as BGP speakers.

    spec.localAsNumber (required): The autonomous system number (ASN) of the autonomous system (AS) to which the BGP speakers belong. ASNs are unique identifiers in BGP. Each LAN is treated as an AS, and private ASNs range from 64512 to 65534.

    spec.peerIP (required): The IP address of the LAN gateway or Layer 3 switch with which the BGP speakers in the cluster establish BGP sessions.

    spec.peerAsNumber (required): The ASN of the LAN gateway or Layer 3 switch.

    spec.gateway (optional): The custom gateway address for container communication across different ASs. By default, this parameter is set to the gateway or vSwitch address of the LAN.

    spec.nodeSelector (required): The node selector that selects the nodes that belong to the AS of the BGP peer. We recommend that you use a node pool label so that all nodes in the node pool are selected. If all edge nodes reside in the same LAN and therefore belong to the same AS, you can set the value to all().

    spec.authPassword (optional): The password used to establish BGP sessions. You must first create a Secret in the kube-system namespace, and then specify the Secret name and key in this parameter.

  2. Run the following command to create the BGPPeer:

    kubectl apply -f bgppeer.yaml

  3. Run the following command to query the BGPPeer:

    kubectl get bgppeer peer

    Expected output:

    NAME   LOCALSPEAKERS         LOCALASNUMBER   PEERIP         PEERASNUMBER     AGE
    peer   ["node-1","node-2"]   65010           172.16.0.1     65001            10m

Step 2: Configure external network devices to launch a BGP service

Important
  • We recommend that you use at least three nodes in the cluster as BGP peers to ensure that BGP sessions persist during component updates and to prevent container network disruptions caused by container routes aging out.

  • By default, Terway Edge enables BGP Graceful Restart with a timeout of 600 seconds. Configure BGP Graceful Restart on your switch as needed.

  1. Configure BGP based on the model of your network device, and then launch the BGP service.

  2. To ensure that BGP sessions can be established, configure the BGP speaker nodes that you selected in Step 1 as BGP peers on the device, as shown in the following sketch.
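
    The exact configuration depends on the vendor and model of your Layer 3 switch. The following sketch uses FRR's vtysh purely for illustration and assumes the BGPPeer example from Step 1: 65001 is the ASN of the switch (peerAsNumber), 65010 is the cluster-side ASN (localAsNumber), and the neighbor addresses are placeholder LAN IP addresses of node-1 and node-2.

    # Hypothetical sketch only; the real syntax depends on your switch vendor and model.
    # The neighbor IP addresses are placeholders for the LAN addresses of node-1 and node-2.
    vtysh \
      -c 'configure terminal' \
      -c 'router bgp 65001' \
      -c 'bgp graceful-restart' \
      -c 'neighbor 172.16.0.11 remote-as 65010' \
      -c 'neighbor 172.16.0.12 remote-as 65010'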

Step 3: Verify BGP session establishment and container route advertisement

  1. Run the following command to check the events of the BGPPeer:

    kubectl describe bgppeer peer

    If the output includes FailedEstablish events, the BGP session failed to be established.

  2. Check whether the container routes exist on the Layer 3 switch.
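
    How you verify this depends on your device. The following commands are a hypothetical sketch for an FRR-style device; the syntax varies by vendor, and the pod CIDR block in the comments is a placeholder.

    # Hypothetical verification on an FRR-style device; the syntax varies by vendor.
    vtysh -c 'show ip bgp summary'   # sessions with node-1 and node-2 should be in the Established state
    vtysh -c 'show ip route bgp'     # the advertised container routes (for example, 10.0.1.0/24) should appear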