Terway Edge provides underlay networks for containers, which allows containers to communicate with each other through routers and switches. It delivers efficient, scalable, and stable container networks. This topic describes how to use Terway Edge to implement communication between containers in a complex on-premises network environment.
Prerequisites
An ACK Edge cluster that runs Kubernetes 1.28 or later is created.
The cluster uses the Terway Edge network plug-in.
Billing
For billing of Cloud Enterprise Network (CEN), see Billing rules.
For billing of leased line (Express Connect), see Billing overview.
Communication between edge containers
Edge communication provides bidirectional connectivity between computing devices, containers, and cloud services on the cloud and devices at the edge. You can use ACK Edge clusters to establish communication between on-cloud devices and edge devices.
On-cloud devices include on-cloud node pools, such as Elastic Compute Service (ECS) instances and Elastic Container Instance, other cloud services, and the managed control plane in the virtual private cloud (VPC).
Edge devices include edge node pools, such as on-premises network devices and computing devices in self-managed data centers.
The following figure shows the communication between edge containers:
| Type | CIDR block |
| --- | --- |
| On-cloud devices | VPC CIDR block: 192.168.0.0/16 |
| Edge data center | Host CIDR block: 10.0.0.0/16<br>Container CIDR block: 10.10.0.0/16. You can configure a custom CIDR block for containers when you create an ACK Edge cluster. |
Prerequisites
VPC, CEN, Virtual Border Router (VBR), and Express Connect circuits are created.
A Transit Router (TR) for connecting the VPC and on-premises data center is created in CEN. The VBR and the VPC are attached to the TR.
Configuration method
Step 1: Configure an uplink route (the pathway for the edge data center to access the on-cloud VPC)
1. On the data center core switch, set the next hop of the VPC CIDR block (192.168.0.0/16) to the VBR.
   Note: If a switch or a border gateway also exists at the upper level, configure consistent routing rules on it so that packets destined for the VPC (192.168.0.0/16) are forwarded directly to the VBR over the Express Connect circuit.
2. On the VBR, set the next hop of the VPC CIDR block (192.168.0.0/16) to the TR.
3. In the TR's route table, set the next hop of the VPC CIDR block (192.168.0.0/16) to the on-cloud VPC.
Step 2: Configure a downlink route (the pathway for the on-cloud VPC to access the edge data center)
1. In the VPC route table, set the next hop of the data center host CIDR block (10.0.0.0/16) and the container CIDR block (10.10.0.0/16) to the TR.
2. In the TR's route table, set the next hop of the data center host CIDR block (10.0.0.0/16) and the container CIDR block (10.10.0.0/16) to the VBR.
3. On the VBR, set the next hops of the data center host CIDR block (10.0.0.0/16) and the container CIDR block (10.10.0.0/16) to the core switch. Packets are forwarded over the Express Connect circuit.
Step 3: Configure BGP advertised routes for nodes in the cluster
After the uplink and downlink routes are configured, communication between the on-cloud VPC and the data center hosts is established. However, when container network packets reach the core switch of the on-premises data center, they cannot be forwarded further because the switch lacks routes to the container CIDR blocks. You must configure a Border Gateway Protocol (BGP) peer in your cluster so that specific nodes advertise pod IP routes to the data center core switch over BGP. For more information, see Usage guide for Terway Edge.
In the preceding figure, Node-1, Node-2, and Node-3 are configured as BGP speakers to announce the pod routes of all nodes connected to the data center core switch. The ASN=65010 label is added to all nodes that belong to this switch. The following YAML manifest shows the configuration:

```yaml
apiVersion: network.openyurt.io/v1alpha1
kind: BGPPeer
metadata:
  name: peer
spec:
  localSpeakers:
    - Node-1
    - Node-2
    - Node-3
  localAsNumber: 65010 # The autonomous system number (ASN) of the container network.
  peerIP: 10.0.0.1 # The IP address of the switch.
  peerAsNumber: 65001 # The ASN of the switch.
  nodeSelector: ASN=65010 # Select all nodes that have this label.
```

We recommend designating at least three nodes in your cluster to act as BGP speakers. This provides high availability and ensures that the BGP session is maintained during add-on upgrades, preventing network disruptions caused by pod routes aging out.
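The nodeSelector field selects nodes by label. The selection logic can be sketched as follows; the node names and labels follow the example above, and the simple "key=value" selector parsing is an assumption for illustration, not Terway Edge's actual implementation:

```python
# Simplified sketch of label-based node selection, assuming a "key=value" selector.
def select_nodes(nodes: dict[str, dict[str, str]], selector: str) -> list[str]:
    key, _, value = selector.partition("=")
    return sorted(name for name, labels in nodes.items() if labels.get(key) == value)

# Illustrative node labels; Node-4 belongs to a different switch.
nodes = {
    "Node-1": {"ASN": "65010"},
    "Node-2": {"ASN": "65010"},
    "Node-3": {"ASN": "65010"},
    "Node-4": {"ASN": "65020"},
}

speakers = select_nodes(nodes, "ASN=65010")
print(speakers)  # ['Node-1', 'Node-2', 'Node-3']

# The recommended minimum of three speakers for high availability is met.
assert len(speakers) >= 3
```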
Step 4: Configure the switch to accept BGP advertised routes
Enable the BGP service on your network device, following the specific commands and configuration methods required for your device's model.
On the network device, configure Node-1, Node-2, and Node-3 as BGP peers. After configuration, verify that the BGP sessions are successfully established.
We recommend configuring BGP graceful restart with a restart timer of at least 300 seconds. This preserves routing information and prevents network interruptions if a BGP session drops unexpectedly.
Step 5: Check the switch
On the data center core switch, check the BGP session status and inspect the routing table. Confirm that routes to the pod subnets have been learned, for example, a route to 10.10.0.0/24 via the corresponding node's IP (10.0.0.10).
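The check can be sketched as scanning the switch's routing table for BGP-learned pod routes. The table excerpt below is purely illustrative; the real output format depends on your switch vendor and model:

```python
# Illustrative routing-table excerpt; real output depends on the switch vendor.
# 'B' marks BGP-learned routes, 'S' marks static routes.
ROUTE_TABLE = """\
B   10.10.0.0/24  via 10.0.0.10
B   10.10.1.0/24  via 10.0.0.11
S   0.0.0.0/0     via 10.0.0.254
"""

def bgp_routes(table: str) -> dict[str, str]:
    """Return {prefix: next_hop} for BGP-learned entries (protocol code 'B')."""
    routes = {}
    for line in table.splitlines():
        proto, prefix, _, nexthop = line.split()
        if proto == "B":
            routes[prefix] = nexthop
    return routes

learned = bgp_routes(ROUTE_TABLE)
# The pod subnet should be reachable via the corresponding node's IP.
print(learned["10.10.0.0/24"])  # 10.0.0.10
```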
Communication between containers across LANs
If your data center has a complex network topology and multiple local area networks (LANs) are deployed in the same domain, you must connect the devices on multiple LANs to one ACK Edge cluster for centralized management. See the following figure for an example.
Containers on the same node communicate with each other via the host network stack.
The following section describes the route configuration of the Terway Edge plug-in for communication between containers across nodes:
- If the two nodes are located in the same LAN and use the same autonomous system number (ASN), the next hop of the container route is set to the corresponding node address.
- If the two nodes are located in different LANs and use different ASNs, the default route of the host is used, which points to the switch address.
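The two rules above can be sketched as a next-hop decision function. The node records and field names below are illustrative assumptions, not Terway Edge's data model:

```python
# Sketch of the next-hop choice for cross-node pod traffic, following the
# two rules above. The node records are illustrative assumptions.
def pod_route_next_hop(src: dict, dst: dict) -> str:
    if src["lan"] == dst["lan"] and src["asn"] == dst["asn"]:
        return dst["node_ip"]        # Same LAN: route directly to the peer node.
    return src["default_gateway"]    # Different LAN: fall back to the switch.

node1 = {"lan": "A", "asn": 65010, "node_ip": "10.0.10.10", "default_gateway": "10.0.10.1"}
node2 = {"lan": "A", "asn": 65010, "node_ip": "10.0.10.11", "default_gateway": "10.0.10.1"}
node5 = {"lan": "B", "asn": 65020, "node_ip": "10.0.20.10", "default_gateway": "10.0.20.1"}

print(pod_route_next_hop(node1, node2))  # 10.0.10.11 (peer node, same LAN)
print(pod_route_next_hop(node1, node5))  # 10.0.10.1  (switch, different LAN)
```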
Two nodes on the same LAN
In scenarios in which two nodes are located in the same LAN, the nodes communicate with each other on the Layer 2 network. In the preceding figure, pods on Node-1 access pods on Node-2. Container network packets pass through the Layer 2 network to reach Node-2 and then enter the pods through the Node-2 host network stack.
Two nodes on different LANs
In scenarios in which two nodes are located in different LANs, the nodes cannot reach each other over the Layer 2 network. Therefore, you must use the Layer 3 forwarding capability of switches and configure BGP sessions to ensure network connectivity. In the preceding figure, pods on Node-1 access pods on Node-5. This example shows how to advertise a container route to a network device outside the cluster. The following configurations are used.
1. Establish a BGP session between Node-1 and Switch A to advertise the container routes in LAN A to Switch A. Sample YAML file:

   ```yaml
   apiVersion: network.openyurt.io/v1alpha1
   kind: BGPPeer
   metadata:
     name: peer-a
   spec:
     localSpeakers:
       - Node-1
     localAsNumber: 65010 # The ASN of the container network.
     peerIP: 10.0.10.1 # The IP address of the switch.
     peerAsNumber: 65002 # The ASN of the switch.
     nodeSelector: ASN=65010 # Select all nodes that have this label.
   ```

2. Establish a BGP session between Node-5 and Switch B to advertise the container routes in LAN B to Switch B. Sample YAML file:

   ```yaml
   apiVersion: network.openyurt.io/v1alpha1
   kind: BGPPeer
   metadata:
     name: peer-b
   spec:
     localSpeakers:
       - Node-5
     localAsNumber: 65020 # The ASN of the container network.
     peerIP: 10.0.20.1 # The IP address of the switch.
     peerAsNumber: 65003 # The ASN of the switch.
     nodeSelector: ASN=65020 # Select all nodes that have this label.
   ```

3. Establish a BGP session between Switch A and Switch B. This way, Switch A and Switch B can learn each other's container routes.

The route for sending packets: Node-1 -> Switch A -> Switch B -> Node-5
The route for receiving packets: Node-5 -> Switch B -> Switch A -> Node-1
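The effect of the Switch A and Switch B session can be sketched as a one-hop route exchange. The data model below (a per-switch route dictionary and the sample prefixes) is an illustrative assumption:

```python
# Illustrative sketch: each switch starts with the container routes advertised
# by its local BGP speaker; a Switch A <-> Switch B session then exchanges them.
switch_routes = {
    "Switch A": {"10.10.1.0/24": "Node-1"},  # LAN A container routes
    "Switch B": {"10.10.5.0/24": "Node-5"},  # LAN B container routes
}

def exchange(a: str, b: str) -> None:
    """Simulate a BGP session: each side learns the other's routes with the peer as next hop."""
    for prefix in list(switch_routes[a]):
        switch_routes[b].setdefault(prefix, a)
    for prefix in list(switch_routes[b]):
        switch_routes[a].setdefault(prefix, b)

exchange("Switch A", "Switch B")
# Switch A now reaches LAN B pods through Switch B, and vice versa.
print(switch_routes["Switch A"]["10.10.5.0/24"])  # Switch B
```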
(Optional) If your network has a core switch that connects the hosts and containers of the on-cloud VPC, as shown in the following figure, you must establish BGP sessions among Switch A, Core Switch, and Switch B: establish one BGP session between Switch A and Core Switch, and another between Switch B and Core Switch.
Communication between containers across domains
The following example shows how to connect data centers in different network domains to ACK Edge clusters.
The following YAML code provides an example of how to create a BGP session to advertise container routes in network domain A to Switch A:

```yaml
apiVersion: network.openyurt.io/v1alpha1
kind: BGPPeer
metadata:
  name: peer-a
spec:
  localSpeakers:
    - Node-1
  localAsNumber: 65010 # The ASN of the container network.
  peerIP: 10.0.10.1 # The IP address of the switch.
  peerAsNumber: 65001 # The ASN of the switch.
  nodeSelector: ASN=65010 # Select all nodes that have this label.
```

The following YAML code provides an example of how to create a BGP session to advertise container routes in network domain B to Switch B:

```yaml
apiVersion: network.openyurt.io/v1alpha1
kind: BGPPeer
metadata:
  name: peer-b
spec:
  localSpeakers:
    - Node-5
  localAsNumber: 65020 # The ASN of the container network.
  peerIP: 10.0.20.1 # The IP address of the switch.
  peerAsNumber: 65002 # The ASN of the switch.
  nodeSelector: ASN=65020 # Select all nodes that have this label.
```

Connect the data centers:
- If the two data centers communicate with each other at the network layer over Express Connect circuits, establish a BGP session between Switch A and Switch B to advertise the container routes of their respective network domains to each other.
- If the two data centers do not communicate with each other at the network layer over Express Connect circuits, establish BGP sessions between Switch A and the VBR and between Switch B and the VBR. This allows the VBRs to forward cross-domain network packets.
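In the second case, the VBR relays routes between the two domains. The topology can be sketched as two sessions that both terminate on the VBR; the route-dictionary model below is an illustrative assumption, as in the earlier sketch:

```python
# Illustrative sketch: Switch A and Switch B each peer with the VBR,
# which re-advertises routes between the two network domains.
tables = {
    "Switch A": {"10.10.1.0/24": "Node-1"},  # domain A container routes
    "Switch B": {"10.10.5.0/24": "Node-5"},  # domain B container routes
    "VBR": {},
}

def peer_with_vbr(switch: str) -> None:
    """The switch advertises its routes to the VBR and learns the VBR's routes."""
    for prefix in list(tables[switch]):
        tables["VBR"].setdefault(prefix, switch)
    for prefix in list(tables["VBR"]):
        tables[switch].setdefault(prefix, "VBR")

peer_with_vbr("Switch A")
peer_with_vbr("Switch B")
peer_with_vbr("Switch A")  # Second pass: learn the routes the VBR gained from Switch B.

print(tables["Switch A"]["10.10.5.0/24"])  # VBR
print(tables["Switch B"]["10.10.1.0/24"])  # VBR
```

In a real deployment the sessions stay up and converge continuously; the explicit second pass only stands in for that ongoing exchange.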