
Container Service for Kubernetes: Overview of the cloud-edge communication component Raven

Last Updated: Mar 07, 2024

In edge computing, computing devices are usually distributed across isolated regions and network domains. Therefore, edge devices in a cluster are usually managed in groups. As a result, nodes in different groups are isolated from each other, and applications deployed on these nodes cannot communicate with each other. Raven is introduced in ACK Edge clusters to enhance the maintenance and monitoring capabilities in cloud-edge communication. This topic introduces the terms and features of Raven and describes how Raven works.

Cross-region network communication in the cloud-edge collaboration architecture

ACK Edge clusters use a cloud-edge collaboration architecture in which the central cloud manages edge data centers and edge devices. Data centers and infrastructure resources that reside at the edge use network connections such as SD-WAN, VPN, and Express Connect circuits to communicate with the control planes of ACK clusters deployed on the cloud. This allows you to manage a large number of edge devices in a cloud-native manner.

(Figure: cloud-edge collaboration architecture)

ACK Edge clusters use node pools to manage nodes in multiple regions. Nodes in different node pools reside in different network domains and cannot directly communicate with each other. In addition, the CIDR blocks of different network domains may overlap and the node IP addresses may conflict with each other.

To resolve these issues, ACK Edge clusters that run Kubernetes 1.26.3 and later use Raven to enhance O&M capabilities and optimize network communication between containers in cloud-edge collaboration.
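The routing problem described above can be made concrete: when two node pools use overlapping CIDR blocks, a single IP address may belong to both domains, so plain Layer 3 routing cannot tell the pools apart. The following sketch uses Python's standard ipaddress module; the CIDR values are made-up examples, not actual cluster configuration:

```python
import ipaddress

def overlaps(cidr_a: str, cidr_b: str) -> bool:
    """Return True if two CIDR blocks share any addresses."""
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

# Hypothetical node pool CIDRs: two isolated network domains that
# independently chose 10.0.0.0/16.
pool_b = "10.0.0.0/16"   # data center
pool_c = "10.0.0.0/16"   # edge data center
print(overlaps(pool_b, pool_c))  # True: node IPs may conflict across domains

# An address such as 10.0.1.5 is ambiguous between the two pools, which is
# why cross-domain traffic must be relayed through gateway nodes instead of
# being routed by node IP alone.
node_ip = ipaddress.ip_address("10.0.1.5")
print(node_ip in ipaddress.ip_network(pool_b)
      and node_ip in ipaddress.ip_network(pool_c))  # True
```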

How it works

The following figure shows a typical cloud-edge collaboration scenario:

(Figure: typical cloud-edge collaboration scenario)

  • Node pool A: node pool in the cloud. All nodes reside in the same virtual private cloud (VPC). A host is selected as a gateway node (in green) and a Server Load Balancer (SLB) instance is used to expose the nodes to the Internet.

  • Node pool B: data center. The nodes can communicate with each other at Layer 3 and can communicate with the VPC at Layer 3 through an Express Connect circuit.

  • Node pool C: edge data center. The nodes can communicate with each other at Layer 3. A host is selected as a gateway node (in green). A tunnel is established between the gateway node and the Internet NAT gateway. All requests sent from other node pools pass through the gateway node and the tunnel for cross-domain communication.

  • Node pool D: a group of edge devices. These edge devices usually come with public IP addresses and serve as gateway nodes to establish tunnels with gateway nodes in the cloud for cloud-edge communication.

Take note of the following issues in the cloud-edge collaboration scenario:

  • When you create an ACK cluster that contains a node pool in the cloud, such as node pool A, you must create at least one Elastic Compute Service (ECS) instance that serves as the gateway node of the node pool.

  • Hosts at the edge, such as node pool C, interact with the control planes of ACK clusters deployed on the cloud over the Internet. When you create encrypted tunnels between the gateway nodes of different node pools, you must purchase at least one Classic Load Balancer (CLB) instance and one elastic IP address (EIP), and configure network access control lists (ACLs) to accept requests sent from gateway nodes at the edge.

Architecture

Components

Raven consists of two components: the control plane component yurt-manager and the data plane component raven-agent-ds. Both components use the host network mode and support cloud-edge communication among hosts in different regions even when their IP addresses conflict.

  • yurt-manager: divides network domains based on node pools and creates gateways.

  • raven-agent-ds: deployed as a DaemonSet that runs on each node of the ACK cluster. Each pod runs a container named raven-agent, which serves as a proxy to configure routes or VPN tunnels between gateway nodes.

Communication mode

Raven provides the proxy mode and tunnel mode for cloud-edge communication.

  • Proxy mode (recommended): Create a reverse proxy to allow cross-host communication. The gateway node serves as a proxy that forwards cross-domain requests at Layer 7 based on the node name and port of the target (NodeName+Port). The proxy mode supports cross-host communication for components such as the API server, metrics server, and Prometheus. It also supports kubectl commands such as kubectl logs, kubectl exec, kubectl attach, and kubectl top.

    How the proxy mode works:

    • In each edge node pool, one edge node is elected as the gateway node. The elected node launches the ProxyClient module and creates an encrypted reverse tunnel with the ProxyServer module on the gateway node in the cloud.

    • A standalone edge node acts as its own gateway node and directly creates an encrypted reverse tunnel with the gateway node in the cloud.

    • Cross-domain requests from the cloud are forwarded by the ProxyServer on the cloud gateway node to the ProxyClient on the edge gateway node, which then delivers them to the target services in its network domain.

    The following figure shows the architecture.

    (Figure: proxy mode architecture)

  • Tunnel mode: Create VPN tunnels for cross-domain communication among containers. All cross-domain traffic is forwarded by the gateway node of each edge node pool. This mode is typically used to collect the metrics of containers deployed on edge nodes.

    Important

    This feature is in public preview. Data loss may occur during cross-domain communication through the Internet. Do not use this feature to transmit business-critical data. If you encounter issues when you use the tunnel mode or have any suggestions, submit a ticket.

    How the tunnel mode works:

    • On both edge nodes and cloud nodes, the Raven agent creates a Virtual eXtensible Local Area Network (VXLAN) in each network domain and adds a network interface controller (NIC) named raven0 to forward requests within the network domain.

    • In each edge node pool, one edge node is elected as the gateway node. The elected node creates IPsec-VPN tunnels with the gateway nodes in the cloud.

    • A standalone edge node does not need to serve as a proxy for cross-domain requests from other nodes. Therefore, no VXLAN is required in its network domain.

    • The Kubernetes-native Container Network Interface (CNI) plug-in forwards requests among containers within a network domain. Cross-domain requests are forwarded to the gateway node through the raven0 NIC and then travel across network domains through the VPN tunnel.

    The following figure shows the architecture of Raven.

    (Figure: tunnel mode architecture)
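The proxy mode's NodeName+Port forwarding described above reduces to a lookup: the cloud-side proxy resolves the target node name to the edge gateway that can reach it, then relays the Layer 7 request through that gateway's reverse tunnel. The sketch below is a simplified illustration; the node and gateway names and the static table are hypothetical, not Raven's actual data structures:

```python
# Toy model of proxy-mode routing: NodeName+Port -> responsible edge gateway.
# Raven derives this mapping from node pool and gateway information; a static
# dictionary is used here purely for illustration.
NODE_TO_GATEWAY = {
    "edge-node-1": "gateway-c",        # node behind the gateway of its pool
    "edge-node-2": "gateway-c",
    "edge-device-1": "edge-device-1",  # standalone device is its own gateway
}

def route(target: str) -> tuple[str, str, int]:
    """Resolve a NodeName+Port target to (gateway, node, port)."""
    node, _, port = target.rpartition(":")
    gateway = NODE_TO_GATEWAY[node]
    return gateway, node, int(port)

# A request such as `kubectl logs` against the kubelet on edge-node-1:10250
# is relayed through gateway-c's reverse tunnel.
print(route("edge-node-1:10250"))  # ('gateway-c', 'edge-node-1', 10250)
```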
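Likewise, the tunnel mode's per-node forwarding decision comes down to a destination check: traffic to an address inside the local network domain is delivered directly by the CNI plug-in, while traffic to any other domain leaves through the raven0 NIC toward the gateway node's VPN tunnel. A minimal sketch, assuming an illustrative local CIDR (the value is not from any real cluster):

```python
import ipaddress

# Hypothetical CIDR block of the local network domain.
LOCAL_DOMAIN = ipaddress.ip_network("172.16.0.0/16")

def next_hop(dst_ip: str) -> str:
    """Pick the egress path for a packet, tunnel-mode style."""
    if ipaddress.ip_address(dst_ip) in LOCAL_DOMAIN:
        return "cni"     # in-domain: normal CNI forwarding
    return "raven0"      # cross-domain: via raven0 to the gateway's VPN tunnel

print(next_hop("172.16.3.9"))    # 'cni'
print(next_hop("192.168.5.20"))  # 'raven0'
```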

References

  • For more information about how to change the communication mode, configure network ACLs, or configure custom gateways, see Use the cloud-edge communication component Raven.

  • The raven-agent-ds component will be continuously updated for ACK Edge clusters. For more information about the release notes, see raven-agent-ds.