
Deep-Dive Into OpenYurt: YurtHub Extended Capabilities

This article describes the extended capabilities of YurtHub and explores how they apply to edge computing scenarios.

By Xinsheng, an Alibaba Cloud Technical Expert

Since OpenYurt was made open-source, its non-intrusive architecture for integrating cloud-native and edge computing has attracted attention from many developers. Alibaba Cloud launched the open-source OpenYurt project to share its experience in the cloud-native edge computing field with the open-source community, accelerate the extension of cloud computing to the edge, and work with the community to define unified standards for future cloud-native edge computing architectures. We published the Deep-Dive Into OpenYurt article series to help the community understand OpenYurt. The third article in this series describes the extended capabilities of YurtHub.


Introduction to OpenYurt

OpenYurt is a cloud-native edge computing solution that was made open-source one year after ACK@Edge was released. OpenYurt differs from other open-source containerized edge computing solutions in that it adheres to the design philosophy of extending native Kubernetes to the edge without requiring any modifications to Kubernetes. A native Kubernetes cluster can be converted into an OpenYurt cluster in one step, gaining edge cluster capabilities.

OpenYurt will adhere to the following development concepts during evolution:

  • Non-intrusive Kubernetes enhancement
  • Synchronous evolution with the mainstream technologies of the cloud-native community

YurtHub Architecture

In the previous article, we introduced the OpenYurt edge autonomy design and discussed the YurtHub component. The following figure shows the YurtHub architecture:


One of YurtHub's advantages is its compatibility with the Kubernetes design, which makes it easier for YurtHub to extend more capabilities. Next, we will describe YurtHub's extended capabilities in detail.

1. Edge Network Autonomy

Edge network autonomy keeps cross-node communication working so that edge services continue running, or automatically recover, when the edge is disconnected from the cloud and service containers or edge nodes are restarted.

To ensure edge network autonomy, OpenYurt needs to meet the following requirements (here, a Flannel VXLAN overlay network is used as an example):

  • (1) Network configurations on nodes must be autonomous. The iptables or IPVS rules maintained by kube-proxy, the FDB, ARP, and route entries maintained by Flannel, and the domain name resolution provided by CoreDNS must all recover automatically after a node restart. Otherwise, cross-node communication at the edge will fail.
  • (2) Service containers must have fixed IP addresses. When the edge is disconnected from the cloud, other nodes cannot be notified of container IP address changes.
  • (3) Virtual Tunnel End Points (VTEPs) must have fixed IP addresses. The reason is similar to the need for fixed container IP addresses.

Requirement 1 means that kube-proxy, Flannel, CoreDNS, and similar components must themselves be autonomous in order to keep network configurations autonomous. If edge autonomy were implemented by modifying the kubelet, extending it to the network layer would be very difficult: forcibly porting the modified kubelet's autonomy logic into each network component, such as kube-proxy, Flannel, and CoreDNS, would be a nightmare for the entire architecture.

In OpenYurt, YurtHub is an independent component, so network components such as kube-proxy, Flannel, and CoreDNS can easily rely on it for autonomous network configurations. YurtHub caches network configuration resources, such as services, in local storage. When the network is disconnected or a node restarts, network components can still obtain the object states and configuration information from before the interruption, as shown in the following figure:


Requirements 2 and 3 are independent of the Kubernetes core and involve Container Network Interface (CNI) plugins and flanneld enhancement. We will describe them in detail in subsequent articles.

2. Multiple Cloud Addresses

When a Kubernetes cluster is deployed on a public cloud in high availability (HA) mode, a Server Load Balancer (SLB) sits between the kube-apiserver instances and the edge. In private cloud or edge computing scenarios, however, edge nodes may need different cloud addresses to access the kube-apiserver:

  • In private cloud scenarios, no SLB is deployed, so virtual IP addresses (VIPs) are needed to load-balance across the cloud kube-apiserver instances.
  • Alternatively, NGINX can be deployed on each node (as kubespray does) so that each node accesses the cloud through its own local address.
  • In edge computing scenarios, edge-cloud communication may run over both a private line and the public network to meet stability and security requirements: for example, the private line is used by default, and traffic fails over to the public network when the private line is faulty.

YurtHub supports cloud access with different IP addresses to meet the preceding requirements. You can use either of the following load balancing modes for cloud access addresses:

  • rr (round-robin): requests are distributed across the addresses in turn. This is the default mode.
  • priority: addresses with a higher priority are preferred; lower-priority addresses are used only when all higher-priority addresses are unavailable.
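The two modes can be sketched in Go as follows; the function, the health map, and the counter are illustrative assumptions rather than YurtHub's actual LB implementation:

```go
package main

// pickBackend sketches YurtHub's two balancing modes for multiple cloud
// kube-apiserver addresses. healthy marks the addresses currently
// reachable; counter drives the round-robin rotation.
func pickBackend(mode string, addrs []string, healthy map[string]bool, counter *int) string {
	switch mode {
	case "priority":
		// addrs are ordered by priority; take the first healthy one.
		for _, a := range addrs {
			if healthy[a] {
				return a
			}
		}
	default: // "rr": rotate through the addresses, skipping unhealthy ones
		for i := 0; i < len(addrs); i++ {
			a := addrs[*counter%len(addrs)]
			*counter++
			if healthy[a] {
				return a
			}
		}
	}
	return "" // no healthy address available
}
```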

For more information, please see the LB module of YurtHub, as shown in the following figure:


3. Node-Based Cloud Throttling

Throttling is a necessity in any distributed system. Native Kubernetes implements it in the kube-apiserver from the cluster's perspective and in the client-go library from the client's perspective. In edge computing scenarios, however, client-go throttling is scattered across many clients and is somewhat intrusive to workloads, so it cannot solve the throttling problem at the node level.

At the edge, YurtHub can take over the cloud-bound traffic from both system components and service containers and implement node-level throttling toward the cloud: when the number of concurrent requests to the cloud from a single node exceeds 250, YurtHub rejects new requests.

4. Node Certificate Change Management

Kubernetes supports automatic node certificate rotation: when a node certificate is about to expire, the kubelet automatically requests a new one from the cloud. In edge computing scenarios, however, this request can fail while the edge is disconnected from the cloud. If the certificate expires before the connection is restored, it may never be rotated, leaving the kubelet in a restart loop.

Since YurtHub takes over the traffic between edge nodes and the cloud, it can also take over node certificate management. This avoids the inconsistent certificate handling of different installation tools and ensures that expired certificates are renewed automatically after the network recovers. Currently, YurtHub manages node certificates together with the kubelet; an independent node certificate management feature is planned for the near future.

5. Other Capabilities

In addition to the preceding extended capabilities, YurtHub also provides other valuable capabilities, including:

  • Multi-Tenant Node Isolation: In a Kubernetes cluster with multi-tenant isolation, YurtHub ensures that cloud requests from a node return only the resources of the tenant the node belongs to. For example, listing Services returns only that tenant's Services. This capability requires no modifications to other components, although implementing multi-tenant isolation in the cluster itself requires several groups of custom resource definitions (CRDs). For more information, please see the kubernetes-sigs/multi-tenancy project.
  • Inter-Cluster Node Migration: In some scenarios, edge nodes need to be migrated from cluster A to cluster B. Usually, this means removing the nodes from cluster A, joining them to cluster B, and redeploying the applications there. Because YurtHub takes over both the node traffic and node certificate management, the access information of cluster B can instead be injected into YurtHub, allowing lossless migration of the node to cluster B.
  • YurtHub also provides other functions, such as accessing the cloud kube-apiserver by domain name.
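The multi-tenant filtering described in the first item above amounts to trimming list responses on the node side. A minimal sketch, with an assumed object/tenant representation rather than real Kubernetes types:

```go
package main

// object stands in for a Kubernetes resource tagged with its tenant;
// the representation is illustrative.
type object struct {
	Name   string
	Tenant string
}

// filterByTenant trims a list response from the cloud so that a node
// only sees objects belonging to its own tenant, preserving order.
func filterByTenant(objs []object, nodeTenant string) []object {
	var out []object
	for _, o := range objs {
		if o.Tenant == nodeTenant {
			out = append(out, o)
		}
	}
	return out
}
```

Because the filtering happens in the node-side proxy, components such as kube-proxy need no awareness of tenancy at all.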


With the preceding extended capabilities, YurtHub serves as a reverse proxy on edge nodes that supports data caching. It also adds an encapsulation layer to the application lifecycle management of Kubernetes nodes and provides core control capabilities required for edge computing.

YurtHub was designed for edge computing scenarios, but it can also serve as a common node-side component in any Kubernetes scenario. We believe these extended capabilities will drive YurtHub to become more stable and perform better, and we invite you to participate in the project.

