
OpenYurt v0.3.0 Released: Improve Application Deployment Efficiency in Edge Scenarios

This article discusses the release of OpenYurt v0.3.0.


Built on native Kubernetes, OpenYurt is the first non-intrusive edge computing project in the industry. It extends Kubernetes to support edge computing scenarios, provides complete Kubernetes API compatibility, and supports all Kubernetes workloads, services, operators, CNI plug-ins, and CSI plug-ins. Thanks to its node autonomy capability, applications running on edge nodes are not affected even if the edge nodes are disconnected from the cloud. OpenYurt can be deployed easily in any Kubernetes cluster, extending cloud-native capabilities to the edge.

Release of OpenYurt v0.3.0

OpenYurt v0.3.0 was released in China on November 8, 2020. It introduces the concepts of node pool and united deployment for the first time and adds the cloud-side Yurt-App-Manager component. These changes improve application deployment efficiency in edge scenarios and reduce the complexity of O&M for edge nodes and applications. In addition, the performance of core components, such as YurtHub and YurtTunnel, has been optimized. The kubeadm provider in YurtCtl can quickly and easily convert Kubernetes clusters created by kubeadm into OpenYurt clusters.


Yurt-App-Manager: Designed for the O&M of Edge Applications

OpenYurt provides the Yurt-App-Manager component, a standard Kubernetes extension that works alongside Kubernetes to provide two controllers: NodePool and UnitedDeployment. Together, they provide O&M capabilities for nodes and applications in edge computing scenarios.


In edge scenarios, edge nodes generally have strong regional or logical grouping characteristics, such as the same CPU architecture, the same carrier, or the same cloud provider. Nodes in different groups are often isolated from one another: networks are disconnected, resources are not shared, resources are heterogeneous, and applications run independently. This is why NodePool came into being. As the name implies, a NodePool is a pool, group, or unit of nodes. Traditionally, hosts are classified and managed through labels so that worker nodes sharing the same attributes can be handled together. However, as the number of nodes and labels increases, O&M on different groups of nodes (for example, configuring scheduling policies and taints in a batch) becomes less efficient and flexible, as shown in the following figure:


NodePool abstracts node grouping at a higher dimension. Therefore, the hosts in different edge regions can be managed, operated, and maintained uniformly from the perspective of the NodePool, as shown in the following figure:
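As an illustration, a NodePool resource might look like the following. This is a minimal sketch based on the apps.openyurt.io/v1alpha1 API shipped with Yurt-App-Manager; the pool name and the label and taint values are hypothetical, so verify the exact field names against the CRD installed in your cluster:

```yaml
# Hypothetical NodePool for edge nodes in a "hangzhou" region.
# Labels, annotations, and taints declared in spec are intended to be
# applied uniformly to every node that joins the pool.
apiVersion: apps.openyurt.io/v1alpha1
kind: NodePool
metadata:
  name: hangzhou
spec:
  type: Edge            # Edge or Cloud
  labels:
    region: hangzhou    # example label propagated to member nodes
  taints:
    - key: node.openyurt.io/example   # example taint, hypothetical key
      value: hangzhou
      effect: NoSchedule
```

With this abstraction, batch operations such as adding a taint to all nodes in a region become a single edit to the NodePool object instead of per-node label surgery.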



In edge scenarios, the same application may need to be deployed on compute nodes in different regions. Take Deployment as an example: traditionally, the same label is set on the compute nodes of each region, multiple Deployments are created, and node selectors are used to match different labels so that the same application is deployed to each region in turn. However, these Deployments, which all represent the same application, differ only in name, node selector, and number of replicas, as shown in the following figure:


However, as nodes become distributed across more regions with different requirements for applications, O&M becomes more complex, as shown in the following aspects:

  • Each Deployment must be modified one by one to upgrade the image version.
  • A naming convention for the Deployments must be devised to indicate that they belong to the same application.
  • As edge scenarios become more complex and demands increase, the Deployments in each node pool are configured differently, which makes them difficult to manage.

UnitedDeployment automatically creates, updates, and deletes these Deployment sub-tasks through a higher-level abstraction, as shown in the following figure:


The UnitedDeployment controller provides a template to define the application and manages multiple workloads to match different regions. The workloads managed by a UnitedDeployment in each region are called pools. Currently, pools support two workload types: StatefulSet and Deployment. The controller creates a sub-workload resource object from each pool configuration of the UnitedDeployment, and each resource object has an expected number of pod replicas. In this way, multiple Deployment or StatefulSet resources can be maintained automatically through a single UnitedDeployment instance, with differentiated configurations, such as replicas, for each pool. For a more hands-on experience, please see the Yurt-App-Manager Tutorial and Developer Tutorials.
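The shape of a UnitedDeployment can be sketched as follows. This is an illustrative fragment based on the apps.openyurt.io/v1alpha1 API; the application name, image, pool names, and the node-selector label key are hypothetical, so check the Yurt-App-Manager tutorials for the exact schema of your release:

```yaml
# Hypothetical UnitedDeployment: one Deployment template stamped out
# into two regional pools with different replica counts.
apiVersion: apps.openyurt.io/v1alpha1
kind: UnitedDeployment
metadata:
  name: example-app
spec:
  selector:
    matchLabels:
      app: example-app
  workloadTemplate:
    deploymentTemplate:         # a statefulSetTemplate is also supported
      metadata:
        labels:
          app: example-app
      spec:
        selector:
          matchLabels:
            app: example-app
        template:
          metadata:
            labels:
              app: example-app
          spec:
            containers:
              - name: nginx
                image: nginx:1.19
  topology:
    pools:
      - name: hangzhou
        nodeSelectorTerm:
          matchExpressions:
            - key: apps.openyurt.io/nodepool   # example label key
              operator: In
              values: [hangzhou]
        replicas: 2
      - name: shanghai
        nodeSelectorTerm:
          matchExpressions:
            - key: apps.openyurt.io/nodepool
              operator: In
              values: [shanghai]
        replicas: 1
```

The controller derives one Deployment per pool from the shared template, so upgrading the image once in the template rolls it out to every region.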

For more information about Yurt-App-Manager, please see the related community issues and pull requests.

Node Autonomy Component: YurtHub

YurtHub is a daemon running on each node of a Kubernetes cluster. It acts as a proxy for the outbound traffic of Kubelet, kube-proxy, and CNI plug-ins, and caches the state of all resources that these daemons may access in the local storage of the edge node. If the edge node goes offline, these daemons can recover their state after restarting, which realizes edge autonomy. In OpenYurt v0.3.0, the community has made a number of functional enhancements to YurtHub, including:

  • When YurtHub connects to the kube-apiserver, it automatically requests a certificate from the kube-apiserver, and the certificate's expiration time can be configured.
  • YurtHub adds a timeout mechanism when watching cloud resources.
  • YurtHub optimizes its response when locally cached data does not exist.

Cloud-Edge O&M Channel Component: YurtTunnel

YurtTunnel consists of a TunnelServer on the cloud and a TunnelAgent running on each edge node. The TunnelServer establishes a connection with the TunnelAgent daemon on each edge node through a reverse proxy, providing secure network access between the control plane on the public cloud and the edge nodes on the enterprise intranet. In OpenYurt v0.3.0, the community made many enhancements to YurtTunnel in terms of reliability, stability, and integration testing.

O&M Component: YurtCtl

In OpenYurt v0.3.0, YurtCtl supports the kubeadm provider, which can quickly and easily convert native Kubernetes clusters created by kubeadm into Kubernetes clusters adapted to the weak network environments at the edge. This improves the user experience of OpenYurt.
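An illustrative conversion might look like the following. The flag names and the node name are assumptions based on the v0.3.0 yurtctl tutorials, so consult `yurtctl convert --help` in your release before running anything:

```shell
# Hypothetical example: convert a kubeadm-provisioned cluster to OpenYurt,
# marking the control-plane node as a cloud node so it is not given
# edge autonomy components. "master-node" is a placeholder name.
yurtctl convert --provider kubeadm --cloud-nodes master-node
```

After the conversion, the remaining worker nodes run YurtHub and behave as edge nodes that tolerate disconnection from the cloud.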

Future Plans

OpenYurt v0.3.0 improves the extensibility of native Kubernetes in edge scenarios. The Yurt-App-Manager component was released to address application deployment in edge scenarios. In the future, the OpenYurt community will continue to focus on device management, edge O&M and scheduling, community governance, and the contributor experience. We would like to thank the Intel and VMware developers for their help. If you are interested in OpenYurt, you are welcome to join us in building a stable and reliable cloud-native edge computing platform. For more community details, please see: https://github.com/alibaba/openyurt

