OpenYurt v1.6: Introducing Node-level Traffic Multiplexing Capability

The key features of OpenYurt v1.6 include node-level traffic multiplexing and enhanced edge autonomy.

OpenYurt v1.6 was officially released on January 8, Beijing time. As an open-source project in the edge cloud-native field, OpenYurt is committed to solving the challenges of managing dispersed computing resources and workloads. Built on the Kubernetes project, OpenYurt adopts a cloud-edge-end integrated architecture and provides core capabilities such as edge autonomy, cross-regional communication, multi-regional resource and application management, and device management, all in a non-intrusive manner. The key features of the newly released v1.6 are node-level traffic multiplexing and enhanced edge autonomy.

Node-level Traffic Multiplexing

In an OpenYurt cluster, the control components are deployed in the cloud, while edge nodes typically communicate with them over the public network to list/watch cluster resources. Components such as Kubelet, Flannel, Kube-proxy, and CoreDNS run a dedicated instance on each edge node, and each instance continuously list/watches resources such as Services and EndpointSlices. As the cluster's scale and resource counts grow, this per-node traffic puts pressure on cloud-edge links and drives up bandwidth costs. To alleviate this, v1.6 introduces a traffic multiplexing module in YurtHub. The module actively fetches the resources designated for multiplexing from the Kube-apiserver and caches them. When a client requests these resources through YurtHub, YurtHub no longer proxies the request to the Kube-apiserver; instead, it serves the data from the cache, so a single cloud-edge stream is shared by all local clients. This capability can reduce cloud-edge communication traffic by approximately 50% when a large number of Pods and Services are deployed.

The following figure shows the request forwarding process between modules.

[Figure 1: Request forwarding process between modules]

1) The Multiplexer Cache list/watches the resources designated for multiplexing from the Kube-apiserver and caches them in memory.

2) The client sends requests to the YurtHub Server to retrieve target resources.

3) YurtHub determines whether the requested resource is one designated for multiplexing:

  • For multiplexed resources, the data is served from the local Multiplexer Cache, reducing cloud-edge traffic.
  • For non-multiplexed resources, the request is forwarded to the Kube-apiserver as before.
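
To make the cache-or-proxy decision concrete, here is a minimal sketch of a YurtHub-like proxy built with client-go, in which an informer plays the role of the Multiplexer Cache. The isMultiplexed helper, the kubeconfig path, and the listen port are illustrative assumptions rather than OpenYurt's actual code.

```go
// Sketch of node-level traffic multiplexing: serve designated resources
// from a local informer cache, proxy everything else to the apiserver.
package main

import (
	"encoding/json"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
	"time"

	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Only Services are wired up in this sketch; OpenYurt also multiplexes
// resources such as EndpointSlices.
func isMultiplexed(path string) bool {
	for _, seg := range strings.Split(path, "/") {
		if seg == "services" {
			return true
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/yurthub/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The informer factory plays the role of the Multiplexer Cache: it
	// list/watches the designated resources once and keeps them in memory.
	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	svcLister := factory.Core().V1().Services().Lister()
	factory.Start(nil)
	factory.WaitForCacheSync(nil)

	apiserver, _ := url.Parse(cfg.Host)
	proxy := httputil.NewSingleHostReverseProxy(apiserver)
	// A production proxy must also attach client TLS credentials; omitted here.

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if r.Method == http.MethodGet && isMultiplexed(r.URL.Path) {
			// Serve the data from the local cache: no cloud-edge traffic.
			svcs, err := svcLister.List(labels.Everything())
			if err == nil {
				w.Header().Set("Content-Type", "application/json")
				json.NewEncoder(w).Encode(svcs) // simplified: not a proper ServiceList object
				return
			}
		}
		// Non-multiplexed resources (and cache errors) go to the Kube-apiserver.
		proxy.ServeHTTP(w, r)
	})
	http.ListenAndServe(":10261", nil)
}
```

The real YurtHub assembles proper list/watch responses and handles authentication and filtering; the sketch only shows where the traffic split happens.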

Enhanced Edge Autonomy

OpenYurt already provides strong edge autonomy features to ensure that applications on edge nodes continue to run even when disconnected from the cloud. However, there are still several areas where the current edge autonomy feature could be improved:

• If nodes carry autonomy annotations, the cloud controller never evicts their Pods automatically, regardless of whether the disconnection is caused by a cloud-edge network problem or by an actual node failure. In the latter case, however, you may still want Pods to be evicted automatically.

• Implementing the autonomy capability requires disabling the NodeLifeCycle controller in the kube-controller-manager component, which is not possible in managed Kubernetes environments.

In v1.6, we have further enhanced the autonomy capabilities. These enhancements include:

• A new node autonomy annotation (node.openyurt.io/autonomy-duration) has been added, allowing you to specify how long a node may remain autonomous. If the node has gone without reporting a heartbeat for less than this duration, the system treats the outage as a network disconnection and does not evict its Pods; once the duration is exceeded, it treats the outage as a node failure and proceeds with Pod eviction (see the sketch after this list).

• In the Kube-Controller-Manager, the NodeLifeCycle controller is no longer disabled. Instead, Yurt-Manager incorporates an Endpoints/EndpointSlices Webhook that prevents Pods on autonomous nodes from being removed from service endpoints.
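
The first enhancement reduces to a simple time comparison, sketched below. The function name, the heartbeat plumbing, and the assumption that the annotation value parses as a Go-style duration string (e.g. "10m") are ours for illustration, not OpenYurt's source code.

```go
// Sketch of the eviction rule implied by node.openyurt.io/autonomy-duration.
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
)

const autonomyDurationAnnotation = "node.openyurt.io/autonomy-duration"

// shouldEvictPods reports whether Pods on the node should be evicted,
// given how long the node has gone without a heartbeat.
func shouldEvictPods(node *corev1.Node, sinceLastHeartbeat time.Duration) bool {
	raw, ok := node.Annotations[autonomyDurationAnnotation]
	if !ok {
		return true // no autonomy configured: normal eviction behavior
	}
	d, err := time.ParseDuration(raw) // assumed format, e.g. "10m" or "2h"
	if err != nil {
		return true // unparseable annotation: fall back to eviction
	}
	// Within the autonomy window the outage is treated as a cloud-edge
	// network disconnection; beyond it, as a node failure.
	return sinceLastHeartbeat > d
}

func main() {
	node := &corev1.Node{}
	node.Annotations = map[string]string{autonomyDurationAnnotation: "10m"}
	fmt.Println(shouldEvictPods(node, 5*time.Minute))  // false: keep Pods
	fmt.Println(shouldEvictPods(node, 30*time.Minute)) // true: evict
}
```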

The following figure shows the Webhook workflow.

[Figure 2: Endpoints/EndpointSlices Webhook workflow]

1) The Pod NotReady status triggers the Endpoints/EndpointSlices controller to update the corresponding resources and set the related addresses to NotReady.

2) The Kube-apiserver calls the Webhook.

3) The Webhook modifies the status of the corresponding addresses based on whether the node associated with the Pod has autonomy configured:

  • If autonomy is configured, the status of the corresponding address is adjusted to Ready.
  • If autonomy is not configured, no changes are made.

4) The Webhook returns the modified resources.

5) The Kube-apiserver writes the resources to etcd.
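
Here is a minimal sketch of the mutation in step 3 as it might apply to an EndpointSlice, with the autonomy check abstracted behind a callback (in practice it would inspect the Pod's node, e.g. for the autonomy annotation described earlier). The helper names are assumptions, not Yurt-Manager's actual implementation.

```go
// Sketch of the webhook's mutation: restore readiness for endpoints whose
// backing node has autonomy configured, so they stay in service endpoints.
package main

import (
	"fmt"

	discoveryv1 "k8s.io/api/discovery/v1"
)

// nodeHasAutonomy abstracts the node lookup (e.g. checking for the
// node.openyurt.io/autonomy-duration annotation via a lister).
type nodeHasAutonomy func(nodeName string) bool

// restoreAutonomousEndpoints flips NotReady endpoints back to Ready when
// the backing node has autonomy configured; all other endpoints are left
// untouched, matching step 3 above.
func restoreAutonomousEndpoints(slice *discoveryv1.EndpointSlice, hasAutonomy nodeHasAutonomy) {
	ready := true
	for i := range slice.Endpoints {
		ep := &slice.Endpoints[i]
		// Skip endpoints without node info, without a readiness
		// condition, or that are already Ready.
		if ep.NodeName == nil || ep.Conditions.Ready == nil || *ep.Conditions.Ready {
			continue
		}
		if hasAutonomy(*ep.NodeName) {
			ep.Conditions.Ready = &ready
		}
	}
}

func main() {
	notReady := false
	node := "edge-node-1"
	slice := &discoveryv1.EndpointSlice{
		Endpoints: []discoveryv1.Endpoint{{
			Addresses:  []string{"10.0.0.5"},
			NodeName:   &node,
			Conditions: discoveryv1.EndpointConditions{Ready: &notReady},
		}},
	}
	// Pretend every node has autonomy configured for this demo.
	restoreAutonomousEndpoints(slice, func(string) bool { return true })
	fmt.Println(*slice.Endpoints[0].Conditions.Ready) // true: address stays in service
}
```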

Other Updates

You can visit the GitHub release page for the full list of changes, along with their authors and commit records.

Community Engagement

All capabilities of OpenYurt v1.6 are also available in the ACK Edge product. We welcome anyone interested to learn more and try it out.

Meanwhile, the community is actively working on the v1.7 release, which will focus on traffic multiplexing at the node pool level and deploying local Kubernetes clusters using OpenYurt. We warmly invite you to join us in the OpenYurt open-source community through GitHub or engage in discussions by joining the community Slack channel.
