
More Flexible O&M for Cloud-Native Edge Computing: Patch Feature Added in OpenYurt with UnitedDeployment

This article discusses a new OpenYurt patch feature.

By Zhang Jie (Bingyu)

Background

Let's begin by reviewing the concept and design behind unitized deployment. In edge computing scenarios, compute nodes may be spread across regions, and the same application may need to run on nodes in different regions. Taking Deployment as an example: as shown in the figure below, the traditional approach is to first give the compute nodes in each region a common label and then create one Deployment per region, with each Deployment selecting its region's label through a NodeSelector. In this way, the same application is deployed to every region.
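As a sketch of this traditional approach, each region gets its own Deployment pinned to that region's nodes. The label key `region` and the name `nginx-beijing` below are illustrative assumptions, not part of OpenYurt:

```yaml
# One Deployment per region; "nginx-beijing" and the label "region=beijing"
# are placeholder names for this sketch.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-beijing
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        region: beijing   # assumes nodes in this region were labeled region=beijing
      containers:
      - name: nginx
        image: nginx:1.18.0
```

A near-identical copy of this manifest would have to be maintained for every other region, which is exactly the duplication UnitedDeployment removes.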

(Figure 1: deploying the same application to multiple regions by creating one Deployment per region)

However, as the number of geographic regions grows, O&M becomes more complex in several respects:

  • When the image version is updated, the image configuration of every related Deployment must be modified one by one.
  • A custom Deployment naming convention is needed to indicate that the Deployments belong to the same application.
  • There is no higher-level view for managing and operating these Deployments, so O&M complexity grows linearly with the number of applications and regions.

Based on the requirements and problems above, OpenYurt's yurt-app-manager provides UnitedDeployment, a higher-level abstraction that manages these sub-Deployments centrally (automatic creation, update, and deletion), significantly reducing O&M complexity.

Yurt-app-manager components: https://github.com/openyurtio/yurt-app-manager

The following figure shows the specific information:

(Figure 2: UnitedDeployment: a WorkloadTemplate plus a list of Pools)

UnitedDeployment provides a higher-level abstraction over these workloads and consists of two main parts: a WorkloadTemplate and Pools. The WorkloadTemplate can be a Deployment or StatefulSet template. Pools is a list, where each entry configures one Pool; each Pool has its own name, replicas, and nodeSelector. Through the nodeSelector, a Pool selects a group of machines, so in edge scenarios a Pool can be regarded as a group of machines in a given region. Using a WorkloadTemplate together with Pools, users can easily distribute a Deployment or StatefulSet application to different regions.

The following is a specific UnitedDeployment example:

apiVersion: apps.openyurt.io/v1alpha1
kind: UnitedDeployment
metadata:
  name: test
  namespace: default
spec:
  selector:
    matchLabels:
      app: test
  workloadTemplate:
    deploymentTemplate:
      metadata:
        labels:
          app: test
      spec:
        selector:
          matchLabels:
            app: test
        template:
          metadata:
            labels:
              app: test
          spec:
            containers:
            - image: nginx:1.18.0
              imagePullPolicy: Always
              name: nginx
  topology:
    pools:
    - name: beijing
      nodeSelectorTerm:
        matchExpressions:
        - key: apps.openyurt.io/nodepool
          operator: In
          values:
          - beijing
      replicas: 1
    - name: hangzhou
      nodeSelectorTerm:
        matchExpressions:
        - key: apps.openyurt.io/nodepool
          operator: In
          values:
          - hangzhou
      replicas: 2

The controller logic for this UnitedDeployment is as follows:

The UnitedDeployment CR defines a DeploymentTemplate and two Pools.

  • The DeploymentTemplate is in Deployment format. In this example, the image is nginx:1.18.0.
  • Pool1 is named beijing, with 1 replica and the nodeSelector apps.openyurt.io/nodepool=beijing. For it, the UnitedDeployment controller creates a sub-Deployment with replicas of 1 and that nodeSelector, inheriting all other configuration from the DeploymentTemplate.
  • Pool2 is named hangzhou, with 2 replicas and the nodeSelector apps.openyurt.io/nodepool=hangzhou. For it, the controller creates a sub-Deployment with replicas of 2 and that nodeSelector, inheriting all other configuration from the DeploymentTemplate.

After detecting that a UnitedDeployment CR named test has been created, the UnitedDeployment controller first generates a Deployment template object from the DeploymentTemplate configuration. Based on this template and the configurations of Pool1 and Pool2, it then generates two Deployment objects with the name prefixes test-beijing- and test-hangzhou-, each carrying its own nodeSelector and replicas configuration. In this way, users can distribute workloads to different regions by combining the workloadTemplate and Pools, without maintaining a large number of Deployment resources themselves.
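For example, the sub-Deployment generated for the beijing pool would look roughly like the sketch below. The random name suffix is illustrative, and the exact way the pool's nodeSelectorTerm is surfaced (plain nodeSelector vs. node affinity) may differ between yurt-app-manager versions:

```yaml
# Illustrative sketch of a generated sub-Deployment, not controller output.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-beijing-rk8g8        # {UnitedDeployment name}-{pool name}- plus a random suffix
spec:
  replicas: 1                     # from the beijing pool
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      affinity:
        nodeAffinity:             # derived from the pool's nodeSelectorTerm
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: apps.openyurt.io/nodepool
                operator: In
                values:
                - beijing
      containers:                 # inherited from the DeploymentTemplate
      - name: nginx
        image: nginx:1.18.0
        imagePullPolicy: Always
```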

Problems Solved by UnitedDeployment

UnitedDeployment can automatically maintain multiple Deployments or StatefulSets through a single UnitedDeployment instance. Each Deployment or StatefulSet follows a unified naming convention, while differentiated configuration such as Name, NodeSelector, and Replicas remains possible per pool, simplifying O&M for users in edge scenarios.

New Requirements

UnitedDeployment meets most user needs. However, during promotion and adoption, and in discussions with community members, we found that the functions it provides are insufficient in some specific scenarios. For example:

  • When an application image is upgraded, users want to verify it in one node pool first and roll it out to all node pools only after verification succeeds.
  • Users build private image registries in different node pools to speed up image pulls, so the image name of the same application differs across node pools.
  • The number of servers, their specifications, and the business load differ across node pools, so pod configurations such as CPU and memory for the same application differ across node pools.
  • The same application may use different ConfigMap resources in different node pools.

These requirements call for UnitedDeployment to support per-pool customization, allowing users to tailor pod configuration, such as images, requests, and limits, to the real-world conditions of each node pool. To maximize flexibility, we decided to add a Patch field to the pool, letting users customize the patch content freely. The patch follows the Kubernetes strategic merge patch format, the same format used by the familiar kubectl patch command.

Add a patch to the pool, as shown in the following example:

    pools:
    - name: beijing
      nodeSelectorTerm:
        matchExpressions:
        - key: apps.openyurt.io/nodepool
          operator: In
          values:
          - beijing
      replicas: 1          
      patch:
        spec:
          template:
            spec:
              containers:
              - image: nginx:1.19.3
                name: nginx

The content defined in the patch must follow the Kubernetes strategic merge patch format. If you have used kubectl patch, you already know how to write it. For details, see "Update API Objects in Place Using kubectl patch" in the Kubernetes documentation.
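The key property of a strategic merge patch is that lists of objects, such as containers, are merged element-by-element using a merge key (name for containers) rather than replaced wholesale, so a patch that only sets a new image keeps the other container fields intact. The following toy Python sketch illustrates that idea; it is not the real Kubernetes implementation, and the function name is ours:

```python
# Toy illustration of strategic-merge-patch semantics: dicts merge
# recursively, and lists of dicts merge by a merge key ("name"), so
# patching a container's image preserves its other fields.
# NOT the real Kubernetes implementation, just a sketch of the idea.

def strategic_merge(live, patch, merge_key="name"):
    """Recursively merge `patch` into `live`, merging lists of dicts by key."""
    if isinstance(live, dict) and isinstance(patch, dict):
        out = dict(live)
        for k, v in patch.items():
            out[k] = strategic_merge(live[k], v, merge_key) if k in live else v
        return out
    if isinstance(live, list) and isinstance(patch, list):
        merged = list(live)
        index = {item.get(merge_key): i for i, item in enumerate(merged)
                 if isinstance(item, dict)}
        for item in patch:
            key = item.get(merge_key) if isinstance(item, dict) else None
            if key in index:  # merge with the existing element by key
                merged[index[key]] = strategic_merge(merged[index[key]], item, merge_key)
            else:             # otherwise append as a new element
                merged.append(item)
        return merged
    return patch  # scalars and mismatched types: the patch value wins

live = {"spec": {"template": {"spec": {"containers": [
    {"name": "nginx", "image": "nginx:1.18.0", "imagePullPolicy": "Always"}]}}}}
patch = {"spec": {"template": {"spec": {"containers": [
    {"name": "nginx", "image": "nginx:1.19.3"}]}}}}

result = strategic_merge(live, patch)
c = result["spec"]["template"]["spec"]["containers"][0]
print(c["image"], c["imagePullPolicy"])  # image updated, imagePullPolicy kept
```

With a plain JSON merge patch, the containers list would be replaced outright; the merge-by-key behavior is what lets a pool patch override only the image while inheriting everything else from the template.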

The following is a demonstration of the UnitedDeployment patch capability.

Feature Demonstration

1. Preparation

  • Prepare an OpenYurt cluster or Kubernetes cluster with at least two nodes. Label one node apps.openyurt.io/nodepool=beijing and the other apps.openyurt.io/nodepool=hangzhou.
  • The yurt-app-manager component must be installed in the cluster.

Yurt-app-manager components: https://github.com/openyurtio/yurt-app-manager
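The labeling step might look like this, where the node names node1 and node2 are placeholders for the actual node names in your cluster:

```shell
# Label one node per region (node names are placeholders).
kubectl label node node1 apps.openyurt.io/nodepool=beijing
kubectl label node node2 apps.openyurt.io/nodepool=hangzhou

# Verify the labels; -L adds a column showing the label value.
kubectl get nodes -L apps.openyurt.io/nodepool
```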

2. Create a UnitedDeployment Instance

cat <<EOF | kubectl apply -f -

apiVersion: apps.openyurt.io/v1alpha1
kind: UnitedDeployment
metadata:
  name: test
  namespace: default
spec:
  selector:
    matchLabels:
      app: test
  workloadTemplate:
    deploymentTemplate:
      metadata:
        labels:
          app: test
      spec:
        selector:
          matchLabels:
            app: test
        template:
          metadata:
            labels:
              app: test
          spec:
            containers:
            - image: nginx:1.18.0
              imagePullPolicy: Always
              name: nginx
  topology:
    pools:
    - name: beijing
      nodeSelectorTerm:
        matchExpressions:
        - key: apps.openyurt.io/nodepool
          operator: In
          values:
          - beijing
      replicas: 1
    - name: hangzhou
      nodeSelectorTerm:
        matchExpressions:
        - key: apps.openyurt.io/nodepool
          operator: In
          values:
          - hangzhou
      replicas: 2              
EOF

The workloadTemplate of this instance uses a Deployment template in which the container named nginx runs the image nginx:1.18.0. The topology defines two pools, beijing and hangzhou, with 1 and 2 replicas respectively.

3. View the Deployment Created by UnitedDeployment

# kubectl get deployments
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
test-beijing-rk8g8    1/1     1            1           6m4s
test-hangzhou-kfhvj   2/2     2            2           6m4s

The yurt-app-manager controller has created one Deployment for each of the beijing and hangzhou pools. The Deployment names are prefixed with {UnitedDeployment name}-{pool name}-. In each Deployment, the Replicas and NodeSelector come from the corresponding Pool's configuration, while all other configuration is inherited from the workloadTemplate.

4. View Corresponding Created Pods

# kubectl get pod
NAME                                   READY   STATUS    RESTARTS   AGE
test-beijing-rk8g8-5df688fbc5-ssffj    1/1     Running   0          3m36s
test-hangzhou-kfhvj-86d7c64899-2fqdj   1/1     Running   0          3m36s
test-hangzhou-kfhvj-86d7c64899-8vxqk   1/1     Running   0          3m36s 

Note: One pod with the name prefix test-beijing and two pods with the name prefix test-hangzhou are created.

5. Differentiated Configuration with Patch Capability

Run the kubectl edit ud test command to add a patch to the beijing pool. The patch changes the image of the container named nginx to nginx:1.19.3.

The following sample code provides an example of the valid format:

    - name: beijing
      nodeSelectorTerm:
        matchExpressions:
        - key: apps.openyurt.io/nodepool
          operator: In
          values:
          - beijing
      replicas: 1
      patch:
        spec:
          template:
            spec:
              containers:
              - image: nginx:1.19.3
                name: nginx

6. View the Deployment Instance Configuration

Re-check the Deployment with the prefix test-beijing-; the container image has changed to nginx:1.19.3:

kubectl get deployment test-beijing-rk8g8 -o yaml
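To check just the image field instead of reading the full YAML, a jsonpath query can be used (the Deployment name suffix below is the one from this example's output; yours will differ):

```shell
# Print only the first container's image of the beijing sub-Deployment.
kubectl get deployment test-beijing-rk8g8 \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
# Expected: nginx:1.19.3
```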

Summary

With the WorkloadTemplate and Pools of UnitedDeployment, workloads can be distributed to different regions quickly through template inheritance. Combined with the Pool's patch capability, differentiated configuration is possible on top of the inherited template, meeting most special requirements of customers in edge scenarios.
