YurtAppSet lets you deploy and manage applications across multiple node pools in an ACK Edge cluster from a single configuration. It watches for changes in node pool labels and automatically keeps workloads synchronized—so you can update a version or scale an application cluster-wide without touching each node pool individually.
## Prerequisites

Before you begin, ensure that you have:

- An ACK Edge cluster
- kubectl configured to connect to the cluster
- Permissions to create and manage custom resources in the cluster
## Choose your approach
| Cluster version | Recommended resource |
|---|---|
| 1.26 or later | YurtAppSet (apps.openyurt.io/v1beta1) |
| Earlier than 1.26 | UnitedDeployment (apps.kruise.io/v1alpha1) |
## Background

In edge computing, the same application typically runs across nodes in multiple regions. The traditional approach—assigning labels to nodes in each region and creating a separate Deployment per region—becomes unmanageable as the number of regions grows:

- Tedious updates: Each version change requires manually updating every Deployment.
- Complex management: As regions multiply, tracking and maintaining separate Deployments becomes error-prone.
- Redundant configuration: Deployments across regions share nearly identical specs, making changes cumbersome.
YurtAppSet solves these problems with three core mechanisms:
| Mechanism | Purpose |
|---|---|
| `workloadTemplate` | Defines a single workload template applied across all target node pools |
| `nodepoolSelector` | Selects target node pools by label; automatically picks up newly created or removed node pools |
| `workloadTweaks` | Overrides specific fields (replicas, container image, arbitrary patches) for individual node pools without creating separate workloads |
## YurtAppSet (cluster version 1.26 or later)
### Create a YurtAppSet instance

The following YAML creates a YurtAppSet that deploys an nginx workload to two node pools. The `np1xxxxxx` node pool uses 2 replicas of `nginx:1.19.1`; the `np2xxxxxx` node pool overrides the defaults with 3 replicas of `nginx:1.20.1`.
```yaml
apiVersion: apps.openyurt.io/v1beta1
kind: YurtAppSet
metadata:
  name: example
  namespace: default
spec:
  revisionHistoryLimit: 5
  pools:
    - np1xxxxxx
    - np2xxxxxx
  nodepoolSelector:
    matchLabels:
      yurtappset.openyurt.io/type: "nginx"
  workload:
    workloadTemplate:
      deploymentTemplate:
        metadata:
          labels:
            app: example
        spec:
          replicas: 2
          selector:
            matchLabels:
              app: example
          template:
            metadata:
              labels:
                app: example
            spec:
              containers:
                - image: nginx:1.19.1
                  imagePullPolicy: Always
                  name: nginx
    workloadTweaks:
      - pools:
          - np2xxxxxx
        tweaks:
          replicas: 3
          containerImages:
            - name: nginx
              targetImage: nginx:1.20.1
          patches:
            - path: /metadata/labels/test
              operation: add
              value: test
```
### YAML field reference

#### Spec fields
| Field | Description | Required |
|---|---|---|
| `spec.pools` | List of node pool names to deploy to. If both `pools` and `nodepoolSelector` are set, the union is used. | No |
| `spec.nodepoolSelector` | Selects node pools by matching their labels (`metadata.labels` on the NodePool resource). Automatically includes new node pools that match and excludes removed ones. | No |
| `spec.workload.workloadTemplate` | The workload template to deploy. Supports `deploymentTemplate` and `statefulSetTemplate`. | Yes |
| `spec.workload.workloadTweaks` | Per-node-pool overrides applied on top of `workloadTemplate`. | No |
Prefer `nodepoolSelector` over `pools` for most deployments. Label-based selection automatically handles newly added or removed node pools without requiring changes to the YurtAppSet configuration. To modify NodePool labels, edit the YAML of the corresponding NodePool on the Custom Resources page in the cluster console.
#### workloadTweaks fields
| Field | Description | Required |
|---|---|---|
| `workloadTweaks[*].pools` | Node pools (by name) that this override applies to. | No |
| `workloadTweaks[*].nodepoolSelector` | Node pools (by label) that this override applies to. | No |
| `workloadTweaks[*].tweaks.replicas` | Overrides the replica count for the selected node pools. | No |
| `workloadTweaks[*].tweaks.containerImages` | Overrides the container image for the selected node pools. | No |
| `workloadTweaks[*].tweaks.patches` | Applies arbitrary field patches to `workloadTemplate` for the selected node pools. | No |
| `tweaks.patches[*].path` | JSON patch path of the field to modify within `workloadTemplate`. | No |
| `tweaks.patches[*].operation` | Patch operation: `add`, `remove`, or `replace`. | No |
| `tweaks.patches[*].value` | Value to set. Applies only to `add` and `replace` operations. | No |
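As a sketch of how `patches` compose with the other tweak fields, the fragment below reuses the `np1xxxxxx` pool from the example above and applies two patches to the Deployment rendered for that pool only. The label key `purpose` and the values are illustrative; the paths are JSON patch paths into the Deployment produced from `workloadTemplate`.

```yaml
workloadTweaks:
  - pools:
      - np1xxxxxx
    tweaks:
      patches:
        # add a label to the Deployment rendered for this pool
        - path: /metadata/labels/purpose
          operation: add
          value: canary
        # replace the pull policy defined in workloadTemplate
        - path: /spec/template/spec/containers/0/imagePullPolicy
          operation: replace
          value: IfNotPresent
```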
#### Status fields

| Field | Description |
|---|---|
| `status.conditions` | Current status of the YurtAppSet, including node pool selection status and workload status. |
| `status.readyWorkloads` | Number of managed workloads whose replicas are all ready. |
| `status.updatedWorkloads` | Number of managed workloads whose replicas are all updated to the latest version. |
| `status.totalWorkloads` | Total number of workloads managed by this YurtAppSet. |
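For orientation, a healthy YurtAppSet from the example above (two selected node pools, all Pods ready and up to date) might report a status like the following. The counter fields follow the table above; the condition entry is purely illustrative, not an exhaustive or exact list of condition types.

```yaml
status:
  totalWorkloads: 2        # one Deployment per selected node pool
  readyWorkloads: 2
  updatedWorkloads: 2
  conditions:
    - type: AppSetPoolFound  # illustrative condition type
      status: "True"
```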
## UnitedDeployment (cluster version earlier than 1.26)
### Create a UnitedDeployment instance
```yaml
apiVersion: apps.kruise.io/v1alpha1
kind: UnitedDeployment
metadata:
  name: example
  namespace: default
spec:
  revisionHistoryLimit: 5
  selector:
    matchLabels:
      app: example
  template:
    deploymentTemplate:
      metadata:
        labels:
          app: example
      spec:
        selector:
          matchLabels:
            app: example
        template:
          metadata:
            labels:
              app: example
          spec:
            containers:
              - image: nginx:1.19.3
                imagePullPolicy: Always
                name: nginx
            dnsPolicy: ClusterFirst
            restartPolicy: Always
  topology:
    subsets:
      - name: cloud
        nodeSelectorTerm:
          matchExpressions:
            - key: alibabacloud.com/nodepool-id
              operator: In
              values:
                - np4b9781c40f0e46c581b2cf2b6160****
        replicas: 2
      - name: edge
        nodeSelectorTerm:
          matchExpressions:
            - key: alibabacloud.com/nodepool-id
              operator: In
              values:
                - np47832359db2e4843aa13e8b76f83****
        replicas: 2
        tolerations:
          - effect: NoSchedule
            key: apps.openyurt.io/taints
            operator: Exists
```
### YAML field reference
| Field | Description |
|---|---|
| `spec.template` | Workload template to deploy. Supports `deploymentTemplate` and `statefulSetTemplate`. |
| `spec.topology.subsets` | List of node pools to deploy to. |
| `spec.topology.subsets[*].name` | Name of the node pool. |
| `spec.topology.subsets[*].nodeSelectorTerm` | Node affinity configuration. Use `alibabacloud.com/nodepool-id` as the key and the node pool ID as the value. To find the node pool ID, go to the Node Pools page and look under the Name column. |
| `spec.topology.subsets[*].tolerations` | Toleration configuration for the node pool. |
| `spec.topology.subsets[*].replicas` | Number of Pod replicas in the node pool. |
## Manage edge applications
Once a YurtAppSet is running, all lifecycle operations work by modifying its spec. The YurtAppSet controller propagates changes to each node pool's workload automatically.
### Upgrade the application version

Update `spec.workload.workloadTemplate` with the new container image or other template changes. The controller applies the updated template to all managed workloads, triggering a rolling update in each node pool.
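For instance, continuing the example above, bumping the template image rolls every pool that does not override it with a `containerImages` tweak. The version `nginx:1.21.0` is illustrative; the snippet is a fragment of the YurtAppSet spec, not a complete manifest.

```yaml
spec:
  workload:
    workloadTemplate:
      deploymentTemplate:
        spec:
          template:
            spec:
              containers:
                - name: nginx
                  image: nginx:1.21.0  # new version for all pools without an image tweak
```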
### Run a grayscale update in a specific region

Update `spec.workload.workloadTweaks[*].tweaks.containerImages` for the target node pool. Only the Pods in that node pool receive the new image; other node pools are unaffected.
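Continuing the example, a grayscale rollout to the `np2xxxxxx` pool might look like this fragment of the YurtAppSet spec (the target version is illustrative):

```yaml
spec:
  workload:
    workloadTweaks:
      - pools:
          - np2xxxxxx              # only this pool is upgraded
        tweaks:
          containerImages:
            - name: nginx
              targetImage: nginx:1.21.0
```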
### Scale an application in a specific region

Update `spec.workload.workloadTweaks[*].tweaks.replicas` for the target node pool. This overrides the replica count defined in `workloadTemplate` for that node pool only.
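For example, this fragment (reusing the `np2xxxxxx` pool from the example above; the count is illustrative) scales that pool while the others keep the template's replica count:

```yaml
spec:
  workload:
    workloadTweaks:
      - pools:
          - np2xxxxxx
        tweaks:
          replicas: 5  # overrides the template's replicas: 2 for this pool only
```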
### Deploy to a new region

Create a new node pool with a label that matches `spec.nodepoolSelector`. YurtAppSet detects the new NodePool resource and automatically creates a workload for it. Then add nodes from that region to the node pool. No changes to the YurtAppSet configuration are required when using `nodepoolSelector`.
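Assuming the selector label from the example above, a minimal NodePool that would be picked up automatically could look like the following sketch. The pool name and `spec.type` value are illustrative; check the NodePool schema available in your cluster before applying.

```yaml
apiVersion: apps.openyurt.io/v1beta1
kind: NodePool
metadata:
  name: np3xxxxxx                           # hypothetical new pool
  labels:
    yurtappset.openyurt.io/type: "nginx"    # matches spec.nodepoolSelector
spec:
  type: Edge
```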
### Take an application offline in a region
Delete the node pool for the target region. YurtAppSet detects the deletion and automatically removes the workload associated with that node pool.