
Container Service for Kubernetes: Get started with application distribution

Last Updated: Mar 26, 2026

ACK One Fleet lets you distribute applications from a Fleet instance to multiple member clusters without relying on Git repositories. This guide walks you through deploying an NGINX Deployment to two clusters using a PropagationPolicy.

By the end of this guide, you will have:

  • Created an application on a Fleet instance

  • Defined a PropagationPolicy to distribute the application to multiple clusters

  • (Optional) Applied an OverridePolicy to customize per-cluster configuration

  • Verified, updated, and cleaned up distributed resources

Prerequisites

Before you begin, ensure that you have:

  • An ACK One Fleet instance with at least two member clusters attached

  • kubectl configured to connect to the Fleet instance

  • The AMC command-line tool installed, which Step 5 uses to query member clusters

(Optional) Step 1: Create a namespace on the Fleet instance

If the namespace for your application does not exist on the Fleet instance, create it. Skip this step if the namespace already exists.

Run the following command to create a namespace named demo:

kubectl create namespace demo

Step 2: Create an application on the Fleet instance

Fleet supports distributing ConfigMaps, Deployments, and Services. This example uses an NGINX Deployment.

  1. Create a file named web-demo.yaml with the following content:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      namespace: demo
      name: web-demo
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: web-demo
      template:
        metadata:
          labels:
            app: web-demo
        spec:
          containers:
          - name: nginx
            image: registry-cn-hangzhou.ack.aliyuncs.com/acs/web-demo:0.5.0
            ports:
            - containerPort: 80
  2. Deploy the application:

    kubectl apply -f web-demo.yaml
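
Fleet can distribute Services the same way as Deployments. If you also want to expose web-demo, a companion manifest might look like the following sketch (the Service name and port mapping here are illustrative, not part of this guide's steps; if you distribute it, add a matching entry to the PropagationPolicy's resourceSelectors):

```yaml
# Hypothetical companion Service for web-demo (illustrative only).
apiVersion: v1
kind: Service
metadata:
  namespace: demo
  name: web-demo
spec:
  selector:
    app: web-demo  # matches the Deployment's pod labels from this guide
  ports:
  - port: 80       # illustrative port mapping
    targetPort: 80
```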

Step 3: Create a PropagationPolicy to distribute the application

A PropagationPolicy tells the Fleet controller which resources to distribute and which clusters to target. Once created, the controller automatically detects matching resources and pushes them to the specified clusters.

This example distributes the Deployment to two clusters in Duplicated mode, so each cluster runs three replicas independently.

  1. Get the IDs of the member clusters managed by the Fleet instance:

    kubectl get mcl

    The output is similar to:

    NAME                                HUB ACCEPTED   MANAGED CLUSTER URLS   JOINED   AVAILABLE   AGE
    cxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx   true                                  True     True        3d23h
    cxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx   true                                  True     True        5d21h
  2. Create a file named propagationpolicy.yaml with the following content. Replace ${cluster1-id} and ${cluster2-id} with your actual cluster IDs.

    Note: ClusterPropagationPolicy distributes cluster-scoped resources (such as Namespaces). PropagationPolicy distributes namespace-scoped resources and can only select resources within the namespace where the policy resides.

    apiVersion: policy.one.alibabacloud.com/v1alpha1
    kind: ClusterPropagationPolicy
    metadata:
      name: web-demo
    spec:
      resourceSelectors:
      - apiVersion: v1
        kind: Namespace
        name: demo
      placement:
        clusterAffinity:
          clusterNames:
          - ${cluster1-id} # The ID of your cluster.
          - ${cluster2-id} # The ID of your cluster.
        replicaScheduling:
          replicaSchedulingType: Duplicated
    ---
    apiVersion: policy.one.alibabacloud.com/v1alpha1
    kind: PropagationPolicy
    metadata:
      name: web-demo
      namespace: demo
    spec:
      preserveResourcesOnDeletion: true # When true, deleting resources from the Fleet instance keeps them in the member clusters. Set to false to delete them together.
      resourceSelectors:
      - apiVersion: apps/v1
        kind: Deployment
        name: web-demo
        namespace: demo
      placement:
        clusterAffinity:
          clusterNames:
          - ${cluster1-id} # The ID of your cluster.
          - ${cluster2-id} # The ID of your cluster.
        replicaScheduling:
          replicaSchedulingType: Duplicated

    The following table describes the key parameters. For the full parameter reference, see PropagationPolicy.

      • resourceSelectors: Resources to distribute. Specify apiVersion, kind, name, namespace, or labelSelector to match resources. Example: the web-demo Deployment in the demo namespace.
      • placement.clusterAffinity: Target clusters for distribution. Enter cluster IDs, not cluster names. Example: ${cluster1-id}, ${cluster2-id}.
      • replicaScheduling.replicaSchedulingType: Scheduling mode. Duplicated replicates the full replica count to each cluster; the count is set by spec.replicas in the workload. Example: Duplicated.
  3. Apply the PropagationPolicy:

    kubectl apply -f propagationpolicy.yaml
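
The parameter table above notes that resourceSelectors can also match resources by labels instead of by name. As a sketch, a policy that selects every Deployment in the demo namespace carrying a chosen label might look like the following (the label key and value are illustrative; this guide's web-demo manifest does not set them):

```yaml
# Hypothetical label-based selection; assumes your Deployments are
# labeled team=web (not part of this guide's manifest).
apiVersion: policy.one.alibabacloud.com/v1alpha1
kind: PropagationPolicy
metadata:
  name: web-by-label
  namespace: demo
spec:
  resourceSelectors:
  - apiVersion: apps/v1
    kind: Deployment
    labelSelector:
      matchLabels:
        team: web
  placement:
    clusterAffinity:
      clusterNames:
      - ${cluster1-id} # The ID of your cluster.
      - ${cluster2-id} # The ID of your cluster.
    replicaScheduling:
      replicaSchedulingType: Duplicated
```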

(Optional) Step 4: Create an OverridePolicy to customize per-cluster configuration

An OverridePolicy modifies resource configuration before deployment, letting you tailor settings for individual clusters.

This example applies to ${cluster2-id} and makes two changes:

  • Reduces replicas from 3 to 1

  • Prepends a registry prefix to the image

Before applying the OverridePolicy, the Deployment on ${cluster2-id} has:

spec:
  replicas: 3
  ...
  containers:
    - image: registry-cn-hangzhou.ack.aliyuncs.com/acs/web-demo:0.5.0

After applying the OverridePolicy, the Deployment on ${cluster2-id} becomes:

spec:
  replicas: 1
  ...
  containers:
    - image: {{Registry}}/registry-cn-hangzhou.ack.aliyuncs.com/acs/web-demo:0.5.0
  1. Create a file named overridepolicy.yaml with the following content:

    apiVersion: policy.one.alibabacloud.com/v1alpha1
    kind: OverridePolicy
    metadata:
      name: example
      namespace: demo
    spec:
      resourceSelectors:
        - apiVersion: apps/v1
          kind: Deployment
          name: web-demo
      overrideRules:
        - targetCluster:
            clusterNames:
              - ${cluster2-id}
          overriders:
            plaintext:
              - operator: replace
                path: /spec/replicas
                value: 1
            imageOverrider:
              - component: Registry
                operator: add
                value: "{{Registry}}" # Replace {{Registry}} with your registry address; quoted so the manifest remains valid YAML.

    The following table describes the key parameters. For the full parameter reference, see OverridePolicy.

      • resourceSelectors: Resources to override. Specify apiVersion, kind, name, namespace, or labelSelector. Example: the web-demo Deployment.
      • overrideRules.plaintext: Override resource fields using JSONPatch. Specify operator, path, and value. Example: change spec.replicas to 1.
      • overrideRules.imageOverrider: Override image components: Registry, Repository, or Version. Example: prepend a registry prefix to the image.
  2. Apply the OverridePolicy:

    kubectl apply -f overridepolicy.yaml
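
The imageOverrider can target the Version component as well as the Registry. For example, to pin one cluster to a different image tag, a rule might look like the following fragment (the 0.4.0 tag is purely illustrative and not a version referenced by this guide):

```yaml
# Hypothetical overrideRules fragment: replace the image tag on one cluster.
overrideRules:
  - targetCluster:
      clusterNames:
        - ${cluster2-id}
    overriders:
      imageOverrider:
        - component: Version
          operator: replace
          value: "0.4.0" # illustrative tag, not from this guide
```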

Step 5: View the distribution status

Run the following AMC command to check whether the application has been distributed to all member clusters:

kubectl amc get deploy -ndemo -M

The output is similar to:

NAME       CLUSTER          READY   UP-TO-DATE   AVAILABLE   AGE    ADOPTION
web-demo   cxxxxxxxxxxxxx   3/3     3            3           3d4h   Y
web-demo   cxxxxxxxxxxxxx   3/3     3            3           3d4h   Y

READY: 3/3 and ADOPTION: Y confirm that the application is running in each cluster.

Step 6: Update the application

  1. Update web-demo.yaml to increase replicas to 4:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      namespace: demo
      name: web-demo
    spec:
      replicas: 4
      selector:
        matchLabels:
          app: web-demo
      template:
        metadata:
          labels:
            app: web-demo
        spec:
          containers:
          - name: nginx
            image: registry-cn-hangzhou.ack.aliyuncs.com/acs/web-demo:0.5.0
            ports:
            - containerPort: 80
  2. Apply the updated manifest:

    kubectl apply -f web-demo.yaml
  3. Verify the update has propagated to all clusters:

    kubectl amc get deploy -ndemo -M

    The output is similar to:

    NAME       CLUSTER          READY   UP-TO-DATE   AVAILABLE   AGE    ADOPTION
    web-demo   cxxxxxxxxxxxxx   4/4     4            4           3d4h   Y
    web-demo   cxxxxxxxxxxxxx   4/4     4            4           3d4h   Y

    READY: 4/4 confirms that replicas in both clusters have been updated.

Step 7: Delete application resources

Fleet distributes resources but does not delete them by default when you remove the application or PropagationPolicy from the Fleet instance. This prevents accidental deletion of workloads in member clusters.

To delete resources from member clusters, follow these steps:

  1. Set preserveResourcesOnDeletion to false. The following example updates the ClusterPropagationPolicy to select both the Deployment and the Namespace; if you keep the separate PropagationPolicy from Step 3, set preserveResourcesOnDeletion to false there as well, because it governs the Deployment:

    apiVersion: policy.one.alibabacloud.com/v1alpha1
    kind: ClusterPropagationPolicy
    metadata:
      name: web-demo
    spec:
      preserveResourcesOnDeletion: false
      resourceSelectors:
      - apiVersion: apps/v1
        kind: Deployment
        name: web-demo
      - apiVersion: v1
        kind: Namespace
        name: demo
      placement:
        clusterAffinity:
          clusterNames:
          - ${cluster1-id} # The ID of your cluster.
          - ${cluster2-id} # The ID of your cluster.
        replicaScheduling:
          replicaSchedulingType: Duplicated
  2. Apply the updated policy:

    kubectl apply -f propagationpolicy.yaml
  3. Delete the application resources:

    kubectl delete -f web-demo.yaml
  4. Confirm the resources have been removed from member clusters:

    kubectl amc get deploy -ndemo -M

    The output is similar to:

    cluster(cxxxxxxxxxxxxx): deployments.apps "web-demo" not found
    cluster(cxxxxxxxxxxxxx): deployments.apps "web-demo" not found
