
Container Service for Kubernetes: Application distribution overview

Last Updated: Mar 26, 2026

ACK One Fleet instances distribute applications across multiple associated clusters using declarative policies—no Git repository required. Define a PropagationPolicy to target specific clusters and an OverridePolicy to apply per-cluster configuration overrides, then let the Fleet instance handle synchronization automatically.

How it works

Create your application resources on a Fleet instance, then define distribution and override policies to control how those resources propagate to associated clusters.

  1. Create application resources (Deployments, Services, ConfigMaps, and so on) on the Fleet instance.

  2. Create a PropagationPolicy or ClusterPropagationPolicy to select target clusters and set scheduling rules.

  3. Optionally create an OverridePolicy or ClusterOverridePolicy to apply cluster-specific configuration differences.

  4. The Fleet instance automatically syncs subsequent resource updates to all targeted clusters.

Note

Create PropagationPolicy and OverridePolicy only once during initial setup. Both policies remain active after creation, and any future resource updates are automatically propagated. Use kubectl amc to check distribution progress across associated clusters.
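The steps above can be sketched as a minimal pair of policies. This is an illustrative sketch, assuming the Fleet instance exposes the Karmada-compatible policy API (`policy.karmada.io/v1alpha1`); the resource names, namespace, cluster names, and registry address are all placeholders.

```yaml
# PropagationPolicy: distribute the "web" Deployment to two associated clusters.
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: web-propagation
  namespace: demo
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: web
  placement:
    clusterAffinity:
      clusterNames:        # placeholder cluster names
        - cluster-beijing
        - cluster-hangzhou
---
# OverridePolicy: point one cluster at a different (hypothetical) registry.
apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
  name: web-override
  namespace: demo
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: web
  overrideRules:
    - targetCluster:
        clusterNames:
          - cluster-beijing
      overriders:
        imageOverrider:
          - component: Registry
            operator: replace
            value: registry-beijing.example.com   # placeholder registry
```

After `kubectl apply` of both manifests on the Fleet instance, any later change to the `web` Deployment is synced to both clusters with the override applied per cluster.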


Advanced features

Workload scheduling

ACK One Fleet instances schedule pod replicas across associated clusters based on distribution policies. Two scheduling modes are available:

Static weight scheduling

The cluster administrator assigns a fixed weight to each associated cluster. The scheduler places pod replicas proportionally to those weights.

For configuration details, see replicaScheduling.
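As a sketch of static weights, the placement below (Karmada-compatible fields; cluster names are placeholders) splits replicas 2:1, so a Deployment with 6 replicas would be divided into roughly 4 and 2:

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: web-static-weights
  namespace: demo
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: web
  placement:
    clusterAffinity:
      clusterNames: [cluster-1, cluster-2]   # placeholder cluster names
    replicaScheduling:
      replicaSchedulingType: Divided          # split replicas across clusters
      replicaDivisionPreference: Weighted     # divide by weight
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames: [cluster-1]
            weight: 2                         # cluster-1 gets 2/3 of replicas
          - targetCluster:
              clusterNames: [cluster-2]
            weight: 1                         # cluster-2 gets 1/3 of replicas
```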

Dynamic weight scheduling

The scheduler calculates each cluster's weight dynamically based on currently available resources, then places pod replicas proportionally.

For configuration details, see Dynamic distribution and descheduling.
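For dynamic weights, only the `weightPreference` changes relative to the static example: instead of a fixed weight list, the scheduler derives each cluster's weight from its available capacity. A hedged fragment, assuming the Karmada-compatible `dynamicWeight` field:

```yaml
    replicaScheduling:
      replicaSchedulingType: Divided
      replicaDivisionPreference: Weighted
      weightPreference:
        # Weight each cluster by how many additional replicas it can host.
        dynamicWeight: AvailableReplicas
```

A cluster with spare capacity for 30 replicas would then receive three times as many replicas as one with capacity for 10.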

Descheduling

Available resources in associated clusters change over time. When a pod fails to schedule due to low priority or insufficient resources, the descheduler automatically moves it to another cluster where resources are available. Descheduling is enabled by default.

For verification steps, see Verify descheduling.

Application-level failover

When online and offline workloads share a cluster, offline jobs may be interrupted by node failures, resource preemption, or preemptible instance releases. Application-level failover detects when a job stops running and automatically migrates it to another cluster.

For a configuration example using PyTorchJob with gang scheduling, see How to use Kube Queue on a Fleet instance and schedule PyTorchJob by using gang scheduling.
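If the Fleet instance exposes Karmada's application failover fields (an assumption; verify against your version), the behavior can be sketched directly on the PropagationPolicy:

```yaml
spec:
  # Hypothetical fragment: migrate the application when it is judged unhealthy.
  failover:
    application:
      decisionConditions:
        tolerationSeconds: 300   # wait 5 minutes before declaring failure
      purgeMode: Graciously      # remove the old copy only after the new one is healthy
```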

Multi-cluster gang scheduling

Gang scheduling ensures all pods in a correlated group are scheduled simultaneously—if any pod cannot be placed, none of the group is scheduled. ACK One Fleet instances extend this to multiple clusters using resource pre-allocation or dynamic resource checks to place the entire group on a single cluster.

This is particularly useful for distributed AI and data processing jobs:

  • PyTorch and TensorFlow training jobs (master-worker architecture): ensures the master pod and all worker pods land on the same cluster, maintaining direct communication.

  • Spark jobs (driver-executor architecture): ensures the driver pod and all executor pods are co-located, avoiding cross-cluster network overhead.

Together with descheduling and application-level failover, multi-cluster gang scheduling ensures AI jobs reach clusters with sufficient resources and continue running without manual intervention. For details, see Job distribution.

Distributable resources

Distribution policies and override policies both support the resource types listed in the following table. By default, any principal with permission to create resources on a Fleet instance also has permission to distribute those resources to associated clusters.

| Resource level | Resource type | APIVersion | Distribution policy | Override policy |
| --- | --- | --- | --- | --- |
| Cluster | Namespace | v1 | Supported | Supported |
| Cluster | PersistentVolume | v1 | Supported | Supported |
| Cluster | StorageClass | storage.k8s.io/v1 | Supported | Supported |
| Cluster | CustomResourceDefinition | apiextensions.k8s.io/v1 | Supported | Supported |
| Namespace | Deployment | apps/v1 | Supported | Supported |
| Namespace | StatefulSet | apps/v1 | Supported | Supported |
| Namespace | DaemonSet | apps/v1 | Supported | Supported |
| Namespace | Job | batch/v1 | Supported | Supported |
| Namespace | CronJob | batch/v1 | Supported | Supported |
| Namespace | Ingress | networking.k8s.io/v1 | Supported | Supported |
| Namespace | Service | v1 | Supported | Supported |
| Namespace | PersistentVolumeClaim | v1 | Supported | Supported |
| Namespace | ConfigMap | v1 | Supported | Supported |
| Namespace | Secret | v1 | Supported | Supported |
| Namespace | Pod | v1 | Supported | Supported |
| Namespace | LimitRange | v1 | Supported | Supported |
| Namespace | ResourceQuota | v1 | Supported | Supported |
| Namespace | HorizontalPodAutoscaler | autoscaling/v2 | Supported | Supported |

What's next

| Task | Description | Reference |
| --- | --- | --- |
| Deploy your first application | Use kubectl to create a PropagationPolicy and distribute resources to associated clusters, with an OverridePolicy for per-cluster differences. | Getting started with application distribution |
| Understand policy parameters | Learn the full parameter set for PropagationPolicy and OverridePolicy, including cluster selection, replica scheduling, and override rules. | PropagationPolicy and OverridePolicy |
| Check distribution progress | Run kubectl amc to query the rollout status of an application across associated clusters. | Use AMC command line |