
Container Service for Kubernetes:Seamlessly migrate large-scale applications across clusters by using the MCS feature of ACK One

Last Updated: Mar 26, 2026

Use the Multi-Cluster Services (MCS) feature of Distributed Cloud Container Platform for Kubernetes (ACK One) to migrate Services across clusters in phases — without modifying application code, CoreDNS configuration, or pod DNSConfig. This tutorial walks through a simulated migration scenario with three interdependent Services.

When to use this approach

When applications span hundreds or thousands of interdependent Services, a one-time "lift and shift" migration carries significant risk:

  • Full verification of every Service is time-consuming and extends the migration window.

  • Complex inter-Service dependencies make it difficult to carve the migration into independent phases, so risk cannot be reduced through a phased cutover.

  • Teams migrating on different schedules can cause downtime for Services that depend on each other.

The MCS-based approach eliminates these problems by enabling bidirectional cross-cluster Service access through native Service names. Services in the old cluster can call Services in the new cluster — and vice versa — using the same DNS names they already use today. This means you can migrate one Service at a time, verify it in production, and roll back instantly if something goes wrong, without touching any other Service.

Trade-offs and known constraints:

  • Network requirement: The pod CIDR blocks of the two clusters must be mutually routable. If your clusters do not yet have cross-cluster network connectivity, set that up before proceeding. See Multi-cluster Service overview.

  • Fleet dependency: Both clusters must be added to an ACK One Fleet instance.

How it works

Create a MultiClusterService on the ACK One Fleet instance and a Service with the same name in the consumer cluster. The consumer cluster can then resolve and route traffic to the Service in the provider cluster using the native Service name, with no changes to application code.


Each MultiClusterService maps to a Service by name and namespace, and lists both provider clusters (which serve the traffic) and consumer clusters (which route requests):

apiVersion: networking.one.alibabacloud.com/v1alpha1
kind: MultiClusterService
metadata:
  name: service
  namespace: demo
spec:
  consumerClusters:
    - name: <your consumer cluster id>
  providerClusters:
    - name: <your provider cluster id>
The name and namespace of the MultiClusterService must match the name and namespace of the Service being migrated. A cluster can act as both a provider and a consumer simultaneously.

For a deeper explanation of MCS, see Access Services across clusters by using domain names.

Migration overview

The phased migration process follows four steps:

  1. Old Cluster: Deploy all Services and Deployments.

  2. New Cluster: Deploy Service definitions only (no Deployments).

  3. Fleet instance: Create MultiClusterServices for all Services, with both clusters as mutual providers and consumers.

  4. New Cluster: Migrate Services one batch at a time: deploy a Deployment to New Cluster, verify traffic, scale the Old Cluster Deployment to zero. Repeat until complete, then redirect external traffic.

Prerequisites

Before you begin, make sure you have:

  • An ACK One Fleet instance with both the old and new clusters associated.

  • Cross-cluster network connectivity, so that the pod CIDR blocks of the two clusters are mutually routable.

  • kubectl access to the Fleet instance and to both clusters (their kubeconfig files).

Step 1: Deploy Services and Deployments to Old Cluster

This step sets up the demo environment in Old Cluster: three Services (service1, service2, service3) in a call chain where service1 calls service2, which calls service3.

Option A: Deploy with kubectl

  1. Using the kubeconfig of Old Cluster, create web-demo-svc-old.yaml with the following content:

    View the YAML file

    apiVersion: v1
    kind: Service
    metadata:
      name: service1
      namespace: mcs-demo
    spec:
      ports:
      - port: 80
        protocol: TCP
        targetPort: 8080
      selector:
        app: service1
      type: ClusterIP
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: service1
      name: svc-demo-service1
      namespace: mcs-demo
    spec:
      replicas: 1
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          app: service1
      template:
        metadata:
          creationTimestamp: null
          labels:
            app: service1
        spec:
          containers:
          - env:
            - name: ENV_NAME
              value: oldcluster
            - name: MY_SERVICE_NAME
              value: service1
            - name: SERVICE_URL
              value: http://service2.mcs-demo:80/call
            - name: MY_CLUSTER
              value: oldcluster
            image: registry-cn-hongkong.ack.aliyuncs.com/acs/web-demo:v0.6.0-2
            imagePullPolicy: Always
            name: svc-demo
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: service2
      namespace: mcs-demo
    spec:
      ports:
      - port: 80
        protocol: TCP
        targetPort: 8080
      selector:
        app: service2
      type: ClusterIP
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: service2
      name: svc-demo-service2
      namespace: mcs-demo
    spec:
      replicas: 1
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          app: service2
      template:
        metadata:
          creationTimestamp: null
          labels:
            app: service2
        spec:
          containers:
          - env:
            - name: ENV_NAME
              value: oldcluster
            - name: MY_SERVICE_NAME
              value: service2
            - name: SERVICE_URL
              value: http://service3.mcs-demo:80/svc
            - name: MY_CLUSTER
              value: oldcluster
            image: registry-cn-hongkong.ack.aliyuncs.com/acs/web-demo:v0.6.0-2
            imagePullPolicy: Always
            name: svc-demo
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: service3
      namespace: mcs-demo
    spec:
      ports:
      - port: 80
        protocol: TCP
        targetPort: 8080
      selector:
        app: service3
      type: ClusterIP
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: service3
      name: svc-demo-service3
      namespace: mcs-demo
    spec:
      replicas: 1
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          app: service3
      template:
        metadata:
          creationTimestamp: null
          labels:
            app: service3
        spec:
          containers:
          - env:
            - name: ENV_NAME
              value: oldcluster
            - name: MY_SERVICE_NAME
              value: service3
            - name: SERVICE_URL
              value: http://service3.mcs-demo:80/svc
            - name: MY_CLUSTER
              value: oldcluster
            image: registry-cn-hongkong.ack.aliyuncs.com/acs/web-demo:v0.6.0-2
            imagePullPolicy: Always
            name: svc-demo
  2. Apply the manifest:

    kubectl apply -f web-demo-svc-old.yaml
  3. Create client-pod.yaml to deploy a curl client for testing:

    apiVersion: v1
    kind: Pod
    metadata:
      name: curl-client
    spec:
      containers:
      - name: curl-client
        image: registry-cn-hangzhou.ack.aliyuncs.com/dev/curl:8.11.1
        command: ["sh", "-c", "sleep 12000"]
  4. Deploy the client pod:

    kubectl apply -f client-pod.yaml
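Before moving on, it can help to confirm that everything in this step came up. A quick sanity check (the exact output depends on your cluster):

```shell
# All three demo Deployments should report 1/1 ready replicas,
# and the curl client pod should be Running.
kubectl get deployments -n mcs-demo
kubectl get pod curl-client
```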

Option B: Deploy with GitOps

  1. Using the kubeconfig of the Fleet instance, get the server URL for Old Cluster:

    kubectl get secret -n argocd cluster-<Old-Cluster-ID> -ojsonpath='{.data.server}' | base64 -d
  2. In the ACK One console, go to Fleet > Multi-cluster GitOps.

  3. In the upper-left corner of the Multi-cluster Applications page, select your Fleet instance from the drop-down list.

  4. Choose Create Multi-cluster Application > GitOps to open the Create Multi-cluster Application - GitOps page.

  5. On the Create from YAML tab, paste the following ApplicationSet and click OK. Replace <Your-Old-Cluster-Server-URL> with the server URL from step 1.

    apiVersion: argoproj.io/v1alpha1
    kind: ApplicationSet
    metadata:
      name: svc-demo-oldcluster
      namespace: argocd
    spec:
      generators:
        - list:
            elements:
            - envSvcName: service1
              envCluster: oldcluster
              envCallSvcName: service2
              envAPI: call
            - envSvcName: service2
              envCluster: oldcluster
              envCallSvcName: service3
              envAPI: svc
            - envSvcName: service3
              envCluster: oldcluster
              envCallSvcName: service3
              envAPI: svc
      template:
        metadata:
          name: '{{envSvcName}}-{{envCluster}}-svc-demo'
        spec:
          project: default
          source:
            repoURL: https://github.com/AliyunContainerService/gitops-demo.git
            targetRevision: main
            path: manifests/helm/svc-demo
            helm:
              parameters:
              - name: envSvcName
                value: '{{envSvcName}}'
              - name: envCallSvcName
                value: '{{envCallSvcName}}'
              - name: envAPI
                value: '{{envAPI}}'
              - name: envCluster
                value: '{{envCluster}}'
              valueFiles:
                - values.yaml
          destination:
            server: <Your-Old-Cluster-Server-URL>
            namespace: mcs-demo
          syncPolicy:
            automated: {}
            syncOptions:
              - CreateNamespace=true
  6. Deploy the curl client pod. On the Multi-cluster Applications page, click GitOps console to open the Argo CD UI and create the following Application. Replace <Your-Old-Cluster-Server-URL> with the server URL from step 1.

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: curl-client-pod
      namespace: argocd
    spec:
      destination:
        namespace: customer-ns
        server: <Your-Old-Cluster-Server-URL>
      project: default
      source:
        path: manifests/directory/curlclient
        repoURL: https://github.com/AliyunContainerService/gitops-demo.git
        targetRevision: HEAD
      syncPolicy:
        syncOptions:
        - CreateNamespace=true
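Both kubectl commands above read the cluster's API server URL out of the Argo CD cluster secret, where it is stored base64-encoded in `.data.server`. A self-contained illustration of the decoding step, using a made-up sample value rather than a real cluster secret:

```shell
# .data.server holds the API server URL base64-encoded;
# piping it through base64 -d recovers the plain URL.
encoded="aHR0cHM6Ly8xOTIuMTY4LjAuMTo2NDQz"   # made-up sample value
echo "$encoded" | base64 -d
# prints: https://192.168.0.1:6443
```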

Verify the call chain in Old Cluster

Run the following commands to confirm the three Services are calling each other correctly:

kubectl exec -it -ncustomer-ns curl-client -- sh
curl service1.mcs-demo/call

Expected output:

service1(oldcluster) --> service2(oldcluster) --> service3(oldcluster)

Step 2: Deploy Service definitions to New Cluster

Deploy the Service resources (without Deployments) to New Cluster. This registers the Service names in the new cluster so they can be referenced by MultiClusterServices in the next step.

Option A: Deploy with kubectl

  1. Using the kubeconfig of New Cluster, create web-demo-svc-new.yaml:

    apiVersion: v1
    kind: Service
    metadata:
      name: service1
      namespace: mcs-demo
    spec:
      ports:
      - port: 80
        protocol: TCP
        targetPort: 8080
      selector:
        app: service1
      type: ClusterIP
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: service2
      namespace: mcs-demo
    spec:
      ports:
      - port: 80
        protocol: TCP
        targetPort: 8080
      selector:
        app: service2
      type: ClusterIP
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: service3
      namespace: mcs-demo
    spec:
      ports:
      - port: 80
        protocol: TCP
        targetPort: 8080
      selector:
        app: service3
      type: ClusterIP
  2. Apply the manifest:

    kubectl apply -f web-demo-svc-new.yaml
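At this point the three Services exist in New Cluster but have no backing pods, so their Endpoints objects list no addresses until Deployments arrive in Step 4. A quick way to see this (output depends on your cluster):

```shell
# Using the kubeconfig of New Cluster: the Services exist,
# but with no matching pods the endpoint lists are empty.
kubectl get svc,endpoints -n mcs-demo
```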

Option B: Deploy with GitOps

  1. Using the kubeconfig of the Fleet instance, get the server URL for New Cluster:

    kubectl get secret -n argocd cluster-<New-Cluster-ID> -ojsonpath='{.data.server}' | base64 -d
  2. In the ACK One console, go to Fleet > Multi-cluster GitOps.

  3. Select your Fleet instance from the drop-down list in the upper-left corner.

  4. Choose Create Multi-cluster Application > GitOps.

  5. On the Create from YAML tab, paste the following ApplicationSet and click OK. Replace <Your-New-Cluster-Server-URL> with the server URL from step 1.

    apiVersion: argoproj.io/v1alpha1
    kind: ApplicationSet
    metadata:
      name: svc-demo-newcluster
      namespace: argocd
    spec:
      generators:
        - list:
            elements:
            - envSvcName: service1
              envCluster: newcluster
              onlyService: "true"
            - envSvcName: service2
              envCluster: newcluster
              onlyService: "true"
            - envSvcName: service3
              envCluster: newcluster
              onlyService: "true"
      template:
        metadata:
          name: '{{envSvcName}}-{{envCluster}}-svc-demo'
        spec:
          project: default
          source:
            repoURL: https://github.com/AliyunContainerService/gitops-demo.git
            targetRevision: main
            path: manifests/helm/svc-demo
            helm:
              parameters:
              - name: envSvcName
                value: '{{envSvcName}}'
              - name: onlyService
                value: '{{onlyService}}'
              valueFiles:
                - values.yaml
          destination:
            server: <Your-New-Cluster-Server-URL>
            namespace: mcs-demo
          syncPolicy:
            automated: {}
            syncOptions:
              - CreateNamespace=true

Step 3: Create MultiClusterServices on the Fleet instance

With both clusters registered as providers and consumers, traffic for any Service can be routed to whichever cluster has running pods — enabling seamless cross-cluster calls throughout the migration.

  1. Create the mcs-demo namespace on the Fleet instance:

    kubectl create ns mcs-demo
  2. Using the kubeconfig of the Fleet instance, create multiclusterservice.yaml. Replace <Your-New-Cluster-ID> and <Your-Old-Cluster-ID> with the IDs of your associated clusters.

    The name and namespace of each MultiClusterService must match the name and namespace of the corresponding Service.
    apiVersion: networking.one.alibabacloud.com/v1alpha1
    kind: MultiClusterService
    metadata:
      name: service1
      namespace: mcs-demo
    spec:
      consumerClusters:
        - name: <Your-New-Cluster-ID>
        - name: <Your-Old-Cluster-ID>
      providerClusters:
        - name: <Your-New-Cluster-ID>
        - name: <Your-Old-Cluster-ID>
    ---
    apiVersion: networking.one.alibabacloud.com/v1alpha1
    kind: MultiClusterService
    metadata:
      name: service2
      namespace: mcs-demo
    spec:
      consumerClusters:
        - name: <Your-New-Cluster-ID>
        - name: <Your-Old-Cluster-ID>
      providerClusters:
        - name: <Your-New-Cluster-ID>
        - name: <Your-Old-Cluster-ID>
    ---
    apiVersion: networking.one.alibabacloud.com/v1alpha1
    kind: MultiClusterService
    metadata:
      name: service3
      namespace: mcs-demo
    spec:
      consumerClusters:
        - name: <Your-New-Cluster-ID>
        - name: <Your-Old-Cluster-ID>
      providerClusters:
        - name: <Your-New-Cluster-ID>
        - name: <Your-Old-Cluster-ID>
  3. Apply the manifest:

    kubectl apply -f multiclusterservice.yaml
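After applying the manifest, you can confirm that the Fleet instance accepted the three resources. The plural resource name below assumes the CRD follows the usual Kubernetes convention (kind MultiClusterService, plural multiclusterservices); adjust if your installation differs:

```shell
# Using the kubeconfig of the Fleet instance: list the three
# MultiClusterService objects created above.
kubectl get multiclusterservices -n mcs-demo
```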

Step 4: Migrate Services in batches

After Steps 1–3, both clusters share a bidirectional MCS mesh. The active call chain is still:

service1(oldcluster) --> service2(oldcluster) --> service3(oldcluster)

This step demonstrates migrating service2 by deploying its Deployment to New Cluster. Once the pods are running, MCS automatically routes traffic to the new backend. Repeat this step for each Service until all traffic is in New Cluster.

  1. Using the kubeconfig of New Cluster, create web-demo-new.yaml:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: service2
      name: svc-demo-service2
      namespace: mcs-demo
    spec:
      replicas: 1
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          app: service2
      template:
        metadata:
          creationTimestamp: null
          labels:
            app: service2
        spec:
          containers:
          - env:
            - name: ENV_NAME
              value: newcluster
            - name: MY_SERVICE_NAME
              value: service2
            - name: SERVICE_URL
              value: http://service3.mcs-demo:80/svc
            - name: MY_CLUSTER
              value: newcluster
            image: registry-cn-hangzhou.ack.aliyuncs.com/dev/web-demo:v0.6.0
            imagePullPolicy: Always
            name: svc-demo
  2. Apply the manifest:

    kubectl apply -f web-demo-new.yaml
  3. Verify the call chain:

    kubectl exec -it -ncustomer-ns curl-client -- sh
    curl service1.mcs-demo/call

    Expected output:

    service1(oldcluster) --> service2(newcluster) --> service3(oldcluster)

    The output confirms that traffic to service2 is now being served by New Cluster, while service1 and service3 continue running in Old Cluster — with no code or DNS changes required.

  4. Repeat steps 1–3 for each remaining Service until all are running in New Cluster.

  5. After all Services are migrated, redirect external traffic to the front-end Service (service1 in this example) in New Cluster to complete the migration.

Verify and roll back

After deploying each Deployment to New Cluster, verify the call chain output before committing. Because each Service is migrated independently and both clusters remain active throughout, rolling back a single Service has no impact on any other Service in the call chain.

If verification passes: Scale the corresponding Deployment in Old Cluster to 0 replicas using the kubeconfig of Old Cluster. MCS routes all traffic to New Cluster.

If verification fails: Scale the Deployment in New Cluster to 0 replicas using the kubeconfig of New Cluster to instantly restore traffic to Old Cluster. After resolving the issue, increase the replicas again and re-verify.
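The two branches above can be sketched as kubectl commands. The kubeconfig file names (old-cluster.kubeconfig, new-cluster.kubeconfig) are placeholders for this example; svc-demo-service2 is the Deployment migrated in Step 4:

```shell
# Verification passed: drain the old backend so MCS routes all
# traffic for service2 to New Cluster.
kubectl --kubeconfig old-cluster.kubeconfig -n mcs-demo \
  scale deployment svc-demo-service2 --replicas=0

# Verification failed: scale the new backend to zero to restore
# traffic to Old Cluster, then fix, scale back up, and re-verify.
kubectl --kubeconfig new-cluster.kubeconfig -n mcs-demo \
  scale deployment svc-demo-service2 --replicas=0
```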

What's next