Use the Multi-Cluster Services (MCS) feature of Distributed Cloud Container Platform for Kubernetes (ACK One) to migrate Services across clusters in phases — without modifying application code, CoreDNS configuration, or pod DNSConfig. This tutorial walks through a simulated migration scenario with three interdependent Services.
When to use this approach
When applications span hundreds or thousands of interdependent Services, a one-time "lift and shift" migration carries significant risk:
Full verification of every Service is time-consuming and extends the migration window.
Complex inter-Service dependencies make it impossible to reduce risk through a phased cutover: no Service can move alone while the Services it depends on stay behind.
Teams migrating on different schedules can cause downtime for Services that depend on each other.
The MCS-based approach eliminates these problems by enabling bidirectional cross-cluster Service access through native Service names. Services in the old cluster can call Services in the new cluster — and vice versa — using the same DNS names they already use today. This means you can migrate one Service at a time, verify it in production, and roll back instantly if something goes wrong, without touching any other Service.
Trade-offs and known constraints:
| Constraint | Detail |
|---|---|
| Network requirement | The pod CIDR blocks of the two clusters must be mutually routable. If your clusters do not yet have cross-cluster network connectivity, set that up before proceeding. See Multi-cluster Service overview. |
| Fleet dependency | Both clusters must be added to an ACK One Fleet instance. |
How it works
Create a MultiClusterService on the ACK One Fleet instance and a Service with the same name in the consumer cluster. The consumer cluster can then resolve and route traffic to the Service in the provider cluster using the native Service name, with no changes to application code.
Each MultiClusterService maps to a Service by name and namespace, and lists both provider clusters (which serve the traffic) and consumer clusters (which route requests):
```yaml
apiVersion: networking.one.alibabacloud.com/v1alpha1
kind: MultiClusterService
metadata:
  name: service
  namespace: demo
spec:
  consumerClusters:
  - name: <your consumer cluster id>
  providerClusters:
  - name: <your provider cluster id>
```
The name and namespace of the MultiClusterService must match the name and namespace of the Service being migrated. A cluster can act as both a provider and a consumer simultaneously.
For a deeper explanation of MCS, see Access Services across clusters by using domain names.
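For example, once the MultiClusterService above is in place and a Service named `service` exists in the `demo` namespace of the consumer cluster, a pod in the consumer cluster reaches the provider's backends through the unchanged DNS name. A minimal check, assuming a pod named `client-pod` exists in that namespace (the pod name and path are illustrative):

```shell
# From the consumer cluster: the native cluster-local DNS name now
# resolves and routes to backends running in the provider cluster.
kubectl exec -it client-pod -n demo -- \
  curl http://service.demo.svc.cluster.local:80/
```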
Migration overview
The phased migration process follows four steps:
| Step | Action | Cluster |
|---|---|---|
| 1 | Deploy all Services and Deployments | Old Cluster |
| 2 | Deploy Service definitions only (no Deployments) | New Cluster |
| 3 | Create MultiClusterServices for all Services — both clusters as mutual providers and consumers | Fleet instance |
| 4 | Migrate Services one batch at a time: deploy a Deployment to New Cluster, verify traffic, scale Old Cluster Deployment to zero. Repeat until complete, then redirect external traffic. | New Cluster |
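The per-Service loop inside step 4 can be sketched as a short script. This is an illustration only: the kubeconfig file names, manifest names, and the `svc-demo-<name>` Deployment naming scheme are assumptions taken from this tutorial's demo, and the verification gate is a manual check in practice:

```shell
#!/bin/sh
# Migrate one Service per iteration: bring it up in New Cluster,
# verify the call chain, then drain it from Old Cluster.
for SVC in service1 service2 service3; do
  kubectl --kubeconfig new-cluster.kubeconfig apply -f "deploy-${SVC}.yaml"
  kubectl --kubeconfig new-cluster.kubeconfig rollout status \
    "deployment/svc-demo-${SVC}" -n mcs-demo
  # Manual gate: confirm the expected "(newcluster)" hop appears in the output
  # before draining the old backend.
  kubectl --kubeconfig old-cluster.kubeconfig exec -n customer-ns curl-client -- \
    curl -s service1.mcs-demo/call
  kubectl --kubeconfig old-cluster.kubeconfig scale \
    "deployment/svc-demo-${SVC}" -n mcs-demo --replicas=0
done
```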
Prerequisites
Before you begin, make sure you have:
Fleet management enabled on your ACK One instance
Two associated clusters — referred to as Old Cluster and New Cluster — added to the Fleet instance. See Manage associated clusters
Kubernetes 1.22 or later on both clusters
Pod CIDR blocks of Old Cluster and New Cluster mutually interconnected. See Multi-cluster Service overview
kubeconfig files for both clusters. See Get a cluster kubeconfig and connect to the cluster using kubectl
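A quick way to confirm the pod CIDR requirement is to reach a pod IP in one cluster from a pod in the other. A hedged sketch (the pod selection, kubeconfig file names, and busybox image are illustrative):

```shell
# Grab the IP of an existing pod in New Cluster (any running pod will do).
NEW_POD_IP=$(kubectl --kubeconfig new-cluster.kubeconfig get pods -n kube-system \
  -o jsonpath='{.items[0].status.podIP}')

# Ping it from a throwaway pod in Old Cluster; replies confirm pod-to-pod routability.
kubectl --kubeconfig old-cluster.kubeconfig run netcheck --rm -it \
  --image=busybox --restart=Never -- ping -c 3 "$NEW_POD_IP"
```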
Step 1: Deploy Services and Deployments to Old Cluster
This step sets up the demo environment in Old Cluster: three Services (service1, service2, service3) in a call chain where service1 calls service2, which calls service3.
Option B: Deploy with GitOps
1. Using the kubeconfig of the Fleet instance, get the server URL for Old Cluster:

   ```shell
   kubectl get secret -n argocd cluster-<Old-Cluster-ID> -o jsonpath='{.data.server}' | base64 -d
   ```

2. In the ACK One console, go to Fleet > Multi-cluster GitOps.
3. In the upper-left corner of the Multi-cluster Applications page, select your Fleet instance.
4. Choose Create Multi-cluster Application > GitOps to open the Create Multi-cluster Application - GitOps page.
5. On the Create from YAML tab, paste the following ApplicationSet and click OK. Replace `<Your-Old-Cluster-Server-URL>` with the server URL from step 1.

   ```yaml
   apiVersion: argoproj.io/v1alpha1
   kind: ApplicationSet
   metadata:
     name: svc-demo-oldcluster
     namespace: argocd
   spec:
     generators:
     - list:
         elements:
         - envSvcName: service1
           envCluster: oldcluster
           envCallSvcName: service2
           envAPI: call
         - envSvcName: service2
           envCluster: oldcluster
           envCallSvcName: service3
           envAPI: svc
         - envSvcName: service3
           envCluster: oldcluster
           envCallSvcName: service3
           envAPI: svc
     template:
       metadata:
         name: '{{envSvcName}}-{{envCluster}}-svc-demo'
       spec:
         project: default
         source:
           repoURL: https://github.com/AliyunContainerService/gitops-demo.git
           targetRevision: main
           path: manifests/helm/svc-demo
           helm:
             parameters:
             - name: envSvcName
               value: '{{envSvcName}}'
             - name: envCallSvcName
               value: '{{envCallSvcName}}'
             - name: envAPI
               value: '{{envAPI}}'
             - name: envCluster
               value: '{{envCluster}}'
             valueFiles:
             - values.yaml
         destination:
           server: <Your-Old-Cluster-Server-URL>
           namespace: mcs-demo
         syncPolicy:
           automated: {}
           syncOptions:
           - CreateNamespace=true
   ```

6. Deploy the curl client pod. On the Multi-cluster Applications page, click GitOps console to open the Argo CD UI and create the following Application. Replace `<Your-Old-Cluster-Server-URL>` with the server URL from step 1.

   ```yaml
   apiVersion: argoproj.io/v1alpha1
   kind: Application
   metadata:
     name: curl-client-pod
     namespace: argocd
   spec:
     destination:
       namespace: customer-ns
       server: <Your-Old-Cluster-Server-URL>
     project: default
     source:
       path: manifests/directory/curlclient
       repoURL: https://github.com/AliyunContainerService/gitops-demo.git
       targetRevision: HEAD
     syncPolicy:
       syncOptions:
       - CreateNamespace=true
   ```
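Before moving on, it is worth confirming that the GitOps applications synced. A quick check using standard Argo CD resources (kubeconfig file names are placeholders, and the status columns vary by Argo CD version):

```shell
# Each generated Application should report Synced / Healthy.
kubectl --kubeconfig fleet.kubeconfig get applications -n argocd

# The demo workloads should now be running in Old Cluster.
kubectl --kubeconfig old-cluster.kubeconfig get deploy,svc -n mcs-demo
```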
Verify the call chain in Old Cluster
Run the following commands to confirm the three Services are calling each other correctly:
```shell
kubectl exec -it -ncustomer-ns curl-client -- sh
curl service1.mcs-demo/call
```

Expected output:

```
service1(oldcluster) --> service2(oldcluster) --> service3(oldcluster)
```

Step 2: Deploy Service definitions to New Cluster
Deploy the Service resources (without Deployments) to New Cluster. This registers the Service names in the new cluster so they can be referenced by MultiClusterServices in the next step.
Option A: Deploy with kubectl
Using the kubeconfig of New Cluster, create `web-demo-svc-new.yaml`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service1
  namespace: mcs-demo
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: service1
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: service2
  namespace: mcs-demo
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: service2
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: service3
  namespace: mcs-demo
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: service3
  type: ClusterIP
```

Apply the manifest:

```shell
kubectl apply -f web-demo-svc-new.yaml
```
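At this point New Cluster has the Service objects but no pods behind them, so each Service should show a ClusterIP while its endpoints stay empty. A quick sanity check before wiring up MCS (kubeconfig file name is a placeholder):

```shell
# The three Services exist...
kubectl --kubeconfig new-cluster.kubeconfig get svc -n mcs-demo

# ...but no Deployments back them yet, so ENDPOINTS should be <none>.
kubectl --kubeconfig new-cluster.kubeconfig get endpoints -n mcs-demo
```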
Option B: Deploy with GitOps
1. Using the kubeconfig of the Fleet instance, get the server URL for New Cluster:

   ```shell
   kubectl get secret -n argocd cluster-<New-Cluster-ID> -o jsonpath='{.data.server}' | base64 -d
   ```

2. In the ACK One console, go to Fleet > Multi-cluster GitOps.
3. Select your Fleet instance from the drop-down list in the upper-left corner.
4. Choose Create Multi-cluster Application > GitOps.
5. On the Create from YAML tab, paste the following ApplicationSet and click OK. Replace `<Your-New-Cluster-Server-URL>` with the server URL from step 1.

   ```yaml
   apiVersion: argoproj.io/v1alpha1
   kind: ApplicationSet
   metadata:
     name: svc-demo-newcluster
     namespace: argocd
   spec:
     generators:
     - list:
         elements:
         - envSvcName: service1
           envCluster: newcluster
           onlyService: "true"
         - envSvcName: service2
           envCluster: newcluster
           onlyService: "true"
         - envSvcName: service3
           envCluster: newcluster
           onlyService: "true"
     template:
       metadata:
         name: '{{envSvcName}}-{{envCluster}}-svc-demo'
       spec:
         project: default
         source:
           repoURL: https://github.com/AliyunContainerService/gitops-demo.git
           targetRevision: main
           path: manifests/helm/svc-demo
           helm:
             parameters:
             - name: envSvcName
               value: '{{envSvcName}}'
             - name: onlyService
               value: '{{onlyService}}'
             valueFiles:
             - values.yaml
         destination:
           server: <Your-New-Cluster-Server-URL>
           namespace: mcs-demo
         syncPolicy:
           automated: {}
           syncOptions:
           - CreateNamespace=true
   ```
Step 3: Create MultiClusterServices on the Fleet instance
With both clusters registered as providers and consumers, traffic for any Service can be routed to whichever cluster has running pods — enabling seamless cross-cluster calls throughout the migration.
1. Create the `mcs-demo` namespace on the Fleet instance:

   ```shell
   kubectl create ns mcs-demo
   ```

2. Using the kubeconfig of the Fleet instance, create `multiclusterservice.yaml`. Replace `<Your-New-Cluster-ID>` and `<Your-Old-Cluster-ID>` with the IDs of your associated clusters. The `name` and `namespace` of each MultiClusterService must match the name and namespace of the corresponding Service.

   ```yaml
   apiVersion: networking.one.alibabacloud.com/v1alpha1
   kind: MultiClusterService
   metadata:
     name: service1
     namespace: mcs-demo
   spec:
     consumerClusters:
     - name: <Your-New-Cluster-ID>
     - name: <Your-Old-Cluster-ID>
     providerClusters:
     - name: <Your-New-Cluster-ID>
     - name: <Your-Old-Cluster-ID>
   ---
   apiVersion: networking.one.alibabacloud.com/v1alpha1
   kind: MultiClusterService
   metadata:
     name: service2
     namespace: mcs-demo
   spec:
     consumerClusters:
     - name: <Your-New-Cluster-ID>
     - name: <Your-Old-Cluster-ID>
     providerClusters:
     - name: <Your-New-Cluster-ID>
     - name: <Your-Old-Cluster-ID>
   ---
   apiVersion: networking.one.alibabacloud.com/v1alpha1
   kind: MultiClusterService
   metadata:
     name: service3
     namespace: mcs-demo
   spec:
     consumerClusters:
     - name: <Your-New-Cluster-ID>
     - name: <Your-Old-Cluster-ID>
     providerClusters:
     - name: <Your-New-Cluster-ID>
     - name: <Your-Old-Cluster-ID>
   ```

3. Apply the manifest:

   ```shell
   kubectl apply -f multiclusterservice.yaml
   ```
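You can then list the MultiClusterService objects on the Fleet instance to confirm all three were created. The fully qualified resource name below is inferred from the manifest's `apiVersion` and `kind` under the standard CRD plural convention, so treat it as an assumption; the status columns shown depend on the controller version:

```shell
# Using the Fleet instance's kubeconfig: all three objects should be listed.
kubectl get multiclusterservices.networking.one.alibabacloud.com -n mcs-demo
```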
Step 4: Migrate Services in batches
After Steps 1–3, both clusters share a bidirectional MCS mesh. The active call chain is still:
```
service1(oldcluster) --> service2(oldcluster) --> service3(oldcluster)
```

This step demonstrates migrating service2 by deploying its Deployment to New Cluster. Once the pods are running, MCS automatically routes traffic to the new backend. Repeat this step for each Service until all traffic is served from New Cluster.
1. Using the kubeconfig of New Cluster, create `web-demo-new.yaml`:

   ```yaml
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     labels:
       app: service2
     name: svc-demo-service2
     namespace: mcs-demo
   spec:
     replicas: 1
     revisionHistoryLimit: 10
     selector:
       matchLabels:
         app: service2
     template:
       metadata:
         labels:
           app: service2
       spec:
         containers:
         - env:
           - name: ENV_NAME
             value: newcluster
           - name: MY_SERVICE_NAME
             value: service2
           - name: SERVICE_URL
             value: http://service3.mcs-demo:80/svc
           - name: MY_CLUSTER
             value: newcluster
           image: registry-cn-hangzhou.ack.aliyuncs.com/dev/web-demo:v0.6.0
           imagePullPolicy: Always
           name: svc-demo
   ```

2. Apply the manifest:

   ```shell
   kubectl apply -f web-demo-new.yaml
   ```

3. Verify the call chain:

   ```shell
   kubectl exec -it -ncustomer-ns curl-client -- sh
   curl service1.mcs-demo/call
   ```

   Expected output:

   ```
   service1(oldcluster) --> service2(newcluster) --> service3(oldcluster)
   ```

   The output confirms that traffic to `service2` is now served by New Cluster, while `service1` and `service3` continue running in Old Cluster, with no code or DNS changes required.

Repeat steps 1–3 for each remaining Service until all are running in New Cluster.

After all Services are migrated, redirect external traffic to the Front Service in New Cluster to complete the migration.
Verify and roll back
After deploying each Deployment to New Cluster, verify the call chain output before committing. Because each Service is migrated independently and both clusters remain active throughout, rolling back a single Service has no impact on any other Service in the call chain.
If verification passes: Scale the corresponding Deployment in Old Cluster to 0 replicas using the kubeconfig of Old Cluster. MCS routes all traffic to New Cluster.
If verification fails: Scale the Deployment in New Cluster to 0 replicas using the kubeconfig of New Cluster to instantly restore traffic to Old Cluster. After resolving the issue, increase the replicas again and re-verify.
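The commit and rollback actions above each reduce to a single scale command, shown here for service2 (the kubeconfig file names are placeholders; the Deployment name follows this tutorial's demo):

```shell
# Verification passed: drain the old backend.
# MCS now routes all service2 traffic to New Cluster.
kubectl --kubeconfig old-cluster.kubeconfig scale \
  deployment/svc-demo-service2 -n mcs-demo --replicas=0

# Verification failed: drain the new backend instead.
# Traffic immediately falls back to the Old Cluster pods.
kubectl --kubeconfig new-cluster.kubeconfig scale \
  deployment/svc-demo-service2 -n mcs-demo --replicas=0
```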