Container Service for Kubernetes: Migrate applications from external Kubernetes clusters to ACK clusters

Last Updated: Mar 26, 2026

Use the backup center of Container Service for Kubernetes (ACK) to migrate stateful applications—including persistent volumes—from a self-managed Kubernetes cluster to an ACK cluster. The backup center backs up your application workloads and volume data to an Object Storage Service (OSS) bucket, then restores them in the target ACK cluster with optional StorageClass conversion and namespace remapping.

How it works

Register your self-managed cluster to ACK as a registered cluster, then point both clusters at the same backup vault in the same OSS bucket. The backup center takes an application-level backup in the registered cluster and syncs it to the vault. You then trigger a restore in the ACK cluster, specifying namespace and StorageClass mappings for the target environment.

Before you begin

Review these constraints before starting the migration. They determine whether this approach works for your environment:

  • Same-region requirement: The registered cluster, ACK cluster, and OSS bucket must all be in the same region. Cross-region migration is not supported.

  • Kubernetes version: The external cluster must run Kubernetes 1.16 or later. The ACK restore cluster must run Kubernetes 1.18 or later.

  • Storage plugin: The restore cluster must use the Container Storage Interface (CSI) plugin. Restoration is not supported in clusters that use FlexVolume or csi-compatible-controller with FlexVolume.

  • VPC networking: If your registered cluster connects to a virtual private cloud (VPC) through Cloud Enterprise Network (CEN), Express Connect, or VPN, configure a route to the internal endpoint of the OSS bucket's region before starting. For details, see Internal OSS endpoints and VIP ranges.
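
    The internal endpoint mentioned above follows a fixed pattern: oss-<region-id>-internal.aliyuncs.com. The following is a minimal sketch for deriving it, assuming the bucket is in the cn-hangzhou region (replace with your own region ID):

    ```shell
    # Derive the internal OSS endpoint for the bucket's region.
    # Assumption: the bucket is in cn-hangzhou; substitute your region ID.
    REGION="cn-hangzhou"
    ENDPOINT="oss-${REGION}-internal.aliyuncs.com"
    echo "${ENDPOINT}"    # -> oss-cn-hangzhou-internal.aliyuncs.com

    # From a node in the registered cluster, a quick reachability check could be:
    #   curl -sI "https://${ENDPOINT}" | head -n 1
    ```

    If the check fails, fix the route through CEN, Express Connect, or VPN before you start the backup.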

Prerequisites

Make sure that the following conditions are met:

  • A registered cluster exists and your self-managed Kubernetes cluster is registered to ACK through it. For details, see Create a registered cluster.

  • An ACK cluster (version 1.18 or later) for restoring applications is deployed in the same region as the registered cluster. For details, see Create an ACK managed cluster or Create an ACK dedicated cluster.

  • The cluster backup feature is enabled for both clusters. For details, see Install migrate-controller and grant permissions.

  • Cloud Backup is activated. In addition:

    • For an ACK managed cluster: An OSS bucket named cnfs-oss-**** is created.

    • For an ACK dedicated cluster: The worker RAM role has permissions to access OSS and Cloud Backup. For details, see Grant OSS permissions to an ACK dedicated cluster and Grant Cloud Backup permissions to an ACK dedicated cluster or registered cluster.

  • In the registered cluster, a Resource Access Management (RAM) user is created with permissions to access OSS and Cloud Backup. A Secret named alibaba-addon-secret exists in the csdr namespace, storing the RAM user's AccessKey ID and AccessKey secret. Verify the Secret exists:

    kubectl get secret alibaba-addon-secret -n csdr

    Expected output:

    NAME                   TYPE     DATA   AGE
    alibaba-addon-secret   Opaque   2      5d22h
  • Before backing up data, create persistent volumes (PVs) and persistent volume claims (PVCs) to mount local volumes to the registered cluster. For details, see Local volumes.

  • The system components your application depends on are installed and configured in the restore cluster. For example:

    • aliyun-acr-credential-helper: Grant permissions to the restore cluster and configure acr-configuration.

    • alb-ingress-controller: Configure an ALBConfig.
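
If the alibaba-addon-secret Secret does not exist yet, you can create it from a manifest. The following is a minimal sketch; the key names access-key-id and access-key-secret are assumptions, so confirm the exact keys that migrate-controller expects in your environment:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: alibaba-addon-secret
  namespace: csdr
type: Opaque
stringData:                               # stringData avoids manual Base64 encoding
  access-key-id: <AccessKey-ID>           # Replace with the RAM user's AccessKey ID.
  access-key-secret: <AccessKey-Secret>   # Replace with the RAM user's AccessKey secret.
```

Apply the manifest with kubectl apply -f, then re-run the kubectl get secret check above to confirm that the Secret exists.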

Step 1: Deploy an application in the external cluster

This example deploys a MySQL StatefulSet backed by a local volume, then migrates it to an ACK cluster with StorageClass conversion to alibabacloud-cnfs-nas.

  1. Create a namespace named test1:

    kubectl create namespace test1
  2. Create a file named app-mysql.yaml with the following content. Replace <your-hostname> with the name of the node you want to back up. Set username and password to Base64-encoded values for the MySQL application credentials.

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: mysql-sts
      namespace: test1
    spec:
      selector:
        matchLabels:
          app: mysql-sts
      serviceName: mysql-sts
      template:
        metadata:
          labels:
            app: mysql-sts
        spec:
          containers:
          - name: mysql-sts
            image: mysql:5.7
            env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
            ports:
            - containerPort: 3306
              name: mysql-sts
            volumeMounts:
            - name: mysql
              mountPath: /var/lib/mysql
          volumes:
            - name: mysql
              persistentVolumeClaim:
                claimName: example-pvc
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: example-pv
    spec:
      capacity:
        storage: 100Gi
      volumeMode: Filesystem
      accessModes:
      - ReadWriteOnce
      persistentVolumeReclaimPolicy: Delete
      storageClassName: local-storage
      local:
        path: /mnt/disk
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - <your-hostname> # Replace with the name of the node to back up.
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: example-pvc
      namespace: test1
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 25Gi
      storageClassName: local-storage
      volumeName: example-pv
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: mysql-pass
      namespace: test1
    type: Opaque
    data:
      username: dGVz****         # Replace with the Base64-encoded username.
      password: dGVzdDEt****     # Replace with the Base64-encoded password.
  3. Deploy the MySQL application, PV, and PVC:

    kubectl create -f app-mysql.yaml

    Expected output:

    statefulset.apps/mysql-sts created
    persistentvolume/example-pv created
    persistentvolumeclaim/example-pvc created
    secret/mysql-pass created
  4. Verify the pod is running:

    kubectl get pod -n test1 | grep mysql-sts

    Expected output:

    mysql-sts-0   1/1     Running   1 (4m51s ago)   4m58s
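
The mysql-pass Secret in the manifest above stores its values in the data field, which requires Base64 encoding. One way to produce the encoded strings (the value test here is only a placeholder; encode your real username and password):

```shell
# Base64-encode a credential value for the Secret's data fields.
# printf avoids the trailing newline that echo would include in the encoding.
printf '%s' 'test' | base64    # -> dGVzdA==

# To verify, decode the string back:
printf '%s' 'dGVzdA==' | base64 -d    # -> test
```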

Step 2: Back up the MySQL application in the registered cluster

After registering the external cluster to ACK, perform the backup in the registered cluster.

  1. Create a backup vault in the registered cluster. For details, see Create a backup vault.

  2. Create a real-time backup task in the registered cluster. For details, see Create a backup plan or back up instantly. Set the following parameters:

    Parameter           Value
    Name                mysql-backup
    Backup Vaults       Select the backup vault you created
    Backup Namespaces   test1
  3. On the Application Backup page, click the Backup Records tab. When the status of the mysql-backup task changes from InProgress to Completed, the backup is done.

Step 3: Restore the backup in the ACK cluster

This step restores the MySQL application and its data to the test2 namespace in the ACK cluster, converting the StorageClass from local-storage to alibabacloud-cnfs-nas.

  1. Create a restore task named mysql-restore. For details, see Restore applications and volumes. Set the following parameters:

    Parameter                 Value
    Name                      mysql-restore
    Backup Vaults             Select the backup vault you created, then click Initialize Backup Vault to associate the restore cluster with the vault
    Select Backup             mysql-backup
    Reset Namespace           Change test1 to test2
    StorageClass Conversion   Select alibabacloud-cnfs-nas for the example-pvc PVC
  2. Click View Restoration Records to the right of Restore. When the status of the mysql-restore task changes from InProgress to Completed, the application and data are restored.

  3. Verify the MySQL pod is running in the ACK cluster:

    kubectl get pod -n test2 | grep mysql-sts

    Expected output:

    mysql-sts-0   1/1     Running   0          4s
  4. Verify the data is restored correctly:

    1. Check that the PVC StorageClass is alibabacloud-cnfs-nas:

      kubectl get pvc -n test2 | grep example-pvc

      Expected output:

      example-pvc   Bound    nas-acde4acd-59b6-4332-90af-b74ef6******   25Gi       RWO            alibabacloud-cnfs-nas   31m
    2. Confirm the example-pvc PVC is mounted to the MySQL application:

      kubectl describe pvc example-pvc -n test2 | grep "Used By"

      Expected output:

      Used By:       mysql-sts-0

What's next