You can perform a hot migration to convert an ACK dedicated cluster to an ACK managed Pro cluster. The hot migration does not interrupt services or affect the normal operation of the cluster.
Container Service for Kubernetes (ACK) stopped supporting the creation of ACK dedicated clusters on August 21, 2024. We recommend that you use ACK managed Pro clusters in the production environment for higher reliability, security, and scheduling efficiency. This allows you to take advantage of the features and capabilities of ACK managed Pro clusters, such as managed control planes and high availability.
Prerequisites
An ACK dedicated cluster (to be migrated) that runs Kubernetes 1.18 or later is created. For more information about how to upgrade a cluster, see Manually upgrade ACK clusters.
After the migration, the Kubernetes version of the cluster remains unchanged and is not forcibly upgraded. If you want to both migrate and upgrade the cluster, we recommend that you migrate the cluster before you upgrade it.
An Object Storage Service (OSS) bucket is created in the region of the ACK cluster to be migrated, and hotlink protection is disabled for the bucket because hotlink protection can cause migration failures. For more information, see Create buckets and Hotlink protection.
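Before you start the migration, you can quickly confirm that the cluster meets the version requirement. The following is a minimal sketch that uses standard kubectl commands and assumes that you have already connected to the cluster by using kubectl:
# Check the Kubernetes version of the API server and the versions of the nodes.
kubectl version
kubectl get nodes -o wide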
Usage notes
Item | Description |
--- | --- |
Billing | |
Internet access | |
Custom pod configurations | If your ACK dedicated cluster has custom pod configurations enabled, you cannot directly migrate the cluster to an ACK managed Pro cluster. You must stop terway-controlplane before the migration starts and then enable terway-controlplane after the migration is complete (see the sketch after this table). For more information, see Stop terway-controlplane before cluster migration. For more information about how to customize pod configurations, see Configure a static IP address, a separate vSwitch, and a separate security group for each pod. |
Master nodes | Cloud Assistant Agent is not installed on some old master nodes. You must manually install it. For more information, see Install the Cloud Assistant Agent. After the cluster migration is complete, the status of the master nodes changes to Not Ready. |
Release of ECS instances | When you remove master nodes, ACK releases all pay-as-you-go ECS instances and their data disks. You must manually release subscription ECS instances. For more information, see Release an instance. |
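As noted in the Custom pod configurations row, terway-controlplane must be stopped before the migration and enabled again afterward. The following is a minimal sketch of how this is commonly done; it assumes that terway-controlplane runs as a Deployment of the same name in the kube-system namespace and that its original replica count is 2. Follow Stop terway-controlplane before cluster migration for the authoritative procedure:
# Scale the terway-controlplane Deployment to 0 replicas before the migration starts.
kubectl -n kube-system scale deployment terway-controlplane --replicas=0
# After the migration is complete, restore the original replica count (assumed to be 2 here).
kubectl -n kube-system scale deployment terway-controlplane --replicas=2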
Step 1: Perform a hot migration to migrate an ACK dedicated cluster to an ACK managed Pro cluster
Before you start, make sure that all prerequisites are met and that you have read and understood the usage notes. After you migrate to an ACK managed Pro cluster, you cannot roll back to an ACK dedicated cluster.
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, choose More > Migrate to Pro in the Actions column of the ACK cluster to be migrated.
In the Migrate to Pro dialog box, complete the precheck and Resource Access Management (RAM) authorization, select the OSS bucket that you created for hot migration, and then click OK.
After the migration is complete, the Migrate to Pro dialog box displays a message. You can check the type of the ACK cluster and the status of the master nodes.
Cluster type: Go back to the Clusters page. The cluster type in the Type column changes from ACK Dedicated Cluster to ACK Managed Cluster, and Professional is displayed in the Cluster Specification column.
Master node status: On the Clusters page, click Details in the Actions column of the cluster. In the left-side navigation pane, choose Nodes > Nodes. If the Role/Status column of the master nodes displays Unknown, the master nodes are disconnected from the cluster. You can remove them as described in Step 2: Remove master nodes from the ACK dedicated cluster after the hot migration is complete.
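You can also check the master node status from the command line. The check below simply reuses the query from Step 2 and assumes that you have connected to the cluster by using kubectl; the STATUS column is expected to show NotReady or Unknown for master nodes after the migration:
# List the master nodes and check the STATUS column.
kubectl get node | grep control-plane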
Step 2: Remove master nodes from the ACK dedicated cluster after the hot migration is complete
After the hot migration is complete, you can use the console or run kubectl commands to remove master nodes from the ACK dedicated cluster.
Use the ACK console
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, click the name of the cluster that you want to manage. In the left-side navigation pane, choose Nodes > Nodes.
On the Nodes page, choose More > Remove in the Actions column of a master node or select one or more master nodes and click Batch Remove at the bottom. In the dialog box that appears, configure parameters and click OK.
Use kubectl
Before you run the command, make sure that you have connected to the cluster by using kubectl. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
Query and record the names of the master nodes that you want to remove.
kubectl get node | grep control-plane
Remove a master node. Replace <MASTER_NAME> with the name of the master node.
kubectl delete node <MASTER_NAME>
To remove multiple master nodes at a time, replace <MASTER_NAME> with the names of the master nodes. For example, to remove master nodes cn-hangzhou.192.xx.xx.65 and cn-hangzhou.192.xx.xx.66 at the same time, run the following command:
kubectl delete node cn-hangzhou.192.xx.xx.65 cn-hangzhou.192.xx.xx.66
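After the delete commands finish, you can rerun the query from the first step to confirm that no master nodes remain registered in the cluster; the command is expected to return no output:
# Verify that all master nodes have been removed from the cluster.
kubectl get node | grep control-plane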
(Optional) Step 3: Handle components
Check whether the Application Load Balancer (ALB) Ingress controller or ack-virtual-node is installed in the ACK dedicated cluster. If yes, you must reinstall or migrate the component.
On the Clusters page, find the cluster that you want to manage and click its name. In the left-side navigation pane, choose Operations > Add-ons.
On the Add-ons page, check whether the ALB Ingress controller or ack-virtual-node is installed in the ACK dedicated cluster.
Reinstall the ALB Ingress controller
If your ACK dedicated cluster has the ALB Ingress controller installed, you must reinstall it after the migration is complete. For more information about how to install the ALB Ingress controller, see Manage components.
After the installation is complete, run the following command to delete the original ALB Ingress controller Deployment. Before you run the command, make sure that you have connected to the cluster by using kubectl. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
kubectl delete deployment alb-ingress-controller -n kube-system
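You can then confirm that the original Deployment has been removed. This minimal check is expected to return a NotFound error after the deletion succeeds:
# Confirm that the old ALB Ingress controller Deployment no longer exists.
kubectl -n kube-system get deployment alb-ingress-controller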
Reinstall the ACK Virtual Node component
If your ACK dedicated cluster has the ACK Virtual Node component installed, you must manually reinstall the component in the ACK managed Pro cluster after the migration is complete to avoid business interruptions.
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, find the cluster that you want to manage and click its name. In the left-side navigation pane, choose Operations > Add-ons.
On the Add-ons page, find and install the ACK Virtual Node component.
After the ACK Virtual Node component is installed, run the following commands in sequence to delete the original components and configurations.
# Delete the original vk-webhook Service, ack-virtual-node-controller Deployment, ClusterRoleBindings related to virtual nodes, and virtual node ServiceAccounts in sequence.
kubectl -n kube-system delete service vk-webhook
kubectl -n kube-system delete deployment ack-virtual-node-controller
kubectl -n kube-system delete clusterrolebinding virtual-kubelet
kubectl -n kube-system delete serviceaccount virtual-kubelet
After the migration is complete, create test pods to check whether the cluster runs as expected.
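The following is a minimal sketch of such a check. It assumes a test pod named migration-test that uses the public nginx image; adjust the image and any scheduling settings (for example, labels or tolerations that target virtual nodes) to match your environment:
# Create a test pod, watch it until it reaches the Running state, and then delete it.
kubectl run migration-test --image=nginx --restart=Never
kubectl get pod migration-test -w
kubectl delete pod migration-test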
What to do next
After you migrate to an ACK managed Pro cluster, you must manually limit the permissions of the worker RAM role assumed by nodes in the cluster in order to enhance node security. For more information, see Manually limit the permissions of the worker RAM role of an ACK managed cluster.
If your ACK dedicated cluster has cGPU Basic Edition installed, after you migrate to an ACK managed Pro cluster, you must upgrade cGPU Basic Edition to cGPU Professional Edition. For more information, see Upgrade cGPU Basic Edition to cGPU Professional Edition in an ACK Pro cluster.
FAQ
Are the services in the ACK dedicated cluster affected during the migration?
During the migration, the control plane components of the ACK dedicated cluster are dormant. The running services are not affected.
How long does the migration process take?
The cluster migration includes three stages: the control plane enters sleep mode, etcd data is backed up, and managed components are started. The overall process is expected to take 10 to 15 minutes. During this time, the API server is expected to be unavailable for 5 to 10 minutes.
Does the access link change after the cluster migration?
After the migration, the IP address of the SLB instance of the API server does not change. When you use the kubeconfig file to access the cluster, the IP address of the cluster does not change.
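You can verify this by printing the API server address that your current kubeconfig file points to; the address is expected to be the same before and after the migration:
# Print the control plane address that kubectl currently uses.
kubectl cluster-info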
How do I handle failures in environment variable configurations for ACK Virtual Node during the precheck?
If the ACK Virtual Node component is installed in the ACK dedicated cluster, you must manually configure an internal endpoint for kube-apiserver before the migration starts. To do this, perform the following steps:
On the Cluster Information page, obtain the internal endpoint of kube-apiserver.
On the Deployments page, select the kube-system namespace, find the Deployment named ack-virtual-node-controller, and then add the following environment variables to the spec.template.spec.containers[0].env field of the Deployment:
KUBERNETES_APISERVER_HOST: the private IP address of kube-apiserver.
KUBERNETES_APISERVER_PORT: the private port of kube-apiserver, which is set to 6443 in most cases.
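The two environment variables can be added by editing the Deployment in the console as described above, or from the command line. The following sketch uses kubectl set env; the IP address 192.168.xx.xx is a placeholder that you must replace with the internal endpoint obtained on the Cluster Information page:
# Add the internal kube-apiserver endpoint to the ack-virtual-node-controller Deployment.
kubectl -n kube-system set env deployment/ack-virtual-node-controller \
  KUBERNETES_APISERVER_HOST=192.168.xx.xx \
  KUBERNETES_APISERVER_PORT=6443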