If you are using ACK dedicated clusters and you want to experience the features provided by ACK Pro clusters, you can use the hot migration feature to dynamically migrate from ACK dedicated clusters to ACK Pro clusters. This topic describes how to dynamically migrate from ACK dedicated clusters to ACK Pro clusters and how to remove master nodes after the hot migration is complete.
Prerequisites
| Prerequisite | Description |
| --- | --- |
| Clusters | An ACK dedicated cluster that runs Kubernetes 1.18 or later is created. Update your ACK cluster if the Kubernetes version of the cluster does not meet the requirement. For more information, see Update an ACK cluster. |
| Server Load Balancer (SLB) instance specification | An internal-facing SLB instance of slb.s1.small or higher is deployed in the cluster. If your SLB instance does not meet the requirement, upgrade the SLB instance. For more information, see Modify the configurations of pay-as-you-go CLB instances. |
| Pod eviction | All pods are migrated to worker nodes, except pods of control plane components (such as the API server, kube-controller-manager, and cloud controller manager (CCM)) and pods of DaemonSets in the kube-system namespace. After the hot migration is complete, the control plane components are replaced by managed components. |
| OSS bucket | An Object Storage Service (OSS) bucket is created in the region of the ACK dedicated cluster, and hotlink protection is disabled for the bucket. Hotlink protection may cause hot migration failures. For more information, see Create buckets and Hotlink protection. |
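Before you start the migration, you can confirm the pod eviction prerequisite from the command line. The following is a minimal sketch: the sample listing stands in for a real query such as `kubectl get pods -A -o custom-columns=NS:.metadata.namespace,NAME:.metadata.name,NODE:.spec.nodeName`, and the node and pod names are illustrative.

```shell
# Sample stand-in for kubectl output (columns: NAMESPACE, NAME, NODE).
# Node names beginning with "master" represent master nodes in this sketch.
kubectl_output='NAMESPACE     NAME            NODE
kube-system   kube-apiserver  master-1
default       my-app-1        master-1
default       my-app-2        worker-1'

# Print pods that still run on master nodes and are not in kube-system;
# these pods must be migrated to worker nodes before the hot migration.
echo "$kubectl_output" | awk 'NR > 1 && $3 ~ /^master/ && $1 != "kube-system" {print $1 "/" $2}'
```

An empty result means the prerequisite is satisfied; any printed `namespace/pod` pair still needs to be moved to a worker node.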
Considerations
| Consideration | Description |
| --- | --- |
| Internet access | ACK dedicated clusters of earlier versions still use Internet-facing SLB instances to access API servers. After you migrate from such a cluster to an ACK Pro cluster, the cluster can no longer access the API server through the Internet-facing SLB instance. To resolve this issue, you need to manually switch to the elastic IP address (EIP) mode by associating an EIP with the internal-facing SLB instance of the API server. This way, the cluster can continue to access the API server over the Internet. For more information about how to manually switch to the EIP mode, see Control public access to the API server of a cluster. |
| Custom pod configurations | After you configure an ACK dedicated cluster to use custom pod configurations, you cannot directly migrate from the ACK dedicated cluster to an ACK Pro cluster. You need to stop terway-controlplane before the migration starts and then enable terway-controlplane after the migration is complete. For more information, see Stop terway-controlplane before cluster migration. For more information about how to customize pod configurations, see Configure a static IP address, a separate vSwitch, and a separate security group for each pod. |
| Components | If the ALB Ingress controller is installed in your ACK dedicated cluster, you need to reinstall the component after the migration is complete. For more information about how to install the ALB Ingress controller, see Manage components. After the ALB Ingress controller is installed, use kubectl to delete the original Deployment. Before you run the command, make sure that the kubectl client is connected to the cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster. |
| Master nodes | The Cloud Assistant Agent is not installed in ACK clusters of earlier versions. You need to manually install the Cloud Assistant Agent. For more information, see Install the Cloud Assistant Agent. |
| Rollback | After you migrate from an ACK dedicated cluster to an ACK Pro cluster, you cannot roll back. |
| Release of ECS instances | If you choose to release Elastic Compute Service (ECS) instances when you remove master nodes, ACK will release all pay-as-you-go ECS instances and their data disks. You need to manually release subscription ECS instances. For more information, see Release an instance. |
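For the Components consideration above, deleting the old self-managed ALB Ingress controller Deployment can be sketched as follows. The Deployment name `alb-ingress-controller` and the kube-system namespace are assumptions, not confirmed by this document; verify the actual name with `kubectl get deploy -n kube-system` first.

```shell
# Assumed names; confirm them in your cluster before deleting anything.
NAMESPACE=kube-system
DEPLOYMENT=alb-ingress-controller

# Printed instead of executed so the command can be reviewed first;
# remove the echo to actually delete the Deployment.
echo "kubectl -n $NAMESPACE delete deployment $DEPLOYMENT"
```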
Step 1: Perform a hot migration to migrate from ACK dedicated clusters to ACK Pro clusters
Before you start, make sure that all prerequisites are met and that you have read and understood the considerations.
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, choose More > Migrate to Pro in the Actions column of the cluster to be migrated.
In the Migrate to Pro dialog box, complete the precheck and Resource Access Management (RAM) authorization, select the OSS bucket that you created for hot migration, and then click OK.
After the migration is complete, the Migrate to Pro dialog box displays a message. You can check the type of the ACK cluster and the status of the master nodes.
Cluster type: Go back to the Clusters page. The cluster type in the Type column changes from ACK Dedicated to ACK Pro.
Master node status: On the Clusters page, click Details in the Actions column of the cluster. In the left-side navigation pane, choose Nodes > Nodes. If the Role/Status column of the master nodes displays Unknown, the master nodes are disconnected from the cluster. You can remove the master nodes by following the steps in Step 2: Remove the master nodes of the ACK dedicated cluster after the hot migration is complete.
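You can also check the master node status with kubectl. In kubectl output, disconnected nodes typically report a NotReady status. The following is an illustrative sketch in which a sample listing stands in for a live `kubectl get nodes` call; the node names are made up.

```shell
# Sample stand-in for `kubectl get nodes` output after the hot migration.
nodes='NAME                       STATUS     ROLES           AGE   VERSION
cn-hangzhou.192.xx.xx.65   NotReady   control-plane   90d   v1.22.3
cn-hangzhou.192.xx.xx.66   NotReady   control-plane   90d   v1.22.3
cn-hangzhou.192.xx.xx.70   Ready      <none>          90d   v1.22.3'

# Print control-plane nodes that are no longer Ready; these are the
# disconnected master nodes to remove in Step 2.
echo "$nodes" | awk 'NR > 1 && $3 == "control-plane" && $2 != "Ready" {print $1}'
```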
Step 2: Remove the master nodes of the ACK dedicated cluster after the hot migration is complete
After the hot migration is complete, you can use the ACK console or kubectl to remove master nodes from the cluster.
Method 1: Use the ACK console
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, click the name of the cluster that you want to manage. In the left-side navigation pane, choose Nodes > Nodes.
On the Nodes page, choose More > Remove in the Actions column of a master node or select one or more master nodes and click Batch Remove at the bottom. In the dialog box that appears, configure parameters and click OK.
Method 2: Use kubectl
Before you run the commands, make sure that a kubectl client is connected to the cluster. For more information about how to use kubectl to connect to a cluster, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
Run the following command to query and record the names of the master nodes to be removed:

```shell
kubectl get node | grep control-plane
```
Run the following command to remove a master node. Replace `<MASTER_NAME>` with the name of the master node:

```shell
kubectl delete node <MASTER_NAME>
```
To remove multiple master nodes at a time, replace `<MASTER_NAME>` with the names of the master nodes. For example, run the following command to remove master nodes cn-hangzhou.192.xx.xx.65 and cn-hangzhou.192.xx.xx.66:

```shell
kubectl delete node cn-hangzhou.192.xx.xx.65 cn-hangzhou.192.xx.xx.66
```
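The query and delete steps above can also be combined into one pipeline. The following sketch uses a sample listing in place of the real `kubectl get node` output, and `xargs echo` prints the delete command instead of running it; drop the `echo` to actually remove the nodes in a live cluster.

```shell
# Sample stand-in for `kubectl get node`; the node names are illustrative.
sample='NAME                       STATUS     ROLES           AGE   VERSION
cn-hangzhou.192.xx.xx.65   NotReady   control-plane   90d   v1.22.3
cn-hangzhou.192.xx.xx.70   Ready      <none>          90d   v1.22.3'

# Select control-plane rows, keep only the node name, and build the
# delete command (printed for review rather than executed).
echo "$sample" | grep control-plane | awk '{print $1}' | xargs echo kubectl delete node
```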
What to do next
If you migrate workloads from an ACK dedicated cluster that has cGPU Basic Edition installed to an ACK Pro cluster, you must upgrade to cGPU Professional Edition in the ACK Pro cluster after the migration is complete. For more information, see Upgrade cGPU Basic Edition to cGPU Professional Edition in an ACK Pro cluster.
After the migration is complete, you must minimize the permissions of the worker role of the ACK Pro cluster. For more information, see Minimize the permissions of the worker role of an ACK Pro cluster after workload migration is completed.