
Container Service for Kubernetes:Hot migration from ACK dedicated clusters to ACK Pro clusters

Last Updated: Dec 12, 2023

If you are using ACK dedicated clusters and you want to experience the features provided by ACK Pro clusters, you can use the hot migration feature to dynamically migrate from ACK dedicated clusters to ACK Pro clusters. This topic describes how to dynamically migrate from ACK dedicated clusters to ACK Pro clusters and how to remove master nodes after the hot migration is complete.


Prerequisites

• Clusters: An ACK dedicated cluster that runs Kubernetes 1.18 or later is created. Update your ACK cluster if the Kubernetes version of the cluster does not meet the requirement. For more information, see Update an ACK cluster.

• Server Load Balancer (SLB) instance specification: An internal-facing SLB instance of slb.s1.small or higher is deployed in the cluster. If your SLB instance does not meet the requirement, upgrade the SLB instance. For more information, see Modify the configurations of pay-as-you-go CLB instances.

To check the specification of an SLB instance:

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, click the name of the cluster that you want to manage and click Cluster Information in the left-side navigation pane.

  3. Click the Cluster Resources tab. Click the ID of the SLB instance next to API Server SLB to go to the instance details page. In the Specification section of the Billing Information page, check the specification of the SLB instance.
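The specification can also be checked from the command line. The following is a sketch that parses a response shaped like the one returned by the SLB DescribeLoadBalancerAttribute API; the instance ID and the response excerpt are hypothetical:

```shell
# Hypothetical excerpt of a DescribeLoadBalancerAttribute response, e.g. from:
#   aliyun slb DescribeLoadBalancerAttribute --LoadBalancerId <SLB_ID>
cat <<'EOF' > /tmp/slb.json
{"LoadBalancerId": "lb-xxxx", "AddressType": "intranet", "LoadBalancerSpec": "slb.s1.small"}
EOF

# Extract the specification; slb.s1.small is the minimum required for hot migration,
# and AddressType must be intranet (internal-facing)
spec=$(grep -o '"LoadBalancerSpec": *"[^"]*"' /tmp/slb.json | cut -d'"' -f4)
echo "$spec"
```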

• Pod eviction: Make sure that all pods are migrated to worker nodes, except pods of control plane components (such as the API server, kube-controller-manager, and the cloud controller manager (CCM)) and pods of DaemonSets that belong to the kube-system namespace. After the hot migration is complete, the control plane components are replaced by managed components.
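To satisfy the pod eviction requirement, workloads on master nodes can be drained with kubectl. A minimal sketch, assuming hypothetical node names; the drain commands are printed for review rather than executed:

```shell
# Hypothetical master node names; on a real cluster, list them with:
#   kubectl get nodes -o name
MASTERS="cn-hangzhou.192.xx.xx.65 cn-hangzhou.192.xx.xx.66"

for node in $MASTERS; do
  # --ignore-daemonsets leaves DaemonSet pods (such as those in kube-system)
  # in place, matching the exception described above
  echo "kubectl drain $node --ignore-daemonsets --delete-emptydir-data"
done
```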

• OSS bucket: An Object Storage Service (OSS) bucket is created in the region of the ACK dedicated cluster and hotlink protection is disabled for the bucket. Hotlink protection may cause hot migration failures. For more information, see Create buckets and Hotlink protection.

Considerations

• Internet access: ACK dedicated clusters of earlier versions still use Internet-facing SLB instances to access API servers. After you migrate such a cluster to an ACK Pro cluster, the cluster can no longer access the API server through the Internet-facing SLB instance. To resolve this issue, manually switch to the elastic IP address (EIP) mode by associating an EIP with the internal-facing SLB instance of the API server, so that the cluster can continue to access the API server over the Internet. For more information, see Control public access to the API server of a cluster.

• Custom pod configurations: If you configured an ACK dedicated cluster to use custom pod configurations, you cannot directly migrate the cluster to an ACK Pro cluster. Stop terway-controlplane before the migration starts and enable it again after the migration is complete. For more information, see Stop terway-controlplane before cluster migration. For more information about how to customize pod configurations, see Configure a static IP address, a separate vSwitch, and a separate security group for each pod.

• Components: If the ALB Ingress controller is installed in your ACK dedicated cluster, reinstall the component after the migration is complete. For more information, see Manage components. After the ALB Ingress controller is reinstalled, run the following kubectl command to delete the original Deployment. Before you run the command, make sure that the kubectl client is connected to the cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.

kubectl delete deployment alb-ingress-controller -n kube-system

• Master nodes: The Cloud Assistant Agent is not installed in ACK clusters of earlier versions. In this case, manually install the Cloud Assistant Agent. For more information, see Install the Cloud Assistant Agent.

• Rollback: After you migrate from an ACK dedicated cluster to an ACK Pro cluster, the migration cannot be rolled back.

• Release of ECS instances: If you choose to release Elastic Compute Service (ECS) instances when you remove master nodes, ACK releases all pay-as-you-go ECS instances and their data disks. You must manually release subscription ECS instances. For more information, see Release an instance.

Step 1: Perform a hot migration to migrate from ACK dedicated clusters to ACK Pro clusters

Before you start, make sure that all prerequisites are met and that you have read and understood the considerations.

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, choose More > Migrate to Pro in the Actions column of the cluster to be migrated.

  3. In the Migrate to Pro dialog box, complete the precheck and Resource Access Management (RAM) authorization, select the OSS bucket that you created for hot migration, and then click OK.

    To complete the precheck:

    Click precheck to log on to the Container Intelligence Service console. On the Migration Check page, click Start. Then, confirm the check items in the panel that appears, select I know and agree, and click Start.

    If the cluster fails to pass the precheck, follow the instructions on the page to fix the issues.


    To complete the RAM authorization:

    1. Click RAM console and complete the RAM authorization. Obtain the name of the OSS bucket, which will be used in the following steps.

    2. Click a policy whose name starts with k8sMasterRolePolicy. On the Policy Document tab of the policy details page, click Modify Policy Document. Then, add the following content to the Statement field in the JSON editor and click OK.

      Replace <YOUR_BUCKET_NAME> with the name of the OSS bucket that is specified in the Migrate to Pro dialog box. You need to delete the angle brackets (<>).

      ,
              {
                  "Action": [
                      "oss:PutObject",
                      "oss:GetObject"
                  ],
                  "Effect": "Allow",
                  "Resource": [
                      "acs:oss:*:*:<YOUR_BUCKET_NAME>/*"  
                  ]
              }
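For context, after the snippet is appended the policy's Statement array should look similar to the following sketch. The leading comma in the snippet above separates the new entry from the last existing one. The existing entries in your k8sMasterRolePolicy will differ from the placeholder shown here, and the bucket name is a placeholder:

```json
{
    "Version": "1",
    "Statement": [
        {
            "Action": "ecs:*",
            "Effect": "Allow",
            "Resource": ["*"]
        },
        {
            "Action": [
                "oss:PutObject",
                "oss:GetObject"
            ],
            "Effect": "Allow",
            "Resource": [
                "acs:oss:*:*:<YOUR_BUCKET_NAME>/*"
            ]
        }
    ]
}
```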

    After the migration is complete, the Migrate to Pro dialog box displays a message. You can check the type of the ACK cluster and the status of the master nodes.

    • Cluster type: Go back to the Clusters page. The cluster type in the Type column changes from ACK Dedicated to ACK Pro.

    • Master node status: On the Clusters page, click Details in the Actions column of the cluster. In the left-side navigation pane, choose Nodes > Nodes. If the Role/Status column of the master nodes displays Unknown, the master nodes are disconnected from the cluster. You can remove the master nodes by following the steps in Step 2: Remove the master nodes of the ACK dedicated cluster after the hot migration is complete.

Step 2: Remove the master nodes of the ACK dedicated cluster after the hot migration is complete

After the hot migration is complete, you can use the ACK console or kubectl to remove master nodes from the cluster.

Method 1: Use the ACK console

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, click the name of the cluster that you want to manage and choose Nodes > Nodes in the left-side navigation pane.

  3. On the Nodes page, choose More > Remove in the Actions column of a master node or select one or more master nodes and click Batch Remove at the bottom. In the dialog box that appears, configure parameters and click OK.

Method 2: Use kubectl

Before you run the commands, make sure that a kubectl client is connected to the cluster. For more information about how to use kubectl to connect to a cluster, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.

  1. Run the following command to query and record the names of the master nodes to be removed:

    kubectl get node | grep control-plane
  2. Run the following command to remove a master node. Replace <MASTER_NAME> with the name of the master node.

    kubectl delete node <MASTER_NAME>

    To remove multiple master nodes at a time, replace <MASTER_NAME> with the names of the master nodes. For example, run the following command to remove master nodes cn-hangzhou.192.xx.xx.65 and cn-hangzhou.192.xx.xx.66:

    kubectl delete node cn-hangzhou.192.xx.xx.65 cn-hangzhou.192.xx.xx.66
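When there are many master nodes, their names can be extracted and passed to kubectl delete node in one pipeline. The following sketch runs against sample kubectl get node output; the node names are hypothetical, and on a real cluster you would pipe the names into xargs kubectl delete node instead of echoing them:

```shell
# Sample `kubectl get node` output, for illustration only
cat <<'EOF' > /tmp/nodes.txt
cn-hangzhou.192.xx.xx.65   NotReady   control-plane   30d   v1.22.3
cn-hangzhou.192.xx.xx.66   NotReady   control-plane   30d   v1.22.3
cn-hangzhou.192.xx.xx.70   Ready      <none>          30d   v1.22.3
EOF

# Select the master nodes and print their names; on a real cluster:
#   kubectl get node | grep control-plane | awk '{print $1}' | xargs kubectl delete node
masters=$(grep control-plane /tmp/nodes.txt | awk '{print $1}')
echo "$masters"
```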

What to do next