This topic uses the Swarm cluster swarm-piggymetrics-cluster as an example to describe how to migrate cluster configurations. The configuration migration process includes the following steps: create a Kubernetes cluster, migrate node labels, verify that the Kubernetes cluster can access other Alibaba Cloud services over a Virtual Private Cloud (VPC), migrate volumes, and migrate configuration items.
Create a Kubernetes cluster
- Select the same region and zone where the Swarm cluster swarm-piggymetrics-cluster resides. This ensures that resources that cannot be migrated across regions, such as the VPC, can be reused by the Kubernetes cluster.
- Kubernetes clusters support only VPCs, not the classic network.
- If the Swarm cluster swarm-piggymetrics-cluster is in a VPC, create the managed Kubernetes cluster in the same VPC.
- If the Swarm cluster swarm-piggymetrics-cluster is in the classic network, you must migrate the Swarm cluster to a VPC. For more information, see Hybrid migration.
- When you create the managed Kubernetes cluster, select Install the CloudMonitor Agent on ECS Nodes. This allows you to view the monitoring information of the Elastic Compute Service (ECS) instances in the CloudMonitor console when you switch traffic to the Kubernetes cluster later.
- If the Swarm cluster swarm-piggymetrics-cluster uses Log Service, select Enable Log Service and the Log Service project used by the Swarm cluster. In this way, you do not need to migrate Log Service configurations separately.
- If the Swarm cluster swarm-piggymetrics-cluster uses a Relational Database Service (RDS) instance, select the RDS instance to add the nodes of the managed Kubernetes cluster to the whitelist of the RDS instance. This guarantees that the Kubernetes cluster can access the RDS instance.
Migrate node labels
If nodes in the Swarm cluster swarm-piggymetrics-cluster contain custom labels and the custom labels are used to specify the nodes to deploy applications in application configurations, you must migrate these custom labels. To migrate custom labels, follow these steps:
- View the custom labels of nodes in the Swarm cluster.
- Log on to the Container Service - Swarm console. In the left-side navigation pane, click Cluster. On the Clusters page, find the target cluster and click Manage in the Actions column.
- In the left-side navigation pane, click Custom Labels. In the Custom Labels column on the page that appears, you can find the custom labels configured in the Swarm cluster swarm-piggymetrics-cluster.
- Add the same labels to the corresponding nodes in the managed Kubernetes cluster.
- Log on to the Container Service - Kubernetes console. In the left-side navigation pane, choose Clusters > Nodes. On the Nodes page, click Manage Labels in the upper right corner.
- On the Manage Labels page, select the target node and click Add Label. In the Add dialog box that appears, enter the label name and value and click OK.
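In addition to the console steps above, the same labels can be applied with kubectl. A minimal sketch, assuming a hypothetical node name and a hypothetical Swarm custom label group=frontend (replace both with the values from your cluster):

```shell
# Apply the label to the target node. The node name and the label
# group=frontend below are hypothetical examples.
kubectl label nodes cn-hangzhou.i-2zexample group=frontend

# Verify that the label was applied.
kubectl get nodes --show-labels | grep group=frontend
```

These commands require kubectl to be configured with the kubeconfig of the managed Kubernetes cluster.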
Verify that the Kubernetes cluster can access other Alibaba Cloud services over a VPC
If the Swarm cluster swarm-piggymetrics-cluster is in the classic network, the managed Kubernetes cluster that resides in a VPC may fail to access the RDS instance and Object Storage Service (OSS) bucket used by the Swarm cluster. You have added the nodes of the managed Kubernetes cluster to the whitelist of the RDS instance when creating the managed Kubernetes cluster. The OSS bucket provides two endpoints: one for the classic network and one for VPCs. In this section, you must verify that the managed Kubernetes cluster can access the RDS instance and OSS bucket over the VPC where the cluster resides.
- Log on to the RDS console. On the Instances page, click the target RDS instance. On the instance details page, click Data security in the left-side navigation pane. On the Whitelist Settings tab, verify that the IP addresses of the nodes of the managed Kubernetes cluster have been added to the whitelist of the RDS instance.
- In the left-side navigation pane, click Database Connection. On the Instance Connection tab, obtain the value of the Internal Endpoint parameter, which is the endpoint of the RDS instance for VPCs.
- Log on to the ECS console and verify that the ECS instances that serve as nodes of the managed Kubernetes cluster can access the RDS instance, as shown in the following figure.
- If the ECS instances can access the RDS instance, go to the next step.
- If the ECS instances fail to access the RDS instance, locate the cause and fix the issue:
- The IP addresses of the ECS instances may not have been added to the whitelist of the RDS instance. To resolve this issue, log on to the RDS console, click the target RDS instance on the Instances page, and click Data Security to configure the whitelist. For more information, see Configure a whitelist for an RDS PostgreSQL instance.
- For more information about other causes, see What do I do if I cannot connect an ECS instance to an ApsaraDB for RDS instance?.
After you troubleshoot the connection failure, log on to the ECS console again. Make sure that the ECS instances can access the RDS instance. Then, go to the next step.
- Log on to the OSS console. Find the OSS bucket used by the Swarm cluster and view the endpoints for the classic network and VPCs.
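Connectivity can also be spot-checked from a shell on one of the cluster's ECS nodes. A rough sketch, assuming hypothetical internal endpoints for the RDS instance and the OSS bucket (replace them with the endpoints obtained in the previous steps):

```shell
# Test TCP reachability of the RDS internal (VPC) endpoint on the MySQL port.
# The endpoint below is a hypothetical example.
nc -zv rm-example123.mysql.rds.aliyuncs.com 3306

# Test the OSS VPC (internal) endpoint. Receiving any HTTP status line,
# even 403, proves that the endpoint is reachable over the network.
curl -sI https://oss-cn-hangzhou-internal.aliyuncs.com | head -n 1
```

Run these commands only from an ECS node inside the VPC; internal endpoints are not reachable from the Internet.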
Migrate volumes
Volumes are attached at the cluster level in both Swarm and Kubernetes clusters. Therefore, you can migrate volume configurations when you migrate cluster configurations, and modify the references to volumes when you migrate application configurations. Volumes in the Swarm cluster correspond to persistent volumes (PVs) and persistent volume claims (PVCs) in the Kubernetes cluster.
Swarm supports three types of volumes: Apsara File Storage NAS, Alibaba Cloud disk, and OSS. You can use them in volume mode or by creating a PV or PVC. For more information, see Overview. This topic assumes that you use volumes by creating a PV or PVC.
- Log on to the Container Service - Swarm console. In the left-side navigation pane, click Volume. On the Data Volume List page, select the cluster swarm-piggymetrics-cluster and find volumes of the NAS, disk, and OSS types.
- In the managed Kubernetes cluster that is created, create PVs and PVCs for the three types of volumes, respectively. For more information, see the following topics:
- A disk is a non-shared storage resource provided by Alibaba Cloud and can be attached to only one pod at a time. If the Swarm and Kubernetes clusters write data to the same disk simultaneously, an error occurs. Therefore, you must stop services to migrate disks from the Swarm cluster to the Kubernetes cluster. Switch the Swarm cluster to NAS or OSS volumes before the migration, or migrate disks during off-peak hours when services can be stopped. You must detach a disk from the Swarm cluster before you can attach it to the Kubernetes cluster. For more information, see Detach a cloud disk and View and delete data volumes.
- In the Kubernetes cluster, the name of a PV or PVC cannot contain uppercase letters or underscores (_). If a volume name in the Swarm cluster contains uppercase letters or underscores (_), convert the name in the following ways:
- Change uppercase letters to lowercase letters.
- Change underscores (_) to hyphens (-).
- When you create a PV of the OSS type, set Endpoint to VPC Endpoint because Kubernetes only supports VPCs.
- When you create a PVC, use the name of the corresponding volume as the PVC name but change uppercase letters to lowercase letters and underscores (_) to hyphens (-). When you use Kompose to convert a Swarm Compose file to Kubernetes resource files, Kompose specifies PVC names in the same way.
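To illustrate the naming rules and the VPC endpoint requirement, the following is a minimal PV and PVC sketch for an OSS volume. The Swarm volume name Piggy_Metrics_Data, the bucket name, and the AccessKey placeholders are hypothetical; the name piggy-metrics-data shows the required lowercase and hyphen conversion:

```yaml
# PV for a hypothetical OSS volume named Piggy_Metrics_Data in the Swarm cluster.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: piggy-metrics-data     # uppercase -> lowercase, underscores -> hyphens
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  flexVolume:
    driver: alicloud/oss
    options:
      bucket: piggymetrics-bucket                  # hypothetical bucket name
      url: oss-cn-hangzhou-internal.aliyuncs.com   # VPC (internal) endpoint
      akId: <your-AccessKey-ID>
      akSecret: <your-AccessKey-Secret>
---
# PVC with the same converted name, as Kompose would also generate it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: piggy-metrics-data
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
```

The exact PV options depend on the volume type; see the NAS, disk, and OSS volume topics for the fields that apply in each case.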
Migrate configuration items
In a Container Service for Swarm cluster, you can create a configuration file and set configuration items to manage environment variables for multiple containers in a unified manner. When you deploy an application, you can use the configuration file to replace variables, which start with a dollar sign ($), in the Swarm Compose file with the actual values.
This feature is an enhancement provided by the Container Service - Swarm console. Standard Kubernetes does not provide an exact equivalent, and the configuration files in the Swarm cluster are different from ConfigMaps in the Kubernetes cluster. Therefore, configuration items in the Swarm cluster cannot be automatically migrated. You must manually replace the variables in the Swarm Compose file with the actual values.
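As an illustration of the manual replacement, consider a hypothetical variable DB_HOST that a Swarm configuration file sets to the RDS internal endpoint. The service name, image, and endpoint below are all hypothetical examples:

```yaml
# Before: the Swarm Compose file references a configuration-item variable.
account-service:             # hypothetical service from the PiggyMetrics app
  image: registry.cn-hangzhou.aliyuncs.com/piggymetrics/account-service
  environment:
    - DB_HOST=$DB_HOST       # value supplied by the Swarm configuration file
---
# After: replace the variable with the actual value before converting the file.
account-service:
  image: registry.cn-hangzhou.aliyuncs.com/piggymetrics/account-service
  environment:
    - DB_HOST=rm-example123.mysql.rds.aliyuncs.com   # hypothetical endpoint
```

Perform this replacement for every $-prefixed variable before you convert the Compose file with Kompose.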