ACK lets you change the category of a cloud disk attached to a volume without rebuilding your workload. For example, if your application's storage demands grow, you can upgrade a standard SSD to an Enterprise SSD (ESSD) to get higher IOPS.
Prerequisites
Before you begin, ensure that you have:
- A cluster running Kubernetes 1.20 or later with the Container Storage Interface (CSI) plug-in installed. To upgrade your cluster, see Manually upgrade a cluster.
- storage-operator version 1.26.1-50a1499-aliyun or later installed. By default, storage-operator is installed in the cluster. To verify the version, go to the cluster details page in the ACK console, choose Operations > Add-ons in the left-side navigation pane, and click the Storage tab. For more information, see Manage the storage-operator component.
- (ACK dedicated clusters only) Worker and master RAM (Resource Access Management) roles with permission to call the ModifyDiskSpec operation. This is not required for ACK managed clusters. See Create a custom policy.
Limitations
- Basic disks and ephemeral disks cannot be changed to other disk categories.
- ESSD AutoPL disks cannot be changed to other disk categories.
- Only pay-as-you-go disks can be mounted as volumes.
- The new disk category must be supported by the ECS instance type of the node hosting your pod. For compatibility details, see Overview of instance families.
- When a category change involves a regional ESSD, the volume's affinity settings cannot be modified, and pods using the disk cannot be rescheduled to other zones.
For a complete list of limits, see Limits.
Supported disk category conversions
The following table summarizes supported upgrades and downgrades.
| Source disk category | Target disk categories |
|---|---|
| Ultra disk | Standard SSD, ESSD Entry, PL0/PL1/PL2/PL3 ESSD, ESSD AutoPL disk |
| Standard SSD | PL1/PL2/PL3 ESSD, ESSD AutoPL disk |
| PL0 ESSD | PL1/PL2/PL3 ESSD, ESSD AutoPL disk |
| PL1/PL2/PL3 ESSD (pay-as-you-go) | Change between PL1, PL2, and PL3; or to ESSD AutoPL disk |
| PL1/PL2/PL3 ESSD (subscription) | Upgrade only (low to high): PL1→PL2, PL1→PL3, PL2→PL3; PL1→ESSD AutoPL disk |
Usage notes:
- Only ESSD Entry disks can be attached to instances of universal instance families and the economy (e) instance family.
- The performance level an ESSD can be upgraded to depends on the disk's capacity. If a higher performance level is unavailable, extend the disk first, then upgrade the performance level.
- After changing a disk to a PL3 ESSD, attach the disk to an instance or restart the instance to activate optimal performance. Data reliability is not affected during this period.
- After the disk category change, billing follows the rules of the new disk category.
- When changing to or from an ESSD AutoPL disk, you can enable or disable performance provision after the change (extra charges apply when it is enabled; see Modify the provisioned performance of an ESSD AutoPL disk). Performance burst cannot be configured during the change, but can be enabled or disabled after the change completes; see Enable or disable performance burst for an ESSD AutoPL disk.
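As a quick sanity check before you request a change, the conversion table above can be encoded as a small shell predicate. This is a sketch only: the function name is illustrative, the identifiers follow the desiredDiskType values used later in this topic, the "cloud" (ultra disk) and "cloud_essd_entry" (ESSD Entry) identifiers are assumptions, and subscription ESSDs (upgrade only) are not modeled.

```shell
#!/bin/sh
# Encode the conversion table above as a predicate: can_convert SRC DST
# returns 0 if the change from SRC to DST is listed as supported.
# Sketch only; "cloud" and "cloud_essd_entry" are assumed identifiers,
# and subscription ESSDs (upgrade only) are not covered here.
can_convert() {
  src="$1"; dst="$2"
  case "$src" in
    cloud)                                     # ultra disk
      case "$dst" in
        cloud_ssd|cloud_essd_entry|cloud_essd.PL[0-3]|cloud_auto) return 0 ;;
      esac ;;
    cloud_ssd|cloud_essd.PL0)                  # standard SSD / PL0 ESSD
      case "$dst" in
        cloud_essd.PL[1-3]|cloud_auto) return 0 ;;
      esac ;;
    cloud_essd.PL[1-3])                        # pay-as-you-go PL1/PL2/PL3
      case "$dst" in
        cloud_essd.PL[1-3]|cloud_auto)
          [ "$src" != "$dst" ] && return 0 ;;
      esac ;;
  esac
  return 1
}

can_convert cloud_ssd cloud_auto && echo "allowed"      # prints "allowed"
can_convert cloud_auto cloud_ssd || echo "not allowed"  # prints "not allowed"
```

Note the asymmetry the table implies: ESSD AutoPL is a valid target for most categories, but never a valid source.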
desiredDiskType parameter reference
Use the following values for the desiredDiskType parameter when you create the custom resource in Step 2.
| Value | Disk category |
|---|---|
| cloud_ssd | Standard SSD |
| cloud_essd.PL0 | PL0 ESSD |
| cloud_essd.PL1 | PL1 ESSD |
| cloud_essd.PL2 | PL2 ESSD |
| cloud_essd.PL3 | PL3 ESSD |
| cloud_auto | ESSD AutoPL disk |
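A typo in desiredDiskType is cheaper to catch before the custom resource is created. A minimal sketch that accepts only the values in the table above (the function name is illustrative):

```shell
#!/bin/sh
# Accept only the desiredDiskType values listed in the table above.
# Illustrative sketch; note that a bare "cloud_essd" (no PL suffix)
# is rejected, since it is not a valid value here.
valid_disk_type() {
  case "$1" in
    cloud_ssd|cloud_essd.PL0|cloud_essd.PL1|cloud_essd.PL2|cloud_essd.PL3|cloud_auto)
      return 0 ;;
    *)
      return 1 ;;
  esac
}

valid_disk_type cloud_auto && echo "cloud_auto: ok"        # prints "cloud_auto: ok"
valid_disk_type cloud_essd || echo "cloud_essd: rejected"  # prints "cloud_essd: rejected"
```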
Step 1: Enable disk category changes in storage-operator
Run the following command to update the storage-operator ConfigMap and enable the disk category change feature. The feature is implemented by the storage-controller plug-in.
kubectl patch configmap/storage-operator \
-n kube-system \
--type merge \
-p '{"data":{"storage-controller":"{\"imageRep\":\"acs/storage-controller\",\"imageTag\":\"\",\"install\":\"true\",\"template\":\"/acs/templates/storage-controller/install.yaml\",\"type\":\"deployment\"}"}}'
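The -p payload above embeds a JSON document whose data value is itself an escaped JSON string, which is easy to break when editing by hand. A hedged sketch that writes the payload to a file and validates both layers with python3 before patching (the patch.json filename is illustrative; the kubectl invocation is shown as a comment):

```shell
#!/bin/sh
# Write the patch payload to a file, then confirm that both the outer
# JSON document and the escaped inner string parse before patching.
cat > patch.json <<'EOF'
{"data":{"storage-controller":"{\"imageRep\":\"acs/storage-controller\",\"imageTag\":\"\",\"install\":\"true\",\"template\":\"/acs/templates/storage-controller/install.yaml\",\"type\":\"deployment\"}"}}
EOF
python3 -c '
import json
outer = json.load(open("patch.json"))                      # outer document
inner = json.loads(outer["data"]["storage-controller"])    # escaped inner string
assert inner["install"] == "true", "storage-controller is not enabled"
print("payload OK")
'
# Then apply it:
# kubectl patch configmap/storage-operator -n kube-system --type merge -p "$(cat patch.json)"
```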
Step 2: Change the disk category
To minimize impact on your workloads, perform disk category changes during off-peak hours.
1. Create a StatefulSet
Skip this step if a StatefulSet with a mounted cloud disk already exists in your cluster.
Create a file named StatefulSet.yaml using the following template. This creates a StatefulSet with one pod that has a 40-GiB PL1 ESSD mounted.
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: nginx-diskspec
spec:
selector:
matchLabels:
app: nginx
replicas: 1
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
ports:
- containerPort: 80
volumeMounts:
- name: pvc-disk
mountPath: /data
volumes:
- name: pvc-disk
persistentVolumeClaim:
claimName: disk-pvc
volumeClaimTemplates:
- metadata:
name: pvc-disk
labels:
app: nginx
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "alicloud-disk-essd"
resources:
requests:
storage: 40Gi
Apply the manifest and verify the pod is running:
kubectl create -f StatefulSet.yaml
kubectl get pod -l app=nginx
Expected output:
NAME READY STATUS RESTARTS AGE
nginx-diskspec-0 1/1 Running 0 4m4s
2. Identify the PersistentVolume (PV) to change
Get the PersistentVolumeClaim (PVC) used by the StatefulSet and note the bound PV name in the VOLUME column.
kubectl get pvc pvc-disk-nginx-diskspec-0
Expected output:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
pvc-disk-nginx-diskspec-0 Bound d-uf6ijdcp3aeoi82w**** 40Gi RWO alicloud-disk-essd <unset> 5m6s
Verify the current disk category of the PV:
kubectl get pv d-uf6ijdcp3aeoi82w**** -o=jsonpath='{.metadata.labels}'
Expected output — cloud_essd.PL1 confirms the PV is backed by a PL1 ESSD:
{"csi.alibabacloud.com/disktype":"cloud_essd.PL1"}
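If you want just the category string rather than the whole label map, the JSON can be reduced with python3. A sketch that parses the sample output above (in practice the labels variable would be filled from the kubectl command shown in the comment):

```shell
#!/bin/sh
# Extract the disk category from the PV labels. In practice the JSON
# comes from:  kubectl get pv <pv-name> -o=jsonpath='{.metadata.labels}'
labels='{"csi.alibabacloud.com/disktype":"cloud_essd.PL1"}'
disktype=$(printf '%s' "$labels" \
  | python3 -c 'import json,sys; print(json.load(sys.stdin)["csi.alibabacloud.com/disktype"])')
echo "current disk category: $disktype"   # prints "current disk category: cloud_essd.PL1"
```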
3. Create a custom resource to trigger the category change
Create a file named cr.yaml. Replace pvNames with the PV name from the previous step, and set desiredDiskType to the target disk category (refer to the desiredDiskType parameter reference above).
apiVersion: storage.alibabacloud.com/v1beta1
kind: ContainerStorageOperator
metadata:
name: default
spec:
operationType: DISKUPGRADE
operationParams:
pvNames: "d-uf6ijdcp3aeoi82w****"
desiredDiskType: "cloud_auto"
| Parameter | Description |
|---|---|
| operationType | Set to DISKUPGRADE for disk upgrade or downgrade operations. |
| pvNames | The PV to change. Separate multiple PVs with commas, for example: "disk-1*,disk-2*,disk-3***". |
| desiredDiskType | The target disk category. See the desiredDiskType parameter reference above. |
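When several PVs need the same change, the comma-separated pvNames value can be generated rather than typed by hand. A sketch under the assumption that the PV names below are placeholders for your own disk IDs:

```shell
#!/bin/sh
# Build a comma-separated pvNames value and emit cr.yaml from it.
# The PV names are placeholders; replace them with your own disk IDs.
PVS="d-pv-example-1 d-pv-example-2"
pv_names=$(printf '%s' "$PVS" | tr ' ' ',')
cat > cr.yaml <<EOF
apiVersion: storage.alibabacloud.com/v1beta1
kind: ContainerStorageOperator
metadata:
  name: default
spec:
  operationType: DISKUPGRADE
  operationParams:
    pvNames: "$pv_names"
    desiredDiskType: "cloud_auto"
EOF
grep 'pvNames' cr.yaml
```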
Apply the custom resource:
kubectl create -f cr.yaml
4. Verify the disk category change
Query the custom resource status to check whether the disk category change is complete:
kubectl get ContainerStorageOperator default -o yaml
Expected output when the change is complete:
status:
message: []
process: 100%
status: SUCCESS
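For scripting, the status can be reduced to a single value and polled in a loop. A sketch that parses the expected status block shown above; the equivalent kubectl invocation is left as a comment:

```shell
#!/bin/sh
# In practice, poll a single field instead of the whole object:
#   until [ "$(kubectl get ContainerStorageOperator default \
#       -o jsonpath='{.status.status}')" = SUCCESS ]; do sleep 10; done
# Here the expected status block is parsed locally to show the idea.
status_yaml='status:
  message: []
  process: 100%
  status: SUCCESS'
state=$(printf '%s\n' "$status_yaml" | awk '$1 == "status:" && NF == 2 {print $2}')
[ "$state" = "SUCCESS" ] && echo "disk category change complete"
```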
Confirm the PV label reflects the new disk category:
kubectl get pv d-uf6ijdcp3aeoi82w**** -o=jsonpath='{.metadata.labels}'
Expected output:
{"csi.alibabacloud.com/disktype":"cloud_auto"}