This topic explains how to mount an Alibaba Cloud Network Attached Storage (NAS) volume to pods in a Container Service for Kubernetes (ACK) cluster using static provisioning. Static provisioning lets you connect pods to an existing NAS file system for persistent, shared storage across multiple pods simultaneously.
Prerequisites
Before you begin, make sure you have:
- An ACK Serverless cluster. For more information, see Create an ACK Serverless cluster.
- A NAS file system. For more information, see Create a file system.
- A NAS mount target in the same Virtual Private Cloud (VPC) as the cluster. For more information, see Manage mount targets.
- kubectl connected to the cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
To encrypt data in the NAS volume, configure the encryption type when you create the NAS file system — not afterward.
Usage notes
| Constraint | Detail |
|---|---|
| Extreme NAS path | The path field must be a subdirectory of /share (for example, /share/path1). General-purpose NAS defaults to /. |
| Concurrent pod writes | A NAS file system can be mounted to multiple pods simultaneously. If multiple pods write to the same path, your application must handle data synchronization. |
| Root directory permissions | The permissions, owner, and group of the / directory of a NAS file system cannot be modified. |
| securityContext.fsGroup | Setting this parameter causes kubelet to run chmod or chown after the volume is mounted, which increases mount time. For a workaround, see The mount time of a NAS volume is prolonged. |
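To illustrate the securityContext.fsGroup note above, the following sketch shows where the field sits in a pod spec. This is an assumed example, not part of the original procedure; the pod and container names are placeholders.

```yaml
# Sketch only: setting fsGroup here is what makes kubelet recursively
# chown/chmod the mounted NAS volume after mounting, which can be slow
# for file systems that contain many files.
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-example        # placeholder name
spec:
  securityContext:
    fsGroup: 1000              # triggers the ownership change on the volume
  containers:
    - name: app                # placeholder container
      image: nginx
      volumeMounts:
        - name: nas-volume
          mountPath: /data
  volumes:
    - name: nas-volume
      persistentVolumeClaim:
        claimName: pvc-nas
```

If mount time matters, avoid fsGroup on NAS volumes and manage permissions inside the file system instead.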
Use cases
- Applications with high disk I/O requirements
- File sharing across multiple hosts, for example using a NAS volume as a shared file server
Mount a NAS volume using the console
Step 1: Create a Persistent Volume (PV)
1. Log on to the ACK console and click Clusters in the left navigation pane.
2. On the Clusters page, click the name of the target cluster or click Details in the Actions column.
3. In the left navigation pane of the cluster details page, choose Volumes > Persistent Volumes.
4. On the Persistent Volumes page, click Create.
5. In the Create PV dialog box, configure the parameters.

   For Extreme NAS, Mount Path must start with /share, for example /share/data.

   | Parameter | Description | Required | Default |
   |---|---|---|---|
   | PV Type | Select NAS. | Yes | — |
   | Name | Name of the PV. Must be unique within the cluster. Example: pv-nas. | Yes | — |
   | Capacity | Capacity allocated to the PV. NAS file systems have no inherent capacity limit; this value sets the PV record, not a NAS quota. | Yes | — |
   | Access Mode | Select ReadWriteMany or ReadWriteOnce. | Yes | ReadWriteMany |
   | Mount Target Domain Name | The mount address for the cluster to access the NAS file system. Select Select Mount Target or specify a Custom domain name. | Yes | — |
   | Mount Path (Advanced, optional) | Subdirectory in the NAS file system to mount. If the directory does not exist, it is created automatically. | No | / (General-purpose NAS) or /share (Extreme NAS) |
   | Reclaim Policy | When the PVC is deleted, the PV and NAS file system are retained and must be deleted manually. | No | Retain |
   | Mount Options | NFS mount parameters, including the NFS protocol version. Use NFS v3; Extreme NAS supports only v3. For details, see NFS protocol. | No | — |
   | Label | Labels to add to the PV. | No | — |

6. Click OK.
Step 2: Create a Persistent Volume Claim (PVC)
1. In the left navigation pane of the cluster details page, choose Volumes > Persistent Volume Claims.
2. On the Persistent Volume Claims page, click Create.
3. In the Create PVC dialog box, configure the parameters.

   If no PV exists yet, set Allocation Mode to Create Volume to create a PV inline.

   | Parameter | Description | Required |
   |---|---|---|
   | PVC Type | Select NAS. | Yes |
   | Name | Name of the PVC. Must be unique within the cluster. | Yes |
   | Allocation Mode | Select Existing Volumes to bind to the PV created in Step 1. | Yes |
   | Existing Volumes | Click Select PV, find the target PV, and click Select in the Actions column. | Yes |
   | Capacity | Capacity claimed by the PVC. Cannot exceed the capacity of the bound PV. | Yes |

4. Click OK. After the PVC is created, its status changes to Bound, confirming it is linked to the PV.
Step 3: Create an application
1. In the left navigation pane of the cluster details page, choose Workloads > Deployments.
2. On the Deployments page, click Create From Image.
3. Configure the key parameters below, then click Create. For all other parameters, see Create a stateless application using a Deployment.

   | Section | Parameter | Example |
   |---|---|---|
   | Basic Information | Name | nas-test |
   | Basic Information | Replicas | 2 |
   | Container | Image Name | anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6 |
   | Container | Required Resources | 0.25 Core, 512 MiB |
   | Volume | Mount Source | Select the PVC created in Step 2, for example pvc-nas. |
   | Volume | Container Path | /data |

4. Verify the deployment status: on the Deployments page, click the application name, then confirm that all pods on the Pods tab are in the Running state.
Mount a NAS volume using kubectl
All three YAML files below use the NAS Container Storage Interface (CSI) driver (nasplugin.csi.alibabacloud.com) to mount the NAS file system into pods.
Step 1: Create a PV
Create pv-nas.yaml with the following content:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nas
  labels:
    alicloud-pvname: pv-nas
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  csi:
    driver: nasplugin.csi.alibabacloud.com
    volumeHandle: pv-nas
    volumeAttributes:
      server: "2564f4****-ysu87.cn-shenzhen.nas.aliyuncs.com"
      path: "/csi"
  mountOptions:
    - nolock,tcp,noresvport
    - vers=3
```

Then apply it:

```shell
kubectl create -f pv-nas.yaml
```
| Parameter | Description | Required |
|---|---|---|
| name | Name of the PV. | Yes |
| labels | Labels used by the PVC selector to bind to this PV. | No |
| storage | Capacity allocated to the PV. | Yes |
| accessModes | Access mode for the volume. | Yes |
| driver | The NAS CSI driver identifier: nasplugin.csi.alibabacloud.com. | Yes |
| volumeHandle | Unique ID for this PV. Must be unique across all PVs in the cluster. | Yes |
| server | Domain name of the NAS mount target. | Yes |
| path | Subdirectory of the NAS file system to mount. For Extreme NAS, must be a subdirectory of /share. | Yes |
| vers | NFS protocol version. Use 3; Extreme NAS supports only v3. | Yes |
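For Extreme NAS, the path constraint in the table above changes the volumeAttributes. The following sketch shows only the fields that would differ from the general-purpose example; the mount target domain is a placeholder, not a real address.

```yaml
# Extreme NAS sketch (assumed values): path must be a subdirectory
# of /share, and only NFSv3 is supported.
csi:
  driver: nasplugin.csi.alibabacloud.com
  volumeHandle: pv-nas
  volumeAttributes:
    server: "<extreme-nas-mount-target-domain>"   # placeholder
    path: "/share/csi"                            # must be under /share
mountOptions:
  - nolock,tcp,noresvport
  - vers=3                                        # Extreme NAS supports only NFSv3
```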
Step 2: Create a PVC
Create pvc-nas.yaml with the following content. The selector field uses the label from the PV to bind this PVC to the specific PV created in Step 1.

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-nas
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      alicloud-pvname: pv-nas
```

Then apply it:

```shell
kubectl create -f pvc-nas.yaml
```
| Parameter | Description |
|---|---|
| name | Name of the PVC. |
| accessModes | Must match the access mode of the PV. |
| storage | Capacity requested. Cannot exceed the PV capacity. |
| matchLabels | Label selector that binds this PVC to the target PV. |
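One caveat worth noting: in a cluster that has a default StorageClass, a PVC that omits storageClassName can be provisioned dynamically instead of binding to the static PV. If that applies to your cluster, a common guard (an assumption on our part, not part of the original example) is to pin the claim to static binding:

```yaml
# Setting storageClassName to an empty string disables dynamic
# provisioning for this claim, so it can only bind to an existing PV.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-nas
spec:
  storageClassName: ""        # force static binding
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      alicloud-pvname: pv-nas
```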
Step 3: Create a Deployment
Create nas.yaml with the following content. This creates a Deployment named nas-static with two replicas, each mounting the PVC at /data.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nas-static
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest
          ports:
            - containerPort: 80
          volumeMounts:
            - name: pvc-nas
              mountPath: "/data"
      volumes:
        - name: pvc-nas
          persistentVolumeClaim:
            claimName: pvc-nas
```

Then apply it:

```shell
kubectl create -f nas.yaml
```
| Parameter | Description |
|---|---|
| mountPath | Path inside the container where the NAS volume is mounted. |
| claimName | Name of the PVC to bind. Must match the PVC created in Step 2. |
Verify that the pods are running:

```shell
kubectl get pod
```

Expected output:

```
NAME                          READY   STATUS    RESTARTS   AGE
nas-static-5b5cdb85f6-n****   1/1     Running   0          32s
nas-static-c5bb4746c-4****    1/1     Running   0          32s
```
Verify persistence
This procedure confirms that data written to the NAS volume survives pod deletion and recreation.
1. Run the following command to view the names of the pods for the deployed application:

   ```shell
   kubectl get pod
   ```

   Expected output:

   ```
   NAME                          READY   STATUS    RESTARTS   AGE
   nas-static-5b5cdb85f6-n****   1/1     Running   0          32s
   nas-static-c5bb4746c-4****    1/1     Running   0          32s
   ```

2. Confirm that /data is initially empty in one of the pods:

   ```shell
   kubectl exec nas-static-5b5cdb85f6-n**** -- ls /data
   ```

   No output is returned, confirming that no files exist yet.

3. Create a test file in /data:

   ```shell
   kubectl exec nas-static-5b5cdb85f6-n**** -- touch /data/nas
   ```

4. Verify that the file was created:

   ```shell
   kubectl exec nas-static-5b5cdb85f6-n**** -- ls /data
   ```

   Expected output:

   ```
   nas
   ```

5. Delete the pod:

   ```shell
   kubectl delete pod nas-static-5b5cdb85f6-n****
   ```

6. In a separate terminal, watch Kubernetes recreate the pod:

   ```shell
   kubectl get pod -w -l app=nginx
   ```

7. After the new pod reaches the Running state, get its name:

   ```shell
   kubectl get pod
   ```

8. Verify that the file still exists in the recreated pod, using the new pod's name:

   ```shell
   kubectl exec nas-static-5b5cdb85f6-n**** -- ls /data
   ```

   Expected output:

   ```
   nas
   ```

   The nas file is present, confirming that data in the NAS volume persists across pod restarts.
Verify shared storage
This procedure confirms that data written to the NAS volume from one pod is immediately visible in all other pods mounting the same volume.
1. View the names of the pods where the application is deployed, then check the files in /data of both pods:

   ```shell
   kubectl get pod
   ```

   Expected output:

   ```
   NAME                          READY   STATUS    RESTARTS   AGE
   nas-static-5b5cdb85f6-n****   1/1     Running   0          32s
   nas-static-c5bb4746c-4****    1/1     Running   0          32s
   ```

   ```shell
   kubectl exec nas-static-5b5cdb85f6-n**** -- ls /data
   kubectl exec nas-static-c5bb4746c-4**** -- ls /data
   ```

2. Create a file in /data of one pod:

   ```shell
   kubectl exec nas-static-5b5cdb85f6-n**** -- touch /data/nas
   ```

3. Verify that the file appears in both pods:

   ```shell
   kubectl exec nas-static-5b5cdb85f6-n**** -- ls /data
   ```

   Expected output:

   ```
   nas
   ```

   ```shell
   kubectl exec nas-static-c5bb4746c-4**** -- ls /data
   ```

   Expected output:

   ```
   nas
   ```

   The same file is visible in both pods, confirming that they share the same NAS volume.