Container Service for Kubernetes (ACK) uses Container Network File System (CNFS) to manage the lifecycle of Object Storage Service (OSS) buckets independently from your cluster. You can create, configure, and mount OSS buckets as persistent volumes (PVs) directly through Kubernetes manifests, without manually provisioning buckets in the OSS console.
Two methods are available:
Method 1: Use CNFS to create a new OSS bucket and mount it to a Deployment and a StatefulSet as a dynamically provisioned volume.
Method 2: Create a CNFS Custom Resource Definition (CRD) that references an existing OSS bucket, then mount it as a statically or dynamically provisioned volume.
Prerequisites
Before you begin, make sure you have:
csi-plugin and csi-provisioner at version 1.24.2-5b34494d-aliyun or later. For upgrade instructions, see Update csi-plugin and csi-provisioner.
storage-operator at version 1.24.95-e2d0756-aliyun or later. For upgrade instructions, see Manage components.
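The minimum-version requirement can be verified with a plain sort -V comparison. The sketch below runs offline: the installed tag is a placeholder, and in a live cluster you would read the real tag from the component's pod image instead.

```shell
# Offline sketch of the minimum-version check. "installed" is a placeholder;
# in a live cluster, read the tag from the csi-plugin pod image.
installed="v1.26.5-abc123-aliyun"   # placeholder, not read from a cluster
required="v1.24.2"
# sort -V orders version strings; if the required version sorts first (or is
# equal), the installed version meets the minimum.
lowest=$(printf '%s\n%s\n' "$installed" "$required" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
  echo "version OK"
else
  echo "upgrade needed"
fi
```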
Limitations
The only supported reclaim policy is Retain. Deleting a CNFS CRD keeps the associated OSS bucket.
Archive and Cold Archive objects must be restored before you can read or write them.
If you enable versioning for a bucket, you cannot configure retention policies or OSS-HDFS for that bucket. To use those features, set enableVersioning to None first.
After versioning is set to enabled, you cannot disable it. You can only suspend it.
The redundancyType and enableVersioning parameters require storage-operator v1.26.2-1de13b6-aliyun or later.
If redundancyType is set to ZRS, the Cold Archive and Deep Cold Archive storage classes are not supported.
Method 1: Create a new OSS bucket with CNFS
This method uses CNFS to create an OSS bucket named cnfs-oss-<clusterid> and mount it to a Deployment and a StatefulSet as a dynamically provisioned volume.
If an existing bucket has the same name as the one you specify, CNFS associates the existing bucket with the ContainerNetworkFileSystem object instead of creating a new one.
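Because the bucket name must be globally unique and follow OSS naming rules (3 to 63 characters; lowercase letters, digits, and hyphens; the first and last characters must be alphanumeric), it can help to validate a candidate name before applying the manifest. The helper below is a hypothetical convenience, not part of CNFS.

```shell
# Hypothetical helper: check a candidate bucket name against OSS naming rules
# (3-63 chars; lowercase letters, digits, hyphens; alphanumeric first/last char).
is_valid_bucket_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$'
}

is_valid_bucket_name "cnfs-oss-c83a1b2" && echo "ok: cnfs-oss-c83a1b2"
is_valid_bucket_name "Bad_Bucket"       || echo "rejected: Bad_Bucket"
```

Note that uniqueness is global across OSS, so a syntactically valid name can still collide with another account's bucket.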
Step 1: Create the CNFS CRD and StorageClass
Create the Secret, ContainerNetworkFileSystem CRD, and StorageClass. Replace <clusterid> with your cluster ID.
```shell
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: oss-secret
stringData:
  akId: "xxxx" # AccessKey ID for mounting the OSS bucket
  akSecret: "xxxx" # AccessKey Secret for mounting the OSS bucket
---
apiVersion: storage.alibabacloud.com/v1beta1
kind: ContainerNetworkFileSystem
metadata:
  name: cnfs-oss-<clusterid> # Set the CNFS CRD name to match the bucket name
spec:
  description: "cnfs-oss"
  type: oss
  reclaimPolicy: Retain # Only Retain is supported. Deleting the CRD keeps the bucket.
  parameters:
    bucketName: cnfs-oss-<clusterid> # Replace <clusterid> with your cluster ID; bucket name must be unique
    encryptType: "AES256" # AES-256 server-side encryption
    storageType: "Standard" # Storage class
    aclType: "private" # Bucket owner and authorized users only
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: alibabacloud-cnfs-oss
parameters:
  containerNetworkFileSystem: cnfs-oss-<clusterid> # Reference to the CNFS CRD
  otherOpts: -o max_stat_cache_size=0 -o allow_other # ossfs mount options: disable stat caching, allow access by other users
  path: /
  # volumeAs: subpath # Uncomment to auto-create a subpath for each PV
  csi.storage.k8s.io/node-publish-secret-name: oss-secret
  csi.storage.k8s.io/node-publish-secret-namespace: default
provisioner: ossplugin.csi.alibabacloud.com
reclaimPolicy: Retain
EOF
```

The following table describes the CNFS CRD parameters.
| Parameter | Description | Required | Default |
|---|---|---|---|
| description | Description of the CNFS file system. | No | — |
| type | Volume type. Set to oss. | Yes | — |
| reclaimPolicy | Reclaim policy. Only Retain is supported. | Yes | — |
| parameters.bucketName | Name of the OSS bucket. If a bucket with this name exists, it is associated with the CNFS CRD. If not, a new bucket is created. | Yes | — |
| parameters.storageType | Storage class of the OSS bucket. Valid values: Standard, IA (Infrequent Access), Archive, ColdArchive. Archive and Cold Archive objects must be restored before read/write. | No | Standard |
| parameters.redundancyType | Storage redundancy type. LRS stores copies within a single zone; ZRS (zone-redundant storage) stores copies across multiple zones. Requires storage-operator v1.26.2-1de13b6-aliyun or later. ZRS does not support Cold Archive or Deep Cold Archive. | No | ZRS |
| parameters.encryptType | Server-side encryption algorithm. Valid values: None, AES256, SM4. | No | — |
| parameters.aclType | Access control list (ACL) type. Valid values: private (bucket owner and authorized users only), public-read (all users can read; only the owner and authorized users can write), public-read-write (all users, including anonymous users, can read and write — use with caution). | No | private |
| parameters.enableVersioning | Versioning status. Valid values: enabled, suspended, None. Once enabled, versioning cannot be disabled — only suspended. Enabling versioning prevents configuration of retention policies or OSS-HDFS. Requires storage-operator v1.26.2-1de13b6-aliyun or later. For billing details, see Lifecycle. | No | None |
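As a hypothetical example of changing one of these parameters after creation, versioning could be turned on by merge-patching spec.parameters on the CNFS CRD. The snippet only builds and prints the patch so it runs offline; the kubectl invocation is commented out because it needs a live cluster, and the CRD name is a placeholder. Whether a given parameter can be updated in place should be verified against your storage-operator version.

```shell
# Sketch: merge patch that sets enableVersioning on an existing CNFS CRD.
# Requires storage-operator v1.26.2-1de13b6-aliyun or later (see table above).
patch='{"spec":{"parameters":{"enableVersioning":"enabled"}}}'
echo "$patch"
# To apply against a live cluster (CRD name is a placeholder):
# kubectl patch cnfs cnfs-oss-<clusterid> --type merge -p "$patch"
```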
Step 2: Create a PVC and sample workloads
Create a persistent volume claim (PVC), a Deployment, and a StatefulSet to verify the bucket mounts correctly.
```shell
cat << EOF | kubectl apply -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cnfs-oss-pvc
spec:
  accessModes:
    - ReadOnlyMany
  storageClassName: alibabacloud-cnfs-oss
  resources:
    requests:
      storage: 100Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cnfs-oss-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - mountPath: "/data"
          name: cnfs-oss-pvc
      volumes:
      - name: cnfs-oss-pvc
        persistentVolumeClaim:
          claimName: cnfs-oss-pvc
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cnfs-oss-sts
  labels:
    app: nginx
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - mountPath: "/data"
          name: www
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadOnlyMany" ]
      storageClassName: "alibabacloud-cnfs-oss"
      resources:
        requests:
          storage: 100Gi
EOF
```

Step 3: Verify the bucket and workloads
Run the following commands to confirm the OSS bucket and pods are ready.
Verify the CNFS CRD:
```shell
kubectl get cnfs/cnfs-oss-<clusterid> -o yaml
```

Expected output:

```yaml
apiVersion: storage.alibabacloud.com/v1beta1
kind: ContainerNetworkFileSystem
...
status:
  conditions:
  - lastProbeTime: "2022-09-18 15:02:39"
    reason: The oss bucket is complete initialization.
    status: Ready
  fsAttributes:
    accessGroupName: DEFAULT_VPC_GROUP_NAME
    aclType: private
    bucketName: cnfs-oss-****
    encryptType: AES256
    endPoint:
      extranet: oss-****.aliyuncs.com
      internal: oss-****-internal.aliyuncs.com
    regionId: ****
    storageType: Standard
  status: Available
```

The status.status field shows Available when the bucket is ready. The following table describes the status fields.
| Field | Description |
|---|---|
| status | CNFS CRD status. Valid values: Pending, Creating, Initialization, Available, Unavailable (recoverable), Fatal (not recoverable), Terminating. |
| conditions.lastProbeTime | Timestamp of the last status probe. |
| conditions.reason | Reason for the current status. |
| conditions.status | Readiness status. Ready means the CRD is available for use; NotReady means it is not. |
| fsAttributes.accessGroupName | Permission group for the mount point. DEFAULT_VPC_GROUP_NAME is the default group for virtual private clouds (VPCs). |
| fsAttributes.encryptType | Encryption algorithm in use: None, AES256, or SM4. |
| fsAttributes.regionId | Region where your ACK cluster resides. |
| fsAttributes.storageType | Storage class of the bucket: Standard, IA, Archive, or ColdArchive. |
| fsAttributes.redundancyType | Storage redundancy type: LRS (locally redundant storage) or ZRS (zone-redundant storage). |
| fsAttributes.aclType | ACL type of the bucket. |
| fsAttributes.endPoint | Endpoints: extranet (public) and internal (internal network). |
| fsAttributes.enableVersioning | Versioning status: enabled, suspended, or None. |
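In scripts, the readiness check described in the table above can be reduced to reading status.status with a JSONPath expression. The sketch below uses a placeholder value so it runs offline; in a cluster you would use the commented kubectl line instead.

```shell
# Read status.status from the CNFS CRD and gate on Available.
# Live cluster: status=$(kubectl get cnfs/cnfs-oss-<clusterid> -o jsonpath='{.status.status}')
status="Available"   # placeholder standing in for the kubectl output above
if [ "$status" = "Available" ]; then
  echo "CNFS bucket ready"
else
  echo "CNFS bucket not ready: $status" >&2
fi
```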
Verify the pods:
```shell
kubectl get pod
```

Expected output:

```
NAME                                   READY   STATUS    RESTARTS   AGE
cnfs-oss-deployment-5864fd8d98-4****   1/1     Running   0          2m21s
cnfs-oss-sts-0                         1/1     Running   0          2m21s
cnfs-oss-sts-1                         1/1     Running   0          2m16s
```

All pods show Running, confirming the Deployment and StatefulSet have successfully mounted the OSS bucket.
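If you want to script this check rather than eyeball it, the Running pods can be counted from the kubectl get pod output. The sample below is captured output embedded as a heredoc, so the snippet runs offline.

```shell
# Count Running pods from (sample) `kubectl get pod` output.
# In a live cluster, replace the heredoc with: sample=$(kubectl get pod --no-headers)
sample=$(cat <<'EOT'
cnfs-oss-deployment-5864fd8d98-4****   1/1   Running   0   2m21s
cnfs-oss-sts-0                         1/1   Running   0   2m21s
cnfs-oss-sts-1                         1/1   Running   0   2m16s
EOT
)
# The STATUS column is field 3 in the default output.
running=$(printf '%s\n' "$sample" | awk '$3 == "Running"' | wc -l)
echo "running pods: $running"
```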
Method 2: Use an existing OSS bucket
This method creates a CNFS CRD that references an existing OSS bucket by name, without creating a new bucket.
Step 1: Create the CNFS CRD
```shell
cat <<EOF | kubectl apply -f -
apiVersion: storage.alibabacloud.com/v1beta1
kind: ContainerNetworkFileSystem
metadata:
  name: cnfs-oss-exist-bucket-name
spec:
  description: "cnfs-oss"
  type: oss
  reclaimPolicy: Retain
  parameters:
    bucketName: bucket-name # Name of your existing OSS bucket
EOF
```

Step 2: Verify the CNFS CRD
```shell
kubectl get cnfs/cnfs-oss-exist-bucket-name -o yaml
```

Expected output:

```yaml
apiVersion: storage.alibabacloud.com/v1beta1
kind: ContainerNetworkFileSystem
...
status:
  conditions:
  - lastProbeTime: "2022-09-14 17:00:21"
    reason: The oss bucket is complete initialization.
    status: Ready
  fsAttributes:
    accessGroupName: DEFAULT_VPC_GROUP_NAME
    aclType: private
    bucketName: exist-bucket-name
    encryptType: AES256
    endPoint:
      extranet: oss-****.aliyuncs.com
      internal: oss-****-internal.aliyuncs.com
    regionId: ****
    storageType: Standard
  status: Available
```

Step 3: Mount the bucket to a workload
Mount the OSS bucket to your Deployment as a statically or dynamically provisioned volume. Follow the same steps as Step 2: Create a PVC and sample workloads in Method 1, referencing the CNFS CRD created in this method.
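For static provisioning, a PV can reference the CNFS CRD directly through the OSS CSI driver's volume attributes. The manifest below is a hedged sketch assembled from fields used elsewhere in this topic (containerNetworkFileSystem, path, the oss-secret Secret, and the ossplugin.csi.alibabacloud.com driver); verify the exact volumeAttributes names against the OSS CSI driver documentation before applying it.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cnfs-oss-static-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: ossplugin.csi.alibabacloud.com
    volumeHandle: cnfs-oss-static-pv   # conventionally matches the PV name
    nodePublishSecretRef:
      name: oss-secret                 # the Secret created in Method 1
      namespace: default
    volumeAttributes:
      containerNetworkFileSystem: cnfs-oss-exist-bucket-name  # the CNFS CRD above
      path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cnfs-oss-static-pvc
spec:
  accessModes:
    - ReadOnlyMany
  storageClassName: ""                 # bind to the PV above, not a StorageClass
  volumeName: cnfs-oss-static-pv
  resources:
    requests:
      storage: 100Gi
```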
What's next
Configure lifecycle rules to automatically transition or expire objects in the bucket.
Review billing items for different storage classes and redundancy types.