
View the performance monitoring data of a CPFS file system

Last Updated: Nov 21, 2025

The performance monitoring feature lets you view a real-time performance overview of a file system, including metrics such as read/write input/output operations per second (IOPS) and read/write throughput. This topic describes how to view the performance monitoring data of a Cloud Parallel File Storage (CPFS) file system.


View data in the Apsara File Storage NAS console

  1. Log on to the NAS console.

  2. In the left-side navigation pane, choose Monitoring Audit > Performance Monitoring.

  3. In the top navigation bar, select a region.

  4. On the Performance Monitoring page, perform the following steps to view the performance monitoring details of a specific file system.

    1. From the File System Type drop-down list, select CPFS.

    2. Select the ID of the file system from the File System ID drop-down list.

    3. Select a time range, such as 1 Hour, 1 Day, 7 Days, 14 Days, Current Month, or Last Month. You can also specify a custom time range. The time range cannot exceed 30 days.

      The Performance Monitoring page displays charts for metrics related to IOPS, throughput, latency, and metadata queries per second (QPS).

      If you access the file system using the NFS protocol, the page displays performance monitoring charts for NFS.

      Note
      • If a chart displays No Data, it indicates that the file system has not received requests from clients or that the corresponding service is not enabled.

      • To generate monitoring data for write throughput, run the following fio command on the client where the CPFS file system is mounted. The example assumes that the file system is mounted to the /mnt directory. For more information about file system performance tests, see Performance Testing.

        fio -direct=1 -ioengine=libaio -iodepth=1 -rw=write -bs=1m -size=1G -numjobs=256 -runtime=600 -time_based=1 -group_reporting -directory="/mnt" -name=Seq_Write_Testing

View data in the CloudMonitor console

  1. Log on to the CloudMonitor console.

  2. In the navigation pane on the left, choose Cloud Resource Monitoring > Cloud Service Monitoring.

  3. On the Cloud Service Monitoring page, enter CPFS in the search box, and then click Cloud Parallel File Storage (CPFS).

  4. On the Cloud Parallel File Storage (CPFS) page, select a region. Find the file system that you want to manage and click Monitoring Charts in the Actions column.

  5. On the Monitoring Charts page, click the File System Monitoring tab to view the performance monitoring details of the file system.

View data using CloudMonitor APIs

You can also query CPFS monitoring data by calling CloudMonitor API operations.

The following parameters are used in requests to query CPFS monitoring data.

  • Namespace: the data namespace of the Alibaba Cloud service. For CPFS, the value is acs_nas.

  • MetricName: the name of the metric. Valid values:

    • File system metrics: IopsRead, IopsWrite, LatencyRead, LatencyWrite, QpsMeta, ThruputRead, ThruputWrite

    • NFS protocol service metrics: NFSReadThroughput, NFSWriteThroughput, NFSReadIOPS, NFSWriteIOPS

  • Dimensions: a map that specifies the resource whose monitoring data you want to query. Format: {"userId":"xxxxxx","fileSystemId":"cpfs-xxxxx"}.

Note

The value of Dimensions must be a JSON string that represents the map object, and the keys must appear in the order shown: userId first, then fileSystemId.

Related operations

  • You can configure alert rules to be notified of abnormal data in a timely manner, so that technical personnel can respond to exceptions and restore services quickly. For more information, see Configure a basic alert rule.

  • You can also create an application group for multiple file systems to manage alert rules and view monitoring data by group. This simplifies management and improves monitoring efficiency. For more information, see Create an application group.