
Cloud Parallel File Storage: ApplyDataFlowAutoRefresh

Last Updated: Dec 03, 2025

Adds AutoRefresh configurations to a dataflow.

Operation description

  • This operation is available only to Cloud Parallel File Storage (CPFS) file systems.

  • Only CPFS V2.2.0 and later support data flows. You can view the version information on the file system details page in the console.

  • You can add AutoRefresh configurations only to the dataflows that are in the Running state.

  • You can add a maximum of five AutoRefresh configurations to a dataflow.

  • It generally takes 2 to 5 minutes to create an AutoRefresh configuration. You can call the DescribeDataFlows operation to query the dataflow status.

  • AutoRefresh depends on the object modification events collected by EventBridge from the source OSS bucket. You must first activate EventBridge.

    Note: The event buses and event rules created for CPFS in the EventBridge console contain the "Create for cpfs auto refresh" description. Do not modify or delete these event buses and event rules. Otherwise, AutoRefresh cannot work properly.

  • The AutoRefresh configuration applies only to the prefix specified by the RefreshPath parameter. When you add an AutoRefresh configuration for a prefix of a CPFS dataflow, an event bus is created on the user side and an event rule is created for the prefix of the source OSS bucket. When an object in the prefix of the source OSS bucket is modified, an OSS event is generated in the EventBridge console and processed by the CPFS dataflow.

  • After AutoRefresh is configured, if the data in the source OSS bucket is updated, the updated metadata is automatically synchronized to the CPFS file system. You can load the updated data when you access files, or run a data flow task to load the updated data.

  • AutoRefreshInterval is the interval at which CPFS checks whether data in the prefix of the source OSS bucket has been updated. If data has been updated, CPFS runs an AutoRefresh task. If object modification events in the source OSS bucket are generated faster than the CPFS dataflow can process them, AutoRefresh tasks accumulate, metadata updates are delayed, and the dataflow status becomes Misconfigured. To resolve these issues, increase the dataflow specifications or reduce the frequency of object modifications.
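Because creating an AutoRefresh configuration generally takes 2 to 5 minutes, a client typically polls the DescribeDataFlows operation until the dataflow returns to the Running state. The following is a minimal polling sketch, assuming a caller-supplied `fetch_status` callable that wraps DescribeDataFlows and returns the dataflow status string (the wrapper itself is hypothetical, not part of this API):

```python
import time

def wait_for_dataflow_status(fetch_status, target="Running",
                             timeout_s=600, interval_s=15):
    """Poll a status-fetching callable until the dataflow reaches the
    target state or the timeout expires.

    fetch_status: caller-supplied function (for example, a thin wrapper
    around the DescribeDataFlows operation) returning the status string.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = fetch_status()
        if status == target:
            return status
        if status == "Misconfigured":
            # Per the note above, this usually means AutoRefresh tasks
            # have accumulated beyond the dataflow's processing capacity.
            raise RuntimeError("dataflow entered the Misconfigured state")
        time.sleep(interval_s)
    raise TimeoutError(f"dataflow did not reach {target!r} within {timeout_s}s")
```

In production you would keep the default 15-second interval; the short interval in tests only avoids waiting.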

Debugging

You can call this operation directly in OpenAPI Explorer, which saves you the trouble of calculating signatures. After the call succeeds, OpenAPI Explorer can automatically generate SDK code samples.

Authorization information

The following table shows the authorization information for this operation. You can use the Operation value in the Action element of a policy to grant a RAM user or RAM role the permissions to call this operation. Description:

  • Operation: the value that you use in the Action element to specify the operation on a resource.
  • Access level: the access level of the operation. Valid levels: read, write, and list.
  • Resource type: the type of the resource on which you can authorize the RAM user or RAM role to perform the operation. Take note of the following items:
    • Required resource types are marked with an asterisk (*).
    • If the permissions cannot be granted at the resource level, All Resources is displayed in the Resource type column.
  • Condition key: the condition key defined by the cloud service.
  • Associated operation: other operations that the RAM user or RAM role must be authorized to perform before it can call this operation.
Operation: nas:ApplyDataFlowAutoRefresh
Access level: update
Resource type: *DataFlow
  acs:nas:{#regionId}:{#accountId}:filesystem/{#fileSystemId}
Condition key: none
Associated operation: none
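Based on the authorization information above, a RAM policy that grants only this operation might look as follows. The region, account ID, and file system ID shown are placeholders that you would replace with your own values:

```json
{
  "Version": "1",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "nas:ApplyDataFlowAutoRefresh",
      "Resource": "acs:nas:cn-hangzhou:123456789012****:filesystem/cpfs-099394bd928c****"
    }
  ]
}
```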

Request parameters

FileSystemId (string, required)
The ID of the file system.
Example: cpfs-099394bd928c****

DataFlowId (string, required)
The ID of the dataflow.
Example: df-194433a5be31****

AutoRefreshs (array<object>, required)
The AutoRefresh configurations. Each element is an object with the following field:

RefreshPath (string, required)
The AutoRefresh directory. CPFS automatically checks whether the source data in only this directory is updated and imports the updated data.
Limits:
  • The directory must be 2 to 1,024 characters in length.
  • The directory must be encoded in UTF-8.
  • The directory must start and end with a forward slash (/).
Note: The directory must be an existing directory in the CPFS file system and must be in a fileset where the dataflow is enabled.
Example: /prefix1/prefix2/

AutoRefreshPolicy (string, optional)
The AutoRefresh policy. Updated data in the source storage is imported into the CPFS file system based on this policy. Valid values:
  • None (default): Updated data in the source storage is not automatically imported into the CPFS file system. You can run a dataflow task to import the updated data from the source storage.
  • ImportChanged: Updated data in the source storage is automatically imported into the CPFS file system.
Example: None

AutoRefreshInterval (long, optional)
The AutoRefresh interval. CPFS checks whether data in the directory is updated at this interval. If data is updated, CPFS starts an AutoRefresh task. Unit: minutes.
Valid values: 10 to 525600. Default value: 10.
Example: 10
DryRun (boolean, optional)
Specifies whether to perform a dry run. During the dry run, the system checks whether the request parameters are valid and whether the requested resources are available. No AutoRefresh configuration is added and no fee is incurred.
Valid values:
  • true: performs only a dry run. The system checks the required parameters, request syntax, limits, and available NAS resources. If the request fails the dry run, an error message is returned. If the request passes the dry run, the HTTP status code 200 is returned.
  • false (default): performs a dry run and then sends the actual request. If the request passes the dry run, the AutoRefresh configuration is added.
Example: false

ClientToken (string, optional)
The client token that is used to ensure the idempotence of the request. You can use the client to generate the token, but you must make sure that it is unique among different requests.
The token can contain only ASCII characters and cannot exceed 64 characters in length. For more information, see How do I ensure the idempotence?
Note: If you do not specify this parameter, the system automatically uses the request ID as the client token. The value of RequestId may be different for each API request.
Example: 123e4567-e89b-12d3-a456-42665544****
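A client can check the documented RefreshPath limits locally before making a network call, and generate its own ClientToken. The sketch below assembles the documented parameter names as a plain dict; the actual SDK request object may be shaped differently, so treat this as an illustration of the constraints rather than the real SDK API:

```python
import uuid

def validate_refresh_path(path: str) -> None:
    """Check the documented RefreshPath limits before sending the request."""
    if not (2 <= len(path) <= 1024):
        raise ValueError("RefreshPath must be 2 to 1,024 characters in length")
    if not (path.startswith("/") and path.endswith("/")):
        raise ValueError("RefreshPath must start and end with a forward slash")

def build_request(file_system_id: str, data_flow_id: str,
                  refresh_path: str) -> dict:
    """Assemble the documented parameters as a plain dict (illustrative)."""
    validate_refresh_path(refresh_path)
    return {
        "FileSystemId": file_system_id,
        "DataFlowId": data_flow_id,
        "AutoRefreshs": [{"RefreshPath": refresh_path}],
        "AutoRefreshPolicy": "ImportChanged",  # or "None" (the default)
        "AutoRefreshInterval": 10,             # minutes; valid range: 10-525600
        "DryRun": False,
        "ClientToken": str(uuid.uuid4()),      # unique ASCII token, <= 64 chars
    }
```

Generating the ClientToken with `uuid.uuid4()` satisfies both documented constraints: the value is ASCII and 36 characters long.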

Response parameters

RequestId (string)
The request ID.
Example: 98696EF0-1607-4E9D-B01D-F20930B6****

Examples

Sample success responses

JSON format

{
  "RequestId": "98696EF0-1607-4E9D-B01D-F20930B6****"
}
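The success response carries only the request ID. It is worth extracting and logging, because this is the value needed to trace a specific call when troubleshooting:

```python
import json

# Sample success response body from the documentation.
payload = '{"RequestId": "98696EF0-1607-4E9D-B01D-F20930B6****"}'
request_id = json.loads(payload)["RequestId"]  # log this for troubleshooting
```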

Error codes

HTTP status code, error code, and error message:

  • 400 IllegalCharacters: The parameter contains illegal characters.
  • 400 MissingFileSystemId: FileSystemId is mandatory for this action.
  • 400 MissingDataFlowId: DataFlowId is mandatory for this action.
  • 403 OperationDenied.InvalidState: The operation is not permitted when the status is processing.
  • 403 InvalidFileSystem.AlreadyExisted: The specified file system already exists.
  • 403 OperationDenied.DependencyViolation: The operation is denied due to dependency violation.
  • 403 OperationDenied.NestedDir: The operation is denied due to nested directory.
  • 403 OperationDenied.ConflictOperation: The operation is denied due to a conflict with an ongoing operation.
  • 403 OperationDenied.DataFlowNotSupported: The operation is not supported.
  • 404 InvalidFileSystem.NotFound: The specified file system does not exist.
  • 404 InvalidDataFlow.NotFound: The specified data flow does not exist.
  • 404 InvalidRefreshPath.InvalidParameter: Refresh path is invalid.
  • 404 InvalidRefreshPath.NotFound: Refresh path does not exist.
  • 404 InvalidRefreshPolicy.InvalidParameter: Refresh policy is invalid.
  • 404 InvalidRefreshInterval.OutOfBounds: Refresh interval is out of bounds.
  • 404 InvalidRefreshPath.AlreadyExist: The refresh path already exists.
  • 404 InvalidRefreshPath.TooManyPaths: The number of refresh paths exceeds the limit.
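Most of the codes above indicate a request that must be corrected before retrying, but a few describe a temporary state. The triage helper below is a sketch; the split between transient and permanent codes is an assumption based on the error descriptions, not part of the API contract:

```python
# Codes that appear to describe a temporary state rather than a bad
# request (assumption based on the documented error descriptions).
TRANSIENT_CODES = {
    "OperationDenied.InvalidState",       # dataflow is still processing
    "OperationDenied.ConflictOperation",  # another operation is in flight
}

def should_retry(error_code: str) -> bool:
    """Return True if the call may succeed when retried later."""
    return error_code in TRANSIENT_CODES
```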

For a list of error codes, visit the Service error codes.

Change history

2024-09-05: The error code has changed.