OSSFS is a FUSE-based file system officially provided by Alibaba Cloud (see the project home page at https://github.com/aliyun/ossfs). OSSFS data volumes allow you to mount Object Storage Service (OSS) buckets as data volumes.

The performance and functionality of OSSFS differ from those of local file systems because data must be synchronized to the cloud over the network. We recommend that you do not run I/O-intensive applications, such as databases, or applications that constantly rewrite files, such as log services, on OSSFS. OSSFS is suitable for scenarios that do not require rewriting, such as sharing configuration files among containers and uploading attachments.

OSSFS differs from local file systems in the following ways:

  • Random writes and appends cause the entire file to be rewritten (see the sketch after this list).
  • Metadata operations, such as listing a directory, perform poorly because they require remote access to the OSS server.
  • Renaming a file or folder is not atomic.
  • When multiple clients mount the same OSS bucket, you must coordinate their actions yourself. For example, prevent multiple clients from writing to the same file.
  • Hard links are not supported.
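Because OSSFS rewrites the whole object on random writes and appends, write each file in a single pass whenever possible. The following is a minimal Python sketch of this pattern; the mount path /data and the function names are assumptions for illustration, not part of the product.

```python
# Minimal sketch: prefer whole-file writes over appends on an OSSFS mount.
# Assumption: the OSSFS data volume is mounted at /data inside the container.
MOUNT_PATH = "/data"

def save_attachment(name: str, content: bytes) -> None:
    """Write the file in one pass; OSSFS uploads it to OSS as a single object."""
    with open(f"{MOUNT_PATH}/{name}", "wb") as f:
        f.write(content)

def read_shared_config(name: str) -> str:
    """Read-only access, for example a configuration file shared among containers."""
    with open(f"{MOUNT_PATH}/{name}", "r") as f:
        return f.read()

# Avoid patterns such as open(path, "a") for logs: every append forces OSSFS
# to rewrite the entire object in OSS.
```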

Prerequisites

To activate the data volume function, your cluster must meet the following two conditions:

  • The cluster Agent is version 0.6 or later.

    You can view your Agent version on the Cluster List page by clicking More > Upgrade Agent.



    If your Agent version is earlier than 0.6, upgrade the Agent. For more information about how to upgrade the Agent, see Upgrade Agent.

  • The acsvolumedriver application is deployed in the cluster. We recommend that you upgrade to the latest version.

    You can deploy and upgrade the acsvolumedriver application by upgrading system services. For more information, see Upgrade system services.

    Note
    When acsvolumedriver is upgraded or restarted, containers using OSSFS data volumes are restarted, and your services are also restarted.

Procedure

Step 1. Create an OSS bucket

Log on to the OSS console and create a bucket. For more information, see Create a bucket.

In this example, a bucket located in China South 1 (Shenzhen) is created.
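If you prefer to create the bucket programmatically rather than in the console, the following sketch uses the oss2 Python SDK. The endpoint matches the China South 1 (Shenzhen) region used in this example; the bucket name tensorflow-sample and the AccessKey placeholders are assumptions that you should replace with your own values.

```python
# Sketch: create the example bucket with the oss2 SDK instead of the console.
# Replace the AccessKey placeholders with your own credentials.
import oss2

auth = oss2.Auth("<your-access-key-id>", "<your-access-key-secret>")

# Public endpoint for China South 1 (Shenzhen); use the internal endpoint
# (oss-cn-shenzhen-internal.aliyuncs.com) when running inside the same region.
endpoint = "https://oss-cn-shenzhen.aliyuncs.com"

bucket = oss2.Bucket(auth, endpoint, "tensorflow-sample")
bucket.create_bucket(oss2.BUCKET_ACL_PRIVATE)  # create a private bucket
```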



Step 2. Create an OSSFS data volume

  1. Log on to the Container Service console.
  2. Click Data Volumes in the left-side navigation pane.
  3. Select the cluster in which you want to create a data volume (tfoss in this example) from the Cluster drop-down list. Click Create in the upper-right corner.


  4. In the Create Data Volume dialog box that appears, select OSS as the Data Volume Type, set the data volume parameters, and then click Create. Container Service creates data volumes with the same name on all nodes of the cluster.


    • Name: The name of the data volume. The name must be unique within the cluster.
    • Access Key ID/Access Key Secret: The AccessKey pair required to access OSS. You can obtain it from the AccessKey console (see the verification sketch after this list).
    • Bucket ID: The name of the OSS bucket to be used. Click Select Bucket. Select the bucket (tensorflow-sample in this example) in the displayed dialog box and click Select.
    • Access Domain Name: Select VPC.
    • File Caching: Select Disable if you need modifications to the same file to be synchronized across machines (for example, the file is modified on machine A and the modified contents are read on machine B).
      Note
      Disabling file caching slows down listing folders with `ls`, especially when a folder contains many files. If you do not have the preceding requirement, enable file caching to speed up `ls`.
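Before you click Create, you can optionally check that the AccessKey pair has access to the selected bucket. The sketch below uses the oss2 Python SDK; the endpoint and bucket name follow the Shenzhen example above and are assumptions to adapt to your own setup.

```python
# Sketch: verify that the AccessKey pair can access the bucket before
# creating the OSSFS data volume. Endpoint and bucket name are assumptions.
import oss2

auth = oss2.Auth("<your-access-key-id>", "<your-access-key-secret>")
bucket = oss2.Bucket(auth, "https://oss-cn-shenzhen.aliyuncs.com", "tensorflow-sample")

info = bucket.get_bucket_info()  # raises an oss2 exception if access fails
print("bucket:", info.name, "region:", info.location)
```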

Subsequent operations

After creating a data volume, you can use it in your applications. For information about how to use data volumes in applications, see Use third-party data volumes.