Network Attached Storage (NAS) supports the NFSv3 and NFSv4 protocols. However, the following limits apply:

  • Attributes not supported by NFSv4.0 include: FATTR4_MIMETYPE, FATTR4_QUOTA_AVAIL_HARD, FATTR4_QUOTA_AVAIL_SOFT, FATTR4_QUOTA_USED, FATTR4_TIME_BACKUP, and FATTR4_TIME_CREATE. If a client requests any of these attributes, the server returns an NFS4ERR_ATTRNOTSUPP error.

  • Attributes not supported by NFSv4.1 include: FATTR4_DIR_NOTIF_DELAY, FATTR4_DIRENT_NOTIF_DELAY, FATTR4_DACL, FATTR4_SACL, FATTR4_CHANGE_POLICY, FATTR4_FS_STATUS, FATTR4_LAYOUT_HINT, FATTR4_LAYOUT_TYPES, FATTR4_LAYOUT_ALIGNMENT, FATTR4_FS_LOCATIONS_INFO, FATTR4_MDSTHRESHOLD, FATTR4_RETENTION_GET, FATTR4_RETENTION_SET, FATTR4_RETENTEVT_GET, FATTR4_RETENTEVT_SET, FATTR4_RETENTION_HOLD, FATTR4_MODE_SET_MASKED, and FATTR4_FS_CHARSET_CAP. If a client requests any of these attributes, the server returns an NFS4ERR_ATTRNOTSUPP error.

  • Operations (OPs) not supported by NFSv4.1 include: OP_DELEGPURGE, OP_DELEGRETURN, and NFS4_OP_OPENATTR. If a client issues any of these operations, the server returns an NFS4ERR_NOTSUPP error.

  • NFSv4 currently does not support delegation, which is why the delegation-related operations listed above (OP_DELEGPURGE and OP_DELEGRETURN) are rejected.

  • Issues concerning UID and GID

    • For the NFSv3 protocol, if a file's UID or GID maps to a local Linux account, the corresponding user or group name is displayed based on the local UID/GID-to-name mappings; if the UID or GID has no matching local account, the numeric UID or GID is displayed directly (see the first sketch after this list).

    • For the NFSv4 protocol, if the local Linux kernel version is earlier than 3.0, the UID and GID of all files are displayed as nobody; if the kernel version is 3.0 or later, the display rule is the same as for the NFSv3 protocol (see the second sketch after this list).

      Note: If you use the NFSv4 protocol to mount a file system and your Linux kernel version is earlier than 3.0, we recommend that you do not change the owner or group of any file or directory. Otherwise, its UID and GID will be changed to nobody.
  • A single file system can be simultaneously mounted and accessed by up to 10,000 computing nodes.
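To make the NFSv3 display rule concrete, here is a minimal illustrative Python sketch of the client-side lookup: it resolves a UID/GID to a local account name through the system user and group databases and falls back to the raw numeric ID when no local mapping exists. The function name display_owner and the sample IDs are assumptions chosen for illustration, not part of the NAS service.

```python
import grp
import pwd


def display_owner(uid: int, gid: int) -> str:
    """Mimic the NFSv3 display rule: show the local account name when the
    UID/GID exists locally, otherwise show the raw numeric ID."""
    try:
        user = pwd.getpwuid(uid).pw_name   # look up in the local user database
    except KeyError:
        user = str(uid)                    # no local account: display the UID itself
    try:
        group = grp.getgrgid(gid).gr_name  # look up in the local group database
    except KeyError:
        group = str(gid)                   # no local group: display the GID itself
    return f"{user}:{group}"


print(display_owner(0, 0))          # usually "root:root"
print(display_owner(54321, 54321))  # usually "54321:54321" (no local mapping)
```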
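Similarly, a minimal sketch of the NFSv4 kernel-version caveat: it parses the major version from platform.release() and flags kernels older than 3.0, where all owners and groups are displayed as nobody. The helper name and the parsing assumption (a release string beginning with "major.minor", such as "5.15.0-86") are illustrative.

```python
import platform


def kernel_major_version() -> int:
    """Parse the major kernel version from platform.release().
    Assumes a release string beginning with "major.minor", e.g. "5.15.0-86"."""
    return int(platform.release().split(".", 1)[0])


if kernel_major_version() < 3:
    # Kernels older than 3.0 display every owner and group as nobody over
    # NFSv4, and chown/chgrp over NFSv4 resets the file's UID/GID to nobody.
    print("Kernel < 3.0: prefer NFSv3, and avoid chown/chgrp over NFSv4 mounts.")
else:
    print("Kernel >= 3.0: NFSv4 follows the same UID/GID display rule as NFSv3.")
```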