
Object Storage Service: Access OSS-HDFS by using RootPolicy

Last Updated: Jul 02, 2025

OSS-HDFS supports RootPolicy. You can use RootPolicy to configure a custom prefix for OSS-HDFS, which allows jobs to run on OSS-HDFS without changing the original hdfs:// access prefix.

Prerequisites

OSS-HDFS is enabled for the bucket that you want to access.

Procedure

  1. Configure environment variables.

    1. Connect to an Elastic Compute Service (ECS) instance. For more information, see Connect to an ECS instance.

    2. Download the JindoFS command line interface.

    3. Configure the AccessKey pair and environment variables.

      1. Navigate to the bin directory of the installed JindoFS JAR package.

        The following example uses jindofs-sdk-x.x.x-linux. Replace x.x.x with the corresponding version number.

        cd jindofs-sdk-x.x.x-linux/bin/
      2. Create a configuration file named jindofs.cfg in the bin directory and configure the AccessKey pair of your Alibaba Cloud account or a Resource Access Management (RAM) user that has the required permissions.

        [client]
        fs.oss.accessKeyId = <key>              
        fs.oss.accessKeySecret = <secret>
      3. Configure the environment variables.

        Note

        Replace <JINDOSDK_CONF_DIR> with the absolute path of the jindofs.cfg configuration file.

        export JINDOSDK_CONF_DIR=<JINDOSDK_CONF_DIR>
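    The environment setup above can be sketched as a single shell session. The SDK directory name is a placeholder, and the mkdir call is included only so the sketch is self-contained:

    ```shell
    # Placeholder directory name; in practice this is created by extracting the
    # downloaded JindoFS package. mkdir -p is only here so the sketch runs standalone.
    mkdir -p jindofs-sdk-x.x.x-linux/bin
    cd jindofs-sdk-x.x.x-linux/bin/

    # jindofs.cfg holds the AccessKey pair of an account with the required permissions
    cat > jindofs.cfg <<'EOF'
    [client]
    fs.oss.accessKeyId = <key>
    fs.oss.accessKeySecret = <secret>
    EOF

    # Point JINDOSDK_CONF_DIR at the absolute path of the directory that contains jindofs.cfg
    export JINDOSDK_CONF_DIR="$(pwd)"
    ```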
  2. Configure RootPolicy.

    Run the following SetRootPolicy command to specify a registered address that contains a custom prefix for a bucket:

    ./jindofs admin -setRootPolicy oss://<bucket_name>.<dls_endpoint>/ hdfs://<your_ns_name>/

    The following parameters are used in the SetRootPolicy command:
    bucket_name

    The name of the bucket for which OSS-HDFS is enabled.

    dls_endpoint

    The endpoint of the region in which the bucket for which OSS-HDFS is enabled is located. Example: cn-hangzhou.oss-dls.aliyuncs.com.

    If you do not want to specify the <dls_endpoint> parameter each time you run a RootPolicy-related command, use one of the following methods to add configuration items to the core-site.xml file of Hadoop:

    • Method 1:

      <configuration>
          <property>
              <name>fs.oss.endpoint</name>
              <value><dls_endpoint></value>
          </property>
      </configuration>
    • Method 2:

      <configuration>
          <property>
              <name>fs.oss.bucket.<bucket_name>.endpoint</name>
              <value><dls_endpoint></value>
          </property>
      </configuration>

    your_ns_name

    The custom nsname that is used to access OSS-HDFS. Any non-empty string, such as test, is supported. The current version supports only the root directory.
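    With the parameters filled in, the assembled command takes the following shape. The bucket name, region, and nsname below are hypothetical example values, and the command is echoed rather than executed so the sketch runs anywhere:

    ```shell
    # Hypothetical example values; replace them with your own
    BUCKET_NAME="examplebucket"                      # bucket for which OSS-HDFS is enabled
    DLS_ENDPOINT="cn-hangzhou.oss-dls.aliyuncs.com"  # endpoint of the bucket's region
    NS_NAME="test"                                   # custom nsname

    # Print the SetRootPolicy command that would be run
    echo "./jindofs admin -setRootPolicy oss://${BUCKET_NAME}.${DLS_ENDPOINT}/ hdfs://${NS_NAME}/"
    # → ./jindofs admin -setRootPolicy oss://examplebucket.cn-hangzhou.oss-dls.aliyuncs.com/ hdfs://test/
    ```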

  3. Configure the Access Policy discovery address and the Scheme implementation class.

    You must configure the following parameters in the core-site.xml file of Hadoop:

    <configuration>
        <property>
            <name>fs.accessPolicies.discovery</name>
            <value>oss://<bucket_name>.<dls_endpoint>/</value>
        </property>
        <property>
            <name>fs.AbstractFileSystem.hdfs.impl</name>
            <!-- Select fs.AbstractFileSystem.hdfs.impl based on your Hadoop version -->
            <!-- Use com.aliyun.jindodata.hdfs.v28.HDFS for Hadoop 2.x -->
            <!-- Use com.aliyun.jindodata.hdfs.v3.HDFS for Hadoop 3.x -->
            <value>com.aliyun.jindodata.hdfs.v3.HDFS</value>
        </property>
        <property>
            <name>fs.hdfs.impl</name>
            <!-- Select fs.hdfs.impl based on your Hadoop version -->
            <!-- Use com.aliyun.jindodata.hdfs.v28.JindoDistributedFileSystem for Hadoop 2.x -->
            <!-- Use com.aliyun.jindodata.hdfs.v3.JindoDistributedFileSystem for Hadoop 3.x -->
            <value>com.aliyun.jindodata.hdfs.v3.JindoDistributedFileSystem</value>
        </property>
    </configuration>

    If you want to configure Access Policy discovery addresses and Scheme implementation classes for multiple buckets, separate the addresses with commas (,). Example: <value>oss://<bucket1_name>.<dls_endpoint>/,oss://<bucket2_name>.<dls_endpoint>/</value>.
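    For example, a discovery configuration that covers two buckets could look like the following fragment. The bucket names are placeholders:

    ```xml
    <configuration>
        <property>
            <name>fs.accessPolicies.discovery</name>
            <!-- Separate the discovery addresses of multiple buckets with commas -->
            <value>oss://<bucket1_name>.<dls_endpoint>/,oss://<bucket2_name>.<dls_endpoint>/</value>
        </property>
    </configuration>
    ```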

  4. Run the following command to verify that RootPolicy is successfully configured:

    hadoop fs -ls hdfs://<your_ns_name>/

    If the following results are returned, RootPolicy is successfully configured:

    drwxr-x--x   - hdfs  hadoop          0 2025-06-30 12:27 hdfs://<your_ns_name>/apps
    drwxrwxrwx   - spark hadoop          0 2025-06-30 12:27 hdfs://<your_ns_name>/spark-history
    drwxrwxrwx   - hdfs  hadoop          0 2025-06-30 12:27 hdfs://<your_ns_name>/tmp
    drwxrwxrwx   - hdfs  hadoop          0 2025-06-30 12:27 hdfs://<your_ns_name>/user
  5. Use a custom prefix to access OSS-HDFS.

    After you restart services such as Hive and Spark, you can access OSS-HDFS by using the custom prefix.

  6. Optional. Use RootPolicy for other purposes.

    • List all registered addresses that contain a custom prefix specified for a bucket

      Run the following listAccessPolicies command to list all registered addresses that contain a custom prefix specified for a bucket:

      ./jindofs admin -listAccessPolicies oss://<bucket_name>.<dls_endpoint>/
    • Delete all registered addresses that contain a custom prefix specified for a bucket

      Run the following unsetRootPolicy command to delete all registered addresses that contain a custom prefix specified for a bucket:

      ./jindofs admin -unsetRootPolicy oss://<bucket_name>.<dls_endpoint>/ hdfs://<your_ns_name>/

References

For more information, see Jindo CLI user guide.