OSS-HDFS supports RootPolicy. You can use RootPolicy to configure a custom prefix for OSS-HDFS, which allows existing jobs to run on OSS-HDFS without modifying the original hdfs:// access prefix.
Prerequisites
A Hadoop environment, Hadoop cluster, or Hadoop client is created. For more information about how to install Hadoop, see Step 2: Create a Hadoop runtime environment.
OSS-HDFS is enabled for specific buckets. For more information, see Enable OSS-HDFS and grant access permissions.
JindoSDK 4.5.0 or later is installed and configured. For more information, see Connect non-EMR clusters to OSS-HDFS.
Procedure
Configure environment variables.
Connect to an Elastic Compute Service (ECS) instance. For more information, see Connect to an ECS instance.
Download the JindoFS command line interface.
Configure the AccessKey pair and environment variables.
Navigate to the bin directory of the installed JindoFS JAR package.
The following example uses jindofs-sdk-x.x.x-linux. Replace x.x.x with the corresponding version number.

cd jindofs-sdk-x.x.x-linux/bin/

Create a configuration file named jindofs.cfg in the bin directory and configure the AccessKey pair of your Alibaba Cloud account or a Resource Access Management (RAM) user that has the required permissions.

[client]
fs.oss.accessKeyId = <key>
fs.oss.accessKeySecret = <secret>

Configure the environment variables.

Note: Replace <JINDOSDK_CONF_DIR> with the absolute path of the jindofs.cfg configuration file.

export JINDOSDK_CONF_DIR=<JINDOSDK_CONF_DIR>
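Taken together, this step might look like the following shell session. This is a minimal sketch that assumes the SDK package is unpacked as jindofs-sdk-x.x.x-linux in the current directory; the AccessKey values are placeholders that you must replace.

cd jindofs-sdk-x.x.x-linux/bin/

# Write the credentials file in the bin directory (placeholder values).
cat > jindofs.cfg <<'EOF'
[client]
fs.oss.accessKeyId = <key>
fs.oss.accessKeySecret = <secret>
EOF

# Per the note above, set JINDOSDK_CONF_DIR to the absolute path of jindofs.cfg.
export JINDOSDK_CONF_DIR="$(pwd)/jindofs.cfg"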
Configure RootPolicy.
Run the following SetRootPolicy command to specify a registered address that contains a custom prefix for a bucket:
./jindofs admin -setRootPolicy oss://<bucket_name>.<dls_endpoint>/ hdfs://<your_ns_name>/

The following table describes the parameters in the SetRootPolicy command.
Parameter: bucket_name
Description: The name of the bucket for which OSS-HDFS is enabled.

Parameter: dls_endpoint
Description: The endpoint of the region in which the bucket for which OSS-HDFS is enabled is located. Example: cn-hangzhou.oss-dls.aliyuncs.com.
If you do not want to repeatedly add the <dls_endpoint> parameter to the SetRootPolicy command each time you run RootPolicy, you can use one of the following methods to add configuration items to the core-site.xml file of Hadoop:
Method 1:
<configuration>
    <property>
        <name>fs.oss.endpoint</name>
        <value><dls_endpoint></value>
    </property>
</configuration>
Method 2:
<configuration>
    <property>
        <name>fs.oss.bucket.<bucket_name>.endpoint</name>
        <value><dls_endpoint></value>
    </property>
</configuration>
Parameter: your_ns_name
Description: The custom nsname that is used to access OSS-HDFS. A non-empty string is supported, such as test. The current version supports only the root directory.
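As a concrete illustration, the following command maps the nsname test to a hypothetical bucket named examplebucket in the China (Hangzhou) region; the bucket name, endpoint, and nsname are placeholders, not values from your environment.

# Hypothetical values: examplebucket, the cn-hangzhou endpoint, and test are placeholders.
./jindofs admin -setRootPolicy oss://examplebucket.cn-hangzhou.oss-dls.aliyuncs.com/ hdfs://test/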
Configure Access Policy discovery address and Scheme implementation class.
You must configure the following parameters in the core-site.xml file of Hadoop:
<configuration>
    <property>
        <name>fs.accessPolicies.discovery</name>
        <value>oss://<bucket_name>.<dls_endpoint>/</value>
    </property>
    <property>
        <name>fs.AbstractFileSystem.hdfs.impl</name>
        <!-- Select fs.AbstractFileSystem.hdfs.impl based on your Hadoop version -->
        <!-- Use com.aliyun.jindodata.hdfs.v28.HDFS for Hadoop 2.x -->
        <!-- Use com.aliyun.jindodata.hdfs.v3.HDFS for Hadoop 3.x -->
        <value>com.aliyun.jindodata.hdfs.v3.HDFS</value>
    </property>
    <property>
        <name>fs.hdfs.impl</name>
        <!-- Select fs.hdfs.impl based on your Hadoop version -->
        <!-- Use com.aliyun.jindodata.hdfs.v28.JindoDistributedFileSystem for Hadoop 2.x -->
        <!-- Use com.aliyun.jindodata.hdfs.v3.JindoDistributedFileSystem for Hadoop 3.x -->
        <value>com.aliyun.jindodata.hdfs.v3.JindoDistributedFileSystem</value>
    </property>
</configuration>

If you want to configure Access Policy discovery addresses and Scheme implementation classes for multiple buckets, separate the buckets with commas (,). Example: <value>oss://<bucket1_name>.<dls_endpoint>/, oss://<bucket2_name>.<dls_endpoint>/</value>.
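For orientation, the following is a sketch of a complete core-site.xml fragment that combines the Method 1 endpoint setting with the parameters above, assuming a Hadoop 3.x cluster and the hypothetical examplebucket bucket in the China (Hangzhou) region; all bucket and endpoint values are placeholders.

<configuration>
    <!-- Hypothetical values: examplebucket and the cn-hangzhou endpoint are placeholders. -->
    <property>
        <name>fs.oss.endpoint</name>
        <value>cn-hangzhou.oss-dls.aliyuncs.com</value>
    </property>
    <property>
        <name>fs.accessPolicies.discovery</name>
        <value>oss://examplebucket.cn-hangzhou.oss-dls.aliyuncs.com/</value>
    </property>
    <!-- Hadoop 3.x implementation classes, as noted above. -->
    <property>
        <name>fs.AbstractFileSystem.hdfs.impl</name>
        <value>com.aliyun.jindodata.hdfs.v3.HDFS</value>
    </property>
    <property>
        <name>fs.hdfs.impl</name>
        <value>com.aliyun.jindodata.hdfs.v3.JindoDistributedFileSystem</value>
    </property>
</configuration>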
Run the following command to verify that RootPolicy is successfully configured:

hadoop fs -ls hdfs://<your_ns_name>/

If the following results are returned, RootPolicy is successfully configured:

drwxr-x--x   - hdfs  hadoop  0 2025-06-30 12:27 hdfs://<your_ns_name>/apps
drwxrwxrwx   - spark hadoop  0 2025-06-30 12:27 hdfs://<your_ns_name>/spark-history
drwxrwxrwx   - hdfs  hadoop  0 2025-06-30 12:27 hdfs://<your_ns_name>/tmp
drwxrwxrwx   - hdfs  hadoop  0 2025-06-30 12:27 hdfs://<your_ns_name>/user

Use a custom prefix to access OSS-HDFS.
After you restart services such as Hive and Spark, you can access OSS-HDFS by using the custom prefix.
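For example, if the registered nsname is test, existing commands and jobs keep the hdfs:// prefix unchanged. The following commands are illustrative only; the nsname and the file and directory names are placeholders.

# Illustrative only: test, examplefile.txt, and dir are placeholders.
hadoop fs -mkdir hdfs://test/dir/
hadoop fs -put examplefile.txt hdfs://test/dir/
hadoop fs -ls hdfs://test/dir/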
Optional: Use RootPolicy for other purposes.
List all registered addresses that contain a custom prefix specified for a bucket
Run the following listAccessPolicies command to list all registered addresses that contain a custom prefix specified for a bucket:
./jindofs admin -listAccessPolicies oss://<bucket_name>.<dls_endpoint>/

Delete all registered addresses that contain a custom prefix specified for a bucket
Run the following unsetRootPolicy command to delete all registered addresses that contain a custom prefix specified for a bucket:
./jindofs admin -unsetRootPolicy oss://<bucket_name>.<dls_endpoint>/ hdfs://<your_ns_name>/
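Continuing the hypothetical examplebucket example, the two commands above might be invoked as follows; the bucket, endpoint, and nsname values are placeholders.

# Hypothetical values: examplebucket, the cn-hangzhou endpoint, and test are placeholders.
./jindofs admin -listAccessPolicies oss://examplebucket.cn-hangzhou.oss-dls.aliyuncs.com/
./jindofs admin -unsetRootPolicy oss://examplebucket.cn-hangzhou.oss-dls.aliyuncs.com/ hdfs://test/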
References
For more information, see Jindo CLI user guide.