Some data in OSS-HDFS is infrequently accessed but must be retained to meet compliance or archiving requirements. For such data, OSS-HDFS provides the automatic storage tiering feature, which automatically stores frequently accessed data in the Standard storage class and moves rarely accessed data to the Infrequent Access (IA), Archive, or Cold Archive storage class to reduce storage costs.
Prerequisites
Data is written to OSS-HDFS.
The bucket for which you want to enable the automatic storage tiering feature is located in one of the following regions: China (Hangzhou), China (Shanghai), China (Beijing), China (Shenzhen), China (Zhangjiakou), China (Hong Kong), Singapore, Germany (Frankfurt), US (Silicon Valley), US (Virginia), or Indonesia (Jakarta).
A ticket is submitted to use the automatic storage tiering feature.
JindoSDK 4.4.0 or later is installed and configured. For more information, see Connect non-EMR clusters to OSS-HDFS.
Usage notes
You are charged data retrieval fees when you read IA, Archive, or Cold Archive objects in OSS-HDFS. We recommend that you do not store frequently accessed data as IA, Archive, or Cold Archive objects. For more information about the data retrieval fees, see Data processing fees.
When you configure a storage policy for data in OSS-HDFS, you must add tags to data blocks. You are charged for the tags based on the object tagging billing rules. For more information, see Object tagging fees.
You cannot directly create an object in an IA, Archive, or Cold Archive directory. To add an object to such a directory, create and close the object in a Standard directory, and then move the object to the destination directory by using the rename operation.
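The required write-then-rename pattern can be sketched with local directories standing in for oss:// paths. This is illustrative only: the /tmp/tiering-demo paths are placeholders, and on OSS-HDFS you would perform the create and rename operations against oss://examplebucket/... paths instead.

```shell
# Local stand-ins for a Standard directory and an Archive directory
# (placeholder paths for illustration only).
mkdir -p /tmp/tiering-demo/standard-dir /tmp/tiering-demo/archive-dir

# 1. Create the object in the Standard directory and close it.
#    (The shell redirection closes the file when the command finishes.)
printf 'example payload\n' > /tmp/tiering-demo/standard-dir/object.txt

# 2. Rename (move) the finished object into the Archive directory;
#    creating it there directly is not allowed.
mv /tmp/tiering-demo/standard-dir/object.txt /tmp/tiering-demo/archive-dir/object.txt
```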
When you convert the storage class of objects to Archive or Cold Archive, additional system overhead is generated and data restoration is slow. Proceed with caution.
You cannot convert Archive objects to Cold Archive objects, or Cold Archive objects to Archive objects.
Procedure
Configure environment variables.
Connect to an Elastic Compute Service (ECS) instance. For more information, see Connect to an instance.
Go to the bin directory of the installed JindoSDK package.
cd jindosdk-x.x.x/bin/
Note: x.x.x indicates the version number of the JindoSDK package.
Grant read, write, and execute permissions to the jindo-util file in the bin directory.
chmod 700 jindo-util
Rename the jindo-util file to jindo.
mv jindo-util jindo
Create a configuration file named jindosdk.cfg, and then add the following parameters to the configuration file.
[common]
# Retain the following default configurations.
logger.dir = /tmp/jindo-util/
logger.sync = false
logger.consolelogger = false
logger.level = 0
logger.verbose = 0
logger.cleaner.enable = true
hadoopConf.enable = false
[jindosdk]
# Specify the following parameters.
# In this example, the China (Hangzhou) region is used. Specify your actual region.
fs.oss.endpoint = cn-hangzhou.oss-dls.aliyuncs.com
# Specify the AccessKey ID and AccessKey secret that are used to access OSS-HDFS.
fs.oss.accessKeyId = LTAI5tJCTj5SxJepqxQ2****
fs.oss.accessKeySecret = i0uLwyd0mHxXetZo7b4j4CXP16****
Configure environment variables.
export JINDOSDK_CONF_DIR=<JINDOSDK_CONF_DIR>
Set <JINDOSDK_CONF_DIR> to the absolute path of the jindosdk.cfg configuration file.
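The configuration steps above can be sketched as a single shell snippet. The ~/jindosdk-conf directory is a placeholder location, and the endpoint and AccessKey values shown are placeholders that you must replace with your own before use.

```shell
# Create a directory for the JindoSDK configuration file
# (~/jindosdk-conf is a placeholder location, not a required path).
mkdir -p "$HOME/jindosdk-conf"

# Write jindosdk.cfg with the default [common] settings and
# placeholder [jindosdk] values; replace the endpoint and the
# AccessKey pair with your own before use.
cat > "$HOME/jindosdk-conf/jindosdk.cfg" <<'EOF'
[common]
logger.dir = /tmp/jindo-util/
logger.sync = false
logger.consolelogger = false
logger.level = 0
logger.verbose = 0
logger.cleaner.enable = true
hadoopConf.enable = false

[jindosdk]
fs.oss.endpoint = cn-hangzhou.oss-dls.aliyuncs.com
fs.oss.accessKeyId = <yourAccessKeyId>
fs.oss.accessKeySecret = <yourAccessKeySecret>
EOF

# Point JindoSDK at the configuration file, as described in the step above.
export JINDOSDK_CONF_DIR="$HOME/jindosdk-conf/jindosdk.cfg"
```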
Specify a storage policy for the data that is written to OSS-HDFS. The following table describes the available storage policies.

| Scenario | Command | Result |
| --- | --- | --- |
| IA | ./jindo fs -setStoragePolicy -path oss://examplebucket/dir1 -policy CLOUD_IA | Objects in the dir1/ directory contain a tag whose key is transition-storage-class and whose value is IA. |
| Archive | ./jindo fs -setStoragePolicy -path oss://examplebucket/dir2 -policy CLOUD_AR | Objects in the dir2/ directory contain a tag whose key is transition-storage-class and whose value is Archive. |
| Cold Archive | ./jindo fs -setStoragePolicy -path oss://examplebucket/dir3 -policy CLOUD_COLD_AR | Objects in the dir3/ directory contain a tag whose key is transition-storage-class and whose value is ColdArchive. |
Enable the automatic storage tiering feature.
Log on to the OSS console.
In the left-side navigation pane, click Buckets. On the Buckets page, click the name of the bucket for which you want to enable the automatic storage tiering feature.
In the left-side navigation tree, choose .
On the OSS-HDFS tab, click Configure.
In the Basic Settings section of the Automatic Storage Tiering panel, turn on Status.
To prevent the automatic storage tiering feature from failing to run as expected due to incorrect configurations, OSS automatically creates a lifecycle rule to convert the storage class of data in OSS-HDFS that contains a specific tag:
The lifecycle rule specifies that the storage class of the data that contains a tag whose key is transition-storage-class and whose value is IA in the .dlsdata/ directory is changed to IA one day after the data is last modified.
The lifecycle rule specifies that the storage class of the data that contains a tag whose key is transition-storage-class and whose value is Archive in the .dlsdata/ directory is changed to Archive one day after the data is last modified.
The lifecycle rule specifies that the storage class of the data that contains a tag whose key is transition-storage-class and whose value is ColdArchive in the .dlsdata/ directory is changed to Cold Archive one day after the data is last modified.
Important: Do not modify the lifecycle rule that is automatically created after the automatic storage tiering feature is enabled. Otherwise, data errors or OSS-HDFS service exceptions may occur.
Click OK.
OSS-HDFS changes the storage class for objects based on the policy configured in Step 2.
After a lifecycle rule is created, OSS loads the rule within 24 hours. After the rule is loaded, OSS starts to execute the rule at 08:00 (UTC+8) every day. The actual execution time varies based on the number of objects. Allow at least 48 hours for the objects to be converted to the specified storage class.
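The automatically created rules amount to a fixed mapping from the transition-storage-class tag value to a target storage class, which can be sketched as follows. This is illustrative only: the conversion is performed by OSS lifecycle rules on the service side, not by user-side code.

```shell
# Mirror the tag-to-storage-class mapping applied by the
# automatically created lifecycle rules (illustrative only).
target_storage_class() {
  case "$1" in
    IA)          echo "IA" ;;
    Archive)     echo "Archive" ;;
    ColdArchive) echo "Cold Archive" ;;
    *)           echo "no conversion" ;;
  esac
}

target_storage_class ColdArchive   # prints: Cold Archive
```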
Related commands
| Syntax | Description |
| --- | --- |
| jindo fs -setStoragePolicy -path <path> -policy <policy> | Specifies a storage policy for data in a path. If you do not specify the storage class for an object or a subdirectory, the object or subdirectory inherits the storage class of the directory to which it belongs. For example, if the storage class of the oss://examplebucket/dir directory is CLOUD_STD and you do not specify a storage class for the oss://examplebucket/dir/subdir subdirectory, the storage class of the oss://examplebucket/dir/subdir subdirectory is also CLOUD_STD. |
|  | Obtains the storage policy of data in a specific path. |
|  | Deletes the storage policy of data in a specific path. |
|  | Obtains the status of storage class conversion for data in a specific path based on the storage policy. Valid values: |