Object Storage Service: Convert object storage classes

Last Updated: Dec 23, 2025

Object Storage Service (OSS) supports multiple storage classes: Standard, Infrequent Access (IA), Archive, Cold Archive, and Deep Cold Archive. You can use lifecycle rules to automatically convert the storage class of an object. You can also manually convert the storage class of an object by calling the CopyObject operation.

Warning
  • For buckets with OSS-HDFS enabled, do not change the storage class of any object in the .dlsdata/ data storage directory.

  • If you change the storage class of an object in the .dlsdata/ directory to IA, the object remains accessible through OSS-HDFS. If you change the storage class to Archive, Cold Archive, or Deep Cold Archive, the object is inaccessible through OSS-HDFS. To access the object, you must first restore it.

Automatically convert object storage classes using lifecycle rules

Convert storage classes based on the last modified time

  • Locally redundant storage (LRS)

    The conversion rules for LRS objects are as follows:

    • Standard LRS objects can be converted to IA LRS, Archive LRS, Cold Archive LRS, or Deep Cold Archive LRS.

    • IA LRS objects can be converted to Archive LRS, Cold Archive LRS, or Deep Cold Archive LRS.

    • Archive LRS objects can be converted to Cold Archive LRS or Deep Cold Archive LRS.

    • Cold Archive LRS objects can be converted to Deep Cold Archive LRS.

    If you configure lifecycle rules to convert objects to IA, Archive, Cold Archive, and Deep Cold Archive in the same bucket, the specified transition periods must meet the following condition (see the sketch after this list):

    Transition period to IA < Transition period to Archive < Transition period to Cold Archive < Transition period to Deep Cold Archive

  • Zone-redundant storage (ZRS)

    The conversion rules for ZRS objects are as follows:

    • Standard ZRS objects can be converted to IA ZRS, Archive ZRS, Cold Archive LRS, or Deep Cold Archive LRS.

    • IA ZRS objects can be converted to Archive ZRS, Cold Archive LRS, or Deep Cold Archive LRS.

    • Archive ZRS objects can be converted to Cold Archive LRS or Deep Cold Archive LRS.

    • Cold Archive LRS objects can be converted to Deep Cold Archive LRS.
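
For example, the following Java sketch configures a single lifecycle rule whose transition periods satisfy the required order. It is a minimal sketch that reuses the SDK client setup from the Java example later in this topic; the bucket name and the logs/ prefix are placeholders.

SetBucketLifecycleRequest request = new SetBucketLifecycleRequest("examplebucket");
LifecycleRule rule = new LifecycleRule("ordered-transitions", "logs/", LifecycleRule.RuleStatus.Enabled);
List<LifecycleRule.StorageTransition> transitions = new ArrayList<LifecycleRule.StorageTransition>();
// Transition periods increase across the destination storage classes:
// IA (30) < Archive (90) < Cold Archive (180) < Deep Cold Archive (365).
int[] days = {30, 90, 180, 365};
StorageClass[] classes = {StorageClass.IA, StorageClass.Archive, StorageClass.ColdArchive, StorageClass.DeepColdArchive};
for (int i = 0; i < days.length; i++) {
    LifecycleRule.StorageTransition transition = new LifecycleRule.StorageTransition();
    transition.setStorageClass(classes[i]);
    transition.setExpirationDays(days[i]);
    transitions.add(transition);
}
rule.setStorageTransition(transitions);
request.AddLifecycleRule(rule);
ossClient.setBucketLifecycle(request);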

For more information, see Lifecycle rules based on the last modified time.

Convert storage classes based on the last access time

Important
  • To convert objects from the Standard or IA storage class to the Archive, Cold Archive, or Deep Cold Archive storage class, you must submit a ticket to apply for the required permissions. After your application is approved, you can specify the destination storage class.

  • After your application is approved, if you use a lifecycle rule based on the last access time to convert an object from Standard or IA to Archive, Cold Archive, or Deep Cold Archive, the last access time of the object is considered the time when access tracking was enabled for the bucket.

  • Locally redundant storage (LRS)


    The conversion rules for LRS objects are as follows:

    • Standard LRS objects can be converted to IA LRS, Archive LRS, Cold Archive LRS, or Deep Cold Archive LRS.

    • After an object is converted from Standard LRS to IA LRS, you can also specify whether to automatically convert it back to Standard LRS after it is accessed (see the sketch after this list).

    • IA LRS objects can be converted to Archive LRS, Cold Archive LRS, or Deep Cold Archive LRS.

    • Archive LRS objects can be converted to Cold Archive LRS or Deep Cold Archive LRS.

    • Cold Archive LRS objects can be converted to Deep Cold Archive LRS.

  • Zone-redundant storage (ZRS)


    The conversion rules for ZRS objects are as follows:

    • Standard ZRS objects can be converted to IA ZRS, Archive ZRS, Cold Archive LRS, or Deep Cold Archive LRS.

    • After an object is converted from Standard ZRS to IA ZRS, you can also specify whether to automatically convert it back to Standard ZRS after it is accessed.

    • IA ZRS objects can be converted to Archive ZRS, Cold Archive LRS, or Deep Cold Archive LRS.

    • Archive ZRS objects can be converted to Cold Archive LRS or Deep Cold Archive LRS.

    • Cold Archive LRS objects can be converted to Deep Cold Archive LRS.
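
The following Java sketch illustrates a last-access-time rule that moves objects to IA 30 days after they are last accessed and converts them back to Standard when they are accessed again. This is a hedged sketch: it assumes a recent Java SDK version that exposes the access-time setters (setIsAccessTime and setReturnToStdWhenVisit), assumes access tracking is already enabled for the bucket, and reuses the client setup from the Java example later in this topic.

SetBucketLifecycleRequest request = new SetBucketLifecycleRequest("examplebucket");
LifecycleRule rule = new LifecycleRule("access-time-rule", "hotdata/", LifecycleRule.RuleStatus.Enabled);
List<LifecycleRule.StorageTransition> transitions = new ArrayList<LifecycleRule.StorageTransition>();
LifecycleRule.StorageTransition toIA = new LifecycleRule.StorageTransition();
toIA.setStorageClass(StorageClass.IA);
toIA.setExpirationDays(30);
// Assumption: these setters exist in the SDK version that you use.
toIA.setIsAccessTime(true);           // Evaluate the rule against the last access time.
toIA.setReturnToStdWhenVisit(true);   // Convert back to Standard after the object is accessed.
transitions.add(toIA);
rule.setStorageTransition(transitions);
request.AddLifecycleRule(rule);
ossClient.setBucketLifecycle(request);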

For more information, see Lifecycle rules based on the last access time.

Methods to convert storage classes using lifecycle rules

You can use several methods to configure lifecycle rules. Lifecycle rules can transition objects to a specified storage class after a specified period or delete expired objects and parts. The following steps describe how to use a lifecycle rule to transition objects to a specified storage class.

Use the OSS console

  1. Log on to the OSS console.

  2. In the left-side navigation pane, click Buckets. On the Buckets page, find and click the desired bucket.

  3. In the navigation pane on the left, choose Data Management > Lifecycle.

  4. Optional: To create a lifecycle rule based on the last access time, turn on the Enable Access Tracking switch on the Lifecycle page.

  5. On the Lifecycle page, click Create Rule.

  6. In the Create Lifecycle Rule panel, set the parameters for the rule as described in the following tables.

    • Unversioned bucket

      Section

      Parameter

      Description

      Basic Settings

      Status

      Specify the status of the lifecycle rule. You can select Enabled or Disabled.

      • After a lifecycle rule is enabled, the storage class of objects is converted or objects are deleted based on the lifecycle rule.

      • After you disable a lifecycle rule, the lifecycle tasks of the lifecycle rule are interrupted.

      Applied To

      Specify the objects for which you want the lifecycle rule to take effect. You can select Objects with Specified Prefix or Whole Bucket.

      Note

      If you select Objects with Specified Prefix, you must specify a full prefix. For example, if you want to apply the lifecycle rule to objects whose names contain the src/dir1 prefix, enter src/dir1. If you enter only dir1, the lifecycle rule does not match the objects in the src/dir1 directory.

      Allow Overlapped Prefixes

      Specify whether to allow prefixes that overlap. By default, OSS checks whether the prefixes of each lifecycle rule overlap. For example, if the bucket has an existing lifecycle rule (Rule 1) and you want to configure another lifecycle rule (Rule 2) that contains an overlapping prefix:

      • Rule 1

        Delete all objects whose names contain the dir1/ prefix in the bucket 180 days after the objects are last modified.

      • Rule 2

        Convert the storage class of all objects whose names contain the dir1/dir2/ prefix in the bucket to IA 30 days after the objects are last modified and delete the objects 60 days after they are last modified.

      If you do not select this check box, OSS detects that objects in the dir1/dir2/ directory match two lifecycle rules, rejects the creation of Rule 2, and returns the "Overlap for same action type Expiration" error message.

      If you select this check box, Rule 2 is created to convert the storage class of the objects in the dir1/dir2/ directory to IA 30 days after the objects are last modified and delete them 60 days after they are last modified. Other objects in the dir1/ directory are deleted 180 days after the objects are last modified.

      Note

      If a bucket has multiple lifecycle rules, one of which applies to the whole bucket, the lifecycle rules have overlapping prefixes.

      Prefix

      Specify the prefix in the names of objects for which you want the lifecycle rule to take effect.

      • If you set the prefix to img, all objects whose names contain the img prefix, such as imgtest.png and img/example.jpg, match the lifecycle rule.

      • If you set the prefix to img/, all objects whose names contain the img/ prefix, such as img/example.jpg and img/test.jpg, match the lifecycle rule.

      Tag

      Specify tags. The rule takes effect only for objects that have the specified tags.

      • For example, if you specify a tag in a lifecycle rule and do not specify a prefix, the lifecycle rule applies to all objects in the bucket that have the tag.

      • If you specify the a=1 tag and the img prefix in a lifecycle rule, the lifecycle rule applies to all objects in the bucket whose names contain the img prefix and that have the a=1 tag.

      For more information, see Tag objects.

      NOT

      Specify that the lifecycle rule does not take effect for the objects that have the specified name prefix and tag.

      Important
      • If you turn on NOT, at least one of the Prefix and Tag parameters must be specified for the lifecycle rule.

      • The key of the tag specified for the NOT parameter cannot be the same as the key specified for the Tag parameter.

      • If you turn on NOT, you cannot include a part policy in the lifecycle rule.

      Object Size

      Specify the size of objects for which the lifecycle rule takes effect.

      • Minimum Size: Specify that the lifecycle rule takes effect only for objects whose sizes are greater than the specified size. You can specify a minimum object size that is greater than 0 B and less than 5 TB.

      • Maximum Size: Specify that the lifecycle rule takes effect only for objects whose sizes are smaller than the specified size. You can specify a maximum object size that is greater than 0 B and less than or equal to 5 TB.

      Important

      If you specify a minimum object size and a maximum object size in the same lifecycle rule, take note of the following items:

      • The maximum object size must be greater than the minimum object size.

      • You cannot include a part policy in the lifecycle rule.

      • You cannot include a policy to remove delete markers.

      Policy for Objects

      Object Lifecycle

      Specify an object expiration policy. You can select Validity Period (Days), Expiration Date, or Disabled. If you select Disabled, no object expiration policy is configured.

      Lifecycle-based Rules

      Configure the lifecycle rule to convert the storage class of objects or delete expired objects. You can select IA, Archive, Cold Archive, Deep Cold Archive, or Delete Objects (Cannot Be Recovered).

      For example, you select Expiration Date for Object Lifecycle, specify September 24, 2023 as the expiration date, and specify Delete Objects (Cannot Be Recovered). In this case, objects that are last modified before September 24, 2023 are automatically deleted and cannot be recovered.

      Policy for Parts

      Part Lifecycle

      Specify a part policy. If you configure the Tag parameter, this parameter is unavailable. You can select Validity Period (Days), Expiration Date, or Disabled. If you select Disabled, no part policy is configured.

      Important

      A lifecycle rule must contain at least one object expiration policy or part expiration policy.

      Rules for Parts

      Specify when parts expire. You can specify a validity period or expiration date. Expired parts are automatically deleted and cannot be recovered.

    • Versioned bucket

      Configure the parameters in the Basic Settings and Policy for Parts sections in the same way you configure the parameters for an unversioned bucket. The following table describes only the parameters that are different from the parameters that you configure for an unversioned bucket.

      Important

      Before you configure a lifecycle rule, be aware of the following:

      If your bucket has versioning enabled and is the destination for Cross-Region Replication (CRR), delete markers replicated from the source bucket will cause objects with the same name in this bucket to become previous versions.

      Therefore, configure any lifecycle rules that clean up previous versions with extreme caution to avoid unintended data loss in the destination bucket.

      Section

      Parameter

      Description

      Policy for Current Versions

      Removal of Delete Marker

      If the bucket is versioned, the Removal of Delete Marker option is added to the Object Lifecycle parameter. Other parameters are the same as those you can configure for an unversioned bucket.

      If you select Removal of Delete Marker, and an object has only one version, which is a delete marker, OSS considers the delete marker expired and removes the delete marker. If an object has multiple versions and the current version of the object is a delete marker, OSS retains the delete marker. For more information about delete markers, see Delete marker.

      Important

      If a matched object has previous versions, the lifecycle rule does not remove the delete marker of the object. We recommend that you remove previous object versions that you no longer need and delete markers to prevent a listing performance decline due to a large number of delete markers.

      Policy for Previous Versions

      Object Lifecycle

      Specify the time when previous versions expire. You can select Validity Period (Days) or Disabled. If you select Disabled, no object policy is configured.

      Lifecycle-based Rules

      Specify the number of days for which objects can be retained after they become previous versions. After they expire, the specified actions are performed on the previous versions the next day. For example, if you set the Validity Period (Days) parameter to 30, objects that become previous versions on September 1, 2023 are moved to the specified storage class or deleted on October 1, 2023.

      Important

      An object becomes a previous version at the time when a later version of the object is generated.

  7. Click OK.

    After the lifecycle rule is saved, you can view the configured lifecycle rule in the list of rules.

Use Alibaba Cloud SDKs

The following code provides examples of how to configure lifecycle rules using common SDKs. For information about how to configure lifecycle rules using other SDKs, see Overview.

Java

import com.aliyun.oss.*;
import com.aliyun.oss.common.auth.*;
import com.aliyun.oss.common.comm.SignVersion;
import com.aliyun.oss.common.utils.DateUtil;
import com.aliyun.oss.model.LifecycleRule;
import com.aliyun.oss.model.SetBucketLifecycleRequest;
import com.aliyun.oss.model.StorageClass;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class Demo {

    public static void main(String[] args) throws Exception {
        // In this example, the endpoint of the China (Hangzhou) region is used. Specify your actual endpoint. 
        String endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
        // Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
        EnvironmentVariableCredentialsProvider credentialsProvider = CredentialsProviderFactory.newEnvironmentVariableCredentialsProvider();
        // Specify the name of the bucket. Example: examplebucket. 
        String bucketName = "examplebucket";
        // Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to cn-hangzhou.
        String region = "cn-hangzhou";

        // Create an OSSClient instance. 
        // Call the shutdown method to release resources when the OSSClient is no longer in use.
        ClientBuilderConfiguration clientBuilderConfiguration = new ClientBuilderConfiguration();
        clientBuilderConfiguration.setSignatureVersion(SignVersion.V4);        
        OSS ossClient = OSSClientBuilder.create()
        .endpoint(endpoint)
        .credentialsProvider(credentialsProvider)
        .clientConfiguration(clientBuilderConfiguration)
        .region(region)               
        .build();

        try {
            // Create a request by using SetBucketLifecycleRequest. 
            SetBucketLifecycleRequest request = new SetBucketLifecycleRequest(bucketName);

            // Specify the ID of the lifecycle rule. 
            String ruleId0 = "rule0";
            // Specify the prefix that you want the lifecycle rule to match. 
            String matchPrefix0 = "A0/";
            // Specify the tag that you want the lifecycle rule to match. 
            Map<String, String> matchTags0 = new HashMap<String, String>();
            // Specify the key and value of the tag. In the example, the key is set to owner and the value is set to John. 
            matchTags0.put("owner", "John");


            String ruleId1 = "rule1";
            String matchPrefix1 = "A1/";
            Map<String, String> matchTags1 = new HashMap<String, String>();
            matchTags1.put("type", "document");

            String ruleId2 = "rule2";
            String matchPrefix2 = "A2/";

            String ruleId3 = "rule3";
            String matchPrefix3 = "A3/";

            String ruleId4 = "rule4";
            String matchPrefix4 = "A4/";

            String ruleId5 = "rule5";
            String matchPrefix5 = "A5/";

            String ruleId6 = "rule6";
            String matchPrefix6 = "A6/";

            // Set the expiration time to three days after the last modified time. 
            LifecycleRule rule = new LifecycleRule(ruleId0, matchPrefix0, LifecycleRule.RuleStatus.Enabled, 3);
            rule.setTags(matchTags0);
            request.AddLifecycleRule(rule);

            // Specify that objects that are created before the specified date expire. 
            rule = new LifecycleRule(ruleId1, matchPrefix1, LifecycleRule.RuleStatus.Enabled);
            rule.setCreatedBeforeDate(DateUtil.parseIso8601Date("2022-10-12T00:00:00.000Z"));
            rule.setTags(matchTags1);
            request.AddLifecycleRule(rule);

            // Specify that parts expire three days after they are last modified. 
            rule = new LifecycleRule(ruleId2, matchPrefix2, LifecycleRule.RuleStatus.Enabled);
            LifecycleRule.AbortMultipartUpload abortMultipartUpload = new LifecycleRule.AbortMultipartUpload();
            abortMultipartUpload.setExpirationDays(3);
            rule.setAbortMultipartUpload(abortMultipartUpload);
            request.AddLifecycleRule(rule);

            // Specify that parts that are created before the specific date expire. 
            rule = new LifecycleRule(ruleId3, matchPrefix3, LifecycleRule.RuleStatus.Enabled);
            abortMultipartUpload = new LifecycleRule.AbortMultipartUpload();
            abortMultipartUpload.setCreatedBeforeDate(DateUtil.parseIso8601Date("2022-10-12T00:00:00.000Z"));
            rule.setAbortMultipartUpload(abortMultipartUpload);
            request.AddLifecycleRule(rule);

            // Specify that the storage classes of objects are changed to IA 10 days after they are last modified, and to Archive 30 days after they are last modified. 
            rule = new LifecycleRule(ruleId4, matchPrefix4, LifecycleRule.RuleStatus.Enabled);
            List<LifecycleRule.StorageTransition> storageTransitions = new ArrayList<LifecycleRule.StorageTransition>();
            LifecycleRule.StorageTransition storageTransition = new LifecycleRule.StorageTransition();
            storageTransition.setStorageClass(StorageClass.IA);
            storageTransition.setExpirationDays(10);
            storageTransitions.add(storageTransition);
            storageTransition = new LifecycleRule.StorageTransition();
            storageTransition.setStorageClass(StorageClass.Archive);
            storageTransition.setExpirationDays(30);
            storageTransitions.add(storageTransition);
            rule.setStorageTransition(storageTransitions);
            request.AddLifecycleRule(rule);

            // Specify that the storage classes of objects that are last modified before October 12, 2022 are changed to Archive. 
            rule = new LifecycleRule(ruleId5, matchPrefix5, LifecycleRule.RuleStatus.Enabled);
            storageTransitions = new ArrayList<LifecycleRule.StorageTransition>();
            storageTransition = new LifecycleRule.StorageTransition();

            storageTransition.setCreatedBeforeDate(DateUtil.parseIso8601Date("2022-10-12T00:00:00.000Z"));

            storageTransition.setStorageClass(StorageClass.Archive);
            storageTransitions.add(storageTransition);
            rule.setStorageTransition(storageTransitions);
            request.AddLifecycleRule(rule);

            // Specify that rule6 is configured for versioning-enabled buckets. 
            rule = new LifecycleRule(ruleId6, matchPrefix6, LifecycleRule.RuleStatus.Enabled);
            // Specify that the storage classes of objects are changed to Archive 365 days after the objects are last modified. 
            storageTransitions = new ArrayList<LifecycleRule.StorageTransition>();
            storageTransition = new LifecycleRule.StorageTransition();
            storageTransition.setStorageClass(StorageClass.Archive);
            storageTransition.setExpirationDays(365);
            storageTransitions.add(storageTransition);
            rule.setStorageTransition(storageTransitions);
            // Configure the lifecycle rule to automatically delete expired delete markers. 
            rule.setExpiredDeleteMarker(true);
            // Specify that the storage classes of the previous versions of objects are changed to IA 10 days after the objects are last modified. 
            LifecycleRule.NoncurrentVersionStorageTransition noncurrentVersionStorageTransition =
                    new LifecycleRule.NoncurrentVersionStorageTransition().withNoncurrentDays(10).withStrorageClass(StorageClass.IA);
            // Specify that the storage classes of the previous versions of objects are changed to Archive 20 days after the objects are last modified. 
            LifecycleRule.NoncurrentVersionStorageTransition noncurrentVersionStorageTransition2 =
                    new LifecycleRule.NoncurrentVersionStorageTransition().withNoncurrentDays(20).withStrorageClass(StorageClass.Archive);
            // Specify that the previous versions of objects are deleted 30 days after the objects are last modified. 
            LifecycleRule.NoncurrentVersionExpiration noncurrentVersionExpiration = new LifecycleRule.NoncurrentVersionExpiration().withNoncurrentDays(30);
            List<LifecycleRule.NoncurrentVersionStorageTransition> noncurrentVersionStorageTransitions = new ArrayList<LifecycleRule.NoncurrentVersionStorageTransition>();
            noncurrentVersionStorageTransitions.add(noncurrentVersionStorageTransition);
            noncurrentVersionStorageTransitions.add(noncurrentVersionStorageTransition2);
            rule.setNoncurrentVersionExpiration(noncurrentVersionExpiration);
            rule.setNoncurrentVersionStorageTransitions(noncurrentVersionStorageTransitions);
            request.AddLifecycleRule(rule);

            // Initiate a request to configure lifecycle rules. 
            ossClient.setBucketLifecycle(request);

            // Query the lifecycle rules that are configured for the bucket. 
            List<LifecycleRule> listRules = ossClient.getBucketLifecycle(bucketName);
            for(LifecycleRule rules : listRules){
                System.out.println("ruleId="+rules.getId()+", matchPrefix="+rules.getPrefix());
            }
        } catch (OSSException oe) {
            System.out.println("Caught an OSSException, which means your request made it to OSS, "
                    + "but was rejected with an error response for some reason.");
            System.out.println("Error Message:" + oe.getErrorMessage());
            System.out.println("Error Code:" + oe.getErrorCode());
            System.out.println("Request ID:" + oe.getRequestId());
            System.out.println("Host ID:" + oe.getHostId());
        } catch (ClientException ce) {
            System.out.println("Caught an ClientException, which means the client encountered "
                    + "a serious internal problem while trying to communicate with OSS, "
                    + "such as not being able to access the network.");
            System.out.println("Error Message:" + ce.getMessage());
        } finally {
            if (ossClient != null) {
                ossClient.shutdown();
            }
        }
    }
}

PHP

<?php
if (is_file(__DIR__ . '/../autoload.php')) {
    require_once __DIR__ . '/../autoload.php';
}
if (is_file(__DIR__ . '/../vendor/autoload.php')) {
    require_once __DIR__ . '/../vendor/autoload.php';
}

use OSS\Credentials\EnvironmentVariableCredentialsProvider;
use OSS\OssClient;
use OSS\CoreOssException;
use OSS\Model\LifecycleConfig;
use OSS\Model\LifecycleRule;
use OSS\Model\LifecycleAction;

// Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
$provider = new EnvironmentVariableCredentialsProvider();
// In this example, the endpoint of the China (Hangzhou) region is used. Specify your actual endpoint. 
$endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
// Specify the name of the bucket. 
$bucket= "examplebucket";

// Specify the rule ID and the prefix contained in the names of the objects that match the rule. 
$ruleId0 = "rule0";
$matchPrefix0 = "A0/";
$ruleId1 = "rule1";
$matchPrefix1 = "A1/";

$lifecycleConfig = new LifecycleConfig();
$actions = array();
// Specify that objects expire three days after they are last modified. 
$actions[] = new LifecycleAction(OssClient::OSS_LIFECYCLE_EXPIRATION, OssClient::OSS_LIFECYCLE_TIMING_DAYS, 3);
$lifecycleRule = new LifecycleRule($ruleId0, $matchPrefix0, "Enabled", $actions);
$lifecycleConfig->addRule($lifecycleRule);
$actions = array();
// Specify that the objects that are created before the specified date expire. 
$actions[] = new LifecycleAction(OssClient::OSS_LIFECYCLE_EXPIRATION, OssClient::OSS_LIFECYCLE_TIMING_DATE, '2022-10-12T00:00:00.000Z');
$lifecycleRule = new LifecycleRule($ruleId1, $matchPrefix1, "Enabled", $actions);
$lifecycleConfig->addRule($lifecycleRule);
try {
    $config = array(
        "provider" => $provider,
        "endpoint" => $endpoint,
        "signatureVersion" => OssClient::OSS_SIGNATURE_VERSION_V4,
        "region"=> "cn-hangzhou"
    );
    $ossClient = new OssClient($config);

    $ossClient->putBucketLifecycle($bucket, $lifecycleConfig);
} catch (OssException $e) {
    printf(__FUNCTION__ . ": FAILED\n");
    printf($e->getMessage() . "\n");
    return;
}
print(__FUNCTION__ . ": OK" . "\n");

Node.js

const OSS = require('ali-oss')

const client = new OSS({
  // Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to oss-cn-hangzhou. 
  region: 'yourregion',
  // Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
  accessKeyId: process.env.OSS_ACCESS_KEY_ID,
  accessKeySecret: process.env.OSS_ACCESS_KEY_SECRET,
  authorizationV4: true,
  // Specify the name of the bucket. 
  bucket: 'yourbucketname'
});

async function getBucketLifecycle () {
  try {
    const result = await client.getBucketLifecycle('yourbucketname');
    console.log(result.rules); // Query the lifecycle rules. 

    result.rules.forEach(rule => {
      console.log(rule.id) // Query the rule IDs.  
      console.log(rule.status) // Query the status of the rules. 
      console.log(rule.tags) // Query the tags configured in the lifecycle rules. 
      console.log(rule.expiration.days) // Query the validity period configurations. 
      console.log(rule.expiration.createdBeforeDate) // Query the expiration date configurations. 
      // Query the rule for expired parts. 
      console.log(rule.abortMultipartUpload.days || rule.abortMultipartUpload.createdBeforeDate)
      // Query the rule of storage class conversion. 
      console.log(rule.transition.days || rule.transition.createdBeforeDate) // Query the conversion date configurations. 
      console.log(rule.transition.storageClass) // Query the configurations used to convert storage classes. 
      // Query the lifecycle rule to check whether expired delete markers are automatically deleted. 
      console.log(rule.transition.expiredObjectDeleteMarker)
      // Query the configurations used to convert the storage class of previous versions of the objects. 
      console.log(rule.noncurrentVersionTransition.noncurrentDays) // Query the conversion date configurations for objects of previous versions. 
      console.log(rule.noncurrentVersionTransition.storageClass) // Query the configurations used to convert the storage classes of previous versions of objects. 
    })
  } catch (e) {
    console.log(e);
  }
}
getBucketLifecycle();

Python

# -*- coding: utf-8 -*-
import oss2
from oss2.credentials import EnvironmentVariableCredentialsProvider
import datetime
from oss2.models import (LifecycleExpiration, LifecycleRule, 
                        BucketLifecycle,AbortMultipartUpload, 
                        TaggingRule, Tagging, StorageTransition,
                        NoncurrentVersionStorageTransition,
                        NoncurrentVersionExpiration)

# Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
auth = oss2.ProviderAuthV4(EnvironmentVariableCredentialsProvider())

# Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. 
endpoint = "https://oss-cn-hangzhou.aliyuncs.com"
# Specify the ID of the region that maps to the endpoint. Example: cn-hangzhou. This parameter is required if you use the signature algorithm V4.
region = "cn-hangzhou"

# Specify the name of the bucket.
bucket = oss2.Bucket(auth, endpoint, "examplebucket", region=region)

# Specify that objects expire three days after they are last modified. 
rule1 = LifecycleRule('rule1', 'tests/',
                      status=LifecycleRule.ENABLED,
                      expiration=LifecycleExpiration(days=3))

# Specify that objects created before the specified date expire. 
rule2 = LifecycleRule('rule2', 'tests2/',
                      status=LifecycleRule.ENABLED,
                      expiration=LifecycleExpiration(created_before_date=datetime.date(2023, 12, 12)))

# Specify that the parts expire three days after they are last modified. 
rule3 = LifecycleRule('rule3', 'tests3/',
                      status=LifecycleRule.ENABLED,
                      abort_multipart_upload=AbortMultipartUpload(days=3))

# Specify that parts created before the specified date expire. 
rule4 = LifecycleRule('rule4', 'tests4/',
                      status=LifecycleRule.ENABLED,
                      abort_multipart_upload = AbortMultipartUpload(created_before_date=datetime.date(2022, 12, 12)))

# Specify that the storage classes of objects are changed to Infrequent Access (IA) 20 days after they are last modified, and to Archive 30 days after they are last modified. 
rule5 = LifecycleRule('rule5', 'tests5/',
                      status=LifecycleRule.ENABLED,
                      storage_transitions=[StorageTransition(days=20,storage_class=oss2.BUCKET_STORAGE_CLASS_IA),
                            StorageTransition(days=30,storage_class=oss2.BUCKET_STORAGE_CLASS_ARCHIVE)])

# Specify the tag that you want the lifecycle rule to match. 
tagging_rule = TaggingRule()
tagging_rule.add('key1', 'value1')
tagging_rule.add('key2', 'value2')
tagging = Tagging(tagging_rule)

# Specify that the storage classes of objects that are created before December 12, 2022 are changed to IA. 
# Compared with the preceding rules, rule6 includes the tag condition to match objects. The rule takes effect for objects whose tagging configurations are key1=value1 and key2=value2. 
rule6 = LifecycleRule('rule6', 'tests6/',
                      status=LifecycleRule.ENABLED,
                      storage_transitions=[StorageTransition(created_before_date=datetime.date(2022, 12, 12),storage_class=oss2.BUCKET_STORAGE_CLASS_IA)],
                      tagging = tagging)

# rule7 is a lifecycle rule that applies to a versioning-enabled bucket. 
# Specify that the storage classes of objects are changed to Archive 365 days after they are last modified. 
# Specify that delete markers are automatically removed when they expire. 
# Specify that the storage classes of objects are changed to IA 12 days after they become previous versions. 
# Specify that the storage classes of objects are changed to Archive 20 days after they become previous versions. 
# Specify that objects are deleted 30 days after they become previous versions. 
rule7 = LifecycleRule('rule7', 'tests7/',
              status=LifecycleRule.ENABLED,
              storage_transitions=[StorageTransition(days=365, storage_class=oss2.BUCKET_STORAGE_CLASS_ARCHIVE)], 
              expiration=LifecycleExpiration(expired_detete_marker=True),
              noncurrent_version_sotrage_transitions = 
                    [NoncurrentVersionStorageTransition(12, oss2.BUCKET_STORAGE_CLASS_IA),
                     NoncurrentVersionStorageTransition(20, oss2.BUCKET_STORAGE_CLASS_ARCHIVE)],
              noncurrent_version_expiration = NoncurrentVersionExpiration(30))

lifecycle = BucketLifecycle([rule1, rule2, rule3, rule4, rule5, rule6, rule7])

bucket.put_bucket_lifecycle(lifecycle)

C#

using Aliyun.OSS;
using Aliyun.OSS.Common;
// Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. 
var endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
// Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
var accessKeyId = Environment.GetEnvironmentVariable("OSS_ACCESS_KEY_ID");
var accessKeySecret = Environment.GetEnvironmentVariable("OSS_ACCESS_KEY_SECRET");
// Specify the bucket name. Example: examplebucket. 
var bucketName = "examplebucket";
// Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to cn-hangzhou.
const string region = "cn-hangzhou";

// Create a ClientConfiguration instance and modify the default parameters based on your requirements.
var conf = new ClientConfiguration();

// Use the signature algorithm V4.
conf.SignatureVersion = SignatureVersion.V4;

// Create an OSSClient instance.
var client = new OssClient(endpoint, accessKeyId, accessKeySecret, conf);
client.SetRegion(region);
try
{
    var setBucketLifecycleRequest = new SetBucketLifecycleRequest(bucketName);
    // Create the first lifecycle rule. 
    LifecycleRule lcr1 = new LifecycleRule()
    {
        ID = "delete obsoleted files",
        Prefix = "obsoleted/",
        Status = RuleStatus.Enabled,
        ExpriationDays = 3,
        Tags = new Tag[1]
    };
    // Specify a tag for the rule. 
    var tag1 = new Tag
    {
        Key = "project",
        Value = "projectone"
    };

    lcr1.Tags[0] = tag1;

    // Create the second lifecycle rule. 
    LifecycleRule lcr2 = new LifecycleRule()
    {
        ID = "delete temporary files",
        Prefix = "temporary/",
        Status = RuleStatus.Enabled,
        ExpriationDays = 20,
        Tags = new Tag[1]         
    };
    // Specify a tag for the rule. 
    var tag2 = new Tag
    {
        Key = "user",
        Value = "jsmith"
    };
    lcr2.Tags[0] = tag2;

    // Specify that parts expire 30 days after they are last modified. 
    lcr2.AbortMultipartUpload = new LifecycleRule.LifeCycleExpiration()
    {
        Days = 30
    };

    LifecycleRule lcr3 = new LifecycleRule();
    lcr3.ID = "only NoncurrentVersionTransition";
    lcr3.Prefix = "test1";
    lcr3.Status = RuleStatus.Enabled;
    lcr3.NoncurrentVersionTransitions = new LifecycleRule.LifeCycleNoncurrentVersionTransition[2]
    {
        // Specify that the storage classes of the previous versions of objects are converted to IA 90 days after they are last modified. 
        new LifecycleRule.LifeCycleNoncurrentVersionTransition(){
            StorageClass = StorageClass.IA,
            NoncurrentDays = 90
        },
        // Specify that the storage classes of the previous versions of objects are converted to Archive 180 days after they are last modified. 
        new LifecycleRule.LifeCycleNoncurrentVersionTransition(){
            StorageClass = StorageClass.Archive,
            NoncurrentDays = 180
        }
    };
    setBucketLifecycleRequest.AddLifecycleRule(lcr1);
    setBucketLifecycleRequest.AddLifecycleRule(lcr2);
    setBucketLifecycleRequest.AddLifecycleRule(lcr3);

    // Configure lifecycle rules. 
    client.SetBucketLifecycle(setBucketLifecycleRequest);
    Console.WriteLine("Set bucket:{0} Lifecycle succeeded ", bucketName);
}
catch (OssException ex)
{
    Console.WriteLine("Failed with error code: {0}; Error info: {1}. \nRequestID:{2}\tHostID:{3}",
        ex.ErrorCode, ex.Message, ex.RequestId, ex.HostId);
}
catch (Exception ex)
{
    Console.WriteLine("Failed with error info: {0}", ex.Message);
}

Android-Java

PutBucketLifecycleRequest request = new PutBucketLifecycleRequest();
request.setBucketName("examplebucket");

BucketLifecycleRule rule1 = new BucketLifecycleRule();
// Specify the rule ID and the prefix contained in the names of the objects that match the rule. 
rule1.setIdentifier("1");
rule1.setPrefix("A");
// Specify whether to run the lifecycle rule. If this parameter is set to true, OSS periodically runs this rule. If this parameter is set to false, OSS ignores this rule. 
rule1.setStatus(true);
// Specify that objects expire 200 days after they are last modified. 
rule1.setDays("200");
// Specify that the storage classes of objects are converted to Archive 30 days after they are last modified.
rule1.setArchiveDays("30");
// Specify that parts expire three days after they fail to be uploaded. 
rule1.setMultipartDays("3");
// Specify that the storage classes of objects are converted to Infrequent Access (IA) 15 days after they are last modified. 
rule1.setIADays("15");

BucketLifecycleRule rule2 = new BucketLifecycleRule();
rule2.setIdentifier("2");
rule2.setPrefix("B");
rule2.setStatus(true);
rule2.setDays("300");
rule2.setArchiveDays("30");
rule2.setMultipartDays("3");
rule2.setIADays("15");

ArrayList<BucketLifecycleRule> lifecycleRules = new ArrayList<BucketLifecycleRule>();
lifecycleRules.add(rule1);
lifecycleRules.add(rule2);
request.setLifecycleRules(lifecycleRules);
OSSAsyncTask task = oss.asyncPutBucketLifecycle(request, new OSSCompletedCallback<PutBucketLifecycleRequest, PutBucketLifecycleResult>() {
    @Override
    public void onSuccess(PutBucketLifecycleRequest request, PutBucketLifecycleResult result) {
        OSSLog.logInfo("code::"+result.getStatusCode());

    }

    @Override
    public void onFailure(PutBucketLifecycleRequest request, ClientException clientException, ServiceException serviceException) {
        OSSLog.logError("error: "+serviceException.getRawMessage());

    }
});

task.waitUntilFinished();

Go

package main

import (
	"fmt"
	"os"

	"github.com/aliyun/aliyun-oss-go-sdk/oss"
)

func main() {
	// Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
	provider, err := oss.NewEnvironmentVariableCredentialsProvider()
	if err != nil {
		fmt.Println("Error:", err)
		os.Exit(-1)
	}
	// Create an OSSClient instance. 
	// Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. Specify the actual endpoint. 
	// Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to cn-hangzhou. Specify the actual region.
	clientOptions := []oss.ClientOption{oss.SetCredentialsProvider(&provider)}
	clientOptions = append(clientOptions, oss.Region("yourRegion"))
	// Specify the version of the signature algorithm.
	clientOptions = append(clientOptions, oss.AuthVersion(oss.AuthV4))
	client, err := oss.New("yourEndpoint", "", "", clientOptions...)
	if err != nil {
		fmt.Println("Error:", err)
		os.Exit(-1)
	}
	// Specify the name of the bucket. 
	bucketName := "examplebucket"
	// Configure a lifecycle rule and set ID to rule1. Specify that the objects whose names contain the foo prefix in the bucket expire three days after the objects are last modified. 
	rule1 := oss.BuildLifecycleRuleByDays("rule1", "foo/", true, 3)

	// If an object in a bucket for which versioning is enabled is a delete marker and has no other versions, the delete marker is deleted. 
	deleteMark := true
	expiration := oss.LifecycleExpiration{
		ExpiredObjectDeleteMarker: &deleteMark,
	}

	// Specify that the previous versions of objects are deleted 30 days after they are last modified. 
	versionExpiration := oss.LifecycleVersionExpiration{
		NoncurrentDays: 30,
	}

	// Specify that the storage classes of the previous versions of objects are converted to Infrequent Access (IA) 10 days after the objects are last modified. 
	versionTransition := oss.LifecycleVersionTransition{
		NoncurrentDays: 10,
		StorageClass:   "IA",
	}

	// Configure a lifecycle rule and set ID to rule2. 
	rule2 := oss.LifecycleRule{
		ID:                   "rule2",
		Prefix:               "yourObjectPrefix",
		Status:               "Enabled",
		Expiration:           &expiration,
		NonVersionExpiration: &versionExpiration,
		NonVersionTransitions: []oss.LifecycleVersionTransition{
			versionTransition,
		},
	}

	// Configure a lifecycle rule and set ID to rule3. This rule takes effect for objects that have the tag whose key is tag1 and value is value1. These objects expire three days after the objects are last modified. 
	rule3 := oss.LifecycleRule{
		ID:     "rule3",
		Prefix: "",
		Status: "Enabled",
		Tags: []oss.Tag{
			oss.Tag{
				Key:   "tag1",
				Value: "value1",
			},
		},
		Expiration: &oss.LifecycleExpiration{Days: 3},
	}

	// Configure lifecycle rules. 
	rules := []oss.LifecycleRule{rule1, rule2, rule3}
	err = client.SetBucketLifecycle(bucketName, rules)
	if err != nil {
		fmt.Println("Error:", err)
		os.Exit(-1)
	}
}

C++

#include <alibabacloud/oss/OssClient.h>
using namespace AlibabaCloud::OSS;

int main(void)
{
    /* Initialize information about the account used to access OSS. */
    
    /* Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. */
    std::string Endpoint = "yourEndpoint";
    /* Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to cn-hangzhou. */
    std::string Region = "yourRegion";
    /* Specify the name of the bucket. Example: examplebucket. */
    std::string BucketName = "examplebucket";

    /* Initialize resources such as network resources. */
    InitializeSdk();

    ClientConfiguration conf;
    conf.signatureVersion = SignatureVersionType::V4;
    /* Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. */
    auto credentialsProvider = std::make_shared<EnvironmentVariableCredentialsProvider>();
    OssClient client(Endpoint, credentialsProvider, conf);
    client.SetRegion(Region);

    SetBucketLifecycleRequest request(BucketName);
    std::string date("2022-10-12T00:00:00.000Z");

    /* Specify the tags of the objects that you want to match the rule. */
    Tagging tagging;
    tagging.addTag(Tag("key1", "value1"));
    tagging.addTag(Tag("key2", "value2"));

    /* Specify a lifecycle rule. */
    auto rule1 = LifecycleRule();
    rule1.setID("rule1");
    rule1.setPrefix("test1/");
    rule1.setStatus(RuleStatus::Enabled);
    rule1.setExpiration(3);
    rule1.setTags(tagging.Tags());

    /* Specify the expiration date. */
    auto rule2 = LifecycleRule();
    rule2.setID("rule2");
    rule2.setPrefix("test2/");
    rule2.setStatus(RuleStatus::Disabled);
    rule2.setExpiration(date);

    /* rule3 applies to versioning-enabled buckets. In this example, the rule is created in the Disabled state. */
    auto rule3 = LifecycleRule();
    rule3.setID("rule3");
    rule3.setPrefix("test3/");
    rule3.setStatus(RuleStatus::Disabled);

    /* Specify that the storage classes of objects are changed to Archive 365 days after the objects are last modified. */  
    auto transition = LifeCycleTransition();  
    transition.Expiration().setDays(365);
    transition.setStorageClass(StorageClass::Archive);
    rule3.addTransition(transition);

    /* Specify that expired delete markers are automatically deleted. */
    rule3.setExpiredObjectDeleteMarker(true);

    /* Specify that the storage classes of the previous versions of objects are changed to IA 10 days after the objects are last modified. */
    auto transition1 = LifeCycleTransition();  
    transition1.Expiration().setDays(10);
    transition1.setStorageClass(StorageClass::IA);

    /* Specify that the storage classes of the previous versions of objects are changed to Archive 20 days after the objects are last modified. */
    auto transition2 = LifeCycleTransition();  
    transition2.Expiration().setDays(20);
    transition2.setStorageClass(StorageClass::Archive);

    /* Specify that previous versions are deleted 30 days after the versions are updated. */
    auto expiration  = LifeCycleExpiration(30);
    rule3.setNoncurrentVersionExpiration(expiration);

    LifeCycleTransitionList noncurrentVersionStorageTransitions{transition1, transition2};
    rule3.setNoncurrentVersionTransitionList(noncurrentVersionStorageTransitions);

    /* Configure the lifecycle rules. */
    LifecycleRuleList list{rule1, rule2, rule3};
    request.setLifecycleRules(list);
    auto outcome = client.SetBucketLifecycle(request);

    if (!outcome.isSuccess()) {
        /* Handle exceptions. */
        std::cout << "SetBucketLifecycle fail" <<
        ",code:" << outcome.error().Code() <<
        ",message:" << outcome.error().Message() <<
        ",requestId:" << outcome.error().RequestId() << std::endl;
        return -1;
    }

    /* Release resources such as network resources. */
    ShutdownSdk();
    return 0;
}

C

#include "oss_api.h"
#include "aos_http_io.h"
/* Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. */
const char *endpoint = "yourEndpoint";
/* Specify the name of the bucket. Example: examplebucket. */
const char *bucket_name = "examplebucket";
/* Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to cn-hangzhou. */
const char *region = "yourRegion";
void init_options(oss_request_options_t *options)
{
    options->config = oss_config_create(options->pool);
    /* Use a char* string to initialize aos_string_t. */
    aos_str_set(&options->config->endpoint, endpoint);
    /* Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. */
    aos_str_set(&options->config->access_key_id, getenv("OSS_ACCESS_KEY_ID"));
    aos_str_set(&options->config->access_key_secret, getenv("OSS_ACCESS_KEY_SECRET"));
    // Specify two additional parameters.
    aos_str_set(&options->config->region, region);
    options->config->signature_version = 4;
    /* Specify whether to use CNAME to access OSS. A value of 0 indicates that CNAME is not used. */
    options->config->is_cname = 0;
    /* Specify network parameters, such as the timeout period. */
    options->ctl = aos_http_controller_create(options->pool, 0);
}
int main(int argc, char *argv[])
{
    /* Call the aos_http_io_initialize method in main() to initialize global resources, such as network resources and memory resources. */
    if (aos_http_io_initialize(NULL, 0) != AOSE_OK) {
        exit(1);
    }
    /* Create a memory pool to manage memory. aos_pool_t is equivalent to apr_pool_t. The code used to create a memory pool is included in the APR library. */
    aos_pool_t *pool;
    /* Create a memory pool. The value of the second parameter is NULL. This value indicates that the pool does not inherit other memory pools. */
    aos_pool_create(&pool, NULL);
    /* Create and initialize options. This parameter includes global configuration information such as endpoint, access_key_id, access_key_secret, is_cname, and curl. */
    oss_request_options_t *oss_client_options;
    /* Allocate memory resources in the memory pool to the options. */
    oss_client_options = oss_request_options_create(pool);
    /* Initialize oss_client_options. */
    init_options(oss_client_options);
    /* Initialize parameters. */
    aos_string_t bucket;
    aos_table_t *resp_headers = NULL; 
    aos_status_t *resp_status = NULL; 
    aos_str_set(&bucket, bucket_name);
    aos_list_t lifecycle_rule_list;
    aos_list_init(&lifecycle_rule_list);
    /* Specify the validity period. */
    oss_lifecycle_rule_content_t *rule_content_days = oss_create_lifecycle_rule_content(pool);
    aos_str_set(&rule_content_days->id, "rule-1");
    /* Specify the prefix that is contained in the names of the objects that you want to match the rule. */
    aos_str_set(&rule_content_days->prefix, "dir1");
    aos_str_set(&rule_content_days->status, "Enabled");
    rule_content_days->days = 3;
    aos_list_add_tail(&rule_content_days->node, &lifecycle_rule_list);
    /* Specify the expiration date. */
    oss_lifecycle_rule_content_t *rule_content_date = oss_create_lifecycle_rule_content(pool);
    aos_str_set(&rule_content_date->id, "rule-2");
    aos_str_set(&rule_content_date->prefix, "dir2");
    aos_str_set(&rule_content_date->status, "Enabled");
    /* The expiration date is displayed in UTC. */
    aos_str_set(&rule_content_date->date, "2023-10-11T00:00:00.000Z");
    aos_list_add_tail(&rule_content_date->node, &lifecycle_rule_list);
    /* Configure the lifecycle rule. */
    resp_status = oss_put_bucket_lifecycle(oss_client_options, &bucket, &lifecycle_rule_list, &resp_headers);
    if (aos_status_is_ok(resp_status)) {
        printf("put bucket lifecycle succeeded\n");
    } else {
        printf("put bucket lifecycle failed, code:%d, error_code:%s, error_msg:%s, request_id:%s\n",
            resp_status->code, resp_status->error_code, resp_status->error_msg, resp_status->req_id);
    }
    /* Release the memory pool to release the memory resources allocated for the request. */
    aos_pool_destroy(pool);
    /* Release the allocated global resources. */
    aos_http_io_deinitialize();
    return 0;
}

Ruby

require 'aliyun/oss'

client = Aliyun::OSS::Client.new(
  # In this example, the endpoint of the China (Hangzhou) region is used. Specify your actual endpoint. 
  endpoint: 'https://oss-cn-hangzhou.aliyuncs.com',
  # Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
  access_key_id: ENV['OSS_ACCESS_KEY_ID'],
  access_key_secret: ENV['OSS_ACCESS_KEY_SECRET']
)
# Specify the bucket name. 
bucket = client.get_bucket('examplebucket')
# Configure lifecycle rules. 
bucket.lifecycle = [
  Aliyun::OSS::LifeCycleRule.new(
    :id => 'rule1', :enable => true, :prefix => 'foo/', :expiry => 3),
  Aliyun::OSS::LifeCycleRule.new(
    :id => 'rule2', :enable => false, :prefix => 'bar/', :expiry => Date.new(2016, 1, 1))
]

Use ossutil

For more information about how to set lifecycle rules using ossutil, see put-bucket-lifecycle.

Use the REST API

If your application has specific requirements, you can send REST API requests directly. To do this, you must manually write code to calculate the signature. For more information, see PutBucketLifecycle.

Manually convert object storage classes using the CopyObject operation

You can call the CopyObject operation to convert the storage class of an object by overwriting the object.

  • If you convert the storage class of an object to IA, Archive, Cold Archive, or Deep Cold Archive, a minimum billable size of 64 KB, a minimum storage duration, and data retrieval fees may apply. For more information, see Usage notes.

  • Archive, Cold Archive, and Deep Cold Archive objects must be restored before their storage classes can be changed. For more information about how to restore objects, see Restore objects. If real-time access of Archive objects is enabled, you can change the storage class of Archive objects without restoring them. However, directly reading Archive objects in this mode incurs data retrieval fees. For more information, see Real-time access of Archive objects.

Note
  • In a bucket for which versioning is enabled, when you call the CopyObject operation to convert the storage class of an object, OSS automatically generates a unique version ID for the new object version. This version ID is returned in the x-oss-version-id response header (see the sketch after these notes).

  • In a bucket for which versioning is disabled or suspended, when you call the CopyObject operation to convert the storage class of an object, OSS automatically generates a version with a null version ID for the new object and overwrites the existing version with a null version ID. If the overwritten object is of the IA, Archive, Cold Archive, or Deep Cold Archive storage class, you may be charged for storage of an object for less than the minimum storage duration. For more information, see How am I charged for objects that are stored for less than the minimum storage duration?.
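
As a hedged illustration of the first note, the following Java lines build on the CopyObject example later in this topic and read the version ID of the new version. This assumes that the CopyObjectResult class in your SDK version exposes getVersionId().

// In a versioning-enabled bucket, OSS returns the ID of the newly generated
// version in the x-oss-version-id response header.
CopyObjectResult result = ossClient.copyObject(request);
System.out.println("New version ID: " + result.getVersionId());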

Rules for converting storage classes using CopyObject

  • LRS

    You can convert an LRS object between any of the following storage classes: Standard LRS, IA LRS, Archive LRS, Cold Archive LRS, and Deep Cold Archive LRS.

  • Zone-Redundant Storage (ZRS)

    You can convert a ZRS object between any of the following storage classes: Standard ZRS, IA ZRS, and Archive ZRS.

    When you convert an Archive ZRS object to a Standard ZRS or IA ZRS object, different operations are required based on the bucket's settings:

    • If real-time access of Archive objects is enabled for the bucket, you can directly convert the storage class of the Archive object without restoring it.

    • If real-time access of Archive objects is not enabled for the bucket, you must first restore the Archive object before you can convert its storage class (see the sketch after this list).
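
The following Java sketch shows the restore-then-convert flow. It is a minimal sketch that reuses the client setup from the examples in this topic; the bucket and object names are placeholders.

// Initiate restoration of the Archive object.
ossClient.restoreObject("examplebucket", "exampleobject.txt");
// Wait until the restoration completes. For example, poll the object metadata
// and check the x-oss-restore header before you continue.
// Copy the object onto itself with a new storage class to complete the conversion.
CopyObjectRequest request = new CopyObjectRequest("examplebucket", "exampleobject.txt", "examplebucket", "exampleobject.txt");
ObjectMetadata metadata = new ObjectMetadata();
metadata.setHeader("x-oss-storage-class", StorageClass.Standard);
request.setNewObjectMetadata(metadata);
ossClient.copyObject(request);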

Methods to convert storage classes using CopyObject

If you enable Prevent Object Overwrites for a bucket, you cannot call the CopyObject operation to convert object storage classes from any client, such as the OSS console, SDKs, or ossutil. In this case, you must use lifecycle rules to automatically convert storage classes.

Use the OSS console

When you change the storage class of an object in the console, the object size cannot exceed 1 GB. For objects larger than 1 GB, we recommend that you use an SDK or ossutil.

  1. Log on to the OSS console.

  2. Click Buckets, and then click the name of the destination bucket.

  3. In the left navigation pane, choose Object Management > Objects.

  4. On the Objects page, locate the destination object and choose more > Change Storage Class.

  5. Turn on the Retain User Metadata switch to preserve the object's custom metadata when the storage class is changed. We recommend that you keep this switch turned on.

  6. Select a new storage class and then click OK.

Use Alibaba Cloud SDKs

Java

import com.aliyun.oss.*;
import com.aliyun.oss.common.auth.*;
import com.aliyun.oss.common.comm.SignVersion;
import com.aliyun.oss.model.CopyObjectRequest;
import com.aliyun.oss.model.CopyObjectResult;
import com.aliyun.oss.model.ObjectMetadata;
import com.aliyun.oss.model.StorageClass;

public class Demo {
    public static void main(String[] args) throws Exception {
        // In this example, the endpoint of the China (Hangzhou) region is used. Specify your actual endpoint. 
        String endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
        // Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
        EnvironmentVariableCredentialsProvider credentialsProvider = CredentialsProviderFactory.newEnvironmentVariableCredentialsProvider();
        // Before you run the sample code, make sure that the bucket exists and that the object is of the Standard or IA storage class. 
        // Specify the name of the bucket. Example: examplebucket. 
        String bucketName = "examplebucket";
        // Specify the full path of the object. Do not include the bucket name in the full path. Example: exampleobject.txt. 
        String objectName = "exampleobject.txt";
        // Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to cn-hangzhou.
        String region = "cn-hangzhou";

        // Create an OSSClient instance.
        // Call the shutdown method to release resources when the OSSClient is no longer in use. 
        ClientBuilderConfiguration clientBuilderConfiguration = new ClientBuilderConfiguration();
        clientBuilderConfiguration.setSignatureVersion(SignVersion.V4);        
        OSS ossClient = OSSClientBuilder.create()
                .endpoint(endpoint)
                .credentialsProvider(credentialsProvider)
                .clientConfiguration(clientBuilderConfiguration)
                .region(region)
                .build();

        try {
            // Create a CopyObjectRequest object. The source and the destination are the same object, so the copy overwrites the object in place. 
            CopyObjectRequest request = new CopyObjectRequest(bucketName, objectName, bucketName, objectName);

            // Create an ObjectMetadata object. 
            ObjectMetadata objectMetadata = new ObjectMetadata();

            // Convert the storage class of the object to Archive. 
            objectMetadata.setHeader("x-oss-storage-class", StorageClass.Archive);
            // Convert the storage class of the object to Cold Archive. 
            // objectMetadata.setHeader("x-oss-storage-class", StorageClass.ColdArchive);
            // Convert the storage class of the object to Deep Cold Archive. 
            // objectMetadata.setHeader("x-oss-storage-class", StorageClass.DeepColdArchive);
            request.setNewObjectMetadata(objectMetadata);

            // Convert the storage class of the object. 
            CopyObjectResult result = ossClient.copyObject(request);
        } catch (OSSException oe) {
            System.out.println("Caught an OSSException, which means your request made it to OSS, "
                    + "but was rejected with an error response for some reason.");
            System.out.println("Error Message:" + oe.getErrorMessage());
            System.out.println("Error Code:" + oe.getErrorCode());
            System.out.println("Request ID:" + oe.getRequestId());
            System.out.println("Host ID:" + oe.getHostId());
        } catch (ClientException ce) {
            System.out.println("Caught an ClientException, which means the client encountered "
                    + "a serious internal problem while trying to communicate with OSS, "
                    + "such as not being able to access the network.");
            System.out.println("Error Message:" + ce.getMessage());
        } finally {
            if (ossClient != null) {
                ossClient.shutdown();
            }
        }
    }
}
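
The following PHP sample code converts the storage class of an object to Archive:
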
<?php
if (is_file(__DIR__ . '/../autoload.php')) {
    require_once __DIR__ . '/../autoload.php';
}
if (is_file(__DIR__ . '/../vendor/autoload.php')) {
    require_once __DIR__ . '/../vendor/autoload.php';
}

use OSS\Credentials\EnvironmentVariableCredentialsProvider;
use OSS\OssClient;
use OSS\CoreOssException;

// Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
$provider = new EnvironmentVariableCredentialsProvider();
// In this example, the endpoint of the China (Hangzhou) region is used. Specify your actual endpoint. 
$endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
// Specify the name of the bucket. 
$bucket = "<yourBucketName>";
// Specify the full path of the object. Do not include the bucket name in the full path. Example: destfolder/exampleobject.txt. 
$object = "<yourObjectName>";

$config = array(
    "provider" => $provider,
    "endpoint" => $endpoint,
    "signatureVersion" => OssClient::OSS_SIGNATURE_VERSION_V4,
    "region" => "cn-hangzhou"
);
$ossClient = new OssClient($config);

try {

    // Specify the storage class to which you want to convert the object. In this example, the storage class is set to Archive. 
    // The x-oss-metadata-directive header is set to REPLACE, so the metadata of the destination object is replaced with the headers specified in this request. 
    $copyOptions = array(
        OssClient::OSS_HEADERS => array(
            'x-oss-storage-class' => 'Archive',
            'x-oss-metadata-directive' => 'REPLACE',
        ),
    );
    
    $ossClient->copyObject($bucket, $object, $bucket, $object, $copyOptions);

} catch (OssException $e) {
    printf(__FUNCTION__ . ": FAILED\n");
    printf($e->getMessage() . "\n");
    return;
}

print(__FUNCTION__ . ": OK" . "\n");
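
The following Node.js sample code converts the storage class of an object to Archive:
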
const OSS = require('ali-oss');

const client = new OSS({
  // Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to oss-cn-hangzhou. 
  region: 'yourregion',
  // Obtain access credentials from environment variables. Before you run the sample code, make sure that you have configured environment variables OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET. 
  accessKeyId: process.env.OSS_ACCESS_KEY_ID,
  accessKeySecret: process.env.OSS_ACCESS_KEY_SECRET,
  authorizationV4: true,
  // Specify the name of the bucket. 
  bucket: 'yourbucketname'
})
const options = {
  // Specify the destination storage class. In this example, the storage class is set to Archive.
  headers: { 'x-oss-storage-class': 'Archive' }
};

// Copy the object onto itself to convert its storage class. The first parameter is the destination object name and the second parameter is the source object name.
client.copy('Objectname', 'Objectname', options).then((res) => {
  console.log(res);
}).catch(err => {
  console.log(err);
});
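
The following Python sample code converts the storage class of an object to Archive:
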
# -*- coding: utf-8 -*-
import oss2
from oss2.credentials import EnvironmentVariableCredentialsProvider
# Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
auth = oss2.ProviderAuthV4(EnvironmentVariableCredentialsProvider())

# Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. 
endpoint = "https://oss-cn-hangzhou.aliyuncs.com"

# Specify the ID of the region that maps to the endpoint. Example: cn-hangzhou. This parameter is required if you use the signature algorithm V4.
region = "cn-hangzhou"

# Specify the name of your bucket.
bucket = oss2.Bucket(auth, endpoint, "yourBucketName", region=region)

# Specify the full path of the object. Do not include the bucket name in the full path. Example: exampledir/exampleobject.txt. 
# Make sure that the storage class of the object is Standard or IA. 
object_name = 'exampledir/exampleobject.txt'

# Convert the storage class of the object to Archive by setting the x-oss-storage-class header to oss2.BUCKET_STORAGE_CLASS_ARCHIVE. 
headers = {'x-oss-storage-class': oss2.BUCKET_STORAGE_CLASS_ARCHIVE}
# Convert the storage class of the object to Cold Archive by setting the x-oss-storage-class header to oss2.BUCKET_STORAGE_CLASS_COLD_ARCHIVE. 
# headers = {'x-oss-storage-class': oss2.BUCKET_STORAGE_CLASS_COLD_ARCHIVE}
# Convert the storage class of the object to Deep Cold Archive by setting the x-oss-storage-class header to oss2.models.BUCKET_STORAGE_CLASS_DEEP_COLD_ARCHIVE. 
# headers = {'x-oss-storage-class': oss2.models.BUCKET_STORAGE_CLASS_DEEP_COLD_ARCHIVE}
# Convert the storage class of the object. 
bucket.copy_object(bucket.bucket_name, object_name, object_name, headers)                    
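
The following Go sample code converts the storage class of an object to Archive:
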
package main

import (
	"log"

	"github.com/aliyun/aliyun-oss-go-sdk/oss"
)

func main() {
	// Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
	provider, err := oss.NewEnvironmentVariableCredentialsProvider()
	if err != nil {
		log.Fatalf("Failed to create credentials provider: %v", err)
	}

	// Create an OSSClient instance. 
	// Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. Specify your actual endpoint. 
	// Specify the region in which the bucket is located. For example, if your bucket is located in the China (Hangzhou) region, set the region to cn-hangzhou.  
	clientOptions := []oss.ClientOption{oss.SetCredentialsProvider(&provider)}
	clientOptions = append(clientOptions, oss.Region("yourRegion"))
	// Specify the signature version.
	clientOptions = append(clientOptions, oss.AuthVersion(oss.AuthV4))
	client, err := oss.New("yourEndpoint", "", "", clientOptions...)
	if err != nil {
		log.Fatalf("Failed to create OSS client: %v", err)
	}

	// Specify the name of the bucket. 
	bucketName := "yourBucketName" // Replace yourBucketName with the actual bucket name.
	// Specify the full path of the object. Do not include the bucket name in the full path. 
	objectName := "yourObjectName" // Replace yourObjectName with the actual object path.

	bucket, err := client.Bucket(bucketName)
	if err != nil {
		log.Fatalf("Failed to get bucket: %v", err)
	}

	// Convert the storage class of the object to Archive. 
	_, err = bucket.CopyObject(objectName, objectName, oss.ObjectStorageClass(oss.StorageArchive))
	if err != nil {
		log.Fatalf("Failed to change storage class of object: %v", err)
	}

	log.Println("Storage class changed successfully.")
}
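
The following Objective-C sample code converts the storage class of an object to Archive:
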
OSSCopyObjectRequest * copy = [OSSCopyObjectRequest new];
copy.sourceBucketName = @"examplebucket";
copy.sourceObjectKey = @"exampleobject.txt";
copy.bucketName = @"examplebucket";
copy.objectKey = @"exampleobject.txt";
// Set the storage class of the exampleobject.txt object to Archive. 
copy.objectMeta = @{@"x-oss-storage-class" : @"Archive"};

OSSTask * task = [client copyObject:copy];
[task continueWithBlock:^id(OSSTask *task) {
    if (!task.error) {
        NSLog(@"copy object success!");
    } else {
        NSLog(@"copy object failed, error: %@" , task.error);
    }
    return nil;
}];
// Implement synchronous blocking to wait for the task to complete. 
// [task waitUntilFinished];  
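
The following C++ sample code converts the storage class of an object to Archive:
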
#include <iostream>
#include <alibabacloud/oss/OssClient.h>

using namespace AlibabaCloud::OSS;

int main(void)
{
    /* Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. */
    std::string Endpoint = "yourEndpoint";
    /* Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to cn-hangzhou. */
    std::string Region = "yourRegion";
    /* Specify the name of the bucket. Example: examplebucket. */
    std::string BucketName = "examplebucket";
    /* Specify the full path of the object. Do not include the bucket name in the full path. Example: exampledir/exampleobject.txt. */
    std::string ObjectName = "exampledir/exampleobject.txt";
  
    /* Initialize resources, such as network resources. */
    InitializeSdk();
    ClientConfiguration conf;
    conf.signatureVersion = SignatureVersionType::V4;
    /* Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. */
    auto credentialsProvider = std::make_shared<EnvironmentVariableCredentialsProvider>();
    OssClient client(Endpoint, credentialsProvider, conf);
    client.SetRegion(Region);
    
    /* Specify the storage class to which you want to convert the object. In this example, the storage class is set to Archive. */
    ObjectMetaData objectMeta;
    objectMeta.addHeader("x-oss-storage-class", "Archive");
    
    /* In this example, the object is copied onto itself to convert its storage class. */
    CopyObjectRequest request(BucketName, ObjectName, objectMeta);
    request.setCopySource(BucketName, ObjectName);
    
    /* Convert the storage class of the object to the specified storage class. */
    auto outcome = client.CopyObject(request);
    if (!outcome.isSuccess()) {
        /* Handle exceptions. */
        std::cout << "CopyObject fail" <<
        ",code:" << outcome.error().Code() <<
        ",message:" << outcome.error().Message() <<
        ",requestId:" << outcome.error().RequestId() << std::endl;
        return -1;
    }
    
    /* Release resources, such as network resources. */
    ShutdownSdk();
    return 0;
}

Use ossutil

For more information about how to convert the storage class of an object using ossutil, see copy-object.

Use the REST API

If your application has specific requirements, you can send REST API requests directly. To do this, you must manually write code to calculate the signature. For more information, see CopyObject.

Usage notes

When you convert the storage class of an object to IA, Archive, Cold Archive, or Deep Cold Archive, note the following:

Minimum billable size

The minimum billable size of an object is 64 KB. If an object is smaller than 64 KB, you are charged for 64 KB of storage. For example, a 10 KB IA object is billed as a 64 KB object.

Minimum storage duration

The minimum storage duration is 30 days for IA objects, 60 days for Archive objects, 180 days for Cold Archive objects, and 180 days for Deep Cold Archive objects. If an object is deleted, overwritten, or converted before its minimum storage duration elapses, you are charged storage fees for the remaining days of the minimum storage duration. For more information, see Storage fees. A sketch of this calculation follows the examples below.

  • Automatically convert object storage classes using lifecycle rules

    • When you convert the storage class of an object to IA or Archive, the storage duration of the object is not recalculated.

      For example, an object named a.txt is stored as a Standard object in OSS for 10 days. After its storage class is converted to IA by a lifecycle rule, it only needs to be stored for another 20 days to meet the 30-day minimum storage duration requirement.

    • When you convert the storage class of an object to Cold Archive or Deep Cold Archive, the storage duration of the object is recalculated.

      • Example 1: An object named a.txt is stored as a Standard or IA object in OSS for 10 days. After its storage class is converted to Cold Archive or Deep Cold Archive by a lifecycle rule, it must be stored for another 180 days to meet the 180-day minimum storage duration requirement.

      • Example 2: An object named a.txt is stored as a Cold Archive object in OSS for 30 days. When its storage class is converted to Deep Cold Archive by a lifecycle rule, you are charged for storing a Cold Archive object for less than the 180-day minimum storage duration. In addition, after being converted to Deep Cold Archive, it must be stored for another 180 days to meet the 180-day minimum storage duration requirement.

  • Manually convert object storage classes using the CopyObject operation

    When you manually convert an object to any storage class using the CopyObject operation, the storage duration of the object is recalculated.

    For example, an object named a.txt is stored as a Standard object in OSS for 10 days. After you manually convert the object to the IA storage class using the CopyObject operation, it must be stored for another 30 days to meet the 30-day minimum storage duration requirement.

Note

If you rename an IA, Archive, Cold Archive, or Deep Cold Archive object or overwrite it by uploading an object with the same name before the minimum storage duration elapses, you are charged for storage of an object for less than the minimum storage duration. For example, if you rename an IA object after it has been stored for 29 days, OSS treats the renamed object as a new object and resets its storage duration. This means the object must be stored for another 30 days to meet the minimum storage duration requirement for the IA storage class.
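
Based on the rules above, the charge for an object that does not meet its minimum storage duration can be estimated as the remaining days multiplied by the storage unit price. The following is a minimal Python sketch; the unit price is a placeholder assumption, not an actual OSS price:

# Estimate the fee for storing an IA object for less than the 30-day minimum storage duration.
MIN_DAYS_IA = 30
PRICE_PER_GB_DAY = 0.0001  # Placeholder unit price. See the OSS pricing page for actual prices.

def early_deletion_fee(size_gb, days_stored, min_days=MIN_DAYS_IA, price=PRICE_PER_GB_DAY):
    # You are charged storage fees for the remaining days of the minimum storage duration.
    remaining_days = max(min_days - days_stored, 0)
    return size_gb * remaining_days * price

# A 10 GB IA object deleted after 29 days is charged for 1 remaining day.
print(early_deletion_fee(size_gb=10, days_stored=29))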

Restore time

Restoring an Archive, Cold Archive, or Deep Cold Archive object takes time. If your business scenario requires real-time access to objects, we recommend that you do not convert the storage class of objects to Archive, Cold Archive, or Deep Cold Archive.

Request fees

The request fees vary based on the conversion method and the source storage class of the object:

  • Lifecycle rule

    • Source storage class: Standard, IA, Archive, or Cold Archive

      You are charged for PUT requests based on the source storage class of the object. The fees are billed to the current bucket.

  • CopyObject

    • Source storage class: Archive

      • If real-time access of Archive objects is enabled:

        • You are charged for GET requests based on the source storage class of the object. The fees are billed to the source bucket.

        • You are charged for PUT requests based on the destination storage class of the object. The fees are billed to the destination bucket.

      • If real-time access of Archive objects is not enabled:

        You are charged for PUT requests based on the source storage class of the object. The fees are billed to the destination bucket.

    • Source storage class: Standard, IA, Cold Archive, or Deep Cold Archive

      You are charged for PUT requests based on the source storage class of the object. The fees are billed to the destination bucket.

If real-time access of Archive objects is enabled, you can use the CopyObject operation to convert the storage class of a source Archive object without restoring it in advance. In this case, you are not charged data restoration fees for Archive storage, but you are charged data retrieval fees for real-time access.

If real-time access of Archive objects is not enabled, you must restore a source Archive object before you can use the CopyObject operation to convert its storage class. In this case, you are charged data restoration fees for Archive storage, but you are not charged data retrieval fees for real-time access.

For more information, see Data processing fees.

Data retrieval fees

When you access IA objects, you are charged additional data retrieval fees based on the actual amount of data accessed. When you restore Archive, Cold Archive, and Deep Cold Archive objects, you are charged additional data restoration fees. After you enable real-time access of Archive objects, you are charged additional fees for real-time access when you directly access Archive objects. These fees are billed separately from outbound traffic fees. If an object is accessed more than once per month on average, the cost of using the IA, Archive, Cold Archive, or Deep Cold Archive storage class may be higher than that of the Standard storage class.

Temporary storage fees

When a Cold Archive or Deep Cold Archive object is restored, a Standard replica of the object is created for access. You are charged temporary storage fees for this replica at the storage rate of the Standard storage class for the duration of the restoration period.

FAQ

Can I use a lifecycle rule based on the last modified time to convert the storage class of an object from Infrequent Access to Standard?

No, you cannot. You can use one of the following methods to convert the storage class of an object from IA to Standard: