Object Storage Service (OSS) provides the following storage classes: Standard, Infrequent Access (IA), Archive, and Cold Archive. You can configure lifecycle rules or call the CopyObject operation to convert the storage class of an object based on your business requirements.

Warning

We recommend that you do not change the storage classes of objects in the .dlsdata/ directory of a bucket for which the OSS-HDFS service is enabled. This directory stores OSS-HDFS data.

If you change the storage class of an object in the .dlsdata/ directory to Infrequent Access (IA), you can still use the OSS-HDFS service to access the object. If you change the storage class to Archive or Cold Archive, you cannot access the object through the OSS-HDFS service until you restore the Archive or Cold Archive object.

Configure lifecycle rules to automatically convert the storage classes of objects

You can configure lifecycle rules to allow OSS to automatically convert the storage classes of objects. For more information about the storage classes provided by OSS, see Overview.

Storage class conversion based on the last modified time of objects

  • Locally redundant storage (LRS)

    The storage classes of LRS objects can be converted based on the following rules:

    • Conversions from Standard LRS to IA LRS, Archive LRS, or Cold Archive LRS.
    • Conversions from IA LRS to Archive LRS or Cold Archive LRS.
    • Conversions from Archive LRS to Cold Archive LRS.

    If you configure multiple policies for a bucket to convert the storage classes of objects to IA, Archive, and Cold Archive, the periods specified in the policies must meet the following requirement:

    Period of time required for conversion to IA LRS < Period of time required for conversion to Archive LRS < Period of time required for conversion to Cold Archive LRS

  • Zone-redundant storage (ZRS)

    The storage classes of ZRS objects can only be converted from Standard to IA.

For more information, see Lifecycle rules based on the last modified time.
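The ordering requirement above (IA period < Archive period < Cold Archive period) can be checked before you submit rules. The following is an illustrative sketch, not part of any OSS SDK:

```python
# Hypothetical helper: validate that conversion periods satisfy
# IA < Archive < Cold Archive before submitting lifecycle rules.
def validate_transition_periods(transitions):
    """transitions: dict mapping a storage class name to the number of
    days after the last modified time at which conversion occurs."""
    order = ["IA", "Archive", "ColdArchive"]
    # Keep only the classes that the rules actually convert to, in canonical order.
    days = [transitions[c] for c in order if c in transitions]
    # Each colder class must use a strictly longer period than the previous one.
    return all(a < b for a, b in zip(days, days[1:]))

print(validate_transition_periods({"IA": 30, "Archive": 60, "ColdArchive": 180}))  # True
print(validate_transition_periods({"IA": 90, "Archive": 60}))                      # False
```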

Storage class conversion based on the last access time of objects

Storage class conversion based on the last access time supports only conversions from Standard LRS to IA LRS and from Standard ZRS to IA ZRS.

You can configure a lifecycle rule that automatically converts Standard objects to IA objects a specific number of days after the last access time of the objects. After the objects are converted, you can specify whether to convert the objects back to Standard objects or remain IA objects when the objects are accessed. For more information, see Lifecycle rules based on the last access time.
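The timing described above can be modeled with simple date arithmetic. The following sketch illustrates when an access-time-based rule converts an object; it is an illustrative model, not SDK code:

```python
from datetime import date, timedelta

# Illustrative model: when an access-time-based rule with the given validity
# period converts a Standard object to IA.
def ia_conversion_date(last_access: date, validity_days: int = 30) -> date:
    return last_access + timedelta(days=validity_days)

# An object last accessed on September 1, 2021 is converted on October 1, 2021.
print(ia_conversion_date(date(2021, 9, 1)))  # 2021-10-01
```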

Configure lifecycle rules to convert the storage classes of objects

You can configure lifecycle rules in several ways. Based on the rules that you configure, OSS automatically converts matching objects to the specified storage class or deletes expired objects and parts within the specified period. Use one of the following methods to configure lifecycle rules:
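Whichever method you use, the configuration is ultimately delivered to OSS as a PutBucketLifecycle XML body. The following sketch builds a minimal body for one rule (the prefix, days, and rule ID are placeholder values; element names follow the OSS API):

```python
import xml.etree.ElementTree as ET

# Build a minimal PutBucketLifecycle body: objects prefixed with "logs/" are
# converted to IA 30 days after the last modified time and deleted after 365 days.
config = ET.Element("LifecycleConfiguration")
rule = ET.SubElement(config, "Rule")
ET.SubElement(rule, "ID").text = "rule1"
ET.SubElement(rule, "Prefix").text = "logs/"
ET.SubElement(rule, "Status").text = "Enabled"
transition = ET.SubElement(rule, "Transition")
ET.SubElement(transition, "Days").text = "30"
ET.SubElement(transition, "StorageClass").text = "IA"
expiration = ET.SubElement(rule, "Expiration")
ET.SubElement(expiration, "Days").text = "365"
print(ET.tostring(config, encoding="unicode"))
```

The SDK examples later in this topic build the same structure through typed objects instead of raw XML.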

Use the OSS console

  1. Log on to the OSS console.
  2. In the left-side navigation pane, click Buckets. On the Buckets page, click the name of the desired bucket.
  3. In the left-side navigation pane, choose Basic Settings > Lifecycle. In the Lifecycle section, click Configure.
  4. Turn on Enable access tracking on the Lifecycle page if you want to create lifecycle rules based on the last access time of objects.
  5. On the page that appears, click Create Rule. In the Create Rule panel, configure the parameters described in the following table.
    • Parameters for unversioned buckets
      Section Parameter Description
      Basic Settings Status

      Specify the status of the lifecycle rule. Valid values: Enabled and Disabled.

      Applied To

      Specify the objects to which the lifecycle rule applies. Valid values: Files with Specified Prefix and Whole Bucket.

      Allow Overlapped Prefixes

      If you select Allow Overlapped Prefixes, you can configure lifecycle rules with the same or overlapping prefixes without specifying tags.

      Notice
      • If you want OSS to automatically detect whether rules with the same or overlapping prefixes are configured, do not select Allow Overlapped Prefixes.
      • If you want to use this parameter, contact technical support.
      Prefix
      Specify the prefix in the names of the objects to which the lifecycle rule applies.
      • If you set the prefix to img, all objects whose names start with img, such as imgtest.png and img/example.jpg, match the lifecycle rule.
      • If you set the prefix to img/, all objects whose names start with img/, such as img/example.jpg and img/test.jpg, match the lifecycle rule.
      Tagging

      Specify tags. The rule applies only to objects that have the specified tags. For example, if you select Files with Specified Prefix and set Prefix to img, Key to a, and Value to 1, the rule applies to all objects that have the img prefix in their names and have the tag a=1. For more information about object tagging, see Object tagging.

      NOT

      The NOT parameter is used to specify that the lifecycle rule does not apply to objects that have the specified prefix and tags.

      Notice
      • If you enable NOT, each lifecycle rule must contain at least a prefix or a tag.
      • The key of the tag specified by the NOT syntax cannot be the same as the key specified by the Tagging parameter.
      • If you enable NOT, lifecycle rules that apply to parts cannot be configured.
      • If you want to use this parameter, contact technical support.
      Policy for Objects File Lifecycle

      Configure rules for objects to specify when the objects expire. Valid values: Validity Period (Days), Expiration Date, and Disabled. If you select Disabled, the configurations of File Lifecycle do not take effect.

      Lifecycle-based Rules

      Configure the policy to convert the storage class of objects or delete expired objects.

      Example 1: You select Access Time, set Validity Period (Days) to 30, and specify that objects are converted to IA (Not Converted After Access) after the validity period ends. In this case, objects that were last accessed on September 1, 2021 are converted to Infrequent Access (IA) on October 1, 2021.

      Note If you configure a lifecycle rule based on the last access time, you can specify that the rule applies only to objects that are larger than 64 KB or to all objects in the bucket.

      Example 2: You select Modified Time, set Expiration Date to September 24, 2021, and specify that objects last modified before this date are deleted. In this case, objects that are last modified before September 24, 2021 are automatically deleted and cannot be recovered.

      Policy for Parts Part Lifecycle

      Specify the operations that you want to perform on expired parts. If you select Tagging, this parameter is unavailable. You can select Validity Period (Days), Expiration Date, or Disabled. If you select Disabled, the configurations of Part Lifecycle do not take effect.

      Notice Each lifecycle rule must contain at least one object expiration policy or part expiration policy.
      Rules for Parts

      Specify when parts expire based on the value of Part Lifecycle. Expired parts are automatically deleted and cannot be recovered.

    • Parameters for versioned buckets

      Configure the parameters in the Basic Settings and Policy for Parts sections in the same way as the parameters configured for unversioned buckets. The following table describes only the parameters that are different from the parameters that you configure for unversioned buckets.

      Section Parameter Description
      Policy for Current Versions Clean Up Delete Marker

      If you enable versioning for the bucket, you can configure the Clean Up Delete Marker parameter. Other parameters are the same as those you can configure for unversioned buckets.

      If you select Clean Up Delete Marker and an object has only one version, which is a delete marker, OSS considers the delete marker expired and removes it. If an object has multiple versions and the current version is a delete marker, OSS retains the delete marker. For more information about delete markers, see Delete marker.

      Policy for Previous Versions File Lifecycle

      Specify when previous versions expire. Valid values: Validity Period (Days) and Disabled. If you select Disabled, File Lifecycle does not take effect.

      Lifecycle-based Rules

      Specify the number of days for which objects are retained after they become previous versions. After this period ends, the specified operations are performed on the previous versions on the following day. For example, if you set Validity Period (Days) to 30, objects that become previous versions on September 1, 2021 are converted to the specified storage class or deleted on October 1, 2021.

      Notice An object becomes a previous version at the time when its next version is last modified.
  6. Click OK.

    After the lifecycle rule is saved, you can view the rule in the lifecycle rule list.
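The previous-version timing described in the steps above can be modeled directly: version i becomes a previous version when version i+1 is last modified, and the configured action runs the specified number of days later. The following is an illustrative sketch, not SDK code:

```python
from datetime import date, timedelta

# Illustrative model: version i becomes a previous version at the moment
# version i+1 is last modified; the configured lifecycle action runs
# validity-period days after that moment.
def previous_version_schedule(version_mtimes, validity_days):
    """version_mtimes: last-modified dates of an object's versions, oldest first.
    Returns, for each non-current version, the date the lifecycle action runs."""
    return [mtime_next + timedelta(days=validity_days)
            for mtime_next in version_mtimes[1:]]

# v1 becomes a previous version when v2 is written on September 1, 2021;
# with a 30-day validity period it is processed on October 1, 2021.
history = [date(2021, 8, 15), date(2021, 9, 1)]
print(previous_version_schedule(history, 30))  # [datetime.date(2021, 10, 1)]
```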

Use OSS SDKs

The following code provides examples on how to configure lifecycle rules by using OSS SDKs for common programming languages. For more information about the sample code to configure lifecycle rules by using OSS SDKs for other programming languages, see Overview.

OSS ossClient = null;
try {
    // Set yourEndpoint to the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set yourEndpoint to https://oss-cn-hangzhou.aliyuncs.com. 
    String endpoint = "yourEndpoint";
    // Security risks may arise if you use the AccessKey pair of an Alibaba Cloud account to access OSS because the account has permissions on all API operations. We recommend that you use a Resource Access Management (RAM) user to call API operations or perform routine O&M. To create a RAM user, log on to the RAM console. 
    String accessKeyId = "yourAccessKeyId";
    String accessKeySecret = "yourAccessKeySecret";
    // Specify the name of the bucket for which you want to configure a lifecycle rule. Example: examplebucket. 
    String bucketName = "examplebucket";

    // Create an OSSClient instance. 
    ossClient = new OSSClientBuilder().build(endpoint, accessKeyId, accessKeySecret);

    // Create a request by using SetBucketLifecycleRequest. 
    SetBucketLifecycleRequest request = new SetBucketLifecycleRequest(bucketName);

    // Specify the ID for the lifecycle rule. 
    String ruleId0 = "rule0";
    // Specify the prefix that you want the lifecycle rule to match. 
    String matchPrefix0 = "A0/";
    // Specify the tag that you want the lifecycle rule to match. 
    Map<String, String> matchTags0 = new HashMap<String, String>();
    // Specify the key and value of the object tag. Example: the key is owner, and the value is John. 
    matchTags0.put("owner", "John");


    String ruleId1 = "rule1";
    String matchPrefix1 = "A1/";
    Map<String, String> matchTags1 = new HashMap<String, String>();
    matchTags1.put("type", "document");

    String ruleId2 = "rule2";
    String matchPrefix2 = "A2/";

    String ruleId3 = "rule3";
    String matchPrefix3 = "A3/";

    String ruleId4 = "rule4";
    String matchPrefix4 = "A4/";

    String ruleId5 = "rule5";
    String matchPrefix5 = "A5/";

    String ruleId6 = "rule6";
    String matchPrefix6 = "A6/";

    // Set the expiration time to three days after the last modified time. 
    LifecycleRule rule = new LifecycleRule(ruleId0, matchPrefix0, LifecycleRule.RuleStatus.Enabled, 3);
    rule.setTags(matchTags0);
    request.AddLifecycleRule(rule);

    // Specify that objects that are created before the expiration date expire. 
    rule = new LifecycleRule(ruleId1, matchPrefix1, LifecycleRule.RuleStatus.Enabled);
    rule.setCreatedBeforeDate(DateUtil.parseIso8601Date("2022-10-12T00:00:00.000Z"));
    rule.setTags(matchTags1);
    request.AddLifecycleRule(rule);

    // Set the expiration time to three days for parts of an object. 
    rule = new LifecycleRule(ruleId2, matchPrefix2, LifecycleRule.RuleStatus.Enabled);
    LifecycleRule.AbortMultipartUpload abortMultipartUpload = new LifecycleRule.AbortMultipartUpload();
    abortMultipartUpload.setExpirationDays(3);
    rule.setAbortMultipartUpload(abortMultipartUpload);
    request.AddLifecycleRule(rule);

    // Specify that parts that are created before the expiration date expire. 
    rule = new LifecycleRule(ruleId3, matchPrefix3, LifecycleRule.RuleStatus.Enabled);
    abortMultipartUpload = new LifecycleRule.AbortMultipartUpload();
    abortMultipartUpload.setCreatedBeforeDate(DateUtil.parseIso8601Date("2022-10-12T00:00:00.000Z"));
    rule.setAbortMultipartUpload(abortMultipartUpload);
    request.AddLifecycleRule(rule);

    // Specify that the storage class of objects is converted to IA 10 days after they are last modified, and to Archive 30 days after they are last modified. 
    rule = new LifecycleRule(ruleId4, matchPrefix4, LifecycleRule.RuleStatus.Enabled);
    List<LifecycleRule.StorageTransition> storageTransitions = new ArrayList<LifecycleRule.StorageTransition>();
    LifecycleRule.StorageTransition storageTransition = new LifecycleRule.StorageTransition();
    storageTransition.setStorageClass(StorageClass.IA);
    storageTransition.setExpirationDays(10);
    storageTransitions.add(storageTransition);
    storageTransition = new LifecycleRule.StorageTransition();
    storageTransition.setStorageClass(StorageClass.Archive);
    storageTransition.setExpirationDays(30);
    storageTransitions.add(storageTransition);
    rule.setStorageTransition(storageTransitions);
    request.AddLifecycleRule(rule);

    // Specify that the storage class of objects that are last modified before October 12, 2022 is converted to Archive. 
    rule = new LifecycleRule(ruleId5, matchPrefix5, LifecycleRule.RuleStatus.Enabled);
    storageTransitions = new ArrayList<LifecycleRule.StorageTransition>();
    storageTransition = new LifecycleRule.StorageTransition();

    storageTransition.setCreatedBeforeDate(DateUtil.parseIso8601Date("2022-10-12T00:00:00.000Z"));

    storageTransition.setStorageClass(StorageClass.Archive);
    storageTransitions.add(storageTransition);
    rule.setStorageTransition(storageTransitions);
    request.AddLifecycleRule(rule);

    // Specify that rule6 is configured for versioned buckets. 
    rule = new LifecycleRule(ruleId6, matchPrefix6, LifecycleRule.RuleStatus.Enabled);
    // Specify that the storage class of objects is converted to Archive 365 days after the objects are last modified. 
    storageTransitions = new ArrayList<LifecycleRule.StorageTransition>();
    storageTransition = new LifecycleRule.StorageTransition();
    storageTransition.setStorageClass(StorageClass.Archive);
    storageTransition.setExpirationDays(365);
    storageTransitions.add(storageTransition);
    rule.setStorageTransition(storageTransitions);
    // Configure the lifecycle rule to automatically delete expired delete markers. 
    rule.setExpiredDeleteMarker(true);
    // Specify that the storage class of the previous versions of objects is converted to IA 10 days after the objects become previous versions. 
    LifecycleRule.NoncurrentVersionStorageTransition noncurrentVersionStorageTransition =
            new LifecycleRule.NoncurrentVersionStorageTransition().withNoncurrentDays(10).withStrorageClass(StorageClass.IA);
    // Specify that the storage class of the previous versions of objects is converted to Archive 20 days after the objects become previous versions. 
    LifecycleRule.NoncurrentVersionStorageTransition noncurrentVersionStorageTransition2 =
            new LifecycleRule.NoncurrentVersionStorageTransition().withNoncurrentDays(20).withStrorageClass(StorageClass.Archive);
    // Specify that the previous versions of objects are deleted 30 days after the objects become previous versions. 
    LifecycleRule.NoncurrentVersionExpiration noncurrentVersionExpiration = new LifecycleRule.NoncurrentVersionExpiration().withNoncurrentDays(30);
    List<LifecycleRule.NoncurrentVersionStorageTransition> noncurrentVersionStorageTransitions = new ArrayList<LifecycleRule.NoncurrentVersionStorageTransition>();
    noncurrentVersionStorageTransitions.add(noncurrentVersionStorageTransition);
    noncurrentVersionStorageTransitions.add(noncurrentVersionStorageTransition2);
    rule.setNoncurrentVersionExpiration(noncurrentVersionExpiration);
    rule.setNoncurrentVersionStorageTransitions(noncurrentVersionStorageTransitions);
    request.AddLifecycleRule(rule);

    // Initiate a request to configure lifecycle rules. 
    ossClient.setBucketLifecycle(request);

    // Query the lifecycle rules configured for the bucket. 
    List<LifecycleRule> listRules = ossClient.getBucketLifecycle(bucketName);
    for(LifecycleRule rules : listRules){
        System.out.println("ruleId="+rules.getId()+", matchPrefix="+rules.getPrefix());
    }
} catch (Exception e) {
    e.printStackTrace();
} finally {
    if(ossClient != null){
        // Shut down the OSSClient instance. 
        ossClient.shutdown();
    }
}
<?php
if (is_file(__DIR__ . '/../autoload.php')) {
    require_once __DIR__ . '/../autoload.php';
}
if (is_file(__DIR__ . '/../vendor/autoload.php')) {
    require_once __DIR__ . '/../vendor/autoload.php';
}

use OSS\OssClient;
use OSS\Core\OssException;
use OSS\Model\LifecycleConfig;
use OSS\Model\LifecycleRule;
use OSS\Model\LifecycleAction;

// Security risks may arise if you use the AccessKey pair of an Alibaba Cloud account to log on to OSS because the account has permissions on all API operations. We recommend that you use your RAM user's credentials to call API operations or perform routine operations and maintenance. To create a RAM user, log on to the RAM console.
$accessKeyId = "<yourAccessKeyId>";
$accessKeySecret = "<yourAccessKeySecret>";
// The endpoint of the China (Hangzhou) region is used in this example. Specify the actual endpoint.
$endpoint = "http://oss-cn-hangzhou.aliyuncs.com";
$bucket= "<yourBucketName>";

// Specify the rule IDs and the object prefixes to match the rules.
$ruleId0 = "rule0";
$matchPrefix0 = "A0/";
$ruleId1 = "rule1";
$matchPrefix1 = "A1/";

$lifecycleConfig = new LifecycleConfig();
$actions = array();
// Specify that objects expire three days after they are last modified.
$actions[] = new LifecycleAction(OssClient::OSS_LIFECYCLE_EXPIRATION, OssClient::OSS_LIFECYCLE_TIMING_DAYS, 3);
$lifecycleRule = new LifecycleRule($ruleId0, $matchPrefix0, "Enabled", $actions);
$lifecycleConfig->addRule($lifecycleRule);
$actions = array();
// Specify that the objects last modified before the specified date expire.
$actions[] = new LifecycleAction(OssClient::OSS_LIFECYCLE_EXPIRATION, OssClient::OSS_LIFECYCLE_TIMING_DATE, '2022-10-12T00:00:00.000Z');
$lifecycleRule = new LifecycleRule($ruleId1, $matchPrefix1, "Enabled", $actions);
$lifecycleConfig->addRule($lifecycleRule);
try {
    $ossClient = new OssClient($accessKeyId, $accessKeySecret, $endpoint);

    $ossClient->putBucketLifecycle($bucket, $lifecycleConfig);
} catch (OssException $e) {
    printf(__FUNCTION__ . ": FAILED\n");
    printf($e->getMessage() . "\n");
    return;
}
print(__FUNCTION__ . ": OK" . "\n");
            
const OSS = require('ali-oss')
const client = new OSS({
  // Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to oss-cn-hangzhou. 
  region: 'yourregion',
  // The AccessKey pair of an Alibaba Cloud account has permissions on all API operations. Using these credentials to perform operations in OSS is a high-risk operation. We recommend that you use a RAM user to call API operations or perform routine O&M. To create a RAM user, log on to the RAM console. 
  accessKeyId: 'yourAccessKeyId',
  accessKeySecret: 'yourAccessKeySecret',
  // Set yourbucketname to the name of your bucket. 
  bucket: 'yourbucketname'
});

async function putBucketLifecycle(lifecycle) {
  try {
    const result = await client.putBucketLifecycle('yourbucketname', [
    lifecycle
  ]);
    console.log(result);
  } catch (e) {
    console.log(e);
  }
}

const lifecycle1 = {
  id: 'rule1',
  status: 'Enabled',
  prefix: 'foo/',
  expiration: {
    days: 3 // Specify that the current versions of objects expire three days after they are last modified. 
  }
}
putBucketLifecycle(lifecycle1)

const lifecycle2 = {
  id: 'rule2',
  status: 'Enabled',
  prefix: 'foo/', 
  expiration: {
    createdBeforeDate: '2020-02-18T00:00:00.000Z' // Specify that objects created before the specified date expire. 
  },
}
putBucketLifecycle(lifecycle2)

const lifecycle3 = {
  id: 'rule3',
  status: 'Enabled',
  prefix: 'foo/', 
  abortMultipartUpload: {
    days: 3 // Specify that parts expire three days after they are last modified. 
  },
}
putBucketLifecycle(lifecycle3)

const lifecycle4 = {
  id: 'rule4',
  status: 'Enabled',
  prefix: 'foo/', 
  abortMultipartUpload: {
    createdBeforeDate: '2020-02-18T00:00:00.000Z' // Specify that parts created before the specified date expire. 
  },
}
putBucketLifecycle(lifecycle4)

const lifecycle5 = {
  id: 'rule5',
  status: 'Enabled',
  prefix: 'foo/', 
  transition: {
    // Specify that the storage class of the current versions of objects is converted to Archive 20 days after they are last modified. 
    days: 20,
    storageClass: 'Archive'
  },
  expiration: {
    days: 21 // Specify that the current versions of objects expire 21 days after they are last modified. 
  },
}
putBucketLifecycle(lifecycle5)

const lifecycle6 = {
  id: 'rule6',
  status: 'Enabled',
  prefix: 'foo/', 
  transition: {
    // Specify that the storage class of the objects that are last modified before the specified date is converted to Archive. 
    createdBeforeDate: '2020-02-18T00:00:00.000Z', // Specify that the conversion date is earlier than the expiration date. 
    storageClass: 'Archive'
  },
  expiration: {
    createdBeforeDate: '2020-02-19T00:00:00.000Z' // Specify that objects created before the specified date expire. 
  },
}
putBucketLifecycle(lifecycle6)

const lifecycle7 = {
  id: 'rule7',
  status: 'Enabled',
  prefix: 'foo/', 
  noncurrentVersionExpiration: {
    noncurrentDays: 1 // Specify that objects expire one day after they become previous versions. 
  },
}
putBucketLifecycle(lifecycle7)

const lifecycle8 = {
  id: 'rule8',
  status: 'Enabled',
  prefix: 'foo/', 
  expiredObjectDeleteMarker: true // Specify that delete markers are automatically removed when they expire. 
}
putBucketLifecycle(lifecycle8)

const lifecycle9 = {
  id: 'rule9',
  status: 'Enabled',
  prefix: 'foo/', 
  // Specify that the storage class of objects is converted to IA 10 days after they become previous versions. 
  noncurrentVersionTransition: {
    noncurrentDays: '10',
    storageClass: 'IA'
  }
}
putBucketLifecycle(lifecycle9)

const lifecycle10 = {
  id: 'rule10',
  status: 'Enabled',
  prefix: 'foo/', 
  // Specify that the storage class of objects is converted to IA 10 days after they become previous versions. 
  noncurrentVersionTransition: {
    noncurrentDays: '10',
    storageClass: 'IA'
  },
  // Specify object tags that match the rules. 
  tag: [{
    key: 'key1',
    value: 'value1'
  },
   {
     key: 'key2',
     value: 'value2'
   }]
}
putBucketLifecycle(lifecycle10)
# -*- coding: utf-8 -*-
import oss2
import datetime
from oss2.models import (LifecycleExpiration, LifecycleRule, 
                        BucketLifecycle,AbortMultipartUpload, 
                        TaggingRule, Tagging, StorageTransition,
                        NoncurrentVersionStorageTransition,
                        NoncurrentVersionExpiration)

# The AccessKey pair of an Alibaba Cloud account has permissions on all API operations. Using these credentials to perform operations in OSS is a high-risk operation. We recommend that you use a RAM user to call API operations or perform routine O&M. To create a RAM user, log on to the RAM console. 
auth = oss2.Auth('yourAccessKeyId', 'yourAccessKeySecret')
# Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. 
# Set yourBucketName to the name of your bucket. Example: examplebucket. 
bucket = oss2.Bucket(auth, 'http://oss-cn-hangzhou.aliyuncs.com', 'examplebucket')

# Specify that objects expire three days after they are last modified. 
rule1 = LifecycleRule('rule1', 'tests/',
                      status=LifecycleRule.ENABLED,
                      expiration=LifecycleExpiration(days=3))

# Specify that the objects created before the specified date expire. 
rule2 = LifecycleRule('rule2', 'tests2/',
                      status=LifecycleRule.ENABLED,
expiration=LifecycleExpiration(created_before_date=datetime.date(2022, 12, 12)))

# Specify that parts expire three days after they are last modified. 
rule3 = LifecycleRule('rule3', 'tests3/',
                      status=LifecycleRule.ENABLED,
            abort_multipart_upload=AbortMultipartUpload(days=3))

# Specify that the parts created before the specified date expire. 
rule4 = LifecycleRule('rule4', 'tests4/',
                      status=LifecycleRule.ENABLED,
                      abort_multipart_upload = AbortMultipartUpload(created_before_date=datetime.date(2022, 12, 12)))

# Specify that the storage classes of objects are converted to Infrequent Access (IA) 20 days after they are last modified, and to Archive 30 days after they are last modified. 
rule5 = LifecycleRule('rule5', 'tests5/',
                      status=LifecycleRule.ENABLED,
                      storage_transitions=[StorageTransition(days=20,storage_class=oss2.BUCKET_STORAGE_CLASS_IA),
                            StorageTransition(days=30,storage_class=oss2.BUCKET_STORAGE_CLASS_ARCHIVE)])

# Specify tags that match the rules. 
tagging_rule = TaggingRule()
tagging_rule.add('key1', 'value1')
tagging_rule.add('key2', 'value2')
tagging = Tagging(tagging_rule)

# Specify that the storage class of objects that are last modified before December 12, 2022 is converted to IA. 
# Compared with the preceding rules, rule6 also specifies tags that the rule matches. The rule applies only to objects that have both the key1=value1 and key2=value2 tags. 
rule6 = LifecycleRule('rule6', 'tests6/',
                      status=LifecycleRule.ENABLED,
                      storage_transitions=[StorageTransition(created_before_date=datetime.date(2022, 12, 12),storage_class=oss2.BUCKET_STORAGE_CLASS_IA)],
                      tagging = tagging)

# Specify that rule7 is configured for versioned buckets. 
# Specify that the storage classes of objects are converted to Archive 365 days after they are last modified. 
# Specify that delete markers are automatically removed when they expire. 
# Specify that the storage classes of objects are converted to IA 12 days after they become previous versions. 
# Specify that the storage classes of objects are converted to Archive 20 days after they become previous versions. 
# Specify that objects are deleted 30 days after they become previous versions. 
rule7 = LifecycleRule('rule7', 'tests7/',
              status=LifecycleRule.ENABLED,
              storage_transitions=[StorageTransition(days=365, storage_class=oss2.BUCKET_STORAGE_CLASS_ARCHIVE)], 
              expiration=LifecycleExpiration(expired_detete_marker=True),
              noncurrent_version_sotrage_transitions = 
                    [NoncurrentVersionStorageTransition(12, oss2.BUCKET_STORAGE_CLASS_IA),
                     NoncurrentVersionStorageTransition(20, oss2.BUCKET_STORAGE_CLASS_ARCHIVE)],
              noncurrent_version_expiration = NoncurrentVersionExpiration(30))

lifecycle = BucketLifecycle([rule1, rule2, rule3, rule4, rule5, rule6, rule7])

bucket.put_bucket_lifecycle(lifecycle)
using Aliyun.OSS;
using Aliyun.OSS.Common;
// Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. 
var endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
// The AccessKey pair of an Alibaba Cloud account has permissions on all API operations. Using these credentials to perform operations in OSS is a high-risk operation. We recommend that you use a RAM user to call API operations or perform routine O&M. To create a RAM user, log on to the RAM console. 
var accessKeyId = "yourAccessKeyId";
var accessKeySecret = "yourAccessKeySecret";
// Specify the name of the bucket. Example: examplebucket. 
var bucketName = "examplebucket";

// Create an OSSClient instance. 
var client = new OssClient(endpoint, accessKeyId, accessKeySecret);
try
{
    var setBucketLifecycleRequest = new SetBucketLifecycleRequest(bucketName);
    // Create the first lifecycle rule. 
    LifecycleRule lcr1 = new LifecycleRule()
    {
        ID = "delete obsoleted files",
        Prefix = "obsoleted/",
        Status = RuleStatus.Enabled,
        ExpriationDays = 3,
        Tags = new Tag[1]
    };
    // Specify tags for the first rule. 
    var tag1 = new Tag
    {
        Key = "project",
        Value = "projectone"
    };

    lcr1.Tags[0] = tag1;

    // Create the second lifecycle rule. 
    LifecycleRule lcr2 = new LifecycleRule()
    {
        ID = "delete temporary files",
        Prefix = "temporary/",
        Status = RuleStatus.Enabled,
        ExpriationDays = 20,
        Tags = new Tag[1]         
    };
    // Specify tags for the second rule. 
    var tag2 = new Tag
    {
        Key = "user",
        Value = "jsmith"
    };
    lcr2.Tags[0] = tag2;

    // Specify that parts expire 30 days after they are last modified. 
    lcr2.AbortMultipartUpload = new LifecycleRule.LifeCycleExpiration()
    {
        Days = 30
    };

    LifecycleRule lcr3 = new LifecycleRule();
    lcr3.ID = "only NoncurrentVersionTransition";
    lcr3.Prefix = "test1";
    lcr3.Status = RuleStatus.Enabled;
    lcr3.NoncurrentVersionTransitions = new LifecycleRule.LifeCycleNoncurrentVersionTransition[2]
    {
        // Specify that the storage class of the previous versions of objects is converted to IA 90 days after they are last modified. 
        new LifecycleRule.LifeCycleNoncurrentVersionTransition(){
            StorageClass = StorageClass.IA,
            NoncurrentDays = 90
        },
        // Specify that the storage class of the previous versions of objects is converted to Archive 180 days after they are last modified. 
        new LifecycleRule.LifeCycleNoncurrentVersionTransition(){
            StorageClass = StorageClass.Archive,
            NoncurrentDays = 180
        }
    };
    setBucketLifecycleRequest.AddLifecycleRule(lcr1);
    setBucketLifecycleRequest.AddLifecycleRule(lcr2);
    setBucketLifecycleRequest.AddLifecycleRule(lcr3);

    // Configure the lifecycle rules. 
    client.SetBucketLifecycle(setBucketLifecycleRequest);
    Console.WriteLine("Set bucket:{0} Lifecycle succeeded ", bucketName);
}
catch (OssException ex)
{
    Console.WriteLine("Failed with error code: {0}; Error info: {1}. \nRequestID:{2}\tHostID:{3}",
        ex.ErrorCode, ex.Message, ex.RequestId, ex.HostId);
}
catch (Exception ex)
{
    Console.WriteLine("Failed with error info: {0}", ex.Message);
}
PutBucketLifecycleRequest request = new PutBucketLifecycleRequest();
request.setBucketName("examplebucket");

BucketLifecycleRule rule1 = new BucketLifecycleRule();
// Specify the rule ID and the prefix of the object names that match the rule. 
rule1.setIdentifier("1");
rule1.setPrefix("A");
// Specify whether to run the lifecycle rule. If this parameter is set to true, OSS periodically runs this rule. If this parameter is set to false, OSS ignores this rule. 
rule1.setStatus(true);
// Specify that objects expire 200 days after they are last modified. 
rule1.setDays("200");
// Specify that the storage class of objects is converted to Archive 30 days after they are last modified.
rule1.setArchiveDays("30");
// Specify that parts expire three days after they fail to be uploaded. 
rule1.setMultipartDays("3");
// Specify that the storage class of objects is converted to IA 15 days after they are last modified. 
rule1.setIADays("15");

BucketLifecycleRule rule2 = new BucketLifecycleRule();
rule2.setIdentifier("2");
rule2.setPrefix("B");
rule2.setStatus(true);
rule2.setDays("300");
rule2.setArchiveDays("30");
rule2.setMultipartDays("3");
rule2.setIADays("15");

ArrayList<BucketLifecycleRule> lifecycleRules = new ArrayList<BucketLifecycleRule>();
lifecycleRules.add(rule1);
lifecycleRules.add(rule2);
request.setLifecycleRules(lifecycleRules);
OSSAsyncTask task = oss.asyncPutBucketLifecycle(request, new OSSCompletedCallback<PutBucketLifecycleRequest, PutBucketLifecycleResult>() {
    @Override
    public void onSuccess(PutBucketLifecycleRequest request, PutBucketLifecycleResult result) {
        OSSLog.logInfo("code::"+result.getStatusCode());

    }

    @Override
    public void onFailure(PutBucketLifecycleRequest request, ClientException clientException, ServiceException serviceException) {
        OSSLog.logError("error: "+serviceException.getRawMessage());

    }
});

task.waitUntilFinished();
package main

import (
    "fmt"
    "os"

    "github.com/aliyun/aliyun-oss-go-sdk/oss"
)

func main() {
    // Create an OSSClient instance. 
    // Set yourEndpoint to the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set yourEndpoint to https://oss-cn-hangzhou.aliyuncs.com. Specify the endpoint based on your business requirements. 
    // Security risks may arise if you use the AccessKey pair of an Alibaba Cloud account to access OSS because the account has permissions on all API operations. We recommend that you use a RAM user to call API operations or perform routine O&M. To create a RAM user, log on to the RAM console. 
    client, err := oss.New("yourEndpoint", "yourAccessKeyId", "yourAccessKeySecret")
    if err != nil {
        fmt.Println("Error:", err)
        os.Exit(-1)
    }

    // Create a lifecycle rule and set id to rule1, enable to true, prefix to foo/, and expiry to Days 3. This rule applies to objects whose names are prefixed with foo/. These objects expire three days after they are last modified. 
    rule1 := oss.BuildLifecycleRuleByDays("rule1", "foo/", true, 3)

    // If an object in a versioned bucket is a delete marker and has no other versions, the delete marker is removed. 
    deleteMark := true
    expiration := oss.LifecycleExpiration{
        ExpiredObjectDeleteMarker: &deleteMark,
    }

    // Specify that objects are deleted when they expire 30 days after they become previous versions. 
    versionExpiration := oss.LifecycleVersionExpiration{
        NoncurrentDays: 30,
    }

    // Specify that the storage class of objects is converted to IA 10 days after they become previous versions. 
    versionTransition := oss.LifecycleVersionTransition{
        NoncurrentDays: 10,
        StorageClass:   "IA",
    }

    // Create a lifecycle rule and set id to rule2. 
    rule2 := oss.LifecycleRule{
        ID:                   "rule2",
        Prefix:               "yourObjectPrefix",
        Status:               "Enabled",
        Expiration:           &expiration,
        NonVersionExpiration: &versionExpiration,
        NonVersionTransition: &versionTransition,
    }

    // Create a lifecycle rule and set id to rule3. This rule applies to objects that have the tag with the tag key of tagA and tag value of A. These objects expire three days after they are last modified. 
    rule3 := oss.LifecycleRule{
        ID:     "rule3",
        Prefix: "",
        Status: "Enabled",
        Tags: []oss.Tag{
            oss.Tag{
                Key:   "tagA",
                Value: "A",
            },
        },
        Expiration: &oss.LifecycleExpiration{Days: 3},
    }

    // Configure the lifecycle rules. 
    rules := []oss.LifecycleRule{rule1, rule2, rule3}
    // Specify the name of the bucket. Example: examplebucket. 
    bucketName := "examplebucket"
    err = client.SetBucketLifecycle(bucketName, rules)
    if err != nil {
        fmt.Println("Error:", err)
        os.Exit(-1)
    }
}
#include <alibabacloud/oss/OssClient.h>
#include "../src/utils/Utils.h"
using namespace AlibabaCloud::OSS;

int main(void)
{
    /* Initialize the account information. */
    /* Security risks may arise if you use the AccessKey pair of an Alibaba Cloud account to access OSS because the account has permissions on all API operations. We recommend that you use a RAM user to call API operations or perform routine O&M. To create a RAM user, log on to the RAM console. */
    std::string AccessKeyId = "yourAccessKeyId";
    std::string AccessKeySecret = "yourAccessKeySecret";
    /* Set yourEndpoint to the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set Endpoint to https://oss-cn-hangzhou.aliyuncs.com. */
    std::string Endpoint = "yourEndpoint";
    /* Specify the name of the bucket. Example: examplebucket. */
    std::string BucketName = "examplebucket";

    /* Initialize resources such as network resources. */
    InitializeSdk();

    ClientConfiguration conf;
    OssClient client(Endpoint, AccessKeyId, AccessKeySecret, conf);

    /* View the lifecycle rules. */
    auto outcome = client.GetBucketLifecycle(BucketName);

    if (outcome.isSuccess()) {
        std::cout << "GetBucketLifecycle success," << std::endl;
        for (auto const rule : outcome.result().LifecycleRules()) {
            std::cout << "rule:" << rule.ID() << "," << rule.Prefix() << "," << rule.Status() << ","
            "hasExpiration:" << rule.hasExpiration() << "," <<
            "hasTransitionList:" << rule.hasTransitionList() << "," << std::endl;

            auto taglist = rule.Tags();
            for (const auto& tag : taglist)
            {
                std::cout <<"GetBucketLifecycle tag success, Key:" 
                << tag.Key() << "; Value:" << tag.Value() << std::endl;
            }

            /* View the lifecycle rules to check whether expired delete markers are automatically deleted. */
            if (rule.ExpiredObjectDeleteMarker()) {
                std::cout << "rule expired delete marker: " << rule.ExpiredObjectDeleteMarker() << std::endl;
            }

            /* View the configurations used to convert the storage class of previous versions of the objects. */
            if (rule.hasNoncurrentVersionTransitionList()) {
                for (auto const lifeCycleTransition : rule.NoncurrentVersionTransitionList()) {
                    std::cout << "rule noncurrent versions trans days:" << lifeCycleTransition.Expiration().Days() <<
                    " trans storage class: " << ToStorageClassName(lifeCycleTransition.StorageClass()) << std::endl;
                }
            }

            /* View the expiration configurations for the previous versions of the objects. */
            if (rule.hasNoncurrentVersionExpiration()) {
                std::cout << "rule noncurrent versions expiration days:" << rule.NoncurrentVersionExpiration().Days() << std::endl;
            }

        }
    }
    else {
        /* Handle exceptions. */
        std::cout << "GetBucketLifecycle fail" <<
        ",code:" << outcome.error().Code() <<
        ",message:" << outcome.error().Message() <<
        ",requestId:" << outcome.error().RequestId() << std::endl;
        ShutdownSdk();
        return -1;
    }

    /* Release resources such as networks. */
    ShutdownSdk();
    return 0;
}
#include "oss_api.h"
#include "aos_http_io.h"
/* Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. */
const char *endpoint = "yourEndpoint";
/* The AccessKey pair of an Alibaba Cloud account has permissions on all API operations. Using these credentials to perform operations in OSS is a high-risk operation. We recommend that you use a RAM user to call API operations or perform routine O&M. To create a RAM user, log on to the RAM console. */
const char *access_key_id = "yourAccessKeyId";
const char *access_key_secret = "yourAccessKeySecret";
/* Specify the bucket name. Example: examplebucket. */
const char *bucket_name = "examplebucket";
void init_options(oss_request_options_t *options)
{
    options->config = oss_config_create(options->pool);
    /* Use a char* string to initialize the aos_string_t data type. */
    aos_str_set(&options->config->endpoint, endpoint);
    aos_str_set(&options->config->access_key_id, access_key_id);
    aos_str_set(&options->config->access_key_secret, access_key_secret);
    /* Specify whether to use CNAME to access OSS. The value 0 indicates that CNAME is not used. */
    options->config->is_cname = 0;
    /* Configure network parameters such as the timeout period. */
    options->ctl = aos_http_controller_create(options->pool, 0);
}
int main(int argc, char *argv[])
{
    /* Call the aos_http_io_initialize method in main() to initialize global resources such as networks and memory. */
    if (aos_http_io_initialize(NULL, 0) != AOSE_OK) {
        exit(1);
    }
    /* Create a memory pool to manage memory. aos_pool_t is equivalent to apr_pool_t. The code used to create a memory pool is included in the APR library. */
    aos_pool_t *pool;
    /* Create a memory pool. The value of the second parameter is NULL. This value indicates that the pool does not inherit other memory pools. */
    aos_pool_create(&pool, NULL);
    /* Create and initialize options. This parameter includes global configuration information such as endpoint, access_key_id, access_key_secret, is_cname, and curl. */
    oss_request_options_t *oss_client_options;
    /* Allocate the memory resources in the memory pool to the options. */
    oss_client_options = oss_request_options_create(pool);
    /* Initialize oss_client_options. */
    init_options(oss_client_options);
    /* Initialize the parameters. */
    aos_string_t bucket;
    aos_table_t *resp_headers = NULL; 
    aos_status_t *resp_status = NULL; 
    aos_str_set(&bucket, bucket_name);
    aos_list_t lifecycle_rule_list;
    /* Create lifecycle rules for the bucket. */
    aos_list_init(&lifecycle_rule_list);
    /* Specify the validity period. */
    oss_lifecycle_rule_content_t *rule_content_days = oss_create_lifecycle_rule_content(pool);
    aos_str_set(&rule_content_days->id, "rule-1");
    aos_str_set(&rule_content_days->prefix, "obsoleted");
    aos_str_set(&rule_content_days->status, "Enabled");
    rule_content_days->days = 3;
    aos_list_add_tail(&rule_content_days->node, &lifecycle_rule_list);
    /* Specify the expiration date. */
    oss_lifecycle_rule_content_t *rule_content_date = oss_create_lifecycle_rule_content(pool);
    aos_str_set(&rule_content_date->id, "rule-2");
    aos_str_set(&rule_content_date->prefix, "delete");
    aos_str_set(&rule_content_date->status, "Enabled");
    aos_str_set(&rule_content_date->date, "2022-10-11T00:00:00.000Z");
    aos_list_add_tail(&rule_content_date->node, &lifecycle_rule_list);
    /* Configure the lifecycle rules. */
    resp_status = oss_put_bucket_lifecycle(oss_client_options, &bucket, &lifecycle_rule_list, &resp_headers);
    if (aos_status_is_ok(resp_status)) {
        printf("put bucket lifecycle succeeded\n");
    } else {
        printf("put bucket lifecycle failed, code:%d, error_code:%s, error_msg:%s, request_id:%s\n",
            resp_status->code, resp_status->error_code, resp_status->error_msg, resp_status->req_id);
    }
    /* Release the memory pool. This operation releases the memory resources allocated for the request. */
    aos_pool_destroy(pool);
    /* Release the allocated global resources. */
    aos_http_io_deinitialize();
    return 0;
}

Use ossutil

For more information about how to configure lifecycle rules by using ossutil, see Add or modify lifecycle rules.

Use RESTful APIs

If your business requires a high level of customization, you can directly call RESTful APIs. To directly call an API, you must include the signature calculation in your code. For more information, see PutBucketLifecycle.

Call CopyObject to manually convert the storage classes of objects

You can call the CopyObject operation to convert the storage class of an object by overwriting the object.
  • If you convert the storage class of an object to IA, Archive, or Cold Archive, you are charged storage fees based on the object size and storage duration. You are also charged data retrieval fees when you access an IA object, and data restoration fees when you access an Archive or Cold Archive object. If the object is smaller than 64 KB or is stored for less than the minimum storage duration, you are charged based on the minimum billable size of 64 KB and the minimum storage duration. For more information, see Usage notes.
  • You can modify the storage class of an Archive or Cold Archive object only after the object is restored. For more information, see Restore objects.
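For example, the restore-then-convert flow can be sketched in Python with the oss2 SDK, in the same style as the Python example later in this topic. The `restore_finished` helper and the 30-second polling interval are illustrative assumptions, not part of the OSS API.

```python
import time

def restore_finished(restore_header):
    # The x-oss-restore response header reads ongoing-request="true" while
    # restoration is in progress and ongoing-request="false" once the object
    # is readable. A missing header means no restore request was submitted.
    if restore_header is None:
        return False
    return 'ongoing-request="false"' in restore_header

def convert_after_restore(bucket, object_name, target_class, poll_seconds=30):
    # bucket is an oss2.Bucket instance; target_class is a storage class
    # string such as oss2.BUCKET_STORAGE_CLASS_IA.
    bucket.restore_object(object_name)  # start restoring the Archive object
    # Poll until the object is restored and readable.
    while not restore_finished(
            bucket.head_object(object_name).headers.get('x-oss-restore')):
        time.sleep(poll_seconds)
    # Overwrite the restored object with the new storage class.
    bucket.copy_object(bucket.bucket_name, object_name, object_name,
                       headers={'x-oss-storage-class': target_class})
```

A call might look like `convert_after_restore(bucket, 'exampledir/exampleobject.txt', oss2.BUCKET_STORAGE_CLASS_IA)`, where `bucket` is created as in the Python example below.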

Rules for storage class conversion by calling CopyObject

  • LRS
    Conversions between all storage classes are supported.
  • ZRS

    Only conversions between Standard ZRS and IA ZRS are supported.


Call CopyObject to convert the storage classes of objects

Use the OSS console

You can use the OSS console to modify the storage class only of objects that are 1 GB or smaller in size. To modify the storage class of objects larger than 1 GB, we recommend that you use OSS SDKs or ossutil.

  1. Log on to the OSS console.
  2. In the left-side navigation pane, click Buckets. On the Buckets page, click the name of the desired bucket.
  3. In the left-side navigation pane, choose Files > Files.
  4. On the Files page, move the pointer over More in the Actions column corresponding to the object for which you want to modify the storage class and select Modify Storage Class from the drop-down list.
  5. Select the storage class to which you want to convert the object. We recommend that you keep Retain User Metadata turned on so that the user metadata of the object is retained after its storage class is modified.
  6. Click OK.

Use OSS SDKs

import com.aliyun.oss.ClientException;
import com.aliyun.oss.OSS;
import com.aliyun.oss.OSSClientBuilder;
import com.aliyun.oss.OSSException;
import com.aliyun.oss.model.CopyObjectRequest;
import com.aliyun.oss.model.CopyObjectResult;
import com.aliyun.oss.model.ObjectMetadata;
import com.aliyun.oss.model.StorageClass;

public class Demo {
    public static void main(String[] args) throws Exception {
        // In this example, the endpoint of the China (Hangzhou) region is used. Specify the actual endpoint. 
        String endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
        // The AccessKey pair of an Alibaba Cloud account has permissions on all API operations. Using these credentials to perform operations in OSS is a high-risk operation. We recommend that you use a RAM user to call API operations or perform routine O&M. To create a RAM user, log on to the RAM console. 
        String accessKeyId = "yourAccessKeyId";
        String accessKeySecret = "yourAccessKeySecret";
        // In this example, a bucket and a Standard or an IA object have been created. 
        // Specify the name of the bucket. Example: examplebucket. 
        String bucketName = "examplebucket";
        // Specify the full path of the object. The full path of the object cannot contain the bucket name. Example: exampleobject.txt. 
        String objectName = "exampleobject.txt";

        // Create an OSSClient instance. 
        OSS ossClient = new OSSClientBuilder().build(endpoint, accessKeyId, accessKeySecret);

        try {
            // Create a CopyObjectRequest object. 
            CopyObjectRequest request = new CopyObjectRequest(bucketName, objectName, bucketName, objectName);

            // Create an ObjectMetadata object. 
            ObjectMetadata objectMetadata = new ObjectMetadata();

            // Encapsulate the header. In this example, the storage class is converted to Archive. 
            objectMetadata.setHeader("x-oss-storage-class", StorageClass.Archive);
            request.setNewObjectMetadata(objectMetadata);

            // Convert the storage class of the object. 
            CopyObjectResult result = ossClient.copyObject(request);
        } catch (OSSException oe) {
            System.out.println("Caught an OSSException, which means your request made it to OSS, "
                    + "but was rejected with an error response for some reason.");
            System.out.println("Error Message:" + oe.getErrorMessage());
            System.out.println("Error Code:" + oe.getErrorCode());
            System.out.println("Request ID:" + oe.getRequestId());
            System.out.println("Host ID:" + oe.getHostId());
        } catch (ClientException ce) {
            System.out.println("Caught an ClientException, which means the client encountered "
                    + "a serious internal problem while trying to communicate with OSS, "
                    + "such as not being able to access the network.");
            System.out.println("Error Message:" + ce.getMessage());
        } finally {
            if (ossClient != null) {
                ossClient.shutdown();
            }
        }
    }
}
<?php
if (is_file(__DIR__ . '/../autoload.php')) {
    require_once __DIR__ . '/../autoload.php';
}
if (is_file(__DIR__ . '/../vendor/autoload.php')) {
    require_once __DIR__ . '/../vendor/autoload.php';
}

use OSS\OssClient;
use OSS\Core\OssException;

// Security risks may arise if you use the AccessKey pair of an Alibaba Cloud account to log on to OSS because the account has permissions on all API operations. We recommend that you use a Resource Access Management (RAM) user to call API operations or perform routine operations and maintenance. To create a RAM user, log on to the RAM console. 
$accessKeyId = "<yourAccessKeyId>";
$accessKeySecret = "<yourAccessKeySecret>";
// The endpoint of the China (Hangzhou) region is used in this example. Specify the actual endpoint. 
$endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
// Specify the bucket name. 
$bucket= "<yourBucketName>";
// Specify the full path of the object. The full path of the object does not contain bucket names. Example: destfolder/exampleobject.txt. 
$object = "<yourObjectName>";

$ossClient = new OssClient($accessKeyId, $accessKeySecret, $endpoint);

try {

    // Specify the storage class that you want to convert to. In this example, set the storage class to Archive. 
    $copyOptions = array(
        OssClient::OSS_HEADERS => array(            
            'x-oss-storage-class' => 'Archive',
            'x-oss-metadata-directive' => 'REPLACE',
        ),
    );
    
    $ossClient->copyObject($bucket, $object, $bucket, $object, $copyOptions);

} catch (OssException $e) {
    printf(__FUNCTION__ . ": FAILED\n");
    printf($e->getMessage() . "\n");
    return;
}

print(__FUNCTION__ . ": OK" . "\n");
let OSS = require('ali-oss');

let client = new OSS({
    bucket: '<your bucket>',
    region: '<your region>',
    accessKeyId: '<your accessKeyId>',
    accessKeySecret: '<your accessKeySecret>'
})
var options = {
    headers:{'x-oss-storage-class':'Archive'}
}
client.copy('Objectname','Objectname',options).then((res) => {
    console.log(res);
}).catch(err => {
    console.log(err)
})
# -*- coding: utf-8 -*-
import oss2
import os
# The AccessKey pair of an Alibaba Cloud account has permissions on all API operations. Using these credentials to perform operations in Object Storage Service (OSS) is a high-risk operation. We recommend that you use a RAM user to call API operations or perform routine O&M. To create a RAM user, log on to the RAM console. 
auth = oss2.Auth('<yourAccessKeyId>', '<yourAccessKeySecret>')

# In this example, the endpoint of the China (Hangzhou) region is used. Specify your actual endpoint. 
# Specify the bucket name. Example: examplebucket. 
bucket = oss2.Bucket(auth, 'https://oss-cn-hangzhou.aliyuncs.com', 'examplebucket')
# Specify the full path of the object. The full path of the object cannot contain the bucket name. Example: exampledir/exampleobject.txt. 
# Make sure that the storage class of the object is Standard or IA. 
object_name = 'exampledir/exampleobject.txt'

# Convert the storage class of the object to Archive by adding a header that specifies the Archive storage class in the request. 
headers = {'x-oss-storage-class': oss2.BUCKET_STORAGE_CLASS_ARCHIVE}
# Convert the storage class of the object to Cold Archive by adding a header that specifies the Cold Archive storage class in the request. 
# headers = {'x-oss-storage-class': oss2.BUCKET_STORAGE_CLASS_COLDARCHIVE}

# Modify the storage class of the object. 
bucket.copy_object(bucket.bucket_name, object_name, object_name, headers)                    
package main

import (
    "fmt"
    "os"

    "github.com/aliyun/aliyun-oss-go-sdk/oss"
)

func main() {
    // Create an OSSClient instance.
    client, err := oss.New("<yourEndpoint>", "<yourAccessKeyId>", "<yourAccessKeySecret>")
    if err != nil {
        fmt.Println("Error:", err)
        os.Exit(-1)
    }

    bucketName := "<yourBucketName>"
    objectName := "<yourObjectName>"

    // Obtain information about the bucket based on the bucket name.
    bucket, err := client.Bucket(bucketName)
    if err != nil {
        fmt.Println("Error:", err)
        os.Exit(-1)
    }

    // Modify the storage class of the object. Set the storage class to Archive.
    _, err = bucket.CopyObject(objectName, objectName, oss.ObjectStorageClass(oss.StorageArchive))
    if err != nil {
        fmt.Println("Error:", err)
        os.Exit(-1)
    }
}
                
OSSCopyObjectRequest * copy = [OSSCopyObjectRequest new];
copy.sourceBucketName = @"examplebucket";
copy.sourceobjectKey = @"exampleobject.txt";
copy.bucketName = @"examplebucket";
copy.objectKey = @"exampleobject.txt";
// Set the storage class of the object named exampleobject.txt to Archive. 
copy.objectMeta = @{@"x-oss-storage-class" : @"Archive"};

OSSTask * task = [client copyObject:copy];
[task continueWithBlock:^id(OSSTask *task) {
    if (!task.error) {
        NSLog(@"copy object success!");
    } else {
        NSLog(@"copy object failed, error: %@" , task.error);
    }
    return nil;
}];
#include <alibabacloud/oss/OssClient.h>
using namespace AlibabaCloud::OSS;

int main(void)
{
    /* Initialize the OSS account information. */
    std::string AccessKeyId = "yourAccessKeyId";
    std::string AccessKeySecret = "yourAccessKeySecret";
    std::string Endpoint = "yourEndpoint";
    std::string SourceBucketName = "yourSourceBucketName";
    std::string SourceObjectName = "yourSourceObjectName";


    /* Initialize network resources. */
    InitializeSdk();

    ClientConfiguration conf;
    OssClient client(Endpoint, AccessKeyId, AccessKeySecret, conf);

    /* Set the storage class to Archive. */
    ObjectMetaData objectMeta;
    objectMeta.addHeader("x-oss-storage-class", "Archive");
    CopyObjectRequest request(SourceBucketName, SourceObjectName, objectMeta);
    request.setCopySource(SourceBucketName, SourceObjectName);

    /* Modify the storage class. */
    auto outcome = client.CopyObject(request);

    if (! outcome.isSuccess()) {
        /* Handle exceptions. */
        std::cout << "CopyObject fail" <<
        ",code:" << outcome.error().Code() <<
        ",message:" << outcome.error().Message() <<
        ",requestId:" << outcome.error().RequestId() << std::endl;
        ShutdownSdk();
        return -1;
    }

    /* Release network resources. */
    ShutdownSdk();
    return 0;
}
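The CopyObject operation used in the examples above supports objects up to 1 GB in size. For larger objects, the same conversion can be performed with a multipart copy (UploadPartCopy). The following Python (oss2) sketch copies an object onto itself part by part with the new storage class; `part_ranges` and `multipart_convert` are hypothetical helpers, and the 100 MB part size is an assumption.

```python
def part_ranges(total_size, part_size):
    # Split [0, total_size) into inclusive (start, end) byte ranges.
    ranges = []
    start = 0
    while start < total_size:
        end = min(start + part_size, total_size) - 1
        ranges.append((start, end))
        start = end + 1
    return ranges

def multipart_convert(bucket, object_name, target_class,
                      part_size=100 * 1024 * 1024):
    # bucket is an oss2.Bucket; overwrite object_name with a new storage
    # class by copying it onto itself part by part.
    import oss2  # third-party SDK used in this topic's Python example
    total_size = bucket.head_object(object_name).content_length
    # Initiate a multipart upload that carries the target storage class.
    upload_id = bucket.init_multipart_upload(
        object_name, headers={'x-oss-storage-class': target_class}).upload_id
    parts = []
    for number, (start, end) in enumerate(
            part_ranges(total_size, part_size), start=1):
        result = bucket.upload_part_copy(bucket.bucket_name, object_name,
                                         (start, end), object_name,
                                         upload_id, number)
        parts.append(oss2.models.PartInfo(number, result.etag))
    bucket.complete_multipart_upload(object_name, upload_id, parts)
```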

Use ossutil

For more information about how to use ossutil to convert the storage classes of objects, see Modify the storage class of an object.

Use RESTful APIs

If your business requires a high level of customization, you can directly call RESTful APIs. To directly call an API, you must include the signature calculation in your code. For more information, see CopyObject.

Usage notes

After you convert the storage class of an object to IA, Archive, or Cold Archive, take note of the following items:

Minimum billable size

Objects that are smaller than 64 KB in size are billed as 64 KB.
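In other words, the billable size is the larger of the actual object size and 64 KB, as this one-line illustrative sketch shows:

```python
def billable_size(size_in_bytes):
    # Objects smaller than 64 KB are billed as 64 KB.
    return max(size_in_bytes, 64 * 1024)
```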

Minimum storage period

The minimum storage period is 30 days for IA objects, 60 days for Archive objects, and 180 days for Cold Archive objects. If an object is stored for less than its minimum storage period, you are still charged for the full minimum storage period.
  • Configure lifecycle rules to automatically convert the storage classes of objects

    If you configure lifecycle rules to automatically convert the storage class of an object, OSS does not recalculate the retention period when the storage class of the object changes. For example, an object named a.txt is a Standard object. After the object is stored in OSS for 10 days, its storage class is converted to IA based on lifecycle rules. Then, the object must be stored as an IA object for another 20 days to meet the minimum storage period of 30 days. For more information, see FAQ.

  • Call CopyObject to manually convert the storage classes of objects

    If you call CopyObject to manually convert the storage class of an object, OSS recalculates the retention period when the storage class of the object changes. For example, an object named a.txt is a Standard object. After the object is stored in OSS for 10 days, its storage class is manually converted to IA. Then, the retention period of the object as an IA object is reset to 0, and the object must be stored for another 30 days to meet the minimum storage period of 30 days.
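The difference between the two conversion paths can be summarized in a small illustrative Python helper (`remaining_minimum_days` is hypothetical, not an OSS API):

```python
def remaining_minimum_days(days_already_stored, minimum_days, converted_by_copy):
    # A lifecycle-rule conversion keeps the accumulated retention period;
    # a CopyObject conversion resets it to zero.
    if converted_by_copy:
        return minimum_days
    return max(minimum_days - days_already_stored, 0)

# a.txt converted to IA (minimum 30 days) after 10 days as Standard:
# via a lifecycle rule it must stay 20 more days; via CopyObject, 30 more.
```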

Restoration time

It takes a period of time to restore Archive or Cold Archive objects to the readable state. If your business requires your objects to be read in real time, we recommend that you do not convert the storage classes of your objects to Archive or Cold Archive.

Data retrieval fees

When you access IA objects, you are charged data retrieval fees based on the amount of data accessed. When you restore Archive or Cold Archive objects, you are charged data restoration fees. Data restoration and outbound traffic are separate billable items. If an object is accessed more than once per month on average, converting it from Standard to IA, Archive, or Cold Archive may increase your total costs.

Temporary storage fees

When you restore a Cold Archive object, a Standard replica of the object is generated for temporary access. You are charged storage fees for this replica at the Standard rate until the restoration period ends.