
Object Storage Service: Convert storage classes

Last Updated: Dec 06, 2023

Object Storage Service (OSS) provides the following storage classes: Standard, Infrequent Access (IA), Archive, Cold Archive, and Deep Cold Archive. You can convert the storage class of an object by using a lifecycle rule or calling the CopyObject operation.

Warning

We recommend that you do not change the storage classes of objects in the .dlsdata/ directory of a bucket for which OSS-HDFS is enabled.

If you change the storage class of an object in the .dlsdata/ directory to Infrequent Access (IA), the object remains accessible by using OSS-HDFS. If you change the storage class of an object in the directory to Archive, Cold Archive, or Deep Cold Archive, the object cannot be directly accessed by using OSS-HDFS. You must restore the object before you can access it.

Configure lifecycle rules to automatically convert the storage classes of objects

You can configure lifecycle rules to allow OSS to automatically convert the storage classes of objects. For more information about storage classes, see Overview.

Storage class conversion based on the last modified time of objects

  • LRS

    The storage classes of Locally redundant storage (LRS) objects can be converted based on the following rules:

    • Conversions from Standard LRS to IA LRS, Archive LRS, Cold Archive LRS, or Deep Cold Archive LRS

    • Conversions from IA LRS to Archive LRS, Cold Archive LRS, or Deep Cold Archive LRS

    • Conversions from Archive LRS to Cold Archive LRS or Deep Cold Archive LRS

    • Conversions from Cold Archive LRS to Deep Cold Archive LRS

    If different policies are configured for a bucket at the same time to convert the storage classes of objects to IA, Archive, Cold Archive, and Deep Cold Archive, the periods specified in the policies must meet the following requirement (see the Java sketch after this list for an example that satisfies the ordering):

    Period of time required for conversion to IA < Period of time required for conversion to Archive < Period of time required for conversion to Cold Archive < Period of time required for conversion to Deep Cold Archive

  • ZRS

    The storage classes of Zone-redundant storage (ZRS) objects can be converted based on the following rules:

    • Conversions from Standard ZRS to IA ZRS, Archive ZRS, Cold Archive LRS, or Deep Cold Archive LRS

    • Conversions from IA ZRS to Archive ZRS, Cold Archive LRS, or Deep Cold Archive LRS

    • Conversions from Archive ZRS to Cold Archive LRS or Deep Cold Archive LRS

For more information, see Lifecycle rules based on the last modified time.
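
For example, the following minimal Java sketch defines a single rule whose transition periods satisfy this ordering: IA after 30 days, Archive after 180 days, Cold Archive after 270 days, and Deep Cold Archive after 365 days. It reuses the classes from the OSS SDK for Java example later in this topic; the bucket name, rule ID, and prefix are placeholders, and ossClient is assumed to be an initialized OSSClient.

// A minimal sketch. The transition periods increase from IA to Deep Cold Archive.
SetBucketLifecycleRequest tieredRequest = new SetBucketLifecycleRequest("examplebucket");
LifecycleRule tieredRule = new LifecycleRule("tiered-rule", "logs/", LifecycleRule.RuleStatus.Enabled);
List<LifecycleRule.StorageTransition> tieredTransitions = new ArrayList<LifecycleRule.StorageTransition>();

LifecycleRule.StorageTransition toIA = new LifecycleRule.StorageTransition();
toIA.setStorageClass(StorageClass.IA);
toIA.setExpirationDays(30);               // Convert to IA after 30 days.
tieredTransitions.add(toIA);

LifecycleRule.StorageTransition toArchive = new LifecycleRule.StorageTransition();
toArchive.setStorageClass(StorageClass.Archive);
toArchive.setExpirationDays(180);         // Convert to Archive after 180 days (> 30).
tieredTransitions.add(toArchive);

LifecycleRule.StorageTransition toColdArchive = new LifecycleRule.StorageTransition();
toColdArchive.setStorageClass(StorageClass.ColdArchive);
toColdArchive.setExpirationDays(270);     // Convert to Cold Archive after 270 days (> 180).
tieredTransitions.add(toColdArchive);

LifecycleRule.StorageTransition toDeepColdArchive = new LifecycleRule.StorageTransition();
toDeepColdArchive.setStorageClass(StorageClass.DeepColdArchive);
toDeepColdArchive.setExpirationDays(365); // Convert to Deep Cold Archive after 365 days (> 270).
tieredTransitions.add(toDeepColdArchive);

tieredRule.setStorageTransition(tieredTransitions);
tieredRequest.AddLifecycleRule(tieredRule);
ossClient.setBucketLifecycle(tieredRequest);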

Storage class conversion based on the last access time of objects

[Figure: storage class conversion rules based on the last access time (atime) of objects]

The storage class conversion rules shown in the preceding figure apply only to LRS objects.

  • You can specify policies in a lifecycle rule that is configured based on the last access time of objects to convert the storage class of the objects from Standard to IA. You can also specify whether to convert the storage class of the objects from IA to Standard when the objects are accessed.

  • You can specify policies in a lifecycle rule that is configured based on the last access time of objects to convert the storage class of the objects from Standard or IA to Archive or Cold Archive. You can also specify policies in the lifecycle rule to convert the storage class of the objects from Archive to Cold Archive. If you want to convert the storage class of objects from Standard or IA to Archive or Cold Archive, submit a ticket to apply for the required permissions. After the application is approved, you must specify the storage class to which you want to convert the objects.

    Important

    After the application is approved, if you configure a lifecycle rule based on the last access time of objects for a bucket to convert the storage class of the objects in the bucket from Standard or IA to Archive or Cold Archive, the last access time of the Archive or Cold Archive objects in the bucket is the time when access tracking is enabled for the bucket.

For more information, see Lifecycle rules based on the last access time.

Configure lifecycle rules to convert the storage classes of objects

You can use multiple methods to configure lifecycle rules. Based on the lifecycle rules that you configure, OSS converts the storage classes of multiple objects to the specified storage class or deletes objects and parts after the specified period of time elapses. You can configure lifecycle rules by using one of the following methods to convert the storage classes of objects to a specific storage class:

Use the OSS console

  1. Log on to the OSS console.

  2. In the left-side navigation pane, click Buckets. On the Buckets page, find and click the desired bucket.

  3. In the left-side navigation tree, choose Data Management > Lifecycle.

  4. (Optional) If you want to create a lifecycle rule based on the last access time, turn on Enable Access Tracking on the Lifecycle page.

  5. On the Lifecycle page, click Create Rule.

  6. In the Create Rule panel, configure the parameters. The following table describes the parameters.

    • Bucket for which versioning is disabled

      Section

      Parameter

      Description

      Basic Settings

      Status

      Specify the status of the lifecycle rule. Valid values: Start and Disabled.

      • After a lifecycle rule is enabled, the storage class of objects is converted or objects are deleted based on the configured lifecycle rule.

      • After you disable a lifecycle rule, the lifecycle tasks are interrupted.

      Applied To

      Specify the objects for which you want the lifecycle rule to take effect. Valid values: Object Prefix and Whole Bucket.

      Allow Overlapped Prefixes

      Specify whether to allow overlapping prefixes. By default, OSS checks whether the prefixes of lifecycle rules overlap. For example, assume that you configure the following lifecycle rules whose prefixes overlap:

      • Rule 1

        All objects whose names contain the dir1/ prefix in the bucket are deleted 180 days after the objects are last modified.

      • Rule 2

        All objects whose names contain the dir1/dir2/ prefix in the bucket are converted to IA objects 30 days after the objects are last modified, and deleted after 60 days.

      If you do not allow prefix overlapping in the lifecycle configuration, OSS detects that objects in the dir1/dir2/ directory match two deletion rules. Therefore, the two lifecycle rules are rejected and the "Overlap for same action type Expiration" error message is returned.

      If you allow prefix overlapping in the lifecycle configuration, the objects in the dir1/dir2/ directory are converted to IA objects after 30 days and deleted after 60 days. Other objects in the dir1/ directory are deleted after 180 days.

      Prefix

      Specify the prefix in the names of objects for which you want the lifecycle rule to take effect.

      • If you set the Prefix parameter to img, all objects whose names contain the img prefix, such as imgtest.png and img/example.jpg, match the lifecycle rule.

      • If you set the prefix to img/, all objects whose names contain the img/ prefix, such as img/example.jpg and img/test.jpg, match the lifecycle rule.

      Tag

      Specify tags. The rule takes effect only for objects that contain the specified tags. For example, if you select Object Prefix and set Prefix to img, Key to a, and Value to 1, the rule takes effect for all objects whose names contain the img prefix and that contain the a=1 tag. For more information about object tags, see Object tagging.

      NOT

      Specify that the lifecycle rule does not take effect for the objects that contain the specified prefix and tag.

      Important
      • If you turn on NOT, the lifecycle rule must specify at least one of a prefix and tags.

      • The key of the tag specified for the NOT parameter cannot be the same as the key specified for the Tag parameter.

      • If you turn on NOT, you cannot configure the lifecycle rules that take effect for parts.

      Object Size

      Specify the size of objects for which the lifecycle rule takes effect.

      • Minimum Size: Specify that the lifecycle rule takes effect only for objects that are larger than the specified size threshold. You can specify the minimum object size that is greater than 0 B and less than 5 TB.

      • Maximum Size: Specify that the lifecycle rule takes effect only for objects that are smaller than the specified size threshold. You can specify the maximum object size that is greater than 0 B and less than 5 TB.

      Important

      If you specify the minimum and maximum object size in the same lifecycle rule, take note of the following items:

      • Make sure that the maximum object size is greater than the minimum object size.

      • You cannot specify lifecycle rules for parts.

      • You cannot specify lifecycle rules to remove delete markers.

      Policy for Objects

      Object Lifecycle

      Configure rules for objects to specify when the objects expire. Valid values: Validity Period, Expiration Date, and Disabled. If you select Disabled, the configurations of Object Lifecycle do not take effect.

      Lifecycle-based Rules

      Configure the lifecycle rule to convert the storage class of objects or delete expired objects.

      Example 1: You select Access Time, set Validity Period to 30, and specify that the storage class of objects is converted to IA (Not Converted After Access) after the validity period elapses. In this case, the storage class of objects that were last accessed on September 1, 2021 is converted to IA on October 1, 2021.

      Example 2: You select Modified Time, set Expiration Date to September 24, 2021, and specify that objects that are last modified before this date are deleted. In this case, objects that are last modified before September 24, 2021 are automatically deleted. The deleted objects cannot be recovered.

      Policy for Parts

      Part Lifecycle

      Specify the operations that you want to perform on expired parts. If you turn on Tag, this parameter is unavailable. Valid values: Validity Period, Expiration Date, and Disabled. If you select Disabled, the configurations of Part Lifecycle do not take effect.

      Important

      Each lifecycle rule must contain at least one of the object expiration policies and part expiration policies.

      Rules for Parts

      Specify when parts expire based on the value of the Part Lifecycle parameter. Expired parts are automatically deleted and cannot be recovered.

    • Bucket for which versioning is enabled

      Configure the parameters in the Basic Settings and Policy for Parts sections in the same manner as you configure the parameters for unversioned buckets. The following table describes only the parameters that are different from the parameters that you configure for unversioned buckets.

      Section

      Parameter

      Description

      Policy for Current Versions

      Clean Up Delete Marker

      If you enable versioning for the bucket, the Clean Up Delete Marker option is added to the Object Lifecycle parameter. Other parameters are the same as those you can configure for unversioned buckets.

      If you select Clean Up Delete Marker, and an object has only one version which is a delete marker, OSS considers the delete marker expired and removes the delete marker. If an object has multiple versions and the current version of the object is a delete marker, OSS retains the delete marker. For more information about delete markers, see Delete marker.

      Policy for Previous Versions

      Object Lifecycle

      Specify when previous versions expire. Valid values: Validity Period and Disabled. If you select Disabled, the configurations of Object Lifecycle do not take effect.

      Lifecycle-based Rules

      Specify the number of days within which objects can be retained after they become previous versions. After they expire, the specified operations are performed on the previous versions the next day. For example, if you set the Validity Period (Days) parameter to 30, objects that become previous versions on September 1, 2021 are converted to the specified storage class or deleted on October 1, 2021.

      Important

      An object becomes a previous version at the time when a later version of the object is generated.

  7. Click OK.

    After the lifecycle rule is created, you can view the rule in the lifecycle rule list.

Use OSS SDKs

The following sample code provides examples on how to configure lifecycle rules by using OSS SDKs for common programming languages. For more information about the sample code for configuring lifecycle rules by using OSS SDKs for other programming languages, see Overview.

Java

import com.aliyun.oss.ClientException;
import com.aliyun.oss.OSS;
import com.aliyun.oss.common.auth.*;
import com.aliyun.oss.OSSClientBuilder;
import com.aliyun.oss.OSSException;
import com.aliyun.oss.common.utils.DateUtil;
import com.aliyun.oss.model.LifecycleRule;
import com.aliyun.oss.model.SetBucketLifecycleRequest;
import com.aliyun.oss.model.StorageClass;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class Demo {

    public static void main(String[] args) throws Exception {
        // In this example, the endpoint of the China (Hangzhou) region is used. Specify your actual endpoint. 
        String endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
        // Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
        EnvironmentVariableCredentialsProvider credentialsProvider = CredentialsProviderFactory.newEnvironmentVariableCredentialsProvider();
        // Specify the name of the bucket. Example: examplebucket. 
        String bucketName = "examplebucket";

        // Create an OSSClient instance. 
        OSS ossClient = new OSSClientBuilder().build(endpoint, credentialsProvider);

        try {
            // Create a request by using SetBucketLifecycleRequest. 
            SetBucketLifecycleRequest request = new SetBucketLifecycleRequest(bucketName);

            // Specify the ID of the lifecycle rule. 
            String ruleId0 = "rule0";
            // Specify the prefix that you want the lifecycle rule to match. 
            String matchPrefix0 = "A0/";
            // Specify the tag that you want the lifecycle rule to match. 
            Map<String, String> matchTags0 = new HashMap<String, String>();
            // Specify the key and value of the tag. In the example, the key is set to owner and the value is set to John. 
            matchTags0.put("owner", "John");


            String ruleId1 = "rule1";
            String matchPrefix1 = "A1/";
            Map<String, String> matchTags1 = new HashMap<String, String>();
            matchTags1.put("type", "document");

            String ruleId2 = "rule2";
            String matchPrefix2 = "A2/";

            String ruleId3 = "rule3";
            String matchPrefix3 = "A3/";

            String ruleId4 = "rule4";
            String matchPrefix4 = "A4/";

            String ruleId5 = "rule5";
            String matchPrefix5 = "A5/";

            String ruleId6 = "rule6";
            String matchPrefix6 = "A6/";

            // Set the expiration date to three days after the last modification time. 
            LifecycleRule rule = new LifecycleRule(ruleId0, matchPrefix0, LifecycleRule.RuleStatus.Enabled, 3);
            rule.setTags(matchTags0);
            request.AddLifecycleRule(rule);

            // Specify that the objects that are created before the specific date expire. 
            rule = new LifecycleRule(ruleId1, matchPrefix1, LifecycleRule.RuleStatus.Enabled);
            rule.setCreatedBeforeDate(DateUtil.parseIso8601Date("2022-10-12T00:00:00.000Z"));
            rule.setTags(matchTags1);
            request.AddLifecycleRule(rule);

            // Specify that parts expire three days after they are last modified. 
            rule = new LifecycleRule(ruleId2, matchPrefix2, LifecycleRule.RuleStatus.Enabled);
            LifecycleRule.AbortMultipartUpload abortMultipartUpload = new LifecycleRule.AbortMultipartUpload();
            abortMultipartUpload.setExpirationDays(3);
            rule.setAbortMultipartUpload(abortMultipartUpload);
            request.AddLifecycleRule(rule);

            // Specify that parts that are created before the specific date expire. 
            rule = new LifecycleRule(ruleId3, matchPrefix3, LifecycleRule.RuleStatus.Enabled);
            abortMultipartUpload = new LifecycleRule.AbortMultipartUpload();
            abortMultipartUpload.setCreatedBeforeDate(DateUtil.parseIso8601Date("2022-10-12T00:00:00.000Z"));
            rule.setAbortMultipartUpload(abortMultipartUpload);
            request.AddLifecycleRule(rule);

            // Specify that the storage class of objects is converted to IA 10 days after they are last modified, and to Archive 30 days after they are last modified. 
            rule = new LifecycleRule(ruleId4, matchPrefix4, LifecycleRule.RuleStatus.Enabled);
            List<LifecycleRule.StorageTransition> storageTransitions = new ArrayList<LifecycleRule.StorageTransition>();
            LifecycleRule.StorageTransition storageTransition = new LifecycleRule.StorageTransition();
            storageTransition.setStorageClass(StorageClass.IA);
            storageTransition.setExpirationDays(10);
            storageTransitions.add(storageTransition);
            storageTransition = new LifecycleRule.StorageTransition();
            storageTransition.setStorageClass(StorageClass.Archive);
            storageTransition.setExpirationDays(30);
            storageTransitions.add(storageTransition);
            rule.setStorageTransition(storageTransitions);
            request.AddLifecycleRule(rule);

            // Specify that the storage class of objects that are last modified before October 12, 2022 is converted to Archive. 
            rule = new LifecycleRule(ruleId5, matchPrefix5, LifecycleRule.RuleStatus.Enabled);
            storageTransitions = new ArrayList<LifecycleRule.StorageTransition>();
            storageTransition = new LifecycleRule.StorageTransition();

            storageTransition.setCreatedBeforeDate(DateUtil.parseIso8601Date("2022-10-12T00:00:00.000Z"));

            storageTransition.setStorageClass(StorageClass.Archive);
            storageTransitions.add(storageTransition);
            rule.setStorageTransition(storageTransitions);
            request.AddLifecycleRule(rule);

            // Specify that rule6 is configured for buckets for which versioning is enabled. 
            rule = new LifecycleRule(ruleId6, matchPrefix6, LifecycleRule.RuleStatus.Enabled);
            // Specify that the storage class of objects is converted to Archive 365 days after the objects are last modified. 
            storageTransitions = new ArrayList<LifecycleRule.StorageTransition>();
            storageTransition = new LifecycleRule.StorageTransition();
            storageTransition.setStorageClass(StorageClass.Archive);
            storageTransition.setExpirationDays(365);
            storageTransitions.add(storageTransition);
            rule.setStorageTransition(storageTransitions);
            // Specify that delete markers are automatically removed when they expire. 
            rule.setExpiredDeleteMarker(true);
            // Specify that the storage class of objects is converted to IA 10 days after the objects become previous versions. 
            LifecycleRule.NoncurrentVersionStorageTransition noncurrentVersionStorageTransition =
                    new LifecycleRule.NoncurrentVersionStorageTransition().withNoncurrentDays(10).withStrorageClass(StorageClass.IA);
            // Specify that the storage class of objects is converted to Archive 20 days after the objects become previous versions. 
            LifecycleRule.NoncurrentVersionStorageTransition noncurrentVersionStorageTransition2 =
                    new LifecycleRule.NoncurrentVersionStorageTransition().withNoncurrentDays(20).withStrorageClass(StorageClass.Archive);
            // Specify that objects are deleted 30 days after the objects become previous versions. 
            LifecycleRule.NoncurrentVersionExpiration noncurrentVersionExpiration = new LifecycleRule.NoncurrentVersionExpiration().withNoncurrentDays(30);
            // Add both storage class transitions for previous versions to the list. 
            List<LifecycleRule.NoncurrentVersionStorageTransition> noncurrentVersionStorageTransitions = new ArrayList<LifecycleRule.NoncurrentVersionStorageTransition>();
            noncurrentVersionStorageTransitions.add(noncurrentVersionStorageTransition);
            noncurrentVersionStorageTransitions.add(noncurrentVersionStorageTransition2);
            rule.setNoncurrentVersionExpiration(noncurrentVersionExpiration);
            rule.setNoncurrentVersionStorageTransitions(noncurrentVersionStorageTransitions);
            request.AddLifecycleRule(rule);

            // Initiate a request to configure lifecycle rules. 
            ossClient.setBucketLifecycle(request);

            // Query the lifecycle rules that are configured for the bucket. 
            List<LifecycleRule> listRules = ossClient.getBucketLifecycle(bucketName);
            for(LifecycleRule rules : listRules){
                System.out.println("ruleId="+rules.getId()+", matchPrefix="+rules.getPrefix());
            }
        } catch (OSSException oe) {
            System.out.println("Caught an OSSException, which means your request made it to OSS, "
                    + "but was rejected with an error response for some reason.");
            System.out.println("Error Message:" + oe.getErrorMessage());
            System.out.println("Error Code:" + oe.getErrorCode());
            System.out.println("Request ID:" + oe.getRequestId());
            System.out.println("Host ID:" + oe.getHostId());
        } catch (ClientException ce) {
            System.out.println("Caught an ClientException, which means the client encountered "
                    + "a serious internal problem while trying to communicate with OSS, "
                    + "such as not being able to access the network.");
            System.out.println("Error Message:" + ce.getMessage());
        } finally {
            if (ossClient != null) {
                ossClient.shutdown();
            }
        }
    }
}

PHP

<?php
if (is_file(__DIR__ . '/../autoload.php')) {
    require_once __DIR__ . '/../autoload.php';
}
if (is_file(__DIR__ . '/../vendor/autoload.php')) {
    require_once __DIR__ . '/../vendor/autoload.php';
}

use OSS\OssClient;
use OSS\Core\OssException;
use OSS\Model\LifecycleConfig;
use OSS\Model\LifecycleRule;
use OSS\Model\LifecycleAction;

// Obtain access credentials from environment variables. Before you run the sample code, make sure that you specified the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables. 
$accessKeyId = getenv("OSS_ACCESS_KEY_ID");
$accessKeySecret = getenv("OSS_ACCESS_KEY_SECRET");
// In this example, the endpoint of the China (Hangzhou) region is used. Specify your actual endpoint. 
$endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
// Specify the name of the bucket. 
$bucket= "yourBucketName";

// Specify the rule ID and the prefix of the object names that match the rule. 
$ruleId0 = "rule0";
$matchPrefix0 = "A0/";
$ruleId1 = "rule1";
$matchPrefix1 = "A1/";

$lifecycleConfig = new LifecycleConfig();
$actions = array();
// Set the expiration date to three days after the last modification time. 
$actions[] = new LifecycleAction(OssClient::OSS_LIFECYCLE_EXPIRATION, OssClient::OSS_LIFECYCLE_TIMING_DAYS, 3);
$lifecycleRule = new LifecycleRule($ruleId0, $matchPrefix0, "Enabled", $actions);
$lifecycleConfig->addRule($lifecycleRule);
$actions = array();
// Specify that the objects that are created before the specific date expire. 
$actions[] = new LifecycleAction(OssClient::OSS_LIFECYCLE_EXPIRATION, OssClient::OSS_LIFECYCLE_TIMING_DATE, '2022-10-12T00:00:00.000Z');
$lifecycleRule = new LifecycleRule($ruleId1, $matchPrefix1, "Enabled", $actions);
$lifecycleConfig->addRule($lifecycleRule);
try {
    $ossClient = new OssClient($accessKeyId, $accessKeySecret, $endpoint);

    $ossClient->putBucketLifecycle($bucket, $lifecycleConfig);
} catch (OssException $e) {
    printf(__FUNCTION__ . ": FAILED\n");
    printf($e->getMessage() . "\n");
    return;
}
print(__FUNCTION__ . ": OK" . "\n");

Node.js

const OSS = require('ali-oss')

const client = new OSS({
  // Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to oss-cn-hangzhou. 
  region: 'yourregion',
  // Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
  accessKeyId: process.env.OSS_ACCESS_KEY_ID,
  accessKeySecret: process.env.OSS_ACCESS_KEY_SECRET,
  // Specify the name of the bucket. 
  bucket: 'yourbucketname'
});

async function getBucketLifecycle () {
  try {
    const result = await client.getBucketLifecycle('yourbucketname');
    console.log(result.rules); // Query the lifecycle rules. 

    result.rules.forEach(rule => {
      console.log(rule.id) // Query the rule IDs.  
      console.log(rule.status) // Query the status of the rules. 
      console.log(rule.tags) // Query the tags configured in the lifecycle rules. 
      console.log(rule.expiration.days) // Query the validity period configurations. 
      console.log(rule.expiration.createdBeforeDate) // Query the expiration date configurations. 
      // Query the rule for expired parts. 
      console.log(rule.abortMultipartUpload.days || rule.abortMultipartUpload.createdBeforeDate)
      // Query the rule of storage class conversion. 
      console.log(rule.transition.days || rule.transition.createdBeforeDate) // Query the conversion date configurations. 
      console.log(rule.transition.storageClass) // Query the configurations used to convert storage classes. 
      // Query the lifecycle rule to check whether expired delete markers are automatically deleted. 
      console.log(rule.transition.expiredObjectDeleteMarker)
      // Query the configurations used to convert the storage class of previous versions of the objects. 
      console.log(rule.noncurrentVersionTransition.noncurrentDays) // Query the conversion date configurations for objects of previous versions. 
      console.log(rule.noncurrentVersionTransition.storageClass) // Query the configurations used to convert the storage classes of previous versions of objects. 
    })
  } catch (e) {
    console.log(e);
  }
}
getBucketLifecycle();

Python

# -*- coding: utf-8 -*-
import oss2
from oss2.credentials import EnvironmentVariableCredentialsProvider
import datetime
from oss2.models import (LifecycleExpiration, LifecycleRule, 
                        BucketLifecycle,AbortMultipartUpload, 
                        TaggingRule, Tagging, StorageTransition,
                        NoncurrentVersionStorageTransition,
                        NoncurrentVersionExpiration)

# Obtain access credentials from the environment variables. Before you run the sample code, make sure that you have configured environment variables OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET. 
auth = oss2.ProviderAuth(EnvironmentVariableCredentialsProvider())
# Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. 
# Specify the name of the bucket. Example: examplebucket. 
bucket = oss2.Bucket(auth, 'https://oss-cn-hangzhou.aliyuncs.com', 'examplebucket')

# Specify that objects expire three days after they are last modified. 
rule1 = LifecycleRule('rule1', 'tests/',
                      status=LifecycleRule.ENABLED,
                      expiration=LifecycleExpiration(days=3))

# Specify that objects created before the specified date expire. 
rule2 = LifecycleRule('rule2', 'tests2/',
                      status=LifecycleRule.ENABLED,
                      expiration=LifecycleExpiration(created_before_date=datetime.date(2023, 12, 12)))

# Specify that parts expire three days after they are last modified. 
rule3 = LifecycleRule('rule3', 'tests3/',
                      status=LifecycleRule.ENABLED,
            abort_multipart_upload=AbortMultipartUpload(days=3))

# Specify that parts created before the specified date expire. 
rule4 = LifecycleRule('rule4', 'tests4/',
                      status=LifecycleRule.ENABLED,
                      abort_multipart_upload = AbortMultipartUpload(created_before_date=datetime.date(2022, 12, 12)))

# Specify that the storage classes of objects are converted to Infrequent Access (IA) 20 days after they are last modified, and to Archive 30 days after they are last modified. 
rule5 = LifecycleRule('rule5', 'tests5/',
                      status=LifecycleRule.ENABLED,
                      storage_transitions=[StorageTransition(days=20,storage_class=oss2.BUCKET_STORAGE_CLASS_IA),
                            StorageTransition(days=30,storage_class=oss2.BUCKET_STORAGE_CLASS_ARCHIVE)])

# Specify the tag that you want the lifecycle rule to match. 
tagging_rule = TaggingRule()
tagging_rule.add('key1', 'value1')
tagging_rule.add('key2', 'value2')
tagging = Tagging(tagging_rule)

# Specify that the storage classes of objects that are created before December 12, 2022 are converted to IA.  
# Compared with the preceding rules, rule6 includes the tag condition to match objects. The rule takes effect for objects whose tagging configurations are key1=value1 and key2=value2. 
rule6 = LifecycleRule('rule6', 'tests6/',
                      status=LifecycleRule.ENABLED,
                      storage_transitions=[StorageTransition(created_before_date=datetime.date(2022, 12, 12),storage_class=oss2.BUCKET_STORAGE_CLASS_IA)],
                      tagging = tagging)

# rule7 is a lifecycle rule that applies to a versioning-enabled bucket. 
# Specify that the storage classes of objects are converted to Archive 365 days after they are last modified. 
# Specify that delete markers are automatically removed when they expire. 
# Specify that the storage classes of objects are converted to IA 12 days after they become previous versions. 
# Specify that the storage classes of objects are converted to Archive 20 days after they become previous versions. 
# Specify that objects are deleted 30 days after they become previous versions. 
rule7 = LifecycleRule('rule7', 'tests7/',
              status=LifecycleRule.ENABLED,
              storage_transitions=[StorageTransition(days=365, storage_class=oss2.BUCKET_STORAGE_CLASS_ARCHIVE)], 
              expiration=LifecycleExpiration(expired_detete_marker=True),
              noncurrent_version_sotrage_transitions = 
                    [NoncurrentVersionStorageTransition(12, oss2.BUCKET_STORAGE_CLASS_IA),
                     NoncurrentVersionStorageTransition(20, oss2.BUCKET_STORAGE_CLASS_ARCHIVE)],
              noncurrent_version_expiration = NoncurrentVersionExpiration(30))

lifecycle = BucketLifecycle([rule1, rule2, rule3, rule4, rule5, rule6, rule7])

bucket.put_bucket_lifecycle(lifecycle)

C#

using Aliyun.OSS;
using Aliyun.OSS.Common;
// Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. 
var endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
// Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
var accessKeyId = Environment.GetEnvironmentVariable("OSS_ACCESS_KEY_ID");
var accessKeySecret = Environment.GetEnvironmentVariable("OSS_ACCESS_KEY_SECRET");
// Specify the name of the bucket. Example: examplebucket. 
var bucketName = "examplebucket";

// Create an OSSClient instance. 
var client = new OssClient(endpoint, accessKeyId, accessKeySecret);
try
{
    var setBucketLifecycleRequest = new SetBucketLifecycleRequest(bucketName);
    // Create the first lifecycle rule. 
    LifecycleRule lcr1 = new LifecycleRule()
    {
        ID = "delete obsoleted files",
        Prefix = "obsoleted/",
        Status = RuleStatus.Enabled,
        ExpriationDays = 3,
        Tags = new Tag[1]
    };
    // Specify tags for the first rule. 
    var tag1 = new Tag
    {
        Key = "project",
        Value = "projectone"
    };

    lcr1.Tags[0] = tag1;

    // Create the second lifecycle rule. 
    LifecycleRule lcr2 = new LifecycleRule()
    {
        ID = "delete temporary files",
        Prefix = "temporary/",
        Status = RuleStatus.Enabled,
        ExpriationDays = 20,
        Tags = new Tag[1]         
    };
    // Specify tags for the second rule. 
    var tag2 = new Tag
    {
        Key = "user",
        Value = "jsmith"
    };
    lcr2.Tags[0] = tag2;

    // Specify that parts expire 30 days after they are last modified. 
    lcr2.AbortMultipartUpload = new LifecycleRule.LifeCycleExpiration()
    {
        Days = 30
    };

    LifecycleRule lcr3 = new LifecycleRule();
    lcr3.ID = "only NoncurrentVersionTransition";
    lcr3.Prefix = "test1";
    lcr3.Status = RuleStatus.Enabled;
    lcr3.NoncurrentVersionTransitions = new LifecycleRule.LifeCycleNoncurrentVersionTransition[2]
    {
        // Specify that the storage class of objects is converted to Infrequent Access (IA) 90 days after they become previous versions. 
        new LifecycleRule.LifeCycleNoncurrentVersionTransition(){
            StorageClass = StorageClass.IA,
            NoncurrentDays = 90
        },
        // Specify that the storage classes of objects are converted to Archive 180 days after they become previous versions. 
        new LifecycleRule.LifeCycleNoncurrentVersionTransition(){
            StorageClass = StorageClass.Archive,
            NoncurrentDays = 180
        }
    };
    setBucketLifecycleRequest.AddLifecycleRule(lcr1);
    setBucketLifecycleRequest.AddLifecycleRule(lcr2);
    setBucketLifecycleRequest.AddLifecycleRule(lcr3);

    // Configure lifecycle rules. 
    client.SetBucketLifecycle(setBucketLifecycleRequest);
    Console.WriteLine("Set bucket:{0} Lifecycle succeeded ", bucketName);
}
catch (OssException ex)
{
    Console.WriteLine("Failed with error code: {0}; Error info: {1}. \nRequestID:{2}\tHostID:{3}",
        ex.ErrorCode, ex.Message, ex.RequestId, ex.HostId);
}
catch (Exception ex)
{
    Console.WriteLine("Failed with error info: {0}", ex.Message);
}

Android-Java

PutBucketLifecycleRequest request = new PutBucketLifecycleRequest();
request.setBucketName("examplebucket");

BucketLifecycleRule rule1 = new BucketLifecycleRule();
// Specify the rule ID and the prefix of the object names that match the rule. 
rule1.setIdentifier("1");
rule1.setPrefix("A");
// Specify whether to run the lifecycle rule. If this parameter is set to true, OSS periodically runs this rule. If this parameter is set to false, OSS ignores this rule. 
rule1.setStatus(true);
// Specify that objects expire 200 days after they are last modified. 
rule1.setDays("200");
// Specify that the storage class of objects is converted to Archive 30 days after they are last modified.
rule1.setArchiveDays("30");
// Specify that parts of incomplete multipart uploads expire three days after they are last modified. 
rule1.setMultipartDays("3");
// Specify that the storage class of objects is converted to IA 15 days after they are last modified. 
rule1.setIADays("15");

BucketLifecycleRule rule2 = new BucketLifecycleRule();
rule2.setIdentifier("2");
rule2.setPrefix("B");
rule2.setStatus(true);
rule2.setDays("300");
rule2.setArchiveDays("30");
rule2.setMultipartDays("3");
rule2.setIADays("15");

ArrayList<BucketLifecycleRule> lifecycleRules = new ArrayList<BucketLifecycleRule>();
lifecycleRules.add(rule1);
lifecycleRules.add(rule2);
request.setLifecycleRules(lifecycleRules);
OSSAsyncTask task = oss.asyncPutBucketLifecycle(request, new OSSCompletedCallback<PutBucketLifecycleRequest, PutBucketLifecycleResult>() {
    @Override
    public void onSuccess(PutBucketLifecycleRequest request, PutBucketLifecycleResult result) {
        OSSLog.logInfo("code::"+result.getStatusCode());

    }

    @Override
    public void onFailure(PutBucketLifecycleRequest request, ClientException clientException, ServiceException serviceException) {
        OSSLog.logError("error: "+serviceException.getRawMessage());

    }
});

task.waitUntilFinished();

Go

package main

import (
	"fmt"
	"os"

	"github.com/aliyun/aliyun-oss-go-sdk/oss"
)

func main() {
	// Specify the name of the bucket. 
	bucketName := "yourBucketName"

	// Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
	provider, err := oss.NewEnvironmentVariableCredentialsProvider()
	if err != nil {
		fmt.Println("Error:", err)
		os.Exit(-1)
	}

	// Create an OSSClient instance. 
	// Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. Specify your actual endpoint. 
	client, err := oss.New("yourEndpoint", "", "", oss.SetCredentialsProvider(&provider))
	if err != nil {
		fmt.Println("Error:", err)
		os.Exit(-1)
	}

	// Create a lifecycle rule and set ID to rule1. Specify that the objects whose names contain the foo prefix in the bucket expire three days after the objects are last modified. 
	rule1 := oss.BuildLifecycleRuleByDays("rule1", "foo/", true, 3)

	// If an object in a bucket for which versioning is enabled is a delete marker and has no other versions, the delete marker is deleted. 
	deleteMark := true
	expiration := oss.LifecycleExpiration{
		ExpiredObjectDeleteMarker: &deleteMark,
	}

	// Specify that the previous versions of the objects are deleted 30 days after the objects are last modified. 
	versionExpiration := oss.LifecycleVersionExpiration{
		NoncurrentDays: 30,
	}

	// Specify that the storage class of the previous versions of the objects is converted to IA 10 days after the objects are last modified. 
	versionTransition := oss.LifecycleVersionTransition{
		NoncurrentDays: 10,
		StorageClass:   "IA",
	}

	// Create a lifecycle rule and set ID to rule2. 
	rule2 := oss.LifecycleRule{
		ID:                   "rule2",
		Prefix:               "yourObjectPrefix",
		Status:               "Enabled",
		Expiration:           &expiration,
		NonVersionExpiration: &versionExpiration,
		NonVersionTransition: &versionTransition,
	}

	// Create a lifecycle rule and set ID to rule3. This rule takes effect for objects that contain the tag whose key is tag1 and whose value is value1. These objects expire three days after the objects are last modified. 
	rule3 := oss.LifecycleRule{
		ID:     "rule3",
		Prefix: "",
		Status: "Enabled",
		Tags: []oss.Tag{
			oss.Tag{
				Key:   "tag1",
				Value: "value1",
			},
		},
		Expiration: &oss.LifecycleExpiration{Days: 3},
	}

	// Configure the lifecycle rules. 
	rules := []oss.LifecycleRule{rule1, rule2, rule3}
	err = client.SetBucketLifecycle(bucketName, rules)
	if err != nil {
		fmt.Println("Error:", err)
		os.Exit(-1)
	}
}

C++

#include <alibabacloud/oss/OssClient.h>
using namespace AlibabaCloud::OSS;

int main(void)
{
    /* Initialize information about the account that is used to access OSS. */
    
    /* Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. */
    std::string Endpoint = "yourEndpoint";
    /* Specify the name of the bucket. Example: examplebucket. */
    std::string BucketName = "examplebucket";

    /* Initialize resources such as network resources. */
    InitializeSdk();

    ClientConfiguration conf;
    /* Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. */
    auto credentialsProvider = std::make_shared<EnvironmentVariableCredentialsProvider>();
    OssClient client(Endpoint, credentialsProvider, conf);

    SetBucketLifecycleRequest request(BucketName);
    std::string date("2022-10-12T00:00:00.000Z");

    /* Configure tagging. */
    Tagging tagging;
    tagging.addTag(Tag("key1", "value1"));
    tagging.addTag(Tag("key2", "value2"));

    /* Specify a lifecycle rule. */
    auto rule1 = LifecycleRule();
    rule1.setID("rule1");
    rule1.setPrefix("test1/");
    rule1.setStatus(RuleStatus::Enabled);
    rule1.setExpiration(3);
    rule1.setTags(tagging.Tags());

    /* Specify the expiration date. */
    auto rule2 = LifecycleRule();
    rule2.setID("rule2");
    rule2.setPrefix("test2/");
    rule2.setStatus(RuleStatus::Disabled);
    rule2.setExpiration(date);

    /* Specify that rule3 is configured for the bucket if the versioning state of the bucket is enabled. */
    auto rule3 = LifecycleRule();
    rule3.setID("rule3");
    rule3.setPrefix("test3/");
    rule3.setStatus(RuleStatus::Disabled);

    /* Specify that the storage class of objects is converted to Archive 365 days after they are last modified. */  
    auto transition = LifeCycleTransition();  
    transition.Expiration().setDays(365);
    transition.setStorageClass(StorageClass::Archive);
    rule3.addTransition(transition);

    /* Specify that expired delete markers are automatically deleted. */
    rule3.setExpiredObjectDeleteMarker(true);

    /* Specify that the storage class of objects is converted to Infrequent Access (IA) 10 days after their versions are updated. */
    auto transition1 = LifeCycleTransition();  
    transition1.Expiration().setDays(10);
    transition1.setStorageClass(StorageClass::IA);

    /* Specify that the storage class of objects is converted to Archive 20 days after their versions are updated. */
    auto transition2 = LifeCycleTransition();  
    transition2.Expiration().setDays(20);
    transition2.setStorageClass(StorageClass::Archive);

    /* Specify that objects are deleted 30 days after their versions are updated. */
    auto expiration  = LifeCycleExpiration(30);
    rule3.setNoncurrentVersionExpiration(expiration);

    LifeCycleTransitionList noncurrentVersionStorageTransitions{transition1, transition2};
    rule3.setNoncurrentVersionTransitionList(noncurrentVersionStorageTransitions);

    /* Configure the lifecycle rules. */
    LifecycleRuleList list{rule1, rule2, rule3};
    request.setLifecycleRules(list);
    auto outcome = client.SetBucketLifecycle(request);

    if (!outcome.isSuccess()) {
        /* Handle exceptions. */
        std::cout << "SetBucketLifecycle fail" <<
        ",code:" << outcome.error().Code() <<
        ",message:" << outcome.error().Message() <<
        ",requestId:" << outcome.error().RequestId() << std::endl;
        return -1;
    }

    /* Release resources such as network resources. */
    ShutdownSdk();
    return 0;
}

C

#include "oss_api.h"
#include "aos_http_io.h"
/* Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. */
const char *endpoint = "yourEndpoint";
/* Specify the name of the bucket. Example: examplebucket. */
const char *bucket_name = "examplebucket";
void init_options(oss_request_options_t *options)
{
    options->config = oss_config_create(options->pool);
    /* Use a char* string to initialize data of the aos_string_t type. */
    aos_str_set(&options->config->endpoint, endpoint);
    /* Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. */
    aos_str_set(&options->config->access_key_id, getenv("OSS_ACCESS_KEY_ID"));
    aos_str_set(&options->config->access_key_secret, getenv("OSS_ACCESS_KEY_SECRET"));
    /* Specify whether to use CNAME to access OSS. The value 0 indicates that CNAME is not used. */
    options->config->is_cname = 0;
    /* Specify network parameters, such as the timeout period. */
    options->ctl = aos_http_controller_create(options->pool, 0);
}
int main(int argc, char *argv[])
{
    /* Call the aos_http_io_initialize method in main() to initialize global resources, such as network resources and memory resources. */
    if (aos_http_io_initialize(NULL, 0) != AOSE_OK) {
        exit(1);
    }
    /* Create a memory pool to manage memory. aos_pool_t is equivalent to apr_pool_t. The code that is used to create a memory pool is included in the APR library. */
    aos_pool_t *pool;
    /* Create a memory pool. The value of the second parameter is NULL. This value indicates that the pool does not inherit other memory pools. */
    aos_pool_create(&pool, NULL);
    /* Create and initialize options. This parameter includes global configuration information, such as endpoint, access_key_id, access_key_secret, is_cname, and curl. */
    oss_request_options_t *oss_client_options;
    /* Allocate the memory resources in the memory pool to the options. */
    oss_client_options = oss_request_options_create(pool);
    /* Initialize oss_client_options. */
    init_options(oss_client_options);
    /* Initialize the parameters. */
    aos_string_t bucket;
    aos_table_t *resp_headers = NULL; 
    aos_status_t *resp_status = NULL; 
    aos_str_set(&bucket, bucket_name);
    aos_list_t lifecycle_rule_list;   
    aos_str_set(&bucket, bucket_name);
    aos_list_init(&lifecycle_rule_list);
    /* Specify the validity period. */
    oss_lifecycle_rule_content_t *rule_content_days = oss_create_lifecycle_rule_content(pool);
    aos_str_set(&rule_content_days->id, "rule-1");
    /* Set the prefix contained in the names of the object that match the rule. */
    aos_str_set(&rule_content_days->prefix, "dir1");
    aos_str_set(&rule_content_days->status, "Enabled");
    rule_content_days->days = 3;
    aos_list_add_tail(&rule_content_days->node, &lifecycle_rule_list);
    /* Specify the expiration date. */
    oss_lifecycle_rule_content_t *rule_content_date = oss_create_lifecycle_rule_content(pool);
    aos_str_set(&rule_content_date->id, "rule-2");
    aos_str_set(&rule_content_date->prefix, "dir2");
    aos_str_set(&rule_content_date->status, "Enabled");
    /* The expiration date is displayed in UTC. */
    aos_str_set(&rule_content_date->date, "2023-10-11T00:00:00.000Z");
    aos_list_add_tail(&rule_content_date->node, &lifecycle_rule_list);
    /* Configure the lifecycle rule. */
    resp_status = oss_put_bucket_lifecycle(oss_client_options, &bucket, &lifecycle_rule_list, &resp_headers);
    if (aos_status_is_ok(resp_status)) {
        printf("put bucket lifecycle succeeded\n");
    } else {
        printf("put bucket lifecycle failed, code:%d, error_code:%s, error_msg:%s, request_id:%s\n",
            resp_status->code, resp_status->error_code, resp_status->error_msg, resp_status->req_id);
    }
    /* Release the memory pool. This operation releases the memory resources allocated for the request. */
    aos_pool_destroy(pool);
    /* Release the allocated global resources. */
    aos_http_io_deinitialize();
    return 0;
}

Ruby

require 'aliyun/oss'
require 'date'

client = Aliyun::OSS::Client.new(
  # In this example, the endpoint of the China (Hangzhou) region is used. Specify your actual endpoint. 
  endpoint: 'https://oss-cn-hangzhou.aliyuncs.com',
  # Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
  access_key_id: ENV['OSS_ACCESS_KEY_ID'],
  access_key_secret: ENV['OSS_ACCESS_KEY_SECRET']
)
# Specify the name of the bucket. 
bucket = client.get_bucket('examplebucket')
# Configure lifecycle rules. 
bucket.lifecycle = [
  Aliyun::OSS::LifeCycleRule.new(
    :id => 'rule1', :enable => true, :prefix => 'foo/', :expiry => 3),
  Aliyun::OSS::LifeCycleRule.new(
    :id => 'rule2', :enable => false, :prefix => 'bar/', :expiry => Date.new(2016, 1, 1))
]

Use ossutil

For more information about how to configure lifecycle rules by using ossutil, see Add or modify lifecycle rules.
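
The following minimal sketch shows the general shape of those ossutil commands. It assumes ossutil 1.x, in which bucket configurations are managed with the lifecycle command and a --method option; examplebucket and lifecycle.xml are placeholders, and the exact syntax for your ossutil version is described in the linked topic.

# Write the lifecycle rules defined in the local file lifecycle.xml to examplebucket.
ossutil lifecycle --method put oss://examplebucket lifecycle.xml
# Read back the lifecycle configuration of examplebucket.
ossutil lifecycle --method get oss://examplebucket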

Use RESTful APIs

If your business requires a high level of customization, you can directly call RESTful APIs. To directly call an API, you must include the signature calculation in your code. For more information, see PutBucketLifecycle.
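
For reference, the following is a minimal sketch of the XML request body that PutBucketLifecycle accepts. It defines one rule that converts objects whose names contain the logs/ prefix to IA 30 days after they are last modified and deletes them after 365 days; the rule ID, prefix, and periods are placeholders, and the full element list is described in the PutBucketLifecycle reference.

<?xml version="1.0" encoding="UTF-8"?>
<LifecycleConfiguration>
  <Rule>
    <ID>rule1</ID>
    <Prefix>logs/</Prefix>
    <Status>Enabled</Status>
    <Transition>
      <Days>30</Days>
      <StorageClass>IA</StorageClass>
    </Transition>
    <Expiration>
      <Days>365</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>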

Call CopyObject to manually convert the storage classes of objects

You can call the CopyObject operation to convert the storage class of an object by overwriting the object.

  • If you convert the storage class of an object to IA, you are charged storage fees based on the size and storage duration of the object. If you convert the storage class of an object to Archive, Cold Archive, or Deep Cold Archive, you are charged storage fees based on the size and storage duration of the object, plus data retrieval fees. For more information, see the Usage notes section.

  • To convert the storage class of an Archive object, a Cold Archive object, or a Deep Cold Archive object, you must first restore the object. For more information about how to restore an object, see Restore objects. If real-time access of Archive objects is enabled for a bucket, you can directly convert the storage class of Archive objects in the bucket without restoring them. For more information about how to enable real-time access of Archive objects, see Real-time access of Archive objects.

Note

If you call the CopyObject operation to convert the storage class of an object in a bucket for which versioning is enabled, OSS automatically generates a unique version ID for the destination object. The version ID is returned in the x-oss-version-id response header. If versioning is not enabled or suspended for the bucket, OSS generates a version whose ID is null for the destination object and overwrites the existing version whose ID is null.
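
The following minimal Java sketch shows the restore-then-copy sequence for an Archive object. It reuses the ossClient, bucketName, and objectName variables from the Java example below, assumes that real-time access of Archive objects is not enabled, and simplifies the polling logic; treat it as a sketch rather than production code.

// Initiate a restore task for the Archive object. 
ossClient.restoreObject(bucketName, objectName);
// Poll the x-oss-restore value in the object metadata. While the restore task is running,
// the value contains ongoing-request="true"; it changes to ongoing-request="false" when the task completes. 
ObjectMetadata meta = ossClient.getObjectMetadata(bucketName, objectName);
while (String.valueOf(meta.getRawMetadata().get("x-oss-restore")).contains("ongoing-request=\"true\"")) {
    Thread.sleep(60 * 1000); // The surrounding method must declare or handle InterruptedException. 
    meta = ossClient.getObjectMetadata(bucketName, objectName);
}
// The object is restored. You can now call CopyObject, as shown below, to convert its storage class. 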

Rules for storage class conversion by calling CopyObject

  • LRS

    You can convert an LRS object between two of the following storage classes: Standard LRS, IA LRS, Archive LRS, Cold Archive LRS, and Deep Cold Archive LRS.

  • ZRS

    You can convert a ZRS object only between Standard ZRS and IA ZRS.

Methods to manually convert the storage classes of objects

Use the OSS console

If you want to convert the storage class of an object in the OSS console, the size of the object cannot exceed 1 GB. To convert the storage class of an object whose size is greater than 1 GB, we recommend that you use OSS SDKs or ossutil.

  1. Log on to the OSS console.

  2. In the left-side navigation pane, click Buckets. On the Buckets page, click the bucket that contains the object whose storage class you want to convert.

  3. In the left-side navigation tree, choose Object Management > Objects.

  4. On the Objects page, find the object whose storage class you want to convert and choose more > Change Storage Class.

  5. We recommend that you keep Retain User Metadata turned on to retain the user metadata of the object after you convert the storage class.

  6. Select the storage class to which you want to convert the object, and click OK.

Use OSS SDKs

Java

import com.aliyun.oss.ClientException;
import com.aliyun.oss.OSS;
import com.aliyun.oss.common.auth.*;
import com.aliyun.oss.OSSClientBuilder;
import com.aliyun.oss.OSSException;
import com.aliyun.oss.model.CopyObjectRequest;
import com.aliyun.oss.model.CopyObjectResult;
import com.aliyun.oss.model.ObjectMetadata;
import com.aliyun.oss.model.StorageClass;

public class Demo {
    public static void main(String[] args) throws Exception {
        // In this example, the endpoint of the China (Hangzhou) region is used. Specify your actual endpoint. 
        String endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
        // Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
        EnvironmentVariableCredentialsProvider credentialsProvider = CredentialsProviderFactory.newEnvironmentVariableCredentialsProvider();
        // In this example, a bucket and a Standard or IA object are created. 
        // Specify the name of the bucket. Example: examplebucket. 
        String bucketName = "examplebucket";
        // Specify the full path of the object. Do not include the bucket name in the full path. Example: exampleobject.txt. 
        String objectName = "exampleobject.txt";

        // Create an OSSClient instance. 
        OSS ossClient = new OSSClientBuilder().build(endpoint, credentialsProvider);

        try {
            // Create a CopyObjectRequest object. 
            CopyObjectRequest request = new CopyObjectRequest(bucketName, objectName, bucketName, objectName) ;

            // Create an ObjectMetadata object. 
            ObjectMetadata objectMetadata = new ObjectMetadata();

            // Convert the storage class of the object to Archive. 
            objectMetadata.setHeader("x-oss-storage-class", StorageClass.Archive);
            // Convert the storage class of the object to Cold Archive. 
            // objectMetadata.setHeader("x-oss-storage-class", StorageClass.ColdArchive);
            // Convert the storage class of the object to Deep Cold Archive. 
            // objectMetadata.setHeader("x-oss-storage-class", StorageClass.DeepColdArchive);
            request.setNewObjectMetadata(objectMetadata);

            // Convert the storage class of the object. 
            CopyObjectResult result = ossClient.copyObject(request);
        } catch (OSSException oe) {
            System.out.println("Caught an OSSException, which means your request made it to OSS, "
                    + "but was rejected with an error response for some reason.");
            System.out.println("Error Message:" + oe.getErrorMessage());
            System.out.println("Error Code:" + oe.getErrorCode());
            System.out.println("Request ID:" + oe.getRequestId());
            System.out.println("Host ID:" + oe.getHostId());
        } catch (ClientException ce) {
            System.out.println("Caught an ClientException, which means the client encountered "
                    + "a serious internal problem while trying to communicate with OSS, "
                    + "such as not being able to access the network.");
            System.out.println("Error Message:" + ce.getMessage());
        } finally {
            if (ossClient != null) {
                ossClient.shutdown();
            }
        }
    }
}

PHP

<?php
if (is_file(__DIR__ . '/../autoload.php')) {
    require_once __DIR__ . '/../autoload.php';
}
if (is_file(__DIR__ . '/../vendor/autoload.php')) {
    require_once __DIR__ . '/../vendor/autoload.php';
}

use OSS\OssClient;
use OSS\Core\OssException;

// Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
$accessKeyId = getenv("OSS_ACCESS_KEY_ID");
$accessKeySecret = getenv("OSS_ACCESS_KEY_SECRET");
// In this example, the endpoint of the China (Hangzhou) region is used. Specify your actual endpoint. 
$endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
// Specify the name of the bucket. 
$bucket= "<yourBucketName>";
// Specify the full path of the object. Do not include the bucket name in the full path. Example: destfolder/exampleobject.txt. 
$object = "<yourObjectName>";

$ossClient = new OssClient($accessKeyId, $accessKeySecret, $endpoint);

try {

    // Specify the storage class to which you want to convert the storage class of the object. In this example, specify the Archive storage class. 
    $copyOptions = array(
        OssClient::OSS_HEADERS => array(            
            'x-oss-storage-class' => 'Archive',
            'x-oss-metadata-directive' => 'REPLACE',
        ),
    );
    
    $ossClient->copyObject($bucket, $object, $bucket, $object, $copyOptions);

} catch (OssException $e) {
    printf(__FUNCTION__ . ": FAILED\n");
    printf($e->getMessage() . "\n");
    return;
}

print(__FUNCTION__ . ": OK" . "\n");
const OSS = require('ali-oss');

const client = new OSS({
  // Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to oss-cn-hangzhou. 
  region: 'yourregion',
  // Obtain access credentials from environment variables. Before you run the sample code, make sure that you have configured environment variables OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET. 
  accessKeyId: process.env.OSS_ACCESS_KEY_ID,
  accessKeySecret: process.env.OSS_ACCESS_KEY_SECRET,
  // Specify the name of the bucket. 
  bucket: 'yourbucketname'
})
const options = {
    headers:{'x-oss-storage-class':'Archive'}
}
client.copy('Objectname','Objectname',options).then((res) => {
    console.log(res);
}).catch(err => {
    console.log(err)
})
# -*- coding: utf-8 -*-
import oss2
from oss2.credentials import EnvironmentVariableCredentialsProvider
import os
# Obtain access credentials from environment variables. Before you run the code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
auth = oss2.ProviderAuth(EnvironmentVariableCredentialsProvider())

# In this example, the endpoint of the China (Hangzhou) region is used. Specify your actual endpoint. 
# Specify the name of the bucket. Example: examplebucket. 
bucket = oss2.Bucket(auth, 'https://oss-cn-hangzhou.aliyuncs.com', 'examplebucket')
# Specify the full path of the object. Do not include the bucket name in the full path. Example: exampledir/exampleobject.txt. 
# Make sure that the storage class of the object is Standard or IA. 
object_name = 'exampledir/exampleobject.txt'

# Convert the storage class of the object to Archive by specifying the x-oss-storage-class header. 
headers = {'x-oss-storage-class': oss2.BUCKET_STORAGE_CLASS_ARCHIVE}
# Convert the storage class of the object to Cold Archive by specifying the x-oss-storage-class header. 
# headers = {'x-oss-storage-class': oss2.BUCKET_STORAGE_CLASS_COLD_ARCHIVE}
# Convert the storage class of the object to Deep Cold Archive by specifying the x-oss-storage-class header. 
# headers = {'x-oss-storage-class': oss2.BUCKET_STORAGE_CLASS_DEEP_COLD_ARCHIVE}
# Convert the storage class of the object. 
bucket.copy_object(bucket.bucket_name, object_name, object_name, headers=headers)
package main

import (
    "fmt"
    "os"

    "github.com/aliyun/aliyun-oss-go-sdk/oss"
)

func main() {
    // Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
    provider, err := oss.NewEnvironmentVariableCredentialsProvider()
    if err != nil {
        fmt.Println("Error:", err)
        os.Exit(-1)
    }

    // Create an OSSClient instance. 
    // Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. Specify your actual endpoint. 
    client, err := oss.New("yourEndpoint", "", "", oss.SetCredentialsProvider(&provider))
    if err != nil {
        fmt.Println("Error:", err)
        os.Exit(-1)
    }
    // Specify the name of the bucket. 
    bucketName := "yourBucketName"
    // Specify the full path of the object. Do not include the bucket name in the full path. 
    objectName := "yourObjectName"
    bucket, err := client.Bucket(bucketName)
    if err != nil {
        fmt.Println("Error:", err)
        os.Exit(-1)
    }

    // Convert the storage class of the object to Archive. 
    _, err = bucket.CopyObject(objectName, objectName, oss.ObjectStorageClass(oss.StorageArchive))
    if err != nil {
        fmt.Println("Error:", err)
        os.Exit(-1)
    }
}
OSSCopyObjectRequest * copy = [OSSCopyObjectRequest new];
copy.sourceBucketName = @"examplebucket";
copy.sourceObjectKey = @"exampleobject.txt";
copy.bucketName = @"examplebucket";
copy.objectKey = @"exampleobject.txt";
// Set the storage class of the object named exampleobject.txt to Archive. 
copy.objectMeta = @{@"x-oss-storage-class" : @"Archive"};

OSSTask * task = [client copyObject:copy];
[task continueWithBlock:^id(OSSTask *task) {
    if (!task.error) {
        NSLog(@"copy object success!");
    } else {
        NSLog(@"copy object failed, error: %@" , task.error);
    }
    return nil;
}];
#include <iostream>
#include <alibabacloud/oss/OssClient.h>

using namespace AlibabaCloud::OSS;

int main(void)
{  
            
    /* Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. */
    std::string Endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
    /* Specify the name of the bucket. Example: examplebucket. */
    std::string BucketName = "examplebucket";
    /* Specify the full path of the object. Do not include the bucket name in the full path. Example: exampledir/exampleobject.txt. */
    std::string ObjectName = "exampledir/exampleobject.txt";
  
    /* Initialize resources, such as network resources. */
    InitializeSdk();
    ClientConfiguration conf;
    /* Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. */
    auto credentialsProvider = std::make_shared<EnvironmentVariableCredentialsProvider>();
    OssClient client(Endpoint, credentialsProvider, conf);
    
    /* Specify the storage class to which you want to convert the object. In this example, the storage class is set to Archive. */
    ObjectMetaData objectMeta;
    objectMeta.addHeader("x-oss-storage-class", "Archive");
    
    std::string SourceBucketName = BucketName;
    std::string SourceObjectName = ObjectName;

    /* Copy the object onto itself so that only its storage class changes. */
    CopyObjectRequest request(BucketName, ObjectName, objectMeta);
    request.setCopySource(SourceBucketName, SourceObjectName);
    
    /* Convert the storage class of the object to the specified storage class. */
    auto outcome = client.CopyObject(request);
    if (!outcome.isSuccess()) {
        /* Handle exceptions. */
        std::cout << "CopyObject fail" <<
        ",code:" << outcome.error().Code() <<
        ",message:" << outcome.error().Message() <<
        ",requestId:" << outcome.error().RequestId() << std::endl;
        return -1;
    }
    
    /* Release resources, such as network resources. */
    ShutdownSdk();
    return 0;
}

Use ossutil

For more information about how to convert the storage class of an object by using ossutil, see Copy objects.
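As a rough illustration only, the following command copies the object onto itself and sets the X-Oss-Storage-Class header through the --meta option. The exact option syntax can vary between ossutil versions, so check the ossutil documentation before you run it.

ossutil cp oss://examplebucket/exampledir/exampleobject.txt oss://examplebucket/exampledir/exampleobject.txt --meta X-Oss-Storage-Class:Archive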

Use RESTful APIs

If your business requires a high level of customization, you can directly call RESTful APIs. To directly call an API, you must include the signature calculation in your code. For more information, see CopyObject.
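As a minimal sketch of the request shape only, the following Python example shows a raw CopyObject call that converts an object's storage class in place. The compute_authorization helper is hypothetical: you must implement OSS request signing yourself or use an official SDK.

# A minimal sketch of a raw CopyObject call that changes only the storage class.
# The compute_authorization() helper is hypothetical; implement OSS request
# signing yourself or use an official SDK instead.
import requests

bucket = "examplebucket"
object_name = "exampledir/exampleobject.txt"
endpoint = "oss-cn-hangzhou.aliyuncs.com"

headers = {
    # Copy the object onto itself so that only its metadata changes.
    "x-oss-copy-source": f"/{bucket}/{object_name}",
    # Specify the destination storage class.
    "x-oss-storage-class": "Archive",
    "x-oss-metadata-directive": "REPLACE",
    # "Authorization": compute_authorization(...),  # required; signature calculation omitted here
}

response = requests.put(f"https://{bucket}.{endpoint}/{object_name}", headers=headers)
print(response.status_code)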

Usage notes

When you convert the storage class of an object to IA, Archive, Cold Archive, or Deep Cold Archive, take note of the following items:

Minimum billable size

The minimum billable size of an object is 64 KB. If an object is smaller than 64 KB, it is billed as 64 KB. For example, a 10 KB object is billed as a 64 KB object.

Minimum storage duration

The minimum storage duration is 30 days for IA objects, 60 days for Archive objects, and 180 days for Cold Archive and Deep Cold Archive objects. If an object is stored for less than the minimum storage duration, you are still charged storage fees for the remaining days of the minimum storage duration. For more information, see Storage fees.

  • Configure lifecycle rules to automatically convert the storage classes of objects

    • If you convert the storage class of an object to IA or Archive, OSS does not recalculate the retention period.

      For example, an object named a.txt is a Standard object. After the object is stored in OSS for 10 days, its storage class is converted to IA based on lifecycle rules. After the storage class conversion, the object must be stored as an IA object for another 20 days to meet the minimum storage duration requirement of the IA storage class.

    • If you convert the storage class of an object to Cold Archive or Deep Cold Archive, OSS recalculates the retention period.

      • Example 1: An object named a.txt is a Standard or IA object. After the object is stored in OSS for 10 days, its storage class is converted to Cold Archive or Deep Cold Archive based on lifecycle rules. After the storage class conversion, the object must be stored for 180 days to meet the minimum storage duration requirement of the Cold Archive or Deep Cold Archive storage class.

      • Example 2: An object named a.txt is a Cold Archive object. After the object is stored in OSS for 30 days, its storage class is converted to Deep Cold Archive based on a lifecycle rule. You are charged for the 30 days during which the object was stored as a Cold Archive object, plus the remaining 150 days (180 - 30) of the Cold Archive minimum storage duration. After the conversion, the object must be stored as a Deep Cold Archive object for another 180 days to meet the minimum storage duration requirement of the Deep Cold Archive storage class.

  • Call CopyObject to manually convert the storage classes of objects

    If you call the CopyObject operation to manually convert the storage class of an object, OSS recalculates the storage duration of the object.

    For example, an object named a.txt is a Standard object. After the object is stored in OSS for 10 days, its storage class is converted to IA by calling the CopyObject operation. After the storage class conversion, the object must be stored as an IA object for 30 days to meet the minimum storage duration requirement of the IA storage class. A worked calculation of these durations is provided in the sketch after the following note.

Note

If you rename an IA object, an Archive object, a Cold Archive object, or a Deep Cold Archive object, or overwrite the object by uploading an object that has the same name before the minimum storage duration elapses, you are also charged for the remaining days of the minimum storage duration. For example, if you rename an IA object after it is stored for 29 days, OSS recalculates the last modified time of the object. In this case, you must store the object for another 30 days to meet the minimum storage duration requirement of the IA storage class.
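To make the preceding examples concrete, the following Python sketch (a helper written for this document, not part of any SDK) reproduces the arithmetic that determines how many more days an object must be stored after a conversion to meet the minimum storage duration.

# Minimum storage durations, in days, for each storage class.
MIN_DAYS = {"IA": 30, "Archive": 60, "ColdArchive": 180, "DeepColdArchive": 180}

def remaining_days(target_class, days_already_stored, via_lifecycle=True):
    # Lifecycle conversions to IA or Archive keep the original retention clock.
    # Conversions to Cold Archive or Deep Cold Archive, and all CopyObject
    # conversions, restart the clock.
    clock_resets = (not via_lifecycle) or target_class in ("ColdArchive", "DeepColdArchive")
    elapsed = 0 if clock_resets else days_already_stored
    return max(MIN_DAYS[target_class] - elapsed, 0)

print(remaining_days("IA", 10))                       # 20: lifecycle conversion, clock not reset
print(remaining_days("ColdArchive", 10))              # 180: lifecycle conversion, clock reset
print(remaining_days("IA", 10, via_lifecycle=False))  # 30: CopyObject conversion, clock reset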

Restoration time

Archive, Cold Archive, and Deep Cold Archive objects must be restored before they can be accessed. It takes a period of time to restore an Archive, Cold Archive, or Deep Cold Archive object. If your business scenario requires real-time access to objects, we recommend that you do not convert the storage class of objects to Archive, Cold Archive, or Deep Cold Archive.

API operation calling fees

The fee depends on the conversion method and the storage class of the object before the conversion:

  • Lifecycle rule

    • Storage class before conversion: Standard, IA, Archive, and Cold Archive

      You are charged for PUT requests based on the storage class before the conversion. The API operation calling fee is included in the bill for the current bucket.

  • CopyObject

    • Storage class before conversion: Archive

      • If real-time access of Archive objects is enabled for the source bucket:

        • You are charged for GET requests based on the storage class of the source object. The API operation calling fee is included in the bill for the source bucket.

        • You are charged for PUT requests based on the storage class of the destination object. The API operation calling fee is included in the bill for the destination bucket.

      • If real-time access of Archive objects is not enabled for the source bucket:

        You are charged for PUT requests based on the storage class of the source object. The API operation calling fee is included in the bill for the destination bucket.

    • Storage class before conversion: Standard, IA, Cold Archive, and Deep Cold Archive

      You are charged for PUT requests based on the storage class of the source object. The API operation calling fee is included in the bill for the destination bucket.

If you call the CopyObject operation to convert an Archive object in a bucket for which real-time access of Archive objects is enabled, you do not need to restore the object in advance, and you are not charged for the restoration. You are charged Archive data retrieval fees based on the size of the accessed Archive data.

If you call the CopyObject operation to convert an Archive object in a bucket for which real-time access of Archive objects is not enabled, the object must be restored first, and you are charged for the restoration.

For more information, see Data processing fees.

Data retrieval fees

You are charged data retrieval fees based on the size of the retrieved data when you access IA objects. You are charged additional fees when you restore Archive, Cold Archive, or Deep Cold Archive objects. If real-time access of Archive objects is enabled for a bucket, you are charged Archive data retrieval fees when you access Archive objects in real time. Data retrieval fees are calculated separately from outbound traffic fees. If a Standard object is accessed more than once per month, converting its storage class to IA, Archive, Cold Archive, or Deep Cold Archive may result in higher overall costs.

Temporary storage fees

If you restore a Cold Archive or Deep Cold Archive object, a Standard replica of the object is created to facilitate access. You are charged temporary storage fees for the replica, which is billed as Standard storage, until the restoration period ends.

FAQ

Can I convert the storage class of an object from IA to Standard by configuring a lifecycle rule based on the last modified time?

No, you cannot use a lifecycle rule to convert the storage class of an object from IA to Standard. You can use one of the following methods to convert the storage class of an object from IA to Standard: