Object Storage Service:Use lifecycle rules based on last access time to implement intelligent data tiering

Last Updated: Dec 30, 2025

You can use lifecycle rules based on the last access time to automatically monitor data access patterns and identify cold data. Then, you can transition the storage class of the cold data to implement intelligent data tiering and reduce storage costs.

Scenario description

A multimedia website needs to classify its data as hot or cold based on the last access time. The traditional method requires manual log analysis. However, using lifecycle rules based on the last access time lets you automatically identify and tier data.

In this scenario, data is categorized and stored in different paths within the `examplebucket` bucket. The goal is to transition some data to a lower-cost storage class after a specified period.

| Storage path | Storage scenario | Lifecycle policy | Result |
| --- | --- | --- | --- |
| `data/` | Stores WMV live streaming videos. These videos are accessed infrequently in the first two months after upload and are rarely accessed afterward. | Transition to the Infrequent Access (IA) storage class 200 days after the last access. If the data is accessed again, it remains in the IA storage class. | The storage class is transitioned after the specified number of days. Data that does not meet the criteria remains in the Standard storage class. |
| `data/` | Stores MP4 movie video data. Most files are frequently accessed within each six-month period. | Same as above. | Same as above. |
| `log/` | Stores a large amount of log data. A small number of files have a few access records within the last three months. All files have almost no access records six months after being uploaded. | Transition to the Infrequent Access (IA) storage class 120 days after the last access (if the data is accessed again, it remains in the IA storage class), and transition to the Archive storage class 250 days after the last access. | Same as above. |

Note
  • With a lifecycle policy based on the last access time, OSS automatically identifies and tiers hot and cold data. For example, frequently accessed MP4 videos in the `data/` path remain in the Standard storage class, while MP4 videos that are no longer accessed are transitioned to the Infrequent Access storage class 200 days after their last access. If you configure a lifecycle rule based on the last modification time instead, data in the `data/` path can be transitioned or deleted only based on when it was last modified, which prevents intelligent data tiering based on access frequency.

  • The lifecycle policies and recommended actions in this scenario are for reference only. You can configure lifecycle rules based on your specific business needs.
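The tiering behavior described above can be sketched as a small offline simulation. The helper below is hypothetical and only illustrates the decision logic; OSS evaluates the actual rules server-side.

```python
from datetime import date, timedelta

def target_storage_class(prefix: str, last_access: date, today: date) -> str:
    """Hypothetical sketch of the storage class the scenario's rules would assign."""
    idle_days = (today - last_access).days
    if prefix == "data/":
        # data/ rule: IA 200 days after the last access.
        return "IA" if idle_days >= 200 else "Standard"
    if prefix == "log/":
        # log/ rules: IA after 120 idle days, Archive after 250.
        if idle_days >= 250:
            return "Archive"
        if idle_days >= 120:
            return "IA"
        return "Standard"
    return "Standard"

today = date(2025, 12, 30)
print(target_storage_class("data/", today - timedelta(days=210), today))  # IA
print(target_storage_class("log/", today - timedelta(days=130), today))   # IA
print(target_storage_class("log/", today - timedelta(days=300), today))   # Archive
```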

Prerequisites

  • Access tracking is enabled.

  • To transition objects from the Standard or Infrequent Access storage class to the Archive, Cold Archive, or Deep Cold Archive storage class, you must submit a ticket to request this feature.

    Important

    After your ticket is approved, if you use a policy based on the last access time to transition objects from the Standard or Infrequent Access storage class to the Archive, Cold Archive, or Deep Cold Archive storage class, the last access time of these objects defaults to the time when access tracking was enabled for the bucket.
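    One practical consequence: for objects that already existed when access tracking was enabled, the transition clock starts at the enable time, not the upload time. A minimal sketch of that arithmetic (the dates are made up for illustration):

    ```python
    from datetime import date, timedelta

    # Hypothetical example: access tracking was enabled on this date, so objects
    # uploaded earlier get this date as their default last access time.
    access_tracking_enabled = date(2025, 1, 10)
    rule_days = 200  # the data/ rule: transition to IA 200 days after last access

    earliest_transition = access_tracking_enabled + timedelta(days=rule_days)
    print(earliest_transition)  # 2025-07-29
    ```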

Procedure

Use the OSS console

  1. Enable access tracking.

    1. Log on to the OSS console.

    2. In the left-side navigation pane, click Buckets. On the Buckets page, find and click the desired bucket.

    3. In the navigation pane on the left, choose Data Management > Lifecycle.

    4. On the Lifecycle page, turn on the Enable Access Tracking switch.

      Note

      After you enable access tracking, OSS sets the last access time for all objects in the bucket to the time when access tracking was enabled.

  2. Configure a lifecycle rule.

    1. On the Lifecycle page, click Create Rule.

    2. In the Create Lifecycle Rule panel, configure lifecycle rules for the `data/` prefix and the `log/` prefix as described in the sections below.

      Lifecycle rule for the data/ prefix

      Configure the required parameters for the lifecycle rule for the `data/` prefix as described below. You can keep the default settings for the other parameters.

      | Configuration item | Description |
      | --- | --- |
      | Status | Select Start. |
      | Applied To | Select Match by Prefix. |
      | Prefix | Enter `data/`. |
      | Object Lifecycle | Select Specify Days. |
      | Lifecycle-based Rules | From the drop-down list, select Last Access Time. Enter 200 days. The data is automatically transitioned to Infrequent Access (data remains in IA after being accessed). |

      Lifecycle rule for the log/ prefix

      Configure the required parameters for the lifecycle rule for the `log/` prefix as described below. You can keep the default settings for the other parameters.

      | Configuration item | Description |
      | --- | --- |
      | Status | Select Start. |
      | Applied To | Select Match by Prefix. |
      | Prefix | Enter `log/`. |
      | Object Lifecycle | Select Specify Days. |
      | Lifecycle-based Rules | Add two rules, each with Last Access Time selected from the drop-down list: enter 120 days to transition the data to Infrequent Access (data remains in IA after being accessed), and enter 250 days to transition the data to Archive. |

    3. Click OK.

Use an Alibaba Cloud SDK

Only the Java, Python, and Go SDKs support the creation of lifecycle rules based on the last access time. Before you create a lifecycle rule based on the last access time, you must enable access tracking for the specified bucket.

  1. Enable access tracking.

    Java

    import com.aliyun.oss.*;
    import com.aliyun.oss.common.auth.*;
    import com.aliyun.oss.common.comm.SignVersion;
    import com.aliyun.oss.model.AccessMonitor;
    
    public class Demo {
    
        public static void main(String[] args) throws Exception {
            // In this example, the endpoint of the China (Hangzhou) region is used. Specify your actual endpoint. 
            String endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
            // Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
            EnvironmentVariableCredentialsProvider credentialsProvider = CredentialsProviderFactory.newEnvironmentVariableCredentialsProvider();
            // Specify the name of the bucket. Example: examplebucket. 
            String bucketName = "examplebucket";
    
            // Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to cn-hangzhou.
            String region = "cn-hangzhou";
    
            // Create an OSSClient instance. 
            // Call the shutdown method to release associated resources when the OSSClient is no longer in use.
            ClientBuilderConfiguration clientBuilderConfiguration = new ClientBuilderConfiguration();
            clientBuilderConfiguration.setSignatureVersion(SignVersion.V4);        
            OSS ossClient = OSSClientBuilder.create()
            .endpoint(endpoint)
            .credentialsProvider(credentialsProvider)
            .clientConfiguration(clientBuilderConfiguration)
            .region(region)               
            .build();
    
            try {
                // Enable access tracking for the bucket. If you want to change the access tracking status of a bucket from Enabled to Disabled, make sure that the bucket does not have lifecycle rules configured based on the last access time of the objects in the bucket. 
                ossClient.putBucketAccessMonitor(bucketName, AccessMonitor.AccessMonitorStatus.Enabled.toString());
            } catch (OSSException oe) {
                System.out.println("Caught an OSSException, which means your request made it to OSS, "
                        + "but was rejected with an error response for some reason.");
                System.out.println("Error Message:" + oe.getErrorMessage());
                System.out.println("Error Code:" + oe.getErrorCode());
                System.out.println("Request ID:" + oe.getRequestId());
                System.out.println("Host ID:" + oe.getHostId());
            } catch (ClientException ce) {
                System.out.println("Caught an ClientException, which means the client encountered "
                        + "a serious internal problem while trying to communicate with OSS, "
                        + "such as not being able to access the network.");
                System.out.println("Error Message:" + ce.getMessage());
            } finally {
                if (ossClient != null) {
                    ossClient.shutdown();
                }
            }
        }
    }

    Python

    # -*- coding: utf-8 -*-
    import oss2
    from oss2.credentials import EnvironmentVariableCredentialsProvider
    
    # Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
    auth = oss2.ProviderAuthV4(EnvironmentVariableCredentialsProvider())
    
    # Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. 
    endpoint = "https://oss-cn-hangzhou.aliyuncs.com"
    # Specify the ID of the region that maps to the endpoint. Example: cn-hangzhou. This parameter is required if you use the signature algorithm V4.
    region = "cn-hangzhou"
    
    # Specify the name of the bucket.
    bucket = oss2.Bucket(auth, endpoint, "examplebucket", region=region)
    
    # Enable access tracking for the bucket. If you want to change the access tracking status of a bucket to Disabled after you enable access tracking for the bucket, make sure that the bucket does not have lifecycle rules that are configured based on the last access time. 
    bucket.put_bucket_access_monitor("Enabled")

    Go

    package main
    
    import (
    	"context"
    	"flag"
    	"log"
    
    	"github.com/aliyun/alibabacloud-oss-go-sdk-v2/oss"
    	"github.com/aliyun/alibabacloud-oss-go-sdk-v2/oss/credentials"
    )
    
    // Define global variables.
    var (
    	region     string // The bucket region.
    	bucketName string // The bucket name.
    )
    
    // The init function initializes command-line parameters.
    func init() {
    	flag.StringVar(&region, "region", "", "The region in which the bucket is located.")
    	flag.StringVar(&bucketName, "bucket", "", "The name of the bucket.")
    }
    
    // The main function enables access tracking for the bucket.
    func main() {
    	// Parse command-line parameters.
    	flag.Parse()
    
    	// Check whether the bucket name is empty.
    	if len(bucketName) == 0 {
    		flag.PrintDefaults()
    		log.Fatalf("invalid parameters, bucket name required")
    	}
    
    	// Check whether the region is empty.
    	if len(region) == 0 {
    		flag.PrintDefaults()
    		log.Fatalf("invalid parameters, region required")
    	}
    
    	// Load the default configurations, and set the credential provider and region.
    	cfg := oss.LoadDefaultConfig().
    		WithCredentialsProvider(credentials.NewEnvironmentVariableCredentialsProvider()).
    		WithRegion(region)
    
    	// Create an OSS client.
    	client := oss.NewClient(cfg)
    
    	// Create a request to enable access tracking for the bucket.
    	request := &oss.PutBucketAccessMonitorRequest{
    		Bucket: oss.Ptr(bucketName),
    		AccessMonitorConfiguration: &oss.AccessMonitorConfiguration{
    			Status: oss.AccessMonitorStatusEnabled, // Enable access tracking.
    		},
    	}
    
    	// Execute the operation to enable access tracking for the bucket.
    	putResult, err := client.PutBucketAccessMonitor(context.TODO(), request)
    	if err != nil {
    		log.Fatalf("failed to put bucket access monitor %v", err)
    	}
    
    	// Print the result.
    	log.Printf("put bucket access monitor result: %#v\n", putResult)
    }
    
  2. Configure lifecycle rules based on the last access time for the data/ prefix and the log/ prefix.

    Java

    import com.aliyun.oss.ClientException;
    import com.aliyun.oss.OSS;
    import com.aliyun.oss.common.auth.*;
    import com.aliyun.oss.OSSClientBuilder;
    import com.aliyun.oss.OSSException;
    import com.aliyun.oss.model.*;
    import java.util.ArrayList;
    import java.util.List;
    
    public class Lifecycle {
    
        public static void main(String[] args) throws Exception {
            // The endpoint is set to China (Hangzhou) in this example. Specify the actual endpoint.
            String endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
            // Obtain access credentials from environment variables. Before running the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are set.
            EnvironmentVariableCredentialsProvider credentialsProvider = CredentialsProviderFactory.newEnvironmentVariableCredentialsProvider();
            // Specify the bucket name, for example, examplebucket.
            String bucketName = "examplebucket";
    
            // Create an OSSClient instance.
            // When the OSSClient instance is no longer needed, call the shutdown method to release resources.
            OSS ossClient = new OSSClientBuilder().build(endpoint, credentialsProvider);
    
            try {
                String ruleId1 = "rule1";
                String ruleId2 = "rule2";
                // Specify the prefix as data/.
                String matchPrefix = "data/";
                // Specify the prefix as log/.
                String matchPrefix2 = "log/";
    
                SetBucketLifecycleRequest request = new SetBucketLifecycleRequest(bucketName);
    
                // In lifecycle rule 1, transition all files with the data/ prefix to the Infrequent Access storage class 200 days after their last access time. When these files are accessed again, they remain in the Infrequent Access storage class.
                List<LifecycleRule.StorageTransition> storageTransitions = new ArrayList<LifecycleRule.StorageTransition>();
                LifecycleRule.StorageTransition storageTransition = new LifecycleRule.StorageTransition();
                storageTransition.setStorageClass(StorageClass.IA);
                storageTransition.setExpirationDays(200);
                storageTransition.setIsAccessTime(true);
                storageTransition.setReturnToStdWhenVisit(false);
                storageTransitions.add(storageTransition);
    
                LifecycleRule rule = new LifecycleRule(ruleId1, matchPrefix, LifecycleRule.RuleStatus.Enabled);
                rule.setStorageTransition(storageTransitions);
                request.AddLifecycleRule(rule);
    
                // In lifecycle rule 2, transition all files with the log/ prefix to the Infrequent Access storage class 120 days after their last access time. When these files are accessed again, they remain in the Infrequent Access storage class.
                List<LifecycleRule.StorageTransition> storageTransitions2 = new ArrayList<LifecycleRule.StorageTransition>();
                LifecycleRule.StorageTransition storageTransition2 = new LifecycleRule.StorageTransition();
                storageTransition2.setStorageClass(StorageClass.IA);
                storageTransition2.setExpirationDays(120);
                storageTransition2.setIsAccessTime(true);
                storageTransition2.setReturnToStdWhenVisit(false);
                storageTransitions2.add(storageTransition2);
                // In the same rule, transition all files with the log/ prefix to the Archive storage class 250 days after their last access time.
                LifecycleRule.StorageTransition storageTransition3 = new LifecycleRule.StorageTransition();
                storageTransition3.setStorageClass(StorageClass.Archive);
                storageTransition3.setExpirationDays(250);
                storageTransition3.setIsAccessTime(true);
                storageTransition3.setReturnToStdWhenVisit(false);
                storageTransitions2.add(storageTransition3);
    
                LifecycleRule rule2 = new LifecycleRule(ruleId2, matchPrefix2, LifecycleRule.RuleStatus.Enabled);
                rule2.setStorageTransition(storageTransitions2);
                request.AddLifecycleRule(rule2);
    
                VoidResult result = ossClient.setBucketLifecycle(request);
    
                System.out.println("Return status code:"+result.getResponse().getStatusCode()+" set lifecycle succeed");
            } catch (OSSException oe) {
                System.out.println("Caught an OSSException, which means your request made it to OSS, "
                        + "but was rejected with an error response for some reason.");
                System.out.println("Error Message:" + oe.getErrorMessage());
                System.out.println("Error Code:" + oe.getErrorCode());
                System.out.println("Request ID:" + oe.getRequestId());
                System.out.println("Host ID:" + oe.getHostId());
            } catch (ClientException ce) {
                System.out.println("Caught an ClientException, which means the client encountered "
                        + "a serious internal problem while trying to communicate with OSS, "
                        + "such as not being able to access the network.");
                System.out.println("Error Message:" + ce.getMessage());
            } finally {
                if (ossClient != null) {
                    ossClient.shutdown();
                }
            }
        }
    }

    Python

    # -*- coding: utf-8 -*-
    import oss2
    from oss2.credentials import EnvironmentVariableCredentialsProvider
    from oss2.models import LifecycleRule, BucketLifecycle, StorageTransition
    
    # Obtain access credentials from environment variables. Before running this sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are set.
    auth = oss2.ProviderAuth(EnvironmentVariableCredentialsProvider())
    # Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com.
    # Specify the bucket name, for example, examplebucket.
    bucket = oss2.Bucket(auth, 'https://oss-cn-hangzhou.aliyuncs.com', 'examplebucket')
    
    # In lifecycle rule 1, transition all files with the data/ prefix to the Infrequent Access storage class 200 days after their last access time. When these files are accessed again, they remain in the Infrequent Access storage class.
    rule1 = LifecycleRule('rule1', 'data/', status=LifecycleRule.ENABLED)
    rule1.storage_transitions = [StorageTransition(days=200,
                                                   storage_class=oss2.BUCKET_STORAGE_CLASS_IA,
                                                   is_access_time=True,
                                                   return_to_std_when_visit=False)]
    
    # In lifecycle rule 2, transition all files with the log/ prefix to the Infrequent Access storage class 120 days after their last access time. When these files are accessed again, they remain in the Infrequent Access storage class.
    # In the same rule, transition all files with the log/ prefix to the Archive storage class 250 days after their last access time.
    rule2 = LifecycleRule('rule2', 'log/', status=LifecycleRule.ENABLED)
    rule2.storage_transitions = [StorageTransition(days=120,
                                                   storage_class=oss2.BUCKET_STORAGE_CLASS_IA,
                                                   is_access_time=True,
                                                   return_to_std_when_visit=False),
                                 StorageTransition(days=250,
                                                   storage_class=oss2.BUCKET_STORAGE_CLASS_ARCHIVE,
                                                   is_access_time=True,
                                                   return_to_std_when_visit=False)]
    
    lifecycle = BucketLifecycle([rule1, rule2])
    
    # Set the lifecycle rule.
    result = bucket.put_bucket_lifecycle(lifecycle)
    
    print('Lifecycle rule set successfully. Return status:' + str(result.status))

    Go

    package main
    
    import (
    	"context"
    	"flag"
    	"log"
    
    	"github.com/aliyun/alibabacloud-oss-go-sdk-v2/oss"
    	"github.com/aliyun/alibabacloud-oss-go-sdk-v2/oss/credentials"
    )
    
    // Define global variables.
    var (
    	region     string // Region in which the bucket is located.
    	bucketName string // Name of the bucket.
    )
    
    // Specify the init function used to initialize command line parameters.
    func init() {
    	flag.StringVar(&region, "region", "", "The region in which the bucket is located.")
    	flag.StringVar(&bucketName, "bucket", "", "The name of the bucket.")
    }
    
    func main() {
    	// Parse command line parameters.
    	flag.Parse()
    
    	// Check whether the name of the bucket is specified.
    	if len(bucketName) == 0 {
    		flag.PrintDefaults()
    		log.Fatalf("invalid parameters, bucket name required")
    	}
    
    	// Check whether the region is specified.
    	if len(region) == 0 {
    		flag.PrintDefaults()
    		log.Fatalf("invalid parameters, region required")
    	}
    
    	// Load the default configurations and specify the credential provider and region.
    	cfg := oss.LoadDefaultConfig().
    		WithCredentialsProvider(credentials.NewEnvironmentVariableCredentialsProvider()).
    		WithRegion(region)
    
    	// Create an OSS client.
    	client := oss.NewClient(cfg)
    
    	// Create a request to configure lifecycle rules for the bucket.
    	request := &oss.PutBucketLifecycleRequest{
    		Bucket: oss.Ptr(bucketName), // Name of the bucket.
    		LifecycleConfiguration: &oss.LifecycleConfiguration{
    			Rules: []oss.LifecycleRule{
    				{
    					// Configure rule1 to change the storage class of objects whose names contain the data/ prefix to IA 200 days after they are last accessed. Specify that the objects remain in the IA storage class when they are accessed again.
    					ID:     oss.Ptr("rule1"),
    					Status: oss.Ptr("Enabled"),
    					Prefix: oss.Ptr("data/"),
    					Transitions: []oss.LifecycleRuleTransition{
    						{
    							Days:                 oss.Ptr(int32(200)),
    							StorageClass:         oss.StorageClassIA,
    							IsAccessTime:         oss.Ptr(true), // Set this parameter to true to specify that the storage classes of objects are converted based on the last access time.
    							ReturnToStdWhenVisit: oss.Ptr(false),
    						},
    					},
    				},
    				{
    					// Configure rule2 to change the storage class of objects whose names contain the log/ prefix to IA 120 days after they are last accessed. Specify that the objects remain in the IA storage class when they are accessed again.
    					// Change the storage class of objects whose names contain the log/ prefix to Archive 250 days after they are last accessed.
    					ID:     oss.Ptr("rule2"),
    					Status: oss.Ptr("Enabled"),
    					Prefix: oss.Ptr("log/"),
    					Transitions: []oss.LifecycleRuleTransition{
    						{
    							Days:                 oss.Ptr(int32(120)),
    							StorageClass:         oss.StorageClassIA,
    							IsAccessTime:         oss.Ptr(true), // Set this parameter to true to specify that the storage classes of objects are converted based on the last access time.
    							ReturnToStdWhenVisit: oss.Ptr(false),
    						},
    						{
    							Days:                 oss.Ptr(int32(250)),
    							StorageClass:         oss.StorageClassArchive,
    							IsAccessTime:         oss.Ptr(true),
    							ReturnToStdWhenVisit: oss.Ptr(false),
    						},
    					},
    				},
    			},
    		},
    	}
    
    	// Configure lifecycle rules for the bucket.
    	result, err := client.PutBucketLifecycle(context.TODO(), request)
    	if err != nil {
    		log.Fatalf("failed to put bucket lifecycle %v", err)
    	}
    
    	// Display the result.
    	log.Printf("put bucket lifecycle result:%#v\n", result)
    }
    

Use the ossutil command line interface

ossutil 2.0

  1. Enable access tracking.

    1. Configure access tracking in the local `config1.xml` file.

      <?xml version="1.0" encoding="UTF-8"?>
      <AccessMonitorConfiguration>
          <Status>Enabled</Status>
      </AccessMonitorConfiguration>
    2. Set the access tracking status for the target bucket.

      ossutil api put-bucket-access-monitor --bucket bucketname --access-monitor-configuration file://config1.xml
  2. Configure lifecycle rules based on the last access time for the `data/` prefix and the `log/` prefix.

    1. Configure the following lifecycle rules in the local `config2.xml` file.

      <?xml version="1.0" encoding="UTF-8"?>
      <LifecycleConfiguration>
        <Rule>
          <ID>rule1</ID>
          <Prefix>data/</Prefix>
          <Status>Enabled</Status>
          <Transition>
            <Days>200</Days>
            <StorageClass>IA</StorageClass>
            <IsAccessTime>true</IsAccessTime>
            <ReturnToStdWhenVisit>false</ReturnToStdWhenVisit>
          </Transition>    
        </Rule>
        <Rule>
          <ID>rule2</ID>
          <Prefix>log/</Prefix>
          <Status>Enabled</Status>
          <Transition>
            <Days>120</Days>
            <StorageClass>IA</StorageClass>
            <IsAccessTime>true</IsAccessTime>
            <ReturnToStdWhenVisit>false</ReturnToStdWhenVisit>
          </Transition>
          <Transition>
            <Days>250</Days>
            <StorageClass>Archive</StorageClass>
            <IsAccessTime>true</IsAccessTime>
            <ReturnToStdWhenVisit>false</ReturnToStdWhenVisit>
          </Transition>    
        </Rule>
      </LifecycleConfiguration>
    2. Set the lifecycle rules for the target bucket.

      ossutil api put-bucket-lifecycle --bucket bucketname --lifecycle-configuration file://config2.xml
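    Before uploading a lifecycle configuration file, it can help to sanity-check it locally. The sketch below inlines the `config2.xml` content from the previous step and verifies that every transition is access-time based and that, within a rule, the IA transition (fewer days) precedes the Archive transition; the check itself is a hypothetical helper, not part of ossutil.

    ```python
    import xml.etree.ElementTree as ET

    # The config2.xml content from the step above, inlined for an offline check.
    CONFIG = """<?xml version="1.0" encoding="UTF-8"?>
    <LifecycleConfiguration>
      <Rule>
        <ID>rule1</ID>
        <Prefix>data/</Prefix>
        <Status>Enabled</Status>
        <Transition>
          <Days>200</Days>
          <StorageClass>IA</StorageClass>
          <IsAccessTime>true</IsAccessTime>
          <ReturnToStdWhenVisit>false</ReturnToStdWhenVisit>
        </Transition>
      </Rule>
      <Rule>
        <ID>rule2</ID>
        <Prefix>log/</Prefix>
        <Status>Enabled</Status>
        <Transition>
          <Days>120</Days>
          <StorageClass>IA</StorageClass>
          <IsAccessTime>true</IsAccessTime>
          <ReturnToStdWhenVisit>false</ReturnToStdWhenVisit>
        </Transition>
        <Transition>
          <Days>250</Days>
          <StorageClass>Archive</StorageClass>
          <IsAccessTime>true</IsAccessTime>
          <ReturnToStdWhenVisit>false</ReturnToStdWhenVisit>
        </Transition>
      </Rule>
    </LifecycleConfiguration>"""

    # Parse as bytes because the string carries an XML encoding declaration.
    root = ET.fromstring(CONFIG.encode("utf-8"))
    for rule in root.findall("Rule"):
        days = []
        for t in rule.findall("Transition"):
            # Every transition in this scenario is based on the last access time.
            assert t.findtext("IsAccessTime") == "true"
            days.append(int(t.findtext("Days")))
        # The IA transition must use fewer days than the Archive transition.
        assert days == sorted(days)
    print("lifecycle configuration looks consistent")
    ```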

ossutil 1.0

  1. Enable access tracking.

    1. Configure access tracking in the local `config1.xml` file.

      <?xml version="1.0" encoding="UTF-8"?>
      <AccessMonitorConfiguration>
          <Status>Enabled</Status>
      </AccessMonitorConfiguration>
    2. Set the access tracking status for the target bucket.

      ossutil access-monitor --method put oss://examplebucket/ config1.xml
  2. Configure lifecycle rules based on the last access time for the `data/` prefix and the `log/` prefix.

    1. Configure the following lifecycle rules in the local `config2.xml` file.

      <?xml version="1.0" encoding="UTF-8"?>
      <LifecycleConfiguration>
        <Rule>
          <ID>rule1</ID>
          <Prefix>data/</Prefix>
          <Status>Enabled</Status>
          <Transition>
            <Days>200</Days>
            <StorageClass>IA</StorageClass>
            <IsAccessTime>true</IsAccessTime>
            <ReturnToStdWhenVisit>false</ReturnToStdWhenVisit>
          </Transition>    
        </Rule>
        <Rule>
          <ID>rule2</ID>
          <Prefix>log/</Prefix>
          <Status>Enabled</Status>
          <Transition>
            <Days>120</Days>
            <StorageClass>IA</StorageClass>
            <IsAccessTime>true</IsAccessTime>
            <ReturnToStdWhenVisit>false</ReturnToStdWhenVisit>
          </Transition>
          <Transition>
            <Days>250</Days>
            <StorageClass>Archive</StorageClass>
            <IsAccessTime>true</IsAccessTime>
            <ReturnToStdWhenVisit>false</ReturnToStdWhenVisit>
          </Transition>    
        </Rule>
      </LifecycleConfiguration>
    2. Set the lifecycle rules for the target bucket.

      ossutil lifecycle --method put oss://examplebucket config2.xml

Use a REST API

If your application requires a high degree of customization, you can call the REST API directly. To send REST API requests, you must write your own code to calculate request signatures. For more information, see PutBucketAccessMonitor and PutBucketLifecycle.
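For example, the request body for PutBucketAccessMonitor is a small XML document. A minimal sketch of constructing that body is shown below; signing and sending the request are omitted, and the authoritative body format is defined by the API reference:

```python
import xml.etree.ElementTree as ET

# Build the AccessMonitorConfiguration body used by PutBucketAccessMonitor.
config = ET.Element("AccessMonitorConfiguration")
ET.SubElement(config, "Status").text = "Enabled"
body = ET.tostring(config, encoding="unicode")
print(body)  # <AccessMonitorConfiguration><Status>Enabled</Status></AccessMonitorConfiguration>
```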

References