
Object Storage Service:Lifecycle rules based on the last access time

Last Updated: Dec 08, 2025

You can configure lifecycle rules based on the last access time of objects in an Object Storage Service (OSS) bucket. After you configure such a lifecycle rule, OSS monitors the access patterns of the objects in the bucket, identifies cold data based on access patterns, and automatically moves cold data to the specified storage class for tiered data storage and reduced storage costs.

Use cases

  • Multimedia

    After you store videos and images of your website in an OSS bucket, some of the data may become infrequently accessed over time. For data that becomes infrequently accessed, you may need to change its storage class to Infrequent Access (IA). For data that was uploaded a long time ago but is still frequently accessed, you need to retain the Standard storage class. You can configure a lifecycle rule for the bucket based on the last access time of the objects in the bucket. This way, cold data and hot data are stored in different storage classes, and storage costs are reduced.

  • Albums or network disks

    A bucket can be used to store photo albums or serve as a network disk. If you want to reduce the storage costs of infrequently accessed data and still maintain real-time access to the data, you can configure a lifecycle rule to automatically move the data to the IA storage class after the specified number of days from the last access time.

  • Life science

    A large amount of data is generated in gene sequencing. In most cases, whether the data is frequently accessed is determined based on the last access time instead of the last modified time of the data. You can manually implement tiered storage of cold and hot data based on log analysis or other methods. A more efficient method for tiered storage is to configure a lifecycle rule based on the last access time, which allows OSS to automatically identify cold and hot data and store data in appropriate storage classes. You can also specify policies based on the last access time and the last modified time in the same lifecycle rule to manage data in a more flexible manner.

Limits

Data deletion

You cannot delete data by using lifecycle rules that are based on the last access time.

Match conditions

Lifecycle rules support matching only based on prefixes and tags. Wildcard matching, suffix matching, and regular expression matching are not supported.

Part lifecycle

You cannot configure two or more lifecycle rules that contain a part lifecycle policy for objects whose names have overlapping prefixes. Examples:

  • Example 1

    If you configure a lifecycle rule that contains a part policy for a bucket, you cannot configure another lifecycle rule that contains a part policy for any objects in the bucket.

  • Example 2

    If you configure a lifecycle rule that contains a part policy for objects whose names contain the dir1 prefix in a bucket, you cannot configure another lifecycle rule that contains a part policy for objects whose names contain overlapping prefixes, such as dir1/dir2.
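The overlap condition in the examples above is plain string-prefix containment: two prefixes conflict when one is a prefix of the other. A minimal sketch of that check (a hypothetical helper for illustration, not part of any OSS SDK):

```python
def prefixes_overlap(p1: str, p2: str) -> bool:
    """Two object-name prefixes overlap when one is a prefix of the other,
    e.g. 'dir1' and 'dir1/dir2'."""
    return p1.startswith(p2) or p2.startswith(p1)

print(prefixes_overlap("dir1", "dir1/dir2"))  # True: cannot both carry a part policy
print(prefixes_overlap("dir1/", "dir3/"))     # False: no conflict
```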

Usage notes

Number of lifecycle rules

A bucket can have up to 1,000 lifecycle rules. A lifecycle rule can contain both policies based on the last modified time and policies based on the last access time.

Fees

  • Object monitoring and management fees

    After you enable access tracking for a bucket, object monitoring and management fees are generated. However, you are currently not charged these fees.

  • Storage fees for IA objects that are stored for less than the minimum storage duration

    IA objects have a minimum storage duration of 30 days. You are charged for the storage usage of IA objects that are stored for less than the minimum storage duration. The following examples show how IA objects are billed when you configure lifecycle rules based on the last access time of the objects:

    Example 1: Based on the configured lifecycle rule, OSS converts a Standard object to an IA object 10 days after the object is created and converts the IA object back to a Standard object 5 days later. The object is stored in the IA storage class for only 5 days. In this case, you are charged for the remaining 25 days of the 30-day minimum storage duration.

    Example 2: Based on the configured lifecycle rule, OSS converts a Standard object to an IA object 10 days after the object is created and deletes the IA object 15 days later. The object is stored in the IA storage class for only 15 days. In this case, you are charged for the remaining 15 days of the 30-day minimum storage duration.

    For more information, see Storage fees.
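The minimum-storage-duration charge in the examples above reduces to simple arithmetic. A sketch, assuming the charge covers exactly the unmet remainder of the 30-day minimum (actual fees also depend on object size and regional unit prices, which are omitted here):

```python
MIN_IA_STORAGE_DAYS = 30  # minimum storage duration for the IA storage class

def remaining_billable_days(days_stored_as_ia: int) -> int:
    """Days of IA storage still billed when an object leaves the IA
    storage class before the 30-day minimum storage duration is met."""
    return max(0, MIN_IA_STORAGE_DAYS - days_stored_as_ia)

print(remaining_billable_days(5))   # stored 5 days as IA -> 25 more days billed
print(remaining_billable_days(15))  # stored 15 days as IA -> 15 more days billed
print(remaining_billable_days(40))  # minimum already met -> 0
```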

  • Retrieval fees for IA objects

    When you access IA objects, you are charged data retrieval fees based on the size of the retrieved IA objects. For more information, see Data processing fees.

  • API operation calling fees

    When you convert storage classes of objects by using lifecycle rules, you are charged API operation calling fees. For more information, see API operation calling fees.

Overwrite semantics

The PutBucketLifecycle operation overwrites the entire existing lifecycle configuration of a bucket. For example, if a lifecycle rule named Rule1 is already configured for a bucket and you want to add another lifecycle rule named Rule2, perform the following steps:

  1. Call the GetBucketLifecycle operation to query Rule1.

  2. Add both Rule1 and Rule2 to the lifecycle configuration.

  3. Call the PutBucketLifecycle operation to apply Rule1 and Rule2 to the bucket.
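Because PutBucketLifecycle replaces the whole configuration, the steps above follow a read-modify-write pattern. A minimal sketch of the merge step, with rules modeled as plain dicts keyed by ID (in real code, the existing list comes from GetBucketLifecycle and the merged list is uploaded with PutBucketLifecycle; the dict shape here is illustrative, not an SDK type):

```python
def merge_rules(existing: list, new_rule: dict) -> list:
    """Return the full rule set to upload: all existing rules plus the
    new rule, replacing any existing rule that has the same ID."""
    merged = [r for r in existing if r["ID"] != new_rule["ID"]]
    merged.append(new_rule)
    return merged

# Step 1: rules read from the bucket (GetBucketLifecycle).
existing = [{"ID": "Rule1", "Prefix": "logs/", "Status": "Enabled"}]
# Step 2: merge in the new rule locally.
rule2 = {"ID": "Rule2", "Prefix": "dir/", "Status": "Enabled"}
full_config = merge_rules(existing, rule2)
# Step 3: upload the complete set (PutBucketLifecycle).
print([r["ID"] for r in full_config])  # ['Rule1', 'Rule2']
```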

Effective period

After you configure a lifecycle rule, OSS loads the rule within 24 hours. After the lifecycle rule is loaded, OSS runs the rule every day at 08:00:00 (UTC+8).

Completion time

  • After a lifecycle rule takes effect, operations such as object deletions, storage class transitions, and part expiration are typically completed within 24 hours. However, the object count threshold varies by region. In the China (Hangzhou), China (Shanghai), China (Beijing), China (Zhangjiakou), China (Ulanqab), China (Shenzhen), and Singapore regions, this applies to workloads of up to 1 billion objects. In all other regions, it applies to workloads of up to 100 million objects.

  • Under certain conditions, task execution can be significantly delayed, extending beyond 24 hours to several days or weeks. This is typically caused by a heavy workload, including factors such as an excessive number of objects to scan or process, a high number of tags, numerous object versions, or a high volume of new writes during the task's execution.

    Note

    If versioning is enabled for a bucket, a lifecycle management action on each object version in the bucket is counted towards the applicable limit.

Policies for updating the last access time

After access tracking is enabled, the last access time (LastAccessTime) of an object is updated based on the following rules:

  1. Initialization: When access tracking is enabled, the LastAccessTime for all objects within the bucket is initialized to the timestamp of that event.

  2. Update rules: Subsequently, operations such as downloading or overwriting an object will update its LastAccessTime. For more information about which operations affect an object's LastAccessTime, see How do common operations affect LastAccessTime of objects?.

  3. Update mechanism:

    • The LastAccessTime is updated asynchronously, typically within 24 hours.

    • Within a 24-hour window, if the same object is accessed multiple times, OSS records the timestamp of the initial access as the object's LastAccessTime. Subsequent accesses within that same period will not trigger another update.
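The update mechanism above can be modeled as follows. This is a toy illustration of the 24-hour deduplication rule, not SDK code; the function name and state representation are assumptions for the sketch:

```python
from datetime import datetime, timedelta
from typing import Optional

WINDOW = timedelta(hours=24)

def update_last_access(last_access: Optional[datetime], access_at: datetime) -> datetime:
    """Model of the LastAccessTime update rule: the first access in a
    24-hour window is recorded; later accesses in that window are not."""
    if last_access is None or access_at - last_access >= WINDOW:
        return access_at  # first access in a new 24-hour window
    return last_access    # suppressed: same window as a recorded access

t0 = datetime(2024, 5, 1, 9, 0)
la = update_last_access(None, t0)                      # initial access recorded
la = update_last_access(la, t0 + timedelta(hours=3))   # same window: unchanged
la = update_last_access(la, t0 + timedelta(hours=30))  # new window: updated
print(la)  # 2024-05-02 15:00:00
```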

Supported storage classes

  • You can use a lifecycle rule based on the last access time to move objects from Standard to IA and specify whether to move the objects from IA back to Standard after the objects are accessed.

  • You can use a lifecycle rule based on the last access time to move objects from Standard or IA to Archive, Cold Archive, or Deep Cold Archive or from Archive to Cold Archive or Deep Cold Archive. If you want to move objects from Standard or IA to Archive, Cold Archive, or Deep Cold Archive, submit a ticket to apply for the required permissions first. After your application is approved, you need to specify the storage class to which you want to move the objects.

    Important

    After your application is approved, if you use a lifecycle rule based on the last access time to move an object from Standard or IA to Archive, Cold Archive, or Deep Cold Archive, the last access time of the Archive, Cold Archive, or Deep Cold Archive object is the time when access tracking was enabled for the bucket.

Configure a lifecycle rule for a bucket for which OSS-HDFS is enabled

To configure or modify a lifecycle rule based on the last access time to match all objects in a bucket for which OSS-HDFS is enabled, use the NOT element to exclude the objects that are stored in the .dlsdata/ directory. This prevents lifecycle rule-triggered object deletion or storage class conversion actions from applying to OSS-HDFS data and consequently affecting read and write operations on OSS-HDFS data.
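For example, a last-access-time rule that applies to the whole bucket but excludes OSS-HDFS data might look like the following. This is a hypothetical illustration of the Filter/Not structure in the PutBucketLifecycle XML format; adapt the rule ID, number of days, and storage class to your needs:

```xml
<LifecycleConfiguration>
  <Rule>
    <ID>rule-skip-oss-hdfs</ID>
    <Prefix></Prefix>
    <Status>Enabled</Status>
    <Filter>
      <Not>
        <!-- Exclude OSS-HDFS data stored in the .dlsdata/ directory -->
        <Prefix>.dlsdata/</Prefix>
      </Not>
    </Filter>
    <Transition>
      <Days>30</Days>
      <StorageClass>IA</StorageClass>
      <IsAccessTime>true</IsAccessTime>
      <ReturnToStdWhenVisit>false</ReturnToStdWhenVisit>
    </Transition>
  </Rule>
</LifecycleConfiguration>
```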


Methods

Use the OSS console

  1. Log on to the OSS console.

  2. In the left-side navigation pane, click Buckets. On the Buckets page, find and click the desired bucket.

  3. In the left-side navigation tree, choose Data Management > Lifecycle.

  4. On the Lifecycle page, turn on Enable Access Tracking and click Create Rule.

  5. In the Create Rule panel, configure the parameters. The following table describes the parameters.

    • Parameters for unversioned buckets

      Section

      Parameter

      Description

      Basic Settings

      Status

      Specify the status of the lifecycle rule. You can select Enabled or Disabled.

      • After you enable a lifecycle rule, OSS takes the specified actions on the specified objects.

      • After you disable a lifecycle rule, lifecycle management tasks based on the rule are interrupted.

      Applied To

      Specify the objects for which you want the lifecycle rule to take effect. You can select Objects with Specified Prefix or Whole Bucket.

      Allow Overlapped Prefixes

      By default, OSS checks whether the prefixes of each lifecycle rule overlap. For example, if the bucket has an existing lifecycle rule (Rule 1) and you want to configure another lifecycle rule (Rule 2) that contains an overlapping prefix:

      • Rule 1

        Convert the storage class of all objects whose names contain the dir1/ prefix in the bucket to IA 30 days after the objects are last modified.

      • Rule 2

        Convert all objects whose names contain the dir1/dir2/ prefix in the bucket to Archive objects 30 days after the objects are last accessed.

      If you do not select this check box, OSS detects that objects in the dir1/dir2/ directory match two lifecycle rules and thus does not allow you to create Rule 2.

      If you select this check box, Rule 2 is created. The objects in the dir1/dir2/ directory are converted to Archive objects 30 days after they are last accessed. Other objects in the dir1/ directory are converted to IA objects 30 days after they are last modified.

      Prefix

      Specify the prefix in the names of the objects for which you want the lifecycle rule to take effect.

      • If you set the prefix to img, all objects whose names contain the img prefix, such as imgtest.png and img/example.jpg, match the lifecycle rule.

      • If you set the prefix to img/, all objects whose names contain the img/ prefix, such as img/example.jpg and img/test.jpg, match the lifecycle rule.

      Tag

      Specify tags. The rule takes effect only for objects that contain the specified tags. For example, if you select Objects with Specified Prefix, set Prefix to img, and specify a tag whose key is a and value is 1, the rule applies to all objects that contain the img prefix in their names and have the a=1 tag. For more information about object tags, see Tag objects.

      NOT

      Specify that the lifecycle rule does not take effect for the objects that have the specified name prefix and tag.

      Important
      • If you turn on NOT, at least one of the Prefix and Tag parameters must be specified for the lifecycle rule.

      • The key of the tag specified for the NOT parameter cannot be the same as the key specified for the Tag parameter.

      • If you turn on NOT, you cannot include a part policy in the lifecycle rule.

      Object Size

      Specify the size of objects for which the lifecycle rule takes effect.

      • Minimum Size: Specify that the lifecycle rule takes effect only for objects whose sizes are larger than the specified size. You can specify a minimum object size that is greater than 0 B and less than 5 TB.

      • Maximum Size: Specify that the lifecycle rule takes effect only for objects whose sizes are smaller than the specified size. You can specify a maximum object size that is greater than 0 B and less than or equal to 5 TB.

      Important

      If you specify a minimum object size and a maximum object size in the same lifecycle rule, take note of the following items:

      • The maximum object size must be greater than the minimum object size.

      • You cannot include a part policy in the lifecycle rule.

      • You cannot include a policy to remove delete markers.

      Policy for Objects

      Object Lifecycle

      Specify an object expiration policy. You can select Validity Period (Days), Expiration Date, or Disabled. If you select Disabled, no object expiration policy is configured.

      Lifecycle-based Rules

      Specify policies for storage class conversion. Valid values:

      • IA (Not Converted After Access)

      • IA (Converted to Standard After Access)

      • Archive

      • Cold Archive

      • Deep Cold Archive

      For example, if you select Access Time, set Validity Period to 30, and specify that the storage class of the objects is changed to IA (Not Converted After Access) after the validity period elapses, the storage class of objects that are last accessed on September 1, 2021 is converted to IA on October 1, 2021.

      Policy for Parts

      Part Lifecycle

      Specify the operation that you want to perform on expired parts. If you turn on Tag, this parameter is unavailable. You can select Validity Period (Days), Expiration Date, or Disabled. If you select Disabled, no part expiration policy is configured.

      Important

      A lifecycle rule must contain at least one object expiration policy or part expiration policy.

      Rules for Parts

      Specify when parts expire. You can specify a validity period or expiration date. Expired parts are automatically deleted and cannot be recovered.

    • Parameters for versioned buckets

      Configure the parameters in the Basic Settings and Policy for Parts sections in the same way you configure the parameters for an unversioned bucket. The following table describes only the parameters that are different from the parameters that you configure for an unversioned bucket.

      Section

      Parameter

      Description

      Policy for Current Versions

      Removal of Delete Marker

      If the bucket is versioned, the Removal of Delete Marker option is added to the Object Lifecycle parameter. Other parameters are the same as those you can configure for an unversioned bucket.

      If you select Removal of Delete Marker, and an object has only one version, which is a delete marker, OSS considers the delete marker expired and removes the delete marker. If an object has multiple versions and the current version of the object is a delete marker, OSS retains the delete marker. For more information, see Delete marker.

      Policy for Previous Versions

      Object Lifecycle

      Specify the time when previous versions expire. You can select Validity Period (Days) or Disabled. If you select Disabled, no object policy is configured.

      Lifecycle-based Rules

      Specify the number of days for which objects are retained after they become previous versions. After the objects expire, they are moved to the specified storage class on the following day. For example, if you set Validity Period (Days) to 30, objects that become previous versions on September 1, 2021 are moved to the specified storage class on October 1, 2021.

      Important

      An object becomes a previous version at the time when its next version is generated.

  6. Click OK.

    After the lifecycle rule is created, you can view the rule in the lifecycle rule list.

Use OSS SDKs

The following examples show how to use OSS SDKs to create lifecycle rules based on the last access time. Before you create such a lifecycle rule, you must enable the access tracking feature for the bucket. For more sample code that is used to configure lifecycle rules based on the last access time, see Overview.

Java

import com.aliyun.oss.*;
import com.aliyun.oss.common.auth.*;
import com.aliyun.oss.common.comm.SignVersion;
import com.aliyun.oss.model.*;
import java.util.ArrayList;
import java.util.List;

public class Demo {

    public static void main(String[] args) throws Exception {
        // In this example, the endpoint of the China (Hangzhou) region is used. Specify your actual endpoint. 
        String endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
        // Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
        EnvironmentVariableCredentialsProvider credentialsProvider = CredentialsProviderFactory.newEnvironmentVariableCredentialsProvider();
        // Specify the name of the bucket. Example: examplebucket. 
        String bucketName = "examplebucket";
        // Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to cn-hangzhou.
        String region = "cn-hangzhou";

        // Create an OSSClient instance. 
        // Call the shutdown method to release resources when the OSSClient is no longer in use.
        ClientBuilderConfiguration clientBuilderConfiguration = new ClientBuilderConfiguration();
        clientBuilderConfiguration.setSignatureVersion(SignVersion.V4);        
        OSS ossClient = OSSClientBuilder.create()
        .endpoint(endpoint)
        .credentialsProvider(credentialsProvider)
        .clientConfiguration(clientBuilderConfiguration)
        .region(region)               
        .build();

        try {
            ossClient.putBucketAccessMonitor(bucketName, AccessMonitor.AccessMonitorStatus.Enabled.toString());
            // Create a lifecycle rule and set the ID to rule1. Specify that the storage classes of objects whose names contain the logs prefix and whose size is less than or equal to 64 KB are changed to IA 30 days after the objects are last accessed. In addition, specify that the objects whose name contain the logs prefix are still stored as IA objects when the objects are accessed again. 
            LifecycleRule lifecycleRule = new LifecycleRule("rule1", "logs", LifecycleRule.RuleStatus.Enabled);
            List<LifecycleRule> lifecycleRuleList = new ArrayList<LifecycleRule>();
            SetBucketLifecycleRequest setBucketLifecycleRequest = new SetBucketLifecycleRequest(bucketName);

            LifecycleRule.StorageTransition storageTransition = new LifecycleRule.StorageTransition();
            storageTransition.setStorageClass(StorageClass.IA);
            storageTransition.setExpirationDays(30);
            storageTransition.setIsAccessTime(true);
            storageTransition.setReturnToStdWhenVisit(false);
            storageTransition.setAllowSmallFile(true);
            List<LifecycleRule.StorageTransition> storageTransitionList = new ArrayList<LifecycleRule.StorageTransition>();
            storageTransitionList.add(storageTransition);
            lifecycleRule.setStorageTransition(storageTransitionList);
            lifecycleRuleList.add(lifecycleRule);
            
            // Create a lifecycle rule and set the ID to rule2. Specify that the previous versions of the objects whose names contain the dir prefix and whose size is greater than 64 KB are changed to IA 10 days after the objects are last accessed. In addition, specify that the storage classes of the objects whose names contain the dir prefix are changed to Standard when the objects are accessed again. 
            LifecycleRule lifecycleRule2 = new LifecycleRule("rule2", "dir", LifecycleRule.RuleStatus.Enabled);
            LifecycleRule.NoncurrentVersionStorageTransition noncurrentVersionStorageTransition = new LifecycleRule.NoncurrentVersionStorageTransition();
            noncurrentVersionStorageTransition.setStorageClass(StorageClass.IA);
            noncurrentVersionStorageTransition.setNoncurrentDays(10);
            noncurrentVersionStorageTransition.setIsAccessTime(true);
            noncurrentVersionStorageTransition.setReturnToStdWhenVisit(true);
            noncurrentVersionStorageTransition.setAllowSmallFile(false);

            List<LifecycleRule.NoncurrentVersionStorageTransition> noncurrentVersionStorageTransitionList = new ArrayList<LifecycleRule.NoncurrentVersionStorageTransition>();
            noncurrentVersionStorageTransitionList.add(noncurrentVersionStorageTransition);
            lifecycleRule2.setNoncurrentVersionStorageTransitions(noncurrentVersionStorageTransitionList);
            lifecycleRuleList.add(lifecycleRule2);

            setBucketLifecycleRequest.setLifecycleRules(lifecycleRuleList);

            // Configure the lifecycle rules. 
            ossClient.setBucketLifecycle(setBucketLifecycleRequest);
        } catch (OSSException oe) {
            System.out.println("Caught an OSSException, which means your request made it to OSS, "
                    + "but was rejected with an error response for some reason.");
            System.out.println("Error Message:" + oe.getErrorMessage());
            System.out.println("Error Code:" + oe.getErrorCode());
            System.out.println("Request ID:" + oe.getRequestId());
            System.out.println("Host ID:" + oe.getHostId());
        } catch (ClientException ce) {
            System.out.println("Caught an ClientException, which means the client encountered "
                    + "a serious internal problem while trying to communicate with OSS, "
                    + "such as not being able to access the network.");
            System.out.println("Error Message:" + ce.getMessage());
        } finally {
            if (ossClient != null) {
                ossClient.shutdown();
            }
        }
    }
}

Python

import argparse
import datetime
import alibabacloud_oss_v2 as oss

# Create a command-line argument parser to receive user-entered parameters.
parser = argparse.ArgumentParser(description="put bucket lifecycle sample")

# Add the --region command-line argument, which specifies the region where the bucket is located. This argument is required.
parser.add_argument('--region', help='The region in which the bucket is located.', required=True)

# Add the --bucket command-line argument, which specifies the name of the bucket. This argument is required.
parser.add_argument('--bucket', help='The name of the bucket.', required=True)

# Add the --endpoint command-line argument, which specifies the domain name that other services can use to access OSS. This argument is optional.
parser.add_argument('--endpoint', help='The domain names that other services can use to access OSS')

def main():
    # Parse command-line arguments.
    args = parser.parse_args()

    # Load credentials (AccessKey ID and AccessKey secret) from environment variables.
    credentials_provider = oss.credentials.EnvironmentVariableCredentialsProvider()

    # Load the default configurations of the SDK.
    cfg = oss.config.load_default()

    # Set the credentials provider.
    cfg.credentials_provider = credentials_provider

    # Set the region where the bucket is located.
    cfg.region = args.region

    # If a custom endpoint is provided by the user, set it in the configuration.
    if args.endpoint is not None:
        cfg.endpoint = args.endpoint

    # Initialize the OSS client using the configuration object.
    client = oss.Client(cfg)

    result = client.put_bucket_lifecycle(oss.PutBucketLifecycleRequest(
            bucket=args.bucket,
            lifecycle_configuration=oss.LifecycleConfiguration(
                rules=[oss.LifecycleRule(
                    # In lifecycle rule rule1, all objects that have the prefix data/ are converted to the Infrequent Access (IA) storage class 200 days after they are last accessed. When these objects are accessed again, they remain in the IA storage class.
                    id='rule1',
                    status='Enabled',
                    prefix='data/',
                    transitions=[oss.LifecycleRuleTransition(
                        days=200,
                        storage_class=oss.StorageClassType.IA,
                        is_access_time=True, # Set to true, which indicates that the policy is based on the last access time.
                        return_to_std_when_visit=False
                    )],
                ), oss.LifecycleRule(
                    # In lifecycle rule rule2, all objects that have the prefix log/ are converted to the Infrequent Access (IA) storage class 120 days after they are last accessed. When these objects are accessed again, they remain in the IA storage class.
                    # In the same rule, all objects that have the prefix log/ are converted to the Archive storage class 250 days after they are last accessed.
                    id='rule2',
                    status='Enabled',
                    prefix='log/',
                    transitions=[oss.LifecycleRuleTransition(
                        days=120,
                        storage_class=oss.StorageClassType.IA,
                        is_access_time=True, # Set to true, which indicates that the policy is based on the last access time.
                        return_to_std_when_visit=False
                    ), oss.LifecycleRuleTransition(
                        days=250,
                        storage_class=oss.StorageClassType.ARCHIVE,
                        is_access_time=True, # Set to true, which indicates that the policy is based on the last access time.
                        return_to_std_when_visit=False
                    )],
                )]
            ),
    ))

    # Print the status code and request ID of the operation.
    print(f'status code: {result.status_code}, '  # The HTTP status code, which indicates whether the request is successful.
          f'request id: {result.request_id}')    # The request ID, which is used to track request logs and for debugging.


if __name__ == "__main__":
    # The program entry point that calls the main function to execute the logic.
    main()

Go

package main

import (
	"context"
	"flag"
	"log"

	"github.com/aliyun/alibabacloud-oss-go-sdk-v2/oss"
	"github.com/aliyun/alibabacloud-oss-go-sdk-v2/oss/credentials"
)

// Define global variables.
var (
	region     string // Region in which the bucket is located.
	bucketName string // Name of the bucket.
)

// Specify the init function used to initialize command line parameters.
func init() {
	flag.StringVar(&region, "region", "", "The region in which the bucket is located.")
	flag.StringVar(&bucketName, "bucket", "", "The name of the bucket.")
}

func main() {
	// Parse command line parameters.
	flag.Parse()

	// Check whether the name of the bucket is specified.
	if len(bucketName) == 0 {
		flag.PrintDefaults()
		log.Fatalf("invalid parameters, bucket name required")
	}

	// Check whether the region is specified.
	if len(region) == 0 {
		flag.PrintDefaults()
		log.Fatalf("invalid parameters, region required")
	}

	// Load the default configurations and specify the credential provider and region.
	cfg := oss.LoadDefaultConfig().
		WithCredentialsProvider(credentials.NewEnvironmentVariableCredentialsProvider()).
		WithRegion(region)

	// Create an OSS client.
	client := oss.NewClient(cfg)

	// Create a request to configure lifecycle rules for the bucket.
	request := &oss.PutBucketLifecycleRequest{
		Bucket: oss.Ptr(bucketName), // Name of the bucket.
		LifecycleConfiguration: &oss.LifecycleConfiguration{
			Rules: []oss.LifecycleRule{
				{
					// Configure rule1 to change the storage class of objects whose names contain the data/ prefix to IA 200 days after they are last accessed. Specify that the objects remain in the IA storage class when they are accessed again.
					ID:     oss.Ptr("rule1"),
					Status: oss.Ptr("Enabled"),
					Prefix: oss.Ptr("data/"),
					Transitions: []oss.LifecycleRuleTransition{
						{
							Days:                 oss.Ptr(int32(200)),
							StorageClass:         oss.StorageClassIA,
							IsAccessTime:         oss.Ptr(true), // Set this parameter to true to specify that the storage classes of objects are converted based on the last access time.
							ReturnToStdWhenVisit: oss.Ptr(false),
						},
					},
				},
				{
					// Configure rule2 to change the storage class of objects whose names contain the log/ prefix to IA 120 days after they are last accessed. Specify that the objects remain in the IA storage class when they are accessed again.
					// Change the storage class of objects whose names contain the log/ prefix to Archive 250 days after they are last accessed.
					ID:     oss.Ptr("rule2"),
					Status: oss.Ptr("Enabled"),
					Prefix: oss.Ptr("log/"),
					Transitions: []oss.LifecycleRuleTransition{
						{
							Days:                 oss.Ptr(int32(120)),
							StorageClass:         oss.StorageClassIA,
							IsAccessTime:         oss.Ptr(true), // Set this parameter to true to specify that the storage classes of objects are converted based on the last access time.
							ReturnToStdWhenVisit: oss.Ptr(false),
						},
						{
							Days:                 oss.Ptr(int32(250)),
							StorageClass:         oss.StorageClassArchive,
							IsAccessTime:         oss.Ptr(true),
							ReturnToStdWhenVisit: oss.Ptr(false),
						},
					},
				},
			},
		},
	}

	// Configure lifecycle rules for the bucket.
	result, err := client.PutBucketLifecycle(context.TODO(), request)
	if err != nil {
		log.Fatalf("failed to put bucket lifecycle %v", err)
	}

	// Display the result.
	log.Printf("put bucket lifecycle result:%#v\n", result)
}

PHP

<?php

// Include the autoload file to load dependencies
require_once __DIR__ . '/../vendor/autoload.php';

use AlibabaCloud\Oss\V2 as Oss;
use AlibabaCloud\Oss\V2\Models\LifecycleConfiguration;

// Specify descriptions for command line parameters
$optsdesc = [
    "region" => ['help' => 'The region in which the bucket is located', 'required' => True], // (Required) Specify the region in which the bucket is located.
    "endpoint" => ['help' => 'The domain names that other services can use to access OSS', 'required' => False], // (Optional) Specify the endpoint that can be used by other services to access OSS.
    "bucket" => ['help' => 'The name of the bucket', 'required' => True], // (Required) Specify the name of the bucket.
];

// Generate a list of long options to parse the command-line parameters
$longopts = \array_map(function ($key) {
    return "$key:"; // Add a colon after each parameter to indicate that a value is required
}, array_keys($optsdesc));

// Parse the command-line parameters
$options = getopt("", $longopts); 

// Check whether the required parameters are missing
foreach ($optsdesc as $key => $value) {
    if ($value['required'] === True && empty($options[$key])) {
        $help = $value['help'];
        echo "Error: the following arguments are required: --$key, $help"; // Prompt the user for missing required parameters
        exit(1); 
    }
}

// Obtain the values of the command-line parameters
$region = $options["region"]; // The region in which the bucket is located
$bucket = $options["bucket"]; // The name of the bucket

// Use environment variables to load the AccessKey ID and AccessKey secret
$credentialsProvider = new Oss\Credentials\EnvironmentVariableCredentialsProvider();

// Use the default configuration of the SDK
$cfg = Oss\Config::loadDefault();

// Specify the credential provider
$cfg->setCredentialsProvider($credentialsProvider);

// Specify the region
$cfg->setRegion($region);

// Specify the endpoint if an endpoint is provided
if (isset($options["endpoint"])) {
    $cfg->setEndpoint($options["endpoint"]);
}

// Create an OSSClient instance
$client = new Oss\Client($cfg);

// Define a lifecycle rule to convert objects whose names contain the log/ prefix to the IA storage class after 30 days
$lifecycleRule = new Oss\Models\LifecycleRule(
    prefix: 'log/', // The prefix of the object
    transitions: array(
        new Oss\Models\LifecycleRuleTransition(
            days: 30, // Convert the storage class 30 days after the last access time
            storageClass: 'IA', // Convert the objects to the IA storage class
            isAccessTime: true, // Convert the storage class based on the last access time
            returnToStdWhenVisit: false // Do not convert the objects back to Standard when they are accessed again
        )
    ),
    id: 'rule', // The ID of the rule
    status: 'Enabled' // The status of the rule is enabled
);

// Create a lifecycle configuration object and add the lifecycle rule
$lifecycleConfiguration = new LifecycleConfiguration(
    rules: array($lifecycleRule)
);

// Create a request object to set the lifecycle of the bucket and pass in the lifecycle configuration
$request = new Oss\Models\PutBucketLifecycleRequest(
    bucket: $bucket,
    lifecycleConfiguration: $lifecycleConfiguration
);

// Call the putBucketLifecycle method to set the lifecycle rules for the bucket
$result = $client->putBucketLifecycle($request);

// Display the returned result
printf(
    'status code:' . $result->statusCode . PHP_EOL . // The HTTP response status code
    'request id:' . $result->requestId . PHP_EOL // The unique identifier of the request
);

Use ossutil

You can configure lifecycle rules by using ossutil. For more information about how to install ossutil, see Install ossutil.

The following sample command configures a lifecycle rule for examplebucket. The rule converts objects whose names contain the tmp/ prefix to the IA storage class 5 days after the objects are last accessed, deletes the objects 10 days after the objects are last modified, and aborts multipart upload tasks that are not completed within 10 days.

ossutil api put-bucket-lifecycle --bucket examplebucket --lifecycle-configuration "{\"Rule\":{\"ID\":\"rule1\",\"Prefix\":\"tmp/\",\"Status\":\"Enabled\",\"Expiration\":{\"Days\":\"10\"},\"Transition\":{\"Days\":\"5\",\"StorageClass\":\"IA\",\"IsAccessTime\":\"true\",\"ReturnToStdWhenVisit\":\"false\"},\"AbortMultipartUpload\":{\"Days\":\"10\"}}}"

For more information, see put-bucket-lifecycle.

Related API operation

The operations described above are implemented by using the PutBucketLifecycle API operation. If your business requires a high level of customization, you can directly call the OSS RESTful API. In that case, you must calculate the request signature in your code. For more information, see PutBucketLifecycle.

FAQ

What happens if I configure a lifecycle rule based on the last modified time and a lifecycle rule based on the last access time at the same time for objects that have the same name prefix in the same bucket?

For example, you configure two lifecycle rules for a bucket named examplebucket. The first rule specifies that all objects whose names contain the doc prefix in examplebucket are deleted 30 days after the objects are last modified. The second rule specifies that the storage class of all objects whose names contain the doc prefix in examplebucket is converted to IA 30 days after the objects are last accessed.

In this case, only the first lifecycle rule takes effect because OSS preferentially applies the lifecycle rule that incurs lower fees. After the specified objects are deleted based on the first rule, you are no longer charged for the objects. If the second rule were applied instead, you would continue to be charged storage fees, and possibly data retrieval fees, after the storage class of the objects is converted to IA.

When does a lifecycle rule take effect after I modify the rule and what happens to the objects to which the original rule applies?

For example, you configure a lifecycle rule that converts the storage class of objects whose names contain the er prefix to IA 30 days after the objects are last accessed, and converts the IA objects back to Standard when they are accessed again. If you change the prefix in the rule from er to re 35 days after the er-prefixed objects are last accessed, the storage class of these objects has already been converted to IA and is no longer converted back to Standard based on the original rule. After you modify the rule, the last access time of the objects whose names contain the re prefix is set to the time when you enabled access tracking for the bucket.

How are objects stored if I configure lifecycle rules based on the last access time for a versioned bucket?

Each object in a versioned bucket has a unique version ID. Objects that have different version IDs are separately stored. After you configure lifecycle rules based on the last access time for a versioned bucket, the storage class of the current version of an object may be different from the storage class of a previous version of the same object.

Can I disable access tracking?

Yes, you can disable access tracking. Before you disable access tracking for a bucket, make sure that no lifecycle rules based on the last access time are configured for the bucket. After you disable access tracking for a bucket, OSS stops tracking the last access time of objects in the bucket. The next time you enable access tracking for the bucket, the last access time of the objects in the bucket is updated.

References

LastAccessTime (last access time) is an important attribute of OSS objects. The attribute is used in scenarios such as billing and lifecycle rules. After access tracking is enabled for a bucket, specific operations on objects may update the last access time of the objects. For more information, see How do common operations affect LastAccessTime of objects?