Object Storage Service: Log storage

Last Updated: Dec 02, 2025

Access to Object Storage Service (OSS) generates a large number of access logs. You can use the log storage feature to write these logs to a bucket that you specify as hourly log files that follow a fixed naming convention. You can then analyze the stored logs by using Alibaba Cloud Simple Log Service or by building a Spark cluster.

Precautions

  • The target bucket that stores the logs can be the same as or different from the source bucket that generates the logs. However, the target bucket must be in the same region and belong to the same account as the source bucket.

    When you configure log storage for a source bucket, the log push operation itself generates new logs. If the source and target buckets are the same, the log storage feature records and pushes these new logs. This causes a logging loop. To prevent this, configure different source and target buckets.

  • Log files are typically generated within 48 hours. A log file for a specific time period may not record all requests from that period; some requests may appear in the log file of the previous or next time period. Therefore, the log records for a specific time period are not guaranteed to be complete or timely.

  • OSS generates log files every hour until you disable the log storage feature. To reduce your storage costs, promptly clear log files that are no longer needed.

    You can use a lifecycle rule to periodically delete log files, as shown in the sketch after this list. For more information, see Lifecycle rules based on the last modified time.

  • To maintain OSS-HDFS availability and prevent data contamination, do not set Log Prefix to .dlsdata/ when you configure logging for a bucket for which OSS-HDFS is enabled.

  • OSS may add fields to the end of logs as needed. You must consider compatibility issues when you develop log processing tools. Effective September 17, 2025, the Bucket ARN field will be added to the log content.
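For the lifecycle-based cleanup mentioned above, the following Python sketch (using the oss2 SDK) creates a rule that deletes objects under the log/ prefix 30 days after their last modified time. The endpoint, region, bucket name, prefix, and retention period are example values; adjust them to your environment.

import oss2
from oss2.credentials import EnvironmentVariableCredentialsProvider
from oss2.models import LifecycleRule, LifecycleExpiration, BucketLifecycle

# Obtain access credentials from environment variables.
auth = oss2.ProviderAuthV4(EnvironmentVariableCredentialsProvider())
# Example endpoint, region, and bucket name. Replace them with your own values.
bucket = oss2.Bucket(auth, "https://oss-cn-hangzhou.aliyuncs.com", "examplebucket", region="cn-hangzhou")

# Delete log files whose names start with log/ 30 days after they were last modified.
rule = LifecycleRule(
    "delete-old-logs",                 # rule ID
    "log/",                            # prefix that matches the stored log files
    status=LifecycleRule.ENABLED,
    expiration=LifecycleExpiration(days=30),
)
bucket.put_bucket_lifecycle(BucketLifecycle([rule]))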

Configure log storage

Configure log storage for a bucket

Console

  1. Log on to the OSS console.

  2. In the left-side navigation pane, click Buckets. On the Buckets page, find and click the desired bucket.

  3. In the navigation pane on the left, choose Log Management > Log Storage.

  4. On the Log Storage page, turn on Enable Log Storage and configure the following parameters.

    Log Storage Location: By default, logs are stored in the current bucket. To store logs in another bucket, select the name of the bucket for log storage from the drop-down list. You can only select a bucket that is in the same region and belongs to the same account.

    Log Prefix: The directory where log files are stored. If you specify this parameter, log files are saved to the specified directory in the target bucket. If you do not specify this parameter, log files are saved to the root directory of the target bucket. For example, if you set the log prefix to log/, the log files are recorded in the log/ directory.

    Role: Grant OSS permissions to perform log storage operations by assuming a default role or a custom role.

    • Default Log Storage Authorization (Recommended)

      If you select this option, the default role AliyunOSSLoggingDefaultRole is automatically created and granted permissions. This role supports log storage authorization for all buckets under your account. The first time you use the log storage feature, click One-click Authorization to automatically create the default role and grant permissions.

    • Custom Authorization

      If you need to grant log storage permissions only for a specific bucket, use a custom role. To use a custom role, perform the following steps.

      1. Create a service role. During role creation, set Trusted Entity Type to Alibaba Cloud Service and Trusted Entity to Object Storage Service.

      2. Create a custom policy in script edit mode.

        Bucket that uses KMS encryption
        {
          "Version": "1",
          "Statement": [
            {
              "Effect": "Allow",
              "Action": [
                "kms:List*",
                "kms:Describe*",
                "kms:GenerateDataKey",
                "kms:Decrypt"
              ],
              "Resource": "*"
            },
            {
              "Effect": "Allow",
              "Action": [
                "oss:PutObject",
                "oss:AbortMultipartUpload"
              ],
              "Resource": "acs:oss:*:*:examplebucket/*"
            }
          ]
        }

        Bucket that does not use KMS encryption

        This includes two cases: server-side encryption is not enabled for the bucket, or the bucket uses OSS-managed encryption. In both cases, configure the custom policy as shown in the following example.

        {
          "Version": "1",
          "Statement": [  
            {
              "Effect": "Allow",
              "Action": [
                "oss:PutObject",
                "oss:AbortMultipartUpload"
              ],
              "Resource": "acs:oss:*:*:examplebucket/*"
            }
          ]
        }
      3. Grant permissions to the RAM role.

  5. Click Save.

ossutil

You can use the ossutil command-line interface (CLI) tool to enable log storage. For more information about how to install ossutil, see Install ossutil.

The following command enables log storage for the examplebucket bucket. The log file prefix is MyLog-, and the bucket that stores access logs is dest-bucket.

ossutil api put-bucket-logging --bucket examplebucket --bucket-logging-status "{\"LoggingEnabled\":{\"TargetBucket\":\"dest-bucket\",\"TargetPrefix\":\"MyLog-\"}}"

For more information about this command, see put-bucket-logging.
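To verify the configuration, you can query it back. The following call is a sketch that assumes the corresponding get-bucket-logging operation of the same ossutil api command group:

ossutil api get-bucket-logging --bucket examplebucket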

SDKs

The following code provides examples of how to configure log storage using common software development kits (SDKs). For code examples for other SDKs, see SDKs.

import com.aliyun.oss.*;
import com.aliyun.oss.common.auth.*;
import com.aliyun.oss.common.comm.SignVersion;
import com.aliyun.oss.model.SetBucketLoggingRequest;

public class Demo {

    public static void main(String[] args) throws Exception {
        // The endpoint of the China (Hangzhou) region is used in this example. Specify the actual endpoint.
        String endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
        // Obtain access credentials from environment variables. Before you run this code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are set.
        EnvironmentVariableCredentialsProvider credentialsProvider = CredentialsProviderFactory.newEnvironmentVariableCredentialsProvider();
        // The name of the bucket for which you want to enable log storage. For example, examplebucket.
        String bucketName = "examplebucket";
        // The destination bucket to store the log files. The source and destination buckets can be the same or different.
        String targetBucketName = "yourTargetBucketName";
        // The folder where the log files are stored. If you specify this parameter, the log files are stored in the specified folder of the destination bucket. If you do not specify this parameter, the log files are stored in the root directory of the destination bucket.
        String targetPrefix = "log/";
        // The region where the bucket is located. For example, if the bucket is in the China (Hangzhou) region, set the region to cn-hangzhou.
        String region = "cn-hangzhou";

        // Create an OSSClient instance.
        // When the OSSClient instance is no longer needed, call the shutdown method to release its resources.
        ClientBuilderConfiguration clientBuilderConfiguration = new ClientBuilderConfiguration();
        clientBuilderConfiguration.setSignatureVersion(SignVersion.V4);        
        OSS ossClient = OSSClientBuilder.create()
        .endpoint(endpoint)
        .credentialsProvider(credentialsProvider)
        .clientConfiguration(clientBuilderConfiguration)
        .region(region)               
        .build();

        try {
            SetBucketLoggingRequest request = new SetBucketLoggingRequest(bucketName);
            request.setTargetBucket(targetBucketName);
            request.setTargetPrefix(targetPrefix);
            ossClient.setBucketLogging(request);
        } catch (OSSException oe) {
            System.out.println("Caught an OSSException, which means your request made it to OSS, "
                    + "but was rejected with an error response for some reason.");
            System.out.println("Error Message:" + oe.getErrorMessage());
            System.out.println("Error Code:" + oe.getErrorCode());
            System.out.println("Request ID:" + oe.getRequestId());
            System.out.println("Host ID:" + oe.getHostId());
        } catch (ClientException ce) {
            System.out.println("Caught an ClientException, which means the client encountered "
                    + "a serious internal problem while trying to communicate with OSS, "
                    + "such as not being able to access the network.");
            System.out.println("Error Message:" + ce.getMessage());
        } finally {
            if (ossClient != null) {
                ossClient.shutdown();
            }
        }
    }
}
<?php
if (is_file(__DIR__ . '/../autoload.php')) {
    require_once __DIR__ . '/../autoload.php';
}
if (is_file(__DIR__ . '/../vendor/autoload.php')) {
    require_once __DIR__ . '/../vendor/autoload.php';
}

use OSS\Credentials\EnvironmentVariableCredentialsProvider;
use OSS\OssClient;
use OSS\CoreOssException;

// Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured.  
$provider = new EnvironmentVariableCredentialsProvider();
// Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. 
$endpoint = "yourEndpoint";
// Specify the name of the source bucket for which you want to enable logging. Example: examplebucket. 
$bucket= "examplebucket";

$option = array();
// Specify the name of the destination bucket in which the log objects are stored. 
$targetBucket = "destbucket";
// Specify the directory in which the log objects are stored. If you specify this parameter, the log objects are stored in the specified directory of the destination bucket. If you do not specify this parameter, the log objects are stored in the root directory of the destination bucket. 
$targetPrefix = "log/";

try {
    $config = array(
        "provider" => $provider,
        "endpoint" => $endpoint,
        "signatureVersion" => OssClient::OSS_SIGNATURE_VERSION_V4,
        "region"=> "cn-hangzhou"
    );
    $ossClient = new OssClient($config);

    // Enable logging for the source bucket. 
    $ossClient->putBucketLogging($bucket, $targetBucket, $targetPrefix, $option);
} catch (OssException $e) {
    printf(__FUNCTION__ . ": FAILED\n");
    printf($e->getMessage() . "\n");
    return;
}
print(__FUNCTION__ . ": OK" . "\n");            
const OSS = require('ali-oss')
const client = new OSS({
  // Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to oss-cn-hangzhou. 
  region: 'yourregion',
  // Obtain access credentials from environment variables. Before you run the sample code, make sure that you have configured environment variables OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET. 
  accessKeyId: process.env.OSS_ACCESS_KEY_ID,
  accessKeySecret: process.env.OSS_ACCESS_KEY_SECRET,
  authorizationV4: true,
  // Specify the name of the bucket. 
  bucket: 'yourbucketname'
});
async function putBucketLogging () {
  try {
     const result = await client.putBucketLogging('bucket-name', 'logs/');
     console.log(result)
  } catch (e) {
    console.log(e)
  }
}
putBucketLogging();
# -*- coding: utf-8 -*-
import oss2
from oss2.credentials import EnvironmentVariableCredentialsProvider
from oss2.models import BucketLogging

# Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
auth = oss2.ProviderAuthV4(EnvironmentVariableCredentialsProvider())

# Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. 
endpoint = "https://oss-cn-hangzhou.aliyuncs.com"
# Specify the ID of the region that maps to the endpoint. Example: cn-hangzhou. This parameter is required if you use the signature algorithm V4.
region = "cn-hangzhou"

# Specify the name of your bucket.
bucket = oss2.Bucket(auth, endpoint, "examplebucket", region=region)

# Specify that the generated log objects are stored in the current bucket. 
# Set the directory in which the log objects are stored to log/. If you specify this parameter, the log objects are stored in the specified directory of the bucket. If you do not specify this parameter, the log objects are stored in the root directory of the bucket. 
# Enable logging for the bucket. 
logging = bucket.put_bucket_logging(BucketLogging(bucket.bucket_name, 'log/'))
if logging.status == 200:
    print("Enable access logging")
else:
    print("request_id:", logging.request_id)
    print("resp : ", logging.resp.response)            
using Aliyun.OSS;
using Aliyun.OSS.Common;

// Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. Specify your actual endpoint. 
var endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
// Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
var accessKeyId = Environment.GetEnvironmentVariable("OSS_ACCESS_KEY_ID");
var accessKeySecret = Environment.GetEnvironmentVariable("OSS_ACCESS_KEY_SECRET");
// Specify the name of the source bucket for which you want to enable logging. Example: examplebucket. 
var bucketName = "examplebucket";
// Specify the name of the destination bucket in which the log objects are stored. The source bucket and the destination bucket can be the same bucket or different buckets. 
var targetBucketName = "destbucket";
// Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to cn-hangzhou.
const string region = "cn-hangzhou";

// Create a ClientConfiguration instance and modify the default parameters based on your requirements.
var conf = new ClientConfiguration();

// Use the signature algorithm V4.
conf.SignatureVersion = SignatureVersion.V4;

// Create an OSSClient instance.
var client = new OssClient(endpoint, accessKeyId, accessKeySecret, conf);
client.SetRegion(region);
try
{
    // Specify that the log objects are stored in the log/ directory. If you specify this parameter, the log objects are stored in the specified directory of the destination bucket. If you do not specify this parameter, log objects are stored in the root directory of the destination bucket. 
    var request = new SetBucketLoggingRequest(bucketName, targetBucketName, "log/");
    // Enable logging for the source bucket. 
    client.SetBucketLogging(request);
    Console.WriteLine("Set bucket:{0} Logging succeeded ", bucketName);
}
catch (OssException ex)
{
    Console.WriteLine("Failed with error info: {0}; Error info: {1}. \nRequestID:{2}\tHostID:{3}",
        ex.ErrorCode, ex.Message, ex.RequestId, ex.HostId);
}
catch (Exception ex)
{
    Console.WriteLine("Failed with error info: {0}", ex.Message);
}
PutBucketLoggingRequest request = new PutBucketLoggingRequest();
// Specify the name of the source bucket for which you want to enable logging. 
request.setBucketName("yourSourceBucketName");
// Specify the name of the destination bucket in which you want to store the logs. 
// The source bucket and the destination bucket must be located in the same region. The source bucket and the destination bucket can be the same bucket or different buckets. 
request.setTargetBucketName("yourTargetBucketName");
// Specify the directory in which the logs are stored. 
request.setTargetPrefix("<yourTargetPrefix>");

OSSAsyncTask task = oss.asyncPutBucketLogging(request, new OSSCompletedCallback<PutBucketLoggingRequest, PutBucketLoggingResult>() {
    @Override
    public void onSuccess(PutBucketLoggingRequest request, PutBucketLoggingResult result) {
        OSSLog.logInfo("code::"+result.getStatusCode());
    }

    @Override
    public void onFailure(PutBucketLoggingRequest request, ClientException clientException, ServiceException serviceException) {
         OSSLog.logError("error: "+serviceException.getRawMessage());
    }
});
task.waitUntilFinished();
package main

import (
	"fmt"
	"os"

	"github.com/aliyun/aliyun-oss-go-sdk/oss"
)

func main() {
	// Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
	provider, err := oss.NewEnvironmentVariableCredentialsProvider()
	if err != nil {
		fmt.Println("Error:", err)
		os.Exit(-1)
	}

	// Create an OSSClient instance. 
        // Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. Specify your actual endpoint. 
	// Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to cn-hangzhou. Specify the actual region.
	clientOptions := []oss.ClientOption{oss.SetCredentialsProvider(&provider)}
	clientOptions = append(clientOptions, oss.Region("yourRegion"))
	// Specify the version of the signature algorithm.
	clientOptions = append(clientOptions, oss.AuthVersion(oss.AuthV4))
	client, err := oss.New("yourEndpoint", "", "", clientOptions...)
	if err != nil {
		fmt.Println("Error:", err)
		os.Exit(-1)
	}
	// Specify the name of the source bucket for which you want to enable logging. Example: examplebucket. 
	bucketName := "examplebucket"
	// Specify the name of the destination bucket in which you want to store the log objects. The source and destination buckets can be the same bucket or different buckets, but they must be located in the same region. 
	targetBucketName := "destbucket"
	// Set the directory in which you want to store the log objects to log/. If you specify this parameter, the log objects are stored in the specified directory of the destination bucket. If you do not specify this parameter, the log objects are stored in the root directory of the destination bucket. 
	targetPrefix := "log/"

	// Enable logging for the bucket. 
	err = client.SetBucketLogging(bucketName, targetBucketName, targetPrefix, true)
	if err != nil {
		fmt.Println("Error:", err)
		os.Exit(-1)
	}
}
#include <alibabacloud/oss/OssClient.h>
using namespace AlibabaCloud::OSS;

int main(void)
{
    /* Initialize information about the account that is used to access OSS. */
            
    /* Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. */
    std::string Endpoint = "yourEndpoint";
    /* Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to cn-hangzhou. */
    std::string Region = "yourRegion";
    /* Specify the name of the source bucket for which you want to enable logging. Example: examplebucket. */
    std::string BucketName = "examplebucket";
    /* Specify the name of the destination bucket in which the log objects are stored. The source bucket and the destination bucket can be the same bucket or different buckets. */
    std::string TargetBucketName = "destbucket";
    /* Set the directory in which the log objects are stored to log/. If you specify this parameter, the log objects are stored in the specified directory of the destination bucket. If you do not specify this parameter, the log objects are stored in the root directory of the destination bucket. */
    std::string TargetPrefix  ="log/";

    /* Initialize resources such as network resources. */
    InitializeSdk();

    ClientConfiguration conf;
    conf.signatureVersion = SignatureVersionType::V4;
    /* Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. */
    auto credentialsProvider = std::make_shared<EnvironmentVariableCredentialsProvider>();
    OssClient client(Endpoint, credentialsProvider, conf);
    client.SetRegion(Region);  

    /* Enable logging for the bucket. */
    SetBucketLoggingRequest request(BucketName, TargetBucketName, TargetPrefix);
    auto outcome = client.SetBucketLogging(request);

    if (!outcome.isSuccess()) {
        /* Handle exceptions. */
        std::cout << "SetBucketLogging fail" <<
        ",code:" << outcome.error().Code() <<
        ",message:" << outcome.error().Message() <<
        ",requestId:" << outcome.error().RequestId() << std::endl;
        return -1;
    }

    /* Release resources such as network resources. */
    ShutdownSdk();
    return 0;
}
#include "oss_api.h"
#include "aos_http_io.h"
/* Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. */
const char *endpoint = "yourEndpoint";
/* Specify the name of the bucket. Example: examplebucket. */
const char *bucket_name = "examplebucket";
/* Specify the name of the destination bucket in which the log objects are stored. The source bucket and the destination bucket can be the same bucket or different buckets. */
const char *target_bucket_name = "yourTargetBucketName";
/* Specify the directory in which the log objects are stored. If you specify this parameter, the log objects are stored in the specified directory of the destination bucket. If you do not specify this parameter, the log objects are stored in the root directory of the destination bucket. */
const char *target_logging_prefix = "yourTargetPrefix";
/*Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to cn-hangzhou.*/
const char *region = "yourRegion";
void init_options(oss_request_options_t *options)
{
    options->config = oss_config_create(options->pool);
    /* Use a char* string to initialize data of the aos_string_t type. */
    aos_str_set(&options->config->endpoint, endpoint);
    /* Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. */
    aos_str_set(&options->config->access_key_id, getenv("OSS_ACCESS_KEY_ID"));
    aos_str_set(&options->config->access_key_secret, getenv("OSS_ACCESS_KEY_SECRET"));
    //Specify two additional parameters.
    aos_str_set(&options->config->region, region);
    options->config->signature_version = 4;
    /* Specify whether to use CNAME. The value 0 indicates that CNAME is not used. */
    options->config->is_cname = 0;
    /* Specify network parameters, such as the timeout period. */
    options->ctl = aos_http_controller_create(options->pool, 0);
}
int main(int argc, char *argv[])
{
    /* Call the aos_http_io_initialize method in main() to initialize global resources, such as network resources and memory resources. */
    if (aos_http_io_initialize(NULL, 0) != AOSE_OK) {
        exit(1);
    }
    /* Create a memory pool to manage memory. aos_pool_t is equivalent to apr_pool_t. The code used to create a memory pool is included in the APR library. */
    aos_pool_t *pool;
    /* Create a memory pool. The value of the second parameter is NULL. This value indicates that the pool does not inherit other memory pools. */
    aos_pool_create(&pool, NULL);
    /* Create and initialize options. This parameter includes global configuration information, such as endpoint, access_key_id, access_key_secret, is_cname, and curl. */
    oss_request_options_t *oss_client_options;
    /* Allocate the memory resources in the memory pool to the options. */
    oss_client_options = oss_request_options_create(pool);
    /* Initialize oss_client_options. */
    init_options(oss_client_options);
    /* Initialize the parameters. */
    aos_string_t bucket;
    oss_logging_config_content_t *content;
    aos_table_t *resp_headers = NULL; 
    aos_status_t *resp_status = NULL; 
    aos_str_set(&bucket, bucket_name);
    content = oss_create_logging_rule_content(pool);
    aos_str_set(&content->target_bucket, target_bucket_name);
    aos_str_set(&content->prefix, target_logging_prefix);
    /* Enable logging for the source bucket. */
    resp_status = oss_put_bucket_logging(oss_client_options, &bucket, content, &resp_headers);
    if (aos_status_is_ok(resp_status)) {
        printf("put bucket logging succeeded\n");
    } else {
        printf("put bucket logging failed, code:%d, error_code:%s, error_msg:%s, request_id:%s\n",
            resp_status->code, resp_status->error_code, resp_status->error_msg, resp_status->req_id); 
    }
    /* Release the memory pool. This operation releases the memory resources allocated for the request. */
    aos_pool_destroy(pool);
    /* Release the allocated global resources. */
    aos_http_io_deinitialize();
    return 0;
}
require 'aliyun/oss'

client = Aliyun::OSS::Client.new(
  # In this example, the endpoint of the China (Hangzhou) region is used. Specify your actual endpoint. 
  endpoint: 'https://oss-cn-hangzhou.aliyuncs.com',
  # Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
  access_key_id: ENV['OSS_ACCESS_KEY_ID'],
  access_key_secret: ENV['OSS_ACCESS_KEY_SECRET']
)

# Specify the name of the bucket. Example: examplebucket. 
bucket = client.get_bucket('examplebucket')
# Set logging_bucket to the destination bucket in which log objects are stored. 
# Set my-log to the directory in which log objects are stored. If you specify this parameter, log objects are stored in the specified directory of the destination bucket. If you do not specify this parameter, log objects are stored in the root directory of the destination bucket. 
bucket.logging = Aliyun::OSS::BucketLogging.new(
  enable: true, target_bucket: 'logging_bucket', target_prefix: 'my-log')

API

Call the PutBucketLogging operation to enable log storage for a bucket.
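The operation sends a PUT request to the ?logging subresource with an XML body that describes the logging configuration. The following request sketch reuses the examplebucket and dest-bucket names from the ossutil example; the endpoint and Authorization header are placeholders.

PUT /?logging HTTP/1.1
Host: examplebucket.oss-cn-hangzhou.aliyuncs.com
Content-Type: application/xml
Authorization: <signature>

<?xml version="1.0" encoding="UTF-8"?>
<BucketLoggingStatus>
  <LoggingEnabled>
    <TargetBucket>dest-bucket</TargetBucket>
    <TargetPrefix>MyLog-</TargetPrefix>
  </LoggingEnabled>
</BucketLoggingStatus>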

Configure log storage for a vector bucket

Console

  1. On the Vector Buckets page, click the name of the vector bucket that you want to configure. In the navigation pane on the left, choose Log Management > Log Storage.

  2. Turn on the Log Storage switch and configure the following parameters:

    • Destination Storage Location: Select a bucket to store log files. The bucket must be in the same region as the vector bucket.

    • Log Prefix: Set a prefix for the log files, such as MyLog-.

    • Role: Use the default service role AliyunOSSLoggingDefaultRole for logging or select a custom role.

ossutil

The following examples show how to enable log storage for a vector bucket named examplebucket. The log file prefix is MyLog-, and the bucket that stores the access logs is examplebucket.

  • Use a JSON configuration file. The following code shows the content of bucket-logging-status.json:

    {
      "BucketLoggingStatus": {
        "LoggingEnabled": {
          "TargetBucket": "examplebucket",
          "TargetPrefix": "MyLog-",
          "LoggingRole": "AliyunOSSLoggingDefaultRole"
        }
      }
    }

    The following command is an example:

    ossutil vectors-api put-bucket-logging --bucket examplebucket --bucket-logging-status file://bucket-logging-status.json
  • Use JSON configuration parameters. The following command is an example:

    ossutil vectors-api put-bucket-logging --bucket examplebucket --bucket-logging-status "{\"BucketLoggingStatus\":{\"LoggingEnabled\":{\"TargetBucket\":\"examplebucket\",\"TargetPrefix\":\"MyLog-\",\"LoggingRole\":\"AliyunOSSLoggingDefaultRole\"}}}"

SDK

Python

import argparse
import alibabacloud_oss_v2 as oss
import alibabacloud_oss_v2.vectors as oss_vectors

parser = argparse.ArgumentParser(description="vector put bucket logging sample")
parser.add_argument('--region', help='The region in which the bucket is located.', required=True)
parser.add_argument('--bucket', help='The name of the bucket.', required=True)
parser.add_argument('--endpoint', help='The domain names that other services can use to access OSS')
parser.add_argument('--account_id', help='The account id.', required=True)
parser.add_argument('--target_bucket', help='The name of the target bucket.', required=True)

def main():
    args = parser.parse_args()

    # Load credential values from environment variables.
    credentials_provider = oss.credentials.EnvironmentVariableCredentialsProvider()

    # Use the default configuration of the SDK.
    cfg = oss.config.load_default()
    cfg.credentials_provider = credentials_provider
    cfg.region = args.region
    cfg.account_id = args.account_id
    if args.endpoint is not None:
        cfg.endpoint = args.endpoint

    vector_client = oss_vectors.Client(cfg)

    result = vector_client.put_bucket_logging(oss_vectors.models.PutBucketLoggingRequest(
        bucket=args.bucket,
        bucket_logging_status=oss_vectors.models.BucketLoggingStatus(
            logging_enabled=oss_vectors.models.LoggingEnabled(
                target_bucket=args.target_bucket,
                target_prefix='log-prefix',
                logging_role='AliyunOSSLoggingDefaultRole'
            )
        )
    ))

    print(f'status code: {result.status_code},'
          f' request id: {result.request_id},'
    )

if __name__ == "__main__":
    main()

Go

package main

import (
	"context"
	"flag"
	"github.com/aliyun/alibabacloud-oss-go-sdk-v2/oss/vectors"
	"log"

	"github.com/aliyun/alibabacloud-oss-go-sdk-v2/oss"
	"github.com/aliyun/alibabacloud-oss-go-sdk-v2/oss/credentials"
)

var (
	region     string
	bucketName string
	accountId  string
)

func init() {
	flag.StringVar(&region, "region", "", "The region in which the vector bucket is located.")
	flag.StringVar(&bucketName, "bucket", "", "The name of the vector bucket.")
	flag.StringVar(&accountId, "account-id", "", "The id of vector account.")
}

func main() {
	flag.Parse()
	if len(bucketName) == 0 {
		flag.PrintDefaults()
		log.Fatalf("invalid parameters, bucket name required")
	}

	if len(region) == 0 {
		flag.PrintDefaults()
		log.Fatalf("invalid parameters, region required")
	}

	if len(accountId) == 0 {
		flag.PrintDefaults()
		log.Fatalf("invalid parameters, accounId required")
	}

	cfg := oss.LoadDefaultConfig().
		WithCredentialsProvider(credentials.NewEnvironmentVariableCredentialsProvider()).
		WithRegion(region).WithAccountId(accountId)

	client := vectors.NewVectorsClient(cfg)

	request := &vectors.PutBucketLoggingRequest{
		Bucket: oss.Ptr(bucketName),
		BucketLoggingStatus: &vectors.BucketLoggingStatus{
			&vectors.LoggingEnabled{
				TargetBucket: oss.Ptr("TargetBucket"),
				TargetPrefix: oss.Ptr("TargetPrefix"),
				LoggingRole:  oss.Ptr("AliyunOSSLoggingDefaultRole"),
			},
		},
	}
	result, err := client.PutBucketLogging(context.TODO(), request)
	if err != nil {
		log.Fatalf("failed to put vector bucket logging %v", err)
	}

	log.Printf("put vector bucket logging result:%#v\n", result)
}

API

Call the PutBucketLogging operation to enable log storage for a vector bucket.

Log file naming convention

The naming convention for stored log files is as follows:

<TargetPrefix><SourceBucket>YYYY-mm-DD-HH-MM-SS-UniqueString

TargetPrefix: The prefix of the log file name.

SourceBucket: The name of the source bucket that generates the access logs.

YYYY-mm-DD-HH-MM-SS: The time partition of the log. The fields represent the year, month, day, hour, minute, and second, from left to right. The stored logs are organized by hour. For example, if HH is 01, the log file contains log entries from 01:00:00 to 01:59:59. MM and SS are always written as 00.

UniqueString: A system-generated string that uniquely identifies the log file.
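For example, if TargetPrefix is MyLog- and the source bucket is examplebucket, a file that covers the hour starting at 01:00:00 on January 3, 2021 is named MyLog-examplebucket2021-01-03-01-00-00-<UniqueString>. The following Python sketch extracts the hour partition from such a name; the file name below is hypothetical, and the trailing unique string is a placeholder.

import re
from datetime import datetime

# Hypothetical log file name that follows the naming convention above.
log_key = "MyLog-examplebucket2021-01-03-01-00-00-0000000000000001"

# Extract the YYYY-mm-DD-HH-MM-SS time partition.
match = re.search(r"\d{4}-\d{2}-\d{2}-\d{2}-\d{2}-\d{2}", log_key)
if match:
    partition = datetime.strptime(match.group(), "%Y-%m-%d-%H-%M-%S")
    # The file contains requests received between partition and partition + 1 hour.
    print("log hour:", partition)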

Log format and example

  • Log format

    OSS access logs contain information about the requester and the accessed resource. The format is as follows:

    RemoteIP Reserved Reserved Time "RequestURL" HTTPStatus SentBytes RequestTime "Referer" "UserAgent" "HostName" "RequestID" "LoggingFlag" "RequesterAliyunID" "Operation" "BucketName" "ObjectName" ObjectSize ServerCostTime "ErrorCode" RequestLength "UserID" DeltaDataSize "SyncRequest" "StorageClass" "TargetStorageClass" "TransmissionAccelerationAccessPoint" "AccessKeyID" "BucketARN"

    RemoteIP (example: 192.168.0.1): The IP address of the requester.

    Reserved (example: -): Reserved field. The value is fixed to -.

    Reserved (example: -): Reserved field. The value is fixed to -.

    Time (example: 03/Jan/2021:14:59:49 +0800): The time when OSS received the request.

    RequestURL (example: GET /example.jpg HTTP/1.0): The request URL, including the query string. OSS ignores query string parameters that start with x-, but these parameters are recorded in the access log. You can use a query string parameter that starts with x- to mark a request and then use this mark to quickly find the corresponding log entry.

    HTTPStatus (example: 200): The HTTP status code returned by OSS.

    SentBytes (example: 999131): The downstream traffic generated by the request, in bytes.

    RequestTime (example: 127): The time taken to complete the request, in milliseconds.

    Referer (example: http://www.aliyun.com/product/oss): The HTTP Referer of the request.

    UserAgent (example: curl/7.15.5): The User-Agent header of the HTTP request.

    HostName (example: examplebucket.oss-cn-hangzhou.aliyuncs.com): The destination domain name accessed by the request.

    RequestID (example: 5FF16B65F05BC932307A3C3C): The request ID.

    LoggingFlag (example: true): Indicates whether log storage is enabled. Valid values:

    • true: Log storage is enabled.

    • false: Log storage is not enabled.

    RequesterAliyunID (example: 16571836914537****): The user ID of the requester. A value of - indicates anonymous access.

    Operation (example: GetObject): The request type.

    BucketName (example: examplebucket): The name of the destination bucket.

    ObjectName (example: example.jpg): The name of the destination object.

    ObjectSize (example: 999131): The size of the destination object, in bytes.

    ServerCostTime (example: 88): The time taken by OSS to process the request, in milliseconds.

    ErrorCode (example: -): The error code returned by OSS. A value of - indicates that no error code was returned.

    RequestLength (example: 302): The length of the request, in bytes.

    UserID (example: 16571836914537****): The ID of the bucket owner.

    DeltaDataSize (example: -): The change in the object size. A value of - indicates that the request does not involve an object write operation.

    SyncRequest (example: -): The request type. Valid values:

    • -: A general request.

    • cdn: A CDN origin request.

    • lifecycle: A request made by a lifecycle rule to transition or delete data.

    StorageClass (example: Standard): The storage class of the destination object. Valid values:

    • Standard: Standard.

    • IA: Infrequent Access.

    • Archive: Archive Storage.

    • Cold Archive: Cold Archive.

    • DeepCold Archive: Deep Cold Archive.

    • -: The storage class of the object was not obtained.

    TargetStorageClass (example: -): Indicates whether the storage class of the object was changed by a lifecycle rule or a CopyObject operation. Valid values:

    • Standard: The storage class is changed to Standard.

    • IA: The storage class is changed to Infrequent Access.

    • Archive: The storage class is changed to Archive Storage.

    • Cold Archive: The storage class is changed to Cold Archive.

    • DeepCold Archive: The storage class is changed to Deep Cold Archive.

    • -: No storage class conversion operation is involved.

    TransmissionAccelerationAccessPoint (example: -): The acceleration endpoint used when the destination bucket is accessed through a transfer acceleration domain name. For example, if a requester accesses the destination bucket through an endpoint in China (Hangzhou), the value is cn-hangzhou. A value of - indicates that a transfer acceleration domain name was not used or that the acceleration endpoint is in the same region as the destination bucket.

    AccessKeyID (example: LTAI****************): The AccessKey ID of the requester.

    • If the request is initiated from the console, a temporary AccessKey ID that starts with TMP is displayed in the log field.

    • If the request is initiated from a tool or an SDK by using a long-term key, a common AccessKey ID is displayed in the log field. Example: LTAI****************.

    • If the request is initiated by using temporary access credentials from Security Token Service (STS), a temporary AccessKey ID that starts with STS is displayed.

    Note: The AccessKey ID field displays - for anonymous requests.

    BucketARN (example: acs:oss***************): The globally unique resource descriptor (ARN) of the bucket.

  • Log example

    192.168.0.1 - - [03/Jan/2021:14:59:49 +0800] "GET /example.jpg HTTP/1.0" 200 999131 127 "http://www.aliyun.com/product/oss" "curl/7.15.5" "examplebucket.oss-cn-hangzhou.aliyuncs.com" "5FF16B65F05BC932307A3C3C" "true" "16571836914537****" "GetObject" "examplebucket" "example.jpg" 999131 88 "-" 302 "16571836914537****" - "cdn" "standard" "-" "-" "LTAI****************" "acs:oss***************"

    After log files are stored in the specified OSS bucket, you can use Simple Log Service to analyze them. Before you can analyze the log files, you must import them to Simple Log Service. For more information about how to import data, see Import OSS data. For more information about the analysis features of Simple Log Service, see Query and analysis overview.
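If you process the raw log files yourself instead of importing them, you can split each line into fields. The following Python sketch (not an official parser) handles quoted and bracketed fields and parses the example line above; because OSS may append new fields over time, index fields from the front of the list.

import re

line = (
    '192.168.0.1 - - [03/Jan/2021:14:59:49 +0800] "GET /example.jpg HTTP/1.0" '
    '200 999131 127 "http://www.aliyun.com/product/oss" "curl/7.15.5" '
    '"examplebucket.oss-cn-hangzhou.aliyuncs.com" "5FF16B65F05BC932307A3C3C" "true" '
    '"16571836914537****" "GetObject" "examplebucket" "example.jpg" 999131 88 "-" 302 '
    '"16571836914537****" - "cdn" "standard" "-" "-" "LTAI****************" '
    '"acs:oss***************"'
)

# Match quoted fields, the bracketed time field, or bare tokens, in order of appearance.
FIELD_RE = re.compile(r'"[^"]*"|\[[^\]]*\]|\S+')
fields = [f.strip('"') for f in FIELD_RE.findall(line)]

# Field positions follow the log format listed above.
print("RemoteIP:  ", fields[0])
print("Time:      ", fields[3])
print("RequestURL:", fields[4])
print("HTTPStatus:", fields[5])
print("Operation: ", fields[14])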

FAQ

Can I query interrupted requests in OSS access logs?

OSS access logs do not currently record interrupted requests. If you make a request by using an SDK, you can check the SDK's return value or exception to determine the reason for the interruption.
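For example, the following oss2 sketch distinguishes a client-side interruption from an error response returned by OSS. The endpoint, region, bucket name, and object name are example values.

import oss2
from oss2.credentials import EnvironmentVariableCredentialsProvider

# Obtain access credentials from environment variables; the endpoint, region, and bucket name are examples.
auth = oss2.ProviderAuthV4(EnvironmentVariableCredentialsProvider())
bucket = oss2.Bucket(auth, "https://oss-cn-hangzhou.aliyuncs.com", "examplebucket", region="cn-hangzhou")

try:
    bucket.get_object_to_file("example.jpg", "local.jpg")
except oss2.exceptions.RequestError as e:
    # The connection was interrupted or timed out before OSS returned a response.
    # Such requests do not appear in the OSS access logs.
    print("request interrupted on the client side:", e)
except oss2.exceptions.OssError as e:
    # OSS returned an error response; this request is recorded in the access logs.
    print("OSS error:", e.status, e.code, e.request_id)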