Simple Log Service:Create an OSS shipping task (old version)

Last Updated:Mar 25, 2026

After Simple Log Service collects data, you can ship it to Object Storage Service (OSS) for storage and analysis. This topic describes how to create an OSS shipping task (old version).

Important

The old version of the feature for shipping logs to OSS is discontinued. Use the new version instead.


Background information

Simple Log Service allows you to automatically archive data from a Logstore to OSS to unlock more value from your data.

  • OSS lets you configure lifecycle rules to store data for extended periods.

  • You can use data processing platforms, such as E-MapReduce and Data Lake Analytics (DLA), or custom programs to consume data from OSS.

Procedure

Important
  • After you enable the shipping feature, Simple Log Service runs multiple shipping instances in parallel.

  • After a shipping instance is generated, you can verify the shipping task by checking its status and the data in OSS.

  1. Log on to the Simple Log Service console.

  2. In the Projects section, click the project that you want to manage.


  3. On the Log Storage > Logstores tab, click > to the left of the target logstore and choose Data Processing > Export > Object Storage Service (OSS).

  4. Hover over Object Storage Service (OSS) and click the + icon.

  5. In the OSS LogShipper panel, configure the following parameters and then click OK.

    Set Shipping Version to Old Version. The following table describes the key parameters.

    • OSS shipper name: The name of the shipping task.

    • OSS bucket: The name of the destination OSS bucket.

      Important
      • The bucket must exist, have Write-Once-Read-Many (WORM) policies disabled, and be in the same region as the Simple Log Service project. For more information about WORM, see Retention Policy (WORM).

      • You can ship data to buckets of the Standard, Infrequent Access (IA), Archive, Cold Archive, and Deep Cold Archive storage classes. By default, the generated OSS objects use the same storage class as the bucket. For more information, see Storage class.

      • Storage classes other than Standard have minimum storage durations and minimum billable sizes. Choose a storage class for the destination bucket that meets your requirements. For more information, see Storage class comparison.

    • File delivery directory: The directory in the OSS bucket. The directory name cannot start with a forward slash (/) or a backslash (\). After the shipping task is created, data from the Logstore is shipped to this directory in the destination OSS bucket.

    • Partition format: The format for dynamically generating subdirectories in the OSS bucket based on the creation time of the shipping task. The format cannot start with a forward slash (/). Default value: %Y/%m/%d/%H/%M. For examples, see Partition format. For parameter details, see the strptime API.

    • OSS write RAM role: The RAM role that grants the shipping task permission to write data to the OSS bucket.

    • Shipping size: The maximum amount of uncompressed data to ship from each shard per OSS object. Valid values: integers from 5 to 256. Unit: MB. When the data to be shipped from a shard reaches this size, a new shipping instance is automatically created.

    • Storage format: The storage format of the data shipped to OSS. For more information, see JSON format, CSV format, and Parquet format.

    • Compress: The compression method for the data stored in OSS.

      • No Compress: Data is not compressed.

      • Compress (snappy): Data is compressed by using the snappy algorithm, which reduces storage usage in the OSS bucket.

    • Shipping interval: The shipping interval for each shard. Valid values: integers from 300 to 900. Default value: 300. Unit: seconds. When the shipping interval of a shard is reached, a new shipping instance is automatically created.
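The two triggers above, Shipping size and Shipping interval, amount to a flush policy: a shard ships whichever limit it reaches first. The following function is an illustrative sketch of that policy, not the actual Simple Log Service implementation; the function name and parameters are hypothetical.

```python
# Illustrative flush policy for one shard: ship when either the uncompressed
# size limit or the time limit is reached (a sketch, not the actual
# Simple Log Service implementation).

def should_create_shipping_instance(buffered_bytes, seconds_since_last_ship,
                                    shipping_size_mb=256, shipping_interval_s=300):
    """Return True when a new shipping instance should be created for a shard."""
    size_reached = buffered_bytes >= shipping_size_mb * 1024 * 1024
    interval_reached = seconds_since_last_ship >= shipping_interval_s
    return size_reached or interval_reached
```

In effect, a low-traffic shard ships on the interval, while a busy shard ships as soon as it buffers the configured amount of data.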

View data in OSS

Once data is successfully shipped to OSS, you can access it using the OSS console, API, SDK, or other tools. For more information, see Object management.

The OSS object path has the following format:

oss://OSS-BUCKET/OSS-PREFIX/PARTITION-FORMAT_RANDOM-ID

OSS-BUCKET is the OSS bucket name, OSS-PREFIX is the directory prefix, PARTITION-FORMAT is the partition format (calculated from the shipping task's creation time by using the strptime API), and RANDOM-ID is the unique ID of the shipping task.
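The path composition can be sketched as follows. The bucket name, prefix, and random ID below are placeholder values taken from the examples in this topic; the real RANDOM-ID is generated by Simple Log Service and cannot be predicted.

```python
from datetime import datetime

# Assemble an example OSS object path from its components.
oss_bucket = "test-bucket"                 # placeholder bucket name
oss_prefix = "test-table"                  # placeholder directory prefix
# PARTITION-FORMAT is computed from the shipping task's creation time.
partition = datetime(2017, 1, 20, 19, 50, 43).strftime("%Y/%m/%d/%H/%M")
random_id = "1484913043351525351_2850008"  # placeholder shipping-task ID

path = f"oss://{oss_bucket}/{oss_prefix}/{partition}_{random_id}"
# path == "oss://test-bucket/test-table/2017/01/20/19/50_1484913043351525351_2850008"
```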

Note

The OSS bucket directory is determined by the creation time of the shipping task. For example, assume that a shipping task is created at 00:00:00 on June 23, 2016 to ship data that was written to Simple Log Service after 23:55 on June 22, 2016, and that data is shipped to OSS every 5 minutes. To analyze all data for June 22, 2016, you must check all objects in the 2016/06/22 directory, and also check whether the objects created during the first 10 minutes in the 2016/06/23/00/ directory contain data from June 22, 2016.
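The check described in the note can be sketched as follows: with the default partition format and a 5-minute shipping interval, the whole day's directory plus the first few intervals of the next day must be scanned, because tasks created just after midnight can still contain the previous day's data. This is an illustration with hypothetical function and parameter names, not an official tool.

```python
from datetime import datetime, timedelta

def prefixes_for_day(day, partition_format="%Y/%m/%d/%H/%M",
                     shipping_interval_s=300, extra_intervals=2):
    """Directory prefixes that may hold data written on the given day.

    Returns the day's own directory plus the first few shipping intervals
    of the next day, whose tasks can still contain data written shortly
    before midnight (a sketch, not part of Simple Log Service).
    """
    prefixes = [day.strftime("%Y/%m/%d/")]
    t = datetime(day.year, day.month, day.day) + timedelta(days=1)
    for _ in range(extra_intervals):
        prefixes.append(t.strftime(partition_format) + "_")
        t += timedelta(seconds=shipping_interval_s)
    return prefixes

# For the example in the note:
# prefixes_for_day(datetime(2016, 6, 22))
# -> ["2016/06/22/", "2016/06/23/00/00_", "2016/06/23/00/05_"]
```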

Partition format

A shipping task corresponds to an OSS bucket directory in the format of oss://OSS-BUCKET/OSS-PREFIX/PARTITION-FORMAT_RANDOM-ID. The PARTITION-FORMAT is derived from the creation time of the shipping task. The following table provides examples for a shipping task created at 19:50:43 on Jan 20, 2017.

  • Partition format: %Y/%m/%d/%H/%M (OSS bucket: test-bucket, OSS prefix: test-table)

    OSS file path: oss://test-bucket/test-table/2017/01/20/19/50_1484913043351525351_2850008

  • Partition format: year=%Y/mon=%m/day=%d/log_%H%M%S (OSS bucket: test-bucket, OSS prefix: log_ship_oss_example)

    OSS file path: oss://test-bucket/log_ship_oss_example/year=2017/mon=01/day=20/log_195043_1484913043351525351_2850008.parquet

  • Partition format: ds=%Y%m%d/%H (OSS bucket: test-bucket, OSS prefix: log_ship_oss_example)

    OSS file path: oss://test-bucket/log_ship_oss_example/ds=20170120/19_1484913043351525351_2850008.snappy

  • Partition format: %Y%m%d/ (OSS bucket: test-bucket, OSS prefix: log_ship_oss_example)

    OSS file path: oss://test-bucket/log_ship_oss_example/20170120/_1484913043351525351_2850008

    Note: This format may cause platforms such as Hive to fail when parsing the OSS content. We recommend that you do not use this format.

  • Partition format: %Y%m%d%H (OSS bucket: test-bucket, OSS prefix: log_ship_oss_example)

    OSS file path: oss://test-bucket/log_ship_oss_example/2017012019_1484913043351525351_2850008

When you analyze OSS data by using big data platforms such as Hive, MaxCompute, or Data Lake Analytics (DLA), you can use partition information by setting the partition format to a key=value format. For example, the path oss://test-bucket/log_ship_oss_example/year=2017/mon=01/day=20/log_195043_1484913043351525351_2850008.parquet uses three partition columns: year, mon, and day.
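The partition paths in the examples above can be reproduced with Python's strftime, which uses the same conversion specifications as the strptime API. The timestamp below is the example creation time of 19:50:43 on Jan 20, 2017.

```python
from datetime import datetime

# Example shipping-task creation time from the table above.
created = datetime(2017, 1, 20, 19, 50, 43)

print(created.strftime("%Y/%m/%d/%H/%M"))                    # 2017/01/20/19/50
print(created.strftime("year=%Y/mon=%m/day=%d/log_%H%M%S"))  # year=2017/mon=01/day=20/log_195043
print(created.strftime("ds=%Y%m%d/%H"))                      # ds=20170120/19
print(created.strftime("%Y%m%d%H"))                          # 2017012019
```

Formats such as year=%Y/mon=%m/day=%d produce key=value directory names that platforms like Hive and DLA can map directly to partition columns.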

More operations

After you create a shipping task, you can go to the OSS Shipper page to modify the task, disable shipping, view task status and error messages, and retry failed tasks.

  • Modify a shipping task

    Click Settings to modify the shipping task. For more information about the parameters, see Procedure in this topic.

  • Disable shipping

    Click Disable to disable the shipping task.

  • View task status and error messages

    Simple Log Service lets you view all shipping tasks from the last two days and their statuses.

    • Task status

      • Succeeded: The shipping task completed successfully.

      • Running: The shipping task is in progress. Check the status again later.

      • Failed: The shipping task failed due to an error that could not be resolved by automatic retries. Troubleshoot the issue based on the error message and then retry the task.

    • Error messages

      If a shipping task fails, a corresponding error message appears in the console.

      • UnAuthorized

        Cause: The required permissions are not granted.

        Solution: Verify the following settings:

        • Check whether the owner of the OSS bucket has created the AliyunLogDefaultRole role.

        • Check whether the Alibaba Cloud account ID in the role's policy document is correct.

        • Check whether the AliyunLogDefaultRole role is granted write permissions on the OSS bucket.

        • Check whether the ARN of the RAM role is correctly configured.

      • ConfigNotExist

        Cause: The configuration does not exist.

        Solution: This error usually occurs because the shipping task was disabled. Re-enable the task and then retry it.

      • InvalidOssBucket

        Cause: The OSS bucket does not exist.

        Solution: Verify the following settings:

        • Check whether the OSS bucket and the Simple Log Service project are in the same region.

        • Check whether the bucket name is spelled correctly.

      • InternalServerError

        Cause: An internal error occurred in Simple Log Service.

        Solution: Retry the task.
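For the UnAuthorized checks above, the trust policy document of the AliyunLogDefaultRole typically resembles the following sketch, which allows Simple Log Service to assume the role. This is an illustration; verify the actual policy of your role in the RAM console.

```json
{
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "log.aliyuncs.com"
        ]
      }
    }
  ],
  "Version": "1"
}
```

If the OSS bucket and the Simple Log Service project belong to different Alibaba Cloud accounts, the Service entry usually takes the form of an account-qualified value such as ACCOUNT-ID@log.aliyuncs.com; check the official RAM documentation for the exact requirement.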

    • Retry a task

      By default, Simple Log Service automatically retries failed tasks according to a policy. You can also retry tasks manually. Simple Log Service retries all failed tasks from the last two days. When a task fails, the system waits 15 minutes before the first automatic retry, 30 minutes before the second, 60 minutes before the third, and so on.

      To retry a failed task immediately, click Retry All Failed Tasks, or click Retry in the row of the target task. You can also retry a specific task by using an API or SDK.
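The automatic retry schedule described above doubles the wait after each failure. The following function is a sketch of that schedule; the function name is hypothetical, and the actual service policy may differ in details such as an upper bound on the delay.

```python
def retry_delay_minutes(attempt):
    """Minutes to wait before the given automatic retry attempt (1-based).

    Illustrates the doubling schedule described above: 15 minutes before
    the first retry, 30 before the second, 60 before the third, and so on.
    """
    return 15 * 2 ** (attempt - 1)
```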