
Simple Log Service:Data shipping to OSS (new version)

Last Updated:Aug 06, 2025

This topic describes the stability and limits of the data shipping to OSS (new version) feature.

How to access this feature

Create an OSS data shipping job (new version)

Stability description

Data reads from Simple Log Service


Availability

High availability.

If Simple Log Service returns an error and data cannot be read, the OSS data shipping job retries the read at least 10 times. If all retries fail, the job reports an execution error and then restarts.
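The retry behavior described above can be sketched as a bounded retry loop. This is an illustration only: `read_fn`, the backoff delays, and the function name are assumptions, not the actual Simple Log Service SDK API.

```python
import time

def read_with_retries(read_fn, max_retries=10, base_delay=0.1):
    """Retry a Logstore read at least `max_retries` times.

    `read_fn` is a hypothetical stand-in for a Simple Log Service read
    call. If every attempt fails, the last error propagates so the job
    can report an execution error and restart.
    """
    for attempt in range(max_retries):
        try:
            return read_fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # all retries exhausted; the job restarts
            time.sleep(base_delay * (2 ** attempt))  # back off before retrying
```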

Data writes to OSS


Degree of concurrency

Data is partitioned based on the shards of Simple Log Service to create data shipping instances. This supports rapid scale-out.

If a shard in the source Logstore of Simple Log Service is split, the data shipping instances can be scaled out within seconds to accelerate data exporting.

No data loss

The OSS data shipping job is built on a consumer group to ensure consistency. The offset is committed only after the data is successfully written to OSS. Because the offset never advances ahead of the written data, no data is lost.
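The write-then-commit ordering can be illustrated with a minimal sketch; `write_to_oss` and `commit_checkpoint` are hypothetical hooks, not actual SDK calls.

```python
def ship_batch(batch, write_to_oss, commit_checkpoint):
    """At-least-once shipping: the offset advances only after the write.

    If the job crashes between the two steps, the batch is shipped again
    on restart (duplicates are possible), but data is never lost.
    """
    write_to_oss(batch["data"])             # 1. persist the data first
    commit_checkpoint(batch["end_cursor"])  # 2. only then commit the offset
```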

Monitoring and alerts


Data shipping provides comprehensive monitoring features that allow you to track metrics such as the latency and traffic of data shipping jobs in real time. You can configure custom alerts as needed to detect issues in a timely manner, such as insufficient export instances or network quota limits. For more information, see Configure alerts for an OSS data shipping job (new version).

Limits

Network


Network type

Data is transmitted over the Alibaba Cloud internal network to ensure network stability and speed.

Permission management


Authorization

Authorization covers both the permissions to operate OSS data shipping jobs and the permissions to access the data. For more information, see Prepare permissions.

Server-side encryption

If server-side encryption is enabled, you must grant additional permissions to the RAM role. For more information, see OSS configuration document.

Read traffic


Each project and each shard has a maximum read traffic limit. For more information, see Data reads and writes.

If the traffic exceeds the limit, you can split the shard or request an increase of the read traffic limit for the project. When the limit is exceeded, the OSS data shipping job fails to read data and retries at least 10 times. If all retries fail, the job reports an execution error and then restarts.

Data writes to OSS


Concurrent instances

The number of concurrent instances is the same as the number of shards, including read/write shards and read-only shards.

Shipping limits

  • The size of an OSS object is controlled by the shipping size that you configure for each shard. The value is calculated based on the uncompressed data and ranges from 5 MB to 256 MB.

  • The shipping interval for each shard ranges from 300 to 900 seconds.

  • Each data shipping operation of a concurrent instance generates a separate OSS file.
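The limits above imply a flush decision: the current OSS object is closed when either the size threshold or the interval threshold is reached. The following is an illustrative sketch, not the service's code; the function name and defaults are assumptions.

```python
def should_flush(buffered_bytes, seconds_open,
                 size_limit=256 * 2**20, interval=900):
    """Return True when the current OSS object should be closed.

    `size_limit` models the configured shipping size (5-256 MB, measured
    on uncompressed data); `interval` models the shipping interval
    (300-900 seconds).
    """
    return buffered_bytes >= size_limit or seconds_open >= interval
```

Either trigger alone is sufficient, so even a low-traffic shard produces an object at least once per interval.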

Time-based partitioning

OSS data shipping is performed in batches. A file is written in each batch. The file contains a batch of data. The file path is determined by the minimum `receive_time` (the time when the data arrives at Simple Log Service) in the batch.
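For illustration, the path derivation can be sketched as follows. The `%Y/%m/%d/%H` partition format and the `object_path` helper are assumptions; the actual format is whatever you configure for the job.

```python
from datetime import datetime, timezone

def object_path(prefix, min_receive_time, fmt="%Y/%m/%d/%H"):
    """Build an OSS object path from the minimum receive_time (Unix
    seconds) in the batch; the partition format here is only an example."""
    t = datetime.fromtimestamp(min_receive_time, tz=timezone.utc)
    return f"{prefix}/{t.strftime(fmt)}"
```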

File format

After data is shipped to OSS, it can be stored in the CSV, JSON, Parquet, or ORC file format. For more information, see JSON format, CSV format, Parquet format, and ORC format.

Compression method

The snappy, gzip, and zstd compression methods are supported. You can also choose not to compress data.
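Of the three methods, only gzip is available in the Python standard library, so this sketch uses gzip to show the size trade-off on a repetitive log payload (snappy and zstd require third-party packages).

```python
import gzip

# A highly repetitive payload, typical of structured log lines.
raw = ("2025-08-06T00:00:00Z level=INFO msg=ok\n" * 1000).encode()

packed = gzip.compress(raw)
ratio = len(packed) / len(raw)  # well below 1 for repetitive data
```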

OSS bucket

  • You can ship data only to an existing bucket for which Write-Once-Read-Many (WORM) is disabled. The bucket must be in the same region as the Simple Log Service project. For more information about WORM, see Retention policy (WORM).

  • You can ship data to buckets of the Standard, Infrequent Access (IA), Archive, and Cold Archive storage classes. After data is shipped, the storage class of the generated OSS object is the same as that of the bucket by default. For more information, see Introduction to storage classes.

  • Buckets of non-standard storage classes have limits on the minimum storage duration and minimum billable size. You must configure the storage class for the destination bucket as needed. For more information, see Comparison of storage classes.

Configuration item


Delayed shipping

The time that you set for the Delayed Shipping configuration item cannot exceed the data retention period of the current Logstore.

We recommend that you reserve a buffer period; otherwise, data may expire and be lost before it is shipped. For example, if the data retention period of a Logstore is 30 days, set the delayed shipping period to no more than 25 days.
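The 30-day/25-day example can be expressed as a small validation helper. The function and its 5-day default buffer are illustrative assumptions; only the hard limit (the retention period) comes from the service.

```python
def validate_delay(delay_days, retention_days, buffer_days=5):
    """Reject a delay beyond retention; return True if a safe buffer remains."""
    if delay_days > retention_days:
        raise ValueError("delay exceeds the Logstore retention period")
    return delay_days <= retention_days - buffer_days
```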

Manage data shipping


Pause a data shipping job

A data shipping job records the log cursor of the last shipping operation. When the job is resumed, it continues to ship data from the recorded cursor. Therefore, the following behavior applies when you pause a data shipping job:

  • If you pause a job for a period of time that does not exceed the data retention period and then resume the job, the system continues to ship data from where it left off. No data is lost.

  • If you pause a job for a period of time that exceeds the data retention period and then resume the job, the system starts to ship data from the earliest record that is still within the retention period, which is the record nearest to where the job left off. Data that expired during the pause cannot be shipped.
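The two cases above reduce to taking the later of the saved cursor and the earliest retained record. This sketch (times in Unix seconds) mirrors that behavior; it is not the service's implementation.

```python
def resume_cursor(saved_cursor_time, now, retention_seconds):
    """Where a resumed job ships from: the saved cursor if it is still
    within the retention period, otherwise the earliest retained record."""
    earliest_retained = now - retention_seconds
    return max(saved_cursor_time, earliest_retained)
```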