
Simple Log Service: Limits on data import from OSS to Simple Log Service

Last Updated: Aug 28, 2024

This topic describes the limits on data import from Object Storage Service (OSS) to Simple Log Service.

Limits on collection

Size of a single object

  • For an object that is Snappy-compressed without a framing format, the maximum size is 350 MB.

  • For an object in any other format, the maximum size is 5 GB.

If the size of a single object exceeds the limit, the entire object is ignored during import.
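
Because over-limit objects are silently ignored, it can be useful to scan a bucket for such objects before you create an import task. The following is a minimal sketch that uses the Python SDK for OSS (oss2); the endpoint, bucket name, credentials, and prefix are placeholders, and the assumption that Snappy objects carry a .snappy suffix is only for illustration.

    # Sketch: flag OSS objects that exceed the import size limits.
    import oss2

    SNAPPY_LIMIT = 350 * 1024 * 1024        # 350 MB limit for raw Snappy objects
    DEFAULT_LIMIT = 5 * 1024 * 1024 * 1024  # 5 GB limit for other formats

    auth = oss2.Auth("<access_key_id>", "<access_key_secret>")
    bucket = oss2.Bucket(auth, "https://oss-cn-hangzhou.aliyuncs.com", "<bucket_name>")

    for obj in oss2.ObjectIterator(bucket, prefix="logs/"):
        # Assumption: Snappy-compressed objects are named with a .snappy suffix.
        limit = SNAPPY_LIMIT if obj.key.endswith(".snappy") else DEFAULT_LIMIT
        if obj.size > limit:
            # This object would be ignored by the import task.
            print(f"over limit: {obj.key} ({obj.size} bytes)")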

Size of a single data record

The size of a single data record can be up to 3 MB. If a record exceeds the limit, the record is discarded.

The Deliver Failed chart on the Data Processing Insight dashboard displays the number of discarded records. For more information, see What to do next.
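
If you generate the log files yourself, you can check for oversized records before they are uploaded. A minimal sketch, assuming newline-delimited records; the file path is a placeholder.

    # Sketch: find newline-delimited records that exceed the 3 MB limit.
    RECORD_LIMIT = 3 * 1024 * 1024  # 3 MB per record

    with open("app.log", "rb") as f:  # placeholder path
        for line_no, line in enumerate(f, start=1):
            if len(line) > RECORD_LIMIT:
                # This record would be discarded during import.
                print(f"line {line_no}: {len(line)} bytes exceeds 3 MB")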

Object update

If new data is appended to an OSS object that has already been imported to Simple Log Service, all data of the object, not only the appended part, is re-imported the next time a data import task processes the object.
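
One way to avoid repeated full re-imports is to write new data to new objects instead of appending to existing ones. A minimal sketch using the Python SDK for OSS (oss2); the key layout, endpoint, bucket, and credentials are placeholders.

    # Sketch: upload each batch to a new, time-partitioned object key
    # instead of appending, so earlier objects are never re-imported.
    import time
    import oss2

    auth = oss2.Auth("<access_key_id>", "<access_key_secret>")
    bucket = oss2.Bucket(auth, "https://oss-cn-hangzhou.aliyuncs.com", "<bucket_name>")

    def upload_batch(data: bytes) -> str:
        key = f"logs/{time.strftime('%Y/%m/%d/%H%M%S')}.log"  # assumed key layout
        bucket.put_object(key, data)
        return key

    # Example: upload_batch(b"2024-08-28T00:00:00Z INFO started\n")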

Detection latency of new objects

The minimum interval at which an import task checks for new objects is 1 minute. If an import task imports a large number of objects to Simple Log Service, the latency may be high.

Limits on configuration

Number of data import configurations

A maximum of 100 data import configurations can be created in a single project, regardless of configuration type. If you want to increase the quota, submit a ticket.

Limits on performance

Number of concurrent subtasks

Based on the number of objects to import, Simple Log Service automatically creates multiple subtasks to import data concurrently. Up to eight subtasks are created for each data import configuration. Each subtask can process decompressed data at up to 10 MB/s, so an import task can process decompressed data at up to 80 MB/s in total.

If you want to increase the quota, submit a ticket.
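
These limits give a rough lower bound on how long an import takes. A back-of-the-envelope sketch; the data volume is a placeholder, and the 80 MB/s total assumes the default quota of eight subtasks.

    # Sketch: estimate the minimum import time from the throughput limits.
    SUBTASKS = 8               # default maximum subtasks per import configuration
    MBPS_PER_SUBTASK = 10      # decompressed MB/s per subtask

    total_decompressed_mb = 500_000  # placeholder: ~500 GB of decompressed data
    throughput = SUBTASKS * MBPS_PER_SUBTASK  # 80 MB/s in total
    print(f"minimum import time: {total_decompressed_mb / throughput / 60:.1f} minutes")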

Number of shards in a Logstore

The write performance of Simple Log Service varies based on the number of shards in a Logstore. A single shard supports a write speed of 5 MB/s. If an import task writes a large volume of data to Simple Log Service, we recommend that you increase the number of shards for the Logstore. For more information, see Manage shards.
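
For example, to absorb the full 80 MB/s that an import task can deliver, a Logstore needs at least 16 shards at 5 MB/s each. A minimal sketch of that arithmetic; the expected write rate is a placeholder.

    # Sketch: estimate the number of shards needed for a given write rate.
    import math

    SHARD_WRITE_MBPS = 5         # write capacity per shard
    expected_write_mbps = 80     # placeholder: peak rate from the import task

    shards_needed = math.ceil(expected_write_mbps / SHARD_WRITE_MBPS)
    print(f"recommended shards: at least {shards_needed}")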

Data read from Archive objects

If the objects that you want to import are Archive objects, you must restore the objects before Simple Log Service can read data from the objects.

In most cases, Archive objects require approximately 1 minute to be restored.
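
Restoration can be triggered through the OSS API before the import starts. A minimal sketch using the Python SDK for OSS (oss2); the endpoint, bucket, credentials, and prefix are placeholders, and error handling for objects that are already being restored is omitted.

    # Sketch: request restoration of Archive objects under a prefix
    # so that a subsequent import task can read them.
    import oss2

    auth = oss2.Auth("<access_key_id>", "<access_key_secret>")
    bucket = oss2.Bucket(auth, "https://oss-cn-hangzhou.aliyuncs.com", "<bucket_name>")

    for obj in oss2.ObjectIterator(bucket, prefix="logs/"):
        meta = bucket.head_object(obj.key)
        if meta.headers.get("x-oss-storage-class") == "Archive":
            bucket.restore_object(obj.key)  # restoration typically takes ~1 minute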

Size of objects

For the same total volume of data, larger objects yield higher read throughput, and smaller objects yield lower read throughput.

Network

If your OSS bucket and Simple Log Service project reside in the same region, no Internet traffic is generated, and data is transferred at a high speed.

If your OSS bucket and Simple Log Service project reside in different regions, object reads are significantly affected by network conditions, and read performance is relatively poor.

Import latency of new data

If the number of existing objects is large and you do not enable OSS Metadata Indexing, the value of the New File Check Cycle parameter may not take effect when new objects are imported to Simple Log Service.

If the number of existing objects is approximately 1 million, the import latency of new data is approximately 2 minutes. The latency scales linearly with the number of existing objects.
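
Given the linear relationship, the latency can be roughly estimated from the object count. A minimal sketch; the object count is a placeholder.

    # Sketch: estimate new-data import latency from the existing object count,
    # extrapolating linearly from ~2 minutes per 1 million objects.
    existing_objects = 3_000_000  # placeholder
    latency_minutes = 2 * (existing_objects / 1_000_000)
    print(f"estimated import latency: ~{latency_minutes:.0f} minutes")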