
Simple Log Service: Import Kafka data

Last Updated: Mar 25, 2026

This topic describes the limits on importing data from Kafka to Simple Log Service.

Collection limits

Compression format
The Kafka Producer must use one of the following compression formats: gzip, zstd, lz4, or snappy. Data compressed in other formats is discarded. The number of discarded data entries is shown as Deliver Failed on the Data Processing Insight dashboard. For more information, see View the data import configuration.

Maximum number of topics
A single data import configuration supports a maximum of 10,000 topics.

Maximum log size
The maximum size of a single log is 3 MB. Logs that exceed this limit are discarded. The number of discarded logs is shown as Deliver Failed on the Data Processing Insight dashboard. For more information, see View the data import configuration.

Starting position
You can set the starting position only to Earliest or Latest. Importing data from a specific point in time is not supported.
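The collection limits above can be summarized as a small pre-check. This is an illustrative sketch only: the function name and constants mirror the documented limits and are not part of any Simple Log Service SDK.

```python
# Collection limits from the documentation, not an SLS API.
SUPPORTED_COMPRESSION = {"gzip", "zstd", "lz4", "snappy"}
MAX_LOG_SIZE = 3 * 1024 * 1024  # 3 MB per log

def will_be_imported(compression_type: str, log_size_bytes: int) -> bool:
    """Return True if a log would pass the documented collection limits.

    Logs in an unsupported compression format, or larger than 3 MB,
    are discarded and counted as Deliver Failed.
    """
    if compression_type not in SUPPORTED_COMPRESSION:
        return False
    return log_size_bytes <= MAX_LOG_SIZE

print(will_be_imported("gzip", 1024))            # supported, small -> True
print(will_be_imported("brotli", 1024))          # unsupported format -> False
print(will_be_imported("lz4", 4 * 1024 * 1024))  # over 3 MB -> False
```

Running such a check on the producer side avoids silently losing data that the import task would discard.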

Configuration limits

Number of data import configurations
A single Project supports a maximum of 100 data import configurations of all types. If you require a higher limit, submit a ticket.

Bandwidth limit
By default, the maximum network bandwidth is 128 MB/s when a data import task reads data from an Alibaba Cloud Kafka cluster over a VPC. If you require more bandwidth, submit a ticket.

Performance limits

Number of concurrent subtasks
Simple Log Service automatically creates multiple subtasks to import data concurrently based on the number of topics. Each subtask can process decompressed data at a maximum rate of 50 MB/s.
  • If the number of topics exceeds 2,000, Simple Log Service creates 16 subtasks.
  • If the number of topics exceeds 1,000, Simple Log Service creates 8 subtasks.
  • If the number of topics exceeds 500, Simple Log Service creates 4 subtasks.
  • If the number of topics is 500 or less, Simple Log Service creates 2 subtasks.

If you require a higher limit, submit a ticket.
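The tiers above can be expressed as a simple function. This is a sketch of the documented behavior only; the function names are hypothetical and not part of any SDK.

```python
# Reproduces the documented subtask tiers; names are illustrative only.
def subtask_count(num_topics: int) -> int:
    if num_topics > 2000:
        return 16
    if num_topics > 1000:
        return 8
    if num_topics > 500:
        return 4
    return 2

# Each subtask processes decompressed data at up to 50 MB/s, so the
# aggregate ceiling for one import configuration is:
def max_throughput_mb_s(num_topics: int) -> int:
    return subtask_count(num_topics) * 50

print(subtask_count(500))         # 500 or fewer topics -> 2 subtasks
print(subtask_count(1500))        # more than 1,000 topics -> 8 subtasks
print(max_throughput_mb_s(3000))  # 16 subtasks * 50 MB/s -> 800
```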

Number of topic partitions
More partitions in a Kafka topic improve throughput by allowing subtasks to scale out. For high-volume topics, we recommend increasing the number of partitions to at least 16.

Number of Logstore Shards
The write performance of Simple Log Service depends on the number of Shards in the destination Logstore. A single Shard supports a write throughput of 5 MB/s. For high-volume ingestion, we recommend increasing the number of Shards in the destination Logstore. For more information, see Manage shards.
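Given the 5 MB/s per-Shard write limit, a rough lower bound on the Shard count for a target ingestion rate can be sketched as follows. The function name is illustrative, not an SLS API.

```python
import math

# A single Shard sustains up to 5 MB/s of writes (per the documentation).
SHARD_WRITE_MB_S = 5

def min_shards(peak_ingest_mb_s: float) -> int:
    """Illustrative lower bound on Shard count for a peak write rate."""
    return max(1, math.ceil(peak_ingest_mb_s / SHARD_WRITE_MB_S))

print(min_shards(38))  # 38 MB/s peak -> at least 8 Shards
```

In practice you would provision headroom above this bound to absorb traffic spikes.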
Data compression
For large data volumes, we recommend compressing data before writing it to Kafka. This significantly reduces network traffic. Network transmission is often more time-consuming than data decompression, especially when importing data over the internet.
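On the producer side, compression is a client setting. The following is a hypothetical configuration fragment in the style of the kafka-python client; the broker address is a placeholder, and the batch settings are illustrative tuning suggestions, not values required by Simple Log Service.

```python
# Hypothetical kafka-python-style producer settings; broker is a placeholder.
producer_config = {
    "bootstrap_servers": "your-kafka-broker:9092",  # placeholder address
    "compression_type": "lz4",   # must be gzip, zstd, lz4, or snappy
    "batch_size": 256 * 1024,    # larger batches compress better
    "linger_ms": 50,             # wait briefly so batches fill up
}
# producer = kafka.KafkaProducer(**producer_config)
```

Any of the four supported formats works; formats outside that set (for example, brotli) would be discarded by the import task.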

Network
If your Alibaba Cloud Kafka cluster is in a VPC, you can read data over the VPC network. This saves internet traffic and provides faster transmission speeds, with bandwidth reaching over 100 MB/s. When you import data over the internet, network performance and bandwidth are not guaranteed, which can lead to import latency.

Other limits

Metadata synchronization latency
The import task synchronizes metadata from the Kafka cluster every 10 minutes. For newly added topics and partitions, metadata synchronization is delayed by approximately 10 minutes.
Note: If the starting position is set to Latest, the initial data written to a new topic (up to 10 minutes of data) might be skipped.

Topic offset validity period
The maximum validity period for a topic offset is 7 days. If no data is read from a topic for 7 consecutive days, its previous offset is discarded. When new data arrives, the import task sets the new offset based on the starting position in the data import configuration.