Log Service provides two read-related functions.
Log collection and consumption (LogHub): provides a public channel for log collection and distribution, with sequential first-in-first-out (FIFO) reads and writes of full data, similar to Kafka.
- Each Logstore has one or more shards. Data is written to a shard at random.
- You can read logs in batches from a specified shard, in the order in which they were written to that shard.
- You can set a start point (cursor) and pull logs from a shard in batches, based on the time at which Log Service received them.
- By default, logs are retained in LogHub for two days, during which logs can be consumed.
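The cursor-based, FIFO consumption model described above can be sketched with a small in-memory model. This is an illustration of the concept only; the `Shard`, `get_cursor`, and `pull_logs` names are hypothetical stand-ins, not the Log Service SDK:

```python
from dataclasses import dataclass, field

@dataclass
class Shard:
    """Minimal in-memory model of a LogHub shard: an append-only FIFO log."""
    logs: list = field(default_factory=list)

    def write(self, log):
        # Writes are appended in arrival order (FIFO).
        self.logs.append(log)

    def get_cursor(self, start_index=0):
        # A cursor is just a read position within the shard.
        return start_index

    def pull_logs(self, cursor, count):
        """Read a batch starting at `cursor`; return the batch and the next cursor."""
        batch = self.logs[cursor:cursor + count]
        return batch, cursor + len(batch)

# Consume a shard in batches, preserving write order.
shard = Shard()
for i in range(5):
    shard.write(f"log-{i}")

cursor = shard.get_cursor()
batch1, cursor = shard.pull_logs(cursor, 3)   # first 3 logs
batch2, cursor = shard.pull_logs(cursor, 3)   # remaining 2 logs
```

A real consumer would persist the returned cursor between pulls, so it can resume from where it left off within the retention window.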
Log query (index): Built on LogHub, Log Service lets you query massive volumes of logs by keyword.
- Query logs that match the specified keywords.
- Combine keywords with the Boolean operators AND, OR, and NOT.
- Queries run across all shards; you cannot query a specific shard.
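The Boolean keyword semantics above can be sketched as a simple filter. The `matches` function and its parameters are hypothetical, written only to mirror how AND, OR, and NOT combine keywords:

```python
def matches(log_line, include_all=(), include_any=(), exclude=()):
    """Hypothetical keyword filter mirroring AND / OR / NOT query semantics."""
    # AND: every keyword in include_all must appear.
    if any(kw not in log_line for kw in include_all):
        return False
    # OR: at least one keyword in include_any must appear (if any are given).
    if include_any and not any(kw in log_line for kw in include_any):
        return False
    # NOT: no keyword in exclude may appear.
    if any(kw in log_line for kw in exclude):
        return False
    return True

logs = [
    "GET /index 200",
    "GET /login 404",
    "POST /login 200",
]
# Roughly equivalent to the query: GET AND (404 OR 200) NOT /login
hits = [line for line in logs
        if matches(line,
                   include_all=("GET",),
                   include_any=("404", "200"),
                   exclude=("/login",))]
```

Here only `"GET /index 200"` survives: the second line is excluded by NOT, and the third fails the AND clause.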
|Function|Log query (LogSearch)|Log collection and consumption (LogHub)|
|---|---|---|
|Query logs by keyword|Supported|Not supported|
|Read small amounts of data|Fast|Fast|
|Read full data|Slow (100 logs per 100 ms; not recommended)|Fast (1 MB of logs per 10 ms; recommended)|
|Read logs by topic|Yes|No. Logs are read by shard|
|Read logs by shard|No. Queries run across all shards|Yes. You must specify a shard to read from|
|Scenario|Filtering scenarios such as monitoring and troubleshooting|Full-data processing such as stream computing and batch processing|