Logs collected into the LogHub of Log Service can be consumed in the following three ways:
|Approach|Scenario|Timeliness|Storage period|
|---|---|---|---|
|Real-time consumption (LogHub)|Stream computing and real-time computing|Real-time|Customizable|
|Query and analysis (LogSearch/Analytics)|Online query and analysis|Real-time (less than one second in 99.99% of cases)|Customizable|
|Shipping and storage (LogShipper)|Full log storage for offline analysis|5–30 minutes|Depends on the storage system|
Real-time consumption follows this process:

- Obtain a cursor based on a condition such as a point in time, or the Begin or End position of a shard.
- Read logs based on the cursor and a step size; the next cursor is returned with the data.
- Move the cursor forward continuously to consume the logs.
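The cursor loop above can be sketched as follows. This is a minimal, self-contained illustration: `MockLogStore`, `get_cursor`, and `pull_logs` are hypothetical stand-ins for a LogHub shard and the equivalent SDK calls, not the actual Log Service API.

```python
class MockLogStore:
    """In-memory stand-in for a single LogHub shard (hypothetical)."""

    def __init__(self, logs):
        self.logs = logs  # list of (timestamp, message) tuples, time-ordered

    def get_cursor(self, start_time):
        # Obtain a cursor: here, the index of the first log at or after start_time.
        for i, (ts, _) in enumerate(self.logs):
            if ts >= start_time:
                return i
        return len(self.logs)  # "End" cursor

    def pull_logs(self, cursor, step):
        # Read up to `step` logs from the cursor; return the batch and the next cursor.
        batch = self.logs[cursor:cursor + step]
        return batch, cursor + len(batch)


def consume(store, start_time, step=2):
    """Move the cursor forward continuously until the shard is drained."""
    cursor = store.get_cursor(start_time)
    consumed = []
    while True:
        batch, next_cursor = store.pull_logs(cursor, step)
        if not batch:
            break  # caught up with the end of the shard
        consumed.extend(msg for _, msg in batch)
        cursor = next_cursor  # advance to the cursor returned by the read
    return consumed
```

For example, `consume(MockLogStore([(1, "a"), (2, "b"), (3, "c")]), start_time=2)` reads only the logs from timestamp 2 onward.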
- Use Spark Streaming Client to consume logs.
- Use Storm Spout to consume logs.
- Use the Flink Connector (Flink consumer and Flink producer) to consume logs.
- Use the LogHub consumer library to consume logs. The consumer library is an advanced mode for LogHub consumers: it provides a lightweight computing framework and automatically handles shard allocation and order preservation when multiple consumers read the same Logstore concurrently.
- Use SDKs to consume logs. Log Service provides SDKs in multiple languages, such as Java and Python, that support the log consumption APIs. For more information, see Log Service SDK.
- Use cloud products to consume logs:
    - Query and analysis (LogSearch/Analytics)
    - Shipping and storage (LogShipper)
    - Secure Log Service: Log Service interconnects with cloud security products and uses ISVs to consume the logs of cloud products.
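To illustrate the shard-allocation problem that the consumer library solves automatically, here is a toy round-robin split. This is NOT the library's actual algorithm; it only shows the goal, which is that each shard is owned by exactly one consumer and shards are spread evenly.

```python
def allocate_shards(shard_ids, consumer_ids):
    """Assign each shard to exactly one consumer, round-robin.

    Toy sketch only: the real consumer library rebalances dynamically
    as consumers join and leave, and also preserves per-shard order.
    """
    assignment = {c: [] for c in consumer_ids}
    for i, shard in enumerate(sorted(shard_ids)):
        owner = consumer_ids[i % len(consumer_ids)]
        assignment[owner].append(shard)
    return assignment
```

For example, `allocate_shards([0, 1, 2, 3, 4], ["c1", "c2"])` gives `c1` shards `[0, 2, 4]` and `c2` shards `[1, 3]`, so no shard is consumed twice and the load differs by at most one shard.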