This topic describes the limits on data import from CloudMonitor to Log Service.
Limits on collection
Item | Description |
---|---|
Start time of collection | When you create a data import configuration, the default start time of collection is the current time minus 30 minutes. In this case, a new import task can collect historical data that is generated within the previous 30 minutes. |
Collection period | An import task pulls new data for each metric every minute. If an import task writes a large volume of data to Log Service and the data cannot be imported within 1 minute, the collection latency exceeds 1 minute. |
Total size of data points collected at a time for a single metric | Up to 3 MB of data points can be collected at a time for a single metric. If the total size exceeds 3 MB, all data points that are collected for the metric within the current collection period are discarded. The Data Processing Insight dashboard displays the number of times that data points are discarded. For more information, see View a data import configuration. |
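The two per-metric limits above can be sketched as follows. This is a back-of-the-envelope illustration, not a Log Service API: the constants come from the table, and the helper names are hypothetical.

```python
from datetime import datetime, timedelta

# Default collection start time: current time minus 30 minutes,
# so a new import task backfills up to 30 minutes of historical data.
DEFAULT_LOOKBACK = timedelta(minutes=30)

# Per-metric cap: if one collection pulls more than 3 MB of data
# points for a single metric, that batch is discarded entirely.
MAX_METRIC_BATCH_BYTES = 3 * 1024 * 1024

def default_start_time(now: datetime) -> datetime:
    """Return the default collection start time for a new import task."""
    return now - DEFAULT_LOOKBACK

def batch_is_kept(batch_bytes: int) -> bool:
    """A metric's batch is written only if it fits within the 3 MB cap."""
    return batch_bytes <= MAX_METRIC_BATCH_BYTES

now = datetime(2024, 1, 1, 12, 0)
print(default_start_time(now))         # 2024-01-01 11:30:00
print(batch_is_kept(2 * 1024 * 1024))  # True: batch is written
print(batch_is_kept(4 * 1024 * 1024))  # False: whole batch is discarded
```

Note that the 3 MB cap is all-or-nothing for the collection period: an oversized batch is not truncated but dropped in full.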
Limits on configuration
Item | Description |
---|---|
Number of data import configurations | Up to 100 data import configurations can be created in a single project, regardless of configuration type. If you want to increase the quota, submit a ticket. |
Limits on performance
Item | Description |
---|---|
Number of concurrent subtasks | If you turn on Use Hybrid Cloud Monitoring API in a data import configuration, Log Service creates a subtask for each namespace from which you want to import data. For example, if you specify three namespaces when you create a data import configuration, Log Service creates three subtasks to import data from the namespaces. If you do not turn on Use Hybrid Cloud Monitoring API in a data import configuration, Log Service creates only one task to import metrics from all namespaces. Up to 10 MB/s of import traffic is allowed for each subtask. |
Number of CloudMonitor API calls | |
Number of shards in a Logstore | The write performance of Log Service varies based on the number of shards in a Logstore. A single shard supports a write speed of 5 MB/s. If an import task writes a large volume of data to Log Service, we recommend that you increase the number of shards for the Logstore. For more information, see Manage shards. |
Network | CloudMonitor is deployed in one region, and import tasks pull data over the Internet. Therefore, the network bandwidth fluctuates significantly and varies by region. In most cases, the import traffic ranges from 5 MB/s to 10 MB/s. |
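The performance limits above lend themselves to a quick capacity estimate: how many shards a Logstore needs to absorb the expected write traffic, and what the aggregate pull ceiling is across subtasks. The sketch below uses only the figures from the table (5 MB/s per shard, 10 MB/s per subtask); the function names are ours, not part of any Log Service SDK.

```python
import math

SHARD_WRITE_MBPS = 5    # each Logstore shard sustains about 5 MB/s of writes
SUBTASK_PULL_MBPS = 10  # each import subtask is capped at 10 MB/s

def min_shards(expected_write_mbps: float) -> int:
    """Smallest shard count that can absorb the expected write traffic."""
    return max(1, math.ceil(expected_write_mbps / SHARD_WRITE_MBPS))

def max_pull_mbps(namespace_count: int, hybrid_api: bool) -> int:
    """Aggregate pull ceiling: one subtask per namespace when Use Hybrid
    Cloud Monitoring API is turned on, otherwise a single task."""
    subtasks = namespace_count if hybrid_api else 1
    return subtasks * SUBTASK_PULL_MBPS

print(min_shards(12))           # 3 shards for ~12 MB/s of writes
print(max_pull_mbps(3, True))   # 30 MB/s across three subtasks
print(max_pull_mbps(3, False))  # 10 MB/s with a single task
```

In practice, Internet bandwidth between regions (5 to 10 MB/s in most cases, per the table) may be the tighter constraint, so size shards against the traffic you actually observe.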
Other limits
Item | Description |
---|---|
Data import latency | The latency of data import to Log Service varies based on data collection from data sources, data storage in CloudMonitor, and data reads by Log Service. In normal cases, if you use the Hybrid Cloud Monitoring API to pull data from CloudMonitor, the data latency is approximately 1 minute. If you do not use the Hybrid Cloud Monitoring API, the data latency is approximately 3 minutes. The Data Processing Insight dashboard displays data import latencies. For more information, see View a data import configuration. For metrics in the acs_global_acceleration namespace, the latency of data import from CloudMonitor to Log Service is approximately 10 to 15 minutes because of the long latency of data collection from data sources to CloudMonitor. |