This topic describes the relationship between the number of metrics collected by the Prometheus agent and the CPU and memory resources. In addition, this topic provides suggestions on resource allocation.

Stress test report for the Prometheus agent

| Number of metrics collected by a single agent | CPU | Memory |
| --- | --- | --- |
| 1 million | 0.95 cores | 1.09483 GB |
| 1.1 million | 1.11 cores | 1.16045 GB |
| 1.2 million | 1.36 cores | 1.09452 GB |
| 1.3 million | 1.66 cores | 1.15971 GB |
| 1.4 million | 1.29 cores | 1.09465 GB |
| 1.5 million | 1.50 cores | 1.15977 GB |
| 1.6 million | 1.39 cores | 1.15971 GB |
| 1.7 million | 1.64 cores | 1.1599 GB |
| 1.8 million | 1.63 cores | 1.42331 GB |

You can view the number of metrics collected by each agent on the Grafana dashboard whose name contains Prometheus.

For example, the following figure shows the number of metrics collected for the following PromQL statement:

sum (scrape_samples_scraped) by (_ARMS_AGENT_ID)

(Figure: Prometheus Grafana dashboard showing the number of metrics collected by each agent)
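To clarify what the PromQL statement computes, the sketch below reproduces its aggregation logic in Python: it sums the `scrape_samples_scraped` values of all scrape targets, grouped by the `_ARMS_AGENT_ID` label. The series data is hypothetical and for illustration only; in practice Prometheus evaluates this query over the series it has actually scraped.

```python
from collections import defaultdict

# Hypothetical scrape_samples_scraped series: (labels, value) per target.
series = [
    ({"_ARMS_AGENT_ID": "agent-a", "job": "node"}, 400_000),
    ({"_ARMS_AGENT_ID": "agent-a", "job": "kube-state"}, 100_000),
    ({"_ARMS_AGENT_ID": "agent-b", "job": "node"}, 1_200_000),
]

def sum_by_agent(series):
    """Mimic: sum(scrape_samples_scraped) by (_ARMS_AGENT_ID).

    Groups the per-target sample counts by agent ID and sums them,
    yielding the total number of metrics each agent collects.
    """
    totals = defaultdict(int)
    for labels, value in series:
        totals[labels["_ARMS_AGENT_ID"]] += value
    return dict(totals)

print(sum_by_agent(series))
# → {'agent-a': 500000, 'agent-b': 1200000}
```

With the hypothetical data above, agent-a collects 0.5 million metrics (displayed as 500K on the dashboard) and agent-b collects 1.2 million.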

Suggestions on resource allocation

As the stress test report above shows, the Prometheus agent requires about 1 CPU core and 1 GB of memory to collect 1 million metrics. To ensure that the agent can collect data properly, we recommend that both its CPU utilization and its memory usage stay at or below 50%.

Therefore, we recommend that you allocate CPU and memory resources based on the number of metrics to collect, as follows:

  • If the number of metrics to collect is 0.5 million, which is displayed as 500K on the Grafana dashboard, we recommend that you allocate 1 CPU core and 1 GB memory.
  • If the number of metrics to collect is 1 million, we recommend that you allocate 2 CPU cores and 2 GB memory.
  • If the number of metrics to collect is 2 million, we recommend that you allocate 4 CPU cores and 4 GB memory.
  • Allocate the CPU and memory resources in the same manner for other collection scales.

Example: If the number of metrics that are collected by an agent reaches 1 million on the Grafana dashboard, we recommend that you scale up the CPU and memory resources to 2 cores and 2 GB for the agent.
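The sizing rule above can be sketched as a small helper: allocate 2 CPU cores and 2 GB of memory per 1 million metrics (1 core and 1 GB per 0.5 million), rounding the collection scale up to the next 0.5 million. The function name and the rounding granularity are assumptions made for this illustration, not part of the product.

```python
import math

def recommended_resources(metric_count):
    """Hypothetical helper: recommended (CPU cores, memory GB) for a
    Prometheus agent, following the 2 cores / 2 GB per 1 million
    metrics guideline, with a 1 core / 1 GB floor at 0.5 million.

    The collection scale is rounded up to the next 0.5 million metrics
    (an assumed granularity) before applying the per-million rate.
    """
    millions = max(metric_count, 500_000) / 1_000_000
    units = math.ceil(millions * 2) / 2  # round up to the next 0.5 million
    return units * 2, units * 2  # 2 cores and 2 GB per 1 million metrics

print(recommended_resources(500_000))    # → (1.0, 1.0)
print(recommended_resources(1_000_000))  # → (2.0, 2.0)
print(recommended_resources(2_000_000))  # → (4.0, 4.0)
```

For example, an agent collecting 1.2 million metrics would round up to 1.5 million and receive 3 cores and 3 GB under this assumed rounding scheme.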