Application Real-Time Monitoring Service: Deliver Prometheus monitoring data to ApsaraMQ for Kafka

Last Updated: Mar 11, 2026

Managed Service for Prometheus can stream monitoring data from Prometheus instances to ApsaraMQ for Kafka through data delivery tasks. Once the data reaches Kafka, you can feed it into long-term storage backends, custom alerting pipelines, real-time analytics engines, or data lake ingestion workflows.

Use cases

Delivering Prometheus metrics to Kafka enables several downstream workflows:

  • Real-time analytics -- Feed metrics into stream-processing engines such as Apache Flink or Apache Spark for real-time aggregation and anomaly detection.

  • Long-term storage -- Route metrics to storage systems such as ClickHouse or Snowflake for historical analysis beyond the default Prometheus retention period.

  • Custom alerting -- Build alerting pipelines that evaluate complex rules outside Prometheus, using Kafka consumers or Kafka Connect.

  • Data lake ingestion -- Stream metrics into a data warehouse (for example, Databricks or MaxCompute) for cross-system correlation and machine learning workloads.

Supported instance types

Not all Prometheus instance types support data delivery. The following table lists the supported types and their restrictions.

  • Prometheus for Alibaba Cloud services -- Free instances only. Instances whose names start with cloud-product-prometheus are excluded.

  • Prometheus for container services -- No restrictions.

  • Prometheus for application monitoring -- No restrictions.

  • Prometheus for Flink Serverless -- No restrictions.

  • Prometheus for Kubernetes -- No restrictions.

  • General-purpose Prometheus instance -- Instances that report data through OpenTelemetry endpoints are excluded.

Networking requirements

If the Prometheus instance and the Kafka instance reside in different VPCs, you must add the vSwitch CIDR block of the Kafka instance's VPC to the whitelist of the Prometheus instance. Otherwise, the network connection fails.

To find the vSwitch CIDR block, open the vSwitch page in the VPC console.


Data format

Prometheus monitoring data is serialized to JSON arrays and compressed with Snappy before delivery. Consumers must decompress the payload before parsing the JSON content.

Each record in the JSON array contains the metric name, labels, value, and timestamp. See Consume the delivered data for the full schema and a code example.
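The decompress-then-parse contract can be sketched as follows. This is a minimal illustration, not part of the service: the class name PayloadFormatDemo and the simulated payload are hypothetical, and it assumes the snappy-java library (org.xerial.snappy), the same dependency used by the consumer example later on this page.

```java
import java.nio.charset.StandardCharsets;

import org.xerial.snappy.Snappy;

public class PayloadFormatDemo {

    // Decompress a delivered payload into the JSON array text.
    public static String decode(byte[] compressed) throws Exception {
        return new String(Snappy.uncompress(compressed), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        // Simulate a delivered payload: a JSON array of metric samples,
        // Snappy-compressed before delivery to Kafka.
        String json = "[{\"__name__\":\"up\",\"value\":\"1.0\",\"timestamp\":\"1698732988354\"}]";
        byte[] payload = Snappy.compress(json.getBytes(StandardCharsets.UTF_8));

        // A consumer must decompress before parsing the JSON content.
        System.out.println(decode(payload)); // prints the original JSON array
    }
}
```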

Prerequisites

Before you begin, make sure that you have:

  • A Prometheus instance of a supported type. See Supported instance types.

  • An ApsaraMQ for Kafka instance and a topic to receive the data. If the Kafka instance resides in a different VPC, complete the steps in Networking requirements.

Note

Data delivery relies on EventBridge, which has been commercially available since June 3, 2025. For billing details, see EventBridge billing.

Create a data delivery task

  1. Log on to the ARMS console.

  2. In the left navigation pane, choose Managed Service for Prometheus > Data Delivery.

  3. In the top navigation bar of the Data Delivery page, select a region and click Create Task.

  4. In the dialog box, enter a Task Name and Task Description, and then click OK.

  5. On the Edit Task page, configure the data source and the event target:

    1. Click + Add Data Source, configure the following parameters, and then click OK.

      • Prometheus Instance -- The Prometheus instance to deliver data from. Example: c78cb8273c02*****

      • Data Filtering -- Filter metrics by label using regular expressions. Separate multiple conditions with line breaks. All conditions are evaluated with logical AND. Example:

        __name__=AliyunEcs_CPUUtilization|AliyunEcs_memory_usedutilization
        regionId=cn-hangzhou
        id=i-2ze0mxp.*

      • Data Labeling -- Custom labels to attach to the delivered metric data. Separate multiple labels with line breaks. Example:

        deliver_test_key1=ssss
        deliver_test_key2=yyyy
    2. Click Add Target, set Destination Type to ApsaraMQ for Kafka, configure the connection details, and then click OK.

  6. Click OK, and then click Save.
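The AND semantics of the Data Filtering conditions described above can be sketched in plain Java. This is an illustrative model only: the class FilterDemo is hypothetical, the service's actual matching logic is not exposed, and this sketch assumes each regular expression must fully match the label value.

```java
import java.util.Map;
import java.util.regex.Pattern;

public class FilterDemo {

    // Return true only if every "label=regex" condition matches the
    // corresponding label value. Conditions are combined with logical AND.
    public static boolean matches(Map<String, String> labels, String[] conditions) {
        for (String cond : conditions) {
            int eq = cond.indexOf('=');
            String key = cond.substring(0, eq);
            String regex = cond.substring(eq + 1);
            String value = labels.get(key);
            if (value == null || !Pattern.matches(regex, value)) {
                return false; // one failed condition rejects the sample
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Labels of one metric sample, using the examples from the table above.
        Map<String, String> labels = Map.of(
            "__name__", "AliyunEcs_CPUUtilization",
            "regionId", "cn-hangzhou",
            "id", "i-2ze0mxp12345");
        String[] conditions = {
            "__name__=AliyunEcs_CPUUtilization|AliyunEcs_memory_usedutilization",
            "regionId=cn-hangzhou",
            "id=i-2ze0mxp.*"};
        System.out.println(matches(labels, conditions)); // true: all three conditions match
    }
}
```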

Consume the delivered data

Monitoring data is compressed with Snappy before delivery to ApsaraMQ for Kafka. Decompress the payload before processing.

View data in the ApsaraMQ for Kafka console

  1. Log on to the ApsaraMQ for Kafka console.

  2. In the Resource Distribution section of the Overview page, select the region of your Kafka instance.

  3. On the Instances page, click the name of your instance.

  4. In the left navigation pane, click Topics. Find your topic and click Details in the Actions column.

  5. Click the CloudMonitor or Message Query tab to view the delivered monitoring data.


Consume data programmatically

This example uses a Java Kafka consumer to decompress and parse the delivered monitoring data.

  1. Initialize a Kafka consumer. For setup instructions, see Use a single consumer to subscribe to messages.

  2. Add the following code to KafkaConsumerDemo.java. The consumer polls for messages, decompresses each payload with Snappy, and prints the JSON output.

       public static void main(String[] args) {
           // Requires org.apache.kafka.clients.consumer.* and org.xerial.snappy.Snappy
           // on the classpath. Initialize the Kafka consumer first (see step 1).
           while (true) {
               try {
                   ConsumerRecords<String, byte[]> records = consumer.poll(1000);

                   // Process all records before the next poll. The total processing
                   // time must not exceed SESSION_TIMEOUT_MS_CONFIG. For production
                   // workloads, use a thread pool for asynchronous processing.
                   for (ConsumerRecord<String, byte[]> record : records) {
                       byte[] compressedData = record.value();

                       // Decompress the Snappy-compressed payload.
                       byte[] data = Snappy.uncompress(compressedData);

                       System.out.println(new String(data));
                   }
               } catch (Exception e) {
                   e.printStackTrace();
                   try {
                       // Back off briefly before retrying.
                       Thread.sleep(1000);
                   } catch (Throwable ignore) {
                   }
               }
           }
       }
  3. Compile and run KafkaConsumerDemo.java. The output is a JSON array where each element represents a metric sample:

       [
         {
           "__name__": "apiserver_admission_controller_admission_duration_seconds_bucket",
           "instance": "*****",
           "pod": "*****",
           "namespace": "default",
           "service": "kubernetes",
           "job": "apiserver",
           "endpoint": "http-metrics",
           "operation": "UPDATE",
           "type": "validate",
           "rejected": "false",
           "name": "*****",
           "le": "2.5",
           "value": "675.0",
           "timestamp": "1698732988354"
         }
       ]

    Each JSON object contains the following fields:

     • __name__ -- The Prometheus metric name.

     • instance -- The instance that reported the metric.

     • namespace -- The Kubernetes namespace (if applicable).

     • job -- The Prometheus scrape job name.

     • value -- The metric value at the time of collection.

     • timestamp -- The Unix timestamp in milliseconds when the metric was collected.

     • Other fields -- Prometheus labels attached to the metric (for example, pod, service, endpoint).
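Reading individual fields out of one delivered record can be sketched as follows. The class FieldExtractDemo and the sample record are hypothetical; because the schema is flat and every value is a string, a regular expression suffices for illustration, but production code should use a real JSON library such as Jackson or Gson.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FieldExtractDemo {

    // Extract the string value of one field from a single flat JSON object.
    public static String field(String json, String name) {
        Matcher m = Pattern
            .compile("\"" + Pattern.quote(name) + "\"\\s*:\\s*\"([^\"]*)\"")
            .matcher(json);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        // One element of the delivered JSON array (values abbreviated).
        String record = "{\"__name__\":\"up\",\"job\":\"apiserver\","
            + "\"value\":\"675.0\",\"timestamp\":\"1698732988354\"}";

        System.out.println(field(record, "__name__")); // up
        System.out.println(field(record, "value"));    // 675.0

        // The timestamp is Unix milliseconds; convert it for display.
        long ts = Long.parseLong(field(record, "timestamp"));
        System.out.println(java.time.Instant.ofEpochMilli(ts)); // 2023-10-31T06:16:28.354Z
    }
}
```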