Managed Service for Prometheus can stream monitoring data from Prometheus instances to ApsaraMQ for Kafka through data delivery tasks. Once the data reaches Kafka, you can feed it into long-term storage backends, custom alerting pipelines, real-time analytics engines, or data lake ingestion workflows.
Use cases
Delivering Prometheus metrics to Kafka enables several downstream workflows:
Real-time analytics -- Feed metrics into stream-processing engines such as Apache Flink or Apache Spark for real-time aggregation and anomaly detection.
Long-term storage -- Route metrics to storage systems such as ClickHouse or Snowflake for historical analysis beyond the default Prometheus retention period.
Custom alerting -- Build alerting pipelines that evaluate complex rules outside Prometheus, using Kafka consumers or Kafka Connect.
Data lake ingestion -- Stream metrics into a data warehouse (for example, Databricks or MaxCompute) for cross-system correlation and machine learning workloads.
Supported instance types
Not all Prometheus instance types support data delivery. The following table lists the supported types and their restrictions.
| Instance type | Restriction |
|---|---|
| Prometheus for Alibaba Cloud services | Free instances only. Instances whose names start with cloud-product-prometheus are excluded. |
| Prometheus for container services | No restrictions |
| Prometheus for application monitoring | No restrictions |
| Prometheus for Flink Serverless | No restrictions |
| Prometheus for Kubernetes | No restrictions |
| General-purpose Prometheus instance | Instances that report data through OpenTelemetry endpoints are excluded. |
Networking requirements
If the Prometheus instance and the Kafka instance reside in different VPCs, you must add the CIDR block of the target VPC's vSwitch to the Prometheus instance whitelist. Otherwise, the network connection may fail.
To find the vSwitch CIDR block, open the vSwitch page in the VPC console.

Data format
Prometheus monitoring data is serialized to JSON arrays and compressed with Snappy before delivery. Consumers must decompress the payload before parsing the JSON content.
Each record in the JSON array contains the metric name, labels, value, and timestamp. See Consume the delivered data for the full schema and a code example.
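As an illustration of the record structure described above, the following sketch models one delivered sample in Java and converts its millisecond timestamp with java.time. The class and field names are hypothetical, not part of the delivery schema.

```java
import java.time.Instant;
import java.util.Map;

// Hypothetical holder for one delivered sample; fields mirror the schema above.
record MetricSample(String name, Map<String, String> labels, double value, long timestampMs) {
    // The "timestamp" field is a Unix timestamp in milliseconds.
    Instant collectedAt() {
        return Instant.ofEpochMilli(timestampMs);
    }
}

public class SampleDemo {
    public static void main(String[] args) {
        MetricSample sample = new MetricSample(
            "AliyunEcs_CPUUtilization",
            Map.of("regionId", "cn-hangzhou"),
            42.5,
            1698732988354L);
        System.out.println(sample.collectedAt()); // 2023-10-31T06:16:28.354Z
    }
}
```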
Prerequisites
Before you begin, make sure that you have:
A Prometheus instance of a supported type. For setup instructions, see the documentation for your instance type.
An ApsaraMQ for Kafka instance with at least one topic. See Get started with ApsaraMQ for Kafka.
EventBridge activated. See Activate EventBridge and grant permissions.
Data delivery relies on EventBridge, which has been commercially available since June 3, 2025. For billing details, see EventBridge billing.
Create a data delivery task
Log on to the ARMS console.
In the left navigation pane, choose Managed Service for Prometheus > Data Delivery.
In the top navigation bar of the Data Delivery page, select a region and click Create Task.
In the dialog box, enter a Task Name and Task Description, and then click OK.
On the Edit Task page, configure the data source and the event target:
Click + Add Data Source, configure the following parameters, and then click OK.
| Parameter | Description | Example |
|---|---|---|
| Prometheus Instance | The Prometheus instance to deliver data from. | c78cb8273c02***** |
| Data Filtering | Filter metrics by label using regular expressions. Separate multiple conditions with line breaks. All conditions are evaluated with logical AND. | __name__=AliyunEcs_CPUUtilization\|AliyunEcs_memory_usedutilization<br>regionId=cn-hangzhou<br>id=i-2ze0mxp.* |
| Data Labeling | Custom labels to attach to the delivered metric data. Separate multiple labels with line breaks. | deliver_test_key1=ssss<br>deliver_test_key2=yyyy |

Click Add Target, set Destination Type to ApsaraMQ for Kafka, configure the connection details, and then click OK.
Click OK, and then click Save.
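The Data Filtering conditions described above are label-to-regex pairs joined with logical AND. As a minimal sketch of that matching semantics (this is a hypothetical re-implementation for illustration, not the service's actual code):

```java
import java.util.List;
import java.util.Map;
import java.util.regex.Pattern;

public class FilterDemo {
    // Each condition has the form label=regex. A sample passes the filter only
    // if every condition fully matches the corresponding label value (logical AND).
    static boolean matches(Map<String, String> labels, List<String> conditions) {
        for (String condition : conditions) {
            String[] parts = condition.split("=", 2);
            String value = labels.get(parts[0]);
            if (value == null || !Pattern.matches(parts[1], value)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Labels of a hypothetical sample, matched against the example conditions.
        Map<String, String> labels = Map.of(
            "__name__", "AliyunEcs_CPUUtilization",
            "regionId", "cn-hangzhou",
            "id", "i-2ze0mxp12345");
        List<String> conditions = List.of(
            "__name__=AliyunEcs_CPUUtilization|AliyunEcs_memory_usedutilization",
            "regionId=cn-hangzhou",
            "id=i-2ze0mxp.*");
        System.out.println(matches(labels, conditions)); // true
    }
}
```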
Consume the delivered data
Monitoring data is compressed with Snappy before delivery to ApsaraMQ for Kafka. Decompress the payload before processing.
View data in the ApsaraMQ for Kafka console
Log on to the ApsaraMQ for Kafka console.
In the Resource Distribution section of the Overview page, select the region of your Kafka instance.
On the Instances page, click the name of your instance.
In the left navigation pane, click Topics. Find your topic and click Details in the Actions column.
Click the CloudMonitor or Message Query tab to view the delivered monitoring data.

Consume data programmatically
This example uses a Java Kafka consumer to decompress and parse the delivered monitoring data.
Initialize a Kafka consumer. For setup instructions, see Use a single consumer to subscribe to messages.
Add the following code to KafkaConsumerDemo.java. The consumer polls for messages, decompresses each payload with Snappy, and prints the JSON output.

```java
public static void main(String[] args) {
    // Initialize the Kafka consumer first.
    while (true) {
        try {
            ConsumerRecords<String, byte[]> records = consumer.poll(1000);
            // Process all records before the next poll.
            // The total processing time must not exceed SESSION_TIMEOUT_MS_CONFIG.
            // For production workloads, use a thread pool for asynchronous processing.
            for (ConsumerRecord<String, byte[]> record : records) {
                byte[] compressedData = record.value();
                // Decompress the Snappy-compressed payload.
                byte[] data = Snappy.uncompress(compressedData);
                System.out.println(new String(data));
            }
        } catch (Exception e) {
            try {
                Thread.sleep(1000);
            } catch (Throwable ignore) {
            }
            e.printStackTrace();
        }
    }
}
```

Compile and run KafkaConsumerDemo.java. The output is a JSON array where each element represents a metric sample:

```json
[
  {
    "__name__": "apiserver_admission_controller_admission_duration_seconds_bucket",
    "instance": "*****",
    "pod": "*****",
    "namespace": "default",
    "service": "kubernetes",
    "job": "apiserver",
    "endpoint": "http-metrics",
    "operation": "UPDATE",
    "type": "validate",
    "rejected": "false",
    "name": "*****",
    "le": "2.5",
    "value": "675.0",
    "timestamp": "1698732988354"
  }
]
```

Each JSON object contains the following fields:
| Field | Description |
|---|---|
| __name__ | The Prometheus metric name. |
| instance | The instance that reported the metric. |
| namespace | The Kubernetes namespace (if applicable). |
| job | The Prometheus scrape job name. |
| value | The metric value at the time of collection. |
| timestamp | The Unix timestamp in milliseconds when the metric was collected. |
| Other fields | Prometheus labels attached to the metric (for example, pod, service, endpoint). |