To export data from a Prometheus instance for custom processing, you can use the Prometheus data shipping service. This topic describes how to ship data from a Prometheus instance to ApsaraMQ for Kafka for further processing.
Prerequisites
A Prometheus instance is connected. For more information, see the following topics:
An ApsaraMQ for Kafka instance is deployed as the destination, and the required resources, such as topics, are created. For more information, see Overview.
EventBridge is activated. For more information, see Activate EventBridge and grant permissions to a RAM user.
Note: This feature requires EventBridge, which has been commercially available since June 3, 2025. For billing details, see EventBridge fees.
Limitations
If data is delivered within a virtual private cloud (VPC) and the VPC where the Prometheus instance resides is different from the destination VPC, make sure that the CIDR block of the vSwitch in the destination VPC is added to the whitelist of the Prometheus instance. Otherwise, the network connection may fail.
You can obtain the CIDR block of the vSwitch on the vSwitch page in the VPC console.

The following Prometheus instance types support data delivery:
Prometheus for Alibaba Cloud services: Free instances are supported, except instances whose names start with cloud-product-prometheus.
Prometheus for container services: No additional restrictions.
Prometheus for Flink Serverless: No additional restrictions.
Prometheus for Kubernetes: No additional restrictions.
General-purpose Prometheus instances: Supported, except instances whose data is reported through OpenTelemetry endpoints.
Only real-time data generated after you create a delivery task can be exported. Delivering historical data is not supported.
Step 1: Create a shipping task
Log on to the Managed Service for Prometheus console.
In the left navigation pane, click Data Delivery.
On the Data Delivery page, select a region in the top navigation bar and click Create Task.
In the dialog box that appears, set the Task Name and Task Description parameters, and click OK.
On the Edit Task page, configure the data source and event target.
Click + Add Data Source, set the parameters, and then click OK. The parameters are described as follows.

Parameter: Prometheus Instance
Description: The Prometheus instance whose data you want to deliver.
Example: c78cb8273c02*****

Parameter: Data Filtering
Description: The labels that are used to filter the metrics to be delivered. Regular expressions are supported. Separate multiple conditions with line breaks. Data is delivered only when all of the conditions are met (logical AND).
Example:
__name__=AliyunEcs_CPUUtilization|AliyunEcs_memory_usedutilization
regionId=cn-hangzhou
id=i-2ze0mxp.*

Parameter: Data Labeling
Description: The labels that are added to the metric data to be delivered. Separate multiple labels with line breaks.
Example:
deliver_test_key1=ssss
deliver_test_key2=yyyy

Click Add Destination. Set Destination Type to ApsaraMQ for Kafka, enter the other required information, and click OK.
On the Edit Task page, click OK, and then click Save.
Step 2: View the Prometheus monitoring data
To reduce the load on the destination, Prometheus monitoring data is converted into the JSON array format and compressed by using Snappy before it is shipped to Kafka. For more information, see Snappy compression format.
Method 1: View data in the console
Log on to the ApsaraMQ for Kafka console.
In the Resource Distribution section of the Overview page, select the region where the ApsaraMQ for Kafka instance that you want to manage resides.
On the Instances page, click the name of the instance that you want to manage.
In the navigation pane on the left, click Topic Management. Find the destination topic and click Details in the Actions column. On the CloudMonitor or Message Query tab, you can view the imported data.

Method 2: View data using a client
Initialize a Kafka client. For more information, see Subscribe to messages as a single consumer.
Add the following code to the KafkaConsumerDemo.java file. The example shows how to consume and decompress data by using Snappy after the Kafka client is initialized:

public static void main(String[] args) {
    // Initialize the Kafka consumer first. Requires the Kafka client library and
    // snappy-java (org.xerial.snappy.Snappy) on the classpath.
    while (true) {
        try {
            ConsumerRecords<String, byte[]> records = consumer.poll(1000);
            // You must consume these records before the next poll. The total time cannot exceed SESSION_TIMEOUT_MS_CONFIG.
            // We recommend that you use a separate thread pool to consume messages and return the results asynchronously.
            for (ConsumerRecord<String, byte[]> record : records) {
                byte[] compressedData = record.value();
                // Decompress the Snappy-compressed payload into the original JSON array.
                byte[] data = Snappy.uncompress(compressedData);
                System.out.println(new String(data));
            }
        } catch (Exception e) {
            try {
                Thread.sleep(1000);
            } catch (Throwable ignore) {
            }
            e.printStackTrace();
        }
    }
}
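In the preceding code, consumer is the Kafka consumer that you created in the initialization step, and the Snappy class comes from the open source snappy-java library (org.xerial.snappy), which must be on the classpath. If you have not initialized the consumer yet, the following is a minimal sketch; the endpoint, consumer group, and topic name are placeholders that you must replace with the values of your ApsaraMQ for Kafka instance and the topic that you configured as the delivery destination:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Placeholder values for illustration only.
static KafkaConsumer<String, byte[]> createConsumer() {
    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "alikafka-pre-cn-xxx-1-vpc.alikafka.aliyuncs.com:9092");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "prometheus-delivery-consumer");
    props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);
    // Record values are Snappy-compressed bytes, so use the byte array deserializer.
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.ByteArrayDeserializer");
    KafkaConsumer<String, byte[]> consumer = new KafkaConsumer<>(props);
    consumer.subscribe(Collections.singletonList("your-delivery-topic"));
    return consumer;
}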
Compile and run the KafkaConsumerDemo.java file. Metric data similar to the following is returned in the JSON format:

[
  {"instance": "*****", "pod": "*****", "rejected": "false", "type": "validate", "pod_name": "*****", "endpoint": "http-metrics", "__name__": "apiserver_admission_controller_admission_duration_seconds_bucket", "service": "kubernetes", "name": "*****", "namespace": "default", "le": "2.5", "job": "apiserver", "operation": "UPDATE", "value": "675.0", "timestamp": "1698732988354"},
  {"instance": "*****", "pod": "*****", "rejected": "false", "type": "validate", "pod_name": "*****", "endpoint": "http-metrics", "__name__": "apiserver_admission_controller_admission_duration_seconds_bucket", "service": "kubernetes", "name": "*****", "namespace": "default", "le": "+Inf", "job": "apiserver", "operation": "UPDATE", "value": "675.0", "timestamp": "1698732988354"},
  {"instance": "*****", "pod": "*****", "rejected": "false", "type": "validate", "pod_name": "*****", "endpoint": "http-metrics", "__name__": "apiserver_admission_controller_admission_duration_seconds_bucket", "service": "kubernetes", "name": "*****", "namespace": "default", "le": "0.005", "job": "apiserver", "operation": "UPDATE", "value": "1037.0", "timestamp": "1698732988519"},
  {"instance": "*****", "pod": "*****", "rejected": "false", "type": "validate", "pod_name": "*****", "endpoint": "http-metrics", "__name__": "apiserver_admission_controller_admission_duration_seconds_bucket", "service": "kubernetes", "name": "*****", "namespace": "default", "le": "0.025", "job": "apiserver", "operation": "UPDATE", "value": "1037.0", "timestamp": "1698732988519"}
]
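If you want to further process the samples instead of only printing them, note that each element of the decompressed JSON array carries the metric labels together with the value and timestamp fields as strings. The following is a minimal parsing sketch that assumes the Jackson library (com.fasterxml.jackson.databind) is available; any JSON parser can be used instead:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

// 'data' is the decompressed byte array produced in the consumer loop shown above.
static void processPayload(byte[] data) throws java.io.IOException {
    ObjectMapper mapper = new ObjectMapper();
    JsonNode samples = mapper.readTree(data);
    for (JsonNode sample : samples) {
        String metricName = sample.path("__name__").asText();
        double value = Double.parseDouble(sample.path("value").asText());
        long timestamp = Long.parseLong(sample.path("timestamp").asText());
        System.out.printf("%s = %s at %d%n", metricName, value, timestamp);
    }
}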