
Realtime Compute for Apache Flink:Customize metric reporters

Last Updated: Mar 26, 2026

Realtime Compute for Apache Flink reports monitoring metrics to the Flink development console by default. To route these metrics to an external system such as Prometheus, Simple Log Service (SLS), or Kafka, add a metrics reporter configuration to your deployment's running parameters. You can also report to multiple channels simultaneously.

Usage notes

  • Console metrics are disabled when you report exclusively to external channels. If metrics.reporters does not include jmx,promappmgr, the Flink development console stops displaying metric curves, Application Real-Time Monitoring Service (ARMS) and CloudMonitor (CMS) are not enabled, existing alert configurations in the console become invalid, and you cannot create new valid alert configurations in the Flink console. Configure alerting directly on the target platform instead.

  • Multi-channel reporting incurs additional collection costs. Each additional reporter increases the metrics collection overhead. For details, see Report to multiple channels.

Prerequisites

Before you begin, ensure that you have:

  • A running deployment in the Flink development console

  • Network connectivity between the Flink workspace and the target reporting system (for Prometheus and Kafka)

Report to a self-managed Prometheus instance

Flink pushes metrics to a Prometheus Pushgateway at the interval you configure. Confirm that the Pushgateway is reachable from the Flink workspace before proceeding.
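Reachability can be verified with a plain TCP probe before you touch any Flink configuration. The sketch below is illustrative only; the host name in the example is hypothetical, and it assumes nothing about your Pushgateway beyond its host and port.

```python
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe a Pushgateway on its default port (9091).
# can_reach("pushgateway.example.internal", 9091)
```

Run a check like this from a host on the same network path as the Flink workspace; a success from your laptop does not prove the workspace can reach the Pushgateway.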

On the Deployment Details tab of your job, go to Parameter Settings > Other Configurations and add the following configuration. For instructions on editing running parameters, see How do I configure custom job running parameters?

metrics.reporters: promgatewayappmgr
metrics.reporter.promgatewayappmgr.factory.class: org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporterFactory
metrics.reporter.promgatewayappmgr.host: <your-pushgateway-host>
metrics.reporter.promgatewayappmgr.port: <your-pushgateway-port>
metrics.reporter.promgatewayappmgr.jobName: '{{deploymentName}}'
metrics.reporter.promgatewayappmgr.groupingKey: 'deploymentName={{deploymentName}};deploymentId={{deploymentId}};jobId={{jobId}}'

Replace <your-pushgateway-host> and <your-pushgateway-port> with the actual host and port of your Pushgateway instance. The system automatically substitutes {{deploymentName}}, {{deploymentId}}, and {{jobId}} at runtime.
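The substitution can be pictured with a short sketch. The values below are hypothetical; at runtime the platform injects the real deployment name and IDs. The sketch also shows how the resulting groupingKey decomposes into the key=value pairs that Pushgateway uses as grouping labels.

```python
import re

# Hypothetical runtime values; the platform supplies the real ones.
context = {
    "deploymentName": "my-flink-job",
    "deploymentId": "dep-1234",
    "jobId": "job-5678",
}

def substitute(template: str, ctx: dict) -> str:
    """Replace {{ name }} placeholders; whitespace inside the braces is tolerated."""
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", lambda m: ctx[m.group(1)], template)

grouping_key = substitute(
    "deploymentName={{deploymentName}};deploymentId={{deploymentId}};jobId={{jobId}}",
    context,
)
# Split the semicolon-separated key=value pairs into a label mapping.
labels = dict(pair.split("=", 1) for pair in grouping_key.split(";"))
# labels == {"deploymentName": "my-flink-job",
#            "deploymentId": "dep-1234", "jobId": "job-5678"}
```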

Network requirements:

  • Prometheus and the Flink workspace are in the same VPC: confirm that the Prometheus security group allows inbound traffic from the Flink CIDR block.

  • Prometheus has a public IP and the Flink workspace is in a different VPC: configure public network access for the Flink workspace. See How do I access the internet?

  • Prometheus has a VPC-only IP and the Flink workspace is in a different VPC: connect the two VPCs. See How do I access other services across VPCs?

Report to Simple Log Service (SLS)

On the Deployment Details tab of your job, go to Parameter Settings > Other Configurations and add the following configuration. For instructions, see How do I configure custom job running parameters?

metrics.reporters: sls
metrics.reporter.sls.factory.class: org.apache.flink.metrics.sls.SLSReporterFactory
metrics.reporter.sls.endPoint: <your-endpoint>
metrics.reporter.sls.project: <your-project>
metrics.reporter.sls.logStore: <your-logstore>
metrics.reporter.sls.accessId: <your-access-key-id>
metrics.reporter.sls.accessKey: <your-access-key-secret>
metrics.reporter.sls.extraTags: deploymentId={{ deploymentId }};deploymentName={{ deploymentName }};namespace={{ namespace }}

The system automatically substitutes {{ deploymentId }}, {{ deploymentName }}, and {{ namespace }} at runtime. To get your AccessKey ID and AccessKey secret, see How do I view my AccessKey ID and AccessKey secret?
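The extraTags value is a semicolon-separated list of key=value pairs. A small sketch (the helper names are hypothetical, for illustration only) shows how a tag mapping serializes into that format and back:

```python
def build_extra_tags(tags: dict) -> str:
    """Serialize a tag mapping into the semicolon-separated key=value
    format used by metrics.reporter.sls.extraTags."""
    return ";".join(f"{k}={v}" for k, v in tags.items())

def parse_extra_tags(value: str) -> dict:
    """Inverse of build_extra_tags: split key=value pairs on semicolons."""
    return dict(pair.split("=", 1) for pair in value.split(";"))

tags = {"deploymentId": "dep-1234", "deploymentName": "my-flink-job", "namespace": "default"}
encoded = build_extra_tags(tags)
# encoded == "deploymentId=dep-1234;deploymentName=my-flink-job;namespace=default"
assert parse_extra_tags(encoded) == tags
```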

Parameter reference:

  • endPoint: SLS service endpoint for the region where your project is located.

  • project: name of the SLS project.

  • logStore: name of the Logstore (or the MetricStore name, if reporting to a MetricStore).

  • accessId: AccessKey ID used for authentication.

  • accessKey: AccessKey secret used for authentication.

Report to a MetricStore

To write metrics to an SLS MetricStore instead of a Logstore, add the following parameters and set logStore to the MetricStore name:

metrics.reporter.sls.toMetricStore: true
metrics.reporter.sls.logStore: <your-metricstore-name>

Report to Kafka

On the Deployment Details tab of your job, go to Parameter Settings > Other Configurations and add the following configuration. For instructions, see How do I configure custom job running parameters?

metrics.reporters: monitor
metrics.reporter.monitor.factory.class: org.apache.flink.metrics.monitor.KafkaReporterFactory
metrics.reporter.monitor.kafka.bootstrap.servers: <your-bootstrap-servers>
metrics.reporter.monitor.topicName: <your-topic-name>
metrics.reporter.monitor._FLINK_CLUSTER_NAME: '{{ deploymentName }}'
metrics.reporter.monitor._JOB_NAME: '{{ deploymentName }}'
metrics.reporter.monitor._NAMESPACE_NAME: '{{ namespace }}'

The system automatically substitutes {{ deploymentName }} and {{ namespace }} at runtime.

Parameter reference:

  • kafka.bootstrap.servers: comma-separated list of Kafka broker addresses.

  • topicName: target Kafka topic for the metric data.

  • _FLINK_CLUSTER_NAME: cluster name label attached to each metric record.

  • _JOB_NAME: job name label attached to each metric record.

  • _NAMESPACE_NAME: namespace label attached to each metric record.

Report to multiple channels

Reporting to multiple channels lets you keep metrics visible in the Flink development console while also sending them to an external system. This incurs additional collection costs.

Report to the Flink console and SLS simultaneously

Including jmx,promappmgr in metrics.reporters keeps the Flink development console active. Add the reporter names as a comma-separated list, then include the configuration for each additional reporter.

The following example reports to both the Flink development console and SLS:

metrics.reporters: jmx,promappmgr,sls
metrics.reporter.sls.factory.class: org.apache.flink.metrics.sls.SLSReporterFactory
metrics.reporter.sls.endPoint: <your-endpoint>
metrics.reporter.sls.project: <your-project>
metrics.reporter.sls.logStore: <your-logstore>
metrics.reporter.sls.accessId: <your-access-key-id>
metrics.reporter.sls.accessKey: <your-access-key-secret>
metrics.reporter.sls.extraTags: deploymentId={{ deploymentId }};deploymentName={{ deploymentName }};namespace={{ namespace }}

When metrics.reporters includes jmx,promappmgr, the Flink development console continues to display metric curves and alert configurations remain valid. For SLS parameter details, see Report to Simple Log Service (SLS).
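The pattern generalizes: reporter names are comma-joined in metrics.reporters, and each reporter's options carry the metrics.reporter.<name>. prefix. A small generator (a hypothetical helper, illustrative only) makes the naming convention explicit:

```python
def render_reporters(reporters: dict) -> str:
    """Render a metrics.reporters line plus one
    metrics.reporter.<name>.<key>: <value> line per reporter option."""
    lines = [f"metrics.reporters: {','.join(reporters)}"]
    for name, options in reporters.items():
        for key, value in options.items():
            lines.append(f"metrics.reporter.{name}.{key}: {value}")
    return "\n".join(lines)

config = render_reporters({
    "jmx": {},
    "promappmgr": {},
    "sls": {"factory.class": "org.apache.flink.metrics.sls.SLSReporterFactory"},
})
# First line: "metrics.reporters: jmx,promappmgr,sls"
```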

Report to SLS and Kafka simultaneously (outside the Flink console)

To route metrics to two external systems without the Flink console, combine the reporter names and their configurations. The Flink development console will not display metrics in this setup—view them on the target platforms instead. See Usage notes for the full list of impacts.

metrics.reporters: sls,monitor
metrics.reporter.sls.factory.class: org.apache.flink.metrics.sls.SLSReporterFactory
metrics.reporter.sls.endPoint: <your-endpoint>
metrics.reporter.sls.project: <your-project>
metrics.reporter.sls.logStore: <your-logstore>
metrics.reporter.sls.accessId: <your-access-key-id>
metrics.reporter.sls.accessKey: <your-access-key-secret>
metrics.reporter.sls.extraTags: deploymentId={{ deploymentId }};deploymentName={{ deploymentName }};namespace={{ namespace }}
metrics.reporter.monitor.factory.class: org.apache.flink.metrics.monitor.KafkaReporterFactory
metrics.reporter.monitor.kafka.bootstrap.servers: <your-bootstrap-servers>
metrics.reporter.monitor.topicName: <your-topic-name>
metrics.reporter.monitor._FLINK_CLUSTER_NAME: '{{ deploymentName }}'
metrics.reporter.monitor._JOB_NAME: '{{ deploymentName }}'
metrics.reporter.monitor._NAMESPACE_NAME: '{{ namespace }}'

For parameter details, see Report to Simple Log Service (SLS) and Report to Kafka.

Integrate metrics into a self-managed platform using ARMS APIs

If you selected Prometheus Service when creating your workspace, use ARMS APIs to retrieve Flink metrics and integrate them into your own platform. This approach keeps metric curves and alert configurations active in the Flink development console while giving you programmatic access to the raw data.

For ARMS API details, see API overview. For operator-level metrics, see Operator metrics.

What's next