Prometheus Monitoring RocketMQ Best Practices

Among the more than 50 cloud products integrated with Prometheus, RocketMQ is particularly representative: its observability capabilities are among the most complete.

How RocketMQ Integrates with Prometheus

RocketMQ was born in Alibaba's core internal e-commerce systems and is the preferred MQ platform for business messaging. The figure above shows the overall architecture of RocketMQ 5.0. It brings major improvements to the access layer, core components, and underlying operations, and offers diverse features, high performance, high reliability, observability, and ease of operation and maintenance.

Metrics, Tracing and Logging are the three pillars of observability.

1. Metrics: RocketMQ provides users with an out-of-the-box dashboard based on Prometheus + Grafana, a product combination widely used in the open source community. The metrics cover message volume, backlog size, time spent in each stage, and more. The dashboard incorporates best-practice templates refined through the RocketMQ team's years of R&D and operations experience in the messaging field, and is continuously iterated and updated.

2. Tracing: RocketMQ adopts the open source OpenTelemetry tracing standard for the first time and redesigns the abstract span topology around the message dimension.

3. Logging: Client logs have been standardized, making it easier to locate problems through logs.

All of RocketMQ's observability data revolves around the life cycle of a message across the production, server-side processing, and consumption phases. From the message life cycle diagram, you can see how long it takes for a message to travel from the producer to the MQ server; for a scheduled message, the Ready time tells you when it becomes available for delivery. From the consumer's perspective, the interval from the start of the pull to the message's arrival at the client is the network time; the interval from arrival at the client to the start of processing is the time spent waiting for processing resources; and the interval from the start of processing to the final ACK is the message processing time. Being able to clearly define and observe a message at every stage of its life cycle is the core idea behind RocketMQ's observability.

The RocketMQ exporter contributed by the RocketMQ team has been accepted into Prometheus's official open source exporter ecosystem and provides rich monitoring metrics for brokers, producers, and consumers at each stage. The basic logic of the exporter is to start several scheduled tasks that periodically pull data from the MQ cluster, normalize it, and expose it to Prometheus through an HTTP endpoint. The MQAdminExt class encapsulates the interfaces exposed by MQAdmin. Structurally, the RocketMQ exporter is an observer from a third-party perspective, and all of its metrics come from within the MQ cluster.
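The following is a minimal sketch of this "scheduled pull, then expose over HTTP" pattern, not the actual exporter implementation. It uses the Prometheus Java simpleclient, and the fetchBrokerTps() helper is a made-up stand-in for the real MQAdminExt calls.

```java
import io.prometheus.client.Gauge;
import io.prometheus.client.exporter.HTTPServer;

import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class RocketMqExporterSketch {

    // Gauge with a bounded "broker" label; values are refreshed by the scheduled task.
    private static final Gauge BROKER_TPS = Gauge.build()
            .name("rocketmq_broker_tps")
            .help("Produce TPS per broker, pulled periodically from the cluster")
            .labelNames("broker")
            .register();

    public static void main(String[] args) throws Exception {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        // Periodically pull cluster data, normalize it, and update the registered metrics.
        scheduler.scheduleAtFixedRate(() -> {
            for (Map.Entry<String, Double> e : fetchBrokerTps().entrySet()) {
                BROKER_TPS.labels(e.getKey()).set(e.getValue());
            }
        }, 0, 30, TimeUnit.SECONDS);

        // Expose all registered metrics on an HTTP endpoint for Prometheus to scrape.
        new HTTPServer(5557);
    }

    // Hypothetical placeholder for the admin-API call (e.g. via MQAdminExt) that reads broker stats.
    private static Map<String, Double> fetchBrokerTps() {
        return Map.of("broker-a", 120.0, "broker-b", 98.5);
    }
}
```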

When exposing monitoring metrics from an application to Prometheus, pay attention to the following two points:

1. Exporter deployment falls into two modes: the direct observation mode, which embeds the Prometheus client into the application, and the independent exporter mode, which runs outside the application. The direct observation mode offers mainstream language support, better performance, and no extra components to maintain, at the cost of coupling with application code. The exporter mode offers decoupling and a rich open source ecosystem, but its biggest drawback is that the exporter must be operated as a separate component; in a cloud-native microservice architecture, deploying many exporters adds significant operational burden. Neither mode is inherently better: the direct observation mode is generally recommended when you control the application code, and the exporter mode when you do not.

2. Avoid the high-cardinality problem caused by divergent metric dimensions. Because Prometheus's metric model makes it very easy to extend a dimension by adding a label, many users add whatever dimensions they need, which inevitably introduces unbounded dimensions such as user IDs, URLs, email addresses, and IP addresses. The total number of Prometheus time series is the product of the combinations of metrics and their label values. High cardinality therefore not only drives up storage costs, but also creates serious performance challenges on the query side because of the huge amount of data returned at once, and severe dimension divergence makes the metrics themselves lose their statistical meaning. Avoid divergent metric dimensions as much as possible; a sketch of this label-design advice follows this list.
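As an illustration only (metric and label names here are invented for the example), bounded dimensions such as topic belong in labels, while unbounded values such as user IDs or IP addresses should not become label values:

```java
import io.prometheus.client.Counter;

public class LabelDesignSketch {

    // Bounded dimensions (topic, result) keep the number of time series under control.
    static final Counter MESSAGES_SENT = Counter.build()
            .name("example_messages_sent_total")
            .help("Messages sent, partitioned by bounded dimensions only")
            .labelNames("topic", "result") // result: ok | error
            .register();

    static void recordSend(String topic, boolean success) {
        // Good: topic and result come from small, enumerable sets.
        MESSAGES_SENT.labels(topic, success ? "ok" : "error").inc();
        // Avoid: adding labels such as userId, url, email, or ip would create a new
        // time series per distinct value, multiplying storage and query cost.
    }
}
```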

When using the Prometheus client, we also ran into the high-cardinality problem: RocketMQ metrics combine multiple dimensions such as account, instance, topic, and consumer group ID, so the overall number of time series is high. In practice, we made two targeted optimizations to the native Prometheus client to effectively control the hidden memory cost that high cardinality imposes on the exporter.

In RocketMQ's production environment, per-customer monitoring of the commercialized, multi-tenant service is required, and each customer's RocketMQ resources are strictly isolated by tenant. Deploying a separate exporter for every tenant would place a heavy burden on the product architecture and on operations, so RocketMQ chose a different way to integrate with Prometheus in production.

The architecture of RocketMQ 5.0 has been greatly improved. The thin, lightweight multi-language clients uniformly use the gRPC protocol to send data to the server, and the MQ server is split into two roles, CBroker (proxy) and SBroker, which can be deployed separately or combined. Along with these architectural changes, RocketMQ 5.0 introduces OpenTelemetry tracing instrumentation on both the client and the server.

Full-Link Tracing

1. The client embeds the OpenTelemetry exporter and sends tracing data to the proxy in batches (a minimal client-side sketch follows this list).

2. The proxy acts as a collector, aggregating the tracing data reported by clients together with its own.

3. Tracing storage supports custom collectors, so data can be reported to commercial hosted storage or to users' own open source storage platforms.

4. The span topology model is redesigned around the message life cycle.
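A minimal sketch of the client-side reporting described in step 1, using the standard OpenTelemetry Java SDK; the endpoint address and span name are assumptions for illustration, not RocketMQ's actual instrumentation.

```java
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.exporter.otlp.trace.OtlpGrpcSpanExporter;
import io.opentelemetry.sdk.OpenTelemetrySdk;
import io.opentelemetry.sdk.trace.SdkTracerProvider;
import io.opentelemetry.sdk.trace.export.BatchSpanProcessor;

public class MessagingTracingSketch {

    public static void main(String[] args) {
        // Export spans in batches over gRPC; here an OTLP collector endpoint stands in for the proxy.
        OtlpGrpcSpanExporter exporter = OtlpGrpcSpanExporter.builder()
                .setEndpoint("http://proxy.example.com:4317") // assumed address
                .build();

        SdkTracerProvider tracerProvider = SdkTracerProvider.builder()
                .addSpanProcessor(BatchSpanProcessor.builder(exporter).build())
                .build();

        OpenTelemetrySdk sdk = OpenTelemetrySdk.builder()
                .setTracerProvider(tracerProvider)
                .build();

        Tracer tracer = sdk.getTracer("messaging-client");

        // One span per life-cycle phase; real instrumentation would follow the messaging
        // semantic conventions and link producer and consumer spans.
        Span sendSpan = tracer.spanBuilder("send").startSpan();
        try {
            // ... send the message ...
        } finally {
            sendSpan.end();
        }

        tracerProvider.shutdown();
    }
}
```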

Accurate and Diverse Metrics

1. The server performs secondary aggregation on the received tracing data, and the computed metrics conform to the OpenMetrics specification (a rough sketch of this idea follows this list).

2. The metrics can be stored seamlessly in Prometheus and displayed in Grafana dashboards.
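As a rough illustration only, and not the actual server-side implementation, the secondary aggregation in step 1 can be thought of as folding finished span durations into Prometheus/OpenMetrics-style histograms. The class and metric names below are assumptions.

```java
import io.prometheus.client.Histogram;

public class TraceToMetricSketch {

    // Latency histogram derived from span durations, labeled by a bounded dimension.
    static final Histogram SEND_LATENCY = Histogram.build()
            .name("example_message_send_duration_seconds")
            .help("Send latency aggregated from tracing data")
            .labelNames("topic")
            .register();

    // Called once for each finished "send" span during aggregation.
    static void aggregate(String topic, long startNanos, long endNanos) {
        double seconds = (endNanos - startNanos) / 1_000_000_000.0;
        SEND_LATENCY.labels(topic).observe(seconds);
    }
}
```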

The figure above shows the RocketMQ span topology model. It re-standardizes the instrumentation points for the Prod, Recv, Await, Proc, and Ack/Nack phases, and the attributes portion of this tracing model has been submitted to the OpenTelemetry specification organization for inclusion.

These improvements greatly enhance the message trace feature. You can not only query traces by basic message information, but also clearly see every stage of the message life cycle. Clicking a trace ID shows the detailed tracing information, along with the producers, consumers, and related resources such as machine information.

Why should RocketMQ's metric data be exposed to Prometheus? Because Prometheus naturally fits cloud-native architectures and is the de facto standard for metrics in the open source community. It integrates natively with Kubernetes and provides automatic service discovery, multi-level collection, a strong ecosystem, a general multi-modal metric model, and the powerful PromQL query language.

RocketMQ performs secondary computation on trace data to produce metrics and interface with Prometheus. As mentioned earlier, RocketMQ 5.0 introduces OpenTelemetry tracing instrumentation, and the tracing data reported by clients and servers is stored uniformly in Alibaba Cloud's logging system. Based on this tracing data, secondary aggregation across multiple dimensions generates time series data that conforms to the Prometheus metric specification. Within the ARMS team, real-time ETL tools convert the log data into metrics and store them in the Prometheus system per tenant. The RocketMQ console is deeply integrated with Grafana dashboards and the alerting module: users only need to enable Prometheus on the RocketMQ instance monitoring page to get their own dashboards and alerts with one click.

ARMS Prometheus integrates monitoring metrics from many cloud products and provides a complete solution for their multi-tenancy requirements. In addition to monitoring their own metrics, Alibaba Cloud products also need to monitor per-tenant metrics for the commercialized service.

Cloud products can be divided by tenant resources into two modes: tenant-exclusive resources and tenant-shared resources. In the tenant-exclusive mode, each tenant occupies its own deployment resources with good isolation, and identifying a metric's tenant only requires attaching a tenant label. In the tenant-shared mode, tenants share deployment resources, and the cloud product itself must add the tenant information that identifies each metric.

Compared with open source Prometheus, ARMS Prometheus monitoring adopts an architecture that separates collection from storage. The collection side can identify and route data by tenant, the storage side is built for multi-tenancy, and resources between tenants are completely isolated.

ARMS Prometheus creates a Prometheus cloud service instance for each Alibaba Cloud user to store the metrics of that user's Alibaba Cloud products. This genuinely solves the data-silo problem caused by scattered monitoring data in the past, and provides each cloud product with deeply customized, out-of-the-box dashboards and alerting capabilities.

The figure above shows the default integrated Grafana dashboard for RocketMQ. The dashboard provides fine-grained monitoring data such as an overview, topic message sending, and group ID message consumption. Compared with the open source implementation, it provides more metrics with greater accuracy, incorporates best-practice templates refined through the RocketMQ team's years of operations experience in the messaging field, and is continuously iterated and updated.

RocketMQ Observability Best Practices

Focusing only on the observability data provided by the messaging system can reveal only some problems. In a real microservice system, users need observability data from the access layer, business applications, middleware, containers, and the underlying IaaS across the whole technology stack to locate problems accurately. The figure above shows a typical upstream/downstream application structure around a messaging system: the upstream order system sends messages, and the downstream inventory and marketing systems subscribe to them, decoupling upstream from downstream. Finding and resolving problems in such a complex business system requires a comprehensive view of the observability of the entire system.

First, the observability data of every component in the system must be collected, and the three pillars of metrics, traces, and logs are all indispensable. Metrics measure application status and let you discover problems quickly through alerts; trace data follows the full path of each request, so problems can be localized quickly by inspecting call chains; log data records system events in detail and supports quick troubleshooting through log analysis.

The figure above shows the diagnostic experience distilled from ARMS Kubernetes monitoring. Through end-to-end, top-down correlation across the whole application stack, it offers practical ideas for diagnosing and locating observability problems both vertically and horizontally. For business-related components, pay more attention to the RED metrics that affect user experience; at the resource level, pay more attention to resource saturation metrics. Horizontally, also correlate logs, events, and call chains. Only multi-directional, full-perspective observation can troubleshoot and locate problems clearly.

The figure above shows an example of a message accumulation scenario.

First, understand the metrics behind message accumulation. After a message is sent by the producer, it passes through three states across server-side processing and consumer consumption: Ready, InFlight, and Received. Two metrics deserve attention. The ready message count is the number of messages ready for delivery; it reflects the volume of messages that have not yet been consumed, and it grows when consumers are abnormal. The queue time of ready messages is the difference between the ready time of the earliest ready message and the current time; it reflects how long unprocessed messages have been delayed and is a very important metric for latency-sensitive businesses.

There are two main causes of message accumulation: the consumer fails or lacks sufficient consumption capacity, or the upstream producer's message volume surges beyond what the downstream can consume.

The production side should focus on sending health and can alert on the sending success rate. When an alert fires, check load, send latency, message volume, and related metrics to determine whether the message volume has changed suddenly. The consumption side should focus on whether consumption is timely and can alert on the queue time of ready messages. When an alert fires, check message processing time, consumption success rate, message volume, load, and other related metrics; determine how the message volume and processing time have changed; and check for ERROR logs, traces, and other related information.
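A sketch of the kind of client-side metrics such alerts can rely on, assuming illustrative metric names rather than RocketMQ's built-in ones: consumption outcomes feed a success-rate alert, and processing time helps explain backlog growth when the queue-time alert fires.

```java
import io.prometheus.client.Counter;
import io.prometheus.client.Histogram;

public class ConsumerHealthSketch {

    // Consumption outcomes feed the consumption success-rate alert.
    static final Counter CONSUME_RESULT = Counter.build()
            .name("example_consume_result_total")
            .help("Consumption results by outcome")
            .labelNames("group", "outcome") // outcome: ok | error
            .register();

    // Processing time helps diagnose backlog growth.
    static final Histogram PROCESS_TIME = Histogram.build()
            .name("example_consume_process_seconds")
            .help("Time spent processing one message")
            .labelNames("group")
            .register();

    static void onMessage(String group, Runnable businessLogic) {
        Histogram.Timer timer = PROCESS_TIME.labels(group).startTimer();
        try {
            businessLogic.run();
            CONSUME_RESULT.labels(group, "ok").inc();
        } catch (RuntimeException e) {
            CONSUME_RESULT.labels(group, "error").inc();
            throw e;
        } finally {
            timer.observeDuration();
        }
    }
}
```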

Users can use Alibaba Cloud ARMS products to handle the above troubleshooting process more easily and quickly.

After receiving an alert, you can view the associated call chain by examining changes in the business topology, exception labels, and business metrics. The call chain shows how long each stage of business processing took and whether exceptions occurred. Each span node in the call chain can be drilled into to inspect the call stack in real time and the proportion of time consumed, locating the problem down to the business-code level. If the trace ID is written into the user's logs according to the ARMS convention, the corresponding log details can also be viewed with one click to finally pinpoint the root cause.

When a problem occurs, besides a convenient and fast problem-location process, a relatively complete alert-handling and emergency-response mechanism is also needed. ARMS alerting provides users with the full workflow of alert configuration, alert scheduling, and alert handling, helping customers establish emergency response, post-incident review, and process improvement.

The ARMS intelligent alerting platform also supports integration with more than 10 monitoring data sources and push notifications over multiple channels. DingTalk-based ChatOps makes alerts collaborative, traceable, and measurable. The platform also provides algorithmic capabilities such as anomaly detection and intelligent noise reduction to effectively cut down invalid alerts, and offers root cause analysis of alerts based on the application context.

Alibaba Cloud ARMS monitoring covers users' terminals, applications, cloud services and third-party components, containers, and infrastructure from top to bottom, providing comprehensive, three-dimensional, unified monitoring and unified alerting capabilities. It is a one-stop observability best-practice platform for enterprises.
