Common questions about notification policies, dispatch rules, alert routing, and troubleshooting in the Alert Management sub-service of Application Real-Time Monitoring Service (ARMS).
How do the old and new alerting versions differ in Managed Service for Prometheus?
The versions differ in two areas: alert rule templates and the Alert Management sub-service.
Alert rule templates: The new version uses alert rule templates verified by Alibaba Cloud. The old version uses open-source templates without Alibaba Cloud verification.
Alert Management: Only the new version includes the Alert Management sub-service. When alert rules fire, alert events flow into Alert Management, where you control which notifications to receive.
Alert Management improves the alerting workflow in three ways:
Decoupled alert and notification configuration: Define only trigger conditions in alert rules. Attach notification policies separately to control delivery.
Fine-grained routing: Route alert notifications by specific criteria such as Container Service for Kubernetes (ACK) namespaces.
Reusable policies: Bind one notification policy to multiple alert rules instead of configuring notification methods per rule.
Use dispatch rules in notification policies to route alerts to the right user groups. The following are typical configurations:
Infrastructure operations and maintenance (O&M): Subscribe to alerts about production cluster resource usage and ACK component failures. Dispatch rules:
Rule 1: alertName == CPU utilization of nodes higher than 80% & clusterName == Production cluster
Rule 2: alertName == ApiServer Failures & clusterName == Production cluster
Payment service O&M: Subscribe to alerts from the pay and pay-pre namespaces in the production cluster. Dispatch rule:
namespace Regex match pay.* & clusterName == Production cluster
P1-level notifications: Subscribe to all critical-severity alerts from the production cluster. Dispatch rule:
severity == critical & clusterName == Production cluster
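Conceptually, each dispatch rule is a set of conditions evaluated against an alert's labels. The sketch below illustrates how the two operators shown above (exact match and Regex match) behave; the function name and operator strings are illustrative, not part of any ARMS API.

```python
import re

def condition_matches(alert_labels, field, op, value):
    """Evaluate one dispatch-rule condition against an alert's labels."""
    actual = alert_labels.get(field, "")
    if op == "==":
        return actual == value
    if op == "regex":
        # "Regex match" succeeds when the pattern matches the label value
        return re.match(value, actual) is not None
    raise ValueError(f"unsupported operator: {op}")

# Payment service O&M rule: namespace Regex match pay.* matches both
# the pay and pay-pre namespaces.
alert = {"namespace": "pay-pre", "clusterName": "Production cluster"}
assert condition_matches(alert, "namespace", "regex", "pay.*")
assert condition_matches(alert, "clusterName", "==", "Production cluster")
```

Note how the regex `pay.*` deliberately covers every namespace whose name starts with `pay`, which is why a single rule can serve both namespaces.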
Why am I still receiving notifications from an earlier notification policy?
The earlier notification policy continues to send notifications because its dispatch rules still match the alert. To verify:
Note the Notification Policy field in the alert notification you received. For more information, see View historical alerts.
In the ARMS console, find that notification policy and check whether its dispatch rules match the alert. For more information, see Create and manage a notification policy.
If the rules match, update or remove them to stop these notifications.
Why am I receiving notifications for alerts I don't want?
Notification policies send notifications for any alert that matches their dispatch rules, even if the match is unintentional. To identify the cause:
Note the Notification Policy field in the unwanted notification. For more information, see View historical alerts.
In the ARMS console, find that notification policy and review its dispatch rules. For more information, see Create and manage a notification policy.
If the rules are too broad, narrow them to exclude unwanted alerts.
Why does _aliyun_arms_alert_rule_id appear in a notification policy?
When you specify a notification policy while creating an alert rule, the system automatically adds _aliyun_arms_alert_rule_id == {{Alert rule ID}} as a dispatch rule in that policy. This links the alert rule to the notification policy.
Why do I receive notifications without specifying a notification policy for an alert rule?
All alerts are sent to the Alert Management sub-service, whether or not you specify a notification policy in the alert rule. If the alert matches dispatch rules in any existing notification policy, that policy sends a notification.
Do notification policies have the same priority?
Yes. All notification policies have equal priority. If an alert matches the dispatch rules of multiple policies, each matching policy sends its own notification independently.
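Because every matching policy acts independently, one alert can fan out into several notifications. A minimal sketch of this behavior, assuming each policy is a simple predicate over alert labels (the policy names and function are hypothetical, not ARMS APIs):

```python
def matching_policies(alert_labels, policies):
    """Return every policy whose dispatch rules match; all have equal priority."""
    return [name for name, predicate in policies.items() if predicate(alert_labels)]

policies = {
    "infra-oncall": lambda a: a.get("clusterName") == "Production cluster",
    "p1-escalation": lambda a: a.get("severity") == "critical",
}

alert = {"clusterName": "Production cluster", "severity": "critical"}
# Both predicates hold, so each policy sends its own notification.
assert matching_policies(alert, policies) == ["infra-oncall", "p1-escalation"]
```

This is why narrowing a policy's dispatch rules (as described in the preceding questions) is the only way to stop duplicate notifications; there is no priority order to adjust.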
What is the logical relationship between dispatch rules?
Dispatch rules within a notification policy use two logical operators:
| Level | Operator | Behavior |
|---|---|---|
| Between dispatch rules in the same policy | OR | A notification is sent when any single rule matches. |
| Between conditions within one dispatch rule | AND | A rule matches only when all its conditions are met. |
Example: A notification policy has Rule A (with conditions C1 and C2) and Rule B. An alert triggers a notification if it matches either Rule A or Rule B. For Rule A to match, the alert must satisfy both C1 and C2.
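The table and example above can be sketched as a two-level predicate: OR across rules, AND across conditions within a rule. The condition lambdas below are illustrative stand-ins for dispatch-rule conditions.

```python
def rule_matches(alert, conditions):
    # AND: every condition within the rule must hold
    return all(cond(alert) for cond in conditions)

def policy_matches(alert, rules):
    # OR: any single matching rule is enough
    return any(rule_matches(alert, conds) for conds in rules)

c1 = lambda a: a["clusterName"] == "Production cluster"        # condition C1
c2 = lambda a: a["severity"] == "critical"                     # condition C2
rule_a = [c1, c2]
rule_b = [lambda a: a["alertname"] == "ApiServer Failures"]

alert = {"clusterName": "Production cluster", "severity": "warning",
         "alertname": "ApiServer Failures"}
assert not rule_matches(alert, rule_a)          # C2 fails, so Rule A does not match
assert policy_matches(alert, [rule_a, rule_b])  # Rule B matches, so the policy fires
```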
Should I specify a notification policy when creating an alert rule?
It depends on your use case:
| Scenario | Recommendation |
|---|---|
| Simple routing (send Alert A to Contact B) | Specify a notification policy when creating the alert rule. |
| Advanced processing (sorting, muting, grouping, or custom workflows) | Skip the notification policy in the alert rule. Create a custom notification policy in the ARMS console afterward. For more information, see Create and manage a notification policy. |
Why are false alerts generated?
The following false alerts are caused by invalid configurations in the old alert rule template:
CPU utilization of nodes is higher than 8,000%
Status of pods is abnormal
Pods time out during startup
The Alert Management sub-service has released an updated template. To fix these alerts, migrate your rules:
Delete the alert rules created from the old template.
Recreate the alert rules using the updated template.
For alert rule management instructions, see the documentation for the relevant monitoring service:
| Monitoring service | Documentation |
|---|---|
| Application Monitoring | Alert rules |
| Browser Monitoring | Create and manage a Browser Monitoring alert rule |
| Managed Service for Prometheus | Create an alert rule |
What is the relationship between Alert Management and Alertmanager?
In open-source Prometheus, alerts go to Alertmanager, which requires manual configuration for dispatching and notifications. In Managed Service for Prometheus, the Alert Management sub-service acts as a multi-tenant Alertmanager hosted by Alibaba Cloud. Alerts are automatically routed to Alert Management for processing. The service supports the core features of open-source Alertmanager.
Sending alerts to a self-managed Alertmanager instance is not supported. As an alternative, Alert Management can forward alert notifications in Alertmanager format through webhooks. For more information, see Format of alert notifications sent by using webhooks.
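A receiver of these webhook notifications parses a JSON payload. The sketch below assumes the payload follows the open-source Alertmanager v4 webhook schema (a `version` field, a `status`, and an `alerts` array of labeled alert objects); verify the exact fields against Format of alert notifications sent by using webhooks before relying on them.

```python
import json

def summarize_webhook(body: str):
    """Extract (status, alertname) pairs from an Alertmanager-format payload."""
    payload = json.loads(body)
    return [
        (a["status"], a["labels"].get("alertname", "unknown"))
        for a in payload.get("alerts", [])
    ]

# Sample payload shaped like the open-source Alertmanager v4 webhook schema.
sample = json.dumps({
    "version": "4",
    "status": "firing",
    "alerts": [
        {"status": "firing", "labels": {"alertname": "HighNodeCPU"}, "annotations": {}},
    ],
})
assert summarize_webhook(sample) == [("firing", "HighNodeCPU")]
```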
Why do alert notifications contain a "New event" message I didn't configure?
Alert Management groups alert events by labels and sends one notification per event group. When a new event joins an existing group, Alert Management sends another notification with the New event message to indicate the group has been updated.
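The grouping behavior can be sketched as keying events by a subset of their labels: the first event in a key creates a group, and later events with the same key produce the "New event" update. The grouping labels below are illustrative; ARMS determines the actual grouping configuration.

```python
groups = {}

def group_key(labels, group_by=("alertname", "clusterName")):
    """Events sharing the same values for the grouping labels share one group."""
    return tuple(labels.get(k, "") for k in group_by)

def ingest(event_labels):
    key = group_key(event_labels)
    is_new_group = key not in groups
    groups.setdefault(key, []).append(event_labels)
    # An existing group gaining an event yields a "New event" update notification.
    return "first notification" if is_new_group else "New event update"

assert ingest({"alertname": "HighNodeCPU", "clusterName": "Prod", "pod": "a"}) == "first notification"
assert ingest({"alertname": "HighNodeCPU", "clusterName": "Prod", "pod": "b"}) == "New event update"
```

In this sketch, the second event differs only in its `pod` label, which is not a grouping label, so it joins the existing group instead of creating a new one.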
How do I modify the content of a DingTalk alert card?
A DingTalk alert card has two parts: alert content (controlled by a notification template) and card styling (controlled by a chatbot).
Modify the alert content
Log on to the ARMS console.
In the left-side navigation pane, choose Alert Management > Notification Policies. Find the notification policy and click Edit in the Actions column.
Click the Notification Objects tab, and then modify Notification Content on the DingTalk/Lark/WeCom tab.
For example, use {{ .Labels.alertname }} to insert the alert name and {{ .Annotations.summary }} to insert the alert summary. For syntax details, see Configure a notification template and a webhook template.
Modify the card styling
In the left-side navigation pane, choose Alert Management > Notification Objects.
Click the DingTalk/Lark/WeCom tab, find the chatbot, and click Edit in the Actions column.
Configure the alert card style as needed.