This topic describes the terms related to custom monitoring, alarms, and dashboards.
An alarm rule defines how alarms are generated from datasets and how alarm notifications are sent. An alarm rule also specifies the severity of the alarms it generates.
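A minimal sketch of how an alarm rule might be evaluated: a metric from a dataset is compared against a threshold, and a breach triggers a notification. The function name and comparator model are illustrative assumptions, not the ARMS API.

```python
# Hypothetical alarm-rule evaluation: compare a dataset metric to a threshold.
def evaluate_alarm(metric_value, threshold, comparator=">="):
    """Return True when the metric breaches the threshold."""
    if comparator == ">=":
        return metric_value >= threshold
    if comparator == "<=":
        return metric_value <= threshold
    raise ValueError(f"unsupported comparator: {comparator}")

# Example: fire an alarm when the error count reaches 100 in the current window.
print(evaluate_alarm(120, 100))  # True
```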
A collection rule defines how a monitoring job collects data from different data source instances. A collection rule must be defined for each monitoring job.
Data cleansing is the process of converting logs into standard key-value (KV) pairs through operations such as splitting and static joins.
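The cleansing steps above can be sketched as follows. The log format (`|`-delimited `key=value` fields) and the static join table are illustrative assumptions.

```python
# Hypothetical static join table: enrich records with attributes keyed by host.
STATIC_JOIN = {"app-01": {"region": "cn-hangzhou"}}

def cleanse(line):
    # Splitting: turn "k1=v1|k2=v2|..." into a dict of KV pairs.
    kv = dict(field.split("=", 1) for field in line.strip().split("|"))
    # Static join: attach fields that the log line itself does not carry.
    kv.update(STATIC_JOIN.get(kv.get("host"), {}))
    return kv

record = cleanse("host=app-01|status=500|rt=230")
print(record)  # {'host': 'app-01', 'status': '500', 'rt': '230', 'region': 'cn-hangzhou'}
```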
Data screening defines which types of data in a dataset are used for dataset calculation. Data that does not meet the criteria is filtered out of the dataset.
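Screening can be pictured as a filter that runs before dataset calculation. The field names and criteria below are assumptions for illustration.

```python
# Keep only records whose field value matches the configured criteria.
def screen(records, field, allowed):
    return [r for r in records if r.get(field) in allowed]

records = [
    {"status": "500", "rt": 230},
    {"status": "200", "rt": 12},
    {"status": "502", "rt": 410},
]
# Only error responses feed a hypothetical error-rate dataset.
errors = screen(records, "status", {"500", "502"})
print(len(errors))  # 2
```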
ARMS obtains data from the following sources:
- ECS data source: ARMS can collect log data from ECS instances.
- MQ data source: ARMS can collect data through integration with MQ.
- SDK data source: You can send data to ARMS directly by calling the integrated SDK APIs.
A dataset defines how a monitoring job aggregates and persistently stores the collected data, and how the output is accessed through OpenAPI. You can define a dataset directly, or indirectly through a report space or an alarm notification.
A dimension is the key by which data is aggregated when a dataset is created. It is similar to the GroupBy column in a database, or an attribute in multidimensional On-Line Analytical Processing (OLAP). A dataset performs aggregation operations on real-time data based on the configured dimensions.
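The GroupBy analogy can be sketched as a per-dimension aggregation. The dimension (`province`) and metric field (`sales`) are illustrative assumptions.

```python
# Aggregate records per value of one dimension, analogous to SQL GROUP BY.
from collections import defaultdict

def aggregate_by(records, dimension, value_field):
    totals = defaultdict(int)
    for r in records:
        totals[r[dimension]] += r[value_field]
    return dict(totals)

records = [
    {"province": "Zhejiang", "sales": 10},
    {"province": "Jiangsu", "sales": 7},
    {"province": "Zhejiang", "sales": 5},
]
print(aggregate_by(records, "province", "sales"))  # {'Zhejiang': 15, 'Jiangsu': 7}
```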
A metric is a specific value stored in a dataset, usually numerical, similar to a value in multidimensional OLAP. ARMS metrics correspond to the Count, Max, Sum, and Count Distinct values produced by real-time calculation.
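The four aggregations named above can be sketched over a single dataset column. The sample values (response times) are assumptions for illustration.

```python
# The four metric aggregations: Count, Max, Sum, and Count Distinct.
values = [230, 12, 410, 12, 95]  # e.g. response times from cleansed records

metrics = {
    "count": len(values),
    "max": max(values),
    "sum": sum(values),
    "count_distinct": len(set(values)),
}
print(metrics)  # {'count': 5, 'max': 410, 'sum': 759, 'count_distinct': 4}
```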
An interactive dashboard is a set of interactive reports customized based on datasets in ARMS. Multiple datasets can be displayed on a dashboard using different types of charts, and the query time span can be customized.
A custom static mapping table maps query results to business attribute fields. For example, it can map the province, city, and district names in query results to zip codes for subsequent chart processing.
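The district-to-zip-code example can be sketched as a lookup applied to each result row. The district names and codes below are illustrative assumptions.

```python
# Hypothetical static mapping table: district name -> zip code.
ZIP_CODES = {"Xihu": "310012", "Binjiang": "310051"}

def map_zip(rows):
    # Attach a zip code to each query-result row for chart processing.
    return [{**row, "zip": ZIP_CODES.get(row["district"], "unknown")} for row in rows]

rows = [{"district": "Xihu", "pv": 120}, {"district": "Binjiang", "pv": 80}]
print(map_zip(rows))
```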
A monitoring job is an instance in which ARMS captures, processes, and stores data, and then presents and exports the results. Jobs are classified as follows:
- Predefined job: For example, exception stack monitoring and product sales statistics. With this type of job, you can use predefined components, such as cleansing logic, datasets, and report controls, to quickly assemble a monitoring job for a specific scenario.
- Custom job: By following the step-by-step instructions, you can manually customize the various job components to build a complete monitoring job.
All datasets in ARMS have a time attribute. Time granularity defines the time span of the data returned by a query, for example, one day, two hours, or five minutes.
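Granularity can be pictured as aligning timestamps to fixed buckets so that a query returns one aggregated point per bucket. This is a sketch of the idea, not the ARMS implementation.

```python
# Align a Unix timestamp (seconds) to a fixed bucket, here five minutes.
def bucket(ts_seconds, granularity_seconds=300):
    return ts_seconds - ts_seconds % granularity_seconds

# Two events 90 seconds apart fall into the same five-minute bucket.
print(bucket(1000) == bucket(1090))  # True
```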