You can view the metrics of Function Compute resources, as well as metrics at the region, service, and function levels, in the Function Compute console. The <MetricName> parameter specifies a metric. This topic describes the name and description of each monitoring metric of Function Compute.
Resource metrics
You can log on to the Function Compute console and view the resource metrics in the Resource Usage Statistics section on the Overview page.
Resource metrics are used to monitor and measure the overall resource usage and network traffic of Function Compute in a region or all regions. The following table describes resource metrics. Values of metrics are summed on a daily or monthly basis.
| Category | Metric | Unit | Description |
| --- | --- | --- | --- |
| Overview | Invocations | N/A | The total number of requests for function invocations. |
| Overview | vCPU Usage | vCPU-second | The vCPU resources consumed by invoked functions. The value is the vCPU capacity multiplied by the function execution duration. |
| Overview | Memory Usage | GB-second | The memory resources consumed by invoked functions. The value is the memory capacity multiplied by the function execution duration. |
| Overview | Disk Usage | GB-second | The disk resources consumed by invoked functions. The value is the disk capacity multiplied by the function execution duration. |
| Overview | Outbound Internet Traffic | GB | The total outbound Internet traffic generated during function execution within the statistical period. |
| Overview | GPU Usage | GB-second | The GPU resources consumed by invoked functions. The value is the GPU capacity multiplied by the function execution duration. |
| vCPU Usage | Active vCPU Usage | vCPU-second | The active vCPU resources consumed by invoked functions. The value is the vCPU capacity multiplied by the function execution duration. |
| vCPU Usage | Idle vCPU Usage | vCPU-second | The idle vCPU resources consumed by invoked functions. The value is the vCPU capacity multiplied by the instance idle duration. |
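As a quick illustration of the capacity-multiplied-by-duration rule that the usage metrics above share, the following is a minimal sketch; the function name and the sample figures are illustrative, not part of Function Compute.

```python
def resource_usage(capacity, duration_seconds):
    """Return resource usage as capacity multiplied by execution duration.

    capacity: vCPU cores (yields vCPU-seconds) or GB of memory, disk,
              or GPU capacity (yields GB-seconds).
    duration_seconds: function execution duration in seconds.
    """
    return capacity * duration_seconds

# A function with 0.5 vCPUs and 1 GB of memory running for 2 seconds:
vcpu_seconds = resource_usage(0.5, 2)  # 1.0 vCPU-second
gb_seconds = resource_usage(1.0, 2)    # 2.0 GB-seconds
```

The same helper applies to Active and Idle vCPU Usage; only the duration differs (execution time versus instance idle time).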
Region-level metrics
Log on to the Function Compute console. In the left-side navigation pane, choose . On the page that appears, you can view region-level metrics.
Region-level metrics are used to monitor the resource usage of Function Compute in a region. The following table describes region-level metrics.
| Category | Metric | Unit | Description |
| --- | --- | --- | --- |
| Function Executions | Invocations | N/A | The total number of requests to invoke functions in a region. The statistics are collected every minute or every hour. |
| Errors | Server Errors | N/A | The total number of requests in a region that failed to be executed due to Function Compute system errors. The statistics are collected every minute or every hour. Note: Requests for successful invocations of a function configured with an HTTP trigger and for which an HTTP status code of 5xx is returned are also counted. |
| Errors | Client Errors | N/A | The total number of requests that are not executed or failed to be executed due to client errors and for which an HTTP status code of 4xx is returned. For more information, see Public error codes. Note: If a function processes a request and itself returns an HTTP status code of 4xx, the request is also counted. |
| Errors | Function Errors | N/A | The total number of requests in a region that failed to be executed due to function errors. The statistics are collected every minute or every hour. |
| Throttling Errors | Maximum Concurrent Instances Exceeded | N/A | The total number of requests in a region that failed to be executed because the number of concurrent instances exceeded the upper limit, and for which the HTTP status code 429 is returned. |
| Throttling Errors | Maximum Instances Exceeded | N/A | The total number of requests in a region that failed to be executed because the number of instances exceeded the upper limit, and for which the HTTP status code 429 is returned. |
| On-demand Instances | Upper Limit | N/A | The maximum number of on-demand instances that can be concurrently occupied in a region within the current account. Default value: 300. |
| On-demand Instances | On-demand Instances | N/A | The number of on-demand instances that are concurrently occupied when functions in a region are invoked. The statistics are collected every minute or every hour. |
| Provisioned Instances | Provisioned Instances | N/A | The total number of provisioned instances that are created for all functions in a region within the current account. |
Service-level metrics
Log on to the Function Compute console. In the left-side navigation pane, choose . On the page that appears, in the Service Name column, click the name of the service whose metrics you want to view.
Service-level metrics are used to monitor and measure the resource usage of a service from the service, version, and alias perspectives. These metrics are organized by service. The following table describes service-level metrics.
The prefix of version-level and alias-level metric names is fixed to ServiceQualifier. For example, the metric for the total number of invocations of functions in a service is ServiceQualifierTotalInvocations.
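The naming rule can be sketched as a small helper; the helper itself is illustrative, and only the ServiceQualifier prefix comes from Function Compute.

```python
SERVICE_QUALIFIER_PREFIX = "ServiceQualifier"

def qualified_metric_name(metric):
    """Build the version- or alias-level name of a service metric."""
    return SERVICE_QUALIFIER_PREFIX + metric

print(qualified_metric_name("TotalInvocations"))
# ServiceQualifierTotalInvocations
```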
| Category | Metric | Unit | Description |
| --- | --- | --- | --- |
| Function Executions | Total Invocations | N/A | The total number of requests to invoke functions in a service. The statistics are collected every minute or every hour. |
| Errors | Server Errors | N/A | The total number of requests in a service that failed to be executed due to Function Compute system errors. The statistics are collected every minute or every hour. Note: Requests for successful invocations of a function configured with an HTTP trigger and for which an HTTP status code of 5xx is returned are also counted. |
| Errors | Client Errors | N/A | The total number of requests that are not executed or failed to be executed due to client errors and for which an HTTP status code of 4xx is returned. For more information, see Public error codes. Note: If a function processes a request and itself returns an HTTP status code of 4xx, the request is also counted. |
| Errors | Function Errors | N/A | The total number of requests in a service that failed to be executed due to function errors. The statistics are collected every minute or every hour. |
| Throttling Errors | Maximum Concurrent Instances Exceeded | N/A | The total number of requests in a service that failed to be executed because the number of concurrent instances exceeded the upper limit, and for which the HTTP status code 429 is returned. |
| Throttling Errors | Maximum Instances Exceeded | N/A | The total number of requests in a service that failed to be executed because the number of instances exceeded the upper limit, and for which the HTTP status code 429 is returned. |
| On-demand Instances in the Region | Upper Limit | N/A | The maximum number of on-demand instances that can be concurrently occupied in a region within the current account. Default value: 300. |
| On-demand Instances in the Region | On-demand Instances Used | N/A | The number of on-demand instances that are concurrently occupied when functions in a region are invoked. The statistics are collected every minute or every hour. |
| Provisioned Instances | Provisioned Instances | N/A | The total number of provisioned instances for all functions in the current service. |
| Asynchronous Invocation Processing | Asynchronous Requests Enqueued | N/A | The number of enqueued requests for asynchronous invocations in a service. If the number of enqueued requests is far greater than the number of processed requests, a request backlog may exist. In this case, increase the upper limit of instances for the function or contact Function Compute technical support. For more information, see Configure provisioned instances and auto scaling rules. |
| Asynchronous Invocation Processing | Asynchronous Requests Processed | N/A | The number of processed requests for asynchronous invocations in a service. If the number of enqueued requests is far greater than the number of processed requests, a request backlog may exist. In this case, increase the upper limit of instances for the function or contact Function Compute technical support. For more information, see Configure provisioned instances and auto scaling rules. |
| Asynchronous Invocation Processing Latency | Average Latency of Asynchronous Requests | Milliseconds | The average time between when asynchronous requests are enqueued and when they are processed. If this value is too large, a request backlog may exist. In this case, increase the upper limit of instances for the function or contact Function Compute technical support. For more information, see Configure provisioned instances and auto scaling rules. |
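The backlog guidance in the asynchronous-invocation rows above can be expressed as a simple check. This is only a heuristic sketch: the thresholds are illustrative assumptions, not values defined by Function Compute.

```python
def async_backlog_suspected(enqueued, processed, avg_latency_ms,
                            ratio_threshold=2.0, latency_threshold_ms=60_000):
    """Heuristic: flag a possible asynchronous-request backlog.

    enqueued/processed: request counts for the same statistical period.
    avg_latency_ms: average time between enqueueing and processing.
    The thresholds are illustrative; tune them for your workload.
    """
    if processed == 0:
        return enqueued > 0
    return (enqueued / processed) > ratio_threshold or \
           avg_latency_ms > latency_threshold_ms

# Far more requests enqueued than processed suggests a backlog.
print(async_backlog_suspected(1200, 300, 5_000))  # True
```

If the check fires repeatedly, the table's advice applies: raise the instance upper limit for the function or contact Function Compute technical support.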
Function-level metrics
Log on to the Function Compute console. In the left-side navigation pane, choose . On the page that appears, click the name of the desired service in the Service Name column. On the Service-level Monitoring page, in the Function Name column, click the name of the function whose metrics you want to view.
Function-level metrics are used to monitor and measure resource usage from the perspectives of a single function, all functions in a service version, and all functions under a service alias. These metrics are organized by function. The following table describes function-level metrics.
The prefix of function-level metric names in a version or alias of a service is fixed to FunctionQualifier. For example, the FunctionQualifierTotalInvocations metric indicates the total number of invocations of a function.
Function Compute can monitor and measure the CPU utilization, memory usage, and network traffic of a function only after instance-level metrics are enabled. For more information, see Instance-level metrics.
| Category | Metric | Unit | Description |
| --- | --- | --- | --- |
| Invocations | Total Invocations | N/A | The total number of requests to invoke function instances in provisioned and on-demand modes. The statistics are collected every minute or every hour. |
| Invocations | Provisioned Instance-based Invocations | N/A | The total number of requests to invoke function instances in provisioned mode. The statistics are collected every minute or every hour. |
| Errors | Server Errors | N/A | The total number of requests for invocations of a function that failed to be executed due to Function Compute system errors. The statistics are collected every minute or every hour. Note: Requests for successful invocations of a function configured with an HTTP trigger and for which an HTTP status code of 5xx is returned are also counted. |
| Errors | Client Errors | N/A | The total number of requests that are not executed or failed to be executed due to client errors and for which an HTTP status code of 4xx is returned. For more information, see Public error codes. Note: If a function processes a request and itself returns an HTTP status code of 4xx, the request is also counted. |
| Errors | Function Errors | N/A | The total number of requests for invocations of a function that failed to be executed due to function errors. The statistics are collected every minute or every hour. |
| Throttling Errors | Maximum Concurrent Instances Exceeded | N/A | The total number of requests for invocations of a function that failed to be executed because the number of concurrent instances exceeded the upper limit, and for which the HTTP status code 429 is returned. |
| Throttling Errors | Maximum Instances Exceeded | N/A | The total number of requests for invocations of a function that failed to be executed because the number of instances exceeded the upper limit, and for which the HTTP status code 429 is returned. |
| End-to-End Latency | Average | Milliseconds | The average amount of time consumed to invoke a function. Timing starts when a function execution request arrives at Function Compute and ends when the request leaves Function Compute, so the value includes the time consumed by the platform. The average is calculated every minute or every hour. |
| Function Provisioned Instances | Provisioned Instances | N/A | The number of provisioned instances that are occupied to execute a function when the function is invoked. |
| vCPU Usage | vCPU Quota | % | The vCPU quota of a function when the function is invoked. The statistics are collected every minute or every hour. You can flexibly configure vCPU and memory specifications; the vCPU-to-memory ratio (vCPU:GB) must range from 1:1 to 1:4. |
| Memory Usage | Memory Quota | MB | The maximum amount of memory that a function can use when it is invoked. If the function consumes more memory than this upper limit, an out-of-memory (OOM) error occurs. The maximum value across all instances of the function is calculated every minute or every hour. |
| Network Traffic | Inbound Traffic | kbps | The inbound traffic generated per unit of time when a function is invoked. The total value across all instances of the function is calculated every minute or every hour. |
| Network Traffic | Outbound Traffic | kbps | The outbound traffic generated per unit of time when a function is invoked. The total value across all instances of the function is calculated every minute or every hour. |
| Asynchronous Invocation Processing | Asynchronous Requests Enqueued | N/A | The number of enqueued requests when a function is asynchronously invoked. The statistics are collected every minute or every hour. |
| Asynchronous Invocation Processing | Asynchronous Requests Processed | N/A | The number of processed requests when a function is asynchronously invoked. The statistics are collected every minute or every hour. Note: If the number of enqueued requests is much greater than the number of processed requests, a backlog exists. In this case, change the upper limit of provisioned instances for auto scaling or contact Function Compute technical support. For more information, see Configure provisioned instances and auto scaling rules. |
| Asynchronous Invocation Processing Latency | Average | Milliseconds | The time between when asynchronous requests are enqueued and when they are processed. The average value is calculated every minute or every hour. |
| Asynchronous Invocation Processing Latency | Maximum Duration | Milliseconds | The maximum time between when asynchronous requests are enqueued and when they are processed. The statistics are collected every minute or every hour. |
| Asynchronous Invocation Trigger Events | Discarded upon Timeout | N/A | The total number of requests that are dropped upon timeout when a destination is configured for asynchronous invocation of a function. The statistics are collected every minute or every hour. |
| Asynchronous Invocation Trigger Events | Destination Trigger Failed | N/A | The number of requests that fail to trigger the destination during function execution when a destination is configured for asynchronous invocation of a function. The statistics are collected every minute or every hour. |
| Asynchronous Invocation Trigger Events | Destination Triggered | N/A | The number of requests that trigger the destination during function execution when a destination is configured for asynchronous invocation of a function. The statistics are collected every minute or every hour. |
| Asynchronous Request Backlog | Backlog | N/A | The total number of requests that are queued or being processed. The statistics are collected every minute or every hour. Note: If the value is greater than 0, change the upper limit of provisioned instances for auto scaling or contact Function Compute technical support. For more information, see Configure provisioned instances and auto scaling rules. |
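Because exceeding the Memory Quota causes an OOM error, it can be useful to watch memory utilization against the quota. The helper below is a sketch, not part of any Function Compute SDK, and the 90% warning threshold is an illustrative assumption.

```python
def memory_utilization_pct(used_mb, quota_mb):
    """Return memory utilization as a percentage of the function's quota."""
    return 100.0 * used_mb / quota_mb

def near_oom(used_mb, quota_mb, warn_pct=90.0):
    """Warn when an instance approaches its memory quota.

    warn_pct is an illustrative threshold; pick one for your workload.
    """
    return memory_utilization_pct(used_mb, quota_mb) >= warn_pct

# An instance using 950 MB of a 1024 MB quota is at about 92.8%:
print(near_oom(950, 1024))  # True
```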
More information
For more information about how to call CloudMonitor API operations to view monitoring details, see Monitoring data.
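If you retrieve these metrics programmatically, a CloudMonitor query is typically parameterized as shown below. This is only a sketch of the request parameters: the acs_fc namespace and the metric and dimension names are assumptions to be checked against the CloudMonitor documentation, and the service and function names are hypothetical.

```python
import json

# Illustrative parameters for CloudMonitor's DescribeMetricList operation.
# Namespace, MetricName, and the dimension keys are assumptions; verify
# the exact values in the CloudMonitor documentation.
params = {
    "Action": "DescribeMetricList",
    "Namespace": "acs_fc",                       # assumed Function Compute namespace
    "MetricName": "FunctionTotalInvocations",    # assumed metric name
    "Period": "60",                              # one data point per minute
    "Dimensions": json.dumps({
        "serviceName": "my-service",             # hypothetical service
        "functionName": "my-function",           # hypothetical function
    }),
}
print(params["Action"])  # DescribeMetricList
```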