You can query metrics of Function Compute in the Function Compute console. The metrics include resource overview metrics and metrics specific to regions, services, and functions. The MetricName parameter specifies a metric. This topic describes the names and descriptions of monitoring metrics of Function Compute.
Resource overview metrics
You can log on to the Function Compute console and view the resource overview metrics in the Resource Usage Statistics section on the Overview page.
Category | Metric name | Unit | Description |
---|---|---|---|
Overview | Invocations | Count | The total number of function invocation requests. |
Overview | vCPU Usage | vCPU-seconds | The vCPU resources consumed by invoked functions. The value is the vCPU capacity multiplied by the function execution duration. |
Overview | Memory Usage | GB-seconds | The memory resources consumed by invoked functions. The value is the memory capacity multiplied by the function execution duration. |
Overview | Disk Usage | GB-seconds | The disk resources consumed by invoked functions. The value is the disk capacity multiplied by the function execution duration. |
Overview | Outbound Internet Traffic | GB | The total outbound Internet traffic that is generated during function execution within a specified statistical period. |
Overview | GPU Usage | GB-seconds | The GPU resources consumed by invoked functions. The value is the GPU capacity multiplied by the function execution duration. |
vCPU Usage | Active vCPU Usage | vCPU-seconds | The active vCPU resources consumed by invoked functions. The value is the vCPU capacity multiplied by the function execution duration. |
vCPU Usage | Idle vCPU Usage | vCPU-seconds | The idle vCPU resources consumed by invoked functions. The value is the vCPU capacity multiplied by the instance idle duration. |
Outbound Internet Traffic | Data Transfer Within Functions | GB | The traffic that is generated when functions access the Internet. |
Outbound Internet Traffic | Response Traffic of Function Requests | GB | The traffic that is generated when Function Compute returns responses over the Internet after functions are executed. |
Outbound Internet Traffic | CDN Back-to-Origin Traffic | GB | The CDN back-to-origin traffic that is generated when Function Compute serves as the CDN origin. |
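The usage metrics above are all computed as capacity multiplied by duration. The following minimal sketch illustrates that arithmetic; the instance specifications and execution durations are made-up sample values, not values from this topic:

```python
# Hypothetical illustration of how capacity-times-duration usage is derived.
# The specifications and durations below are made-up example values.

invocations = [
    # (vCPU cores, memory in GB, execution duration in seconds)
    (0.5, 1.0, 0.8),
    (1.0, 2.0, 2.5),
    (2.0, 4.0, 0.3),
]

vcpu_seconds = sum(cpu * duration for cpu, _, duration in invocations)
gb_seconds = sum(mem * duration for _, mem, duration in invocations)

print(f"vCPU Usage:   {vcpu_seconds:.2f} vCPU-seconds")
print(f"Memory Usage: {gb_seconds:.2f} GB-seconds")
```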
Region-specific metrics
Log on to the Function Compute console. In the left-side navigation pane, choose . On the page that appears, you can view the region-specific metrics.
The region-specific metrics are used to monitor and measure the resource usage of Function Compute in a region. The following table describes the region-specific metrics.
Category | Metric name | Unit | Description |
---|---|---|---|
Function execution | Invocations | Count | The total number of requests to invoke functions in a region. The statistics are collected every minute or every hour. |
Number of errors | Server Errors | Count | The total number of invocation requests in a region that failed to be executed due to Function Compute server errors. The statistics are collected every minute or every hour. Note: Requests for successful invocations of functions configured with HTTP triggers and for which an HTTP 5xx status code is returned are not counted. |
Number of errors | Client Errors | Count | The total number of requests that are not executed or failed to be executed due to client errors and for which an HTTP 4xx status code is returned. The statistics are collected every minute or every hour. For more information about the HTTP 4xx status codes, see Public error codes. Note: If requests for which the 412 or 499 status code is returned are executed, function logs are generated and billed, and you can view the logs of these client error requests in the Request List. For more information, see View function invocation logs. |
Number of errors | Function Errors | Count | The total number of invocation requests in a region that failed to be executed due to function errors. The statistics are collected every minute or every hour. |
Errors due to throttling | Maximum Concurrent Instances Exceeded | Count | The total number of invocation requests in a region that failed to be executed because the maximum number of concurrent instances is exceeded and for which the HTTP 429 status code is returned. The statistics are collected every minute or every hour. |
Errors due to throttling | Maximum Instances Exceeded | Count | The total number of invocation requests in a region that failed to be executed because the maximum number of instances is exceeded and for which the HTTP 503 status code is returned. The statistics are collected every minute or every hour. |
Number of on-demand instances | Upper Limit | Count | The maximum number of on-demand instances that can be concurrently occupied in a region within the current account. Default value: 300. |
Number of on-demand instances | On-demand Instances | Count | The number of on-demand instances that are concurrently occupied when you invoke functions in a region. The statistics are collected every minute or every hour. |
Number of provisioned instances | Provisioned Instances | Count | The total number of provisioned instances that are created for all functions in a region within the current account. |
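The On-demand Instances metric is most useful when read against the Upper Limit metric. The following sketch shows one way to flag that a region is approaching its limit; the sample values and the 80% warning threshold are assumptions, not recommendations from this topic:

```python
# Hypothetical check of on-demand instance usage against the regional limit.
# The sample values and the 80% warning threshold are assumptions.

upper_limit = 300       # Upper Limit metric (default regional quota)
used_on_demand = 260    # On-demand Instances metric (current concurrent usage)

utilization = used_on_demand / upper_limit
print(f"On-demand instance utilization: {utilization:.0%}")

if utilization >= 0.8:
    print("Warning: close to the regional on-demand instance limit; "
          "further invocations may be throttled with HTTP 429 or 503.")
```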
Service-specific metrics
Log on to the Function Compute console. In the left-side navigation pane, choose . On the page that appears, click the name of the service whose metrics you want to view in the Service Name column.
The prefix of the metric name from the perspective of a service version or a service alias is ServiceQualifier. For example, the ServiceQualifierTotalInvocations metric specifies the total number of invocations. The sketch after the following table shows how these prefixed names are composed.

Category | Metric name | Unit | Description |
---|---|---|---|
Function execution | Total Invocations | Count | The total number of requests to invoke functions in a service. The statistics are collected every minute or every hour. |
Number of errors | Server Errors | Count | The total number of requests for invocations of functions in a service that failed to be executed due to Function Compute server errors. The statistics are collected every minute or every hour. Note: Requests for successful invocations of functions configured with HTTP triggers and for which an HTTP 5xx status code is returned are not counted. |
Number of errors | Client Errors | Count | The total number of requests that are not executed or failed to be executed due to client errors and for which an HTTP 4xx status code is returned. The statistics are collected every minute or every hour. For more information about the HTTP 4xx status codes, see Public error codes. Note: If requests for which the 412 or 499 status code is returned are executed, function logs are generated and billed, and you can view the logs of these client error requests in the Request List. For more information, see View function invocation logs. |
Number of errors | Function Errors | Count | The total number of requests for invocations in a service that failed to be executed due to function errors. The statistics are collected every minute or every hour. |
Errors due to throttling | Maximum Concurrent Instances Exceeded | Count | The total number of requests for invocations in a service that failed to be executed because the maximum number of concurrent instances is exceeded and for which the HTTP 429 status code is returned. The statistics are collected every minute or every hour. |
Errors due to throttling | Maximum Instances Exceeded | Count | The total number of requests for invocations in a service that failed to be executed because the maximum number of instances is exceeded and for which the HTTP 503 status code is returned. The statistics are collected every minute or every hour. |
Number of on-demand instances in a region | Limit | Count | The maximum number of on-demand instances that can be concurrently occupied in a region within the current account. Default value: 300. |
Number of on-demand instances in a region | Used On-demand Instances in the Region | Count | The number of on-demand instances that are concurrently occupied when the functions in a region are invoked. The statistics are collected every minute or every hour. |
Number of provisioned instances | Provisioned Instances | Count | The total number of provisioned instances for all functions in the current service. |
Asynchronous invocations | Asynchronous Requests Enqueued | Count | The number of requests for asynchronous invocations in the service that are enqueued in Function Compute. If the number of enqueued requests is much greater than the number of processed requests, a message backlog occurs. In this case, you can change the upper limit of provisioned instances for auto scaling or contact Function Compute engineers. For more information about provisioned instances for auto scaling, see Configure provisioned instances and auto scaling rules. |
Asynchronous invocations | Asynchronous Requests Processed | Count | The number of requests for asynchronous invocations in the service that are processed by Function Compute. If the number of enqueued requests is much greater than the number of processed requests, a message backlog occurs. In this case, you can change the upper limit of provisioned instances for auto scaling or contact Function Compute engineers. For more information about provisioned instances for auto scaling, see Configure provisioned instances and auto scaling rules. |
Processing latency of an asynchronous message | Average Latency of Asynchronous Requests | Millisecond | The average interval between the time when asynchronous requests are enqueued and the time when they are processed. If the average latency is excessively high, a message backlog occurs. In this case, you can change the upper limit of provisioned instances for auto scaling or contact Function Compute engineers. For more information about provisioned instances for auto scaling, see Configure provisioned instances and auto scaling rules. |
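As a quick illustration of the ServiceQualifier naming convention described before the table, and of the FunctionQualifier convention described in the next section, the following sketch composes full metric names from a base metric name. The base names come from the tables in this topic; the helper function itself is a hypothetical convenience, not part of any SDK:

```python
# Hypothetical helper that composes prefixed metric names.
# ServiceQualifier and FunctionQualifier are the prefixes described in this
# topic; the helper function itself is illustrative and not part of any SDK.

def qualified_metric(scope: str, base_name: str) -> str:
    prefixes = {
        "service": "ServiceQualifier",    # metrics for a service version or alias
        "function": "FunctionQualifier",  # function-level metrics in a service version or alias
    }
    return prefixes[scope] + base_name

print(qualified_metric("service", "TotalInvocations"))   # ServiceQualifierTotalInvocations
print(qualified_metric("function", "TotalInvocations"))  # FunctionQualifierTotalInvocations
```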
Function-specific metrics
Log on to the Function Compute console. In the left-side navigation pane, choose . On the page that appears, click the name of the desired service in the Service Name column. On the Service-level Monitoring page, click the name of the function whose metrics you want to view in the Function Name column.
- The prefix of the metric name from the perspective of all functions in a service version or all functions in a service alias is FunctionQualifier, such as FunctionQualifierTotalInvocations for the total number of requests to invoke the functions.
- You can monitor and measure the function-specific CPU utilization, memory usage, and network traffic only after the collection of instance-level metrics is enabled. For more information, see Instance-level metrics.
Category | Metric name | Unit | Description |
---|---|---|---|
Number of requests | Total Invocations | Count | The total number of requests to invoke a function by provisioned and on-demand instances. The statistics are collected every minute or every hour. |
Number of requests | Provisioned Instance-based Invocations | Count | The total number of requests to invoke a function by provisioned instances. The statistics are collected every minute or every hour. |
Number of errors | Server Errors | Count | The total number of requests for invocations of a function that failed to be executed due to Function Compute server errors. The statistics are collected every minute or every hour. Note: Requests for successful invocations of functions configured with HTTP triggers and for which an HTTP 5xx status code is returned are not counted. |
Number of errors | Client Errors | Count | The total number of requests that are not executed or failed to be executed due to client errors and for which an HTTP 4xx status code is returned. The statistics are collected every minute or every hour. For more information about the HTTP 4xx status codes, see Public error codes. Note: If requests for which the 412 or 499 status code is returned are executed, function logs are generated and billed, and you can view the logs of these client error requests in the Request List. For more information, see View function invocation logs. |
Number of errors | Function Errors | Count | The total number of requests for invocations of a function that failed to be executed due to function errors. The statistics are collected every minute or every hour. |
Errors due to throttling | Maximum Concurrent Instances Exceeded | Count | The total number of requests for invocations of a function that failed to be executed because the maximum number of concurrent instances is exceeded and for which the HTTP 429 status code is returned. The statistics are collected every minute or every hour. |
Errors due to throttling | Maximum Instances Exceeded | Count | The total number of requests for invocations of a function that failed to be executed because the maximum number of instances is exceeded and for which the HTTP 503 status code is returned. The statistics are collected every minute or every hour. |
End-to-end latency | Average | Millisecond | The average amount of time that a function invocation takes, measured from the time when a function execution request arrives at Function Compute to the time when the request leaves Function Compute, including the time consumed by the platform. The average value is calculated every minute or every hour. |
End-to-end latency | Maximum Latency | Millisecond | The maximum amount of time that a function invocation takes, measured from the time when a function execution request arrives at Function Compute to the time when the request leaves Function Compute, including the time consumed by the platform. The statistics are collected every minute or every hour. |
Number of requests concurrently processed by a single instance | Concurrent Requests | Count | The number of requests that are concurrently processed by an instance when a function is invoked. The statistics are collected every minute or every hour. Note: If you do not configure a single instance to concurrently process multiple requests for the function, a single instance processes one request at a time by default. To display this metric, enable the collection of instance-level metrics. For more information, see A single instance that concurrently processes multiple requests and Instance-level metrics. |
Number of requests concurrently processed by a single instance | Used On-demand Instances | Count | The number of on-demand instances that are occupied to execute a function when the function is invoked. |
Number of provisioned instances for a function | Number of Provisioned Instances | Count | The number of provisioned instances that are occupied to execute a function when the function is invoked. |
vCPU usage | vCPU Quota | % | The vCPU quota of the function when the function is invoked. The statistics are collected every minute or every hour. You can flexibly configure the vCPU and memory specifications. The vCPU-to-memory ratio (vCPU:GB) must be from 1:1 to 1:4. |
vCPU usage | vCPU Usage | % | The vCPU utilization of a function when the function is invoked. This metric indicates the number of vCPUs in use. For example, 100% represents one vCPU. The total value for all instances of the function is calculated every minute or every hour. |
Memory usage | Memory Quota | MB | The maximum size of memory that can be used by a function when the function is invoked. If the function consumes more memory than this upper limit, an out-of-memory (OOM) error occurs. The maximum value for all instances of the function is calculated every minute or every hour. |
Memory usage | Used Memory | MB | The size of memory consumed to execute a function when the function is invoked. This metric indicates the memory that is actually consumed by the function. The maximum value for all instances of the function is calculated every minute or every hour. |
Network traffic | Inbound Traffic | kbps | The inbound traffic that is generated during function execution per unit of time when a function is invoked. The total value for all instances of the function is calculated every minute or every hour. |
Network traffic | Outbound Traffic | kbps | The outbound traffic that is generated during function execution per unit of time when a function is invoked. The total value for all instances of the function is calculated every minute or every hour. |
Asynchronous invocation processing | Asynchronous Requests Enqueued | Count | The number of enqueued requests when a function is asynchronously invoked. The statistics are collected every minute or every hour. |
Asynchronous invocation processing | Asynchronous Requests Processed | Count | The number of processed requests when a function is asynchronously invoked. The statistics are collected every minute or every hour. Note: If the number of enqueued requests is much greater than the number of processed requests, a message backlog occurs. In this case, you can change the upper limit of provisioned instances for auto scaling or contact Function Compute engineers. For more information about provisioned instances for auto scaling, see Configure provisioned instances and auto scaling rules. |
Processing latency of an asynchronous message | Average | Millisecond | The interval between the time when asynchronous requests are enqueued and the time when they are processed. The average value is calculated every minute or every hour. |
Processing latency of an asynchronous message | Maximum Duration | Millisecond | The interval between the time when asynchronous requests are enqueued and the time when they are processed. The statistics are collected every minute or every hour. |
Events triggered during asynchronous invocations | Discarded upon Timeout | Count | The total number of requests that are discarded due to timeout when a destination is configured for asynchronous invocations of a function. The statistics are collected every minute or every hour. |
Events triggered during asynchronous invocations | Destination Trigger Failed | Count | The number of requests that fail to trigger the destination during function execution when a destination is configured for asynchronous invocation of a function. The statistics are collected every minute or every hour. |
Events triggered during asynchronous invocations | Destination Triggered | Count | The number of requests that trigger the destination during function execution when a destination is configured for asynchronous invocation of a function. The statistics are collected every minute or every hour. |
Resource usage | Usage | MB × ms | The resources consumed by all functions in a service of a specified version or alias. The value is the memory capacity multiplied by the function execution duration. The statistics are collected every minute or every hour. |
Request backlogs | Backlogs | Count | The total number of requests that are queued and being processed. The statistics are collected every minute or every hour. Note: If the value is greater than 0, you can change the upper limit of provisioned instances for auto scaling or contact Function Compute engineers. For more information about provisioned instances for auto scaling, see Configure provisioned instances and auto scaling rules. |
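Several of the asynchronous metrics above point to the same signal: a backlog exists when enqueued requests outpace processed requests or when Backlogs stays above 0. The following minimal sketch shows that comparison with made-up sample values; the ratio threshold is an assumption, not a documented rule:

```python
# Hypothetical backlog check based on the asynchronous invocation metrics
# described above. The sample values and the 2x ratio threshold are assumptions.

enqueued = 1200    # Asynchronous Requests Enqueued in the period
processed = 400    # Asynchronous Requests Processed in the period
backlogs = 800     # Backlogs metric

if backlogs > 0 or (processed and enqueued / processed > 2):
    print("Possible message backlog: consider raising the upper limit of "
          "provisioned instances for auto scaling, or contact Function Compute.")
else:
    print("Asynchronous processing is keeping up with incoming requests.")
```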
References
For more information about how to call CloudMonitor API operations to view monitoring details, see Metrics data.
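As a sketch of what such an API call can look like, the following Python example queries one of the metrics described in this topic through the CloudMonitor DescribeMetricList operation by using the generic CommonRequest of the Alibaba Cloud Python SDK. The endpoint, API version, namespace, metric name, and dimension keys below are assumptions and should be verified against the CloudMonitor Metrics data reference and your own resources:

```python
# Hedged sketch: query a Function Compute metric from CloudMonitor.
# Install the SDK core package first: pip install aliyun-python-sdk-core
# The endpoint, API version, namespace, metric name, and dimension keys are
# assumptions; verify them against the CloudMonitor Metrics data reference.
from aliyunsdkcore.client import AcsClient
from aliyunsdkcore.request import CommonRequest

client = AcsClient("<access-key-id>", "<access-key-secret>", "cn-hangzhou")

request = CommonRequest()
request.set_domain("metrics.cn-hangzhou.aliyuncs.com")  # assumed CloudMonitor endpoint
request.set_version("2019-01-01")                        # assumed API version
request.set_action_name("DescribeMetricList")
request.add_query_param("Namespace", "acs_fc")           # assumed Function Compute namespace
request.add_query_param("MetricName", "ServiceQualifierTotalInvocations")
request.add_query_param("Period", "60")                  # one data point per minute
# Dimensions identify the target service and qualifier; the keys are assumptions.
request.add_query_param(
    "Dimensions",
    '[{"userId": "<account-id>", "serviceName": "<service>.<qualifier>"}]',
)

response = client.do_action_with_exception(request)
print(response.decode("utf-8"))
```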