You can query metrics of Function Compute in the Function Compute console. The metrics include resource overview metrics and metrics specific to regions, services, and functions. The MetricName parameter specifies a metric. This topic describes the MetricName value and the meaning of each metric.
Resource overview metrics
You can log on to the Function Compute console and view the resource overview metrics in the Resource Usage Statistics section on the Overview page.
Metric type | MetricName | Unit | Description |
---|---|---|---|
Overview | Invocations | Count | The total number of function invocation requests. The statistics are collected every day or every month. |
| Usage | GB-s | The resources consumed by invoked functions. The value is the memory usage multiplied by the function execution duration. The statistics are collected every day or every month. |
| InternetOut | GB | The total outbound Internet traffic that is generated during function execution within a specified statistical period. The statistics are collected every day or every month. |
Resource usage | ElasticOndemandUsage | GB-s | The on-demand elastic instance resources consumed by invoked functions. The value is the memory usage multiplied by the function execution duration. The statistics are collected every day or every month. |
| EnhancedOndemandUsage | GB-s | The on-demand performance instance resources consumed by invoked functions. The value is the memory usage multiplied by the function execution duration. The statistics are collected every day or every month. |
| ElasticProvisionUsage | GB-s | The provisioned elastic instance resources consumed by invoked functions. The value is the memory usage multiplied by the duration for which provisioned instances are retained. The statistics are collected every day or every month. |
| EnhancedProvisionUsage | GB-s | The provisioned performance instance resources consumed by invoked functions. The value is the memory usage multiplied by the duration for which provisioned instances are retained. The statistics are collected every day or every month. |
Internet traffic | DataTransferInternetOut | GB | The traffic that is generated when functions access the Internet. The statistics are collected every day or every month. |
| InvokeInternetOut | GB | The traffic that is generated when Function Compute returns responses over the Internet after functions are executed. The statistics are collected every day or every month. |
| InvokeCDNOut | GB | The CDN back-to-origin traffic that is generated when Function Compute serves as the CDN origin. The statistics are collected every day or every month. |
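As the table states, the usage metrics in GB-s are the memory usage multiplied by the execution duration (or the duration for which provisioned instances are retained). The following sketch shows that arithmetic; the helper function is illustrative and not part of any Function Compute SDK:

```python
def usage_gb_seconds(memory_mb: float, duration_ms: float) -> float:
    # GB-s = memory in GB x execution (or retention) time in seconds
    return (memory_mb / 1024) * (duration_ms / 1000)

# Example: a 512 MB function that runs for 2,000 ms consumes 1 GB-s.
print(usage_gb_seconds(512, 2000))  # 1.0
```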
Region-specific metrics
Log on to the Function Compute console. In the left-side navigation pane, choose . On the page that appears, you can view the region-specific metrics.
The region-specific metrics are used to monitor and measure the resource usage of Function Compute in a region. The following table describes the region-specific metrics.
Metric type | MetricName | Unit | Description |
---|---|---|---|
Function execution | RegionTotalInvocations | Count | The total number of requests to invoke functions in a region. The statistics are collected every minute or every hour. |
Number of errors | RegionServerErrors | Count | The total number of invocation requests in a region that fail because of server errors. The statistics are collected every minute or every hour. Note: Requests for successful invocations of functions configured with HTTP triggers and for which an HTTP 5xx status code is returned are not counted. |
| RegionClientErrors | Count | The total number of invocation requests in a region that fail because of client errors and for which an HTTP 4xx status code is returned. The statistics are collected every minute or every hour. |
| RegionFunctionErrors | Count | The total number of invocation requests in a region that fail because of function errors. The statistics are collected every minute or every hour. |
Errors due to throttling | RegionThrottles | Count | The total number of invocation requests in a region that fail because of excessive concurrent instances and for which the HTTP 429 status code is returned. The statistics are collected every minute or every hour. |
| RegionResourceThrottles | Count | The total number of invocation requests in a region that fail because of excessive instances and for which the HTTP 503 status code is returned. The statistics are collected every minute or every hour. |
Number of on-demand instances | RegionConcurrencyLimit | Count | The maximum number of on-demand instances that can be concurrently occupied in a region within the current account. Default value: 300. |
| RegionConcurrentCount | Count | The number of on-demand instances that are concurrently occupied when you invoke functions in a region. The statistics are collected every minute or every hour. |
Number of provisioned instances | RegionProvisionedCurrentInstance | Count | The total number of provisioned instances that are created for all functions in a region within the current account. |
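Comparing RegionConcurrentCount against RegionConcurrencyLimit shows how close an account is to throttling (HTTP 429). A minimal sketch of that comparison; the helper function and the alerting use are illustrative, with 300 taken from the documented default limit:

```python
def concurrency_headroom(concurrent_count: int, limit: int = 300) -> int:
    # Remaining on-demand instance concurrency before invocation requests
    # start to be throttled (RegionThrottles, HTTP 429). The default
    # RegionConcurrencyLimit documented above is 300.
    return max(limit - concurrent_count, 0)

print(concurrency_headroom(120))  # 180
print(concurrency_headroom(400))  # 0 (already at or over the limit)
```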
Service-specific metrics
Log on to the Function Compute console. In the left-side navigation pane, choose . On the page that appears, click the name of the service whose metrics you want to view in the Service Name column.
The prefix of the metric name from the perspective of all functions in a service version or in a service alias is ServiceQualifier, such as ServiceQualifierTotalInvocations for the total number of requests to invoke the functions.
Metric type | MetricName | Unit | Description |
---|---|---|---|
Function execution | ServiceTotalInvocations | Count | The total number of requests to invoke functions in a service. The statistics are collected every minute or every hour. |
Number of errors | ServiceServerErrors | Count | The total number of invocation requests in a service that fail because of server errors. The statistics are collected every minute or every hour. Note: Requests for successful invocations of functions configured with HTTP triggers and for which an HTTP 5xx status code is returned are not counted. |
| ServiceClientErrors | Count | The total number of invocation requests in a service that fail because of client errors and for which an HTTP 4xx status code is returned. The statistics are collected every minute or every hour. |
| ServiceFunctionErrors | Count | The total number of invocation requests in a service that fail because of function errors. The statistics are collected every minute or every hour. |
Errors due to throttling | ServiceThrottles | Count | The total number of invocation requests in a service that fail because of excessive concurrent instances and for which the HTTP 429 status code is returned. The statistics are collected every minute or every hour. |
| ServiceResourceThrottles | Count | The total number of invocation requests in a service that fail because of excessive instances and for which the HTTP 503 status code is returned. The statistics are collected every minute or every hour. |
Number of on-demand instances in a region | RegionConcurrencyLimit | Count | The maximum number of on-demand instances that can be concurrently occupied in a region within the current account. Default value: 300. |
| RegionConcurrentCount | Count | The number of on-demand instances that are concurrently occupied when you invoke functions in a region. The statistics are collected every minute or every hour. |
Number of provisioned instances | FunctionProvisionedCurrentInstance | Count | The total number of provisioned instances that are created for all functions in the current service. |
Asynchronous invocations | ServiceEnqueueCount | Count | The number of enqueued requests for asynchronous invocations in a service. If the number of enqueued requests is much greater than the number of processed requests, a message backlog occurs. In this case, you can change the upper limit of provisioned instances for auto scaling or contact Function Compute engineers. For more information, see Configure auto scaling instances (including provisioned instances). |
| ServiceDequeueCount | Count | The number of processed requests for asynchronous invocations in a service. If the number of enqueued requests is much greater than the number of processed requests, a message backlog occurs. In this case, you can change the upper limit of provisioned instances for auto scaling or contact Function Compute engineers. For more information, see Configure auto scaling instances (including provisioned instances). |
Processing latency of an asynchronous message | ServiceAsyncMessageLatencyAvg | Milliseconds | The average latency between the enqueuing and processing of requests for asynchronous invocations in a service within a specified time range. If the average latency is excessively high, a message backlog occurs. In this case, you can change the upper limit of provisioned instances for auto scaling or contact Function Compute engineers. For more information, see Configure auto scaling instances (including provisioned instances). |
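The descriptions of ServiceEnqueueCount and ServiceDequeueCount above suggest a simple backlog check: compare enqueued requests with processed requests. A sketch of that check; the function name and the 2.0 ratio threshold are illustrative choices, not values defined by Function Compute:

```python
def has_backlog(enqueued: int, dequeued: int, ratio: float = 2.0) -> bool:
    # Per the table, a message backlog occurs when enqueued requests are
    # much greater than processed requests. "Much greater" is quantified
    # here with an illustrative 2.0 ratio threshold.
    if dequeued == 0:
        return enqueued > 0
    return enqueued / dequeued >= ratio

print(has_backlog(100, 40))  # True: 2.5x more enqueued than processed
print(has_backlog(50, 45))   # False: queue is keeping up
```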
Function-specific metrics
Log on to the Function Compute console. In the left-side navigation pane, choose . On the page that appears, click the name of the service whose metrics you want to view in the Service Name column. On the Service-level Monitoring page, click the name of the function whose metrics you want to view in the Function Name column.
- The prefix of the metric name from the perspective of all functions in a service version or in a service alias is FunctionQualifier, such as FunctionQualifierTotalInvocations for the total number of requests to invoke the functions.
- Function Compute can monitor and measure the function-specific CPU utilization, memory usage, and network traffic only after the collection of instance-level metrics is enabled. For more information about instance-level metrics, see Instance-level metrics.
Metric type | MetricName | Unit | Description |
---|---|---|---|
Number of requests | FunctionTotalInvocations | Count | The total number of requests to invoke a function by provisioned and on-demand instances. The statistics are collected every minute or every hour. |
| FunctionProvisionInvocations | Count | The total number of requests to invoke a function by provisioned instances. The statistics are collected every minute or every hour. |
Number of errors | FunctionServerErrors | Count | The total number of invocation requests of a function that fail because of server errors. The statistics are collected every minute or every hour. Note: Requests for successful invocations of a function configured with an HTTP trigger and for which an HTTP 5xx status code is returned are not counted. |
| FunctionClientErrors | Count | The total number of invocation requests of a function that fail because of client errors and for which an HTTP 4xx status code is returned. The statistics are collected every minute or every hour. |
| FunctionFunctionErrors | Count | The total number of invocation requests of a function that fail because of function errors. The statistics are collected every minute or every hour. |
Errors due to throttling | FunctionConcurrencyThrottles | Count | The total number of invocation requests of a function that fail because of excessive concurrent instances and for which the HTTP 429 status code is returned. The statistics are collected every minute or every hour. |
| FunctionResourceThrottles | Count | The total number of invocation requests of a function that fail because of excessive instances and for which the HTTP 503 status code is returned. The statistics are collected every minute or every hour. |
End-to-end latency | FunctionLatencyAvg | Milliseconds | The average end-to-end latency of function invocations, measured from the time when a request arrives at Function Compute to the time when the request leaves Function Compute, including the time consumed by the platform. The average value is calculated every minute or every hour. |
| FunctionLatencyMax | Milliseconds | The maximum end-to-end latency of function invocations, measured from the time when a request arrives at Function Compute to the time when the request leaves Function Compute, including the time consumed by the platform. The statistics are collected every minute or every hour. |
Number of requests concurrently processed by a single instance | FunctionConcurrentRequests | Count | The number of requests that are concurrently processed by an instance when a function is invoked. The statistics are collected every minute or every hour. Note: If you do not configure a single instance to concurrently process multiple requests for the function, a single instance processes one request at a time by default. To display this metric, enable the collection of instance-level metrics. For more information, see A single instance that concurrently processes multiple requests and Instance-level metrics. |
| FunctionOndemandActiveInstance | Count | The number of on-demand instances that are occupied to execute a function when the function is invoked. |
Number of provisioned instances for a function | FunctionProvisionedCurrentInstance | Count | The number of provisioned instances that are occupied to execute a function when the function is invoked. |
CPU utilization | FunctionCPUQuotaPercent | % | The CPU quota of a function when the function is invoked. The statistics are collected every minute or every hour. |
| FunctionCPUPercent | % | The CPU utilization of a function when the function is invoked. This metric indicates the number of used CPU cores. For example, 100% represents one CPU core. The total value for all instances of the function is calculated every minute or every hour. |
Memory usage | FunctionMemoryLimitMB | MB | The maximum size of the memory that can be used by a function when the function is invoked. If the function consumes more memory than this upper limit, an out-of-memory (OOM) error occurs. The maximum value for all instances of the function is calculated every minute or every hour. |
| FunctionMaxMemoryUsage | MB | The size of the memory consumed to execute a function when the function is invoked. This metric indicates the memory actually consumed by the function. The maximum value for all instances of the function is calculated every minute or every hour. |
Network traffic | FunctionRXBytesPerSec | kbps | The inbound traffic that is generated per unit time during function execution when a function is invoked. The total value for all instances of the function is calculated every minute or every hour. |
| FunctionTXBytesPerSec | kbps | The outbound traffic that is generated per unit time during function execution when a function is invoked. The total value for all instances of the function is calculated every minute or every hour. |
Asynchronous invocations | FunctionEnqueueCount | Count | The number of enqueued requests when a function is asynchronously invoked. The statistics are collected every minute or every hour. |
| FunctionDequeueCount | Count | The number of processed requests when a function is asynchronously invoked. The statistics are collected every minute or every hour. Note: If the number of enqueued requests is much greater than the number of processed requests, a message backlog occurs. In this case, you can change the upper limit of provisioned instances for auto scaling or contact Function Compute engineers. For more information, see Configure auto scaling instances (including provisioned instances). |
Processing latency of an asynchronous message | FunctionAsyncMessageLatencyAvg | Milliseconds | The average latency between the enqueuing and processing of requests when a function is asynchronously invoked. The average value is calculated every minute or every hour. |
| FunctionAsyncMessageLatencyMax | Milliseconds | The maximum latency between the enqueuing and processing of requests when a function is asynchronously invoked. The statistics are collected every minute or every hour. |
Events triggered during asynchronous invocations | FunctionAsyncEventExpiredDropped | Count | The total number of requests that are dropped when a destination is configured for asynchronous invocations of a function. The statistics are collected every minute or every hour. |
| FunctionDestinationErrors | Count | The number of requests that fail to trigger the destination during function execution when a destination is configured for asynchronous invocations of a function. The statistics are collected every minute or every hour. |
| FunctionDestinationSucceed | Count | The number of requests that trigger the destination during function execution when a destination is configured for asynchronous invocations of a function. The statistics are collected every minute or every hour. |
Resource usage | FunctionCost | MB × ms | The resources consumed by all functions in a service of a specified version or alias. The value is the memory usage multiplied by the function execution duration. The statistics are collected every minute or every hour. |
Request backlog | AsynchronousRequestsBacklogs | Count | The total number of requests that are queued and being processed. The statistics are collected every minute or every hour. Note: If the value is greater than 0, you can change the upper limit of provisioned instances for auto scaling or contact Function Compute engineers. For more information, see Configure auto scaling instances (including provisioned instances). |
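Per the table, FunctionCPUPercent expresses used CPU cores as a percentage (100% represents one core), and FunctionCPUQuotaPercent is the corresponding quota. A small illustrative sketch of reading the two metrics together; the helper names and the 0.9 alerting margin are assumptions, not part of Function Compute:

```python
def cpu_cores_used(cpu_percent: float) -> float:
    # FunctionCPUPercent: 100% corresponds to one full CPU core.
    return cpu_percent / 100

def near_cpu_quota(cpu_percent: float, quota_percent: float,
                   margin: float = 0.9) -> bool:
    # True when utilization reaches `margin` of FunctionCPUQuotaPercent;
    # the 0.9 margin is an illustrative alerting threshold.
    return cpu_percent >= margin * quota_percent

print(cpu_cores_used(150))      # 1.5 cores in use
print(near_cpu_quota(95, 100))  # True: within 10% of the quota
```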
References
For more information about how to call CloudMonitor API operations to view monitoring details, see Metrics data.