
Function Compute:Terms

Last Updated:Jan 24, 2025

This topic describes the terms in Function Compute to help you better understand and use Function Compute.

Overview

  • Common terms: service, function, handler, version, alias, tag, layer, trigger, runtime, and custom domain name

  • Billing-related terms: pay-as-you-go and resource plan

  • Instance-related terms: CPU instance, GPU-accelerated instance, cold start, on-demand mode, provisioned mode, and instance concurrency

  • Invocation-related terms: synchronous invocation, asynchronous invocation, invocation analysis, and Tracing Analysis

service

A service is a unit for resource management in Function Compute based on the microservice architecture. From a business perspective, an application consists of multiple services. From a resource usage perspective, a service consists of multiple functions. For example, a data processing service may consist of two functions: a data preparation function that requires fewer resources and a data processing function that requires more resources. You can run the data preparation function on a low-specification instance and the data processing function on a high-specification instance.

Before you create a function, you must create a service. All functions in a service share some settings such as the permission settings and log settings of the service. For more information about how to manage services, see Manage services.

function

A function is a unit by which Function Compute schedules and runs resources. A Function Compute function consists of function code and function configurations.

A Function Compute function belongs to a service. All functions in a service share some settings such as the permission settings and log settings of the service. For more information about how to manage functions, see Manage functions.

Function Compute supports two types of functions: event functions and HTTP functions. For more information about the differences between the two types of functions, see Function type selection.

handler

When you create a function, you must specify a handler for the function. The Function Compute runtime loads and invokes the handler to process requests. Handlers are classified into the following types:
  • Event handler

    An event handler is used to process the event requests triggered by various event sources other than HTTP triggers, such as OSS triggers, Log Service triggers, and Message Queue for Apache RocketMQ triggers.

  • HTTP handler

    An HTTP handler processes the requests that are triggered by HTTP triggers. For more information, see Configure and use an HTTP trigger.

You can configure the handler for a function by using the Request Handler parameter in the Function Compute console. For more information, see Create a function.
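In the Python runtime, for example, an event handler receives the event payload and a context object, while an HTTP handler follows the WSGI-style (environ, start_response) convention. The following is a minimal sketch; the function names and the echoed payload are illustrative:

```python
import json

# Event handler: invoked by non-HTTP event sources such as OSS or Log Service
# triggers. `event` is the raw event payload; `context` carries request metadata.
def event_handler(event, context):
    payload = json.loads(event)  # parse the event delivered by the trigger
    return json.dumps({"received": payload})

# HTTP handler: invoked by HTTP triggers. The Python runtime uses the
# WSGI-style (environ, start_response) signature.
def http_handler(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from Function Compute"]
```

Both handlers can be exercised locally before deployment by passing sample inputs, which is a convenient way to unit test function logic.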

version

A version can be considered a snapshot of a service. A version contains information such as the service settings and the code and settings of the functions that belong to the service. A version does not contain trigger information. A version is similar to a commit in Git: each commit contains changes to one or more code files or settings and represents a snapshot of the repository at a specific point in time. For more information, see Manage versions.

alias

An alias can be considered a pointer to a specific service version. You can use aliases to manage versions. For example, you can use aliases to release or roll back service versions, or to implement canary releases. An alias is similar to a tag in Git: you can add a tag to a commit and release that commit as a business iteration. For more information, see Manage aliases.

tag

Tags are used to classify service resources. This facilitates resource search and aggregation. You can also use tags to group services and assign varying permissions on those service groups to different roles. For more information, see Tag management.

layer

Layers allow you to publish and deploy custom resources such as public dependency libraries, runtimes, and function extensions. You can use a layer to abstract the public libraries on which a function depends. This reduces the size of the function code package when you deploy and update the function. You can also deploy a custom runtime as a layer to share the runtime among multiple functions. For more information, see Create a custom layer.

trigger

A trigger is the mechanism that initiates function execution. In an event-driven computing model, an event source is the event producer and a function is the event handler. Triggers allow you to manage different event sources in a centralized manner. When an event that matches the rules defined for a trigger occurs, the event source automatically invokes the function that is associated with the trigger. For more information, see Trigger overview.
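For example, an OSS trigger defines which object events invoke the function and which object keys they apply to. The following fragment sketches the shape of such a configuration; treat the exact field names and values as illustrative rather than an authoritative schema:

```json
{
  "triggerName": "oss-invoke-demo",
  "triggerType": "oss",
  "triggerConfig": {
    "events": ["oss:ObjectCreated:PutObject"],
    "filter": {
      "key": { "prefix": "src/", "suffix": ".zip" }
    }
  }
}
```

With this rule, only uploads of objects whose keys start with src/ and end with .zip invoke the associated function.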

runtime

A runtime is an environment in which functions are executed. Function Compute provides runtimes in multiple programming languages. For more information, see Function Compute runtimes.

You can also create custom runtimes or custom container runtimes to run functions in environments that the built-in runtimes do not provide.

custom domain name

You can bind a custom domain name to an application or function that is configured with HTTP triggers. This allows users to access the application or function by using a fixed domain name. You can also configure the custom domain name as the origin domain name and add a CDN-accelerated domain name to the custom domain name. This allows users to access resources faster and improves the service quality by reducing access latency. For more information, see Configure a custom domain name.

pay-as-you-go

Pay-as-you-go is a billing method that allows you to use resources first and pay for them afterward. If you use the pay-as-you-go billing method, you pay only for the Function Compute resources that you use. You do not need to purchase resources in advance. For more information, see Pay-as-you-go.

resource plan

A resource plan is a type of subscription billing method that offers higher discounts than the pay-as-you-go billing method. Function Compute provides resource plans of various types. For more information, see Resource plans.

CPU instance

CPU instances are the basic instance type of Function Compute. They are suitable for scenarios that involve burst traffic and compute-intensive workloads. For more information, see Instance types and usage modes.

GPU-accelerated instance

GPU-accelerated instances, which are based on the Turing architecture, use GPU hardware to accelerate workloads and make processing more efficient. They are mainly used in audio and video processing, AI, and image processing scenarios. For more information, see Instance types and usage modes.

cold start

A cold start occurs when a function is invoked and no warm instance is available to serve the request. It includes steps such as downloading the code, starting the instance that executes the function, initializing the process, and initializing the code. After the cold start is complete, the function instance is ready to process subsequent requests. For more information, see Best practice for reducing cold start latencies.

on-demand mode

In on-demand mode, Function Compute automatically allocates and releases instances for functions. For more information, see the "On-demand mode" section in Instance types and usage modes.

provisioned mode

In provisioned mode, you allocate and release function instances yourself. Function Compute preferentially routes requests to provisioned instances. If the provisioned instances cannot handle all of the requests, the excess requests are forwarded to on-demand instances. For more information, see the "Provisioned mode" section in Instance types and usage modes.

A provisioned instance is ready for use immediately after it is created, which eliminates the impact of cold starts.

If you create a fixed number of provisioned instances, some of them may sit idle. To improve utilization, you can enable scheduled or metric-based auto scaling for provisioned instances.
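The provisioned-first routing described above can be modeled with a few lines of arithmetic. This is a toy model of the behavior, not the real scheduler; the function name and parameters are illustrative:

```python
def route_requests(concurrent_requests, provisioned_instances,
                   per_instance_concurrency=1):
    """Toy model: fill provisioned capacity first, overflow to on-demand."""
    provisioned_capacity = provisioned_instances * per_instance_concurrency
    to_provisioned = min(concurrent_requests, provisioned_capacity)
    to_on_demand = concurrent_requests - to_provisioned
    return to_provisioned, to_on_demand
```

For example, with 3 provisioned instances that each handle 2 concurrent requests, 10 concurrent requests would be split into 6 on provisioned instances and 4 overflowing to on-demand instances.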

Scheduled auto scaling

Scheduled auto scaling allows you to configure a rule to automatically adjust the number of provisioned instances to a specific value at the specified points in time. For more information, see Configure provisioned instances and auto scaling rules.

Metric-based auto scaling

Metric-based auto scaling dynamically adjusts the number of provisioned instances based on values of tracking metrics. For more information, see Configure provisioned instances and auto scaling rules.

instance concurrency

Instance concurrency specifies the maximum number of requests that a single function instance can process concurrently. For more information, see Configure instance concurrency.
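The practical effect of raising instance concurrency is that fewer instances (and therefore fewer cold starts) are needed for the same load. A hedged back-of-the-envelope sketch, with an illustrative helper name:

```python
import math

def instances_needed(concurrent_requests, instance_concurrency):
    # With instance concurrency n, one instance serves up to n requests
    # at the same time, so the instance count scales down by a factor of n.
    return math.ceil(concurrent_requests / instance_concurrency)
```

For example, 100 concurrent requests need 100 instances at a concurrency of 1, but only 10 instances at a concurrency of 10.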

synchronous invocation

In a synchronous invocation, a response is returned only after a function finishes processing an event. For more information, see Synchronous invocations.

asynchronous invocation

In an asynchronous invocation, a response is returned immediately after an event triggers a function; you do not wait for the function to finish processing the event. Function Compute processes the event reliably but does not return the invocation result or execution status to the caller. To obtain the result of an asynchronous invocation, configure destinations for asynchronous invocations. For more information, see Overview. To track and save the state of an asynchronous invocation in each phase, enable the asynchronous task feature to process asynchronous requests. For more information, see Overview.
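The difference between the two invocation types can be sketched with a toy model: a synchronous call blocks until the result is available, while an asynchronous call returns immediately and delivers the result to a destination later (modeled here as a callback). This is a conceptual illustration, not the Function Compute API:

```python
import threading
import time

def process(event):
    time.sleep(0.01)  # simulate the function doing work
    return f"processed {event}"

def invoke_sync(event):
    # Synchronous: the caller blocks until the function returns its result.
    return process(event)

def invoke_async(event, on_success):
    # Asynchronous: return immediately; the result is delivered to a
    # configured destination (modeled as the on_success callback) later.
    worker = threading.Thread(target=lambda: on_success(process(event)))
    worker.start()
    return worker  # the caller receives an acknowledgment, not the result
```

The caller of invoke_async gets control back right away; the result only shows up once the background work completes and the destination callback fires.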

invocation analysis

The invocation analysis feature summarizes the execution states of function requests. After you enable this feature, the system collects metrics about each execution of a function. For more information, see Request-level metric logs.

Tracing Analysis

Tracing Analysis provides a set of tools for distributed application development, including trace mapping, request counting, trace topology, and application dependency analysis. You can use these tools to analyze and diagnose performance bottlenecks in a distributed application architecture and make microservice development and diagnostics more efficient. For more information, see Overview.