
Function Compute (2.0): Terms

Last Updated: Oct 24, 2023

This topic describes the terms in Function Compute to help you better understand and use Function Compute.

Overview

  • General concepts: service, function, Handler, version, alias, tag, layer, trigger, runtime, and custom domain name

  • Billing related: pay-as-you-go and resource plan

  • Instance related: elastic instance, GPU-accelerated instances, cold start, on-demand mode, provisioned mode, and instance concurrency

  • Function invocation: synchronous invocation, asynchronous invocation, insights, and Tracing Analysis

service

A service is a unit of resource management in Function Compute, based on the microservice architecture. From a business perspective, an application can be divided into multiple services. From a resource perspective, a service can contain multiple functions. For example, a data processing service may consist of two functions: a data preparation function that requires fewer resources and a data processing function that requires more. You can run the data preparation function on a low-specification instance, but the data processing function needs a high-specification instance because of its more demanding resource requirements.

Before you create a function, you must create a service. All functions in a service share some settings such as the permission settings and log settings of the service. For more information, see Manage services.

function

A function is a unit by which Function Compute schedules and runs resources. A Function Compute function consists of function code and function configurations.

A Function Compute function belongs to a service. All functions in a service share some settings such as the permission settings and log settings of the service. For more information, see Manage functions.

Function Compute supports two types of functions: event functions and HTTP functions. For more information about the differences between the two types of functions, see Function type selection.

Handler

When you create a function, you must specify a handler for the function. The Function Compute runtime loads and invokes the handler to process requests. Handlers are classified into the following types:
  • Event handler

    An event handler is used to process the event requests triggered by various event sources other than HTTP triggers, such as OSS triggers, Log Service triggers, and Message Queue for Apache RocketMQ triggers.

  • HTTP handler

    An HTTP handler processes the requests that are triggered by HTTP triggers. For more information, see Configure an HTTP trigger that invokes a function with HTTP requests.

You can configure the handler for a function by using the Request Handler parameter in the Function Compute console. For more information, see Create a function.
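As a concrete illustration, a minimal Python event handler might look like the sketch below. The file name `index.py`, the function name `handler`, and the JSON shape of the event are assumptions for this example; the handler string you configure (for example, `index.handler`) is what tells the runtime which function to load.

```python
import json

def handler(event, context):
    # The Function Compute Python runtime passes the raw event as bytes.
    # This sketch assumes the event source delivers a JSON payload.
    payload = json.loads(event)
    name = payload.get("name", "world")
    # The return value is serialized and, for synchronous invocations,
    # sent back to the caller.
    return json.dumps({"greeting": f"hello, {name}"})
```

If this code lives in `index.py`, set the Request Handler parameter to `index.handler`. The `context` argument carries request metadata such as the request ID and temporary credentials.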

version

A version can be considered as a snapshot of a service. A version contains information such as the service settings and the code and settings of the functions that belong to the service. A version does not contain trigger information. A version is similar to a commit in Git. Each commit contains changes to one or more code files or settings, and represents a snapshot of a repository at a specified point in time. For more information, see Manage versions.

alias

An alias can be considered as a pointer to a specific service version. You can use aliases to manage versions. For example, you can use aliases to release or roll back service versions, or implement canary releases. An alias is similar to a tag in Git. You can add a tag to a commit and release the commit to perform a business iteration. For more information, see Manage aliases.

tag

Tags are used to classify service resources, which facilitates resource search and aggregation. You can use the tagging feature to group services and authorize different roles to manage services in different groups. For more information, see Tag management.

layer

Layers allow you to publish and deploy custom resources such as public dependency libraries, runtimes, and function extensions. You can use a layer to abstract the public libraries on which a function depends. This reduces the size of the function code package when you deploy or update the function. You can also deploy a custom runtime as a layer to share the runtime among multiple functions. For more information, see Create a custom layer.

trigger

A trigger is a way to trigger function execution. In an event-driven computing model, an event source is an event producer, and a function is an event handler. Triggers manage different event sources in a centralized manner. When an event that matches the rules defined for a trigger occurs, the event source automatically invokes the function associated with the trigger. For more information, see Trigger overview.

runtime

A runtime is an environment in which functions are executed. Function Compute provides runtimes in multiple programming languages. For more information, see Function Compute runtimes.

You can also create custom runtimes or custom container runtimes. For more information, see the corresponding topics.

custom domain name

You can bind a custom domain name to an application or function that is configured with HTTP triggers. This allows users to access the application or function by using a fixed domain name. You can also configure the custom domain name as the origin domain name and add a CDN-accelerated domain name to the custom domain name. This allows users to access resources faster and improves the service quality by reducing access latency. For more information, see Configure a custom domain name.

pay-as-you-go

Pay-as-you-go is a billing method that allows you to use resources first and pay for them afterward. If you use the pay-as-you-go billing method, you pay only for the Function Compute resources that you use. You do not need to purchase resources in advance. For more information, see Pay-as-you-go.

resource plan

Resource plans are prepaid quotas that offset the fees of resource usage. Compared with pay-as-you-go, resource plans can be more cost-effective. Function Compute provides resource plans for various types of resource usage. For more information, see Resource plans.

elastic instance

Elastic instances are the basic instance type of Function Compute. They are suitable for scenarios with burst traffic and for compute-intensive workloads. For more information, see Instance types and usage modes.

GPU-accelerated instances

GPU-accelerated instances are based on the Turing architecture and use GPU hardware to accelerate workloads, which makes service processing more efficient. They are mainly used in scenarios such as audio and video processing, AI, and image processing. For more information, see Instance types and usage modes.

cold start

A cold start of a function includes steps such as code download, start of the instance that is used to execute the function, process initialization, and code initialization during the invocation of the function. After the cold start is complete, the instance that is used to execute the function is ready to process subsequent requests. For more information, see Best practice for reducing cold start latencies.
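One common in-code way to limit cold start cost is to perform expensive initialization once per instance and reuse it across requests. The sketch below caches a stand-in client in a module-level global; the names and the simulated setup delay are assumptions for illustration (the Python runtime also offers a separate initializer hook, not shown here).

```python
import time

_client = None  # reused across invocations on the same warm instance

def _get_client():
    """Create the expensive dependency only on the first request after a
    cold start; later requests on the same instance reuse the cached one."""
    global _client
    if _client is None:
        time.sleep(0.01)  # stand-in for slow setup, e.g. opening a connection
        _client = {"created_at": time.time()}
    return _client

def handler(event, context):
    client = _get_client()
    # Only the first invocation after a cold start pays the setup cost.
    return {"client_created_at": client["created_at"]}
```

Because the setup runs at most once per instance, only the first request after a cold start is slowed by it.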

on-demand mode

In on-demand mode, Function Compute automatically allocates and releases instances for functions. For more information, see the "On-demand mode" section in Instance types and usage modes.

provisioned mode

In provisioned mode, you allocate and release instances for functions yourself. Function Compute preferentially forwards requests to provisioned instances. When the provisioned instances are insufficient to handle the requests, the remaining requests are forwarded to on-demand instances. For more information, see the "Provisioned mode" section in Instance types and usage modes.

A provisioned instance is ready for use after it is created. This eliminates the impacts caused by cold starts.

If you create a fixed number of provisioned instances, not all the instances may be used. You can enable scheduled or metric-based auto scaling for provisioned instances to improve instance utilization.

scheduled auto scaling

Scheduled auto scaling allows you to configure a rule to automatically adjust the number of provisioned instances to a specified value at specified points in time. For more information, see Create an auto scaling rule for provisioned instances.

metric-based auto scaling

Metric-based auto scaling dynamically adjusts the number of provisioned instances by tracking metrics. For more information, see Create an auto scaling rule for provisioned instances.

instance concurrency

Instance concurrency indicates the number of requests that can be concurrently processed by a single instance. For more information, see Configure instance concurrency.
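Back-of-the-envelope sizing shows why instance concurrency matters: by Little's law, the number of in-flight requests is roughly the request rate multiplied by the per-request latency, and spreading those requests over the per-instance concurrency (rounded up) gives the busy-instance count. A sketch, with illustrative numbers that are not from the original text:

```python
import math

def instances_needed(qps: float, latency_s: float, concurrency: int) -> int:
    """Estimate busy instances via Little's law:
    in-flight requests = qps * latency, spread over `concurrency` slots."""
    in_flight = qps * latency_s
    return max(1, math.ceil(in_flight / concurrency))

# 100 requests/s at 200 ms each:
# concurrency 1 needs about 20 instances; concurrency 10 needs about 2.
```

Raising instance concurrency therefore reduces the number of instances (and cold starts) needed to absorb the same load, at the cost of requests sharing one instance's resources.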

synchronous invocation

In a synchronous invocation, the result is returned after an event is processed by a function. For more information, see Synchronous invocations.

asynchronous invocation

In an asynchronous invocation, a response is returned immediately after an event triggers a function; you do not need to wait for the function to finish processing the event. Function Compute processes the event but does not return the invocation details or the execution status of the function. To obtain the result of an asynchronous invocation, you must configure destinations for asynchronous invocations. If you want to track and save the state of an asynchronous invocation in each phase, you can enable the asynchronous task feature to process asynchronous requests. For more information, see Feature overview.
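With Alibaba Cloud's Python SDK for Function Compute (`fc2`), the invocation type is selected with the `x-fc-invocation-type` request header. The helper below only builds that header map; the actual SDK call is shown as a comment because the endpoint, credentials, and service/function names are assumptions for this sketch.

```python
def invocation_headers(async_invoke: bool) -> dict:
    # Function Compute distinguishes synchronous and asynchronous
    # invocations by this header value.
    return {"x-fc-invocation-type": "Async" if async_invoke else "Sync"}

# With the fc2 SDK (assumed installed and configured), roughly:
#   import fc2
#   client = fc2.Client(endpoint="<endpoint>", accessKeyID="<ak>",
#                       accessKeySecret="<sk>")
#   client.invoke_function("my-service", "my-function", payload=b"{}",
#                          headers=invocation_headers(async_invoke=True))
```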

insights

Insights is a feature that summarizes the execution states of function requests. After you enable Insights, the system collects metrics for each execution of a function. For more information, see Overview.

Tracing Analysis

Tracing Analysis provides a set of tools for distributed application development. These tools include those for trace mapping, request counting, trace topology, and application dependency analysis. You can use these tools to analyze and diagnose performance bottlenecks in a distributed application architecture and make microservice development and diagnostics more efficient. For more information, see Overview.