Explaining the Six Application Scenarios of the Serverless Architecture

Introduction

The Serverless architecture is set to become an important technical architecture in cloud computing and will be adopted by more and more businesses. This raises a practical question: in which scenarios does the Serverless architecture perform well, and in which scenarios does it fall short? In other words, which scenarios are best suited to the Serverless architecture?

Application scenarios of Serverless architecture

The application scenarios of the Serverless architecture are largely determined by its characteristics, while the triggers it supports determine the concrete use cases. As shown in Figure 1-1, the CNCF Serverless Whitepaper v1.0 describes the following user scenarios for the Serverless architecture.

● Asynchronous, concurrent workloads whose components can be deployed and scaled independently.

● Sudden or unpredictable service usage.

● Short-lived, stateless applications in scenarios that are insensitive to cold start time.

● Businesses requiring rapid development and iteration.

Figure 1-1 User scenarios suitable for the Serverless architecture as described in the CNCF Serverless Whitepaper v1.0

In addition to these four user scenarios derived from the characteristics of the Serverless architecture, the CNCF whitepaper also gives more concrete examples based on common triggers.

● Executing logic in response to database changes (insert, update, trigger, delete).

● Analyzing IoT sensor input messages (such as MQTT messages).

● Handling stream processing (analyzing or modifying data in motion).

● Running single-shot extract, transform, and load (ETL) jobs that require a large amount of processing in a short time.

● Providing cognitive computing (asynchronously) through a chatbot interface.

● Scheduling short-lived tasks, such as cron jobs or batch processing.

● Serving machine learning and artificial intelligence models.

● Running continuous integration pipelines that provision resources for build jobs on demand.

The CNCF Serverless Whitepaper v1.0 describes, from a theoretical standpoint, the scenarios and businesses suited to the Serverless architecture based on its characteristics. Cloud vendors, in turn, describe typical application scenarios of the Serverless architecture from their own business perspective.

In general, when object storage serves as a trigger for Serverless products, typical application scenarios include video processing and data ETL. An API gateway, by contrast, is more often used to expose functions to external requests; when it acts as the trigger for Serverless products, the typical application scenario is back-end services, including app back ends, website back ends, and even WeChat mini programs.

Some smart speakers also expose interfaces that invoke cloud functions through the API gateway to obtain the corresponding services. Besides object storage triggers and API gateway triggers, other common triggers include message queue triggers, Kafka triggers, and log triggers.
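As a rough illustration of how different triggers deliver events to the same function entry point, the sketch below distinguishes an API gateway event from an object storage event inside a single handler. The field names used to tell them apart are assumptions, since each vendor defines its own event schema.

```python
# Rough sketch of one function entry point receiving events from different
# triggers. The discriminating fields ("httpMethod" for an API gateway event,
# "events"/"bucket" for an object storage event) are assumptions.
import json


def handler(event, context):
    if isinstance(event, (str, bytes)):
        event = json.loads(event)

    if "httpMethod" in event:
        # API gateway trigger: behave like a web back end.
        return {"statusCode": 200, "body": json.dumps({"source": "api-gateway"})}

    if "events" in event or "bucket" in event:
        # Object storage trigger: process the uploaded object.
        return {"source": "object-storage", "object": event.get("object")}

    # Message queue, Kafka, log, or timer triggers would be handled here.
    return {"source": "other-trigger"}
```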

1. Web application or mobile application backend

By combining the Serverless architecture with other cloud products offered by cloud vendors, developers can build elastic, scalable mobile or Web applications and easily create rich serverless back ends, and these applications can run across multiple data centers. Figure 1-2 shows an example of Web application back-end processing.

Figure 1-2 Example of Web application back-end processing
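The following is a minimal sketch of such a back end: a single function invoked through an API gateway that routes the request and returns JSON. The event fields (path, httpMethod) and the response shape are common patterns but vary by vendor, so treat them as assumptions.

```python
# Minimal sketch of an HTTP-triggered function behind an API gateway.
# The event structure (path, httpMethod) is a common pattern but varies
# by cloud vendor; treat these field names as assumptions.
import json


def handler(event, context):
    """Entry point invoked by the API gateway trigger."""
    request = json.loads(event) if isinstance(event, (str, bytes)) else event
    path = request.get("path", "/")
    method = request.get("httpMethod", "GET")

    # Route the request to lightweight business logic.
    if method == "GET" and path == "/user":
        body = {"id": 1, "name": "demo-user"}
        status = 200
    else:
        body = {"message": "not found"}
        status = 404

    # Return a response in the shape the API gateway expects.
    return {
        "statusCode": status,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }
```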

2. Real-time file and data processing

In video applications, social applications, and similar scenarios, users upload large volumes of images, audio, and video at high frequency, which places demanding real-time and concurrency requirements on the processing system. In such cases, multiple functions can be used to process user-uploaded images, performing image compression, format conversion, and other operations to meet the needs of different scenarios. Figure 1-3 shows an example of real-time file processing.

Figure 1-3 Example of real-time file processing
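A hedged sketch of one such function is shown below: it reacts to an object storage upload, generates a compressed thumbnail, and writes it back. The event fields are assumptions, `get_object` and `put_object` are placeholders for the vendor SDK, and Pillow is assumed to be packaged with the function.

```python
# Sketch of an object-storage-triggered image function: it downloads the
# uploaded image, generates a compressed thumbnail, and writes it back.
# `get_object` / `put_object` stand in for the vendor SDK calls.
import io

from PIL import Image


def get_object(bucket: str, key: str) -> bytes:
    """Placeholder for the cloud vendor's download API."""
    raise NotImplementedError


def put_object(bucket: str, key: str, data: bytes) -> None:
    """Placeholder for the cloud vendor's upload API."""
    raise NotImplementedError


def handler(event, context):
    # A typical object storage event carries the bucket and object key
    # (assumed field names).
    record = event["events"][0] if "events" in event else event
    bucket = record["bucket"]
    key = record["object"]

    image = Image.open(io.BytesIO(get_object(bucket, key)))
    image.thumbnail((640, 640))          # resize for the thumbnail scenario

    buffer = io.BytesIO()
    image.convert("RGB").save(buffer, format="JPEG", quality=80)
    put_object(bucket, f"thumbnails/{key}", buffer.getvalue())

    return {"processed": key}
```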

With the rich event sources and event-triggering mechanisms supported by the Serverless architecture, a small amount of code and simple configuration is enough to process data in real time, for example decompressing archives uploaded to object storage, cleaning data in logs or databases, and consuming MNS messages with custom logic. Figure 1-4 shows an example of real-time data processing.

Figure 1-4 Example of real-time data processing
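The sketch below illustrates the data-cleaning variant of this scenario: a function receives raw log lines from a message queue or log trigger, drops malformed records, and normalizes the rest. The event shape is an assumption, and the downstream sink is left as a comment.

```python
# Sketch of a message-triggered data-cleaning function. The "records"
# event field is an assumption; the downstream sink (database, search
# index, data warehouse) is left out for brevity.
import json
from datetime import datetime, timezone


def clean_record(raw_line: str):
    """Return a normalized record, or None if the line is malformed."""
    try:
        record = json.loads(raw_line)
        return {
            "ts": datetime.fromtimestamp(record["timestamp"], tz=timezone.utc).isoformat(),
            "level": record.get("level", "INFO").upper(),
            "message": record["message"].strip(),
        }
    except (json.JSONDecodeError, KeyError, TypeError, ValueError, AttributeError):
        return None


def handler(event, context):
    lines = event.get("records", [])          # assumed event field
    cleaned = [r for r in map(clean_record, lines) if r is not None]
    # Write `cleaned` to a database, search index, or data warehouse here.
    return {"received": len(lines), "kept": len(cleaned)}
```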

3. Offline data processing

Traditionally, processing big data means setting up Hadoop, Spark, or another big data framework and maintaining a cluster. With Serverless technology, we only need to keep writing the collected data to object storage; the trigger configured on the object storage invokes a splitting function that partitions the data or tasks, the corresponding processing functions are then invoked, and the results are finally written to a cloud database.

For example, a securities company might aggregate transactions every 12 hours and work out the top five trades in that period, or a flash-sale website might process its transaction logs once a day to extract the errors caused by items selling out, so as to accurately analyze product popularity and trends. The virtually unlimited scaling capability of function compute lets users process large volumes of data with ease.

Using the Serverless architecture, mapper and reducer functions can be executed concurrently over the source data to complete the job in a short time. Compared with the traditional working mode, the Serverless architecture avoids idle resources and saves cost. The data ETL processing flow can be simplified as shown in Figure 1-5.

Figure 1-5 Example of data ETL processing
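The sketch below shows the mapper/reducer split in miniature, using the securities top-five example. The two handlers are written as ordinary function entry points; the local driver that fans them out with a thread pool only simulates what the platform and the splitting function would do in a real deployment.

```python
# Minimal sketch of the mapper/reducer split: two function handlers plus a
# local driver that simulates the platform invoking the mappers concurrently.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor


def mapper(event, context=None):
    """Count trades per symbol in one shard of the transaction log."""
    counts = Counter(line.split(",")[0] for line in event["lines"] if line)
    return dict(counts)


def reducer(event, context=None):
    """Merge the partial counts and keep the top five symbols."""
    total = Counter()
    for partial in event["partials"]:
        total.update(partial)
    return total.most_common(5)


if __name__ == "__main__":
    shards = [
        {"lines": ["AAPL,100", "MSFT,50", "AAPL,70"]},
        {"lines": ["GOOG,30", "AAPL,10", "MSFT,20"]},
    ]
    with ThreadPoolExecutor() as pool:                 # simulate concurrent mappers
        partials = list(pool.map(mapper, shards))
    print(reducer({"partials": partials}))             # top-5 result
```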

4. Artificial intelligence

After an AI model has been trained and is exposed for inference, the model can be packaged into a function under the Serverless architecture, and the code runs only when an actual user request arrives. Compared with traditional inference serving, the advantage of this approach is that the function modules, the back-end GPU servers, and other connected machine learning services are all pay-as-you-go and automatically scaled, ensuring both service stability and performance. Figure 1-6 shows an example of machine learning (AI inference) processing.

Figure 1-6 Example of machine learning (AI inference) processing
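A common pattern in this scenario is to load the model once at cold start and reuse it across warm invocations, so that only the handler body runs per request. The sketch below assumes a hypothetical `load_model` helper and model path standing in for whatever framework (ONNX Runtime, TensorFlow, scikit-learn, and so on) is packaged with the function.

```python
# Sketch of a serverless inference function. The model is cached in a
# module-level variable so warm invocations reuse it; `load_model`, the
# model path, and the predict() interface are hypothetical placeholders.
import json

MODEL_PATH = "/opt/model/model.bin"   # assumed packaging location


def load_model(path):
    """Placeholder for framework-specific model loading."""
    raise NotImplementedError


_model = None                          # cached across warm invocations


def handler(event, context):
    global _model
    if _model is None:                 # cold start: pay the load cost once
        _model = load_model(MODEL_PATH)

    features = json.loads(event["body"])["features"]
    prediction = _model.predict([features])[0]

    return {
        "statusCode": 200,
        "body": json.dumps({"prediction": float(prediction)}),
    }
```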

5. Internet of Things (IoT)

Many manufacturers are now launching their own smart speaker products: a user speaks to the smart speaker, the speaker sends the utterance over the Internet to a back-end service, and the result is returned to the user. With the Serverless architecture, manufacturers can combine API gateways, cloud functions, and database products to replace traditional servers or virtual machines.

On the one hand, the Serverless architecture keeps resources pay-as-you-go, that is, the function part is charged only when users actually invoke it. On the other hand, as the number of users grows, the smart speaker back end built on the Serverless architecture scales elastically to keep the user-facing service stable; and because maintenance is confined to individual functions, changes do not bring additional risk to the main process, making the system comparatively safer and more stable. Figure 1-7 shows an example of IoT back-end processing.

Figure 1-7 Example of IoT back-end processing
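The sketch below illustrates such a back end: the API gateway forwards the recognized utterance to a function, which maps it to an intent and returns a reply. The intent table is kept in memory here for brevity (in practice it would live in a cloud database), and the event fields are assumptions.

```python
# Sketch of a smart speaker back end behind an API gateway. The event
# fields ("body", "utterance") are assumptions; the in-memory intent
# table stands in for a cloud database lookup.
import json

INTENT_REPLIES = {
    "weather": "Today will be sunny with a high of 25 degrees.",
    "time": "It is ten o'clock in the morning.",
}


def detect_intent(utterance: str) -> str:
    """Very rough keyword matching standing in for a real NLU service."""
    text = utterance.lower()
    if "weather" in text:
        return "weather"
    if "time" in text:
        return "time"
    return "unknown"


def handler(event, context):
    payload = json.loads(event["body"])
    utterance = payload.get("utterance", "")
    intent = detect_intent(utterance)
    reply = INTENT_REPLIES.get(intent, "Sorry, I did not catch that.")

    # The reply travels back through the API gateway to the speaker.
    return {
        "statusCode": 200,
        "body": json.dumps({"intent": intent, "reply": reply}),
    }
```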

6. Monitoring and automated operations and maintenance

In production we often need monitoring scripts to check whether website or API services are healthy, covering availability and response time. The traditional approach is to rely on website monitoring platforms (such as DNSPod monitoring, site monitoring services, or Alibaba Cloud monitoring) for monitoring and alerting.

These platforms work by letting users register the websites to monitor together with an expected latency threshold; servers deployed by the platform in various regions then send requests periodically to judge the availability of the websites or services. Although such platforms are versatile, they do not fit every need. Suppose, for example, that we want to monitor a website's status code and its latency from different regions against a latency threshold, and have the platform notify us by email when the status is abnormal or the latency is too high.

Most monitoring platforms cannot satisfy such a customized requirement directly, which makes it especially important to build our own website status monitoring tool. In addition, in day-to-day production operations we also need to monitor and raise alerts for the cloud services we use: with Hadoop or Spark, the health of the nodes; with Kubernetes, metrics such as those of the API Server and etcd; with Kafka, the data backlog along with topic and consumer metrics.

The health of these services often cannot be judged from a simple URL and status code. In traditional operations we would usually set up a scheduled task on an extra machine to probe the related services out of band. This is exactly where the Serverless architecture applies: operations, monitoring, and alerting is an important application scenario, because a function combined with a timer trigger can periodically probe resources and detect their health status. Figure 1-8 shows an example of website monitoring and alerting.
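A hedged sketch of such a timer-triggered monitor is shown below: it probes a list of URLs, checks the status code and latency against a threshold, and raises an alert when either check fails. The URLs, the threshold, and `send_alert_email` are placeholders for whatever endpoints and notification service are actually used.

```python
# Sketch of a timer-triggered monitoring function: probe URLs, check the
# status code and latency, and alert on failure. `send_alert_email` is a
# placeholder for the mail or notification service actually used.
import time
import urllib.error
import urllib.request

MONITORED_URLS = ["https://example.com", "https://example.com/api/health"]
LATENCY_THRESHOLD_SECONDS = 2.0


def send_alert_email(subject: str, body: str) -> None:
    """Placeholder: wire this to an SMTP relay or a cloud mail service."""
    print(f"ALERT: {subject}\n{body}")


def check_url(url: str) -> dict:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            status = response.status
    except urllib.error.HTTPError as exc:
        status = exc.code                  # server responded with an error status
    except urllib.error.URLError:
        status = None                      # connection failed or timed out
    latency = time.monotonic() - start
    healthy = status == 200 and latency <= LATENCY_THRESHOLD_SECONDS
    return {"url": url, "status": status, "latency": round(latency, 3), "healthy": healthy}


def handler(event, context):
    """Entry point invoked by the timer (cron) trigger."""
    results = [check_url(url) for url in MONITORED_URLS]
    failures = [r for r in results if not r["healthy"]]
    if failures:
        send_alert_email("Website monitoring alert", "\n".join(map(str, failures)))
    return {"checked": len(results), "failures": len(failures)}
```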
