
Getting Started with Serverless: Serverless Architectures

This article discusses how going serverless allows an organization to solve practical problems with less effort and fewer resources.


By Hongqi, Alibaba Cloud Senior Technical Expert

According to the definition of serverless computing provided by the Cloud Native Computing Foundation (CNCF), serverless architectures solve problems by combining function as a service (FaaS) and backend as a service (BaaS). This definition clarifies the nature of serverless, but it has also caused confusion and debate.

To meet emerging needs and keep pace with technological developments, industry participants have launched non-FaaS serverless computing services, such as Google Cloud Run, Alibaba Cloud Serverless App Engine (SAE), and Serverless Kubernetes. Based on serverless technology, these services provide auto scaling capabilities and support pay-as-you-go billing, broadening the scenarios in which serverless computing applies.

  • To eliminate the impact of cold starts, FaaS-based serverless computing services provide the reservation feature, so they are not completely pay-as-you-go. These services include Alibaba Cloud Function Compute and AWS Lambda.
  • Some serverful backend services also provide serverless features, such as Amazon Aurora Serverless and Alibaba Cloud ApsaraDB for HBase Serverless Edition.

The emergence of these services blurs the boundaries of serverless computing, and many cloud services are evolving toward the serverless form. But how can we solve business problems based on such a fuzzy concept? One design goal of serverless has remained unchanged: letting you focus on business logic, regardless of the underlying servers, auto scaling capabilities, or pay-as-you-go billing methods.

Ben Kehoe, a well-known serverless expert, describes serverless as a state of mind. Keep the following questions in mind as you run your business:

  • What is my business?
  • Will this work help my business thrive?
  • If not, why am I doing it myself instead of having someone else solve the problem?
  • There is no need to solve technical problems before there is a business problem to solve.

When you build a serverless architecture, focus on your business logic instead of spending time choosing cutting-edge services and technologies to fix technical pain points. Once you understand your business logic, it is easier to select suitable technologies and services and figure out a workable plan for the application architecture. Going serverless allows you to solve practical problems while staying focused on your business, with less work and fewer resources, because you transfer part of the work to others.

The following section explains how serverless architectures are applied to common scenarios. We will look at architecture design from the perspectives of computing, storage, and message transmission. We will also weigh the pros and cons of architectures in terms of maintainability, security, reliability, scalability, and costs. To make the discussion more practical, I will use specific services as examples. You can also try out other services because these architectures are universal.

Static Websites

[Figure 2]

For example, assume you are about to build a simple informational website, like online yellow pages. The following three solutions are available:

  • Solution 1: Purchase a server, host it in an Internet data center (IDC), and run the website on it.
  • Solution 2: Purchase a cloud server from a cloud vendor to run the website and purchase a load balancing service and multiple servers to ensure high availability.
  • Solution 3: Build a static website on an object storage service, such as Alibaba Cloud Object Storage Service (OSS), and use Content Delivery Network (CDN) to serve traffic with OSS as the origin.

[Figure 3]

Going from Solution 1 to Solution 3 takes you into the realm of serverless. In other words, you move your business to the cloud and stop managing servers yourself. What changes do you experience when you go serverless? Solution 1 and Solution 2 require you to handle a series of tasks, including capacity planning, scale-out, high availability, and manual monitoring. This was not what Jack Ma wanted when, in the early days, he just wanted to build an informational website to introduce China to the world; that was his business logic. So, go serverless if you just want to focus on your business. Solution 3 builds a static website on a serverless architecture (a minimal code sketch of this setup follows the list below). It has the following advantages over the other two solutions:

  • Maintainability: You do not need to manage servers, apply operating system security patches, handle failures, or ensure high availability yourself. These tasks are completed by cloud services such as OSS and CDN.
  • Scalability: You do not need to estimate resources or consider future scale-out. OSS is elastic, and CDN minimizes system latency, reduces costs, and improves availability.
  • Costs: You only pay for the resources you use, including storage fees and request fees. You are not charged when no requests are sent.
  • Security: In a serverless system, no servers are exposed to you, and SSH logon is not required. Distributed denial of service (DDoS) attacks are handled by the cloud services.
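
To make Solution 3 concrete, here is a minimal sketch of the setup using the oss2 Python SDK. The bucket name, region endpoint, and file paths are hypothetical, and CDN acceleration is configured separately with the bucket's website endpoint as its origin.

    import oss2

    # Placeholder credentials; replace with your own or use a RAM role.
    auth = oss2.Auth("<ACCESS_KEY_ID>", "<ACCESS_KEY_SECRET>")
    bucket = oss2.Bucket(auth, "https://oss-cn-hangzhou.aliyuncs.com", "my-yellow-pages")

    # Enable static website hosting: index.html is the default document,
    # and error.html is returned for missing pages.
    bucket.put_bucket_website(oss2.models.BucketWebsite("index.html", "error.html"))

    # Upload the site's pages; no web server is involved anywhere.
    bucket.put_object_from_file("index.html", "site/index.html")
    bucket.put_object_from_file("error.html", "site/error.html")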

Standalone Applications and Microservice Applications

[Figure 4]

Static pages and websites are suited to small volumes of content that is updated infrequently, whereas dynamic pages and websites serve large volumes of frequently updated content. For example, you cannot use static pages to serve the item information on Taobao item pages. We can enable dynamic responses to user requests by using the following solutions:

  • Develop a standalone web application to implement all types of application logic with the support of a database. This solution uses a layered architecture to quickly implement relatively simple applications.
  • Develop a microservice application for each element of an item page, such as the comments section, sales information, and shipping information. Each element becomes an execution unit split out of the logic of the standalone application. This split becomes necessary as your business and team grow, you add more features to the website, and access traffic increases. Each unit in the microservices model is highly autonomous and easy to develop (for example, with different technologies), deploy, and scale out. However, the microservices model introduces issues typical of distributed systems, such as load balancing for inter-service communication and fault handling.

You can choose an appropriate solution to solve your major business problems based on the development phase and scale of your organization. Taobao's initial success had nothing to do with the technical architecture it used. No matter what architecture you use, serverless as a state of mind helps you focus on your business. For example:

  • You can determine whether to purchase a server, install a database on it, and handle high availability, backups, and version upgrades yourself, or let ApsaraDB for RDS manage these tasks. You can determine whether to use Tablestore, ApsaraDB for HBase Serverless Edition, or another serverless database service to elastically scale resources in and out and only pay for the resources you actually use.
  • You can determine whether to purchase a server to run standalone applications, or use services such as Function Compute and SAE to manage the applications.
  • You can determine whether to use functions to implement lightweight microservices and rely on the capabilities built into Function Compute, such as load balancing, auto scaling, pay-as-you-go billing, log collection, and system monitoring (see the handler sketch after this list).
  • If you implement microservice applications based on Spring Cloud, Dubbo, or High-speed Service Framework (HSF), you can determine whether to purchase a server to deploy these applications and manage service discovery, load balancing, auto scaling, circuit breaking, and system monitoring yourself, or hand these tasks over to SAE.
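
To illustrate the function-based option above, here is a minimal sketch of an "item comments" microservice implemented as a single HTTP-triggered function. It assumes Function Compute's WSGI-style Python handler for HTTP triggers, and the in-memory data is a stand-in for what would normally be a serverless database such as Tablestore.

    import json
    from urllib.parse import parse_qs

    # Stand-in data; a real service would query a serverless database instead.
    FAKE_COMMENTS = {"item-001": ["Great product", "Fast shipping"]}

    def handler(environ, start_response):
        # Parse a query string such as "item_id=item-001".
        params = parse_qs(environ.get("QUERY_STRING", ""))
        item_id = params.get("item_id", [""])[0]

        body = json.dumps({
            "item_id": item_id,
            "comments": FAKE_COMMENTS.get(item_id, []),
        })
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body.encode("utf-8")]

Note that load balancing, auto scaling, per-request billing, log collection, and monitoring all come from the platform rather than from code you write and maintain.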

[Figure 5]

The architecture shown in the right part of the preceding figure introduces API Gateway, Function Compute, or SAE to implement a computing layer. A large number of tasks are completed by cloud services, allowing you to focus on your business logic. The following figure shows the interactions between microservices in the system. An item aggregation service presents internal microservices to external users in a unified manner. The microservices can be implemented by using SAE or functions.

[Figure 6]

The architecture can be extended to support access by different clients, as shown in the right part of the preceding figure. Different clients may need different information; for example, you can have your website recommend items to mobile phone users based on their locations. Can we develop a serverless architecture that benefits both mobile phone users and web browser users? The answer lies in the Backend for Frontend (BFF) pattern, which frontend engineers have embraced, especially when it is built on serverless technology: they can write BFF code directly from the business perspective without having to handle complex server-side details.
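
To make the BFF idea more tangible, the following conceptual sketch shows an aggregation function that calls the internal microservices in parallel and trims the payload for mobile clients. The endpoint URLs, field names, and trimming rule are all hypothetical.

    import json
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical internal endpoints for the item page's microservices.
    SERVICES = {
        "comments": "http://comments.internal/items/{id}",
        "sales": "http://sales.internal/items/{id}",
        "shipping": "http://shipping.internal/items/{id}",
    }

    def fetch(name, url):
        with urllib.request.urlopen(url, timeout=2) as resp:
            return name, json.loads(resp.read())

    def aggregate_item(item_id, client="web"):
        # Fan out to the internal services in parallel and merge the results.
        with ThreadPoolExecutor() as pool:
            results = dict(pool.map(
                lambda entry: fetch(entry[0], entry[1].format(id=item_id)),
                SERVICES.items(),
            ))
        if client == "mobile":
            # Assume the comments service returns a list; mobile clients
            # receive only the first few entries to keep the payload small.
            results["comments"] = results["comments"][:3]
        return results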

Event Triggers

Dynamic pages are generated synchronously while a request is processed. However, some requests take a long time to process. For example, the pictures and videos that users post in the comments section must be managed so that they can be displayed or played back properly on different clients. This content management covers uploading the files, generating thumbnails, adding watermarks, and reviewing the content.

[Figure 7]

The technical architectures used to process uploaded multimedia files in real time have evolved as follows:

[Figure 8]

  • Serverful monolithic architecture: Multimedia files are uploaded to a server for processing. The server also processes multimedia display requests.
  • Server-based microservices model: A server processes uploaded multimedia files and then transfers them to OSS, with the file addresses being added to a message queue. Another server processes files and then stores the processing results in OSS. Multimedia display requests are jointly completed by OSS and CDN.
  • Serverless architecture: Multimedia files are directly uploaded to OSS, which implements the event trigger capability to trigger functions to process the files. The processing results are stored in OSS. Multimedia display requests are jointly completed by OSS and CDN.

If you choose a serverful monolithic architecture, you have to consider the following issues:

  • How do I process a large number of files? A single server provides limited storage space, so you have to purchase more servers.
  • How do I scale out a web application server? Is the web application server suitable for processing CPU-intensive tasks?
  • How do I ensure high availability for processing upload requests?
  • How do I ensure high availability for processing display requests?
  • How do I deal with peaks and valleys in request loads?

A server-based microservices model can help you fix most of the preceding issues, but new issues emerge, for example:

  • Managing the high availability and elasticity of application servers
  • Managing the elasticity of file processing servers
  • Managing the elasticity of message queues

A serverless architecture solves all of the preceding issues. With serverless, a series of tasks that developers used to handle themselves, including load balancing, server high availability and auto scaling, and message queue management, are transferred to services that perform them automatically. As the architecture evolves, developers handle fewer of these tasks even as the system grows more capable, so more effort can be devoted to the business, which significantly accelerates business delivery.

A serverless architecture provides the following benefits:

  • Event trigger capability: Function Compute and OSS (which provides event sources) are natively integrated, which removes the need for queue resource management. Queues are automatically scaled out so that multimedia files can be processed immediately after being uploaded.
  • High elasticity and pay-as-you-go billing: Different specifications of computing resources are required to process pictures and videos of different sizes. Different volumes of resources are required to process traffic peaks and valleys. This elasticity in resource usage is guaranteed by services. Resources can be scaled in and out as needed so that you can fully utilize resources and never pay for idle resources.

Event triggers are an important feature of FaaS. The publish/subscribe (pub/sub) event-driven model is not new. However, before serverless architectures were widely adopted, we had to implement event production and consumption ourselves and set up the connecting infrastructure in between, much as in the server-based microservices model described above.

In a serverless architecture, events are sent by the producer, and you can focus on the consumer logic without having to maintain the intermediate connections. This is a major benefit of going serverless.
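
As an illustration of the consumer side, here is a minimal sketch of a function that reacts to an OSS object-created event by generating a thumbnail and writing it back to OSS. The event structure, region endpoint, and thumbnail prefix are assumptions; check the payload your OSS trigger actually delivers, and in production use the temporary credentials of the function's service role rather than hard-coded keys.

    import io
    import json

    import oss2
    from PIL import Image  # Pillow, packaged with the function code

    def handler(event, context):
        evt = json.loads(event)
        record = evt["events"][0]                   # assumed event shape
        bucket_name = record["oss"]["bucket"]["name"]
        key = record["oss"]["object"]["key"]

        # Placeholder credentials; prefer the function's role credentials.
        auth = oss2.Auth("<ACCESS_KEY_ID>", "<ACCESS_KEY_SECRET>")
        bucket = oss2.Bucket(auth, "https://oss-cn-hangzhou.aliyuncs.com", bucket_name)

        # Download the original image, shrink it, and upload the thumbnail.
        image = Image.open(io.BytesIO(bucket.get_object(key).read())).convert("RGB")
        image.thumbnail((200, 200))
        out = io.BytesIO()
        image.save(out, format="JPEG")
        bucket.put_object("thumbnails/" + key, out.getvalue())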

Function Compute is also integrated with the event sources of other cloud services, allowing you to conveniently apply common modes to your businesses, such as pub/sub mode, event streaming mode, and event sourcing mode.

[Figure 9]

Service Orchestration

As mentioned above, an item page contains complex elements, but only read operations can be performed on the page. The aggregation service APIs are stateless and synchronous. Now, let's look at an order placement process, which is a core scenario of e-commerce.

[Figure 10]

This process involves a series of distributed writes, which are difficult to handle in a microservices model. In a standalone application, they are easy to handle because only one database is involved and data consistency can be maintained through database transactions. In practice, however, you may have to call external services and need a way to ensure that each step either completes or is rolled back cleanly. A classic solution is the Saga model, which can be implemented based on two different architectures.

In one architecture, the progress of the steps is driven by events. The relevant services, such as the inventory service, listen to events on a message bus. The listeners can run on servers or be implemented as functions; by integrating Function Compute with message topics, this architecture can do away with servers entirely.

The modules of this architecture are loosely coupled and have clear responsibilities. However, the architecture becomes increasingly difficult to maintain as more complex steps are added: the overall business logic is hard to follow and the execution status is hard to track.

[Figure 11]

The other architecture is based on workflows. The services are all independent of each other, and information is not transmitted through events. Instead, a centralized coordinator service schedules the individual business services and maintains the business logic and state. However, the following issues arise from this centralized coordinator:

  • You have to write a large amount of code to implement functions such as orchestration logic, state maintenance, and error retry, but these functions are difficult for other applications to reuse.
  • You have to maintain the infrastructure used to run orchestration applications to ensure the high availability and scalability of these applications.
  • You have to consider state persistence to support the long-term progression of multi-step processes and maintain transactional processes.

You can use cloud services, such as Alibaba Cloud Serverless Workflow, to complete the preceding tasks automatically so that you only need to focus on your business logic.
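
To see what the coordinator would otherwise have to do by hand, here is a conceptual Python sketch of the Saga pattern: each step is paired with a compensating action, and a failure rolls back the steps that have already completed. The step functions are hypothetical placeholders for calls to your order, inventory, and payment services; a workflow service adds the state persistence, retries, and observability that this sketch leaves out.

    # Hypothetical business steps and their compensating actions.
    def create_order(ctx): ...
    def cancel_order(ctx): ...
    def reserve_stock(ctx): ...
    def release_stock(ctx): ...
    def charge_payment(ctx): ...
    def refund_payment(ctx): ...

    # (step, compensation) pairs, executed in order.
    SAGA = [
        (create_order, cancel_order),
        (reserve_stock, release_stock),
        (charge_payment, refund_payment),
    ]

    def run_saga(ctx):
        completed = []
        try:
            for step, compensate in SAGA:
                step(ctx)
                completed.append(compensate)
        except Exception:
            # Roll back the completed steps in reverse order, then re-raise.
            for compensate in reversed(completed):
                compensate(ctx)
            raise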

The corresponding flowchart is shown in the right part of the following figure. It implements the Saga model as a workflow, which significantly simplifies the process and improves observability.

[Figure 12]

Data Pipelines

As your business grows and generates more data, you can mine that data for value. For example, you can analyze how users behave on your website and make recommendations to them accordingly. A data pipeline includes data collection, processing, and analytics. It is possible but difficult to build a data pipeline from scratch. Remember that our business here is e-commerce, not providing a data pipeline service. Once you have set a goal for your business, you will find it easier to choose ways to achieve it.

  • Log Service (SLS) provides data collection, analytics, and shipping features.
  • Function Compute can process the data in SLS in real time and write the results to other services, such as SLS and OSS (a small ETL sketch follows this list).
  • Alibaba Cloud Serverless Workflow supports scheduled data processing in batches and allows you to use functions to define flexible data processing logic and create extract, transform, and load (ETL) jobs.
  • Data Lake Analytics (DLA) provides a serverless, interactive query service. DLA uses standard SQL statements to analyze data from multiple sources, including OSS, databases such as PostgreSQL, MySQL, and NoSQL, and Tablestore.
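
As a small example of the processing step, here is a hedged sketch of a function that receives a batch of user behavior log records, counts page views per item, and writes the result to OSS for later querying with a service such as DLA. The event field names, bucket name, and trigger are assumptions made for illustration.

    import json
    from collections import Counter

    import oss2

    def handler(event, context):
        payload = json.loads(event)
        records = payload.get("records", [])        # assumed field name
        views = Counter(
            r["item_id"] for r in records if r.get("action") == "view"
        )

        # Placeholder credentials and bucket; in practice, use the function's
        # role credentials and your own analytics bucket.
        auth = oss2.Auth("<ACCESS_KEY_ID>", "<ACCESS_KEY_SECRET>")
        bucket = oss2.Bucket(auth, "https://oss-cn-hangzhou.aliyuncs.com", "analytics-results")
        bucket.put_object("views/%s.json" % context.request_id, json.dumps(dict(views)))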

Summary

This article introduces the common scenarios of serverless architectures and explains how they separate the tasks that are unrelated to your business from your business logic and transfer those tasks to platforms and services. The division of responsibilities and coordination are common practices, and they are more clearly defined in serverless architectures. Less is more. A serverless architecture allows you to focus on your business and the core competitiveness of your products without having to deal with servers, server loads, and other details that are not related to your business.
