The Rise of Serverless and the Evolution of Enterprise Application Architecture

In the past, building an application meant buying ECS instances, deploying open-source software on them, and then maintaining everything yourself. The whole process was complicated and cumbersome, especially as traffic rose and fell.

Serverless services simplify all of this. Moving from semi-managed to fully managed, every capability is delivered through APIs, capacity is effectively unlimited and fully elastic, and services can be assembled like building blocks, which changes productivity significantly. At the same time, serverless is driving an upgrade of the software R&D model: assembly-based development will become mainstream.

Drawing on Alibaba Cloud's overall experience with serverless, Ding Yu (Shutong), a researcher at Alibaba and general manager of Alibaba Cloud's intelligent cloud-native application platform, elaborated on the evolution of enterprise application architecture and the industry changes brought by the rise of serverless.

Over the past decade, moving to the cloud has become a deterministic trend.

In the cloud-migration stage, enterprises focused on moving to the cloud smoothly, so cloud vendors made cloud hosting their core strategy. The main form of the cloud was resource-based services, which provided enterprises with massive computing power in the form of virtual machines.

For developers, virtual machines work just like physical servers in an IDC, so existing applications and technology stacks could move to the cloud without changes. The cloud-hosting strategy met enterprises' core demands in this stage well, and it succeeded.

As more and more enterprises move to the cloud, and many systems are now built on the cloud from day one, the core focus of enterprises has shifted to making better use of cloud capabilities to bring products to market quickly and achieve business success.

This changes the main goal of the next stage of cloud development: using the cloud's own strengths to solve the development and operations challenges of large-scale, complex applications. If computing power is still delivered as raw resources such as servers, however, the barrier to using it remains high; computing power sits too far from the business, and enterprises need a complete application infrastructure before they can use it well.

For computing power to become as ubiquitous as electricity, cloud computing needs a new form.

The role of cloud services will change greatly. The cloud is no longer just a provider of resources; it becomes the platform on which enterprises build applications. It must minimize low-value, repetitive work such as machine operations so that enterprises can focus on business innovation.

The next decade is the stage in which the cloud evolves its own capabilities to help enterprises use it well, and the core competence of cloud vendors will be Serverless services.

Why choose Serverless

Serverless services are fully managed

Cloud vendors can improve resource efficiency and service performance at scale through underlying technologies such as storage-compute separation and hardware-software co-optimization. Taking Alibaba Cloud's storage services as an example, RDMA has been deployed at scale since 2018, along with the Solar-RDMA protocol, HPCC flow control, and end-network integration.

Through the co-design of network and storage, combined with FPGA-accelerated compression, they achieve stable microsecond-level read and write performance. Enterprises only need to call service APIs to tap the expertise of cloud vendors in these fields and enjoy the technical dividends.

Serverless services have adaptive elasticity, enabling enterprise applications to cope more smoothly with unpredictable or sudden business load.

A typical business system can be divided into an application layer, an access layer, and a resource layer. Resource-based cloud services only provide elasticity at the resource level; enterprises must still build elasticity into the access and application layers themselves to achieve full-link elasticity for the business.

1) Architecture design stage

Based on the dependencies of each component, formulate elastic-scaling and rate-limiting/degradation schemes. For services with little elasticity, such as relational databases, it is generally necessary to predict the read and write scale over the next three years and shard databases and tables accordingly.

2) Resource planning stage

Weigh factors such as how hard each component is to scale, how fast it scales, and how quickly its business load changes, and provide the corresponding elasticity through redundant resources. The access layer accounts for a small share of total resources, so keeping it highly redundant is cheap, and it is easy to expand. Resource planning at the application layer is the most challenging: the application layer consumes the most resources, so carrying peak load through high redundancy is generally unaffordable. In addition, scaling the application layer involves its upstream and downstream links, which is very complex. Finally, different services in the application layer carry different traffic volumes; these must be mapped out clearly, and redundancy planning should focus on the hot links.

3) Online operation stage

Through complete observability, quantify link traffic, detect hotspots, scale capacity dynamically, re-measure the traffic on hot links, and then decide whether further scaling is needed, forming a closed loop. Complete and timely monitoring and alerting is also essential: set different heat thresholds for different components, and when hot traffic is detected, promptly notify the developers and operators of the associated components and handle it according to a predefined plan.
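The closed loop described above can be sketched in a few lines. This is a hypothetical illustration, not a real cloud API: `plan_scaling`, its thresholds, and its capacity numbers are all invented to show the shape of the decision, which compares measured per-link QPS against heat thresholds (for alerting) and per-instance capacity (for scaling).

```python
# Illustrative sketch of the hotspot-detection / dynamic-scaling loop.
# All names and numbers are assumptions for demonstration purposes.

def plan_scaling(link_qps, thresholds, capacity_per_instance, current_instances):
    """Return (scale_plan, alerts): new instance counts for links whose
    measured traffic no longer matches capacity, plus hot-link alerts."""
    scale_plan = {}
    alerts = []
    for link, qps in link_qps.items():
        limit = thresholds.get(link)
        if limit is not None and qps > limit:
            # heat threshold exceeded: broadcast to the link's owners
            alerts.append(f"hot link: {link} at {qps} QPS (threshold {limit})")
        # target instances = ceil(qps / per-instance capacity), at least 1
        needed = max(1, -(-qps // capacity_per_instance[link]))
        if needed != current_instances[link]:
            scale_plan[link] = needed
    return scale_plan, alerts
```

In a real system this function would be re-run after each scaling action on fresh metrics, which is what closes the loop.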

As this shows, building business-wide elasticity on top of resource-layer elasticity alone is very complex. The adaptive-elasticity goal of Serverless services is to reduce this complexity and help enterprises achieve business elasticity more easily.

First, cloud vendors are making a large number of BaaS services, including middleware, databases, and big data, Serverless. Taking databases as an example, they not only provide highly elastic database services such as NoSQL, but also make traditional relational databases Serverless.

Second, Serverless computing services typically start instances in hundreds of milliseconds to seconds, can launch thousands or even tens of thousands of instances per second, and scale elastically with a high degree of automation. Combined with Serverless BaaS services, this delivers full-link business elasticity.

Finally, Serverless services usually have rate limiting and degradation built in, which keeps enterprise resource usage under control and makes it easier to prevent cascading system failures.
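The rate limiting mentioned above is often implemented with a token bucket. The minimal sketch below is illustrative only, not the actual implementation of any cloud service: requests consume tokens that refill at a fixed rate, and when the bucket is empty the caller degrades (for example, serving a cached or fallback response) instead of letting load cascade downstream.

```python
# Minimal token-bucket rate limiter (illustrative, not a cloud vendor's
# actual implementation). Refill rate and capacity are caller-chosen.
import time

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate              # tokens added per second
        self.capacity = capacity      # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        """Consume one token if available; False means degrade/reject."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over limit: serve fallback instead of overloading
```

A degradation wrapper would check `allow()` before calling the real backend and return a cached result when it is `False`.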

How to use resources efficiently is a common problem for enterprises. Statistics from industry data centers show that overall average resource utilization is low, generally below 15%. To improve it, enterprises typically face the following challenges:

• Each business department uses resources independently; there is no resource pooling or unified scheduling.

• Considering performance, peak load, and headroom for future business growth, business departments tend to over-request resources, usually 3-5 times what they actually use.

• Fragmented resource consumption by non-core applications wastes a great deal of capacity. To meet high-availability requirements, even a non-core application needs at least 2-3 machines, yet these applications are often long-tail and called at low frequency; sometimes a service is taken offline but its servers are never released. Within Alibaba Group, non-core applications consume more resources than core applications.

• Applications of different natures do not share resources, so there is no peak shaving and valley filling, and overall cluster utilization stays low.

Containerization is an effective way to improve resource utilization, but it is relatively complex to implement. Alibaba Group improves overall utilization through full-stack containerization, unified scheduling, and online-offline workload colocation, which involves container performance optimization, tenant isolation, normalization of underlying server computing power, customized unified resource scheduling, and more.

The goal of Serverless is to enable enterprises to improve resource utilization and reduce costs in a simpler way.

Taking Function Compute as an example, enterprises pay only for the resources actually used, not for idle resources. This means that in scenarios with fragmented resource usage, such as test, staging, and even production environments, as well as the long tail of non-core applications, effective resource utilization becomes very high with Serverless.
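The cost difference can be made concrete with back-of-envelope arithmetic. The sketch below compares provisioning servers for peak load against paying per request; every price and capacity figure here is invented for illustration and does not reflect any vendor's actual pricing.

```python
# Back-of-envelope cost comparison: reserved capacity vs. pay-per-use.
# All prices, capacities, and workload numbers are illustrative assumptions.

def reserved_cost(peak_qps, qps_per_server, price_per_server_hour, hours):
    """Servers must be provisioned for the peak, then billed whether busy or idle."""
    servers = -(-peak_qps // qps_per_server)   # ceil division: enough for peak
    return servers * price_per_server_hour * hours

def serverless_cost(total_requests, gb_seconds_per_request, price_per_gb_second):
    """Pay-per-use: billed only for resources actually consumed per request."""
    return total_requests * gb_seconds_per_request * price_per_gb_second
```

For a spiky workload whose average load is far below its peak, the reserved bill is driven by the peak while the serverless bill is driven by the average, which is why utilization-poor workloads benefit most.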

Where some resources must be reserved for performance, the cost of idle Function Compute resources is still lower than that of servers. Function Compute has multi-AZ disaster recovery built in, so enterprises do not need to keep redundant resources for disaster recovery. It also supports elastic scaling at the hundred-millisecond level with rich scaling rules, so enterprises do not need to reserve resources for peak load.

When cloud services evolve into the Serverless form, the barrier to using them drops dramatically, and Serverless will make computing power as ubiquitous as electricity.

Driving the upgrade of the R&D model

The evolution of application architecture and the R&D model is driven mainly by enterprises' business needs. Enterprises always want to respond more nimbly to growth in business scale and complexity, bring products to market faster, and accelerate business innovation, which requires technology that supports rapid iteration of large, complex software.

The traditional enterprise application architecture is usually monolithic: all modules are coupled together and released at the same time. A monolithic application is easy to manage at first, but as the business grows it brings enormous complexity. The tight coupling creates constant friction in development, testing, and operations, slowing down every iteration.

For example, the entire application must use a single language and framework stack. If a shared library is used by several modules and one module wants to upgrade it to a new version, everyone else must be persuaded to upgrade at the same time, even if they do not need the new version. All modules are forced onto the same release cadence, and a problem in one module blocks the release of the whole application.

Quickly fixing an online problem in one module is also very difficult: the fix must be merged with other modules' in-flight changes, conflicts resolved, the entire application rebuilt, and all tests rerun before it can go back online. The monolithic architecture can no longer meet the demands of software R&D efficiency and has been replaced by the Internet-style distributed architecture characterized by microservices.

Under a microservices architecture, an application consists of independent services. These services are loosely coupled and interact through API calls, event triggering, or data flows. Each service performs a specific function and is developed, run, and released independently.

Microservices remove the R&D-efficiency bottleneck of the monolith, but they place very high demands on application infrastructure.

For example, ensuring that independently developed microservices coordinate as expected requires thorough integration and end-to-end testing. The number of application deployments in test environments is usually ten times that in production. If the infrastructure cannot spin up an independent test environment quickly, a large amount of testing time is lost to environment-stability problems.

According to Alibaba Group's R&D statistics, one person-day of development typically corresponds to 5-7 person-days of testing. The test environment has become the biggest pain point in Alibaba Group's R&D.

The loose coupling of microservices also brings great challenges in database usage, state management, problem diagnosis, and the application delivery pipeline. The complexity of microservices and its solutions have been discussed extensively in the industry, so they are not repeated here.

It is an industry consensus that implementing an Internet-style distributed architecture centered on microservices is complex and must be supported by good tools and platforms.

Beyond microservices, enterprises also widely adopt reactive architecture, event-driven architecture, and other models. These architectures bring benefits such as loose coupling and agile development, but the complexity of implementing them rises accordingly.

The industry offers a wealth of products and solutions for every aspect of application construction, orchestration, operations, BaaS services, and infrastructure management, and has built a huge ecosystem around them. But integrating this software and these services so that they are elastic, stable, and well connected, and actually accelerate application iteration, is by no means easy for enterprises.

In the stage of using the cloud well, the cloud's mission is to eliminate this complexity, bring a qualitative breakthrough in large-scale software development, and help enterprises cross the technology divide.

Each Serverless service is an output of a vendor's expertise in its field. It exposes its functionality through service APIs and commits to reliability, elasticity, performance, and other capability metrics. Serverless services are therefore high-quality building blocks for applications.

For example, Alibaba Cloud Object Storage Service (OSS) holds exabytes of data and promises 11 nines of data durability, 99.95% availability, and diverse tiered data storage and processing capabilities.

Alibaba Cloud Message Queue RocketMQ has been tempered by the trillion-level message peaks of successive Double 11 shopping festivals. Compared with systems that enterprises build themselves on open-source software, these cloud services have clear advantages in elasticity and reliability.

Not only cloud vendors, but also a large number of open source commercial products have adopted the Serverless model, including Confluent Cloud, MongoDB Atlas, Snowflake, Databricks, etc.

As vendors launch more and more Serverless services across storage, computing, middleware, big data, and other fields, and as these services are tightly integrated through event-driven mechanisms, the cloud is gradually becoming a super-platform for building and running applications, and the application R&D model is being upgraded to assembly-based development.

Making the cloud the best platform for building applications

As Alibaba Cloud delivers an increasingly complete range of Serverless products, many cloud products have become modular, API-driven services that can be assembled, so applications can even be built by dragging and dropping.

Under a Serverless architecture, R&D is upgraded to assembly-based development, with process orchestration, event-driven integration, and even visual composition. This thoroughly transforms the way software is developed, greatly improves R&D efficiency, and lets teams respond flexibly to business challenges. According to surveys by industry analysts, assembly-based development can improve R&D efficiency by more than 50% compared with the traditional model.

Everyone from emerging Internet startups to traditional enterprises building large-scale software can adopt Serverless architecture and assembly-based development.

Take Gaode (Amap) as an example. Gaode's delivery business is closely tied to users' daily-life scenarios, so its features change frequently; the downstream business categories it recommends grow rapidly, so its business strategies are variable; and the whole business follows users' travel patterns, with pronounced peaks and valleys.

As the business grew, the original architecture of the delivery platform faced some obvious pain points:

1. Heavy client. Card processing, navigation planning, page display, and other logic all lived on the Web or mobile client, making client releases slow and the code bloated.

2. Business functions were tightly coupled and could not keep up with business iteration. Delivery strategies change frequently, and every release had a large blast radius.

3. The load had pronounced peaks and valleys, but instances were always on, so resource utilization was low.

A Serverless architecture addresses these pain points well. First, the client is slimmed down, and most on-device logic is moved to the BFF (Backend for Frontend) layer.

Because Serverless computing requires zero server operations, client engineers only need to write business logic and can release it entirely on their own, avoiding cross-team coordination problems. With the platform's built-in smooth-release capability, they can iterate quickly and release with confidence.

Back-end services such as delivery strategies are also decoupled into functions, including a rule-filtering function, a fatigue-reminder function, a content-assembly function, and so on. Each is developed and iterated as an independent back-end service, so every release has a small impact and a controlled blast radius.
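The decoupling described above can be sketched as a pipeline of small, independently deployable functions. The function names mirror those in the article, but all logic, field names, and data shapes below are invented for illustration and are not Gaode's actual implementation.

```python
# Hypothetical sketch of the decoupled delivery pipeline: each stage is an
# independent function, so it can be released and scaled on its own.

def rule_filter(candidates, context):
    # drop items not allowed in the user's current scene
    return [c for c in candidates if context["scene"] in c["scenes"]]

def fatigue_control(candidates, context):
    # suppress items the user has already seen too often (assumed cap: 3)
    seen = context.get("seen_counts", {})
    return [c for c in candidates if seen.get(c["id"], 0) < 3]

def assemble_content(candidates):
    # turn the remaining candidates into display cards for the client
    return [{"card_id": c["id"], "title": c["title"]} for c in candidates]

def handle(candidates, context):
    # composition point: in a serverless setup each stage could be a
    # separate function invoked via events rather than direct calls
    return assemble_content(fatigue_control(rule_filter(candidates, context),
                                            context))
```

Because each stage only depends on its input shape, changing one stage's internals does not force a release of the others, which is the blast-radius benefit the article describes.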

By carefully mapping out the hotspot logic and its upstream and downstream dependencies, the platform achieves full-link elasticity and interface-level flow control. Elastic scaling is not only fast but also safe, and resource usage matches the peaks and valleys of the load, so efficiency is high.

Today, Gaode's Serverless-based delivery platform carries 100% of production traffic at a scale of a million QPS; function delivery has shrunk from days to hours, and overall cost has dropped by 38%.

The Serverless singularity has arrived

Pioneers of cloud computing believe that Serverless will be the default computing paradigm of the cloud's next decade.

In 2021, Datadog released its State of Serverless report. The data shows that everyone from cloud-native startups to large enterprises is paying attention to Serverless, and that the Serverless ecosystem has grown beyond FaaS to include dozens of services that help developers build faster and more dynamic applications.

From its proposal in 2012 to 2022, Serverless has become mainstream in IT development and a core capability of cloud providers.

We believe the Serverless singularity has arrived. The singularity is the turning point from steady growth to rapid growth, signaling that the industry is about to take off in full force. And we will be the generation of technologists who witness this change.
