7 Major Trends for Cloud Native in 2020: Serverless

In the first article of this series, we will review and analyze the state of cloud native for the coming decade from the perspective of serverless.

This is the first post in the Seven Major Cloud Native Trends for 2020 series.

In 2019, the capabilities of major serverless computing platforms in the industry improved greatly and became more versatile. For example, reserved resources are now used to eliminate the impact of cold starts on latency, so that latency-sensitive online applications can also be built in a serverless model. The serverless ecosystem keeps developing: many open source projects and startups have emerged in the fields of application construction, security, monitoring, and alerting, and the toolchain is becoming increasingly sophisticated.

User acceptance of serverless is steadily increasing. In addition to fast-moving industries such as the Internet sector, traditional enterprises have also begun to adopt serverless technology. As we enter a new decade, we expect the serverless field to evolve along the following lines:

Serverless Will Extend Further from Offline Services to Online Services

Per-request billing and short response times are intrinsically at odds. Serverless technologies such as FaaS initially made no guarantees about response time, so they were first used for event-driven offline workloads. However, products such as AWS Lambda Provisioned Capacity and Azure Functions Premium Plan now allow users to pay a little extra for shorter response times. This undoubtedly makes serverless more suitable for online businesses.
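
To make this concrete, here is a minimal sketch of a latency-sensitive Java handler, assuming the standard aws-lambda-java-core interface; the handler and catalog names are illustrative, not from any product. The point is that with provisioned (reserved) capacity, the expensive constructor work runs before traffic arrives rather than on the request path.

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// Illustrative sketch of a latency-sensitive Lambda handler. Expensive
// initialization runs in the constructor; with provisioned concurrency the
// platform keeps pre-initialized execution environments warm, so this cost
// is not paid while a request is waiting.
public class ProductLookupHandler implements RequestHandler<String, String> {

    private final ProductCatalog catalog;

    public ProductLookupHandler() {
        // Heavy setup (loading caches, warming up clients) happens once per
        // execution environment, ahead of traffic when capacity is reserved.
        this.catalog = ProductCatalog.load();
    }

    @Override
    public String handleRequest(String productId, Context context) {
        // Only fast, per-request work remains on the request path.
        return catalog.describe(productId);
    }

    // Stand-in dependency so the sketch is self-contained.
    static class ProductCatalog {
        static ProductCatalog load() { return new ProductCatalog(); }
        String describe(String id) { return "product:" + id; }
    }
}
```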

Serverless Will Not Only Provide Application and Function Capabilities, but Also Accelerate the Shift to Serverless Infrastructure and Services

After business code is hosted on a serverless platform, you enjoy automatic scaling and pay-as-you-go billing. However, if the underlying infrastructure and related services cannot scale in real time, the business as a whole is still not elastic. AWS has put a lot of work into improving the real-time elasticity of the resources Lambda depends on, such as VPC networking and database connection pools. We believe other vendors will soon follow, which will push the industry as a whole toward serverless infrastructure and cloud services.
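
As an illustration of why database connection pools matter here, the following sketch keeps a small pool in a static field so that connections are reused across warm invocations instead of being created per request; the class, settings, and environment variable names are hypothetical, and the open source HikariCP pool is used only as a familiar example. Managed offerings such as database proxies move this same concern into the serverless infrastructure itself.

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

// Illustrative sketch: each concurrent function instance holds its own
// connections, so a burst of instances can exhaust the database. Keeping a
// tiny pool in a static field ties total connections to concurrency rather
// than to request volume, and reuses connections across warm invocations.
public final class ConnectionHolder {

    // Created once per execution environment and reused by every invocation
    // that environment handles.
    private static final HikariDataSource DATA_SOURCE = createDataSource();

    private static HikariDataSource createDataSource() {
        HikariConfig config = new HikariConfig();
        // Placeholder settings; real values would come from environment
        // variables or a secrets manager.
        config.setJdbcUrl(System.getenv("DB_JDBC_URL"));
        config.setUsername(System.getenv("DB_USER"));
        config.setPassword(System.getenv("DB_PASSWORD"));
        // One or two connections per instance is usually enough.
        config.setMaximumPoolSize(1);
        return new HikariDataSource(config);
    }

    private ConnectionHolder() {}

    public static HikariDataSource dataSource() {
        return DATA_SOURCE;
    }
}
```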

Open Source Solutions, Such as Knative, Will Receive Increasing Attention

Although cloud vendors are vigorously promoting their serverless products, developers are generally worried about vendor lock-in. Therefore, organizations of a certain scale will use open source solutions, such as Knative, to build their own serverless platforms. Once an open source solution becomes mainstream, cloud vendors will take the initiative to provide compatibility with open source standards and increase their investment in the open source community.

Serverless Developer Tools and Frameworks Will Proliferate

IDE integration, problem diagnosis, continuous integration and delivery, and other supporting tools and services will provide a more complete developer experience. We will also see more success stories and best practices. Serverless application frameworks will emerge in front-end development and other fields, maximizing engineering efficiency.

Java Will Continue Its Efforts and Become One of the Mainstream Languages for Serverless Platforms

Serverless platforms require application images to be small enough for fast distribution and application startup times to be short. Languages such as Java, Node.js, and Python differ in these respects, and the Java community is working hard to close the gap. We can see that Java is continuously trying to "lose weight" through technologies such as Java 9 Modules and GraalVM Native Image. Spring, the mainstream framework, has begun to embrace GraalVM, and newer frameworks, such as Quarkus and Micronaut, are making fresh breakthroughs. We look forward to the brand-new experience that Java will provide in the serverless field.
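
As a rough illustration of this "lose weight" direction, the sketch below shows a trivial class that can be compiled ahead of time with GraalVM Native Image; the class name and build commands in the comment are illustrative only, assuming the GraalVM native-image tool is installed.

```java
// Minimal sketch of a function compiled ahead of time with GraalVM Native Image.
// Illustrative build steps (assuming native-image is on the PATH):
//
//   javac HelloFunction.java
//   native-image -cp . HelloFunction
//
// The resulting native executable starts in milliseconds with a small memory
// footprint, which is what makes Java a better fit for scale-to-zero platforms.
public class HelloFunction {
    public static void main(String[] args) {
        String name = args.length > 0 ? args[0] : "serverless";
        System.out.println("Hello, " + name + "!");
    }
}
```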

We Expect Research and Products in the Intermediate Layer (Acceleration Layer) to Make Breakthroughs by Solving the FaaS State Transfer Issue

When serverless is used in function scenarios, the greatest challenge is latency amplification: state must be transferred between functions that run in a chain, and function processing requires frequent interaction with storage. In a traditional architecture, these steps happen within a single process. Solving these challenges calls for an intermediate computing layer (an acceleration layer), which is one of the future directions for academic research and product development.
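
The sketch below is a purely illustrative (non-product) example of the overhead in question: two chained steps hand state to each other through external storage, with an in-memory map standing in for the remote store. In a real deployment each put and get is a network round trip plus serialization, which is exactly the cost an acceleration layer would aim to remove by keeping state close to the compute.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of state transfer between chained functions. In a single
// process, the intermediate result would simply stay in memory; split across
// two function invocations, it has to travel through external storage.
public class ChainedFunctions {

    // Stand-in for remote state storage (e.g. an object store or cache).
    static final Map<String, String> remoteState = new HashMap<>();

    // Step 1: produce intermediate state and hand it off via storage.
    static String stepOne(String orderId) {
        String intermediate = "validated:" + orderId;
        remoteState.put(orderId, intermediate); // network hop #1 in reality
        return orderId;                         // only a key crosses the boundary
    }

    // Step 2: a separate invocation reloads the state before continuing.
    static String stepTwo(String orderId) {
        String intermediate = remoteState.get(orderId); // network hop #2
        return intermediate + ",charged";
    }

    public static void main(String[] args) {
        String key = stepOne("order-42");
        System.out.println(stepTwo(key)); // prints "validated:order-42,charged"
    }
}
```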

We Expect a WebAssembly (WASM)-based FaaS Solution to Emerge

Solomon Hykes, one of the founders of Docker, once said, "If WASM and WASI had existed in 2008, we wouldn't have needed to create Docker." This illustrates the importance of WASM. Although WASM is widely regarded as a browser technology, it provides strong security isolation, extremely fast startup, and support for more than 20 languages, so why not run it on the server as well? These characteristics are a natural fit for the needs of FaaS.
