
Trends and Applications of Serverless Computing

As serverless computing becomes more and more popular, this tutorial will help you run serverless services on Alibaba Cloud smoothly.

Serverless Computing with Alibaba Cloud Function Compute

Serverless computing refers to the concept of building and running apps that do not require server management.
Cloud computing is a revolutionary technology because it essentially breaks traditional computing models, simplifies architecture, and better meets users' requirements while keeping costs at a minimum. Owing to the flexibility of cloud computing, we have seen a multitude of novel approaches to computing, one of which is serverless computing. As a concept in the next phase of cloud computing, serverless computing truly helps enterprises focus only on business and application building without worrying about the IT infrastructure.

What is Serverless Computing?

According to the Cloud Native Computing Foundation (CNCF), serverless computing refers to the concept of building and running applications that do not require server management. It describes a finer-grained deployment model where applications, bundled as one or more functions, are uploaded to a platform and then executed, scaled, and billed in response to the exact demand needed at the moment.

Serverless computing enables developers to build and run applications without managing servers or other infrastructure. An application is decoupled into fine-grained functions, which are the basic units of deployment and execution. Users only pay for the resources consumed. Serverless computing thus relieves application developers of managing servers and other underlying infrastructure, and enables them to focus on innovation at the business layer.

Although serverless computing is new as a term, the idea of "serverless" has been around since the beginning of cloud computing. Many cloud services, such as computing and storage, are serverless. For example, Object Storage Service (OSS), the first cloud service launched by Alibaba Cloud, is a serverless storage service. Users do not need to care how data is stored on the underlying servers, and they only pay for the storage resources consumed.

On the other hand, "serverless" in serverless computing refers to the ability to build applications and programs without managing resources, including cloud resources. Serverless computing services such as Alibaba Cloud Function Compute take away the need to manage infrastructure and enable developers to focus on writing and uploading code.
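To make this concrete, here is a minimal sketch of what such an uploaded function can look like, in the Python event-handler style used by Function Compute (the payload and greeting are purely illustrative):

```python
import json

# Minimal Function Compute-style event handler (Python runtime).
# The event arrives as raw bytes; the payload shape is illustrative.
def handler(event, context):
    data = json.loads(event)          # parse the triggering event
    name = data.get("name", "world")  # fall back to a default value
    return "Hello, {}!".format(name)
```

The developer uploads only this function; provisioning, scaling, and billing per invocation are handled by the platform.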

Statistics show that in recent years, more than 70% of the new services or features launched by cloud service providers have been serverless. As the variety of serverless services on the cloud grows, users can quickly build elastic, highly available cloud-native applications by combining multiple services.

Features of Serverless Computing from the Perspective of the Function Compute Architecture

In serverless computing, the platform service manages the underlying infrastructure. Therefore, the platform service must resolve problems such as fault tolerance and resource scaling to fully utilize the capabilities of serverless computing.

The Alibaba Cloud Function Compute architecture diagram shows that the API service layer provides functions such as authentication and metadata read/write. When a function is called synchronously, the API service module obtains an available function execution engine from the resource scheduling module, sends the request, and finally retrieves the result. When a function is called asynchronously, the API service module writes the event to Message Queue (MQ) and returns immediately. The event distribution module then distributes events in a similar way to the synchronous call. The figure below depicts the architecture of Alibaba Cloud Function Compute.

Real-time auto scaling is a core strength of Function Compute. When user load peaks, the system scales out resources in real time to cope smoothly with peak traffic. The following uses asynchronous event processing as an example to describe the procedure:

  1. The event is written to the event queue of Function Compute.
  2. The event distributor reads the event from the queue and calls the corresponding function to process the event.
  3. The user function processes the event.
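The three steps above can be sketched as a toy in-process simulation (a plain queue stands in for Function Compute's event queue; all names are illustrative):

```python
import queue

event_queue = queue.Queue()          # stands in for the Function Compute event queue

def user_function(event):            # step 3: the user function processes the event
    return event.upper()

def distributor(q, results):         # step 2: the distributor reads and dispatches events
    while not q.empty():
        results.append(user_function(q.get()))

# Step 1: events are written to the queue.
for e in ["upload", "delete"]:
    event_queue.put(e)

results = []
distributor(event_queue, results)
print(results)  # ['UPLOAD', 'DELETE']
```

In the real system, of course, the distributor and the function instances run on separate, dynamically sized resources rather than in one process.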

The Function Compute system monitors user load changes, and each component dynamically resizes based on the load. As shown in the preceding figure, when more events are generated for user A, the system automatically allocates more resources to user A at each stage to match the required event processing capability.

Function Compute adopts a multi-level resource scheduling policy. The system predicts requirements based on the user load and the utilization of the resource pool, and prepares computing resources in advance. Backed by years of technical experience with the Alibaba Cloud Apsara distributed system platform, Function Compute keeps a good balance between the timeliness and accuracy of scheduling and can scale automatically in milliseconds.

Typical Use Cases of Serverless Computing

With Function Compute, users can build almost any type of application or backend service, including backends for Web applications, large-scale file processing, and real-time data stream processing.

For example, by integrating OSS and Function Compute, a user only needs to write a function that processes a single video. When a large number of video files are uploaded to OSS, multiple function instances are automatically triggered to process them concurrently.
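A hedged sketch of such a per-object function, assuming the OSS trigger delivers an event listing the uploaded object keys (transcode_video is a hypothetical stand-in for the real per-video work):

```python
import json

def transcode_video(key):
    # Hypothetical per-object processing; a real function would
    # download the object from OSS and transcode it here.
    return "processed:" + key

# One function handles the objects in a single trigger event;
# Function Compute runs many instances of it concurrently when
# many objects are uploaded.
def handler(event, context):
    evt = json.loads(event)
    keys = [rec["oss"]["object"]["key"] for rec in evt["events"]]
    return [transcode_video(k) for k in keys]
```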

To use the HTTP trigger of Function Compute, a user only needs to write a function that processes a single request. When TPS rises, Function Compute automatically expands computing resources, running multiple function instances to process requests concurrently.
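Function Compute's Python HTTP triggers use a WSGI-style handler; a minimal sketch of a function that serves one request at a time (the query-string handling is illustrative):

```python
# WSGI-style handler, as used by Function Compute HTTP triggers in Python.
# Each request is handled by one invocation; the platform adds instances
# as TPS rises.
def handler(environ, start_response):
    name = environ.get("QUERY_STRING", "") or "world"
    body = "Hello, {}!".format(name).encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]
```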

Related Blogs

How Will Front-End Engineers Embrace the Trend of Serverless?

In this article, we will be elaborating on the development of Serverless and its impact on front-end development.

Although most front-end work has nothing to do with servers, Fa Xin, a senior front-end developer at Alibaba, has been struck by the wide popularity of, and heated discussion about, Serverless over the last year. Having worked as a front-end developer for more than 10 years, Fa Xin thinks that Serverless may be one of the technologies that lead to revolutionary changes in the front-end field.

Today, Fa Xin is going to elaborate on the development of Serverless and its impact on front-end development. I hope that this article can be helpful for front-end engineers.

The preceding figure shows the trend in searching for the word "Serverless" on Google. It shows that the number of searches for this word has increased significantly in the last six months.

Important Technological Revolutions in Front-end Development

The Birth of Ajax

Let's first take a retrospective look at some milestones in front-end development. The first milestone is the year 2005, when the term Ajax was publicly used by Jesse James Garrett in an article titled Ajax: A New Approach to Web Applications, based on techniques used on Google pages. To be more accurate, Ajax was just a new term at the time, not a new technology. I remember that I was a sophomore that year. Although Ajax simply packaged existing technologies like XMLHttpRequest, it became a global Web development standard after Google's publicity. Ajax indirectly facilitated the popularity of rich Internet applications (RIAs) and single-page applications (SPAs). Most of these applications provide a fluid user experience (partial refresh), and they played an important part in the development of Web 2.0. The wide popularity of Ajax made front-end JavaScript development more important and more complicated, leading to a more fine-grained, specialized division of labor, which in turn led to the birth of full-time, professional front-end developers. Before Ajax, Web development was not divided into server-side and browser-side work. Therefore, Ajax was the first significant event in the front-end field.

The Contribution of Node.js to Front-end Standardization and Engineering

The next significant milestone in front-end development is the birth of Node.js in 2009 and its subsequent wide popularity (including CommonJS and npm). Its contribution is not just allowing front-end developers to use JavaScript to write servers. Personally, I think the biggest contributions of Node.js are CommonJS, npm, and the front-end engineering facilitated by its agile development experience. It moved front-end development away from deployment methods that were not compliant with traditional software engineering and toward the R&D models of traditional enterprise applications. Before Node.js, no efficient tools or standards were available for resource referencing, dependency management, or module specifications in front-end development. With the popularity of Node.js, package deployment and dependency management based on CommonJS modules and npm became mainstream (similar to the Maven ecosystem in Java). In addition, Node.js gave rise to many Node.js-based front-end CLI tools (such as grunt and gulp). Currently, npm is the largest package manager in the world and has become the package dependency management standard for front-end projects. webpack makes front-end code deployment much easier and allows front-end developers to publish applications by bundling code (similar to Java jar packages), regardless of the types of resources in a project.

Componentization and VDOM in React

The third revolutionary milestone is the birth of React in 2013. Although the Web Components standard had been released before React, React is the most widely used library that really popularized the componentization concept. At least two of its features make it the most promising front-end library. The first is the introduction of the VDOM. Before the VDOM, all UI libraries were directly tied to the DOM. React adds an intermediate layer called the VDOM (a protocol that uses lightweight JSON to describe UI structures) between UI creation and the rendering engine. The VDOM makes DOM diffing efficient. In addition, the VDOM separates UI authoring from rendering: the UI, once written, can be rendered on many ends, including servers, mobile devices, PCs, and other devices that display a UI. React Native and Weex also benefit from this separation.

In addition to the VDOM, React has another advanced concept: the UI is a function (or class) that takes some state and returns the entire UI. Before React, most frameworks and libraries split the UI into an HTML fragment (usually with template support for rendering data) and JS code that binds events to that fragment. Although that split makes the UI easier to read, React's abstraction reflects the actual nature of the UI. The function concept in React works wonderfully with FaaS and Serverless.
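The "UI is a function of state" idea is language-agnostic; here is a minimal Python sketch of the concept (the markup strings are purely illustrative):

```python
# The UI is a pure function of state: the same state in always
# yields the same markup out, with no hidden DOM mutation.
def ui(state):
    items = "".join("<li>{}</li>".format(i) for i in state["todos"])
    return "<h1>{}</h1><ul>{}</ul>".format(state["title"], items)

print(ui({"title": "Todos", "todos": ["write", "ship"]}))
# <h1>Todos</h1><ul><li>write</li><li>ship</li></ul>
```

Re-rendering is then just calling the function again with new state, which is exactly what makes the model sit so naturally next to stateless functions in FaaS.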

The birth of React has had a profound impact on subsequent frameworks and libraries, and even on earlier ones, including but not limited to Angular and Vue, which have adopted many of React's concepts and ideas. React has become one of the few stable technology options in the front-end development field.

To sum up, Ajax established the front end as a distinct part of the development process. Node.js accelerated the transformation of the front-end development model toward the engineering practices of traditional programming languages. React largely solved the problems and challenges that fast-changing front-end technologies had previously posed to the back end.

Relationship between Serverless and Front-end

Why do I say Serverless is the next technology that will have a profound impact on the front end? Although the term Serverless was popularized by Amazon years ago, it was not an explosively new idea. When CDNs were not as popular as they are now, Web engineers uploaded JS resources and view files (either static or dynamic) to servers, so front-end work was still tied to servers. However, the popularity of CDNs and back-to-origin policies, together with the wide adoption of engineering and build systems, allowed front-end developers to simply push a JS or static file to a CDN node. The back-to-origin mechanism (CDNs falling back to a dynamic service) made half-dynamic view-layer rendering possible. Front-end developers no longer needed to care about servers, how many CDN nodes were used, how load balancing or GSLB was performed, or how much QPS could be handled. One CDN could distribute many development resources. It is safe to say that CDNs are the forerunner of the Serverless concept.

Let's return to application deployment. When Node.js initially started gaining traction years ago, some developers realized that the cost of application and machine deployment and maintenance would be a problem on the business side. Later, some containerization ideas were developed to solve this problem. For example, CBU developed the Naga container in 2015. In a Naga container, business logic is composed of many plug-ins. The container is responsible for request routing and distribution as well as load and stability management. The business side only needs to write and upload business code. This is an implementation of the Serverless concept on the business side, because Naga maintainers perform deployment and maintenance for the business side.

Now let's look at page creation systems and the BFF layer, which are closely related to the front end. Whether using page-building systems (such as zebra, Jimu, and TMS), GraphQL-based platforms, or Web IDEs for quickly writing API gateway products (for example, mbox by CBU), business developers can simply focus on business logic without having to pay attention to deployment and maintenance. This is also a reflection of the Serverless concept.

Embrace Serverless Analytics with Alibaba Cloud

In this article, we will introduce the concepts of serverless computing and big data analytics, and explore how we can reap the benefits of both technologies using Data Lake Analytics.

If you are reading this article, I am pretty sure that you are already familiar with cloud computing. Cloud computing has made a huge impact by giving businesses virtually unlimited computing resources anywhere, anytime. Over the last few months, we have seen industries and businesses moving towards "Serverless Computing". So, it is not surprising to see the footprints of Serverless Computing in Business Intelligence and Analytics (BIA) architectures.

Since serverless computing is a rather novel term, in this article, I will walk you through the concepts of serverless computing and its underlying benefits. I will then talk about Alibaba Cloud Data Lake Analytics (DLA), and discuss how efficient it is when compared to traditional methods of analytics. We will then finish up with typical scenarios of DLA with different use cases as an example.

Is This Article Series for Me?

This article is meant for everyone! This includes students or newcomers who just want to familiarize themselves with the general concepts of serverless computing and big data analytics, as well as professional data engineers and analysts who want to leverage serverless analytics to optimize cost and time.


This article covers what serverless computing is, why it matters, how serverless architecture is deepening its roots in Business Intelligence and Analytics (BIA), and how to leverage serverless analytics with the help of Alibaba Cloud Data Lake Analytics. We will analyze and visualize data from different data sources, such as Alibaba Cloud Object Storage Service (OSS), Table Store, and ApsaraDB for RDS, using Alibaba Cloud DLA and Alibaba Cloud Quick BI. At a minimum, you need to activate OSS, DLA, and Quick BI to follow along effectively.

What Is Serverless Computing?

Serverless computing doesn't mean there are no servers; it is a software development approach that aims to eliminate the need to manage servers on your own. In general, serverless computing is a cloud computing model that lets you build more and manage less by avoiding running virtual resources for long periods of time.

In serverless computing, code runs in ephemeral, stateless compute clusters: clusters are automatically provisioned and invoked for specific tasks, and after the tasks complete, the resources are released. It all happens in a matter of seconds, which significantly optimizes resource usage and reduces cost.

As an illustration, imagine a machine that starts up to complete a task and stops automatically once the task is done. Serverless computing is often referred to as FaaS because it "just runs for the function".

Alibaba Cloud provides Function as a Service (FaaS) through Alibaba Cloud Function Compute, a fully managed, event-driven compute service. It allows you to focus on writing and uploading code without the need to manage infrastructure such as servers.

The above figure illustrates the key difference between serverless computing, and Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) cloud computing models.

Why Serverless Computing?

Serverless computing provisions virtual resources in an instant for a function (a specific task), allowing you to run code more flexibly and reliably. This leads to the following benefits:

Cost Effective

Unlike PaaS models (which run nonstop to serve requests), serverless computing is event-driven (running to complete a task or function): resources are allocated on the fly and invoked to serve specific tasks, so you only pay for the computing time you actually use (pay-per-execution).
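The difference can be made concrete with some back-of-the-envelope arithmetic; all of the prices and workload numbers below are hypothetical, chosen only to illustrate the comparison:

```python
# Hypothetical prices and workload, for illustration only.
server_price_per_hour = 0.10            # always-on instance
fn_price_per_gb_second = 0.0000167      # pay-per-execution rate

hours_per_month = 24 * 30
invocations = 100_000                   # per month
seconds_per_invocation = 0.2
memory_gb = 0.5

# Always-on server: billed for every hour, busy or idle.
always_on = server_price_per_hour * hours_per_month

# Serverless: billed only for memory-seconds actually consumed.
serverless = invocations * seconds_per_invocation * memory_gb * fn_price_per_gb_second

print(round(always_on, 2), round(serverless, 2))
```

For a bursty, low-duty-cycle workload like this one, paying per execution is orders of magnitude cheaper; for a workload that is busy around the clock, the gap narrows.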

Zero Administration

With serverless approaches, we do not need to worry about provisioning and managing instances. Serverless applications scale autonomously with demand. There is no need for manual scaling or tuning, although the operations team still has to monitor the applications.

Low Overhead

Due to its layers of abstraction, deployment in a serverless environment is less complex. Deploy your code to the environment, and you are ready to go to market.

Serverless Computing in Analytics

In the pipeline, a Business Intelligence and Analytics architecture is divided into two important conceptual components that derive business value from data:

  1. Extracting data from multiple sources, transforming it, and storing it in a data warehouse.
  2. Transforming it again to make it suitable for data targets or BI systems.
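The two stages can be sketched as a toy extract-transform-load pipeline, with plain Python structures standing in for the sources and the warehouse:

```python
# Stage 1: extract from multiple sources, transform, store in a "warehouse".
sources = [
    [{"sku": "A", "qty": 2}, {"sku": "B", "qty": 1}],   # e.g. logs on OSS
    [{"sku": "A", "qty": 3}],                           # e.g. an RDS table
]

warehouse = [row for source in sources for row in source]  # extract + load

# Stage 2: transform again into a shape a BI system can chart.
report = {}
for row in warehouse:
    report[row["sku"]] = report.get(row["sku"], 0) + row["qty"]

print(report)  # {'A': 5, 'B': 1}
```

In a serverless setup, each stage runs only when source data arrives, so no cluster sits idle between loads.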

Scalable Serverless APIs on Alibaba Cloud

In this post we will discuss how Alibaba Cloud Function Compute can be used to migrate your existing monolith APIs to serverless functions.

More than personal experience, we tend to rely on experts' advocacy and views, and that has been the case for monolithic applications and architectures ever since buzzwords like microservices, containers, and serverless architecture became prevalent. Undoubtedly, monolithic applications are notoriously complex to manage and deploy, and updating and scaling larger applications sounds like a massively troubling task. However, the monolith's biggest flaw is also the biggest reason to support it: the architecture is simply perfect for small applications, because it is easy to maintain and its problems are relatively easy to predict; trouble arises only as the application grows in size and complexity. For applications that are small, demand no frequent changes, or whose complete domain and scope we do not yet know, I find the monolith pattern even better, for the following reasons:

  1. A single solution is much easier to build and manage, and it gives you fast deployment.
  2. Problems can easily be tracked and resolved; moreover, we do not need highly skilled, all-round teams for this legacy monolith pattern.
  3. The development is simpler, and integration is just a function call away.

But when your application grows beyond a normal size, you need to draw domain boundaries and sketch the areas where you will break the application down into smaller pieces. That is when you create services and develop in a service-oriented pattern. These services are known today as microservices. The reason I mention microservices is that serverless and microservices are often confused and used interchangeably.

The serverless approach resembles microservices, or service-oriented architecture, in its simplicity, deployment model, and inter-service communication. But serverless is more of a practice, like DevOps. In the serverless approach, we write applications targeting the best possible performance, with less overhead and shorter wait times in the overall execution of a request. As developers, our role is to write software that utilizes the cloud platform to its fullest. At the same time, we also aim to decrease the overall cost of our solutions.

On the cloud, cost is charged based on several factors:

  1. Compute power used.
  2. Memory and storage used.
  3. Network traffic being used.

These three main areas of the cloud are the ones that incur charges. Serverless aims to minimize them, and it does so by:

  1. Leaving the orchestration part to the cloud provider.
  2. Only targeting the main job to be done—underlying runtimes, frameworks are not managed by you!
  3. Rearchitecting your monolith along serverless lines.

Serverless approaches on Alibaba Cloud follow the same principles for our applications. Alibaba Cloud provides the Function Compute service, which can be extended to support a complete application deployed as miniature microservices that conform to all the requirements we have in a cloud environment.

Before we start with Alibaba Cloud Function Compute, it is better to explain the serverless concept and how serverless helps organizations do more and pay less on their cloud budget. One more thing: this is not an Alibaba Cloud Function Compute starter guide; to learn how to create a function, please check out one of our other posts on Alibaba Cloud. In this post, we will discuss how Alibaba Cloud Function Compute can be used to migrate your existing monolith APIs to serverless functions.

Slaying the Monolith Architecture

So far, we have covered a lot about monolith applications, the reasons they should not be dismissed outright, and why they are a preferable choice for small-scale applications. Now think from the perspective of bigger products, mega applications, and larger projects, such as enterprise applications, when those are your goal and your market demand. How will you scale and manage them? Can they be built and deployed easily, and is it even sensible to redeploy the whole application when changes touch only one or two modules? To be specific, it is far too complicated and hard to manage an ever-growing, ever-scaling application based on a monolithic architecture. In most cases, you generally start with a monolith application and, once in production, distribute the workload piece by piece into separate modules, which are services, and deploy them separately. Most organizations use approaches like domain-driven design to derive domains for each service and separate them out.

Writing sophisticated code has become the norm in almost every software company. This is not a bad practice in itself, but if that sophistication comes with the fear that changing any specific module may affect other parts, due to coupled and dependent code, then it cannot be taken positively, especially in an ever-growing environment. In a monolithic application, even senior developers hesitate to make changes without worrying about the overall performance of the application in production, because they know that tweaking any module is risky and can leave them spending long hours tracking down what made the application crash.

So, we need an architectural model that is less dependent, decoupled, and easy to build and develop, which is the actual topic of this article.

Bringing Cloud in the Scene

For quite a few years now, every big, widespread computing concept, and nearly every startup, has started with the cloud, but we cannot make cloud computing our ultimate solution for everything. Similarly, the assumption that putting our monolith application on the cloud will solve the issues discussed above is unjustifiable. We would need a highly skilled and trained team to handle the cloud infrastructure; moreover, maintenance, security, and configuration of load balancers would be highly costly and cumbersome. This is proof that there is no point in putting monolith applications as-is on the cloud.

In one way or another, we have all used software engineering design patterns in our projects. All these patterns are designed to solve a specific kind of problem and requirement. In the same way, we have various kinds of cloud patterns depending on user needs and usage. From this wide variety of design patterns, the one that has superseded the monolith pattern and resolves the scalability and management issues of such large applications is the serverless pattern. And this is the pattern that Alibaba Cloud Function Compute provides.

Serverless is not only about developing solutions in a distributed, service-oriented fashion; it also helps us manage, and in most cases even decrease, the cost of the cloud infrastructure we have purchased. In the next sections, we will explore how scaling happens on Alibaba Cloud Function Compute and how we can deploy our solutions online.

Building a Serverless Application on Alibaba Cloud (Part 1)

In this first part of a two-part tutorial series on how to build a serverless application, we will be covering some of the major concepts to be discussed in this series.

This blog is the first part of a two-part tutorial series on how to build a sample serverless application on Alibaba Cloud. In it, we will go over the major concepts and architecture of a serverless application. We try to present these terms in a simple and approachable manner, so that everything is easier to understand. Of course, if you are already familiar with all of this, you can skip this part and go straight to the second part of the series, where you'll directly learn how to build the sample serverless application on Alibaba Cloud. We take the same approach in the second part of this series.

What Is a Serverless Architecture?

Throughout the history of cloud computing, slowly but surely, things like databases, file storage, and servers have transitioned to the cloud. Following this steady migration, business applications in increasing numbers can likewise be deployed and run completely on elastic and secure virtual cloud servers. All of this takes away the hassle of managing and maintaining on-premises data centers, with their physical servers and supporting network infrastructure, by yourself or your team. And now, with the release of the Function Compute service from Alibaba Cloud, things have become even easier and better. Let me explain. Instead of hosting an entire application on a server, developers can upload just their application logic to the cloud without worrying about server provisioning and management. Because of this, among other things, the serverless model can be far more cost-effective and offers flexible scaling capabilities.

What Is Serverless?

As you can probably infer from its name, this application architecture provides the capability to deploy and run your applications without setting up or maintaining any server in the cloud. The entire application stack operates by relying on different cloud services.

Why Is Serverless?

Perhaps the most notable advantage of a serverless architecture is that you do not have to provision or manage any servers. This gets rid of the headache of server and network configuration, access control, OS patching, security defense, load balancing, performance scaling, and so on.

The advantage of all of this is that you can completely focus on your core product and business logic. As a result, as you can imagine, with serverless, companies and organizations can significantly reduce the time-to-market of their on-cloud applications. In addition, your applications can be automatically scaled out in a very flexible manner by expanding memory or throughput. High availability and fault tolerance are also provided by default. Finally, this paradigm helps achieve a lower operating cost, as you only pay when your code is running. Yes, you heard that right: if your application isn't executing, you don't pay for it.

What Are the Limitations of Serverless?

Of course, as with other models, a serverless framework has both pros and cons. You need to consider these drawbacks before moving forward. Let's discuss some of them here:

  1. You have to rethink the way you design your application logic and workflow. This is especially the case if you want to use Alibaba Cloud Function Compute, because its functions are event-driven and, of course, also stateless. The entire system now technically depends on separate function calls triggered by different events.
  2. For all of serverless's, and Alibaba Cloud Function Compute's, ease of execution and reduction in cost, you also lose control of your environment. That is, you are able to install third-party packages and libraries that your code needs to run, but your hands are tied in terms of custom OS configuration and so on.
  3. There are restrictions on execution time (10 minutes) and payload size (6 MB), as well as on the maximum number of functions that can be created under a single service (50 functions). Tasks that require more than 10 minutes of execution time need to be divided into smaller batches. Traditional, non-serverless approaches do not have these kinds of limits.
  4. Startup latency: as functions are designed to run in containers, there is overhead in starting up the environment, which adds to the execution time.
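The 10-minute limit mentioned in point 3 is usually worked around by splitting a long task into batches, each small enough for one invocation; a sketch (the batch size is illustrative):

```python
# Split a workload into batches small enough that each batch fits
# comfortably inside one function invocation's time limit.
def make_batches(items, batch_size):
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]

jobs = list(range(10))            # a workload too big for one invocation
batches = make_batches(jobs, 4)   # each batch becomes one invocation
print(batches)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Each batch would then be submitted as its own event, so every invocation stays well inside the execution-time limit.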

How Does Serverless Work on the Cloud?

In the serverless model, the whole application runs without provisioning any server instances. All essential components are managed using various services provided by Alibaba Cloud. Each service is fully managed and does not require you to provision or manage servers.

Let's quickly go through the key components, all of which are Alibaba Cloud services, in this architecture:

1. Object Storage Service (OSS)

This service hosts and serves all of the website's static content, such as HTML, CSS, JavaScript, images, and PDFs, whether the data is frequently or infrequently accessed. Like other cloud services, OSS offers cost effectiveness, high security and reliability, and a Pay-As-You-Go billing model. Transferring data to and from OSS can be done by calling API actions or SDK interfaces. We can also employ Alibaba Cloud CDN to efficiently cache data for users in different geographical areas.
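When uploading static content, each object needs the right Content-Type header so browsers render it correctly. A small helper sketch using Python's standard mimetypes module, with a commented-out upload call that roughly follows the oss2 SDK's shape (the bucket and paths are hypothetical):

```python
import mimetypes

# Map a static file to the Content-Type header OSS should serve it with.
def content_type_for(filename):
    guessed, _ = mimetypes.guess_type(filename)
    return guessed or "application/octet-stream"

print(content_type_for("index.html"))  # text/html
print(content_type_for("logo.png"))    # image/png

# With the oss2 SDK (not run here), an upload would look roughly like:
# bucket.put_object_from_file(
#     "index.html", "site/index.html",
#     headers={"Content-Type": content_type_for("index.html")})
```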

2. API Gateway

API Gateway provides high-performance, highly available API hosting services to deploy and release your APIs. It serves as an HTTP endpoint fronting our Function Compute logic, and is the front door for users to access data, business logic, or functionality from your back-end services. This service is responsible for authorizing access to your system through multiple authentication methods. It also supports security mechanisms such as anti-attack, anti-injection, anti-request replay, and anti-request tampering.

3. Function Compute

This is a fully managed, event-driven compute service that runs your application logic. It lets you run code without provisioning or managing servers. Function Compute prepares computing resources for you and runs your code elastically and reliably on your behalf. You only pay for the resources actually consumed while the code runs; if your code isn't executed, you don't pay.

Function Compute runs your code in response to events. When the event source service triggers an event, the associated function is automatically called to process the event.
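A minimal handler in the Python runtime might look like the sketch below. The `handler(event, context)` entry-point signature follows Function Compute's Python programming model, while the JSON registration payload is an assumption for illustration.

```python
import json

def handler(event, context):
    # `event` carries the trigger payload (a str or bytes body for an
    # HTTP/API Gateway trigger); `context` carries runtime metadata such
    # as the request ID. We assume a JSON body like {"name": "..."}.
    record = json.loads(event)
    name = record.get("name", "anonymous")
    return json.dumps({"message": "registered " + name})
```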

4. ApsaraDB for MongoDB

Over the last few years, NoSQL has become increasingly popular. This model solves the impedance mismatch between the relational data structures (tables, rows, fields) and the in-memory data structures of the application (objects). Most importantly, NoSQL is designed to scale horizontally which makes it an excellent choice for modern web applications.
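Because documents map directly onto application objects, storing a record needs no object-relational mapping layer. The sketch below builds a registration document and shows (commented out) how it might be inserted with the `pymongo` driver against ApsaraDB for MongoDB; the field names and connection string are illustrative assumptions.

```python
from datetime import datetime, timezone

def registration_document(name, email):
    # The document mirrors the in-memory object one-to-one, which is the
    # "no impedance mismatch" property described above.
    return {
        "name": name,
        "email": email,
        "registered_at": datetime.now(timezone.utc).isoformat(),
    }

# Usage with pymongo (pip install pymongo); the connection string is a placeholder:
# from pymongo import MongoClient
# client = MongoClient("mongodb://<user>:<password>@<apsaradb-host>:3717/admin")
# client["myapp"]["registrations"].insert_one(
#     registration_document("alice", "alice@example.com"))
```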

Related Courses

Using Function Compute To Acquire Users Registration Info

This course is designed for users who want to learn about serverless computing technology and who would like to use cloud products for data processing. In this course, you will learn what serverless computing is and what the advantages of Function Compute are, and you will be able to apply Function Compute to satisfy simple business demands.

Global Acceleration – Live Demo (English)

This live demo explains the basic concepts and usage scenarios of Alibaba Cloud Global Acceleration and illustrates how to configure it properly.

Deploy Web Service on ECS - Live Demo

This is a step-by-step demonstration of how to quickly deploy a web service using ECS.

Related Market Products

Using Function Compute To Acquire Users Registration Info

Through this course, you can understand the advantages and usage scenarios of Alibaba Cloud Function Compute, and apply this product to satisfy simple business demands.

An Introduction to Application Containerization

Through this course, you will learn the basic knowledge and common tools of application containerization and Alibaba Cloud Container Service.

Related Documentation

Use ECI in Serverless Kubernetes

This topic describes how to use Elastic Container Instance (ECI) in Serverless Kubernetes. Serverless Kubernetes integrates Kubernetes clusters with Alibaba Cloud services, such as ECI. You can quickly create a serverless Kubernetes cluster and use ECI in the cluster.

Serverless Kubernetes overview

ECI is compatible with Kubernetes. Based on ECI, Alibaba Cloud provides the Serverless Kubernetes service for you to create Kubernetes clusters that are maintenance-free and fully managed.

Serverless Kubernetes is a serverless platform that is optimized for running containers. This platform provides powerful capabilities for managing Kubernetes clusters and handling workloads, such as deployments, StatefulSets, jobs, and cron jobs. It allows you to abstract the architecture of applications and components and eliminates the need for server management tasks such as server creation, infrastructure management, O&M, upgrades, and capacity planning. You can focus on applications rather than the underlying resources, and only pay for the resources used by the applications. Serverless Kubernetes automatically scales resources in and out based on application types, and manages resources in a finer-grained way.

Knative overview - Container Service for Kubernetes

This topic describes the concept of Knative, the roles involved in a Knative system, Knative components, and the third-party add-ons supported by Knative.

Background information

Knative is a serverless framework that is based on Kubernetes. The goal of Knative is to provide a cloud-native standard for orchestrating serverless workloads across different platforms. To implement this goal, Knative codifies the best practices around three areas of developing cloud-native applications: building containers and functions, serving and dynamically scaling workloads, and eventing.

Roles involved in the Knative system

  1. Developers: personnel who directly use native Kubernetes APIs to deploy serverless functions, applications, and containers to an auto-scaling runtime.
  2. Contributors: personnel who develop and contribute code and documentation to the Knative community.
  3. Operators: personnel who deploy and manage Knative instances by using Kubernetes APIs and tools. Knative can be integrated into any environment that supports Kubernetes, such as the systems of any enterprise or cloud provider.
  4. Users: personnel who use an Istio gateway to access the target services, or use the eventing system to trigger the serverless services of Knative.

Related Products

Data Lake Analytics

Data Lake Analytics does not require any ETL tools. This service allows you to use standard SQL syntax and business intelligence (BI) tools to efficiently analyze your data stored in the cloud with extremely low costs.

ApsaraDB RDS for MySQL

A fully hosted online database service that supports MySQL 5.5, 5.6, 5.7, and 8.0.

Alibaba Clouder
