
What is Serverless Computing? Challenges, Architecture and Applications

Serverless computing, also known as Function as a Service (FaaS), is a model of cloud computing. Building on Platform as a Service (PaaS), serverless computing provides a micro-architecture in which the unit of deployment is an individual function rather than a whole application.

You've probably also heard mention of serverless cloud computing. But how can that be? Where do you host your website, database and so on if you haven't first created one or more virtual servers?

Serverless cloud computing is provided by Alibaba Cloud within a product set known as Function Compute. The name actually provides a useful clue as to how the concept works. The basic building block is no longer a server but a block of code, known as a function. The function is executed when an event known as a trigger occurs. This can be an API call, a call to a particular URL, a timer reaching a predetermined value, or the appearance of a new file in a directory, for example.
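To make the idea concrete, here is a minimal sketch of a Python event function of the kind a timer trigger might run. The handler signature follows Function Compute's Python event-function convention, but treat the payload fields (such as triggerTime) as illustrative assumptions.

# -*- coding: utf-8 -*-
import json

def handler(event, context):
    # For a timer trigger, `event` is a JSON document describing the firing.
    evt = json.loads(event)
    print('Function triggered at: %s' % evt.get('triggerTime'))
    return 'OK'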

Triggering Functions

Functions still run on cloud servers, but those servers are part of Alibaba Cloud's fully-managed core infrastructure. You don't need to build, patch, secure or pay for your own servers. Instead, you simply create and upload functions, and then create triggers which cause them to run.

Function Compute is massively scalable, without the need for you to implement any load balancing or contingency planning. Whether a function gets called once a month, or a million times a minute, the underlying infrastructure will take care of it.

Billing for Serverless Cloud Computing

Billing is straightforward and easy to budget for. The cost of calling a function is made up of three components: request, duration, and an optional Internet traffic fee.

Request fees are $0.20 per million requests. The duration component is measured in gigabyte-seconds, and is calculated by multiplying the time your function takes to run by the amount of RAM you allocated to it. For example, a function which runs for 0.1 seconds and is allocated 0.5 GB of RAM consumes 0.05 gigabyte-seconds. Gigabyte-seconds cost $0.00001668 each, which works out to around 6 cents per gigabyte-hour.

As an Alibaba Cloud account holder, your first 1 million Function Compute requests and 400,000 gigabyte-seconds each month are completely free. Therefore, you could trigger around 32,200 function calls per day (the point at which the monthly request quota, the limiting factor here, runs out), each allocated 0.4 GB of RAM and running for 100 milliseconds, and pay absolutely nothing. This compares very favorably with conventional cloud servers, especially as there's no need to expend additional time and effort securing, patching, and updating your own VMs.

If your function generates traffic on the public Internet, this is charged at between $0.07 and $0.13 per GB, depending on the region.
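Putting the components together, the following sketch shows the arithmetic behind a monthly bill, including the free tier. The prices are the ones quoted above; always check the current pricing page before budgeting.

REQUEST_PRICE = 0.20 / 1000000.0   # USD per request
GB_SECOND_PRICE = 0.00001668       # USD per gigabyte-second
FREE_REQUESTS = 1000000            # free requests per month
FREE_GB_SECONDS = 400000           # free gigabyte-seconds per month

def monthly_cost(calls, seconds_per_call, ram_gb):
    gb_seconds = calls * seconds_per_call * ram_gb
    request_fee = max(calls - FREE_REQUESTS, 0) * REQUEST_PRICE
    duration_fee = max(gb_seconds - FREE_GB_SECONDS, 0) * GB_SECOND_PRICE
    return request_fee + duration_fee

# 32,200 calls/day for 31 days, 0.1 s each, 0.4 GB RAM: still $0.00
print(monthly_cost(32200 * 31, 0.1, 0.4))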

Ideal Uses for Function Compute
Function Compute currently supports Python, Node.js, Java, C#, and PHP. And while it doesn't necessarily replace every aspect of your cloud computing requirements, some situations clearly transfer very well. For example, when a user uploads an image to your website, this could trigger a function which compresses the file in order to save on storage costs (see the sketch below). Other ideal uses for Function Compute include web crawlers, data and log analysis, automated backups, and implementing APIs. Coupled with OSS storage, Function Compute also makes a great back end for IoT devices and mobile apps, especially as it's so scalable and cost effective.
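Here is a hedged sketch of that image-compression idea as an OSS-triggered Python function. It assumes the oss2 and Pillow packages are bundled with the function; the event field layout follows the OSS trigger format, and the output key suffix is purely illustrative.

# -*- coding: utf-8 -*-
import io
import json
import oss2
from PIL import Image

def handler(event, context):
    evt = json.loads(event)
    record = evt['events'][0]
    bucket_name = record['oss']['bucket']['name']
    key = record['oss']['object']['key']

    # Temporary credentials are exposed on the function's context object.
    creds = context.credentials
    auth = oss2.StsAuth(creds.access_key_id, creds.access_key_secret,
                        creds.security_token)
    endpoint = 'oss-%s-internal.aliyuncs.com' % context.region
    bucket = oss2.Bucket(auth, endpoint, bucket_name)

    # Download the upload, recompress it as JPEG, and write it back under a
    # new key (give the trigger a suffix filter so the compressed copy does
    # not fire the function again).
    img = Image.open(io.BytesIO(bucket.get_object(key).read()))
    out = io.BytesIO()
    img.convert('RGB').save(out, format='JPEG', quality=70, optimize=True)
    bucket.put_object(key + '.compressed.jpg', out.getvalue())
    return 'compressed %s' % key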

You can even implement an entire serverless website. Create your content (static HTML files or database-driven) and upload the files to an Alibaba Cloud Object Storage Service (OSS) bucket. Then create triggers which retrieve and display the relevant content in response to a particular URL being accessed. You can find a complete step-by-step tutorial on how to do this at https://www.alibabacloud.com/blog/create-a-serverless-website-with-alibaba-cloud-function-compute_594594 .
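As a taste of what the tutorial covers, the sketch below shows one plausible shape for such a function: an HTTP-triggered Python handler that maps the request path to an object in an OSS bucket. The bucket name, endpoint, and routing rule are assumptions for illustration.

# -*- coding: utf-8 -*-
import oss2

BUCKET = 'my-website-bucket'                        # hypothetical bucket name
ENDPOINT = 'oss-cn-hangzhou-internal.aliyuncs.com'  # adjust to your region

def handler(environ, start_response):
    # HTTP-triggered Python functions use a WSGI-style interface, and the
    # Function Compute context is available via environ['fc.context'].
    path = environ.get('PATH_INFO', '/')
    key = 'index.html' if path == '/' else path.lstrip('/')

    creds = environ['fc.context'].credentials
    auth = oss2.StsAuth(creds.access_key_id, creds.access_key_secret,
                        creds.security_token)
    body = oss2.Bucket(auth, ENDPOINT, BUCKET).get_object(key).read()

    start_response('200 OK', [('Content-Type', 'text/html')])
    return [body]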

To learn more about Function Compute, see https://www.alibabacloud.com/products/function-compute .

Related Blogs

The Concept and Challenges of Serverless

This article analyzes the core concepts of serverless systems from the application architecture perspective and summarizes the challenges faced while implementing serverless systems.

Preface

In an article titled "The Sound and the Fury of Serverless", I used a metaphor to describe the development of serverless systems in the industry. Serverless is not straightforward: everyone talks about it, nobody really knows how to do it, and everyone thinks everyone else is doing it, so everyone claims they are doing it.

Although it has been half a year since I wrote that article, I don't think the situation has changed much. Many developers and managers have a one-sided or even erroneous understanding of serverless technology. If new technologies are launched without sufficient knowledge of application architecture evolution, cloud infrastructure capabilities, and the associated risks, their business value cannot be realized, effort is wasted, and technical risk is introduced.

In this article, I will attempt to analyze the appeal and core concepts of serverless systems from the perspective of application architecture and summarize the challenges that must be faced when implementing them.

Evolution of Application Architectures

To help you better understand serverless systems, let's look back at the evolution of application architectures. More than a decade ago, the mainstream application architecture was a monolith deployed on a single server together with a database. Under this architecture, O&M personnel carefully maintained the server to ensure service availability. As the business grows, this simplest of architectures soon confronts two problems. First, only one server is available: if that server fails, due to hardware damage or other such problems, the whole service becomes unavailable. Second, as the business volume grows, a single server's resources soon become unable to handle all the traffic. The most direct way to solve these two problems is to add a Server Load Balancer (SLB) at the traffic entry point and deploy the monolith on multiple servers. In this way, the server's single point of failure (SPOF) risk is eliminated, and the monolith application can be scaled out.

As the business continues to grow, more R&D personnel must be hired to develop features for the monolith. At this point, the code in the monolith has no clear physical boundaries, and soon different parts of the code conflict with each other. This requires manual coordination and a large number of conflict-merge operations, sharply decreasing R&D efficiency. In this case, the monolith is split into microservice applications that are independently developed, tested, and deployed. Services communicate with each other through APIs over protocols such as HTTP, gRPC, and Dubbo. Microservices split along Bounded Contexts, in the domain-driven design sense, greatly improve the R&D efficiency of medium and large teams. To learn more about Bounded Contexts, consult books on domain-driven design.

In the evolution from monolithic applications to microservices, a distributed architecture is the default option from the physical perspective, so architects have to meet the new challenges distributed architectures produce. In this process, distributed services, frameworks, and distributed tracing systems are generally adopted first, for example, the cache service Redis, the configuration service Application Configuration Management (ACM), the state coordination service ZooKeeper, the message service Kafka, and communication frameworks such as gRPC or Dubbo. In addition to the challenges of distributed environments, the microservice model gives rise to new O&M challenges. Previously, developers only needed to maintain one application; now they need to maintain ten or more. Therefore, the workload involved in security patch upgrades, capacity assessments, and troubleshooting increases exponentially. As a result, application distribution standards, lifecycle standards, observation standards, and auto scaling become increasingly important.

Now let's talk about the term "cloud-native." In simple terms, whether an architecture is cloud-native depends on whether it evolved in the cloud. "Evolving in the cloud" is not simply about using services at the infrastructure as a service (IaaS) layer, such as Elastic Compute Service (ECS), Object Storage Service (OSS), and other basic computing and storage services. Rather, it means using distributed services, such as Redis and Kafka, in the cloud; these services directly affect the business architecture. As mentioned earlier, distributed services are necessary for a microservice architecture. Originally, we developed such services ourselves or maintained them based on open-source versions. In the cloud-native era, businesses use cloud services directly.

Two other technologies that need to be mentioned are Docker and Kubernetes. Docker defines the application distribution standard: applications written in Spring Boot or Node.js alike are distributed as images. On top of Docker, Kubernetes defines a unified standard for applications throughout their lifecycle, covering startup, launch, health checks, and decommissioning. With application distribution and lifecycle standards, the cloud can provide standard web app services, including application version management, release, post-release observation, and self-recovery. For example, for stateless applications, an underlying physical node's failure does not affect R&D at all, because the web app service automatically switches the application's containers from the faulty physical node to a new one, following the application lifecycle. This is the kind of advantage cloud-native provides.

On this basis, the web app service collects runtime data for applications, such as business traffic concurrency, CPU load, and memory usage, and auto scaling rules can be configured against these metrics. The platform executes these rules, increasing or decreasing the number of containers as business traffic changes. This is the most basic implementation of auto scaling. It helps you avoid idle resources during your business's off-peak hours, reduce costs, and improve O&M efficiency.
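As a minimal sketch of what such a rule looks like, the proportional formula below is the one Kubernetes' Horizontal Pod Autoscaler applies; the target utilization and replica bounds are illustrative assumptions.

import math

def desired_replicas(current_replicas, current_cpu_pct, target_cpu_pct=60,
                     min_replicas=2, max_replicas=50):
    # HPA-style proportional rule: desired = ceil(current * metric / target),
    # clamped to the configured replica bounds.
    desired = math.ceil(current_replicas * current_cpu_pct / target_cpu_pct)
    return max(min_replicas, min(max_replicas, desired))

print(desired_replicas(4, 90))   # heavy load -> scale out to 6 replicas
print(desired_replicas(4, 20))   # light load -> scale in to the minimum, 2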

As the architecture evolves, R&D and O&M personnel gradually shift their focus away from physical machines, expecting the machines to be managed by the platform without human intervention. This is a simple understanding of serverless.

Core Concepts of Serverless

As we all know, serverless does not actually mean the disappearance of servers. More precisely, serverless means that developers do not need to care about servers. This is just like memory in the modern programming languages Java and Python: developers do not need to allocate and release memory manually, yet the memory still exists, managed by a garbage collector. Calling a platform that manages your servers for you a serverless platform is similar to calling Java and Python memoryless languages.

In today's cloud era, a narrow understanding of serverless as simply not caring about servers is not enough. In addition to the basic computing, network, and storage resources contained in the servers, cloud resources also include various types of higher-level resources, such as databases, caches, and messages.

From Serverless Containers to Serverless Kubernetes

This article shares thoughts on serverless Kubernetes and provides an in-depth analysis of serverless Kubernetes in terms of architecture design and infrastructure.

Serverless containers allow deploying container applications without purchasing or managing servers. Serverless containers significantly improve the agility and elasticity of container application deployment and reduce computing costs. This allows users to focus on managing business applications rather than infrastructure, which greatly improves application development efficiency and reduces O&M costs.

Kubernetes has become the de facto standard for container orchestration in the industry. Kubernetes-based cloud-native application ecosystems, such as Helm, Istio, Knative, Kubeflow, and Spark on Kubernetes, use Kubernetes as a cloud operating system. Serverless Kubernetes has attracted the attention of cloud vendors. On the one hand, it simplifies Kubernetes management through a serverless mode, freeing businesses from capacity planning, security maintenance, and troubleshooting for Kubernetes clusters. On the other hand, it further unleashes the capabilities of cloud computing, delivering security, availability, and scalability at the infrastructure level, differentiating itself from the competition.

Alibaba Cloud launched Elastic Container Instance (ECI) and Alibaba Cloud Serverless Kubernetes (ASK) in May 2018. These products have been commercially available since February 2019.

Industry Trends

Gartner predicts that, by 2023, 70% of artificial intelligence (AI) tasks will be built on container and serverless computing models. According to a survey conducted by AWS, 40% of new Amazon Elastic Container Service customers used serverless containers on Fargate in 2019.

Container as a Service (CaaS) is moving in the direction of serverless containers, which will complement function Platform as a Service (fPaaS), also known as Function as a Service (FaaS). FaaS provides an event-driven programming model in which users only need to implement the processing logic of their functions, for example, transcoding and watermarking videos as they are uploaded. FaaS offers high development efficiency and powerful elasticity, but adopting it requires changing the existing development mode. Serverless container applications, by contrast, are built from container images, making them highly flexible: the scheduling system supports stateless applications, stateful applications, and computing tasks alike, and many existing applications can be deployed in a serverless container environment without modification.

Building Serverless Kubernetes Clusters for Serverless Jobs

In this article, I will describe the difference between serverless cluster offerings and the serverless jobs that can run on managed clusters, and which type of subscription is cost-effective for small and medium-sized businesses and enterprises.

Service orchestration is a primary focus of cloud-native solutions and of enterprises that deploy their solutions on private or public clouds. Alibaba Cloud offers several hosting options and supports many deployment models to enable customers to fully utilize the cloud. But this is not always what happens: many customers are unaware of the cloud's potential, and sometimes they do not choose the right architectural design for their solutions. For example, many people use the lift-and-shift approach to migrate their legacy solutions from a VM-based hosting platform straight onto a cloud, be it Alibaba Cloud or any other.

Going Serverless

The serverless application design approach enables organizations to build solutions that can face unpredictable user traffic and peak-load scenarios. Deploying a legacy application on the cloud unchanged costs the enterprise money without yielding good results. A common practice is to split out the areas of the application that are under stress and load: the microservices approach. Even with this approach, your monthly bill for cloud services may not do you justice. The reasons are several:

  1. Your jobs only run at a specific time.
  2. Your jobs face peak load only on special occasions or at certain times.
  3. Your jobs are trigger-based and only handle events as they come.
  4. Your solution and its internal services are asynchronous in nature.

In these cases, even microservices sometimes fail to provide the best solution in a cloud-hosted environment. This article will discuss the benefits of using Serverless Container Service for Kubernetes for serverless jobs.

Serverless Jobs

So far I have been talking about serverless jobs and tasks. You distribute your overall solution into multiple services, and then further break those down into small functions and tasks that execute in response to an event or a trigger, or that keep running, such as a handler for HTTP requests. The serverless functions option helps when you want to keep an HTTP handler available for requests that customers only send occasionally. A contact form is one good candidate. If you own a blog or a small website, it is not feasible to host a complete web application just to hold a database of orders or user queries. In this case, a simple function with Alibaba Cloud Function Compute would be more than enough; a sketch of this idea follows below. You will only be charged when a user sends a request. Another major benefit of serverless jobs is that you only pay for resources while they are being used; as soon as usage goes down, your jobs are scaled down to zero!
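Here is a hedged sketch of that contact-form function as an HTTP-triggered Python handler on Function Compute. The WSGI-style signature is Function Compute's Python HTTP convention; the form field names and the storage step are illustrative assumptions.

# -*- coding: utf-8 -*-
import json
from urllib.parse import parse_qs

def handler(environ, start_response):
    # Read and parse the POSTed form body.
    size = int(environ.get('CONTENT_LENGTH') or 0)
    form = parse_qs(environ['wsgi.input'].read(size).decode('utf-8'))

    message = {
        'name': form.get('name', [''])[0],
        'email': form.get('email', [''])[0],
        'body': form.get('message', [''])[0],
    }
    # A real function would persist `message` to OSS, Table Store, or a
    # message queue here; printing keeps this sketch self-contained.
    print(json.dumps(message))

    start_response('200 OK', [('Content-Type', 'application/json')])
    return [b'{"status": "received"}']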

A typical web application contains web route handlers, views, models, and a database manager. Apart from these, components for logging, backups, and availability make up a minor part of the application. In a microservices environment, each of these components is broken out and deployed separately. This helps achieve high availability and a better developer experience, because your teams can work on the projects separately. You should check this blog out to learn more about the differences between monolith, microservices, and serverless application development architectures.

Alibaba Cloud offers several platforms and hosting solutions for serverless jobs, most notably Alibaba Cloud Function Compute, which provides Node.js, Python, and other SDKs. Here, however, we will focus on container-based jobs and the serverless approaches taken at the infrastructure level.

Serverless Infrastructure

First things first: there is no such thing as serverless infrastructure, only orchestrated infrastructure. If you design your solutions to be scalable, the problem shows up when the infrastructure (or its resources) hits its limit. The classical approach to creating resources for Kubernetes requires manually creating VMs and other resources and attaching them to the cluster. Our cloud environments should not have these limitations, as that would defeat the overall purpose of a cloud.

Related Products

Simple Application Server

Simple Application Server is a new generation computing service for stand-alone application scenarios. It provides one-click application deployment and supports all-in-one services such as domain name resolution, website publishing, security, O&M, and application management.

Elastic Compute Service

Alibaba Cloud Elastic Compute Service (ECS) provides fast memory and the latest Intel CPUs to help you to power your cloud applications and achieve faster results with low latency.

Related Documentation

Deploy Jenkins in a serverless Kubernetes cluster and build an application delivery pipeline

This topic describes how to deploy Jenkins, a continuous integration environment, in a serverless Kubernetes cluster, and provides step-by-step examples on how to build an application delivery pipeline that includes source code compilation, image build and push, and application deployment.

Prerequisites

You have created a serverless Kubernetes cluster. For more information, see Create a serverless Kubernetes cluster.

Deploy Jenkins

  1. Run the following commands to download the Jenkins package:
$ git clone https://github.com/AliyunContainerService/jenkins-on-serverless.git
$ cd jenkins-on-serverless

  2. Mount a persistent volume to the jenkins_home directory. Currently, serverless Kubernetes clusters do not support cloud disks, so you can mount an NFS volume instead. Modify the serverless-k8s-jenkins-deploy.yaml file to add the following fields and set the NFS parameters:
        volumeMounts:
        - mountPath: /var/jenkins_home
          name: jenkins-home
      ...
      volumes:
      - name: jenkins-home
        nfs:
          path: /
          server:    # set this to your NFS server address
  3. Run the following command to deploy Jenkins:
$ kubectl apply -f serverless-k8s-jenkins-deploy.yaml

Create a serverless Kubernetes cluster

This topic describes how to create a serverless Kubernetes cluster in the Container Service console.

Prerequisites

Container Service and Resource Access Management (RAM) are activated. You can activate these services in the Container Service console and RAM console.

Procedure

  1. Log on to the Container Service console.
  2. In the left-side navigation pane, choose Clusters > Clusters. The Clusters page appears.
  3. Click Create Kubernetes Cluster in the upper-right corner of the page. In the Select Cluster Template dialog box that appears, click Create on the Standard Serverless Cluster card. The Serverless Kubernetes tab appears by default.

Related Courses

Using Function Compute To Acquire Users Registration Info

This course is associated with Using Function Compute To Acquire Users Registration Info. You must purchase the certification package before you are able to complete all lessons for a certificate.

Setup Message Queue for Message Subscription and Consumption - Live Demo

Alibaba Cloud Message Queue (MQ) is a distributed message queue service independently developed by Alibaba and fully hosted on the Alibaba Cloud platform. It supports reliable message-based asynchronous communication among microservices, distributed systems, and serverless applications. This service can be used to easily create a scalable distributed system with loose coupling and high availability.
