Build Apps Easily with Serverless Architecture on Virtualization Tech

Serverless architectures emerged as virtualization technology developed, and you can now take advantage of serverless solutions to easily build lightweight, highly flexible, and stateless applications.

In cloud computing, virtualization refers to hardware virtualization, which means creating virtual machines within an operating system (OS).

Virtualization allows you to separate the operating system from the underlying hardware, which means you can run multiple operating systems, such as Windows and Linux, at the same time on a single physical machine. These are called guest operating systems (guest OSes). Virtualization saves you both money and time.

Alibaba Cloud provides a comprehensive suite of global cloud computing services, including easy-to-use, high-performance virtual servers. For just $2.50 a month, you can enjoy all the advantages of SSD virtual servers through the new packages, which include not only an SSD virtual server but also the data transfer plan you need. Come and Get the Offer!

Developers are constantly on the lookout for more effective ways to manage the software development lifecycle. Every introduction of a new technology concept is accompanied by an increase in productivity. Similarly, serverless architecture was introduced to help businesses focus on application development. With serverless, businesses no longer have to worry about server infrastructure, reducing development costs and shortening the development cycle.

The development of serverless architecture builds on earlier achievements, starting with virtualization (cloud computing). Although the progression is largely continuous, it is marked by several notable milestones:

  1. Virtualization technology is introduced to virtualize large physical servers into individual VM resources.
  2. Virtualization clusters are moved to cloud computing platforms for simple O&M.
  3. Each VM is subdivided into Docker containers based on the principle of minimizing the runtime footprint.
  4. Applications built on Docker containers no longer require runtime environment management; a serverless architecture needs only the core code.
  5. Serverless is introduced to help developers focus on application logic rather than server infrastructure.

Serverless architectures have the following features:

  1. Granular computing resources
  2. Resources do not need to be pre-allocated
  3. Highly scalable and flexible architecture
  4. Users only need to pay for the resources they use
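These features can be illustrated with a minimal sketch of a serverless function handler. The event/context signature below follows the convention commonly used by FaaS platforms such as Alibaba Cloud Function Compute, but the names here are illustrative assumptions rather than a specific SDK:

```python
import json

def handler(event, context=None):
    """A granular, stateless unit of compute: it holds no server state,
    is invoked on demand, and is billed only for the time it runs."""
    # Accept either a raw JSON string or an already-parsed dict.
    body = json.loads(event) if isinstance(event, (str, bytes)) else event
    name = body.get("name", "world")
    # Any persistent state would live outside the function (e.g., object
    # storage or a database), so the platform can scale instances freely.
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

Because the function keeps no state between invocations, the platform can run zero instances when idle and many in parallel under load, which is what makes the pay-per-use billing model possible.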

You can find detailed scenarios for serverless architecture in 4 Use Cases of Serverless Architecture.

Related Blog Posts

How to Use the LXD Container Hypervisor on Ubuntu 16.04

A popular application for containing services and making them portable is Docker. It is designed to isolate single applications and excels at it. Although there are workarounds that make it possible to squeeze more programs into a Docker box, there is no reason to bend a tool designed for one purpose to do something else. That is where LXD comes in: instead of packing a single application, we need to contain an entire (Linux-based) operating system. LXD uses and manages LXC containers and is similar to a virtual machine hypervisor such as QEMU, Xen, or VirtualBox, but much more lightweight and also slightly faster, since it doesn't actually virtualize hardware; it simply contains and isolates a group of processes from the host system.

In this tutorial, we will install and configure LXD on an Alibaba Cloud Elastic Compute Service (ECS) instance and learn how to use the command line to create and manage containers.

Lower Cost with Higher Stability: How Do We Manage Test Environments at Alibaba?

The earliest virtualization technology was the virtual machine. As early as the 1950s, IBM began using this hardware-level virtualization method to dramatically improve resource utilization. Each isolated environment on a virtual machine runs a complete operating system, so isolation is strong and applicability is broad. However, this approach is somewhat heavyweight for running business services. After 2000, open-source projects such as KVM and Xen popularized hardware-level virtualization.

At the same time, another lightweight virtualization technology emerged. Early container technology, represented by OpenVZ and LXC, virtualized the runtime environment on top of the operating system kernel, eliminating the resource consumption of a separate operating system and achieving higher resource utilization at the cost of some isolation.

Later, Docker, with its concepts of image encapsulation and the single-process container, pushed this kernel-level virtualization technology into the mainstream. Keeping pace with these advances, Alibaba began using virtual machines and containers very early. During the "Singles' Day" shopping carnival in 2017, 100% of its online business services ran in containers. The next challenge, however, is whether infrastructure resources can be used even more efficiently.

Containers eliminate the overhead of hardware instruction translation and guest operating systems: only a thin layer of kernel namespace isolation separates programs running in containers from ordinary processes, with virtually no runtime performance loss. As a result, virtualization seems to have reached its limit in this direction. The only remaining option is to set aside generic scenarios, focus on the specific scenario of test environment management, and continue to seek breakthroughs. In this area, Alibaba has found a new treasure: service-level virtualization.

So-called service-level virtualization essentially uses message-routing control to reuse some services across the cluster. With service-level virtualization, many seemingly large standalone test environments actually consume minimal additional infrastructure resources, so providing each developer with a dedicated test environment cluster is no longer an extravagance.

The feature environment is the most interesting part of this approach. It is a virtual environment: superficially, each feature environment is an independent, complete test environment consisting of a cluster of services. In fact, apart from the services the current user wants to test, all other services are virtualized through the routing system and message-oriented middleware, pointing to the corresponding services in the shared basic environment. In the general development process at Alibaba, a development task needs to go through feature branches, release branches, and many related stages before it is finally released and launched. Most environments are deployed from the release branch, but these self-service virtual environments for developers are deployed from the code's feature branch, hence the name "feature environment" (within Alibaba it is called the "project environment").
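The routing idea behind a feature environment can be sketched roughly as follows. The names and structures here are hypothetical illustrations, not Alibaba's actual middleware: a call tagged with a feature-environment ID is routed to that environment's redeployed services when they exist, and falls back to the shared basic environment otherwise.

```python
# Hypothetical sketch of service-level virtualization via message routing:
# only the services overridden in a feature environment are actually
# deployed there; every other call reuses the shared basic environment.

SHARED_BASE = {
    "user-svc": "base/user-svc",
    "order-svc": "base/order-svc",
    "pay-svc": "base/pay-svc",
}

# Feature environment "feat-123" redeploys only the service under test,
# built from its code feature branch.
FEATURE_ENVS = {
    "feat-123": {"order-svc": "feat-123/order-svc"},
}

def route(service, env_id=None):
    """Return the target instance for a service call carrying an env tag."""
    if env_id and service in FEATURE_ENVS.get(env_id, {}):
        return FEATURE_ENVS[env_id][service]  # isolated feature-branch build
    return SHARED_BASE[service]               # reused shared service
```

For example, `route("order-svc", "feat-123")` resolves to the feature-branch deployment, while `route("user-svc", "feat-123")` transparently reuses the shared base service, which is why a feature environment consumes so few additional resources.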

Related Documentation

Deploy an Ingress application on a virtual node

This topic describes how to deploy an Ingress application on a virtual node of a Kubernetes cluster. With a virtual node, the Kubernetes cluster can provide the application with greater computing capability without the need to create a new node for the cluster.
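As a minimal sketch, scheduling workloads onto a virtual node typically looks like the manifest below. The `nodeSelector` label and toleration follow the common Virtual Kubelet convention; treat the exact label and key values, and the `demo-app` name, as assumptions to verify against your own cluster:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                   # hypothetical application name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      nodeSelector:
        type: virtual-kubelet      # schedule pods onto the virtual node
      tolerations:
        - key: virtual-kubelet.io/provider
          operator: Exists         # tolerate the virtual node's taint
      containers:
        - name: demo-app
          image: nginx:stable
```

An Ingress resource in front of the corresponding Service then routes external traffic to these pods, even though they run on elastic capacity rather than on a regular cluster node.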

Connect to Kubernetes clusters through virtual node addons

To improve the user experience of Container Service and support more application scenarios, Alibaba Cloud Container Service for Kubernetes provides the virtual node feature, which allows for the unlimited scaling and hosting of your Kubernetes clusters.

Virtual nodes are created by using the community’s Virtual Kubelet technology. They enable seamless connection between Kubernetes and Elastic Container Instance (ECI), so that Kubernetes clusters can easily obtain great elasticity without being limited by the computing capacity of cluster nodes.

Related Products

Container Service for Kubernetes (ACK)

Container Service for Kubernetes (ACK) is a Kubernetes Certified Service Provider (KCSP) and is qualified under the Certified Kubernetes Conformance Program, ensuring a consistent Kubernetes experience and workload portability. It integrates Alibaba Cloud's capabilities in virtualization, storage, network, and security, providing an improved running environment for containerized Kubernetes applications.

Container Service

Container Service simplifies the establishment of container management clusters and integrates Alibaba Cloud's virtualization, storage, network, and security capabilities to create an optimal container-running environment on the cloud.

Alibaba Clouder
