
New Thoughts on Cloud Native: Why Are Containers Everywhere?

This article is based on a speech given by Yi Li, Director of Alibaba Cloud Container Service, at the Cloud Native Industry Conference sponsored by CAICT.

By Muhuan

On April 24, the first Cloud Native Industry Conference, sponsored by the China Academy of Information and Communications Technology (CAICT), was held in Beijing. In his speech "Cloud Native Digital Leading Future", Yi Li, Director of Alibaba Cloud Container Service, said, "Cloud native technology can support Internet applications and has a profound influence on new computing architectures and intelligent data applications. Represented by containers, service meshes, microservices, and serverless computing, cloud native technology has delivered a new way of building applications."

The following content is based on Yi Li's speech.

Reducing System Complexity by Using Cloud Native Technology

Today, most enterprises have fully embraced cloud computing. The all-in-cloud era has witnessed three important changes: cloud-based infrastructure, Internet-based core technologies, and data-driven, intelligent services. Across fields and industries, many business applications are now born in the cloud, and enterprises increasingly resemble Internet companies, so technical capabilities are viewed as an indispensable core competency. At the 2019 Beijing Alibaba Cloud Summit, Zhang Jianfeng, President of Alibaba Cloud Intelligence, highlighted the significance of vigorous investment in cloud native technology when talking about "Internet-based core technologies".

Why should we embrace cloud native technology? On the one hand, cloud computing has reshaped the entire software lifecycle, from architecture design to development, build, delivery, and O&M. On the other hand, the IT architectures of enterprises have changed significantly, and business now depends deeply on IT capabilities. Both changes bring complexity and challenges.

Just as the development of human society has been driven by technological revolutions and an evolving division of labor, cloud native technology reduces this complexity through a new division of labor in IT.

First, Docker decouples applications from their runtime environments: the workloads of many business applications can be containerized, and containerization makes applications agile, portable, and standardized. Second, Kubernetes decouples resource orchestration and scheduling from the underlying infrastructure: it simplifies application and resource management, and container orchestration improves the efficiency of resource scheduling. Third, service mesh technology, represented by Istio, decouples service implementation from service governance capabilities. In addition, Alibaba Cloud provides diverse development tools (such as APIs and SDKs) for integrating third-party software, which opens up extensive possibilities for cloud ecosystem partners. This layered technology architecture has advanced the division of labor and significantly accelerated technical and business innovation.
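The decoupling that Kubernetes provides can be illustrated with a minimal sketch, assuming the official kubernetes Python client and a hypothetical image name: the application is described purely in terms of its desired state, while node selection and scheduling are left entirely to the cluster.

```python
# Minimal sketch of declaring an application independently of the infrastructure
# that runs it, using the official `kubernetes` Python client. The image name and
# replica count are hypothetical.
from kubernetes import client, config

def deploy_containerized_app():
    config.load_kube_config()  # e.g. credentials for a managed Kubernetes cluster

    container = client.V1Container(
        name="web",
        image="registry.example.com/demo/web:1.0",  # hypothetical image
        ports=[client.V1ContainerPort(container_port=8080)],
    )
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=3,  # desired state only; pod placement is decided by the scheduler
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

if __name__ == "__main__":
    deploy_containerized_app()
```

Nothing in this description names a specific machine, which is exactly the separation of concerns the layered architecture is meant to achieve.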

Alibaba Cloud believes that cloud native technology can support Internet-scale applications, accelerate innovation, allow low-cost trial and error, and shield applications from the differences and complexity of the underlying infrastructure. In addition, new computing approaches, such as service meshes and serverless computing, make the entire IT architecture extremely flexible, so applications can better serve business goals. You can also build domain-oriented cloud native frameworks on Alibaba Cloud Container Service, for example, Kubeflow for machine learning and Knative for serverless workloads.


New Thoughts on Container Service

Containers are everywhere. As a provider of Container Service, we believe that container technology will continue to develop and be applied to new computing forms, new application workloads, and new physical boundaries. The following sections share our insights and new thoughts on each of these trends.

1. New Computing Form: Cloud Native Serverless Runtime Has Arrived

Cloud native technology aims to let enterprises and developers focus on application development rather than on infrastructure and basic services. Serverless computing takes this a step further: it turns application services into resources that clients call through APIs. More importantly, its pay-as-you-go model can reduce your costs.

A serverless runtime can be implemented at the level of infrastructure containers, encapsulated application services, or event-driven, function-oriented computing.


The cloud native serverless runtime can take multiple forms, and vendors have designed different service solutions.

  • Function-oriented, or Function as a Service (FaaS): AWS Lambda and Alibaba Cloud Function Compute support event-driven programming. You only need to implement the function logic, which improves development efficiency (see the sketch after this list). Alibaba Cloud Function Compute charges by the number of calls, and you can smoothly adjust computing resources based on business traffic. In typical scenarios, costs are reduced by 10% to 90%. For example, Malong Technologies reduced its model prediction costs by 40% with Function Compute.
  • Application-oriented: With Google App Engine, Cloud Run (new), and Alibaba Cloud EDAS Serverless, for example, you only need to provide the application implementation, and the platform handles flexible, automated application O&M. These services are designed mainly for Internet applications. Unlike FaaS, the application-oriented serverless architecture does not require existing applications to be rewritten. Alibaba Cloud EDAS Serverless provides a serverless application hosting platform for popular open source microservice frameworks and supports the Spring Cloud, Apache Dubbo, and Alibaba Cloud HSF frameworks.
  • Container-oriented: Services such as AWS Fargate and Alibaba Cloud Serverless Kubernetes take container images as the unit of delivery, are highly flexible, and can schedule a wide variety of applications, with no underlying infrastructure for you to manage. In May 2018, Alibaba Cloud released Serverless Kubernetes Container Service for container-based applications. This service requires no node management or capacity planning, charges by the resources that applications actually consume, and supports auto scaling. Optimized for Alibaba Cloud infrastructure, it is secure and efficient and significantly reduces the cost of managing Kubernetes clusters. Its bottom layer is built on elastic container instances, a lightweight virtualization layer that Alibaba Cloud has optimized for containers. This provides a lightweight, efficient, and secure execution environment and lets you deploy container applications without modifying their configuration.
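To make the function-oriented model concrete, here is a minimal sketch of an event-driven function. The (event, context) handler convention and the JSON event payload are assumptions; the exact runtime contract varies between FaaS platforms.

```python
# Minimal sketch of a FaaS handler. The (event, context) signature follows the
# convention used by platforms such as Alibaba Cloud Function Compute and AWS
# Lambda, but the exact event format is platform-specific and assumed here.
import json

def handler(event, context):
    # The platform invokes this function once per event (for example, an HTTP
    # request or an object-storage upload). There is no server to manage, and
    # billing is based on the number of calls and execution time.
    payload = json.loads(event) if isinstance(event, (bytes, str)) else (event or {})
    name = payload.get("name", "world")
    return json.dumps({"message": f"Hello, {name}!"})
```

The developer ships only this function; provisioning, scaling, and load balancing are the platform's responsibility.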


2. New Application Workloads: Containers Are Being Used for More and More Applications

Containers were once considered unsuitable for traditional applications, but support has improved significantly. The Windows ecosystem now supports containers: most core capabilities of Kubernetes v1.14, such as pods, services, application orchestration, and Container Network Interfaces (CNIs), are supported on Windows nodes. Windows systems still hold roughly a 60% market share, and traditional virtualization-based applications, such as Enterprise Resource Planning (ERP) software, ASP-based applications, and a large number of Windows databases, can be containerized without rewriting their code.

New architectures based on container technology will also generate new business value for applications. Cloud native AI is an important scenario: it requires quickly building AI environments, using underlying resources efficiently, and seamlessly supporting the full lifecycle of deep learning. For AI engineering, a cloud native system improves efficiency in four aspects.

  • It optimizes the scheduling of heterogeneous resources (see the sketch after this list).
  • It improves elasticity, efficiency, and granularity (GPU sharing is supported).
  • It simplifies the management of heterogeneous resources and improves observability and utilization.
  • It makes the AI pipeline portable, composable, and reproducible.
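As a minimal illustration of heterogeneous-resource scheduling, the sketch below submits a GPU training pod through the official kubernetes Python client. The nvidia.com/gpu extended resource is the standard way to request whole GPUs; the image and command are hypothetical, and finer-grained GPU sharing relies on additional scheduler extensions not shown here.

```python
# Minimal sketch: request a GPU declaratively and let Kubernetes place the pod
# on a suitable node. Image, command, and names are hypothetical.
from kubernetes import client, config

def submit_gpu_training_pod():
    config.load_kube_config()
    container = client.V1Container(
        name="trainer",
        image="registry.example.com/demo/train:1.0",  # hypothetical training image
        command=["python", "train.py"],
        resources=client.V1ResourceRequirements(
            limits={"nvidia.com/gpu": "1"},  # the scheduler picks a GPU node
        ),
    )
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="gpu-train", labels={"app": "train"}),
        spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)

if __name__ == "__main__":
    submit_gpu_training_pod()
```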

Distributed training for deep learning, for example, can be enhanced in three aspects through Alibaba Cloud Container Service (a conceptual sketch of the communication pattern follows).

  • Resource optimization: Heterogeneous resources, such as CPUs and GPUs, are scheduled in a centralized manner, and Virtual Private Cloud (VPC) or Remote Direct Memory Access (RDMA) networking is used for acceleration.
  • Performance improvement: With P100 GPUs (FP64), the acceleration ratio is increased by 90% and performance is improved by 45% relative to native TensorFlow.
  • Algorithm optimization: The Message Passing Interface (MPI) replaces gRPC, combined with ring-allreduce, computation-communication overlapping, and gradient fusion.
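The allreduce idea behind that algorithm optimization can be illustrated with a small, framework-agnostic sketch using mpi4py. This is only a conceptual illustration, not Alibaba Cloud's implementation; production frameworks such as Horovod build ring-allreduce, tensor fusion, and computation-communication overlap on top of MPI or NCCL.

```python
# Conceptual sketch of MPI-based gradient averaging (the idea behind replacing
# gRPC parameter servers with allreduce-style communication). Run with, e.g.:
#   mpirun -np 4 python allreduce_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, world_size = comm.Get_rank(), comm.Get_size()

# Each worker computes gradients on its own data shard (random here, for illustration).
local_gradients = np.random.rand(4)

# Allreduce sums the gradients across all workers; dividing by the number of
# workers gives the global average that every model replica then applies.
summed = np.empty_like(local_gradients)
comm.Allreduce(local_gradients, summed, op=MPI.SUM)
averaged_gradients = summed / world_size

print(f"worker {rank}/{world_size}: averaged gradients = {averaged_gradients}")
```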

In other high-performance computing scenarios, such as gene data processing, an Alibaba Cloud user can process 100 GB of whole-genome sequencing (WGS) data within five hours. Complex pipelines with more than 5,000 steps are supported, and 500 nodes can be scaled out within 90 seconds, making the most of the extreme elasticity of containers.
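At the application layer, that elasticity is typically expressed declaratively. The sketch below, assuming the official kubernetes Python client and a hypothetical Deployment named "pipeline-worker", creates a standard HorizontalPodAutoscaler; node-level scaling, such as the 500-node scale-out mentioned above, is handled by cluster-level elasticity (cluster autoscaling or elastic container instances) rather than by application code.

```python
# Minimal sketch of declarative pod-level autoscaling. The target Deployment
# name and the thresholds are hypothetical.
from kubernetes import client, config

def create_autoscaler():
    config.load_kube_config()
    hpa = client.V1HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name="pipeline-worker-hpa"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name="pipeline-worker",
            ),
            min_replicas=1,
            max_replicas=500,  # scale out aggressively when the pipeline is busy
            target_cpu_utilization_percentage=70,
        ),
    )
    client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
        namespace="default", body=hpa,
    )

if __name__ == "__main__":
    create_autoscaler()
```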

3. New Physical Boundaries: Cloud-Edge-End, Containers Are Not Limited to IDCs

The best-known container infrastructure is the Internet Data Center (IDC). The extreme elasticity of containers allows IDCs to scale applications and resources in response to fluctuations in business traffic, thereby ensuring high utilization and cost-effectiveness.

With the advent of 5G and IoT, traditional cloud computing centers that store and compute data centrally can no longer meet terminal devices' needs for timeliness, capacity, and computing power. Extending cloud computing capabilities to edges and devices, while keeping implementation, delivery, O&M, and control centralized, will become an important trend in cloud computing. Based on Kubernetes, cloud native technology provides the same capabilities and experience at the edge as in the cloud. It distributes applications in a cloud-edge-end integrated mode, supports application distribution and lifecycle management across different system architectures and network conditions, and optimizes access protocols, synchronization mechanisms, and security mechanisms for edges and devices.
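One simple way to express cloud-edge-end placement on Kubernetes is through node labels and selectors. The sketch below, assuming the official kubernetes Python client, pins a workload to nodes carrying a hypothetical edge label; edge-oriented distributions define their own node groupings and add tuned synchronization and security mechanisms not shown here.

```python
# Minimal sketch: constrain a workload to edge nodes with a node selector.
# The label key/value and image are hypothetical.
from kubernetes import client, config

def deploy_to_edge():
    config.load_kube_config()
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="edge-agent"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "edge-agent"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "edge-agent"}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(
                        name="edge-agent",
                        image="registry.example.com/demo/edge-agent:1.0",  # hypothetical
                    )],
                    node_selector={"example.com/node-role": "edge"},  # hypothetical label
                ),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

if __name__ == "__main__":
    deploy_to_edge()
```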

As described above, application containerization enables standardized migration and an agile, flexible cloud native application architecture. This significantly simplifies multi-cloud and hybrid cloud deployment, optimizes cost-effectiveness, and opens up more options: meeting security and compliance requirements, enhancing business agility, and improving regional coverage.

Containers run on many kinds of infrastructure, whether IDCs, edge clouds, multiple clouds, or hybrid clouds, so developers can concentrate on the applications themselves.

Conclusion

The era of cloud native technology is the best era for developers.

Cloud native technology can support Internet applications and has a profound influence on new computing architectures and intelligent data applications. Represented by containers, service meshes, microservices, and serverless computing, cloud native technology has delivered a new way of building applications. In addition, it is expanding the boundary of cloud computing by promoting borderless computing in multi-cloud and hybrid cloud mode and through cloud-edge-end integration.

In the era of cloud native technology, cloud vendors can play a larger role and create more value for customers.

Cloud vendors need to help users take full advantage of the cloud and help enterprises create business value.
