As you probably all know, containers, microservices, and cloud-native technologies are the current big trends in IT. According to a 2019 report from Portworx and Aqua Security, most companies surveyed are either using containers or considering using them.
Early this morning, I talked with Chris from the US, who delivered a speech just before ours. He said that San Diego expects the turnout at KubeCon at the end of this year to reach 12,000 people! He also mentioned that cloud-native technology has not only changed the architecture of existing applications but has also promoted the development of a wider variety of services, accelerating the adoption of IT systems.
But even though container technology is all the hype these days, challenges still exist. According to a report from Tripwire, about half of the companies surveyed, especially those running deployments of more than 100 containers, believe that their containers have security vulnerabilities. An even larger number of companies are not sure whether their containers are vulnerable at all. As we see it, security is not only a question of technology; it is also a question of confidence, something whose importance shouldn't be underestimated. As shown in pink in their survey, 42% of respondents cannot fully embrace the container ecosystem due to security concerns. Security, therefore, is clearly an important matter.
To put it another way, concern about security is actually a step forward: it's only when you are ready to use a technology in production that you are willing to take a hard look at it from a security perspective, right? Interestingly, when it comes to containers, security is a rather complex subject involving several aspects. Container security is an end-to-end concern: it involves the security and integrity of the container images themselves, the security of the software and hardware infrastructure on which containers run, and the security of the container engines.
Container security has a long history of development. Take the namespace and cgroup features of the Linux kernel as an example. This set of container technologies extends kernel features from the perspective of process scheduling. With benefits such as a user-friendly operating interface and low overhead, it can wrap existing applications in an isolated environment without modifying them. As a pure software solution, it is compatible with both physical and virtual machines at various layers. The problem, however, is that it is part of the Linux kernel, so certain isolation problems in Linux cannot be eradicated and may even worsen as new features are added. At LinuxCon 2015 in Seattle, Linux creator Linus Torvalds said in an interview, "the only real solution is to admit that a) bugs happen, and b) try to mitigate them by having multiple layers of security."
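As a rough illustration of this kernel-level isolation, the util-linux `unshare` tool can start a process in fresh namespaces. This is a minimal sketch, assuming a Linux host with unprivileged user namespaces enabled:

```shell
# Start bash in new user, PID, and mount namespaces.
# --map-root-user maps the current user to root inside the namespace;
# --fork is needed so bash becomes PID 1 in the new PID namespace;
# --mount-proc remounts /proc so `ps` only sees namespaced processes.
unshare --user --map-root-user --pid --fork --mount-proc \
    bash -c 'echo "PID inside the namespace: $$"; ps -e'
```

Inside the namespace, bash typically sees itself as PID 1, and `ps` lists only the processes created within the namespace, even though the same kernel is still shared with the host, which is exactly the limitation discussed above.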
An isolation layer is about allowing application containers to run on their own kernels. The simplest way to do this is to deploy containers in virtual machines, as shown in the leftmost section of the preceding figure. This solution does not require you to change the software stack; it simply runs your containers inside their own virtual machines. However, it results in more overhead and increases your overall maintenance complexity, as there are now two layers to manage.
Another well-established solution for dedicated kernels is the unikernel, as shown in the rightmost section of the preceding figure. This solution allows applications to run with their own kernels. Its benefit lies in a minimal, cut-down library OS (LibOS), which has lower overhead and a smaller attack surface. However, at least as of now, it is not widely used because applications often have to be modified to work with it, and compatibility is always the biggest obstacle to the adoption of a platform technology. Therefore, we think the more suitable secure container solutions for unmodified applications are the two in the middle: microVM and process virtualization. The former uses a lightweight virtual machine and a tailored Linux kernel to reduce O&M overhead while maintaining compatibility. The latter uses a specific kernel to provide the Linux ABI and directly virtualizes the process runtime, maximizing compatibility for Linux applications.
Kata Containers is a microVM-based secure container solution. From an application perspective, it is a runC-compatible container runtime engine that can be called by Kubernetes through containerd or CRI-O and can directly run Docker or OCI images. Unlike runC, however, Kata Containers uses hardware virtualization, so your applications no longer run directly on the host kernel but on a dedicated kernel inside a virtual machine. Even if that dedicated kernel is compromised through an unknown security vulnerability, the virtual sandbox cannot be easily cracked. The Kata Containers project was open-sourced in 2017, and in April 2018 it became the first new open infrastructure project in seven years to come under the OpenStack Foundation umbrella. As a community project, it also involves many developers outside Alibaba Cloud and Ant Financial. Currently, the development roadmap for Kata Containers 2.0 is under discussion, and since the project is open source, you are welcome to contribute your code and requirements.
Technically, in the Kubernetes ecosystem, Kata Containers can integrate with CRI daemons such as containerd and CRI-O, just like runC. We recommend calling the containerd shim API v2, which was first introduced in the containerd community last summer and is also supported by CRI-O. Kata Containers is the first container engine to officially support this new interface. The interface requires only one additional Kata shim process per pod, regardless of the number of containers in the pod, which helps relieve pressure on the host scheduler. The shim manages the OCI containers in the pod by controlling agents inside the microVM through VSOCK. The VMMs supported by the community version of Kata Containers include QEMU and Firecracker, open-sourced by AWS: the former has richer features and better compatibility, while the latter is more compact. In the Alibaba spirit of "bringing you the best of everything", you do not have to give up either; with Kubernetes RuntimeClass, you can specify the VMM to be used for each pod. For more information, refer to our documentation on GitHub or join the discussion in our Slack channel. And do not forget to submit the issues that you encounter; filing issues is a huge support for the community and, alongside writing code, a genuine contribution to open source.
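As a sketch of what per-pod runtime selection looks like in practice, a RuntimeClass and a pod that selects it might be declared as follows. The handler name `kata-qemu` here is illustrative; it must match a runtime actually configured in containerd (via shim v2) or CRI-O on your nodes:

```yaml
# Illustrative RuntimeClass; the handler must match a runtime
# configured in containerd or CRI-O on the node.
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: kata-qemu        # hypothetical name for a QEMU-backed Kata runtime
handler: kata-qemu
---
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  runtimeClassName: kata-qemu   # run this pod's containers in a Kata microVM
  containers:
  - name: app
    image: nginx
```

A second RuntimeClass pointing at a Firecracker-backed handler could coexist in the same cluster, letting each pod choose its VMM through `runtimeClassName`.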
There are quite a few similar container solutions based on microVM technology, and you can refer to Thoughts on the Development of Secure Container Technology.
This article discusses the technical details of container optimization to support rapid server deployment during Alibaba's Double 11 Shopping Festival.
Usually, large scale promotions such as the Double 11 Shopping Festival require a peak-hour traffic estimation ahead of time. However, the pre-calculated resources and application capacity might not be sufficient to support the peaks in traffic, and last-minute scaling needs to be available. Container technology works well for this scenario, as it supports rapid, automatic, and elastic scaling as needed.
During this year's Double 11 Global Shopping Festival, Alibaba Cloud's container image repository stored 300,000 different images, totaling 10 million image copies, which were downloaded up to 800 million times.
How can we support so many customers with simultaneous business peaks requiring a large number of server resources?
As an agile, portable, and controllable lightweight virtualization technology, container technology has been popular among developers since its introduction. More importantly, it establishes a standardized delivery format: the container image.
As a package that encapsulates both the application code and its environment dependencies, the container image is an environment-independent deliverable that is applicable at any stage of the software lifecycle. Just as physical shipping containers revolutionized the logistics industry, the container image has revolutionized traditional software delivery models.
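As a minimal illustration of this encapsulation, a Dockerfile declares both the application and its environment dependencies in one reproducible recipe. The file names and base image below are hypothetical placeholders:

```dockerfile
# The base image pins the runtime environment
FROM python:3.8-slim
WORKDIR /app

# Dependencies are baked into the image, not taken from the host
COPY requirements.txt .
RUN pip install -r requirements.txt

# The application code ships inside the same deliverable
COPY app.py .
CMD ["python", "app.py"]
```

Building this once produces an image that runs identically in development, testing, and production, which is exactly what makes the container image an environment-independent deliverable.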
The entire container technology industry has experienced explosive worldwide growth over the past three years. According to statistics, 67% of enterprises adopt or plan to adopt Docker in their production process to help them implement agile development and improve their R&D delivery efficiency.
According to DockerCon 2017 statistics, there are about one million Docker apps, a 30-fold increase over the past three years, and more than 11 billion container image pulls, an almost exponential increase over the same period.
In this tutorial, we will build and deploy containerized images using Alibaba Cloud Container Registry service.
Let’s say you are a container microservices developer. You have a lot of container images, each with multiple versions, and all you are looking for is a fast, reliable, secure, and private container registry. You also want to instantly upload and retrieve images and deploy them as part of your continuous integration and continuous delivery (CI/CD) of services. Well, look no more! This article is for you.
This article introduces you to the Alibaba Cloud Container Registry service and its abundance of features. You can use it to build images in the cloud and deploy them in your Alibaba Cloud Docker cluster or on premises. After reading this article, you should be able to deploy your own Alibaba Cloud Container Registry.
Alibaba Cloud Container Registry (ACR) is a scalable server application that builds and stores container images and enables you to distribute Docker images. With ACR, you have full control over your stored images. ACR offers a number of features, including integration with GitHub, Bitbucket, and self-hosted GitLab, and it can automatically compile and test your source code and build new images from it.
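The typical push workflow looks roughly like this; the region endpoint, namespace, and repository names below are placeholders for your own:

```shell
# Log in to your ACR instance (the region endpoint varies)
docker login --username=<your-account> registry.cn-hangzhou.aliyuncs.com

# Tag a local image with the full registry path, then push it
docker tag myapp:1.0 registry.cn-hangzhou.aliyuncs.com/<namespace>/myapp:1.0
docker push registry.cn-hangzhou.aliyuncs.com/<namespace>/myapp:1.0
```

Pulling from a cluster node works the same way in reverse with `docker pull`, using the same registry path.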
Alibaba Cloud Simple Application Server provides one-click application deployment and supports all-in-one services such as management and O&M monitoring of domain names, websites, and applications.
Alibaba Cloud Elastic Compute Service (ECS) offers high-performance, elastic, and secure virtual cloud servers with various instance types at cost-effective prices for all your cloud hosting needs.
This course aims to help IT companies that want to containerize their business applications, as well as cloud computing engineers and enthusiasts who want to learn container technology and Kubernetes. By taking this course, you can fully understand what Kubernetes is, why we need it, its basic architecture, its core concepts and terms, and how to build a Kubernetes cluster on the Alibaba Cloud platform, providing a reference for evaluating, designing, and implementing application containerization.
This course is designed to help IT companies that want to containerize business applications, as well as cloud computing engineers and operations & maintenance engineers who want to learn how to diagnose problems in and monitor containerized applications. By taking this course, you can fully understand what problem diagnosis for containerized applications involves, the common problems of containerized applications, the basic workflow for diagnosing them, the monitoring schemes and common tools for containerized applications, and the visual monitoring scheme based on Alibaba Cloud Container Service.
Container Service provides each container in a cluster with an independent IP address that is reachable within the cluster. Containers communicate with each other through these independent IP addresses rather than being exposed on host ports through Network Address Translation (NAT). This removes the dependency on host IP addresses and avoids port conflicts between containers when configuring NAT. The following section describes how to implement cross-host container communication under different network models.
VPC helps you build an isolated network environment on Alibaba Cloud. You have full control over your own virtual network, including free choice of IP address ranges, Classless Inter-Domain Routing (CIDR) block division, and the configuration of route tables and gateways. By configuring the VPC route table, Container Service forwards inter-container access requests to the Elastic Compute Service (ECS) instances corresponding to each container IP address range, as follows.
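Conceptually, each node's container CIDR block gets one VPC route entry whose next hop is that ECS instance, so traffic destined for any container IP lands on the right host without NAT. The addresses below are purely illustrative:

```
Destination CIDR     Next hop
172.16.1.0/24        ECS instance A  (runs containers 172.16.1.x)
172.16.2.0/24        ECS instance B  (runs containers 172.16.2.x)
```

When a container on instance A sends a packet to 172.16.2.5, the VPC route table delivers it to instance B, which then forwards it to the target container on its local bridge.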
Alibaba Cloud Elastic Container Instance (ECI) provides a basic environment for running pods of Kubernetes. However, you still need to use Kubernetes to configure features such as business dependencies, load balancing, auto scaling, and scheduling. This topic describes how to connect ECI to Kubernetes clusters and use ECI to run pods.
ECI provides a hierarchical solution for Kubernetes. In this solution, ECI schedules and manages underlying pods. On the Platform as a Service (PaaS) layer, Kubernetes manages workloads, such as deployments, services, StatefulSets, and cron jobs.
ECI can manage the underlying infrastructure of pods and provide resources for pods as needed. In this case, Kubernetes does not need to deploy or start pods or pay attention to the resources of the underlying virtual machines (VMs).
You can run all pods of a Kubernetes cluster in ECI. Because ECI is maintenance-free, you are spared the underlying maintenance work, and Kubernetes only needs to manage workloads to ensure that your business applications run stably.
Alibaba Cloud Serverless Kubernetes allows you to create a serverless Kubernetes cluster built on ECI. For more information, see Serverless Kubernetes and Use ECI in Serverless Kubernetes.
We recommend that you create a serverless Kubernetes cluster with Alibaba Cloud Serverless Kubernetes. It provides a maintenance-free and cost-effective Kubernetes environment for running online or offline business applications and for simulation, development, and testing.
This course is designed to help IT companies that want to containerize their business applications, as well as cloud computing engineers and enthusiasts who want to learn container technologies. By taking this course, you can fully understand what application containerization is, the layering theory of Docker images, the principles and common instructions of Dockerfiles, common tools for application containerization, and how to deploy a containerized application based on Alibaba Cloud Container Service, providing a reference for evaluating, designing, and implementing containerized applications.