Knative came into being
The problems described above are exactly what Knative sets out to solve, and they form the background against which Knative emerged. Next, let's take a look at Knative.
What is Knative
Let's first look at the official definition: "a Kubernetes-based platform to build, deploy, and manage modern serverless workloads". Knative is an application serverless orchestration system built on Kubernetes. In fact, Knative covers more than workloads: it also includes a Kubernetes-native process orchestration engine and a complete event system.
As mentioned earlier, Kubernetes standardizes computing, storage, and networking, while Knative aims to standardize application serverless workload orchestration based on Kubernetes.
Knative core module
Knative consists of three core modules: Tekton, Eventing, and Serving.
Tekton is a Kubernetes-native process orchestration framework, mainly used to build CI/CD systems;
Eventing is responsible for event handling: it can ingest events from external systems, and after an event is ingested it can be processed through a series of steps and trigger Serving to consume the event;
Serving is the core module for managing application workloads, responsible mainly for traffic scheduling, elasticity, and grayscale release.
Tekton is a Kubernetes-native process orchestration framework, mainly used to build CI/CD systems. For example, compiling source code into an image, testing the services in the image, and publishing the image as an application can all be done with Tekton.
The basic execution unit in Tekton is the Task.
A Task can contain multiple Steps that execute sequentially. When run, a Task ultimately becomes a Pod, and each Step in it runs as a Container. Each TaskRun CRD submitted to Kubernetes triggers one execution of the corresponding Task;
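The Task and TaskRun relationship above can be sketched in YAML. This is a minimal illustrative example; the Task name, images, and scripts are hypothetical:

```yaml
# A minimal Tekton Task with two sequential Steps; at execution time the
# Task becomes a Pod and each Step runs as a Container inside it.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-and-test        # illustrative name
spec:
  steps:
    - name: build
      image: golang:1.20      # illustrative image
      script: |
        echo "building..."
    - name: test
      image: golang:1.20
      script: |
        echo "testing..."
---
# Submitting a TaskRun triggers one execution of the Task.
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: build-and-test-run
spec:
  taskRef:
    name: build-and-test
```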
A Pipeline orchestrates multiple Tasks, and dependencies can be set between the Tasks in a Pipeline. The Pipeline builds a directed acyclic graph from these dependencies and then executes the Tasks concurrently or serially according to that graph. Each submission of a PipelineRun CRD triggers one execution of the Pipeline;
A PipelineResource represents the resources used or produced by a Pipeline, such as a GitHub code repository or the images it depends on or produces;
Tekton is a Kubernetes-native implementation, so credentials for resource authentication can be managed through Kubernetes Secrets.
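The Pipeline dependency mechanism described above can be sketched as follows; `runAfter` is what forms the directed acyclic graph. The task names here are hypothetical:

```yaml
# A Pipeline that arranges three Tasks into a DAG: test runs after build,
# deploy runs after test. Tasks without dependencies could run concurrently.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-test-deploy     # illustrative name
spec:
  tasks:
    - name: build
      taskRef:
        name: build-image     # assumed to exist as a Task
    - name: test
      taskRef:
        name: run-tests
      runAfter:
        - build               # DAG edge: test depends on build
    - name: deploy
      taskRef:
        name: deploy-service
      runAfter:
        - test
---
# Submitting a PipelineRun triggers one execution of the Pipeline.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: build-test-deploy-run
spec:
  pipelineRef:
    name: build-test-deploy
```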
The Eventing module implements a set of event processing mechanisms based on the CloudEvent standard. Its core capabilities fall into four parts.
External event access
Eventing has a strong extension mechanism and can ingest events from any external event source, such as commit and pull request events in GitHub, events in Kubernetes, messages from message systems, and events from systems such as OSS, Tablestore, and Redis.
The CloudEvent standard
After Eventing ingests an external event, it converts the event into the CloudEvent format, and events then flow internally according to the CloudEvent standard.
Internal handling of events
The Broker and Trigger model introduced by the Eventing module not only shields users from the complexity of event handling, but also provides a rich event subscription and filtering mechanism.
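The Broker/Trigger model can be sketched as follows: a Broker receives events, and a Trigger subscribes a service to a filtered subset of them. The Trigger name, the CloudEvent type in the filter, and the subscriber Service name are all illustrative:

```yaml
# A Broker collects events ingested from external sources.
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
---
# A Trigger subscribes a Knative Service to events from the Broker,
# filtered by CloudEvent attributes.
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: github-push-trigger   # illustrative name
spec:
  broker: default
  filter:
    attributes:
      type: dev.knative.source.github.push   # example CloudEvent type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-handler     # assumed to exist
```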
Event processing transaction management
Eventing is built on a reliable messaging system, which enables transactional management of events. If event consumption fails, operations such as retry or redelivery can be performed.
The core CRD of the Serving module is the Knative Service. Based on the Service configuration, the Knative controller automatically operates Kubernetes Ingress, Service, and Deployment resources, thereby simplifying application management.
A Knative Service corresponds to a resource called a Configuration. Whenever the Service changes in a way that requires a new workload, the Configuration is updated, and every update to the Configuration creates a unique Revision. A Revision can be thought of as the version management mechanism of a Configuration: in principle, a Revision is never modified after it is created, and different Revisions are typically used for grayscale releases.
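The Service-to-Revision relationship can be sketched with a minimal Knative Service; the name and image here are illustrative. Applying this manifest causes the controller to create a Configuration, which in turn stamps out the named Revision:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld            # illustrative name
spec:
  template:
    metadata:
      name: helloworld-v1     # each change to the template yields a new Revision
    spec:
      containers:
        - image: example.com/helloworld:v1   # illustrative image
          env:
            - name: TARGET
              value: "Knative"
```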
Route is the core of Knative's traffic management. Knative is built on top of Istio: the Knative Route controller automatically generates Istio VirtualService resources from the Route configuration, thereby implementing traffic management.
Serverless orchestration of application workloads by Knative Serving starts with traffic.
Traffic first reaches Knative's Gateway, which automatically splits it across different Revisions by percentage according to the Route configuration. Each Revision then has its own independent elasticity policy: when incoming requests increase, the current Revision automatically scales out. The scaling policies of different Revisions are independent and do not affect one another.
Different Revisions are grayscaled by traffic percentage, and each Revision has its own independent elasticity policy. Through traffic-based control, Knative Serving achieves a tight combination of traffic management, elasticity, and grayscale release.
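The percentage-based grayscale described above can be sketched with the `traffic` section of a Knative Service. The revision names and image are illustrative; here 90% of traffic stays on the old Revision while 10% canaries onto the new one:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld            # illustrative name
spec:
  template:
    metadata:
      name: helloworld-v2     # new Revision created by this update
    spec:
      containers:
        - image: example.com/helloworld:v2   # illustrative image
  traffic:
    # The Gateway splits traffic across Revisions by these percentages;
    # each Revision scales independently under its own elasticity policy.
    - revisionName: helloworld-v1
      percent: 90
    - revisionName: helloworld-v2
      percent: 10
```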
Cloud-native features of Knative
Kubernetes is widely recognized in the industry as the cloud-native operating system. As a cloud-native serverless orchestration engine, Knative is of course compatible with the Kubernetes API.
Knative itself is open source, and you can deploy Knative on any Kubernetes cluster. Likewise, services running in one Knative cluster can be migrated seamlessly to another. If your service is built on Knative, it can flow between cloud vendors like water: any cloud vendor's Kubernetes can easily run your service once Knative is installed. The following support list shows that Knative is already backed by a large number of vendors and platforms:
Google Cloud Run is based on Knative
IBM public cloud already supports Knative
Alibaba Cloud already supports Knative
Pivotal's riff is a FaaS system built on top of Knative
Red Hat's OpenShift supports Knative
Rancher's Rio is based on Knative
The Kubeflow community's KFServing is built on Knative, using it as the serving framework for AI workloads
This is where the power of cloud native lies: open standards that are widely supported. As a cloud customer, such an open standard means you always retain bargaining power with service providers; you can go wherever the service is best, instead of being locked in by a single vendor. For cloud vendors, open standards bring access to more customers, while the concrete implementation beneath the standard is where each vendor's own strengths, and thus its core competitiveness, come into play.
Typical application scenarios of Knative
Having introduced all of this, let's look at the scenarios where Knative is a good fit.
Application Serverless Orchestration Scenarios
Knative Serving performs serverless orchestration of applications starting from traffic.
First, Knative takes over a service's traffic via the Istio Gateway and can split the traffic by percentage. The split traffic can be used directly for grayscale releases: for example, a given percentage of traffic can be routed to a specific Revision, and the percentage received by each Revision can be accurately controlled, making it possible to precisely control the scope of a grayscale release for online applications.
Knative's elasticity policy acts on each Revision, and different Revisions scale independently at their own pace, combining traffic, grayscale, and elasticity. Any application that needs elastic hosting can be handled by Knative, and the following scenarios are ideal candidates:
such as hosting microservice applications
such as hosting a web service
such as hosting a gRPC service
Knative Eventing provides a complete event model that makes it easy to ingest events from various external systems. After ingestion, events flow internally according to the CloudEvent standard, and the Broker/Trigger mechanism provides an excellent way to process them.
This complete event system makes it easy to implement event-driven services, for example:
Connecting to the events of various cloud products, so that a status change in a cloud product automatically triggers a service, and so on
Building CI/CD systems based on Tekton, for example:
When code is pushed to GitHub, the image build and service release process is triggered automatically
When a new image is pushed to the Docker image registry, the image is automatically tested and published as a service, and so on
Deploying services based on Knative Serving means you do not need to operate Kubernetes resources by hand, which greatly lowers the barrier to using Kubernetes. So if you are not maintaining a Kubernetes system yourself, or doing complex development on top of Kubernetes, Knative is a very convenient way to manage your services.
What are the typical customer cases of Alibaba Cloud Knative?
Web service hosting
Web service hosting is the MicroPaaS type of scenario introduced earlier: customers use Knative to reduce the complexity of using Kubernetes. Even without Knative's elasticity, it still improves the efficiency of application hosting.
Application serverless orchestration
Microservice hosting scenarios
Web application hosting and elasticity
Small program, public account background
E-commerce service background
AI service hosting
Elastic scaling based on task queues
Using ECI for elasticity, effectively reducing the cost of long-held reserved resources
SaaS service hosting
Automatically building images for SaaS users after they submit code
Automatically publishing services for SaaS users after they push an image themselves
CMS system SaaS providers can easily deploy a new set of services for a user through a Helm Chart
SpringCloud Microservice Hosting
The address of a Knative Service is registered with the registry, and traffic splitting, grayscale release, and elasticity for microservices are implemented through Knative's capabilities. In this way, Knative brings serverless capabilities to ordinary microservice applications.
Building a CI/CD system
A CI/CD system based on a Git code repository for automatic builds and service releases
Automatically running tests or releasing a service when a new image appears in the Docker image registry
Automatically triggering a machine learning task to analyze and recognize an image when a new file is added to OSS
Automatically triggering a task to process a video, such as video content recognition, when a new video file is added to OSS
Feed stream system design
Sending notifications for social information, etc.
Knowledge Base Team