Implementation of Kubernetes Ingress Gateway

Introduction to Kubernetes Ingress

Generally, the network environment inside a Kubernetes cluster is isolated from the outside world: clients outside the cluster cannot directly reach services inside it. This is fundamentally a problem of connecting different network domains. The conventional solution for cross-domain access is to introduce an entry point for the target cluster: all external requests must pass through this entry point, which then forwards them to the target nodes.

Similarly, the Kubernetes community solves the problem of exposing in-cluster services by adding entry points. Consistent with its usual approach of solving whole classes of problems through standardization, Kubernetes defines a unified abstraction for cluster entry points and offers three options: NodePort, LoadBalancer, and Ingress. The following figure compares the three options:

The comparison shows that Ingress is the best fit for business use: it supports more complex, second-level routing and is currently the mainstream choice among users.

The Current State of Kubernetes Ingress

Although Kubernetes standardizes and abstracts cluster entry traffic management, the standard only covers basic HTTP/HTTPS forwarding and cannot meet the large-scale, complex traffic-governance needs of cloud native distributed applications. For example, the standard Ingress does not support common traffic policies such as traffic splitting, cross-origin access (CORS), rewrites, and redirects. There are two mainstream solutions to this: one is to extend Ingress by defining key-value pairs in its Annotations; the other is to define new ingress traffic rules with Kubernetes CRDs. As shown in the following figure:
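As a concrete illustration of the annotation approach, here is a hedged sketch of an Ingress that adds two policies the standard spec lacks, using Nginx Ingress annotations; the host and service names are placeholders:

```yaml
# Sketch: provider-specific annotations (Nginx Ingress here) extend the
# standard Ingress with CORS and path rewriting, which the spec itself
# does not cover.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    nginx.ingress.kubernetes.io/enable-cors: "true"   # cross-origin support
    nginx.ingress.kubernetes.io/rewrite-target: /     # path rewriting
spec:
  ingressClassName: nginx
  rules:
    - host: demo.example.com            # placeholder host
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: demo-service      # placeholder backend Service
                port:
                  number: 80
```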

Kubernetes Ingress Best Practices

This section will explore Kubernetes Ingress best practices in the following five areas.

• Traffic isolation: Deploy multiple Ingress Providers to reduce the blast radius

• Grayscale publishing: How to use Ingress Annotations for grayscale (canary) releases

• Business domain splitting: How to design APIs based on business domains

• Zero trust: What is zero trust, why it is needed, and how to do it

• Performance tuning: Some practical performance tuning methods

Traffic isolation

In real business scenarios, the back-end services in a cluster may serve both external users and other internal clusters. We generally call traffic entering from outside north-south traffic, and traffic between internal services east-west traffic. To save machine cost and operational effort, some users share a single Ingress Provider for both. This creates new problems: traffic management cannot be fine-tuned separately for external and internal traffic, and the impact of a failure is amplified. The best practice is to deploy independent Ingress Providers for the external and internal network scenarios, sizing replica counts and hardware resources to the actual request volume, which minimizes the blast radius while improving resource utilization.
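A minimal sketch of this separation, assuming ingress-nginx as the provider and placeholder class names; each controller deployment is started with a matching --controller-class / --ingress-class flag so it only watches its own class:

```yaml
# Two independent IngressClasses, one per Ingress Provider deployment.
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx-public          # serves north-south (external) traffic
spec:
  controller: k8s.io/ingress-nginx-public
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx-internal        # serves east-west (internal) traffic
spec:
  controller: k8s.io/ingress-nginx-internal
```

Each Ingress resource then selects its provider through spec.ingressClassName, so internal-only routes never pass through the public entry point.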

Grayscale publishing

During continuous, iterative business development, application services face frequent version upgrades. The most primitive and simplest approach is to stop the old version of the online service, then deploy and start the new one. Handing the new version directly to all users in this way causes two serious problems. First, between stopping the old version and starting the new one, the application is unavailable and the request success rate drops to zero. Second, if the new version has serious bugs, rolling back to the old version causes another brief outage, which not only hurts user experience but also injects instability into the overall business system.

So, how can we not only meet the demands of rapid business iteration, but also ensure high external availability of business applications during the upgrade process?

I believe that the following core issues need to be addressed:

1. How to reduce the impact of upgrades?

2. How to quickly roll back to a stable version when a bug appears in the new version?

3. How to work around the standard Ingress's lack of traffic-splitting support?

For the first two issues, the industry's common answer is grayscale publishing, also known as canary releasing. The idea is to direct a small share of requests to the new version, so deploying it initially requires only a very small number of machines. Once the new version is verified to meet expectations, traffic is gradually shifted from the old version to the new one. During this period the new version can be scaled out according to the traffic split and the old version scaled in, maximizing the utilization of the underlying resources.

In the section on the current state of Ingress, we mentioned two popular schemes for extending it. The third problem can be solved with annotations: in the Annotation we can define the policy configuration that grayscale publishing needs, such as the headers and cookies that identify gray traffic and how their values are matched (exact or regular-expression matching). The Ingress Provider then recognizes these newly defined annotations and parses them into its own routing rules. The key point is that the Ingress Provider the user chooses must support a variety of routing strategies.

Grayscale publishing - by header

When verifying with small traffic whether the new version meets expectations, we can deliberately treat online traffic with certain characteristics as the small-traffic slice. Both headers and cookies in a request can serve as such characteristics, so for the same API we can split online traffic by header or by cookie. If real traffic shows no useful header differences, we can manufacture traffic with a grayscale header against the online environment for verification. We can also validate the new version in batches by client importance: for example, route ordinary users' requests to the new version first, and only after that validation succeeds gradually bring in VIP users. Such user and client information is usually carried in cookies.

Taking Nginx Ingress as an example, traffic splitting is configured through annotations. The schematic for header-based grayscale is as follows:
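A hedged sketch of that configuration, assuming a stable Ingress for httpbin-v1 already exists for the same host; the host and header names are placeholders. This second Ingress only catches requests carrying the grayscale header:

```yaml
# Header-based canary with Nginx Ingress annotations.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: httpbin-canary-header
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "x-canary"   # placeholder header name
    nginx.ingress.kubernetes.io/canary-by-header-value: "v2"   # exact-match value
spec:
  ingressClassName: nginx
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: httpbin-v2        # new-version Service
                port:
                  number: 80
```

Nginx Ingress also provides canary-by-header-pattern for regular-expression matching on the header value, and canary-by-cookie for cookie-based splitting.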

Grayscale publishing - by weight

Header-based grayscale can expose the new version to specific requests or users, but it cannot accurately estimate how much traffic will reach the new version, so it may be hard to right-size the machines allocated to it. Weight-based grayscale controls the traffic ratio precisely, which makes capacity allocation straightforward: after the early small-traffic verification passes, the upgrade is completed by gradually increasing the traffic weight. This method is simple to operate and easy to manage, but online traffic is directed to the new version indiscriminately, which may affect the experience of important users. The schematic for weight-based grayscale is as follows:
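A corresponding sketch with the same placeholder host and Services as before, sending roughly 10% of all traffic to the new version:

```yaml
# Weight-based canary with Nginx Ingress annotations.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: httpbin-canary-weight
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"   # ~10% of traffic to v2
spec:
  ingressClassName: nginx
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: httpbin-v2
                port:
                  number: 80
```

The upgrade then proceeds by raising the weight in steps (for example 10, 30, 50, 100) and scaling the two versions accordingly before the old version is retired.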

Business domain splitting

As cloud native applications keep growing, developers have begun splitting the original monolithic architecture at a finer granularity, turning the service modules of a monolith into independently deployed and operated microservices, each owned end to end by the corresponding business team. This effectively solves the monolith's lack of agility and flexibility. But no architecture is a silver bullet: solving old problems inevitably introduces new ones. A monolithic application could expose its services externally through a single Layer 4 SLB, whereas distributed applications rely on Ingress for Layer 7 traffic distribution, so designing good routing rules becomes particularly important.

Generally we split services by business or functional domain, and we can follow the same principle when exposing services through Ingress: when designing the external APIs of microservices, add a representative business prefix to the original path. After routing has matched, and before the request is forwarded to the back-end service, the Ingress Provider strips the business prefix by rewriting the path. The workflow diagram is as follows:

This API design principle makes the exposed set of services easier to manage, enables finer-grained authentication based on the service prefix, and simplifies building unified observability across business domains.
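A hedged sketch of the prefix-stripping step using Nginx Ingress's capture-group rewrite syntax; the host and service names are placeholders:

```yaml
# The business prefix "/user" selects the user microservice, and the
# rewrite-target annotation strips it before forwarding: the second
# capture group (.*) becomes the path sent to the backend.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: user-domain
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.com             # placeholder host
      http:
        paths:
          - path: /user(/|$)(.*)        # business prefix for the user domain
            pathType: ImplementationSpecific
            backend:
              service:
                name: user-service      # placeholder Service
                port:
                  number: 80
```

With this rule, a request to api.example.com/user/profile reaches user-service as /profile.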

Zero Trust

Security has always been the number one enemy of business applications, accompanying the entire business lifecycle. With the external Internet environment growing more complex, internal business architectures growing larger, and deployments spanning public, private, and hybrid clouds, security problems keep getting worse. Zero trust, a newer design model in the security field, starts from the belief that no user or service inside or outside the application network can be trusted: every identity must be authenticated before a request is initiated or processed, and every authorization follows the principle of least privilege. In short: trust no one, verify everything.

The following figure shows an end-to-end zero trust architecture across external users -> Ingress Provider -> back-end services:

• Between external users and the Ingress Provider: external users authenticate the Ingress Provider by verifying, against an authoritative certificate authority, the certificate it presents; the Ingress Provider authenticates and authorizes external users through the JWT credentials they present.

• Between the Ingress Provider and back-end services: the Ingress Provider authenticates a back-end service by verifying, against the internal private certificate authority, the certificate the service presents; the back-end service authenticates the Ingress Provider the same way, verifying its certificate against the internal private certificate authority. In addition, the back-end service can perform authorization checks against an authorization service based on the caller's identity (see the sketch after this list).
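One way to approximate both TLS legs with a concrete configuration, sketched with Nginx Ingress annotations (the JWT leg is omitted; Secret names, host, and service are placeholders):

```yaml
# Client leg: the gateway verifies client certificates against a CA in a
# Secret. Backend leg: the gateway speaks HTTPS to the service, presents
# its own certificate, and verifies the service's certificate.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: zero-trust-demo
  annotations:
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-secret: "default/client-ca"      # CA for client certs
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/proxy-ssl-secret: "default/gateway-cert"  # cert shown to backends
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "on"
spec:
  tls:
    - hosts:
        - secure.example.com
      secretName: server-cert          # server certificate for external users
  rules:
    - host: secure.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: secure-backend   # placeholder Service
                port:
                  number: 443
```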

Performance tuning

All external traffic passes through the Ingress Provider first, so it is the main performance bottleneck and faces high demands on concurrency and throughput. Setting aside the performance differences between Ingress Providers, we can unlock additional performance by tuning kernel parameters. Based on Alibaba's years of practice at the cluster access layer, the following kernel parameters are worth adjusting (a sketch of applying them in Kubernetes follows the list):

1. Increase the TCP connection backlog: net.core.somaxconn

2. Widen the range of available local ports: net.ipv4.ip_local_port_range

3. Reuse TCP connections in TIME_WAIT state: net.ipv4.tcp_tw_reuse
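A hedged sketch of applying these parameters to the Ingress Provider pod via a privileged init container, a common pattern for tuning ingress-nginx; the values and image tag are illustrative, not recommendations:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ingress-tuned
spec:
  initContainers:
    - name: sysctl-tuning
      image: busybox
      securityContext:
        privileged: true        # required to write kernel parameters
      command:
        - sh
        - -c
        - |
          sysctl -w net.core.somaxconn=32768
          sysctl -w net.ipv4.ip_local_port_range="1024 65000"
          sysctl -w net.ipv4.tcp_tw_reuse=1
  containers:
    - name: ingress
      image: registry.k8s.io/ingress-nginx/controller:v1.9.4   # placeholder image/tag
```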

Another optimization angle is hardware: fully unleashing the compute power of the underlying hardware to further improve application-layer performance. HTTPS is now the dominant way to serve public network requests, and with full HTTPS the TLS handshake inevitably costs noticeably more than plain HTTP. With the large gains in CPU capability, the CPU's SIMD instructions can effectively accelerate TLS. This optimization depends both on hardware support and on the Ingress Provider's internal implementation.

Currently, the MSE cloud native gateway, built on the Istio/Envoy architecture and combined with Alibaba Cloud's seventh-generation ECS instances, is the first to implement TLS hardware acceleration, significantly improving HTTPS performance at no extra resource cost to users.

New Choice for Ingress Provider - MSE Cloud Native Gateway

As cloud native technology evolves and the microservice transformation of cloud native applications deepens, Nginx Ingress struggles with complex routing rule configuration, support for multiple application-layer protocols (Dubbo, QUIC, and so on), the security of service access, and the observability of traffic. In addition, Nginx Ingress applies configuration updates through reloads; under large numbers of long-lived connections, a reload can briefly drop connections, and frequent configuration changes can lose business traffic.

To address users' strong demand for large-scale traffic governance, the MSE Cloud Native Gateway emerged. It is Alibaba Cloud's next-generation, Ingress-compatible gateway, with low cost, security, high integration, and high availability as its product strengths. It merges the traditional WAF gateway, traffic gateway, and microservice gateway while cutting resource costs by 50%, and gives users fine-grained traffic governance capabilities. It supports multiple service discovery methods, such as ACK container services, Nacos, Eureka, fixed addresses, and FaaS, and multiple authentication and login methods for quickly building a security perimeter. It provides a comprehensive, multi-perspective monitoring system covering metrics, log analysis, and distributed tracing. It can also parse standard Ingress resources in both single- and multi-Kubernetes-cluster modes, helping users govern traffic declaratively in cloud native application scenarios. Finally, a WASM plug-in marketplace covers user customization needs.

Nginx Ingress vs. MSE Cloud Native Gateway

The following is a summary comparison of Nginx Ingress and the MSE cloud native gateway:

Smooth migration

The MSE cloud native gateway is hosted by Alibaba Cloud: it is maintenance-free, reduces cost, is feature-rich, and integrates deeply with surrounding Alibaba Cloud products. The following figure shows how to migrate smoothly from Nginx Ingress to the MSE cloud native gateway; other Ingress Providers can follow the same approach.

Hands-on practice

Next, we will walk through hands-on use of the MSE Cloud Native Gateway as an Ingress Provider on Alibaba Cloud Container Service for Kubernetes (ACK), to learn how to manage cluster entry traffic with the MSE Ingress Controller.

Operation document address:

https://help.aliyun.com/document_detail/426544.html

Prerequisites

Installing the MSE Ingress Controller

We can find ack-mse-ingress-controller in the application marketplace of Alibaba Cloud Container Service and complete the installation by following the operation documentation under the component.

Creating an MSE cloud native gateway through CRD

MseIngressConfig is a CRD provided by the MSE Ingress Controller; it manages the lifecycle of MSE cloud native gateway instances. One MseIngressConfig corresponds to one gateway instance; to use multiple instances, create multiple MseIngressConfig resources. For a simple demonstration, we create the gateway with the minimal configuration.
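A hedged sketch of a minimal MseIngressConfig, based on the linked MSE documentation; the field names and values here are assumptions and should be checked against the official docs:

```yaml
apiVersion: mse.alibabacloud.com/v1alpha1   # assumed API group/version
kind: MseIngressConfig
metadata:
  name: test
spec:
  name: mse-ingress            # name of the gateway instance to create
  common:
    network:
      vSwitches:
        - "vsw-xxxxxxxxx"      # placeholder vSwitch ID
```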

Next, configure a standard Kubernetes IngressClass to associate with the MseIngressConfig. Once the association is complete, the cloud native gateway starts watching Ingress resources of that IngressClass in the cluster.
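A hedged sketch of that association via spec.parameters; the controller string follows the MSE documentation and should be verified there:

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: mse
spec:
  controller: mse.alibabacloud.com/ingress   # assumed controller name
  parameters:
    apiGroup: mse.alibabacloud.com
    kind: MseIngressConfig
    name: test                               # the MseIngressConfig above
```

Application Ingress resources then set spec.ingressClassName: mse to be served by this gateway.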

We can inspect the current state in MseIngressConfig's status. MseIngressConfig moves through the states Pending -> Running -> Listening, described as follows:

• Pending: the cloud native gateway is being created; this takes about three minutes.

• Running: the cloud native gateway was created successfully and is running.

• Listening: the cloud native gateway is running and listening for Ingress resources in the cluster.

• Failed: the cloud native gateway is in an invalid state; check the Message in the Status field for the reason.

Grayscale Publishing Practice

Assume the cluster has a back-end service, httpbin, and we want to perform header-based grayscale verification during a version upgrade, as shown in the figure:

First deploy the httpbin v1 and v2 versions by applying the corresponding resources to the ACK cluster.
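A condensed sketch of the routing part (the full manifests are in the linked document). It assumes Services httpbin-v1 and httpbin-v2 already exist, that the IngressClass is the "mse" class configured above, and that the gateway honors Nginx-style canary annotations; host and header names are placeholders:

```yaml
# Stable route: all normal traffic goes to v1.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: httpbin
spec:
  ingressClassName: mse
  rules:
    - host: httpbin.example.com          # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: httpbin-v1
                port:
                  number: 80
---
# Gray route: only requests with the grayscale header reach v2.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: httpbin-gray
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "x-user-tag"   # placeholder header
    nginx.ingress.kubernetes.io/canary-by-header-value: "gray"
spec:
  ingressClassName: mse
  rules:
    - host: httpbin.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: httpbin-v2
                port:
                  number: 80
```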

This is how Ingress Annotations extend the standard Ingress with advanced traffic-governance capabilities such as grayscale publishing.
