
ACK and MSE Ingress Make Cluster Ingress Traffic Management Easier

This article describes how to use Container Service for Kubernetes (ACK) and MSE Ingress to manage ingress traffic in clusters.

By Yangshao

With the increasing popularity of cloud-native technologies, more business applications are shifting to a cloud-native architecture. With the immutable infrastructure, elastic scaling, and high extensibility of Kubernetes as a container management platform, businesses can complete digital transformation quickly. In the evolution of cloud-native technologies, ingress traffic management has become universal and standardized: the Ingress resource defined by Kubernetes is used to manage external access to services inside a cluster. This standardization decouples ingress traffic management from the gateway implementation, which promotes the development of various Ingress controllers and frees developers from vendor lock-in, so you can later switch to a different Ingress controller based on your business scenarios.

Alibaba Cloud Container Service for Kubernetes (ACK) allows you to manage containerized applications that run on the cloud conveniently and efficiently. MSE Ingress, built on the MSE cloud-native gateway, provides a more powerful way to manage traffic. It is compatible with more than 50 NGINX Ingress annotations and covers more than 90% of NGINX Ingress use scenarios. MSE Ingress supports canary releases across multiple service versions, provides flexible service governance capabilities and comprehensive security protection, and meets the traffic governance requirements of large-scale cloud-native distributed applications. This article describes how to use ACK and MSE Ingress to manage cluster ingress traffic in a richer and easier way.

How to Achieve Better Canary Releases

Business development requires continuous iteration of application systems. Frequent application changes and releases cannot be avoided, but we can improve stability and availability during upgrades. A common approach is the canary release: a small amount of traffic is routed to the new version of a service, so only a few machines are needed to deploy it. After you verify that the new version meets your expectations, you gradually shift traffic from the old version to the new one. During this period, you can scale out the new version and scale in the old version based on the current traffic distribution to maximize the utilization of underlying resources.

You can use MSE Ingress to let multiple canary versions of a service coexist, so that several features can be developed at the same time and verified independently. MSE Ingress supports multiple ways to identify canary traffic: you can define canary matching policies at the route level based on HTTP headers, cookies, and weights.
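As a minimal sketch of a header- and weight-based canary route, the example below uses the NGINX Ingress-style canary annotations that MSE Ingress declares compatibility with. The host name, Service names, and the `mse` IngressClass name are placeholders for your own environment.

```yaml
# Stable route: by default all traffic goes to the official version.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-stable
spec:
  ingressClassName: mse            # assumes your MSE IngressClass is named "mse"
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-svc     # placeholder stable Service
                port:
                  number: 80
---
# Canary route: requests carrying the header "x-canary: v2", plus 10% of the
# remaining traffic by weight, are routed to the canary Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "x-canary"
    nginx.ingress.kubernetes.io/canary-by-header-value: "v2"
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  ingressClassName: mse
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-svc-v2  # placeholder canary Service
                port:
                  number: 80
```

Once the new version is verified, you can raise the canary weight step by step until all traffic reaches the new version, and then remove the canary Ingress.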

MSE Ingress provides a fallback feature for the canary version of a service by default: if the canary version does not exist or is unavailable, traffic is automatically routed to the official version, which keeps business applications highly available. You can also use the default-backend annotation supported by MSE Ingress to control where this disaster-recovery traffic is routed.
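For instance, the NGINX Ingress-compatible default-backend annotation can point fallback traffic at an explicit Service when a route's backend has no available endpoints. The names below are placeholders, and this is a sketch rather than the definitive MSE configuration.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-with-fallback
  annotations:
    # If demo-svc has no healthy endpoints, route requests to this Service
    # instead (placeholder name for a dedicated disaster-recovery backend).
    nginx.ingress.kubernetes.io/default-backend: demo-svc-fallback
spec:
  ingressClassName: mse
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-svc
                port:
                  number: 80
```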

How to Build an End-to-End Canary Release

For microservice applications in a distributed architecture, the dependencies between services are complex: a business function is usually backed by multiple microservices, and a single request must pass through several of them before processing is complete.

In this scenario, releasing a new feature may require releasing several services at the same time, and those services must be verified together in canary mode. To do this, environment isolation must be established from the gateway through the entire backend so that the canary versions of the affected services can be verified in stages. This is the full-link (end-to-end) canary scenario unique to the microservices model.

Currently, end-to-end canary solutions fall into two types: physical environment isolation and logical environment isolation. Physical isolation achieves real traffic isolation by adding machines, which incurs labor and machine costs, so the more common approach in the industry is flexible logical environment management. Although the official version and the canary version of a service are deployed in the same environment, the canary route matching policy precisely steers canary traffic to the corresponding canary version of each service; only when the target service has no canary version does traffic fall back to the official version. From an overall perspective, the canary verification traffic for a new feature only flows through the canary versions of the services that are part of the release, while services not changed by the feature handle that traffic with their official versions as usual. This precise traffic control greatly alleviates the pain of developing and verifying multiple versions concurrently in the microservices model and reduces the machine cost of building test environments.

Container Service for Kubernetes (ACK) users can use MSE microservice governance and MSE Ingress together to implement full-link canary releases easily and quickly, without changing a single line of code. This fine-grained traffic control makes microservices governance easier.
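As a rough, hedged sketch of the moving pieces (not the exact configuration from the document linked below): the canary workload is deployed next to the official version and carries a version tag label, and the gateway routes requests that carry a canary header into that lane, after which MSE microservice governance keeps the request on same-tagged instances for the rest of the call chain. The `alicloud.service.tag` label key, header name, Service names, and image are assumptions; the linked document has the authoritative setup.

```yaml
# Canary (gray) deployment of one microservice, tagged so that governance can
# keep gray-tagged traffic on gray-tagged instances end to end.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-svc-gray                 # hypothetical canary workload name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: order-svc
      alicloud.service.tag: gray       # assumed tag label key; see MSE docs
  template:
    metadata:
      labels:
        app: order-svc
        alicloud.service.tag: gray
    spec:
      containers:
        - name: order-svc
          image: registry.example.com/order-svc:canary   # placeholder image
---
# At the gateway, requests carrying "x-gray-tag: gray" enter the canary lane.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: order-gray
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "x-gray-tag"
    nginx.ingress.kubernetes.io/canary-by-header-value: "gray"
spec:
  ingressClassName: mse
  rules:
    - host: shop.example.com
      http:
        paths:
          - path: /order
            pathType: Prefix
            backend:
              service:
                name: order-svc
                port:
                  number: 80
```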

Please see this document for a specific example: Configure full-link canary release based on MSE Ingress

How to Build Comprehensive Security Protection

Security risk is always the number one concern for business applications and accompanies the entire business lifecycle. In addition, the external Internet environment is becoming more complex, internal business architectures are growing larger, and deployments now span public cloud, private cloud, and hybrid cloud, so security risks are intensifying.

As an ingress gateway, MSE Ingress has explored and strengthened security in this area from the very beginning, building protection in all of the following aspects:

  • Encrypted Communication and TLS Hardware Acceleration: Business communication data is generally private and sensitive. TLS, the most basic and widely used anti-eavesdropping and anti-tampering protocol, is commonly combined with HTTP as HTTPS. MSE Ingress supports HTTPS both from the client to the gateway and from the gateway to the backend service. In addition, MSE Ingress supports TLS hardware acceleration based on seventh-generation Alibaba Cloud ECS instances, which significantly improves HTTPS performance without increasing resource costs. MSE Ingress also lets you control the TLS version and cipher suites at the domain name level to meet compliance requirements on TLS protocol versions and ciphers in special business scenarios. (A combined configuration sketch follows this list.)
  • Fine-Grained WAF Protection: MSE Ingress supports both global WAF protection and fine-grained route-level WAF protection. It uses a built-in WAF agent to provide shorter request links and lower response time (RT) than a traditional WAF.
  • Fine-Grained IP Access Control: MSE Ingress supports IP blacklists and whitelists at the instance level, domain name level, and route level, with precedence increasing in that order. You can configure broad IP access control at the instance level and then service-specific IP access control at the route level to meet diverse access control policies.
  • Diversified Authentication Systems: In a microservice architecture, multiple services receive requests from external users (clients). In most cases, these services are not exposed directly; instead, a gateway serves as the control point through which external users access internal services. External requests are usually authenticated at the gateway, which identifies the user and enforces access control policies. Currently, MSE Ingress supports Basic Auth, JWT Auth, OIDC, Alibaba Cloud IDaaS, and custom external authentication, which helps businesses implement high-level access control in a non-intrusive manner.
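
The sketch below combines several of these protections on a single route, using NGINX Ingress-compatible annotation names that fall within the compatibility MSE Ingress states. The host, Secret names, Service name, and the `mse` IngressClass are placeholders; verify each annotation in the MSE Ingress annotation reference before relying on it.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-route
  annotations:
    # Route-level IP access control: only these CIDR blocks may access the route.
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/8,192.168.0.0/16"
    # Basic authentication backed by a Kubernetes Secret (placeholder name).
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth-users
spec:
  ingressClassName: mse
  tls:
    - hosts:
        - api.example.com
      secretName: api-example-com-tls   # TLS certificate stored as a Secret
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-svc            # placeholder backend Service
                port:
                  number: 80
```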

How to Build a Comprehensive Observability System

Observability is not a new word. It comes from control theory and refers to the degree to which a system's internal state can be inferred from its external outputs. Over decades of IT development, monitoring, alerting, and troubleshooting have gradually matured, and the industry has abstracted them into a set of observability engineering practices. The term has become popular in recent years largely because of the rise of technologies such as cloud native, microservices, and DevOps, which pose greater challenges to observability.

As the entrance for business traffic, the gateway's observability is closely tied to the stability of the overall business. At the same time, because the gateway serves many user scenarios and functions in a complex network environment, building gateway observability is difficult, mainly for the following reasons:

  1. Many different roles pay attention to gateway observability.
  2. Instrumentation points are not precise enough, and collecting statistics is costly.
  3. The network environment is complex, and troubleshooting is difficult.

To address these pain points, MSE Ingress helps users build a comprehensive observability system across three major categories: logs, tracing analysis, and metric monitoring.

In terms of logs, MSE Ingress is seamlessly integrated with Alibaba Cloud Log Service (SLS), so users can view all access requests to the cluster ingress in real time.

In terms of monitoring metrics, MSE Ingress provides a dashboard that covers a wide range of metrics and integrates with Prometheus and SLS. You can run ETL over the gateway access logs to obtain more precise and accurate data, or use Prometheus for real-time monitoring of the gateway.
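As a purely illustrative sketch of pulling gateway metrics into a self-managed Prometheus, the fragment below uses standard Prometheus scrape configuration. The job name, metrics path, and target address are placeholders; the actual metrics endpoint depends on your gateway setup, and on ACK the ARMS Prometheus integration is typically configured from the console instead.

```yaml
# prometheus.yml fragment: scrape the gateway's metrics endpoint periodically.
scrape_configs:
  - job_name: mse-ingress-gateway        # placeholder job name
    scrape_interval: 15s
    metrics_path: /metrics               # assumed metrics path
    static_configs:
      - targets:
          - "gateway-metrics.example.internal:9090"   # placeholder host:port
```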

In terms of tracing, to help users visualize call links in microservice scenarios, MSE Ingress integrates with the out-of-the-box ARMS distributed Tracing Analysis service and also supports delivering trace data to a self-managed SkyWalking backend to avoid lock-in to cloud products.

How to Use MSE Ingress

MSE Ingress is deeply integrated with ACK and ASK, so users can easily access and use it.

When you create an ACK or ASK cluster, you can select MSE Ingress in the Ingress section to install it.

For an existing cluster, go to Cluster → Operations Management → Component Management, find MSE Ingress Controller in the Network group, and install it.

Once the MSE Ingress Controller is installed, you can refer to the following documentation to get started with MSE Ingress.
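As a minimal getting-started sketch (assuming the gateway has been provisioned and an IngressClass named `mse` exists in your cluster, which depends on your MSE Ingress setup), the manifests below expose a hypothetical Service through MSE Ingress; the names, port, and domain are placeholders.

```yaml
# A placeholder backend Service for the demo application.
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  selector:
    app: hello
  ports:
    - port: 80
      targetPort: 8080
---
# Expose the Service through MSE Ingress. After applying it, the gateway
# address should appear in the ADDRESS column of `kubectl get ingress hello`.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello
spec:
  ingressClassName: mse              # assumed IngressClass name
  rules:
    - host: hello.example.com        # placeholder domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-svc
                port:
                  number: 80
```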
