Container Service for Kubernetes:Network overview

Last Updated: Mar 25, 2024

Container Service for Kubernetes (ACK) provides stable and high-performance container networks by integrating the Kubernetes network model, Virtual Private Cloud (VPC), and Server Load Balancer (SLB). This topic describes important terms used in ACK cluster networking and Alibaba Cloud network infrastructure, such as container network interface (CNI), Service, Ingress, and DNS service discovery. Understanding these terms helps you optimize application deployment models and network access methods.

CNI

Figure 1. Container network model

Containerized applications usually run multiple workloads on a single node. To avoid network conflicts, each pod must have a unique network namespace, and the applications that run in a pod must be able to reach other networks. Container networking has the following features:

  • Each pod has a unique network namespace and an IP address. Applications that run in different pods can listen on the same port without conflicts.

  • Pods can access each other through their IP addresses.

    In a cluster, a pod can communicate with other applications through a unique IP address.

    • Pods can access each other within a cluster.

    • Pods can access Elastic Compute Service (ECS) instances that are deployed in the same VPC.

    • ECS instances can access pods that are deployed in the same VPC.

    Note

    To ensure that pods and ECS instances in the same VPC can access each other, you must configure proper security group rules. For more information, see Add a security group rule.

ACK provides two container network plug-ins: Flannel and Terway. The two network plug-ins adopt different network models and have the following features:

Note

After you create a cluster, you cannot change the network plug-in that is used by the cluster.

Terway

Terway adopts a cloud-native networking solution and configures the container network by using elastic network interfaces (ENIs). ENIs are virtual network interface controllers (NICs) provided by Alibaba Cloud. An ENI assigns IP addresses within a VPC to pods. You do not need to specify a CIDR block for pods.

Terway provides the following features:

  • Containers and virtual machines (VMs) reside at the same network layer. This makes cloud-native migration easier.

  • Network devices that are allocated to containers can be used for communication without the need to use packet encapsulation or route tables.

  • The number of nodes in a cluster is not limited by the quota of route tables that are used to route traffic or the quota of forwarding database (FDB) tables that are used to encapsulate packets.

  • You do not need to plan the CIDR blocks of overlay networks for containers. Containers in different clusters can communicate if the relevant ports are opened in security group rules.

  • You can directly attach pods to the backend of a Server Load Balancer (SLB) instance without the need to use a NodePort Service to forward traffic.

  • NAT gateways can perform source network address translation (SNAT) for pods, which eliminates the need to configure SNAT for the pod CIDR block on each node. Pods access resources in the VPC through their own IP addresses, so all access can be easily audited. No conntrack SNAT is needed when pods access external networks, which reduces the chance of access failures.

  • Terway allows you to use network policies to enforce access control on pods.

    Network policies define how pods communicate with each other and with other network endpoints. Network policies are Kubernetes resources that select pods based on labels and define communication rules for the selected pods, as shown in the sketch after this list. For more information, see Use network policies in ACK clusters or Use network policies in ACK Serverless clusters.

  • When you select Alibaba Cloud Linux as the operating system of a node, Terway supports IPVLAN and Extended Berkeley Packet Filter (eBPF) to improve network performance.
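As an illustration, the following is a minimal network policy sketch; the app=frontend and app=backend labels and the port are hypothetical. It allows pods labeled app=backend to accept ingress traffic only from pods labeled app=frontend:

```yaml
# Minimal NetworkPolicy sketch: pods labeled app=backend accept ingress
# traffic only from pods labeled app=frontend on TCP port 8080.
# The labels and the port are hypothetical examples.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```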

Flannel

With the Flannel network plug-in, pods are assigned IP addresses from a pod CIDR block that must not overlap with the CIDR block of the VPC. The pod CIDR block is evenly divided among the nodes in the cluster, and each pod on a node is assigned an IP address that belongs to the CIDR block of the node. Flannel enables pods on different nodes to communicate by using custom routes that are provided by the VPC.

Flannel provides the following features:

  • The VPC-based Flannel network does not require packet encapsulation and can improve network performance by 20% compared with the default Flannel VXLAN network.

  • The CIDR block of pods does not overlap with the CIDR block of the VPC.

  • Each node must have a VPC route entry. The number of Kubernetes nodes in a VPC is limited by the quota of VPC route tables.

For more information about the correlation between the VPC CIDR block and Kubernetes cluster CIDR block, see Plan CIDR blocks for an ACK cluster.

For more information about how to choose a network mode, see Work with Terway.

Networking features of ACK

| Category | Networking feature | Terway | Flannel | References |
| --- | --- | --- | --- | --- |
| Network configuration management | Dual-stack (IPv4 and IPv6) | Supported | Not supported | |
| Network configuration management | Configure network configurations for individual nodes | Supported | Not supported | Configure network settings for individual nodes in a cluster that uses Terway |
| Network configuration management | Configure network configurations for individual pods | Supported | Not supported | Configure a static IP address, a separate vSwitch, and a separate security group for a pod |
| Network configuration management | Configure static pod IP addresses | Supported | Not supported | Configure a static IP address, a separate vSwitch, and a separate security group for a pod |
| Network configuration management | Pod QoS | Supported | Not supported | Configure QoS for pods |
| Network configuration management | Use network policies | Supported | Not supported | Use network policies in ACK clusters |
| Network configuration management | Configure security groups for pods | Supported | Not supported | |
| Network configuration management | Expand the pod CIDR block | Supported | Not supported | |
| Network configuration management | Configure multiple route tables for a VPC | Supported | Supported | |
| North-south traffic management | Configure pods to access the Internet | Supported | Supported | |
| North-south traffic management | Expose pods to the Internet | Supported | Not supported | |
| North-south traffic management | Use LoadBalancer Services | Supported | Supported | Use an automatically created SLB instance to expose an application |
| North-south traffic management | Use Ingresses | Supported | Supported | Ingress overview |

Service

Cloud-native applications typically require agile iteration and fast scaling. Containers and their network resources have short lifecycles, so fast workload scaling requires automatic load balancing and a stable IP address. To meet these requirements, ACK allows you to create a Service that serves as the ingress and load balancer of a group of pods. A Service works as follows:

  • When you create a Service, ACK assigns a stable IP address to the Service.

  • You can configure the selector parameter to select pods and then map the IP address and port of the Service to the IP addresses and ports of the selected pods for load balancing.
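For example, the following is a minimal ClusterIP Service sketch; the web-svc name, the app=web label, and the ports are hypothetical. It maps Service port 80 to container port 8080 on the selected pods:

```yaml
# Minimal ClusterIP Service sketch (name, label, and ports hypothetical).
# The Service receives a stable cluster IP; traffic to port 80 is load
# balanced across all pods that carry the label app=web.
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  type: ClusterIP      # the default type; shown here for clarity
  selector:
    app: web           # selects the backend pods by label
  ports:
    - port: 80         # port exposed on the cluster IP
      targetPort: 8080 # port the selected pods listen on
```

The assigned cluster IP stays stable even as the selected pods are created and replaced, which is what makes fast scaling transparent to clients.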

ACK provides the following types of Services to handle requests from different sources and clients:

  • ClusterIP

    • A ClusterIP Service is used for handling access within the cluster. If you want your application to provide services within the cluster, create a ClusterIP Service.

      Note
      • By default, ClusterIP is selected when you create a Service.

      • When you create a ClusterIP Service, no SLB instance is created.

  • NodePort

    • A NodePort Service is used to expose an application outside the cluster. The application can be accessed through the IP address of a cluster node and a port that is opened on that node.

  • LoadBalancer

    • A LoadBalancer Service is also used to handle access from outside the cluster. It exposes the application through an SLB instance, which distributes client requests across the backend pods and provides higher availability and performance than access through a single node port.

  • Headless Service

    • A Headless Service is defined by setting the clusterIP field to None in the Service configuration file. A Headless Service does not have a fixed virtual IP address (VIP). When a client accesses the domain name of the Service, DNS returns the IP addresses of all backend pods, and the client must balance the load across the pods itself, for example, through DNS round robin.

  • ExternalName

    • An ExternalName Service is used to map an external domain name to a Service within the cluster. For example, you can map the domain name of an external database to a Service name within the cluster. This allows you to access the database within the cluster through the Service name.

For more information, see Considerations for configuring a LoadBalancer type Service.
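The Service types above differ in only a few fields. The following hedged sketches (all names and the external domain are hypothetical) show how the LoadBalancer, Headless, and ExternalName types are declared:

```yaml
# LoadBalancer sketch: ACK provisions an SLB instance that forwards
# external traffic to the selected pods (names and labels hypothetical).
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
---
# Headless sketch: clusterIP is None, so DNS returns the pod IP addresses
# directly instead of a virtual IP address.
apiVersion: v1
kind: Service
metadata:
  name: web-headless
spec:
  clusterIP: None
  selector:
    app: web
  ports:
    - port: 80
---
# ExternalName sketch: maps the in-cluster name "db" to an external
# domain name (db.example.com is a hypothetical placeholder).
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  type: ExternalName
  externalName: db.example.com
```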

Ingress

In ACK clusters, Services provide Layer 4 load balancing, whereas Ingresses manage external access to Services in the cluster at Layer 7. You can use Ingresses to configure Layer 7 forwarding rules, for example, to forward requests to different Services based on domain names or access paths. For more information, see Ingress overview.

Figure 2. Correlation between Ingresses and Services

Example

In common architectures that decouple the frontend and backend, different access paths distinguish frontend and backend requests. In this scenario, you can create Ingresses to balance application traffic at Layer 7, as shown in the sketch below.
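The following is a minimal sketch of such Layer 7 rules; the host, paths, and Service names are hypothetical. Requests under /api are forwarded to the backend Service, and all other requests go to the frontend Service:

```yaml
# Ingress sketch: path-based Layer 7 routing for a decoupled
# frontend/backend architecture (host, paths, and Service names
# are hypothetical).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /api          # API requests go to the backend Service
            pathType: Prefix
            backend:
              service:
                name: backend-svc
                port:
                  number: 8080
          - path: /             # everything else goes to the frontend
            pathType: Prefix
            backend:
              service:
                name: frontend-svc
                port:
                  number: 80
```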

DNS service discovery

ACK uses DNS for service discovery. For example, a client can resolve the name of a Service to the cluster IP address of the Service, and the name of a pod in a StatefulSet can be resolved to the IP address of the pod. Using DNS for service discovery allows you to call applications by DNS name, regardless of their IP addresses or deployment environments.

CoreDNS automatically converts the name of a Service to the IP address of the Service, which allows you to use the same Service name to access the Service in different environments. For more information about how to use and fine-tune the DNS component, see DNS policies and domain name resolution.
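As a quick check of DNS-based service discovery (assuming a Service named web-svc exists in the default namespace), the following throwaway pod resolves the Service name through the cluster DNS:

```yaml
# Throwaway pod that resolves a Service name through the cluster DNS.
# Assumes a Service named web-svc exists in the default namespace.
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  restartPolicy: Never
  containers:
    - name: lookup
      image: busybox:1.36
      # The short name web-svc also resolves inside the same namespace;
      # the fully qualified name is shown for clarity.
      command: ["nslookup", "web-svc.default.svc.cluster.local"]
```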

Network infrastructure

  • VPC

    • VPC is a type of private network provided by Alibaba Cloud. VPCs are logically isolated from each other. You can create and manage cloud resources in VPCs, such as ECS instances, ApsaraDB RDS instances, and SLB instances.

      Each VPC consists of one vRouter, at least one private CIDR block, and at least one vSwitch.

      • Private CIDR blocks

        When you create a VPC and a vSwitch, you must specify the private IP address range for the VPC in CIDR notation.

        You can use one of the standard private CIDR blocks listed in the following table as the private CIDR block of a VPC, or use a custom CIDR block. For more information about CIDR blocks, see Plan networks.

| CIDR block | Description |
| --- | --- |
| 192.168.0.0/16 | Number of available private IP addresses (excluding IP addresses reserved by the system): 65,532 |
| 172.16.0.0/12 | Number of available private IP addresses (excluding IP addresses reserved by the system): 1,048,572 |
| 10.0.0.0/8 | Number of available private IP addresses (excluding IP addresses reserved by the system): 16,777,212 |
| Custom CIDR block | You can use a custom CIDR block other than 100.64.0.0/10, 224.0.0.0/4, 127.0.0.0/8, 169.254.0.0/16, and their subnets. |
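As the address counts in the table suggest, each standard block reserves four IP addresses for the system. For example, 192.168.0.0/16 contains 2^(32-16) = 65,536 addresses, of which 65,536 - 4 = 65,532 are available.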

      • vRouters

        A vRouter is the hub of a VPC. As a core component, it connects the vSwitches in a VPC and serves as a gateway between the VPC and other networks. After a VPC is created, a vRouter and a system route table are automatically created for the VPC. You can also create custom route tables for the VPC.

        For more information about route tables, see Route table overview.

      • vSwitches

        A vSwitch is a basic network component that connects different cloud resources in a VPC. After you create a VPC, you can create vSwitches to create one or more subnets for the VPC. vSwitches in the same VPC can communicate with each other. You can deploy your applications in vSwitches that belong to different zones to improve service availability.

        For more information about vSwitches, see Create and manage a vSwitch.

  • SLB

    • After you attach ECS instances to an SLB instance, SLB uses virtual IP addresses (VIPs) to virtualize the ECS instances and form a high-performance, high-availability application service pool. Client requests are distributed across the ECS instances in the pool based on forwarding rules.

      SLB checks the health status of the ECS instances and automatically removes unhealthy ECS instances from the pool to eliminate single points of failure. This improves the availability of your applications. You can also use SLB to defend your applications against DDoS attacks.

      SLB consists of the following components:

      • SLB instances

        An SLB instance is a running entity of the SLB service. An SLB instance receives and distributes traffic to backend servers. To get started with SLB, you must create an SLB instance and add at least one listener and two ECS instances to the SLB instance.

      • Listeners

        A listener checks and forwards client requests to backend servers. A listener also performs health checks on backend servers.

      • Backend servers

        ECS instances are attached to SLB instances as backend servers to receive and process client requests. You can add ECS instances to a server pool, or create vServer groups or primary/secondary server groups to manage ECS instances in batches.