ACK@Edge is a cloud-managed solution provided by Container Service for Kubernetes (ACK). You can use ACK@Edge to achieve collaborative cloud-edge computing. This topic lists the latest changes to ACK@Edge of Kubernetes 1.18.

Cloud-edge O&M channel and O&M monitoring

The cloud-edge O&M channel and O&M monitoring features are optimized:

  • tunnel-server intercepts and handles edge O&M and monitoring traffic based on cluster-level DNS resolution instead of the iptables rules of individual nodes.
  • Monitoring components that depend on the cloud-edge O&M channel, such as metrics-server and Prometheus, no longer need to be deployed on the same node as tunnel-server.
  • tunnel-server can be deployed with multiple pod replicas to balance load among all nodes.
  • The meta server module is added to the cloud-edge O&M channel. This module serves Prometheus metrics and debug/pprof data. The endpoint of tunnel-server is http://127.0.0.1:10265, and the endpoint of edge-tunnel-agent is http://127.0.0.1:10266. You can change the port of an endpoint by setting the --meta-port startup parameter of the corresponding component.
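For example, to move the meta server of tunnel-server off its default port 10265, you might set --meta-port in the component's container spec. The following is a minimal sketch; the Deployment layout and the port value 10270 are illustrative assumptions, not taken from this topic:

```yaml
# Hypothetical container spec excerpt for the tunnel-server Deployment.
containers:
  - name: tunnel-server
    args:
      # Metrics and debug/pprof data would then be served at
      # http://127.0.0.1:10270 instead of the default 10265.
      - --meta-port=10270
```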

Autonomy of edge nodes

Edge caching, health checks, service endpoints, and traffic analysis are optimized. Edge traffic autonomy and InCluster-mode access from edge applications to kube-apiserver are enhanced. The following section describes the improvements:

  • Traffic topology of Services at the edge is supported by edge-hub and no longer depends on Kubernetes feature gates.
  • The endpoint of a Service at the edge is automatically changed by edge-hub to the public endpoint of kube-apiserver of the cluster. This allows applications at the edge to access the cluster in InCluster mode.
  • CustomResourceDefinitions (CRDs) can be cached by edge-hub. For example, the nodenetworkconfigurations CRD can be cached. This CRD is used to store network information for Flannel.
  • Health checks in the cloud are improved by edge-hub. During health checks, Lease heartbeats instead of healthz requests are sent.
  • Port 10261 and port 10267 are listened on by edge-hub. Port 10261 is used to forward requests. Port 10267 is used to handle local requests sent to edge-hub, such as liveness probes, metrics, and pprof.
  • The node_edge_hub_proxy_traffic_collector metric is supported by edge-hub. This metric shows the traffic generated when components of edge nodes such as kubelet and kube-proxy access Kubernetes resources, such as pods and Deployments.
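As an illustration of the local port, a liveness probe against edge-hub's local endpoint might look like the following sketch. The /v1/healthz path and the probe timings are assumptions and may differ in your release:

```yaml
# Hypothetical pod spec excerpt probing edge-hub on its local port 10267.
livenessProbe:
  httpGet:
    host: 127.0.0.1
    port: 10267
    path: /v1/healthz   # assumed health check path
  initialDelaySeconds: 300
  periodSeconds: 5
```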

Cell-based management at the edge

The Patch field is supported in cell-based management (based on the UnitedDeployment controller) at the edge. This field allows you to customize the configuration of each node pool. For example, if you want nodes in different node pools of a deployment cell to pull images from different local image repositories, you can specify an image address for each node pool by using the Patch field.
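The per-pool image override described above might be expressed as follows. This is a sketch: the pool name, registry address, and schema details of apps.openyurt.io/v1alpha1 are illustrative assumptions:

```yaml
# Hypothetical UnitedDeployment that overrides the image for one node pool.
apiVersion: apps.openyurt.io/v1alpha1
kind: UnitedDeployment
metadata:
  name: example
spec:
  selector:
    matchLabels:
      app: example
  workloadTemplate:
    deploymentTemplate:
      metadata:
        labels:
          app: example
      spec:
        selector:
          matchLabels:
            app: example
        template:
          metadata:
            labels:
              app: example
          spec:
            containers:
              - name: nginx
                image: nginx:1.19    # default image for all node pools
  topology:
    pools:
      - name: hangzhou
        nodeSelectorTerm:
          matchExpressions:
            - key: apps.openyurt.io/nodepool
              operator: In
              values:
                - hangzhou
        replicas: 2
        patch:    # per-pool override applied through the Patch field
          spec:
            template:
              spec:
                containers:
                  - name: nginx
                    image: registry-hangzhou.example.com/nginx:1.19
```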

Add edge nodes to a cluster

Nodes that run the Ubuntu 20.04 operating system can be added to edge Kubernetes clusters.

Edge network

  • The cloud-edge network built by using Flannel is optimized. List and watch operations are no longer performed on node objects. Instead, these operations are performed on the related CRDs, which reduces the traffic that the operations generate.
  • Annotations about traffic management at the edge
    • The following table describes the keys of annotations supported by Kubernetes 1.16 for traffic management at the edge.
      | Annotation key | Annotation value | Description |
      | --- | --- | --- |
      | openyurt.io/topologyKeys | kubernetes.io/hostname | The Service can be accessed only by the node on which the Service is deployed. |
      | openyurt.io/topologyKeys | kubernetes.io/zone | The Service can be accessed only by nodes in the node pool where the Service is deployed. |
      | None | None | Access to the Service is not limited. |
    • In Kubernetes 1.18, the valid values of openyurt.io/topologyKeys are changed to kubernetes.io/zone and openyurt.io/nodepool. Both values specify that the Service can be accessed only by nodes in the node pool where the Service is deployed. We recommend that you set the value to openyurt.io/nodepool.
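A Service limited to its own node pool might carry the annotation as follows. This is a sketch: the Service name, selector, and ports are illustrative assumptions:

```yaml
# Hypothetical Service restricted to the node pool where it is deployed.
apiVersion: v1
kind: Service
metadata:
  name: edge-svc
  annotations:
    openyurt.io/topologyKeys: openyurt.io/nodepool   # recommended value in Kubernetes 1.18
spec:
  selector:
    app: edge-app
  ports:
    - port: 80
      targetPort: 8080
```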