By Mengyuan Pan
Alibaba Cloud Service Mesh (ASM) supports both Sidecar and Sidecarless modes. In the Sidecar mode, a proxy runs next to each service instance; this is currently the most mainstream and stable solution, but it introduces noticeable latency and resource overhead. To address this overhead, various Sidecarless solutions have emerged in recent years, such as Istio Ambient, which deploys a ztunnel on each node to proxy Layer 4 traffic for the pods on that node and introduces Waypoint proxies for Layer 7 traffic. Although the Sidecarless mode reduces latency and resource overhead, its stability and feature coverage still need to mature.
ASM currently supports several Sidecarless modes, such as the Istio Ambient mode, the ACMG mode, and Kmesh. Kmesh (for more information, see https://kmesh.net/) is high-performance service mesh data plane software built on eBPF and programmable kernel technology. By sinking traffic governance into the kernel, Kmesh enables service communication within the mesh without proxy software, which significantly shortens the traffic forwarding path and effectively improves the forwarding performance of service access.
The dual-engine mode of Kmesh uses eBPF to intercept traffic in kernel space and deploys a Waypoint proxy to handle complex Layer 7 traffic management, splitting Layer 4 and Layer 7 governance between the kernel (eBPF) and user space (Waypoint). Compared with Istio's Ambient Mesh, latency is reduced by 30%. Compared with Kmesh's kernel-native mode, the dual-engine mode does not require kernel enhancements and can therefore be used more widely.
Figure | Kmesh Dual-engine Mode Architecture
ASM supports the Kmesh dual-engine mode as one of the ASM data planes to implement more efficient service governance. ASM can be used as the control plane, and Kmesh can be deployed as the data plane within the ACK cluster.
Refer to the official documentation of Alibaba Cloud Service Mesh (ASM) to create an ASM instance and an ACK cluster, and then add the ACK cluster to the ASM instance for management.
Run the following command to download the Kmesh project to your local machine.
git clone https://github.com/kmesh-net/kmesh.git && cd kmesh
After the download is complete, run the following command to query the name of the Service that exposes the ASM control plane in the cluster. You will need it to configure the connection between Kmesh and the ASM control plane.
kubectl get svc -n istio-system | grep istiod
# istiod-1-22-6 ClusterIP None <none> 15012/TCP 2d
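If you prefer to script this step, you can capture the Service name and assemble the xDS address that is configured below. The following is a minimal convenience sketch of our own (the ISTIOD_SVC variable name is hypothetical, and it assumes a single istiod Service in the istio-system namespace).
ISTIOD_SVC=$(kubectl get svc -n istio-system -o name | grep istiod | head -n 1 | cut -d/ -f2)
echo "XDS_ADDRESS=${ISTIOD_SVC}.istio-system.svc:15012"
# XDS_ADDRESS=istiod-1-22-6.istio-system.svc:15012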
You can use kubectl to install Kmesh in the ACK cluster. Before installation, you must add the CLUSTER_ID and XDS_ADDRESS environment variables to the Kmesh DaemonSet so that Kmesh can authenticate with and connect to the ASM control plane. CLUSTER_ID is the ID of the ACK cluster in which you deploy Kmesh, and XDS_ADDRESS is the Service address of the ASM control plane.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kmesh
  labels:
    app: kmesh
  namespace: kmesh-system
spec:
  template:
    spec:
      containers:
      - env:
        # Modify the XDS_ADDRESS parameter to the Service of the ASM control plane
        - name: XDS_ADDRESS
          value: "istiod-1-22-6.istio-system.svc:15012"
        # Set CLUSTER_ID to the ID of the ACK cluster
        - name: CLUSTER_ID
          value: "cluster-id"
        ...
Modify the environment variables of the Kmesh DaemonSet in the following command as described above, and then run the command to deploy Kmesh.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: kmesh-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kmesh
  labels:
    app: kmesh
  namespace: kmesh-system
spec:
  selector:
    matchLabels:
      app: kmesh
  template:
    metadata:
      labels:
        app: kmesh
      annotations:
        prometheus.io/path: "status/metric"
        prometheus.io/port: "15020"
        prometheus.io/scrape: "true"
    spec:
      tolerations:
        - effect: NoSchedule
          operator: Exists
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      volumes:
        # required for cgroup access
        - name: mnt
          hostPath:
            path: /mnt
        # for loading eBPF programs into the host machine
        - name: sys-fs-bpf
          hostPath:
            path: /sys/fs/bpf
        # required for compiling and building the kernel module (ko)
        - name: lib-modules
          hostPath:
            path: /lib/modules
        # k8s default cni conflist path
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        # k8s default cni path
        - name: kmesh-cni-install-path
          hostPath:
            path: /opt/cni/bin
        - name: host-procfs
          hostPath:
            path: /proc
            type: Directory
        - name: istiod-ca-cert
          configMap:
            defaultMode: 420
            name: istio-ca-root-cert
        - name: istio-token
          projected:
            defaultMode: 420
            sources:
              - serviceAccountToken:
                  audience: istio-ca
                  expirationSeconds: 43200
                  path: istio-token
      containers:
        - name: kmesh
          image: registry-cn-hangzhou.ack.aliyuncs.com/ack-demo/kmesh:latest
          imagePullPolicy: IfNotPresent
          command: ["/bin/sh", "-c"]
          args:
            [
              "./start_kmesh.sh --mode=dual-engine --enable-bypass=false --enable-bpf-log=true",
            ]
          securityContext:
            privileged: true
            capabilities:
              add: ["all"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: INSTANCE_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            # Modify the XDS_ADDRESS parameter to the Service of the ASM control plane
            - name: XDS_ADDRESS
              value: "istiod-1-22-6.istio-system.svc:15012"
            # Set CLUSTER_ID to the ID of the ACK cluster
            - name: CLUSTER_ID
              value: "cluster-id"
            - name: SERVICE_ACCOUNT
              valueFrom:
                fieldRef:
                  fieldPath: spec.serviceAccountName
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: mnt
              mountPath: /mnt
              readOnly: false
            - name: sys-fs-bpf
              mountPath: /sys/fs/bpf
              readOnly: false
            - name: lib-modules
              mountPath: /lib/modules
              readOnly: false
            # k8s default cni conflist path
            - name: cni
              mountPath: /etc/cni/net.d
              readOnly: false
            # k8s default cni path
            - name: kmesh-cni-install-path
              mountPath: /opt/cni/bin
              readOnly: false
            - name: host-procfs
              mountPath: /host/proc
              readOnly: true
            - name: istiod-ca-cert
              mountPath: /var/run/secrets/istio
            - name: istio-token
              mountPath: /var/run/secrets/tokens
          resources:
            limits:
              # online compilation in the image needs 800Mi; otherwise 200Mi is enough
              memory: "800Mi"
              cpu: "1"
      priorityClassName: system-node-critical
      serviceAccountName: kmesh
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kmesh
  labels:
    app: kmesh
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "namespaces"]
    verbs: ["get", "update", "patch", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["daemonsets"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kmesh
  labels:
    app: kmesh
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kmesh
subjects:
  - kind: ServiceAccount
    name: kmesh
    namespace: kmesh-system
---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: add-listener-filter
  namespace: istio-system
  labels:
    asm-system: 'true'
    provider: asm
spec:
  workloadSelector:
    labels:
      gateway.istio.io/managed: istio.io-mesh-controller
  configPatches:
    - applyTo: LISTENER
      match:
        proxy:
          proxyVersion: .*
      patch:
        operation: ADD
        value:
          name: kmesh-listener
          address:
            socket_address:
              protocol: TCP
              address: 0.0.0.0
              port_value: 15019
          additional_addresses:
            - address:
                socket_address:
                  protocol: TCP
                  address: "::"
                  port_value: 15019
          default_filter_chain:
            filters:
              - name: envoy.filters.network.tcp_proxy
                typed_config:
                  "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
                  stat_prefix: kmesh
                  cluster: main_internal
          filter_chains:
            - filter_chain_match:
                application_protocols:
                  - "http/1.1"
                  - "h2c"
              filters:
                - name: envoy.filters.network.http_connection_manager
                  typed_config:
                    "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                    stat_prefix: kmesh
                    route_config:
                      name: default
                      virtual_hosts:
                        - name: default
                          domains:
                            - '*'
                          routes:
                            - match:
                                prefix: "/"
                              route:
                                cluster: main_internal
                    http_filters:
                      - name: waypoint_downstream_peer_metadata
                        typed_config:
                          "@type": type.googleapis.com/udpa.type.v1.TypedStruct
                          type_url: type.googleapis.com/io.istio.http.peer_metadata.Config
                          value:
                            downstream_discovery:
                              - workload_discovery: {}
                            shared_with_upstream: true
                      - name: envoy.filters.http.router
                        typed_config:
                          "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
          listener_filters:
            - name: "envoy.listener.kmesh_tlv"
              typed_config:
                "@type": "type.googleapis.com/udpa.type.v1.TypedStruct"
                "type_url": "type.googleapis.com/envoy.listener.kmesh_tlv.config.KmeshTlv"
            - name: "envoy.filters.listener.http_inspector"
              typed_config:
                "@type": "type.googleapis.com/envoy.extensions.filters.listener.http_inspector.v3.HttpInspector"
---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: skip-tunneling
  namespace: istio-system
  labels:
    asm-system: 'true'
    provider: asm
spec:
  workloadSelector:
    labels:
      gateway.istio.io/managed: istio.io-mesh-controller
  configPatches:
    - applyTo: NETWORK_FILTER
      match:
        proxy:
          proxyVersion: .*
        listener:
          filterChain:
            filter:
              name: envoy.filters.network.tcp_proxy
      patch:
        operation: REPLACE
        value:
          name: envoy.filters.network.tcp_proxy
          typed_config:
            "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
            stat_prefix: kmesh_original_dst_cluster
            cluster: kmesh_original_dst_cluster
---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: add-original-dst-cluster
  namespace: istio-system
  labels:
    asm-system: 'true'
    provider: asm
spec:
  workloadSelector:
    labels:
      gateway.istio.io/managed: istio.io-mesh-controller
  configPatches:
    - applyTo: CLUSTER
      match:
        proxy:
          proxyVersion: .*
        context: SIDECAR_INBOUND
      patch:
        operation: ADD
        value:
          name: "kmesh_original_dst_cluster"
          type: ORIGINAL_DST
          connect_timeout: 2s
          lb_policy: CLUSTER_PROVIDED
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kmesh
  namespace: kmesh-system
EOF
After the installation is complete, run the following command to check the startup status of the Kmesh service.
kubectl get pods -A | grep kmesh
# kmesh-system kmesh-l5z2j 1/1 Running 0 117m
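You can also wait for the DaemonSet rollout to finish instead of polling the pod list. This uses standard kubectl behavior and is not specific to Kmesh.
kubectl rollout status daemonset/kmesh -n kmesh-system --timeout=120s
# daemon set "kmesh" successfully rolled out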
Run the following command to view the logs of the Kmesh service and confirm that it is running properly.
kubectl logs -f -n kmesh-system kmesh-l5z2j
# time="2024-02-19T10:16:52Z" level=info msg="service node sidecar~192.168.11.53~kmesh-system.kmesh-system~kmesh-system.svc.cluster.local connect to discovery address istiod.istio-system.svc:15012" subsys=controller/envoy
# time="2024-02-19T10:16:52Z" level=info msg="options InitDaemonConfig successful" subsys=manager
# time="2024-02-19T10:16:53Z" level=info msg="bpf Start successful" subsys=manager
# time="2024-02-19T10:16:53Z" level=info msg="controller Start successful" subsys=manager
# time="2024-02-19T10:16:53Z" level=info msg="command StartServer successful" subsys=manager
# time="2024-02-19T10:16:53Z" level=info msg="start write CNI config\n" subsys="cni installer"
# time="2024-02-19T10:16:53Z" level=info msg="kmesh cni use chained\n" subsys="cni installer"
# time="2024-02-19T10:16:54Z" level=info msg="Copied /usr/bin/kmesh-cni to /opt/cni/bin." subsys="cni installer"
# time="2024-02-19T10:16:54Z" level=info msg="kubeconfig either does not exist or is out of date, writing a new one" subsys="cni installer"
# time="2024-02-19T10:16:54Z" level=info msg="wrote kubeconfig file /etc/cni/net.d/kmesh-cni-kubeconfig" subsys="cni installer"
# time="2024-02-19T10:16:54Z" level=info msg="command Start cni successful" subsys=manager
You can run the following command to enable Kmesh for a specified namespace.
kubectl label namespace default istio.io/dataplane-mode=Kmesh
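To take the namespace back out of Kmesh management later, remove the label by using the standard trailing-dash syntax of kubectl.
kubectl label namespace default istio.io/dataplane-mode-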
After you enable Kmesh for the default namespace, run the following command to install the sample application.
kubectl apply -f samples/fortio/fortio-route.yaml
kubectl apply -f samples/fortio/netutils.yaml
Run the following command to view the running status of the application pods:
kubectl get pod
# NAME READY STATUS RESTARTS AGE
# fortio-v1-596b55cb8b-sfktr 1/1 Running 0 57m
# fortio-v2-76997f99f4-qjsmd 1/1 Running 0 57m
# netutils-575f5c569-lr98z 1/1 Running 0 67m
kubectl describe pod netutils-575f5c569-lr98z | grep Annotations
# Annotations: kmesh.net/redirection: enabled
If the application pod has the kmesh.net/redirection: enabled annotation, Kmesh traffic redirection has been enabled for the pod.
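To check all pods in the namespace at once instead of describing them one by one, you can read the annotation directly with JSONPath. The backslash-escaped dots are required because the annotation key itself contains dots.
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.kmesh\.net/redirection}{"\n"}{end}'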
Run the following command to view the current traffic routing rule. The output shows that 90% of traffic is destined for Fortio v1 and 10% for Fortio v2.
kubectl get virtualservices -o yaml
# apiVersion: v1
# items:
# - apiVersion: networking.istio.io/v1beta1
#   kind: VirtualService
#   metadata:
#     annotations:
#       kubectl.kubernetes.io/last-applied-configuration: |
#         {"apiVersion":"networking.istio.io/v1alpha3","kind":"VirtualService","metadata":{"annotations":{},"name":"fortio","namespace":"default"},"spec":{"hosts":["fortio"],"http":[{"route":[{"destination":{"host":"fortio","subset":"v1"},"weight":90},{"destination":{"host":"fortio","subset":"v2"},"weight":10}]}]}}
#     creationTimestamp: "2024-07-09T09:00:36Z"
#     generation: 1
#     name: fortio
#     namespace: default
#     resourceVersion: "11166"
#     uid: 0a07f283-ac26-4d86-b3bd-ce6aa07dc628
#   spec:
#     hosts:
#     - fortio
#     http:
#     - route:
#       - destination:
#           host: fortio
#           subset: v1
#         weight: 90
#       - destination:
#           host: fortio
#           subset: v2
#         weight: 10
# kind: List
# metadata:
#   resourceVersion: ""
Run the following command to send test traffic. As the output shows, only about 10% of the requests are served by Fortio v2.
for i in {1..20}; do kubectl exec -it $(kubectl get pod | grep netutils | awk '{print $1}') -- curl -v $(kubectl get svc -owide | grep fortio | awk '{print $3}'):80 | grep "Server:"; done
# < Server: 1
# < Server: 1
# < Server: 1
# < Server: 1
# < Server: 1
# < Server: 1
# < Server: 1
# < Server: 1
# < Server: 2
# < Server: 1
# < Server: 1
# < Server: 1
# < Server: 1
# < Server: 1
# < Server: 1
# < Server: 1
# < Server: 1
# < Server: 2
# < Server: 1
# < Server: 1
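To confirm that the split follows the VirtualService, you can adjust the weights and rerun the loop. The following is a minimal sketch that uses a JSON patch; the route indexes 0 and 1 correspond to the v1 and v2 destinations in the fortio sample above. After the patch, roughly half of the responses should report Server: 2.
kubectl patch virtualservice fortio --type=json -p='[
  {"op": "replace", "path": "/spec/http/0/route/0/weight", "value": 50},
  {"op": "replace", "path": "/spec/http/0/route/1/weight", "value": 50}
]'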