Argo CD monitors the changes to application orchestration in a Git repository, compares the application orchestration with the status of applications in a cluster, and automatically pulls and deploys the changes to the cluster. Argo CD also allows you to manually deploy the changes to the cluster. This topic describes how to use Argo CD to implement end-to-end canary releases of application services.
Prerequisites
- A Service Mesh (ASM) instance of Enterprise Edition or Ultimate Edition is created, and the instance is of the latest version. For more information, see Create an ASM instance.
- An ACK managed cluster is created. For more information, see Create an ACK managed cluster.
- The cluster is added to the ASM instance. For more information, see Add a cluster to an ASM instance.
- Argo CD is installed, and the Container Service for Kubernetes (ACK) cluster is added to Argo CD as an external cluster. For more information, see Integrate Argo CD with ASM to implement GitOps.
- An ASM gateway is created, and the HTTP protocol and a public IP address are exposed for the gateway. For more information, see Create an ingress gateway service.
- kubectl is used to connect to the ACK cluster and the ASM instance. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster and Use kubectl on the control plane to access Istio resources.
Background information
Argo CD allows you to deploy and publish application services in a convenient and quick manner. To implement an end-to-end canary release of an application service in lane mode, you must declare a TrafficLabel Custom Resource Definition (CRD) globally or at the namespace level and add the ASM_TRAFFIC_TAG tag to the orchestration YAML file of the application service. The following figure shows Service A, Service B, and Service C. Each service has three versions: v1, v2, and v3. Services of the same version reside in a single lane. In this example, the lanes for v1 and v2 are used to describe how to deploy the services and verify the traffic. You can create a lane for v3 to check whether traffic flows among the services as expected.
Preparations
Create an orchestration YAML file for each application service and submit the file to GitHub. Argo CD synchronizes the orchestration YAML files in GitHub to the configured Kubernetes cluster. The following figure shows the orchestration files of the v1 and v2 versions of Service A, Service B, and Service C. For more information, see swimlane.
- The v1 folder contains the configurations of Service A, Service B, and Service C of the v1 version, the routing rules for an ASM gateway, and the virtual services and destination rules of these services.
- The v2 folder contains the orchestration YAML files of the Deployments of Service A, Service B, and Service C of the v2 version.
The following code shows a sample YAML file in the v1 folder:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mocka-v1
  labels:
    app: mocka
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mocka
      version: v1
      ASM_TRAFFIC_TAG: v1
  template:
    metadata:
      labels:
        app: mocka
        version: v1
        ASM_TRAFFIC_TAG: v1
    spec:
      containers:
      - name: default
        image: docker.io/vifoggy/gobin:1.0.0
        imagePullPolicy: IfNotPresent
        env:
        - name: version
          value: v1
        - name: app
          value: mocka
        - name: upstream_url
          value: "http://mockb:8000/"
        ports:
        - containerPort: 8000
Service A, Service B, and Service C use the same mock image. You can set environment variables in the Deployments to specify the content that each service returns. For example, you can specify that Service A of the v1 version returns -> mocka(version: v1, ip: 172.30.96.123). The following information describes some parameters in the sample YAML file:
- app: the name of the application.
- version: the version number.
- upstream_url: specifies whether to send a request to an upstream service in a synchronous manner. You can leave this parameter empty or set it to the URL of the upstream service. The preceding configuration indicates that port 8000 of the mockb service is accessed.
- ASM_TRAFFIC_TAG: the name of the lane.
Note: The services whose YAML files are stored in the v1 folder are of the v1 version, and the ASM_TRAFFIC_TAG: v1 tag is added to template.metadata.labels in the Deployments of these services.
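In addition to the Deployments, the v1 folder contains the ASM gateway routing rules, virtual services, and destination rules mentioned above. Refer to the repository for their exact content. As an illustration only, a destination rule that defines a v1 subset for the mocka service could look similar to the following sketch; the host and subset names are assumptions derived from the labels shown above:
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: mocka
  namespace: fulllink
spec:
  host: mocka     # The Kubernetes Service of Service A (assumed name).
  subsets:
  - name: v1      # Subset that matches the v1 Pods by their version label.
    labels:
      version: v1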
Step 1: Deploy application services
Add a Git repository by using SSH.
The connection may be unstable if you access GitHub over HTTPS. Therefore, we recommend that you use SSH to access GitHub.
Log on to Argo CD and choose Settings > Repositories in the left-side navigation pane. Then, click CONNECT REPO USING SSH.
Set parameters in the panel that appears.
The following information describes some parameters in the panel:
Repository URL: the URL of the Git repository.
SSH private key data: the SSH private key.
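If you prefer the Argo CD CLI to the web UI, you can register the same repository with a command similar to the following sketch. It assumes that you have already run argocd login and that the SSH private key is stored at the example path:
argocd repo add git@github.com:AliyunContainerService/asm-labs.git \
  --ssh-private-key-path ~/.ssh/id_rsa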
Deploy the services in the lane for v1.
On the Applications page, click NEW APP and perform the following operations:
In the GENERAL section, set the Application Name parameter to abc-v1, the Project parameter to default, and the SYNC POLICY parameter to Manual.
In the SOURCE section, set the Repository URL parameter to git@github.com:AliyunContainerService/asm-labs.git, the Revision parameter to main, and the Path parameter to fulllink-gray/swimlane/v1.
In the DESTINATION section, set the Cluster URL parameter to https://kubernetes.default.svc and the Namespace parameter to fulllink.
Note: You must select the ACK cluster in which you want to deploy the services.
After the configuration is complete, click CREATE in the upper part of the panel.
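If you prefer a declarative workflow, the NEW APP settings above can also be expressed as an Argo CD Application resource. The following manifest is a sketch that mirrors the UI values; it assumes that Argo CD is installed in the argocd namespace:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: abc-v1
  namespace: argocd   # Assumed namespace in which Argo CD is installed.
spec:
  project: default
  source:
    repoURL: git@github.com:AliyunContainerService/asm-labs.git
    targetRevision: main
    path: fulllink-gray/swimlane/v1
  destination:
    server: https://kubernetes.default.svc
    namespace: fulllink
  # No syncPolicy.automated section is set, so synchronization stays manual,
  # which matches the SYNC POLICY setting in the UI.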
After you create the application, you can view the status of the abc-v1 application on the Applications page.
Note: If the application is in the OutOfSync state, as shown on the card of the abc-v1 application, the configurations of the application in the Git repository are not synchronized to the cluster. In this case, you can click SYNC to synchronize the configurations. After the synchronization is complete, you can view the status of the resources in the ASM and ACK consoles.
Click abc-v1 to view the status of the created resources.
Run the following command to access the services by using the ASM gateway:
curl http://<IP address of the ASM gateway>/mock
Expected output:
-> mocka(version: v1, ip: 172.30.96.69)-> mockb(version: v1, ip: 172.30.96.66)-> mockc(version: v1, ip: 172.30.96.67)
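If you do not know the public IP address of the ASM gateway, you can usually read it from the LoadBalancer Service of the ingress gateway in the ACK cluster. The following sketch assumes the default Service name istio-ingressgateway in the istio-system namespace; adjust the name and namespace to match your gateway:
# Read the public IP address of the ingress gateway Service and send a test request.
INGRESS_GATEWAY_IP=$(kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl http://$INGRESS_GATEWAY_IP/mock
The $INGRESS_GATEWAY_IP variable set here is also used in the commands in Step 2.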
Create an application named abc-v2 to deploy the services in the lane for v2 by referring to the preceding substeps.
Step 2: Verify the lane mode of end-to-end canary releases
In this example, a TrafficLabel CRD is configured in an ASM cluster.
Create a file named traffic_label_default_swimlane.yaml that contains the following content:
apiVersion: istio.alibabacloud.com/v1beta1
kind: TrafficLabel
metadata:
  name: example1
  namespace: fulllink
spec:
  rules:
  - labels:
    - name: userdefinelabel1
      valueFrom:
      - $localLabel
    attachTo:
    - opentracing
    # The protocols to take effect. If you do not set the protocols parameter, no protocol takes effect.
    # If you set the protocols parameter to an asterisk (*), all protocols take effect.
    protocols: "*"
    hosts: # The services to take effect.
    - "*"
Run the following command to deploy the TrafficLabel CRD:
kubectl apply -f traffic_label_default_swimlane.yaml
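To confirm that the resource has been created, you can query it with the same kubeconfig that you used for the kubectl apply command. This is a sketch; the resource type name trafficlabel is an assumption and may differ in your environment:
kubectl -n fulllink get trafficlabel example1 -o yaml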
Run the following command to access services in the lane for v1:
curl -H 'x-asm-prefer-tag: v1' $INGRESS_GATEWAY_IP/mock
The returned result indicates that the accessed version of Service A, Service B, and Service C is always v1.
Run the following command to access services in the lane for v2:
curl -H 'x-asm-prefer-tag: v2' $INGRESS_GATEWAY_IP/mock
The returned result indicates that the accessed version of Service A, Service B, and Service C is always v2.
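To compare the two lanes side by side, you can send one request per x-asm-prefer-tag value, for example with the following sketch (it assumes that $INGRESS_GATEWAY_IP is set as described in Step 1):
# Send one request to each lane and print the responses.
for tag in v1 v2; do
  echo "lane: $tag"
  curl -H "x-asm-prefer-tag: $tag" $INGRESS_GATEWAY_IP/mock
  echo
done
Each response should contain only the version that matches the x-asm-prefer-tag header of the request.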
FAQ
Does an end-to-end canary release depend on a tracing system?
No.
The Traffic labeling and label-based routing topic describes how an end-to-end canary release is implemented. The getContext method is supported in the spec field of the TrafficLabel CRD. The following code shows the content of a sample YAML file:
apiVersion: istio.alibabacloud.com/v1beta1
kind: TrafficLabel
metadata:
  name: example2
  namespace: workshop
spec:
  workloadSelector:
    labels:
      app: test
  rules:
  - labels:
    - name: userdefinelabel1
      valueFrom:
      - $getContext(x-request-id)
      - $localLabel
    attachTo:
    - opentracing
    protocols: "*"
    hosts:
    - "*"
You can pass parameters to the getContext method. In this example, the x-request-id parameter is used. In scenarios where HTTP or gRPC is used, the x-request-id parameter represents a header key. An end-to-end canary release relies only on the propagation of this header key along the request chain.
Header propagation is also referred to as context propagation. It is a mechanism that transfers context between services and remote processes by using HTTP headers. Context is injected into a request on the client side, and then extracted and forwarded by the remote application. The application may also send a request that contains the original context to an upstream service. The process of relaying the original request context in this way is called header propagation.
The lane mode of end-to-end canary releases reads local traffic labels. In lane mode, header propagation is not required for the related services.
The end-to-end canary release feature does not depend on a specific tracing system. However, a tracing system provides a header key that represents the trace ID and implements the header propagation logic for you. If your application service is already connected to a tracing system, you can directly configure the header key that represents the trace ID.
If the application service is not connected to a tracing system, or you do not want to connect it to one by using SDKs, you can implement the header propagation logic in your application code. In this case, you only need to make sure that the x-request-id header is propagated. This logic is also used by the Bookinfo application released by the Istio community. The Bookinfo application contains the following code snippet. For more information about the source code of the Bookinfo application, see Source code of Bookinfo on GitHub.
def getForwardHeaders(request):
    headers = {}
    # x-b3-*** headers can be populated using the opentracing span
    span = get_current_span()
    carrier = {}
    tracer.inject(
        span_context=span.context,
        format=Format.HTTP_HEADERS,
        carrier=carrier)
    headers.update(carrier)
    # We handle other (non x-b3-***) headers manually
    if 'user' in session:
        headers['end-user'] = session['user']
    # Keep this in sync with the headers in details and reviews.
    incoming_headers = [
        # All applications should propagate x-request-id. This header is
        # included in access log statements and is used for consistent trace
        # sampling and log sampling decisions in Istio.
        'x-request-id',
        # Lightstep tracing header. Propagate this if you use lightstep tracing
        # in Istio (see
        # https://istio.io/latest/docs/tasks/observability/distributed-tracing/lightstep/)
        # Note: this should probably be changed to use B3 or W3C TRACE_CONTEXT.
        # Lightstep recommends using B3 or TRACE_CONTEXT and most application
        # libraries from lightstep do not support x-ot-span-context.
        'x-ot-span-context',
        # Datadog tracing header. Propagate these headers if you use Datadog
        # tracing.
        'x-datadog-trace-id',
        'x-datadog-parent-id',
        'x-datadog-sampling-priority',
        # W3C Trace Context. Compatible with OpenCensusAgent and Stackdriver Istio
        # configurations.
        'traceparent',
        'tracestate',
        # Cloud trace context. Compatible with OpenCensusAgent and Stackdriver Istio
        # configurations.
        'x-cloud-trace-context',
        # Grpc binary trace context. Compatible with OpenCensusAgent and
        # Stackdriver Istio configurations.
        'grpc-trace-bin',
        # b3 trace headers. Compatible with Zipkin, OpenCensusAgent, and
        # Stackdriver Istio configurations. Commented out since they are
        # propagated by the OpenTracing tracer above.
        # 'x-b3-traceid',
        # 'x-b3-spanid',
        # 'x-b3-parentspanid',
        # 'x-b3-sampled',
        # 'x-b3-flags',
        # Application-specific headers to forward.
        'user-agent',
        # Context and session specific headers
        'cookie',
        'authorization',
        'jwt',
    ]
    # For Zipkin, always propagate b3 headers.
    # For Lightstep, always propagate the x-ot-span-context header.
    # For Datadog, propagate the corresponding datadog headers.
    # For OpenCensusAgent and Stackdriver configurations, you can choose any
    # set of compatible headers to propagate within your application. For
    # example, you can propagate b3 headers or W3C trace context headers with
    # the same result. This can also allow you to translate between context
    # propagation mechanisms between different applications.
    for ihdr in incoming_headers:
        val = request.headers.get(ihdr)
        if val is not None:
            headers[ihdr] = val
    return headers
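For context, the headers returned by getForwardHeaders are meant to be attached to the outbound requests that a service sends to its downstream services, so that the x-request-id header keeps flowing along the call chain. The following is a minimal sketch of such a call, using the Python requests library; the get_details function and the details URL are illustrative and not the exact Bookinfo code:
import requests

def get_details(request, product_id):
    # Forward the propagation headers extracted from the incoming request
    # to the downstream details service.
    headers = getForwardHeaders(request)
    url = "http://details:9080/details/{}".format(product_id)
    return requests.get(url, headers=headers, timeout=3.0)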