KubeVela is an open source modern software delivery platform, which provides rich application O&M, management, and extension capabilities. This topic describes how to install and use KubeVela in ACS clusters to manage applications.
Prerequisites
CoreDNS is installed in the ACS cluster. For more information, see Configure unmanaged CoreDNS.
A kubectl client is connected to the ACS cluster. For more information, see Getting started with ACS using kubectl.
Procedure
Step 1: Install the KubeVela component
Log on to the ACS console. In the left-side navigation pane, click Clusters.
On the Clusters page, click the name of your cluster. In the left-side navigation pane, choose Applications > Helm.
On the Helm page, click Deploy.
In the panel that appears, search for ack-kubevela, find and click the component, and click Next.
Select the latest chart version.
Click OK to install the component.
In the left-side navigation pane of the cluster management page, choose Applications > Helm to view the deployment status of ack-kubevela.
If the status of ack-kubevela changes to Deployed, the component is deployed.
Important: Expected deployment result:
The KubeVela Core suite is installed, which consists of the kubevela and cluster-gateway controllers.
The VelaUX suite is installed, which consists of the VelaUX WebServer and an Internet-facing SLB instance that serves as its endpoint.
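If you want to confirm this from the command line, the following is a minimal check. It assumes that the KubeVela Core and VelaUX workloads run in the vela-system namespace (the namespace mentioned in the note below).

# List the KubeVela Core and VelaUX workloads.
kubectl get pods -n vela-system
# List the Service that exposes VelaUX through the SLB instance.
kubectl get svc -n vela-system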
In the left-side navigation pane of the cluster management page, view the Service that is created for VelaUX.
Note: The VelaUX Service is created in the vela-system namespace. You can find its endpoint on the Service details page. The default username of VelaUX is admin and the default password is VelaUX12345.
Use the endpoint to log on to VelaUX.
Note: You can use Kubernetes APIs, VelaUX, or the Vela CLI to manage your applications and resources.
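For example, because a KubeVela application is stored as an Application custom resource (apiVersion core.oam.dev/v1beta1, as used in the examples below), you can also list applications directly with kubectl:

# List all KubeVela applications in all namespaces.
kubectl get applications.core.oam.dev -A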
Step 2: Install the Vela CLI (optional)
In addition to VelaUX, you can install the Vela CLI in your local environment and use it to manage applications and install add-ons.
macOS and Linux
curl -fsSl https://kubevela.io/script/install.sh | bash

Windows
powershell -Command "iwr -useb https://kubevela.io/script/install.ps1 | iex"
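After the installation completes, you can verify that the CLI is available, for example:

# Print the Vela CLI version.
vela version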
The first application
This example creates an application by using KubeVela. The following cloud resources are created:
A general-purpose pod with 0.25 vCPUs.
An Internet-facing SLB instance used to access the application.
A 30 GiB ultra disk.
Create a file named cube.yaml based on the following content.
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: cube
  namespace: default
spec:
  components:
    - name: cube
      properties:
        cpu: "0.25"
        exposeType: LoadBalancer # Declare an SLB instance.
        image: registry.cn-hangzhou.aliyuncs.com/acr-toolkit/ack-cube:1.0
        memory: 512Mi
        ports:
          - expose: true
            port: 80
            protocol: TCP
      traits:
        - properties:
            "alibabacloud.com/compute-class": "general-purpose" # Declare a general-purpose pod.
            "app": "demo-1"
          type: labels
        - properties:
            replicas: 1
          type: scaler
        - properties:
            pvc:
              - mountPath: /home/admin
                name: demo-pvc
                resources:
                  requests:
                    storage: 30Gi
                storageClassName: alicloud-disk-topology-alltype # Declare an ultra disk.
          type: storage
      type: webservice
  policies:
    - name: default
      properties:
        clusters:
          - local
        namespace: default
      type: topology
  workflow:
    mode:
      steps: DAG
    steps:
      - meta:
          alias: Deploy To default
        name: default
        properties:
          policies:
            - default
        type: deploy

Run the following command to deploy the YAML file by using the Vela CLI.
vela up -f cube.yaml -n default -v demo-v1

Expected results:
Applying an application in vela K8s object format...
I1108 15:35:33.369515   65870 apply.go:121] "creating object" name="cube" resource="core.oam.dev/v1beta1, Kind=Application"
App has been deployed
             Port forward: vela port-forward cube
                      SSH: vela exec cube
                  Logging: vela logs cube
               App status: vela status cube
                 Endpoint: vela status cube --endpoint
Application default/cube applied.

Run the following command to check the status of the pod.
vela status cube -n default

Expected results:
About:

  Name:         cube
  Namespace:    default
  Created at:   2023-11-08 15:35:33 +0800 CST
  Status:       running

Workflow:

  mode: DAG-DAG
  finished: true
  Suspend: false
  Terminated: false
  Steps
  - id: 6vkbhba12p
    name: default
    type: deploy
    phase: succeeded

Services:

  - Name: cube
    Cluster: local  Namespace: default
    Type: webservice
    Healthy Ready:1/1
    Traits:
      labels      scaler      storage

When the application enters the running state, the application is deployed. Run the following command to view the endpoint.
vela status cube -n default --endpoint

Expected results:
Please access cube from the following endpoints:
+---------+-----------+--------------------------+----------------------+-------+
| CLUSTER | COMPONENT | REF(KIND/NAMESPACE/NAME) |       ENDPOINT       | INNER |
+---------+-----------+--------------------------+----------------------+-------+
| local   | cube      | Service/default/cube     | http://your-endpoint | false |
+---------+-----------+--------------------------+----------------------+-------+

Use a web browser to access the endpoint. The application is displayed.
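You can also send a request to the endpoint from the command line, for example:

# Replace your-endpoint with the ENDPOINT value from the preceding output.
curl http://your-endpoint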
Use VelaUX to manage and maintain the application.

Use KubeVela to manage CI and O&M
1. Use Container Registry and a Git repository to build and deliver images
The following example describes how to use KubeVela and ACS to build images (CI) and deliver services (CD).
Use kaniko to build images and push them to Container Registry Personal Edition in the ACS environment.
Use a KubeVela Application to describe an application and deliver services.
Prerequisites
Container Registry Personal Edition or Enterprise Edition is activated. In this example, Container Registry Personal Edition is used.
A code repository is created. In this example, the Git repository https://gitee.com/AliyunContainerService/simple-web-demo.git is used.
Procedure
Enable the vela-workflow add-on.
Log on to VelaUX. Choose Extensions > Addons, search for vela-workflow, click the add-on, and then click Install. When the add-on changes to the enabled state, the installation is complete.

Create a Git token.
The token grants access to the code repository and is stored in a Secret that is used when images are built.
kubectl create secret generic git-token --from-literal='GIT_TOKEN=<YOUR-GIT-TOKEN>'

Create a Container Registry key.
In this example, Container Registry Personal Edition is used. A Secret is used to store the key. The Secret will be used when you build images and deploy applications.
Note: ACS integrates the Container Registry password-free plug-in. You can use this plug-in instead of using a Secret. For more information, see Pull images from Container Registry without using Secrets.
kubectl create secret docker-registry docker-regcred \
  --docker-server=registry.cn-beijing.aliyuncs.com \
  --docker-username=yourUserName \
  --docker-password=yourPassword

Declare a WorkflowRun.
apiVersion: core.oam.dev/v1alpha1
kind: WorkflowRun
metadata:
  name: demo-wr
  namespace: default
spec:
  context:
    image: registry.cn-beijing.aliyuncs.com/k8s-conformance/demo:v1
  workflowSpec:
    steps:
      - name: build-push
        type: build-push-image
        inputs:
          - from: context.image
            parameterKey: image
        properties:
          # You can specify the kaniko executor image in the kanikoExecutor field. The default is oamdev/kaniko-executor:v1.9.1.
          # kanikoExecutor: gcr.io/kaniko-project/executor:latest
          # You can specify the repository address and branch in context, or directly specify the context. For more information, see https://github.com/GoogleContainerTools/kaniko#kaniko-build-contexts.
          context:
            git: gitee.com/AliyunContainerService/simple-web-demo
            branch: main
          # This field will be overwritten by the image in inputs.
          image: my-registry/test-image:v1
          # Specify the Dockerfile path. The default is ./Dockerfile.
          # dockerfile: ./Dockerfile
          credentials:
            image:
              name: docker-regcred
            git:
              name: git-token
              key: GIT_TOKEN
      - name: apply-app
        type: apply-app
        inputs:
          - from: context.image
            parameterKey: data.spec.components[0].properties.image
        properties:
          data:
            apiVersion: core.oam.dev/v1beta1
            kind: Application
            metadata:
              name: demo-1
              namespace: default
            spec:
              components:
                - name: demo-1
                  properties:
                    cpu: "0.25"
                    exposeType: LoadBalancer # Declare an SLB instance.
                    image: image
                    memory: 512Mi
                    ports:
                      - expose: true
                        port: 80
                        protocol: TCP
                  traits:
                    - properties:
                        "alibabacloud.com/compute-class": "general-purpose" # Declare a general-purpose pod.
                        "alibabacloud.com/compute-qos": "default"
                        "app": "demo-1"
                      type: labels
                    - properties:
                        replicas: 2
                      type: scaler
                  type: webservice
              policies:
                - name: default
                  properties:
                    clusters:
                      - local
                    namespace: default
                  type: topology
              workflow:
                mode:
                  steps: DAG
                steps:
                  - meta:
                      alias: Deploy To default
                    name: default
                    properties:
                      policies:
                        - default
                    type: deploy

Save the content to a file named demo-wr.yaml and run kubectl apply -f demo-wr.yaml to submit it.

View the status and logs of the workflow.
vela workflow logs demo-wr -n default

# Expected output.
? Select a step to show logs:  [Use arrows to move, type to filter]
> build-push
  apply-app

After all tasks are completed, the job succeeds.
View the status and endpoint of the application.
# View the application list.
vela ls -n default

# Expected output.
APP     COMPONENT   TYPE         TRAITS          PHASE     HEALTHY   STATUS      CREATED-TIME
demo-1  demo-1      webservice   labels,scaler   running   healthy   Ready:2/2   2023-11-15 17:58:12 +0800 CST

# View the application details.
vela status demo-1 -n default

# Expected output.
About:

  Name:         demo-1
  Namespace:    default
  Created at:   2023-11-15 17:58:12 +0800 CST
  Status:       running

Workflow:

  mode: DAG-DAG
  finished: true
  Suspend: false
  Terminated: false
  Steps
  - id: 8nsijpwkfd
    name: default
    type: deploy
    phase: succeeded

Services:

  - Name: demo-1
    Cluster: local  Namespace: default
    Type: webservice
    Healthy Ready:2/2
    Traits:
      labels      scaler

# View the endpoint.
vela status demo-1 -n default --endpoint

# Expected output.
Please access demo-1 from the following endpoints:
+---------+-----------+--------------------------+----------------------+-------+
| CLUSTER | COMPONENT | REF(KIND/NAMESPACE/NAME) |       ENDPOINT       | INNER |
+---------+-----------+--------------------------+----------------------+-------+
| local   | demo-1    | Service/default/demo-1   | http://your-endpoint | false |
+---------+-----------+--------------------------+----------------------+-------+

Use the endpoint to access the application.

2. Use PDBs to ensure application high availability
You can create PodDisruptionBudgets (PDBs) and set maxUnavailable to guarantee a minimum number of available pods for an application. With KubeVela, you can use the apply-object step to deploy a PDB in the WorkflowRun.
We recommend that you create PDBs in production environments that require high availability to ensure that a minimum number of pods keep running for your application. This helps prevent the application from becoming unavailable.
Adjust the WorkflowRun.
apiVersion: core.oam.dev/v1alpha1
kind: WorkflowRun
metadata:
  name: demo-wr
  namespace: default
spec:
  context:
    image: registry.cn-beijing.aliyuncs.com/k8s-conformance/demo:v1
  workflowSpec:
    steps:
      ......
      - name: apply-app
        type: apply-app
        ......
      - name: apply-pdb
        type: apply-object
        properties:
          value:
            apiVersion: policy/v1
            kind: PodDisruptionBudget
            metadata:
              name: demo-pdb
            spec:
              maxUnavailable: 20% # The maximum number of unavailable pods is 20% of the total pods. This helps prevent pod eviction.
              selector:
                matchLabels:
                  app: demo-1 # Use a label selector to associate the PDB with the application.

Use kubectl to submit the updated WorkflowRun.
kubectl apply -f demo-wr.yaml

Rerun the workflow and view the status of the PDB.
# Rerun the workflow.
vela workflow restart demo-wr -n default

# Expected output.
Successfully restart workflow: demo-wr

# View the PDB.
kubectl get pdb

# Expected output.
NAME       MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
demo-pdb   1               N/A               1                     2s
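To double-check which pods the PDB protects, you can list the pods that carry the app: demo-1 label set by the labels trait. This is a minimal sketch and assumes that the application and the PDB are in the default namespace:

# List the pods selected by the PDB's label selector.
kubectl get pods -n default -l app=demo-1
# Show the PDB details, including the selector and current disruption status.
kubectl describe pdb demo-pdb -n default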
3. Use cloudshell to log on to containers
cloudshell is a plug-in of KubeVela. It provides the WebTerminal feature based on the open source project cloudtty.
Install cloudshell as an add-on.
Search for cloudshell on the Addons page of VelaUX and install it. When the add-on changes to the enabled state, the installation is complete.

Use WebTerminal to log on to a container.

4. Use Keda to perform auto scaling
Keda is an open source event-driven auto scaling framework. With KubeVela, you can integrate and extend Keda to meet auto scaling requirements in various scenarios.
Install Keda as an add-on.
Search for keda on the Addons page of VelaUX and install it. When the add-on changes to the enabled state, the installation is complete. The add-on also installs the keda-auto-scaler trait.

Add the keda-auto-scaler trait to the WorkflowRun.
Note: A cron-based autoscaler is added in this example. You can modify the configuration as needed.
apiVersion: core.oam.dev/v1alpha1
kind: WorkflowRun
metadata:
  name: demo-wr
  namespace: default
spec:
  context:
    image: registry.cn-beijing.aliyuncs.com/k8s-conformance/demo:v1
  workflowSpec:
    steps:
      ......
      - name: apply-app
        type: apply-app
        inputs:
          - from: context.image
            parameterKey: data.spec.components[0].properties.image
        properties:
          data:
            apiVersion: core.oam.dev/v1beta1
            kind: Application
            metadata:
              name: demo-1
              namespace: default
            spec:
              components:
                - name: demo-1
                  properties:
                    cpu: "0.25"
                    exposeType: LoadBalancer # Declare an SLB instance.
                    image: image
                    memory: 512Mi
                    ports:
                      - expose: true
                        port: 80
                        protocol: TCP
                  traits:
                    - properties:
                        "alibabacloud.com/compute-class": "general-purpose" # Declare a general-purpose pod.
                      type: labels
                    - properties:
                        replicas: 2
                      type: scaler
                    - type: keda-auto-scaler
                      properties:
                        triggers:
                          - type: cron
                            metadata:
                              timezone: Asia/Shanghai # Accepts a value from the IANA Time Zone Database.
                              start: 00 * * * * # At the 0th minute of every hour.
                              end: 10 * * * * # At the 10th minute of every hour.
                              desiredReplicas: "3"
                  type: webservice

Submit and rerun the workflow.
kubectl apply -f demo-wr.yaml

# Rerun the workflow.
vela workflow restart demo-wr -n default

# Expected output.
Successfully restart workflow: demo-wr

Query the status of the application.
The application details indicate that keda-auto-scaler is enabled.
vela status demo-1 -n default

Expected results:
About:

  Name:         demo-1
  Namespace:    default
  Created at:   2023-11-15 17:58:12 +0800 CST
  Status:       running

Workflow:

  mode: DAG-DAG
  finished: true
  Suspend: false
  Terminated: false
  Steps
  - id: ziwddaa6mt
    name: default
    type: deploy
    phase: succeeded

Services:

  - Name: demo-1
    Cluster: local  Namespace: default
    Type: webservice
    Healthy Ready:2/2
    Traits:
      labels      scaler      keda-auto-scaler
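If you want to inspect the underlying KEDA resources, the keda-auto-scaler trait is expected to render a KEDA ScaledObject. The following is a minimal sketch, assuming the resources are in the default namespace and that the webservice component creates a Deployment named demo-1:

# List the KEDA ScaledObjects created for the application.
kubectl get scaledobjects.keda.sh -n default
# Watch the replica count change around the configured cron window.
kubectl get deployment demo-1 -n default -w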