Alibaba Cloud Service Mesh: Use Knative on ASM to deploy a serverless application

Last Updated: Mar 15, 2024

Service Mesh (ASM) integrates closely with the Knative Serving component deployed in a Container Service for Kubernetes (ACK) cluster or an ACK Serverless cluster to simplify the deployment and management of serverless applications in various cluster environments. Knative on ASM provides an all-in-one solution for building, deploying, and managing container-based applications, especially when you require automatic scaling and on-demand billing, and it improves development efficiency and service elasticity. This topic describes how to use Knative on ASM to deploy a Knative Service.

Prerequisites

  • An ASM instance of version 1.16.3.50 or later is created, and an ACK or ACK Serverless cluster is added to the instance. For more information, see Create an ASM instance and Add a cluster to an ASM instance.

    Important
    • When you create an ASM instance, make sure that Allow data plane cluster KubeAPI to access Istio CR is selected in the Resource configuration section. This check box is selected by default and allows you to access Istio resources by using the Kubernetes API of data-plane clusters.

    • Access to the API server of the ACK or ACK Serverless cluster is enabled to support quick access.

    • The ASM instance and the ACK or ACK Serverless cluster must be in the same virtual private cloud (VPC) in the same region, and use the same vSwitch.

  • An ingress gateway is created for the cluster. In the following example, an ASM ingress gateway is used. For more information, see Create an ingress gateway.

    Note

    The ASM gateway allows you to split traffic for different Knative Revisions, access gRPC services, configure timeout and retry settings, mount Transport Layer Security (TLS) certificates, and perform external authentication and authorization. For more information, see Overview of ASM gateways.

Step 1: Enable Knative on ASM

For an ASM instance whose version is earlier than 1.18.2.104

  1. Deploy the Knative Serving component in the ACK or ACK Serverless cluster. Select Kourier for the Gateway parameter.

    • For more information about how to deploy Knative in an ACK cluster, see Deploy Knative.

    • For more information about how to deploy Knative in an ACK Serverless cluster, see Install Knative.

      Important

      After the Knative Serving component is installed, click the Components tab. In the Add-on Component section, uninstall the Kourier component.

  2. Log on to the ASM console. In the left-side navigation pane, choose Service Mesh > Mesh Management.

  3. On the Mesh Management page, click the name of the ASM instance. In the left-side navigation pane, choose Ecosystem > Knative on ASM.

  4. On the Knative on ASM page, click Enable Knative on ASM.
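
After Knative on ASM is enabled, you can optionally verify that the Kourier component has been removed from the cluster. The following command is a minimal sketch; it searches all namespaces of the data-plane cluster for remaining Kourier pods and is expected to return no output.

    kubectl get pods --all-namespaces | grep -i kourier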

For an ASM instance whose version is 1.18.2.104 or later

  1. Log on to the ASM console. In the left-side navigation pane, choose Service Mesh > Mesh Management.

  2. On the Mesh Management page, click the name of the ASM instance. In the left-side navigation pane, choose Ecosystem > Knative on ASM.

  3. On the Knative on ASM page, select a Knative version based on your business requirements and click Enable Knative on ASM.

  4. Deploy Knative in the ACK or ACK Serverless cluster. Select ASM for the Gateway parameter.

    • For more information about how to deploy Knative in an ACK cluster, see Deploy Knative.

    • For more information about how to deploy Knative in an ACK Serverless cluster, see Install Knative.
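
Regardless of the ASM instance version, you can optionally verify that the Knative Serving components are running before you deploy a Service. The following command is a minimal sketch that assumes Knative Serving is installed in the standard knative-serving namespace; all listed pods should be in the Running state.

    kubectl get pods -n knative-serving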

Step 2: Deploy a Knative Service

Knative on ASM allows you to deploy a Knative Service in the ACK console or by using a YAML configuration file. You can select a deployment method based on your business requirements.

Method 1: Deploy a Knative Service in the ACK console

  1. Log on to the ACK console and click Clusters in the left-side navigation pane.

  2. On the Clusters page, click the name of the cluster that you want to manage and choose Applications > Knative in the left-side navigation pane.

  3. On the Knative page, click the Services tab. In the upper-right corner, click Create Service, set the required parameters, and then click Create.

    For more information about parameters, see Use Knative to deploy serverless applications.

    • Namespace: Select the namespace to which the Service belongs. Example: default.

    • Service Name: Enter a name for the Service. Example: helloworld-go.

    • Image Name: To select an image, click Select Image. In the Select Image dialog box, select an image and click OK. You can also specify an image in a private registry. The image address must be in the domainname/namespace/imagename:tag format. Example: registry.cn-hangzhou.aliyuncs.com/knative-sample/helloworld-go.

    • Image Version: To select an image version, click Select Image Version. In the Image Version dialog box, select an image version and click OK. Example: 73fbdd56.

    • Access Protocol: HTTP and gRPC are supported. Example: HTTP.

    • Container Port: The container port that you want to expose. The port number must be in the range of 1 to 65535. Example: 8080.

    • Environment Variables: Click Advanced. In the Environment Variables section, click Add to set environment variables as key-value pairs. In this example, set Type to Custom, Variable Key to TARGET, and Value/ValueFrom to Knative.

Method 2: Deploy a Knative Service by using a YAML configuration file

  1. Save the following content as a file named helloworld-go.yaml.

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: helloworld-go
      annotations:
        knative.k8s.alibabacloud/tls: "false"
    spec:
      template:
        spec:
          containers:
            - image: registry.cn-hangzhou.aliyuncs.com/acs/helloworld-go:160e4dc8
              ports:
                - containerPort: 8080
              env:
                # TARGET is read by the helloworld-go sample and echoed in its response ("Hello Knative!").
                - name: TARGET
                  value: "Knative"
  2. Use kubectl to connect to the ACK cluster. Run the following command to create a Knative Service. Wait until the Knative Service is created.

    kubectl apply -f helloworld-go.yaml
  3. Run the following command to view the list of Knative Services:

    kubectl get ksvc

    Expected output:

    NAME            URL                                        LATESTCREATED         LATESTREADY           READY   REASON
    helloworld-go   http://helloworld-go.default.example.com   helloworld-go-00001   helloworld-go-00001   True
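
    Optionally, you can wait for the Service to become ready and list its Revisions before you proceed. The following commands are a minimal sketch; kubectl wait works here because the Knative Service resource exposes a Ready condition.

    kubectl wait ksvc/helloworld-go --for=condition=Ready --timeout=180s
    kubectl get revisions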

Step 3: Query the gateway address

Method 1: Query the gateway address in the ACK console

  1. Log on to the ACK console and click Clusters in the left-side navigation pane.

  2. On the Clusters page, click the name of the cluster that you want to manage and choose Applications > Knative in the left-side navigation pane.

  3. Click the Services tab. In the list of Services, click the name of the Knative Service whose gateway address you want to query. On the details page, in the Basic Information section, view and record the Gateway address and the Default Domain.

Method 2: Query the gateway address in the ASM console

  1. Log on to the ASM console. In the left-side navigation pane, choose Service Mesh > Mesh Management.

  2. On the Mesh Management page, click the name of the desired ASM instance. In the left-side navigation pane, choose ASM Gateways > Ingress Gateway.

  3. In the Service address section on the Ingress Gateway page, view and record the gateway address.
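
Alternatively, if you can use kubectl to connect to the data-plane cluster, you can query the gateway address from the command line. The following command is a minimal sketch that assumes the ASM ingress gateway is exposed as a LoadBalancer Service named istio-ingressgateway in the istio-system namespace; adjust the Service name and namespace to match the ingress gateway that you created.

    kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}'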

Step 4: Access the Knative Service

After the Knative Service is deployed, you can point its domain name to the IP address of a gateway to associate the Service with the gateway. This allows you to access the Knative Service by using its URL.

  1. Add the following information to the hosts file to point the domain name of the Service to the IP address of the gateway that you obtained in Step 3: Query the gateway address.

    The following sample shows the content to add to the hosts file. Replace xx.xx.xxx.xx with the gateway address that you obtained.

    xx.xx.xxx.xx helloworld-go.default.example.com
  2. After you add the preceding information to the hosts file, you can directly access the Knative Service by using its domain name.

    • Access the Knative Service by running the following command

      curl http://helloworld-go.default.example.com
      
      # Expected output:
      Hello Knative!
    • Access the Knative Service in a browser

      Enter http://helloworld-go.default.example.com in a browser to access the Service.

      Expected output: Hello Knative!
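
If you prefer not to modify the hosts file, you can pass the domain name in the Host header instead. The following curl command is a minimal sketch; replace xx.xx.xxx.xx with the gateway address that you obtained in Step 3: Query the gateway address. The expected output is the same: Hello Knative!

    curl -H "Host: helloworld-go.default.example.com" http://xx.xx.xxx.xx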

References

  • The default domain name of a Knative Service is example.com. Knative on ASM allows you to use a custom domain name as the default domain name. For more information, see Set a custom domain name in Knative on ASM.

  • An ASM gateway supports HTTPS and allows you to dynamically load certificates. When you use Knative on ASM, you can use an ASM gateway to access the Knative Service over HTTPS. For more information, see Use an ASM gateway to access a Knative Service over HTTPS.

  • Knative on ASM allows you to perform a canary release based on traffic splitting for a Knative Service. When you create a Knative Service, Knative automatically creates the first Revision for the Service. Whenever the configuration of the Knative Service changes, Knative creates a new Revision and modifies the percentage of traffic that is distributed to different Revisions to implement a canary release. For more information, see Perform a canary release based on traffic splitting for a Knative Service by using Knative on ASM.

  • Knative Serving adds the Queue Proxy container to each pod. The Queue Proxy container sends the concurrency metrics of the application containers to Knative Pod Autoscaler (KPA). After KPA receives the metrics, KPA automatically adjusts the number of pods provisioned for a Deployment based on the number of concurrent requests and related autoscaling algorithms. For more information, see Enable autoscaling of pods based on the number of requests.
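
For example, the concurrency target that KPA uses for its scaling decisions can be tuned by adding Knative autoscaling annotations to the Revision template of the Knative Service in this topic. The following YAML is a minimal sketch; the annotation names follow the upstream Knative Pod Autoscaler documentation, and the values (a target of 10 concurrent requests per pod and a maximum of 5 pods) are illustrative assumptions rather than recommendations.

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: helloworld-go
    spec:
      template:
        metadata:
          annotations:
            # Target number of concurrent requests per pod that KPA scales toward.
            autoscaling.knative.dev/target: "10"
            # Upper bound on the number of pods (illustrative value).
            autoscaling.knative.dev/maxScale: "5"
        spec:
          containers:
            - image: registry.cn-hangzhou.aliyuncs.com/acs/helloworld-go:160e4dc8
              ports:
                - containerPort: 8080
              env:
                - name: TARGET
                  value: "Knative"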