
Experience Knative on Alibaba Cloud

Learn how you can experience Knative on Alibaba Cloud's Container Service for Kubernetes

Knative Serving is a scale-to-zero and request-driven compute runtime environment built on Kubernetes and Istio to support the deployment and serving of serverless applications and functions. Knative Serving aims to provide Kubernetes extensions for deploying and running serverless workloads.

This post describes how to quickly build Knative Serving and implement automatic scaling with Alibaba Cloud Container Service for Kubernetes.

Building Knative Serving

Step 1: Prepare the Kubernetes environment

Alibaba Cloud Container Service for Kubernetes version 1.11.5 is now available. You can use the console to quickly and easily create a Kubernetes cluster. To learn more, see Create a Kubernetes cluster.

Step 2: Deploy Istio

Knative Serving runs on Istio. Currently, Alibaba Cloud Container Service for Kubernetes allows you to quickly install and configure Istio in just one click. If you're unsure how to do this, see Deploy Istio.

Log on to the Container Service - Kubernetes console. In the left-side navigation pane, choose Clusters > Clusters to go to the Clusters page. Then select the target cluster and choose More > Deploy Istio in the Actions column.


Set the parameters that appear on the Deploy Istio page, and then click Deploy Istio. The Istio environment is deployed within a minute or so. You can confirm the deployment by checking the pod status in the console.

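If you prefer the command line over the console, you can also confirm the Istio control-plane pods from kubectl (assuming your kubeconfig already points at the cluster):

```shell
# List the Istio control-plane pods; all of them should reach Running status.
kubectl get pods -n istio-system

# Block until every pod in the namespace reports Ready (times out after 5 minutes).
kubectl wait --for=condition=Ready pods --all -n istio-system --timeout=300s
```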

Step 3: Deploy Istio Ingress Gateway

For this step, log on to the Container Service - Kubernetes console. In the left-side navigation pane, choose Marketplace and then App Catalog. On the page that appears, find and click ack-istio-ingressgateway.


Click the Parameters tab. The default configuration of Istio Ingress Gateway is provided. Modify these parameters based on your specific requirements, and then click Create.


View the pod list in the istio-system namespace to check that the gateway is running.

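You can also verify the gateway from the command line. The `app=istio-ingressgateway` label and the service name used below are the chart's defaults, so adjust them if you changed the release configuration:

```shell
# Check that the ingress gateway pod is running.
kubectl get pods -n istio-system -l app=istio-ingressgateway

# Once the load balancer is provisioned, its address appears in the EXTERNAL-IP column.
kubectl get svc istio-ingressgateway -n istio-system
```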

Step 4: Deploy Knative CRDs

Log on to the Container Service - Kubernetes console. In the left-side navigation pane, choose Marketplace and then choose App Catalog. On the page that appears, find and click ack-knative-init.


Click Create to install the content required for Knative initialization, including Custom Resource Definitions (CRDs).

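To confirm that the Knative CRDs were registered, list them with kubectl. The exact set of CRDs depends on the Knative version shipped with the chart:

```shell
# List the Knative CRDs registered by the chart, e.g. services, routes,
# configurations, and revisions under the serving.knative.dev API group.
kubectl get crd | grep knative
```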

Step 5: Deploy Knative Serving

Log on to the Container Service - Kubernetes console. In the left-side navigation pane, choose Marketplace and then App Catalog. On the page that appears, find and click ack-knative-serving.


Next, click the Parameters tab. The default configuration of Knative Serving is provided. Modify these parameters as needed, and then click Create.


The four Helm charts required for installing Knative Serving are now installed. You can confirm this in the console.

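From the command line, the Knative Serving control-plane components land in the knative-serving namespace, and all pods there should reach Running:

```shell
# The controller, autoscaler, activator, and webhook pods should all be Running.
kubectl get pods -n knative-serving
```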

Experience Knative

Step 1: Deploy Knative Service for a Sample Autoscale App

For this first step, run the following command to deploy Knative Service for a sample autoscale app:

kubectl create -f autoscale.yaml

The content of the autoscale.yaml file is as follows:

apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: autoscale-go
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        metadata:
          annotations:
            # Target 10 in-flight-requests per pod.
            autoscaling.knative.dev/target: "10"
            autoscaling.knative.dev/class:  kpa.autoscaling.knative.dev
        spec:
          container:
            image: registry.cn-beijing.aliyuncs.com/wangxining/autoscale-go:0.1
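After creating the service, you can watch it become ready with kubectl; ksvc is the short name for the Knative Service resource:

```shell
# The service is ready once its DOMAIN and READY columns are populated.
kubectl get ksvc autoscale-go -n default

# Inspect the revision and the pods the service created.
kubectl get revisions -n default
kubectl get pods -n default
```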

Step 2: Access Knative Service for the Sample Autoscale App

For this step, locate the IP address of the Istio ingress gateway and export it as an environment variable.

export IP_ADDRESS=`kubectl get svc istio-ingressgateway --namespace istio-system --output jsonpath="{.status.loadBalancer.ingress[*].ip}"`

Send a request to the autoscale app and check the resource consumption.

curl --header "Host: autoscale-go.default.{domain.name}" "http://${IP_ADDRESS?}?sleep=100&prime=10000&bloat=5"

Note: You'll want to replace {domain.name} with your domain name suffix. In the default example, the suffix is aliyun.com.

curl --header "Host: autoscale-go.default.aliyun.com" "http://${IP_ADDRESS?}?sleep=100&prime=10000&bloat=5"
Allocated 5 Mb of memory.
The largest prime less than 10000 is 9973.
Slept for 100.16 milliseconds.

Run the following command to install the load generator:

go get -u github.com/rakyll/hey

Maintain 50 concurrent requests and send traffic for 30 seconds.

hey -z 30s -c 50 \
  -host "autoscale-go.default.aliyun.com" \
  "http://${IP_ADDRESS?}?sleep=100&prime=10000&bloat=5" \
  && kubectl get pods

Within the 30 seconds, the Knative Service automatically scales up as the number of requests increases: with a target of 10 in-flight requests per pod and 50 concurrent requests of load, the autoscaler brings up roughly five pods.

Summary:
  Total:    30.1126 secs
  Slowest:    2.8528 secs
  Fastest:    0.1066 secs
  Average:    0.1216 secs
  Requests/sec:    410.3270

  Total data:    1235134 bytes
  Size/request:    99 bytes

Response time histogram:
  0.107 [1]    |
  0.381 [12305]    |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.656 [0]    |
  0.930 [0]    |
  1.205 [0]    |
  1.480 [0]    |
  1.754 [0]    |
  2.029 [0]    |
  2.304 [0]    |
  2.578 [27]    |
  2.853 [23]    |


Latency distribution:
  10% in 0.1089 secs
  25% in 0.1096 secs
  50% in 0.1107 secs
  75% in 0.1122 secs
  90% in 0.1148 secs
  95% in 0.1178 secs
  99% in 0.1318 secs

Details (average, fastest, slowest):
  DNS+dialup:    0.0001 secs, 0.1066 secs, 2.8528 secs
  DNS-lookup:    0.0000 secs, 0.0000 secs, 0.0000 secs
  req write:    0.0000 secs, 0.0000 secs, 0.0023 secs
  resp wait:    0.1214 secs, 0.1065 secs, 2.8356 secs
  resp read:    0.0001 secs, 0.0000 secs, 0.0012 secs

Status code distribution:
  [200]    12356 responses



NAME                                             READY   STATUS        RESTARTS   AGE
autoscale-go-00001-deployment-5fb497488b-2r76v   2/2     Running       0          29s
autoscale-go-00001-deployment-5fb497488b-6bshv   2/2     Running       0          2m
autoscale-go-00001-deployment-5fb497488b-fb2vb   2/2     Running       0          29s
autoscale-go-00001-deployment-5fb497488b-kbmmk   2/2     Running       0          29s
autoscale-go-00001-deployment-5fb497488b-l4j9q   1/2     Terminating   0          4m
autoscale-go-00001-deployment-5fb497488b-xfv8v   2/2     Running       0          29s

Summary

As shown in this post, you can quickly build Knative Serving and implement automatic scaling based on Alibaba Cloud's Container Service for Kubernetes. We invite you to use Alibaba Cloud Container Service to quickly build Knative Serving and easily integrate it into your project development.

By Xi Ning Wang
