This topic provides answers to some frequently asked questions (FAQ) about Knative in Container Compute Service (ACS).
What are the differences between Alibaba Cloud Knative and open source Knative?
Alibaba Cloud Knative provides enhanced service capabilities based on the open source Knative, including O&M, ease of use, elasticity, Ingress, event-driven service, and monitoring and alerting. For more information, see Comparison between Alibaba Cloud Knative and open source Knative.
Which Ingress do I need to use when I install Knative?
Alibaba Cloud Knative supports four types of Ingresses: Application Load Balancer (ALB) Ingresses, Microservices Engine (MSE) Ingresses, Service Mesh (ASM) Ingresses, and Kourier Ingresses. ALB Ingresses are suitable for load balancing at the application layer. MSE cloud-native Ingresses are suitable for microservice scenarios. ASM Ingresses provide the Istio capabilities. If you require only basic Ingress capabilities, you can use Kourier Ingresses. For more information, see Suggestions on selecting Knative Ingresses.
What permissions are required by a RAM user or RAM role for using Knative?
Permissions to access all namespaces of the cluster are required. Perform the following steps to complete authorization:
1. Log on to the ACS console. In the left-side navigation pane, click Permission Management.
2. Click the RAM Users tab, find the RAM user that you want to manage in the list, and then click Modify Permissions.
3. In the Add Permissions section, select the cluster that you want to authorize, select all namespaces, and then complete the authorization.
How long does it take to reduce the number of pods to zero?
The amount of time that is required to reduce the number of pods to zero depends on the following three parameters:
- stable-window: the time window before the scale-in operation is performed. Before pods are scaled in, the system observes and evaluates the metrics within this time window instead of immediately performing the scale-in operation.
- scale-to-zero-grace-period: the graceful period before the number of pods is reduced to zero. During this period, the system does not immediately stop or delete the last pod even if no new requests are received. This helps respond to burst traffic.
- scale-to-zero-pod-retention-period: the retention period of the last pod before the number of pods is reduced to zero.
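In open source Knative, the stable window and the retention period of the last pod can also be overridden per Service by adding autoscaling annotations to the revision template. The following sketch uses the standard open source Knative annotation names; the Service name and image are hypothetical placeholders:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld          # hypothetical Service name
spec:
  template:
    metadata:
      annotations:
        # Observe metrics for 60 seconds before scaling in.
        autoscaling.knative.dev/window: "60s"
        # Keep the last pod for at least 10 minutes after traffic stops.
        autoscaling.knative.dev/scale-to-zero-pod-retention-period: "10m"
    spec:
      containers:
        - image: registry.example.com/helloworld:latest  # hypothetical image
```

Annotation values on the revision template take precedence over the cluster-wide autoscaler configuration for that Service.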
To reduce the number of pods to zero, make sure that the following conditions are met:
1. No requests are received during the time window that is specified by the stable-window parameter.
2. The retention period of the last pod, which is specified by the scale-to-zero-pod-retention-period parameter, has elapsed.
3. The switch to the proxy mode for the Knative Service completes within the graceful period that is specified by the scale-to-zero-grace-period parameter.
The retention period of the last pod before the number of pods is reduced to zero does not exceed the value that is calculated based on the following formula: stable-window + max(scale-to-zero-grace-period, scale-to-zero-pod-retention-period). If you want to explicitly control how long the last pod is retained before the number of pods is reduced to zero, we recommend that you use the scale-to-zero-pod-retention-period parameter.
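These three parameters can be set cluster-wide in the config-autoscaler ConfigMap in the knative-serving namespace, which is where open source Knative reads its autoscaler settings. The following is an illustrative sketch; the values are example durations, not recommendations:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-autoscaler
  namespace: knative-serving
data:
  # Metrics observation window before any scale-in decision.
  stable-window: "60s"
  # Graceful period before the last pod is removed.
  scale-to-zero-grace-period: "30s"
  # Minimum retention period of the last pod.
  scale-to-zero-pod-retention-period: "1m"
```

With these example values, the last pod is retained for at most 60s + max(30s, 1m) = 2 minutes after traffic stops.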
Is a fee charged for using the Knative Activator component of ACS?
Yes. The Activator component is a data plane component. It runs in pods and occupies instance resources.
How do I configure a listening port for a Knative Service?
Set the listening port of your application to the port specified by containerPort in the Knative Service. The default port is 8080. If you want to use a custom listening port, see Configure custom listening ports in Knative.
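A custom listening port is declared through the ports field of the container in the Knative Service spec. The following minimal sketch assumes a hypothetical Service name and image; the application inside the container must actually listen on the declared port:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: custom-port-demo    # hypothetical Service name
spec:
  template:
    spec:
      containers:
        - image: registry.example.com/app:latest  # hypothetical image
          ports:
            # Knative routes traffic to this port instead of the default 8080.
            - containerPort: 9000
```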