
Harden Kubernetes

Last Updated: May 07, 2018

Kubernetes provides many options that can greatly increase application security. To configure these options, you must be familiar with Kubernetes and its deployment security requirements. This article describes a security hardening solution for Kubernetes to help you deploy secure Kubernetes applications.

Security hardening solution

Make sure that the image does not have security vulnerabilities

Before deployment, make sure that all operating system software and Kubernetes software is at the latest official version to prevent intrusions that exploit known vulnerabilities after deployment.

During O&M, continuously scan for security vulnerabilities, because the container may include outdated packages with known common vulnerabilities and exposures (CVEs). New vulnerabilities are disclosed every day, so this is not a one-time task: continuous security assessment of images is critical.

Periodically apply security updates to your environment. Once a vulnerability is found in a running container, update the image and immediately redeploy the container. Do not patch a running container directly (for example, with apt-get update), because this breaks the relationship between the image and the container.

You can use the Kubernetes rolling update feature to gradually replace running containers with the updated image.
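As a sketch, such a rolling update might look like the following; the deployment name, container name, namespace, and registry address are hypothetical:

```shell
# Point the deployment at the rebuilt, patched image.
kubectl set image deployment/web web=registry.example.com/web:1.2.4 --namespace=prod

# Watch Kubernetes replace the running pods gradually.
kubectl rollout status deployment/web --namespace=prod
```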

Make sure that only authorized images are used in your environment

If an image that does not comply with the organization's policy runs in your environment, the organization may be exposed to a vulnerable or even malicious container. Downloading and running an image from an unknown source is dangerous: it is equivalent to running software from an unknown vendor on a production server.

If you use a private image registry to store your approved images, the number of images that can enter your environment is greatly reduced. We recommend that you add a security assessment, such as vulnerability scanning, to your continuous integration (CI) pipeline as part of the build process.

The CI pipeline must guarantee that only approved code is used to build images. After an image is successfully built, scan it for security vulnerabilities. The image can be pushed to the private image registry only when no vulnerability is found; images that fail the security assessment must not be pushed to the registry.
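Such a gate can be sketched in the CI pipeline as follows, assuming the Trivy scanner is installed and using hypothetical image names:

```shell
#!/bin/sh
set -e
IMAGE="registry.example.com/myapp:${BUILD_ID}"

# Build only from approved, reviewed source.
docker build -t "$IMAGE" .

# Exit non-zero (failing the pipeline) if HIGH or CRITICAL CVEs are found.
trivy image --exit-code 1 --severity HIGH,CRITICAL "$IMAGE"

# Reached only when the scan passes.
docker push "$IMAGE"
```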

The Kubernetes image authorization plug-in has been completed and is expected to ship with Kubernetes 1.4. This plug-in can be used to block the distribution of unauthorized images. For more information, see the Kubernetes documentation.

Restrict direct access to Kubernetes nodes

Restrict logon to Kubernetes nodes through SSH to eliminate unauthorized access to host resources. Instead, require users to run kubectl exec, which provides access to the container without access to the host.
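For example, instead of an SSH session on the node, a user can open a shell inside the container (pod and namespace names are hypothetical):

```shell
kubectl exec -it my-pod --namespace=my-namespace -- /bin/sh
```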

You can use the Kubernetes authorization plug-in to further control user access to resources. This plug-in allows you to set fine-grained access control rules for specific namespaces, containers, and operations.

Modify the default port

The Kubernetes API server provides the Kubernetes API. It generally runs as a single process on the kubernetes-master node.

The Kubernetes API server provides the following HTTP ports by default:

Localhost port

  • The default port of the HTTP service is 8080 (flag: --insecure-port).
  • The default bind address is that of the local host (flag: --insecure-bind-address).
  • No authentication or authorization check is performed for this HTTP service.
  • Access to this port is assumed to be protected by restricting access to the host itself.

Secure port

  • The default port is 6443 (flag: --secure-port).
  • The default bind address is that of the first non-localhost network interface (flag: --bind-address).
  • The certificate and key files are specified with --tls-cert-file and --tls-private-key-file.
  • A token file or client certificates are used for authentication.
  • Policy-based authorization is used.

For security reasons, the read-only port has been removed and replaced with the service account mechanism.
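Using the flags listed above, an API server invocation that moves traffic to the secure port and disables the insecure localhost port entirely might be sketched as follows (file paths are hypothetical):

```shell
kube-apiserver \
  --secure-port=6443 \
  --bind-address=0.0.0.0 \
  --tls-cert-file=/etc/kubernetes/pki/apiserver.crt \
  --tls-private-key-file=/etc/kubernetes/pki/apiserver.key \
  --insecure-port=0    # 0 disables the unauthenticated HTTP port
```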

Access control of the API management port

In some configurations, a proxy (such as Nginx) runs on the same machine as the API server process. The proxy serves HTTPS with authentication on port 443 and accesses the API server on port 8080 of the local host. In these configurations, the secure port is usually set to 6443.

You can use ECS security group firewall rules to restrict external access to the HTTPS service on port 443.

The preceding are default configurations and may vary with cloud providers. You can flexibly configure these parameters and adjust their values to fit your business scenario.

Create management boundaries between resources

Restricting user permissions can reduce the impact of errors or malicious activities. Kubernetes namespaces allow you to allocate resources to logically named groups. Resources created in one namespace are invisible to other namespaces.

By default, each resource that a user creates in a Kubernetes cluster runs in the namespace named "default". You can create additional namespaces and assign resources and users to them. You can also use a Kubernetes authorization plug-in to create policies that isolate different users' access requests into different namespaces.

Example: The following policy allows the user "alice" to read pods in the namespace "fronto".

  {
    "apiVersion": "abac.authorization.kubernetes.io/v1beta1",
    "kind": "Policy",
    "spec": {
      "user": "alice",
      "namespace": "fronto",
      "resource": "pods",
      "readonly": true
    }
  }
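An additional namespace such as the "fronto" namespace used above can be created directly with kubectl:

```shell
kubectl create namespace fronto
```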

Define resource quotas

If you run containers without resource limits, your system may suffer DoS attacks or interference from other tenants. To avoid and minimize these risks, define resource quotas.

By default, all resources in a Kubernetes cluster are created with unbounded CPU and memory. You can create a resource quota policy and attach it to a Kubernetes namespace to limit the CPU and memory usage of pods. Such a quota can be defined in a file such as compute-resources.yaml.

For example, the following compute-resources.yaml file limits the namespace to 4 pods, a total CPU request of 1 core with a total CPU limit of 2 cores, and a total memory request of 1 GiB with a total memory limit of 2 GiB.

  apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: compute-resources
  spec:
    hard:
      pods: "4"
      requests.cpu: "1"
      requests.memory: 1Gi
      limits.cpu: "2"
      limits.memory: 2Gi

Use the following command to allocate the resource quota to a namespace:

  kubectl create -f ./compute-resources.yaml --namespace=myspace
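You can then confirm the quota and the namespace's current usage against it:

```shell
kubectl describe quota compute-resources --namespace=myspace
```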

Divide network security domains

If you run different applications on the same Kubernetes cluster, a malicious program in one application may attack the others. Therefore, it is important to segment the network so that a container can communicate only with the containers it is authorized to reach.

One challenge of deploying Kubernetes is creating network segmentation between pods, services, and containers, because container network identifiers (IP addresses) change dynamically and containers can communicate with each other both within a node and across nodes.

To restrict communication within the cluster, you can deploy Kubernetes with a network firewall or an SDN solution. The network policy API addresses the need to create firewall rules between pods and to limit the networks a container can access. The following network policy allows only the frontend pods to access the backend pods:

  POST /apis/net.alpha.kubernetes.io/v1alpha1/namespaces/tenant-a/networkpolicys
  {
    "kind": "NetworkPolicy",
    "metadata": {
      "name": "pol1"
    },
    "spec": {
      "allowIncoming": {
        "from": [
          { "pods": { "segment": "frontend" } }
        ],
        "toPorts": [
          { "port": 80, "protocol": "TCP" }
        ]
      },
      "podSelector": {
        "segment": "backend"
      }
    }
  }
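The endpoint above belongs to an alpha proposal of the API; in later Kubernetes versions the same policy is expressed with the stable networking.k8s.io/v1 NetworkPolicy resource. A sketch, assuming the pods carry a segment label:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: pol1
  namespace: tenant-a
spec:
  podSelector:
    matchLabels:
      segment: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          segment: frontend
    ports:
    - protocol: TCP
      port: 80
```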

See the Kubernetes documentation for more network policy examples.

Apply a security context to your pods and containers

When designing your containers and pods, make sure that you configure a security context for your pods, containers, and storage volumes. A security context is an attribute defined in the deployment YAML file. It controls the security parameters assigned to pods, containers, and storage volumes. Some important parameters include:

Security context setting                       Description
SecurityContext > runAsNonRoot                 Requires the container to run as a non-root user.
SecurityContext > capabilities                 Controls the Linux capabilities granted to the container.
SecurityContext > readOnlyRootFilesystem       Mounts the container's root file system as read-only.
PodSecurityContext > runAsNonRoot              Prevents any container in the pod from running as the root user.

The following example shows a pod definition with the security environment parameters:

  apiVersion: v1
  kind: Pod
  metadata:
    name: hello-world
  spec:
    containers:
    # specification of the pod's containers
    # ...
    securityContext:
      readOnlyRootFilesystem: true
      runAsNonRoot: true
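The capabilities parameter from the table above is set per container. The following sketch drops all Linux capabilities and adds back only NET_BIND_SERVICE (the container name and image are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-world-caps
spec:
  containers:
  - name: app
    image: example/app:latest    # hypothetical image
    securityContext:
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
        add: ["NET_BIND_SERVICE"]
```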

See the Kubernetes documentation for more security policy information.

API server authentication and authorization

The API server permission control involves authentication, authorization, and admission control.

Authentication

When a client initiates an API request to Kubernetes through a non-read-only port, Kubernetes authenticates the client's identity (checks that the user is who the user claims to be) by certificate, token, or basic information.

  • Certificate-based authentication

    Set the startup parameter of the API server: --client_ca_file=SOMEFILE.

    The client certificate is verified against the certificate authorities in the referenced file. If verification succeeds, the subject of the certificate is used as the username for the request.

  • Token-based authentication

    Set the startup parameter of the API server: --token_auth_file=SOMEFILE

    The token file consists of three columns: token, username, and user ID. When using a token for authentication, add the Authorization header field to HTTP requests and set its value to Bearer SOMETOKEN.

  • Authentication based on the basic information

    Set the startup parameter of the API server: --basic_auth_file=SOMEFILE

    The basic-auth file consists of three columns: password, username, and user ID. If you change a password in the file, the change takes effect only after you restart the API server. When using basic information for authentication, add the Authorization header field to HTTP requests and set its value to Basic BASE64ENCODEDUSER:PASSWORD.
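The BASE64ENCODEDUSER:PASSWORD value is simply the base64 encoding of user:password; a sketch with hypothetical credentials:

```shell
# Encode "alice:secret" for the Authorization: Basic header.
printf '%s' 'alice:secret' | base64
# -> YWxpY2U6c2VjcmV0
```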

Authorization

In Kubernetes, authentication and authorization are separate, and authorization takes place after authentication is completed. Authentication checks whether the user who initiates an API request is who the user claims to be, while authorization determines whether this user has permission to initiate the API request. That is, authorization is based on the result of authentication.

The Kubernetes authorization module applies to all HTTP access requests to the API server; requests to the read-only port require neither authentication nor authorization. When the API server starts, authorization_mode is set to AlwaysAllow by default, which allows all requests.

The Kubernetes authorization module checks each HTTP request, extracts the relevant attributes from the request context (for example, user, resource kind, or namespace), and compares them with the access control rules. An API request must match at least one access control rule before it is processed.

Currently, Kubernetes supports and implements the following authorization modes (authorization_mode), which can be selected by passing parameters when the API server starts:

  --authorization_mode=AlwaysDeny
  --authorization_mode=AlwaysAllow
  --authorization_mode=ABAC

In AlwaysDeny mode, all requests are rejected (usually used for testing). In AlwaysAllow mode, all requests are allowed; this is the default when the API server starts. The Attribute-Based Access Control (ABAC) mode allows you to create custom authorization access control rules.

ABAC mode

In an API request, four attributes are used for the user authorization process:

  • UserName: String type, used to identify the user who initiates the request. If the request is not authenticated, the string is empty.

  • ReadOnly: Bool type, used to identify whether only the read-only operations are performed for the request. (GET is a read-only operation.)

  • Kind: String type, used to identify the type of the Kubernetes resource object to access. The Kind attribute is not empty when API endpoints such as /api/v1beta1/pods are accessed. However, the Kind attribute is empty when other endpoints such as /version and /healthz are accessed.

  • Namespace: String type, used to identify the namespace in which the Kubernetes resource object to be accessed is located.

For the ABAC mode, besides passing the --authorization_mode=ABAC option when the API server starts, you must specify --authorization_policy_file=SOME_FILENAME. Each line of the policy file is a JSON object: a non-nested map data structure that represents one access control rule. An access control rule object is a map containing the following fields:

  • user: User string, as specified in --token_auth_file.
  • readonly: true or false. true indicates that the rule applies only to GET requests.
  • kind: Type of the Kubernetes built-in resource object, such as pods or events.
  • namespace: Namespace to which the rule applies. It can be abbreviated as "ns".

The following shows a simple access control rule file, with each line defining a rule:

Note: The default field value is equivalent to the zero value (empty string, 0, or false) of the field type.

  {"user":"admin"}
  ## The administrator can perform any operation, without restriction on namespace, resource type, or request type.
  {"user":"alice", "ns": "projectCaribou"}
  ## The alice user can perform any operation in the namespace "projectCaribou", without restriction on resource type or request type.
  {"user":"kubelet", "readonly": true, "kind": "pods"}
  ## The kubelet user has permission to read information about any pod.
  {"user":"kubelet", "kind": "events"}
  ## The kubelet user has permission to read and write any event.
  {"user":"bob", "kind": "pods", "readonly": true, "ns": "projectCaribou"}
  ## The bob user has permission to read information about all pods in the namespace "projectCaribou".

During authorization, the attributes of an API request are compared with the corresponding fields of each rule in the access control rule file. When the API server receives a request, the request attributes are already determined; any attribute that is not set is assigned the zero value of its type (empty string, 0, or false). The matching rules are as follows:

  • If a field in an access control rule is left at its zero value, it matches any value of the corresponding request attribute.
  • If an attribute in the API request is the zero value, it matches only a rule field that is also the zero value.
  • If both the request attribute and the rule field are non-empty, they match only when the two values are equal.
  • The request's attribute tuple is checked against the rules in the file one by one. If any rule matches, the request is authorized; if no rule matches, authorization fails.
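As an illustration against the rule file above, consider two hypothetical request tuples:

```
{"user":"bob", "readonly":true, "kind":"pods", "ns":"projectCaribou"}
## Matches the rule {"user":"bob", "kind":"pods", "readonly": true, "ns": "projectCaribou"}; the request is authorized.
{"user":"bob", "readonly":false, "kind":"pods", "ns":"projectCaribou"}
## A write request: readonly=false does not equal the rule's readonly=true, and no other rule names "bob", so authorization fails.
```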

See the Kubernetes documentation for more information about Kubernetes API access control.

Record all logs

Kubernetes provides cluster-level logging, which records container activity to a central log store. When a cluster is created, the stdout and stderr output of each container can be collected by the Fluentd service running on each node, stored in Stackdriver or Elasticsearch, and then viewed with Kibana.
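The output of an individual container can also be retrieved directly (pod, container, and namespace names are hypothetical):

```shell
kubectl logs my-pod -c my-container --namespace=my-namespace
```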

Conclusion

Kubernetes provides many options for creating secure deployments. Because no single solution is applicable to all situations, it is important to familiarize yourself with these security options and understand how they improve application security.

The security practices described in this article combine the flexible configuration capabilities of Kubernetes with continuous integration so that security is integrated into the entire process automatically and seamlessly.
