
Kubernetes: Assign Memory Resources and Limits to Containers

By Alwyn Botha, Alibaba Cloud Tech Share Author. Tech Share is Alibaba Cloud's incentive program to encourage the sharing of technical knowledge and best practices within the cloud community.

This tutorial teaches you how to assign memory resources and limits to containers with Kubernetes on Alibaba Cloud. It covers two related topics:

  • How to assign memory resources to a Pod when you define it
  • How Kubernetes administrators can put limits on the memory use of Pods (both at runtime and when a Pod is defined)

We will first define some Pods with our self-imposed limits specified. Then we will test how those limits are enforced.

In the last part of this tutorial we will use LimitRanges. A LimitRange defines memory limits for Pod declarations as well as separate limits for runtime memory use. We will also test how those limits are enforced.

Once you have completed this tutorial, you are invited to create your own LimitRanges and Pods to test your understanding of this topic.

This tutorial will cover the following topics:

  • Pod with memory request and limit
  • Pod with 2 containers: each with memory request and limit
  • Pod exceeds RAM limit upon startup: restartPolicy: Never
  • Pod exceeds RAM limit upon startup: restartPolicy: OnFailure
  • LimitRange for memory
  • Pod that requests RAM above and below limits
  • Define LimitRange defaults and limits
  • Pod does not specify RAM limits in its YAML spec file
  • LimitRange in namespaces
  • Kubernetes API objects

1) Pod with Memory Request and Limit

Note the syntax below. This is how we define:

  • 50Mi of memory requested
  • 100Mi memory limit - the maximum RAM we declare our Pod will ever need

nano myBench-Pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: mybench-pod
spec:
  containers:
  - name: mybench-container
    image: mytutorials/centos:bench
    imagePullPolicy: IfNotPresent
    
    command: ['sh', '-c', 'echo The Bench Pod is Running ; sleep 3600']
    resources:
      limits:
        memory: "100Mi"
      requests:
        memory: "50Mi"
    
  restartPolicy: Never

Create the Pod.

kubectl create -f myBench-Pod.yaml

pod/mybench-pod created

Now we investigate how Kubernetes deals with Pods exceeding those limits.

We use the image mytutorials/centos:bench, which I created and uploaded to Docker Hub.

It contains a simple CentOS 7 base operating system. It also includes stress, a benchmark and stress-test application.

Syntax:

stress --vm 1 --vm-bytes 50M --vm-hang 10

  • --vm 1 ... start one virtual memory worker thread
  • --vm-bytes 50M ... allocate 50MB of RAM in that worker
  • --vm-hang 10 ... the worker waits 10 seconds, then frees and re-allocates the RAM

If you do not use --vm-hang, the Pod will continuously use 100% of a CPU core, since it re-allocates its RAM with no pause between re-allocations.

We do not want to stress the CPU to 100%; this hang/wait functionality makes the stress test use near-zero CPU.
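
For example, these two invocations differ only in the hang flag (a minimal illustration using the stress flags described above):

stress --vm 1 --vm-bytes 50M                 # no pause: continuous re-allocation, ~100% of one CPU core
stress --vm 1 --vm-bytes 50M --vm-hang 10    # 10-second pause between re-allocations: near-zero CPU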

We use the command kubectl exec to open an interactive shell inside the Pod, much like ssh-ing into a server. Here we can run commands at the Linux shell.

kubectl exec mybench-pod -i -t -- /bin/bash

# stress --vm 1 --vm-bytes 50M --vm-hang 10

Truncated output of the top command, run in another shell:

  PID USER      PR  NI    VIRT    RES  %CPU  %MEM     TIME+ S COMMAND
21149 root      20   0   57.2m  50.4m   0.0   2.7   0:00.11 S stress --vm 1 --vm-bytes 50M --vm-hang 10
21148 root      20   0    7.1m   0.9m   0.0   0.0   0:00.00 S stress --vm 1 --vm-bytes 50M --vm-hang 10

The container easily handles the 50MB allocation.

# stress --vm 1 --vm-bytes 90M --vm-hang 10

  PID USER      PR  NI    VIRT    RES  %CPU  %MEM     TIME+ S COMMAND
21875 root      20   0   97.2m  90.5m   0.0   4.9   0:00.02 S stress --vm 1 --vm-bytes 90M --vm-hang 10
21874 root      20   0    7.1m   0.9m   0.0   0.0   0:00.00 S stress --vm 1 --vm-bytes 90M --vm-hang 10

The container easily handles the 90MB allocation as well.

# stress --vm 1 --vm-bytes 95M --vm-hang 10

  PID USER      PR  NI    VIRT    RES  %CPU  %MEM     TIME+ S COMMAND
22143 root      20   0  102.2m  95.3m   0.0   5.1   0:00.02 S stress --vm 1 --vm-bytes 95M --vm-hang 10
22142 root      20   0    7.1m   0.6m   0.0   0.0   0:00.00 S stress --vm 1 --vm-bytes 95M --vm-hang 10

The container easily handles the 95MB allocation too.

[root@mybench-pod /]# stress --vm 1 --vm-bytes 97M --vm-hang 10
stress: info: [78] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: FAIL: [78](415) <-- worker 79 got signal 9
stress: WARN: [78](417) now reaping child worker processes
stress: FAIL: [78](451) failed run completed in 0s
[root@mybench-pod /]# exit

Allocating 97MB of RAM fails - our CentOS container needs around 3MB just to run, which pushes us over our 100Mi RAM limit.

Note that only this stress worker process crashed - not the whole container and not the whole Pod.
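
A quick check from another shell confirms the Pod survived (a small sketch; the output is approximately what you should see):

kubectl get pod mybench-pod

NAME          READY   STATUS    RESTARTS   AGE
mybench-pod   1/1     Running   0          12m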

Output of kubectl describe pod mybench-pod with ONLY the important status fields shown:

Status:             Running
    State:          Running
      Started:      Thu, 10 Jan 2019 07:47:43 +0200
    Ready:          True
    Restart Count:  0
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Events:
  Type    Reason     Age   From               Message
  Normal  Scheduled  12m   default-scheduler  Successfully assigned default/mybench-pod to minikube
  Normal  Pulled     12m   kubelet, minikube  Container image "mytutorials/centos:bench" already present on machine
  Normal  Created    12m   kubelet, minikube  Created container
  Normal  Started    12m   kubelet, minikube  Started container

Note that the Pod is still running: CentOS 7 is still up, and we can still open a shell in the Pod and do typical Linux shell work.

The first demo is complete; delete the Pod:

kubectl delete -f myBench-Pod.yaml --force --grace-period=0

pod "myapp-pod" force deleted

2) Pod with 2 Containers: Each with Memory Request and Limit

Now we define a Pod with 2 containers, each with its own memory limits. Details in the YAML file below:

nano myBench-Pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: mybench-pod
spec:
  containers:
  - name: mybench-container-1
    image: mytutorials/centos:bench
    imagePullPolicy: IfNotPresent
    
    command: ['sh', '-c', 'echo mybench-container-1 is Running ; sleep 3600']
    resources:
      limits:
        memory: "100Mi"
      requests:
        memory: "50Mi"
    
  - name: mybench-container-2
    image: mytutorials/centos:bench
    imagePullPolicy: IfNotPresent
    
    command: ['sh', '-c', 'echo mybench-container-2 is Running ; sleep 3600']
    resources:
      limits:
        memory: "100Mi"
      requests:
        memory: "50Mi"

  restartPolicy: Never

Create the Pod.

kubectl create -f myBench-Pod.yaml

pod/mybench-pod created

Allocate 88MB in container number 2.

kubectl exec mybench-pod -c mybench-container-2 -i -t -- /bin/bash

[root@mybench-pod /]# stress --vm 1 --vm-bytes 88M --vm-hang 10

stress: info: [22] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd

Open another shell and allocate 95MB in container number 1.

kubectl exec mybench-pod -c mybench-container-1 -i -t -- /bin/bash

[root@mybench-pod /]# stress --vm 1 --vm-bytes 95M --vm-hang 10
stress: info: [23] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd

Each container has its own separate 100Mi RAM limit.

The top output below shows both stress workers running.

  PID USER      PR  NI    VIRT    RES  %CPU  %MEM     TIME+ S COMMAND
26346 root      20   0  102.2m  95.4m   0.8   5.1   0:00.04 S stress --vm 1 --vm-bytes 95M --vm-hang 10
26244 root      20   0   95.2m  88.4m   0.0   4.7   0:00.01 S stress --vm 1 --vm-bytes 88M --vm-hang 10
26345 root      20   0    7.1m   0.8m   0.0   0.0   0:00.00 S stress --vm 1 --vm-bytes 95M --vm-hang 10
26243 root      20   0    7.1m   0.8m   0.0   0.0   0:00.00 S stress --vm 1 --vm-bytes 88M --vm-hang 10

Press Ctrl+C in either of your shells and allocate 120MB of RAM. It crashes, since it exceeds the 100Mi limit.

[root@mybench-pod /]# stress --vm 1 --vm-bytes 120M --vm-hang 10
stress: info: [24] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: FAIL: [24](415) <-- worker 25 got signal 9
stress: WARN: [24](417) now reaping child worker processes
stress: FAIL: [24](451) failed run completed in 0s

Type exit to leave this shell.

Press Ctrl+C in the other shell, then type exit to leave it as well.

Summary: all containers in a Pod have independent RAM limits, giving you fine-grained control over each container.

(Not part of this tutorial, but each container can have independent CPU constraints as well.)
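
If you want to confirm each container's limit without reading the whole spec, a jsonpath query works (a small sketch; expected output shown below the command):

kubectl get pod mybench-pod -o jsonpath='{range .spec.containers[*]}{.name}{": "}{.resources.limits.memory}{"\n"}{end}'

mybench-container-1: 100Mi
mybench-container-2: 100Mi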

kubectl delete -f myBench-Pod.yaml --force --grace-period=0

pod "myapp-pod" force deleted

3) Pod Exceeds RAM Limit upon Startup: restartPolicy: Never

We have seen that the overall status of a Pod is unaffected by failing processes inside it.

Now we will see what happens if a Pod contains a container that exceeds its RAM limit immediately upon startup.

    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "10"]

The YAML spec above specifies a command to run upon startup that exceeds the container's self-declared limit of 100Mi of RAM.

nano myBench-Pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: mybench-pod
spec:
  containers:
  - name: mybench-container-1
    image: mytutorials/centos:bench
    imagePullPolicy: IfNotPresent
    
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "10"]

    resources:
      limits:
        memory: "100Mi"
      requests:
        memory: "50Mi"

  restartPolicy: Never

Create the Pod.

kubectl create -f myBench-Pod.yaml

pod/mybench-pod created

Immediately afterwards:

kubectl get po

NAME          READY   STATUS      RESTARTS   AGE
mybench-pod   0/1     OOMKilled   0          9s

OOM means Out Of Memory: the kernel killed the process because the container exceeded its memory limit.

Output of kubectl describe pod mybench-pod with ONLY the important status fields shown:

Status:             Failed
    State:          Terminated
      Reason:       OOMKilled
      Exit Code:    1
      Started:      Thu, 10 Jan 2019 08:30:13 +0200
      Finished:     Thu, 10 Jan 2019 08:30:14 +0200
    Ready:          False
    Restart Count:  0

We had restartPolicy: Never in the Pod spec.

Once the container got OOMKilled, it stayed killed: it is never restarted.
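
You can verify this by checking the Pod again; the RESTARTS column stays at 0 (output approximately as shown):

kubectl get pod mybench-pod

NAME          READY   STATUS      RESTARTS   AGE
mybench-pod   0/1     OOMKilled   0          2m

Delete this Pod before the next demo:

kubectl delete -f myBench-Pod.yaml --force --grace-period=0

pod "mybench-pod" force deleted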

4) Pod Exceeds RAM Limit upon Startup: restartPolicy: OnFailure

Let's investigate what happens with restartPolicy: OnFailure.

(Note the last line in the spec below.)

nano myBench-Pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: mybench-pod
spec:
  containers:
  - name: mybench-container-1
    image: mytutorials/centos:bench
    imagePullPolicy: IfNotPresent
    
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "10"]

    resources:
      limits:
        memory: "100Mi"
      requests:
        memory: "50Mi"

  restartPolicy: OnFailure

Create the Pod.

kubectl create -f myBench-Pod.yaml

pod/mybench-pod created

Check the Pod status repeatedly:

kubectl get po
NAME          READY   STATUS      RESTARTS   AGE
mybench-pod   0/1     OOMKilled   1          2s

kubectl get po
NAME          READY   STATUS             RESTARTS   AGE
mybench-pod   0/1     CrashLoopBackOff   1          6s

kubectl get po
NAME          READY   STATUS             RESTARTS   AGE
mybench-pod   0/1     CrashLoopBackOff   1          12s

kubectl get po
NAME          READY   STATUS             RESTARTS   AGE
mybench-pod   0/1     CrashLoopBackOff   1          15s

kubectl get po
NAME          READY   STATUS      RESTARTS   AGE
mybench-pod   0/1     OOMKilled   2          24s

kubectl get po
NAME          READY   STATUS             RESTARTS   AGE
mybench-pod   0/1     CrashLoopBackOff   2          35s

kubectl get po
NAME          READY   STATUS      RESTARTS   AGE
mybench-pod   0/1     OOMKilled   3          45s

restartPolicy: OnFailure faithfully restarts the Pod over and over, but without success. The CrashLoopBackOff status means Kubernetes is waiting an exponentially increasing delay (10s, 20s, 40s, and so on, capped at five minutes) before the next restart attempt.
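
You can also read the restart count directly from the Pod status (a small jsonpath sketch):

kubectl get pod mybench-pod -o jsonpath='{.status.containerStatuses[0].restartCount}'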

kubectl delete -f myBench-Pod.yaml --force --grace-period=0

pod "myapp-pod" force deleted

5) LimitRange for Memory

Kubernetes administrators can define RAM limits that apply to every container in a namespace.

These limits are enforced with higher priority than the amount of RAM a Pod declares it wants to use.

Let's define our first LimitRange: 25Mi of RAM as the minimum, 250Mi as the maximum.

nano myRAM-LimitRange.yaml

apiVersion: v1
kind: LimitRange
metadata:
  name: my-ram-limit
spec:
  limits:
  - max:
      memory: 250Mi
    min:
      memory: 25Mi
    type: Container

Create the LimitRange.

kubectl create -f myRAM-LimitRange.yaml

limitrange/my-ram-limit created

MB is not supported: you cannot specify RAM in MB; it must be a valid Kubernetes quantity such as Mi (binary suffixes include Ki, Mi, and Gi). Had we written the sizes as MB, this is the error we would get:

kubectl create -f myRAM-LimitRange.yaml

Error from server (BadRequest): error when creating "myRAM-LimitRange.yaml": LimitRange in version "v1" cannot be handled as a LimitRange: v1.LimitRange.Spec: v1.LimitRangeSpec.Limits: []v1.LimitRangeItem: v1.LimitRangeItem.Min: Max: unmarshalerDecoder: quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$', error found in #10 byte of ...|y":"250MB"},"min":{"|..., bigger context ...|fault"},"spec":{"limits":[{"max":{"memory":"250MB"},"min":{"memory":"25MB"},"type":"Container"}]}}

6) Pod That Requests RAM above and below Limits

Some namespaces or nodes may be dedicated to large Pods, with no tiny Pods allowed. Let's first request RAM below the 25Mi minimum:

nano myBench-Pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: mybench-pod
spec:
  containers:
  - name: mybench-container
    image: mytutorials/centos:bench
    imagePullPolicy: IfNotPresent
    
    command: ['sh', '-c', 'echo The Bench Pod is Running ; sleep 3600']
    resources:
      limits:
        memory: "100Mi"
      requests:
        memory: "10Mi"
    
  restartPolicy: Never

Create the Pod.

kubectl create -f myBench-Pod.yaml

Error from server (Forbidden): error when creating "myBench-Pod.yaml": pods "mybench-pod" is forbidden: minimum memory usage per Container is 25Mi, but request is 10Mi.

The error is easy to understand.

Let's request too much RAM:

nano myBench-Pod.yaml
   
apiVersion: v1
kind: Pod
metadata:
  name: mybench-pod
spec:
  containers:
  - name: mybench-container
    image: mytutorials/centos:bench
    imagePullPolicy: IfNotPresent
    
    command: ['sh', '-c', 'echo The Bench Pod is Running ; sleep 3600']
    resources:
      limits:
        memory: "300Mi"
      requests:
        memory: "30Mi"
    
  restartPolicy: Never

Create the Pod.

kubectl create -f myBench-Pod.yaml

Error from server (Forbidden): error when creating "myBench-Pod.yaml": pods "mybench-pod" is forbidden: maximum memory usage per Container is 250Mi, but limit is 300Mi.

Again, easy to understand.

Let's define a Pod within the allowed RAM ranges:

nano myBench-Pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: mybench-pod
spec:
  containers:
  - name: mybench-container
    image: mytutorials/centos:bench
    imagePullPolicy: IfNotPresent
    
    command: ['sh', '-c', 'echo The Bench Pod is Running ; sleep 3600']
    resources:
      limits:
        memory: "100Mi"
      requests:
        memory: "30Mi"
    
  restartPolicy: Never

Create the Pod.

kubectl create -f myBench-Pod.yaml

pod/mybench-pod created

Success.

If we now use kubectl exec and allocate too much RAM, that worker process gets killed (as expected).

kubectl exec mybench-pod -i -t -- /bin/bash

[root@mybench-pod /]# stress --vm 1 --vm-bytes 50M --vm-hang 10
stress: info: [22] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
^C
[root@mybench-pod /]# stress --vm 1 --vm-bytes 90M --vm-hang 10
stress: info: [24] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
^C
[root@mybench-pod /]# stress --vm 1 --vm-bytes 120M --vm-hang 10
stress: info: [26] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: FAIL: [26](415) <-- worker 27 got signal 9
stress: WARN: [26](417) now reaping child worker processes
stress: FAIL: [26](451) failed run completed in 1s

[root@mybench-pod /]# exit

kubectl delete -f myBench-Pod.yaml --force --grace-period=0

pod "mybench-pod" force deleted

We are finished with this limit definition, so delete it:

kubectl delete limits/my-ram-limit
limitrange "my-ram-limit" deleted

7) Define LimitRange Defaults and Limits

Previously our LimitRange defined only min and max limits.

We can also use a LimitRange to define default limits. These defaults are applied when a Pod spec does not specify its own requests and limits.

nano myRAM-LimitRange.yaml

apiVersion: v1
kind: LimitRange
metadata:
  name: my-ram-limit
spec:
  limits:
  - default:
      memory: 150Mi
    defaultRequest:
      memory: 30Mi
    max:
      memory: 250Mi
    min:
      memory: 25Mi
    type: Container

The first five lines would be clearer if the syntax were:

  limits:
  - defaultLimit:
      memory: 150Mi
    defaultRequest:
      memory: 30Mi

Unfortunately, the actual syntax is:

  limits:
  - default:
      memory: 150Mi
    defaultRequest:
      memory: 30Mi

Let's create our new LimitRange:

kubectl create -f myRAM-LimitRange.yaml

limitrange/my-ram-limit created

Let's let Kubernetes describe what it did:

kubectl describe limits

Name:       my-ram-limit
Namespace:  default
Type        Resource  Min   Max    Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---   ---    ---------------  -------------  -----------------------
Container   memory    25Mi  250Mi  30Mi             150Mi          -

Perfectly understandable output.

Now we need a Pod that does not specify limits in its spec.

8) Pod Does Not Specify RAM Limits in Its YAML Spec File

Note that there are no limits in the spec below:

nano myBench-Pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: mybench-pod
spec:
  containers:
  - name: mybench-container
    image: mytutorials/centos:bench
    imagePullPolicy: IfNotPresent
    
    command: ['sh', '-c', 'echo The Bench Pod is Running ; sleep 3600']
    
  restartPolicy: Never

Create the Pod.

kubectl create -f myBench-Pod.yaml

pod/mybench-pod created

Output of kubectl describe pod mybench-pod with ONLY the important status fields shown:

Name:               mybench-pod

Annotations:        kubernetes.io/limit-ranger:
                      LimitRanger plugin set: memory request for container mybench-container; memory limit for container mybench-container

Status:             Running
IP:                 172.17.0.6
Containers:
  mybench-container:
...  
    Limits:
      memory:  150Mi
    Requests:
      memory:     30Mi
...

Note the annotation: this Pod's memory request and limit were automatically set by the LimitRanger admission plugin.

At the bottom we can see this Pod received RAM limits based on the LimitRange defaults.

We expect that allocating more than 150Mi of RAM will get the process killed.

Let's test it:

kubectl exec mybench-pod -i -t -- /bin/bash

[root@mybench-pod /]# stress --vm 1 --vm-bytes 140M --vm-hang 10
stress: info: [27] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
^C
[root@mybench-pod /]# stress --vm 1 --vm-bytes 160M --vm-hang 10
stress: info: [29] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: FAIL: [29](415) <-- worker 30 got signal 9
stress: WARN: [29](417) now reaping child worker processes
stress: FAIL: [29](451) failed run completed in 0s
[root@mybench-pod /]# exit

As expected: the 140MB allocation stays within the limit with no problem, while the 160MB allocation gets killed.
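
Clean up the demo Pod and the LimitRange, using the same commands as before:

kubectl delete -f myBench-Pod.yaml --force --grace-period=0
kubectl delete limits/my-ram-limit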

9) LimitRange in Namespaces

Throughout this tutorial we used these limits in the default namespace.

In a complex development and production environment, Pods run in different namespaces.

Namespaces separate Kubernetes resources into logical groupings.

Similarly, LimitRanges can be defined and enforced per namespace.

If you work at a small company with just one namespace, the LimitRanges you define will live in the default namespace, applicable to all Pods.
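
For example, here is a minimal sketch, assuming a hypothetical namespace named dev, of applying the same LimitRange file there:

kubectl create namespace dev
kubectl create -f myRAM-LimitRange.yaml --namespace=dev
kubectl describe limits --namespace=dev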

For more information: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/

and

https://kubernetes.io/docs/tasks/administer-cluster/namespaces-walkthrough/

10) Kubernetes API Objects

Some online Kubernetes documentation stresses the underlying architecture when describing any Kubernetes topic.

Nearly everything in Kubernetes is an API object: namespaces, Pods, LimitRanges.

Real people say: I used a LimitRange to define RAM limits for Pods. I used kubectl to create it.

Real people do not say: I used a Kubernetes API LimitRange object to define RAM limits for my Kubernetes API Pod object. I used the RESTful interface of kubectl to call the apiserver to create those API objects in etcd.

Unless your work focuses on the underlying Kubernetes architecture (writing API calls), you may safely skip references to the API.

Knowing that etcd is the 'highly-available key value store used as Kubernetes' backing store for all cluster API resources data' adds nothing to your understanding of RAM limits.

Conclusion

This tutorial does not provide an exhaustive list of all the possibilities of using limits. Instead, it provides sufficient information for you to understand its underlying logic. In summary, we learned that:

  • A Pod can self-declare memory limits
  • LimitRange can define min and max memory limits that are enforced for all Pods
  • LimitRange can define default and max memory limits that are automatically added to Pod specs at creation time (if the Pod does not self-declare its memory limits)

You can now define Pods and LimitRanges and accurately predict what will happen in each Pod with any size memory allocation.

Note: if your Pods do not define limits and there are no LimitRanges, then one container can use all the RAM on your Kubernetes node.
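
Even a defaults-only LimitRange, along the lines of section 7, serves as a safety net against that (a minimal sketch; the name is just for illustration):

apiVersion: v1
kind: LimitRange
metadata:
  # hypothetical name, for illustration only
  name: default-ram-limit
spec:
  limits:
  - default:
      memory: 150Mi
    defaultRequest:
      memory: 30Mi
    type: Container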

Exercise: define your own LimitRange and one Pod. Repeatedly adjust your Pod's limits around, inside, and outside those ranges. Create the Pods and run similar stress tests to check your understanding.

Each test cycle should take about 2 minutes. If, after 15 minutes, all your predictions prove correct, you may stop and congratulate yourself.
