In our beginner's guide, we saw that Kubernetes is an open-source platform for managing containerized applications, and we covered its basic concepts, terminology, and components. In How to Install and Deploy Kubernetes on Ubuntu 16.04, we demonstrated how to deploy Kubernetes on Alibaba Cloud. With the platform running successfully in the cloud, we are now going to explore Kubernetes primitives in more depth and deploy a containerized application. We shall also see how to expose services and scale them through replication.
To follow this article, you should be acquainted with the concepts and deployment of Kubernetes. For specific terms, you may need to refer back to those earlier articles.
Users leverage the Kubernetes APIs to create, scale, and terminate applications on the platform. Kubernetes manages various types of objects, each supporting different operations. Objects constitute the basic building blocks of Kubernetes and are exposed as primitives for managing containerized applications. In summary, below are the most important Kubernetes API objects:
Kubernetes treats nodes as objects within the cluster, so you can manage them as you would any other Kubernetes object. In addition, Namespaces provide a means for the logical separation of applications. A common application of this feature is the separation of development, testing, staging, and production environments. With Namespaces, the various environments can be managed independently through the APIs that target them.
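As a quick sketch of this idea (the namespace name here is just an illustrative example), a Namespace for a development environment could be declared with a manifest like this:

```yaml
# Hypothetical example: a Namespace for a development environment
apiVersion: v1
kind: Namespace
metadata:
  name: dev
```

Once created, it can be targeted explicitly, for example with kubectl get pods --namespace=dev, so that dev workloads never mix with those in other environments.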
Docker containers are not deployed on Kubernetes directly, because Kubernetes does not manage individual containers in that raw form. Instead, Kubernetes primitives such as Pods wrap the containers in objects that Kubernetes can manage.
Pods can run multiple containers. For instance, a single pod can run both an Nginx container and a Redis container to provide a web server and a cache. All containers in a pod come from the same pre-configured pod definition and therefore form a logical unit. Notably, containers within a pod can communicate with one another through inter-process communication (IPC) and over localhost, since they share the same network namespace.
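A sketch of such a two-container pod might look like the following (the pod and container names are illustrative, not from any earlier example):

```yaml
# Hypothetical example: one pod running a web server and a cache side by side
apiVersion: v1
kind: Pod
metadata:
  name: web-cache-pod
spec:
  containers:
    - name: web        # Nginx serves HTTP traffic
      image: nginx
      ports:
        - containerPort: 80
    - name: cache      # Redis provides caching for the web container
      image: redis
      ports:
        - containerPort: 6379
```

Because both containers share the pod's network namespace, the Nginx container could reach Redis simply at localhost:6379.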
Kubernetes Services employ the TCP and UDP protocols for interaction. Another point worth mentioning about primitives is database configuration. Kubernetes does not expose database containers and caches to the public; instead, it uses a policy mechanism to expose such containers only to other containers, keeping sensitive workloads off the public network. APIs, on the other hand, can be exposed publicly to provide access to services. This default configuration of the primitives improves security.
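To illustrate the internal-only pattern described above (the service name and selector label here are hypothetical), a backend cache can be exposed with the default ClusterIP service type, which is reachable only from inside the cluster:

```yaml
# Hypothetical example: an internal-only Service for a Redis cache
# No "type" field is set, so Kubernetes defaults to ClusterIP,
# which is not reachable from outside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: cache-svc
spec:
  selector:
    name: cache        # assumes cache pods carry the label name: cache
  ports:
    - port: 6379
      targetPort: 6379
      protocol: TCP
```

Other pods in the cluster can then reach the cache through the service, while nothing outside the cluster can.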
For scaling workloads up and down, Kubernetes relies on a dynamic implementation of the label primitive: selectors can easily discover running objects by their labels. Such objects include containers, which makes scaling much faster than with heavier virtual machines. On the whole, these primitive configurations together enable capabilities similar to a PaaS.
If you have successfully followed how to Install and Deploy Kubernetes on Ubuntu 16.04, you can get a list of all nodes and namespaces by running the command below:
kubectl get nodes

Output:
NAME                  STATUS    ROLES     AGE       VERSION
spc3c97hei-master-1   Ready     master    10m       v1.8.7
spc3c97hei-worker-1   Ready     <none>    4m        v1.8.7
spc3c97hei-worker-2   Ready     <none>    4m        v1.8.7

kubectl get namespaces

Output:
NAME                STATUS    AGE
default             Active    11m
kube-public         Active    11m
kube-system         Active    11m
stackpoint-system   Active    4m
kubectl targets the default Namespace if no other Namespace is specified. Great, let us get an application launched!
Objects are declared in YAML format and submitted to Kubernetes for processing with the kubectl CLI. Let us create our first pod:
Create a file named Sample-Pod.yaml for the sample pod definition.
Next, let us define our pod by adding the code below. It defines a pod with a single container based on Nginx, serving TCP on port 80. The name and env labels in the definition make it possible to identify and configure specific pods.
apiVersion: "v1"
kind: Pod
metadata:
  name: web-pod
  labels:
    name: web
    env: dev
spec:
  containers:
    - name: myweb
      image: nginx
      ports:
        - containerPort: 80
          name: http
          protocol: TCP
Create our Pod by running the following command:
kubectl create -f Sample-Pod.yaml

Output:
pod "web-pod" created
Run the command below to verify that our Pod was created:
kubectl get pods

Output:
NAME      READY     STATUS    RESTARTS   AGE
web-pod   1/1       Running   0          2m
We want to make our Pod accessible to the public. We shall see how to go about it in the next section.
You can expose Pods either internally or externally using Services. In our simple project, let us expose the Nginx web server pod publicly. Our preferred Service type is NodePort, which exposes the service on an arbitrary port of each node. We begin by creating a Sample-Service.yaml file, which contains the instructions that define the Nginx service.
apiVersion: v1
kind: Service
metadata:
  name: web-svc
  labels:
    name: web
    env: dev
spec:
  selector:
    name: web
  type: NodePort
  ports:
    - port: 80
      name: http
      targetPort: 80
      protocol: TCP
We have created a service that discovers all pods carrying the label name: web within the same namespace; this association is fully defined by the selector. The service has also been declared as the NodePort type. The final step is to submit it to the cluster using kubectl.
kubectl create -f Sample-Service.yaml
You should get confirmation for the successful creation of the service in the output:
Output:
service "web-svc" created
Use the command below to get the service's node port:
kubectl get services

Output:
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.3.0.1     <none>        443/TCP        28m
web-svc      NodePort    10.3.0.143   <none>        80:32096/TCP   38s
The output indicates that the service is exposed on port 32096 of each node. Accordingly, let's try working with one of the available nodes:
Use the Alibaba console to obtain the IP addresses of the worker nodes.
Next, make an HTTP request to one of the workers on port 32096 using a curl command:

curl http://<worker-ip>:32096

The response should contain the home page of the Nginx web server:
<!DOCTYPE html>
<html>
<head>
<title>Welcome to Nginx!</title>
...
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Now that we have both a Pod and a Service declared, we shall look at Replica Sets in the next section.
Replica Sets keep a required minimum number of Pods running within the cluster. We are going to destroy the Pod we created and let a Replica Set create three replacements.
Delete the Pod like so:
kubectl delete pod web-pod

Output:
pod "web-pod" deleted
To proceed, declare a new Replica Set. Defining one is similar to declaring a pod, the main difference being the replicas field, which specifies how many Pods it will run. As with Pods, it also contains a metadata definition for ease of discovery.
Create a Sample-RS.yml file and add the code below:
apiVersion: apps/v1beta2
kind: ReplicaSet
metadata:
  name: web-rs
  labels:
    name: web
    env: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      name: web
  template:
    metadata:
      labels:
        name: web
        env: dev
    spec:
      containers:
        - name: myweb
          image: nginx
          ports:
            - containerPort: 80
              name: http
              protocol: TCP
Save the changes and close the file.
Next, let's get the Replica Set defined:
kubectl create -f Sample-RS.yml

Output:
replicaset "web-rs" created
Let us now search for our Pods:
kubectl get pods

Output:
NAME           READY     STATUS    RESTARTS   AGE
web-rs-htb58   1/1       Running   0          8s
web-rs-khtld   1/1       Running   0          8s
web-rs-p5lzg   1/1       Running   0          8s
When a NodePort is used for Service access, requests are routed to one of the Pods managed by the Replica Set. Let us test the Replica Set's response by deleting one of the pods like so:
kubectl delete pod web-rs-p5lzg

Output:
pod "web-rs-p5lzg" deleted
Run the command below again:
kubectl get pods

Output:
NAME           READY     STATUS              RESTARTS   AGE
web-rs-htb58   1/1       Running             0          3m
web-rs-khtld   1/1       Running             0          3m
web-rs-fqh2f   0/1       ContainerCreating   0          3s
web-rs-p5lzg   1/1       Running             0          3m
web-rs-p5lzg   0/1       Terminating         0          3m
Kubernetes deletes the pod but creates a new one to maintain the required number of pods in our cluster. The next section will explore Deployments.
Deployments make upgrades and patches easier than Pods and Replica Sets do, which is the fundamental reason why you would use them to deploy containers. For instance, you can upgrade a running pod with a Deployment but not with a Replica Set. This feature allows upgrades without downtime and enables PaaS-like capabilities. Let us first delete our Replica Set and then proceed to create a Deployment like so:
kubectl delete rs web-rs

Output:
replicaset "web-rs" deleted
Create a new
Sample-Deployment.yaml file and include the code below:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: web-dep
  labels:
    name: web
    env: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      name: web
  template:
    metadata:
      labels:
        name: web
    spec:
      containers:
        - name: myweb
          image: nginx
          ports:
            - containerPort: 80
Create the deployment:
kubectl create -f Sample-Deployment.yaml

Output:
deployment "web-dep" created
Get the deployments:
kubectl get deployments

Output:
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
web-dep   3         3         3            3           2m
Since we specified the creation of three Pods, there are three running Pods.
Get the pods like so:
kubectl get pods

Output:
NAME                       READY     STATUS    RESTARTS   AGE
web-dep-8594f5c765-5wmrb   1/1       Running   0          3m
web-dep-8594f5c765-6cbsr   1/1       Running   0          3m
web-dep-8594f5c765-sczf8   1/1       Running   0          3m
We configured a Service before creating the Deployment. However, since the newly created Pods carry the same labels, the Service can still route traffic to them.
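Because web-dep is a Deployment, its Pod template can also be updated in place and Kubernetes will roll the change out gradually. As a sketch of this (the nginx:1.15 tag is an illustrative example, not from the steps above), one could pin a newer image in the manifest and re-apply it:

```yaml
# Sample-Deployment.yaml (fragment) -- only the image line changes;
# the rest of the Deployment stays as defined earlier
    spec:
      containers:
        - name: myweb
          image: nginx:1.15   # illustrative newer tag; was nginx
```

Re-submitting the file with kubectl apply -f Sample-Deployment.yaml would then replace the Pods one by one, without downtime, which is exactly what Replica Sets alone cannot do.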
If you need to clean up, you can delete both the Service and Deployment like so:
kubectl delete deployment web-dep

Output:
deployment "web-dep" deleted

kubectl delete service web-svc

Output:
service "web-svc" deleted
The Kubernetes documentation contains further information on this subject.
This tutorial has built on what we discussed in the beginner's guide to Kubernetes. We have examined the most important primitive configurations you are likely to encounter on Kubernetes: we configured a web server in a Pod, and created a Service, a Replica Set, and a Deployment. The configurations we have studied here are among the most fundamental when working with Kubernetes.
To learn more about Kubernetes on Alibaba Cloud, visit www.alibabacloud.com/product/kubernetes
Alex - November 8, 2018