How to Start Your Kubernetes Journey

[Translator's note] This article introduces how to get started learning Kubernetes and how to deploy applications on it. The author offers many good suggestions, guides readers through the concepts and skills surrounding Kubernetes, and points the way for further learning.

From Hello Minikube to Kubernetes Anywhere to microservices sample applications, there are many ways to learn Google's container orchestration tool.

Every innovation brings new complications. Containers make it convenient to package and run applications, but managing containers at scale remains a challenge.

Kubernetes, developed at Google to solve this problem, provides a single framework for managing containers running across a cluster. Its services mostly fall under the umbrella of "orchestration," but that covers many areas: container scheduling, service discovery between containers, load balancing across systems, rolling updates and rollbacks, high availability, and more.

In this guide, we'll walk through the basics of standing up a Kubernetes cluster and deploying containerized applications. This is not an introduction to Kubernetes concepts, but rather a way to show, through simple examples, how those concepts fit together in a running Kubernetes setup.

Choose a Kubernetes host

Kubernetes was born to manage Linux containers. However, as of Kubernetes 1.5, it also supports Windows Server Containers, although the Kubernetes control plane must continue to run on Linux. And thanks to virtualization, you can get started with Kubernetes on virtually any platform.

If you choose to run Kubernetes on your own hardware or virtual machines, a common approach is to use a Linux distribution that comes bundled with Kubernetes. This spares you from setting it up yourself: not only installation and deployment, but even much of the configuration and management.

CoreOS Tectonic is one such distribution, focusing on containers and Kubernetes to the exclusion of almost everything else. RancherOS takes a similar approach, again automating most of the setup. Both can be installed in a variety of environments: bare metal, Amazon AWS VMs, Google Compute Engine, OpenStack, and so on.

Another approach is to run Kubernetes on top of a conventional Linux distribution, though usually with more administrative overhead and manual fiddling. For example, Red Hat Enterprise Linux has Kubernetes in its package repositories, but even Red Hat recommends it only for testing and experimentation. Rather than assembling a stack from scratch, Red Hat users are advised to use Kubernetes via the OpenShift PaaS, since OpenShift now uses Kubernetes as its own native orchestrator. Many conventional Linux distributions provide special tooling for setting up Kubernetes and other large software stacks. Ubuntu, for example, provides a tool called conjure-up that can deploy the upstream version of Kubernetes on both cloud and bare-metal instances.

Choose a cloud to host Kubernetes

Although Kubernetes is fully supported on Google Cloud Platform (GCP), support on many other cloud platforms is still maturing. GCP offers two main ways to run Kubernetes. The most convenient and tightly integrated way is Google Container Engine, which lets you use Kubernetes' command-line tools to manage the clusters you create. Alternatively, you can use Google Compute Engine to set up a compute cluster and deploy Kubernetes manually. This route demands more skill, but allows customizations that Google Container Engine does not yet support. If you're new to containers, it's best to start with Container Engine; later, once you've gotten your bearings, you can try something more advanced, such as deploying a specific version of Kubernetes yourself, or running a virtual machine with a Kubernetes distribution.

Amazon EC2 has native support for containers, but no native support for Kubernetes as a container orchestration system. Running Kubernetes on AWS is similar to using Google Compute Engine: provision a compute cluster and then manually deploy Kubernetes.

Many Kubernetes distributions come with detailed instructions for deploying on AWS. CoreOS Tectonic, for example, includes a graphical installer and also supports the Terraform infrastructure-provisioning tool. Additionally, the Kubernetes kops tool can be used to provision a cluster of generic virtual machines on AWS (usually running Debian Linux, with partial support for other Linux distributions).
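As a rough sketch of how a kops-provisioned AWS cluster typically comes together (the bucket name, cluster name, zone, and node count below are placeholders, and the commands assume kops, kubectl, and configured AWS credentials):

```shell
# kops keeps cluster state in an S3 bucket; the bucket name is a placeholder.
export KOPS_STATE_STORE=s3://my-kops-state-bucket

# Define the cluster, then apply the definition to create the AWS resources.
kops create cluster \
  --name=cluster.example.com \
  --zones=us-east-1a \
  --node-count=2
kops update cluster cluster.example.com --yes

# Once the instances come up, verify the nodes have registered.
kubectl get nodes
```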

Microsoft Azure supports Kubernetes through Azure Container Service. However, this is not quite "native" support in the sense of Kubernetes being hosted on Azure as a service; instead, Kubernetes is deployed from an Azure Resource Manager template. Azure supports other container orchestration systems, such as Docker Swarm and Mesosphere DC/OS, in the same way. As with the other clouds described here, if you want full control, installing stock Kubernetes on Azure virtual machines is always an option.

A quick way to provision a basic Kubernetes cluster in a variety of environments, cloud or otherwise, is the Kubernetes Anywhere project. Its scripts work on Google Compute Engine, Microsoft Azure, and VMware vSphere (vCenter is required), and in each case provide some degree of automation for the startup process.

Your own small set of Kubernetes nodes
If you're just running Kubernetes locally, such as on a development machine, and don't need its full scale-out power, there are several ways to set up "just enough" Kubernetes.

One option is Minikube, provided by the Kubernetes development team itself. Run it and you get a single-node Kubernetes cluster deployed in a virtualization host of your choice. Minikube has a few prerequisites, such as the kubectl command-line interface and a virtualization environment like VirtualBox, but all of these are available as ready-made binaries for MacOS, Linux, and Windows.
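A minimal Minikube session might look like the following, assuming Minikube and kubectl are installed (the deployment name and echo-server image follow the Hello Minikube tutorial mentioned below; treat them as examples):

```shell
# Boot a single-node cluster inside a local VM, then confirm the node registered.
minikube start
kubectl get nodes

# Deploy a simple echo server and expose it outside the cluster.
kubectl run hello-minikube \
  --image=gcr.io/google_containers/echoserver:1.4 --port=8080
kubectl expose deployment hello-minikube --type=NodePort

# Print the URL where the exposed service can be reached.
minikube service hello-minikube --url
```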

For CoreOS users on MacOS, there is Kubernetes Solo. It runs in a CoreOS virtual machine and provides a status-bar application for quick management. Solo also includes the Kubernetes package manager Helm (more on Helm below), so Kubernetes-based application packages are easier to obtain and set up.

Play around with your container cluster
Once Kubernetes is running, you can start running and managing containers. The easiest way to get familiar with container operations is to deploy and manage existing container-based sample applications: take a demo application, see how it is composed, deploy it, and gradually modify it until it behaves the way you want. If you've chosen to find your footing with Minikube, the Hello Minikube tutorial walks you through creating a Docker container that runs a simple Node.js application on a single-node Kubernetes cluster, demonstrating both cluster setup and application deployment. Once you're comfortable with that, you can swap in your own containers and practice deploying them.

The next step is to deploy a sample application resembling one you might run in production. This is a good way to get familiar with Kubernetes' higher-level concepts: pods (one or more containers that make up an application), services (logical groups of pods), replica sets (which restore application availability after machine failures), and deployments (versioned rollouts of an application). Pick apart the WordPress/MySQL sample application, for instance, and you'll see more than just how to deploy an application to Kubernetes and get it running. You'll also see production-relevant details of many of the concepts Kubernetes applications rely on: how to set up persistent volumes to preserve an application's state, how to expose pods to each other and to the outside world through services, how to store application passwords and API keys as secrets, and more.
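To illustrate how these concepts relate, here is a hypothetical manifest sketch (the names, label, and image are placeholders): a deployment's replica set keeps three identical pods running, and a service gives the group a stable endpoint.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                # the underlying replica set keeps 3 pods alive
  selector:
    matchLabels:
      app: web
  template:                  # pod template: each pod runs one container here
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.13
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service                # a stable address for whichever pods match the label
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - port: 80
```

Applying this with `kubectl apply -f` creates all three objects; deleting one pod shows the replica set immediately replacing it.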

Weaveworks offers a sample application, the Sock Shop, that shows how the microservices pattern can be used to compose an application in Kubernetes. The Sock Shop is most readily understood by those familiar with the underlying technologies, such as Node.js, Go kit, and Spring Boot, but its lessons go beyond any particular framework to illustrate cloud-native techniques.

If you've worked through the WordPress/MySQL example and want to build Kubernetes applications tailored to your own needs, you're on the right track. Kubernetes also has an application packaging system called Helm, which provides a mechanism for packaging, versioning, and sharing Kubernetes applications. Helm charts for popular applications (GitLab, WordPress) and for common application building blocks (MySQL, NGINX) are available through the Kubeapps portal, a one-stop directory of best-practice charts.
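A typical Helm workflow, as a sketch: this assumes the Helm 2 client of the article's era, with the `stable` chart repository configured, and "my-blog" is just an example release name.

```shell
# Refresh the local index of available charts, then look for WordPress charts.
helm repo update
helm search wordpress

# Install the stable WordPress chart as a named release.
helm install stable/wordpress --name my-blog

# List installed releases to confirm the deployment.
helm list
```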

Deep dive into Kubernetes
Kubernetes simplifies container management through powerful abstractions such as pods and services, while labels and namespaces provide great flexibility. Both labels and namespaces can be used to segregate pods, services, and deployments, for example to separate development, staging, and production environments.

If you take one of the examples above and create separate instances of it in multiple namespaces, you can make changes to the components in one namespace independently of the others. You can then use deployments to roll those updates out across the pods within a given namespace.
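For instance, you might keep a development copy of an application in its own namespace and roll an image update out there first. In this sketch the namespace, manifest file, deployment, container, and image names are all placeholders:

```shell
# Create an isolated namespace and deploy a copy of the app into it.
kubectl create namespace dev
kubectl -n dev apply -f web-deployment.yaml

# Change the container image to trigger a rolling update, and watch it progress.
kubectl -n dev set image deployment/web web=nginx:1.14
kubectl -n dev rollout status deployment/web

# If the new version misbehaves, roll back to the previous revision.
kubectl -n dev rollout undo deployment/web
```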

A step beyond these exercises is learning how Kubernetes itself can be driven by infrastructure-management tools. Puppet, for instance, has a module for creating and manipulating Kubernetes resources, and HashiCorp's Terraform, which had early preliminary support, has since evolved to manage Kubernetes as a resource in its own right. If you plan to use such tools, be aware that different tools bring very different assumptions to the table: Puppet and Terraform, for example, default to mutable and immutable infrastructure, respectively. These philosophical and behavioral differences can make it easier or harder to create the Kubernetes setup you need.
