
Kubernetes Tutorial - Build a Machine Learning System & Monitor and Autoscale Cloud Native Applications

Read this Kubernetes tutorial to learn how to build a machine learning system and how to monitor and autoscale cloud-native applications with Kubernetes.

Build a Machine Learning System Using Kubernetes

The engineering behind machine learning is complex: it inherits the usual problems of software development and adds the data-driven nature of machine learning on top. As a result, workflows become longer, data versions spin out of control, experiments are hard to trace, results are hard to reproduce, and model iteration is costly. To address these inherent issues, many enterprises have built internal machine learning platforms to manage the machine learning lifecycle, such as Google's TensorFlow Extended platform, Facebook's FBLearner Flow platform, and Uber's Michelangelo platform. However, these platforms depend on the internal infrastructure of their companies, which means they cannot be completely open-sourced. What they share is the machine learning workflow as a framework: it enables data scientists to flexibly define their own machine learning pipelines and reuse existing data processing and model training capabilities to better manage the machine learning lifecycle.
Google has extensive experience in building machine learning workflow platforms. Its TensorFlow Extended platform supports Google's core businesses such as search, translation, and video playback. More importantly, Google has a profound understanding of engineering efficiency in the machine learning field. Google's Kubeflow team made Kubeflow Pipelines open-source at the end of 2018. Kubeflow Pipelines is designed in the same way as Google's internal TensorFlow Extended machine learning platform. The only difference is that Kubeflow Pipelines runs on the Kubernetes platform while TensorFlow Extended runs on Borg.

What is Kubeflow Pipelines?

The Kubeflow Pipelines platform consists of the following components:

  1. A console for running and tracking experiments.
  2. The Argo workflow engine, which executes the multiple steps of a machine learning workflow.
  3. A software development kit (SDK) for defining workflows. Currently, only a Python SDK is available.

You can use Kubeflow Pipelines to achieve the following goals:

  1. End-to-end task orchestration: You can orchestrate and organize a complex machine learning workflow. This workflow can be triggered directly at a scheduled time, or be triggered by events or even by data changes.
  2. Easy experiment management: Scientists can try numerous ideas and frameworks and manage various experiments. Kubeflow Pipelines also facilitates the transition from experiments to production.
  3. Easy reuse: You can quickly create end-to-end solutions by reusing pipelines and components, without the need to rebuild experiments from scratch each time.
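The core idea behind these goals is that a pipeline is a named graph of reusable steps that run in a defined order. The real platform expresses this with the kfp Python SDK and runs each step as a container under Argo; the following is only a minimal, self-contained sketch of the concept in plain Python, with hypothetical step names (`load_data`, `train`, `evaluate`) standing in for real components:

```python
# Conceptual sketch of a machine learning workflow as an ordered chain of
# reusable steps, the idea behind a Kubeflow pipeline. In the real platform
# each step runs as a container orchestrated by Argo; the step names here
# are hypothetical placeholders, not kfp API calls.

class Pipeline:
    def __init__(self, name):
        self.name = name
        self.steps = []  # (step_name, function) pairs, in registration order

    def step(self, func):
        """Register a function as the next step of the pipeline."""
        self.steps.append((func.__name__, func))
        return func

    def run(self):
        """Execute the steps in order, feeding each the previous step's output."""
        result = None
        for _, func in self.steps:
            result = func(result)
        return result

pipeline = Pipeline("demo")

@pipeline.step
def load_data(_):
    return [1.0, 2.0, 3.0]

@pipeline.step
def train(data):
    return sum(data) / len(data)  # a stand-in "model": the mean of the data

@pipeline.step
def evaluate(model):
    return {"model": model, "ok": model > 0}

print(pipeline.run())  # {'model': 2.0, 'ok': True}
```

Because the steps are registered rather than hard-wired, the same `train` or `evaluate` step can be reused in another pipeline, which is the reuse property the list above describes.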

This section has covered what Kubeflow Pipelines is, some of the issues it resolves, and the procedure for using Kustomize on Alibaba Cloud to quickly build Kubeflow Pipelines for machine learning.

How to Monitor and Autoscale Cloud Native Applications in Kubernetes

As an increasing number of developers embrace the design concepts of cloud-native applications, it is critical to note that Kubernetes has become the center of the entire cloud-native implementation stack. Cloud service capabilities are exposed to the service layer through the standard Kubernetes interface via Cloud Provider, CRD controllers, and Operators. Developers build their own cloud-native applications and platforms on top of Kubernetes. Hence, Kubernetes is now the platform for building platforms.

It is very simple to use monitoring and autoscaling capabilities on Alibaba Cloud Container Service for Kubernetes. Developers only need to install the corresponding component chart with one click to get complete access. With multi-dimensional monitoring and autoscaling capabilities, cloud-native applications gain higher stability and robustness at minimal cost.
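The autoscaling half of this picture follows a simple, documented rule: the Kubernetes Horizontal Pod Autoscaler computes desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), clamped to the configured minimum and maximum. The sketch below illustrates that formula with made-up utilization numbers, not values from any real cluster:

```python
# Sketch of the scaling rule used by the Kubernetes Horizontal Pod Autoscaler:
# desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric),
# clamped to [min_replicas, max_replicas]. All numbers below are illustrative.
import math

def desired_replicas(current_replicas, current_metric, target_metric,
                     min_replicas=1, max_replicas=10):
    """Compute the replica count the HPA would aim for, clamped to bounds."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# CPU utilization at 90% against a 50% target: 3 pods scale up to 6.
print(desired_replicas(3, 90, 50))   # 6

# Utilization at 20% against a 50% target: 3 pods scale down to 2.
print(desired_replicas(3, 20, 50))   # 2
```

The monitoring components supply the `current_metric` term: without accurate per-pod metrics, the formula has nothing to act on, which is why monitoring and autoscaling are installed together.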

In this blog, you will learn how a cloud-native application can seamlessly integrate monitoring and autoscaling capabilities in Kubernetes.

Related Products

Container Service for Kubernetes (ACK)

Container Service for Kubernetes (ACK) is a fully managed service. ACK is integrated with services such as virtualization, storage, network, and security, providing users with a high-performance, scalable Kubernetes environment for containerized applications. Alibaba Cloud is a Kubernetes Certified Service Provider (KCSP), and ACK is certified by the Certified Kubernetes Conformance Program, which ensures a consistent Kubernetes experience and workload portability.

Related Courses

Using Kubernetes to Manage Containers and Cluster Resources

This course is aimed at IT companies that want to containerize their business applications, and at cloud computing engineers or enthusiasts who want to learn container technology and Kubernetes. By taking this course, you will understand what Kubernetes is, why we need Kubernetes, the basic architecture of Kubernetes, its core concepts and terms, and how to build a Kubernetes cluster on the Alibaba Cloud platform, providing a reference for the evaluation, design, and implementation of application containerization.

Provisioning a Multi-zone ACK Kubernetes Cluster Using Terraform

Through this course, you will learn about Alibaba Cloud Container Service for Kubernetes and its applicable scenarios, as well as how to use Terraform to flexibly deploy ACK clusters and implement blue-green deployments.


Alibaba Clouder

