This topic describes the documentation updates for new features and updates of Machine Learning Platform for AI (PAI) in 2021.

July 2021

| Date | Update item | Type | Description | References |
| --- | --- | --- | --- | --- |
| 2021.07.29 | ASR models | New feature | The Chinese speech vectorization model and the English speech vectorization model are added to Model Hub. | ASR models |
| 2021.07.06 | Elastic Algorithm Service (EAS) SDKs | New feature | Official EAS SDKs are provided to call services deployed from models. The SDKs reduce the time required to define call logic and improve call stability. PAI provides EAS SDKs for Python, Java, and Go. | SDK for Python, SDK for Java, and SDK for Go |
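Under the hood, the EAS SDKs send HTTP requests to a service endpoint with the service token in the `Authorization` header. The following is a minimal sketch of that request pattern using only the Python standard library; the endpoint URL, service name, token, and payload below are placeholders, and the `/api/predict/<service_name>` path is an assumption for illustration.

```python
import json
import urllib.request

def build_eas_request(endpoint, service_name, token, payload):
    """Build an HTTP request for an EAS online service.

    Sketch of what the SDKs handle for you: the request URL is the
    endpoint plus the service path, and the service token goes in the
    Authorization header. All values passed in here are placeholders.
    """
    url = f"{endpoint}/api/predict/{service_name}"  # assumed path layout
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url,
        data=data,
        headers={"Authorization": token, "Content-Type": "application/json"},
        method="POST",
    )

# Placeholder values only; a real call would use your service's
# endpoint and token and then pass the request to urllib.request.urlopen.
req = build_eas_request(
    "http://example-endpoint",        # placeholder endpoint
    "my_service",                     # placeholder service name
    "my_token",                       # placeholder token
    {"instances": [[1.0, 2.0, 3.0]]}, # placeholder payload
)
```

The SDKs add retry and connection-pooling logic on top of this pattern, which is what reduces hand-written call logic.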

June 2021

| Date | Update item | Type | Description | References |
| --- | --- | --- | --- | --- |
| 2021.06.27 | Plug-ins provided by AutoLearning | Optimization | The topics about how to use the computer vision model training plug-in and the general-purpose model training plug-in are updated based on the procedures in the PAI console. | AI Industry Plug-In |
| 2021.06.24 | Model deployment by using custom images | New feature | Model services often involve complex environment dependencies. If you use a processor to deploy a model as a service, you must package all shared libraries into the processor and cannot install dependencies into system paths by running the yum install command, which limits flexibility. EAS therefore allows you to deploy a model as a service by using a custom image. | Use a custom image to deploy a model service |
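Deploying from a custom image is driven by a service configuration that points at the image and its startup command. The sketch below builds such a configuration as a Python dictionary; every field name and value here is an illustrative assumption rather than the authoritative schema, so consult the referenced topic for the exact fields.

```python
import json

# A minimal sketch of a custom-image service configuration.
# All field names and values are illustrative assumptions.
service_config = {
    "name": "demo_service",  # placeholder service name
    "containers": [
        {
            "image": "registry.example.com/demo:v1",  # placeholder image URI
            "script": "python /app/serve.py",         # assumed startup command
            "port": 8000,                             # port the app listens on
        }
    ],
    "metadata": {"instance": 1, "cpu": 2, "memory": 4000},  # assumed resources
}

# The configuration is submitted to EAS as JSON.
print(json.dumps(service_config, indent=2))
```

Because the image ships its own dependencies, nothing needs to be packaged into a processor or installed into system paths on the serving nodes.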

May 2021

| Date | Update item | Type | Description | References |
| --- | --- | --- | --- | --- |
| 2021.05.27 | Authorization in EAS | Optimization | The sample code that shows the content of a RAM policy is updated. | Authorize a RAM user to access EAS |
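For context, a RAM policy is a JSON document with a version and a list of statements, each granting or denying actions on resources. The sketch below shows the general shape only; the `eas:*` action wildcard is an assumption for illustration, so copy the exact actions from the referenced topic.

```python
import json

# A minimal sketch of a RAM policy granting a RAM user access to EAS.
# The action name "eas:*" is an assumption; use the actions from the
# referenced topic in a real policy.
policy = {
    "Version": "1",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "eas:*",  # assumed EAS action wildcard
            "Resource": "*",
        }
    ],
}

print(json.dumps(policy, indent=2))
```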

April 2021

| Date | Update item | Type | Description | References |
| --- | --- | --- | --- | --- |
| 2021.04.25 | AI computing asset management | New feature | This module unifies the management of PAI-related datasets, algorithms, models, and images. | AI Computing Asset Management |
| 2021.04.19 | Product models | New feature | The product recognition model is added to Model Hub. | Product recognition model |
| 2021.04.07 | Built-in processors | New feature | Built-in processors for TensorFlow 1.15 and PyTorch 1.6 are added. | Built-in processors |

March 2021

| Date | Update item | Type | Description | References |
| --- | --- | --- | --- | --- |
| 2021.03.04 | Offline prediction in end-to-end text recognition | New feature | EasyVision of PAI allows you to perform model training and prediction for end-to-end text recognition, including distributed training and prediction across multiple servers. This topic describes how to use EasyVision to perform offline prediction based on existing trained models. | End-to-end text recognition |
| 2021.03.04 | Labeling templates | New feature | This topic describes the labeling templates for text, videos, and images, and the scenarios and data structure of each template. | Labeling templates for images |

February 2021

| Date | Update item | Type | Description | References |
| --- | --- | --- | --- | --- |
| 2021.02.26 | Learning path | New feature | This topic describes the learning path of PAI. | Machine Learning Platform for AI |
| 2021.02.25 | Components for binary classification | Optimization | This topic describes the input parameters, PAI commands, and examples of the components for binary classification. | Linear SVM |

January 2021

| Date | Update item | Type | Description | References |
| --- | --- | --- | --- | --- |
| 2021.01.26 | Intelligent video processing models | New feature | The models for general video classification and video highlights generation are added. This topic describes the input and output formats of both models and provides test examples. | Intelligent video processing models |
| 2021.01.20 | Distributed deep learning framework Whale | New feature | Whale is a flexible, easy-to-use, efficient, and centralized distributed training framework. It provides easy-to-use APIs for data parallelism, model parallelism, pipeline parallelism, operator splitting, and hybrid parallelism that combines multiple parallelism strategies. Whale is developed based on TensorFlow and is fully compatible with the TensorFlow API. You only need to add a few lines of code that describe the distributed parallelism strategy to an existing TensorFlow model to perform distributed and hybrid parallel training. | |
| 2021.01.11 | The development environment of Data Science Workshop (DSW) | Optimization | This topic describes how to work with the development environment of DSW, including how to use the user interface, run preset cases, and manage third-party libraries. | Work with the development environments of DSW |
| 2021.01.11 | Create a DSW instance | Optimization | You must create a DSW instance before you can use DSW to build models in notebooks. This topic describes how to create a DSW instance. | Create an instance |