This topic describes the terms used in Machine Learning Studio, Data Science Workshop (DSW), and Elastic Algorithm Service (EAS).

Machine Learning Studio

Project: Machine Learning Studio organizes resources by project. You can create multiple projects in each region to isolate and manage resources, permissions, and experiments. You can also use your Alibaba Cloud account to authorize Resource Access Management (RAM) users to access only specific projects.
Experiment: You can create multiple experiments in each project to build algorithm models. Experiments can be scheduled in DataWorks.
Table: Tables are automatically stored in MaxCompute, where the table is the basic unit of data storage.
  • You can create tables, add tables to favorites, and import data into tables.
  • You can delete tables only in the MaxCompute console.


Data Science Workshop (DSW)

Instance: An instance is the basic unit for performing development operations in DSW, such as reading data, developing algorithms, and training models. It is also the basic unit for associating resources and storing data.
Note: You must create a DSW instance before you can start coding.


Elastic Algorithm Service (EAS)

Resource group: Resource groups are used to isolate cluster resources. When you create an online model service, you can deploy it in the default shared resource group or in a dedicated resource group.
Model service: A model service is a resident, long-running service deployed from a model file and online prediction logic. You can create, update, start, stop, scale out, and scale in a model service.
Model file: The format of a model file generated by offline training depends on the machine learning framework that you use. In most cases, a model file is deployed together with a processor to generate a model service.
Processor: A processor is a program package that contains the prediction logic for online prediction. Processors are deployed together with model files to generate model services. EAS provides built-in processors for PMML, TensorFlow (SavedModel), and Caffe models.
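For example, deploying a model file with a built-in processor is typically described by a JSON service configuration. The sketch below is illustrative only; the service name and OSS path are made up, and the exact field names (such as model_path, processor, and metadata) should be verified against the current EAS documentation:

```json
{
  "name": "demo_pmml_service",
  "model_path": "oss://examplebucket/models/model.pmml",
  "processor": "pmml",
  "metadata": {
    "instance": 2,
    "cpu": 1
  }
}
```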
Custom processor: If the built-in processors cannot meet your service deployment requirements, you can develop custom processors in C++, Java, or Python.
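Conceptually, a custom processor loads the model once at startup and then handles each prediction request. The following self-contained Python sketch mirrors that initialize/process pattern; a real EAS custom processor is built on the EAS-provided SDK, so treat the class name, method signatures, and the toy "model" here as illustrative assumptions rather than the actual API:

```python
import json


class EchoProcessor:
    """Illustrative stand-in for a custom prediction processor.

    A real EAS custom processor is built on the EAS-provided SDK;
    this sketch only mirrors the initialize/process call pattern.
    """

    def initialize(self):
        # In a real processor, load the model file here.
        # A constant "model" is used for illustration.
        self.model = {"bias": 1.0}

    def process(self, data: bytes):
        # Deserialize the request, run the "prediction" logic,
        # and serialize the response.
        request = json.loads(data)
        result = {"score": request.get("x", 0.0) + self.model["bias"]}
        # Return the response body and an HTTP-style status code.
        return json.dumps(result).encode("utf-8"), 200


if __name__ == "__main__":
    processor = EchoProcessor()
    processor.initialize()
    body, status = processor.process(b'{"x": 2.0}')
    print(status, body.decode())
```

The split between initialize (one-time model loading) and process (per-request logic) is what lets a service instance handle many requests without reloading the model.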
Service instance: Each service can be deployed with more than one service instance, which increases the number of concurrent requests that the service can handle. If a resource group contains multiple nodes, EAS automatically distributes the instances across these nodes to ensure the high availability of model services.
High-speed connection to a VPC: After a dedicated resource group is connected to an existing virtual private cloud (VPC), you can use a client to establish a high-speed connection between the VPC and each service instance in the resource group.