Machine Learning Platform for AI supports deep learning frameworks and provides GPU-accelerated clusters. You can use these frameworks and hardware resources to run deep learning algorithms.
Prerequisites
MaxCompute resources are associated with your workspace. For more information, see Manage workspaces.
Background information
Deep learning in Machine Learning Platform for AI supports the TensorFlow framework and is compatible with TensorFlow 1.12. TensorFlow allows you to run custom code that is written in Python.
Before you use a deep learning framework to train models, you must upload your training data to Object Storage Service (OSS). When you run an algorithm, it reads data from the OSS paths that you specify. If the OSS buckets reside in the same region as Machine Learning Platform for AI, you are not charged data transfer fees. If the buckets reside in a different region, you are charged data transfer fees.
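As a minimal sketch of what specifying an OSS path involves, the snippet below builds an oss:// URI that a training script could read from. The bucket name, object key, and helper function are hypothetical; only the oss://bucket/key path convention is assumed.

```python
# Hypothetical helper: build the oss:// URI for a training data object.
# The bucket and object names below are placeholders for illustration.

def oss_uri(bucket: str, object_key: str) -> str:
    """Return an oss://bucket/key URI for an object uploaded to OSS."""
    return "oss://{}/{}".format(bucket, object_key.lstrip("/"))

# Example: point a training script at an uploaded CSV file.
train_path = oss_uri("my-training-bucket", "datasets/train.csv")
print(train_path)  # oss://my-training-bucket/datasets/train.csv
```

You would pass a path such as this to your training code wherever the algorithm expects an input data location.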
Machine Learning Platform for AI supports GPU-accelerated clusters only in the following regions: China (Shanghai), China (Beijing), China (Hangzhou), and China (Shenzhen).
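The region restriction above can be expressed as a simple preflight check before you enable deep learning. The helper function below is a sketch, and the mapping from the listed region names to region IDs is an assumption for illustration.

```python
# Regions in which Machine Learning Platform for AI supports GPU-accelerated
# clusters. The region-ID spellings are assumptions for illustration.
GPU_REGIONS = {
    "cn-shanghai",   # China (Shanghai)
    "cn-beijing",    # China (Beijing)
    "cn-hangzhou",   # China (Hangzhou)
    "cn-shenzhen",   # China (Shenzhen)
}

def gpu_clusters_supported(region_id: str) -> bool:
    """Check whether GPU-accelerated clusters are available in a region."""
    return region_id in GPU_REGIONS

print(gpu_clusters_supported("cn-hangzhou"))  # True
print(gpu_clusters_supported("us-west-1"))    # False
```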
Enable deep learning
To enable deep learning, go to the details page of the workspace that you want to manage and configure GPU resources.
Log on to the Machine Learning Platform for AI console.
In the left-side navigation pane, click Workspaces. On the Workspaces page, click the name of the workspace that you want to manage.
In the Workspace Details section, click Resource management next to Computing resources.
In the Workspace resource allocation panel, perform the steps that are shown in the following figure to open the Resource Configuration dialog box.
In the Resource Configuration dialog box, set GPU to Pay-As-You-Go and click Confirm.