Machine Learning Platform for AI (PAI) of Alibaba Cloud supports multiple deep learning frameworks and provides GPU computing clusters. You can run deep learning algorithms based on these frameworks and hardware resources.

Prerequisites

A project is created. For more information, see Create a project.

Background information

The deep learning feature of PAI supports the following frameworks: MXNet 0.9.5, Caffe rc3, TensorFlow 1.4, and TensorFlow 1.8. TensorFlow and MXNet support custom code written in Python. Caffe supports custom net files.
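
For example, the custom Python code that you submit for TensorFlow 1.x is an ordinary script that builds a graph and runs it in a session. The following sketch is illustrative only and is not a PAI job template; the variable names and values are placeholders.

  # Minimal TensorFlow 1.x sketch: build the graph y = w * x + b and evaluate it.
  import tensorflow as tf

  x = tf.placeholder(tf.float32, shape=[None], name="x")
  w = tf.Variable(2.0, name="w")
  b = tf.Variable(1.0, name="b")
  y = w * x + b

  with tf.Session() as sess:
      sess.run(tf.global_variables_initializer())
      # Prints [3. 5. 7.]
      print(sess.run(y, feed_dict={x: [1.0, 2.0, 3.0]}))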

Before you use a deep learning framework to train models, you must upload your data to Alibaba Cloud Object Storage Service (OSS). When an algorithm runs, it reads the data from the specified OSS path. If the algorithm and the OSS bucket reside in the same region, no traffic fees are charged. If they reside in different regions, traffic fees are charged.
Note GPU clusters for PAI are deployed only in the China (Shanghai) and China (Beijing) regions.
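
For example, you can upload your training data to OSS with the OSS Python SDK (oss2) before you run an algorithm. The following sketch assumes a bucket in the China (Shanghai) region; the AccessKey pair, bucket name, and object path are placeholders, not values from this topic.

  # Sketch: upload a local training file to OSS so that the algorithm can read it.
  import oss2

  # Replace the AccessKey pair with your own credentials.
  auth = oss2.Auth('<your-access-key-id>', '<your-access-key-secret>')
  # Use the OSS endpoint of the region where your GPU cluster runs, such as China (Shanghai).
  bucket = oss2.Bucket(auth, 'https://oss-cn-shanghai.aliyuncs.com', '<your-bucket-name>')

  # After the upload, the algorithm can read the data from the OSS path
  # oss://<your-bucket-name>/deep_learning/train_data.csv
  bucket.put_object_from_file('deep_learning/train_data.csv', 'train_data.csv')

Uploading the data to a bucket in the same region as the GPU cluster avoids the cross-region traffic fees described above.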

Procedure

Before you can use deep learning, you must enable GPU computing for a project by selecting By usage or By package in the Open GPU column.

  1. Log on to the PAI console.
  2. In the left-side navigation pane, choose Model Training > Visualized Modeling (Machine Learning Studio).
  3. On the Visualized Modeling page, find the created project and select By usage or By package in the Open GPU column.

    The projects for which you have enabled GPU computing are allocated to a public resource pool. This way, the projects can use the underlying GPU computing resources.