
DSW Environment

Last Updated: Jun 01, 2020

Log on to the DSW console and click Open next to your DSW instance to launch the DSW terminal. Data Science Workshop (DSW) provides a Python environment with built-in big data development and algorithm libraries, and you can also install third-party libraries. This allows you to focus on algorithm development.

Notebook development environment

Click Open next to your DSW instance to launch the DSW terminal. You can read data, develop algorithms, and train models in the terminal.


If you are new to DSW, you can use the built-in demos to quickly get started with DSW features. In the left-side file list, click Demos and select a demo to download. You can find the downloaded demo in the /Demo/Cases directory and open it to run the demo.

Install third-party libraries

If you need to use third-party libraries in the Python environment, install them through the DSW terminal. Install third-party libraries for Python3:

  pip install --user xxx

Example:

  pip install --user sklearn

Remove third-party libraries. You can only remove third-party libraries installed under your account:

  pip uninstall xxx

View the installed libraries:

  pip list
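The --user flag installs packages into your per-user site-packages directory, which is why only libraries installed under your own account can be removed. To see where these packages are stored (this is a general Python mechanism, not specific to DSW), run:

```shell
# Print the per-user site-packages directory that `pip install --user`
# writes packages to.
python3 -m site --user-site
```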

Install a new version of tensorflow-gpu. You are not allowed to remove tensorflow-gpu, so the only way to install a new version is to use the upgrade command. The new version must be compatible with the CUDA version of your instance. (Subscription instances use CUDA 10; pay-as-you-go instances use CUDA 9.)

  pip install --upgrade --user tensorflow-gpu==<Version number>
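Before upgrading, you can check which version is currently installed so that you can choose a CUDA-compatible target version. A sketch, shown with pip itself as a stand-in package name since tensorflow-gpu may not be installed outside DSW:

```shell
# Print the installed version of a package. Inside DSW, replace "pip"
# with "tensorflow-gpu".
python3 -m pip show pip | grep '^Version:'
```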

Currently, DSW provides four kernels: Python2, Python3, PyTorch, and TensorFlow2.0. Third-party libraries are installed under Python3 by default. You can also switch environments to install third-party libraries into the other environments.

Install third-party libraries in the Python2 environment:

  source activate python2
  pip install --user xxx

Install third-party libraries in the TensorFlow2.0 environment:

  source activate tf2
  pip install --user xxx

Upload data

Upload local files

To upload small files, click the Upload icon in the DSW terminal. DSW also supports resumable upload. For more information about how to upload large files, see How to upload and download to DSW. The uploaded files are stored in the NAS file system.

Mount a NAS file system

If a NAS file system is mounted to your DSW instance, you can manage files in the left-side file list of the DSW terminal.

Note: The free NAS file system provided by DSW is under the following path: /home/admin/jupyter. Files stored in this path will not be automatically deleted.
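Before uploading large files, you may want to check how much space is left on the file system. A sketch; inside DSW you would point df at the NAS path /home/admin/jupyter, while here the current directory is used so the command runs anywhere:

```shell
# Show the size, used, and available space of the file system backing the
# current directory (in DSW, use /home/admin/jupyter instead of ".").
df -h .
```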

Model deployment

The DSW terminal supports EASCMD commands. You can run these commands in the DSW terminal to deploy model services in Elastic Algorithm Service (EAS) (What is PAI EAS?). Follow these steps to create an EAS model service.

1. OSS authorization

If you want to use an online prediction service provided by EAS, you need your Alibaba Cloud AccessKey information to verify your identity. When you submit a model deployment task, you must provide both the AccessKey ID and the AccessKey secret. Enter the following command in the terminal:

  eascmd config -i <AccessKeyId> -k <AccessKeySecret> -e <Endpoint>

Replace the endpoint in the command with the endpoint of the region where your model service is deployed. The endpoints and regions currently supported by the public offering of Alibaba Cloud are as follows:

Region Endpoint
China (Shanghai) (Subscription instances)
China (Beijing) (Subscription instances)
China (Shanghai) (Pay-as-you-go instances)
China (Beijing) (Pay-as-you-go instances)
China North 2 Ali Gov
China (Hangzhou)
China (Shenzhen)

2. Upload files

When you create an EAS service, you must specify the HTTP address or OSS address of the model file or processor file. EAS provides an OSS bucket for you to store data. You can run the EASCMD upload command to upload files, and then obtain the OSS addresses of the uploaded files. The upload command is as follows. The filename specifies the model file or custom processor file generated after you train the model in DSW.

  eascmd upload [filename] --inner

After the file is uploaded, you can deploy the EAS service with the returned OSS target path, for example: oss://eas-model-beijing/1295715995194599/xlab_m_random_forests__638730_v0-random forest-1-Model.pmml

  sh-4.2$ eascmd upload xlab_m_random_forests__638730_v0-random forest-1-Model.pmml --inner
  [OK] oss endpoint: []
  [OK] oss target path: [oss://eas-model-beijing/1295715995194599/xlab_m_random_forests__638730_v0-random forest-1-Model.pmml]
  Succeed: Total num: 1, size: 23,846. OK num: 1(upload 1 files).
  sh-4.2$

3. Create an EAS service

Create a JSON file in DSW. For example, the file name can be pmml.json. The file contains the OSS target path generated in step 2. For more information about parameter descriptions, see EASCMD.

  {
    "name": "model_example",
    "generate_token": "true",
    "model_path": "oss://eas-model-shanghai/1295715995194599/xlab_m_random_forests__638730_v0-random forest-1-Model.pmml",
    "processor": "pmml",
    "metadata": {
      "instance": 1,
      "cpu": 1
    }
  }
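Before running the create command, it can help to verify that the JSON file is syntactically valid. A minimal sketch, assuming a standard Python 3 is available (as it is in DSW), that writes the example file and checks it with the json.tool module:

```shell
# Write the example service description to pmml.json and validate its
# JSON syntax; json.tool exits non-zero on malformed JSON.
cat > pmml.json <<'EOF'
{
  "name": "model_example",
  "generate_token": "true",
  "model_path": "oss://eas-model-shanghai/1295715995194599/xlab_m_random_forests__638730_v0-random forest-1-Model.pmml",
  "processor": "pmml",
  "metadata": {
    "instance": 1,
    "cpu": 1
  }
}
EOF
python3 -m json.tool pmml.json > /dev/null && echo "pmml.json is valid JSON"
```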

Run the create command to create an EAS service. The model service is deployed, as shown in the following figure.

  eascmd create pmml.json

You can log on to the EAS console to manage model services deployed through DSW.