This topic describes how to use Elastic Algorithm Service (EAS) of Machine Learning
Platform for AI and DataWorks to implement scheduled model deployment.
Step 1: Create an exclusive resource group
- Log on to the DataWorks console.
- In the left-side navigation pane, click Resource Groups.
- On the Exclusive Resource Groups tab, click Create Resource Group for Scheduling.
- In the Create a dedicated resource group panel, set the parameters.
Parameter | Description
--- | ---
Resource Group Type | Select Exclusive Resource Groups.
Resource Group Name | The name of the resource group. The name must be unique within the tenant. Note: A tenant is an Alibaba Cloud account; each tenant can have multiple RAM users.
Resource Group Description | The description of the resource group, which distinguishes it from other resource groups.
Order Number | The order number of the purchased exclusive resource group. If you have not purchased one, click Purchase.
- Click OK.
Note The exclusive resource group is initialized within 20 minutes. Wait until its status
changes to Running.
Step 2: Bind the exclusive resource group to a workspace
You must bind an exclusive resource group to a workspace before you can select the
resource group in the workspace.
- On the Exclusive Resource Groups tab, find the resource group and click Change Workspace in the Actions column.
- In the Workspace section of the Modify home workspace dialog box, find the workspace to which you want to bind the exclusive resource group.
Then, click Bind in the Actions column.
Step 3: Create a workflow
- Go to the DataStudio page.
- Log on to the DataWorks console.
- In the left-side navigation pane, click Workspaces.
- In the top navigation bar, select the region where the required workspace resides,
find the workspace, and then click Data Analytics.
- Move the pointer over the Create icon and click Workflow.
- In the Create Workflow dialog box, set the Workflow Name and Description parameters.
Notice The workflow name can be up to 128 characters in length and can contain letters, digits,
underscores (_), and periods (.).
- Click Create.
- On the tab of the workflow, drag Shell from the node list to the canvas on the right.

- In the Create Node dialog box, enter Deployment in the Node Name field.
- Click Commit.
Step 4: Deploy the model as the initial model service
Scheduled model deployment works by updating an existing online model service to a new version. Therefore, you must deploy the model as the initial model service before you configure scheduled deployment. If the initial model service already exists, skip to Step 5.
- Edit the deployment script.
- On the tab of the workflow, double-click the created Shell node. In this example,
double-click the Deployment node.
- On the tab of the Shell node, enter the following commands:
# Compile the service deployment description file.
# You can change the values of instance and cpu based on your actual needs.
cat << EOF > echo.json
{
  "name": "yourModelName",
  "generate_token": "true",
  "model_path": "yourModelAddress",
  "processor": "yourProcessorType",
  "metadata": {
    "instance": 1,
    "cpu": 2,
    "memory": 4000
  }
}
EOF
# Run the deployment command.
/home/admin/usertools/tools/eascmd -i <yourAccessKeyID> -k <yourAccessKeySecret> -e pai-eas.cn-shanghai.aliyuncs.com create echo.json
echo.json is the JSON file that describes the service information, such as the location of
the model and the required resources. You can set the following parameters as needed:
- name: the name of the model service. The name is the unique identifier of the service and must be unique within a region. Choose a name that reflects the business purpose of the service.
- model_path: the path where the trained model is stored. You can specify an HTTP URL or an Object
Storage Service (OSS) path.
If you specify an HTTP URL, the model files must be compressed in TAR, GZ, BZ2, or ZIP format. If you specify an OSS path, you can point to either a package or a directory. To use an OSS path, you must also specify the OSS endpoint by adding a line such as "oss_endpoint": "oss-cn-beijing.aliyuncs.com" to the preceding service deployment description file. Change the region in the endpoint as needed.
Note If you use OSS to store models, you must grant Machine Learning Platform for AI the
permissions to access OSS. For more information, see the "OSS authorization" section
of the
Authorization topic.
- processor: the type of the processor.
- metadata: the metadata of the service, which can be modified as needed. For more information,
see Run commands to use the EASCMD client.
- yourAccessKeyID: the AccessKey ID.
- yourAccessKeySecret: the AccessKey secret.
- Endpoint: the endpoint of Machine Learning Platform for AI in the region that you use. Specify the endpoint after the -e option in the deployment command. The following table lists the endpoint of each region.
Region | Endpoint
--- | ---
China (Shanghai) | pai-eas.cn-shanghai.aliyuncs.com
China (Beijing) | pai-eas.cn-beijing.aliyuncs.com
China (Hangzhou) | pai-eas.cn-hangzhou.aliyuncs.com
China (Shenzhen) | pai-eas.cn-shenzhen.aliyuncs.com
China (Hong Kong) | pai-eas.cn-hongkong.aliyuncs.com
Singapore | pai-eas.ap-southeast-1.aliyuncs.com
India (Mumbai) | pai-eas.ap-south-1.aliyuncs.com
Indonesia (Jakarta) | pai-eas.ap-southeast-5.aliyuncs.com
Germany (Frankfurt) | pai-eas.eu-central-1.aliyuncs.com
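To make the parameters above concrete, the following is a hedged example of a filled-in description file. All values (service name, OSS bucket, model package, and processor) are hypothetical placeholders, not values from this topic; substitute your own. The example deploys in the China (Beijing) region, so both oss_endpoint and the -e endpoint use cn-beijing.

```shell
# Hypothetical example: demo_model, examplebucket, and the processor name are
# placeholders for illustration only.
cat << 'EOF' > example.json
{
  "name": "demo_model",
  "generate_token": "true",
  "model_path": "oss://examplebucket/models/demo_model.tar.gz",
  "oss_endpoint": "oss-cn-beijing.aliyuncs.com",
  "processor": "tensorflow_cpu_1.15",
  "metadata": {
    "instance": 2,
    "cpu": 2,
    "memory": 4000
  }
}
EOF

# The endpoint after -e must match the region: China (Beijing) here.
# /home/admin/usertools/tools/eascmd -i <yourAccessKeyID> -k <yourAccessKeySecret> \
#   -e pai-eas.cn-beijing.aliyuncs.com create example.json

# Sanity-check that the description file is well-formed JSON before deploying.
python3 -c "import json; json.load(open('example.json')); print('example.json is valid JSON')"
```

Validating the file locally before running eascmd helps catch quoting mistakes in the heredoc early, because the deployment command fails if the JSON is malformed.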
- Run the script.
- On the tab of the Shell node, click the Run icon in the top toolbar.
- In the Warning message, click Continue to Run.
- In the Runtime Parameters dialog box, set the Resource Group parameter to the created exclusive resource group.
- Click OK.
After the code is executed, an online model service is generated. You can perform
the following steps to view the model service in the Machine Learning Platform for
AI console.
- Optional: View the deployed model service.
- Log on to the Machine Learning Platform for AI console.
- In the left-side navigation pane, choose the entry for Elastic Algorithm Service (EAS).
- In the top navigation bar, select a region.
- On the Elastic Algorithm Service page, view the deployed model service.

In subsequent steps, more service versions will be added to the model service to implement
scheduled model deployment.
Step 5: Edit the scheduled deployment script
Edit the code of the Shell node from Step 4, as shown in the following sample code. If you have completed Step 4, retain the first 14 lines of code. If you skipped Step 4, change the parameter values in the first 14 lines of code as needed.
# Compile the service deployment description file.
cat << EOF > echo.json
{
  "name": "yourModelName",
  "generate_token": "true",
  "model_path": "yourModelAddress",
  "processor": "yourProcessorType",
  "metadata": {
    "instance": 1,
    "cpu": 2,
    "memory": 4000
  }
}
EOF # The 14th line of code.
# Update the model service. Each scheduled deployment adds a new version of the service and brings it online as the latest version.
/home/admin/usertools/tools/eascmd -i <yourAccessKeyID> -k <yourAccessKeySecret> -e pai-eas.cn-shanghai.aliyuncs.com modify <yourModelName> -s echo.json
# Define the test logic for the service.
# If an exception occurs on the test service, run the following command to roll back the model service:
#/home/admin/usertools/tools/eascmd -i <yourAccessKeyID> -k <yourAccessKeySecret> -e pai-eas.cn-shanghai.aliyuncs.com version -f <The name of the model to be rolled back> 1
For more information about the parameters, see
Step 4: Deploy the model as the initial model service.
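The test logic between the modify and rollback commands is left open in the script above. The following sketch shows one way to fill it in, under stated assumptions: the service URL below is hypothetical, and in practice you would copy the real invocation URL and token from the service details page in the console. The idea is to probe the updated service over HTTP and trigger the commented rollback command only when the probe fails.

```shell
# Sketch of the "test logic" placeholder: probe the updated service and decide
# whether to roll back. The URL is a hypothetical placeholder.

decide() {
  # Map an HTTP status code to an action: connection failures (curl reports 000)
  # and 5xx responses mean the new version is unhealthy.
  code="$1"
  if [ "$code" = "000" ] || [ "$code" -ge 500 ]; then
    echo "rollback"
  else
    echo "keep"
  fi
}

# Probe the service; --max-time keeps the scheduled node from hanging.
HTTP_CODE=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 \
  "http://pai-eas.cn-shanghai.aliyuncs.com/api/predict/yourModelName")  # hypothetical URL
HTTP_CODE=${HTTP_CODE:-000}

if [ "$(decide "$HTTP_CODE")" = "rollback" ]; then
  echo "Probe failed (HTTP $HTTP_CODE); rolling back to version 1."
  # /home/admin/usertools/tools/eascmd -i <yourAccessKeyID> -k <yourAccessKeySecret> \
  #   -e pai-eas.cn-shanghai.aliyuncs.com version -f <yourModelName> 1
else
  echo "Probe passed (HTTP $HTTP_CODE); keeping the new version."
fi
```

In a real scheduled node, replace the bare HTTP probe with an actual prediction request that carries the service token, and uncomment the rollback command so a failed check restores the known-good version.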
Step 6: Execute scheduled deployment
- Configure scheduling properties and commit the Shell node.
- On the Shell node tab, click the Properties tab in the right-side pane.
- In the Properties panel, set the Instance Recurrence parameter in the Schedule section.
- In the Dependencies section, click Use Root Node next to the Parent Nodes field.
- Configure dependencies. For more information, see Configure same-cycle scheduling dependencies.
- Click the Save icon on the tab of the Shell node to save the configurations.
- Click the Commit icon on the tab of the Shell node to commit the scheduled node.
- View the instances of the scheduling node.
- On the tab of the Shell node, click Operation Center in the upper-right corner.
- On the Operation Center page, go to the list of auto-triggered instances.
- On the instance list page, view the scheduled time for automatic model deployment
in the Schedule column.
- In the Actions column, choose View Runtime Log to view the logs of each scheduled deployment run.

- View the model service deployed at scheduled time.
- Log on to the Machine Learning Platform for AI console.
- In the left-side navigation pane, choose the entry for Elastic Algorithm Service (EAS).
- In the top navigation bar, select a region.
- On the Elastic Algorithm Service page, find the service and view all the versions that are automatically updated in
the Version column.
