PAI-EAS Built-in Processors

Last Updated: Jul 21, 2019

A processor is a program package that contains the computing logic of an online model service. Machine Learning Platform for AI provides built-in processors for PMML, TensorFlow, and Caffe models. You can use these built-in processors to deploy models of these types without developing the service logic yourself.

Table of contents

  1. PMML processor
  2. TensorFlow processor
  3. Caffe processor

1. PMML processor

  • What is PMML? Most machine learning models trained in Machine Learning Studio can be exported to the Predictive Model Markup Language (PMML) format.

  • Export a PMML model from Machine Learning Studio:

    • Before you run an experiment in Machine Learning Studio, select Settings > General and then select Auto generate PMML.
    • Run the experiment, and then right-click a node on the canvas and select Model Option > Export PMML to download the generated PMML model. Alternatively, right-click the experiment in the left-side experiment list and select Export PMML.
    • Algorithms that support generating PMML models include GBDT binary classification, linear SVM, logistic regression for binary classification, logistic regression for multiclass classification, random forest, K-means, linear regression, GBDT regression, and scorecard training.
  • The Elastic Algorithm Service (EAS) PMML processor loads PMML models to provide online model services, process requests sent to the services, and return the calculated results.
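
As a hedged sketch (not the authoritative request specification), the following Python snippet shows how a client might call a deployed PMML service over HTTP. The endpoint URL, token, and feature names are placeholders, and the one-feature-map-per-row JSON payload is an assumption; confirm the request format for your service in the EAS documentation.

    import json
    import requests  # third-party HTTP client: pip install requests

    # Placeholders: copy the real endpoint and token from the service
    # details page in the EAS console after deployment.
    url = "http://<region-endpoint>/api/predict/eas_lr_example"
    token = "<service-token>"

    # Hypothetical feature names; one map per row to score.
    rows = [{"f0": 1.0, "f1": 2.0}]

    resp = requests.post(url, headers={"Authorization": token},
                         data=json.dumps(rows))
    resp.raise_for_status()
    print(resp.text)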

  • When the isMissing policy is not set for a feature column in a PMML model, the PMML processor automatically imputes missing values according to the following policy:

Data type | Default value imputed
--------- | ---------------------
Boolean   | false
Double    | 0.0
Float     | 0.0
Int       | 0
String    | "" (empty string)
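
For illustration (the feature names and types here are hypothetical), a request row that omits declared features is scored as though the missing entries had been filled with the defaults above:

    # A row that omits every declared feature ...
    sent = {}

    # ... is scored as though the processor had received this instead,
    # with each default chosen by the feature's declared data type:
    imputed = {"f_bool": False, "f_double": 0.0, "f_float": 0.0,
               "f_int": 0, "f_str": ""}
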
  • You can use the following methods to deploy a PMML model:
    1. Upload to the console: go to the Elastic Algorithm Service (EAS) page, click Upload and Deploy Models, set the processor type to PMML, and follow the instructions to upload and deploy the model.
    2. Use Machine Learning Studio: click Deploy > Online Model Service on the canvas and then follow the instructions to deploy the model.
    3. Use the EASCMD client: when you create a service, set the processor field to pmml in the service configuration file service.json. The following shows the deployment configuration; a minimal invocation sketch follows this list.
    {
      "processor": "pmml",
      "generate_token": "true",
      "model_path": "http://xxxxx/lr.pmml",
      "name": "eas_lr_example",
      "metadata": {
        "instance": 1,
        "cpu": 1  # 1 quota, which equals 1 core and 4 GB of memory.
      }
    }
    4. Use Data Science Workshop (DSW): follow the same procedure as for deploying a model by using the EASCMD client.
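
As a minimal sketch of the EASCMD path, assuming the eascmd client is installed, authenticated, and on your PATH, you can write the configuration to service.json and create the service from it:

    import json
    import subprocess

    # The same deployment configuration as shown above.
    service = {
        "processor": "pmml",
        "generate_token": "true",
        "model_path": "http://xxxxx/lr.pmml",
        "name": "eas_lr_example",
        "metadata": {"instance": 1, "cpu": 1},
    }

    with open("service.json", "w") as f:
        json.dump(service, f, indent=2)

    # "eascmd create" reads the service description and deploys the model.
    subprocess.run(["eascmd", "create", "service.json"], check=True)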

2. TensorFlow processor

  • The TensorFlow processor provided by EAS loads TensorFlow models in the SavedModel and SessionBundle formats. For Keras and checkpoint models, follow the procedure described in Export TensorFlow models to convert them, and then deploy the converted models; a conversion sketch follows this list.
  • The TensorFlow processor does not support custom OPs.
  • You can use the following methods to deploy a TensorFlow model:
    1. Upload to the console: go to the EAS page, click Upload and Deploy Models, set the processor type to TensorFlow, and follow the instructions to upload and deploy the model.
    2. Use Machine Learning Studio: click Deploy > Online Model Service on the canvas and then follow the instructions to deploy the model.
    3. Use the EASCMD client: when you create a service, set the processor field to tensorflow_cpu or tensorflow_gpu in the service configuration file service.json based on the compute resource that the model uses. If the processor type does not match the compute resource type, a deployment error occurs. The following shows the deployment configuration. For more information, see the EASCMD topic.
    {
      "name": "tf_serving_test",
      "generate_token": "true",
      "model_path": "http://xxxxx/savedmodel_example.zip",
      "processor": "tensorflow_cpu",
      "metadata": {
        "instance": 1,
        "cpu": 1,
        "gpu": 0,
        "memory": 2000
      }
    }
    4. Use DSW: follow the same procedure as for deploying a model by using the EASCMD client.
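
As a quick sketch of the conversion step for Keras models, assuming TensorFlow 2.x and a toy model standing in for your trained one (see the Export TensorFlow models topic for the authoritative procedure):

    import tensorflow as tf

    # Placeholder two-layer model; substitute your trained Keras model.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

    # Write the model in the SavedModel format that the TensorFlow
    # processor loads.
    tf.saved_model.save(model, "./savedmodel_example")

The exported directory can then be packaged, for example into savedmodel_example.zip as in the configuration above, and referenced from model_path.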

3. Caffe processor

  • The Caffe processor provided by EAS is used to load deep learning models trained by the Caffe framework.
  • Because the Caffe framework is flexible about file layout, when you deploy a Caffe model you must specify the names of the model file and the weight file in the model package, as shown in the following example; a packaging sketch follows the configuration.
  • The Caffe processor does not support custom DataLayers.
  • You can use the following methods to deploy a Caffe model:
    1. Upload to the console: go to the EAS page, click Upload and Deploy Models, set the processor type to Caffe, and follow the instructions to upload and deploy the model.
    2. Use the EASCMD client: when you create a service, set the processor field to caffe_cpu or caffe_gpu in the service configuration file service.json based on the compute resource that the model uses. If the processor type does not match the compute resource type, a deployment error occurs. The deployment configuration is shown after this list. For more information, see the EASCMD topic.
    3. Use DSW: follow the procedure for deploying a model by using the EASCMD client.
    {
      "name": "caffe_serving_test",
      "generate_token": "true",
      "model_path": "http://xxxxx/caffe_model.zip",
      "processor": "caffe_cpu",
      "model_config": {
        "model": "deploy.prototxt",
        "weight": "bvlc_reference_caffenet.caffemodel"
      },
      "metadata": {
        "instance": 1,
        "cpu": 1,
        "gpu": 0,
        "memory": 2000
      }
    }
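
As a short packaging sketch, assuming the two files sit in the current directory, the archive only needs to contain files whose names match the model and weight fields in model_config; the resulting caffe_model.zip is what model_path points to:

    import zipfile

    # File names must match "model_config" above so the Caffe processor
    # can locate the network definition and the trained weights.
    with zipfile.ZipFile("caffe_model.zip", "w") as zf:
        zf.write("deploy.prototxt")                     # network definition
        zf.write("bvlc_reference_caffenet.caffemodel")  # trained weights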