Develop custom processors

Last Updated: Apr 28, 2020

A processor is a program package that contains prediction logic. The processor processes user requests and returns the results to the client. The processor logic includes model loading and request prediction. Currently, Elastic Algorithm Service (EAS) provides built-in processors for common frameworks such as Predictive Model Markup Language (PMML), TensorFlow, and Caffe. To customize the prediction logic, you must develop a processor based on the processor development standards described below.

Prerequisites for deploying custom processors

  1. For the security of your models and services, custom processors cannot be used in public resource groups. You must create a dedicated subscription resource group before using a custom processor.


  2. After you develop a processor, we recommend that you debug the processor locally before deploying it as an online service.

Develop custom processors

See the section below that corresponds to your programming language for development and local debugging.

1. C/C++

1.1 Quick start demo


https://github.com/pai-eas/pai-prediction-example


The preceding project contains the demos of two custom processors.

  • echo: returns the user input as is and returns the list of files in the model directory.
  • image_classification: MNIST image classification. When an MNIST image in .jpg format is used as input, the image class is returned.


For the compilation method, see the project README. For the local debugging method of each processor, see the README in their respective directories.


1.2 Interface definition

To develop a processor in C/C++, you must implement the initialize() and process() functions. The initialize() function loads the model during service initialization. The process() function processes each user request and returns the result. The two functions are declared as follows:


void *initialize(const char *model_entry, const char *model_config, int *state)

| Parameter | Type | Description |
| --- | --- | --- |
| model_entry | Input parameter | Corresponds to the model_entry field in the configuration file when a service is created. You can pass a file name, such as randomforest.pmml, or a directory, such as ./model. |
| model_config | Input parameter | Corresponds to the model_config field in the configuration file when a service is created. It contains the custom configuration information of the model. |
| state | Output parameter | The return value of the model loading status. If the value is 0, the model is loaded successfully. Otherwise, the model fails to load. |

Returned value: the memory address of the model variable that you define. It can be of any type that you define.


int process(void *model_buf, const void *input_data, int input_size, void **output_data, int *output_size)

| Parameter | Type | Description |
| --- | --- | --- |
| model_buf | Input parameter | The memory address of the model, as returned by the initialize() function. |
| input_data | Input parameter | The data that you input, which can be any string or binary data. |
| input_size | Input parameter | The length of the data that you input. |
| output_data | Output parameter | The data returned by the processor. Heap memory must be allocated for the returned data; the framework releases this memory after the request is processed. |
| output_size | Output parameter | The length of the data returned by the processor. |

Returned value: if 0 or 200 is returned, the request is successful. An HTTP status code can also be returned directly. If an unrecognized HTTP status code is returned, it is automatically converted to HTTP 400.

1.3 Sample code

The following is a simple sample. In the sample, no model data is loaded, and the prediction service directly returns the user request to the client.

#include <stdio.h>
#include <string.h>

extern "C" {

void *initialize(const char *model_entry, const char *model_config, int *state)
{
    *state = 0;
    return NULL;
}

int process(void *model_buf, const void *input_data, int input_size,
            void **output_data, int *output_size)
{
    if (input_size == 0) {
        const char *errmsg = "input data should not be empty";
        *output_data = strndup(errmsg, strlen(errmsg));
        *output_size = strlen(errmsg);
        return 400;
    }
    *output_data = strndup((char *)input_data, input_size);
    *output_size = input_size;
    return 200;
}

}


The processor does not read any model information and returns the user input as is. The code can be compiled into a .so file by using the following Makefile:

CC=g++
CCFLAGS=-I./ -D_GNU_SOURCE -Wall -g -fPIC
LDFLAGS=-shared -Wl,-rpath=./
OBJS=processor.o
TARGET=libpredictor.so

all: $(TARGET)

$(TARGET): $(OBJS)
	$(CC) -o $(TARGET) $(OBJS) $(LDFLAGS) -L./

%.o: %.cc
	$(CC) $(CCFLAGS) -c $< -o $@

clean:
	rm -f $(TARGET) $(OBJS)


If there is another dependent .so file, you must add the -rpath option to the Makefile to specify the search directory for the .so file. For more information about how rpath works, see https://en.wikipedia.org/wiki/Rpath.


2. Java

2.1 Interface definition

To develop a custom processor in Java, you only need to define one class. In addition to the constructor, the class only needs to provide the Load() and Process() methods. The class prototype is as follows:


package com.alibaba.eas;
import java.util.*;

public class TestProcessor {
    public TestProcessor(String modelEntry, String modelConfig) {
        /* Pass in the model file name for initialization. */
    }
    public void Load() {
        /* Load the model information based on the model name. */
    }
    public byte[] Process(byte[] input) {
        /* Perform prediction on the input data and return the result.
           Currently, byte[] and String are supported. We recommend that you
           use byte[] to avoid encoding problems. */
        return input; // Placeholder: echo the input so that the prototype compiles.
    }
    public static void main(String[] args) {
        /* The main function is optional and can be used for local verification on a single server. */
    }
}


If an exception is thrown during processing, the framework catches the exception, returns the exception message to the client as the error message, and reports an HTTP 400 error. You can also catch the exception yourself and return your own error message, as follows:


try {
} catch (com.alibaba.fastjson.JSONException e) {
    throw new RuntimeException("bad json format, " + e.getMessage());
}


2.2 Single-server development and debugging

Non-cluster users can use the single-server debugging feature to develop and debug a model or a processor in a local environment. The development and calling interfaces are fully compatible with those in the online cluster environment. This feature saves you from frequently deploying and updating a service during development and testing, and also reduces the resource cost of debugging.

Note: This feature depends on Docker, so Docker must be pre-installed on the server that runs EASCMD. If you need GPUs or Compute Unified Device Architecture (CUDA), you must install CUDA and nvidia-docker on the local server in advance.

2.2.1 Install Docker

If you have not installed Docker, see https://docs.docker-cn.com/engine/installation/.


2.2.2 Download the EASCMD client

Windows: http://eas-data.oss-cn-shanghai.aliyuncs.com/tools/eascmdwin64

Linux 32: http://eas-data.oss-cn-shanghai.aliyuncs.com/tools/eascmd32

Linux 64: http://eas-data.oss-cn-shanghai.aliyuncs.com/tools/eascmd64

MacOS: http://eas-data.oss-cn-shanghai.aliyuncs.com/tools/eascmdmac64


2.2.3 Create a service configuration file

Specify the model to be deployed and the compiled processor in the configuration file, as shown in the following sample:

{
  "name": "diy_test",
  "generate_token": "true",
  "model_path": "model.tar.gz",  # Either an HTTP path or a local path is accepted.
  "model_entry": "./model_path/",
  "model_config": "{\"model\": \"deploy.prototxt\", \"weight\": \"bvlc_reference_caffenet.caffemodel\"}",
  "processor_path": "diy_predictor_gpu.tar.gz",  # Either an HTTP path or a local path is accepted.
  "processor_entry": "diy_predictor_gpu.so",
  "processor_type": "cpp",
  "cuda": "/usr/local/cuda"
}


For more information about the configuration fields, see the EASCMD client manual.


2.2.4 Debug the deployment

sudo eascmd test service.json



3. Python

3.1 Overview

Python is one of the most important programming languages in the machine learning and AI fields. With machine learning frameworks such as TensorFlow and scikit-learn, numerical libraries such as NumPy, and data engineering kits such as Pandas, Python has become the leading programming language in the AI field. A high-performance Python online prediction service framework not only helps algorithm engineers iterate models quickly, but also frees engineers from the work of building online serving systems.

Elastic Algorithm Service (EAS) of Alibaba Cloud Machine Learning Platform for AI (PAI) provides a Python SDK for algorithm engineers to develop and deploy online model services more efficiently. The EAS Python SDK supports Python-based open-source machine learning frameworks such as TensorFlow, PyTorch, and Scikit-learn. It also supports data analytics and processing frameworks such as Pandas.

The EAS Python SDK allows you to quickly turn prediction scripts on your local host into online services. The SDK has a built-in high-performance RPC framework that is customized for AI inference scenarios, and it provides entry points for interacting with advanced features of EAS clusters. By implementing a few simple interfaces, you can deploy services in EAS clusters and use EAS features such as model monitoring, blue-green deployment, auto scaling, and direct connection.

The following example describes how to use the EAS Python SDK.

3.2 Build a development environment

You can use a Python package management tool, such as pyenv or Conda, to build the development environment. The EASCMD client encapsulates the logic for creating the environment, so you only need to run one command to initialize the Python SDK environment. If you need to add more custom settings to the environment, you can manually initialize the environment instead.

3.2.1 Initialize the environment by using EASCMD (only applicable to Linux)

For more information about EASCMD, see the EASCMD client manual.

EASCMD is the EAS client tool. It encapsulates the logic for initializing the Python SDK environment. After you download EASCMD, you only need to run one command to initialize the Python SDK environment and generate the relevant templates.

# Install and initialize EASCMD. In this example, EASCMD is installed in a Linux operating system.
# To install EASCMD in other operating systems, download the corresponding EASCMD version from the download address listed in the EASCMD client manual.
$ wget http://eas-data.oss-cn-shanghai.aliyuncs.com/tools/eascmd64
# After you download EASCMD, modify the access permissions and configure your AccessKey information.
$ chmod +x eascmd64
$ ./eascmd64 config -i <access_id> -k <access_key>

# Initialize the environment.
$ ./eascmd64 pysdk init ./pysdk_demo

Enter the Python version when prompted. The default version is 3.6. The virtual environment folder ENV, the prediction service template app.py, and the service deployment template app.json are automatically created.
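
Under the default settings, the generated ./pysdk_demo directory looks roughly as follows (shown for orientation only; the exact contents of ENV depend on the Python version you choose):

pysdk_demo/
├── ENV/        # Conda virtual environment with the EAS Python SDK installed
├── app.py      # prediction service template
└── app.json    # service deployment template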


3.2.2 Manually initialize the environment

If EASCMD cannot meet your requirements or an error occurs when you initialize the environment by using EASCMD, you can manually initialize the environment. The procedure is simple. We recommend that you use Conda to build the environment. Run the following commands:

mkdir demo
cd demo
# Use Conda to create a Python environment. Specify the name of the virtual environment folder as ENV.
conda create -p ENV python=3.6
# Install the EAS Python SDK.
ENV/bin/pip install http://eas-data.oss-cn-shanghai.aliyuncs.com/sdk/allspark-0.9-py2.py3-none-any.whl
# Install other dependencies, such as TensorFlow 1.14.
ENV/bin/pip install tensorflow==1.14

If you have not installed Conda on your local server, install Conda first:

$ wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
$ sh Miniconda3-latest-Linux-x86_64.sh


EAS also provides pre-built development images that come with Conda pre-installed and an ENV created for the corresponding Python version. Currently, EAS provides three images:

# Base image with only Conda installed
registry.cn-shanghai.aliyuncs.com/eas/eas-python-base-image:latest
# Image with Conda, Python 2.7, and the EAS Python SDK (allspark 0.8) installed
registry.cn-shanghai.aliyuncs.com/eas/eas-python-base-image:py2.7-allspark-0.8
# Image with Conda, Python 3.6, and the EAS Python SDK (allspark 0.8) installed
registry.cn-shanghai.aliyuncs.com/eas/eas-python-base-image:py3.6-allspark-0.8

Run the base image directly to obtain a Python development environment:

$ sudo docker run -ti registry.cn-shanghai.aliyuncs.com/eas/eas-python-base-image:py3.6-allspark-0.8
(/data/eas/ENV) [root@487a04df4b21 eas]#
(/data/eas/ENV) [root@487a04df4b21 eas]# ENV/bin/python app.py
[INFO] initialize new lua plugin
[INFO] loading builtin config script
[INFO] current meritc id:0
[INFO] loading builtin lua scripts
[INFO] Success load all lua scripts.
[INFO] create service
[INFO] rpc binds to predefined port 8080
[INFO] updating rpc port to 8080
[INFO] install builtin handler call to /api/builtin/call
[INFO] install builtin handler eastool to /api/builtin/eastool
[INFO] install builtin handler monitor to /api/builtin/monitor
[INFO] install builtin handler ping to /api/builtin/ping
[INFO] install builtin handler prop to /api/builtin/prop
[INFO] install builtin handler realtime_metrics to /api/builtin/realtime_metrics
[INFO] install builtin handler tell to /api/builtin/tell
[INFO] install builtin handler term to /api/builtin/term
[INFO] Service start successfully
[INFO] shutting down context ... press Ctrl+C again to force quit

You can install the libraries that you depend on into the ENV environment of the base image, and then commit the modified container into a data image.

ENV/bin/pip install tensorflow==1.12

You can also build the ENV development environment outside Docker and copy it into the /data/eas/ directory of any Docker image. Deploying with an image speeds up deployment and avoids packaging and uploading the entire ENV environment for each deployment.

3.3 Compile the prediction logic

Create the main file of the prediction service, app.py, in the same directory as the ENV folder. A template of this file is included in the built-in development image of EAS and is automatically generated when you use EASCMD to initialize the environment. The EAS SDK encapsulation is as follows:

# -*- coding: utf-8 -*-
import allspark


class MyProcessor(allspark.BaseProcessor):
    """ MyProcessor is an example.
    You can send a message like this to predict:
    curl -v http://127.0.0.1:8080/api/predict/service_name -d '2 105'
    """
    def initialize(self):
        """ Load the model. Executed once at the start of the service.
        Do service initialization and load models in this function.
        """
        self.module = {'w0': 100, 'w1': 2}

    def pre_process(self, data):
        """ Pre-process the request data. """
        x, y = data.split(b' ')
        return int(x), int(y)

    def post_process(self, data):
        """ Post-process the prediction result. """
        return bytes(data, encoding='utf8')

    def process(self, data):
        """ Process the request data. """
        x, y = self.pre_process(data)
        w0 = self.module['w0']
        w1 = self.module['w1']
        y1 = w1 * x + w0
        if y1 >= y:
            return self.post_process("True"), 200
        else:
            return self.post_process("False"), 400


if __name__ == '__main__':
    # The worker_threads parameter indicates the concurrency of processing.
    runner = MyProcessor(worker_threads=10)
    runner.run()

The preceding code is a simple sample provided with the Python SDK. You must inherit from the EAS base class BaseProcessor and implement the initialize() and process() functions. Both the input and the output of the process() function are of the bytes type. The function returns response_data and status_code; for successful requests, set status_code to 0 or 200.


| Function | Description | Parameters |
| --- | --- | --- |
| init(worker_threads=5, worker_processes=1, endpoint=None) | Constructor of the processor. | worker_threads: the number of worker threads. Default value: 5. worker_processes: the number of worker processes. Default value: 1. If worker_processes is 1, the service runs in single-process multi-thread mode. If worker_processes is greater than 1, requests are processed concurrently by multiple processes, the worker threads only read data, and each process runs initialize(). endpoint: the endpoint that the service listens on; it can be used to specify the IP address and port, for example, endpoint='0.0.0.0:8079'. |
| initialize() | Initializes the processor. Performs initialization tasks such as loading a model during service startup. | None. |
| process(data) | Processes a request. The body of each request is passed to process() as a parameter, and the return value is sent to the client. | data: the request body, of the bytes type. The return value is also of the bytes type. |
| run() | Starts the service. | None. |
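
To make these interfaces concrete, the following is a minimal sketch rather than an official sample: it assumes a scikit-learn model that has been pickled to ./model.pkl and a request body of space-separated feature values, both of which are assumptions made only for illustration. It also uses the constructor parameters from the preceding table.

# -*- coding: utf-8 -*-
# Minimal sketch only. The model path ./model.pkl and the space-separated
# input format are assumptions made for illustration.
import pickle

import allspark


class SklearnProcessor(allspark.BaseProcessor):
    def initialize(self):
        # Load the model once when the service starts.
        with open('./model.pkl', 'rb') as f:
            self.model = pickle.load(f)

    def process(self, data):
        # data is the raw request body in bytes, for example b'1.0 2.5 3.3'.
        features = [float(v) for v in data.split()]
        prediction = self.model.predict([features])[0]
        # Return the response body in bytes and an HTTP-style status code.
        return str(prediction).encode('utf8'), 200


if __name__ == '__main__':
    # Multi-process mode: 4 worker processes with 10 threads each, listening
    # on a custom endpoint (port 8079 is an arbitrary choice).
    runner = SklearnProcessor(worker_threads=10,
                              worker_processes=4,
                              endpoint='0.0.0.0:8079')
    runner.run()

Because worker_processes is greater than 1, each of the four processes runs initialize() and loads its own copy of the model.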

3.4 Test the service locally

./ENV/bin/python app.py
curl http://127.0.0.1:8080/test -d '10 20'
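
If you prefer to test from Python instead of curl, the following snippet sends the same request by using the requests library; it assumes that the service started by app.py is listening on the default port 8080:

# Python equivalent of the curl test above; requires the requests package.
import requests

resp = requests.post('http://127.0.0.1:8080/test', data=b'10 20')
print(resp.status_code)  # 200 when w1 * x + w0 >= y, otherwise 400
print(resp.content)      # b'True' or b'False'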


3.5 Publish the service online

3.5.1 Package a complete environment

After you write the Python code, publishing the prediction service takes two steps: package the code, and then deploy the service. EASCMD provides the following command to simplify packaging:

$ ./eascmd64 pysdk pack ./demo
[PYSDK] Creating package: /home/xingke.lwp/code/test/demo.tar.gz

You can also package the code yourself in .zip or .tar.gz format; the ENV directory must be in the root directory of the package. Upload the .tar.gz package to Object Storage Service (OSS). In this example, the package address is oss://eas-model-beijing/1955570263925790/demo.tar.gz. Use the following service configuration file to deploy the service:

{
  "name": "pysdk_demo",
  "processor_entry": "./app.py",
  "processor_type": "python",
  "processor_path": "oss://eas-model-beijing/1955570263925790/pack.tar.gz",
  "metadata": {
    "instance": 1,
    "memory": 2000,
    "cpu": 1
  }
}

3.5.2 Deploy by using a data image

The Python ENV environment generated by Conda is usually large and is seldom modified during development, so packaging and uploading the environment for every deployment wastes time and storage. Therefore, EAS provides the data_image deployment method. You can build an ENV environment based on a pre-built image provided by EAS and install the Python packages that you depend on in ENV. After the installation is complete, commit the container into a data image and upload it to the image repository. For example:

sudo docker commit 487a04df4b21 registry.cn-shanghai.aliyuncs.com/eas-service/develop:latest
sudo docker push registry.cn-shanghai.aliyuncs.com/eas-service/develop:latest

With this method, you only need to package and upload the app.py file to OSS. You can use the following service description file to deploy the service:

{
  "name": "pysdk_demo",
  "processor_entry": "./service.py",
  "processor_type": "python",
  "processor_path": "http://eas-data.oss-cn-shanghai.aliyuncs.com/demo/service.py",
  "data_image": "registry.cn-shanghai.aliyuncs.com/eas-service/develop:latest",
  "metadata": {
    "instance": 1,
    "memory": 2000,
    "cpu": 1
  }
}


3.5.3 Deploy the service

$ ./eascmd64 create app.json
[RequestId]: 1202D427-8187-4BCB-8D32-D7096E95B5CA
+-------------------+-------------------------------------------------------------------+
| Intranet Endpoint | http://1828488879222746.vpc.cn-beijing.pai-eas.aliyuncs.com/api/predict/pysdk_demo |
| Token             | ZTBhZTY3ZjgwMmMyMTQ5OTgyMTQ5YmM0NjdiMmNiNmJkY2M5ODI0Zg== |
+-------------------+-------------------------------------------------------------------+
[OK] Waiting task server to be ready
[OK] Fetching processor from [oss://eas-model-beijing/1955570263925790/pack.tar.gz]
[OK] Building image [registry-vpc.cn-beijing.aliyuncs.com/eas/pysdk_demo_cn-beijing:v0.0.1-20190806082810]
[OK] Pushing image [registry-vpc.cn-beijing.aliyuncs.com/eas/pysdk_demo_cn-beijing:v0.0.1-20190806082810]
[OK] Waiting [Total: 1, Pending: 1, Running: 0]
[OK] Service is running

3.5.4 Test the service

$ curl http://1828488879222746.vpc.cn-beijing.pai-eas.aliyuncs.com/api/predict/pysdk_demo -H 'Authorization: ZTBhZTY3ZjgwMmMyMTQ5OTgyMTQ5YmM0NjdiMmNiNmJkY2M5ODI0Zg==' -d 'hello eas'
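
You can make the same call from Python. The following snippet uses the requests library with the endpoint and token printed by eascmd create in this example:

# Python equivalent of the curl call above; requires the requests package.
import requests

url = 'http://1828488879222746.vpc.cn-beijing.pai-eas.aliyuncs.com/api/predict/pysdk_demo'
headers = {'Authorization': 'ZTBhZTY3ZjgwMmMyMTQ5OTgyMTQ5YmM0NjdiMmNiNmJkY2M5ODI0Zg=='}

resp = requests.post(url, headers=headers, data=b'hello eas')
print(resp.status_code, resp.content)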

Deploy a custom processor service

After you develop a custom processor, you can deploy the service by using the PAI EAS console or the EASCMD command-line tool.

  1. Log on to the PAI EAS console and choose Model Deploy > EAS Model Serving from the left-side navigation pane. On the Elastic Algorithm Service page, click Model Deploy and set Resource Group Type to Dedicated Resource Group, and Processor Type to Custom Processor. Package and upload the model and processor, fill in the required information, and proceed with the subsequent deployment.


  2. Use the EASCMD command-line tool to deploy the service locally. Set the Resource field to the ID of your dedicated resource group (which you can view in the console); otherwise, the deployment fails. For more information, see the EASCMD client manual. After the deployment is completed, you can manage the deployed service in the console.