
Platform For AI:Develop custom processors by using C or C++

Last Updated:Jun 17, 2025

This topic describes how to develop custom processors by using C or C++.

Quick start demo

Download the pai-prediction-example project. This project contains the following two custom processors:

  • echo: When a request is received, this processor returns the user input without changes and the list of files in the model.

  • image_classification: This processor is used for MNIST classification. If an MNIST image in the JPG format is input, the image category is returned.

For more information about the compilation, see the README file in the project. For more information about on-premises debugging of each processor, see the README file in their corresponding directories.

Interface definition

To develop a custom processor by using C or C++, you must implement the initialize() and process() functions. The initialize() function loads the model during service initialization. The process() function handles client requests and returns the results. The following code blocks show the declarations of the two functions:

void *initialize(const char *model_entry, const char *model_config, int *state)

  • model_entry (input parameter): The entry file of the model package. This parameter corresponds to the model_entry field in the JSON configuration file that is used when the service is created. For more information about the model_entry field, see Parameters for JSON deployment. You can specify a file name such as randomforest.pmml, or a directory such as ./model.

  • model_config (input parameter): The custom configuration information of the model. This parameter corresponds to the model_config field in the JSON configuration file that is used when the service is created. For more information about the model_config field, see Parameters for JSON deployment.

  • state (output parameter): The status of model loading. A value of 0 indicates that the model was loaded. Any other value indicates that the model failed to load.

  • Returned value: The memory address of the loaded model, which can point to data of any type. This address is passed to the process() function as the model_buf parameter.

int process(void *model_buf, const void *input_data, int input_size,void **output_data, int *output_size)

  • model_buf (input parameter): The model memory address that is returned by the initialize() function.

  • input_data (input parameter): The input data, which can be a string or binary data.

  • input_size (input parameter): The length of the input data, in bytes.

  • output_data (output parameter): The data that the processor returns. Heap memory must be allocated for this data; the framework releases the memory after the response is returned.

  • output_size (output parameter): The length of the returned data, in bytes.

  • Returned value: A value of 0 or 200 indicates that the request is successful. A standard HTTP error code can also be returned. If an undefined HTTP status code is returned, it is automatically converted to an HTTP 400 error.

Sample code

In the following sample code, no model data is loaded. The prediction service echoes the user request back to the client.

  1. Write the processor code. The Makefile in the following step expects the source file to be named processor.cc.

    #include <stdio.h>
    #include <string.h>
    extern "C" {
        void *initialize(const char *model_entry, const char *model_config, int *state)
        {
            *state = 0;
            return NULL;
        }
        int process(void *model_buf, const void *input_data, int input_size,
                void **output_data, int *output_size)
        {
            if (input_size == 0) {
                const char *errmsg = "input data should not be empty";
                *output_data = strdup(errmsg);
                *output_size = strlen(errmsg);
                return 400;
            }  
            *output_data = strdup((char *)input_data);
            *output_size = input_size;
            return 200;
        }
    }
  2. Compile the code into a shared object (.so) file by using the following Makefile:

    CC=g++
    CCFLAGS=-I./ -D_GNU_SOURCE -Wall -g -fPIC
    LDFLAGS= -shared -Wl,-rpath=./
    OBJS=processor.o
    TARGET=libpredictor.so
    all: $(TARGET)
    $(TARGET): $(OBJS)
    	$(CC) -o $(TARGET) $(OBJS) $(LDFLAGS) -L./
    %.o: %.cc
    	$(CC) $(CCFLAGS) -c $< -o $@
    clean:
    	rm -f $(TARGET) $(OBJS)
  3. Deploy the EAS service. The following sample code shows the processor-related parameters in the JSON configuration file that is required for service deployment. The processor_entry parameter specifies the main file (the .so file) of the processor.

    {
    	"name": "test_echo",
    	"model_path": "http://*****.oss-cn-shanghai.aliyuncs.com/****/saved_model.tar.gz",
    	"processor_path": "oss://path/to/echo_processor_release.tar.gz",
    	"processor_entry": "libpredictor.so",
    	"processor_type": "cpp",
    	"metadata": {
    		"instance": 1
    	}
    }