This topic shows you how to develop custom processors by using Java.

Interface definition

To develop a custom processor in Java, you need to define only one class. In addition to the constructor, the class requires only the Load() and Process() methods. The following code shows the prototype of the class:
package com.alibaba.eas;
import java.util.*;
public class TestProcessor {
  public TestProcessor(String modelEntry, String modelConfig) {
    /* Pass the model file name for initialization. */
  }
  public void Load() {
    /* Load model information based on the model name. */
  }
  public byte[] Process(byte[] input) {
    /* Run prediction on the input data and return the result. byte[] and String are supported. We recommend that you use byte[] to prevent encoding problems. */
  }
  public static void main(String[] args) {
    /* The main function is optional. You can use it to verify the class on an on-premises standalone machine. */
  }
}
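The prototype above can be filled in with a minimal working example. The EchoProcessor class below is a hypothetical processor, not part of EAS: it returns the upper-cased input and only illustrates the required constructor and method shapes, without loading a real model.

```java
import java.nio.charset.StandardCharsets;

public class EchoProcessor {
  private final String modelEntry;
  private final String modelConfig;
  private boolean loaded = false;

  public EchoProcessor(String modelEntry, String modelConfig) {
    // Store the model file name and configuration for later loading.
    this.modelEntry = modelEntry;
    this.modelConfig = modelConfig;
  }

  public void Load() {
    // A real processor would read model files here; this sketch only sets a flag.
    loaded = true;
  }

  public byte[] Process(byte[] input) {
    if (!loaded) {
      throw new RuntimeException("model not loaded");
    }
    // Echo the input back in upper case as a stand-in for real inference.
    String text = new String(input, StandardCharsets.UTF_8);
    return text.toUpperCase().getBytes(StandardCharsets.UTF_8);
  }

  public static void main(String[] args) {
    // Verify the class on a local machine before packaging it for deployment.
    EchoProcessor p = new EchoProcessor("./model_path/", "{}");
    p.Load();
    byte[] out = p.Process("hello".getBytes(StandardCharsets.UTF_8));
    System.out.println(new String(out, StandardCharsets.UTF_8)); // prints HELLO
  }
}
```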
If an exception occurs, the framework catches it and returns the exception message to the client as an error message, together with HTTP status code 400. You can also catch exceptions yourself and return custom error messages, as shown in the following example:
try {
  /* Code that may throw an exception, such as parsing the request body. */
} catch (com.alibaba.fastjson.JSONException e) {
  throw new RuntimeException("bad json format, " + e.getMessage());
}
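For example, Process can validate the request before prediction so that malformed input produces a clear error for the client. The ValidatingProcessor class below is a hypothetical sketch: it uses a plain string check instead of com.alibaba.fastjson so that it stays self-contained, and the returned bytes are a placeholder for a real prediction result.

```java
import java.nio.charset.StandardCharsets;

public class ValidatingProcessor {
  public byte[] Process(byte[] input) {
    String text = new String(input, StandardCharsets.UTF_8);
    // Reject obviously malformed requests with a descriptive message;
    // the framework returns the message to the client with HTTP status code 400.
    if (!text.trim().startsWith("{")) {
      throw new RuntimeException("bad json format, expected a JSON object");
    }
    return input; // placeholder for the real prediction result
  }
}
```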

Standalone development and debugging

The standalone debugging feature is designed for non-cluster scenarios. This feature allows you to develop and debug a model or processor in the on-premises environment. The development and calling interfaces are fully compatible with the online cluster environment. This feature saves you from frequently deploying new services in the development and testing phases and reduces the resource costs for debugging.
Note This feature depends on Docker. Therefore, you must install Docker on the server where the EASCMD client runs. If a graphics processing unit (GPU) and CUDA are required, you must also install CUDA and NVIDIA Docker on the on-premises server.
Perform the following steps for standalone debugging:
  1. Install Docker. For more information, visit Docker Installation.
  2. Download the EASCMD client. Different versions are provided.
  3. Create a service configuration file.
    Specify the model to be deployed and the processor to be compiled in the configuration file, as shown in the following example:
    {
      "name": "diy_test",
      "generate_token": "true",
      "model_path": "model.tar.gz", # Specify an HTTP URL or an on-premises path.
      "model_entry": "./model_path/",
      "model_config": "{\"model\": \"deploy.prototxt\", \"weight\": \"bvlc_reference_caffenet.caffemodel\"}",
      "processor_path": "diy_predictor_gpu.tar.gz", # Specify an HTTP URL or an on-premises path.
      "processor_entry": "diy_predictor_gpu.so",
      "processor_type": "java",
      "cuda": "/usr/local/cuda"
    }
    For information about the parameters, see Run commands to use the EASCMD client.
  4. Deploy and debug the processor.
    sudo eascmd test service.json
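The processor_path parameter in the configuration above points at a .tar.gz archive. The following is a minimal packaging sketch; the directory and file names are placeholders, and the touch command stands in for your real compiled classes.

```shell
# Bundle compiled processor artifacts into the archive that
# processor_path in service.json references.
mkdir -p processor_build
touch processor_build/TestProcessor.class   # stand-in for your compiled classes
tar -czf diy_predictor.tar.gz -C processor_build .
tar -tzf diy_predictor.tar.gz               # list the archive contents to verify
```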