
Platform for AI: XGBoost or LightGBM

Last Updated: Apr 03, 2025

Elastic Algorithm Service (EAS) of Platform for AI (PAI) provides built-in Gradient Boosted Decision Trees (GBDT) processors, including XGBoost and LightGBM. You can use GBDT processors to deploy models in formats that are supported by these processors as real-time inference services. This topic describes how to deploy and call GBDT model services.

Background information

GBDT is a machine learning algorithm. XGBoost and LightGBM are implementations of GBDT and are designed for speed and performance.

You can use EAS to deploy models generated by the GBDT or XGBoost component in Machine Learning Designer or deploy custom models as model services.

Important

You can directly use Predictive Model Markup Language (PMML) processors to deploy GBDT models generated in Machine Learning Designer. For more information, see PMML.

Step 1: Deploy a model service

When you use the EASCMD client to deploy a model service, set the processor parameter to xgboost or lightgbm. The following code shows the content of a sample service configuration file:

{
  "name": "gbdt_example",
  "processor": "<Processor type>",
  "model_path": "http://examplebucket.oss-cn-shanghai.aliyuncs.com/models/xgb_model.json",
  "metadata": {
    "instance": 1,
    "cpu": 1
  }
}

You need to replace <Processor type> in the preceding code with xgboost or lightgbm.
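
The model_path in the preceding example points to an XGBoost model file named xgb_model.json. If you train a custom model yourself, you can produce such a file with the native save methods of the open source libraries, as in the following minimal sketch. The sketch assumes the open source xgboost and lightgbm Python packages and uses the scikit-learn breast cancer dataset, which has the same 30-feature layout as the sample request in Step 2; the file names are placeholders.

import xgboost as xgb
import lightgbm as lgb
from sklearn.datasets import load_breast_cancer

# Load a 30-feature binary classification dataset as example training data.
X, y = load_breast_cancer(return_X_y=True)

# Train an XGBoost binary classifier and save it in the JSON model format.
xgb_booster = xgb.train({'objective': 'binary:logistic'}, xgb.DMatrix(X, label=y), num_boost_round=50)
xgb_booster.save_model('xgb_model.json')

# Train a LightGBM binary classifier and save it in the text model format.
lgb_booster = lgb.train({'objective': 'binary'}, lgb.Dataset(X, label=y), num_boost_round=50)
lgb_booster.save_model('lgb_model.txt')

After you save the model file, upload it to Object Storage Service (OSS) and set model_path to its URL, as shown in the preceding configuration.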

The model compilation feature may consume a large amount of memory. If your model is large, we recommend that you deploy it by using a custom processor. The following code shows the content of a sample service configuration file:

{
  "name": "gbdt_example",
  "processor_type": "python",
  "processor_path": "https://eas-data.oss-cn-shanghai.aliyuncs.com/processors/xgboost_processor_notreelite.tar.gz",
  "processor_entry": "xgboost_inference.py",
  "model_path": "http://examplebucket.oss-cn-shanghai.aliyuncs.com/models/xgb_model.json",
  "metadata": {
    "instance": 1,
    "cpu": 1
  }
}
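
The processor_entry script in this example, xgboost_inference.py, is included in the processor package that processor_path references, so you do not need to write it yourself. For reference only, the following sketch outlines what such an entry script might look like. It is based on the interface described in the EAS custom Python processor documentation (allspark.BaseProcessor with initialize and process methods); the model file name, the response format, and the worker_threads value are assumptions, so refer to the actual script in the package.

import json

import allspark
import numpy as np
import xgboost as xgb


class XGBoostProcessor(allspark.BaseProcessor):
    """A sketch of a custom processor that serves an XGBoost model."""

    def initialize(self):
        # Runs once when the service starts. The file name is an assumption;
        # EAS makes the file that model_path references available to the service.
        self.booster = xgb.Booster()
        self.booster.load_model('xgb_model.json')

    def process(self, data):
        # The request body is a JSON array in which each inner array is one sample.
        samples = np.array(json.loads(data), dtype=float)
        preds = self.booster.predict(xgb.DMatrix(samples))
        # Return the predictions as a JSON array together with an HTTP status code.
        return json.dumps(preds.tolist()).encode('utf-8'), 200


if __name__ == '__main__':
    runner = XGBoostProcessor(worker_threads=8)
    runner.run()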

For information about how to use the EASCMD client to deploy model services, see Deploy model services by using EASCMD.

Step 2: Call the model service

After you deploy the model service, go to the Elastic Algorithm Service (EAS) page and click Invocation Method in the Service Type column of the service that you want to call. Then, you can view the endpoint of the service and the token that is used for service authentication. You can perform the following steps to call the model service:

  1. Create a service request.

    Both the input and output of a GBDT service are JSON arrays. You can include multiple samples in a request. The following code provides an example:

    [
      [14.87, 16.67, 98.64, 682.5, 0.1162, 0.1649, 0.169, 0.08923, 0.2157, 0.06768, 0.4266, 0.9489, 2.989, 41.18, 0.006985, 0.02563, 0.03011, 0.01271, 0.01602, 0.003884, 18.81, 27.37, 127.1, 1095.0, 0.1878, 0.448, 0.4704, 0.2027, 0.3585, 0.1065],
      [11.2, 29.37, 70.67, 386.0, 0.07449, 0.03558, 0.0, 0.0, 0.106, 0.05502, 0.3141, 3.896, 2.041, 22.81, 0.007594, 0.008878, 0.0, 0.0, 0.01989, 0.001773, 11.92, 38.3, 75.19, 439.6, 0.09267, 0.05494, 0.0, 0.0, 0.1566, 0.05905]
    ]
    Note

    When you send the service request, remove line breaks and space characters from the JSON data to speed up data transmission and improve service performance.
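
    For example, the following sketch uses the Python standard library to build a compact request body without line breaks or extra spaces. The samples variable holds the same values as the preceding example; replace it with your own feature values.

    import json

    # Each inner list is one sample of feature values in the order that the model expects.
    samples = [
        [14.87, 16.67, 98.64, 682.5, 0.1162, 0.1649, 0.169, 0.08923, 0.2157, 0.06768, 0.4266, 0.9489, 2.989, 41.18, 0.006985, 0.02563, 0.03011, 0.01271, 0.01602, 0.003884, 18.81, 27.37, 127.1, 1095.0, 0.1878, 0.448, 0.4704, 0.2027, 0.3585, 0.1065],
        [11.2, 29.37, 70.67, 386.0, 0.07449, 0.03558, 0.0, 0.0, 0.106, 0.05502, 0.3141, 3.896, 2.041, 22.81, 0.007594, 0.008878, 0.0, 0.0, 0.01989, 0.001773, 11.92, 38.3, 75.19, 439.6, 0.09267, 0.05494, 0.0, 0.0, 0.1566, 0.05905]
    ]
    # Compact separators omit the spaces that json.dumps inserts after commas by default.
    request_body = json.dumps(samples, separators=(',', ':'))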

  2. Send the service request.

    You can use one of the following methods to send the service request.

    Important

    If you run a cURL command to send the request, the authentication token is specified in the HTTP header and is transmitted in plaintext over the Internet. If you use EAS SDK for Python to send the request, the token is used to sign the request, which improves security.

    • Use the online debugging feature to send the service request.

      Go to the online debugging page of the service and send the service request. For more information, see Debug a service online.

    • Run a cURL command to send the service request.

      GBDT services can be accessed over HTTP. When you send a request, you can directly specify the authentication token in the HTTP header of the request. The following code provides an example:

      # Send a service request.
      curl -v 18284888792***.cn-shanghai.pai-eas.aliyuncs.com/api/predict/gbdt_example \
            -H 'Authorization: YmE3NDkyMzdiMzNmMGM3ZmE4ZmNjZDk0M2NiMDA***' \
            -d '[[14.87, 16.67, 98.64, 682.5, 0.1162, 0.1649, 0.169, 0.08923, 0.2157, 0.06768, 0.4266, 0.9489, 2.989, 41.18, 0.006985, 0.02563, 0.03011, 0.01271, 0.01602, 0.003884, 18.81, 27.37, 127.1, 1095.0, 0.1878, 0.448, 0.4704, 0.2027, 0.3585, 0.1065], [11.2, 29.37, 70.67, 386.0, 0.07449, 0.03558, 0.0, 0.0, 0.106, 0.05502, 0.3141, 3.896, 2.041, 22.81, 0.007594, 0.008878, 0.0, 0.0, 0.01989, 0.001773, 11.92, 38.3, 75.19, 439.6, 0.09267, 0.05494, 0.0, 0.0, 0.1566, 0.05905]]'
      # The following result is returned:
      [[0.0004703899612650275, 0.9877758026123047]]
    • Use EAS SDK for Python to send the request. For more information, see SDK for Python.

      The following code provides an example:

      #!/usr/bin/env python
      
      from eas_prediction import PredictClient
      from eas_prediction import StringRequest
      
      if __name__ == '__main__':
          # Initialize the client with the service endpoint and the service name.
          client = PredictClient('1828488879222***.cn-shanghai.pai-eas.aliyuncs.com', 'gbdt_example')
          # Use the token that is displayed on the invocation information page of the service.
          client.set_token('YmE3NDkyMzdiMzNmMGM3ZmE4ZmNjZDk0M2NiMDA***')
          client.init()
      
          # The request body is a JSON array in which each inner array is one sample of feature values.
          req = StringRequest('[[14.87, 16.67, 98.64, 682.5, 0.1162, 0.1649, 0.169, 0.08923, 0.2157, 0.06768, 0.4266, 0.9489, 2.989, 41.18, 0.006985, 0.02563, 0.03011, 0.01271, 0.01602, 0.003884, 18.81, 27.37, 127.1, 1095.0, 0.1878, 0.448, 0.4704, 0.2027, 0.3585, 0.1065], [11.2, 29.37, 70.67, 386.0, 0.07449, 0.03558, 0.0, 0.0, 0.106, 0.05502, 0.3141, 3.896, 2.041, 22.81, 0.007594, 0.008878, 0.0, 0.0, 0.01989, 0.001773, 11.92, 38.3, 75.19, 439.6, 0.09267, 0.05494, 0.0, 0.0, 0.1566, 0.05905]]')
          # Send the request 100 times and print each response.
          for x in range(100):
              resp = client.predict(req)
              print(resp)

      For information about how to use EAS SDKs for other languages, see SDKs.