Parameter Server-Logistic Regression (PS-LR) is a logistic regression algorithm based on parameter servers. It is suitable for recommendation systems and advertising engines that use large numbers of features and samples. This topic describes how to use Elastic Algorithm Service (EAS) of Machine Learning Platform for AI to deploy a PS-LR model and debug the model online.
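
At serving time, a deployed PS-LR service scores each input sample with a plain logistic regression over sparse features. The following Python sketch illustrates that computation on samples in the same {"feature_id": value} form used in the debugging example later in this topic; the weights and bias are hypothetical stand-ins for a trained model.

    import math

    # Hypothetical trained weights keyed by feature ID, plus a bias term.
    weights = {"1": 0.8, "2": -0.3, "3": 0.5}
    bias = -0.1

    def predict(sample):
        # Score one sparse sample, for example {"1": 1, "2": 1}.
        z = bias + sum(weights.get(fid, 0.0) * value for fid, value in sample.items())
        return 1.0 / (1.0 + math.exp(-z))  # sigmoid: probability of the positive class

    print(predict({"1": 1, "2": 1}))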

Prerequisites

  • A dedicated resource group is purchased for EAS. You can select the subscription or pay-as-you-go billing method as needed. For more information, see Dedicated resource groups.
  • A model is trained. You have obtained either a publicly accessible URL where the model is stored or an on-premises model file.

Background information

The PS-LR model requires a custom processor. Therefore, you can deploy the model only to a dedicated resource group.

Deploy the model

  1. Go to the Elastic Algorithm Service page.
    1. Log on to the Machine Learning Platform for AI console.
    2. In the left-side navigation pane, choose Model Deployment > EAS-Model Serving.
  2. On the Elastic Algorithm Service page, click Model Deploy.
  3. In the Deployment details and confirmation panel, set the following parameters.
    • Resource Group Type: Select the dedicated resource group that you purchased, identified by the resource group name.
    • Processor Type: Select Self-definition processor.
    • Processor Language: Select Cpp.
    • Processor package: Set this parameter in one of the following ways:
      • Enter http://easprocessor.oss-cn-shanghai.aliyuncs.com/public/pslr_processor_release.tar.gz in the Processor package field.
      • Upload an on-premises copy of the processor package:
        1. Download the processor package to your on-premises machine.
        2. Click Upload Local Files below the Processor package field and upload the downloaded processor package as prompted.
          The package is uploaded to the Object Storage Service (OSS) path in the current region, and the Processor package parameter is automatically set.
        Note: If you upload an on-premises processor package, the processor loads faster during model deployment.
    • Processor Master File: Enter libpslr.so.
    • Model Files: Set this parameter in one of the following ways:
      • Upload Local File: Select Upload Local File, click Upload Local Files, and then upload an on-premises model file as prompted.
      • Import OSS File: Select Import OSS File, and then select the OSS path where the model file is stored.
      • Download from Internet: Select Download from Internet, and then enter a publicly accessible URL for the model.
  4. Click Next.
  5. In the Deployment details and confirmation panel, set the parameters.
    1. Select New Service.
    2. Enter the model name in the Custom Model Name field.
    3. Set the Number Of Instances, Cores, and Memory (M) parameters.
    4. Click Deploy.
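
The console procedure above can also be scripted. EAS accepts a JSON service description through the EASCMD client; the following Python sketch writes such a description under stated assumptions: the service name, model path, and resource group ID are placeholders you must replace with your own values, and the exact field set is documented with EASCMD.

    import json

    # A sketch of an EASCMD service description for a PS-LR deployment.
    # The service name, model_path, and resource group ID are placeholders.
    service = {
        "name": "pslr_demo",
        "processor_path": "http://easprocessor.oss-cn-shanghai.aliyuncs.com/public/pslr_processor_release.tar.gz",
        "processor_entry": "libpslr.so",   # the processor master file
        "processor_type": "cpp",
        "model_path": "oss://your-bucket/path/to/model.tar.gz",
        "metadata": {
            "resource": "your-dedicated-resource-group-id",
            "instance": 2,     # Number Of Instances
            "cpu": 2,          # Cores per instance
            "memory": 4000,    # Memory (M) per instance
        },
    }

    with open("pslr_service.json", "w") as f:
        json.dump(service, f, indent=2)

    # Deploy with the EASCMD client, for example: eascmd create pslr_service.json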

Test the API

  1. On the Elastic Algorithm Service page, find the service and click Debug in the Operating column.
  2. On the debugging page, check the values of the API Endpoint and Token parameters.
    EAS automatically generates these values when the model is deployed.
  3. On the debugging page, enter the test data in the Request Body code editor.
    You can enter multiple data entries in a single request, with each entry mapping feature IDs to feature values. Example: [{"1":1,"2":1},{"1":1,"3":1}]. A programmatic request sketch follows this procedure.
  4. Click Send Request.
  5. In the Debugging Info section, view the response.
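
You can also send the same request programmatically instead of using the debugging page. The following Python sketch assumes the third-party requests library; the endpoint and token are placeholders that you must replace with the values shown on the debugging page.

    import requests

    # Placeholders: copy the real values from the debugging page.
    endpoint = "http://<your-eas-endpoint>/api/predict/<service_name>"
    token = "<your-token>"

    # Two sparse samples in one request, each a map of feature IDs to values.
    samples = [{"1": 1, "2": 1}, {"1": 1, "3": 1}]

    response = requests.post(
        endpoint,
        headers={"Authorization": token},  # EAS token-based authorization
        json=samples,
    )
    print(response.status_code)
    print(response.text)  # predictions, one per input sample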