
Platform for AI: Debug a service online

Last Updated: Apr 11, 2024

After you deploy a service in Platform for AI (PAI), you can use the online debugging feature to test whether the service runs as expected. This topic describes how to debug a service online.

Prerequisites

A service is deployed. For more information, see Model service deployment by using the PAI console.

Procedure

  1. Go to the EAS-Online Model Services page.

    1. Log on to the PAI console.

    2. In the left-side navigation pane, click Workspaces. On the Workspaces page, click the name of the workspace that you want to manage.

    3. In the left-side navigation pane, choose Model Deployment > Elastic Algorithm Service (EAS). The EAS-Online Model Services page appears.

  2. On the Inference Service tab, find the service that you want to debug and click Online Debugging in the Actions column.

  3. On the Body tab of the Request Parameter Online Tuning section, configure the service request parameters.

    • If you deploy the service by using a common processor, such as TensorFlow, Caffe, or Predictive Model Markup Language (PMML), you can refer to the Construct a request for a TensorFlow service topic to construct a service request.

    • If the service that you deploy uses another type of model, such as a custom model, you must construct the service request based on the model type or on the input data format of the image that is used.

      The following example shows a sample request body for a heart disease prediction service that uses a logistic regression binary classification model.

      The model is in PMML format and expects the following features: sex, cp, fbs, restecg, exang, slop, thal, age, trestbps, chol, thalach, oldpeak, and ca. In this case, you can specify the following content in the Request Body section:

      [{"sex":0,"cp":0,"fbs":0,"restecg":0,"exang":0,"slop":0,"thal":0,"age":0,"trestbps":0,"chol":0,"thalach":0,"oldpeak":0,"ca":0}]

  4. Click Send Request. The prediction result is displayed in the Debugging Information section.
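For services that take a JSON body like the PMML example above, it can be convenient to build the request body programmatically before pasting it into the Request Body section or sending it from a client. The following sketch is illustrative only: the feature list and the build_request_body helper are assumptions based on the sample model in this topic, not part of any PAI SDK. Features that you do not specify default to 0, matching the sample body above.

```python
import json

# Feature names of the sample PMML heart disease prediction model
# described in this topic.
FEATURES = [
    "sex", "cp", "fbs", "restecg", "exang", "slop", "thal",
    "age", "trestbps", "chol", "thalach", "oldpeak", "ca",
]

def build_request_body(record=None):
    """Return a JSON request body for one prediction record.

    Any feature missing from `record` defaults to 0, as in the
    sample request body shown in this topic.
    """
    record = record or {}
    return json.dumps([{name: record.get(name, 0) for name in FEATURES}])

# Build a body with two features set and the rest defaulted to 0.
body = build_request_body({"age": 63, "sex": 1})
print(body)
```

You can paste the printed string into the Request Body section of the online debugging page, or send it from your own client (for example, an HTTP POST to the service endpoint with the service token in the Authorization header, as described in the EAS invocation documentation).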

References

  • You can view and manage the deployed online model services on the EAS-Online Model Services page. For more information, see Manage online model services in EAS.

  • You can use an automatic stress testing tool to create stress testing tasks for services that are deployed in Elastic Algorithm Service (EAS) and evaluate their performance. For more information, see Automatic service stress testing.

  • For information about EAS use cases, see EAS use cases.