Link IoT Edge can deploy machine learning models from the cloud to the edge and run inference at the edge. This feature is suitable for processing real-time, large-scale data services (such as visual recognition) at the edge.

Background information

You can train your inference models on Alibaba Cloud Machine Learning Platform for AI or other platforms and then host the trained models and relevant code in Alibaba Cloud Function Compute, Object Storage Service (OSS), or Container Registry. You can deploy the model to the gateway as an edge application in a Link IoT Edge instance, use the local model on the gateway to perform inference, and then upload the inference result to Alibaba Cloud IoT Platform.

Use Link IoT Edge to perform ML Inference

This topic describes how to perform machine learning inference on Link IoT Edge by deploying the deep learning object detection model provided by TensorFlow Lite on Raspberry Pi 4B.

1. Configure the Raspberry Pi server and set up the inference environment

Use an SSH tool to connect to the Raspberry Pi and perform the following steps:

  1. Open the Raspberry Pi configuration tool and enable the camera.
    1. Run the following command to open the Raspberry Pi configuration tool:
      sudo raspi-config
    2. Select Interfacing Options and click Select.
    3. Select Camera and click Select to enable the camera.
    4. Select Finish. Then restart the Raspberry Pi.
    5. Run the following command on the shell terminal of the Raspberry Pi to test whether the camera works correctly by using the built-in raspistill tool:
      raspistill -v -o cam.jpg
      This command prints verbose camera information to the terminal and saves the captured image as the cam.jpg file.
  2. Install the TensorFlow Lite Interpreter.
    1. Run the following command on the Raspberry Pi shell terminal to download the installation package of TensorFlow Lite Interpreter:
      curl -O https://iotedge-web.oss-cn-shanghai.aliyuncs.com/public/LeMLInterpreter/ARMv7hf/linkedge_ml_tflite_raspi4_cp3x_armv7hf_installer.tar.gz
    2. Decompress and install the package.
      tar xzvf linkedge_ml_tflite_raspi4_cp3x_armv7hf_installer.tar.gz
      cd linkedge_ml_tflite_raspi4_cp3x_armv7hf_installer/
      ./le_ml_installer.sh

      If the system displays the following information, the TensorFlow Lite Interpreter is successfully installed.

      TensorFlow Lite Interpreter is installed.
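
To confirm that the interpreter is usable from Python before moving on, you can run a quick import check. The sketch below assumes that the installer provides the standard tflite_runtime package; adjust the import if your installation differs.

```python
# Smoke test: check whether the TensorFlow Lite runtime can be imported.
# Assumption: the installer above provides the standard `tflite_runtime`
# package (the usual package name for the TFLite interpreter on Raspberry Pi).
try:
    from tflite_runtime.interpreter import Interpreter  # noqa: F401
    tflite_available = True
except ImportError:
    tflite_available = False

print("tflite_runtime available:", tflite_available)
```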

2. Publish the driver for detectors on the cloud

  1. Download the code of the driver for detectors.
  2. In the left-side navigation pane of the IoT Platform console, choose Link IoT Edge > Drivers.
  3. Add the driver as a custom driver. For more information, see Publish cloud-hosted drivers.
    The following parameters are used in this example.
    Table 1. Driver parameters
    • Driver Name: The name of the custom driver, for example, obj_detector_driver.
    • Communication Protocol: Select Custom.
    • Language: Select Python 3.5.
    • Built-in Driver: Specifies whether the driver is built in. Select No.
    • Driver File: Click Upload File to upload the object_detector_driver.zip driver file.
    • Driver Version: Set the parameter to v1.0.0.
    • Link IoT Edge Version for the Driver: Select Version 2.4.0 and Later.
    • Version Description: (Optional) The description of the driver that you created.

    You do not need to set the rest of the parameters.

3. Assign the detector device driver to the edge instance

  1. In the left-side navigation pane, choose Link IoT Edge > Edge Instances. Find the edge instance that you have created and click View.
  2. On the Instance Details page, select Devices & Drivers, and click All Drivers next to the plus (+) icon.
  3. In the Assign Driver dialog box, select Custom Driver, and click Assign next to the obj_detector_driver driver. Then, click Close.
  4. Click Assign Sub-device under the obj_detector_driver driver to add sub-devices for the edge instance.
  5. In the Add Device dialog box, click Create Product to create detector products.
    In the Create Product dialog box, set the parameters and click OK.
    Table 2. Parameters
    • Product Name: Set this parameter to detector.
    • Gateway Connection Protocol: Select Custom.
  6. In the Add Device dialog box, the Product drop-down list contains existing products that you have created. Click Configure to add custom features to the product.

    On the Product Details page, add Self-Defined Feature. For more information, see Define features.

    Set the following two properties.

    • Object category property
    • Detection score property
  7. Return to the Add Device dialog box of the Instance Details page and add the device to the detector product.
  8. Assign the tflite_detector device to the edge instance.
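
The two custom properties defined above are what the detector reports for each recognition result. As a rough illustration of how a driver might shape that report, the sketch below builds a property payload; the identifiers ObjectCategory and DetectionScore are hypothetical placeholders, so replace them with the identifiers that you actually defined in the thing model.

```python
# Build a property payload for one detection result.
# "ObjectCategory" and "DetectionScore" are hypothetical identifiers;
# use the ones defined for your detector product's thing model.
def build_property_payload(category, score):
    """Map one detection to the two custom properties of the detector."""
    return {
        "ObjectCategory": category,
        "DetectionScore": round(float(score), 3),
    }

payload = build_property_payload("person", 0.8731)
print(payload)  # {'ObjectCategory': 'person', 'DetectionScore': 0.873}
```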

4. Create an inference function

  1. Download the inference function code.
  2. Log on to the Alibaba Cloud Function Compute console.
    If you have not activated the Function Compute service, read the terms, select I have read and agree, and click Activate Now.
  3. Optional. In the left-side navigation pane, choose Service-Function. From the Create Function drop-down list, select Create Service. On the Create Service page, configure the parameters and click Create.
    The Service Name parameter is required. In this example, you must specify EdgeFC for the Service Name parameter. You can specify other parameters based on your needs.
    Note If the EdgeFC service has been created for other scenarios or applications, you do not need to create a new one.
  4. After creating the service, you must create a function. On the Service-Function page, click Create Function. On the Create Function page, select Event Function and click Next.
  5. Set the parameters for managing the inference function.
    • Service Name: Select EdgeFC.
    • Function Name: Set the value to object_detector_app.
    • Runtime: The runtime environment for the function. In this example, select python3.
    • Function Handler: Use the default value index.handler.
    • Memory: Select 512 MB.
    • Timeout: Enter 10. Unit: seconds.
    • Single Instance Concurrency: Use the default value.

    You can configure other parameters based on your needs or leave them empty. For more information about how to configure parameters, see Function Compute.

    Verify the function information, and click Create.

  6. After the function is created, the function details page is displayed. On the Code tab, select Upload Zip File, click Select File, upload the object_detector_app.zip package downloaded in step 1, and then click Save.
    After the code is uploaded, you can view the source code in the In-line Edit box.
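
Function Compute calls the entry point named by the Function Handler parameter: index.handler refers to the handler function in index.py. The downloaded object_detector_app code is the authoritative implementation; the sketch below only illustrates the handler(event, context) shape, assuming the event is a JSON payload.

```python
import json

# Minimal sketch of a Function Compute entry point (index.py).
# This is illustrative only; the real inference logic lives in the
# object_detector_app package uploaded above.
def handler(event, context):
    # The event arrives as bytes; assume a JSON payload for this sketch.
    try:
        data = json.loads(event) if event else {}
    except (ValueError, TypeError):
        data = {}
    # A real inference app would capture a camera frame, run the TFLite
    # model, and report detections; here we only echo a status.
    return json.dumps({"status": "ok", "received": data})
```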

5. Assign the function to the edge instance

  1. In the left-side navigation pane of the IoT Platform console, choose Link IoT Edge > Applications.
  2. Use the function that was created in 4. Create an inference function to create an edge application of the Function Compute (FC) type. For more information, see Use Function Compute to create edge applications.

    The following application parameters are used in this example.

    • Application Name: The name of your application, for example, le_object_detector.
    • Application Type: Select Function Compute.
    • Region: Select the region of the service.
    • Service: Select EdgeFC.
    • Function: Select the object_detector_app function.
    • Authorize: Select AliyunIOTAccessingFCRole.
    • Application Version: The version of the application. The version number must be unique under the application.

    The following function parameters are used in this example.
    • Running Mode: Two running modes are available. In this example, select Continuous so that the application runs immediately after being deployed.
    • Memory Limit (MB): The maximum memory available for running the function. Unit: MB. Enter 512. If the function exceeds this limit, the edge application is forced to restart.
    • Timeout Limit (Seconds): The maximum processing time after the function receives an event. Use the default value 5. If the function does not return a result within this period, the edge application is forced to restart.
    • Scheduled Execution: Use the default value Close.

    You do not need to set other parameters.

  3. In the left-side navigation pane, choose Link IoT Edge > Edge Instances.
  4. Find the created edge instance and click View.
  5. On the Instance Details page, click the Edge Applications tab. On this tab, click Allocate Application.
  6. Assign the le_object_detector FC application to the edge instance, and click Close.

6. Deploy the edge instance

  1. On the Instance Details page, click Deploy in the upper right corner. In the dialog box that appears, click OK to assign resources such as sub-devices and Function Compute-based edge applications to the edge instance.
    You can click Deployment Details to view the deployment progress and result.
  2. After the deployment is complete, the tflite_detector device is displayed on the Devices & Drivers tab and its status changes to Online.
  3. On the right side of the tflite_detector device, click View and go to the Device Details page.
  4. On the Thing Model Data > Status tab of the Device Details page, you can view the inference results.
    Place a common object in front of the Raspberry Pi camera or have a person stand in front of it. The le_object_detector FC application recognizes the object or person and reports the recognition result to IoT Platform.

    Now you have completed the process of deploying a machine learning model and performing inference at the edge.
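
As background on what the reported values represent: typical TFLite object detection models output arrays of class indexes and confidence scores, and low-confidence detections are filtered out before results are reported. The sketch below shows such post-processing; the label map and threshold are illustrative assumptions, not taken from the downloaded application code.

```python
# Filter raw detection outputs by a confidence threshold and map class
# indexes to labels. LABELS is a tiny illustrative excerpt; real
# COCO-style models ship a full label file.
LABELS = {0: "person", 1: "bicycle", 2: "car"}

def postprocess(class_ids, scores, threshold=0.5):
    """Return (label, score) pairs for detections above the threshold."""
    results = []
    for cls, score in zip(class_ids, scores):
        if score >= threshold:
            results.append((LABELS.get(int(cls), "unknown"), float(score)))
    return results

print(postprocess([0, 2, 1], [0.91, 0.42, 0.77]))
# [('person', 0.91), ('bicycle', 0.77)]
```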