Link IoT Edge can deploy machine learning models from the cloud to the edge and run inference on edge devices. You can use this feature to process real-time, large-scale data at the edge, such as in visual recognition scenarios.

Prerequisites

Background information

You can train your inference models on platforms such as Machine Learning Platform for AI, and host the trained models and related code in Alibaba Cloud services such as Function Compute, Object Storage Service (OSS), or Container Registry. You can then deploy a trained model to the gateway as an edge application in an edge instance of Link IoT Edge, use the model on the gateway to perform inference, and upload the inference results to IoT Platform.

Use Link IoT Edge to perform machine learning inference

This topic describes how to perform machine learning inference in Link IoT Edge by deploying a TensorFlow Lite deep learning object detection model on a Raspberry Pi 4B.

1. Configure the Raspberry Pi device and set up the inference environment

Use Secure Shell (SSH) to connect to the Raspberry Pi device and perform the following steps:

  1. Open the Raspberry Pi configuration tool and enable the camera.
    1. Run the following command to open the Raspberry Pi configuration tool:
      sudo raspi-config
    2. Select Interfacing Options and click Select.
    3. Select Camera and click Select to enable the camera.
    4. Click Finish. Then, restart the Raspberry Pi device.
    5. Run the following command on the shell terminal of the Raspberry Pi device to check whether the camera works properly by using the built-in raspistill tool:
      raspistill -v -o cam.jpg
      This command prints verbose camera information and saves a captured photo as the cam.jpg file in the current directory. A scripted alternative to this check appears at the end of this section.
  2. Install TensorFlow Lite Interpreter.
    1. Run the following command on the shell terminal of the Raspberry Pi device to download the installation package of TensorFlow Lite Interpreter:
      curl -O https://iotedge-web.oss-cn-shanghai.aliyuncs.com/public/LeMLInterpreter/ARMv7hf/linkedge_ml_tflite_raspi4_cp3x_armv7hf_installer.tar.gz
    2. Decompress and install the package.
      tar xzvf linkedge_ml_tflite_raspi4_cp3x_armv7hf_installer.tar.gz
      cd linkedge_ml_tflite_raspi4_cp3x_armv7hf_installer/
      ./le_ml_installer.sh

      If the system displays a message indicating that the installation succeeded, TensorFlow Lite Interpreter is installed. Optionally, you can verify the setup with the short Python checks below.
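
As a scripted alternative to the raspistill check in step 1 above, the following minimal Python sketch captures a still image with the picamera library, which is preinstalled on most Raspberry Pi OS images (if it is missing, install it with sudo pip3 install picamera):

    # camera_check.py: capture one still image to verify that the camera works.
    from time import sleep
    from picamera import PiCamera

    camera = PiCamera()
    try:
        camera.start_preview()
        sleep(2)                   # let the sensor adjust exposure
        camera.capture('cam.jpg')  # saves the photo in the current directory
        print('Capture succeeded: cam.jpg')
    finally:
        camera.close()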
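
To confirm that TensorFlow Lite Interpreter is usable from Python 3, you can load a model and print its input details. The model path below is a placeholder rather than a file shipped with the installer, so point it at any .tflite model file that you have:

    # tflite_check.py: verify that the TensorFlow Lite runtime can load a model.
    from tflite_runtime.interpreter import Interpreter

    MODEL_PATH = 'detect.tflite'  # placeholder: substitute a real .tflite file

    interpreter = Interpreter(model_path=MODEL_PATH)
    interpreter.allocate_tensors()
    for detail in interpreter.get_input_details():
        print(detail['name'], detail['shape'], detail['dtype'])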

2. Publish the driver for detectors in the cloud

  1. Download the code package of the driver for detectors: object_detector_driver.zip.
  2. Log on to the Link IoT Edge console.
  3. In the left-side navigation pane, click Drivers.
  4. Add the driver as a custom driver. For more information, see Publish drivers to the cloud.
    The following table describes some of the parameters.
    Table 1. Parameters in the Driver Information section
    • Driver Name: The name of the custom driver, such as obj_detector_driver.
    • Communication Protocol: The communication protocol that is used to develop the driver. In this example, select Custom.
    • Language: The programming language that is used to develop the driver. In this example, select Python 3.5.
    • Built-in Driver: Specifies whether the driver is built in. In this example, select No.
    • Driver File: The driver file. Click Upload File to upload the object_detector_driver.zip driver file.
    • Driver Version: The unique version number of the driver. In this example, set this parameter to v1.0.0.
    • Link IoT Edge Version for the Driver: The Link IoT Edge version that supports the driver. In this example, select Version 2.7.0 and Later.
    • Version Description: Optional. The description of the driver version.

    You need only to set the parameters that are described in the preceding table.
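
The object_detector_driver.zip package already contains working code, so you do not need to write anything for this step. For orientation only, the following sketch shows roughly what a single detection pass with a TFLite SSD-style model looks like in Python. The model path, the dummy input frame, the report_properties helper, and the ObjectCategory and DetectScore identifiers are all assumptions for illustration; the real driver uses the camera input and the Link IoT Edge driver SDK from the package.

    # Illustrative sketch of one detection pass with a TFLite SSD-style model.
    # Model path, input frame, helper names, and property identifiers are
    # assumptions; the downloaded driver package defines the real logic.
    import numpy as np
    from tflite_runtime.interpreter import Interpreter

    interpreter = Interpreter(model_path='detect.tflite')  # placeholder path
    interpreter.allocate_tensors()
    input_detail = interpreter.get_input_details()[0]
    output_details = interpreter.get_output_details()

    def report_properties(props):
        # Hypothetical stand-in for the driver SDK's property-report call.
        print('report:', props)

    # Dummy frame sized to the model input; a real driver feeds camera frames.
    _, height, width, _ = input_detail['shape']
    frame = np.zeros((height, width, 3), dtype=input_detail['dtype'])

    interpreter.set_tensor(input_detail['index'], np.expand_dims(frame, 0))
    interpreter.invoke()
    # SSD-style detection models typically output boxes, classes, scores, count.
    classes = interpreter.get_tensor(output_details[1]['index'])[0]
    scores = interpreter.get_tensor(output_details[2]['index'])[0]

    report_properties({'ObjectCategory': int(classes[0]),  # assumed identifier
                       'DetectScore': float(scores[0])})   # assumed identifier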

3. Assign the detector device driver to the edge instance

  1. In the left-side navigation pane, click Edge Instances. Find the edge instance that you have created and click View in the Actions column.
  2. On the Instance Details page, click the Devices & Drivers tab. On the Devices & Drivers tab, click the + icon next to All Drivers.
  3. In the Assign Driver panel, select Custom Drivers. Find the obj_detector_driver driver and click Assign in the Actions column. Then, click Close.
  4. Click the assigned obj_detector_driver driver and click Assign Sub-device. In the Assign Sub-device panel, click Add Sub-device and create a sub-device for the edge instance.
  5. In the Add Device dialog box, click Create Product and create a detector product.
    In the Create Product dialog box, set the parameters and click OK.
    Table 2. Parameters
    • Product Name: The name of the product. In this example, set this parameter to detector.
    • Gateway Connection Protocol: The communication protocol that is used by the gateway. In this example, select Custom.
  6. In the Add Device dialog box, the Product parameter is automatically set to the name of the product that you created. Click Configure and define the product features. For more information, see Add a TSL feature.

    Set the following two properties (a sample TSL-style definition appears at the end of this section):

    • Object category property
    • Detection score property
  7. Return to the Add Device dialog box on the Instance Details page in the Link IoT Edge console. Create a device for the detector product.
  8. Assign the tflite_detector device to the edge instance.
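
For reference, the two properties can also be described in TSL-style JSON. The following Python snippet mirrors the general shape of such a definition; the identifiers, data types, and specs here are assumptions for illustration and should match whatever you configure in the console.

    # Assumed TSL-style definitions for the two detector properties.
    # Identifiers, types, and specs are illustrative; align them with the
    # feature definitions that you create in the console.
    detector_properties = [
        {
            'identifier': 'ObjectCategory',   # assumed identifier
            'name': 'Object category',
            'accessMode': 'r',                # reported by the device
            'dataType': {'type': 'text', 'specs': {'length': '128'}},
        },
        {
            'identifier': 'DetectScore',      # assumed identifier
            'name': 'Detection score',
            'accessMode': 'r',
            'dataType': {'type': 'float', 'specs': {'min': '0', 'max': '1'}},
        },
    ]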

4. Create an inference function

  1. Download the code package for the inference function: object_detector_app.zip.
  2. Log on to the Function Compute console.
    If you have not activated Function Compute, read the terms, select I have read and agree, and then click Activate Now.
  3. Optional. In the left-side navigation pane, click Services and Functions. On the Services and Functions page, click Create Service. On the Create Service page, set the parameters as required and click Submit.
    The Service Name parameter is required. In this example, the Service Name parameter is set to EdgeFC. You can set other parameters based on your needs.
    Note If the EdgeFC service has been created for other scenarios or applications, you do not need to recreate the service.
  4. After you create the service, you must create a function in the service. On the Services and Functions page, click Create Function. On the Create Function page, click Configure and Deploy in the Event Function section.
  5. Set the parameters as required to create the inference function.
    • Service Name: The service where the function resides. Select EdgeFC.
    • Function Name: The name of the function. In this example, set this parameter to object_detector_app.
    • Runtime: The runtime environment of the function. In this example, select Python 3.
    • Function Handler: The handler of the function. Use the default value index.handler.
    • Memory: The size of memory that is required to execute the function. Select 512 MB.
    • Timeout: The timeout period of the function. Enter 10. Unit: seconds.
    • Single Instance Concurrency: The number of concurrent requests that can be processed by an instance. Use the default value.

    You can set other parameters based on your needs or leave them unspecified. For more information, see What is Function Compute?

    Verify the function information and click Create.

  6. After the function is created, you are navigated to the details page of the function. On the Code tab, select Upload Zip File, click Select File, upload the object_detector_app.zip package that you downloaded in Step 1, and then click Save.
    After the code is uploaded, you can view the source code in the In-line Edit code editor.
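
The uploaded object_detector_app.zip already defines the function's logic, including how work is split between the driver and the function. As a hedged sketch only, a Python event function in Function Compute has the following general shape: index.py exports handler(event, context), and Function Compute passes the event payload as bytes. The field names parsed below are assumptions, not the actual payload format of object_detector_app.zip.

    # index.py: assumed general shape of a Function Compute event handler.
    # The payload fields are illustrative; the real object_detector_app
    # defines its own event format and processing logic.
    import json
    import logging

    logger = logging.getLogger()

    def handler(event, context):
        # Function Compute delivers the event as bytes; decode before parsing.
        if isinstance(event, (bytes, bytearray)):
            event = event.decode('utf-8')
        data = json.loads(event)
        logger.info('received event: %s', data)
        category = data.get('ObjectCategory')  # assumed field name
        score = data.get('DetectScore')        # assumed field name
        return json.dumps({'category': category, 'score': score})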

5. Assign the function to the edge instance

  1. Log on to the Link IoT Edge console.
  2. In the left-side navigation pane, click Applications.
  3. Create a Function Compute-based edge application by using the function that you created in the preceding section (4. Create an inference function). For more information, see Use Function Compute to create edge applications.

    The following table describes the application parameters.

    • Application Name: The name of the application, such as le_object_detector.
    • Application Type: The method that is used to create the edge application. In this example, select Function Compute.
    • Region: The region where the service that you created resides.
    • Service: The service where the function resides. In this example, select EdgeFC.
    • Function: The function that you created. In this example, select object_detector_app.
    • Authorization: The RAM role that is assumed by Link IoT Edge to access Function Compute. In this example, select AliyunIOTAccessingFCRole.
    • Application Version: The unique version number of the application. You cannot specify two identical version numbers for an application.
  4. In the left-side navigation pane, click Edge Instances.
  5. Find the created edge instance and click View in the Actions column.
  6. On the Instance Details page, click the Edge Applications tab. On the Edge Applications tab, click Assign Application.
  7. Assign the le_object_detector application to the edge instance and click Close.

6. Deploy the edge instance

  1. On the Instance Details page, click Deploy in the upper right corner. In the message that appears, click OK to assign resources such as sub-devices and Function Compute-based edge applications to the edge instance.
  2. After the deployment is complete, go to the Devices & Drivers tab. The tflite_detector device is displayed, and its status changes to Online.
  3. Find the tflite_detector device and click View in the Actions column. The Device Details page appears.
  4. On the Device Details page, click the TSL Data tab. On the TSL Data tab, click Status to view the inference results.
    Place a common object in front of the Raspberry Pi camera, or have a person stand in front of it. The le_object_detector application recognizes the object or face and reports the recognition results to IoT Platform.

    You have completed the process of deploying a machine learning model and performing inference at the edge.