Link IoT Edge can deploy machine learning models from the cloud to the edge and run the inference models at the edge. This feature can be used to process real-time and large-scale data services at the edge, such as visual recognition.
Prerequisites
- Link IoT Edge is installed on Raspberry Pi 3B, 3B+, or 4B. For more information, see Install Link IoT Edge on Raspberry Pi.
- Your Raspberry Pi device is connected to a camera. For more information, see Camera Modules.
Background information
You can train your inference models on platforms such as Machine Learning Platform for AI. Then, you can host the trained models and relevant code in Alibaba Cloud services such as Function Compute, Object Storage Service (OSS), or Container Registry. You can deploy a trained model to the gateway as an edge application in an edge instance of Link IoT Edge. Then, you can use the model on the gateway to perform inference and upload the inference results to IoT Platform.
This topic describes how to perform machine learning inference in Link IoT Edge by deploying the deep learning object detection model provided by TensorFlow Lite on Raspberry Pi 4B.
1. Configure the Raspberry Pi device and set up the inference environment
Use Secure Shell (SSH) to connect to the Raspberry Pi device and perform the following steps:
- Open the Raspberry Pi configuration tool and enable the camera.
- Install TensorFlow Lite Interpreter.
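Before moving on, it can help to confirm that the interpreter installed in the previous step is actually importable on the device. The following check is a minimal sketch, not part of the official setup; the helper name is hypothetical.

```python
# Hypothetical sanity check (not part of the official setup): verify that
# the TensorFlow Lite interpreter installed in the previous step can be
# imported on the Raspberry Pi before you deploy the driver.

def tflite_available():
    """Return True if the tflite_runtime interpreter can be imported."""
    try:
        from tflite_runtime.interpreter import Interpreter  # noqa: F401
        return True
    except ImportError:
        return False

if __name__ == "__main__":
    if tflite_available():
        print("tflite_runtime is installed; the interpreter can be loaded.")
    else:
        print("tflite_runtime is missing; install it before deploying the driver.")
```

Running this script on the device prints one line stating whether the interpreter is available.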
2. Publish the driver for detectors in the cloud
- Download the code package of the driver for detectors: object_detector_driver.zip.
- Log on to the Link IoT Edge console.
- In the left-side navigation pane, click Drivers.
- Add the driver as a custom driver. For more information, see Publish drivers to the cloud. The following table describes some of the parameters.
Table 1. Parameters in the Driver Information section
- Driver Name: The name of the custom driver, such as obj_detector_driver.
- Communication Protocol: The communication protocol that is used to develop the driver. In this example, select Custom.
- Language: The programming language that is used to develop the driver. In this example, select Python 3.5.
- Built-in Driver: Specifies whether the driver is built in. In this example, select No.
- Driver File: The driver file. Click Upload File to upload the object_detector_driver.zip driver file.
- Driver Version: The unique version number of the driver. In this example, set this parameter to v1.0.0.
- Link IoT Edge Version for the Driver: The Link IoT Edge version that supports the driver. In this example, select Version 2.7.0 and Later.
- Version Description: Optional. The description of the driver version.
You need only to set the parameters that are described in the preceding table.
3. Assign the detector device driver to the edge instance
- In the left-side navigation pane, click Edge Instances. Find the edge instance that you have created and click View in the Actions column.
- On the Instance Details page, click the Devices & Drivers tab. On the Devices & Drivers tab, click the + icon next to All Drivers.
- In the Assign Driver panel, select Custom Drivers. Find the obj_detector_driver driver and click Assign in the Actions column. Then, click Close.
- Click the assigned obj_detector_driver driver and click Assign Sub-device. In the Assign Sub-device panel, click Add Sub-device and create a sub-device for the edge instance.
- In the Add Device dialog box, click Create Product and create a detector product. In the Create Product dialog box, set the parameters and click OK.
Table 2. Parameters
- Product Name: The name of the product. In this example, set this parameter to detector.
- Gateway Connection Protocol: The communication protocol that is used by the gateway. In this example, select Custom.
- In the Add Device dialog box, the Product parameter is automatically set to the name of the product that you created. Click Configure and define the product features. For more information, see Add a TSL feature.
Set the following two properties:
- Object category property
- Detection score property
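To make the two properties concrete, the sketch below shows the kind of property payload a detector driver could report for them. The identifiers ObjectCategory and DetectionScore are hypothetical; use the TSL identifiers that you actually define for the detector product.

```python
# Hypothetical example of a property payload for the two TSL properties
# defined above. The identifiers "ObjectCategory" (text) and
# "DetectionScore" (float) are assumptions, not the official schema.

def build_property_report(category, score):
    """Build a property report for one detection result."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be within [0.0, 1.0]")
    return {
        "ObjectCategory": category,
        "DetectionScore": round(float(score), 4),
    }

# Example: a detection of a person with confidence 0.876512.
print(build_property_report("person", 0.876512))
# → {'ObjectCategory': 'person', 'DetectionScore': 0.8765}
```

Keeping the score within [0.0, 1.0] and rounding it before reporting avoids uploading noisy floating-point values to IoT Platform.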
- Return to the Add Device dialog box on the Instance Details page in the Link IoT Edge console. Create a device named tflite_detector for the detector product.
- Assign the tflite_detector device to the edge instance.
4. Create an inference function
- Download the code package for the inference function: object_detector_app.zip.
- Log on to the Function Compute console. If you have not activated Function Compute, read the terms, select I have read and agree, and then click Activate Now.
- Optional. In the left-side navigation pane, click Services and Functions. On the Services and Functions page, click Create Service. On the Create Service page, set the parameters as required and click Submit. The Service Name parameter is required. In this example, the Service Name parameter is set to EdgeFC. You can set other parameters based on your needs. Note: If the EdgeFC service has been created for other scenarios or applications, you do not need to recreate the service.
- After you create the service, you must create a function in the service. On the Services and Functions page, click Create Function. On the Create Function page, click Configure and Deploy in the Event Function section.
- Set the parameters as required to create the inference function.
- Service Name: The service where the function resides. Select EdgeFC.
- Function Name: The name of the function. In this example, set this parameter to object_detector_app.
- Runtime: The runtime environment of the function. In this example, select Python 3.
- Function Handler: The handler of the function. Use the default value index.handler.
- Memory: The size of memory that is required to execute the function. Select 512 MB.
- Timeout: The timeout period of the function. Enter 10. Unit: seconds.
- Single Instance Concurrency: The number of concurrent requests that can be processed by an instance. Use the default value.
You can set other parameters based on your needs or leave them unspecified. For more information, see What is Function Compute?
Verify the function information and click Create.
- After the function is created, you are navigated to the details page of the function. On the Code tab, select Upload Zip File, click Select File, upload the object_detector_app.zip package that you downloaded in the first step of this section, and then click Save. After the code is uploaded, you can view the source code in the In-line Edit code editor.
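The core of such an inference function is post-processing the model's raw outputs. The sketch below is a hypothetical simplification of what an application like object_detector_app might do: a TensorFlow Lite SSD-style detector typically outputs parallel arrays of class indices and confidence scores per frame, and the function picks the strongest detection to report. The label list and threshold are assumptions for illustration.

```python
# Hypothetical post-processing for a TFLite SSD-style object detector.
# The model outputs parallel arrays of class indices and confidence
# scores; this helper selects the highest-scoring detection above a
# threshold. The label list and threshold are illustrative only.

LABELS = ["person", "bicycle", "car", "cat", "dog"]  # assumed subset

def best_detection(class_ids, scores, threshold=0.5):
    """Return (label, score) for the strongest detection, or None."""
    best = None
    for class_id, score in zip(class_ids, scores):
        if score < threshold:
            continue
        if best is None or score > best[1]:
            index = int(class_id)
            label = LABELS[index] if index < len(LABELS) else "unknown"
            best = (label, score)
    return best

# Example frame: the model reports a dog (0.62) and a person (0.91).
print(best_detection([4, 0], [0.62, 0.91]))  # → ('person', 0.91)
```

In the deployed function, the resulting label and score would then be reported through the detector device's TSL properties so that IoT Platform receives the inference result.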
5. Assign the function to the edge instance
- Log on to the Link IoT Edge console.
- In the left-side navigation pane, click Applications.
- Create a Function Compute-based edge application by using the function that you created in Step 4 (Create an inference function) of this topic. For more information, see Use Function Compute to create edge applications.
The following table describes the application parameters.
- Application Name: The name of the application, such as le_object_detector.
- Application Type: The method that is used to create the edge application. In this example, select Function Compute.
- Region: The region where the service that you created resides.
- Service: The service where the function resides. In this example, select EdgeFC.
- Function: The function that you created. In this example, select object_detector_app.
- Authorization: The RAM role that is assumed by Link IoT Edge to access Function Compute. In this example, select AliyunIOTAccessingFCRole.
- Application Version: The unique version number of the application. You cannot specify two identical version numbers for an application.
- In the left-side navigation pane, click Edge Instances.
- Find the created edge instance and click View in the Actions column.
- On the Instance Details page, click the Edge Applications tab. On the Edge Applications tab, click Assign Application.
- Assign the le_object_detector application to the edge instance and click Close.
6. Deploy the edge instance
- On the Instance Details page, click Deploy in the upper right corner. In the message that appears, click OK to assign resources such as sub-devices and Function Compute-based edge applications to the edge instance.
- After the deployment is complete, go to the Devices & Drivers tab. The tflite_detector device is displayed and its status changes to Online.
- Find the tflite_detector device and click View in the Actions column. The Device Details page appears.
- On the Device Details page, click the TSL Data tab. On the TSL Data tab, click Status to view the inference results. Place a common object or have a person stand in front of the Raspberry Pi camera. The le_object_detector application recognizes the object or person and reports the recognition results to IoT Platform.
You have completed the process of deploying a machine learning model and performing inference at the edge.