DAVE Developer's Wiki β

== Model deployment and inference applications ==
=== Version 1 ===
The C++ application previously used and described [https://wiki.dave.eu/index.php/ML-TN-001_-_AI_at_the_edge:_comparison_of_different_embedded_platforms_-_Part_2#Model_deployment_and_inference_application here] was adapted to work with the new NXP Linux BSP release. It now uses OpenCV 4.2.0 to pre-process the input image and TensorFlow Lite (TFL) 2.1 as the inference engine. It still supports all three TFL models previously tested on the [https://wiki.dave.eu/index.php?title=Category:Mito8M&action=edit&redlink=1 Mito8M SoM].
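The overall shape of such an application can be sketched as follows. This is a minimal, hypothetical example, not the actual code described above: the model path, input file, and input geometry (224x224 RGB, float input in [0, 1]) are assumptions, and error handling is omitted for brevity.

```cpp
#include <cstring>
#include <memory>
#include <opencv2/opencv.hpp>
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // Pre-processing with OpenCV: resize to the assumed model input size and
  // convert BGR (OpenCV's default channel order) to RGB float in [0, 1].
  cv::Mat img = cv::imread("input.jpg");          // hypothetical input image
  cv::Mat resized, rgb;
  cv::resize(img, resized, cv::Size(224, 224));   // assumed input geometry
  cv::cvtColor(resized, rgb, cv::COLOR_BGR2RGB);
  rgb.convertTo(rgb, CV_32FC3, 1.0f / 255.0f);

  // Load the TFL model and build the interpreter.
  auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite"); // hypothetical path
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  interpreter->AllocateTensors();

  // Copy the pre-processed pixels into the input tensor and run the inference.
  std::memcpy(interpreter->typed_input_tensor<float>(0), rgb.ptr<float>(0),
              rgb.total() * rgb.elemSize());
  interpreter->Invoke();

  // Post-processing: read back the output scores (one per class).
  const float* scores = interpreter->typed_output_tensor<float>(0);
  (void)scores;
  return 0;
}
```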
=== Version 2 ===
The version 1 application was modified to accelerate the inference using the NPU integrated in the i.MX8M Plus SoC.
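In TFL, hardware accelerators are typically reached through a delegate that re-plans the graph so that supported operators run on the accelerator. The following is a hedged sketch of how the NPU might be attached via the NN API delegate; the accelerator name and the delegate route used by this particular BSP are assumptions, not confirmed details of the application above.

```cpp
#include <memory>
#include "tensorflow/lite/delegates/nnapi/nnapi_delegate.h"
#include "tensorflow/lite/interpreter.h"

// 'interpreter' is a std::unique_ptr<tflite::Interpreter> built as in version 1.
void enable_npu(std::unique_ptr<tflite::Interpreter>& interpreter) {
  tflite::StatefulNnApiDelegate::Options options;
  options.accelerator_name = "vsi-npu";  // assumed accelerator name
  static tflite::StatefulNnApiDelegate delegate(options);
  // Re-plan the graph: operators supported by the delegate are offloaded
  // to the NPU, the rest fall back to the CPU.
  interpreter->ModifyGraphWithDelegate(&delegate);
}
```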
=== Version 3 ===
== Building and running the applications ==
As stated in the [[ML-TN-001 - AI at the edge: comparison of different embedded platforms - Part 1|first article of this series]], one of the goals is to evaluate the performance of the inference applications. Before and after the execution of the inference, other operations, generally referred to as pre- and post-processing, are performed. Strictly speaking, these operations are not part of the actual inference and are therefore measured separately.