==Workflow and resulting block diagram==
The following picture shows the block diagram of the resulting application, along with part of the workflow used to build it.
 
 
[[File:MISC-TN-011-image-classifier.png|thumb|center|600px|Block diagram of the image classifier]]
 
First, the TensorFlow (TF) model generated with Microsoft Azure Custom Vision was converted to the TensorFlow Lite (TFL) format.
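For reference, the conversion can be performed on the host with the TensorFlow 1.x Python converter. The following is a minimal sketch, not the exact procedure used here: the file name <code>model.pb</code> is an assumption (the frozen graph as exported by Custom Vision), while the input/output tensor names match those printed by the application below.
<pre>
import tensorflow as tf

# Convert the frozen graph exported by Azure Custom Vision into a
# TensorFlow Lite flatbuffer (TF 1.x converter API; the file name
# "model.pb" is assumed here).
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="model.pb",
    input_arrays=["Placeholder"],      # input tensor name reported by the app
    output_arrays=["model_outputs"],   # output tensor name reported by the app
)

tflite_model = converter.convert()
with open("converted_model.tflite", "wb") as f:
    f.write(tflite_model)
</pre>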
==Running the application==
The following block shows the execution of the classifier on the embedded platform:
<pre class="board-terminal">
root@mito8m:~/devel/image_classifier_eIQ# ./image_classifier_cv converted_model.tflite labels.txt testdata/red-apple1.jpg
Original image size: 600x600x3
Cropped image size: 600x600x3
Resized image size: 224x224x3
Input tensor index: 0
Input tensor name: Placeholder
Filling time: 25.3169 ms
Inference time: 276.121 ms
Total prediction time: 301.438 ms
Output tensor index: 406
Output tensor name: model_outputs
Top results:
0.997172 Red Apple
0.00214239 Green Apple
</pre>
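Internally, the classifier follows the standard TensorFlow Lite interpreter flow: load the model, allocate the tensors, fill the input tensor with the preprocessed image, invoke the inference, and rank the output probabilities against the labels file. As an illustration only (the actual application uses the equivalent C++ API), here is a minimal Python sketch of the same flow; the file names are those used in the command line above, and the image preprocessing is omitted for brevity:
<pre>
import numpy as np
import tensorflow as tf

# Load the converted model and allocate its tensors.
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]   # e.g. index 0, name "Placeholder"
out = interpreter.get_output_details()[0]  # e.g. index 406, name "model_outputs"

# "image" stands in for the cropped and resized 224x224x3 input
# (real preprocessing omitted for brevity).
image = np.zeros((1, 224, 224, 3), dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], image)

# Run the inference.
interpreter.invoke()

# Rank the class probabilities against the labels file.
probs = interpreter.get_tensor(out["index"])[0]
labels = [line.strip() for line in open("labels.txt")]
for i in np.argsort(probs)[::-1][:2]:
    print(probs[i], labels[i])
</pre>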
The prediction time is cut by about 88% compared to [[SBCX-TN-005: Using TensorFlow to implement a Deep Learning image classifier based on Azure Custom Vision-generated model|the implementation described in SBCX-TN-005]]. This improvement is due to several factors, the most relevant of which are:
* The i.MX8M is faster than the i.MX6Q
* The application is written in C++ rather than Python
* The TF model was replaced with a TFL model, which is better suited to ARM-based embedded devices
* The middleware provided by NXP eIQ is optimized for NXP SoCs.