== Model deployment ==
== Building and running the applications ==
As stated in the [[ML-TN-001 - AI at the edge: comparison of different embedded platforms - Part 1|first article of this series]], one of the goals is to evaluate the performance of the inference applications. Before and after the execution of the inference, other operations, generally referred to as pre/post-processing, are performed. Strictly speaking, these operations are not part of the actual inference and are therefore measured separately.
In order to have reproducible and reliable results, some measures were taken:
* When possible, the inference was repeated several times and the average execution time was computed
* All the files required to run the test (the executable, the image files, etc.) were stored on a [https://www.jamescoyle.net/how-to/943-create-a-ram-disk-in-linux tmpfs RAM disk] in order to make the file system/storage medium overhead negligible.
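The measurement methodology described above can be sketched as follows. This is a minimal illustration, not the actual benchmark code used in the article: the <code>preprocess</code>, <code>infer</code>, and <code>postprocess</code> functions are hypothetical placeholders standing in for the real pipeline stages, and only the inference stage is repeated and averaged.

```python
import time
import statistics

# Hypothetical stand-ins for the real pipeline stages; any callables
# (e.g. image decoding, a TFLite/ONNX invoke, label mapping) could be
# plugged in here.
def preprocess(raw):
    return [x / 255.0 for x in raw]      # e.g. pixel normalization

def infer(inputs):
    return [sum(inputs)]                 # placeholder for the model call

def postprocess(outputs):
    return max(outputs)                  # e.g. picking the top score

def timed(fn, *args):
    """Run fn once and return (result, elapsed seconds)."""
    t0 = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - t0

raw = list(range(256))
REPS = 10                                # repeat inference to average out jitter

inputs, t_pre = timed(preprocess, raw)
t_inf = statistics.mean(timed(infer, inputs)[1] for _ in range(REPS))
outputs, _ = timed(infer, inputs)
result, t_post = timed(postprocess, outputs)

# Pre/post-processing are reported separately from the averaged inference time.
print(f"pre: {t_pre * 1e6:.1f} us, "
      f"inference (avg of {REPS}): {t_inf * 1e6:.1f} us, "
      f"post: {t_post * 1e6:.1f} us")
```

Averaging only the inference stage keeps the reported figure comparable across platforms, while the one-shot pre/post-processing timings give a feel for the total end-to-end latency.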