ML-TN-004 — Machine Learning, spectroscopy, and materials classification

Applies to: Machine Learning


==History==

{| class="wikitable"
!Version
!Date
!Notes
|-
|1.0.0
|October 2021
|First public release
|}

==Introduction==

Materials classification is yet another interesting use case where Machine Learning (ML for short) opens the door to promising developments for smart edge devices. In this Technical Note (TN), different implementations of an ML-based materials classification application are compared in terms of performance and development workflow. The classification process makes use of spectroscopy: the model is a neural network (NN) fed with measurement data produced by a spectrometer.

In the rest of the document, we use the expression ''baseline model'' to refer to the model retrieved from TBD, which was used as the starting point.

==Platform: Xilinx Zynq UltraScale+ MPSoC==

===Baseline model===

The tests detailed in this section make use of the baseline model.

====NN accelerator: DPU====

{| class="wikitable"
|+Testbed configuration
|'''SoC'''
|Xilinx Zynq UltraScale+ MPSoC
|-
|'''Framework'''
|PyTorch
|-
|'''Stack'''
|Vitis AI
|-
|'''NN accelerator'''
|DPU x.y
|}

This configuration is built upon the standard development flow proposed by Xilinx. As such, it makes use of the Deep Learning Processing Unit (DPU) for accelerating NN-related computations.
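
As an illustration of what this flow looks like at the application level, the following Python sketch runs a compiled model on the DPU through the Vitis AI Runtime (VART) API. The <code>baseline_model.xmodel</code> file name, the tensor shapes, and the int8 data type are assumptions made for the sake of the example, not values taken from this TN.

<syntaxhighlight lang="python">
# Hypothetical sketch (not taken from this TN): running a compiled model on
# the DPU through the Vitis AI Runtime (VART). File name, tensor shapes, and
# int8 data type are illustrative assumptions.
import numpy as np
import vart
import xir

def get_dpu_subgraph(graph):
    # A compiled .xmodel contains one DPU subgraph plus CPU subgraphs;
    # select the one mapped onto the DPU.
    root = graph.get_root_subgraph()
    return [s for s in root.toposort_child_subgraph()
            if s.has_attr("device") and s.get_attr("device").upper() == "DPU"][0]

graph = xir.Graph.deserialize("baseline_model.xmodel")  # assumed file name
runner = vart.Runner.create_runner(get_dpu_subgraph(graph), "run")

in_t = runner.get_input_tensors()[0]
out_t = runner.get_output_tensors()[0]

# One spectrum per inference; shapes and dtype come from the compiled model.
input_data = [np.zeros(tuple(in_t.dims), dtype=np.int8)]
output_data = [np.zeros(tuple(out_t.dims), dtype=np.int8)]

job_id = runner.execute_async(input_data, output_data)
runner.wait(job_id)
print("Predicted class:", int(np.argmax(output_data[0])))
</syntaxhighlight>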

====NN accelerator: custom, FINN-generated====

According to the official documentation, FINN is

an experimental framework from Xilinx Research Labs to explore deep neural network inference on FPGAs. It specifically targets quantized neural networks, with emphasis on generating dataflow-style architectures customized for each network. It is not intended to be a generic DNN accelerator offering like Vitis AI, but rather a tool for exploring the design space of DNN inference accelerators on FPGAs.

In other words, FINN makes it possible to synthesize custom-tailored accelerators to be instantiated on the FPGA. In principle, compared to the classic DPU-centered approach, this technique should achieve much better programmable-logic resource utilization, because it can exploit the flexible nature of FPGAs to the fullest.
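
FINN consumes quantized networks trained with Brevitas, Xilinx's quantization-aware training library for PyTorch. The following minimal sketch shows how a small quantized MLP could be expressed with Brevitas layers, the form FINN expects as input; the layer sizes and bit widths are purely illustrative assumptions, not the values used for the baseline model.

<syntaxhighlight lang="python">
# Illustrative sketch: a small quantized MLP expressed with Brevitas layers.
# Sizes and bit widths are assumptions, not the baseline model's values.
import torch.nn as nn
from brevitas.nn import QuantIdentity, QuantLinear, QuantReLU

class QuantSpectrumMLP(nn.Module):
    def __init__(self, num_bands=256, num_classes=4, bits=4):
        super().__init__()
        self.net = nn.Sequential(
            QuantIdentity(bit_width=bits),  # quantize the input spectrum
            QuantLinear(num_bands, 64, bias=True, weight_bit_width=bits),
            QuantReLU(bit_width=bits),
            QuantLinear(64, num_classes, bias=True, weight_bit_width=bits),
        )

    def forward(self, x):
        return self.net(x)
</syntaxhighlight>

Once trained, a network like this is exported to FINN's ONNX dialect and processed by the FINN compiler, which generates the dataflow accelerator to be instantiated in the programmable logic.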

===TBD Pure convolutional model===

====NN accelerator: DPU====

{| class="wikitable"
|+Testbed configuration
|'''SoC'''
|Xilinx Zynq UltraScale+ MPSoC
|-
|'''Framework'''
|PyTorch
|-
|'''Stack'''
|Vitis AI
|-
|'''NN accelerator'''
|DPU x.y
|}

By nature, the data produced by a spectrometer is a one-dimensional vector. In the previous section, this vector was used to feed the NN model directly. Even though this is not conceptually wrong, this approach does not exploit the DPU accelerator efficiently: the DPU is optimized to run convolutional NNs. Therefore, the model should be fed with data formatted as "images" in order to make the most of the hardware-accelerated inference engine.
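
As a simple illustration, assuming a 256-band spectrum (an arbitrary example size, not the actual resolution of the spectrometer used here), the reformatting boils down to a reshape:

<syntaxhighlight lang="python">
# Illustrative only: fold a 1-D spectrum into a 2-D single-channel "image"
# so that a convolutional NN (and hence the DPU) can process it efficiently.
import numpy as np

spectrum = np.random.rand(256).astype(np.float32)  # one acquisition (assumed size)
image = spectrum.reshape(1, 16, 16)                # (channels, height, width)
</syntaxhighlight>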

The configuration described in this section makes use of a true convolutional NN, which is fed with multi-dimensional vectors. Compared to the baseline model, the fully connected layers, which act as bottlenecks from the computational perspective, were removed. A minimal sketch of such a network is shown below.
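
The sketch assumes a 16x16 single-channel "spectral image" and four classes (both made up for the example) and replaces the fully connected head with a 1x1 convolution followed by global average pooling; it is not the actual model used in the tests.

<syntaxhighlight lang="python">
# Minimal sketch of a purely convolutional classifier: the fully connected
# head is replaced by a 1x1 convolution plus global average pooling.
# Layer sizes and class count are illustrative assumptions.
import torch
import torch.nn as nn

class PureConvClassifier(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_classes, kernel_size=1),  # replaces the FC head
            nn.AdaptiveAvgPool2d(1),                    # global average pooling
        )

    def forward(self, x):
        return self.features(x).flatten(1)              # (batch, num_classes)

logits = PureConvClassifier()(torch.rand(1, 1, 16, 16))
</syntaxhighlight>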

=References=

*[1] Hadi Parastar, Geert van Kollenburg, Yannick Weesepoel, André van den Doel, Lutgarde Buydens, Jeroen Jansen, ''Integration of handheld NIR and machine learning to "Measure & Monitor" chicken meat authenticity'', 2020
*[2] N. Salamati, C. Fredembach, S. Süsstrunk, ''Material Classification Using Color and NIR Images'', 2009
*[3] Zackory Erickson, Nathan Luskey, Sonia Chernova, Charles C. Kemp, ''Classification of Household Materials via Spectroscopy'', 2019