ML-TN-003 — AI at the edge: visual inspection of assembled PCBs for defect detection — Part 1

Applies to: Machine Learning


History

Version Date Notes
1.0.0 March 2021 First public release

Introduction

In this series of articles (ML-TN-001 - AI at the edge: comparison of different embedded platforms - Part 1), different embedded platforms suited for building "Edge AI" solutions are compared in terms of inferencing capabilities and features, development tools, and so on. In principle, such platforms can drive a variety of applications in the industrial world and in other fields as well.

This series of Technical Notes illustrates a feasibility study on a common problem in the manufacturing realm that Machine Learning (ML) algorithms can supposedly address effectively: defect detection by automatic visual inspection. More specifically, the study deals with the inspection of assembled Printed Circuit Boards (PCBs). The ultimate goal is to determine whether it is possible to design innovative machines that exploit ML algorithms and outperform the traditional devices employed for this task today. This is a prime example of AI at the edge, as the application requirements include the following:

  • Data (images, in this case) must be processed where they originate.
  • Processing latency has to be minimized in order to increase the efficiency of the manufacturing line.


Currently, the problem of latency is what drives many companies to move from the cloud to the edge, along with the fact that affording a GPU for every use case is often not feasible. This has led to the birth of a new computational paradigm called "Edge AI", which combines the efficiency, speed, scalability, and reduced costs of edge computing with the powerful advantages offered by Artificial Intelligence and Machine Learning models. "Edge AI", "intelligence on the edge", or "Edge Machine Learning" means that data is processed locally, that is, near its source, by algorithms running on a hardware device instead of algorithms located in the cloud. This not only enables real-time operation, but also helps to significantly reduce the power consumption and the security vulnerabilities associated with processing data in the cloud.

While moving from the cloud to the edge is a vital step in solving resource-constraint issues, many Machine Learning models still require too much computing power and memory to fit the small microprocessors available on the market. Many practitioners approach this challenge by creating more efficient software, algorithms, and hardware, or by combining these components in specialized ways. To this end, a new generation of purpose-built accelerators is emerging as chip manufacturers work to speed up and optimize the workloads involved in AI and Machine Learning projects, from training to inference. Faster, cheaper, more power-efficient, and scalable, these accelerators promise to boost edge devices to a new level of performance. In this work, a modern system-on-chip (SoC) embedding a configurable hardware accelerator of this sort was analyzed with a view to using it as a core building block of such devices. Its applicability was also studied in a real-world scenario characterized by issues that are common to a large class of problems in the industrial realm.

Articles in this series

  • Part 1 (this document)
  • Part 2 talks about the classification of surface-mounted components on printed circuit boards.
  • Part 3 deals with the issue of data scarcity.

Test Bed

Dataset

Models

ResNet50

[Figure: Train and validation accuracy trend over 1000 training epochs for the ResNet50 model]
[Figure: Train and validation loss trend over 1000 training epochs for the ResNet50 model]
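
The exact training pipeline is not shown in this excerpt. As a minimal sketch, assuming a Keras transfer-learning setup (the dataset layout, input size, and hyperparameters below are assumptions, not the study's actual values), curves like the ones above could be produced as follows:

```python
import tensorflow as tf

NUM_CLASSES = 6  # IC, capacitor, diode, inductor, resistor, transistor

# Hypothetical dataset layout: one sub-directory per component class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/train", image_size=(224, 224), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/val", image_size=(224, 224), batch_size=32)

# ImageNet-pretrained backbone with a new classification head.
base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", pooling="avg")

inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.resnet.preprocess_input(inputs)
x = base(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# history.history holds the per-epoch accuracy and loss values that
# are plotted in the figures above.
history = model.fit(train_ds, validation_data=val_ds, epochs=1000)
```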



Host machine, confusion matrix & classification report
[Figure: Confusion matrix of the ResNet50 model on the host machine before quantization]
Class Precision Recall F1-score Support
IC 0.95740 0.89900 0.92728 1000
capacitor 0.97278 0.96500 0.96888 1000
diode 0.88558 0.95200 0.91759 1000
inductor 0.97006 0.97200 0.97103 1000
resistor 0.98882 0.97300 0.98085 1000
transistor 0.92262 0.93000 0.92629 1000
Weighted avg 0.94954 0.94850 0.94865 6000
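
The layout of these tables matches scikit-learn's classification_report. As an illustrative sketch (model, test_images, and y_true are placeholders for the study's actual evaluation pipeline), such a report and the associated confusion matrix can be generated as follows:

```python
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

CLASS_NAMES = ["IC", "capacitor", "diode",
               "inductor", "resistor", "transistor"]

# `model`, `test_images`, and `y_true` are assumed to come from the
# training/evaluation pipeline; they are placeholders here.
y_pred = np.argmax(model.predict(test_images), axis=1)

print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred,
                            target_names=CLASS_NAMES, digits=5))
```

With digits=5, the report prints precision, recall, F1-score, and support per class, plus the weighted averages: exactly the columns reported above.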




Target device, confusion matrix & classification report
[Figure: Confusion matrix of the ResNet50 model on the target device after quantization]
Class Precision Recall F1-score Support
IC 0.96384 0.85300 0.90504 1000
capacitor 0.99068 0.95700 0.97355 1000
diode 0.83779 0.94000 0.88596 1000
inductor 0.94839 0.97400 0.96103 1000
resistor 0.97211 0.97600 0.97405 1000
transistor 0.89960 0.89600 0.89780 1000
Weighted avg 0.93540 0.93267 0.93290 6000
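
The quantization flow for the target device depends on the SoC's accelerator toolchain, which is not covered in this part. Purely as an illustration of the general technique, post-training INT8 quantization and re-evaluation with TensorFlow Lite (file names and the calibration set are assumptions) might look like this:

```python
import numpy as np
import tensorflow as tf

def representative_dataset():
    # A few hundred calibration images sampled from the training set
    # (`calibration_images` is a placeholder).
    for image in calibration_images[:200]:
        yield [np.expand_dims(image.astype(np.float32), axis=0)]

# Post-training INT8 quantization of the trained Keras model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_model = converter.convert()

with open("resnet50_pcb_int8.tflite", "wb") as f:
    f.write(tflite_model)

# Re-evaluating the quantized model with the TFLite interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

correct = 0
for image, label in zip(test_images, y_true):
    interpreter.set_tensor(inp["index"],
                           np.expand_dims(image.astype(np.float32), axis=0))
    interpreter.invoke()
    pred = np.argmax(interpreter.get_tensor(out["index"]))
    correct += int(pred == label)
print("Quantized accuracy:", correct / len(y_true))
```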





ResNet101

[Figure: Train and validation accuracy trend over 1000 training epochs for the ResNet101 model]
[Figure: Train and validation loss trend over 1000 training epochs for the ResNet101 model]




Host machine, confusion matrix & classification report
[Figure: Confusion matrix of the ResNet101 model on the host machine before quantization]
Class Precision Recall F1-score Support
IC 0.96375 0.95700 0.96036 1000
capacitor 0.96373 0.98300 0.97327 1000
diode 0.96425 0.94400 0.95402 1000
inductor 0.98500 0.98500 0.98500 1000
resistor 0.98504 0.98800 0.98652 1000
transistor 0.96517 0.97000 0.96758 1000
Weighted avg 0.97116 0.97117 0.97112 6000



Target device, confusion matrix & classification report
[Figure: Confusion matrix of the ResNet101 model on the target device after quantization]
Class Precision Recall F1-score Support
IC 0.96288 0.88200 0.92067 1000
capacitor 0.95898 0.98200 0.97036 1000
diode 0.93965 0.90300 0.92096 1000
inductor 0.93719 0.95500 0.94601 1000
resistor 0.90428 0.99200 0.94611 1000
transistor 0.93896 0.92300 0.93091 1000
Weighted avg 0.94033 0.93950 0.93917 6000


ResNet152

[Figure: Train and validation accuracy trend over 1000 training epochs for the ResNet152 model]
[Figure: Train and validation loss trend over 1000 training epochs for the ResNet152 model]




Host machine, confusion matrix & classification report
[Figure: Confusion matrix of the ResNet152 model on the host machine before quantization]
Class Precision Recall F1-score Support
IC 0.94553 0.97200 0.95858 1000
capacitor 0.95538 0.98500 0.96997 1000
diode 0.98298 0.92400 0.95258 1000
inductor 0.98584 0.97500 0.98039 1000
resistor 0.99390 0.97800 0.98589 1000
transistor 0.92899 0.95500 0.94181 1000
Weighted avg 0.96544 0.96483 0.96487 6000




Target device, confusion matrix & classification report
[Figure: Confusion matrix of the ResNet152 model on the target device after quantization]
Class Precision Recall F1-score Support
IC 0.91182 0.91000 0.91091 1000
capacitor 0.94460 0.98900 0.96629 1000
diode 0.96464 0.87300 0.91654 1000
inductor 0.94124 0.94500 0.94311 1000
resistor 0.94038 0.97800 0.95882 1000
transistor 0.90358 0.90900 0.90628 1000
Weighted avg 0.93438 0.93400 0.93366 6000

InceptionV4

[Figure: Train and validation accuracy trend over 1000 training epochs for the InceptionV4 model]
[Figure: Train and validation loss trend over 1000 training epochs for the InceptionV4 model]



Host machine, confusion matrix & classification report
[Figure: Confusion matrix of the InceptionV4 model on the host machine before quantization]
Class Precision Recall F1-score Support
IC 0.94524 0.86300 0.90225 1000
capacitor 0.98051 0.95600 0.96810 1000
diode 0.88384 0.87500 0.87940 1000
inductor 0.95575 0.97200 0.96381 1000
resistor 0.96847 0.98300 0.97568 1000
transistor 0.83670 0.91200 0.87273 1000
Weighted avg 0.92842 0.92683 0.92699 6000




Target device, confusion matrix & classification report
[Figure: Confusion matrix of the InceptionV4 model on the target device after quantization]
Class Precision Recall F1-score Support
IC 0.78158 0.89100 0.83271 1000
capacitor 0.99220 0.89000 0.93832 1000
diode 0.88553 0.82000 0.85151 1000
inductor 0.88973 0.94400 0.91606 1000
resistor 0.97319 0.98000 0.97658 1000
transistor 0.83282 0.80700 0.81971 1000
Weighted avg 0.89251 0.88867 0.88915 6000

Inception ResNet V1

[Figure: Train and validation accuracy trend over 1000 training epochs for the Inception ResNet V1 model]
[Figure: Train and validation loss trend over 1000 training epochs for the Inception ResNet V1 model]




Host machine, confusion matrix & classification report
[Figure: Confusion matrix of the Inception ResNet V1 model on the host machine before quantization]
Class Precision Recall F1-score Support
IC 0.98274 0.96800 0.97531 1000
capacitor 0.97571 0.96400 0.96982 1000
diode 0.94889 0.98400 0.96613 1000
inductor 0.98085 0.97300 0.97691 1000
resistor 0.98211 0.98800 0.98504 1000
transistor 0.97278 0.96500 0.96888 1000
Weighted avg 0.97385 0.97367 0.97368 6000




Target device, confusion matrix & classification report
[Figure: Confusion matrix of the Inception ResNet V1 model on the target device after quantization]
Class Precision Recall F1-score Support
IC 0.84127 0.95400 0.89410 1000
capacitor 0.99787 0.93600 0.96594 1000
diode 0.94346 0.90100 0.92174 1000
inductor 0.95275 0.98800 0.97005 1000
resistor 0.94852 0.99500 0.97121 1000
transistor 0.93348 0.82800 0.87758 1000
Weighted avg 0.93622 0.93367 0.93344 6000

Inception ResNet V2

[Figure: Train and validation accuracy trend over 1000 training epochs for the Inception ResNet V2 model]
[Figure: Train and validation loss trend over 1000 training epochs for the Inception ResNet V2 model]




Host machine, confusion matrix & classification report
[Figure: Confusion matrix of the Inception ResNet V2 model on the host machine before quantization]
Class Precision Recall F1-score Support
IC 0.97872 0.96600 0.97232 1000
capacitor 0.99177 0.96400 0.97769 1000
diode 0.98963 0.95400 0.97149 1000
inductor 0.97931 0.99400 0.98660 1000
resistor 0.98213 0.98900 0.98555 1000
transistor 0.93365 0.98500 0.95864 1000
Weighted avg 0.97587 0.97533 0.97538 6000




Target device, confusion matrix & classification report
[Figure: Confusion matrix of the Inception ResNet V2 model on the target device after quantization]
Class Precision Recall F1-score Support
IC 0.91735 0.89900 0.90808 1000
capacitor 0.99466 0.93200 0.96231 1000
diode 0.98793 0.90000 0.94192 1000
inductor 0.92066 0.99800 0.95777 1000
resistor 0.96970 0.99200 0.98072 1000
transistor 0.87887 0.93600 0.90654 1000
Weighted avg 0.94486 0.94283 0.94289 6000

Comparison
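
Since every class has the same support (1,000 test images), the weighted averages reported above coincide with the plain means of the per-class scores. Gathering the weighted average F1-scores from the classification reports gives the following summary (the "Drop" column is simply the host-to-target difference):

Model                Host F1 (float)  Target F1 (quantized)  Drop
ResNet50             0.94865          0.93290                0.01575
ResNet101            0.97112          0.93917                0.03195
ResNet152            0.96487          0.93366                0.03121
InceptionV4          0.92699          0.88915                0.03784
Inception ResNet V1  0.97368          0.93344                0.04024
Inception ResNet V2  0.97538          0.94289                0.03249

Inception ResNet V2 scores best both before (0.97538) and after (0.94289) quantization, while Inception ResNet V1 shows the largest quantization drop (0.04024).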

Useful links