ML-TN-003 — AI at the edge: visual inspection of assembled PCBs for defect detection — Part 1

From DAVE Developer's Wiki
Applies to: Machine Learning

History

Version Date Notes
1.0.0 May 2021 First public release

Introduction

This series of Technical Notes (ML-TN-003) illustrates a feasibility study regarding a common problem in the manufacturing realm that Machine Learning (ML) algorithms can supposedly address effectively: defect detection by automatic visual inspection. More specifically, this study deals with the inspection of assembled Printed Circuit Boards (PCBs). The ultimate goal is to determine whether it is possible to design innovative, ML-based machines that outperform the traditional devices employed for this task today. This is a prime example of AI at the edge, as the application requirements include the following:

  • Data — images in this case — must be processed where they originate.
  • Processing latency should be minimized in order to increase the efficiency of the manufacturing line.

Currently, the problem of latency is driving many companies to move from the cloud to the edge, along with the fact that in several use cases the cost of a cloud-based solution is not affordable. This has led to the birth of a new computational paradigm called "Edge AI", which combines the efficiency, speed, scalability, and reduced costs of edge computing with the powerful capabilities offered by Artificial Intelligence and Machine Learning models. "Edge AI", "Intelligence on the edge", or "Edge Machine Learning" means that data is processed locally — i.e. near its source — by algorithms running on a hardware device, instead of by algorithms located in the cloud. This not only enables real-time operation, but also significantly reduces the power consumption and the security vulnerabilities associated with processing data in the cloud.

While moving from the cloud to the edge is a vital step in solving resource-constraint issues, many Machine Learning models still use too much computing power and memory to fit on the small microprocessors available on the market. Many are approaching this challenge by creating more efficient software, algorithms, and hardware, or by combining these components in specialized ways. To this end, a new generation of purpose-built accelerators is emerging as chip manufacturers work to speed up and optimize AI and Machine Learning workloads, from training to inference. Because these accelerators are faster, cheaper, more power-efficient, and scalable, they promise to boost edge devices to a new level of performance. In this work, a modern system-on-chip (SoC) embedding a configurable hardware accelerator of this sort was analyzed with a view to using it as a core building block of such devices. Its applicability was also studied in a real-world scenario characterized by issues that are common to a large class of problems in the industrial realm. The chosen SoC is one of the devices considered in the ML-TN-001 series of articles, where different embedded platforms suited for building "Edge AI" solutions are compared in terms of inferencing capabilities/features, development tools, etc. In principle, such platforms can drive a variety of applications in the industrial world and in other fields as well.

Articles in this series

  • Part 1: this document.
  • Part 2: classification of surface-mounted components on printed circuit boards.
  • Part 3: the issue of data scarcity.

Useful links