==Introduction==
Currently, the problem of latency is driving many companies to move from the cloud to the edge, along with the fact that in several use cases the costs of a cloud-based solution are not affordable. This has led to the birth of a new computational paradigm called "Edge AI", which combines the efficiency, speed, scalability, and reduced costs of edge computing with the powerful advantages offered by Artificial Intelligence and Machine Learning models. "Edge AI", "Intelligence on the edge", or "Edge Machine Learning" means that data is processed locally, i.e. near its source, by algorithms running on a hardware device, instead of by algorithms located in the cloud. This not only enables real-time operation, but also significantly reduces the power consumption and the security vulnerabilities associated with processing data in the cloud.
While moving from the cloud to the edge is a vital step in solving resource constraint issues, many Machine Learning models still require too much computing power and memory to fit on the small microprocessors available on the market. Many are approaching this challenge by creating more efficient software, algorithms, and hardware, or by combining these components in specialized ways. To this end, a new generation of purpose-built accelerators is emerging as chip manufacturers work to speed up and optimize the workloads involved in AI and Machine Learning projects, from training to inference. Because these accelerators are faster, cheaper, more power-efficient, and scalable, they promise to boost edge devices to a new level of performance. In this work, a modern system-on-chip (SoC) embedding a configurable hardware accelerator of this sort was analyzed with a view to using it as a core building block of such devices. Its applicability was also studied in a real-world scenario characterized by issues that are common to a large class of problems in the industrial realm. The chosen SoC is one of the devices considered in the [[ML-TN-001_-_AI_at_the_edge:_comparison_of_different_embedded_platforms_-_Part_1|ML-TN-001 series]] of articles, where different embedded platforms suited for building [https://towardsdatascience.com/will-edge-ai-be-the-ml-architecture-of-the-future-42663d3cbb5 "Edge AI"] solutions are compared in terms of inferencing capabilities and features, development tools, etc. In principle, such platforms can drive many different applications in the industrial world and in other fields as well.
==Articles in this series==