=Introduction=
This Technical Note (TN) describes a demo application that showcases the combination of an inference algorithm, namely [https://en.wikipedia.org/wiki/Keyword_spotting keyword spotting], and an asymmetric multiprocessing (AMP) scheme on a heterogeneous architecture. This use case can serve as the basis for more complex applications that have to carry out the following tasks:
* Real-time acquisition of data from sensors
* Execution of a computationally expensive inference algorithm on the collected data.
This scenario is quite common in the realm of "AI at the edge" but, in general, cannot be addressed with a microcontroller-based solution, because running the inference algorithm would take too long. On the other hand, a classic embedded processor running a complex operating system such as Linux might not be suitable either, because it is unable to handle tasks with tight real-time constraints properly.
=Testbed=
The testbed is illustrated in the following picture. Basically, it consists of an [[ORCA_SBC|Orca Single Board Computer]]. TBD
=Implementation=
As stated previously, the inference algorithm is keyword spotting. The data being processed are thus audio samples retrieved by the Cortex M7 and sent to the Cortex A53 complex.
From a software perspective, we identify two different domains (see also the following picture):
* D1: the Cortex A53 complex running Linux
* D2: the Cortex M7 core running the real-time firmware.
The reserved SDRAM buffers used to store the audio samples are protected at Linux device tree level to prevent D1 domain from accessing them directly. It is worth remembering that it is also possible to make use of a hardware-based, stronger protection mechanism by exploiting the i.MX8M Plus Resource Domain Controller (RDC).
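Such a reservation is typically expressed with a <code>reserved-memory</code> node in the Linux device tree. The sketch below is purely illustrative: the node name, base address, and size are assumptions and must match the memory layout actually used by the Cortex M7 firmware.

```dts
reserved-memory {
    #address-cells = <2>;
    #size-cells = <2>;
    ranges;

    /* Hypothetical 1 MiB region holding the audio sample buffers.
     * no-map keeps the Linux kernel (D1) from mapping the region,
     * so only the M7 firmware accesses it directly. */
    audio_reserved: audio-buffer@80000000 {
        compatible = "shared-dma-pool";
        no-map;
        reg = <0x0 0x80000000 0x0 0x100000>;
    };
};
```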
The inference application (IAPP) running in D1 uses a simple sysfs-based interface to interact with the firmware running in D2. The whole operation works like this:
* IAPP triggers the "acquisition" of audio samples by writing to a specific sysfs pseudo file
* The Cortex M7 firmware (MFW)
** adds some noise to the samples
** stores the resulting buffer in the shared memory
** signals IAPP that the buffer is ready
* IAPP runs the inference to spot the spoken word.
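The D1 side of this handshake can be sketched with two small helpers for accessing sysfs pseudo files. The attribute paths are hypothetical (the TN does not name them; they depend on the kernel driver exposing the interface):

```c
#include <stdio.h>
#include <string.h>

/* Write a value to a sysfs pseudo file (e.g. the acquisition trigger).
 * Returns 0 on success, -1 on error. */
int sysfs_write(const char *path, const char *value)
{
    FILE *f = fopen(path, "w");
    if (!f)
        return -1;
    int rc = (fputs(value, f) >= 0) ? 0 : -1;
    return (fclose(f) == 0) ? rc : -1;
}

/* Read a short value back (e.g. a "buffer ready" flag). */
int sysfs_read(const char *path, char *buf, size_t len)
{
    FILE *f = fopen(path, "r");
    if (!f)
        return -1;
    int rc = fgets(buf, (int)len, f) ? 0 : -1;
    fclose(f);
    return rc;
}

/*
 * IAPP main loop (paths are placeholders):
 *   sysfs_write("/sys/.../trigger", "1");  -- start an acquisition
 *   poll "/sys/.../ready" until it reads "1", then run the
 *   inference on the samples in the shared SDRAM buffer.
 */
```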
The firmware running in D2 is implemented as a [https://www.nxp.com/design/software/development-software/mcuxpresso-software-and-tools-/mcuxpresso-software-development-kit-sdk:MCUXpresso-SDK FreeRTOS] application. The use of a real-time operating system, combined with the intrinsic characteristics of the Cortex M7 in terms of [https://community.arm.com/arm-community-blogs/b/architectures-and-processors-blog/posts/beginner-guide-on-interrupt-latency-and-interrupt-latency-of-the-arm-cortex-m-processors interrupt latency], makes this core extremely suitable for applications with tight real-time constraints. Nevertheless, nothing prevents choosing a bare-metal coding style instead.
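The noise-injection step performed by MFW can be sketched as a plain C function. In the actual firmware it would run inside a FreeRTOS task after each acquisition; the LCG constants and the saturation policy below are illustrative assumptions, not the TN's implementation:

```c
#include <stdint.h>
#include <stddef.h>

/* Add bounded pseudo-random noise to a buffer of 16-bit PCM samples.
 * A simple linear congruential generator (LCG) supplies the noise. */
void add_noise(int16_t *samples, size_t n, uint32_t seed, int16_t amplitude)
{
    uint32_t state = seed;

    for (size_t i = 0; i < n; i++) {
        state = state * 1664525u + 1013904223u;  /* LCG step */
        /* Map the high bits to the range [-amplitude, +amplitude] */
        int32_t noise = (int32_t)(state >> 16) % (2 * amplitude + 1) - amplitude;
        int32_t v = (int32_t)samples[i] + noise;

        /* Saturate to the int16_t range instead of wrapping around */
        if (v > INT16_MAX) v = INT16_MAX;
        if (v < INT16_MIN) v = INT16_MIN;
        samples[i] = (int16_t)v;
    }
}
```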
At system startup:
* The RPMsg link between D1 and D2 is established
* IAPP starts.
Please note that the Cortex M7 firmware does not necessarily have to be started before the Linux kernel; it is also possible to start MFW from Linux user space.
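Starting MFW from user space is typically done through the kernel remoteproc sysfs interface. The firmware name and the remoteproc instance index below are assumptions; they depend on the board's device tree:

```shell
#!/bin/sh
# Sketch: start the Cortex M7 firmware (MFW) from Linux user space via
# remoteproc. The directories are parameters so the defaults can be
# overridden; on a real target the defaults would normally be used.
start_mfw() {
    fw_path="$1"                                    # ELF built for the M7
    rproc="${2:-/sys/class/remoteproc/remoteproc0}"
    fw_dir="${3:-/lib/firmware}"

    cp "$fw_path" "$fw_dir/" || return 1
    basename "$fw_path" > "$rproc/firmware"         # select the image
    echo start > "$rproc/state"                     # load and boot the core
}

# Typical invocation on the target (firmware name is hypothetical):
#   start_mfw /root/kws_m7.elf
```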
=Testing=
TBD