== Implementation ==
D1 and D2 communicate through the [https://en.wikipedia.org/wiki/RPMsg RPMsg protocol]. On the Cortex-M7 side, the [https://github.com/NXPmicro/rpmsg-lite RPMsg Lite] implementation by NXP is used. The interface between D1 and D2 also comprises a shared memory buffer, which is used to exchange audio samples, while synchronization messages are exchanged over RPMsg channels.
For the sake of simplicity, the audio samples are not captured by the Cortex-M7 with a real microphone: they are retrieved from prefilled memory buffers that are inaccessible to the Cortex-A53 cores. For the purposes of this discussion, this simplification is negligible, as the communication mechanisms between the domains are not affected at all. Likewise, the inference algorithm could be executed by the powerful Cortex-M7 core itself. Again, the aim of this TN is to show an architectural solution that can be tailored to address more challenging, real-world use cases. The SDRAM buffers used to store the audio samples are reserved at the Linux device tree level. It is worth remembering that, to restrict accessibility, it is also possible to make use of a stronger, hardware-based mechanism thanks to the i.MX8M Plus Resource Domain Controller (RDC).
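A device-tree-level reservation of this kind is typically expressed with a <code>reserved-memory</code> node. The following fragment is purely illustrative: node names, addresses, and sizes are assumptions, not the values used by the actual board's device tree.

```dts
/* Illustrative only: labels, addresses, and sizes are NOT the
 * actual values used by the board's device tree. */
reserved-memory {
    #address-cells = <2>;
    #size-cells = <2>;
    ranges;

    /* Prefilled audio sample buffers, kept out of the Linux
     * allocator so that only the Cortex-M7 firmware reads them. */
    audio_samples: audio-samples@80000000 {
        reg = <0 0x80000000 0 0x100000>;
        no-map;
    };

    /* Shared area used to exchange audio buffers between D1 and D2. */
    audio_shm: audio-shm@80100000 {
        reg = <0 0x80100000 0 0x40000>;
        no-map;
    };
};
```

The <code>no-map</code> property keeps the regions out of the kernel's linear mapping; the RDC can then be layered on top for hardware-enforced isolation.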
The inference application (IAPP) running in D1 uses a simple sysfs-based interface to interact with the firmware running in D2. Basically, it works like this:
* the firmware running in D2:
** retrieves the audio samples from the prefilled buffers
** stores the resulting buffer in the shared memory
** signals IAPP that the buffer is ready
* IAPP runs the inference to spot the pronounced word.
== Additional notes regarding the inference application ==