== Analysis of the logs and conclusions ==
The following chart shows the e.MMC accesses over time during the execution of the workload along with other measurements such as read/write throughput.
[[File:MISC-TN-017-eMMC-chart1.png|center|thumb|600x600px|e.MMC accesses over time]]
The latency of the operations can also be derived from the logs.
[[File:MISC-TN-017-eMMC-chart3-latency.png|center|thumb|600x600px|Latency]]
Another extremely useful graphical depiction is the chunk size distribution. For instance, this information is often used to understand how efficient the user application is when it comes to optimizing write operations to maximize the e.MMC lifetime. The pie chart on the left refers to the read operations, while the one on the right refers to the write operations.
[[File:MISC-TN-017-eMMC-chart2-chunk-size.png|center|thumb|600x600px|Chunk size distribution]]
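As an illustration of how such a distribution can be derived from a raw block trace, the following sketch aggregates issued requests by operation and chunk size. It is purely illustrative: the three trace lines in the here-document are invented sample data in the default <code>blkparse</code> text format, and the device name in the comment is an assumption.

```shell
#!/bin/sh
# Illustrative only: the trace lines below are invented sample data in the
# default blkparse text format. A real trace could be captured with e.g.:
#   blktrace -d /dev/mmcblk0 -o - | blkparse -i -
awk '
$6 == "D" {                    # "D": request issued to the driver
    size = $10 * 512           # field 10 = request size in 512-byte sectors
    op = ($7 ~ /R/) ? "read" : "write"
    count[op " " size]++
}
END { for (k in count) print k " bytes: " count[k] " request(s)" }
' <<'EOF' | sort
179,0 0 1 0.000000000 1234 D W 2048 + 8 [kworker/0:1]
179,0 0 2 0.000100000 1234 D W 4096 + 1024 [kworker/0:1]
179,0 0 3 0.000200000 5678 D R 8192 + 8 [stressapptest]
EOF
```

Note how the second sample line, a 1024-sector request, shows up as a single 512-kByte write: exactly the kind of merged access discussed below.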
To interpret these results, one needs to take into account how the workload was implemented. In the example under discussion, the workload essentially uses two applications: <code>[https://man7.org/linux/man-pages/man1/dd.1.html dd]</code> and <code>stressapptest</code>. <code>dd</code> was configured to use 4-kByte data chunks (<code>bs=4k</code>), while <code>stressapptest</code> uses 512-byte chunks because the <code>--write-block-size</code> parameter was not specified (for more details, please refer to the [https://github.com/stressapptest/stressapptest/blob/e6c56d20c0fd16b07130d6e628d0dd6dcf1fe162/src/worker.cc#L2615 source code]). One would therefore expect the majority of accesses to be 512 bytes and 4 kByte. The charts clearly show that this is not the case: most of the accesses are 512 kByte instead. This is a blatant example of how the algorithms of the file system and the kernel block driver can alter, for optimization purposes, the accesses issued at the application level.
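For reference, a workload of this kind can be approximated with a short script. This is a minimal sketch: file paths, counts, and durations are illustrative assumptions; only <code>bs=4k</code> for <code>dd</code> and the default 512-byte write block size of <code>stressapptest</code> come from the description above.

```shell
#!/bin/sh
# Approximate reproduction of the described workload.
# Paths, counts, and durations are illustrative assumptions; only bs=4k
# (dd) and the default 512-byte write block size (stressapptest) are
# taken from the text.

# dd issuing 4-kByte writes, flushed to the medium at the end.
dd if=/dev/zero of=/tmp/emmc-test.bin bs=4k count=1024 conv=fsync

# stressapptest disk thread using its default 512-byte write block size
# (--write-block-size deliberately not specified), if the tool is installed.
if command -v stressapptest >/dev/null 2>&1; then
    stressapptest -s 20 -f /tmp/stressapptest.io
fi
```

Even though both applications issue small writes, the page cache and the block layer are free to coalesce them before they reach the e.MMC, which is what the chunk-size charts reveal.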