MISC-TN-015: Proof-of-Concept of an industrial, high-frame-rate video recording/streaming system
Revision as of 15:03, 10 August 2020

Info Box
Applies to: Machine Learning


History

Version Date Notes
1.0.0 August 2020 First public release

Introduction

This Technical Note (TN for short) illustrates a Proof-of-Concept (PoC) that DAVE Embedded Systems engineered for a customer operating in the industrial automation market. The goal was to build a prototype of a high-frame-rate video recording/streaming system. In a typical scenario, illustrated in the following picture, this device would be used in fast automatic manufacturing lines for two purposes:

  • remote monitoring
  • detailed off-line "post-mortem" failure analysis.


Fig. 1: Typical usage scenario


In essence, the system consists of a high-frame-rate image sensor (*) interfaced to a Linux-based embedded platform also denoted as PP in the rest of the document. The sensor frames a specific area of the line and sends a constant-rate flow of frames to PP for further processing, as detailed in the following sections.


(*) Resolution and frame rate of this stream have to be carefully determined as a function of the characteristics of the scene to shoot, first and foremost the speed of the moving objects framed by the sensor and its lens. In the case under discussion, the customer specified a resolution of 1280x720, a frame rate of 300 fps, and the use of a global shutter.

Functionalities

The streaming capability is used to monitor the production line remotely. Under normal operation, this is enough for the human operators to get an overview of the line while it is working. For this purpose, a simple low-frame-rate video stream (around 25 fps), which can be viewed on a remote device, does the job.

The most interesting functionality, however, is the recording capability associated with alarm events. As shown in the previous image, the production line is governed by a Programmable Logic Controller (PLC), which is interfaced to several actuators and sensors. Of course, the line may be subject to different kinds of faults. The most severe ones (for instance, a major mechanical failure) can lead to the automatic stop of the line. Thanks to the aforementioned sensors, the PLC is notified in real time of such faulty conditions. In these situations, it triggers an alarm signal directed to the video recording system. Whenever an alarm is detected, the recording system saves high-frame-rate footage on a persistent storage device, showing what happened right before and right after the alarm event. Automation engineers and maintenance personnel can afterwards leverage this fine-grained sequence of frames to analyze in detail the scene around the occurrence of the alarm event, searching for its root cause (this process is also referred to as post-mortem analysis).

Software implementation

Figure 1 also shows a simplified block diagram of the application software architecture that was developed to implement this solution. The application is a multi-threaded program, which runs on the processing platform. The high-level business logic is coded in a finite state machine (FSM), which interacts with the threads. Each thread takes care of a particular task.

During normal operation, the high-frame-rate stream generated by the image sensor is acquired by thread T1. This stream is indicated by the red flow in the previous picture. T1 also stores the frames coming from the sensor into an alarm buffer and passes them to thread T4 after a frame-rate down-conversion (the low-frame-rate stream is denoted in green). In parallel, T4 creates a compressed video stream to be transmitted over the local network.

When an alarm is detected by thread T3, thread T2—which is usually idle—is enabled. Once the alarm buffer is filled, this thread stores it persistently on a solid-state drive (SSD).

The application also integrates a web interface (thread T5) that allows users to supervise and control the recording/streaming system. For example, it can be used to enable/disable the alarm recording functionality and to read statistical information.
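The state names appearing in the application log later in this document (WAITING_FOR_ALARM, FILLING_ALARM_BUFFER, SAVING_ALARM_BUFFER, STREAM_ONLY) suggest a small finite state machine along these lines. The sketch below is purely illustrative and uses hypothetical event names; it is not the actual implementation.

```python
# Illustrative FSM sketch (assumption): state names are taken from the
# application log shown later in this document; event names are made up.
from enum import Enum, auto

class State(Enum):
    STREAM_ONLY = auto()           # live streaming only, alarm recording off
    WAITING_FOR_ALARM = auto()     # streaming while the pre-alarm buffer is armed
    FILLING_ALARM_BUFFER = auto()  # alarm detected, capturing the post-alarm window
    SAVING_ALARM_BUFFER = auto()   # T2 flushing the buffer to the SSD

# (state, event) -> next state; events not listed leave the state unchanged,
# which is how an alarm raised during SAVING_ALARM_BUFFER gets ignored.
_TRANSITIONS = {
    (State.STREAM_ONLY, "recording_enabled"): State.WAITING_FOR_ALARM,
    (State.WAITING_FOR_ALARM, "alarm_asserted"): State.FILLING_ALARM_BUFFER,
    (State.FILLING_ALARM_BUFFER, "buffer_full"): State.SAVING_ALARM_BUFFER,
    (State.SAVING_ALARM_BUFFER, "save_done"): State.STREAM_ONLY,
}

def next_state(state: State, event: str) -> State:
    return _TRANSITIONS.get((state, event), state)

print(next_state(State.WAITING_FOR_ALARM, "alarm_asserted").name)
# FILLING_ALARM_BUFFER
```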

Sizing the alarm buffer

The alarm buffer's size is related to the size of the time window surrounding the alarm event, as depicted in the following image.


Fig. 2: Time window surrounding an alarm event


Let t0 be the time associated with the occurrence of an alarm. t0-tB is the time window before the alarm to be recorded. tA-t0 is the time window after the alarm to be recorded. Consequently, tA-tB is the size of the entire window to be recorded. Specifically, for the PoC described here:

  • t0-tB = 8s
  • tA-t0 = 3s
  • tA-tB = 11s.

Once the size of the time window is known, it is straightforward to determine how much RAM is required for the buffer. In the case under discussion, the size of a frame is approximately 1280x720x8bpp=921600 bytes. One second of recording is thus 921600x300≈264 MByte. Therefore, the buffer has to be at least 264x11=2904 MByte to contain all the required frames.
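The sizing arithmetic above can be sketched as a minimal Python check (note that the 2904 MByte figure in the text comes from multiplying the already rounded 264 MByte/s by 11; exact arithmetic yields roughly 2900 MiB):

```python
# Alarm-buffer sizing from the figures given above.
WIDTH, HEIGHT, BPP = 1280, 720, 8   # resolution and bits per pixel
FPS = 300                           # sensor frame rate
WINDOW_S = 11                       # tA - tB, seconds to record

frame_bytes = WIDTH * HEIGHT * BPP // 8      # 921600 bytes per frame
bytes_per_second = frame_bytes * FPS         # one second of recording
buffer_bytes = bytes_per_second * WINDOW_S   # whole alarm window

print(frame_bytes)                           # 921600
print(round(bytes_per_second / 2**20))       # 264 (MiB per second)
print(round(buffer_bytes / 2**20))           # 2900 (MiB for 11 s)
```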

Alarm recordings

The processing platform is equipped with a dedicated SSD that is used to store alarm buffers only. The operating system and the application software are stored on a different flash memory instead. Of course, the size of the SSD should be determined depending on the maximum number of alarms to be stored at the same time. The directory containing the files associated with the alarm events is also shared via the Samba (SMB) protocol. This allows these files to be easily accessed from Windows PCs connected to the same local network as well.

For each alarm, a separate subdirectory is created. In such a subdirectory, the following files are stored:

  • a BMP image for each frame in the alarm buffer
  • a video file generated from the individual frames.

It is worth mentioning the naming scheme used for the BMP files. From the programming perspective, each frame retrieved from the image sensor is an instance of a class (the application software makes use of the object-oriented programming (OOP) model, in fact). Interestingly, these instances include a timestamp in addition to the raw pixel data. This timestamp, which has a 20ns resolution, is based on a free-running counter integrated into the image sensor, which is clocked by a local clock. Every time a new frame is captured, it is associated with the current value of the free-running counter. That being the case, timestamps are not that useful if taken separately, because they are expressed as an absolute value that is not related to any usable reference clock. They can be extremely valuable, however, if used as relative quantities, for instance when it comes to determining the elapsed time between two frames. For example, let's assume that we want to measure the time between two frames (ΔT), say F1 and F2. Let TS1 and TS2 be the associated timestamps, respectively. To calculate ΔT expressed in ns, we just need to compute (TS2-TS1)x20. An example of utilization of timestamps is described later in this document.
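The ΔT computation just described can be expressed as a one-line helper. This is a sketch; the counter values in the usage example are hypothetical, chosen to match the ~3.36 ms frame period visible in the file listing below.

```python
TICK_NS = 20  # resolution of the sensor's free-running counter, in ns

def elapsed_ns(ts1: int, ts2: int) -> int:
    """Elapsed time between two frames, given their raw counter values."""
    return (ts2 - ts1) * TICK_NS

# Hypothetical counter values 167776 ticks apart, i.e. one frame period
# at the sensor's actual acquisition rate.
print(elapsed_ns(1_000_000, 1_167_776))  # 3355520 ns, i.e. ~3.36 ms
```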

When saving the frames as BMP files, the timestamp value would be lost though. To avoid losing this precious information, file names are formatted such that timestamps are preserved too. See for example the following list, which refers to an alarm triggered on August 6th, 2020 at 10:30:08 Central European Summer Time (CEST):

sysadmin@hfrcpoc1-0:/mnt/alarms/2020-08-06_10.30.08_CEST$ ll
total 9012572
drwxrwxr-x  2 sysadmin sysadmin  139264 ago  6 10:38 ./
drwxr-xr-x 12 sysadmin sysadmin    4096 ago  6 10:30 ../
-rw-rw-r--  1 sysadmin sysadmin 2764854 ago  6 10:30 000000_-8.133780480.bmp
-rw-rw-r--  1 sysadmin sysadmin 2764854 ago  6 10:30 000001_-8.130424960.bmp
-rw-rw-r--  1 sysadmin sysadmin 2764854 ago  6 10:30 000002_-8.127069440.bmp
-rw-rw-r--  1 sysadmin sysadmin 2764854 ago  6 10:30 000003_-8.123713920.bmp
-rw-rw-r--  1 sysadmin sysadmin 2764854 ago  6 10:30 000004_-8.120358400.bmp
-rw-rw-r--  1 sysadmin sysadmin 2764854 ago  6 10:30 000005_-8.117002880.bmp
-rw-rw-r--  1 sysadmin sysadmin 2764854 ago  6 10:30 000006_-8.113647360.bmp
-rw-rw-r--  1 sysadmin sysadmin 2764854 ago  6 10:30 000007_-8.110291840.bmp
…
-rw-rw-r--  1 sysadmin sysadmin 2764854 ago  6 10:30 002420_-0.013422080.bmp
-rw-rw-r--  1 sysadmin sysadmin 2764854 ago  6 10:30 002421_-0.010066560.bmp
-rw-rw-r--  1 sysadmin sysadmin 2764854 ago  6 10:30 002422_-0.006711040.bmp
-rw-rw-r--  1 sysadmin sysadmin 2764854 ago  6 10:30 002423_-0.003355520.bmp
-rw-rw-r--  1 sysadmin sysadmin 2764854 ago  6 10:30 002424_+0.000000000.bmp
-rw-rw-r--  1 sysadmin sysadmin 2764854 ago  6 10:30 002425_+0.003355520.bmp
-rw-rw-r--  1 sysadmin sysadmin 2764854 ago  6 10:30 002426_+0.006711040.bmp
-rw-rw-r--  1 sysadmin sysadmin 2764854 ago  6 10:30 002427_+0.010066560.bmp
-rw-rw-r--  1 sysadmin sysadmin 2764854 ago  6 10:30 002428_+0.013422080.bmp
…
-rw-rw-r--  1 sysadmin sysadmin 2764854 ago  6 10:31 003328_+3.033390080.bmp
-rw-rw-r--  1 sysadmin sysadmin 2764854 ago  6 10:31 003329_+3.036745600.bmp
-rw-rw-r--  1 sysadmin sysadmin 2764854 ago  6 10:31 003330_+3.040101120.bmp
-rw-rw-r--  1 sysadmin sysadmin 2764854 ago  6 10:31 003331_+3.043456640.bmp
-rw-rw-r--  1 sysadmin sysadmin 2764854 ago  6 10:31 003332_+3.046812160.bmp

The name of the subdirectory (2020-08-06_10.30.08_CEST) is self-explanatory. The file name format is a little trickier. It looks like this:

<progressive counter>_<time offset relative to alarm frame>.bmp

The first part is a progressive counter, starting from 000000. In other words, the image whose file name is something like 000000_x.bmp refers to the frame captured at the beginning of the recording window (tB). At the other end, the image whose file name starts with the highest counter refers to the frame captured at the end of the recording window (tA). This scheme is convenient for automatically processing the frames because it allows them to be ordered very easily.

The second part of the file name is the relative time offset (expressed in seconds) with respect to the "alarm frame." The alarm frame, in turn, is the first frame captured after the detection of an alarm signal. Consequently, the alarm frame is always named n_+0.000000000.bmp. In other words, this frame is associated with t0. In the example shown above, the alarm frame's name is 002424_+0.000000000.bmp. This rule makes it straightforward to determine how close a frame is to the alarm event. For instance, see the following image, which refers to a screenshot captured on the processing platform itself.


Fig. 3: Example of frame stored in an alarm recording


Just by looking at the file name, one can understand that the frame shown was captured about 1.58s before the alarm event. Of course, the same image can be viewed on a remote Windows PC too, by accessing the shared directory via SMB:


Fig. 4: Frame stored in an alarm recording and visualized on a remote Windows PC


The subdirectory used for this example contains around 3300 BMP files, as expected, considering that the alarm time window is 11 seconds and the frame rate is 300fps:

sysadmin@hfrcpoc1-0:/mnt/alarms/2020-08-06_10.30.08_CEST$ ls *.bmp | wc -l
3333
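The naming scheme makes this count easy to cross-check programmatically. The following sketch uses the first and last file names from the listing above; the parser is a hypothetical helper written for illustration, not part of the actual application.

```python
import re

# Naming scheme: <6-digit counter>_<signed offset in seconds vs. alarm frame>.bmp
_NAME_RE = re.compile(r"^(\d{6})_([+-]\d+\.\d{9})\.bmp$")

def parse_frame_name(name: str):
    """Return (counter, offset_seconds) for an alarm-recording file name."""
    m = _NAME_RE.match(name)
    if m is None:
        raise ValueError(f"unexpected file name: {name}")
    return int(m.group(1)), float(m.group(2))

first_idx, first_off = parse_frame_name("000000_-8.133780480.bmp")
last_idx, last_off = parse_frame_name("003332_+3.046812160.bmp")

# Consecutive timestamps in the listing differ by 3.35552 ms (~298 fps).
FRAME_PERIOD = 0.00335552
n_frames = round((last_off - first_off) / FRAME_PERIOD) + 1
print(n_frames)  # 3333, matching the wc -l output above
```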


As stated previously, a video file is also generated. This can be convenient for users when they wish to analyze the evolution of the framed scene from a dynamic perspective.

Real-timeness, frame loss, and alarm detection latency

One of the major concerns faced during the conception of the PoC was related to its real-timeness. In principle, two real-time requirements have to be met to achieve the desired functionalities:

  1. [REQ1] Regarding the acquisition of frames from the image sensor, such a PoC should guarantee that all frames are captured (i.e. no frame loss), no matter what processing the PP is performing simultaneously (encoding the live stream, serving a web page, etc.). Taking into account the frame rate and the fact that the sensor has no buffering capabilities, this implies that the processing platform has about 3ms to complete the acquisition of each frame.
  2. [REQ2] Maximum alarm detection latency has to be much smaller than the alarm buffer size, ideally zero.
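The per-frame budget mentioned in REQ1 follows directly from the frame rate, as this trivial check shows:

```python
# At 300 fps with no buffering on the sensor side, each frame must be
# fully acquired within one frame period.
FPS = 300
budget_ms = 1000 / FPS
print(round(budget_ms, 2))  # 3.33 ms per frame
```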

The first requirement is obvious, but it can be relaxed a little bit, in the sense that it has to be satisfied while the system is waiting for an alarm to happen and until the alarm buffer is filled. After that moment, a frame loss can be tolerated because it does not compromise system functionalities. In other words, following the detection of a new alarm, say A1, it is acceptable that the system ignores possible new alarms until the buffer associated with A1 is stored on the SSD. Once storing is completed, the system is ready to detect and process a new alarm again.

The second requirement is a little more subtle and will be explained with the help of Fig. 5.


Fig. 5: Alarm detection latency


The top part shows the ideal recording buffer. As described in section Sizing the alarm buffer, the recording is supposed to start 8 seconds (tB) before the occurrence of the alarm event (t0) and end 3 seconds afterwards (tA). As a matter of fact, the recording system takes some time to recognize that the alarm signal is asserted—we refer to this time as alarm detection latency. The fact that this latency is greater than 0 leads to the actual situation illustrated in the bottom part of the picture. Let t0' be the actual instant when the PP detects the assertion of the alarm signal. Consequently, the beginning and the end of the recording buffer (tB' and tA' respectively) are delayed as well. Of course, alarm detection latency should be minimized in order to get the actual recording buffer as close as possible to the ideal one. If the latency is too large, in fact, the recording could miss essential events. For instance, let's assume that the alarm was triggered by a failure occurring at time tF as shown in the picture. Because of the alarm detection latency, this instant is before the start of the actual recording buffer (tF precedes tB') and thus it is not available for the post-mortem analysis.

In light of these requirements, it is clear that the processing platform should be engineered as a real-time system. Despite these considerations, however, a platform not specifically designed with real-time capabilities was utilized. Rather, it was decided to use a platform providing considerable headroom in terms of processing power and to verify experimentally the fulfillment of the requirements. This approach made it possible to meet the budget available for the PoC by limiting implementation effort and development time. Basically, two different approaches were utilized for experimental verification.

  • With regard to frame loss, a mechanism able to detect this anomaly was implemented at the application software level. Should a frame loss occur, this event is logged and reported as shown by the example log below (see "Abnormal frame rate" messages) (*).
 [2020-08-10 10:48:53,674][INFO][main_logger.main] SAVING_ALARM_BUFFER -> STREAM_ONLY
 [2020-08-10 10:48:53,675][INFO][main_logger.main] STREAM_ONLY -> WAITING_FOR_ALARM
 [2020-08-10 10:49:22,570][DEBUG][main_logger.main] Alarm signal asserted
 [2020-08-10 10:49:22,571][INFO][main_logger.main] WAITING_FOR_ALARM -> FILLING_ALARM_BUFFER
 [2020-08-10 10:49:23,571][DEBUG][main_logger.main] Alarm signal deasserted
 [2020-08-10 10:49:25,631][INFO][main_logger.alarmbuffersaverthread] Starting AlarmBufferSaverThread ...
 [2020-08-10 10:49:25,631][INFO][main_logger.main] FILLING_ALARM_BUFFER -> SAVING_ALARM_BUFFER
 [2020-08-10 10:49:25,678][ERROR][main_logger.main] Abnormal frame rate (42.6 fps)
 [2020-08-10 10:49:25,716][INFO][main_logger.alarmbuffersaverthread] Deleting oldest alarm ...
 [2020-08-10 10:49:25,778][ERROR][main_logger.main] Abnormal frame rate (59.6 fps)
 [2020-08-10 10:49:25,839][ERROR][main_logger.main] Abnormal frame rate (74.5 fps)
 [2020-08-10 10:49:25,938][INFO][main_logger.alarmbuffersaverthread] Saving to /mnt/alarms/2020-08-10_10.49.22_CEST
 [2020-08-10 10:50:45,750][ERROR][main_logger.main] Stopping to stream frames ...
 [2020-08-10 10:50:45,784][ERROR][main_logger.main] It is enough data for buffer  ...
 [2020-08-10 10:50:48,822][ERROR][main_logger.main] Stopping to stream frames ...
 [2020-08-10 10:50:48,855][ERROR][main_logger.main] It is enough data for buffer  ...
 [2020-08-10 10:51:02,582][DEBUG][main_logger.main] Alarm signal asserted
 [2020-08-10 10:51:03,581][DEBUG][main_logger.main] Alarm signal deasserted
 [2020-08-10 10:51:59,825][INFO][main_logger.alarmbuffersaverthread] Ending AlarmBufferSaverThread ...
 [2020-08-10 10:51:59,844][INFO][main_logger.main] SAVING_ALARM_BUFFER -> STREAM_ONLY
 [2020-08-10 10:51:59,844][INFO][main_logger.main] STREAM_ONLY -> WAITING_FOR_ALARM
  • Concerning alarm detection latency, a tailor-made testbed was built as illustrated in the next section.

(*) Incidentally, this mechanism exploits the frame timestamps described previously.

Alarm detection latency verification

The following diagram shows the testbed used for this verification.


Fig. 6: Testbed used to verify the alarm detection latency


In essence, the idea consists of shooting a 5-digit 7-segment-display timer with 1ms resolution. The timer cycles repeatedly from 00.000s to 99.999s and also provides a digital signal that is used to emulate the alarm signal. Whenever the timer reaches 50.000s, this signal is raised for 1 second. In other words, the timer triggers an emulated alarm that is synchronized with the visualization of the 50th second on the displays. Referring to Fig. 2, the instants tB, t0, and tA can be easily associated with the time visualized by the timer as follows:

  • tB = 50.000-8.000 = 42.000s
  • t0 = 50.000s
  • tA = 50.000+3.000 = 53.000s

Once the recording is saved on the SSD, the verification consists of checking the first frame, the last frame, and the alarm frame (n_+0.000000000.bmp). They should show respectively:

  • The timer displaying 42.000
  • The timer displaying 53.000
  • The timer displaying 50.000.

Let's analyze one of the recording buffers that were captured this way. It refers to the alarm recorded on August 8, 2020 at 12.09.51 CEST. The following images show the frames associated with tB', tA', and t0', respectively.


Fig. 7: Frame associated with tB'


Fig. 8: Frame associated with tA'


Fig. 9: Frame associated with t0'


First of all, we notice that the timer's last digit always appears as "8". This is due to the fact that 7-segment displays are refreshed every millisecond, but the acquisition rate of the image sensor is slower (*). Therefore, the last digit is not meaningful and can be ignored.

That said, frames tB' and tA' are pretty close to what we expected:

  • tB' frame
    • This frame is named 000000_-8.133780480.bmp
    • The timer displays 41.86x (expected 42.000)
  • tA' frame
    • This frame is named 003332_+3.046812160.bmp
    • The timer displays 53.04x (expected 53.000)

What about the t0' frame? It is named correctly (002424_+0.000000000.bmp), but the timer displays 98.88x, which is not what we expected at all. Why? To find the answer, let's have a look at the subsequent frame:


Fig. 10: Frame following the alarm frame


The timer displays 50.00x. This means that, in the middle of the exposure window associated with the alarm frame, the timer refreshed all the 7-segment displays from something like 49.xyz to 50.abc. Thus, all the digits changed during the acquisition, and this is the reason why they do not appear consistent.

In conclusion, the number of stored frames is correct (3333), and the assessment of the aforementioned specific frames was completed successfully, because the discrepancies are on the order of hundreds of milliseconds and therefore much smaller than the size of the recording window. These checks were repeated over several recordings, and similar results were found. Nevertheless, it is worth mentioning that this analysis does not allow us to conclude that the processing platform behaves like a true real-time system with regard to capturing the high-frame-rate flow of frames and processing the alarm signal. As such, we cannot determine the upper bound of the alarm detection latency, for example. Rather, we can say that the system typically performs as in the example detailed here. And this is enough to satisfy this PoC's requirements.


(*) The major factor determining the acquisition rate is the exposure time, which was set to 3300us in this case.

Future work

Industrialization

To turn this PoC into a real product, it would need to undergo a thorough and extensive industrialization process. For instance, the processing platform should be implemented with a true real-time system. Under this assumption, the effective computational power required should be estimated. In turn, this would lead to selecting an embedded platform that is significantly cheaper than the one used for the PoC.

With regard to the application software, it should be completed with full error management and a richer web interface. Additionally, it cannot be excluded that parts of it would need to be rewritten to achieve the desired performance.

Adding machine learning-based processing

Speaking of possible future evolutions of this recording/streaming system, many options are available to add even more advanced functionalities. For example, this device could leverage advanced machine learning-based computer vision algorithms to process the high-frame-rate stream. In the world of industrial automation, many issues could be addressed ranging from anomaly detection to predictive maintenance.

For these reasons, the NXP i.MX8M Plus will be one of the key components considered for the possible product industrialization, as DAVE Embedded Systems is part of the beta program associated with this system-on-chip.

Appendix A

In this appendix, the log shown in section Real-timeness, frame loss, and alarm detection latency is further analyzed to provide more details about the application software.

During normal operation, if the alarm detection functionality is enabled, the application waits indefinitely for an alarm to be raised:

[2020-08-10 10:48:53,674][INFO][main_logger.main] SAVING_ALARM_BUFFER -> STREAM_ONLY
[2020-08-10 10:48:53,675][INFO][main_logger.main] STREAM_ONLY -> WAITING_FOR_ALARM


When such an event occurs

[2020-08-10 10:49:22,570][DEBUG][main_logger.main] Alarm signal asserted

the application changes state and completes filling the alarm buffer:

[2020-08-10 10:49:22,571][INFO][main_logger.main] WAITING_FOR_ALARM -> FILLING_ALARM_BUFFER


As per system requirements, it takes 3 seconds to fill the buffer after the alarm occurrence (from 10:49:22,570 to 10:49:25,631, in this example). Right after the filling process is completed, a specific thread is resumed to store the buffer on the disk:

[2020-08-10 10:49:23,571][DEBUG][main_logger.main] Alarm signal deasserted
[2020-08-10 10:49:25,631][INFO][main_logger.alarmbuffersaverthread] Starting AlarmBufferSaverThread ...
[2020-08-10 10:49:25,631][INFO][main_logger.main] FILLING_ALARM_BUFFER -> SAVING_ALARM_BUFFER


Interestingly, while the PP is saving the recording buffer on the disk, some frames from the image sensor are lost due to the non-real-time nature of the system. This results in a temporary frame rate drop. As explained previously, this is tolerable.

[2020-08-10 10:49:25,678][ERROR][main_logger.main] Abnormal frame rate (42.6 fps)
[2020-08-10 10:49:25,716][INFO][main_logger.alarmbuffersaverthread] Deleting oldest alarm ...
[2020-08-10 10:49:25,778][ERROR][main_logger.main] Abnormal frame rate (59.6 fps)
[2020-08-10 10:49:25,839][ERROR][main_logger.main] Abnormal frame rate (74.5 fps)
[2020-08-10 10:49:25,938][INFO][main_logger.alarmbuffersaverthread] Saving to /mnt/alarms/2020-08-10_10.49.22_CEST
[2020-08-10 10:50:45,750][ERROR][main_logger.main] Stopping to stream frames ...
[2020-08-10 10:50:45,784][ERROR][main_logger.main] It is enough data for buffer  ...
[2020-08-10 10:50:48,822][ERROR][main_logger.main] Stopping to stream frames ...
[2020-08-10 10:50:48,855][ERROR][main_logger.main] It is enough data for buffer  ...


The following messages refer to a new alarm signal that occurred while the buffer was being written to the disk. This alarm is ignored because the system is not designed to process more than one alarm at a time.

[2020-08-10 10:51:02,582][DEBUG][main_logger.main] Alarm signal asserted
[2020-08-10 10:51:03,581][DEBUG][main_logger.main] Alarm signal deasserted


Finally, after the buffer has been persistently stored, the system is ready to detect and process a new alarm again:

[2020-08-10 10:51:59,825][INFO][main_logger.alarmbuffersaverthread] Ending AlarmBufferSaverThread ...
[2020-08-10 10:51:59,844][INFO][main_logger.main] SAVING_ALARM_BUFFER -> STREAM_ONLY
[2020-08-10 10:51:59,844][INFO][main_logger.main] STREAM_ONLY -> WAITING_FOR_ALARM

Appendix B

This section provides some further details regarding the testbed utilized for the alarm detection latency verification.

The following screenshot shows the emulated alarm signal, which is generated by the custom timer. It consists of a 1-second positive pulse.


Fig. B1: Emulated alarm signal generated by the custom timer


Each display is statically driven by a 74HC595 shift register in order to avoid the multiplexing that would be detrimental to our purpose.


Fig. B2: Emulated alarm signal and latch clock when displaying 50.000


Fig. B2 shows the emulated alarm signal (magenta) and the signal used to update the 7-segment displays when the timer reaches 50.000 (light blue). This signal is connected to the "latch clock" inputs of the shift registers so that all the displays are updated simultaneously. Turning LEDs on/off is a very fast process, comparable with the propagation time of the shift register itself. Thus, it can be assumed that the displays show 50.000 within a few tens of nanoseconds after the rising edge of the latch clock. This is long before the assertion of the emulated alarm signal, proving the correctness of the timer implementation.

Credits