Thank you Miro! You should get a prize for sharing your very practical knowledge.
Great explanations and implementations. Can't wait for the second part.
Miro, this is fantastic as usual. I like that you have added multiple tasks at the same priority.
Thanks for pointing out this aspect. It's important because it means that the number of SST tasks is NOT limited by the number of unique interrupt priority levels available in the NVIC. You can have many more SST tasks than interrupt priorities. The limit is only the number of supported interrupt vectors (minus the number of actually used interrupts). But, as shown in the video, the "reserved" interrupt vectors are usually available for SST tasks. --MMS
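P.S. For illustration only, here is a minimal CMSIS-style sketch (the IRQ numbers, handler names, and priority values are hypothetical placeholders, NOT the actual SST source) of the idea: two tasks are bound to otherwise unused interrupt vectors, share one NVIC priority level, and are "scheduled" simply by pending their IRQs:

```cpp
// Hedged sketch, not the actual SST implementation.
#include "device.h"   // placeholder for the MCU's CMSIS device header
                      // (provides IRQn_Type and the NVIC_* functions)

constexpr IRQn_Type TASK_A_IRQ = static_cast<IRQn_Type>(42);  // hypothetical reserved vector
constexpr IRQn_Type TASK_B_IRQ = static_cast<IRQn_Type>(43);  // hypothetical reserved vector

// Handlers assumed to be installed at those vector slots in the startup code.
extern "C" void TaskA_IRQHandler(void) { /* one run-to-completion step of Task A */ }
extern "C" void TaskB_IRQHandler(void) { /* one run-to-completion step of Task B */ }

void tasks_init(void) {
    // Both tasks share ONE NVIC priority level yet occupy two vectors,
    // so the task count is bounded by vectors, not by priority levels.
    NVIC_SetPriority(TASK_A_IRQ, 3U);
    NVIC_SetPriority(TASK_B_IRQ, 3U);
    NVIC_EnableIRQ(TASK_A_IRQ);
    NVIC_EnableIRQ(TASK_B_IRQ);
}

void task_a_activate(void) {
    NVIC_SetPendingIRQ(TASK_A_IRQ);  // let the NVIC hardware "schedule" Task A
}
```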
Hi Miro, sorry for my silly question, but I'm wondering: I've been looking for the post() function that posts the TIMEOUT1_SIG signal in the sst_cpp/examples/blinky example, but I couldn't find it. Is TIMEOUT1_SIG placed in the queue in some other special way? Thanks for the great video.
The post() function is declared inside the Task class in the sst.hpp header file (located in the Super-Simple-Tasker/include directory). Task::post() is defined in the sst.cpp source file (in the Super-Simple-Tasker/sst_cpp/src directory). These are the most sensible places to put such central functionality. --MMS
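P.S. If it helps, here is a deliberately simplified, self-contained model (hypothetical types, NOT the actual sst.hpp/sst.cpp code) of what posting a signal such as TIMEOUT1_SIG into a task's queue boils down to: a periodic tick handler calls post(), which deposits the event into the task's ring buffer, and the task later dequeues it in its event loop:

```cpp
// Simplified model for illustration; in an SST-like kernel, post() would
// also pend the task in the NVIC so the event gets dispatched.
#include <cstdint>
#include <cstddef>

enum Signal : std::uint16_t { TIMEOUT1_SIG = 1, TIMEOUT2_SIG };

struct Event { Signal sig; };

class Task {
public:
    bool post(Event const &e) {               // producer side (ISR or another task)
        std::size_t const next = (head_ + 1U) % QLEN;
        if (next == tail_) { return false; }  // queue full
        queue_[head_] = e;
        head_ = next;
        return true;
    }
    bool get(Event &e) {                      // consumer side (the task's event loop)
        if (tail_ == head_) { return false; } // queue empty
        e = queue_[tail_];
        tail_ = (tail_ + 1U) % QLEN;
        return true;
    }
private:
    static constexpr std::size_t QLEN = 8U;
    Event queue_[QLEN] = {};
    std::size_t head_ = 0U;
    std::size_t tail_ = 0U;
};

Task blinky;

void tick_handler(void) {      // e.g., called from SysTick at the BSP tick rate
    blinky.post(Event{TIMEOUT1_SIG});
}
```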
Hi Miro, thanks for the great video! I tried to run your blinky_button example from sst_cpp with the GNU compiler on the TivaC LaunchPad, but when I loaded it onto the board, the green LED blinked, which is not what I expected from the video. Then I took a look, and it turned out the code ends up in DBC_fault_handler. Did I make a mistake somewhere?
@sweetie_taurus Thank you for reporting a problem, but I can't reproduce it. I've just downloaded the SST code from the Git repo github.com/QuantumLeaps/Super-Simple-Tasker and tested the example sst_cpp/examples/blinky_button/gnu . It showed a solid green LED without blinking. I'm not sure which GNU-ARM cross-compiler/linker you're using; I've used the GNU-ARM that ships in the QP-bundle for Windows. Your problem could be as simple as incorrect calling of the static constructors in the startup code (which depends on the toolchain you're using). Therefore, please find out which assertion is violated. I recommend using a toolset that allows you to debug the code. For example, the same code (including the exact same bsp_ek-tm4c123gxl.cpp file) is used in the KEIL uVision example: sst_cpp/examples/blinky_button/armclang . GNU-ARM might be free, but if you can't debug the code, you're fighting a battle with one hand tied behind your back. --MMS
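P.S. To find out which assertion fires, one option is a debugging-friendly fault handler. The sketch below assumes the usual DBC convention of passing the module name and a line/label to the handler; setting a breakpoint in it (or inspecting the captured variables) pinpoints the violated assertion:

```cpp
// Hedged sketch of an assertion handler for debugging (signature assumed).
static char const * volatile dbc_module = nullptr;  // file that asserted
static int          volatile dbc_label  = 0;        // line/label that asserted

extern "C" void DBC_fault_handler(char const *module, int label) {
    dbc_module = module;             // capture for inspection in the debugger
    dbc_label  = label;
    __asm volatile ("cpsid i");      // GNU-style inline asm: disable interrupts
    for (;;) {                       // freeze here; attach the debugger and
    }                                // examine dbc_module / dbc_label
}
```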
Hi Miro, thanks for the great video! You mentioned 'just 2 MHz,' but is it possible to run it at higher frequencies? If so, it would be great if you could show me how to adjust it.
Absolutely, it *is* possible to run SST at much higher CPU frequencies, at which point the kernel performs even better. In fact, SST becomes so fast that it requires a much faster logic analyzer, so the cheap 24 MHz analyzer is no longer adequate. Now, adjusting the CPU clock is not trivial. It requires setting several registers in the clock-control section, waiting for the PLL (Phase-Locked Loop) to settle, etc. There are entire tools (e.g., STM32CubeMX) that allow you to set it up correctly. --MMS
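P.S. On the TM4C123 LaunchPad used in the video, the easiest route is TI's TivaWare driverlib, which hides the register writes and the PLL settling. Here is a hedged sketch (assuming TivaWare is available in your project) that bumps the system clock to 80 MHz:

```cpp
#include <cstdint>
#include "driverlib/sysctl.h"   // TivaWare system-control API (assumed available)

void bsp_clock_init(void) {
    // 16 MHz crystal -> 400 MHz PLL -> fixed /2 -> /2.5 = 80 MHz system clock
    SysCtlClockSet(SYSCTL_SYSDIV_2_5 | SYSCTL_USE_PLL
                   | SYSCTL_XTAL_16MHZ | SYSCTL_OSC_MAIN);

    std::uint32_t const clk = SysCtlClockGet();  // should now report 80000000
    (void)clk;  // e.g., use it to recompute the SysTick reload value
}
```

On an STM32, STM32CubeMX generates the equivalent SystemClock_Config() for you.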
Great! When will Part-2 be released?
Next week. Stay tuned!
I feel like the word "blocking" is stretched a bit from its usual meaning here. Sometimes blocking means that the thing just runs without yielding. Other times it means that interrupts are disabled in that section of code. I almost thought that by "non-blocking" here was meant a very short operation that runs to completion, but I think blocking vs. non-blocking has more to do with the priority of the operations. I need to look closer, but it seems like the SysTick hosts the event loop and selects the hardware preemption of possible events, which is pretty much perfect for RMS/RMA.
Indeed, the term "blocking" is extended here to mean any form of waiting in line until some external event happens. For example, busy-waiting for a time delay is considered "blocking" in this context. Traditionally, "blocking" tends to mean only waiting by means of a context switch in an RTOS. However, any form of "blocking" should be clearly distinguished from performing computations. For example, a complex floating-point computation on a CPU without an FPU might take considerable CPU time. This happens in line, but it is NOT waiting for an external event. --MMS
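P.S. A tiny illustration of the distinction (millis() is a hypothetical tick-counter helper): both functions burn CPU time "in line," but only the first one is "blocking" in the sense used here, because it waits for something external (the passage of time) instead of computing:

```cpp
#include <cstdint>

extern std::uint32_t millis(void);        // hypothetical millisecond tick counter

void blocking_delay(void) {
    std::uint32_t const start = millis();
    while ((millis() - start) < 100U) {   // busy-wait: "blocking" in this sense
    }
}

float long_computation(float const *x, int n) {
    float acc = 0.0F;
    for (int i = 0; i < n; ++i) {         // long number-crunching: NOT blocking,
        acc += x[i] * x[i];               // nothing external is being waited for
    }
    return acc;
}
```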
Hi Miro, thanks for the great presentation. I have one question about the case where a message is posted to another task. In that instance, the caller task gets preempted/blocked. Isn't that the same as not meeting run-to-completion?
The relation between preemption and run-to-completion (RTC) is explained in Part-2 (see ruclips.net/video/kEJ6QHerSro/видео.html ). But to quickly summarize: preemption and RTC can happily coexist. RTC does NOT mean monopolizing the CPU for the whole RTC step. It only means processing one event at a time. So, a preempted task only pauses momentarily, continues the same RTC step every time it gets the CPU back, and eventually completes the RTC step. Also, preemption is very *different* from blocking. Preemption is transparent to the preempted task. Blocking is always caused by an explicit call to an RTOS primitive, such as a delay or a semaphore. --MMS
As always, great video Miro.
That said, I'm still not convinced about an event-driven RTOS in embedded systems.
I've built event-driven GUI applications in the past and it was great.
But I think it worked so well because the inputs/outputs are very simple in a GUI application (WM_CREATE, WM_DESTROY, button clicks, etc.).
How would you structure an embedded application that needs to manage daisy-chained sensors that need filtering, CAN bus mailboxes, an async web server, and USB? How would you pass the data around as events?
If all the NASA JPL Martian rovers can be programmed according to the event-driven paradigm, your software can too. Seriously. Please actually read the article "Managing Concurrency in Complex Embedded Systems" referenced in this presentation (see www.state-machine.com/doc/Cummings2006.pdf ). The NASA software admittedly uses a traditional RTOS (VxWorks, I believe), but all threads are structured as event loops, strictly without blocking. So the paradigm can evidently handle all the communication and synchronization issues you enumerate. In fact, the event-based solution is actually simpler than one built on the traditional blocking primitives of an RTOS. Also, when you think about it, there are not that many different types of events. As I said in the presentation, even if you don't immediately jump to a radically different kernel like SST (which is still incomplete), you should seriously consider using the event-driven paradigm with a traditional RTOS. Your designs will be better for it. --MMS
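P.S. To the question "how would you pass the data around as events": the data rides inside the events. Here is a hedged illustration (hypothetical types, not the QP/SST API) where a CAN mailbox frame and a filtered sensor sample are just event types with payloads, posted to the consuming task instead of that task blocking on the bus:

```cpp
#include <cstdint>

enum Signal : std::uint16_t { CAN_RX_SIG = 1, SENSOR_READY_SIG, USB_DATA_SIG };

struct Event {               // common base: every event carries at least a signal
    Signal sig;
};

struct CanRxEvent : Event {  // CAN mailbox contents carried as an event payload
    std::uint32_t id;
    std::uint8_t  dlc;
    std::uint8_t  data[8];
};

struct SensorEvent : Event { // filtered sensor sample carried as an event payload
    float value;
};

// A CAN receive ISR (or a thin driver task) wraps the raw mailbox data into an
// event and posts it; the consumer never waits on the bus, it just reacts to
// CAN_RX_SIG when the event arrives in its queue.
void can_rx_isr(void) {
    CanRxEvent e{};
    e.sig = CAN_RX_SIG;
    e.id  = 0x123U;          // in real code, filled from the mailbox registers
    e.dlc = 8U;
    // post(&e) to the consumer task's queue here (framework-specific call)
}
```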
@@StateMachineCOM I agree with you, Dr. Miro. In my opinion, it seems that the colleague above doesn't understand the concepts of event-driven paradigms or even the state-machine mechanism. I've implemented this paradigm successfully in basic to advanced embedded systems without any problems. I've also used and tested your framework, and it works perfectly in all the tests I've run.