Fantastic talk! The audio quality is great. Your talk keeps a constant level of abstraction, which makes it easy to follow and absorb. You mention where there are rabbit holes and give a summary of your findings; that's so efficient. Your slideware greatly supports your narrative and is short and precise. The overall narrative visits topic after topic, giving an overview, some examples, some details, some outlook (like inplace_function, static_vector), and mentions the standard or papers where useful. One of the best talks of CppCon 2021! I have nothing to do with audio but I enjoyed it from beginning to end.
This was an excellent talk - this topic is very rarely covered at this depth - extremely useful for my interest in C++ audio DSP on ARM - thank you.
Glad it was helpful!
By definition, real-time does not mean low-latency, although most of the time people tend to use the two terms interchangeably. Real-time is about determinism.
I agree. Hard real-time means "guaranteed latency": no matter what happens, the CPU has to produce a reliable output within a defined amount of time.
7:35
I love the way Timur's lectures change with the years.
Excellent talk! I would die to have a standard implementation of a lock free queue! It's time already!
There's a chance you know. I mean a chance of dying of old age
Great lecture, especially if you're into Ableton, VST plug-ins like iZotope, or virtual instruments from Steinberg, etc.
Great talk. This topic was definitely poorly covered in the past. Thank you.
This is a fabulous presentation, thank you ever so much Timur.
You know it's going to be a good presentation if Timur is presenting
I suggest an additional (optional) parameter to the attribute:
[[realtime_safe(n*m)]] void do_sth(int* array2d, int n, int m);
In general, attributes describing runtime complexity might be useful and would allow more complex code analysis:
[[average_time(n*std::log(n)), worst_time(n*n)]] void sort(...);
I guess compilers could bubble up this attribute through callers, such that tools can show the runtime behavior of any function.
I learned a lot from this talk. Excellent presentation!
This is a very interesting talk. I liked the pmr trick to avoid dynamic memory allocation.
That's not a trick. That is an absolute necessity if you want your software to be truly reliable.
53:31 If all threads are only ever try-locking, there is no need to use a mutex at all; you could just as well use a spinlock.
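For illustration, a minimal sketch of such a try-lock-only spinlock built on std::atomic_flag (the class name try_spinlock is made up for this example):

#include <atomic>

// Minimal try-lock-only spinlock: try_lock() never blocks, so a real-time
// thread can poll it and simply skip the work if the lock isn't available.
class try_spinlock {
public:
    bool try_lock() noexcept {
        // Returns true only if the flag was previously clear, i.e. we got the lock.
        return !flag_.test_and_set(std::memory_order_acquire);
    }

    void unlock() noexcept {
        flag_.clear(std::memory_order_release);
    }

private:
    std::atomic_flag flag_ = ATOMIC_FLAG_INIT;
};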
I'm not sure why you'd want to enter a try block if you don't want to throw exceptions. Isn't the whole point of that structure to guard against exceptions?
In some scenarios, you don't care about real-time safety (or performance in general) if you end up on the sad path (an exception is thrown and you go down some error-handling code), but you do care about it on the happy path (no exception is thrown, but the try block is still there).
@timurdoumler4813 That's not how real-time is defined. Either it is real-time or it is not. If it isn't, then don't call it that.
There were a lot of primitives from the thread support library getting tossed out. I wish a bit more time had been spent going over why. Having worked in an RTOS environment, posting a semaphore or a message to a queue seems like a common thing to do from a high-priority thread.
@ real-time-safe dynamic memory: data structures with pre-configured, pre-allocated capacity (strings, containers such as vectors, etc.) can be used. As is often the case in real-time programming, this shifts work from runtime to setup (or even compile) time, so measuring memory use with a profiler gives you the information needed to configure the system's memory capacity, which also keeps you on the safe side.
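A minimal sketch of that pattern, assuming a hypothetical Processor class with a non-real-time prepare() step and a real-time process() callback:

#include <cstddef>
#include <vector>

// prepare() runs on a non-real-time thread; process() runs on the real-time
// thread and must not allocate.
class Processor {
public:
    void prepare(std::size_t maxBlockSize) {
        scratch_.reserve(maxBlockSize);  // the one and only allocation
    }

    void process(const float* in, std::size_t numSamples) {
        // Stays within the reserved capacity (numSamples <= maxBlockSize),
        // so no allocation happens on the real-time thread.
        scratch_.assign(in, in + numSamples);
        // ... real-time-safe DSP on scratch_ ...
    }

private:
    std::vector<float> scratch_;
};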
Technically, you could use algorithms that are linear or worse in time complexity if you know that your input is within a given range such that the resulting runtime stays within the deadline (or rather, some percentage thereof).
True.
I find it sad that it's 2022 and Timur *still* has to dispel the "lambdas heap allocate" myth. Lambdas have been in the language for more than ten years at this point. How many CppCon talks pointing this out is it going to take before we can stop re-explaining all this?
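For anyone landing here later, a quick sketch of why the myth is wrong: a capturing lambda is just an ordinary object whose size is determined by its captures, and heap allocation only enters the picture if you wrap it in a type-erasing container like std::function and the captures don't fit its small-buffer optimization.

#include <functional>

void example() {
    int gain = 2;

    // The closure object lives on the stack; its size is just its captures (here, one int).
    auto scale = [gain](int x) { return gain * x; };
    scale(21);  // no heap allocation anywhere here

    // Only type erasure can allocate, and even then only if the closure is too
    // big for the implementation's small-buffer optimization:
    std::function<int(int)> erased = scale;
    erased(21);
}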
It is a very good talk! Thank you.
Thank you too!
This is a great talk, and covering a subject which I didn't manage to find many good resources about. Can you recommend other resources that cover this topic? Or is the standard the only place to look into?
Rule number one: you don't rely on the standard to "guarantee" a compiler property. You check whether YOUR compiler actually has that property when it compiles YOUR code.
C++ and C# are the best. I'm trying to overcome dynamic allocations in my state-machine motion-control applications #squintHarder
38:48 Is waking up a thread really a problem in practice? If you do multi-threaded DSP, you always have to synchronize threads. The helper threads typically wait on a semaphore and the "main" audio thread posts to the semaphore to wake them up. How would you do this if you were not allowed to wake up threads?
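For reference, the pattern described above, sketched with the C++20 std::counting_semaphore (thread creation and the actual DSP work are omitted):

#include <semaphore>

std::counting_semaphore<> workReady{0};

// Helper thread: blocking here is fine, it is not the real-time thread.
void helperThreadLoop() {
    for (;;) {
        workReady.acquire();
        // ... do the helper's share of the DSP work ...
    }
}

// Real-time audio thread: release() may wake the helper, which involves the
// OS scheduler - this is the operation whose real-time safety the talk
// questions on a general-purpose OS.
void audioCallback() {
    // ... produce data for the helper ...
    workReady.release();
}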
Is it not the case that the C++ real-time interface is C?
Why not just use std::vector and pre-allocate? Any reason why we should not do that?
You could do this to get a buffer of length computed at runtime before your thread goes real-time, but:
• since its size would never change, you’d be better off with a std::unique_ptr stored alongside its size in a std::size_t, since the vector stores additional data (like its capacity) which you wouldn’t need here;
• you would still need the std::pmr::monotonic_buffer_resource.
However, you couldn’t do it in a freestanding environment since dynamic allocations aren’t supported in those.
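A minimal sketch of the unique_ptr-plus-size alternative described above (the Buffer name is made up; the single allocation happens before the thread goes real-time):

#include <cstddef>
#include <memory>

struct Buffer {
    std::unique_ptr<float[]> data;
    std::size_t size = 0;
};

// Called on a non-real-time thread, before processing starts.
Buffer makeBuffer(std::size_t numSamples) {
    return { std::make_unique<float[]>(numSamples), numSamples };
}

// Called on the real-time thread: only touches the already-allocated storage.
void process(Buffer& buf) {
    for (std::size_t i = 0; i < buf.size; ++i)
        buf.data[i] *= 0.5f;
}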
We can add *std::span* to the list of useful std real-time-safe classes
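As a small sketch (the function name is made up), std::span lets a real-time callback operate on a caller-owned buffer without owning, copying, or allocating anything:

#include <span>

// Non-owning view over samples provided by the host: just a pointer and a length.
void applyGain(std::span<float> samples, float gain) {
    for (float& s : samples)
        s *= gain;
}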
Most important here, I think, is to avoid std containers (or maybe use them with a custom pre-allocating allocator), and anything other than atomics in multithreading. I actually didn't know that std::array is allocated on the stack; good info there.
Something else I do is overload operator new and assert() that it is never called once everything is set up. Thoughts?
operator new has A LOT of overloads (taking std::align_val_t, taking std::nothrow_t, etc). Do you handle all of them? Also, what happens if you work with a 3rd party class that overloads its own operator new? How do you guarantee that that never gets called?
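For what it's worth, a minimal sketch of the technique being discussed, covering only the most common global overloads (the flag name is illustrative):

#include <cassert>
#include <cstdlib>
#include <new>

// Illustrative: set to true on the real-time thread once setup is finished.
thread_local bool g_onRealtimeThread = false;

void* operator new(std::size_t size) {
    assert(!g_onRealtimeThread && "allocation on the real-time thread!");
    if (void* p = std::malloc(size))
        return p;
    throw std::bad_alloc{};
}

void operator delete(void* p) noexcept {
    std::free(p);
}

// As noted above, the aligned, nothrow, and array forms (plus any class-specific
// operator new) would need the same treatment to make this airtight.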
Fascinating talk, thank you for this.
If applicable, and where the problem areas are 'small'/manageable, would writing those sections in assembly language give you the predictable execution behaviour you require? Of course, it would help if kernel calls are not used, or, if they are used, that they don't re-introduce what you seek to prevent. We used to rewrite specific (problematic) kernel routines in days gone by, mainly removing their general-purpose nature for our specific use cases (and gaining significant speed and 100% behavioural predictability)... I don't know how doable that is here/today. A damn shame if those days are gone. Today's abstraction is both good and bad (loss of control, predictability, etc.).
54:30 std::atomic itself might add memory fences or hardware locks depending on the std::memory_order. You can't have lock-free data structures without these. Luckily, there is no reason why they shouldn't be real-time safe: they work at the CPU level, not at the OS scheduler level.
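One related check worth doing: verify that the std::atomic specializations you rely on are actually lock-free on your target, since otherwise the implementation may fall back to a lock. A compile-time sketch (StereoSample is a made-up example type):

#include <atomic>

struct StereoSample { float left, right; };

// Fail the build if these atomics could ever fall back to a lock on this target;
// a hidden lock would break real-time safety.
static_assert(std::atomic<float>::is_always_lock_free);
static_assert(std::atomic<StereoSample>::is_always_lock_free,
              "this type may be too large to be lock-free on some targets");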
Insertions into a flat_map are not O(1), they are O(n), and lookup in a flat_map is O(log n). flat_map is bad news all around.
I am a little confused: where can I find the slides for 2021?
good question, there doesn't seem to be a github repo for them yet.
Very useful talk
thanks Timur.
Great public speaking!
Great talk, thank you.
Glad you enjoyed it!
22:30 no mention of alloca()?
alloca() doesn't compose, it can only be used in the body of the function where it's called, so for example you can't build a dynamic array class around it.
Really nice
The best true random number generator I've seen is a reverse-polarity base-emitter junction in a BJT with the collector left unconnected, fed into an ADC...
I'll visit your house with a high gain antenna and an SDR and we'll see how random it is then 😁😉
I do hope "Chip Hogg" is an actual name
Why on earth would I use the STL in real-time scenarios? It's completely unsuitable (algorithms excluded), and figuring out whether I can use a piece of it takes me more time than rewriting it for real-time audio needs.
This guy has an unwarranted phobia of multi-threading. What guarantees are there that his beloved single thread won't be interrupted and re-scheduled by the OS? None. I'd rather make my app faster with multi-threading and thus more "realtime".
> This guy has an unwarranted phobia of multi-threading.
Where do you see this?
> What guarantees are there that his beloved single thread won't be interrupted and re-scheduled by the OS?
By setting the appropriate thread priority.
> I'd rather make my app faster with multi-threading and thus more "realtime".
Multi-threading and realtime-safety are orthogonal. You can use several threads for real-time audio computation (most DAWs do this), but the code for each thread still has to be real-time-safe.
If you have a real-time audio processing thread, there isn't a hard guarantee as such (because it's not an RTOS), but in practice it works really well: the thread gets a high priority assigned by the OS and therefore won't be rescheduled unless you miss the deadline.
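For the curious, a rough sketch of what "setting the appropriate thread priority" looks like on a POSIX system (error handling and the platform's allowed priority range are glossed over):

#include <pthread.h>
#include <sched.h>

// Give the calling thread a real-time scheduling policy and a high priority.
// Usually requires elevated privileges; audio frameworks often do this for you.
void makeCurrentThreadRealtime() {
    sched_param sp{};
    sp.sched_priority = sched_get_priority_max(SCHED_FIFO) - 1;
    pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);
}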
So stick to C?
For hard realtime? Yes, that's the safer bet. It's not necessary because C++ is not all that unreliable in this regard, but to be honest... if you have a real hard real time problem... like making sure that that 100 ton crane really stops before something really heavy hits the ground really hard... yeah, stick to C. ;-)
In conclusion: Don't use C++.
How do you come to this conclusion?
Let me guess, you're going to use Rust