Man, I absolutely love everything with Jason Turner in it. He's clear, he's concise and he's funny. ^^ Great guy!
warning, though - this particular talk was very trivial
Idk he took forever to get to content
13:08 and other places, it would be useful to have the use_string declaration displayed; otherwise we are speaking without sufficient context.
Unlike some conference talks, I find Jason's talks highly interactive
I feel like after discussing triviality, it would have been interesting to see some examples of a trivial struct type that couldn't fit in a register, rather than just examples with int, which is obviously always going to do the fast thing. If the point was supposed to be "see, triviality always does the fast thing and you don't have to think about it like you do with std::string," it would have helped to see how a large trivial struct interacts with RVO/explicit std::move. Returning in registers kind of renders RVO/std::move irrelevant/not applicable, as Jason even said, so it wasn't really an apples-to-apples comparison with the earlier std::string examples.
For the record, for a large trivial struct, I believe the advice is mostly the same as for std::string--don't explicitly std::move, because that inhibits the copy elision you are likely to otherwise get. But--you don't have to spend as much time worrying about copy vs. move, because they're equivalent anyway.
The ultimate takeaway of the talk is that trivial types are faster to pass around your code because they don't have to do work in their SMFs. That is true, but as insights go, it's also kind of... trivial.
Jason Turner is an engaging, genuinely likable, and overall just fantastic speaker as always.
I see Jason Turner, I click. It’s as simple as that
Me too
Jason is a "CPP Superstar" 🌟
Debug performance is a substantial problem in C++, but I find it disheartening that special handling for the standard library is being added inside the compiler ruclips.net/video/bpF1LKQBgBQ/видео.html. It elevates std code to something with special compiler support, doing something that users cannot replicate in their own code.
Regarding is_trivially_constructible: what is it good for? Any members should have default values, so I currently don't see a use case
May I ask whether "std::is_trivially_destructible" is always better than "std::has_trivial_destructor"?
"This is keeping Ben awake tonight" while he reads the std. :)
So what this basically boils down to is - use C-style structs. Not that I'm complaining.
Then go ahead and use C instead :)
Start at 3:32
C++'s simplest language constructs make great puzzles for seasoned C++ users. Interesting for a quiz, but questionable for learning, productivity, debugging, safety, and so on...
Tbh none of these were safety concerns. Worst case was a copy where you wanted to move.
This language is insane; only Python keeps me more on my toes every time I write something non-trivial in it.
The fact that a developer needs to consider which of the options is better is a defect in C++. The compiler should be able to optimize all of them to the best performance...
I find these trick questions hilarious; C++ programmers love them, while in fact a programmer should worry about other stuff while the compiler does the optimizations, achieving the same performance (and don't tell me it can't! See Rust)
Just what I was thinking the whole time. 100% agree
cool!
I agree, and that is also one of the guiding principles of Python (see PEP 20)
There is no free lunch; the designers of C++ had to make a lot of trade-offs, and every apparently nonsensical issue very likely has a well-motivated reason behind it.
I really like Rust, but for example I find it irritating that many numerical algorithms are impossible for the compiler to optimize without resorting to "unsafe" and/or unergonomic use of the language. This is the case when every random access to an array has an implicit bounds check that I am "mathematically" certain will never trigger, but obviously the compiler can only see so far; this is a waste of extra instructions (polluting the I-cache), many of which are branches (polluting the branch predictor cache), which makes the whole program incrementally slower: even if each pessimisation is insignificant in isolation, many of them can add up into something noticeable.
But then how would you run a successful cottage industry of consultants coming in to teach C++ to C++ developers?
24:20 to skip the C++ quiz and get to the actual topic
it's not a quiz, it's to understand the language better bruh
44:55 The -O0 flag actually means "deliberately deoptimize", rather than "don't bother optimizing". -O1 is the default, and I'm pretty sure it would still result in a trivial return for that function.
This is simply not true. The default for both clang and gcc is -O0, and it just provides no optimization, for the fastest possible compile speed and the easiest debugging experience. There would be no good reason for a compiler to have an option that spends time making your code worse.
std::move calls do disappear at -O1, but that's because a small amount of optimization is applied at -O1.
If you are unsure whether you have copies or moves, and whether they get optimized, you have probably factored your code wrongly. That said, you act as if const/auto& is not a thing. Furthermore, the amount of contrived invention required to create this talk seems to be a systemic problem.
Post-C++14, all you have done is add more interview questions rather than improving/simplifying the language.
Day by day it’s getting uglier
I started learning C++ in the early 2000s. I am appalled to see today's C++. At some point, we might need to decide whether to indulge in playing our own games or rethink what it means to program.
ok boomer
What exactly about today's C++ are you appalled by?
Would you mind sharing some of your grudges regarding C++ specific to this talk?
I agree. There is too much magic that may or may not happen. Just creating a constructor is fraught with peril. And compilers do a terrible job of giving any sort of insight into what they did or didn't do.
Hidden member functions are one of the worst ideas in C++. Even worse is that hidden member functions can be implicitly deleted by doing seemingly innocuous things. Worse than that, instead of a compile error, you often just end up with sub-optimal code where the class is used (e.g., you get a copy instead of a move). It would be much better if we required the developer to explicitly delete those member functions for the code to compile, instead of having the compiler do it automatically. All of this just smacks of optimizing for characters written instead of maintainability.
Aaaand this is why a well-defined static language like Rust is going to replace C++ eventually. I've been programming in C++ for a long time, and still do for work, but I find myself using Rust for everything I possibly can because of stuff like this. Why should I need to study a language's implicit (hidden) rules just to understand when something will be optimized or not? Why is UB so unacceptably trivial to achieve in C++? I stay up-to-date with C++ because the industry demands it, but the language is getting so bloated and arcane, I don't see younger developers ever wanting to get into this, and I don't blame them. Jason even said it himself that `constexpr` isn't even going to actually require the thing to be a constant expression in the upcoming C++26 standard... Good thing we have `consteval`... Just another nuanced thing my team / newcomers to the language are not going to understand or utilize. And this isn't even to mention the "Principal" engineers that still don't trust using smart pointers...
constexpr was never required to be evaluated only at compile time; it was always meant to be a double agent for both runtime and compile time. You can force compile-time evaluation of something that is constexpr, but unlike a consteval function, which can only be invoked at compile time, you can still call your constexpr function at runtime.
Also, no language in existence will relieve the programmer of the responsibility to think. Rust also has hidden rules that can hurt performance, like the absence of in-place construction; this is just inevitable when you shoot for top performance.
And instead of instantly doomsaying the very moment you see a "trick C++ quiz," wait a moment and think about the most performant options in there. They are the simplest ones; that's why the talk is called "great C++ is_trivial"
What Jason said about constexpr is that in C++26 you don't have to worry about making your code "constexpr friendly," because almost everything can be executed at compile time. Even the upcoming SIMD library is constexpr-capable. And that's a good thing.
Meanwhile speaking of Rust : "Enjoy your ugly proc macros!".
Rust doesn't fix these problems; those are superficial issues. The real problem is that language designers don't consider legacy, and by the time it becomes a problem it's already too late. I already see that with Rust, and it'll only get worse. C used to be young and simple and was considered easy to optimize, but then systems became more complex and C kept needing to adopt new features, and now using it properly is difficult. C++ came along to try to fix those issues, then it grew legacy on top of C's legacy (which it used to overtake C), and now it's a mess. Rust does not have a solution to this. Also, the borrow checker isn't simple; any amount of complexity you add to the base function of a program (no matter where it happens) is inherently going to cause complexity issues in the program itself. There are other problems with Rust, like unstable-ABI libraries, which make it pretty difficult to adopt as a replacement for C++, and the lack of direct support for C++ legacy, which is how most big-name derivative languages win market share.
For the record, I'm studying for a CS-adjacent career, and C++ is the language that interests me the most. I'd rather end up programming in C++ than in any web technology.
This talk was LAME; the guy spent the first 15 minutes playing trivia... I mean, this was more of a "come, let's play trivia" than a talk...