I watched this a couple of years ago, and here I am watching it again.
You may also want to watch this talk then: ruclips.net/video/8uAW5FQtcvE/видео.html
By far the most useful, insightful, practical-tricks C++14 talk I have seen in 20 years, on par (as a specialized domain expert) with Scott Meyers / Herb Sutter. Thank you so much for all this and the pointers (weak vs. strong memory model, benchmarks, IncludeOS) .. you are a great teacher!
Semantic error on your part? C++14 has only been around for 4 years, so I doubt you saw a C++14 talk 20 years ago :)
@@colinmaharaj That was the best YouTube comment I've seen in the past 30 years of my life!
@@mapron1 I've been coding in C++23 for the past 17 years and this was the best introduction to it I could have hoped for. My recruiters are so impressed.
If Bjarne is the master of the language, this guy is the master of the language in a domain. Which causes me to lose sleep and realize I must get better. Does anyone else feel this way? He has left no stone unturned. It's incredible and lets me know what I'm capable of, but man, why does management get paid more than someone like him? With machine learning and a slight shift in intent, this guy could be lethal in the upcoming healthcare market shifts, with all the data coming online. Please tell me I am overthinking this.
Patrick, that attitude is what drives me to stay up at night and keep learning more (and worries my wife). Grab on to it!
Just remember that you're watching the fruits of the labor of months or years. So don't let yourself be disheartened by how far ahead someone else is. Start something today. Stick with it if you love it.
As for machine learning... that's a totally different area of expertise. Writing high-speed code for HFT is almost diametrically opposed to anything that has to do with lots of data. It's about getting the computer to balance on a hair trigger, so that there's the least possible amount of time between tick (market data) and trade (order execution). There are monstrous computers (lots of CPUs, lots of RAM) that do nothing for 99.99% of their day. They just need to read a single message every 100 milliseconds. But when the right message comes through, they have to POUNCE on it right away.
Compare that with machine learning, or data processing of most other types. There the focus is on throughput: how many megabytes you can push through every second. And since they run on a tighter budget than an HFT firm, the machines have to be used as much as possible. They can't sit around.
Well, I'm not sure I'm explaining this properly. But my point is that these are very different, and would require a lot of retraining to switch from one to another :-)
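The "hair trigger" idea above can be sketched in a few lines. This is a minimal illustration, not anything from the talk: the types (Tick, Order) and the polling/sending callbacks are made-up placeholders. The point is simply that the hot path never blocks and never sleeps, so reacting to a tick costs only a branch and the handler:

```cpp
#include <atomic>
#include <optional>

// Hypothetical tick/order types, for illustration only.
struct Tick  { double price; };
struct Order { double price; };

// The "hair trigger": spin continuously instead of blocking, so the
// latency between receiving a tick and sending an order is just the
// branch plus the handler, with no thread-wakeup cost in between.
template <typename PollFn, typename SendFn>
void run_hot_loop(PollFn poll, SendFn send, std::atomic<bool>& running) {
    while (running.load(std::memory_order_relaxed)) {
        std::optional<Tick> tick = poll();   // non-blocking read of market data
        if (tick) {                          // the rare, interesting case
            send(Order{tick->price});        // pounce: react immediately
        }
        // No sleep, no condition variable: the core burns 100% CPU on
        // purpose so nothing has to be woken up when a tick arrives.
    }
}
```

In a real system the polling side would typically be a kernel-bypass network read, and the core running this loop would be isolated from everything else.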
I've spent most of my life in C/C++ and have always been an optimizing person; now I'm into trading and algorithms, so I have to get into algo trading.
This guy probably makes a shit ton of money (low 7 figures); I wouldn't worry about his salary compared to management's.
Yes. I once had an interview with a similar quant firm and got a rejection. I guess I could have been accepted if I had watched his videos and practiced more.
Goated talk! 🔥👏🙏
I think that the number one reason why microwave is faster is that the signal travels in a straight line. Fiber is very rarely able to travel anywhere near a straight line over land.
Total internal Reflection In Action.
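For a rough sense of the numbers behind this, here's a back-of-the-envelope comparison. The distances (1,200 km line-of-sight vs. a 1,450 km cable route) and the ~0.66c speed of light in fiber are illustrative assumptions, not measured figures:

```cpp
// Back-of-the-envelope latency comparison (illustrative numbers only).
// Microwave travels near line-of-sight at roughly c; light in fiber
// moves at roughly 2/3 c (refractive index ~1.5) AND the cable rarely
// follows a straight path over land.
constexpr double c_km_per_ms     = 299.792;              // speed of light, km per millisecond
constexpr double fiber_km_per_ms = c_km_per_ms * 0.66;   // assumed speed in glass
constexpr double straight_km     = 1200.0;               // assumed line-of-sight distance
constexpr double fiber_route_km  = 1450.0;               // assumed longer cable route

constexpr double microwave_ms = straight_km    / c_km_per_ms;     // about 4 ms one way
constexpr double fiber_ms     = fiber_route_km / fiber_km_per_ms; // about 7.3 ms one way
```

Both the slower medium and the longer route contribute, so microwave wins by a few milliseconds each way, which is an eternity in HFT terms.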
Super cool talk! Thanks for sharing!
Exceptional talk. Congratulations and greetings from Mexico !
Just wondering what kinds of kernel-tuning techniques have been used in the automated trading system.
Plenty ;) Core isolation, power management, network bypass, interrupt control, etc.
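The userspace half of core isolation can be sketched like this (a Linux-specific sketch; `pin_to_core` is a made-up helper name). The kernel-side pieces mentioned above, such as `isolcpus`, `nohz_full`, and IRQ affinity, are boot/sysfs configuration rather than code:

```cpp
#ifndef _GNU_SOURCE
#define _GNU_SOURCE      // for pthread_setaffinity_np / sched_getcpu (Linux)
#endif
#include <pthread.h>
#include <sched.h>

// Pin the calling thread to a single core. On a tuned trading box that
// core would also be carved out of the scheduler (isolcpus/nohz_full)
// and have its interrupts steered elsewhere, so nothing else ever runs
// there and the hot loop is never preempted.
bool pin_to_core(int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set) == 0;
}
```

Power management tuning (disabling deep C-states and frequency scaling) then keeps that pinned core from dozing off between messages.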
Awesome video, thanks!
Very good talk. Thanks
Designing VR headsets faces similar challenges, as Windows & GPU's are designed for throughput, not low latency.
Good talk.
I was wondering about slide 30: I assume that the size of the inplace function is basically a buffer for all captures. Why does the "ClassOf64Bytes" need to be captured? It's a local variable created every time someone runs the functor. If that variable were outside the function, that would be a different story.
Hi, good catch! Indeed, in this example, ClassOf64Bytes needs to be captured by value rather than locally declared, in order to hit the static assert failure.
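To make that concrete, here is a small sketch of why a by-value capture blows the size budget of a fixed-capacity function wrapper. The ClassOf64Bytes name mirrors the slide, but the rest (make_closure, the exact sizes) is invented for illustration:

```cpp
#include <cstddef>

// A 64-byte payload, mirroring the ClassOf64Bytes from the slide.
struct ClassOf64Bytes { char data[64]; };

// Capturing by value makes the object a data member of the closure, so
// sizeof(lambda) is at least 64 bytes; a fixed-capacity wrapper like
// inplace_function<Sig, 32> would then fail its static_assert. A local
// variable declared *inside* the lambda body would live on the stack at
// call time and contribute nothing to the closure's size.
inline auto make_closure() {
    ClassOf64Bytes big{};
    return [big] { return sizeof(big); };  // closure stores all 64 bytes
}

static_assert(sizeof(decltype(make_closure())) >= 64,
              "by-value capture makes the closure at least 64 bytes");
```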
Thanks
Can you suggest a platform for implementing c++ automated trading?
He seems to like x86
The slides link is not opening. Does anyone have a link to this presentation's PDF, or any other source that could teach me about automated systems in C++?
Slides are not linked for old talks unfortunately, here is the link: meetingcpp.com/files/mcpp/2016/Carl%20Cook%20-%20The%20Speed%20Game-%20Automated%20trading%20in%20C++.pdf
@@MeetingCPP Thank you
Slide 4: it's Mike Acton, not Action.
Final Boss Thanks. Yeah, autocorrect. I fixed the slides but the wrong copy was uploaded.
What is the point being made with slide 22?
Hi, just that lambda functions can often be inlined (whereas a std::function may incur an allocation, some copying, and then a function call). In this slide, the lambda (on the right-hand side) can be completely inlined, and as such could result in direct writes to the payload of a network packet (i.e. zero overhead). As a bonus, the caller of the function doesn't really need to care about how the lambda (it supplies) is used.
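A small sketch of that difference (the function names and signatures here are invented for illustration): when the callable is a template parameter, the compiler sees the lambda's body at the call site and can inline it into direct stores, whereas std::function erases the type and forces an indirect call:

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>

// Template version: the lambda's concrete type is known at the call
// site, so `write(buf)` is usually inlined to direct stores into buf.
template <typename Writer>
std::size_t fill_payload(std::uint8_t* buf, Writer write) {
    return write(buf);
}

// Type-erased version: the wrapper hides the callable behind an
// indirect call (and may have heap-allocated the captured state),
// so the compiler generally cannot inline it.
std::size_t erased_fill(std::uint8_t* buf,
                        const std::function<std::size_t(std::uint8_t*)>& write) {
    return write(buf);
}
```

Both produce the same bytes; the difference only shows up in the generated code, which is exactly the kind of thing worth checking in a disassembler.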
Hi, C++ lambdas are great - well, certainly they are great fun. I think there's a temptation to overuse them that can lead to code that's difficult to read and probably bad in other ways.
However, one question: clearly lambdas are shorthand - you can do the same thing functionally using a local class with a member function. Do C++'s lambdas offer anything more than can be provided (much more verbosely) via a local class?
@@richardhickling1905 But here you are not trying to write readable code, you just want it to be as fast as it can be.
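To the question of lambdas vs. local classes: a lambda really is shorthand for a local class with an operator() and the captures as data members, as the sketch below shows (all names invented). One thing the sugar does add is that a generic lambda gets a templated operator(), which the standard forbids for a hand-written local class (local classes cannot have member templates):

```cpp
#include <string>

// The hand-written equivalent of a capturing lambda: a local class
// whose data member plays the role of the capture.
inline int twice_the_hard_way(int x) {
    struct Doubler {
        int captured;
        int operator()() const { return captured * 2; }
    };
    return Doubler{x}();
}

// The same thing as a lambda: shorter, but semantically equivalent.
inline int twice_with_lambda(int x) {
    return [x] { return x * 2; }();
}

// What the sugar adds: a generic lambda's operator() is a template,
// which you cannot write by hand in a *local* class.
inline auto make_generic() {
    return [](auto v) { return v + v; };   // works for int, double, std::string...
}
```

So beyond brevity, lambdas buy you generic call operators, capture-init expressions, and automatic capture lists, none of which a local class can fully replicate.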
Do you use any publicly available software tools for measurement? What about static analysis tools for identifying potential speedups?
Hi John, sometimes I use Intel's VTune, but it hasn't been great for low latency, to be honest (at least for me). Even basic tools such as KCachegrind can be useful for spotting things that are clearly wrong with the call path. And running strace is of course useful to make sure that system calls are not happening at unexpected times. Regarding static analysis tools, both cppcheck and clang have been useful, but mainly for catching bugs rather than identifying speedups. Perf's c2c tool looks promising for checking for false sharing, but then again, I try to avoid threading anyway, for many reasons, one of them being accidental false sharing! Speaking of perf, I find it really useful for checking the number of cache misses, stalls, etc. (this would be one of my favourite tools). I have been meaning to take a look at Intel's Architecture Code Analyzer, but if it's anything like VTune, I suspect I'll be somewhat disappointed by it.
That's not how you say Cache!
It's not easy to understand his accent :/
Hi, YouTube's auto-captioning works really well... give it a try!
Carl Cook LOL
Fair dinkum? I mean, it's not as if he sounds like he's from the back of Bourke. Don't spit the dummy, mate!