Back to Basics: Concurrency - Mike Shah - CppCon 2021

  • Published: Sep 7, 2024
  • cppcon.org/
    github.com/Cpp...
    ---
    You have spent your hard-earned money on a multi-core machine. But what does that mean for you as a programmer, or for the consumers of your software who also spent their hard-earned money on multi-core machines? Well, the deal is, you only get an increase in performance if you know how to take advantage of your hardware. Perhaps you have also heard something about the free lunch being over for programmers?
    In this talk we provide a gentle introduction to concurrency with the modern C++ std::thread library. We will introduce topics with pragmatic code examples that illustrate the ideas of threads and locks, showing how these programming primitives enable concurrency. In addition, we will show the common pitfalls to be aware of with concurrency: data races, starvation, and deadlock (the most extreme form of starvation!). But don’t worry--I will show you how to fix these concurrency bugs!
    The session will wrap up with a discussion and examples of other concurrency primitives and how to use them to write concurrent programs for common patterns (different types of locks, condition variables, promises/futures). Attendees will leave this session able to use threads and locks, and ready to start thinking about architecting multithreaded software. All materials and code samples will also be posted online.
    ---
    Mike Shah
    Michael D. Shah completed his Ph.D. at Tufts University in the Redline Research Group in 2017. His Ph.D. thesis advisor was Samuel Z. Guyer. Michael finished his Master's degree in Computer Science in 2013 at Tufts University and his Bachelor's in Computer Science Engineering at The Ohio State University in 2011. Currently Michael is a lecturer at Northeastern University.
    Michael discovered computer science at the age of 13 when googling "how do I make games". From that Google search, Mike has worked as a freelance game developer, worked in industry for Intel, Sony PlayStation, and Oblong Industries, and researched at The Ohio Supercomputer Center, to name a few. Mike cares about building tools to help programmers monitor and improve the performance of real-time applications, especially games.
    In Michael’s spare time he is a long distance runner, weight lifter, and amateur pizza maker.
    ---
    Videos Filmed & Edited by Bash Films: www.BashFilms.com
    YouTube Channel Managed by Digital Medium Ltd events.digital...

Comments • 35

  • @StartupSignals
    @StartupSignals 2 years ago +21

    Mike always has the best power points. My favorite Back to the Basics lecturer for sure 😀

    • @CppCon
      @CppCon 2 years ago +1

      Great to hear!

  • @jamessilva8331
    @jamessilva8331 1 year ago +9

    Mike, adding the popular concurrency patterns at the end was extremely helpful to me. Thanks for the great talk if you ever see this

    • @MikeShah
      @MikeShah 1 year ago

      Cheers -- thank you for the kind words James!

  • @FalcoGirgis
    @FalcoGirgis 11 months ago +1

    Great talk. One thing I'd mention, because you touched on, like, literally everything, is the thread_local keyword and TLS variables. :)

    • @MikeShah
      @MikeShah 3 months ago

      Agreed -- I'll add something on YouTube on thread_local later, as it is quite important :)

  • @miketag4499
    @miketag4499 2 years ago +6

    Great video, the back to basics series is very helpful. Appreciate it.

    • @CppCon
      @CppCon 2 years ago +2

      Glad it was helpful!

  • @Benben-ju5vo
    @Benben-ju5vo 1 month ago

    Please add chapter markers to the video's timeline!
    The joke at 48:29 made me smile: "When in doubt, add more code and complicated things" :D

  • @mahdies9620
    @mahdies9620 2 years ago +2

    Such a nice talk, thank you, dear Mike

    • @CppCon
      @CppCon 2 years ago

      Thanks for listening!

  • @tomasdzetkulic9871
    @tomasdzetkulic9871 2 years ago +1

    Two comments on this: 1. A data race is not just non-determinism. If a data race occurs, the behavior of the program is undefined, meaning that there are _no restrictions_ on the behavior of the program. 2. There is a bug in the condition_variable example, because wait() can unblock spuriously.

    • @tomasdzetkulic9871
      @tomasdzetkulic9871 2 years ago

      Also: The only thing you should be teaching beginners about atomics is to avoid them until they are very experienced and 100% know what they are doing.

    • @AG-ld6rv
      @AG-ld6rv 2 years ago +2

      @@tomasdzetkulic9871 I'd say atomics are some of the building-block knowledge. In a perfect world, everything would be pure functions, and you could spin up any number of threads supported by the hardware without a problem. However, the real world is often more complex, and we need more complicated tools, slower than that ideal case, to move forward.

    • @tomasdzetkulic9871
      @tomasdzetkulic9871 2 years ago

      @@AG-ld6rv Atomics are too low-level. When you need to write some code, you rarely do it in assembler (a low-level tool).
      When you need to write concurrent code, you rarely do it with atomics. And if you are not careful, you will make a lot of mistakes initially.

  • @avedissimracing9628
    @avedissimracing9628 2 years ago

    41:48 this code has a subtle but important bug - the value the condition_variable waits on either has to be atomic, or writes to it from another thread have to be protected by the same mutex the condition_variable waits on. Otherwise it might end in a deadlock in rare cases.
    Also, a wait on a cv without a predicate has to be done in a loop, as mentioned in another comment here

  • @huangdi7116
    @huangdi7116 2 years ago +1

    great session

  • @HrishiPisal
    @HrishiPisal 2 years ago +1

    Thread Example - Launching a Thread (1/2) [17:56]: on line 14, do we need to pass "&test" or just "test" to the thread constructor?

  • @vasiliynkudryavtsev
    @vasiliynkudryavtsev 2 years ago +2

    I think std::promise/std::future lack a simple mechanism for checking whether the promise was fulfilled.
    It is effectively "future.wait_for(0) == ready", since tasks are usually started in an event loop and are checked periodically in the same event loop as well.
    Any extra delay introduced by "wait_for" results in UI lag, which is unacceptable.

  • @Mantikor333
    @Mantikor333 9 months ago

    I still don't quite get the whole future/promise thing... Isn't it pretty much the same as spawning a new thread, doing my work there, and setting a variable when the work is finished? Then poll this variable from my calling thread (e.g. every 1 ms, like in the example).
    Is that all there is to it, or am I missing the point?

  • @VishalSharma-ys9wt
    @VishalSharma-ys9wt 2 years ago

    Isn't there a difference between a 'data race' and a 'race condition'?
    As per my understanding, a data race is a situation in which multiple threads can access a shared variable (without any synchronisation / the variable is not atomic) and at least one of those threads performs a write operation on that shared data, whereas in a race condition the output can change depending on the order of execution.
    Another major difference is that a data race is UB, whereas a race condition is not necessarily so.
    Also, tools like thread sanitiser will tell you only about data races and not about race conditions.

  • @cavesalamander6308
    @cavesalamander6308 2 years ago

    40:50 The technique of assigning 'result' and then the 'ready' flag can give an error if the CPU reorders the instructions: the CPU can change the order of operations if the result is the same from its point of view. In this particular case, the flag assignment may occur before the result assignment.

    • @MikeShah
      @MikeShah 1 year ago

      I'll have to take a close look. Perhaps should mark result and ready as volatile to prevent reordering.

    • @cavesalamander6308
      @cavesalamander6308 1 year ago

      @@MikeShah See "Memory barrier" on Wikipedia for more information.

  • @KapilKumar-ig6df
    @KapilKumar-ig6df 2 years ago

    At time 38:43, line 24, it should be th.join().

  • @haykanushkhachaturian
    @haykanushkhachaturian 2 years ago +1

    Great👍

  • @masheroz
    @masheroz 1 year ago

    Is it possible to lock an element of a vector?

  • @nhanNguyen-wo8fy
    @nhanNguyen-wo8fy 8 months ago

    55:20 pattern

  • @saipan1970
    @saipan1970 2 years ago +1

    Sir, please make the fonts a little bigger for the code; consider mobile viewers as well.

  • @jaggis4914
    @jaggis4914 2 years ago

    Too much text on the slides, Mike! Great explanation though.

  • @konstantinburlachenko2843
    @konstantinburlachenko2843 2 years ago

    Mike Shah and others, in the presentation you do not have a volatile qualifier and you do not have a memory fence - this is incorrect code

    • @tomasdzetkulic9871
      @tomasdzetkulic9871 2 years ago +8

      Not sure which example you are referring to, but std::mutex uses memory fences internally. Similarly calling std::atomic::operator++ implicitly uses the strongest memory ordering std::memory_order_seq_cst which also produces memory fences in the generated assembly.
      The volatile keyword is not useful for concurrent programming.

    • @konstantinburlachenko2843
      @konstantinburlachenko2843 2 years ago

      @@tomasdzetkulic9871 Thank you very much. To be honest, I also forget which slide I was referring to. While creating CAS/atomics for various hardware, I have needed volatile, memory fences, and sometimes special instructions. Without volatile I cannot really create my synchronization primitives in C/C++. (In my practice - I have co-created cuBLAS, libcudnn, PhysX, etc. at NVIDIA; this is state-of-the-art software in its domain, and I used volatile in the implementation of various synchronization primitives, but that practice is close to the metal)