Back to Basics: C++ Concurrency - David Olsen - CppCon 2023

  • Published on Feb 15, 2024
  • cppcon.org/
    ---
    Back to Basics: C++ Concurrency - David Olsen - CppCon 2023
    github.com/CppCon/CppCon2023
    Concurrent programming unlocks the full performance potential of today's multicore CPUs, but also introduces the potential pitfalls of data races and random, difficult-to-debug application failures. This back-to-basics session will provide a foundation for concurrent programming, focusing on std::thread and mentioning other ways to introduce concurrency and parallelism. A lot of time will be spent on how to recognize data races, and how to avoid them using mutexes, atomic variables, and other standard constructs.
    Attendees will come away knowing how to write simple and correct concurrent programs and will have a foundation to build on when developing more complex concurrent applications.
    ---
    David Olsen
    David Olsen has more than two decades of software development experience in a variety of programming languages and development environments. For the last seven years he has been the lead engineer for the NVIDIA HPC C++ compiler, focusing on running standard parallel algorithms on GPUs. He is a member of the ISO C++ committee, where he was the champion for the extended floating-point feature in C++23.
    ---
    Videos Filmed & Edited by Bash Films: www.BashFilms.com
    YouTube Channel Managed by Digital Medium Ltd: events.digital-medium.co.uk
    ---
    Registration for CppCon: cppcon.org/registration/
    #cppcon #cppprogramming #cpp #concurrency
  • Science & Technology

Comments • 19

  • @bunpasi
    @bunpasi 2 months ago +9

    17:58 Drinking and breathing race condition. Happens to the best of us 😅.

  • @anon_y_mousse
    @anon_y_mousse 3 months ago +4

    This is probably the most complete video on concurrency and parallelism there is. Definitely going to bookmark this to recommend. The only thing it needs is more time to flesh out all of the other concepts he brought up at the end, and to talk about compiler support. Otherwise, I love that he provides examples, and that they actually work. Compiler Explorer is a Godsend for this.

  • @VoidloniXaarii
    @VoidloniXaarii 3 months ago

    Fascinating complications! Thank you very much for unpacking them 🙏

  • @xrtgavin
    @xrtgavin a month ago +1

    My takeaway
    Concurrency: Multiple logical threads of execution with some inter-task dependencies
    - Doing things at the same time
    - Some things need to happen before other things
    - Some things can't happen at the same time
    Parallelism: Multiple logical threads of execution with no inter-task dependencies
    Data Race:
    1. Two or more threads access the same memory
    2. At least one access is a write
    3. The threads do not synchronize with each other
    A data race is undefined behavior
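
    A minimal sketch of those three conditions (a hypothetical counter example, not code from the talk): both threads write the same plain int with no synchronization, which is a data race and therefore undefined behavior, while the std::atomic counter is fine.

    #include <atomic>
    #include <iostream>
    #include <thread>

    int plain_counter = 0;                 // shared, no synchronization
    std::atomic<int> atomic_counter{0};    // shared, synchronized via std::atomic

    int main() {
        auto work = [] {
            for (int i = 0; i < 100000; ++i) {
                ++plain_counter;    // data race: two threads write, no synchronization -> UB
                ++atomic_counter;   // OK: atomic read-modify-write, no data race
            }
        };
        std::thread t1{work};
        std::thread t2{work};
        t1.join();
        t2.join();
        std::cout << plain_counter << ' ' << atomic_counter << '\n';
    }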

  • @briansalehi
    @briansalehi 3 months ago +7

    Great talk for "Back to Basics of C++ Concurrency". There's also a great talk on the very same topic by Anthony Williams if you haven't watched it yet. Now, I'll be waiting for "Back to Advanced C++ Concurrency".

    • @ilieschamkar6767
      @ilieschamkar6767 3 months ago

      No way, thanks for the recommendation!
      I'm reading his book about concurrency, so that's a plus :D

  • @balajimarisetti4245
    @balajimarisetti4245 2 months ago

    That was a really good talk on concurrency! Thanks David and CppCon.

  • @yurkoflisk
    @yurkoflisk 3 months ago +1

    42:47 To avoid holding locks longer than necessary, I think each thread can hold only one lock for the first change_data, *unlock it*, and *then* (re)lock both together:
    Thread 1
    {
        std::scoped_lock lock{mutex_a};
        change_data(data_a);
    }
    {
        std::scoped_lock lock{mutex_a, mutex_b};
        change_data(data_a, data_b);
    }
    Thread 2
    {
        std::scoped_lock lock{mutex_b};
        change_data(data_b);
    }
    {
        std::scoped_lock lock{mutex_a, mutex_b};
        change_data(data_a, data_b);
    }

    • @David_Olsen
      @David_Olsen 3 months ago +2

      That will work in some situations, but not all. It depends on the relationship between the two calls to 'change_data' in each thread. If 'change_data(data_a)' leaves 'data_a' in an inconsistent state and 'change_data(data_a, data_b)' restores 'data_a' to a consistent state, then thread 1 cannot release 'mutex_a' in between the two calls, but needs to hold the mutex for the entire time that 'data_a' is being manipulated. ('change_data' is just a placeholder for some code that manipulates data. Don't assume that it is an independent, self-consistent function.)
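
      A minimal sketch of that fallback, reusing the names from the comment above (change_data, data_a, data_b, mutex_a, mutex_b are placeholders): when data_a must stay protected across both calls, thread 1 can take both mutexes up front with std::scoped_lock, which stays deadlock-free at the cost of holding mutex_b longer than strictly necessary.

      #include <mutex>

      std::mutex mutex_a, mutex_b;
      int data_a = 0, data_b = 0;        // placeholders, as in the discussion
      void change_data(int&) {}          // placeholder
      void change_data(int&, int&) {}    // placeholder

      void thread1_work() {
          // Both mutexes are acquired together, deadlock-free, so data_a
          // stays protected even if the first call leaves it inconsistent.
          std::scoped_lock lock{mutex_a, mutex_b};
          change_data(data_a);             // data_a may be inconsistent here
          change_data(data_a, data_b);     // data_a is consistent again afterwards
      }
      // thread2_work would do the same, always taking the two mutexes together.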

  • @What-he5pr
    @What-he5pr 3 months ago +1

    Oh yeah now this is the topic.

  • @abuyoyo31
    @abuyoyo31 3 months ago

    (1) Great talk! (2) 46:45 - for this particular example, I believe marking the flag as volatile would have been better than atomic. That would have prevented compilers from optimizing the while-loop away, and bools are inherently atomic anyway. Am I missing something?

    • @David_Olsen
      @David_Olsen 3 months ago +4

      volatile does not have the memory visibility guarantees that std::atomic has. There is no guarantee that the child thread will see the change that the main thread made to the flag variable. Making flag volatile will most likely work in practice, but making flag atomic is guaranteed to work.
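
      A minimal sketch of the 46:45 flag pattern (names here are hypothetical): the atomic store in the main thread is guaranteed to become visible to the atomic loads in the child thread, which volatile alone does not promise.

      #include <atomic>
      #include <chrono>
      #include <thread>

      std::atomic<bool> flag{false};   // std::atomic provides the visibility guarantee

      void child() {
          while (!flag.load()) {       // atomic load: guaranteed to eventually see the store
              std::this_thread::sleep_for(std::chrono::milliseconds(1));
          }
      }

      int main() {
          std::thread t{child};
          flag.store(true);            // atomic store: made visible to the child thread
          t.join();
      }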

  • @alexeysubbota
    @alexeysubbota 3 months ago

    It's very funny to hear advice to use jthread when the latest LLVM libc++ 17 still doesn't have it. It seems that C++ lives its own life while LLVM falls far behind the language.
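
    For reference, a minimal sketch of what the jthread advice buys where a C++20 standard library ships it (names here are hypothetical): std::jthread requests a stop and joins automatically in its destructor, so the manual join that std::thread requires goes away.

    #include <stop_token>
    #include <thread>   // std::jthread (C++20; not yet in libc++ 17, as noted above)

    int main() {
        std::jthread worker([](std::stop_token st) {
            while (!st.stop_requested()) {
                // do some work until a stop is requested
            }
        });
        // worker.request_stop() and worker.join() happen automatically
        // when 'worker' goes out of scope.
    }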

  • @snbv5real
    @snbv5real 3 months ago +3

    The standard parallel algorithms can't "just be used", right? You have to bring an implementation from the *outside* to make them work on Clang and GCC, right? This has basically made them dead in the water for a lot of code that doesn't *just* target MSVC, because of the reliance on Intel TBB, which doesn't come bundled with the codebase and doesn't work on ARM. Until it becomes a *zero install* solution, parallel algorithms for all popular platforms on Linux are basically not a "just use it and forget it" thing, as this presentation implies.

    • @12345bvbfan
      @12345bvbfan 3 months ago

      TBB has been a long-standing issue for GCC, just as std::thread was for years. But the same doesn't apply to Clang, just as it didn't for std::thread.

    • @nigelstewart9982
      @nigelstewart9982 3 months ago

      It's C++17 (std::for_each, for example). Shouldn't it "just work", even on ARM, Linux, or Android?

    • @anon_y_mousse
      @anon_y_mousse 3 months ago

      It's weird, but it just uses pthreads. At least it does on all the platforms I've got access to right now. Though, I suppose you could test that with CE, but I'd wager it just uses pthreads as well, since it has existed for decades, is completely stable and just works.

    • @AlfredoCorrea
      @AlfredoCorrea 3 months ago +1

      Yes, there is a quality-of-implementation issue, but at least there is a specification to work towards. Also, for context, as David mentioned, he works on NVIDIA's C++ compiler nvc++, which does come bundled with a parallel STL (which is independent of TBB and, incidentally, can run on the GPU if the conditions are right).

    • @jasperlanda5276
      @jasperlanda5276 3 months ago

      You can use them with minimal setup. Some compilers already come with a TBB implementation. If yours doesn't, then on Linux you can install libtbb-dev and add it to your project. Then it is just a matter of linking against -ltbb, and boom goes the parallel dynamite.
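
      A minimal sketch of that setup (the libtbb-dev / -ltbb details above are the commenters' claims, not verified here): a C++17 parallel algorithm call that, with GCC or Clang on Linux using libstdc++, typically relies on TBB being available at link time.

      // Build sketch (Linux, libstdc++): g++ -std=c++17 example.cpp -ltbb
      #include <algorithm>
      #include <execution>
      #include <numeric>
      #include <vector>

      int main() {
          std::vector<int> v(1'000'000);
          std::iota(v.begin(), v.end(), 0);
          // Each element is modified independently, so there is no data race.
          std::for_each(std::execution::par, v.begin(), v.end(),
                        [](int& x) { x *= 2; });
      }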