GPU-Accelerated Fluid Dynamics - Petr Kodl | Podcast

  • Published: 4 Oct 2024
  • 🌎 Simcenter Engineering: go.sw.siemens....
    🎬 Simcenter RUclips: www.youtube.co...
    💌 My weekly science newsletter - jousef.substac...
    Hear Petr Kodl from Siemens talk about GPU-enabled acceleration of CFD simulation.
    Running Simcenter STAR-CCM+ on GPUs gives computational fluid dynamics (CFD) engineers the ability to simulate faster without sacrificing model complexity. One of the constant challenges for CFD engineers is maintaining a good level of simulation throughput.
    CPUs, ARM, GPUs: in an increasingly heterogeneous hardware landscape there are many choices, and simulation engineers need to identify the hardware that best fits their current needs.
    ONLINE PRESENCE
    ================
    🌍 My website - jousefmurad.com/
    💌 My weekly science newsletter - jousef.substac...
    📸 Instagram - / jousefmrd
    🐦 Twitter - / jousefm2
    SUPPORT MY WORK
    =================
    🧠 Subscribe for more free videos: bit.ly/2RLmMxq
    👉 Support my Channel: www.jousefmura....
    👕 Science Merch: engineered-min...-sprin...
    CONTACT:
    --------
    If you need help or have any questions or want to collaborate feel free to reach out to me via email: support@jousefmurad.com
    #gpu
    #engineering
    #fluidmechanics
    Podcast Recorded: March 4th, 2024 - Subscriber Release Count: 31,484.

Comments • 18

  • @alialabed6224
    @alialabed6224 5 months ago

    This is one of the best talks in this podcast series. Much appreciated.

    • @JousefM
      @JousefM  5 months ago

      Thanks mate!

  • @Ashwah1993
    @Ashwah1993 6 months ago +1

    Fantastic job, Jousef. Thanks for making this podcast.

    • @JousefM
      @JousefM  6 months ago +1

      Thanks my friend!

  • @damjangnjidic
    @damjangnjidic 6 months ago +4

    Great interview!

    • @JousefM
      @JousefM  6 months ago

      Thanks 🙂

  • @GenießenSupraq
    @GenießenSupraq 2 months ago

    Summary of Questions and Answers from the Podcast:
    Background and Motivation:
    Host: Can you introduce yourself and your background?
    Petr Kodl: Director of the architecture group at Siemens, originally focused on nuclear physics, then moved to HPC and GPU initiatives.
    Difference Between GPU and CPU:
    Host: What’s the difference between GPU and CPU for fluid dynamics simulations?
    Petr Kodl: GPUs have more cores and better throughput for simulations requiring parallel processing, like fluid dynamics. CUDA made it easier to program GPUs similarly to CPUs.
    Challenges and Intricacies:
    Host: What are the challenges in implementing fluid dynamics on GPUs?
    Petr Kodl: Managing large, existing codebases (~7 million lines), and ensuring the framework supports both CPUs and GPUs seamlessly.
    Development Journey:
    Host: How did the transition from CPU to GPU development start?
    Petr Kodl: Started with a prototype, focusing on memory management and loop optimization, then gradually integrated GPUs into existing frameworks.
    Accuracy and Energy Efficiency:
    Host: How do you ensure accuracy and efficiency in GPU code?
    Petr Kodl: By adhering to IEEE floating-point standards and optimizing for single precision when possible. GPUs are energy efficient due to less overhead compared to CPUs.
    Future and Trends:
    Host: What’s the future of GPU in fluid dynamics and HPC?
    Petr Kodl: The trend is moving towards more specialized chips for AI and HPC. C++ remains a crucial language for development.
    Advice for Beginners:
    Host: Advice for those starting in GPU programming?
    Petr Kodl: Learn C++ and follow the latest trends and frameworks from Nvidia and other leading institutions.
    Code Efficiency:
    Host: How do you handle performance benchmarks and efficiency?
    Petr Kodl: Provide guidelines but emphasize real-world testing and optimization based on specific use cases and workloads.
    Additional Points:
    Emphasized the importance of thorough testing and adapting to evolving technologies.
    Discussed how different scales of problems influence the choice between CPU and GPU.
    Highlighted the significance of Nvidia’s CUDA in facilitating GPU programming.
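    The GPU-vs-CPU point in the summary above — fluid solvers are dominated by independent per-cell updates that map naturally onto thousands of parallel threads — can be sketched with a toy 1D Jacobi smoothing step. This is an illustrative NumPy analogy, not code from STAR-CCM+: the vectorized form has the same "every cell updated independently" structure that a CUDA kernel exploits with one thread per cell.

    ```python
    import numpy as np

    def jacobi_step_loop(u):
        """Scalar loop: one cell at a time, the way serial CPU code is often written."""
        n = len(u)
        out = u.copy()
        for i in range(1, n - 1):
            out[i] = 0.5 * (u[i - 1] + u[i + 1])
        return out

    def jacobi_step_vectorized(u):
        """Data-parallel form: every interior cell is updated independently,
        which is the structure a GPU exploits with one thread per cell."""
        out = u.copy()
        out[1:-1] = 0.5 * (u[:-2] + u[2:])
        return out

    u = np.linspace(0.0, 1.0, 8)
    u[3] = 5.0  # perturb so the smoothing step actually changes values
    assert np.allclose(jacobi_step_loop(u), jacobi_step_vectorized(u))
    ```

    Both forms compute the same result; the difference is purely in how the work is expressed, which is why "loop optimization" comes up as a key step when porting CPU code to GPUs.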

  • @86baho
    @86baho 6 months ago

    Very interesting interview, and crucial questions were raised. I am working with OpenFOAM and have mixed feelings, since it runs exclusively on CPUs. As I understand it, GPUs were originally used exclusively for computer graphics calculations, and CFD engineers spotted their usefulness for number crunching in linear algebra. CPUs are overcomplicated and offer low parallelism; they are designed primarily to run complex logic, e.g. an OS. They can do number crunching, but parallelization and memory bandwidth are the bottleneck. Maybe in the future there will be a new unit (chip) dedicated to number crunching, a third main chip in your computer, who knows... I am personally struggling with all of this, because you need to know too much to write decent code; most likely it will be custom code without any commercial software. It would be interesting to ask Petr Kodl why exactly STAR-CCM+ is used. Why not OpenFOAM?

    • @hyperopt_
      @hyperopt_ 6 months ago +1

      STAR-CCM+ is the product of the company he works for (Siemens), so it is not surprising that they are not developing a GPU port for OpenFOAM :D

  • @mouna5415
    @mouna5415 6 months ago

    Interesting podcast!

  • @deepankersingh4368
    @deepankersingh4368 4 months ago

    Hey, I am working on a project in which I have to generate a mesh of approximately 1 crore (10 million) cells. What is the best way to increase simulation speed? Meshing is also taking a lot of time.

  • @jsierra88
    @jsierra88 5 months ago

    26:05 what paper is he talking about?

  • @carlosegea3064
    @carlosegea3064 6 months ago

    You should divide these podcasts into shorter videos based on the conversation topics. That would make it easier to find the topics we want to learn about. Thanks for the content.

    • @JousefM
      @JousefM  6 months ago

      Not the main prio atm but I used to do it for past episodes 🙂

  • @ProjectPhysX
    @ProjectPhysX 6 months ago

    I can only imagine the development cost of porting a bloated 7-million-line legacy CPU code to GPU, especially when the original authors are long retired. Porting is always troublesome, as you're biased toward the old implementation and tend to overlook critical code optimization techniques specific to the new hardware. Siemens will need many years to catch up with modern native (multi-)GPU codes, if they can ever achieve acceptable compute efficiency at all.

    • @Dong-on5jv
      @Dong-on5jv 6 months ago

      I can imagine that it would be an enormous amount of work, but they still have to do it to catch up. I hope we can have a framework that suits all kinds of hardware, even future ones; otherwise, switching to a GPU version means little.

    • @JousefM
      @JousefM  6 months ago +1

      Interesting take, Moritz!

    • @andersr9545
      @andersr9545 4 months ago

      I’ve heard that some GPU implementations of FVM use cells rather than faces as the entity to loop over when computing fluxes. To my understanding, you end up computing a lot of duplicate fluxes, since each cell computes all of its own fluxes, but this tradeoff is apparently worth it for some reason (maybe so that you don’t have to beware of race conditions?). Is this an example of how porting from legacy CPU code leads to suboptimal GPU code?
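      The gather-vs-scatter tradeoff described in this question can be shown with a minimal 1D sketch. This is an illustrative NumPy toy, not code from any production solver; the averaging flux function and names are made up for the example. The face loop computes each interior flux once but scatters it to two cells (a write conflict if parallelized), while the cell loop recomputes fluxes (duplicate work) but gives each iteration exclusive ownership of one output entry, so it parallelizes trivially.

      ```python
      import numpy as np

      def flux(uL, uR):
          """Toy numerical flux (simple average); a real FVM scheme would use
          upwinding, a Riemann solver, etc."""
          return 0.5 * (uL + uR)

      def residual_face_loop(u):
          """Face-based (scatter): each interior flux is computed once and added
          to both adjacent cells. Parallelizing over faces means two faces may
          write to the same cell -> race condition on a GPU."""
          n = len(u)
          res = np.zeros(n)
          for f in range(n - 1):       # face f sits between cells f and f+1
              F = flux(u[f], u[f + 1])
              res[f] -= F              # flux leaves the left cell
              res[f + 1] += F          # and enters the right cell
          return res

      def residual_cell_loop(u):
          """Cell-based (gather): each cell recomputes the flux at both of its
          faces. Every interior flux is evaluated twice (duplicate work), but
          each cell writes only its own entry, so iterations are independent."""
          n = len(u)
          res = np.zeros(n)
          for c in range(n):
              if c > 0:                # left face of cell c
                  res[c] += flux(u[c - 1], u[c])
              if c < n - 1:            # right face of cell c
                  res[c] -= flux(u[c], u[c + 1])
          return res

      u = np.array([1.0, 3.0, 2.0, 5.0])
      assert np.allclose(residual_face_loop(u), residual_cell_loop(u))
      ```

      Both loops produce identical residuals; on a GPU, the redundant flux evaluations in the gather form are often cheaper than the atomics or graph coloring needed to make the scatter form safe.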