Physics Informed Neural Networks (PINNs) [Physics Informed Machine Learning]

  • Published: 23 Dec 2024

Comments • 77

  • @rehankhan-gn2jr
    @rehankhan-gn2jr 7 months ago +26

    The way of teaching is highly beneficial and outstanding. Thank you, Steven!

  • @seshganesh9546
    @seshganesh9546 7 days ago

    This explanation requires infinite thumbs up. Intuitive explanation always wins

  • @jiaminxu7275
    @jiaminxu7275 6 months ago +33

    Hi Prof. Brunton, I am a Ph.D. student at UT Austin majoring in Mechanical Engineering with a specialization in dynamical systems and control. Your videos have been helping me ever since I began my Ph.D., either by giving me a deeper understanding of fundamental knowledge or by broadening my horizons. I just want to express my great gratitude to you again, and I hope I can meet you at a conference so that I can say thank you in person.

    • @The_Quaalude
      @The_Quaalude 6 months ago +5

      Getting a PhD and learning from RUclips is wild 😭

    • @arnold-pdev
      @arnold-pdev 6 months ago +1

      ​@@The_Quaalude Why?

    • @The_Quaalude
      @The_Quaalude 6 months ago +2

      @@arnold-pdev bro is paying all that money just to learn something online for free

    • @kaihsiangju
      @kaihsiangju 6 months ago +12

      @@The_Quaalude Usually, PhD students in the U.S. get paid and do not need to pay tuition.

    • @Sumpydumpert
      @Sumpydumpert 6 months ago +1

      I threw some concepts up on Reddit (grand unified theory) and some other places for a binary growth function based on how the internet works across all these different platforms

  • @alessandrobeatini1882
    @alessandrobeatini1882 7 months ago +17

    This is hands down one of the best videos I've seen on RUclips. Great work, keep it up!

  • @juandiegotoscano_brown
    @juandiegotoscano_brown 2 months ago +1

    Thank you so much, Prof. Brunton, for recommending my video on PINNs! It's an honor to have my work mentioned on your channel. I appreciate your support and your incredible job in making advanced topics accessible to the community!

  • @code2compass
    @code2compass 6 months ago +5

    Steve your videos are always helpful, clear and concise. Thank you so much for such amazing content. You are my hero

  • @MLDawn
    @MLDawn 4 months ago +2

    At 29:25, the problem lies in the way backpropagation works! That is, even though the loss function is physics-informed, the learning algorithm, backpropagation, is far from physics-informed, which means the neuronal message passing in a traditional neural net does not resemble how the brain works. More specifically, the gradient trajectories used in backprop are shared by both terms of the PINN loss! This means that while minimizing term 1, the network forgets term 2 and vice versa. That is why you need to artificially balance the MLP and physics parts with some coefficient! This is not a proper solution, as it addresses the problem after it has already occurred! I would suggest a fundamental alteration of the dynamics of training: NOT using backprop, but instead the Free Energy Principle and, in short, local Hebbian learning! This should create meaningfully factorized portions of the network that specialize in minimizing different parts of your loss without constantly being overwritten (i.e., no catastrophic forgetting).
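
The two-term loss and balancing coefficient discussed in this comment can be made concrete with a toy example. Below is a minimal sketch (not from the video; all values are illustrative) of a composite objective with a data term and a physics term for u'' + u = 0, weighted by a coefficient `lam`. A linear model on a grid is used so the minimizer is available in closed form, isolating the balancing question from any training algorithm.

```python
import numpy as np

# Toy illustration of a two-term PINN-style loss:
#   L(u) = ||M u - y||^2  +  lam * ||P u||^2
# where M samples u at a few observation points and P is a finite-difference
# residual for the physics u'' + u = 0 (true solution: cos t).
n = 101
t = np.linspace(0.0, 2.0 * np.pi, n)
h = t[1] - t[0]

# Physics operator: (P u)_i ~ u''(t_{i+1}) + u(t_{i+1}) at interior nodes
P = np.zeros((n - 2, n))
for i in range(n - 2):
    P[i, i] += 1.0 / h**2
    P[i, i + 1] += -2.0 / h**2 + 1.0   # -2/h^2 from u'', +1 from the u term
    P[i, i + 2] += 1.0 / h**2

# Sparse observations of the true solution cos t
obs = np.array([0, 20, 50, 80, 100])
M = np.zeros((len(obs), n))
M[np.arange(len(obs)), obs] = 1.0
y = np.cos(t[obs])

# The balancing coefficient lam trades data fit against physics residual;
# for this quadratic loss the minimizer solves the normal equations.
lam = 1e-3
A = M.T @ M + lam * P.T @ P
u = np.linalg.solve(A, M.T @ y)

err = np.max(np.abs(u - np.cos(t)))    # physics fills in between data points
```

Varying `lam` here plays exactly the role of the hand-tuned coefficient the comment objects to: too small and the gaps between observations are unconstrained, too large and the data is ignored.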

  • @markseagraves5486
    @markseagraves5486 6 months ago +1

    Very helpful Steven. I work in consciousness studies and find too often the math is written off as too complicated. On the other side, many computational scientists may write off consciousness studies as too ethereal to be of much value. Bridging these two worlds with insight and rigor, I feel advances our understanding of both artificial and human intelligence. You have contributed to this effort here. Thank you.

  • @ryansoklaski8242
    @ryansoklaski8242 6 months ago +8

    I would love to see a video on Universal ODEs (which leverages auto-diff through diffEQ solvers). Chris Rackauckas' work in the Julia language on these methods has been striking - would love to see your take on it.

    • @Eigensteve
      @Eigensteve  6 months ago +7

      Already filmed and in the queue :)

    • @ryansoklaski8242
      @ryansoklaski8242 6 months ago +1

      @@Eigensteve I'm so excited to hear this.
      I recommend you so highly to my students and colleagues. I just wish I had your lessons when I was a college student way back when. Thanks for everything.

  • @aliabdollahian1465
    @aliabdollahian1465 3 months ago

    Truly great explanation! It really helps me understand the concepts deeply.
    You're a hero, Steve! Thank you for your highly beneficial, outstanding, and, most importantly, free teaching! ❤

  • @abhisheksaini5217
    @abhisheksaini5217 7 months ago +5

    Thank you, Professor.😃

  • @mithundeshmukh8
    @mithundeshmukh8 6 months ago +24

    Please share the references; only one link is visible.

    • @tillsteh7273
      @tillsteh7273 6 months ago +3

      Dude they are literally in the video. Just use google.

    • @DrakenRS78
      @DrakenRS78 6 months ago

      Also - take a look at his textbook for further reference

  • @nandhumon2377
    @nandhumon2377 3 months ago

    Great video, and I always enjoy your presentations. I think loss balancing for PINNs should have been included in this too.

  • @THEPAGMAN
    @THEPAGMAN 6 months ago

    This is really helpful; if only you had posted it sooner! Thanks

  • @rudypieplenbosch6752
    @rudypieplenbosch6752 6 months ago

    I was waiting for this; hope to see more about this subject. Thanks a lot.

  • @moisesbessalle
    @moisesbessalle 6 months ago +1

    Can't you also clip/trim the search space to the possible range of output values, to speed things up before inference? So, for example, the velocities will be positive, with values less than some threshold that depends on your setting?
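
One common way to realize the range restriction suggested here is to squash the network's raw output into a known physical interval with a scaled sigmoid, so out-of-range predictions are impossible by construction. A minimal sketch (an illustrative helper, not from the video; the range values are made up):

```python
import numpy as np

# Map an unconstrained network output "raw" into the physical range [lo, hi]
# via a scaled sigmoid, so predictions can never leave the feasible interval.
def bounded_output(raw, lo, hi):
    return lo + (hi - lo) / (1.0 + np.exp(-raw))

raw = np.array([-50.0, -1.0, 0.0, 1.0, 50.0])   # unconstrained activations
v = bounded_output(raw, 0.0, 30.0)              # e.g. a speed known to lie in [0, 30]
```

Note that this constrains what the model can output rather than speeding up inference per se: the forward pass costs the same, but the optimizer no longer has to discover the feasible range on its own.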

  • @reversetransistor4129
    @reversetransistor4129 6 months ago +2

    Nice, kinda gives me ideas to mix control theories together.

  • @pantelisdogoulis8662
    @pantelisdogoulis8662 3 months ago

    Thanks a lot for the video!
    I would like to ask whether you have encountered PINNs for solving systems described by simple algebraic equations, with no time parameter present.

  • @anthonymiller6234
    @anthonymiller6234 6 months ago

    Awesome video again Steve. Thanks so much.

  • @thepanzymancan
    @thepanzymancan 6 months ago

    Asking specifically with regard to the spring-mass-damper system: how well does the trained NN perform when you give it different initial values than the ones used for training? In general, when you have the ODEs of a mechanical system, can you train the NN (or other architecture) with just one data set of the system doing its thing (one that captures both transient and steady-state dynamics), or do you need different "runs" of the system exploring many combinations of states for the NN to be generalizable in the end? I want to start exploring the use of PINNs for my research and would like to hear PINN users' opinions and experiences. Thanks!

    • @Jononor
      @Jononor 6 months ago

      I recommend testing it out yourself! Great way of getting into it, building intuition and experience on simplified problems

  • @luc-nh5lo
    @luc-nh5lo 3 months ago

    Good video! I'm starting to see more about PINN, I hope one day I'll do a master's degree at an American university like MIT or Stanford, and your video helped me, thanks (:

  • @mostafasayahkarajy508
    @mostafasayahkarajy508 6 months ago

    Thank you very much for the lecture. I am looking forward to your next lecture on this topic.

  • @MariaHeger-tb6cv
    @MariaHeger-tb6cv 6 months ago

    I was thinking about your comment that the rules of physics become expressions to be optimized. Unfortunately, I think they are absolute rules that should be enforced at every stage of the process. Or maybe only at the last step? It's like allowing an accountant to make errors on the grounds that the overall performance is better.

  • @valgorbunov1353
    @valgorbunov1353 3 months ago

    Great video as always. Quick question: you said you would include resources in the description, but I don't see any links to the tutorials, only a link to the original paper describing PINNs. Am I looking in the wrong section?
    I was able to search for the sources you referenced thanks to the description, but I think actual links would help other viewers.

  • @sedenions
    @sedenions 6 months ago

    Have you made a video on embedding and fitting networks for running simulation inference?

  • @AndrewConsroe
    @AndrewConsroe 6 months ago

    PINN foundation models, even if domain-specific at first, would be really cool. I see one paper from a quick Google search with some early positive results. Even if you do have to fine-tune to your problem, it would beat training from scratch for every new application. I wonder if the architecture could be modified to separate the physics from the data to make the fine-tuning more effective/efficient. Do we have more insight into the phase space of nets with low/zero physics loss?

  • @caseybackes
    @caseybackes 6 months ago

    I knew someone would end up working on this soon. Really excited to see some sophisticated applications!

  • @Sumpydumpert
    @Sumpydumpert 6 months ago +1

    Loved the video ❤️❤️

  • @blacklabelmansociety
    @blacklabelmansociety 6 months ago

    Hi Professor Steve. I’d love to see a series on Transformers. Thanks for your content, greetings from Brazil.

  • @alexanderskusnov5119
    @alexanderskusnov5119 6 months ago

    What about Kolmogorov-Arnold networks (KAN)?

  • @drozdchannel8707
    @drozdchannel8707 6 months ago

    Great video! It may be useful to do another video about neural operators. As far as I know, they are more stable and faster on many physical tasks.

  • @nafisamehtaj8779
    @nafisamehtaj8779 6 months ago

    Prof. Brunton, it would be a great help if you could cover neural operators (DeepONets) in one of your videos. Thanks for all the amazing videos, making learning easier for grad students.

  • @calvinholt6364
    @calvinholt6364 6 months ago

    This is much easier to comprehend than the course given by the author GK. He should just point us to you. 😅

  • @alshahriarbd
    @alshahriarbd 6 months ago

    I think you forgot to put the link to the PyTorch example tutorials in the description.

  • @clementboutaric3952
    @clementboutaric3952 5 months ago +1

    The fact that writing the physics into the loss function doesn't enforce it, but rather suggests it, can be a good thing if the hypotheses that lead to the NS equations (incompressible Newtonian fluid) start to become less solid.

  • @ayushshukla9959
    @ayushshukla9959 2 months ago

    I am really very sorry, sir, but I am unable to deduce how PINNs replace CFD, and what the difference is, as I have to put them in a project.

  • @zfrank3777
    @zfrank3777 5 months ago

    Will there be a problem if the real system is chaotic?

  • @Anorve
    @Anorve 6 months ago

    Fantastic! As always

  • @victormurphy3511
    @victormurphy3511 6 months ago

    Great video. Thank you.

  • @mintakan003
    @mintakan003 6 months ago

    Is there anything that works well for chaotic systems?

    • @arnold-pdev
      @arnold-pdev 6 months ago

      Think about what the definition of "chaos" is, and you'll have your answer.

  • @notu483
    @notu483 6 months ago

    What if you use KAN instead of MLP?

    • @arnold-pdev
      @arnold-pdev 6 months ago

      Sounds like the start of a research question

  • @muthukamalan.m6316
    @muthukamalan.m6316 6 months ago

    Wonderful content; any code sample would be helpful.
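
Since a code sample was requested: below is a minimal, self-contained sketch of the collocation idea behind PINNs, applied to the spring-mass-damper ODE m u'' + c u' + k u = 0 from the video. A polynomial model stands in for the neural network so the whole thing fits in a few lines and solves in closed form; all parameter values are made up for illustration, and this is not code from the video.

```python
import numpy as np

# Fit the spring-mass-damper ODE  m u'' + c u' + k u = 0,  u(0)=1, u'(0)=0,
# with a polynomial model u(t) = sum_j a_j t^j, by least squares on the
# ODE residual at collocation points plus the initial conditions.
m, c, k = 1.0, 0.4, 4.0
deg = 10
tc = np.linspace(0.0, 1.0, 60)              # collocation points

def monomials(t, d):
    """Values and first/second derivatives of the basis t^j, j = 0..d."""
    B0 = np.vstack([t ** j for j in range(d + 1)]).T
    B1 = np.zeros_like(B0)
    B2 = np.zeros_like(B0)
    for j in range(1, d + 1):
        B1[:, j] = j * t ** (j - 1)
    for j in range(2, d + 1):
        B2[:, j] = j * (j - 1) * t ** (j - 2)
    return B0, B1, B2

B0, B1, B2 = monomials(tc, deg)
R = m * B2 + c * B1 + k * B0                # rows of the ODE residual

# Initial conditions, weighted so they are (nearly) enforced
w = 10.0
ic_u = np.zeros(deg + 1); ic_u[0] = 1.0     # u(0)  = a_0
ic_v = np.zeros(deg + 1); ic_v[1] = 1.0     # u'(0) = a_1
A = np.vstack([R, w * ic_u, w * ic_v])
b = np.concatenate([np.zeros(len(tc)), [w * 1.0, 0.0]])
a, *_ = np.linalg.lstsq(A, b, rcond=None)

# Compare with the exact underdamped solution
tt = np.linspace(0.0, 1.0, 101)
T0, _, _ = monomials(tt, deg)
u_fit = T0 @ a
sig = c / (2.0 * m)
wd = np.sqrt(k / m - sig ** 2)
u_exact = np.exp(-sig * tt) * (np.cos(wd * tt) + sig / wd * np.sin(wd * tt))
err = np.max(np.abs(u_fit - u_exact))
```

A real PINN replaces the polynomial with a neural network and obtains the derivatives by automatic differentiation, but the structure of the objective (residual at collocation points plus initial/boundary terms) is the same.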

  • @cfddoc
    @cfddoc 6 months ago

    no audio?

  • @arbor318
    @arbor318 5 months ago

    The idea is cool, but I wonder how truly effective it is, because once you add a penalty function based on physics, you have probably removed a lot of solutions suggested by the neural network.

  • @Obbe79
    @Obbe79 6 months ago

    PINNs usually require more training. A lot of attention must be given to activation functions.

  • @MyrLin8
    @MyrLin8 6 months ago

    excellent. thanks :)

  • @commonwombat-h6r
    @commonwombat-h6r 6 months ago

    very nice!

  • @The_Quaalude
    @The_Quaalude 6 months ago +4

    Who else is high af rn⁉️

  • @googleyoutubechannel8554
    @googleyoutubechannel8554 3 months ago

    This feels kind of backwards from what (I'd guess) NNs could do for physics. Wouldn't you want to use NNs to discover better fundamental relationships, letting them have a go tabula rasa on a huge amount of raw 'agnostic' data? So many physics models have problems being useful, are statistics, or are hand-waving spherical-cow models; heck, most physics is a bunch of properties and operators developed before computers even existed. Why not use the power of NNs to try to discover better, more useful dynamics, better _fundamental properties and operators_, instead of using them as a sort of shitty solver?

    • @johnmorrell3187
      @johnmorrell3187 3 months ago +1

      Two thoughts in response:
      First, for a lot of the problems mentioned here, like fluid flow, we do have very good PDEs that describe the problem very intuitively but are very difficult to solve. So the existing equation is good, and we're not really struggling to explain the physics; it's just hard to work with.
      Second, even if the NN can learn some novel equation from, for example, lots of measured data, there's usually no way to get the equation OUT of the NN in any useful form. Say I'm looking at some particle physics problem where I have tons of data but no good equation, and I manage to get an NN to predict new data well. That NN has clearly learned some useful relationship, but there's nothing a physicist could take from the NN's parameters and generalize; the solution is not useful or human-readable beyond its predictive power.

    • @googleyoutubechannel8554
      @googleyoutubechannel8554 3 months ago

      ​@@johnmorrell3187 You're being tricked by math notation and a hundred years of hubris; you can formulate almost any relationship as a PDE, regardless of how well you understand it, if you can find a single relation between two (made-up) properties. 'PDEs that are hard to solve' is identical to 'shitty model'.

  • @Sumpydumpert
    @Sumpydumpert 6 months ago

    Wonder how AI is gonna use this?

  • @alexroberts6416
    @alexroberts6416 6 months ago

    I'm sorry, what? 😁

  • @arnold-pdev
    @arnold-pdev 6 months ago +1

    PINNs have to be one of the most over-hyped ML concepts... and that's stiff competition.

    • @arnold-pdev
      @arnold-pdev 6 months ago

      On one level, it's an unprincipled way of doing data assimilation. On another level, it's an unprincipled way of doing numerical integration. Yawn.
      Great vid tho!

  • @SylComplexDimensional
    @SylComplexDimensional 5 months ago

    All of your shit from yesterday forward won’t get seen

  • @ma8695t
    @ma8695t 2 days ago

    This series of courses is a copy of others who already made videos 2 years ago. Plagiarism!!
    You're doing a good job; however, you pretend that you invented this topic.