Neural ODEs (NODEs) [Physics Informed Machine Learning]

  • Published: 26 Jun 2024
  • This video describes Neural ODEs, a powerful machine learning approach for learning ODEs from data.
    This video was produced at the University of Washington, and we acknowledge funding support from the Boeing Company.
    %%% CHAPTERS %%%
    00:00 Intro
    02:09 Background: ResNet
    05:05 From ResNet to ODE
    07:59 ODE Essential Insight / Why ODE Outperforms ResNet
    09:05 ODE Essential Insight, Rephrase 1
    09:54 ODE Essential Insight, Rephrase 2
    11:11 ODE Performance vs ResNet Performance
    12:52 ODE extension: HNNs
    14:03 ODE extension: LNNs
    14:45 ODE Algorithm Overview / ODEs and Adjoint Calculation
    22:24 Outro
  • Science

Comments • 31

  • @smustavee 20 days ago +19

    I have been playing with NODEs for a few weeks now. The video is really helpful and intuitive. Probably it is the clearest explanation I have heard so far. Thank you, Professor.

  • @mohammadxahid5984 1 month ago +7

    Thanks Dr. Brunton for making a video on Neural ODEs. Came across this paper as soon as it came out back in 2018. It still goes over my head, particularly the introduction of the second differential equation / adjoint sensitivity method. Would really appreciate it if you explained it in detail.

  • @astledsa2713 19 days ago +1

    Love your content! Went through the entire complex analysis videos, and now gonna go through this one as well!

  • @as-qh1qq 19 days ago

    Amazing review. Engaging and sharp

  • @anthonymiller6234 16 days ago

    Awesome video and very helpful. Thanks

  • @joshnicholson6194 20 days ago +2

    Very cool!

  • @hyperplano 20 days ago +12

    So if I understand correctly, ODE networks fit a vector field as a function of x by optimizing the entire trajectory along that field simultaneously, whereas the residual network optimizes one step of the trajectory at a time?
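    That reading matches the video: a ResNet layer learns one discrete Euler-style update, while a Neural ODE learns the vector field itself and fits the whole integrated trajectory. Below is a minimal sketch of the contrast, assuming PyTorch plus the torchdiffeq package; all class and variable names are illustrative, not code from the video.

```python
# Minimal sketch, assuming PyTorch and torchdiffeq (pip install torchdiffeq).
import torch
import torch.nn as nn
from torchdiffeq import odeint

class VectorField(nn.Module):
    """Learnable f in dx/dt = f(x)."""
    def __init__(self, dim=2, width=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, width), nn.Tanh(),
                                 nn.Linear(width, dim))
    def forward(self, t, x):        # autonomous field: t is accepted but unused
        return self.net(x)

f = VectorField()
x0 = torch.randn(16, 2)            # a batch of initial conditions

# ResNet-style: a single learned update, x_{k+1} = x_k + f(x_k)
x_next = x0 + f(torch.tensor(0.0), x0)

# Neural-ODE-style: integrate the same field and fit the whole trajectory
t = torch.linspace(0.0, 1.0, 10)
traj = odeint(f, x0, t)            # shape (10, 16, 2); a loss can touch every x(t_i)
```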

  • @daniellu9499 20 days ago

    Very interesting course, love such great videos...

  • @codybarton2090 20 days ago +2

    I love it, great video!

  • @kepler_22b83 17 days ago

    So basically raising awareness that there are better approximations to "residual" integration. Thanks for the reminder.
    From my course on numerical computation, using better integrators is actually better than making smaller time steps, raising the achievable accuracy given a limited number of bits for your floating-point numbers.
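    A quick numerical illustration of that point on the toy problem dx/dt = -x, whose exact solution is e^{-t} (plain NumPy; the error values in the comments are approximate):

```python
# With the same number of steps, a higher-order integrator (RK4) is far more
# accurate than Euler, so improving the integrator often beats shrinking h.
import numpy as np

f = lambda t, x: -x                    # dx/dt = -x, exact solution x(t) = exp(-t)

def euler(f, x0, t0, t1, n):
    h, x, t = (t1 - t0) / n, x0, t0
    for _ in range(n):
        x, t = x + h * f(t, x), t + h
    return x

def rk4(f, x0, t0, t1, n):
    h, x, t = (t1 - t0) / n, x0, t0
    for _ in range(n):
        k1 = f(t, x)
        k2 = f(t + h/2, x + h/2 * k1)
        k3 = f(t + h/2, x + h/2 * k2)
        k4 = f(t + h, x + h * k3)
        x, t = x + h/6 * (k1 + 2*k2 + 2*k3 + k4), t + h
    return x

exact = np.exp(-1.0)
print("Euler error:", abs(euler(f, 1.0, 0.0, 1.0, 10) - exact))  # ~2e-2
print("RK4   error:", abs(rk4(f, 1.0, 0.0, 1.0, 10) - exact))    # ~3e-7
```

    With the same 10 steps, RK4 is roughly five orders of magnitude more accurate than Euler, which is exactly the commenter's point.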

  • @topamazinggadgetsoftrendin2916 20 days ago +1

    Very interesting

  • @SohamShaw-bx4fq 13 days ago +1

    Can you please teach latent neural ODEs in detail?

  • @osianshelley3312 3 days ago

    Fantastic video! Do you have any references for the mathematics behind the continuous adjoint method?

  • @HD-qq3bn 18 days ago

    I have studied neural ODEs for quite a long time, and found they are good for initial value problems; however, for problems with external inputs they are really hard to train.

  • @ricardoceballosgarzon6100 20 days ago +1

    Interesting...

  • @-mwolf 10 days ago

    Awesome video. One question I'm asking myself is: Why isn't everybody using NODEs instead of resnets if they are so much better?

  • @digriz85 12 days ago

    Nice video, but I really miss the connection between the NNs and the math part. I have a PhD in physics and I've worked a lot with the math you're talking about. I've also worked a few years as a data scientist, and I kinda understand how it goes with the neural networks.
    But I really miss how you make these two work together. Sorry if I sound dumb here.

  • @merrickcloete1350 14 days ago

    @Eigensteve Isn't the nth-order Runge–Kutta integrator just what a U-Net is, after it has been properly trained? The structure appears the same, and the coefficients would be learned.

  • @etiennetiennetienne 19 days ago

    I would vote for more details on the adjoint part. It is not very clear to me how to use AD for df/dx(t) now that x changes continuously (or do we select a clever integrator during training?).
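    For the curious: the trick in the 2018 paper is to solve a second ODE for the adjoint state backward in time, so AD is only ever applied to f at the discrete states the backward solver visits, never through the continuous trajectory itself. A hedged sketch, assuming the torchdiffeq package (whose odeint_adjoint wrapper implements this):

```python
# Sketch of the adjoint approach, assuming torchdiffeq is installed.
# odeint_adjoint solves a second (adjoint) ODE backward in time; autodiff
# evaluates df/dx only at the discrete states that backward solve visits.
import torch
import torch.nn as nn
from torchdiffeq import odeint_adjoint

class F(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 2))
    def forward(self, t, x):
        return self.net(x)

f = F()
x0 = torch.randn(8, 2)
t = torch.linspace(0.0, 1.0, 5)

xt = odeint_adjoint(f, x0, t)      # forward solve
loss = xt[-1].pow(2).sum()         # any loss on the trajectory
loss.backward()                    # triggers the backward adjoint solve
print(f.net[0].weight.grad.shape)  # gradients w.r.t. the field's parameters
```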

  • @smeetsv103 19 days ago

    If you only have access to the x-data and numerically differentiate it to obtain dx/dt for training the Neural ODE, how does that noise propagate into the final solution? Does it act as regularisation?
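    For intuition: finite differencing amplifies measurement noise by roughly 1/Δt, so the derivative targets are far noisier than the states themselves; whether that acts as regularisation or just as corrupted labels depends on the fitting procedure. A small NumPy illustration (numbers in the comments are approximate):

```python
# Differencing noisy state data amplifies the noise by ~1/dt, so the
# dx/dt training targets are much noisier than the states.
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01
t = np.arange(0.0, 10.0, dt)
x_clean = np.sin(t)
x_noisy = x_clean + 1e-3 * rng.standard_normal(t.size)  # small measurement noise

dxdt_est = np.gradient(x_noisy, dt)          # central differences
dxdt_true = np.cos(t)

print("state noise std:     ", np.std(x_noisy - x_clean))     # ~1e-3
print("derivative noise std:", np.std(dxdt_est - dxdt_true))  # ~7e-2, amplified ~1/dt
```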

  • @franpastor2067 12 hours ago

    What about periodic functions? Is there a way to get nice approximations with neural networks?

  • @The018fv 20 days ago

    Is there a model that can do integro-differential equations?

  • @zlackoff 18 days ago +2

    Euler integration got dumped on so hard in this video

  • @Heliosnew 19 days ago

    Nice presentation, Steve! I gave a very similar presentation on Neural ODEs just a week prior. Would like to see them used for audio compression one day. Keep up the content!

  • @anonym9323 20 days ago +1

    Does someone have an example repository or library so I can play with it?

    • @devinbae9914 20 days ago

      Maybe in the Neural ODE paper?
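      The torchdiffeq library released alongside the Chen et al. 2018 paper is the usual playground. A toy fit you could start from, assuming `pip install torchdiffeq` (everything below is illustrative, not code from the video):

```python
# Toy script: fit a learnable vector field so its integrated trajectory
# matches data from a known damped spiral. Assumes torchdiffeq is installed.
import torch
import torch.nn as nn
from torchdiffeq import odeint

class F(nn.Module):                 # learnable vector field dx/dt = f(x)
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 50), nn.Tanh(), nn.Linear(50, 2))
    def forward(self, t, x):
        return self.net(x)

# Synthetic ground truth: a damped spiral dx/dt = A x
A = torch.tensor([[-0.1, 2.0], [-2.0, -0.1]])
t = torch.linspace(0.0, 5.0, 100)
x0 = torch.tensor([[2.0, 0.0]])
with torch.no_grad():
    x_true = odeint(lambda tt, x: x @ A.T, x0, t)

f = F()
opt = torch.optim.Adam(f.parameters(), lr=1e-3)
for step in range(500):             # fit the whole trajectory at once
    opt.zero_grad()
    loss = (odeint(f, x0, t) - x_true).pow(2).mean()
    loss.backward()
    opt.step()
```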

  • @edwardgongsky8540 20 days ago

    Damn, I'm still going through the ODE and dynamical systems course; this new material seems interesting AF though.

  • @erikkhan 20 days ago +3

    Hi Professor, what are some prerequisites for this course?

    • @tramplerofarmies 14 days ago +1

      I suspect these are not the type of courses with defined prereqs, but you definitely need the calculus series, the linear algebra series, and some computer science. To really understand it: classical mechanics, and signals and systems (control theory, discrete and continuous).

  • @user-oj9iz4vb4q 5 days ago

    This seems like you are changing your loss function, not your network. There is some underlying field you are trying to approximate, and you're not commenting on the structure of the network for that function; you are only concerned with how you evaluate that function (by integrating) to compare to reality.
    I think it's more correct to call these ODE loss functions, Euler loss functions, or Lagrange loss functions for neural network evaluation.

  • @1.4142 19 days ago

    multi flashbacks