Neural ODEs (NODEs) [Physics Informed Machine Learning]

  • Published 21 Nov 2024

Comments • 39

  • @smustavee 5 months ago +24

    I have been playing with NODEs for a few weeks now. The video is really helpful and intuitive. Probably it is the clearest explanation I have heard so far. Thank you, Professor.

  • @mohammadxahid5984 6 months ago +8

    Thanks Dr. Brunton for making a video on Neural ODEs. Came across this paper as soon as it came out back in 2018. It still goes over my head, particularly the introduction of the 2nd differential equation / adjoint sensitivity method. Would really appreciate it if you explained it in detail.

  • @kepler_22b83 5 months ago

    So basically raising awareness that there are better approximations for "residual" integration. Thanks for the reminder.
    From my course on numerical computation: using better integrators is actually better than taking smaller time steps; it raises the achievable accuracy given a limited number of bits for your floating-point numbers.
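    A quick numerical sketch of that trade-off (a toy example of my own, not from the video): integrating dx/dt = -x, whose exact solution is exp(-t), classical RK4 with 10 steps already beats forward Euler with 1000 steps.

        import numpy as np

        def euler(f, x, t, h, n):
            # forward Euler: x <- x + h * f(t, x), repeated n times
            for _ in range(n):
                x, t = x + h * f(t, x), t + h
            return x

        def rk4(f, x, t, h, n):
            # classical 4th-order Runge-Kutta
            for _ in range(n):
                k1 = f(t, x)
                k2 = f(t + h / 2, x + h / 2 * k1)
                k3 = f(t + h / 2, x + h / 2 * k2)
                k4 = f(t + h, x + h * k3)
                x, t = x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4), t + h
            return x

        f = lambda t, x: -x
        exact = np.exp(-1.0)
        print(abs(euler(f, 1.0, 0.0, 0.001, 1000) - exact))  # ~2e-4 with 1000 steps
        print(abs(rk4(f, 1.0, 0.0, 0.1, 10) - exact))        # ~3e-7 with only 10 steps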

  • @tshepisosoetsane4857 4 months ago

    Thank you for taking me back to my engineering class on control systems

  • @hyperplano 5 months ago +13

    So if I understand correctly, ODE networks fit a vector field as a function of x by optimizing the entire trajectory along that field simultaneously, whereas the residual network optimizes one step of the trajectory at a time?
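    Schematically, the contrast looks like this (a hypothetical PyTorch-style sketch, not the video's code; f is the learned vector field):

        import torch
        import torch.nn as nn

        f = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))  # learned vector field

        def resnet_step(x):
            # ResNet view: each layer learns one discrete increment of the trajectory
            return x + f(x)

        def node_trajectory(x0, n_steps, h=0.1):
            # Neural ODE view: integrate f along the whole trajectory (Euler here
            # only for brevity); the loss is taken on the solved path, so one
            # backward pass trains f through every step at once
            xs = [x0]
            for _ in range(n_steps):
                xs.append(xs[-1] + h * f(xs[-1]))
            return torch.stack(xs)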

  • @astledsa2713 5 months ago +1

    Love your content! Went through the entire complex analysis series, and now gonna go through this one as well!

  • @lucynowacki3327 4 months ago +1

    Cool summary and intro for liquid NNs.

  • @OnionKnight541 4 months ago

    this is great --- i think about this stuff all the time, but didn't know others did :/

  • @stefm.w.3640 4 months ago

    Great video, I learned a lot! Piqued my interest and inspired me to do a deep dive into all the topics mentioned

  • @HD-qq3bn 5 months ago

    I have studied neural ODEs for quite a long time and found they are good for initial value problems. However, for problems with external inputs, they are really hard to train.

  • @digriz85 5 months ago +2

    Nice video, but I really miss the connection point between the NNs and the math part. I have a PhD in physics and I've worked a lot with the math you're talking about. Also, I've worked a few years as a data scientist, and I kinda understand how it goes with neural networks.
    But I really miss the point of how you make these two work together. Sorry if I sound dumb here.

  • @osianshelley3312 5 months ago

    Fantastic video! Do you have any references for the mathematics behind the continuous adjoint method?

  • @-mwolf 5 months ago

    Awesome video. One question I'm asking myself is: Why isn't everybody using NODEs instead of resnets if they are so much better?

  • @daniellu9499 5 months ago

    Very interesting course, love such great videos...

  • @anthonymiller6234 5 months ago

    Awesome video and very helpful. Thanks

  • @etiennetiennetienne 5 months ago

    I would vote for more details on the adjoint part. It is not very clear to me how to use AD for df/dx(t) now that x changes continuously (or do we select a clever integrator during training?).
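    For reference, a bare-bones sketch of the continuous adjoint as I read it in the 2018 paper (my own Euler-discretized toy code; f is the learned vector field, a(t) = dL/dx(t) the adjoint state). The backward solve only ever visits discrete states x(t_k), where ordinary reverse-mode AD supplies the vector-Jacobian product a^T df/dx, so x never has to be differentiated "continuously":

        import torch

        def adjoint_backward(f, xs, ts, dLdx_T):
            # f: learned vector field; xs: states x(t_k) stored (or recomputed)
            # from the forward solve; dLdx_T: loss gradient at the final state
            a = dLdx_T                                   # adjoint state a(T)
            for k in range(len(ts) - 1, 0, -1):
                h = ts[k] - ts[k - 1]
                x = xs[k - 1].detach().requires_grad_(True)
                # a^T df/dx via a standard reverse-mode vector-Jacobian product
                (vjp,) = torch.autograd.grad(f(x), x, grad_outputs=a)
                a = a + h * vjp                          # da/dt = -a^T df/dx, backward
            return a                                     # = dL/dx(0)

    The parameter gradient accumulates the same way, from vector-Jacobian products a^T df/dtheta along the backward pass.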

  • @SohamShaw-bx4fq 5 months ago +1

    Can you please teach latent neural ode in detail?

  • @merrickcloete1350 5 months ago

    @Eigensteve Is the nth-order Runge-Kutta integrator not just what a U-Net is, after it's been properly trained? The structure appears the same, and the coefficients would be learned.

  • @as-qh1qq 5 months ago

    Amazing review. Engaging and sharp

  • @dannychan9461 2 months ago

    10:10 Or just say: a Neural ODE models the vector field itself, instead of the discretized increment the way a residual connection does.

  • @zlackoff 5 months ago +3

    Euler integration got dumped on so hard in this video

  • @ricardoceballosgarzon6100 5 months ago +1

    Interesting...

  • @topamazinggadgetsoftrendin2916 5 months ago +1

    Very interesting

  • @joshnicholson6194 5 months ago +2

    Very cool!

  • @Sumpydumpert 5 months ago +2

    I love it, great video

  • @marcelotoledo1820 4 months ago

    Why is it implicit that x(k+1) = x(k) + f(x(k)) is Euler integration? It can be any integrator, depending on how you build f(x); for Runge-Kutta, for example, f(x) = h/6 * (k1 + 2*k2 + 2*k3 + k4).
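    Concretely, that idea looks like this (a hypothetical sketch; g is the learned vector field and h the step size):

        def rk4_residual(g, x, h=0.1):
            # the "residual" is now a weighted combination of RK4 stages of g,
            # but the update still reads x_{k+1} = x_k + f(x_k)
            k1 = g(x)
            k2 = g(x + h / 2 * k1)
            k3 = g(x + h / 2 * k2)
            k4 = g(x + h * k3)
            return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)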

  • @maksim-surov 3 months ago

    I couldn't understand what problem the NODE solves. What are the source data and what is the goal? Perhaps you are trying to approximate a dynamical system (its right-hand-side function) with a NN (i.e., you approximate the RHS as a composition of activation and linear functions), such that trajectories of the synthetic system look like the source data. Is this correct?
    Is it like an alternative to an HMM?
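    For what it's worth, a self-contained toy sketch of exactly that setup (crude Euler solve; shapes and names are hypothetical): fit f so that trajectories integrated from it match the observed samples.

        import torch
        import torch.nn as nn

        f = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 2))  # f_theta ~ RHS
        opt = torch.optim.Adam(f.parameters(), lr=1e-3)

        def integrate(x0, n_steps, h=0.05):
            # crude Euler solve of dx/dt = f(x) starting from x0
            xs = [x0]
            for _ in range(n_steps):
                xs.append(xs[-1] + h * f(xs[-1]))
            return torch.stack(xs)

        def train_step(x0, x_data):
            # x_data: one observed trajectory, shape (n_steps + 1, 2)
            opt.zero_grad()
            loss = ((integrate(x0, len(x_data) - 1) - x_data) ** 2).mean()
            loss.backward()     # backprop through the whole solve
            opt.step()
            return loss.item()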

  • @Heliosnew 5 months ago

    Nice presentation, Steve! I just gave a very similar presentation on Neural ODEs a week prior. Would like to see them used for audio compression one day. Keep up the content!

  • @franpastor2067 4 months ago

    What about periodic functions? Is there a way to get nice approximations with neural networks?

  • @smeetsv103 5 months ago

    If you only have access to the x data and numerically differentiate it to obtain dx/dt to train the neural ODE, how does this noise propagate into the final solution? Does it act as regularization?
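    One concrete thing to check is how much finite differencing amplifies the noise before it ever reaches the loss; the derivative error scales roughly like noise/h. A quick illustration (toy numbers of my own):

        import numpy as np

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 10.0, 1001)
        h = t[1] - t[0]
        x = np.sin(t) + 0.01 * rng.standard_normal(t.size)  # 1% measurement noise
        dxdt = np.gradient(x, h)                            # central differences
        print(np.std(x - np.sin(t)))     # ~0.01  noise level in x
        print(np.std(dxdt - np.cos(t)))  # ~0.7   noise level in dx/dt, ~ 0.01/h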

  • @The018fv 5 months ago

    Is there a model that can do integro-differential equations?

  • @anonym9323 5 months ago +1

    Does someone have an example repository or library so I can play with it?

    • @devinbae9914 5 months ago

      Maybe in the Neural ODE paper?
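      The reference implementation released with the 2018 paper is the torchdiffeq package (github.com/rtqichen/torchdiffeq), which is probably the easiest thing to play with. A minimal usage sketch:

          import torch
          import torch.nn as nn
          from torchdiffeq import odeint  # pip install torchdiffeq

          class RHS(nn.Module):
              def forward(self, t, x):   # torchdiffeq expects the signature f(t, x)
                  return -x              # toy linear dynamics

          x0 = torch.tensor([1.0])
          ts = torch.linspace(0.0, 1.0, 11)
          xs = odeint(RHS(), x0, ts)     # solves dx/dt = f(t, x) at the times in ts
          print(xs.shape)                # torch.Size([11, 1])

      The same package also provides odeint_adjoint for the memory-efficient adjoint backward pass.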

  • @erikkhan 5 months ago +3

    Hi Professor, what are some prerequisites for this course?

    • @tramplerofarmies 5 months ago +1

      I suspect these are not the type of courses with defined prereqs, but you def need the calculus series, the linear algebra series, and some computer science. To really understand it: classical mechanics and signals and systems (control theory, discrete and continuous).

  • @edwardgongsky8540 5 months ago

    Damn, I'm still going through the ODE and dynamical systems course; this new material seems interesting AF though

  • @JonathanFraser-i7h 5 months ago

    This seems like you are changing your loss function, not your network. Like there is some underlying field you are trying to approximate, and you're not commenting on the structure of the network for that function. You are only concerning yourself with how you evaluate that function (integrating) to compare to reality.
    I think it's more correct to call these ODE loss functions, Euler loss functions, or Lagrange loss functions for neural network evaluation.

  • @sucim 4 months ago +8

    Very confusing presentation! First, Neural ODEs are presented as a continuous version of ResNets, which would imply that the integration happens in "depth", making them similar to fully-connected or convolutional neural networks (non-sequence models). Then it is suggested that the integration actually happens in "time", which makes neural ODEs much more similar to sequence models. Even ChatGPT et al. are confused and can't answer this distinction properly. Seems like quite a buzzword-driven field...

  • @1.4142 5 months ago

    multi flashbacks