Physics Informed Neural Networks (PINNs) [Physics Informed Machine Learning]

  • Published Nov 21, 2024

Comments • 76

  • @jiaminxu7275
    @jiaminxu7275 5 months ago +28

    Hi Prof. Brunton, I am a Ph.D. student at UT Austin majoring in Mechanical Engineering with a specialization in dynamical systems and control. Your videos have been helping me, either by giving me a deeper understanding of fundamental knowledge or by broadening my horizons, ever since I began my Ph.D. I just want to express my great gratitude to you again, and I hope I can meet you at a conference someday so that I can say thank you in person.

    • @The_Quaalude
      @The_Quaalude 5 months ago +5

      Getting a PhD and learning from TH-cam is wild 😭

    • @arnold-pdev
      @arnold-pdev 5 months ago +1

      ​@@The_Quaalude Why?

    • @The_Quaalude
      @The_Quaalude 5 months ago +2

      @@arnold-pdev bro is paying all that money just to learn something online for free

    • @kaihsiangju
      @kaihsiangju 5 months ago +9

      @@The_Quaalude Usually, PhD students in the U.S. get paid and do not need to pay tuition.

    • @Sumpydumpert
      @Sumpydumpert 5 months ago +1

      I threw some concepts up on Reddit (grand unified theory) and some other places for a binary growth function based on how the internet works across all these different platforms

  • @rehankhan-gn2jr
    @rehankhan-gn2jr 6 months ago +23

    Your way of teaching is highly beneficial and outstanding. Thank you, Steven!

  • @alessandrobeatini1882
    @alessandrobeatini1882 6 months ago +16

    This is hands down one of the best videos I've seen on TH-cam. Great work, keep it up!

  • @juandiegotoscano_brown
    @juandiegotoscano_brown months ago +1

    Thank you so much, Prof. Brunton, for recommending my video on PINNs! It's an honor to have my work mentioned on your channel. I appreciate your support and the incredible work you do making advanced topics accessible to the community!

  • @markseagraves5486
    @markseagraves5486 5 months ago +1

    Very helpful, Steven. I work in consciousness studies and too often find the math written off as too complicated. On the other side, many computational scientists may write off consciousness studies as too ethereal to be of much value. Bridging these two worlds with insight and rigor, I feel, advances our understanding of both artificial and human intelligence. You have contributed to this effort here. Thank you.

  • @ryansoklaski8242
    @ryansoklaski8242 5 months ago +7

    I would love to see a video on Universal ODEs (which leverage auto-diff through diffEQ solvers). Chris Rackauckas' work on these methods in the Julia language has been striking - I would love to see your take on it.

    • @Eigensteve
      @Eigensteve 5 months ago +7

      Already filmed and in the queue :)

    • @ryansoklaski8242
      @ryansoklaski8242 5 months ago +1

      @@Eigensteve I'm so excited to hear this.
      I recommend you so highly to my students and colleagues. I just wish I had your lessons when I was a college student way back when. Thanks for everything.

  • @code2compass
    @code2compass 5 months ago +5

    Steve, your videos are always helpful, clear, and concise. Thank you so much for such amazing content. You are my hero.

  • @nandhumon2377
    @nandhumon2377 2 months ago

    Great video, and I always enjoy your presentation. I think loss balancing for PINNs should have been included in this too.

  • @abhisheksaini5217
    @abhisheksaini5217 6 months ago +5

    Thank you, Professor.😃

  • @aliabdollahian1465
    @aliabdollahian1465 2 months ago

    Truly great explanation! It really helps me understand the concepts deeply.
    You're a hero, Steve! Thank you for your highly beneficial, outstanding, and most importantly, free teaching! ❤

  • @reversetransistor4129
    @reversetransistor4129 5 months ago +2

    Nice, kinda gives me ideas to mix control theories together.

  • @THEPAGMAN
    @THEPAGMAN 5 months ago

    This is really helpful - if only you had posted this sooner! Thanks

  • @blacklabelmansociety
    @blacklabelmansociety 5 months ago

    Hi Professor Steve. I’d love to see a series on Transformers. Thanks for your content, greetings from Brazil.

  • @calvinholt6364
    @calvinholt6364 5 months ago

    This is much easier to comprehend than the course given by the author GK. He should just point us to you. 😅

  • @mostafasayahkarajy508
    @mostafasayahkarajy508 5 months ago

    Thank you very much for the lecture. I am looking forward to your next lecture on this topic.

  • @MLDawn
    @MLDawn 3 months ago +1

    At 29:25, the problem lies in the way backpropagation works! That is, even though the loss function is physics-informed, the learning algorithm, backpropagation, is far from physics-informed, which means the neuronal message passing in a traditional neural net does not resemble how the brain works. More specifically, the gradient trajectories used in backprop are shared by both terms of the PINN loss! This means that while minimizing term 1, the network forgets term 2, and vice versa. That is why you need to artificially balance the MLP and physics parts with some coefficient! This is not a proper solution, as it addresses the problem after it has already occurred! I would suggest a fundamental alteration of the dynamics of training: NOT using backprop, but instead using the Free Energy Principle and, in short, local Hebbian learning! This should create meaningfully factorised portions of the network that specialise in minimising different parts of your loss, without constantly being overwritten (i.e., no catastrophic forgetting).
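The weighted two-term loss this comment refers to can be sketched numerically. Below is a minimal, hypothetical illustration (not from the video) of balancing a data-fit term against a physics-residual term for a damped oscillator; finite differences stand in for a real PINN's automatic differentiation, and all names (m, c, k, lam) are illustrative:

```python
import numpy as np

# Damped oscillator: m u'' + c u' + k u = 0 (illustrative parameters).
m, c, k = 1.0, 0.1, 1.0

def pinn_loss(u, t, t_data, u_data, lam=1.0):
    """Weighted sum of data misfit and ODE residual.

    lam is the balancing coefficient the comment discusses: it trades off
    fitting the measurements against satisfying the physics.
    """
    dt = t[1] - t[0]
    du = np.gradient(u, dt)       # approximate u'
    d2u = np.gradient(du, dt)     # approximate u''
    residual = m * d2u + c * du + k * u
    loss_physics = np.mean(residual ** 2)
    loss_data = np.mean((np.interp(t_data, t, u) - u_data) ** 2)
    return loss_data + lam * loss_physics

t = np.linspace(0, 10, 1000)
candidate = np.exp(-0.05 * t) * np.cos(t)   # close to the true solution
t_data = t[::100]                           # sparse "measurements"
u_data = candidate[::100]

# A near-solution scores far better than an arbitrary guess.
good = pinn_loss(candidate, t, t_data, u_data, lam=1.0)
bad = pinn_loss(np.sin(3 * t), t, t_data, u_data, lam=1.0)
```

Sweeping lam shows the trade-off the comment complains about: large lam privileges the physics residual at the expense of the data term, and vice versa.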

  • @drozdchannel8707
    @drozdchannel8707 5 months ago

    Great video! It may be useful to do another video about neural operators. As far as I know, they are more stable and faster on many physical tasks.

  • @mithundeshmukh8
    @mithundeshmukh8 5 months ago +24

    Please share the references; only one link is visible.

    • @tillsteh7273
      @tillsteh7273 5 months ago +3

      Dude, they are literally in the video. Just use Google.

    • @DrakenRS78
      @DrakenRS78 5 months ago

      Also - take a look at his textbook for further reference

  • @tshepisosoetsane4857
    @tshepisosoetsane4857 4 months ago

    Wooooooow, I am back in class: Physics, Maths, Chemistry, Electrical, Control Systems

  • @luc-nh5lo
    @luc-nh5lo 2 months ago

    Good video! I'm starting to see more about PINNs. I hope one day I'll do a master's degree at an American university like MIT or Stanford. Your video helped me, thanks (:

  • @clementboutaric3952
    @clementboutaric3952 4 months ago +1

    The fact that writing the physics into the loss function doesn't enforce it but rather suggests it can be a good thing if the hypotheses that lead to the Navier-Stokes equations (incompressible Newtonian fluid) start to become less solid.

  • @rudypieplenbosch6752
    @rudypieplenbosch6752 5 months ago

    I was waiting for this; I hope to see more about these subjects. Thanks a lot.

  • @nafisamehtaj8779
    @nafisamehtaj8779 5 months ago

    Prof. Brunton, it would be a great help if you could cover neural operators (DeepONets) in one of your videos. Thanks for all the amazing videos, making learning easier for grad students.

  • @caseybackes
    @caseybackes 5 months ago

    I knew someone would end up working on this soon. Really excited to see some sophisticated applications!

  • @pantelisdogoulis8662
    @pantelisdogoulis8662 2 months ago

    Thanks a lot for the video!
    I would like to ask if you have encountered any PINNs applied to systems described by simple algebraic equations with no time parameter present.

  • @anthonymiller6234
    @anthonymiller6234 5 months ago

    Awesome video again, Steve. Thanks so much.

  • @moisesbessalle
    @moisesbessalle 5 months ago +1

    Can't you also clip/trim the search space to the possible range of output values to speed it up before inference? So, for example, the velocities will be positive with values less than some threshold that depends on your setting?

  • @Obbe79
    @Obbe79 5 months ago

    PINNs usually require more training. A lot of attention must be given to activation functions.

  • @MariaHeger-tb6cv
    @MariaHeger-tb6cv 5 months ago

    I was thinking about your comment that the rules of physics become expressions to be optimized. Unfortunately, I think they are absolute rules that should be enforced at every stage of the process. Or maybe only at the last step? It's like allowing an accountant to make errors, knowing that the overall performance is better?

  • @sedenions
    @sedenions 5 months ago

    Have you made a video on embedding and fitting networks for running simulation inference?

  • @AndrewConsroe
    @AndrewConsroe 5 months ago

    PINN foundation models, even if domain-specific at first, would be really cool. I see one paper from a quick Google search with some early positive results. Even if you do have to fine-tune to your problem, it would beat training from scratch for every new application. I wonder if the architecture could be modified to separate the physics from the data to make the fine-tuning more effective/efficient. Do we have more insight into the phase space of nets with low/zero physics loss?

  • @valgorbunov1353
    @valgorbunov1353 months ago

    Great video as always. Quick question: you said you would include resources in the description, but I don't see any links to the tutorials, only a link to the original paper describing PINNs. Am I looking in the wrong section?
    I was able to search for the sources you referenced thanks to the description, but I think actual links would help other viewers.

  • @muthukamalan.m6316
    @muthukamalan.m6316 5 months ago

    Wonderful content; a code sample would be helpful.

  • @thepanzymancan
    @thepanzymancan 5 months ago

    Specifically asking with regard to the spring-mass-damper system: how well does the trained NN perform when you give it different initial values than the ones used for training? In general, when you have ODEs of a mechanical system, can you train the NN (or another architecture) with just one dataset of the system doing its thing (one that captures transients and steady-state dynamics), or do you need different "runs" of the system exploring many combinations of states for the NN to be generalizable in the end? I want to start exploring the use of PINNs for my research and would like to hear PINN users' opinions and experiences. Thanks!

    • @Jononor
      @Jononor 5 months ago

      I recommend testing it out yourself! Great way of getting into it, building intuition and experience on simplified problems
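One concrete way to test this yourself, as the reply suggests: simulate the spring-mass-damper system from several initial conditions and check a network trained on one trajectory against the others. A minimal, hypothetical data-generation sketch (the parameter values and function names are illustrative, not from the video):

```python
import numpy as np

# Spring-mass-damper: m x'' + c x' + k x = 0 (illustrative parameters).
m, c, k = 1.0, 0.4, 2.0

def simulate(x0, v0, dt=0.01, steps=1000):
    """Semi-implicit Euler integration of one trajectory."""
    xs = np.empty(steps)
    x, v = x0, v0
    for i in range(steps):
        a = -(c * v + k * x) / m   # acceleration from the ODE
        v += a * dt                # update velocity first
        x += v * dt                # then position (semi-implicit)
        xs[i] = x
    return xs

# One trajectory per initial condition. A single run covers only one slice
# of the state space; training on several runs (and holding others out as a
# test set) probes whether the network actually generalizes.
initial_conditions = [(1.0, 0.0), (0.0, 1.0), (0.5, -0.5)]
dataset = np.stack([simulate(x0, v0) for x0, v0 in initial_conditions])
```

Holding out one of these trajectories during training and evaluating on it afterwards is a cheap proxy for the "different initial values" question in the comment above.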

  • @Sumpydumpert
    @Sumpydumpert 5 months ago +1

    Loved the video ❤️❤️

  • @alshahriarbd
    @alshahriarbd 5 months ago

    I think you forgot to put the link to the PyTorch example tutorials in the description.

  • @victormurphy3511
    @victormurphy3511 5 months ago

    Great video. Thank you.

  • @alexanderskusnov5119
    @alexanderskusnov5119 5 months ago

    What about Kolmogorov-Arnold networks (KAN)?

  • @Anorve
    @Anorve 5 months ago

    Fantastic, as always!

  • @arbor318
    @arbor318 4 months ago

    The idea is cool, but I wonder how truly effective it is, because once you add a penalty function based on physics, you probably remove a lot of solutions suggested by the neural network.

  • @commonwombat-h6r
    @commonwombat-h6r 5 months ago

    very nice!

  • @zfrank3777
    @zfrank3777 4 months ago

    Will there be problems if the real system is chaotic?

  • @MyrLin8
    @MyrLin8 5 months ago

    excellent. thanks :)

  • @The_Quaalude
    @The_Quaalude 5 months ago +4

    Who else is high af rn⁉️

  • @ayushshukla9959
    @ayushshukla9959 months ago

    I am really very sorry, sir, but I am unable to deduce how PINNs replace CFD and what the difference is, as I have to put them in a project.

  • @mintakan003
    @mintakan003 5 months ago

    Is there anything that works well for chaotic systems?

    • @arnold-pdev
      @arnold-pdev 5 months ago

      Think about what the definition of "chaos" is, and you'll have your answer.

  • @notu483
    @notu483 5 months ago

    What if you use KAN instead of MLP?

    • @arnold-pdev
      @arnold-pdev 5 months ago

      Sounds like the start of a research question

  • @cfddoc
    @cfddoc 5 months ago

    no audio?

  • @googleyoutubechannel8554
    @googleyoutubechannel8554 2 months ago

    This feels kind of backwards with respect to what (I'd guess) NNs could do for physics. Wouldn't you want to use NNs to discover better fundamental relationships by letting them have a go, tabula rasa, at a huge amount of raw 'agnostic' data? So many physics models have problems being useful, are statistics, or are hand-waving spherical-cow models; heck, most physics is a bunch of properties and operators developed before computers even existed. Why not use the power of NNs to try to discover better, more useful dynamics - better _fundamental properties and operators_ - instead of using them as a sort of shitty solver?

    • @johnmorrell3187
      @johnmorrell3187 2 months ago +1

      Two thoughts in response:
      First, for a lot of the problems mentioned here, like fluid flow, we do have very good PDEs that describe the problem very intuitively but are very difficult to solve. So the existing equation is good, and we're not really struggling to explain the physics, but it's hard to work with.
      Second, even if the NN can learn some novel equation from, for example, lots of measured data, there's usually no way to get the equation OUT of the NN in any useful form. Say I'm looking at some particle physics problem: I have tons of data but no good equation, and I manage to get an NN to predict new data well. That NN has clearly learned some useful equation, but there's nothing a physicist could take from the NN's parameters and generalize; the solution is not useful or human-readable beyond its predictive power.

    • @googleyoutubechannel8554
      @googleyoutubechannel8554 2 months ago

      ​@@johnmorrell3187 You're being tricked by math notation and a hundred years of hubris. You can formulate almost any relationship as a PDE, regardless of how well you understand it, if you can find a single relation between two (made-up) properties; 'PDEs that are hard to solve' is identical to 'shitty model'.

  • @arnold-pdev
    @arnold-pdev 5 months ago +1

    PINNs have to be one of the most over-hyped ML concepts... and that's stiff competition.

    • @arnold-pdev
      @arnold-pdev 5 months ago

      On one level, it's an unprincipled way of doing data assimilation. On another level, it's an unprincipled way of doing numerical integration. Yawn.
      Great vid tho!

  • @Sumpydumpert
    @Sumpydumpert 5 months ago

    Wonder how AI is gonna use this?

  • @alexroberts6416
    @alexroberts6416 5 months ago

    I'm sorry, what? 😁

  • @SylComplexDimensional
    @SylComplexDimensional 4 months ago

    All of your shit from yesterday forward won’t get seen