Neural Implicit Flow (NIF) [Physics Informed Machine Learning]

  • Published 21 Nov 2024

Comments • 14

  • @davidlearnforus
    @davidlearnforus 1 month ago

    Loved this series. Please make a video on Artificial Kuramoto Oscillatory Neurons.

  • @AhmedAboElyazeed98
    @AhmedAboElyazeed98 2 months ago +11

    Hello, great explanation!
    You mentioned we could download the code, but it isn't in the description.

  • @XuHan-s7o
    @XuHan-s7o 2 months ago

    I learn a lot from each of your videos. Thanks very much!

  • @NDDYW303-kb4wr
    @NDDYW303-kb4wr 2 months ago

    I look forward to your videos so much

  • @chemputerist9887
    @chemputerist9887 1 month ago

    The idea was published well before this paper in the implicit neural network literature; see the Functa paper.

  • @nsshn9690
    @nsshn9690 2 months ago

    Amazing video. Thanks a lot!

  • @ryanwoodall8039
    @ryanwoodall8039 2 months ago +1

    Reading through the article, I'm curious about the approach of using time as an "external" factor instead of as a coordinate, as in PINNs.
    One reason I like the Neural ODE approach is that time is continuously varied, which allows for the construction of a (potentially latent) phase space where the dynamics evolve continuously. Is that possible with this approach? (See the sketch after this thread.)

    • @joshcannon6704
      @joshcannon6704 1 month ago

      I'm gonna have to read his paper; using time as an external factor sounds wild to me.

    • @ryanwoodall8039
      @ryanwoodall8039 1 month ago +1

      @@joshcannon6704 Yeah, it's an interesting architecture. I'm wondering if it allows for better separation of derivatives w.r.t. time using the Adam optimizer... It also avoids the need for a numerical integrator. Still, my big question is generalization.
      For instance, we can temporally reframe any differential equation by shifting the initial condition. My concern is that the reference time would not generalize between problems.
      Perhaps that's not a concern of the authors. They seem to care most about compressed sensing and control through a few sensors.

    • @joshcannon6704
      @joshcannon6704 1 month ago

      @@ryanwoodall8039 Yeah, I'm not sure exactly how it would shake out with different problems. I use data with a ton of noise, so I keep trying to learn different architectures to see which will "see through the noise" better. I'll be honest, though, I'm still trying to wrap my brain around this one; I'm sure I'll need to watch it 10 more times lol
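
To make the "time as an external factor" design in this thread concrete: in NIF, time (together with any other parameters or sensor measurements) feeds a ParameterNet that outputs the weights of a ShapeNet, and only the spatial coordinates enter the ShapeNet itself. Below is a minimal sketch of that hypernetwork structure, assuming PyTorch; it is an illustration, not the authors' code, and all layer widths (HIDDEN, the 64-unit ParameterNet layer) are made up for the example.

```python
# Minimal sketch of the NIF hypernetwork idea (illustrative only, not the
# authors' implementation; all layer sizes are assumed).
import torch
import torch.nn as nn

HIDDEN = 32                # assumed ShapeNet hidden width
N_SHAPE = 3 * HIDDEN + 1   # scalar parameters of a 1 -> HIDDEN -> 1 MLP

class NIF(nn.Module):
    def __init__(self):
        super().__init__()
        # ParameterNet: t (and, in the paper, other parameters or sensor
        # measurements) -> flat vector of ShapeNet weights.
        self.param_net = nn.Sequential(
            nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, N_SHAPE)
        )

    def forward(self, x, t):
        # x: (B, 1) spatial coordinates; t: (B, 1) times.
        w = self.param_net(t)                 # per-sample ShapeNet weights
        W1 = w[:, :HIDDEN]                    # 1 -> HIDDEN layer weights
        b1 = w[:, HIDDEN:2 * HIDDEN]
        W2 = w[:, 2 * HIDDEN:3 * HIDDEN]      # HIDDEN -> 1 layer weights
        b2 = w[:, 3 * HIDDEN:]
        h = torch.tanh(W1 * x + b1)           # x broadcasts across HIDDEN
        return (h * W2).sum(dim=1, keepdim=True) + b2  # u(x, t)

model = NIF()
u = model(torch.rand(8, 1), torch.rand(8, 1))  # -> shape (8, 1)
```

Time never appears as a ShapeNet input coordinate; it only modulates the ShapeNet's weights. That is the contrast with PINNs, where u is fit jointly over (x, t), and with Neural ODEs, where time enters through an integrator. On the reference-time question above: for an autonomous system a time-shifted solution is still a solution, u(t; t0) = u(t - t0; 0), so a network trained on absolute t is tied to its training window, which seems to be exactly the generalization concern raised.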

  • @arturoeugster7228
    @arturoeugster7228 2 months ago

    This gentleman is rather misinformed about the significance of the number defined by Professor Ackeret. Also about the Kolmogorov sequence. 🇧🇴

  • @peterfarkas6763
    @peterfarkas6763 6 months ago +3

    th-cam.com/video/y09_uGjd1zM/w-d-xo.html
    Shaowu Pan's video.
    I love your content. I myself am a theoretical physicist turned CFD engineer, just transitioning to ML and data science. Thank you for the educational content; it makes article discovery much easier.

  • @lgl_137noname6
    @lgl_137noname6 2 months ago +1

    @eigensteve
    There are no resources in the description, as you had indicated there would be.
    Pleeeease...
    Where can those resources be found?
    Please advise.
    Thank you.

  • @AntiHolyChrist
    @AntiHolyChrist 2 months ago

    But does he go to Supercuts? The mirrors and the shape of the station... is this technology when we cut? The mirrors are angled; the top of the mirror is the fluorescent parameter filter without the bulbs. The bulbs are overhead, and the parameter of a balanced circle is above the chair. It's angled to the right for the shape projection. Go to Supercuts!!! I think people don't realize hair is alive and goes directly to their brain...