Neural Networks for Dynamical Systems

  • Published 11 Sep 2024

Comments • 26

  • @AliRadmehrir
    @AliRadmehrir 4 years ago +5

    One of the best professors in system analysis.

  • @mohamadrezaha
    @mohamadrezaha 4 years ago +1

    Beauty of System Identification!

  • @georgekerwood9100
    @georgekerwood9100 2 years ago

    Excited to see how this can be used for IK in robotics, thanks for your time.

  • @人類之信仰現代の精神
    @人類之信仰現代の精神 3 years ago

    The best way of demonstrating I have ever seen.

  • @dr.alikhudhair9414
    @dr.alikhudhair9414 8 months ago

    Thank you for posting this great video, Professor!

  • @texwiller7577
    @texwiller7577 3 months ago

    Thank you Nathan

  • @mattkafker8400
    @mattkafker8400 3 years ago +1

    Thank you for posting this great video, Professor! This technique seems quite simple, yet extremely powerful. I am curious what types of otherwise intractable systems might become tractable if one uses this technique with more powerful modern NNs.

  • @chrisprasanna8190
    @chrisprasanna8190 4 years ago +7

    Hi Professor Kutz,
    I was wondering what the intuition was behind choosing your three activation functions (lines 25-27). I've seen logsig and tanh functions used a lot for the hidden layers of regression networks, but I'm not sure what the radial basis function brings to the table or why it is placed in the middle.
    In addition, I thought the output layer of MATLAB's regression networks is 'purelin' by default, so is the third hidden layer a bit redundant? Or is there a reason you chose a purelin activation function for the last hidden layer?
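For readers who don't use MATLAB, the three activations asked about here (logsig, radbas, purelin) are straightforward to write out. Below is a minimal NumPy sketch of a forward pass through three hidden layers in that style; the layer sizes and random weights are hypothetical, chosen only for illustration, not taken from the video.

```python
import numpy as np

rng = np.random.default_rng(0)

def logsig(n):    # MATLAB's logsig: 1 / (1 + exp(-n)), a squashing sigmoid
    return 1.0 / (1.0 + np.exp(-n))

def radbas(n):    # MATLAB's radbas: exp(-n^2), a localized Gaussian bump
    return np.exp(-n ** 2)

def purelin(n):   # MATLAB's purelin: the identity (linear) activation
    return n

# Hypothetical toy network: 3 inputs (e.g. a Lorenz state) -> 10 -> 10 -> 3
W1, b1 = rng.normal(size=(10, 3)), np.zeros(10)
W2, b2 = rng.normal(size=(10, 10)), np.zeros(10)
W3, b3 = rng.normal(size=(3, 10)), np.zeros(3)

def forward(x):
    a1 = logsig(W1 @ x + b1)      # sigmoid hidden layer
    a2 = radbas(W2 @ a1 + b2)     # radial-basis hidden layer
    return purelin(W3 @ a2 + b3)  # linear readout

x_next = forward(np.array([1.0, 1.0, 1.0]))
print(x_next.shape)  # (3,)
```

The radbas layer responds most strongly near zero net input, which gives the network locally tuned units sandwiched between the global sigmoid layer and the linear readout.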

  • @hamedsaeidi2860
    @hamedsaeidi2860 7 months ago

    Lovely work!

  • @alejandroreyes8362
    @alejandroreyes8362 2 years ago

    All my respect to you, master!!

  • @Anorve
    @Anorve 4 years ago +1

    Great video Nathan, I recently got your other book "Data-Driven Modeling & Scientific Computation" and I'm enjoying it.

  • @TheSugarDealers
    @TheSugarDealers 2 years ago

    What might be the advantage of training a NN to solve an IVP? It seems that to train a NN to solve the IVP, one must already have a way to generate trajectories.

  • @selvarajp1703
    @selvarajp1703 1 year ago

    Thank you so much, Sir.

  • @erickgomez7775
    @erickgomez7775 3 years ago

    Great way to teach.

  • @pedrovelazquez138
    @pedrovelazquez138 3 years ago

    Thank you. Really interesting

  • @royshi4802
    @royshi4802 4 years ago

    Great video!

  • @suningok
    @suningok 4 years ago +1

    Hi, thanks for the interesting video. Is there a way we could include "b; sig; r" as input parameters for the neural network?
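One common way to do what this question asks, sketched here as an assumption rather than anything shown in the video, is to append the Lorenz parameters (sigma, rho, beta) to the state vector so the network sees them as extra inputs and a single model can be trained across many parameter settings. A minimal NumPy sketch of building such training pairs (the helper name make_training_pairs is hypothetical):

```python
import numpy as np

def make_training_pairs(trajectory, sigma, rho, beta):
    """trajectory: (T, 3) array of states sampled at a fixed time step."""
    params = np.array([sigma, rho, beta])
    # Features: current state plus the (constant) parameters of this run.
    X = np.hstack([trajectory[:-1], np.tile(params, (len(trajectory) - 1, 1))])
    Y = trajectory[1:]  # Target: the state one time step later.
    return X, Y         # X: (T-1, 6), Y: (T-1, 3)

# Placeholder trajectory standing in for an actual Lorenz simulation.
traj = np.random.rand(100, 3)
X, Y = make_training_pairs(traj, sigma=10.0, rho=28.0, beta=8.0 / 3.0)
print(X.shape, Y.shape)  # (99, 6) (99, 3)
```

Training pairs built this way from several parameter settings can then be stacked into one dataset before fitting the network.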

  • @rrr00bb1
    @rrr00bb1 8 months ago

    I was searching for a neural network architecture that takes in a multi-modal "image" at time T and predicts a multi-modal output at time T+1. But how would you represent actuators in this scheme? With the position of a motor as an input and a predicted output, (y_a1 - x_a1) ... would that diff trigger a motor to actuate the world? Since LLMs are all based on next-frame prediction, it seems you could make a generic box and hook up some inputs to pixels and other inputs to positions of actuators, but it's mysterious how the output would control the actuators. I.e., if a neural network is trying to minimize surprise, then it can minimize surprise by actuating the world on its own.

    • @rrr00bb1
      @rrr00bb1 8 months ago

      This as opposed to encoding actuators in weights somewhere and letting backprop modify them.

  • @optimizacioneningenieria3385
    @optimizacioneningenieria3385 1 year ago

    Does anyone know how this approach changes when you also consider an input, u, in your system?

    • @amineatmane7592
      @amineatmane7592 9 months ago

      Did you get an answer?
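A common approach to the controlled case, offered here as a sketch and not as the video's method, is to concatenate the control input u_k with the state x_k and train the network on the one-step map x_{k+1} = f(x_k, u_k). The helper below (make_io_pairs, a hypothetical name) builds such input/output pairs in NumPy:

```python
import numpy as np

def make_io_pairs(states, inputs):
    """states: (T, n) state trajectory; inputs: (T, m) applied control inputs."""
    X = np.hstack([states[:-1], inputs[:-1]])  # features: (x_k, u_k)
    Y = states[1:]                             # target: x_{k+1}
    return X, Y

# Placeholder data standing in for a simulated controlled system.
states = np.random.rand(50, 3)  # e.g. a 3-state system
inputs = np.random.rand(50, 1)  # a scalar control input u
X, Y = make_io_pairs(states, inputs)
print(X.shape, Y.shape)  # (49, 4) (49, 3)
```

The trained network then predicts the next state given both the current state and the applied input, so it can be rolled forward under any candidate control sequence.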

  • @dosomething3
    @dosomething3 4 years ago +2

    Most complicated example I’ve seen in my entire life. And I’m very old.

  • @inessanchez-hombria1999
    @inessanchez-hombria1999 2 years ago

    2:08