MSE101 L7.2 Non-linear least squares minimisation

  • Published 11 Nov 2024

Comments • 16

  • @SHONSL • 4 years ago +1

    Good god, thank you for using a proper mic and audio equipment. It makes following along MUCH easier for us.

  • @zacharythatcher7328 • 4 years ago

    This is the first hint I've seen that the Hessian matrix (the matrix representing the curvature of a vector-valued function) can be used to judge the value of the constant you should use when minimizing the cost function. Thanks for that.

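A minimal sketch of the idea in the comment above, using an illustrative exponential model y = a*exp(-b*x) that is not taken from the lecture: the Gauss-Newton product J^T J stands in for the Hessian of the squared-error cost, and the damping constant lam is raised or lowered depending on whether a step actually reduces that cost.

    import numpy as np

    def residuals(params, x, y):
        a, b = params
        return y - a * np.exp(-b * x)

    def jacobian(params, x):
        a, b = params
        # Partial derivatives of the residuals with respect to a and b.
        return np.column_stack([-np.exp(-b * x), a * x * np.exp(-b * x)])

    def damped_gauss_newton(x, y, params, lam=1e-2, n_iter=50):
        for _ in range(n_iter):
            r = residuals(params, x, y)
            J = jacobian(params, x)
            A = J.T @ J                                  # approximates the Hessian of 0.5*sum(r**2)
            g = J.T @ r                                  # gradient of 0.5*sum(r**2)
            step = np.linalg.solve(A + lam * np.eye(len(params)), -g)
            trial = params + step
            if np.sum(residuals(trial, x, y) ** 2) < np.sum(r ** 2):
                params, lam = trial, lam * 0.5           # good step: accept it, reduce damping
            else:
                lam *= 2.0                               # bad step: increase damping, try again
        return params

For example, with x = np.linspace(0, 4, 50) and noisy data generated from a = 2.0 and b = 1.3, damped_gauss_newton(x, y, np.array([1.0, 1.0])) should recover parameters close to those values.
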
  • @jogomez1988a • 3 years ago

    Could this same method be applied to a sigmoidal regression? Any book you could recommend?

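On the sigmoidal-regression question above: the same non-linear least-squares machinery applies once you can write down the model and supply a reasonable starting guess. A short sketch with hypothetical data, using SciPy's curve_fit (which uses Levenberg-Marquardt by default for unbounded problems); the parameter names L, k and x0 are illustrative, not from the lecture.

    import numpy as np
    from scipy.optimize import curve_fit

    def sigmoid(x, L, k, x0):
        # L: upper asymptote, k: steepness, x0: midpoint (illustrative names)
        return L / (1.0 + np.exp(-k * (x - x0)))

    rng = np.random.default_rng(0)
    x = np.linspace(-5.0, 5.0, 100)                      # hypothetical data
    y = sigmoid(x, 1.0, 1.5, 0.3) + 0.02 * rng.standard_normal(x.size)

    popt, pcov = curve_fit(sigmoid, x, y, p0=[1.0, 1.0, 0.0])
    print(popt)                                          # fitted [L, k, x0]
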
  • @UseLogicPlease123 • 8 years ago +1

    Many thanks for the video. Great presentation and all. It would be cool to see you build all the way up to the more complex, faster-converging solving methods.

  • @salihuibrahim3853 • 6 years ago

    Hi, how can I fit my curve with the Levenberg-Marquardt method using the curveExpert software?

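I can't speak to curveExpert itself, but for comparison, here is a sketch of a Levenberg-Marquardt fit in SciPy, with an illustrative exponential model and made-up data; method="lm" selects the Levenberg-Marquardt solver.

    import numpy as np
    from scipy.optimize import least_squares

    def residuals(params, x, y):
        a, b = params
        return y - a * np.exp(-b * x)

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 4.0, 50)                        # made-up data for illustration
    y = 2.0 * np.exp(-1.3 * x) + 0.05 * rng.standard_normal(x.size)

    fit = least_squares(residuals, x0=[1.0, 1.0], args=(x, y), method="lm")
    print(fit.x)                                         # estimated [a, b]
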
  • @obewanjacobi • 4 years ago

    Solid video, very helpful refresher for me

  • @apprentice2101 • 7 years ago +1

    Thank you so much!! You are really awesome at explaining math!

  • @skylogic • 6 years ago +1

    Great lesson! Though it'd be nice if you didn't block the camera's view of the board while writing and explaining the equations :)

  • @absolute___zero • 4 years ago +1

    What I can't understand in all these algorithms that convert the *Error* into an *Error Squared* function is: why do that? By converting the Error into ErrorSquared you create another level of complexity, and then you have to differentiate the ErrorSquared to do the minimization. Why raise the complexity to a higher degree and then lower it again in the very next step? Can't we just differentiate the error function in its original form? That seems like the obvious thing to do, and you would go straight to the minimum without any wiggling. And if you then took the second derivative of that, you would converge to the minimum even faster; that would be the equivalent of a third-order derivative of the ErrorSquared function, so a second-order derivative of the Error (not ErrorSquared) would be a much more powerful method.

    • @pedroparamodelvalle6751 • 4 years ago +1

      The squared-error function is convex, hopefully positive definite, and also smoother. In simple terms, we want to penalize positive and negative errors the same. You can also use the absolute error, but that is definitely not a smooth function.

    • @zacharythatcher7328 • 4 years ago +1

      You have to start with the understanding that this calculation is actually getting the squared distance between the observations and the estimating function. The Pythagorean theorem gives you distance; this is just distance squared. And when you think about it, distance is really what you want to minimize. Simply adding non-squared errors no longer gives you an analogue of distance at all. It would give you some weird measure that might actually have an incorrect minimum somewhere that is not representative of minimizing the distance once you get into higher-dimensional space. Luckily, the distance equation generalizes to higher-dimensional space, so we know that we are continuing to minimize distance even though we have no notion of what distance really looks like in 4-D.

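A quick numeric illustration of the point made in this thread, with made-up numbers: the signed errors of a poor fit can cancel to zero while the squared errors still expose it, which is one reason the cost is built from squares (i.e. squared Euclidean distance).

    import numpy as np

    y    = np.array([1.0, 2.0, 3.0])     # observations
    good = np.array([1.25, 2.0, 2.75])   # close fit
    bad  = np.array([0.0, 2.0, 4.0])     # poor fit whose signed errors cancel

    for name, f in [("good", good), ("bad", bad)]:
        r = y - f
        print(name, "sum of errors:", r.sum(), "sum of squares:", (r ** 2).sum())

    # good sum of errors: 0.0 sum of squares: 0.125
    # bad  sum of errors: 0.0 sum of squares: 2.0
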
  • @adrian-ng • 7 years ago +1

    That is the nicest whiteboard I have ever seen

    • @DavidDyeIC • 7 years ago

      Thanks Adrian! Looking forward to supporting learners with you on Coursera!

  • @manushanmugam • 8 years ago

    Very nice lecture. Thank you.

  • @hsugoodman4223 • 6 years ago

    Really well explained.

  • @miguelangeldiazsanchez5337 • 8 years ago

    Great!! Very useful!