Quadratic Form Minimization: A Calculus-Based Derivation

  • Published 10 Sep 2024
  • bit.ly/PavelPa...
    lem.ma/LA - Linear Algebra on Lemma
    bit.ly/ITCYTNew - Dr. Grinfeld's Tensor Calculus textbook
    lem.ma/prep - Complete SAT Math Prep

Comments • 47

  • @MathTheBeautiful
    @MathTheBeautiful  3 years ago +2

    Go to LEM.MA/LA for videos, exercises, and to ask us questions directly.

  • @gerardogutierrez4911
    @gerardogutierrez4911 4 years ago +20

    Why does he talk like he's trying to get me to recapture the means of production from the bourgeoisie?

    • @MathTheBeautiful
      @MathTheBeautiful  4 years ago +40

      Because he is lenin in that direction

  • @vothiquynhyen09
    @vothiquynhyen09 6 years ago +7

    I have to say that I love your voice, and the passion you have for the subject.

    • @joshuaronisjr
      @joshuaronisjr 5 years ago +2

      He talks a little like Feynman

  • @omedomedomedomedomed
    @omedomedomedomedomed 4 years ago +1

    To understand the least squares derivation, I checked this. Super helpful!!!

  • @ekandrot
    @ekandrot 7 years ago +19

    For your gradient descent, do you need the -b in there, e.g. x -> x - a(Ax - b)? It seemed that without the -b and with a positive-definite matrix A, zero is the only solution, but with the -b the solution is -1, -2, 4.
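
    A minimal sketch of that update in Python (assuming the A and b quoted in a later comment below, which are consistent with the solution -1, -2, 4; the video's exact numbers are not shown here):

        import numpy as np

        # Gradient descent on f(x) = 0.5 x^T A x - b^T x; the gradient is Ax - b.
        A = np.array([[4., 1., 2.],
                      [1., 8., 5.],
                      [2., 5., 4.]])
        b = np.array([2., 3., 4.])

        x = np.zeros(3)
        a = 0.05                       # step size; must stay below 2 / lambda_max(A)
        for _ in range(20_000):
            x = x - a * (A @ x - b)    # with the -b, the fixed point solves Ax = b

        print(x)                       # approaches [-1, -2, 4]
        print(A @ x - b)               # residual near zero

    Dropping the -b amounts to descending on 0.5 x^T A x instead, and for positive-definite A that does converge to zero, as the comment observes.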

  • @ijustneedaname47
    @ijustneedaname47 3 years ago +1

    This video really helped tie these concepts together for me. I really appreciate your posting it.

  • @TuNguyen-ox5lt
    @TuNguyen-ox5lt 6 years ago +5

    Gradient descent is a technique used in machine learning nowadays to optimize a loss function. This video is great.

  • @bryan-9742
    @bryan-9742 4 years ago +5

    This is so cool. Love this channel. I'm learning so much that I should have learned years ago.

  • @snnwstt
    @snnwstt 1 year ago

    1:18 Just as an observation: while it is usual to see the quadratic form as presented here, I find the following a little bit more ... elegant:
    0.5 * <x y z 1> [W] {x y z 1}
    with < > a row vector, { } a column vector, and [ ] a matrix. Here

    W =  4  1  2 -2
         1  8  5 -3
         2  5  4 -4
        -2 -3 -4  0

    which is symmetric if A is symmetric. Note that the minus signs in the last column and last row are due to the original subtraction, and the bottom-right 0 holds the constant term, which here is ... zero.
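
    A quick numeric check of that identity (a sketch; A and b are read off the blocks of W above):

        import numpy as np

        # W = [[A, -b], [-b^T, 0]], so 0.5 <x 1> W {x 1} = 0.5 x^T A x - b^T x.
        A = np.array([[4., 1., 2.],
                      [1., 8., 5.],
                      [2., 5., 4.]])
        b = np.array([2., 3., 4.])

        W = np.zeros((4, 4))
        W[:3, :3] = A
        W[:3, 3] = -b
        W[3, :3] = -b                  # W[3, 3] stays 0: the constant term is zero

        x = np.array([0.7, -1.3, 2.1])         # any test point
        v = np.append(x, 1.0)                  # homogeneous vector {x y z 1}

        print(0.5 * v @ W @ v)                 # the augmented form
        print(0.5 * x @ A @ x - b @ x)         # the usual form; same value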

  • @sora290762594
    @sora290762594 3 years ago +1

    great way of explaining quadratic optimization

  • @joaquingiorgi5133
    @joaquingiorgi5133 2 years ago +1

    Made this concept easy to understand, thank you!

  • @Userjdanon
    @Userjdanon 2 years ago +1

    Great video. This was explained very intuitively.

  • @jjgroup.investments
    @jjgroup.investments 2 years ago +1

    Thanks for this awesome video

  • @serkangoktas5502
    @serkangoktas5502 4 years ago +4

    I always knew that something was off with this derivation. I am relieved that this wasn't because of my lack of talent in math.

    • @MathTheBeautiful
      @MathTheBeautiful  4 years ago +7

      It's **never** you. It's always the textbook.

  • @user-xt9js1jt6m
    @user-xt9js1jt6m 4 years ago +3

    Nice explanation, sir.
    You look like Jason Statham ❤️❤️❤️
    I felt like an action star was giving a lecture on matrices ❤️❤️🙏

    • @MathTheBeautiful
      @MathTheBeautiful  4 years ago +4

      I get that a lot when I wear a tight t-shirt.

  • @DiegoAToala
    @DiegoAToala 2 years ago +1

    Thank you, so clear!

  • @somekindofbluestuff
    @somekindofbluestuff 3 years ago +1

    thank you!

  • @ibrahimalotaibi2399
    @ibrahimalotaibi2399 5 years ago +2

    Monster of Math.

  • @kumudayanayanajith6427
    @kumudayanayanajith6427 3 years ago +1

    Great explanation!! Thank You

  • @kreechapuphaiboon4886
    @kreechapuphaiboon4886 6 years ago +1

    Great lecture; he explains so well.

  • @bobstephens97
    @bobstephens97 1 year ago +1

    Awesome. Thank you.

  • @みの-c5c
    @みの-c5c 4 years ago +1

    This really helps a lot in understanding matrix derivative, and it's so clear. Thanks!!!

  • @ashwinkraghu1646
    @ashwinkraghu1646 3 years ago +1

    Excellent teacher! And a life saver.

  • @devrimturker
    @devrimturker 3 years ago +1

    Is there a relation between positive-definite matrices and convex sets?

    • @MathTheBeautiful
      @MathTheBeautiful  3 years ago +1

      Yes, excellent intuition. The level set for a positive-definite quadratic form is a convex shape.
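
      A small numeric illustration of that connection (an arbitrary positive-definite A, not the video's): positive-definiteness makes the quadratic form convex, which is what gives its sublevel sets their convex shape.

          import numpy as np

          # Check f(t*u + (1-t)*v) <= t*f(u) + (1-t)*f(v) for f(x) = 0.5 x^T A x
          # with a randomly built positive-definite A.
          rng = np.random.default_rng(1)
          M = rng.standard_normal((3, 3))
          A = M.T @ M + 3.0 * np.eye(3)    # M^T M is PSD; adding 3I makes it PD

          f = lambda x: 0.5 * x @ A @ x

          for _ in range(1000):
              u, v = rng.standard_normal(3), rng.standard_normal(3)
              t = rng.uniform()
              assert f(t * u + (1 - t) * v) <= t * f(u) + (1 - t) * f(v) + 1e-12

          print("convexity inequality held on all 1000 samples")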

  • @s25412
    @s25412 3 years ago

    7:15 What if your matrix is positive semi-definite? Wouldn't there still be a minimum?
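
    A small numeric illustration of what can happen in the semi-definite case (a hypothetical example, not from the video): f(x) = 0.5 x^T A x - b^T x has a minimum only if b lies in the range of A; otherwise f is unbounded below along a null direction of A.

        import numpy as np

        A = np.array([[1., 0.],
                      [0., 0.]])       # positive SEMI-definite: eigenvalues 1 and 0

        def f(x, b):
            return 0.5 * x @ A @ x - b @ x

        # b in range(A): a minimum exists, though not unique along the null direction.
        b1 = np.array([1., 0.])
        print([f(np.array([1., t]), b1) for t in (0., 10., 100.)])   # constant -0.5

        # b not in range(A): slide along the null direction to minus infinity.
        b2 = np.array([0., 1.])
        print([f(np.array([0., t]), b2) for t in (0., 10., 100.)])   # -0.0, -10.0, -100.0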

  • @AliVeli-gr4fb
    @AliVeli-gr4fb 7 years ago +2

    thank you, it was a beautiful course

    • @MathTheBeautiful
      @MathTheBeautiful  7 years ago +2

      Thank you, Ali, I'm glad you're enjoying our videos. But why "was"?

    • @AliVeli-gr4fb
      @AliVeli-gr4fb 7 years ago +1

      @MathTheBeautiful It is normal to say it in the past tense in my language, so I thought in it but wrote in English. So, no real reason.

    • @MathTheBeautiful
      @MathTheBeautiful  7 years ago +3

      :) I just wanted to convey that the course is ongoing!

    • @TheTacticalDood
      @TheTacticalDood 5 years ago +1

      @@MathTheBeautiful Is it still ongoing? This channel is amazing, it would be sad to see it stop!

  • @telraj
    @telraj 2 years ago +1

    Why skip the matrix calculus? It's not rocket science

  • @johnfykhikc
    @johnfykhikc 6 years ago

    Where can I find the statement? I did an unsuccessful search.

  • @kaursingh637
    @kaursingh637 4 years ago

    Sir, you are very clear. Please give short lectures.

  • @joshuaronisjr
    @joshuaronisjr 5 years ago

    This is just a comment for me to look at in the future, but at some point he says that A will be mostly filled with zeroes before we start Gaussian elimination.
    A will be the covariance matrix (X^T X) (see the next video, on the least-squares solution). That it's mostly filled with zeroes indicates that most of the random variables (each column of X is a different random variable of the dataset) are uncorrelated with one another (at least, if they ARE independent, then their covariance will be 0).
    However, Gaussian elimination involves linearly combining rows, so the intermediate matrices may NOT be sparse!
    As for computer storage... I don't know much about it, but maybe computers store zeroes in a different way, so that sparse matrices are easier to store?
    Actually, I guess this comment is for more than just me: why can computers store sparse matrices well? (A sketch follows below.)
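
    On that last question, a sketch of the usual answer (scipy.sparse used as one concrete example): sparse formats store only the nonzero entries plus their coordinates, so memory scales with the number of nonzeros rather than with n^2.

        import numpy as np
        from scipy import sparse

        n = 2_000
        dense = np.eye(n)                   # n*n float64 entries: ~32 MB
        csr = sparse.csr_matrix(dense)      # CSR keeps data + indices + indptr only

        print(dense.nbytes)                                              # 32000000
        print(csr.data.nbytes + csr.indices.nbytes + csr.indptr.nbytes)  # ~32 KB

    And the catch is exactly the one raised above: elimination causes "fill-in", so the factors of a sparse matrix can be much denser than the matrix itself; sparse solvers reorder rows and columns to limit that fill-in.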

  • @roaaabualgasim4882
    @roaaabualgasim4882 3 years ago

    I want examples or material to illustrate the idea of the method of maximization and minimization of a function with constraints (Lagrange multipliers) and with no constraints (by quadratic form and Hessian matrix) 😭

  • @marshall7253
    @marshall7253 5 years ago

    I love this guy

  • @darrenpeck156
    @darrenpeck156 2 years ago

    Absolute value has a minimum.

  • @gustavoexel5569
    @gustavoexel5569 4 years ago

    At 13:15 my chin literally fell.