Normal equation solution of the least-squares problem | Lecture 27 | Matrix Algebra for Engineers

  • Published 25 Oct 2024

Comments • 50

  • @ProfJeffreyChasnov
    @ProfJeffreyChasnov  4 years ago +4

    Find other Matrix Algebra videos in my playlist th-cam.com/play/PLkZjai-2Jcxlg-Z1roB0pUwFU-P58tvOx.html

  • @ajsctech8249
    @ajsctech8249 2 years ago +10

    This is the best description of using matrices to solve linear regression I have ever seen. With basic to intermediate knowledge of linear algebra, Chasnov explains each step in clear detail and then rounds things off with a trivial example to explain all the math behind the powerful method of linear regression. This man is both a maths and a communications genius. Greetings from Germany. Your students are incredibly lucky: you look like a professor who is determined to make sure all students understand the topics rather than one who wants students to marvel at his own math prowess.

  • @johnedakigimode4747
    @johnedakigimode4747 8 months ago +1

    Thank you Prof. Chasnov, we appreciate your good work.

  • @idanmalka4561
    @idanmalka4561 2 years ago +1

    Thanks a lot, Mr. Chasnov! Really saved me hours of mental breakdown.

  • @88noureldin
    @88noureldin 1 year ago +1

    Thanks a lot, Prof Jeff, for the valuable lecture.
    @4:25: Why is A^T A an invertible matrix? As per the last lesson, an invertible matrix is one that has an inverse (A A^{-1} = I); kindly clarify what makes A^T A invertible.
    @5:21: The projection matrix A(A^T A)^{-1} A^T: is there any paper that illustrates it? I think we did not come across it in the previous lectures.
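
A short note on the two questions above, using the standard argument (it is not spelled out at these timestamps in the video): A^T A is invertible exactly when the columns of A are linearly independent, because A^T A and A share the same null space:

    A^{T} A x = 0 \;\Rightarrow\; x^{T} A^{T} A x = \|Ax\|^{2} = 0 \;\Rightarrow\; Ax = 0 \;\Rightarrow\; x = 0,

so the square matrix A^T A has only the trivial null space and is therefore invertible. The projection matrix A(A^T A)^{-1} A^T appears in most linear algebra texts under orthogonal projection onto the column space.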

  • @abdula.7064
    @abdula.7064 4 years ago +2

    Your explanation is superb!

  • @Sergei-ld1iv
    @Sergei-ld1iv 1 year ago +1

    Many thanks!!! You certainly have a huge talent for teaching, as you know where the breaks in understanding sit and give them particular attention!

  • @sorayyakrimi215
    @sorayyakrimi215 2 years ago

    Thank you so much, dear Prof. Chasnov.

  • @lancelofjohn6995
    @lancelofjohn6995 2 years ago

    Professor, I still have one question. You mean b - b_proj is orthogonal to the columns of A; but how do we know at the beginning that b is outside the column space of A, so that b - b_proj is in the kernel of A^T? Does b - b_proj just become shorter or longer?

  • @EB3103
    @EB3103 3 years ago +2

    At first you start with
    1) Ax = b
    and then you say
    2) Ax = b_proj.
    You multiplied both sides of (1) by A transposed, and then used (2). I can't understand how you can use both in the same proof.

    • @oiloveggoindian3087
      @oiloveggoindian3087 7 months ago

      It should be like this: A^T A x = A^T b_proj + A^T(b - b_proj). However, b - b_proj is in Null(A^T), so that part just becomes zero.
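
For anyone stuck on the same step, here is the chain written out in full (a sketch of the lecture's argument, with b_proj denoting the projection of b onto the column space of A):

    A^{T} b = A^{T}\big(b_{proj} + (b - b_{proj})\big) = A^{T} b_{proj} + \underbrace{A^{T}(b - b_{proj})}_{=\,0} = A^{T} b_{proj},

since b - b_proj is orthogonal to Col(A), i.e. it lies in Null(A^T). So multiplying (2) by A^T gives A^T A x = A^T b_proj = A^T b, the same right-hand side as multiplying (1): the two equations differ only by a vector that A^T annihilates.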

  • @yuanzhuchen481
    @yuanzhuchen481 5 years ago +3

    You made it look easy; thank you so much, Professor Chasnov!

  • @prabaa123
    @prabaa123 4 months ago

    Thank you, professor!!

  • @dark6.63E-34
    @dark6.63E-34 1 year ago

    Amazing explanation

  • @neepamgandhi5251
    @neepamgandhi5251 24 days ago

    According to what you did, you found the slope and intercept for 3 points on a graph. How is this related to the least-squares problem? You never got around to the difference (y - y'), where y' is the value lying on the least-squares line.
    I hope this question makes sense.

    • @ProfJeffreyChasnov
      @ProfJeffreyChasnov  24 days ago

      The solution minimizes the sum of the squares of the differences.
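
A minimal numpy sketch of the whole calculation. The data points (1,1), (2,3), (3,2) are an assumption here, chosen because they reproduce the beta_0 = 1, beta_1 = 1/2 quoted in other comments; the residuals at the end are exactly the y - y' differences asked about.

    import numpy as np

    # Assumed data points (x_i, y_i) matching the lecture's example.
    x = np.array([1.0, 2.0, 3.0])
    y = np.array([1.0, 3.0, 2.0])

    # Design matrix A: a column of ones (intercept) and a column of x values.
    A = np.column_stack([np.ones_like(x), x])

    # Normal equations (A^T A) beta = A^T y, solved by Gaussian elimination.
    beta = np.linalg.solve(A.T @ A, A.T @ y)
    print(beta)                    # [1.  0.5]  ->  y' = 1 + x/2

    # Residuals y - y', and the sum of squares that the solution minimizes.
    r = y - A @ beta
    print(r)                       # [-0.5  1.  -0.5]
    print(np.sum(r**2))            # 1.5: no other line gives a smaller value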

  • @trentconley4374
    @trentconley4374 1 year ago

    Fantastic explanation!

  • @grakeshn
    @grakeshn 2 years ago +1

    Good class, professor.

  • @sirginirgin4808
    @sirginirgin4808 1 year ago

    Many thanks

  • @calculusguru1063
    @calculusguru1063 1 year ago

    How does multiplying a vector in the null space by the transpose of that matrix get rid of the vector in the null space?
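
A hedged one-line answer (the standard reasoning, not a quote from the lecture): the vector being killed is b - b_proj, which is orthogonal to every column of A, and that is exactly what membership in the null space of A^T means:

    v = b - b_{proj} \perp \mathrm{Col}(A) \iff A^{T} v = 0,

so multiplying by A^T sends it to the zero vector by definition, not by accident.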

  • @andrewsdanquah2395
    @andrewsdanquah2395 5 months ago

    Super good.

  • @lancelofjohn6995
    @lancelofjohn6995 2 years ago

    Hello professor, thanks for your video. But I have one question: if Ax = b is overdetermined, why can we still use "=" instead of Ax >= b or Ax

    • @ProfJeffreyChasnov
      @ProfJeffreyChasnov  2 years ago

      With = there is no solution. What would be the meaning of >= or

    • @lancelofjohn6995
      @lancelofjohn6995 2 years ago

      @@ProfJeffreyChasnov Thanks a lot, professor. I understand the meaning of the equation and the way to obtain the solution.

    • @lancelofjohn6995
      @lancelofjohn6995 2 years ago

      @@ProfJeffreyChasnov Professor, I realized I made an error when I wanted to describe the unequal relation between Ax and b. I should use Norm2(Ax) = Norm2(b). May I know whether this expression is correct?

  • @nped5054
    @nped5054 2 years ago

    At 10:08, why is b equal to y? I have seen the normal equation as A^T A x = A^T y, which I'm confused by because of the y in the equation.

  • @williamwilkins8022
    @williamwilkins8022 3 years ago

    Why would the columns of A typically be linearly independent in a least squares problem? (4:10)

  • @exove410
    @exove410 2 years ago

    You are awesome!

  • @chrischoir3594
    @chrischoir3594 4 years ago +1

    Is this similar to finding a "regression plane"?

    • @GrahamEckel
      @GrahamEckel 4 years ago +1

      My understanding is that if you continue on with the normal equation and find the projection matrix, you will have solved for the regression plane.

  • @VoidFame
    @VoidFame 4 years ago +4

    Did you reflect the video or are you really writing backwards?

    • @vmarchenkoff
      @vmarchenkoff 4 years ago +3

      Of course, it's reflected ))

    • @VoidFame
      @VoidFame 4 years ago

      @@vmarchenkoff : )

  • @williamwilkins8022
    @williamwilkins8022 3 years ago

    Why do you use the normal equations to find x, and not just directly use the x = (A^T A)^{-1} A^T b equation you'd derived already?

    • @ProfJeffreyChasnov
      @ProfJeffreyChasnov  3 years ago +4

      It is computationally inefficient to compute inverses. It is faster to do Gaussian elimination.
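
A small numpy illustration of this point; the matrix sizes are arbitrary, chosen here just for shape:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((1000, 5))   # tall system: 1000 equations, 5 unknowns
    b = rng.standard_normal(1000)

    # Preferred: solve the normal equations by elimination (LU under the hood).
    x_solve = np.linalg.solve(A.T @ A, A.T @ b)

    # Works, but slower and numerically worse: form the inverse explicitly.
    x_inv = np.linalg.inv(A.T @ A) @ (A.T @ b)

    # Library route: np.linalg.lstsq never forms A^T A at all (SVD-based).
    x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

    print(np.allclose(x_solve, x_inv), np.allclose(x_solve, x_lstsq))  # True True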

  • @juhu5504
    @juhu5504 3 years ago

    At the end, with B0 = 1 and B1 = 1/2: if I plug in X = 1, then Y = 1 + 1/2 * 1 = 3/2, and not 1 like in the data. Same for the other two data points. Am I missing something?

    • @martinsanchez-hw4fi
      @martinsanchez-hw4fi 3 years ago

      We try to find the line that gets as close as possible to the data points.

    • @leksa8845
      @leksa8845 2 years ago

      B_0 = 0, B_1 = 1 is a wrong solution; if you draw it, it is clear.

  • @meknassihamza9324
    @meknassihamza9324 4 years ago

    Thanks a lot!

  • @martinsanchez-hw4fi
    @martinsanchez-hw4fi 3 years ago

    Isn't A(A^T A)^{-1} A^T just the identity matrix?

    • @ProfJeffreyChasnov
      @ProfJeffreyChasnov  3 years ago +1

      Only if A is invertible. But here A is not even a square matrix.
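
A quick numerical check of this reply, with a small 3x2 matrix chosen here only for illustration: the projection matrix P = A(A^T A)^{-1} A^T is idempotent (P^2 = P) but is not the identity when A is not square and invertible.

    import numpy as np

    A = np.array([[1.0, 1.0],
                  [1.0, 2.0],
                  [1.0, 3.0]])              # 3x2: not square, no inverse

    P = A @ np.linalg.inv(A.T @ A) @ A.T    # projects onto Col(A)

    print(np.allclose(P @ P, P))            # True: projecting twice changes nothing
    print(np.allclose(P, np.eye(3)))        # False: P is not the identity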

  • @mathsbyamna1843
    @mathsbyamna1843 4 years ago

    superb

  • @TylerMatthewHarris
    @TylerMatthewHarris 4 years ago

    Why is y = 1 + (1/2)x better than simply y = x? Plotted, it seems like y = x fits better.

    • @pipertripp
      @pipertripp 2 years ago

      It minimizes the square of the errors, the errors being the difference between the y value of the data and the y value of the line at each x value.
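
The comparison is quick by hand, assuming the lecture's data points (1,1), (2,3), (3,2):

    y = 1 + x/2:  errors -0.5, 1, -0.5  ->  SSE = 0.25 + 1 + 0.25 = 1.5
    y = x:        errors  0,   1, -1    ->  SSE = 0    + 1 + 1    = 2.0

So the fitted line wins under the sum-of-squares criterion, even when y = x looks plausible by eye.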

  • @BlackHoleGeorge
    @BlackHoleGeorge 1 year ago

    Great video... but there is a set of unnecessary confusions: you name the vector of the beta_i as x, the matrix containing the x_i as A, and the vector of the y_i as b. :/
    Most people are lost at that point. :/
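
For anyone lost in the same way, the dictionary between the two notations (as set up in the lecture) is:

    x = \begin{pmatrix} \beta_0 \\ \beta_1 \end{pmatrix}, \qquad
    A = \begin{pmatrix} 1 & x_1 \\ 1 & x_2 \\ 1 & x_3 \end{pmatrix}, \qquad
    b = \begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix},

so the overdetermined system Ax = b is just the three equations y_i = beta_0 + beta_1 x_i.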

  • @piotrjaga6929
    @piotrjaga6929 2 years ago

    thank you

  • @aryamaanbasuroy7646
    @aryamaanbasuroy7646 3 years ago

    Thank you

  • @sujanbhakat1199
    @sujanbhakat1199 3 years ago

    Thank you.