MIT has still got it. What a time to be alive that we can watch this for free
I am sure happy about it.
Fantastic lecture... I am so lucky to watch and learn. I am an ordinary mother of two little girls, and recently I find myself full of joy learning linear algebra. It all happened because of this wonderful lecturer and MIT. Thank you for sharing with us. I am going to continue to learn. Many thanks from Korea.
Thanks for the lecture! I've tried to learn these things before and gotten out more confused than I was when I came in, but Dr. Strang's approach makes it all seem so simple!
The "idea" of orthogonal projection allowed me to understand the Christoffel symbols. I "studied" all the lectures on MIT 18.06 but I am still discovering the linear algebra anew. Thanks G.S. , thanks MIT.
Who dares to dislike this masterpiece of a lesson?
I wish you were my professor, Mr. Strang, but hey, I have you as my professor here online. Thank you very much for your elegant explanation. Wishing you good health and a long life, Mr. Strang.
Professor Strang, thanks for showing different ways to solve least squares problems in linear algebra and statistics. Least squares is used every day to fit data.
The last two minutes on Gram-Schmidt are really remarkable; two minutes is hardly enough time to see the heart of that mathematical machine.
One point in question in the lecture: it is u_i, not v_i, that is in the column space of A. v_i should be in A's row space.
Thanks, I agree with you. It confused me when I first saw it too.
lecture starts at 5:18
thanks, I have 5 more minutes to study for the final now
Hero
I am not skipping anything from Strang
5:41 least squares
Ruthless: 25 people have disliked this video. Who dares to dislike a lecture by Prof. Strang!!
Thought I'd be watching for 5 minutes; ended up staying for the whole class...
Sir, you are the father of linear algebra. Nice teaching, sir.
bookmark Least Squares Problem 23:00
The best of the best
"You can't raise it from the dead"... How true, how true, prof. Strang :))) Even though there are some in this world that think it's actually possible to raise people from the dead, LoL :)))
Mathematicians teach machine learning way better than machine learning experts do, lol. Hats off to Prof. Strang
Least squares problem: solving a system of equations that has more equations than unknowns, i.e. a non-square (tall) matrix A.
We solve the normal equations A^T A x-hat = A^T b; since a non-square A has no inverse (and A^T A may be singular), we can also compute the solution via the SVD / pseudoinverse.
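If it helps, here is a minimal numpy sketch of that comparison (the small data matrix and right-hand side here are made up for illustration): the normal-equations solution and the SVD-based np.linalg.lstsq solution agree when A has independent columns.

```python
import numpy as np

# A tall, full-column-rank system: more equations (4) than unknowns (2).
A = np.array([[1., 0.],
              [1., 1.],
              [1., 2.],
              [1., 3.]])
b = np.array([1., 2., 2., 4.])

# Normal equations: solve (A^T A) x_hat = A^T b.
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# SVD-based least squares.
x_svd, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(x_normal, x_svd))   # True: both minimize ||Ax - b||^2
```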
At 46:00 the professor says the SVD pseudoinverse in this case is not a two-sided inverse, only a one-sided inverse. Then at the end he says that under independent columns the SVD gives the same result as Gauss; but the Sigma matrix in the pseudoinverse should still have missing values, so how can they give the same result?
Under the independent-columns assumption, A has a left inverse, and its form is exactly the same as in Gauss's method: A+ = (A^T A)^{-1} A^T.
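A small numerical sanity check of that claim (example matrix assumed): np.linalg.pinv agrees with (A^T A)^{-1} A^T when the columns are independent, and it really is a left inverse.

```python
import numpy as np

A = np.array([[1., 0.],
              [1., 1.],
              [1., 2.]])                  # independent columns, so A^T A is invertible

pinv_svd = np.linalg.pinv(A)              # SVD-based pseudoinverse
left_inv = np.linalg.inv(A.T @ A) @ A.T   # Gauss / normal-equations left inverse

print(np.allclose(pinv_svd, left_inv))       # True
print(np.allclose(left_inv @ A, np.eye(2)))  # True: A+ A = I
```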
"Well, this matrix here is doing its best to be the inverse. Actually, everybody here is just doing their best to be an inverse." (c)
This phrase really describes me fighting my procrastination all day.
The first question from the problem set asks for the eigenvalues of A+ when A is square. I know that A and A+ have the same number of zero eigenvalues, but I'm stuck searching for a relationship for the non-zero ones. Any hint? I checked numerically and verified that they are not 1/λ_i, as one might have conjectured.
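One way to reproduce that numerical check (the singular, non-symmetric matrix below is just a made-up example): the pseudoinverse inverts singular values rather than eigenvalues, so the nonzero eigenvalues of A+ need not be 1/λ_i.

```python
import numpy as np

# A singular, non-symmetric square matrix: its eigenvalues are 1 and 0.
A = np.array([[1., 1.],
              [0., 0.]])
A_plus = np.linalg.pinv(A)

print(np.linalg.eigvals(A))       # eigenvalues 1 and 0
print(np.linalg.eigvals(A_plus))  # eigenvalues 0.5 and 0 -- nonzero one is not 1/1
```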
29:40 why do we use p=2 norm rather than any other p?
x^T x is the square of the 2-norm of x, i.e. the usual inner product of x with itself.
In the notice box at 43:30, why don't both equations produce the same identity matrix?
Because you cannot expand the bracket in the second expression: the inner matrices are not square, so they don't have inverses.
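If it helps, a quick numpy check of the two products from that part of the lecture (example matrix assumed): for a tall A with independent columns, A+ A is the identity, while A A+ is only a projection onto the column space.

```python
import numpy as np

A = np.array([[1., 0.],
              [1., 1.],
              [1., 2.]])                      # 3x2, independent columns
A_plus = np.linalg.pinv(A)

print(np.allclose(A_plus @ A, np.eye(2)))     # True: left inverse, 2x2 identity
P = A @ A_plus                                # 3x3, projection onto C(A), not I
print(np.allclose(P, np.eye(3)))              # False
print(np.allclose(P @ P, P))                  # True: P is a projection (P^2 = P)
```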
@@jayantpriyadarshi9266 thank youuuu
@@dohyun0047 no worries bro.
If b is perpendicular to the column space of A, what is the solution for Ax=b?
Then b is in the null space of the hat matrix H (the orthogonal complement of the column space), so Hb = 0 and b-hat is 0. Hence x-hat (with A x-hat = Hb) is 0 if the n×m matrix A has rank m; if not, x-hat lies in the null space of A, and would be generated by the columns of the matrix (I - A^+ A), where A^+ is any matrix such that A A^+ A = A.
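A quick numerical illustration of that answer (the matrix and the perpendicular b below are made up): when b is orthogonal to every column of A, the least-squares solution A+ b comes out as the zero vector.

```python
import numpy as np

A = np.array([[1., 0.],
              [1., 1.],
              [1., 2.]])
b = np.array([1., -2., 1.])    # orthogonal to both columns of A

print(A.T @ b)                 # [0. 0.] -- confirms b is perpendicular to C(A)
x_hat = np.linalg.pinv(A) @ b
print(x_hat)                   # [0. 0.] -- the least-squares solution is zero
```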
The professor claims at 40:39 that A+ b gives the same result as (A^T A)^{-1} A^T b if the matrix A^T A is invertible. But if it is not invertible, what is the geometric meaning of A+ b? Is it still the projection of b onto the column space of A?
If it's not invertible, then in general the vector gets mapped through a null space, into a space of dimension smaller than n. Since it lands in a lower-dimensional space, it's impossible to fully recover it / map it back uniquely.
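For the geometric part of the question, here is a small numpy sketch one could run (rank-deficient example matrix assumed): even when A^T A is singular, A (A+ b) still agrees with the orthogonal projection of b onto the column space of A; A+ b itself is the minimum-norm least-squares x-hat.

```python
import numpy as np

# Rank-deficient A: the second column is twice the first, so A^T A is singular.
A = np.array([[1., 2.],
              [2., 4.],
              [3., 6.]])
b = np.array([1., 0., 1.])

x_hat = np.linalg.pinv(A) @ b     # minimum-norm least-squares solution

# Projection of b onto C(A), computed directly from an orthonormal basis of C(A).
q = A[:, 0] / np.linalg.norm(A[:, 0])
proj_b = q * (q @ b)

print(np.allclose(A @ x_hat, proj_b))   # True: A (A+ b) is still the projection of b
```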
Camera man... follow Prof. Strang!!!
How did he get (Ax-b)^T (Ax-b)?
@MA MO It's the dot product written in matrix form.
Ax-b is a column vector, so (Ax-b)^T is a row vector. Write Ax-b = w; then w^T w gives sum_i w_i^2, which is exactly the sum of squares of all the elements of w.
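A one-line check of that identity in numpy (the vector here just stands in for Ax - b): w^T w equals the sum of squared entries, i.e. ||Ax - b||^2.

```python
import numpy as np

w = np.array([3., -1., 2.])    # stands in for Ax - b
print(w @ w, np.sum(w**2))     # both are 14.0
```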
The pseudoinverse part is unclear; the book gives more details and its relationship with the normal-equations solution.
Could you tell me how to find the book, or the name of the book? Thank you!
@@meyerkurt5875 His own book: Linear Algebra and Learning from Data.
Chalk on a board still makes the hair on my back rise 😅
👍🏼
This guy could be the worst professor of all time
Far from the worst but he ain't great, that's for sure.