I've been taking proper notes, doing the readings, and all the things you would expect of a student (short of taking the quizzes and final). I'm surprised at myself for having the will, interest, and sustained performance to get here. I'm tooting my own horn because I think I deserve it. Thanks to Professor Strang, I haven't just learned one of the most beautiful frameworks in mathematics. I have also proven that I can focus and get things done. If you knew me, you'd know that I'm unreliable, highly motivated at first, but that I ultimately lose steam and perform poorly. This class taught me that when I choose to do something to completion, and mean it, good things can happen. Thank you from the bottom of my heart, Professor G. Strang! Seriously, I'll always hold you in high regard. You gave me the MIT experience. You made me really wanna roll up my sleeves and learn deeply, learn how to think, which is impressive because I'm a college dropout who wanted nothing to do with education (well, schooling, you come across that line of reasoning, I'm sure). This course has improved my confidence. I just wanted to let you know, the special touch you placed on every lecture was precisely what I needed to start to see education in a whole new light. I've said it a million times now, but thank you Professor Strang! And of course, many, many thank you's to the MIT OpenCourseWare for making such high-quality, timeless content available to millions like myself for free. You're changing minds.
There is a wonderful momentum to this course that few courses can match. And Prof Strang puts such great effort into building a good intuition for the subject that when much more advanced topics appear, they just fall into place. Really, the breadth and depth of this "introductory" course is amazing
You are not alone in feeling that.
May I ask if you're still using linear algebra after completing the course, and if so, whether you think learning it from Prof. Strang served you well long-term? And congrats. It takes serious dedication to do something like this without the threat of bad grades and all that.
@@lzawbrito thanks for checking in. Yes, I’m still using linear algebra in my own private studies. I’ve since gone on to self-study Sturm-Liouville theory, the theory of integral transforms, and complex analysis, which are really just about infinite-dimensional vector spaces. Just a few minutes ago, I was solving the Poisson equation on a compact domain in R2 (I just took a break, came online, and found your comment 😅).
Physicists always use the so-called fundamental solution (which is really just a radial basis function) to solve Poisson’s equation, motivated by the fact that the Laplacian is rotationally symmetric in the usual sense. However, I’m doing research to show that there is a more general solution involving non-radial basis functions (which makes sense as something you’d look for if you’ve taken a linear algebra course as fine as Professor Strang’s). I find this line of inquiry interesting because Maxwell’s equations in classical physics are really just a form of higher-order Poisson equations, and their underlying potential fields are expressed as radially symmetric convolutions. But the two issues are that 1) potentials only work over all space, not in compact geometries (Helmholtz decomposition theorem), and 2) you can’t capture all the expected dynamics if you’re only composing your functions with radial bases. So, basically, the retarded potential solutions of E&M are just a projection of a function onto a radial subspace. Linear algebra is a good way to see this :)
So I want to figure out the most general structure of solutions to Maxwell’s equations, and then write a machine learning algorithm that can minimize parameters of the equation to predict the mathematically ideal electronic circuit topology given a set of constraints. As you can tell, I’m a little “all over the place”, but I still insist that this was the first class I took that gave me the confidence to do math and stick to it.
As for your final point, I’ve found that the threat of bad grades generally hurts my understanding, because then I’m learning to pass rather than to deeply understand. My motivation is that I’m working towards a novel technology and it needs a lot of mathematics. I care deeply about said technology (the float circuit - hence the pseudonym 😅), so I’m internally motivated. I dream lucidly about math. I can’t turn it off. I have a deep appreciation for why I’m trying so hard. Best wishes with everything, Lucas!
Edit -
one day I’ll start making YT videos documenting my way of understanding things, to help people that want to “go deeper” on concepts. I just don’t feel ready (knowledgeable enough) today, but that’s the plan. Hopefully by this coming January. I’m just doing a measure theory course to tie up some loose ends in understanding for my first course on PDEs.
@@ozzyfromspace This is awesome. The applications of linear algebra in machine learning are many, I imagine. I self-studied using Strang's course exactly for physics (QM) and computer science, since those subjects are what I plan to major in. And your research is this cross-section of so many different fields... I subscribed just to see what you come out with.
It's also funny you mention PDEs, as I'm taking a PDE course next semester!
I can't believe this class has come to an end so soon. I watched from lecture 1 all the way to here
Hello
I'm at lecture 9. I'll keep updating this comment
@@Upgradezz liar
@@bosengineer XD
@@Upgradezz so?
Just finished the whole course. Thank you Professor Gilbert Strang
Absolutely wonderful series of lectures. Thanks a zillion to Prof Strang and OCW for making this available. These are absolutely priceless!
@aDBo'Ch 1 Thanks for pointing out the typo.
I'm here, I can't believe how I learned math in the past. This class will benefit me for years of my life. Thank you, Mr Strang. Can't wait to continue on 18.065
The charm of this course is that after all these lectures, you feel so empowered and confident that you're willing to give any linear algebra problem a try.
"If a matrix takes a vector to zero, there is no way its inverse can bring it back to life!" - Gil Strang
Strang: "You know what that matrix is"
Me: "I do?.... Wait a minute... I do!"
Classic moments in Linear Algebra with Prof. Strang 👌👌👌
Prof. Strang is by far the best teacher I've ever known in my life. He takes things that are very complicated and teaches them as if they were super easy. Thanks to MIT OpenCourseWare for making this lecture available; your work is priceless
This is another fine lecture on inverses. This is the first time I have seen left, right, and pseudoinverses in a linear algebra class. I really learned these topics from Dr. Strang. They are important in linear algebra, signals and systems theory, and control engineering.
I have huge respect for this lecture series and Professor Strang. This exact syllabus is what we have to learn in 2020 as one of our core courses in CSE MS in AI/Robotics and ML. It is going to take multiple revisions to really internalize all that this lecture series has to offer. Professor, I hope to build some good things and help usher in the AI revolution that is happening. Thank you for sharing your knowledge to us, the next generation to push things a little bit more into the future.
The bijection between the row space and the column space had been hinted at quite nicely in previous lectures and in the textbook by Dr. Strang, e.g. he stressed the fact that an identity appears when we do row reduction. It was nice that I figured this out while walking through the previous materials. But still, he managed to show a (to me) new point of view on this bijection: the pseudo-inverse works as *the* inverse function of the bijective mapping, and the pseudo-inverse *also eliminates* a null space (the left null space, in its case), just like the original does. Such symmetry, such beauty.
this is the beauty of life, thank you so much Professor Strang and MIT!
Thank you professor Gilbert! What you did is incredible; this will help take me where I want to be in life!
For 8:22, the null space of A^T A being the same as A's null space (when A has linearly independent columns) can be proven as follows. Consider the null space of A^T A first:
A^T A x = 0 implies
x^T A^T A x = 0, i.e.
(A x)^T (A x) = ||Ax||^2 = 0
so Ax = 0, which means x is in the null space of A too. And because A's columns are linearly independent, the only vector in that null space is x = 0.
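A minimal NumPy check of this argument (the matrix below is just a made-up example with independent columns):

import numpy as np

# Hypothetical tall matrix with linearly independent columns
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])

AtA = A.T @ A

# Same rank, and both matrices act on R^2, so they have the same nullity: N(A^T A) = N(A)
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(AtA))  # 2 2

# With independent columns that common null space is just {0}, so A^T A is invertible
print(abs(np.linalg.det(AtA)) > 1e-12)  # True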
that illustration and this intuitive way of looking at pseudo inverse is great! good job!
Thanks a lot to MIT OCW for these free lectures, and I am grateful to MIT for introducing the TEACHER Mr. Gilbert Strang to the world. I wish him a happy and healthy life. You are a wonderful teacher. Thanks.
the best course to learn linear algebra
It’s been a journey Professor Strang. It’s quite sad that this has come to an end, but what a beautiful way to come full circle.
18.065 follows this course perfectly
@@angfeng9601 100% on it. Thank you brother!!
Thank you Professor Strang for the great course, and MIT for making it accessible for us. This is such a great opportunity for those of us who want to learn more than what is offered at our high schools.
What a stunning way you have delivered this lecture series, sir.
You are the best teacher, Gilbert Strang Sir. And many, many thanks to MIT OCW for giving us this beautiful lecture series totally free, and for introducing the world to this amazing professor.
Brother, will I be able to understand it? I'm a B.Tech 1st-year student
@@vicevirtuoso123 You will, brother, but there is deep meaning in every line Sir says, and you can't get everything in one viewing. Still, you will surely get a geometric sense of every concept, which is satisfying
One more suggestion: please also read Gilbert Strang Sir's book, because it will show you how much of it you really understood, and it will fill the gaps in your understanding
@@pankajvadhvani9278 thanks, brother
By the way, which course were you studying this for?
Fantastic way to introduce Pseudoinverse. GS is a great teacher. Thanks OCW for making this available
I am not ready for this to end!
Thanks MIT OCW and Prof. Gilbert for your hard work.
That’s a really great course review! And there's something new to learn at the same time
22:03 "Where in calculus, you really, you know, you're trying to visualize these things, well 2 or 3 dimensions is kind of the *_limit_* ..."
Professor Gilbert Strang: Come for the fantastic teaching, stay for the fantastic puns.
Literally, the legend of linear algebra
So happy with this journey!! Thanks Doctor Strange of Linear Algebra!!
This class was pure art
In addition to shouting out answers I knew, I would also applaud right when he said thanks.
For all of you who made it to this point in this great course, I recommend watching this short series before taking the final reviews. Full of graphics, with some good aha moments. th-cam.com/video/fNk_zzaMoSs/w-d-xo.html
Thanks Prof Strang, you helped me a lot
Thanks professor ! It really really helps understanding 3D computer vision works.
Wrong? He's awesome, he has the best explanations! You should see my teachers :o
i love these lectures! they are fantastic. i love the book too. i think it's really well written. overall, i really enjoyed this course
It may not be important to go to university. I love your lectures; they give me enjoyment
The camera crew does not understand what is important and which shots to take and later include in the final video... Way too many close-ups. In general, if the Professor is pointing to something on the blackboard and it is not included in the shot, it's a major fail.
Even though I'm studying in Europe, it's still interesting, useful, and easy to understand. Thanks for making these lectures available
MIT
🇺🇸📈🏆
“If a matrix takes a vector to zero, there's no way an inverse can bring it back to life.” I love it😇😇😎😎
19:00, 21:10, 22:50, 31:40
the pseudoinverse is denoted A+. Prof. Strang said pseudoinverses wouldn't be on the final exam. Therefore there were no A+'s on the final. :/ ...
I had hoped Professor Strang would explain the rank decomposition for calculating the pseudo-inverse. Damn time limit.
The ONE TRUE KING 😍😍😍😍😍😍😍😍😍😍😍😍😍😍😍😍😍😍😍😍😍😍
One to One 24:00
I never really got the whole picture thing, even until now. Does it give any conceptual meaning beyond the definitions embodied by it? Or is it merely a memory aid to remember the definitions? The definitions embodied are:
For any m by n matrix A - row space and nullspace are orthogonal complements in Rn, column space and left nullspace are orthogonal complements in Rm.
i.e. does it supply meaning to the 4 subspaces the way a graph would supply meaning to a function? From what I see, no... Or am I missing something?
+Quanxiang Loo
Yes it is like a function, hence the name linear map. When we multiply by a matrix we take combinations of the columns, so the range of the mapping is the column space. If the column space doesn't span Rm, then we can see there is no way to get answers that have any components orthogonal to the column space. We call this 'orthogonal space' the left nullspace.
Similarly, when we multiply by a matrix A we take the inner product of a vector x with the rows of A. Recall that the dot product v.w can be thought of as capturing how parallel v is to w, together with their magnitudes (||v|| ||w|| cos θ, where θ is the angle between v and w). Therefore in multiplying Ax = b, only the part of x that is parallel to the rows of A is relevant to determining b. In other words, the components of x are either in the row space or in the null space, and the null space doesn't contribute to b.
Input : Rn = Domain = (Row Space + Null Space)
Output : Rm = (Range / Column Space) + Left Null Space
Taking the transpose flips the input and output. This is why for orthogonal matrices, which have orthogonal unit vectors, the inverse is the transpose. They aren't scaling or skewing the coordinate vectors, so we can just transpose to flip the input and output and the mapping between vectors in the row and column spaces remains intact.
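To make the picture concrete, here is a small NumPy sketch (the matrix and vector are made-up examples) that splits an input x into its row-space and null-space parts and checks that only the row-space part matters for Ax:

import numpy as np

# Hypothetical 2x3 matrix: row space + null space fill R^3, column space + left null space fill R^2
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])
x = np.array([3.0, 1.0, 2.0])

# A has full row rank here, so A^T (A A^T)^-1 A projects onto the row space
P_row = A.T @ np.linalg.inv(A @ A.T) @ A
x_row = P_row @ x
x_null = x - x_row                      # the leftover piece lies in the null space

print(np.allclose(A @ x_null, 0))       # True: the null-space part contributes nothing to b
print(np.allclose(A @ x, A @ x_row))    # True: b only "sees" the row-space component of x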
+720SouthCalifornia Thank you for this comment! Very informative!
great answer.. here here!!
What does he mean "A projection matrix is the identity matrix where it can be and everywhere else it's the zero matrix" at 19:20?
A projection matrix doesn't change any vector in its column space, but it kills any vector in its left null space.
In that sense, it is the "identity" on the column space, and the "zero" or "killer" on the left null space.
That's not quite the full explanation, because there is a third possibility. The matrix A maps a vector that is in the row space but neither the column space nor the null space onto the column space. Thus, he shouldn't have said "everywhere else" because it's not true.
@@davidlovell729 Ax will always be in the column space of A, right? Then can you please explain what do you mean by A maps a vector that is in the row space but not in the colomn space?
@@jagwasenibanerjee1424 Antonio was totally correct, and David was right in saying that the professor shouldn't have said "everywhere else".
Here is my summary (hopefully this is totally correct):
b is in R^M (A is an MxN matrix).
If b is in C(A), then the projection matrix acts on b like the identity matrix.
If b is in N(A'), then the projection matrix acts on b like the zero matrix.
If b is in neither C(A) nor N(A'), then the projection matrix does its job: project b onto C(A).
Note: A' stands for A transpose.
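A small NumPy sketch of the three cases in the summary above (the matrix and vectors are made-up examples):

import numpy as np

# Hypothetical 3x2 matrix A with independent columns; C(A) is the xy-plane in R^3
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])

P = A @ np.linalg.inv(A.T @ A) @ A.T    # projection matrix onto C(A)

b_in_col   = np.array([2.0, 3.0, 0.0])  # already in C(A)
b_in_lnull = np.array([0.0, 0.0, 5.0])  # in N(A'), orthogonal to C(A)
b_general  = np.array([2.0, 3.0, 5.0])  # in neither

print(np.allclose(P @ b_in_col, b_in_col))   # True: acts like the identity
print(np.allclose(P @ b_in_lnull, 0))        # True: acts like the zero matrix
print(P @ b_general)                         # [2. 3. 0.]: projected onto C(A)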
@@phogbinh Thanks
"projectors are trying to be identity, but it's impossible."
It is awesome! Thank you professor!
Damn, this guy rocks 🎸
You are a great teacher!
simply amazing!
should be the legend
@Zanco I agree, it's almost magical :-). Or as he puts it: elegant.
Prof. G. S. is awesome
There shouldn't be a power of minus one and multiple 'T's - that's beside the matter.
I mean, it's no longer an inverse once you use this equation.
'34:12'
Some great information here, thanks
21:28 Well, I shouldn't say anything bad about calculus, but I will. 😂
I Really Like The Video Left and Right Inverses; Pseudoinverse From Your
The best
This short 40-minute lecture contains lots of juicy stuff; I watched it twice
What does it mean when the projection matrix is trying to be the identity matrix?
Only when A is invertible can that projection matrix equal the identity. Since A is invertible, it has n independent columns. What happens when you project a vector onto a subspace that the vector is already inside? The projection matrix acts as the identity, doesn't it? It's like the least-squares problem Ax = b where b is in the column space.
But elsewhere the projection matrix is the zero matrix? How?
The "elsewhere" the professor refers to is the rest of R^m. The projection matrix projects onto the column space of A, and the column space and the left null space together span R^m.
For future learners who may have doubts about this.😅
A good class! Thanks for all your effort (extra thanks to the teacher)!
Insightful!
interesting video and very informative
Thank you really
Thank you, Prof, but I would like to find a video about applied mathematics, especially focused on power systems. Thank you
thankyou!
very interesting video thanks
lovely 21:14
I shouldn't say anything bad about calculus.
*But I will*
Does he want students to shout out answers? Because I'd be talking a lot. I'd have to ask him whether he just meant for us to think about it or to say it.
Wow this is the end
How can you use right pseudo-inverse to solve Ax = b?
Ax = b
Here we need to project b onto the column space of A,
so we need the left pseudoinverse:
(A+) A x = (A+) b
x' = (A+) b
where x' is the approximate (least-squares) solution,
and A(A+) is the projection matrix onto the column space of A.
So I guess in this case we don't use the right pseudoinverse
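If it helps, here is a minimal NumPy sketch of this left-pseudoinverse / least-squares recipe (assuming A has full column rank; the numbers are a made-up example):

import numpy as np

# Hypothetical overdetermined system Ax = b with no exact solution
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

A_plus = np.linalg.inv(A.T @ A) @ A.T    # left pseudoinverse (full column rank case)
x_hat = A_plus @ b                       # least-squares solution x' = (A+) b
P = A @ A_plus                           # projection matrix onto C(A)

print(np.allclose(A @ x_hat, P @ b))                              # True: A x' is the projection of b onto C(A)
print(np.allclose(A.T @ (b - A @ x_hat), 0))                      # True: the residual is orthogonal to C(A)
print(np.allclose(x_hat, np.linalg.lstsq(A, b, rcond=None)[0]))   # True: matches numpy's lstsq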
Awesome!
Is this real? He is the legend..
Thanks sir
very interesting thanks
"Calculus sucks" - Strang, 2005 😆
I think about this in my sleep.............lmao.
147K views with only 700 likes; human beings are not very generous with their likes
which lecture did he speak about full column rank?
+Aman Singh Judging from our quick searching, full column ranks are discussed starting in lecture 8.
I am a statistician who is watching this video :)))
How is A(A^TA)^(-1)A^T a projection matrix?
Please help.
Check that the matrix satisfies P^2 = P
You can always consult lecture 15: the matrix essentially projects any vector onto its column space, which gives the best available solution to a problem that has no exact solution.
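A quick NumPy check, with a made-up full-column-rank A, that P = A(A^T A)^-1 A^T really behaves like a projection:

import numpy as np

# Hypothetical tall matrix with independent columns
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])

P = A @ np.linalg.inv(A.T @ A) @ A.T

print(np.allclose(P @ P, P))   # True: idempotent, projecting twice changes nothing
print(np.allclose(P, P.T))     # True: symmetric, as orthogonal projection matrices are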
haha i love this guy,
What's wrong with that teacher lol