Learning English and discovering your channel was the best thing I've ever done. In my native language there aren't many people interested in maths, at least on YouTube.
In my native language there are "many" math videos, but never with the logical reasoning/deduction. Just "remember this formula and that formula", blah blah.
could be an opportunity, if you want to give it a shot yourself!
What language?
Spanish. There are a lot of people who make videos, but they're boring and the same as high school. I want to learn something new.
@@nutriascoc6331 Have you seen @matesmike?
As someone who had to discover this on my own, I have to say that this video pretty much should be a standard lesson for people who are new to linear algebra. It's just so simple, and it needs to be drilled into students' heads. It was fun to discover myself, but this simple motivation can inspire students to come up with extensions of their own. This lesson is dense, right to the point, and very foundational.
THIS. Just, THIS. I swear, if only this were how they introduced matrix multiplication, it'd make so much more sense than just forcing it. Walking students through how substitution from one point to the next in a sequence of linear transformations produces this nice pattern, the pattern we now call matrix multiplication, is simply wonderful.
They? You're losing your mind. Comments like this should be kept to yourself.
@@jay_wright_thats_right man, just shut up.
@@jay_wright_thats_right and, yeah, "they", meaning "math teachers". Is that hard to understand???
The multiplication rule for matrices A, B: V -> V (for V a finite-dimensional vector space) actually pops out quite naturally when you look at how A, B and AB act on a basis of V and expand in terms of that basis.
This is the best way of thinking of matrix multiplication: it is really composition of linear maps, and matrix multiplication is what it has to be so that the correspondence between matrices and linear transformations works.
Great video demonstrating the concept behind the mechanics of matrix multiplication. When I was an engineering student 45 years ago, I realized that the right-hand rule for vector multiplication and the row x column multiplication rules for matrices were arbitrary, so when I encountered them in engineering problems I'd often use the left-hand rule and "reflected" matrix multiplication to get the exact same final physical answer (which raised some eyebrows among the teachers grading my submittals). I think a video explaining and generalizing our arbitrary rules would be interesting, but I've never seen such a thing in a book or on video.
Teachers are so overworked that I wouldn't be surprised if they marked you down simply because they didn't recognize your brilliance. That happened to Niels Bohr on his PhD exam. It almost happened to Feynman. What should a teacher do when faced with a student who is smarter than they are? Give the student an A and great letters of recommendation. Unfortunately, that rarely happens.
7:53 I noticed Z1 should be Z2 here, I was confused a little bit at first.
Why did he write Z1? It all seemed logical up to then.
I don't think I'll bother watching any more.
I noticed it too. He made a typo. Good catch
Yes. I too noticed the mistake.
Yes it was doing my head in. I paused to look for this comment 👏
7:51 You meant z_2 ?
16:14 Good Place To Stop
I would like to propose a name for mistakes such as the one he makes here at 7:51. I want to call them Pennisms, of which his videos have many an example...😏
15:45 The traces of the two products should be equal, but they aren't. You changed the 1 in the lower right to a 3 when you put them in the opposite order.
Well spotted! In the simple 1, 0, 0, 0 version the traces are equal. I didn't spot that check with the trace. Full Points Awarded.
I was also going to bring this up
Yeah. It's a good video, but he tends to rush too much towards the end and makes more mistakes.
I stopped watching as soon as I saw this comment. If he's making mistakes, then I can't trust him.
That was very constructive.
Thank you, professor.
This is indeed a great video. I would also recommend 3blue1brown's "Essence of Linear Algebra" series on this topic, which tries to naturally evoke all these concepts in a way that anyone could understand.
Thanks for the recommendation!
Nice video, but I wish you had mentioned "associativity", i.e. we want A(Bx) = (AB)x. Given the definition of matrix-vector multiplication, and given that we want this associativity to hold, we must define matrix-matrix multiplication this way. So the motivation is to make the matrix product compatible, associatively, with matrix-vector multiplication.
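A quick numerical sanity check of that A(Bx) = (AB)x motivation (my own sketch in Python with numpy, not from the video):

import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [5, -2]])
x = np.array([7, -3])

lhs = A @ (B @ x)      # apply B to x first, then apply A to the result
rhs = (A @ B) @ x      # apply the single product matrix AB to x
assert np.array_equal(lhs, rhs)   # both give [79, 155]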
Hi, at timecode 14:26, the value in row 1, column 1 should be 23 instead of 21, if I'm not wrong. As the last line of "Some Like It Hot" puts it, "nobody's perfect." A teacher's mistakes shorten the distance to the pupils, and that's encouraging for them 😅
Typos happen.
@@edwardlulofs444 Well he should correct them then.
I wonder what his computer coding is like.
@@suttoncoldfield9318 I don't think that Dr Penn writes computer code. He has help producing the videos. I doubt that his producers can spot and correct typos.
I know that typos are frustrating for students learning material, but these videos are not meant to replace in class instruction where there is interaction.
Even with 50 years of studying math, typos are difficult for me. Try keeping a positive attitude and consider them homework problems.
Most times other comments correct the typos.
Consider watching the videos more than once. I have to study the same material many times to spot typos on the fly.
I'd still rather have the typos than teachers who explain poorly or give incorrect explanations from poor understanding of the material. His deep understanding of the subject overcomes these small frustrations for me. I hope my comments help.
Composition of linear operators is even easier and more natural motivation to define matrix multiplication the way it is.
You see, that third row on the blackboard, I would have written it the other way around: (x1,x2) goes *through* the matrix, so the matrix belongs on the right, subject verb object. Matrix multiplication only made sense to me when I realised that the matrix was being treated like a *function*, and we write functions f(x) with the operator on the left and the operand on the right. I worked my way halfway through "The Geometry of Complex Numbers" before I worked out what was going on and why my code wasn't doing the right thing.
You did something similar, but more basic, recently. I had never questioned the matrix multiplication algorithm before, so I was fascinated by your explanation. When you started this episode, I had only a vague memory of how you did the previous version. This told me that I hadn't mastered the material and needed review.
But then, you expanded the explanation to the tensor product and tensor contraction. Brilliant. I had never understood these concepts so clearly before.
Thank you very much. 😃
If you swivel the vector counter-clockwise into the matrix rows instead of swivelling the rows into the vector, you do your calculations in the places where they would be in the separate equations, rather than sideways and on top of each other. That doesn't help for matrix multiplication, but there it's easy enough to think of the second matrix as two columns side by side that you handle one at a time.
🌟Thanks to Brilliant for sponsoring today's video! To get started for free, visit brilliant.org/MichaelPenn/
2:21 - It would probably be good to explain to students why x is written as a column in this step, instead of a row (as the positions of x1 and x2 in the equations would suggest). That choice would result in a different rule for matrix multiplication.
I know we want to keep the analogy between the vectors x and y, but this may not be obvious.
Shouldn't the first entry of the 2x2 matrix be 23 instead of 21?
yes -1 x 1 + 8 x 3 + 3 x 0 = -1 + 24 + 0 = 23
Yes, he wrote 5 0 3 instead of 5 0 1 when switching the matrices.
This was more or less how I learned it as well (our course was based on Friedberg, Insel, and Spence): matrix multiplication is defined so as to ensure that the matrix of the composite of two linear maps equals the product of the matrices of each map taken separately. (This is basically the same as the video's approach since all finite-dimensional F-vector spaces are structurally the same as F^n, and the linear maps on F^n are just these change-of-variable maps.) I believe you can also define it in terms of commuting diagrams, but these are all just different ways of seeing the same thing.
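To make that concrete, here's a small sketch (my own, in Python with numpy; the maps f and g and the helper matrix_of are just illustrative) that builds the matrix of a linear map column by column from its action on the standard basis and checks that the matrix of the composite equals the product of the two matrices:

import numpy as np

def f(v): return np.array([2*v[0] + v[1], -v[0] + 3*v[1]])   # a linear map on R^2
def g(v): return np.array([v[0] - v[1], 4*v[1]])             # another linear map on R^2

def matrix_of(h):
    # the columns of the matrix are the images of the standard basis vectors
    e1, e2 = np.array([1, 0]), np.array([0, 1])
    return np.column_stack([h(e1), h(e2)])

F, G = matrix_of(f), matrix_of(g)
GF = matrix_of(lambda v: g(f(v)))     # matrix of the composite g∘f
assert np.array_equal(GF, G @ F)      # equals the matrix product G·F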
Awesome explanation, thank you!
If I recall correctly, linear substitutions used to be a common entry point to matrix algebra and early abstract algebra like 100 years ago.
This is awesome. I've previously tried and failed to build a physical intuition for matrix multiplication, and I think this is the first time it's really clicked for me. If you do it any other way, it basically doesn't work or it turns into a horrible mess, whereas this way is the only elegant way to nest systems of linear equations or linear maps inside one another. Excellent presentation, thanks.
Yes. Some textbook writers just assume that everyone understands that a matrix multiplication equals two consecutive linear transformations of a point, or equals merging two sets of dependent linear equations into one. But a good teacher always begins with an example.
Now matrices make perfect sense. Zach Star and 3blue1brown have also made great videos on linear algebra.
Such a great channel! I enjoy literally every video.
3Blue1Brown has a visual explanation on his channel; basically, he constructs a lot of linear algebra visually. I prefer Michael Penn's approach, it's more direct and simpler (to me), but not everybody is a friend of algebra. In any case, it's a good complement to this channel, getting both the formal and the visual insight. th-cam.com/video/XkY2DOUCWMU/w-d-xo.html. I suggest doing both courses in the order that favors your mind: either complete 3B1B first and then do Michael's, or the other way around. Do both before you get into linear algebra in school and you will have a definite advantage. If you just love the stuff, you will love it even more.
For a start, we should just not call it "multiplication".
Small typo at 7:55: you wrote z1 instead of z2.
Love this new lighting
Fiddly to put in plain text, but here's a summary:
let V be an n-dim vector space, and f:V->W be a linear map. Let a_i be a basis for V, and b_i be a basis for W.
Then for v = Σ_i v_i a_i in V, we have by linearity:
f(v) = Σ_i v_i f(a_i)
so f is completely determined by its effect on the a_i.
Now f(a_i) = Σ_j f_ij b_j, so f(v) = f(Σ_i v_i a_i) = Σ_i v_i f(a_i) = Σ_i Σ_j v_i f_ij b_j.
Thus f is totally described by the coefficients f_ij. This is why we write matrices to represent linear maps (with respect to some bases).
Now let g be another map W->U and let c_i be a basis for U.
Then again, g(b_i) = Σ_j g_ij c_j.
So what is g∘f? Well g∘f(a_i) = g(Σ_j f_ij b_j) = Σ_j f_ij g(b_j) = Σ_j f_ij Σ_k g_jk c_k = Σ_k (Σ_j f_ij g_jk) c_k
so that (g∘f)_ik = Σ_j f_ij g_jk.
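The same contraction-over-the-middle-index pattern in code (my own sketch, not from the comment; note it uses the standard row/column convention, which is the transpose of the domain-basis indexing above):

# Entrywise rule: C[i][k] = sum over j of A[i][j] * B[j][k]
def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    assert all(len(row) == m for row in A)   # inner dimensions must agree
    return [[sum(A[i][j] * B[j][k] for j in range(m)) for k in range(p)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, -2]]
print(matmul(A, B))   # [[10, -3], [20, -5]]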
I love your videos, and immensely appreciate your content. If I am not mistaken, your area of research is algebras, right? I have had a fascination with the Clifford algebra (I saw your video on it) and the Grassmann algebra. There is a book titled "Grassmann Algebra" by John Browne. If you happen to be familiar with that book, would you recommend it? I found it interesting because the bibliography includes works from Élie Cartan, for example, and other pretty old but reputable sources. I have currently read the first chapter, and though I do have some familiarity with the exterior product and the tensor product, I had never heard of the regressive product. I found the explanation of it to be mind-blowing. If I understood correctly, it provides, among other things, a way to describe intersections. Apparently there is some analogue for it in geometric algebra. Anyway, love your videos.
7:52 That should be z2, not z1.
14:18 24 - 1 = 23, not 21
Our school maths teacher called the calculation of each element "multiplicadding".
Like a lot of things in math, the reason is overdetermined. It just has to be that way.
Mistake at 14:02, it should be 5 0 1
But then the term 2,2 of the product matrix will be 14 instead of 22
@@lucaolmastroni6270 Yes. In fact the whole diagonal is wrong, because he also made a mistake calculating -1*1 + 8*3 + 3*0, which equals 23 and not 21.
I'm still left with a couple of questions:
1) Why the swivel? It seems to be taken for granted, but if the first column corresponds to the coefficients of x₁, the second to the coefficients of x₂, etc., wouldn't it make more sense for the x₁, x₂, etc. to be a row vector and the y₁, y₂,... result to be a column vector? Why was this convention chosen?
2) This explains why matrix multiplication is the way it is, but why is *this* matrix "multiplication"? In other words, why is this particular operation considered a kind of multiplication, rather than being called something else? It doesn't seem all that similar conceptually or in behavior to scalar multiplication. This has bugged me ever since high school pre-calc (the dot product and cross product for vectors irk me in a similar way).
Yeah I have to admit the explanation made sense, but it really just seems like a notation failure. You could just as easily change the columns of the second matrix into rows and it would be a hell of a lot more intuitive.
Very nice. As an engineer, I only ever learned matrix multiplication as a recipe although I often wondered why it is the way it is and sometimes who even came up with this and why was it considered a good idea over what are assuredly innumerable other recipes? It would be interesting to see a comparison between linear algebra and quaternions which to my knowledge were abandoned because of their limited utility.
@2:11 "it makes sense to write this in an array" - but what happened to the "plus" signs? It makes zero sense to write it as an array because there's no relationship between the variables? Why not switch "a" with "b"? Please explain how it "made sense" to write as an array :/
Same here.
Yeah... but why? Vectors are columns *just because*?
Thank you
Heads up, there's a typo in your title: THEY way we do, should be THE way we do.
That was neat.
Fixed! Thank you.
You could have just emailed him directly rather than shaming publicly. It is easy to do that you know. But you get a like for spotting the error. 👍
@@jewulo It's only shaming if I insult him for having made the error.
good explanation
Could you do a video on the derivation of Strassen's 7 formulas for matrix multiplication?
I think it is a lot more straightforward to look at the inner product.
Yes, simplicity is always desirable.
Clear, detailed explanation. Thanks.
4x4 complex matrices are the ones I am interested in.
Would you guys know of some videos you could link about Euclid's geometry?
I'm talking about Euclid's axiomatic geometry. Thanks in advance.
And then swivelling this into here
If I was introducing this topic, say to first year undergraduates, I wouldn't use a change of variables as the example. Why are we changing them? I'd start with simultaneous equations in general which they should know from high school. E.g.
3 apples and 2 oranges cost 19 cents
4 apples and 1 orange cost 17 cents
How much do apples and oranges each cost? Etc. Get them to solve it any way they know, then introduce the matrix method.
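(For the record, my own working, not part of the original comment: from the second equation, 1 orange = 17 - 4 apples, so 3a + 2(17 - 4a) = 19 gives -5a = -15, i.e. apples cost 3 cents and oranges cost 5 cents. In matrix form the system is [[3, 2], [4, 1]] times the column (a, o) equals the column (19, 17).)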
I agree, systems of linear equations are the best way to motivate matrix times vector. Then the matrix product can be viewed as solving multiple systems with the same coefficients.
Good point. I do believe the linear transformation link is key, and I will try it (they already know about y=mx+b), but your approach is simpler. For the visually oriented this is great: th-cam.com/video/XkY2DOUCWMU/w-d-xo.html
The video isn't about solving systems of equations. Rather, it is about combining linear transformations, which individually happen to represent systems.
Solving a system is irrelevant as it is analogous to matrix inverse. Having students solve a system would only confuse them into thinking that systems and matrix multiplication have something to do with each other.
In general, I think that starting a linear algebra course with systems is counterproductive. This is because systems can utilize matrix notation for better shorthand, but solving systems has little to do with linear algebra. Linear algebra really begins with linear transformations. We soon run out of trivial analogies as linear algebra mainly serves to introduce abstraction, which enables higher level math and physics.
@@spaghetti1383 It depends on the aims of the course. Matrix methods have applications far beyond the higher reaches of physics and abstract algebra. If the job is to get students comfortable with the formalism and a few basic concepts e.g. eigenvectors, singular systems and the like, then surely it makes sense to start with something they already know. Again, matrix solution doesn't necessarily mean using the matrix inverse -- you could teach Gaussian elimination, for example.
@@pwmiles56 My core point is that your analogy with the cost of apples and oranges can't be nicely extended to matrix multiplication. Thus, starting with it just because matrices are applicable isn't useful. The best analogy I can think of is the following:
Meal A consists of 5 nuggets and 20 fries.
Meal B consists of 9 nuggets and 50 fries.
Bundle 1 consists of 1 of meal A and 2 of meal B.
Bundle 2 consists of 2 of meal A and 3 of meal B.
Bundles 1 and 2 consist of how many nuggets and fries each?
The answer to this would be a direct analogy to substitution, linear transformation, and matrix multiplication. It also demonstrates that each layer of vector space generally represents something different. Nowhere here would we solve a system as it distracts from the point.
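Working that analogy out (my numbers, just to make it concrete): Bundle 1 has 1x5 + 2x9 = 23 nuggets and 1x20 + 2x50 = 120 fries; Bundle 2 has 2x5 + 3x9 = 37 nuggets and 2x20 + 3x50 = 190 fries. That is exactly the matrix product [[1, 2], [2, 3]] times [[5, 20], [9, 50]] = [[23, 120], [37, 190]], with the bundle-to-meal matrix on the left and the meal-to-food matrix on the right.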
nice video!
An additional video is needed for this one: what is LINEARITY and what is not?
How about this definition of linearity?: Linearity is the power one. Non-linearity is every other power! Can you see why linearity in science is used so much, even when it doesn't apply? A wrong answer is better than no answer, maybe.
There's an ongoing meme that showed a winding road, then drew a straight line from point A to B and wrote "Why?"
When I saw the thumbnail I had to laugh
That's life in a conservative field: where the work done moving from point A to point B is independent of the path taken ... 😊
Is there a specific reason to put the matrix on the left? What would change if we redefined the initial linear transformation from Ax=y to xA=y?
Well, he explains that the coefficients are the ones doing the transforming.
Just think of it as a kind of function; you usually write it like f(x), right? The f "transforms" x into something.
Notice the size changes when you multiply matrices; a matrix that's A×B times a matrix that's B×C is an A×C matrix. This only works if the "middle size cancels out" i.e., you think (A×B)•(B×C) -> (A×C)
If the "middle size" doesn't match, you can't multiply.
So if we consider a vector as an N×1 matrix, you can multiply with an N×N matrix like Ax=y so that we get
(N×N)•(N×1) -> (N×1)
another N×1 matrix. However, we cannot do it from the other side, because the measurements do not match up.
We could, however, define x as a row vector, i.e. as a 1×N matrix. At that point, xA=y is a valid operation and represents another linear transformation.
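A small shape check along those lines (my own sketch in Python with numpy, not from the comment above):

import numpy as np

A = np.array([[1, 2], [3, 4]])     # 2x2
x_col = np.array([[5], [6]])       # 2x1 column vector
x_row = np.array([[5, 6]])         # 1x2 row vector

print((A @ x_col).shape)   # (2, 1): (2x2)·(2x1) -> 2x1
print((x_row @ A).shape)   # (1, 2): (1x2)·(2x2) -> 1x2
# A @ x_row raises a ValueError: the inner sizes (2 and 1) don't match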
@@DavidSavinainen Well done, sir.
14:26 You saw it here first! -1x1 + 8x3 + 3x0 = 21! 🤣
Seriously, check the traces of the two products; they should be the same if you multiply correctly.
I was hoping this would be a video for someone who has done matrix multiplication but not linear algebra yet. Anyone have a recommendation for me?
Might "Multiplying Matrices" from The Organic Chemsitry Tutor be useful?
@@cheekywombat9208 oh I love that channel, I'll go check it out, thank you!!
Finally, you present a video with the real purpose to educate us. I love it. I particularly love the example that is NOT a square N x N flavor. Full Points Awarded.
It would be more correct to call it "linear transformation of vectors represented by a matrix". "Multiplication" is not the best term for it, which confuses many people. It is easier to remember that it is "fake multiplication", in name only, and to use the more correct term.
So clear. I feel like Neo.
You have a typo in the second matrix example. 501 became 503 in your explanation. Noobs will still be confused about matrix multiplication 😂
By extension, this can be used to motivate linear transformations...
You should write on a clear board, film it from the other side, and flip it.
8:00 why did you write z1 twice ?
Linear algebra is absolutely butchered in most classes; it is definitely not taught well (only in some classes). Every one of them needs to emphasize the geometric intuition behind all of it. It's like having a music student only practice individual notes without ever showing them an actual song. Saying it takes the beauty out of it would be an understatement. It makes me very angry.
Agree. Much of the mathematics I was taught in high school, and even university, was just "rote" so students could learn to use the tools, instead of developing a deeper understanding and remembering the tool for life. It makes me wonder about the collective time wasted over the years by people who never fully understood what they were using.
For me the one exception was complex numbers, which fascinated me in high school. I had access to a very good textbook, and read as much as I could about them.
@@vk2ig Can you write the title of the book? To be honest, I still can't wrap my head around complex numbers. It's too abstract for me haha:(
THIS!!
I wish this video had been made 5 years earlier; then I wouldn't have had to memorize matrix multiplication.
Excellent video, but how can I use matrices in the real world?
I think computer science.
Couldn't one simply state that the matrix-matrix product is just multiple matrix-vector products, where every column of the right matrix is a column vector of its own?
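That view is easy to check numerically; a small sketch of the idea (my own, in Python with numpy):

import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [5, -2]])

# Column k of AB is A applied to column k of B
cols = [A @ B[:, k] for k in range(B.shape[1])]
AB_by_columns = np.column_stack(cols)
assert np.array_equal(AB_by_columns, A @ B)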
Matrix multiplication knows what it is at all times. It knows this because it knows what it isn't... XD
This video does not explain at all what the title suggests. The question was: why do we multiply matrices the way we do? And on the very third step of this attempt to explain why, you said, "if we swivel this first row into the column of the second matrix." Yeah, the question was why we swivel the rows into the columns. And through the rest of the linear transformation calculation you showed here, you kept saying the same thing, "swivel the rows into the columns." But why do we do it that way?
It's just a convention, a standard way of doing it. Dot multiplication makes more sense intuitively (just multiply the corresponding values with each other).
How come 3 x 9 is 45? That must be 27.
The title is misleading. This is really an explanation of how to multiply matrices, not why you multiply them that way.
True
I appreciate the explanation, but I think an important emphasis was still missed, which is that calling this operation "multiplication" is a bit of a misnomer. We can get away with calling it a "product," which has more general connotations, but a student should be told right out of the gate that "multiplication" is more of an analogy here than a descriptor of what's actually going on. Since the more generalized meanings of "product" have not yet been elucidated, I think this might prevent a few headaches in some of our more astute students.
Great video explaining matrix multiplication, but I have some complaints. I noticed a couple mistakes that you didn't correct.
At 7:53, you were expanding out z2, but accidentally wrote z1. It's a minor mistake but it confused me for a few seconds.
At 14:00, when you were showing how matrix multiplication is not commutative, you incorrectly copied one of the matrices, which leads to a different answer. It didn't really change the point you were making, but it was still an avoidable error.
Overall, great video, but you need to do a better job of avoiding mistakes (or fixing them) when you do explanations like this. I understand that mistakes happen, but maybe in the future, proofread the video after you record and add overlays indicating the mistakes so people don't get confused.
ditto, threw me off also for 5 seconds.
I learn more from Dr Penn's videos, even with his typos, than I do from other teachers' perfect work. I have taught many subjects: engineering textbooks have fewer typos than physics, math, or computer science ones.
I wonder what his computer coding is like?
Your chalk seemed a bit crumbly today.
This video doesn't answer the question
This video does answer the question for me. And it gives me an understanding deeper than my 60 years of studying math. Watch it over and over until it does make sense. Then you will have mastered the subject.
@@edwardlulofs444 I agree - this video answers the question very well. He was halfway through and I was thinking "matrix representation of 2-port networks in electrical engineering is a great example, especially when cascading networks". 😊
@@vk2ig Yup. You have not only mastered the algorithm of matrix multiplication, but you understand linear algebra and use it to solve problems.
Just some feedback, take it or leave it:
The educational value of this video is severely undermined by jamming sponsorships in the middle.
Fast forward right through the ad in like two clicks.
The video doesn't explain why y1 and y2 are defined in terms of x1 and x2, which is surely the key. The rest is just notation and symbolic manipulation.
They're just the dependent variables in a system of two linear equations in which x1 and x2 are the independent variables; it's just a setup to show how matrix-vector multiplication represents a linear system.
late