14. Orthogonal Vectors and Subspaces

  • Published Jan 26, 2025

Comments • 248

  • @priyankkharat7407
    @priyankkharat7407 5 years ago +263

    Thank you professor! I am amazed by the fact that professors from top institutes like MIT explain the mere basics without any expectation that we are supposed to know those topics already. On the other hand, our university professors just avoid the whole thing by saying "it isn't part of the syllabus, you are expected to know this already". A huge salute and thanks to Professor Strang and the MIT team for publishing these videos free of cost.

    • @yogeshporwal7219
      @yogeshporwal7219 4 years ago +8

      Yes, that one line sums it up: "you are supposed to know this, learn it on your own."
      And here this great professor teaches everything from the basic level to a very advanced level.

    • @Yellomellowil
      @Yellomellowil 3 months ago

      this is what makes good institutes and good professors

    • @ManishKumar-qr4hb
      @ManishKumar-qr4hb 3 months ago

      true

  • @adamlevin6328
    @adamlevin6328 9 years ago +180

    That smile at the end, he knew he'd done a good job

  • @ozcan3686
    @ozcan3686 12 years ago +108

    I don't know how, but whenever I need something, he repeats it. Thanks, Mr. Strang.

    • @9888565407
      @9888565407 4 years ago +1

      Hey, that's true, mate. So did you watch the whole series?

    • @kub1031
      @kub1031 4 years ago +2

      You must have had terrible teachers too, my companion in fate.

    • @rosadovelascojosuedavid1894
      @rosadovelascojosuedavid1894 3 years ago +1

      @@9888565407 lol let's hope he has the same YouTube account he had 8 years ago

  • @dmytrobondal4127
    @dmytrobondal4127 7 years ago +95

    Gilbert Strang, you are truly an outstanding teacher! I am currently doing my Master's thesis in Finite Element Analysis and started watching these video lectures just for fun, since I already had some Linear Algebra back in my bachelor's. Your little sidenote at the end of the lecture about multiplying a system by A.transpose actually helped me crack a problem I'm dealing with right now. My finite element system had more equations than unknowns (because I'm fixing some internal degrees of freedom, not the nodes themselves) and I just couldn't figure out how to solve such a system. I completely forgot about this trick of multiplying by a transpose!! THANK YOU SO MUCH!! My final system now has "good" dimensions and the stiffness matrix has full rank!!!

    • @dmytrobondal4127
      @dmytrobondal4127 7 years ago +19

      And also his strict mathematical proof, I believe from 1971, that completeness is a necessary condition for FEM convergence is actually something I'm using right now! This guy played such a great role in FEM.

  • @nenadilic9486
    @nenadilic9486 3 years ago +8

    25:56 "I'm a happier person now." I love his interludes. Thank you, professor, a lot.

  • @nenadilic9486
    @nenadilic9486 3 years ago +23

    To find this course on the web is tantamount to finding massive gold treasure.

  • @palashnandi4165
    @palashnandi4165 2 years ago +7

    00:00:00 to 00:02:50 : Introduction
    00:02:51 to 00:13:45 : What is Orthogonality?
    00:13:50 to 00:20:49 : What is Orthogonality for Subspaces?
    00:20:50 to 00:26:00 : Why RS(A) ⊥ NS(A)?
    00:26:01 to 00:34:00 : What is an Orthogonal complement?
    00:39:45 to End : Properties of A^T.A

  • @AryanPatel-wb5tp
    @AryanPatel-wb5tp 7 months ago +3

    "Let me cook up a vector that's orthogonal to it" - the goat professor strang 8:25

  • @dougiehwang9192
    @dougiehwang9192 4 years ago +38

    I really encourage you to buy Introduction to Linear Algebra, which Prof. Strang wrote. If I say these videos are rank r, then I can definitely say the book is the orthogonal complement of these videos that completes the full dimension of Linear Algebra.

    • @rosadovelascojosuedavid1894
      @rosadovelascojosuedavid1894 3 years ago

      Dude I read this comment and literally TODAY I recommended this book to a guy in a Facebook group and he already ordered it. 👌

  • @elyepes19
    @elyepes19 3 years ago +9

    This lecture is a tour de force; every sentence he says, including the ancillary comments, is so well crafted that it makes everything click with ease. Least Squares opens the gates for the sibling fields of Optimization and Inverse Theory, so every bit of insight he shares has deep implications in those fields (and many others). It's no exaggeration to say that the whole lecture is an aha! moment. Very illuminating, thank you Professor Strang

  • @steveecila
    @steveecila 11 years ago +79

    Mr Strang makes me feel, for the first time in my life, that linear algebra is interesting!

  • @youmgmtube
    @youmgmtube 15 years ago +18

    This series is phenomenal. Every lecture a gem. Thank you Mr Strang!

  • @debarshimajumder9249
    @debarshimajumder9249 7 years ago +74

    "the origin of the world is right here"

  • @corey333p
    @corey333p 8 years ago +220

    The dot product of orthogonal vectors equals zero. All of a sudden it clicked when I remembered my conclusion as to what a dot product actually was, that is, "what amount of one vector goes in the direction of another." Basically, if vectors are orthogonal, then no amount of one will go in the direction of the other. Like how a tree casts no shadow at noon.

    • @robertorama8284
      @robertorama8284 6 years ago +8

      Thank you for this comment! That's a great conclusion.

    • @estebanl2354
      @estebanl2354 4 years ago +4

      it was very enlightening

    • @anilsarode6164
      @anilsarode6164 4 years ago +4

      th-cam.com/video/LyGKycYT2v0/w-d-xo.html to get the concept of the dot product.

    • @indiablackwell
      @indiablackwell 3 years ago +1

      This helped, a lot

    • @kevinliang5568
      @kevinliang5568 3 years ago +2

      Oh my this is enlightening, I've never thought it that way
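
The "how much of one vector goes in the direction of another" picture in this thread can be checked numerically. A minimal sketch (vectors chosen arbitrarily for illustration):

```python
import numpy as np

# Two orthogonal vectors in R^2: their dot product is zero.
u = np.array([3.0, 4.0])
v = np.array([-4.0, 3.0])   # u rotated by 90 degrees
print(u @ v)                # 0.0 -> orthogonal

# Projection of b onto a: the "amount of b in the direction of a".
def project(b, a):
    return (a @ b) / (a @ a) * a

b = np.array([2.0, 1.0])
print(project(b, u))        # nonzero: b casts a "shadow" on u
print(project(v, u))        # zero vector: v casts no shadow on u, like the tree at noon
```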

  • @professorfernandohartwig
    @professorfernandohartwig 2 years ago +3

    In many linear algebra courses that I have seen, the student is simply told about the various relationships between the fundamental subspaces. But in this course these ideas are convincingly yet accessibly presented. This is very important because it allows students to really understand such key ideas of linear algebra to the point where they become intuitive, instead of simply memorizing properties and formulas. Another great lecture by professor Strang!

  • @quirkyquester
    @quirkyquester 4 years ago +10

    so much fun, so much love. Thank you Professor Strang and MIT for inspiring more people around the world. I truly enjoy learning linear algebra with Professor Strang :) we know he's done it!

  • @condafarti
    @condafarti 5 years ago +15

    okkkkk, cameras are rolling, this is lecture 14. What an intro line!

  • @LisaLeungLazyReads
    @LisaLeungLazyReads 8 years ago +67

    I remember falling asleep in all my linear algebra classes @ UWaterloo. Not until now have I started to like linear algebra!

    • @Neme112
      @Neme112 8 years ago +2

      "like linear algebra" Good one!

    • @lucasm4299
      @lucasm4299 6 years ago +1

      Lisa Leung
      Is that in Ontario, Canada?

    • @alpozen5347
      @alpozen5347 5 years ago +7

      same here for me, I study at EPFL, but Mr. Strang seems to have a natural gift for the subject

  • @tongqiao699
    @tongqiao699 11 years ago +6

    The greatest lecturer I have ever met in my life.

  • @LinhNguyen-st8vw
    @LinhNguyen-st8vw 8 years ago +11

    Linear algebra, it's been almost 3 years but I think I've finally got you. *sob *wished I could go back in time

    • @dangernoodle2868
      @dangernoodle2868 6 years ago +1

      Man, I think a part of me died in the math class I took at the start of university. I feel like I'm resurrecting a part of my soul.

  • @georgesadler7830
    @georgesadler7830 3 years ago +1

    Dr. Strang, thank you for another classic lecture on orthogonal vectors and subspaces. Professor Strang, you are the grand POOBAH of linear algebra.

  • @rabinadk1
    @rabinadk1 4 years ago +7

    Really a great lecture. He explains things so simply that they seem obvious. I never learned it as clearly as this in my college.

  • @mikesmusicmeddlings1366
    @mikesmusicmeddlings1366 4 years ago +2

    I am learning so much more from these lectures than from any teacher I have ever had

  • @adarshagrl
    @adarshagrl 2 years ago +1

    At 36:22, it is fascinating how he got a bit into orbital mechanics, saying there are 6 unknowns (rightly known as the state vector, a 6 by 1 matrix with positions (x, y, z) and velocities (xdot, ydot, zdot)).

  • @VasanthKumar-gk1qt
    @VasanthKumar-gk1qt 19 days ago

    Professor, you're a master not only of mathematics but also of the art of teaching 🎉

  • @onatgirit4798
    @onatgirit4798 3 years ago +2

    Omg the orthogonality between the nullspace and the row space fits so well with the v=[1,2,3] example the prof gave in the previous lecture. I've seen much less entertaining TV series than 18.06, this course should be on Netflix lol

  • @mreengineering4935
    @mreengineering4935 3 years ago +3

    Thank you very much, sir. I am watching lectures and enjoying them. I have benefited from you because we do not have a teacher in Yemen because of the war situation, so you became my teacher

  • @ninadgandhi9040
    @ninadgandhi9040 2 years ago +1

    Really enjoying this series! Thank you professor Strang and MIT. This is absolute service to humanity!

  • @miami360x
    @miami360x 13 years ago +2

    I love his explanations. My linear algebra prof will just give us definitions, state theorems, and prove them, and if we're lucky we'll get an example, but never a solid explanation.

  • @gokulakrishnancandassamy4995
    @gokulakrishnancandassamy4995 3 years ago

    Great summary at the end: A^T*A is invertible if and only if A is full column rank! Just loved the lecture...
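
That summary, AᵀA invertible exactly when A has full column rank, is easy to check numerically. A minimal sketch (matrices made up for illustration):

```python
import numpy as np

# Independent columns: rank equals the number of columns.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
print(np.linalg.matrix_rank(A))   # 2 (full column rank)
print(np.linalg.det(A.T @ A))     # nonzero -> A^T A is invertible

# Dependent columns: second column is twice the first.
B = np.array([[1.0, 2.0],
              [1.0, 2.0],
              [1.0, 2.0]])
print(np.linalg.matrix_rank(B))   # 1 (rank-deficient)
print(np.linalg.det(B.T @ B))     # ~0 -> B^T B is singular
```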

  • @zoltanczesznak976
    @zoltanczesznak976 10 years ago +10

    You are the king Mr Strang! Thanks

  • @muditsaxena3640
    @muditsaxena3640 6 years ago +19

    At 19:50 he said "When is a line through the origin orthogonal to a whole plane? Never", but I think if we take any line through the origin and a plane whose normal vector is parallel to that line, then they will be orthogonal. For example, the x-axis and the y-z plane. Help me out please.

    • @wasiimo
      @wasiimo 6 years ago +11

      By that he means any line passing through the origin that is in the plane(i.e a subspace of the plane) cannot be orthogonal to the whole plane. Of course if this line is parallel to the normal of the plane as you stated, then yes it will be orthogonal to every vector in that plane.

    • @Basta11
      @Basta11 6 years ago +7

      He’s talking a line through origin (a sub space) that is also in the plane.

    • @khanhdovanit
      @khanhdovanit 4 years ago +2

      Thanks for your question

    • @jeffabc1997
      @jeffabc1997 3 years ago +2

      Thanks for the question and answer... it really helps!

    • @AnhTran-sf3hf
      @AnhTran-sf3hf 1 month ago

      He's talking about a line IN THAT PLANE. And it cannot be orthogonal to that plane. And that gives us the example of two subspaces that cannot be orthogonal if they intersect in anything other than the zero vector.
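
The argument in this thread can be made concrete: a line lying in a plane would have to be orthogonal to every vector in the plane, including its own direction vector v, and v · v = 0 forces v = 0. A minimal sketch (vector chosen arbitrarily):

```python
import numpy as np

# A nonzero vector is never orthogonal to itself...
v = np.array([1.0, 2.0])
print(v @ v)            # 5.0 > 0

# ...only the zero vector is, which is why two subspaces that
# intersect in a nonzero vector cannot be orthogonal.
z = np.zeros(2)
print(z @ z)            # 0.0
```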

  • @vedantparanjape
    @vedantparanjape 5 years ago +13

    Second best part about watching these lectures is the comment section

  • @carlostrebbau2516
    @carlostrebbau2516 7 months ago

    I have never felt the platonic injunction to "to carve nature at its joints" more strongly than after watching this lecture.

  • @anilsarode6164
    @anilsarode6164 4 years ago +3

    38:30 -Mr. Strang gives a hint about the Maximum Likelihood Estimate (MLE).

  • @rajprasanna34
    @rajprasanna34 9 years ago +5

    It's extraordinary and amazing... No other lecturer is as good as Gilbert...
    Thank you sir.

  • @niko97219
    @niko97219 4 years ago

    It is a pure joy watching these lectures. Many thanks to Prof. Gilbert Strang and MIT OCW.

  • @hanzvonkonstanz
    @hanzvonkonstanz 14 years ago +1

    I swear, these lectures with the Schaum's Outline of Linear Algebra can really help anyone learn the subject.

  • @georgeyu7987
    @georgeyu7987 4 years ago +24

    "blackboard extends to infinity..." yeah, MIT does have infinitely long blackboard...

    • @akselai
      @akselai 4 years ago +7

      * slides out the 45th layer of blackboard *

  • @jnnewman90
    @jnnewman90 3 years ago +1

    This man cooked up some vectors AND insulted MIT's floor integrity. Legend

  • @hits6620
    @hits6620 3 years ago +2

    At 25:00 Mr. Strang wrote (row 1) transpose times x equals 0, but I don't really understand.
    I was thinking about removing the "transpose", and I was so confused.

    • @minagobran4165
      @minagobran4165 2 years ago

      Me too, did you ever understand it?

    • @APaleDot
      @APaleDot 2 years ago

      @@minagobran4165
      All row vectors are written with the transpose symbol to indicate they are row vectors and not column vectors.

  • @prakhyathbhandary9822
    @prakhyathbhandary9822 3 years ago +6

    25:00 Why was the transpose of the rows taken to write combinations of the row space? Would we even be able to multiply the transpose of row 1 by x?

    • @rjaph842
      @rjaph842 3 years ago +1

      I lost it there too, man. Idk if you've managed to figure out why.

    • @joaocosta3506
      @joaocosta3506 3 years ago

      @@rjaph842 Wasn't the point proving that the left null space and the column space were orthogonal too?

    • @iamjojo999
      @iamjojo999 2 years ago +1

      I think it's a little mistake that Prof. Strang didn't notice, probably because he had just taught the property that two orthogonal vectors have (i.e., xᵀy = 0).
      But that requires x and y to be column vectors. Here the row vector is not a column vector, so there is no need to transpose it to multiply it against another column vector. Simply row vector * x (which is a column vector) = 0 is fine. Nevertheless, I really like Prof. Strang's style. Thank you, Prof. Strang.

  • @hassannazeer5969
    @hassannazeer5969 5 years ago +8

    This is a 90-degree chapter, Strang meant business from the word go!

  • @BestBites
    @BestBites 3 years ago

    The cameraman must have become a pro in linear algebra by absorbing such a high level of teaching.

  • @Anaghish
    @Anaghish 4 years ago

    You're the best teacher in the world.

  • @jockyitch8815
    @jockyitch8815 2 years ago

    41:32 recap point for A^T A x = A^T b

  • @abdelaziz2788
    @abdelaziz2788 3 years ago

    That's a VERY, VERY essential lecture for machine learning.
    I used to use the transpose trick but didn't know where it came from; now I may die in peace.

  • @abdulazizabdu8362
    @abdulazizabdu8362 9 years ago +3

    But the lessons are great!!!! I'm enjoying every class. Thank you Gilbert Strang

  • @BigBen866
    @BigBen866 1 year ago

    The man puts his Soul into his lectures 🤔🙏🏼😀👍

  • @nateshtyagi
    @nateshtyagi 4 years ago +1

    Thanks Prof Strang, MIT!

  • @BigBen866
    @BigBen866 1 year ago

    “Let me add the great name, ‘Pythagoras’!” I love it 😂😂😊

  • @soulmansaul
    @soulmansaul 3 years ago

    Recorded in 1999, still relevant in 2021. "Comes back 40 years later" - Yep still relevant

  • @eccesignumrex4482
    @eccesignumrex4482 7 years ago +13

    Gill uses his 'god' voice at ~8:00

  • @ZehraAkbulut-my7fj
    @ZehraAkbulut-my7fj 9 months ago

    I can't stop watching the spinning pens 15:05

  • @jacksonsunny1261
    @jacksonsunny1261 2 years ago

    East or West, Prof Strang is the best!

  • @shoumikghosal
    @shoumikghosal 4 years ago +8

    "The one thing about Math is you're supposed to follow the rules."

  • @bfl9075
    @bfl9075 3 years ago

    19:52 A mistake here. A line through origin can be orthogonal to a whole plane.

    • @ledkicker2392
      @ledkicker2392 3 years ago +2

      He was talking about 2D, so the line has to lie in the plane

    • @bfl9075
      @bfl9075 3 years ago

      @@ledkicker2392 You're right, thank you a lot.

  • @Seanog1231
    @Seanog1231 6 years ago +1

    Can't wait for the next one!

  • @mind-blowing_tumbleweed
    @mind-blowing_tumbleweed 1 year ago

    44:20 Why can't we solve it? We couldn't if there were more unknowns than equations.

  • @minagobran4165
    @minagobran4165 2 years ago

    At 25:17 when he says (row1)^T * x = 0. This is wrong. Row1 is 1xn and x is nx1, so Row1 * x = 0 already works. row1^T is nx1, and you can't multiply an nx1 vector by another nx1 vector.

    • @APaleDot
      @APaleDot 2 years ago

      Row vectors are written as v^T. It's just a convention to distinguish them from column vectors.

  • @kingplunger1
    @kingplunger1 5 months ago

    Why transpose the rows for the dot products? Around 25:00

  • @wandileinvestment1413
    @wandileinvestment1413 2 years ago +1

    Thank you prof, I'm writing an exam tomorrow morning

  • @ErickBaiona
    @ErickBaiona 10 years ago +22

    25:39 Isn't there an error with the symbol T (for transpose)? Why transpose the rows? Please explain!

    • @RetroAdvance
      @RetroAdvance 10 years ago +19

      Here we regard the rows as vectors, and it is just a convention to write a vector vertically. So you have to transpose it if you mean to write it down horizontally.

    • @Nakameguro97
      @Nakameguro97 10 years ago +1

      RetroAdvance Thanks for this confirmation - I suspected this reason was much more likely than Prof. Strang making a mistake here (as I have seen this convention in other textbooks). However, it's still confusing, as he sometimes draws an array of rows [row_1 row_2 ... row_m] vertically, implying that those are horizontal rows. Does this convention of writing all vectors as columns typically apply only to vectors written in text?

    • @RetroAdvance
      @RetroAdvance 10 years ago +4

      Ken Feng Yes, he often writes things down without strict mathematical rigor for didactic reasons. [row1 row2 row3] is probably just Strang's intuitive formulation to get the point across, so don't take it too seriously. As long as it is clear what he means by it, it is ok.
      But a vector is different from its transposed version in terms of its matrix representation:
      v (an element of R^n) is an n x 1 matrix
      v transposed is a 1 x n matrix

    • @longgy1123
      @longgy1123 7 years ago

      Beto ba Yona It is just a small mistake.

    • @thangibleword6854
      @thangibleword6854 5 years ago

      @@RetroAdvance no, it is a mistake

  • @antoniosaidwebbesales2418
    @antoniosaidwebbesales2418 2 years ago

    Amazing, thank you MIT and Prof. Gilbert Strang.

  • @woddenhorse
    @woddenhorse 3 years ago +1

    "I shouldn't do this, but I will"

  • @VladimirDjokic
    @VladimirDjokic 9 years ago +2

    He's absolutely amazing!!!

    • @nenadilic9486
      @nenadilic9486 3 years ago

      I am delighted too. His lectures are full of enlightening moments, at least for us laymen.

  • @bca1037
    @bca1037 10 months ago

    The best linear algebra lecture.

  • @MrCricriboy
    @MrCricriboy 8 years ago +11

    Was it a burp at 48:48?

    • @SteamPunkLV
      @SteamPunkLV 5 years ago +1

      now we're asking the real questions

    • @winniejeng7402
      @winniejeng7402 5 years ago +1

      Sorry, had indigestion right before class

  • @Mike-mu3og
    @Mike-mu3og 6 years ago +1

    19:50 why can't a line through the origin be orthogonal to a plane? It looks natural to me, that the z-axis is orthogonal to xy-plane

    • @KaiyuKohai
      @KaiyuKohai 5 years ago

      it won't be orthogonal to every vector contained in the plane so it isn't orthogonal to the plane

    • @vishwajeetdamor2302
      @vishwajeetdamor2302 4 years ago +3

      I think he was only talking about a line in R2 space

    • @nenadilic9486
      @nenadilic9486 3 years ago

      @@KaiyuKohai It is orthogonal to every vector in that plane, but the point is we are talking about 2D space: no vector in that space is orthogonal to a plane representing that space.

  • @theali8oras274
    @theali8oras274 5 years ago

    49:36 did he flip someone off?

  • @tanphan1618
    @tanphan1618 2 years ago

    Beautiful lecture and amazing lecturer !!!

  • @faustind
    @faustind 5 years ago +3

    At 25:17 is it necessary to transpose the rows of A before multiplying with X ( since the dimensions match already )?

    • @nenadilic9486
      @nenadilic9486 3 years ago +2

      He didn't transpose the rows of A but the vectors named 'row-sub-i', which are, like any vector, always written in column form.
      In other words, it is a convention that, if we want to write a vector that corresponds to a row of any matrix A (the rows are not vectors by themselves), we write it as the proper vector which is the corresponding column of the matrix A transpose.
      This makes our notation consistent. Anytime we write a vector name (e.g. 'a', 'row', 'q', 'spectrum', 'x', 'v'...), we can always replace it with some matrix column. So, if we want to multiply another vector or matrix with it from the left, we must first transpose it.
      And it is not a mere convention! It is an essential property of matrices: the columns are vectors, not the rows. If we could, at our own leisure, claim whenever we want that rows are also vectors, then the whole concept of a transposed matrix would be corrupted, even the concept of a matrix itself.

    • @jimziemer474
      @jimziemer474 3 years ago

      @@nenadilic9486 I’m not sure that’s completely correct. I’ve seen him show rows as vectors at times to compare what a row vector looks like compared to the column vectors.

  • @WonJable
    @WonJable 11 years ago

    cuteness level off the charts @49:35

  • @ozzyfromspace
    @ozzyfromspace 4 years ago +2

    Here I was, thinking I was gonna breeze through this lecture when BAM! I got hit with subtle logic 👨🏽‍🏫

  • @sachinranveer3452
    @sachinranveer3452 3 years ago +1

    Where are the next lectures for A^TA ???

  • @ashutoshtiwari4398
    @ashutoshtiwari4398 5 years ago

    13:00 Subspace S being orthogonal to subspace T means: every vector in S is orthogonal to every vector in T.
    Can anyone explain what is going on there?

    • @nenadilic9486
      @nenadilic9486 3 years ago

      @@jigglygamer6887 I would say that a subspace is any chosen selection of vectors. What is a particular kind here is the kind of way we choose subspaces that are practical ;)
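
The definition quoted in this thread only needs to be checked on bases: S ⊥ T exactly when every basis vector of S is orthogonal to every basis vector of T. A minimal sketch (bases made up for illustration):

```python
import numpy as np

def subspaces_orthogonal(S, T):
    # Columns of S and T span the two subspaces; the subspaces are
    # orthogonal iff every column of S is orthogonal to every column
    # of T, i.e. iff S^T T is the zero matrix.
    return np.allclose(S.T @ T, 0)

S = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])          # columns span the xy-plane in R^3
T = np.array([[0.0],
              [0.0],
              [1.0]])               # column spans the z-axis
print(subspaces_orthogonal(S, T))   # True

U = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [0.0, 1.0]])          # columns span the xz-plane
print(subspaces_orthogonal(S, U))   # False: they share the x-axis
```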

  • @hibvio
    @hibvio 14 years ago

    Really good videos!! This series is helping me a lot!! I'm from Brazil and I'm loving these videos!!

  • @ozzyfromspace
    @ozzyfromspace 3 years ago

    This man is my hero 🙌🏽✨

  • @Brekhna
    @Brekhna 12 years ago

    He is such a great teacher!! Thank you, Professor Strang!!

  • @imegatrone
    @imegatrone 13 years ago

    I Really Like The Video From Your Orthogonal Vectors and Subspaces

  • @bipashat4131
    @bipashat4131 3 years ago

    Why exactly is the null space of (A transpose)(A) equal to the null space of A?
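
A short answer to the question above: if Ax = 0 then certainly AᵀAx = 0; conversely, if AᵀAx = 0 then xᵀAᵀAx = ||Ax||² = 0, so Ax = 0. The two null spaces therefore coincide. A numerical sketch (matrix made up for illustration; the `null_space` helper is a hypothetical one built on the SVD):

```python
import numpy as np

def null_space(A, tol=1e-10):
    # Orthonormal basis of the null space of A, from the SVD:
    # the rows of Vt beyond the rank span N(A).
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T

# A has a dependent column: third column = first + second.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])

NA   = null_space(A)
NATA = null_space(A.T @ A)

print(NA.shape, NATA.shape)              # same dimension: (3, 1) and (3, 1)
print(np.allclose(A @ NATA, 0))          # N(A^T A) is inside N(A)
print(np.allclose((A.T @ A) @ NA, 0))    # N(A) is inside N(A^T A)
```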

  • @matrixkernel
    @matrixkernel 13 years ago +1

    Wouldn't the z-axis be orthogonal to the entire x-y plane? It kind of goes against one of his remarks in the 20-21 minute part of the video.

    • @nenadilic9486
      @nenadilic9486 3 years ago

      There is no z axis in 2D space.

  • @omarelgazzar834
    @omarelgazzar834 7 years ago

    Great lecture, Thanks prof. Strang.

  • @Maria-yx4se
    @Maria-yx4se 1 year ago +1

    8:18 let him cook!!!!

  • @ankanghosal
    @ankanghosal 3 years ago

    At 44:31 why did sir say that solving 3 equations in 2 variables is not possible? We were taught in school that the number of variables should equal the number of equations in order to solve for the variables. Please explain.

    • @nenadilic9486
      @nenadilic9486 3 years ago +2

      Let's suppose some natural law determines that some attribute b is given by exactly 3 parts of some parameter a1 and exactly 5 parts of some parameter a2, and these two parameters don't depend on each other. You would have the equation:
      a1*3 + a2*5 = b
      If you could measure those two parameters absolutely precisely, you would always end up with the correct b. And the other way around - if you could measure b with absolute precision... but let's see what happens.
      Suppose, you want to discover that natural law, i.e. you didn't know factors 3 and 5 but want to discover them.
      You would measure the observable, b, in situations when you change a1 and a2 by experiment (or when you measure cases where a1 and a2 are naturally different from case to case). Suppose the ideal world (which doesn't exist) where there is no error in measurement.
      You get, for example:
      1*x1 + 1*x2 = 8
      2*x1 + 1*x2 = 11
      -1*x1 + 1*x2 = 2
      1*x1 - 0.2*x2 = 2
      You have more than 2 equations, and two unknowns, x1, and x2. Because all the equations are linear combinations of just two of them, you can take any two equations, and throw away the rest and the solution will be: x1 = 3, x2 = 5.
      If you put the coefficients in front of x1 in the first column of matrix A and those in front of x2 in the second column, then you get the 4 x 2 matrix A:
      [  1    1   ]
      [  2    1   ]
      [ -1    1   ]
      [  1   -0.2 ]
      You could write x1 and x2 in a column, as a vector x with x1 and x2 as its coordinates:
      [ x1 ]
      [ x2 ]
      You also write all the right-hand side numbers as coordinates of the vector b:
      [ 8  ]
      [ 11 ]
      [ 2  ]
      [ 2  ]
      Then you can write all the above equations in a matrix form, as:
      Ax = b.
      Matrix A has the rank of 2 (only two independent equations - the other two, in this example, are just derived from them as linear combinations, so they don't provide any new information). You can replace A with the matrix A^, which is 2x2 matrix with only any two of the equations. (You also have to reduce your observed b vector to 2-dimensional space, and write it as b^.) For example, you could choose equations:
      [  2   1 ] [ x1 ]   [ 11 ]
      [ -1   1 ] [ x2 ] = [  2 ]
      When you solve this system, and we can write it in a matrix form, as:
      A^ * x = b^
      for x, you get the vector x with coordinates x1 and x2 equal to 3 and 5, which is the solution you wanted.
      But the world is not ideal. You do the measurements and you actually get the whole A as this one:
      1.01*x1 + 0.99*x2 = 8.04
      2.01*x1 + 1.04*x2 = 10.99
      -0.97*x1 + 1*x2 = 2.05
      1*x1 - 0.22*x2 = 2.01
      It's close to the first set of equations, but not exactly the same.
      Now, you can solve the system of any two of these equations, but you cannot solve the whole system, because there doesn't exist a combination of x1 and x2 that satisfies all four equations. We say that the vector with coordinates 8.04, 10.99, 2.05, 2.01, i.e. our observed b, is NOT in the column space of A. In the first example, which is extremely rare, vector b was, by chance, in A's column space.
      Instead, you can find the BEST approximate solution, which is exactly the topic of the next lecture :)
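
The noisy 4-equation example in the reply above can be run directly: solving the normal equations AᵀA x̂ = Aᵀb recovers something close to the true parameters (3, 5). A minimal sketch:

```python
import numpy as np

# The four noisy measurements from the example above.
A = np.array([[ 1.01,  0.99],
              [ 2.01,  1.04],
              [-0.97,  1.00],
              [ 1.00, -0.22]])
b = np.array([8.04, 10.99, 2.05, 2.01])

# Normal equations: A^T A x_hat = A^T b.
x_hat = np.linalg.solve(A.T @ A, A.T @ b)
print(x_hat)    # close to the true parameters (3, 5)

# np.linalg.lstsq computes the same least-squares solution.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_hat, x_lstsq))   # True
```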

  • @pelemanov
    @pelemanov 13 years ago

    @j4ckjs Thanks for the reply. It's just a confusing definition and if you google a bit on orthogonality, you find many other definitions contradicting this one. I think he should have pointed it out more clearly, but at least now I will be cautious when it comes to this topic. I guess that's good enough...

  • @notslahify
    @notslahify 11 years ago

    Well, I don't think A' has an inverse, so you can't backtrack from eq. 2 to eq. 1.

  • @pawankumar-gc6ho
    @pawankumar-gc6ho 2 years ago

    Brilliant lecture

  • @cutieFAIZANHASSAN
    @cutieFAIZANHASSAN 4 years ago

    Thank you, sir. You are a great teacher.

  • @MB-oc7ky
    @MB-oc7ky 4 years ago

    At 42:17 why does multiplying each side by A^T change x? In other words why isn't x equal to x_hat?

    • @MrScattterbrain
      @MrScattterbrain 4 years ago +4

      Here, prof. Strang is pointing towards statistics. Assume there is some "true" value of x, which is set by nature's law. We collected measurements of A and b, and try to find that true x by solving the equation Ax=b. But our measurements are noisy, so we will not get the true x. What we can find is some approximation, an "estimate" of x, which is usually denoted by x-hat.
      That's one example of what statistics is about - finding estimates for unknown parameters from given observations. The true parameter usually remains unknown.

  • @vijayamanikandanv8471
    @vijayamanikandanv8471 4 years ago

    In the section on the row space being orthogonal to the null space (time 25:40), do we need the transpose for the row? Because (row)*x is already the scalar product, not (row)^T*x.

    • @nenadilic9486
      @nenadilic9486 3 years ago

      He didn't transpose the rows of A but the vectors named 'row-sub-i', which are, like any vector, always written in column form.
      In other words, it is a convention that, if we want to write a vector that corresponds to a row of any matrix A (the rows are not vectors by themselves), we write it as the proper vector which is the corresponding column of the matrix A transpose.
      This makes our notation consistent. Anytime we write a vector name (e.g. 'a', 'row', 'q', 'spectrum', 'x', 'v'...), we can always replace it with some matrix column. So, if we want to multiply another vector or matrix with it from the left, we must first transpose it.
      And it is not a mere convention! It is an essential property of matrices: the columns are vectors, not the rows. If we could, at our own leisure, claim whenever we want that rows are also vectors, then the whole concept of a transposed matrix would be corrupted, even the concept of a matrix itself.

    • @nenadilic9486
      @nenadilic9486 3 years ago

      @CoeusQuantitative Read my comment and any textbook. He didn't make any mistake. And no, Vijaya is not correct. And professor Gilbert certainly does not have ''senior moments''. His brain is more lucid than mine or yours have ever been or will ever be.

  • @trojanhorse8278
    @trojanhorse8278 1 year ago

    Hello, can someone please explain how the length formula for a vector is derived? Why is the squared length of a vector x equal to x^T . x, where x^T is the x transpose?

    • @thepruh1151
      @thepruh1151 1 year ago

      a^2 + b^2 = c^2, so a, b, c must be the length of the vectors making this right-angled triangle. Thus, the a^2 in the Pythagorean formula corresponds to ||→a||^2, where →a just indicates vector a. This logic applies to the rest of the theorem, resulting you in
      ||→a||^2 + ||→b||^2 = ||→c||^2
      But we know that →c is just →a + →b, so we can replace that c with the vector sum, therefore
      ||→a||^2 + ||→b||^2 = ||→a + →b||^2
      That covers the vector interpretation of the Pythagoras Theorem.
      As to why ||→x||^2 can be written as x^T * x. We can write ||→x||^2 as
      ||→x||^2 = ||→x|| ||→x||= →x ⋅ →x
      By property of duality, the dot product of any two vectors, say →v and →w is the same as taking the matrix multiplication of one of those vectors transposed and the other vector, mathematically
      →v ⋅ →w = v^T * w = w^T * v
      Why this is so is, as far as I know, just a coincidentally beautiful property - it's simply true.
      For example, say
      →v = (1, 2, 3) and →w = (3, 2, 1), written as column vectors.
      Thereby
      →v ⋅ →w = 1*3 + 2*2 + 3*1 = 10
      or, written as a matrix multiplication,
      v^T * w = [1 2 3] * [3; 2; 1] = 1*3 + 2*2 + 3*1 = 10
      or
      w^T * v = [3 2 1] * [1; 2; 3] = 3*1 + 2*2 + 1*3 = 10
      In conclusion, the dot product of two vectors is also equal to the matrix multiplication of one of the vectors transposed and the other vector. Apply that logic with the dot product of a vector with itself, and you'll get the same result.
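The duality claim above is easy to sanity-check numerically. A minimal sketch (not part of the original thread), using the same vectors as the example and assuming NumPy is available:

```python
import numpy as np

# The vectors from the worked example above.
v = np.array([1, 2, 3])
w = np.array([3, 2, 1])

# The dot product and both "transpose times" matrix products agree.
assert np.dot(v, w) == 10
assert v.T @ w == 10   # for 1-D arrays, .T is a no-op and @ is the dot product
assert w.T @ v == 10

# Squared length as x^T x: here ||x||^2 = 1 + 4 + 4 = 9.
x = np.array([1, 2, 2])
assert x @ x == 9
assert np.isclose(np.linalg.norm(x) ** 2, 9.0)
```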

  • @nioncao
    @nioncao 3 years ago

    Ax=b, then AtAx=Atb. awesome!
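That one-liner is the whole least-squares trick from the end of the lecture, and it can be checked numerically. A minimal sketch (the 3×2 system below is made-up illustration data, assuming NumPy): Ax = b has no solution, but A^T A x̂ = A^T b does, and it matches NumPy's least-squares routine.

```python
import numpy as np

# Overdetermined made-up system: 3 equations, 2 unknowns, b not in C(A).
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])

# Multiplying both sides by A^T gives the square, solvable normal equations.
x_hat = np.linalg.solve(A.T @ A, A.T @ b)   # here x_hat works out to [5, -3]

# Same answer as NumPy's dedicated least-squares solver.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x_hat, x_lstsq)
```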

  • @erdem_cologne
    @erdem_cologne 4 years ago +4

    3:13 "going way back to the greeks..." :) well, sir, I think the Greeks are still in the world.

  • @emenikeanigbogu9368
    @emenikeanigbogu9368 4 years ago

    Loved this lecture

  • @SivaramakrishnanSivakumar
    @SivaramakrishnanSivakumar 11 years ago

    Ax=b does not have a solution, but how can
    A'Ax'=A'b have a solution? Since
    when we multiply by A' we just take combinations of rows and create new rows,
    so if x' satisfies the 2nd eqn, it must satisfy the 1st.
    Please clear up my doubt.

    • @tarunkalluri1799
      @tarunkalluri1799 6 years ago +1

      I guess you are also simultaneously changing b on the RHS by multiplying with A', and the new b might become solvable.

    • @tarunpahuja3443
      @tarunpahuja3443 5 years ago

      Multiplying by A transpose leads to (square matrix)·x = (A transpose)·b, which means you now have the same number of equations as unknowns, so you can solve for x.

    • @qinglu6456
      @qinglu6456 5 years ago +2

      "Ax=b does not have a solution" means the vector b is not in the space spanned by the column vectors of A because there are not enough linearly independent column vectors in A. So, there is NO solution. However, we may find an approximate solution (x_hat) if we just consider a component of b that is in the space spanned by the column vectors of A as the new b. A'b is getting the component of b that is in the space spanned by the column vectors of A. To see this, let b*=A'b, then each entry of b* is the inner product of b and a column vector of A, which is proportional to the length of the projection of b on the corresponding column vector of A. What we want to find for x_hat (the approximate solutions) are the length of b* along each direction of the column vector of A. This can be done by solving A'A(x_hat)=A'b. If A'A is invertible (that is, the column vectors of A are linearly independent), then x_hat = (A'A)^(-1)A'b. However, if A'A is not invertible, we do not even have solutions for x_hat.

    • @nenadilic9486
      @nenadilic9486 3 years ago +1

      @@qinglu6456 This is the next lecture (15). You are ahead ;)

  • @Gisariasecas
    @Gisariasecas 8 years ago +1

    Could I see (A^t)(A) as a dot product of two vectors in the matrix space?

    • @antoniolewis1016
      @antoniolewis1016 8 years ago +2

      no because A is a matrix.

    • @dacianbonta2840
      @dacianbonta2840 8 years ago +3

      No, because you're doing matrix multiplication. The result is a matrix, not a scalar.

    • @Gisariasecas
      @Gisariasecas 8 years ago

      Thanks for answering, but that's not quite correct. The answer is at the end of this course, when the professor talks about linear transformations and their matrix representations. The correct answer is that I cannot see it as a dot product because this definition of the dot product only works with the coordinate representation of a vector: for every vector we have a coordinate representation (as a column), and only once I have that representation can I apply the definition.

    • @daxterminator5
      @daxterminator5 8 years ago

      In fact, every entry in the new matrix (A^t)(A) is a dot product between two columns of A. Since the columns of A are part of C(A) [thanks, Capt. Obvious!], you are right!
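To settle the thread: (A^t)(A) is a matrix, not a scalar, so it is not itself a dot product, but each of its entries is one. A quick NumPy sketch with a made-up 3×2 matrix (not from the thread):

```python
import numpy as np

# A made-up 3x2 matrix.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

G = A.T @ A   # a 2x2 matrix, not a scalar

# Entry (i, j) of A^T A is the dot product of column i and column j of A.
for i in range(A.shape[1]):
    for j in range(A.shape[1]):
        assert np.isclose(G[i, j], A[:, i] @ A[:, j])
```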

  • @pelemanov
    @pelemanov 13 years ago +1

    I also love this series of lectures, but in this lecture I find that he does not explain perpendicular subspaces enough. The example with the blackboard and the floor is just confusing (to me and my colleagues at least), because obviously they form a right angle and thus are perpendicular. And obviously they are both subspaces. Which leads me to believe that the definition is wrong, which I'm sure it isn't.

    • @alexdowad947
      @alexdowad947 7 years ago +2

      That's not what "perpendicular" means in relation to vector subspaces. When applied to subspaces in the context of linear algebra, "perpendicular" means that *every* vector in one subspace is perpendicular to every vector in the other.

    • @stanbaltazar
      @stanbaltazar 5 years ago +2

      The blackboard-floor example is in 3-dimensional space. Two orthogonal subspaces can meet only at the zero vector, so in R^3 their dimensions can add up to at most 3. The blackboard and the floor are both 2-dimensional (2 + 2 > 3) and hence could not be orthogonal: they share a whole line of nonzero vectors.

    • @bassmaiasa1312
      @bassmaiasa1312 3 years ago

      I don't think of perpendicular and orthogonal as synonyms. Perpendicular is the familiar spatial concept of right angles. Orthogonal means the dot product = 0, it's algebraic, not spatial. Once we get past R3, I don't try to think spatially (e.g., 90° angles in R4). When you say the planes are perpendicular, you're thinking of a plane as pointing in one direction, but planes don't actually point in any direction. A plane's normal vector points in a direction. Saying two planes are perpendicular means their normal vectors are perpendicular. Otherwise, to say two planes are perpendicular is meaningless, the way I see it.
      I think Prof Strang only uses the term 'perpendicular' because we are already familiar with the pythagorean theorem and right triangles, to show that the pythagorean theorem also works with orthogonal vectors. I'd just as soon forget the word 'perpendicular' as applied to linear algebra. The key point is that the familiar spatial relationships in R2 and R3 continue to work as algebraic relationships beyond R3. E.g., I don't think of the 'length' of a vector in R4 because 'length' is a spatial concept; I think of the result of the pythagorean equation.
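That algebraic reading of "Pythagoras beyond R3" can be verified directly. A small sketch with made-up vectors in R^4, assuming NumPy:

```python
import numpy as np

# Made-up orthogonal vectors in R^4: their dot product is zero.
x = np.array([1.0, 2.0, 0.0, -1.0])
y = np.array([2.0, -1.0, 3.0, 0.0])
assert x @ y == 0.0

# The Pythagorean relation holds purely algebraically in R^4:
# ||x + y||^2 = ||x||^2 + ||y||^2 exactly when x . y = 0.
lhs = np.linalg.norm(x + y) ** 2
rhs = np.linalg.norm(x) ** 2 + np.linalg.norm(y) ** 2
assert np.isclose(lhs, rhs)

# A non-orthogonal pair picks up the cross term 2 * (x . z) instead.
z = np.array([1.0, 1.0, 1.0, 1.0])
assert not np.isclose(np.linalg.norm(x + z) ** 2,
                      np.linalg.norm(x) ** 2 + np.linalg.norm(z) ** 2)
```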

  • @fehmi35
    @fehmi35 4 years ago

    In 25:05 he says transpose of row1 times x is zero, but the transpose of a row is a column, so how is it even possible to multiply them? Shouldn't it be just row times x? Thx

    • @nenadilic9486
      @nenadilic9486 3 years ago

      He didn't transpose the rows of A but the vectors named 'row-sub-i', which are, as any vector, always written in the column form.
      In other words, it is a convention that, if we want to write a vector that corresponds to a row of any matrix A (the rows are not vectors by themselves) we write it as the proper vector which is the corresponding column of the matrix A transpose.
      This makes our notation consistent. Anytime we write a vector name (e.g. 'a'. 'row'. 'q', 'spectrum', 'x', 'v'...), we can always replace it with some matrix column. So, if we want to multiply another vector or matrix with it from the left, we must first transpose it.
      And it is not a mere convention! It is an essential property of matrices: columns are vectors, not rows. If we could, at our own leisure, claim whenever we want that rows are also vectors, then the whole concept of the transposed matrix would be corrupted, and even the concept of a matrix itself.

    • @nenadilic9486
      @nenadilic9486 3 years ago

      @@fehmi35 No, you got it right the first time: the vectors that correspond to rows are orthogonal to the null space, not the column vectors (the latter are orthogonal to the null space of A^T).
      The thing is that in order to get the vectors that correspond to rows of A, you must look at the columns of A^T. Or to put it simply: you write each row of A as a vertical vector, because you always write vectors like that in linear algebra - they are always the columns of some matrix.
      Then, in order to multiply another vector (x in this example) by it, from the left, you obviously must have it in a form of a 1 x n matrix, and for that you transpose your (vertical) vector.
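The underlying fact in this exchange — each row of A, read off as a column of A^T, is orthogonal to every null space vector — is easy to demonstrate. A small sketch with a made-up rank-1 matrix, assuming NumPy:

```python
import numpy as np

# A made-up rank-1 matrix: its null space in R^3 is 2-dimensional.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])

# Two independent null space vectors, found by inspection (A @ n = 0).
n1 = np.array([2.0, -1.0, 0.0])
n2 = np.array([3.0, 0.0, -1.0])
assert np.allclose(A @ n1, 0.0) and np.allclose(A @ n2, 0.0)

# Each row of A, written as a column of A^T, is orthogonal to both.
for i in range(A.shape[0]):
    row_i = A.T[:, i]   # row i of A, written as a (column) vector
    assert row_i @ n1 == 0.0
    assert row_i @ n2 == 0.0
```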

    • @iamjojo999
      @iamjojo999 2 years ago

      @@nenadilic9486 No, that is clearly a mistake. I think prof Strang "transposed" the row to multiply it with x because he had just taught that when two vectors are perpendicular they have the property x^T * y = 0. So he did not intentionally transpose the first vector.
      But x^T * y = 0 requires both vectors to be column vectors, and a row vector is clearly not a column vector, so there is no need to transpose again.

  • @mathforai-j5y
    @mathforai-j5y 2 months ago

    “So ninety degrees, this is a ninety-degree chapter.”😘