Visualizing Diagonalization

  • Published 2 Nov 2024

Comments • 46

  • @sebastiantruijens7176
    @sebastiantruijens7176 11 months ago +17

    I am usually a silent observer of TH-cam videos, but this is special. Enjoyed every second of it. Thank you for making this.

  • @rahman3405
    @rahman3405 1 year ago +6

    Great explanation. The animations and visuals were amazing.
    Answers:
    1) The direction-invariant vectors are called eigenvectors.
    2) A matrix is diagonalizable if it has enough linearly independent eigenvectors to span the space.
    3) The diagonal entries are the eigenvalues. Correct me if I am wrong. Thanks!

    • @qualitymathvisuals
      @qualitymathvisuals  1 year ago

      Thank you for the kind words! It seems you are quite well studied since all of your answers are indeed correct!
      Bonus question: can every onto linear transformation be diagonalized?

    • @AolAlpha
      @AolAlpha 8 months ago +3

      @@qualitymathvisuals No sir, not every onto linear transformation can be diagonalized. Diagonalizability is a property of square matrices or linear transformations that have a full set of linearly independent eigenvectors.

    • @qualitymathvisuals
      @qualitymathvisuals  8 months ago

      Excellent!
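
The three answers in the thread above can be checked numerically. A minimal NumPy sketch (the 2×2 matrix is an illustrative choice of mine, not one from the video):

```python
import numpy as np

# Example matrix with two distinct eigenvalues (5 and 2),
# hence two linearly independent eigenvectors.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Columns of P are the eigenvectors; w holds the eigenvalues.
w, P = np.linalg.eig(A)

# 1) Eigenvectors are direction-invariant: A v is a scalar multiple of v.
v = P[:, 0]
assert np.allclose(A @ v, w[0] * v)

# 2) Diagonalizable: the eigenvectors span the space, i.e. P is invertible.
assert abs(np.linalg.det(P)) > 1e-12

# 3) The diagonal entries of D = P^{-1} A P are exactly the eigenvalues.
D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag(w))
```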

  • @5ty717
    @5ty717 8 months ago +3

    This shows your very deep intuition.

  • @wyboo2019
    @wyboo2019 8 months ago +4

    there's a great book called Linear and Geometric Algebra by Alan Macdonald. while a good portion of it is about building the foundation of geometric algebra (a very clean way of unifying many parts of linear algebra by defining a new operation on vectors), the best part about the book is that it teaches linear algebra and linear transformations without much matrix usage; there's like one or two chapters covering matrices, as they are important, but most discussion of linear transformations is matrix-free. i really like it because i think matrices are so heavily tied with linear transformations that the two tools can get conflated with one another

    • @qualitymathvisuals
      @qualitymathvisuals  8 months ago

      What a great observation! Macdonald is one of the greats when it comes to abstract algebra. I believe linear transformations are an incredible artifact of the human brain, arising from the more general idea of morphisms; matrices are just a way of describing their details in a well-understood setting. Thank you for the thoughtful comment!

  • @raypanzer
    @raypanzer 1 year ago +10

    This is a very high quality math visual! Never knew my homework was interesting 👍

  • @jonkazmaier5099
    @jonkazmaier5099 6 months ago +4

    HOW does this not have more views?? Best visualization of this concept I have ever seen

  • @DimitrijeĆirić-x1x
    @DimitrijeĆirić-x1x 8 months ago +3

    Thanks! Great video

  • @spyral2108
    @spyral2108 6 months ago +1

    Wow, this is incredible. I must say you have done a very good job with this video, and you explained the concepts of diagonalization very concisely. Thanks!

  • @Fish-vs6jf
    @Fish-vs6jf 1 day ago

    Didn't understand a single word of this, but it was pretty!

  • @guiguio2nd1er
    @guiguio2nd1er 1 year ago +1

    Great video, as usual

  • @guillermogarcia8912
    @guillermogarcia8912 8 months ago +1

    Great video , greetings from Spain !

  • @CrusaderGeneral
    @CrusaderGeneral 6 months ago

    I can watch these in a loop all day long!

  • @OpPhilo03
    @OpPhilo03 9 months ago +1

    Great video sir.
    Thank you so much Sir❤

  • @tune_m
    @tune_m 7 months ago +2

    Very insightful! Question: when you read the equation at 4:27 you read it from left to right, but aren't the matrices composed from right to left?

    • @tune_m
      @tune_m 7 months ago

      As a consequence I read it as "align the eigenvectors with the standard basis" -> "scale standard basis" -> "move the eigenvectors back"

    • @tune_m
      @tune_m 7 months ago

      But I'm unsure whether my interpretation is correct

    • @qualitymathvisuals
      @qualitymathvisuals  7 months ago

      Excellent question! Yes, given two matrices A and B, their product can be interpreted as the composition of the linear transformation of A with the linear transformation of B. So AB is the transformation that applies B and then A. So yes, the order of highlighting used in the animation is not helpful for this understanding, good catch!

    • @tune_m
      @tune_m 7 months ago

      @@qualitymathvisuals Thanks for the prompt response! I'm currently a TA for an undergrad LinAlg course so this video serves me (and my students) well.
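
The order question in this thread can be checked directly. A small NumPy sketch (matrices chosen only for illustration) confirming that AB means "apply B first, then A":

```python
import numpy as np

# Two non-commuting transformations: a scaling and a shear.
A = np.diag([2.0, 1.0])            # scale x by 2
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])         # horizontal shear

v = np.array([0.0, 1.0])

# (AB)v applies B to v first, then A to the result.
assert np.allclose((A @ B) @ v, A @ (B @ v))

# The opposite order is a different transformation altogether.
assert not np.allclose(A @ B, B @ A)
```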

  • @Sarah-pu8un
    @Sarah-pu8un 5 months ago

    Wow! Extremely helpful

  • @anonymoususer4356
    @anonymoususer4356 6 months ago

    Superb video!

  • @B-Ted
    @B-Ted 5 months ago

    Truly underrated 🌟

  • @theodoreshachtman7360
    @theodoreshachtman7360 7 months ago

    Amazing video 🎉

  • @cementedlightbulb
    @cementedlightbulb 14 days ago

    I am surprised that this has only 15k views

  • @AbuMajeed3957
    @AbuMajeed3957 8 months ago +1

    Thank you

  • @SailorUsher
    @SailorUsher 6 months ago

    Thank you so much!!!

  • @TheJara123
    @TheJara123 1 year ago

    Happy like a hippo!! Thanks, man

  • @zedentee5652
    @zedentee5652 5 months ago

    Wth you are so underrated

  • @sulbhasupriya4180
    @sulbhasupriya4180 1 year ago +1

    Thank you🫡

  • @alexmathewelt7923
    @alexmathewelt7923 8 months ago +1

    Fun fact: to calculate the largest power of a matrix whose exponent still fits in a 64-bit unsigned long, only 128 multiplications are needed. Example: you want to calculate 5^14. Split the exponent into binary: 5^(2¹+2²+2³) = 5² × (5²)² × ((5²)²)² = 6,103,515,625.
    We only have to square, x := x², and whenever the current bit is set, multiply the result by the current power, then square x again... So to calculate powers with exponents up to about 4 billion, you only need at most 64 multiplications: 32 for the squaring and at most 32 for the result. Since computers have no more difficulty with larger numbers, this reduces the amount of calculation by an insane amount.

    • @qualitymathvisuals
      @qualitymathvisuals  8 months ago +1

      What a spectacular insight! The algorithm you are describing is called the “square and multiply algorithm” and is one of the main tools needed for computational cryptography. Hopefully I can talk about it soon in an upcoming video!
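
The square-and-multiply loop described in this thread fits in a few lines; this mirrors the comment's 5^14 example (the function name is my own):

```python
def power(base: int, exp: int) -> int:
    """Binary exponentiation: O(log exp) multiplications instead of exp - 1."""
    result = 1
    x = base
    while exp > 0:
        if exp & 1:       # current binary digit of the exponent is 1
            result *= x
        x *= x            # square for the next binary digit
        exp >>= 1
    return result

# The comment's example: 14 = 2 + 4 + 8 in binary.
assert power(5, 14) == 6_103_515_625
```

The same loop works for square matrices once `*` is replaced by matrix multiplication, which is exactly where diagonalization (A^k = P D^k P^{-1}) makes the repeated products cheap.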

  • @starcrosswongyl
    @starcrosswongyl 6 months ago +1

    Hi, with regards to P D P^-1: the P^-1 converts to the new basis, after which we scale by D and then rotate back to the standard basis by P. Am I correct?
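
That reading is right: in P D P^{-1} the factor P^{-1} acts first (coordinates in the eigenbasis), then D scales, then P maps back to the standard basis. A minimal NumPy check, with an example matrix of my own choosing (note P is a true rotation only when the eigenvectors are orthonormal):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
w, P = np.linalg.eig(A)
D = np.diag(w)

v = np.array([1.0, -1.0])

coords = np.linalg.inv(P) @ v   # 1) express v in the eigenbasis
scaled = D @ coords             # 2) scale each coordinate by its eigenvalue
result = P @ scaled             # 3) return to the standard basis

assert np.allclose(result, A @ v)
```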

  • @Nalber3
    @Nalber3 5 months ago

    I was wondering the other day what was used before analytic geometry, and then discovered synthetic geometry. I think there's a need for a balance between analytic and synthetic geometry. What do you think?
    Lovely animation, btw ❤

    • @Nalber3
      @Nalber3 5 months ago

      I see a lot of potential in blender as a game changer to do simulations using interconnected nodes 😊

  • @diamondredchannel8024
    @diamondredchannel8024 3 months ago

    Sir, can you send me an example of a diagonalizable 5×5 matrix?
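
One way to build such an example yourself: pick any diagonal D and any invertible P and form A = P D P^{-1}, which is diagonalizable by construction. A NumPy sketch (the particular D and P here are arbitrary choices of mine):

```python
import numpy as np

# Five distinct eigenvalues guarantee diagonalizability.
D = np.diag([1.0, 2.0, 3.0, 4.0, 5.0])

# Any invertible 5x5 matrix works as P; identity plus a superdiagonal is one choice.
P = np.eye(5) + np.eye(5, k=1)

A = P @ D @ np.linalg.inv(P)

# A's eigenvalues are exactly the diagonal entries of D.
assert np.allclose(sorted(np.linalg.eigvals(A).real), [1, 2, 3, 4, 5])
```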

  • @buirabxs
    @buirabxs 5 months ago

    Guys, as I understand it, we are splitting a linear transformation into steps that are easier to calculate: the P matrix helps us change basis, D changes the sizes, and P inverse finishes the work. Now I have a question:
    Is it correct to say that P realizes some rotation that we need and D just changes the sizes?
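
On the rotation question above: P is a genuine rotation (or reflection) only when its columns, the eigenvectors, are orthonormal, which happens for example with symmetric matrices; in general P is an invertible change of basis that may also shear and stretch. A NumPy sketch contrasting the two cases (both matrices are illustrative choices):

```python
import numpy as np

# Symmetric matrix: eigenvectors come out orthonormal, so P is
# orthogonal (P^T P = I), i.e. a rotation and/or reflection.
S = np.array([[2.0, 1.0],
              [1.0, 2.0]])
_, P_sym = np.linalg.eig(S)
assert np.allclose(P_sym.T @ P_sym, np.eye(2))

# Non-symmetric matrix: P is invertible but not orthogonal in general.
M = np.array([[4.0, 1.0],
              [0.0, 2.0]])
_, P_gen = np.linalg.eig(M)
assert not np.allclose(P_gen.T @ P_gen, np.eye(2))
```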

  • @Abcdeee_p
    @Abcdeee_p 6 months ago

    wow!