omg, you are really great at explaining things by using only a pen and a whiteboard, without the need for fancy digital animation, this is definitely what I call a REAL "Education"!!!
This is definitely a great explanation of eigendecomposition.
I kind of got into this rabbit hole trying to understand singular value decomposition, and this video helped me understand that as well.
Thanks for your help understanding this.
Lmao I'm in the exact same rabbithole :D
holy shit I guess I'm not alone lmao
+1
same here bro
haha me too
Holy shit you're literally blowing my mind (in a positive way) with your videos. I've never understood Eigendecomposition (and many more of the topics you're explaining) but now it all makes sense. Please never stop with your videos!
Finally, someone that shows it simple and clear and answers the most important question: why? Thank you!
No problem!
I'm just learning these basics and your videos are very comprehensive and highly informative. Looking forward to completing all the videos in the playlist!!
Surprisingly good explanation. Thanks a lot! I especially liked that all the information goes in order without gaps and an example of practical application is given.
Brief and clear! Thank you.
Such a succinct explanation... Can you just explain why we normalised the eigenvectors?
Great video, love the clarity of the explanation
Never seen such a clear explanation! Thank you so much!
Wish I could give more than one like. This channel is so underrated.
This is a great explanation, been stuck trying to understand PCA and this really helps
Watched this video as a refresher for my ML class and it was super helpful. Thanks!!!
Outstanding explanation!
It is very difficult to find that subject in a linear algebra college textbook.
Great explanation! Can you please give an example in machine learning or data science when we need to do the same linear transformation again and again?
Beautiful explanation... Thanks...
Honestly... you deserve at least a million subscribers. A moron professor in our econometrics class didn't even try to do this in his class! Thanks, professor Ritvik!
While Ritvik is indeed A-MA-ZING, perhaps you should be a bit nicer to your econometrics professor :-)
Thank you very much for your detailed answer, with appropriate examples and their benefits.
I really love your explanations, really helpful
Appreciated!
A superb explanation that I got the first time through. Liked and subscribed!
Thanks for posting it. It would have been nicer to show how a matrix raised to a power is used in data science.
This gives a lot of information about the process of doing it and its value in data science. Thanks.
Best video on eigenvalue decomposition on any platform. Thanks man!
Wow, thanks!
This channel is extremely useful, thank you very much
Wow this is the best video on Eigen Decomposition. Thanks a lot man!
Thanks, very easy to follow your thought process. Helped me very much!
Glad it helped!
You made it so easy to understand! Thank you!
Glad it helped!
Great short explanation! Thanks!
I liked the video, very explanatory and understandable
Best help I found online. Thanks :)
You're welcome!
Only one doubt, what's the reason behind normalizing eigenvectors?
Btw, your content and the way you explain these scary concepts taught me something that even MIT lectures couldn't. Thank you so much sir, please keep making such videos!
More power to you sir :)
Because any scalar multiple of an eigenvector is still an eigenvector, we generally just take the unit vector.
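To make that concrete, here's a minimal numpy sketch (the matrix A is made up for illustration, not taken from the video): scaling an eigenvector still satisfies Au = λu, and normalizing just fixes a convenient unit length.

```python
import numpy as np

# Hypothetical example matrix (not the one from the video).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
u = eigenvectors[:, 0]    # numpy already returns unit-length columns
lam = eigenvalues[0]

# Any scalar multiple of u is still an eigenvector for the same eigenvalue.
v = 7.3 * u
print(np.allclose(A @ v, lam * v))   # True

# Normalizing only fixes the length; the direction and eigenvalue are unchanged.
u_normalized = v / np.linalg.norm(v)
print(np.allclose(A @ u_normalized, lam * u_normalized))  # True
```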
Thank you so much. I always love to learn why things are important. Makes studying much more interesting :)
Thanks...Very nice explanation...
You are welcome
Love bro! This explanation was so clear
Glad to hear it!
Great Clear explanations... Thanks a lot!
Thank you for this amazingly simple explanation!
Could you give me an example of that kind of multiplication used in Machine Learning?
Great explanation !
Amazing clear explanation! Love u dude! Thx a million!
Awesome Explanation.. Keep it up!
Thanks a lot!
Thank you so much, you are a saviour.
You have a great channel! Thanks for the insight, which is hard to come by. The one area that confused me at the time was the definition of the 2x2 matrices for u1 and u2. They look like 3x2 matrices with values 1 & u1 (or u2). I did figure it out though. Thanks!
Thank you!
can you elaborate on this? I still don't get how it isn't a 3x2 matrix.
@@Galmion It shouldn't have been written the way it was, in my opinion, as it causes confusion. Those "1"s are just dots, "...", meant to stand for arbitrary entries.
@@Galmion The matrix U is the two eigenvectors, u1 and u2, put next to each other in one matrix. And since u1 and u2 are 2x1 vectors, putting them together makes U a 2x2 matrix.
@@Galmion I would have chosen an example with no square roots as the first example, personally. Say your eigenvectors are
u1 = [2, 3]^T and u2 = [4, 5]^T.
Then U, the eigenvector matrix, is
U = [2 4]
    [3 5]
Hope this helps.
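To double-check that construction numerically, here's a small sketch with numpy; the vectors are the illustrative ones from the reply above, and the eigenvalues are placeholders, not values from the video.

```python
import numpy as np

# The two column vectors from the example above.
u1 = np.array([2.0, 3.0])
u2 = np.array([4.0, 5.0])

# Stacking them as columns gives the 2x2 eigenvector matrix U.
U = np.column_stack([u1, u2])
print(U)
# [[2. 4.]
#  [3. 5.]]

# With placeholder eigenvalues l1, l2 on the diagonal of Lambda, the
# decomposition is reassembled as A = U @ Lambda @ inv(U).
l1, l2 = 2.0, 3.0
Lambda = np.diag([l1, l2])
A = U @ Lambda @ np.linalg.inv(U)
print(np.allclose(A @ u1, l1 * u1))   # True: u1 is an eigenvector of this A
print(np.allclose(A @ u2, l2 * u2))   # True: u2 is an eigenvector of this A
```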
very nice explanation
Thanks for liking
OMG the application part was amazing😍
Thanks a lot for this clear explanation!
great explanation
Wow, such a good explanation!
Glad it was helpful!
OMG, literally understood the eigen shit in 8 minutes, thank you so much
Awesome!
Damn, just a good video. Thank you very much for explaining
Your explanation is the best I have ever seen. But it does not explain what each component really means, i.e. first U^-1 maps/rotates the input vector into the eigenvector basis, then Lambda stretches the result along each eigenvector direction, and finally U rotates the vector back into the original axes.
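Here's a quick numerical sketch of that three-step reading, with a made-up symmetric matrix and input vector purely for illustration:

```python
import numpy as np

# Hypothetical symmetric matrix and input vector, just for illustration.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
x = np.array([1.0, 2.0])

eigenvalues, U = np.linalg.eig(A)
Lambda = np.diag(eigenvalues)
U_inv = np.linalg.inv(U)

# Step 1: express x in the eigenvector basis.
coords = U_inv @ x
# Step 2: stretch each coordinate by its eigenvalue.
stretched = Lambda @ coords
# Step 3: map the result back to the original basis.
result = U @ stretched

print(np.allclose(result, A @ x))   # True: the three steps reproduce A @ x
```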
Great video ! Can you also touch on the topic of LU Decomposition, Jordan Canonical Form, Rayleigh quotient, etc. ?
Super helpful. Thanks
Thanks a lot. This was sublime.
You're very welcome!
Great video, thanks!
Your videos are helpful and concise at the same time; that's rare on today's YouTube.
this is awesome!
Thanks for your help!
Thank you. Thank you. Thank you.
Any time!
Can you tell me what the pros of this topic are?
What is the difference between decomposition and factorisation?
I think they're often used interchangeably
But do most matrices have an eigendecomposition? If not, doesn't that mean it has limited use?
Best intro ever
awesome thanks
Love this. Thank u❤
Fantastic!!!!!!!!!!!!!!!!!!
Great video!
Why do we need normalized eigenvectors? Won't any eigenvector from the family of eigenvectors suffice?
You are AWESOME! thank you!
Excellent
Great job. I had no idea before the video; now I know everything.
Great job, peace
Man, this rocks! thank you!
Pff, great video. I feel bad I didn't know about this guy earlier; it saves a lot of time.
Good video
If p = 6 or p = 7, is this arbitrary? What about p = 8?
7:54 Shouldn't you do the rightmost multiplication first? Lambda * U inverse.
I wish it had been explained to me this easily when I was studying it almost 30 years ago
Exceptional explanation
Thanks for the kind words!
very nice
Hey, did anyone solve for the eigenvectors?
Maybe I am wrong, but I got x1 = -2/3 x2 and x2 = -3/2 x1 when solving the equations for lambda = -5.
If anyone got the answer, please let me know.
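For what it's worth, x1 = -(2/3) x2 and x2 = -(3/2) x1 are the same constraint, so those two answers agree; both describe the direction [-2, 3] (up to scaling). A quick way to sanity-check any candidate eigenvector is to plug it back into Av = λv. The matrix below is a stand-in constructed to have eigenvalue -5 with eigenvector [-2, 3]; it is not the matrix from the video.

```python
import numpy as np

# Stand-in matrix (NOT the one from the video), built so that it has
# eigenvalue -5 with an eigenvector proportional to [-2, 3].
A = np.array([[-1.4, 2.4],
              [ 3.6, -2.6]])

lam = -5.0
# The relation x1 = -(2/3) x2 describes a direction, e.g. v = [-2, 3].
v = np.array([-2.0, 3.0])

# v is an eigenvector for lam exactly when A v = lam v,
# equivalently (A - lam * I) v = 0.
print(np.allclose(A @ v, lam * v))     # True
print((A - lam * np.eye(2)) @ v)       # close to the zero vector
```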
Beautiful
Excellent. I was struggling to understand how the form A = UΛU^-1 is reached from the definition of an eigenvalue (Au = λu) as explained in my textbook, but the way you explained it made it all click for me. Thanks!
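For anyone else stuck on that step, here is the usual derivation sketched out for the 2x2 case, assuming the two eigenvectors are linearly independent so that U is invertible:

```latex
% requires amsmath for bmatrix
A u_1 = \lambda_1 u_1, \quad A u_2 = \lambda_2 u_2
\;\Longrightarrow\;
A \underbrace{[\, u_1 \;\; u_2 \,]}_{U}
  = [\, \lambda_1 u_1 \;\; \lambda_2 u_2 \,]
  = \underbrace{[\, u_1 \;\; u_2 \,]}_{U}
    \underbrace{\begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}}_{\Lambda}
\;\Longrightarrow\;
AU = U\Lambda
\;\Longrightarrow\;
A = U \Lambda U^{-1} \quad (\text{since } U \text{ is invertible}).
```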
Excellent!
Amazing
Thanks man!
Thank you
Of course!
Thanks
10/10 ty
That was beautiful !!!! :')
thanks!
Awesome
nice
thanks*10^10000
SVD is superior imo
Coool
thanksssssssssssssssssssssssssssssssssssssssss
damn i like you, good job
Hang on, if a matrix times its inverse is the identity matrix, why can't the formula for eigendecomposition (U * Lambda * U^-1) be simplified to just Lambda?
You cannot rearrange a product of matrices the way you would with plain numbers/variables.
Exactly, matrix multiplication is not commutative!
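A tiny made-up example of why the cancellation fails: U Λ U^-1 only collapses to Λ when U and Λ commute, which in general they don't. The cancellation that does work is between adjacent U^-1 U pairs, e.g. A^2 = U Λ U^-1 U Λ U^-1 = U Λ^2 U^-1.

```python
import numpy as np

# Made-up invertible U and diagonal Lambda, just to illustrate.
U = np.array([[1.0, 2.0],
              [0.0, 1.0]])
Lambda = np.diag([3.0, 5.0])

product = U @ Lambda @ np.linalg.inv(U)
print(product)
# [[3. 4.]
#  [0. 5.]]  -- not equal to Lambda

print(np.allclose(product, Lambda))          # False
print(np.allclose(U @ Lambda, Lambda @ U))   # False: U and Lambda don't commute
```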
Beautiful and handsome and pretty and
my only comment is that you are awesome 🦾
Great explanation!
Love this. Thank u❤
thank you
Thanks man !!