I don't understand how and why the first component explains the most important variation in the data. And how do you calculate the percentage for each one?
Hello! This video shows a very quick/brief overview of PCA. For a more in-depth explanation (that goes further into the math), I suggest this video: th-cam.com/video/dhK8nbtii6I/w-d-xo.html The eigenvector that corresponds to the largest-magnitude eigenvalue (of our covariance matrix) lies along the direction that explains the highest proportion of the variation in our data. We label this eigenvector "PC1". As for the percentages: each component's share of the variation is its eigenvalue divided by the sum of all the eigenvalues.
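A minimal sketch of that last point, using NumPy and a made-up random dataset (the data and mixing matrix here are purely illustrative): the percentage of variation explained by each PC comes from the eigenvalues of the covariance matrix.

```python
import numpy as np

# Hypothetical dataset: 100 samples, 3 correlated features
# (the mixing matrix below is just an example, not from the video).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3)) @ np.array([[3.0, 0.0, 0.0],
                                          [1.0, 1.0, 0.0],
                                          [0.5, 0.2, 0.3]])

# Center the data, then form the covariance matrix of the features.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)

# Eigendecomposition: each eigenvalue is the variance of the data
# along its eigenvector. eigh returns them in ascending order,
# so we sort descending; the top eigenvector is PC1.
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
eigvals = eigvals[order]

# Percentage of total variation explained by each component.
pct = 100 * eigvals / eigvals.sum()
print(pct)  # descending; sums to 100
```

The percentages always sum to 100, and PC1's percentage is the largest because its eigenvalue is the largest.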
Very clear and simple in its terms. I am truly grateful.
Very good! Congratulations!
I've seen many conceptual presentations of PCA, but yours was the best one.
Thanks!
The clarity of your presentations is both admirable and easy to follow. Thank you for sharing!
Thank you so much, Willy!
Wow, thank you for this clear explanation! The rotation-of-axes vs. eigenvectors part is concise and effective.
Thank you from Brazil!
You're welcome! Glad the video was helpful. :)
Please do some examples on real data. By the way, I appreciate your efforts.