If you find this lecture challenging, it might be because you've forgotten some basic linear algebra. Don't be discouraged by the somewhat trivial algebraic calculations; the prof does a very good job of explaining the intuition and the statistical foundation for doing PCA. PCA is so commonly used in psychology studies, yet no one in my psych department seems to have a clue where PCA comes from.
extremely helpful with building the basics and then moving forward
To the people whining that the lecture is too hard or that they can't follow: I think you lack the prerequisites for the course. His lecture illustrates PCA from the statistical perspective, and anybody who is serious about data science should know that statistics and linear algebra share many of the same ideas, viewed from different perspectives.
Gave me some insight, thanks. I liked the part about how u^T S u is the variance of the X's along the u direction. Good to know as an alternative viewpoint to singular value decomposition as a way of doing PCA.
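A minimal numpy sketch of that identity (the data and variable names are my own, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))        # n = 500 observations in d = 3 dimensions (toy data)
Xc = X - X.mean(axis=0)              # center the columns
S = Xc.T @ Xc / len(Xc)              # empirical covariance, 1/n convention

u = np.array([1.0, 2.0, -1.0])
u /= np.linalg.norm(u)               # unit-length direction

proj = Xc @ u                        # scalar projection of each point onto u
print(np.allclose(u @ S @ u, proj.var()))  # u^T S u == variance along u -> True
```

The SVD viewpoint is the same picture: the right singular vectors of the centered X are the eigenvectors of S, and the squared singular values divided by n are its eigenvalues.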
I was fast-forwarding like crazy until I heard something and thought, "Damn, it's not only the first minute that has no audio", only to realise my sound was muted.
A good opportunity to burn calories would be to wipe the blackboard properly.
Audio starts at 1:14
Thanks.
Clean the blackboard PROPERLY
H is an n-by-n matrix, and v is a d-element column vector, so H cannot multiply v.
He corrected it later but didn't clean up the board; v is n-dimensional.
But I have a doubt here: n is the number of examples and d is the dimension of the space, so v should be of size d×1, and then Hv should not be feasible?
Can someone please answer this?
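If it helps: in the lecture, H = I_n - (1/n)11^T is the n×n centering matrix, so the v it multiplies has to live in R^n (it collects one coordinate across all n observations), not in R^d. A tiny numpy check, with toy numbers of my own:

```python
import numpy as np

n = 5
H = np.eye(n) - np.ones((n, n)) / n      # centering matrix, n x n
v = np.arange(n, dtype=float)            # v in R^n: one coordinate over all n samples (toy)

print(np.allclose(H @ v, v - v.mean()))  # Hv subtracts the mean from v -> True
```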
Nice example of seeing matrices from the perspective of statistics.
49 min in and still hoping he'll get to PCA soon hahaha... great lecture though
Wonderful teacher and everything, but what's with the horrible board erasing?
Great lecture. Thanks so much, Professor.
He is pretty good actually
Can anyone please help me with how the prof arrives at the final result for the multiplication Hv? I'm a little confused about the steps.
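For what it's worth, writing the product out with H = I_n - (1/n)11^T (the definition from the lecture) gives

Hv = (I_n - (1/n)11^T)v = v - (1/n)1(1^T v) = v - v̄·1,

since 1^T v is the sum of the entries of v, so (1/n)1^T v is the sample mean v̄. In words, Hv replaces each entry of v by its deviation from the mean, which is why H is called the centering matrix.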
Absolutely great. If you have trouble getting this, maybe read a book first.
Shouldn't the empirical covariance matrix be divided by n-1 and not n?
Both definitions work fine.
@@professorravik8188 May I ask why? Is it because we suppose we are calculating the empirical covariance matrix for a whole population, and if we wanted to calculate it only for a sample from the population, we would have to divide by n-1?
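Pretty much: dividing by n gives the plain empirical (maximum-likelihood) estimator, and dividing by n-1 gives the unbiased sample estimator. A small numpy sketch of the two conventions, on random data of my own:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))            # n = 100 observations of 2 variables (toy data)

S_n  = np.cov(X, rowvar=False, ddof=0)   # divide by n   (the lecture's convention)
S_n1 = np.cov(X, rowvar=False, ddof=1)   # divide by n-1 (unbiased estimator)

print(np.allclose(S_n * 100 / 99, S_n1)) # they differ only by the factor n/(n-1)
```

For PCA the choice barely matters: both estimators have the same eigenvectors, and the eigenvalues are just rescaled by n/(n-1).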
Absolutely precious! Excellent in explaining details! Thank you.
How do you prove that the eigenvectors are the columns of the projection matrix?
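Not a proof, but a numerical illustration of the spectral decomposition S = PDP^T used in the lecture, where the columns of P are orthonormal eigenvectors of S (variable names are my own):

```python
import numpy as np

rng = np.random.default_rng(2)
Xc = rng.normal(size=(200, 4))           # toy data
Xc -= Xc.mean(axis=0)                    # center it
S = Xc.T @ Xc / len(Xc)                  # empirical covariance: symmetric PSD

eigvals, P = np.linalg.eigh(S)           # columns of P are orthonormal eigenvectors
print(np.allclose(S, P @ np.diag(eigvals) @ P.T))  # S = P D P^T -> True
print(np.allclose(P.T @ P, np.eye(4)))             # P^T P = I   -> True

# PCA keeps the eigenvectors with the largest eigenvalues and projects onto them:
top2 = P[:, np.argsort(eigvals)[::-1][:2]]
scores = Xc @ top2                       # the data in the top-2 principal directions
```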
Concept of Eigenvector at 1:02
WTF, why is he writing on such a messed-up board???
The only bothersome thing in this video is the dirty blackboard.
Can anyone explain how he got the term in the parentheses at 39:07? Why does v^T 1 = 1^T v (where 1 is the all-ones vector)?
They are transposes of each other, and v^T 1 is a scalar (a 1×1 matrix), so it equals its own transpose: v^T 1 = (v^T 1)^T = 1^T v. And in any case you take the expectation of both, which gives the same thing.
Is he deliberately making the writing hard to read by making the blackboards so poorly erased and specifically writing on those poorly erased boards instead of the nice black ones?
Good lecture, but badly handled by the cameraman.
I'd rather watch one of the lectures on PCA by Prof. Ali Ghodsi.
Link please
@@NphiniT th-cam.com/play/PLehuLRPyt1Hy-4ObWBK4Ab0xk97s6imfC.html
This is the full playlist.
At 1:08:10, shouldn't those lambdas be the eigenvalues of Sigma (i.e., of the covariance matrix)?
Can anyone explain how he is multiplying the identity matrix I_d, which is d×d, by the all-ones matrix, which is n×n?
Never mind... he clears it up around 40:00. It was a gigantic mess.
I don't know why he lets I_d rather than I_n denote the n-by-n identity matrix from 32:10 onward.
Oh, he corrects this mistake at 40:30.
For such an important concept, you would think MIT would've fixed this issue by now.
I understand nothing...
Man, this video is such torture! :D
why?
47:25 bottom left: How is Var(u^T X) defined? What does the "variance" of a random vector mean? Thank you so much.
X is a vector, not a matrix, in this case, so u^T X is just a scalar.
Try working backwards from the result u^T Σ u.
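A Monte Carlo sanity check of Var(u^T X) = u^T Σ u, with a covariance matrix and a direction I made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])           # population covariance (chosen arbitrarily)
u = np.array([0.6, 0.8])                 # any fixed direction

X = rng.multivariate_normal(np.zeros(2), Sigma, size=200_000)  # draws of the vector X
print((X @ u).var())                     # ~1.84, up to Monte Carlo noise
print(u @ Sigma @ u)                     # 1.84 exactly: u^T Sigma u
```

So no separate notion of "variance of a vector" is needed here: u^T X is an ordinary scalar random variable, and its variance works out to u^T Σ u.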
He doesn't even care to erase the board properly 😅😅😅😅
Thanks for the video! Question: can someone explain the difference between big Sigma and S? One is the covariance matrix, the other is the sample covariance matrix; they are not the same thing? Thanks!
Big Sigma is for the whole population. S is computed from a sample drawn from the population, so S is an estimate of Sigma. If the sample is big enough, S approaches Sigma, but it may never exactly equal the population parameter. I hope this is clear!
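A quick sketch of that convergence, with a Σ I picked arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(4)
Sigma = np.array([[1.0, 0.3],
                  [0.3, 2.0]])           # "true" population covariance (made up)

for n in (10, 100, 10_000):
    X = rng.multivariate_normal(np.zeros(2), Sigma, size=n)
    S = np.cov(X, rowvar=False)          # sample covariance from n draws
    print(n, np.abs(S - Sigma).max())    # the error shrinks roughly like 1/sqrt(n)
```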
He should learn how to teach from Gilbert Strang.
Bit rude.
@@aazz7997 but true
Lectures of both professors are awesome. It may be helpful to understand this course if prerequisite courses (18.600, 18.06, 18.100, etc.) are completed first. May also be helpful to study the slides first before listening to the lectures.
He makes PCA way more complicated than it should be, wow...
Most of what he is doing is introducing the linear-operator formalism. The gravy here is this side material, not the most minimal way to explain PCA.
no
Can you share your slides, please?
The lecture slides are available on MIT OpenCourseWare at: ocw.mit.edu/18-650F16. Best wishes on your studies!
Horrible. Don't max out your volume: there's nothing until you get a huge surprise at 1:15.
One of the cameras tracks the lecturer's movement, and it makes me dizzy; the view of the blackboard would be enough. Even in 2016, the camera operator at OCW still hasn't mastered how to record good video lectures.
my computer is so smart
I see why this was made free.
This guy is so cute
No sound?
It has sound... it's just really low. Sorry!
11:00 There is a ghost on the board, in its lower right corner.
Is this really MIT?
Dude needs better erasers
Too unnecessarily complicated. In fact, this is not only unnecessarily complicated but also confusing and counterproductive. However, I should still say that the other lecturers up to this one were better in terms of presentation.
Insane :)
This is really not the quality I expected from MIT; pretty sloppy instructor.
1:13:51 v1 ZULUL
this guy is a complete mess...
He looks rather insecure.
Terrible
Why have you published such a mess? Shame on you!
Audio starts at 1:15