Thanks Andrew and Stanford for putting this out.
K-means clustering: 00:46
Density estimation (prelude to mixture of Gaussians): 16:20
Mixture of Gaussians: 21:16
EM Algorithm: 23:41
I've watched a few videos explaining EM and this is by far the best one. Most of the others forget to explain some of the concepts or terms that are part of the algorithm, but this gives a clear explanation for everything.
Great feedback, thanks for watching!
To choose the right number of clusters, we can use the elbow method: 13:50
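Replying with a quick sketch in case it helps: a minimal elbow-method example using scikit-learn's KMeans (the toy dataset and the range of k are my own illustration, not from the lecture). You plot the inertia (within-cluster sum of squared distances) against k and look for the bend.

```python
# Minimal elbow-method sketch (illustrative data, not from the lecture).
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Toy data with 4 true clusters.
X, _ = make_blobs(n_samples=500, centers=4, random_state=0)

ks = range(1, 10)
inertias = []
for k in ks:
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    inertias.append(km.inertia_)  # within-cluster sum of squared distances

plt.plot(ks, list(inertias), marker="o")
plt.xlabel("number of clusters k")
plt.ylabel("inertia (distortion)")
plt.title("Elbow method: look for the bend")
plt.show()
```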
Can someone please buy Andrew some decent pens with darker ink?
What's not in this lecture? 😮 ML, statistics, information and coding theory 🔥🔥
Thank you so much for making me learn and really understand. You're helping me so much with AI.
uploaded 2 years ago, 80k views, and no comments?
The views are from bots LOL
@yuxiang3147 😂😂😂😂😂😂😂😂 under a Stanford lecture 👍😂🤡
@yuxiang3147 Bots are collecting data to train themselves. Should we compete with them?
Why does Q have a subscript i? Isn't the data i.i.d.?
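My understanding of the CS229 notation (a sketch of the standard derivation, not a quote from the video): the examples are i.i.d., but Q_i is a distribution over the latent variable z^(i) of example i, so each example gets its own Q. Jensen's inequality is applied per example:

```latex
\ell(\theta) = \sum_{i=1}^{m} \log p(x^{(i)};\theta)
             = \sum_{i=1}^{m} \log \sum_{z^{(i)}} Q_i(z^{(i)})\,
               \frac{p(x^{(i)}, z^{(i)};\theta)}{Q_i(z^{(i)})}
          \ge \sum_{i=1}^{m} \sum_{z^{(i)}} Q_i(z^{(i)})
               \log \frac{p(x^{(i)}, z^{(i)};\theta)}{Q_i(z^{(i)})}
% Equality holds when Q_i(z^{(i)}) = p(z^{(i)} \mid x^{(i)};\theta),
% which differs across i because each x^{(i)} is different.
```

So the i.i.d. assumption is about the data, while the subscript on Q_i marks that each example's posterior over its own latent variable is different.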
Awesome 🤩
Thank you so much, prof.
32:30 How is p(x^(i) | z^(i) = j) calculated from the Gaussian? Isn't the probability of a single point 0? What am I missing here?
It's the likelihood, not a probability: you evaluate the Gaussian density at x^(i), which gives a finite density value. In Bayes' theorem those density values appear in both the numerator and the denominator, so the result is a proper probability over j.
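A small sketch of what that E-step computation looks like (toy parameters, my own illustration, not code from the lecture): evaluate each component's Gaussian density at x and normalize with Bayes' rule.

```python
# E-step responsibility sketch for a 2-component Gaussian mixture
# (toy parameters for illustration only).
import numpy as np
from scipy.stats import multivariate_normal

x = np.array([1.0, 0.5])                      # a single data point x^(i)
phi = np.array([0.6, 0.4])                    # mixing weights p(z = j)
means = [np.zeros(2), np.array([2.0, 2.0])]   # component means
covs = [np.eye(2), 0.5 * np.eye(2)]           # component covariances

# p(x | z = j): the Gaussian *density* evaluated at x -- a finite number,
# even though the probability of any single point is 0.
dens = np.array([multivariate_normal(means[j], covs[j]).pdf(x) for j in range(2)])

# Bayes' rule: densities appear in numerator and denominator,
# so the responsibilities w_j = p(z = j | x) sum to 1.
w = phi * dens / np.sum(phi * dens)
print(w)
```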
Awesome
Very clear line of reasoning.
hi
hello random stranger, hope you have a nice day!
You too😂
You should have used different indices for x and z: x^(i) and z^(k), perhaps...
Excellent teacher! I've gone through this chapter three times on different channels, and this teacher took my understanding further than any of them. Thank you!