Wow. Amazing lecture... so much intuition and clarity
technical content starts at 17:10
What is the meaning of "no closed form" in step 2 of Bayesian inference?
It means that it's impossible to write down an analytic solution to the integral on paper. Instead you need to use approximations.
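For example (generic notation, not taken from the lecture itself): step 2 asks for the posterior

$$p(\theta \mid x) = \frac{p(x \mid \theta)\, p(\theta)}{p(x)}, \qquad p(x) = \int p(x \mid \theta)\, p(\theta)\, d\theta .$$

For conjugate pairs like Beta-Bernoulli the evidence integral $p(x)$ has an analytic answer, but for most models it does not, and that is what forces you into approximations such as MCMC or variational inference.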
thank you so much for this amazing lecture
fantastic lecture
Brilliant. Great teacher.
Starts at 7:02
Excellent lecture. However, the video resolution is not good.
1:24:58 Excellent catch there... we assume independence of the data *GIVEN* the parameters. If the data were simply independent, what the heck would we be doing here?
We assume the data are identically distributed; independence of the instances in the data is NOT equivalent to independence between the predictor (input) and the response (output).
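To spell that out in symbols (my notation, not the lecture's): the i.i.d. assumption is conditional on the parameters,

$$p(x_1, \dots, x_N \mid \theta) = \prod_{n=1}^{N} p(x_n \mid \theta),$$

while marginally

$$p(x_1, \dots, x_N) = \int \left[ \prod_{n=1}^{N} p(x_n \mid \theta) \right] p(\theta)\, d\theta$$

does not factorize. That marginal dependence between observations is precisely what lets past data inform the posterior over $\theta$.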
This is a very good lecture, I liked it. But the audio is so out of sync that I had to download the video and adjust it manually.
Thank you
Good explanation, but why is this method called "variational" Bayes?
It comes from the calculus of variations.
Because we are approximating the real but intractable distribution p by optimizing a simpler, tractable distribution q from a family Q. We are therefore varying q over the family Q to best approximate p, which is where "variational" comes from.
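Sketching that in symbols (my notation): variational Bayes solves

$$q^* = \arg\min_{q \in Q} \mathrm{KL}\big(q(\theta) \,\|\, p(\theta \mid x)\big),$$

which, by the identity $\log p(x) = \mathcal{L}(q) + \mathrm{KL}(q \,\|\, p(\theta \mid x))$, is the same as maximizing the evidence lower bound

$$\mathcal{L}(q) = \mathbb{E}_{q}\big[\log p(x, \theta)\big] - \mathbb{E}_{q}\big[\log q(\theta)\big].$$

Because the optimization is over functions (densities q) rather than a fixed set of numbers, the relevant tool is the calculus of variations, hence the name.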