I really like the explanations. PS: It seems there was a very minor error at the end: a bigger sigma_f^2 means we model the problem with larger variance, and if we plug it into the formula on the previous slide, we would "explore" more rather than "exploit", which turns out to work well for this problem. The smoothness is more directly controlled by the analytical form of the kernel k(x, x') (e.g., a Gaussian kernel is smoother than a Laplacian one).
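To make that distinction concrete, here is a minimal NumPy sketch (my own illustration, not from the lecture; the kernel functions and parameter values are assumptions): sigma_f^2 uniformly scales the prior variance, while swapping the Gaussian form for the Laplacian one changes how smooth the sampled functions are.

```python
import numpy as np

def rbf_kernel(x1, x2, sigma_f2=1.0, length=1.0):
    # Gaussian (RBF) kernel: infinitely differentiable, so samples are very smooth
    d = x1[:, None] - x2[None, :]
    return sigma_f2 * np.exp(-0.5 * (d / length) ** 2)

def laplacian_kernel(x1, x2, sigma_f2=1.0, length=1.0):
    # Laplacian (exponential) kernel: non-differentiable at d = 0, so samples are rough
    d = np.abs(x1[:, None] - x2[None, :])
    return sigma_f2 * np.exp(-d / length)

x = np.linspace(0, 5, 200)
rng = np.random.default_rng(0)
for kern, name in [(rbf_kernel, "Gaussian"), (laplacian_kernel, "Laplacian")]:
    # Bigger sigma_f^2 scales the prior variance up; the kernel form sets smoothness.
    K = kern(x, x, sigma_f2=4.0) + 1e-8 * np.eye(len(x))  # jitter for stability
    f = rng.multivariate_normal(np.zeros_like(x), K)
    print(f"{name}: prior sample std = {f.std():.2f}")
```

Both samples have roughly the same spread (set by sigma_f2=4.0), but the Laplacian draw is visibly jagged while the Gaussian one is smooth.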
I have been trying to learn GPs for a few days, and I believe this is the best explanation I have seen so far. Thank you!
The clarity of explanations is great! I will definitely come back to this channel for other topics. Thanks a lot for sharing :)
Good explanation, Dr. Pascal. Really thankful!
The algorithm suggested at the end of the talk (slide 22) seems very similar to GP-UCB!
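For anyone curious about the comparison, here is a hedged sketch of the GP-UCB acquisition rule that comment refers to (my own illustration with scikit-learn, not the slide-22 algorithm itself; `beta` is a hypothetical exploration weight):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def gp_ucb_next(gp, candidates, beta=2.0):
    # Pick the candidate maximizing mean + sqrt(beta) * std:
    # a large posterior std favors "exploring", a large mean favors "exploiting".
    mu, std = gp.predict(candidates, return_std=True)
    return candidates[np.argmax(mu + np.sqrt(beta) * std)]

# Usage: fit on a few observations, then query the most promising point.
X = np.array([[0.0], [1.0], [2.5]])
y = np.sin(X).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(X, y)
candidates = np.linspace(0, 3, 301).reshape(-1, 1)
print("next query:", gp_ucb_next(gp, candidates, beta=2.0))
```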
Finally, I can understand it! Thanks a lot.
It is a pity that the shaky camera movements make it hard to follow the lecture -- otherwise, fantastic work, thumbs up! :)
So x consists of ALL the observations in our dataset?
Yes
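To unpack that "Yes": in the posterior formulas, the Gram matrix K is built from every training input, so each prediction conditions on the whole dataset jointly. A minimal NumPy sketch (my own; the RBF kernel and the noise level sigma_n are assumptions for illustration):

```python
import numpy as np

def rbf(a, b, length=1.0):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def gp_posterior(X, y, Xs, sigma_n=0.1):
    K = rbf(X, X) + sigma_n**2 * np.eye(len(X))  # uses ALL training points
    Ks = rbf(X, Xs)                              # train-vs-test covariances
    alpha = np.linalg.solve(K, y)
    mu = Ks.T @ alpha                            # posterior mean at Xs
    cov = rbf(Xs, Xs) - Ks.T @ np.linalg.solve(K, Ks)
    return mu, cov

X = np.array([0.0, 1.0, 2.0, 3.0])   # every observation enters K
y = np.sin(X)
mu, cov = gp_posterior(X, y, np.array([1.5]))
print("mean:", mu, "std:", np.sqrt(np.diag(cov)))
```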
Very helpful
Nice