ML Tutorial: Gaussian Processes (Richard Turner)

  • Published on 28 Sep 2024
  • Machine Learning Tutorial at Imperial College London:
    Gaussian Processes
    Richard Turner (University of Cambridge)
    November 23, 2016

Comments • 70

  • @Vikram-wx4hg · 1 year ago · +9

    Super tutorial!
    Only wish I could see what Richard is pointing at when he is discussing a slide.

  • @Tobaman111 · 4 years ago · +7

    I've come back to this for years. The visualization in the beginning is always a ray of light. Excellent.

  • @zhou7yuan · 3 years ago · +72

    Motivation: non-linear regression [1:00]
    Gaussian distribution [3:09]
    conditioning [5:55]
    sampling [7:28]
    New visualization [8:51]
    New visualization dimension*5 [10:54]
    dimension*20 [13:06]
    Regression using Gaussians [15:08]
    (conditioning on 4 non-contiguous points) [16:17]
    Regression: probabilistic inference in function space [19:09]
    Non-parametric (∞-parametric) vs Parametric model [20:08]
    (hyper-parameter explain) [23:02]
    Mathematical Foundations: Definition [24:08]
    Mathematical Foundations: Regression [30:48]
    Mathematical Foundations: Marginalisation [34:02]
    Mathematical Foundations: Prediction [36:29]
    What effect do the hyper-parameters have? [41:40]
    short horizontal length-scale [41:58][42:21]
    long horizontal length-scale [42:30][42:41]
    summary [42:58]:
    - l -> horizontal length-scale
    - \sigma^2 controls the vertical scale of the data
    Higher dimensional input spaces [44:06]
    What effect does the form of the covariance function have? [45:20]
    Laplacian covariance function |x1-x2| [46:16]
    Rational Quadratic [46:32]
    Periodic [46:55]
    The covariance function has a large effect [48:12]
    Bayesian model comparison (too sensitive to priors) [48:49]
    Scaling Gaussian Process to Large Datasets [56:04]
    Motivation: Gaussian Process Regression [56:08]
    O(N^3) [57:15]
    idea: summarize dataset by small number (M) pseudo-data [58:38]
    A Brief History of Gaussian Process Approximations [1:02:01]
    approximate generative model, exact inference (simpler model) [1:02:20]
    pseudo-data [1:03:11]
    FITC, PITC, DTC
    (generate pseudo-data; elsewhere data are treated as independent, breaking connections)
    A Unifying View of Sparse Approximate Gaussian Process Regression (2005) [1:04:12]
    (problem of this approach) [1:04:31]
    exact generative model, approximate inference [1:05:59]
    VFE, EP, PP [1:06:27]
    A Unifying View for Sparse Gaussian Process Approximation using ... (2016) [1:07:10]
    EP pseudo-point approximation [1:07:45]
    EP algorithm [1:15:27]
    Fixed points of EP = FITC approximation [1:23:33]
    Power EP algorithm (as tractable as EP) [1:25:05]
    Power EP: a unifying framework [1:25:56]
    How should I set the power parameter ɑ? [1:27:19]
    Deep Gaussian Processes for Regression [1:34:34]
    Pros and cons of Gaussian Process Regression [1:34:35]
    From Gaussian Processes to Deep Gaussian Processes [1:38:26]
    Deep Gaussian Processes [1:41:53]
    Approximate inference for (Deep) Gaussian Processes [1:42:09]
    Experiment: Value function of the mountain car problem [1:42:31]
    Experiment: Comparison to Bayesian neural networks [1:44:15]
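
The "sampling" and "visualization" entries in the outline above all boil down to one operation: draw from a multivariate Gaussian whose covariance matrix comes from a kernel evaluated on a grid of inputs. A minimal numpy sketch of that idea (an editorial illustration, not the speaker's code; hyper-parameter names follow the [42:58] summary):

```python
import numpy as np
import matplotlib.pyplot as plt

# Grid of input locations: one random variable per point.
x = np.linspace(-5, 5, 200)

# Squared-exponential covariance between every pair of inputs.
# l is the horizontal length-scale, sigma2 the vertical scale.
l, sigma2 = 1.0, 1.0
K = sigma2 * np.exp(-0.5 * (x[:, None] - x[None, :])**2 / l**2)

# Draw samples from the zero-mean multivariate Gaussian N(0, K).
# The jitter keeps the Cholesky factorisation numerically stable.
L = np.linalg.cholesky(K + 1e-9 * np.eye(len(x)))
samples = L @ np.random.randn(len(x), 3)

plt.plot(x, samples)
plt.title("Samples from a GP prior (squared-exponential kernel)")
plt.show()
```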

  • @julianocamargo6674 · 2 years ago

    Brilliant presentation, thanks!

  • @yeshuip · 2 years ago · +1

    I understood that the variable index corresponds to a variable and we are plotting its values, but then somehow the variable index can take real values, and I lost track of the distances. I didn't understand this concept. Can anyone explain it to me?

  • @sakcee · 1 year ago

    Excellent !!! very clear explanation

  • @niveyoga3242 · 5 years ago · +1

    Awesome explanation!

  • @GGasparis7 · 4 years ago · +1

    amazing video, thank you very much

  • @mathewspeter1274 · 5 years ago · +3

    Great explanation. Thank you.
    Are the presented slides available for download as PPT or PDF?
    Which tool/script is used to generate the contour plots and the blue-coloured prediction plots? Is it the scikit-learn Python library?

    • @ret2666 · 5 years ago · +7

      Slides for this and similar presentations are here: cbl.eng.cam.ac.uk/Public/Turner/Presentations

    • @chenxin4741 · 5 years ago · +1

      Perfect slides for GP

    • @monsume123 · 5 years ago · +1

      @@ret2666 Hello Richard, first off, amazing explanation of the origins and motivations of Gaussian processes. I was wondering whether there might have been a notation mix-up on the slide at 22:10 (slide 15). Since K(x1,x2) with a scalar x is also a scalar in the final covariance Sigma(x1,x2) = K(x1,x2) + I sigma^2_y, maybe you originally differentiated between element-wise covariances such as k(x1,x2) and the matrix collection of element-wise covariance functions K(x1,x2), so that element K_12 is K_12 = k(x1,x2) = exp... ?

    • @ret2666 · 5 years ago · +1

      @@monsume123 Thanks for the comment. You're right that I should have written this as: Sigma(x1,x2) = K(x1,x2) + I(x1,x2) sigma^2_y, and explained that I(x1,x2) is a function that is 1 when x1=x2 and zero otherwise. Hope that clarifies things.
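
In code, the corrected formula just says that the observation noise sits only on the diagonal of the covariance. A minimal numpy sketch of the distinction between the element-wise k(x1,x2) and the matrix K (an editorial illustration, with an assumed noise level, not code from the lecture):

```python
import numpy as np

def k(x1, x2, l=1.0, sigma2=1.0):
    # Element-wise squared-exponential covariance k(x1, x2).
    return sigma2 * np.exp(-0.5 * (x1 - x2)**2 / l**2)

def Sigma(X, sigma2_y=0.1):
    # Matrix of element-wise covariances, K_ij = k(x_i, x_j),
    # plus I(x1, x2) * sigma^2_y: noise is added only where
    # x1 == x2, i.e. on the diagonal.
    K = k(X[:, None], X[None, :])
    return K + sigma2_y * np.eye(len(X))

X = np.array([-1.0, 0.0, 1.0])
print(Sigma(X))
```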

    • @saikabhagat · 4 years ago

      @@ret2666 The best explanation on the web by far. Thanks for the link. Somehow it seems unavailable. Is there an alternative location? Truly appreciate your attention.

  • @yeshuip · 2 years ago

    Hello, can anyone provide the code, please?
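
Since several commenters ask for code: the prediction equations from the "Mathematical Foundations" part of the tutorial fit in a few lines of numpy. A minimal sketch (an editorial illustration, not the speaker's code; the toy data and hyper-parameter values are assumptions):

```python
import numpy as np

def sq_exp(A, B, l=1.0, sigma2=1.0):
    # Squared-exponential covariance between two sets of 1-D inputs.
    return sigma2 * np.exp(-0.5 * (A[:, None] - B[None, :])**2 / l**2)

def gp_predict(X, y, Xs, l=1.0, sigma2=1.0, noise=0.1):
    # Standard GP regression equations:
    #   mean = K*  (K + noise*I)^-1 y
    #   cov  = K** - K* (K + noise*I)^-1 K*^T
    K   = sq_exp(X, X, l, sigma2) + noise * np.eye(len(X))
    Ks  = sq_exp(Xs, X, l, sigma2)
    Kss = sq_exp(Xs, Xs, l, sigma2)
    alpha = np.linalg.solve(K, y)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, cov

# Toy data: a noisy sine, like the lecture's running example.
X = np.linspace(-3, 3, 10)
y = np.sin(X) + 0.1 * np.random.randn(10)
Xs = np.linspace(-5, 5, 100)
mean, cov = gp_predict(X, y, Xs)
std = np.sqrt(np.diag(cov))   # for the shaded +/- 2 std band
```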

  • @forheuristiclifeksh7836 · 1 year ago

    52:33

  • @dewinmoonl · 5 years ago · +100

    one of the best GP explanations. People have gotten me lost horribly with "too much math" without properly motivating the problems to begin with. This explanation is to the point, and the math is exactly the same in the end, just presented in a much better way.

    • @priyamdey3298 · 3 years ago · +2

      absolutely! The motivation couldn't have been any better, to say the least.

  • @ncsquirll · 6 years ago · +77

    really great video. one of the best GP explanations on the web.

  • @Benedetissimo · 6 years ago · +30

    The inherent beauty of Gaussian Processes, as well as the clarity of the explanation left me utterly impressed. Thank you so much for uploading!

  • @johnkrumm9653 · 4 years ago · +12

    Wow, that was a great explanation of GPs! Thank you for making it so clear. You should tour around giving this lecture in huge stadiums. I'd buy the t-shirt! :-)

  • @ryankortvelesy9402 · 4 years ago · +7

    51:20 yo dawg I heard you like gaussians so I put an infinite gaussian in your infinite gaussian

  • @ponyta7 · 5 years ago · +9

    Wonderful video, thank you deeply for this. From Seoul.

  • @TheAIEpiphany · 3 years ago · +3

    It'd be nice to hear about some real-world applications of (deep) GPs. We saw their performance on toy datasets compared to similarly-sized NNs. If you threw in bigger NNs, I'd assume they'd improve quite trivially; not sure whether that's the case with deep GPs (I might be wrong, I'm no expert on GPs).
    So far I've seen GPs used only obscurely: somebody uses a GP to figure out a small set of hyperparams. One prominent example is the AlphaGo Zero paper: there is a single sentence in the "Methods" section mentioning that they used it to tune MCTS's hyperparams. Whether that was even necessary is not at all clear from the paper, so I'm still looking for a use-case where GPs are definitely the right thing to do. I'd love to hear some examples if you know of them!
    Thanks for the lecture! I found the first part especially useful!

  • @parthasarathimukherjee7020 · 4 years ago · +5

    How are they assuming that the covariance matrix (similarity between dimensions) is the same as the kernel matrix (similarity between data points)?

    • @ganeshsk106 · 4 years ago

      Hi Partha, I have the same confusion. Were you able to understand this? Also, from 56:10 in the video, he starts saying that they have a collection of inputs (X) and the respective ground truths (Y). So the prior assumption is that the data were generated using the *Squared Exponential Kernel*. If my understanding is right, the data are 1-D and with N data points the kernel matrix will be N x N. Is that right?

    • @zakreynolds5472 · 1 year ago

      @@ganeshsk106 I am having the same confusion. If anyone could explain this it would really help me out!
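
One way to reconcile the two views (an editorial note, not from the video): a GP attaches one random variable f(x_i) to each input location, so the N x N kernel matrix built from input similarities *is* the covariance matrix of the random vector (f(x_1), ..., f(x_N)); the two objects coincide by construction. A small numpy sketch:

```python
import numpy as np

# Each input location x_i indexes one random variable f(x_i).
X = np.array([0.0, 0.5, 2.0])   # N = 3 data points (1-D inputs)

# The N x N kernel matrix is the covariance matrix of the random
# vector (f(x_1), ..., f(x_N)): similarity of the inputs defines
# the covariance of the corresponding function values.
K = np.exp(-0.5 * (X[:, None] - X[None, :])**2)
print(K.shape)   # (3, 3): N x N, as ganeshsk106 says
```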

  • @kianacademy7853 · 10 months ago

    The Rational Quadratic kernel has a |x1-x2|^2 term, not |x1-x2|.

  • @7andromeda · 3 years ago · +1

    Not sure how he goes from the variable index on the x-axis to data points on the x-axis in the visualizations. What is X at 20:20? Is each point of X a data instance, or a single feature value? I guess this X is just one-dimensional.

  • @zitafang7888 · 1 year ago

    Thanks for your explanation. May I ask where I can download the slides?

  • @bernamdc · 3 years ago · +1

    At 14:29, why is the 3rd point above the 2nd point? I would expect it to be slightly below, as it is very correlated with point 2 and a bit correlated with point 1

  • @DVDPlayer18 · 1 day ago

    videomark 33:30

  • @heyjianjing · 2 years ago · +2

    By far the best introduction to GP, thank you Prof. Turner!

  • @appliedstatistics2043 · 11 months ago

    Does anyone know where to download the slides?

  • @DVDPlayer18 · 2 days ago

    16:41

  • @ethantao9249 · 4 years ago · +2

    super clear explanation. Thank you so much!

  • @tumitran · 5 years ago · +3

    So nice that they give credit to the earlier paper.

  • @IslamEldifrawi · 2 years ago · +1

    This is the best GP explanation I have seen till now. Great job!!!

  • @balalaika678 · 4 years ago · +1

    Best source I could find on YouTube, very clear and precise explanations! After this, the equations from a book are much easier to understand!

  • @saikabhagat · 4 years ago · +2

    absolutely amazing! Thank you!

  • @Jononor · 2 years ago

    Does anyone have some insight into how this relates to the Radial Basis Function (RBF) kernel, as used in SVMs, for example?
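
They are the same function under different names (an editorial note): the squared-exponential covariance used throughout the lecture is exactly the RBF kernel from SVMs. A GP uses it as a covariance between function values; an SVM uses it as a similarity inside the decision/regression function. A hedged scikit-learn sketch of the two roles (parameter values are illustrative assumptions):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.svm import SVR

X = np.linspace(-3, 3, 20).reshape(-1, 1)
y = np.sin(X).ravel()

# Same kernel function, two different roles:
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(X, y)
svr = SVR(kernel="rbf", gamma=0.5).fit(X, y)   # gamma = 1/(2 l^2)
```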

  • @lahaale5840 · 3 years ago

    Does GP only work on super simple data like y = sin(x) + N()? In my experience, even a simple model like linear regression can beat GP on real-world data.

  • @zakreynolds5472 · 1 year ago

    Thanks, this presentation has been really useful, but I am a little stuck and have a question. In the first portion of the presentation the covariance function is used to show correlation between random variables (x-axis = variable index), but from there on it seems to be used to compare values within the same variable (from X in bold on the axis to lower-case x). I appreciate that this is a difference between multivariate and univariate (I think?), but could you please elaborate?

  • @Ivan-td7kb · 5 years ago · +3

    Incredible explanation!

  • @zacharythatcher7328 · 4 years ago · +1

    Can someone explain what is actually being done at 43:30? I understand that you are maximizing the likelihood of getting your outputs, y, given some inputs by varying sigma and l. But what is the output you are optimizing for? The function at every point other than the known ones?

    • @ianmoore957 · 4 years ago

      Spatially, I like to think of it as a 3D surface with l, sigma^2, and log p(y|theta) as the axes, theta being your parameter set [l, sigma^2]. The surface has a peak (the maximum of log p(y|theta)); if you take that peak and project it down onto a point [l*, sigma^2*] in the (l, sigma^2) plane, you have the estimates of your parameters l and sigma^2.

    • @MayankGoel447 · 1 year ago · +1

      I guess over all the possible outputs y: whichever y has the highest probability, you take the corresponding l, sigma^2.
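
More precisely, the quantity maximised at 43:30 is the log marginal likelihood of the *observed* training outputs y under the GP prior plus noise, viewed as a function of the hyper-parameters; the unknown function itself is integrated out. A minimal numpy sketch of that objective (an editorial illustration assuming a zero-mean, squared-exponential GP):

```python
import numpy as np

def log_marginal_likelihood(X, y, l, sigma2, noise):
    # log p(y | theta) for a zero-mean GP:
    #   -1/2 y^T K^-1 y - 1/2 log|K| - N/2 log(2 pi),
    # where K includes observation noise on the diagonal.
    N = len(X)
    K = sigma2 * np.exp(-0.5 * (X[:, None] - X[None, :])**2 / l**2)
    K += noise * np.eye(N)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))    # = -1/2 log|K|
            - 0.5 * N * np.log(2 * np.pi))
```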

  • @michaelwangCH · 3 years ago

    I listened to lots of explanations of Gaussian processes in lecture halls during my studies; your demo is the best one I ever saw. Thanks, Marc.

  • @vmt4gator · 5 years ago · +1

    great class. Thank you very much

  • @ardeshirmoinian · 4 years ago

    Does anyone know of a good description of learning the hyperparameters using k-fold CV?
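
One practical route (an editorial sketch, not from the lecture): scikit-learn's GaussianProcessRegressor is a standard estimator, so you can fix the kernel hyper-parameters (optimizer=None) and let k-fold cross-validation choose among candidate length-scales; the candidate grid and noise level below are assumptions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import GridSearchCV

X = np.linspace(-3, 3, 40).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * np.random.randn(40)

# optimizer=None keeps the kernel fixed, so k-fold CV (rather than
# the marginal likelihood) selects the length-scale.
search = GridSearchCV(
    GaussianProcessRegressor(optimizer=None, alpha=0.01),
    param_grid={"kernel": [RBF(length_scale=l)
                           for l in (0.1, 0.5, 1.0, 2.0, 5.0)]},
    cv=5,
)
search.fit(X, y)
print(search.best_params_)
```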

  • @maddoo23 · 2 years ago

    At 45:30, the covariance of Brownian motion is cov(B_s, B_t) = min(s, t), right?
    And not what's given on the slide..

    • @ret2666 · 2 years ago

      See here for the sense in which this is Brownian motion: en.wikipedia.org/wiki/Ornstein-Uhlenbeck_process
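
For readers comparing the two (an editorial note; both are standard results): standard Brownian motion is non-stationary with the min(s, t) covariance, while the exponential/Laplacian kernel on the slide is the covariance of the stationary Ornstein-Uhlenbeck process, which behaves like Brownian motion locally:

```latex
% Standard Brownian motion (non-stationary):
\operatorname{cov}(B_s, B_t) = \min(s, t)

% Stationary Ornstein--Uhlenbeck process (exponential kernel):
\operatorname{cov}(X_s, X_t) \propto e^{-|s - t|/\ell}
```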

  • @airindutta1094 · 2 years ago

    Best GP visualization and explanation I have ever seen.

  • @0929zhurong · 2 years ago

    The best GP explanation, amazingly done

  • @Nunocesarsa · 4 years ago · +1

    epic class!

  • @jinyunghong · 5 years ago · +1

    Great video :)

  • @pattiknuth4822 · 3 years ago · +2

    This video was in many cases INCREDIBLY annoying. Students would ask questions that were not loud enough to understand. Turner didn't repeat the questions, so you have no idea what was asked. Sometimes these questions were long, so you get long gaps in the audio. Pro tip: if you're going to allow questions during a lecture, repeat each question so everyone else knows what was asked and the answer then means something.

  • @GauravJoshi-te6fc · 1 year ago

    Woah! Amazing explanation.

  • @o0BluMenTopfErde0o · 3 years ago

    Now it all comes together! ("jetzt wird ein Schuh draus")

  • @norkamal7697 · 2 years ago

    The best GP explanation evaaa

  • @CppExpedition · 1 year ago

    WOOOOOOOOOOOOOOOW you blow my mind! 🤯

  • @sathya_official3843 · 3 years ago

    Awesome! Totally worth the time

  • @mario7501 · 4 years ago

    I wish I had found this video earlier. It took coding up an example similar to yours, working through the equations myself, to get an intuition of what's going on.

    • @yode8 · 3 years ago

      Any advice, resources, or papers? I feel like I generally understood what was happening in the video, but not everything: for example, some of the covariance function equations, and also the EP example where he mentioned KL divergence. I am beginning to study GPs for my dissertation but some of the notation and literature is hard to understand. Thanks.

  • @redberries8039 · 3 years ago

    this is nicely done

  • @apbosh1 · 3 years ago

    What practical use have you made of this, apart from teaching it? My head exploded about one minute in. Clever stuff!