3.3.3 Bayesian Linear Regression: Equivalent Kernel (with Code!) - Pattern Recog. & Machine Learning

  • Published on 13 Jan 2025

Comments • 6

  • @kH-ul4hk • several months ago • +1

    I just had an exam on GPs by Carl Rasmussen himself, such a cool concept!

    • @sinatootoonian • several months ago • +1

      He wrote the book on the topic :) Hope the exam went well!

  • @chuticabj • several months ago

    Have you thought about opening a Discord server? I think it would be really helpful.
    I'm personally starting to read the book today, but these videos you're uploading are gold.
    Thanks.

    • @sinatootoonian • several months ago • +2

      That's a great idea, I'll do it! I haven't done that before, so will probably need some fine-tuning from you guys :)

  • @sempercrescere6274 • 26 days ago

    So far, this discussion assumes a prior mean of zero. I think this is quite restrictive; often in the real world we have a strong intuition about what the mean of our features should look like.
    Just wondering whether all of the discussion and formulas (e.g. those for the equivalent kernel) would still apply if we stray from the zero-mean and isotropic-variance assumptions?

    • @sinatootoonian • 24 days ago

      Thanks for the question! The equivalent kernel has the same form, just with S_0 baked into S_N: the posterior covariance becomes S_N^{-1} = S_0^{-1} + beta * Phi^T Phi. Interpolation with the equivalent kernel is then adjusted by offsetting each training target by the prior mean's prediction at that input. See my derivations here: sinatootoonian.com/index.php/2024/12/20/the-equivalent-kernel-for-non-zero-prior-mean/. As for the rest of the discussion, it's an exercise for the reader :)
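
To make that reply concrete, here is a minimal numerical sketch (an illustration under assumed settings, not code from the video or the linked post). It builds the posterior for a general Gaussian weight prior N(m0, S0), forms the equivalent kernel k(x, x') = beta * phi(x)^T S_N phi(x'), and checks that kernel interpolation with the offset targets matches the direct posterior-mean prediction. The Gaussian basis functions, the values of beta, m0, and S0, and the toy data are all illustrative assumptions.

```python
import numpy as np

# Bayesian linear regression with a general Gaussian weight prior
# N(m0, S0) and known noise precision beta (PRML-style notation).
# All concrete values below are illustrative assumptions.

def phi(x, centers, s=0.1):
    """Feature map: a bias term plus Gaussian basis functions."""
    x = np.atleast_1d(x)
    bumps = np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * s ** 2))
    return np.hstack([np.ones((len(x), 1)), bumps])

rng = np.random.default_rng(0)
beta = 25.0                                  # noise precision (assumed known)
centers = np.linspace(0, 1, 9)

# Toy data: noisy sine observations
x_train = rng.uniform(0, 1, 20)
t_train = np.sin(2 * np.pi * x_train) + rng.normal(0, beta ** -0.5, 20)
Phi = phi(x_train, centers)                  # N x M design matrix

# Nonzero prior mean and non-isotropic prior covariance
M = Phi.shape[1]
m0 = 0.5 * np.ones(M)
S0 = np.diag(np.linspace(0.5, 2.0, M))

# Posterior covariance: S_N^{-1} = S_0^{-1} + beta * Phi^T Phi,
# so S_0 is "baked into" S_N, exactly as in the zero-mean case.
SN = np.linalg.inv(np.linalg.inv(S0) + beta * Phi.T @ Phi)

def equivalent_kernel(xa, xb):
    """k(x, x') = beta * phi(x)^T S_N phi(x')."""
    return beta * phi(xa, centers) @ SN @ phi(xb, centers).T

x_test = np.linspace(0, 1, 5)

# (1) Direct prediction from the posterior mean
#     m_N = S_N (S_0^{-1} m0 + beta * Phi^T t)
mN = SN @ (np.linalg.inv(S0) @ m0 + beta * Phi.T @ t_train)
y_direct = phi(x_test, centers) @ mN

# (2) Equivalent-kernel prediction with offset targets:
#     y(x) = phi(x)^T m0 + sum_n k(x, x_n) * (t_n - phi(x_n)^T m0)
prior_pred = Phi @ m0                        # prior mean's prediction at each x_n
y_kernel = (phi(x_test, centers) @ m0
            + equivalent_kernel(x_test, x_train) @ (t_train - prior_pred))

assert np.allclose(y_direct, y_kernel)       # the two forms agree
print(y_direct)
```

Setting m0 = 0 removes the offset and recovers the familiar zero-mean smoother y(x) = sum_n k(x, x_n) t_n from the video.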