Gibbs Sampling: Data Science Concepts

  • Published Sep 8, 2024
  • Another MCMC method. Gibbs sampling is great for multivariate distributions whose conditional densities are easy to sample from.
    To emphasize a point in the video:
    - First sample is (x0, y0)
    - Next sample is (x1, y1)
    - Next sample is (x2, y2)
    ...
    That is, we update all variables once to get a new sample (see the sketch below).
    Intro MCMC Video : • Markov Chain Monte Car...
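
    As an illustration (not code from the video), here is a minimal NumPy sketch of this sweep for the bivariate standard normal example, where each conditional is N(ρ · other, 1 − ρ²):

    ```python
    import numpy as np

    def gibbs_bivariate_normal(rho, n_samples=10_000, seed=0):
        """Gibbs sampler for a standard bivariate normal with correlation rho."""
        rng = np.random.default_rng(seed)
        x, y = 0.0, 0.0                  # arbitrary starting point (x0, y0)
        sd = np.sqrt(1.0 - rho**2)       # conditional standard deviation
        samples = np.empty((n_samples, 2))
        for i in range(n_samples):
            x = rng.normal(rho * y, sd)  # x_{i+1} ~ p(x | y_i)
            y = rng.normal(rho * x, sd)  # y_{i+1} ~ p(y | x_{i+1})
            samples[i] = x, y            # one full sweep = one new sample
        return samples

    samples = gibbs_bivariate_normal(rho=0.8)
    print(np.corrcoef(samples[1000:].T))  # empirical correlation ≈ 0.8 after burn-in
    ```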

Comments • 73

  • @Adam-ec9dk
    @Adam-ec9dk 3 years ago +72

    I like that you wrote all the major points on the board and fit everything into one slide. Super easy to take a screenshot so I can remember the gist of the video.

    • @ritvikmath
      @ritvikmath  3 years ago +10

      Glad it was helpful!

  • @ResilientFighter
    @ResilientFighter 3 years ago +37

    Ritvik, your videos rank at the top when a person searches "metropolis hastings" and "gibbs sampling". Great job man!

  • @rahulchowdhury9739
    @rahulchowdhury9739 1 year ago +2

    You're one of the best teachers of statistics. Thanks for taking the time to share the way you understand theories and problems.

  • @md.salahuddinparvez6578
    @md.salahuddinparvez6578 2 months ago

    In our master's course on Pattern Analysis at one of the top-ranking universities in Germany, the professor actually put a link to this video in the slides. And after watching the video, I understand why. You have done a great job explaining, thank you!

  • @musclesmalone
    @musclesmalone 3 years ago +9

    Fantastic, concise explanation and excellent visualisations. It's also very appreciated that everything is written prior to recording, so there aren't thousands of people (and in some cases millions) waiting while watching you draw a graph or write a formula. Huge appreciation for your work, thank you!

  • @Ciavi-ar
    @Ciavi-ar 7 months ago +1

    This did actually help to finally wrap my brain around this topic. Thanks!

  • @DhruveeChauhan
    @DhruveeChauhan 1 year ago +1

    You are literally saving us one day before an exam!

  • @rmb706
    @rmb706 6 months ago

    I had to write a Gibbs sampler for my Bayes midterm. That moment when I checked it with PyMC and it was spot on first attempt just felt amazing. 🎉 🔥

  • @des6309
    @des6309 3 years ago +1

    dude you're so talented at explaining

  • @user-me9mw5oc7u
    @user-me9mw5oc7u 3 years ago +2

    Thanks, you are soooooo good at explaining. I will recommend that my professor take a look at your videos.

  • @bachi5373
    @bachi5373 1 month ago

    What a very clear explanation. Thanks a lot!

  • @shuangli5466
    @shuangli5466 10 months ago

    Thank you for giving me probably 15 marks on my exam and lowering my probability of failing from 10% to 5%.

  • @christophersolomon633
    @christophersolomon633 3 years ago +1

    Excellent video - wonderfully clear.

    • @ritvikmath
      @ritvikmath  3 years ago

      Glad it was helpful!

  • @AdrianYang
    @AdrianYang 3 years ago

    Thank you for your video, Ritvik. Can I understand it this way: searching within a multi-dimensional space is difficult because there are infinitely many choices of direction, but by fixing all the other dimensions and leaving only one movable, the search within a one-dimensional space becomes super easy because there are only two choices of direction?

  • @MirGlobalAcademy
    @MirGlobalAcademy 3 years ago +2

    Simple explanation. Just like spoon-feeding. Good!

    • @ritvikmath
      @ritvikmath  3 years ago +1

      Glad you liked it!

  • @Reach41
    @Reach41 3 years ago +1

    This is one of the few channels left where p(x), with p(1) = Democrat, etc., is not a factor. Now to apply this to LIDAR ranging to produce either a Bayesian occupancy grid or a point cloud. Laser beams expand in diameter and lose energy (in air) as they travel out from the device lens; their intensity varies both with increasing distance and, independently, across the beam as a function of horizontal and vertical beam width.

  • @PeterSmitGroningen
    @PeterSmitGroningen 3 years ago +2

    With the "probably spikes" example, I think a more formal explanation would be "steep gradient" or lack of gradient even. Many approximation techniques have problems with steep or sudden gradients, think neural networks

    • @ritvikmath
      @ritvikmath  3 years ago +1

      thanks for putting a name to it! Indeed, many ML algorithms and stat methods are not happy with quick, unexpected changes.

  • @anushaavyukt6381
    @anushaavyukt6381 2 years ago +3

    Hi Ritvik, thanks for such a clear explanation. Would you please make a video on the EM algorithm? I saw a lot of videos on it and understand the basics, but I'm not sure how to implement it for any problem. Thanks a lot.

  • @senyaisavnina
    @senyaisavnina 2 years ago

    This high-density bubble is like a supermassive black hole: once you get there, you'd never get out :)

  • @salahlaaroussi9896
    @salahlaaroussi9896 2 years ago +1

    really well explained. Nice job!

  • @vitorsantana2795
    @vitorsantana2795 2 years ago

    You just saved my ass so hard right now. Thanks a lot

  • @adamtran5747
    @adamtran5747 2 years ago

    absolutely love the content brother. Please keep up the amazing work.

  • @thename305
    @thename305 1 year ago

    Excellent video, your explanation was clear and helpful!

  • @Mv-pp7is
    @Mv-pp7is 1 year ago

    This is incredibly helpful, thank you!

  • @mikeshin77
    @mikeshin77 1 year ago

    Fantastic and easy explanation. I like the way you explain!

    • @ritvikmath
      @ritvikmath  1 year ago

      Glad it was helpful!

  • @aalailayahya
    @aalailayahya 3 years ago +1

    Great video, keep up the work, I love it!

  • @squidgeypea
    @squidgeypea 2 years ago

    Thank you! Your videos are all really helpful and well explained.

  • @RollingcoleW
    @RollingcoleW 2 years ago

    Thank you! I am a hobbyist and this is helpful.

  • @monicamilagroshuaytadurand2076
    @monicamilagroshuaytadurand2076 2 years ago

    Thank you very much! Your explanation helped me a lot!

  • @dddd-ci2zm
    @dddd-ci2zm 3 years ago

    Thank you! I finally understand it now!

  • @filosofiadetalhista
    @filosofiadetalhista 2 years ago

    Tight video. Thanks!

  • @praveenkumarkazipeta
    @praveenkumarkazipeta 1 year ago

    this post is awesome, keep going

  • @tsen6367
    @tsen6367 1 year ago +2

    Hello sir. First things first, I want to say thank you very much for your incredible explanations in your videos.
    I am currently working on my thesis, which uses a hierarchical Bayesian method, but I am still confused and don't understand how to determine the right prior for my data. If you don't mind and have free time, could I discuss it with you through social media? I really need someone to guide me🙏 Thank you very much in advance, sir.

  • @mrocean1293
    @mrocean1293 3 years ago

    Great explanation, love it!

  • @eduardo.garcia
    @eduardo.garcia 2 years ago

    Thanks a lot for all your videos!!! Please do Hamiltonian Monte Carlo next :D

  • @AleeEnt863
    @AleeEnt863 1 year ago

    A big thanks!

  • @snehanjalikalamkar2268
    @snehanjalikalamkar2268 1 year ago +4

    Hey Ritvik, your videos are very helpful; I learned a lot from them.
    Could you also provide some references for points that you don't cover (mostly the prerequisites)?
    In this video, I couldn't figure out why p(x|y) = N(ρy, 1 - ρ²). Could you please provide a reference for this?
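
    For reference, this is the standard bivariate normal conditional. Assuming zero means and unit variances (as in the video), completing the square in x gives:

    ```latex
    % joint density of a standard bivariate normal with correlation rho
    f(x, y) \propto \exp\!\left( -\frac{x^2 - 2\rho x y + y^2}{2(1 - \rho^2)} \right)
    % fix y and complete the square in x; the y-only factor cancels in f(x|y)
    f(x \mid y) \propto \exp\!\left( -\frac{(x - \rho y)^2}{2(1 - \rho^2)} \right)
    % which is exactly the normal density
    X \mid Y = y \;\sim\; \mathcal{N}\!\left( \rho y,\; 1 - \rho^2 \right)
    ```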

  • @shirleygui6533
    @shirleygui6533 1 year ago

    so clear

  • @prof1math
    @prof1math 3 years ago +1

    Great explanation, keep it up, thanks!

    • @ritvikmath
      @ritvikmath  3 years ago

      Thanks, will do!

  • @Abhilashaisgood
    @Abhilashaisgood 3 months ago

    amazing

  • @Alexander-pk1tu
    @Alexander-pk1tu 2 years ago

    thank you! Very good video

  • @marcoantoniocoutinho
    @marcoantoniocoutinho 3 years ago +2

    Great video, thanks. How can I conceptually or intuitively connect Gibbs sampling with modeling the variables as a Markov chain, given that I'm building the samples from their conditional probabilities?

  • @MoodyMooMoo
    @MoodyMooMoo 11 months ago

    Thanks!

  • @cleansquirrel2084
    @cleansquirrel2084 3 years ago +4

    i'm watching

  • @rodrigoaguilardiaz
    @rodrigoaguilardiaz 20 days ago

    One question: what if I have no idea of the correlation between the variables, and actually that's the thing I want to find? Can I combine this method with the Metropolis algorithm to find the values of mu and sigma and calculate ρ every iteration, or something like that?
    Thanks!
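
    This is the usual "Metropolis-within-Gibbs" idea: any coordinate (here the parameter ρ) whose full conditional can be evaluated but not sampled directly gets a Metropolis step inside the Gibbs sweep. A hypothetical sketch, assuming unit variances and a flat prior on ρ (none of this is from the video):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def metropolis_step(current, log_density, step=0.1):
        """One random-walk Metropolis update for a coordinate whose
        full conditional we can evaluate but not sample from."""
        proposal = current + rng.normal(0.0, step)
        log_alpha = log_density(proposal) - log_density(current)
        return proposal if np.log(rng.uniform()) < log_alpha else current

    def log_rho_conditional(rho, x, y):
        """Log conditional of rho given data arrays (x, y), for a standard
        bivariate normal likelihood with a flat prior on (-1, 1)."""
        if not -1.0 < rho < 1.0:
            return -np.inf
        quad = np.sum(x**2 - 2.0 * rho * x * y + y**2) / (1.0 - rho**2)
        return -0.5 * len(x) * np.log(1.0 - rho**2) - 0.5 * quad

    # Inside each Gibbs sweep: update x and y from their normal conditionals
    # as in the video, then refresh rho with one Metropolis step:
    #   rho = metropolis_step(rho, lambda r: log_rho_conditional(r, x, y))
    ```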

  • @AshokKumar-lk1gv
    @AshokKumar-lk1gv 3 years ago +2

    nice

  • @LL-lb7ur
    @LL-lb7ur 2 years ago

    Thank you for the video. What real-life problems can you use Gibbs sampling for, and what do you get at the end of sampling?

  • @princessefleure8360
    @princessefleure8360 3 years ago

    Thank you so much for this video, it helps me a lot!
    I just had a question: if I understood correctly, with 3 variables we have to calculate p(x|(y,z)).
    But how do we know the "ρ" in this case? I guess we need a 3×3 covariance matrix.
    Have a good day!
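
    For reference, the general fact behind this: every conditional of a multivariate normal is again normal, with mean and covariance read off from the partitioned covariance matrix (the ρ formulas in the video are the 2-D special case with zero means and unit variances):

    ```latex
    % partition z = (z_1, z_2), mu = (mu_1, mu_2),
    % Sigma = [ Sigma_{11}  Sigma_{12} ; Sigma_{21}  Sigma_{22} ]
    z_1 \mid z_2 \;\sim\; \mathcal{N}\!\big(
        \mu_1 + \Sigma_{12} \Sigma_{22}^{-1} (z_2 - \mu_2),\;
        \Sigma_{11} - \Sigma_{12} \Sigma_{22}^{-1} \Sigma_{21} \big)
    ```

    So for three variables, p(x | y, z) comes from applying this with z_1 = x, z_2 = (y, z), and the 3×3 covariance matrix.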

  • @Gasgar800
    @Gasgar800 3 years ago

    Sick! Thanks

  • @shahf13
    @shahf13 3 years ago +2

    Great channel! Can you do a video about autoencoders?

    • @ritvikmath
      @ritvikmath  3 years ago +2

      good suggestion!

    • @ResilientFighter
      @ResilientFighter 3 years ago

      @@ritvikmath i would like this also

  • @leohsusolid
    @leohsusolid 2 years ago

    Great videos! They make the concepts very clear! Thank you!
    I have a question about the correction: after sampling (X0, Y0), how can we sample (X1, Y1)? In other words, what is the condition when we change both? Or do we just sample X1 and Y1 respectively?

    • @leohsusolid
      @leohsusolid 2 years ago

      The other question is: if we go from (X0, Y0) to (X1, Y1), then we don't face the "probability spike" situation, do we?

    • @apah
      @apah 2 years ago

      The reason he made the correction is that what we call a sample is (xi, yi). Therefore an iteration of Gibbs is the update of both variables with the method he gave: sampling x1 given y0, then y1 given x1.

    • @leohsusolid
      @leohsusolid 2 years ago

      @@apah Thank you for replying to me!
      Do you mean that we sample (X1, Y1), but within that sample there is an order: X1 first, given Y0, then Y1, given X1?

    • @apah
      @apah 2 years ago

      @@leohsusolid My pleasure! Exactly, starting with either one is fine. As I said earlier, a sample is by definition the pair (Xi, Yi). The point of Gibbs sampling is to find a way to make these samples grow closer and closer to samples drawn from the actual distribution P(X, Y). And the method to do so is to alternately sample from the conditional distributions.

  • @chainonsmanquants1630
    @chainonsmanquants1630 3 years ago

    Am I right in saying that Gibbs sampling is possible only when you know the marginal probability distribution for each variable?

  • @vs7185
    @vs7185 2 years ago

    Is there no accept/reject step here, like in Metropolis-Hastings or rejection sampling?
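
    For what it's worth, the standard answer is that a Gibbs update is a Metropolis-Hastings move whose proposal is the full conditional itself, so the acceptance ratio cancels to 1 and every proposal is accepted:

    ```latex
    % MH acceptance for target p(x, y) and proposal q(x' | x, y) = p(x' | y):
    \alpha = \min\!\left( 1,\;
        \frac{p(x', y)\, p(x \mid y)}{p(x, y)\, p(x' \mid y)} \right)
      = \min\!\left( 1,\;
        \frac{p(x' \mid y)\, p(y)\, p(x \mid y)}{p(x \mid y)\, p(y)\, p(x' \mid y)} \right)
      = 1
    ```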

  • @anondsml
    @anondsml 11 months ago

    Do you offer any tutoring in Bayesian statistics?

  • @juanpabloaguilarcabezas8089
    @juanpabloaguilarcabezas8089 3 years ago +1

    Can you do a video on Hamiltonian Monte Carlo?

    • @paulhowrang
      @paulhowrang 3 years ago

      no 😠 ....no HMC 😶

  • @BreezeTalk
    @BreezeTalk 2 years ago

    Please show a code implementation

  • @edwardhartz1029
    @edwardhartz1029 2 years ago

    At around 4:30, you started at (x0, y0), but then the value of x0 was never used. Why is this?

    • @vs7185
      @vs7185 2 years ago

      I am thinking you can use either one to start the process. If you are using x0, then next you will use p(y1 | x0); if you are using y0, then next you will use p(x1 | y0).

  • @apica1234
    @apica1234 3 years ago

    Could you please explain hands-on?