Probability for Machine Learning!

  • Published Nov 24, 2024

Comments • 20

  • @NicholasRenotte · 1 year ago · +2

    I just binge-watched the whole Probability Theory playlist this morning! Smashed it, Ajay!

    • @CodeEmporium · 1 year ago · +2

      Bwahaha, thanks Nick!

    • @NicholasRenotte · 1 year ago

      @CodeEmporium anytime man, you’re so fluid with your explanations. Inspiring as hell!!

  • @devharal6541 · 1 year ago

    Your videos are so accurate, and the intuition you build by connecting concepts to machine learning is just awesome.

  • @badriveera8941 · 8 months ago · +2

    Great set of videos. One subtle point of clarification: if f_Y(y_i) is a probability density function, then the probability of any one particular house price would be zero, since price is a continuous variable. How do you reconcile that? Appreciate your thoughts on this.

    • @virgenalosveinte5915 · 2 months ago

      Great question, I would also love to hear an explanation.
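
A quick numerical sketch of the question above: a PDF returns a density, not a probability. f_Y(y_i) can be positive even though P(Y = y_i) = 0 for a continuous variable, and probabilities only come from integrating the density over an interval; the likelihood in the video is built from density values, not probabilities. A minimal illustration using scipy.stats.norm, where the mean, spread, and price interval are made-up assumptions, not numbers from the video:

```python
from scipy.stats import norm

# Hypothetical model for a house price in $1000s:
# Y ~ Normal(mean=750, sd=50). All numbers are illustrative only.
price_dist = norm(loc=750, scale=50)

# Density at a single point: a positive number, but NOT a probability.
print(price_dist.pdf(750))   # ~0.00798 (per $1000), fine to be > 0

# The probability of an exact value is zero for a continuous variable:
# P(Y = 750) is an integral over a zero-width interval, i.e. 0.

# Probabilities come from intervals, via the CDF:
print(price_dist.cdf(800) - price_dist.cdf(700))   # P(700 <= Y <= 800) ~ 0.683
```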

  • @CodeEmporium · 2 years ago · +2

    Please check out the accompanying blog post in the description below. For more information on each topic discussed in the video (Random Variables, Probability mass / density functions), please refer to the "Probability Theory for Machine Learning" playlist.
    Video Correction #1: Prices are dependent random variables that depend on number of bedrooms, age and sqft. So from 14:06 onwards, we should see the conditional distribution also depend on the X_ij terms. That said, the overall derivation should remain the same. Hope this helps!
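
In symbols, the corrected model treats each price as conditionally Gaussian given that house's features. A sketch of the intended conditional density, where the notation (beta for the weights, x_i for the feature vector) is assumed here rather than taken verbatim from the video:

```latex
% Corrected model: each price Y_i is conditioned on its feature vector x_i
% (bedrooms, age, sqft); beta and sigma^2 are the model parameters.
Y_i \mid x_i \sim \mathcal{N}\left(x_i^\top \beta,\ \sigma^2\right),
\qquad
f_{Y \mid X}(y_i \mid x_i)
  = \frac{1}{\sqrt{2\pi\sigma^2}}
    \exp\left( -\frac{(y_i - x_i^\top \beta)^2}{2\sigma^2} \right)
```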

  • @chinmayeejoshi2119 · 2 years ago · +1

    This learning series has been excellent. Danke!

    • @CodeEmporium · 2 years ago

      Thanks so much for watching 🎉 :)

  • @rishidixit7939 · 9 days ago

    How to study Probability Theory for Deep Learning?

  • @carsten011640 · 1 year ago · +1

    Hi, your videos in this series have been so useful for my understanding, thank you!
    Could I clarify something please? At 20:00, you say "in reality, all these PDFs can be assumed to be the same... practically meaning that the probability that house #1 is $700-800 is the same for house #2 too, and all other houses".
    I'm wondering whether this is correct. My understanding: the PDFs have the same (Gaussian) shape for every X value, but they centre around a new mean for each X value too. House #1's X values mean it has a certain probability of being $700-800 according to the linear equation's ŷ estimate at that X value, and house #2's X values mean it has a different probability of being $700-800 according to the ŷ estimate at its X value. Is this a correct interpretation?
    Again, thank you so much for this series.

    • @jaxejaxejaxe · 1 year ago · +1

      This assumption is a very high-level assumption, and we don't need to talk about any estimators to understand it. You should think of this assumption as coming before any math is done at all.
      Generally:
      We try to find the best hypothesis/ML-model/prediction-rule from our sample data that can predict well on new data. Therefore, we assume that our sample data has been "given to us" from some unknown distribution. We don't know this distribution, but we have to assume that all the data come from this _same_, unknown distribution.
      In this example, it means that every time a house price is "sampled", it comes from this unknown distribution putting out prices on the houses. This means that for any houses x and z, the probability of them costing any amount ($700-800k, for instance) is the same.
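
A small simulation can reconcile the two readings above: under the usual linear-Gaussian assumption, every house shares the same noise distribution (same shape, same sigma), but each price's PDF is centred at that house's own x_i^T w, so interval probabilities like P($700-800k) differ from house to house. A minimal sketch; the features, weights, and noise scale are all made up:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical features per house: [bedrooms, age, sqft/1000].
X = np.array([[3.0, 10.0, 1.5],
              [4.0,  2.0, 2.1]])
w = np.array([50.0, -1.0, 200.0])   # made-up weights, prices in $1000s
sigma = 50.0                        # one shared noise scale for every house

means = X @ w                       # house-specific means: [440., 618.]

# One "sampled" price per house: shared noise shape, house-specific centre.
prices = means + sigma * rng.standard_normal(len(means))

# Same-shaped Gaussian for each house, centred differently, so the
# probability of landing in $700-800k differs between the two houses:
for i, mu in enumerate(means, start=1):
    p = norm.cdf(800, loc=mu, scale=sigma) - norm.cdf(700, loc=mu, scale=sigma)
    print(f"house {i}: mean {mu:.0f}, P(700 <= Y <= 800) = {p:.4f}")
```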

  • @keren718 · 1 year ago

    I love your series. I wonder why there is a (-1) × in the last derivation.

    • @CodeEmporium · 1 year ago · +1

      Thanks so much. The last term (if derived) will become the squared loss, and this loss needs to be minimized. Minimizing a value is the same as maximizing (-1) × that value.
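
Spelled out, the sign flip is just the identity arg max f = arg min (-f) applied to the Gaussian log-likelihood. A sketch assuming the linear-Gaussian setup from the video; the terms that do not depend on beta drop out:

```latex
\hat{\beta}
  = \arg\max_{\beta} \sum_{i} \log f_{Y \mid X}(y_i \mid x_i)
  = \arg\max_{\beta} \sum_{i}
      \left( -\tfrac{1}{2}\log(2\pi\sigma^2)
             - \frac{(y_i - x_i^\top \beta)^2}{2\sigma^2} \right)
  = \arg\min_{\beta} \sum_{i} \left( y_i - x_i^\top \beta \right)^2
```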

  • @theforthdoctor7872 · 2 years ago

    You forgot to mention the "Bambleweeny 57 Sub-Meson Brain" and the "atomic vector plotter".😉

  • @mustafizurrahman5699 · 2 years ago

    Excellent

    • @CodeEmporium · 2 years ago

      Thanks a ton for tuning in!! :)

  • @badermuteb4552 · 2 years ago · +1

    Is argmax the same as max???

    • @CodeEmporium · 2 years ago · +5

      They are not. Max will return “what is the maximum value of this function”. But arg max is “what is the value of the parameters such that the function is maximized”

    • @vtrandal · 1 year ago

      @CodeEmporium You are very very good with this subject matter. Thank you for making these great videos!
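
To make the max-vs-argmax distinction above concrete, a tiny sketch; the toy objective and parameter grid are hypothetical:

```python
import numpy as np

# Evaluate a toy objective f(theta) on a grid of parameter values.
theta = np.linspace(-3.0, 3.0, 601)   # step 0.01, so the grid contains 1.0
f = -(theta - 1.0) ** 2 + 4.0         # peaks at theta = 1 with value 4

print(np.max(f))             # max:    the maximum VALUE of f     -> 4.0
print(theta[np.argmax(f)])   # argmax: the theta that achieves it -> 1.0
```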