Maximum Likelihood : Data Science Concepts

  • Published 28 Sep 2024
  • The story behind max likelihood .... fully explained!
    Sigmoid Video : • The Sigmoid : Data Sci...
    Logistic Regression Video : • Logistic Regression - ...
    My Patreon : www.patreon.co...

Comments • 62

  • @Aruuuq 7 months ago +3

    Undeniably you're creating some of the best videos concerning statistics out there. And this is another one. Thank you so much

  • @corybeyer1 3 years ago +10

    this is art

  • @mehtipsz 3 years ago +4

    As always, great videos! I mainly use them as supplement to masters level courses. What I love are the parts where you cover the intuitions about the formulas, it makes them so much more understandable. Keep up the good work!

  • @KareemMnour 2 years ago +1

    Thank you so much for preferring to actually help people understand concepts rather than throwing out fancy multi-step jargon that gets people frustrated with math topics. I would do anything I can to help keep the channel alive, and I will recommend your channel to all my friends and colleagues. Thanks again and keep up the excellent work.

  • @ИльяХренков-б8я 3 years ago +7

    Love your videos! Very nice for revising and learning new things, without missing out on intuition either. Hope your follower count soars soon.

  • @DaquanMHall 2 years ago +1

    Man I watch your videos all the time. I can write the code and understand the outcome but you’re the only way I can understand the math. Thanks!

  • @bannerlad01 3 years ago +28

    You are a brilliant teacher - thanks so much for doing this

  • @abhishekchandrashukla3814 2 months ago

    I swear to god, when I was searching for maximum likelihood estimation, I was hoping you would make a video on this, scrolled down a bit and bingo!! I see ritvikmath. My happiness knows no bounds. Thank you for existing.

  • @dilinijayasinghe8134 3 months ago

    Thank you very much. Been struggling to get the intuition of MLE and you helped me to understand it. Would be awesome if you could do a video on GMM estimation. Thank you!!!

  • @omniscienceisdead8837 2 years ago +2

    this was a very beautiful lecture

  • @TheScawer 3 years ago +1

    Thank you for the video! I wanted to say they are great for revision, but I usually learn a lot more than I did in school on the topic... so thank you!

  • @dataman6744 2 years ago +1

    Just Brilliant! thanks for demystifying logistic regression equations for me🤝

  • @moravskyvrabec 1 year ago +1

    Great stuff. I'm taking an online MIT class. Complicated topic? I come to your channel to solidify my understanding!

    • @ritvikmath 1 year ago

      Glad it was helpful!

  • @rmiliming 2 years ago +1

    Excellently explained. Very clear and logical! Thanks!

  • @yodarocco 1 year ago +1

    Have you ever done a video on Maximum a Posteriori (MAP)?

  • @MariaBarcoj 1 year ago +1

    Thanks for making things seem to be quite simpler ☺

  • @alessandro5847 3 years ago +1

    Thanks for these lectures. You're great at explaining this stuff. Keep it up!

  • @arshadkazi4559 2 years ago +1

    Excellent explanation, very good as an introduction. Can you make something that delves into the maths even more? An explanation of the last part is necessary and would be fun to understand. :)

  • @shubhampandilwar8448 3 years ago

    Very well explained. I am gaining confidence from these fundamentals lectures.

  • @jansanda544 3 years ago +1

    Amazing video. But during the whole time, I was distracted by figuring out what number series is on the tattoo. :D

  • @yulinliu850 3 years ago

    Thanks for the great lecture. I really liked the word "seeing" outcomes.

  • @yerzhant701 1 year ago +2

    Shouldn't the likelihood be the inverse of the probability P(y|x,beta), i.e. L(beta|x,y)?

    • @kisholoymukherjee 8 months ago

      Exactly my thoughts. From what I read in other sources, the likelihood is given by L(parameters or distribution | observed data). Perhaps @ritvikmath can explain better.
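
For what it's worth on this thread: in the standard textbook convention, the two expressions denote the same number read in two different ways. P(y | x, beta) treats the data as the variable with the parameters fixed, while the likelihood treats the parameters as the variable with the data fixed. A minimal statement of that convention (notation assumed, not taken from the video):

```latex
% Same value, different argument:
% P(y \mid x, \beta): a function of the data y, parameters fixed.
% L(\beta \mid x, y): a function of the parameters \beta, data fixed.
\mathcal{L}(\beta \mid x, y) \;=\; P(y \mid x, \beta)
```

So writing the likelihood as P(y | x, beta) is not a typo; the "inversion" is in which argument varies, not in the formula itself.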

  • @Whatever20237 1 year ago

    WOW! Thank you!

  • @ChocolateMilkCultLeader 2 years ago

    Keep putting out your bangers. I use them to learn how to communicate concepts. Shared this one with my network.

  • @BehrouzShamsaei 1 year ago

    Thanks for the video. Could you point me to a reference on why EM converges to a maximum, either local or global?

  • @harshads885 3 years ago

    In the logistic regression part on the left, it's probably better to call out that the probability p is not the same as the number of data points p.

  • @aminmohammadigolafshani2015 2 years ago +1

    Amazing! Amazing! thanks a lot

  • @oneclickdiy 3 years ago

    thank you ! these videos are good refresher

  • @bobby2636 1 year ago

    Question: At 8:03, you introduce the concept of the likelihood, which as I understand it is the probability of the real observations emerging given the y; but the formula looks like the posterior probability, not the likelihood. Is there something missing?

  • @goelnikhils 1 year ago

    Amazing Content. Thanks

  • @tianhelenaa 2 years ago

    This is truly amazing!!!

  • @robertpollock8617 11 months ago

    I am confused. You are saying the probability and the likelihood are the same, according to the equations you have written. For the likelihood, are you not trying to say: given acceptance into med school, the likelihood of having these values for GPA, MCAT score, etc.? For instance, if the probability is P(y|x), then the likelihood is L(x|y)? You have these two being equal.
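
The distinction can be made concrete with a toy Bernoulli example (entirely hypothetical numbers, not from the video): the same formula is a probability when read as a function of the data, and a likelihood when read as a function of the parameter.

```python
import numpy as np

# Toy data: 7 heads out of 10 flips (hypothetical, not from the video).
heads, n = 7, 10

# P(data | theta) evaluated on a grid of theta values. Read as a
# function of theta with the data fixed, this same formula is L(theta).
thetas = np.linspace(0.01, 0.99, 99)
likelihood = thetas**heads * (1 - thetas)**(n - heads)

print(thetas[np.argmax(likelihood)])  # MLE, ~0.7 = 7/10
```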

  • @hameddadgour 2 years ago +1

    The Gods of Statistics finally decided to send us a prophet :)

    • @Leila0S 5 months ago +1

      100%
      He is a magician
      I don’t understand how smoothly he makes the concept sweep into one’s brain

  • @TheRish123 3 years ago

    What a guy! Amazing stuff

  • @amjedbelgacem8218 1 year ago

    This guy makes Machine Learning easy bro, subscribed

  • @ling5544 1 year ago

    When the derivative is 0, it could also be a local minimum, right? How can we be sure that when the derivative is 0 the likelihood is maximized?

    • @ritvikmath 1 year ago

      While it's true that derivative = 0 could mean a min or a max, we can distinguish them: at a min the function is decreasing on the left and increasing on the right; a max is the opposite. Hope that helps!

    • @ling5544 1 year ago

      @ritvikmath thanks! I got it.
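
A further fact worth knowing here: for logistic regression the log-likelihood is concave, so a stationary point is the unique global maximum. For other models, the test from the reply can be run numerically. A minimal sketch on a toy Bernoulli log-likelihood (hypothetical coin data, not from the video):

```python
import numpy as np

# Toy data: 7 heads out of 10 flips.
heads, n = 7, 10

def log_lik(theta):
    return heads * np.log(theta) + (n - heads) * np.log(1 - theta)

theta_hat = heads / n  # stationary point: the derivative is zero here
h = 1e-5
# Central second difference approximates the second derivative.
second = (log_lik(theta_hat + h) - 2 * log_lik(theta_hat)
          + log_lik(theta_hat - h)) / h**2
print(second)  # negative => concave at theta_hat => a maximum, not a minimum
```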

  • @ireoluwaTH 3 years ago +1

    Neat 'Mathematese'...

  • @asadkhanbb 2 years ago

    Wow that t-shirt ❣️❣️❣️ cool 😎

  • @xxshogunflames 3 years ago +6

    The collection of info *chef's kiss*

  • @prateekcaire4193 4 months ago

    What should the probability P(y_i | x_i, beta) be when the actual y_i is reject (= 0)? If P(y_i | x_i, beta) is close to 0, the maximum likelihood will not be at a maximum even though the beta parameters are well tuned.
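
One way to unpick this question: in the Bernoulli likelihood, each observation contributes p_i when y_i = 1 and (1 - p_i) when y_i = 0. So for a rejected applicant, a well-tuned model makes P(y_i = 0 | x_i, beta) = 1 - p_i large, not small, and the likelihood stays high. A minimal sketch with hypothetical numbers:

```python
import numpy as np

# Each observation contributes p_i**y_i * (1 - p_i)**(1 - y_i),
# so a rejection (y_i = 0) contributes (1 - p_i).
p = np.array([0.9, 0.8, 0.1])  # model's P(accept) for three applicants
y = np.array([1, 1, 0])        # observed outcomes

contributions = p**y * (1 - p)**(1 - y)
print(contributions)         # [0.9 0.8 0.9] -- all large for a good fit
print(contributions.prod())  # the likelihood: 0.648
```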

  • @yashpundir2044 3 years ago +1

    Just 3K views on this? people are crazy. This deserves wayyyy more.

  • @bryany7344 3 years ago +1

    Can I ask what the difference is between the log likelihood and the negative log likelihood, graphically? How do I choose between them as loss functions?
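
Graphically, the negative log likelihood is the log likelihood mirrored across the horizontal axis, so the two carry identical information; the negative version exists because optimization libraries conventionally minimize. A minimal sketch on hypothetical coin data:

```python
import numpy as np

# Toy data: 7 heads out of 10 flips (hypothetical).
heads, n = 7, 10
thetas = np.linspace(0.01, 0.99, 99)

ll = heads * np.log(thetas) + (n - heads) * np.log(1 - thetas)
nll = -ll  # sign flip only: maximizing ll == minimizing nll

print(thetas[np.argmax(ll)], thetas[np.argmin(nll)])  # same theta, ~0.7
```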

  • @ianstats97 1 year ago

    Great video, I just did not understand where the sigma came from.
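
Assuming the sigma in question is the summation sign in the log-likelihood: it appears when the log is applied to the product over data points, since the log of a product is a sum of logs:

```latex
\log L(\beta) \;=\; \log \prod_{i=1}^{n} P(y_i \mid x_i, \beta)
             \;=\; \sum_{i=1}^{n} \log P(y_i \mid x_i, \beta)
```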

  • @maneechotunpuang5299 3 years ago +1

    Your videos are absolutely helpful!! You're such a damn good teacher and really good at turning complicated lessons into something easier to digest. I hope I can pass this semester with your videos; without them it would be even worse! 😂 THANKS A MILLION ❤️

  • @montycardman2535 1 year ago

    Would the likelihood function be between 0 and 1?
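
For this video's discrete setup, yes: the likelihood is a product of probabilities, each in [0, 1], so the product is too. With continuous data the likelihood is built from densities, which can exceed 1. A small numeric check (hypothetical Gaussian, not from the video):

```python
import numpy as np

# A narrow Gaussian's density at its mean exceeds 1, so a likelihood
# built from continuous densities is not bounded above by 1.
sigma = 0.1
peak_density = 1 / (sigma * np.sqrt(2 * np.pi))
print(peak_density)  # ~3.99
```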

  • @fatriantobong8169 1 year ago

    Hmm, how do you bind the y_i to the sigmoid function?
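
On one reading of the question: y_i never passes through the sigmoid. The sigmoid produces p_i = P(y_i = 1 | x_i, beta), and y_i then selects which factor, p_i or 1 - p_i, enters the likelihood. A minimal sketch (all values hypothetical):

```python
import numpy as np

x_i = np.array([3.7, 0.9])    # one observation's features (made up)
beta = np.array([1.2, -0.4])  # candidate parameters (made up)

p_i = 1 / (1 + np.exp(-x_i @ beta))      # sigmoid gives P(y_i = 1)
y_i = 1                                  # observed outcome
lik_i = p_i**y_i * (1 - p_i)**(1 - y_i)  # y_i picks p_i or 1 - p_i
print(lik_i)
```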

  • @fotiskamanis8592 10 months ago

    Thank you!!!

  • @akashswain7939 5 months ago

    L

  • @jiaqint961 1 year ago

    Gold!

  • @redherring0077 2 years ago

    Please marry me😍😂😂. I can listen to you forever. Such a passionate teacher!

  • @kaym2332 3 years ago

    Amazing style of teaching. Thank you!

  • @jakobforslin6301 2 years ago

    You're an awesome teacher, thanks a lot!

  • @jijie133 2 years ago

    I love your videos.

  • @fszhang9010 3 years ago +1

    Great and helpful video! At 12:40 you say: "This is the probability of seeing all the real world outcomes that I actually see in my data." I think it's better to replace "real world" with "predicted" or another synonym, since the former could mislead viewers into thinking those "outcomes" are records of events that actually happened, which they are not; they stem from the model's predictions. It's the data (y: n×1) that records the real results. At 20:05 it is expressed the correct way.

  • @NickKravitz 3 years ago +1

    In English most people use the terms Probability and Likelihood interchangeably - I can't help but correct this when I hear it. One nuance is that the Maximum Likelihood result is often very small, meaning the parameter value isn't very likely, it is just more likely than the alternatives. Ranked Choice Voting is designed to promote the Most Likable Choice past the 50% threshold. Great video as always; I hope you become a stats and data science professor!