Comments •

  • @DistortedV12
    @DistortedV12 6 years ago +8

    You have no idea how helpful this is for people trying to learn data science; this is also the cost function for logistic regression.
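
    For reference, here is the connection the comment points to, written out (the notation is mine, not from the video): with p_i denoting the model's predicted probability that y_i = 1, the log-likelihood maximized in MLE is

        \ell(\beta) = \sum_{i=1}^{N} \big[ y_i \log p_i + (1 - y_i) \log(1 - p_i) \big],

    and the logistic-regression cost function is simply its negative (usually averaged over N), i.e. the binary cross-entropy.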

  • @bryansimmons9859
    @bryansimmons9859 3 years ago +8

    Just gotta say, I'm always so happy when I search for a topic and your videos come up. You are such an excellent teacher. Thank you!

  • @jamiekwon7225
    @jamiekwon7225 5 years ago

    I've been learning statistics for the past 2-3 years and your videos have always been helpful!! Thank you so much

  • @BharatPatidar333
    @BharatPatidar333 7 years ago +1

    Thank you so much, Ben. I have tried hard to understand this concept from other content online, but this is the best I have found so far.
    Thanks again.

  • @goodmorningprester3936
    @goodmorningprester3936 2 years ago

    Thanks Mr. Lambert. I like the simplicity of your lectures. You are the best

  • @shonendumm
    @shonendumm 1 year ago

    Thanks! This video is helping me understand the calculations, even in 2023!

  • @barovierkevinallybose1040
    @barovierkevinallybose1040 4 years ago +1

    My absolute favorite part of statistics: determining the MLE.

  • @zilinzhang5123
    @zilinzhang5123 4 years ago

    Thank you! You are much more helpful than my lecturer!

  • @zombielearn1400
    @zombielearn1400 10 years ago +1

    Thank you so much! You explain the concepts in a very easy to understand fashion.
    Could you please let me know whether this video belongs to a certain playlist, so people can watch the entire series. Thanks again.

  • @ahmedel-mikawi8688
    @ahmedel-mikawi8688 9 years ago

    Excellent video!

  • @nazisucksass
    @nazisucksass 10 years ago

    Thank you so much, man!

  • @mehradghazanfaryan640
    @mehradghazanfaryan640 3 years ago

    Crystal clear explanation

  • @nuradilah4934
    @nuradilah4934 4 years ago

    Best explanation of MLE. Do watch all 3 videos.

  • @101dostoyevsky
    @101dostoyevsky 7 years ago +1

    So good! God bless.

  • @steadymedia234
    @steadymedia234 8 years ago

    simply awesome.

  • @eliasdanielozuna
    @eliasdanielozuna 7 years ago +1

    Great!
    Just great.

  • @IIAndersII
    @IIAndersII 9 years ago

    Very helpful - thanks!

  • @elfadlaouielfadel932
    @elfadlaouielfadel932 5 years ago

    Thank you. Professor, why are we using the probability as the dependent variable even though we have a qualitative variable? (What's the logic behind that?)

  • @zubairqureshi7365
    @zubairqureshi7365 6 years ago

    What is the difference between the cost function and maximum likelihood estimation in logistic regression? Please explain.

  • @milannarula2410
    @milannarula2410 4 years ago

    Would you be able to show how to take the likelihood of a rank-ordered multinomial probit model?

  • @claratan3744
    @claratan3744 7 years ago

    Amazing

  • @megapanka
    @megapanka 5 years ago +1

    You are a god

  • @louisesun3003
    @louisesun3003 10 years ago +5

    Thanks so much for your great explanations!! Is there any chance you could make a video about partial effects in the Probit and Logit models?

    • @lastua8562
      @lastua8562 4 years ago

      He did that; see video 37.

  • @michelle9859
    @michelle9859 5 years ago

    thank you

  • @naphatjantaranukul4258
    @naphatjantaranukul4258 4 years ago

    thank you for saving me from the finals

  • @marlonbrando6826
    @marlonbrando6826 6 months ago

    Does logistic regression never have a closed-form/analytical solution, even if we use it for continuous dependent variables?

  • @elfmas
    @elfmas 1 year ago

    Can you do one for continuous dependent variables (like Tobit), please?

  • @ipitythefool97
    @ipitythefool97 7 years ago +1

    I don't understand where logit and probit even came into anything though - you didn't use the models within the estimation, I swear???

  • @lastua8562
    @lastua8562 4 years ago

    How was this executed before computers were invented and used for this purpose?

  • @svenmeier5921
    @svenmeier5921 7 years ago

    But the product of the probabilities of N observations becomes smaller and smaller as N becomes larger, since the probabilities are between 0 and 1. True? Does that even matter?

    • @filipporicci9315
      @filipporicci9315 5 years ago

      Yes, but you always look for parameter estimates that are in between 0 and 1 since you will interpret them as percentage changes in the dependent variable y.

    • @lastua8562
      @lastua8562 4 years ago

      Interesting question. Any thoughts on it today?

    • @yepyep266
      @yepyep266 2 years ago

      This is the loss function, not the sum of probabilities.
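
    A small numerical sketch of this point (my own illustration, not from the video): the raw product of many probabilities does shrink towards zero and will underflow in floating point, but because the log is monotonic, the same parameters maximize the log-likelihood, which is a numerically stable sum.

      import numpy as np

      rng = np.random.default_rng(0)
      # Made-up per-observation probabilities, just to illustrate the underflow.
      p = rng.uniform(0.1, 0.9, size=5000)

      # Raw likelihood: a product of 5000 numbers below 1 underflows to exactly 0.0.
      likelihood = np.prod(p)

      # Log-likelihood: the sum of logs stays finite and is what software maximizes.
      log_likelihood = np.sum(np.log(p))

      print(likelihood)       # 0.0
      print(log_likelihood)   # a large negative but finite number (roughly -4000 here)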

  • @chilipepper8397
    @chilipepper8397 4 years ago

    How do you calculate the values of beta0 and beta1?

    • @yepyep266
      @yepyep266 2 years ago

      By setting the partial derivatives of the log-likelihood to zero and solving - see the sketch after this thread.

    • @chilipepper8397
      @chilipepper8397 2 years ago

      @@yepyep266 oh yea, I think I did that a year ago 😂
      Thanks man! 😊

    • @yepyep266
      @yepyep266 2 years ago

      @@chilipepper8397 np. I'm taking an ML course right now and have an assignment where I need to implement a logistic regression model to predict meteorological phenomena. I am a bit lost right now.
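
    Since the first-order conditions of the logit log-likelihood have no closed-form solution, the betas are found iteratively. A minimal gradient-ascent sketch (my own illustration with made-up data and an arbitrary learning rate, not the video's code or the Newton-Raphson routine most packages use):

      import numpy as np

      rng = np.random.default_rng(1)

      # Simulate data from a known logit model so we can check the estimates.
      n = 1000
      x = rng.normal(size=n)
      true_b0, true_b1 = -0.5, 2.0
      y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(true_b0 + true_b1 * x))))

      # Gradient ascent on the average log-likelihood.
      b0, b1 = 0.0, 0.0
      lr = 0.1  # arbitrary step size for this illustration
      for _ in range(5000):
          p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))  # fitted probabilities
          b0 += lr * np.mean(y - p)                 # d(avg log-lik)/d(beta0)
          b1 += lr * np.mean((y - p) * x)           # d(avg log-lik)/d(beta1)

      print(b0, b1)  # should end up close to the true values (-0.5, 2.0)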
