The Cramer-Rao Lower Bound ... MADE EASY!!!

  • Published Nov 28, 2024

Comments • 16

  • @huanranchen • 1 month ago • +1

    I'm so grateful for all the videos you make that inspire our curiosity!

  • @JuhiMaurya-ym3ud • 11 days ago

    Perfect and easiest explanation on YouTube... thank you so much, sir, it is really helpful.

  • @QuantNovice • 2 months ago

    This video is gold for every stats student! Thanks a lot for this amazing content!

  • @chrisolande1061 • 1 month ago

    Exactly the kind of video I was looking for. Perfect explanation.

  • @santiagodm3483 • 5 months ago

    Nice videos. I'm now preparing for my master's, and this will be quite useful; the connection between the CRLB and the standard error of the MLE estimates makes this very nice.

  • @ligandro • 5 months ago

    Thanks for uploading all this content. I am about to begin my master's in data science soon, and I was trying to grasp some math theory, which is hard for me coming from a CS background. Your videos make it so simple to digest all these topics.

  • @RoyalYoutube_PRO • 4 months ago

    Fantastic video... preparing for IIT JAM MS

  • @swatigoel5387 • 1 month ago

    Super helpful video! Thank you:)

  • @jayjain1033 • 29 days ago

    So good! I didn't get it in class at all.

  • @LAQ24 • 1 month ago

    Brilliant!

  • @jayanthiSaibalaji • 3 months ago

    Many thanks 🙏

  • @ridwanwase7444 • 4 months ago • +1

    Fisher information is the negative of the expected value of the second derivative of log L, so why do we multiply by 'n' to get it?

    • @statswithbrian • 4 months ago • +1

      I was assuming the L here is the likelihood of a single data point. In that case, you just multiply by n at the end to get the information of all n observations. If L is the likelihood of all n data points, then the answer will already contain the n, and you don't have to multiply at the end. The two methods are equivalent when the data are independent and identically distributed. (A short symbolic check of this additivity is sketched after this thread.)

    • @ridwanwase7444 • 4 months ago • +1

      @statswithbrian Thanks for replying so quickly! I have another question: does the MLE of the population mean always achieve the CRLB variance?

    • @statswithbrian • 4 months ago • +1

      Hmm, I don't think this is true in general. At some level, it's certainly not true if we're talking about the CRLB for unbiased estimators, because the MLE is sometimes biased. For example, for a uniform distribution on [0, theta], the MLE is biased, and the Fisher information is not even defined.
      My guess is that this applies for some "location families", which the normal, binomial, and Poisson would all be. For a "scale family" like the exponential distribution, in the parameterization where the mean is 1/lambda, I do not believe the MLE meets the CRLB. (A quick simulation sketch of the exponential case follows this thread.)
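
A minimal SymPy sketch of the "multiply by n" point above, assuming a Poisson(lambda) model purely for illustration (the video may use a different example): for iid data the log-likelihood is a sum over observations, so the Fisher information of all n observations works out to n times the single-observation information.

import sympy as sp

lam, x, n, xbar = sp.symbols("lam x n xbar", positive=True)

# Log-likelihood of ONE Poisson(lam) observation
logL_1 = x * sp.log(lam) - lam - sp.log(sp.factorial(x))

# Fisher information of one observation: I_1 = -E[ d^2/dlam^2 log L ]
d2_1 = sp.diff(logL_1, lam, 2)        # equals -x / lam**2
I_1 = -d2_1.subs(x, lam)              # E[X] = lam, so I_1 = 1/lam

# Log-likelihood of n iid observations, written via the sample mean xbar
# (the additive term not involving lam is dropped)
logL_n = n * xbar * sp.log(lam) - n * lam
d2_n = sp.diff(logL_n, lam, 2)        # equals -n*xbar / lam**2
I_n = -d2_n.subs(xbar, lam)           # E[Xbar] = lam, so I_n = n/lam

print(I_1)                            # 1/lam
print(sp.simplify(I_n - n * I_1))     # 0, i.e. I_n = n * I_1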
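
And a quick Monte Carlo sketch of the exponential remark, under assumed values not taken from the video (rate lam_true = 2, sample size n = 20): the MLE of the rate, 1/xbar, is biased for finite n, and its variance sits above the CRLB lam^2/n that applies to unbiased estimators.

import numpy as np

rng = np.random.default_rng(0)
lam_true, n, reps = 2.0, 20, 200_000     # illustrative values, assumed for this sketch

# reps samples of size n from an Exponential with rate lam_true (mean 1/lam_true)
samples = rng.exponential(scale=1 / lam_true, size=(reps, n))

# MLE of the rate is the reciprocal of the sample mean
lam_hat = 1 / samples.mean(axis=1)

crlb = lam_true**2 / n                   # CRLB for unbiased estimators of the rate

print("mean of MLE:    ", lam_hat.mean())   # about 2.105, biased upward from 2.0
print("variance of MLE:", lam_hat.var())    # about 0.246
print("CRLB:           ", crlb)             # 0.2, so the MLE variance exceeds the CRLB here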