Why maximise 'log' likelihood?
- Published on 9 Feb 2025
- This video explains why, in practice, it is acceptable to maximise the log likelihood instead of the likelihood itself.
Check out oxbridge-tutor.... and ben-lambert.co... for course materials and information regarding updates on each of the courses. Quite excitingly (for me at least), I am about to publish a whole series of new videos on Bayesian statistics on YouTube. See here for information: ben-lambert.co... Accompanying this series, there will be a book: www.amazon.co....
My professor never explained why we log L. This makes it so clear. Thank you.
One mistake: the log-likelihood l should be less than zero when L is less than 1.
The idea here is one of the rare cases when it doesn't get on my nerves.
2:25 I couldn't find a reason to multiply the f(x_i | theta) together. Why?
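A possible answer, assuming (as the video does) that the observations x_1, ..., x_n are independent and identically distributed: independence is exactly what lets the joint density factorise into a product of the individual densities, and taking logs then turns that product into a sum.

```latex
% Likelihood of an i.i.d. sample: independence lets the joint
% density factorise into a product of marginal densities.
L(\theta) = f(x_1, \dots, x_n \mid \theta) = \prod_{i=1}^{n} f(x_i \mid \theta)
% Taking logs turns the product into a sum, which is easier to
% differentiate and avoids numerical underflow for large n.
\ell(\theta) = \log L(\theta) = \sum_{i=1}^{n} \log f(x_i \mid \theta)
```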
Thank you professor Lambert!
Stat Quest GANGSTAAZ, they BAD TO THE BOOOONE.
What is penalized likelihood?
It was not clear, unfortunately.
OK, I think I got it. If somebody gets stuck on the same problem understanding this material, I can recommend this URL: en.wikipedia.org/wiki/Monotonic_function
@igorrizhyi5572 Hmm, I know what a monotonic function is, though this is not entirely clear to me.
1) Why do we maximize the likelihood at the same point IF we have monotonicity (4:50)? Answer: whenever L is increasing, l is increasing too. Hence they reach their extreme points in tandem (see the numeric sketch below).
2) When did we prove that l is a monotonic function of L? Answer: it follows directly from the calculation: l = log(L), so dl/dL = 1/L > 0 (since L > 0), which means l is strictly increasing in L.
I answered the questions myself, thanks.
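A minimal numeric sketch of point 1, using a made-up Bernoulli sample (the data and parameter grid here are assumptions for illustration, not from the video): because log is strictly increasing, the likelihood and the log-likelihood peak at the same theta.

```python
import numpy as np

# Hypothetical Bernoulli sample: 7 successes out of 10 trials.
successes, n = 7, 10

# Grid of candidate parameter values (avoiding 0 and 1 so the log is finite).
theta = np.linspace(0.001, 0.999, 9999)

# Likelihood L(theta) and log-likelihood l(theta) for i.i.d. Bernoulli data.
L = theta**successes * (1 - theta)**(n - successes)
log_L = successes * np.log(theta) + (n - successes) * np.log(1 - theta)

# Because log is strictly increasing, both curves peak at the same grid point.
print(theta[np.argmax(L)])      # ~0.7
print(theta[np.argmax(log_L)])  # ~0.7, the same argmax
```

The grid search is deliberately crude; the only point is that both argmax calls return the identical index, which is the monotonicity argument from the video made concrete.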