Learn Statistics with Brian

Videos

Likelihood Ratio Tests Clearly Explained
2.4K views · 2 months ago
What is a likelihood ratio test (LRT) in statistics?
The Physics Behind Probability Mass and Probability Density Functions
1.1K views · 2 months ago
Buy my full-length statistics, data science, and SQL courses: linktr.ee/briangreco Why are they called probability mass and probability density functions? What's the difference between a pmf and a pdf?
Fail to reject the null hypothesis?
1.1K views · 2 months ago
Why do people say "I fail to reject the null hypothesis"?????
What is a consistent estimator in statistics?
2.1K views · 2 months ago
What does it mean for a statistical estimator to be consistent? Buy my full-length statistics, data science, and SQL courses here: linktr.ee/briangreco
Complete Statistics, Ancillary Statistics, and Basu's Theorem
814 views · 2 months ago
Learn about ancillarity, complete statistics, and Basu’s Theorem! Sufficient Statistics: th-cam.com/video/J-TTqCgRzbM/w-d-xo.html Minimal Sufficient Statistics: th-cam.com/video/IsgteDaNTFk/w-d-xo.html Buy my full-length statistics, data science, and SQL courses here: linktr.ee/briangreco
Minimal Sufficient Statistics Clearly Explained
1K views · 2 months ago
You’ve learned what a sufficient statistic is, but what does it mean to be minimal sufficient? Sufficient Statistics: th-cam.com/video/J-TTqCgRzbM/w-d-xo.html Buy my full-length statistics, data science, and SQL courses here: linktr.ee/briangreco
Chi-squared tests ... MADE EASY!!!
341 views · 3 months ago
Learn about Chi-Squared Tests - Goodness of fit (GOF), Independence, and Homogeneity, oh my! Buy my full-length statistics, data science, and SQL courses here: linktr.ee/briangreco
The Rao-Blackwell Theorem Explained
2.6K views · 3 months ago
Learn about Rao-Blackwellization, a way to use sufficient statistics to improve your statistical estimators! Learn more about Sufficient Statistics and the Factorization Theorem: th-cam.com/video/J-TTqCgRzbM/w-d-xo.html Buy my full-length statistics, data science, and SQL courses here: linktr.ee/briangreco
The EM Algorithm Clearly Explained (Expectation-Maximization Algorithm)
9K views · 4 months ago
Learn all about the EM algorithm, a way to find maximum likelihood estimates in problems with missing data. Buy my full-length statistics, data science, and SQL courses here: linktr.ee/briangreco
p-values explained, in 5 levels of complexity
4K views · 5 months ago
Learn all about what p-values are and how to interpret them in statistical hypothesis testing! Buy my full-length statistics, data science, and SQL courses here: linktr.ee/briangreco
What is a degree of freedom?
1.5K views · 5 months ago
Learn about the mysterious, hard-to-interpret number used in many statistical methods, degrees of freedom (df). Buy my full-length statistics, data science, and SQL courses here: linktr.ee/briangreco
The Monty Hall Problem without probability
822 views · 5 months ago
The Monty Hall Problem is a classic probability puzzle with a counter-intuitive result. Instead of using probability trees, let's try to deeply understand the problem!
Analysis of Variance (ANOVA) and F statistics .... MADE EASY!!!
480 views · 5 months ago
Learn the intuition behind ANOVA and calculating F statistics! Buy my full-length statistics, data science, and SQL courses here: linktr.ee/briangreco
The Cramer-Rao Lower Bound ... MADE EASY!!!
6K views · 7 months ago
What is a Cramer-Rao Lower Bound? How can we prove an estimator is the best possible estimator? What is the efficiency of an estimator? Buy my full-length statistics, data science, and SQL courses here: linktr.ee/briangreco
Outliers in Data Analysis... and how to deal with them!
4.2K views · 8 months ago
Outliers in Data Analysis... and how to deal with them!
Link functions for GLMs... MADE EASY!!!
1.3K views · 8 months ago
Link functions for GLMs... MADE EASY!!!
The Uniform Distribution MLE is... very UNLIKELY
1.1K views · 8 months ago
The Uniform Distribution MLE is... very UNLIKELY
Bayesian vs. Frequentist Statistics ... MADE EASY!!!
20K views · 8 months ago
Bayesian vs. Frequentist Statistics ... MADE EASY!!!
Maximum Likelihood Estimation ... MADE EASY!!!
32K views · 8 months ago
Maximum Likelihood Estimation ... MADE EASY!!!
The simplest non-parametric test... The Sign Test
2.4K views · 8 months ago
The simplest non-parametric test... The Sign Test
Skewness... MADE EASY!!!
397 views · 8 months ago
Skewness... MADE EASY!!!
Unbiased Estimators ... Made Easy!
7K views · 9 months ago
Unbiased Estimators ... Made Easy!
Can more data be BAD??? (The 10% rule and Finite Population Correction)
318 views · 9 months ago
Can more data be BAD??? (The 10% rule and Finite Population Correction)
Inverse Transform Sampling ... MADE EASY!!!
3K views · 9 months ago
Inverse Transform Sampling ... MADE EASY!!!
Regularization... Made Easy!!!
236 views · 9 months ago
Regularization... Made Easy!!!
Heteroskedasticity and Homoskedasticity... What are they???
3.8K views · 9 months ago
Heteroskedasticity and Homoskedasticity... What are they???
Independent vs Mutually Exclusive Events ... MADE EASY!!!
2.6K views · 9 months ago
Independent vs Mutually Exclusive Events ... MADE EASY!!!
Probability vs. Likelihood ... MADE EASY!!!
43K views · 9 months ago
Probability vs. Likelihood ... MADE EASY!!!
The Method of Moments ... Made Easy!
29K views · 9 months ago
The Method of Moments ... Made Easy!

Comments

  • @DupinderSingh · 1 day ago

    Very helpful! Appreciated

  • @muslimmurtazaliyev2942 · 1 day ago

    thank you

  • @Ray-xy1bp · 1 day ago

    I understand that in the coins example in the beginning, you used the binomial distribution to get the likelihood. Is it true that you have to replace that with the log likelihood ?

    • @statswithbrian · 1 day ago

      The whole video is a coin flip example, so I’m not sure what you mean. We don’t use log likelihoods in the chart where we solve things, but if you watch the whole video we derive the method for the chart by using log likelihoods. Likelihoods and log likelihoods are the same thing, one just has a log.
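
A small sketch of that point, assuming the numpy/scipy stack and a made-up 7-heads-in-10-flips dataset: the binomial likelihood and its log peak at the same value of p, so maximizing either gives the same estimate.

```python
import numpy as np
from scipy.stats import binom

# Hypothetical data: 7 heads observed in 10 coin flips.
n_flips, n_heads = 10, 7

# Evaluate the binomial likelihood and log-likelihood over a grid of p values.
p_grid = np.linspace(0.01, 0.99, 99)
likelihood = binom.pmf(n_heads, n_flips, p_grid)
log_likelihood = binom.logpmf(n_heads, n_flips, p_grid)

# Both curves are maximized at the same p (here near 0.7 = 7/10),
# which is why likelihoods and log-likelihoods are interchangeable for MLE.
print("argmax of likelihood:    ", p_grid[np.argmax(likelihood)])
print("argmax of log-likelihood:", p_grid[np.argmax(log_likelihood)])
```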

  • @southsun1149 · 2 days ago

    Very good presentation.

  • @Honest_Reply900 · 2 days ago

    Amazingly explained 🎉

  • @vaibhavbajaj682 · 8 days ago

    Shouldn’t the variance of the first data point be (X1-Xbar)/1 or (X1-Xbar)/0 which is just 1 as we want the variance of just first data point whereas sigma^2 is the variance of the whole sample?

    • @statswithbrian · 8 days ago

      No, sigma^2 is the variance of each individual data point because it’s the variance of the entire distribution. You’re getting a little mixed up - realized data values don’t have variances, I’m talking about the first data point being a random variable that has variance sigma^2. Nothing in this video is about calculating sample variances so X-bar and the sample variance are not involved at all.

    • @vaibhavbajaj682 · 8 days ago

      Just to be clear, the “First data point X1” refers to one random value from a sample. We are checking how good of an estimator just one value from a sample is as opposed to the X-bar which is the mean of that sample?

    • @statswithbrian · 8 days ago

      Yes, basically, but keep in mind we are never talking about any particular sample. We are just comparing samples of size one, which have variance sigma^2 and samples of size n, where X-bar has variance sigma^2/n

    • @vaibhavbajaj682 · 8 days ago

      Got it. Thanks so much for taking the time out to answer my queries. You are very kind.
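
A minimal simulation sketch of the comparison in this thread (the Normal population, sigma = 2, and n = 25 are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n, reps = 2.0, 25, 100_000

# Draw many samples of size n from a Normal(0, sigma^2) population.
samples = rng.normal(loc=0.0, scale=sigma, size=(reps, n))

# The first data point X1, viewed as a random variable, has variance sigma^2,
# while the sample mean X-bar has the smaller variance sigma^2 / n.
print("Var(X1)    ~", samples[:, 0].var(), "(theory:", sigma**2, ")")
print("Var(X-bar) ~", samples.mean(axis=1).var(), "(theory:", sigma**2 / n, ")")
```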

  • @AAU-il1lv · 10 days ago

    Thanks, it is such an important explanation. Keep it up! 😍😍😍

  • @zazakh7804 · 11 days ago

    I completely understand it! Thank you so much.

  • @711mak6 · 11 days ago

    shukriya bro <3

  • @711mak6 · 11 days ago

    thanks very well explained

  • @haniyeanishe · 12 days ago

    Hello, thanks. We would be so glad to see more videos from you about Bayesian estimators and MAP. Better explained than by my professor. Thanks again.

  • @ngananhtruongha5830 · 13 days ago

    This video is very easy to understand. Thank you so much

  • @BillHaug · 16 days ago

    This is very helpful thank you

  • @spyzom4383 · 18 days ago

    W video man, gotta love it when somebody knows what they are talking about!

  • @vishrutpandya3257 · 19 days ago

    You made it soo simple to understand. Thanks!!!!

  • @ShubhamPlays · 20 days ago

    Wow, such a great perspective; that is really a good video. I loved how you explained, with different scenarios, how removing outliers would affect things. The only small issue was your pace; please try to speak a little slower. Everything else was really good.

  • @noobgaming8843 · 24 days ago

    Love you!

  • @noobgaming8843 · 24 days ago

    Thank you so much one of the greatest videos!

  • @chief4180 · 25 days ago

    I failed statistics course twice and I'm studying for my third final exam. This is the first time ever I completely understand this concept. Very very helpful video. Thanks a lot.

  • @soubhagyaghosh8558 · 25 days ago

    Excellent! Can you provide an example of an estimator that is both biased and inconsistent? My thoughts - if we consider a sample X1,...,Xn and set 𝜇^ = max{X1,...,Xn} as an estimator for the population mean then it will be biased and inconsistent; do clarify.

    • @statswithbrian · 25 days ago

      For sure, almost any bad idea you can come up with will almost certainly be both biased and inconsistent. The estimator you just came up with would be both for almost any distribution.

    • @soubhagyaghosh8558 · 25 days ago

      @@statswithbrian thanks; in short better not fall for the baddies, haha!
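
A quick simulation sketch of that example, using a Uniform(0, 1) population (an arbitrary choice) whose true mean is 0.5: the sample maximum stays biased no matter how large n gets, so it is also inconsistent for the mean.

```python
import numpy as np

rng = np.random.default_rng(1)
reps = 20_000

# Estimate the population mean of a Uniform(0, 1) with the sample maximum.
for n in (5, 50, 500):
    estimates = rng.uniform(0.0, 1.0, size=(reps, n)).max(axis=1)
    print(f"n={n:3d}: average of max estimator ~ {estimates.mean():.3f} (true mean 0.5)")

# The average does not approach 0.5 as n grows; it approaches 1,
# so the estimator is biased and converges to the wrong value (inconsistent).
```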

  • @Heisenberg-pr1hk · 26 days ago

    We learnt this concept in the Probab. and Random Signal Princ. lecture and now I am trying to use it on antennas. Good explanation, thanks.

  • @tathaztu · 27 days ago

    Thank you for this! Very helpful. Good content and a nicely structured.

  • @Betleyman_7 · 27 days ago

    I have watched three videos on this subject. I am still none the wiser, just better informed!

  • @kr15w · 28 days ago

    imagine the power of this man with manim

    • @statswithbrian · 28 days ago

      You are in for a treat very soon.

  • @robertarvanitis8852 · 29 days ago

    There is only one flip of a coin, just as there is only one election. It makes sense for an F to say "given two sides, it's 50% probability the NEXT flip will be a head." It makes sense for a B to say "given the mood of the country, I'd wager even money the NEXT election will see X win." Our two punters, F and B, are using different forms and sources of data and related factors, but both are willing to wager at odds. That's the test of sincerity, willing to bet on risk vs. reward. Answer Darwin's question: is that ripe fruit worth the risk of a predator?

  • @衮雪 · a month ago

    Crystal clear explanation

  • @gabrielleveque7613 · a month ago

    People in China like pasta too tho..

    • @statswithbrian · a month ago

      I know, even this completely made-up problem admits that over 100 million people in China love pasta.

  • @gourangvats8763 · a month ago

    Sir, your videos are amazing!! Thank you Please keep on uploading more

  • @ata.iskecman · a month ago

    This complex topic could only be explained this easily and with such simple examples.

  • @shibandevi9865 · a month ago

    This was the best explanation of the MLE on youtube.

  • @devangaaravinda3487 · a month ago

    Excellent Explanation

  • @manasmehra3203 · a month ago

    this video is so well made marry me

  • @AatmikJain-td6sx · a month ago

    thanks for lecture

  • @MarcoBova · a month ago

    Hey Brian, neat video as always! Could you make some videos on hypothesis testing, the Neyman-Pearson lemma, UMP tests, and the generalised likelihood ratio test? Thank you in advance.

    • @statswithbrian · a month ago

      The most recent video besides this one is likelihood ratio tests and I talk about all those things.

  • @sabihasultana8002 · a month ago

    most simple yet in depth analysis found so far, very intuitive..thanks a lot, because of you all my doubts got cleared

  • @zendruoflynstin8275 · a month ago

    Thanks. Never thought it would be that easy.

  • @LebuAckermann · a month ago

    You explained nicely, great job

  • @andi-w6p · a month ago

    If the one explaining truly understands the concept well, his explanation will be easy to understand. So, if I haven't been able to grasp my professor's convoluted explanations all this time, it could be because he doesn't actually understand the concept very well.

  • @vanessawertheim · a month ago

    thank you so much! this helped a lot 😊

  • @nocontextnikhil · a month ago

    awesome explanation.

  • @Jonathan-cn6xz · a month ago

    9:42 Is the log(0.8) meant to be log(1.8)? Nice video btw.

    • @statswithbrian · a month ago

      Yes, exactly, good catch.

  • @brazilfootball · a month ago

    Thank you, but one thing I still don't get about the Bayesian way of thinking: if one is supposed to constantly update a belief via data, then where did the prior come from? How does one quantify a belief based on zero data?

  • @Indioharp · a month ago

    Great explanation Brian! I have a small question, though. If the response variable has to be normal (in a normal linear regression), why do you think most statistics articles insist that only the residuals have to be normal and not the variable? What tests do you think should be done before a GLM, besides residual plots?

    • @statswithbrian · a month ago

      Saying the response is normal and the residuals are normal means the same thing basically. The response is normal (around the mean for that X value), which just means the response’s distance from the mean (residual) is normal with mean 0. If we want to evaluate normality of residuals, it’s then easier to look at a graph of the residuals, since they all have the same mean, so we can easily visualize whether they seem normally distributed.

    • @Indioharp · a month ago

      @@statswithbrian Thank you.
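
A small simulation sketch of the equivalence described above, with made-up coefficients (intercept 1, slope 2) and normal errors: "the response is normal around the regression line" and "the residuals are normal with mean 0" express the same assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000

# Simulate a normal linear model: y is normal around its conditional mean 1 + 2x.
x = rng.uniform(0, 10, size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=3.0, size=n)

# Fit by least squares and inspect the residuals.
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (intercept + slope * x)

# The residuals all share mean 0, so a single residual plot or histogram
# can be used to judge the normality assumption for the whole response.
print("residual mean ~", residuals.mean())
print("residual sd   ~", residuals.std())
```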

  • @jingngo8727 · a month ago

    Amazing. Enlightening

  • @nimalannagendiran · a month ago

    Love the video. These concepts are explained very nicely and in an easy-to-understand manner.

  • @TalgatOmarov-y9z · a month ago

    Very clear explanation. Thanks.

  • @st8k490 · a month ago

    great content, keep it up

  • @carlosazevedo5079 · a month ago

    Is there any reason to have two different ways of solving it? One at 5:43 and another at the end. Which one do you think is better and why ?

    • @statswithbrian · a month ago

      They are the same way, least squares. The last part is just a little extra justification on “why” - least squares is very related to normal distributions.
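
A tiny sketch of that connection, with made-up data and an error SD assumed known (1.0): minimizing the sum of squared residuals and maximizing the normal log-likelihood recover essentially the same intercept and slope.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
x = rng.uniform(0, 5, size=200)
y = 2.0 + 1.5 * x + rng.normal(scale=1.0, size=200)

# Way 1: minimize the sum of squared residuals (least squares).
lsq = minimize(lambda b: np.sum((y - b[0] - b[1] * x) ** 2), x0=[0.0, 0.0])

# Way 2: maximize the normal log-likelihood (MLE with known error SD).
mle = minimize(lambda b: -np.sum(norm.logpdf(y, loc=b[0] + b[1] * x, scale=1.0)),
               x0=[0.0, 0.0])

# Both criteria are optimized by (essentially) the same [intercept, slope].
print("least squares:", lsq.x)
print("normal MLE:   ", mle.x)
```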

  • @kates-creates · a month ago

    you have great videos !! thank you so much <3