Learn Statistics with Brian
  • 35
  • 198 691

Videos

Likelihood Ratio Tests Clearly Explained
537 views • 14 days ago
What is a likelihood ratio test (LRT) in statistics?
The Physics Behind Probability Mass and Probability Density Functions
998 views • 28 days ago
Buy my full-length statistics, data science, and SQL courses: linktr.ee/briangreco Why are they called probability mass and probability density functions? What's the difference between a pmf and a pdf?
Fail to reject the null hypothesis?
687 views • a month ago
Why do people say "I fail to reject the null hypothesis"?????
What is a consistent estimator in statistics?
1.2K views • a month ago
What does it mean for a statistical estimator to be consistent? Buy my full-length statistics, data science, and SQL courses here: linktr.ee/briangreco
Complete Statistics, Ancillary Statistics, and Basu's Theorem
428 views • a month ago
Learn about ancillarity, complete statistics, and Basu’s Theorem! Sufficient Statistics: th-cam.com/video/J-TTqCgRzbM/w-d-xo.html Minimal Sufficient Statistics: th-cam.com/video/IsgteDaNTFk/w-d-xo.html Buy my full-length statistics, data science, and SQL courses here: linktr.ee/briangreco
Minimal Sufficient Statistics Clearly Explained
579 views • a month ago
You’ve learned what a sufficient statistic is, but what does it mean to be minimal sufficient? Sufficient Statistics: th-cam.com/video/J-TTqCgRzbM/w-d-xo.html Buy my full-length statistics, data science, and SQL courses here: linktr.ee/briangreco
Chi-squared tests ... MADE EASY!!!
287 views • 2 months ago
Learn about Chi-Squared Tests - Goodness of fit (GOF), Independence, and Homogeneity, oh my! Buy my full-length statistics, data science, and SQL courses here: linktr.ee/briangreco
The Rao-Blackwell Theorem Explained
1.4K views • 2 months ago
Learn about Rao-Blackwellization, a way to use sufficient statistics to improve your statistical estimators! Learn more about Sufficient Statistics and the Factorization Theorem: th-cam.com/video/J-TTqCgRzbM/w-d-xo.html Buy my full-length statistics, data science, and SQL courses here: linktr.ee/briangreco
The EM Algorithm Clearly Explained (Expectation-Maximization Algorithm)
5K views • 2 months ago
Learn all about the EM algorithm, a way to find maximum likelihood estimates in problems with missing data. Buy my full-length statistics, data science, and SQL courses here: linktr.ee/briangreco
p-values explained, in 5 levels of complexity
3.3K views • 3 months ago
Learn all about what p-values are and how to interpret them in statistical hypothesis testing! Buy my full-length statistics, data science, and SQL courses here: linktr.ee/briangreco
What is a degree of freedom?
1.1K views • 3 months ago
Learn about the mysterious, hard-to-interpret number used in many statistical methods, degrees of freedom (df). Buy my full-length statistics, data science, and SQL courses here: linktr.ee/briangreco
The Monty Hall Problem without probability
798 views • 4 months ago
The Monty Hall Problem is a classic probability puzzle with a counter-intuitive result. Instead of using probability trees, let's try to deeply understand the problem!
Analysis of Variance (ANOVA) and F statistics .... MADE EASY!!!
392 views • 4 months ago
Learn the intuition behind ANOVA and calculating F statistics! Buy my full-length statistics, data science, and SQL courses here: linktr.ee/briangreco
The Cramer-Rao Lower Bound ... MADE EASY!!!
4.2K views • 5 months ago
What is a Cramer-Rao Lower Bound? How can we prove an estimator is the best possible estimator? What is the efficiency of an estimator? Buy my full-length statistics, data science, and SQL courses here: linktr.ee/briangreco
Outliers in Data Analysis... and how to deal with them!
3.1K views • 6 months ago
Link functions for GLMs... MADE EASY!!!
1K views • 7 months ago
The Uniform Distribution MLE is... very UNLIKELY
658 views • 7 months ago
Bayesian vs. Frequentist Statistics ... MADE EASY!!!
16K views • 7 months ago
Maximum Likelihood Estimation ... MADE EASY!!!
25K views • 7 months ago
The simplest non-parametric test... The Sign Test
1.6K views • 7 months ago
Skewness... MADE EASY!!!
372 views • 7 months ago
Unbiased Estimators ... Made Easy!
5K views • 7 months ago
Can more data be BAD??? (The 10% rule and Finite Population Correction)
292 views • 7 months ago
Inverse Transform Sampling ... MADE EASY!!!
2.4K views • 7 months ago
Regularization... Made Easy!!!
215 views • 7 months ago
Heteroskedasticity and Homoskedasticity... What are they???
2.7K views • 7 months ago
Independent vs Mutually Exclusive Events ... MADE EASY!!!
2.1K views • 7 months ago
Probability vs. Likelihood ... MADE EASY!!!
38K views • 7 months ago
The Method of Moments ... Made Easy!
22K views • 8 months ago

Comments

  • @Jonathan-cn6xz • 8 hours ago

    9:42 Is the log(0.8) meant to be log(1.8)? Nice video btw.

    • @statswithbrian • 8 hours ago

      Yes, exactly, good catch.

  • @brazilfootball • a day ago

    Thank you, but one thing I still don't get about the Bayesian way of thinking: if one is supposed to constantly update a belief via data, then where did the prior come from? How does one quantify a belief based on zero data?

  • @Indioharp • 2 days ago

    Great explanation Brian! I have a small question, though. If the response variable has to be normal (in a normal linear regression), why do you think most statistics articles insist that only the residuals have to be normal and not the variable? What tests do you think should be done before a GLM, besides residual plots?

    • @statswithbrian • 2 days ago

      Saying the response is normal and saying the residuals are normal basically mean the same thing. The response is normal (around the mean for that X value), which just means the response’s distance from the mean (the residual) is normal with mean 0. If we want to evaluate the normality of the residuals, it’s easier to look at a graph of the residuals, since they all have the same mean, so we can easily visualize whether they seem normally distributed.
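
      A minimal Python sketch of this point (the height/weight numbers below are simulated and hypothetical, not the video's data): fit a line by least squares and check whether the residuals look normally distributed.

        # Simulate a linear model with normal errors, fit it by least squares,
        # and assess whether the residuals look normal.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        height = rng.uniform(60, 75, size=100)                     # hypothetical heights
        weight = -440 + 9 * height + rng.normal(0, 10, size=100)   # normal errors around the line

        slope, intercept = np.polyfit(height, weight, 1)           # least-squares fit
        residuals = weight - (intercept + slope * height)

        # The residuals all have mean 0, so their normality is easy to assess directly.
        print(stats.shapiro(residuals))                            # Shapiro-Wilk normality test
        # stats.probplot(residuals, dist="norm", plot=plt)         # or a Q-Q plot with matplotlib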

  • @jingngo8727 • 4 days ago

    Amazing. Enlightening

  • @nimalannagendiran • 5 days ago

    Love the video. These concepts are explained very nicely and in an easy-to-understand manner.

  • @TalgatOmarov-y9z • 6 days ago

    Very clear explanation. Thanks.

  • @st8k490 • 6 days ago

    great content, keep it up

  • @carlosazevedo5079 • 7 days ago

    Is there any reason to have two different ways of solving it? One at 5:43 and another at the end. Which one do you think is better, and why?

    • @statswithbrian • 6 days ago

      They are the same way, least squares. The last part is just a little extra justification on “why” - least squares is very related to normal distributions.
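
      A rough numerical illustration of that connection (simulated data, not the video's example): the intercept and slope that minimize the sum of squared residuals are also the ones that maximize the normal log-likelihood.

        # Least squares and normal maximum likelihood pick out the same line.
        import numpy as np
        from scipy import optimize, stats

        rng = np.random.default_rng(1)
        x = rng.uniform(0, 10, size=50)
        y = 2 + 3 * x + rng.normal(0, 1, size=50)

        def sum_sq(beta):                  # least-squares objective
            return np.sum((y - beta[0] - beta[1] * x) ** 2)

        def neg_loglik(beta):              # negative normal log-likelihood (sigma fixed at 1)
            return -np.sum(stats.norm.logpdf(y, loc=beta[0] + beta[1] * x, scale=1.0))

        b_ls = optimize.minimize(sum_sq, x0=[0.0, 0.0]).x
        b_ml = optimize.minimize(neg_loglik, x0=[0.0, 0.0]).x
        print(b_ls, b_ml)                  # the two estimates agree up to optimizer tolerance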

  • @kates-creates • 7 days ago

    you have great videos !! thank you so much <3

  • @JuhiMaurya-ym3ud • 7 days ago

    Perfect and easiest explanation on YouTube... thank you so much, sir, it is really helpful.

  • @udayteja6595 • 8 days ago

    Great Content

  • @Kwintessential2 • 9 days ago

    Good examples

  • @alaataktokani8884 • 10 days ago

    If I could, I would give 100 likes; the best explanation I have found for this topic so far.

  • @Rotte-o5e • 10 days ago

    Excellent video! It really explains this concept in a simple manner!

  • @yahweh1782 • 10 days ago

    I got a question that says: let 𝑋 be a discrete random variable where the P(𝑋=x) are some functions of 𝜃 (e.g. 𝜃/2, 1-𝜃) for some values x, and it asks "Find the Method of Moments Estimator of 𝜃 by using 𝑛 copies of 𝑋, where 𝑛 > 1." What does "using 𝑛 copies of 𝑋" mean?

    • @statswithbrian • 10 days ago

      A sample of size n. They mean the same thing as in this video - set xbar (which is based on the n data points) equal to E(X), which is a function of theta.
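
      A minimal sketch of that recipe in Python. The pmf below is hypothetical (not the one from the question above); the point is the mechanics: compute xbar from the n data points and solve E(X) = g(theta) for theta.

        # Hypothetical pmf: P(X=0) = theta/2, P(X=1) = theta/2, P(X=2) = 1 - theta,
        # so E(X) = 2 - 3*theta/2.  Setting xbar = 2 - 3*theta/2 gives the estimator below.
        import numpy as np

        def mom_estimate(sample):
            xbar = np.mean(sample)          # first sample moment from the n data points
            return 2 * (2 - xbar) / 3       # solve xbar = 2 - 3*theta/2 for theta

        rng = np.random.default_rng(2)
        theta_true = 0.4
        sample = rng.choice([0, 1, 2], size=1000,
                            p=[theta_true / 2, theta_true / 2, 1 - theta_true])
        print(mom_estimate(sample))         # should land close to 0.4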

  • @amina0218 • 11 days ago

    Please don't stop creating this content! I am so grateful for you! Statistics at universities is not properly explained, unfortunately; you make difficult things seem easy indeed!

    • @statswithbrian • 11 days ago

      Thank you, much more to come! :)

  • @sabelojupiter6081 • 11 days ago

    This is an absolutely incredible video. Amazing information. Please keep doing your thing

  • @angrymedbird • 11 days ago

    Universe!!! It had broken my stone, my head. Thank you!

  • @TheProblemSolvers38 • 12 days ago

    Finally got the point. Not even ChatGPT or Claude explained this topic as easily as you did. Thanks !!

  • @Zaina-w5c • 12 days ago

    Brooo, I am very thankful someone is choosing to explain the things that, on average, every student feels confused and tense about. Thank you very much. It was helpful ❤️

  • @Zaina-w5c • 12 days ago

    Thank you very much bro. This was really helpful, nicely explained ❤️🍁. Have a bright day🌻

  • @Lilac_Pearl • 12 days ago

    Best and clear cut video. It made sense to me. Thanks, man 👏🏽

  • @willbutplural • 12 days ago

    Amazing explanation

  • @sajanator3 • 14 days ago

    Could you do a video on the Neyman-Pearson lemma?

    • @statswithbrian • 14 days ago

      I talk about it a little in this video, and I don't have a lot more to say about it than I included here, so probably not. Is there something in particular about it you're looking to learn?

  • @RoyalYoutube_PRO • 14 days ago

    You could ask which country you are in 😂 Great video tho

    • @statswithbrian • 14 days ago

      The video would have been very short. 😂 I laughed out loud, thanks!

  • @joefish8087 • 14 days ago

    Excellent!

  • @kalidindisridharvarma • 15 days ago

    Seriously, a very clean and simple explanation; please keep continuing these lectures.

  • @vknthp • 16 days ago

    I'm a fan of your videos! So well explained! Keep up the good work :)

  • @williammartin4416 • 16 days ago

    Thanks!

  • @roshanjames4564 • 16 days ago

    Thanks!

  • @ИванВасильев-ж6ъ • 17 days ago

    I am in my second year at university and this topic was explained awfully; I had no idea what was going on. But after watching this, I wonder how my professor managed to explain such an easy thing in such a difficult way. Thank you so much ❤

  • @Utsa-qb5ro • 20 days ago

    Exactly what the title says! Thank you

  • @lbognini • 21 days ago

    05:30 The probability that X=1 is 1. It's certain, since we've already observed it. The question makes no sense simply because it makes no sense to talk about probability for something that has already happened. The same holds true even for discrete probability.

    • @statswithbrian • 21 days ago

      The implication is that our observation X=1 is just one *sample* from a probability distribution of a random variable X that has a distribution, and other samples will not all be X=1. So the P(X=1) is not one, but P(this particular realization of X is 1)=1, yes.

    • @lbognini • 21 days ago

      @statswithbrian I see. You meant to ask what the probability is that the next observation will be 1, given that we already have one observation with X=1.

    • @statswithbrian • 21 days ago

      @lbognini I definitely did not mean that. That is a Bayesian concept (a posterior predictive distribution) that is not related to this.
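
      A small sketch of the distinction made in this thread. The model choice N(theta, 1) is only an illustration (not the video's example): the observed value x = 1 stays fixed, and the likelihood evaluates the same density at that fixed x as the parameter varies.

        # Likelihood of theta given the single observed realization x = 1 under N(theta, 1).
        import numpy as np
        from scipy import stats

        x_obs = 1.0                                   # the fixed, already-observed value
        thetas = np.linspace(-2.0, 4.0, 7)
        likelihood = stats.norm.pdf(x_obs, loc=thetas, scale=1.0)

        for theta, L in zip(thetas, likelihood):
            print(f"theta = {theta:+.1f}  ->  L(theta | x=1) = {L:.4f}")
        # The likelihood peaks at theta = 1, but it is not a probability for X:
        # other draws of X from the same distribution will generally not equal 1.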

  • @rodrigovm • 22 days ago

    Thanks, Mr. Brian

  • @nickmillican22 • 23 days ago

    I don't understand the first "not-complete" example. E[X_1 - X_2] = E[X_1] - E[X_2] = mu - mu = 0, no?

    • @statswithbrian • 23 days ago

      Correct - it’s 0. The definition of completeness tells us that if that expected value is zero, then the function must also be 0 everywhere. But X1-X2 is basically never 0.
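
      A quick numerical check of that point (the choice of N(mu, 1) is only an illustration): the average of X1 - X2 is near 0 for any mu, yet the statistic X1 - X2 itself is essentially never 0.

        # E[X1 - X2] = 0 for every mu, but the function X1 - X2 is almost never 0 itself.
        import numpy as np

        rng = np.random.default_rng(3)
        mu = 5.0                                  # the conclusion is the same for any mu
        x1 = rng.normal(mu, 1, size=100_000)
        x2 = rng.normal(mu, 1, size=100_000)
        diff = x1 - x2

        print(np.mean(diff))                      # close to 0, as the expectation says
        print(np.mean(np.isclose(diff, 0.0)))     # fraction of draws where X1 - X2 is (near) 0: ~0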

  • @ramilkerimzade5568 • 23 days ago

    I liked it.

  • @chocolatemodelsofficial5859 • 25 days ago

    So many people say physics does not apply to the stock market, but after watching this video I must disagree.

  • @lilbacon7777 • 25 days ago

    Great video, sincerely a second year stat major.

  • @chocolatemodelsofficial5859 • 25 days ago

    Great explanation, thanks.

  • @umbraemilitos • 25 days ago

    This content is excellent. Would a better microphone or audio setup be a bad idea?

    • @statswithbrian • 25 days ago

      Working on it. Next videos will be improved.

  • @MinMax-kc8uj • 25 days ago

    I round up. I'm not 70.238749327429 inches tall, I'm 71 and I'm sticking to it.

  • @ramilkerimzade5568 • 25 days ago

    Why do we use the formula (predicted weight = -500 + 10*height)? Why -500 and 10?

    • @statswithbrian • 25 days ago

      We don't - I just made those numbers up for the example for easy computation. In reality, we will use the least-squares regression line (which in this example was the predicted weight = -439.286+8.929*height). That's not obvious at all, you'd have to have the data and compute a regression line.
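
      For reference, a minimal sketch of where such coefficients come from. The height/weight numbers below are made up, so the fitted line will not match the video's -439.286 + 8.929*height; the recipe is the point. With the data in hand, least squares gives slope = cov(x, y) / var(x) and intercept = ybar - slope*xbar.

        # Compute the least-squares line from (hypothetical) height/weight data.
        import numpy as np

        height = np.array([63, 66, 68, 70, 72, 74], dtype=float)      # hypothetical data
        weight = np.array([125, 150, 160, 180, 200, 210], dtype=float)

        slope = np.cov(height, weight, ddof=1)[0, 1] / np.var(height, ddof=1)
        intercept = weight.mean() - slope * height.mean()
        print(f"predicted weight = {intercept:.3f} + {slope:.3f} * height")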

  • @kevon217 • 25 days ago

    Great straightforward intuitive explanation.

  • @jayjain1033 • 26 days ago

    Soo good! Didn't get it in class at all

  • @huanranchen • 27 days ago

    I'm so grateful for all the videos you make that inspire our curiosity!

  • @Kwintessential2 • 27 days ago

    Constant question, what is the probability of??? Happens with recruiting

  • @jakeaustria5445 • 28 days ago

    Thank You😊

    • @statswithbrian • 26 days ago

      Thank YOU! I appreciate the support. :)

  • @MarcoBova • 29 days ago

    Really neat video. Could you do one on estimators and LME?

    • @statswithbrian • 28 days ago

      If by LME you mean linear mixed-effects models, probably not any time soon, but that's a good idea. I have lots of videos on estimators and their properties though (MLE, MoM, Consistency, Unbiasedness, CRLB etc)

  • @carl3260 • 29 days ago

    Hi Brian, nice explanation. I was wondering what a more formal approach to deriving the expression at 13:00 might look like? The intuitive "approach" mentioned, which makes sense, implicitly sums over a (latent) variable indicating which X_i is maximal. But I'm not sure how else you could go about it, e.g. the earlier method calculated p(X_1, max(X_i) = t) / p(max(X_i) = t), but that doesn't seem available, e.g. how is p(max(X_i) = t) defined? Many thanks!

    • @statswithbrian • 29 days ago

      You're right that a similar method as before doesn't seem available to us, because we can't easily separate the maximum in the same way as we can the rest of the sum/mean in the Poisson example. I think one could show a little more work in the example, but I can't think of a method that wouldn't at least require using the law of total probability/expectation in the same way, where we add in the conditioning on whether or not X1 is also the max.
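
      One way to sanity-check the expression without a formal derivation is a quick Monte Carlo estimate. The setup below assumes X_1, ..., X_n iid Uniform(0, theta), which may not match the video exactly, but the same conditioning trick applies: keep only the simulated samples whose maximum lands near t and average X_1 over them.

        # Monte Carlo check of E[X_1 | max(X_i) ~= t] for X_i iid Uniform(0, theta).
        import numpy as np

        rng = np.random.default_rng(4)
        n, theta, t, eps = 5, 1.0, 0.8, 0.005

        samples = rng.uniform(0.0, theta, size=(1_000_000, n))
        maxima = samples.max(axis=1)
        keep = np.abs(maxima - t) < eps                 # condition on max(X_i) being near t

        estimate = samples[keep, 0].mean()              # average of X_1 over those samples
        # Under this uniform setup, X_1 equals the max (= t) with probability 1/n and is
        # otherwise Uniform(0, t), giving E[X_1 | max = t] = t/n + (1 - 1/n) * t/2.
        print(estimate, t / n + (1 - 1 / n) * t / 2)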