Learn Statistics with Brian
Joined Jun 16, 2021
Statistics, made easy.
The easiest way to support the channel is to follow the linktree below to my full-length statistics courses offered on Udemy. The courses provide a more structured way to learn introductory statistics, mathematical statistics, Bayesian statistics, R, and SQL.
What is Least Squares Linear Regression?
What do we mean by least squares, and why do we use it?
Views: 267
Videos
Likelihood Ratio Tests Clearly Explained
537 views · 14 days ago
What is a likelihood ratio test (LRT) in statistics?
The Physics Behind Probability Mass and Probability Density Functions
998 views · 28 days ago
Buy my full-length statistics, data science, and SQL courses: linktr.ee/briangreco Why are they called probability mass and probability density functions? What's the difference between a pmf and a pdf?
Fail to reject the null hypothesis?
687 views · 1 month ago
Why do people say "I fail to reject the null hypothesis"?????
What is a consistent estimator in statistics?
1.2K views · 1 month ago
What does it mean for a statistical estimator to be consistent? Buy my full-length statistics, data science, and SQL courses here: linktr.ee/briangreco
Complete Statistics, Ancillary Statistics, and Basu's Theorem
428 views · 1 month ago
Learn about ancillarity, complete statistics, and Basu’s Theorem! Sufficient Statistics: th-cam.com/video/J-TTqCgRzbM/w-d-xo.html Minimal Sufficient Statistics: th-cam.com/video/IsgteDaNTFk/w-d-xo.html Buy my full-length statistics, data science, and SQL courses here: linktr.ee/briangreco
Minimal Sufficient Statistics Clearly Explained
579 views · 1 month ago
You’ve learned what a sufficient statistic is, but what does it mean to be minimal sufficient? Sufficient Statistics: th-cam.com/video/J-TTqCgRzbM/w-d-xo.html Buy my full-length statistics, data science, and SQL courses here: linktr.ee/briangreco
Chi-squared tests ... MADE EASY!!!
287 views · 2 months ago
Learn about Chi-Squared Tests - Goodness of fit (GOF), Independence, and Homogeneity, oh my! Buy my full-length statistics, data science, and SQL courses here: linktr.ee/briangreco
The Rao-Blackwell Theorem Explained
1.4K views · 2 months ago
Learn about Rao-Blackwellization, a way to use sufficient statistics to improve your statistical estimators! Learn more about Sufficient Statistics and the Factorization Theorem: th-cam.com/video/J-TTqCgRzbM/w-d-xo.html Buy my full-length statistics, data science, and SQL courses here: linktr.ee/briangreco
The EM Algorithm Clearly Explained (Expectation-Maximization Algorithm)
5K views · 2 months ago
Learn all about the EM algorithm, a way to find maximum likelihood estimates in problems with missing data. Buy my full-length statistics, data science, and SQL courses here: linktr.ee/briangreco
p-values explained, in 5 levels of complexity
3.3K views · 3 months ago
Learn all about what p-values are and how to interpret them in statistical hypothesis testing! Buy my full-length statistics, data science, and SQL courses here: linktr.ee/briangreco
What is a degree of freedom?
1.1K views · 3 months ago
Learn about the mysterious, hard-to-interpret number used in many statistical methods, degrees of freedom (df). Buy my full-length statistics, data science, and SQL courses here: linktr.ee/briangreco
The Monty Hall Problem without probability
798 views · 4 months ago
The Monty Hall Problem is a classic probability puzzle with a counter-intuitive result. Instead of using probability trees, let's try to deeply understand the problem!
Analysis of Variance (ANOVA) and F statistics .... MADE EASY!!!
392 views · 4 months ago
Learn the intuition behind ANOVA and calculating F statistics! Buy my full-length statistics, data science, and SQL courses here: linktr.ee/briangreco
The Cramer-Rao Lower Bound ... MADE EASY!!!
4.2K views · 5 months ago
What is a Cramer-Rao Lower Bound? How can we prove an estimator is the best possible estimator? What is the efficiency of an estimator? Buy my full-length statistics, data science, and SQL courses here: linktr.ee/briangreco
Outliers in Data Analysis... and how to deal with them!
3.1K views · 6 months ago
Outliers in Data Analysis... and how to deal with them!
Link functions for GLMs... MADE EASY!!!
1K views · 7 months ago
Link functions for GLMs... MADE EASY!!!
The Uniform Distribution MLE is... very UNLIKELY
658 views · 7 months ago
The Uniform Distribution MLE is... very UNLIKELY
Bayesian vs. Frequentist Statistics ... MADE EASY!!!
16K views · 7 months ago
Bayesian vs. Frequentist Statistics ... MADE EASY!!!
Maximum Likelihood Estimation ... MADE EASY!!!
25K views · 7 months ago
Maximum Likelihood Estimation ... MADE EASY!!!
The simplest non-parametric test... The Sign Test
1.6K views · 7 months ago
The simplest non-parametric test... The Sign Test
Can more data be BAD??? (The 10% rule and Finite Population Correction)
292 views · 7 months ago
Can more data be BAD??? (The 10% rule and Finite Population Correction)
Inverse Transform Sampling ... MADE EASY!!!
2.4K views · 7 months ago
Inverse Transform Sampling ... MADE EASY!!!
Heteroskedasticity and Homoskedasticity... What are they???
2.7K views · 7 months ago
Heteroskedasticity and Homoskedasticity... What are they???
Independent vs Mutually Exclusive Events ... MADE EASY!!!
2.1K views · 7 months ago
Independent vs Mutually Exclusive Events ... MADE EASY!!!
Probability vs. Likelihood ... MADE EASY!!!
38K views · 7 months ago
Probability vs. Likelihood ... MADE EASY!!!
9:42 Is the log(0.8) meant to be log(1.8)? Nice video btw.
Yes, exactly, good catch.
Thank you, but one thing I still don't get about the Bayesian way of thinking: if one is supposed to constantly update a belief via data, then where did the prior come from? How does one quantify a belief based on zero data?
Great explanation Brian! I have a small question, though. If the response variable has to be normal (in a normal linear regression), why do you think most statistics articles insist that only the residuals have to be normal and not the variable? What tests do you think should be done before a GLM, besides residual plots?
Saying the response is normal and saying the residuals are normal mean basically the same thing. The response is normal (around the mean for that X value), which just means the response's distance from the mean (the residual) is normal with mean 0. If we want to evaluate normality, it's then easier to look at a graph of the residuals, since they all have the same mean (zero), so we can easily visualize whether they seem normally distributed.
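The equivalence in the reply above can be checked numerically. This is a hypothetical sketch (the data and coefficients here are made up, not from the video): simulate a response that is normal around a line, fit by least squares, and look at the residuals, which all share mean 0.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: y is normal around the line 2 + 3x, with noise sd = 2
x = rng.uniform(0, 10, size=5000)
y = 2 + 3 * x + rng.normal(0, 2, size=5000)

# Least-squares fit (np.polyfit returns highest-degree coefficient first)
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (intercept + slope * x)

# Because the model includes an intercept, the residuals have mean exactly 0,
# so a single histogram/QQ plot of them checks normality for all x at once.
print(residuals.mean())  # ~0 up to floating point
print(residuals.std())   # ~2, the noise sd
```

Plotting `residuals` in one histogram is what makes this view convenient: the raw responses have a different mean at every x, but the residuals do not.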
Amazing. Enlightening
Love the video. These concepts are explained very nicely and in an easy-to-understand manner.
Very clear explanation. Thanks.
great content, keep it up
Is there any reason to have two different ways of solving it? One at 5:43 and another at the end. Which one do you think is better and why ?
They are the same way, least squares. The last part is just a little extra justification on “why” - least squares is very related to normal distributions.
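The connection mentioned above (least squares is closely related to normal distributions) can be seen with a small grid search on made-up data: the slope that minimizes the sum of squared errors is exactly the slope that maximizes a normal log-likelihood, because the log-likelihood is just a negative multiple of the SSE plus constants. Everything here (the data, the fixed intercept, sigma = 1) is a hypothetical simplification.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical data from a known line, intercept held fixed for simplicity
x = rng.uniform(0, 10, size=200)
y = 1.0 + 2.0 * x + rng.normal(0, 1, size=200)

slopes = np.linspace(1.5, 2.5, 1001)
sse = np.array([np.sum((y - 1.0 - b * x) ** 2) for b in slopes])
# Normal log-likelihood with sigma = 1, dropping constants: -0.5 * SSE
loglik = np.array([np.sum(-0.5 * (y - 1.0 - b * x) ** 2) for b in slopes])

# Minimizing squared error and maximizing the normal likelihood pick the same slope
print(slopes[sse.argmin()] == slopes[loglik.argmax()])  # True
```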
you have great videos !! thank you so much <3
Thank you! :)
Perfect and easiest explanation on YouTube... thank you so much sir, it is really helpful
Great Content
Good examples
❤
if I can i will give a 100 likes, best explanation that i have found for this topic so far
Excellent video! It really explains this concept in a simple manner!
I got this question that says let 𝑋 be a discrete random variable where P(𝑋=x) are some function of 𝜃(e.g. 𝜃/2, 1-𝜃) for some number x, and it ask "Find the Method of Moments Estimator of 𝜃 by using 𝑛 copies of 𝑋, where 𝑛 > 1.", what does " using 𝑛 copies of 𝑋" means?
A sample of size n. They mean the same thing as in this video - set xbar (which is based on the n data points) equal to E(X) which is a function of theta
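The procedure in the reply above can be sketched in a few lines. The question's exact pmf is elided, so this uses a hypothetical one, P(X=1) = θ and P(X=0) = 1-θ, for which E[X] = θ and setting x̄ = E[X] gives the method-of-moments estimator θ̂ = x̄.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pmf: P(X=1) = theta, P(X=0) = 1 - theta, so E[X] = theta.
# "n copies of X" just means a sample of size n from this distribution.
theta_true = 0.3
n = 10_000
sample = rng.binomial(1, theta_true, size=n)

# Method of moments: set xbar equal to E[X] = theta and solve for theta
theta_hat = sample.mean()
print(theta_hat)  # close to 0.3
```

For a pmf where E[X] is a less trivial function of θ, the only change is algebraic: solve x̄ = E[X](θ) for θ.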
Please, don't stop creating the content! I am so grateful for you! The stats at universities are not properly explained, infortunately, you make difficult things seem easy indeed!
Thank you, much more to come! :)
This is an absolutely incredible video. Amazing information. Please keep doing your thing
Universe!!! It had broken my stone, my head. Thank you!
Finally got the point. Not even ChatGPT or Claude explained this topic as easily as you did. Thanks !!
Brooo, I am very thankful someone is choosing to explain the topics where on average every student feels confused and tense. Thank you very much. It was helpful ❤️
Thank you very much bro. This was really helpful, nicely explained ❤️🍁. Have a bright day🌻
Best and clear cut video. It made sense to me. Thanks, man 👏🏽
Amazing explanation
Could you do a video on the Neyman Pearson lemma ?
I talk about it a little in this video, and I don't have a lot more to say about it than I included here, so probably not. Is there something in particular about it you're looking to learn?
You could ask which country you are in 😂 Great video tho
The video would have been very short. 😂 I laughed out loud, thanks!
Excellent!
Seriously very clean and simple explanation , keep continuing these lectures
I'm a fan of your videos! So well explained! Keep up the good work :)
Thank you!
Thanks!
Thank you!
Thanks!
Thank you!
I am in my second year at uni and this theme was explained awfully; I had no idea what was going on. But after watching, I wonder how my professor managed to explain such an easy thing in such a difficult way. Thank you so much ❤
Exactly what the title says! Thank you
05:30 The probability that X=1 is 1. It's certain, since we've already observed it. The question makes no sense, simply because it makes no sense to talk about the probability of something that already happened. The same holds true even for discrete probability.
The implication is that our observation X=1 is just one *sample* from a probability distribution of a random variable X that has a distribution, and other samples will not all be X=1. So the P(X=1) is not one, but P(this particular realization of X is 1)=1, yes.
@@statswithbrianI see. You meant to ask what is the probability that the next observation will be one given that we have already one observation with X=1.
@lbognini I definitely did not mean that. That is a Bayesian concept (a posterior predictive distribution) that is not related to this.
thanks Mr. Brian
I don't understand the first "not-complete" example. E[X_1 - X_2] = E[X_1] - E[X_2] = mu - mu = 0, no?
Correct - it’s 0. The definition of completeness tells us that if that expected value is zero, then the function must also be 0 everywhere. But X1-X2 is basically never 0.
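A quick simulation makes the point in the reply concrete (assuming, as a hypothetical, X1 and X2 iid normal with the same mean): the expected value of X1 - X2 is 0, yet X1 - X2 is essentially never equal to 0, which is exactly why the statistic is not complete.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assume X1, X2 iid N(mu, 1); the expectation argument works for any common mean
mu = 5.0
x1 = rng.normal(mu, 1, size=100_000)
x2 = rng.normal(mu, 1, size=100_000)
diff = x1 - x2

print(diff.mean())           # ~0: E[X1 - X2] = mu - mu = 0 for every mu
print(np.mean(diff == 0.0))  # 0.0: yet the random variable X1 - X2 is almost never 0
```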
i liked it
So many people say physics does not apply to the stock market, but after watching this video I must disagree.
Great video, sincerely a second year stat major.
Great explanation, thanks.
This content is excellent. Would a better microphone or audio setup be a bad idea?
Working on it. Next videos will be improved.
I round up. I'm not 70.238749327429 inches tall, I'm 71 and I'm sticking to it.
why we use a formula of (predicted weight = -500+10*height)? Why 500 and 10?
We don't - I just made those numbers up for the example for easy computation. In reality, we will use the least-squares regression line (which in this example was the predicted weight = -439.286+8.929*height). That's not obvious at all, you'd have to have the data and compute a regression line.
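The "compute a regression line from the data" step in the reply can be sketched as follows. The data here is simulated (the video's actual dataset isn't available), so the fitted coefficients won't reproduce -439.286 and 8.929 exactly; the point is that the least-squares line minimizes the sum of squared errors, so it can never do worse than a made-up line like -500 + 10*height.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical height (inches) / weight (lbs) data
height = rng.uniform(60, 76, size=200)
weight = -439.286 + 8.929 * height + rng.normal(0, 15, size=200)

def sse(intercept, slope):
    """Sum of squared errors for a candidate line."""
    return np.sum((weight - (intercept + slope * height)) ** 2)

# Least-squares fit (np.polyfit returns slope first for degree 1)
slope_ls, intercept_ls = np.polyfit(height, weight, 1)

# By construction, the least-squares line minimizes SSE over all lines
print(sse(intercept_ls, slope_ls) <= sse(-500, 10))  # True
```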
Great straightforward intuitive explanation.
Soo good! Didn't get it in class at all
I'm so grateful for all the videos you make that inspire our curiosity!
Thank you! :)
Constant question, what is the probability of??? Happens with recruiting
Thank You😊
Thank YOU! I appreciate the support. :)
really neat video, could you do one on estimators and LME?
If by LME you mean linear mixed-effects models, probably not any time soon, but that's a good idea. I have lots of videos on estimators and their properties though (MLE, MoM, Consistency, Unbiasedness, CRLB etc)
Hi Brian, nice explanation. I was wondering what a more formal approach to deriving the expression at 13:00 might look like? The intuitive "approach" mentioned, which makes sense, implicitly sums over a (latent) variable indicating which X_i is maximal. But I'm not sure how else you could go about it, e.g. the earlier method calculated p(X_1, max(X_i) = t) / p(max(X_i) = t), but that doesn't seem available, e.g. how is p(max(X_i) = t) defined? Many thanks!
You're right that a similar method as before doesn't seem available to us, because we can't easily separate the maximum in the same way as we can the rest of the sum/mean in the Poisson example. I think one could show a little more work in the example, but I can't think of a method that wouldn't at least require using the law of total probability/expectation in the same way, where we add in the conditioning on whether or not X1 is also the max.
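One way to back up the intuitive argument without a closed-form derivation is Monte Carlo: condition on the maximum landing in a small bin around t and average X1. This sketch assumes the X_i are iid Uniform(0, θ) (an assumption, since the original example isn't fully specified here); the intuitive formula is E[X1 | max = t] = (1/n)·t + ((n-1)/n)·(t/2) = t(n+1)/(2n), since X1 is the max with probability 1/n and is otherwise uniform on (0, t).

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed model: X_i iid Uniform(0, theta); check E[X1 | max = t] = t(n+1)/(2n)
n, theta, t = 5, 1.0, 0.8
reps = 400_000
samples = rng.uniform(0, theta, size=(reps, n))

maxes = samples.max(axis=1)
near_t = np.abs(maxes - t) < 0.01      # condition on max falling near t
estimate = samples[near_t, 0].mean()   # average X1 over those samples

print(estimate)  # close to t*(n+1)/(2n) = 0.48
```

The binning step is the simulation analogue of the law of total probability argument: within the bin, roughly 1/n of the retained samples have X1 as the max.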