What a time to be alive... just open YouTube and get quality educational content to procrastinate from your statistics lectures. Thank you!
I minored in Statistics and I always wondered how we would handle data that doesn’t follow a certain distribution. I’m glad I stumbled on this video
nonparametrics sounds like a branch of the SCP Foundation
I appreciate everything you do in these videos. A sincere thank you, you helped me a lot.
This was very well explained. Thanks for sharing your insight!
Bootstrap is life
Bootstrap is love.
@@ilusoriob Bootstrap is joy
I've heard some people argue that rank-based nonparametric methods are not very useful because you aren't measuring the data, but the ranks of the data, which is a fundamentally different problem.
What do you make of this debate?
I've seen Wasserman's "All of Nonparametric Statistics" cited as providing alternatives and support for that contention.
Disclaimer: I have not thought a lot about this, but here’s my two cents.
I think it's a fair issue to bring up, especially when the specific values of the data have real-world meaning. For example, I'd want a hypothesis test on my blood pressure to say something about my blood pressure, not its rank relative to others.
Overall, it's still a valuable tool because I view working with a transform of the data as better than totally ignoring the assumptions of a hypothesis test.
The answer is that signed ranks allow us to determine whether or not there are statistically significant effects, even when minorities are present in samples. This is something the Central Limit Theorem can't handle, because when using the Central Limit Theorem you can't tell whether an unlikely sample mean is caused by minorities or by statistically significant effects. This forces you to decide either that the minorities don't matter or that the majority doesn't matter -- it forcibly discriminates against groups that are different from one another. Signed ranks solve this problem, so that you can test hypotheses without discriminating against minorities.
@@very-normal You need to make the judgment call on which to use based on the situation. In personal medicine, parametric tests make sense because your own body doesn't benefit from signed ranks. But anything involving diverse communities of several people of different backgrounds would benefit from nonparametric techniques.
Likewise, machine learning involving classification of diverse objects would benefit from nonparametric techniques -- such as k-means clustering.
Cool! Excited for part 2.
Loving the pharma twist to this video
Excellent video, but at 4:15 shouldn't it be the probability density function rather than the CDF? Because the CDF is strictly increasing.
Yeah you’re right, the notation is for a general CDF. I chose to show the PDF instead since it’s easier to see the symmetry but I should have had another bit of notation there to connect that
@@very-normal yes it's just a detail, anyway the video is super clear
Question!! How could you test if the “typical non-work watch time” was either significantly less than or greater than the 60 min?
(Let’s say you get mad at your employees for watching on the clock, but in reality they watch near 0 min which is causing the low p-value)
I could specify in wilcox.test that I'd like a one-sided test via one of its arguments. By default, it goes with a two-sided test.
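For a rough sketch in R (watch_times is just a made-up vector name here, not from the video's code), the alternative argument is what controls the direction:

wilcox.test(watch_times, mu = 60, alternative = "less")     # is the typical time below 60 min?
wilcox.test(watch_times, mu = 60, alternative = "greater")  # or above 60 min?
wilcox.test(watch_times, mu = 60)                           # default: two-sided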
@@very-normal Faster response time than most of my professors! I appreciate you and your amazing stats content!! Thanks :)
The Central Limit Theorem *always* applies. But, it *also* marginalizes different groups and minorities in the population. And for that reason, I do prefer non-parametric models.
Would love some content on complex linear models, mixed linear models and all that. But maybe you'd have to start with general linear models first.
Yeahhh, it might be a while before I get to the more complex linear models, but I’ll definitely get to them since they’re so commonly used
@@very-normal Subscribed so I can catch those, keep up the good work, love the format of the videos!
You're a great teacher!
You should also cover nonparametric regression stuff, like smoothing
I think that would be cool! You mean something like kernels or splines, yeah?
@@very-normal yes
Which software do you use to show the formulas with the animations and the graphs, curves, etc.?
I use manim for those!
Will you cover Dempster-Shafer theory in the future?
Wait a second, was hypothesis testing P(param | data) being proportional to P(data | param) (by Bayes) all along? Makes sense I suppose, you do that in maximum likelihood estimation I think. This seems like the instantaneous version, where you're judging one case before moving to a more likely param candidate? (single cost evaluation rather than a whole optimisation?)
I'd like to know more about where such a statistic was derived from. I'm not an expert, but it seems like a sort of intuitive (almost back-of-the-envelope-ish) way to get the behaviour you described at 6:47.
Pretty sure I get an entire class on these and SEMs next semester
Good luck! SEM was rough for me when I took it 💀
What is SEM?
It stands for “structural equation modeling”, it’s often used with latent variables, which are common in fields like psychology
What if you need to watch a YouTube tutorial? It would still count as a non-work site, wouldn't it?
Thank you bro. This is helpful
AWESOME VIDEO
Thanks man. Keep it up.
Opinion about "All of Nonparametric Statistics" by Wasserman?
Also, any suggestions on Bayesian / Monte Carlo methods??
I haven’t read all of it, but I have it as a reference! I like his work overall though.
Not sure about Monte Carlo, but my usual rec for Bayesian stuff is Bayesian Data Analysis by Gelman
To this day I can't understand this "thing" with the t-test, data that's not normally distributed, and using the Central Limit Theorem ((( What does it mean to have a big bunch of data that's not normally distributed?
Just bootstrap it a million times, get a new set of bootstrapped means, which will be normally distributed, and use a t-test on them?
What’s the thing you don’t understand?
@@very-normal Is the following correct?
I have a lot of data, like 10 million data points for group A and the same for group B, but they are not normally distributed.
Do I just re-sample it, for example 1000 times with a sample size of 500, calculate the mean, and get new data sets of "sample means", which by the CLT will be normally distributed? And then I just apply a t-test?
If that is correct, what is the threshold for "a lot of data to use the CLT", and how do I correctly choose a sample size and the number of times to re-sample?
Thank you kindly.
Ah okay I see, in that case, you’d probably be okay using a bootstrap hypothesis test instead of a t-test. Based on your resampled data, you can calculate something like the median, which will be more robust to that non-normality. The bootstrap also requires a lot of data to get good results, but it’s also a nonparametric method. As for how many to bootstrap, usually people are fine with a few thousand. These resamples are easily done by a computer so getting a lot of them is not hard, it’s that sample size that’s usually the limiting factor.
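To make that concrete, here's a minimal sketch in R, assuming your two samples sit in vectors a and b (made-up names): it builds a percentile bootstrap interval for the difference in medians, and the informal "test" is whether that interval covers 0.

set.seed(1)      # for reproducibility
B <- 5000        # a few thousand resamples is usually enough
boot_diffs <- replicate(B, median(sample(a, replace = TRUE)) - median(sample(b, replace = TRUE)))
quantile(boot_diffs, c(0.025, 0.975))  # 95% percentile interval; does it cover 0?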
To be completely honest, when you have “a lot” of data, there are more options for you in terms of tests.
The question of how much data is enough to assume CLT works is a difficult one. Each dataset has its own quirks (effect size you want, skewness, variance, missingness, etc) so a general answer is hard to give. The best general advice I can give is to test out realistic situations in simulation to see what gives you good type-I error control.
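As a small example of what I mean by checking in simulation (the numbers here are made up, not tailored to your data): generate skewed data where the null is actually true, run the t-test many times, and see how often it falsely rejects at the 5% level.

set.seed(1)
n <- 30                         # hypothetical sample size; try a few values
pvals <- replicate(5000, {
  x <- rexp(n)                  # skewed (exponential) data with true mean 1
  t.test(x, mu = 1)$p.value     # the null mean of 1 is correct here
})
mean(pvals < 0.05)              # close to 0.05 means the CLT is doing its job at this n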
@@very-normal The "fun" part is... my example guided you in another direction, so I still have no clue what the assumption about the CLT for the t-test in your video means... (((
lol no worries, can you clarify for me which assumption you’re referring to? Do you mean the assumption that we have enough data to use CLT for t-test?
What would you recommend for students who can't afford a license to use R or MATLAB environments?
R is free tho! Also you could go with Python
R is completely free for everyone, same with RStudio
edit: so are Python, Julia, and VSCode
I don't have a lot of knowledge in statistics, so this question might sound dumb. The only thing we've assumed about the distribution to perform this test is that the distribution is symmetric, right?
Yes! And also that it’s continuous
great video
for the algo!!
✊
for the algo!!!!
very helpful!
Nonparametric methods are not applied a lot in practice and don't seem to be preferred much in research papers. Why? Parametric analysis seems to be the typical go-to.
This is my opinion, but I think a lot of it comes from unfamiliarity and unawareness that the methods even exist, especially among non-statistician researchers. I’ve seen some researchers use it, but it is not that common. There are other reasons concerning power & efficiency, but I think most people just don’t think about them
The interpretation of a non-parametric test also tends to be less intuitive and useful to researchers than a parametric test. Bootstrapping would be a good alternative, though most people tend to forget they have it in their bag of tools (including myself)
you are 1 week too late i already flunked my nonparametric statistics midterm exam 😅
my b, i gotchu for the final
The ACTUAL name of the sgn(x) function is 'signum.' It's Latin for 'sign' or 'mark.'
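For what it's worth, base R happens to have this built in as sign(); for example:

sign(c(-2.5, 0, 3.1))  # returns -1  0  1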
🆒
Love your vids, but isn't it theta-nought, as in zero, not theta-not?
lol yeah I know, I tried going for a pronunciation type thing but I don't think it worked out 😅
@@very-normal I took it as a choice reflective of the no-frills, and approachable non-elitist attitude toward a difficult subject where the content is what matters
fix your mic
on it