Great informative video, thanks! One question - if your ANOVA result is significant, leading you to follow up with post-hoc tests, such as t-tests with some kind of family-wise error rate adjustment like Bonferroni, why bother with the ANOVA in the first place? Couldn't you just run the t-tests and adjust the p-values for the number of tests (as per Bonferroni), making the ANOVA redundant? I appreciate that for lots of groups this would be time-consuming, but is that the only benefit of the ANOVA in that case? I can't see how it offers any extra protection against Type I error if the end result is to run t-tests and report corrected p-values anyway. Appreciate any thoughts! Thanks and keep up the great vids.
That's a good question. Sometimes the post-hoc tests don't yield strong significance individually, but if they are all just a little significant, the ANOVA may still be strongly significant, since it aggregates evidence across all the groups. In that case, a follow-up study with a larger sample size may be warranted to gain enough power to reject the pairwise nulls. Also, strictly speaking, you reject the global null first (that all of the means are identical) before following up with post-hoc tests. If the global null isn't rejected, it's a bit erroneous to follow up with post-hoc testing. It's like saying: we detect no differences among the means, so now we're going to look for differences among the means. That being said, I'd hope that multiple testing correction would avoid the problem of false positives.
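To make the workflow concrete, here's a minimal sketch in Python (scipy), with simulated placeholder data: run the one-way ANOVA first, and only if the global null is rejected, follow up with Bonferroni-corrected pairwise t-tests. The group names, sizes, and threshold are just illustrative assumptions.

```python
from itertools import combinations

import numpy as np
from scipy import stats

# Simulated placeholder data for three groups
rng = np.random.default_rng(0)
groups = {
    "A": rng.normal(10.0, 2.0, size=30),
    "B": rng.normal(10.5, 2.0, size=30),
    "C": rng.normal(12.0, 2.0, size=30),
}

# Global test: are all group means identical?
f_stat, anova_p = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {anova_p:.4f}")

# Post-hoc pairwise t-tests only if the global null is rejected
if anova_p < 0.05:
    pairs = list(combinations(groups, 2))
    m = len(pairs)  # number of comparisons used for the Bonferroni factor
    for a, b in pairs:
        t_stat, p = stats.ttest_ind(groups[a], groups[b])
        p_adj = min(p * m, 1.0)  # Bonferroni adjustment, capped at 1
        print(f"{a} vs {b}: t = {t_stat:.2f}, adjusted p = {p_adj:.4f}")
```

The point of the gate at the top is exactly the one above: the pairwise tests only run once the global null has been rejected.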
Don't know that much about Canada, but by "being from a province", do you mean being born there? What if you move at, say, 2 years of age? Or move more than once? Maybe you could do some sort of blocking in that regard? Or maybe assign each person the province where they've lived the highest percentage of their life?
Moved over to this new channel. Once again, awesome - can't wait to get through all the DOE videos.