Hi Daniel... I've been thoroughly enjoying your videos. I'm slightly ashamed to admit that I didn't understand any of these things as a student; I just memorized them for exams. But what I'm really concerned about is that your videos seem to be for school kids, while I learned most of this in undergrad and post-grad courses. How old are these kids? Do all American kids study statistical inference and hypothesis testing in high school? Then why do UG 101 courses go over the same stuff again?
Even the physics videos seem quite tough.
Hi Deepika, I'm glad you enjoy the videos! (Some are better than others, lol.) That's a good question. That one video about statistical power really is aimed at a graduate level. But all of the other videos on my channel are for high school students (11th and 12th grade).
Except for that statistical power video, the statistics content is mostly for Biology students in the International Baccalaureate program. They have to do hypothesis testing in their Internal Assessment report, but they don't actually learn any inferential statistics, and it's not in their curriculum. They're in a tough position, so I tried to put together some videos to give them enough intuitions and basic skills to do a t-test or a chi-squared test. (It's still on my list to do a video on chi-squared tests.)
Almost all of the physics content is for high school students in the International Baccalaureate physics course. And you're right--it is a VERY tough class!! The highest possible grade is a 7, and to get that grade, students historically only have to score ~65% overall on the test. (Only ~8% of students around the world score that high.)
Comparatively, I think I put more emphasis on conceptual understanding than is usual. I always thought a strong grasp of the big-picture framework strengthened students' ability to apply equations and solve problems. (It's *definitely* how I learn, personally.) So I suspect there's a bit more of that in my videos than is typical. But in general, I don't think US high school students are learning statistical inference.
@@danielm9463 Thanks, Daniel. Glad to hear that it's 11th and 12th grade. After 10th grade I took the option to drop science completely, and channelled my analytical energy towards eco/stats/accounting. But now I understand why college classes were a breeze for the science classmates.
Your students are lucky to have you. I don't know if they're old enough to appreciate how much harder it is to structure a lecture that aims for conceptual clarity than a class that uses definitions and neatly designed examples. Personally, the one video in which you start with blank paper, write things out, strike out some, change your mind in the middle of a thought... was the best medium for concepts. I know others complained about the 2-second focus problem, but I disagreed. There is something about forming an idea, growing it, on paper/blackboard that helps learning in a way that neat slides/animations cannot replicate. I wonder what the 15-16 year old kids prefer.
Over 90% of your videos are not for me (I might muster up the courage and click on the physics ones one day), and I need to scroll a lot to find something, but it's really worth it when I find one. They're all very very good. And you've been doing this for nine years, that's really impressive!
@@deepika9186 Thank you for the kind words!! I agree with you completely. You'll have to post some econ/stats videos so that I can learn from you too! For 15-16 year olds, what I learned over time is that the most valuable part of a lesson is when they try it out themselves. Even a perfect video that explains everything clearly will still leave them with some confusion, lingering misconceptions, sections where they lost their focus, and content they just didn't fully appreciate. Getting into the work, making mistakes, and correcting their errors is probably the most important part of them learning successfully. I think a huge part of effective teaching is creating a space where students can engage in that messy learning process (embrace their mistakes, ask questions openly, and believe that they get smarter with hard work).
@@danielm9463 Oh wow I totally agree about messy learning; things I've learned through tedious elimination, or trial and error, or just lots of frustrated repetition are embedded.
If I know a topic, I can transfer it to someone else clearly and simply - but only in person, not for an open audience. Maybe because I depend a lot on cues, even if only an expression that says "oh no you've confused me even more than when we started" because then I can start all over again (or proceed and build on the idea, as the case may be). Even on the phone I find it difficult when a friend stays quiet or is too polite to say they didn't understand / I'm not explaining well. You have a lot of practice, repeatedly, and with different types of kids - so you know which explanations work, and obviously you've sharpened them enough to have internalised the best parts and the best flow.
I have a style I often use, which is to build different blocks, keep each aside, and then bring them all together at the end. Literature has a tool called the "unreliable narrator" and sometimes I find it really useful to deliberately say something wrong in one of the blocks, and then at the end correct it: "Remember I said abc? But no, it's wrong to say that, because abc and xyz cannot be true at the same time... Instead..."
This is a long-winded way of saying I can't make a good eco/stats video. There are a few good stats ones (and a lot of really bad ones that cover stats without explaining anything, just statement after statement) because data science is the it thing now. Economics, I feel, sounds easy to understand in a video, often animated, but then ten minutes later that content is difficult to retain/recall. In my opinion, the subject lends itself better to the written form and reflection because it is, after all, mostly observed/empirical and covered daily in the news. I'm just realising that I've hardly ever reached for eco videos on TH-cam!!
If you are ever curious about a topic or a news story and want it broken down, will you ask me? I can then either try to explain it with what I know, or look for the right answer and try to convey it to you.
@@deepika9186 I will! Also, you've probably encountered it before, but my absolute favorite channel for stats is stat quest: th-cam.com/channels/tYLUTtgS3k1Fg4y5tAhLbw.html He covers graduate level topics, but breaks everything down. I find it really helpful for building intuitions--a nice supplement to the typical in-depth study one gets in grad school.
Seriously--the statistic for the hypothesis being true is 5% or less, and that means the null hypothesis is true if the hypothesis is not likely to occur...I am so confused. UGH!
@@heatherbingaman5693 let's imagine a different scenario: you're testing whether 4-year-olds have the same mean height as 40-year-olds. The null hypothesis states the two groups have the same mean height (which is preposterous, of course). One way to rebut the null hypothesis is to randomly sample one hundred 4-year-olds, randomly sample one hundred 40-year-olds, and measure their mean heights. We find that 40-year-olds have a mean height of 65 inches and 4-year-olds have a mean height of 35 inches. If the null hypothesis were true, then the only way we could measure this large a gap is through an incredibly rare statistical fluctuation--we just happened to select the tallest 40-year-olds and the shortest 4-year-olds. This is, of course, an absurd proposition. It's much more likely that the null hypothesis was wrong to begin with--that's a far more plausible idea than the notion that our random sample just happened to produce a 30-inch difference when there's really no difference in the broader population.
In other words, here's how we use our data to rebut the null hypothesis: (1) we assume the null hypothesis is true, (2) we calculate how probable our data are under the assumption that the null hypothesis is true, and (3) if the null hypothesis deems our data an incredibly rare 1-in-20 event, then we conclude that the null hypothesis was a bad assumption to begin with and we reject H0. That is, H0 is a bad assumption that should be rejected if it deems our data statistically "impossible" (or even just highly improbable). On the other hand, something like p=0.60 would not be such a rare event that we overturn the null hypothesis, since p=0.60 corresponds to a statistical deviation from 0 that happens 60% of the time.
The basic idea is this: we live in a noisy world, and random fluctuations cause us to occasionally observe differences in our sample when, in reality, the two groups we're comparing are actually the same. Our default position is to ignore these insignificant differences and assume they don't indicate anything strange/unusual, and they don't point to any real underlying differences. But now and then, we measure significant differences that are so rare we must reject the default null hypothesis that the groups are the same.
@@heatherbingaman5693 one other point that may help. You said "the statistic for the [alternative] hypothesis being true is 5%." That's not what the p-value is. The p-value isn't the probability that the alternative hypothesis is true. It's also not the probability that the null hypothesis is true. The p-value is how likely the null hypothesis thinks your DATA are. If p is very low, the null hypothesis is claiming your data should almost never happen--and the fact that your data *did* happen is exactly what discredits the null hypothesis.
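If seeing it computed helps, here's a toy Python version of the 4-year-old vs. 40-year-old comparison from the previous reply (the specific numbers are invented for illustration, and it assumes numpy and scipy are available):

```python
# Toy two-sample t-test: do 4-year-olds and 40-year-olds have the same mean height?
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
four_year_olds = rng.normal(35, 2, 100)   # hypothetical heights (inches)
forty_year_olds = rng.normal(65, 3, 100)  # hypothetical heights (inches)

result = stats.ttest_ind(four_year_olds, forty_year_olds)
print(result.pvalue)  # astronomically small: H0 ("same mean height") can't explain a 30-inch gap
```

That p-value is the probability the null hypothesis assigns to our DATA--not the probability that either hypothesis is true.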
When you created your null hypothesis you stated men = women. What about the possibility that women are taller than men? That condition does not fit in H0 or H1??
That's a great question. There will be different answers (with increasing nuance) depending on the audience. This video was made for IB Biology and IB Psychology students, who are in high school and have to use statistics but really haven't learned it formally. If you're in that group, think of "your hypothesis" as a traditional scientific hypothesis rather than a statistical alternative hypothesis H1.
But it sounds like you might be a statistics student, or you're just thinking about this from a higher level than a typical IB high school student. So, here's the subtler answer: whether H1 encompasses or excludes µ_women > µ_men depends on whether we're doing a one-sided test or a two-sided test. If we don't care about the possibility that men are *shorter*, then we can do a one-sided test, and our alternative hypothesis would be µ_men > µ_women. This would require very compelling reasons and strong justification from the relevant scientific field. (For example, in a clinical trials setting, non-inferiority trials aim to win approval for a new drug/therapy by showing that it's *not worse* than an existing approved drug/therapy. This results in a one-sided test where we aren't trying to establish that µ_new > µ_old--only that µ_new isn't meaningfully worse than µ_old.)
Back to the example of height. Let's say we don't have any good reason for ignoring the possibility that µ_men < µ_women. In that case, the statistical hypotheses in our t-test would be: H0: µ_men = µ_women and H1: µ_men ≠ µ_women. Now, the statement from the video "your hypothesis: men are taller" could really only be a scientific hypothesis in the traditional sense of the scientific method. (The IB Bio students in particular often learn that they must include a hypothesis in their IA report.) In that circumstance, the scientific hypothesis doesn't exactly match the statistical alternative hypothesis H1. This might be more problematic for advanced research work, because the statistical test won't bear fully on the research question/hypothesis. But for a high school class, it's fine for the scientific hypothesis to not perfectly match the statistical hypothesis. (After all, other statistical analysis might be included in the report.)
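If you want to see the one-sided/two-sided distinction in action, here's a minimal Python sketch (the heights and sample sizes are made up for illustration, and it assumes a reasonably recent scipy):

```python
# One-sided vs. two-sided two-sample t-test on hypothetical height data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
men = rng.normal(70, 3, size=30)    # hypothetical male heights (inches)
women = rng.normal(66, 3, size=30)  # hypothetical female heights (inches)

# Two-sided: H1 is mu_men != mu_women (a difference in EITHER direction counts).
t, p_two_sided = stats.ttest_ind(men, women, alternative="two-sided")

# One-sided: H1 is mu_men > mu_women (we deliberately ignore "men shorter").
t, p_one_sided = stats.ttest_ind(men, women, alternative="greater")

print(p_two_sided, p_one_sided)
```

Because the t-distribution is symmetric, when the observed difference points in the tested direction the one-sided p-value is half the two-sided one--which is exactly why choosing a one-sided test needs strong prior justification.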
This video was very well presented and extremely helpful! Thank you!
You are still helping people 5 years later! This is the most straightforward explanation of t tests I’ve ever seen, thank you! Keep up the great work
6 years later here! Thank you for this video
This is the clearest explanation of the p-value I have ever heard. Thank you for making this!
I love the way you explain this. How original. These are some of the questions I ask but haven't got a clue how to answer.....you however have provided me with those answers. Thank you👍🏿
Thanks for the kind message Jenny!! If you have any stats questions, feel free to post them to any of these videos and I'll try to reply.
@@danielm9463 Thank you I will be subscribing to maintain my knowledge.
I understood the t-test and statistics more in 8:57 minutes here than in a whole year at university
Great video! Explained the tricky concept quite easily
I'm confused. If a p-value states how often the difference occurs and the p-value is very low, that would mean the difference occurs very rarely and most of the time there is no difference. Would this not rather support the 0-Hypothesis?
Good question. This statement is not quite correct: "a p-value states how often the difference occurs." The p-value isn't how often a difference ACTUALLY occurs. The p-value is how often a difference WOULD occur IF the null hypothesis were true. You can think about it this way: the p-value is how often the null hypothesis THINKS your data would be produced in an experiment.
Imagine your teacher flips a coin. We will let the null hypothesis be that the coin is fair. (i.e., H0: the probability of heads is equal to the probability of tails.) The teacher flips the coin and it comes up heads. The null hypothesis says "that's a fair coin, and the data don't conflict with me. I expect the coin to come up heads p=0.50 or 50% of the time." Your teacher flips it again, and it comes up heads again. The null hypothesis says "that's a fair coin, and the data don't conflict with me. I expect two consecutive heads p = (1/2)(1/2) = 0.25 or 25% of the time." The teacher keeps flipping the coin, and every time, it comes up heads. The null hypothesis continues to insist that the coin is fair and that we're simply observing an extremely rare event. After 6 consecutive heads, the null hypothesis says "the p-value is p = (1/2)^6 = 0.02 or 2%. That's a fair coin. I expect to see 6 consecutive heads in 2% of experiments." In this example, the only way the null hypothesis can explain our data is by calling it an extremely rare statistical fluke that occurs in p=0.02 or 2% of experiments.
The coin example demonstrates how, as the p-value gets lower, we become less trusting of the null hypothesis. Here's the important part: is 2% *actually* the probability of flipping 6 straight heads? Well, only if you think the null hypothesis is true. But why would we believe such a null hypothesis? A much more reasonable hypothesis would be that the coin has heads on both sides. This alternative hypothesis would say that there's a 100% chance of observing 6 straight heads. So the fact that the null hypothesis assigns a low p-value to our data makes us question the null hypothesis, and we start believing there might be some other explanation for the data other than "your results are just a statistical fluke."
At the core of this argument is the idea that, in general, our experimental results aren't statistical flukes. Any hypothesis that repeatedly characterizes experimental data as a fluke is a bad hypothesis. We can imagine a null hypothesis which states "gravity doesn't usually pull things down; usually it pulls things up, and every time you do an experiment, you just happen to observe the rare cases where gravity pulls down. But in a majority of cases, gravity pulls up." Such a hypothesis would be absurd because it can only explain the empirical evidence by calling it a fluke.
(That said, on rare occasion, experiments *can* produce fluke-y results, which is why it's important for experiments to be repeated by other researchers and supported by theory.)
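If it helps to see the arithmetic, here's a tiny Python sketch of the coin example (it assumes scipy is installed):

```python
# The p-value a fair-coin null hypothesis assigns to a run of consecutive heads.
from scipy.stats import binomtest

for n in range(1, 7):
    # P(at least n heads in n flips | fair coin) = (1/2)^n
    p = binomtest(k=n, n=n, p=0.5, alternative="greater").pvalue
    print(f"{n} consecutive heads: p = {p:.4f}")
# After 6 heads, p = (1/2)^6 = 0.0156 -- the ~2% figure from the example above.
```

Each new head halves the p-value, which is exactly the "less and less trusting of the null" progression described above.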
The p-value is just the probability that the set of numbers (your data) would be generated by the distribution under the null hypothesis; the higher the p-value, the more plausible it is that the data were generated under the null hypothesis.
We also choose a threshold, to decide (with as few false negatives/positives as possible) whether we retain or reject the null hypothesis.
I salute you, man, for your creativity and explanation. I do not know why you are getting so few views; I will definitely share this.
Watched many videos about p-values trying to understand. All of them are confusing. This is the clearest one. I like the place where you kept reducing the p-value and insisted “you are just being really lucky”. That tells me why we need to reject the null hypothesis. Good job 👍🏼!
It's a confusing concept that requires lots of repetition! It'll be confusing again, and then you'll have to figure it out again, and this process kind of repeats for a while. If you have any questions, feel free to ask!
This was really easy to follow and comprehensive, just what I needed for my own statistical research. Thank you!
I needed a quick recap for my MSc thesis after not doing stats for almost 5 years since undergrad, thank you! Very helpful recap before working with my data :)
Folks, I watched more than a thousand videos on t-tests and p-values but got confused. This one explained everything I need to know in simple terms. Thank you 🎉
Feel free to post any questions you have in the comments!
If I could 'love' a youtube video it would be this one. Unfortunately I can only like it. What a great idea, framing unintuitive concepts in terms of a social situation to make them easier to digest. Thank you
Thank you! UC Boulder used to have a website with physics instructional material, and it was all based around these types of dialogues between animated characters. They had comic-style drawings with characters having conversations, explanatory text, and animations all combined. It was such an incredible resource and such an outstanding way to convey the material. I *think* it was UC Boulder, but I could be misremembering which Colorado school hosted the content. I haven't seen it for a while now and have a vague recollection of it being discontinued/removed. After falling in love with that resource, I came across an ed article about comics as a way to convey material. I agree that it's a great approach!!
Thank you so much! Very easy to follow, with concise and clear explanation
I need help understanding why a low P-value is favorable. Wouldn't you want a high P-value to indicate that the results of the experiment are consistent with the same experiment being run on other sample groups? For example, If I found there was a 4" height difference in my sample group, and I ran the t-test and got a p-value of 0.95, wouldn't that mean that 19/20 times this experiment is run there will be a 4" height difference? So I could say more confidently that there is in fact, a 4" height difference?
Typically the null hypothesis represents the status quo, or a scenario where "the effect does not exist (and is 0)," or something else uninteresting. The idea is that, if we want to establish a new effect, or new trend, or new relationship, then we have to show that the signal we're measuring isn't just noise. That's what the p-value does. It establishes "this is actually an interesting result which is different from the status quo and thus must be explained by some new phenomenon or relationship we haven't previously detected."
The p-value isn't simply a probability of an event occurring. It's the probability of the event occurring *if the null hypothesis is true*. If you get a p-value of 0.001, then that's equivalent to saying "the null hypothesis is a ridiculous proposition because the null hypothesis says our data should 'never' occur. The fact that our data *did* occur--when the null says it shouldn't--clearly refutes the null hypothesis and the way it assigns near-0 probabilities to observed events."
Something I wanted to follow up on: "If I found there was a 4" height difference in my sample group, and I ran the t-test and got a p-value of 0.95, wouldn't that mean that 19/20 times this experiment is run there will be a 4" height difference?" The p-value doesn't actually tell you any real probability. It simply tells you what the null hypothesis thinks the probability *should* be. If you get a p-value of 0.95, then the null hypothesis thinks you should reproduce those same results (*or more extreme*) 95% of the time. That doesn't mean the null hypothesis is right or wrong--it simply means that the data doesn't contradict the null. Hence, a p-value of 0.95 isn't sufficient to reject or to accept the null hypothesis. We don't know if the null is right--we just don't have any clear evidence that the null is obviously wrong.
The last important thing that I think might be escaping you right now: the null hypothesis is *not* the same as your measured difference. If you measure a difference of 4", that doesn't mean the null hypothesis thinks the difference should be 4". In fact, usually (but definitely not always) the null hypothesis thinks the value should be 0. To make this point clearly: if the null hypothesis was ∆h=4 in, and your data also said ∆h=4 in, then your p-value would be 100% or 1.00. So the fact that the p-value is not 100% means your observed difference differs from the null hypothesis's expected value. This is another way to think about the p-value--you can think of it as roughly being "how far away is your measured value from what the null was expecting you to measure?" If your measurement is really far, then this means the null hypothesis thinks your data is so improbable that it should never occur. Hence, the fact that your data *did* occur contradicts the null. It's like we're saying "the null is clearly assigning incorrect probabilities, because it thought what we just observed effectively never should've happened. So the null is 'broken' and making bad probability predictions, which means it must be wrong."
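If the phrase "what the null hypothesis thinks" feels abstract, here's a rough Monte Carlo sketch in Python (every number here is invented for illustration; numpy assumed): we force the null to be true by drawing both samples from one population, then count how often a 4" gap appears anyway.

```python
# How often WOULD a 4" gap occur IF the null hypothesis were true?
import numpy as np

rng = np.random.default_rng(1)
n_per_group, sd = 20, 4.0    # hypothetical sample size and population SD
observed_gap = 4.0           # the difference we measured (inches)

gaps = []
for _ in range(100_000):
    a = rng.normal(68, sd, n_per_group)  # under H0, both groups share mean 68"
    b = rng.normal(68, sd, n_per_group)
    gaps.append(abs(a.mean() - b.mean()))

# Fraction of "null worlds" producing a gap at least as big as ours:
print(np.mean(np.array(gaps) >= observed_gap))
```

A tiny result here means the null can only explain our 4" gap by calling it a fluke; a large result (like 0.95) means our gap is exactly the kind of noise the null expects.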
@@JohnYoga "The p Value is the probability that the Null Hypothesis (meaning no change) is true." This is incorrect and is an extremely common misconception. The p-value is not the probability that the null hypothesis is true. The p-value is the *hypothetical* probability that our DATA would occur IF the null hypothesis WERE true.
Imagine you flip a coin 5 times in a row and it comes up heads every time. The question "is this a fair coin?" can be answered by calculating a p-value: p = (1/2)^5 = 0.03. This p-value does NOT tell us that there's a 3% chance that the null hypothesis is true. Rather, this p-value says: IF we assume it's a fair coin, THEN the probability of flipping 5 consecutive heads is 3%. Our conclusion to this experiment would be "we reject the null hypothesis because it's a bad assumption. We know it's a bad assumption because it leads to the bad conclusion that our data are just a random fluke. We don't think our data are just a random fluke; we think they are representative of reality. Thus, the null hypothesis must be wrong." Logically, the argument is using the contrapositive. A [the null hypothesis] implies B [your data are a fluke that would only occur 3% of the time]. We reject B, which means we can also reject A.
You know, it is counter-intuitive. I think about it like this: if the p-value is low, then the probability that randomness produced the result is low, and there is a relationship between the two variables. But if it is too big, then there is a lot of randomness, i.e., noise, and the null hypothesis seems to be right that the noise caused the observed value, not a relationship between the variables (i.e., the effect).
Great video! Easy to follow along and understand
Great video- thank you! made things much clearer for my dissertation. Still a tricky subject though!
very easy to understand, thank you so much
Such a great video! Simple explanation which is easy to understand. I major in language and am working on my master's dissertation (some kind of questionnaire) but know so little about statistics, let alone how to analyse all those numbers. This video really helped me a lot. Thank you, Daniel!
Good luck on your dissertation!! If you have any questions feel free to post them :)
Alternative hypothesis: There is an effect in the population.
Null hypothesis: There is no effect in the population.
Let's say the p-value found is 0.3, which suggests there is a 30% chance that what the alternative hypothesis suggests is purely by luck, so we have to fail to reject the null hypothesis.
I love this explanation and would only edit one thing--instead of 0.3 meaning "there's a 30% chance that what the alternative hypothesis suggests is purely by luck," it means "there's a 30% chance that the particular effect we measured in our experiment would occur purely by luck if the null hypothesis were true."
Brilliantly explained
That was good and straightforward.
Hey! Thanks so much for a great video. I am doing some homework from uni, and my professor asked us in the assignment to do a t-test and a logistic regression. Both the t-test and logistic regression p-values were significant. The question then became "How do the t-test and logistic regression relate to one another?"--as in, what is the relation between the two tests or models? I wonder if you could help shed some light on this, as I can only think of differences: different null hypotheses, different forms of testing, one tests means and the other probabilities... and so on.
So, as a broad overview, this can get pretty deep depending on the level of the stats. Broadly speaking, t-tests (and ANOVA, for >2 groups) are mathematically equivalent to a linear regression with categorical predictors. You could (a) run a simple t-test on data where x={child, adult} and y=height, and then (b) fit a linear regression model of the form height = b0 + b1*(is child). The p-value from the (pooled-variance) t-test will exactly match the p-value from the linear regression. The same sort of relationship holds between logistic regression and a chi-square test on a 2x2 contingency table.
To your specific question, it's hard to know what's relevant without details about the data. t-tests and logistic regression assess different types of outcome data. So there must be something relating the outcome data in the t-test to the outcome data in the logistic regression (e.g., the logistic regression might binarize the continuous variable that is assessed in the t-test). But without any other info, I would encourage you to consider things like: given the relationship between the dependent variable in the t-test and the dependent variable in the logistic regression, would you expect the t-test p-value to be significant but not the logistic regression p-value? Other way around? Does one test give more information than the other? In other words, is information lost in one of the tests vs. the other? Would you expect better statistical power in one of the tests? Are there circumstances where one would be more appropriate than the other, based on e.g. the data range or other considerations? Is one more meaningful?
If you want to continue the convo, I'd encourage you to reply with (a) more info about the data/context, (b) the relationship between the t-test outcome data and the logistic regression outcome data, and (c) the null and alternative hypotheses for both models.
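To make the t-test/regression equivalence concrete, here's a short sketch (fabricated data; it assumes numpy, scipy, and statsmodels are installed):

```python
# A pooled two-sample t-test and OLS on a group dummy yield the same p-value.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(2)
child = rng.normal(50, 4, 25)   # hypothetical child heights (inches)
adult = rng.normal(67, 4, 25)   # hypothetical adult heights (inches)

# (a) classic pooled-variance t-test
t_stat, p_ttest = stats.ttest_ind(child, adult, equal_var=True)

# (b) linear regression: height = b0 + b1 * is_child
y = np.concatenate([child, adult])
is_child = np.concatenate([np.ones(25), np.zeros(25)])
fit = sm.OLS(y, sm.add_constant(is_child)).fit()

print(p_ttest, fit.pvalues[1])  # identical up to floating-point rounding
```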
At 5:32 shouldn't the null hypothesis be "... once every 1000 repetitions."?
ditto
@@mayawen Yep, that's a typo
This was very helpful. Thanks!
Still don't understand. When the p-value is 0.05, the null hypothesis says it's luck because the difference only happens in 1 out of 20 trials; it just makes 0.05 sound like we only have a 5% probability of a 4" difference (edit: then we're supporting the null hypothesis).
Edit: why is the p-value so complicated (I'm not blaming your explanation)? What is the math or calculus behind it that lets the p-value express such logic?
You're still assuming the null hypothesis is correct even when it says p=0.05. The whole point is that p=0.05 is an extremely bad way to explain your experimental results. That is, the null hypothesis is a bad assumption if the only way it can explain your experimental results is by calling them a highly rare one-in-twenty occurrence.
Imagine you place 10 plants on the south side of your home and 10 identical plants on the north side of your home. You find that the 10 plants on the north side grow 20% taller over the course of a year. Your friend comes along and says "it's incorrect for you to conclude that plants grow taller on the north side of your house because your results are just a statistical fluke. If you repeated this experiment 19 more times, you'd get no difference between the northern and southern plants on average." You probably wouldn't believe this friend, and you would reject their hypothesis. We do the exact same thing to the null hypothesis when it says "your experimental results are just a one-in-twenty fluke and p=0.05." Our response is "that's a bad way to explain the results of my experiment. I reject the null hypothesis and I reject the idea that my data are a highly rare one-in-twenty occurrence."
Thank you so much, very easy to understand. p-value = 0.05 or 5% means 20 times in a row we only get 4" one time.
p-value = 0.5 or 50% means 20 times in a row we'll get 4" ten times.
I just wonder: if you set H0 = there is no difference in the population,
p-value = 0.5 > 0.05 → H0 accepted or cannot be denied → there is no difference in the population, BUT 20 times in a row we'll get 4" ten times. I just don't get it.
Here's how I would modify your statement. I've added some additional text in [brackets]:
"p-value = 0.5 > 0.05 → H0 accepted or cannot be denied → there is no difference in the population, BUT [if you draw a random sample] 20 times in a row [then because of random chance and variation within the population, the sample will have a] 4" [difference] ten times [even though there is no difference in the population, because the average in a random sample deviates randomly from the true population average due to random chance]."
The idea is that, because we're only measuring a random sample and random samples can differ from the population (depending on the sample size and how much variation there is in the population), we need to know how much variation we WOULD expect to see in the sample IF there wasn't a difference in the population. If the variation is so high that half of the samples would show a difference even if there isn't one in the population, then we should be a bit more wary of seeing a difference in our sample and shouldn't jump to any conclusions. The p-value helps us figure out how likely a false positive might be (i.e., how likely are we to observe a difference in the sample even though there's no real difference in the population).
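Here's a small simulation of that false-positive idea (all numbers invented; numpy/scipy assumed): even when the two groups come from the exact same population, random sampling alone makes some experiments look "significant."

```python
# With NO true difference, roughly 5% of experiments still give p < 0.05 by luck.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
trials = 10_000
false_positives = 0
for _ in range(trials):
    a = rng.normal(68, 4, 20)   # both samples drawn from the SAME population
    b = rng.normal(68, 4, 20)
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1

print(false_positives / trials)  # close to 0.05, the false-positive rate the threshold controls
```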
I seem to understand it after reading the definition: it's a double negative... it tells you the probability that this result would occur by "random chance" or "luck". If P is high, the odds that the heights are different due to random chance are high, and Null-H wins... we don't want that... we want P to be low, meaning the heights are NOT due to randomness or luck; they are real and representative of the population, so Null-H loses and we win.
This video is so useful! thank you!
Great stuff on the stats. Side question: what is your process for video creation? Do you do it yourself or outsource it?
I made this video in PowerPoint.
Explained it better than the people I'm paying to teach me
I'm still confused. If the P value is lower, say 0.001 or 1 in 1000 repetitions, it's supporting the Null Hypothesis more and more, right? Meaning the height difference of 4 Inches occurs 1 time and the heights are same the remaining 999 times, which is what the Null hypothesis is saying. On the other hand, if the P value is 0.999, it means 999 out of 1000 times the Height difference will be 4 inches or more, and only 1 time the heights will be same, therefore rejecting the Null hypothesis... It's very counter-intuitive... I'm missing something...
This is a great question and a very common misunderstanding. Let's focus on this statement: "meaning the height difference of 4 Inches occurs 1 time and the heights are same the remaining 999 times." This is not quite true. Here's what we should say instead:
"meaning [the null hypothesis can only explain your data by claiming it is an extremely rare event, because a] height difference of 4 Inches occurs 1 time and [the null hypothesis is saying that, if you repeated your experiment 999 times, you would realize that your results are just pure luck]."
Pretend I'm the null hypothesis. I flip a coin 100 times in a row, and it comes up heads all 100 times. You say "that is not a fair coin." I respond "it IS a fair coin--it's just that the experiment is an extremely rare fluke." This would be an absurd response. Every time I flip the coin and it comes up heads again, my statement ("this is a fair coin and these results are extremely unlikely...") becomes less believable. The null hypothesis is the exact same. It's like me saying "no, it's actually a fair coin, and you just have extremely unlikely results." At some point, that's less believable than the alternative--the coin is not fair and has heads on both sides.
Hi Daniel. I was going through your other statistics videos; can you organize them into a playlist on your channel? It will be really helpful for us to find them easily.
Again thanks for your efforts.👍
Very nice video❤
Hi Daniel... I've been thoroughly enjoying your videos. I'm slightly ashamed to admit that I didn't understand any of these things as a student, just memorized them for exams. But what I'm really concerned about is that your videos seem to be for school kids while I learned most of this in undergrad and post-grad courses. How old are these kids? Do all American kids study statistical inference and hypothesis testing in high school? Then why do UG 101 courses go over the same stuff again?
Even the physics videos seem quite tough.
Hi Deepika, I'm glad you enjoy the videos! (Some are better than others, lol.) That's a good question. That one video about statistical power really is aimed at a graduate level. But all of the other videos on my channel are for high school students (11th and 12th grade). Except for that statistical power video, the statistics content is mostly for Biology students in the International Baccalaureate program. They have to do hypothesis testing in their Internal Assessment report, but they don't actually learn any inferential statistics, and it's not in their curriculum. They're in a tough position, so I tried to put together some videos to give them enough intuitions and basic skills to do a t-test or a chi-squared test. (It's still on my list to do a video on chi-squared tests.) Almost all of the physics content is for high school students in the International Baccalaureate physics course. And you're right--it is a VERY tough class!! The highest possible grade is a 7, and to get that grade, students historically only have to score ~65% overall on the test. (Only ~8% of students around the world score that high.) Comparatively, I think I put more emphasis on conceptual understanding than is usual. I always thought a strong grasp of the big-picture framework strengthened students' ability to apply equations and solve problems. (It's *definitely* how I learn, personally.) So I suspect there's a bit more of that in my videos than is typical. But in general, I don't think US high school students are learning statistical inference.
@@danielm9463 Thanks, Daniel. Glad to hear that it's 11th and 12th grade. After 10th grade I took the option to drop science completely, and channelled my analytical energy towards eco/stats/accounting. But now I understand why college classes were a breeze for the science classmates.
Your students are lucky to have you. I don't know if they're old enough to appreciate how much harder it is to structure a lecture that aims for conceptual clarity than a class that uses definitions and neatly designed examples. Personally, the one video in which you start with blank paper, write things out, strike out some, change your mind in the middle of a thought... was the best medium for concepts. I know others complained about the 2-second focus problem, but I disagreed. There is something about forming an idea, growing it, on paper/blackboard that helps learning in a way that neat slides/animations cannot replicate. I wonder what the 15-16 year old kids prefer.
Over 90% of your videos are not for me (I might muster up the courage and click on the physics ones one day), and I need to scroll a lot to find something, but it's really worth it when I find one. They're all very very good. And you've been doing this for nine years, that's really impressive!
@@deepika9186 Thank you for the kind words!! I agree with you completely. You'll have to post some econ/stats videos so that I can learn from you too! For 15-16 year olds, what I learned over time is that the most valuable part of a lesson is when they try it out themselves. Even a perfect video that explains everything clearly will still leave them with some confusion, lingering misconceptions, sections where they lost their focus, and content they just didn't fully appreciate. Getting into the work, making mistakes, and correcting their errors is probably the most important part of them learning successfully. I think a huge part of effective teaching is creating a space where students can engage in that messy learning process (embrace their mistakes, ask questions openly, and believe that they get smarter with hard work).
@@danielm9463 Oh wow I totally agree about messy learning; things I've learned through tedious elimination, or trial and error, or just lots of frustrated repetition are embedded.
If I know a topic, I can transfer it to someone else clearly and simply - but only in person, not for an open audience. Maybe because I depend a lot on cues, even if only an expression that says "oh no you've confused me even more than when we started" because then I can start all over again (or proceed and build on the idea, as the case may be). Even on the phone I find it difficult when a friend stays quiet or is too polite to say they didn't understand / I'm not explaining well. You have a lot of practice, repeatedly, and with different types of kids - so you know which explanations work, and obviously you've sharpened them enough to have internalised the best parts and the best flow.
I have a style I often use, which is to build different blocks, keep each aside, and then bring them all together at the end. Literature has a tool called the "unreliable narrator," and sometimes I find it really useful to deliberately say something wrong in one of the blocks and then correct it at the end: "Remember I said abc? But it's wrong to say that, because abc and xyz cannot be true at the same time... Instead..."
This is a long-winded way of saying I can't make a good eco/stats video. There are a few good stats ones (and a lot of really bad ones that cover stats without explaining anything, just statement after statement) because data science is the "it" thing now. Economics, I feel, sounds easy to understand in a video, often animated, but then ten minutes later that content is difficult to retain/recall. In my opinion, the subject lends itself better to the written form and reflection, because it is after all mostly observed/empirical and covered daily in the news. I'm just realising that I've hardly ever reached for eco videos on YouTube!!
If you are ever curious about a topic or a news story and want it broken down, will you ask me? I can then either try to explain it with what I know, or look for the right answer and try to convey it to you.
@@deepika9186 I will! Also, you've probably encountered it before, but my absolute favorite channel for stats is StatQuest: th-cam.com/channels/tYLUTtgS3k1Fg4y5tAhLbw.html
He covers graduate level topics, but breaks everything down. I find it really helpful for building intuitions--a nice supplement to the typical in-depth study one gets in grad school.
Seems like the null hypothesis is true if the p-value is less than 0.05?
Seriously--the statistic for the hypothesis being true is 5% or less, which means the null hypothesis is true if the hypothesis is not likely to occur... I am so confused. UGH!
@@heatherbingaman5693 let's imagine a different scenario: you're testing whether 4-year-olds have the same mean height as 40-year-olds. The null hypothesis states the two groups have the same mean height (which is preposterous, of course). One way to rebut the null hypothesis is to randomly sample one hundred 4-year-olds, randomly sample one hundred 40-year-olds, and measure their mean heights. We find that 40-year-olds have a mean height of 65 inches and 4-year-olds have a mean height of 35 inches. If the null hypothesis were true, then the only way we could measure this large a gap is through an incredibly rare statistical fluctuation--we just happened to select the tallest 40-year-olds and the shortest 4-year-olds. This is, of course, an absurd proposition. It's much more likely that the null hypothesis was wrong to begin with--that's a far more plausible idea than the notion that our random sample just happened to produce a 30-inch difference when there's really no difference in the broader population.
In other words, here's how we use our data to rebut the null hypothesis: (1) we assume the null hypothesis is true, (2) we calculate how probable our data are under that assumption, and (3) if the null hypothesis deems our data an incredibly rare 1-in-20 event, then we conclude that the null hypothesis was a bad assumption to begin with, and we reject H0. That is, H0 is a bad assumption that should be rejected if it deems our data statistically "impossible" (or even just highly improbable). On the other hand, something like p=0.60 would not be such a rare event that we overturn the null hypothesis, since p=0.60 corresponds to a deviation that the null hypothesis expects to see 60% of the time.
The basic idea is this: we live in a noisy world, and random fluctuations cause us to occasionally observe differences in our sample when, in reality, the two groups we're comparing are actually the same. Our default position is to ignore these insignificant differences and assume they don't indicate anything strange/unusual, and they don't point to any real underlying differences. But now and then, we measure significant differences that are so rare we must reject the default null hypothesis that the groups are the same.
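If you'd like to see steps (1)-(3) in action, here's a small sketch using numpy and scipy (this isn't from the video; the spreads and the simulated samples are assumptions for illustration):

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
# Hypothetical data: one hundred 40-year-olds around 65 in,
# one hundred 4-year-olds around 35 in (spreads are made up).
adults = rng.normal(loc=65, scale=3.0, size=100)
kids = rng.normal(loc=35, scale=2.0, size=100)

# (1) Assume H0 (equal means) -- that assumption is built into the test.
# (2) Compute how probable a gap this large is under H0.
t_stat, p_value = ttest_ind(adults, kids)

# (3) The p-value is microscopic, so H0 was a bad assumption: reject it.
print(f"t = {t_stat:.1f}, p = {p_value:.2e}")

With a 30-inch gap, the p-value comes out so small that "rare statistical fluke" stops being a credible explanation, which is exactly the reasoning above.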
@@heatherbingaman5693 one other point that may help. You said "the statistic for the [alternative] hypothesis being true is 5%." That's not what the p-value is. The p-value isn't the probability that the alternative hypothesis is true. It's also not the probability that the null hypothesis is true. The p-value is how likely the null hypothesis thinks your DATA are. If p is very small, the null hypothesis can only explain your data by calling them an extremely rare fluke, and that's exactly when we reject it.
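To make "how likely the null hypothesis thinks your data are" concrete, here's a rough simulation sketch (all the numbers are invented): we build a world where H0 really is true and count how often that world produces a gap as big as the observed one.

import numpy as np

rng = np.random.default_rng(1)
observed_diff = 4.0  # e.g., a 4-inch gap between two sample means

# A world where H0 is true: both groups drawn from the SAME
# population (mean 67 in, sd 3 in are assumptions for illustration).
n, trials = 30, 100_000
group_a = rng.normal(67, 3, size=(trials, n)).mean(axis=1)
group_b = rng.normal(67, 3, size=(trials, n)).mean(axis=1)

# The p-value is the fraction of H0-worlds whose gap is at least
# as extreme as the one actually observed (two-sided).
p = np.mean(np.abs(group_a - group_b) >= observed_diff)
print(p)  # ~0: H0 essentially never produces a 4-inch gap

Note what the simulation does and doesn't tell you: it's the probability of the data given H0, not the probability of H0 given the data.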
Amazingly good!
THANK YOU
thank you so much, useful video👌
here to pass my biostatistics exam, pray for me yall
Update: I failed 1.5/10
Thank you!
You're welcome! Feel free to ask any questions if you have any!
This is good stuff!
Thank you
2:49 she should reject that men are taller, right? Not that the difference is 4 inches
Correct! It says at 3:05 that we would reject the null hypothesis in that case.
I got t test and p value❤
lol
This was very helpful. Thanks.