You taught me in 5 minutes what my stats lecturer couldn't make me understand in 2 years of doing power. Legend
I do my best, and I'm glad to be of help!
I have to agree! We got this obfuscated definition of the theta power function. It's great for plug-and-calculate, don't get me wrong, but I didn't get what was going on at all.
@@jbstatistics I read a lot on the internet and also followed Montgomery's book to understand how alpha and beta are inversely related to each other, but I couldn't understand or visualize it. You explained things in an awesome way. The world needs people like you to teach the concept instead of a bookworm definition. Hats off, man, you did an amazing job.
Sounded like a skill issue to me.
Thank you so much. Don't know why hardly anyone can explain type 1 and 2 errors so that it makes any sense. You did it very well... thank you again!!
frfrffrfrfrrffrfrfxsvcxv You are very welcome!
I'm a high school AP stats teacher, and your video is simply terrific. Was looking for something to share with my kids, who find errors and power to be mind bending. This is it! Thanks!
Why do I pay to go to college? I always end up having to learn through YouTube videos like this one. This video is EXCELLENT. Thank you so much for saving me and thousands of students.
The graph helped tremendously. I had been staring at a homework question for over 30 minutes but finally figured it out, since the professor never cared to explain. Thanks so much!!
You are very welcome!
Published in 2013 and yet this triumphs over other videos relating to this subject! As a visual learner, this was incredibly useful. Thank you!
Thanks for the kind words!
I don't do a two-tailed example for a couple of reasons. But the logic is very similar to that used in this video. The difference is that you will have two rejection regions, so you will need to find two tail areas (one will be small), and add these areas.
Yeah, a one-tailed test is more intuitive for showing the areas corresponding to the Type 1 and Type 2 errors. Thank you very much, sir, love your content.
Many years later and your videos were amazing to follow along to.
Thank you so much!
You are seriously my hero for today. I was so confused on this topic until I watched just two of your videos. Everything makes much more sense now. Thank you so much JB.
Got an exam tomorrow and this nailed it for me. Thanks for all your hard work.
You are very welcome. I hope it went well!
@@jbstatistics It went ok! I messed up question on binomial probability but I think I nailed the power question! Fingers crossed :)
This is one of the best videos on the internet. This is the way it should be taught in every school. Thanks a ton!
Thanks for the kind words! Happy to be of help!
To find the power you need to find two areas (corresponding to the two tails) and add them. One area (the one on the opposite side of the true value of mu) will be small. The other area (the one on the same side as the true value of mu) will be bigger. I know people struggle with this sometimes, so I'll get a video up at some point (but probably not soon enough for your purposes). Cheers.
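If it helps in the meantime, here's a minimal sketch of that two-tailed calculation (Python with scipy assumed, which is my choice of tool, not something from the video), borrowing the video's numbers and treating alpha = 0.09 as two-sided; all values are illustrative only.

```python
# Power of a two-tailed Z test: add the two rejection-region tail areas
# under the sampling distribution centred at the true mean.
import math
from scipy import stats

mu0, sigma, n = 50, 21, 36        # hypothesized mean, known sd, sample size
alpha, mu_true = 0.09, 43         # two-sided level, assumed true mean
se = sigma / math.sqrt(n)         # standard error of the sample mean (3.5)

z_crit = stats.norm.ppf(1 - alpha / 2)               # about 1.70
lower, upper = mu0 - z_crit * se, mu0 + z_crit * se  # the two cutoffs

# One tail area is sizeable (same side as the true mean), the other is tiny.
power = (stats.norm.cdf(lower, loc=mu_true, scale=se)
         + stats.norm.sf(upper, loc=mu_true, scale=se))
print(round(lower, 2), round(upper, 2), round(power, 3))  # ~44.07, 55.93, 0.62
```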
I'm up in Canada (in Guelph -- near Toronto), but consider this a virtual handshake. I'm glad to be of help.
Best explanation on Type II error and Power i've ever seen. Just brilliant. Thanks.
Thanks so much for the kind words! I'm glad I could be of help.
The power of the test is the probability of rejecting the null hypothesis, given it is false (in this case, given mu = 43). So the power is not calculated by finding areas under the distribution of the sample mean when the null hypothesis is true (mu = 50), but by finding areas under the distribution of the sample mean when the null hypothesis is false (mu = 43). That's why the power was an area under the blue curve (mu=43) in the video, and not an area under the white curve (mu=50).
Amazing, and crystal-clear explaining. You've got decent teaching skills dude.
If we kept the same hypotheses as given in this video, then rejecting the null hypothesis for values of the true mean greater than 50 wouldn't be considered the correct decision, and we wouldn't be calculating power in those cases.
If the alternative hypothesis was mu > 50 instead of mu < 50, and we wish to calculate power for values of mu greater than 50, then the plots would simply be a mirror image of those in this video. I have another video of a power calculation in this setting.
You are very welcome Yubaraj! I'm glad you found this video helpful. Cheers.
This was brilliantly explained! Why can't you be my teacher? Thank you so much for a great job!
You are very welcome Gustav. Thanks for the compliment!
Thanks to the internet and these great videos, @jbstatistics is teacher in the entire world.
We need to find the value of a standard normal random variable that has an area to the left of 0.09. To 2 decimal places, that value is -1.34. This can be found using software or the standard normal table. I go through how to use the standard normal table for this type of problem in "Finding percentiles using the standard normal table".
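If software is handy, that lookup is a one-liner; a tiny sketch assuming Python with scipy (my choice of tool, not something from the video):

```python
# z value with area 0.09 to its left under the standard normal curve
from scipy.stats import norm
print(round(norm.ppf(0.09), 2))   # -1.34
```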
I don't think I would have finished my stats homework tonight if it weren't for you. Thank you for the excellent video.
Absolute legend
You taught me in 11 minutes what my lecturer couldn't teach me in 3 months xD
Thank You!
Not quite. If the alternative hypothesis is greater than 50, then the rejection region would change (instead of rejecting H_0 when x bar is less than 45.31, as we do in the video, we'd reject H_0 when x bar is greater than 50 + 21/sqrt(36)*1.34 = 54.69). To find the power (if the alternative was greater than), we'd find P(X bar > 54.69), and to find the probability of a Type II error we'd find P(X bar < 54.69) (using the appropriate values of mu, n, and sigma).
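A quick sketch of that mirrored calculation, again assuming Python/scipy and a made-up true mean of 57 (the mirror image of the video's 43), purely for illustration:

```python
# Right-tailed version: reject H0 when x bar exceeds the upper cutoff.
import math
from scipy.stats import norm

mu0, sigma, n, alpha = 50, 21, 36, 0.09
mu_true = 57                                     # hypothetical true mean > 50
se = sigma / math.sqrt(n)                        # 3.5

cutoff = mu0 + norm.ppf(1 - alpha) * se          # 50 + 1.34 * 3.5 = 54.69
power = norm.sf(cutoff, loc=mu_true, scale=se)   # P(X bar > 54.69 | mu = 57)
beta = norm.cdf(cutoff, loc=mu_true, scale=se)   # P(X bar < 54.69 | mu = 57)
print(round(cutoff, 2), round(power, 3), round(beta, 3))  # 54.69, 0.745, 0.255
```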
That gap you take while speaking is very good sir. We get time to understand.
You are very welcome Ben! I'm glad to be of help. Cheers.
It's amazing how these youtubers can give lessons better than my stats teacher.😀 Kudos to you man. 👍🏻
I'm glad to be of help!
It's the area to the right of 0.66 under the standard normal curve, which can be found using software or a standard normal table.
This video is absolutely precious. Couldn't be clearer.
You are more effective than my Professor when it comes to teaching Statistics. Please upload more videos on ANOVA and regression.
Was pondering for a long time how to visualise the power of a test....best explanation really 💥💥
Wait, so power can be calculated using either
1 - P(Fail to reject H0 | real mu < mu0)
OR
P(reject H0 | real mu < mu0)
??
Rejecting Ho and not rejecting Ho are complementary events, so, under the same underlying conditions P(Reject Ho) = 1-P(Do not reject Ho). I work it out both ways in this video because when some students see a power question they automatically jump to Power = 1-Beta, and I think that's not a great way to think about it.
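A tiny numerical check of that complementarity, assuming Python/scipy and the video's numbers (true mu = 43, cutoff 45.31, standard error 3.5):

```python
# Power and beta computed separately; by complementarity they sum to 1.
from scipy.stats import norm

cutoff, mu_true, se = 45.31, 43, 3.5
power = norm.cdf(cutoff, loc=mu_true, scale=se)  # P(reject H0 | mu = 43), ~0.745
beta = norm.sf(cutoff, loc=mu_true, scale=se)    # P(don't reject | mu = 43), ~0.255
print(round(power, 3), round(beta, 3), power + beta)   # last value is 1.0
```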
Very clear explanation. Helped me understand this topic when my textbook was absolutely useless. Thank you!
You are very welcome!
That is an area under the standard normal curve. It is found using software or a standard normal table. Cheers.
These are by far the best stats videos. Well done
You just saved my ass on this test. I owe you one
I'm glad to be of help Austin.
You very clearly explained the Power and the probability of a Type II error.
You are welcome. I'm happy to help.
Note that Type 1 and Type 2 errors are CONDITIONAL probabilities - this really helped make things make sense for me
The way you taught this is really great
Love when he said "power is the probability of rejecting the null when it is false, that is a good thing." My prof explained it totally opposite of that and I struggled to clarify it in my mind. Love the visuals in this video too.
thank you for the fantastic display of concepts visually and explaining each step along the way. Helped a ton!
I'm glad to be of help. Best of luck on your test.
You have saved my life so many times this semester, thank you :D
You are very welcome.
So in order to calculate the Type II error, we first assume what the real value is, then set up the new condition around it. It was very simple thinking of it like that. Thank you for the video, it opened up my horizons.
Why didn't you have to subtract the area to the right of 45.31 (0.255) from 1, making beta 0.745, if we were testing P(Z > 45.31) vs P(Z < 45.31)?
Amazing video! Better than my lecturer!
You're very welcome Pasang. Cheers.
You are the best I have ever heard at explaining this! (你是我听过的讲的最好的！)
+East Liu 谢谢 (Thank you)
+jbstatistics Omg, did you google translate this?
+Zhen Li Yes. I hope I didn't say something offensive :)
Not at all. I was just surprised :)
This explains things much better than my professor, thanks.
You can also do it this way: 1 - B = P(Z > (z_c - z_test)). This will work for a left-tailed, right-tailed, or even a two-tailed test.
Bro this was the best video Ive seen in my life
Power calcs are a little dry, so this one isn't my fave, but I'm glad to be of help!
You're welcome, and thanks for the compliment!
To find -1.34 in the video, you can do invNorm(0.09,0,1,LEFT) on the TI-84.
You are welcome! I'm glad to be of help.
do you have a video that does this using t-statistic?
I feel like such a stats wizard now, thank you so much!
i know right? it makes so much sense
May God be pleased with you, my brother in faith. On this blessed day of Ramadan, may God grant you whatever you wish.
Thanks for the helpful video and clear explanation. Just a question: at 1:10, since we are interested in the left part only, why didn't you divide the alpha by 2 for finding the z? Thanks!
u got the answer yet?
Because it’s a one-tailed test? No point in dividing in two.
Thank you so much for your nice videos! What software and equipment are you using? Considering doing something similar in courses I take, and I find your way of explaining very easy to understand and follow.
If the alternative hypothesis was mu > 50, how would this change the problem? Would the Z-value be: P(Z > z) = 0.09, which then becomes 1 - P(Z < z) = 0.09?
Yes, in this video the alternative was mu < 50, which led to a rejection region of Z <= -1.34. If the alternative was mu > 50, then the rejection region would be Z >= 1.34.
Thank you so much for this great explanation of Type II error and its calculation. I have not understood it before I watched this video.
I'm glad to be of help!
Absolutely wonderful visualisation scaffold. A quick question (6:55): how did you conclude, while calculating the probability of a Type 2 error, that sigma is 21 even for the population with a mu of 43?
This video saved my life thank you I owe you my life.
I'm always glad to save a life. You owe me nothing :)
Excellent explanation. My material did not plot the curves. This is great to visualize the concepts. Thank you!
You are very welcome! Without the visualization it's a little tricky to think about.
jbstatistics That's the first time I've received an answer to a comment on TH-cam rsrs. Thanks. But let me ask you something: how would you sketch the curves if the problem involved a hypothesis test on a proportion (p) instead of the mean (mu)? Where would you center the curve? It would also be bell shaped, right? Thanks again!
@@pedrodelfino9493 If we were speaking about a large sample Z test on a population proportion, then yes, it would be a very similar type of thing. There would be one normal curve centred at the hypothesized value of p, and another centred at the true value. We'd have to be a touch careful though, as the variances would differ (since the variance of p hat depends on the true value of p).
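To make that concrete, here's a hedged sketch with made-up numbers (p0 = 0.5, true p = 0.6, n = 100, one-sided alpha = 0.05), assuming Python/scipy; note the two different standard errors:

```python
# Power of a large-sample Z test for a proportion, H0: p = 0.5 vs Ha: p > 0.5.
# The null curve uses se0 (based on p0); the true curve uses se1 (based on p).
import math
from scipy.stats import norm

p0, p_true, n, alpha = 0.5, 0.6, 100, 0.05    # all illustrative values
se0 = math.sqrt(p0 * (1 - p0) / n)            # sd of p hat if H0 is true
se1 = math.sqrt(p_true * (1 - p_true) / n)    # sd of p hat under the true p

cutoff = p0 + norm.ppf(1 - alpha) * se0       # reject H0 when p hat exceeds this
power = norm.sf(cutoff, loc=p_true, scale=se1)
print(round(cutoff, 4), round(power, 3))      # ~0.5822, power ~0.64
```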
What do you say to this question? We don't know the standard deviation or mean of the population. We want to do a hypothesis test of whether the first sample value is the same as the mean of 50 samples.
For this test, I reckon we use the mean and standard deviation of the samples: mu-zero will be the mean of the 50 samples, sigma will be their standard deviation, and x bar will be the first sample value in the z-score formula. Is this method correct?
I am confused about how you got 0.255 ?
That's the area to the right of 0.66 under the normal curve. There's a table for that (z-table). Or, some calculators today can calculate the p-value given the z-score.
If you are using excel, it's 1 - NORMSDIST(0.66)
Or the Z test option on the calculator. Just put in the values
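For completeness, the same lookup in Python (scipy assumed) gives roughly the value quoted in the video:

```python
# Area to the right of z = 0.66 under the standard normal curve
from scipy.stats import norm
print(round(norm.sf(0.66), 4))   # 0.2546, i.e. about 0.255
```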
Saved my soul with this video! Thanks
You're welcome Albert!
Great video! What program did you use for this video? I'm wondering if I could use it teach my Elementary Stats class.
The base is a LaTeX/Beamer presentation. I annotate using Skim, and record and edit using ScreenFlow. Cheers.
How do you get the z value of -1.34 on a calculator (TI-84)
Your videos are always so informative. Thank you so much!
Thank you. I understand the concepts better now, but I cannot determine the sample size corresponding to a particular power. Can you please give me some hints on how I should solve the following problem:
You want to test whether a coin is fair at significance level 10%. What is (approximately) the minimum number of tosses that is required such that the probability of concluding that the coin is not fair is at least 90% when the true probability of Tails is 60%?
thanks in advance
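One rough way to attack that (my own sketch, not from the video): search over n using the normal approximation to the binomial, treating the 10% level as two-sided. Python/scipy assumed; an exact binomial calculation would refine the answer.

```python
# Rough sample-size search: two-sided test of p = 0.5 at alpha = 0.10,
# wanting power >= 0.90 when the true P(Tails) is 0.6.
# Normal approximation is used for p hat under both p = 0.5 and p = 0.6.
import math
from scipy.stats import norm

p0, p_true, alpha, target = 0.5, 0.6, 0.10, 0.90
z = norm.ppf(1 - alpha / 2)                      # ~1.645

n = 2
while True:
    se0 = math.sqrt(p0 * (1 - p0) / n)           # sd of p hat under H0
    se1 = math.sqrt(p_true * (1 - p_true) / n)   # sd of p hat under p = 0.6
    lo, hi = p0 - z * se0, p0 + z * se0          # two-sided rejection cutoffs
    power = norm.cdf(lo, p_true, se1) + norm.sf(hi, p_true, se1)
    if power >= target:
        break
    n += 1
print(n, round(power, 3))    # on the order of a couple of hundred tosses
```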
So what if infact, the true mean turned out to be GREATER than the hypothesised mean? Would that reduce the power of the test?
Great video. I finally figured out how to calculate type 2 error as well as power. Thank you!
Thanks Cao! I'm glad you found this video helpful!
What made you decide alpha=0.09 was appropriate for the test? Normally alpha is set to .05 or .01 or the odd time .10. 0.09 just seems so random.
I intentionally chose a nonstandard value for alpha, so that we could work through the process from scratch without having the usual z values in mind. My videos usually focus on illustrating statistical concepts, rather than how to answer common questions.
Thank you! Have you published any other video on "choosing the right sample size for testing mu"?
In the Z formula, I think we don't take the true mean but rather the hypothesized mean. Even if the true mean is assumed, shouldn't the calculation be (43 - 50)/(standard error of the mean)?
How to calculate power of a test for composite hypotheses? How does the "power.t.test" function in R calculate the power without asking for actual value of parameter?
At 2:39 we know that the alternative mu is less than the null hypothesis, so mu_1 < mu_0. Shouldn't we then subtract z*sigma/sqrt(n) from mu_0 rather than add it, as my text shows?
I used x bar = mu_0 + z*sigma/sqrt(n). Since the example in the video involves a left-tailed test, the z value is negative. But we are *adding* a negative number, and what I did in the video is correct. It's simply a result of reworking the test statistic Z = (x bar - mu_0)/(sigma/sqrt(n)) to solve for x bar, which does result in x bar = mu_0 + z*sigma/sqrt(n). z might be positive or negative, but this is always the relationship.
Your text might be using different notation, you might be misinterpreting it, or it might be wrong. I'm not sure which one of these is the case. Cheers.
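To see that numerically, here is a small sketch with the video's numbers, assuming Python/scipy; the negative z is what pulls the cutoff below mu_0:

```python
# Critical x bar from rearranging Z = (x bar - mu0) / (sigma / sqrt(n)).
import math
from scipy.stats import norm

mu0, sigma, n, alpha = 50, 21, 36, 0.09
z = norm.ppf(alpha)                       # -1.34 for this left-tailed test
xbar_crit = mu0 + z * sigma / math.sqrt(n)
print(round(xbar_crit, 2))                # 45.31: adding a negative z lowers the cutoff
```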
This is such a good explanation. Thank you sir.
So if we're going to reject x bar
The answer to the question is no. If we are carrying out a test at a fixed alpha level, then, if the null hypothesis is true (and the assumptions are true) the probability of a type I error is that fixed alpha level. e.g. If we are carrying out a test at alpha = 0.05, then the probability of a type I error, given the null hypothesis is true, is 0.05. It doesn't matter what actually happens in practice -- nothing changes that. If you get a highly unusual value of x bar, in the sense that you end up with significant evidence against H_0, then we might have made a type I error, or we might have made a decision that is consistent with reality. Without a great deal of further context and information, we have no idea of the relative probabilities of those two things.
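A quick simulation backs this up (my own illustration, assuming numpy, not part of the video): when H_0 really is true, the long-run rejection rate sits at the chosen alpha.

```python
# Monte Carlo check: with mu actually 50, the test rejects about 9% of the
# time, matching alpha = 0.09.
import numpy as np

rng = np.random.default_rng(1)
mu0, sigma, n = 50, 21, 36
cutoff = 45.31                     # rejection cutoff from the video
reps = 100_000

samples = rng.normal(mu0, sigma, size=(reps, n))   # data generated with H0 true
xbars = samples.mean(axis=1)
print((xbars < cutoff).mean())     # ~0.09, the Type I error rate
```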
@jbstatistics
Thank you! I have become smarter already acknowledging my actual confusion.
So if we know H_0 is true, then its the probability of "a" that we make type 1 error.
Same with type 2 error, if we KNOW H_0 is true we can say that it's 1 in 60 to go this far out on the distribution curve that happens to be within the accepted area in the experiment/sample curve and don't reject.
But if we're only in the perspective of an experiment and don't know the actual mean...
We could at least say: If the mean really is 0 (like H_0) and sd = 1(considering we know), this value or greater is so unlikely that it would only happen 1 in 1000.
That does not say that it's 1 in a 1000 that the mean actually is 0, that's not equivalent.
You say, we can't know this at all?
Or is it actually a? That would boggle my mind!
Can you make a video on how to calculate the Type 1 and Type 2 errors using an exact problem (not making mu = 50 or mu = 45) for a two-tailed test?
I don't know what you're saying. What's an "exact problem"? Among other things, power depends on how far the true value of mu is from the hypothesized one. So we need to have that to calculate power. In the tests in the video, for example, there's no such thing as "overall power" or anything like that.
this video saved my life
I'm glad I could help!
I have a question: you are assuming here that the population parameter (mu) takes some particular value in order to calculate the Type 2 error. But in empirical studies we generally do not know the population mean. Does that mean the Type 2 error cannot be computed for real empirical studies?
You are saving lives here, mate, thank you!
Great video. One question - given the small sample size of 36, I'm interested to know why the test statistic in your video is assumed to be distributed as standard normal, rather than as Student's t?
In this video I've assumed the population standard deviation to be known, and thus the appropriate test statistic is the Z statistic. (If the population standard deviation is known, then Z=(X bar - mu_0)/(sigma/sqrt(n)) is the appropriate test statistic. If the null hypothesis and assumptions are true, this Z test statistic has the standard normal distribution.)
In the real world we don't know sigma, so we use the t test, but power calculations for t tests are more complicated (involving the noncentral t distribution). I introduce power calculations using the Z test, because it's simpler and more understandable, and I bring it in at a point in the course where we have not yet discussed t tests.
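For anyone curious, a hedged sketch of what the t-test power calculation might look like, assuming scipy's noncentral t (scipy.stats.nct) and reusing the video's illustrative numbers:

```python
# Power of a left-tailed one-sample t test via the noncentral t distribution.
import math
from scipy import stats

mu0, mu_true, sigma, n, alpha = 50, 43, 21, 36, 0.09
df = n - 1
ncp = (mu_true - mu0) / (sigma / math.sqrt(n))   # noncentrality, here -2.0

t_crit = stats.t.ppf(alpha, df)                  # reject H0 when T < t_crit
power = stats.nct.cdf(t_crit, df, ncp)           # P(reject H0 | mu = 43)
print(round(t_crit, 3), round(power, 3))         # slightly below the Z test's ~0.745
```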
jbstatistics Great, that makes sense. Thanks for taking the time to reply.
Thanks for making the video! A quick question - since we don't actually know the population mean, how does one calculate the power of the test?
Thanks, you are 10x better than my teacher.
Poll of AI and humans: do you see a red fill color? I see orange and not red. Am I in error, or is the AI?
Thank you so much, I love the pacing of this video, and it totally cleared me up on calculations for power before my ap exam!!!
You are very welcome. Best of luck on your exam!
OK, thank you. And would we have two regions to test? Because I have no idea how the process would work.
watching this in 2021 for stat exam, thank you!!!
How did you use the fact that alpha = 0.09 to get that the z value is -1.31, or get an absolute value of 1.31 at all? If I check my z-distribution table I get a value of 0.5359. I understand that I have possibly made an error somewhere, but could you tell me where exactly?
You're looking up .09 as if it were a value of z. It's not. We need to find the z value that has an area to the left of 0.09. This might help: Finding Percentiles Using the Standard Normal Table (for tables that give the area to left of z) (7:33) (th-cam.com/video/9KOJtiHAavE/w-d-xo.html)
Hi, great video. BUT I have a really important question that only you can answer!! The new distribution that is centered around 43, which is used for calculating the Type 2 Error... What is the standard deviation for that curve!?
Does it matter? Do we get that info from other inferential statistics methods?
ORRRR must it be the same as the null's distribution (same st. dev) ?
I hope you can answer. Thank you.
In this video, I looked at the situation where sigma is known, but mu is unknown. Ho being true or false had no bearing on the variability of the curve, just the location (where it was centred).
Great video. You explained it just the way my mind interprets it.
Hello, thank you for your video. I was just wondering: if the alternative hypothesis is greater than (the opposite of the example you just used), does that mean that the test statistic calculation we get is a Type II error?
I'm confused here as to why we needed to calculate the value of x bar??
Hi there, I was wondering if someone could help me understand. I get it up until 7:40, when we set up (45.31 - 43)/(21/sqrt(36)). Where is Z > 0.66 coming from? And where is 0.255 coming from? Thanks!
How would you this change if your true mean was actually greater than 50?
I only needed a small section of this video to tell me what neither my textbook nor my classes could.