19:48 I believe the critical value listed should be 0.84 and not 0.8. Excellent video, this is the first video I’ve seen that explains the sample size determination clearly!
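For anyone who wants to verify, here is a quick scipy check of the z-value for 80% power (a sketch of my own, not from the video):

```python
from scipy.stats import norm

# z-value that leaves 80% of the standard normal below it (used for 80% power)
print(norm.ppf(0.80))  # ~0.8416, i.e. closer to 0.84 than 0.8
```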
Dan is back with another jam-packed useful AB Testing course!
Power is not the probability of detecting the effect if it exists. The way power is calculated, it is the probability of detecting an effect if the effect size is exactly equal to the stated minimum effect size.
To get “the probability of detecting an effect if one exists” you need to integrate over a prior of the different effect sizes.
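A minimal sketch of the distinction, assuming a two-sample z-test, illustrative values for sigma, n, and the MDE, and an assumed normal prior over effect sizes (none of these numbers come from the video):

```python
import numpy as np
from scipy import stats

# Power evaluated at a single effect size (the conventional calculation):
# approximately P(Z > z_{1-alpha/2} - delta/SE) for a two-sample z-test.
def power_at_effect(delta, sigma, n, alpha=0.05):
    se = sigma * np.sqrt(2.0 / n)          # SE of the difference in means
    z_crit = stats.norm.ppf(1 - alpha / 2)
    return 1 - stats.norm.cdf(z_crit - delta / se)

sigma, n, mde = 1.0, 400, 0.2              # hypothetical numbers for illustration
print(power_at_effect(mde, sigma, n))      # power at exactly the MDE (~0.8 here)

# "Probability of detecting an effect if one exists": average the power
# over a prior on the effect size (here an assumed normal prior).
prior = stats.norm(loc=mde, scale=0.1)
grid = np.linspace(0.01, 0.6, 500)
weights = prior.pdf(grid)
avg_power = np.trapz(power_at_effect(grid, sigma, n) * weights, grid) / np.trapz(weights, grid)
print(avg_power)                           # generally differs from the power at the MDE
```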
This is the only video one would need to demystify power analysis. Thank you.
Hi, I think the definition you gave for the Type II error (beta) at around 8:05 is actually the definition of power. Could you clarify that?
What he mentioned is the probability of rejecting the null hypothesis when the alternative hypothesis is true, and that is the area under the alternative hypothesis curve over the rejection region of Z.
@@mayankanand507 When the p-value is in the region he shows, we don't reject the null hypothesis; rather, we fail to reject it. The correct definition should be the probability of failing to reject the null hypothesis when the alternative is true.
I thought this was an outstanding tutorial, thank you so much
Great lecture, and thanks for sharing. But why is the two-sample pooled variance for a proportion the sum of the two samples' variances? Shouldn't it be 2 * variance 1, for the same reason as in the means case?
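For what it's worth, here is a small sketch contrasting the two choices, using a hypothetical baseline rate and a hypothetical relative MDE (not the video's numbers): the sum of the two Bernoulli variances versus 2x the control variance.

```python
# A sketch contrasting the two pooled-variance choices for a proportion metric.
# p1 is a hypothetical baseline conversion rate; p2 is implied by a hypothetical
# relative MDE (neither number comes from the video).
p1 = 0.10                                  # control proportion
mde = 0.20                                 # 20% relative lift
p2 = p1 * (1 + mde)                        # implied treatment proportion

var_sum = p1 * (1 - p1) + p2 * (1 - p2)    # sum of the two Bernoulli variances
var_2x = 2 * p1 * (1 - p1)                 # 2x the control variance (means-style shortcut)

print(var_sum, var_2x)                     # close to each other when p1 and p2 are close
```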
17:34 Why is (15.68 * sample variance) / delta squared ≈ (16 * population variance) / delta squared? Is it because the sample variance is approximately equal to the population variance?
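As a side note, the 15.68-ish constant itself comes from 2(z_{1-alpha/2} + z_{1-beta})^2 with alpha = 0.05 (two-sided) and 80% power, and is commonly rounded up to 16 for the rule of thumb; a quick check:

```python
from scipy.stats import norm

# Where the ~15.7 constant comes from, assuming alpha = 0.05 (two-sided) and 80% power
z_alpha = norm.ppf(1 - 0.05 / 2)        # ~1.96
z_beta = norm.ppf(0.80)                 # ~0.84
print(2 * (z_alpha + z_beta) ** 2)      # ~15.7, rounded up to 16
```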
This is a fantastic video; it helped me clear up the confusion I had with power analysis. Though I know the famous formula of 16 sigma squared / delta squared, I had no idea the pooled variance = 2 * control sample variance. Thanks for the detailed video!
This is so detailed. Thank you for this!
Am I mistaken, or is the description of the Type II error at 8:05 incorrect? He says Beta is "the probability of rejecting a null hypothesis when the alternative hypothesis is true." But isn't Beta/Type II error the probability of *not* rejecting a null hypothesis when the alternative is true? Genuinely trying to clarify to make sure I have a proper understanding.
This is GOLD!!
Loved the explanation, man. This is the first video I have seen that explains where sigma and delta come from. I have had such a hard time reasoning about where the parameters come from when we have not even started the test. Thanks for the good work. :)
8:49 Does the distribution of the test statistic under the alternative hypothesis have the same shape as the distribution of the test statistic under the null hypothesis?
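Under the usual equal-variance normal approximation used in these calculations, the two distributions have the same shape; the alternative is just shifted by delta/SE. A small sketch with made-up numbers:

```python
import numpy as np
from scipy.stats import norm

# Under the standard two-sample z-test approximation, the test statistic is
# N(0, 1) under H0 and N(delta/SE, 1) under H1 -- same shape, shifted mean.
# delta, sigma, n are hypothetical numbers for illustration.
delta, sigma, n = 0.2, 1.0, 400
se = sigma * np.sqrt(2 / n)

z = np.linspace(-4, 8, 7)
print(norm.pdf(z))                    # density under H0
print(norm.pdf(z, loc=delta / se))    # same curve shifted by delta/se
```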
Thank you for the amazing A/B test lecture! I have one question. How can I project the effect of this A/B test from the entire product's view (e.g., calculating the sitewide impact of the observed significant lift)?
But in the case of proportions, when calculating the variances, how can we know the proportion for the treatment group? Why haven't we taken 2 * p1(1-p1) there, like we did in the means case?
In the case of a single-tailed test, will Z(1 - a/2) change to Z(1 - a)?
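In the standard sample-size formula, a one-sided test does use z_{1-alpha} in place of z_{1-alpha/2}; here is a quick comparison of the two critical values (assuming alpha = 0.05):

```python
from scipy.stats import norm

alpha = 0.05
print(norm.ppf(1 - alpha / 2))   # two-sided critical value, ~1.96
print(norm.ppf(1 - alpha))       # one-sided critical value, ~1.64
```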
Is it the variance of the metric, or the variance of the difference in the metric between control and treatment?
absolutely fantastic video!
thank you so much!
The video is good. It would be better to explain what beta is before jumping into power.
Thanks!
Great video, thank you for sharing. In the case of A/B/n testing, could the formula that you shared in the video be adapted and used?
Thank you for the explanation! I have a question on the chicken & egg problem: if we cannot calculate the variance of the difference of the 2 means, how can we calculate the pooled variance for proportions? We also do not know the success rate of the 2nd sample, as we have not yet run the experiment.
should the variance not be multiplied by 2?
Can you please clarify what one-sample and two-sample mean here?
Why does pooled variance = 2 * control sample variance?
How would you do it if you wanted to perform an A/B/C test?
How should we answer this interview question: if we cannot collect that large a sample, what should we do?
Very useful
Hi sir, in the example I see you use MDE = 20%. I am confused: shouldn't it normally be something like 80% to 90%? Is using 10% or 20% practical in the real world?
Hi @ChangKaiHua300, I think you're confusing MDE with statistical power. MDE is the minimum detectable effect, i.e., the lift between control and treatment; 10% is usually already good enough, and 20% is quite reasonable. However, the industry default for statistical power is 80%, because we want to keep the Type II error at a maximum of 20% (that's beta).
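To make the distinction concrete, here is a rough per-group sample-size sketch for a proportion metric, assuming a hypothetical 10% baseline rate, a 20% relative MDE, alpha = 0.05 (two-sided), and 80% power (illustrative numbers, not the video's):

```python
from scipy.stats import norm

alpha, power = 0.05, 0.80      # significance level and statistical power
p1 = 0.10                      # hypothetical baseline conversion rate
mde = 0.20                     # 20% relative minimum detectable effect
p2 = p1 * (1 + mde)            # implied treatment proportion
delta = p2 - p1                # absolute difference to detect

z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(power)
pooled_var = p1 * (1 - p1) + p2 * (1 - p2)

n_per_group = (z_alpha + z_beta) ** 2 * pooled_var / delta ** 2
print(round(n_per_group))      # per-group sample size (~3800 with these inputs)
```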
14:37 and 17:25 are the answers we are looking for 😄
Hey, I've been loving your channel! I also have a similar college background to yours! I was wondering: would you recommend a master's in AI or in statistics & data science in order to become a data scientist and/or machine learning engineer?
The stats option would give me a European Master of Statistics (EMOS) and R knowledge.
The AI option is more focused on Python.
Which would give better opportunities down the road? Honestly, it would be easier to get higher grades in stats than in AI, since the AI program is a killer master's where I study.
My background is a double bachelor's in neuroscience and psychology, with a specialization in stats, after which I knew it was what I really liked. I did a minor in data science and AI, studied mathematics on an exchange, and took computer science and (discrete) math courses extracurricularly.
My goal is to work at a big tech firm, but I'm unsure which gives better opportunities.
Hey, thanks for the post! Honestly, it really boils down to what you are interested in. It seems to me that you are mostly interested in the development and application of AI, in which case I think computational neuroscience would be a perfect track. A combination of neuroscience, stats, and computer science may help you in both the near term and the long term. Any internships you could snatch would be great for building a portfolio. Invest heavily in learning how to code, the math, and the application of the latest algos like transformers, ChatGPT, and so forth.
@@DataInterview thank you!
@@DataInterview I actually really want to go more towards machine learning engineering or data science. Would computational neuroscience still be best then?
Why does everyone just start writing the equation? Where is the deeper intuition?
Very good and helpful explanation, but why is the pooled variance multiplied by 2, instead of directly using the variance itself as described at en.wikipedia.org/wiki/Pooled_variance?
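One way to see where the factor of 2 comes from: the variance of the difference of two independent sample means is sigma^2/n + sigma^2/n = 2*sigma^2/n (assuming equal variances and equal group sizes). A quick simulation sketch:

```python
import numpy as np

# Simulate many control/treatment mean differences and compare the empirical
# variance of the difference to the theoretical 2*sigma^2/n.
rng = np.random.default_rng(0)
sigma, n, reps = 1.0, 200, 20000

control = rng.normal(0, sigma, size=(reps, n)).mean(axis=1)
treatment = rng.normal(0, sigma, size=(reps, n)).mean(axis=1)

print(np.var(control - treatment))   # empirical variance of the difference in means
print(2 * sigma**2 / n)              # theoretical 2*sigma^2/n
```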