People like you make the world a better place
Can't thank you enough for this simple and effective video. Wish I had seen this before. Thank you!!
Glad it was helpful!
This is a great video. I have no idea how it only got 2621 views.
Glad you liked it. Hopefully there are more views to come :-)
@@TestingTheory it’s picking up! 😃 4.2K
Ps: I love all your videos! They’re super helpful! I am pretty sure it’s going to be raining views soon.
Cause it's your fault
Wish I saw this video while taking stats class. Your explanation with real world examples is something that textbooks completely missed out on. Thanks again!
Glad it was helpful!
My first video of yours- thanks so much! I’m interested in understanding predictive forecasting for sales over weeks.
Predictive forecasting isn't really my specialty. Are you trying to predict sales while you run an A/B test? If so, the best prediction is the average conversion rate. The larger your sample size, the more confident you can be in the conversion rate of the variations, and the better you can predict whether that conversion rate will hold up over the next couple of weeks.
Hi! Thank you very much for this excellent video. I have a question: why did you separate statistical significance and p-value? Aren't they the same? Thanks!
Population - The whole dataset
Sample - subset of the dataset (the more random and larger it is, the more representative your output might be)
Lift - Percentage change from the last model to the new model
Confidence/Statistical Significance - Accuracy percentage of the model (higher the better; More data = High accuracy)
Conversion Rate - How many people signed up out of the total population (in percentage value)
Mean - Average of the data (more data = more accurate mean = more confidence)
P Value - Probability measure of how trustworthy the result is (lower the better; a p-value of 0.05 = 95% confidence)
Confidence Interval - Range within which the true value is likely to fall (it keeps changing based on the samples)
Great summary Paritosh!
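To make the glossary above concrete, here is a minimal Python sketch with made-up numbers (10,000 visitors per group and the sign-up counts are purely hypothetical) showing how conversion rate, lift, and a 95% confidence interval relate:

```python
# Hypothetical example: 10,000 visitors in each group, 400 vs 460 sign-ups.
from math import sqrt

control_visitors, control_conversions = 10_000, 400
variant_visitors, variant_conversions = 10_000, 460

control_rate = control_conversions / control_visitors   # conversion rate of the control
variant_rate = variant_conversions / variant_visitors   # conversion rate of the variation
lift = (variant_rate - control_rate) / control_rate     # relative (percentage) change

# 95% confidence interval for the variation's conversion rate (normal approximation).
se = sqrt(variant_rate * (1 - variant_rate) / variant_visitors)
ci_low, ci_high = variant_rate - 1.96 * se, variant_rate + 1.96 * se

print(f"control: {control_rate:.2%}, variation: {variant_rate:.2%}, lift: {lift:.1%}")
print(f"95% CI for the variation's rate: [{ci_low:.2%}, {ci_high:.2%}]")
```

A testing tool reports roughly these same quantities, just computed with whatever method the vendor has chosen.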
What great videos you make, thanks for sharing. How are you looking at these statistics? Some tutorials on how to actually do this and recommendations of apps & sheets to record & analyse data would be great.
Glad you like them and that is great feedback. Thanks.
Thank you for your content. Really good and nicely expressed.
Great video! Great explanation! Thanks! 🙏
Thank you for the videos. Can you explain A/B testing with test/control, which holds out some of the population?
When you run a test you are keeping back a portion of the traffic that remains in the original experience. These visitors see no change to the site. This gives you a point of comparison with your variations, where the visitors did see a change.
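As a rough illustration of that split (a hypothetical sketch, not how any particular testing tool implements it), visitors can be bucketed deterministically so each person always sees the same experience:

```python
import hashlib

def assign_bucket(visitor_id: str, holdback: float = 0.5) -> str:
    """Deterministically bucket a visitor so they always get the same experience.

    'control' keeps the original experience; 'variation' sees the change.
    """
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    fraction = int(digest[:8], 16) / 0xFFFFFFFF  # map the hash into [0, 1]
    return "control" if fraction < holdback else "variation"

print(assign_bucket("visitor-12345"))  # same id -> same bucket on every visit
```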
Great videos. Hope to see you back at it soon!
I hope so too!
Thank you for sharing. Would you mind making one video on TH-cam ads A/B testing? That would be a pretty interesting topic.
Great suggestion!
@@TestingTheory Thank you sir. I hope you can share some of your knowledge here. My specific interest is in TH-cam ads, for example testing various videos but also testing various targeting channels. I cannot find any similar information on Google at all. Cheers
@@TestingTheory Any ideas on when video A/B testing will be ready? Sorry to bother! Cheers
I think this is a very good tutorial. However, it seems you missed a critical piece of information: which statistical test should we use? (I am guessing it's a chi-square test?)
Most testing software already has a method that it uses. If you are using a specific tool then the choice of test isn't as important, because you get whatever the tool uses. If you are looking for an exploration of statistical tests, that wouldn't really fit into a "Statistics Made Easy" video, but it is a great idea for a follow-up video.
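For anyone who does want to run the test themselves, here is a minimal sketch of the chi-square test the commenter guessed at, using hypothetical counts and assuming SciPy is available; your testing tool may well use a different method entirely:

```python
# Hypothetical 2x2 table of outcomes; assumes SciPy is installed.
from scipy.stats import chi2_contingency

#            converted  not converted
table = [[400, 9_600],   # control
         [460, 9_540]]   # variation

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p-value = {p_value:.3f}")
# A p-value below 0.05 is the conventional bar for calling the difference significant.
```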
If possible, can you please do some videos related to fractional factorial design in A/B testing?
Yeah, I will add that to the list of upcoming videos
Hi Rhett! Could you please explain why "as you get more data, the results of that test become more confident"? I usually think of "confidence level" as something we set before making an inference; for example, we need to make a certain inference with confidence level 95%; so why would this set value change with the amount of data?
The answer I can think of is that by "become more confident", you mean that the p-value initially fluctuates and then stabilizes at a smaller value. Is this what you mean?
Thanks! :)
Yes, the more data you get the tighter your lift intervals and confidence intervals become.
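A quick back-of-the-envelope sketch of why more data tightens the intervals: the half-width of a 95% confidence interval for a conversion rate shrinks roughly with the square root of the sample size (the 4% rate below is purely hypothetical):

```python
# Hypothetical 4% conversion rate; 95% confidence interval half-width by sample size.
from math import sqrt

rate = 0.04
for n in (1_000, 10_000, 100_000):
    half_width = 1.96 * sqrt(rate * (1 - rate) / n)
    print(f"n = {n:>7,}: {rate:.1%} ± {half_width:.2%}")
```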
Cool stuff. very helpful!
Glad you think so!
I am new to this, so a clarification: do we test the control group at all, and when you mention the old rate, is that the rate from the control group?
Yes, the control is part of the test and is your comparison point for the new variations you are testing. The old rate is the conversion rate for the control.
Hi Rhett! Can you recommend a way to practice A/B testing in real life? I'm working as a Data Scientist and many places have asked for experience implementing A/B testing.
Hi Zucethy, the Data Scientist role is a great springboard into doing A/B Testing. Are you doing analysis that might draw conclusions to recommend an A/B test? It might be easiest to convince the company you are working with to test the things you are recommending so you get causal data. Help them understand that correlative analysis doesn't provide the same value as A/B tests, which give you causal data.
You could also explore entry-level positions for conversion rate optimizers.
Thanks! I'm looking for a job. I applied to some and got interviewed, but they found I'm lacking A/B testing experience that has driven conclusions. I was looking for options to learn on my own, like Kaggle, which gives free datasets to analyse.
The video is great and clear, but it is a pity you did not include any practical examples showing how to calculate most of the values. Otherwise, thank you for sharing!
If you have a testing tool it will do most of what you need automatically. If you want to calculate everything manually in Excel or something, the formula is pretty complex, which wouldn't really fit in a "Statistics Made Easy" video. :-)
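For the curious, here is roughly what that manual calculation looks like as a pooled two-proportion z-test, using only the Python standard library and hypothetical counts; this is a sketch, not necessarily the method any given tool uses:

```python
# Hypothetical counts; standard library only.
from math import sqrt
from statistics import NormalDist

n_a, x_a = 10_000, 400   # control: visitors, conversions
n_b, x_b = 10_000, 460   # variation: visitors, conversions

p_a, p_b = x_a / n_a, x_b / n_b
p_pool = (x_a + x_b) / (n_a + n_b)                       # pooled conversion rate
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error under the null
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))             # two-sided p-value

print(f"z = {z:.2f}, p-value = {p_value:.3f}")
```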
great channel man thank you
Glad you enjoy it!
Hello sir, this is a great video, but I still have some doubts. Is there any chance I could contact you?
I'd be happy to connect on LinkedIn and chat there.
Your videos are awesome but please, please improve the microphone. All the best!!
Noted
Just subscribed
Thanks and welcome!
I don't like to curse, but this is fucking helpful. Thanks so much.
Ha ha. I am glad it was helpful to you
You say 95% of the time I can trust my friend. Shouldn't it be the opposite? What was my hypothesis? You said my friend lies 95% of the time.
Ha ha. You are right, I think I mixed up my words a little on that one.
He is misinterpreting the p-value, which is not simply the reverse of the confidence level.
This video is more focused on the statistics "made easy." We could get all technical but it is enough for people to know that a lower p-value is a good thing. Most testing tools don't even show the p-value and most people learning testing don't even need to understand this. We don't need more complexity in testing we need more strategy. I don't think it would have been better if I would have said "P-values are usually found using p-value tables or spreadsheets/statistical software. These calculations are based on the assumed or known probability distribution of the specific statistic being tested. P-values are calculated from the deviation between the observed value and a chosen reference value, given the probability distribution of the statistic, with a greater difference between the two values corresponding to a lower p-value. Mathematically, the p-value is calculated using integral calculus from the area under the probability distribution curve for all values of statistics that are at least as far from the reference value as the observed value is, relative to the total area under the probability distribution curve...." www.investopedia.com/terms/p/p-value.asp
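To connect the quoted description to something runnable, here is a small sketch of the "area under the probability distribution curve" idea: the two-sided p-value for an observed z-statistic, computed both from the normal CDF and by crude numerical integration of the density (the z value of 2.09 is hypothetical):

```python
# Hypothetical observed z-statistic.
from math import exp, pi, sqrt
from statistics import NormalDist

z_observed = 2.09

# Closed form: twice the upper-tail area of the standard normal distribution.
p_closed = 2 * (1 - NormalDist().cdf(z_observed))

# The same tail area by crude numerical integration of the normal density,
# i.e. literally adding up the area under the curve beyond the observed value.
def density(x: float) -> float:
    return exp(-x * x / 2) / sqrt(2 * pi)

step = 0.001
tail = sum(density(z_observed + i * step) * step for i in range(10_000))
p_numeric = 2 * tail

print(f"p-value (CDF): {p_closed:.4f}, p-value (integration): {p_numeric:.4f}")
```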
You are magnificent! But please ... change the white background ...
What don't you like about the white background? This is the first time I have heard that.