I've checked tens of videos on A/B testing; this one deserves more likes
Fabulous explanation, and a very crisp video and demonstration with an example. Thank you!
Good job! Your focus on data literacy and data quality is a huge plus of your video! Thank you!
This is really helpful. I learned a lot. Learning more about A/B testing for UX and the natural variance is a new and very interesting concept to me. Thank you!
Glad it was helpful!
This is incredible! Really clear, concise and easy to understand. Thank you so much!
Glad it was helpful
Nice vid on A/B testing and interpreting results through data quality, managerial significance, and statistical significance. Clear and concise!
Thanks for the comment Aleksandr
Binge-watching these AB testing videos :) ;)
Have fun!
Loved the video. Very useful.
Glad it was helpful!
Wooow. Thank you! Really understandable. Very important topic!
You're very welcome!
Really a nice video on A/B Test, thank you!
Glad you liked it!
Quite useful concepts... not everything is captured by a single number. Noted
Glad it was helpful
Variance doesn't need to be consistent. If you already have a fairly optimal site, results will tend to be close, and you just need more volume.
If larger changes are made, results are less likely to be close. In my experience, people running tests make little tweaks to the site and get results that are very close. The bigger the change, the bigger the difference in results for most tests.
Great video. Really helpful mate thanks
Glad you enjoyed it
Very comprehensive videos #gratitude
Glad it was helpful!
Very well explained!
Glad you liked it
Great points, however, doesn't this method suffer from Peeking issues?
If you are peeking at your data for the right reasons, then you aren't suffering from peeking; you're benefiting from it. Check out this video, which explains that in more detail. th-cam.com/video/cS072qIYhBg/w-d-xo.html
Could you do a video explaining lift? I’ve only seen it used in the context of data mining and association rules, but it seems like you’re using it differently here, and I’m not sure what this measure represents in this context.
Hi Kelsey, I think this video will help you out. At 3:26 I start talking about lift. th-cam.com/video/bGdTr7yJbNs/w-d-xo.html
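For anyone reading along: in A/B testing, "lift" usually means the relative improvement of the variant's conversion rate over the control's, which is different from the lift ratio used in association-rule mining. A minimal sketch of that calculation, with made-up visitor and conversion counts:

```python
# Lift as typically used in A/B testing: the relative change of the
# variant's conversion rate over the control's. All numbers below are
# hypothetical, just to illustrate the arithmetic.

def conversion_rate(conversions, visitors):
    """Fraction of visitors who converted."""
    return conversions / visitors

def lift(control_rate, variant_rate):
    """Relative improvement of the variant over the control."""
    return (variant_rate - control_rate) / control_rate

control = conversion_rate(50, 1000)   # 5.0% conversion rate
variant = conversion_rate(60, 1000)   # 6.0% conversion rate

print(f"Lift: {lift(control, variant):.0%}")  # prints "Lift: 20%"
```

So going from a 5% to a 6% conversion rate is a 1-percentage-point absolute change, but a 20% lift, since lift is measured relative to the control.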
@@TestingTheory thanks so much, that helped a lot.
Loved the video
Glad you like it!
Well explained.
Thanks Hafiz.
thank you for your video, it's very useful and easy to understand :)
Glad it was helpful!
Awesome, thank you!
Glad you like it, thanks Rohan.
Can I run this test over one full day?
Great advice by the way.
Are you talking about running the natural variance test? You would want to get more data than just a day, but one day is better than nothing. I guess it depends on your risk tolerance for an unknown, higher-or-lower amount of variance. Personally, I prefer more data so I can be more confident in the numbers I'm using.
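One way to see why a single day can mislead: simulate an A/A test, where both groups get the exact same experience, so any measured "lift" is pure natural variance. A rough sketch with assumed, made-up numbers (2,000 visitors per arm per day, a true 5% conversion rate):

```python
import random

random.seed(42)  # fixed seed so the simulation is repeatable

# A/A test simulation: both arms serve the SAME page, so any apparent
# lift is noise. Traffic and rate below are illustrative assumptions.
TRUE_RATE = 0.05        # true conversion rate for both arms
VISITORS_PER_DAY = 2000  # visitors per arm per day

def daily_rate():
    """Simulate one arm's observed conversion rate for one day."""
    conversions = sum(random.random() < TRUE_RATE
                      for _ in range(VISITORS_PER_DAY))
    return conversions / VISITORS_PER_DAY

for day in range(1, 8):
    a, b = daily_rate(), daily_rate()
    apparent_lift = (b - a) / a
    print(f"Day {day}: A={a:.2%}  B={b:.2%}  apparent lift={apparent_lift:+.1%}")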
Thx a lot!
You're welcome!
Nice video. Nicer shirt.
Thanks!
great video! :)
Thanks Anna.
This is a great video, thank you. Do PMs design, implement and run the A/B testing or get help from data scientists to do it and report the result?
It really depends on your organization. Some organizations have a cross-functional team of experts to support this work. In most smaller organizations, a person might wear multiple hats and perform many functions.
Steadily 😎
100.... wow that's low
Yes, that would be the extreme example, and if it were that low you would need to see the trend and lift doing well too. That one data point in isolation isn't enough.
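A quick way to see why a sample that small isn't enough on its own: compare the rough 95% confidence interval around a measured conversion rate at different sample sizes. A sketch using the normal approximation, with assumed example numbers (a 5% observed rate at 100 vs. 10,000 visitors):

```python
import math

# Rough 95% confidence interval for a conversion rate using the normal
# approximation. Sample sizes and the 5% rate below are illustrative.

def rate_ci(conversions, visitors, z=1.96):
    """Return (low, high) bounds of an approximate 95% CI for the rate."""
    p = conversions / visitors
    margin = z * math.sqrt(p * (1 - p) / visitors)
    return p - margin, p + margin

for visitors in (100, 10_000):
    conversions = round(visitors * 0.05)  # same 5% observed rate
    lo, hi = rate_ci(conversions, visitors)
    print(f"n={visitors:>6}: observed 5%, plausible range [{lo:.1%}, {hi:.1%}]")
```

With only 100 visitors, the interval spans roughly 1% to 9%, so that one number tells you almost nothing by itself; at 10,000 visitors it tightens to about 4.6% to 5.4%, which is why trend and lift need to back up a small sample.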