Hi, please share the remaining part:
4. Experiment Design (MDE, Power, Alpha, Sample Size)
5. Running an Experiment (Ramp Up, Validation Checks)
6. Launch Decision (Decision Tree, Post-Launch)
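For anyone curious how the part-4 inputs fit together, here is a minimal sketch of turning an MDE, alpha, and power into a sample size, assuming a two-sided z-test on a per-user metric; all the numbers are made up for illustration.

```python
# Hypothetical numbers for illustration only.
from statsmodels.stats.power import zt_ind_solve_power

mde = 0.05          # minimum detectable effect: +0.05 clicks per user
baseline_std = 1.2  # assumed standard deviation of clicks per user
effect_size = mde / baseline_std  # standardized effect (Cohen's d)

# Users needed per variant for alpha = 0.05, power = 0.80, two-sided test
n_per_group = zt_ind_solve_power(
    effect_size=effect_size,
    alpha=0.05,
    power=0.80,
    alternative="two-sided",
)
print(f"~{n_per_group:.0f} users per variant")
```

Smaller MDEs or higher power push the required sample size up quickly, which is why the MDE is usually negotiated before the test starts.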
Playback Speed 1.5x recommended
Incredible! Professional yappers, very useful for practicing corporate English✍️😁
What's the purpose of having a guest speaker if you don't let her talk?
Well, they made the presentation together and she had quite a bit of engagement, imo. He asked her to talk about the hypothesis statement and they had some solid dialogue... I guess the idea was that he would guide the presentation passively, and she'd give great insights/alternatives. I learned quite a bit.
That's your whole takeaway from the video?
Not really sure why we need the "first session only" constraint? If you just sum/average per user you can also get results "promptly", right?
I agree; the first-session-only approach would also be affected by the novelty effect.
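To make the two options being discussed concrete, here is a small sketch comparing per-user aggregation over all sessions with a first-session-only cut; the session log and column names are made up for illustration.

```python
import pandas as pd

# Hypothetical session-level log; column names are assumptions for illustration.
sessions = pd.DataFrame({
    "user_id": [1, 1, 2, 3, 3, 3],
    "session_start": pd.to_datetime([
        "2024-01-01", "2024-01-02", "2024-01-01",
        "2024-01-01", "2024-01-03", "2024-01-04",
    ]),
    "clicks": [2, 0, 1, 0, 3, 1],
})

# Option A: aggregate over all sessions, so the unit of analysis is the user.
clicks_per_user_all = sessions.groupby("user_id")["clicks"].sum()

# Option B: "first session only" — keep each user's earliest session.
first_sessions = sessions.sort_values("session_start").groupby("user_id").head(1)
clicks_per_user_first = first_sessions.set_index("user_id")["clicks"]

print(clicks_per_user_all.mean(), clicks_per_user_first.mean())
```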
Hey Dan, I really love your courses on YouTube and purchased the monthly subscription a week ago. Now I can't log in because Google blocks it with the warning "Attackers tried to steal your information".
You mentioned that CTR is not a good metric, which is understandable, but how about CTP? In CTP we look at unique page visits and unique clicks, so if a user has multiple sessions and multiple visits, they are all counted as one.
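Reading CTP as click-through probability (did the user click at least once, among users who visited at least once), a rough sketch of that per-user deduplication versus plain CTR; the data and column names are made up for illustration.

```python
import pandas as pd

# Hypothetical event log; column names are assumptions for illustration.
events = pd.DataFrame({
    "user_id":  [1, 1, 1, 2, 2, 3],
    "page_view": [1, 1, 1, 1, 1, 1],
    "clicked":   [1, 1, 0, 0, 0, 1],
})

# CTR: total clicks / total page views (repeated views by one user all count).
ctr = events["clicked"].sum() / events["page_view"].sum()

# CTP: one observation per user — did the user click at least once?
per_user = events.groupby("user_id")["clicked"].max()
ctp = per_user.mean()

print(f"CTR={ctr:.2f}, CTP={ctp:.2f}")
```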
Question on the metric tested in the hypothesis test, avg. # of clicks per user: why wouldn't it be avg revenue per user per day? I believe the user could click the "buy now" button and still exit, or they might buy but spend less money with the new button, etc.
At 24:30, about not using CTR due to the violation of iid, I think you're missing some important info; would love your feedback:
1. If you're aggregating CTR to per-user across sessions, your units are users and still iid, assuming users are independent.
2. Even if measuring at the session level, you can still use the delta method to estimate the variance, which would otherwise be underestimated because sessions are not iid.
Use the delta method to account for the repeated measures.
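For reference, a minimal sketch of the delta-method variance for a ratio-of-sums CTR, assuming users (not sessions) are the iid unit; the function name and numbers are illustrative, not from the video.

```python
import numpy as np

def delta_ratio_variance(clicks_per_user, views_per_user):
    """Variance of CTR = sum(clicks) / sum(views) via the delta method,
    treating the user (not the session) as the iid unit."""
    x = np.asarray(clicks_per_user, dtype=float)  # total clicks per user
    y = np.asarray(views_per_user, dtype=float)   # total views per user
    n = len(x)
    mu_y = y.mean()
    r = x.mean() / mu_y                           # ratio-of-means CTR
    var_x, var_y = x.var(ddof=1), y.var(ddof=1)
    cov_xy = np.cov(x, y, ddof=1)[0, 1]
    # First-order Taylor expansion of x_bar / y_bar around the means
    var_r = (var_x - 2 * r * cov_xy + r**2 * var_y) / (n * mu_y**2)
    return r, var_r

# Illustrative per-user totals (made up)
ctr, var = delta_ratio_variance([2, 0, 1, 4], [10, 3, 5, 12])
print(ctr, var**0.5)
```

The square root of that variance is what would go into the z-statistic in place of the naive session-level standard error.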
User clicks in following sessions are not a violation of iid… your unit of analysis is the 'user'… and if you only measure the user-click uplift in the first session, how could you confidently say that, once launched, the feature actually increases user clicks? This just doesn't make sense…
A user would be way more likely to click a 'buy now' button for a cheap, insignificant item. I think this definitely needs to be controlled for in the experiment.
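One hedged way to do that is to adjust for price as a covariate when estimating the treatment effect on clicks; a rough sketch with made-up data and assumed column names, not the design from the video.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Made-up data; column names (treatment, price, clicked) are illustrative.
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),  # 0 = control, 1 = new "buy now" button
    "price": rng.lognormal(mean=3.0, sigma=1.0, size=n),
})
# Cheaper items get clicked more; the treatment adds a small lift.
p = 1 / (1 + np.exp(-(1.0 - 0.4 * np.log(df["price"]) + 0.2 * df["treatment"])))
df["clicked"] = rng.binomial(1, p)

# Logistic regression: treatment effect on clicking, adjusted for item price.
model = smf.logit("clicked ~ treatment + np.log(price)", data=df).fit(disp=0)
print(model.params["treatment"])
```

Randomization should balance price across variants on average, but adjusting for it reduces variance and guards against an unlucky split toward cheap items in one arm.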
Gold
*God bless me, please. I wish I could get my dream job.*
Sorry, I think you don't know statistics. The t-test is a parametric test.
So wordy... get to the point.
some people love the sound of their own voice
She is beautiful