Great mock interview! In the future, I think it would be helpful to mention what level of experience the candidate has (e.g., new grad, 3 years of experience) so that people watching with varying levels of experience can have a better sense of where they should be or what's expected.
Thanks for the suggestion! -- Dan
You mentioned the randomization portion makes sense, but I don't think it does. This seems like a really good example of network effects potentially impacting the results: if two friends are binned into control and variant, the variant friend may call the control friend more frequently, leading to a rise in both the variant and control metrics.
So either the metric would need to be adjusted to "# of calls STARTED by the user," or the randomization should be done using network information, randomizing whole sets of user networks.
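Not the original poster's code, but a minimal sketch of what that network-aware randomization could look like, assuming the friend graph is available as an edge list (the edges below are made up for illustration):

```python
import random
import networkx as nx

# Hypothetical friend graph; in practice this would come from the social graph.
friend_edges = [("u1", "u2"), ("u2", "u3"), ("u4", "u5"), ("u6", "u7")]
G = nx.Graph(friend_edges)

random.seed(42)
assignment = {}
# Randomize whole connected friend clusters instead of individual users,
# so a variant user and a control user never share a friendship edge.
for cluster in nx.connected_components(G):
    arm = random.choice(["control", "variant"])
    for user in cluster:
        assignment[user] = arm

print(assignment)  # every user in a cluster lands in the same arm
```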
Hey Dan, this is a really great mock interview. Enjoyed watching it so much!
One small thought of mine: an additional thing to consider for the A/B testing framework would be network effects. If the call feature is designed to increase the number of calls per user, then because you always need two people for a call to happen, the call receiver can end up in the control group, leading to spillover effects between treatment and control.
Thanks for the input! -- Dan
Great point on the UI + algorithm test. However, I think the test should be a simple A/B/C test and not a paired t-test, which is typically performed on the same population at two different points in time.
Do you have any resources on how to learn which test to select?
On picking which feature is best: I think one can do SVD or PCA here and look at the weights of the largest eigenvector (the first principal component in PCA, or the corresponding column of the matrix V in SVD) on each initial feature. There you should see the best combo and the largest contributor.
Tell this to Meta :)
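For what it's worth, here is a rough sketch of that idea with scikit-learn on made-up data; the loadings on the first principal component are just a heuristic for which feature contributes most variance, not a definitive ranking:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # made-up engagement features (calls, messages, ...)

pca = PCA(n_components=2)
pca.fit(X)

# Rows of components_ are the eigenvectors (columns of V in the SVD of the
# centered data); the first row gives each feature's weight on the top component.
loadings = pca.components_[0]
print("explained variance ratio:", pca.explained_variance_ratio_)
print("feature weights on PC1:", loadings)
print("largest contributor: feature", np.argmax(np.abs(loadings)))
```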
This is a good example of why following frameworks blindly is a terrible idea. Most of the metrics have nothing to do with engagement. It almost seemed that the interviewee had a checklist and was just filling in the details. The interviewer did a poor job by not emphasizing that the metrics are for measuring engagement.
Although I agree with your overall point, I think calling it a "terrible idea" is a bit harsh. The framework does help the interviewee structure the order of what to consider. He more so missed out on fleshing out the metrics in a way that connects them to the business case. This is particularly difficult to do when you are given a product you are not familiar with; most people do not use Portal. If this were a case study on Instagram, by contrast, it would probably be easier for an interviewee to think of more meaningful and useful engagement metrics. Just my two cents.
@intrepid_grovyle Yes, and given the stress during the interview, inexperienced interviewees tend to use frameworks for safety.
For the t-test, since this is a multivariate test, wouldn't we apply Bonferroni correction?
Great mock interview! For the effect of the UI or the algorithm, we can use an A/B/N testing design:
- Control Group (A): This group does not have access to the "Call Suggest" feature. This serves as the baseline to measure the impact of introducing the feature.
- Treatment Group 1 (B - Algorithm Only): This group receives the "Call Suggest" feature but with a basic or standard UI. The purpose of this group is to test the effectiveness of the algorithm itself without significant influence from UI changes.
- Treatment Group 2 (C - UI Only): This group receives the same recommendations (possibly using a basic algorithm or random suggestions) but with the enhanced UI designed to highlight or promote the "Call Suggest" feature. This helps isolate the effect of the UI design on user engagement.
- Treatment Group 3 (D - Full Feature): This group receives the "Call Suggest" feature with both the advanced algorithm and the enhanced UI. This group tests the combined effect of the algorithm and the UI.
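Since those four arms form a 2x2 factorial (algorithm on/off x UI on/off), one way to analyze them is a two-way ANOVA with an interaction term. A sketch on simulated data, where calls_per_user and the effect sizes are invented stand-ins:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
n = 1000  # users per arm

# Simulated outcomes for the four arms: A (neither), B (algo), C (ui), D (both).
rows = []
for algo in (0, 1):
    for ui in (0, 1):
        effect = 0.3 * algo + 0.2 * ui + 0.1 * algo * ui  # assumed effects
        calls = rng.poisson(lam=2.0 + effect, size=n)
        rows.append(pd.DataFrame({"algo": algo, "ui": ui, "calls_per_user": calls}))
df = pd.concat(rows, ignore_index=True)

# Two-way ANOVA: main effects of algorithm and UI, plus their interaction.
model = smf.ols("calls_per_user ~ C(algo) * C(ui)", data=df).fit()
print(anova_lm(model, typ=2))
```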
This is gold! Thank you!
Step 6: pairwise t-test. How did you come to decide on this? Any resources?
Since it's a multivariate test, why not use ANOVA and Tukey HSD pairwise comparisons?
How is a pairwise t-test going to capture the interaction?
I guess what Dan means is that ANOVA alone is not enough. It's common to conduct post-hoc tests (what you mentioned) to identify which specific groups differ from each other. The Bonferroni correction can be applied during these post-hoc tests to adjust for the multiple comparisons being made.
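A small sketch of that two-step flow on simulated data (the group means are invented): an omnibus one-way ANOVA first, then Bonferroni-adjusted pairwise t-tests:

```python
from itertools import combinations

import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(7)
groups = {
    "A": rng.normal(2.0, 1.0, 500),
    "B": rng.normal(2.1, 1.0, 500),
    "C": rng.normal(2.4, 1.0, 500),
}

# Step 1: omnibus one-way ANOVA across all arms.
f_stat, p_omnibus = stats.f_oneway(*groups.values())
print(f"ANOVA p-value: {p_omnibus:.4f}")

# Step 2: post-hoc pairwise t-tests, Bonferroni-corrected for multiple comparisons.
pairs = list(combinations(groups, 2))
raw_p = [stats.ttest_ind(groups[a], groups[b]).pvalue for a, b in pairs]
reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method="bonferroni")
for (a, b), p, r in zip(pairs, adj_p, reject):
    print(f"{a} vs {b}: adjusted p={p:.4f}, significant={r}")
```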
Very helpful mock!
Can you do Data Engineering mock interviews? Like Data Modeling and ETL design?
Yes!
@DataInterview Sweet! I have a Meta Data Engineer interview next week!
Hey Dan, I would like to subscribe to your premium option. I was wondering how many courses cover A/B testing? And are there mock videos for A/B testing too?
Hey Pitt, there are 12 lessons that cover core concepts in A/B testing, and some of the mock interview videos in the premium subscription course contain A/B testing questions as well. Please check out the course page for more info: datainterview.com/pricing/
I’m wondering if there is a way to sign up for this kind of mock interview?
Hi Ruby, please check out the coaching page to enroll for such a mock interview. Here's the link: datainterview.com/coaching/
"Q2 - How would you prioritize the metrics you listed in Q1?" Shouldn't it be the executive who picks out the top 3 metrics? I would present the result of Q1 and ask the executive to rank the priority... Also, can you elaborate on why choosing t-test instead of z-test?
What is CLT? Can anyone please explain?
When Mark wrote it down he was talking about Customer Lifetime Value, probably just a typo.
So hard to understand Mark's accent
I don't understand a word he's saying.
You can either improve your listening skills or turn on the captions. The truth is that there will be many non-native speakers working at Meta. Will you quit your job just because of the accent your colleagues have?