If you enjoyed discussing this machine learning question with us, explore more on our website!
Keyword Bidding: www.interviewquery.com/questions/keyword-bidding?
Dynamic Pricing System: www.interviewquery.com/questions/dynamic-pricing-system?
Bank Fraud Model: www.interviewquery.com/questions/bank-fraud-model?
This guy has mastered the art of talking for 20 minutes about something that could be explained to a technologist in 2 minutes, and that, my friends, is what a system design interview is all about. You have to talk about every detail, no matter how boring or mundane it is to you, or how obvious you might think it is.
That's the hardest part of it. I always want to get to the point :(
Typical end-to-end ML question:
Understand the problem, Data Collection, Feature Engineering, Build Model, Train Model, Evaluate Performance (Confusion Matrix: Precision & Recall), Deploy Model, Rebuild Model if needed
Good summary!
Decent summary, but most FAANG interviewers would probably dock points for not discussing online training, A/B testing, and exploratory analysis for model selection.
I feel this video is a fantastic resource. Not only was the explanation great and very insightful, but I think you also asked the right questions, going the extra mile in the explanation/analysis... thank you for sharing!
Two points I would have added for the end questions:
1. To overcome the coded firearm words, use transformer models like BERT: you can catch the meaning via the embeddings (i.e., cosine similarity) and filter the best matches.
2. Computer vision on the images can be used as additional inference if the F1 score is low, but not always, as this type of inference is more expensive.
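As a rough sketch of point 1: the embedding idea reduces to comparing vectors, where these toy 4-d vectors stand in for real BERT sentence embeddings (the example listing texts in the comments are hypothetical):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-d vectors standing in for real BERT sentence embeddings.
listing     = np.array([0.9, 0.1, 0.0, 0.2])  # e.g. "boom stick for sale"
firearm_ref = np.array([1.0, 0.0, 0.1, 0.1])  # e.g. "gun for sale"
sofa_ref    = np.array([0.0, 1.0, 0.9, 0.0])  # e.g. "couch for sale"

# A coded listing can sit close to a firearm reference in embedding
# space even when it shares no keywords, so it gets routed to review.
print(cosine_similarity(listing, firearm_ref))  # high
print(cosine_similarity(listing, sofa_ref))     # low
```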
But as they said, inference speed is not as important as model accuracy/precision for this specific problem (regarding your second point).
Thanks for tuning in! If you're interested in learning more about machine learning, be sure to check out our machine learning course. It's designed to help you master the key concepts and skills needed to excel in machine-learning roles.
www.interviewquery.com/learning-paths/modeling-and-machine-learning
Wow, this guy is good. I really like how he starts from a model framework with a baseline model and points out the reasoning and key considerations - from there we can evolve to a more complicated model using the same reasoning.
The first tree-based model Zarrar was talking about is probably AdaBoost, where the weights of misclassified instances are increased for the next tree.
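For reference, the AdaBoost reweighting step can be sketched like this (toy numbers, not anything from the video):

```python
import numpy as np

def adaboost_reweight(weights, y_true, y_pred):
    """One AdaBoost round: upweight misclassified examples, renormalize."""
    mis = y_true != y_pred
    err = weights[mis].sum() / weights.sum()   # weighted error of this tree
    alpha = 0.5 * np.log((1 - err) / err)      # this tree's vote weight
    new_w = weights * np.exp(np.where(mis, alpha, -alpha))
    return new_w / new_w.sum(), alpha

weights = np.full(5, 0.2)                      # start uniform
y_true = np.array([1, 1, 0, 0, 1])
y_pred = np.array([1, 0, 0, 0, 1])             # one mistake (index 1)
new_w, alpha = adaboost_reweight(weights, y_true, y_pred)
# new_w[1] is now the largest weight, so the next tree focuses on it
```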
He should have mentioned that people try to disguise the actual product description using proxy words.
Also, to decide whether to include image analysis or not, I'd draw multiple samples and train models in an A/B setting, then run a t-test to see whether the mean prediction metric differs significantly.
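A sketch of that comparison, assuming you have a prediction metric (say F1) from repeated resampled training runs per variant - the numbers here are simulated, not real results:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated F1 scores from 20 resampled training runs per variant.
f1_text_only   = rng.normal(0.78, 0.02, size=20)  # A: text features only
f1_with_images = rng.normal(0.81, 0.02, size=20)  # B: text + image features

# Two-sample t-test on the mean metric across runs.
t_stat, p_value = stats.ttest_ind(f1_with_images, f1_text_only)
if p_value < 0.05:
    print("image features give a statistically significant lift")
```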
Amazing! As a point to improve even more, I'd add, as a finishing touch, fine-tuning the model with adversarial examples.
Sorry, where did you discuss the label generation part? There are multiple ways to generate labels, each with pros and cons:
1. User feedback: automatic, lots of data, but noisy.
2. Manual annotation: accurate labels, but not scalable; a very high proportion of examples would be tagged as negative.
3. Bootstrap: train a simple model and sample more examples based on model scores to get a higher proportion of positive examples.
4. Hybrid: manually annotate examples marked as "X" by users, where "X" can be tags like "illegal", "offensive", etc.
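A minimal sketch of option 3 (bootstrap sampling), with entirely synthetic features and labels:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Small manually labelled seed set; the positive class is rare.
X_seed = rng.normal(size=(200, 5))
y_seed = (X_seed[:, 0] > 1.0).astype(int)

# Large unlabelled pool of listings.
X_pool = rng.normal(size=(10_000, 5))

# Train a simple model on the seed set, score the pool, and send the
# highest-scoring examples to annotators - they contain a much higher
# proportion of positives than a uniform random sample would.
model = LogisticRegression().fit(X_seed, y_seed)
scores = model.predict_proba(X_pool)[:, 1]
to_annotate = np.argsort(scores)[-100:]
```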
You can also scrape for images and generate listings using LLMs for high-quality synthetic data.
Re: whether or not to do CV on the images - shouldn't one do error analysis to check whether the text and other features lacked predictive power and the signal was elsewhere (i.e., in the images), and only then invest in extracting signals from images? The alternative - building a giant model with all features and running ablations to understand feature-class importance - seems quite expensive.
If you are working for FB, you can afford to go for an expensive model. If a candidate didn’t mention CV, I would be unhappy since there is a good source of data you are not making use of.
It's also possible to use re-ranking or bagging approaches to combine the XGBoost model with a vision/NLP model, which would most likely improve performance.
You mean use a gradient boosted tree in the first stage, and in the second stage use a vision/MLP model (which is more complex and takes longer to execute)?
I would have suggested a CNN as an alternative approach, but yes, agreed. A listing is not only an image but also text. The edge case where listings have different text and different images won't get captured. Thank you.
I haven't watched the video yet, but a lot of people will dock points for over-engineering. If a really basic ensemble approach (one model for images, one for text) can achieve the goal instead of a single multi-modal one, with fewer resources at every step, go for that and explain why.
Now, to be fair, you were commenting prior to multimodal LLMs being everywhere, so that does change the considerations.
@@sophiophile thank you
Around @12:00, the algorithm that upweights incorrect predictions is AdaBoost rather than GBM, right?
Yes.
Can it be XGBoost as well?
This is a fantastic video for getting an idea of an ML system design interview! Thanks for making this.
Hello. I think this was super helpful overall. I'm a little confused by how he describes gradient boosting: for each successor tree, we should set new targets based on the predecessor's errors (residuals), no? (and leave the weights alone)
I think he was talking about adaboost instead of gradient boosting.
@@jiahuili2133 How does a gradient boosted tree work in this context? Any other models we could use here? Unsupervised machine learning?
@@Gerald-iz7mv You already have labels, though. So supervised learning is probably superior.
@@sophiophile But isn't labeling the data a lot of effort?
@@Gerald-iz7mv You already have labelled data in this case. They described having the historical set of previously flagged posts. Also, expecting to cluster out the gun posts in an unsupervised manner when they make up such a small proportion of the listings is unrealistic. The other thing is that feature engineering and labeling pipelines are simply part of the job, when it comes to ML.
Nowadays, you can also very easily create synthetic labelled data of a very high quality using generative models as well to help with the imbalanced set.
What will be the online metric in this case? A reduction in reported or flagged items? Also, awesome explanation!!
Did you use a whiteboard for the ML design architecture? Is whiteboarding helpful in the interview?
@iqjayfeng I think Zarrar mistakenly mixed up False Positives and False Negatives around the 2:00 mark. It would be OK if customer service received False Positives (model predicted True but it's really False), not False Negatives.
A False Positive means the model predicted True but the label is False. A False Negative means the model predicted False but the label is True.
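In code form, with made-up confusion counts:

```python
def precision(tp, fp):
    """Of everything the model flagged, what fraction was truly a violation?"""
    return tp / (tp + fp)

def recall(tp, fn):
    """Of all true violations, what fraction did the model catch?"""
    return tp / (tp + fn)

# Hypothetical confusion counts for a firearm-listing classifier:
tp, fp, fn = 80, 20, 40
print(precision(tp, fp))  # 0.8
print(recall(tp, fn))     # 0.666...
```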
Why does he say that it is a better idea to use NN rather than gradient boosted trees if we need to continuously train/update the model with every new training label that we collect from the customer labeling team?
Because you can update a NN weights with just new data points by fine-tuning unlike tree based models which *may* require re-training with old+new data
Remember tree based models are sensitive to change in data
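A toy illustration of the fine-tuning point, using scikit-learn's `partial_fit` to nudge an existing network with only the new batch (all data here is synthetic):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_old = rng.normal(size=(500, 10))
y_old = (X_old[:, 0] > 0).astype(int)

# Initial training run.
nn = MLPClassifier(hidden_layer_sizes=(16,), random_state=0, max_iter=300)
nn.fit(X_old, y_old)

# Later: a small batch of freshly labelled listings arrives from the
# review team. The existing weights are updated with just this batch;
# a tree ensemble would typically be retrained on old + new data instead.
X_new = rng.normal(size=(32, 10))
y_new = (X_new[:, 0] > 0).astype(int)
nn.partial_fit(X_new, y_new)
```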
what does the following mean? TF-IDF: "We scale the values of each word based of each frequency in different postings"?
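The quoted line seems to describe inverse document frequency scaling: a word's count in a posting is scaled down if the word appears in many postings, so common filler words get low weights and distinctive words get high ones. A toy example with made-up listings:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# "sale" appears in every posting, "rifle" in only one, so TF-IDF
# downweights the common word and upweights the distinctive one.
docs = [
    "rifle for sale",
    "couch for sale",
    "table for sale",
]
vec = TfidfVectorizer()
X = vec.fit_transform(docs)

vocab = vec.vocabulary_
row0 = X[0].toarray().ravel()
print(row0[vocab["rifle"]] > row0[vocab["sale"]])  # True
```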
I can never remember what Precision and Recall stand for. It is clearly visible that the interviewee was also confused, and the video is edited around that point.
Great video. I find all the quick cuts to be a bit disorienting though.
You want a system that over-flags and produces lots of false positives, because false negatives can be catastrophic: legally, for the reputation of the business, they could even lead to regulatory action and media scrutiny, killing sales, market cap, etc. You then have agents go through the flagged items and efficiently decide whether they are truly false positives or not. This data can also help train the model. The cost of hiring people to review is much lower than losing 5% of market cap due to negative press.
Great insights, but the text data can be in various languages. When he said to augment some keywords for detection, can that work across languages, or do you train a different model per language? Just curious.
Synonyms and similar words can help enrich the classifier and create new features.
Great videos! Where do you get the sample questions from shown at the start of the video?
www.interviewquery.com/
Great Interview Zarrar!
Thanks folks! That was incredible.
We're glad it was helpful!
@@iqjayfeng Absolutely!
Very useful! Thanks for sharing. Do they ask about data pipelines and technologies that might be useful for scaling the model (for the MLE role)? Would love to know more resources on that, as well as more mock interviews :)
Definitely in the MLE interview loops!
If the dataset is imbalanced, why bother using accuracy as the metric to evaluate the model?
I believe he was using accuracy in its semantic meaning, not the actual metric. He already said he would use F1, and then referred to it as “accuracy” because it’s an easier word. Probably “score” would have cleared the confusion.
Great insights to sample questions
this is the best video ever
Wow this was so useful.
GBM is fast to train?????
A simple boosting algorithm can't be parallelized across trees, unlike bagging, but XGBoost is optimized for parallel processing within each tree.
An F2 score would be better here, I think 🤔
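For what it's worth, F-beta with beta=2 treats recall as more important than precision, which suits a problem where missing a firearm listing (a false negative) is the costly error. A quick sketch with made-up labels:

```python
from sklearn.metrics import fbeta_score, f1_score

y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 1, 0, 0]   # recall 0.75, precision 0.60

# F2 weights recall more heavily, so it rewards this recall-leaning
# classifier more than the balanced F1 does.
f1 = f1_score(y_true, y_pred)             # ≈ 0.667
f2 = fbeta_score(y_true, y_pred, beta=2)  # ≈ 0.714
```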
thanks Zarrar
Excellent
Thank you! Cheers!
What will happen if it's a toy gun?
Too much of a mock, not like a real interview: everything was verbal, without any drawing or writing.
intellectual masturbation