Meta (Facebook) Machine Learning Mock Interview: Illegal Items Detection

  • Published: 17 Jan 2022
  • Today Zarrar talks us through this question asked by Facebook about how to use Machine Learning to flag illegal items posted on a marketplace.
    Try adding your own solution to the question here: www.interviewquery.com/questi...
    👉 Subscribe to my data science channel: bit.ly/2xYkyUM
    🔥 Get 10% off machine learning interview prep: www.interviewquery.com/pricin...
    ❓ Check out our machine learning interview course: www.interviewquery.com/course...
    🔑 Get professional coaching from Zarrar here: www.interviewquery.com/coachi...
    🐦 Follow us on Twitter: / interview_query
    Attention Hiring Managers & Recruiters: Ready to find the top 1% of machine learning talent for your team? Accelerate your hiring with Outsearch.ai. Their AI-powered platform seamlessly filters the best candidates, making building your dream team easier than ever: www.outsearch.ai/?...
    More from Jay:
    Read my personal blog: datastream.substack.com/
    Follow me on Linkedin: / jay-feng-ab66b049
    Find me on Twitter: / datasciencejay
    Related Links:
    Facebook Data Science Interview Questions: www.interviewquery.com/blog-f...
    Facebook Data Science Internships: How to Land the Job: www.interviewquery.com/p/face...
  • Science

Comments • 57

  • @AlexXPandian
    @AlexXPandian 1 month ago +5

    This guy has mastered the art of talking for 20 minutes about something that could be explained to a technologist in 2 minutes, and that, my friends, is what a system design interview is all about. You have to talk about every detail, no matter how boring/mundane it is to you or how obvious you might think it is.

  • @anasal-tirawi2096
    @anasal-tirawi2096 2 years ago +63

    Typical end-to-end ML question:
    Understand the problem, data collection, feature engineering, building the model, training the model, evaluating performance (confusion matrix: precision & recall), deploying the model, rebuilding the model if needed
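The evaluation step in the summary above can be made concrete: precision and recall fall straight out of confusion-matrix counts. A minimal sketch (the counts below are invented, not from the video):

```python
# Toy confusion-matrix counts for an "illegal item" classifier (made-up numbers).
tp, fp, fn, tn = 80, 20, 10, 890

precision = tp / (tp + fp)  # of listings flagged illegal, how many really were
recall = tp / (tp + fn)     # of truly illegal listings, how many we caught
f1 = 2 * precision * recall / (precision + recall)

print(precision, recall, f1)
```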

    • @hongliangfei3170
      @hongliangfei3170 1 year ago +2

      Good summary!

    • @sophiophile
      @sophiophile 3 months ago +5

      Decent summary, but most FAANG interviewers would probably dock points for not discussing online training, A/B testing, and exploratory analysis for model selection

  • @julianmartindelfiore7420
    @julianmartindelfiore7420 2 years ago +26

    I feel this video is a fantastic resource: not only was the explanation great and very insightful, but I think you also asked the right questions, going the extra mile in the explanation/analysis... thank you for sharing!

  • @ploughable
    @ploughable 3 months ago +1

    Two points I would have added for the end questions:
    1. To overcome the coded firearm words -> use transformer models like BERT, since you can catch the meaning via the embeddings (i.e. cosine similarity) and filter the best ratings
    2. Computer vision on the images can be used as additional inference if the F1 score is low, but not always, as this type of inference is more expensive
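Point 1 above reduces to cosine similarity between embedding vectors. A minimal sketch with hand-made 3-d vectors standing in for real BERT embeddings (all vectors and the "coded listing" scenario are illustrative only):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Pretend embeddings: a disguised listing should land near "firearm" in
# embedding space even if it shares no surface keywords.
emb_firearm = [0.9, 0.1, 0.2]
emb_coded   = [0.8, 0.2, 0.3]   # hypothetical embedding of a coded listing
emb_couch   = [0.1, 0.9, 0.1]

print(cosine_similarity(emb_firearm, emb_coded))  # high -> flag for review
print(cosine_similarity(emb_firearm, emb_couch))  # low  -> likely benign
```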

  • @umamiplaygroundnyc7331
    @umamiplaygroundnyc7331 7 months ago +1

    Wow, this guy is good. I really like how he starts from a model framework with a baseline model and points out the reasoning and key considerations - from there we can evolve to a more complicated model with the same kind of reasoning

  • @sallespadua
    @sallespadua 1 year ago +3

    Amazing! As a finishing touch to improve it even more, I'd add fine-tuning the model with adversarial examples.

  • @being.jajabor2187
    @being.jajabor2187 2 years ago

    This is a fantastic video for giving an idea of an ML system design interview! Thanks for making this.

  • @sunny2253
    @sunny2253 3 months ago +1

    Should've mentioned that people try to disguise the actual product description using proxy words.
    Also, to include image analysis or not, I'd draw multiple samples and train models in A/B setting. Then run a t-test to see if the mean prediction metric is significantly different or not.
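The A/B comparison described above amounts to a two-sample t-test on per-run metric values. A minimal sketch using Welch's t-statistic in plain Python (the F1 samples are invented; in practice you'd use scipy.stats.ttest_ind for the p-value):

```python
import math

def welch_t(a, b):
    """Welch's two-sample t-statistic (unequal variances)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# Hypothetical F1 scores from models trained on bootstrap samples,
# with (A) and without (B) image features.
f1_with_images    = [0.81, 0.83, 0.80, 0.84, 0.82]
f1_without_images = [0.76, 0.78, 0.75, 0.77, 0.79]

t = welch_t(f1_with_images, f1_without_images)
print(t)  # compare against a t critical value to decide significance
```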

  • @RanjitK1
    @RanjitK1 1 year ago

    Great Interview Zarrar!

  • @junweima
    @junweima 1 year ago +2

    It's also possible to use re-ranking or bagging approaches to combine the XGBoost model and the vision/NLP model, which would most likely improve performance

    • @Gerald-iz7mv
      @Gerald-iz7mv 5 months ago

      You mean use a gradient boosted tree in the first stage, and in the second stage use a vision/NLP model (which is more complex and takes longer to execute)?

  • @marywang8013
    @marywang8013 1 year ago +1

    Did you use a whiteboard for the ML design architecture? Is whiteboarding helpful in the interview?

  • @iqjayfeng
    @iqjayfeng  2 years ago +2

    Thanks for tuning in! If you're interested in learning more about machine learning, be sure to check out our machine learning course. It's designed to help you master the key concepts and skills needed to excel in machine-learning roles.
    www.interviewquery.com/learning-paths/modeling-and-machine-learning

  • @mdaniels6311
    @mdaniels6311 3 days ago

    You want a system that overfits and produces lots of false positives, since false negatives can be catastrophic legally and for the reputation of the business, and could even lead to regulatory action and media scrutiny, killing sales, market cap, etc. You then have agents go through the flagged items and efficiently decide whether they are truly false positives or not. This data can also help train the model. The cost of hiring people to check is much lower than losing 5% of market cap due to negative press.
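Tuning for the high-recall regime described above usually means lowering the decision threshold, trading extra false positives (which humans then review) for fewer false negatives. A minimal sketch with made-up scores and labels:

```python
def recall_at_threshold(scores, labels, threshold):
    """Recall of the positive (illegal) class at a given score threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return tp / (tp + fn)

# Hypothetical model scores and true labels (1 = illegal listing).
scores = [0.95, 0.62, 0.40, 0.30, 0.85, 0.55, 0.10]
labels = [1,    1,    1,    0,    0,    1,    0]

print(recall_at_threshold(scores, labels, 0.8))  # strict: misses most illegal items
print(recall_at_threshold(scores, labels, 0.3))  # lenient: catches more, at the cost of false positives
```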

  • @87prak
    @87prak 9 months ago +2

    Sorry, where did you discuss the label generation part? There are multiple ways to generate labels, with pros and cons:
    1. User feedback: automatic, lots of data, but noisy.
    2. Manual annotation: accurate labels but not scalable. A very high proportion of examples would be tagged as negative.
    3. Bootstrap: train a simple model and sample more examples based on model scores to get a higher proportion of positive examples.
    4. Hybrid: manually annotate examples marked as "X" by users, where "X" can be tags like "illegal", "offensive", etc.
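Strategy 3 (bootstrap) above can be sketched as score-ranked sampling for annotation: a cheap first-pass model scores every listing, and human annotators review the top of the ranking to enrich the positive class. The listings, scores, and budget below are illustrative:

```python
def select_for_annotation(listings, score_fn, budget):
    """Send the `budget` highest-scoring listings to human annotators."""
    ranked = sorted(listings, key=score_fn, reverse=True)
    return ranked[:budget]

# Hypothetical listings with first-pass model scores.
listings = [
    {"id": 1, "score": 0.91},
    {"id": 2, "score": 0.05},
    {"id": 3, "score": 0.77},
    {"id": 4, "score": 0.12},
]
to_review = select_for_annotation(listings, lambda x: x["score"], budget=2)
print([x["id"] for x in to_review])  # the most suspicious listings first
```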

    • @sophiophile
      @sophiophile 3 months ago

      You can also scrape for images and generate listings using LLMs for high-quality synthetic data.

  • @bhartendu_kumar
    @bhartendu_kumar 2 years ago +3

    Great insights into the sample questions

  • @Gerald-iz7mv
    @Gerald-iz7mv 5 months ago

    what does the following mean? TF-IDF: "We scale the values of each word based of each frequency in different postings"?
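For context, TF-IDF scales a word's in-posting frequency by how rare the word is across all postings, so distinctive words score higher than common ones. A minimal hand-rolled sketch (the toy postings are invented; a real pipeline would use e.g. scikit-learn's TfidfVectorizer):

```python
import math

postings = [
    "selling used couch",
    "selling used bike",
    "rifle scope for sale",
]

def tf_idf(word, posting, corpus):
    """Term frequency in one posting, scaled by rarity across the corpus."""
    tf = posting.split().count(word) / len(posting.split())
    df = sum(1 for p in corpus if word in p.split())  # postings containing the word
    idf = math.log(len(corpus) / df)                  # rare words get a boost
    return tf * idf

# "selling" appears in 2 of 3 postings -> low idf; "rifle" in 1 of 3 -> high idf.
print(tf_idf("selling", postings[0], postings))
print(tf_idf("rifle", postings[2], postings))
```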

  • @robertknight9242
    @robertknight9242 2 years ago +2

    Great videos! Where do you get the sample questions from shown at the start of the video?

    • @iqjayfeng
      @iqjayfeng  2 years ago +2

      www.interviewquery.com/

  • @_seeker423
    @_seeker423 2 years ago +3

    Re: whether or not to do CV on images - shouldn't one do error analysis to check whether text and other features lacked predictive power and the signal was elsewhere (i.e. images), which is why we should invest in extracting signals from images, as opposed to building a giant model with all features and doing ablations to understand feature-class importance? The latter seems quite expensive.

    • @besimav
      @besimav 1 year ago +3

      If you are working for FB, you can afford to go for an expensive model. If a candidate didn’t mention CV, I would be unhappy since there is a good source of data you are not making use of.

  • @pratikmandlecha6672
    @pratikmandlecha6672 1 year ago

    Wow this was so useful.

  • @jamessukanto8078
    @jamessukanto8078 2 years ago +7

    Hello. I think this was super helpful overall. I'm a little confused when he describes Gradient Boosting. For each successor tree, we should set new target labels for training errors in the predecessor, no? (and leave the weights alone)

    • @jiahuili2133
      @jiahuili2133 2 years ago +10

      I think he was talking about adaboost instead of gradient boosting.
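To make the distinction in this thread concrete: gradient boosting (with squared loss) fits each successor learner to the residuals of the ensemble so far, with no sample re-weighting. A minimal sketch on 1-d toy data using regression stumps (all numbers invented):

```python
def fit_stump(x, residuals):
    """Best single-split regression stump on 1-d data (squared loss)."""
    best = None
    for t in x:
        left  = [r for xi, r in zip(x, residuals) if xi <= t]
        right = [r for xi, r in zip(x, residuals) if xi > t]
        lm = sum(left) / len(left) if left else 0.0
        rm = sum(right) / len(right) if right else 0.0
        sse = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    _, t, lm, rm = best
    return lambda xi: lm if xi <= t else rm

x = [1.0, 2.0, 3.0, 4.0]
y = [1.0, 1.0, 3.0, 3.0]
pred = [0.0] * len(x)

for _ in range(3):  # each round fits the *residuals*, not re-weighted samples
    residuals = [yi - pi for yi, pi in zip(y, pred)]
    stump = fit_stump(x, residuals)
    pred = [pi + 0.5 * stump(xi) for pi, xi in zip(pred, x)]  # 0.5 = learning rate

print(pred)  # predictions move toward y each round
```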

    • @Gerald-iz7mv
      @Gerald-iz7mv 5 months ago

      @@jiahuili2133 How does a gradient boosted tree work in this context? Any other models we could use here? Unsupervised machine learning?

    • @sophiophile
      @sophiophile 3 months ago

      @@Gerald-iz7mv You already have labels, though. So supervised learning is probably superior.

    • @Gerald-iz7mv
      @Gerald-iz7mv 3 months ago

      @@sophiophile but labeling the data is a lot of effort?

    • @sophiophile
      @sophiophile 3 months ago

      @@Gerald-iz7mv You already have labelled data in this case. They described having the historical set of previously flagged posts. Also, expecting to cluster out the gun posts in an unsupervised manner when they make up such a small proportion of the listings is unrealistic. The other thing is that feature engineering and labeling pipelines are simply part of the job, when it comes to ML.
      Nowadays, you can also very easily create synthetic labelled data of a very high quality using generative models as well to help with the imbalanced set.

  • @ArunKumar-bp5lo
    @ArunKumar-bp5lo 2 years ago +3

    Great insights, but the text data can be in various languages. He also said to augment some keywords to detect - can that work, or do we need to train separately for different languages? Just curious

    • @iqjayfeng
      @iqjayfeng  2 years ago +1

      Synonyms and similar words can help enrich the classifier and create new features

  • @dkshmeeks
    @dkshmeeks 1 year ago +3

    Great video. I find all the quick cuts to be a bit disorienting though.

  • @KS-df1cp
    @KS-df1cp 1 year ago +2

    I would have suggested a CNN as an alternative approach, but yes, I agree. A listing is not only an image but also text. There's an edge case where listings have different text and different images, and that won't get captured. Thank you.

    • @sophiophile
      @sophiophile 3 months ago +2

      I haven't watched the video yet, but a lot of people will dock points for over-engineering. I haven't seen his suggested solution yet, but if a really basic ensemble approach (one model for image, one for text) can achieve the goal instead of a single multi-modal one, and with fewer resources at every step - go for that and explain why.
      Now, to be fair, you were commenting prior to multimodal LLMs being everywhere, so that does change the considerations.

    • @KS-df1cp
      @KS-df1cp 3 months ago

      @@sophiophile thank you

  • @_seeker423
    @_seeker423 1 year ago +2

    Around @12:00, the algorithm that upweights incorrect predictions is AdaBoost rather than GBM, right?
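For reference, AdaBoost's scheme multiplies the weights of misclassified samples up each round, then renormalizes. A minimal sketch of one weight update for two-class discrete AdaBoost (the error rate and which sample was misclassified are invented):

```python
import math

def adaboost_reweight(weights, correct, error):
    """One AdaBoost round: upweight misclassified samples, then renormalize."""
    alpha = 0.5 * math.log((1 - error) / error)  # weak learner's vote weight
    new = [w * math.exp(-alpha if ok else alpha) for w, ok in zip(weights, correct)]
    total = sum(new)
    return [w / total for w in new]

weights = [0.25, 0.25, 0.25, 0.25]
correct = [True, True, True, False]   # hypothetical: sample 4 misclassified
error = 0.25                          # weighted error of this weak learner

print(adaboost_reweight(weights, correct, error))  # sample 4's weight grows
```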

  • @fahnub
    @fahnub 1 year ago

    this is the best video ever

  • @fahnub
    @fahnub 1 year ago

    thanks Zarrar

  • @hasnainmamdani4534
    @hasnainmamdani4534 2 years ago +3

    Very useful! Thanks for sharing. Do they ask about data pipelines and technologies that might be useful to scale the model (for the MLE role)? Would love to know more resources on it! as well as more mock interviews :)

    • @iqjayfeng
      @iqjayfeng  2 years ago +1

      Definitely in the MLE interview loops!

  • @claude7222
    @claude7222 3 months ago

    @iqjayfeng I think Zarrar mistakenly mixed up false positives and false negatives around the 2:00 mark. It would be OK if customer service received false negatives (the model predicts True but it's really False), not false positives

  • @alexeystysin8265
    @alexeystysin8265 1 year ago

    I can never remember what precision and recall stand for. It is clearly visible that the interviewee was also confused, and the video is edited around that point.

  • @evanshlom1
    @evanshlom1 2 years ago

    INFORMATIVE GOOD SIR

  • @goelnikhils
    @goelnikhils 1 year ago +2

    Excellent

  • @georgezhou9211
    @georgezhou9211 1 year ago +1

    Why does he say that it is a better idea to use NN rather than gradient boosted trees if we need to continuously train/update the model with every new training label that we collect from the customer labeling team?

    • @sandeep9282
      @sandeep9282 1 year ago

      Because you can update NN weights with just the new data points by fine-tuning, unlike tree-based models, which *may* require re-training on old+new data

    • @sandeep9282
      @sandeep9282 1 year ago

      Remember, tree-based models are sensitive to changes in the data
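The contrast in this thread can be sketched: a gradient-trained model can take one SGD step on a single fresh label from the review team, while most tree ensembles need a full re-fit over old+new data. A minimal sketch using logistic regression as the simplest gradient-trained stand-in (weights and the example are invented):

```python
import math

def sgd_update(w, b, x, y, lr=0.1):
    """One online logistic-regression step on a single new labelled example."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = 1.0 / (1.0 + math.exp(-z))           # predicted P(illegal)
    grad = p - y                             # dLoss/dz for log loss
    w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
    b = b - lr * grad
    return w, b

w, b = [0.2, -0.1], 0.0
x_new, y_new = [1.0, 3.0], 1                 # fresh label from the review team
w, b = sgd_update(w, b, x_new, y_new)
print(w, b)  # weights nudged toward predicting the new example correctly
```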

  • @scchouhansanjay
    @scchouhansanjay 2 years ago +1

    F2 score will be better here I think 🤔
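For reference, F-beta with beta = 2 weights recall twice as heavily as precision, which fits a setting where missing an illegal listing is costlier than a false flag. A minimal sketch (the precision/recall values are illustrative):

```python
def f_beta(precision, recall, beta):
    """F-beta score: beta > 1 favors recall, beta < 1 favors precision."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

p, r = 0.5, 0.9   # illustrative: modest precision, high recall
print(f_beta(p, r, 1))  # F1 treats both equally
print(f_beta(p, r, 2))  # F2 rewards the high recall more
```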

  • @huanchenli4137
    @huanchenli4137 1 year ago

    GBM is fast to train?????

  • @songsong2334
    @songsong2334 2 years ago +1

    If the dataset is biased, why bother using accuracy as the metric to evaluate the model?

    • @mikiii880
      @mikiii880 1 year ago +2

      I believe he was using "accuracy" in its everyday sense, not the actual metric. He already said he would use F1, and then referred to it as "accuracy" because it's an easier word. "Score" would probably have cleared up the confusion.

  • @lidavid6580
    @lidavid6580 1 year ago +1

    Too much of a mock, not like a real interview - everything was talked through without any drawing or writing.