The Science Behind InterpretML: SHAP

  • Published: 18 Sep 2024
  • Learn more about the research that powers InterpretML from SHAP creator Scott Lundberg of Microsoft Research
    Learn More:
    Azure Blog aka.ms/AiShow/...
    Responsible ML aka.ms/AiShow/...
    Azure ML aka.ms/AiShow/...
    The AI Show's Favorite links:
    Don't miss new episodes, subscribe to the AI Show aka.ms/aishows...
    Create a Free account (Azure) aka.ms/aishow-...

Comments • 23

  • @chandrimad5776 · 2 years ago +5

    Like SHAP, there is also LIME for model interpretability. It would be great if you posted a video comparing the two. I built a credit risk model using the LightGBM regressor, where I used SHAP to present model interpretability, and I would also like to use LIME for the same purpose (a rough sketch of this kind of workflow appears after this thread). If you are interested, I will post my Kaggle link so you can give feedback on my work.

    • @sehaconsulting · 2 years ago

      Would love to see the work you did with SHAP and LIME

    • @missfatmissfat · 1 year ago

      Hello, I’m also interested 🙂
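
A minimal sketch of the kind of workflow discussed in this thread: explaining a LightGBM regressor with SHAP's TreeExplainer, plus LIME on a single row for comparison. The dataset, feature names, and model settings below are made up for illustration; this is not the commenter's actual credit risk model.

```python
# Hypothetical data standing in for a credit-risk-style table.
import numpy as np
import pandas as pd
import lightgbm as lgb
import shap
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(500, 4)), columns=["income", "debt", "age", "tenure"])
y = 2 * X["income"] - 3 * X["debt"] + rng.normal(scale=0.1, size=500)

model = lgb.LGBMRegressor(n_estimators=200).fit(X, y)

# SHAP: TreeExplainer computes Shapley values efficiently for tree ensembles.
shap_values = shap.TreeExplainer(model).shap_values(X)   # shape: (n_samples, n_features)
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))

# LIME: local surrogate explanation for one row, for comparison.
lime_explainer = LimeTabularExplainer(X.values, feature_names=list(X.columns), mode="regression")
print(lime_explainer.explain_instance(X.values[0], model.predict).as_list())
```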

  • @robertoooooooooo · 1 year ago +1

    Insightful, thank you

  • @caiyu538 · 1 year ago +1

    It looks like SHAP is a brute-force search over all features, considering all possible combinations. Is my understanding correct?
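
The intuition in this comment can be made concrete with a tiny brute-force Shapley computation over every coalition of features. This is the definition SHAP approximates; exact enumeration is exponential in the number of features, which is exactly why KernelSHAP and TreeSHAP exist. The payoff function below is a hypothetical toy, not anything from the video.

```python
# Exact Shapley values by enumerating every feature coalition -- the "brute force" view.
from itertools import combinations
from math import factorial

def exact_shapley(value, features):
    """value(frozenset_of_features) -> payoff; returns {feature: Shapley value}."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):                      # all coalition sizes not containing f
            for S in combinations(others, k):
                S = frozenset(S)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(S | {f}) - value(S))
        phi[f] = total
    return phi

# Toy additive "model": x1 contributes 3, x2 contributes 1, x3 contributes nothing.
v = lambda S: 3.0 * ("x1" in S) + 1.0 * ("x2" in S)
print(exact_shapley(v, ["x1", "x2", "x3"]))     # {'x1': 3.0, 'x2': 1.0, 'x3': 0.0}
```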

  • @hen-e1v · 3 years ago +2

    Great introduction to SHAP

  • @juanete69 · 1 year ago +1

    In a linear regression, what is the difference (in interpretation) between the SHAP value and the partial R^2?

    • @Lucky10279 · 1 year ago +1

      In linear regression, R^2 is literally just the proportion of the variance of the dependent variable that can be predicted/explained by the model. Or, in other words, if I understand correctly, it compares the variance of the model's predictions with the variance of the actual variable we're attempting to model. So if y is what we're trying to predict and the model is y_hat = mx + b,
      R² = var(y_hat)/var(y) = 1 - var(y - y_hat)/var(y). It's a measure of how well the model matches the actual relationship between the independent and dependent variables.
      SHAP values, OTOH, if I'm understanding the video correctly, don't necessarily say anything in themselves about how well the model fits the actual data overall -- instead they tell us how much each independent variable is affecting the predictions/classifications the model spits out.
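
A small sketch of the contrast described in this thread, on synthetic data: R^2 is one global goodness-of-fit number, while SHAP gives a per-feature attribution for every individual prediction, and those attributions sum to that prediction minus the average prediction. shap.LinearExplainer is used here under its usual feature-independence assumption; the data and coefficients are made up.

```python
import numpy as np
import shap
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = 2 * X[:, 0] - 1 * X[:, 1] + rng.normal(scale=0.5, size=1000)

model = LinearRegression().fit(X, y)
print("R^2:", model.score(X, y))                 # one number: overall fit quality

explainer = shap.LinearExplainer(model, X)       # assumes roughly independent features
shap_values = explainer.shap_values(X)           # one attribution per feature per row

# Local accuracy: a row's attributions sum to its prediction minus the mean prediction.
print(shap_values[0].sum(), model.predict(X[:1])[0] - model.predict(X).mean())
```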

  • @muhammadusman7466 · 1 year ago

    Does SHAP work well with categorical features?

  • @FirstNameLastName-fv4eu · 3 months ago

    Since when did XGBoost become an AI model?

  • @igoriakubovskii1958 · 3 years ago

    How can we make a decision based on SHAP when it's not causal?

  • @manuelillanes1635 · 3 years ago +1

    Can SHAP values be used to interpret unsupervised models too?

    • @mananthakral9977 · 3 years ago

      No, I think we can't use it for unsupervised learning, since it requires looking at the output value after each feature is added to the model.

    • @cassidymentus7450 · 2 years ago

      For unsupervised learning there might not be a one-dimensional numeric output (e.g. credit risk). It still might be possible to make a useful one. Take PCA for example: you can define the output as |x - x_mean| (|·| = Euclidean distance, aka the Pythagorean formula).
      SHAP will then tell you how much each principal component contributes to the distance from the mean... essentially the variance along that axis (depending on how you look at it).
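
A rough sketch of this idea on synthetic data: fit PCA, define the scalar output as the distance from the mean in component coordinates, and let KernelSHAP attribute that distance to each principal component. The data, component count, and sample sizes are illustrative only.

```python
import numpy as np
import shap
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4)) * np.array([3.0, 1.0, 0.5, 0.1])  # very unequal variances

Z = PCA(n_components=4).fit_transform(X)       # rows in principal-component coordinates

def distance_from_mean(Z_batch):
    # Scalar "output" for an unsupervised setting: Euclidean distance from the mean.
    return np.linalg.norm(Z_batch - Z.mean(axis=0), axis=1)

# KernelExplainer works on any black-box function; a small background set keeps it cheap.
explainer = shap.KernelExplainer(distance_from_mean, shap.sample(Z, 50))
shap_values = explainer.shap_values(Z[:20])
print(np.abs(shap_values).mean(axis=0))        # high-variance components dominate the distance
```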

  • @berniethejet · 2 years ago +2

    Seems Scott conflates the term 'linear models' with 'models with no interactions'. Linear regression models can still have cross products of variables, e.g. y = a + b1 x1 + b2 x2 + b3 x1 x2 (see the sketch after this thread).

    • @sebastienban6074 · 2 years ago

      I think he meant that they don't interact the same way trees do.
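
A quick illustration of the point in this thread, using a hypothetical hand-coded model: the function is linear in its coefficients but contains an x1*x2 cross-product, so the features interact, and KernelSHAP assigns different credit to x1 depending on the value of x2. Without the cross-term, the two attributions for x1 would be identical.

```python
import numpy as np
import shap

def f(X):
    # "Linear in the coefficients" model with a cross-product term (coefficients are made up).
    return 1.0 + 2.0 * X[:, 0] + 3.0 * X[:, 1] + 5.0 * X[:, 0] * X[:, 1]

rng = np.random.default_rng(0)
background = rng.normal(size=(200, 2))

explainer = shap.KernelExplainer(f, background)

# Same x1, different x2: the attribution given to x1 differs between the two rows.
rows = np.array([[1.0, 0.0],
                 [1.0, 2.0]])
print(explainer.shap_values(rows))
```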

  • @SandeepPawar1 · 3 years ago +2

    Scott - should the SHAP values for feature importance be based on training data or test data?

    • @lipei7704 · 3 years ago +4

      I believe the SHAP values for feature importance should be based on training data, because it will cause data leakage if you choose test data.

    • @claustrost614 · 3 years ago +1

      @@lipei7704 I agree. It depends on what you want to do with it. If you want to use it for more or less global feature importance / feature selection, I would only use it on the training set.
      If by "feature importance" you mean the local Shapley values for one example from your test data, you can look at whether it would be an outlier or something like that compared to your training set.
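
A short sketch of the suggestion in this thread, with synthetic data and an arbitrary model choice (a random forest): global importance taken as the mean absolute SHAP value per feature, computed separately on the training and test splits so the two can be compared; for feature selection one would typically rely on the training-set numbers.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.2, size=1000)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
explainer = shap.TreeExplainer(model)

# Global importance = mean |SHAP| per feature, on each split separately.
for name, split in [("train", X_train), ("test", X_test)]:
    print(name, np.abs(explainer.shap_values(split)).mean(axis=0))
```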

  • @grooveykans · 2 years ago

    soldiers have to be black

  • @zhanli9121 · 1 year ago +1

    This is one of the most opaque presentations of SHAP. The presentation style is also bad -- it's rare to see a presenter use formulas when the formulas aren't really needed, but this guy manages to be one of them.

    • @samirelzein1095 · 11 months ago

      After I wrote a similarly sharp comment above, I found yours. Actually, everyone here agrees. A good case study for Microsoft staffing.

  • @samirelzein1095 · 11 months ago

    The dude managed to get many questions but didn't manage to get many compliments. Neither will I give one. Don't make videos just to be uploaded for Microsoft; make videos when you feel the idea is complete, the intuition is clear, and you're able to speak for 10 minutes out of the hours you could spend presenting your idea. Wasted my time.