Explainable AI explained! | #4 SHAP

  • Published: 25 Jan 2025

Comments • 49

  • @SinAndrewKim
    @SinAndrewKim 2 years ago +34

    There is an error in your formula for Shapley values (compared to christophm's book and wikipedia). Most write the weight as (M choose 1, |S|, M-|S|-1) where S is some subset WITHOUT feature i. However, you are summing over subsets z' WITH feature i. Thus, the weight should be (M choose 1, |z'| - 1, M-|z'|).

    • @DeepFindr
      @DeepFindr  2 years ago +13

      Thanks for pointing that out, you are absolutely right! I'll pin this comment so that everyone can see it (the corrected formula is written out below).
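
Written out, the correction above gives the Shapley value of feature i as a sum over coalitions z' that contain i (M features, f_x the value function, notation as in the SHAP paper):

    \phi_i = \sum_{z' \subseteq x',\; z'_i = 1} \frac{(|z'| - 1)!\,(M - |z'|)!}{M!} \bigl[ f_x(z') - f_x(z' \setminus i) \bigr]

With S = z' \setminus i (so |S| = |z'| - 1), this is exactly the classical form over subsets that do NOT contain i:

    \phi_i = \sum_{S \subseteq F \setminus \{i\}} \frac{|S|!\,(M - |S| - 1)!}{M!} \bigl[ f_x(S \cup \{i\}) - f_x(S) \bigr]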

  • @khadijakaimous3408
    @khadijakaimous3408 12 days ago +1

    In case you face an error in this cell (I had been debugging it for a while), this call works:
    shap.force_plot(explainer.expected_value[1],
                    shap_values[..., 1],
                    X_test[start_index:end_index])
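
For context, a minimal sketch of how this call fits together, assuming a fitted tree-based binary classifier `model` and the X_test / start_index / end_index variables from the video's notebook (recent shap versions return a single array of shape (n_samples, n_features, n_classes) rather than a list per class, hence the [..., 1] indexing in the comment):

    import shap

    # assumes `model` is a fitted tree-based binary classifier and X_test a DataFrame
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_test)   # (n_samples, n_features, n_classes) in recent shap versions

    start_index, end_index = 0, 100               # hypothetical slice of rows to visualize
    shap.initjs()                                 # enables the interactive JS plot in notebooks
    shap.force_plot(
        explainer.expected_value[1],                  # base value for the positive class
        shap_values[start_index:end_index, :, 1],     # SHAP values for the positive class
        X_test[start_index:end_index],
    )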

  • @utkarshkulshrestha2026
    @utkarshkulshrestha2026 3 years ago +3

    Really insightful. Thank you for the video. It was a great explanation and demonstration to begin with.

  • @aprampalsingh8381
    @aprampalsingh8381 1 year ago

    Best explanation on the internet... you should do more videos!

  • @Florentinorico
    @Florentinorico 2 years ago

    Great example with the competition!

  • @nidhisingh1325
    @nidhisingh1325 3 years ago +1

    Great explanation, would love more videos from you!

  • @nkorochinaechetam2516
    @nkorochinaechetam2516 1 year ago

    Straight to the point and a detailed explanation.

  • @davidlearnforus
    @davidlearnforus 2 years ago

    All your videos are great. Thanks a lot!

  • @mohammadnafie3327
    @mohammadnafie3327 2 years ago

    Amazing explanation, and to the point. Thank you!

  • @techjack6307
    @techjack6307 2 years ago

    Thank you very much for explaining the concept so clearly.

  • @muzaffarnissar1978
    @muzaffarnissar1978 1 year ago

    Thanks, an exceptional explanation! Looking forward to more videos!
    Can you send me any links on where we can use explainable AI on audio data?

  • @Gustavo-nn7zc
    @Gustavo-nn7zc 7 months ago

    Hi, great video, thanks! Is there a way to use SHAP for ARIMA/SARIMA?

  • @WildanPutraAldi
    @WildanPutraAldi 2 years ago

    Excellent video, thanks for sharing !

  • @orkhanmd
    @orkhanmd 3 years ago +1

    Great explanation. Thanks!

  • @hprshayan
    @hprshayan 3 years ago +1

    Thank you for your excellent explanation.

  • @Bill0102
    @Bill0102 1 year ago

    This is a tour de force. A book I read with like-minded themes was also magnificent. "Game Theory and the Pursuit of Algorithmic Fairness" by Jack Frostwell

  • @catcatyoucatmedie1161
    @catcatyoucatmedie1161 1 year ago

    Hi, may I know which dataset you used for the demo?

  • @TheEverydayAnalyst97
    @TheEverydayAnalyst97 1 month ago

    Informative!

  • @zahrahsharif8431
    @zahrahsharif8431 3 years ago +1

    When you use the summary force_plot to get an individual point's contribution, is the prediction shown in log odds? If so, how do I show the actual probability?

    • @ea2187
      @ea2187 3 years ago

      I have the same issue... did you find anything out?

    • @DeepFindr
      @DeepFindr  3 years ago

      Hi! Sorry, somehow I didn't see that comment.
      Did you have a look at this post: github.com/slundberg/shap/issues/963 ?
      According to that, for TreeExplainer you have an option to get probabilities (see the sketch below).
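
A minimal sketch of the option mentioned in that issue, assuming `model`, X_train and X_test from the notebook; with model_output="probability" the SHAP values and base value are in probability space rather than log-odds:

    import shap

    explainer = shap.TreeExplainer(
        model,
        data=X_train,                           # background data, required when model_output is not "raw"
        feature_perturbation="interventional",
        model_output="probability",             # SHAP values now sum to the predicted probability
    )
    shap_values = explainer.shap_values(X_test)
    # explainer.expected_value is now the average predicted probability, so
    # force_plot / waterfall plots display probabilities instead of log-odds.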

  • @muratkonuklar3910
    @muratkonuklar3910 2 years ago

    Great Presentation Thanks!

  • @kevinkpakpo3215
    @kevinkpakpo3215 2 years ago

    Amazing tutorial. Thanks a lot!

  • @codewithyouml8994
    @codewithyouml8994 2 years ago

    Great video. I have one question: can I use SHAP for graph classification, to see how much each node contributes and show a Grad-CAM-like effect? If you have any resources regarding this, please share. Thank you.

    • @DeepFindr
      @DeepFindr  2 years ago

      Hi! I have a video called "how to explain graph neural networks" that exactly addresses this question :)

  • @yashnarendra
    @yashnarendra 3 years ago +1

    Might be a very stupid question, but why is z' under modulus? If it represents the number of features in the subset, it should always be positive, right?

    • @DeepFindr
      @DeepFindr  3 years ago

      Hi! Do you mean the bars wrapped around z? That comes from set theory in math and the notation stands for cardinality = the number of elements in the set. It doesn't mean the "abs" function.
      Is that what you were referring to? :)

    • @yashnarendra
      @yashnarendra 3 years ago

      @@DeepFindr Yes, thank you for clarifying. One more doubt: if I put |z'| = M, then (M - |z'| - 1)! becomes (-1)!, which is not defined. What am I missing here?

    • @DeepFindr
      @DeepFindr  3 years ago +2

      @@yashnarendra Hi, good catch!
      In the original Shapley value formula this can never happen, because the sum is over subsets without feature i, so |F| is always greater than |S| (with the notation of the original Shapley formula in the paper).
      But you are right, this is not really reflected in the SHAP formula. In the paper, however, they state that they exclude |z'| = 0 and |z'| = M, as the weights are not defined in those cases (see the quick check after this thread).

    • @yashnarendra
      @yashnarendra 3 years ago +1

      @@DeepFindr Thanks a lot, really appreciate your efforts in replying to my queries.
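
A quick check of the boundary case discussed above: with the corrected weight from the pinned comment, |z'| = M is well defined,

    \frac{(|z'| - 1)!\,(M - |z'|)!}{M!} \Big|_{|z'| = M} = \frac{(M - 1)! \cdot 0!}{M!} = \frac{1}{M},

whereas the weight as shown in the video, |z'|!\,(M - |z'| - 1)!/M!, runs into (-1)! at |z'| = M, which is exactly the case the paper excludes.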

  • @Diego0wnz
    @Diego0wnz 3 years ago +2

    thanks for the video!

  • @nikolai228
    @nikolai228 10 months ago

    Great video, thanks!

  • @shaz-z506
    @shaz-z506 2 years ago

    Can we use SHAP for multiclass classification? Are there any resources you can suggest?

    • @DeepFindr
      @DeepFindr  2 years ago

      Hi! Have a look at this discussion: github.com/slundberg/shap/issues/367
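
For reference, a minimal multiclass sketch, assuming a fitted tree-based multiclass classifier `model` and test data X_test from the notebook:

    import shap

    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_test)

    # Depending on the shap version, shap_values is either a list with one
    # (n_samples, n_features) array per class or a single array of shape
    # (n_samples, n_features, n_classes); either way, pick the class to explain.
    class_idx = 2                                      # hypothetical class of interest
    sv = shap_values[class_idx] if isinstance(shap_values, list) else shap_values[..., class_idx]
    shap.summary_plot(sv, X_test)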

  • @PrabhjotSingh-mn2ku
    @PrabhjotSingh-mn2ku 2 years ago

    Does the classification threshold have an effect on Shapley values? The default threshold in binary classification is 0.5; if one changes it to 0.7, how do you incorporate this in the shap library?

    • @DeepFindr
      @DeepFindr  2 years ago

      Hi! This discussion might be what you are looking for :)

    • @PrabhjotSingh-mn2ku
      @PrabhjotSingh-mn2ku 2 years ago

      @@DeepFindr did you mean to add a link to the discussion?

    • @DeepFindr
      @DeepFindr  2 years ago +1

      Yep, sorry here: github.com/slundberg/shap/issues/257
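
In short: SHAP values explain the model's raw output (probability or log-odds), not the thresholded decision, so the threshold itself does not change them; it is only applied afterwards when turning probabilities into labels. A minimal sketch, assuming `model` and X_test from the notebook:

    import shap

    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_test)        # identical for any decision threshold

    proba = model.predict_proba(X_test)[:, 1]          # the model output that SHAP explains
    labels_default = (proba >= 0.5).astype(int)        # default cut-off
    labels_custom = (proba >= 0.7).astype(int)         # stricter cut-off; SHAP values unchanged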

  • @aleixnieto88
    @aleixnieto88 2 years ago

    Amazing video again bro. It helped a lot! One question: Do you know the reference where I can find the proof of the 2nd theorem in the SHAP paper? I can't find it :(

    • @DeepFindr
      @DeepFindr  2 years ago +3

      Hey :) thanks!
      The supplement can be downloaded here: papers.nips.cc/paper/2017/hash/8a20a8621978632d76c43dfd28b67767-Abstract.html
      Plus there is another link in a discussion on Github, which might be helpful as well: github.com/slundberg/shap/issues/1054
      Hope this helps :)

    • @aleixnieto88
      @aleixnieto88 2 years ago

      @@DeepFindr There's nothing else I can say because you're the boss! ❤

  • @minhaoling3056
    @minhaoling3056 3 years ago +1

    Does SHAP work on small datasets?

    • @DeepFindr
      @DeepFindr  3 years ago

      LIME is independent of the size of the dataset. The only question is whether the (black-box) model works on the dataset. Can you maybe share some more details on what makes you raise this question? :)

    • @minhaoling3056
      @minhaoling3056 3 years ago

      I have a very small dataset that surprisingly does well on predicting 15 different classes of identical species. In my black-box model, I use three layers of feature extraction methods and finish off with one random forest model. I am not sure whether I can implement LIME in this situation, because my black box is mostly feature extraction rather than an ensemble of models.

    • @DeepFindr
      @DeepFindr  3 years ago

      Which feature extraction layers are you using?
      Is it trained end-to-end with the RF?
      It doesn't really matter what is happening inside your model. LIME is able to explain the input-output relation in a local area of a single prediction :)

    • @minhaoling3056
      @minhaoling3056 3 years ago

      @@DeepFindr I see, thanks! I will try this in my project soon.

    • @DeepFindr
      @DeepFindr  3 years ago +1

      OK good luck! If you have any problems let me know :)
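
Following up on the LIME discussion above: since LIME only needs the end-to-end prediction function, the whole feature-extraction-plus-random-forest black box can be wrapped as one callable. A minimal sketch, assuming it is exposed as a single `pipeline` object with predict_proba (e.g. an sklearn Pipeline), that X_train / X_test hold the raw tabular inputs, and that feature_names / class_names are hypothetical lists from that project:

    from lime.lime_tabular import LimeTabularExplainer

    explainer = LimeTabularExplainer(
        X_train,                              # raw training inputs as a numpy array
        feature_names=feature_names,          # hypothetical list of column names
        class_names=class_names,              # hypothetical list of the 15 species labels
        mode="classification",
    )
    exp = explainer.explain_instance(
        X_test[0],                            # one raw sample to explain
        pipeline.predict_proba,               # black box: feature extraction + random forest
        num_features=10,
    )
    exp.show_in_notebook()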