Interpretable vs Explainable Machine Learning

  • Published: 14 Oct 2024
  • Interpretable models can be understood by a human without any other aids/techniques. On the other hand, explainable models require additional techniques to be understood by humans. We discuss this definition and how it relates to interpretability--the degree to which a model can be understood by a human.
    *NOTE*: You will now get the XAI course for free if you sign up (not the SHAP course)
    SHAP course: adataodyssey.com/courses/shap-with-python/
    XAI course: adataodyssey.com/courses/xai-with-python/
    Newsletter signup: mailchi.mp/40909011987b/signup
    Read the companion article (no-paywall link):
    towardsdatasci...
    Medium: / conorosullyds
    Twitter: / conorosullyds
    Mastodon: sigmoid.social...
    Website: adataodyssey.com/

Comments • 32

  • @adataodyssey
    @adataodyssey  8 months ago +1

    *NOTE*: You will now get the XAI course for free if you sign up (not the SHAP course)
    SHAP course: adataodyssey.com/courses/shap-with-python/
    XAI course: adataodyssey.com/courses/xai-with-python/
    Newsletter signup: mailchi.mp/40909011987b/signup

  • @banytoshbot
    @banytoshbot 9 months ago +2

    Great video, thank you!
    Thoughts on SHAP vs the Explainable Boosting Classifier?

    • @adataodyssey
      @adataodyssey  8 months ago

      I don't know much about EBC. Will look into it!
      The major benefit of SHAP and other model-agnostic methods is that they can be used with any model. This gives you flexibility over model choice, which can lead to higher accuracy.
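      A minimal sketch of that flexibility, assuming shap and scikit-learn are installed (the dataset and models below are illustrative, not from the video):

      ```python
      # The same explanation code runs against two very different model
      # classes, so model choice is not constrained by the explainer.
      import shap
      from sklearn.datasets import load_diabetes
      from sklearn.ensemble import GradientBoostingRegressor
      from sklearn.svm import SVR

      X, y = load_diabetes(return_X_y=True, as_frame=True)

      for model in (GradientBoostingRegressor(), SVR()):
          model.fit(X, y)
          # Passing only a predict function keeps the explainer model agnostic
          explainer = shap.Explainer(model.predict, X)
          shap_values = explainer(X.iloc[:20])
          print(type(model).__name__, shap_values.values.shape)
      ```

      Swapping in any other regressor requires no change to the explanation code.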

  • @clazo37
    @clazo37 3 months ago

    Thank you for your clear and structured explanations; I have learned a lot from your videos. One question: are you aware of causal inference? Are those techniques something you would be willing to tackle in your videos? Thank you, and keep up the excellent work.

    • @adataodyssey
      @adataodyssey  3 months ago

      Hi Cesar, thanks for the kind words! Causal analysis is something I am aware of. For now, I will probably not go into that topic on this channel. My focus is more on Computer Vision, so I will be explaining methods in that field.
      However, it is something I hope to learn more about in the future :)

  • @AllRounder-vc1yl
    @AllRounder-vc1yl 9 months ago +1

    Great explanation. Thanks 👍

    • @adataodyssey
      @adataodyssey  9 months ago +1

      No problem! I'm glad you found it useful

  • @jadonthomson8234
    @jadonthomson8234 1 year ago +3

    Great video 🙌

  • @gemini_537
    @gemini_537 3 months ago

    Gemini 1.5 Pro: The video is about the difference between interpretable and explainable machine learning models.
    The video starts by acknowledging that the field of interpretable machine learning (IML) is new and there is no consensus on the definitions. There are two possible definitions for interpretable vs. explainable models discussed in the video.
    One definition is that an interpretable model is a model that is simple enough for a human to understand directly by looking at the model itself. For example, a decision tree is interpretable because you can follow the tree to see how it makes a decision. Another example is a linear regression model, where you can see the coefficients of the model to understand how each feature affects the prediction.
    On the other hand, an explainable model is a model that is too complex to understand directly. For these models, you need additional techniques to understand how they work. For example, a random forest is an ensemble of many decision trees, and it is too complex to understand how each tree contributes to the final prediction. Similarly, a neural network is a complex model with many layers and weights, and it is impossible to understand how it works by looking at the weights alone. To understand these models, you would need to use techniques like LIME or SHAP values.
    The video then argues that interpretability is on a spectrum, rather than a binary classification. There is a gray area where it might be difficult to decide whether a model is interpretable or explainable. For example, a random forest with a few trees might be interpretable, but a random forest with many trees might not be. Additionally, even a simple model can become difficult to understand if it has many features or parameters.
    Another issue with this definition is that it is subjective and depends on the person's understanding of the model. There is no formal way to measure interpretability, and what one person finds interpretable, another person might find difficult to understand.
    The video concludes by arguing that the goal of IML is to understand and explain models, rather than to classify them as interpretable or explainable. The best way to understand a model depends on the specific model and the questions you are trying to answer. There is no single best way to classify models, and the field is still under development.
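    The contrast described in the summary can be made concrete. A minimal sketch, assuming shap and scikit-learn are installed (the dataset and models are illustrative):

    ```python
    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression

    X, y = load_diabetes(return_X_y=True, as_frame=True)

    # Interpretable: a linear model can be understood by reading its
    # coefficients directly -- no additional technique is needed.
    linear = LinearRegression().fit(X, y)
    print(dict(zip(X.columns, linear.coef_.round(1))))

    # Explainable: a random forest is too complex to read directly, so an
    # additional technique (here, SHAP) is used to explain its predictions.
    forest = RandomForestRegressor(n_estimators=100).fit(X, y)
    explainer = shap.TreeExplainer(forest)
    shap_values = explainer(X.iloc[:50])
    shap.plots.beeswarm(shap_values)
    ```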

  • @constantineketskalo5203
    @constantineketskalo5203 11 months ago

    Thanks.
    Just some thoughts from me:
    It seems to me that every model could be considered explainable, because nothing stops you from running your analysis tool on a simple algorithm that a human could understand on their own, without additional tools, just by looking at the tree (see the sketch after this thread). The question is whether it's mandatory or optional to use these additional tools to explain the AI's logic to us. So if there is a way to call it something like "mandatory to be explained", but shorter, then I'd rather go with that term. If not, then let it be as it is.
    Also, I don't think you need a gray area there. It's rather a line, but not a clear one. Just like there is no clear definition of a junior/middle/senior software developer: one person could be given different grades in this classification system at different companies. It's very subjective.

    • @adataodyssey
      @adataodyssey  11 months ago +1

      Some good points, Constantine! It goes to show that these definitions are still very debatable. Hopefully there are some models where everyone would agree on the definition.
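      A sketch of Constantine's point above: an explanation tool can happily be run on a model that is already interpretable, and for a linear model its output can even be checked against the coefficients. Assumes shap and scikit-learn; names are illustrative:

      ```python
      import numpy as np
      import shap
      from sklearn.datasets import load_diabetes
      from sklearn.linear_model import LinearRegression

      X, y = load_diabetes(return_X_y=True, as_frame=True)
      model = LinearRegression().fit(X, y)

      # Nothing stops us explaining an already-interpretable model. Under
      # shap's default feature-independence assumption, each SHAP value of
      # a linear model should reduce to coef * (x - mean(x)).
      explainer = shap.LinearExplainer(model, X)
      shap_values = explainer(X)
      manual = model.coef_ * (X - X.mean()).to_numpy()
      print(np.allclose(shap_values.values, manual))  # expected: True
      ```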

  • @ahmadalis1517
    @ahmadalis1517 1 year ago +2

    Great video, but I prefer to stick with the white-box / black-box categories. I use interpretable and explainable interchangeably.

    • @adataodyssey
      @adataodyssey  1 year ago

      That's fair! It is less confusing terminology. Interpretable and explainable really mean the same thing to a layperson.

    • @RyanMcCoppin
      @RyanMcCoppin 10 months ago

      @@adataodyssey Sick burn.

    • @ojaspatil2094
      @ojaspatil2094 1 month ago

      @@adataodyssey goddam

  • @rizzbod
    @rizzbod 9 months ago

    Great! Thank you

    • @adataodyssey
      @adataodyssey  9 months ago

      No problem! I’m glad you found the video useful :)

  • @ojaspatil2094
    @ojaspatil2094 1 month ago

    thank you!

  • @MarharytaMars
    @MarharytaMars 1 month ago

    Thank you!

  • @filoautomata
    @filoautomata 11 months ago

    This is where human logic fails: from a young age we are trained in the Boolean logic paradigm, not the probabilistic/fuzzy logic paradigm. Things are either true or false, while in reality certain things can have a degree of truth and falsehood.

    • @adataodyssey
      @adataodyssey  11 months ago

      So true! So you're saying the definition is often a false dichotomy?
      We probably make the same mistake when providing explanations for model predictions. Usually an explanation is only one of many potential accounts of how the model works.

  • @dance__break4155
    @dance__break4155 7 months ago

    What's the color of your eyes?

    • @adataodyssey
      @adataodyssey  6 months ago

      Blue :)

    • @Nowobserving
      @Nowobserving 3 months ago

      Hahahaha - that is also one of the parameters - that's why they asked lolz 😂

  • @slimeminem7402
    @slimeminem7402 5 months ago

    Personally, I don't think the distinction is necessary.

    • @adataodyssey
      @adataodyssey  5 months ago +1

      I agree :) But I did think it was important when I first got into XAI.

  • @camerashysd7165
    @camerashysd7165 3 months ago +1

    You said nothing in 7 minutes, bro, and then asked what we think 😂 wow