Explainable AI explained! | #1 Introduction

  • Published: 10 Jan 2025

Comments •

  • @silviaz.3887 3 years ago +6

    This is a great series. Eye opening indeed. Thanks. Look forward to new videos on other ML subjects.

  • @phurbayenden7698 2 years ago +1

    Such a great video! Thank you for condensing it and also pointing out directions where we can venture out.

  • @sohailpatel7549 1 year ago

    Thanks for this video and playlist. It's really hard to find this info explained in such a simple manner.

  • @geoffbenstead6461 3 years ago

    Great intro to XAI and a great series - thanks

  • @ninjanape 1 year ago

    Thank you very much for making this playlist - absolute champion!

  • @keshav1989 3 years ago +1

    Hi, this series is extraordinarily good. Very well explained. Thank you.

  • @HarrisBaig 1 year ago

    @3:00 you talk about the psychological side of what makes good explanations. I am thinking of doing my thesis on this topic, Human-Centered AI or user-friendly AI. Can you direct me to some sources that would help me understand these topics?

  • @thegimel 3 years ago

    Great video on a very important subject. Thanks!

  • @chenqu773 3 years ago

    This is a great tutorial! Many thanks!

  • @dheerajvarma4821 2 years ago

    Great explanation & great series

  • @Anirudh-cf3oc 2 years ago

    Thank you for the amazing explanation

  • @martinwohlan4891 2 years ago

    Hi, first of all thanks for the nice video! I have a question: is the base value given by the SHAP explainer the frequency of a certain class (stroke in this case) in the data the model was trained on?

    • @DeepFindr 2 years ago

      Hi! Thanks :)
      Yes, exactly - the base value is calculated as the average model output over the training dataset, as far as I know.
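
      A minimal sketch (not from the video) of how to check this with the shap library: TreeExplainer's expected_value should roughly match the mean model output over the background data it was given. The dataset and model below are stand-ins, not the stroke example from the video.

          import shap
          from sklearn.ensemble import RandomForestClassifier

          # Stand-in tabular dataset (not the video's stroke data)
          X, y = shap.datasets.adult()
          model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

          # Use part of the training data as the background set
          background = X.iloc[:200]
          explainer = shap.TreeExplainer(model, background)

          # The base value is (approximately) the average model output
          # over the background data - one value per class here
          print(explainer.expected_value)
          print(model.predict_proba(background).mean(axis=0))  # should be close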

  • @ongsooliew 1 year ago

    Good Stuff!

  • @البداية-ذ1ذ 3 years ago

    Hi, I am not familiar with this topic, it is quite new for me, but I like your way of presenting as usual. My question: should the ML/data science person who builds the model normally also know how to analyze it, or could a separate person work just on testing and analyzing the model?

    • @DeepFindr 3 years ago

      Hi and thanks! It depends :) In my experience, the person who builds a model also works on interpreting it. This is to make sure the model works as intended - basically it's a verification step after developing the algorithm. But I can imagine that other people also work on the XAI tasks, especially in larger companies where larger models are used.

    • @DeepFindr 3 years ago +1

      By the way: more and more companies and frameworks have also started to include interpretability tools. E.g. TensorFlow has the What-If Tool, and Google has Explainable AI for Google Cloud.

    • @DeepFindr 3 years ago +1

      For example: cloud.google.com/explainable-ai

  • @rottaca 3 years ago

    Amazing 😍

  • @touhidulislamchayan6896 3 years ago

    Helpful

  • @rajdeepdas283 3 years ago

    woowwwww!!

  • @rivershi8273 3 years ago

    How can I get in touch with you to ask some questions?

    • @DeepFindr 3 years ago

      deepfindr@gmail.com :)

    • @rivershi8273 3 years ago

      @DeepFindr Thanks, I've sent my questions to your email.

  • @tilkesh 1 year ago

    Thx