[Own work] MM-SHAP to measure modality contributions

  • Published: 3 Oct 2024

Comments • 23

  • @MachineLearningStreetTalk 1 year ago +4

    Great stuff, Letitia! Great to see you showcase some of your work!

  • @danji9485 1 year ago +6

    I have to admit I don't understand a lot of it, but keep up the awesome work!

  • @anishbhanushali 1 year ago +2

    "IT'S not much but it's honest work" ... I Understood That Reference

  • @AIology2022 1 year ago +1

    Very interesting! It is great that you present your work!

  • @mangotee82 1 year ago +2

    Ah, cool, so nice to see some of your own research! Congrats on the accepted paper!

  • @oncedidactic 1 year ago +3

    Awesome! Would love to hear more about SHAP approaches

  • @jmirodg7094 1 year ago +2

    Extremely interesting episode that I could watch 🤩 while drinking my coffee ☕. It would be cool to have a dedicated episode on these "Shapley values", as they seem to be one key to really improving the output quality of models.
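
    For anyone wondering what those Shapley values are: below is a minimal sketch of the exact Shapley computation for a toy two-"player" game, in Python. The players mirror the paper's two modalities, but the coalition values are invented for illustration:

        from itertools import permutations

        players = ["image", "text"]  # the two modalities, as toy "players"

        # v(S): value of a coalition S, e.g. a model's score when only the
        # inputs in S are available. These numbers are made up.
        v = {
            frozenset(): 0.0,
            frozenset({"image"}): 0.2,
            frozenset({"text"}): 0.5,
            frozenset({"image", "text"}): 0.9,
        }

        def shapley(player):
            """Average marginal contribution of `player` over all orderings."""
            orders = list(permutations(players))
            total = 0.0
            for order in orders:
                before = frozenset(order[:order.index(player)])
                total += v[before | {player}] - v[before]
            return total / len(orders)

        for p in players:
            print(p, shapley(p))  # image 0.3, text 0.6; they sum to v(all) = 0.9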

  • @MaJetiGizzle 1 year ago +2

    Congratulations on getting your work accepted!

    • @AICoffeeBreak 1 year ago +2

      Thank you so much 😀

    • @MaJetiGizzle 1 year ago +1

      @AICoffeeBreak You're welcome, and again, congratulations!

  • @Micetticat 1 year ago +2

    I still remember your illustration with the "TEEEEEEXT" caption. One image was worth 1000 words... Oh wait. Anyway, amazing work! And thanks to your previous videos, someone like me who has no expertise in your field can understand some of this highly technical short talk.

  • @jonginkim941 1 year ago +7

    Wow, interesting work and a nice presentation! What tool do you use to edit the presentation slides?

    • @AICoffeeBreak 1 year ago +3

      Thanks! Just good old PowerPoint for the slides and animations ("morph" transition ftw). 😅

  • @dianai988 1 year ago +3

    Awesome work, Letitia! I have to admit I was hooked when I saw the use of SHAP, as I've always been interested in explainable AI and was introduced early on to the use of Shapley values for it. I haven't taken a look at your paper yet for citations, but I presume you encountered Scott Lundberg's research on using SHAP values to explain predictions? It's awesome to see how those techniques evolve to multimodal models. Thanks for sharing!

    • @AICoffeeBreak 1 year ago +2

      Glad you liked it! :) Sure, I have encountered SHAP; I've used and cited it in the paper: github.com/slundberg/shap
      Maybe it was not clear from the video. I had to leave out a lot of details to stick to the conference video length of 6 minutes. Maybe I will find time to do an in-depth video if people are interested.
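
      A minimal sketch of what calling that library looks like, on toy data (the model, data, and numbers here are illustrative, not from the paper):

          import numpy as np
          import shap
          import xgboost

          # Toy regression data: feature 1 matters twice as much as feature 0
          X = np.random.rand(200, 3)
          y = X[:, 0] + 2 * X[:, 1]
          model = xgboost.XGBRegressor().fit(X, y)

          # One SHAP value per feature per prediction
          explainer = shap.Explainer(model)
          sv = explainer(X)
          print(np.abs(sv.values).mean(axis=0))  # mean |SHAP| per feature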

  • @shivamkaushik6637 1 year ago +2

    Super interesting.

  • @elinetshaaf75 1 year ago +5

    03:40 going small! 😂

  • @leeme179 1 year ago +2

    Great work!! It is also like reverse-engineering how much attention the model is paying to a specific part of the input.

    • @AICoffeeBreak 1 year ago +1

      Well put, that is interpretability in a nutshell. :)
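
    As a pointer to how the paper turns token-level values into a modality score (notation as I read it from the MM-SHAP paper; φ_j is the SHAP value of input token j):

        Φ_T = Σ over text tokens j of |φ_j|,   Φ_I = Σ over image tokens j of |φ_j|
        T-SHAP = Φ_T / (Φ_T + Φ_I)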

  • @__--JY-Moe--__ 1 year ago

    Congrats on the paper! \^o^/ It will be crazy to one day have a vision cognitron that can classify everything it sees!
    You should try the MATLAB plug-ins for your area of expertise! Good luck!

  • @fonkmonkey2715 1 year ago +3

    You are pretty. Just letting you know)