Great stuff Letitia! Great to see you showcase some of your work!
I have to admit I don't understand a lot of it, but keep up the awesome work!
"IT'S not much but it's honest work" ... I Understood That Reference
Very interesting! It is great that you present your works!
Ah, cool, so nice to see some of your own research! Congrats for the accepted paper!
Awesome! Would love to hear more about SHAP approaches
Me too 🤩
Extremely interesting episode that I could watch🤩 drinking my coffee ☕. It could be cool to have a specific episode on this "Shapley values" as it seems to be one key to really improve the output quality of models.
Congratulations on getting your work accepted!
Thank you so much 😀
@AICoffeeBreak You're welcome, and once again, congratulations!
I still remember your illustration with the "TEEEEEEXT" caption. One image was worth 1000 words... Oh wait. Anyway, amazing work! And thanks to your previous videos, someone like me who has no expertise in your field can understand some of this highly technical short talk.
😂 thanks!
Wow, interesting work and a nice presentation! What tool do you use to edit the presentation slides?
Thanks! Just good old PowerPoint for the slides and animations ("morph" transition ftw). 😅
Awesome work Letitia! I have to admit I was hooked when I saw the use of SHAP, as I've always been interested in explainable AI and was introduced early on to the use of Shapley values for it. I haven't taken a look at your paper yet for citations, but I presume you encountered Scott Lundberg's research on the use of SHAP values for explaining predictions? It's awesome to see how those techniques can evolve to multimodal models, thanks for sharing!
Glad you liked it! :) Sure, I have encountered SHAP; I've used and cited it in the paper: github.com/slundberg/shap
Maybe it was not clear from the video. I had to leave out a lot of details to stick to the conference video length of 6 minutes. Maybe I will find time to do an in-depth video if people are interested.
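For anyone curious what Shapley values actually compute: with only a handful of input features they can be computed exactly by enumerating all coalitions. Here is a minimal, self-contained sketch (this is not the paper's code, and the two "features" and their payoffs are made-up toy values for illustration):

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, features):
    """Exact Shapley values via full coalition enumeration.

    value_fn maps a frozenset of feature names to the model output
    when only those features are "present".
    """
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                s = frozenset(subset)
                # Shapley weight for a coalition of size k
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # marginal contribution of f on top of coalition s
                total += weight * (value_fn(s | {f}) - value_fn(s))
        phi[f] = total
    return phi

# Toy payoff table for a hypothetical two-modality model:
# the image contributes 2.0, the text contributes 1.0.
payoff = {
    frozenset(): 0.0,
    frozenset({"image"}): 2.0,
    frozenset({"text"}): 1.0,
    frozenset({"image", "text"}): 3.0,
}
vals = shapley_values(payoff.__getitem__, ["image", "text"])
```

By the efficiency property, the attributions sum to the full-coalition output (here 2.0 + 1.0 = 3.0). Libraries like SHAP exist because this enumeration is exponential in the number of features, so they approximate it instead.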
Super interesting.
03:40 going small! 😂
Great work!! It is also like reverse-engineering how much attention the model is paying to a specific part of the input.
Well put, that is interpretability in a nutshell. :)
Congrats on the paper! \^o^/ It will be crazy to one day have a vision cognitron that can classify everything it sees!
You should try the MATLAB plug-ins for your area of expertise! Good luck!
You are pretty. Just letting you know)