The audience asked challenging questions because they UNDERSTAND the content. Kudos!
Great questions from the audience and a great explanation from the speaker. As someone working in Scientific Machine Learning, where interpretability really matters, all the concerns about the reliability of these XAI techniques (e.g. issues with independence among features) are valid, but I think the speaker just wasn't able to emphasize that these are limitations of the XAI techniques currently available, such as SHAP; they are the best we have for now, and those concerns are works in progress.
There are no one-size-fits-all models or algorithms; each has its own advantages and disadvantages, but as long as they serve today's demand for model interpretability and remain usable, I think that is better than having nothing at all.
Thanks for the great content! I love this :D
I am currently taking the Explainable AI course at UW and read Scott's paper for a class discussion, but I truly only understood it through this lecture. Thanks for posting this!
My largest concern is the independence of features assumption, but this is a great talk
Very interesting talk! A highly informed audience can really be tough sometimes. Great presentation! :)
Great presentation, very clearly explained the concept, appreciate the great work!
One of the best talks I've heard in 2020. Awesome!
SHAP summaries should be integrated into all Machine Learning models. Computers can be programmed to learn, and programmed to teach what and how they have learned, with SHAP summaries ... diminishing inference and diminishing singularity.
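For anyone curious what that looks like in practice, here is a minimal sketch (my own assumption of a typical setup, not code from the talk) that produces a SHAP summary for a tree-based model using the shap Python package and scikit-learn:

import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Fit a tree-based model on a small tabular dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# The summary plot ranks features by their overall contribution to the predictions.
shap.summary_plot(shap_values, X)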
Great audience! I love the atmosphere there
At 35:01 with captions on, we got some valuable meme material. Thanks Scott! Good presentation btw
Impressive lecture (and impressive audience too)
found this talk really great, shared it with everyone!
what a nice dude
The lady who keeps asking questions is really annoying...
Thank you
She asks great questions actually!
so annoying!!!!!! It really breaks the flow of the presentation
This is so frustrating. He speaks for one minute, then people ask him questions for ten minutes. Why the hell aren't they letting him finish the presentation and asking questions later? It's good that you are smart, but being annoying is not.
A Karen.
Is there any method to evaluate XAI frameworks' results?
Great talk
Wonderful presentation !!!! Thank you so much
great work and detailed presentation. thanks for sharing.
Now I must have this toy... thank you!!
Haha, Susan was there as well. I detected her voice ^+^.
Thanks for sharing. Really informed audience
Let the man talk lol
Amazing discussion 👍
excellent presentation, thanks
Can anyone say what tools/libraries are used for XAI?
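The library featured in this talk is SHAP (the shap package in Python). Sticking to what the talk covers, here is a hedged, model-agnostic sketch using shap's KernelExplainer; the model and dataset below are my own assumptions for illustration:

import shap
from sklearn.datasets import load_diabetes
from sklearn.svm import SVR

# Any black-box model with a predict function can be explained this way.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = SVR().fit(X, y)

# KernelExplainer approximates Shapley values for an arbitrary predict function,
# using a small background sample to stand in for "missing" features.
background = shap.sample(X, 50)
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(X.iloc[:20])

# Summarize the explanations for the first 20 rows.
shap.summary_plot(shap_values, X.iloc[:20])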
Thank you for this talk!
Great presentation, much appreciated! 👍
Machine Learning models of multi-omics data, in combination with biological, physiological, and pathological mathematical and 3D models, to ascertain causality in order to suggest intervention(s) on a continuous basis.
Good talk! Please speak more slowly and articulate better; it's not understandable sometimes.
19:20 I'm a mathematician and, LOL!
14:53 continue watching from here