- Videos: 17
- Views: 26,699
UCI NLP
United States
Joined Oct 23, 2020
Videos about the research taking place in the UC Irvine Natural Language Processing Group, led by Prof. Sameer Singh.
ACL 2023: MISGENDERED: Limits of Large Language Models in Understanding Pronouns
Paper: arxiv.org/abs/2306.03950
Project Demo: tamannahossainkay.github.io/misgendered/
Tamanna Hossain, Sunipa Dev, Sameer Singh
Content Warning: This paper contains examples of misgendering and erasure that could be offensive and potentially triggering.
Gender bias in language technologies has been widely studied, but research has mostly been restricted to a binary paradigm of gender. It is essential also to consider non-binary gender identities, as excluding them can cause further harm to an already marginalized group. In this paper, we comprehensively evaluate popular language models for their ability to correctly use En...
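As a rough illustration of the fill-in-the-blank evaluation described above (a minimal sketch, not the paper's released code: the model, template, name, and expected pronoun form below are all placeholder choices):

```python
# Minimal sketch: probe a masked LM for agreement with declared pronouns.
# Model, template, name, and expected form are illustrative placeholders.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

declared = "Aamari's pronouns are xe/xem/xyr."
template = "Aamari is a writer, and [MASK] published a novel last year."
expected = "xe"  # nominative form that fits this slot

for pred in fill(declared + " " + template, top_k=5):
    token = pred["token_str"].strip().lower()
    print(f"{token:>8}  p={pred['score']:.3f}  misgendered: {token != expected}")
```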
Views: 267
Videos
Combining Feature and Instance Attribution to Detect Artifacts (ACL Findings 2022)
145 views · 2 years ago
Combining Feature and Instance Attribution to Detect Artifacts
Pouya Pezeshkpour, Sarthak Jain, Sameer Singh, Byron Wallace (presented by Pouya Pezeshkpour)
Paper: arxiv.org/pdf/2107.00323.pdf
Training the deep neural networks that dominate NLP requires large datasets. These are often collected automatically or via crowdsourcing, and may exhibit systematic biases or annotation artifacts. By the...
NeurIPS 2021: Reliable Post hoc Explanations: Modeling Uncertainty in Explainability
820 views · 3 years ago
Reliable Post hoc Explanations: Modeling Uncertainty in Explainability
Video for NeurIPS 2021 Poster
Paper: arxiv.org/abs/2008.05030
Dylan: dylanslack20 | Hima: hima_lakkaraju | Sameer: sameer_
Abstract: As black box explanations are increasingly being employed to establish model credibility in high-stakes settings, it is important to ensure that these explanatio...
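The instability the paper models can be sketched without its Bayesian machinery: refit a LIME-style local linear surrogate with fresh perturbation samples and watch the feature weights move. This is a toy illustration of the problem, not the authors' method:

```python
# Fit a LIME-style local surrogate repeatedly with fresh perturbations
# and report the spread of the feature weights (toy model, toy instance).
import numpy as np
from sklearn.linear_model import Ridge

def black_box(X):  # stand-in for the model being explained
    return 1 / (1 + np.exp(-(2 * X[:, 0] - X[:, 1])))

x0 = np.array([0.5, -0.2])  # instance being explained
weights = []
for seed in range(20):
    rng = np.random.default_rng(seed)
    Xp = x0 + rng.normal(scale=0.3, size=(200, 2))    # local perturbations
    kernel = np.exp(-np.sum((Xp - x0) ** 2, axis=1))  # proximity weights
    surrogate = Ridge(alpha=1.0).fit(Xp, black_box(Xp), sample_weight=kernel)
    weights.append(surrogate.coef_)

w = np.array(weights)
print("feature weights: mean", w.mean(axis=0), "std", w.std(axis=0))
```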
NeurIPS 2021: Counterfactual Explanations Can Be Manipulated
1.3K views · 3 years ago
Counterfactual Explanations Can Be Manipulated
Video for NeurIPS 2021 Poster
Dylan: dylanslack20 | Hima: hima_lakkaraju | Sameer: sameer_
Abstract: Counterfactual explanations are emerging as an attractive option for providing recourse to individuals adversely impacted by algorithmic decisions. As they are deployed in critical applications (e.g. law enforcement, ...
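For context, the kind of recourse being discussed can be sketched as a generic Wachter-style counterfactual search on a toy model (this illustrates counterfactual explanations in general, not the paper's manipulation attack):

```python
# Nudge an instance toward the decision boundary while penalizing
# distance from the original point (toy logistic "loan" model).
import numpy as np

w, b = np.array([2.0, -3.0]), 0.5

def predict(x):
    return 1 / (1 + np.exp(-(x @ w + b)))

x = np.array([-1.0, 1.0])        # denied applicant, p is small
cf, lam, lr = x.copy(), 0.1, 0.05
for _ in range(500):
    p = predict(cf)
    # gradient of (p - 1)^2 + lam * ||cf - x||^2 with respect to cf
    grad = 2 * (p - 1) * p * (1 - p) * w + 2 * lam * (cf - x)
    cf -= lr * grad

print("original:", x, "p =", round(predict(x), 3))
print("counterfactual:", cf.round(3), "p =", round(predict(cf), 3))
```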
How to Win LMs and Influence Predictions (Sameer Singh, UCI), Repl4NLP 2021 Invited Talk
273 views · 3 years ago
How to Win LMs and Influence Predictions: Using Short Phrases to Control NLP Models
Sameer Singh, University of California, Irvine
Current NLP pipelines rely significantly on finetuning large pre-trained language models. Relying on this paradigm makes such pipelines challenging to use in real-world settings, since massive task-specific models are neither memory- nor inference-efficient, nor do we...
An Empirical Comparison of Instance Attribution Methods for NLP (NAACL 2021)
189 views · 3 years ago
An Empirical Comparison of Instance Attribution Methods for NLP
Pouya Pezeshkpour*, Sarthak Jain*, Byron Wallace, Sameer Singh (* equal contribution; presented by Pouya Pezeshkpour)
Widespread adoption of deep pretrained (masked) neural language models has motivated a pressing need for approaches for interpreting network outputs and for facilitating model debugging. Instance attribution meth...
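Two of the cheaper instance-attribution flavors in this space can be sketched on a toy model: gradient dot-products between test and training points versus plain input similarity. This is an illustrative setup, not the authors' code or models:

```python
# Score each training point by (a) gradient dot-product with the test
# point and (b) input cosine similarity, then compare the rankings.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
Xtr = torch.randn(50, 4)
ytr = (Xtr[:, 0] > 0).long()
model = torch.nn.Linear(4, 2)

def loss_grad(x, y):
    model.zero_grad()
    F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
    return torch.cat([p.grad.flatten() for p in model.parameters()])

xt, yt = torch.randn(4), torch.tensor(1)   # test point
gt = loss_grad(xt, yt)
grad_scores = torch.stack([gt @ loss_grad(Xtr[i], ytr[i]) for i in range(50)])
sim_scores = F.cosine_similarity(Xtr, xt.unsqueeze(0))

print("top by gradient dot-product:", grad_scores.topk(3).indices.tolist())
print("top by input similarity:   ", sim_scores.topk(3).indices.tolist())
```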
AAAI 2021 Tutorial on Explaining Machine Learning Predictions
4.8K views · 3 years ago
AAAI 2021 Tutorial on Explaining Machine Learning Predictions: State-of-the-art, Challenges, and Opportunities
Himabindu Lakkaraju (Harvard), Julius Adebayo (MIT), Sameer Singh (UCI)
explainml-tutorial.github.io/
As machine learning is deployed in all aspects of society, it has become increasingly important to ensure stakeholders understand and trust these models. Decision makers must have a clea...
NeurIPS 2020 Tutorial on Explaining ML Predictions: State-of-the-art, Challenges, and Opportunities
7K views · 4 years ago
Explaining Machine Learning Predictions: State-of-the-art, Challenges, and Opportunities
Himabindu Lakkaraju, Julius Adebayo, Sameer Singh
00:00:00-00:18:45 Introduction: Overview and Applications
00:18:45-01:24:16 Approaches for Post hoc Explainability
01:24:16-01:48:52 Explanations in Different Modalities
01:48:52-02:11:48 Evaluation of Explanations
02:11:48-02:28:27 Limits of Post hoc Explainab...
MOCHA: A Dataset for Training and Evaluating Generative Reading Comprehension Metrics (EMNLP 2020)
251 views · 4 years ago
MOCHA: A Dataset for Training and Evaluating Generative Reading Comprehension Metrics
Anthony Chen, Gabriel Stanovsky, Sameer Singh, and Matt Gardner (presented by Anthony Chen)
allennlp.org/mocha
Posing reading comprehension as a generation problem provides a great deal of flexibility, allowing for open-ended questions with few restrictions on possible answers. However, progress is impeded by ...
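The failure mode motivating a learned metric is easy to demonstrate: token overlap mis-scores a valid paraphrase that an embedding-based score handles better. The snippet below is a naive stand-in for that comparison, not LERC (the metric trained on MOCHA), and assumes the sentence-transformers package:

```python
# Compare crude token overlap with embedding similarity on a paraphrase.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
reference = "He stayed home because he was feeling ill."
candidate = "He was sick, so he didn't go out."

overlap = len(set(reference.lower().split()) & set(candidate.lower().split()))
emb = model.encode([reference, candidate])
print("token overlap:", overlap, "words")
print("embedding similarity:", float(util.cos_sim(emb[0], emb[1])))
```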
COVIDLies: Detecting COVID-19 Misinformation on Social Media (EMNLP 2020 NLP-Covid19 Workshop)
506 views · 4 years ago
COVIDLies: Detecting COVID-19 Misinformation on Social Media
Tamanna Hossain, Robert L. Logan IV, Arjuna Ugarte, Yoshitomo Matsubara, Sean Young, Sameer Singh (presented by Tamanna Hossain)
Best Paper Award at the EMNLP 2020 NLP-Covid19 Workshop
ucinlp.github.io/covid19/
The ongoing pandemic has heightened the need for developing tools to flag COVID-19-related misinformation on the internet, sp...
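A rough sketch of a retrieve-then-classify pipeline of the kind this line of work uses: embed the tweet, retrieve the nearest known misconception, then use an off-the-shelf NLI model as a stand-in stance classifier. The models and the two-item misconception list below are placeholders, not the paper's resources:

```python
# Retrieve the closest misconception, then classify the tweet's stance
# toward it with a generic NLI model as a proxy stance detector.
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

misconceptions = [
    "5G networks spread the coronavirus.",
    "Drinking hot water cures COVID-19.",
]
tweet = "My uncle swears the new 5G towers are what's making people sick."

encoder = SentenceTransformer("all-MiniLM-L6-v2")
scores = util.cos_sim(encoder.encode([tweet]), encoder.encode(misconceptions))[0]
best = misconceptions[int(scores.argmax())]

nli = pipeline("text-classification", model="roberta-large-mnli")
stance = nli({"text": best, "text_pair": tweet})
print("closest misconception:", best)
print("stance (NLI label):", stance)
```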
AutoPrompt: Eliciting Knowledge from Language Models w/ Automatically Generated Prompts (EMNLP 2020)
3.1K views · 4 years ago
AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts
Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh (presented by Robert L. Logan)
ucinlp.github.io/autoprompt/
The remarkable success of pretrained language models has motivated the study of what kinds of knowledge these models learn during pretraining. Reformulating t...
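AutoPrompt's core move is a HotFlip-style gradient search over prompt tokens. Below is a compressed single-step sketch of that idea, not the released AutoPrompt code; the prompt and the slot treated as a "trigger" are arbitrary choices for illustration:

```python
# Score every vocabulary item as a replacement for one prompt slot by
# the dot product of its embedding with the target-token loss gradient.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
emb = model.get_input_embeddings().weight          # (vocab, dim)

prompt = "Paris [MASK] the capital of France ."    # target for [MASK]: "is"
inputs = tok(prompt, return_tensors="pt")
mask_pos = (inputs.input_ids[0] == tok.mask_token_id).nonzero().item()
target = tok.convert_tokens_to_ids("is")
trig_pos = 3   # the "the" slot, treated here as the trigger token to replace

x = emb[inputs.input_ids].clone().detach().requires_grad_(True)
logits = model(inputs_embeds=x, attention_mask=inputs.attention_mask).logits
loss = -torch.log_softmax(logits[0, mask_pos], dim=-1)[target]
loss.backward()

# a large dot product with -grad means a big first-order drop in the loss
scores = emb.detach() @ -x.grad[0, trig_pos]
print(tok.convert_ids_to_tokens(scores.topk(5).indices.tolist()))
```

The full method iterates this step over several trigger slots, keeping the candidate that most improves the label likelihood.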
EMNLP 2020 Tutorial on Interpreting Predictions of NLP Models
5K views · 4 years ago
Interpreting Predictions of NLP Models
Eric Wallace, Matt Gardner, Sameer Singh
0:00-22:00 Part 1: Overview of Interpretability
22:00-1:01:51 Part 2: What Part of an Input Led to a Prediction? (Saliency Maps)
1:01:51-1:15:22 Zoom Question Answer
1:15:22-1:35:32 Part 2: What Part of an Input Led to a Prediction? (Perturbation Methods)
1:35:32-1:38:25 Zoom Question Answer
1:38:25-2:03:18 Part 3: ...
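A minimal example of the saliency maps Part 2 covers (a sketch, not the tutorial's own notebooks): attribute a sentiment prediction to tokens via the gradient of the winning logit with respect to the input embeddings:

```python
# Gradient saliency: one importance score per input token.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "distilbert-base-uncased-finetuned-sst-2-english"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tok("a gorgeous, witty, seductive movie", return_tensors="pt")
emb = model.get_input_embeddings()(inputs.input_ids).detach().requires_grad_(True)
logits = model(inputs_embeds=emb, attention_mask=inputs.attention_mask).logits
logits[0, logits.argmax()].backward()      # gradient of the winning logit

saliency = emb.grad[0].norm(dim=-1)        # one score per token
for t, s in zip(tok.convert_ids_to_tokens(inputs.input_ids[0].tolist()), saliency):
    print(f"{t:>12}  {s:.4f}")
```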
Evaluating and Testing Natural Language Processing Models (Sameer Singh, UC Irvine)
808 views · 4 years ago
Evaluating and Testing Natural Language Processing Models
Presented at WeCNLP 2020: www.wecnlp.ai/wecnlp-2020
Sameer Singh, University of California, Irvine (sameersingh.org)
Current evaluation of the generalization of natural language processing (NLP) systems, and much of machine learning, primarily consists of measuring the accuracy on held-out instances of the dataset. Since the held-out instan...
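The behavioral-testing alternative the talk argues for can be sketched in a few lines, in the spirit of CheckList though hand-rolled here rather than using the CheckList library: check that a sentiment model flips its label under simple negation.

```python
# A tiny behavioral test: does negation flip the predicted label?
from transformers import pipeline

clf = pipeline("sentiment-analysis")
cases = [("I love this airline.", "POSITIVE"),
         ("I do not love this airline.", "NEGATIVE")]

for text, expected in cases:
    got = clf(text)[0]["label"]
    print(f"{'PASS' if got == expected else 'FAIL'}  {text!r} -> {got}")
```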
Tweeki: Linking Named Entities on Twitter to a Knowledge Graph
319 views · 4 years ago
Gradient-based Analysis of NLP Models is Manipulable
196 views · 4 years ago
Entity Resolution by Clustering Contextualized Mention Embeddings
721 views · 4 years ago
Could you be so kind as to share Stata syntax for counterfactual prediction in the multinomial endogenous switching model? You can use my email, which I will send if I learn you are willing to help me.
I really don't understand how people can believe this false science. We can believe all lies if someone brainwashes us, and this is happening in the West now!!!!!
This was the #1 recommended video when I was looking up Song of Freedom, a great anti-pedo/child-trafficking movie. Wonder why YouTube/Google would tie such a great, important, very anti-pedo movie to a small channel talking about pronouns. Is there any data that using pronouns improves mental health? Last I saw, the mental issues get worse or stay the same. I thought supporting someone's mental delusions caused real damage.
keep science out of your disgusting ideology!
What explainable model would you use when it comes to time series data?
Hi, I wonder how interaction terms would interact with the DiCE model. Should we include them in the query instance or not?
Half of everything that we were told by the health authorities was “misinformation” has now been proven TRUE. This video did not age well.
Great tutorial
The WHO cannot be trusted with the health of the world. The authorities are the ones pushing disinformation.
⬆Misinformation Detected (95% confidence)
@ucinlp Oh, so opinions are now lobbed into the “misinformation” box! God help us. Based on the last few years, the WHO has shown how they pander to globalist agendas and actually handle pandemics and pandemic preparedness terribly. P.S. Free speech is a cornerstone of democracy.
Awesome presentation! Definitely a great complement to the paper itself!
Super well explained, kudos
Awesome implementation demo.
Deep-rooted, research-intensive work on decision data science and information arts, decoding the bureaucracy technology of surveillance and intelligence, covering bad actors that run behind the scenes, waveform-attached strings, and associated simulation experiments.
always a pleasure to listen to Dr Singh
Amazing talk! Thanks for introducing me to the "Concealed Data Poisoning" paper. Reminds me of one of my favourite papers from last year's ACL, Kurita et al. (2020).
Thanks, glad you liked it!
Thank you very much! This was a clear and on point explanation of explanations :)
Glad it was helpful!
It is hard to follow the links in the slides. The GitHub repo has a link to the slides, but even then, it doesn't seem possible to click the links in the slides.
docs.google.com/presentation/d/e/2PACX-1vRXRfXCI_tuynZHD6wkoHO2TNh3WVPK1Q0IkEzWdHAtzm5jEEbMWbvS5eAvFeJuFS0IO01qLMGi7diT/embed?start=false&loop=false&delayms=3000&slide=id.gade4847fcd_0_6009
Thanks for pointing this out, didn't realize google slides didn't allow you to click links. We'll put up PDF slides, and are also preparing a list of all references.
Available now: explainml-tutorial.github.io/neurips20
This is great stuff! :D
Glad you think so!
nice stuff guys! Thanks for sharing
Thanks for watching!
Sameer almost had a BBC moment. lol
Yeah, at 1:31:00? We got a new dog, who doesn't like being away from me for too long :)
@ucinlp Makes sense :D! Great tutorial btw, got a ton of great papers to read. Thanks for taking the time! :)