Diyi Yang: Building Human Centered LLMs for Social Impact

  • Published: 2 Oct 2024
  • Title: Building Human-Centered NLP for Social Impact
    Speaker: Diyi Yang, Assistant Professor in Computer Science Department at Stanford University
    Abstract: Large language models have revolutionized the way humans interact with AI systems, transforming a wide range of fields and disciplines. The benefits and promise of LLMs are accompanied by growing evidence of, and concern about, their negative aspects. In this talk, we discuss how to build socially responsible LLMs for social impact from a human-centered perspective. The first half presents a participatory design approach to developing dialect-inclusive language tools and adaptation techniques for low-resource languages and dialects, and further introduces a distilled voice assistant that uses cross-modal context distillation to enable positive speech interaction. The second half looks at skill training with LLMs, demonstrating how we use LLMs to empower novice therapists through simulated practice and deliberative feedback. We conclude by discussing how human-centered LLMs can empower individuals and foster positive change.
    Bio: Diyi Yang is an assistant professor in the Computer Science Department at Stanford University. Her research focuses on human-centered natural language processing and computational social science. She is a recipient of a Microsoft Research Faculty Fellowship (2021), an NSF CAREER Award (2022), an ONR Young Investigator Award (2023), and a Sloan Research Fellowship (2024). Her work has received multiple paper awards or nominations at top NLP and HCI conferences.