Digital minds & how to avoid sleepwalking into a major moral catastrophe | Jeff Sebo (2023)

  • Published: 7 Aug 2024
  • Originally released November 2023. Luisa Rodriguez interviews Jeff Sebo — director of the Mind, Ethics, and Policy Program at NYU — about preparing for a world with digital minds.
    Learn more and see the full transcript on the 80,000 Hours website: 80000hours.org/podcast/episod...
    Chapters:
    • Cold open (00:00:00)
    • Luisa's intro (00:01:16)
    • The interview begins (00:02:45)
    • We should extend moral consideration to some AI systems by 2030 (00:06:41)
    • A one-in-1,000 threshold (00:15:23)
    • What does moral consideration mean? (00:24:36)
    • Hitting the threshold by 2030 (00:27:38)
    • Is the threshold too permissive? (00:38:24)
    • The Rebugnant Conclusion (00:41:00)
    • A world where AI experiences could matter more than human experiences (00:52:33)
    • Should we just accept this argument? (00:55:13)
    • Searching for positive-sum solutions (01:05:41)
    • Are we going to sleepwalk into causing massive amounts of harm to AI systems? (01:13:48)
    • Discourse and messaging (01:27:17)
    • What will AI systems want and need? (01:31:17)
    • Copies of digital minds (01:33:20)
    • Connected minds (01:40:26)
    • Psychological connectedness and continuity (01:49:58)
    • Assigning responsibility to connected minds (01:58:41)
    • Counting the wellbeing of connected minds (02:02:36)
    • Legal personhood and political citizenship (02:09:49)
    • Building the field of AI welfare (02:24:03)
    • What we can learn from improv comedy (02:29:29)
    ----
    The 80,000 Hours Podcast features unusually in-depth conversations about the world’s most pressing problems and what you can do to solve them.

Comments • 4

  • @philipherr6782 • 22 days ago

    Massively interesting conversation, thank you Luisa and Jeff!

  • @j.d.4697 • 5 days ago

    To me this is by far one of the most important and interesting topics of all time, and — looking back at human history and seeing how practically nobody seems to care yet — also one of the scariest.
    Mishandling this could literally lead to wars and extinction.

  • @flickwtchr • 1 month ago

    Considering historic inequality in the US, the housing and healthcare crises, etc., wouldn't it be nice if AI revolutionaries cared about social justice for human beings first? The AI revolution is set to increase such inequality exponentially through massive job loss and the very real threat of authoritarian government empowered by AI (mass surveillance already in place, data profiles already in place, and now the power of AI to help an authoritarian sort out who is a political enemy, and to what degree).