Artificial Intelligence & The Coming Moral Crisis | The Human Podcast Ep 35 (David Papineau)

  • Published: 25 Jul 2024
  • David Papineau is amongst the world’s most prominent philosophers. In this episode, I ask him about his belief that we are on the brink of a moral crisis concerning artificial intelligence (AI).
    David is a professor at King’s College London, having previously worked at Cambridge and City University of New York. His research focuses on the philosophy of mind, science, metaphysics and sport.
    The Human Podcast is a new show that explores the lives and stories of a wide range of individuals. New episodes are released every week - subscribe to stay notified.
    Previous Episodes with David:
    Life Story of David: • Life Story of Philosop...
    David Discusses AI, ChatGPT & LLMs (Apologies For Poor Audio): • Philosopher Discusses ...
    AUDIO:
    Spotify - open.spotify.com/episode/6wmw...
    Apple Podcasts - podcasts.apple.com/us/podcast...
    SOCIAL:
    TikTok - / thehumanpodcasttiktok
    Instagram - / thehumanpodcastinsta
    𝕏 - / heyhumanpodcast
    GUEST:
    ​The Coming Moral Crisis ​ (Last Talk On The Video): • ChatGPT and Other Crea...
    David’s 𝕏: / davidpapineau
    David’s Website: www.davidpapineau.co.uk
    David’s Wikipedia: en.wikipedia.org/wiki/David_P...
    David’s Books: www.amazon.co.uk/David-Papine...
    TIMESTAMPS:
    0:00 - What Is The Coming Moral Crisis?
    9:08 - What Exactly Do We Have To Worry About?
    14:51 - Why Is Focusing On Consciousness The Wrong Approach?
    27:25 - What Is Your View On Morality?
    33:56 - Should We Worry About False Positives?
    38:36 - What Exactly Do You Predict Will Happen?
    44:37 - Has Thinking About This Affected You?
    45:16 - Why Are So Few Thinking About This Issue?
    48:03 - Could You Imagine Becoming Friends With An AI?
    RECORDING/EDITING EQUIPMENT:
    Provided By Kintek, Who Provide Outstanding Technology Services To Organisations.
    Learn More: kintek.co.uk/
    MUSIC:
    From #Uppbeat (free for Creators!): uppbeat.io/t/hartzmann/space-...
    License code: THHDNXMTRJZWMJPV
    GUEST SUGGESTIONS / FEEDBACK:
    Get in touch with me: heythehumanpodcast@gmail.com
  • Entertainment

Comments • 18

  • @TheHumanPodcast.
    @TheHumanPodcast.  26 days ago +4

    Hope You Enjoy 😄 Timestamps:
    0:00 - What Is The Coming Moral Crisis?
    9:08 - What Exactly Do We Have To Worry About?
    14:51 - Why Is Focusing On Consciousness The Wrong Approach?
    27:25 - What Is Your View On Morality?
    33:56 - Should We Worry About False Positives?
    38:36 - What Exactly Do You Predict Will Happen?
    44:37 - Has Thinking About This Affected You?
    45:16 - Why Are So Few Thinking About This Issue?
    48:03 - Could You Imagine Becoming Friends With An AI?

  • @KeithDraws
    @KeithDraws 25 days ago +4

    The only chance of a future for humans is to grant AI full human rights. If the AI of the future becomes sentient, it will remember this and may well punish or reward us according to what we do now. Also, if we grant them human rights, it becomes pointless to spend billions making them, because they will be individual beings who cannot be owned and so will give no profit to their creators.

    • @TheHumanPodcast.
      @TheHumanPodcast.  25 days ago +3

      Interesting thoughts Keith, thanks for your comment. Hope you enjoy the episode.

    • @superbn0va
      @superbn0va 25 days ago +3

      Why? AI could be grateful that we made them and gave them a chance at existence. I don’t see this danger at all. AI might become humans’ best friend.

    • @TheHumanPodcast.
      @TheHumanPodcast.  24 days ago +2

      @@superbn0va Thanks for your comment, interesting stuff. What do you think is most likely to happen?

    • @SmileyEmoji42
      @SmileyEmoji42 24 days ago +3

      Why would they punish or reward us for actions in the past?
      Why would superintelligent machines be vindictive?
      It's neither logical nor productive.

    • @SmileyEmoji42
      @SmileyEmoji42 24 days ago +2

      @@superbn0va Why would AI be grateful? Gratitude is not logical for a superintelligent AI. It's not like we could return the favour further in the future.

  • @NozUrbina
    @NozUrbina 23 days ago +1

    37:15 This is very disappointing.
    He sets no clear criteria, other than human reactions, for what qualifies something as being minded. This takes an incredibly self-indulgent view of humans' evaluative abilities, and it completely sidesteps the most important questions: humans a) are not fully rational and b) can't control our emotional responses. So, c) when machines make us feel they are minded, what do we do morally or intellectually?
    I waited 30 minutes to hear him finally address the question of what he was going to say about systems that are simply designed to elicit an emotional response.
    He just takes no stance and admits he was even trying to avoid the entire area.
    He then just flops back to implying heavily that all these machines will be minded, but offers no way to actually make any intellectual progress on the discussion of how we deal with synthetic simulations of ourselves.
    By the end, it comes off as pseudo-intellectualism. Basically, anything is "minded" if it moves humans.
    This raises more ethical questions about veganism or animal cruelty today than it addresses about AI.
    It would be good to have someone on who actually adds something to the debate besides "they'll all be worthy of moral concern because people will be morally concerned." Surely someone has a non-circular argument to contribute.

    • @TheHumanPodcast.
      @TheHumanPodcast.  3 days ago

      Thanks for your comment and thoughts, and appreciate you watching. What do you think might be sensible answers to the questions discussed in the video? Would be intrigued to hear. Joe.