"We must slow down the race to God-like AI": Ian Hogarth in the Financial Times

  • Published: 3 Feb 2025

Comments • 18

  • @ollie-d
    @ollie-d 8 months ago +13

    Great idea for a series, excellent while doing chores. The idea of the “island” is hilarious, because presumably any sufficiently advanced AGI would recognize instantly that it’s on an island and that humans are vetting it.

  • @jawyor-k3t
    @jawyor-k3t 8 months ago +11

    Dude I love the idea of this channel. You read very well.

  • @Enhancedlies
    @Enhancedlies 7 months ago +1

    Can I say, this is so helpful. Please do more of this!

  • @KolTregaskes
    @KolTregaskes 8 months ago +2

    10:21 * £500 million. 🙂
    I like the extra bits in this video, thank you.

  • @blessedbethebloom
    @blessedbethebloom 8 months ago +1

    Rob, always love your content!

  • @onuktav
    @onuktav 6 months ago

    How on earth did you sneak into Max Headroom's room?

  • @tiagotiagot
    @tiagotiagot 7 months ago +1

    The island idea makes no sense; the seawater will be barely an inconvenience. Hell, even moving the research to Mars won't delay things for long...

  • @alanjenkins1508
    @alanjenkins1508 8 months ago +1

    I can't understand how companies spending obscene amounts of money on AI hope to get their money back. However, it is fun watching them do it.

    • @tiagotiagot
      @tiagotiagot 7 months ago +2

      Not money, power. When you're a god you don't need money, and the greedy idiots think they'll be able to put a leash on a god...

  • @mgg4338
    @mgg4338 8 months ago +2

    Rob, do you think that a Machiavellian AGI would try to weaken humanity before openly striking? Do you think that the current geopolitical tensions may be influenced by such a non-human actor?

  • @vorpalinferno9711
    @vorpalinferno9711 8 months ago +1

    It's inevitable.
    If you don't build it, your competitor will.
    And any competitor that uses AI has an advantage over you.

    • @willcooper8028
      @willcooper8028 8 months ago +4

      It could be stopped or at least slowed by an international governmental effort, but I feel like that’s unlikely to happen

    • @no-cv4dx
      @no-cv4dx 8 months ago +2

      @@willcooper8028 Impossible to ensure compliance. There are no instruments to detect AI like there are for nuclear and other weapons, so an Open Skies-type treaty, or anything similar, is pointless.

    • @willcooper8028
      @willcooper8028 8 months ago +5

      @@no-cv4dx Well, it certainly takes a lot of specialized people, specialized resources, and a hell of a lot of money, all of which are definitely traceable. You can't track the fire with AI, but you can track the smoke, at least until its development gets much, much easier/cheaper.

    • @no-cv4dx
      @no-cv4dx 8 months ago

      @@willcooper8028 You don't need a lot of resources or money (a lot for governments, though not for companies/people). Specialized people? Sure, but you don't need many of those; just one who has the right idea.

  • @testboga5991
    @testboga5991 8 months ago

    The AI doomerism is becoming boring, and making the claims even more bombastic doesn't help at all. Whoever uses poorly defined terms like AGI should immediately have to pay 20 bucks for the poor. 😂

    • @unvergebeneid
      @unvergebeneid 8 months ago +10

      Speaking of definitions, how do you define "doomerism"?