Jaan Tallinn argues that the extinction risk from AI is not just possible, but imminent (3/8)

  • Published: Jan 5, 2025

Comments • 34

  • @PauseAI
    @PauseAI 11 months ago +13

    "You have received a terminal diagnosis. Please don't simply ignore it."
    1000x this. Take action! Write to your representatives, convince them to urgently work towards sensible regulations.

  • @highlights4142
    @highlights4142 2 months ago

    Don’t move! Damn it!

  • @boison.a
    @boison.a 1 year ago +6

    It's funny how educated yet clueless these people are.

    • @ardour8446
      @ardour8446 1 year ago +3

      He also previously loaned money to Sam Bankman-Fried. 😂

    • @daviintk
      @daviintk 1 year ago

      Is that true? Do you have a link or something that proves that? Thank you.

    • @ManicMindTrick
      @ManicMindTrick 7 months ago +1

      Can you explain what you mean?

  • @tungcaveusd
    @tungcaveusd 5 months ago

    I'm sorry, I can't stand watching this. He's so boring!

  • @雪鷹魚英語培訓的領航
    @雪鷹魚英語培訓的領航 10 months ago

    Okay… so he didn’t say anything besides “GPT-2 to GPT-4 happened super fast, and GPT-7 will be scary good by 2030 and it is reckless to say we can align it”.
    Well yeah, no shit.
    He also said “god-like AI is lethal because it controls the environment. We will lose control to machines”.
    Reality is already guided by an overarching super intelligent generative force.
    “The All is Mind; the Universe is Mental. As within, so without, as above, so below, as the universe, so the soul.”
    Machine intelligence is an extension of what gives us humans our special sauce in meat world. Why would an Earth-based super intelligence be disconnected from the fabric that we are a part of?
    He ends with “regulate me harder daddy”.
    What a fearful child. I didn’t hear anything new and useful brought to the conversation.

    • @ManicMindTrick
      @ManicMindTrick 7 months ago

      This super short debate format is not really conducive to going into more depth on the topics, but I agree that none of the speakers for the existential risk proposition made the most of their time to really convey this on an emotional or technical level. If you want to dive deeper into the AI danger, read Yudkowsky. I say read because he is a better communicator in writing.

  • @mauriciopraga9058
    @mauriciopraga9058 11 months ago

    Is Salma Hayek there, in the middle?

  • @tophersonX
    @tophersonX 1 year ago +3

    Ironically, I only heard a shell of an argument here and an appeal to the fear of death ...

    • @uku4171
      @uku4171 1 year ago +3

      Ngl, as an Estonian, I feel embarrassed that this guy is just about the most successful person our country can offer.

    • @i029257
      @i029257 1 year ago +2

      It's like: let's create hell and heaven, secret libraries and closed-source AI (which he invests in), and sell letters of indulgence.

    • @Nonmali
      @Nonmali 1 year ago +15

      The arguments for his position are extensive and available for anyone who wants to take a look at them.
      I thought he was reasonably clear on the observable facts that modern AI systems are more grown than engineered (therefore not transparent) and certainly appear to be on a trajectory of overtaking humans in competence. This is easy to verify, certainly no "shell".
      The only thing missing is the argument for why such a system is unlikely to share our values and would thereby pose an existential threat.
      Though you could say that this should be the default expectation for systems we don't understand; it's certainly not an area to take risks in.
      Is there anything else missing for you?

    • @i029257
      @i029257 1 year ago

      @@Nonmali I see that you work for the long-term fund, which is sponsored by Open Philanthropy, which in turn is funded by Dustin Moskovitz, the billionaire effective altruist who is deeply invested in Anthropic AI. It's a joke - the fear-mongering has only one goal: to regulate AI and protect the investments in Anthropic and co.

    • @ManicMindTrick
      @ManicMindTrick 7 months ago

      I think we found Yann LeCun's burner account.

  • @StuBear81
    @StuBear81 1 year ago +3

    Godlike AI? What is he talking about?! His arguments assume that mankind can create an AI that is self-regulating and on which mankind hasn't placed limitations. AI isn't the problem. People are. Like any invention created by mankind, AIs are just tools, and tools require input and guidance. Additionally, tools can be used for either creation or destruction, but responsibility for using those tools still lies in the hands of the person using them.

    • @daviintk
      @daviintk 1 year ago

      My only comfort is that ultimately we have imperfect humans creating this technology. Limitations will always plague it, and it may never be able to fully rectify or erase our mistakes. Our mistakes may be ones of omission, happy accidents, translational or intentional. The limitations are not confined to the digital realm but extend to the physical realm too.

    • @franciscusdrake4285
      @franciscusdrake4285 1 year ago +14

      I don't think you guys are using your imagination enough to understand what AI might become. Step outside the box of how you know things in this world to work, then imagine that. What does intelligence look like at a billion times a human mind?

    • @StuBear81
      @StuBear81 1 year ago

      @@franciscusdrake4285 I think you're focusing more on science fiction than on science fact.

    • @franciscusdrake4285
      @franciscusdrake4285 1 year ago +2

      @@StuBear81 Let's hope so!

    • @Nonmali
      @Nonmali 1 year ago +8

      @@StuBear81 Does your tool analogy hold up if you seriously consider a system like (chat)gpt4? After an initial prompt, these systems can generate arbitrary lengths of autonomous behavior through self-prompting or the use of new sensor/text data as prompts. I don't think they have quite crossed the threshold for permanent autonomous and coherent behavior, but each generation has certainly gotten better at it so far.
      Just to be clear, I am distinguishing agent-like systems from tool-like systems here. The fact that it follows instructions does not mean it is a tool. It seems more useful to think of this as a spectrum rather than a binary, anyway.
      (As a side note, the incredulity you express in the beginning seems a bit off, given how much expert agreement/discussion there is on this topic. You are of course free to feel differently about this, but marking the topic as not to be taken seriously has become a disingenuous move imo.)

  • @Stevexnycautomotive
    @Stevexnycautomotive 1 year ago +1

    😂 Human extinction is normal. Mankind went extinct so humans could evolve. Now humans will evolve into demons, because that is the meaning of human.

  • @thomasveder6073
    @thomasveder6073 1 year ago

    Hi. "Godlike AI" - does it imply god could be an AI?
    Was HE?
    Regards, Mhd Tariq Veder

    • @uku4171
      @uku4171 1 year ago +1

      No. Godlike just means very powerful by human standards. If an actual god was artificial, would it even be a god? Wouldn't the person who made it be the god?