Eliezer Yudkowsky on the Human Importance of the Intelligence Explosion

  • Published: 18 Jan 2025

Comments • 12

  • @tan_ori
    @tan_ori 6 months ago +2

    So positive at the end…

  • @crowlsyong
    @crowlsyong 1 year ago

    Great talk

  • @BeMyArt
    @BeMyArt 2 years ago

    I'm a fan and just really happy to see him live. I'm surprised that he is also a good speaker, not just a thinker 😃

  • @DariusTheTenth
    @DariusTheTenth 12 years ago +1

    We could be what Kurzweil called "lucky first": the first technological civilization to develop a Singularity in the Universe. The probability of this, according to astronomers, is small, but not zero.

  • @supahacka
    @supahacka 12 years ago

    If it were all so straightforward, there would be no Fermi paradox and we would be seeing extremely powerful superintelligent agents everywhere!?

    • @lucy-pero
      @lucy-pero 1 year ago

      we can't even see planets outside the solar system... why would we see aliens?

    • @_yak
      @_yak 1 year ago

      @@lucy-pero I enjoy when a comment takes 10 years to get a reply. It's like a conversation between Ents.

    • @lucy-pero
      @lucy-pero 1 year ago

      @@_yak yeah it's pretty cool we can do that now haha.. well, I don't think they'll ever see the reply, but who knows

    • @_yak
      @_yak 1 year ago +2

      ​@@lucy-pero haha eventually conversations will play out over centuries.
      About the original question: we can actually see some planets outside of our solar system, though only indirectly, and detecting life on them is something we may be able to do with the James Webb Telescope. But when the original commenter mentioned the Fermi Paradox, it's likely more to do with why we don't receive any sort of signals or see the kind of superstructures that very advanced civilizations could build, like Dyson Spheres.
      One theory is that there's a "great filter" ahead of us that stops intelligent civilizations from advancing beyond a certain point. For example, we've only had the nuclear bomb for around 80 years. It's possible that we've been lucky so far and that there's a principle that goes: "the odds that a civilization will destroy itself within 500 years of the discovery of the power of the atom approach 1". Or, more pertinent to our moment: "the odds that any sufficiently advanced civilization will destroy itself by developing a misaligned AI approach 1".
      Of course it's also possible that we're early and other advanced civilizations are in our future, or we just haven't detected them for one reason or another.

    • @robertweekes5783
      @robertweekes5783 1 year ago

      @@_yak Eliezer doesn’t think misaligned AGI is a solution to the Fermi paradox (great filter), because we should still see signs of (artificial) intelligent life if it were taking over its host civilizations