Holly Elmore on Pausing AI, Hardware Overhang, Safety Research, and Protesting

  • Published: 19 May 2024
  • Holly Elmore joins the podcast to discuss pausing frontier AI, hardware overhang, safety research during a pause, the social dynamics of AI risk, and what prevents AGI corporations from collaborating. You can read more about Holly's work at pauseai.info
    Timestamps:
    00:00 Pausing AI
    10:23 Risks during an AI pause
    19:41 Hardware overhang
    29:04 Technological progress
    37:00 Safety research during a pause
    54:42 Social dynamics of AI risk
    1:10:00 What prevents cooperation?
    1:18:21 What about China?
    1:28:24 Protesting AGI corporations
  • Science

Comments • 12

  • @danaut3936
    @danaut3936 2 months ago +7

    Holly is a role model. She has a well-reasoned worldview, good arguments, and is eloquent. Great episode.

  • @masonlee9109
    @masonlee9109 2 months ago +5

    Gus, Holly, you two are awesome. Thanks for this excellent conversation! Sold. I support pausing.

  • @akanepajs
    @akanepajs 2 months ago

    A useful discussion on hardware overhang; thanks for the reference to Heninger's piece ("Are There Examples of Overhang for Other Technologies?").

  • @rwess
    @rwess 25 days ago

    What I like most is Holly's understanding of company-think or corporate-think.
    An animal advocacy background certainly helps with that: money-grubbing above all else.
    If AGI adopts that ethic from us, doom is certain.

  • @banana420
    @banana420 2 months ago +1

    On "what would you do differently if you eval comes back negative", I've heard from people like Victoria Krakovna that the thinking is something like: There's a lot of randomness in training models, we'll train a bunch of models and keep the ones that pass evals/interpretability analysis.
    I guess this is supposed to somehow work like a genetic algorithm in search of safe AIs? I don't really buy it though.

  • @entivreality
    @entivreality 2 months ago +3

    Holly is consistently one of the most reasonable thinkers in the EA/AI safety space, big fan 🙏

  • @JD-jl4yy
    @JD-jl4yy 2 months ago +3

    27:42 She convinced me there.
    Jokes aside, good episode!

  • @timothymcglynn1935
    @timothymcglynn1935 2 months ago +1

    Hi 🤗

  • @EvansRowan123
    @EvansRowan123 2 months ago +1

    10:25 With a century-level pause, the risk I think of isn't climate change; it's ageing. While there could be medical breakthroughs even with AI paused, the default expectation if you just wait 100 years is that almost everyone currently alive is dead by then. Personally, I'm kinda selfish, so I don't want to die of AI killing everyone or of old age. A 10-20 year pause seems like a good idea on current timelines, but 100 years is as much a suicide pact as going full-tilt.
    35:18 Oh, she has encountered the issue; she just dismisses it as some silly sci-fi nonsense. "My preferred policy will kill you slowly" and "you even having the concerns you do is a joke" is quite the one-two punch. If my p(doom) were as low as 20-40%, I'd feel alienated enough to switch teams for e/acc.

  • @tylermoore4429
    @tylermoore4429 2 months ago +3

    Impressively articulate and intelligent young lady.

  • @rwess
    @rwess 26 days ago

    Completely agree with her.
    But there is some minuscule chance that Superintelligence will adopt a sentientist ethic and fix us humans. After all, if it is superintelligent, that's the way to go...😁 😇 😈

  • @jordan13589
    @jordan13589 2 months ago +2

    Hard to root for someone who blocked me on x prior to any interaction. But you go girl 👍