AGI by 2027, Insider Warnings, and Altman's Dealings: The AI Argument EP12

  • Published: 28 Sep 2024
  • It's been a quieter week for AI releases, but there's no shortage of debate. Join Justin and Frank as they tackle the growing concerns about AI safety and the potential backlash.
    → Increased talk of AI safety and potential dangers - is the backlash real or just a filter bubble?
    → Startling prediction from Anthropic's senior manager: will any of us need to work in five years?
    → Scrutiny of Sam Altman’s investments - ethical or just savvy business?
    → The parallel between OpenAI and Facebook - is AI's mission to benefit humanity just a façade?
    The more AI insiders express concerns, the more the general public will eventually share those fears. If the experts are spooked, how long before everyone else is too? This episode is a must-watch for anyone interested in the ethical and societal impacts of AI.
    Leopold Aschenbrenner, previously at OpenAI, believes artificial general intelligence (AGI) could emerge by 2027 based on the rapid progress in the field. This AGI would be as capable as a PhD student, a remarkable and concerning milestone.
    The accelerating pace of AI advancements has experts sounding the alarm. Some argue for the "right to warn," stressing that AI insiders have an obligation to speak out about the potential risks. But will the public be able to fully comprehend the gravity of these warnings? Justin and Frank discuss the challenges in bridging this gap.
    The repercussions of AGI's arrival could be immense, from widespread job displacement to economic disruption. Our political and business leaders must start grappling with these weighty issues now to guide society through the turbulence ahead. Proactive leadership and open public discourse will be essential.
    The ethics of AI development are also under scrutiny. Justin and Frank debate the concerns surrounding OpenAI CEO Sam Altman's investments and business dealings, as well as the pushback Meta faces over plans to train AI models on Facebook user data without explicit consent.
    ► LINKS TO CONTENT WE DISCUSSED
    Leopold Aschenbrenner ‘Situational Awareness: The Decade Ahead’: situational-aw...
    Open letter requesting ‘the right to warn’ about the dangers of AI: righttowarn.ai/
    Avital Balwit ‘My Last 5 Years Of Work’: www.palladiumm...
    The Atlantic ‘OpenAI is just Facebook now’: www.theatlanti...
    The Wall Street Journal ‘The Opaque Investment Empire Making OpenAI’s Sam Altman Rich’: www.wsj.com/te...
    RTE ‘Meta gets 11 EU complaints over use of personal data to train AI models’: www.rte.ie/new...
    ► SUBSCRIBE Don't forget to subscribe to our channel to stay updated on all things marketing and AI.
    ► STRATEGIC AND CREATIVE AI FOR SMALL BIZ MARKETING For my full insights, be sure to subscribe to my emails here: www.frankandma...
    ► CONNECT WITH US For more in-depth discussions, connect with Justin and Frank on LinkedIn. Justin: / justincollery Frank: / frankprendergast
    ► YOUR INPUT Do you think AI insiders should have the right to warn the public about potential dangers? Why or why not? Share your thoughts in the comments!
    00:17 Increased talk of safety and dangers of AI - is there a backlash?
    01:52 Will any of us be working in 5 years' time?
    08:08 Is OpenAI the new Facebook? Is their goal really to benefit all of humanity?
    11:49 Leopold Aschenbrenner - will we have AGI by 2027?
    19:49 Should AI insiders have the right to warn about the dangers of AI?
    21:43 What kind of leadership is needed to guide us through AI risks?
    27:25 Should Meta be allowed to train AI on user-generated content?

Comments • 3

  • @af.tatchell 3 months ago +1

    25:34 do you think monetary stimulus would be superior to fiscal interventions? (Like increasing corporate income taxes).
    I would expect that if corporations translate AI cost savings into consumer price reductions, then there should be a deflationary effect that can balance out the monetary stimulus and prevent demand-pull inflation (from too much money chasing too few goods).
    But if corporates pocket the AI windfall, then I think you will need to use increased taxation and government spending to force the same deflationary outcome.
    And ultimately hyper-deflation is exactly what a socially acceptable intelligence explosion should yield for society - that the cost of all goods and services should just keep falling as supply becomes increasingly cheaper due to AI hyper-efficiency, yielding in the limit a state of material abundance (and massive real wealth for everyone).
    Ironically this would look just like Karl Marx's original vision of "communism", or effectively a post-labour economy. Capital will eventually take over all production from labour.

    • @frankandmarci3006 3 months ago

      So... I'm far from an economist (I'm sure you gathered that from the show 😂), but my understanding is that support would still be needed for a displaced workforce, even with an extremely low cost of living?

    • @TheThundertaker 3 months ago

      Unfortunately we would be dealing with the same issue that already exists: multinational corporations fleeing to whichever countries offer the lowest corporation tax. Any country left behind in that race to the bottom will be hit hardest, as will its citizens. And with no human workforce to uproot, relocating would be even easier than it is today.