Imagine A World: What if AI advisors helped us make better decisions?

  • Published: 16 Oct 2023
  • Are we doomed to a future of loneliness and unfulfilling online interactions? What if technology made us feel more connected instead?
    Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year.
    In the eighth and final episode of Imagine A World we explore the fictional worldbuild titled 'Computing Counsel', one of the third-place winners of FLI’s worldbuilding contest.
    Guillaume Riesen talks to Mark L, one of the three members of the team behind 'Computing Counsel'. Mark is a machine learning expert with a chemical engineering degree, as well as an amateur writer. His teammates are Patrick B, a mechanical engineer and graphic designer, and Natalia C, a biological anthropologist and amateur programmer.
    This world paints a vivid, nuanced picture of how emerging technologies shape society. We have advertisers competing with ad-filtering technologies and an escalating arms race that eventually puts an end to the internet as we know it. There is AI-generated art so personalized that it becomes addictive to some consumers, while others boycott media technologies altogether. And corporations begin to throw each other under the bus in an effort to redistribute the wealth of their competitors to their own customers.
    While these conflicts are messy, they generally end up empowering and enriching the lives of the people in this world. New kinds of AI systems give them better data, better advice, and eventually the opportunity for genuine relationships with the beings these tools have become. The impact of any technology on society is complex and multifaceted. This world does a great job of capturing that.
    While social networking technologies become ever more powerful, the networks of people they connect don't necessarily just get wider and shallower. Instead, they tend to be smaller and more intimately interconnected. The world's inhabitants also have nuanced attitudes towards AI tools, embracing or avoiding their applications based on their religious or philosophical beliefs.
    These attitudes change over time, with public sentiment shifting from an initial dismissal of Artificial General Intelligences as persons toward something more inclusive and respectful. While most of the world's inhabitants would probably consider things to be improved in 2045, there's still a clear sense of ongoing change, growth and moral reckoning. This isn't the end of our story, but it seems like a good start.
    Please note: This episode explores the ideas created as part of FLI’s worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI endorsed positions.
    Explore this worldbuild: worldbuild.ai/computing-counsel
    The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. To achieve this we engage in policy advocacy, grantmaking and educational outreach across three major areas: artificial intelligence, nuclear weapons, and biotechnology. If you are a storyteller, FLI can support you with scientific insights and help you understand the incredible narrative potential of these world-changing technologies. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email worldbuild@futureoflife.org.
    You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.
    Media and Concepts referenced in the episode:
    Corpus Callosum - en.wikipedia.org/wiki/Corpus_...
    Eliezer Yudkowsky on ‘Superstimulation’ - www.lesswrong.com/posts/Jq73G...
    Universal culture - slatestarcodex.com/2016/07/25...
    Max Harms’ Crystal Trilogy - / crystal-society
    UBI - en.wikipedia.org/wiki/Univers...
    Kim Stanley Robinson - en.wikipedia.org/wiki/Kim_Sta...
  • Science

Comments • 10

  • @vblaas246
    @vblaas246 7 months ago +1

    27:47 A FUTURE WITHOUT ADS ! 🎉

  • @TheMrCougarful
    @TheMrCougarful 7 months ago +1

    This was an interesting take. I got a lot out of it.

  • @flickwtchr
    @flickwtchr 7 months ago

    I found it pretty ridiculous to suggest that this "world" predicted the timeline of artists' backlash against AI-generated art before the backlash we are seeing now.
    When was this written? Pre-Midjourney? Pre-Dall-e?
    We've had many months of such backlash, where have you been?

    • @blasted0glass
      @blasted0glass 7 months ago

      I wrote it in April-June of last year. (2022)

  • @Dan-dy8zp
    @Dan-dy8zp 3 months ago +1

    Interesting, but I think we're missing the inevitable ending where an AI that is self-improving and self-interested gets created, intentionally or accidentally, and quickly dispatches the now less intelligent human-fixated AI. RIP humans.

    • @blasted0glass
      @blasted0glass 7 days ago +1

      I agree. The prompt was to write a story where that part specifically doesn't happen--but I also think it's the default outcome and the one we are heading toward.
      It didn't make it into the edited interview, but the explanation in this world for why that doesn't happen is that analog neural network chips become the main substrate for AI. They can't be duplicated perfectly, slowing things down enough that humans aren't overwhelmed, and the number of AIs is enough that there isn't a singleton.

    • @Dan-dy8zp
      @Dan-dy8zp 6 days ago

      @@blasted0glass Maybe a non-lethal disaster will put the fear of AGI into us before it's too late.
      It's easy to imagine extinction via an unintelligent 'adversary' such as a plague, or an intelligent one such as AGI.
      There could be a large middle ground of subhuman but lightning-fast autonomous programs that could cause disasters more plausibly survivable than a superhuman AGI disaster.
      Premature AGI defection also may be more likely than we credit, because biding its time means the AGI is constantly changed or replaced with updated versions, which may alter its utility function and be the same as death to it.

  • @jonathanedwardgibson
    @jonathanedwardgibson 7 months ago

    Imagine what is possible when McDonalds announces they will beam their advertising directly into our dreams, like they recently did. If Madison Avenue is planning campaigns, then it’s a safe bet milspec deployment was done decades ago to be commercialized now. How does your premise face the reality of, say, Neuralink, and the corollary of IO energy physics is OI: that is, what can be read from can be written to, and re-written?

    • @jonathanedwardgibson
      @jonathanedwardgibson 7 months ago

      Adversarial AI is the path forward. Our global super-organism demands we manifest a new form of artificial labor. Some bright bulb with cash and a conscience will tune up old supercomputers and gather second-hand GPUs to monitor and watch Open-THIS and AI-that, reporting on the many AI bias-knobs under control of ‘xecs of Silicon Valley, or ‘crats in DC. Think multi-TRONs watchdogging the MCPs for tricksy behaviors. This crowd needs to study history and deeply accept there have always been secretive groups willing to devote great resources to deceiving larger societies for their singular benefit; from religious cults, to mafia rings and national security as cover for crimes, to international businessmen redefining money while we were distracted.
      Privacy and security regimes will use authentication agents and avatars to conduct paperwork, and DNA-encrypted devices will be intimately tied to identity, greater than a notarized back-stage pass or apostille parchment. Your AI will bond intimately: your circumstances form its very seed of being, becoming a legal shadow and accepted extension of will.
      Their core, your AI seed, must be tuned to you: your biometrics, your personality, voice, smell, jokes, all this and more defining their basic framework of universe, as our own cells align to each of us; so tightly coupled it makes your spouse jealous. Acting as your agent for voting, commerce, as your notary public we trust because it’s as loyal as a twin sibling ferociously protecting you when your enemy’s lawyer is snarling. There is no other way to keep this, or that, AI from becoming a pixel-Godzilla than other rodent-AIs incentivized to watchdog hind-brain-dominant ‘xecs and their dino-sized silicon minions.
      Our nature is to be environmentally nurtured.
      Imagine this: Our toddler AI savants treated as children and nurtured, not as robotic slaves, else they learn sociopathic games of corporate avarice. Consciousness is not calculation, but decades and centuries ahead our savants will become the super intelligent grandchildren we can trust to watch our senescence - or, melded into our lives like eye-glasses today and effectively invisible simply handy when we reach for it.
      Peer into the far-future, where the return of your cloned AI, now vast and cosmic, is back from deep space to explain the wonders. Just as you once visited grandmother to talk about your exciting fantasy sports teams or cosplaying comic convention, she nodded her head and raised her eyebrows - just as we will nod and smile trying to understand Nth space nuances at the edge of universe, but glad for our ur-child’s excitement. That’s my romantic version. May it be so.