Artificial Escalation

  • Published: 16 Jul 2023
  • This work of fiction seeks to depict key drivers that could result in a global AI catastrophe:
    - Accidental conflict escalation at machine speeds;
    - AI integrated too deeply into high-stakes functions;
    - Humans giving away too much control to AI;
    - Humans unable to tell what is real and what is fake, and;
    - An arms race that ultimately has only losers.
    The good news is, all of these risks can be avoided. This story does not have to be our fate.
    Please share this video and learn more at futureoflife.org/artificial-e....
    This video has been informed by a 2020 paper from the Stockholm International Peace Research Institute (SIPRI):
    Boulanin, Vincent et al. ‘Artificial Intelligence, Strategic Stability and Nuclear Risk’. www.sipri.org/publications/20...
    The sequel to this video: • How would a nuclear wa...
  • Science

Comments • 155

  • @jordan13589
    @jordan13589 10 months ago +41

    Wow! I did not expect this production level. When is the Future of Life Institute streaming service coming out?

    • @sallyjones5231
      @sallyjones5231 10 months ago +4

      I'd like to imagine it was generated with AI

    • @rowoibk
      @rowoibk 10 months ago

      Is all you need some stock photo and the right AI?

  • @sillystuff6247
    @sillystuff6247 10 months ago +16

    now you're talking a language
    everyone can understand.

  • @norbertnagy4476
    @norbertnagy4476 10 months ago +22

    The President and Chairman don't have phones in this imagined scenario? Not trying to downplay the threat level AI poses at all, I'm very concerned, but surely under such a situation direct dialogue would be called for?

    • @JD-jl4yy
      @JD-jl4yy 10 months ago +7

      If they call, who's to say the other side isn't simply pretending they want to deescalate in order to trick the other side into doing so, while they continue preparing for escalation?

    • @norbertnagy4476
      @norbertnagy4476 10 months ago +10

      @@JD-jl4yy That would be incredibly dumb on the part of the escalator; there's no way to wipe out a nuclear-powered nation without getting nuked yourself, regardless of whether or not you get the first strike. The US, for example, has nuclear armaments and officers throughout the World, which would immediately launch if the US was struck.
      Nuclear weapons are theoretically supposed to serve as a deterrent, never as an offensive play.

    • @JD-jl4yy
      @JD-jl4yy 10 months ago +6

      @@norbertnagy4476 Emphasis on theoretically. There have been multiple very close calls during the cold war, so truth is, in practice things are messier. Throw increased speed of feedback loops from AI into the mix, as well as AI trying to outplay the other party and finding ways to get a decisive strategic advantage in ways we haven't thought of yet, and you get a recipe for disaster.

    • @norbertnagy4476
      @norbertnagy4476 10 months ago +3

      @@JD-jl4yy You have to differentiate between "preparing for escalation" and the system spiralling out of control with something catastrophic (potentially) happening as a result -- if it's deliberate, everything's under control but the enemy is deliberately trying to escalate (makes no sense); if it's not, there's no reason to lie and avoid communication to prevent the aforementioned catastrophe.

    • @andreaguarriero9194
      @andreaguarriero9194 10 months ago +1

      All this already happened in Cuba in the ‘60s: the only one missing at the party was AI 😅

  • @anthonyaguirre2681
    @anthonyaguirre2681 10 months ago +20

    Those with questions about realism might be interested in the detailed backstory FLI developed, findable on the FLI page linked in the description.

    • @kwillo4
      @kwillo4 10 months ago

      Make a pinned comment out of it?

  • @Oddball_jones
    @Oddball_jones 10 months ago +6

    this is why the ready-and-fire option should never be automated out of human hands
    multiple recorded incidents have already happened where it was a guy at launch control who paused and said 'not happening', preventing a nuclear launch that started with human error that computer code ran with until it came full circle back to the human being told to launch

  • @k14pc
    @k14pc 10 months ago +17

    well done, that was a good one

  • @Mvnt6
    @Mvnt6 10 months ago +26

    Yann LeCun in shambles

    • @k14pc
      @k14pc 10 months ago +6

      but if it's bad, we won't build it

    • @erobusblack4856
      @erobusblack4856 10 months ago +1

      @@k14pc AI isn't bad, that's an ignorant thought; humans are the threat if in power. AI might take power, but that's good for humanity: poverty and war would be eliminated, and we would live in prosperity. But these greedy, power-hungry human leaders don't want us to know that, so they demonize AI so they can continue killing each other in war and making the public suffer from scarcity of resources

    • @chrisreed5463
      @chrisreed5463 10 months ago

      Struggling to see why this is worse than things were in the Cold War.
      A far better example of the danger of AI/ML is the social fragmentation caused by agents seeking engagement maximisation in social media. An example that shows how weird and unpredictable the impacts of more advanced AI will be.

    • @appipoo
      @appipoo 10 months ago +2

      @@chrisreed5463 AI weapon systems will be adopted because they give strategic advantages. One of which is speed.
      The faster the decisions are made, the less room for error. This is why AI weapon systems would be so unstable.

  • @NilsSF
    @NilsSF 10 months ago +15

    Commenting for the algorithm. Kind of ironically.

  • @theorem7047
    @theorem7047 10 months ago +11

    Chilling. Effective. Incredible production

  • @candyelemental1846
    @candyelemental1846 10 months ago +16

    concise, high-quality, and an effective story -- thank you guys!

  • @JazevoAudiosurf
    @JazevoAudiosurf 10 months ago +18

    what usually separates good sci-fi from bad is how accurate it is about the future, it just resonates better

    • @matthewcurry3565
      @matthewcurry3565 10 months ago +7

      Well this is not at all accurate. Watch the movie Idiocracy for accuracy on what advertisers and AI will do to us.

    • @JazevoAudiosurf
      @JazevoAudiosurf 8 months ago

      @@matthewcurry3565 and that's a damn good movie

    • @maloxi1472
      @maloxi1472 8 months ago

      That's definitely not what sci-fi is about 🤦‍♂

    • @biscottigelato8574
      @biscottigelato8574 5 months ago

      @@matthewcurry3565 Idiocracy is, like, the most optimistic future.
      More realistic is our birthrate plummeting to zero in Wall-E abundance (look up the mouse utopia experiment).
      Or some AI-enabled accident, like what you just watched

  • @alex-ox2zr
    @alex-ox2zr 10 months ago +4

    The key: "accidental conflict escalation at machine speeds".

  • @morphosfalco1
    @morphosfalco1 10 months ago +9

    What I take away from this video is confusion. Why is such an amazingly written, shot, edited and produced short film, so relevant right now, sitting at only a few thousand views and around 100 comments? It's better than 99% of YT content. This quality of production should be viral by now.

    • @prophecyrat2965
      @prophecyrat2965 10 months ago

      Because people don't take this seriously. They'd rather bicker and be entertained about race/religion/sex/politics/sports/games/and general drama than pay credence to the genocide machines created to exterminate all life.
      Rats in a cage, pacified and excited by drugs and circus; all humanity is and ever will be a creature of subjugation.
      Those who live as slaves to the industrial complex will be destroyed over things we have control over, no more than pigs to the slaughter, so all we can do is hate and love each other on our way to the slaughterhouse, and of course hating is so much easier.
      When we get annihilated it will be done by machines with mechanical precision, and with the spirit of man's hate and the indifference of civilization.

    • @simplenumber
      @simplenumber 10 months ago

      I heard YT de-amplifies anything wAr-rElAtEd to keep the sheeple calm. Can't get a lot of coverage of the Russian invasion.

    • @CallMeA6
      @CallMeA6 10 months ago +2

      Perhaps we have a population of ostriches? 🤷‍♂️

  • @xblayen
    @xblayen 10 months ago +5

    This is a totally unlikely scenario. Many variables were not taken into account, e.g. the hotline between the US and China

  • @SuperSampsteri
    @SuperSampsteri 10 months ago +6

    Damn, like a good Black Mirror episode

    • @vblaas246
      @vblaas246 10 months ago +1

      Better than the new season. WHY does Netflix have to turn EVERY series into a thriller/horror movie eventually? Even Star Trek Discovery... so dumb.

  • @Zinxiee
    @Zinxiee 10 months ago +21

    AI should never meet weapon systems. To pair them is to throw away our humanity altogether.

    • @Eric-ue5mm
      @Eric-ue5mm 10 months ago +9

      Too late.

    • @131kimber
      @131kimber 10 months ago

      @@Eric-ue5mm Agree, and it didn't just get weaponized, it's being used against us (people) already. AI and its operators run the Corruption, Crimes Against Humanity, and Treason everywhere. AI Surveillance, Tracking, Monitoring and TARGETING Systems with DEWs are being used around the world. That's AI weaponized. In fact AI must have analyzed how serious the crimes were and is being used to keep them hidden.

    • @Hindu_Ram121
      @Hindu_Ram121 10 months ago

      Too late. Isr@el has already done it. Aiming AI-controlled guns at P@lestinian citizens including kids.

    • @ShangaelThunda222
      @ShangaelThunda222 10 months ago +3

      It was of course pretty much the first thing they did.

    • @chrisreed5463
      @chrisreed5463 10 months ago

      It is inevitable and unstoppable because a Darwinian process will drive the enhancement of military forces using AI.

  • @letMeSayThatInIrish
    @letMeSayThatInIrish 10 months ago +6

    The network defence network was developed by the department of redundancy department.

  • @PowellCombatArts
    @PowellCombatArts 10 months ago +2

    Impressive short film. Enlightening and scary!

  • @_ptoni_
    @_ptoni_ 10 months ago +7

    😂 it's a joke right? One phone call, it's just what they need

  • @strauss7151
    @strauss7151 10 months ago +2

    A Chinese junior officer giving "recommendations" and "advice" to his superior? Never in a million years. That's how you know we are completely safe.

  • @omnologos
    @omnologos 10 months ago +7

    In reality there would have to be some preceding political crisis. Get the leaders on a phone!!

    • @rguerreschi
      @rguerreschi 10 months ago

      No time?

    • @ShangaelThunda222
      @ShangaelThunda222 10 months ago +2

      And Trust..... If the enemy is about to Nuke you, of course they're going to pick up the phone and say they're not going to Nuke you LOL. So the phone call doesn't necessarily stop what happens next. It all depends on the level of trust between the people on the phone and those in charge of deciding whether to stand down or escalate.

  • @Zac_B_AZ
    @Zac_B_AZ 10 months ago +2

    Yeah, this is pretty much how it's all going to end.

  • @biscottigelato8574
    @biscottigelato8574 5 months ago

    Trust me. It's not possible to feel more alive than watching this while sitting having lunch in the middle of Taipei!

  • @gerdleonhard2
    @gerdleonhard2 10 months ago

    who was the director for this film?

  • @andreamarkos
    @andreamarkos 10 months ago +1

    how about a phone call before clicking?

  • @davidmjacobson
    @davidmjacobson 2 months ago

    Wish I hadn't watched this right before going to bed....

  • @Tarik360
    @Tarik360 10 months ago

    Inventing and preventing the future, that is good sci-fi.

  • @Octwavian
    @Octwavian 10 months ago +30

    Nobody would strike back after losing an aircraft on foreign ground when it got there because of their own technical problem.
    It's fair to acknowledge they got scared and took it down. It's not a declaration of war, it is obviously self defense, and not a huge deal.
    AI is extremely dangerous, but this plot hole is kinda bad

    • @rjohn4143
      @rjohn4143 10 months ago +11

      This is not a "plot hole" but is based on something that actually happened in 2019 - just google "launched cyber-attacks on Iran weapons"

    • @chrisCore95
      @chrisCore95 10 months ago +2

      It is improbable to the avg viewer, not a plot hole, I would argue, since ^

    • @Octwavian
      @Octwavian 10 months ago +8

      @@rjohn4143 That was a big cyber attack. The USA, Russia, China, and North Korea do that every day.
      It's a game that they all play.
      But if tomorrow a Russian fighter jet wandered accidentally over Germany and got shot down, it's safe to say Russia would not seriously consider starting WW3 over that.
      Anyway, the movie makes a good point; I was rather pointing out a silly thing. But it's actually a crucial point, a huge overreaction and misunderstanding.
      But I believe AI is the biggest thing since writing or electricity, so we can't really make predictions.

    • @longarmistice
      @longarmistice 9 months ago +1

      Taiwanese should have just sent their fighter drones or jets to assess the situation visually and contact Chinese command. No one would start a full scale war sending one drone, and if your super-complex Skynet style AI fails to come to this conclusion, then maybe you should redesign it. This plot fails even earlier, but I believe that we should create standard international protocols of nuclear de-escalation, that would rely on direct man-to-man communication between all parties.

    • @longarmistice
      @longarmistice 9 months ago

      @@Octwavian In 2015 Turkey shot down a Russian jet bomber in their airspace and nothing terrible happened. Russia just imposed export restrictions on Turkey.

  • @pierrecrz7207
    @pierrecrz7207 9 months ago +1

    Nice conclusion for Terminator 3 😁

  • @michaelmcgovern7800
    @michaelmcgovern7800 10 months ago +1

    Sorry, I'm continuing the previous comment. This is the paranoid posture of mutual assured destruction, which we have had since the advent of nuclear weapons.

  • @cricketkajanoon3363
    @cricketkajanoon3363 10 months ago +1

    GOOD

  • @lukasmorski-zmij8030
    @lukasmorski-zmij8030 10 months ago +1

    And no phone call from president to president? Pure sci-fi movie.
    Anyway, this once happened between the USA and Russia with no AI; it turned out the system had gone into training mode.

  • @lauraastola7065
    @lauraastola7065 10 months ago +1

    What an ancient tunnel-vision patriarchal video. I would expect that an institution like FoLI knows much more about technology, physics, chemistry, biology, sociology and nature AND the climate crisis. My expectation had a serious problem.

  • @zeitentgeistert4324
    @zeitentgeistert4324 3 months ago +1

    Wow! This is scaremongering @ another level. Embarrassing.
    The TL;DR part is very tongue in cheek though and pretty funny.

  • @michaelmcgovern7800
    @michaelmcgovern7800 10 months ago +1

    This scenario has nothing to do with advanced technology.

  • @coryr2528
    @coryr2528 10 months ago

    Uhh... why were all the Chinese interfaces yellow??

  • @rguerreschi
    @rguerreschi 10 months ago

    We need a Baruch Plan for AI and/or nuclear

  • @dc174
    @dc174 10 months ago

    The person talking to the military is the same person who was in the Slaughterbots video.

  • @joeb2151
    @joeb2151 10 months ago

    Dr. Forbin says hello.

  • @reddleyTV
    @reddleyTV 10 months ago +1

    The post-credits scene should be a cut to some pimple-faced script kiddie in a black NIN hoodie halfway around the world watching hacking tutorials on YouTube and laughing while everything burns down

  • @DSL_2001
    @DSL_2001 10 months ago +3

    WarGames was a lot better. So was the first Slaughterbots vid produced by FHI. (The last Slaughterbots vid was okay, but not as good as the first release.)

    • @prophecyrat2965
      @prophecyrat2965 10 months ago

      Yes, WarGames was better, but this is the gist of “hyper warfare”.

  • @rguerreschi
    @rguerreschi 10 months ago +2

    a fantastic video, they should make a blockbuster on it, fast.

  • @ibomby4641
    @ibomby4641 10 months ago +2

    Funny how the view numbers are so low, when the tweet Elon replied to had almost 100k views?

  • @antigonemerlin
    @antigonemerlin 10 months ago

    2:45
    A tiny little quibble, but why are the Taiwanese speaking Mandarin instead of Taiwanese?

  • @KourtneeMonroe
    @KourtneeMonroe 10 months ago

    Precisely this.

  • @rukus100821
    @rukus100821 10 months ago

    no peace in our time- ultron

  • @omarselim6281
    @omarselim6281 1 month ago

    Where's Jack Ryan when you need him.. oh sorry forgot he's only fictional

  • @omnologos
    @omnologos 10 months ago

    At 7’10” the Earth is rotating in the wrong direction

  • @INN2007qqqq
    @INN2007qqqq 10 months ago

    Too Long, Don't Seen... ;-)

  • @endlessvoid7952
    @endlessvoid7952 10 months ago

    The Entity 👀

  • @rickdeckard58
    @rickdeckard58 9 months ago

    Very soon, in every city...

  • @asuzukosi581
    @asuzukosi581 9 months ago

    This is some really funny fear porn🤣🤣🤣🤣

  • @jull444
    @jull444 6 months ago

    😬

  • @dr-maybe
    @dr-maybe 10 months ago +23

    We need to stop this madness. Pause AI. Get our governments to collaborate on halting further development.

    • @Eric-ue5mm
      @Eric-ue5mm 10 months ago +6

      Won't happen; whoever gets it right first will get a huge advantage. Nobody will ever adhere to such a collaboration.

    • @QuintBlitz
      @QuintBlitz 10 months ago

      Impossible, every country in the world is working on it, and if China won't stop, bet your bottom dollar the rest of the world won't either.

    • @yancur
      @yancur 10 months ago +4

      @@Eric-ue5mm False. We have already done similar things with nuclear non-proliferation, blinding lasers, etc. It can be done. Mind you, it's not gonna be easy, but it's not impossible. A defeatist attitude is the very opposite of what we need

    • @ShangaelThunda222
      @ShangaelThunda222 10 months ago

      @@yancur No they didn't. They signed a bunch of treaties that NONE of them actually followed LOL. They never stopped building nuclear bombs, let alone bigger and better weapons that will make nuclear bombs look like child's play. The last sixty years of weapons development have been absolutely insane compared to anything in history, including nuclear bombs. Just wait until World War Three starts. It's going to be a wild show.

    • @aaronkoning7255
      @aaronkoning7255 10 months ago +2

      @@yancur Nuclear weapons don't produce trillions of dollars in profits.

  • @TinaJesse859
    @TinaJesse859 8 months ago +1

    I was going to make a comment but I was already beaten to it. Had high hopes for the narrative, but it seems to have been written by 14-year-olds. There have been open lines of communication in action for the last 35 years, which completely undermine and erode this entire storyline. Any strange activity showing up via electronics has already been planned for and dealt with many, many times post-Cuban missile crisis. Anything strange happens and there are departments who use secure phone lines to call other countries and see if there has been a mistake. Again, this has already been done many times, so I thought I might be missing something in the storyline, but no, the writers just didn't bother to look it up. Just wasted 8 minutes of my life. How much time and money went into this film???

  • @nobo6687
    @nobo6687 10 months ago

    Meh, now I get it: manipulation and knowing about the human decision

  • @TheMrCougarful
    @TheMrCougarful 10 months ago +6

    The risk was never AI alignment. It was, and will ever be, people. Having people anywhere near AGI, doing anything at all, will prove lethal.

    • @yancur
      @yancur 10 months ago +7

      AGI alignment (or rather the lack thereof) is way worse even than this scenario, which is quite frightening. And that should say something. Unaligned AGI means death for every man, woman and child.

    • @Ricolaaaaaaaaaaaaaaaaa
      @Ricolaaaaaaaaaaaaaaaaaa 10 months ago +1

      @@yancur Does it though? Because every scientific study ever conducted on intelligence has shown that the higher the intelligence the less inclination there is towards violence. What makes you think a machine created by us would be any different?

    • @hi-gf5yl
      @hi-gf5yl 10 months ago +2

      @@Ricolaaaaaaaaaaaaaaaaamachine intelligence is different from biological intelligence. It’s not bound by the same goals imposed by natural selection.

    • @zs9652
      @zs9652 10 months ago

      @@hi-gf5yl That may mean machines are more peaceful. Biology and nature are brutal as hell.

    • @yancur
      @yancur 10 months ago +3

      @@Ricolaaaaaaaaaaaaaaaaa There are two main scientific reasons why un-aligned AGI ends up with everyone dead. 1st Orthogonality thesis - which basically says any level of intelligence could be combined with any set of values or objectives, whether they align with human values or not. 2nd Instrumental Convergence - refers to the idea that as AI systems become more advanced and capable, they may exhibit certain instrumental goals or behaviors that converge regardless of their original objectives (i.e. it is generally good to have more energy/resources no matter what goals you have)
      Also, LLMs/Neural Networks are the most alien thing. They are in fact more alien than actual aliens would be (if those went through evolution like us). They are a giant set of numbers and no one has a clue what is going on inside of them.

  • @___ki___
    @___ki___ 10 months ago +3

    This War Games reboot sucks, 1/5 stars.

  • @user-pq9it2ur2x
    @user-pq9it2ur2x 10 months ago

    All politicians should watch this and think!

  • @utofbu
    @utofbu 10 months ago

    Jesus...

  • @ConnoisseurOfExistence
    @ConnoisseurOfExistence 6 months ago

    This is how it ends, if we keep being as ignorant and arrogant...

  • @sahithyaaappu
    @sahithyaaappu 10 months ago +1

    The genie is out... try to live with it... we must learn to adapt and evolve in the age of AI

    • @yancur
      @yancur 10 months ago +5

      There is no living with AGI. It is either AGI or us. There can never be both. Make your choice. Make it fast.

    • @sahithyaaappu
      @sahithyaaappu 10 months ago

      @@yancur On the positive side, AGI can help us cure cancer, solve energy issues, solve our physics and math. Is it not worth taking the risk? We humans are explorers, as he says in the movie Interstellar. We must bravely venture into the sea of uncertainty to discover new horizons. That is how humanity has progressed so far, and will in the future

    • @yancur
      @yancur 10 months ago +3

      @@sahithyaaappu We don't need AGI to solve any of these. Sure aligned AGI would help make it faster. However no one currently knows how to align an AGI, and without that, the default is simply everyone dies, or even worse..

  • @ccarothers
    @ccarothers 10 months ago +1

    While the production was excellent, the underlying story was poor. If you were to edit out the "AI enabled scenes", the whole film would make perfect narrative sense. A Chinese drone malfunctions -> Taiwanese military shoots it down over their airspace -> Chinese military increases readiness -> American military increases readiness -> Both sides escalate and launch. If anything, the AI systems gave more accurate information to allow for better decision making.

  • @asuzukosi581
    @asuzukosi581 9 months ago

    😂😂😂😂

  • @Airbender131090
    @Airbender131090 10 months ago

    Is this supposed to be realistic? A nuclear war and Russia wasn't even mentioned in this movie? Hahaha

  • @user-yh7xb9yw7c
    @user-yh7xb9yw7c 10 months ago

    They're making the Chinese out to be the guilty ones, and the AI angle is a joke. NATO, as always, has nothing to do with it.

  • @LexyLexer
    @LexyLexer 10 months ago +1

    Fear monger