Why We Should Ban Lethal Autonomous Weapons

  • Published: 25 Mar 2019
  • Top AI researchers -- including deep-learning co-inventor Yoshua Bengio and AAAI President-elect Bart Selman -- explain what you need to know about lethal autonomous weapons. Thanks to Joseph Gordon-Levitt for the narration. Take action at autonomousweapons.org

Comments • 142

  • @gradymoxley2925
    @gradymoxley2925 3 years ago +68

    You're here for the debate topic and you know it

  • @SmartEngine-
    @SmartEngine- 4 years ago +33

    It cannot be blocked by just banning...

    • @TruAnRksT
      @TruAnRksT 4 years ago +5

      No more than pipe bombs or assault weapons can be stopped by making them illegal.

    • @TheManinBlack9054
      @TheManinBlack9054 4 years ago +2

      @@TruAnRksT making guns less ubiquitous and harder to get does help with gun deaths

    • @TruAnRksT
      @TruAnRksT 4 years ago +2

      @@TheManinBlack9054 Only the few accidental deaths. If you are intent on killing, you don't need a firearm. But they are fun!

  • @futavadumnezo
    @futavadumnezo 2 years ago +3

    Hi, I'm from the future. We didn't ban them. Actually, big tech is using you as a means of training their drone AI without you even knowing.

  • @paryanindoeur
    @paryanindoeur 5 years ago +25

    We need John Nash's game-theory perspective (even though he is dead) on this issue: _coalitions (or agreements) are unsustainable when it benefits one party (generally the first party) to cheat._
    IOW, China will not stop researching and producing AI weapons; the U.S. knows this, and thus cannot and will not stop researching and producing AI weapons, and so on...

    • @adelaperrone9351
      @adelaperrone9351 5 years ago +4

      Don't blame China; the first atomic killing of humans was done by the US, and the rest of the world must defend itself. Who is the biggest weapons producer today? The US works for DEATH, not for LIFE

    • @LandoCalrissiano
      @LandoCalrissiano 5 years ago

      @@adelaperrone9351 No weapon under use today can make decisions on who to kill. That is still a human's job. This video supports a ban on the kinds of weapons that can make such decisions.

    • @ninapramudia8500
      @ninapramudia8500 4 years ago +2

      That is what we call a "security dilemma" in this anarchic system

    • @sbalogh53
      @sbalogh53 4 years ago +4

      Don't blame China. In the last 200 years, how many wars or regime changes were initiated by China? Yes, there are a few, but mainly in their surrounding region. Compare that with the USA which has been at war somewhere in the world for almost every year for the past 200+ years. How many military bases does China have around the world? How about the USA? Who has been the real aggressor for the past 2 centuries? They claim that they "protect" freedom around the world, but what they are really doing is exploiting everyone else in the world for their own benefit. USA was built on theft and violence.

    • @zaneferrin4659
      @zaneferrin4659 4 years ago +1

      @@LandoCalrissiano except for anti-vehicle landmines and claymore landmines

  • @autonomous2010
    @autonomous2010 5 years ago +31

    Criminals that don't follow the law: Meh.

    • @MrNabows
      @MrNabows 4 years ago +1

      Exactly, but idk where that leaves us

    • @okjacob
      @okjacob 4 years ago +4

      Criminals often don't have access to the same resources which universities or military have access to. It's easy to ship a weapon but it's harder to develop a weapon by kidnapping university professors and forcing them to work for you over the span of 15+ years. BUT with AGI then maybe it doesn't matter. The "human" is already the most dangerous "weapon"

    • @autonomous2010
      @autonomous2010 4 years ago +6

      ​@@okjacob Oh no doubt. My comment was more tongue-in-cheek than anything. The vast majority of criminals are not nearly intelligent enough to implement and deploy something as presumably complex as a mostly-autonomous weapon. Same reason why we don't see criminals building nuclear bombs despite how reasonably easy it is to get access to radioactive materials. The knowledge requirements are very significant.

    • @krtcampbell9007
      @krtcampbell9007 3 years ago

      But if they understand that they're just as easy a target, if not more so, since the R&D is not on their side, then they will understand it's in their best interest too. Criminals with AI is one thing, but a rogue government like China? That's scary.

    • @silverhawkscape2677
      @silverhawkscape2677 3 years ago +1

      @@okjacob Just leak the blueprints and source code online. Wait for China to eventually make a functional version, then have it leak to bad actors.
      Once the tech is put down into writing, you can't stop it from spreading.

  • @turn-n-burn1421
    @turn-n-burn1421 2 years ago +4

    Banning criminals from criminal behavior is a joke.

  • @mistercohaagen
    @mistercohaagen 5 years ago +14

    We live in a world full of private dictatorships called Corporations. Does anyone really think a tiny pocket of legislation in some Nation-State is going to prevent or halt development? AI is just a continuation of the same arms race we've been in since the dawn of perpetual warfare, constantly driven by scarcity itself. The real eternal struggle has always been between the concepts of zero-sum and win-win. AI could be the most powerful weapon our species has ever developed for this "first principles" conflict, one that might actually turn the tide by providing a process, like science itself, that everyone can constantly point to and say "don't worry, we'll have enough"... so we can stop the violent cycle of desperation.

    • @sbalogh53
      @sbalogh53 4 years ago +4

      For some there is never enough. Try telling all the multi-billionaires that they have too much and they will answer they want more, and more. There is never enough to satisfy some people, and some countries. They want ultimate wealth and POWER.

    • @Bankable2790
      @Bankable2790 4 years ago +1

      "The industrial revolution and its consequences have been a disaster for the human race."

    • @mistercohaagen
      @mistercohaagen 3 years ago

      @@RazgrisFloob I'm still hopeful. Whatever hardware it takes to run an AGI, eventually it'll be productized. Once it's in the public's hands, the hacking will begin, and shortly thereafter be freed like all software.

    • @mistercohaagen
      @mistercohaagen 3 years ago

      @razgriz I'm a fan of Computer/Numberphile vids as well. So, Cambridge Analytica was again a corporate entity created for a strategy that could only be enacted by the labor of coders. They more closely resembled a rogue intelligence group mixed with a troll farm than a software suite that any one person could orchestrate in a lone act of terrorism. There were data breaches and leaks of the data and tools they collected and hacked together, plus some insight into how big social media corporations allowed C.A. to leverage their own APIs, which led to the so-far toothless attempts at new regulation, but that fight is ongoing. This video is more worried about deploying bespoke AI weaponry in the field, as-is. To me, that's just more of the same competitive capitalism we've been up to for a while now. Even if they allow smart tanks and robotic dogs to fire bullets at hapless shepherds in some desert (their usual targets), they're only doing basic things like image recognition and other signals analysis to identify and fire upon a target (are there sheep around, does the guy have a colorful rag on his head and an AK-47? If tallied trues are >= 2, jump to fire function). Those types of devices themselves aren't impervious to conventional munitions fired by humans, and they're an interdependent fusion of hardware and software that can't escape or change its own form. They also can't (re)produce themselves. There would need to be a convergence of nanotech, manufacturing, and some kind of universal architecture of computation that all the different forms of AI could run across, plus there would need to be a universal network fabric such as what SpaceX is building right now. Once AGIs are no longer a physical locus of interdependent hardware and software synergy, they might be made to exist as a pure virtualization, as a cloud application does today... in which case our species could be in real trouble.
      Up until that possible singularity, we can still fight these things pretty well in all the old ways we're accustomed to: bombing factories, schools, hospitals, weddings, and then freezing assets, usurping supply chains, stealing natural resources, etc. The technology in your phone is made from various materials mined from 6 different continents and put through miles of highly fallible refinement and manufacturing facilities... all dependent on human labor, and these proposed AI weapons would be made of the exact same stuff. The only way they become real for a while yet is if we really go out of our way to build and maintain such a high-level capability with a specific purpose (military-industrial profit making) in mind. There is a great distance between all of that toil and some kind of liberated AI committing human genocide. I wouldn't mind being wrong, cause there is no way any intelligence is escaping this rock as gassy bags of meat that turn to muck moments after being irradiated even slightly. It's up to people like us to learn and wield these same types of AI as weapons of mass creation, so that the abundance produced can tranquilize the war-makers if at all possible.

    • @mistercohaagen
      @mistercohaagen 3 years ago

      @@RazgrisFloob For coding AI? I used to know a guy who had 80x PS3's in his room that he used to cluster to break passwords. He did a ton of speed.

  • @lorenzogiorgioni9001
    @lorenzogiorgioni9001 3 years ago +9

    In developed countries with shrinking populations, these systems can man armies that nobody wants to serve in, ensuring defence and creating industries and jobs.
    They can make a dependable and expendable force to avoid conflict and respond to threats.
    They end our (developed nations) dependence on mass immigration and on mass society to keep militaries staffed.
    Like the nuclear bomb, these could be the next threat of total annihilation that deters conflicts (nuclear bombs did this for 70 years now.. Without them, despite them being totally immoral, we would have certainly had a 3rd and maybe even a fourth World War)
    The path to hell is paved with good intentions.

  • @user-cv2of4ve4u
    @user-cv2of4ve4u 9 months ago +2

    Everyone is on the same playing field... period

  • @sdmarlow3926
    @sdmarlow3926 5 years ago +7

    How do you target people by ideology? How does a ban on "authorized development" combat those building such a weapon for an intentionally illegal (by any current standard) action? If it's about current tech not being able to make such decisions, how safe is it to use them on the road today in our cars? Current AI isn't even AI, it's ML, and that has no path to "greater intelligence or understanding" than it did in the 80's. Besides, freedom of speech is already under attack with the "dumb systems" we have today. Where is the call to ban automated flagging of social content? It's fine to say we shouldn't deploy such systems because they don't have the awareness and context needed to make life or death decisions, but that is different from an outright ban on research, which is needed to make things like driverless cars safer. What about defending against such autonomous systems? China has zero ethical issues with the kinds of research it is doing. Some feel-good Western platitudes have no impact on them (perhaps a tad ironic if sanctions and a blockade of China in the future, should they be in violation of a ban, actually precipitate military action). And the public stigma part... videos like this create the stigma (esp. the slaughterbots video). There are a lot of communities that would likely welcome a robocop (that isn't afraid of being shot at) to replace the trigger-happy humans that respond to 911 calls.

    • @letMeSayThatInIrish
      @letMeSayThatInIrish 5 years ago +4

      How do you target people by ideology? You identify each person, then scan their social media history. For instance, it's not difficult to categorize right leaning and left leaning people.

    • @Mictla155
      @Mictla155 5 years ago +1

      @@letMeSayThatInIrish Exactly, we are screwed. You will not be able to say or do anything that even mildly rocks the boat, not to mention rogue groups that would not mind using such things as well for whatever they want.

    • @donder172
      @donder172 5 years ago +1

      @@letMeSayThatInIrish This is actually done in a video called 'Slaughterbots'. It's on YouTube on the channel 'Stop Autonomous Weapons'. It's pretty terrifying, though.

    • @Apolyion
      @Apolyion 4 years ago

      Good points until you reached the "trigger happy humans....". Makes me question your intelligence. Is it real or artificial?
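The targeting recipe described earlier in this thread — identify each person, scan their post history, categorize by leaning — amounts to nothing more than a filter over a score. A minimal, purely hypothetical sketch of that idea (the term lists, names, and threshold below are all invented; a real system would use trained classifiers, which is exactly what makes the scenario plausible):

```python
# Hypothetical keyword-based "ideology scorer" -- illustrative only.
# All term lists and thresholds here are made up.

LEFT_TERMS = {"welfare", "union", "redistribution"}
RIGHT_TERMS = {"deregulation", "tariffs", "tradition"}

def ideology_score(posts):
    """Negative leans left, positive leans right, 0 is neutral."""
    score = 0
    for post in posts:
        words = set(post.lower().split())
        score += len(words & RIGHT_TERMS) - len(words & LEFT_TERMS)
    return score

def flag_profiles(profiles, threshold=2):
    """Return usernames whose posts cross the scoring threshold."""
    return [user for user, posts in profiles.items()
            if abs(ideology_score(posts)) >= threshold]
```

The point the comment makes survives any sophistication you add: once profiles are machine-readable, selecting people by ideology is one function call.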

  • @whenyougodown228
    @whenyougodown228 3 years ago +2

    0:22 That is a SUPPLY CARRIER. It CARRIES SUPPLIES. It is NOT A WEAPON.

    • @thatbee4923
      @thatbee4923 3 years ago +1

      But an autonomous weapon can easily be treated as a supply.

    • @whenyougodown228
      @whenyougodown228 3 years ago

      @@thatbee4923 Are you saying that it's still a weapon?

  • @Misskittycat8
    @Misskittycat8 4 years ago

    tell us how

  • @user-cv2of4ve4u
    @user-cv2of4ve4u 9 months ago +2

    In the age of drones and robo dogs, every military has the same thing; we're all on prison earth

  • @vitaliener
    @vitaliener 4 years ago +1

    Very scary! Important Topic!

  • @MaxPlayne87
    @MaxPlayne87 1 year ago +2

    You think you can "legislate" this out of the way?

  • @FarnazCreations
    @FarnazCreations 5 months ago +1

    I think that yes, we must remain cautious about the immense progress in artificial intelligence.
    Yes, in the worst-case scenario, it could destroy our humanity.
    Yes, only humans can feel emotions, make decisions, and think correctly.
    If intelligence dominated us one day, how often would it be wrong?
    We are the only ones building a future world.
    However, unlike the video, I think that artificial intelligence will allow great advances in the future, while we obviously remain cautious.

  • @ericpham8205
    @ericpham8205 3 years ago

    Are humans and machines different? And now AI has renewable energy while humans do not

  • @viralvideos9227
    @viralvideos9227 4 years ago

    The killer drone can be hacked easily, can't it?

    • @thatbee4923
      @thatbee4923 3 years ago

      The point of an autonomous weapon is to operate independently. If the weapon is independent, there would be no way to connect to it without physical tampering. There would be, however, a time when the software could be changed: when the program is being put onto the weapon. That would be incredibly difficult, but a successful hack would lead to a swarm of weapons with corrupted code.

  • @PlayLists-For-Everyone
    @PlayLists-For-Everyone 1 year ago

    THE MORAL DILEMMA:
    A 100% autonomous vehicle is racing down a road. Something happens that causes an unpredictable accident and dangerous scenario to occur. The car has lost control and is now speeding toward two different sets of human beings. It does not have time to avoid all of them, it might be able to avoid one of them, the only other option is to crash into a wall at potentially lethal speeds for the human inside. The "robot" must now decide between smashing straight into 5 young school kids, or five older men. The school kids are more difficult to avoid, any logics engine would calculate that the children are to be hit. What do you think about that? And more importantly, you know damn well you would have smashed into that wall, taking yourself out potentially, but sparing EVERYONE in YOUR CAR'S PATH. But your "autonomous car" won't see life that way. It's going to use its complete lack of emotional understanding as its excuse to choose which humans get to live in that scenario.
    Now, I did a very poor job of describing a "moral dilemma" with autonomous AI. There are others who've provided you with much more realistic and frankly gut wrenching scenarios, just Google them, and you'll see what I was trying to convey here. Human emotion is so essentially vital in each and every aspect of our daily lives and how we interact and communicate and live safely among each other. Logic alone is nowhere near enough to create a safe, functioning, effective AI, and it baffles me why these "great minds" don't seem to really care about this fundamental problem.
    Anyway, whatevs, maybe you see what I'm trying to say here.... :))
    Stay human ;))
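The "logic engine" this comment fears can be made concrete with a toy cost-minimizing planner. Everything below is invented for illustration (no real autonomous-driving stack is implied); the uncomfortable part is how small it is:

```python
# Toy casualty-minimizing planner. The occupant_weight constant decides
# whether the car will ever sacrifice its own passenger -- all figures
# here are made up.

def choose_action(options, occupant_weight=1.0):
    """Return the action with the lowest weighted expected cost.

    options maps an action name to a pair:
    (expected external casualties, occupant fatality risk).
    """
    def cost(name):
        external, occupant_risk = options[name]
        return external + occupant_weight * occupant_risk

    return min(options, key=cost)

scenario = {
    "hit_children": (5.0, 0.0),   # harder to avoid, per the comment
    "hit_adults":   (5.0, 0.2),   # swerving adds risk to the occupant
    "crash_wall":   (0.0, 0.9),   # spares everyone outside the car
}
```

With `occupant_weight=1.0` the planner crashes into the wall; raise the weight and it "chooses which humans get to live." That is the comment's point restated: the morality lives in a tunable constant.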

  • @Daniel-ih4zh
    @Daniel-ih4zh 3 years ago

    Arguments here mostly argue from the fact that they're not good enough *yet*

  • @Earl_E_Burd
    @Earl_E_Burd 4 years ago

    Just make a law to be good and not be bad.

  • @Doeff8
    @Doeff8 3 years ago

    This video has only been watched 50,000 times? What the f? That is ridiculous.

  • @w.s8676
    @w.s8676 2 years ago +2

    Think about it... banning will only make those countries who have them and refuse to get rid of them stronger

    • @Cousgoose
      @Cousgoose 1 year ago

      I agree. Stopping the United States from producing them just makes China and Russia stronger, because China and Russia don't care about the rules. Lol 😂

  • @HiTechDiver
    @HiTechDiver 4 years ago +3

    Your sentiment and intentions are noble, but it is naive to believe that a ban will stop bad actors from using Autonomous Weapons. Unfortunately, this technology is out there, and like every other technology, will be used for both good and bad; that's reality. Ban or no ban, bad actors will employ this technology. Therefore, the US must maintain the edge on this technology and every other technology.

  • @gustavobarreraq3567
    @gustavobarreraq3567 4 years ago

    total control

  • @md.tahseenraza4791
    @md.tahseenraza4791 2 years ago

    This is one LAW Newton will be relieved to have been rejected...

  • @notyou1567
    @notyou1567 4 years ago +2

    Human extinction is inevitable.

  • @myartchannel8205
    @myartchannel8205 5 years ago +2

    You might as well say "I propose a ban on autonomous AI". Any AI, no matter how good the intentions, can become an autonomous weapon. Why not just not be assholes to Robots?

    • @meshachwallace2752
      @meshachwallace2752 2 years ago

      Yes, not just physically, but also exposing hostile information or personal stuff.

    • @spook6394
      @spook6394 7 months ago

      This is not true. Don't be ridiculous. "Autonomous AI" can come in many forms, many of which cannot kill. And they can be programmed to have limitations, and killing humans can be a limitation.

  • @ericpham8205
    @ericpham8205 3 years ago +1

    One day AI can create its own AI; then earth would be a stranger place for normality

  • @coolconcrete6945
    @coolconcrete6945 5 years ago +8

    You need to change ur logo, it creates distance and coldness. You need something aesthetically pleasing so people will listen to ur messages!!

  • @samuel_hafen
    @samuel_hafen 4 years ago

    But what if "the enemies" do not have these ethics, so we become the hunted?
    I think we shall work against war in general, because the more technology evolves, the more will die in a war.
    And if we do not stop this, it will end in the total annihilation of humanity (which would be best for earth).
    So I call YOU out to stop the war by stopping the hate and being more open to the opinions of others.
    So try being nice to other people and try to understand their opinion and their personality,
    Because if you are not open with others, they won't be with you.
    Try to understand other religions/opinions and why they are doing this, like:
    -Why wear burkas?
    -Why love everyone?
    -Why should the earth be flat? (this is more of a joke)
    -Why war?
    I hope this helps a bit. English is not my native language, so there could be serious writing issues.
    #NoAutonomousWeapons #Love #NoWar #StopClimateChange #StopRacism

  • @dadadruma
    @dadadruma 4 years ago

    It's only a matter of time, prolly within a year this will begin to take place. Humanity is fucked

  • @tzm_tvp_rbe5808
    @tzm_tvp_rbe5808 4 years ago +2

    There is only one path Humanity can take to avoid this dystopian future:
    #DirectDemocracy
    #GiftEconomy
    Only by rejecting hierarchy in all of its forms can we ensure that AI and robots are never used against fellow Human beings.

  • @gregolsen7102
    @gregolsen7102 4 years ago +3

    Like when we were told computers would make life easier and simpler, still waiting! :-)

  • @everythingunderthesun2273
    @everythingunderthesun2273 3 years ago +2

    AI is our worst nightmare.

    • @bb-gb7jv
      @bb-gb7jv 3 years ago

      So were computers a few years ago

  • @TheBossMan1453
    @TheBossMan1453 3 years ago +3

    Can't stop progress, you can only fall behind

  • @ndenise3460
    @ndenise3460 1 day ago

    Too late, the genie is out of the bottle

  • @stevesewall
    @stevesewall 4 years ago

    Ban 'em.

  • @gdicommando4456
    @gdicommando4456 4 years ago

    FUCK YES AUTONOMOUS NUKES!!!! SEE HOW FAST HUMANITY LASTS THEN!!!

  • @Christian-Maxwell.
    @Christian-Maxwell. 3 years ago

    Yes ! 😰 Let us be on our O.W.N* For good, 😁 & for Goog.. (*Own Worst Nightmare) ! 😓

  • @gregolsen7102
    @gregolsen7102 4 years ago +1

    Too bad the negatives weren't considered with dynamite and nuclear weapons beforehand! :-(

  • @ntsa755
    @ntsa755 4 years ago

    What can I say, for me this speaks more in favor of autonomous weapons since, you know, human judgement tends to fail and a machine's doesn't; if a machine does fail, it is always due to a previous human failure. Also, countermeasures to this are extremely simple: a focused EMP, laser point-defence, or just a counter AI.

  • @dunn0r
    @dunn0r 3 years ago

    Interesting how many Chinese and Russian researchers speak up against this. Oh. Wait.

  • @Ludens93
    @Ludens93 7 months ago

    I agree. There should be a worldwide ban on lethal autonomous weapons.

  • @llgoldstein2710
    @llgoldstein2710 4 years ago +2

    Adding in the Three Laws would be a good start. Asimov was right.

    • @noahway13
      @noahway13 4 years ago +1

      Are you joking? I can't tell. You think everyone will abide by that?

  • @theSpicyHam
    @theSpicyHam 4 years ago

    maybe ban the virus'es rather some of first

  • @teliosausdenwaldern1033
    @teliosausdenwaldern1033 4 years ago +3

    Agreed. Autonomous weapons should be banned.

  • @rudiwiedemann8173
    @rudiwiedemann8173 3 years ago

    Great! When they banned guns in Morton Grove village, the murder rate went up over 5000%! How did THAT work out for the locals?

    • @TorbenRudgaard
      @TorbenRudgaard 3 years ago +1

      When they banned guns in Japan, the murder rate went to 5-6 per year out of 130 MILLION people

  • @shobatina9726
    @shobatina9726 4 years ago

    JOEL VERSE 2:8. ACCURATE BIBLE PROPHECY

    • @TruAnRksT
      @TruAnRksT 4 years ago

      Fuck your fictional bible. Go get your "the end is near" sign and go stand on the corner moron.

    • @TruAnRksT
      @TruAnRksT 4 years ago +1

      @ArabicObsessions The only thing that happened is that your brain got seriously damaged by a steady diet of that garbage from birth and now you can't tell fiction from reality.

    • @TruAnRksT
      @TruAnRksT 4 years ago

      @ArabicObsessions You voted for bush didn't you?

    • @TruAnRksT
      @TruAnRksT 4 years ago

      @ArabicObsessions
      "A world that has been thoroughly permeated by the structures of the social order, a world that so overpowers every individual that scarcely any option remains but to accept it on its own terms ... reproduces itself incessantly and disastrously. What people have forced upon them by a boundless apparatus, which they themselves constitute and which they are locked into, virtually eliminates all natural elements and becomes “nature” to them."
      Theodor Adorno, Critical Models (1998)

    • @TruAnRksT
      @TruAnRksT 4 years ago

      @ArabicObsessions Yeah, so what! The old Jewish testament is even more fake and fictional than the King James version.
      But I get what you say there (not that it's really appropriate to the discussion), that it is something of a universal truth. Natural human circumstances and ways of thinking, the universal truths that guide us, have (should have) nothing to do with religion, except that they are appropriated, bastardized, and used for religious purposes by con men!

  • @rlwieneke-cf3xq
    @rlwieneke-cf3xq 5 years ago +1

    How LIBTARDS see the world: "We will ban Killer Robots and everybody will hold hands and sing Kum Ba Yah and there will be fuzzy bunnies for everybody to pet".......... How Reality will see the world: China: "Release the Killer Drones and KILL everybody that doesn't support our Totalitarian Police State!!!!!!!!!"

    • @brackets6127
      @brackets6127 5 years ago

      Yea sure mate, ruclips.net/video/G3hbtM_NJ0s/видео.html

    • @noahway13
      @noahway13 4 years ago

      You're a dickhead.

    • @TruAnRksT
      @TruAnRksT 4 years ago

      China's people are doing surprisingly well these days and like their .gov. And there is no housing shortage or homeless people either.

    • @RussetfurTv
      @RussetfurTv 2 months ago

      @@TruAnRksT Could I ask where you found that information?

  • @viswagsena108
    @viswagsena108 5 years ago

    SAVE EARTH PLANET-LIFE SUPPORT BELOW IONOSPHERE..EVOLVE SPACE BABY CONCEPT. Unity of Consciousness-OM-Science,religion,Philosophy-My book 2002 [revised from 1997]Intelligent Humanity cannot end-up under perils of ignorance- PRIDHVEEM SHANTIH

  • @IonorReasSpamGenerator
    @IonorReasSpamGenerator 5 months ago

    An international entity like the United Nations should develop a testing range for unmanned autonomous systems, where any nation, including North Korea, can bring its autonomous weapons systems for testing and qualification. Their ability to terminate only what they should, without collateral damage, would be thoroughly tested in simulated weather conditions, dust, and smoke, together with their ability to recognize simulated combatants within artificial crowds and dispose of them without causing harm to civilians.
    So, in fact, autonomous weapons systems should be tested by an international body in the same manner as cars claiming to be fully self-driving should be tested before being allowed to be promoted as having such a feature.
    While it's nice that people think they can stop the arrival of autonomous weapons systems by protests, the ugly truth is that these weapons systems will sooner or later be introduced, because nations without them will be on the short list to become the next Tibet. Thus, instead of wasting time on protests that can't achieve much beyond more thorough testing standards, we should be proactive and create infrastructure that forces progress through competition, making autonomous weapons at least as accurate in their judgment as the human combatants currently operating vehicles, drones, and their own bodies on the battlefield.
    This, of course, without using such international combat-AI certification infrastructure to promote the weapons of a certain nation at the expense of others, which would inevitably result in these tests not being taken seriously, just as international laws are not taken seriously nowadays precisely because of preferential treatment for some friendly nations like the US or Israel at the expense of others. Such preferential treatment would be used as an excuse for not trying to meet the highest standards for combat AI, even though meeting them would improve a nation's chances on the international weapons market and make such qualifications useful for internationally standardizing the required level of sophistication in autonomous weapons systems, further reducing collateral damage beyond what is achievable by humans.
    People, from soldiers and commanders to leaders of nations, often choose self-survival and self-interest over the greater good, like sending drone strikes to deal with high-value enemy combatants even when this comes at the expense of some locals, a trade-off that military leadership will happily hide from the public to suppress negative responses, as the whistleblowers of the War on Terror clearly showed. Considering that not so long ago the US military was recruiting drone operators from video gamers, who then suffered trauma from what they did in their work (disposing of people with the push of a button purely because of some metadata and questionable local informants with self-interests that often proved incorrect), I have no doubt that one day machines will do a far better job of correct judgment than humans, especially compared to soldiers on the battlefield exposed to unbelievable levels of stress that, together with survival instinct and various biases, clouds their judgment, at least at the beginning of their careers...