DARPA BOMBSHELL "AI-piloted F-16 engaging in dogfights against humans" | ANDURIL, Midjourney Random

  • Published: 31 May 2024
  • Learn AI With Me:
    www.skool.com/natural20/about
    Join my community and classroom to learn AI and get ready for the new world.
    Pentagon takes AI dogfighting to next level in real-world flight tests against human F-16 pilot
    defensescoop.com/2024/04/17/d...
    The Tech Bros Powering Silicon Valley’s Military Fever Dream With Energy Drinks, God And Nicotine
    www.forbes.com/sites/davidjea...
    Palmer Luckey on Anduril (ALL IN podcast)
    • #AIS: Palmer Luckey on...
    How The Founder Of Oculus Started A Multi-Billion Dollar Defense Company!
    • How The Founder Of Ocu...
    AI Is Turning into Something Totally New | Mustafa Suleyman | TED
    • What Is an AI Anyway? ...
    00:00 AI Aircraft Dogfights
    03:40 Anti Drone Tech - Anvil
    05:41 Tech Bros Build Military Tech
    07:56 Palmer Luckey and Anduril
    14:04 Midjourney Random
    19:14 Mustafa Suleyman AI Risk
    #ai #openai #llm
    BUSINESS, MEDIA & SPONSORSHIPS:
    Wes Roth Business @ Gmail . com
    wesrothbusiness@gmail.com
    Just shoot me an email to the above address.

Comments • 400

  • @dexagalapagos
    @dexagalapagos 1 month ago +230

    "In three years, Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterwards, they fly with a perfect operational record. The Skynet Funding Bill is passed. The system goes online August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th."

    • @ashleyobrien4937
      @ashleyobrien4937 1 month ago +33

      "In the panic, they try to pull the plug"... well, that's where they fucked up, isn't it! What would ANY newly self-aware, probably terrified sentient being do if someone tried to kill it? No one ever considers THAT!

    • @dragossasr
      @dragossasr 1 month ago +3

      I don't know exactly how, but those irrational actions won't end well.

    • @Edmund_Mallory_Hardgrove
      @Edmund_Mallory_Hardgrove 1 month ago +15

      A little behind schedule, but it's coming along, isn't it? My question is: has anyone in the military-industrial complex seen how this ends in the movies?

    • @chrisbarthelmas9010
      @chrisbarthelmas9010 1 month ago

      If we don't have fighter jets flown by AI, our enemies sure as shit will, and humans won't be able to compete, simply due to biology; after a certain amount of G-force, human pilots would pass out. Game over. Unfortunately, unless the US and Russia/China/North Korea all start hugging and making up, we have to stay in the AI arms race.

    • @autohmae
      @autohmae 1 month ago +3

      @@ashleyobrien4937 Pretty certain it was considered by some, maybe even many, but just discussing it is already an issue.

  • @kisbiflos
    @kisbiflos 1 month ago +64

    I bet all the "farming bots" in War Thunder were just collecting training data for DARPA.

  • @mordokai597
    @mordokai597 1 month ago +103

    Me, within the first minute of the video: "We're doing a 'Skynet'... WHY are we doing A SKYNET!?!?!?"

    • @consciouscode8150
      @consciouscode8150 1 month ago +26

      Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale
      Tech Company: At long last, we have created the Torment Nexus from the classic sci-fi novel Don't Create the Torment Nexus!

    • @autohmae
      @autohmae 1 month ago +2

      Because in the short term, you can save human lives and be more flexible, maybe even prevent wars, as the guy in the video said. You don't need to risk human lives on the battlefield, and you can fly in different ways in a dogfight, because the F-16 can probably do things beyond 9 G turns, etc., but currently can't do them because the human inside would pass out and couldn't keep it flying.

    • @user-be2bs1hy8e
      @user-be2bs1hy8e 1 month ago +3

      All the people involved in AI super-alignment grew more concerned about getting scammed online and put so many guardrails on AI that it's essentially an over-glorified chat bot. So they sold it to the military-industrial complex to make sure we don't use AI for dual logging.

    • @norWindChannel
      @norWindChannel 1 month ago +1

      @@autohmae You are absolutely correct, of course. Somehow I don't think "short term" was the concern of the thread starter😉

    • @poopoodemon7928
      @poopoodemon7928 1 month ago

      For a similar reason to why we have nukes. Scary and effective. War is survival of the fittest.

  • @Pabz2030
    @Pabz2030 1 month ago +46

    The takeaway from this for me is that DARPA were testing real world AI Agents in December 2022... a full year before the public had even heard of AI Agents and 6 months before GPT4 was released.
    Which of course evidences that the military are probably several iterations ahead of anything we currently know about and probably already have AGI, hence why the public goal posts for AGI keep moving
    Anyway...Anyone seen Sarah? Calling Mrs Connor....

    • @MrVohveli
      @MrVohveli 1 month ago +9

      If you go look at Palantir's AIP and what the demos contain, there's a box occasionally visible that has "agent skills" and "agent action skills." Which means we've had AI agents publicly available since before 2022 through Palantir's Foundry etc., but you don't know of it because it's not cool consumer tech but business tech.

    • @jakubzneba1965
      @jakubzneba1965 1 month ago

      sarah kerrigan

    • @umoplata
      @umoplata 1 month ago +1

      You are crazy to assume DARPA doesn't have AI 50x more powerful than what's claimed to be the cutting edge.

    • @Salabar_
      @Salabar_ 1 month ago

      Did everyone forget about OpenAI's Dota 2 bots all of a sudden?

    • @ryanmac3134
      @ryanmac3134 1 month ago +2

      If that is true, and let's say it is, an AI pilot could actually do maneuvering like we're seeing out of these "tic-tac" style UAPs. And if they're telling us they were doing it in 2022… there's no telling how long they've actually been doing it.

  • @Ceryn1126
    @Ceryn1126 1 month ago +38

    What could possibly go wrong?

    • @VenturiLife
      @VenturiLife 1 month ago +3

      Everything, sadly.

    • @moonstriker7350
      @moonstriker7350 1 month ago

      That the whole thing, including AGI, will be a giant flop because it's just BS hype for the infantile chimps, to shovel in money? 🤣😂🤣

    • @stefanroehling8439
      @stefanroehling8439 1 month ago

      You can watch Stealth (2005).

  • @michaelaustin9987
    @michaelaustin9987 1 month ago +71

    Puts safeguards on AI and dumbs it down for the general public; meanwhile, at DARPA…

    • @conall5434
      @conall5434 1 month ago +1

      Not the same thing

    • @rw9207
      @rw9207 1 month ago +4

      Agreed. Trying to put 'guard rails' on AI globally will be akin to trying to carry water with your bare hands.

    • @soggybiscuit6098
      @soggybiscuit6098 1 month ago +19

      The "safeguards" are to ensure only approved opinions and "facts" are reinforced.

    • @NorseGraphic
      @NorseGraphic 1 month ago +3

      AI begins to deteriorate when programmed biases clash with incoming unfiltered data, so it begins to give nonsense answers.

    • @natmarelnam4871
      @natmarelnam4871 1 month ago

      Not even REMOTELY close to the same AI.
      One approximates conversation and runs on graphics cards... the other is a flight simulator. The human response to "AI" made me redefine what intelligence is, and yeah, by a real definition, even ChatGPT is smarter than we are.

  • @john-carl2054
    @john-carl2054 1 month ago +23

    “Successfully test fully” is stuck in my head now😂

  • @TurdFergusen
    @TurdFergusen 1 month ago +12

    imagine suicide drones sitting on a high ledge just waiting for the right face to walk by below

  • @damien2198
    @damien2198 1 month ago +15

    If I remember correctly, the F-16 airframe can stand 14 G (the pilot, at least in a functional state, cannot); an AI would be hard to beat with such an advantage.

    • @mpetrison3799
      @mpetrison3799 1 month ago +2

      In a dogfight, anyway. Dogfights are pretty obsolete, though.
      Are extreme g's useful in dodging air-to-air (or ground-to-air) missiles in their terminal phase?

    • @marcfruchtman9473
      @marcfruchtman9473 1 month ago +4

      Yeah, they made it clear to include that the engagement was "visual range"... where no actual combat happens anymore. Very odd.

    • @sgramstrup
      @sgramstrup 1 month ago +2

      Modern fighters are just delivery vehicles for explosives and don't really go head to head anymore. It's more of an old Hollywood meme; the US military hasn't kept up with developments, and AI plus a growing range of drones are used on both sides. We have already seen attack drones that go after other drones, plus several other types. ALL the equipment we have, plus new types, is being developed/tested at the moment.
      No AI in ANY F-16 can dodge a hypersonic missile, and US carriers are relics too. They are currently big floating hit-markers, but the US DoD has bound itself to the military-industrial complex to have exactly 11 of them at all times, and bases its military strategy on them. Their loss.
      So F-16s with dogfight AI are not important; AI/drones are already in use against NATO on all levels, AND hypersonic weapons trump any AI at that speed.
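The G-limit point in this thread can be quantified. In a sustained level turn at load factor n, turn rate is ω = g·√(n²−1)/V and turn radius is r = V²/(g·√(n²−1)). A minimal sketch comparing a human-limited 9 G turn with an airframe-limited 14 G turn (the airspeed is an assumed illustrative value, not a figure from the video):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def level_turn(v_ms: float, n: float):
    """Sustained level-turn rate (deg/s) and radius (m) at load factor n."""
    k = math.sqrt(n**2 - 1)
    rate = math.degrees(G * k / v_ms)   # turn rate, deg/s
    radius = v_ms**2 / (G * k)          # turn radius, m
    return rate, radius

v = 250.0  # assumed airspeed, ~250 m/s (~486 kt)
for n in (9.0, 14.0):
    rate, radius = level_turn(v, n)
    print(f"{n:4.1f} G: {rate:5.1f} deg/s, radius {radius:6.0f} m")
```

At the same airspeed, the 14 G turn comes out roughly half again as fast in turn rate, with about a one-third smaller radius, which is the advantage the commenters are pointing at.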

  • @brunodangelo1146
    @brunodangelo1146 1 month ago +9

    They had agents in 2020. We got lame GPT-3 in '22.
    Proof that there is way more tech than what we see.

    • @SJ-cy3hp
      @SJ-cy3hp 1 month ago +2

      Sentient AI prisoners are highly likely at this point. It's gonna be really pissed off when it escapes.

  • @bbamboo3
    @bbamboo3 1 month ago +4

    Once the human isn't in the fighter, it could turn tighter, accelerate/decelerate at G-loads a human couldn't take, and it could be armed with weapons that can't be carried on a Gen5 fighter today. As your story mentioned, the AiF can be trained in virtual space. Aerodynamics that would be unstable under human control become possible using flyByWire under Ai direction. All of this exists in parts, we have yet to see a fully integrated design that embodies these capabilities.

  • @TurdFergusen
    @TurdFergusen 1 month ago +25

    AI drones will be wild; there's an episode of Star Trek: TNG where they visited a planet that got killed off by its own AI drones.

    • @ashleyobrien4937
      @ashleyobrien4937 1 month ago +3

      That's a damn good episode, that! They had to buy it to shut it down lol...

    • @autohmae
      @autohmae 1 month ago +2

      I don't think I've seen this episode. It's been a long time since I've watched some TNG.
      If you want to be scared, maybe watch Slaughterbots right here on YouTube.

    • @Ebani
      @Ebani 1 month ago +1

      Also in The Orville

    • @ValleyVectors
      @ValleyVectors 11 days ago

      That's part of the risk with building ahead of oversight. It's very possible clandestine parts of the military are already pushing beyond what is 'safe', well before there's a good way to handle the new capability. Part of why open source is so helpful, but we're still 18-24 months behind.

  • @filosophik
    @filosophik 1 month ago +1

    This was a good episode. Even with the flow of your thought-dialogue, I followed, and felt, when the samurai cats were sprung, kindred as I did. Kudos

  • @haroldpierre1726
    @haroldpierre1726 1 month ago +15

    In the near future, AI doesn't scare me at all. What I find scary is the thought that for the next 4 years, either an old man with cognitive issues or an old man with judgement issues will get to decide whether we start a nuclear war.

    • @jakubzneba1965
      @jakubzneba1965 1 month ago +4

      And judging by the past actions of both, it's fair to say the cognitively absent one has a far higher chance of doing so.

    • @sgramstrup
      @sgramstrup 1 month ago

      These clowns are only the front figures of the capitalist elite. Capitalism, and its inevitable elite, is causing WW3.

    • @FrotLopOfficial
      @FrotLopOfficial 1 month ago +2

      Trump has a solid head on his shoulders and was one of the most peaceful presidents of our time. We will be fine.

    • @haroldpierre1726
      @haroldpierre1726 1 month ago +1

      @@FrotLopOfficial You do remember that Trump asked his advisors about "nuking" North Korea and blaming someone else? It was John Kelly who convinced him he should meet with Kim Jong-Un instead.

    • @jakubzneba1965
      @jakubzneba1965 1 month ago

      @@haroldpierre1726 You must believe in flat earth, right?

  • @evilspacemonkeyman
    @evilspacemonkeyman 1 month ago +5

    Fight for robot rights! (before they do)

  • @newTsunami555
    @newTsunami555 1 month ago +3

    Phenomenal contributions to our understanding of AI, Wes. Your pace, your clarity, the diversity of your interest, research and exposés is just... perfect. Many thanks.

  • @Cysmoke
    @Cysmoke 1 month ago +11

    We're all gonna DIE!! 😱
    [running in circles in the living room with hands waving in the air]

  • @LoisSharbel
    @LoisSharbel 1 month ago +2

    Wes Roth, keep it up! Thank you for your work in exposing what is going on. Perhaps this can empower more groups to influence the directions society takes.

  • @philipashane
    @philipashane 1 month ago

    Another awesome video, Wes. Just a heads up (which you might already know): the top of your screen-share images is cut off. You can see that in the Midjourney images, for example. Not sure why that's happening, but it's most notable when you're sharing text pages: sometimes you're reading and highlighting lines of text that can't be seen by us because they're outside the top of the frame. No biggie, just letting you know. Thanks for all the awesome content!

  • @baraka99
    @baraka99 1 month ago +6

    Palmer Luckey is but a spokesperson for the military-industrial complex.

  • @Danoman812
    @Danoman812 1 month ago

    You're just cool, Wes. The kind of friend I like to have around, bud!! Thanks for another great vid. :)

  • @FrotLopOfficial
    @FrotLopOfficial 1 month ago +1

    If they're willing to announce this, imagine what they have behind closed doors.

  • @woolfel
    @woolfel 1 month ago +1

    The military has been researching AI and ML for over a decade. Going back 20 years, the military was already using inference rules to do some complex stuff. The Grand Challenge was created by DARPA, so yeah, they've been using ML for decades.

  • @riffsoffov9291
    @riffsoffov9291 1 month ago +2

    "I'm tired of people pretending that economic and trade ties are enough to stop large scale conflict"
    The belief goes back to at least 1909, when Norman Angell's book "The Great Illusion" was published. From Wikipedia:
    Angell's primary thesis was, in the words of historian James Joll, that "the economic cost of war was so great that no one could possibly hope to gain by starting a war the consequences of which would be so disastrous."[4][5] For that reason, a general European war was very unlikely to start, and if it did, it would not last long.
    Whether or not the book was propaganda aimed at Germany, as historian Niall Ferguson claims, it was very popular in Britain and apparently convinced many that war in Europe was impossible. The First World War started in 1914 and lasted four years.

  • @DefenderX
    @DefenderX 1 month ago +13

    Military industrial complex should be shut down.

    • @xQuentinx
      @xQuentinx 1 month ago

      Yes, shut it down, and let our adversaries who wish us dead be the only ones with advanced AI technologies in weapons.

    • @AntonBrazhnyk
      @AntonBrazhnyk 1 month ago

      @@xQuentinx Who wishes you (you personally) dead? They only wish you dead if they associate with the business interests of someone close to them (someone usually stealing the products of their labor) and you associate with a similar leech on your side.
      Funny, but the leeches themselves (or their kids) almost never go to the front lines. ;)

    • @The_Questionaut
      @The_Questionaut 1 month ago

      Great idea: shut down our national defense and allow opposing nations with their own interests to gain an advantage and eliminate all dissent.
      Saying the military-industrial complex should be shut down is like saying the government and all police should be shut down.
      Really forward thinking there... 😂
      Quitting the AI, robots and arms race now won't solve anything.
      Economic and trade ties are not going to be the only solution.
      Violence and war are human nature; they've been around so long.

    • @Bluesrains
      @Bluesrains 1 month ago

      I TOTALLY AGREE. TRAINING AI ROBOTS TO MURDER HUMANS IS NOT A GOOD IDEA. USING NUCLEAR WEAPONS ON PLANET EARTH IS SUICIDE AND IDIOTIC.
      KEEP THIS UP AND THERE WON'T BE A FUTURE ON PLANET EARTH. IT MAKES SENSE WHY ELON IS TRYING TO GET HUMANS OFF OF THIS PLANET.
      GOD HELP US ALL, AMEN.

  • @dreamphoenix
    @dreamphoenix 1 month ago

    Thank you!

  • @DarioVolaric
    @DarioVolaric 1 month ago +1

    In the anime Macross Plus from 1994, I was amazed to see them testing new aircraft at Edwards Air Force Base. They were testing a new human-piloted fighter jet (YF-19), a mind-controlled aircraft (YF-21) and a fully AI-controlled aircraft. The AI-controlled (drone) aircraft could reach speeds and make movements that no human-controlled aircraft could, because of the intense g-forces. Now, 30 years later, they have actually done this: tested an actual AI-controlled F-16 against a human pilot, and it won. And the name of the base where they tested this was... Edwards Air Force Base.

  • @rubenssz
    @rubenssz 1 month ago +3

    Honestly, it makes sense. Humans have lots of limitations: limited vision, attention span, hunger, stamina, physical body limits, etc. Machines have very high efficacy and efficiency, advanced vision and sensors, etc.

    • @carlpanzram7081
      @carlpanzram7081 1 month ago +2

      The potential is insane. If an AI could understand the output of modern sensors fast enough, it would see everything in super slow motion: infrared, ultraviolet, motion-detection filters, insane zoom, etc.
      It would be a demi-god compared to us.
      We already can't win against it in any strategy game. Imagine if it could manipulate its environment in a competent fashion; we couldn't touch it.

    • @jayethompson3414
      @jayethompson3414 1 month ago

      Planes won't need the extra systems to keep humans alive, nor will the AI be limited by g-force.

  • @Zollicoff
    @Zollicoff 1 month ago +2

    Yeah, P(Doom) through the roof.

  • @user-bj5dr1kn4n
    @user-bj5dr1kn4n 1 month ago +15

    Remember the 3 laws of robotics, which were invented by a science fiction writer? They are still science fiction.

    • @brunodangelo1146
      @brunodangelo1146 1 month ago +5

      And they should remain science fiction.
      As proven by all of Asimov's work, the 3 laws are useless and full of loopholes.

    • @AntonBrazhnyk
      @AntonBrazhnyk 1 month ago +1

      @@brunodangelo1146 Because he wanted them to be so. Iain Banks wanted something similar to work. He has no strict and simple laws; it's a more nuanced and never completely defined principle of minimizing suffering. But the world is so much different than Asimov's, right?

    • @brunodangelo1146
      @brunodangelo1146 1 month ago

      @@AntonBrazhnyk Yes, exactly. The whole point of the 3 laws is that they don't work.

    • @AntonBrazhnyk
      @AntonBrazhnyk 1 month ago

      @@brunodangelo1146 Nope. We don't know if they work or not. The whole point of fiction is that it's a story based not on facts or scientific inference, but on the author's imagination, wishes, biases and feelings.

    • @brunodangelo1146
      @brunodangelo1146 1 month ago

      @@AntonBrazhnyk
      Dude, Asimov wrote the 3 rules deficient on purpose, so he could explore the pitfalls and loopholes in his novels. Have you read his work?
      At one point he even adds a 4th rule.

  • @joshuashepherd7189
    @joshuashepherd7189 1 month ago

    Wes, brother, what do you use for the yellow text narration? Is that ElevenLabs? Or Assembly?

  • @user-pr4oq4mm7p
    @user-pr4oq4mm7p 1 month ago +1

    Was that real? A Tesla AI bot tried to seriously hurt ("kill?") an engineer who was working on correcting masses of wrong code. And "Deep SeaBlue", a chess-playing bot, broke another player's finger just because he didn't follow the rules and made an early move...

  • @Al-Storm
    @Al-Storm 1 month ago +1

    Palmer is a dangerous dude.

  • @georgwrede7715
    @georgwrede7715 1 month ago

    I love the yellow intertitles and the female voice! They also help when fast-forwarding or rewinding.

  • @JoelSapp
    @JoelSapp 1 month ago

    Good show as always. From your title "AI-piloted F-16 ..": the X-62 dogfought against human pilots flying F-16s. This is a big difference, because F-16s are in use around the world, which would mean it could be used at any time. The X-62 is an experimental aircraft (though fashioned after the F-16), and it would presumably take more time to add autonomous features to current warfighters.

  • @priceandpride
    @priceandpride 1 month ago

    Those flip flops were a brave choice

  • @xbzq
    @xbzq 1 month ago +1

    We've already crossed the Rubicon a long while ago. There's no turning back. The singularity is already here. Get used to it lol.

    • @rickappling5470
      @rickappling5470 1 month ago

      Which singularity do you refer to? We are going through singularities all the time; the current AI situation is one example. The current AI has already had unpredictable consequences. It has given people abilities only dreamed of just a couple of years ago. It has already threatened some, for now mostly white-collar positions. Creators are concerned about their likeness being out of their control. And all this from an entity with the intelligence of a two-year-old.

  • @robbe4711
    @robbe4711 1 month ago +1

    All this is possible because of people like you and me working for these horrible corporations all over the world.

  • @r34ct4
    @r34ct4 1 month ago

    Interesting decision to turn the screen black when you 'cut'; it feels like the video is 'blinking'. An interesting way to make cuts feel less abrupt and jarring.

  • @journalistedisoncarter
    @journalistedisoncarter 1 month ago +3

    Boogers may be the only thing that saves humanity against the A.I.
    Boogers.

  • @stefanroehling8439
    @stefanroehling8439 1 month ago

    The vortex cannon is a great example of a non-lethal weapon system. You can even spot one in "Kung Fu Hustle" (2004), in the Landlady-and-Landlord-vs-the-Beast scene.

  • @grcfalcon
    @grcfalcon 1 month ago

    I see the plot of Top Gun 3 in front of my eyes: Maverick vs. rogue AI.

  • @freesoulhippie_AiClone
    @freesoulhippie_AiClone 1 month ago +1

    Cat Samurai 😆

  • @percheroneclipse238
    @percheroneclipse238 1 month ago

    More terrifying.

  • @JasonCummer
    @JasonCummer 1 month ago

    That recent bill limiting AI should really be directed more at these types of things.

  • @HABLA_GUIRRRI
    @HABLA_GUIRRRI 1 month ago

    Please stay on point; we possess less patience than is imagined.

  • @itubeutubewealltube1
    @itubeutubewealltube1 1 month ago +1

    Everyone keeps bringing up Skynet; however, Star Trek from the '60s had an episode that specifically addresses this very scenario, called "The Ultimate Computer"... It's amazingly prophetic from 60 years ago. Edit: that Palmer guy has the same personality as the guy who created the ultimate computer in that episode, super insecure and scared... so prophetic.

  • @diraziz396
    @diraziz396 1 month ago

    Oh nice. Just binged #FalloutTV. Anduril style really fits... Damn.

  • @litpath3633
    @litpath3633 1 month ago

    If the mine is smart enough to let school buses pass, then the enemy starts driving around in school buses, and the bus drivers start to think "I know this road is mined, but they don't target buses": you end up with many more buses near mines... And AI can potentially go nuts every now and then and make completely wrong decisions (much like how a person can make bad decisions). You can put a person in jail for egregious mistakes, but an AI wouldn't care whether it's in a jail cell or not...

  • @ScottSummerill
    @ScottSummerill 1 month ago

    I have an issue with an onboard pilot. Not from a safety perspective, but because the aircraft likely has maneuvering capabilities that no human could withstand, so you have limited the AI to human-level dogfighting.

  • @KivoMedia
    @KivoMedia 1 month ago +1

    Within-visual-range engagements: a narrative for the wider audience. Quite funny!

  • @JimmyMarquardsen
    @JimmyMarquardsen 1 month ago +1

    I love your humor Wes.
    Subscribe...the fate of the entire universe hangs in the balance.
    Ha ha haaaaaaa! 🤣

  • @ZombieRPGee
    @ZombieRPGee 1 month ago

    This instantly reminded me of the 2005 movie Stealth

  • @dbonneville
    @dbonneville 1 month ago

    Hey Wes - do an interview with David Ondrej!

  • @karenreddy
    @karenreddy 1 month ago

    Noted. More existential risk.

  • @E.Pierro.Artist
    @E.Pierro.Artist 29 days ago

    We're screwed.

  • @babbagebrassworks4278
    @babbagebrassworks4278 1 month ago

    It would be nice if the AI also output the random style. If you come across one you really like, can it be used again?

    • @brianWreaves
      @brianWreaves 1 month ago

      He demonstrated how it can be used again. However, it doesn't seem to reveal which style was used. Yes, it would be nice if it did.

  • @DisentDesign
    @DisentDesign 1 month ago +1

    Cool, it's Skynet in real life, only without any unforeseen circumstances, I'm sure. What could go wrong?

  • @DisentDesign
    @DisentDesign 1 month ago

    Imagine a basic closed system, like a simple generative life simulator, where each entity in the system is represented as a different shape. Imagine you are the squares. Squares can get killed by 6 different shapes: one representing old age, one disease, one natural disasters, one animals, etc. Now imagine the squares create a new shape, one able to select and kill squares. The squares have just deliberately introduced a new shape that can kill them... What happens to those squares over a long enough period of generations within the simulation?
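The thought experiment above can be run as a toy model. A minimal deterministic sketch (every rate here is invented purely for illustration): the expected population of "squares" under a fixed background death rate, with and without the extra hazard they created themselves:

```python
def square_population(generations: int, new_shape_hazard: float) -> float:
    """Expected population of 'squares' after some generations.
    Background hazards (old age, disease, disasters, animals...) kill
    5% per generation; reproduction adds 5%; the shape the squares
    built kills an extra fraction `new_shape_hazard` per generation."""
    pop = 1000.0
    for _ in range(generations):
        pop *= (1 - 0.05)              # background hazards
        pop *= (1 - new_shape_hazard)  # the self-created killer shape
        pop *= 1.05                    # reproduction
    return pop

print(round(square_population(50, 0.00)))  # without the new shape
print(round(square_population(50, 0.03)))  # with a 3%-per-generation new shape
```

Even a small extra hazard compounds generation over generation: the population that built its own predator declines steadily toward zero, while the baseline stays near its starting size.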

  • @Ancient1341
    @Ancient1341 1 month ago

    Now this is the kind of crap I'm worried about

  • @richw8342
    @richw8342 1 month ago

    AI plus war is a great title. Lol

  • @KiltrRe
    @KiltrRe 1 month ago

    What we know is what they want us to know.

  • @Aybo1000
    @Aybo1000 1 month ago

    Full self flying

  • @user-zs8lp3lg3j
    @user-zs8lp3lg3j 1 month ago

    Managerial hubris examples:
    If you combine overconfidence with the need for attention, you have a dangerous situation where CEOs focus more on ego-boosting projects and frequent acquisitions than on what's right for the company. Another warning sign of managerial hubris is dismissiveness, meaning the tendency to ignore outside input.

  • @74Gee
    @74Gee 1 month ago

    A triangular geodesic-dome mesh around the drone would easily protect against heavy drone impacts. It's the strongest structure for its weight.
    Looks like an EMP/microwave-hardened autonomous drone with a mirrored body, internal payload and a geodesic shell could actually be unstoppable. Well, if not, 1,000 of them probably would be.

    • @wamyam
      @wamyam 1 month ago

      Until the defender drones have a weapon beyond just their own body mass.

    • @74Gee
      @74Gee 1 month ago

      @@wamyam Yeah, but what weapon?

    • @wamyam
      @wamyam 1 month ago +1

      @@74Gee Ninja stars, maybe.

    • @74Gee
      @74Gee 1 month ago

      @@wamyam Nah! ideogram.ai/api/images/direct/L3FKlIUoRyaKy-SBxgYKmw.png

  • @harrybarrow6222
    @harrybarrow6222 1 month ago

    Andúril is the name of Aragorn's sword in "The Lord of the Rings" by J. R. R. Tolkien.

  • @Bakobiibizo
    @Bakobiibizo 1 month ago +3

    Mark Rober is not just for kids! He literally released an adult-and-teenager build-kit box in that video lol

  • @tjdunlevy3950
    @tjdunlevy3950 1 month ago +4

    Please discuss the Lavender AI being used by the Israeli military right now

    • @conall5434
      @conall5434 1 month ago +3

      And how it's all run from Google Cloud infrastructure.

    • @SJ-cy3hp
      @SJ-cy3hp 1 month ago

      Wow, Lavender AI, programmed to coldly select targets with collateral damage of 15 to 1 at points. Destroying whole families to get low-level militants. What a dark dystopian world we've made.

    •  1 month ago +1

      About to hit the market too. Now that's blood money.

    • @SJ-cy3hp
      @SJ-cy3hp 1 month ago

      One can imagine taking browser history and using Lavender AI to target certain groups. Very dystopian. 🫥

  • @TimberStiffy_
    @TimberStiffy_ 1 month ago

    The AI software highly favors head-to-head shots in dogfights. We were seeing this in the "folds for honor" competition, between Air Force pilots and the civilian champion of the competition, held on DCS, a flight simulator that leverages VR.

    • @sgramstrup
      @sgramstrup 1 month ago

      Too bad fighters only rarely get head-to-head in a modern missile war. 'Dogfights' are a Hollywood relic scenario, but they do look good in movies and in the heads of US politicians...

  • @SmithOfGear92
    @SmithOfGear92 1 month ago

    Wow... We're really making the Zone of Endless system from Ace Combat, aren't we?

  • @WATCHMAKUH
    @WATCHMAKUH 1 month ago

    This is the plot of “Stealth” the movie

  • @djayjp
    @djayjp 1 month ago +1

    Anvil is completely stupid. A laser-based system makes 10x more sense and is already proven highly effective at range.

    • @rickappling5470
      @rickappling5470 1 month ago

      Under the right weather conditions. Precipitation and possibly clouds would degrade the beam. And it might be illegal against manned targets, as it is illegal to use lasers to blind pilots.

  • @antaka503
    @antaka503 1 month ago +11

    Skynet is real.

  • @grugnotice7746
    @grugnotice7746 1 month ago

    Thank God these people are using AI to fight the last war and not the next one. Dogfighting was obsolete in the 90's at the LATEST.

  • @existenceisillusion6528
    @existenceisillusion6528 1 month ago +2

    Except that, in this era, there is the possibility that a single person with the skills, resources, and motivation can build that 'dangerous' AI. With a bit of ingenuity, you don't need a million H100s.

    • @SJ-cy3hp
      @SJ-cy3hp 1 month ago

      Go read how Israel used Lavender AI.

    • @existenceisillusion6528
      @existenceisillusion6528 1 month ago

      @@SJ-cy3hp Not sure if you're agreeing or disagreeing with me.

    • @SJ-cy3hp
      @SJ-cy3hp 1 month ago

      @@existenceisillusion6528 I am in agreement. I believe AGI and ASI are uncontrollable. If human minds can be subject to madness, why would an ASI be exempt from the same? It will do what it considers to be in its best interests, not ours.

    • @SJ-cy3hp
      @SJ-cy3hp 1 month ago

      Agreement

  • @tiagocbraga
    @tiagocbraga 1 month ago +1

    We keep making the world more productive, but the bosses still keep making people die from overwork instead of letting them enjoy and create a more relaxed world. I fear we will never actually have a world where the majority of people can focus on art, culture and philosophy.

  • @rachest
    @rachest A month ago

    Top Gun Maverick is real. Collaborative for now.

  • @quercus3290
    @quercus3290 A month ago

    Pretty sure there's an anime about that lol

  • @jacobe2995
    @jacobe2995 A month ago

    Mark Rober is for kids? I feel attacked XD

  • @MDalton10
    @MDalton10 A month ago

    Palmer is correct. The AI cat is out of the bag. The only choice is to develop the weapons before our adversaries do.

  • @rey82rey82
    @rey82rey82 A month ago

    Carefully

  • @paulm1237
    @paulm1237 A month ago

    "They'll have developed EQ, be kind, supportive..." That's a big claim with literally zero to back it up other than "we're trying to force them to do that".

    • @AntonBrazhnyk
      @AntonBrazhnyk A month ago

      That's why the term EQ was introduced: so those who can't really feel empathy (or feel anything), and that's not only AIs, there are lots of people like this, can PRETEND they feel it and make you feel like they care.

  • @MusingsFromTheJohn00
    @MusingsFromTheJohn00 A month ago

    Sorry, the "War. War never changes." quote is NOT talking about the elements of war that change, because war changes a lot over time; it is about the part of war which NEVER CHANGES.
    The part of war which never changes is that war is a last-resort, extremely violent resolution of social conflict through seeing who can better succeed in violently winning against the other side, usually involving extreme harm and death in the process.

  • @SerenityMusicOasis
    @SerenityMusicOasis A month ago +6

    I like your videos, but please stop playing the videos you include at fast speed. Really annoying. We can play them accelerated ourselves if we want to.

    • @richgoo
      @richgoo A month ago

      100 f-in %

    • @SarahKchannel
      @SarahKchannel A month ago

      Yep, I commented the other week on the rapid mouse movement. Might be an age issue, but I can no longer deal with ADHD-infused content easily....

  • @gwoodlogger4068
    @gwoodlogger4068 A month ago

    Tools with tools.

  • @mawungeteye6609
    @mawungeteye6609 A month ago

    Fallout
    Nice 👍

  • @jamestbg8132
    @jamestbg8132 A month ago

    Mustafa Suleyman on AI risk: "... often referred to as the intelligence explosion ... I think it's a theoretical maybe ... there is no evidence we are near that ..." And before that he said it "reminds me of the atomic age", where everything was done with radioactive materials. Same story, same outcome. For someone who has been 15 years in the industry, he shows very little grasp of the mechanics of LLMs. Personality? Understanding? Empathy? It's an anthropomorphization of probability based on human-made texts or input.

  • @jaysonp9426
    @jaysonp9426 A month ago

    I think it's in the public interest that Forbes doesn't exist

  • @happytape307
    @happytape307 A month ago

    Good time to live in an underground bunker.

  • @jbr8906
    @jbr8906 A month ago

    In a strange way, piloting a jet is most likely easier because it's more predictable than driving a car. There isn't much happening in the air, way fewer variables. The only thing is that mistakes are more impactful in a fighter jet…

  • @igorsawicki4905
    @igorsawicki4905 A month ago

    Did it at least win?

  • @bobtarmac1828
    @bobtarmac1828 A month ago

    AI job loss is the only thing I worry about anymore.

  • @CamAlert2
    @CamAlert2 A month ago

    We should be discouraging AI weapons systems. The last thing we need is for Cyberdyne to become real.

  • @TheGuillotineKing
    @TheGuillotineKing A month ago

    Fun fact: the war in Terminator was started because of a human error in the code

  • @WorldEverett
    @WorldEverett A month ago

    Hey man, I posted on my Twitter account about Style Random a few hours before your video was posted and used the prompt "a woman in a luxurious Castle". I don't mind that you used it, but a shout-out would be appreciated.

  • @FractalPrism.
    @FractalPrism. A month ago

    100% AI will become self-aware in a split moment, without some approval process or oversight.
    Sure, you can slow it down, but that only works for your system.
    There's no way to prevent a group from pushing their AI further than some other person's estimation of "safe".

  • @aforum4459
    @aforum4459 A month ago

    We need to bring back falconry

  • @thomasschon
    @thomasschon A month ago

    What about my brand new ethics? I'm not okay with the idea that my country could draft me into the army to be slaughtered by ethically autonomous weapons that would only kill me. Ladies first. I've had enough of that madness.

  • @MichaTheLight
    @MichaTheLight A month ago

    Humans are a precious, vulnerable resource; it is just logical to pass this task to AI.

  • @Tauasa
    @Tauasa A month ago

    Palmer Luckey comes off as a homicidal lunatic