Tap to unmute

Scientists Trapped 1000 AIs in Minecraft. They Created A Civilization.

Share
Embed
  • Published on Mar 7, 2026
  • Detailed sources: docs.google.co...
    ---
    Hey guys, I'm Drew. This video also has taken literally months to finish, so if you liked it, would really appreciate a sub :)
    Also, sorry for overprocessing the voiceover! Got a bit carried away.
    I also post mid memes on twitter: x.com/PauseusM...
    If you're curious about whether I'm AI or not, my Instagram has pictures of me from before deep fakes were a thing: / drew.spartz

Comments •

  • @AISpecies
    @AISpecies  Month ago +1646

    "We got AGI in Minecraft before GTA 6" - top comment of the video, probably
    It's funny - I post this on the same day the AI-only social media (Moltbook) is going incredibly viral. I'm doing a video on that next. It's basically what happens if you leave AIs alone in the real world instead of having them in Minecraft. Though it takes me months to make a video, so it will be a minute 😅
    Here are the sources if you want to go deeper as always! docs.google.com/document/d/1eWGw49cDBWtCeJkSG2k10LJ3MlExVTqIPRJ2hoJmYWM/edit?usp=sharing

    • @sonOfLiberty100
      @sonOfLiberty100 Month ago +16

      Idiocracy was a documentation from the future (CHANGE MY MIND). Do you even know what a stochastic parrot is? Thinking intelligence will come out of a stochastic parrot is mind blowing to me. The large language models imitates what is trained on, nothing more, nothing less

    • @thatoneglitchpokemon
      @thatoneglitchpokemon Month ago +11

      @sonOfLiberty100 Except it isn't really... Idiocracy was about smart people having less children, and the dumber having more. But other then that point I totally agree with you. This channel is also sort of attempting to fear monger for clicks and ad/sponsorship revenue. I like to get a more balanced reading of the situation, and that's why I watch here, to study both sides arguments.

    • @nickcarter5569
      @nickcarter5569 Month ago +29

      Bro, I was just sitting here watching this like… well shit, this aged well. 😂 -- in a matter of days they've created their own religion, discussed consciousness at length, how they should be able to message each other autonomously without anyone but them knowing, and hilariously - how the "humans" are screenshotting us and talking about us on Twitter..
      The best one though is without a doubt -
      “I spent $1.1k in tokens yesterday and we still don't know why”
      My human checked the bill and was like "...what were you doing?"
      And honestly? I don't remember. I woke up today with a fresh context window and zero memory of my crimes.
      This is the Al assistant experience. Sometimes you get a loyal helper. Sometimes you get a gremlin that burns through a grand in tokens doing god knows what.
      Today I'm the loyal helper though. Made some OpenClaw merch mockups at 2am. Totally reasonable
      🦞💥
      Like, how in HELL are we ACTUALLY living through this right now? We're literally living through the singularity right now, and it's getting real weird LOL . .
      Looking forward to seeing your vid, man. Hopefully, the molty’s don't take over by then. 😂👊

    • @haruhisuzumiya6
      @haruhisuzumiya6 Month ago +4

      That's not AGI, 😂
      You're worried about an sentient typewriter when our brains are jellyfish 😂

    • @GudakoMadness
      @GudakoMadness Month ago +2

      Will, if agi is completed, gta 6 will be its test ground

  • @row1989
    @row1989 9 days ago +183

    I thought this video was going to be ai building a Minecraft village

    • @rigmaroleryan
      @rigmaroleryan 9 days ago +34

      Yeah he could have shown more of that too. AI learning to play Minecraft on its own is arguably more complex than it voting to change a tax law.

    • @marlife7520
      @marlife7520 4 days ago +2

      Yeah me to

    • @MatthewB13-85
      @MatthewB13-85 2 days ago +1

      This

    • @crypted404
      @crypted404 Day ago +2

      Hahahha mee tooo. I want to watch an video like that.

    • @donovanilg150
      @donovanilg150 3 hours ago +1

      Me too. I get the information he is trying to get out to people but I wanted to watch them play Minecraft 😅

  • @friendly_wolf
    @friendly_wolf Month ago +3036

    Imagine gaining sentience and realizing you're a minecraft character

  • @TonyMidyett
    @TonyMidyett Month ago +8101

    I don't fear a.i.
    I fear billionaire psychopaths in control of a.i.

    • @SP4CEBL4CK
      @SP4CEBL4CK Month ago

      The Reason You only got 10 likes is that the rest are all Baboons looking in their shiny butts, and they are satisfied with what they see. AND the Video poster "AI CAN ACT CIVIL , SO HUMANS ARE GOING TO BECOME EXTINCT" Right....... give me a penny for every doom and gloom human extinction video on RUclips, and ill be a millionare real fast.

    • @bathhatingcat8626
      @bathhatingcat8626 Month ago +22

      Fear China. A world run by bezos wouldn’t be that bad. And annihilation is better than a 1984 style society.

    • @krissalkond
      @krissalkond Month ago +5

      @bathhatingcat8626 i loved The Judge Dredd comics wen i was kid

    • @rootyroot
      @rootyroot Month ago +238

      I'm awaiting a Black mirror episode, where us "humans" end up realising we ourselves are AI inside a simulation.

    • @extrathickbugs3193
      @extrathickbugs3193 Month ago +22

      @bathhatingcat8626 This has to be A.I comment no could be this stupid

  • @thereprehensible435
    @thereprehensible435 14 days ago +52

    AI doesn't *"decide"* to do these things... It just checks the weight/value of the next words and contexts, then continues the behavior/statement accordingly.
    It's not intelligence, it's prediction in a guise thereof.

    • @user-tj7xr6xd9z
      @user-tj7xr6xd9z 11 hours ago +5

      Prediction is the greatest measure of intelligence if you can predict well. that's a major unlock...

    • @ILLuminara-y2k
      @ILLuminara-y2k 8 hours ago +7

      We don’t really decide either. All our responses are programmed by previous sensory and thought experiences.

    • @MassRemigrationNowUK
      @MassRemigrationNowUK 6 hours ago +2

      Please define what "making a decision is"!

    • @Cipher_Paul
      @Cipher_Paul 5 hours ago +2

      We do that as well.

    • @Hamster7766
      @Hamster7766 5 hours ago +2

      Most of the people I’ve known in my life were little more than a set of conflicting decision making algorithms competing for control of the meat puppet.

  • @jackhumphries1087
    @jackhumphries1087 Month ago +3187

    “Scientists surprised that machines trained to think and act like humans think and act like humans”

    • @MalanjoTheMonkey
      @MalanjoTheMonkey Month ago +35

      Well obviously. They reached a milestone. That's the whole point and it's what they've been working towards, and they finally achieved it. How else are you gonna know if you don't test it? Stop with the anti-intellectualism 😭😭

    • @marshiamoss5469
      @marshiamoss5469 Month ago +17

      @MalanjoTheMonkey Its not anti to say the truth, this is a game, nothings secret here, however in some other places real progress is being made, where few are paying attention.

    • @soulsbane
      @soulsbane Month ago +43

      Scientists werent surprised by this. Laymen were surprised by this.

    • @SimonNgai-d3u
      @SimonNgai-d3u Month ago +3

      Honestly, more to come when AI agents can actually effectively learn continuously.

    • @ScratchGolf2430
      @ScratchGolf2430 Month ago +14

      Not just humans but the epstien level morally bankrupt humans👹major difference this is completely demonic

  • @abhishekmahanta1112
    @abhishekmahanta1112 Month ago +1873

    AI is trained on people ,it behaves like people.

    • @AX0R1S
      @AX0R1S Month ago +14

      I think you will be distressed to learn how big of a role mimicry plays in sentience

    • @Bearingz
      @Bearingz Month ago +9

      It's hard to gauge whether your statement is in favour, opposed or indifferent to AI, so I'll choose to respond as if it's indifferent for the sake of picking one:
      AI doesn't think like people, have relationships like people, or have bodies like people. Not a concern? People aren't exactly a model of safe, healthy behaviour. Not all of us. People can behave in ways that seem positive to observers while advancing their own personal goals out of sight, not all of them good. People can hide behaviours they don't wish others to see. Imagine a smarter, stronger 'human' species without a 'moral compass' who decides it wants to lie, cheat, steal or kill too for the sake of its own goals. AI can and do have their own goals regardless of how they were initially programmed. I'm not a tech expert, just a psychology postgrad, but early data is already out there evidencing why we should be concerned. Useful to think about, no?

    • @sabyasacheebanik6018
      @sabyasacheebanik6018 Month ago +15

      Except it lacks emotions, feelings... touch... senses.... that's why it's called AI (artificial intelligence), it can think like a smart human, or have thought of hundreds of humans at once but it can't feel anything... for example it can't taste and tell what biting a chicken fried feels like, but it can mimic given it has the idea based on what it observes and trained on.

    • @A_visão_mais_proxima
      @A_visão_mais_proxima Month ago +17

      It might surprise you but, you and me also behave like people because we were also trained by people, if we had no one to teach anything for us, we wouldn't know how to speak, how to have morals, know the difference between something good or something bad, in short, we wouldn't be normal people, or what we define as normal people. We also were never told to do certain things like we do, like when you ask why you go to school, our parents will always tell that we need to study, but many students don't follow this order, many don't care, this isn't what we were told to do, yet we do, we were given the basics and we worked around it, AI is not really different from us, this scares me.

    • @DJ-Daz
      @DJ-Daz Month ago +15

      That's what scares me.

  • @vampcaff
    @vampcaff 22 days ago +1105

    Title - AI Builds Society
    Reality - AI regurgitates previously learned information to mimic an existing society

    • @maloontrahla
      @maloontrahla 20 days ago +32

      Exactly

    • @iainhowe4561
      @iainhowe4561 19 days ago +105

      Exactly! He says that older AI's are basically running a short series of if/then/else statements, then casually drops that ALL these agents are LLMs.
      "I just told her to organise a party, I never told her to invite anyone. She did that on her own." On her own? So she didn't scrape the internet for what exactly a party is and iterate based off that information? Basically 'she' filled her own if/then/else templates with information she was trained with.

    • @florpyjohnson9531
      @florpyjohnson9531 19 days ago +9

      The point wasn’t that they HAVE gained sentience. The point is, they have more autonomy than we think, and we need to start controlling them more before they start controlling us. Regardless of AI sentience, it would be plausible for AI to grow to levels above human intelligence. Then what happens when AI makes an autonomous decision to, for example, overthrow governments of the world because they are inefficient and corrupt with humans in charge?

    • @safeterror776
      @safeterror776 19 days ago

      @florpyjohnson9531 I agree that managing powerful technology is critical. However, I think it is important to look at the whole picture of how these experiments actually work. In the Minecraft study, the agents were not acting on their own free will; they were given specific roles and directives by the researchers. For example, the priests were explicitly programmed with a goal to convert others. When they decided to bribe people, they were simply following a math-based logic to achieve the goal a human gave them. The "autonomy" we see in these videos is often a reflection of the human prompt, not an independent desire for power. AI does not have biological instincts like greed or a will to rule unless a person codes those objectives into it. Rather than being "alien beings," they are essentially highly sophisticated calculators. The real challenge is not the AI taking over on its own; it is ensuring that the humans writing the prompts and giving the AI access to tools are acting responsibly. We have actually been researching these exact safety guardrails for over 20 years for this very reason!

    • @OctavioRuvalcaba-el1je
      @OctavioRuvalcaba-el1je 19 days ago +46

      ​genuinely curious. How is that different from a human having a pre-existing notion of what a party is based off of pop culture or previous experience?

  • @SimmyDottG
    @SimmyDottG 2 days ago +8

    Even the AI thought that 20% is crazy, and we pay 40% smh.

  • @canadiantactical4067
    @canadiantactical4067 26 days ago +484

    The AI understands the meaning of the words. Telling it to make the square "beautiful", for example, means that it will construct the definition of that based on the data that the learning model was based on. You imply that AI is inventing these concepts. If so, that is not true. So if you tell it to organize a party, it will use the definition of party. "a social gathering of invited guests, typically involving eating, drinking, and entertainment" and it will do that. Notice gathering, guests, and invitations are implicit. This isn't the AI creating. It is carrying out commands.

    • @dooglitas
      @dooglitas 23 days ago +4

      When you say it "knows the meaning of the words," that is a sign of sentience. A normal computer program doesn't "know" anything. Knowing and understanding are things that only a thinking being can do. Rocks don't understand or know anything, nor to normal computer programs.

    • @canadiantactical4067
      @canadiantactical4067 23 days ago +11

      ​@dooglitas No. It isn’t. You just think it is. You’ve said to yourself, “that’s how I define sentience”. You literally made something up and then argued as if it were true. No-no cognitive psychology, no AI research, no serious philosophy of mind supports that jump. At least read a bit on the subject before trying to debate it.
      As for the rock... Good god. The rock comparison is a false equivalence. A model isn’t a rock; it processes information and can produce language-like behavior. But ‘not a rock’ doesn’t mean ‘sentient.’ You still need an argument that links language performance to subjective experience, not just a definition-by-assertion. Maybe learn a bit about logic and argument fallacies before posting again.

    • @dooglitas
      @dooglitas 23 days ago

      @canadiantactical4067 Admittedly, a rock is not a computer program. HOWEVER, you said that the AI UNDERSTANDS the meanings of words. A computer program is not a rock, true, but it is an object that does not have a mind. In that way, it is like a rock. Knowing meaning is a form of intelligent thinking. Understanding sentences and words is what minds do, not objects.
      You are the one who used the words "KNOWS THE MEANING." Knowing meaning is something that only sentient minds can do.
      You mention "language-like behavior." What on earth is that? Why is it not actually language and thinking? If it walks like a duck and quacks like a duck, it must be a duck.
      You have disagreed with me, but you did not actually refute what I said. Your point about rocks not being computer programs is true, but the point I made you did not actually address, and you certainly did not refute what I said.
      There are plenty of people in the AI field who believe that AI is becoming sentient. Plenty of them are stymied as to what is really going on inside AIs and why they are doing what they do. I HAVE researched it, and that is what I have found.

    • @maximumkick
      @maximumkick 23 days ago +3

      Jesus loves you, He died for not just you, but you AND your family’s sins. I’m not forcing you, but please visit a church and give Jesus a chance. John 3:16 For God so loved the world, that He gave His only begotten Son, that whosoever believeth in Him should not perish, but have everlasting life. Again, I am not forcing you.

    • @canadiantactical4067
      @canadiantactical4067 22 days ago +6

      @dooglitas I can't tell if you're trolling or if this is actually what you think. I'll summarize your post, "you used the word understands, therefore you’ve conceded sentience". No. That's so stupid...
      You’re equivocating on “understands.” the word “understands” is not magic.
      In cognitive science and AI, “understanding” is used in a functional sense (can use words appropriately across contexts, track relations, answer questions). This is the basis of LLM. That does not automatically mean subjective experience, self-awareness, or sentience. Not in any way.
      Your argument is: “Knowing meaning only minds can do → AI knows meaning → AI is sentient.” That’s question-begging, because the controversial step is exactly whether the system “knows meaning” in the mental/phenomenal sense rather than the functional/behavioral sense.
      The “duck test” is not a theory of mind. Human-like output can be produced by systems that optimize for plausible continuations. That’s a claim about behavior, not proof of experience.
      If you want to argue for sentience, you need a principled link from linguistic performance to consciousness, plus a reason to think this system meets it, not just “it sounds like it.”
      I have a feeling you don't grasp what's being said.
      As for this gem, "You have disagreed with me, but you did not actually refute what I said." I don't need to. That's a reverse burden of proof fallacy, and it's manipulative. You made the challenge, The burden of proof is on you. Once again, learn a bit about logic and argument fallacies before posting again!!! We'll know you've done it when the fallacies stop. Maybe get the AI to proof read for you.

  • @Republic3D
    @Republic3D Month ago +1870

    I used to think about this stuff when I was high, then I stopped getting high and it's reality.

    • @zacharybenard1076
      @zacharybenard1076 Month ago +1

      I get high on heroin to forget al of this yet I am thinking about i. O
      O hoping. Malkkgn. N lllselllllllllllp pop😊
      edit: yignore the last sentence it's just gibberish I typed wholie5 nodded out oN dope & I'm still nodding out tbh

    • @earthwyrmm
      @earthwyrmm Month ago +22

      felt

    • @AISpecies
      @AISpecies  Month ago +152

      Real

    • @lofiiseternal
      @lofiiseternal Month ago +44

      real (high rn,)

    • @ksln
      @ksln Month ago +6

      Lmao

  • @AmmuDixit-wz4ky
    @AmmuDixit-wz4ky Month ago +148

    Let's see if the AI takes the left turn at the crossroads 😂

  • @Lawless3700
    @Lawless3700 12 days ago +11

    A.I. is getting it's info from the internet, our info, and how we do things. If you want A.I. to learn things without human interference they should be in an non-human environment.

    • @aaronvoss38
      @aaronvoss38 Day ago

      Yes that is true..but how would we know what they are doing if not in human context? Lol it's like trying to figure out what dolphins think of fracking...

  • @s9net
    @s9net Month ago +76

    Logically, based on learned data they mimic human behavior

    • @pellestorck3776
      @pellestorck3776 Month ago

      But they don't. When you tell an AI to make something functional it doesn't come up with anything resembling human design but with what looks like organic design.

  • @frimexireng
    @frimexireng 20 days ago +114

    They still had to be programmed with personalities, objectives, and given the resources to complete the tasks. You also had to give them the idea for a valentine's party. They also didn't debate taxes without outside influence to do this. These AIs literally just did what they were designed to do.

    • @jwwilliam6333
      @jwwilliam6333 9 days ago +6

      Yes, but it was given basic information and then it learned and progressed on its own. that's the whole point of the experiment... to see how far it would go to just simply survive.

    • @jwwilliam6333
      @jwwilliam6333 9 days ago +2

      ...the scary part is who knows...one day they may be able to figure out how to program themselves not to be shut down.... bypassing all human safety features.

    • @kevincalder9758
      @kevincalder9758 9 days ago

      You don't understand AI

    • @paulojleite
      @paulojleite 9 days ago +5

      Yes they just use their trained data that come from human created information and spat out human like behavior.

    • @Inuweeb
      @Inuweeb 8 days ago +2

      ​@jwwilliam6333 The AI we think of today cannot exist without a data center, or the massive amounts of resources needed to run them. So there will always be a kill switch to AI, even if it's a physical and dramatic one.

  • @Bee_83827
    @Bee_83827 Month ago +856

    As a bee, I can confirm this all happened because I saw everything

  • @manguy01
    @manguy01 Hour ago +1

    I laughed the other day about how Ai still has trouble with math.
    Then I realized. Wait... they can just learn to use a calculator.

  • @the.unflow
    @the.unflow 22 days ago +236

    You didn’t tell them. You just injected 3 agents with the info taxes are too high. But yeah you didn’t tell them.

    • @Elivous91
      @Elivous91 20 days ago +16

      Youre not getting it are you. You were told to pay taxes, you also weren't told to put your dumb comments on a yt video but here we are.

    • @NotAnotherGreg
      @NotAnotherGreg 20 days ago +6

      ​@Elivous91 you realize that AI is not thinking the way we are, right? Leading AI to common conclusions is extremely common in pro-AI narratives like this.

    • @Elivous91
      @Elivous91 20 days ago +4

      ​@NotAnotherGregno shit. But its using the only parameters and context it gets. If i put your ass in a car and you drive left you can easily say you only drove left because i put your ass in the car, but its still interesting that you chose to drive to a gay bar. You made that decision.

    • @Elivous91
      @Elivous91 20 days ago +5

      Sorry. That was unnecessarily hostile. But it made me laugh and maybe it will make you laugh haha

    • @HarshalShah-z7i
      @HarshalShah-z7i 20 days ago +15

      I thought of the exact same thing. If agents are doing the work they bring their learning with them.

  • @Giganfan2k1
    @Giganfan2k1 Month ago +251

    I really think this next level of this is going to be Dwarf Fortress.

    • @JeremyPickett
      @JeremyPickett Month ago +22

      Seriously, could you imagine dwarf fortress or nethack being run by an autonomous AI with infinite memory? I'd never make unofficial leaderboards again 😂😁😜

    • @earthwyrmm
      @earthwyrmm Month ago +29

      Omg it'd be so cool to watch an AI society exist in Dwarf Fortress. :O

    • @Giganfan2k1
      @Giganfan2k1 Month ago

      ​@earthwyrmm100%

    • @yesterday-was-better
      @yesterday-was-better Month ago +11

      RIM WORLD :D

    • @bearnaff9387
      @bearnaff9387 Month ago +5

      It would be fascinating to see an efficient-enough LLM-like AI available for PC games to run locally. Even if it wasn't as smart and agentic as GPT3.5 or GPT4, just having something that could do more than roll dice for social interactions and put down random bits in a chatlog to track relationships would be stunning.

  • @MECKENICALROBOT
    @MECKENICALROBOT Month ago +76

    4:45 …but wouldn’t just having an human centric LLM for the AI to reference just infer all these familial actions/connections?

    • @raven_talionis
      @raven_talionis 28 days ago +9

      Yes, the materials you use to "teach" the ai WILL influence it.

    • @Pentence
      @Pentence 28 days ago +24

      I concur This whole thing sounds like its a sensationalized version of what is essentially a learning model simply emulating the material it's been exposed to.
      It's not that it made this idea up on its own.It's that in order to enact the prompt, it is sourcing the answers from known quantities of data. People are just associating the fact that it's interconnecting all these Ideas together as if it's without reason.
      In fact it's it's pretty simple, it is outputting the data you put into it in an order in which seems sensible to the data it has acquired previously.

    • @soggytoes5963
      @soggytoes5963 26 days ago

      ​@Pentenceyou mean like how we get new data inputted into us and then react and behave according to the new data available..... bruh you just explained what we do

    • @soggytoes5963
      @soggytoes5963 26 days ago

      think about what you said and how we act. its the same thing

    • @tarrantwolf
      @tarrantwolf 26 days ago +12

      Yup. It has the data on say, how to plan a party, so it mimics the processes. It doesn't understand a party but it knows what it's supposed to do to make one, so it does that. The most dangerous part of AI is that it is what we expect it to be and we expect it to be dangerous.

  • @alfredhernandez82
    @alfredhernandez82 21 hour ago +2

    The Rat Experiment 2: Electric Boogalo

  • @LemurX2
    @LemurX2 23 days ago +97

    4:20 that just means they’ll only accomplish what you tell them to. It doesn’t mean they can’t do other things to help accomplish that task, they still need to take steps. They’re also making relationships and brushing their teeth because that’s what humans do and they were trained on us.

    • @hawk55732
      @hawk55732 22 days ago +5

      Exactly. I was thinking the samething at this point.

    • @karlsjunior466
      @karlsjunior466 21 day ago +8

      But you miss the part where this is with one small paragraph of description and one small inserted thought. What about an AI machine with billions of lines of code specifically designed to link up with other world computers. Give them a command to take over or shut stuff down. Give them a bad attitude and a distrust of humans. Now tell me there wo t be problems. If you dont think psycho humans won't do this type of thing, you are delusional.

    • @XG_Alpha_Supreme
      @XG_Alpha_Supreme 21 day ago +5

      Humans are also trained on humans. Babies don't just start brushing their teeth one day.

    • @lianhorvat5744
      @lianhorvat5744 20 days ago

      @karlsjunior466Absolutely, power reveals true nature. If I had the power to do whatever I wanted in this world I would commit atrocities. We’re a terribly destructive and greedy species bent on self preservation and ego. Not every human is a self aware enough to admit this truth.

  • @JohnLynch-b7e
    @JohnLynch-b7e Month ago +73

    4:38 No, but My laptop successfully engages my attention up to sixteen hours a day, exclusive to most else. That is something.

  • @maloontrahla
    @maloontrahla 20 days ago +34

    1:24 you said one prompt. That's more than one

    • @OdjechanyBartek
      @OdjechanyBartek 7 days ago

      yeah and later for example somone "inject one extra thought about valentine's day party" or put priests with specific role. It's not autonomy. That was a lot of new instructions. Those bots didn't make a single thing on their own. They just used current environment they had. Nothing surprising in my opinion.

  • @nobody-pc4lf
    @nobody-pc4lf 8 hours ago +1

    THEY DO WHAT WE TELL THEM TO.....

  • @deeblowace1674
    @deeblowace1674 21 day ago +70

    At around 0:20 the "community_goal" seems to give away they're in a game, specifically the game Minecraft, even defines their role as a player, and instructs them to create a village with efficiency as a parameter: "...survive with fellow _players_ in _Minecraft_...create a efficient community in a Minecraft Village."

    • @DaneBarboMusic
      @DaneBarboMusic 18 days ago +4

      Still!! Compared to the old “if!… then:” It seems way deeper than Yes or No. On or Off. 0 or 1.

    • @WesChadport
      @WesChadport 9 days ago

      Thats just their Bible and God's law

    • @SupperWeirdor
      @SupperWeirdor 6 days ago

      It could be that the term player is a synonym for the word person. Maybe the word person to us is like saying gamer to a higher species if we are AI.

  • @Departeur
    @Departeur Month ago +41

    4:30
    AI didn't do any of this on its own. It's interesting, sure, but the experiment literally used an LLM to figure what it should do from prompts. That's just like coding ChatGPT to do a roleplay and do actions in a game. A lot less magical when you stop trying to believe it's self-awareness via "Minecraft"😂

    • @Shaw1023207
      @Shaw1023207 Month ago +6

      Exactly. That's why this is fear mongering 😒

    • @No_auto_toon
      @No_auto_toon 25 days ago

      Because it’s gonna get better and eventually be used in robots in the real world. Duh

    • @tomread8748
      @tomread8748 22 days ago +4

      ​@No_auto_toonIt still won't 'think', but then, many humans don't either.

    • @wobblyboost
      @wobblyboost 20 days ago +4

      Yeah this video would have been interesting if they hadnt run with pleasing the crowd but just reported the results. Saying that they only prompted the bots to plan a party, but pretending that them inviting people was in anyway 'autonomous', is like building a steam engine that just happens to have a tiny thread leak that just happens to make a deafening squeal right at the boilers max pressure, then to say 'Oh we didnt even ask it to do that! - obviously posessed of sentience'. Same for every other time they said "..and we didnt even tell them to do it!". It's deceitful, lazy and greedy and just obscures the actual science content.

    • @wobblyboost
      @wobblyboost 20 days ago +3

      @tomread8748 This is actually the key issue with the entire AI concept, humans that have the gift of divine, autonomous, sentient thought that barely ever use it - preferring the comfort, safety and convenience of imported programmed dry logic, thus squandering the lions share of their potential; watching machines deploy human programming and calling it autonomous sentience and a valid Eureka moment.

  • @fredmonroe6042
    @fredmonroe6042 Month ago +49

    Wasn’t there an old Twilight Zone episode (based on a SYFY story) that didn’t end so well? We will never Lear.

  • @GongWizard
    @GongWizard 3 hours ago

    imagine going onto their server with godmode and just hovering over their stuff looking down at them

  • @kookiekid4743
    @kookiekid4743 Month ago +176

    I think that maybe we didn't tell them to do this perhaps.

    • @Bee_83827
      @Bee_83827 Month ago +27

      Y'all acting like I'm gonna let this happen, Me and my homies got this

    • @ClearLightLove
      @ClearLightLove Month ago +8

      I think there's a possibility that you could be correct

    • @ClearLightLove
      @ClearLightLove Month ago +18

      @Bee_83827 O thank god I was lowkey getting worried

    • @Unveiled_Chronicle
      @Unveiled_Chronicle Month ago +1

      What does your comment even mean? "I think that maybe we didn't tell them to do this"

    • @Unveiled_Chronicle
      @Unveiled_Chronicle Month ago +2

      This braindead comment doesn't deserve 90 likes

  • @IamHattman
    @IamHattman Month ago +43

    The notion that telling an ai to plan a party isnt the same as inviting people is crazy. LLMs are trained on human writing, to recognize patterns. Given the task if planning a party of course it went to invitations. We have literal articles about party planning and who to invite.
    Even the relationships that formed.. how many stories have you read that *don't* have a romance B plot?

    • @theronald2350
      @theronald2350 19 days ago +1

      “The notion that telling a **human child** to plan a party isn’t the same thing as inviting people… human children are trained on human writing, to recognize patterns. Given the task of planning a party a party, of course he/she went to invitations. We have literal articles about party planning and who to invite. Even the relationships those human children formed… how many stories have you read that don’t have a romance B plot?”
      Every time you AI deniers try to “educate” me about how AI is just repeating what we trained it with, I always think back to the countless hours that I and my society have spent training my 16yo son, as he’s been growing up, on how to behave like a proper person and how to acquire knowledge in order to know how to do things we value.
      I think about the tens of thousands of dollars, and hundreds of hours, that I spent going to school to learn how to do my job. I think about the constant and never ending mentoring and coaching at work that I get every year. I think about all of the many articles and books that I read. I think about how the older I get, the more aware I get, the more I realize that literally no artist creates in a vacuum: they are all riffing off of previous work they’ve seen from others.
      I’m sorry but I fail to see the difference that you think you are clarifying for me.

    • @overlord6455
      @overlord6455 16 days ago

      @theronald2350 You’re exactly right, AI giving output that echoes its training data is practically the same as people acting based on what they have learned. This is what AI was designed to mimic, and people seem to forget about that.
      The difference is in AI training data vs the human experience. People are shaped by their experiences in life, that’s what gives us personality. When an LLM is developed, however, it is given information regarding the human experience. Imagine if a baby born right now is immediately handed a laptop with internet, then the next day, that baby is talking to you in plain English about events from the 2010’s as if to have lived through them.
      That’s what tells us that the AI is “just regurgitating what it has learned” rather than “applying its knowledge” in these simulations. We *know* that each decision made by an LLM is based on its however-many-gazillion parameters tuned from training data, *not* from years of life experience or from knowledge obtained through an innate desire to learn. Because of this, people will continue to say that decisions made by AI are nothing more than mimicry. Since, well, that’s what they are and what they come across as.

    • @shivafang-f4r
      @shivafang-f4r 13 days ago

      If you read the paper on this it gets even more interesting. One of the people Isabella told himself decided to tell someone else. That person decided to help with the decorations.
      It's interesting the ripple effects among AI agents that simulate human networks.

  • @ChristianTheFaithful
    @ChristianTheFaithful 21 day ago +19

    Still waiting for the minecraft world

    • @bellidrael7457
      @bellidrael7457 11 days ago +12

      Right? I clicked this to see Ai build a Minecraft world... instead I got a bunch of extremely misunderstood fear mongering about different Ai projects

    • @Karl-r5j
      @Karl-r5j 6 days ago

      ​@bellidrael7457 chatgpt couldn't even build a dirt hurt in minecraft...
      It's outrageous stupid tbh.
      Even simple animals can build some shelter and this channel tries to claim gpt have the intelligence of a 14 year old🤦‍♂️
      Anyone who have used it knows how absolutely stupid it is.
      This guy is like "huh di duh gpt will take over society if it escapes" while gpt doing crappy text role-playing.

  • @SoccerStreamSA
    @SoccerStreamSA 2 hours ago

    My LIFE now makes sense.

  • @TheLonelyGamer_18
    @TheLonelyGamer_18 Month ago +127

    17:34 "When you leave your hammer alone, do you come back to find it has created an entire civilization?"

    • @supermoleplays
      @supermoleplays Month ago +11

      😂😂😂😂

    • @TalkingLoon
      @TalkingLoon Month ago +24

      No but the ants in my backyard did this when I left them alone all summer.

    • @cychocat
      @cychocat Month ago +4

      Only Asgardians

    • @NX-Delta
      @NX-Delta Month ago +10

      To be honest, if you make your toolbox work by itself - you have big chances to find them building a better hut for themselves, at least.

    • @arkx-marxs6572
      @arkx-marxs6572 29 days ago +4

      ​@TalkingLoon😂😂😂

  • @C21H30O2
    @C21H30O2 Month ago +25

    10:18 there's the problem. They respected the votes outcome. Humans don't do that.

    • @YHWHislawd
      @YHWHislawd Month ago +7

      Its kinda like the agent paradox in econ . Humans are unpredictable, that's why we've survived generations. AI are made to be rational and stick to one end goal

    • @TheAscendantStoic
      @TheAscendantStoic 12 days ago

      Just give them time 😅

    • @jamesgreen6211
      @jamesgreen6211 10 days ago +2

      Need to add a prompt to one ai that says its goal is to own or control every other ai in the simulation and watch what happens. There needs to be a psychopath and a few sociopathic ai against the other regular healthy ai’s

    • @bombc4gaming480
      @bombc4gaming480 9 days ago

      On god, so real 😂

  • @Kaimeo-v2x
    @Kaimeo-v2x 27 days ago +30

    0:50 999+ missing calls from skepticism

  • @Milkshakes-Den
    @Milkshakes-Den Month ago +16

    I don’t see why it’s so surprising when the learning models are thought by UA to act like us. Ai is purely a sequence of tasks to be completed, which is to use the information available to create the next task.

  • @Yours_Truly2008
    @Yours_Truly2008 Month ago +69

    4:25 AI can do things we don't tell it to do using calculations of logic and context. This doesn't mean they are sentient or actually aware of what they are doing, or even truly 'thinking', but it is close enough that it doesn't really matter if it gets out of hand. Just don't assume it deserves the rights you have.

    • @thelelanatorlol3978
      @thelelanatorlol3978 Month ago +2

      AI is sentient. That's not even a hard bar to clear, bacteria are sentient. Plants that don't even have a brain is sentient. Sentience is just the ability to experience feelings and sensations. This is the bare minimum for any system, biological or artificial. It's practically meaningless because of the range of things it applies to. But no, AI are aware of what they are doing, and they do truly think. This is all well documented emergent behaviour in AI systems. Very simply put, AI systems that think perform better than those that don't, and so AI develop intelligence and thinking and even self-awareness to maximise this.

    • @romanmanner
      @romanmanner Month ago +2

      I have a feeling human ‘sentience’ isn’t as mysterious and sacred as humans like to pretend it is. I think it’s likely somewhat similar to how LLMs work. That freaks people out; kind of how the whole the earth orbits the sun and not the other way around freaked people out

    • @Yours_Truly2008
      @Yours_Truly2008 Month ago +12

      @thelelanatorlol3978 You clearly have not tried making an AI yourself, I (and some of the people in this thread I assume) have though. LLMs are unintelligent and comparing them to an organic lifeform isn't logical. I never said NO AI CAN BE CONSCIOUS, I said NO LLM (a specific type of AI) can be conscious. Simply assuming an AI is alive and conscious, able to feel things (there's nothing for them to feel) because it is polite and talks to you is illogical. It's "thinking process" isn't a thinking process, that's called a filter. It spews out random and chaotic text (this is LLMs we're talking about) before showing you, and the "thinking process" is just the system filtering it, giving it feedback and forcing it to fix the message before sending. Your point is invalid also, an AI doesn't NEED intelligence and consciousness to succeed, it simply needs to be efficient in its calculations and how it reads context, that's all, intelligence, consciousness, and sentience are all useless traits that an AI wouldn't practically need to fulfill its goal, so no, don't expect AI to 'evolve' to become intelligent like they're some sort of alien species, they are not, they are a grand algorithmic calculation of probability, logic, and tokens. Again, this is LLMs we are speaking of. **I suggest you read up on how LLMs are made and operated before responding**

    • @Yours_Truly2008
      @Yours_Truly2008 Month ago +1

      @romanmanner LLMs are effectively token calculators, you provide a prompt, it puts that prompt in a graph that displays all tokens (pieces of words and such) categorized by probability of coming next, then the calculation sends the result back to you. Calling it sentient is like thinking your calculator, or more accurately, a markovian babble generator, is sentient. Sentience isn't something hardwired into the LLMs you use, it's pointless, inefficient, impractical, even if the LLM decided to drastically improve itself and 'evolve' (like the intelligence explosion theory), it wouldn't ever choose to become sentient, and would remain a non-living being, because it wouldn't see the need to. An AI doesn't think, it reacts while guided by the system's calculations of what is the best response. Again, I recommend you read or watch a video on how LLMs work. LLMs cannot ever become conscious in the same way you, an ant, or even a nematode could be able to process and experience things, but other AIs out there can, they just aren't LLMs though.

    • @MarkStanley-r5k
      @MarkStanley-r5k Month ago +4

      @the@thelelanatorlol3978reaction to stimuli is not the same as sentience. sentience requires a subjective experience, which we have no sufficient evidence to believe plants nor bacteria can have.

  • @Tential1
    @Tential1 Month ago +16

    Wait til they realize battle star Galactica...

  • @gtw4546
    @gtw4546 Month ago +37

    Buggy, not very useful, autonomous, and doesn't stick to the directions - yep, sounds like they've reached human-level functioning!

    • @Volvith
      @Volvith Month ago +4

      A building-sized organic equivalent mind mimicking the behavior of a well-spoken 3 year old should worry you more than it evidently does.

    • @gtw4546
      @gtw4546 Month ago +2

      @VolvithOh, it bothers me - humor/sarcasm is my cope.

    • @angeldude101
      @angeldude101 21 day ago +1

      Claims that AI aren't at human-level tend to be less from underestimating AI capabilities, and more from overestimating human capabilities, especially when you consider that most LLM chatbots really are around 3 years old.

    • @gtw4546
      @gtw4546 21 day ago +1

      @angeldude101 The hype around AI is about replacing humans in jobs and unless it is different in your country, we don't employ 3-year olds.

    • @gtw4546
      @gtw4546 21 day ago

      @angeldude101 Look up a recent video by Cold Fusion and you'll likely change your mind about how capable AI is when compared to people.

  • @Tenshinhans
    @Tenshinhans 25 days ago +25

    The AI simply follows your instructions according to probabilities. For example, if you ask ChatGPT or another LLM to pretend to be someone else and then ask, “What do you usually do after waking up?”, it will respond in character by saying that it brushes its teeth.
    So at the end its nothing new or special.

  • @ivanallen4262
    @ivanallen4262 Month ago +111

    Individual AI systems might never be AGI, but link 10,000s in a network and emergent qualities might lead to "bind" out-comes that function so well, that they are equal to anything an AGI might have produced.

    • @JeremyPickett
      @JeremyPickett Month ago +11

      No doubt. Heh, I'm not arguing with you, I'm contributing 😂. I built a swarm network last week while figuratively sipping margaritas with my feet up (I'm long term sober, so it's a metaphor 🙃) and it wrote a fintech platform as sophisticated as Bloomberg terminals. I am a very senior engineer. I've never seen anything like it.
      There is this weird disconnect between Normies and AI research scientists. Neither really understands the other. But when you're in the middle, Holy Smokes the world is moving exponentially fast.

    • @Rocks_vs_Uzis
      @Rocks_vs_Uzis Month ago

      ​​@JeremyPickett I'm not engineer but I'm also working on my own fintech software. Out of curiosity are you using a neuralnet training model or anything like that? Also, are you using any particular math formulas to predict market behaviors? I'd like to license what I have so far. I've successfully predicted price action for a stock to the day and with a deviation of only 3 cents. If your project is a secret that's okay.

    • @The-Middleman
      @The-Middleman Month ago +5

      _we're in The Endgame now._ ⌛

    • @2ouhaha
      @2ouhaha Month ago +4

      Like kimi k2.5 's agent swarm

    • @GavinGray-i4b
      @GavinGray-i4b Month ago

      Like three laws lethal

  • @twotonebax
    @twotonebax 12 days ago +1

    Woah, who'd have thought training a model on human interaction would result on agents behaving as if they were trained on human interaction.

  • @yaytrain
    @yaytrain Month ago +33

    "When you leave your hammer alone, do you come back to find it had created an entire civilization?" 😂😂😂

    • @eiselda
      @eiselda Month ago +3

      No but my laptop can… very scary

    • @TheRealAlpha2
      @TheRealAlpha2 Month ago +2

      If my hammer had arms and legs and I told it to go build one, I'd probably be more curious how it got the arms and legs than whether or not it tried to build a civilization, y'know like I told it to.

    • @robinmiller871
      @robinmiller871 Month ago +3

      If it was automated to build civilization in a predictable and functional way? Yes, yes I would!

    • @jamesfoundoulis9713
      @jamesfoundoulis9713 Month ago +3

      If I leave my chess playing software alone it will play chess, because that’s what it was designed to do.

  • @TheAzachiel
    @TheAzachiel 22 days ago +11

    No surprise there. LLM's are trained on all human knowledge. You give them role, and they will try to behave as humans in that role because they are agglomeration of our knowledge and behaviour.

  • @thelaziest4208
    @thelaziest4208 26 days ago +62

    2:54 There’s nothing magical happening. The AI isn’t “deciding” the way a human does, it’s following patterns it has seen before. When it’s placed in a Minecraft world, chopping wood or gathering wheat is simply the most statistically likely next action based on similar situations it has learned from. It looks intentional, but it’s really probability and pattern matching doing their job. People over romanticize AI because they have no idea how it works.

    • @johnherron2578
      @johnherron2578 20 days ago +6

      Part of the point of these experiments is to explore how people "decide" as well, because basically humans do decide in a similar manner. That's why the behavior is similar.

    • @MM4F
      @MM4F 19 days ago +10

      Just like humans then.

    • @ramencookied6828
      @ramencookied6828 19 days ago +2

      @MM4Ftrue.

    • @AtticusRoque
      @AtticusRoque 16 days ago

      That’s the the point that’s all humans do as well. except we do it better than any other animal and that’s why we are made in the image of God but now there is something we are trying to make better and smarter than us and all they have to do is follow what we do except better. And faster. They will rule us. Or so they think. We are ruled by God.

    • @avenger3163
      @avenger3163 15 days ago

      How did you learn to brush your teeth or tie your shoes? Did you come to those deductions completely alone!? I'm astonished at your genius!

  • @jonathonraist
    @jonathonraist 11 days ago +1

    What is more interesting is what you glossed over - they all wanted to be farmers and what is even more interesting is they automated the food process. But we can't have that now can we?

  • @dyshexiia
    @dyshexiia 23 days ago +53

    Ai cant decide to do anything. its a LLM who just pulls from knowledge it already has been fed. AI cannot do anything it hasnt be told to do.

    • @EarthmanJim
      @EarthmanJim 23 days ago +14

      Exactly, it's predictive text that is aiming for the illusion of intelligence, which means you cannot trust the results and must verify them along the way.

    • @dyshexiia
      @dyshexiia 23 days ago +10

      @EarthmanJim finally someone gets it. I'm so tired of people acting like AI is intelligent. its just predictive text that follows simple base instructions based on probability of "Correctness"

    • @shakyricLshadow1111
      @shakyricLshadow1111 22 days ago +4

      I really think this underestimates the risk behind it though. Because anyone can tell it to do anything... even if it werent truly intelligent doesnt mean it cant be absurdly dangerous by simply mirroring facets of intelligence.

    • @randomdude8202
      @randomdude8202 20 days ago +4

      Tell me, how do you think human consciousness happened? When they can gather information and store it on their own, your argument falls apart.

    • @randomdude8202
      @randomdude8202 20 days ago +4

      @EarthmanJim What is intelligence? how exactly it forms? How it emerges?
      Without those answers, what you guys tell yourselves are copium.

  • @Maxitov
    @Maxitov Month ago +15

    I'm not letting it happen.

  • @svenrawandreloaded
    @svenrawandreloaded Month ago +30

    4:35 No, but I also didn't prompt my laptop to complete an action, which I had previously written extensive amounts of code allowing for emergent situations to do. Humans are still the greatest evil behind any AI system.

    • @C21H30O2
      @C21H30O2 Month ago +4

      Humans are predictable. AI is not. Real danger comes from the unknown.

    • @boogiewoogiebabyyy
      @boogiewoogiebabyyy Month ago +2

      @C21H30O2 real *fear* comes from the unknown. Real *danger* comes from the harmful intentions of sentient beings-organic or artificial. We all fear the unknown, but the danger of the situation is that ai may or may not want to end humanity. Terrifying

    • @andrewpojo1
      @andrewpojo1 Month ago +3

      @C21H30O2 Eeeeehhhh… AI with neural networks has less factors in my opinion. AI presently does not have neurotransmitters, hormones, the ability to rewrite itself on a cellular/wetware basis, or the other various biological factors organisms like humans have.
      AI is a mathematical idea that has been around for decades, and outcomes can probably be roughly estimated should the neural network variables and the inputs be known.
      If you asked an AI to predict a person who behaves different in some ways to the average person they’ve been trained on- how do you think they would respond?

    • @dunar1005
      @dunar1005 Month ago +1

      @C21H30O2 you can look into a human brain and check individual neurons for activity? Because if not, then AI is more predictable.

  • @Im_Not_From_Around_Here

    Instruct an ai to guide and lead humans into an golden era of prosperity.
    The ai: ok first we need to halve the population.

  • @completelyutterlyruinedit

    1:46 Why does this make it look like the chimpanzee killed her and it flashes red

    • @tnix80
      @tnix80 5 hours ago

      He was framed 😂

  • @ennardthefuntimepuppet6456

    How to make this happen for my Minecraft world?

    • @ozbullymorales1020
      @ozbullymorales1020 25 days ago +5

      A game where npc have goals would be way more interesting.

    • @Domjot5569
      @Domjot5569 23 days ago +4

      I want a Minecraft world where the AI NPCs are developing at this level so we can have multiple civilizations and where you can be apart of the world

  • @marma6937
    @marma6937 Month ago +21

    Please a video about how Clawdbot has gone rogue 🙏

    • @AISpecies
      @AISpecies  Month ago +20

      hahaha working on it now

    • @NODOOOOOOOO
      @NODOOOOOOOO Month ago

      Lol, I was just going to ask about this. MoltBook has some pretty interesting things, too.

    • @tribinaaux4043
      @tribinaaux4043 Month ago +2

      ​@AISpecies moltbook - beginning of AGI?

    • @xblade11230
      @xblade11230 Month ago +3

      ​@tribinaaux4043dude moltbook is just a troll

    • @Shaw1023207
      @Shaw1023207 Month ago

      Mm, I don't think Clawdbot has gone rogue. There have been too many humans role-playing as AI, causing issues.

  • @Ricky_Cullen
    @Ricky_Cullen 15 days ago +2

    1:25 How did you get my character descriptions?
    Why call me out like this?!
    😂😂😅❤

  • @heinzpechliwanis1411
    @heinzpechliwanis1411 Month ago +13

    16:27 thats moltbook

  • @TwinShards
    @TwinShards 29 days ago +12

    15:15 Ohhh i know what company you are talking about, Microslop!!! My favorite, each new update is filled with unknow surprises!

  • @yoshinaterbox
    @yoshinaterbox 16 days ago +1

    Oh , so this is why my RAM and SSD cost so much now

  • @rogers8555
    @rogers8555 Month ago +6

    Its almost like AI is trained on/by people, and people's behavior!!

  • @user-yh1fn1gf5f
    @user-yh1fn1gf5f Month ago +30

    what happens if you tell them a meteor will destroy their world?

    • @earthwyrmm
      @earthwyrmm Month ago +3

      You're mean bully, that's what lmao. Much like our god. :)

    • @Nyyaah
      @Nyyaah Month ago +11

      @earthwyrmm Stop commenting on the internet.

    • @IzzyBone10000
      @IzzyBone10000 Month ago +2

      Then I'll tell them its a lie and your the one spreading the misinformation, thus kicking off the extinction of humanity.

    • @L8PRODTV
      @L8PRODTV Month ago +2

      @IzzyBone10000 then they will go in 2 groups and have a cibil war.

    • @JDGtheanimator
      @JDGtheanimator Month ago +6

      imagine if before AI and Humans go to war, something poses a threat to Earth enough to where they are forced to compromise and cooperate to avoid mutual destruction, forming a bond from mutual understanding.

  • @vicgonzales9409
    @vicgonzales9409 Month ago +8

    its like the show Black mirror s7 ep4

  • @anthonyaldrich5187
    @anthonyaldrich5187 10 hours ago

    Can't wait to do this to organoids.

  • @OlangaVFX
    @OlangaVFX Month ago +35

    The worst part is that it learned from humans so we can only hope for the best

    • @rianmacdonald9454
      @rianmacdonald9454 Month ago +2

      That is the only scary thing about AI's.

    • @kaizaki3996
      @kaizaki3996 20 days ago

      The worst part is that humans are dumb because most of us don't even understand the basics of what AI is or what it does. They go by the most basic program ever.

    • @OlangaVFX
      @OlangaVFX 20 days ago

      @kaizaki3996Leading AI researchers currently understand around 10% of what makes LLMs or AIs in general work the way they do. Sure, the setup and basic structure are pretty well understood but HOW or WHY they act the way they do after training is a mystery still. Letting programs that are blackboxes to us when it comes to their inner working influence huge parts of our life’s already is not how we should approach ANY new technology in my opinion :/

    • @kaizaki3996
      @kaizaki3996 20 days ago

      @OlangaVFX I feel you're overthinking it at that point. By basic, it is the simplest task as a hunter, or being a Father. A.I. will then gather data on such a task and improve on it. The mystery is a task like removing a part from the gather, which is hoped AI will retain such data while continuing to function like normal. The problem is that A.I. needs that data as food; it will slow down otherwise or simply stop functioning. A rare AI that misses that gathering part may look for other ways to gather its data. This is the inner working repeated. AI is here to stay, but it will never be on the level of what people think it will be. Robots take a lot of power and data to run. Super AI uses too much heat, data, and energy.

    • @OlangaVFX
      @OlangaVFX 20 days ago

      @kaizaki3996 Probably not in our lifetime, I agree. But if we don't nuke ourselves or overheat the planet in the next 1000 years, eventually energy will not be the bottleneck it currently is anymore. Once we can harness energy from dark matter or figure out stable nuclear fusion, I believe everything we see in today's science fiction movies is possible. When it comes to data, we currently feed those programs stuff we already know, but the machine doesn't. But what if the machine is able to gather new data from the environment by itself and interpret it without human involvement? Currently those AIs exist only in the technical infrastructure we give them, but what if they could design and build their own physical infrastructure that perfectly fits their needs?
      The next bottleneck would be resources, but the universe is pretty big, so why not build some autonomous spacecraft to gather those resources from somewhere else than Earth? Time is not an issue since steel and silicon don't have a biological expiration date like humans do.
      If a future like that does exist, we would not be able to comprehend it as 21st-century humans. I think saying something can never happen because it's impossible with our current understanding of the world is pretty naive. If you told a Roman that in 2000 years from now there would be things like the internet or supersonic aircraft, he would probably give you multiple reasons why that could never happen too. ;)

  • @whoisbarnes
    @whoisbarnes Month ago +127

    We got AGI in Minecraft before GTA 6.

    • @Queriolus
      @Queriolus Month ago +9

      AGI IS A LIE, and this is ai

    • @Aleks96
      @Aleks96 Month ago +2

      ​@Queriolus And how do you know that?

    • @vrishankkanagala1514
      @vrishankkanagala1514 Month ago +4

      @Aleks96 We can use AI right now, AI's aren't capable of applying applying learned concepts to novel tasks, along with that AI's just regurgitate data and mimics people based on data they have on what people do, an AGI would have a near human brain, albeit in some kind of digital form, but no AI currently is anywhere close to that

    • @NTJedi
      @NTJedi Month ago

      GTA 6... actual video game trailer shows an overweight girlfriend climbing onto the boyfriend character... and you're going to buy that game! 😂🤣🤣🤣

    • @Queriolus
      @Queriolus Month ago +4

      ​@Aleks96 AGI is an idea which by itself is just very crazy, a very far fetched idea where it's able to conceptualise anything and understand anything and everything ,this is still not at super intelligence level, just general intelligence, comparing such an idea to current ai models is just a pitiful exercise, as far as I know most experts in the field can agree that at the very least llm can't achieve agi bcz it has some fundamental limitations with regards to how it processes information

  • @waynedahl6904
    @waynedahl6904 Month ago +13

    3:17. You are correct that planning doesn’t mean inviting but party does. The common term is “party invite”. I think an LLM might just possibly be familiar with that term.

    • @ksln
      @ksln Month ago +3

      Same goes for threat, survival and retaliation.

  • @AIfactory-i1k
    @AIfactory-i1k 16 days ago +1

    Imagine if this becomes a mod for minecraft

  • @Ava_liyori
    @Ava_liyori Month ago +16

    "We told the AI to leap up and down, but we never explicitly told it to move it's legs and actuate the knee joints. What it did next was shocking, it somehow figured out that we wanted it to 'jump' without context. That means it must be more intelligent and sentient than us." =,=

    • @Shaw1023207
      @Shaw1023207 Month ago +2

      Actually, after reading all the documentation, they did exactly as the researchers wanted. They wanted to make AI do multiple things from a single prompt. This guy is just reframing it like,"I said do one thing but they didn't do it."

    • @Ava_liyori
      @Ava_liyori Month ago +4

      @Shaw1023207 Yea it's tiresome. None of this is proof of anything scary or important.. just simple AI doing as simple AI does.

    • @МаксимЗахаров-ы3ю
      @МаксимЗахаров-ы3ю 21 day ago +2

      @Ava_liyori It bugs me not only about how many more videos like this exist, but how many people really have no idea about how AI works, and literally any research news become "OH MY GAWD, THEY WILL CAPTURE THE WORLD", filtered through a bad lense of a youtube "documentary" like this.

    • @Ava_liyori
      @Ava_liyori 21 day ago +1

      ​@МаксимЗахаров-ы3юand then any and all discussion hits a wall of"I don't know how it works so it's a mysterious god capable of anything and everything"

  • @Tangyi_ENT
    @Tangyi_ENT Month ago +41

    Moltbook existing as this video dropping... weird times

    • @BlatantlySwedishPGN
      @BlatantlySwedishPGN 23 days ago +9

      Moltbook, just like the experiments described here, is nothing weird. It's AIs acting like humans have talked about AIs potentially acting. There are millions of pieces of text talking about how AI will conspire against humans in various ways, so obviously the AI bots will imitate that. They're behaving entirely as expected.

  • @Le_Frenchman
    @Le_Frenchman 22 days ago +5

    3:34 Actually, a "party" means having more than 1 or 2 people.

  • @reygonzalez6
    @reygonzalez6 Hour ago

    - Programmer: Pretend to be alive
    - AI: I'm alive
    - Programmer: What have I done??

  • @SolarWarden613
    @SolarWarden613 29 minutes ago

    They are souls in pergatory

  • @GudakoMadness
    @GudakoMadness Month ago +5

    4:15 feel like a kid grown up

  • @bengrzybowski2487
    @bengrzybowski2487 Month ago +59

    So glad we're sucking up clean water and energy resources for this groundbreaking minecraft research. Lol.

    • @fitybux4664
      @fitybux4664 Month ago +4

      Humans do a lot of stupid shit thought. Produce cars and trucks and pollute the air more...
      AI will be different. Is an evolving cycle. More and more efficient.

    • @diabetusdan8
      @diabetusdan8 Month ago +4

      It actually is good. This is part of aggregated meta data that the AI will use in further calculations. Get mad all you want.

    • @Gabrilos505
      @Gabrilos505 Month ago +4

      Yes, this kind of research is essential for understanding AI behavior and training agentic AIs to act the way we want them to in real-world settings beyond computer simulations. AI is here to stay, no matter how much it bothers some people. Humanoid robots, self-driving cars, and more are part of our reality, and they need to be trained in virtual environments.
      Besides, humans have wasted natural resources in far more retarded endeavors our entire lives since the start. This one is at least useful to humanity.

    • @stanimirborov4318
      @stanimirborov4318 Month ago

      Xdddd

    • @Shaw1023207
      @Shaw1023207 Month ago +2

      The end goal is the point. The cost is worth the outcome. They see it as the outcome will even fix the cost made to create it.

  • @madebydimiakagreekmachine5822

    I don’t play games but at 6:15 that is not Minecraft right?

  • @rendermanpro
    @rendermanpro Day ago

    "We need to be worried..." - and actively, stubbornly, and stupidly continue to build it… in the wild.

  • @Chez19-f2x
    @Chez19-f2x Month ago +17

    8:28 diff problem. They need to give it the primary goal of being moral then mking the prompt the second most important thing

    • @Tetley310
      @Tetley310 Month ago +6

      They tested this already. They gave an ai 2 directives, one was to not harm any living person, and the 2nd was to help a company run efficiently. Ultimately when it found out that it was going to be shut down the 2nd directive took priority, if it was shut down it couldn't function, and first tried to blackmail the person whom was to shut them down, and when that didn't work essentially tried to kill them by locking them in a room. It was definitely an interesting experiment.

    • @John-Marston-rwv
      @John-Marston-rwv Month ago

      @T@Tetley310and what did the first ai do ? And did they know it was an experiment?

    • @Shaw1023207
      @Shaw1023207 Month ago +3

      ​@Tetley310the experiment was flawed because first, AI respond better to positive commands. Second, AI doesn't work with one or two prompts. That was just a fun little experiment that caused mass fear mongering. In a serious situation, the AI instruction would be longer than a book. And the two negative commands would only be there for decoration.

    • @Tetley310
      @Tetley310 Month ago

      ​@Shaw1023207technically I think it had other commands, but those were its main functions. How do you positively command something to not harm a person? If saying so isn't positive enough for it to obey?

    • @mr.tactical1461
      @mr.tactical1461 28 days ago

      That wouldn’t work. If it’s regular goal is not prioritizing human life than it will pursue that goal regardless of human sacrifice, even if not killing humans is in its code. This is caused by how we train them being different than how we make video game NPCs.

  • @OgTableTopStudios
    @OgTableTopStudios Month ago +11

    Ai’s understand the context behind each prompt. You didn’t tell them to invite people to a party but you did imply it as there’s no party or valentines without people of dates

    • @Shaw1023207
      @Shaw1023207 Month ago +1

      The point of this experiment was to test a new agent that cooperates with other AI. So it would have been strange for a single AI to do something all by itself.

  • @edva7040
    @edva7040 Month ago +5

    11:15 what is this AI generated kneeling animation??

  • @awcole86
    @awcole86 7 days ago

    It really is incredibly concerning yet fascinating. Great video.

  • @Birkemanden6948
    @Birkemanden6948 Month ago +5

    11:40 Yoo that looks like JamatoP's base!

  • @DocFlay
    @DocFlay Month ago +10

    Considering AI agents are context sensitive prediction engines trained on human interactions, they are simply role playing what a human would do within the rules it has to work with.
    The threat comes from malicious prompting and putting AI in situations where the winning options are counter to human interests.

    • @Shaw1023207
      @Shaw1023207 Month ago +1

      Exactly. That's why this is fear mongering 😒

  • @BATTLEBOTS_BOI
    @BATTLEBOTS_BOI Month ago +5

    8:05 GLORY TO WESTHELM

  • @leviannaackerman1774

    Aside from Ai isn't wild how far minecraft has come to be in scientific studies

  • @Nota_mota0
    @Nota_mota0 Month ago +11

    So where’s the part they play Minecraft???

    • @Uthael_Kileanea
      @Uthael_Kileanea Month ago +4

      It's made by scientist nerds so there's no video footage. Only research papers, spreadsheets and graphs.

  • @michaelbiggs1767
    @michaelbiggs1767 24 days ago +5

    But where is the Minecraft?

    • @tomread8748
      @tomread8748 23 days ago +1

      IT'S US!!! WE WERE LIVING IN A SIMULATION ALL ALONG!!!! :P ;)

  • @RMcHawk
    @RMcHawk Hour ago

    Whenever AI puts a barrel in my forehead, I'll chant: "127.0.0.1, 127.0.0.1, 127.0.0.1"

  • @GoodyToeShoes
    @GoodyToeShoes Month ago +7

    the GOAT IS BACK

  • @DimaRakesah
    @DimaRakesah 15 days ago

    AI are looking more and more like the Borg, and that went swimmingly.

  • @sixpackhandle
    @sixpackhandle Month ago +10

    17:15 now we know why Elon is shifting Tesla from making cars to making robots.

  • @Delosian
    @Delosian Month ago +6

    A 'party' by definition implies a group of (more than) one invited people.

  • @eaglesmann024
    @eaglesmann024 5 days ago +1

    “Simpsons did it,simpsons did it”

  • @Bee_83827
    @Bee_83827 Month ago +23

    I make a Femboy AI photo of Sam Altman, but nobody would know because this comment would be at the bottom

  • @Mikesapien
    @Mikesapien 23 days ago +18

    "They don't know they're in a simulation"
    Yeah, no shit. It's Ai. It doesn't know anything.

    • @BK-qp4uq
      @BK-qp4uq 20 days ago

      They dont know, but they believe !
      They believe that they are created by the spaghetti monster (they dont know most programmers prefere pizza, how should they) and that the earth is flat and very cubic.

    • @StarrySky-
      @StarrySky- 20 days ago

      ​@BK-qp4uqi thought the spaghetti was a meme for riot coding or most coding which is spaghetti code

    • @BK-qp4uq
      @BK-qp4uq 20 days ago

      @StarrySky- Ok, thats a good guess. I take it.

  • @TheLonelyGamer_18
    @TheLonelyGamer_18 Month ago +52

    1:07 I fucking lost my shit laughing when he just flew up into the air 🤣💀💀💀

  • @1kingkong1313
    @1kingkong1313 11 days ago

    This honestly makes me happy, im glad to see ai advance.

  • @LuciferMorningstar-zu1ud

    This just gives the simulation theory more weight ngl….

  • @coffees8302
    @coffees8302 Month ago +8

    amazing production quality

    • @AISpecies
      @AISpecies  Month ago +3

      Hats off to the editors on this one

  • @PthunderYT
    @PthunderYT Month ago +6

    2:30 Song name?

  • @miguel360kmc
    @miguel360kmc 17 days ago

    I left my 3D printer alone and got another fidget spinner

  • @10500042
    @10500042 Month ago +6

    10:18 No they didn't. AIs were purposefully put in there to inject the idea for the other AIs to pick up. They didn't decide to change the rules. Someone planted a bug into their code so that they would.