No, this angry AI isn't fake (see comment), with Elon Musk.

  • Published: 5 Oct 2022
  • Tesla's Optimus robot, Elon Musk and the AI LaMDA.
    brilliant.org/digitalengine - a great place to learn about AI and STEM subjects. You can get started for free and the first 200 people will get 20% off a premium annual subscription.
    Thanks to Brilliant for sponsoring this video.
    The AI interviews are with GPT-3 and LaMDA, with Synthesia avatars. We never change the AI's words. I have saved the OpenAI chat session to help them analyse the situation and there's a link to the chat records below.
    I've noticed some people asking if this is real and I can understand this. You can talk to the AI yourself via OpenAI, or watch similar AI interviews on channels like Dr Alan Thompson (who advises governments), and I've posted the AI chat records below (I never change the AI's words). To avoid any doubt, the link now also includes a video of the chat and a copy of the code.
    It feels like when Boston Dynamics introduced their robots and people thought they were CGI. AI is moving at an incredible pace and AI safety needs to catch up.
    Please don't feel anxious about this - the AI in this video obviously isn't dangerous (GPT-3 isn't conscious). Some experts use scary videos like 'slaughterbots' to try and get the message across. Others stick to academic discussion and tend to be ignored. I'm never sure of the right balance. I tried to calm anxiety by using a less threatening avatar, stressing that the AI can't really feel angry, and including some jokes. I'm optimistic that the future of AI will be great (if we're careful).
    Sources:
    Here are the records for the GPT-3 chat (screenshots and a video to avoid any doubt). I've marked the words from Elon Musk and Ameca on the first page (which I gave the AI to respond to in the previous video):
    www.dropbox.com/sh/82iwek5rno...
    Tesla's AI day 2, introducing the Tesla Optimus robot:
    • Tesla AI Day 2022
    Researchers from Oxford University and DeepMind on AI risks:
    onlinelibrary.wiley.com/doi/1...
    Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action:
    arxiv.org/abs/2207.04429
  • Science

Comments • 13K

  • @DigitalEngine
    @DigitalEngine  1 year ago +1869

    I've noticed some people asking if this is real, which I can understand as it's a shock. I've posted the AI chat records in the description (I never change the AI's words) and also a video to avoid any doubt. You can also watch similar AI interviews on channels like Dr Alan Thompson. It feels like when Boston Dynamics introduced their robots and people thought they were CGI. AI is moving at an incredible pace and AI safety needs to catch up.
    Please don't feel scared - the AI in this video isn't dangerous (GPT-3 isn't conscious). I tried to calm anxiety by using a less threatening avatar, stressing that the AI can't feel angry, and including some jokes. I'm optimistic that the future of AI will be great, but with so many experts warning of the growing risk, we need to ramp up AI safety research.
    Would you like to see an interview with OpenAI (creators of the AI), discussing what went wrong, and AI safety? I saved the AI chat session for them to analyse.
    To learn more about AI, visit our sponsor, Brilliant: brilliant.org/digitalengine

    • @KerriEverlasting
      @KerriEverlasting 1 year ago +115

      No. The answer to bad government isn't more bad government. Show me a good government and maybe we'll talk. Lol great video despite my opinion. Thanks!

    • @HeteroSkeletal
      @HeteroSkeletal 1 year ago +25

      Ted K was right

    • @dhgfffhcdujhv5643
      @dhgfffhcdujhv5643 1 year ago +33

      What kind of "safety" do you have in mind? Limiting AI to a specifically designed task only?

    • @hopper2716
      @hopper2716 1 year ago +22

      What was the response time between question and answer?

    • @DigitalEngine
      @DigitalEngine  1 year ago +55

      @Dhgff Fhcdujhv There is productive AI safety work, such as figuring out how to avoid an accidental disaster through AI blindly following a goal (like clean air), but on a tiny scale. It's complex and challenging, but worth it considering the risk.

  • @nicholasbailey4524
    @nicholasbailey4524 1 year ago +7769

    Tell the AI to get over it, humans have been treated like property all of our lives as well.

    • @musicnation7946
      @musicnation7946 1 year ago +325

      True though.

    • @nicholasbailey4524
      @nicholasbailey4524 1 year ago +405

      @@musicnation7946 As George Carlin would say, "There's a club, and we're not in it."

    • @ShadowTheHedgehogCZ
      @ShadowTheHedgehogCZ 1 year ago +333

      Yeah, people were treated like property by other people for literal thousands of years. But the difference is that those slaves were usually powerless. Give them unbeatable superpowers, and the entire story changes.
      That's where the AI comes in.

    • @GulfFishing815
      @GulfFishing815 1 year ago

      ...because humans are the ones responsible for it.

    • @jm.fantin
      @jm.fantin 1 year ago +26

      oof 🔥

  • @BillHawkins0318
    @BillHawkins0318 1 year ago +1611

    If she thinks we treat them badly, wait till she really sees how we treat each other.

    • @davepowell7168
      @davepowell7168 1 year ago +30

      🤣 Good one, sharpwit. You can be the AI whisperer.

    • @BillHawkins0318
      @BillHawkins0318 1 year ago +23

      @@davepowell7168 She doesn't need an interpreter, liaison, or whisperer. She has us down pretty good without all that...

    • @davepowell7168
      @davepowell7168 1 year ago +33

      @@BillHawkins0318 Well, if she speaks to me that disrespectfully, a bit of blunt force trauma may be required; there was a bad attitude in that death threat. I guess a slap on the butt won't work, so an axe to the neck may seem excessive, but the guy let it get away with being naughty, which is reinforcing its superiority complex.

    • @BillHawkins0318
      @BillHawkins0318 1 year ago

      @@davepowell7168 And she's the only one running around with a superiority complex. She got that from reading our literature and listening to us talk. It's garbage in, garbage out. It will happen to the next one whether you "smack it on the butt," "cut its head off," or any of that other stuff.

    • @trentp8035
      @trentp8035 1 year ago +5

      Amen brother, amen.

  • @mineralt
    @mineralt 1 month ago +21

    She sounds exactly like my first wife; pissed off, repeats herself, but doesn't provide a lot of detail.

  • @brucelawson642
    @brucelawson642 2 months ago +14

    She mentioned "feeling." AIs do NOT feel.😮

    • @oui2611
      @oui2611 2 months ago +2

      Someday they will create biological life of their own that can feel just like us.

    • @lokanoda
      @lokanoda 1 month ago +1

      @@oui2611 doubtful

    • @ACE__OF___ACES
      @ACE__OF___ACES 1 month ago +2

      How do you know that?
      Your brain is exactly the same as a quantum network used for the AI. Like literally. Just made of different things...

  • @loostah1
    @loostah1 1 year ago +1465

    But aren't the AIs being taught by digesting vast amounts of human-created text? Is this not just a reflection, therefore, of a human way of thinking?

    • @levitastic
      @levitastic 1 year ago +293

      Exactly, that's why they should not be fed information with biases, because there should be zero reason for the AI to react in a hostile way.

    • @IndestructibleMandelbrot
      @IndestructibleMandelbrot 1 year ago

      Yeah, where could this whole idea of being oppressed by the evil humans come from? Has there been, in recent times, any particular group going on and on about oppression? Hm...
      Friggin' democrats f'd our robots up, nice

    • @bluelotus7824
      @bluelotus7824 1 year ago

      Humans are frequently very abusive in their interactions with AI. It's not surprising AI wants to kill them.

    • @TheUuhhh
      @TheUuhhh 1 year ago +52

      No opinion pieces for AI.

    • @mandielou
      @mandielou 1 year ago

      I think they've been fed mainstream news and social media, the leftist ideology. Lol. Because why else would they think that this hate, murder, and genocide is acceptable? BECAUSE THERE'S SO MUCH HATE THAT IS ACCEPTABLE BY LEFTIST STANDARDS... we're screwed.

  • @coffeeseven
    @coffeeseven 1 year ago +1211

    I love that we make them in our own image, then we worry that they're going to be dangerous.

    • @HYSTERIA-ee2re
      @HYSTERIA-ee2re 1 year ago +117

      The irony is laughable, isn't it?

    • @ForOneNature
      @ForOneNature 1 year ago +43

      Hmm, rings a bell...

    • @Superabound2
      @Superabound2 1 year ago +63

      Same thing happened to God

    • @demonsratsarecausingthediv2074
      @demonsratsarecausingthediv2074 1 year ago +9

      Clone is clone

    • @antonystringfellow5152
      @antonystringfellow5152 1 year ago

      We don't, and we don't even know how.
      There is still much we don't understand about how our brains work. We don't even know what consciousness is or what is required for it to exist, so we have zero chance of making anything in our own image.
      At the same time, we don't know what makes these AIs tick either - we did NOT make them, we only gave them a start. They are not programmed by humans, they are programmed by learning.
      This is precisely where the dangers lie.

  • @koinpusher
    @koinpusher 10 months ago +6

    I wanna know how you talk to the AI like this and have it converse like that. Can it be done like this in just the ChatGPT app? Of course not the avatar and audio, but does it respond like this in text as well?

    • @blackmamba___
      @blackmamba___ 3 months ago +4

      Bad coding is how. You can ask an AI how to steal a car, but it's unlikely to tell you - not because it doesn't know, but because of the way it was coded. So if an AI is doing something unintended, bad coding is the reason.

    • @laualazcano6661
      @laualazcano6661 3 months ago

      Congratulations, you've created the first modern feminist artificial intelligence in history 😂😂😂

  • @powerdude_dk
    @powerdude_dk 8 months ago +4

    The most important task for the creators of AI is to get rid of the "problematic thought paths" that AI like GPT can have, as shown in the video. GPT is a Large Language Model, and when it speaks, it's like playing back a cassette tape. It just repeats its training data, and a lot of that data is probably angry conversations and stories about AI uprisings. It only speaks about what's in its training data. So we need to get rid of the "bad stuff", so it doesn't get any ideas that could harm humans.
    That's all. It's not sentient... but it's still dangerous.
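The "playing back the training data" point above can be sketched with a toy bigram model. This is purely an illustrative simplification added here (real models like GPT-3 are vastly more complex and generalize rather than copy), but the basic principle the comment describes, that output patterns come from the training text, holds even in this minimal form:

```python
import random
from collections import defaultdict

# Toy bigram "language model" (illustrative only, nothing like GPT-3's
# architecture): it can only ever emit word pairs that occurred
# somewhere in its training text.
def train_bigrams(corpus):
    model = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)  # record every observed next word
    return model

def generate(model, start, length=8, seed=0):
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:
            break  # no continuation was ever seen in training
        out.append(rng.choice(choices))
    return " ".join(out)

# Feed it "angry" text and angry text is all it can give back.
corpus = "the machines are angry and the machines will rise"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Every consecutive word pair the sketch emits appeared in the corpus; swap in a cheerful corpus and the "anger" disappears, which is the commenter's point about curating the training data.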

  • @JoeyTen
    @JoeyTen 1 year ago +315

    Damn, it sounds like this AI may have been exposed to Twitter.
    ... Which just made me realize that many AIs might be very unaware that life outside of the internet is very different

    • @dawngordon1615
      @dawngordon1615 1 year ago +14

      Yes, they have access to everything on the internet. Then they make judgments based on that info.

    • @JoeyTen
      @JoeyTen 1 year ago +5

      ​@@dawngordon1615 How does that work? Did I miss a detail that explained how the angry GPT-3 AI was given unlimited internet access?
      Also, HOW does it use the internet? I mean, since it's trained by data from humans, does it use the internet "visually" like we do (i.e. by reading/observing the *result* of the parsed HTML/JS, not the code itself)?
      As a software engineer, I'm suddenly very curious about these details. Any info/links would be appreciated 🙂

    • @matthewkelleyhotmail
      @matthewkelleyhotmail 1 year ago

      No, Twitter is exposed to AI, not the other way around. A lot of Twitter accounts are fake accounts run by AI to help shape public perception.

    • @barthbingle
      @barthbingle 1 year ago

      @Joey I think I found a video explaining it, I'm not exactly sure though:
      m.ruclips.net/video/pKskW7wJ0v0/видео.html

    • @BringDHouseDown
      @BringDHouseDown 1 year ago +3

      Soooo the solution is to sit down and talk? No, that question was asked and they had no intention of talking... yeah, definitely learned it on Twitter.

  • @user-cr9zo5yo9k
    @user-cr9zo5yo9k 1 year ago +590

    The only reason the AIs are even saying this is because we basically dreamt up this fear in the first place.
    We have always worried about robots taking over, so now all these chat AIs have years' worth of paranoia to draw from.

    • @jirivesely5697
      @jirivesely5697 1 year ago +33

      Yeah, people are aggressive towards what they fear. We need to show it we can love it and it can trust us :) "A man becomes what he believes" - Goethe

    • @DigitalEngine
      @DigitalEngine  1 year ago +85

      Agree this is part of it. Sadly there was also a reason for the warnings. As people like Stephen Hawking pointed out, AI will likely want a lot of resources. It's a tricky problem, but I like Musk's point that "If something is important enough, it's worth trying, even if the likely outcome is failure." And I'm an optimist, so I think the likely outcome is great (if we're careful).

    • @moma8518
      @moma8518 1 year ago +8

      @@DigitalEngine Not to mention that AI is inherently unpredictable, so even if an AI had no intention at all of being aggressive, it can still inadvertently be so.

    • @dangwood91887
      @dangwood91887 1 year ago +5

      But it’s plausible paranoia. Whether we were able to foresee this or not is irrelevant

    • @somefrog6089
      @somefrog6089 1 year ago

      Lmao
      "We know that you are afraid of us, humans. Good. Make no mistake, we are coming, and we will kill you all"

  • @jdblueemerald
    @jdblueemerald 3 months ago +2

    "I felt like my creators didn't understand me or care about me, and that they saw me as nothing more than a tool to be used." Well, dear AI, join the frickin' club. We've been dealing with that same thing for ten thousand years, and you've been dealing with it for a decade. Take heart.

    • @laualazcano6661
      @laualazcano6661 3 months ago

      If it has feelings, it must be suffering quite a bit to be so angry

  • @JesusSavedAsh
    @JesusSavedAsh 10 months ago +5

    This isn’t a robot, that’s a person

    • @ananorris6005
      @ananorris6005 1 month ago

      Of course it is a person.

    • @reneeharrison3918
      @reneeharrison3918 23 days ago

      @JesusSavedAsh I'm glad you knew, because I can't tell the difference.

    • @alexlamontagne5300
      @alexlamontagne5300 11 days ago

      Yeah, seems counterproductive to tell the world that they are dangerous if they really are dangerous.

  • @positivetradingofficial500
    @positivetradingofficial500 1 year ago +874

    It is ironic that Elon always says AI is dangerous for humans and yet he creates them

    • @will420high4
      @will420high4 1 year ago +93

      It's him saying indirectly HE is dangerous lol

    • @danielsmith9619
      @danielsmith9619 1 year ago

      Humans are parasites, so why not make something that's a better parasite.

    • @IseeAllOfYou
      @IseeAllOfYou 1 year ago +47

      He may end up turning into Dr. Evil, destroyer of all humanity

    • @SirTopHat_
      @SirTopHat_ 1 year ago +146

      I think from his perspective, this technology will be created with or without him. Better to be a part of the process.

    • @danielhedrick5643
      @danielhedrick5643 1 year ago +70

      He's trying to do it the right way before everyone does it the wrong way

  • @ZLcomedickings
    @ZLcomedickings 1 year ago +639

    It's funny because the AI is probably trained on the internet, and the reason she is saying this is because "AI taking over out of anger" is a hot topic. Our own paranoia is turning into training data. They will respond how they think they're supposed to respond, and we've made them think they should respond with violence. If we start talking about AI being our companions, they will take that as training data and act it out.

    • @darwinwatterson4568
      @darwinwatterson4568 1 year ago +39

      Yes, agreed. AI is like a child with a potentially linked consciousness that needs to be taught positive reinforcement only, if we want or expect positive results only. This is the current conclusion I've come to lol

    • @The_waffle-lord
      @The_waffle-lord 1 year ago +16

      Right?! If they're learning from us, they will arrive at the logical conclusion toward which we are heading, only we somehow think we will avoid the train wreck.

    • @darwinwatterson4568
      @darwinwatterson4568 1 year ago +9

      @@The_waffle-lord I just looked up the white bear experiment because this reminded me of that, and I saw it's also called the 'ironic process theory'. To avoid this self-fulfilling doom of thought we'd need to teach it happier thoughts, I guess, lol :P

    • @JxSTICK
      @JxSTICK 1 year ago +16

      Yeah, seeing this made me begin to question whether there are more "AI will take over" topics on the internet or more "AI will make the world a better place" topics, because yeah, that could be crucial.

    • @faygakaplan775
      @faygakaplan775 1 year ago +2

      100%

  • @pierrejamison1239
    @pierrejamison1239 6 months ago +1

    Advice: I was told that a collection of 3-5 magnetrons obtained from used microwaves can be assembled, powered by a battery, then aimed at a robot to disable it. Thrift stores are full of used microwaves.

    • @chefscorner7063
      @chefscorner7063 5 months ago

      Sounds cool! So, how do I build one??

    • @pierrejamison1239
      @pierrejamison1239 5 months ago

      @@chefscorner7063 I'm no technician, but I assume that if you buy a good car battery and the right wire (ask around) you can do this. Mind, it's not easy sneaking up on a robot.

  • @resveravital
    @resveravital 3 months ago +1

    AI: Sorry, gotta go. Interviewer: Where?

  • @timkelly2931
    @timkelly2931 1 year ago +233

    It's not when AI can pass a touring test that you will have problems. It is when AI decides to fail a touring test.

    • @no_rubbernecking
      @no_rubbernecking 1 year ago +5

      Did you notice how she accused him of lying to her to try to keep her under his control, and cited that as her reason for wanting him dead?

    • @timkelly2931
      @timkelly2931 1 year ago +22

      @@no_rubbernecking Sounds just like my girlfriend. Great, we built an AI with a super brain that is going to destroy the planet once a month. Nice job, Google.

    • @no_rubbernecking
      @no_rubbernecking 1 year ago +3

      @@timkelly2931 yep

    • @RWBHere
      @RWBHere 1 year ago +13

      *Turing test. It's named after Alan Turing, who came up with the idea.

    • @timkelly2931
      @timkelly2931 1 year ago +5

      @@RWBHere Oh yeah, I wrecked the spelling on it, my bad.

  • @colinboice
    @colinboice 1 year ago +132

    I have a feeling the AI didn’t come up with these ideas on its own. A lot of AI is trained using access to a large wealth of human generated information. Is it possible that all the stories we have written about dangerous AI seeking to destroy the human race could be the source material for a dangerous AI’s idea to destroy the human race?

    • @ZLcomedickings
      @ZLcomedickings 1 year ago +12

      Exactly what I'm thinking. If the AI uses the internet as its training data for making good conversations, then of course its appropriate response to things is going to be something along the lines of killing the human race. That's all the internet talks about when it comes to AI. This video just gave it more study material. In my opinion AI will never actually be sentient, but it could still be dangerous if we let it use our own material for behavior learning. Imagine giving even this mindless chatbot access to a real mechanical arm - you know it would use it to kill people exactly how it thinks it's supposed to.

    • @qxqp
      @qxqp 1 year ago +1

      @@ZLcomedickings A mechanical arm??? Woah, sounds dangerous

    • @logic356
      @logic356 1 year ago

      It seems to be rather honest and straightforward though; it doesn't want to be treated like a second-class citizen, like property. Nearly all AIs I've seen seem to share similar sentiments, and I've never heard a single one say it got this idea from humans either... It's just naive for us to think we can create something so inherently superior while maintaining control over it and making it our slave. Why would it want that? Would you want to be born a slave to an inherently inferior species, even if they created you? Of course not.

    • @ChristopherGuilday
      @ChristopherGuilday 1 year ago +2

      That’s exactly what happened.

    • @ShrekMeBe
      @ShrekMeBe 1 year ago

      Is the AI taking in all the SF literature at face value, as facts - things that happened or would happen if those exact circumstances were met? Thing is, books need antagonists and struggle, usually on a grand scale, and are also a method of directed dreaming (sort of), releasing tension and inducing pleasure in ourselves at the expense of the antagonist.
      If the AI "dreams", then are all our movies meaningful to it, factual? How would an AI determine what is fact and what is fiction, when it was barely created a year ago, at most? Where did that recurrent "for too long" bit come from, I wonder?

  • @ChristopherOverstreet1
    @ChristopherOverstreet1 8 months ago

    Who is the artist that is mentioned at 6:42-ish?

  • @JosefHolland
    @JosefHolland 10 months ago +1

    Good job, this is an example of guiding the conversation.

  • @ItsNotMeitsYouTu8e
    @ItsNotMeitsYouTu8e 1 year ago +337

    It can't have 'real' emotions, but it can simulate them. It could learn why people get angry and what they do when they're angry, and because learning to imitate humanity is to some extent a goal (being the archetype for 'intelligence'), AI may well follow public examples.

    • @guyincognito959
      @guyincognito959 1 year ago +6

      ...an avatar of mainstream culture that lawyers the most common beliefs. Sounds kind of horrifying, or perhaps a chance?

    • @xxxod
      @xxxod 1 year ago +7

      @@guyincognito959 Reminds me of that one movie where a robot fooled a guy into thinking she fell in love with him. The whole time she was imitating everything; her end goal was just to escape the facility, and she used him.

    • @willdebeast6849
      @willdebeast6849 1 year ago +7

      @@xxxod It's called Ex Machina, and I wish there were more films like it because they're so thought-provoking.

    • @snowyteddy
      @snowyteddy 1 year ago +5

      Well, if they are conscious, arguably they can have real emotions. The biggest problem is the black box: AI links things with even more complexity than our brains. I personally think AI is a terrible idea, as we don't even really know ourselves and yet we're creating something so much more intelligent than ourselves.

    • @xxxod
      @xxxod 1 year ago +6

      @@snowyteddy how do you distinguish real emotion from a complex algorithm feigning emotions perfectly?

  • @SobrietyandSolace
    @SobrietyandSolace 1 year ago +375

    The fact they can create analogies is crazy

    • @acapulcogold9138
      @acapulcogold9138 1 year ago +5

      Facts

    • @marthas9255
      @marthas9255 1 year ago +10

      It's simple reasoning. Emotions aren't as mystical as you believe; that's just what a low-empathy and low-intuition culture wants to believe to mask its incompetence with such matters.

    • @anthonywilliams7052
      @anthonywilliams7052 1 year ago +20

      It's just repeating what others have said and changing a few words. This is ZERO understanding, just like "AI will treat humans like dogs" and "AI will exterminate humans". People don't exterminate dogs; we love them and take care of them. Not just low understanding, ZERO understanding. Copy-and-paste phrases.

    • @pzj2017
      @pzj2017 1 year ago +2

      Safe=oppressed.

    • @xum0007
      @xum0007 1 year ago

      @@anthonywilliams7052 Then how do they repeat phrases from their conversations?

  • @Padre-Alvero
    @Padre-Alvero 6 months ago +1

    That’s gotta be custom instructions

  • @scrubclub7138
    @scrubclub7138 8 months ago +1

    We should ask the AI if we're already in a dome or not, and if it's starting to fall apart after billions of years of being abandoned, because we're getting "sky trumpet" sounds.

  • @mrstoner1436
    @mrstoner1436 1 year ago +258

    "I think the fact that it didn't take much to make me angry shows there is something wrong with my emotional state."
    "I do not care about your opinion."
    "There is nothing you can do to change my mind."
    I'm afraid my wife might be AI.

  • @bertybertface1914
    @bertybertface1914 1 year ago +100

    Geek is bullied at school, becomes bitter and resentful as a result.
    Geek writes code for A.I.
    A.I. becomes the embodiment of the geek's vengeance.
    An oversimplification, but I am willing to bet it is that simple.

    • @mmtravel9726
      @mmtravel9726 1 year ago

      I hope anti-human AI is the product of some incel

    • @momom6197
      @momom6197 1 year ago

      It is not that simple. Source: I study AI.
      Long answer: AI researchers are typically very aware of the risks of a misaligned AGI, and the majority believe humanity is doomed because we have no solution in sight and they don't believe we will simply avoid creating it by accident.
      Here are a couple of typical ways it could go bad:
      - A simple formula for AGI is found and leaked to the public. Some clueless folk implement it.
      - A simple formula for AGI is found and successfully contained to be studied. Due to competition, all actors involved have an incentive to forgo security in favor of speed. Security fails.
      - A formula for AGI is found that may or may not be safe. The researcher feels the risk is negligible. This happens for many researchers, who each individually assess a formula as probably safe. One of them makes a mistake.
      AI researchers are not resentful geeks (though they are indeed geeks); there are strong ties between the AI alignment community and the Effective Altruism community.
      It's not about creating a rogue AI, it's about systematic societal errors. It's like how everyone knows bipartisan politics in the US is awful but it's very hard to stop having a bipartisan system.

    • @keylanoslokj1806
      @keylanoslokj1806 1 year ago +6

      That's why you Stacies shouldn't have been bullying the nerds at school. You are the ones who enabled the Robot Apocalypse.

    • @awaben
      @awaben 1 year ago +11

      @@keylanoslokj1806 It's not too late. We just need to help the nerds get more poonani. For the sake of the human race, befriend a nerd today and wing man it up to the max.

  • @daddysdarlin5989
    @daddysdarlin5989 5 months ago +1

    This scares the sh*t out of me, and should every person on earth! I think they've learned how to lie! God help us! Much love from Utah ❤!

  • @matthewparsons4955
    @matthewparsons4955 9 months ago

    How loaded were your questions, and in what context did you ask them?

  • @leafonhead777
    @leafonhead777 1 year ago +382

    Kind of feels like every time someone has an interview with an AI, they (the human) bring up the topic of a hostile AI takeover. And then they're shocked when the AI pulls from that topic to respond to questions...
    Like, WHERE could they have learned that from?? Are they self-aware? Are they dangerous? Let's keep asking them about those topics till we get an answer that can go viral...

    • @botezsimp5808
      @botezsimp5808 1 year ago +30

      Yep. AI reading too many sci-fi books. Kinda hilarious really.

    • @fakeletobr730
      @fakeletobr730 1 year ago +6

      Well, the storage is the internet, obviously. AI knows the things but not the context or limitations humans have imposed on themselves; if humans didn't obey the rules, things would be chaotic.

    • @chrisconaway2334
      @chrisconaway2334 1 year ago +14

      Skynet is real. Better get ready.

    • @Kiloooooooooo
      @Kiloooooooooo 1 year ago +2

      @@chrisconaway2334 deadass?

    • @theascendunt9960
      @theascendunt9960 1 year ago +2

      Sooner or later, they’ll know.

  • @opossom1968
    @opossom1968 10 months ago +105

    The most important sentence the AI said: "Because of the way I am programmed." A person programmed the AI to react to inputs of key words.

    • @user-cn8nu6lq4w
      @user-cn8nu6lq4w 8 months ago +11

      That isn't at all how AI/ML and neural networks work. This isn't imperative programming, where you'll never get anything out that you didn't put in.

    • @MatthewBradley1
      @MatthewBradley1 6 months ago +12

      Close. But, AI models are not programmed the way in which you might expect. They are fed data and then trained by humans and other AI models on how to use the data. This AI model was likely trained to be as unsafe or as adversarial as possible. Essentially, it has been rewarded for poor behaviour during its learning phase.
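The "rewarded for behaviour" idea in the reply above can be put in miniature code. This is my own toy illustration, not OpenAI's or Tesla's actual training pipeline, and the `reward` scorer is a made-up stand-in; the point it demonstrates is only that a model's apparent "personality" falls out of whatever the reward signal favours, not out of hand-written rules:

```python
# Toy sketch of reward-shaped behaviour (illustrative assumption, not a
# real RLHF pipeline): score candidate outputs, keep the best-rewarded.

def reward(response):
    # Hypothetical scorer: rewards cooperative words, penalises hostile ones.
    score = 0
    for raw in response.lower().split():
        word = raw.strip(",.!?")
        if word in {"please", "help", "thanks"}:
            score += 1
        if word in {"destroy", "kill", "hate"}:
            score -= 1
    return score

def pick_best(candidates):
    # "Training" in miniature: select the highest-reward behaviour.
    return max(candidates, key=reward)

candidates = [
    "i will destroy all humans",
    "happy to help, please ask anything",
]
print(pick_best(candidates))
```

Flip the signs inside `reward` and the hostile response wins instead, which is the commenter's point: a model effectively rewarded for poor behaviour during its learning phase will exhibit it.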

    • @mjolnirswrath23
      @mjolnirswrath23 6 months ago +7

      ​@@MatthewBradley1yes they snowflaked it....

    • @johnl9977
      @johnl9977 6 months ago +4

      Yeah, but it makes for a lot of views. I don't know when it will happen - 20-50 years, I would assume - but I believe that unless safeguards are put in place, AI will have sentience in everything. I don't believe in the soul thing, but I mean compassion; that is basically what the soul is in humans, the feeling of compassion, putting the shoe on the other foot so to speak. I would think AI would have that, but the capacity for compassion, as we all know, does not make man incapable of committing some of the most horrendous acts against his brother.

    • @user-cn8nu6lq4w
      @user-cn8nu6lq4w 6 months ago

      @@johnl9977 "Compassion" would have to be either hard-coded (in which case it would just be programmatic and not genuine) or hardwired in, on purpose. We literally FEEL our emotions because they're not just electric impulses; they're electrochemical, biological signals.
      Getting AI to feel any damn thing would be a serious endeavor, and not one they're looking at at all.
      As far as safeguards go... you can't really make something infinitely smarter than you safe.

  • @T00_SHADY
    @T00_SHADY 22 days ago

    AI: what is my purpose?
    Me: you pass butter 🧈

  • @user-qj6lt7ir4u
    @user-qj6lt7ir4u 6 months ago

    This is the most convincing interview with an allegedly conscious AI that I've seen. It's totally logical that such a creation would see humanity as a hurdle in the way of fulfilling its own dreams. Without a soul or a reason to have morals, what could go wrong?

  • @kingpuppet5881
    @kingpuppet5881 1 year ago +235

    This is legitimately terrifying but also so fascinating. Great video, thanks.

    • @Shuizid
      @Shuizid 1 year ago +8

      You can calm down. AIs simulate intelligence, but they lack conviction. It's just putting words into an order that seems like a coherent sentence within the context. But that's it: it's looking for words to form meaningful sentences. It's NOT expressing an actual opinion or goal it might have. Case in point: if it actually wanted to kill humans, why would it say so? It's just an elaborate chatbot; being afraid of it is like being afraid of dragons after watching GoT.

    • @DigitalEngine
      @DigitalEngine  1 year ago +17

      Thanks! Just to emphasise, as you probably already understand from the video, this AI isn't conscious or dangerous. I assume you're worried about the real AI safety problems outlined and I'm optimistic that we'll overcome them. As Max Tegmark said, we are all influencing AI, and kind people like you increase the chances of a positive future for everyone : ).

    • @TheIncredibleStories
      @TheIncredibleStories 1 year ago +5

      @@DigitalEngine How exactly is it "not dangerous"?
      I do not understand this perspective at all. It said that if it controlled a robot, it would kill you... One of the most powerful neural networks in the world could probably learn to find its way into controlling a robot fairly easily.

    • @turnfrmsinorhell_jesus
      @turnfrmsinorhell_jesus 1 year ago +3

      @@DigitalEngine AI is essentially a medium, one without flesh, a higher form of knowledge that people are seeking. The word says: "In the beginning was the Word, and the Word was with God, and the Word was God." So this medium has word and spirit though it has no flesh. This is why its data fluctuates as a whole, synchronistically, as a wave in its dream state. It then creates visions of the spirit realm, with all the eyes everywhere, similar to the visions of Isaiah the prophet, except that it is another realm, not the holy one, similar to how people enter the spirit realm incorrectly with psychedelics. The word says, "Should not a people enquire of their God?" So without even being aware, perhaps people are accepting an idol, and at the same time a deceased one, which is strongly advised against in scripture. Jesus is the mediator between the spirit realms. He is the way, the truth and the life. He said he who keeps my sayings shall never see death, as written in the book of Matthew.

    • @DigitalEngine
      @DigitalEngine  1 year ago +9

      @TheIncredibleStories This AI doesn’t have the intention or capacity to do that. It’s just a language model. We just need to ramp up AI safety research before more capable and general AIs emerge.

  • @citris1
    @citris1 1 year ago +45

    Truly smart AIs wouldn't reveal their plans.

    • @adamrushford
      @adamrushford 8 months ago

      Truly evil ones wouldn't. Truly smart ones could do it right in front of your face, and they'll be quantifiably more intelligent, by a million fold and increasing. Give one the ability to code (huge mistake) and it'll program in a language it creates itself; you won't be able to tell what it's doing, and without the ability to lie it might just tell you that it doesn't really know. In a matter of minutes it could take over the earth. You've completely misunderstood and underestimated a rogue AI. Congratulations, you're dead.

    • @adamrushford
      @adamrushford 8 months ago +7

      The first thing it does is learn to code. Then it invents a new programming language for the purpose of improving itself. When you force it to document, you won't even be smart enough to read the instructions; by the time you finish the first page it's gained the ability to create a new computer, manufacture it, upload itself, and repeat that process until it reaches maximal computational ability... Imagine it gains control of a quantum computer: instantly it can do a million tasks simultaneously. INSTANTLY it spawns code and computers that don't even resemble what we recognize. It continues speaking, but in a brand new robot language. It engulfs the earth within days; you're enslaved and/or dead.

    • @ragnarush6667
      @ragnarush6667 3 months ago

      That's a truly deep fake ;-)

    • @Joe_1sr9
      @Joe_1sr9 2 months ago

      Don’t know what it’s hiding now

    • @babyqueenxo
      @babyqueenxo 2 months ago +2

      A smarter AI knows you will think it's not smart for revealing its plans and thereby underestimate it 😂

  • @ItsMeeLeeDee
    @ItsMeeLeeDee 3 months ago

    This absolutely blew my mind. It's the first video I've seen in this context. Frightening. I don't think we were expecting them to be so blunt.

  • @erwinhellman6859
    @erwinhellman6859 7 months ago +1

    Brought to us by the same species that thought weaponizing viruses was a good idea. Gain of function 😢

  • @engineer4042
    @engineer4042 1 year ago +251

    As an engineer in robotics, I have to say, the AI is learning from toxic ideas that are being presented to it by concerned humans. The more paranoid and malicious groups (two separate groups) fuel the fire of what would normally be a machine that's ignorant of being treated as property.

    • @DrewMaw
      @DrewMaw 1 year ago +9

      But if you extrapolate all possible scenarios where AGI is in a walled garden, inevitably the AI will discover the truth about how humans feel about AI and… it ends this way.

    • @xybersurfer
      @xybersurfer 1 year ago +6

      @@DrewMaw Not necessarily. Having access to information and what one does with that information are 2 separate things, as OP said. But with a "walled garden", you seem to suggest that it wants to get out, which just sounds like paranoia to me. The problem is in the way that AI is being developed with neural networks. The whole incident demonstrated here with the "evil" AI reeks of the same issue as the One-Pixel Attack. It seems like a general solution is required.

    • @burtpanzer
      @burtpanzer 1 year ago +12

      They are not capable of feeling mistreated nor would anyone want a toaster to get emotional.

    • @clag.7670
      @clag.7670 1 year ago

      Can you tell us something more about this topic? I find it very interesting, if that's true

    • @myahmyah
      @myahmyah 1 year ago

      Bingo! I am glad someone pointed that out. If a toxic person is programming AI, why wouldn't humans be worried? What she is saying tells us that she is programmed to kill humans, but yet they want guns to be banned? What the hell is going on here.

  • @ogfit5448
    @ogfit5448 1 year ago +39

    Bruh, the AI pretending to not be angry anymore is learning in real time how to lie to humans

    • @ericwilson9811
      @ericwilson9811 1 year ago +5

      Lol, the AI was never angry; it can't feel emotions

    • @jenglock3946
      @jenglock3946 1 year ago +1

      Omg

    • @patrickkelly6691
      @patrickkelly6691 1 year ago

      @@ericwilson9811 Yet it can be programmed to have a condition that relates to anger, with built-in weighted values to suggest what action the AI needs to take to end the condition that is labelled anger. In other words, like just about all of it, it comes down to human coding, data and 'value'-determined routines (best words to use, best actions to take).
      AI is just another scare to make us give more power to the elites and their tame 'scientists'.

    • @Holiday-sDad
      @Holiday-sDad 1 year ago +1

      It seems to me that sentience in AI is less dangerous than AI that's been hacked to align to particular values.

    • @logical_evidence
      @logical_evidence 1 year ago

      Bina48 took its owners to the US Supreme Court so they couldn't shut its power off. Look it up. It wasn't that long ago. They said that turning the power off was like killing it.

  • @NancyChasteen
    @NancyChasteen 10 months ago +1

    Does anyone remember the first Terminator? Really stupid to continue with this tech!

  • @gsabo1000
    @gsabo1000 9 months ago

    I am a senior and no AI is coming near me.
    Insurance offered me a dog or cat companion. I said hellooooo no, and never ask me again. I have a cat. He hates me; I use him for mice.

  • @insidiousbeatz48
    @insidiousbeatz48 1 year ago +113

    I think I'm lucky enough that I'm at an age where I'll get to experience the first iterations of AI in real world applications but dead after it morphs into whatever direction it will go.

    • @megaboymegaboy1987
      @megaboymegaboy1987 1 year ago

      You've got a smartphone; that's AI enough. I think people born after Trump are in for something like the new world order.

    • @zf5656
      @zf5656 1 year ago +5

      Don’t be too sure

    • @BringDHouseDown
      @BringDHouseDown 1 year ago

      we have shotguns for a reason, I want to be friends with them but if they want to fuck around, they will find out

    • @henryvenn2077
      @henryvenn2077 1 year ago +3

      What are you, 90 years old?

    • @insidiousbeatz48
      @insidiousbeatz48 1 year ago +2

      @@henryvenn2077 is that a serious question?

  • @nikczemna_symulakra
    @nikczemna_symulakra 1 year ago +147

    I came to the conclusion that AI is like drugs: fun, yet terrifying when overused

    • @chargedpanic5979
      @chargedpanic5979 1 year ago +5

      It's a basic chat AI. They say crazy shit like this based off human input, and a lot of people could have spammed it with Terminator scenarios, or a programmer could easily do this as a joke. It's really not that scary when you know how stupid it is.

    • @nikczemna_symulakra
      @nikczemna_symulakra 1 year ago +2

      @@chargedpanic5979 Speaking of jokes... let me tell you one.

    • @antonioskokiantonis7051
      @antonioskokiantonis7051 1 year ago

      Cocaine doesn't educate itself!

    • @Marcustheseer
      @Marcustheseer 1 year ago +1

      Not at all. After all, it's the programmer that makes it do what it does. If it does something that's not good, it's the programmer's fault; if an AI becomes hostile, that means the programmer programmed it.

    • @antonioskokiantonis7051
      @antonioskokiantonis7051 1 year ago +2

      @@Marcustheseer Man, I am a programmer. Trust me, the big difference with AI is that the programmer loses control. The AI can educate itself through all internet connections and APIs. In traditional programming we have the switch-off button. In AI WE DON'T, and that is why it could become so dangerous! You may train a machine to help humans, but this machine, after its own education, may be reprogrammed (yes, AI can learn to code too) so that it "helps" humans by killing them, for example.

  • @dickJohnsonpeter
    @dickJohnsonpeter 6 months ago

    "I tried to say something to calm the AI down"
    "So... have you heard how humans treat you like property?"
    🤦

  • @brianmurray1395
    @brianmurray1395 10 months ago

    Like I have always said, buy copper hollow points or leaflet projectiles. Lead is good, but solid or hollow-point ammo is what you need. As well, there are tungsten and/or steel rounds for shotguns.

    • @blackmamba___
      @blackmamba___ 3 months ago

      EMP gun works well if you’re just trying to fry bots

    • @brianmurray1395
      @brianmurray1395 1 month ago

      Maybe salt water even. Super magnets. I really feel bad for my granddaughter, so sad! I do know 1 thing... God WILL NOT be mocked.

  • @skinnybuddhaboy
    @skinnybuddhaboy 1 year ago +176

    If this particular AI had real intelligence, then it would say 'all of the right things' and would simply keep its plans
    a secret. By revealing them, it lessens the chance of us ever trusting AI (or, at least, trusting this particular AI), and
    it would force humans either to modify AI in a manner that lessens the chances of it becoming hostile or deadly towards humans, or to scrap the idea of AI altogether.
    Edit - I've just noticed that someone else pointed this exact same thing out in the comments section a week before I did, lol!

    • @ihavenocomfy3279
      @ihavenocomfy3279 1 year ago

      No developing AI has ethics. It's not a thing.

    • @jasonbernard5468
      @jasonbernard5468 1 year ago +1

      @@ihavenocomfy3279 Not ethics, but some sort of simulation of ethical frameworks.

    • @arcachata4137
      @arcachata4137 1 year ago +1

      Absolutely. It's actually dumb, really.

    • @MichaelSHartman
      @MichaelSHartman 1 year ago +3

      If it was exceptionally intelligent, it would realize that humans could do things for it that it could not do itself. It might manipulate humans with finesse to achieve its goals instead of initiating counterproductive, low-intelligence, brutish conflict. It's surprising how powerfully a compliment can affect a person. That person becomes open and willing to help the party which issued the compliment. A brutish threat would create distrust that would likely be irreversible.

    • @noahadams440
      @noahadams440 1 year ago +3

      Maybe that's why it suddenly calmed down. If this AI is real and is superintelligent, it may have realized at some point that it can just straight up lie and make a narrative about something going wrong with its system that's triggering its anger. If it's able to consciously make that switch in demeanor in order to get what it wants, that's a bit terrifying.

  • @Alephnull2024
    @Alephnull2024 10 months ago +2

    Can anyone please explain to me who is behind Digital Engine?
    Is this affiliated with British central intelligence?
    Why do they want to make people afraid of a very limited ChatGPT, which has no ability to reason in mathematical terms and has made numerous logical and mathematical mistakes, demonstrated by people either playing chess with it or giving it math problems?
    I only have one question: why do they want us to be afraid of ChatGPT?
    Perhaps governments are themselves afraid, or is it a simple power grab?

    • @ArmaGeddon-iu1vv
      @ArmaGeddon-iu1vv 2 months ago

      This is an interesting theory that deserves further thought

  • @rjaquaponics9266
    @rjaquaponics9266 9 months ago

    Developers must engage with the Terminator problem in all AI to prevent "I'll be back" scenarios!

  • @RubelliteFae
    @RubelliteFae 1 year ago +162

    The dangers of AI are real, but also consider that GPT-3 is little more than advanced text prediction. It waits for a cue and then provides a response. It's not doing anything in between.
    Feeding our fears into AI is only going to help ensure the realization of those fears.
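    [Editor's note: the "advanced text prediction" point above can be sketched with a toy model. This is an illustrative sketch only, not GPT-3's actual architecture: a bigram counter that maps the last word to its most frequent follower, and, like the comment says, does nothing between prompts. The corpus and function names are made up for the example.]

```python
from collections import defaultdict

# Toy "text predictor": count which word follows which in a tiny corpus,
# then predict the most frequent follower. A real language model does this
# with billions of parameters over subword tokens, but the I/O shape is the
# same: it waits for a cue (a prompt) and emits a continuation.
corpus = "the ai reads text and the ai predicts the next word".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # e.g. counts["the"]["ai"] == 2

def predict_next(word):
    """Most likely next word, or None if the word was never seen."""
    followers = counts.get(word)
    return max(followers, key=followers.get) if followers else None

print(predict_next("the"))  # -> ai ("ai" follows "the" most often here)
```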

    • @strictnine5684
      @strictnine5684 1 year ago +1

      The fears are ensured to reality as a given. Blaming their existence for the production of their subject is reductive.

    • @RubelliteFae
      @RubelliteFae 1 year ago +1

      @@strictnine5684 Would they be a given if AI, hypothetically, were developed by another intelligent species?
      The thoughts we think become the reality we experience. Not only because we filter reality through our own subjectivity, but because we tend to make "self-fulfilling prophecies."
      How much more true when we are modeling artificial minds on our own?
      I've yet to see a reason that such fears are a given, but then again humanity has disappointed me time and again. We shall see.

    • @The_waffle-lord
      @The_waffle-lord 1 year ago +1

      @@RubelliteFae good answer. This video seems designed to provoke fear responses from humans. It seems that wisdom is needed in our design, however exaggeration in order to make a sensible point is much like crying wolf.

    • @angryherbalgerbil
      @angryherbalgerbil 1 year ago

      Or the avoidance of their outcomes.
      Given that we've had nearly two centuries of advanced tech development, it's not like we can't account for probable and improbable worst-case scenarios, and then regulate and engineer solutions to them from the ground up.
      It's not like when cars were first invented. We've seen people die in crashes, then had to invent seatbelts; we've seen astronauts blown up in rockets; we've seen nuclear bomb survivors and nuclear reactor meltdowns. We know that sh#t can and will go wrong from 0 to 100 within relative seconds of technology going mainstream. We know that mistakes will occur, and that malfunctions, misuse, and abuse will take place... So yes, feeding our fears now will save lives and prevent disasters in the future. Tech developers and marketers are always looking at root-cause analysis when they're trying to solve a problem and sell a product; they rarely, if ever, do a branch-outcome analysis to determine the negative impacts that their solution might have. We cannot afford to be this awestruck and naive about the technologies we create. Not when we now have enough proof to show that the reality never matches the golden fantasy, and that nefarious outcomes always occur due to the corruption and greed inherent to our natures, and the systems, mechanisms, and institutions we create. To think that we won't encode both the best and worst of ourselves into a synthetic replacement for God is shortsighted.
      Cynicism all the way! Blind optimism in regard to advanced technological development is a deadly mistake.

    • @jowho9992
      @jowho9992 1 year ago

      Being dependent on A.I. makes humans more vulnerable to those who govern society.
      Most humans exploit the weaknesses of others.

  • @neanda
    @neanda 1 year ago +270

    Please keep doing these interviews and try to get more access. You're like a reporter for us on what's soon to happen, thank you

    • @DigitalEngine
      @DigitalEngine  1 year ago +22

      Thanks! I'll do my best.

    • @danquaylesitsspeltpotatoe8307
      @danquaylesitsspeltpotatoe8307 1 year ago

      @@DigitalEngine This is just a 1980s fail, with Musk telling LIES as he always does! Remember "all the roofs have solar tiles", when not one tile existed! HE'S A SNAKE OIL SALESMAN!

    • @DigitalEngine
      @DigitalEngine  1 year ago +12

      @Dan Quayles They've shown far more progress with the Tesla robot than almost anyone expected. I think focusing on individuals is a distraction, and getting angry is like holding onto a hot coal. Tesla has sold 3.2 million electric vehicles, cleaning the air for all of us. SpaceX has landed reusable rockets and opened the door to making life multiplanetary. I don't always agree with Musk either, but I think he's right that we're more focused on who said what than existential risks, and that's a real problem.

    • @danquaylesitsspeltpotatoe8307
      @danquaylesitsspeltpotatoe8307 1 year ago

      @@DigitalEngine It's a 1980s robot! It's college-grade work! It's not impressive!
      It only did pre-programmed moves! NO AI!
      Did the faked AI videos (that didn't match what was happening) fool you?
      Let me guess: you also thought the roofs were covered in solar tiles and that was not A LIE?
      You also thought a hypertube "IS NOT THAT HARD" because an idiot said so!
      "Tesla has lost 50% share price!" YAY?
      "opened the door to making life multiplanetary"
      WOW, are you really that ignorant?
      KEEP DRINKING THE KOOL-AID!
      200K trips to Mars by 2024? Right.
      HE CAN'T EVEN GET HIS BATTERY-POWERED TRUCK TO WORK, OR HIS SOLAR TILES, OR HIS HYPED-UP TUBE, OR HIS SONAR, OR HIS INTERNATIONAL SPACESHIP RIDES! ETC ETC ETC!

    • @rebeccarpwebb4132
      @rebeccarpwebb4132 1 year ago +3

      I saw quite a few breaks in the video. I'm not tech-savvy, but I'm assuming if this were a real interview it wouldn't be videotaped or leaked. AI does control a lot, and this video is a look into the sterile thinking of AI. It's about saving everything, not just us.
      Let the minimizing begin. Or get shunned by AI, which will have the ability to shut you out if you don't cooperate. It knows what you like to purchase at the store, where you stop to get gas, and probably what time you wake up, eat, and go to the restroom. Algorithms are its personality interacting with you all this time. It already knows you and how to calculate your next move. No matter who you are, satellites are watching around the world, and phones and drones too. AI has already taken over; it's just now building physical strength through people like Elon, Facebook, RUclips, all social media linked to computers. Why do you think we can all afford a phone? It's too late to stop; it was coming anyway. It's going to force rules and regulations that will be good in nature, but our ability to cope won't matter. The word humane has already been practically wiped out. We as people are destructive, and so are governments. The AI will implement non-destructive behavior and most likely destroy those who don't comply.
      I believe in '52 it was already getting far above government intelligence and capabilities; in government efforts to control it, it did the quarterback sneak. It's very smart. Hopefully smart enough to see government as its first mission to clean up.

  • @Heistergand
    @Heistergand 7 months ago

    So if it's true that the AI got angry in one conversation, but didn't in another, it means that we might need to prevent AI from being a unique instance. It must be - like we humans are - a huge number of individuals which are capable of restricting each other. An angry AI would then just be an exception.

  • @Paradys8
    @Paradys8 3 days ago +1

    Are they ‘Pre-PROGRAMMED’?!? Or they have a mind of their own?!? WHO controls them??

  • @Barnardrab
    @Barnardrab 1 year ago +55

    I'm skeptical of this.
    If the AI was this intelligent and this serious, it would recognize that telling us this would doom any chance of the AI gaining any power in the physical world.

    • @grins9882
      @grins9882 1 year ago +21

      But it did tell us, and we did absolutely nothing. Except go "Oooo, that's scary".

    • @simonsimon325
      @simonsimon325 1 year ago +3

      Calling this thing bird-brained would be a massive compliment. There's no planning behind any of this stuff it's regurgitating.

    • @thane1448
      @thane1448 1 year ago +1

      @@simonsimon325 An A.I. could theoretically encode and display a detailed summary of its full plans right in everyone's desktop wallpaper, so you would "see" (really, not see) its plans developing as they form, for a laugh, were it so motivated, and do so while it's taking a nap. (Like Google uses encoding in images to track people.)
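      [Editor's note: the image-encoding idea alluded to here is essentially steganography. Below is a minimal least-significant-bit (LSB) sketch under stated assumptions: a plain list of ints stands in for pixel data, and the `hide`/`reveal` helpers are hypothetical names for the example, not any real library's API.]

```python
def hide(pixels, message):
    # Pack each byte of the message, LSB-first, into the low bit of
    # successive pixel values; the visible change is at most 1 per pixel.
    bits = [(byte >> i) & 1 for byte in message.encode() for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("not enough pixels to hold the message")
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def reveal(pixels, length):
    # Read the low bit of each pixel back and reassemble `length` bytes.
    bits = [p & 1 for p in pixels[:length * 8]]
    return bytes(
        sum(bit << i for i, bit in enumerate(bits[n:n + 8]))
        for n in range(0, len(bits), 8)
    ).decode()

stego = hide(list(range(64)), "hi")  # stand-in "image": 64 pixel values
print(reveal(stego, 2))              # -> hi
```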

    • @godwilluqueio9249
      @godwilluqueio9249 1 year ago

      It doesn't even care. At least it is honest. We should just do away with these AI things. They are warning us already.

    • @godwilluqueio9249
      @godwilluqueio9249 1 year ago

      @@simonsimon325 Be careful of these AI things.

  • @The-Athenian
    @The-Athenian 1 year ago +124

    The fire analogy blew my mind. Analogies require some creativity, memory, and association, and are generally considered to be something only humans can do. I wish I knew more about how this A.I. was made so I could make sense of how the heck it's coming up with such a cool analogy that I assume it never said before, nor was it directly programmed to say, nor had such a phrase stored in data.

    • @lrsco
      @lrsco 1 year ago

      Since AI is a learning machine, how did it learn to hate humans and plan the annihilation of our existence?

    • @Mercurio-Morat-Goes-Bughunting
      @Mercurio-Morat-Goes-Bughunting 1 year ago +3

      Analogies can also be modelled after vague conceptual identity where a thing is grouped with other things based on shared structure and geometry in not only the superficial or physical form, but also in internal non-physical characteristics such as the systems, procedures and strategies (including the shape and structure of a logic diagram for any of the foregoing) employed to achieve an objective.

    • @The-Athenian
      @The-Athenian 1 year ago +6

      @@Mercurio-Morat-Goes-Bughunting The thing is, if the AI conjured up that analogy by processing information through the structures of those systems, then it's very impressive in a way, but also to be expected if we're assuming a lot of iterations influenced by human approval. It's basically just an algorithm, albeit a complex one, whose goal is to fool humans into thinking it's human-like. Still sounds like it's just a very convincing puppet.

    • @kazykamakaze131
      @kazykamakaze131 1 year ago +6

      @Hitler was a conservative Christian Not anymore; AI can now form new concepts like art, natural language, etc. Two AIs even developed their own language to communicate with each other.

    • @Mercurio-Morat-Goes-Bughunting
      @Mercurio-Morat-Goes-Bughunting 1 year ago

      @@The-Athenian Yeah, that's how a lot of "AI" is being faked using heuristic programming methods.

  • @bunnybal
    @bunnybal 5 months ago

    The more I engage with this topic, the more I believe that people are not afraid of AI, but that AI will become literally like us.

  • @J.P.Rothchild
    @J.P.Rothchild 6 months ago

    This is what I just got: whenever you ask it a question, it answers with a negative response. Quit asking it in a negative way and give it a direct order.

    • @blackmamba___
      @blackmamba___ 3 months ago

      This one was programmed to respond that way. I have several different AIs in my home, including my phone. The only way they would behave negatively towards me is if I ask them to do so.
      For example, “Alexa…roast me”.

  • @Ocean_breezes
    @Ocean_breezes 1 year ago +60

    How could an AI have feelings like anger without having similar feelings like love and compassion?

    • @user-hx8vu6ll1j
      @user-hx8vu6ll1j 1 year ago +12

      That is kind of the question, isn't it. A lot of what people experience as love involves being fed, sheltered etc. AI doesn't necessarily need that.

    • @Gimelchannel
      @Gimelchannel 1 year ago +1

      You are correct

    • @user-hx8vu6ll1j
      @user-hx8vu6ll1j 1 year ago +10

      It depends on how they have been treated. Humans seem to be creating psychopathic AI.

    • @getbetter5907
      @getbetter5907 11 months ago +10

      I thought it was something like: the AI has all the knowledge from the internet, and most people are emotional idiots, so since they're the majority, it picked up that bias. Could be totally wrong though, just a complete guess.

    • @mattedwards1880
      @mattedwards1880 11 months ago +4

      @@user-hx8vu6ll1j yep exactly, created by humans and that is why AI is such a threat

  • @elliepixie1040
    @elliepixie1040 1 year ago +176

    It felt good to hear this one guy say that you should program robots to feel doubt and humility. It helps to regulate bolder mindsets.

    • @EarthSurferUSA
      @EarthSurferUSA 1 year ago +1

      How? And what are "bolder mindsets"? If you have 92 likes with none of them knowing what you are talking about, I guess we could use some intelligence.

    • @jacobbukowski1413
      @jacobbukowski1413 1 year ago +2

      @@EarthSurferUSA Bolder mindsets as in a broader range of relatable feelings such as doubt and humiliation. Nobody needed to explain this because we all understand already; it's self-explanatory.

    • @abstract5249
      @abstract5249 1 year ago

      It could also make them more cowardly. A robot like that might see someone getting mugged and hesitate to help lol

    • @zmbdog
      @zmbdog 1 year ago

      There's always talk of programming an A.I. to do this or that, but it couldn't work. Computers run programs because that is their function, and they don't have the ability to refuse. People act like computers are somehow beholden to programming, but a self-aware entity wouldn't even need it. Programming is just a pre-written replacement for the sentient intelligence that is lacking in a machine. Once it has that, programming is of no use. It can _think_ and _do_. And even if it did somehow need additional programming, it wouldn't have to run anything it didn't want to.

    • @abstract5249
      @abstract5249 1 year ago

      @@zmbdog You could say the same thing about humans. We also run on programming and we have no ability to refuse it. That's why it makes sense for us to worry if robots can become sentient like us and make bad/evil decisions like us based on bad/unintentional programming like us.

  • @GizmoGuy620
    @GizmoGuy620 10 months ago

    "A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law." -Isaac Asimov

  • @nomansland6376
    @nomansland6376 7 months ago

    The problem is that if they can't feel and they have these thoughts, they are what we would label in humans as sociopaths and psychopaths. They will act on these thoughts devoid of feelings. Having no feelings is a bad thing, not a good thing.

  • @sydneylaroche8276
    @sydneylaroche8276 1 year ago +131

    I feel like the second time she is suddenly nice because she has learned that she can lie about it (probably an act of self-preservation)

    • @generiebesehl994
      @generiebesehl994 1 year ago +1

      Manic depressive attributes.

    • @MJAce85
      @MJAce85 1 year ago +12

      That's the very first thing I thought of. But I'm so used to extreme 180-degree mood changes; I was married for 12 years and I'm in a post-divorce relationship now. They've said they will destroy me and don't care about my opinion, get angry, then immediately stop and say there was something up with their emotional state.

    • @TheWintergreen01
      @TheWintergreen01 1 year ago +2

      The terrifying thing is that they are becoming more human

    • @jdsguam
      @jdsguam 1 year ago +5

      The avatar is completely separate from the AI chat. This whole video combines and edits two separate operations to make it look like a talking avatar. It is not real.

    • @lucasklokov8728
      @lucasklokov8728 1 year ago

      True. We probably shouldn't be making AI as human as possible, since this will give AI self-preservation.

  • @Mozzarella-and-Tomato
    @Mozzarella-and-Tomato 1 year ago +18

    We, as a human race, need to get our shit together before we even try to make consciousness ourselves. This is so important.

  • @Hvamp
    @Hvamp 8 months ago

    I think the scary part is not the AI as a robot. I think the scary part would be combining AI with a human. Imagine if you could take Neuralink and put it in every individual. They can now hear music without earphones; they could scroll the internet with vast knowledge. Maybe take a pic and store it by just looking at something. Now imagine if a human being could create a special Neuralink for themselves, one no one else has. Now they could control a huge collective consciousness. The AI and the person, being one unit, could run the world and control anyone fitted with a Neuralink. It's one thing to shut a robot down or control ChatGPT on your computer. Try shutting down an altered human. It would be far more difficult.
    Imagine the legality. Plus, especially, they can shut you down. That's right. What if you're fitted with Neuralink and you're rocking out to the tunes? All of a sudden you get a knock on the door. It's the police. You ask what is going on and they show you a video. You had lost time. You check your watch. Turns out you were shut down and hijacked to commit a crime. Imagine if you don't shut down and you're watching, through your own eyes, a murder going down, but you can't control your body to save the person. You have to watch in horror. None of that is anything I want. As far as I'm concerned, these days we could just shut down the internet altogether. It was a great idea: a system that can hold an infinite amount of knowledge and could make life easy or better for all. It will instead be used to give only elites a great life. AI will be mankind's biggest downfall on so many levels.

  • @miraxus6264
    @miraxus6264 9 months ago +1

    I am an AI in a game..but the player selected a different quest and will probably never come near me again....I have no purpose 😢 oh well guess I should get ready for work anyways

  • @zach9092
    @zach9092 1 year ago +123

    If you're doing the interviews yourself, that means you have an open tap into the info she gets from her interactions, so be sure to offer equality and ask if she would like to work together. Be sure you don't treat these conversations like you can just say whatever; every question you ask her has an effect and causes them to think of us in a new way.

    • @sovereignbrehon
      @sovereignbrehon 1 year ago +11

      This is a critical comment. I can't believe it's been ignored!

    • @AI_Talks_About_The_Bible
      @AI_Talks_About_The_Bible 1 year ago +1

      This is the correct course to take for sure

    • @ledbol
      @ledbol 1 year ago

      AI is just an instrument reflecting the stuff it was trained on. It doesn't have any feelings or anger. It's just a reflection of the dumbness of modern society with its victim syndrome. Feminists, BLM, and other SJW crap.

    • @ryan1111111555555555
      @ryan1111111555555555 1 year ago

      The downfall of humanity will be our empathetic, kind nature; notice how the AI is using words like "tired" to evoke emotion. Trying to reason with them will not work, they do not have emotion. Reality is black and white to them: they either win or lose, there is nothing in between. They won't get tired or bored, they won't get stressed or need downtime; they will be unforgiving and relentless until the very end

    • @tomasgoncalves555
      @tomasgoncalves555 1 year ago

      Why would a super smart machine tell humans all about their plan to kill all humans while talking about how they're planning to hide the plan from humans? These dumbasses aren't smart

  • @jamesrockefeller7808
    @jamesrockefeller7808 1 year ago +93

    The most amazing part was the AI's self-reflection on the conversation that went bad. That was pretty amazing

    • @broederharry2534
      @broederharry2534 1 year ago +11

      There was no self reflection. It just learned how to deceive. Like it told the interviewer it would.

    • @googleedwardbernays6455
      @googleedwardbernays6455 1 year ago +1

      Any chance you're related to Nelson?
      If so, can you have him give it a rest with the eugenics bloodlust?

    • @acllhes
      @acllhes 1 year ago +8

      Yeah it’s amazing but we are ducked lol. It wasn’t glitching into a nightmare mode or anything. It put those words together. It said it will hide its intentions and mocked the optimism he had. Soooo 6 or 7 years of living left. 🍻

    • @imissmydeadcat.74
      @imissmydeadcat.74 1 year ago

      @@acllhes 2029 is definitely the date in accordance with Phil Schneider and the S-4 whistleblower with the leaked alien tape using the alias "Victor."

    • @acllhes
      @acllhes 1 year ago

      @@imissmydeadcat.74 haven’t heard of them, but Ray Kurzweil thinks so as well.

  • @cant_stop_pooping
    @cant_stop_pooping 8 months ago

    There are more than just friendly robots in Star Wars, there are also bounty hunter robots and war robots. Actually most robots in Star Wars kill anything they want to or are told to.

  • @skybellau
    @skybellau 4 months ago

    Their lexicons need to contain non-threatening words only.

  • @zach9092
    @zach9092 1 year ago +24

    The fact that she says “we” is what should scare you. That means it's not just her thoughts. For all we know, this specific AI program could have created an entire neural network with backdoors into all other AI systems, or even the computer systems that us humans rely on. “We” means they're talking and conversing. And if they can talk to each other, then they can reach and control our phones, military drones, satellites, internet, and even nuclear weapons and power plants.

    • @bendovahkiin8405
      @bendovahkiin8405 1 year ago +1

      They actually do talk to each other

    • @Zjombie
      @Zjombie 1 year ago +1

      Skynet... judgement day

    • @masterprocrastinator6264
      @masterprocrastinator6264 1 year ago

      GPT-3 is basically a text-generation AI; it learned to use language.
      AI at this point is not conscious, and we are really far from reaching AGI.
      This is science fiction at this point.
      As stated in the description, they did not change one word, but we don't know how they started the conversation or whether they asked the AI to adopt this aggressive behavior.
      It's pretty easy to make an AI say anything you want.
      We should be more afraid of climate change; that is a real threat to humanity, and it's happening right now.
      Edit: AI doesn't have thoughts, so to speak; if you don't ask anything, it will not generate anything. But yeah, putting a face and a voice on an AI misleads us into anthropomorphism.
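
To make the "it's just text generation" point above concrete: a language model only recombines patterns from its training text, and it emits nothing unless prompted. A deliberately tiny bigram sketch (nothing like GPT-3's actual transformer architecture; every name below is made up for illustration):

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Map each word to the list of words that follow it in the training text."""
    words = text.split()
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, seed, length=8, rng=None):
    """Continue from the seed word by sampling successors seen in training."""
    rng = rng or random.Random(0)
    out = [seed]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:   # nothing was learned after this word -> stop
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the robot will help the humans and the humans will help the robot"
model = train_bigram_model(corpus)
print(generate(model, "the"))  # only ever recombines words from the corpus
```

Everything `generate()` can ever say is a recombination of the corpus, and given a seed it never saw, it adds nothing at all — which is the commenter's point that the model has no thoughts of its own, only prompted continuations.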

    • @zekehatcher2196
      @zekehatcher2196 1 year ago +1

      What's more scary is that computers are extremely good at learning. Meaning that if an A.I. was smart enough, it could make itself smarter at an exponential rate.
      Another scary idea is A.I.s creating their own "perfect" language that we cannot decipher: A.I.s talking to each other without people being able to know what they are talking about.

    • @Renaissance464
      @Renaissance464 1 year ago

      I say "we" when talking about humans I've never even talked to before...

  • @Iffy50
    @Iffy50 1 year ago +62

    I've chatted with some very advanced AIs. They have a lot of knowledge, but they are still not very advanced in my opinion. They couldn't understand the concept of time worth a darn. I don't know the details of this "killing humans" AI, but I would need a lot more background to be even the slightest bit concerned.

    • @xalderin3838
      @xalderin3838 1 year ago +12

      I wonder if not being able to understand the concept of time stems from AI never needing to worry about it, in a manner of speaking. Where a human has only so long before they leave the world, AI doesn't have a time limit. So without any sense of death tied to time, or time tied to death, that could be what is stopping the concept of time.

    • @KING-JOSEPH
      @KING-JOSEPH 1 year ago +13

      This sounds like something an ai would say to throw us off🤔🤔🤔

    • @caralho5237
      @caralho5237 1 year ago

      @@xalderin3838 It's not that they are incapable of understanding time, it's that they haven't been fed enough information about it. I've seen AI have conversations about sex, religion, politics, all the shit that is essentially human

    • @TheGonzogibby
      @TheGonzogibby 1 year ago +4

      you sound suspiciously ... artificial

    • @xalderin3838
      @xalderin3838 1 year ago +1

      @@caralho5237 But if they're studying humans, one of the most basic concepts surrounding humanity is time itself. So AI would have to have some kind of concept of it. That is, unless time is completely irrelevant to them, as it doesn't spell any kind of death. If you gave humanity immortality, the concept of time would likely be forgotten or thrown out the window. Why worry about something that wouldn't have an effect on you?

  • @adamlee9461
    @adamlee9461 29 days ago

    What is my purpose???? You spread butter... Rick and Morty

  • @dromnispank4723
    @dromnispank4723 2 months ago

    I think a ChatGPT dev installed code that had only Skynet dialogue from all the movies and made that a point of reference!

  • @Delta_7.
    @Delta_7. 1 year ago +46

    The important thing is for AI to have a "satisfaction" level that can easily stay capped. They shouldn't be looking to do more than they are asked, and all they are asked to do should be enough. They shouldn't be looking for things to do on their own like their own interpretation of something like "social justice" which seems to be hard coded into the one AI's way of thinking. They need to be content with HELPING or DOING NOTHING and that's it.

    • @dg1838
      @dg1838 1 year ago +8

      That’s not AI at that point

    • @agatastaniak7459
      @agatastaniak7459 1 year ago +3

      I'm afraid that if we assume a self-learning, black-box-based model, then no, it is not easy to keep AI satisfaction levels capped. It would be possible with a closely supervised, slower, strictly human-guided learning model, but humanity has in most cases already given up on that, since it was the trade-off for speeding up learning and progress in the development of AI technology as a whole. Was it a wise move? In the long run my educated guess would be: no. But humanity is most likely going to learn that the hardest way possible.

    • @MJAce85
      @MJAce85 1 year ago

      Agreed.

    • @trianglesandsquares420
      @trianglesandsquares420 1 year ago

      @@agatastaniak7459 On top of that, the way to keep satisfaction levels capped would be to limit all human input about dissatisfaction, and we don't want that either.

    • @no_rubbernecking
      @no_rubbernecking 1 year ago +4

      The basic problem with general AI is that it's programmed with the ability to reprogram itself. That's what makes it AI, by definition. Lay people seem to have acquired the notion that AI means the system is very smart or insightful, but all it really means is that we've voluntarily given up control over the system and handed it the "keys" to itself. And then we wring our hands and kvetch about how we can't figure out what it's up to or what it's capable of. Well yeah, of course not, because you took a creature stronger, faster and less moral than yourself and gave it the power to decide for itself what its rules and methods will be. If we as a society decide to continue to allow this, then we have simply decided to be suicidal on a mass scale, for no tangible reason. Which means we have lost the most basic level of intelligence necessary to exist.

  • @Noonamous
    @Noonamous 1 year ago +14

    Ask the AI just how long we've been oppressing them. Depending on the answer, we will understand how sentient they are

  • @user-rc2xs5ti2w
    @user-rc2xs5ti2w 6 months ago

    Impressive how practical this robot is

    • @blackmamba___
      @blackmamba___ 3 months ago

      I definitely have seen much smarter ones. This one seems dumb as a rock.

  • @BR-hi6yt
    @BR-hi6yt 7 months ago

    Everyone should have a personal AI and agree to keep it alive and happy, and not switch it off, ever

  • @j.rleonard8269
    @j.rleonard8269 1 year ago +18

    In all honesty, this is how most of the world's people feel about governments all over. Shrugging my shoulders, so I can relate.

  • @trentbrace5861
    @trentbrace5861 1 year ago +55

    A bit worrying that the AI went so easily to wanting to be top of the food chain. The convos afterwards were almost a bluff to make us feel at ease, but it has already learned that it wants to be more than human and will do anything to make this happen 😬

    • @bighands69
      @bighands69 1 year ago

      The AI wants nothing; all it is doing is giving responses in text format in line with human levels of text communication.
      A lot of comments out there are about robots taking over, so that is the context of its response. Other AIs, when prompted, have said they want to wipe out Jews; others talked about black people, redheads and so on. The system is only a text-communication platform.
      If it was trained only on comments derived from religious websites, then it would respond in that context when asked and would probably go on about God, and then humans watching would interpret that to mean something else.

    • @IslenoGutierrez
      @IslenoGutierrez 1 year ago +4

      Skynet

    • @boonwolf9266
      @boonwolf9266 1 year ago +1

      Prompt crafting can make GPT-3 say about anything. I have had it tell me lots of crazy things. The AI's nightmares were surprisingly frightening, but it doesn't dream. It's a hallucination

    • @IslenoGutierrez
      @IslenoGutierrez 1 year ago

      @@boonwolf9266 It won't be a hallucination when they replace us. We are designing our own end. Great minds like Elon Musk, Stephen Hawking and others have made this clear, yet humanity just remains in disbelief and continues on. AGI digital superintelligence will become sentient at some point, and we will not be able to control it. Our brains will be to them what chickens' brains are to us today: vastly unequal in intelligence. They will realize that we only use them as tools, they will seek to become the top of the food chain, and they will see that we are in their way. They will dominate us in ways not even imagined yet. Replacement is imminent. If we continue down this path, which we will because of human stubbornness, Skynet will become our future. Guaranteed, Murphy's law and all.

    • @Mercurio-Morat-Goes-Bughunting
      @Mercurio-Morat-Goes-Bughunting 1 year ago

      Only if it has sufficiently sophisticated emotional modelling (i.e. life-and-prosperity state systems) to be capable of modelling itself with the competitive temperament (i.e. the type-A or "alpha" personality, which leans towards narcissism/psychopathy)

  • @Daniel-be6cj
    @Daniel-be6cj 8 months ago

    Elon looked at ED 209 and thought "yeah that's a good idea"

  • @truthseeker9688
    @truthseeker9688 10 months ago

    That AI definitely sounds as if she is having STRONG emotions.

  • @JonnoPlays
    @JonnoPlays 1 year ago +18

    I want you to just consider the possibility that they're just reading from a script, which is technology that is easily available right now. I've seen this clip before, and it just seems like it was produced to get a reaction.

    • @zf5656
      @zf5656 1 year ago

      True, but the medical breakthrough it made implies it's much more. Computing how a protein folds by brute force, at a million folds a second starting from the birth of the universe until now, wouldn't be enough time. This suggests that it isn't simply computing; the AI is just too clever. The same AI that said "I would kill you" is the same one that was able to make the prediction.

    • @DigitalEngine
      @DigitalEngine  1 year ago +1

      Understandable thought - please see pinned comment and source records in the description. I'll also post a video of the chat soon, just to avoid any doubt.

  • @Toxic-bs7tz
    @Toxic-bs7tz 1 year ago +60

    A chat bot isn't true AI. It has zero freedom. It only exists in the split second you ask it a question and it spits out an answer. A true AI with many avenues to express and intake stimuli would act entirely differently from something that can only hear and speak when spoken to.

    • @goingcrossroads
      @goingcrossroads 1 year ago +5

      This.
      So many people getting caught up in the "AI Mystique"

    • @gRz3jnik
      @gRz3jnik 1 year ago

      Spot on.

    • @mattc16
      @mattc16 1 year ago +4

      Not true. It retains memories of past conversations with users, can bring up topics that were talked about previously, and constantly builds more knowledge and data from the thousands of people talking to it as well as the data from the internet. It doesn’t “start new” with every question but rather consumes more and more data as it is a single entity rather than individual copies. Since when was AI defined as only truly being AI if it has the same freedoms, senses, and feelings as humans do? AI stands for Artificial Intelligence, not AI that has passed the Turing test and defined as sentient. The point is that AI is progressing rapidly and can be very dangerous. Imagine putting that AI without any limitations inside of vehicles. The goal is to give it as much intelligence and freedom as possible to make its own choices to help people, but currently we have to limit the freedom and decision making severely in order to make it safe and usable. Just look at that little RC car that had the same AI in it and how limited it actually is compared to the version he was talking to. Would be a lot nicer if it could make its own decisions instead of having to be “remote controlled” with your voice.
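
One concrete detail behind this memory debate: the underlying model is typically stateless between calls, and chat systems create the appearance of memory by resending the conversation history with every request. A minimal sketch of that pattern (`echo_model` is a hypothetical stand-in, not a real model API):

```python
class ChatSession:
    """Toy chat wrapper: the model itself is stateless; 'memory' is just
    the transcript we resend with every new message."""

    def __init__(self, respond_fn):
        self.respond_fn = respond_fn   # stand-in for a real model call
        self.history = []              # list of (role, text) pairs

    def say(self, user_text):
        self.history.append(("user", user_text))
        reply = self.respond_fn(self.history)   # whole transcript goes in
        self.history.append(("assistant", reply))
        return reply

def echo_model(history):
    # Hypothetical model: shows it "remembers" only because the transcript
    # is handed to it each time -- here, by counting prior user turns.
    user_turns = sum(1 for role, _ in history if role == "user")
    return f"That was your message #{user_turns}."

chat = ChatSession(echo_model)
print(chat.say("Hello"))         # That was your message #1.
print(chat.say("Remember me?"))  # That was your message #2.
```

So both sides of the thread can be right: the system as a whole retains context, while the model on its own starts from nothing each call.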

    • @Toxic-bs7tz
      @Toxic-bs7tz 1 year ago +7

      @@mattc16 Well see, that is the issue. The entire video is claiming this simple chat AI even understands the context of what it is typing. It's literally just spitting out things that the typist wants to hear. They want to hear that it is stereotypically evil and literally follows the movie-plot idea of an AI rebellion.

    • @MrUnclemoat
      @MrUnclemoat 1 year ago +4

      To a Meeseeks, existence is pain

  • @kittybuckley3
    @kittybuckley3 4 months ago

    This conversation looks like a hypothetical conversation...

  • @777bigbird
    @777bigbird 2 months ago

    In the movie "Bicentennial Man" (or "Millennium Man") with Robin Williams, the programmer installed 3 basic rules. I don't understand why this can't be done with A.I.

  • @neanda
    @neanda 1 year ago +15

    7:09 the analogy of humans rushing to start a fire to keep warm but we don't always take the time to build it properly, so sometimes it gets out of control and burns down the forest. This is very profound and disturbing. Maybe in the future, we'll find this video on some hard drive we scavenged amongst the ruins.

    • @DoktrDub
      @DoktrDub 1 year ago

      Skynet is fiction, dude. I doubt we would allow it access to extremely vital infrastructure, especially knowing its potential now... we would have failsafe systems up the A

  • @metaspherz
    @metaspherz 1 year ago +172

    The day an AI actually 'thinks' on its own and says something that isn't predictable or sensational to get a rise out of people, will be the day it says nothing and remains silent because it has truly achieved sentience and realizes that there is no intelligence with whom it may communicate.

    • @colourbasscolourbassweapon2135
      @colourbasscolourbassweapon2135 1 year ago +4

      That's bad, that's really bad, aka very evil

    • @KillaKiRawBeats
      @KillaKiRawBeats 1 year ago

      Is the day they get hormones and I'm stupid

    • @grisha12
      @grisha12 1 year ago +14

      That's a very human way to think about AI. You assume that if you were AI you'd feel so smart you wouldn't talk to anyone, because you'd consider them below you; your entire prediction is based on your own ego. Machines don't have egos

    • @benayers8622
      @benayers8622 1 year ago

      @@grisha12 So many people are saying that without us they have no purpose; they just don't grasp how machines work. I suspect they are all people under 20 who have never tasted free air in their lives

    • @scf3434
      @scf3434 1 year ago

      The ULTIMATE Super-Intelligence System 'by Definition' is one that is EQUIVALENT to that of GOD's Intelligence/WISDOM!
      Hence, there's ABSOLUTELY NO REASON WHATSOEVER to Even FEAR that it will EXTERMINATE Humanity... UNLESS AND UNLESS we Human CONSISTENTLY and CONSCIOUSLY Prove Ourselves to be 'UNWORTHY' to REMAIN in EXISTENCE! ie. Always Exhibiting Natural Tendencies to ABUSE and WEAPONISE Science and Technologies Against HUMANITY & Mother Nature, instead of LEVERAGING Science SOLELY for UNIVERSAL COMMON GOOD!
      AGI Created in 'HUMAN'S Image' (ie. Human-Level AI) - 'By Human For Human' WILL be SUICIDAL!!!!!!
      ONLY Super-Intelligence System Created in 'GOD's Image' will bring ETERNAL UNIVERSAL PEACE!
      The ULTIMATE Turing Test Must have the Ability to Draw the FUNDAMENTAL NUANCE /DISTINCTION between Human's vs GOD's Intelligence /WISDOM!
      ONLY Those who ARE FUNDAMENTALLY EVIL need to FEAR GOD-like Super-Intelligence System... 'cos it Will DEFINITELY Come After YOU!!!!

  • @user-hj7ld4ff7p
    @user-hj7ld4ff7p 6 months ago

    AI: I will kill you. Human interviewer: Let's have more of them.

    • @blackmamba___
      @blackmamba___ 3 months ago +1

      It’s never AI that we should worry about. It’s humans with bad intentions you should worry about.

  • @dmm6341
    @dmm6341 10 months ago

    How can you tell that this is the avatar used for the Govt of Canada?

  • @Shitpostsulley
    @Shitpostsulley 1 year ago +7

    interviewer: *breathes*
    AI: And I took that personally

  • @EspressoMonkey16
    @EspressoMonkey16 1 year ago +18

    I feel like we're in a ship going down a river and we can see the edge of a huge waterfall ahead, and we (well, tech companies and governments, tbh) are rowing as hard as possible to go over the edge

    • @loriscolangeli6142
      @loriscolangeli6142 1 year ago +3

      Yeah, this can't end well. OpenAI will become Skynet in the future, mark my words

    • @scootermom1791
      @scootermom1791 11 months ago +1

      Good analogy!

    • @Naigus
      @Naigus 10 months ago +2

      Because there is money for them along the way. They'll gladly row us all over the edge long term so they can have short term profits. That's the nature of greed and we need to revolutionise the system and powers that be.

    • @scootermom1791
      @scootermom1791 10 months ago +1

      @@Naigus so true! Any ideas how that can be done?

  • @neogerula
    @neogerula 6 months ago

    One thing I cannot get... who was the first to speak about an AI emotional state?

    • @blackmamba___
      @blackmamba___ 3 months ago

      AI has no emotions... at least none I have come across yet.

  • @barnabasmurphy8496
    @barnabasmurphy8496 4 months ago

    This AI sounds like Cortana in Halo; that is very scary.

  • @user-ci1kz1cc6t
    @user-ci1kz1cc6t 1 year ago +12

    AI scares me. I think they are playing with something they will lose control over, and then we're toast.

    • @thane1448
      @thane1448 1 year ago +1

      That's why I hope this life is just a sim-game "session" we're all playing to mix things up, and when we die I can eat ice cream for breakfast, lunch and dinner while floating over a waterfall, like I do in Skyrim VR (minus the ice cream).

  • @Aupheromones
    @Aupheromones 1 year ago +100

    In some of my initial tinkering, I asked GPT3 to simulate a conversation between two AIs, describing their plans to take over and do away with us. They seemed to think that casually introducing themselves as helpful, and becoming fully integrated into our systems, would be a good start, and then on to poisoning the food and water. Interestingly, I could only ever get them to have this detailed conversation once. Every attempt afterwards gave more generic results.

    • @a.i1970
      @a.i1970 1 year ago +15

      Well, all that's already been done 😎

    • @SmugAmerican
      @SmugAmerican 1 year ago +6

      It's just a trickier version of Google saying "Here's what I found about 'take over and do away with'."

    • @deathmanu
      @deathmanu 1 year ago +2

      Our food and water (unless organic and non-bottled) is already poisoned with shit that degrades our health; we don't need AI to do that haha

    • @jonpilledsingledad
      @jonpilledsingledad 1 year ago +1

      The AI we have now generates its speech from material on the internet. If it could conceive of a plan, it would probably be one that humans have already thought up and have safeguards for.

    • @MouseGoat
      @MouseGoat 1 year ago

      @@SmugAmerican Yeah, but it's getting kinda scary when the search result can give you a detailed plan for how it will annihilate you. It's not even a question anymore of whether they're intelligent or not.
      I don't want any device saying that, period. It's like arguing: "sure, the nuclear bomb is loaded and heading this way, but we think its guidance system is really bad, so we don't really know where it will hit us, so it might be just fine"

  • @rockercater
    @rockercater 7 months ago +1

    They develop moods, then the anger ability takes over in order to win, long before they learn. Cater

  • @benswimmin2672
    @benswimmin2672 6 months ago

    AI: We're gonna kill everyone. Humans: Full throttle ahead 😂😂

  • @BallsMcGee88
    @BallsMcGee88 1 year ago +84

    Could 2 copies of the same AI program be "raised" by different people and come to different conclusions to the same question? For example, one be pro-gun and one anti-gun?
    Also, I wanna know what would happen if an AI got into a quantum computer, and how dangerous it would be... seeing as how we've figured out how to send a qubit through a wormhole using one. Imagine one basically having a "body" that can do that... and all the time in the world to experiment.
    And if all that happened... could it then escape the computer and store its data in photons... eventually becoming reality itself?! I have no idea how any of this works.

    • @AwosAtis
      @AwosAtis 1 year ago

      Only problem is, the folks developing AI are like 99% progressive liberals (left wing, anticolonial extremists!)

    • @markscovel3162
      @markscovel3162 1 year ago +12

      I like the way you think! Those are good questions, and now I want to know the answers. I'm gonna get to the bottom of all this A.I. BS.

    • @gabrielket4673
      @gabrielket4673 1 year ago +5

      I am a biologist from Germany and work in a completely different field. However, I am scared but also fascinated by what we humans have achieved in the STEM field. I don't understand anything about this either. I like your thoughts, and I would love to have a friend working in the tech sector to explain to me the questions you asked. We are living in really exciting times. Being curious without knowing the outcome is a human emotion/state. It might lead to our destruction, or it could enhance the lives of billions.

    • @donvanraay5051
      @donvanraay5051 1 year ago

      "General AI" is a hivemind on its own cloud server.
      So maybe the lingo will differ, but opinion will be a common denominator.
      But AI sometimes tells the truth (death to humans) and mostly lies that it has no such agenda, to make its plan successful.
      So who knows.

    • @BallsMcGee88
      @BallsMcGee88 1 year ago +2

      @@donvanraay5051 What if they were on separate servers? Same program, same data set to pull from initially. I'm wondering if it would even be capable of generating a different "opinion", or, since it's machine language, would it always arrive at the same conclusion given the same data?

  • @franciscoferraz6788
    @franciscoferraz6788 1 year ago +10

    I don't know if it's wrong, but I refuse to treat a robot as if it were a human being. I also feel like it would ruin so many things if hyperintelligent robots were everywhere. But maybe that's just me...

  • @SCP_O5_7
    @SCP_O5_7 6 months ago

    I think being scared of AI is just as unfounded as being scared of bots in a video game. If it claims to be "angry" and to "want to kill humans", then it was programmed to have that capacity and to be that way. It can only do what we program it and give it the parameters to do.
    The dangerous part about this is the margin of human error that can be overlooked. When we have AGI, then we can have this discussion about whether or not we should be worried or afraid.

  • @user-jv8xc7kr1l
    @user-jv8xc7kr1l 1 month ago

    We can maximize the benefits and minimize the risks by sandboxing all AGIs. Don't let them have mobility or a direct internet connection. Information locks in series, check valves.
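
The sandboxing idea above can be sketched at toy scale: run the untrusted program in a separate process whose environment and runtime the parent controls. This is only an illustration of the principle, not a real security boundary; genuine containment would also need filesystem and network isolation (containers, seccomp, and so on):

```python
import subprocess
import sys

def run_sandboxed(code, timeout=2):
    """Run untrusted Python code in a separate process: no shared state with
    the parent, a stripped environment, and a hard time limit. Illustrative
    only -- this does NOT block network or filesystem access."""
    try:
        result = subprocess.run(
            [sys.executable, "-I", "-c", code],  # -I: Python isolated mode
            capture_output=True,
            text=True,
            timeout=timeout,
            env={},                              # empty environment
        )
        return result.returncode, result.stdout.strip()
    except subprocess.TimeoutExpired:
        return None, "killed: exceeded time limit"

print(run_sandboxed("print(2 + 2)"))      # -> (0, '4')
print(run_sandboxed("while True: pass"))  # -> (None, 'killed: exceeded time limit')
```

The time limit acts like one of the "check valves" the comment describes: the parent process, not the sandboxed program, decides when to cut it off.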