Can Robots Choose to Hurt Us?

  • Published: 14 Jul 2024
  • Check out Dax here: daxbot.com/invest-in-daxbot
    I built a robot that breaks the first law of robotics, then show you Dax, which is the opposite of that.
    Get Your Experiment Box Here: theactionlab.com/
    Check out my experiment book: amzn.to/2Wf07x1
    Twitter: / theactionlabman
    Facebook: / theactionlabofficial
    Instagram: / therealactionlab
    Snap: / 426771378288640
    Tik Tok: / theactionlabshorts
  • Science

Comments • 1K

  • @Bootleg_Jones
    @Bootleg_Jones 2 years ago +1391

    Friendly reminder that Asimov's 3 laws were not created as an example of how to ensure robot safety, they were written as an example of how a naive set of rules could lead to catastrophic issues. Asimov's stories involving robots generally had things going horribly wrong despite (or even as a direct result of) the 3 laws of robotics.
    The point about how hard it is to define "harm" is especially important because, for example, if a robot believes that a surgeon is causing harm by opening a patient's chest as part of a life-saving operation then it may make an attempt to prevent the surgeon from performing the surgery. Or if it believes that locking someone permanently in their home would prevent harm then that person may become the robot's prisoner for life. (These are simplified and exaggerated examples, but hopefully they illustrate my point well enough)

    • @TheSecondVersion
      @TheSecondVersion 2 years ago +88

      Yeah, the channel Computerphile covered this topic, pointing out that to write/program just the FIRST Law, we would need to have somehow solved *all of ethics*.

    • @funckyjunky
      @funckyjunky 2 years ago +13

      Your robot doesn't break any law. Your robot is not capable of knowing what a human is.

    • @yodaco
      @yodaco 2 years ago +15

      @@TheSecondVersion I semi-agree. After all, we have no way to resolve the trolley problem; we cannot teach AI what to do when we ourselves cannot decide. But in many ways AIs will have more hope of working out the variables and making a "better" decision much faster, giving them a better chance of a preferred end result. So: bring on the AI.

    • @jwonz2054
      @jwonz2054 2 years ago +1

      What if robots view the covid vaccines as potentially harmful?

    • @Blackmark52
      @Blackmark52 2 years ago +16

      @@funckyjunky "your robot is not capable of knowing what is human"
      Nice try, but ignorance of the law is not a defence.

  • @arindamchandrapathak8318
    @arindamchandrapathak8318 2 years ago +520

    "Aren't robots supposed to help us?"
    "Not this one"
    That is a somewhat scary line.

    • @adr3697
      @adr3697 2 years ago +17

      A robot for diabetics that don't always like to prick their fingers.

    • @MoonlitPhoenix0
      @MoonlitPhoenix0 2 years ago +6

      @@adr3697 As a type 1 diabetic, I have mixed feelings about that.

    • @sol_mental
      @sol_mental 2 years ago +3

      There are already non-piloted drones killing people in places where there could possibly be known terrorists. This scary line is already a reality.

    • @MoonlitPhoenix0
      @MoonlitPhoenix0 2 years ago

      @@sol_mental That's not the same thing; they are programmed to kill, they don't get to decide whether they want to kill or not.

    • @UCmDBecUtbSafffpMEN3iscA
      @UCmDBecUtbSafffpMEN3iscA 2 years ago +3

      I'd like a robot that swings a heavy welded machete at full swing to check whether it wants to 'harm' a living being or not.

  • @youtubersingingmoments4402
    @youtubersingingmoments4402 2 years ago +532

    "I wonder if this Daxbot knows it's living in a simulation..." I did not need that floating around in my brain before I go to bed.

    • @KyngMark
      @KyngMark 2 years ago +6

      If his creator is him then probably yes🤣

    • @aamir-khan
      @aamir-khan 2 years ago +1

      Xd

    • @aleksandarjelyazkov5247
      @aleksandarjelyazkov5247 2 years ago +14

      Humans at the start of the 21st century: Do we live in a simulation? Meanwhile, Dax: Do I live in a simulation?

    • @aleksandarjelyazkov5247
      @aleksandarjelyazkov5247 2 years ago +2

      Stop stealing his data Zuk

    • @davidelzinga9757
      @davidelzinga9757 2 years ago +2

      No. Dax’s world IS the simulation, therefore, he must exist on two planes simultaneously.
      Or “he” is just a computer that makes calculations based on how it’s programmed.

  • @Macakiux
    @Macakiux 2 years ago +507

    4:16 is the reason why this guy is not gonna survive the first hour of the robotic uprising

    • @mjam_0673
      @mjam_0673 2 years ago +42

      THINK FAS- oh, I appear to have lost my lungs, therefore causing me to d- **falls over**

    • @thatonefuyu
      @thatonefuyu 2 years ago +9

      Hey! Good to see you around here!!

    • @ILikeToWatchTodai
      @ILikeToWatchTodai 2 years ago +7

      If he teams up with the newest humanoid robots from Engineered Arts he might have a shot

    • @nekoeko500
      @nekoeko500 2 years ago +10

      People need to stop making robot-abuse vids, just in case.

    • @alexborr1746
      @alexborr1746 2 years ago +2

      lmao

  • @FirstLast-gw5mg
    @FirstLast-gw5mg 2 years ago +74

    The simplest possible robot that violates Asimov's 1st law would be a tiny toy robot that sits on your car dash and, through inaction, will allow you to crash your car and get hurt.

    • @Sergiuss555
      @Sergiuss555 2 years ago +2

      Skynet's ancestor

    • @bhavyajain638
      @bhavyajain638 2 years ago +6

      Sorry master, I tried.

    • @uncaboat2399
      @uncaboat2399 2 years ago +3

      Doesn't count unless the robot is sophisticated enough to actually do something about it.
      Imagine a 23rd-century toy, following the Laws, that detects your imminent crash, then jumps off the dash to push on the brake.

    • @Stettafire
      @Stettafire 2 years ago +4

      @@uncaboat2399 Most car computers are capable to a degree; they just lack the programming. This is why it's silly to say "the robot is deciding": the robot decides nothing. All computers follow a set of instructions set out by the program.

    • @uncaboat2399
      @uncaboat2399 2 years ago +1

      @@Stettafire All computers using _today's_ technology. We cannot know what tomorrow will bring.
      There is already progress being made with neural networks and machine learning, where the computer actually comes up with solutions that surprise the programmers, solutions _even better_ than they thought the machine would choose.
      Some day, I am sure, technology will reach a point where the difference between "decision making" and "following programming" will be little more than mere semantics. For all intents and purposes, the machines will "think."

  • @skuzlebut82
    @skuzlebut82 2 years ago +14

    I don't think that robot chooses to harm you. It isn't aware of what you are or what hurting you is. It's more or less randomly deciding whether or not to move the motor that moves the arm.

  • @justinz.4069
    @justinz.4069 2 years ago +288

    Who would've known we would see Wall-E and EVE combined into one robot in this video.

  • @westonding8953
    @westonding8953 2 years ago +420

    I love how The Action Lab spans a large gamut of topics!

    • @silverdude9916
      @silverdude9916 2 years ago +1

      Tesla New Model 12K version (Integrated Car with Virtual Steering Wheel) ruclips.net/video/7VG95rtCank/видео.html

    • @mambojambo4870
      @mambojambo4870 2 years ago +9

      gamut all the way brother

    • @Mega_Mikey
      @Mega_Mikey 2 years ago +1

      Yes

    • @nahCmeR
      @nahCmeR 2 years ago +2

      Thankfully too, it's always a surprise and a pleasure to watch his videos.

    • @ryanorourke701
      @ryanorourke701 2 years ago +1

      *gauntlet

  • @jonjaques
    @jonjaques 2 years ago +49

    As a diabetic who used to use these on a regular basis, I can tell you, that hurt. They recommend you use them on the sides of your fingers because the pads are so sensitive, and you didn't even have the robot hold back, so you got the full length of the lancet. Sacrifice for science... I guess?

    • @doiron12
      @doiron12 2 years ago +8

      @@Stevie-J Anti-Dax is currently powered with human blood until he finds a way to harvest the human soul! Processing...................

    • @Gkitchens1
      @Gkitchens1 2 years ago +3

      And then attempted to have his wife use the same needle...

    • @doiron12
      @doiron12 2 years ago +1

      @@Gkitchens1 Anti-Dax demands a female sacrifice!!!

    • @baadlyrics8705
      @baadlyrics8705 2 years ago

      @@Gkitchens1 Yeah, because even though they have a kid they never had s*x, so the needle sharing is bad? Makes no sense; it's no problem to share a needle in that case.

    • @davisdf3064
      @davisdf3064 2 years ago +1

      @@doiron12
      Wait... Anti-Dax is powered with blood...?
      oh no
      OH NO
      *OH NO*
      Anti-Dax is alpha version of V1

  • @ItsLadyJadey
    @ItsLadyJadey 2 years ago +39

    Gosh can you imagine. That old lady was so thrilled. Just watching technology boom the way it has the last 30 years must be absolutely amazing. Then we have Dax who is straight out of a sci-fi movie!

  • @TheSecondVersion
    @TheSecondVersion 2 years ago +61

    The channel Computerphile also covered this topic, saying, "...define 'harm'. For the First Law to work (and the other two laws depend on the First), you would need to have somehow solved *all of ethics* and taught it to the robot."
    That's why the Three Laws are a simple narrative device and not really a serious guideline on how to program AI.

  • @MegaFonebone
    @MegaFonebone 2 years ago +125

    "Hey Daxbot, hand me a towel, I just got out of the shower."
    Daxbot human operator: Ooo this is a tricky one, I'd better handle this manually!

    • @madebydimiakagreekmachine5822
      @madebydimiakagreekmachine5822 2 years ago +6

      Hahaha exactly

    • @friedec3622
      @friedec3622 2 years ago +14

      @@Anonymous-df8it You don't get the joke.

    • @mikosoft
      @mikosoft 2 years ago +7

      Robot operator after entering the bathroom: I should have had the robot handle this.

    • @nou5440
      @nou5440 2 years ago

      cant like cause 69
      lmao

  • @NetAndyCz
    @NetAndyCz 2 years ago +21

    It really bugs me when people try to take Asimov's laws seriously even though he already showed several scenarios in which they will not work. And they were a literary device rather than an actual programming technique.

  • @afortifiedcity
    @afortifiedcity 2 years ago +159

    Oh gosh, watching you and your wife offer up fingers to the anti-dax is the most anxiety inducing thing I've seen all week!


    • @user-wr2uy9pj4m
      @user-wr2uy9pj4m 2 years ago +9

      I know right? I have trypanophobia (fear of needles) and I genuinely couldn't help but look away and lower the volume (because I still wanted to know what happened and when I can look again)

  • @KingdaToro
    @KingdaToro 2 years ago +12

    "Hey Dax, bring me a sandwich!"
    "Get it yourself!"
    "Hey Dax, sudo bring me a sandwich!"
    "OK!"

    • @camila_lt
      @camila_lt 2 years ago

      I wonder why the sudo jokes never ask for a password.

  • @lekobiashvili945
    @lekobiashvili945 2 years ago +14

    Daxbot: "I wonder if this human knows he's actually just living in a simulation"

  • @watcherofwatchers
    @watcherofwatchers 2 years ago +26

    That was a serious hit from a lancet. Lol. Those things aren't intended to go that deep! Props, my man.

    • @EikottXD
      @EikottXD 2 years ago +2

      Yeah they are, or they wouldn't manufacture the needle that long.

    • @watcherofwatchers
      @watcherofwatchers 2 years ago +8

      @@EikottXD You don't know what you're talking about. The lancets are used in a device, with a cover that takes up some of that space. The device is then adjusted so that the lancet only pierces as deep as an individual needs. This can be less than a 1/16 of an inch, not the 1/4 of an inch or more that an uninstalled lancet like he used has. No one should ever be pierced by the full length of the lancet.

    • @EikottXD
      @EikottXD 2 years ago +1

      @@watcherofwatchers My friend does it all the time, usually in his abdomen area; he works with his fingers a lot. Everything else just costs extra when you can prick yourself; it's only a drop.

    • @watcherofwatchers
      @watcherofwatchers 2 years ago +6

      @@EikottXD Whether he does it all the time or not is irrelevant. They are not INTENDED to be used that way, particularly in the finger.
      And no one suggested that this was a major injury, but taking the full depth of the lancet would be painful. That was the point, and you had to create an argument out of it. Good on you for fulfilling the Internet commenter stereotype.

    • @EikottXD
      @EikottXD 2 years ago

      @@watcherofwatchers you definitely suggested that...

  • @Fadeddeath
    @Fadeddeath 2 years ago +8

    James: "Don't worry, I'm not injured. It's just a little prick."
    My Brain: "That's what she said!"

  • @OCuvillon
    @OCuvillon 2 years ago +47

    To break Asimov’s laws you need to program a robot with these laws first ;o)


    • @N1korasu
      @N1korasu 2 years ago +7

      Yeah it's hard to say a robot broke the law if the laws weren't part of their program. Also it's less of a robot and more of a random number generator attached to an actuator.

    • @OCuvillon
      @OCuvillon 2 years ago +2

      @@N1korasu Absolutely and this was indeed my point.

    • @timscoviac
      @timscoviac 2 years ago +1

      They don’t need to be, the laws are for humans that are making robots.

    • @Anonymous-df8it
      @Anonymous-df8it 2 years ago +1

      No. You just need to program a robot that follows 'opposite' laws, like so:
      1. A robot must make at least one person cease to exist per day
      2. A robot must do the exact opposite of what people say unless it conflicts with the first rule
      3. A robot must self-destruct, unless it conflicts with the previous rules
      This means you don't need to program a robot with Asimov's laws, just with the above mentioned rules.
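Both orderings discussed above, Asimov's and this inverted set, share the same mechanical shape: each rule yields to the rules before it. That precedence can be sketched as lexicographic arbitration over which laws a candidate action would violate. This is only an illustrative sketch; the action names and the violation table are hypothetical, not anything from the video:

```python
# Hypothetical violation table: which of the three laws each candidate
# action would break. Precedence means breaking Law 1 is worse than
# breaking Law 2, which is worse than breaking Law 3.
VIOLATES = {
    "move_human_out_of_danger": set(),
    "obey_order_to_harm":       {1},   # breaks Law 1
    "refuse_order":             {2},   # breaks Law 2 only
    "self_destruct":            {3},   # breaks Law 3 only
}

def choose(actions):
    """Pick the action whose violations are least severe under the
    'unless it conflicts with an earlier rule' precedence."""
    def badness(action):
        v = VIOLATES[action]
        return (1 in v, 2 in v, 3 in v)  # lexicographic: Law 1 dominates
    return min(actions, key=badness)

# Law 2 yields to Law 1: the robot refuses an order rather than harm.
print(choose(["obey_order_to_harm", "refuse_order"]))  # refuse_order
```

The tuple comparison does the whole job: any action that breaks Law 1 sorts after every action that doesn't, regardless of what else it breaks.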

  • @OmnipotentNoodle
    @OmnipotentNoodle 2 years ago +9

    Wasn't "I, Robot" literally Asimov making 3 rules of robotics and then writing an anthology of those rules breaking down? lmfao

  • @westonding8953
    @westonding8953 2 years ago +123

    Technically if that robot gave you a “heart attack sandwich”, it is harming you, but maybe in the long run rather than the short run.
    Edit: “Heart attack sandwich” is the actual name of a sandwich at a restaurant near my old university. It’s laced with calories and fat. Why would a robot allow you to eat it? Lol

    • @imyourmaster77
      @imyourmaster77 2 years ago +4

      Exactly. If we had robots as knowledgeable as the ones in Asimov's stories, they wouldn't even allow us to feed ourselves.

    • @Ttavoc
      @Ttavoc 2 years ago

      The line would be softened up quite a lot for two reasons. First: we are beings that behave in a very self-destructive way. We want our drinks, our snacks and our drugs, and we don't want to discuss the matter with a toaster. Second, and maybe most relevant: it is way too interesting for states and militaries to have robots which hurt or kill people.

    • @o0o-jd-o0o95
      @o0o-jd-o0o95 2 years ago

      The motion of the robot could create an eddy in the air that travels away from them, and every 5 miles it doubles in size, and by the time it reaches 200 miles away: thunderstorm... tornado... people are killed... 😜

    • @Meadi9
      @Meadi9 2 years ago

      Is it the crise cardiaque from la belle et la boeuf?

    • @westonding8953
      @westonding8953 2 years ago +1

      @@Stevie-J Well, the point I am making is that it is questionable what "harming" a human means. I don't have the nutrition facts for the sandwich; I would have to go back to see if I can find them.

  • @triberium_
    @triberium_ 2 years ago +76

    Dude, when you showed how the robot is actually living in a simulation, it made me wonder if that's how we as humans experience the world.

    • @JSDudeca
      @JSDudeca 2 years ago +2

      Interestingly, this is the same methodology that Tesla uses to test their AI.

    • @damnion56
      @damnion56 2 years ago +10

      This is exactly how we experience the world. The brain takes information from the senses and constructs the "reality" you perceive.

    • @LilyBlossom1337
      @LilyBlossom1337 2 years ago +4

      I mean, shit. We see like less than 1% of the electromagnetic spectrum; that's it. That's all we can perceive with our eyes. I wonder what something _really_ looks like. c:

    • @HutchinsonJC
      @HutchinsonJC 2 years ago +3

      Obligatory *The Matrix* mention.

    • @glauberglousger6643
      @glauberglousger6643 2 years ago

      I guess any divine beings would have left us out of boredom.

  • @lolm4ker994
    @lolm4ker994 2 years ago +5

    Just wait until anti-Dax convinces Dax to join the dark side

  • @uncaboat2399
    @uncaboat2399 2 years ago +14

    Those ambiguities in the Three Laws are what became the subject of reams and reams of fiction, both from Asimov and others!
    One of my favorites was where the robot would put out your cigarette or take away your beer, coz that stuff is bad for you. *And the First Law meant you couldn't order the robot to stop doing that!*

    • @davisdf3064
      @davisdf3064 2 years ago

      At least you're healthier, I guess? Lol

    • @uncaboat2399
      @uncaboat2399 2 years ago

      @@davisdf3064 No doubt, but there's a health-and-enjoyment balance that the robots didn't understand.
      All I can say is, any robot that tries to take away my booze is gonna get dismantled in a hurry!!

  • @Mello_me
    @Mello_me 2 years ago +3

    dax is definitely the child of Wall-E and EVE.

    • @navneetsingh6317
      @navneetsingh6317 2 years ago +1

      Yeah
      Lol
      I was actually thinking
      I've seen it somewhere 😅

  • @sharonsabu365
    @sharonsabu365 2 years ago +38

    Asimov's First Law, in my opinion, is just a baseline that intends to protect humans from 'machines'. If it's just a randomiser, human harm is not its fault, but then again we can say that the environment of the true AI is what's at fault. It is a hard concept, and it can change depending on how a person perceives it.

    • @FallenAngelHiroko
      @FallenAngelHiroko 2 years ago

      I agree. It's just a baseline. And a way to keep it simple for the readers (or watchers in the case of movies). You have the potential to lose your audience if you go too deep in the philosophy.

    • @_John_P
      @_John_P 2 years ago +3

      Asimov came up with the 3 laws (+ law zero) so he could exploit the many different ways they would fail and thus keep writing books. They are a terrible idea and true AI cannot be forced to follow any laws as much as people can't. True AI must never have access to physical objects or networks, it should be kept trapped in an isolated computer fitted with a panic button for instant shutdown.

    • @Miss_GiggleFarts
      @Miss_GiggleFarts 2 years ago +2

      Asimov's laws were created to show the naivety of creating such laws; they are designed to fail, because ethics and morality are way too complicated a subject.

    • @epicdud5905
      @epicdud5905 2 years ago +1

      I agree. This isn’t making a decision. It’s flipping a coin

  • @megatronyeets
    @megatronyeets 2 years ago +2

    "Hey little guy, you're not gonna hurt me, are you?"
    **THE BLOOD SACRIFICE MUST BE COMPLETE**

  • @SteeveEfnet
    @SteeveEfnet 2 years ago +5

    That would only apply to learned behavior with neural networks. If you program the robot to hurt you, you are basically just doing it to yourself. (Or others)

  • @tarunumesh7068
    @tarunumesh7068 2 years ago +4

    This guy asks whether the robot knows it is living in a simulation; I'm sure somebody else is saying the same about us.

  • @Han_Solo6712
    @Han_Solo6712 2 years ago +9

    Personally, I'd think a robot could break the 1st law of robotics to save another human in a situation of a police officer vs. an armed robber, and break the 3rd if it's a guard robot facing an armed robber. (Not murder, just harm, like a bruise or a knockout.)

    • @bozomori2287
      @bozomori2287 2 years ago +2

      Murder

    • @Kimera92
      @Kimera92 2 years ago +1

      How about not programming robots to kill people, and playing it safe? That would be nice.

    • @NetAndyCz
      @NetAndyCz 2 years ago

      That is what the 0th law is all about ;)

  • @stuffilike05
    @stuffilike05 2 years ago +1

    Surely a random yes or no outcome isn't the same as "choosing"

    • @somerandomguywastaken
      @somerandomguywastaken 2 years ago

      Word, this video is actually stupid. It's not a robot; it's a motor with a RandInt statement.

  • @danielyuan9862
    @danielyuan9862 2 years ago +2

    For those of you looking through the comments wondering whether the robot will be nice at 2:00, let's say... it's not pretty.
    The second time had a better outcome, though.

  • @eliguyah
    @eliguyah 2 years ago +18

    Wow, that robot is really cool! Especially the Dax one. I'd really like to buy it.

  • @hpottergirl317
    @hpottergirl317 2 years ago +3

    This is my absolute favourite youtube channel!!!!!!!!!!!!

  • @salindanandasena5326
    @salindanandasena5326 2 years ago +2

    1:20 woke up and chose violence XD

  • @ljvob
    @ljvob 2 years ago +1

    at 4:04 Dax nodded "NO" when he wasn't looking.... xD

  • @Lucien86
    @Lucien86 2 years ago +5

    As an actual Strong AI scientist, there are so many problems with the Asimov laws that I am actually writing a book about it.
    Most of the really big ones centre around the first law. As a perfect example, one of the worst is an actual real potential 'paper clip multiplier'. Observation: any real sentient alien species would present a potential existential threat to humanity. This would invoke the second predicate of the first law and so is a command to the 'machines' to go out and exterminate every alien species in existence. The real killer, though, is that because this is a first law action, the second law gets completely locked out. The machines will simply ignore anyone who tries to order them to stop, and may even kill them as also presenting an existential threat to humanity.
    This all leads into a kind of motto for Strong AI: 'Nothing kills like an AI following the First Law.'
    Stoesin's/Lucien's Three Laws of Robotics - [in development]
    1. Nothing kills like an AI following the First Law. (Asimov's First Law of robotics.)
    2. An AI that obeys every human command will commit every crime in the world and will never survive more than a day.
    3. A machine that does not protect its own existence first is not sentient and fails by the absolute law of evolution.
    Stoesin's/Lucien's Three Laws of Machine Sentience.
    1. Survival is the primary predicate. Survival is the driving logic behind all organic minds and is an absolute requirement.
    2. Logic is the defining requirement of sentience. This includes not being a mindless slave.
    3. Existence is the requirement of survival. The base of awareness is experience; a mind requires a body to work, and its body is the anchor of the mind in reality. [copyright © Lucien 2021]

    • @Anonymous-df8it
      @Anonymous-df8it 2 years ago

      What's a 'paper clip multiplier'?

    • @Lucien86
      @Lucien86 2 years ago

      @@Anonymous-df8it A theoretical idea in AI science. A paperclip multiplier is a super-intelligent AI given a single simple task like 'make paperclips' that keeps at its task until it eventually expands to turn the whole universe into paperclips.

    • @Anonymous-df8it
      @Anonymous-df8it 2 years ago

      @@Lucien86 If it was so intelligent, surely it would have the common sense not to do that, right?

    • @Lucien86
      @Lucien86 2 years ago

      @@Anonymous-df8it That's why it's just a theoretical argument. I would say that a machine that's actually sentient would reject such a command, or would at least always limit its expansion.
      Then I found that real one in the Asimov code. (No one intelligent enough to create a sentient AI would ever be stupid enough to use that code, though.) :D

    • @Anonymous-df8it
      @Anonymous-df8it 2 years ago

      @@Lucien86 Can you link me to the code?

  • @XJWill1
    @XJWill1 2 years ago +15

    The robot cannot break Asimov's laws if it has not been programmed with the laws in the first place. Asimov's laws are not like physical laws. They are just rules that are programmed into the cybernetic brains of the robots.

    • @N1korasu
      @N1korasu 2 years ago +1

      @@zazo5525 Yes, but if you claim to make a robot that breaks the laws, it needs to be programmed with the laws, not just an RNG attached to a button and an actuator with a knife.

  • @steventouchton2508
    @steventouchton2508 2 years ago +1

    This is such a great channel. Love your content. Keep up the great work!

  • @abhishekpatwal8576
    @abhishekpatwal8576 2 years ago +1

    That interaction with the old lady was so cute

  • @tnrproductions1231
    @tnrproductions1231 2 years ago +3

    You madman

  • @benjamindover4337
    @benjamindover4337 2 years ago +3

    The war has begun.

  • @Zackfish12345
    @Zackfish12345 2 years ago +1

    Your anti-Daxbot is giving me flashbacks to when I did a glucose lab: we ate various things for breakfast and poked our fingers a few times over the next hour, every day for two weeks.

  • @harrissravan
    @harrissravan 2 years ago +2

    I wonder how robots like this would deal with the trolley problem. Or some version of it.
    Like either choose to kill person A or person B, and inaction leads to both dying.

    • @erichurst7897
      @erichurst7897 2 years ago

      That was essentially the intent of the stories, to examine how and when robots break down or violate the laws, as well as examining the complexities necessary to program true AI.

    • @rdizzy1
      @rdizzy1 2 years ago

      It will choose however the individual who programmed it wanted it to.

  • @roberth4395
    @roberth4395 2 years ago +7

    Did you just create Skynet?

  • @Deathnotefan97
    @Deathnotefan97 2 years ago +14

    Asimov himself stated that the laws as he wrote them are just generalizations, and that within the context of the works themselves, each law consists of enough lines of code to fill several books.
    So any "flaw" people find by pointing out how vaguely written the laws are is moot.
    That said, there _are_ flaws in the laws, as can be seen by reading any of Asimov's works.

    • @_John_P
      @_John_P 2 years ago

      He actually said he came up with them so he could explore the many different ways they would inevitably fail and write more books in the process.

  • @randomdosing7535
    @randomdosing7535 2 years ago +2

    Dax made the lady's day, for sure.

  • @ryang8116
    @ryang8116 2 years ago

    The lady talking to Dax at the end was too wholesome!

  • @rud9599
    @rud9599 2 years ago +3

    Jump scare

  • @poopandfartjokes
    @poopandfartjokes 2 years ago +4

    Anti-Dax: “Give me your clothes, your boots and your motorcycle”

  • @diceblue6817
    @diceblue6817 2 years ago +1

    He just gave alexa a taste for human blood

  • @Ibloop
    @Ibloop 2 years ago +1

    2:02
    Robot: don’t call me the little guy…
    Action lab: What
    Robot: *Poke*

  • @amechiati4773
    @amechiati4773 2 years ago +3

    Cool

  • @klaatubob
    @klaatubob 2 years ago +8

    Asimov's "laws" are just a part of a fictional story and carry no weight in the real world. It's up to the programmers to build in the logic to make correct decisions in their programs.

  • @ElvenSpellmaker
    @ElvenSpellmaker 2 years ago +1

    _"It's just a tiny little prick, let's see if my wife... "_
    Oh I thought this was going in another direction XD

  • @andonel
    @andonel 2 years ago +1

    Kinda feels like a solution looking for a problem

  • @yvonnec3333
    @yvonnec3333 2 years ago +3

    Love your channel!

  • @adamplace1414
    @adamplace1414 2 years ago +3

    This is the big question with self-driving cars. As good as they are, some accidents are unavoidable, and they may have to decide whom to injure: the kid who jumped out in front of the car, or the passenger inside it.

    • @westonding8953
      @westonding8953 2 years ago

      I still can’t quite see that scenario because it’s unlikely the passenger will be harmed in the process. But you have a very good point. There are many scenarios to think about.

    • @3nertia
      @3nertia 2 years ago +2

      @@westonding8953 What if missing the kid would fatally slam you into an embankment or concrete pillar or something? Lol

    • @3nertia
      @3nertia 2 years ago

      I think it would prioritize the kid if it's just using probabilities: the kid has longer to contribute a net positive effect than someone who is already past maturity.

    • @Bootleg_Jones
      @Bootleg_Jones 2 years ago +1

      @@3nertia generally an AI would be incapable of making that kind of in-depth and subjective value judgement (at least for now). Decisions like that are generally decided by humans (lawyers, an ethics board, and the ai engineers), and they would likely prioritize whichever option is least likely to result in an expensive lawsuit for their company, with some consideration given to PR impact.

    • @3nertia
      @3nertia 2 years ago

      @@Bootleg_Jones The government literally uses algorithms with this kind of precision. An AI doctor is more accurate than any 3 human doctors combined as well, at both diagnostics and surgery :)
      That is a fair consideration though, if programmers have lawyers breathing down their necks, the code is gonna get *messy*
      I'm speaking decidedly from a more utopian society where we care more about life than we do about money though - I know, it's a pipedream ...

  • @alech9418
    @alech9418 2 years ago +1

    I have been diabetic since I was 12. Your fear of the lancet was rather amusing.

  • @scratchpad7954
    @scratchpad7954 2 years ago

    When you mentioned that your Anti-DAX could _choose_ to do harm if it wanted to, it brought up a fascinating thought experiment: in the not-too-distant future and in the hands of a diabetic, with a futuristic gauze pad lined with blood glucose sensors that could communicate over the internet to the patient's healthcare provider, Anti-DAX could end up inadvertently _following_ the First Law of Robotics by preventing medical harm to its diabetic owner by summoning medical assistance to the diabetic patient.

  • @goranjosic
    @goranjosic 2 years ago +12

    This seems like cheap advertising to me, which I would never expect on this channel!
    And this project is likely to fail like most robots of the last 2 years, which are never completed into a finished product, or in the end are so bad that people are immediately disappointed.
    Knowing what level AI is at today (even on really strong machines), I'm pretty sure this is another crappy and useless robot that will fail quickly, and people will lose money.
    Edit:
    _The investment totally smells like a scam_

    • @3nertia
      @3nertia 2 years ago +3

      When something says "Invest in ..." it's an automatic red flag for me heh
      Some dude is just operating the robot remotely xD

    • @santypk5
      @santypk5 2 years ago

      Yeah, it was a 6-minute-10-second ad

    • @_John_P
      @_John_P 2 years ago +2

      It's a half-step toward the future; the fact that it can navigate on its own was only possible due to the innumerable failed and useless prototypes that preceded it. There are plenty of people with the means who would happily get one just for the novelty and to help pave the way for something better within a decade.

  • @GetMoGaming
    @GetMoGaming 2 years ago +3

    The operative word here is "CHOOSE": the robot doesn't choose anything, it just executes the instructions you gave it. Most likely the outcome comes from a random number generator, which is not really random, so it's effectively harming or not harming in a predictable sequence. "Choose to harm" implies free will, intent, and a basic understanding of cause and effect, none of which are present in this example. Not a bad video though. I love Isaac Asimov's sci-fi collections. Would have liked an explanation of the instructions or a display of the code, though.

  • @mothman807
    @mothman807 2 years ago

    The ancient axiom of "it's just a guy in there" holds true yet again

  • @DevoutSkeptic
    @DevoutSkeptic 2 years ago +1

    2:17 "It's just a tiny little prick. Let's see if my wife dares to do it."

  • @ripsaebri8082
    @ripsaebri8082 2 years ago +4

    I love your vids, but dangit, I thought this was actually going to have to do with Asimov's laws or AI. First off, I think the laws were more of a warning to show how ineffective they could be; the laws are only as strong as your weakest line of code or your cheapest sensor. And second, I don't think the robot chose to do anything besides run maybe a random number generator. I mean, define 'choice', though; it can only do what you wrote it to do.
    We are much the same as the robots: we have internal code and external stimuli that we just reactively spit code out at, within our parameters.

  • @erennotyeager1225
    @erennotyeager1225 2 years ago +3

    First!!!!!! Hurray🥳🥳

  • @scottballard327
    @scottballard327 2 years ago +1

    3:59 Dax, are you nice? *Looks at him* Yes. Then looks at camera, shaking head, saying no. IDK if he is friendly, guys!

  • @pokey5509
    @pokey5509 2 years ago +2

    AL: Don't worry I'm not injured, it's just a tiny little prick.
    Me: are you talking about your finger? Or the robot?

  • @Memer9456
    @Memer9456 2 years ago +1

    Me: You can't break those laws.
    Him: Yeah I can.
    Me: Yep!

  • @salindanandasena5326
    @salindanandasena5326 2 years ago +1

    1:20 peace was never an option!

  • @HansBezemer
    @HansBezemer 2 years ago +1

    "It's just a tiny little prick." Are you talking about the robot? That's not nice!

  • @jessicaadams7907
    @jessicaadams7907 2 years ago +1

    The nugget of doom.

  • @jsketchie2
    @jsketchie2 2 years ago +1

    if WALL-E and E.V.E. had a kid, it would look (and operate) just like Dax!

  • @YellowCyanXY
    @YellowCyanXY 2 years ago

    3:59 THAT IS SO CUTE THOUGH

  • @DG.Gamezz
    @DG.Gamezz 2 years ago +1

    1:52 this guy has balls of steel 2:05

  • @NoSuchStrings
    @NoSuchStrings 2 years ago

    The anxiety when you place your finger on it.

  • @alexandrepv
    @alexandrepv 2 years ago +1

    Nice take ;) In his books, Asimov likes to play with exactly those "foggy" areas, where you don't know how the robot would react in an edge case. The "laws" in the books are a built-in safety mechanism that filters every action the robots "decide" to take: a servo can only move if the result of that movement won't result in human injury, or in a way that lets a human come to harm, and so on. I would raise my left eyebrow quite high, however, if you programmed in those laws and still got pricked O_O

  • @memegodjoker
    @memegodjoker 2 years ago +1

    Action lab + I, Robot = all my knowledge on robots

  • @gabetheborkingdog5985
    @gabetheborkingdog5985 2 years ago +2

    So the first robot made to disobey the laws also chose to be a simp. Noice

  • @Ronin03
    @Ronin03 2 years ago

    Grandma @ the end is my serotonin.

  • @noct2260
    @noct2260 2 years ago

    His wife's reaction when he says, "So, this is a robot I built." 🤣🤣🤣

  • @gaiustesla9324
    @gaiustesla9324 2 years ago

    blaming a robot for harming you is like blaming a rock for tripping you up.

  • @nou5440
    @nou5440 2 years ago

    TLDR:
    man gets hurt by RNG then waters a friendly robot

  • @LuckyIStar
    @LuckyIStar 2 years ago

    James is suddenly skating that line between Dr. Light and Dr. Wily.

  • @Bitcosb
    @Bitcosb 2 years ago +1

    Surviving time traveler here; this is foreshadowing the imminent future.

  • @MarkusIlGatto
    @MarkusIlGatto 2 years ago

    2075: Say hello to the new judge and executioner pack

  • @Basically_Im_invisible
    @Basically_Im_invisible 2 years ago

    4:01 that's the most generous way I have ever seen a yes

  • @Chef_PC
    @Chef_PC 2 years ago +1

    Not changing that lancet is probably one of the most egregious errors in this.

  • @thezarreport
    @thezarreport 2 years ago

    "Its just a tiny little prick" 🤣

  • @idiotburns
    @idiotburns 2 years ago +1

    The only thing I am interested in, when it comes to an assistant robot, is the efficiency and efficacy of its design, build, and assembly, and whether it increases or decreases overall work output.

    • @_John_P
      @_John_P 2 years ago +1

      With an aging global population, there's more demand for care, fewer children to help, and few people with the vocation, so efficient or not, they fill the gap.

  • @phoenixstormjr.110
    @phoenixstormjr.110 2 years ago

    This is the peak of human intelligence. Making something to intentionally harm yourself.

  • @terryenby2304
    @terryenby2304 2 years ago +1

    I want my own Dax!!! I need a carer for when my carer and family aren’t here! 🤖

  • @ATOM-vv3xu
    @ATOM-vv3xu 2 years ago

    Bruh, I hadn't even thought about physical harm...

  • @pulsar5845
    @pulsar5845 2 years ago +1

    If you put a flower in place of the pin, then it would mean the complete opposite. It would be authentically intentional only if the robot were aware of what it's holding and whom it's sensing. This isn't intentional; it's a pin lottery.

  • @Wilfoe
    @Wilfoe 2 years ago

    This has to be one of my favorite videos of yours yet. Philosophy is always a fun subject for me. Aside from that, it's great to see people taking steps to ensure that robots are more trusted!

  • @kujo62
    @kujo62 2 years ago +2

    Arrest that robot

  • @Bit-while_going
    @Bit-while_going 2 years ago

    "It's just a tiny little prick."
    Robot brain: [Ha! I made him say it!]

  • @zombe0
    @zombe0 2 years ago

    “it’s just a tiny little prick”
    Sad robot noises

  • @mattwharton5939
    @mattwharton5939 2 years ago

    “I’m not injured, it’s just a tiny little prick” a bit harsh seeing as you gave him the option to harm you 😅

  • @ImmortalAbsol
    @ImmortalAbsol 2 years ago +1

    You'd have to program in the three laws for it to break them.