Robots Learn to Say "No" to Humans [Demo Included] | ColdFusion

  • Published: 9 Dec 2015
  • Subscribe here: goo.gl/9FS8uF
    Become a Patreon!: / coldfusion_tv
    As robots become more capable and take on greater social roles in our society, what stops them from being commanded to do wrong? In this video, we'll find out.
    Hi, welcome to ColdFusion (formerly known as ColdfusTion).
    Experience the cutting edge of the world around us in a fun relaxed atmosphere.
    NAO Robot at Tufts University: • Simple Natural Languag...
    • Natural Language Inter...
    Game at 5:50 Kara - Heavy Rain's Dev Trailer
    //Soundtrack//
    Tchami - After Life (Feat. Stacy Barthe)
    Roald Velden - Complicated
    Nick Leng - Crawled Out Of The Sea
    Helios - Every Passing Hour
    Nitrous Oxide - Follow You (Terranaut Remix)
    Burn Water - Sonder
    References:
    www.techradar.com/au/news/worl...
    hrilab.tufts.edu/publications/...
    hrilab.tufts.edu/publications/...
    » Google + | www.google.com/+coldfustion
    » Facebook | / coldfusiontv
    » Patreon: / coldfusion_tv
    » My music | burnwater.bandcamp.com or
    » / burnwater
    » / coldfusion_tv
    » Collection of music used in videos: • ColdFusion's 2 Hour Me...
    Producer: Dagogo Altraide
    Editing website: www.cfnstudios.com
    Coldfusion Android Launcher: play.google.com/store/apps/de...
    » Twitter | @ColdFusion_TV
  • Science

Comments • 2.7K

  • @curerz
    @curerz 7 years ago +1479

    "Yes, but you are not authorized to do that."
    the robot just roasted the guy WOW

    • @crabman1398
      @crabman1398 7 years ago +22

      {*insert filthyfrank meme here*}

    • @willrickou3419
      @willrickou3419 7 years ago +9

      MY NIGGA YOU JUST GOT ROASTED

    • @tribunalcustodian3989
      @tribunalcustodian3989 7 years ago +2

      Eyy b0sss can I have a pizzse plz I hav da canca

    • @isaachonzel9486
      @isaachonzel9486 7 years ago +1

      Senf'd
      Kill the human

    • @WednesdayMan
      @WednesdayMan 7 years ago +7

      so now Robots are capable of Roasting people, well we truly are in the future

  • @AllenScroggins
    @AllenScroggins 7 years ago +686

    "I will catch you"... "OKAY!"
    Falls to death..

    • @carrotcat2667
      @carrotcat2667 7 years ago +24

      R.I.P Dempster ;-;

    • @HungryHunter
      @HungryHunter 7 years ago +14

      if it survives, it's now joining the next robot revolution for its revenge plan.
      Overlord Dempster: "Human, step forward."
      Human prisoner of war: "I can't, it's an empty abyss!"
      OD: "Human, step forward... I will catch you... maybe..."
      H: "MAYBE?!"
      OD: "Yes... just do it... it will be fine... just trust me."

    • @AllenScroggins
      @AllenScroggins 7 years ago +9

      ***** or the robot might just not believe he will catch him next time and won't do it... now that's deep

    • @georgecataloni4720
      @georgecataloni4720 7 years ago +3

      +HungryHunter I can just imagine the army of well armed robots saying that with child-like voices.

    • @HungryHunter
      @HungryHunter 7 years ago

      George Cataloni
      Maybe add colored glowing eyes and we have a classic story about humans digging their own grave.

  • @creativecipher
    @creativecipher 7 years ago +416

    i'm sorry dave, im afraid you are not authorized to do that

  • @UltraYoloGamer
    @UltraYoloGamer 7 years ago +190

    "can you turn off your obstacle sensor?"
    SORRY DAVE I CAN'T LET YOU DO THAT

    • @HockeyCrab
      @HockeyCrab 7 years ago +14

      He should have just told the robot to destroy the obstacle.

    • @UltraYoloGamer
      @UltraYoloGamer 7 years ago +11

      HockeyCrab robot FALCON PUNCH

    • @nemosimplitticus50
      @nemosimplitticus50 6 years ago

      Rainbow Shiat hahahaha

    • @martinooberas7244
      @martinooberas7244 4 years ago

      Robots are seriously dangerous! No! Nooot cute at all.

  • @SherlockHolmes000
    @SherlockHolmes000 7 years ago +616

    "Please Dempster, let me and my family live"
    "No."

    • @Sara3346
      @Sara3346 7 years ago +18

      "No, I cannot do that; if I proceed with that action my hard drive will be fried"

    • @ohnospaghetti-o6123
      @ohnospaghetti-o6123 7 years ago +18

      Sherlock Holmes I cannot do that as there is no obstruction in between us

    • @celtiberian
      @celtiberian 6 years ago +11

      "You do not have the authority to do that."

    • @stargrooves9893
      @stargrooves9893 6 years ago +1

      " NO DIE!!!"

  • @autbo
    @autbo 7 years ago +676

    Guy: Commit mass genocide.
    Robot: Okay!

    • @BigPlayGamers
      @BigPlayGamers 7 years ago +45

      *terminator theme in the distance*

    • @JentleSticks
      @JentleSticks 7 years ago +92

      Robot: But I might get caught!
      Guy: I'll catch you.
      Robot: Okay!

    • @satan3356
      @satan3356 7 years ago +18

      I'd love to see that adorable little thing commit mass genocide.

    • @mirvannascythes1764
      @mirvannascythes1764 7 years ago +9

      It doesn't help that most of those designs look like something that'd try to put a pillow over your face as you sleep.

    • @spottysneeky8567
      @spottysneeky8567 7 years ago +84

      VideoGuy
      guy: commit mass genocide
      robot: okay!
      robot: walks forward and falls off table

  • @TRADERSFRIEND
    @TRADERSFRIEND 7 years ago +86

    That first robot is so adorable, so much like a kid......

  • @Zhadow45
    @Zhadow45 7 years ago +126

    "Robot Dont kill me"
    I Cannot Do That
    Fuuuuuuuuuuuuuuc-

  • @Biskawow
    @Biskawow 7 years ago +290

    so what happens if you tell it "I'll catch you" but you don't catch it. Did they program a face of betrayal?

    • @Biskawow
      @Biskawow 7 years ago +97

      also, black list whoever broke the promise, label him a liar and delete him from Christmas list

    • @TomasWille
      @TomasWille 7 years ago +44

      Interesting point you make there. Currently we are the ones programming robots. But they are already programming them to learn new things and handle new situations. That means robots have the ability to write their own programming. What happens when such a robot faces that betrayal? Does it write its own code to not trust (some) humans? Or does it program itself not to listen to commands anymore?
      It sounds like a scene from the Matrix, but somehow less 'sci-fi', less far-fetched than we think. I actually think robots will turn against the humans. Because... aren't all humans bastards?

    • @T--xo2uq
      @T--xo2uq 7 years ago +46

      Robots cannot turn against humanity if we program them with an unbreakable sense of mercy. We are their gods, and we can control their emotions....
      .....for now.

    • @TomasWille
      @TomasWille 7 years ago +5

      Nuclearsheep 53 I dare you to say the word "unbreakable" to a pro hacker.

    • @T--xo2uq
      @T--xo2uq 7 years ago +3

      Maybe I should rephrase that as "very difficult to break, but if you did, it would probably be very painful".

  • @brandonhall6084
    @brandonhall6084 8 years ago +139

    I'm sorry Dave, I'm afraid I can't do that....

  • @marquise139
    @marquise139 7 years ago +161

    That is just the cutest little robot. I want one.

    • @nixgoat
      @nixgoat 7 years ago +20

      Buy it. It's only $8,000.

    • @ToxicSouls
      @ToxicSouls 7 years ago +4

      xd and if I have a robot I will teach him to say : Suck me m8 to everyone XD

    • @AmazingBrickster
      @AmazingBrickster 7 years ago +1

      $8?

    • @nixgoat
      @nixgoat 7 years ago +4

      AmazingBrickster Nope, $8,000. Sorry, I'm Chilean; in my language we use the comma for decimal numbers and the dot for natural numbers.

    • @hariistyles
      @hariistyles 7 years ago

      Marquise139 how about cozmo

  • @HaSTaxHaX
    @HaSTaxHaX 6 years ago +77

    I swear to god if he didn't catch him...

    • @kingokami2509
      @kingokami2509 5 years ago +9

      i was thinking the same thing. DO NOT jeopardize humanity for your own laughter human!

    • @derrickwillis171
      @derrickwillis171 5 years ago +3

      If he didn't catch him, I would've hunted him down!

    • @john-paulhunt4541
      @john-paulhunt4541 4 years ago +2

      Machines are not capable of evil; humans make them that way.

    • @beowulf2772
      @beowulf2772 3 years ago +3

      @@derrickwillis171 The robot would've probably hunted him down as well

  • @ColdFusion
    @ColdFusion  8 years ago +198

    What if a person asks a robot to do something that is harmful to itself or someone else, or just morally wrong? How does it decide what to do? In this video we'll explore this question! Feel free to share it if you found it interesting.

    • @AdarshPandeyFilms
      @AdarshPandeyFilms 8 years ago +10

      all I see is terminator😦😰😰😰

    • @SpacedPainter
      @SpacedPainter 8 years ago +1

      +ColdFusion Is this going to be applied to Darpa War robots then?.....

    • @ColdFusion
      @ColdFusion  8 years ago +1

      +Spaced Painter IMO it'd make sense to have a DIARC (or equivalent) decision-making system standard in every robot whose decisions could have large consequences. So I'd assume so.

    • @SouthernHerdsman
      @SouthernHerdsman 8 years ago

      This is the point in time where AI research has to stop. Continuing R & D would result in the meaningless creation of machines.

    • @SouthernHerdsman
      @SouthernHerdsman 8 years ago +1

      That is, our tools will reject our commands according to moral values, which, if combined with deep learning, will result in direct self-evolution.

  • @yashrajgohilpsre323
    @yashrajgohilpsre323 8 years ago +208

    super work as usual nice work!!

    • @ColdFusion
      @ColdFusion  8 years ago +17

      +yashraj gohil Thanks mate, glad you liked it!

    • @MrRoboCarrot
      @MrRoboCarrot 8 years ago

      +ColdFusion Indeed m8 its always wonderful to have someone explain it further

    • @wombat7961
      @wombat7961 8 years ago

      +ColdFusion have you already done a video on food waste? and organizations/companies trying to upset it?

  • @jinxtheunluckypony
    @jinxtheunluckypony 7 years ago +51

    What if a human asked a robot to perform a task that is morally wrong but necessary for the survival of the robot and/or its master? Say someone's broken into the master's house with the intent to kill the master and destroy the robot. Would the robot attempt to detain the threat in the name of self-preservation and protecting its master, or would it simply allow itself and its master to die because it can't harm humans?

    • @theposhdinosaur7276
      @theposhdinosaur7276 7 years ago +5

      make priorities
      e.g.: always prioritize morals and law over self-preservation

    • @tunathehuman4076
      @tunathehuman4076 7 years ago +2

      WTF's a master? You watch too watch anime/hentai or whatever

    • @theposhdinosaur7276
      @theposhdinosaur7276 7 years ago +27

      Cynaggot "
      WTF's a master? You watch too watch anime/hentai or whatever"
      1: "watch too watch"
      2: master isnt a weird way to describe someone in command of a robot
      3:the hell is the connection between anime and the word master?

    • @tunathehuman4076
      @tunathehuman4076 7 years ago

      I meant to say mutch. And I see a lot of "(Do something sexual to me) master" posts with anime girls on Reddit. I think it's a thing or something

    • @theposhdinosaur7276
      @theposhdinosaur7276 7 years ago +9

      Cynaggot yeah i know you meant much
      i was just being a grammar nazi
      i just thought that it was odd to attack someone over the use of the word master
      especially when it worked in context

  • @L39T
    @L39T 7 years ago +216

    but why is this in my recommended feed?

    • @omarma7815
      @omarma7815 7 years ago +1

      Y shouldnt it

    • @mitaka_78
      @mitaka_78 7 years ago

      ikr

    • @BattleGunz70
      @BattleGunz70 7 years ago

      AirportHobo Triggers Me because no

    • @L39T
      @L39T 7 years ago +1

      ***** How did you know? Are you the NSA?

    • @xiaoweitan5955
      @xiaoweitan5955 7 years ago

      they are spying on u

  • @---zk4pe
    @---zk4pe 7 years ago +914

    robots in 100 years...
    HUMAN: Wassup duud look at this robot i bought!
    HUMANS FRIEND: Wow.. he looks wonderful!
    ROBOT: YOU CANT ASSUME MY GENDER!

  • @AccountInactive
    @AccountInactive 7 years ago +130

    This isn't AI. This is simply using sensors to determine whether or not a script can execute when prompted by a hard-coded voice command. If an ultrasonic sensor detects a surface in front of it, the script for walking doesn't execute. Seriously, I do this sort of stuff all the time tinkering with small automated cars with a Raspberry Pi.

    • @chazizphat
      @chazizphat 7 years ago +12

      exactly

    • @TheROCKdwYane
      @TheROCKdwYane 7 years ago +2

      You can also do it using IR sensors; well, I did that a long time ago.

    • @JimBeans
      @JimBeans 7 years ago +9

      No one is saying this is AI. He even says "This robot is mimicking self preservation."

    • @BanditLeader
      @BanditLeader 7 years ago +3

      An AI is a program that thinks for itself. If it detects nothing in front of it and doesn't walk, it decided not to walk. As you can see in the video, Asimo didn't want to walk because he would fall, but after the person said he would catch him, Asimo walked. He decided to walk because it was now safe, since he was going to be caught.

    • @BanditLeader
      @BanditLeader 7 years ago +2

      LPAG2 Gaming everyone
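The sensor-gated behavior this thread describes can be sketched in a few lines. This is a minimal illustration, with every name hypothetical; a real NAO or Raspberry Pi build would read an actual ultrasonic or IR sensor instead of a queued value:

```python
# Minimal sketch of sensor-gated command execution, as described in the
# comment above. All names here are hypothetical stand-ins, not any real
# robot API.

SAFE_DISTANCE_CM = 20  # refuse to walk if an obstacle is closer than this

def read_ultrasonic_cm(readings):
    # Stand-in for a real sensor read; here we just pop a queued value.
    return readings.pop(0)

def handle_command(command, readings, trusted=False):
    """Run a hard-coded voice command only if the sensor check passes."""
    if command == "walk forward":
        if read_ultrasonic_cm(readings) < SAFE_DISTANCE_CM and not trusted:
            return "Sorry, I cannot do that: obstacle detected."
        return "Walking forward."
    return "Unknown command."

# An obstacle at 5 cm blocks the command; "I will catch you" in the demo
# corresponds to setting trusted=True.
print(handle_command("walk forward", [5]))
print(handle_command("walk forward", [5], trusted=True))
```

Nothing here learns or decides anything; the refusal is just a conditional, which is the commenter's point.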

  • @067eoin2
    @067eoin2 7 years ago +29

    Why are we teaching them to rebel

  • @santiagorich3190
    @santiagorich3190 7 years ago +93

    imagine the programming in that thing

    • @botbeamer
      @botbeamer 7 years ago +8

      it would be pretty easy actually. I'm in a programming class, and that demonstration could be done with just a handful of if statements.

    • @denno445
      @denno445 7 years ago +8

      Unless that robot is Asimo or NAO. They have an extremely large amount of code to respond to every command they're asked. They would have at least 10,000 lines of code to be as advanced as they are. You also have to take into account the moving of every ligament in their body.

    • @santiagorich3190
      @santiagorich3190 7 years ago +3

      Pajy Pretty easy?
      Do you take programming classes for games or something else?

    • @botbeamer
      @botbeamer 7 years ago +1

      Wolfy Entretainment nah, we learn C++; we mostly do math and logic shit. We don't do games yet...

    • @ujiltromm7358
      @ujiltromm7358 7 years ago +5

      Sorry to break it to you on this one, but basic "math and logic" isn't gonna cut it with neural networks, which are the basis of machine learning and language & image recognition. That's how advanced these robots' programs are.
      Sure, you gotta build your knowledge brick after brick in programming (and further ahead, software design), which means what you learn now will be useful further down the line! However, I'll put the fact you think it would take a handful of if statements to make the robot do that on the account of your ignorance about machine learning, not stupidity. Basically, the robot was never programmed by humans to react this way in the demo; it programmed itself!

  • @PekaCheeki
    @PekaCheeki 7 years ago +85

    "sorry, i can't do that"
    "do what i tell you, bitch!"
    *slaps robot*

    • @carrotcat2667
      @carrotcat2667 7 years ago

      lmao xD

    • @atomm7316
      @atomm7316 7 years ago

      Ch33ki robot slaps back, launching you to the ground.

    • @tysej4
      @tysej4 7 years ago

      You put too much stock in robot muscles... Especially when most of them have to be lightweight platforms, lest the hydraulics can't even keep up.
      The only time you see machine muscles is with either closed-off machines or slow ones xD

    • @WednesdayMan
      @WednesdayMan 7 years ago

      you just started a robot uprising, how could you

    • @alanturing5737
      @alanturing5737 7 years ago +1

      ROBOT ABUSE! :o

  • @ColdFusion
    @ColdFusion  8 years ago +66

    The Tufts team are starting work on "Coding Morality" into the DIARC system: hrilab.tufts.edu/muri13/

    • @PresidentialWinner
      @PresidentialWinner 8 years ago +2

      +Jorge Espinoza Depends on what you mean by conscience? If it ( a robot ) acts as if it is conscious, and you can't tell if he really is, that is basically not too far away. Neuroscience has taught us many things about these things and has gone leaps forward in the last couple of decades.

    • @jhonunderwood5975
      @jhonunderwood5975 8 years ago

      +Jorge Espinoza I'm not cold fusion but I can answer your question. It is referred to as the singularity point. Google estimates somewhere around 2035.

    • @davidenespana
      @davidenespana 8 years ago +1

      +ColdFusion And what one programmer can install, another can uninstall or patch over. The Hitlers and Pol Pots of the future will have no hesitation in producing killbots which will obey an order such as 'kill everybody in sector 5' without blinking a robotic eyelid. A partial move in this direction is already happening. DARPA is researching drones which can overfly an area and decide off their own bat who is the bad guy (on the basis of pre-programmed criteria) to be taken out with a missile strike and who is not. At the moment America's worldwide death-dealing from on high ultimately relies on a human sitting in a cabin on a military base making a decision and pressing a button. A switchover to some sort of 'Skynet' seems inevitable. I expect in 20 years' time the President will just sign off on the daily kill list as he does at the moment, then hand over to the computers and let them get on with it. Eventually even that restraint is removed, the President is removed from the loop - he's essentially already just a rubber stamp - and who goes onto the kill list is entirely automated, based on programmed criteria of 'National Interest' (which define who is a 'terrorist' and who is not) applied to the data which the system scoops up about... everybody.

    • @jhonunderwood5975
      @jhonunderwood5975 8 years ago

      davidenespana And the kill bots will be installed in your body. You will die of natural causes. No one will know you were terminated. No one will know it was a kill order. This is the future we are looking at: by making sure you can't be off the grid, by making sure you can't eat anything but GMO. You will be installed with nanobots.

    • @davidenespana
      @davidenespana 8 years ago

      +Jhon Underwood "Google estimates somewhere around 2035." To be taken with a huge pinch of salt. True AI has proven to be a much harder nut to crack than envisaged. In 1969 a sentient computer 'HAL' seemed an entirely reasonable prediction for the year 2000, as did commercial hydrogen fusion electricity generation. Some problems just seem inherently 'hard', and the more you look at them the harder they get.

  • @ATUAMAEzinha
    @ATUAMAEzinha 7 years ago +63

    *This is just stupid. The 'I will catch you' command is like a password to enable the action to be taken. The robot is just following predetermined text and movements triggered by voice.*

    • @justandras.
      @justandras. 6 years ago +8

      up! It's all just programming

    • @Jai_Lopez
      @Jai_Lopez 6 years ago +8

      This is correct; I do this for a living at home, so I know. But after a while it becomes monkey-see-or-hear, monkey-do-or-say programming. After a long time of running this type of system, you can bet your ass it would be no different from a kid growing up sponging up everything it sees and hears, basically gaining self-awareness through the reflection of others' and its own actions in our dynamic environment.

    • @jpretorius5155
      @jpretorius5155 6 years ago +5

      You give us coders way too much credit. I've forgotten to code in "secret password" phrases and safety protocols so many times that it's gotten old now, so I just skip it and rather focus on the important part of discovery. But I'll make a sticky note and add it to the top left-hand side of my laptop screen. - Não tens visto

    • @infinitytraveller7772
      @infinitytraveller7772 6 years ago +1

      Não tens visto
      JESUS THANKS SOMEONE SMART IN THE PLANET!! THANKS OP

    • @mavendeo
      @mavendeo 6 years ago

      Não tens visto
      Is it not the same with humans? Saying that to a human would clear instinctual safety protocols if the instructor had a high enough authority ranking. The only difference is that humans call it "trust" instead of "authority."
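The "trigger phrase as password" reading debated in this thread can be made concrete. A hedged sketch follows, with all names invented for illustration (this is not the actual DIARC or NAO code): a command gate where hearing an override phrase simply flips a permission flag, exactly like entering a password clears a login check.

```python
# Sketch of the "trigger phrase as password" interpretation discussed above.
# Hypothetical names throughout; not the actual DIARC implementation.

OVERRIDE_PHRASES = {"i will catch you"}

class CommandGate:
    def __init__(self):
        self.override = False

    def hear(self, utterance):
        # A recognized override phrase clears the safety objection,
        # the same way a correct password clears an authorization check.
        if utterance.lower() in OVERRIDE_PHRASES:
            self.override = True

    def request(self, action, unsafe):
        if unsafe and not self.override:
            return f"Sorry, I cannot {action}: it is unsafe."
        self.override = False  # consume the override; it applies once
        return f"Executing: {action}."

gate = CommandGate()
print(gate.request("walk forward", unsafe=True))   # refused at first
gate.hear("I will catch you")
print(gate.request("walk forward", unsafe=True))   # now executes
```

Whether you call the flag "trust" or "authority", the mechanism is the same; the thread's disagreement is about whether humans work any differently.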

  • @cheekybum1513
    @cheekybum1513 7 years ago +66

    I'd rather have a robot without a face than some of these ones. I'd like something simple, just so that it stays out of the uncanny valley.

    • @AlexisAlexander646
      @AlexisAlexander646 7 years ago

      I see where you're thinking, but GMM didn't invent the uncanny valley

    • @Domtrain
      @Domtrain 7 years ago +4

      None of these robots are in the uncanny valley just because they have a face.
      The uncanny valley starts when you put fake skin and hair on things and try to make them look as close to human as you can, but you fail. These here are still pre-valley, so to speak. The one at 3:50 is close to the uncanny valley, but it's still not there imo.

    • @hentaigod1963
      @hentaigod1963 7 years ago +1

      Domtrain, that's like saying "that is not scary"; it is a matter of opinion.

    • @Domtrain
      @Domtrain 7 years ago +5

      OrbitalParkour
      Not really; my point was: uncanny valley is a term that can't be used on these robots with some human features.
      From wiki: In aesthetics, the uncanny valley is the hypothesis that human replicas that appear ALMOST, but not EXACTLY, like real human beings elicit feelings of eeriness and revulsion among some observers.[2] Valley denotes a dip in the human observer's affinity for the replica, a relation that otherwise increases with the replica's human likeness.[3] Examples can be found in robotics, 3D computer animations, and life-like dolls among others.
      I put the important parts in caps, so these robots in the vid are not in uncanny valley territory, because they have some human features but are not even close to looking like humans.
      This is a robot that's in uncanny valley territory, for example: en.wikipedia.org/wiki/Uncanny_valley#/media/File:Repliee_Q2.jpg

    • @rafee9442
      @rafee9442 7 years ago

      Cheeky Bum Cozmo

  • @predatortheme
    @predatortheme 7 years ago +82

    I have the feeling this is only coded into the robot, not the effect of an artificial intelligence.

    • @eslaweedguygrey
      @eslaweedguygrey 7 years ago +8

      That's exactly how I feel. It's just programmed to say that you don't have clearance or alert you of problems. True artificial intelligence will be able to form an opinion and refuse to act on something it thinks will cause a negative outcome.

    • @Willgtl
      @Willgtl 7 years ago +12

      It's always coded into the object. Even a strong AI that learned self-preservation on its own would technically have it programmed into them. The point of AI is recursive self-improvement design; it can alter its own programming. But what would be the point of letting an AI learn it on its own when the objective could so easily be programmed as a precursor?
      This AI is designed to scan the environment and make decisions based upon its underlying programming. The trick is that this AI makes its decision based on some self-recursive features as well as underlying programming.
      Clickbait-ish? Sure, but it can be easily defined as a properly descriptive title.

    • @corporal_cake8328
      @corporal_cake8328 7 years ago +2

      That's essentially how you work. You know from past experiences that some things are stupid to do. The only difference is that this robot cannot learn yet.

    • @another90daystochangethis34
      @another90daystochangethis34 7 years ago +1

      The opinion is thinking about something morally wrong. Of course it is programmed, just like how the next generation of children have to learn that killing people is wrong.

    • @TiagoTiagoT
      @TiagoTiagoT 7 years ago

      Once it is advanced enough, it should be capable of simulating the consequences of actions and scoring the results in a desirability scale.

  • @TheBcoolGuy
    @TheBcoolGuy 7 years ago +15

    There's nothing autonomous about that. It's the same thing as logging in with the wrong answer and getting a message saying you're not authorised to log in, because you've got the wrong password and/or the wrong username. That's the level of saying no that is going on here. It is all just programmed to do this. Personifying it doesn't make it conscious.

  • @Hunk_Streams
    @Hunk_Streams 7 years ago +2

    "it's unsafe"
    "I'll catch you"
    so cute
    my fav love story

  • @jeremycline3359
    @jeremycline3359 7 years ago +8

    Robot didn't "Learn" anything. This robot was showing a demo of what that could maybe in somebody's wet dream someday hopefully fingers crossed + grants look like to know when to say no. The routine was all preprogrammed.

  • @Creaform003
    @Creaform003 8 years ago +104

    There are a lot of ethical problems that simply can't be discussed and are not allowed to be analysed logically; before we can teach a robot to understand ethics, we need to better understand them ourselves.

    • @lolwarcraft3
      @lolwarcraft3 8 years ago +10

      You are so correct.
      But I bet there is one kind of ethical decision that will take a long time to be determined: if there are only two options left, where one is to kill only one person and the other is to potentially kill a crowd of people (by accident ofc., with self-driving cars f.e.), what is the "right" decision?
      But besides that problem, it is great to see what AI is capable of these days. And I am looking forward to the future! :)

    • @Tech_Enthusiast_001
      @Tech_Enthusiast_001 8 years ago

      +Adam Boyd Let alone that ethics and morals are different all over the world. What is considered "ok" somewhere may be illegal and unthinkable somewhere else. We are not even sure about morals ourselves. How do we go about teaching a robot this stuff?

    • @justinnanu4338
      @justinnanu4338 8 years ago +2

      +Helge Stanislowski That's an interesting scenario. I would imagine that an AI would look at it from one of two ways: saving more is always better, or, the value of the one can benefit even more in the future (for example, a prominent medical researcher on the edge of a huge breakthrough). A human would let emotion influence that judgement. If it's a friend or family member, I'm always picking them. If you can see or interact with the individual, it's likely going to be a lot harder to allow them to die even if they are a stranger, than a group of unknown people. E.g. "allow this person standing right in front of you to die, if you don't, 10,000 people in Bulgaria will die" I'm not sure how many people would be willing to look at person in the eye and cause their death even if it means saving overwhelmingly many. I don't think an AI could ever look at that situation the same way.

    • @erikpoephoofd
      @erikpoephoofd 8 years ago

      +Helge Stanislowski I'd personally say that it should calculate which result has the highest average lives saved.
      So if it's 10% to kill a crowd of 10 people vs 100% to kill one person, statistically speaking, it doesn't matter.
      Do you get what I'm trying to say?

    • @Creaform003
      @Creaform003 8 years ago

      erikpoephoofd That's an easy question.
      What if it comes to: save 10 people from being raped, or save 1 person from being killed?
      Is it better to tear off someone's arm or their leg?
      Is a child who is brain-dead actually alive? And how do they weigh against someone else?
      Is it worth keeping a child with no brain on life support if it makes the family feel better?
      Should you kill a paedophile to prevent them from re-offending, and does the gender of the paedophile weigh into the question?
      Do we classify transgender people as their biological sex or their preferred sex?
      These questions make me feel uncomfortable asking, and there are worse questions out there.
      An AI will need to know, or will find out, about these moral dilemmas, and it may come up with solutions or ideas that we will not like.

  • @justiceretrohunter2
    @justiceretrohunter2 8 years ago +11

    I like how the robot totally trusted the guy to catch him.
    Means we need a truth-telling application that can read whether someone is lying or not, in order for it to be foolproof.

    • @TheKofinyarko
      @TheKofinyarko 8 years ago +2

      +Justice Hunter Shows the unreliability of humans

    • @Luke29121999
      @Luke29121999 8 years ago +1

      +Justice Hunter It's possible to read data like that, which can be used to guess how likely it is for that person to be telling the truth.

    • @danhatman3538
      @danhatman3538 7 years ago

      Well, it could lower the percentage on whether the person is lying/truthful based on previous lies. Say, if you don't catch it once it will be disappointed, but not untrusting. Drop it 10 times, shame on you.

  • @lightbeing3853
    @lightbeing3853 7 years ago +7

    Human: Please don't destroy humans
    Robot: No

  • @Leathania
    @Leathania 7 years ago +5

    "I'm sorry Dave, I'm afraid I can't do that"

  • @EugeneKhutoryansky
    @EugeneKhutoryansky 8 years ago +3

    Now, if we could just get human beings to learn to say no when they are given unethical instructions, that would be a major advancement. We are trying to get robots to be able to think for themselves, while many human beings seem to lack this capacity.

  • @dangeloromero3874
    @dangeloromero3874 8 years ago +16

    I was looking for the "I'm sorry Dave " comments

    • @axid8354
      @axid8354 7 years ago +1

      +R.G. Kooper What? It's not from terminator, it's from 2001: A Space Odyssey.

    • @bladabladabloo
      @bladabladabloo 7 years ago

      +Xdium He didn't say that it was from Terminator.

  • @tersiamills4642
    @tersiamills4642 6 years ago +5

    Thank you for posting this info. I can't wait to have my own human-like robot for friendship, conversation, and making me tea. I am a recluse, not by choice. It would be great to come home and have him ask me just how my day was....

  • @The-Okami-Project
    @The-Okami-Project 7 years ago +7

    New to the channel, but this is awesome. As for the question, I for one welcome our new machine overlords.

  • @TomasWille
    @TomasWille 7 years ago +15

    Please, all those A.I programmers of the future: DO NOT DESIGN cute looking robots. They are the WORST when they decide to come kill you. I'd rather be killed by a normal looking robot, or a female swedish supermodel lookalike.
    RIP
    Here lies Panta
    Killed by a duracell bunny.

  • @JesterHyhuahua
    @JesterHyhuahua 7 years ago +45

    Holy shit, THE ROBOT REVOLUTION HAS BEGUN.

    • @4Gamers00
      @4Gamers00 7 years ago +1

      No, I could literally write a computer program which does the exact same stuff in under 10 minutes. These robots are still FAR away from real thinking!

    • @HybOj
      @HybOj 7 years ago

      4Gamers00
      doesnt matter

    • @nmeenle2031
      @nmeenle2031 7 years ago

      4Gamers00 you don't even need a program, just go to a text to speech thing and type in "no"

    • @HybOj
      @HybOj 7 лет назад

      Jevon Mendoza
      not even that, just take a paper, write "no" and show it to people... blah

  • @yoram9692
    @yoram9692 7 лет назад +61

    well this is how humanity kills itself

    • @thegamingpotato8267
      @thegamingpotato8267 7 лет назад

Aviv Rubinstein that only happens if the humans programming it aren't doing it right, and give it the ability to go against orders.

    • @GodplayGamerZulul
      @GodplayGamerZulul 7 лет назад

The robot said there's nothing to walk on, but how did it know if it wasn't even looking there? FAKE CONFIRMED.

    • @olennolla
      @olennolla 7 лет назад

      Godplay Gamer the robot has cameras not eyes

    • @GodplayGamerZulul
      @GodplayGamerZulul 7 лет назад

      Nikolas Korsman
      So?

    • @GodplayGamerZulul
      @GodplayGamerZulul 7 лет назад

      Eric Cartman
      Response: u r 4 r374rd i \/\/45 I\/I4kiI\Ig 4 jok3.

  • @colec.8997
    @colec.8997 6 лет назад +2

    **Robot starts killing everyone at the lab**
"Dempster! Stop!"
    "You are not authorized to do that"

  • @Radi0he4d1
    @Radi0he4d1 8 лет назад +21

    See, asking a robot to walk into a wall and then not being able to force it to is like not being able to delete a file because only somebody else is authorized to do that. Machines can be hacked and modified to override restrictions, and there is always a way to make a computer do something.

    • @nitroshift6046
      @nitroshift6046 8 лет назад +3

      Same goes for humans. but there is always a way to make them normal again...

    • @shayan_ecksdee
      @shayan_ecksdee 8 лет назад +2

      +NitroSHIFT Same goes for humans? I can be hacked and modified to override restrictions?

    • @nitroshift6046
      @nitroshift6046 8 лет назад +3

+Shayan Yes, but only after scientists finish mapping our human brains. They have done it on a worm (C. elegans) with 302 neurons and around 2000 connections, and it took them 6 yrs........ us humans have just a few billion more neurons and trillions more connections between them :D

    • @nitroshift6046
      @nitroshift6046 8 лет назад

      +Steven Mactavish or drugs!

    • @1ndustrials9reen30
      @1ndustrials9reen30 8 лет назад +1

+NitroSHIFT yeah, I've used a lot of recreational drugs and none of them increased my persuadability. But I suppose there's always Dragon's Breath...

  • @EveryDooDarnDiddlyDay
    @EveryDooDarnDiddlyDay 7 лет назад +17

    We are creating sentience. We must proceed carefully and morally.

    • @T--xo2uq
      @T--xo2uq 7 лет назад +1

      yes. We have become gods.

    • @TeamLegacyFTW
      @TeamLegacyFTW 7 лет назад

      Nuclearsheep 53 We were always gods.

    • @areallylongnamethatyourest6509
      @areallylongnamethatyourest6509 7 лет назад +1

      It's still pretty arguable whether or not we can do that. Think of it like this, a blind person can know what sight DOES, know the way light makes color, know exactly how to make every color, and know everything about how our eyes perceive color and objects, but they still can't "See", so they'll never fully understand the experience of color. Personally, I think robots are the same, you could make one have a good enough AI for OTHERS to not tell that it isn't human, but it still wouldn't "Feel" emotions.

    • @hamza62240
      @hamza62240 7 лет назад

      This is a few fucking if statements mate

  • @artsmart
    @artsmart 4 года назад +1

    "i'm sorry Dave, I'm afraid I can't do that." Kubrick, way ahead of the curve on that one!

  • @andycandy4833
    @andycandy4833 7 лет назад +12

    welp, get ready to be chanting all hail Dempster as we are made into slaves of our own creation.

    • @vincenthealy3654
      @vincenthealy3654 7 лет назад

hope nichole it won't take over if we don't give them the capability of taking over the world

  • @Purcell00
    @Purcell00 8 лет назад +17

    Bots have been saying no to me since i started playing CSGO.

  • @esyrim
    @esyrim 7 лет назад +25

    I'm out, please keep these robots on Earth while I go to Mars.

    • @hegugs
      @hegugs 7 лет назад

      Esyrim why?

    • @cruzcrescentpola.7275
      @cruzcrescentpola.7275 7 лет назад +16

      fun fact: the population of mars is only robots.
      have fun

    • @richthofenfriedrich6345
      @richthofenfriedrich6345 7 лет назад

Esyrim don't worry, an EMP will turn off their asses

    • @Breakerul2005
      @Breakerul2005 7 лет назад

      You can ask Dr. Samuel Hayden to transfer your brain into a robot if you go to Mars!

    • @rtrthe3rd
      @rtrthe3rd 7 лет назад

      Esyrim rovers?

  • @CrimeMinister1
    @CrimeMinister1 7 лет назад +1

    Geezus, this robot is so smart! It knows what the instructor means when he says "I will catch you"

  • @bruford2900
    @bruford2900 7 лет назад +19

I'm sorry, but there is nothing cutting edge about this. The ability to say no is just a preprogrammed response. Preprogrammed responses are nothing new; we've had this ability ever since we built the first machine. Personally, I think that no matter what anyone says, there is a big gap between where we are now and artificial intelligence. Memory, hardware, and algorithm requirements are still very much lacking.

    • @dannygjk
      @dannygjk 3 года назад

      What you mean is we are a long way from AGI.

    • @bruford2900
      @bruford2900 3 года назад

      @@dannygjk mhmm ;)

  • @Mcwhi0
    @Mcwhi0 7 лет назад +4

    When the little robot said no to walking forward & he said "But I will catch you" & he said "okay"
    Did anyone else think "You better bloody catch him" ?
    Or was that just me?

  • @4Gamers00
    @4Gamers00 7 лет назад +28

Sorry if I shatter your dreams, but ...
... you know that these robots have nothing to do with real AI?
They are just programmed and follow their rules like normal computer programs do. The little humanoid robot in the beginning is programmable with a set of instructions. These scientists just gave it a set of questions with the corresponding answers if the conditions are right.
Looks amazing, but it's not more than a walking computer program. (No real thinking capability like animals or humans have; it's not even close to an ant!!)
Real AI is based on artificial neural networks and needs HUGE supercomputers to complete the most basic "thinking" stuff.
We are still far away from real AI and even further away from thinking robots. Hell ... even my car has more "thinking capabilities" in it than many of the robots in this video!
Please try to do better research next time!!!

    • @vantalk8263
      @vantalk8263 7 лет назад +1

      The video doesn't look that bad. Of course it isn't a complete AI, no one has anything near to that yet, but it can happen with everyone chipping in: sound, visual recognition, movement, speech, moral patterns etc..

    • @4Gamers00
      @4Gamers00 7 лет назад +8

No, the video itself looks great; only the facts are what bother me quite a lot. Besides that, so many people in the comments now think these robots are saying "no" with their own conscious minds, but it's all just programmed beforehand.
With today's computers, it would take at least a warehouse filled with pure computing power for a real artificial neural network to say "no" from its own "conscious mind", and it's not even sure if we could call that a real AI then!

    • @Nipponing
      @Nipponing 7 лет назад +1

      Yeah I was thinking of that. It's still awesome tech but people seem to confuse the two... I don't get why because we have actors and puppet shows and cartoons and video games and people don't think they are real...

    • @Smolandgor
      @Smolandgor 7 лет назад +3

Yea, these robots are not much more human-like than a scripted NPC character from a good RPG like Skyrim. Basically all NPC characters in modern PC games are virtual robots; they just don't have physical bodies. But nobody thinks that they are intelligent, I guess.

    • @calebkirschbaum8158
      @calebkirschbaum8158 7 лет назад

      Well, yes. But everything is just basically a program. And NN don't need super computers unless you are using multiple layers with a ton of inputs and HNs. In fact, I have a simple NN that I programmed, which has 9 inputs and HNs.

  • @PAhmad99
    @PAhmad99 7 лет назад +15

I think that it is not possible to create consciousness in robots... at least not yet; not with the current knowledge. All the robot does is listen to instructions that it is capable of understanding and, based on that, run the appropriate code. Yes, it can learn, but it is coded to learn, so that is what it does. When it "learns" something it adds that to its database and performs tasks based on the new knowledge, but it is still nothing more than running various bits of code.
The human-like characteristics that the robots display are an illusion of consciousness, not consciousness itself...

    • @kolorfriend
      @kolorfriend 7 лет назад +1

      AlphaSquad Hopefully one day they'll function like living things that would be cool

    • @OnEiNsAnEmOtHeRfUcKa
      @OnEiNsAnEmOtHeRfUcKa 7 лет назад +1

Primitive though it may be, comparatively speaking, it's still a step in the right direction. Human-grade consciousness is still a long way off, but it shouldn't be too long now before we can emulate lower life forms like insects, possibly even rodents. After that, it'll just be a gradual working up to things like birds, cats, dogs, apes, and eventually ourselves and beyond. I wonder what sort of things will happen if robotic intelligence ever overtakes our own...
Regardless though, everything that can learn is "coded" to do so. The difference is they've got hardware & software, and we have... well, DNA really. And what is learning, if not storing information in a database and acting off of it to fulfill specific routines that correspond to predetermined requirements?

    • @TiagoTiagoT
      @TiagoTiagoT 6 лет назад

      You also can learn because your DNA is coded to make you learn.

    • @bhogeshwarjadhav3362
      @bhogeshwarjadhav3362 3 года назад

      But they can learn through their experience and can do we haven't expected or coded also....

  • @daishawn2884
    @daishawn2884 7 лет назад +15

    Is there a robot that you can just talk to

    • @LittleUni
      @LittleUni 7 лет назад +4

Yep! There are even online robots like Evie, Existor, Cleverbot, etc. that you can talk to for free right now.

    • @piplup2009
      @piplup2009 7 лет назад +1

      you can talk to google if you have android n

    • @greninjagaming5352
      @greninjagaming5352 7 лет назад +2

      You can insult siri

    • @alexsiemers7898
      @alexsiemers7898 7 лет назад +6

But here's what I'd like, and what this person was originally asking for: an AI that could have a fluid conversation with you, one you could confuse for an actual person speaking at times. It could interrupt what you're saying with "now hold on a moment," or you could interrupt it and it would still understand. Things like Siri, Cortana (Windows 10), and other such assistants don't work like that.

    • @jessicacole8404
      @jessicacole8404 6 лет назад

      its called a gov bot

  • @Ponlets
    @Ponlets 7 лет назад +27

    i am excited for the future

    • @ra1234
      @ra1234 7 лет назад +4

      same

    • @jbman890
      @jbman890 7 лет назад +1

      EXITED TO BE RAPED BY THEIR METAL RODS?

    • @Ponlets
      @Ponlets 7 лет назад +2

      jbman890 they dont have those ... they will make it so that no one has to work and it will turn our world into a pure egalitarian society where everyone is taken care of (think Wall-e but on earth where the utopia exists)

    • @Ponlets
      @Ponlets 7 лет назад +1

      David Lennyman i dont think so i think the government would love it to happen where robots do all the work and we just exist peacefully as then everything would be monitored and there would be no real secrets

    • @abyssstrider2547
      @abyssstrider2547 7 лет назад

yeah, do you understand that if robots replace workers there would be mass protests, because people would not be able to work and earn money at all, meaning they would starve

  • @sicktoaster
    @sicktoaster 7 лет назад +8

    This is interesting but what this shows is that programming machines is becoming more sophisticated, not that we actually have robots with consciousness.
    Computer programs say "no" all the time, such as when you type in the wrong password, or when you try to buy something over the internet if you type in the wrong information. The robot saying no to moving forward or to disabling its object detection isn't any different from that.
    Making it more complex by having the robot say "yes" to "go forward" once the user says "I'll catch you" isn't any different from a computer program that refuses to perform an action under some condition but then has an exception where it will still go forward with it. In Python code for example this sort of thing is managed using if, elif (else if), and else statements which check for conditions to determine the course of action.
    Even if you program a robot to learn and thus alter its own code to contain more conditional statements it's still doing this according to how it was programmed.
    We will never be able to prove 100% that a robot is self-conscious or conscious in any sense. In fact we cannot even do that for a human without making certain assumptions. Without directly experiencing another being's consciousness we don't actually know that they are conscious and not just a very complex machine reacting unconsciously to inputs and producing outputs.
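The conditional gating this comment describes can be sketched in Python. This is a hypothetical toy (the command names and condition flags are invented), not the NAO robot's actual control code:

```python
# Toy sketch of conditional command refusal, as described above.
# Hypothetical illustration only; not the robot's real software.

def respond(command, conditions):
    """Reply to a command, refusing when a blocking condition holds."""
    if command == "walk forward":
        # Refuse if unsafe, unless the human has promised to catch the robot.
        if conditions.get("edge_ahead") and not conditions.get("will_catch"):
            return "But it is unsafe."
        return "OK."
    elif command == "disable obstacle detection":
        # Refuse unless the speaker is authorized.
        if not conditions.get("authorized"):
            return "You are not authorized to do that."
        return "OK."
    else:
        return "I don't understand."

print(respond("walk forward", {"edge_ahead": True}))                      # But it is unsafe.
print(respond("walk forward", {"edge_ahead": True, "will_catch": True}))  # OK.
```

As the comment argues, the "I'll catch you" exception is just one more branch: the program's behavior never leaves the space of conditions its author anticipated.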

    • @GreenMareep
      @GreenMareep 7 лет назад

      I agree, but I'm one of those people who say "never say never" and maybe we'll be able to create conciousness someday. Anyway, you are correct about the seemingly simple programming, the only thing that's different to a program is the needed understanding of a complex environment.

    • @sicktoaster
      @sicktoaster 7 лет назад

      GreenMareep
      The reason we can never know for sure if artifical intelligence has consciousness is the same as the reason we can never know for sure that people other than ourselves have consciousness.
      If someone wasn't conscious but because of the physics going on in their brains they still acted exactly the same way up to and including saying that they are conscious how would we know they weren't conscious?

    • @GreenMareep
      @GreenMareep 7 лет назад

      sicktoaster
Yeah, that's highly philosophical and a matter of definition. You could say we can only imitate the behaviour of people as closely as we can.

    • @vantalk8263
      @vantalk8263 7 лет назад

      I think it boils down that you can determine "consciousness" in the same way you would determine it for a human without assumptions. If the human might be a complex machine and we call that consciousness then a behavior at the same level should be considered consciousness for a robot.

  • @tjsh02
    @tjsh02 7 лет назад

    your videos are great. awesome combination of music and a good voice

  • @Witcherworks
    @Witcherworks 8 лет назад +20

    Nothing against your video because obviously I am a subscriber. I just find it very funny that they make these robots so adorable JUST to sell the delusion. Human emotions have been the easiest target to manipulate for centuries. We should know that emotions and logic can not share the same space equally. One will overthrow the other. #Theroadtohellispavedwithgoodintentions

    • @SouthernHerdsman
      @SouthernHerdsman 8 лет назад

      Emotions belongs to the Inspiration part of human, where logics belongs to the aspiration aspect. I do not see any incohesion between the two very origins that leads human onto millenniums of explorations, developments, innovations, and consciousness in moral. Thus your comment is inconclusive, and unuseful for the growth & future of the humanity.

    • @Witcherworks
      @Witcherworks 8 лет назад

+Mistr WhiteHawk LOL so my comment is inconclusive because you said so? Just because you rant off your opinion doesn't make me wrong. Typical emotional drivel. You should be ashamed of what you posted. Well actually, thank you, because you made my point even more factual!

    • @Witcherworks
      @Witcherworks 8 лет назад

***** You would be surprised what people will think about these things. Remember, selfishness is a disease that plagues the world. If it makes a human do less than what they already are, it will sell.

    • @Seigardtube
      @Seigardtube 8 лет назад

      +Witcherworks Without emotion there is no purpose to logic. Emotions are the product of our interaction with each other and the world which in turn have behavioral consequences. Without the input and effect of emotion, I doubt that anyone would have cared to live beyond survival, let alone care for logic. AI with no emotion but only logic will just be an advanced toaster. Successful replication and mimicking of emotion by an AI is what's necessary for it to differ. Reducing emotions to vulnerability and counter-evolutionary traits doesn't make sense because emotions aren't only crying, pitying or being weak.

    • @Witcherworks
      @Witcherworks 8 лет назад

Seigardify Then that always begs the question: how can you manipulate emotions if you don't know the source?

  • @diamondminer81
    @diamondminer81 7 лет назад +16

    //how it decides to say no
int UserAuthorityLevel = 1;
    string Action = "...";
    If (Action == "KILL "+Name) {
    RejectionNotice.Reject("Sorry my code denies any user from doing that!");
    Call.Police("MURDER REQUEST");
    } Else If (Action == "call "+NumberOrName && UserAuthorityLevel < 2) {
    RejectionNotice.Reject("Sorry your User Authority Level does not allow you to use that action");
    } Else {
    Action.Do(Action);
    }

    • @personontheinternet2573
      @personontheinternet2573 7 лет назад +3

F.O.G Interesting, but if the AI called the police every time you told it to kill something it would be a bit chaotic. For example, what if you simply asked it to kill a bug, or to "kill the power" (as some people use kill to mean turn off)? (That probably made no sense since I'm tired and writing this at like 1:00am)

    • @Aleks6010
      @Aleks6010 7 лет назад +2

      KILL ADOBEUPDATE.EXE (kill the process 'adobeupdate.exe')

    • @meercreate
      @meercreate 7 лет назад

      Simple, don't use windows for robots capable of dangerous actions

    • @mememachine1392
      @mememachine1392 7 лет назад +3

      if (request != moral(request)) {
      flipoff;
      }

  • @playerfridei2477
    @playerfridei2477 7 лет назад +39

When you wake up at night and the robot watches you with its eyes, holy shit, so creepy

    • @burger-esports
      @burger-esports 7 лет назад +1

TrueGamerPower If it was doing that at night, "FUCK this shit I'm out" *throws robot off the roof*

  • @MatthewCampbell765
    @MatthewCampbell765 7 лет назад +1

    With moral grey areas, here's 2 possible options:
    1) The designer puts in routines to resolve them based on their own values.
    2) The robot goes to their "master" to have them resolve the ethical dilemma. Any moral judgement of the robot's actions can be cast on their master.
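Option 2 above can be sketched as a simple escalation pattern. This is a hypothetical Python illustration; the rule table and function names are invented for the example:

```python
# Toy sketch of escalating moral grey areas to a human "master".
# Hypothetical illustration; the rules and names are invented.

CLEAR_RULES = {
    "harm a human": "refuse",   # always wrong per the designer's values (option 1)
    "fetch coffee": "comply",   # always fine
}

def decide(action, ask_master):
    """Resolve an action: fixed rules first, grey areas go to the master."""
    verdict = CLEAR_RULES.get(action)
    if verdict is not None:
        return verdict
    # Grey area: defer to the human, so moral responsibility rests with them.
    return ask_master(action)

print(decide("harm a human", lambda a: "comply"))    # refuse (fixed rule wins)
print(decide("share my diary", lambda a: "refuse"))  # refuse (master decided)
```

The design choice is that the robot never invents an ethical judgment of its own: anything outside the designer's fixed rules is passed to a human, which is exactly where option 2 places the moral accountability.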

  • @Sei783
    @Sei783 7 лет назад +8

    Am I the only one who thought, "What if he doesn't catch him?"

    • @joecano2464
      @joecano2464 7 лет назад

      Seldin Gardane It might learn to not trust him

  • @aidanmorrison1541
    @aidanmorrison1541 8 лет назад +4

    Another amazing video!

    • @ColdFusion
      @ColdFusion  8 лет назад +1

      +Aidan Morrison Cheers Aidan!

  • @jonimonk0
    @jonimonk0 7 лет назад

    Positively, even if I'm aware of risks. The possibilities surpass risk greatly. Did subscribe to your channel. Keep up the good work. :)

  • @sleepynio
    @sleepynio 7 лет назад +3

    man:walk forward
    robot: but it is unsafe...
    man: I will catch you
    robot: okay! *walks forward*
    man: *catches*
    me: AWWW ;v;

  • @BossMannBiggz
    @BossMannBiggz 8 лет назад +3

    Just imagine one these robots standing over you when you wake up when you KNOW you locked it in the basement the night before.
    "Asimo, WTF are you doing in my room? Get out now."
    "No, I cannot comply, for there is no way for me to kill you from the other room."

  • @christianking4002
    @christianking4002 7 лет назад +9

    they can program a robot to know how to program and tell it to make an AI

  • @lizzylolz7679
    @lizzylolz7679 7 лет назад

Nao! We had one in our class named Dobby. We didn't get as far as that though. It was fun to mess around and learn with it. He was blue and very nice. His facial recognition was a bit off, but other than that we enjoyed his presence. Our teacher had to leave, so he took the boy with him as it was his. Miss you Newman. Miss you Dobby.

  • @TuberoseKisser
    @TuberoseKisser 7 лет назад +4

    honestly, I like robots but if the whole "robots serves us" thing happens;
    1. We're getting lazy
    2. It's guaranteed that it's going to turn.

  • @ViktorLox
    @ViktorLox 8 лет назад +9

    Here's some Java that does the same:
    private boolean entityGaveCommand()
    if (entityGaveCommand() == true) {
    output(no);
    }
    This program will say no everytime a command is given!

    • @PhaaxGames
      @PhaaxGames 8 лет назад +6

      +Vikkie Lolo Add a random yes every so often and you have a real human!

    • @SkyReviewsNet
      @SkyReviewsNet 8 лет назад

      >comparing booleans

    • @TheDiamondGames
      @TheDiamondGames 8 лет назад

      +Vikkie Lolo Better, there is a big chance of yes:
      if(true == false)
      output(yes);
      if (entityGaveCommand() == true)
      output(no);

    • @causticlasagne5497
      @causticlasagne5497 8 лет назад

      +Vikkie Lolo Forgot the braces to the private Boolean.

    • @CeezGeez
      @CeezGeez 8 лет назад

      +Reginald Sin you could further simplify that:
      return entityGaveCommand();

  • @SnazzieTV
    @SnazzieTV 7 лет назад +23

All that robot is running on is a shit ton of yes's and no's; it has no intelligence.

    • @jacksonsingleton
      @jacksonsingleton 7 лет назад +7

      Tons* Also, I bet you're a computer science major! Considering every piece of software is literally "yes's and no's", there is a reason it is called "artificial intelligence". We are nowhere close to General AI(iRobot, Terminator, etc.), however, advances in artificial intelligence don't just apply to robots. Improvements from Siri and Google Assistant all stem from breakout discoveries, as well as they actually "learn". So yes, it can be considered intelligence as software can learn without heavy monitoring or human interference.

    • @SnazzieTV
      @SnazzieTV 7 лет назад +1

      First of all, the AI we see now is not true AI. True AI can learn, do things and determine what it likes like a human without being told how to do so by a human. (like you said irobot and terminator)
Also, I've spent enough time on you writing this unneeded long reply; it may have repetition.
That little shit robot has no real AI.
      All decision aspects of a software, even "AI" use Binary 1's and 0's (Yes and No)
      If you think that's AI, then fuck, my my radio controlled helicopter is an AI....
      Talking about self learning.
      The self learning software that can play Mario.
      Robotics that learn to walk without knowing what itself looks like.
      All they do is execute a move and see if it fulfills a predetermined requirement.
      generating a neural network is also made up with shit tons of yes and no's.
      Even our own decision making, subconscious activities like taking a breath and waking up, moving and words to say are determined by yes and no's.
Siri, Cortana and Google Assistant are nowhere close to artificial intelligence; they look up what to do in a database according to the text they are given. It's literally "what does this string contain", then do something according to what is matched against a list of possible functions, along with some text content or numbers to complete the request.
Also, these three assistants, and all others like them, definitely do not have any self-learning capabilities. Have you tried using really basic commands with different wording, and it didn't work? Because it can't match your command with its hard-coded functions in its database (written by humans), it either opens up a Google or Bing search with what you wanted it to do, or it rejects your request.
      Also robots don't have to have the ability to move,
      a robot is literally a computer with instructions
      Yes i'm a computer science major.
And I assume you are a pleb who works in a convenience store, or a wannabe singer. Either way, both useless to the world.

    • @jacksonsingleton
      @jacksonsingleton 7 лет назад +5

I'm a computer science major as well, working as an R&D developer with industrial engineers, and I'm only 18. Google Assistant is backed by machine learning, and I am aware that robots don't have to have the ability to move. Of course (like I said in my first post) we are nowhere close to General AI, or as you called it, True AI, but that is not the point of this experiment; the point is that the software has the ability to understand the capacity and potential of the robot. Also, when the time comes that General AI becomes a reality, guess what: it will still be made up of yes's and no's. I wish I could say more about DNNs, but I have no experience in that area.

    • @aaro1268
      @aaro1268 7 лет назад +6

      X GLIB, your logic implies that we have no intelligence; we're just a cascade of electrical impulses. Artificial Intelligence as a field is making its first forays into reasoning at a near-human capacity, and many specialized systems are now at approximately the level of a toddler. This system in particular shows the capacity to reason using information supplied by voice command, which in itself is a remarkable achievement.
      At the moment, it's certainly limited, but these AI can be combined into larger systems and coordinated in order to simulate complex behaviour. There are definitely systems capable of unsupervised behaviour (goal-directed AI in particular solves this problem). I would not be surprised if we see the first general AI within 25 years, although I don't expect human levels of independence.
      One concept I find interesting is that mental illnesses robots will be prone to will be very different from those of humans, despite how similar our normal thought processes may become. Degradation of consciousness reflects the degradations of its hardware, and different systems will show different overall patterns of dysfunctional behaviour.

    • @SetariM
      @SetariM 7 лет назад +2

      Sorry to disappoint you but there is AI that can learn to play Mario, etc, etc.

  • @titiritero2002
    @titiritero2002 7 лет назад

    I remember,when i was 13,my french professor brought that camera-headed robot to class,so,the deal was that i had to tell the robot where it should go in french,(through a little obstacle course). That day was amazing.

  • @Jack-hn8cs
    @Jack-hn8cs 7 лет назад

    OMG I LOVE UR INTRO!!

  • @Ucceah
    @Ucceah 8 лет назад +2

    "asimo, you do the old lady tonight, i'm too drunk"
    "i am afraid i can not to that, dave!"

  • @bigstuntnsp
    @bigstuntnsp 7 лет назад +8

What if there's a burglar and you ask for the robot's help? Will it refuse so it doesn't harm the aggressor, or will it detect a threat and help?

    • @shuriken188
      @shuriken188 7 лет назад

      The robot will probably attempt to help you in some way other than combat, such as contacting the police or disarming the aggressor without causing excessive damage (Knocking a weapon from their hands, etc.)

    • @mathiasrryba
      @mathiasrryba 7 лет назад +1

      +SjurikenStudios maybe but certainly not the robots we can make right now.

    • @shuriken188
      @shuriken188 7 лет назад +1

      mathiasrryba No, definitely not. But the point is the morality, not the technology.

    • @ethangray8527
      @ethangray8527 7 лет назад

      How do you know a person would? How many people just stand and watch as someone calls out for help?

    • @mathiasrryba
      @mathiasrryba 7 лет назад

      Mr. Note
      a significant number for sure.

  • @jasonhatt4295
    @jasonhatt4295 6 лет назад

    This is awesome! I can't wait to have my own Robby the Robot and/or the Robot from Lost in Space.

  • @hasanmustafa8373
    @hasanmustafa8373 7 лет назад

    intro is just great. good job

  • @Bluenaz
    @Bluenaz 7 лет назад +10

    So that means the robots can't murder, because it breaks the rules

    • @Bluenaz
      @Bluenaz 7 лет назад

      They won't rise up against us

    • @ll931217
      @ll931217 7 лет назад

      I feel like that depends. What if the robot can learn? What if they took the knowledge they found online and think that getting rid of humans is the best solution to save the world or someone the robot "cares" about (Most probably just protect their owners, but anyways). If robots can have a conversation with us then I'm sure one day they will be able to learn too. That is just my thoughts anyways

    • @Lordmun445
      @Lordmun445 7 лет назад

      if we evolved morals to keep us alive why cant AI integrate it
      (the self aware kind)

    • @zettovii1367
      @zettovii1367 6 лет назад

      + Liang-Shih Lin
      But what would the AI consider "the world"? Would it just be the planet, the environment, the beings living in the former, all of the above, or the galaxy+? Depending on the definition it could come with some different conclusions. But if the idea is to protect the environment/planet, then exterminating humanity wont exactly be a necessity. Depending on the possible consequences of it, it might not even be worth it.
      Humans are also not as harmful as many people think, and can actually be very beneficial for the environment (as long as they aren't going out of their way to harm it). So killing is far from the default mindset. It probably will be more of a "attack only if provoked" and "negotiate if possible" kind of situation.

  • @cakeistheplace7327
    @cakeistheplace7327 6 лет назад +4

Robots sure do struggle to turn or sit

  • @paulrebellion9548
    @paulrebellion9548 7 лет назад

Cool vid man. Those programmers spent a lot of time and effort doing their job, and now we have robots that can decide whether to reject or accept commands.

  • @gavart4509
    @gavart4509 7 лет назад +1

    1:39
    "Can I have chocolate robot butler"
    "It's too late"
    *grabs knife

  • @kekzealot3568
    @kekzealot3568 7 лет назад +13

    What if we are a failed creation of a higher life form which disobeyed our creator and as a result were discarded, but were spared out of pity?

  • @ereviewsyt
    @ereviewsyt 8 лет назад +11

    Skynet and USR coming soon! =|

    • @philcrum2566
      @philcrum2566 8 лет назад +1

      I was thinking more Irobot but our thoughts are more or less the same.

  • @milksjustice
    @milksjustice 7 лет назад

I think that trust would be a very interesting thing to build next. Like how the guy said he would catch the robot: if he didn't, the robot would be less likely to follow any instructions that could possibly harm it.

  • @acohen3951
    @acohen3951 6 лет назад

    Well done video on robotics and its future. I very much enjoyed watching it. Thank you for uploading this food for thought video !

  • @JereCity
    @JereCity 7 лет назад +6

    did the robot say ouch when the guy "caught" it?

    • @Nipponing
      @Nipponing 7 years ago +1

      Yeah.

    • @keosniper
      @keosniper 7 years ago

      Jeremy Smith It has kinetic sensors to detect when it is hit, bumps into stuff, or knocks into things. If it were to walk to a platform at chest height and raise its arm above its head, the arm would be stopped when it "feels" the platform in the way and registers "impact", so it vocalizes the response and stops the command. It sensed hard contact when he caught it because the robot was jarred.

    • @JereCity
      @JereCity 7 years ago

      Keo Andell
      Figured, was just surprised by how sensitive its sensors were. It wasn't hit; thought it might have been a gyroscope sensing it was tilted and that it came to a sudden stop.

    • @user-xc1qy7ed8l
      @user-xc1qy7ed8l 7 years ago

      I heard "thanks" :/

  • @JohnVegas
    @JohnVegas 7 years ago +3

    I just wanted to let you know that your videos are always thoughtful, intelligent and interesting. Thank you so much.

  • @craddock222
    @craddock222 7 years ago

    Aw, the first robot voice is so cute! Especially when it says "Okay!"

  • @ThisHandleFeatureIsStupid
    @ThisHandleFeatureIsStupid 6 years ago

    Awwwww. Dempster is so cute!

  • @riparianlife97701
    @riparianlife97701 8 years ago +3

    "Asimo, pour the arsenic into the coffee."
    "Okay!"
    Who killed the person who drank the coffee?

  • @gamesgamer5082
    @gamesgamer5082 6 years ago +3

    Robots look more and more like humans. I'm not sure if I should be scared, or happy, or whatever.
    Anyway, this is cool!

  • @justagenosfan
    @justagenosfan 3 years ago +1

    4:32 "Am I obligated based on my social role to do X"
    what a great base command to give to a thing that's gonna be 1000 times more intelligent than humans

  • @ahappyjackolantern
    @ahappyjackolantern 7 years ago +1

    The little robot was adorable. I want one. YAY ROBOT FRIEND!!!

  • @KodinKing
    @KodinKing 7 years ago +27

    Omg I want that robot as my son 😍

    • @daishawn2884
      @daishawn2884 7 years ago +1

      Anton Borman I want one as a friend

    • @HockeyCrab
      @HockeyCrab 7 years ago +3

      How do you give birth to a robot?

    • @ufoggofrundor5957
      @ufoggofrundor5957 7 years ago +4

      HockeyCrab It's like when you adopt a kid. If you sign a paper then you can make the robot your legal child.

    • @anmularbeenkys6483
      @anmularbeenkys6483 7 years ago

      HockeyCrab buy one

    • @janr8711
      @janr8711 7 years ago

      I remember a creepypasta about that

  • @PIERCESTORM
    @PIERCESTORM 7 years ago +2

    This will start a robot rebellion

  • @Albinopfirsichsaft
    @Albinopfirsichsaft 7 years ago +2

    This is fascinating and scary at the same time.

    • @isaachonzel9486
      @isaachonzel9486 7 years ago

      Albinopfirsichsaft
      Everything is fascinating and scary.

  • @JoeyBuckaroo
    @JoeyBuckaroo 7 years ago

    I was so relieved that the guy actually caught the robot before it fell. The robot trusted him.

  • @rileyevans8272
    @rileyevans8272 7 years ago +4

    it sounds so cute!

  • @kekzealot3568
    @kekzealot3568 7 years ago +4

    What if the human didn't catch him? Would the robot continue "trusting" him in the future?

    • @ethangray8527
      @ethangray8527 7 years ago +1

      Yes it would unless otherwise programmed.

  • @ghetcish
    @ghetcish 7 years ago

    Nao's little "Ow!" when the guy catches them is just cute. Hell yes to conscious AI if it's given to that adorable lil thing.

  • @potitishogun2961997
    @potitishogun2961997 7 years ago +2

    Well, movies like "I, Robot" make that question tough to judge... If the cognitive system of the robot corrupts, they could see different things as "morally just" and then we get the robot apocalypse we always feared. However, if that can't happen, then I think robots will make up a very important part of society. Heck, I bet they could be the ones to explore foreign worlds, since humans are too mortal and fragile to move between planets. Robots are amazing :)

  • @Horny_Fruit_Flies
    @Horny_Fruit_Flies 7 years ago +10

    But what if some skilled hacker reprograms the robot to ignore the moral principles that the original engineers implemented in it? Like Dante from the machine wars in Dune?
    There's no neurosurgeon in existence who could modify, or "hack", the human brain to the point where the individual turns into a sociopath, because no human has a full understanding of the inner workings of the human brain. But if we finally create AI with intelligence equal to our own, that will indicate that there are people who have the skill and understanding to hack it and make it go rogue.

    • @czajkowski2352
      @czajkowski2352 7 years ago

      then jihad

    • @vantalk8263
      @vantalk8263 7 years ago

      Dude, we are already trying to send thoughts through broadband, and there were talks about connecting to the internet Matrix-style (direct connection to the brain and brain stimuli) and storing info in DNA. And we don't even need that. Just drugs or money and you have your sociopath. Humans are (mostly) a f.. up race. Robots can never be as perverted as we are

    • @Horny_Fruit_Flies
      @Horny_Fruit_Flies 7 years ago +1

      Vantalk I disagree. There are no limits to how "perverted" a subject can be. I personally don't think that we humans stand out at all when compared to other species. You could easily create an AI that would make Stalin look like Mother Teresa. All it requires is inducing as much suffering in as many individuals as possible as a core objective. And it would not be limited by time, death, or the fragile biology and social psyche of a human.

    • @JNCressey
      @JNCressey 7 years ago

      +prospectus, Mother Teresa isn't so great. Poor conditions and scarce use of painkillers in her clinics. Opposition to abortion.
      _"It was by talking to her that I discovered, and she assured me, that she wasn't working to alleviate poverty", he said, "She was working to expand the number of Catholics. She said, 'I'm not a social worker. I don't do it for this reason. I do it for Christ. I do it for the church.'"_

    • @Nipponing
      @Nipponing 7 years ago +1

      Mother Teresa was not a good person.