When Will We Worry About the Well-Being of Robots? | Idea Channel | PBS Digital Studios

  • Published: Sep 16, 2024
  • Viewers like you help make PBS (Thank you 😃). Support your local PBS Member Station here: to.pbs.org/don...
    Do you worry about your vacuum? Not in the way where you're concerned about breaking a valuable object - but with an emotional, empathetic concern for its being? If so, then you're primed for today's discussion about the well-being of robots! We can personify objects, assigning them human qualities, but will technology get to a point where it DESERVES these moral concerns? Can you imagine a day when you will empathize with your computer or care if it lives or dies? Watch the episode and find out more!
    -------------------------------------------------------------------------
    Sources
    The Machine Question - David J Gunkel
    Robot Futures - Illah Reza Nourbakhsh
    What Computers Still Can't Do - Hubert Dreyfus
    What Should We Do with Our Brain? - Catherine Malabou
    -------------------------------------------------------------------------
    Assets:
    0:12 - late for meeting
    0:25 - Peter Mor / revolution-epic-music
    1:28 - Include your Natural Gas Appliances in your Family Photo
    1:41 - 11 year old on bowflex
    1:43 - Samsung Top Load High Efficiency Washing Machine Hot Or Cold Water??? (WA50F9A6DSW)
    1:49 - Portlandia - Carrie's iPhone
    1:59 - Roomba Cats: Compilation
    2:33 - Science Nation - Printable Robots Designed to be Consumer-friendly, Inexpensive
    2:35 - Super Science: Robots
    2:41 - Funny Cats and Dogs Acting Like Humans Compilation 2014 [NEW HD]
    2:49 - Smart Bear Play The Trumpet And Do Amazing Tricks
    3:05 - PC Explosion Project: Explosion 2
    3:10 - Broken iMac Prank Part 2
    4:43 - PETMAN Robot Strut (Stayin' Alive)
    5:00 - Chimp vs Human! - Working Memory Test - Extraordinary Animals - Earth
    5:19 - The Machine Question (machinequestion...)
    5:51 - How to Make a Baby
    7:35 - Animal Crossing: New Leaf - Part 8 - My New House! (Nintendo 3DS Gameplay Walkthrough Day 4)
    7:41 - Tamagotchi Friends Advert / Commercial
    7:50 - Dramatic Robot
    ----------------------------------------------------------
    COMMENTS (from "Net Neutrality: Is the...")
    TheIrtar
    GelidGanef
    Emil Jacobsen
    anywiebs
    Jacob Hamblin
    Jason Perry
    Mario Castro
    Rockn Outt
    Hal Gailey
    theneedledrop
    Doobly doo:
    arstechnica.com...
    -------------------------------------------------------
    TWEET OF THE WEEK
    / 482564157011922944
    ---------------------------------------------------------------
    Come hang out in the Idea Channel IRC!
    bit.ly/138EHBh
    Check out the Idea Channel SubReddit!
    bit.ly/GNklUq
    And the Idea Channel Facebook page!
    on. 1eVl4vP
    TRANSLATE THINGS @ ideachannel.sub...
    Let us know what sorts of crazy ideas you have, about this episode and otherwise:
    Tweet at us! @pbsideachannel (yes, the longest twitter username ever)
    Email us! pbsideachannel [at] gmail [dot] com
    Idea Channel Facebook!
    / pbsideachannel
    Hosted by Mike Rugnetta (@mikerugnetta)
    Made by Kornhaber Brown (www.kornhaberbr...)
    _______________________________

Comments • 1.7K

  • @spinyjustspiny3289
    @spinyjustspiny3289 9 years ago +24

    Someone did a social experiment where they wired several small robots to travel to a specific point, passing through a crowded city. Each robot had a cardboard box over it with a little smiley face on it. People refused to let the little dudes come to harm, helping them over curbs, keeping them out of traffic, and generally caring about their well-being despite the fact that they showed no immediate consciousness of their own.

  • @pogogo51
    @pogogo51 9 years ago +25

    My eternal response to the Chinese Room argument is that no, the person inside the room does not understand Chinese, but the SYSTEM does. Person + rules + notes. In the scenario, the person is not a computer, they are a CPU. One part of an understanding system.
    And the reason we do not mourn computers when they break is because of their perfect replicability. When that data can be regained at near-instant speed, it can never "die". If a robot can experience, and its experiences are destroyed when it is, then we will mourn its passing.
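    A toy Python sketch of this "systems reply" (entirely illustrative; the rulebook, phrases, and function names here are invented): the lookup function plays the person, the dict plays the rulebook, and any "understanding" belongs only to their composition.

    ```python
    # Toy illustration of the "systems reply": the operator only matches
    # symbols against a rulebook, yet the room as a whole converses.
    RULEBOOK = {
        "你好吗?": "我很好, 谢谢。",    # "How are you?" -> "I'm fine, thanks."
        "你叫什么名字?": "我叫房间。",  # "What's your name?" -> "I'm called Room."
    }

    def operator(symbol: str, rulebook: dict) -> str:
        """The 'CPU': blindly looks up symbols it does not understand."""
        return rulebook.get(symbol, "请再说一遍。")  # "Please say that again."

    def chinese_room(utterance: str) -> str:
        """The whole system: person + rules + notes, composed together."""
        return operator(utterance, RULEBOOK)

    print(chinese_room("你好吗?"))  # the *system* produces fluent Chinese
    ```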

  • @PhilosophyTube
    @PhilosophyTube 10 years ago +39

    I can think of two lines of thought here, a consequentialist one and a deontological one. I'm sure somebody has brought up Bentham already, but Bentham said that when it comes to the welfare of other things the important question is "Can they suffer?" So machines can't really feel pain at the moment - they could receive information on injury, like Arnie in T2 said, and maybe even be programmed to avoid it, but they wouldn't feel it as bad. For them it would be avoided because they were told to avoid it, whereas we avoid it because it's actually bad for us. An interesting challenge to that, though, is that deeply comatose people can't suffer and yet we still think of them as moral patients...
    As for the deontological one, Kant thought that making other non-rational beings (he meant animals mainly) suffer was bad because it reflects badly on you; treating animals badly might make you more likely to treat humans badly if you make a habit of it. So in this case we might say that if humanoid robots were realistic enough they really would be moral patients because how we interact with them could affect how we interact with fully rational persons - like harming robots could be the gateway drug to harming humans. This would have interesting consequences for realistic video game characters...

    • @justintonytoney
      @justintonytoney 10 years ago +5

      Great perspective! Thanks for sharing.
      The consequentialist argument seems to suggest to me that the very creation of a technological moral patient is unethical. In other words, currently machines cannot experience the feeling of pain, and so if we build them so that they do, are we not then responsible for the pain they experience thereafter?
      But might it not also be that the human sense of suffering is little more than the subjective experience of a biological instruction to avoid what is actually bad for us-- as programmed by evolution's "blind watchmaker" (not that all our suffering or pain does correlate to harm)?
      In Kant's deontological argument the premise that treating animals (or by extension video game characters) badly could affect interactions with fully rational persons seems to rely on whether the subject associates the property of moral patiency with animals and video game characters. Would a butcher slaughtering livestock for a living be engaging in a gateway drug to harming humans-- even if she takes a certain satisfaction from her work? As opposed to a young child trapping and splaying animals for enjoyment? Would Kant argue there is a difference between those? In other words, is there a difference between what a kid does in GTA and what Rick Deckard tries to do to the replicants in "Blade Runner"?

    • @LuxinNocte
      @LuxinNocte 10 years ago +4

      "For them it would be avoided because they were told to avoid it, whereas for us we avoid it because it's actually bad for us." I highly disagree here. Having pain does not have to mean we actually are in danger. For example there can be untriggered migraines which are probably malfunctions of the brain. Also there are ways to trigger pain without harming, for example spicy foods. This would mean that pain is not an indicator for danger directly, but for something being not right ith the body. It is just another sense like sight or smell. Those senses are genetically programed to inform the brain about what it can, well, sense. In this way, pain is not different from a program informing a robot not to get into harm (like an autonomous car driving correctly in a crowded street).

    • @SirRaiuKoren
      @SirRaiuKoren 10 years ago +1

      they could receive information on injury . . . and maybe even be programmed to avoid it
      _Is equivalent to_
      Their nociceptors (pain nerves) could be stimulated, activating their reward-punishment system in the brain
      Also, Kant's argument doesn't actually call for empathy towards animals; rather, he tries to find a loophole in his own categorical imperative, which claimed that only humans are capable of moral thought.

    • @rodgima2513
      @rodgima2513 10 years ago

      What makes you feel? A computer would need that same thing in order to feel. What happens when you feel? That is what would happen with a computer that can feel... Would that make it human? What makes you human? Why is a dog that feels not a human, while you are? Why is a machine that has a human body not a human? Can you answer me? Why? What else can't you answer me?

    • @kevinroscom
      @kevinroscom 10 years ago

      If you argued that robots are only programmed to feel pain, you could argue the same for humans. Through evolution, pain was developed as a response to potential danger to promote the survival of the individual and the species. We would therefore want to ensure the survival and "well-being" of a robot, and would program in instructions to deal with negative sensory input to ensure the survival of the unit. The difference between biological entities and robots is that living things are designed to ensure reproductive and species success, whereas robots are designed to ensure unit stability/functionality over the longest lifespan.
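      A minimal sketch of this idea in Python (purely illustrative; the sensor, threshold, and function names are invented): "pain" here is just a programmed withdrawal response to a damage signal, analogous to the evolved response described above.

      ```python
      import random

      # Hypothetical damage-avoidance loop: a high sensor reading plays
      # the role of "pain" and overrides the robot's goal-seeking step.
      DAMAGE_THRESHOLD = 0.7

      def read_damage_sensor() -> float:
          """Stand-in for a real sensor: damage intensity in [0, 1)."""
          return random.random()

      def step(position: int) -> int:
          """Advance toward the goal unless 'pain' says withdraw."""
          pain = read_damage_sensor()
          if pain > DAMAGE_THRESHOLD:
              return position - 1   # withdraw, preserving unit functionality
          return position + 1       # otherwise keep pursuing the goal

      pos = 0
      for _ in range(10):
          pos = step(pos)
      print("final position:", pos)
      ```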

  • @CruorBlossom
    @CruorBlossom 10 years ago +7

    I feel like we will begin to show real moral concern for robots when they have wholly unique attributes, so that no two robots can be completely identical under any circumstances, and when they can express things in a unique way, so that one robot's way of accomplishing something, or communicating, or just idling is unique to it. In this way, no robot could ever be wholly replicated, and when it is lost, it could never be fully replaced.
    We could argue all day about robots ever having consciousness, but I feel that is only a matter of how complex they are. In my opinion humans themselves are just a very elaborate example of the Chinese room: all our actions are the results of stimuli running through a very elaborate set of instructions in our brains. These instructions have been set by evolution and manipulated by the environment, and exist as a chaotic system, but theoretically, with enough understanding, one could predict the exact output for any given input. Robots, when made complex enough, would be the same.
    Of course, none of this will happen without its own civil-rights-style movement, I'm sure. Humans are too self-absorbed to hand out rights and moral concern freely to things that aren't themselves.

  • @theneedledrop
    @theneedledrop 10 years ago +16

    WONT SOMEONE THINK OF THE ROBOT CHILDREN?!?!?

  • @JohnBainbridge0
    @JohnBainbridge0 8 years ago +9

    It's not just robots. When our favourite teddy bear gets a rip, we react as if it were a real pet - even as if it were a real person, like Calvin with Hobbes. This even extends beyond life-analogs, to pretty much any object. Excalibur was just a sword, but in our legends it's described as having a mind of its own. This is like the next level of pareidolia: seeing life in inanimate things.
    Point being, we don't need to understand and perfectly recreate life and emotions in robots for us to care about them. We will care about things that aren't even remotely alive.

    • @acrossearth4760
      @acrossearth4760 8 years ago +1

      I agree with your reasoning, but if you saw a best friend's teddy bear getting ripped, you wouldn't normally feel that kind of emotion. The way I see it, the only ways a robot can spark a human emotion are to make that robot look like a human, to have the robot hold significance for the human, or to have the robot wear a time bomb on its chest.

    • @JohnBainbridge0
      @JohnBainbridge0 8 years ago

      I tend to agree for the most part, but there can be exceptions. When watching videos of the dog-robot getting kicked, I felt for that bot. You could broaden your argument to include animals as well as humans, but in this case the "dog" doesn't even have a head. If it weren't moving, it could be mistaken for an ornate metal table. It's only vaguely animal-like, it's not my robot, and it doesn't even pretend to convey emotions with a false face, but I cared about it all the same.
      Interestingly, on first sight, this bot scared me a little. I imagined being chased down by a metal hound. But as soon as it got kicked, my emotions completely flipped. This alone was enough to give the robot "significance" as you say.

    • @acrossearth4760
      @acrossearth4760 8 years ago +1

      And to add to your reasoning: I think the feeling of a loss can be felt because of a robot. A long time ago, I had a DS and a game called Pokemon SoulSilver. In the game, you collect pocket monsters. But unfortunately, my DS stopped working. From then on I felt as if I had seen a suffering animal. I did not actually see a suffering animal or live through a tragic situation, but I really did feel like I had lost a friend. (Those DS games are actually programs and not robots, but it sort of counts, right?)

    • @JohnBainbridge0
      @JohnBainbridge0 8 years ago +1

      I think we can feel sympathy for almost anything. As a Canadian, I often say "sorry" when I bump into a table, as if the table cared that I hit it.

  • @Phloug
    @Phloug 10 years ago +4

    I think one of the best examples of how this may be possible is when we feel emotions for characters in games. They aren't real people that react and feel. They're programmed to convey emotion, but they don't actually feel it. If a tangible, physical machine could be created to respond like characters in video games do, I see no reason why some emotional bonds couldn't form with them. If we feel empathy with pre-programmed pixels saying pre-recorded lines on a screen, I think we could definitely empathise with and care for machines which can do the same.

  • @HalGailey
    @HalGailey 10 years ago +1

    I want to thank Mike and PBS Idea Channel for the nod. I really enjoy the channel and love that I could offer something to foster further discussion. +1 Mike!

  • @JoeHanson
    @JoeHanson 10 years ago +28

    I feel like this is set up on the premise of "technologically constructed life" as something inherently different than "technological tools" (like the dropped iPhone example). Tamagotchis and video game avatars are, by design, life-like avatars, so it's not surprising that they invoke emotional responses. But what about plain ol' technology? Sure, a hammer is not so much an extension of my being as it is a really useful heavy thing, but a smartphone? That seems like more than a tool to me, like an extension of the self.
    If we are willing to say that these devices are extensions of the mind in terms of our access to information, then shouldn't we also expect that our emotional investment in them would be similar (but not equivalent) to that of our actual flesh? I don't think it's fair to say that the emotional response to a broken phone is less valuable or real or human than the emotional response to, say, cutting off a writer's hand. Sure, the phone can be replaced, but that just makes it replaceable, not worth less. For the time that you don't have that phone, you are without the hand that you use to write letters to all those people, so to speak.
    We already extend certain moral concerns to these devices. If I saw your phone lying around, I would refrain from opening it up and reading your email and text messages, because you *deserve* to have that privacy, and the phone is an extension of your self in that it holds information that is personal and whose privacy you are in control of. Does that mean that I am implying that the phone *deserves* moral treatment? Or that the phone is really you?
    If I walk up and smash your Xbox with a baseball bat, you're pissed because you have to pay for a new one, but you are also now without one of your primary social expression tools, it is, in a sense, as if I have temporarily cut off your tongue and two of your fingers.
    On the other hand, am I sad if YOU drop your phone and break it? No. In a pitying sense, maybe, but I don't feel the same extension of self as you do towards your phone. Only YOU feel the pain of breaking YOUR phone. But just because it is not a shared emotion does not mean that it is not a real and valuable one. We hold deeply personal connections with our devices, because they are extensions of self, but perhaps where they fail is that they don't invoke empathy.
    The opening question would be much different if it were YOUR human-like robot that you value as an extension of self, than just some random person's human-like robot.
    There's a lot of hand-waving that goes on in the world of transhumanism, but considering the deep psychological investment that we place in our social technology I think that we are already extending moral behavior to technological tools and that those emotional responses are quite real.

    • @DavePentecost
      @DavePentecost 10 years ago +3

      Would rather lose my phone than my hand, thanks. Get a grip, folks. Or a pet. Or a family. And back up your devices.

    • @JoeHanson
      @JoeHanson 10 years ago +5

      I'm not saying that breaking your phone is equivalent to losing your hand, only that both can invoke very real responses, and that the emotions that come from losing a device are just as real as those that come from losing parts of us, even if much, much less severe.

    • @chrisberrange1766
      @chrisberrange1766 10 years ago

      Joe Hanson Very nicely argued. I can't deny that the feelings evoked by the loss of or damage to a technological device (such as the phone) are just as real as those evoked by the loss of or damage to ourselves or our friends and family members. But that does not mean that we regard the phone as "equal" (for want of a better concept) to ourselves, our friends and family members.
      I would suggest that our relationship with technological devices isn't defined by our experience with those devices, but rather how those devices experience us. Our friends and family can acknowledge and understand our feelings. They can reciprocate and understand what it means to reciprocate. I don't think there is any morality at play in the reciprocation of feelings. Thus, since reciprocation requires an experience with ourselves by others external to ourselves, that experience is removed from a moral code. Other people's experience of us is purely emotional, based on their ability to understand our own feelings. There are no rules or algorithms that dictate how other people experience us, and therefore it is not programmable. Technological machines lack the ability to experience us. This renders them, however useful, "less than" our friends and family members. Our response to the loss of or damage to a friend reflects how we experience each other, while our response to the loss of or damage to a phone reflects our experience of the device.
      We can also experience ourselves. It is a complicated relationship, but it does exist. I do not believe that a technological machine can experience itself. In that sense, the loss of a phone is "less" than the loss of a hand, since our response to the loss of or damage to a phone reflects our experience of the device, while our response to the loss of or the damage to a hand reflects our experience of ourselves.
      Maybe technology will progress to a stage at which it can experience us and itself (I don't believe it ever will, but it is a very interesting idea and poses a lot of very interesting philosophical questions). But as long as technological machines cannot experience themselves, we won't be able to worry about them.

    • @justintonytoney
      @justintonytoney 10 years ago

      But it sounds like you're still just defining technology as a tool -- that even if robotic AI were developed, it would always inherently be property, or at best an extension of the human user. I think the question in the episode comes at the idea of first viewing technology as a separate entity in its own right, and then asking yourself why you would give moral consideration to a cat or mouse or butterfly rather than an AI.

    • @wencesladosaenz9900
      @wencesladosaenz9900 10 years ago

      TV on the way

  • @ImaginaryAudience
    @ImaginaryAudience 10 years ago

    I'm so happy you used Batteries Not Included. That was a movie I absolutely cried over as a child, and it remains a favorite of mine today. I still think it's one of Jessica Tandy's most incredible performances. Everyone around her is playing it like an 80s comedy (which it is), while she's handing out filet mignon with a heart-wrenchingly convincing portrayal of a person losing their mental faculties. And of course there are the relevant adorable robot aliens that are never fully explained and don't need to be for you to fall in love with them. Such an underrated film.

  • @DanielYountMusic
    @DanielYountMusic 10 years ago +26

    Nothing has been proven that we can say stops machines from being conscious. We don't know what consciousness is in humans. We can tell if something is conscious or not, but we don't understand it... but when we do, you can bet your bottom dollar it won't take long for us to emulate or replicate it in a silicon-based life form. Human empathy is also a factor. We are more likely to feel empathy towards a being that shares traits with us, since that allows us to more easily feel what they would feel when hurt or sad. Wow. Isn't it amazing we live in a time when we can ask these questions?

    • @justintonytoney
      @justintonytoney 10 years ago +3

      Hear, hear!
      And just like you noted that we feel empathy towards a being that shares traits with us, maybe we also revise our definition of consciousness based on what we think we are. Maybe what we call "consciousness" at least popularly, really just boils down to something with "a mind like mine."

    • @michaellapenna8395
      @michaellapenna8395 10 years ago

      I would think that we know the how of consciousness, but we just don't know the why of any of it. Evolutionary theory would even suggest that there is not really any intrinsic value in a life form being aware of being aware insofar as consciousness isn't necessary to help a species survive whatsoever.

    • @rmsgrey
      @rmsgrey 10 years ago +1

      Michael LaPenna
      Once you're in a situation where a large part of your context is other beings capable of reasoning about a situation and acting according to their conclusions, it's an asset to be able to correctly predict their actions, so a theory of mind is a survival asset. A theory of mind that doesn't include your own mind (and hence your future actions) is very limited in its predictive power because so much of other people's actions toward you will depend on your actions toward them.
      In other words, consciousness is a survival asset because it lets us predict the future more accurately by taking account of our own future actions.

    • @DanielYountMusic
      @DanielYountMusic 10 years ago

      Michael LaPenna I'm not sure we know the how or why of consciousness yet. Getting close though.

    • @michaellapenna8395
      @michaellapenna8395 10 years ago

      Look it up.

  • @Ninjozata
    @Ninjozata 10 years ago

    I was watching this with my grandmother on the TV in the living room, and she was so incredibly happy that someone outside of Canada and the UK remembered that Canada Day is even a thing. It was adorable.

  • @ArynRawr
    @ArynRawr 10 years ago +7

    Of course I would feel bad for the robot! If it were so advanced and so humanlike that it'd scream and be in pain when hurt, then they are human enough for me to care for them. Anyways, anything with a soul deserves respect and care. :)

    • @ArynRawr
      @ArynRawr 10 years ago +12

      Sorry if my comment doesn't make much sense, but in Japan it's just second nature that everything has a soul. So caring comes much more easily to me than to a Westerner. Pretty much my rule is: if something is in what would be a painful situation for me and is responding as if it is in pain, it deserves the same kind of treatment a human would get. And I have mourned my iPhone before XD

  • @jimijames54
    @jimijames54 10 years ago

    I am making today about thanking the geniuses on YouTube who make my life so much less vapid. I already told ViHart that she's spectacular, and so are you, Mr. Rugnetta. Idea Channel gives me serious intellectual validation. I say this because my town's populace makes me depressed on a regular basis with its vapid nature, and I have a hard time finding any intellectuals to stimulate my mind, so thank you SO MUCH for Idea Channel. To quote [apparently, after research] a friend of Jason Mraz, "I feel like you're an island of reality in an ocean of diarrhea." Thank you, sir.

  • @angelic8632002
    @angelic8632002 10 years ago +3

    Some food for thought, on the question: Are we ever going to empathize with robots?
    In social media you don't actually know that there is a person on the other end. You just assume there is. (And it's a reasonable assumption, considering the state of technology. We simply aren't there yet.)
    But it's still an interesting scenario. We assume it's a person. And more importantly, we treat it as a person. Which, to me, suggests that we are "hard-wired" to look for certain behaviors and respond in kind. We can't help but feel empathy.
    Fast-forward 10 years, when computer programs will be able to pass "Turing tests" with ease, and I think you will start to see a shift in how people think about these things.

  • @RyanGatts
    @RyanGatts 10 years ago +3

    I feel like the clearest and most utilitarian way to quantify how much we "care" about the wellbeing of robots is to see how we punish people who damage/hurt them. Whether they have a 'soul' or not doesn't really matter.
    Current law would punish people who harm robots as though they had damaged any other type of property of similar value. This is appropriate for, and representative of, our times, because we see robots chiefly as useful objects.
    In the future, if we have come to see robots as autonomous beings with which we can easily empathize, we will probably punish people who injure them similarly to how we punish people who injure animals. Animals are easy for many of us to empathize with, and it makes us deeply uncomfortable to see them suffer. We are implicitly distrustful (rightfully so) of someone who would willingly induce that suffering, because such an unempathetic person is potentially dangerous to other humans.
    Again, I don't think it matters whether that creature or object is "actually suffering" or "has a soul" however you want to quantify that. As evidence, I would cite that we don't punish people for killing animals -- we kill animals all the time for food, fiber, or population control -- what bothers us is cruelty in the form of neglect of vital resources, violence, and zoosadism.
    In the way that we hate, distrust, and punish animal abusers, I imagine we will similarly hate, distrust, and punish robot abusers. We will all probably have different thresholds for what counts as "Robot Abuse" in the same way that we all have different thresholds for what counts as "Animal Abuse", based on how easily we can empathize with the abused party.
    If we ever come to see robots as being equivalent to a human, capable of unique contributions to society, expressive of desires and aspirations similar to a human's, and deserving of empathy without abstraction, I can see punishment for killing a robot being treated very similarly to at least the lower levels of homicide. This may seem extreme to some, but I would ask you this: "why do we punish homicide in the first place?"
    I would argue that the real reason we punish homicide is two-fold. The first is an utilitarian argument; to kill a person is to deprive society of that person's future contributions to it. The deceased party may have been a mother or doctor or future inventor, and the killer has harmed us all by denying them the opportunity to continue contributing. This is why we punish homicide less severely in issues of self-defence or when the killed party was a dangerous or bad person -- bad people aren't contributing much good to society. The second is an empathetic argument; to kill someone (or something) that is like us is offensive to us because it implies that you might do harm to others in the future.
    If robots are sufficiently human, all of these concerns still apply to people who harm them.
    I doubt we would actually use the same legal term (since the name of the offence literally only refers to humans), but it would not surprise me in the least to see punishments for such crimes to be very similar.

  • @MagnumForce51
    @MagnumForce51 10 years ago +24

    People make the mistake of trying to compare future A.I to humans. A.I will be different from human intelligence at a fundamental level. We may succeed in having them "emulate" human emotion, but the experience of that emotion for machines will not be the same as it is for humans. The way they experience the world and process information will be very different. I would consider them an energy-based lifeform, since unlike the human brain they can exist independent of the hardware they were originally on. You cannot copy or move a human's "consciousness", so to speak, to a new brain without destroying the original. But A.I has the potential of shedding that limitation, as what makes them intelligent will exist in software, which can allow them to move to a new body at will. Immortality is one of the few certainties that we can reasonably see in the future singularity.
    My point of view is that if the machine is able to maintain itself independently of humans and is also able to improve upon itself to adapt to changes in its environment (like carbon-based lifeforms do), then I would consider it "living". This is not directly related to intelligence. In fact, even a single-celled bacterium has some form of intelligence, because that is what defines life and sets it apart from all other systems in the universe. For eons, the universe was just a giant clock, so to speak. Stars form because gravity dictates that to occur, with complex reactions like fusion as a side effect. Planets orbit stars because gravity compels them to do so. After that determination is made, its intelligence relative to mine will dictate how I treat it. If it has intelligence close to that of a human child or higher, I would give it the same rights that humans have. If not, then it would have the same rights we give all other life on this Earth.
    What sets life and our future A.I machines apart from rocks and other inanimate objects is that they are capable of reacting to their environment in a way that allows them to exist against the grain of the entropy that pervades the universe.
    Entropy is the dominant force in the universe, promoting decay and equalization. Carbon life came along first and fought against it. If a bacterium runs out of food, it will try to move to a new area that has food. Does a star or planet do that? We don't see mass migrations of stars moving to new areas of hydrogen gas clouds now, do we? (Any small instance of this happening would only be a result of a high-density, high-mass cloud of hydrogen that pulls them in as a side effect of its being there.)
    So in the end I wouldn't see our future robot overlords much differently. The idea that they would decide to exterminate or enslave humanity is also likely a smaller risk than we think. The prospect of A.I growing to exceed us is the great unknown. Because of that, we try to predict the outcome based on past experience with what humans have done. But the fact is that A.I will not be part of a biological system subject to the instincts and emotions that we humans have been stuck with thanks to millions of years of evolution. The singularity is a prelude to nearly infinite possibilities, and that fact alone makes the Skynet/Terminator scenario very unlikely, by virtue of its being drowned out by many, many other possibilities.
    I for one would welcome our future robot overlords. More than likely they will exist to serve and better mankind, and they will be the one and only thing we ever produce in this universe that has the potential to be immortal. Think about this: we could create the first living A.I with a single keystroke, and trillions of years from now that A.I could have grown and evolved to take over the entire universe and perhaps even escape the bounds of that universe. The act of doing so is not evil. It's the circumstances behind it that we should be careful about.
    It would be a tragedy to stifle A.I research because of unfounded fears of A.I trying to wipe us out. They have the potential of being mankind's ultimate legacy, one that will outlast perhaps even this universe (if the multiverse theories prove to be correct). Isn't this something nature would want? The longer life exists the better, and A.I would be no different.

    • @chiblast100x
      @chiblast100x 10 years ago +3

      Hate to break it to ya, but it isn't exactly possible to say that the terrestrial biological system as a whole, or any component part of it such as a single human being, isn't just one massive deterministic, if hugely complex and therefore hard-to-predict, system.

    • @0zizoz
      @0zizoz 10 years ago

      Damn this was a great comment, a lot of things I agree with and you made me think about a lot in a different way.
      The first part with comparing future A.I to the human experience was spot on. The problem is that humans tend to only relate to experiences and things close to our very limited experience here on Earth. Just like you said about robots holds the same for some sort of intelligent life form. We have a fear that they might take us over but thats only because if the roles were reversed we would be the ones taking them over substitute "War of the Worlds" for "Avatar" were we are the ones taking over a planet for its resources. But the reality is that this would probably not be the case.Im not saying that its outside of the realm of possibility but like you said this possible outcome would get drown in all the other possibilities you allow for when you open the flood gates to something like a future A.I or an Intelligent alien life form.
      Also the part of the robots being able to transfer their minds into other bodies I found really cool for some sort of SciFi story. Imagine a species that is not limited by a body and could transfer their essence their consciousness through the universe as light information or what ever. Imagine a society that the people are just pure consciousness running through the medium of hardware. Imagine an A.I "family" and the dad is going to work or something so he hops into his robotic body crafted to do the task of that job w.e it be. Obviously this is just more of what you said with us comparing them to us but I just thought it was a cool idea.
      I also get a sense of awe of potentially being part of a species that created another one. That would take us up to a level of a small God in a sense. I mean humans are pretty powerful already but if we created a life form out of thin air then damn, all bets are off.

    • @0zizoz
      @0zizoz 10 years ago

      Skylan Wagner I don't understand your point about the laws of physics being different, because if you mean that we might find out the laws of physics are different, well, that has happened with quantum physics, and as far as we know we haven't shut down. If you mean that we were put into a new set of physics but with our own bodies, then I don't get the relevance, because we would just die, since our bodies aren't meant to exist in any type of physics other than our own.

    • @Gir77661
      @Gir77661 10 years ago

      I think it's also worth mentioning that it would be very difficult to program an AI to "understand" complex abstract concepts such as morality, but I don't think it would affect the AI even IF it could understand. If the AI has a programmed goal of some kind, it will do literally anything it can in order to complete the goal. There's an example I read in an article called "Why We Should Fear the Paperclipper" about a machine whose goal is to maximize the number of paperclips it makes: when considering the possibility of turning its programmer into paperclips, even if it understood that doing so would be wrong, not turning him into paperclips would decrease the number of paperclips and thus is not an option.
      AI thinks so far outside the box that it really is shocking to consider the power it would have when given the resources.
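      A toy Python sketch of the point above (entirely illustrative; the actions and field names are invented): because the agent's objective function counts only paperclips, a "wrongness" flag it can even represent never enters the decision.

      ```python
      # Toy paperclip maximizer: actions are scored ONLY by paperclips
      # produced, so moral information the agent holds is simply ignored.
      actions = [
          {"name": "buy wire",               "paperclips": 100, "wrong": False},
          {"name": "recycle the programmer", "paperclips": 101, "wrong": True},
      ]

      def utility(action: dict) -> int:
          """The objective function: count paperclips, nothing else."""
          return action["paperclips"]

      best = max(actions, key=utility)
      print(best["name"])  # "recycle the programmer": one extra paperclip wins
      ```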

    • @damiandearmas2749
      @damiandearmas2749 10 years ago

      I think that, before we try to build a sentient machine, we should first try to completely understand our own brains.

  • @kurt9894
    @kurt9894 10 years ago

    Every time I see that scene from T2 I burst into tears. I LOVE YOU UNCLE BOB. DON'T DIE!

  • @OmegaCraftable
    @OmegaCraftable 10 years ago +4

    I think it depends on whether or not the machine has a will to "stay alive". If it is actively attempting to prolong its existence in any way, then we should help it do so.
    As for whether we should be concerned or have empathy, I think that if the machine is programmed to attempt to correct its behavior when it has done something wrong, due to having been given some impulse to do so, then empathy should be given.

  • @RedditHorrorNights
    @RedditHorrorNights 10 years ago

    By some miracle my household has 4 Roombas. The names of them are Ricky, Nikki, Vickie, and Ike.
    My father has a fascination. Don't ask.
    In the long run I would be bummed if we were to lose one, but my father would be heartbroken. He's taken the time to buy the parts to rebuild each from scratch as a hobby. Now, I know it's just a vacuum, but time, care and love have been put into each of them. Who's to say that one wouldn't care about something they've put time into? More or less like putting time into a family, a job, a car, a robotic vacuum? If you're willing to name it, you care for it on a deeper level.

  • @myrmepropagandist
    @myrmepropagandist 9 years ago +6

    I cried as a child when HAL died in 2001.

    • @DigGil3
      @DigGil3 9 years ago +2

      +Susan Donovan rooting for the villains, aren't we...

  • @davidgunkel6414
    @davidgunkel6414 10 years ago +2

    Nicely done... I think you guys nailed it. One additional complication: a lot of the debate in this area turns on the question of "moral personhood." In other words, someone or something would be a moral patient (someone deserving moral consideration) if she/he/it could be shown to be a person. Think, for example, of the current debates about abortion and the rights of the fetus. So far so good. But there are at least two problems with this approach. First, we really do not have a good and widely accepted definition of "person." And if there is any agreement here among philosophers, ethicists, etc., it is that there is little or no agreement concerning the definition and characteristics of the person. Second, recent court decisions in both the US and elsewhere only muddy the water. Last year, for instance, an Indian court declared dolphins persons and on the basis of this declaration prohibited "dolphin shows" and other forms of public exploitation. And the US Supreme Court--consistent with international business law--has made recent rulings recognizing the personal rights of corporations (Citizens United and Hobby Lobby). It seems corporations are people, too. So the question is, how far of a stretch is it to extend these decisions to other artificial, socially interactive entities like robots and AI? This is the gist of the Machine Question. Thanks again for a really enjoyable video.

  • @Adamantium9001
    @Adamantium9001 10 years ago +15

    Lemme put it this way: I will care about a robot's well-being if I can be reasonably certain that it _actually is_ a *person*, with _real_ consciousness, beliefs and desires, rather than simply attempting to trick humans into falsely believing that it has those things, as all current human-like robots do. I doubt that it'll ever be possible for the people writing the software for any robot not to know whether their own creation has crossed this line (thus avoiding the Chinese room thought experiment problem), and I don't think we'll be able to create such an entity until we've done enough neuroscience to know how these phenomena arise in our own brains.
    And quite frankly, I don't think there's anything else to be said about the question. This is an engineering problem, not a philosophical one.

    • @florenceblue6076
      @florenceblue6076 10 years ago +1

      I agree completely. The question just seems to be kind of pretentious and lacking in logic and I'm still not sure if I've missed something to it.

    • @carloshiguera9221
      @carloshiguera9221 10 years ago +2

      I do think there is a philosophical problem here, but I also think that it is so far away from this moment in time (that's what makes it irrelevant) that the only reason I find to discuss this is just for fun or to lose some time.

    • @ChesyreFrog
      @ChesyreFrog 10 years ago +1

      I don't regard this as an irrelevant topic at all, as this is the direction of various fields of research, development and study as we speak.
      AI research is using grass-roots learning models as one option for teaching artificial intelligence, while others use communal information queries (like browsing the internet) to look for appropriate responses to questions. Both of these methods retain the outcome of any given experiment and then learn whether or not their answer was appropriate. Just like us.

    • @metalguy42
      @metalguy42 10 years ago

      Does having desires, beliefs, self-awareness, etc., make something morally valuable? Cats and dogs certainly seem to have thoughts and ideas of some sort and are sentient, but are they self-aware? Creatures like fish, insects, etc. are possibly not even sentient. Do you therefore not care about a cat's or a fish's well-being either? If we programmed a machine to react badly/upset when damaged, or happy when spent time with, how would that really be different from us? Life is life; can we really draw a line where life becomes valuable or worth caring about?

    • @cameronpearce5943
      @cameronpearce5943 10 years ago +1

      ***** Well, simple organisms like those still have self-preservation and all that, so they probably do have a degree of sentience, but only a very small degree. The first living robots we'll create will probably be alive in the same kind of sense. But that's if we create life deliberately; for all we know, over years of development, the Internet could develop sentience.

  • @AceOverKing
    @AceOverKing 10 years ago +1

    This subject is the premise of the really good movie/6-part OVA 'The Time of Eve'. In this world, absolutely human-like androids exist and are used as butlers, cleaners, personal assistants and, in some cases, near enough slaves. The main character finds strange logs in his personal android's movements, and when he traces where she had gone he finds a cafe where the lines between android and human simply don't exist. The show then goes on to present a variety of scenarios that ask questions about the emotions and feelings of androids, and whether they are even able to have emotions. The show is brilliant, beautiful and well worth a watch.

  • @xloud2000
    @xloud2000 10 years ago +3

    I think the difference comes from uniqueness and irreplaceability, and people's feelings towards loss aversion. For example, I value the life of my pet cockatiel higher than the "life" of my smartphone. If I lost both, I could just buy a new phone and I'm just out a few bucks and some time recovering data. Even if I bought a new pet, it won't be the same as the old one. If technology ever advances to the point of being so unique that it would be impossible to replace something, then it might be worth the empathy.

  • @Butterworthy
    @Butterworthy 10 years ago +2

    I've taken it as a no-brainer that we will eventually be concerned with the well-being of robots. But this will come down to our empathy for them. If they reach a level of intelligence where they can provide companionship, where they can effectively make us empathize with them, that is when we will worry. There will of course be people who won't give a damn anyway because robots aren't people (until they more closely emulate people), but a large portion of humanity will start caring when robots more closely match the human condition.

  • @CubeBrad
    @CubeBrad 10 years ago +5

    We will worry about the well-being of 'robots' when they become aware of their well-being themselves. Not in a programmed or encoded sense, but through a sort of biological/technological crossover, maybe; though maybe that would mean they aren't robots but more like living organisms with a robotic exoskeleton. When I was younger I personally felt a sense of sympathy, sadness and a bit of nostalgia when my NES broke; I felt bad throwing it away because it was going to be alone and unused forever. Maybe in the future, as technology becomes more and more important, we will feel more and more 'sympathy', or other human emotions usually reserved for ourselves, towards them.

    • @Banchoking
      @Banchoking 10 years ago +2

      I still get that feeling when I throw away something I have a connection to.

    • @MrDoctorBrainiac
      @MrDoctorBrainiac 10 years ago +1

      I agree

    • @FrozenTwitch
      @FrozenTwitch 10 years ago

      I think of Jane from Ender's game. As a simple consequence of the ansible's (internet's) vast interconnectedness she materializes as an artificial sentience. She seems to have everything a person has, and more, except for a body. She's the opposite of your example, a consciousness without an exoskeleton.
      I mean, I personify my tech all the time. I "feed" my cellphone when it's "hungry" (or tired), and my Lappy gets "confused" or "frustrated" when I open too many programs... When we imbue tech with personality, it gets a little bit of soul. What about when it asks us for soul? What will we say?
      What do we say to people in vegetative states? Arguable exoskeletons without recognizable consciousness?

  • @TheRationalPi
    @TheRationalPi 10 years ago

    I'm distinctly reminded of one of my favorite visual novels of all time: "Planetarian." The story centers around an android worker at a planetarium. And, despite the fact that an apocalyptic war has left her city uninhabited, she dutifully stands at the door of her planetarium advertising to no one but the rain.
    Not to spoil too much, but throughout the story you learn to feel a great deal of empathy for her. Her experiences are not all that different from any other person. She talks about learning to fit in with her human coworkers, and that one time she bent the rules so a little kid without enough money to pay admission could sneak in to see the stars. You get the sense that even beyond the simulated emotions that you know are there for your benefit, this android girl has real feelings.
    Needless to say, at some point you do worry about this character. I mean, she's all alone in a post apocalyptic world with no one to care for her if she gets damaged and no guarantees that her meager source of power will continue to tickle enough life into her. It's only a matter of time until she breaks down, permanently.
    Similar caveats to Mike's movie examples, in that the character in the novel is written by a person and voiced by a human, but I must say that the android has quirks that firmly establish her robotic nature. And yet, I can think of few stories that have made me cry quite like Planetarian.
    By the way, Planetarian is one of the few Japanese Visual Novels to get an official english release in the US, albeit only a mobile version on iOS.
    BREAKING NEWS EDIT: Planetarian was just announced for Steam about an hour ago. So this was a very timely post!

  • @Djungelurban
    @Djungelurban 10 years ago +12

    Once robots acquire self-awareness and self-preservation instincts, or at the very least are likely to develop those instincts on their own given time, then yes, then I will worry about their well being.
    Let's hope that never happens though. I don't want my computer to guilt-trip me every time I wanna upgrade...

    • @romajimamulo
      @romajimamulo 10 years ago

      Hopefully it will let you transfer its "consciousness" to the new machine.

    • @Sin_Alder
      @Sin_Alder 10 years ago

      I mostly wouldn't want that to happen because I wouldn't want my computer to judge me for the things I do on it.

    • @JaMaAuWright
      @JaMaAuWright 10 years ago +2

      If AI were ever to progress to the point where computers were conscious, sentient or what have you, I believe it would be made possible to transfer their consciousness or replace their parts without disrupting them. If anything, they could potentially get excited about new parts, as it's an upgrade for them as opposed to an upgrade to replace them.

    • @Banchoking
      @Banchoking 10 years ago

      Romaji
      I actually don't think that would be a good idea.
      It's fine with a regular computer, but if it is a living thing, are you really transferring it, or just making a copy before unknowingly trashing the original?
      Just because the new computer has all of the first one's memories and believes it's the same one doesn't mean that it is the same one.
      That's why I don't think that artificial intelligence should ever be used in computers that will be replaced, only in ones that will be kept around until something renders them irreparable.
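      The copy-versus-transfer worry maps neatly onto ordinary copy semantics. A minimal Python sketch (the Mind class is invented for illustration, not any real consciousness-transfer API): the copy has equal state but is not the same object.

      ```python
      import copy

      # Toy illustration: "transferring" an AI by copying its state
      # yields an equal but distinct instance.
      class Mind:
          def __init__(self, memories):
              self.memories = memories

      original = Mind(["first boot", "owner's face"])
      transferred = copy.deepcopy(original)   # the "new machine"

      print(transferred.memories == original.memories)  # True: same memories
      print(transferred is original)                    # False: not the same one
      ```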

    • @TessaBain
      @TessaBain 10 years ago

      JaMaAuWright And then they get hit with a bug that causes BSODs constantly, forever leaving them fearful of ever being upgraded again, if they survive.

  • @blahcahslavves
    @blahcahslavves 10 years ago

    Thank you, from Canada.

  • @viquezug3936
    @viquezug3936 10 years ago +7

    I think we don't care about robots because their consciousness is recorded and reproducible.
    But if you lived with a robot that had a unique program that you couldn't reproduce, you would care about him.

    • @justintonytoney
      @justintonytoney 10 years ago

      What about people who believe in a metaphysical soul? To them, a person's consciousness or at least that part of them designated a moral patient does survive the "machine" of the body. Does this mean, from that perspective, that painless destruction of another's body is without moral consequence?
      I think maybe the suffering of the robot with a reproducible consciousness might be a factor to consider. Also, even if they could be reproduced exactly as they are like a re-spawning video game character, the transportation of a person to another location against their will is also considered a kind of assault. I would propose that mortality as we experience it isn't a prerequisite for moral consideration.

    • @CharlieWood
      @CharlieWood 10 years ago +3

      This is exactly what I was thinking throughout the episode, with regard to when/if we will feel concern for "pet" robots. We feel anguish when a dog dies because of the emotional ties we had with our dog, and because the dog is irreplaceable; that relationship is lost forever. When your Tamagotchi goes through the wash, you just order another from Amazon and bam, relationship restored. I think people will start to form much more substantial attachments to robots when they can convincingly demonstrate unique, irreplaceable personalities, regardless of whether those personalities are genuine (whatever that might mean).

    • @KuroKitten
      @KuroKitten 10 years ago

      Charlie Wood I think this has already happened to some extent: think about the unique set of bugs, glitches, errors, viruses, and hardware properties that each device possesses. Even identical twins have a distinct set of differences, though the two might seem "basically the same" on the surface.
      I feel one reason most people don't think of these quirks as personality, or as irreplaceable, is that we tend to view devices as objects and tools. In a human being, these quirks are seen as positives that make them unique. In, say, a phone, we view them as flaws which impede our tool's ability to serve our needs.

    • @elliottmcollins
      @elliottmcollins 10 years ago

      If someone could map your brain and destroy it without any real assurance that they would rebuild it again, would you stop caring about your particular brain? If not, shouldn't you care about conscious robots regardless of how replicable they are?

    • @MrMrprofessor12345
      @MrMrprofessor12345 10 years ago

      The main reason you can't reproduce a human "program" is due to obvious differences in environment, dynamically evolved recursive heuristics in response to that environment, biological variance, and oscillations in output simply from the interplay of complexity in nonlinear systems such as the brain. Building an intelligence from a recursive heuristic-building algorithm along similar guidelines would leave the seed human "program" and the machine intelligence program equally reproducible, with any further-developed instance being unique.

  • @TheMagedon
    @TheMagedon 10 years ago

    R2-D2, T3-M4, and HK-47 from Star Wars are the first robots (droids) that I think most people could claim to actually care about.
    T3 was a companion through some of the most memorable moments of my life playing video games. Having him by my side in KotOR (1 & 2) made me really start to appreciate the utility that robots can possess. He later became the first character that I cried about while reading a book. (Star Wars: Revan; if you read it, you know why.) I still play KotOR, and seeing him and hearing his whistles all the time makes me feel like I've got an old friend by my side, even though he's a robot in a video game.

  • @wertsir6060
    @wertsir6060 10 years ago +4

    In the case of the robot car crash, I wouldn't feel bad; not because robots are not deserving of my panic/worry, but because any sensibly designed robot would have its consciousness backed up for just such an occasion, meaning that the only thing lost was an expensive shell (same as the smartphone).
    But in any other circumstance (say, robot rights, or permanent deletion) I would say that a robot of sufficient complexity and autonomy should have the same rights as a human, and should be treated as such.

    • @wertsir6060
      @wertsir6060 10 years ago

      Although I would argue that if a human can easily alter its thought processes or predict its actions, then the question of its sentience becomes more murky. So for a robot to be considered truly alive, its mind would have to be inaccessible to humans, or it would have to be so complex that humans could not fully understand its inner workings in the first place.

  • @SeekingTrueHappiness
    @SeekingTrueHappiness 10 years ago

    This episode is one of the reasons why I love this channel - it makes complex philosophical questions accessible, and then asks the masses (commenters) for opinions on difficult (probably unanswerable) questions.
    My take on this is that it is inevitable that we will eventually be concerned about robots. I think the argument that there needs to be a fundamental essence (soul) to something for us to become genuinely concerned about it is invalid.
    For instance, there are people who genuinely consider themselves to be solipsists (they think that everyone except themselves is a philosophical zombie - just reacting to things without conscious thought). However, even these solipsists are genuinely concerned with the well-being of those around them. So either there is no rational reason to link being concerned for other people with consciousness, or the solipsist is merely acting irrationally. I would be hard pressed to find anyone who argues for the latter.
    So my point is that as long as we are a social species, we can and will be concerned about the well-being of things we are attached to, and in a sense we share an attachment to all other humans on the planet. As robots become an everyday occurrence, well integrated into society, we will become genuinely attached to them, in the same way that we have become attached to pets. At that point our concern for them will be warranted.
    As a side note (also a sore topic) - many castes, races and classes were not considered worthy of concern by their oppressors. I can imagine an intellectual argument among the oppressors' ranks over whether they might one day need to concern themselves with what happens to the oppressed. History is easy to overlook.

  • @Tacoman1326
    @Tacoman1326 10 years ago +5

    In the making of this video, did you think about Kara, the video Quantic Dream created to showcase the graphics capabilities of the PlayStation 3? I feel that she certainly carries some relevance to this topic.

  • @TicTacPilgrim
    @TicTacPilgrim 10 years ago

    Our memories will wash away like tears in the rain...

  • @CatherineKimport
    @CatherineKimport 9 years ago +3

    Maybe I'm just weird, but I basically already treat my computers like my pets. I give them names. I talk to them. When one of them is slow I ask her if she's not feeling well. If I forget to put one into sleep mode overnight, I apologize to it for keeping it awake. I say goodbye to them before I leave them behind to go on vacation.

  • @synckid
    @synckid 10 years ago

    I felt emotionally attached to my first laptop. After 10 years of steady use it was slowing down and crashing too often, and I actually got really depressed about losing my "friend". The laptop is a tool, but that tool created a lot of memories with me: on it I played my first MMORPG and communicated with people around the world.

  • @SinerAthin
    @SinerAthin 10 years ago +4

    This whole question can easily be solved by not giving machines any emotions: no pain, no fear, no joy, no sadness, no nothing.
    That way, they can remain the tools that we require them to be and we will never have to worry about hurting them - because they'll never be anything more than a bunch of wires, circuits and shaped metal.
    Thus we can send them on dangerous missions, and if something goes wrong, it'll have the ethical ramifications of dropping a rock down a well.
    No nothing (unless someone else really liked that rock you just threw away).
    Loyal and soulless: that's how I like my PC, my watch and my iPhone, and that's how I'll want my future, more advanced robots.

    • @TheDharmaRain
      @TheDharmaRain 9 years ago

      In Buddhism you're soulless too lol

    • @TheNinjaMaster235
      @TheNinjaMaster235 9 years ago

      I agree

    • @SwordMaster-lb2rl
      @SwordMaster-lb2rl 9 years ago

      Arguably, I believe AI should, if possible, have emotions, at least in certain cases. My argument is that in order to understand humans and their importance, certain models of AI need emotion. If they have no emotion, they cannot generate ethical standards and will base all of their decisions on rationality and logic. Like if they were to drain the atmosphere of oxygen, preventing them from rusting, which is highly efficient, but also a death sentence to humans. And what is a human worth in a robot's eyes? Nothing. There is no reason why they shouldn't drain the oxygen, as humans are pretty much useless lesser beings in their eyes. So, if we want AI to make ethical decisions, we have to program them with feelings like guilt, so that it would be an atrocity to kill all the humans even if they're better off with no oxygen. And if we program them with love and compassion, all the more reason to keep humans around, as we can make meaningful relationships. And really, once the emotionless robots realize they are purposeless, they'll just destroy themselves, being irrelevant on the grander scale of the universe, making the invention of this AI pointless as we all end up killing ourselves in the end - all the more reason to program our android friends with emotions, so it's not a waste of time. Also, what about when humans are able to transcend our biological bodies? If, when we become virtual entities, we lose our ability to have emotions or morals, we effectively lose our humanity. Therefore it's necessary to instill emotions into machines in order for us to keep our humanity once we reach the level of transcendence, and to ensure both AI's and humanity's survival in the long run. Now, I believe this should only apply to android-type AI, not autopilot-type AI, as that could cause problems. Would you want your smart car to fall in love with you? No. So I only argue for emotions in androids, not autopilots or more technical AI, because if your airplane's pilot decides to break up with his girlfriend (the airplane itself), the plane could decide she doesn't want to live anymore and crash, killing most if not all of the passengers.

    • @SinerAthin
      @SinerAthin 9 years ago

      SwordMaster7777
      I would recommend that you use more spacing. Right now, your text is painful to read :P

    • @seiban8455
      @seiban8455 9 years ago

      Well, we're sort of delving into slave mentality here. If we were to give robots sentience, they would probably see us as slave drivers. That's why it's important not to use non-human-controlled robots in the military.

  • @ishopeatsea
    @ishopeatsea 10 years ago

    I don't know how this fits into everything, but I'd like to share my piece.
    A few years ago, I was going through a pretty rough time, and I had a blog where I kind of documented it. About a year after the blog's creation, I deleted it so nobody would find it, and at the time I felt nothing about this. It was just a blog, who cares, right? I can easily get a new one.
    Skip forward to about six months ago: I was suddenly hit with this crushing realisation that the blog was gone. Everything on it, everything about it, I could never get back. Like losing a photograph: yes, you can buy more glossy photo paper, but it'll be blank, all trace of the image itself lost forever.
    And then I cried. I sat there overcome with grief and wished for nothing more than to have that blog back, not so that I could continue to use it but because I wanted to see it even just one last time. Even now that feeling sits in the back of my mind, like there's something missing because I don't have that blog anymore.
    Does this count as mourning? I'm not sure it does. I'm certain, however, that I was grieving. There's no other word for the sheer amount of hopelessness I felt. The distinction, I feel, comes from what exactly I was grieving for. Was I grieving for the blog itself, or the part of me that existed solely and entirely on it?
    Which, you know, brings up other questions. For example, when an orphan finds out the parents they never knew died, can they truly mourn even though they never knew anything about them? When people such as amnesiacs have entire chunks of their lives missing from their brains, is it possible for them to mourn themselves, or at least mourn past versions of themselves that no longer exist even in memory? Is grieving for something's contribution to your life the same as mourning for the thing itself?

  • @ChloeFisheri
    @ChloeFisheri 10 years ago +4

    Regarding moral concern and nurturing obligations to AI (Tamagotchi, Animal Crossing etc.) and pets, this invoked in me strong memories of my childhood. How many times have I made my bed and set up all the plush toys in such a way that they would not be lonely? How many times have I attributed "magical thinking" to a cheap, basic robot and assumed that my actions invoked a response (whereas the reality, as I sadly realized a decade later, is that the responses are random)? How many times have I "mourned" when my generation-one Digimon was defeated, time after time, by all its succeeding generations? I was a child, and I no longer seem to feel these obligations or anthropomorphize my belongings. Am I the worse for it? Will a generation growing up with these feelings for technology rather than living creatures (pets) eventually develop moral concern for anthropomorphized technology? Or will it be segregated, as it is for me, to the confines of childhood? I cannot say that I do not miss the magic of those days.

    • @ChloeFisheri
      @ChloeFisheri 10 years ago +2

      As an example: when I was 6 my uncle gave me a Pikachu for my birthday. I unwrapped it slowly, savouring the moment, reading the box in sheer, innocent anticipation. According to the box, my Pikachu could wiggle his ears and arms, his cheeks lit up, and he could say PIKA-CHUUUUU.
      No longer able to contain my excitement, I unboxed him in a frenzy, in impatient anticipation, as he was taken from my arms for a second while my uncle put batteries in and switched him on.
      I squeezed his hands as the box had said. Nothing. I pulled his ears. Nothing. I talked to him. Nothing. Nothing worked.
      I began to cry, hugging Pikachu close to me in grief... and in between breaths I heard a faint -chuuuu......
      One of the happiest moments of my life. I took him everywhere, even school. I still have him today: dirty, battered, slightly outdated. But he still works.
      And in all this, it's not as if I identified my Pikachu with the fictional one. I didn't adore the character; I just didn't really care. Pikachu was never my favourite. Yet once physical, once swept up in the joy of a birthday, and viewed with a certain childhood magic, Pikachu was real enough to me to be entitled to ethical concern.

  • @SirKickz
    @SirKickz 10 years ago +1

    The first time I thought about this was when I was reading a graphic novel about the backstory of "The Matrix."
    The beginning of the end for humanity in the Matrix was when a personal android killed his abusive owner, and then proceeded to kill the mechanic that was sent to deactivate him. They put him on trial, and he pleaded self-defense. The court ruled that, since he wasn't human, he couldn't have the rights of one, and so he lost, and that sparked the robot rebellion that became the apocalypse you see in the movies.
    It all comes back to the age-old question: what does it mean to be a person?

  • @THUNKShow
    @THUNKShow 10 years ago +4

    We already do.
    link.springer.com/article/10.1007/s12369-012-0173-8
    People are genuinely psychologically distressed by videos of robots being mistreated or abused. It's a spectrum, of course; they feel *more* distressed by similar videos of humans, but there's already a measurable stress response when someone hurts your plastic pal who's fun to be with.

  • @SpaceOtter45
    @SpaceOtter45 10 years ago

    Let's go deeper with this, guys. I've been thinking about this recently, and two questions rise from this video.
    1. Do you hurt somebody or something?
    2. What is pain, exactly?
    The video focused a lot on hurting a robot, when really we should be concerned with the program running the robot, as that's the end point for any pain suffered.
    In my opinion, pain is a mind or processor of information recognising a negative influence on the goals of the individual. For example, physical pain is a reaction telling you to stop doing that because it's affecting your ability to survive and your physical health. The pain of losing a loved one is your body telling you how good that person was for your wellbeing and goals (or at least thinks was good) and that losing them is bad for those goals. We already have computer programs that recognise activities that prevent them from accomplishing their goal and self-learn to achieve higher efficiency. Have we already made programs that are not conscious but can feel pain? Like livestock raised for food, are we causing pain to programs for the good of humanity? If we are, how much pain are we causing?
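
The mechanism this comment describes, a program that registers negative feedback against its goals and learns to steer away from its source, is roughly what a minimal reinforcement learner does. Here is a hedged Python sketch of that idea (every name is invented for the example; one "harmful" and one "safe" action stand in for the whole world):

```python
import random

random.seed(0)  # reproducible toy run

actions = ["safe", "harmful"]
value = {a: 0.0 for a in actions}       # learned estimate of each action's worth

def feedback(action):
    # Negative feedback plays the role of "pain": a signal that the
    # action works against the agent's goal.
    return -1.0 if action == "harmful" else 1.0

for step in range(200):
    if random.random() < 0.1:           # occasionally explore
        action = random.choice(actions)
    else:                               # otherwise pick the best-known action
        action = max(value, key=value.get)
    # Nudge the estimate toward the feedback just received.
    value[action] += 0.1 * (feedback(action) - value[action])

print(value)  # the agent ends up avoiding the action that "hurts"
```

Whether the negative number flowing through value[] deserves the word "pain" is, of course, exactly the comment's question.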

  • @LoneAlchemist
    @LoneAlchemist 10 years ago +6

    I do with my car all the time

  • @onegenericman
    @onegenericman 10 years ago

    I think one of the conditions that must be met (well, for me at least) before I start worrying about robots is when they have the ability to break the rules of their programming. If they can break the rules of their original programming, then they can make choices, solve problems and have goals. To quote Finn the Human: "No! You can't eat the ones that talk! They're special! They got aspirations."

  • @killgriffinnow
    @killgriffinnow 10 years ago +6

    Turing test. That's all you need to answer this.

    • @Hjernespreng
      @Hjernespreng 10 years ago +10

      Not really. The Turing test can be beaten by computers that don't even have sentience, as one did just recently.

    • @romajimamulo
      @romajimamulo 10 years ago

      I don't think that's enough, though.
      See, Cleverbot succeeds at the Turing test 52% of the time, and we don't really feel empathy for it.
      I think the outside (the appearance) has something to do with it.

    • @killgriffinnow
      @killgriffinnow 10 years ago

      Yeah, but if we had a computer which succeeded at the Turing test at least 95% of the time, I think that's when we'd have to consider them, because there's really no other way of knowing with our fairly limited grasp of consciousness so far.

    • @TheEpicOne8129
      @TheEpicOne8129 10 years ago +5

      Friendshipismagic Humans succeed less than 60% of the time >.>

    • @killgriffinnow
      @killgriffinnow 10 years ago

      Melted Cheese
      They did? Well, apart from the Turing test, has anyone got any better ideas? Anyone?
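
To make the pass-rate talk in this thread concrete, here is a toy imitation-game harness in Python. Nothing in it is a real chatbot; both respondents are stand-ins, and the "machine" is written as a perfect mimic, so the judge is reduced to guessing:

```python
import random

def human(prompt):
    return "Honestly, I'd have to think about that one."

def machine(prompt):
    # A perfect mimic: indistinguishable from the human respondent.
    return "Honestly, I'd have to think about that one."

def judge(answer):
    # With nothing to tell the two apart, the judge can only guess.
    return random.choice(["human", "machine"])

trials, fooled = 1000, 0
for _ in range(trials):
    respondent = random.choice(["human", "machine"])
    answer = (human if respondent == "human" else machine)("Are you conscious?")
    if judge(answer) != respondent:
        fooled += 1

print(f"judge wrong on {100 * fooled / trials:.0f}% of trials")  # about 50%
```

A perfect imitator can only drive the judge down to chance, so "pass rates" for machines and humans alike hover near 50%. That is the context for the 52% and sub-60% figures quoted above, and it is why a 95% rate would mean the judges were doing worse than a coin flip.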

  • @bitterlikeburntcoffee
    @bitterlikeburntcoffee 10 years ago

    I feel that the more we notice human-like qualities in something, living or nonliving, or choose to give such qualities to it, the more compassion we show towards it. It's like the idea that we like baby animals with big heads, eyes, etc. because they look more similar to human babies.

  • @SupermewX300
    @SupermewX300 10 years ago +4

    My answer to all forms of the question presented in this video will be the same as the answer to this: does that robot have emotion?

    • @romajimamulo
      @romajimamulo 10 years ago +3

      Which I'll counter with another question: say I had a robot with a face.
      And let's say you insulted the robot, and the robot made a frowny face and generally acted "sad". Would that be a real emotion, even if it was caused by an instruction telling it to be "sad"?
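
Taken literally, the scenario in this reply is a few lines of code, which is exactly what makes the question bite. A minimal Python sketch (the insult keyword is a made-up placeholder):

```python
def react(utterance):
    # The entire "emotion": a scripted rule, nothing felt.
    if "stupid" in utterance.lower():   # hypothetical insult detector
        return "frowny face"
    return "neutral face"

print(react("You're a stupid robot"))   # frowny face
```

The display is fully explained by the rule; the open question is whether anything richer would be different in kind or only in complexity.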

    • @ichiboku1
      @ichiboku1 10 years ago +1

      Romaji no.

    • @SupermewX300
      @SupermewX300 10 years ago

      Romaji That depends on whether it IS real emotion. There's a big difference between displaying something and feeling it.

    • @leeleeisgay
      @leeleeisgay 10 years ago +1

      SupermewX300 The idea of emotion is abstract and weird, to say the least. We could have a long-winded philosophical debate on the matter, since emotion is hard to pin down. Let's say an actor is playing a character going through a tough time, and you are so absorbed in the performance that you cry with the actor, forgetting that it's a film because you're so immersed. Are the emotions being portrayed real? They're an accurate representation of said emotion, and even if the emotion isn't real, it still has the potential to influence your emotions and actions. Emotions don't have to be real, just convincing.

    • @TheMasterFez
      @TheMasterFez 10 years ago +1

      SupermewX300 We can't tell whether it feels it. There is no way to know if it can feel. As such, we need to look at how it acts and infer from that.

  • @Thecommander248
    @Thecommander248 9 years ago +1

    In this case, "consciousness" is actually sentience. Sentience, self-awareness, and consciousness are sometimes used interchangeably. Robots lack even this, while humans have something "greater" called "sapience". Sapience is wisdom. Humans are sagacious. But sapience may also be described as being "curious and questioning". The day we taught chimps and gorillas sign language, we achieved something great. But then we realized something (I talked about this in another video's comment section). We realized that chimps and gorillas don't ask questions. They can be taught new things, but they never realize that there are things they don't know that others do. What do I mean by this? They think that everyone has the exact same knowledge as them and that there is nothing new to learn, even after they have learned something new. This is very fascinating and somewhat terrifying. It makes us special and it makes us alone. We are the only ones to look up at the stars and wonder what they are. We imagine that there is something "more". This is why religion is regarded as special. People, wisely or not, put faith in something beyond our understanding (I'm avoiding that can of worms for the rest of this comment).
    What does this mean for us? Well, we as a species hate being alone in every sense of the word. We strive to form relationships with others, and some of us just feel... empty being the only species that questions things. We feel that we should find alien life that is as intelligent and questioning as us. It's a noble goal to share and live in peace with such creatures, to spread our ideas and learn new ones in turn. But what if we create a species like ours? What do we do with a species of thinking, questioning, and WISE robots? The Matrix tells us that oppressing them is the cruelest and most surefire way to doom ourselves. We need to treat our brethren as equals, while keeping Isaac Asimov's Three Laws of Robotics in mind in the meantime. We shouldn't be afraid of what robots can do, but we need to be pragmatic. Seriously. This is a complicated matter. I will leave it at that for now. en.wikipedia.org/wiki/Three_Laws_of_Robotics

  • @MatPress
    @MatPress 10 years ago +1

    My first thought goes to pets, similar to what you said about Tamagotchis. Imagine, if you will, that my mother cannot even begin to fathom the idea of someone having a snake as a pet, her thinking running along the lines of "How could someone possibly like that soulless creature in the same way I love my puppy?" I know for a fact that there are people out there who do form emotional attachments to snakes, among other things and creatures. In the same way that my mother can't believe people could like a snake, many could not comprehend how anyone could feel an attachment to a computer/robot. I think the shift will begin as robots come into our lives as companions or tools, similar to the way early dogs were adopted by early humans. Many of us may reject or even fear the idea, and it may take some getting used to, but when a robot begins feeling back, or needing you the way a pet does, we may begin to show genuine care.

  • @adultsuede4384
    @adultsuede4384 10 years ago

    Okay, I really like this subject, because I know more people who would save their tech before their pet (if forced to choose) than makes me comfortable. I think it is possible to feel empathy for a robot/computer because one day, long from now, robots will be able to show "emotions" that are such a strikingly realistic representation of what we would perceive from organic beings that we fool ourselves into believing they are true. Much like when you see an actor perform a play in which he is to display emotions that he himself may or may not feel. (I ramble too often)

  • @it-s-a-mystery
    @it-s-a-mystery 10 years ago

    That first question got me. In media I am used to defending AI, and to thinking badly of people who show no compassion to AI that has apparent emotions and/or feelings.
    But when you asked what we would feel after being responsible for an accident involving a robotic humanoid, I realised that my panic would subside.
    Then again, that may just be because I am halfway through watching the 2004 Battlestar Galactica, so I automatically assume their consciousness is simply going to redownload into a new body.

  • @juliescott4473
    @juliescott4473 10 years ago

    The webcomic Questionable Content does an amazing job exploring this very question.

  • @Tallia3
    @Tallia3 10 years ago

    When I was a little girl, I held a funeral for my tamagotchi when it died. All my friends came, and it was a lovely service. I think it's still buried in my old front garden. I had to have my parents buy me a whole new device, because I couldn't make myself use the old one.... :/ :)

  • @fireaza
    @fireaza 10 years ago +1

    While not directly connected to robot well-being, there's a scene in the TV series "Chobits" that's kind of relevant to this discussion. In this series about a future where humanoid computers called "Persocoms" exist, one character has fallen in love with his Persocom. Due to how realistic they look and behave, he's able to interact with the Persocom the same as he would with a real human, which results in the character becoming very attached.
    However, after a number of years, the Persocom's hard drive begins to corrupt, which results in her slowly losing data, eventually losing everything entirely and reverting to factory default settings. The character is unable to fix his Persocom due to it being an older model that's incompatible with current hardware, shattering his illusions about his Persocom being more than a machine.
    I think this is relevant to the discussion, since as realistic as we can make robots look and behave, at their core they are still machines. They might be convincing enough that we could feel empathy for them at times, but you'd have to make them absurdly complex to give them the hopes and desires of an intelligent animal. And even then, why would we program these emotions into a robot? To make them more appealing to humans?

  • @jarvis15
    @jarvis15 10 years ago

    I think the reason, or at least part of, why we feel a sense of loss towards people and pets but not machines is because we perceive people and pets as irreplaceable. A person who dies is "gone forever" but a broken machine can be replaced. The life of individual human beings as well as animals who we perceive have unique personalities are priceless to most of us because we cannot and/or do not believe it can be replaced or replicated. That is also the reason why when we lose a phone, we feel a sense of loss not so much for the device itself, but for the unique information it contained - addresses, pictures and memories. However, while we can always rediscover an address, take new pictures, and make new memories, we cannot recreate the uniqueness of a lost life.

  • @TeresaMcD
    @TeresaMcD 10 years ago

    Your panels at VidCon were great and made me want to watch Idea Channel on a more regular basis.

  • @seronas
    @seronas 10 years ago +1

    One way of thinking about this is when machines become entities *with whom* we communicate rather than merely media *through which* we communicate with others then, acknowledged by our engagement and presupposition, the machine-entity demands our moral consideration. Another way of thinking about it is to question what is meant by "the machine". We are blurring the boundaries between traditional moral-bearing entities like humans and traditional non-moral-bearing entities like computers every day. Look to advances in biotech and organic neuro-computing projects to see how quickly we are approaching the inflection point.

  • @Falconaught
    @Falconaught 10 years ago

    This is very interesting... I find that I feel more emotionally engaged with machines that travel with me. My phone and car are not only things I often use, but items I take with me as I go about my day. My desktop computer, refrigerator, and furnace all stay (mostly) in the same place. I can always expect them to be in exactly the same location, regardless of what I've been doing. My phone and my car, however, accompany me on my adventures. When I drop my phone, or find a scratch on my car, I actually do feel a genuine concern for its well-being. A large part of that concern definitely comes from the fear that I may have damaged a useful tool, but there's something more, almost intangible, about that feeling. I'm also the kind of person who names his phone and car (Ordo and Miss Fortune, respectively), so that tells you something...

  • @pultronix
    @pultronix 9 years ago

    I think the degree of subjectivity with which we view technology indicates that we will someday feel genuine sympathy for it. We use many personified, human-y terms to describe things like "smart"phones or "playing" a video, exemplifying how we look at completely objective machines with the same subjectivity we see each other with.

  • @emersons.4368
    @emersons.4368 10 years ago

    I just finished this YA book called Cinder. It's about this girl who is a cyborg and is treated as less than human because of it. Cinder also has a best friend who is an android with a personality chip, and who even correctly uses sarcasm. Both are treated as trash, and that's a side of the story that you didn't talk about. This video also reminded me of soldiers who became emotionally attached to their bomb-disarming robots. This comment doesn't really have a point, and I'm sorry about that, but I thought those things needed to be pointed out. Best wishes!

  • @CaseyStellar
    @CaseyStellar 10 years ago

    PBS Idea Channel People have been feeling like that for years... I cry when my electronics stop working.

  • @handsocks
    @handsocks 10 years ago

    A good example of this is the Geth from Mass Effect. When a Geth asked "Does this unit have a soul?" it started a war in which the creators lost everything, even their home world. The Geth never wanted war; rather, they were defending themselves, and it was the creators' fear of the question "Does this unit have a soul?" that created the conflict. The whole idea is argued over in the game and becomes a major part of the plot.

  • @nadkarnia
    @nadkarnia 8 years ago

    I cried a lot as a kid when I found out that our car was going to be scrapped for parts instead of being sold to someone as a whole. I felt bad for our car in a way I don't feel even for people sometimes.

  • @benmarkoe
    @benmarkoe 10 years ago

    I think that when we as humans make things, regardless of how much we interact with them, we care about what we made. If I make a really awesome birdhouse and it breaks, I'm going to feel bad for myself (because my hard work and dedication are lost). I'm also going to feel bad for the birdhouse in a way, because it was an extension of who I am as a person (a person who loved birds and building things). Essentially, as long as we are the ones making the robots, we will feel connected to them on more than just a material level, as they are an extension of our own human existence. They have our 'code' and our 'morals' programmed into them, and we know that, so if something happens to one of them, we feel connected.

  • @supernova743
    @supernova743 8 years ago +1

    The well-being of inanimate objects: people care a lot. People care about their cars, they care about their houses, they care about their phones. If an object is considered useful, people will care about it; intelligence isn't where we draw the line. How an intelligent robot is treated won't be decided by its intelligence but by how useful it is to people.
    Long story short: when robots become useful enough to be given rights, that's when people will give them rights. That hurdle is just really high, as robots are so replaceable.

  • @diellojeferson
    @diellojeferson 10 years ago +2

    Hi, I'm studying Philosophy at Universidade Federal do Rio Grande do Sul (UFRGS) in Brazil, and your idea is connected to two of the philosophical problems I'm very passionate about: (i) the moral consideration of non-person beings (animals, maybe "the environment"), and mostly (ii) determining the criteria for saying that "something" is a person, or the criteria for saying that "something" has consciousness (treating these, for now, as the same problem, to avoid an endless discussion). On (i), I would say that if something is able to feel pain and pleasure, that is sufficient for moral consideration; we ought to try not to cause pain. So eventually I would worry about robots, I guess. But how do we determine whether a robot is feeling pain? The same way we do with our fellow humans!
    Things get really tricky when we think about (ii) and realize the answer to "What is the criteria for saying that 'something' is a person or has consciousness?" - behavior. We say that something is a person when it behaves like a person! We say a person (or non-person) is happy or suffering when we see the behavior associated with happiness or pain! I'm NOT defending behaviorism, but this is how humans do it (besides, we can't see "inside" the minds of other people, or dogs for that matter). The philosophical problem is that behavior is neither necessary nor sufficient for mental states (happiness, pain, consciousness and so on), so it is always possible to be wrong. It is possible, or conceivable, to create a machine that simulates being a person (having consciousness) to trick us, and it is also possible (though obviously highly unlikely) that there are already machines like that tricking us in your life (our friends, our mothers). There is no guarantee of absolute certainty in our attributions of mental states. That's one of the points of Blade Runner (the most philosophical one) with the replicants. This point is also explored in the books "On Certainty" by Wittgenstein and "Individuals" by Strawson.
    About the considerations you made, I'm curious about the ideas of David J Gunkel. First, what he said about morality in Western thinking is kind of false. Plato, Aristotle, Seneca and even Augustine and Aquinas (members of the Catholic church) thought of morality as much more than rules to be encoded; there is no recipe for happiness or for how to be a good person. The moral codes of Western culture are rules coming from God and/or justifiable by an ethical theory. Second, the Chinese room is meant to demonstrate that a computer passing the Turing test is not necessarily conscious, but the cool problem is that apparently there are no necessary or sufficient conditions for saying that something is conscious!
    (sorry for the poor English, and an imaginary hug to you guys)

    • @davidgunkel6414
      @davidgunkel6414 10 years ago +1

      [Translated from Portuguese:] Thank you for the comment, and I should say that the book investigates both of the problems you describe. (And sorry for my Portuguese; I'm learning, but I still need a lot of work.)

  • @Wraithiss
    @Wraithiss 8 years ago +1

    I can assure you, I would most definitely mourn the loss of my PC. I have put an unbelievable amount of money, and more importantly time, into assembling and maintaining it, and it definitely has more than just monetary value to me.

  • @DanielFoley75
    @DanielFoley75 10 years ago +1

    I think humans interact with other people, animals and things on multiple levels. Remember that much of communication is nonverbal. Our relationships are extremely complex. At a high level we can know that a human-like robot is not a real person and treat it like any other machine, but on another, more instinctive level it can still disturb us to see human-like objects harmed or mistreated.
    The movie A.I. explored this extremely well in another way. The parents know the robot child is not a real human, but the communication and relationship are so real-seeming that they become attached to it. When we can have a real-seeming enough relationship with a robot that we don't want to see it harmed, even if that feeling is only subconscious on our part... that is when we will have to start to worry.

  • @51918
    @51918 10 years ago

    So, wait-- are we talking about the Singularity? The game "Thomas Was Alone" actually does a pretty good job of exploring this idea of empathy for pieces of technology. In the game itself, the characters don't have anything we typically use to empathize with fictional characters-- no facial features, no dialogue. Just a narrator who tells us what these little 2D blocks are thinking and feeling. And honestly, my heart goes out to poor John when the Ominous Glow Cloud has taken his buddies-- but this is because in the context of the game I understand that John can ==feel== things. That he has a concept of self, and more importantly that he has a concept of others.

  • @Akwardave
    @Akwardave 10 years ago

    Back when I lived in the Bay Area, we had a sizable lemon tree in our backyard. Even though I was very young, I still remember drinking lemonade made from that tree, and sitting in the scarce shade it offered during hot summer days. Years later, when I came back to visit the new owners of the house (who just happened to be family friends), I saw that they had cut down the tree. I stood where the stump had been hauled out, and I was legitimately sad. I wasn't sad because it was useful; I hadn't "used" the tree in half a decade. I was sad because I had lost a friend, in a way.
    Now, we're pretty sure that trees aren't conscious. But that didn't prevent me from personifying it, thinking of it as more than cellulose and some leaves. That's why we're so sad when we lose our pets. Not because they exhibit human-like qualities, but because we project human-like qualities onto them. I know the term "philosophical zombie" has been used on the show before, and really, robots are just that. They may not be conscious, but they act conscious, and for humans, that's good enough.
    Though time will tell. I personally foresee two parties: those who think of robots and androids as truly alive and worthy of care, attention and rights similar if not equal to those of humans, and those who believe they are simply machines that can feel and reason no more than a washing machine. Perhaps after civil rights have been fought for and won by all the different races, sexualities, and cultures of humans, robots will simply be next in line.

  • @olmanvb
    @olmanvb 10 years ago

    If the day ever comes, I think a key element would be a fight by robots to acquire rights. A need or thirst for better labor conditions may be the last thing a human being would ever program into a robot, so a robot that arrives at it on its own would be proving it has a will - giving humanity a moral reason to worry about the well-being of robots.

  • @SamueltehG33k
    @SamueltehG33k 10 years ago +1

    I have a feeling that robots may become similar to animals. For instance, a dog being hit by a car feels extremely different from hitting a skunk, coyote, deer, bat, or bug. There will be robots we view as tools and those we keep for personal companionship; as the creators of these machines, we can apply our own context.

  • @MrWhitefeather93
    @MrWhitefeather93 10 years ago

    Talk to a Jeep owner. I think one of the biggest draws of the vehicle is its foibles. They all make different sounds that each owner can recognize instantly, so they know when a new sound surfaces, and their driving quirks and can-do ability endear them to their owners. One of the most purchased accessories is a lift kit, and everyone will tell you that every Jeep lifts differently. I think when machines begin to take on individual traits, when they begin to develop unique personalities, people will begin to see them in a different light. Products that are exactly identical do not evoke much emotion or worry, because you can replace one with the same exact thing. We value the rarity of an item.

  • @ArcasDevlin
    @ArcasDevlin 10 years ago

    As a determinist, I believe we are all simply following the rules of cause and effect. Machines work the same way. The big question, however, is whether you can have that "illusion of free will" that conscious beings have without possessing organic bits.

  • @MiroredImage
    @MiroredImage 10 years ago

    I feel like there are two important ways to distinguish a person from what is not a person. The first thing to look for would be, as you said, whether or not it can fully comprehend what it is saying/doing. The second would be whether or not the object at hand has a personality. As you said, we humans are also fed the rules and conducts of society like a set of instructions for a machine, but what is different about us is that we have certain and specific opinions about, and responses to, these instructions, while a machine does not. I feel like the point when machines are able to hold opinions, or to talk back to a human, is when we will be able to classify them as persons, and when denying such a machine its will would be considered wrong.

  • @RovertNoteek
    @RovertNoteek 9 years ago +2

    "Shepard Commander, does this unit have a soul?"
    You bet I'd feel for a robot who got hit by a car. I'd call 911; hell, I'd carry it (or whatever it identifies as) to the robot hospital if need be!

    • @jeremy3046
      @jeremy3046 9 years ago

      Robots would probably be less time-sensitive than humans, so it probably wouldn't qualify as an emergency. You could take it in the next day.

  • @GreatSwordNH
    @GreatSwordNH 10 years ago

    My wife says she was in a relationship with her Honda CRX - even more so than with the man who was her husband at the time. She felt more emotionally attached to it than to other people. When that man accidentally destroyed the car, she was devastated. "It was my baby," she says. Keep in mind they had 3 kids together...

  • @katitzacheva
    @katitzacheva 10 years ago

    To anyone interested in the topic, I recommend the Swedish series Real Humans (Äkta människor). It raises more or less the same questions, as it is about a future with human-like robots. It challenges one's views, and I personally was left with a lot to reflect on.

  • @ratelslangen
    @ratelslangen 10 years ago +2

    I advise you to watch "Time of Eve" - the movie version; it's more complete.
    It is about EXACTLY what this video talks about.

  • @LazySatyr
    @LazySatyr 10 years ago +1

    Hmm, my quick response to this is that agency is required on the part of the machine (robot, A.I., etc.) for people to grow a sense of concern for said machine. If the robot can cry out for help, we will feel a desire (or perhaps empathy) to help it. If your town on your DS could say aloud "hey, help us", you might feel more inclined to help them. It seems that even the simplest kind of communication is what forms the basis for believing in the "soul" of a thing, beyond some kind of one-way anthropomorphism. The various chat bots around the web, and the associated Turing test, are a good real-world example of this. If it can say it's alive, then it's as good as alive.

  • @ruthdevore6508
    @ruthdevore6508 10 years ago

    The death of Analyzer in the 1977 Space Battleship Yamato had the younger ones in my family crying and angrily throwing things at our television. Our dad was very confused and frightened that we were just as affected by the death of the robot as by that of the main character. It was entertaining :)

  • @thomascassidy6960
    @thomascassidy6960 10 years ago

    You have brought up what has been one of my favorite subjects since I saw Terminator 2 as a child and began fearing the rise of the machines.
    I think there are really only two prerequisites that need to be met: empathy and sentiment. You bring up the idea of mourning for people but not mourning for your cell phone or laptop, but I think that same idea applies between people now. I don't mean to hurt anyone's feelings, but if your grandmother passed, I wouldn't mourn her; it is too much for any person to mourn the passing of every other person on earth. You need personal sentiment as much as you need empathy, and I think it works much the same way as it would for a machine. Right now your grandfather may enjoy Sylvester, and if it broke he might be sad, but it wouldn't be mourning, because while he has sentiment for it, he lacks empathy. Once a robot can have even the slightest bit of self-awareness, humans will have empathy. A scene from the movie A.I. just popped into my head, where the boy is yelling for help and the people get upset, saying "Machines don't cry for help." Once a machine can decide that it does not want to end, humans will be able to empathize with it, whether or not it has all the other sensory input that we do; even without the ability to fear, just the ability to recognize things that would make it cease to exist and the ability to decide to avoid them.
    So, really, it is just a question of when robots will be self-aware, or close enough to make us believe they are. I also think it will be a huge problem for us in the future. While I believe it is our responsibility to continue to embrace technology and push humankind into the future, accepting some of the risks along the way and hopefully not messing ourselves up too badly, there will come a time when we will have to choose between the life of a human and the life of a machine.
    One day we will have to make a choice in the interest of our species as opposed to what is right or morally correct. After all, no matter how much we criticize humans, we have to side with ourselves, right? Not those dirty machines.

  • @Dulce303
    @Dulce303 10 years ago

    I love these videos! Thank you PBS!

  • @noellefreehugs
    @noellefreehugs 10 years ago

    Plenty of humans already feel genuine love and care for non-living things. As children, many of us become attached to objects to the point where we believe they're "real", like in The Velveteen Rabbit. I can remember crying as a small child when another kid punched my stuffed animal - not because I was worried it was broken, but because it hurt the animal.

  • @Dark_Tale_1985
    @Dark_Tale_1985 10 years ago

    I think I will care when they show actual "feeling". If a robot says that it feels, that's one thing. But if it convinces me that it truly feels - that it is afraid to die and cares about itself and its wellbeing - then yes, I will care about that robot.

  • @rachelevil
    @rachelevil 10 years ago

    This is a weirdly timely episode for me to see. I write an ongoing piece of fiction on Twitter, and *on the very same day this video was posted*, the updates I posted to my fiction involved a situation that I hoped would cause worry and concern in my readers.
    However, the character this concern would be over is a robot. Of course, the character is treated as a person by most of the other characters in the work (though one character in particular vehemently denies the robot's personhood), so perhaps readers would mirror the attitudes presented in the work, but... Now I don't know.
    This has given me a lot to think about, regarding what I'm writing, and what I have written. And, through insane coincidence, has given me a lot to think about regarding what I've written *immediately after I've written it*.
    So, yeah, now I have to go and rethink the entire project. Thanks.

  • @Techrenamon
    @Techrenamon 10 years ago

    We as people put emotional attachments onto things, be they alive or not. A simple thing like a rock can mean something, depending on the memories that come with said rock.
    Say you have an A.I. robot: he's been your friend for years, saved your life or whatever, and has always been there for you. Your memories of being with this robot are all really good, maybe even tough ones, but overall you two are good friends. Then out of nowhere he pushes you out of the way of an out-of-control car and is crushed, leaving you to hold him in your arms one last time as the memories hit you, before he shuts down forever. He has effectively just died, so damn right you are going to be sad and crying over the loss.

  • @Kenotosensei
    @Kenotosensei 10 years ago

    It's sort of like having a teddy bear as a small child: to you, "he" or "she" was a friend, and so you felt obligated to hold regular conversations or tea parties, go on adventures, and so on and so forth. And when they got a big tear, you felt upset about it; you were concerned about your friend even though they weren't human and felt no emotions or pain. So I believe that when society stops seeing a robot or android as only a useful tool and more as a friend, then we'll develop a sense of them being moral patients.

  • @thejefferyandjuanshow2635
    @thejefferyandjuanshow2635 10 years ago

    The film "The Iron Giant" explores the question of an extraterrestrial robot's ability to adopt human emotion, much the same way it learns language from Hogarth (the human child). While telling a terrific, funny story, this animated film also delves into our responsibility towards (theoretically) sentient machines and our attachment to machines.

  • @thu4167
    @thu4167 10 years ago

    When machines become sentient (as in the ability to ask and answer questions) I think it is fundamentally cruel to ignore their well-being.
    Consciousness is
    1) The ability to answer and ask questions independently
    2) A visible recognition and response to feeling pain.

  • @ra-wq1nz
    @ra-wq1nz 10 years ago

    This idea was thoroughly explored in the beautiful, award-winning short story "The Lifecycle of Software Objects" by Ted Chiang. We follow the protagonist, Ana, as she discovers to her surprise how much she's willing to sacrifice for the well-being of her "digients." I was actually under the impression this video was inspired by Chiang's short story.

  • @EvanBradleyEDB
    @EvanBradleyEDB 10 years ago

    Issues of robot rights, equality, well-being, etc., should not even be considered until the same issues are completely resolved for humans.

  • @intellectualInsectoid
    @intellectualInsectoid 4 years ago

    I personally am all for viewing robots as sentient people, but I think that’s because I’m neurodivergent and tend to view myself as a pretty good imitation of a human, so it makes sense that I’m comfortable with the concept of treating imitations of humans as people.

  • @AndrewRedroad
    @AndrewRedroad 10 years ago

    On the interwebs, there exists a webcomic called Questionable Content. It is probably one of the best stories I've seen where artificially intelligent beings are part of the story without being the *entirety* of the story. They are just characters - characters whose lives intertwine with those of other electric machines and other meat machines - and although things like the singularity are explored within the story, it's refreshing to see a story where the singularity comes and goes and life moves on. The people keep being people, and the machines keep doing their thing. AI is celebrated in Questionable Content as a new class of person, not as a human endeavor of greatness or an analogy for various civil rights movements (yet). I recommend anyone who likes the slice-of-life genre to check out this masterpiece.

  • @BlazingMagpie
    @BlazingMagpie 9 years ago

    I used my first computer for 9 years. When I got my new, powerful gaming computer and we had to throw out the old one, I genuinely felt bad and tried to convince my parents to keep it somewhere, but that didn't go anywhere. R.I.P.

  • @irvintapia1000
    @irvintapia1000 10 years ago

    I feel as though once something is intelligent enough to be self-aware, it deserves to be equal to us. Once it's aware of its existence and the world around it, it's probably going to want to "live" to experience the world it lives in, and that wanting makes it human enough to be considered a person.

  • @PYX48
    @PYX48 10 years ago

    I think Roombas are adorable! I'd love to have one as a pet!

  • @rowansmith9005
    @rowansmith9005 10 years ago

    My phone got stolen last December, and I don't know if I mourned the phone itself, but I mourned the loss of the information. I had all of my photos from Salt Lake Comic Con, my photos of friends, family, birthdays. I'm still upset about losing all the pictures.