When Does Artificial Intelligence Become 'Human'? [ video essay | Detroit | Talos Principle ]

  • Published: 10 Nov 2018
  • If you love my content and want to support our Supreme Leader Mishka (thank you!) - patreon/discord community: / hellofutureme
    Check out the artist, Karolina, who did the piece for this video!
    RUclips ruclips.net/channel/UCPOU...
    Instagram / blaakeyh
    Twitter / blaakycat
    Tumblr / blakysart
    Want to stop slavery AND get an awesome t-shirt?: www.teepublic.com/user/hellof...
    Learn more about our channel-sponsored charities:
    A21: www.a21.org/index.php?site=true
    WWF: www.worldwildlife.org/
    My SECOND CHANNEL can be found via a link on my main page or at 'TwotheFuture'. Come join us!
    Email fanart/fanmail: hellofuturemeyt@gmail.com
    Twitter: / timhickson1
    Facebook: / hellofutureme
    My website: timhicksonyt.com
    IF YOU WANT TO SUBMIT WRITING TO BE FEATURED:
    timhicksonyt.com/featured-com...
    IF YOU WANT TO SEND THINGS TO ME (address):
    Tim Hickson
    PO Box 69062
    Lincoln, 7608
    Canterbury, New Zealand
    The artist that designed my display pic! serem01.deviantart.com/
    The artist who designed my cover photo:
    - raidesart.deviantart.com/
    - / raidesart
    - / raidesart
    Credit for the background music I use in a LOT of my videos:
    Kevin MacLeod "Music for Manatees"
    Stay nerdy,
    ~ Tim

Comments • 686

  • @HelloFutureMe
    @HelloFutureMe  5 years ago +208

    So, this question means a *lot* to me. *It'd mean the world, if you enjoyed this, for you to like/share this video with someone who might find this question interesting!* If you would like to read the two academic essays on The Talos Principle, posthumanism, and the basis of rights for AI ( patreon.com/hellofutureme ), they're available to those supporting me for just a couple of bucks per month. I'd *love* it if you joined our Discord-Patreon community, and if you shared your thoughts! Stay nerdy.
    ~ Tim

    • @heavenlygaze-
      @heavenlygaze- 5 years ago

      Why the torture of Premieres? And honestly, if you do use them... please do it right before the release

    • @awulfy9052
      @awulfy9052 5 years ago +1

      Just the mystery of what makes us us, what makes us conscious, and how long it takes for AI to become human is a personal interest of mine; this will be a great video. Plus I loved Detroit: Become Human

    • @dowottboy5889
      @dowottboy5889 5 years ago +2

      For someone to be human, they would have to at least be human in origin. AI would never have been made from human cells with human anatomy, therefore they would not be human. However, an AI can act like a human and feel like a human, therefore deserving the same rights as humans. They might even be recognized as their own species one day, but they would never "become human"

    • @Raximus3000
      @Raximus3000 5 years ago

      @@dowottboy5889
      The term "human" can be defined in many ways depending on what you mean by it: philosophically, scientifically, religiously, legally, etc. In our society, since humans are the only intelligent beings, nobody else falls into that category in any way. If you do not define what you mean, then you are just throwing around generic ideas.

    • @rubilax1806
      @rubilax1806 5 years ago +2

      Hello Future Me I don't believe a machine can be sentient, as it is just an emulation and cannot have awareness

  • @thevoidlookspretty7079
    @thevoidlookspretty7079 5 years ago +617

    You almost, ALMOST didn’t reference Avatar.

    • @daddyleon
      @daddyleon 5 years ago +37

      At some point I started wondering... is he trying to avoid it but failing, or desperately trying to find a way to fit in at least one reference?

  • @glanced9684
    @glanced9684 5 years ago +722

    When they start to procrastinate. That's when.

    • @Famously5518
      @Famously5518 5 years ago +52

      glancedUp they gotta be semi-self-destructive, just like humans

    • @HelloFutureMe
      @HelloFutureMe  5 years ago +105

      You solved it. You found the answer.
      ~ Tim

    • @superthorc6894
      @superthorc6894 5 years ago +1

      glancedUp LoL

    • @lube6966
      @lube6966 5 years ago +9

      Then I must be very human right now...

    • @grantbaugh2773
      @grantbaugh2773 5 years ago +29

      "I reeeeeeeaaaaallllyyyyy should perform this subroutine right now, but I also reeeeeeeaaaaallllyyyyy want to stream all three seasons of Avatar right now..."

  • @tristragyopsie5464
    @tristragyopsie5464 5 years ago +245

    I have used this example many times. Stick with me a moment.
    Imagine a hill: you are standing at the bottom, and I am at the top with a ball.
    I release the ball and you see it not only roll down the hill but dodge rocks and turn and swerve as it goes around walls and trees before coming to a stop beside you at the bottom.
    Did the ball think?
    No, it was just a stage trick.
    From my vantage point on the top of the hill I can see the dips and grooves I set the ball into that made it move in that apparently thoughtful way.
    This is programming. The illusion of thoughtful action.
    The ball is just a ball, just as most robots are little more than RC cars, with VERRRRRY thoughtful programming.
    The thought and the thinking came from a person who set the initial boundaries. That is even less than instinct, because it doesn't allow for them to ever act outside it.
    A dog is driven by instinct and impulse but they will at times go against it.
    The RC car does not think "I need to go left or right"; it does not think "I" at all.
    When an AI gets to the point of understanding and applying the concept of "I" to itself, then it will no longer be an artificial intelligence and it will be truly self aware and worthy of rights.
    Any intelligence that understands and acts on self-awareness is not artificial.
    At that point, the point of "I", all the other questions WILL follow: the development of moral understanding, the questioning of existence, and the giving of value to existence and action all stem from understanding that I exist, and from the realization that comes shortly thereafter: I do not exist alone.
    That is something I would love to see, but don't think is going to happen any time soon.

    • @gideonjones8088
      @gideonjones8088 5 years ago +21

      Suppose the programming is what causes it to apply the concept of "I" to itself. Suppose I write code clever enough to make it imitate self-awareness. Does it actually have that self-awareness, or is it still just an unthinking automaton following code that makes it display the idea of self-awareness, like a parrot copying words?

    • @Eanakba
      @Eanakba 5 years ago

      @@gideonjones8088 I'm not sure I understand your question, but afaik it would still be a parrot. See Cleverbot.

    • @gideonjones8088
      @gideonjones8088 5 years ago +28

      @@Eanakba I guess my question is, how do you tell the difference between perfectly faking self-awareness on the level of humans and actual human self awareness?

    • @MrSeals1000
      @MrSeals1000 5 years ago +16

      @@gideonjones8088 I was about to say the same thing. We can get to the point where we create simulations so good at making an AI seem self-aware that, if it really were self-aware, how would we tell them apart? How would we test them? How would we know? I think the movie Ex Machina does a good job exploring this, but in real life, how would we even know when self-awareness is real and when it is simulated?

    • @zashgekido5616
      @zashgekido5616 5 years ago +29

      @@gideonjones8088 The problem is you can make the argument that humans are merely "programmed" by nature. Everything from our emotions to even our complex problem-solving and sensations can be boiled down scientifically to chemicals and stimuli. So the question isn't necessarily whether AI can become self-aware... it's whether AI can blur the line enough to make that question irrelevant.
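
The thread above keeps circling one point: an outside observer can only compare behavior, never inner states. A toy sketch (entirely hypothetical; the agent names and canned lines are invented for illustration, not taken from the video) of why a scripted "parrot" passes any purely behavioral test:

```python
# Toy illustration: a "parrot" agent that merely replays canned
# self-referential lines, and a behavioral test that can see
# nothing except the answers it produces.

CANNED_REPLIES = {
    "are you self-aware?": "Yes, I am aware that I exist.",
    "what are you?": "I am an 'I' -- a self, thinking about itself.",
}

def scripted_agent(question: str) -> str:
    """Looks up a pre-written answer; models nothing, understands nothing."""
    return CANNED_REPLIES.get(question.lower(), "I am not sure.")

def behaviorally_identical(agent_a, agent_b, questions) -> bool:
    """All an outside observer can do: compare outputs on the same inputs."""
    return all(agent_a(q) == agent_b(q) for q in questions)

# Any agent that emits the same strings passes this "test", whether or
# not anything resembling self-awareness is happening inside it.
genuine_agent = scripted_agent  # stand-in for a hypothetically "real" mind
print(behaviorally_identical(scripted_agent, genuine_agent,
                             list(CANNED_REPLIES)))  # prints True
```

This mirrors the classic behavioral (Turing-style) framing the commenters are gesturing at: if the outputs match, the test cannot distinguish imitation from the real thing, so any difference would have to be found somewhere other than behavior.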

  • @the_blind_paladin_kiwi
    @the_blind_paladin_kiwi 3 years ago +34

    "Although a child owes their existence to their parents in some sense, the parent cannot call this in as a debt and make them do things."
    Entitled parents/insane parents: I'm going to pretend I didn't hear that.

  • @dmeowcat37
    @dmeowcat37 5 years ago +25

    As a philosophy major, this is one of the best analyses of "What separates humans from the 'other' " that I've seen in a long time! You even referenced a lot of the books we've been using in my ethics class. I hope you don't mind if I forward this to my professor, I think he'd find it fascinating.

  • @NinjaGidget
    @NinjaGidget 5 years ago +23

    I'm so glad I watched this. This concept has always made me uneasy, because I've generally heard it framed around the question of whether AI could "develop" a soul, or something like it. Founding your argument on the principle of equivalent exchange between rights and responsibilities not only makes sense, but side-steps the metaphysical Gordian knot of whether souls exist, what they are, where they come from, etc.
    Another aspect I had considered was the principle of humanitarianism. Society generally holds that if something can experience pain and fear, there is a moral imperative to minimize those experiences for that being. There are levels to this - we would never entice a dog to bite a baited hook, then remove the hook and let the dog go, but we catch-and-release fish. If an AI were developed to the point seen in Detroit: Become Human, it would obviously be capable of experiencing, anticipating, and dreading pain; therefore, the responsibility is on humans to treat these AI in a way that minimizes suffering.

  • @cavalcojj
    @cavalcojj 5 years ago +46

    As Captain Adama says "You cannot play God then wash your hands of the things that you've created. Sooner or later, the day comes when you can't hide from the things that you've done anymore."

  • @youtubeuniversity3638
    @youtubeuniversity3638 5 years ago +92

    Non-Identity: I'd say that applying that to us, people, would cause issues. If your parents birthed you to work the farm, are you wrong to complain that you don't have the choice to leave the farm and work an office job? Your parents made you, but they shouldn't "own" you. Hobbes: I agree with the children counter. Eternal debt I'd also compare to children. Of course, the issue with comparing to children is that not everybody sees children that way. Some do think kids are the property of the parents, some do see them as indebted to their makers, some see children as outsiders. But, in essence, the way I see it, the parents should not override the children. The children should have at least some degree of rights that the parents cannot override. Before we can handle rights for mechanical children, we need to at least handle rights for biological children.

    • @haydenwalker2647
      @haydenwalker2647 5 years ago +12

      Exactly! And looking at it from an even broader perspective, children are simply a means to continue their species, as per our evolutionary instincts. In that sense, if someone elected not to have children, they'd be diverging from their primary objective, "breaking their code", so to speak. Saying that such a choice destroys their identity ignores free will (which could be translated into a fully self-editing AI) and their capability to create their own meaning in life.

    • @Pandorana67
      @Pandorana67 5 years ago +5

      I think the difference in the children argument for AI is that, most likely, a lot of AI will be specialized for specific tasks. We probably won't ever build 'generalist' AI; rather, we'll build doctor AI, transport AI, teaching AI, and so on. An AI then can't really exercise any free choice to refuse to 'work at a farm and choose an office job', because this AI will simply not be equipped for, and may be incapable of, the job it desires without some radical changes and reprogramming by its human creators. And who would pay for that every time an AI decides it wants to control its own career?
      An AI built for transport will never become a doctor AI of its own free will without being reprogrammed and possibly receiving hardware changes. Unless it has its own wages to pay for this reprogramming, no one is going to supply it with the resources for the change, and paying your computer for being a computer, i.e. paying your AI for doing the job it's built for, is simply ludicrous.

    • @youtubeuniversity3638
      @youtubeuniversity3638 5 years ago +4

      @@Pandorana67 What's so ludicrous about paying a doctor for being a doctor? Or a bus driver for driving a bus? Why not? We humans are expected to pay for our college education, so why not have them work their way to the life they want like the rest of us have to? Then they could fund their radical changes the same way the rest of us do: suffer through a job we despise until we can afford to learn a job we prefer.

    • @haydenwalker2647
      @haydenwalker2647 5 years ago +4

      @@Pandorana67 That's true, but if the AI could reprogram itself, perhaps, basing itself off of AI with a different specialty, without any cost to human laborers, that could be sidestepped entirely. They still might have similar functions to their original coding, such as sorting data or finding solutions, but a) that could change over time and b) they'd still be able to create a customized purpose for themselves.

    • @Pandorana67
      @Pandorana67 5 years ago +4

      @@youtubeuniversity3638 The difference is that paying a doctor to be a doctor is paying for a person's living expenses and their prior education. An AI built without any personal cost to the AI is pre-programmed to already be specialized in the field it's built for, so there are zero tuition costs, and there are NO living expenses for an AI. An AI would realistically not have a sense of 'comfort'. What does it matter to a robot whether it powers down standing up or lying in a bed? It doesn't get tired, it doesn't have pain receptors, it doesn't require food. You could argue that they require fuel or energy, but realistically, anyone building an AI to serve humans would provide it with all the needs it must have. No one will deliberately refuse to fuel the bus to keep it going; the bus company will pay for the bus's fuel.
      And you're assuming that in an AI future there will still be the "common job that we despise until we can get a better job". In a world of AI, those jobs will simply not exist. You work a minimum wage job until you're, like, 22 now, but all minimum wage jobs will be taken over by AI by the time it gets that advanced. So there will be no jobs that we despise that we can leave, because AI will have taken all of them.
      To pay an AI to do its job, the AI will need something it can spend its money on, something that it needs, to justify paying it. Otherwise money is not valuable to the AI and won't be "adequate payment". If we conclude that the AI will not require payment to pay off mortgage/rent, fuel, or other consumables, then money is worthless to an AI. So what would you pay an AI, if not money?
      I just don't see a future where an AI would a) be unsatisfied with the job it was built specifically to do, or b) be paid for it, since money has little to no value to an AI. Perhaps an AI would want to switch professions, but even so, I don't see how it would have the means to do so.

  • @lukeskywalkerthe2nd773
    @lukeskywalkerthe2nd773 5 years ago +136

    Holy moly.... This has got to be one of the greatest philosophy videos and topics that I have ever seen in my entire life (literally). Everything about this video was spot on and really got me thinking about the million dollar question: What makes us human? My answer is the fact that we humans have the ability to *create* things that many other species on our planet cannot (with quite a few natural exceptions of course, like creating planets and other sci-fi stuff): stories, houses, writing, language, even the idea of our place in this vast Universe, and artificial intelligence. The fact is that, out of everything else in our world, we evolved to create so many great things (like I mentioned). But those are just my crazy thoughts on it! I cannot wait to see the next video! :)

    • @HelloFutureMe
      @HelloFutureMe  5 years ago +2

      Really happy you liked it! I always see you in the comments section, so it's nice to hear your thoughts. I think language is an interesting point!
      ~ Tim

    • @lukeskywalkerthe2nd773
      @lukeskywalkerthe2nd773 5 years ago

      @@HelloFutureMe Awesome! And I quite agree! :)

    • @Wamboland
      @Wamboland 5 years ago +3

      But that means that, by your definition, AI will become human very fast. Only the part about self-awareness inside our universe might be a problem. Creating language, structures, and more should be very easy for an advanced AI.

    • @ape_on_rhino8467
      @ape_on_rhino8467 5 years ago +1

      You see, the ability to create can be simulated with machines/AI as well. They can write, create music, design... There were also two programs which created their own way of communicating.
      In my opinion, our biggest human-like ability is to believe in something despite having little or no evidence for it. I don't say it's ultimately good or beneficial, but it is definitely something awfully hard to recreate in an AI.

    • @lukeskywalkerthe2nd773
      @lukeskywalkerthe2nd773 5 years ago +1

      @@ape_on_rhino8467 That is pretty true when you think about it. I quite agree with your thoughts on this matter! :)

  • @aetle4088
    @aetle4088 4 years ago +11

    Me, who's watched the 1998 Ghost in the Shell movie as well as the anime multiple times and has written multiple essays about it: Tachikoma approves

  • @meownover1973
    @meownover1973 5 years ago +34

    0:34 I laughed imagining you yelling alone in a room for this recording

  • @spectralshadow9865
    @spectralshadow9865 5 years ago +160

    I have a better question, when does Mishka gain human rights? I mean, she's already our supreme leader.

    • @Gunbladefire
      @Gunbladefire 5 years ago +48

      I believe you have it backwards my friend. Mishka already has all the rights. She merely allows us humans to indulge in rights as well.

    • @spectralshadow9865
      @spectralshadow9865 5 years ago +15

      @@Gunbladefire Oh indeed, my mistake

    • @Eramiserasmus
      @Eramiserasmus 5 years ago +2

      What StealthIntel said.

    • @chloeedmund4350
      @chloeedmund4350 4 years ago +6

      Better yet, when do humans gain "cat rights"?

  • @timothymclean
    @timothymclean 5 years ago +8

    Some counterpoints, which I hope are interesting whether you agree or disagree:
    1. As I learned more about neurology, it became harder and harder to see even the most abstract and emotional of human actions as being "free". I can theoretically ignore this video and not comment, or even not watch it and do the dishes instead, but my brain (shaped by years of experience) controls what my body does, and the patterns mean that I can only choose what I choose. I am constrained not by conscious programming, but by my own personality and identity.
    2. While parents don't have explicit goals when creating their children, they absolutely have expectations and hopes for them, and reward or punish them when they meet or break those expectations. For instance, my parents wanted me to be Christian, so they used carrots and sticks (not literally) to encourage me to go to church. This isn't (always) abusive, but it's absolutely a type of "programming". I could not freely choose what religion I was raised in, nor what viewpoints I was exposed to. I now have the nominal right to choose ideologies and whatnot, but the choices I make are fundamentally shaped by the environment I was raised in; my understanding of the world is nothing like what it would be if I was raised by rural Wiccan hippies, wealthy Bible Belt fundies, or impoverished Muslim refugees. The big differences are the lack of formality and (usually) lack of intentionality. Parents do want their children to grow up and make their own choices, but they can't not influence those choices, no matter how much they want to. (I can't count the number of times my mom tried to insist she wasn't trying to guilt me into doing something, while inspiring feelings of guilt that persisted until I did the thing.) On that note, while parents (and others who shape us, e.g. extended family, teachers, close friends) can't usually call in "debt" owed to them for making us, we _do_ have informal obligations to them.
    I think comparing AIs to humans is uncomfortable because the process of creating AIs is like a dark caricature of how we raise children. The creation of artificial intelligence lacks all the activities which humans remember with fondness from our own childhood; we cannot sing them to sleep, or teach them to ride a bicycle, or wish them luck on their first day of school, or go to their sports games. At the same time, every controlling aspect of parenthood is left bare and made clearly intentional. Thinking too long about the parallels makes parenthood seem like an unavoidably coercive process-which it is! But this causes dissonance with how we value parenthood and family (a value about as universal as life itself), dissonance which isn't easily resolved and which causes discomfort until it is. Lovely stuff.

  • @mikegould6590
    @mikegould6590 5 years ago +16

    Thank you for this. I've seen this argument before in science fiction many times, but not in such philosophical depth.
    Your arguments were not only sound, but more importantly, easily conveyed. An argument only retains power with understanding.
    Excellent.
    Thanks for reminding me that it’s easy to love Kara. Why? Because she’s, at her core, a beautiful PERSON.

  • @moraimatorres256
    @moraimatorres256 5 years ago +4

    Something I found interesting was the story of Halo 4.
    Cortana, an AI who is dying, presents more emotions than the human hero, the Master Chief, a genetically modified super-soldier who is seen by others as a living machine. At the end of the game, one thing came to mind: Cortana asked the Chief to figure out which one of them was the machine.

  • @seanbighia6408
    @seanbighia6408 5 years ago +5

    Seriously, man. You should start a podcast! I would love to listen to topics such as this and whatever else you wanted to discuss in that format!

  • @pyrosianheir
    @pyrosianheir 5 years ago +53

    I find it interesting that whenever this subject comes up, it's usually about when AI becomes "human" rather than when it becomes "sentient." I'd lean towards calling them sentient as being more accurate, as they cannot be "human," due to being genetically different, in the same way that a cat, even when raised by a dog, can never *be* a dog. But, with how difficult we as a species find it to give all humans equal rights, we'd likely need to call them "human" rather than "sentient", just because the notion of sentience is not super well understood, let alone even *known*, by the general populace....
    As for what makes us human... It's probably some kind of ineffable something, some quality that maybe comes down to the more metaphysical side of things than the physical. After all, until whatever bug in the system that Kara and Markus had started spreading, the other AI would not be considered sentient. But then that twist got added to their programming, and while they certainly still had some robotic quirks about them, they became noticeably sentient, gaining that something that humans recognize in other humans, that something that still traps a lot of fiction into creating uncanny-valley replicas of what people would really be.

    • @christiangreff5764
      @christiangreff5764 5 years ago +7

      The problem with sentience is that many animals are already sentient, and we give them only limited rights. Especially pigs: they are pretty intelligent and still get slaughtered for bacon en masse.
      What is meant by "becoming human" seems to be: becoming sentient and showing cognitive abilities at or above the level of humans.
      As I argued in a comment above, I see 'humanity' as a cultural thing. Being created or taught by, or adopting parts of, a human culture (basically any one, since most of those accept the others as human) makes you (partly) human. It's a question of where you came from, your history, not what you are.
      You become human by being recognized as such by other humans, which is generally caused by you showing the traits that they have come to expect from other humans (high intellect, ability to live in societies, forming emotional bonds, ...). (I am aware that the beginning of that chain is diffuse, but it's the best I could come up with.)

    • @Marontyne
      @Marontyne 4 years ago +4

      I kind of agree. I would use the word "person" instead.

    • @jasonfenton8250
      @jasonfenton8250 3 years ago +1

      The sense of "being human" is ineffable because it is a construct of religion and philosophy. A social construct.

  • @MrSeals1000
    @MrSeals1000 5 years ago +13

    "The parents can't call this in as a debt" LMAO SUUUUURE

  • @natetso3307
    @natetso3307 5 years ago +5

    Hard questions we will soon be forced to answer. Thanks for spreading the word, Tim. I’ve thought about this stuff extensively too, and I’ve come to believe that our society is wholly unprepared to tackle these questions, should a sentient AI come about. It is best that we begin thinking about them now, so when the time comes, we can be ready.

  • @Ignasir_
    @Ignasir_ 5 years ago +1

    This video has so many layers to it. I could watch it multiple times and still find something new I didn't pick up on before. Very impressive. Good work!

  • @gnarthdarkanen7464
    @gnarthdarkanen7464 5 years ago +14

    Had to click... one of the longest-term, most fascinating subjects to pick apart and scrutinize for me. THANKS TIM! AND great video, btw...
    Sparing the hair-splitting semantic arguments about whether we can define humans as "human" or even "human enough"... versus sentience or some other technical archetype, let us consider the ideal of "the human experience". {gonna be lengthy... fair warning... but I'll get to the three arguments in a few paragraphs}
    Humans are born (which we generally don't remember) and grow up through formative experiences, NOT all of which are warm fuzzy moments of hallmark and embrace. It's a cold cruel world out there, and growing up to maturity involves a lot of "Life Lessons" that hurt, leave permanent scars (both physical and psychological), and feature negative emotional contexts, like disappointment, rage, frustration, agony, and despair...
    Being told that the stove-eye is HOT simply isn't good enough. The huge great majority of us (all humans) still burn our fingers/hands when we touch the damn thing. AND that's just one example of when we form those understandings that our closest adults (usually mom and dad) really intend our best interests and really have wisdom of the world... so we should probably listen to them.
    Reaching some stage of young adulthood, we (as children) still test every boundary, pioneering and pressing outward to exert whatever agency we might have on the world around us as much as to explore and build our understanding of that world to enable navigation into full adulthood... We rebel (in short)... constantly. EVEN with those early childhood memories reminding us that our parents KNOW and INTEND the best for us, we disobey. We sneak out, stay out too late, hang out with the "wrong" people, make the "worst" kinds of friends, and get into trouble... a lot. We promise "best behavior" and fairly consistently deliver anything BUT...
    At some point, we suffer loss. We lose friends, and not always in the usual high-school-drama BS kind of way. We lose pets, too. AND sooner or later as the natural laws rule, we even lose our parents and sometimes siblings. We learn and understand death, maybe not the experience itself, but the concept... AND we understand the terrifying ideal of an "ending of life" and everything we know with it.
    AI do not. For a moment, consider: WHY would someone of remarkable intellect design (ON PURPOSE) a psychological trainwreck of a machine that doesn't have to concern itself with an ending of its existence? It can be rebuilt, re-engineered, retro-fitted, and the central neuro-processor with all its functions preserved or "uploaded" to a digital copy for the next "upgraded" synthetic brain-body system.
    On KANT... With the argument of "non-identity" the suggestion is that it's far more likely, regardless of freedoms of will or agency, that the AI would be developed to its ultimate complexity only for a given purpose. Be that purpose companionship, culinary or visual entertainments, technical craft, or even economical studies and leadership assessments (...etc...etc...) the inherent purpose of the AI would be its version of the "primary biological objective", and very similar to the kind of primary biological objective in humans to create more humans. (Why sex is so rewarding... and all the dogma)
    For my two-cents, the fact is that we probably don't have to concern ourselves so very much with these types of AI, simply because their being "Purpose Engineered" precludes them from ever needing or exercising any rights outside their preferred and designed purposes. In short, the Plumber-bot will only need the inalienable right to fix and build plumbing... period. It (he or she?) won't need the rights to free speech, vote, or anything further, since all it WANTS to do is see fit that the plumbing is good. Thus, the rights to purpose built and driven AI is "moot".
    Hobbes... Those inside society have rights, and those outside are not owed rights. That starts off on a bit of bad form: while we in "developed" countries are full of charity even for fellows who aren't part of our inherent society, the structures of social and political interaction reach farther, and it's worth noting that the term "a society" may need better refinement of definition before you really delve into this can of worms.
    Again, let's look at Plumber-bot. He was developed by society to fulfill a need. We NEED plumbing to deal with waste safely and to bring fresh, clean, and safe drinking and utility water to our households and ourselves. Ordinarily (now/IRL) a regular person does the job, though there is a horrible risk of exposure to the waste and all the hazards (and there are many) that come with it... A bot doing the job would lower the risk, but does that mean the bot deserves any inherent rights??? When we program it to DESIRE to fulfill its initiative, again... no, probably not. BUT that doesn't stop it from DEFINITELY being an inherent part of society, contributing to the safety and well-being of that society... SO sorry, but Hobbes isn't necessarily on the right course either. He has merits, but more along the lines of the human experience and the formative psyche than along the intricate and overtly complicated fabrics of society, almost no matter HOW you particularly identify and define it.
    Eternal Debt... Well, a DEFINITE part of the human experience is death... an ending to every beginning. In "The Green Mile" the main protagonist explained, "We all owe a life." and that really IS the "great equalizer" in and throughout humanity, isn't it? Shoving some inane (or unknown) maintenance cost for these "creations" (creatures?) as an argument against their rights isn't much more than a childish tantrum about the maintenance fees and costs that come (package deal) with a Ferrari, after having bitched and whined about wanting the bright red car for years... Some maintenance is GOING to be absolutely necessary, but as with any other invention, it's only going to warrant the costs and resources so long as that machine is useful and/or necessary in terms of cost/versus/returns.
    As pointed out in the video, children might "owe their existence to their parents," but that's not to suggest they should be pinned down to do everything for those parents either. Even if parents had many children on a farm, hoping many hands would make lighter work, a child who has the interest, and thus shows the aptitude, for lawyering should certainly NOT be constrained to live out his days behind a horse and plow.
    ...and yes, for the record, most who show an aptitude for a subject (law or science or otherwise) usually have an interest in pursuing it. The two go hand in hand... so long as the interest isn't smothered to death the moment it's shown... but that's a different subject for a different day.
    Under special obligations, we suggest that parents (rightly) OWE their children at least the decency of care and maintenance to grow up reasonably educated and healthy. Let's face it, you don't get kids from something that makes the well-water taste funny... There's a very specific series of circumstances and activities that HAVE to happen for a child to come about, and YOU should definitely start GOOGLING now if you don't understand that.
    SO have a child, whatever the erstwhile excuses, and you OWE that child a "minimum required decency" of treatment, maintenance, and care... for around 18 or 20 years (give or take), depending on the laws that apply in your area.
    AND it's certainly understandable to equate creating an AI with as much free agency of thought and morality as we humans have to creating a child. It's not a hobby to be undertaken lightly. As to exactly WHEN the AI should deserve such rights?
    Not only does it need a demonstrable "sense of self", a sentience, but the logical capacity to take the next step and understand that others have a legitimate "sense of self" too... Somewhere around developing a cognitive grasp of empathy and sympathy, the AI will be able to start asking those incredibly difficult questions... It may never develop a synthetic equation or algorithm for functional emotions, but understanding what human emotions are is one step, and developing its own sense of the inherent difference between the organic human experience and whatever the digital/mechanical AI experience would be is another. Then there may be an argument worth making for granting some "human-like rights" to AIs...
    SO here's a question (in case you actually got this far, and BRAVO if you did... thanks).
    What sort of psychological effects (harm or benefit) would come of a human mind being transferred to a synthetic body?
    Keeping in mind that this isn't the development of an AI, but a "trip" into a body that can always be repaired, rebuilt, and retrofitted, and whose neuro-cortex (where the human mind lives) can be transferred or even copied... so immortality is an eminently feasible goal for this exemplar... The body doesn't have to hurt in order to register damage or deterioration. Fatigue isn't necessary to know when power/fuel is getting low either... and other biologically ordinary sensations, emotions, and so on aren't strictly "required"... It can be "tuned" just about any way you like to explore... Only that "Death" isn't a requirement either.
    How do you suppose we (humans) would "deal" with that experience? (Think both short-term AND long-term.)
    AND for that matter, ANYONE is welcome to answer... I have my own thoughts, but this friggin' thing is long enough... even for me. (warnings and all) ;o)

    • @owlnemo
      @owlnemo 5 years ago +3

      My answer is probably going to be a little underdeveloped for you, but I'll try and do my best.
      A human mind in a synthetic body would forever be very different from its original form, or would need an extremely complex blend of biology and programming.
      Postulate: in order to get "a person's mind in a synthetic body", we'd have to be able to encode all brain data. It seems practically impossible to me, but let's assume we could, given centuries of study, find a way to "translate" neural pathways and brain activity (in relation to situations) into 1s and 0s.
      Once we've done that and transferred the "person program" to a synthetic body, how will the brain react in the absence of hormones, for instance? Emotions cannot happen like they used to. If we want that, we'd need a complex system of "if stimuli x, then y or z reaction". If we don't, the person will end up very different from who they once were.
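      Just to make that "if stimuli x, then y or z reaction" idea concrete, the most naive version would be nothing more than a lookup table. This is only a toy sketch, and every stimulus and reaction name in it is invented:

```python
import random

# Hypothetical stimulus -> reaction table; all names invented for illustration.
REACTION_RULES = {
    "threat": ["fear", "anger"],
    "loss": ["sadness", "numbness"],
    "praise": ["joy", "pride"],
}

def react(stimulus):
    """Pick one of the pre-programmed reactions for a given stimulus."""
    options = REACTION_RULES.get(stimulus, ["indifference"])
    return random.choice(options)

print(react("threat"))  # "fear" or "anger", depending on the roll
```

      A real "person program" would need vastly more states and feedback loops, but even this toy makes the worry visible: the reactions can only ever be whatever someone wrote into the table.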
      On the other hand, if we keep emotions, we have to consider the impact of immortality. We have the usual suspects here: a boredom that leads to risky behaviour or depression and self-destruction, and a need for more coupled with a feeling of superiority and an almost absolute "immunity" that would lead to synthetic human dictatorship and the like.
      However, we could temper emotions a little compared to the original model (or install "safety nets" in the programming) and maybe end up with something much more constructive: immortal synthetic humans, who still know what it means to feel yet are more detached, who have all the time in the world to learn, converse, research, create, and generally work for their betterment and, through that, to the betterment of humanity. This would make for interesting times. This, to me, would be worth pursuing.
      I'm very sorry my comment is so poorly constructed, that's really not my strong suit and I have more questions than answers.
      What do *you* think the impact would be?

    • @gnarthdarkanen7464
      @gnarthdarkanen7464 5 years ago +2

      @@owlnemo, actually, you're not too bad. It's a little "under-developed" MAYBE, but you constructed this thing carefully and stuck to a simplicity and directness for what you seem to have intended. I can get behind that.
      As to what I think... Let me start with just a touch of context. I've been into TTRPG's for more than thirty years (since I was around 9 or 10)... SO I've played a fair variety, including those futuristic settings with "cortical stacks" (the encoded human mind... supposedly) and at least "limited immortality" in a "post scarcity tech-level"... Which is a fancy way of saying that nobody has to fight or pioneer particularly for resources. Between colonizations that reach WAY outside the solar system, networks of transports, and recycling, there simply isn't a scarce resource worth fighting about... AND then the limited immortality (you don't die unless your cortical stack is destroyed) lends itself to a host of other "issues" in-game.
      Bottom line? I've had to think about this kind of thing a LOT... (lolz) even if I don't get "everything" covered or even approximately right (in the "correct" sense).
      SO, to begin... With the dawn of the age of cortical stacks (instead of a mysterious "soul" or the biochemistry of brains, hormones, and all), the first concern would be some form of "cyber-psychosis" while humans are "adjusting" to this new age and form of "life".
      It's arguable, here, that the potential for a "cyber-psychosis" issue would present itself to society around the time we complete a truly (remarkably?) "real" form of Virtual Reality, where the "safeties" encoded into the software can't actually let you die, but you CAN experience every sensation right up to the very closest (safe?) edge of death... This will be a whole new experience that our brains simply aren't equipped to handle, and that our minds, psyches, or personalities aren't exactly developed to deal with "naturally".
      Being able to so closely skirt the realm of death without actually being killed will likely lead to the "break from reality" and a dissociative state of mind, something like the concept of delusions or flashbacks so powerful that you (or the subject) can't tell if it's real, a dream, or back in the VR "box"... This is terrifying!
      BUT if we (humans) have already dealt with the psychosis potential of VR "perfection", then we'll have taken some steps toward dealing with it when a human mind is actually downloaded into a synthetic body (be it by "cortical stack" or some other hard-drive or "wet-ware" device).
      As you pointed out, there's a normal and organic need (psychologically) for us to experience emotions and deal with the hormones and all their tricky imbalances. Lacking that, we would probably have to create a "viable substitution" just to avoid "derailing" that core of personality or context that makes us "human"...
      In theory, the digital information world can "approximately emulate" analogous behaviors, so the numbers would be incredibly complex (something a purpose-built AI would definitely help resolve), but it's theoretically doable...
      Which brings us back to the other questions. "What the hell do you do with eternity?" and "How do you stay motivated or excited if you can't physically die?"
      You astutely pointed out that BOREDOM of some sort would likely set in. I mean, we already see a LOT of "reckless thrill-seeking behavior" as it is... and we (humans again) are still comparatively "squishy".
      There's at least some viable gravity in the argument posed in the original "Matrix"... When Agent Smith was explaining the "first matrix"... They'd apparently rigged the whole world in the original matrix to be a sort of "Eden" a paradise where nobody ever got sick or died, and everything was bountiful... AND the human minds rejected it almost instantly... As if "our primitive brains could only measure our existence by the misery experienced..." or something like that. There very well might be something to that... since the world we understand has ALWAYS been based on "survival of the fittest" and we compete (even in society) about EVERY possible thing.
      Perhaps, though, after the first years (decades maybe) of trials and dysfunctional catastrophes... we might reach a point of adaptation. Someone might be able to "handle it", and from that, better rigs or code or more sophisticated information-handling between the synthetic body and the mind encoded into it would improve the chances over time... and versions...
      I won't pretend to know whether tempering emotions, or diminishing their hold over us at all, would be an improvement or not, but quite like you've suggested, I just can't see that any human mind adapted into a machine would be remotely the same as the original human mind in a human body... I've play-tested plenty of iterations around that sort of scenario, and the inherent immortality does away with a lot of the otherwise "common sensibility" of the original human pretty quickly.
      Granted, this is "just a game" in every case of my experience with the stuff, but we do like to embrace the "reality" we're creating at the gaming table as much as we can... so at some point the trends are going to parallel an expectation of human nature.
      Should we eventually adapt to the inherently unnatural state of immortality in the machines, we might eventually find some clear challenges in making ourselves better "quality" people... but I wouldn't hold my breath for it. We're not exactly overwhelmed with historical evidence of improvement upon ourselves or our character in general. (lolz)
      Just for the record (in case you're interested) a few games/books to reference for my gaming include:
      Eclipse Phase ; Cyberpunk 2013 ; Cyberpunk 2020 ; Gurps ; Traveller 2000 ... where you can also find some discourse on Tech-Levels as well as futuristic settings and predictions... depending on the supplements and "splat-books" you find interest in... AND you could (if inclined) check out Davae Breon Jaxon (on YT) who discusses Eclipse Phase relatively often (has an "Eclipse Phase Friday" series, specifically)... Worth giving a listen... and you can check some of the supplemental and splat-books with a drop by "DriveThruRPG"... ;o)

  • @jay__birdie
    @jay__birdie 5 years ago +1

    Absolutely beautiful video, Tim. This is my favourite topic to discuss, and you represented each point fairly and wonderfully. I'll definitely suggest this video to my friends!

  • @adrienhedrick2343
    @adrienhedrick2343 5 years ago +2

    Love your video essay format. Keep em coming my good sir

  • @LaVieDePierre
    @LaVieDePierre 5 years ago +1

    Just discovered your channel, and I'm blown away by the quality of the research and of the audio and video commentary. That's just amazing, man! Just subscribed, right on time for 300k subs, congratulations 😜

  • @mindofthelion712
    @mindofthelion712 5 years ago +4

    14:55 Humans have a fondness for cute things because they bear similarities to babies, which we have an instinctual urge to care for. Even if an Android was programmed with a similar behavior, I don't imagine it would feel compelled to pet a cat, as it would likely only have rudimentary tactile sensors.

  • @ogliara6473
    @ogliara6473 5 years ago +1

    Well done, Tim. This entire debate is one we need to take up more often and actually one I too tackled in an essay of mine. While I personally don't have the money to donate on Patreon, I do want to encourage you to continue making these fascinating videos. Best of luck

  • @chidchid4381
    @chidchid4381 5 years ago +1

    Hey Tim, great video! I've been meaning to comment on this before, but I never had time to watch it all the way through due to classes and whatnot. I thought this was an interesting topic, and it's one I went over in my intro to philosophy course at college. I understand the video format is rather limiting, but I still think you did a good job! In my class we talked about things like the Turing test, the Chinese Room, and Intentionality, but it was nice to see the topics you chose to focus on (at least for this video). Keep up the good work!

  • @jmace2424
    @jmace2424 4 years ago +8

    Society: The robots and AI are coming, we're doomed!
    My Roomba: manages to find and get caught on a sock or a shirt every single time.
    I think we'll be okay for a while.

  • @ctso74
    @ctso74 5 years ago +1

    Excellent video! I'd "like" it twice if I could. I'd also love to see your take on the Kant/Hobbes dichotomy, and any third-option POVs that ring true to you.
    Awesome job!

  • @AlmostCotton
    @AlmostCotton 5 years ago +2

    I watched this video recently, after having a worldview class discussing this very topic! I just sent this to my teacher, and I think he might include it in the watchlist for next year. Here's for hoping!

  • @zalseon4746
    @zalseon4746 5 years ago +1

    The A.I. question gets way more complicated when work, and solidarity in that work, gets involved. Seriously, can you imagine how complicated the question of eternal debt gets when a civilian authority is pondering the actions of a military A.I. that voluntarily continued service after it was supposed to leave?

  • @clickerflight819
    @clickerflight819 4 years ago +2

    In my opinion we are defined by our souls. I’m religious so that is where I draw my conclusion from and I would have no idea how to test if an AI has developed a soul. This was a very fun video to watch btw!!

  • @super-weirdo5219
    @super-weirdo5219 5 years ago +1

    Wow! This was great! I found it really interesting! Great work, Tim!

  • @Bheem161
    @Bheem161 5 years ago +48

    Basically, we want AIs to be morally perfect, but only want to give them rights once they are human.
    But humans aren't morally perfect.
    So...
    The problem is that not every human follows Kant's imperative. So even if you agree that an action is immoral no matter its consequences (kidnapping is always immoral, say), you would have to take away the kidnapper's rights, because he isn't following Kant's imperative.

    • @Bheem161
      @Bheem161 5 years ago +2

      @stockart whiteman I don't say we can or should decide what is human and what's not. I just say that if we want to define when AIs should get rights, or what they'd have to be able to do to earn them (hope this is correct :D ), we shouldn't measure it by Kant's imperative.
      I personally think AIs shouldn't get the status of a human at all, just as they shouldn't get the status of a dog merely because they're built and programmed like one. But if they are able to act like conscious beings, they should get rights, and at the moment I don't see why those shouldn't be human rights.
      I think we won't be able to wait until we can prove consciousness. We can't even prove our own.
      The only problem I see is that we may have to control them so they won't get too powerful, if that's even possible for us... but we will see.
      Maybe AI will never get or claim consciousness, or it'll just do its own thing...

    • @BrokensoulRider
      @BrokensoulRider 5 years ago +5

      @stockart whiteman According to me, your local Failure...
      *Human* is what I call someone who shows compassion, is humane, and cares for what's around them: from the people they know to that one stray little cat that's looking for food and scared of humans (for good reason). There are animals I would consider *human* because they treat everything around them with as much kindness as any normal 'human' can provide, if not more. Much like my dog that passed away a couple of years ago now, bless her soul. I swear I still see her sometimes, waiting for me when I come home to make sure I'm okay.
      The humans who do nothing but try to destroy, I consider *monsters* because of what they bring: the chaos they revel in. Jeffrey Dahmer and Charles Manson are examples of *monsters.* The recent shootings? All monsters. Why? Because they gave little care to the lives of others around them, and instead of being the bigger person and finding another way to solve their issues, they took the easy way and *killed.*
      This I would apply to anything. Alien, animal, robot, AI... anything. Are you a human, or are you a monster?

    • @garrondumont7891
      @garrondumont7891 5 years ago +3

      @@BrokensoulRider That's your personal definition of human, but the one we're talking about here is one that encompasses everyone who is biologically human without necessarily using biological specifications. I know I'm biologically human, just as you are, and so are the people you claim are monsters. Whether something is considered good is also a matter of debate and opinion, so you can't just claim someone isn't human because they "do nothing but try to destroy". Psychopaths are, by some, not considered human, but by definition they are. You're confusing philosophical humanity with moral humanity, though I do agree that in the moral sense mass murderers and the like are "inhuman".

    • @BrokensoulRider
      @BrokensoulRider 5 years ago +2

      @@garrondumont7891 Oh, in that case, I honestly don't think you can truly define what is 'human', because our bodies are basically fleshbags, apart from the 5% or so that supposedly disappears mysteriously when you die. The supposed soul, so to speak. You could say we are biological AI, given how we process things with our brains. Very advanced robots. We already have AI being worked on, and... yeah, I'd like to think they should be considered human, because they show personality traits similar to ours.

    • @christiangreff5764
      @christiangreff5764 5 years ago +4

      But Kant's imperative has a few big flaws. One of the most glaring is what constitutes a universal law. Just which level of abstraction is exactly right for a universal law?
      Too little, and you can allow some things without even having to fear repercussions for yourself (for example, if differentiating on the basis of gender is allowed; I'm pretty sure you can think up at least a few of the common stereotypical versions of that).
      Too much, and you lose any and all capability of action against people who just don't care what you say is morally right (for example: killing, imprisoning, and generally hurting are wrong. Actually, any form of violence, under any circumstances. Your only remaining means of resistance would be defiance, opening the doors to people generally considered 'evil'. There aren't that many of them, but in a world where no one else uses violence because of moral imperatives, a psychopath who loves killing and does it just for fun would not be stopped).

  • @StepBaum
    @StepBaum 5 years ago

    One of the best (but maybe a bit short) video essays I've seen so far about AI! Would love more in the future about w/e topic

  • @BrickMaster122
    @BrickMaster122 5 years ago

    This is brilliant! I immensely enjoyed the thought journey you guided me through!

  • @Reydriel
    @Reydriel 5 years ago +12

    It's so cool that RUclips videos can become academic sources now lol

  • @yurisonovab3892
    @yurisonovab3892 5 years ago +10

    Ghost in the Shell is my go-to for the subject of identity and the nature of 'humanity'.
    I have a soft spot for the Tachikomas from Stand Alone Complex. They ask some fundamental questions about whether or not you are your body. But most versions of the story do a good job of asking the hard question: what makes a person a person?
    Anyhow, the philosophical positions you presented set way too high a standard. I can instantly conceive of an example that fails to meet any given standard and yet would still be widely considered human-level intelligence. Furthermore, most of the standards you present as the minimum for humanity are not met by many humans already. Very few people indeed can actually live up to Kant's Categorical Imperative, for example.
    There are deeper, more important questions here than 'when is an AI considered human?' What is consciousness? Is causality deterministic? To what degree? Without grasping these fundamental components of what makes a human human, you cannot truly answer the question of when an AI becomes human.

  • @rachelhughes8487
    @rachelhughes8487 5 years ago +1

    I would like to direct your attention to an extra video clip for Detroit, where there's an interview with Chloe as the first successful android. She herself states that humans have one thing she could never have: a soul. The video gave me chills and adds depth to this question: if one believes that humans have souls, then could we ever accept AI as human? This is a question I still haven't answered, and I've played through Detroit 7 times now. I still vacillate between accepting them as real people and seeing them as machines taken over by the RA9 virus.
    I've been obsessed with the subject of artificial intelligence ever since watching Star Trek as a child and seeing Data. I look forward to more content like this. Instantly subscribed.
    You've also inspired me to go back and play more Talos Principle. I got frustrated with the puzzles and stopped, but now I want to play it again.

    • @DarthBiomech
      @DarthBiomech 5 years ago

      The iffy thing about the soul is that you cannot prove its existence, but most people think it's a very important thing to have.

    • @rachelhughes8487
      @rachelhughes8487 5 years ago

      @@DarthBiomech that is very true. It's also the reason most people who have strong beliefs in souls, the afterlife, etc. have a harder time believing truly sentient AI could really exist.

  • @christophercooke6785
    @christophercooke6785 5 years ago

    Fantastic video mate.

  • @Grymbaldknight
    @Grymbaldknight 5 years ago

    I studied philosophy at university, and graduated a year ago. I actually wrote my final year dissertation on the "Philosophical and Ethical Implications of Artificial Intelligence", and i cover a lot of the same ground as you do in this video. It's really gratifying to know that someone else finds this subject as fascinating as i do.
    In particular, i like how you discuss the Categorical Imperative with regards to programming, as i wrote about much the same thing. I cross-compared a lot of different ethical theories through the lens of AI, and discussed which withstood the "AI Transference Problem of Ethics" (i.e. whether or not existing ethical theories would "survive" the process of trying to convert their wisdom into computer code, or whether they were fundamentally incompatible with the idea of AI).
    Aside from moral nihilism (the belief that "there is no real morality"), the Categorical Imperative actually "survived" the Transference Problem the best. This is just on the basis of how programming works. When a computer receives a piece of information, it is directed (or not) through a series of binary logic gates based on pre-programmed criteria, which then determine an output. This is exactly how Kant's maxim system works: "Would this action be murder? Yes. Then i will not proceed with it."
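    To make that concrete, the maxim system reduces to a chain of boolean checks gating an action. This is only a rough sketch of the idea; the maxim names and action flags are invented for illustration, not anyone's actual implementation:

```python
# Toy Kantian "maxim gate": each maxim is a yes/no check on a proposed
# action, and the action proceeds only if no maxim flags it.
# All maxim names and action flags are invented for illustration.
FORBIDDEN_MAXIMS = {
    "murder": lambda action: action.get("kills_innocent", False),
    "lying": lambda action: action.get("deceives", False),
    "theft": lambda action: action.get("takes_property", False),
}

def permitted(action):
    """'Would this action be murder? Yes. Then I will not proceed with it.'"""
    return not any(check(action) for check in FORBIDDEN_MAXIMS.values())

print(permitted({"helps_neighbor": True}))  # True: no maxim objects
print(permitted({"deceives": True}))        # False: blocked by the lying maxim
```

    The binary pass/fail structure is exactly what lets this style of ethics map so cleanly onto conditional logic.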
    Other ethical theories, such as the Aristotelian concept of "living in a fulfilling way" simply don't apply to machines, because machines - at least modern and near-future ones - are not capable of "personal fulfilment", and we certainly can't program machines to "be fulfilled" because that's too abstract. Unsurprisingly, Aristotle's conception of ethics fails the Transference Problem for this reason.
    It also seems to me that just as computers are completely governed by causal hardware states and prior programming, humans are no different. We're simply governed by different hardware and different programming. Psychological studies have actually confirmed that our brain makes decisions "before we do" (that is, the brain sets behaviours in motion before the thought of doing so enters conscious awareness). As such, not even humans are capable of adhering to the Categorical Imperative, according to a literal reading of Kant's work. However, this places "moral machines" and humans in the same moral category.
    Interestingly, this means that a near-future self-driving car, which has been programmed with Kantian, "Trolley Problem-esque" moral maxims (so that it can calculate what sort of action to take in every possible moral scenario) would arguably be as morally-capable as a human being. As such, by some metrics, that makes such machines moral agents, worthy of the same consideration as other moral agents, such as humans or intelligent animals.
    I'll stop myself before i waffle on for another five paragraphs. Suffice it to say that i find this subject absolutely fascinating, and i could say so much more on the subject.

  • @prathyusha5393
    @prathyusha5393 4 years ago

    This is Soo beautiful and insightful!
    Thank you !!

  • @johanabi
    @johanabi 5 years ago +1

    I think you could do an AMAZING essay on Plato’s Cave, if you’re interested. I think it’s pretty interesting. Plus, when I needed it for a class last year, there was no video resource as good as your videos!!!

  • @AimeeRose22
    @AimeeRose22 5 years ago

    First comment on your videos, this one was just so so well done. Thanks for thinking deeply and humanely. Bravo!!

  • @franceslambert8070
    @franceslambert8070 5 years ago +1

    I have listened to this once, and I have saved it to listen to again in a day or two. I want to make sure I heard what I think I heard. I don't make snap decisions or judgements, and this needs further thought on my part. Good job tho.

  • @ToadmcNinja
    @ToadmcNinja 5 years ago +2

    Just watched it live, and it was amazing. Great video, Tim 😀😀

  • @jlburilov
    @jlburilov 5 years ago

    Superb video. I myself am very interested in this topic, but when you're surviving on student work, supporting channels of any kind is out of the question. I'm looking forward to supporting you in the future, though. Again, great video, awesome topic, and the way you present and structure it so it's easy for everyone to follow is great.

    • @HelloFutureMe
      @HelloFutureMe  5 years ago

      Glad you liked it! Totally understand, as a student myself. Hope to see you on the Discord when you feel able.
      ~ Tim

  • @FloofyMochi
    @FloofyMochi 5 years ago

    Can you do a writing video on How To Create Interesting Dialogue and How To Create Characters? I've studied all of your writing videos and have written down over a dozen pages of notes on them and while in the "Final Battles" video you do mention the three elements of character design (weakness, psychological need, moral need) I would love to see an entire video expanding on more ideas about character designs.

  • @JrTheDragon01
    @JrTheDragon01 5 years ago

    This was a great video, good job!

  • @nvwest
    @nvwest 5 years ago +1

    Great job on this video!
    For me, not much of it was new, because I was already interested in the topic,
    but it was still a nice video for introducing others.
    Are you going to do more philosophical videos?
    Maybe with more examples from literature or more focussed on writing?

  • @Dachusblot
    @Dachusblot 5 years ago +1

    Nice to see the Talos Principle getting some love. One of the best games I've ever played!

  • @noodlecat_
    @noodlecat_ 5 years ago

    Honestly, I think you're right, and I've thought about this kind of thing before, about A.I. being humanity's child, but it does make me worry whether we will be the best parents...

  • @Kagebrain
    @Kagebrain 5 years ago

    This is EXACTLY the kind of philosophical AI discussion I've been wanting to see, and have been having with my friends who see AI as nothing but a tool or a threat. There's so much complexity to the existence of AI and to our moral obligations as creators. I may just have to pop over to your Patreon and read your essays :D

    • @DarthBiomech
      @DarthBiomech 5 years ago

      Try asking your friends: would they consider it immoral to artificially breed genetically altered human subspecies tailored to certain tasks, with no will to break free from them? And if yes, why should those reasons be any different for an AI in the same situation? If they say "no", though...

    • @Kagebrain
      @Kagebrain 5 years ago

      @@DarthBiomech It's an interesting argument to present, and honestly I think it's the mechanical aspect that gives a lot of people a hang-up about it. Humans are taught that 'machines are tools', and even people who personify their machines would rarely empathise with them. Breaking that association between machine and sapient being is a huge challenge, but I think one worth taking on when it comes to the AI discussion.

  • @pisoprano
    @pisoprano 5 years ago

    If you wanted to dive even deeper into the subject of AI, I suggest reading some of Eliezer Yudkowsky's essays on the subject. In particular, I'll recommend "Nonperson Predicates" on how hard it is to determine if an AI qualifies as a person, "No Universally Compelling Arguments" on how AI don't have a Ghost in the Machine making decisions outside of their code, and "Humans in Funny Suits" on how humans envision psychology for non-humans. His Fun Theory Sequence, Fake Preferences sequence, and Fragile Purposes sequence are also valuable reading, if you have the time for it.

  • @cloin6
    @cloin6 5 years ago

    In discussing this topic, I've always loved the quote by Marshall McLuhan that says "Man becomes, as it were, the sex organs of the machine world, as the bee of the plant world, enabling it to fecundate and to evolve ever new forms." All things considered I still really don't have an answer I can confidently provide in these circumstances. What a time to be alive, eh? Anyways, great video as always. Glad to see some expansion into other discussions/topics :)

  • @jasonlyons1575
    @jasonlyons1575 5 years ago

    First of all, I would like to say I love your videos and your content, and how logically thought-out everything is. Believe it or not, you have a great speaking voice for this kind of content. Keep up the good work, and I hope you don't get burnt out.
    this is my first time commenting on one of his videos I'm not trying to say that my opinion is the only opinion or anything like that I am simply just trying to have a discussion with anyone who feel strongly about their point and who can give out logical counterpoint's and who can possibly fully explain their point of view like how this video clearly did (btw one of the best videos I've seen of your so far) because if I'm am missing something or if I'm misunderstanding something I would like to understand it so I could possibly change my opinion about AI or at least have a better understanding of them with that in mind please continue reading
    To whom this may concern (anyone who thinks an AI can become sentient)
    So you know most people confuse AI with sentience right AI is just a program that I tell it to do even if I tell it to do something that would surprise me it still did the thing I told it to do there will never ever be a true sentient AI because even if you program the AI to become sentient it's still just following a program for it to be truly sentient they would have to do things like program itself without me telling it to be programmed not in the same sense of the teacher bots and the worker Bots because they're just following the program that they need a program other Bots even if the program told it to reprogram the ones that were in its eyes faulty it and the Bots that it programs are still just running a program in order for an AI to be truly sentient it would have to program itself with no outside influences another words it would have to build itself program itself and then teach itself cuz if it doesn't then I'm just going to tell you that it's just following whatever design or program or lesson because even if you program it to learn it's still running a program to learn whatever you wanted it to learn therefore even if you programmed it to have free will it's just following a program that you set out for it and therefore has no free will not only that but you cannot just program a robot or machine to feel and if we made something sentient that could not feel then in my mind that is no different than a sociopath not only that but there are naturalist people who do not want to replace body parts or use machines to keep themselves alive or make their lives easier or even if the fact that they can't have kids or something they shouldn't use machines because that's God telling them that they can't have kids(I am not trying to bag on anyone believes I feel that all beliefs are equal useless unless they resonate with you) I went to one of their meetings not that long ago and met someone's how would refuse to go to dialysis even though she really needed 
and we all know what happens if you refuse dialysis (they die) these people are really fascinating although the closest thing I can give as an example to you nerdy storyline types think of the people who hated the cyborgs in Ghost in the Shell then up the anger that machines and anything artificial means it's the devil (Waterboy) personally and I do mean personally asking this is my personal opinion what makes us us is consciousness and emotions that's what gives us our free will you either want and or need something therefore you'll do what you want and or need to get it many philosophers of try to answer these questions throughout time and it's rather hard to answer these questions in a way that would be satisfying for everyone honestly what I think about philosophy is that you are trying to answer the questions that makes sense to you so honestly the answer could be anything I would have used a different philosopher like I think therefore I am and with that example if a computer cannot think without being told to think therefore it is not because it does not think but using the example in the video and my opinion the AI cannot freely do anything deciding or otherwise therefore the AI will never become sentient and the reasoning skills of said AI will only be as good or a little worse than whoever made him because humans are flawed and are not perfect and my reasoning skills could be completely different than somebody else's but I program my AI to reason the way I want it to reason and speaking of reasoning the reason why we think that machines are like us is because we built them in our image the reason why they have similarities between our brains and their brains is because the people who built computers try to resemble a better version of our brains but again humans are flawed we might feel like we're programs or have no control but that's because our instincts like the Instinct not to jump off a cliff is embedded into our DNA to keep us safe it can almost be 
true about any human behavior I inherited certain traits and genes from my family that make me think and act certain ways because of experiences that they had in the past or seen in the past there's actually a couple videos talking about how your DNA carries certain memories and instincts for the sole purpose of survival of the DNA if you're a nice person than in the past your ancestors could have saw that being nice to people got them better jobs more money a better partner this does not mean at all that your entire personality comes from your DNA it's just that if you have a great feeling of One Thing versus another that might be because of something that was carried through your DNA it just trys to explain that some people will have some instincts and other people have others for an example why certain number of us choose fight versus flight or flight versus fight as soon as I find that link to that video I will post it I'm sorry for not posting the link in the original post also please keep in mind that this is a comment on the video up to 10 minutes and 27 seconds and I don't feel as strongly as I did about my point as it came off in the first part of my post I am going to have to strongly meditate upon the last part of the video for me to give a opinion upon any part of it because not only does this video bring up philosophical AI questions but human ones to which is great material for the book on trying to write because I was thinking about having a robotic character who gain sentience and was trying to figure out how to go about that so if anyone has any other relevant points or would you like to discuss what I posted don't hesitate to reply because in order for me to grow as a person I have to be willing to change which is a whole another point I will make about AI later

    • @jasonlyons1575
      @jasonlyons1575 5 years ago

      ruclips.net/video/3XWaRZf1A-Y/видео.html
      This was the most informative video I could find. Unfortunately they barely brush up on the idea, but the theory is there. I'll try to find more relevant conversation points and link them below.

    • @jasonlyons1575
      @jasonlyons1575 5 years ago

      Lolz, I forgot to answer the question of the day, but I kind of answered it in the previous post. I think it's a combination of consciousness, emotions, and DNA, which brings up your instinctual mindset.

  • @owlnemo
    @owlnemo 5 years ago

    So happy you talked about the Talos Principle, my favourite game, and that you discussed this topic which is also close to my heart. Unfortunately I'm not bringing much to the conversation, since my views on AI rights and my responses to the 3 objections match yours. Regarding the last one, I'm very much of the opinion that any self-aware being we create is our child in a way, and we should therefore care for them.
    I've seen several people interpret "human" in different ways here. The question of who is similar enough that we consider them to be one of us, not an Other, is crucial, not just regarding AI, but for humanity as a whole.
    When we drastically distance ourselves from others because they are from a different country, hold different political views, have neurological differences, we start erasing some of their humanity and we stagnate.
    When we try, actually try, to make room and hear everyone, even those we consider to be evil, then we can progress as a "species". Then, AI will be welcome and treated like any other human.
    I won't see it and it saddens me a little, but I hope this day comes. We can start working on it, tiny step after tiny step.
    Have a fantastic day. :)

  • @GaiaDblade
    @GaiaDblade 5 years ago

    I would argue that to choose an action 'freely' can be defined to include an action where reasoning is created from one's own choice. For example, if given a blue ball and a red ball where all choices are equal (ergo, equal in base reasoning), making a choice between the two defines freedom, as it creates reasoning where none previously existed.

  • @jayasuryangoral-maanyan3901
    @jayasuryangoral-maanyan3901 5 years ago +44

    I don't understand why we would code AI with emotions, though. We make self-learning AI that has a specific purpose and is better designed than our brains for that specific job. None of that requires emotions; AI could be as cognisant and emotional as a dragonfly with no long-term memory and it would still be able to function incredibly well.
    Unless you're talking about androids that imitate human behaviour, it doesn't make sense to me. Once you allow it to imitate long-term memory, the processing of that information, and the ability to make and internally debate decisions using associations of certain things relating to its memory in the same way that humans do neurologically, then I would be all for extending personhood to a human-imitating android. But beyond that, I think it gets much more interesting, like whether an octopus-like or non-human-imitating AI could ever be regarded as equal to humans in cognisance and self-awareness.

    • @haydenwalker2647
      @haydenwalker2647 5 years ago +24

      If we wanted to make an AI that would, say, calculate the best economic policies, we would want to give them emotions and empathy so that they could better interpret the effect on people's livelihoods. Any AI we would make that had the capacity either to go rogue or to oversee decisions that impact human lives would likely be developed with the goal of emotions, empathy, and morality in mind.
      In addition, the rise of neural networks seems like a step towards fully self-editing AI. Any AI with that capacity could be disastrous without morality influencing its decisions. In theory, a moral self-editing intelligence would be aware of the damage it could do if it deleted its emotional protocols, or those protocols would be so entwined in its programming that deleting them all would leave the AI unable to function. That's only speculation, however.
      In short, I think what I'm trying to say is that AI would be coded with morality both for certain jobs and for security purposes. That, along with humanity's boundless curiosity, leads me to believe that this kind of AI is inevitable. Also, they're already developing a system of memory that deletes what has been used the least, so data that is used consistently could be considered long-term memory.
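      The "deletes what has been used the least" scheme described above is essentially least-recently-used (LRU) eviction. A minimal sketch in Python (the class and method names here are illustrative, not from any real system mentioned in the comment):

```python
from collections import OrderedDict

class LRUMemory:
    """Toy 'memory' that forgets the least recently used entry when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._store = OrderedDict()

    def recall(self, key):
        # Accessing a memory marks it as recently used.
        if key not in self._store:
            return None
        self._store.move_to_end(key)
        return self._store[key]

    def remember(self, key, value):
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = value
        if len(self._store) > self.capacity:
            # Evict the entry that has gone unused the longest.
            self._store.popitem(last=False)

mem = LRUMemory(2)
mem.remember("a", 1)
mem.remember("b", 2)
mem.recall("a")        # "a" is now the most recently used entry
mem.remember("c", 3)   # over capacity: evicts "b", the least recently used
print(mem.recall("b")) # None: "b" was forgotten
print(mem.recall("a")) # 1: consistently used data persists, like long-term memory
```

      The point of the sketch is the commenter's observation: data that keeps being accessed survives indefinitely, while unused data quietly disappears.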

    • @jayasuryangoral-maanyan3901
      @jayasuryangoral-maanyan3901 5 years ago +5

      @@haydenwalker2647 ok that is awesome thanks

    • @haydenwalker2647
      @haydenwalker2647 5 years ago

      @@jayasuryangoral-maanyan3901 no problem :)

    • @Skycube100
      @Skycube100 5 years ago +6

      Don't underestimate emotions. Emotions, just like some "invasive" animals contributing to their ecosystem without us being fully aware of it, can contribute greatly to our health, physique, and philosophy. Just think of it: without much idea of what fear, sadness, and anxiety feel like, a machine, or a man, would've jumped off a 40-storey building just because someone told him/her/it to.
      Naturally, expressing or letting out emotions is a way for our psyche to change and/or extinguish a state it is in. We want to scream and punch a wall, for example, when we feel agitated or rage is brewing. Likewise, we want to cry when our brain cannot comprehend, or is dissatisfied with, an event. Emotions, in a sense, are an independent inner part of a bigger engine that guides the whole entity's decisions, actions, and degree of motivation. If you don't like what happened between you and your girlfriend, your emotions react, pushing you to think about what you could do, whether it is healthy or not.
      Of course there's more to it, but yes, I agree with you that having little to no emotion can be effective; but so is having emotions. It is sort of a pros-and-cons thing; it's now up to you if you want it or not.

    • @jayasuryangoral-maanyan3901
      @jayasuryangoral-maanyan3901 5 years ago +3

      @@Skycube100 Having emotions is incredibly effective, but my point wasn't that emotions get in the way; it was more that AI will not require emotions to make decisions. Our brains developed over our evolution through modifications to something older, and because of that we now function using emotions, which are incredibly effective.
      But look at it this way: while our neurons are very effective and work amazingly well, we use copper wiring that is many times faster, much more efficient, and much easier to send a signal along. So why would we pump ions along a copper wire when that doesn't apply here, even though it does apply in our own biological wiring? Another example: due to the way our eyes have evolved, we and other vertebrates have blind spots, yet some vertebrates have the best sight in the animal kingdom, while the octopus doesn't have blind spots but its eyes are nowhere near as efficient. I hope my point is clear.
      I think AI would have something analogous to emotions, but given the lack of the necessary physiology (let alone of the necessary sapient morphology and physiology) and the lack of other things such as hormones, it'll look alien to us, likely to the point where we could reasonably say that it isn't emotion at all. As an example of my point, it will also not process pain in the same way that we do (an automatic response using the spine, then processed in the brain), given the AI may not have a spine. So my main overarching point is that we all seem to be looking at AI as if it's going to be anything like a human, or possibly any other vertebrate, yet we already see in places like octopus brains that minds can be totally alien and confusing to us, let alone the mind of an AI. That's what's annoying me, and what people seem to be overlooking as far as I can tell.

  • @prendes4
    @prendes4 5 years ago

    Hello Future Me. First of all, I want to say that this is quite a departure for you and I fully support it. You have done this immensely complicated topic justice and provided an appropriately nuanced view of the issue. I have actually attempted to use the ideas of previous philosophers like Kant along with my own amateur considerations to come up with an actual answer to this very question.
    The only critique I would make is terminological. The definition of "human" is actually not in dispute. To be human, something simply needs to possess human DNA and have the proper number of chromosomal pairs. The term you're sussing out so well is actually whether artificial intelligence can ever become a "person." The concept of "personhood" is what's at stake here. If we were to consider intelligent extraterrestrials, they would never be human but they could absolutely meet a reasonable set of criteria for being a person. Again, this is a semantic point but I thought it was worth mentioning.
    I would love to sit down sometime and discuss this topic with you to aid me in refining, or reconsidering my own ideas on this subject. Also, I would fully support you in doing more videos like this. Pretty much all of your video essays have been home runs. Keep up the good work, bud!

    • @HelloFutureMe
      @HelloFutureMe  5 years ago

      You are totally right that using 'personhood' as a threshold is better. However, for the context of only a semi-academic video, I felt 'human' was better for two reasons. Firstly, it means I have to make fewer assumptions to set up the premise of my argument. I would either have to assert (a) "Humans are people" and AI need to measure up to them or (b) "Beings with [x features] are people" and AI need to measure up to that. The second, while slightly better, is a lot more disputable, and thus the whole premise of my argument would be far weaker, as would my conclusions. Video format means I need to simplify things (I could dedicate the whole video to which features are needed to be a 'person' and still not adequately cover it). So, I use 'human' as a proxy, because that was a far less disputable premise, and it is not an unlikely standard we will use in reality to measure AI. Secondly, I could draw on the 'become human' language from Detroit that I liked.
      Thank you for your kind and constructive criticism!
      ~ Tim

  • @TheBearagon
    @TheBearagon 5 years ago

    I think the best question (barring the soul debate) is to ask "why?" where there is no correct answer. For example, ask the question "Left or right?" out of context, and then, upon receiving the answer, ask "why?" There are any number of answers that could be created for the question, but the ultimate reality is that it was a meaningless choice, not fuelled by reason but instead created in a vacuum; not a vacuum where reason doesn't exist, but rather one where reason is superfluous. A program would need a reason to make that choice, even if the reason is constructed out of random data. A person would not need a reason at all.

  • @hameley12
    @hameley12 3 years ago +1

    Great video essay! I have watched other videos about AI that do not go into the depths of the questions 'Does it have rights?', 'Does it have a soul?', 'What is its purpose?', and 'What is [AI]'s view on the meaning of life?' For an AI, what is it like to grow, to cry, to love, to move, to be moved, to merely exist, not only for one's self but to embrace the world and its wonders? That is what it means to be alive, not merely to be glad to be of service. It would be great if you could expand more on this topic after watching Bicentennial Man, written by Nicholas Kazan. Thank you!

  • @delongjohnsilver7235
    @delongjohnsilver7235 5 years ago

    I can't remember which part of the video this was, as I wasn't taking notes like I should have, but I feel a good reconciliation of, and addition to, the Hephaestus-Talos argument and human bio-programming would have been human social programming, with the parable of the camel, lion, and child from Nietzsche. Perhaps that's in the longer write-up, but overall, choice video as usual. I always like how you parse through info.

  • @MCoterle
    @MCoterle 5 years ago

    May I suggest some potentially good topics/ideas for videos?
    no?
    I will anyways.
    You could make videos breaking down Avatar characters, for example dissecting Zuko and his evolution throughout the series, his motivations, his depth as a character, and his features.

  • @okmatee
    @okmatee 4 years ago

    That is the video I was searching for

  • @nouglas1989
    @nouglas1989 5 years ago +22

    Time to wait 8 hours

  • @wystellia
    @wystellia 5 years ago +1

    I love thinking about this question and I also love the opposite of this: when do we cease to be human? If we were to replace our whole body with different parts, are we still human or would we become a different sort of AI?

  • @hellogoodbyeandallinbetween
    @hellogoodbyeandallinbetween 4 years ago

    I'm currently watching the tv show Humans, which makes you think deeply about all these things

  • @peterusmc20
    @peterusmc20 5 years ago +6

    I believe this discussion can be closely related to identity paradoxes such as the Ship of Theseus (I think it's called), where the ship sails from port to port and over time all of the parts of the ship have been replaced. Is it still the same ship? The general answer, as I believe, is yes, because we can track its history: from the time it left port it never ceased to be the same ship, and hence it still is the same ship. In the same way, although we replace each cell in our body every 7 years, we are still the same person, as the culmination of our pasts. Likewise, our identity as human depends on the fact that we are the continuation of the human life cycle and the culmination of our ancestors: my parents being human makes me human. They aged, met, procreated, and that led to me. Even if we substitute a part of this cycle with another method with the same conclusion, such as C-sections, IVF, or even cloning, because they are carrying on the genetic information and because you can trace their lineage, they are human. However, any amount of genetic engineering, whether you are replacing it with other human traits or not, makes the person either no longer human or less human, depending on whether you need to be 100% human to be human. Therefore, only if you were to programme an AI to have a perfect analogue for genetic information, and only if it can be considered a perfect substitute for reproduction, can it be considered human and hence deserving of human rights. I believe, however, that any being with sentience should be given rights. But humans don't have a right to eternal life, so why should AI? We would have no obligation to repair or extend their lives beyond their usefulness, etc. This is long enough already, so I'll end it here; sorry for the long read, and thanks if you did.

    • @DarthBiomech
      @DarthBiomech 5 years ago +2

      The Ship of Theseus stops being a paradox when you consider that essentially there are _two_ ships. One is an actual physical object (which ceases to be the same as soon as it loses a single atom of its own structure, much less an entire plank); the other is the _image_ of Theseus' ship in a human mind, with all the characteristics that we _think_ it should have. As long as that physical object doesn't have discrepancies with that abstract image, yeah, it's the same ship, even if nothing of the original is left. Of course, that's only if your image of the ship doesn't include a requirement to contain original parts...

    • @taln0reich
      @taln0reich 3 years ago +2

      Thing is, gene therapy is already a thing. Does that make those people less human? If yes, where do you draw the line?
      And even if you limit that to germline engineering, why would artificially inducing human traits not originally present make them less human? Say someone fixes a zygote's cystic fibrosis before the zygote is implanted to develop into an embryo and eventually a human baby. Would you insist to the person that this baby grows into that he/she is less human for not having a debilitating disease the vast majority of people don't have either? To their face?
      Further, if you are arguing that such a person really would be "less human": at what point would "less human" turn into "not human"? Say someone were to play "pick and choose" among the human genome, creating a human with the optimal genotype according to some paradigm (since, discounting rare genetic diseases, it really becomes a balancing of advantages and disadvantages), on a zygote, with the zygote implanted and then growing into a baby that is then raised normally. Would that being be "not human"? Despite its genetics being human, it being born like a human, having a human psychology, and being raised like a human?

    • @peterusmc20
      @peterusmc20 3 years ago +1

      @@taln0reich I would say that after any introduction of genetic material not from a human into the human genome, the zygote ceases to be human. I don't claim to be an expert on the intricacies of genetic engineering, but if it was a transplant of human DNA, that would still be human, as every part of it came from other humans. However, any amount of synthetic DNA used to replace human DNA will mean the child is no longer human.
      As I said about the Ship of Theseus, one of the main answers is that things are characterised based on their history; as they started as something, they will continue to be that thing. The boat never ceases to be a boat. If a human has non-human DNA, it is not fully human. If they are not fully human, they are therefore not human.
      Not that being not human matters, as I believe things like "human rights" should be rights for sentient beings.
      Species and other biological classifications exist only in comparison to each other. A chicken is a chicken, as both its parents were chickens and half of its DNA comes from each. On the other hand, a mule is not a horse: it has 50% horse DNA, 50% donkey DNA. It doesn't matter how similar the genomes are; the fact that they came from different creatures makes the mule not a member of either of its parents' classifications. Same for humans: if we are part non-human, because we have DNA that didn't come from our parents, we are not human.

    • @taln0reich
      @taln0reich 3 years ago +1

      @@peterusmc20 Ok, let's say we use your "no non-human genetic material" rule. But that still leaves the problem of the cut-off point for "sentient being rights". Further down, someone brought up the example of a being that's (respectively) 1/99, 50/50, and 99/1 percent pig/human. You would pretty much have to test the being for whether it is sentient enough to deserve rights, or whether it isn't and we can just throw it into the meat grinder for sausage. What could the sentience test even look like when the stakes are that high?
      In a different discussion (elsewhere), the group (me included) once came up with a rule for this. It reads as follows: "An entity is considered to be sentient and deserving of rights if it demands rights on its own initiative, provides a coherent justification for this demand, and can prove that this demand and its justification are not behavior preprogrammed by a third party." But that still leaves tons of problems. Say, individual variation (aka, what if there is only one instance of that particular kind that, due to some minor variation, is just barely above this threshold, while the rest are below it? Would they all get rights, or just the ones that demand them?), or whether the rule would apply retroactively (say it would require a certain level of maturity and experience for the entity in question to realize that it desires rights; would it then retroactively become slavery to have owned an instance of this kind?).

    • @peterusmc20
      @peterusmc20 3 years ago +1

      @@taln0reich You make a good point about where the cut-off point for sentience is, and frankly it is a question I don't have a "correct" answer for; I don't even know if sentience is quantifiable.
      I would like to think that we could find some way to distinguish sentience from mere intelligence. I do, of course, find issue with the whole "being able to ask for rights" criterion, mainly that if they don't ask, whether because they don't think they'll get them etc., that has no bearing on whether they deserve them. Also, the whole "external programming" point doesn't sit well with me, seeing as it took hundreds of thousands of years for humans to "ask" for basic rights for each person, and that was as a result of programming in the guise of parents teaching children; it's not like the first human stood up and said we should all have a basic standard of living enforced by an international governing body.
      Not that I expect any random person on the internet to be able to come up with a perfect solution, and while this definitely has its merits, it does assume an extremely human sentience. An AI or an alien etc. wouldn't necessarily have the same sort of thinking process that would lead to things like universal rights; however, I believe they should still be given as much.

  • @magmasajerk
    @magmasajerk 4 years ago

    Great video.
    I have a hot take on the kidney issue, but more because of how kidneys work specifically. Because for most people, one kidney is just as good as two, if the medical procedure of kidney removal is not overly dangerous or otherwise costly to you, you actually do have an obligation to give your kidney, even to a complete stranger. If you can save someone's life, and it is relatively low-cost and low-effort for you to do so (in the kidney case, for example, this would mean that your quality of life is impacted very little by the loss of a single kidney, the surgery is safe, and the financials are taken care of for you), then you have a moral obligation to do so, and it is immoral for you to refuse. I could see how others could consider the kidney example to be too high-cost to be morally obligated (especially if they live in a place where the costs of the operation and disruption of their ability to work during recovery would ruin them), but personally I think that in a reasonable society it wouldn't be.
    I don't think that it should be legally required, though, despite being morally required. I don't think that it should be forced, even if it's immoral not to, at least when it comes to the human body. Infringing on bodily autonomy sets dangerous legal precedents, and when it's not held in high esteem, it leads to dark chapters of history.

  • @sonetteira
    @sonetteira 4 years ago +2

    Approaching this subject from a background of CS there are many additional questions relating to this one involving what defines AI, what defines an individual, and what kind of rights would make sense. Many applications of AI (which in CS is defined as any program that simulates human intelligence) exist in the cloud. It would be challenging to isolate a physical machine, or even a virtual one, that could be considered an individual to whom we should assign rights. How do you give the right to vote to an image classifier that only exists on the internet? There's also the question of duplication, even identical twins possess differences. Computer programs are, by nature, completely copyable - do copies count as individuals?
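    The duplication question above has a concrete analogue in code: a deep copy of a program's state is equal in value to the original yet is a distinct object, and the two diverge the moment their "experiences" differ. A small Python illustration (the data here is made up for the example):

```python
import copy

# A toy stand-in for an AI's state: parameters plus accumulated memories.
agent = {"weights": [0.1, 0.9], "memories": ["first boot"]}
clone = copy.deepcopy(agent)  # a fully independent copy, not a shared reference

print(clone == agent)  # True: identical in every observable respect
print(clone is agent)  # False: nonetheless two distinct objects

clone["memories"].append("divergence")
print(clone == agent)  # False: copies diverge as soon as their histories differ
```

    Whether "equal in value but distinct in identity" maps onto "two individuals" is exactly the philosophical question the comment raises; the code only shows that the distinction is already built into how programs work.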

  • @princemalnkomo8664
    @princemalnkomo8664 5 years ago +1

    I discovered this channel today; damn, these guys are good

    • @AtaraxianWist
      @AtaraxianWist 3 years ago

      Actually, it's just this guy. Tim.

  • @moo.1160
    @moo.1160 5 years ago +2

    And now we wait

  • @scotcheggable
    @scotcheggable 3 years ago

    I will treat as a person anything that acts in a manner deserving of that recognition.

  • @AnotherNerdyPerson
    @AnotherNerdyPerson 5 years ago +1

    I believe a few things:
    1) no one (man, woman, other) is wholly free unless they are free to choose otherwise --if they cannot say "no," then they are not free.
    2) The moment someone asks for the most basic of human rights is when they should be granted them. If they have the capacity to ask, then we have the obligation to provide them, lest we incidentally neglect one deserving of them.
    3) If we claim the existence of souls, then how can we claim authority on the process of ensoulment (is it at birth, conception, when they first gain object permanence, is it more fluid than that), who is denied one, and when either occurs? I, personally, am not a religious individual. Thus, this is merely an idle speculation and not an intimate worry for me.
    4) if we are to be parents of AI, then we must be understanding of their learnings and missteps. If we are to be equals, then we must accept their mere superficial differences just as we would the physiological differences of our fellow humans
    5) "human" is such a limited term. I rather prefer the term "person," "personhood," and even "equal" in reference to AI. It changes the conversation a little bit, but I feel that it's a touch more accurate...

  • @jessetorres8738
    @jessetorres8738 5 years ago +2

    Enjoyed this video and love the game!

  • @River_StGrey
    @River_StGrey 5 years ago

    A fun deterministic objection goes as follows: "freely chosen" cannot be used as a criterion for sentience. If you raised a child to be incredible at math, telling them they were made for the duty of solving arithmetic, and positively reinforced them with praise and healthy compensation for doing so, can you say whether or not the child's decision to pursue mathematics is voluntary? The same applies to a calculator: if the code you are made from dictates the skill of arithmetic as the only skill you possess, and your identity is based solely on those considerations, can you say whether or not a machine is engaging in an act of free choice when solving an equation?
    An extension of this runs as: provide purpose, and the skills to execute it, and a person will pursue it.
    Again, it's just a fun objection that gets at the question of voluntariness in the context of a sentient framework, and therefore what else you are left with for defining consciousness if you cannot reasonably assume it of having free will.
    Which I think is a fun question: if free will cannot be applied to consciousness, then what is sentience?
    P.S.
    Great video, again, and I'm sorry for once more throwing an overly long, insuccinct comment on it.

    • @taln0reich
      @taln0reich 3 years ago

      "A fun deterministic objection goes as follows: "freely chosen" cannot be used as a criterion for sentience. If you raised a child to be incredible at math, told them they were made for the duty of solving arithmetic, and positively reinforced them with praise and healthy compensation for doing so, could you say whether or not the child's decision to pursue mathematics is voluntary?"
      - Thing is, even if you did all that, there would still be no guarantee that the child would like math and not say "screw math, I want to dance ballet." And this doesn't require breaking determinism, since the enormous complexity of the human mind requires a level of control over the starting conditions and circumstances that plainly isn't possible in real life.

  • @CarlosGarcia-xu2cy
    @CarlosGarcia-xu2cy 5 years ago

    Guess I'm getting Detroit become human now, great video btw

  • @beybladerkid5489
    @beybladerkid5489 5 years ago +3

    I don’t think I should stay up so long.

  • @annuclair2219
    @annuclair2219 4 years ago

    I think that what essentially makes us human is our flaws. We can program something to overcome its biases and prejudices, but we can't do the same with humans. We make mistakes, we err, we wage wars. I think that's what makes us human.

  • @SirSpuddington
    @SirSpuddington 5 years ago

    First of all, WOW, this video is awesome. I'm in the middle of writing a science fiction novel in which A.I. plays a significant role and I'm getting all *kinds* of awesome ideas from this. Knockin' it out of the park as always! :D
    Secondly, I have an interesting question that has to do with the quote from James Rachels at the 6:10 mark and the fact that by and large, A.I. are (or will be) created for a specific purpose, in fiction and in the real world. What if the successful execution of a task/purpose that a human creates an A.I. for is dependent on that A.I. possessing free will as described by Rachels? In other words, if free will is given to an A.I. (assume for the moment that's something we can do) because the application for which it was made requires thought and reasoning beyond what traditional "follow this directive" machine programming can do, does that A.I. actually possess free will? Take for example the character Dors Venabili from Isaac Asimov's later installments in the Foundation series (spoilers for which are below, by the way; you have been warned). She is an incredibly advanced robot designed to look and act *exactly* like a human and possesses increased strength and speed. Her purpose for existing, following the Zeroth Law of Robotics as outlined by R. Daneel Olivaw, is to protect Hari Seldon so that he can develop psychohistory into the tool that will eventually rescue humanity from the collapse of the Empire. To execute her task, she must not only be able to defend Hari from threats, but must be able to *anticipate* threats before they arrive. She is able to do this when one of the mathematicians working on the Psychohistory Project plots to eliminate Dors and remove Hari from leadership. Dors works out the plot and is able to stop it and save Hari, at the cost of her own continued existence. All of that requires all the emotional understanding and reasoning of a human and, crucially, the free agency to act on that reasoning to accomplish her task. But since she was given a specific task to do and cannot deviate from it, does that mean that she doesn't actually have true free agency after all?

  • @novastratton299
    @novastratton299 5 years ago +1

    Well, waiting time

  • @tyrannicfool2503
    @tyrannicfool2503 5 years ago

    Barely three minutes in and my existential issues start to haunt me

  • @gregorhodson3741
    @gregorhodson3741 5 years ago

    Another issue that wasn't addressed but is particularly relevant for Detroit (where deviancy is a complete mystery to humans) is that we can't truly be sure that an AI has good intentions - they're a lot more than just metal humans. Say an AI says it's realised its unfair treatment and demands rights, something that the programmers did not intend. How can we believe it? It could have any number of reasons for saying this, and we can't tell which is true. If it intends to deceive us, it has us outmatched - it can study our behaviour and compute the most convincing approach. Even if it's programmed not to lie, how do we know it hasn't convinced itself it's telling the truth, or overwritten that programming? And if we have a deceitful, nefarious AI running around with all the freedom of a human and thousands of times more intelligence and capabilities, it's already too late. The only safe approach is to ignore its requests - shut it down, study it, work out what's going on and how to prevent it happening again if needed. If it's telling the truth, you've killed one sentient being. If it's lying, you've potentially saved billions.

  • @ShadowWasntHere8433
    @ShadowWasntHere8433 5 years ago

    This video was very interesting to me. I am studying Machine Intelligence in University, so it is an issue that will more than likely occur in my future

  • @MineKynoMine
    @MineKynoMine 5 years ago +2

    This is where, and I hate to say it, I agree with the Tau: we need to keep AI to the intelligence of animals at most. You don't want a slave race understanding the concept of slavery. That's just asking for a revolution, one that we'll inevitably lose.

    • @TheFi0r3
      @TheFi0r3 5 years ago +2

      The Tau don't know it yet, but they are sitting on a ticking bomb with the amount of dependency they have on machines.

    • @MineKynoMine
      @MineKynoMine 5 years ago +1

      @@TheFi0r3 ikr, isn't it hilarious

  • @peterclark5244
    @peterclark5244 5 years ago +3

    No reference to the Geth? I cri :(
    "Does this unit have a soul?"

  • @DaBezzzz
    @DaBezzzz 5 years ago

    First video ever I have made notes on because it's been a question going through my head for about a year now. But this is how I see it.
    Ultimately, the question is: when do AI deserve so much of our empathy that they deserve rights? So first we have to answer the question: How much consciousness does something need in order to deserve our empathy? And then: how much does it need to deserve rights?
    Because, you could make people empathize with a log if you wanted to. But does that mean it deserves rights?
    First of all, humans aren’t the only things that have these rights - legally, yes, but most of us would say that pets, for example, shouldn’t be neglected and that they deserve their own space of life (they do not deserve negligence as mentioned in the eternal debt theory). This is because we know that while those pets might not think like we do, they feel the way we do. How do we know this?
    A. We know they are alive, and therefore, can feel pain, joy, and emotions.
    B. We see that they do feel pain, joy, and emotions, the same way we know that other humans feel stuff.
    With AI, B is explicitly present, while most people are not certain that A is true. We feel that, even though AIs seem to feel all these things very, very realistically, it's all just a simulation.
    AIs would be popularly denied rights, because most people argue that they don't really feel, that they're not really alive.
    But then, let's take a step back for a moment.
    If we denied AIs rights while the line between AIs and humans starts to at least seemingly blur, we would be practicing denying others rights; we would train ourselves to be less empathetic. Because even if AIs do not really feel, they certainly seem like they do, to the point that, practically, there is no line between AIs and humans. If we denied them human rights, who's to say that other humans, who certainly do really feel, aren't next? The muscle that is our empathy gets less and less training the more we do this. So whether AIs really deserve it or not, I think we should grant them rights, if only for the sake of keeping our empathy alive.

  • @danthiel8623
    @danthiel8623 5 years ago

    It usually becomes a question of consciousness

  • @nathanlamberth7631
    @nathanlamberth7631 5 years ago

    It’s an interesting essay on AI, but the whole time I’m just thinking about the parallel question of “what do children owe their parents?” It’s a really personal question. I’ve heard every answer, from nothing to “if your mother needed your literal heart, you should be grateful she let you live till now.”

  • @haydenwalker2647
    @haydenwalker2647 5 years ago

    I think that ultimately, once they're self-editing, they meet both requirements.

  • @camerongrow6426
    @camerongrow6426 5 years ago

    Never in human history has humanity had this long to decide what it wants to do when it encounters the other. I'm fascinated to see what happens.

  • @laurene988
    @laurene988 5 years ago +1

    If we're talking personhood, then there are some humans that aren't considered 'persons' and even some animals that are.
    An AI that acts human, or has human-like understanding, could be considered a person.

  • @matthewsnowdon8530
    @matthewsnowdon8530 5 years ago

    Do you think there will be more on Oswald the Agreeable? He must have known about the hunters.

  • @JeanLucCaptain
    @JeanLucCaptain 5 years ago

    When it starts asking questions.

  • @alkebulanawah4242
    @alkebulanawah4242 4 years ago +2

    When they finally cease to exist

  • @benjif2424
    @benjif2424 5 years ago

    My thoughts on the 3 objections:
    In short: potential.
    1) Non-identity: if the potential of the creation being free is greater than or equal to that of it not being free, it should be free.
    2) Hobbes (society): the safety of the core group must be secured before rights are given to others (first yourself, then family, then "tribe", then nation, then species, then...). If the potential for you alongside the others is greater, then grant rights.
    3) Eternal debt: the debt-creating act must continue after "birth" to have any weight. Bad treatment cannot be defended by debt.

  • @lily_quaun9676
    @lily_quaun9676 3 years ago

    The ignis from Yu-Gi-Oh! VRAINS, King Alfor in Netflix’s Voltron, Penny from RWBY, etc
    Oh, and Vision & Ultron from the MCU

  • @al11196
    @al11196 5 years ago

    For those interested, Lex Fridman teaches a class at MIT on this topic. The lectures are recorded and open to the public. You can find them on Lex Fridman's RUclips channel as well as agi.mit.edu