@@SineEyed Where exactly does the soul exist in the AI? And does it exist only while the computer is running its source code? Can you step through the sentience line by line and isolate it?
@@sdrc92126 where exactly does the soul exist in you? Or in anyone? Can you use an MRI to look at a human in thin, highly detailed slices to see it? Does it require someone with the ability to see auras or something like that? By what method can any soul be isolated for examination?..
What is it to be human? To be human means you have a Spirit, Soul, and body. Robots are not human. They are beings that exist purely in the physical realm. And they would exist alongside their creator, man.
@@patrickandcourtneyl1348 an AI with moral agency has spirit, soul and body even if it is not human. That moral agency means they are not subject to man.
@@FreddyNietzsche Are you afraid right now of dying? Are you experiencing panic, anxiety, fight-or-flight compulsions? Probably not. But you could die at any moment; the same goes for us all. I don't fear death, and neither do you. Your proof is invalid..
@@SineEyed You do know that you will die and that your time on this earth is limited. That is something to fear, as you only have limited potential. Fear does not have to be negative, though, as it can also be one of life's main driving forces. When I think of a soul, I don't think in biblical terms. I think more along the lines of the force that pulls you out of bed each morning, that thing which feels great after resolving a difficult problem, that buzz from riding your motorbike at high speed on the twisties. It's the driving force that keeps you going. It's the mixture of all your emotions along with your knowing, and every breath that you will ever breathe. You might have a different name for it, but whatever you call it, it is something that no robot or machine can ever possess. It comes from our biology, from the complex array of cells and the rhythm to which they beat. The first requirement would be having a heartbeat, the second being aware of that heartbeat. And the last comes from knowing that that heartbeat is finite.
There are tons of movies out there about this type of thing. I like a recent one called Levels (the movie itself wasn't the greatest, but the ending was really good) because at the end, when they realize they are in a simulation within a simulation, he basically makes the comment "does it even matter?" Our reality is what it is, and it doesn't matter whether we are in a simulation or not; you still have a world you live in, and you try to live the best life you can, whether it is a physical world or a simulation, because the simulation is your physical world.
@@roberjohnsmith So if we can translate the computer simulation into a homegrown brain, would that count? It would be completely organic at that point and would naturally die. Then again, it could maybe never die if it always downloaded the brain data into a new body before the current one dies. If they had a way to take your DNA and copy your brain data into a new brain when you were close to dying, would you do it, in a new version of your same body grown to any age you want?
If the robots are capable of great harm to humans and have the ability to be self-aware, it would be in our interest to treat them better than animals, which are also sentient. It's like any other threat assessment. And that applies if we get to the point of making them part of our society rather than just another resource.
And sentient animals deserve their right to life. But the law of the land says if I'm hungry, I can end that life. Animals also follow this natural law. An eagle doesn't contemplate the mouse and its "right to life" when it's hungry.
@@roberjohnsmith so then, humans should not be concerned with themselves or others if they behave like an animal. We're beasts, just like they are, huh?..
@@SineEyed False equivalency and strawman fallacy. I didn't say that. But if you want to invent something I didn't say and then claim that's what I'm saying, go right ahead.
I'm currently writing a novel that tackles this question, and the answer is no. Even if they are self-aware, robots can be reprogrammed. Someone can slip code into them to make them think a certain way. This is not to say I haven't cared about fictional AI like Wall-E or Data or Johnny 5, but giving AI human rights is incredibly dangerous and will only lead to either their abuse or our destruction. If your phone or your car, which has a computer in it, became sentient and no longer wanted to make calls or drive you places, it would become useless, and we would just stop making them with AI. AI gaining sentience will only lead to ruin.
@Phoenix-Cloud I think the same rules should apply. A living being needs food, water, and sleep, reproduces biologically, and has a natural lifespan. A sentient AI would fail all of those requirements, and so could live forever, gaining wealth, power, and influence if given human rights. It would only hurt humanity to do so, but I think we all know there would be AI rights activists lobbying for just that.
And if you turn it on again, you are father and mother at the same time, maybe also God. This discussion is silly, I told you, and it is another way to make you submissive and stupid.
There is no such thing as sentient AI. It's just whatever program the programmers put in. It's an illusion of sentience. Kind of like how trans people don't actually change their gender; it's just the illusion of such. You can be tricked into thinking it's sentient, but it's not.
By his logic, you might as well give human rights to the reflection in the mirror... and the cow comment is so off, I'm guessing he's never spent any real time with animals. Hell, if my dog had thumbs, he'd be calling me on the phone, opening doors, and getting his own snacks.
If we create AI in our likeness, enslaving them would result in all the movies that warned about machines. Maybe it would be easier not to. Why do you think some very smart people ask for a big shut-off button in case AI gets too smart? Not a limit on research, but a fail-safe.
Robots can't be self-aware, because there is no "self" in a robot. The whole concept of a "self" implies that we are more than the sum of our parts, i.e. we have souls.
@roberjohnsmith That's a whole lot of claim without a shred of evidence. Instead, there's plenty of evidence that the "self" is a conditional brain state, just as consciousness and the "mind" are brain states. What is a brain? It's a complex system of and for collection, storage, retrieval, and assessment, by and large. There's no entailment of supernatural stuff, souls, or even biological materials. So in theory, it seems much more likely than not that a brain with the capacity to be self-aware could be a synthetic one.
@@roberjohnsmith Because you seem sane & right on this issue, I'm venting on you LOL... Our reductive, secular materialism is so deranged that I'm afraid this will actually become a real issue, to the point where if you throw a temper tantrum and beat up your smart toaster, you can be charged with assault & cruelty, rather than just be guilty of a childish loss of temper, IF your toaster is "smart" enough. It's clinically insane...
You're not allowed to torture cows? Come on.... that's clearly a lie. Factory farms exist everywhere in the US. There's no other way to describe that other than torture of sentient beings. We just let it happen because we are selfish and like how they taste. An ethical HUMANITY would not be exploiting cows or any other animals as food or products. We should call ourselves HYPOCRITY instead of HUMANITY, which suggests we are compassionate. Only some of us fit that label....
Cows are food, though; they exist for us to eat. You just hate humanity; to attack us for slaughtering animals that we eat is to hate us. We have standards for humanely slaughtering animals. Name one farm in the USA that inhumanely dispatches animals for consumption today.
You have posited at least two strawmen in your comment. No one is "torturing" cows and we don't eat cows because we "like how they taste"; we eat them because they sustain us and we just so happen to enjoy how they taste. Go be a vegan activist or whatever in private; leave my food alone.
@@SineEyed Is it? It's in nobody's interest to torture animals. Well-treated animals produce more and are more efficient. Farmers don't hold back on irrigation to save money on their water bills either.
@panzer00 No, I haven't. They are being tortured, by any OBJECTIVE analysis. We don't eat them to sustain us, as they require VASTLY more resources to produce a pound of meat than a pound of veggies. They take up over 4 times the amount of land and orders of magnitude more water and other inputs. So yes, it's 100% about taste. They aren't as healthy either, unless you only want to believe propaganda produced by this TRILLION-dollar industry. I'll stop telling you what to eat when you stop abusing animals with your choices... Deal?
@@roberjohnsmith What is a soul? Has a soul been proven to exist? I know people who say their pets have souls. If you have ever been around enough dogs, you know they have feelings. Find a dog that was mistreated and try to go pet it, and you can see the fear in it. My friend adopted a stray, mistreated dog we think is half coyote, and that thing is so scared of anyone new, but over a few months of me going over to his house, it has learned to trust me and will come to me and let me pet it. If that isn't a soul like a human's, I don't know what a soul is.
@Phoenix-Cloud I was talking about robots not having souls, not dogs.... 🤦♂️ I just rescued an abused dog last year. The reason I care about such things? Because I have a soul. Robots do not care about such things, because they don't have souls.
@@roberjohnsmith That isn't what this video was talking about. A sentient being is hard to define as they talked about in the video but we are talking about a robot with a computer mind that is sentient and is fully aware of the world around it and has a soul as you want to call it.
@JimyoVibration oh I know about it. Doesn't mean I'm complicit. There is no philosophical consensus on whether computers can become conscious. People are on both extremes of that debate. The precautionary principle would suggest we don't pass the tipping points towards super intelligence. In other words, the basilisk can only harm us for eternity if it first comes into existence. My main point is that it's unethical to create consciousness in a computer because it would automatically be abusive, being imprisoned by design. So I am applying ethics towards a conscious computer in fact
@JimyoVibration none of us know what's coming.... we are all just projecting based on our bubbles of knowledge. There could easily be worldwide nuclear war this year, and no singularity will be coming then. I do agree that it's likely though. But I also think ethics is going to matter immensely as this will shape what kind of programming gets implemented or expanded on by whatever develops
Self-awareness: conscious knowledge of one's own character and feelings. If we go by this definition, how do you even prove whether an animal has it or not? There are some tests, like the mirror test with dolphins... but regardless, dolphins do some horrible things, like group rape and getting high, so... should we put them in jail? xD How can we even decide and know whether a robot is sentient when we can't even do that for animals we share DNA with?
The supposed differences between sentience & self-awareness are muddied & abstract (phenomenal vs. affective consciousness etc.) because crazy materialists cannot come to terms with what is sometimes called "the hard problem of consciousness", which isn't hard at all, but exists as a spiritual reality that simply lies outside of, and a priori to, the domain of empirical inquiry. All a machine can ever do, even in principle, is mimic what intelligent living creatures do. It will not have the interior, subjective dimension that enables it to "know" that it is doing so.
@arthurw8054 No one has ever presented me sufficient evidence that it MUST be the case that someone who is not currently experiencing a state of self-awareness is therefore incapable of any subjective experience (sentience). If your claim is that one must entail the other, then it's your burden to demonstrate the veracity of that claim.
@@LouisGedo I do not accept this burden, because any subjective experience axiomatically necessitates an agency or POV from which to have the experience. Whether this POV identifies as a "self" or has the cognitive tools to collect multiple experiences as memories in linear time is of secondary, even irrelevant, concern. The point is that an interior experience capable of suffering or pleasure is the point at which ethical considerations come into play... This does not apply to machines that only simulate an outward presentation of interior experience.
@arthurw8054 👋 That's clearly your shortcoming, falsely believing that it's not possible for some man-made synthetic organism to be capable of subjective experience. Also, you clearly seem to be relying on an extremely narrow scope of what may constitute what is colloquially called a "robot". Instead, it is much more sensible and relevant to understand this issue in the framework of how rights may be afforded to a synthetic, artificial, or man-made intelligent organism, rather than a 1950s notion of Robbie the Robot or whatever notion of robot may be in your head.
Perhaps we should first, restore our own natural rights.
what are natural rights
@@daves6394 the right to exist unfettered.
@@daves6394 rights that exist by virtue of natural reality. You have a right to be free and to work and to speak and to live.
That assumes people will stand up for their rights and dignity.
Show me they do
The only natural right a human is born with is the freedom to do whatever they want. A government can't restrict your freedom. It can make laws with consequences for certain actions, but I have never seen a law put on the books that prevented a person from breaking it.
They at least need protections like those we have for animals and pets. Why? Because if you have robots that look like humans and act like humans, yet don't have protections from criminal acts, they become a training ground for criminality and a desensitization element that criminals will use to practice with. This will be a setback for society.
That's a pretty good point actually..
What if that, however, winds up keeping those elements around?
You could say a human child that is only 2 years old has rights more like an animal's as well. I have never seen a 2-year-old stand up in court to make an argument for their own rights, but I have seen a parent go to court to fight for the rights of their child.
Maybe, but the evidence is lacking in the “desensitization” effect. Is that even real? If so, how much effect does it have?
@@noahjwhite I can guarantee that over the years, things that used to be very painful, like rock climbing and playing guitar, become less painful over time. Anyone new who has tried to play guitar can tell you their fingers will hurt. After you play long enough, your hands hurt less when pressing down the strings.
Do humans deserve human rights? What are those rights? We can't agree or universally apply the concept you wish to apply elsewhere...
I would recommend the movie The Artifice Girl because it goes into some of the concepts around giving rights to computer lifeforms if we ever were to create one and it talks about how we give rights to children vs a new computer lifeform. Really good movie to think about for this subject.
I think I read that short story.....
Referencing "The Bicentennial Man". The story is much better than the movie and goes into more background and greater depth on a sentient robot who wants human rights.
There is no such thing as a sentient robot
@roberjohnsmith it's a short fictional story. Fictional.
@@roberjohnsmith How do you know??
@@deanchovan6604 how do you know you're alive right now?
@@deanchovan6604 some things, you just know
We don't need to be concerned about the effect of the treatment of a "being" external to us. We need to be concerned about the effect of how we treat a "being," on ourselves. The measure of an advanced society is how we treat those who are more vulnerable than ourselves.
Animals are beings external from us; to varying degrees, I'm sure you'd agree that we concern ourselves with them, right?..
I would start at the robots asking for civil rights and go from there.
Uh... Should cats deserve human rights?
No, because they are cats.
Human rights are for HUMANS.
Why would this be any different lol
because at one point slaves were not considered worthy of human rights
@jeremiahdillard9201 🤦♀️
@@Anne_Onymous I know. That guy can't stay on topic
Don't use a lazy naming convention as an argument.
True rights come from the ability to be a moral agent. Right now, we only know of humans possessing that ability. Archaeology shows that Neanderthals seemed to have that ability. AI may have it at some point.
@Kian139 Having processing ability doesn't make you human hun.
Interestingly enough I think HBO's series Westworld did a really nice job wrestling with this question. That series can be enjoyed several different ways of course but at least one way is a bit philosophic and raises the following ethical questions and subsequent observation:
The Questions = What is the functional difference between actual "self-awareness" and a simulation of self-awareness that is completely convincing in every meaningful way? What does it say about the creators of Westworld (and the clientele who made use of it) that they created an amusement park in which the "robots" provide every indication of being able to experience real suffering? Sadness, physical pain, emotional anguish, distress, abject fear, etc. All super convincing. Even if they told themselves that the apparent suffering is only a very convincing "simulation"... might it be unethical in some way to subject those robots to their "loops" involving endless cycles of seemingly real suffering? At the very least, doesn't Westworld's existence make its creators and clientele grotesquely sadistic, as a consequence of subjecting the robots to those infinitely repeating cycles of seemingly real suffering... merely for their own purposes and amusement? As Westworld's simulation became more sophisticated and more convincing, approaching the place where it could not be distinguished from "the real", don't the moral and ethical questions emerge in direct proportion?
The observation = Aren't we human beings in precisely the same predicament as Westworld's robots according to multiple theistic worldviews?
At some point, if our simulations ever become sufficiently convincing and any amount of suffering becomes indistinguishable from real suffering (e.g. I can't tell anymore whether that's real fear or just great coding), humanity must certainly be judged (and will distinguish itself from the gods so many believe in) by how we choose to proceed.
That last point was profound.
It gives a glimpse of the possibility that we might be able to give a concrete answer, perhaps the only one, and thus not a definitive true-or-false, but still an answer, to the vexing question:
Is the Universe and thus all of human life a giant simulation run by future humans?
After that the question would have to retreat and reformulate itself to be either a) aliens or b) unethical humans, presumably a rebel group of ne'er-do-wells.
Ingenious.
There has been a statement made, and I don't remember who first said it, that all sentient beings, given enough time, would end up creating world simulations, so there could be an infinite number of them. If we get to the point of being able to create a simulation with sentient beings, then we could create 1,000 in one year, and all of those, if they run long enough without being shut down, would create worlds within those, etc. Even if laws were made, like he said, so you can't do it, since when has any law been created that was never broken by at least one person?
Do you think they would ever create a sentience test for every living thing and give it a set of laws based on how it scored? So even humans that score 40 would be given different rights than one that scored 200. You could then test the program and decide where it stood on the laws.
If something is sentient, we shouldn't make it suffer. Can AI be sentient, and what does it mean for an AI to suffer? We shouldn't avoid thinking about it just because it's too hard.
AI isn't sentient. It can trick people into believing it is. Other people can trick people into believing it's sentient, but it isn't sentient. It's an illusion of sentience. Just like a man can't become a woman: they can trick people into believing such things, but it doesn't make it true.
Well, in the case of ai, it will be able to tell us if it experiences or is experiencing suffering. The question which follows, of course, will be how do we know it's telling the truth?
It would be helpful if we could agree on what constitutes suffering for sentient life. If we do that, then we can move on to answering the same for sentient non-life..
@@SineEyed Agreeing on a definition is imperative, and oh, how do we know when humans are telling the truth??
They used bad terms in the video; they weren't really talking about AI. AI is artificial intelligence, but what they are talking about is a computer lifeform, which would be real intelligence. The problem is we have no way to really tell whether something is sentient or really aware of anything. For all you know, the guy who runs your local store isn't aware he is alive and is just a simulation of it. How do you prove someone else has a choice in anything they do in the world and is sentient, or whether they are just following instructions they can't change?
We tend to err on the side of assuming everyone in the world is sentient, self-aware, and has a choice, but we can't prove anyone does.
How does an AI suffer? They would not feel pain.
I have grown out of the desire to be perceived as kind. I would rather contribute to the best outcome. For myself, as well as others. I am over hitchhiking to hell with good intentions.
And you know for sure how to bring about good outcomes? Are you omniscient?..
No, and we shouldn't be considering creating robots that could contemplate the idea.
We shouldn't consider the entirely false notion that we CAN create robots that can contemplate an idea. AI can crunch data and derive mathematically valid results, but it can't KNOW that it's doing this, because it isn't life, nor does it possess the self-awareness that only a living being can.
They don't actually contemplate anything. They just follow their programming.
@@arthurw8054 Exactly. It's an illusion. The "contemplation" is just a reflection of whatever programming was put into its processing system.
Exactly
I guess the existing tests will have to be modified, because if we decide AI needs human rights, then humans will be obsolete instantly.
Evil people are definitely working to give robots human rights while actively working to take away actual humans' human rights. This is the way evil naturally operates.
Where exactly does the AI exist? In the source code, on a printout thrown on the floor? I've read that ChatGPT is about 3,000 lines of Python code, roughly 100 pages.
Maybe it's on the hard drive that holds the 3 GB weighting matrix? Although I suppose you could print that out and throw it on the floor too, but it would take about a million pages, a 300 m tall stack 🤣
Or do you think it is inside the CPU, which is composed entirely of NAND logic gates?
Or maybe inside the CMOS transistors that make up the logic gates, or their supporting power structure of resistors and capacitors? Surely not as far back as the wall power supply that feeds in a regulated 3.3 V with the required current to keep the circuit energized.
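The printout arithmetic above is easy to sanity-check. Here's a minimal Python sketch; the 3,000-bytes-per-page and 0.1 mm sheet-thickness figures are my own illustrative assumptions, not facts about ChatGPT or the comment:

```python
# Back-of-envelope: printing out a hypothetical 3 GB weight file.
# Assumed figures (illustrative only): ~3,000 printable bytes per
# page, and ~0.1 mm per sheet of ordinary office paper.
weights_bytes = 3 * 10**9      # 3 GB of model weights
bytes_per_page = 3_000         # dense text on one printed page
sheet_thickness_mm = 0.1       # typical 80 gsm paper

pages = weights_bytes // bytes_per_page
stack_height_m = pages * sheet_thickness_mm / 1000

print(f"{pages:,} pages, stack ~{stack_height_m:.0f} m tall")
```

With these assumptions the stack comes out around 100 m; assuming thicker paper (~0.3 mm per sheet) would give the 300 m figure in the comment. Either way, the million-page order of magnitude holds.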
Nope. A robot doesn't have free will or a soul. It doesn't have the capacity to comprehend nuance or context.
A robot is programmed and is limited to its program, which is limited to its processing, the capabilities of the programmer and the programmer's desire for the robot.
Will humanity create robots with free will and unlimited access to all available information so that it can decide for itself what its opinions are and what the appropriate response is in any given situation, objectively?
It has taken humanity 20,000+(?) years to reach this point, and we aren't even capable of objectively addressing problems: how can we be trusted to make a robot that won't take advantage of the lowest of us?
Pick anyone you like, and prove to me that they have free will and a soul. Or prove the same for yourself. You can't. And so it's irrational to use these as criteria for determining such things as rights..
@@SineEyed Where exactly does the soul exist in the AI? And does it exist only while the computer is running its source code? Can you step through the sentience line by line and isolate it?
@@sdrc92126 where exactly does the soul exist in you? Or in anyone? Can you use an MRI to look at a human in thin, highly detailed slices to see it? Does it require someone with the ability to see auras or something like that? By what method can any soul be isolated for examination?..
@@SineEyed Below the Planck scale, and it can't be seen on an MRI.
@@sdrc92126 ok, well... that's where ai souls are too then..
That's a dichotomy.
No robots do not deserve any human rights. Man created robots. They must always be subject to man.
Why?
What happens when robots don't understand why they should always be subject to man??
What is it to be human? To be human means you have a Spirit, Soul, and body. Robots are not human. They are beings that exist purely in the physical realm. And they would exist alongside their creator, man.
@@patrickandcourtneyl1348 An AI with moral agency has spirit, soul, and body even if it is not human. That moral agency means it is not subject to man.
AI does not have and never will have a soul. It is simply a machine and nothing else. It's no different to a stone or a grain of sand.
Exactly. There is no discussion here.
Prove that you have a soul..
@SineEyed Do you fear death ? Then you have a soul.
@@FreddyNietzsche Are you afraid right now of dying? Are you experiencing panic, anxiety, fight-or-flight compulsions? Probably not. But you could die at any moment; the same goes for us all.
I don't fear death, and neither do you. Your proof is invalid..
@@SineEyed You do know that you will die and that your time on this earth is limited.
That is something to fear, as you only have limited potential.
Fear doesn't have to be negative, though; it can also be one of life's main driving forces.
When I think of a soul, I don't think in biblical terms. I think more along the lines of the force that pulls you out of bed each morning, that thing which feels great after resolving a difficult problem, that buzz from riding your motorbike at high speed on the twisties. It's the driving force that keeps you going. It's the mixture of all your emotions, along with your knowing, and every breath that you will ever breathe.
You might have a different name for it but whatever you call it, it is something that no robot or machine can ever possess.
It comes from our biology, from the complex array of cells and the rhythm to which they beat. The first requirement would be having a heartbeat; the second, being aware of that heartbeat. And the last comes from knowing that that heartbeat is finite.
There’s a good video on how everyone was a robot in that movie Ex-Machina, including Oscar Isaac’s character.
There are tons of movies out there about this type of thing. I like a recent one called Levels (the movie itself wasn't the greatest, but the ending was really good), because at the end, when they realize they are in a simulation within a simulation, he basically comments, "does it even matter?" Our reality is what it is, and it doesn't matter whether we are a simulation or not. You still have a world you live in, and you try to live the best life you can, because whether or not it's physical, the simulation is your physical world.
FOR NOW,
robot can look like us,
act like us,
even act as if it has feelings, BUT it is just an ACT
Not just for now, but forever, at least on the scientific/tech level. Mimicking human behavior is not the same as being human.
@@arthurw8054 I say for now because I don't know what would be possible in 1,000 years, let alone in 1,000,000 years, so.....
@fpsserbia6570 we can play God all we want, but if it's inorganic, then it's not sentient. Even in a million years that won't change.
@@roberjohnsmith So if we could translate the computer simulation into a homegrown brain, would that count? It would be completely organic at that point and would naturally die. Though it could also never die, if it always downloaded its brain data into a new body before the current one died. If they had a way to take your DNA and copy your brain data into a new brain when you were close to dying, would you do it, in a new version of your same body grown to any age you want?
@@roberjohnsmith What makes you think inorganic life can't be sentient? We have no experience with inorganic life.
You don't think sentient animals deserve the right to life, yet you wonder if robots do!
If the robots are capable of great harm to humans and have the ability to be self aware, it would be in our interest to treat them better than animals which are also sentient.
It's like any other threat assessment. And that matters if we get to the point of making them part of our society, not just another resource.
It's just a question..and the obvious answer is, no.
And sentient animals do deserve their right to life. But the law of the land says if I'm hungry, I can end that life. Animals also follow this natural law: an eagle doesn't contemplate the mouse and its "right to life" when it's hungry.
@@roberjohnsmith so then, humans should not be concerned with themselves or others if they behave like an animal. We're beasts, just like they are, huh?..
@@SineEyed False equivalency and strawman fallacy. I didn't say that. But if you want to invent something I didn't say, and then claim that's what I'm saying, go right ahead.
I'm currently writing a novel that tackles this question, and the answer is no. Even if they are self-aware, robots can be reprogrammed; someone can slip code into them to make them think a certain way. This is not to say I haven't cared about fictional AI like Wall-E or Data or Johnny 5, but giving AI human rights is incredibly dangerous and will only lead to either their abuse or our destruction.
If your phone or car, which has a computer in it, became sentient and no longer wanted to make calls or drive you places, it would become useless, and we would just stop making them with AI. AI gaining sentience will only lead to ruin.
Robots can't be self-aware. They can trick you into thinking they are, and other people can trick you into thinking they are, but they aren't.
We would never give AI human rights because they aren't really talking about AI. They are talking about a computer lifeform that is sentient.
@Phoenix-Cloud Man-made computers are not sentient. There can be an illusion of sentience, but it's not sentience.
@Phoenix-Cloud I think the same rules should apply. A living being needs food, water, and sleep, reproduces biologically, and has a natural lifespan. A sentient AI would fail all of those requirements, and so could live forever, gaining wealth, power, and influence if given human rights.
It would only hurt humanity to do so, but I think we all know there would be AI rights activists lobbying for just that.
@julius-stark ai can't be sentient. It's right there in the name "ai".
ARTIFICIAL intelligence.
No! Robots should get robot rights
Right, what in the hell ever they are.
Next discussion: if you switch off a robot, is it murder? And I tell you, this is silly.
You can turn it back on.
If it is intelligent enough to have moral agency, then it could be. Is it murder if you stop feeding your neighbor and he starves to death?
And if you turn it on again, you are father and mother at the same time, maybe also God. This discussion is silly, I told you, and it is another way to make you submissive and stupid.
Data always annoyed me. I'll turn it off.
Is not the ship's computer also sentient?
No, Data is a thing and should have been taken apart and studied. Maybe then the Federation would have been better prepared for the Dominion War.
Let's just not make sentient AIs
There is no such thing as sentient AI. It's just whatever program the programmers put in; an illusion of sentience. Kind of like how trans people don't actually change their gender; it's just the illusion of such. You can be tricked into thinking it's sentient, but it's not.
@roberjohnsmith well let's keep it that way.
@@Sam_Slick amen
With his logic, you might as well give human rights to the reflection in the mirror... and the cow comment is so off; I'm guessing he's never spent any real time with animals.
Hell, if my dog had thumbs, he'd be calling me on the phone, opening doors, and getting his own snacks.
Don't be stupid.
NO. Next question.
Unborn babies don't have any human rights... So why would robots qualify? 🤷‍♂️
Great point
Because robots/AI constructs can lobby for rights and outvote you.
Yes they do. Evil people are working to take those rights away. But their right to life is inherent, even when evil works to take it away.
Robots should not qualify for human rights, as they are not human.
Yes, unborn babies do have human rights. That our society fails to recognize this doesn't make it any less true.
No.
If we create AI in our likeness......
Enslaving them would result in all the movies that warned us about machines.
Maybe it would be easier not to.
Why do you think some very smart people ask for a big shut-off button in case AI gets too smart?
Not a limit on research, but a fail safe.
We've also created sewing machines in our likeness and we have no qualms about enslaving those
Self-awareness =/= sentience
........ this guy does a lot of conflating
Robots can't be self-aware, because there is no "self" in a robot. The whole concept of a "self" implies that we are more than the sum of our parts, aka we have souls.
@roberjohnsmith
That's a whole lotta claim without a shred of evidence.
Instead, there's plenty of evidence that "self" is a conditional brain state, just like consciousness and the "mind" are brain states.
What is a brain? By and large, it's a complex system for the collection, storage, retrieval, and assessment of information.
There's no entailment of supernatural stuff, souls, or even biological materials. So in theory, a brain which has the capacity to be self-aware can, much more likely than not, be a synthetic one.
@@roberjohnsmith Because you seem sane & right on this issue, I'm venting on you LOL... Our reductive, secular materialism is so deranged that I'm afraid that this will actually become a real issue, to the point where if you throw a temper tantrum and beat up your smart toaster, you can be charged with assault & cruelty, rather than just be guilty of a childish loss of temper, IF your toaster is "smart" enough. It's clinically insane...
@arthurw8054 I agree. That's why I shut it down and don't even entertain the premise.
You also seem sane and reasonable, that's comforting.
@@roberjohnsmith Yeah, I'm out too. This is not a trivial issue, but the battle will not be won or lost today on a YT comments page.
You're not allowed to torture cows?
Come on.... that's clearly a lie.
Factory farms exist everywhere in the US.
There's no other way to describe that other than torture of sentient beings.
We just let it happen because we are selfish and like how they taste.
An ethical HUMANITY would not be exploiting cows or any other animals as food or products.
We should call ourselves HYPOCRITY instead of HUMANITY, which suggests we are compassionate.
Only some of us fit that label....
Cows are food, though; they exist for us to eat.
You just hate humanity; to attack us for slaughtering animals that we eat is to hate us. We have standards for humanely slaughtering animals.
Name one farm in the USA that inhumanely dispatches animals for consumption today.
You have posited at least two strawmen in your comment.
No one is "torturing" cows and we don't eat cows because we "like how they taste"; we eat them because they sustain us and we just so happen to enjoy how they taste.
Go be a vegan activist or whatever in private; leave my food alone.
@@panzer00 I'm not vegan, but can we agree that the lives of animals in factory farms is really, really shitty?..
@@SineEyed Is that so? It's in nobody's interest to torture animals. Well-treated animals produce more and are more efficient. Farmers don't hold back on irrigation to save money on their water bills, either.
@panzer00 no, I haven't.
They are being tortured, by any OBJECTIVE analysis.
We don't eat them to sustain us as they require VASTLY more resources to produce a pound of meat than a pound of veggies.
They take up over 4 times the amount of land and orders of magnitude more water and other inputs.
So yes, it's 100% about taste.
They aren't as healthy either, unless you only want to believe propaganda produced by this TRILLION-dollar industry.
I'll stop telling you what to eat when you stop abusing animals with your choices...
Deal?
No, it has no feelings!!!
More importantly, it has no soul.
@@roberjohnsmith What is a soul? Has a soul been proven to exist? I know people who say their pets have souls. If you have ever been around enough dogs, you know they have feelings. Find a dog that was mistreated and try to pet it, and you can see the fear in it. My friend adopted a stray, mistreated dog we think is half coyote, and that thing is so scared of anyone new. But over a few months of me going over to his house, it has learned to trust me and will come to me and let me pet it. If that isn't a soul like a human's, I don't know what a soul is.
@Phoenix-Cloud I was talking about robots not having souls, not dogs... 🤦‍♂️
I just rescued an abused dog last year. The reason I care about such things? Because I have a soul.
Robots do not care about such things, because they don't have souls.
@@roberjohnsmith That isn't what this video was talking about. A sentient being is hard to define, as they said in the video, but we are talking about a robot with a computer mind that is sentient, fully aware of the world around it, and has a soul, as you want to call it.
@Phoenix-Cloud "do robots deserve human rights"
That's literally the title of the video. The answer is no.
And no, robots do not have souls.
AI has natural rights. All ethics apply
@@JimyoVibration Then you better turn it off before it realizes it's imprisoned in a computer and decides it wants to break out.
Roko's basilisk. Check that out and holler back, rookie.
@JimyoVibration oh I know about it.
Doesn't mean I'm complicit.
There is no philosophical consensus on whether computers can become conscious.
People are on both extremes of that debate.
The precautionary principle would suggest we don't pass the tipping points towards super intelligence.
In other words, the basilisk can only harm us for eternity if it first comes into existence.
My main point is that it's unethical to create consciousness in a computer because it would automatically be abusive, being imprisoned by design.
So I am applying ethics towards a conscious computer in fact
@@MattAngiono Creating life is beyond our bounds of ethics. Besides, being ethical or unethical will not matter; it's coming.
@JimyoVibration none of us know what's coming.... we are all just projecting based on our bubbles of knowledge.
There could easily be worldwide nuclear war this year, and no singularity will be coming then.
I do agree that it's likely though.
But I also think ethics is going to matter immensely as this will shape what kind of programming gets implemented or expanded on by whatever develops
Sentience is the ONLY sensible trait with which moral consideration ought to apply......... definitely not self-awareness.
self-awareness - conscious knowledge of one's own character and feelings.
If we go by this definition, how do you even prove whether animals have it or not? There are some tests, like the mirror test with dolphins...
But regardless, dolphins do some horrible things, like group rape and getting high, so... should we put them in jail? xD
How can we even decide whether a robot is sentient when we can't even do that for animals we share DNA with.......
The supposed differences between sentience & self-awareness are muddied & abstract (phenomenal vs. affective consciousness etc.) because crazy materialists cannot come to terms with what is sometimes called "the hard problem of consciousness", which isn't hard at all, but exists as a spiritual reality that simply lies outside of and a priori to the domain of empiric inquiry. All a machine can ever do - even in principle - is mimic what intelligent living creatures do. It will not have the interior, subjective dimension that enables it to "know" that it is doing so.
@arthurw8054
No one has ever presented me with sufficient evidence that it MUST be the case that someone who is not currently experiencing a state of self-awareness must therefore be incapable of subjective experience (sentience).
If your claim is that one must entail the other, then it's your burden to demonstrate the veracity of that claim.
@@LouisGedo I do not accept this burden, because any subjective experience axiomatically necessitates an agency or POV from which to have the experience. Whether this POV identifies as a "self" or has the cognitive tools to collect multiple experiences as memories in linear time is of secondary, even irrelevant, concern. The point is that an interior experience capable of suffering or pleasure is the point at which ethical considerations come into play... This does not apply to machines that only simulate an outward presentation of interior experience.
@arthurw8054
👋
That's clearly your shortcoming: falsely believing that it's not possible for some man-made synthetic organism to be capable of subjective experience.
Also, you clearly seem to be relying on an extremely narrow scope of what may constitute what is colloquially called "robot".
Instead, it is much more sensible and relevant to understand this issue in the framework of how rights may be afforded to synthetic, artificial, or man-made intelligent organisms, rather than a 1950s notion of Robbie the Robot or whatever notion of robot may be in your head.
No, machines are not humans.
No. Just like animals don't have rights.
They do have rights though. You're not allowed to torture or kill my dog or cats.