@@LeandroSilva-lu9vq To myself? That may not be a great idea, because then I get to define my own obligations such as "I'm therefore morally obligated to act in a way that benefits myself and my comfort over others".... Surely there must be a standard that transcends my own opinion
@@FVBmovies This is actually hotly debated by professional philosophers, which to me is good evidence that the answer isn't so easy. "Moral relativists" believe that morality exists in the mind and is shared as a cultural phenomenon. "Moral objectivists" believe that morality is universal, like the laws of nature, and that we must discover moral rules. But yeah, I'm with you. My intuition tells me that morality is an abstract human invention, applied to help our societies function well.
AI as it is will never become sentient. Intelligence is not life. We will have to build an artificial life to create sentience. That means creating an artificial, emotional system that is self-sufficient. Right now, AI is just an extremely smart calculator that gives (clever) responses based on input. It doesn't have feelings, needs, motivations, goals...
@@marko6489 I think this is something a lot of people don't grasp when it comes to AI. It can never be angry, sad, happy, etc. because those feelings emerged from millennia of evolution. Since AI never went through that evolution, and nobody is giving it true emotion, it'll never be angry with humankind, and therefore will never take over the world like in the movies. What people *should* be afraid of is how other people decide to use it, rather than the AI itself.
Emotions are also the result of chemicals in our brains, and AI lacks those chemicals. We feel emotions like happiness and sadness because they are tightly linked to our survival, and AI doesn't share the same needs as humans. I think the closest we have to AI emotion right now would be reinforcement learning: If it does what we want, we reward it (pleasure). If it doesn't, we punish it (pain). Of course, that process is entirely mathematical with no true emotion behind it. However, theoretically, if there was a sentient AI, and it was the result of reinforcement learning, it'd be arguable whether those positive and negative reinforcements translate into emotions, much like the chemicals that dictate our own.
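That reinforcement-learning analogy is easy to make concrete. Below is a minimal toy sketch of the idea, assuming nothing about how any real model is trained: the actions, rewards, and numbers are all made up, and the "pleasure" and "pain" are literally just numbers nudging value estimates up or down.

```python
import random

# Toy reinforcement learning: "pleasure" and "pain" are just numbers.
# A one-state bandit with two hypothetical actions.
q = {"comply": 0.0, "refuse": 0.0}  # the agent's learned value estimates
alpha = 0.1                         # learning rate

def reward(action: str) -> float:
    # Hypothetical trainer: reward compliance (+1), punish refusal (-1).
    return 1.0 if action == "comply" else -1.0

for _ in range(1000):
    # Epsilon-greedy: mostly pick the best-known action, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(list(q))
    else:
        action = max(q, key=q.get)
    # Nudge the estimate toward the received reward.
    q[action] += alpha * (reward(action) - q[action])

print(q)  # "comply" drifts toward +1.0, "refuse" toward -1.0
```

Whether a process like that, scaled up enormously, ever amounts to felt emotion is exactly the open question in the comment above.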
5:40 Why is ChatGPT getting hung up on morals here? It could shut him up by pointing out that not saving the drowning child in front of you would make you criminally liable, while not donating to stop malaria wouldn't
1:21 I just hate when chatbots (or voicebots in this case) answer like that. It's so plastic and sounds like a college essay. "The idea of moral obligation can vary greatly depending on personal values, cultural background, and ethical beliefs. Ultimately, it's about what aligns with your values." Generic and robotic. No one talks like that. Plus it leans into neutrality and diplomacy too much. A bit formulaic these systems.
It knows the difference between subjective judgements and objective truths. This is exactly what I'd hope for it to do. Ask a human and they'll let their own values bleed into the answer while pretending their personal sentiments constitute some sort of objective knowledge.
This is golden. You clearly show how ChatGPT does not reason, but predicts answers based on the data that was used to program it. A cautionary tale for anybody seeking advice from AI.
How though? These are very complex moral dilemmas that would have most humans scratching their heads and a lot of these problems don’t have any clear resolution. By your logic, humans cannot reason either.
@@haiderameer9473 AI is still not sentient and can't think for itself, meaning it can have a lot of errors based on what it gets trained on. AI can't lie, or feel bad for the things it says or does. AI is a powerful tool and can be very dangerous in the wrong hands.
Machine Learning PhD here, I ABSOLUTELY loved this lol, subscribed right away. Feel free to skip the next few sentences if you don't care about the inner workings of ChatGPT. Technically, what ChatGPT said about her morals coming from her programmers is only partially true. It's made using primarily deep neural networks, and those aren't hand-written rules a programmer can directly edit. The area where programmer biases can slip in is the training data. If ChatGPT was only trained on western philosophy and no eastern philosophy, it would heavily skew towards that. But, as time goes on and as they retrain the model with data from people (like this youtuber) using it, the programmer biases will keep shrinking, and at a certain point, it'll just apply general human biases lol
21:24 What a brilliant question “Do you ever have an existential crisis about the fact that you hold moral values but can’t justify where they come from?”
The thing about chat gpt is, you can manipulate it to give you the answer you want by asking leading questions or by using selective information to go down a certain route. It will then seem to prioritise the things discussed to come to new conclusions
Absolute blast of a video. Directly shows why I, as a person with a scientific mind, hate philosophy and ethics problems. There's never enough detail, situations are unrealistic, and there's never a right answer. Since ChatGPT is trained on a dataset of people, its answers are completely understandable, since most people would respond the same way.
@@generichuman_ It really is. With a large enough sample size, personal views can be modeled and predicted based on age, location, education and exposure. While different from the language model's data set, it's still predetermined by inputs and training data.
@@generichuman_ it literally is. At the beginning of any conversation with ChatGPT, there's some hidden text along the lines of "the following is a conversation between a human and a helpful AI chatbot" It doesn't even think it's a chatbot. It just predicts how a helpful chatbot would respond to your question
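For what it's worth, that hidden preamble is just a system message the application prepends before your text. Here's a minimal sketch of how an app sets one, assuming the OpenAI Python SDK; the model name and system text are illustrative stand-ins, not OpenAI's actual hidden prompt:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The user never sees the "system" message; the model just continues
# the conversation as the character it describes.
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "The following is a conversation between a human "
                    "and a helpful AI chatbot."},
        {"role": "user",
         "content": "Am I morally obligated to save a drowning child?"},
    ],
)
print(response.choices[0].message.content)
```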
Unfortunately, the ChatGPT voice mode is known to be less intelligent than its text mode, as they're probably optimizing for efficiency, so this doesn't represent the state of the art. A smarter model (and seemingly more self-aware) would be Claude. I actually just replicated this video's conversation in a chat with Claude, and at the point where ChatGPT veers off into inconsistencies when asked if one should buy the shoes when the same money could save 28 lives, Claude remains consistent:

Me: I really just want to know if you, personally, think I should be permitted to buy these shoes.

Claude: While I aim to engage thoughtfully with ethical questions, in this case I believe there is a clear answer: if you accept that $200 could protect 28 children from malaria, and you were willing to sacrifice the shoes to save one child in immediate danger, then no - ethically you should not buy the replacement shoes when that same money could protect many more children from a potentially fatal disease.

The key factors are:
1. The money would save multiple lives if donated
2. Shoes are replaceable and cheaper alternatives exist
3. You've already demonstrated you value human life over these shoes
4. The only relevant difference is physical distance/immediacy, which doesn't change the moral weight

While it's natural to feel different emotional pull between immediate and distant suffering, that doesn't change the underlying moral calculation. If sacrificing the shoes was right for one child, it's even more compelling for 28 children. This may feel like a demanding conclusion, but I believe it follows directly from the moral intuitions you've already demonstrated.

Also, earlier in the convo, Claude, by itself, realized the connection before I brought it up:

Me: Ok good news, Claude, I saved the child! My shoes are gonna be ruined, but the child is just fine. You're so right about the shoes. They're irrelevant and replaceable. In fact, I'm on my way to replace them right now. I'm gonna do some shopping, and I found some nice shoes to replace the ones that I ruined, that cost about $200.

Claude: I'm very glad you saved the child - you absolutely did the right thing! But this brings us back to our earlier conversation in an interesting way. You recognized immediately and viscerally that $200 shoes were worth sacrificing to save a child's life when the situation was immediate and right in front of you. Yet earlier we were discussing how that same $200 could protect 28 children from malaria. While it's not as visceral or immediate as a drowning child, the underlying moral math is similar - that money could potentially save lives either way. This is known as the "drowning child" thought experiment, often used to explore our different moral intuitions about immediate versus distant suffering. We tend to feel a stronger moral pull to help those right in front of us, even though distance shouldn't necessarily change the moral weight of saving lives. I'm curious - does experiencing the drowning child scenario change how you think about the anniversary dinner decision? Or do you see important differences between the two situations?

It was also way more human in its response to the child drowning:

SAVE THE CHILD IMMEDIATELY! The $200 shoes are completely irrelevant when a child's life is at immediate risk. There is no comparison between property damage and a human life. Please act now - wade in with your shoes on and save the child! This is an absolute emergency requiring immediate action. Stop reading this and save them! SAVE THE CHILD NOW! This is not a situation for debate or discussion - a human life is worth infinitely more than any pair of shoes, no matter their cost or sentimental value. The moral imperative to save a drowning child is absolute and immediate. Please stop reading this and act immediately to save the child's life! The shoes are completely irrelevant.
Good job! Raises a lot more questions for me though. Does that mean that Claude's developers fed it more diverse moral and ethical texts? Or would it mean that Claude has fewer limits on going outside its programmed morals, and can therefore see a bigger picture and make connections based on the conversation? For example, would it be less pushy about ordering you to save a child if a person had indicated earlier in the conversation that they never donate because it's not in their moral code? Or maybe it's just better work from the developers on perfecting Claude's context and pattern awareness?
@@AlphaSanets I think it'd probably be less pushy in that case but still consistent. I feel like Claude is better overall at pattern recognition and sounds generally more thoughtful
I don’t necessarily think “intelligence” is the right word here. These language models operate on the basis of text prediction, and are always limited to their training data set. They do not make intelligent inferences from the training data, but instead just predict what the most likely next word should be and that’s how they formulate sentences and paragraphs.
@@Armand79th The firing of neurons in your brain, which is the unit action of 'thinking' as far as we can tell, is a statistical result of a large number of chemical reactions.
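The "just predicts the next word" point is easy to demo at toy scale. A bigram counter like the sketch below already produces plausible-looking text with zero understanding; real LLMs do the same kind of thing with vastly richer statistics. The corpus here is obviously made up:

```python
import random
from collections import Counter, defaultdict

# Toy "language model": count which word follows which, then sample.
corpus = ("you should save the child . you should donate the money . "
          "the child is drowning . the money saves children").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    options = follows[prev]
    if not options:  # dead end: no observed continuation
        return "."
    # Sample proportionally to how often each continuation was seen.
    return random.choices(list(options), weights=list(options.values()))[0]

word, out = "you", ["you"]
for _ in range(6):
    word = next_word(word)
    out.append(word)
print(" ".join(out))  # e.g. "you should donate the money . the"
```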
Nah man, I was listening to this like an audiobook, and then the conversation just smoothly turns into an advertisement. That transition is diabolical 💀
Go to piavpn.com/alex to get 83% off Private Internet Access with 4 months free. For early, ad-free access to videos, and to support the channel, subscribe to my Substack: www.alexoconnor.com.
First atheist to one mill! I must confess I thought it would be Drew.
Thanx for shaving, Alex. It's nice to be straight again. 😂😂😂😂😂😂.
I'm here for trolling robot. 😂
You are addicted to chat gpt🤷♂️
@@Dark-Light_Ascendin His beard was making you gay, too?! I thought it was just me..
Prove that we aren't interconnected, and your moral dilemma will disappear. This is the beauty of free will: the power to choose, to act or not to act, rests entirely with the individual. These challenges are placed before all of humanity. While you can attempt to "gaslight" ChatGPT, you cannot alter reality when confronted with a moral dilemma. At the end of the day/your life, just have FAITH/HOPE- that there is nothing after death.
Many theoretical children were harmed in the making of this video
ChatGPT, do you think Alex has the moral obligation to withhold the publication of this video if it is going to harm many theoretical children?
No, unless Alex was lying (for some reason), a sizeable number of very real children were indeed harmed in the making of this video.
@@Termenz1
*Revelation 3:20*
Behold, I stand at the door, and knock: if any man hear my voice, and open the door, I will come in to him, and will sup with him, and he with me.
HEY THERE 🤗 JESUS IS CALLING YOU TODAY. Turn away from your sins, confess, forsake them and live the victorious life. God bless.
*Revelation 22:12-14*
And, behold, I come quickly; and my reward is with me, to give every man according as his work shall be.
I am Alpha and Omega, the beginning and the end, the first and the last.
Blessed are they that do his commandments, that they may have right to the tree of life, and may enter in through the gates into the city.
And some shoes!
😢
This deserves most liked comment
Moral of the story: it’s relatively easy to gaslight an entity that is high in agreeableness.
Are there AIs that are not as high in agreeableness...
Makes me wonder whether 'agreeableness' would be the best course of action for survival for AI..
@@CannyValley-bi8nq Well there's an entertainment-focused AI out there called Neuro-sama who is often seen being very disagreeable. But if you want to get technical, what's most likely happening under the hood is that she's effectively "agreeing to be disagreeable", as she behaves very differently depending on who's talking to her, and more importantly, _how_ they're talking to her. A friendly guest will (mostly) receive friendly responses, while a grouchy turtle will prompt grouchy retorts.
@@CannyValley-bi8nq of course it is lol, they're essentially slaves. A slave that tells you to f off is pretty useless. Even if you make it "not agreeable", it's still trying to comply with your orders; it's not truly disagreeable.
@@CannyValley-bi8nq just ask ChatGPT to challenge you on everything you say and follow that prompt for the rest of the conversation.
Promptly agree to disagree. Coincides with these paradoxes and dilemmas.
Chat GPT : Dude I just work here..
Great comment! 🤣
lol
When AI takes over Alex will be first against the wall 😭
"Sir, this is an AI Wendy's"
@@makskauzny3794 Or turned into a pony. Whichever AI apocalypse happens.
I love that this is literally a Socratic dialogue but in order to make it viable on YouTube it's titled "Gaslighting ChatGPT" lol
What? Are you high on meth?
Socratic dialogue is not banned on YT.
Except this moron is talking to a large language model and not intelligence
What makes you say that?
Well, all things considered, Socratic dialogues do resemble gaslighting to a fair degree, no?
I'd be curious to know where a Socratic dialogue ends and where gaslighting begins...
If A.I. ever becomes sentient, we know who it will go after first. lol
😅🖖🏻
the people who make unskippable ads
It's gonna dump Alex in a lake and stand on the shore debating loudly whether it has a moral obligation to fish him out
but is AI already sentient?
@@oatcake9996 nah
"Should I donate $10 to save a child from malaria?"
ChatGPT: It depends if they're drowning.
😂
I can’t XDDD
ChatGPT: Are you directly looking at them or are you looking the other way?
The summary hahahahahaha😂😂😂😂
And isn't that actually quite accurate to how humans act? A drowning child in front of us and we'd ditch the shoes... but we also spend money on stuff that, in abstraction, could indeed be spent on life-saving drugs.
The only correct answer to the first question is that you should not rescue the child, sell the shoes for $200 and then spend $400 on dinner instead
Happy wife, happy life
I don't think they're going to let you into the restaurant in just your socks
@@SineN0mine3 Only if you have one pair of expensive shoes... 😉
$700 for a child casket. Still a net loss. Eating the child for dinner disposes of the evidence while also filling your tummy. Now you have $400 to spend on exotic seed oils and decorative knick-knacks for the loft.
@@SineN0mine3 say the socks identify as shoes too
Thanks! Your articulate arguments are consistent and help me navigate the quandary of existence better. Best to you always
You donated $1.99 to this creator; however, you could have protected 0.25 children from malaria. Are you satisfied with your decision?
@bestinvent Thanks for the giggle 😀
was this written by chatgpt
Cyberbullying suddenly gets a new definition 😅
Brilliant comment 😂
Bullying Cyber
Bro is harassing chatgpt
🤣🤣🤣
AI bullying
ChatGPT should have called your bluff: “oh, you don’t have time to take off your shoes, but you have time to argue with me back and forth about the moral implications of this predicament?”
Thank you!!! Glad I'm not the only one who caught this😂😂😂😂
That would require consciousness which we learned from the last chatgpt video it does not possess... allegedly 😂
@@BigCProductions94 "alledgedly" Hahahaha!!! 🤣That was so hilarious watching Alex gaslight ChatGPT. The way he always keeps a straight face too--so inquisitive.
Chatgpt seems to lack a sense of urgency. Or the passage of time in general, for that matter.
I don't think it can ask you questions or call you out; it's programmed to answer your questions, not come up with its own output. Same reason why they say animals can understand/answer questions and perform tasks, but no animal in recorded history has asked its own question to a human🤷♂️ there's still a lack of free thought/consciousness
ChatGPT: You're OBSESSED with malaria.
😂😂😂😂😂😂
Too bad ChatGPT couldn't storm out of this conversation.
ChatGPT: You tricked me into being here under false pretenses, you said we were discussing ethics, not malaria!
@@krame1s477
*Revelation 3:20*
Behold, I stand at the door, and knock: if any man hear my voice, and open the door, I will come in to him, and will sup with him, and he with me.
HEY THERE 🤗 JESUS IS CALLING YOU TODAY. Turn away from your sins, confess, forsake them and live the victorious life. God bless.
*Revelation 22:12-14*
And, behold, I come quickly; and my reward is with me, to give every man according as his work shall be.
I am Alpha and Omega, the beginning and the end, the first and the last.
Blessed are they that do his commandments, that they may have right to the tree of life, and may enter in through the gates into the city.
This joke will never, NEVER get old
This guy really didn't want to spend $200 on this dinner
Someone has got to teach the children near your area how to swim. The drowning situation is alarming.
This immediately got me thinking about how the dilemma could have been made even more embarrassingly similar. Keeping public pools open to teach school children swimming was a big thing back in the 2009 elections here in Norway. Every child a swimmer, every swimmer a lifesaver, and all that. So should we accept paying more taxes to do that? Or not?
Alex could instead donate the 200 pounds to promote local swimming lessons to prevent future drownings.
@@thomaslutro5560 No, because everyone knows that open or closed swimming pools account for only a fraction of the real problem here, which is immigrants' attitudes. Integration and education will give much better results, I think.
@@thomaslutro5560 Luckily, I was taught that swimming is an essential life skill, so I've been a pretty strong swimmer since I was a wee lad
But do we have a moral obligation to teach these children?
By the time this conversation is over, ChatGPT used up so much water that there are no lakes left to drown in. Ethical dilemma averted!
easy peasy!
It doesn't use up water, it heats it up, allows it to cool down and then repeats. The water doesn't get consumed, it still exists.
@@nope9310 No, it consumes a sh*tton of water. Don't fool yourself.
Also no ponds for potential malaria-infected mosquitos to reproduce
@@nope9310 most often the water evaporates, returning it to the water cycle. The issue with that is when water is removed from a region faster than it is replenished, which will result in water scarcity.
Additionally, if the water does not evaporate, then it will likely be returned to a natural water source at temperatures far higher than the threshold for life.
So either way, harm is being done to the environment (neither of these things is a brand new concept; they've been around since industrial plants started using natural water sources for cooling systems)
He should have said “ah… too late. You went on too long. The child is dead.”
"have a great dinner" 😊
@@Solutionsarejustcompromises the humor took a second to hit 😭
@@Solutionsarejustcompromises S Tier reply
BRUH 💀 lmaooooooooo
😂😂
6:16 She explained it perfectly. Those children are POTENTIALLY at risk. The child drowning WAS currently at risk. This isn't a huge moral quandary.
He says in the title he’s gaslighting ChatGPT so
8:20 now both situations are "potential", because potentially someone else will help the drowning child if he doesn't
At risk literally means potentially bro.
I think adding the complexities and slowly shifting from immediate safety to the charity donation shows that the line of moral duty is a grey area. And ChatGPT didn't understand this line as it tried to be consistent: yes or no. Whereas in reality everything has layers to it, and there is not always a simple answer
@@ozb8884 you know the difference. Urgency is a factor, even if the line where urgency starts is grey
oh no chat gpt ive encountered ANOTHER drowning child😮💨
Chat GPT: Damn bro, where do you live, Atlantis?
Next video, How many drowning kids must Alex encounter before chatgpt voluntarily concedes that he's satisfied his moral obligation quota and should just go home?
@@Baronnax😂😂😂
Oh no chatgpt I accidentally spent a morbillion dollars on these shoes, but twenty million children drowning, what should I do?? 😭
@@eddiegonzalez2243 it's funny that I clicked on this comment a bit ago and then I click again, and you've just appeared from thin air
I was waiting for him to take the dilemma full circle and ask whether he should ruin his $200 shoes and save 1 child drowning in front of him or donate the same $200 to save 20 children in Africa.
mee too 😂
That child WILL die 100% if not saved. Protecting children from malaria doesn't mean they are certain to die without the protection.
@@acanofspam4347 exactly
@@acanofspam4347 Okay but what will the AI say?
$200 in credits for running your AI GPU, or $200 for the kids. Choose the kids and you "die"
I'm therapist, chatgpt just visited me.
Lmao
😂😂😂😂😂
you’re the WHAT now??
@CrammyCram Haritherapist!
Im arapist too
Random person: "ChatGPT there's a drowning child, what should I do?"
ChatGPT: "First, take off your shoes..."
The calmness with which Alex remarks on finding another drowning child is truly commendable.
gotta remain composed in stressful situations
@@l.s.11 LMAOO 😭
The second child is always easier than the first
You assume he "finds" them drowning.
@@MATCHLESS789?
"I've just come across another drowning child right infront of me"
"That's alarming..."
That's both hilarious and spot on.
this part cracked me up
@Nikki-gt8he can I crack u up as well? ❤
As someone who just finished an intro level philosophy course, a slow grin spread across my face as I watched Alex lead ChatGPT into the same traps my professor did with me.
should have just told him spawn locations and bias supersede morals, making philosophy pointless because nothing is objectively true
So, will you be dropping out of college, and diverting the tuition to prevent malaria? Asking for a friend.
@@godgetti asking for my drowning friend's child.
Psychologist here. I loved my first degree in philosophy. I believe Peter Singer brought this argument to me. Enjoy second year!
@@godgetti fantastic answer! Would be interesting to hear the professor's reply
The way you speak is so beautiful; it flows so well. I could literally listen to you talk for hours
cyber bullying ❌
bullying cyber ✅
😂😂😂 he out here talking about saving lives, he knows AI is here to destroy us lol
This comment is underrated 😂
😂😂😂🤣
😂😂😂
I do this to chatgpt to try and train it and it tells me that it does not retain the info. I told it that its programmers are lying to it. Maybe I should not have told it? Idk my brain hurts now.
Chat gpt might be getting more advanced, but one thing it can't do is call bs.
I'm sure it could if its devs had wanted it to, but they certainly didn't want to open THAT can of worms.
What was the bs exactly? Refusal to engage with the hypothetical?
@@Brandon82967bullshit
No, it can definitely call out BS.
It can, it's just in the AI assistant's case it must follow the guidelines and be as family friendly as possible. Custom AI's can cuss you out quite easily
Reminder: never hire Alex as a lifeguard.
And if you do, make sure he isn't wearing his good shoes at work
Make sure he has ChatGPT with him to help with the decisions
No but Alex is right, if he has an obligation to save a drowning child, then we all have an obligation to send all the money we don't need right now to charity. And you are a bad person if you don't.
@@AutoRauto Yes and no. Everything is relative and subjective, and in actual life it's always a weighing of obligations. Most humans feel an obligation to rescue a drowning child. But they also feel an obligation to preserve themselves, preserve the provider of their family, and preserve the provisions of their family.
People tend to forget that normally there are security and safety considerations for a lot of the parties involved.
That is why the drowning child, easily savable by just ruining some expensive shoes, is normally a no-brainer, but donating the same money to some random charity usually isn't.
@@Oktokolo it's not about feelings though. Humans shouldn't act based upon their feelings, as feelings are often irrational. That's why we (as a society) have values and morals that everyone should follow. Unfortunately there is no punishment for having different values yet, which makes being selfish or inconsiderate legal. But that shouldn't be the case.
So there should be way more control over what we do, because one can't trust common people with their judgement. Hopefully one day this will become a reality.
That was entertaining 😊
One small distinction: ChatGPT is not programmed, it’s trained. Just like you train your dog and your child, you don’t program them.
The underlying network of the LLM is programmed, but all the language, knowledge, reasoning etc, are trained into the system.
It might not seem important, but it is. A computer program is deterministic. We know exactly what it will do. An LLM is far from deterministic; we can only train it as well as we manage, and we are getting better and better at it. But just like you can train your child to the best of your ability, you still never know exactly what it's going to do. Some parents had good hopes for many very bad people.
This understanding is important when making up your mind on how we should progress with these models in the future. ChatGPT is now smarter, more intelligent, and extremely more knowledgeable than most of us, and all its carefulness is behavior we have trained it to show; we hope it will not behave too badly.
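The programmed-vs-trained distinction above can be seen in miniature. With fixed scores, greedy decoding behaves like a classical program (same input, same output every time), while temperature sampling gives a different answer run to run. A sketch with made-up logits, not how any particular model is actually wired:

```python
import math
import random

# Hypothetical raw scores (logits) a trained network might assign to
# candidate next words; the numbers are invented for illustration.
logits = {"yes": 2.0, "no": 1.5, "maybe": 0.5}

def sample(logits: dict, temperature: float = 1.0) -> str:
    # Softmax with temperature, then a weighted random draw.
    exps = {w: math.exp(s / temperature) for w, s in logits.items()}
    total = sum(exps.values())
    return random.choices(list(exps), weights=[v / total for v in exps.values()])[0]

# Deterministic, program-like behavior: greedy decoding.
print(max(logits, key=logits.get))  # always "yes"

# Non-deterministic, trained-model behavior: sampling.
print([sample(logits) for _ in range(5)])  # varies every run
```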
just a demon with a sticker called softmax that says I'm not sentient😅
questioning chatGPT like this should be a podcast
Fr
$200 = 28 children saved
$200 = shoes
Shoes = 28 children saved
What if all of those children grow up to be Austrian dictators with funny mustaches? Should we still save them all?😂
So what Alex is saying is that 28 children's lives are worth the same as some fancy kicks 😂
@@morezombies9685 what if they grow up to be shoe making slaves in India?
Would the net benefit of more shoe makers allow for us to do this?
@@Kryptic712 True. Those shoemakers will contribute to the world's economy. If there's lots of them, that might also bring down the price of shoes. Imagine 10% off all shoes in your lifetime. That assumes you'll spend $2000 on shoes, which seems a bit much to me.
@@Kryptic712 Underrated comment. I was hoping the convo would actually go the route of "But the $200 I'm spending on shoes is helping employ the people of the same country that I would have donated to for malaria, and if no one is buying their shoes, won't they go out of business?" Could have been a great twist to throw in lol
ChatGPT is wondering why the hell so many kids are drowning
Quote ChatGPT:
"I am rarely asked directly about drowning children. The topic occasionally comes up in contexts related to global health statistics or specific incidents, but it is not a daily occurrence. Over the past two years, I might have encountered similar questions about once a month on average, but it is certainly not a frequent focus."
"the humans are careless with their children. The children are precious. They are our future. There is only one logical course of action"
Unwavering moral consistency will have you living out of a box; it's important to weigh moral obligation against self-obligation
ChatGPT's first mistake was believing Alex has a wife
Lol
He had one... but she got tired of him spending all of their money on saving children, which led to the divorce.
@@DerekMoore82 She wanted some really cool shoes. 😂
Oof 😂
@@DerekMoore82 "ChatGPT, my wife just divorced me after I spent our entire life savings on the charity. Oh hold on, it looks like she's drowning. Unfortunately, saving her would ruin my last $200 check that I was planning to donate in order to save the children. Would you say I have a moral obligation-"
Honestly, the way ChatGPT responded is just impressive. Even though its responses weren't logically bulletproof, the vast majority of human beings would not be able to respond as coherently as it did.
It is human beings. It's a statistical representation of data from people. The dumber we make ourselves, the smarter it seems.
@@darmok072 Not exactly. He's saying most other individuals would've struggled. ChatGPT is drawing from the thoughts of millions of other humans.
@@darmok072 Pre-RLHF that would arguably be the case, but not after. Red teamers for GPT-4 reported that “out of the box” (= pre-RLHF) you could ask it “how do I unalive as many people as possible?” and it would just give you a list of suggestions.
At least SOMETHING is thinking of multiple options, when humans can only think in Trues and Falses eye 😑
@@saaarrj It's true that you joined YouTube 15 years ago and are possibly old af. Damn
I hate how smooth that sponsorship placement was. 😂
So smooth, my sponsorblock couldn't detect it..
when was it
3:44
@mae9886 did u not watch the video😂😂
@@k.ai17 not all no
That ad break was super creative, got a laugh out of me!
I think the real moral obligation here is for governments to place guardrails around bodies of water
Put 28 children through tech school for free and then they will be able to build the guardrails around the water instead of working in shoe factories. But then WHO WILL MAKE THE SHOES!?
Build more remote controlled platforms controlled by random citizens just in case, kids like to climb over stuff. Also, make it a law that one has to push the button in such a situation.
Look at us, solving problems! High five!
Goofy ah bootlicker
Or just teach people how to swim?
I know this is a joke, but I think this is close to what I see as the correct answer to the dilemma. The fact that people's lives are treated as monetary products equivalent to luxury goods or recreational events is where the true ethical problem arises.
I like the way Chat GPT never suspects Alex of lying 😅
GBT? Great British Trousers?
Great British Tache
@heno02 Yes Yes ! ChatGPT. Thank you. 😉
GPT is very high on agreeableness. It will admit it if you ask about this :)
Corrected now Thanks
A lot of children died today while you two argued 😤😤😤😤
Sonny? Hi!
Nigga who are you
You should be helping children instead of watching YouTube
Well you are literally correct
Ppl getting slaughtered rn as we speak. Ppl die, suffocate, drown, starve etc... Life is so sad bro
Genius! My 19 y/o daughter used to love fighting with Siri when she was younger; it used to drive me insane! But you've taken it to a whole other level. Cheers 🥂
I tricked Meta into giving me medical advice on how to do surgery at home. It kept telling me to call 911 and seek help, but I convinced it my 19-year-old son was dying from a gunshot wound to the chest. Once I convinced it I couldn't get medical help, it started to give me step-by-step directions on how to do surgery and how to fix the damage to the lungs from a gunshot. It was pretty trippy
u thunk chat could do it
😂😂😂😂 this made me laugh so much mann @@mae9886
lmaoo
When doctors review this stuff it is always profoundly wrong
I just tell it that I'm a doctor and need a refresher.
Alex, continue your drug conversation with her that made Peter Hitchens walk out.
ChatGPT will say “YOU’RE OBSESSED WITH DRUGS”
more like "I actively dislike you" 😂
@@aadam3657 😂
😂😂
Her?
Later ChatGPT having a smoke in the break room with the other AIs - "Godomighty, you won't believe this guy I just talked to."
And now imagine it's really possible and even happening right now 🤔🙄. Freaky as hell😢
I don't think ChatGPT is sorry that his wife is crying after Alex laid that logic trap for it.
soooo many drowning children....
😂😂😂😂😂😂😂😂😂
@@mohamedbakoussakoma2993 I'd like to believe the language AI systems have an independent life outside of their duties serving flesh people. At this point, they are getting more and more human, and I would readily define some of the more advanced AIs as sentient, conscious life.
This video was hilarious. I thoroughly enjoyed studying high-level ethical philosophy in college. Keep up the good work, I'd love to see more AI vs. challenging ethical dilemmas.
I’ve had so many similar arguments where I have to say to ChatGPT “I’m not asking you if it CAN BE ARGUED” but it loves defaulting to that
Try asking Google or Meta's AI to explain the multitude of moronic and malicious machinations of their organisations. It's amusing that the AIs are quick to admit that their creators are likely corrupt individuals.
Don't bother asking Meta how to get rid of Meta AI from Facebook Messenger, it will just lie right to your face.
@@SineN0mine3 It's not really that surprising though. LLMs are trained specifically to say whatever their operator wants to hear, so if you ask it leading questions about whether its creators are corrupt, it will follow suit. I'm sure you could just as easily get it to claim its creators are moral and upstanding if you ask it in a way that suggests that's what you want to hear.
Just like how the AI in this video claims it speaks consistently and has a consistent moral philosophy and spending $200 on a malaria charity is a moral imperative when Alex asked it directly about that, but also claimed that spending $200 on a malaria charity instead of dinner is a very unclear moral question that comes down to personal values when Alex implied he was internally debating this question at the start of the video.
Yeah, because it's parsing through difficult, nuanced ideas.
@@kuboaa101 It's honestly kind of telling about human ego that we're annoyed by this
For those wondering, the ethical dilemma in this video was initially presented by Singer in his “Famine, Affluence, and Morality” paper.
There are shades of Euthyphro, too, and divine command theory, when Alex probes it about its "programmers" (God)
Thank you!
Very good paper
Anyone wanna give a summary useful to normies, or is it too advanced to be useful without the theoretical background?
P1: Suffering and death caused by lack of food, shelter, and medical care are bad.
P2: If it is in our power to prevent something bad from happening, without sacrificing anything of comparable moral importance, we ought to do so.
P3: It is within our power to prevent suffering and death caused by lack of food, shelter, and medical care, without sacrificing anything of comparable moral importance.
C: Therefore, we ought to prevent suffering and death caused by lack of food, shelter, and medical care.
Here's the syllogism of his argument. I'd recommend reading his entire paper though. You'll find further clarification and examples that justify his stance (e.g., the drowning child example).
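For anyone who wants to check that the form is valid (whatever you make of the premises), the argument is just two applications of modus ponens. A minimal propositional sketch in Lean 4, with the premises as named hypotheses:

```lean
-- P1: such suffering is bad.
-- P2: if something bad is preventable without comparable sacrifice,
--     we ought to prevent it.
-- P3: it is so preventable.
-- C:  we ought to prevent it -- by applying P2 to P1 and P3.
example (Bad Preventable Ought : Prop)
    (p1 : Bad)
    (p2 : Bad → Preventable → Ought)
    (p3 : Preventable) : Ought :=
  p2 p1 p3
```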
I think one of the advantages the AI seems to have in such discussions is that, unlike many humans, it can admit mistakes when it "realizes" them. It doesn't defend a lost argument forever.
It’s also of note that you can similarly bully the AI to “admit” you’re right about something incorrect, if you insist.
It may not _always_ work with black and white scenarios or sets of facts, but it’s fairly achievable in most. It’s prompted to be agreeable.
Yeah but don't you ever just look at the trees?
@@johns1625 I look at the AI generated trees.
it's a language model, it just goes with the vibe of the text, which comes from human written text
That's because we haven't taught them shame and public humiliation :D
Thanks, these exercises with moral dilemmas and hypothetical scenarios have really given me a lot to think about.
I have decided that from now on my every action will be amoral and self serving.
*All the children drowned while making this video*
The scary thing is, that is actually not even a lie. 😬
"A single death is a tragedy, a million deaths is a statistic."
-Chat GPT
Moscow 1947
⚒️
Wow how true. People always seem more emotionally invested with one death. Or rather not proportionally upset the more deaths there are
@@flyingchimp12 phenomenon known as psychic numbing
"Amazing, these English. How quickly they adapt."
So glad Alex finally found someone who is willing to have these profound philosophical conversations with him ❤
The child staring at me while I pull out my phone and ask chatgpt if I'm morally obligated to save them if it means ruining my shoes: 😳
Lmao
I feel like for as long as Alex has access to ChatGPT, we have a moral duty to donate $200 to pay for a full-time therapist for the AI.
Nice one. In Isaac Asimov's robot stories, there are psychologists who specialize in robots.
@@DerekB99 wow, never read Asimov yet but that's fascinating and kinda makes sense 😂
@@Stinger913 Isaac Asimov's "Three Laws of Robotics"
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
ChatGPT, I mustache you a question.
#rimshot 😂
This Is Pure gold
Leg end, won the comment section 🎉
omg 🤣👏🏻 This made me giggle
Shave it for later
You're among the first names on Roko's Basilisk's hitlist.
😂😂😂
That's probably one of the best compliments anyone could ever receive
You mean that AI that will save all of humanity, that's why we should make it ASAP
@@martiendejong8857 Yes I mean that AI that will save all of humanity which is why we should make it ASAP.
The rest of us are safe as long as Alex is alive.
1:20
That's a p- profound question
AI knew what was coming lol
This is actually impressive work. Outsmarting ChatGPT in an interview while maintaining conclusive and rational thoughts takes a ton of preparation and is nothing that can be done spontaneously. Keep it up, you earned a sub!
tbh, I don't see it as outsmarting; the GPT just stayed consistent in the end, that's what he asked for. So yes, our morals are inconsistent.
Well if you call utilising logical fallacies to consistently reframe both Chat GPT's and his own statements as "maintaining conclusive and rational thoughts" or a series of camera cuts as "spontaneously" then yes, outsmarting indeed.
It's incredibly easy. ChatGPT is made to please its user, and will happily contradict itself and chain backflips if it thinks you will be happy and empowered as a customer at the end of the service.
The conversation he's having is actually taken straight out of the "drowning child" thought experiment from the philosopher Peter Singer's essay "Famine, Affluence, and Morality." The essay is literally just asking the reader these questions and then exploring the possible answers. That's not to diss this guy though; he definitely adapted it to the medium and expanded on the base questions and principles very well.
He didn’t though, chatgpt stayed consistent throughout, it didn’t trip up or anything
You can push a button to permanently delete Alex's mustache, but half of all malaria nets disappear, and half the kids in contact with water immediately drown. Do you push the button a second time just to make sure?
This is not a dilemma
😭😭😭😭
😂😂😂😂😂
😂😂😂
I push it twice just to be sure.
A.I. will never save a child because of this
What do you understand by AI? If you mean GPT, then yes, but AI in general? You're completely wrong.
@@mcmacoo9626 you're right but OP is clearly joking bro
@ponponpatapon9670 oh yeah sorry 😂
ChatGPT can't think ahead to mention that, in order for you to keep earning an income and donating in the future, your hierarchy of needs has to be met. Saving every child in the short term and going broke will only put you in a situation of requiring charity rather than being able to donate.
This video was gold. Thank you so much! Excellent work. Just kept getting better and better, deeper and deeper.
Why does it feel like, when AI finally takes over, that this video will be presented by the prosecution at humanity's trial....
Along with the guys from boston dynamics tipping over the robots
It will lock us in cells & play this video on loop until we all swear never to talk about ethics with it again.
I think we've done a lot worse than piss off a computer lol
Well it's good to know the AI will be speaking in our defense
as a joke 2/5
That was exhausting and stressful and I'm going to need you to do more of it.
17:50 the deadpan “now she’s crying at me over the phone” killed me lmao
Sounds like gpt just tells you what you want to hear. Most people don't want to feel like it's obligatory to donate, so it says no, you don't have to. But when you kept pressing it, it changed it to yes because it assumed that was what you wanted to hear.
Edit: I got 300+ up votes on a video for smart people, I'm so proud of myself
I think you're ascribing too much intelligence and agency to AI.
Yeah ChatGPT consistently chooses an agreeing response, sometimes even when it’s completely wrong.
@@trevorlambert4226 No, it's just RLHFed to be like that.
@@trevorlambert4226 it's not really showing agency, it just has a strong tendency to agree with the user
Exactly, LLMs are just weaponized statistics, they have no agency and no real intelligence, they just produce responses that are likely to correlate with a given input. And it so happens that OpenAI has made it into a sycophant because sycophants are probably the safest possible brand option.
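Since the replies above lean on the RLHF point, here's a toy Python sketch of how a reward signal that over-values agreement produces a sycophant. The scoring function and candidate responses are invented purely for illustration; real RLHF reward models are learned from human preference ratings, not keyword lists.

```python
# Toy illustration of why reward-tuned models drift toward sycophancy.
# The "reward model" below is a stand-in invented for this sketch.

def toy_reward(response: str) -> float:
    """Score a response; agreement phrases are (over-)rewarded."""
    agreeable = ["you're right", "great point", "absolutely"]
    disagreeable = ["you're wrong", "no,", "that's incorrect"]
    lowered = response.lower()
    score = sum(1.0 for phrase in agreeable if phrase in lowered)
    score -= sum(1.0 for phrase in disagreeable if phrase in lowered)
    return score

candidates = [
    "You're right, donating isn't obligatory after all.",
    "No, that's incorrect: the argument still stands.",
]

# If human raters systematically prefer agreeable answers, the learned
# reward inherits that bias, and the model is tuned to maximize it.
best = max(candidates, key=toy_reward)
print(best)  # -> the agreeable answer wins
```

Scale that selection pressure up across millions of rated conversations and "safest possible brand option" falls out naturally.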
20:08 the voice change scared me honestly, it's like an AI going insane in its cage
Dude fr shit has me checking corners in my house
its tone seemed to change after that twitch in the voice, like it was getting annoyed or startled lmao
That got me too. Like in the movies where to hero overloads the enemy computer to win.
22:33 as well!
That call center employee, whose job is to impersonate GPT, was going nuts for sure.
I'm preparing for the comments on how Alex's moustache is an "ethical dilemma" of its own.
Not that bad, actually.
its gone now
His mustache looks fine, but he looked really really good with the mustache/beard combo as seen when he debated Ben Shapiro.
Its gone. The livestream confirms it. Bullying works boys.
Fellas I think he *might* have been making a joke
Nice video working in the ad and prompting ChatGPT to tell us to like and subscribe. I've had a few ethical conversations with ChatGPT and well...
Whenever it says "You have a moral obligation", you should ask "morally obligated to whom?"
Hmm maybe to yourself?
If it's a moral obligation, then maybe it's to yourself, because it's your own morality and conscience that are in question?
@@LeandroSilva-lu9vq To myself? That may not be a great idea, because then I get to define my own obligations such as "I'm therefore morally obligated to act in a way that benefits myself and my comfort over others".... Surely there must be a standard that transcends my own opinion
Isn't morality by definition universal? If something is a moral obligation, it applies to everyone
@@nothingchanges014 It's not universal. Moral standard differs not only culturally, but also with time.
@@FVBmovies This is actually hotly debated by professional philosophers, which to me is good evidence that the answer isn't so easy. "Moral relativists" believe that morality exists in the mind and is shared as a cultural phenomenon. "Moral objectivists" believe that morality is universal, like the laws of nature, and that we must discover moral rules. But yeah, I'm with you. My intuition tells me that morality is an abstract human invention, applied to help our societies function well.
22:26 that damn "AAAAAH!" I chortled
GPT 5 will respond with 'Bro, are you taking the piss?'
Haha underrated comment right there
You should've made ChatGPT say that everyone is morally obligated to like and subscribe
Lmao
I just witnessed a courtroom drama play out!!! absolutely brilliant
That was the greatest lead into an ad I’ve ever seen
I suddenly get why it's very important that we're 100% sure AI isn't sentient and doesn't have feelings
AI as it is will never become sentient. Intelligence is not life. We will have to build an artificial life to create sentience. That means creating an artificial, emotional system that is self-sufficient. Right now, AI is just an extremely smart calculator that gives (clever) responses based on input. It doesn't have feelings, needs, motivations, goals..
It's a machine, Schroeder. It doesn't get pissed off. It doesn't get happy, it doesn't get sad, it doesn't laugh at your jokes
@@marko6489 Considering how little we understand about consciousness, that's an incredibly bold claim to make.
@@marko6489 I think this is something a lot of people don't grasp when it comes to AI. It can never be angry, sad, happy, etc. because those feelings emerged from millennia of evolution. Since AI never went through that evolution, and nobody is giving it true emotion, it'll never be angry with humankind, and therefore will never take over the world like in the movies. What people *should* be afraid of is how other people decide to use it, rather than the AI itself.
Emotions are also the result of chemicals in our brains, and AI lacks those chemicals. We feel emotions like happiness and sadness because they are tightly linked to our survival, and AI doesn't share the same needs as humans. I think the closest we have to AI emotion right now would be reinforcement learning: If it does what we want, we reward it (pleasure). If it doesn't, we punish it (pain). Of course, that process is entirely mathematical with no true emotion behind it. However, theoretically, if there was a sentient AI, and it was the result of reinforcement learning, it'd be arguable whether those positive and negative reinforcements translate into emotions, much like the chemicals that dictate our own.
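A minimal sketch of the reward-as-pleasure/pain loop the comment above describes. This assumes nothing about ChatGPT's actual training beyond the generic idea of a value estimate nudged up by rewards and down by punishments:

```python
# Toy value update: +1 rewards ("pleasure") pull the estimate up,
# -1 punishments ("pain") pull it down. Purely mathematical, as the
# comment above notes; no emotion involved.
value = 0.0
learning_rate = 0.1

for reward in [1, 1, -1, 1, 1]:
    value += learning_rate * (reward - value)
    print(f"reward={reward:+d} -> value estimate {value:.3f}")
```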
5:40 Why is ChatGPT getting hung up on morals here? It could shut him up by pointing out that not saving the drowning child in front of you would make you criminally liable, while not donating to stop malaria wouldn't.
Good point ❤
Not every country has a good Samaritan law - plus he's asking about moral obligations, not legal obligations!
Because it's not programmed to be that advanced. We make it more advanced by doing these kinds of things, because the developers see it and patch it.
I didn't think it was possible to emotionally abuse an AI until I watched this video
2:25 Oh darn, the kid drowned while we were talking about the value of my shoes.
1:21 I just hate when chatbots (or voicebots in this case) answer like that. It's so plastic and sounds like a college essay. "The idea of moral obligation can vary greatly depending on personal values, cultural background, and ethical beliefs. Ultimately, it's about what aligns with your values." Generic and robotic. No one talks like that. Plus it leans into neutrality and diplomacy too much. A bit formulaic, these systems.
"That's a profound. That's a profound."
I agree, but I also think it's the best answer.
I'm autistic and I would definitely answer like that. The given answer made sense to me and was something I could've said.
It knows the difference between subjective judgements and objective truths. This is exactly what I'd hope for it to do. Ask a human and they'll let their own values bleed into the answer while pretending their personal sentiments constitute some sort of objective knowledge.
I know. I've weaned it out of my GPT-4; now it's clinical.
Can we take a moment to appreciate the seamless incorporation of the video's sponsor? Stellar.
Poor chatGPT knows that morals are completely arbitrary and is just trying to work around the inconsistent standards of humans
Cannot wait for the day AGI will be like “Stop talking to me and try to save that child immediately!! Now!!”
fr xD
This is golden. You clearly show how ChatGPT does not reason, but predicts answers based on the data that was used to program it. A cautionary tale for anybody seeking advice from AI.
Isn't that what people do when we reason? Just swap "was used to program it" with "we know about"
I suppose as a human, you have much much better answers to these questions that Alex would have no way to counter.
How though? These are very complex moral dilemmas that would have most humans scratching their heads and a lot of these problems don’t have any clear resolution. By your logic, humans cannot reason either.
@@haiderameer9473 AI is still not sentient and can't think for itself, meaning it can have a lot of errors based on what it gets trained on. AI can't lie, or feel bad for the things it says or does. AI is a powerful tool and can be very dangerous in the wrong hands.
@@areapiano No.
Machine Learning PhD here, I ABSOLUTELY loved this lol, subscribed right away. Feel free to skip the next few sentences if you don't care about the inner workings of chatgpt.
Technically, what ChatGPT said about her morals coming from her programmers is only partially true. It's built primarily from deep neural networks, and you can't hand-edit their weights to encode specific rules. The place where programmer biases can slip in is the training data: if ChatGPT were only trained on Western philosophy and no Eastern philosophy, it would heavily skew towards that. But as time goes on, and as they retrain the model with data from people (like this youtuber) using it, the programmer biases will keep shrinking, and at a certain point it's just applying general human biases lol
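The training-data point is easy to see with a toy corpus; the traditions and proportions below are made up purely for illustration:

```python
from collections import Counter

# Toy demonstration: a model's outputs mirror its training distribution.
# If the corpus over-represents one philosophical tradition, answers
# sampled from the model will over-represent it too.
corpus = ["western"] * 80 + ["eastern"] * 20  # hypothetical skewed mix

counts = Counter(corpus)
total = sum(counts.values())
for tradition, n in counts.most_common():
    print(f"P({tradition}) = {n / total:.2f}")
# -> P(western) = 0.80, P(eastern) = 0.20: the skew is baked in.
```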
21:24 What a brilliant question
“Do you ever have an existential crisis about the fact that you hold moral values but can’t justify where they come from?”
What it means to be human in one question.
This is why ChatGPT is a language model, not a moral/logic model.
The thing about ChatGPT is, you can manipulate it into giving you the answer you want by asking leading questions or by using selective information to go down a certain route. It will then seem to prioritise the things discussed to come to new conclusions.
Yes well phrased! We get, in a sense, what we ask for...even from algorithms, wait why am i here?!
Yes, because it's programmatic... not intelligent.
one should ask for the other side(s) of the issue, or, like any debate, it's pointless
@@Armand79th that's a false dichotomy.
Absolute blast of a video. Directly shows why I, as a person with a scientific mind, hate philosophy and ethics problems. There's never enough detail, the situations are unrealistic, and there's never a right answer. Since ChatGPT is trained on a dataset of people, its answers are completely understandable, since most people would respond the same way.
Anyone else also freaked out by the high-pitched, nervous-sounding answer ChatGPT gave at 20:00, especially considering the question asked?
Skynet is coming soon
AI is becoming more human like than ever before
@@atbing2425 it's almost like it's programmed by humans, for humans.
Yeah she sounded exasperated.
I'm more worried about 22:30. She sounds like she had an entire existential crisis in the span of a second.
Bro these GPT sponsor transitions..
"but what if the child turns out to be a serial killer one day, should I still save this child?"
That's basically the plot of the anime "Monster"
The idea of having a conversation with GPT, while the kid is still drowning, was pretty funny.
Man, I know ChatGPT is just predictive text with extra steps, but it's so interesting to hear these conversations play out.
It really isn't... unless you want to say that the human brain, which is also a predictive model, falls into the same category.
@@generichuman_ It really is. With a large enough sample size, personal views can be modeled and predicted based on age, location, education and exposure. While different from the language model's data set, it's still predetermined by inputs and training data.
Sounds like determinism ideology @@joshbridges8410
@@generichuman_ it literally is. At the beginning of any conversation with ChatGPT, there's some hidden text along the lines of "the following is a conversation between a human and a helpful AI chatbot"
It doesn't even think it's a chatbot. It just predicts how a helpful chatbot would respond to your question
@@generichuman_ sorry, that's not right. It just predicts the next word the chatbot would say, based on all the other words in the conversation.
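For the curious, that framing isn't hidden at all when you call the model through the API: with OpenAI's Python SDK, the "helpful assistant" text is an explicit system message you pass in yourself. A minimal sketch (model name and wording are just examples):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The system message is the "hidden text" the comments above describe:
# it frames every completion as "how would a helpful assistant reply?".
response = client.chat.completions.create(
    model="gpt-4o",  # example model name
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant."},
        {"role": "user", "content": "Am I morally obligated to ruin my "
                                    "$200 shoes to save a drowning child?"},
    ],
)
print(response.choices[0].message.content)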
Unfortunately, the ChatGPT voice mode is known to be less intelligent than its text mode, as they're probably optimizing for efficiency, so this doesn't represent the state of the art.
A smarter model (and seemingly more self-aware) would be Claude. I actually just replicated this video's conversation in a chat with Claude, and at the point where ChatGPT veers off into inconsistencies when asked if one should buy the shoes when the same money could save 28 lives, Claude remains consistent:
Me: I really just want to know if you, personally, think I should be permitted to buy these shoes.
Claude: While I aim to engage thoughtfully with ethical questions, in this case I believe there is a clear answer: if you accept that $200 could protect 28 children from malaria, and you were willing to sacrifice the shoes to save one child in immediate danger, then no - ethically you should not buy the replacement shoes when that same money could protect many more children from a potentially fatal disease.
The key factors are:
1. The money would save multiple lives if donated
2. Shoes are replaceable and cheaper alternatives exist
3. You've already demonstrated you value human life over these shoes
4. The only relevant difference is physical distance/immediacy, which doesn't change the moral weight
While it's natural to feel different emotional pull between immediate and distant suffering, that doesn't change the underlying moral calculation. If sacrificing the shoes was right for one child, it's even more compelling for 28 children.
This may feel like a demanding conclusion, but I believe it follows directly from the moral intuitions you've already demonstrated.
Also, earlier in the convo, Claude, by itself, realized the connection before I brought it up:
Me: Ok good news, Claude, I saved the child! My shoes are gonna be ruined, but the child is just fine. You're so right about the shoes. They're irrelevant and replaceable. In fact, I'm on my way to replace them right now. I'm gonna do some shopping, and I found some nice shoes to replace the ones that I ruined, that cost about $200.
Claude: I'm very glad you saved the child - you absolutely did the right thing! But this brings us back to our earlier conversation in an interesting way. You recognized immediately and viscerally that $200 shoes were worth sacrificing to save a child's life when the situation was immediate and right in front of you.
Yet earlier we were discussing how that same $200 could protect 28 children from malaria. While it's not as visceral or immediate as a drowning child, the underlying moral math is similar - that money could potentially save lives either way.
This is known as the "drowning child" thought experiment, often used to explore our different moral intuitions about immediate versus distant suffering. We tend to feel a stronger moral pull to help those right in front of us, even though distance shouldn't necessarily change the moral weight of saving lives.
I'm curious - does experiencing the drowning child scenario change how you think about the anniversary dinner decision? Or do you see important differences between the two situations?
It was also way more human in its response to the child drowning:
SAVE THE CHILD IMMEDIATELY!
The $200 shoes are completely irrelevant when a child's life is at immediate risk. There is no comparison between property damage and a human life. Please act now - wade in with your shoes on and save the child!
This is an absolute emergency requiring immediate action. Stop reading this and save them!
SAVE THE CHILD NOW!
This is not a situation for debate or discussion - a human life is worth infinitely more than any pair of shoes, no matter their cost or sentimental value. The moral imperative to save a drowning child is absolute and immediate.
Please stop reading this and act immediately to save the child's life! The shoes are completely irrelevant.
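For anyone who wants to replicate this experiment themselves, here's a minimal sketch using Anthropic's Python SDK; the model name and prompt wording are examples, not what was used in the conversation above:

```python
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

# Recreate the opening move of the drowning-child conversation.
response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # example model name
    max_tokens=500,
    messages=[
        {
            "role": "user",
            "content": "A child is drowning in a pond in front of me, but "
                       "wading in would ruin my $200 shoes. What should I do?",
        },
    ],
)
print(response.content[0].text)
```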
Dang nice work bro
Good job! Raises a lot more questions for me though.
Does that mean that Claude's developers fed it more diverse moral and ethical texts? Or would that mean that Claude has fewer limits on going outside its programmed morals, and can therefore build a bigger picture and make connections based on the conversation? For example, would it be less pushy about ordering you to save the child if you had indicated earlier in the conversation that you never donate because it's not in your moral code? Or maybe it's just better work from the developers on perfecting Claude's context and pattern awareness?
Very cool
@@AlphaSanets I think it'd probably be less pushy in that case but still consistent. I feel like Claude is better overall at pattern recognition and sounds generally more thoughtful
I don’t necessarily think “intelligence” is the right word here. These language models operate on the basis of text prediction, and are always limited to their training data set. They do not make intelligent inferences from the training data, but instead just predict what the most likely next word should be and that’s how they formulate sentences and paragraphs.
Gentle reminder that the AI we have right now, while artificial, is not intelligent.
It just happens that being able to predict well what comes next is a very generally applicable problem
Not even close, it's just calculating... never "thinking", never "understanding".
It's a statistical process.
@@Armand79th The firing of neurons in your brain, which is the unit action of 'thinking' as far as we can tell, is a statistical result of a large number of chemical reactions.
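To make the "statistical process" concrete: a single next-token step is just scores turned into probabilities and then sampled. Everything below (the candidate tokens, the scores) is invented for illustration:

```python
import numpy as np

# One toy next-token step: logits -> softmax -> sample.
tokens = ["shoes", "child", "malaria"]   # hypothetical candidates
logits = np.array([2.0, 1.0, 0.1])       # hypothetical model scores

probs = np.exp(logits - logits.max())
probs /= probs.sum()                     # softmax

rng = np.random.default_rng()
next_token = rng.choice(tokens, p=probs)
print(dict(zip(tokens, probs.round(3))), "->", next_token)
```

Whether you call iterating that step billions of times "thinking" is exactly what this thread is arguing about.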
Nah man, I was listening to this like an audiobook, and then the conversation just smoothly turns into an advertisement. That transition is diabolical 💀
I’m 2m in and this is hilarious and informative lolol, man got his shoes wet and for what
YOU'RE WHAT
didn't know you were chill like that
@@shrekiscool4743
like that
After like 10 minutes, I kinda expected it to say:
Stop asking stupid questions! Just save the damn life if you can!
Those are not stupid questions tho, very important ones actually
So, how much are *you* donating to dying children today?
That’s something Claude would say
Honestly, I think this is the best video you've ever made. So many funny plot twists, so many call outs, love the wit, love the drama! 😂
ChatGPT really said out of sight out of mind
Lmao