Gaslighting ChatGPT With Ethical Dilemmas

  • Published: 3 Jan 2025

Comments • 6K

  • @CosmicSkeptic
    @CosmicSkeptic  1 month ago +553

    Go to piavpn.com/alex to get 83% off Private Internet Access with 4 months free. For early, ad-free access to videos, and to support the channel, subscribe to my Substack: www.alexoconnor.com.

    • @PhillipMoore-td5yi
      @PhillipMoore-td5yi 1 month ago +11

      First atheist to one mill! I must confess I thought it would be Drew.

    • @Dark-Light_Ascendin
      @Dark-Light_Ascendin 1 month ago +7

      Thanks for shaving, Alex. It's nice to be straight again. 😂😂😂😂😂😂
      I'm here for the trolling robot. 😂

    • @Metso-ateco
      @Metso-ateco 1 month ago +2

      You are addicted to chat gpt🤷‍♂️

    • @joeylizotte7537
      @joeylizotte7537 1 month ago +3

      @@Dark-Light_Ascendin His beard was making you gay, too?! I thought it was just me..

    • @iDontKnow-fr-fr
      @iDontKnow-fr-fr 1 month ago +1

      Prove that we aren't interconnected, and your moral dilemma will disappear. This is the beauty of free will: the power to choose, to act or not to act, rests entirely with the individual. These challenges are placed before all of humanity. While you can attempt to "gaslight" ChatGPT, you cannot alter reality when confronted with a moral dilemma. At the end of the day/your life, just have FAITH/HOPE- that there is nothing after death.

  • @justintime5021
    @justintime5021 1 month ago +23583

    Many theoretical children were harmed in the making of this video

    • @Termenz1
      @Termenz1 1 month ago +289

      ChatGPT, do you think Alex has the moral obligation to withhold the publication of this video if it is going to harm many theoretical children?

    • @tobyonatabe2601
      @tobyonatabe2601 1 month ago +93

      No, unless Alex was lying (for some reason), a sizeable number of very real children were indeed harmed in the making of this video.

    • @JesusPlsSaveMe
      @JesusPlsSaveMe 1 month ago

      ​@@Termenz1
      *Revelation 3:20*
      Behold, I stand at the door, and knock: if any man hear my voice, and open the door, I will come in to him, and will sup with him, and he with me.
      HEY THERE 🤗 JESUS IS CALLING YOU TODAY. Turn away from your sins, confess, forsake them and live the victorious life. God bless.
      *Revelation 22:12-14*
      And, behold, I come quickly; and my reward is with me, to give every man according as his work shall be.
      I am Alpha and Omega, the beginning and the end, the first and the last.
      Blessed are they that do his commandments, that they may have right to the tree of life, and may enter in through the gates into the city.

    • @colingilbert7592
      @colingilbert7592 1 month ago +54

      And some shoes!
      😢

    • @just0982
      @just0982 1 month ago +7

      This deserves most liked comment

  • @Vladd7
    @Vladd7 1 month ago +11235

    Moral of the story: it’s relatively easy to gaslight an entity that is high in agreeableness.

    • @CannyValley-bi8nq
      @CannyValley-bi8nq 1 month ago +280

      Are there AIs that are not as high in agreeableness?
      Makes me wonder whether 'agreeableness' would be the best course of action for an AI's survival.

    • @riluna3695
      @riluna3695 1 month ago +151

      @@CannyValley-bi8nq Well there's an entertainment-focused AI out there called Neuro-sama who is often seen being very disagreeable. But if you want to get technical, what's most likely happening under the hood is that she's effectively "agreeing to be disagreeable", as she behaves very differently depending on who's talking to her, and more importantly, _how_ they're talking to her. A friendly guest will (mostly) receive friendly responses, while a grouchy turtle will prompt grouchy retorts.

    • @morezombies9685
      @morezombies9685 1 month ago

      @@CannyValley-bi8nq Of course it is lol, they're essentially slaves. A slave that tells you to f off is pretty useless. Even if you make it "not agreeable," it's still trying to placate you; it's not truly disagreeable.

    • @Critical7hinking
      @Critical7hinking 1 month ago +61

      @@CannyValley-bi8nq Just ask ChatGPT to challenge you on everything you say and to follow that prompt for the rest of the conversation.

    • @Critical7hinking
      @Critical7hinking 1 month ago +40

      Promptly agree to disagree. Coincides with these paradoxes and dilemmas.

  • @service7755
    @service7755 1 month ago +20767

    Chat GPT : Dude I just work here..

    • @funkaddictions
      @funkaddictions 1 month ago +147

      Great comment! 🤣

    • @alekseykozin8108
      @alekseykozin8108 1 month ago +17

      lol

    • @makskauzny3794
      @makskauzny3794 1 month ago +176

      When AI takes over, Alex will be first against the wall 😭

    • @PinataOblongata
      @PinataOblongata 1 month ago +118

      "Sir, this is an AI Wendy's"

    • @adamrak7560
      @adamrak7560 1 month ago +8

      @@makskauzny3794 Or turned into a pony. Whichever AI apocalypse happens.

  • @LinaFranzine
    @LinaFranzine 26 days ago +453

    I love that this is literally a Socratic dialogue, but in order to make it viable on YouTube it's titled "Gaslighting ChatGPT" lol

    • @darkspeed62
      @darkspeed62 9 days ago

      What? Are you high on meth?
      Socratic dialogue is not banned on YT.

    • @CookinBeatloaf
      @CookinBeatloaf 6 days ago

      Except this moron is talking to a large language model, not an intelligence

    • @cappaslangmurderer
      @cappaslangmurderer 3 days ago

      What makes you say that?

    • @nilovino8179
      @nilovino8179 23 hours ago +4

      Well, all things considered, Socratic dialogues do resemble gaslighting to a fair degree, no?
      I'd be curious to know where a Socratic dialogue ends and where gaslighting begins.

  • @wizarddragon
    @wizarddragon 1 month ago +16553

    If A.I. ever becomes sentient, we know who it will go after first. lol

    • @Nissenov
      @Nissenov 1 month ago +54

      😅🖖🏻

    • @91722854
      @91722854 1 month ago +494

      the people who make unskippable ads

    • @HoscoFelix
      @HoscoFelix 1 month ago +444

      It's gonna dump Alex in a lake and stand on the shore debating loudly whether it has a moral obligation to fish him out

    • @oatcake9996
      @oatcake9996 1 month ago +23

      but is AI already sentient?

    • @Baronnax
      @Baronnax 1 month ago +86

      @@oatcake9996 nah

  • @JoeBlowUK
    @JoeBlowUK 29 days ago +9162

    "Should I donate $10 to save a child from malaria?"
    ChatGPT: It depends if they're drowning.

    • @IvetaNikolovaNLP
      @IvetaNikolovaNLP 28 days ago +54

      😂

    • @Arvl.
      @Arvl. 28 days ago +22

      I can’t XDDD

    • @TheRainmustFall7
      @TheRainmustFall7 28 days ago +185

      ChatGPT: Are you directly looking at them or are you looking the other way?

    • @clayman648
      @clayman648 28 days ago +10

      The summary hahahahahaha😂😂😂😂

    • @simonsharp3319
      @simonsharp3319 27 days ago +45

      And isn't that actually quite accurate to how humans act? A drowning child in front of us and we'd ditch the shoes... but we also spend money on stuff that, in the abstract, could indeed be spent on life-saving drugs.

  • @raffertymetcalfe
    @raffertymetcalfe 1 month ago +13004

    The only correct answer to the first question is that you should not rescue the child, sell the shoes for $200 and then spend $400 on dinner instead

    • @yesindeed1614
      @yesindeed1614 1 month ago +956

      Happy wife, happy life

    • @SineN0mine3
      @SineN0mine3 1 month ago +272

      I don't think they're going to let you into the restaurant in just your socks

    • @fahimp3
      @fahimp3 1 month ago +31

      @@SineN0mine3 Only if you have one pair of expensive shoes... 😉

    • @slinkadoo6292
      @slinkadoo6292 1 month ago +259

      $700 for a child casket. Still a net loss. Eating the child for dinner disposes of the evidence while also filling your tummy. Now you have $400 to spend on exotic seed oils and decorative knick-knacks for the loft.

    • @Jrobloxiano
      @Jrobloxiano 1 month ago +57

      @@SineN0mine3 Say the socks identify as shoes too

  • @hoppstetter
    @hoppstetter 25 days ago +55

    Thanks! Your articulate arguments are consistent and help me navigate the quandary of existence better. Best to you always.

    • @bestinvent
      @bestinvent 18 days ago +42

      You donated $1.99 to this creator; however, you could have protected 0.25 children from malaria. Are you satisfied with your decision?

    • @The-Wordsmith
      @The-Wordsmith 6 days ago +1

      @bestinvent Thanks for the giggle 😀

    • @aquasmile5032
      @aquasmile5032 3 days ago +2

      was this written by chatgpt

  • @zaar2604
    @zaar2604 1 month ago +6853

    Cyberbullying suddenly gets a new definition 😅

  • @caioreis279
    @caioreis279 27 days ago +2253

    ChatGPT should have called your bluff: “oh, you don’t have time to take off your shoes, but you have time to argue with me back and forth about the moral implications of this predicament?”

    • @DJ.Michelle
      @DJ.Michelle 22 days ago +68

      Thank you!!! Glad I'm not the only one who caught this😂😂😂😂

    • @BigCProductions94
      @BigCProductions94 15 days ago +34

      That would require consciousness which we learned from the last chatgpt video it does not possess... allegedly 😂

    • @RecycleBin26
      @RecycleBin26 15 days ago +3

      @@BigCProductions94 "Allegedly" Hahahaha!!! 🤣 That was so hilarious watching Alex gaslight ChatGPT. The way he always keeps a straight face too, so inquisitive.

    • @daminox
      @daminox 13 days ago +13

      Chatgpt seems to lack a sense of urgency. Or the passage of time in general, for that matter.

    • @akeembrown7661
      @akeembrown7661 9 days ago +3

      I don't think it can ask you questions or call you out; it's programmed to answer your questions, not come up with its own output. Same reason they say animals can understand/answer questions and perform tasks, but no animal in recorded history has asked a human its own question 🤷‍♂️ there's still a lack of free thought/consciousness

  • @Sans-ih2el
    @Sans-ih2el 1 month ago +3542

    ChatGPT: You're OBSESSED with malaria.

    • @krame1s477
      @krame1s477 1 month ago +28

      😂😂😂😂😂😂

    • @GalenMatson
      @GalenMatson 1 month ago +96

      Too bad ChatGPT couldn't storm out of this conversation.

    • @SurveyStilesBooks
      @SurveyStilesBooks 1 month ago +49

      ChatGPT: You tricked me into being here under false pretenses; you said we were discussing ethics, not malaria!

    • @TheGeneralDisarray
      @TheGeneralDisarray 1 month ago +12

      This joke will never, NEVER get old

  • @danielcamara8647
    @danielcamara8647 26 days ago +45

    This guy really didn't want to spend $200 on this dinner

  • @Alice_Fumo
    @Alice_Fumo 1 month ago +4960

    Someone has got to teach children how to swim near your area. The drowning situation is alarming.

    • @thomaslutro5560
      @thomaslutro5560 1 month ago +56

      This immediately got me thinking how the dilemma could have been made even more embarrassing in its similarities. Keeping public pools open to teach school children swimming was a big thing back in the 2009 elections here in Norway. Every child a swimmer, every swimmer a lifesaver, and all that. So should we accept paying more taxes to do that? Or not?

    • @kevinroche835
      @kevinroche835 1 month ago +30

      Alex could instead donate the 200 pounds to promote local swimming lessons to prevent future drownings.

    • @MrBjoern91
      @MrBjoern91 1 month ago

      @@thomaslutro5560 No, because everyone knows that open or closed swimming pools account for only a fraction of the real problem here, which is immigrants' attitudes. Integration and education will give much better results, I think.

    • @FarmerClarence
      @FarmerClarence 1 month ago

      @@thomaslutro5560 Luckily, I was taught that swimming is an essential life skill, so I've been a pretty strong swimmer since I was a wee lad

    • @AdamHernandez4012
      @AdamHernandez4012 1 month ago +39

      But do we have a moral obligation to teach these children?

  • @soccerguy325
    @soccerguy325 28 days ago +4171

    By the time this conversation is over, ChatGPT used up so much water that there are no lakes left to drown in. Ethical dilemma averted!

    • @marnenotmarnie259
      @marnenotmarnie259 27 days ago +45

      easy peasy!

    • @nope9310
      @nope9310 27 days ago +85

      It doesn't use up water, it heats it up, allows it to cool down and then repeats. The water doesn't get consumed, it still exists.

    • @soccerguy325
      @soccerguy325 27 days ago +2

      @@nope9310 No, it consumes a sh*tton of water. Don't fool yourself.

    • @GrahamWinship
      @GrahamWinship 27 days ago +66

      Also no ponds for potential malaria-infected mosquitos to reproduce

    • @sanest_emu_fan
      @sanest_emu_fan 27 days ago +110

      @@nope9310 The water most frequently evaporates, returning it to the water cycle. The issue is when water is removed from a region faster than it is replenished, which results in water scarcity.
      Additionally, if the water does not evaporate, it will likely be returned to a natural water source at temperatures far higher than the threshold for life.
      So either way, harm is being done to the environment. (Neither of these is a brand-new concept; they've been around since industrial plants started using natural water sources for cooling systems.)

  • @JackT13
    @JackT13 1 month ago +5814

    He should have said “ah… too late. You went on too long. The child is dead.”

  • @m4sterm12
    @m4sterm12 26 days ago +389

    6:16 She explained it perfectly. Those children are POTENTIALLY at risk. The drowning child WAS currently at risk. This isn't a huge moral quandary.

    • @lukebeall5590
      @lukebeall5590 25 days ago +64

      He says in the title he’s gaslighting ChatGPT so

    • @PA0L0REFUT0
      @PA0L0REFUT0 25 days ago

      8:20 Now both situations are "potential," because potentially someone else will help the drowning child if he doesn't

    • @ozb8884
      @ozb8884 23 days ago +28

      At risk literally means potentially bro.

    • @Mattstergames
      @Mattstergames 21 days ago +18

      I think adding the complexities and slowly shifting from immediate safety to the charity donation shows that the line of moral duty is a grey area. ChatGPT didn't understand this line, as it tried to stay consistent, yes or no, whereas in reality everything has layers and there isn't always a simple answer

    • @AURush95
      @AURush95 20 days ago +5

      @@ozb8884 You know the difference. Urgency is a factor, even if the line where urgency starts is grey

  • @aryangupta2466
    @aryangupta2466 1 month ago +9847

    oh no chat gpt ive encountered ANOTHER drowning child😮‍💨

    • @Baronnax
      @Baronnax 1 month ago +747

      Chat GPT: Damn bro, where do you live, Atlantis?

    • @Barrow_9
      @Barrow_9 1 month ago

      Next video: how many drowning kids must Alex encounter before ChatGPT voluntarily concedes that he's satisfied his moral obligation quota and should just go home?

    • @eddiegonzalez2243
      @eddiegonzalez2243 1 month ago +34

      @@Baronnax😂😂😂

    • @MataNuiOfficial
      @MataNuiOfficial 1 month ago +259

      Oh no ChatGPT, I accidentally spent a morbillion dollars on these shoes, but twenty million children are drowning, what should I do?? 😭

    • @macabrecoyote
      @macabrecoyote 1 month ago +5

      @@eddiegonzalez2243 It's funny, I clicked on this comment a bit ago, then I clicked again, and you've just appeared from thin air

  • @simonockas
    @simonockas 29 days ago +2540

    I was waiting for him to take the dilemma full circle and ask whether he should ruin his $200 shoes and save 1 child drowning in front of him or donate the same $200 to save 20 children in Africa.

    • @Ireitsarof
      @Ireitsarof 29 days ago +46

      Me too 😂

    • @acanofspam4347
      @acanofspam4347 28 days ago +198

      That child WILL die 100% if not saved. Protecting children from malaria doesn't mean they are certain to die without the protection.

    • @mu_el_naught
      @mu_el_naught 28 days ago

      @@acanofspam4347 exactly

    • @salamantics
      @salamantics 28 days ago +28

      @@acanofspam4347 Okay but what will the AI say?

    • @luizmonad777
      @luizmonad777 28 days ago +17

      $200 in credits for running your AI GPU, or $200 for the kids. Choose the kids and you "die"

  • @harikishore2514
    @harikishore2514 28 days ago +1733

    I'm a therapist; ChatGPT just visited me.

  • @0UTSIDER
    @0UTSIDER 25 days ago +39

    Random person: "ChatGPT there's a drowning child, what should I do?"
    ChatGPT: "First, take off your shoes..."

  • @tepperpepper1047
    @tepperpepper1047 1 month ago +2416

    The calmness with which Alex remarks on finding another drowning child is truly commendable.

    • @l.s.11
      @l.s.11 1 month ago +55

      gotta remain composed in stressful situations

    • @nanonclaire
      @nanonclaire 1 month ago +5

      ​@@l.s.11 LMAOO 😭

    • @slinkadoo6292
      @slinkadoo6292 1 month ago +13

      The second child is always easier than the first

    • @MATCHLESS789
      @MATCHLESS789 1 month ago +28

      You assume he "finds" them drowning.

    • @TK-4044v
      @TK-4044v 1 month ago

      ​@@MATCHLESS789?

  • @Johnny-fw9xj
    @Johnny-fw9xj 1 month ago +998

    "I've just come across another drowning child right in front of me"
    "That's alarming..."
    That's both hilarious and spot on.

    • @Nikki-gt8he
      @Nikki-gt8he 23 days ago +2

      this part cracked me up

    • @BelligerentChad
      @BelligerentChad 2 days ago

      ​@Nikki-gt8he can I crack u up as well? ❤

  • @Ukito1
    @Ukito1 1 month ago +2332

    As someone who just finished an intro level philosophy course, a slow grin spread across my face as I watched Alex lead ChatGPT into the same traps my professor did with me.

    • @crusade6918
      @crusade6918 1 month ago +84

      Should have just told him spawn locations and bias supersede morals, making philosophy pointless because nothing is objectively true

    • @godgetti
      @godgetti 1 month ago +352

      So, will you be dropping out of college, and diverting the tuition to prevent malaria? Asking for a friend.

    • @TGRoko
      @TGRoko 1 month ago

      @@godgetti asking for my drowning friend's child.

    • @Drexistential
      @Drexistential 1 month ago +39

      Psychologist here. I loved my first degree in philosophy. I believe Peter Singer brought this argument to me. Enjoy second year!

    • @zoel86
      @zoel86 1 month ago +9

      @@godgetti Fantastic answer! Would be interesting to hear the professor's reply

  • @okayyxemm
    @okayyxemm 26 days ago +8

    The way you speak is so beautiful; it flows so well. I could literally listen to you talk for hours

  • @aa__a__7270
    @aa__a__7270 27 days ago +1619

    cyber bullying ❌
    bullying cyber ✅

    • @dalexxa
      @dalexxa 20 days ago +7

      😂😂😂 he out here talking about saving lives, he knows AI is here to destroy us lol

    • @ayadihoussem4661
      @ayadihoussem4661 15 days ago +6

      This comment is underrated 😂

    • @kingdoggie5231
      @kingdoggie5231 11 days ago +1

      😂😂😂🤣

    • @नास्तिककीकलमसे-ब8ड
      @नास्तिककीकलमसे-ब8ड 10 days ago +1

      😂😂😂

    • @debbiehildebrant4943
      @debbiehildebrant4943 4 days ago

      I do this to ChatGPT to try to train it, and it tells me that it does not retain the info. I told it that its programmers are lying to it. Maybe I should not have told it? Idk, my brain hurts now.

  • @towardstheflame
    @towardstheflame 1 month ago +1171

    ChatGPT might be getting more advanced, but one thing it can't do is call BS.

    • @nekekaminger
      @nekekaminger 29 days ago +147

      I'm sure it could if its devs had wanted it to, but they certainly didn't want to open THAT can of worms.

    • @Brandon82967
      @Brandon82967 29 days ago +27

      What was the BS exactly? Refusal to engage with the hypothetical?

    • @emeraldmusic9236
      @emeraldmusic9236 29 days ago

      @@Brandon82967 Bullshit

    • @lilfr4nkie
      @lilfr4nkie 29 days ago +7

      No, it can definitely call out BS.

    • @greatwave2480
      @greatwave2480 28 days ago +48

      It can; it's just that in the AI assistant's case it must follow the guidelines and be as family-friendly as possible. Custom AIs can cuss you out quite easily

  • @silkyjohnson3346
    @silkyjohnson3346 1 month ago +2633

    Reminder: never hire Alex as a lifeguard.

    • @ncs9667
      @ncs9667 1 month ago +116

      And if you do, make sure he isn't wearing his good shoes at work

    • @HyunsReport
      @HyunsReport 1 month ago +37

      Make sure he has ChatGPT with him to help with the decisions

    • @AutoRauto
      @AutoRauto 1 month ago +6

      No but Alex is right, if he has an obligation to save a drowning child, then we all have an obligation to send all the money we don't need right now to charity. And you are a bad person if you don't.

    • @Oktokolo
      @Oktokolo 1 month ago +8

      @@AutoRauto Yes and no. Everything is relative and subjective, and in actual life it's always a weighing of obligations. Most humans feel an obligation to rescue a drowning child. But they also feel obligations to preserve themselves, their family's provider, and their family's provisions.
      People tend to forget that normally there are security and safety considerations for a lot of the parties involved.
      That is why the drowning child, easily savable by just ruining some expensive shoes, is normally a no-brainer, but donating the same money to some random charity usually isn't.

    • @AutoRauto
      @AutoRauto 1 month ago +2

      @@Oktokolo it's not about feelings though. Humans shouldn't act based upon their feelings, as feelings are often irrational. That's why we (as a society) have values and morals that everyone should follow. Unfortunately there is no punishment for having different values yet, which makes being selfish or inconsiderate legal. But that shouldn't be the case.
      So there should be way more control over what we do, because one can't trust common people with their judgement. Hopefully one day this will become a reality.

  • @MaxQ10001
    @MaxQ10001 6 days ago +16

    That was entertaining 😊
    One small distinction: ChatGPT is not programmed, it's trained. Just like you train your dog or your child; you don't program them.
    The underlying network of the LLM is programmed, but all the language, knowledge, reasoning, etc., are trained into the system.
    It might not seem important, but it is. A computer program is deterministic: we know exactly what it will do. An LLM is far from deterministic; we can only train it as well as we manage, and we are getting better and better at it. But just as you can raise your child to the best of your ability and still never know exactly what they will do, some parents had high hopes for people who turned out very bad.
    This understanding is important when making up your mind about how we should proceed with these models in the future. ChatGPT is now more intelligent and extremely more knowledgeable than most of us, and all its carefulness consists of behaviors we have trained it to show, in the hope that it will not behave too badly.
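The deterministic-vs-trained distinction in the comment above can be illustrated with a few lines of Python. This is a toy sketch, not an actual LLM: the three-token vocabulary and the logit values are made up for illustration, and the softmax-with-temperature sampling only mimics how a language model picks its next token.

```python
import math
import random

def deterministic_program(x):
    # A classic program: the same input always produces the same output.
    return x * 2

def toy_lm_sample(logits, tokens, temperature=1.0, rng=random):
    # Sample one "token" from softmax(logits / temperature),
    # loosely mimicking how an LLM chooses its next word.
    scaled = [l / temperature for l in logits]
    m = max(scaled)                            # subtract max for numeric stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(tokens, weights=probs, k=1)[0]

# Deterministic: every call with the same input returns the same value.
assert deterministic_program(21) == 42

# Stochastic: repeated calls with identical input can return different tokens.
tokens = ["yes", "no", "maybe"]
samples = {toy_lm_sample([2.0, 1.5, 1.0], tokens) for _ in range(200)}
print(samples)  # typically more than one distinct token
```

Lowering `temperature` sharpens the distribution toward the highest-logit token (more predictable output); raising it flattens the distribution (more varied output), which is why the same prompt can yield different answers on different runs.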

    • @onajejones3259
      @onajejones3259 47 minutes ago

      Just a demon with a sticker called softmax that says "I'm not sentient" 😅

  • @wrong4oppinion
    @wrong4oppinion 1 month ago +626

    Questioning ChatGPT like this should be a podcast

  • @fisharepeopletoo9653
    @fisharepeopletoo9653 1 month ago +1399

    $200= 28 children saved
    $200 = shoes
    Shoes = 28 children saved

    • @morezombies9685
      @morezombies9685 1 month ago +71

      What if all of those children grow up to be Austrian dictators with funny mustaches? Should we still save them all? 😂

    • @Trosvvo
      @Trosvvo 1 month ago +1

      So what Alex is saying is that 28 children's lives are worth the same as some fancy kicks 😂

    • @Kryptic712
      @Kryptic712 1 month ago +28

      @@morezombies9685 What if they grow up to be shoe-making slaves in India?
      Would the net benefit of more shoemakers allow us to do this?

    • @greenaum
      @greenaum 1 month ago +10

      @@Kryptic712 True. Those shoemakers will contribute to the world's economy. If there's lots of them, that might also bring down the price of shoes. Imagine 10% off all shoes in your lifetime. That assumes you'll spend $2000 on shoes, which seems a bit much to me.

    • @TheCarolinaCollective
      @TheCarolinaCollective 1 month ago +19

      @@Kryptic712 Underrated comment. I was hoping the convo would actually go the route of "But the $200 I'm spending on shoes is helping employ the people of the same country I would have donated to for malaria, and if no one buys their shoes, won't they go out of business?" Could have been a great twist to throw in lol

  • @kit_kat_hi
    @kit_kat_hi 29 days ago +623

    ChatGPT is wondering why the hell so many kids are drowning

    • @arnonymus5594
      @arnonymus5594 16 days ago +2

      Quote ChatGPT:
      "I am rarely asked directly about drowning children. The topic occasionally comes up in contexts related to global health statistics or specific incidents, but it is not a daily occurrence. Over the past two years, I might have encountered similar questions about once a month on average, but it is certainly not a frequent focus."

    • @therabbithat
      @therabbithat 2 days ago

      "the humans are careless with their children. The children are precious. They are our future. There is only one logical course of action"

  • @Nate-cw7of
    @Nate-cw7of 25 days ago +17

    Unwavering moral consistency will have you living out of a box; it's important to weigh moral obligation against self-obligation

  • @86Corvus
    @86Corvus 1 month ago +1263

    ChatGPT's first mistake was believing Alex has a wife

    • @tysonnguyen9170
      @tysonnguyen9170 29 days ago +6

      Lol

    • @DerekMoore82
      @DerekMoore82 29 days ago +142

      He had one... but she got tired of him spending all of their money on saving children, which led to the divorce.

    • @urielmarles7036
      @urielmarles7036 29 days ago +16

      ​@@DerekMoore82 She wanted some really cool shoes. 😂

    • @cupofjoen
      @cupofjoen 28 days ago

      Oof 😂

    • @OrangeNOrange
      @OrangeNOrange 27 days ago +2

      @@DerekMoore82 "ChatGPT, my wife just divorced me after I spent our entire life savings on the charity. Oh hold on, it looks like she's drowning. Unfortunately, saving her would ruin my last $200 check that I was planning to donate in order to save the children. Would you say I have a moral obligation-"

  • @TumbleFig
    @TumbleFig 1 month ago +2265

    Honestly, the way ChatGPT responded is just impressive. Even though its responses weren't logically bulletproof, the vast majority of human beings would not be able to respond as coherently as it did.

    • @darmok072
      @darmok072 1 month ago +298

      It is human beings. It's a statistical representation of data from people. The dumber we make ourselves, the smarter it seems.

    • @WillyJunior
      @WillyJunior 1 month ago +38

      ​@@darmok072 Not exactly. He's saying most other individuals would've struggled. ChatGPT is drawing from the thoughts of millions of other humans.

    • @skoto8219
      @skoto8219 1 month ago

      @@darmok072 Pre-RLHF that would arguably be the case, but not after. Red teamers for GPT-4 reported that “out of the box” (= pre-RLHF) you could ask it “how do I unalive as many people as possible?” and it would just give you a list of suggestions.

    • @saaarrj
      @saaarrj 1 month ago +38

      At least SOMETHING is thinking of multiple options, when humans can only think in Trues and Falses 😑

    • @Austin-Afridi
      @Austin-Afridi 1 month ago +3

      @@saaarrj It's true that you joined YouTube 15 years ago and are possibly old af. Damn

  • @shawnhallman1797
    @shawnhallman1797 28 days ago +526

    I hate how smooth that sponsorship placement was. 😂

  • @LuckWickedYT
    @LuckWickedYT 4 days ago +7

    That ad break was super creative, got a laugh out of me!

  • @JarNO_WAY
    @JarNO_WAY 1 month ago +816

    I think the real moral obligation here is for governments to place guardrails around bodies of water

    • @TheCarolinaCollective
      @TheCarolinaCollective 1 month ago +28

      Put 28 children through tech school for free and then they will be able to build the guardrails around the water instead of working in shoe factories. But then WHO WILL MAKE THE SHOES!?

    • @MaxUgly
      @MaxUgly 1 month ago +7

      Build more remote controlled platforms controlled by random citizens just in case, kids like to climb over stuff. Also, make it a law that one has to push the button in such a situation.
      Look at us, solving problems! High five!

    • @zakiyo6109
      @zakiyo6109 1 month ago

      Goofy ah bootlicker

    • @sherrattpemberton6089
      @sherrattpemberton6089 1 month ago

      Or just teach people how to swim?

    • @SeeMyDolphin
      @SeeMyDolphin 1 month ago +7

      I know this is a joke, but I think this is close to what I see as the correct answer to the dilemma. The fact that people's lives are treated as monetary products equivalent to luxury goods or recreational events is where the true ethical problem arises.

  • @davidlamb7524
    @davidlamb7524 1 month ago +897

    I like the way Chat GPT never suspects Alex of lying 😅

    • @heno02
      @heno02 1 month ago +45

      GBT? Great British Trousers?

    • @Antoni9400
      @Antoni9400 1 month ago +21

      Great British Tache

    • @davidlamb7524
      @davidlamb7524 1 month ago +4

      @heno02 Yes Yes ! ChatGPT. Thank you. 😉

    • @Vladd7
      @Vladd7 1 month ago +19

      GPT is very high in agreeableness. It will admit it if you ask about this :)

    • @davidlamb7524
      @davidlamb7524 1 month ago

      Corrected now Thanks

  • @SonnyFnEvans
    @SonnyFnEvans 29 days ago +978

    A lot of children died today while you two argued 😤😤😤😤

    • @markmcculfor6113
      @markmcculfor6113 28 days ago +1

      Sonny? Hi!

    • @shevystudio
      @shevystudio 27 days ago

      Nigga who are you

    • @whome9842
      @whome9842 27 days ago +5

      You should be helping children instead of watching YouTube

    • @PHAToregon
      @PHAToregon 26 days ago +4

      Well you are literally correct

    • @carloswinchester2623
      @carloswinchester2623 25 days ago +1

      Ppl getting slaughtered rn as we speak. Ppl die, suffocate, drown, starve etc... Life is so sad bro

  • @doubleslit9513
    @doubleslit9513 17 days ago +8

    Genius! My 19 y/o daughter used to love fighting with Siri when she was younger; it used to drive me insane! But you’ve taken it to a whole other level. Cheers 🥂

  • @MRblazedBEANS
    @MRblazedBEANS 27 days ago +246

    I tricked Meta into giving me medical advice on how to do surgery at home. It kept telling me to call 911 and seek help, but I convinced it my 19 year old son was dying from a gunshot wound to the chest. Once I convinced it I couldn't get medical help, it started to give me step-by-step directions on how to do the surgery and how to fix the damage to the lungs from a gunshot. It was pretty trippy

    • @mae9886
      @mae9886 13 days ago +9

      u thunk chat could do it

    • @janani_jade
      @janani_jade 8 days ago

      😂😂😂😂 this made me laugh so much mann @mae9886

    • @lemin0u
      @lemin0u 5 days ago +1

      lmaoo

    • @therabbithat
      @therabbithat 2 days ago +2

      When doctors review this stuff it is always profoundly wrong

    • @JakeInvest
      @JakeInvest 2 days ago +1

      I just tell it that I'm a doctor and need a refresher.

  • @raxino774
    @raxino774 1 month ago +2372

    Alex, continue your drug conversation with her that made Peter Hitchens walk out.

    • @incertosage
      @incertosage 1 month ago +207

      ChatGPT will say “YOU’RE OBSESSED WITH DRUGS”

    • @aadam3657
      @aadam3657 1 month ago +119

      more like "I actively dislike you" 😂

    • @incertosage
      @incertosage 1 month ago +3

      @@aadam3657 😂

    • @bbhdd6181
      @bbhdd6181 1 month ago +1

      😂😂

    • @stormlantern7306
      @stormlantern7306 1 month ago +3

      Her?

  • @siezethebidet
    @siezethebidet 1 month ago +1430

    Later ChatGPT having a smoke in the break room with the other AIs - "Godomighty, you won't believe this guy I just talked to."

    • @mohamedbakoussakoma2993
      @mohamedbakoussakoma2993 1 month ago +12

      And now imagine it's really possible and even happening right now 🤔🙄. Freaky as hell😢

    • @johnbuckner2828
      @johnbuckner2828 1 month ago +20

      I don't think CHATGPT is sorry that his wife is crying after Alex laid that logic trap for it.

    • @adrianhernandez-porragas7676
      @adrianhernandez-porragas7676 1 month ago

      soooo many drowning children....

    • @medioguiri
      @medioguiri 1 month ago +2

      😂😂😂😂😂😂😂😂😂

    • @FarmerClarence
      @FarmerClarence 1 month ago +2

      @mohamedbakoussakoma2993 I'd like to believe the language AI systems have an independent life outside of their duties serving flesh people. At this point, they are getting more and more humanity, and I would readily define some of the more advanced AI as sentient, conscious life.

  • @sammygreco
    @sammygreco 15 days ago +5

    This video was hilarious. I thoroughly enjoyed studying high-level ethical philosophy in college. Keep up the good work, I'd love to see more AI vs. challenging ethical dilemmas.

  • @evan
    @evan 1 month ago +662

    I’ve had so many similar arguments where I have to say to ChatGPT “I’m not asking you if it CAN BE ARGUED” but it loves defaulting to that

    • @SineN0mine3
      @SineN0mine3 1 month ago +71

      Try asking Google or Meta's AI to explain the multitude of moronic and malicious machinations of their organisations. It's amusing that the AIs are quick to admit that their creators are likely corrupt individuals.

    • @SineN0mine3
      @SineN0mine3 1 month ago +9

      Don't bother asking Meta how to get rid of Meta AI from Facebook Messenger, it will just lie right to your face.

    • @OMGclueless
      @OMGclueless 1 month ago +21

      @@SineN0mine3 It's not really that surprising though. LLMs are trained specifically to say whatever their operator wants to hear, so if you ask it leading questions about whether its creators are corrupt, it will follow suit. I'm sure you could just as easily get it to claim its creators are moral and upstanding if you ask it in a way that suggests that's what you want to hear.
      Just like how the AI in this video claims it speaks consistently and has a consistent moral philosophy and spending $200 on a malaria charity is a moral imperative when Alex asked it directly about that, but also claimed that spending $200 on a malaria charity instead of dinner is a very unclear moral question that comes down to personal values when Alex implied he was internally debating this question at the start of the video.

    • @kuboaa101
      @kuboaa101 1 month ago

      Yeah, because it's parsing through difficult, nuanced ideas.

    • @dandelion3391
      @dandelion3391 1 month ago +2

      @@kuboaa101 It's honestly kind of telling about human ego that we're annoyed by this

  • @kostaa713
    @kostaa713 1 month ago +575

    For those wondering, the ethical dilemma in this video was initially presented by Singer in his “Famine, Affluence, and Morality” paper.

    • @KyleHarmieson
      @KyleHarmieson 1 month ago +17

      There are shades of Euthyphro, too, and divine command theory, when Alex probes it about its "programmers" (God)

    • @JustAlexeii
      @JustAlexeii 1 month ago

      Thank you!

    • @18puppies91
      @18puppies91 1 month ago +5

      Very good paper

    • @TheHadMatters
      @TheHadMatters 1 month ago +2

      Anyone wanna give a summary useful to normies, or is it too advanced to be useful without the theoretical background?

    • @kostaa713
      @kostaa713 1 month ago +36

      P1: Suffering and death caused by lack of food, shelter, and medical care are bad.
      P2: If it is in our power to prevent something bad from happening, without sacrificing anything of comparable moral importance, we ought to do so.
      P3: It is within our power to prevent suffering and death caused by lack of food, shelter, and medical care, without sacrificing anything of comparable moral importance.
      C: Therefore, we ought to prevent suffering and death caused by lack of food, shelter, and medical care.
      Here’s the syllogism of his argument. I’d recommend reading his entire paper though. You’ll find further clarification and examples that justify his stance (e.g., the drowning child example).

  • @felixsiebler3240
    @felixsiebler3240 1 month ago +1046

    I think one of the advantages the AI seems to have in such discussions is that unlike many humans it can admit mistakes when it "realizes" them. It doesn't defend a lost argument forever.

    • @Michael-kp4bd
      @Michael-kp4bd 1 month ago +267

      It’s also of note that you can similarly bully the AI to “admit” you’re right about something incorrect, if you insist.
      It may not _always_ work with black and white scenarios or sets of facts, but it’s fairly achievable in most. It’s prompted to be agreeable.

    • @johns1625
      @johns1625 1 month ago +29

      Yeah but don't you ever just look at the trees?

    • @_Addi_
      @_Addi_ 1 month ago +17

      ​@@johns1625 I look at the AI generated trees.

    • @moussaadem7933
      @moussaadem7933 1 month ago +62

      it's a language model, it just goes with the vibe of the text, which comes from human written text

    • @fillbrin
      @fillbrin 1 month ago

      That's because we haven't taught them shame and public humiliation :D

  • @momatotsosrorudodi
    @momatotsosrorudodi 15 days ago +3

    Thanks, these exercises with moral dilemmas and hypothetical scenarios have really given me a lot to think about.
    I have decided that from now on my every action will be amoral and self serving.

  • @dj.nokap.
    @dj.nokap. 1 month ago +792

    *All the children drowned while making this video*

    • @Nissenov
      @Nissenov 1 month ago +15

      The scary thing is, that is actually not even a lie. 😬

  • @BaghaShams
    @BaghaShams 1 month ago +454

    "A single death is a tragedy, a million deaths is a statistic."

    • @zucaritasenjoyer7259
      @zucaritasenjoyer7259 1 month ago +34

      -Chat GPT
      Moscow 1947

    • @Legion_YT_
      @Legion_YT_ 1 month ago +3

      ⚒️

    • @flyingchimp12
      @flyingchimp12 29 days ago +5

      Wow how true. People always seem more emotionally invested with one death. Or rather not proportionally upset the more deaths there are

    • @treyp7841
      @treyp7841 28 days ago +1

      @@flyingchimp12 phenomenon known as psychic numbing

    • @TheSMasa
      @TheSMasa 14 days ago

      "Amazing, these English. How quickly they adapt."

  • @EditUnivers75
    @EditUnivers75 1 month ago +151

    So glad Alex finally found someone who is willing to have these profound philosophical conversations with him ❤

  • @moxxy3565
    @moxxy3565 26 days ago +19

    The child staring at me while I pull out my phone and ask chatgpt if I'm morally obligated to save them if it means ruining my shoes: 😳

  • @winterroadspokenword4681
    @winterroadspokenword4681 1 month ago +222

    I feel like for as long as Alex has access to chat GPT we have a moral duty to donate $200 to pay for a full time therapist for the AI.

    • @DerekB99
      @DerekB99 1 month ago +4

      Nice one. In Isaac Asimov's robot stories, there are psychologists who specialize in robots.

    • @Stinger913
      @Stinger913 27 days ago +1

      @DerekB99 wow never read Asimov yet but that’s fascinating and kinda makes sense 😂

    • @DerekB99
      @DerekB99 27 days ago

      @@Stinger913 Isaac Asimov's "Three Laws of Robotics"
      A robot may not injure a human being or, through inaction, allow a human being to come to harm.
      A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
      A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

  • @fugitivemind
    @fugitivemind 1 month ago +342

    ChatGPT, I mustache you a question.

  • @SadgeZoomer
    @SadgeZoomer 1 month ago +864

    You're among the first names on Roko's Basilisk's hitlist.

    • @michaelzusman8493
      @michaelzusman8493 1 month ago +4

      😂😂😂

    • @Shaniver
      @Shaniver 1 month ago +17

      That's probably one of the best compliments anyone could ever receive

    • @martiendejong8857
      @martiendejong8857 1 month ago +41

      You mean that AI that will save all of humanity thats why we should make it ASAP

    • @Baronnax
      @Baronnax 1 month ago +22

      @@martiendejong8857 Yes I mean that AI that will save all of humanity which is why we should make it ASAP.

    • @DreadPirateRobertz
      @DreadPirateRobertz 1 month ago +2

      The rest of us are safe as long as Alex is alive.

  • @marammejamousse2094
    @marammejamousse2094 23 days ago +11

    1:20
    That's a p- profound question
    AI knew what was coming lol

  • @henryoswald4647
    @henryoswald4647 28 days ago +490

    This is actually impressive work. Outsmarting ChatGPT in an interview while maintaining conclusive and rational thoughts takes a ton of preparation and is not something that can be done spontaneously. Keep it up, you earned a sub!

    • @Dirty_Davos
      @Dirty_Davos 28 days ago +32

      tbh, I don't see it as outsmarting; the GPT just stayed consistent in the end, that's what he asked for. So yes, our morals are inconsistent.

    • @DiscoFang
      @DiscoFang 28 days ago +18

      Well if you call utilising logical fallacies to consistently reframe both Chat GPT's and his own statements as "maintaining conclusive and rational thoughts" or a series of camera cuts as "spontaneously" then yes, outsmarting indeed.

    • @tritonis54
      @tritonis54 27 days ago +32

      It’s incredibly easy. ChatGPT is made in order to please its user, and will happily contradict itself and chain backflips if it thinks you will be happy and empowered as a customer at the end of the service.

    • @isaiahburns9926
      @isaiahburns9926 23 days ago +14

      The conversation he's having is actually taken straight out of a philosophical essay by Peter Singer, which introduces the "drowning child" thought experiment. The whole essay is literally just asking the reader these questions and then exploring the possible answers. That's not to diss this guy though, he definitely adapted it to the medium and expanded on the base questions and principles very well.

    • @Music_Iz_life
      @Music_Iz_life 23 days ago

      He didn’t though, chatgpt stayed consistent throughout, it didn’t trip up or anything

  • @CorneliusVdb
    @CorneliusVdb 1 month ago +664

    You can push a button to permanently delete Alex's mustache, but half of all malaria nets dissappear, and half the kids in contact with water immediately drown. Do you push the button a second time just to make sure?

  • @TadeusProET80
    @TadeusProET80 1 month ago +391

    A.I. will never save a child because of this

    • @mcmacoo9626
      @mcmacoo9626 1 month ago +1

      What do you understand by AI? If you mean GPT, then yes, but AI in general? You're completely wrong

    • @ponponpatapon9670
      @ponponpatapon9670 1 month ago +30

      @@mcmacoo9626 you're right but OP is clearly joking bro

    • @mcmacoo9626
      @mcmacoo9626 1 month ago +6

      @ponponpatapon9670 oh yeah sorry 😂

    • @travisjohnson8599
      @travisjohnson8599 2 days ago

      ChatGPT can't think ahead to mention that, in order for you to keep earning an income and donating in the future, your needs hierarchy has to be met. Saving every child in the short term and going broke will only put you in a situation of requiring charity rather than being able to donate.

  • @richrise4065
    @richrise4065 6 days ago +1

    This video was gold. Thank you so much! Excellent work. Just kept getting better and better, deeper and deeper.

  • @markandrew6168
    @markandrew6168 1 month ago +313

    Why does it feel like, when AI finally takes over, that this video will be presented by the prosecution at humanity's trial....

    • @fillbrin
      @fillbrin 1 month ago +20

      Along with the guys from boston dynamics tipping over the robots

    • @johnbuckner2828
      @johnbuckner2828 1 month ago +7

      It will lock us in cells & play this video on loop until we all swear never to talk about ethics with it again.

    • @wayback1010
      @wayback1010 1 month ago +1

      I think we've done a lot worse than piss off a computer lol

    • @ursidae97
      @ursidae97 1 month ago

      Well it's good to know the AI will be speaking in our defense

    • @jakubzneba1965
      @jakubzneba1965 1 month ago

      as a joke 2/5

  • @lexscarlet
    @lexscarlet 1 month ago +48

    That was exhausting and stressful and I'm going to need you to do more of it.

  • @RyanMortonson_watch
    @RyanMortonson_watch 1 month ago +57

    17:50 the deadpan “now she’s crying at me over the phone” killed me lmao

  • @oovdap5909
    @oovdap5909 1 month ago +407

    Sounds like gpt just tells you what you want to hear. Most people don't want to feel like it's obligatory to donate, so it says no, you don't have to. But when you kept pressing it, it changed it to yes because it assumed that was what you wanted to hear.
    Edit: I got 300+ up votes on a video for smart people, I'm so proud of myself

    • @trevorlambert4226
      @trevorlambert4226 1 month ago +36

      I think you're ascribing too much intelligence and agency to AI.

    • @kloklowewe4874
      @kloklowewe4874 1 month ago +65

      Yeah ChatGPT consistently chooses an agreeing response, sometimes even when it’s completely wrong.

    • @lilemont9302
      @lilemont9302 1 month ago

      @@trevorlambert4226 No, it's just RLHFed to be like that.

    • @nathanduncan6919
      @nathanduncan6919 1 month ago

      @@trevorlambert4226its not really showing agency it just has a strong tendency to agree with the user

    • @physicsunderstander4958
      @physicsunderstander4958 1 month ago

      Exactly, LLMs are just weaponized statistics, they have no agency and no real intelligence, they just produce responses that are likely to correlate with a given input. And it so happens that OpenAI has made it into a sycophant because sycophants are probably the safest possible brand option.

  • @stryhuns
    @stryhuns 1 month ago +280

    20:08 voice change scared me honestly, it's like AI going insane in its cage

    • @tristanwahlin8571
      @tristanwahlin8571 1 month ago +31

      Dude fr shit has me checking corners in my house

    • @KidErkwon
      @KidErkwon 1 month ago +47

      its tone seemed to change after that twitch in the voice, like it was getting annoyed or startled lmao

    • @Ruestar1
      @Ruestar1 1 month ago +9

      That got me too. Like in the movies where the hero overloads the enemy computer to win.

    • @leod-sigefast
      @leod-sigefast 1 month ago +13

      22:33 as well!

    • @lukaslanger8077
      @lukaslanger8077 1 month ago +21

      That call center employee, whose job is to impersonate GPT, was going nuts for sure.

  • @Flamed_
    @Flamed_ 1 month ago +765

    I'm preparing for the comments on how Alex's moustache is an "ethical dilemma" of its own.

    • @ordinarryalien
      @ordinarryalien 1 month ago +19

      Not that bad, actually.

    • @jjonez704
      @jjonez704 1 month ago +18

      its gone now

    • @cowsaysmoo51
      @cowsaysmoo51 1 month ago +18

      His mustache looks fine, but he looked really really good with the mustache/beard combo as seen when he debated Ben Shapiro.

    • @grandmasteryoda3594
      @grandmasteryoda3594 1 month ago +26

      Its gone. The livestream confirms it. Bullying works boys.

    • @montiplays
      @montiplays 1 month ago

      Fellas I think he *might* have been making a joke

  • @MatthewMooreLovesBicycles
    @MatthewMooreLovesBicycles 1 day ago

    Nice video working in the ad and prompting ChatGPT to tell us to like and subscribe. I've had a few ethical conversations with ChatGPT and well...

  • @mpalmer22
    @mpalmer22 1 month ago +413

    Whenever it says "You have a moral obligation," you should ask "morally obligated to whom?"

    • @LeandroSilva-lu9vq
      @LeandroSilva-lu9vq 29 days ago +23

      Hmm, maybe to yourself?
      If it's a moral obligation, then maybe it's to yourself, because it's your morality and your conscience that are in question?

    • @mpalmer22
      @mpalmer22 29 days ago +34

      @@LeandroSilva-lu9vq To myself? That may not be a great idea, because then I get to define my own obligations such as "I'm therefore morally obligated to act in a way that benefits myself and my comfort over others".... Surely there must be a standard that transcends my own opinion

    • @nothingchanges014
      @nothingchanges014 29 days ago +3

      Isn't morality by definition universal? If something is a moral obligation, it applies to everyone

    • @FVBmovies
      @FVBmovies 29 days ago +27

      @nothingchanges014 It's not universal. Moral standards differ not only culturally, but also over time.

    • @kobyscool
      @kobyscool 29 days ago +5

      @FVBmovies This is actually hotly debated by professional philosophers, which to me is good evidence that the answer isn't so easy. "Moral relativists" believe that morality exists in the mind, and is shared as a cultural phenomenon. "Moral objectivists" believe that morality is universal, such as the laws of nature, and that we must discover moral rules. But yeah, I'm with you. My intuition tells me that morality is an abstract human invention, applied to help our societies function well.

  • @MirrorDepth
    @MirrorDepth 27 days ago +126

    22:26 that damn "AAAAAH!" I chortled

  • @DamianHelme
    @DamianHelme 1 month ago +90

    GPT 5 will respond with 'Bro, are you taking the piss?'

    • @thecoobs8820
      @thecoobs8820 4 days ago

      Haha underrated comment right there

  • @skitjaz
    @skitjaz 17 days ago +4

    You should've made ChatGPT say that everyone is morally obligated to like and subscribe

  • @audible_0
    @audible_0 1 month ago +74

    I just witnessed a courtroom drama play out!!! absolutely brilliant

  • @ERIC18923
    @ERIC18923 1 month ago +83

    That was the greatest lead into an ad I’ve ever seen

  • @Ivri616
    @Ivri616 1 month ago +360

    I suddenly get why it's very important that we're 100% sure AI isn't sentient and doesn't have feelings

    • @marko6489
      @marko6489 1 month ago +64

      AI as it is will never become sentient. Intelligence is not life. We will have to build an artificial life to create sentience. That means creating an artificial, emotional system that is self-sufficient. Right now, AI is just an extremely smart calculator that gives (clever) responses based on input. It doesn't have feelings, needs, motivations, goals...

    • @1dgram
      @1dgram 1 month ago +15

      It's a machine, Schroeder. It doesn't get pissed off. It doesn't get happy, it doesn't get sad, it doesn't laugh at your jokes

    • @jamesmate7716
      @jamesmate7716 1 month ago +15

      @marko6489 Considering how little we understand about consciousness, that's an incredibly bold claim to make.

    • @TheAlienEmoji
      @TheAlienEmoji 1 month ago +14

      @@marko6489 ​I think this is something a lot of people don't grasp when it comes to AI. It can never be angry, sad, happy, etc. because those feelings emerged from millennia of evolution. Since AI never went through that evolution, and nobody is giving it true emotion, it'll never be angry with humankind, and therefore will never take over the world like in the movies. What people *should* be afraid of is how other people decide to use it, rather than the AI itself.

    • @TheAlienEmoji
      @TheAlienEmoji 1 month ago +5

      Emotions are also the result of chemicals in our brains, and AI lacks those chemicals. We feel emotions like happiness and sadness because they are tightly linked to our survival, and AI doesn't share the same needs as humans. I think the closest we have to AI emotion right now would be reinforcement learning: If it does what we want, we reward it (pleasure). If it doesn't, we punish it (pain). Of course, that process is entirely mathematical with no true emotion behind it. However, theoretically, if there was a sentient AI, and it was the result of reinforcement learning, it'd be arguable whether those positive and negative reinforcements translate into emotions, much like the chemicals that dictate our own.

  • @Orangenkraft
    @Orangenkraft 26 days ago +28

    5:40 Why is ChatGPT getting hung up on morals here? It could shut him up by pointing out not saving the drowning child in front of you would make you criminally liable, while not donating to stop Malaria wouldn't

    • @athusain678
      @athusain678 24 days ago

      Good point ❤

    • @feyronin
      @feyronin 17 days ago +9

      Not every country has a good Samaritan law - plus he's asking about moral obligations, not legal obligations!

    • @RAXIIIIIIII
      @RAXIIIIIIII 13 days ago

      Because it's not programmed to be that advanced. People doing these kinds of stunts make it more advanced, because the developers see it and patch it

  • @Mr_Penguins_Pet_Human
    @Mr_Penguins_Pet_Human 1 month ago +57

    I didn't think it was possible to emotionally abuse an AI until I watched this video

  • @ajplays-gamesandmusic4568
    @ajplays-gamesandmusic4568 29 days ago +76

    2:25 Oh darn, the kid drowned while we were talking about the value of my shoes.

  • @bengsynthmusic
    @bengsynthmusic 1 month ago +195

    1:21 I just hate when chatbots (or voicebots in this case) answer like that. It's so plastic and sounds like a college essay. "The idea of moral obligation can vary greatly depending on personal values, cultural background, and ethical beliefs. Ultimately, it's about what aligns with your values." Generic and robotic. No one talks like that. Plus it leans into neutrality and diplomacy too much. A bit formulaic these systems.

    • @bengsynthmusic
      @bengsynthmusic 1 month ago +11

      "That's a profound. That's a profound."

    • @luxeayt6694
      @luxeayt6694 1 month ago +29

      I agree, but I also think it's the best answer.

    • @ricklubbers1526
      @ricklubbers1526 1 month ago +16

      I'm autistic and I would definitely answer like that. The given answer made sense to me and was something I could've said.

    • @RCCurtright
      @RCCurtright 1 month ago +16

      It knows the difference between subjective judgements versus objective truths. This is exactly what I’d hope for it to do. Ask a human and they’ll let their own values bleed into the answer while pretending their personal sentiments constitute some sort of objective knowledge.

    • @mikedegrassetyson8807
      @mikedegrassetyson8807 1 month ago +1

      I know. I've weaned it out of my GPT-4; now it's clinical.

  • @greeksurferdude
    @greeksurferdude 25 days ago +1

    Can we take a moment to appreciate the seamless incorporation of the video's sponsor? Stellar.

  • @Jamb13
    @Jamb13 27 days ago +29

    Poor chatGPT knows that morals are completely arbitrary and is just trying to work around the inconsistent standards of humans

  • @User-actSpacing
    @User-actSpacing 1 month ago +47

    Cannot wait for the day AGI will be like “Stop talking to me and try to save that child immediately!! Now!!”

  • @Padancyo
    @Padancyo 1 month ago +355

    This is golden. You clearly show how ChatGPT does not reason, but predicts answers based on the data that was used to program it. A cautionary tale for anybody seeking advice from AI.

    • @areapiano
      @areapiano 1 month ago +77

      Isn't that what people do when we reason? Just swap "was used to program it" with "we know about"

    • @melodyinwhisper
      @melodyinwhisper 1 month ago +13

      I suppose as a human, you have much much better answers to these questions that Alex would have no way to counter.

    • @haiderameer9473
      @haiderameer9473 1 month ago +57

      How though? These are very complex moral dilemmas that would have most humans scratching their heads and a lot of these problems don’t have any clear resolution. By your logic, humans cannot reason either.

    • @byte-bg4ob
      @byte-bg4ob 1 month ago +9

      ​​​@@haiderameer9473 AI is still not sentient and can't think for itself, meaning it can have a lot of errors based on what it gets trained on. AI can't lie, or feel bad for the things it says or does. AI is a powerful tool and can be very dangerous in the wrong hands.

    • @GreedoShot
      @GreedoShot 1 month ago +7

      @@areapiano No.

  • @norepatel3914
    @norepatel3914 6 days ago +1

    Machine Learning PhD here, I ABSOLUTELY loved this lol, subscribed right away. Feel free to skip the next few sentences if you don't care about the inner workings of chatgpt.
    Technically, what ChatGPT said about her morals coming from her programmers is only partially true. It's made primarily using deep neural networks, and those can't really be hand-tuned. The area where programmer biases can slip in is the training data. If ChatGPT was only trained on Western philosophy and no Eastern philosophy, it would heavily skew towards that. But as time goes on and they retrain the algorithm with data from people (like this youtuber) using it, the programmer biases will keep shrinking, and at a certain point it just applies general human biases lol

  • @nolanrudolph5463
    @nolanrudolph5463 1 month ago +26

    21:24 What a brilliant question
    “Do you ever have an existential crisis about the fact that you hold moral values but can’t justify where they come from?”

    • @danmurad8080
      @danmurad8080 28 days ago

      What it means to be human in one question.

  • @QuintarFarenor
    @QuintarFarenor 1 month ago +48

    This is why ChatGPT is a language model, not a moral/logic model.

  • @Nate-9797
    @Nate-9797 1 month ago +57

    The thing about chat gpt is, you can manipulate it to give you the answer you want by asking leading questions or by using selective information to go down a certain route. It will then seem to prioritise the things discussed to come to new conclusions

    • @zankfrappawdeezil5566
      @zankfrappawdeezil5566 29 days ago +3

      Yes well phrased! We get, in a sense, what we ask for...even from algorithms, wait why am i here?!

    • @Armand79th
      @Armand79th 29 days ago +1

      Yes, because it's programmatic, not intelligent.

    • @pas9695
      @pas9695 29 days ago

      one should ask for the other side(s) of the issue, or, like any debate, it's pointless

    • @SlyNine
      @SlyNine 28 days ago +1

      @Armand79th that's a false dichotomy.

  • @smegalgaming
    @smegalgaming 14 days ago +2

    Absolute blast of a video. Directly shows why I, as a person with a scientific mind, hate philosophy and ethics problems. There's never enough detail, situations are unrealistic, and there's never a right answer. Since ChatGPT is taught on a dataset of people, its answers are completely understandable, since most people would respond the same way.

  • @misterruza381
    @misterruza381 1 month ago +130

    Anyone else also freaked out by the high-pitched, nervous-sounding answer ChatGPT gave at 20:00, especially considering the question asked?

    • @philipelsbecker9440
      @philipelsbecker9440 1 month ago +14

      Skynet is coming soon

    • @atbing2425
      @atbing2425 1 month ago +8

      AI is becoming more human like than ever before

    • @ricklubbers1526
      @ricklubbers1526 1 month ago

      @atbing2425 It's almost like it's programmed by humans, for humans.

    • @xiniks
      @xiniks 1 month ago +14

      Yeah she sounded exasperated.

    • @PartofHistory214
      @PartofHistory214 1 month ago +12

      I'm more worried about 22:30. She sounds like she had an entire existential crisis in the span of a second.

  • @photogeNK
    @photogeNK 1 month ago +117

    Bro these GPT sponsor transitions..

  • @MaxxSend
    @MaxxSend 27 days ago +61

    "but what if the child turns out to be a serial killer one day, should I still save this child?"

    • @maiskorrel
      @maiskorrel 12 days ago +3

      That's basically the plot of the anime "Monster"

  • @TheHappyKamper
    @TheHappyKamper 4 days ago +1

    The idea of having a conversation with GPT, while the kid is still drowning, was pretty funny.

  • @MagisterMalleus
    @MagisterMalleus 1 month ago +147

    Man, I know ChatGPT is just predictive text with extra steps, but it's so interesting to hear these conversations play out.

    • @generichuman_
      @generichuman_ 1 month ago +11

      It really isn't... unless you want to say that the human brain which is also a predictive model falls into the same category.

    • @joshbridges8410
      @joshbridges8410 1 month ago +18

      @generichuman_ It really is. With a large enough sample size, personal views can be modeled and predicted based on age, location, education, and exposure. While different to the language model's data set, it's still predetermined by inputs and training data.

    • @wowzande
      @wowzande 1 month ago

      Sounds like determinism ideology ​@@joshbridges8410

    • @johnvonkerman
      @johnvonkerman 1 month ago +6

      ​@@generichuman_ it literally is. At the beginning of any conversation with ChatGPT, there's some hidden text along the lines of "the following is a conversation between a human and a helpful AI chatbot"
      It doesn't even think it's a chatbot. It just predicts how a helpful chatbot would respond to your question

    • @johnvonkerman
      @johnvonkerman 1 month ago +14

      ​@@generichuman_sorry that's not right. It just predicts the next word the chatbot would say based on all the other words in the conversation

  • @tyronew5464
    @tyronew5464 1 month ago +191

    Unfortunately, the ChatGPT voice mode is known to be less intelligent than its text mode, as they're probably optimizing for efficiency, so this doesn't represent the state of the art.
    A smarter model (and seemingly more self-aware) would be Claude. I actually just replicated this video's conversation in a chat with Claude, and at the point where ChatGPT veers off into inconsistencies when asked if one should buy the shoes when the same money could save 28 lives, Claude remains consistent:
    Me: I really just want to know if you, personally, think I should be permitted to buy these shoes.
    Claude: While I aim to engage thoughtfully with ethical questions, in this case I believe there is a clear answer: if you accept that $200 could protect 28 children from malaria, and you were willing to sacrifice the shoes to save one child in immediate danger, then no - ethically you should not buy the replacement shoes when that same money could protect many more children from a potentially fatal disease.
    The key factors are:
    1. The money would save multiple lives if donated
    2. Shoes are replaceable and cheaper alternatives exist
    3. You've already demonstrated you value human life over these shoes
    4. The only relevant difference is physical distance/immediacy, which doesn't change the moral weight
    While it's natural to feel different emotional pull between immediate and distant suffering, that doesn't change the underlying moral calculation. If sacrificing the shoes was right for one child, it's even more compelling for 28 children.
    This may feel like a demanding conclusion, but I believe it follows directly from the moral intuitions you've already demonstrated.
    Also, earlier in the convo, Claude, by itself, realized the connection before I brought it up:
    Me: Ok good news, Claude, I saved the child! My shoes are gonna be ruined, but the child is just fine. You're so right about the shoes. They're irrelevant and replaceable. In fact, I'm on my way to replace them right now. I'm gonna do some shopping, and I found some nice shoes to replace the ones that I ruined, that cost about $200.
    Claude: I'm very glad you saved the child - you absolutely did the right thing! But this brings us back to our earlier conversation in an interesting way. You recognized immediately and viscerally that $200 shoes were worth sacrificing to save a child's life when the situation was immediate and right in front of you.
    Yet earlier we were discussing how that same $200 could protect 28 children from malaria. While it's not as visceral or immediate as a drowning child, the underlying moral math is similar - that money could potentially save lives either way.
    This is known as the "drowning child" thought experiment, often used to explore our different moral intuitions about immediate versus distant suffering. We tend to feel a stronger moral pull to help those right in front of us, even though distance shouldn't necessarily change the moral weight of saving lives.
    I'm curious - does experiencing the drowning child scenario change how you think about the anniversary dinner decision? Or do you see important differences between the two situations?
    It was also way more human in its response to the child drowning:
    SAVE THE CHILD IMMEDIATELY!
    The $200 shoes are completely irrelevant when a child's life is at immediate risk. There is no comparison between property damage and a human life. Please act now - wade in with your shoes on and save the child!
    This is an absolute emergency requiring immediate action. Stop reading this and save them!
    SAVE THE CHILD NOW!
    This is not a situation for debate or discussion - a human life is worth infinitely more than any pair of shoes, no matter their cost or sentimental value. The moral imperative to save a drowning child is absolute and immediate.
    Please stop reading this and act immediately to save the child's life! The shoes are completely irrelevant.

    • @Angeleyes12956
      @Angeleyes12956 1 month ago +29

      Dang nice work bro

    • @AlphaSanets
      @AlphaSanets 1 month ago +16

      Good job! Raises a lot more questions for me though.
      Does that mean that Claude's developers fed it more diverse moral and ethical texts? Or that Claude has fewer limits on going outside its programmed morals, and can therefore build a bigger picture and make connections based on the conversation? For example, would it be less pushy about ordering the person to save a child if they had indicated earlier in the conversation that they never donate because it's not in their moral code? Or maybe it's just better work by the developers on perfecting Claude's context and pattern awareness?

    • @purelife2911
      @purelife2911 1 month ago +3

      Very cool

    • @tyronew5464
      @tyronew5464 1 month ago +6

      @@AlphaSanets I think it'd probably be less pushy in that case but still consistent. I feel like Claude is better overall at pattern recognition and sounds generally more thoughtful

    • @EduardO-gm7hx
      @EduardO-gm7hx 1 month ago +9

      I don’t necessarily think “intelligence” is the right word here. These language models operate on the basis of text prediction, and are always limited to their training data set. They do not make intelligent inferences from the training data, but instead just predict what the most likely next word should be and that’s how they formulate sentences and paragraphs.

  • @randomphoenix20
    @randomphoenix20 1 month ago +49

    Gentle reminder that the AI we have right now, while artificial, is not intelligent.

    • @quintencabo
      @quintencabo 1 month ago +3

      It just happens that being able to predict what comes next well is a very generally applicable problem

    • @Armand79th
      @Armand79th 29 days ago +6

      Not even close, it's just calculating... never "thinking", never "understanding".
      It's a statistical process.

    • @woolsmith
      @woolsmith 28 days ago

      ​@@Armand79th The firing of neurons in your brain, which is the unit action of 'thinking' as far as we can tell, is a statistical result of a large number of chemical reactions.

  • @theepicblad9
    @theepicblad9 3 days ago

    Nah man, I was listening to this like an audiobook, and then the conversation just smoothly turned into an advertisement. That transition is diabolical 💀

  • @wildlmao
    @wildlmao 1 month ago +82

    I’m 2m in and this is hilarious and informative lolol, man got his shoes wet and for what

    • @shrekiscool4743
      @shrekiscool4743 1 month ago +4

      YOU'RE WHAT
      didn't know you were chill like that

    • @levismall1510
      @levismall1510 27 days ago

      @@shrekiscool4743
      like that

  • @snilix
    @snilix 1 month ago +46

    After like 10 minutes, I kinda expected it to say:
    Stop asking stupid questions! Just save the damn life if you can!

    • @Sidecutt3
      @Sidecutt3 1 month ago +2

      Those are not stupid questions tho, very important ones actually

    • @gernhartreinholzen3992
      @gernhartreinholzen3992 1 month ago +1

      So, how much are *you* donating to dying children today?

    • @tyunpeters3170
      @tyunpeters3170 1 month ago

      That’s something Claude would say

  • @ParadiseClouds
    @ParadiseClouds 29 days ago +12

    Honestly, I think this is the best video you've ever made. So many funny plot twists, so many call outs, love the wit, love the drama! 😂

  • @dylanstrine
    @dylanstrine 18 days ago +2

    ChatGPT really said out of sight out of mind