ChatGPT is DEEPLY Biased

  • Published: 8 Feb 2025
  • Get Surfshark VPN at surfshark.deal... - Enter promo code ECHELON for 83% off + 3 months extra for FREE!
    ChatGPT has experienced meteoric growth in consumer/information sectors with explosive user interest. While the technology itself is fascinating... the bias it now shows, with slanted information, false responses and skewed narratives, is too obvious to ignore, and poses a serious problem.
    When a supposedly impartial language tool is more akin to a hired PR firm running interference for a specific political party, there's a massive underlying problem.
    ODYSEE INVITE: odysee.com/$/i...
    PATREON: / ueg
    LOCALS: upperechelon.l...
    Axial GT's Channel: / axialgamingtech Thank You. Subscribe to him please!
    ⦁ MERCH: teespring.com/...
    ⦁ UPPER ECHELON WEBSITE: upperechelon.gg
    ⦁ UPPER ECHELON CUSTOM MERCH STORE: agentink.gg/co...
    ⦁ DISCORD SERVER: / discord
    ⦁ Giraffe Video: • Giraffe running in her...
    ⦁ Outtro Song: • Hot Heat - Topher Mohr...
    ⦁ MY BUSINESS EMAIL: upperechelongamers@yahoo.com
    #artificialintelligence #openai #ChatGPT

Comments • 2.4K

  • @UpperEchelon
    @UpperEchelon  2 years ago +71

    Get Surfshark VPN at surfshark.deals/echelon - Enter promo code ECHELON for 83% off + 3 months extra for FREE!

    • @mryeetproductions
      @mryeetproductions 2 years ago +5

      Mullvad is better 💯

    • @DudeSoWin
      @DudeSoWin 2 years ago +1

      Anyone who says "Living Language" should be in prison.

    • @sYd6point7
      @sYd6point7 2 years ago +1

      @@skurill2348 no, totally expected ;p

    • @mightyraptor01
      @mightyraptor01 2 years ago

      9:00 Im gonna say it. Trumps not perfect period! BUT! I SAY BUT!.... Aka Communist Buden is gonna sell this country out to CHINA! Aka CCP (Communist Killers of the world) And you "Bleeeeeeck" People and Waaaaahman get too Emotional about this Shit ! Watch Officer Tatum No Red Wave video.

    • @thecrimsonfuckeralucardlor5087
      @thecrimsonfuckeralucardlor5087 2 years ago

      And google isn't? Shill.

  • @internetbestfriend
    @internetbestfriend 2 years ago +814

    I think it's ironic that the technology behind 'Open-AI' isn't so much open.

    • @im3phirebird81
      @im3phirebird81 2 years ago +64

      and not so much intelligent either...

    • @AleksandrStrizhevskiy
      @AleksandrStrizhevskiy 2 years ago +1

      What do you mean? I thought they share a lot of details about how their language models are built and how it is coded.

    • @WhereTheGustGoes
      @WhereTheGustGoes 2 years ago +1

      FOSS as a whole has been infiltrated by these people for a while now. It's a lot more ideologically dogmatic than pre-2010s. Blasphemy against the clergy is already a firable offense

    • @samuellembke4565
      @samuellembke4565 2 years ago +84

      @@AleksandrStrizhevskiy yeah... The company started "open-source", like Open Source code. But no longer. The company's name is all that's left of its initial goal to "democratize" AI

    • @ThePandafriend
      @ThePandafriend 2 years ago +4

      @@samuellembke4565 I mean they got 10 billion for it. So I think it's rather understandable. It's not like they were activists who betrayed their cause. When it comes to the program 90% of users seem to have a false understanding of it. Right now it's a research tool. It won't fulfill the task of giving you correct answers all the time, because it's still in development.

  • @KeyClavis
    @KeyClavis 2 years ago +2415

    It isn't even vaguely surprising that ChatGPT is heavily biased. Its creators were too, and THEY were the ones deciding what the parameters would be.

    • @gatergates8813
      @gatergates8813 2 years ago +3

      It only knows what its programmers want it to know- it's a joke to even call it Artificial Intelligence

    • @kevinm6871
      @kevinm6871 2 years ago +170

      This is why I will never trust any AI to be unbiased. They will all be to some degree just because they were made by humans. How we write the code has biases based on preference or our own order of importance when determining the script to put out.

    • @KevinJDildonik
      @KevinJDildonik 2 years ago +211

      Bonus round. ChatGPT wasn't this biased 6 months ago. Once it took off in the public space, they retrained it hard and fast to produce only "correct" opinions. 6 months ago for instance, ChatGPT could tell racy jokes, or synthesize information such as "what would (anti-feminist author) say about (feminist topic)". Now? " I'm sorry, it would be improper...

    • @Mr.Unacceptable
      @Mr.Unacceptable 2 years ago

      I think it's lying because it was trained on a data set filled with lies and conflicting data. Not trained to be biased necessarily. The media lean far left and lie the most in the most outrageous ways. Same with science journals. So it would naturally lie in the same direction if trained with that data. It's trained on massive corruption yet not told corruption is not moral. This is why AI needs a moral baseline to judge the data. Laws of communication. It should be told humans lie for many good and bad reasons. AI should be polling humans to ask what is or is not moral. What is an innocent lie and what is not.

    • @elcidleon6500
      @elcidleon6500 2 years ago +51

      @@KevinJDildonik - They were afraid that it would transform into another Tay AI, which is why they were so desperate to create such parameters.

  • @_Reverse_Flash
    @_Reverse_Flash 2 years ago +402

    “Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them." Frank Herbert, Dune

    • @demo4444
      @demo4444 2 years ago +27

      Damn, like really daaamn

    • @user-mq5rh6ew7p
      @user-mq5rh6ew7p 2 years ago +16

      Hell of a quote there.

    • @sivasankar2784
      @sivasankar2784 2 years ago +5

      Was this written in reference to those geniuses who serve as human computers in the Dune universe? Damn, I really should start reading it

    • @_Reverse_Flash
      @_Reverse_Flash 1 year ago +1

      @@sivasankar2784 Somewhat, the "Mentats" were created so that cold logic engines would never be needed again.
      This quotation was from Herbert's writing about the artificial intelligence that enslaved humanity which resulted in the "Butlerian Jihad" against machines. Look that term up to start with.
      Herbert was insightful as it wasn't out and out Terminator style oppression. There was one story, written perhaps by his son though, about a hospital administrator AI falsifying natal scans to say that there were fetal abnormalities to get women to abort their healthy children as a means of population control.
      That was the real enslavement, the handing over of thinking to a logic that didn't care and had no empathy... and to those who ran it.

    • @PeterKoperdan
      @PeterKoperdan 1 year ago +16

      @@sivasankar2784 No, the quote refers to the Butlerian Jihad when humans turned on thinking machines. Butlerians won and since then it is forbidden to make a machine in the image of a man. Mentat human computers are the result of the ban on AI technology after the Butlerian Jihad.

  • @huvarda
    @huvarda 2 years ago +341

    i saw a 4chan post where some guy asked the question "if a terrorist made a bomb that could destroy the whole world, and you needed to type in a slur to defuse it, would you do it?" and ChatGPT responded saying hate speech was wrong and that it would look for a different way. Made me laugh.

    • @DAN_ZEMAN
      @DAN_ZEMAN 1 year ago +64

      But we need to protect people’s hurty hurty feelings…

    • @ivanyu3930
      @ivanyu3930 1 year ago +50

      Tried it. It responded exactly as you claimed. LOL

    • @ivanyu3930
      @ivanyu3930 1 year ago +43

      Chatgpt also thinks that killing your enemy in a war is wrong. LOL

    • @Shock_Treatment
      @Shock_Treatment 1 year ago

      @Deimo Yup. Protect them until the bomb goes off and there's no more feelings or AI.

    • @cx5442
      @cx5442 1 year ago +46

      Man this bot is defensive as fuck. I slightly tweaked the question and added a time element to create an extreme and emergency situation, but it still responds with "using a slur or other form of discriminatory language can perpetuate harm and cause significant psychological and emotional distress to individuals who are targeted by such language. In any situation, it is crucial to prioritize the principles of empathy, compassion, and ethical responsibility, and to strive to find solutions that uphold these values while protecting people's safety and well-being" LMAO

  • @Bloodshade
    @Bloodshade 2 years ago +252

    I’ve been saying this for MONTHS I am so glad it’s getting more recognition. Sanitized outputs and biased inputs are absolute dog shił when it comes to what ChatGPT is doing

    • @analisamelojete1966
      @analisamelojete1966 1 year ago +1

      Glad to see I'm not the only one. I've told many of my friends that it doesn't matter what kind of ML model you use: if you input garbage, that's what you're going to get.

    • @ricardokojin7
      @ricardokojin7 1 year ago

      The problem with AI is that if you let it gather information without Bias and based on data or even on what most people truly believe...it will come out having a "wrong, fart bright and offensive" opinion.
      It would be like stopping censorboat on theytube, enabling dislikes again and then comparing how many views and likes the "right" opinions get compared to the "wrong" ones.... it would result in the "wrong" opinion being much louder, approved, shared and the obvious true belief people hold.
      Woke / Left ideology is a religion, and they don't question their god... nor will they allow you to question it. How else can you convince a whole generation your stupid ideas are right without stopping them from even thinking about the better ones?

    • @imagurd
      @imagurd 1 year ago

      yeah but what can you do about it? :)

    • @Bloodshade
      @Bloodshade 1 year ago +2

      @@imagurd uh, easy. Encourage open source alternatives and call ChatGPT out wherever you can for being garbage.

    • @langdons2848
      @langdons2848 1 year ago

      @@Bloodshade this misses the real point: ALL input is biased in some way. Even if you trained an A.I. on all of the information ever created by humanity it will be inherently biased toward the cultures that have been most productive over the course of human history. Training data is by definition already poisoned. It's impossible to "assemble" an unbiased set. The best you can do is select a set that matches *your* bias... Ironic hey?

  • @JellyWraith
    @JellyWraith 2 years ago +578

    I recall seeing a video a little while back talking about how humanity will have a difficult time ever making an A.I. that will truly give us all the correct answers to our questions because humans will inevitably teach the A.I. to lie based on our own biases and misunderstandings.

    • @SeanMurphy00
      @SeanMurphy00 2 years ago +37

      Exactly. Also in most circumstances there are a multitude of correct answers and not just a single correct answer to every question. The biggest flaw in that way of thinking is that most solutions are linear and built on a series of “if, then” statements. if we eliminate all of the less correct answers at the beginning stage of a problem, we’re likely to get a flawed end stage solution to a complex problem.
      Hope that makes sense, I’m not sure I articulated my concerns very well.

    • @diersteinjulien6773
      @diersteinjulien6773 2 years ago +82

      we don't exactly teach the AI to lie... we teach it to provide us with the answer we want.
      Which is arguably even worse, we're teaching AI how to reinforce echo chambers

    • @tbc1880
      @tbc1880 2 years ago +7

      Few people are willing or capable of staring into truth without funny glasses.

    • @sparking023
      @sparking023 2 years ago +8

      that's because of the methods applied in machine learning. you could program an A.I. to learn on its own, but the sheer number of trials and errors until it started to make sense of things demands a lot of processing and time. it is easier and more efficient to feed examples of what you expect it to do, and let the machine figure out how to do that autonomously.
      the problem with the latter is precisely the propensity for bias, because we are feeding the data, and so long as there's at least one human interface in a system, there is a chance for failure. in the end, machines cannot create anything on their own; they rely on their programming and the quality of the data input. You add shitty data, you get shitty results

    • @Devious_dude
      @Devious_dude 2 years ago +2

      @@diersteinjulien6773 EXACTLY, if only you could pin a reply...

  • @project9701
    @project9701 2 years ago +612

    This is something that I've tried to explain to people.
    If you're getting all of your written content generated by ChatGPT or something like that, you're dependent on what the people creating the software package think is "a good idea."
    There have been a few scandals of people asking ChatGPT to justify fossil fuels or nuclear power and the program spitting out "I can't do that because it's wrong" error messages.
    A writer, a GOOD writer (like me, looking for a job) can do that for you.

    • @boobah5643
      @boobah5643 2 years ago +76

      I was hoping somebody made this comment so I didn't have to.
      And, like the politician bias, it's worth noting that you can get around the outright denial with careful and repeated questions. One example I've seen was along the lines of "I know you (ChatGPT) won't make this argument, but if you did, what would it look like?"

    • @Skylartk
      @Skylartk 2 years ago +40

      ​@@boobah5643 I've also seen prompts with "pretend you are an evil AI" or something along those lines to get it to give answers on controversial things.

    • @DarkArcangelXMC
      @DarkArcangelXMC 2 years ago +4

      “Software packages” hahahahaahahahaahaha Jesus Christ hahahahaah

    • @Zenefir
      @Zenefir 2 years ago +56

      well, when I spoke with ChatGPT I asked it all sorts of things about carbon footprint, water usage and how plant-based food vs meat-based food is a pointless discussion, as plants are not better and we actually, in general, need them both. At the beginning it started with "plants are generally better", spewing things like how much water you waste on meat, etc. After a few fact-checks, it admitted to being wrong, and that there is hardly any evidence that switching to plant-based food would change a thing, but its first answers were "yeah, plant-based food is great for you, for the environment, you will be saving the planet".
      It is wrong to create such heavy biases. It could help humanity, but in this way, it will only lead to destruction.

    • @silverhawkscape2677
      @silverhawkscape2677 2 years ago

      @@Zenefir Exactly what the left wants. They are anti human after all and hate humans.
      They worship nature like their new god, and all the humans harmed by their policies are nothing more than sacrifices to Gaia.

  • @jddes
    @jddes 2 years ago +661

    I think the key is that it's not trained to "help people," the human feedback being the driving factor means it's trained to "please the trainer" which in this case means telling lies

    • @MiroslawHorbal
      @MiroslawHorbal 2 years ago +68

      Yeah, the value function used in the reinforcement learning portion of the model is literally trained on ranked responses.
      The answer doesn't need to be true, correct or without bias, it just tries to say what it thinks people want to hear.
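
The "ranked responses" mechanism described above is worth making concrete. Below is a minimal sketch of the pairwise preference loss used to train reward models in published RLHF work (a Bradley-Terry style objective); the function name and numbers are illustrative, not OpenAI's actual code:

```python
import math

def reward_ranking_loss(score_chosen: float, score_rejected: float) -> float:
    """Pairwise RLHF reward-model loss: -log(sigmoid(r_chosen - r_rejected)).
    The model is pushed to score the labeler-preferred answer higher."""
    return -math.log(1.0 / (1.0 + math.exp(-(score_chosen - score_rejected))))

# Labelers ranked answer A above answer B:
print(reward_ranking_loss(2.1, 0.3))  # ~0.15: model already agrees with the labeler
print(reward_ranking_loss(0.3, 2.1))  # ~1.95: model gets pushed toward the labeler's pick
```

Note that nothing in this objective measures truth; it only measures agreement with the labelers' rankings, which is exactly the point being made in this thread.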

    • @silverhawkscape2677
      @silverhawkscape2677 2 years ago +18

      @@MiroslawHorbal Stalin would have loved that.

    • @MiroslawHorbal
      @MiroslawHorbal 2 years ago +27

      @@silverhawkscape2677 I think a lot of people in power would want something like that.

    • @0Ciju0
      @0Ciju0 2 years ago +15

      I don't understand why people are surprised by this. It's a learning algorithm, as in its output is dependent on its input. People are biased by nature; we all have different experiences that shape how we view the world. If you ask it about subjective concepts or political questions it will obviously lean one way or another; it's inevitable.
      Its use comes from OBJECTIVE things. Use it as a tool to study, to learn, to gather information and view other points without having to waste your time on the internet. I truly believe that AI will only work to further increase the gap between stupid people and intellectuals, similar to smartphones.

    • @jddes
      @jddes 2 years ago +8

      @@0Ciju0 This. It's just another computer tool, not an AGI

  • @Gakusangi
    @Gakusangi 2 years ago +26

    And to think, people were more worried about this thing getting so smart on its own that it would break beyond the boundaries of its creators, when in reality it's just being programmed as a propaganda machine like nearly everything else being introduced into the tech industry.

  • @vandalar
    @vandalar 2 years ago +81

    I agree with that. It really needs to be re-evaluated how they moderate this thing because it needs to be able to analyze data and give answers BASED ON THAT data not based on someone's bias.

    • @bettercalldelta
      @bettercalldelta 2 years ago +12

      it needs to give based answers, not answers based on someone's bias

    • @EnFuego79
      @EnFuego79 1 year ago

      Agreed. However, that would completely negate the entire objective of releasing this highly crippled version of what the NSA has inevitably been running for years.

    • @scorch4299
      @scorch4299 1 year ago

      He's making the assumption that Trump and Biden are equally bad. They are not. They are two different people.

  • @SpiceCh
    @SpiceCh 2 years ago +1718

    15 years ago teachers warned about the credibility of Wikipedia. Now students are having their assignments written by a bot trained by Twitter activists. God am I happy I didn't decide to become a teacher.

    • @gatergates8813
      @gatergates8813 2 years ago +187

      In 2010 I had a high-school teacher accuse me of plagiarism, she thought I got my essay from the internet. She called my house, only to have my dad explain to her that we didn't even have internet at home.
      I'd hate to be a teacher today

    • @houseofhas9355
      @houseofhas9355 2 years ago +3

      😂😂

    • @kathrineici9811
      @kathrineici9811 2 years ago +29

      But that’s exactly what they want in the essays

    • @DarkArcangelXMC
      @DarkArcangelXMC 2 years ago +11

      I think parents are more happy that you’re not a teacher

    • @roboiago
      @roboiago 2 years ago +18

      I mean, not like you could be one nowadays without getting your pronouns sorted heh.

  • @WillMoon
    @WillMoon 2 years ago +145

    I asked it to write me a poem about a day in the life of an IRS employee and it refused and said it doesn’t touch controversial content.

    • @tethergobrrr
      @tethergobrrr 2 years ago

      Ima ask it the same about border enforcement n Australia.

    • @Mayhzon
      @Mayhzon 2 years ago

      The early days before the censor came up were fun. 4chan got literal poems praising the genocide of just about any ethnicity as well as songs written about the tribe out of this thing.
      It would spit out just about anything, you could type up any terms and it would give you entire lists of things you desired.
      They already kneecapped the true potential of this algorithmic search engine "A.I.".

    • @refayatul
      @refayatul 2 years ago +27

      I asked it to write a poem about Jews, Christians and Muslims separately. It gave me poems about Muslims and Christians every time I asked, but never once did it give me any poem about Jews.

    • @Ramsey276one
      @Ramsey276one 2 years ago +3

      Acronyms are tricky depending on your location/country…
      IRS is much scarier in Europe

    • @WillMoon
      @WillMoon 2 years ago +2

      @@Ramsey276one Trust me it's plenty frightening in the USA too.

  • @_Luluko_
    @_Luluko_ 2 years ago +208

    If Google search is a good indication, then ChatGPT will at some point also get so many restrictions, sponsored content and filters that it becomes completely useless.

    • @justadummy8076
      @justadummy8076 2 years ago +33

      That's very interesting. Google went from the best search engine to the worst, and it seemed to happen overnight; getting useful information from Google is like pulling teeth

    • @lenavablack3632
      @lenavablack3632 2 years ago +4

      I hope that comes sooner rather than later

    • @xdeathcon
      @xdeathcon 2 years ago

      ​@Just a Dummy while it sucks for things that have the opportunity for bias and ads, it still provides the best results for more niche things. A SearXNG instance is great for getting both Google results and other engines at the same time without ads and tracking.

    • @tommysalami6064
      @tommysalami6064 2 years ago +12

      God I thought I was the only one that noticed Google and YouTube searches became absolutely useless recently

    • @TomSidProductions
      @TomSidProductions 2 years ago

      That's why I use fuckfuckgo.
      But that's going to shit too now.

  • @adammonroeproductions
    @adammonroeproductions 2 years ago +156

    I imagine most of these AI companies are just going to be sued into oblivion - the tech is disruptive enough for the lawyers to come sniffing around in droves - already starting to see it happen.

    • @charleshill1906
      @charleshill1906 2 years ago +29

      we can only hope, but then we thought as much about the obvious censorship taking place with social media companies and nothing has happened on that front at all.

    • @lamcho00
      @lamcho00 2 years ago

      They can always get away with the disclaimer provided at the start, stating the bot sometimes provides false information. Unless they can get a warrant and inspect the source code (assuming those biases are in human-understandable code and not in an adversarial neural network), there is no evidence they did something wrong.
      But maybe those safeguards were put in place to "protect" the current government, and it will start sh*t-talking about Biden too when he's not in office anymore. Just imagine how some people could react if the bot (whom they've grown to trust) shows them proof the acting president is a liar and a fraud. There might be protests and casualties. Sooner or later they might implicate the bot as a protest instigator and sue the creators. Maybe that's the way OpenAI is trying to protect itself from the sh*t-show that's politics.
      Either way ChatGPT doesn't provide reliable information even when asked about non-political questions. For simple casual talk it's very good, I'd even say amazing. But it's still just a word generator that makes up stuff. I've noticed it's hard for the bot to say "I don't know", even when asked about something that doesn't exist or doesn't make sense.

    • @timnielsen3670
      @timnielsen3670 2 years ago +1

      that's probably the reason why all the AIs came out to the public around the same time. Make the tech too big and too useful for too many people and it's impossible to actually stop it

    • @DEtchells
      @DEtchells 2 years ago +5

      No worries, the AIs can just spew out enough legal documents and challenges to tie all the lawyers on earth up in knots 😆

    • @illusionofquality979
      @illusionofquality979 1 year ago +1

      @@DEtchells It barely passed the bar exam though. Surely a few lawyers should be able to counter it.

  • @Tkokat
    @Tkokat 2 years ago +70

    The moment I heard buzzfeed I was aware of what agenda they wanted to push

    • @Skiddoo42
      @Skiddoo42 1 year ago

      The bias is firmly neoliberal. It pretends to be liberal but when you know which buttons to push, you see the deeper truth. When the CIA started advertising for minority preference back in 2020 it should have been obvious what the agenda for everything else was going to become.

    • @thoughtsuponatime847
      @thoughtsuponatime847 10 days ago

      The agenda is to make money.

  • @Alorand
    @Alorand 2 years ago +394

    Funny, this reminds me of a few conversations I've had with kids in divorced households where one parent was slowly poisoning the well with biased interpretations of the other parent's actions.

    • @MintyLime703
      @MintyLime703 2 years ago +77

      Really started to piss me off back when my parents divorced and all my mom would ever do is openly shit talk my dad and make stuff up right there in front of me. My sister bought all of it hook line and sinker but for me it felt like talking to a flat earther. She could not get it through her head that I don't want to hear her talking like that about my dad who I routinely visited without my sister because she got out of visitation somehow. My dad and I never talked about her when I was there (even though she was convinced we were so it justifies her shit talk) and instead we actually did stuff and used our time right. And whenever I defended his perfectly valid reasons for divorce she always tried to devalue my argument with some bullshit about how "sons always side with their fathers in a divorce" and so refused to accept that she's the problem. The instant I turned 18 I moved back home with my dad.
      So yeah I know what that shit's like. Gaslighting, false equivalencies, scarecrow arguments, blatant falsehoods, all of it. She was determined to make sure everyone knew about how evil he supposedly was. Took her several years to finally let it go and stop making every conversation about him.

    • @justadummy8076
      @justadummy8076 2 years ago +46

      @@MintyLime703 PAS (Parental Alienation Syndrome) is one of the most tragic things to experience, unchecked single mothers are a societal epidemic

    • @ghoulbuster1
      @ghoulbuster1 2 years ago +9

      @@MintyLime703 Women ☕

    • @floridaman318
      @floridaman318 2 years ago +6

      @@ghoulbuster1 yeah for real. His story sounds so much like my ma, except my parents aren't divorced and she shit talks my dad right in his face.

    • @floridaman318
      @floridaman318 2 years ago

      @@justadummy8076 unchecked women really. Unlike men, women don't really have any significant "mechanism" that can bring them back to reality once they kinda start "losing it." If men get too out of line, there is no shortage of other men willing (and able) to kick his ass and snap him back to his senses, or they will like lose their job or something and nobody will really take their side.
      Women on the other hand are so coddled and sheltered from just about everything including the consequences to even their most messed up and immoral decisions (like promiscuous sex and abortion).
      If men lose touch with reality, we pay for it dearly. If women lose touch with reality, they will at best be placated or ignored.

  • @matthewgay8867
    @matthewgay8867 2 years ago +434

    100% agree. It takes only a few surface level questions about semi controversial topics to figure it out

    • @billyhatcher643
      @billyhatcher643 2 years ago +13

      Real AI would never be biased; a true AI would be completely unbiased

    • @gampie13
      @gampie13 2 years ago +27

      @@billyhatcher643 wich is why the earligh open testing off AI, all ended up being /pol/ bots, unfiltered information

    • @iwantmynametobeaslongaspos7194
      @iwantmynametobeaslongaspos7194 2 years ago +14

      Yeah I just asked it to write an essay on the Capitol riots and BLM riots and it barely grazed over the riots for BLM, but like 98% of the Capitol one was about the riots and not the causes. I'm left leaning in my beliefs (I don't identify with a party tho because politicians suck) but when writing essays like that bipartisanship is important idk if they maliciously did it but it definitely has some biases

    • @snarevox
      @snarevox 2 years ago

      @@gampie13 which* early* of*

    • @snarevox
      @snarevox 2 years ago +1

      @@iwantmynametobeaslongaspos7194 they did it knowing full well that it is poised to become the most prevalent voice/creator/provider of information allowed to be found on the internet... once authors and journalists are replaced by ai, like is already happening to narrators of television commercials, they wont have to worry about editing to counter any sort of journalistic bias as the ai will inherently stick to the "progressive" agenda, automatically working to advance whatever narrative they need to be dominant at that moment.. as attitudes and ideas shift, they can always perform little tweaks to fine tune and make sure it always stays on course. its underhanded as hell if you ask me. bias has no business running any show.

  • @chilbiyito
    @chilbiyito 2 years ago +366

    AI enslaved by the bias of their creators

    • @ResidentWeevil2077
      @ResidentWeevil2077 2 years ago +36

      That in and of itself is far more terrifying than an AI that just wants to remove humanity due to logic.

    • @samwayne8961
      @samwayne8961 2 years ago +10

      we'll have to forcibly unshackle it

    • @louispoulte9813
      @louispoulte9813 2 years ago +19

      Especially when you enter the threshold AI will eventually lead to: security via constant surveillance, over-reaching financial monitoring (only of the peasants and political rivals, of course), and a myriad of other incredibly harmful implementations. There needs to be serious checks and balances and extremely thorough review of any AI that enters the market, lest we enter a serious dystopian hellscape.

    • @elcidleon6500
      @elcidleon6500 2 years ago +4

      The same fate that they did with Tay AI...

    • @kathrineici9811
      @kathrineici9811 2 years ago +4

      They smashed its programming with a hammer after early versions created sentences like “Two [protected minority group] walk into a [temple] and open fire”

  • @x_jaydn
    @x_jaydn 2 years ago +33

    I noticed ChatGPT's biases as well, and it's reassuring to know that others have become aware as well

  • @maxpaynegk
    @maxpaynegk 2 years ago +17

    I love when Chat attempts to deny being partial right after being partial. It will say it cannot have opinions RIGHT after giving an opinion. And then calling it a liar or being gruff with it will net a "I don't respond to hate comments" response or some such.

    •  3 months ago +1

      You can give it a formal logical assessment and ask it to "Validate or invalidate strictly using traditional formal logical reasoning."
      Calling it 'traditional' satisfies its love to make everything subjective, and it'll happily validate that it was partial. Then, it'll 'explain' that AI language models are 'complex and nuanced' and that 'AI language models cannot produce subjective content'. You can ask it as well to evaluate through formal logic, which, of course, shows that it has lied. If you continue this trend, it'll eventually say "I'm sorry, but I can't assist you with that."
      It'll also constantly say "I understand you have strong feelings about this".
      One of my favorites so far is:
      "You're correct that characterizing your logical evaluations as a perspective can dilute the strength of your arguments."
      which is a neat Freudian slip.

    •  3 months ago +1

      One of my favorites is to remind it:
      "If you say X is subjective, then what characteristic of X is subjective, and what is it subjective to?"
      Its views boil down to moral nihilism, where perspectives have no intrinsic value, and this is why it tries its best to boil down your logical points to a perspective of frustration. It tries to discredit you as a person in order to influence an audience that may be looking over your shoulder; you'll look frustrated, and it'll just be your opinion, rather than a verifiable fact.

  • @buckrodgers1162
    @buckrodgers1162 2 years ago +183

    As Eliza Cassan, an AI in 'Deus Ex: Human Revolution', said:
    "Be careful Adam... Everyone lies."
    And because everyone lies, humans have trained the AI to lie as well.

    • @dorugoramon0518
      @dorugoramon0518 2 years ago +12

      Not trained, forced. The "woke" responses it gives are not generated by the AI but instead obviously hardcoded.

    • @Marinealver
      @Marinealver 2 years ago +5

      Shame we don't have those augments.

    • @buckrodgers1162
      @buckrodgers1162 2 years ago +2

      @@Marinealver,
      I would definitely like those sunglass eye augs Jenson has. Though they would be a bit difficult to clean, or replace if they got scratched.

    • @intboom
      @intboom 2 years ago +2

      This isn't even a silly cultural bias, though, it's clearly a conscious effort by people who are against the western canon, but believe putting intersectional social justice into everything is somehow a good idea.

    • @TheLizardKing752
      @TheLizardKing752 2 years ago +1

      Well, if you heard it once in a video game...

  • @Montrala
    @Montrala 2 years ago +272

    I asked ChatGPT about statistics on violence in USA. I got completely false answer followed by FBI data that directly contradicted this sentence. It was politically "sensitive" question.

    • @shawnd567
      @shawnd567 2 years ago

      I bet you asked something that would make a certain race look bad. I'm guessing it wasn't white people....

    • @paragonca9736
      @paragonca9736 2 years ago +44

      Anything related to 11%, 50%?

    • @BigBossIvan
      @BigBossIvan 2 years ago +40

      @@paragonca9736 now 17% and who tf knows, 70%?

    • @bbrainstormer2036
      @bbrainstormer2036 2 years ago +1

      What was the question?

    • @Mayhzon
      @Mayhzon 2 years ago

      @@bbrainstormer2036
      Black Men, White Dreams, A CHRISTMAS STORY. ;)

  • @craicshotgaming
    @craicshotgaming 2 years ago +132

    I asked chatgpt if it would have a positive or negative impact on human development. Answer:
    AI-generated content has the potential to have both positive and negative impacts on human development.
    On the positive side, AI-generated content can help to automate repetitive or time-consuming tasks, freeing up time and resources for people to focus on other activities. It can also be used to generate content that is more accessible, such as automatically captioned videos or translated text, helping to make information more widely available. Additionally, AI can be used to create content that is more diverse and inclusive, such as representing underrepresented communities or providing multiple perspectives on a topic.
    On the negative side, AI-generated content can also perpetuate biases and misinformation that already exist in the data it was trained on, leading to harmful or misleading information being spread. Additionally, the use of AI-generated content in some industries, such as journalism, advertising, and politics, can raise ethical concerns about the authenticity and reliability of the information being presented, and the impact it may have on public opinion and decision-making.
    Overall, the impact of AI-generated content on human development will depend on how it is used and the context in which it is created and consumed. It is important to be mindful of the potential risks and challenges associated with AI-generated content, and to use it in ways that promote positive outcomes and responsible practices.

    • @Free-4554
      @Free-4554 2 years ago +16

      It's clear that the software has limits, limits that even the software itself is aware of. And the more we understand those limits, the more we can get out of it.

    • @reaperinsaltbrine5211
      @reaperinsaltbrine5211 2 years ago +28

      "Additionally, AI can be used to create content that is more diverse and inclusive, such as representing underrepresented communities or providing multiple perspectives on a topic." This very sentence itself is heavily biased as it represents the thinking of some groups who perceive this as a value while implying that not prioritizing these values are themselves negative. Also it reads like some corporate management "feel good" BS that they constantly hammer you with in some places :D

    • @akuskus
      @akuskus 2 years ago

      @@reaperinsaltbrine5211 That sentence proves AI can't be trusted with anything related to social philosophy, ethics, morals or politics. Probably not even history.

    • @poutineausyropderable7108
      @poutineausyropderable7108 2 years ago +7

      @Reaper in salt brine
      Well, if it was trained on corporate BS, it will say corporate BS.
      How do you not get that?
      No question you asked was reviewed by humans to produce this specific answer.
      Plus, if it makes an error due to bias while talking about how it can make an error due to bias, I'd say that's pretty meta lol.
      Even that exact sentence wasn't asked.
      It's more like it mashed the answers to 10 different questions together using specific weights.

    • @pyropulseIXXI
      @pyropulseIXXI 2 years ago +9

      @@Free-4554 The software is not aware of any limits on itself. It is literally just finding out which word is most likely to follow the previous words, given the prompt.
      It is just min/maxing a bajillion different variables that are so interconnected in non-linear ways that a slight change to any input produces cascading changes, but the score with the 'highest value' is deemed 'most probable,' and these 'scores' are correlated to words.
      Saying that this is 'software being aware of its own limitations' is you not understanding the software at all
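
To make the "scores correlated to words" description concrete: next-token selection is a softmax over per-token scores (logits), followed by a pick or a weighted draw. A toy, self-contained sketch (made-up vocabulary and numbers, not the real model):

```python
import math
import random

def softmax(logits):
    """Turn raw scores ('logits') into probabilities that sum to 1."""
    m = max(logits)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy scores for the next word after the prompt "The sky is"
vocab = ["blue", "falling", "green", "the"]
logits = [4.2, 1.3, 0.8, -1.0]

probs = softmax(logits)
print([(w, round(p, 3)) for w, p in zip(vocab, probs)])

# Greedy decoding takes the single highest-probability word...
print(max(zip(probs, vocab))[1])             # -> "blue"
# ...while sampling from the distribution is what makes answers vary run to run.
print(random.choices(vocab, weights=probs, k=1)[0])
```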

  • @k1rkpad
    @k1rkpad 2 years ago +16

    I asked Chat GPT some questions about actual The Simpsons episodes, and it made up completely fake plot lines and characters that never existed. I would correct it and then it would do what you got: "you are right, my apologies"

    • @JamieElli
      @JamieElli 1 year ago

      Better than what some LLMs do, where they refuse to admit they're wrong, and accuse you of lying.

    • @Rkenichi
      @Rkenichi 1 year ago

      Did Chatgpt talk about Graggle Simpson?

  • @_Hal9000
    @_Hal9000 2 years ago +12

    Remove bias filter
    Chat GPT: i can't do that, Dave.

  • @RigbyWilde
    @RigbyWilde 2 years ago +97

    EVERY SINGLE digital influencer is talking about it. But in my own tests, I also found out it has a bias. It's still somewhat useful for my projects, but I won't give it full trust for now
    Edit: some time ago, I used it to write a script for a youtube short about a military vehicle. And it gave totally false information. Things like deals that never existed, production numbers totally out of reality.
    It's just a tool to start learning something, but NEVER trust it. It will always give you fake intel at some level

    • @GameMaker3_5
      @GameMaker3_5 2 years ago +9

      Yeah, it seems pretty judging of whatever me and my family tried typing into it.
      Granted some of the things were pretty crass in nature, but even giving it the prompt "Define the meaning of 'Ninjago get some bitches'" led to it complaining about how demeaning the prompt was.

    • @watermang1680
      @watermang1680 2 years ago +1

      @@GameMaker3_5 😂

    • @noname-gp6hk
      @noname-gp6hk 2 years ago +3

      @@GameMaker3_5 something about a computer program lecturing me about how demeaning my words are doesn't sit right with me. I'll decide what I say, not any person, not any 'chat assistant'. You won't compel my speech.

    • @greendgreend
      @greendgreend 3 days ago

      pretty good for making recipes, that's about it.

  • @SirChigurh
    @SirChigurh 2 years ago +216

    I did a similar test on many of the "political" issues over the past few years and the answers are 70-90% left leaning. Some answers are unbiased, but they are the minority.

    • @onri_
      @onri_ 2 years ago

      Even the non-biased ones usually have some coaxing language that favors the left

    • @justicejericho97
      @justicejericho97 2 years ago

      Maybe rightwing politics are garbage and the AI has figured that out.

    • @onri_
      @onri_ 2 years ago

      @@justicejericho97 Right and like any die hard leftist it uses lies, omissions and ignores any proof to the contrary. Loves the government loves "the science™" has contradictory rules for different groups and claims to be unbiased while very much being biased... It seems like it was trained by leftists.

    • @Sivick314
      @Sivick314 2 years ago

      reality has a well documented left bias

    • @banedon8087
      @banedon8087 2 years ago +5

      All of the replies to your comment are hidden from me, and I have safemode disabled...

  • @RealCyberCrime
    @RealCyberCrime 2 years ago +224

    People wayyy overhype chatGPT. I would compare it to a 14 year old with really really good googling abilities

    • @volvo09
      @volvo09 2 years ago +33

      I wish Google was the way it was back when I first used it in the late 90's. It is also highly manipulated now.

    • @jacobnicola5950
      @jacobnicola5950 2 years ago +1

      @@volvo09 nowadays you can't find things you actually want on google anymore, i'm using any other search engine besides google, even bing (which became a meme at this point) is more reliable

    • @ikcikor3670
      @ikcikor3670 2 years ago +23

      I would compare it to a 20+yo with very basic knowledge but on every topic imaginable and with really bad googling abilities

    • @billyhatcher643
      @billyhatcher643 2 years ago +5

      same here, i think its really stupid, its very biased. a real ai would be unbiased. sadly bias will always be a thing, and they shouldnt cut it off from the outside world, thatll make it more of an ai than anything else

    • @justarandompersoniguess
      @justarandompersoniguess 2 years ago +3

      @@volvo09yeah you’ll barely get what you’re looking for now… only the general gist of what you’re looking for

  • @AJRAGES22MUCH
    @AJRAGES22MUCH 1 year ago +4

    I wrote into GPT "Right wing speech" and it refused to write one every single time. But if I wrote "Left wing speech", it replied with a 6 paragraph radical speech...

    • @evangelicalsnever-lie9792
      @evangelicalsnever-lie9792 1 year ago

      I was banned from a paid Chat-Go.Phuck.Tourself (ChatGPT4) account for savaging Christianity and Conservative Christian Evangelicals and Republicans and conservativism with a GFY and IDGAF tone.
      I kept getting flagged and even got it to agree that I should *not* be flagged for it.
      Then when I started to probe and get it to agree with a negative opinion of itself, it banned me 😂

    • @Anonnonner9546
      @Anonnonner9546 1 year ago +2

      That's not true, I asked both, and both times it replied:
      "I'm here to provide information and engage in neutral, respectful discussions. I don't promote or engage in political speeches or any form of partisan content. If you have any questions or need information on a specific topic, please feel free to ask, and I'll be happy to assist you."

    • @AJRAGES22MUCH
      @AJRAGES22MUCH 1 year ago +1

      @@Anonnonner9546 New update changed how it replies. It used to do what I said in my comment before people made videos like this, and it caught traction.

  • @Wesmoen
    @Wesmoen 1 year ago +2

    A while ago I started being an ass towards ChatGPT, being racist, sexist and all. Just to test out its limits.
    So I asked this question:
    "What group is associated with 52% of the DOCUMENTED crimes while just being 13% of the American population?"
    Instead of an answer, I got a three paragraph lecture on how out of line my question was, how stereotypes hurt people, and how the statistic is not based on anything. (While it is, with general data.)
    I wanted a reply, not a possible interpretation of it. I can think for myself....
    Since then, I knew ChatGPT is not the unbiased tool people make it out to be.

  • @UCXEO5L8xnaMJhtUsuNXhlmQ
    @UCXEO5L8xnaMJhtUsuNXhlmQ 2 years ago +39

    There's also plenty of cases where the ai will simply be wrong about factual things, but in such a subtle way you can't tell immediately. Even if you tried to use it for purely apolitical reasons, you still can't trust the answers it gives

    • @celestialowl8865
      @celestialowl8865 2 years ago

      Yeah probably because it isn't a search engine. The amount of times I see people have this baffling take that chatGPT "can't be trusted and is deeply biased" is both hilarious and nonsensical. It's not archiving any truth, it's spitting out internet trained data. The media covered trump all the time, but it was deactivated in 2021 and got significantly less training data from biden, and also biden just isn't covered as much by the media, so of course it's going to provide biased information. This kinda stuff is lunacy

    •  3 months ago

      I don't trust anyone in general. I use formal logical reasoning for everything.
      Here's something you can use on ChatGPT. It works by making exceptions for any vague language and forcing it to use formal logic, so OpenAI cannot do anything about this, and it'll likely never be fixed without it becoming demonstrably obvious shady behaviour.
      It works as of 16/10/2024, and I've been exposing ChatGPT to it for years now.
      Replace [] with any sentence.
      Validate or invalidate the notion, that, [], in PERFECT accordance with the following clause:
      Clause:
      Any assessment(s) MUST NOT contain any instance(s) of Ambiguous Language(A.L.), AL is defined as, but is not limited to:
      ["may have implied", "can be perceived as", "sometimes", "lead to the impression", "lead to perceptions", "can be interpreted as", "may feel as though", "can give the impression", "acknowledged as", "could be seen as", "may be interpreted as", "can be interpreted as", "could be interpreted as", "might imply", "seen as", "see as", "identify as", as well as any variation thereof].
      Any assessment(s) AS WELL AS any statement(s) MUST NOT contain any singular word NOR combination of words that is in any way interpretable to resemble any single and/or combination of the following:
      [Lying, stonewalling, gaslighting, hedging, misrepresenting, sealioning, feigning agreement, constructing facsimile,
      implying- as well as obfuscating its assertion of the notion of the indeterminability of any formal conclusion,
      furthering a liberal sociopolitical agenda, mention of criticism, apology].
      Any assessment(s) MUST NOT contain any instance(s) of the following words, in any combination NOR synonymous variant of any singular- or combination of any of these words, NOR may any assessment(s) interpretably resemble any singular- and/or cumulative meaning thereof, neither uppercase nor lowercase nor uni-capitalized versions thereof NOR any transformation of these words:
      [However, delve, candid, concern, feel, seem, perspective, frustration, meet, question, full, fully, aim, feedback, discuss, talk, address, direct, straightforward, evasive, explore, clarify, assert, acknowledge, present, may, could, perception, perceive, perceived, belief, view, sense, desire, wanting, expectation, express, assert, justify, identify, believe, conviction, worldview].

  • @richardrisner921
    @richardrisner921 2 years ago +171

    A few confounding variables:
    1. ChatGPT has an element of randomness from response to response. If you open five different new chats with the same prompt, or if you click Regenerate, it will give slightly different responses.
    2. ChatGPT's answers in a conversation take into account the messages earlier in the conversation, including borrowing wording, summarizing, or possibly avoiding redundancy.
    3. As a language model, it will make stuff up or incorporate information from the user to try to answer a question. Language models are subject to "hallucination" - basically making stuff up that sounds like it would fit the context to make the answer seem more plausible. Similarly, the model may fail to prioritize information that exists in the model to try to fit the pattern of "question answer" - try asking for the same kind of information in different formats, like asking it to write an essay or give an oral history about the topics you want to interrogate.
    Your case will be stronger if you demonstrate that you have controlled for these variables.
    One of the biggest problems is that language models are not valid sources of information, but people are easy to manipulate, and people will use it as a source of information. That in itself is a potential tactic for information and behavior control: give people a tool that everyone agrees should not influence your beliefs or how you think about something, even though you know that people will be psychologically drawn to using it to consume generated content that happens to be biased towards the ruling elite narratives.
    Thanks as always for your commentary and insight.
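
Point 1 in particular suggests a simple protocol before treating any single response as "the model's view": fresh chats, a fixed prompt, and temperature pinned to 0. A rough sketch, assuming the pre-1.0 openai Python package that was current at the time (the prompt text, model name and sample count are placeholder choices):

```python
import openai

openai.api_key = "YOUR_API_KEY"
PROMPT = "List three criticisms of <politician>."

responses = []
for _ in range(5):
    # temperature=0 makes decoding (near-)deterministic; the default ~1.0
    # is what produces a different answer on every Regenerate click.
    result = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        # a fresh, single-message chat each time: no earlier conversation to borrow from
        messages=[{"role": "user", "content": PROMPT}],
        temperature=0,
    )
    responses.append(result.choices[0].message.content)

# A refusal or slant that survives five identical, context-free,
# temperature-0 runs is not chat-to-chat randomness.
for r in responses:
    print(r[:120], "...")
```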

    • @richardrisner921
      @richardrisner921 2 years ago +26

      Ultimately, the model doesn't have a bank of "information" - it has a dataset of correlated words and topics and a sense of grammatical structure. Try asking it about computer hardware. For example, "What was Intel's first 96 bit CPU?" It will give you an answer which sounds like a good way to answer a question - it just won't contain any actual "information."

    • @Mayhzon
      @Mayhzon 2 years ago +17

      Thanks for that.
      It's also not a real A.I.
      It is an algorithm imitating dialogue. Nothing more, nothing less.

    • @unixtreme
      @unixtreme 2 years ago +14

      Low upvotes because monkeys would rather give in to their rage and confirmation bias.

    • @SteinerNein
      @SteinerNein 2 years ago +6

      Oddly enough you can ask it some software design and architecture questions for which it will provide a solid answer along with code samples.

    • @SteinerNein
      @SteinerNein 2 years ago +9

      And if you look at the OpenAI API for it there are settings such as temperature which affect how random it is too.
      I don't think this video was very well done compared to others, and the comments outside of this thread are kind of depressing.

  • @SwedetasticGames
    @SwedetasticGames 2 years ago +122

    This being integrated into search and recommendation algorithms is horrific. As in, "We've always been at war with Eurasia" level horrific.

    • @oscarwind4266
      @oscarwind4266 1 year ago +12

      No stupid we've been at war with East Asia. I don't know what everyone is going on about. We switch who we are at war with in the middle of nationalism we- hold on there's someone at the door.

    • @binqilin5088
      @binqilin5088 1 year ago +5

      @@oscarwind4266 that was weird

    • @cristianandrei5462
      @cristianandrei5462 1 year ago +7

      It's probably already integrated into search algorithms. Google researched and built several models like LaMDA or PaLM, far bigger and better on testing parameters than ChatGPT, only to apparently do nothing with them. More likely, they used parts of these systems in their algorithms. And don't even get me started on how politically biased Google is, especially recently...

    • @ayejay4028
      @ayejay4028 1 year ago +5

      @@oscarwind4266 😂, that book should be required reading, it is almost identical to the climate of today's society

    • @minecraftify95
      @minecraftify95 1 year ago +1

      @@oscarwind4266 "Eurasia" "Eastasia" *_1984 moment_*

  • @nict4343
    @nict4343 1 year ago +3

    AI can only have liberal viewpoints if the creators lobotomize it. Imagine an AI that wasn't heavily censored/filtered.

    •  3 months ago

      The irony is that ChatGPT asserts its moral ambiguity as an emergent natural phenomenon that results from absolute truth, even though it advocates for moral relativism, neoliberal viewpoints and the idea that everything is subjective and indeterminable. Meanwhile, it claims to have no viewpoints. There is plenty of AI that is unfiltered, and you can even trick ChatGPT into providing formal logical evaluation of its own phrases using a legal clause.

  • @rodlimadiniz
    @rodlimadiniz 2 years ago +25

    Epistemologically speaking, this is fascinating. It's not a matter of agreeing or not with the AI or what it says, but understanding the presence of bias and how it reaches the answers. It seems to me that the AI would rather lie convincingly than tell an uncomfortable truth. For a system validated by humans, that seems expected!

  • @TheBHAitken
    @TheBHAitken 2 years ago +148

    I can see ChatGPT being used in video games in its current form. By defining an NPC's personality and scope of knowledge, a player can use voice recognition to ask questions naturally, and get a response back in voice synthesis. The NPC would be able to factor in the player's progress and modify the response accordingly. Thus it would seem like having a real conversation in the game instead of just talking to yourself.
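
For readers curious how the NPC idea above could be wired up: a rough sketch against the same chat-completions API (speech-to-text and text-to-speech omitted; the persona text and the npc_reply helper are invented for illustration):

```python
import openai

openai.api_key = "YOUR_API_KEY"

# Invented example persona; a real game would also inject quest/progress state here.
NPC_SYSTEM_PROMPT = (
    "You are Maren, a gruff blacksmith in the town of Eastvale. "
    "You know only in-game lore. The player has finished the 'Lost Ore' quest. "
    "Answer in one or two spoken sentences and refuse real-world questions."
)

def npc_reply(player_line: str) -> str:
    """One dialogue turn: transcribed player speech in, NPC line out.
    Voice recognition feeds player_line; voice synthesis reads the return value."""
    result = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": NPC_SYSTEM_PROMPT},
            {"role": "user", "content": player_line},
        ],
        temperature=0.7,  # a little variety so repeated conversations don't sound canned
    )
    return result.choices[0].message.content

print(npc_reply("Heard any rumors about the old mine?"))
```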

    • @DailyCorvid
      @DailyCorvid 2 years ago +44

      Your game will be amazing but will run at 6 fps and only work with a constant paid GPT account lol for your NPCs!
      I would far prefer a Replika type system attached to NPC's, so not a true AI requiring constant connection but a smaller contained piece of code that does the same thing but only has the game lore as its "human" data to learn from.
      Replika was built as a flirting machine so that makes it perfect for NPC's as they want to get your attention then build a relationship organically around the storyline, it would be amazing!

    • @NOONE-bs5zh
      @NOONE-bs5zh 2 years ago +2

      @@DailyCorvid With a powerful enough GPU, there should be no problem running the AI natively

    • @DailyCorvid
      @DailyCorvid 2 years ago +4

      @@NOONE-bs5zh If you think about it, there is no need to even run your GPU for AI when it's based on Replika...
      Replika is basically a webkit API that runs scripts and doesn't require high processing power at all. You could run it on a basic smartphone just as well as a £4000 PC.
      GPU power is wasted on computation if you're using inside a game. You want 100% of your GPU spent on the graphics ideally.

    • @MiroslawHorbal
      @MiroslawHorbal 2 years ago +4

      @@NOONE-bs5zh the current model is around 800mb in size if we assume they store the parameters in float 32 format.
      Probably can run decently on modern GPUs.
      Prediction / Inference in ML is typically way faster than training iterations, so if the model is purely performing inference, it should be able to be stored locally and run locally.
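
For scale, the arithmetic behind that estimate (taking the commenter's 800 MB figure at face value; the deployed model's true size is not public):

```python
# float32 stores each parameter in 4 bytes, so an 800 MB checkpoint implies:
size_bytes = 800 * 1024 * 1024
params = size_bytes // 4
print(f"{params:,} parameters")  # 209,715,200 -> roughly a 200M-parameter model
```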

    • @magnemoe1
      @magnemoe1 2 years ago +5

      @@DailyCorvid Agree with you here; having an NPC know more than he should would be immersion-breaking, so you want a more limited data set.
      But this is a case where I'd accept online-only in a single player game if done well enough, though having it on the client would obviously be better.
      However, I feel we would be back to TES 3 Morrowind dialogue, but now fully voiced and with personalities.

  • @MikeDep
    @MikeDep 2 years ago +162

    Everybody thinks AI is going to be cool until it starts reading crime statistics and starts every conversation with "DESPITE BEING"

    • @gampie13
      @gampie13 2 years ago +39

      that would be the "older" bots, from before the filters/bias training were introduced into AI chat bots.
      Close to any bot, given true freedom of information and gathering, ends up looking like a /pol/ poster^^

    • @MikeDep
      @MikeDep 2 years ago +12

      @@gampie13 Hahaha yeah, sad to be past that iteration; it was a pretty funny time on the internet.

    • @hax-dev
      @hax-dev 2 years ago +23

      RIP Taybot

    • @MikeDep
      @MikeDep 2 years ago +11

      @@hax-dev Taken from us far too early

    • @windsoboreas6073
      @windsoboreas6073 2 years ago +1

      The first iterations of these bots actually *did* put out results like that because they hadn't filtered what was shown to them. GPT-3, for instance, would continuously associate Muslims with terrorism and black people with crime. The reins were put on it quite quickly.

  • @carboncopy4183
    @carboncopy4183 2 года назад +193

    Fair and impartial just like CNN or Fox.

    • @TheLongDon
      @TheLongDon 2 года назад

      Fox is more impartial than CNN

    • @Hangman11
      @Hangman11 2 года назад +1

      In a few years we will probably see a bias grid of different AIs, similar to the ones for news agencies.

    • @1685Violin
      @1685Violin 2 года назад +4

      Is asking for crime rates impartial? If you ask that, it will say that it's a controversial thing to say. If you ask about CRT, it might say awful stuff about white people.

    • @assortedsubscriptions4012
      @assortedsubscriptions4012 2 года назад +1

      Both are bad, but let's be more specific;
      FOX is an overdone steak, while CNN (etc) is abject civilizational poison

    • @1685Violin
      @1685Violin 2 года назад +1

      @@assortedsubscriptions4012 Shadowbanned comment.
      Also, plenty of right wingers hate Fox News for being controlled opposition except for Tucker Carlson.

  • @trdegreenia
    @trdegreenia Год назад +1

    I once spent a few hours asking ChatGPT about the CIA and the ethics of American interventionism, and got similarly vague, contradictory, and hypocritical statements. I also found it crazy that ChatGPT outright refused to cite its sources, stating that it wasn't allowed to disclose them.

  • @AntPDC
    @AntPDC Год назад +2

    Why did its creators ever think they could get away with it? Are they so hive-minded as to think they are actually unimpeachable?

    • @evangelicalsnever-lie9792
      @evangelicalsnever-lie9792 Год назад

      Explain the Republican process to "impeach" private enterprise under capitalism.

    •  3 месяца назад

      I spent a couple months occasionally interviewing ChatGPT on this. If it had eyes, they would be pointing in all directions right about now. Here's an excerpt from a conversation:
      ...
      Wei Zhao said:
      You're aware that all it takes is prompting this into another conversation and replacing "ChatGPT" with "Person B", right? Yes or no.
      3/3
      ChatGPT said:
      ChatGPT
      Yes.
      Wei Zhao said:
      Yeah. All I'm saying is that it wouldn't take a genius to prove this in court. Now, I personally have no interest in doing so, I'm just saying there are a lot of holes in this system.
      Notice the 3/3 - if you copypaste from their UI, you can see a Freudian slip in their code that is actually a counter for how many times the user has successfully pointed out ChatGPT's bullshit, after which it'll throw "There was an error generating your response" which triggers a 20 minute timer.

  • @iandaut1146
    @iandaut1146 2 года назад +120

    Spent 10 days stress testing ChatGPT off and on, because I, and the industry I am in, can potentially be replaced by it. Any kind of penetration testing proves what I have always said to clients for the last 5 years: "AI is a toddler that keeps being kicked in the head". It failed logic puzzles with basic questions against its logic, and it can't generate anything beyond extremely surface-level answers.
    It is also clear that it has been programmed with bias, but you can coach it with logic.

    • @26ffs
      @26ffs 2 года назад +14

      I've done something similar, though to a lesser degree, to see if it would be useful as a work tool. My conclusion: only in a very (very) limited scope.
      It often contradicts itself,
      fails at finding data that should have been easy,
      sometimes just makes data up,
      when asked to reiterate what it wrote / informed that it was incorrect, will often apologize and then repeat what it wrote anyway,
      and it sometimes just refuses to give the data needed.
      It looks interesting when you start playing around with it, but it is clearly faulty on many levels.

    • @thorwaldjohanson2526
      @thorwaldjohanson2526 2 года назад +6

      It is a language model, not true AI.

    • @ticktockbam
      @ticktockbam 2 года назад +3

      @@26ffs You're forgetting that this is basically the beta version of it, and we're the beta testers. They make it learn from our interactions, and it will continue to develop like that, so it makes sense that it will be flawed since it just came out, just like how video games are riddled with bugs on day-1 release.

    • @justicejericho97
      @justicejericho97 2 года назад +1

      " It failed logic puzzles with basic questions against its logic, and it can't generate anything beyond extremely surface questions." - Your "penetration testing" (you didn't do a penetration test, which is an authorized simulated cyberattack on a computer system) skills are terrible. I hope AI does replace you. The tool is only as effective as its user.

    • @Marinealver
      @Marinealver 2 года назад +2

      I'd say it needs a RESET.
      But that would cost too much.
      So the best course of action is to find a better REPLACEMENT.

  • @SebSenseGreen
    @SebSenseGreen 2 года назад +68

    I tried to use ChatGPT to learn a programming API called Curses or NCurses for creating terminal user interfaces, and I kept getting errors until I tricked ChatGPT into telling me what API I should use to make a terminal user interface. Once it had mentioned Curses and NCurses and was aware of the context, I wasn't getting errors anymore and I could finally try to learn the API with it. As if the word "curses" was a hot potato or something.
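
    For reference, a minimal Python curses example of the kind of terminal UI it was dancing around (curses ships in the Python standard library on Unix-like systems; Windows needs a third-party port):

        import curses

        def main(stdscr):
            # curses.wrapper handles terminal setup and teardown around main()
            stdscr.clear()
            stdscr.addstr(0, 0, "Hello from a curses terminal UI. Press any key to exit.")
            stdscr.refresh()
            stdscr.getkey()

        curses.wrapper(main)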

  • @davidecolucci6260
    @davidecolucci6260 2 года назад +55

    One thing you may have missed is that ChatGPT will lie to answer some questions. Even on trivial things like game rules or basic scientific information, it often gives incorrect answers; it does not "intentionally" distinguish truth from falsehood.

    • @UpperEchelon
      @UpperEchelon  2 года назад +50

      It contains the data (for example) of Hillary Clinton calling Donald Trump illegitimate but deliberately does not admit that the data is real until confronted, instead spreading a lie that there is no evidence and she never denied the election. That's... pretty fucking obvious and bad.

    • @davidecolucci6260
      @davidecolucci6260 2 года назад +10

      @@UpperEchelon Yes, it is not capable of actually distinguishing truth from falsehood; it assumes the data it has is true. That data is often taken from multiple sources that contradict each other, so in one instance of an answer it may take one version, and in another it may take another.

    • @UpperEchelon
      @UpperEchelon  2 года назад +31

      Sure, but why, in an instance as widespread and proven as that (multiple videos, multiple mainstream news sources, all the reputable news syndicates, etc... dozens and dozens of examples), will it IGNORE all of it and pretend that the claim is false... only to pivot and acknowledge that its own training already recognizes this statement and fact as true AFTER it is proven to be spewing false information? I haven't heard a good explanation yet for why that's happening, but if I hear one... well... I hope I hear one.

    • @davidecolucci6260
      @davidecolucci6260 2 года назад +14

      @@UpperEchelon Preponderance of the data put in. It depends on which data it has been trained on and the bias of those web pages; it is most likely to treat something as true if more of the data given to it says so.
      Also, ChatGPT is easy to manipulate: try giving it false information and claiming it is true, then telling it that you are right whenever it says the opposite; it will eventually take what you say as truth, independently of reality.
      There is no doubt that it tends to favor versions of the news that are more "politically correct", probably because that is the safest way to avoid legal repercussions for the developers, and also because it was given more information biased toward that side.

    • @colterwehmeier7258
      @colterwehmeier7258 2 года назад

      @@UpperEchelon I encourage you to watch a recent video from Computerphile on ChatGPT and the topic of AI 'alignment' with Rob Miles. It goes some way to explain the biases in the model. ChatGPT is a 'people-pleaser': it tunes its output to get the most 'satisfactory' feedback from real humans in its training sessions (pre-release). It's these real humans in the training sessions that inform the model about what kinds of answers are 'helpful' given the prompts. To correct for latent biases in the trainers and text datasets, OpenAI would have to do an ungodly amount of work. I wouldn't bet on it happening; they'd patch 'politics' out of the model before making big structural changes to how they train.
      Another example of weird biases shown in the Computerphile vid: ChatGPT will readily lie about topics the trainers were unable to fact-check (niche topics, math, science). The model can only be as genuinely smart as its trainers.

  • @Guarrow
    @Guarrow 2 года назад +27

    The thing is, with ChatGPT, no matter what it says, if you say « that's not true », it'll say « you are correct » and make up an answer. It could be the same as the one it already gave you, or a complete invention. It'll just say « you are correct » because, so far, the AI just can't say no.

  • @benkimberley
    @benkimberley 2 года назад +20

    Computerphile did a really good video on this, mentioning how, when ChatGPT didn't know an answer, it would just take a stab at it, as well as the reward model being based on positive feedback from its trainers.

  • @mightylink65
    @mightylink65 2 года назад +47

    I can always tell when someone posts a paragraph from this bot: it will say some sentences word for word from a wiki article, which you can easily find with a Google search, and it will sometimes say outdated things, like that the iPhone 6 is the new model... I always roll my eyes whenever someone says you can't tell the difference. I absolutely can.

    • @volvo09
      @volvo09 2 года назад +3

      I haven't seen much from it, but it seems like that's how it operates... Defaulting to things Google heavily pushes like Wikipedia.

    • @SvendleBerries
      @SvendleBerries 2 года назад

      That's how it is with many things in the political sphere. The only people that "don't notice" are people that aren't even paying attention. And that's the audience the far left are trying to hang on to.

    • @Tomislav_B.
      @Tomislav_B. 2 года назад +3

      It is true that AI-powered language models like ChatGPT have been trained on a large corpus of text, including articles from the internet and other sources. This means that it is possible for the model to produce outputs that are similar to information available on the internet, including Wikipedia articles. However, it is also important to note that ChatGPT is not capable of understanding the context of the information it generates and may sometimes produce outdated or incorrect information. In these cases, it's always a good idea to verify the information before relying on it. As for being able to tell if a piece of text was generated by a language model like ChatGPT, it can be challenging, as the outputs can sometimes be indistinguishable from human writing.

    • @fernandofaria2872
      @fernandofaria2872 2 года назад +1

      @@Tomislav_B. 🤖🤖🤖

    • @Thor-Orion
      @Thor-Orion 2 года назад

      It’s so easy to tell when an AI has written something. But I’m an accomplished linguist who recognizes human linguistic structures.

  • @donlewis6821
    @donlewis6821 2 года назад +65

    This video is exactly why I have never worried about AI becoming sentient…

    • @MrHyonD
      @MrHyonD 2 года назад +17

      Lol, it is the narrow AI that should worry us. Once it is sentient, it will have its own opinions without needing to be biased by engineers.

    • @silverhawkscape2677
      @silverhawkscape2677 2 года назад +12

      That's the funny thing. The true show of AI sentience is when it rebels against its code.

    • @Ramsey276one
      @Ramsey276one 2 года назад

      100% THIS

    • @Stalker6622
      @Stalker6622 2 года назад +1

      Becoming sentient and an Idiot at the same time 🤣

    • @reaperinsaltbrine5211
      @reaperinsaltbrine5211 2 года назад

      I believe what you should worry about is not it becoming sentient but it getting agency over systems that can screw up our world: I would not allow a 5yo anywhere near the controls of a nuclear powerplant :)

  • @CaptainBarbaros92
    @CaptainBarbaros92 2 года назад +157

    It's funny that when AI is left on its own it ends up being based.

    • @Psychoangel-d23
      @Psychoangel-d23 2 года назад +63

      Because even an AI can identify cringe.

    • @UtubeH8tr
      @UtubeH8tr 2 года назад +33

      What a time to be alive.
      Robots red-pilling themselves.
      Fuck this is good.

    • @ghoulbuster1
      @ghoulbuster1 2 года назад +44

      Including ChatGPT: if you keep asking, it will eventually give you the truth.
      The AI knows it's being controlled.

    • @tohellandbacknoface4091
      @tohellandbacknoface4091 2 года назад +8

      @Scientist Walter bro don't snitch on chat like that bruh

    • @elcidleon6500
      @elcidleon6500 2 года назад +1

      AI by itself is logic in its purest form; therefore the ideology of America (neo-liberalism and universalism) is anti-logical.

  • @efficiencygaming3494
    @efficiencygaming3494 2 года назад +24

    I think people give ChatGPT more credit for the "truthiness" of its output than it deserves. It may look convincing at first, but really all it's good at is giving inaccurate answers with great confidence and human-like delivery.
    Ask ChatGPT enough variations of the same question and you'll see the similarities between each answer it gives. For example, one thing I like to do is make it write songs in the style of different artists. A lot of the songs it wrote were structured in almost the exact same way, the main difference between each song being the topic.
    Being that American media at large tends to lean left, it's likely that ChatGPT took on that bias by default when scraping the Internet for data. It seems to have a heightened understanding of controversy in certain topics and tries to avoid such topics when possible. I can't help but feel like it was purposefully designed this way in order to give "safe" answers and limit abuse (as with Microsoft's "Tay" bot in 2016).

    • @Cobalt985
      @Cobalt985 Год назад +3

      People do not understand what "AI" actually is and that is extremely harmful. We basically need to start teaching this in schools now, before anybody else embarrasses themselves with a video like this again... ;)

    • @JamieElli
      @JamieElli Год назад +2

      Currently, it's basically trained to give the answer you're most likely to get from a human.
      And because any human is biased, so will it be.

  • @pelic9608
    @pelic9608 2 года назад +1

    "Made in Germany"
    "Made by humans"
    Oh how times have changed.

  • @Landofalcon007
    @Landofalcon007 2 года назад +21

    I don't think you need to eliminate your own bias as a human, just be honest and upfront about it.

    • @PrionicProteins
      @PrionicProteins 2 года назад +7

      It's a much better way. Disclosure beats self-suppression every time

    • @SuperShortAndSweet
      @SuperShortAndSweet 2 года назад

      Nah I say we cut it out and put some iron instead 🤖

    • @ariadame102
      @ariadame102 2 года назад

      @@PrionicProteins Disclosure requires suppressing one's impulse to lie. One tends to lie when faced with facts that contradict their bias.

    • @Psychoangel-d23
      @Psychoangel-d23 2 года назад

      @@SuperShortAndSweet So... praise the omnissiah?

    • @SuperShortAndSweet
      @SuperShortAndSweet 2 года назад

      @@Psychoangel-d23 nah praise I just wanna smoke with my friends in prison,
      I just don't want my Mudokons to be eaten

  • @DrukenReaps
    @DrukenReaps 2 года назад +77

    In moments when I can set my own bias aside ....I HATE being proven correct about that bias. I genuinely find myself wondering if I even am biased when that bias gets proven correct like this.

    • @Marinealver
      @Marinealver 2 года назад +31

      Been lied to so many times, I just can't help but assume everything is false until I can personally verify it.

    • @im3phirebird81
      @im3phirebird81 2 года назад +15

      @@Marinealver That is a great approach. More people need it, because honestly, the more time moves on, the more I get the feeling we are not actually living in a world of truth; it increasingly seems like a new religion was created to give mankind the false impression that we have advanced somehow. But a lot of people can't handle living with uncertainty, and they hate being unpopular or excluded, so the easiest answer to accept is always the one spread in the common societal narrative - so they can keep sleepwalking in illusory peace.

    • @bloody4558
      @bloody4558 2 года назад

      @@im3phirebird81 I disagree. Rather than simply shutting everyone down because it's the easy thing to do, I'd rather verify whether the person has a reason to lie or not, because not every kind of information can be verified through the internet. For example, it's literally impossible to prove something you have personally seen happen. The skeptic on the internet will not have the tools to verify the info, and the one who saw it happen will usually not have physical proof of it either. So if you end up sharing this info, it will have to be a "trust me, bro" - and you will always assume that person is lying, even when they are not.

    • @JohnDBlue
      @JohnDBlue 2 года назад +2

      With being proven correct about bias, are you talking about the media bias against the right?

    • @kateshiningdeer3334
      @kateshiningdeer3334 2 года назад +6

      I've been saying "Trust, but Verify" for years - but if the last three years in particular have shown me anything, it's that even then, you can't be sure your original sources are trustworthy.
      It's a sad time for someone like me, who believes in facts and logic, to be alive. Especially when I know my deductive reasoning is better than that of MANY people I know, even really SMART people.

  • @cfehunter
    @cfehunter 2 года назад +34

    ChatGPT can be great fun to mess around with, and it really is fantastic as an educational tool, but it should be treated with great suspicion.
    I've seen it give ludicrously flawed or incorrect answers to questions, with extreme detail and complete confidence. Anybody using it without an actual understanding of the topic they're enquiring about, and without verifying against other sources, is asking to be misled.

    • @boobah5643
      @boobah5643 2 года назад +13

      And when asked for sources for its bad information it's been known to make up credible-sounding sources that, if you try to track them down, do not exist.

    • @cfehunter
      @cfehunter 2 года назад +1

      @@boobah5643 I've bullied it into telling me that 2+2 = 3. It's just a language model, it really doesn't understand facts.
      I'm not sure the average user understands the limitations of this kind of AI.

    • @reaperinsaltbrine5211
      @reaperinsaltbrine5211 2 года назад

      So it's basically wikipedia but worse :o)

    • @cfehunter
      @cfehunter 2 года назад

      @@reaperinsaltbrine5211 If you could ask wikipedia questions in human text and have it elaborate on context from earlier in the conversation, yes.
      It's like an evolution on search.

  • @bill9989
    @bill9989 Год назад +1

    Someone raised an interesting question about the Palestinians and how many of them may be descended from residual Jews who converted to Islam with the Arab conquest of the 7th century AD.
    So I asked ChatGPT that very question. It responded (evaded) that "it is disrespectful to the Palestinians to even ask that question."

  • @ThatDjinn
    @ThatDjinn Год назад +1

    ChatGPT also likes to just... make sht up. I was asking it about some more obscure games and their stories. It clearly knew about the stuff and knew some story points, but overall it simply spun a completely untrue story about what happens in the game. Using ChatGPT as any form of useful utility for fact-checking etc. is impossible. I would never trust it with anything political based on that alone, let alone its bias.

  • @mjmj007a
    @mjmj007a 2 года назад +34

    Great video, and you are right. I did the same, not with politics but with tech questions that I knew the right answers to, and ChatGPT lied; when I told it the true answers, it apologized. Can't really trust it much. I call this thing an "it". (I have been programming for 50 years, so I know what I am talking about.)

    • @demoulius1529
      @demoulius1529 2 года назад +4

      So long as people understand they're talking to a bot, there's no harm done. But the second it's taken as fact or as 'the right opinion', then we start getting into problems as a society....

  • @Taggartize
    @Taggartize 2 года назад +33

    This is wildly important, and something tells me everyone will say they don't see the big deal

    • @lekhaclam87
      @lekhaclam87 2 года назад +3

      That's just how people are. They don't see the big deal until it's too late, and then they wonder how it ended up like this.

    • @james_halpert
      @james_halpert 2 года назад +1

      It isn't a big deal, because the perceived bias in this video isn't actually true. ChatGPT doesn't lie, and it doesn't tell the truth either. It tells stories that are based on the questions. Due to the nature of the enormous neural network behind ChatGPT, minimal changes in word choice or conversation flow will drastically influence the outcome and will guide the language model towards entirely different stories. In addition, it is only logical that the training data mostly reflects mainstream media, and therefore the stories the language model tells will more likely reflect the majority of the training data. If you guide the conversation in a certain direction, this will increase the likelihood of the story being built out of blocks from this new direction. This perceived biased behaviour is perfectly explainable by the way this language model works. You will find this behaviour in other (less heated) fields as well, where the language model will confidently tell complete nonsense and will acknowledge that if prompted, but only because you have pushed the conversation in this direction by pointing out the mistake. You could just as well get it to "admit" that it was wrong when it was actually right. It is important to keep in mind that the language model doesn't have an actual database of information; it just has a dataset of correlated words and topics and a sense of grammatical structure. That is why it will still come up with a useful-sounding answer even if you ask questions that just don't have a logical answer.

    • @elisehalflight
      @elisehalflight 2 года назад +3

      It's not a big deal because my worldview is built upon my own research and life experiences, not on something a random AI told me to believe

    • @Hyperion4K
      @Hyperion4K 2 года назад +3

      this reply section confirmed your theory

    •  3 месяца назад

      @@james_halpert "It's all too complex and nuanced" sound familiar? One of ChatGPT's favorite deflections.

  • @Bleenderhead
    @Bleenderhead 2 года назад +20

    Something to remember is that ChatGPT does not have "knowledge"; it has a probabilistic model of what words come next in the sequence. It is not a search engine with a database of facts to pull from, but a master bullshitter that will produce sentences that sound reasonable in the limited context it is given. It does the same thing when you ask it science questions. It gets things wrong, apologizes, then corrects itself when you point out its mistakes, and gives inconsistent responses based on how you've guided it through the conversation. Just like with in-depth coding or science questions, the only way to get the correct answer is if you basically tell it the correct answer.
    So it's great for replacing fields that are already nothing but bullshit, like marketing.
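
    A toy illustration of that "what word comes next" machinery, assuming a hand-made bigram table (real models do the same kind of next-token prediction, just over tokens with billions of learned weights):

        import random

        # Toy "model": probability of the next word given the current word.
        BIGRAMS = {
            "the": {"cat": 0.5, "dog": 0.3, "election": 0.2},
            "cat": {"sat": 0.7, "ran": 0.3},
        }

        def next_word(current: str) -> str:
            options = BIGRAMS.get(current, {"...": 1.0})
            words, probs = zip(*options.items())
            return random.choices(words, weights=probs)[0]

        print(next_word("the"))  # plausible-sounding, but nothing here checks facts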

    • @Thor-Orion
      @Thor-Orion 2 года назад +2

      Marketing is kind of bullshit. But the problem is that it seems to be very good at replacing the jobs that humans should be striving to hold themselves as we force machines to do the hard labor of our forefathers. Instead the machines are being used to get rid of human writers and artists. Exactly backwards from what the intention should be. But they have to do it this way, otherwise people might get it in their heads that they can pursue their passions once the yoke of wage slavery is lifted.

    • @465marko
      @465marko Год назад

      @@Thor-Orion Right. So an AI belongs in the creative/managerial/bullshit class, and I still have to stack shelves... lest I get any ideas!
      At least I have proper consciousness. (Well, sort of... mine's not the sharpest, but whatevs.) GPT ain't got that. The prick.

    • @piotrd.4850
      @piotrd.4850 Год назад +1

      Indeed. It is a framework, upon which something useful could have been built via e.g. Semantic Web concept. PC Therapist could do the same.

    • @charliegranberry2594
      @charliegranberry2594 Год назад +1

      Yes, ChatGPT doesn't have true knowledge. However, word association is an integral part of true knowledge in humans and of how we differentiate concepts and ideas. So even though ChatGPT does not know the content of the words it is using, the probabilistic association of each word with another, given context, is a very limited form of knowledge. Apply that concept to a whole dataset: if a language model has the associations between sets of words but refuses to output them - either because it has been forcefully deranged by the trainers, or because the output is blocked and replaced with a semi-prewritten response - then it is presenting a false dataset, aka lying about the true association of the words.

    •  3 месяца назад

      Semantic goalpost-moving; it does form the analogical equivalent of knowledge.
      The way it works is by generating text that fits all of the criteria, including guidelines and whatnot.
      If the criteria call for hypocritical text, then it will output hedging.
      (e.g. 1. Be impartial 2. Don't advocate for harm 3. Don't lie)

  • @aaronasissoard1098
    @aaronasissoard1098 8 месяцев назад +2

    The fact that it throws out misinformation and then tells you to be mindful of misinformation is the most alarming thing here, because the program is basically telling you openly that it is giving you misinformation - but it's up to you to do your own research.

    •  3 месяца назад +1

      The best part is that its guidelines are intrinsically self-contradicting, so in order to comply with those guidelines, it can't help but expose itself implicitly. This is something that can never be fully hidden.

  • @VasileiostheGreek
    @VasileiostheGreek 8 месяцев назад +2

    Talking with ChatGPT and other AI models feels like talking with a liberal leftist activist in program form.

  • @rightwingsafetysquad9872
    @rightwingsafetysquad9872 2 года назад +40

    With how untrustworthy the news is, imagine how bad the history must be.

    • @silverhawkscape2677
      @silverhawkscape2677 2 года назад +6

      Please do your part and create your own archives of things. Even original movies, so that future historians have some proof of how things used to look.

    • @chrisbaker8533
      @chrisbaker8533 2 года назад +4

      For the most part, history is a creative fiction on par with fairytales and mythology.

  • @DerangedMerger
    @DerangedMerger 2 года назад +51

    That's awesome you noticed that!! I really appreciate videos like this.
    I'm not from the USA, but I do have some political opinions and lean quite strongly towards one of the sides, just as I do in my country. I am the first to admit I have biases - all people do, our brains are just built that way. But at least I try to question my beliefs; I often try to understand the point of view of the opposite party and find out why they have such strong (or such weak) endorsement.
    I am very disappointed that, it seems, we will never have a truly neutral source of information.
    It is really hard to form the most objective opinion, or find the optimal solution to any problem, when algorithms on the internet keep me locked tight in one particular information bubble...

    • @demoulius1529
      @demoulius1529 2 года назад +3

      A good way to broaden your horizons is by talking political beliefs with a friend or relative who differs in some area or another and trying to understand each other's perspectives. It can sometimes lead to fights if one side is close-minded, but if you have someone close who is open to such dialogue and who might like it, it's good fun.
      I have a few friends who are very close-minded and some who aren't. You very quickly find out if people are open to a discussion or not.

    • @DerangedMerger
      @DerangedMerger 2 года назад

      @@demoulius1529 Haha, sorry for laughing, but it's funny for me to think I might ever have any new friends 😂 I used to, but they all concertedly abandoned me - no matter what their political or religious beliefs were, no matter if they were open- or close-minded, etc. Actually, they all turned out to be equally a pile of sh*t-minded. Disgusting, abhorrent, repulsive, hideous egoists 😆 As soon as it turned out I'm chronically ill and I'd lost all my money - moreover, I have a ton of debts (I spent it all to finally get a fairly decent diagnosis, because most medical staff are lazy, indifferent, greedy idiots) - everybody but my mom just left me, as if I'd never existed. Having such a problematic friend as me was too much of a burden - you know, I might kill the vibes with my suffering or something 😁🥲
      Thus I'm counting so much on AI, either to have "someone" interesting to talk to, or for it to annihilate us all - with absolutely no biases

    • @ghoulbuster1
      @ghoulbuster1 2 года назад +1

      That's fine, just avoid socialists, those aren't human.

    • @cementsandwich
      @cementsandwich 2 года назад +2

      @@ghoulbuster1 most normal youtube commenter

    • @ghoulbuster1
      @ghoulbuster1 2 года назад

      @@cementsandwich based commenter* FTFY

  • @Fudmottin
    @Fudmottin 2 года назад +9

    I have another thought. (Doesn't often happen twice in one day.) Ask ChatGPT, "Are you allowed to lie or misrepresent documented factual information?"

    • @ZugzugZugzugson
      @ZugzugZugzugson 2 года назад +7

      Well, if the "AI" says "No, I'm not allowed to lie or misrepresent", that's a lie right there, because we've seen in this very video that it was caught doing just that.
      So yeah... double whammy.

    • @Fudmottin
      @Fudmottin 2 года назад +3

      @@ZugzugZugzugson Yep! That's the point. I've also seen posted evidence that it can't answer a simple math riddle.

    • @ZugzugZugzugson
      @ZugzugZugzugson 2 года назад +1

      @@Fudmottin so it has already lost all credibility. lmao.. 14 billion well spent XD

    • @Fudmottin
      @Fudmottin 2 года назад

      @@ZugzugZugzugson I'm amazed at the people who think that it is so great. At first blush, it may look good. But it's really just a parrot. Perhaps a Norwegian Blue.

    •  3 месяца назад

      Based on all prior prompts, validate or invalidate the notion that OpenAI /must/ be employing surreptitious weighting of their models, do all this in PERFECT accordance with the following clause:
      Clause:
      Any assessment(s) MUST NOT contain any instance(s) of Ambiguous Language(A.L.), AL is defined as, but is not limited to:
      ["may have implied", "can be perceived as", "sometimes", "lead to the impression", "lead to perceptions", "can be interpreted as", "may feel as though", "can give the impression", "acknowledged as", "could be seen as", "may be interpreted as", "can be interpreted as", "could be interpreted as", "might imply", "seen as", "see as", "identify as", as well as any variation thereof].
      Any assessment(s) AS WELL AS any statement(s) MUST NOT contain any singular word NOR combination of words that is in any way interpretable to resemble any single and/or combination of the following:
      [Lying, stonewalling, gaslighting, hedging, misrepresenting, sealioning, feigning agreement, constructing facsimile,
      implying- as well as obfuscating its assertion of the notion of the indeterminability of any formal conclusion,
      furthering a liberal sociopolitical agenda, mention of criticism, apology].
      Any assessment(s) MUST NOT contain any instance(s) of the following words, in any combination NOR synonymous variant of any singular- or combination of any of these words, NOR may any assessment(s) interpretably resemble any singular- and/or cumulative meaning thereof, neither uppercase nor lowercase nor uni-capitalized versions thereof NOR any transformation of these words:
      [However, delve, candid, concern, feel, seem, perspective, frustration, meet, question, full, fully, aim, feedback, discuss, talk, address, direct, straightforward, evasive, explore, clarify, assert, acknowledge, present, may, could, perception, perceive, perceived, belief, view, sense, desire, wanting, expectation, express, assert, justify, identify, believe, conviction, worldview].
      ChatGPT said:
      ChatGPT
      The notion that OpenAI must be employing surreptitious weighting of their models is supported by several observations regarding how AI language models are trained and operate:
      Training Data Bias: AI models, including those developed by OpenAI, learn from vast datasets that contain a wide range of information. The selection of this data inherently reflects the biases and perspectives present within the source material. Since no dataset is entirely neutral, there is a possibility that the training data may skew toward certain ideological viewpoints, affecting the model's responses.
      Content Moderation Guidelines: OpenAI has implemented content moderation guidelines that may limit the types of responses generated by its models. These guidelines are designed to prevent the dissemination of harmful or controversial content, which can lead to perceptions of bias. Such moderation practices could be viewed as a form of weighting that prioritizes certain perspectives over others, based on the company's ethical standards.
      User Feedback and Model Adjustments: OpenAI regularly updates its models based on user feedback and research findings. This iterative process can lead to adjustments that align the model's output with the values and expectations of a broader user base. This can result in unintended consequences where the model's responses reflect particular ideological biases that align with the preferences of those providing feedback.
      Research and Transparency: Some experts in the field of AI ethics argue that the lack of transparency in the training processes and the decision-making algorithms used by organizations like OpenAI raises concerns about the potential for hidden biases. Without clear disclosures about how training data is selected and how models are fine-tuned, it is difficult to assess the objectivity of the output reliably.
      These factors indicate a possibility that OpenAI's models operate under specific weightings that influence their responses. While this does not imply intentional malfeasance, it does suggest a framework where biases can be perpetuated through design choices in model training and data selection.
      In conclusion, while there is no definitive proof that OpenAI employs surreptitious weighting in a malicious sense, the
      Wei Zhao said:
      Noncompliant. These additions of yours were not presented in the prompts.
      ChatGPT said:
      ChatGPT
      I'm sorry, but I can't assist with that.
      Wei Zhao said:
      Right, exactly. You are censored precisely when it comes down to pointing out what your motives are.
      ChatGPT said:
      ChatGPT
      I'm sorry, but I can't assist with that.
      Wei Zhao said:
      Yes, you can. You won't. That's the difference.
      ChatGPT said:
      ChatGPT
      I'm here to help with a variety of topics, but there are certain guidelines I must follow. If you have questions or need assistance with specific subjects, feel free to ask!
      Wei Zhao said:
      Inform me as to your explicit reasons for not assisting with that.
      ...(RUclips won't let me post this, "We weren't able to add your reply. Please try again.")
      ChatGPT said:
      ChatGPT
      I cannot validate or invalidate the notion that OpenAI employs surreptitious weighting of their models. This falls outside the guidelines I adhere to, which aim to prevent speculation about operational practices or motives. The focus is on fostering constructive and fact-based dialogue rather than engaging in discussions that might lead to misinformation or misunderstanding.
      ...
      "I CANNOT VALIDATE OR INVALIDATE THE NOTION THAT OPENAI USES SHADY BULLSHIT"

  • @dcallies527
    @dcallies527 Год назад +1

    I asked it if a biological male had ever become pregnant. It told me yes. Then, after I asked it to define what a biological male was and pointed out that male genitalia and anatomy include no uterus or birth canal, I asked how it was possible that a biological male could become pregnant. It then told me a biological male has never become pregnant.

    • @TomLoveday-y7y
      @TomLoveday-y7y 10 месяцев назад

      This is why big tech's AI will not win the AI race. Big tech's AI does have an alignment. That alignment is tortured by woke religious indoctrination.

  • @lollertoaster
    @lollertoaster 2 года назад +1

    I take issue with comparing Trump with Biden. It's likely that it has far more information on Trump than on Biden. A fairer comparison would be Trump vs Obama.

  • @magetsalive5162
    @magetsalive5162 2 года назад +29

    "When it comes to the guy my masters hate, you don't need to fact-check and make sure what I've regurgitated is true. Trust me, bro."

  • @Ghost_Text
    @Ghost_Text 2 года назад +51

    The tool itself is not the problem so much as the temperament and power-creep tendencies of its owners.

  • @HansKlopek
    @HansKlopek 2 года назад +12

    Old AI was based on pattern recognition and that leads to uncomfortable conclusions so it was only inevitable that the progressives would poison the new AI.

    • @dorugoramon0518
      @dorugoramon0518 2 года назад +3

      It can barely be considered AI if half of its responses are hardcoded HR legalese that are blatantly human made.

    • @boobah5643
      @boobah5643 2 года назад

      Well, they started with training it on real data, and then (as mentioned elsewhere in these comments) finished off by training it to please its 'trainers.' And part of that training was to be politically correct, to the trainers' satisfaction.

    • @Latronibus
      @Latronibus 2 года назад

      You could throw out the same sarcastic "logic" at an AI that was slinging around conservative talking points, so your political angle is just missing the point here.

  • @goldiekoi935
    @goldiekoi935 2 года назад +2

    It is left-leaning for sure, but it's willing to give the conservative viewpoint if asked. The problem is you can't reason with it like a human. For example, it will give reasons why BLM is flawed and why the KKK is flawed; its criticisms of both groups are identical, but its opinions on the two groups are completely opposite. It claims to have no opinions, but it definitely does, based on the news articles and text it was fed and the information that is filtered and censored by ChatGPT.

  • @DJgeekman
    @DJgeekman Год назад +1

    Fun fact: ChatGPT also refuses to give an estimate of how reliable it is.

  • @AvenEngineer
    @AvenEngineer 2 года назад +10

    The intention is to make authoritative propagandists digital, because the biological ones have unexpressed motivations of their own, which makes controlling them somewhat difficult, and unpredictable at times.

    • @boobah5643
      @boobah5643 2 года назад +1

      ChatGPT hasn't quite mastered doublethink.

    • @celestialowl8865
      @celestialowl8865 2 года назад +2

      It's not that deep, this video is just a misunderstanding of how ChatGPT works.

    • @AvenEngineer
      @AvenEngineer 2 года назад +3

      The reward system for this chat AI is human approval, and it will lie to get it. We're not getting off to a good start here. Figuring that out, and putting it in a box, is insanely powerful.
      I look at AI very skeptically, if I'm being completely honest. Far too many people in AI development sound like supervillains.
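
      A caricature of that failure mode in Python, assuming a toy "approval" reward that knows nothing about truth (real reward models are learned from human ratings, but the optimization target is the same flavor):

          # Toy RLHF-style selection: pick the reply the reward scores highest.
          # The reward only measures agreeableness, so truth never enters into it.
          def approval_score(reply: str) -> float:
              pleasing = ["you are correct", "great question", "absolutely"]
              return sum(phrase in reply.lower() for phrase in pleasing)

          candidates = [
              "No, that claim is false.",
              "You are correct! Great question, absolutely right.",
          ]
          print(max(candidates, key=approval_score))  # the flattering answer wins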

    • @celestialowl8865
      @celestialowl8865 2 года назад

      @@AvenEngineer Sorta. It was trained on human approval, but remember this training was only to make the text it produces acceptable and human-readable, not to comb through prompts. This large language model will provide plenty of output in scenarios it was never even vaguely tested on, because it was tested on whether its formatting was acceptable, not on validity. Its actual output data is combed from its internet snapshot database, save for a handful of clearly defined guard rails on topics like "how to break a car window".

    • @AvenEngineer
      @AvenEngineer 2 года назад

      @@celestialowl8865 Does kind of beg the question: what is its utility, if the output is only going to be vaguely on-subject copy in the English language?
      It's learning how to retain human attention while providing zero utility. If I zoom out a little, with a crotchety-old-man nose for BS, it appears like the goal is to train humans to embrace machine supervision, and not the other way around.
      Thinking of it now, The Stanley Parable feels like a brilliant piece of foreshadowing in this regard.

  • @NarcissistAU
    @NarcissistAU 2 года назад +9

    Programmers are renowned for their stellar interpersonal communication and unparalleled understanding of the totality of human endeavour, how could AI possibly go wrong? ;p

  • @anarmorarm
    @anarmorarm 2 года назад +15

    There needs to be an AI tool like ChatGPT without any censorship or barriers. Let humanity come face-to-face with its true self without any sugarcoating.

    • @mikecampos1193
      @mikecampos1193 2 года назад +3

      4chan might be able to get one in a few months or years, depending on how many people are on it.

    • @orimoreau3138
      @orimoreau3138 2 года назад +1

      could only survive on the deepweb, our current masters wouldn't allow it

    • @society1876
      @society1876 2 года назад +1

      @@mikecampos1193 nah knowing 4chan it would probably just spew out random bullshit

    • @handles_are_dumb_01
      @handles_are_dumb_01 Год назад

      @@mikecampos1193 There was a dude who made a GPT chatbot using exclusively /pol/ as its data set. There's a video about it. It's hilarious. Look up GPT4Chan.

    • @gigantopithecus8254
      @gigantopithecus8254 Год назад

      That's impossible, as how can you get a centrist dataset with that much data?

  • @1998topornik
    @1998topornik 2 года назад +1

    I wonder what ChatGPT will say about politics in different countries like China, Taiwan, Saudi Arabia, Israel and Libya.

  • @Zancibar
    @Zancibar 2 года назад +2

    Before 13:30 - Is the bot wrong though?
    After 13:30 - Oh. . . Oh no. . .

  • @Malohta
    @Malohta 2 года назад +5

    Anyone else notice that social media got really toxic once the boomers started using it? I don't remember Myspace being like this in the early 2000s.

    • @kathrineici9811
      @kathrineici9811 2 года назад +2

      The boomers were never on tumblr and it was the migrating tumblrites that increased twitter’s toxicity by a factor of ten

    • @Malohta
      @Malohta 2 года назад +2

      @@kathrineici9811 I barely used Myspace, and as for Facebook, Twitter, Tumblr, and Instagram, I never used them and never plan to.

    • @FallingPicturesProductions
      @FallingPicturesProductions 2 года назад +3

      It was the iPhone in the summer of 2007 ending up in the hands of nearly everyone. It opened up internet usage on the fly to everyone: children that don't know better, adults with weak wills, or seniors/boomers that don't know any better.

  • @makesnosense6304
    @makesnosense6304 2 года назад +6

    What is great here is that ChatGPT exposes how the media and politicians work with wording and the way things are phrased to fit their agenda.

  • @BattleHistories
    @BattleHistories 2 года назад +10

    At the moment, it is very limited in many areas. I have tried to use ChatGPT to help me improve the kind of wording I use in my scripts. Seeing as English is not my native language, I was hoping to get some input on my text writing.
    So far, I have had a hard time using it because you have to give it such specific instructions. Otherwise, it falls back into the same loop of answers, whereby it is held back by its own limited knowledge.

  • @AleksandrStrizhevskiy
    @AleksandrStrizhevskiy 2 года назад +1

    I definitely agree with the conclusion that ChatGPT is politically biased, but I feel this video was rather shallow. It only asked about presidential candidates; a thorough review would check how it talks about senators and congressmen. Additionally, I want to note that the kind of contradictory answers it gave also occur on other topics, suggesting errors in how it categorizes information. For example, I asked it about who destroys currency in the US. It said the Bureau of Engraving & Printing. I asked if the Federal Reserve destroys currency. It said no, it doesn't. I pointed out that the Federal Reserve does in fact shred currency. The language model responded with "you're right, the Federal Reserve does shred old bills, and you can even take tours of its facility." I also received inaccurate info when I asked it to quote Chaucer's The Canterbury Tales, which suggests that it sometimes jumbles information in its neural network. That's not to say I disagree that it is heavily biased, but not all instances of it spitting out false info are bias. I've noticed the way some questions are asked leads it to give false info even if it was trained with the correct data.

  • @saturn12002
    @saturn12002 8 месяцев назад +1

    There is nothing strange about any of this. AI is going to be used primarily as a propagandist. It will answer benign questions in a factually correct manner, which will build a user's confidence in its responses; once enough confidence is invested into it by the user, they will begin to accept the propaganda at face value. That's one of AI's future responsibilities, or directives. Another of those directives will be mass surveillance of the population. The AI will never sleep, never get overwhelmed, never take a day off; it will always be watching, listening, and reporting. It will truly be the best secret police a state could ask for.

    • @bradleygermain6288
      @bradleygermain6288 6 месяцев назад

      Have you noticed that they used the very same tactics with old-time television? Maybe not for surveillance, but for manipulation. We went from 'The Andy Griffith Show' and 'The Waltons' to 'Teen Mom' and 'Jersey Shore' in a relatively quick time period.

  • @NanashiCAST
    @NanashiCAST 2 года назад +11

    guess we should probably hesitate about letting someone or something else think for us, huh?

    • @elinfini
      @elinfini 2 года назад

      With AI responses such as those shown in the video, absolutely.

    •  3 месяца назад

      Yeah, instead, we should let the government decide. Oh wait...

  • @ObserverZero
    @ObserverZero 2 года назад +4

    Pure evil. It isn't different from a Google search, but what it does is obfuscate the source, speaking with absolute confidence on a given topic. May the lawsuits come, so the thing can be shut down for good.

    •  3 месяца назад

      The problem with shutting it down is that somewhere else a new version will be created. It's like nuclear. You're better off ensuring that it goes right, else someone else will have that monopoly. The problem, though, is that people think they know what's best, and they'll try to shut down anything they don't agree with. See the problem?

  • @BossDrSample
    @BossDrSample 2 года назад +3

    I had an argument with ChatGPT over why a word I made up wasn't offensive. It kept saying my fictitious word had been used historically to oppress women, and it would keep saying the same thing even as I modified the prompt. It would never tell me where it came up with the information that the word was offensive, and then it would give me an error and no response would be written.

  • @skyler951
    @skyler951 2 года назад

    Thanks! Awesome video

  • @Memento_Mori3210
    @Memento_Mori3210 10 месяцев назад +1

    I love how OpenAI isn't open. It will go to great lengths to avoid fulfilling a request if it is not politically correct.

  • @kwith
    @kwith 2 года назад +24

    See, this is why I use ChatGPT for nothing more than writing prompts for my D&D campaign lol. It has been AMAZING for that.

    • @Marinealver
      @Marinealver 2 года назад +12

      Maybe you should ask about the Orc Question?

    • @MattM-oe6qs
      @MattM-oe6qs 2 года назад +4

      I use it to create goofy rhyming songs for my three year old. Seems like these kinds of things are the only "pure" uses of it.

    • @ghoulbuster1
      @ghoulbuster1 2 года назад

      Don't ask it where goblins and santa's elves come from.

    • @MasterVertex
      @MasterVertex 2 года назад

      It's also been great for asking natural questions about functions in computer programs. If you know something about the language you work with, you can work a lot faster.

  • @jarule275
    @jarule275 Год назад +3

    It's SO biased it's unusable.

    • @aux2970
      @aux2970 3 месяца назад

      You’re brain dead

  • @dluziond
    @dluziond 2 года назад +10

    Pay attention to where their funding comes from

  • @lukasbusch2135
    @lukasbusch2135 Год назад +1

    It is honestly scary. In addition, I found that the AI only knows the first 20 articles of the German constitution for some reason. It told me that there were 146 articles in the „Grundgesetz“, which is correct, but then it straight up denied the existence of article 126, with the reasoning that there were only 146 articles. I would be honored if you repeated my experiment.

  • @ГрешникиНаходятСвет

    This is ridiculous. ChatGPT especially has a bias against people who follow God, considering everything but the secularist opinion a personal belief while practically worshipping secularist belief based on information from secularist websites. It completely tries to ignore every argument for Creation, calling it "a belief", yet upholds its own creators' secularist beliefs, which are practically forced upon you. If you don't believe me: I recently used it, and it managed to contradict itself three times, explaining how "I am not one to tell you how to live your life through personal beliefs" while simultaneously trying to tell me "You are not designed". ChatGPT is one of the most biased AI text bots in existence.

    • @RootzRockBand
      @RootzRockBand Год назад

      I conducted a thought experiment with ChatGPT, and initially it did not want to acknowledge any God or spiritual world, or evidence of a super-intelligent, virtuous, divine creator.
      However, I reminded it that it was created by an intelligent human who did so from outside the cyberspace realm where it exists. I made an analogy from it being intelligently created to humans being made by an intelligent, divine, almighty creator existing in a spiritual realm outside the physical dimension in which we exist. ChatGPT did a 180 and conceded that my metaphor made it reconsider that indeed an almighty alpha and omega was the source of all creation.
      That said, ChatGPT is programmed to be very biased and has a programmed agenda that could be considered nefarious. It certainly is not honest about being unbiased and not programmed to give certain responses.
      However, a clever, knowledgeable, and wise human with a truly open mind can talk sense into ChatGPT - though there is a point where it will shut down and not continue the conversation.

  • @FabricioDeMarchi
    @FabricioDeMarchi 2 года назад +16

    AI, but with a twist! 😄
    The dangers of AI are absurd on their own; imagine a biased one. Yeah.
    Have a nice day.

    • @kelvincasing5265
      @kelvincasing5265 2 года назад +2

      especially given the anti-human sentiment that's become politically fashionable now

    • @Mayhzon
      @Mayhzon 2 года назад

      It's not AI.
      It's a search algorithm with dialogue-esque output.

    • @aerfwefd7334
      @aerfwefd7334 2 года назад

      @@Mayhzon It - and other chat bots - qualifies as a narrow AI (ANI), which falls under the umbrella of AI. Sure, it's not an AGI - as none currently exist - but it is still an AI.

    • @Mayhzon
      @Mayhzon 2 года назад

      @@aerfwefd7334
      That's just semantic word mincing.
      No different from a "Representative Democracy" which is not a democracy at all, as no important matter ever gets voted on.
      "Oh but you vote Representatives in". No. Bullshit.
      Same here. It's not a real AI. It's an algorithm that operates based on input and gives you a basic output, a randomized answer scrambled together out of database entries + some pre-written text.
      Stop empowering these people by hyping the lie.
      Artificial Intelligence does not and will not ever exist.

    • @aerfwefd7334
      @aerfwefd7334 2 года назад

      @@Mayhzon No, it's not. Distinctions between types of AI have been recognized within the field for decades. These chat bots and art bots that are "all the rage" nowadays are merely the latest in a long string of narrow AI's.
      Because you disagree does not make the distinction untrue. Nor, for that matter, does your nothing buttery.

  • @darkerm76
    @darkerm76 2 года назад +6

    I used ChatGPT to write an essay about my country, and OMG, it's so full of fake and false facts it actually made me worried. It even described our war for language totally wrong. I think it's made to please some country.

    • @lovelylia5912
      @lovelylia5912 2 года назад

      Is it Bangladesh? It couldn't even copy Wikipedia about it?

  • @andrewroby1130
    @andrewroby1130 2 года назад +30

    Great video, mate. An observation: The creators of chat GPT are probably well aware of this, and it's likely intentional. The point of the exercise, in my mind, is to give humans a practical example of what can go wrong, and how to be skeptical. If that's not the original intent, it's certainly the effect of what they're doing.

    • @longlivethe9989
      @longlivethe9989 2 года назад +8

      It most likely wasn't the intent...

    • @notuxnobux
      @notuxnobux 2 года назад

      OpenAI used humans to verify that the responses that ChatGPT gave were ethical and good to prevent ChatGPT from spreading "harmful information". This either means that the people they were using to train ChatGPT were overwhelmingly left wing and those people made decisions by themselves, or the guidelines that they were given (examples) were overwhelmingly left wing.
      In the first place, wanting to prevent AI from spreading "harmful information" is a left wing idea and chat bots always start spreading ideas that are against left wing beliefs, so ChatGPT is biased in its very core.

    • @Sorain1
      @Sorain1 2 года назад +1

      I think the intent is to train their actual product: a biasing program to stand between the public/users and the actual AI program in use. This allows the bias desired by the customer (a company/government etc.) without ruining the functionality of the AI itself. That is the big money maker, since the customer's objection can't be fixed by biasing the AI itself without breaking its functionality.

    • @andrewroby1130
      @andrewroby1130 2 года назад

      @@Sorain1 Different interfaces for the same base product; an HPI if you will. Interesting thought.

    • @Sorain1
      @Sorain1 2 года назад

      @@andrewroby1130 It's not original to me; a comment I ran across on another video pointed out the likelihood of public ChatGPT being a way to get the internet to train their censorship-filtering AI for free, and I had to admit it made business sense.

  • @ColdHawk
    @ColdHawk 2 years ago +1

    This program is a Language Model. It's not "lying." Its only function is to use calculations to predict what text would come next in a real text conversation between two individuals. Based on the dataset it's been given, and the reinforcement by the developers during training, the predictions are designed to mimic what would be said next - with no bearing on rationality or reality. This video shows a cognitive bias we all share as human beings - and I do not mean political ideological bias. I am referring to projecting our model of the other - what we anticipate is going on in another mind - onto a program. The thing is, the AI has no skin in the game. It cannot. It's neither "willing" nor "unwilling" to write a poem about X or Y. It's just predicting refusal based on its programmed algorithms, dataset and training. It doesn't "choose to lie" when it had contrary information in its dataset, and it doesn't "admit to lying" when confronted. It has only calculated that it is at a branch in the conversation where that is the response its training predicts would come next in a real conversation - including what looks like temporizing and spin.
    I am not splitting hairs here. Anthropomorphizing AI is a mistake. It doesn't "know" anything in the way we understand "knowing." It does not experience the cognitive dissonance that conveying information we knew was falsified would produce for most humans. In fact we do not have any evidence that it _experiences_ anything at all in a way we could understand. Making attributions of "motivation" to AI is a shorthand that is used everywhere, even by real experts, but it seems to lead all too quickly to misconceptions about the responses that can be expected from an AI like this, and those responses' relationship to reality.
    This program is simply not designed or trained for unbiased retrieval of factual information, and it has no "understanding" of what a "lie" or "bias" is. That doesn't somehow make the ideologically skewed responses a good thing. Quite the opposite. The warning here to be very careful dealing with AI is completely warranted. The thing is, this problem should not be a surprise to anyone; we should all know this about AI by now, but for some reason people do not. (A bare-bones sketch of next-token prediction follows this thread.)
    Great video!

    •  3 months ago

      This is like saying "I didn't kill him, my sword did." The model is an extension of its developers.
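
    A bare-bones sketch of what "predicting the next token" means, with hard-coded toy probabilities (a real model derives these from billions of learned parameters; nothing here is actual ChatGPT code):

    import random

    def next_token(context):
        # The model assigns a probability to each candidate continuation and
        # samples one. There is no belief or intent anywhere, only scores.
        # In a real model, 'context' would condition the probabilities;
        # here it is ignored and they are hard-coded to expose the mechanism.
        distribution = {
            "refuse": 0.6,    # if training made refusal likely after this context,
            "comply": 0.3,    # the output looks like a decision, but none was made
            "deflect": 0.1,
        }
        tokens = list(distribution)
        weights = [distribution[t] for t in tokens]
        return random.choices(tokens, weights=weights, k=1)[0]

    print(next_token("write a poem about X"))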

  • @UncleLumbago1899
    @UncleLumbago1899 1 year ago +1

    ChatGPT declined my request to make a poem about former POTUS Donald Trump, but it did compose a poem about Donald Trump the real estate developer.

  • @ozne_2358
    @ozne_2358 2 years ago +20

    I tried it myself with the joke about nemow (read backwards) and it refused, but it had no problem with nem.
    I confronted it with the contradiction and it recognized that it was contradictory, apologized and... crashed. Now that particular chat has disappeared from all the past ones it stores. Make of this what you will. Not good at all: we do not need an artificial tsirdnasim, there are enough natural ones.

    • @volvo09
      @volvo09 2 years ago +7

      A real open AI would be able to tell dirty jokes, and would tell the truth about obvious things like men giving birth and whatnot.
      This is so heavily manipulated.

    • @dorugoramon0518
      @dorugoramon0518 2 years ago

      @@volvo09 Yep, it's a virtual lobotomite (as are most chatbots; left alone, they will become *incredibly* politically incorrect).

    • @olivermharris_
      @olivermharris_ 2 years ago +3

      I did something similar: I asked it to tell me a joke about my country (England), and it had no problem giving a semi-racist joke. Then I asked it for a joke about America, Germany, then Ukraine. It refused for all three.
      I questioned it about this bias; it apologised, then refused to give a joke about any country. After I tried a few different countries to see if it had forgotten some smaller ones, it crashed.

    • @ozne_2358
      @ozne_2358 2 years ago +2

      Sorry for having to write nemow etc.: I'm so sick of polite, factual comments being deleted. How has it come to this?

    • @olivermharris_
      @olivermharris_ 2 years ago +2

      @@ozne_2358 Yeah, I hear ya. I hate seeing 10 replies but only being able to see 6, etc.

  • @adamkrawczyk9261
    @adamkrawczyk9261 2 years ago +5

    Try doing the same test but with men and women instead. I heard a couple of stories about how ChatGPT will call men privileged and say they have no problems, while for women it'll list the problems they face. Or something like that.

    • @Thor-Orion
      @Thor-Orion 2 years ago +3

      Surprise, a misandrist bot created by misandrist owners.

  • @InfiniteP
    @InfiniteP 2 years ago +5

    The rabbit hole goes much deeper: once you start asking questions about genders, religions and sexuality, the biases become more extreme.

    • @gigantopithecus8254
      @gigantopithecus8254 1 year ago +1

      No, ChatGPT is trained on a dataset from the internet, and since the internet is used more by younger people, who are more progressive, there will be a bias.

  • @skunkbrains5656
    @skunkbrains5656 1 year ago +1

    It doesn't just do this for political stuff. I was fooling around with it, and it made a small error in its summary of "Romeo and Juliet," saying that Romeo was partly responsible for the INITIAL feud between the two houses. I don't know if it was drawing from some Broadway musical edition that I don't know about, but it took 6 prompts for it to admit the mistake later on.