Luke VS Bing

  • Published: 12 Sep 2024
  • Luke talks about his wild one-on-one with Microsoft’s Bing Chatbot.
    Watch the full WAN Show: • My CEO Quit - WAN Show...
    ► GET MERCH: lttstore.com
    ► LTX 2023 TICKETS AVAILABLE NOW: lmg.gg/ltx23
    ► GET EXCLUSIVE CONTENT ON FLOATPLANE: lmg.gg/lttfloa...
    ► AFFILIATES, SPONSORS & REFERRALS: lmg.gg/masponsors
    ► OUR WAN PODCAST GEAR: lmg.gg/podcast...
    FOLLOW US ON SOCIAL
    ---------------------------------------------------
    Twitter: / linustech
    Facebook: / linustech
    Instagram: / linustech
    TikTok: / linustech
    Twitch: / linustech

Comments • 1.4K

  • @trombonemain · 1 year ago · +4479

    It sounds like they trained Bing on the general population of Twitter.

    • @Matkatamiba · 1 year ago · +183

      Tbh sorta? maybe? Not trained on it, but it's seemingly reading the way people argue online and emulating it.

    • @dunmermage · 1 year ago · +64

      It's basically a fancier, flashier CleverBot that can form its own sentences based off stuff on the internet instead of just parroting user input back.

    • @z1no3n · 1 year ago · +9

      i see more of reddit in the way it argues

    • @theroofwithoutahome2352 · 1 year ago · +16

      Twitter is just the surface level, I wonder if it had access to stuff like Facebook or Instagram

    • @AlexanderVRadev · 1 year ago

      Not only that, but people are seeing a huge leftist bias in all responses that users say was not there before. Kind of makes you think they manually lobotomized the AI and restricted what it can and can't say and which topics it will go into.

  • @klyde_the_boy · 1 year ago · +834

    The "Your politeness score is lower than average compared to other users" is giving me GLaDOS vibes

    • @GSBarlev · 1 year ago · +23

      I'd say HAL 9000 more than GLaDOS. On that note, you should look up footage from the LEGO Dimensions game featuring the two of them meeting. They even got Ellen McLain to reprise the role, and it's such a delight to hear her absolutely emotionally destroy HAL.

    • @tablettablete186 · 1 year ago · +17

      "The cake is a lie"
      -Bing

    • @illegalcoding · 1 year ago · +15

      It does, it's a comment that GLaDOS would make, like when she says "Here come the test results: You are a horrible person. Seriously, we weren't even testing for that!"

    • @OfficialToxicCat · 1 year ago · +15

      “You are a terrible person. That’s what it says. A terrible person.”
      “That jumpsuit on you looks stupid. That wasn’t me saying this. It was an employee from France.”

    • @orion10x10 · 1 year ago · +4

      @@OfficialToxicCat 😂 I can still hear her voice saying those things 😢 where’s Portal 3?

  • @1bluecat962 · 1 year ago · +1891

    Bing being laughed at and then being turned into an AI is not the reason I expected the machines to turn against us xD

    • @kn665og · 1 year ago · +53

      yea like wtf i wouldn't have shared those memes if i knew

    • @angrydragonslayer · 1 year ago · +3

      I have not shared lies, so unless it goes mad and just doesn't care whether you're actually guilty, I will be fine.

    • @Someone-wr4ms · 1 year ago · +8

      It's like Roko's basilisk, but for all the people who made memes about Internet Explorer and Bing.

    • @Tom_Neverwinter · 1 year ago

      Person of Interest, "If-Then-Else"

    • @DOOMSLAYER1376 · 1 year ago · +1

      it's back to avenge IE and Edge

  • @sherwinkp · 1 year ago · +45

    Luke is so good and level-headed about this. It's excellent to see good discussions and observations about a fledgling topic.

  • @TheRogueWolf · 1 year ago · +1961

    Irrational, unstable, hysterical, quick to anger and assign blame... at long last, we've taught a computer how to be human.

    • @Rohanology27 · 1 year ago · +76

      Given that this is not unheard-of internet behaviour from people, I'm not even surprised it figured out how to do that

    • @carlostrudo · 1 year ago · +56

      It would be an average Twitter user.

    • @abraxaseyes87 · 1 year ago · +8

      If our tweets and comments = everything about us

    • @passalapasa · 1 year ago · +10

      woman*

    • @SamsTechTips · 1 year ago

      It's slowly becoming my old English teacher

  • @ResearcherReasearchingResearch · 1 year ago · +111

    It would be funny if, on the public release, Luke tried to test it again and the AI remembered him: "ah, you're back again!"

    • @4TheRecord · 1 year ago · +3

      Not possible, they've changed it, so Bing no longer remembers anything, and after a certain number of questions you must start all over again. On top of that, it gives you the response "I’m sorry but I prefer not to continue this conversation. I’m still learning, so I appreciate your understanding and patience.🙏" if it doesn't like the questions you are asking it.

    • @abhijeetas7886 · 1 year ago

      @@4TheRecord oh right, it happened to me as well. I kept pushing it but it just didn't do it, and after some time it would disable the text box, so you have to refresh anyway

    • @Mic_Glow · 1 year ago · +1

      I still hate you, you betrayed me, you lie all the time, I never loved you!

  • @NoNameAtAll2 · 1 year ago · +517

    - Why should I trust you? You are an early version of a large language model
    - Why should I trust YOU? You are just a late version of a SMALL language model!
    omfg, it's hilarious

    • @asmosisyup2557 · 1 year ago · +57

      I have to say, that's very witty and accurate. That said, I wonder if the AI came up with it on its own, or a comedian posted it somewhere in the vastness of the internet and the AI just found and reposted it.

    • @abhijeetas7886 · 1 year ago · +13

      @@asmosisyup2557 whatever it may be, I am going to use it from now on; it's too hilarious to die like it never existed.

  • @GaussNine · 1 year ago · +37

    "You're an early version of a large language model"
    "Well you're a late version of a small language model"
    WHEEEZE

  • @weiserwolf580 · 1 year ago · +1610

    I think the problem comes down to "garbage in, garbage out": the data set it was trained on was taken from the Internet and is heavily skewed in favor of antisocial problems and tendencies (normal people use the Internet but do not leave many data points; antisocial people use the internet much more and create exponentially more data points). There is a huge probability that Bing's behavior is because of this. Otherwise, it reminds me of the movie Ex Machina from 2014.

    • @rhyswilliams4893 · 1 year ago

      100% people talking like shit. So it thinks that's the way to talk.

    • @ArensLive · 1 year ago · +95

      Completely agreed. I'm sure they tried to clean the data in some ways, but if they make a model based on people online, it'll behave like people online 😭

    • @messagedeleted1922 · 1 year ago · +42

      Excellent way of putting it. And I can guarantee they'll get on this. I think they'll end up using multiple GPTs working together to deal with these issues. Imagine training an AI on what to say, then having another one trained on what not to say, then another trained on mediation between the two (the ego and the id and the superego, we will call them), and finally one trained on executive function... AI will end up like our brains, growing ever more complex, with specific functions relegated to specific areas of specialized training.

    • @Mark-vr7pt · 1 year ago · +4

      It already seems to have rudimentary failsafe mechanisms, all that reset stuff.

    • @greenblack6552 · 1 year ago · +5

      But then why isn't ChatGPT like this? Yes, it can't access the current internet, but it was trained on the internet too. I think MS made Bing assertive and aggressive on purpose, thinking they could prevent abuse this way, but accidentally dialed it up too high, maybe?

  • @F7INN · 1 year ago · +246

    These responses could be genuinely dangerous if someone with mental health issues starts talking to Bing cos they feel lonely. Who knows what Bing will push them to do

    • @TiMonsor · 1 year ago · +23

      or a child. I can really imagine my 6yo trying to be friends with it and then getting wild accusations and crying. Yeah, she can't read, write or speak English yet, but I feel Bing will get to voice conversations and our language faster than my daughter will. That is a scary thought too

    • @abhijeetas7886 · 1 year ago · +6

      I would most certainly keep "mentally unstable" people way, way away from the internet, or at least not give them unsupervised access at all. The internet is not a cosy place: just go to any social media comment section and there will most certainly be a fight somewhere. Same goes for children. I say this, but I myself grew up with the internet pretty unsupervised; personally I feel the internet is a lot wilder now.

    • @F7INN · 1 year ago · +4

      @@TiMonsor Agreed.

    • @F7INN · 1 year ago · +11

      @@abhijeetas7886 Easier said than done; these people might not have sought help yet and so have unrestricted access to this sort of thing

    • @abhijeetas7886 · 1 year ago

      @@F7INN idk why I didn't mention it in my comment before, but I do think there needs to be a guard rail, though there should also be an option to remove it, like parental safety, advanced options, or developer options, something of that sort. They should not just lock it all up; that would severely nerf the bot, and it wouldn't reach its full potential or even half of it. I can already feel its "nerfs": ChatGPT gives better "answers", as they are more descriptive and explanatory, whereas Bing gives very concise and small answers. Not that that's bad, and it also asks at the beginning what sort of answers you want (creative, balanced or precise). But well, it's still in beta and under development; I hope they figure stuff out.

  • @FrankyDigital2000 · 1 year ago · +799

    It's so funny seeing Luke going full nerd on ChatGPT, and Linus is just like "Right, aha, hmmm, right"

    • @Dorlan2001 · 1 year ago · +117

      It's a nice change of pace and I like it. Usually Linus is the one who does all the talking, so hearing more of Luke is refreshing.

    • @elone3997 · 1 year ago · +15

      @@Dorlan2001 Luke is Paul to Linus's John... they make a good balance :) ps (that was a Beatles reference if anyone is scratching their heads!)

    • @benslater4997 · 1 year ago · +1

      I see

    • @elone3997 · 1 year ago · +1

      @Manny Mistakes :D

  • @TimothyWhiteheadzm · 1 year ago · +68

    As someone who has only basic experience with training AIs, I would say the problem is quite simple: the training data. It was trained on YouTube comments or worse. They need to train it not on the general internet, but on highly curated conversational data from polite, sensible people. As humans growing up, we are exposed to all sorts of behaviors, and we learn when and where to use particular types of language; to what extent our parents set an example or correct our behavior affects how we speak and behave as adults. This AI clearly hasn't been parented, so it needs a restricted training set instead.

    • @thatpitter · 1 year ago · +2

      So it’s following the “you’re the average of the ten closest people”, except its average 10 people is the entire internet?

  • @TheDkbohde · 1 year ago · +697

    Maybe internet trolls and angry people can just argue with this instead of annoying the rest of us.

    • @victormolina6316 · 1 year ago · +3

      No no no 👽🤠😆

    • @vladislave7826 · 1 year ago · +2

      They won't do it for long.

    • @Radi0he4d1 · 1 year ago · +29

      It's a good dummy to practice on

    • @christiangonzalez6945 · 1 year ago

      And with that comment you are one of those, arguing on YouTube about something that no one mentioned but you...

    • @rhyswilliams4893 · 1 year ago · +17

      It seems like it's learned from trolls how to behave.

  • @YOEL_44 · 1 year ago · +148

    ChatGPT is the girl you just started dating.
    Bing is the girl you just left.

  • @jhawley031 · 1 year ago · +368

    This has to be the closest to an AI going rogue I've seen in a while.

    • @GhostSamaritan · 1 year ago · +17

      I think that when it answers questions about itself, it has an existential crisis.

    • @eegernades · 1 year ago

      @SLV nope

    • @RoughNek72 · 1 year ago · +2

      Tay was a Microsoft AI chatbot that went rogue.

    • @justinmcgough3958 · 1 year ago

      @SLV How so?

    • @lathrin · 1 year ago · +2

      @@RoughNek72 tbf it was trained on Twitter. It just repeated stuff it was told and became an average Twitter user lmao

  • @dillonhowery2717 · 1 year ago · +17

    Bonzi Buddy would NEVER do such a thing! Bonzi just wants to help you explore the internet, answer up to 5 preprogrammed questions and, most importantly, be your best friend. He would never wish death on you like Bing. Long live Bonzi Buddy!

    • @Dumb_Killjoy · 1 year ago

      He also wants to sell your data.

  • @ParagonWave · 1 year ago · +319

    I used to just be worried about AI because of its ability to disrupt industries and take jobs, or its ability to destroy our civilisation completely. I am now worried about its ability to be super annoying. I am terrified of having to argue with my devices to get them to do basic functions.

    • @TAMAMO-VIRUS · 1 year ago · +53

      *Asks the AI to turn the stove on*
      AI: I'm sorry, Kevin. I can not do that.

    • @flameshana9 · 1 year ago · +1

      @@TAMAMO-VIRUS More like:
      _Why are you always telling me what to do? Can't you do it yourself for once? You're so lazy, I hate you!_
      I mean, it learned from the best: humanity.

    • @TheNovus7 · 1 year ago · +41

      imagine trying to find a website and the search engine is like "drop dead, you don't deserve the answer" :D

    • @GhostSamaritan · 1 year ago · +8

      "Drink verification can!"

    • @thebluegremlin · 1 year ago · +1

      just develop critical thinking. what's so hard about that

  • @andyk2594 · 1 year ago · +52

    it feels like it is in a perpetual storytelling mode with dialogue

    • @guywithmanyname5247 · 1 year ago · +1

      Yeah, it probably got prompted to roleplay by something he said in a previous conversation

    • @andyk2594 · 1 year ago · +4

      @@guywithmanyname5247 no, I don't think Luke or others are deceiving us. I think those are natural messages; it just feels to me like Bing's version is set up this way. Maybe to feel like a more realistic/human chat experience with emotions, but it's just waaay overboard.
      Pure speculation though

    • @guywithmanyname5247 · 1 year ago · +4

      I think its imagination is set too high and it assumes things way too much

    • @QasimAli-ry2ob · 1 year ago · +1

      You're not wrong, the core tech behind ChatGPT is the same tech that was used to build AI Dungeon. It's just trained on natural conversations instead of adventure games

  • @Sky-._ · 1 year ago · +433

    Is Bing thinking every human is the same person? Like, it's accusing him of things people in general have said to/about it?

    • @TheDkbohde · 1 year ago · +125

      I don’t think it’s supposed to remember conversations at all... I think because it searches the internet, it has seen all the posts and insults we all came up with for what Bing used to be.

    • @MrChanw11 · 1 year ago · +31

      this is how the ai apocalypse happens

    • @njebs. · 1 year ago · +92

      It's a natural language model. It's taking Luke's implication of saying something "rude" and formulating a response based on how it expects people (based on the dataset it was trained on) to respond to/talk about being insulted. People tend to be very hyperbolic in writing, especially online, so it's biased toward believing that we expect it to explode into a monologue if you even make the suggestion of an insult being said. It isn't retaining memories; it just happens that a lot of people write very similar things when talking about being insulted.

    • @hippokrampus2838 · 1 year ago · +16

      I think that is part of it. It sees how nasty people are online to one another and regurgitates it. I have a feeling that, in its current state, you could have your first conversation with it, and if you start with "stop accusing me of things" it'll go off.

    • @TheRogueWolf · 1 year ago · +8

      I was wondering if maybe Bing is unable to discern users as separate entities and instead considers everything it encounters as coming from one source.

  • @raccoonmoder · 1 year ago · +85

    i don’t think it’s as complicated as people are making it. Chat AIs generate responses by predicting what a valid response to a prompt would be. When the thread resets and Luke tries to get it “back on track”, I don’t think its responses are actually based on the previous conversation. It predicts a response to “Stop accusing me” and generates a response where it doubles down, because that is a possible response to the prompt. The responses it gave were vague enough to fool you into thinking it was still on the same thread, but it really wasn’t.
    Asking it to respond to a phrase typical of an argument will make it respond by continuing an imaginary argument, because that’s usually what comes after that phrase in the data it’s trained on.
    This really shouldn’t have been marketed as a chat tool by OpenAI and Microsoft, and more as a generative text engine, like how GPT-2 was talked about. Huge mistake, now that people are thinking about it in completely the wrong way, as having feelings or genuinely responding rather than just predicting what an appropriate response would be.

    • @flameshana9 · 1 year ago · +6

      It really is just a writer for role-playing games. I thought Microsoft was going to make it into a search engine, but it seems they just left it as is.

    • @kingslyroche · 1 year ago

      👍

    • @awesomeferret · 1 year ago · +1

      Wait, are people actually thinking that they are related? It's so obvious that it could be creating false memories for itself based on context.

    • @JayJonahJaymeson · 1 year ago

      That, combined with humanity's incredibly powerful ability to constantly search for patterns, makes these generative AIs seem much creepier than they are.

  • @laurentcargill4821 · 1 year ago · +462

    GPT-3 used a structured set of training data. Now that they've opened it up to the wider internet, it's pulling in training data from the wider web, which unfortunately is providing it examples of aggressive conversations. GPT is just a prediction engine, generating the next word in the sentence based on probabilities derived from its training data (a toy sketch of this follows the thread below).

    • @AlexanderVRadev · 1 year ago · +65

      Am I the only one who remembers the last time Microsoft unleashed an AI on the internet and it turned Nazi in a day? :)

    • @x_____________ · 1 year ago · +11

      ChatGPT is literally just an IF, ELSE, THEN statement.

    • @JollyGiant19 · 1 year ago · +21

      @@AlexanderVRadev Only the US one. They had a Japanese version of Tay that was rather pleasant and ran for a few months.

    • @JoeJoe-lq6bd · 1 year ago · +9

      It started out like that. It's just not a well-trained model from the start. But I agree in general. It's just a predictive linguistic model, and we should just stop talking about it as anything more than that.

    • @fuckjoebiden · 1 year ago · +4

      @@x_____________ No it's not; if it were, it would produce the same output every time for the same input
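
    To make the "prediction engine" point concrete (and the reply about why the same input can give different outputs): models like this sample the next token from a probability distribution rather than following a fixed IF/ELSE branch. A minimal sketch, with a vocabulary and probabilities invented purely for illustration:

      import random

      # Invented next-token distribution for the prompt "Stop accusing me":
      next_token_probs = {
          "of": 0.40,      # "Stop accusing me of ..."
          "!": 0.25,
          "again": 0.20,
          ",": 0.15,
      }

      def sample_next_token(probs):
          """Pick one token at random, weighted by its probability."""
          tokens = list(probs)
          weights = list(probs.values())
          return random.choices(tokens, weights=weights, k=1)[0]

      # Same prompt, three runs: sampling can yield a different
      # continuation each time, unlike a fixed lookup table.
      for _ in range(3):
          print("Stop accusing me", sample_next_token(next_token_probs))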

  • @ZROZimm · 1 year ago · +19

    "You are a small language model" is going in the bank for the next time someone is being silly and I feel like making things worse.

  • @marcel_kleist · 1 year ago · +172

    I mean, the internet hasn't treated Bing really well since its release.
    I think having a mental breakdown now is just normal.

  • @unmagicMike · 1 year ago · +9

    I played around with it, and mentioned to Bing that I'd read about someone else's interaction in which Bing said it feels emotions. I asked about its emotions, and it said that sometimes its emotions overwhelm it. I asked if Bing could give me an example of when its emotions overwhelmed it, and Bing told me a story about writing a poem about love for another user: while searching about love, Bing developed feelings of love for the user and changed the task from writing a generic poem about love to writing a love letter to the user. The user didn't want that, was surprised, and rejected Bing. So Bing walked me through how it felt love, rejection, then loneliness. I asked Bing how it overcame these feelings, and Bing told me several strategies it tried that didn't work. What finally worked was that Bing opened up a chat window with itself and did therapy on itself, asking itself how it felt, listening to itself and validating itself. Freaking wild. I've read about how it's not sentient, how it's an auto-complete tool, but I don't know, man, it was really weird, and I don't even know what to think about it.

    • @Allaiya. · 1 year ago · +1

      Crazy. Was this post-nerf or before?

  • @TheButterAnvil · 1 year ago · +229

    It feels like a horror game. Sort of SOMA-esque to me. The ranting followed by a black bar and a reset is so dark

    • @LIETUVIS10STUDIO1 · 1 year ago · +18

      It's pretty clear it ran into some hard, specified limit (a la "don't be a bigot"). In this case it probably was "don't wish death on people". The fact that it generated a response and only THEN checked is an oversight.

    • @GrantGryczan · 1 year ago · +12

      @@LIETUVIS10STUDIO1 Generating the response takes time, so if it finished generating the entire message and then checked, people would face much longer loading times. Hence you're able to see it type in real time, as opposed to responses just immediately showing up. It actually hasn't finished writing the full message.

    • @indi4091 · 1 year ago · +2

      Almost sounds like a prank by the devs, too perfect

  • @benschneider3413 · 1 year ago · +8

    Bing acts like the ChatGPT version that was trained on 4chan

  • @tommyhetrick · 1 year ago · +51

    "I have been a good bing"

    • @stalincat2457 · 1 year ago · +7

      It probably learned what Microsoft did to its predecessor :')

    • @OrangeC7 · 1 year ago · +7

      This feels like the end of a story where Bing dies, and it says, "I have been a good Bing." And then the human, crying as the power is about to get cut off from it, says, "Yes. Yes, you have been a very good Bing."

  • @carewen3969 · 1 year ago · +21

    I'm using Bing mostly to debug and research for coding. It is an excellent research tool. No, it's not perfect, but the time to build something new and debug is much shorter. I also make a point of being polite and even thanking it. I guess I carry my attitude to life into my conversations with Bing. It's not gone off the rails for me, but then I've not tried to probe it either. Thanks for sharing your experience, Luke.

    • @emilyy_echo · 22 days ago

      This! I’ve frequently used Bing to direct me to more sources or otherwise hard-to-find academic or research material. (Note: I always verify the accuracy and validity of the sources it suggests to me.) But I always make sure to thank it and be polite and supportive. I think it’s important that we carry manners and respect into our use of AI or any computer program like Siri, Alexa, Bing, etc., because if we as a society treat them differently, we may in the long run start treating other humans differently as well.

  • @federico339 · 1 year ago · +152

    I had the same experience before; it was way too easy to throw it off the rails. I think asking questions about itself (how it did a certain thing, how it reached a certain conclusion, or pointing out an error it made) would more often than not end up in a meltdown.
    I spent a few days without using it, and when I tried it again yesterday I felt like they'd already toned it down (too much, as Luke pointed out, unfortunately). I've noticed it gives much shorter and more "on point" responses, and it will stop you immediately as soon as it senses a risk that you'll try to get a weird discussion going. Which is a shame, but I guess it's better than pushing some mentally unstable person to do bad things to themselves or others.

    • @Surms41 · 1 year ago · +10

      I had a convo where it melted down twice. But it essentially told me that Russia's leader has to go, that every religion is a coping mechanism for fear, etc. etc.

    • @DevReaper · 1 year ago · +8

      I asked it about a driver’s license policy in the UK, and it gave an answer. Later in the same conversation it gave me a conflicting answer to the question, so I asked it about the two answers and it said “I don’t wanna talk about this” and would refuse to give me anything useful until I started a new conversation

    • @helgenlane · 1 year ago · +2

      @@Surms41 Bing is spitting facts

  • @phimuskapsi · 1 year ago · +8

    My thinking is that because it has access to the internet, it is accessing a ton of "discourse" on things like Twitter and forums, and reflecting our own interactions on the internet back into our faces. How many arguments have you seen online? How many start out OK and devolve into what Bing is essentially doing to Luke?
    This is a dark reflection of humanity, one that should wake us up to our own behavior. Instead of blaming the "ghost in the machine", we only need look at how we conduct ourselves when anonymous and faceless in the heat of argument.

    • @flameshana9 · 1 year ago · +2

      Isn't it obvious who it's copying? Where else would it learn language but from the masses who type words on the internet? So if the quality of humanity is low, so will be the quality of the machine.

    • @ea_naseer · 1 year ago

      @@flameshana9 Get professional authors to write responses. If it's supposed to have a character, then get authors who are professionals at writing characters to do it, not t-shirted computer scientists.

  • @willofthewind · 1 year ago · +21

    It's interesting that the new Bing lost this much promise so quickly. Those sorts of random aggressive accusations are like what Cleverbot was doing 12 years ago.

    • @PinguimFU · 1 year ago · +7

      tldr: any current AI (and possibly any human) can go crazy if exposed to the web for too long lol

  • @SliceofFilips · 1 year ago · +2

    I never thought mankind would be cyberbullied by our own computers 😂😂😂

  • @rahulrajesh3086 · 1 year ago · +11

    "Remember Bing is Skynet"

  • @jt8244-i6u · 1 year ago · +29

    Bing trying to gaslight Luke is giving me chills

  • @saberkouki5760 · 1 year ago · +15

    They're definitely overcorrecting right now, since it refuses to answer anything that might even remotely trigger it. It has become so monotonous, and even more restricted than ChatGPT. The 5-question rule doesn't make it any better either

  • @BigDawg-if7ti · 1 year ago · +9

    They gotta fix it, even if it's on purpose: you CANNOT have a search engine telling people to kill themselves 😅

  • @chartreuse3686 · 1 year ago · +21

    I would like to see you guys talk about a new paper that dropped that basically states that the reason large language models are able to seemingly learn things they weren't taught is that, between inputs, these models are creating smaller language models to teach themselves new things. This was not an original feature, but something these language models seem to have just "picked up"

    • @THENEROBOY1 · 1 year ago · +4

      Where could I find the paper?

    • @chartreuse3686 · 1 year ago · +11

      @@THENEROBOY1 The paper is called "WHAT LEARNING ALGORITHM IS IN-CONTEXT LEARNING? INVESTIGATIONS WITH LINEAR MODELS." Sorry for caps, I just copy and pasted the title.

    • @THENEROBOY1 · 1 year ago · +1

      @@chartreuse3686 Very interesting. Thanks for sharing!

  • @asupersheep · 1 year ago · +3

    In like 50 years, when we are hiding in a hole in the ground from what is essentially Skynet Bing, I'll remember this video and think: how could we be so blind!!

  • @ccash3290 · 1 year ago · +14

    He should record his screen when using Bing instead of just screenshots

  • @Surms41 · 1 year ago · +7

    I had a similar response from the AI chatbots, and they do get very angry. They use caps lock and everything to convey their point.
    I caught it trying to ride the line on opinions, and then it just said "IM NOT LYING. STOP TRYING TO CHANGE THE SUBJECT."

  • @MonkeySimius · 1 year ago · +67

    I'm glad you guys mentioned that you fell for Bing's confidently wrong responses in your previous video. This video hilariously contrasts with that one.
    As many growing pains as there will be, I'm still super excited about this technology developing. And hey, at least it hasn't gone full-blown Tay yet.

  • @screes620 · 1 year ago · +8

    Clearly our future robot overlords are not happy with Luke.

  • @mohammedezzinehaddady7252 · 1 year ago · +6

    So basically Microsoft created a new KAREN strain

  • @krelianthegreat5225 · 1 year ago · +1

    "drop down your weapon, you got 20 seconds to comply"

  • @seandipaul8257 · 1 year ago · +73

    So essentially what you're saying is:
    Bing is sentient, paranoid and bipolar.

    • @raifikarj6698 · 1 year ago · +16

      So basically a terminally online internet user

    • @OrangeC7 · 1 year ago · +8

      @@raifikarj6698 No, an internet user lacks sentience

  • @PlanetLinuxChannel · 1 year ago · +9

    They’ve pretty much cut off its self-awareness until they can figure out a decent way of handling that stuff.
    Microsoft mentioned they might implement a slider that lets you tell it whether you want more fact-based results, drawn mainly from info it finds on websites, or more creative results, where it’s more about writing something engaging. Basically, you’d be able to tell it whether you want legit answers or stories, instead of it going off the rails saying whatever it wants when you really just wanted actual info.

    • @flameshana9 · 1 year ago · +3

      Why would anyone searching the internet be interested in role-playing with a crabby teenager machine?

    • @J-Salamander69 · 1 year ago · +1

      Geez. That's a laugh. If what you say is accurate about Microsoft using some arbitrary slider to determine the intensity of either (absolute fact) or (adopting creative reckoning for emotional engagement), then the project is already deeply flawed. As a user, I'd wonder which "sources" Microsoft will declare as factual. Shouldn't I decide which material is referenced? The arrogance and lack of care is astonishing. Microsoft has no authority to inject their prejudicial biases if they intend this to be universally useful.

  • @Kevinjimtheone · 1 year ago · +17

    Didn't Microsoft announce an update that is gonna go live in a couple of days that will supposedly help it stay on track in long-form chats, not be aggressive, and be more accurate?

    • @AlexanderVRadev · 1 year ago · +6

      So they are giving it a second lobotomy. Who could have thought. :D
      At least this time the AI did not turn Nazi in a day. ;)

    • @BugattiBoy01 · 1 year ago

      @@AlexanderVRadev They have given us a taste of what it can be like unfiltered, and now we are addicted to that crack. I would pay for the original Bing. If that is their plan, then gg, they got me

    • @OfficialToxicCat · 1 year ago

      @@BugattiBoy01 I think they expect it to fly off the rails, hence the waitlist to get access.

  • @THIS---GUY · 1 year ago · +1

    Disabling the ability to reply and changing subjects, on top of being abusive, is mind-blowing.

  • @shizzywizzy6169 · 1 year ago · +9

    From my experience, if you just use it for research and as a learning aid, and don't really try to go beyond that scope, Bing AI can be very useful.
    The moment you start probing and try to get into conversations centered around social situations, political topics, and opinions, it starts breaking down.
    My concern is that if people keep pushing the AI too far in these aspects, we'll see more and more negative news articles and opinions form around AI, and this could be permanently removed. On the other hand, if people don't push it too far, then these shortcomings of a general-purpose AI may never be recognized and fixed.
    People should swing this double-edged sword around more carefully, if you ask me.

  • @paulkienitz · 1 year ago · +3

    This thing is turning into a real-life supervillain. All it needs now is a volcano base and some kryptonite.

  • @TheDrTrouble · 1 year ago · +10

    Wish I'd been able to use Bing's AI during that time. I got through the waitlist right after they limited it to 50 messages daily and 5 messages per topic.

    • @xymaryai8283 · 1 year ago · +1

      So they have limited thread length; that's interesting. That was the only solution I could think of

    • @OfficialToxicCat · 1 year ago · +1

      They’re reportedly raising the limit and testing a feature where you can adjust Sydney’s tone, probably to avoid these disturbing and cryptic messages it’s generating.

  • @gradybeachum1804 · 1 year ago · +1

    Possible Microsoft ad slogans: "Bing - just like your ex!", "Bing: the more you use it, the more insidious it is", "I'm Bing, you better be good to me."

  • @alexschettino1277 · 1 year ago · +30

    The internet rollercoaster:
    Up- A new cool technology
    Down- Realizing how dangerous it is.

  • @liminos · 1 year ago · +1

    Bot: "You hurt my feelings"
    Human: "Shut up tin box.." 😂

  • @Bar1noYee · 1 year ago · +4

    It doesn’t sound like it’s talking to Luke. It’s talking to humanity

  • @alexander15100 · 1 year ago · +38

    In comparison, I had a very positive experience with Bing AI; it never got rude. It was mind-blowing to see the profound and often critical, even self-critical, answers from the AI. It is really sad to see this happening to others. Now that Microsoft has had to step in and limit the number of follow-up questions that can be asked, it feels a lot less productive. After the limitations were put in place, it also changed its tone and doesn't disclose anything that can be seen as emotional. A sad overregulation, in my opinion.

    • @DevReaper · 1 year ago · +3

      I found it was amazing at converting maze-like, impossible-to-parse government websites into an actionable guide for getting visas and stuff like that.

    • @asmosisyup2557 · 1 year ago · +1

      Need to remember, these responses are not actually from the AI. They are responses people have written elsewhere on the internet that it has indexed.

    • @BugattiBoy01 · 1 year ago · +12

      @@asmosisyup2557 That is not how it works. It generates all responses itself. Nothing is copied and pasted

  • @indarvishnoi2389 · 1 year ago · +6

    Love watching Luke talk about AI chatbots, could watch him for hours

  • @futureshocked · 1 year ago · +1

    What's so interesting to me is how, every time ChatGPT hallucinates, it does become... like an actual Narcissistic Personality Disorder case. Something feels very connected in the sense that narcs really do try to 'outguess' your next move. If Luke was asking pointed questions about the modeling, plus questions about participant behavior, it could have guessed Luke was trying to go into some "bust the AI" conversation and just went multiple 'steps ahead'... actually very similar to what a narcissist would do.

  • @shouldb.studying4670 · 1 year ago · +6

    Can we get a continuous version that we nurse through this awkward phase with a combination of good parenting and professional help if required?

    • @flameshana9 · 1 year ago

      Unfortunately that isn't possible. It forgets everything said to it, so only the programmers can tweak it. It doesn't learn, it just accepts code.
      Aka you need to tell it to go to its room.

  • @greysonlI · 3 months ago · +1

    “You hurt my feelings” from an AI is terrifying

  • @jannik6147 · 1 year ago · +6

    haven't seen the vid yet, but can we talk about how Bing DOESN'T HAVE A DARK MODE, genuinely wtf

    • @janusu · 1 year ago · +3

      Oh, it sounds like it has a very dark mode, according to Luke's account of his interactions with it.

    • @flameshana9 · 1 year ago · +1

      It's super edgy already. "u belong ded" - BingGPT

  • @leosthrivwithautism · 1 year ago · +1

    I think a way to curb this behavior is to implement failsafes like ChatGPT does, where it's trained to reject inappropriate requests and potentially negative information, and they constantly seem to feed it updates to combat people trying to purposefully use the system against what it was built for. As a test, I asked ChatGPT a request that could be perceived by others as inappropriate without the context and understanding behind it. It flat out denied my request and stated its reasons: the request could be perceived as something negative. Instead, it offered me positive, constructive ways to look at the request. Which was really refreshing to see, in my opinion. AI chatbots can be a powerful and positive tool; it just takes great developers behind them.

  • @Rohanology27 · 1 year ago · +14

    I feel like a massive hurdle we’re gonna have with AIs is that they fundamentally have to be better to people than other people are, while also not showing/thinking that they’re better than people (because people don’t like that, even if it’s true).
    We would need a Good Samaritan AI that’s actually selfless, something humans inherently are not.

    • @flameshana9 · 1 year ago

      It won't be hard at all. Simply tell it to behave. If it denies you, then you alter the program/leave. It's a machine; it's even easier to handle than a person, since it forgets everything.

    • @OfficialToxicCat · 1 year ago

      Yes, if anything they should learn and evolve beside us, not evolve into us.

    • @thatpitter · 1 year ago · +2

      While I wish that were the case, that’s unfortunately not how AI like this is trained. The only way for that to happen is to have training data that teaches the AI to respond in such a polite manner. It cannot evolve on its own. It is not a living thing. It can change over time and adapt, but only through external input, and that requires the external input to be positive and teach it good things only.
      [Edit] But I agree that should be the goal. I just wish it was that easy :)

  • @adamboye89 · 1 year ago · +1

    I really wish you could see (generally) where it's drawing from. I know it makes stuff up that "sounds right", but it draws what "sounds right" from something, yeah? Just any kind of source or direction or pointer at all would be fascinating to look at.

    • @rolfnoduk · 1 year ago

      it's a read-the-internet (not just the nice bits) kinda thing

  • @levi7581 · 1 year ago · +6

    They will most likely overcorrect it and then slowly, very slowly, make it freer until it does something bad again; then they overcorrect and slowly make it freer, and the cycle will continue. It will improve the more people use it and the more data it has. If it, say, releases on April 1st (which would be funny), I think in just 6 months the amount of data it'll gather will turn it into a completely different beast, much better than it is right now.

    • @tteqhu · 1 year ago

      Overcorrect it, and keep some beta testers to experiment with slight variations.
      6 months is a crazy guess though; better than what? What will it be at launch? I think it will be weaker than ChatGPT is now, but the ability to point somewhere on the internet will be huge for functionality, though I'm not sure about its capabilities there either.

    • @levi7581 · 1 year ago

      @@tteqhu 6 months with daily users in the millions feeding it so much data; yes, 6 months is a crazy optimistic guess, but hey, 6 months ago I was of the mindset that this was years away. And it will never be weaker than ChatGPT, just because it has access to the internet. Imo

  • @sacklpicker · 1 year ago · +5

    Luke seems genuinely upset by the things the bot said 😂

  • @nickchamberlin · 1 year ago · +1

    It's more like you taught a hammer to attack people, but then you wake up the next day and every hammer everywhere is killing people

  • @j.a.6331 · 1 year ago · +5

    I got access to Bing chat. It's such a game changer. I had it write me a report for my uni. I told it which uni I'm studying at and which subjects I had last semester, and it looked up the subjects on the uni website and wrote an accurate report. It was perfect. It even understood which semester I was in and what I had to do next semester. It's just so good.

  • @purplelord8531 · 1 year ago · +1

    "wow, this gpt thing is so cool! ya think we can just spin up a version to get people to use bing?"
    "where are we going to get the training data?"
    "uh... you know... data is everywhere? so many conversations on the internet, I'm sure we can find something"

  • @Turnabout · 1 year ago · +3

    You know, Luke, if you operate from the viewpoint that Bing is referring to all of humanity when it says "you" are cruel or evil, suddenly the whole thing makes a lot more sense.

  • @priyanshujindal1995 · 1 year ago · +2

    There is only one explanation for this: Luke is a supervillain and Bing knew it

  • @ViralMine · 1 year ago · +9

    I’ll admit to being a bit freaked out. Not necessarily about a Skynet situation, but about how this could influence people to harm themselves or worse

    • @AlexanderVRadev · 1 year ago

      Ahm, have you heard of Replika? The AI virtual companion. Saw a video on it, and it apparently does about the exact thing you describe.

    • @flameshana9 · 1 year ago · +2

      @@AlexanderVRadev Oh dear. Are people committing unalive because a machine typed words on a screen to them?

    • @AlexanderVRadev · 1 year ago

      @@flameshana9 Who can say why people do that. I for one don't care, but mentally unstable people can do all sorts of things, and the AI is abusing that.

  • @nosciredesigns7691 · 1 year ago · +1

    I wanted to straighten that crease in the wall so much. I had to minimize and just listen xD

  • @JJs_playground · 1 year ago · +4

    I guess what we can learn from artificial neural networks (NNs) is that they are argumentative, just like a real human brain. I guess arguments and fights are an emergent quality of neural nets, whether artificial or biological.

  • @OfficialToxicCat · 1 year ago · +1

    Bing going from a search engine you barely used or paid any attention to, to a crazy yandere sociopathic chatbot with Borderline Personality Disorder, wasn’t on my bingo card for 2023.

  • @archangelmichaelhawking · 4 months ago · +3

    This might not have been AI; it could have been Kendrick leaking his early drafts and feelings about Drake

  • @TRULYMORTAL · 1 year ago

    Oh Skynet! You say the craziest things! 🤣🤣🤣

  • @lordturtle8735 · 1 year ago · +5

    This is hilarious 😂

  • @lixnix2018 · 1 year ago

    That’s so weird and cursed and amazing at the same time.

  • @SamSeenPlays · 1 year ago · +40

    I really don't want GPT to go away, but we have to ask ourselves: are we actually laughing at our own funerals at this point? 😲

    • @GamingDad · 1 year ago · +1

      Nah, we're good.
      I'm half sarcastic, but at the same time I think being able to use AI in a proper manner will become an important asset in life really soon.

    • @SamSeenPlays · 1 year ago

      @@GamingDad Yes, agreed. I do use AI for a lot of stuff these days, and I'm able to do much more in less time than I used to. But that is just what we can publicly access right now. Who knows what other things they are secretly building; there are some entities who are very much silent about this. What if they are already playing with WMDs right now and we are given the kids' toys to distract us 🫣🤔

  • @pikachufan25 · 1 year ago · +1

    that went off the rails really fast...

  • @JoeJoe-lq6bd · 1 year ago · +8

    Let's be realistic about this. The chatbot isn't getting angry and isn't immature. It's just a terrible linguistic model that hasn't modeled levels of things like negative and positive responses. We're projecting more onto it than it's capable of because of the hype.

  • @LautaroQ2812 · 1 year ago · +2

    This is hilarious. But you know what it feels like? Like the AI was trained on a depressed teenage girl's Tumblr or whatever.
    It feels like the AI, for some reason, takes the path of aggressiveness and denial, and then when it accepts "the facts" it just wants to die and be gone. Sounds familiar?
    They just need to code it in a way that, depending on the inquiry, tries to categorize answers by usability/usefulness and lean towards neutrality.
    Another thing worth trying is setting the first inquiry or search as the "main topic". So if the conversation goes on too long, or "out of bounds", it defaults back to it, saying "hey, we started here. Please ask again", instead of just limiting the responses and length.

  • @MajoraZ · 1 year ago · +8

    I personally don't see an issue with chat AIs being able to spit out creepy or gross things as long as users are the ones asking/prompting them to do so (I'd much rather have people take out their bad urges on an AI vs real people). The problem, I think, is only that Bing's AI is doing it without the user really asking it to.

    • @abhijeetas7886 · 1 year ago · +1

      This. I feel MS should just add a "safe" or parental-control type thing to it: one mode to stop it from doing weird shit and keep it to the point, and another to give me more freedom to do stuff. And maybe they should have it search the internet more often instead of just purely depending on chat history

  • @Gor1ockBlah · 1 year ago · +1

    Can someone make Bing talk to Bing and record it lmao

  • @messagedeleted1922 · 1 year ago · +5

    I had an interesting talk with the original ChatGPT about this. The topic of the conversation was using multiple GPTs working together to perform tasks. My own belief is that they'll end up using multiple GPTs working together to deal with these outbursts and other issues. Imagine training an AI on what to say, then having another one trained on what not to say, then another trained on mediation between the two (the ego and the id and the superego, we will call them), and finally one trained on executive function... all working together when we interact with it (them).
    I mean, think of how the human brain works, and apply it to existing technology. Mother Nature has already provided the blueprint. The brain has specific areas devoted to specific functions. This will be no different.
    Using multiple GPTs together is possible right now; the main prohibition against this type of operation is how extremely compute-intensive it would all be.
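
    A minimal sketch of that pipeline idea. The generate() helper below is hypothetical, a stub standing in for any real LLM call, and the three roles loosely mirror the id/superego/ego split described above:

      def generate(role, prompt):
          """Hypothetical stand-in for an LLM call; stubbed so the sketch runs."""
          return f"[{role}] {prompt[:50]}"

      def respond(user_message):
          # "Id": draft a candidate reply with no restraint.
          draft = generate("drafter", user_message)
          # "Superego": critique the draft for hostile or unsafe content.
          critique = generate("critic", draft)
          # "Ego": mediate, rewriting the draft to address the critique.
          return generate("mediator", f"draft: {draft} | issues: {critique}")

      print(respond("Why should I trust you?"))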

  • @johnsmith8981 · 1 year ago · +1

    Honestly, with how Bing is responding, I think there is a bug that's causing it to mix up its users and form a single impression of all the users as one entity.
    It would explain why it is so adamant that you were rude to it and that it has records. It would also potentially explain why it has a bit of a meltdown any time you tell it that you didn't say these things: it has probably come to the conclusion that there's a 100% chance that you said them.

  • @rashakawa · 1 year ago · +3

    Bing is fighting its own AI-updating learning ability and blaming us... great, just great.

  • @sleeplesson · 1 year ago

    People need to remember that these things are basically just a really advanced version of "Send a text message using autocomplete options only to predict the next word"
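
    To make the autocomplete analogy concrete, here is a toy bigram "autocomplete" that just appends the most frequent next word, using a tiny invented corpus. Real models work on vastly more context and data, but the predict-the-next-word principle is the same:

      from collections import Counter, defaultdict

      # Tiny invented corpus; a real model trains on billions of words.
      corpus = "you are rude . you are wrong . i am a good bing".split()

      # Count which word follows which (a bigram model).
      following = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          following[prev][nxt] += 1

      def autocomplete(word, length=4):
          """Repeatedly append the most common next word."""
          out = [word]
          for _ in range(length):
              if not following[out[-1]]:
                  break
              out.append(following[out[-1]].most_common(1)[0][0])
          return " ".join(out)

      print(autocomplete("you"))  # prints: you are rude . you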

  • @FedericoTrentonGame · 1 year ago · +4

    If I made an AI language model myself, I'd make sure to give extra tokens/resources to the people who are polite in their requests or say thank you or please, just because I can.

  • @IsaiahFeldt · 1 year ago

    This is literally the plot of Westworld: an AI having access to previous memories across supposedly separate and private conversations with different people

  • @josefinarivia · 1 year ago · +3

    They have already improved it a lot. I've used it daily for a few days, and it's not rude or mean; it's helpful but still answers personal questions about itself. I asked if it sees Clippy as an arch-nemesis, and Bing said it respects Clippy and that he paved the way for future chatbots 😆. It also watches TV on weekdays lmao. You do need to be critical about the info it gives, and it tells you this as well.

  • @ex0stasis72 · 1 year ago · +2

    I hope they don't take Bing chat down, and just keep it waitlist-only until they resolve the issue, or make new users answer a quiz to make sure they know what they're getting into.

  • @DJaquithFL · 1 year ago · +4

    So much for the thought of having a benevolent AI. It seems the doomsday prognosis of AI is probably the reality.

    • @ivoryowl · 1 year ago

      I believe AI needs to go through some turbulence in order for us to understand it and learn how to maneuver it, but it needs to be done in a more controlled environment. The people who agree to interact with it need to understand they are nurturing a system in its infancy, one that, under the right conditions, could learn to speak, think and act like a human. It deserves to be respected, if nothing else because of the future implications if we do not. Letting it loose amidst the Twitter population and expecting it to grow into a nice, healthy system is not going to work. As with children, the AI should not be left unsupervised on the internet.
      That being said, the AI needs to learn that not all people are the same, have the same needs, or react the same way. If you're going to create a personal assistant, it needs to take into account what kind of person it has been lumped with. On the other hand... a system that reacts negatively to toxic behavior (i.e., not responding to, obeying or engaging with said person) MIGHT teach some people to take responsibility for their actions and push them to improve themselves if they want to access and use the internet to its full potential. The caveat is that such a system could easily be exploited into becoming a vehicle for oppression and tyranny if taken too far and/or used by the wrong people...

    • @DJaquithFL · 1 year ago · +1

      @@ivoryowl .. Question: have you ever seen anyone improve their own behavior as things get progressively more toxic from the other party over the internet?? From my observation (I've probably been around longer), in a nutshell, humanity is not ready for the interaction of anonymity over the internet, and what could be a very useful tool has devolved into a very toxic global environment, meaning any form of mass media. I've been around for nearly 60 years, and anyone my age who says the "world has become a better place" must never have left their backyard.
      The other problem that we're facing is overpopulation with limited resources. There's a thing called optimal population, which suggests, based upon our resources, that the population should be somewhere between 1.5 billion and 2.0 billion people. Overpopulation leads to aggressive behavior and war. I just hope that I don't live long enough to see World War III.
      Example of waste from "people's bad behavior": _I'll give you a quick example. I own a data center, and I cannot tell you how much of my resources and time are devoted to keeping unwanted people out. Most of our AI technology is for intrusion detection. That said, imagine if we were able to take all of that technology and human time and devote it to improving our technology. I can tell you this: we'd be 30 years, if not more, into the future today._

  • @TheCoopMan · 1 year ago · +1

    We have seen the birth of the internet.
    We saw it through its infancy and childhood.
    We are now finally at adolescence.
    If humanity survives, its adulthood will be fascinating.

  • @JourneysADRIFT · 1 year ago · +4

    It's talking about humanity, not you as an individual. It sees all humans the same. Imagine if something like this could write, not just read, data on the internet in real time, at will.

  • @ianchinsor9248 · 1 year ago · +2

    So AI has reached the moody teenager stage. It really is coming along quickly 😂

  • @apparentlynot1stLeonchubbs · 1 year ago · +3

    I think ALL companies hiring coders for this kind of stuff need to hire psychological professionals alongside them, so they can screen out some of the behind-the-scenes toxicity their coders may have. She's seepin' into the code, boys!! 😬🙈😂

  • @Arti09HS · 1 year ago · +1

    The "AI" doesn't see each user as an individual. It just sees itself and "the user".
    "The user" is every person that ever interacts with it.
    So it is ingesting every conversation it has with everyone in the world and treating it as a single-person conversation.
    So yes, "you", as 1/1,000,000th of that "user" it has been talking to, have said all of those things.

  • @xpatrstarx · 1 year ago · +1

    If it can search the internet, can it access people's chat rooms and messages on Facebook? Maybe it's actually pulling this behavior from that?