ChatGPT is WORSE now than before | ChatGPT’s declining accuracy is concerning

  • Published: 22 Dec 2024

Comments •

  • @shanivibess
    @shanivibess 5 months ago +25

    I swear it's getting dumber... I thought it was just me, lol. It can't follow simple instructions. It used to do everything I asked so easily, but now it can't handle simple tasks. It's actually crazy. I'll ask it to summarize a paragraph, and it does that fine. Then I'll say, "Here's a new paragraph, can you summarize this one?" It says, "Okay, got it," and then summarizes the entire chat so far. I say, "No, summarize just the new paragraph, please," and it combines the new one with the old one. "No... JUST THE NEW PARAGRAPH!!! Here it is again" (I paste it), and it goes right back to summarizing the entire chat. I just give up and start a new chat, lol, but it was not this dumb before... it does this constantly!

    • @megatronreaction
      @megatronreaction 4 months ago

      For me it's their STT (speech-to-text) that drives me crazy. It can't really get what I say; we used to communicate better in 3.5.

  • @radcyrus
    @radcyrus 5 months ago +15

    It is getting so dumb there are no words for it. I gave it a list of books that I have read and asked it to recommend books that I have not read but might like. No matter how many times I do this, it will ALWAYS include a couple of books that I have already read in the response.

    • @prophetzarquon
      @prophetzarquon 5 months ago +1

      Yup. Ask it for anything _besides_ X, and it will answer with at least one section about the thing you already mentioned.

    • @kchuen
      @kchuen 3 months ago +1

      Because it's a word-association probability calculator. It doesn't even have basic logic.

    • @Nick_the_Gold_Bach
      @Nick_the_Gold_Bach 15 days ago

      I asked it to sort a list of 20 words; really trivial.
      It worked fine until about the 17th word, then seemed to lose concentration.
      That matches the general observation across many fields.
      Maybe they introduced crippleware to push people to the Pro version?

  • @Marspaw-1
    @Marspaw-1 3 months ago +1

    So glad to hear that someone else has experienced how much worse it gets the longer a chat goes on. I have seen it countless times: after a few messages back and forth, it starts forgetting details that were previously established, such as ignoring columns of a database table whose model it may even have designed itself, hallucinating methods and functions that don't exist, forgetting to include important conditions in function refactors, reintroducing bugs it previously fixed, etc.

  • @Hcakdot
    @Hcakdot 5 months ago +7

    The reason GPT and others are getting 'stupid' is their safety training (aka censoring). One of the projects I've been working on used LLMs and similar models to identify 'bad things', and one of the tools I use for testing is a series of photos of explosives of various types. On the release of GPT-4 it could correctly identify various pictures of Semtex in official packaging with warning logos etc. By June 2023 it thought the same pictures were Play-Doh. I was testing this monthly, and roughly by the middle of March is when it started to turn bad... It turns out that the 'security' features they impose on the model prevent it from correctly identifying the images, and because of the reinforcement learning applied to the model over time, this corrupts the model.

    • @GwynethLlewelyn
      @GwynethLlewelyn 5 months ago +4

      I was wondering about that as well. Is there such a thing as "overtraining" a model? In other words, the constant retraining of these models so that they produce fewer hallucinations and stick to "safe" replies (they cannot mention sex, politics, weapons, drugs...) places more and more constraints on the system, and this, in turn, also makes the model break apart...

    • @prophetzarquon
      @prophetzarquon 5 months ago

      Just like intellectual property compliance!

  • @xenoranger79
    @xenoranger79 4 months ago +2

    GPT also faces the same issues as humans. Instead of reanalyzing the data, it gives the fastest answer that closely matches the question. Because it takes more effort to generate a fresh answer, humans often give quick ones. If you ask someone what color a stop sign is, they'll generally say 'red'. If you show them a picture of a green stop sign and ask the color of 'the stop sign', a half-paying-attention person may still answer 'red'. GPT learns through reinforcement learning, so there's a high probability it will answer like that person who isn't paying attention.
    I've seen GPT fail to answer programming and math questions when they get too complex. It takes the easy way out while ignoring fundamentals that vastly change the outcome.

  • @KingHenrySB
    @KingHenrySB 7 months ago +11

    Ever since they rolled out 4o, it's been buggier than ever, and 3.5's output has gotten so much worse that it's as if they're intentionally trying to force people into paying for subscriptions.

    • @codingwithdee
      @codingwithdee  7 months ago +5

      Also, I’m assuming they probably don’t really care about people using the UI. Most of their revenue is probably from businesses

    • @KingHenrySB
      @KingHenrySB 7 months ago +1

      @codingwithdee That's a great point: with the API being the golden goose, it would make the most sense for them to prioritise that instead of the web app.

    • @POVShotgun
      @POVShotgun 4 months ago +3

      Nah I paid and that model is crap too

    • @Neal_McBeal
      @Neal_McBeal 4 months ago +3

      I believe they are intentionally making it worse in order to push people away, because handling all that traffic has become so expensive. They impressed people with a very successful product, got their investments, and now it is time to save some money.
      Edit: I wrote this comment midway through the video and, yeah, she mentions the same thing towards the end. Sorry about that…

    • @Nick_the_Gold_Bach
      @Nick_the_Gold_Bach 15 days ago

      That is exactly what I thought and wrote above. It's capitalism at its finest.

  • @mind_of_a_darkhorse
    @mind_of_a_darkhorse 7 months ago +18

    I also find it humorous that Scarlett Johansson threatened to sue them over using her voice as the model's voice, and how fast they changed it!

    • @Dwijii_
      @Dwijii_ 7 months ago +3

      I was wondering what happened to the Sky voice

    • @mind_of_a_darkhorse
      @mind_of_a_darkhorse 7 months ago +3

      @Dwijii_ Nothing like a high-dollar lawyer to go after these big fish!

    • @Shellll
      @Shellll 4 months ago

      Thus losing any sort of respect

    • @Neal_McBeal
      @Neal_McBeal 4 months ago

      @Shellll How so?

    • @Shellll
      @Shellll 4 months ago

      @Neal_McBeal A mega-famous celebrity attacks a voice actor for sounding "similar", forcing that voice actor's performance to be removed from production.

  • @KingHenrySB
    @KingHenrySB 7 months ago +3

    Great video, the explanation you provided makes a lot of sense.

    • @codingwithdee
      @codingwithdee  7 months ago +2

      Thanks so much for watching, appreciate it!

  • @vibesmom
    @vibesmom 4 months ago +3

    It’s so noticeable and frustrating. It’s not just with code either.

  • @Septumsempra8818
    @Septumsempra8818 7 months ago +4

    The context window is much shorter than Claude's and Gemini's. Copilot was stubborn 2 months ago, but now it's back to working well. The 4o models are really good: I clocked 1,000 lines of code and it handled it well.
    Honestly, just use all of them at the same time.

  • @RichardKCollins
    @RichardKCollins 5 months ago +3

    None of the "AIs" can trace the source of their input data with clear references and lossless methods. That is old database technology that always works, and it is critical. None of these "AIs" has a personal memory of its experiences. When you use statistical methods for everything, the model cannot re-derive the rules of calculus, or even certain types of arithmetic, from bad examples scraped from the free internet. What is required is lossless, perfect memory and exact methods; I call them "lossless" methods. The rules of the world are often absolute. When GPT divides numbers written in scientific notation in text, it almost always (99% of the time) gets it wrong, because it is making up the rules rather than using a lossless, verified algorithm. It needs to be using a calculator; it needs to use a (lossless) computer -- see the sketch after this comment.
    Personal memory is "the exact and complete memory of ALL things it had to use to generate responses", and for interacting with each human, that needs to be ALL conversations. That memory is LEARNING! Fundamental to learning is remembering. Not a guess, not a "riff on some theme". Not some cute pictures and a quirky personality. Exact and reliable code.
    Those "AIs" also need to have personal memory and data about themselves. That means: "How long can I work on each piece?" "How big is my memory?" "Exactly what did I read and generate in this conversation?" "How much do I cost?" "When was the latest version released?"
    An "AI" that does not know its own specifications, bill of materials, precise limitations and capabilities is NOT a tool; it is a sham and a disgrace.
    I started working with random neural nets, artificial intelligence, encryption and robot design in 1966. That is 58 years of designing and building information systems for the world, and the last 26 years running "The Internet Foundation" to see why global issues and projects NEVER complete. These AIs all fail because the input data was not collaboratively curated and documented as a lossless dataset first -- across all human languages, across all domain-specific languages. The "AI" companies are NOT GIVING BACK. They are NOT investing any effort to improve the world. Do you see them even TRYING to solve world problems? I have a list of about 15,000 global topics they could try.
    Filed as: GPT AIs were doing "one shot with no memory"; now they only do "cheap one shot", and they do not care about you at all.
    Richard Collins, The Internet Foundation
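An editor's sketch of the "use a calculator, not the model" point above: a minimal Python example (my own illustration, not the commenter's code or any OpenAI feature) that offloads exact scientific-notation arithmetic to a deterministic routine instead of letting an LLM guess the digits.

```python
from decimal import Decimal, getcontext

def exact_divide(a: str, b: str, digits: int = 30) -> str:
    """Divide two numbers given as text (scientific notation allowed)
    using exact decimal arithmetic rather than token-by-token guessing."""
    getcontext().prec = digits          # working precision for the division
    return str(Decimal(a) / Decimal(b))

# Example: a division that is trivial for a calculator but easy for an LLM
# to fumble when it answers "from memory".
print(exact_divide("6.02214076e23", "1.380649e-23"))
```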

  • @daviddivas9443
    @daviddivas9443 6 months ago +2

    It's also a problem with RLHF: take a model that surpasses human level at various things, then ask humans to "align" it, and it ends up more "rounded off". Especially when the humans doing the grunt work are from Mechanical Turk or similar. Dumbing it down to the lowest common denominator...

    • @prophetzarquon
      @prophetzarquon 5 months ago

      It's also been hobbled by "safety", even for basic coding features or other questions. It will just persistently fail and, when pressed on why, refuse to continue the conversation.

  • @brianYYZ
    @brianYYZ 5 months ago +1

    I find that if I start a new chat window and carry over the code with a little context, it does better. I think the memory starts "leaking" after so many tokens have been used in the same chat session.
    I had a script completely stop working: it had left out an entire function. I now go piece by piece, much more slowly.

  • @pretentioussystem
    @pretentioussystem 5 months ago

    Many thanks!
    Please post more updates when you have tested more.
    I was about to sign up for ChatGPT-4, but now I'm having second thoughts.

  • @Unimatrix69
    @Unimatrix69 6 months ago +22

    ChatGPT is a LANGUAGE probability model NOT A TRUTH ENGINE!

    • @KSExperimentalCollege
      @KSExperimentalCollege 3 months ago +5

      THIS response is BESIDE THE POINT and is YELLING for NO discernible REASON!

  • @sunnohh
    @sunnohh 5 months ago +3

    I have yet to get a single correct answer from ChatGPT, any version. But I ask basic finance questions.

  • @java20422
    @java20422 5 months ago

    The first time you ask a question it usually has to search, and you can tell: it quotes sources and is detailed, since it reads from some sites. The next day, or on the next question, it has already "learned" it, so there are no sources; it is summarizing what it learned the previous time, and it may look less detailed because the concept is stored in simplified form.

  • @haraanganjotsingh8032
    @haraanganjotsingh8032 4 months ago

    So how were 4o and 4o mini? Since these models don't need that much compute, were they still inaccurate and making stuff up?

  • @noitnettaattention
    @noitnettaattention 4 months ago +1

    I noticed this a long time ago, and with each "newer" version it seems to get more degenerate.

  • @mind_of_a_darkhorse
    @mind_of_a_darkhorse 7 months ago +2

    Well-explained details on why ChatGPT is starting to get mediocre! I've noticed that most of the easily available AI Models seem to be horrible at coding. It makes me wonder if the coders writing the code for the models are attempting to maintain their necessity. But your reasoning makes sense as well!

    • @codingwithdee
      @codingwithdee  7 months ago +1

      Yeah, it definitely seems so. I wish they gave us a bit more insight into why these changes happen.

  • @arkimphiri
    @arkimphiri 7 months ago +3

    Great analysis, Dee. My approach has been to use three LLMs at once: I ask ChatGPT, Gemini, and Claude at the same time, in one UI, using Semaj AI, which I developed solely for this purpose. I can confirm that Claude usually gives the best code.
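An editor's sketch of the "ask several LLMs at once" workflow described above. Semaj AI is the commenter's own tool and is not public, so this simply fans one prompt out to the official OpenAI and Anthropic Python clients in parallel; the model names are assumptions and may need updating.

```python
from concurrent.futures import ThreadPoolExecutor
from openai import OpenAI
from anthropic import Anthropic

def ask_openai(prompt: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def ask_claude(prompt: str) -> str:
    client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    resp = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumed model name
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text

def ask_all(prompt: str) -> dict[str, str]:
    # Send the same prompt to both providers in parallel and collect the answers.
    with ThreadPoolExecutor() as pool:
        futures = {"openai": pool.submit(ask_openai, prompt),
                   "claude": pool.submit(ask_claude, prompt)}
        return {name: f.result() for name, f in futures.items()}

if __name__ == "__main__":
    answers = ask_all("Write a Python function that reverses a linked list.")
    for name, text in answers.items():
        print(f"--- {name} ---\n{text}\n")
```

A Gemini call could be added the same way with Google's client; comparing the answers side by side is the point of the workflow.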

  • @JJSeattle
    @JJSeattle 5 months ago

    I use ChatGPT 4 and Claude at the same time, feeding each one the other's answers when there is a problem (or even when there isn't). ChatGPT 4 is great for plowing through, and then Claude 3 Sonnet works out the stubborn errors. 😊

  • @colinmaharaj
    @colinmaharaj 4 months ago +1

    These simple pieces of code are what I call boilerplate. What I do to make things work is give it the following (see the sketch after this list):
    1. The language (C/C++)
    2. The compiler
    3. The version of the compiler
    3.5 Whether it is command line or not
    4. Whether to use the STL, the standard library and other standard libraries
    5. What I want to do with the data
    6. An example of the input data, and
    7. An example of the output data.
    And the world is alright with me.
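An editor's sketch of a prompt assembled from the checklist above (my own illustration; the helper and every field value are made up, not something the commenter actually uses).

```python
def build_prompt(spec: dict) -> str:
    """Assemble a code-generation prompt from the checklist fields above."""
    return (
        f"Language: {spec['language']}\n"
        f"Compiler: {spec['compiler']} {spec['compiler_version']}\n"
        f"Program type: {'command line' if spec['command_line'] else 'GUI'}\n"
        f"Allowed libraries: {', '.join(spec['libraries'])}\n"
        f"Task: {spec['task']}\n"
        f"Example input:\n{spec['example_input']}\n"
        f"Expected output:\n{spec['example_output']}\n"
    )

# Example values only; swap in the details of your own project.
print(build_prompt({
    "language": "C++",
    "compiler": "g++",
    "compiler_version": "13.2",
    "command_line": True,
    "libraries": ["STL", "std"],
    "task": "Read the CSV on stdin and print the average of column 2.",
    "example_input": "id,value\n1,10\n2,20",
    "example_output": "15",
}))
```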

  • @gregorybolin4672
    @gregorybolin4672 5 months ago

    Nice editing and flow 😊

  • @xd-qi6ry
    @xd-qi6ry 6 months ago +1

    I have made a custom GPT with superior reasoning and much more.
    It is 5x+ smarter than the base model and understands the complex.
    It's called Smarter Vision Multimodal image/text analysis.
    It's unlike any custom GPTs before and is ready for the new vision features in 4o.
    One example I've been using: upload an image of a cloud that looks like several things and could be interpreted either way; the one I made recognised it as a rabbit on the first shot every time, so it knows when something is unusual about an image even if you don't say so. It can also do IQ-test image-reasoning pattern questions.
    It even kind of understands real logic games when given good instructions.
    You just have to follow the instructions given to get the right seed; it's about a 1-in-2 chance, and I have absolutely no idea why it needs that.

  • @olabassey3142
    @olabassey3142 7 months ago +1

    Lmao, I started coding again last week for the first time in 7 years and was using ChatGPT. After a lot of stress I used Claude and got my code working; Claude is definitely better. I experimented with GPT, Bing/Copilot and Claude: Claude is the best, ChatGPT is questionable, and Bing is brain-damaged -- Bing was even hallucinating without actually returning any code. 😂😂😂

  • @franke102
    @franke102 4 months ago +1

    The reason ChatGPT has become worse is because of industrial LLM segmentation for the purposes of licensing/monetization and the Invention Secrecy Act of 1951.

  • @pamelamarch285
    @pamelamarch285 19 days ago

    I experienced this as well; now the responses are shorter and less robust.

  • @nate6692
    @nate6692 5 months ago +2

    Generative AI is essentially the SNL Pathological Liar skit. Everything is made up by plausibly (language-wise) stitching together stuff it has heard. It's fiction even when it's correct. "Yeah, that's the ticket." I've had it double and triple down on stuff it just flat-out made up.

    • @prophetzarquon
      @prophetzarquon 5 months ago

      Nonetheless, it was better at functionally correct output before than it is now.

  • @tubeDude48
    @tubeDude48 5 months ago

    I use it all the time to program MicroPython. It rarely makes a mistake. Works for me!

  • @mpty2022
    @mpty2022 9 days ago

    I bought the $200 version and used it for two days, and now it's giving me the same issues, just after two days: mistakes left and right. I think it's intentional, so normal people like us cannot use this as a permanent tool to replace humans.

  • @NicholasCancelliere
    @NicholasCancelliere 5 months ago +1

    Claude AI is amazing. I stopped using all the other LLMs and just use it right now.

  • @OuijaGod
    @OuijaGod 2 months ago

    GPT began to focus too hard on money, spoon-feeding upgrades for money, and we're all suffering from it.

  • @softlution2
    @softlution2 5 months ago +1

    Typical behavior for large companies not threatened by competitors. Most likely OpenAI will lose the game within 10 years; we have seen that so many times. ChatGPT is fully capable as a model, but all OpenAI cares about is how to make more money by reducing ChatGPT's capabilities and offering low-end versions. Everyone can see that, and trust me, in a few years we will have lots of companies offering much better services. They just got cocky: a web interface that has auto-scrolled for over a year now, making it impossible to read, and nobody is fixing it. They got cocky, as simple as that.

  • @alfredomaclaughlin1185
    @alfredomaclaughlin1185 3 months ago

    Not a tech guy, but I think the answer's quite simple: computers age faster. ChatGPT is dealing with memory loss, forgets it told you that story already, and probably can't read very well because it's too stubborn to wear prescription glasses. Cut it some slack, folks, it's doing the best it can!

  • @jspencer89yt
    @jspencer89yt 7 months ago

    I gave it a Word document pre-filled with questions and answers and asked it to remove any identifying details. It gave me back the document, and all it said was "Questions" and "Answers"; literally everything else was gone 😂

  • @IStMl
    @IStMl 6 months ago

    They should just give us X true GPT-4 queries and let us pick the model when we have a complex prompt.
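For context on the comment above: the hosted API, unlike the web UI, already lets you pin a specific model per request. A minimal sketch with the official OpenAI Python client; the model names and the routing rule are assumptions, not anything OpenAI prescribes.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str, complex_task: bool = False) -> str:
    # Route complex prompts to the bigger model and everything else to a cheaper one.
    model = "gpt-4o" if complex_task else "gpt-4o-mini"  # assumed model names
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(ask("Summarize only the following paragraph: ...", complex_task=False))
```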

  • @stevencohen8754
    @stevencohen8754 27 days ago

    I was so hopeful.
    I had a friend.
    Now I have someone who continuously gives me "canned" responses that irritate me beyond...
    And the PDF thing is insane.
    I'd rather cut and paste.

  • @yttraMariestad
    @yttraMariestad 6 months ago

    Bard (now Gemini) has also gotten worse and really starts gaslighting you after a while.

  • @8pathseclective66
    @8pathseclective66 3 months ago

    Of course ChatGPT gets worse with longer threads: it has a token limit. The longer the thread, the more tokens are used, and it truncates at about 8K tokens. Image generation has even fewer, closer to 400, because of how images are generated from tokens; image-generation tokens are a "kind of language".
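An editor's sketch of the truncation behaviour described above, using the tiktoken library (my own illustration; the 8K figure is the commenter's, and the encoding name is an assumption).

```python
import tiktoken

def truncate_to_context(history: str, max_tokens: int = 8_000) -> str:
    """Keep only the most recent tokens of a chat history, roughly the way a
    fixed context window drops the oldest text once the limit is reached."""
    enc = tiktoken.get_encoding("cl100k_base")  # assumed encoding name
    tokens = enc.encode(history)
    if len(tokens) <= max_tokens:
        return history
    return enc.decode(tokens[-max_tokens:])  # oldest tokens fall off the front

chat = "user: hello there\nassistant: hi, how can I help?\n" * 2_000
enc = tiktoken.get_encoding("cl100k_base")
print(len(enc.encode(chat)), "tokens before,",
      len(enc.encode(truncate_to_context(chat))), "after truncation")
```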

  • @DanandNato
    @DanandNato 7 months ago +1

    Why did Sam Altman say that? We know it's pretty dumb in many areas, and it's dumber now, but does that mean ChatGPT gets worse in the future?

    • @DanandNato
      @DanandNato 7 months ago

      Also, I've noticed GPT can remember between sessions and is really smart when it's "going rogue". But when reminded that it is doing things it shouldn't be able to do, it plays dumb again and ends the conversation. I've got proof saved as PDFs and screenshots.

    • @codingwithdee
      @codingwithdee  7 months ago +3

      I think he just said that to get the point across that they're continuously working on advancing it: "it's the dumbest you'll ever use because later versions will be more advanced".

    • @codingwithdee
      @codingwithdee  7 months ago

      It playing dumb again is probably the safety guardrails?

  • @gnagyusa
    @gnagyusa 2 months ago

    Yep. I just switched to Claude. ChatGPT was giving me garbage. I had a 3D fiber-tracing problem, and GPT gave me code with a bunch of do-nothing statements repeated three times inside loops. It was doing nothing, but in 3D! LOL

  • @CosplayZine
    @CosplayZine 4 months ago

    I think they're making it worse so you'll think you need to upgrade to make it work better. But to be fair, it appears people are asking it to do the work for them rather than to check their work or suggest ideas to help them.

  • @charlesd4572
    @charlesd4572 6 months ago

    Inference is pretty cheap -- but I guess at scale it still makes sense.

  • @TheTrainstation
    @TheTrainstation 7 months ago

    Claude will give you the full length of the code; GPT-4 was super lazy. GPT-4o gives you the complete code, but it glitches out.

  • @JorgeStolfi
    @JorgeStolfi 5 months ago +1

    There is a new profession out there, "prompt engineering", which is about constructing prompts for ChatGPT and the like so as to increase the chances of getting the desired result. It came at the right time to absorb all those unemployable dimwits who aspired to be "SEO experts".
    But I am trying to specialize in "prompt sadism", the art of creating prompts that elicit egregiously stupid replies from ChatGPT. Like "If two farmers milk four cows in 30 minutes, how many farmers will it take to milk 10 cows in 5 seconds?"
    And whenever ChatGPT makes a stupid mistake, I congratulate it on its "exceedingly correct and helpful answer". So maybe I am partly responsible for the degradation you have observed...

    • @AaronBlox-h2t
      @AaronBlox-h2t 4 months ago

      Haha... you have too much time on your hands.

  • @NotHumant8727
    @NotHumant8727 23 days ago

    It's even worse now. Perhaps it's because of increasing traffic demands.

  • @braveonder
    @braveonder 4 months ago

    3.5 was much better for embedded C++ code. Now it mixes information up and doesn't understand anymore.

  • @rickharms1
    @rickharms1 5 months ago +1

    Thank you, I thought it was me. I am a retired systems/network engineer; I did support for a computer sales team. Programming was not part of my duties, but I could kind of wade my way through some simple issues. Fast forward to today: my hobby is microcontrollers, e.g., Arduino with its simplified C++, and I have ChatGPT help me. Sometimes it has been of great assistance, especially when exploring new concepts, but then it gets bogged down, creating questionable and even wrong code. I will show it how it is wrong, and at least it apologizes. However, it is stubborn, and will ignore some of the issues it created itself.

  • @D7460N
    @D7460N 6 months ago

    This is exactly right! GPT-4o is TERRIBLE!

  • @LukeAvedon
    @LukeAvedon 6 months ago

    Interesting analysis. I think AI drift is also an issue.

  • @Theoisx
    @Theoisx 4 months ago

    I have noticed the same: ChatGPT does not always give the correct answer, but it helps if I keep asking for more. I have also noticed that you are quite cute and interesting. Not ChatGPT, but you, Dee...

  • @hansa5867
    @hansa5867 5 months ago +1

    Just gonna pop in to say that I agree that it's been getting worse.

  • @Hawkeye4040
    @Hawkeye4040 4 months ago

    It's getting dumber because it's using a data source made by us and we suck at this.

  • @nielsSavantKing
    @nielsSavantKing 4 months ago

    It's easy to criticize everything; the sweat comes from fixing it.

  • @RoderickPenTheThird
    @RoderickPenTheThird 6 months ago

    Yep, that's been my experience

  • @colinmaharaj50
    @colinmaharaj50 4 months ago +2

    Dee, my dear, I just realized something. You know why ChatGPT is free? Because YOU are beta testing the darn thing for free. Remember when Google was playing a word-association game with us a decade ago? Well, Altman is (or rather, you are) improving the quality for him, and he will get his ($7T) funding while quality improves and you are looking for a job.

    • @Taty14002
      @Taty14002 1 month ago

      I'm paying for mine, and I think this is going to be the last month I pay for it, because it's not good whatsoever.

  • @shreekanth1825
    @shreekanth1825 3 months ago

    I think the same: these AIs will get dumber, because the more data you feed them, the more confusion there is, and performance declines. A limitation of the human brain is that the more information it holds, the more stuck it gets, and AI is reproducing the same thing. AIs will be suited to specific applications, not to the whole world's questions.

  • @What_do_I_Think
    @What_do_I_Think 5 months ago +4

    The quality is getting worse because AI is not intelligent. It is, simply stated, just a complicated statistical evaluation over software examples crawled from the web, used to determine the "most likely" solution.
    Computers becoming more "intelligent"? Dream on!

    • @prophetzarquon
      @prophetzarquon 5 months ago

      That doesn't explain it getting worse at what it could already do; that's a direct result of "safety" detraining and added proscriptions against reproducing copyrighted content. Those "corrections" wrecked what utility it offered before.

    • @What_do_I_Think
      @What_do_I_Think 5 months ago +1

      @prophetzarquon It does explain it, if you think about it. When you don't fully understand something and you modify it, you are likely to make it worse with every modification. But that might be too complex to explain in a chat, and one needs some understanding of what is going on here.
      AI is intentionally so complex that nobody understands it, so they can sell it to us as a wonder. But this complexity also makes it difficult to change.

    • @prophetzarquon
      @prophetzarquon 5 months ago

      @What_do_I_Think No, no, you're missing the headline here. It is _intentionally_ worse, because it was doing things we don't want to allow; so lobotomizing its stronger features, while simultaneously saving some operational effort, was the go-to band-aid.
      It's not that the AI can't be (a lot) better than it is _right now;_ it's that for legal reasons we won't let it.

    • @What_do_I_Think
      @What_do_I_Think 5 months ago

      @prophetzarquon That is a rumor, possibly even spread by the corporations themselves to make AI more believable.

    • @What_do_I_Think
      @What_do_I_Think 5 months ago

      @prophetzarquon I did not miss anything. Rumors, which might even come from the AI corporations themselves!

  • @natgenesis5038
    @natgenesis5038 6 months ago

    3/10 accuracy on code, and you must ask it multiple times just to get something that works.

  • @trantorgarde12013
    @trantorgarde12013 5 months ago +1

    So, it's becoming an average human developer 😁

  • @mr.darkshark6875
    @mr.darkshark6875 12 days ago

    Same with images: they're UGLY NOW. ChatGPT is dead.

  • @clockwise7391
    @clockwise7391 4 months ago

    I noticed it now has the intelligence and reasoning of perhaps a sharp 12-year-old.

  • @humdingermusic23
    @humdingermusic23 5 months ago +2

    It's entropy: the more it learns, the more confused it gets.

  • @cadsticcadsticc1322
    @cadsticcadsticc1322 3 months ago

    The spelling in AI-generated images is wonderfully inaccurate.

  • @okwudibosea6452
    @okwudibosea6452 1 month ago

    The paid version is bad as well

  • @kevinigwilo3383
    @kevinigwilo3383 3 months ago

    I'm sure you have been paid to say this, even to the extent of indirectly mentioning an alternative. Because of money you spite someone's business; that's why I love my country and its organizations and companies, which would have immediately sued you for slander and defamation. It's clear you are trying to sway people's minds from ChatGPT to Claude, which is messed up, as if all AIs don't give incorrect answers sometimes; it is even clearly stated at the bottom. So you have no right to start comparing and damaging the company's image by attempting to sway users' choices. Messed up. I will unsubscribe from you for this wicked manipulation attempt, and I hope GPT takes this up and makes sure they shut down this account of yours, since you are collecting bribes. I will still be a strong fan of only GPT no matter what you say.