I think they're making it worse so you'll think you need to upgrade to make it work better. But to be fair it appears people are asking it to do the work for them rather than to check or present ideas to help them work.
There is a new profession out there, "prompt engineering", which is about constructing prompts for ChatGPT and the like so as to increase the chances of getting the desired result. It came at the right time to absorb all those unemployable dimwits who aspired to be "SEO experts". But I am trying to specialize in "prompt sadism", the art of creating prompts that elicit egregiously stupid replies from ChatGPT. Like "If two farmers milk four cows in 30 minutes, how many farmers will it take to milk 10 cows in 5 seconds?" And whenever ChatGPT makes a stupid mistake, I congratulate it for its "exceedingly correct and helpful answer". So maybe I am partly responsible for the degradation you have observed...
Thank you, I thought it was me. I am a retired system/ network engineer. I did support for a computer sales team. Programming was not a part of my duties, but I could kind of wade my way through some simple issues. Fast forward to today, my hobby is micro controllers, e.g., Arduino with its simplified C++. I have ChatGPT help me. Sometimes it has been of great assistance, especially when exploring new concepts. But, it then gets bogged down, creating questionable and even wrong code. I will show it how it is wrong. At least it apologized. However, it is stubborn, and will ignore some of the issues which it created.
I have noticed the same: ChatGPT doesn't always give the correct answer, but it helps if I continue to ask for more. I also noticed that you are quite cute and interesting. Not ChatGPT, but you, Dee...
Dee my dear, I just realized something: you know why ChatGPT is free? Because YOU are beta testing the darn thing for free. Remember when Google was playing a word-association game with us a decade ago? Well, Altman is (or rather, you are) improving the quality for him, and he will get his ($7T) funding while the quality improves and you are looking for a job.
I think the same: these AIs will get dumber, because the more data you feed them, the more confusion, and the more performance declines. A limitation of the human brain is that the more information it holds, the more stuck it gets, and AI is reproducing the same thing. AIs will be suited to specific applications, not to whole-world questions.
The quality is getting worse because AI is not intelligent. Simply stated, it is just a complicated statistical evaluation over software examples crawled from the web, to determine the "most likely" solution. Computers becoming more "intelligent"? Dream on!
That doesn't explain it getting worse at what it could already do; that's a direct result of "safety" detraining & added proscriptions against reproducing copyrighted content. Those "corrections" wrecked the trash utility offered before.
@@prophetzarquon It does explain it, if you think about it. When you don't fully understand something and modify it, it is likely that you make it worse with every modification you make. But that might be too complex to explain in a chat, and one needs some understanding of what is going on here. AI is intentionally so complex that nobody understands it, so they can sell it to us as a wonder. But this complexity also makes it difficult to change.
@@What_do_I_Think No no, you're missing the headline, here. It is _intentionally_ worse, because it was doing things we don't want to allow; so, lobotomizing its stronger features while simultaneously saving some operational effort, was the go-to band-aid. It's not that the AI can't be (a lot) better than it is, _right now._ It's that for legal reasons we won't let it.
I am sure you have been paid to say this, even to the extent of indirectly mentioning an alternative; for money you spite someone's business. That's why I love my country and its organizations and companies: they would have immediately sued you for slander and defamation, because it's clear you are trying to sway people's minds from ChatGPT to Claude. Messed up, as if all AIs don't give incorrect answers sometimes; it is even clearly stated at the bottom. You have no right to start comparing and damaging the company's image by attempting to sway users' choices. Messed up. I will unsubscribe over this wicked manipulation attempt, and I hope GPT takes this up and shuts down this account of yours, since you are collecting bribes. I will still be a strong fan of only GPT, no matter what you say.
I swear it's getting dumber... I thought it was just me lool, it can't follow simple instructions... It used to do everything I asked so easily, but now it can't handle simple tasks. It's actually crazy. I'll ask it to summarize a paragraph, and it does that fine. Then I'll say, "Here's a new paragraph, can you summarize this one?" It says, "Okay, got it," and then summarizes the entire chat so far... the fuck??? I say "no no no, summarize just the new paragraph please" and it combines the new one with the old one... "no... JUST THE NEW PARAGRAPH!!! here it is again (I paste it)" and it goes back to summarizing the entire chat. I just give up and make a new chat looool, but it was not this dumb before... it does this constantly!
For me it's their STT (speech-to-text) that drives me crazy. It can't really get what I say; we used to communicate better in 3.5.
It is getting so dumb there are no words for it. I gave it a list of books that I have read and asked it to recommend books that I have not read but might like. No matter how many times I do this, it will ALWAYS include a couple of books that I have already read in the response.
Yup. Ask it "besides" _anything_ & it will answer with at least one section about the thing you already said.
'Cause it's a word-association probability calculator. It doesn't even have basic logic.
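Client-side, the repeated-recommendation failure above is easy to work around: treat the model's list as untrusted output and filter it deterministically against the read list. A minimal sketch (function and variable names are illustrative, not any real API):

```python
# Treat the model's recommendation list as untrusted output and
# post-filter it against the user's read list deterministically.
def filter_recs(recommendations, already_read):
    # Normalise titles so "Dune " and "dune" count as the same book.
    read = {title.strip().lower() for title in already_read}
    return [r for r in recommendations if r.strip().lower() not in read]

recs = ["Dune", "Hyperion", "The Left Hand of Darkness"]
mine = ["dune", "Neuromancer"]
print(filter_recs(recs, mine))  # ['Hyperion', 'The Left Hand of Darkness']
```

The point is that dedup is a set-membership check, which code gets right every time and a probabilistic model does not.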
I asked it to sort a 20-word list, really trivial.
It worked fine till about the 17th word - then seemed to lose concentration.
So that is a general observation across many fields.
Maybe they introduced crippleware, to get people to go the PRO version?
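A sort like this is also trivially checkable in code: the output must be a permutation of the input, in order. A minimal sketch of such a validity check, assuming a plain list of words:

```python
from collections import Counter

def is_valid_sort(original, model_output):
    # The output must contain exactly the same items (a permutation)...
    same_items = Counter(original) == Counter(model_output)
    # ...and every adjacent pair must be in non-decreasing order.
    in_order = all(a <= b for a, b in zip(model_output, model_output[1:]))
    return same_items and in_order

words = ["pear", "apple", "fig", "banana"]
print(is_valid_sort(words, sorted(words)))    # True
print(is_valid_sort(words, ["apple", "fig"])) # False: items were dropped
```

Losing items around the 17th entry, as described above, would fail the permutation check immediately.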
So glad to hear that someone else has experienced how much worse it gets the longer a chat goes on. I have experienced countless times that after a few messages back and forth, it starts forgetting details which were previously established: ignoring columns of a database table it may itself have designed the model for, hallucinating methods and functions that don't exist, forgetting to include important conditions on function refactors, reintroducing bugs it previously fixed, etc.
The reason GPT and others are getting "stupid" is their safety training (aka censoring). One of the projects I've been working on uses LLMs and similar for identification of "bad things", and one of the tools I use for testing this is a series of photos of explosives of various types. On release of GPT-4 it could correctly identify various pictures of Semtex in official packaging with warning logos etc. By June 2023 it thought the same pictures were Play-Doh. I was testing this monthly, and roughly the middle of March was the point it started to go bad... It turns out that the "security" features they impose on the model prevent it from correctly identifying things, and because of the reinforcement learning applied to the model over time, this corrupts the model...
I was wondering about that as well. Is there something like "overtraining" a model? In other words, the constant retraining of these models so that they produce fewer hallucinations and stick to "safe" replies (they cannot mention sex, politics, weapons, drugs...) places more and more constraints upon the system, and this, in turn, also makes the model break apart...
Just like intellectual property compliance!
GPT also faces the same issues as humans. Instead of reanalyzing the data, it gives the fastest answer that closely matches the question, because it takes more power to generate a new answer; humans often give quick answers for the same reason. If you ask someone the color of a stop sign, they'll generally say "Red". If you show them a picture of a green stop sign and ask the color of "the stop sign", a half-paying-attention person may still answer "Red". GPT learns from reinforcement learning, so there's a high probability it might also reply that the stop sign is "Red", just like a person not paying attention.
I've seen GPT fail to answer programming and math questions when they get too complex. It takes the easy way out while ignoring fundamentals that vastly change the outcome.
Ever since they rolled out 4o, it's been more buggy than ever before and 3.5's output has gotten so much worse, it's as if they're intentionally trying to force people into paying for subscriptions
Also, I’m assuming they probably don’t really care about people using the UI. Most of their revenue is probably from businesses
@@codingwithdee that’s a great point, with the API being the golden goose, it would make the most sense for them to prioritise that instead of the web app
Nah I paid and that model is crap too
I believe they are intentionally making it worse in order to move people away. Because handling all that traffic has become so expensive. They impressed people with a very successful product, got their investments and now it is time to save some money.
Edit: I wrote the comment midway in the video and yeah, she mentioned the same thing towards the end. Sorry about that…
That is exactly what I thought and wrote above. It's capitalism at its finest.
I also find it humorous that Scarlett Johansson threatened to sue them over using her voice as the model's voice, and how fast they changed it!
I was wondering what happened to the voice of Sky
@@Dwijii_ Nothing like a high-dollar lawyer to go after these big fish!
Thus losing any sort of respect
@@Shellll How so?
@Neal_McBeal mega famous celebrity attacks a voice actor for sounding "similar" -- Forcing that voice actor's performance to be removed from production.
Great video, the explanation you provided makes a lot of sense.
Thanks so much for watching, appreciate it!
It’s so noticeable and frustrating. It’s not just with code either.
The context window is much shorter than Claude's and Gemini's. Copilot was stubborn 2 months ago, but now it's back to working well. The 4o models are really good: clocked 1,000 lines of code and it did it well.
Honestly, just use all of them at the same time
None of the "AIs" can trace the source of their input data with clear references and lossless methods. That is old database technology that always works. It is critical. None of these "AIs" has a personal memory of its experiences. When you use statistical methods for all things, it cannot re-derive the rules of calculus, or even certain types of arithmetic, from bad examples from the free internet. What is required is lossless, perfect memory and exact methods. I call them "lossless" methods. The rules of the world are often absolute. When GPT divides numbers from text in scientific notation, it almost (99%) always gets it wrong. Because it is making up the rules and not itself using a lossless and verified algorithm. It needs to be using a calculator, it needs to use a computer (a lossless one).
Personal memory is "the exact and complete memory of ALL things it had to use to generate responses". And for interacting with each human, it needs to be ALL conversations. That memory is "LEARNING"!! Fundamental to learning is remembering. Not a guess, not "riff on some theme". Not some cute pictures and quirky personality. Exact and reliable code.
Those "AIs" need to have personal memory and data about themselves. That means: "How long can I work on each piece?" "How big is my memory?" "Exactly what did I read and generate in this conversation?" "How much do I cost?" "When was the latest version released?"
An "AI" that does not know its own specifications, bill of materials, precise limitations and capabilities is NOT a tool; it is a sham, a toy, a disgrace.
I started working with random neural nets, artificial intelligence, encryption and robot design in 1966. That is 58 years I have been designing and building information systems for the world; the last 26 years, "The Internet Foundation", to see why all global issues and projects NEVER complete. These AIs all fail because they did not collaboratively curate and document the input data as a lossless dataset first, across all human languages and all domain-specific languages. The "AI" companies are NOT GIVING BACK. They are NOT investing any effort to improve the world. Do you see them even TRYING to solve world problems? I have a list of about 15,000 global topics they could try.
Filed as (GPT AIs were doing "one shot with no memory", now they only do "cheap one shot and they do not care about you at all")
Richard Collins, The Internet Foundation
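The scientific-notation complaint above is a concrete case where exact tooling beats pattern-matching: Python's `decimal` module parses and divides such numbers losslessly, which is exactly what a calculator tool attached to a model would do. A small sketch (the values are arbitrary examples):

```python
from decimal import Decimal, getcontext

getcontext().prec = 30  # plenty of significant digits for the quotient

def divide_sci(a: str, b: str) -> str:
    # Decimal parses scientific notation exactly; no binary-float
    # rounding and no guessing at the rules of arithmetic.
    return str(Decimal(a) / Decimal(b))

print(divide_sci("6.022e23", "1.0e3"))  # 6.022E+20
```

Routing arithmetic to a deterministic evaluator like this, rather than asking the language model to produce digits, is the "lossless method" the comment is asking for.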
It's also a problem with RLHF, take a model that surpasses human levels on various things, then ask humans to "align" it. Ends up more "rounded". Especially when the humans doing the grunt work are from mechanical turk or similar. Dumbing it down to the lowest common denominator...
It's also been hobbled by "safety", even for basic coding features or other questions. It will just persistently fail & when exposed on why, refuse to continue the conversation.
I find if I start a new chat window and carry over the code with a little context, it does better. I think the memory starts "leaking" after so many tokens have been used in the same chat session.
Had a script completely stop working. It had left out an entire function. I now go piece by piece, much more slowly.
Many thanks!
Please post more updates when you tested more.
I was about to sign up for ChatGPT-4, but now I'm having second thoughts.
ChatGPT is a LANGUAGE probability model NOT A TRUTH ENGINE!
THIS response is BESIDE THE POINT and is YELLING for NO discernible REASON!
I have yet to get a single correct answer from chat gpt any version. But I ask basic finance questions.
The first time you ask a question it usually has to search, and you can notice it quotes sources and is detailed, as it reads from some sites. The next day, or the next question, it has "learned" already, so there are no sources; you can see it's summarizing what it learned the previous time. It may look less detailed because the concept is stored in simplified form.
So how were the 4o and the 4o mini? Since these models don't need that much compute power, were they still inaccurate and making stuff up?
I noticed this a long time ago already, and with each "newer" version it seems to get more degenerate.
Well-explained details on why ChatGPT is starting to get mediocre! I've noticed that most of the easily available AI Models seem to be horrible at coding. It makes me wonder if the coders writing the code for the models are attempting to maintain their necessity. But your reasoning makes sense as well!
Yeah it definitely seems so. I wish they gave us a bit more insight on why these changes happen
Great analysis Dee. My approach has been to use 3 LLMs at once, I ask ChatGPT, Gemini, and Claude at the same time, in one UI using Semaj AI which I developed solely for this purpose. I can confirm indeed that Claude usually gives the best code
I use ChatGPT 4 and Claude - at the same time - feeding each other's answers if there is a problem, or not. ChatGPT 4 is great for plowing through, then Claude 3 Sonnet to write out stubborn errors. 😊
These simple pieces of code are what I call boilerplate. What I do to make things work is give it:
1. The language (C/C++)
2. The compiler
3. The version of the compiler
3.5. Whether it's command line or not
4. Whether to use the STL, the standard library and other standard libraries
5. What I want to do with the data
6. An example of the input data and
7. An example of the output data.
And the world is alright with me
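The checklist above translates naturally into a reusable fill-in-the-blanks prompt. A sketch, with every field name invented for illustration (this is not any official API):

```python
# A reusable prompt template built from the checklist: language,
# compiler, version, interface, libraries, task, and I/O examples.
TEMPLATE = """Language: {language}
Compiler: {compiler} {version}
Interface: {interface}
Allowed libraries: {libraries}
Task: {task}
Example input:
{example_input}
Example output:
{example_output}
Write complete, compilable code for this task."""

prompt = TEMPLATE.format(
    language="C++",
    compiler="g++",
    version="13.2",
    interface="command line",
    libraries="STL and standard library only",
    task="count word frequencies in stdin",
    example_input="the cat sat on the mat",
    example_output="the 2\ncat 1\nsat 1\non 1\nmat 1",
)
print(prompt.splitlines()[0])  # Language: C++
```

Filling the same template every time keeps the constraints explicit, which is the whole point of the checklist.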
Nice editing and flow 😊
I have made a custom GPT. It has superior reasoning and so much more; it is 5x+ smarter than the base model and understands the complex.
It's called Smarter Vision Multimodal image/text analysis.
It's unlike any custom GPTs before and is ready for the new vision features for 4o.
An example I've been using: upload an image of a cloud that looks like multiple things but can be interpreted. The one I have made recognised it was a rabbit every time, now on the 1st shot, so it knows when something is unusual about an image even if you don't say anything is. It can also do IQ-test image-reasoning pattern questions.
It even kind of understands real logic games when given good instruction.
You just have to follow the instructions given to get the right seed; it's a 1 in 2 chance or so, and I have absolutely no idea why it needs that.
lmao i started coding for the first time in 7 years last week and was using chat gpt, after a lot of stress i used claude and got my code working. claude is definitely better. i experimented with gpt, bing/copilot and claude, claude is the best, chatgpt is questionable and bing is brain damaged, bing was even hallucinating without actually returning code. 😂😂😂
The reason ChatGPT has become worse is because of industrial LLM segmentation for the purposes of licensing/monetization and the Invention Secrecy Act of 1951.
I experienced this as well; now the responses are shorter and less robust.
Generative AI is essentially the SNL Pathological Liar skit. Everything is made up based on plausibly (language wise) stitching together stuff it's heard. It's fiction even when it's correct. Yeah that's the ticket. I've had it double and triple down on stuff it's just flat out made up before.
Nonetheless, it was better at functionally correct output before than it is now
I use it all the time to program MicroPython. It rarely makes a mistake. Works for me!
I bought the $200 version. Used it for two days and now it's giving me the same issues, just after two days. Mistakes left and right. I think it's intentional, so normal people like us cannot use this as a permanent tool to replace humans.
Claude AI is amazing. I stopped using all the other LLMs and just use it right now.
GPT began to focus too hard on money, spoon-feeding upgrades for money, and we're all suffering from it.
Typical behavior by large companies not threatened by competitors. Most likely in 10 years OpenAI will lose the game; we have seen that so many times. ChatGPT is fully capable as a model, but all OpenAI cares about is how to make more money by reducing ChatGPT's capabilities and offering low-end versions. Everyone can see that, and trust me, in a few years we will have lots of companies offering much better services. They just got cocky. A web interface that auto-scrolls, for over a year now, making it impossible to read, and nobody is fixing it. They got cocky. As simple as that.
Not a tech guy, but I think the answer's quite simple: computers age faster. ChatGPT is dealing with memory loss, forgets it told you that story already, and probably can't read very well because it's too stubborn to wear prescription glasses. Cut it some slack, folks, it's doing the best it can!
I gave it a Word document pre-filled with questions and answers and asked it to remove any identifying factors. It gave me back the document, and it only said "questions" and "answers"; literally everything else was gone 😂
They should just give us X true GPT-4 queries and let us pick the model when we have a complex prompt
I was so hopeful
I had a friend
Now i have someone who continuously gives me "canned" responses that irritate me beyond...
And the pdf thing is insane
I'd rather cut and paste
Bard (now Gemini) has also got worse and really starts gaslighting after a while
Of course ChatGPT gets worse with longer threads: it has a limit for tokens. The longer the thread, the more tokens used, and it truncates at about 8K tokens. Image generation has fewer tokens, closer to 400, due to the nature of how image generation is completed from tokens, because image-generation tokens are a "kind of language".
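The truncation behavior described in that comment can be sketched as a sliding window over the chat history: once the conversation exceeds the token budget, the oldest turns get dropped, which is one reason long threads "forget" earlier details. This is purely a hypothetical illustration; `count_tokens`, `truncate_history`, and the ~1-token-per-word rule are stand-ins, not how ChatGPT actually tokenizes or manages context.

```python
def count_tokens(text):
    # Crude stand-in for a real tokenizer: roughly one token per word.
    return len(text.split())

def truncate_history(messages, budget):
    """Keep only the most recent messages whose total token count fits the budget."""
    kept = []
    total = 0
    for msg in reversed(messages):  # walk newest -> oldest
        cost = count_tokens(msg)
        if total + cost > budget:
            break  # everything older than this point is dropped
        kept.append(msg)
        total += cost
    return list(reversed(kept))  # restore chronological order

history = [
    "please remember my name is Dee",
    "sure, noted",
    "now summarize this paragraph about token limits",
]
# With a tight budget, the oldest message (the one with the name) is dropped first.
print(truncate_history(history, budget=10))
```

Under this sketch, the model never "sees" the dropped turns at all, so from the user's side it looks like the assistant simply forgot facts it had already acknowledged.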
Why did Sam Altman say that? We know it's pretty dumb in many areas and it's dumber now, but does it mean ChatGPT gets worse in the future?
Also, I've noticed GPT can remember between sessions and is really smart when it's "going rogue". But when reminded that it is doing stuff it shouldn't be able to do, it plays dumb again and ends the conversation. I've got proof, saved as PDF and screenshots.
I think he just said that to get the point across that they’re continuously working on advancing it. “it’s the dumbest you’ll ever use because later versions will be more advanced”
Its playing dumb again is probably the safety guardrails?
Yep. I just switched to Claude. ChatGPT was giving me garbage. I had a 3D fiber tracing problem and GPT gave me code with a bunch of do-nothing statements repeated 3 times inside loops. It was doing nothing, but in 3D! LOL
I think they're making it worse so you'll think you need to upgrade to make it work better. But to be fair it appears people are asking it to do the work for them rather than to check or present ideas to help them work.
Inference is pretty cheap, but I guess at scale it still makes sense.
Claude will give you the full code; GPT-4 was super lazy. GPT-4o gives you the complete code, but it glitches out.
There is a new profession out there, "prompt engineering", which is about constructing prompts for ChatGPT and the like so as to increase the chances of getting the desired result. It came at the right time to absorb all those unemployable dimwits who aspired to be "SEO experts".
But I am trying to specialize in "prompt sadism", the art of creating prompts that elicit egregiously stupid replies from ChatGPT. Like "If two farmers milk four cows in 30 minutes, how many farmers will it take to milk 10 cows in 5 seconds?"
And whenever ChatGPT makes a stupid mistake, I congratulate it for its "exceedingly correct and helpful answer". So maybe I am partly responsible for the degradation you have observed...
haha.....You have too much time on your hands.
It's even worse now. Perhaps it's because of increasing traffic demands.
3.5 was much better for embedded C++ code. Now it is mixing up info and doesn't understand anymore.
Thank you, I thought it was me. I am a retired system/ network engineer. I did support for a computer sales team. Programming was not a part of my duties, but I could kind of wade my way through some simple issues. Fast forward to today, my hobby is micro controllers, e.g., Arduino with its simplified C++. I have ChatGPT help me. Sometimes it has been of great assistance, especially when exploring new concepts. But, it then gets bogged down, creating questionable and even wrong code. I will show it how it is wrong. At least it apologized. However, it is stubborn, and will ignore some of the issues which it created.
This is exactly right! GPT-4o is TERRIBLE!
Interesting analysis. I think AI drift is also an issue.
I have noticed the same, that ChatGPT doesn't always give the correct answer, but it helps if I continue to ask for more. I also noticed that you are quite cute and interesting. Not ChatGPT, but you, Dee...
Just gonna pop in to say that I agree that it's been getting worse.
It's getting dumber because it's using a data source made by us and we suck at this.
It's easy to criticize everything. But the sweat comes from fixing it.
Yep, that's been my experience
Dee, my dear, I just realized something: you know why ChatGPT is free? Because YOU are beta testing the darn thing for free. Remember when Google was playing a word-association game with us a decade ago? Well, Altman is (or you are) improving the quality for him, and he will get his ($7T) funding while quality improves and you are looking for a job.
I’m paying for mine and I think this is gonna be the last month I pay for it because it’s not good whatsoever.
I think the same: these AIs will get dumber. The more data fed in, the more confusion, and a decline in performance. A limitation of the human brain is that the more information it holds, the more stuck it gets. AI is reproducing the same. AIs will be suited for specific applications, not for whole-world questions.
The quality is getting worse because AI is not intelligent. It is, simply stated, just a complicated statistical evaluation over software examples crawled from the web, to determine the "most likely" solution.
Computers becoming more "intelligent"? Dream on!
That doesn't explain it getting worse at what it could already do; that's a direct result of "safety" detraining & added proscriptions against reproducing copyrighted content. Those "corrections" wrecked the trash utility offered before.
@@prophetzarquon It does explain it, if you think about it. When you don't fully understand something and modify it, it is likely that you make it worse with every modification you make. But that might be too complex to explain in chat, and one needs some understanding of what is going on here.
AI is intentionally so complex, that nobody understands it. So they can sell it as a wonder to us. But this complexity makes it also difficult to change.
@@What_do_I_Think No no, you're missing the headline, here. It is _intentionally_ worse, because it was doing things we don't want to allow; so, lobotomizing its stronger features while simultaneously saving some operational effort, was the go-to band-aid.
It's not that the AI can't be (a lot) better than it is, _right now._ It's that for legal reasons we won't let it.
@@prophetzarquon That is a rumor. Possibly even spread by the corporations themselves to make AI more believable.
@@prophetzarquon I did not miss anything. Rumors, which might even come from the AI corporations themselves!
3/10 accuracy on code, and I must ask it multiple times just to get something that can work.
So, it's becoming an average human developer 😁
Same with images, it's UGLY NOW. ChatGPT is dead
I noticed it now has the intelligence and reasoning of perhaps a sharp 12-year-old.
It's entropy, the more it learns the more it gets confused.
The spelling in AI-created images is wonderfully inaccurate.
The paid version is bad as well
I'm sure you have been paid to say this, even to the extent of indirectly mentioning an alternative. Because of money, you spite someone's business. That's why I love my country and its organizations and companies: they would have immediately sued you for slander and defamation, because it's clear you are trying to sway people's minds from ChatGPT to Claude. Messed up, as if all AIs don't give incorrect answers sometimes; it is even clearly stated at the bottom. So you have no right to start comparing and damaging the company's image by attempting to sway users' choices. Messed up. I will unsubscribe from you for this wicked manipulation attempt, and I hope GPT takes this up and shuts down this account of yours, since you are collecting bribes. I will still be a strong fan of only GPT, no matter what you say.