Two years ago, I thought this technology was going to replace developers in the long run, but I just don't see that ever happening. I gave it a shot, and it's not as impressive when you look deeper. It fails at basic tasks that junior devs with six months of experience would ace. It couldn't even write me a working third-person controller in Unity, even though there are thousands of working examples on GitHub. As someone with a PhD in AI (specialised in ontology, graduated over a decade ago), I can already see the limitations of the model, and I think it's about to reach its peak in terms of possibilities. There is only so much data you can feed into these machines to make them impressive.
What are your thoughts NOW? Curious to hear your input...
@@doublesushi5990 Still kind of the same opinion. AI can spit out good JavaScript and React code and other basic web dev stuff; it recognises those patterns very well. I've tried using it for complex audio and graphics applications and it's clueless. In fact, even writing basic Unity shaders with it is a mistake. It seems to be good at a very specific type of programming. Any kind of programming that requires interesting applications of math and science (like game dev, audio dev, graphics, AR, etc.), and it falls apart.
I have notifications turned on for your channel, but I never see your videos in my feed. I've seen all the other ChatGPT videos that I really wasn't interested in, and I almost missed out on yours.
I watch your videos because you are brutally honest compared to these other YouTubers.
Once again I agree, Chris. It does not matter how good this thing ever gets; if you DO NOT understand what it is doing, it's like a child with a gun.
Yeah, but how many people are needed to understand what it's doing? Fewer than without it.
@@martinlutherkingjr.5582 I'm sure this is a fair point, but I suspect we will still need more SWEs working with natural language models than we fear.
Even assuming a perfect language model that codes perfectly, natural language is very imprecise. This is why we invented formal mathematics and programming languages in the first place. It's hard to think about math in words; it's a hell of a lot easier to use formal notation (the example below makes the gap concrete).
I think the real question is: which is faster, writing five paragraphs of prompt that pin down the desired outcome, or just coding it directly? For some reason, my gut says the latter is faster on average.
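A classic illustration of that gap, in LaTeX: the verbal claim "f(x) gets arbitrarily close to L as x approaches a" hides exactly the quantifier structure that the notation makes explicit.

```latex
\[
  \lim_{x \to a} f(x) = L
  \quad\iff\quad
  \forall \varepsilon > 0 \;\; \exists \delta > 0 :\quad
  0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon
\]
```

A prose prompt has to convey that exact quantifier order without ambiguity; the notation pins it down in one line.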
@@hamm8934
Indeed, and for exactly that reason we also have proof-oriented programming languages and proof assistants (Coq, Idris, Agda, Isabelle, HOL, Dafny, Lean, F*, etc.); some of them can even produce code in a variety of programming languages that is proved correct (Dafny is one). Currently I am experimenting with writing all of my math proofs in these systems using symbols alone, with no plain English at all; the English is just too repetitive in proofs and distracts from the essence of the proof. Such systems also cast all those abstract theories in a new light, because we can turn them into running, verified program code. That completely changes how we view mathematical notions: from a purely abstract, theoretical perspective to a practical, concrete one, with actual objects we can run and test on our computers.
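For a taste of what "all symbols, no prose" looks like, here is a minimal sketch in plain Lean 4 (core library only, no Mathlib; the theorem name is just illustrative):

```lean
-- 0 + n = n, proved by induction with no English beyond these comments.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl                          -- 0 + 0 = 0 holds by definition
  | succ k ih => rw [Nat.add_succ, ih]   -- push succ out, then apply the hypothesis
```

Lean checks every step mechanically, which is exactly the "verified and runnable" quality the comment describes.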
@@martinlutherkingjr.5582
It is maybe good for small snippets and for learning a simple technique from them (how to render into an SFML texture, for example; see the sketch below). Anything more and it will just produce nonsense.
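For reference, the technique mentioned above is only a handful of calls. A minimal sketch against the SFML 2.x C++ API (the 256x256 size and "out.png" filename are just illustrative):

```cpp
#include <SFML/Graphics.hpp>

int main() {
    // Create an off-screen render target instead of drawing to a window.
    sf::RenderTexture target;
    if (!target.create(256, 256))
        return 1;

    sf::CircleShape circle(100.f);
    circle.setFillColor(sf::Color::Green);

    target.clear(sf::Color::Black); // start from a known background
    target.draw(circle);            // draw exactly as you would to a window
    target.display();               // finalize the texture's contents

    // The result is an ordinary texture: wrap it in a sprite, or save it.
    sf::Sprite sprite(target.getTexture());
    target.getTexture().copyToImage().saveToFile("out.png");
    return 0;
}
```

This is the kind of self-contained snippet the models tend to get right; the complaint above is about everything beyond that scale.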
Very useful, thank you for going through this so we don't have to. On the other hand, there's lots of room for it to be incrementally improved, so I think we definitely have to embrace it.
I think it's like the old mechanic story:
the mechanic hits the car once with a hammer and it's fixed, and you pay him for his knowledge.
Here we also just have a tool, but you still need to know how to use it.
It's very nice for small pieces of code, I'll give it that. But the bigger the project, the more problems it has.
Yeah, and we mostly don't need the simple problems solved in tech. My overall assessment from this example is that it was pretty unhelpful; it's only slightly better than a Google search, and it became more and more confused as the complexity grew. People act like a trillion parameters are the key, but it only seems to learn from the humans who came before it. It's great at reading and writing, but it doesn't seem to be generally intelligent. If there is a clear question and answer, it does well, similar to Google. If there isn't, which is half the battle of coding, it doesn't.
Yeah, chatbots won't touch our jobs; actual AI might. But we can use chatbots to help us.
What do you mean by actual AI? Chatbots are the most successful implementation of AI.
@@Mat-S86 Those are just transformer models; by actual AI I mean AGI.
People are impressed with its ability to spit out code chunks and basic CRUD code, but anything complicated and it fails. To get any help from it you have to know what to ask; keep debugging and asking and eventually you may be able to get some work done. It's a decent aid to a programmer, not to a layman.
Even when it gave me what I asked for, it was still all slop code. Plus, that layout function had to be rewritten four times. I see this as a tool to make developers more efficient, but straight copy-and-paste would result in unmaintainable code. This example choked before we even got into anything remotely difficult.
While using it to scan for typos, I forgot the parentheses when instantiating a class and ChatGPT had no idea. It said "Ah, I see it now" when I pointed it out. 😂
I've been using it to generate bash scripts. While it does point me in a good general direction, the code is almost always broken, and I have to ask ten times before I get valid code. And these are fairly simple scripts.
one of my fav based Sr devs 👏
You should've asked it to make all future commands work with an unmodified Windows command prompt, to avoid having to create the files manually.
Yeah, but then I'd be dealing with its bad suggestion of putting every type of file in one folder. This thing was telling me to write sloppy code, and it didn't work much of the time.
@@realchrishawkes True, fair enough; better to play it safe.
Could you do a video on Copilot X when it's out?
This is just where it is now. What about in 5 years? 10?
Same thing lol
@@realchrishawkes Nah man, AI in general will take a lot of SWE jobs.
GPT-4 itself probably won't be able to, but something like LangChain seems to get somewhere near.
Great video, Chris!
It triggers me that you don't use the "Copy code" button in ChatGPT; worse, even ignoring that, you right-click the text and hit "Copy" instead of CTRL+C!
Agree, though. I stopped using v3.5 because there comes a point when it's just quicker to do it yourself.
Lol, me too. At one point I said I should use the copy button. In that case, right-clicking was just as easy since my hand was already on the mouse. I mostly use keyboard shortcuts in real life, though.
It also triggers me that he doesn't use dark mode, even though I appreciate him demonstrating this.
It's not that GPT-4 is limited or bad; it's that we are bad at prompting :)
Some YouTubers showed that GPT-4 can't solve even EASY algorithm problems. Using correct prompting, I've managed to solve 10/10 HARD problems with 100% accuracy, and the code was great.
ChatGPT will not replace us as developers, but proper usage of GPT-4 + embeddings + fine-tuning will make us obsolete, or switch our job from developer to administrator/prompter :D
I just don't want to use these AI tools. I'm fed up with the hype, and people should do their own research instead of blindly trusting code an AI has stitched together from copied snippets.
I've played with ChatGPT quite a bit, as well as AI art. I find them both to be typical tech hype at the moment, similar to crypto a few years ago. Low-code/no-code will make these tools more effective, but it won't turn non-programmers into coders. Those who can code and be creative will just be better with the tools.
You will spend a huge amount of time and effort debugging GPT code.
It's helped me with direct questions lately.
I am so scared of this thing taking jobs.
I feel like you ask bad questions. I've had it output a bunch of cool code.
We're gonna get AGI the moment this comment is posted.
devs are done ;)
Lol. I'll work that AGI and take over the world with it. 🌎 🗺