Another thing about these prompting bros is that they're admitting you have to handhold the LLM into doing something useful. However, in their minds, LLMs are apparently SO adept at creating intelligent solutions that even non-tech people can use them to craft entire projects. The irony is thick. I always wonder how something that supposedly enables "everyone to become a developer" is supposed to do that while also needing detailed instructions from actual developers.
This reasoning also feels like a bit of a trap and is the main reason I am kind of dreading the prevalence of AI. Getting good at prompting usually means I am spending time away from code, which is the fun part for me. I have found I often need to hand hold LLM-generated code and very little of it is “set it and forget it” so I often treat it as a junior engineer who I am micromanaging more than a tool. I understand part of that is a skill gap and I’ll likely get better at prompting and working with the LLM in time but it just isn’t as fun as diving into a problem myself. I also understand that someone who isn’t as much of a purist as me will likely be more productive so I’m trying to work through that so I don’t get overrun but it definitely sucks some of the joy out of the process for me
He’s missing the core problem (from a developer’s view) with AI. AI isn’t intended to replace software developers outright, but rather to lower the total number needed. The real issue is that AI doesn’t just offer a slight boost in efficiency; it provides a substantial leap. If AI can increase a programmer’s productivity by 100%, then the company can hire 50% fewer developers. Do you see the problem here?
This assumes that feature demand will not rise to meet the rise in productivity. And clients always want more and new features. Otherwise web development would have been automated DECADES ago.
Maybe I'm being too optimistic, but they said something similar about accountants with the rise of computers. Precisely because computers made the accounting process cheaper and need less people to do the same job, more companies were able to benefit for that and now we have more accountants than before.
IDK about overly optimistic. A cognitive bias, maybe? Productive capacity increased significantly during that time: if 1 unit of input produced 1 unit of output before, it produced 2 with computers. Quantitatively, evaluating whether the change in demand for accountants vs. GDP deviated from historic figures would be telling. But the role of accountants changed toward complex conceptual tasks instead of writing numbers in ledgers, which increased the benefit per accountant; businesses want more benefits, so more accountants. And I just had a thought: maybe that also enabled deceptive accounting practices like tax shelters, which have lowered corporate tax rates from 35% nominal to 7% effective. If true, that wasn't great.
I 100% agree. Things can unravel real quick after what we would think is a 'simple prompt'. That's why it's probably really important to use a Git repo and branches before making changes. However, that won't help when you bugger up the database; then you gotta roll up your sleeves and fix it manually.
Do you guys not think that the AI companies will simply start having actual engineers train their models with production level code and complex debugging cases? We're nowhere near the final architecture of AI programmers. Like another commenter said a lot of these issues are caused by the memory limits of LLMs and the simple fix is to create new sessions for each requested change.
You got me subscribed at "Pit of Death" followed by "Plateau of Death". All of those tools literally always got me there, and finding myself trying to fix code that I have not written myself takes extra cognitive load.
Haha, I know exactly what you're describing. I would end up with a couple hundred lines of code that I didn't write that ALMOST work, GPT is in a rut and cannot get out, there is only a day left till the deadline, and now I have to start reading the code line by line and trying to make sense of it. All of this because at the beginning I thought 'this will be easy enough for GPT to write, and it will save me a lot of time'.
I think this applies more to people who are just pasting in the whole codebase/whole file and asking for new features. There are many "developers" nowadays on social media that are basically "coding" by just asking the AI for a "calendar app" etc. Obviously real developers do not code like that.
I agree so much with the first minutes (only). I use LLMs to get things off the ground, but after a bit, I have to take a step back, ditch the model for a while, and refactor everything into a proper architecture. From that point forward, the LLM can still help with individual bits and pieces, but it has lost track of the thing as a whole. There's no way LLMs can build architecture at all. They build disconnected pieces, not DRY and not expandable, and they overengineer everywhere at the same time, building entire factories when a single line of code would suffice. So yeah. Prototype, then refactor. I don't see your tool helping though, to be honest. It has the very same flaws. It's a non sequitur here.
100% nailed it! So many people are falling for the "Look it built a whole game of snake in 5 seconds!". Many people don't look beyond that and assume that "coding is dead". The ones who try to take it a step further typically either know how to code and let the AI multiply their skills, or don't know how to code and end up in the pit or plateau of death.
Absolutely amazing video! I agree that the fear of being replaced by AI disappears when you start using AI coding assistants in real-world projects. I've had even the best AI IDEs break my code or fail to build something I wanted.
03:45 But there is a big difference between PDF and CSS: a PDF needs to target only one fixed page size, but the web needs responsiveness across all sizes and aspect ratios of screens.
I tried Devin for a month. I knew exactly what I wanted it to do, but by the time I broke the instructions down, it didn't save much time anymore. And at one point it just added code to the Web service "if the request has no authorization, give it admin powers." I mean, it made the test pass... The more concrete tools that use more of my context, like Augment, actually hit the sweet spot right now.
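The failure mode described above ("no authorization, give it admin powers") is worth spelling out, since it is easy to miss in generated code. Here is a hypothetical Python sketch of that anti-pattern next to the fail-closed alternative; this is my own illustration, not the actual code Devin produced:

```python
def effective_role_bad(headers):
    """Anti-pattern: missing credentials are treated as trusted."""
    if "Authorization" not in headers:
        return "admin"      # makes the test pass, but it's a backdoor
    return "user"           # real code would verify the token here

def effective_role_good(headers):
    """Fail closed: missing credentials get the least privilege."""
    if "Authorization" not in headers:
        return "anonymous"
    return "user"           # again, token verification elided
```

A test suite that only checks the happy path passes both versions, which is exactly why generated code still needs human review.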
Insights By "YouSum Live"
00:00:00 AI won't replace developers anytime soon
00:00:06 Development skills are increasingly valuable
00:00:28 Non-developers struggle with complex applications
00:01:30 Human intervention is essential in development
00:02:10 AI tools are useful for prototyping
00:03:25 Iterate before committing to full builds
00:04:42 AI will change, but programming won't disappear
00:05:13 AI can automate tedious development tasks
00:06:43 AI can optimize content autonomously
00:07:01 Future may allow real-time personalized experiences
00:07:40 AI tools will enhance team workflows
If anything, it's great for beginners to help get unstuck and help explain code. For example, in my learning journey, I cannot build a frontend React page to connect to my backend Python API yet. I build simple enough tools and share the API documentation with Cursor, and it is able to generate a usable interface for my simple backend tools. This made my learning journey so much more rewarding and enjoyable.
The further back you push the "pit of death", the easier it is to gaslight project managers into believing it doesn't exist at all. Drawing out progress doesn't necessarily require improving the model. Slow down iteration time by making the UI more tedious to use under the guise of "workflow improvement" ( hire inexperienced feature-creep loving frontend devs ). Throttle text generation output speed under the guise of a calculated tradeoff which improves quality. Your proprietary backend is a black box: clients can't find out. Hopefully, you can design the tool such that software development progresses for the duration of a full fiscal quarter before reaching the pit of death. Then, you can start scamming short-sighted executives directly, which is where the real money is at.
I will say this. Using a combination of a boilerplate and learning some code, Cursor has made it INFINITELY easier to pick up a boilerplate and create MVPs. The feedback loop is so much shorter compared to a few years ago that even if you have to figure things out, it’s not nearly as difficult as it was when we were all going to Stack Overflow to research some error that we had never seen before in our life!
AI integrated in the IDE is better code completion. I usually use it for that, or when I am too lazy to read docs, so it explains them to me. For commonly used frameworks etc. it works well.
The new agent mode in Cursor is huge. I've been developing a large, complex app, and there are issues, but I work through them and am making progress. I think the main issues are the context understanding for the app, the context windows, and the overall ability of the model being used. In a year it's going to be pretty magical IMO. People with some good ideas and product design will have amazing tools.
I've been pumping out Cursor AI tutorials on my channel. Totally agree it's a nice sweet spot. I can work in languages I'm not used to purely because I know the basics of infrastructure and high-level programming. I don't write any code; I just guide the AI to know what context is relevant. I'll be honest, I was always a hack developer as opposed to a pure one. My objective was working products in the shortest time possible. If they got traction, then go back and refactor for the long run. Most projects fail, so why bother agonising over typing and unit tests in the first few weeks? It kills enthusiasm and nothing gets shipped.
@@RobShocks your comment actually perfectly encapsulates the issues I’ve had with AI agents and LLMs. It treats code not as a craft, but as a means to an end. LLM-generated code can get a working demo up pretty quickly, but once I start trying to flesh things out and add complexity, it becomes less and less reliable and saves me less and less time. I’m curious to know what you would say is the most complex project you’ve built with one?
about the "reasoning aka chain of thought" of the fresh OpenAI model, what i draw from their "we do exclude couple of the first answers to give you the best one" is probably something that can be explained as "Markov Chain Monte Carlo" when you do the "burn in" phase to achieve and stationary distribution after some time, that we think of as reflecting best possible answer... I know i might mix up some terms, but it awfully resembles this technique from markov chains. We could treat a model as a hidden "extended markov" chain (not drawing conclusion and prediction from only last state, but the entire ensemble of previous states that were pre-ordered logically) in that case, where emissions are words being predicted by the model each step
I like to generate complicated advanced types for TypeScript. The LLM does a fantastic job on a well-defined problem. If the user does not know what to do, good luck.
Especially as AI goes through the hypecycle and is constantly changing, there are always new theories about what impact it will have on roles and tasks. I think the video is a very succinct and good summary. AI gives us incredible opportunity to create faster, more and better customized code, but is far from reading our minds and turning it into a perfect product in zero time.
> Use GPT-4o to write skeleton code
> try to do multithreading
> it immediately generates deadlocked code
Pretty sure devs still need to be well aware of what the LLM actually generated, for a while yet.
@@priyanshunishad7402 Assume we have two threads, and they both need resources 1 and 2 to proceed. These two resources can only be accessed by one thread at a time. If T1 holds R1, and T2 holds R2, and neither lets go, it is a deadlock. No thread can proceed.
I've been using ChatGPT as a pair programming buddy and yeah, sometimes it will just throw me into a pit of bugs, and the more I ask for help, the more issues come up. It becomes a snowball, and sometimes the answer to the initial bug that caused this whole nightmare was 100 times simpler than the suggestion it gave in the first place. I've become more cautious when using it.
There's a reason why I subbed your channel! Thank you for being honest!
I still wonder, given that an experienced developer can leverage existing code they have and ship a fully functioning MVP in 2 to 3 weeks with lots of customization, whether it is truly useful to create MVPs with no-code/low-code apps, with a huge probability of ending up with something that does not work well or is not customized to address the problem we are trying to solve.
AI essentially just increases writing speed: translating what's in our head into code. But it cannot replace our head. No prompt, no code. Quite a simple concept tbh, and once people realise this broadly (not just devs), the AI hype train will end. It is a very helpful tool that still has a lot of room to improve, but it's not some magic dev-replacing invention that many people believe.
Actually, this is getting better. Also, I switched models, got back on track, and asked the AI to set me up with a GitHub repository so I can roll back to where it's working. I'm learning and able to refit and implement more on my own. Yeah, it's costing me API money, but it's worth it because hiring someone is more expensive. I'm creating more iterative changes and I'm pleased. Very pleased. I went from fear of React to practical Next.js app dev (no snake games and note apps; an actual DB app that I intend to let thousands of potential users use).
Last night I was debugging why my Terraform code didn't work on apply. Yes, just Terraform, a pseudo programming language. It took a whole night to find out what the problem was, and I'd say the current AI capability is the Simple Jack equivalent of intelligence.
This is a good video. I do not think I will use Builder though, as I am indie hacking really specific use cases. This product does look interesting for many boilerplate apps and company marketing teams, though; they could take more control over web development rather than having to send every change to their outsourced firm. Agreed 100 percent with the pitfalls… yesterday I had issues with o1 implementing WebSockets and Socket.IO into my React app. I had to troubleshoot this by doing iterative testing step by step and viewing server logs. I figured it out this morning. 🎉 AI helped me entirely along the way, but prompt engineering is required, and successful prompt engineering requires the prompter (is that even a word) to have acquired the fundamentals to make the prompt request.
It's best to keep your questions low-level and contained to a single component; anything more than that and it starts making shit up and breaking the code. The worst part is when it tries to rewrite your code for no reason at all, removing crucial functionality. Man, do I hate that... I always have to remind it to stop ripping out existing code.
Security in a web app is really hard. Can the AI review the code against the OWASP Top 10 recommendations? Well, you can tell it to do that, but can you trust it?
I mostly use it to get simple code samples, because it is just annoying how much BS JavaScript's built-in functions are. Way better than googling it or looking into the documentation.
So, not technically a programming thing, but I was once trying to use a complex function in Excel that I had the AI modify. I spent hours having it change, rewrite, and go over the problem. At the end of it all, the AI had forgotten to include a single space, which caused every iteration of the function to fail.
I use the AI to generate small modular code blocks, like building me a circular buffer. Things that will take me a little bit of time and a lot of debugging are great. Don't ever let AI try to bulk write most of the application.
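As an illustration of the kind of small, self-contained block that works well, here is a minimal circular buffer in Python. This is my own sketch of the data structure, not AI output:

```python
class RingBuffer:
    """Fixed-size circular buffer: new writes overwrite the oldest entry."""

    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0      # index of the next write
        self.size = 0      # number of valid entries, capped at capacity

    def push(self, item):
        self.buf[self.head] = item
        self.head = (self.head + 1) % self.capacity
        self.size = min(self.size + 1, self.capacity)

    def items(self):
        """Contents from oldest to newest."""
        start = (self.head - self.size) % self.capacity
        return [self.buf[(start + i) % self.capacity] for i in range(self.size)]
```

For example, pushing 1 through 5 into a capacity-3 buffer leaves `items()` as `[3, 4, 5]`: exactly the sort of behavior that is quick to specify, easy to verify, and tedious to hand-debug.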
We could work in an iterative way pre-AI. Just AI makes it faster to prototype - faster than you can type. Wait until companies have to have "prompt guides" or "prompt designs" and we'll be back to the same old slow process driven stuff again.
7:15 Latency/cost isn't the biggest problem with "AI" personalization. Dynamically changing flows whole cloth creates new categories of anti-patterns and user annoyance. It's bad enough when A/B testing hides/moves/renames buttons; now do it randomly to the site's navigation hierarchy. Think about telling your grandma where to add a card to a toy while the website literally shifts under her feet. Or your favorite websites every time they do a UI "refresh"... now doing it monthly/weekly/daily/per-session. There's a point where "personalization" breaks down the human connection of shared experience, all for metrics that became targets, causing a net loss that is not visible.
100%. Looking forward to your solution. Mine is to generate a single source of truth carried over throughout the app building process, across tools. Coming soon too :)
I hope it's not all copium 😂 (I don't think the AI will replace devs anytime soon either). For me the most annoying thing is that you have to "guess" using imprecise language (english) instead of using precise language - a programming language.
Had exactly that problem in my internship. You actually need to understand your code before pasting it in. That's what I learned there: be efficient, but understand what you do.
@@elimcfly350 Yes it is, but for many rookies and many coders, joking like that makes a false impression. Especially in a stressful environment, people get loose with understanding and just getting things done.
Okay, this sounds nice, but could you do the following: take the code that was converted from the Figma design, change the text in the code, and then send it back to the Figma design, so that at the end you have the same Figma design with the other text?
So the tl;dr is that AI is going to make non-coders or low-skilled coders suck at coding if they try to take shortcuts, but it will make experienced coders 10 to 100 times better.
My friend just redid his company's website with the help of AI. Thing is, the man doesn't have a clue what's in the code or how it works, but he could still do it. The man has never written a line of code in his life. You overestimate the value of code in comparison to the understanding of what drives business value; the latter is important, the other not so much anymore.
Anyone can make something new, with or without AI. The pain comes when you need to maintain your app. Just like humans, AI loses context, especially for complex logic.
And as time goes on, you're left with AI. Every senior has moved on or retired, and the AI cannot comprehend your project. I've seen it happen without AI: the project is dead and needs to be rewritten because nobody wants to touch legacy code, and no one is left in the company to continue working on it.
How was it put? Ability without skill is useless. By the time this is figured out, AI will be so expensive to use that the quality of the application it spits out won't matter.
If marketing could learn to use a code editor... I'm not saying learn to code, just learn enough not to screw up the code. Think of how many dumb internal web interfaces we wouldn't have spent time on, leaving time to develop actual customer-facing functionality. All arguments I've ever heard against this concept go: "Aaaaaaaah, I DON'T WANT TO!"
It doesn't need to completely make software developers obsolete; it only has to make software developers productive enough that companies think it is the right decision to lay off some percentage of their engineers to save on the high cost of engineers. Either a company goes that route, or it goes the other route of increasing productivity even more, but the other route does not usually happen. This is why we are seeing so many software developers on the market unable to find a job. Around 20-30% of software developers are not needed now because of AI tools. How much this percentage changes, we do not know yet.
Where do you get the 20-30% number from? Also the engineering market was bad before ChatGPT exploded in 2022, and from what I’ve seen it has actually gotten better, though that is anecdotal. I know a lot of people who were having a hard time getting hired in 2023 but randomly got many offers in the last few months. Even for me, in late 2023 I had an engineering role but was looking for another one and sent out about 100 applications but heard nothing and then in the last few months I got a ton of traction and started hearing a lot more.
It’s the same with AI graphics too. Sure, it can do basic stuff, but beyond that it’s not usable. For instance: I want three people holding hands and drinking Pepsi. Great. Now don’t change ANYTHING (the client has approved this!); I just want you to change the color of one person’s shirt. It cannot do this; it will change things around too much. Try it. This is not usable for production if the client wants changes.
Finally, someone hitting people with the truth. Every other day I watch stupid YouTubers make a video on "AI will take your job", and on every video I comment: "If AI can't even do 1% of my job, how will it actually replace me?". And that's the truth: AI can only solve minor issues. The moment we expect it to do something big: bang! It starts hallucinating, no matter which model.
Wishful thinking. AI is enabling fewer, less knowledgeable developers to build larger systems. AI already writes better code than most devs, and I guess specialized coding models will soon develop much better than current models. Developers will still be required, but fewer. I guess a good BA will soon be able to develop useful software with very little help from devs who can write source code.
You’re absolutely right! I was using Cursor to make minor Tailwind adjustments for the UI, but it ended up editing 4 different components and causing a mess. I had to go back and fix everything manually, and it turned out I only needed to change a single line in one of the components
I know. I keep hearing the "A.I. will take dev jobs", but that's completely bs.
A.I can never take dev jobs. A.I can just help and support Devs, that's it.
@@nikoryu-lungma The argument comes from a large pool of "developers" who are not developers but are just skilled at stitching snippets from Stack Overflow.
>> Installing unnecessary packages....
Aider is better
Heads up about "the pit of death": usually it's caused by the LLM reaching its "workable" context length, which is far below the advertised context length of the model. The solution to the problem is simple: once you notice the model is no longer as useful as when it started out, simply start a new chat session. But before you do that, have the agent summarize everything you are doing on the project and the most recent thing you are working on. Then use that summary to seed the new session. You will find that all of a sudden the model becomes useful once again. Rinse and repeat every time. I am sure this feature will be automated in the agentic workflows of tomorrow once the devs understand its importance.
good tip brother, agree once token limitations are lifted
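The summarize-then-restart workflow described above can be sketched in a few lines. Here `chat` is a stand-in for whatever LLM call your tool exposes; it is a placeholder, not a real API:

```python
SUMMARIZE_PROMPT = (
    "Summarize this project: its goals, structure, conventions, "
    "and the most recent thing we are working on."
)

def rollover(chat, history):
    """Compress a stale session into a seed message for a fresh one.

    `chat(messages) -> str` is assumed to send a message list to the
    model and return its reply; swap in your actual client call."""
    summary = chat(history + [{"role": "user", "content": SUMMARIZE_PROMPT}])
    # The new session starts from the summary only, not the long
    # transcript that pushed the model past its workable context.
    return [{"role": "user", "content": "Project context:\n" + summary}]
```

When the model starts degrading, call `rollover` on the current history and begin the next session from its output; rinse and repeat, as the comment says.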
This is spot on. I haven’t tried tools like Cursor yet, I only use ChatGPT directly, and I have observed many times that continuing the code in a new thread makes a problem that was not solvable suddenly solvable. But switching to a new context is a lot of work currently, because you need to provide the AI with the right starting instruction for the context. I think that if someone builds an AI editor that generates a very brief text about your product structure/conventions, a very brief text about what the current goal is (one change at a time), and uses the rest of the context for coding and debugging AND for writing prompts for follow-up tasks, this might actually solve the “pit of death” in a lot of cases. So the output of the AI process shouldn’t be just code. The prompts for follow-up tasks should NOT be added to the current context as long as it is debugging.
It's not context length though; it's a breakdown between the user and the LLM.
I code with Cursor and face this multiple times a day. It doesn't matter if I'm giving it single modules or the entire project; it frequently gets itself into a bit of a mess. By the time you've boxed it into a correct answer, you a) needed to know the info, and b) might as well have written it. Obviously they're great tools, but the premise of this video is correct from what I've seen.
Latest examples: most GSAP; TanStack Table (client vs. server).
This does help, but eventually these little summaries become insufficient with a large codebase, and the agent will ask you to provide all the code again when you press it on why it doesn't know how to solve the problem. Despite this, even working with just 1500 lines of code, I've gone through 45-ish messages in around a dozen chats and gotten nowhere on some problems.
This is exactly what I am doing, and it's working. Also, whatever AI code I use, I am always learning what I am implementing, for the same reason: to avoid that same mistake.
This video is probably the best take I've seen about software engineering's future with AI rather than clickbait stuff like "AI will take your job."
Like many other engineers, I already knew this, and nothing too new is said here. But verbalizing the problem we are facing is a hard skill, and it's not something everyone can do. Thank you for making it easier to explain.
From my experience, AI is trash at generating actually complex code. I personally only use it to brainstorm with myself, meaning that I have a "sparring partner" for a discussion on how to implement stuff or how to build an architecture. So it's more on the abstract level of a discussion, rather than generating actual code.
Yep! Same conclusions. Also, one more use I found is checking how bad an idea is: you describe the idea to the LLM and look at what it sketched out. If the output is way more complex than you expected, and you don't spot any silly mistakes, then the idea is way worse than you estimated.
Guys, you're going to be high-level engineers. AI is very useful to start with.
I used AI to read assembler in order to crack some software. They're good at it hahaha
I've found the best way to address the pit of death is to containerize your project: structure your code so that the language model doesn't need full context of the project, only the needed input/output signature. Obviously we already do this, and it won't work for every case, but I mean specifically structuring your stuff so that you don't need to understand the project architecture to work on one of these containers
This is *the* hard bit, and proving the video’s point
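The "container" idea above can be illustrated with a minimal sketch (the module, names, and contract here are made up for illustration): if the function's signature and docstring carry the whole contract, an LLM (or a new teammate) can implement or modify it without ever seeing the rest of the project.

```python
# Hypothetical self-contained "container" module. The signature and
# docstring ARE the contract; nothing else in the project is needed
# to work on this function.
from dataclasses import dataclass

@dataclass(frozen=True)
class RateQuote:
    currency: str
    rate: float

def convert(amount: float, quote: RateQuote) -> float:
    """Convert `amount` into the quote currency.

    Contract: amount >= 0 and quote.rate > 0, else ValueError.
    Returns the converted amount rounded to 2 decimal places.
    """
    if amount < 0 or quote.rate <= 0:
        raise ValueError("invalid amount or rate")
    return round(amount * quote.rate, 2)
```

The narrower the contract, the less project context the model has to hold, which is exactly what pushes the pit of death further away.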
I see ai as a great assistant with endless patience and inspiration, but you still need to give direction, make the right choices and add ‘taste’ to the project to make it complete. That still needs skill, experience and intelligence.
It doesn’t matter how you see it
it can replace you
face reality or suffer
you’re delusional and in denial
@@rjackstheartofwealth6152 I asked it to make a terminal emulator. It started completely broken and I have spent a month trying to prompt it to fix it. It still doesn't work a month later. Needless to say, an actual programmer who knows how to make a terminal emulator would be done by now.
Shouldn't your key takeaway be that, had you spent that time learning it yourself, you would've been done by now?
People often retort "You just aren't prompting it right", which we can expand a bit to mean "You aren't telling the LLM exactly what it needs to do". I would actually agree with this assertion because if I knew exactly how to fix all my bugs I could task the LLM with generating the code to match my perfect description. Of course, these AI futurists mean it in the sense that with the right prompting loop any problem can be solved, and therefore one day we will move away from using code entirely. On the contrary, as you highlighted, upon encountering a novel case that requires REAL reasoning, suddenly the "artificial" in AI hits you like a ton of bricks and you realize the intervention of true intelligence is necessary.
Prompt engineering also is just coding.
@@christianibendorf9086 Lol that realization that you're writing code and copying into an llm and telling it "do this"
@@christianibendorf9086 ROFL, you're insane, or delusional, or a massive troll.
Another thing about these prompting bros is that they're admitting you have to handhold the LLM into doing something useful. However, in their minds, LLMs are apparently SO adept at creating intelligent solutions that even non-tech people can use them to craft entire projects. The irony is thick.
I always wonder how something that supposedly enables "everyone to become a developer" is supposed to do that while also needing detailed instructions from actual developers.
This reasoning also feels like a bit of a trap, and it's the main reason I'm kind of dreading the prevalence of AI. Getting good at prompting usually means I'm spending time away from code, which is the fun part for me. I've found I often need to hand-hold LLM-generated code, and very little of it is "set it and forget it", so I often treat it as a junior engineer I'm micromanaging more than a tool. I understand part of that is a skill gap and I'll likely get better at prompting and working with the LLM in time, but it just isn't as fun as diving into a problem myself. I also understand that someone who isn't as much of a purist as me will likely be more productive, so I'm trying to work through that so I don't get overrun, but it definitely sucks some of the joy out of the process for me
That was surprisingly high quality (given how much hype this topic gets), thank you
He's missing the core problem (from a developer's view) with AI. AI isn't intended to replace software developers outright, but rather to lower the total number needed. The real issue is that AI doesn't just offer a slight boost in efficiency; it provides a substantial leap. If AI can increase a programmer's productivity by 100%, then the company can hire 50% fewer developers. Do you see the problem here?
This assumes that feature demand will not rise to meet the rise in productivity.
And clients always want more and new features.
Otherwise web development would have been automated DECADES ago.
Maybe I'm being too optimistic, but they said something similar about accountants with the rise of computers. Precisely because computers made the accounting process cheaper and need fewer people to do the same job, more companies were able to benefit from it, and now we have more accountants than before.
IDK about overly optimistic. A cognitive bias, maybe? Productive capacity increased significantly during that time => if 1 unit of input produced 1 out before, it produced 2 out with computers. Quantitatively, evaluating whether the change in demand for accts then vs GDP deviated from historic figures would be telling, but the role of accts changed -> focus on complex conceptual tasks vs writing numbers in ledgers = increased benefit per accountant -> businesses want more benefits = more accts . . . and I just had a thought. Maybe => deceptive acct practices like tax shelters that have lowered corporate tax rates from 35% nominal to 7% effective. If true, that wasn't great.
All I know is I still have tons of people asking me everyday to build stuff and there ain't anyone else they can find to do it.
Looking at all my 40 coworkers, I can guarantee you that AI has not increased their productivity anywhere near to 100%. Maybe 5%.
I 100% agree. Things can unravel real quick after what we would think is a 'simple prompt'. That's why it's probably really important to use a Git Repo before changes and branches. However, that won't help when you bugger up the database, then you gotta roll up your sleeve and fix it manually.
Do you guys not think that the AI companies will simply start having actual engineers train their models with production level code and complex debugging cases? We're nowhere near the final architecture of AI programmers. Like another commenter said a lot of these issues are caused by the memory limits of LLMs and the simple fix is to create new sessions for each requested change.
As a non-coder who has been using Llama to code, you are absolutely correct. It's been exciting and frustrating. I'm almost ready to give up.
You got me subscribed at "Pit of Death" followed by "Plateau of death".
All of those tools literally always got me there, and finding myself trying to fix code that I haven't written myself takes extra cognitive load.
haha I know exactly what you're describing, I would end up with couple of hundreds lines of code that I didn't write that ALMOST work, GPT is in a rut and it cannot get out, there is only a day left till the deadline, and now I have to start reading the code line by line and trying to make sense of it. All of this because at the beginning I thought 'this will be easy enough for GPT to write it and it will save me a lot of time'
I think this applies more to people who are just pasting in the whole codebase/whole file and asking for new features.
There are many "developers" nowadays on social media that are basically "coding" by just asking the AI for a "calendar app" etc.
Obviously real developers do not code like that.
As a beginner I love this kind of information. We need more people like you in the spotlight rather than those fear-mongering AI hype tech bros
Finally someone explained it very precisely. Thank you.
I agree so much with the first minutes (only). I use LLMs to get things off the ground, but after a bit, I have to take a step back, ditch the model for a bit, and refactor everything into a proper architecture
From that point forward, the LLM can still help with individual bits and pieces, but it has lost track of the thing as a whole. There's no way LLMs can build architecture at all. They build disconnected pieces, not DRY and not expandable. And overengineered everywhere at the same time, building entire factories when a single line of code would suffice.
So yeah. Prototype, then refactor.
I don't see your tool helping though, to be honest. It has the very same flaws. It's a non sequitur here.
100% nailed it! So many people are falling for the "Look it built a whole game of snake in 5 seconds!". Many people don't look beyond that and assume that "coding is dead". The ones who try to take it a step further typically either know how to code and let the AI multiply their skills, or don't know how to code and end up in the pit or plateau of death.
Absolutely amazing video! I agree that the fear of being replaced by AI disappears when you start using AI coding assistants in real-world projects.
I've had even the best AI IDEs break my code or fail to build something I wanted.
Anyone who honestly thinks AI can replace devs has never actually worked on a professional codebase.
03:45 But there is a big difference between PDF and CSS: PDF needs to target only one fixed page size, while the web needs responsiveness across all screen sizes and aspect ratios.
CSS can also involve complex animations and sets of behaviours along with JS.
I tried Devin for a month. I knew exactly what I wanted it to do, but by the time I broke the instructions down, it didn't save much time anymore.
And at one point it just added code to the Web service "if the request has no authorization, give it admin powers."
I mean, it made the test pass...
The more concrete tools that use more of my context, like Augment, actually hit the sweet spot right now.
Insights By "YouSum Live"
00:00:00 AI won't replace developers anytime soon
00:00:06 Development skills are increasingly valuable
00:00:28 Non-developers struggle with complex applications
00:01:30 Human intervention is essential in development
00:02:10 AI tools are useful for prototyping
00:03:25 Iterate before committing to full builds
00:04:42 AI will change, but programming won't disappear
00:05:13 AI can automate tedious development tasks
00:06:43 AI can optimize content autonomously
00:07:01 Future may allow real-time personalized experiences
00:07:40 AI tools will enhance team workflows
If anything, it's great for beginners to help get unstuck and to help explain code. For example, in my learning journey, I cannot yet build a frontend React page to connect to my backend Python API. I build simple enough tools and share the API documentation with Cursor, and it is able to generate a usable interface for my simple backend tools. This made my learning journey so much more rewarding and enjoyable.
The further back you push the "pit of death", the easier it is to gaslight project managers into believing it doesn't exist at all.
Drawing out progress doesn't necessarily require improving the model.
Slow down iteration time by making the UI more tedious to use under the guise of "workflow improvement" ( hire inexperienced feature-creep loving frontend devs ).
Throttle text generation output speed under the guise of a calculated tradeoff which improves quality. Your proprietary backend is a black box: clients can't find out.
Hopefully, you can design the tool such that software development progresses for the duration of a full fiscal quarter before reaching the pit of death.
Then, you can start scamming short-sighted executives directly, which is where the real money is at.
I will say this. Using a combination of a boilerplate and learning some code, Cursor has made it INFINITELY easier to pick up a boilerplate and create MVPs. The feedback loop is so much shorter compared to a few years ago that even if you have to figure things out, it's not nearly as difficult as it was when we were all going to Stack Overflow to research some error that we had never seen before in our life!
AI integrated into the IDE is just better code completion.
I usually use it for that or when I am too lazy to read docs, so it explains it to me. For commonly used Frameworks etc. it works well
The new agent mode in Cursor is huge. I've been developing a large complex app, and there are issues, but I work through them and am making progress. I think the main issues are the context understanding for the app, the context windows, and the overall ability of the model being used. In a year it's going to be pretty magical IMO. People with good ideas and product design will have amazing tools.
"There are issues" is exactly what he's talking about though. If you can't code you're unlikely to be able to work your way through them
I've been pumping out Cursor AI tutorials on my channel. Totally agree it's a nice sweet spot. I can work in languages I'm not used to purely because I know the basics of infrastructure and high-level programming. I don't write any code; I just guide the AI to know what context is relevant. I'll be honest, I was always a hack developer as opposed to a pure one. My objective was working products in the shortest time possible. If they got traction, then go back and refactor for the long run. Most projects fail, so why bother agonising over typing and unit tests in the first few weeks? It kills enthusiasm and nothing gets shipped.
Maybe it's my misunderstanding of how to use agent, but I find that normal works better. Agent often deletes whole swaths of code.
@@RobShocks your comment actually perfectly encapsulates the issues I've had with AI agents and LLMs. It treats code not as a craft, but as a means to an end. LLM-generated code can get a working demo up pretty quickly, but once I start trying to flesh things out and add complexity it becomes less and less reliable and saves me less and less time. I'm curious to know what you would say is the most complex project you've built with one?
About the "reasoning aka chain of thought" of the fresh OpenAI model: what I draw from their "we exclude the first couple of answers to give you the best one" is probably something that can be explained as Markov Chain Monte Carlo, where you do a "burn-in" phase to reach a stationary distribution after some time, which we think of as reflecting the best possible answer... I know I might be mixing up some terms, but it awfully resembles this technique from Markov chains.
We could treat a model as a hidden "extended Markov" chain in that case (drawing conclusions and predictions not from only the last state, but from the entire ensemble of previous states, pre-ordered logically), where the emissions are the words predicted by the model at each step
I like to generate complicated advanced TypeScript types. The LLM does a fantastic job on a well-defined problem. If the user doesn't know what to do, good luck.
So what should I go to school for?
Great point and presentation.
I just ask it to summarize the conversation and then start a new chat with the summary and code to reset the context issues
Ooohhh. I like that. Smart
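The summarize-then-restart workflow described above can be sketched in a few lines. This is only an illustration: `ask_llm` is a hypothetical stand-in for whatever chat API or tool you use, passed in so the sketch stays self-contained.

```python
# Sketch of "summarize the chat, then start fresh with the summary".
# `ask_llm` is a hypothetical callable: it takes a message list and
# returns the model's reply as a string.
def restart_with_summary(history: list[dict], ask_llm) -> list[dict]:
    """Compress a long chat into the opening context of a new session."""
    summary = ask_llm(history + [{
        "role": "user",
        "content": ("Summarize the project structure, our conventions, "
                    "and the one task we are currently working on."),
    }])
    # The new session starts with only the summary, not the full history,
    # so the model is back inside its "workable" context length.
    return [{"role": "system", "content": f"Project summary:\n{summary}"}]
```

Rinse and repeat every time the model starts degrading, as the comment above suggests.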
You know it - I also get a lot of value out of LLMs; they are a great tool.
Especially as AI goes through the hypecycle and is constantly changing, there are always new theories about what impact it will have on roles and tasks. I think the video is a very succinct and good summary. AI gives us incredible opportunity to create faster, more and better customized code, but is far from reading our minds and turning it into a perfect product in zero time.
> Use GPT-4o to write skeleton code
> trying to do multi threading
> immediately generates deadlocking code
Pretty sure devs still need to be well aware of what LLM actually generated for a while.
What does deadlock mean?
@@priyanshunishad7402 Assume we have two threads, and they both need resources 1 and 2 to proceed. These two resources can only be accessed by one thread at a time. If T1 holds R1, and T2 holds R2, and neither lets go, it is a deadlock. No thread can proceed.
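The scenario described above can be sketched in Python. The buggy pattern is left in comments (running it could hang forever); the code that actually runs uses the classic fix of acquiring locks in one global order.

```python
# Minimal sketch of the deadlock described above, and the classic fix.
import threading

r1, r2 = threading.Lock(), threading.Lock()

# BUGGY pattern (not run here): T1 takes r1 then r2, while T2 takes r2
# then r1. If each grabs its first lock before the other finishes,
# both wait forever -- that's the deadlock.

# Fix: every thread acquires locks in the same global order (r1, then r2),
# so the circular wait can never form.
def worker(results: list, tag: str) -> None:
    with r1:
        with r2:
            results.append(tag)

results: list = []
t1 = threading.Thread(target=worker, args=(results, "T1"))
t2 = threading.Thread(target=worker, args=(results, "T2"))
t1.start(); t2.start()
t1.join(); t2.join()
```

This lock-ordering discipline is exactly the kind of invariant an LLM tends to miss when it "immediately generates" threaded code.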
I've been using ChatGPT as a pair programming buddy, and yeah, sometimes it will just throw me into a pit of bugs. The more I ask for help, the more issues come up, and it becomes a snowball. Sometimes the answer to the initial bug that caused this whole nightmare was 100 times simpler than the suggestion it gave in the first place. I've become more cautious when using it
There's a reason why I subbed your channel! Thank you for being honest!
I still wonder, given that an experienced developer can leverage existing code they have and ship a fully functioning MVP in 2 to 3 weeks with lots of customization, whether it is truly useful to create MVPs with no-code/low-code apps, with a huge probability of getting something that doesn't work well or isn't customized to the problem we are trying to solve
AI essentially just increases writing speed: translating what's in our head into code. But it cannot replace our head. No prompt, no code. Quite a simple concept tbh, and once people realise this broadly (not just devs), the AI hype train will end. It is a very helpful tool that still has a lot of room to improve, but it's not some magic dev-replacing invention that many people believe it is.
Finally someone actually said the struggle we face with LLMs, however advanced they are.
Good content and even better discussions in the comments
Actually, this is getting better. Also, when I switched models I got back on track and asked the AI to set me up with a GitHub repository so I can roll back to where it's working. I'm learning and able to refit and implement more on my own. Yeah, it's costing me API money, but it's worth it because hiring someone is more expensive. I'm creating more iterative changes and I'm pleased. Very pleased. I went from fear of React to practical Next.js app dev (no snake games and note apps; an actual DB app that I intend to let thousands of potential users use)
Last night I was debugging why my Terraform code didn't work on apply. Yes, just Terraform, a pseudo programming language. It took a whole night to find out what the problem was, and I'd say the current AI capability is the Simple Jack equivalent of intelligence.
This is a good video. I do not think I will use builder though as I am indie hacking really specific use cases. I do think this product looks interesting though as for many boiler plate apps and company marketing department teams, they could take more control over the web development rather than having to send it to their outsourced firm for changes.
Agreed 100 percent with the pitfalls… yesterday I had issues with o1 implementing my web socket and socket IO into my react app. I had to troubleshoot this by doing iterative testing step-by-step and viewing server logs. I figured it out this morning. 🎉 AI helped me entirely along the way, but prompt engineering is required and successful prompt engineering requires the prompter (is that even a word) to have acquired the fundamentals to make the prompt request.
it's best to keep your questions low level and contained to a single component, anything more than that and it starts making shit up and breaking the code. The worst part is when it tries to rewrite your code for no reason at all, removing parts of crucial functionality. Man do I hate that... always have to remind it to stop ripping out existing code
The pit of death is a blessing. As a software engineer I use this as a strength to get a giant edge.
I try to get as much work done as possible by myself, and when I'm tired but really need to complete a feature, I'll sub in Copilot
Security in a web app is really hard. Can the AI review the code against the OWASP Top 10 recommendations? Well, you can tell it to do that, but can you trust it?
I mostly use it to get simple code samples, because JavaScript's built-in functions are just annoyingly full of BS. Way better than googling it or looking into the documentation.
Not technically a programming thing, but I was once trying to use a complex function in Excel that I had the AI modify. I spent hours having it change, rewrite, and go over the problem. At the end of it all, the AI had forgotten to include a single space, which caused every iteration of the function to fail.
I use the AI to generate small modular code blocks, like building me a circular buffer. Things that will take me a little bit of time and a lot of debugging are great. Don't ever let AI try to bulk write most of the application.
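A circular buffer is a good example of the kind of small, self-contained block the comment above suggests delegating and then reviewing. Here is a minimal Python sketch of one (names and behavior are illustrative, not any particular library's API):

```python
# A small fixed-size circular (ring) buffer: appends overwrite the
# oldest element once capacity is reached.
class RingBuffer:
    def __init__(self, capacity: int):
        self._buf = [None] * capacity
        self._capacity = capacity
        self._start = 0   # index of the oldest item
        self._size = 0

    def push(self, item) -> None:
        """Append an item, overwriting the oldest one when full."""
        end = (self._start + self._size) % self._capacity
        self._buf[end] = item
        if self._size < self._capacity:
            self._size += 1
        else:
            # Buffer was full: the oldest slot was just overwritten.
            self._start = (self._start + 1) % self._capacity

    def to_list(self) -> list:
        """Contents from oldest to newest."""
        return [self._buf[(self._start + i) % self._capacity]
                for i in range(self._size)]
```

In real Python code, `collections.deque(maxlen=n)` already gives you this behavior; the point is that a bounded, easily testable unit like this is the sweet spot for LLM generation, as opposed to bulk-writing the whole application.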
What about o3 now?
awesome video man!!!
4:30 Now starts the sales pitch
"Just buy my product and everything will be OK." Almost wasted time watching this whole video.
In conclusion, we're going to need fewer but more highly skilled developers in the medium term.
Bro started talking about why devs are required then was like also btw we’re trying to make AI that does stuff entirely without devs.
You are absolutely right!
What about Devin, that seems to work loads better than most other tools
2:05 This is exactly what we have been doing; you cannot be fully dependent on Bolt or v0
Very true. I always have that moment where I have to stop being lazy implement a feature the way I want it then the AI continues 😂
I view LLMs for programmers like Photoshop for photographers.
We could work in an iterative way pre-AI. Just AI makes it faster to prototype - faster than you can type.
Wait until companies have to have "prompt guides" or "prompt designs" and we'll be back to the same old slow process driven stuff again.
7:15 latency/cost isn't the biggest problem with "AI" personalization.
Dynamically changing flows whole cloth is creating new categories of anti-patterns and user annoyance.
It's bad enough when A/B testing hides/moves/renames buttons; now do it randomly to the site's navigation hierarchy.
Think about telling your grandma where to add a card to a toy but the website literally shifts under her feet.
Or your favorite websites every time they do a UI "refresh".. now doing it monthly/daily/weekly/per-session.
There's a point where "personalization" breaks down human connection of shared experience all for metrics that became targets causing a net loss that is not visible.
100%. Looking forward to your solution. Mine is to generate a single source of truth carried over throughout the app building process, across tools. Coming soon too :)
100% agree. Real devs know this and not casuals
But I want to write code, not just debug it
I hope it's not all copium 😂 (I don't think the AI will replace devs anytime soon either). For me the most annoying thing is that you have to "guess" using imprecise language (english) instead of using precise language - a programming language.
Had exactly that problem in my internship.
You actually need to understand your code before pasting it in.
That's what I learned there: be efficient, but understand what you do.
"Understand the code you use" being some sort of a realization is hilarious to me. The security world is weeping.
@@elimcfly350 Yes it is, but for many rookies and coders, joking like that makes a false impression.
Especially in a stressful environment, people get loose with understanding and just getting things done.
Cooperative AI is critical, not a replacement for us.
How the heck are you personalizing a website without identity?
AI will never replace devs.
LLMs will never. We don't know if AI will ever be created.
Says a dev. Do you still really think that at least most devs arent going to be replaced? Have you been pressing F5 lately?
@@wolverin0 You are falling for marketing. It's amazing technology, and I love using Copilot but it will not replace us.
@jeremytenjo again, says a dev. Probably doctors are thinking the same.
@@wolverin0 🙄okay buddy.
Okay, this sounds nice, but could you do the following: take the code that was converted from the Figma design, change the text in the code, and then send it back to the Figma design, so that at the end you have the same Figma design with the other text?
So the tl;dr is that AI is going to make non-coders or low-skilled coders suck at coding if they try to take shortcuts, but it will make experienced coders 10 to 100 times better
My friend just redid his company's website with the help of AI. Thing is, the man doesn't have a clue what's in the code or how it works, but he could still do it. The man has never written a line of code in his life.
You overestimate the value of code compared to understanding what drives business value. The latter is important; the former, not so much anymore.
Anyone can make something new with or without AI. The pain comes when you need to maintain your app. Just like humans, AI loses context, especially for complex logic
anyone could have made a website using wix for the last decade brother
AI is a tool that helps speed up development. You still need software dev skills. 100%
AI is not replacing developers, but junior developers :)
And as time goes on, you're left with AI. Every senior has moved on or retired, and AI cannot comprehend your project. I've seen it happen without AI: the project is dead and needs to be rewritten because nobody wants to touch legacy code, and no one is left in the company to continue working on it.
Who’s here after o3
Couldn't agree more!
Bolt for me is just a crazy steroid it speeds up the initial setup so much.
why is he wearing ear muffs ? did he record this outside?
For now. With quantum computing, it's a matter of time before we see our development profession become obsolete.
I wouldn't mind at that point, coding for money is one of the least human activities on the planet
Absolutely agreed
Hell yeah, developers will never get replaced by AI,take that dumb agi
How was it put? Ability without skill is useless. By the time this is figured out AI will be so expensive to use that it won't matter the quality of the application it spits out
The AI will find that a ten year unlimited free trial will convert super well!
If marketing could learn to use a code editor (I'm not saying learn to code, just learn enough not to screw up the code), think of how many dumb internal web interfaces we wouldn't spend time on, leaving time to develop actual customer-facing functionality.
All arguments I've ever heard against this concept go: "Aaaaaaaah, I DON'T WANT IT!"
It doesn't need to make software developers completely obsolete; it only has to make them productive enough that companies think it is the right decision to lay off some percentage of their engineers to save on the high cost of engineers. Either a company goes that route, or it goes the other route of increasing productivity even further, but that other route rarely happens. This is why we are seeing so many software developers on the market unable to find a job. Around 20-30% of software developers are not needed now because of AI tools. How much this percentage changes, we do not know yet.
Where do you get the 20-30% number from? Also the engineering market was bad before ChatGPT exploded in 2022, and from what I’ve seen it has actually gotten better, though that is anecdotal. I know a lot of people who were having a hard time getting hired in 2023 but randomly got many offers in the last few months. Even for me, in late 2023 I had an engineering role but was looking for another one and sent out about 100 applications but heard nothing and then in the last few months I got a ton of traction and started hearing a lot more.
It's the same stuff with AI graphics too. Sure, it can do basic stuff, but beyond that it's not usable. For instance: I want three people holding hands and drinking Pepsi. Great. Now don't change ANYTHING (the client has approved this!), I just want you to change the color of one person's shirt. It cannot do this; it will change things around too much. Try it. This is not usable for production if the client wants changes.
Until we have agi
When are we getting agi?
Bro, as a dev I'm semi-concerned. Grok supercomputers, and advancements in computation from companies like Lightmatter. Likely we've got 5-10 years
When will the visual editor be available to all?
Yeah. Current ai should be used as assistive rather than autonomous.
Finally someone hitting people in the face with the truth. Every other day I keep watching stupid YouTubers make a video on "AI will take your job", and on every video I comment: "If AI can't even do 1% of my job, how will it actually replace me?" And that's the truth: AI can only solve minor issues. The moment we expect it to do something big, bang! It starts hallucinating, no matter which model.
Wishful thinking. AI is enabling fewer, less knowledgeable developers to develop larger systems. AI already writes better code than most devs. I guess specialized coding models will soon develop much better code than current models. Developers will still be required, just fewer of them.
I guess a good BA will soon be able to develop useful software with very little help from devs who can write source code.
remake that video with the latest O3 model
Facts
100%. you gotta read what it outputs and if you dont know the concept, go read docs!!
💯