Brother... All three (Windsurf, Cursor, and Copilot) should have you on their payroll as a consultant to help them improve their models. Thank you for doing what you do, keep up the good work! Subscribed, liked everything, getting all notifications, and sharing with all my AI dev friends. ;)
Another pro-tip: if you’re using git as part of your development workflow, try to periodically run a git diff in your console, copy and paste it to the editor chat, and ask it to review the changes. Many times the AI will spot very obvious issues and end up giving you a more robust code implementation
Absolutely! This is a great way to get a quick code review out of the LLMs. Another useful feature: you can mention @diff inside of Cursor to get the diff straight away without having to do that manual copy-pasting.
SPOT ON, the timing of this video is just right, with the code editors becoming very similar and very different at the same time. Fantastic video and thanks a ton. Please keep these types of videos coming, they are a great help.
If you don’t stop to check the AI constantly, you’ll end up stuck in loops for days. That rules file you mentioned is a game changer, especially once the app becomes more complex. As a non-coder, I like to put it in chat mode and brainstorm with it for 10 minutes at a time. By the end, it has a much clearer view of what it needs to do and does it in a few tries.
That is what I do as well; however, if I close the editor down and reopen it after a few days, even if I ask it to familiarise itself with the code and any 'non-volatile' memory it has, I have seen (especially with Windsurf) that you are rolling the dice and are never certain it can actually pick up exactly where you left off. Quite annoying. That being said, I think Windsurf is by far the best of the three.
@ I’m having to make massive changes because I didn’t plan the overall structure of my app before I started. I’m now making it create a structure md file with all the new changes. That way I can tell it to check it before we start each new session. Making an app while not understanding a single line of code is a new skill entirely. Exciting times. Good luck!
@@Cnc1073 yeah I do that too, I am trying to perfect the rules file now with consistent problems I see occurring, let's see :)
I think current AI editors still need to improve quite a lot to work well with non-coders in production scenarios, not just for creating prototypes. That said, I think we'll be making very fast progress in the coming year.
@ Oh, I’ve also worked this out: if you get it to create tests for every aspect of the app, the output of those tests gives it a much better understanding without the user having to understand what is actually going on. I guess this is common practice for coders. 🤷🏻♂️
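As a rough illustration of the kind of per-feature test the AI can be asked to generate, here is a minimal sketch assuming a Vitest setup; the formatPrice helper is hypothetical:

```ts
// Minimal sketch, assuming a Vitest setup; formatPrice is a hypothetical helper
// used only to show the kind of small, focused test worth asking the AI for.
import { describe, expect, it } from "vitest";
import { formatPrice } from "../src/utils/formatPrice";

describe("formatPrice", () => {
  it("formats cent amounts as dollars", () => {
    expect(formatPrice(1000)).toBe("$10.00"); // input is in cents
  });

  it("throws on negative amounts", () => {
    expect(() => formatPrice(-1)).toThrow();
  });
});
```

When a change breaks one of these, the failing test output pasted back into the chat tells the AI exactly which behaviour it regressed.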
I have been asking the AI to document changes in a change log, problems in a suggested-improvements file, and to keep a rollback file in addition to the project readme. I guess it makes sense to ask the AI to summarise the project and its architecture, appending to it on a daily basis. This way the context of where we started, where we are now, and where we are going is preserved. Take that summary, put it into another AI outside of the IDE, and have it turned into a proper, token-efficient prompt in case I have to start over, making sure it includes everything necessary.
Very smart. Thank you for sharing. If you are inclined to share anything else or more details it will be appreciated.
I use o1 as an architect for Windsurf, and after a feature implementation I get a model to summarise o1's instructions and Windsurf's responses into a single feature-implementation context that I use within an iteration-debugger prompt for detailed change context. But I can see how your approach might achieve the same result with less effort.
That is a really good workflow. The only limitation for generating the project summary is the limited context window. Perhaps we could leverage the changelog as the way to summarize, so that the model doesn't have to ingest every single file inside the project to produce a summary, but instead uses incremental changes to maintain one.
I appreciate you creating this fantastic video that demonstrates how significant prompts are in getting good work out of AI code assistants.
Thanks. I think this is true for pretty much any AI-related generation, be it images, code, or text. People frequently forget the importance of good prompts in getting great output.
The right video at the right time. Thanks, Buddy. Would love to see some examples.
Yes, I do need to add more examples and illustrations to the video. Thanks for surviving through me talking for 18 minutes.
Excellent simple explanations that are very actionable. Great video, Yifan
Thanks man. I'll make sure to keep up the content.
Thanks for sharing this amazing knowledge!
Glad it was helpful.
Tip of the day: I only have one line in the Cursor rules, and it says if the code gets too big, refactor it. So it refactors the code as it goes; works very well. This way you can get a nice, big, organised code base :) Also I use the @web feature with a prompt 😊
I've tried similar things, but that only seemed to work well with smaller codebases.
I have been having ChatGPT help me craft a better prompt for Windsurf. I can use poor language and not have a good understanding of what I am trying to tell Windsurf, and ChatGPT can make a much better prompt for those bigger-context prompts.
Yeah, that's a very useful way to do it, as long as you don't mind the extra hop. I wonder if they can build this in as part of the chat so that it iterates on the initial request once before it sends it off to Composer for editing.
Finally a video about the biggest issue with these tools... You get the feeling it can do anything and then your project blows up when you keep acting like that.
The prototyping phase is always easy, and people often overlook that, when in fact most of us devs need to focus on real, productionized projects.
Hi, can you make a full tutorial on creating a large-codebase app using Cursor?
That would be awesome.
That's a really good shout. Perhaps I can look into contributing a feature to an existing open-source codebase using the suggested workflow. That's sizable. What do you think?
@@YifanBTH yeah that would be good
@@YifanBTH Still waiting for the video
Thank you 🙏
That was great information. This was one of the pains I have been going through while developing an extension. Thanks for the valuable information, but it would be better to mention all of the resources you use in your description.
It's really annoying that none of these AI editors include these kinds of guides in their documentation, which I feel is essential for all AI devs.
Thanks for the reminder to add things into this description.
@YifanBTH Yeah, you're right. It's all trial and error, figuring things out on our own. I found Windsurf and Cline better than online IDEs, especially when you are working on an extension.
we need more of this
More is coming.
Hey, thanks so much!
I was getting very frustrated; this gave me my motivation back instead of another long, red-eyed night ;)
I have 2 questions:
1. How long should .cursorrules be at maximum? Should I include the latest React 19 / Next 15 changes in detail?
2. Is using the latest React 19 and Next 15 fine, or do you advise a previous version? (due to the AI's last training date)
Maybe a third one I'm sure is interesting for everybody:
3. When to use @web? Does it fill up the context fast, do you think?
1. Keep it as short as you can. There's no absolute limit, but you should only include things in the file where you find Cursor constantly underperforms (a short sketch follows below).
2. I’d avoid keeping full docs in rules files. Use the @docs feature to utilise the embedding search for docs directly. This is much more efficient than @web.
3. Use it when you find a good solution on Stack Overflow or a good example implementation. For docs, always use @docs.
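For reference, a minimal sketch of what a short rules file could look like, assuming a Next.js/TypeScript project; every line is just an example of the "only add what Cursor keeps getting wrong" idea:

```
# .cursorrules (hypothetical example, keep it short)
- Use the App Router and TypeScript; do not generate pages/ router code.
- Next.js 15: always await params and searchParams before reading them.
- Prefer small, focused components; refactor files that grow too large.
- Never delete existing code without listing the planned removals first.
```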
Great information as always! 🙏🏻
Welcome back.
I would add: before allowing it to act on the prompt, ask it to stop implementing and just give you the steps it intends to take, because 90% of the time it's planning to delete something due to a misunderstanding.
Thanks dude! Amazing info, got my sub
Thanks for the sub!
Great video
Glad I could be of help.
Thank you for the video. I like watching your content.
I see we are using the AI in a similar way. But what I always do wrong is trying to correct it when it makes mistakes, going further and further down the rabbit hole. As you suggested, it’s better to use the checkpoints and rework MY PROMPT, not the output.
Will try this tomorrow.
One of my biggest takeaways after spending this long with AI editors is to know when it's really failing and you need to start over
Are you using Next.js 15? I see that Claude doesn’t have the latest info on 15. How do you manage it? What's your flow with this?
With Cursor you can use the @web feature at the end of a prompt. You also have to be specific in telling the AI what it should look for on the web. For example: “Implement URL props for this page for NextJs 15 @Web”
It should find that you have to await your props. That was the first thing that came to my mind 😅
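For context, Next.js 15 made the params and searchParams props in App Router pages asynchronous, which is why the await is needed. A hypothetical route, just to illustrate:

```tsx
// Hypothetical app/posts/[slug]/page.tsx, assuming Next.js 15,
// where the params prop is a Promise and must be awaited.
type PageProps = {
  params: Promise<{ slug: string }>;
};

export default async function PostPage({ params }: PageProps) {
  const { slug } = await params; // Next.js 15: await before reading
  return <h1>Post: {slug}</h1>;
}
```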
You'd better use the framework versions the AI picks by default; that means most of its knowledge is for that version. If not, you'll get way more work prompting it to fix code to work with the newer versions. Most likely, it's easier to finish the app in Next.js 13+ and then upgrade to Next.js 15.
The "@docs" feature is particularly useful here because it enables embedding search for documentation, but you do have to mention it explicitly in your prompt to use it well.
Need a video on Supabase auth and database. There are many things, like the auth provider, the client-side client, and the server-side Supabase client.
Very good shout on how to iterate with databases inside this prompt.
For Next.js, vertical slices architecture is always the best option.
More tips: always start from constants/, types/, hooks/, and the rest before actually implementing the components/.
Try asking o1 to create the Prisma schema; it will not be perfect at the beginning, but you will get great ideas from there.
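As a small illustration of the "constants and types first" idea, a hypothetical posts slice might start from a file like this before any component exists (all names are made up):

```ts
// Hypothetical features/posts/model.ts: constants and types are defined first,
// so the AI has a stable contract to code the components against.
export const POSTS_PAGE_SIZE = 20;

export type Post = {
  id: string;
  title: string;
  body: string;
  publishedAt: Date;
};

export type PostsPage = {
  items: Post[];
  nextCursor: string | null;
};
```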
It's true, if you start with the standard best practices from the frameworks, it's much easier for the LLM to generate good output because they are mostly trained on those kinds of data.
It would be great if this information was in a downloadable format.
What specific parts would you find most useful in the written format?
Why don't you explore further using an MCP server in your project when working in the IDE, rather than relying only on prompts?
That's an interesting thing to try. Definitely need to explore it. Have you tried something similar?
For Next.js, vertical slices architecture is always the best option.
More tips: always start from constants/, types/, hooks/, and the rest before actually implementing the components/.
Try asking o1 to create the Prisma schema; it will not be perfect at the beginning, but you will get great ideas from there.
CMIIW, but I just learned programming not too long ago 😊
Agreed.
yeah .. prompt is king
I think we all look forward to a day where you can just put in one line and AI will do the rest automatically with high accuracy. But if that's the case, what's the need of programmers, eh?
It's funny because it's true. My app is all AI-coded and is complex, and it's having a hard time figuring things out; I have to say where files are and ask why it did X when I already have Y. It gets frustrating for sure.
The requirement for more sophisticated prompts increases with the complexity of the codebase. Also, I find I have to review the AI's code more carefully when working with large codebases, because it's easy to generate things that are functional but miss many best practices.
Dude, do you say please and thank you to the AI? :D
Just use bolt prompt 😅
How have you found it compared to other AI editors?
@YifanBTH I've so far only used it to convert my English slang into a prompt and then apply it to other AI models, which has been helpful. About to try bolt.diy with DeepSeek V3, so I'm excited about that from what I'm hearing.
This guy was rambling for 18 minutes!!
Haha, thanks for surviving through my 18 mins ramble
Great ramble, though! 🙌🏼
# til
I guess that if you don't pay a monthly fee for your AI editor, the AI will get worse if you are not a pro user for a long time.
Yeah, the models are still pretty costly at this point. I can't imagine any of the AI editors being able to provide a good service without some kind of paid subscriptions.