Probably at this point a new name for this project would be good. Something like "bolter" or anything that will separate it from Bolt in some way other than just being "a fork".
Thank you very much! And I appreciate your advice! I'm certainly careful with who I bring into the team of maintainers, and we are setting really good standards for testing and reviewing everything before merging.
I'm wondering if a multi-phase approach might be more effective for smaller models. That way you could have a phase 1 to analyse the problem, then a phase to write the code for each file, then a phase to format the response.
Yes you are definitely right here! One of the reasons I want to implement agents and have that on the list is specifically because it'll work better for the smaller LLMs with a solution like you described!
@ColeMedin I'm not familiar with the term agents. I guess you mean running multiple instances, each with their own specialism? Wouldn't that also be resource intensive? I suppose you don't need to run them all at the same time, which I suppose is very similar if not the same as what I'm suggesting. I find that even when people work on tasks, breaking things down yields better results.
@ColeMedin UPDATE: I had a chat with ChatGPT about agents and I understand the concept now. It's pretty much what I was describing, where the model can iteratively solve a problem through smaller steps.
Yes exactly! And yeah it will take more tokens and more time, but generally that will be worth it for getting better results because it saves time/tokens going back and forth with the LLM to fix errors/hallucinations.
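The multi-phase flow discussed in this thread can be sketched in a few lines of Python. This is purely illustrative: `call_llm` is a stand-in for whatever chat-completion client you use (Ollama, OpenAI, etc.), the file names are hypothetical, and in practice the file list would come from parsing the phase-1 response. None of this is Bolt's actual code.

```python
def call_llm(prompt: str) -> str:
    """Placeholder: swap in a real client (e.g. a request to an Ollama server)."""
    return f"RESPONSE[{prompt[:40]}]"

def multi_phase_build(task: str) -> dict:
    # Phase 1: analysis -- ask only for a file plan, not code.
    plan = call_llm(f"List the files needed for: {task}")
    files = ["index.html", "app.js"]  # in practice, parse `plan` to get these
    # Phase 2: one focused generation pass per file keeps each prompt small.
    sources = {name: call_llm(f"Write {name} for: {task}") for name in files}
    # Phase 3: a separate formatting/validation pass per file.
    return {name: call_llm(f"Format and fix: {code}") for name, code in sources.items()}

result = multi_phase_build("a todo app")
```

The trade-off is exactly what the thread describes: more calls and more tokens overall, but each prompt is small enough for a local model to handle reliably.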
This is great. How do I run it? I do not understand the Readme. Is there a detailed tutorial? Llama is installed, Visual Studio Code too. How do I start it?
Vercel or Netlify is OK, but isn't Vercel quite expensive compared to Cloudflare? Getting something built and deployed with a few clicks at a reasonable cost would be good. Also, what about longer "sessions"? Meaning: when prompted, Bolt writes code, but it also stops to "check" your response, right? Sometimes we can say continue, and it does so. But if the task it had was, for example, to code something that took 3 "checks" and it really could code all 3 in one go before checking in, that would be super as well. Maybe have an option to choose whether those check-ins are wanted or not. In theory it COULD code a lot more before stopping, imho. Love the continued build ❤ Thanks so much
Love your thoughts here, thank you! Vercel is more expensive than Cloudflare but a lot of people find it easier to use. Also a great free tier. I just listed Netlify and Vercel as examples but the feature itself is more general to just be able to publish the built apps somewhere!
Loaded question but I appreciate you asking! Basically, the key question is: Do you care more about adoption and usage (MIT) or ensuring the code stays open (AGPL)? I prefer adoption and usage more and don't mind if people spin off forks of what we are building here to commercialize it!
Amazing stuff! As a non-coder I'd love to have the possibility to use Ollama models to write me apps and then run them and self-correct until it's working. Right now I ask the OpenAI o1 models to write the scripts for me and then I have to manually copy-paste them into VS Code and run. Then I copy error messages manually back into the o1 window, back and forth. I already have the OpenAI subscription so I don't want to pay for the API as well. That's why I'm thinking of Ollama. What would be the best way to automate this process locally with Code Llama 7B or similar small models? (If at all possible.)
Thank you! Right now having the LLM self correct in this setup is not possible, but that is a feature we are looking to implement because I agree it would be incredible for use cases like yours!
How does Bolt often do better at creating solutions than some of the others like v0 or Cursor, given that they all use Claude 3.5? And hopefully this is in the open source code?
Yes this is completely open source! And Bolt.new does better specifically because of the prompting and how it interacts with the "web container" (the code environment) that is unique to Bolt.new!
Sent you that application, my guy. Hopefully I can help you with some of this higher-level repo maintenance. I have several 1-hour blocks that are empty in my day to day; I could test and merge or deny PRs and other stuff of that nature.
And I am looking at the loading and syncing of full projects. It's just a big task that I can tell is going to need more focus than I can give it right now. Someone will likely beat me to it lol... but maybe I'll merge their PR ;)
One of the only AI projects I found useful beyond normal chatbots. I truly believe this project will go a long way; it has the potential to become a necessity for developers and designers, like ChatGPT became. Though something like this would be expensive to provide for free, even if it's only limited tokens.
Thank you! Could you clarify what you are asking/saying here? If you're saying that would be a cool feature to have agents just work by themselves to code up an entire project, I sure agree!
I'm looking for a Bolt tutorial, especially on running a production website like a SaaS type: how to structure it from GitHub, staging areas and stuff... I would be willing to pay for a course on that.
@ColeMedin Many of the things you named that would be nice to have could perhaps be opened up to extension developers, if you manage to get an extension framework set up. That would ease the load on the core team, which would otherwise be reviewing many nice-to-haves and not doing much else due to time constraints. Or you lose the people that were contributing because no one is taking the time to look at their work. Putting a framework in between would help the core team enable others to contribute, without having to review all the changes.
I really love this, thank you for sharing your thoughts! This would be amazing but might be pretty tough to set up unless you have an idea for how this could be done pretty easily!
@ColeMedin Without having a look first, I would start talking out of my behind now... I haven't gotten around to spinning it up yet. Will it run on a 3070 Ti laptop?
Yeah I believe I've seen this happen before, and so have others once in a while. I would love to look into this and get a fix! Seems like a bug with the open source version of Bolt.new, not related to any of the changes for this fork.
When I ask it to create a project a little more complex than the basic templates, it does not launch it, even if I try to launch it through the console myself. This is a very important detail to fix; without this there is ZERO sense in it. Please fix it ❤️
This fork is amazing, although the actual project itself is suffering. The UI it generates is terrible, the server doesn't start by itself, it doesn't solve errors on its own, and it consumes lots of tokens running to fix errors again and again. If someone has any suggestions for a platform that can generate better UIs I would be grateful (even if it's a static app, something like v0).
Have you been able to upload images to the database with Bolt? I'm also having trouble creating new fields or collections with Bolt. Lots of permission denied errors all the time. Got a response that they're aware of the problem... but it's QUITE the problem. Pretty important to be able to have those functions.
The open source version of Bolt.new doesn't have the image upload functionality and I use open source so I haven't tried... I assume you are using the paid version with that functionality? I agree that's a big deal!
Yeah can we PLEASE get rid of the rewriting of files. Bolt is great and it can give a great first output, but every time I want to add or fix something, something else gets removed or commented out.
@ColeMedin Is it also possible to increase response lengths so we can get codebases that are closer to 20k lines? And add an orchestration aspect, or at least an architect mode like Aider has?
Main reason is there are so many changes I'm not sure if it would be accepted. And if I tried to merge one at a time it would move too slow. But it would be cool and I would be down to merge it into the main repo!
@ColeMedin bolt.new-any/app/entry.server.tsx'
at nodeImport (file:///C:/Users/Shashinda%20Eshan/Documents/Github%20New/bolt.new-any/node_modules/.pnpm/vite@5.3.1_@types+node@20.14.9_sass@1.77.6/node_modules/vite/dist/node/chunks/dep-BcXSligG.js:53484:19)
at ssrImport (file:///C:/Users/Shashinda%20Eshan/Documents/Github%20New/bolt.new-any/node_modules/.pnpm/vite@5.3.1_@types+node@20.14.9_sass@1.77.6/node_modules/vite/dist/node/chunks/dep-BcXSligG.js:53349:22)
at eval (eval at instantiateModule (file:///C:/Users/Shashinda%20Eshan/Documents/Github%20New/bolt.new-any/node_modules/.pnpm/vite@5.3.1_@types+node@20.14.9_sass@1.77.6/node_modules/vite/dist/node/chunks/dep-BcXSligG.js:53398:24), :12:37)
at async instantiateModule (file:///C:/Users/Shashinda%20Eshan/Documents/Github%20New/bolt.new-any/node_modules/.pnpm/vite@5.3.1_@types+node@20.14.9_sass@1.77.6/node_modules/vite/dist/node/chunks/dep-BcXSligG.js:53408:5)
"Click outside, press Esc key, or fix the code to dismiss. You can also disable this overlay by setting server.hmr.overlay to false in vite.config.ts."
This was the error, but a fresh clone fixed it. And thank you very much. I've been trying this since the day your video was uploaded. Fantastic work!
I cannot get it to run, neither your fork nor the original local install. When I run "pnpm preview" I get an error: "Received structured exception #0xc0000005: access violation..." Could you make a video explaining it in detail? Or is there a way to get in touch with some talented user?
@ColeMedin Admin for the VS Code terminal before trying "pnpm preview"? Or do you mean the Windows shell? Because I had to run the shell as an admin to change some Windows settings to even get a version output from the "pnpm -v" command in VS Code. But I never tried VS Code as an admin.
Hi Cole, I'm using OpenAI's GPT-4o models, but my preview is only showing a blank white screen with a 'No preview available' message. Could you please help me troubleshoot this? Thanks!
Typically that means the LLM actually messed up and didn't produce good code or didn't run the right commands. I would try restarting with a similar prompt!
I think you should make being able to upload a local project a top priority
Yes. This is very important. I also want to be able to upload my frontend code from v0 to bolt so it can have context to build from
This should've been the first top prio feature imo. EVERYONE UPVOTE THIS
Major upvote
You know what I agree! I think someone is actually working on that but if the PR doesn't go through I'll move it to the top!
@ColeMedin Waiting patiently for this update
Biggest pain point: errors. If it could run the server and automatically fix its own errors, that would be amazing and save so much work
Feels like that's not far away for various AI coders. Be sweet to just tell it what you want, then have it build and ask any questions it needs along the way, or for any dangerous actions (drop db etc).
Cline has started doing this now.
Wait if it doesn't do this, what's the point?
Yes this is on the list of improvements! I agree it would be an absolute game changer! And as others have mentioned, this is starting to become a thing with tools like Cline, though there is certainly a lot of room for improvement still.
@ColeMedin Do you think the error handling will have to wait for now? Because then we are talking about reasoning AI, and AGI if that were to happen. Thank you though for making this project open source and open for contributions; I hope to contribute soon.
We've been involved with this project from the very start, and it truly embodies the spirit of AI democratization: making it accessible to everyone who wants to make a difference. Let's ensure it doesn't become yet another project driven by profit, with prohibitive subscription costs that exclude many. By keeping it open and inclusive, we can create something exceptional together. To those who understand, a word is enough. Let's make this the greatest project ever. A word to the wise!!!
OK, here we go 🎉
Love it man, I appreciate you!!
Definitely need to keep open AI accessible to the masses.
Need to be able to edit / open existing projects for this to really be a full product.
I agree Ben! It's one of the highest priority items on the list!
Love to see how this is progressing! This whole community is doing some great stuff, and your content is so frigging useful!
Thank you very much! Yeah the progress is super exciting!
Yes image integration is sooo needed! A lot of people use screenshots! Thank u so much!!!
You bet - and I definitely agree!!
That's the best way to use bolt.new; otherwise it just makes random crap
I'm not even a coder but I love this project.
That's amazing - thank you! :D
Good to see the Open ai compatible API support 🎉
Yeah that was a super exciting change!
Congrats on this initial success! If I may suggest just one small thing, I'd recommend you *pick a name* for it so that it's not just referred to as "the fork". That's if you want to make it a full-fledged project though, not just focused on adding this one "local LLM" feature while still tracking the upstream project. An example that comes to mind is CircuitPython vs MicroPython (Python on microcontrollers): CircuitPython is _still_ a fork and regularly brings in updates from upstream, but is now clearly its own separate project with a new name and significant differences. I could certainly see that for your own fork.
Thank you and I appreciate your recommendation! I totally agree and am thinking of a name now! I definitely have to give it a name soon.
@ColeMedin "Insane Bolt". That's what I called mine, a play on Usain Bolt. I think that's a winner right there.
I just want to say you guys are like super heroes to me. Some people glorify rappers, athletes, actors, etc. but I think you guys are the sht. For example, I am a creative non coder, non tech and i have all these ideas but no clue how to code any of them. I don't even know where to start, what's a fork? what's bolt.new?
Wow I appreciate it a ton, thank you!!
Can't wait for the tutorial on how to use this myself!
We cannot wait. Would love to run this on a VPS
Prompt caching should be top priority imo.
I appreciate you saying that! I don't want to have the high priority list be too large but once some of those get implemented I will change this to high priority!
Waiting for the ability to add local projects to the app
Yes I agree it's much needed!
@@ColeMedin you're doing great job, keep going
Will do, thank you!!
I won't miss any of your update videos - thanks for this, it's amazing!
Awesome, I appreciate it a ton! You bet!
I'm really excited as well, and sharing with people. Keep doing the video updates related to the bolt.new upgrades!
I sure will keep the updates coming!
Amazing project, I'm going to use it a lot!
Would be great to integrate it in an Electron app to deal with local file projects, and have everything installed in one place.
Thank you! Yeah that would be awesome, let me look into Electron!
It would also be great to have a Discord to discuss possible features, get some help for people getting stuck, and to be able to see how the project is evolving in real time (like the ideas before code gets added and pushed to git)
I'll be releasing a Discourse community soon, it's going to be even better than Discord!
This project is going to revolutionize how people make apps.
That is the plan, I appreciate it!! :D
This is one of the best projects I have tested in a long time
I'm so glad to hear - thank you very much! :D
Pumped for the community can’t wait. I’ve learned so much from this channel. Thank you so much.
I'm pumped too man! I'm glad and it's my pleasure!
I think it needs to be able to interact with the shell; bolt.new has this ability, so it can run shell commands.
Fire!
It does have this ability! But smaller LLMs don't always handle this piece well which is part of what I'm working on with making the smaller LLMs work with the webcontainer better as a whole.
and that's how you start a new open source community
I have another proposal.
Since bigger projects exceed the token/context limit, the rewriting of the files can cause a lot of circling, unless you rewrite your prompt perfectly.
What about a doc where all requirements are added automatically and can be used to refresh the memory/context before progressing with the next step?
I am talking about dependencies, structures, variables etc.
Sorry, I am not a programmer, just an AI enthusiast taking my first steps in this world!
Love your thoughts here Omar! That would be sweet!
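Omar's requirements-doc idea could look roughly like this as a Python sketch. Everything here is hypothetical (the class name, the example facts, the prompt wording); the point is just that a running list of project facts gets prepended to every prompt, so the model is reminded of constraints it may otherwise drop when the context fills up.

```python
class ProjectMemory:
    """Accumulates project facts (dependencies, structure, key variables)."""

    def __init__(self):
        self.facts = []

    def remember(self, fact: str) -> None:
        if fact not in self.facts:  # avoid repeating the same reminder
            self.facts.append(fact)

    def build_prompt(self, request: str) -> str:
        # Prepend every remembered fact to the next task prompt.
        header = "\n".join(f"- {f}" for f in self.facts)
        return f"Project requirements so far:\n{header}\n\nTask: {request}"

memory = ProjectMemory()
memory.remember("depends on React 18")
memory.remember("state lives in src/store.ts")
prompt = memory.build_prompt("add a dark-mode toggle")
```

A real implementation would also have the LLM itself update the doc after each step, but even this static version shows how the "refresh the memory before the next step" idea works.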
Please provide proper infrastructure to code with Python. Also, sometimes it randomly does something I never asked for. Would also love the ability to self-host the generated code. Please upvote if you want to see these features!
All great features - I appreciate the suggestions a lot!
Looking for a way to upload files from saved projects or downloaded zips so I can continue to work on a project.
You can download as a ZIP! And then uploading files into the platform is on the list of improvements to be made!
This running-agents-in-the-backend thing is implemented by a Visual Studio Code extension called Pythagora (previously known as GPT Pilot), and they are implementing it well, if this could help
YES thank you for mentioning this! Pythagora is on my list to check out and I will certainly take inspiration from that!
Image integration would be amazing to see and really cool.
anyone started on this?
I agree! I think someone is actually working on a PR for this already!
🎉 great job! Liked and sub'd. As a no-code dev, for me the critical path for the open source version of Bolt is the "one click deploy" feature. Without it, it's just a hobby, as I haven't a clue how to deploy the apps I build.
Secondly, love that there’s already a fix coming to stop Bolt re-writing code unnecessarily, which is a time drain and massive waste of LLM credits! Thank you. 🙏🏼
And image integration into the prompt will be huge too! 🎉
Thank you very much!! Yes - easy deployments is definitely key and one of the features we are working on.
Image integration is another feature that I am hoping to have available soon.
The major problem is that when you say something, it directly writes from scratch again and ignores every single piece of previous code. They need to fix that.
100 %
I've experienced the same issues many times!
Especially if you want to create something big, it might lose some of the requirements due to the limitation of token/context.
@Omar-s7m6h Yeah, I almost made an entire website with it, but it always deletes comment lines etc. It's just wasting time every single time
Yeah I totally agree! It's a big limitation of Bolt.new itself and it would be great to fix that with this fork. That is on the list!
Man just keep bringing it! This project is growing by leaps and bounds😁
Sure is, it's so exciting! :D
I have been using the fork since day one and I am truly enjoying it
How can I get the link to be able to use the fork?
@humbleonyenma It's in the description of the video, very simple to install and use
Amazing! 😀
Uploading a local project to Bolt is the most important feature that will make your forked version better than the original
Yeah I agree, can't wait to get that implemented!
I like it! I'm not a developer but I will definitely support this project!
Sounds great, I appreciate it a lot!
It's an amazing project. I had no success with local Ollama, but I will retry with the latest version. Thanks a lot!
Thank you! What issue did you run into with Ollama?
To make the workflow quicker it would be awesome if you guys could add microphone input, so you don't need to type into Bolt but can just talk. Best if you use a cheap speech-to-text model for this.
You can use a Chrome extension for voice input...
Yeah this would be fantastic and not too difficult to implement! I will add it to the list!
Pull in the new features they are implementing like 'freeze code' and 'granular git history'
Looking forward to contributing on this project.
I don't think they are actually adding those to the open source version, correct me if I'm wrong!
Thanks man!
Oh maybe not the open source version.
Yeah unfortunately I don't think so... but maybe in the future!
Can't wait to see how to set it up using Docker; I would like to install it on my VPS that way
Can you make a one-click install executable that includes and installs Node.js and npm for you?
just use the Docker image, it's more straightforward and less prone to errors given that every user runs a different environment
@@McAko thank you.
One button to do everything is always the dream haha! But yeah I agree with @McAko's suggestion!
A couple things I would love to see are the ability to upload a hand-drawn wireframe that it then turns into an application. Also I would love to see the ability to upload a URL to a website or application and then ask it to make a copy for yourself, to make your own version. This would be awesome.
Yeah both are on the list of improvements, I agree they would be incredible!
At what point do you rename the fork as it looks like it’ll be a different product within a month. Which is fantastic.
It is fantastic, I appreciate it! And great question. I'm actively thinking of a name for it right now because yeah, I want to rename it asap!
Next stop: Replit. Free for all! This is AI democracy in action. Join the movement and let’s make some noise! Shoutout to everyone on board!
YES Replit is fantastic!
Finally took the time to fill in the form thanks for sharing
Awesome - thank you so much! :D
My version is getting ready!
You have a contribution you are working? :D
English or Spanish?
Great work, Cole. Proud of you. Keep it up.
Thank you very much! :D
You should, for every video like this that you post, show the initial fork you made, the fork from the last video, and the fork from the current video.
This is something people will not only love to see (the big shifts in improvement from each), but it will also get people more excited to contribute and continue this very visual demonstration of improvement
I really appreciate this suggestion, thank you!!
Hi Cole, thank you for putting all of this together for us! I've been following along since day one, right from your very first video. I've never developed anything or written a single line of code before, but you've made me feel like a no-code developer. I'm currently facing an issue where this Bolt version is showing something I don’t quite understand. Could you or someone else help me out with it? Thank you!
"I apologize that I haven't been able to directly create files, install packages, or run commands. I understand this makes the development process much more cumbersome for you.
Let me reiterate my limitations within this environment: I'm a language model, and I operate within a specific context. I can generate code, provide instructions, and offer suggestions"
You are so welcome! I appreciate you!
Which model are you using? This seems more like a smaller model that is just confused by all the Bolt.new prompting and thinks it can't create code for some reason.
Another useful addition would be to include the model version and system version that made the changes.
Yeah I love this!
For the API key, you can just ask bolt to do it actually. I never had to create the .env file
Would be great to add the ability for the chatbot to ask for clarification. I find that issues often arise when the prompt is ambiguous; having the ability to state assumptions made and/or ask for clarification, either at the start or as it chugs along, would be very nice
Thanks Oliver I appreciate the suggestion! This would be done through additional prompting and probably wouldn't be that difficult to add in!
Thank you for adding the Mistral API, I love it, and the OpenAI-like API as well. Thank you bro!
You are so welcome!!
Hey Cole! The Bolt.new team had a podcast recently explaining the roadmap for future versions. Did you get a chance to see their 1.5 hour stream? I sure did, and wanted to know your thoughts!
Where can i find the podcast?
@@mastermedicalterms ruclips.net/video/afdRCN3zxSQ/видео.html&t
I actually didn't get a chance to see it yet! What did they talk about?
@@ColeMedin Long story short, they are introducing tons of features to the closed source version that won't be appearing in the open source version anytime soon. They are overwhelmed by the support from the open source community, but for now they're focusing their efforts on the closed version until they can play catch-up, since they are a small startup.
They just released a new update that allows Bolt to do backend integrations more easily, and hopefully not hallucinate as much. They are adding the ability to lock certain files so it doesn't overwrite them as often, plus a bunch of other changes I'm excited for! But it sucks they won't be able to merge the open source into their closed source anytime soon :( They will monitor the situation closely though!
That's super cool! Yeah huge bummer they are keeping it closed source...
It would be very important to check that the use of the bolt.new name and logo does not infringe any trademark or copyright as your project grows.
I'll be using it heavily
Yes I have checked! And will continue to make sure!
Sounds great Jonathan, I appreciate you!
Can you make a video for developers who want to contribute, whether taking an existing task or bringing a new improvement? It could walk through the incremental development of one feature: how to set up the project for it and how to debug it.
Fantastic suggestion - I appreciate it! I want to start with something simpler about getting it running yourself, but as a follow-up "how-to" video this would be great!
What a time to be aliiiiiiiiive!!!!!
It would be cool if you could add website design and controlling it
At this point a new name for this project would probably be good. Something like "bolter", or anything that will separate it from Bolt in some other way than just "a fork".
Yes I agree! I am thinking of a new name as we speak!
Be careful with letting anyone approve code. I've seen people push malicious code in other projects. love what you are doing though. nice work!
Thank you very much! And I appreciate your advice! I'm certainly careful with who I bring into the team of maintainers, and we are setting really good standards for testing and reviewing everything before merging.
I'm wondering if a multi-phase approach might be more effective for smaller models. That way you could have a phase 1 to analyse the problem, then a phase to write the code for each file, then a phase to format the response.
You could give more granular instructions to each phase, and you might even get better results that way.
Yes you are definitely right here! One of the reasons I want to implement agents and have that on the list is specifically because it'll work better for the smaller LLMs with a solution like you described!
@ColeMedin I'm not familiar with the term agents. I guess you mean running multiple instances, each with their own specialisms?
Wouldn't that also be resource intensive? I suppose you don't need to run them all at the same time. Which, I suppose, is very similar to, if not the same as, what I'm suggesting.
I find even when people work on tasks breaking things down yields better results.
@ColeMedin UPDATE, I had a chat with ChatGPT about agents and I understand the concept now. It's pretty much what I was describing. Where the model can iteratively solve a problem through smaller steps.
Yes exactly! And yeah it will take more tokens and more time, but generally that will be worth it for getting better results because it saves time/tokens going back and forth with the LLM to fix errors/hallucinations.
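The multi-phase idea discussed in this thread can be sketched roughly like this. Everything here is hypothetical (not the fork's actual API): `callModel` stands in for whatever LLM client is in use, and the plan format is simplified to one file name per line.

```typescript
// Hypothetical three-phase pipeline for smaller models: each phase gets one
// narrow, granular instruction instead of a single giant prompt.
type CallModel = (systemPrompt: string, userPrompt: string) => Promise<string>;

async function multiPhaseGenerate(task: string, callModel: CallModel): Promise<string> {
  // Phase 1: analyse the problem and produce a file plan (one file per line).
  const plan = await callModel(
    "You are a planner. List the files needed, one per line.",
    task,
  );

  // Phase 2: write the code for each file, one focused request at a time.
  const files = plan
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.length > 0);
  const sources: string[] = [];
  for (const file of files) {
    sources.push(
      await callModel("You are a coder. Write only the code for this file.", file),
    );
  }

  // Phase 3: format the combined output into the structure the UI expects.
  return callModel(
    "You are a formatter. Combine these files into one response.",
    sources.join("\n---\n"),
  );
}
```

Because `callModel` is injected, the pipeline can be exercised with a stub function and no API at all, which also makes the extra-tokens trade-off easy to measure per phase.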
This is great! How do I run it? I don't understand the Readme. Is there a detailed tutorial? Llama is installed, and Visual Studio Code too. How do I start it?
Thanks! I will be making a guide soon on how to run this easily, stay tuned!
Also include computer use for future in roadmap 😊
That would be cool! What do you have in mind for that?
One thought: maybe use something like agentzero with a tool to use bolt.new... might open up multi-agent workflows??
It would really be "GODZILLA" functionality 😊! I was thinking about it for some time.
That would be sick!!
Vercel or Netlify is okay, but isn't Vercel quite expensive compared to Cloudflare? Getting something built and deployed with a few clicks at a reasonable cost would be good. Also, what about longer "sessions"? Meaning: when prompted, Bolt writes code, but it also stops to "check in" with you. Sometimes we can say continue, and it does so. But if the task would take 3 "check-ins" and it really could code all 3 in one go before checking in, that would be super as well. Maybe have an option to choose whether those check-ins are wanted or not. In theory it COULD code a lot more before stopping, imho. Love the continued build ❤ Thanks so much
Love your thoughts here, thank you! Vercel is more expensive than Cloudflare but a lot of people find it easier to use. Also a great free tier. I just listed Netlify and Vercel as examples but the feature itself is more general to just be able to publish the built apps somewhere!
@@ColeMedin Cloudflare will be a great choice to host the built apps.
Seems like I'm definitely hearing a lot of good things with Cloudflare recently, thanks for the suggestion!
Why the MIT license and not AGPL?
Loaded question but I appreciate you asking! Basically, the key question is: Do you care more about adoption and usage (MIT) or ensuring the code stays open (AGPL)? I prefer adoption and usage more and don't mind if people spin off forks of what we are building here to commercialize it!
Keep going!!!!!
Amazing stuff! As a non-coder I'd love to have the possibility to use Ollama models to write me apps, then run them and self-correct until they're working. Now I ask the OpenAI o1 models to write the scripts for me, and then I have to manually copy-paste them into VS Code and run them, then copy error messages manually back into the o1 window, back and forth. I already have the OpenAI subscription, so I don't want to pay for the API as well. That's why I'm thinking of Ollama. What would be the best way to automate this process locally with Code Llama 7B or similar small models? (If at all possible)
Thank you! Right now having the LLM self correct in this setup is not possible, but that is a feature we are looking to implement because I agree it would be incredible for use cases like yours!
How does Bolt often do better at creating solutions than some of the others like v0 or Cursor, given that they all use Claude 3.5? And hopefully this is in the open source code?
Yes this is completely open source! And Bolt.new does better specifically because of the prompting and how it interacts with the "web container" (the code environment) that is unique to Bolt.new!
WooHoo!! Can't wait to watch it!
Sent you that application my guy, hopefully I can help you with some of this higher-level repo maintenance. I have several 1-hour blocks that are empty in my day to day, so I could test and merge or deny PRs and other stuff of that nature.
And I am looking at loading and syncing full projects. It's just a big task that I can tell is going to need more focus than I can give it right now. Someone will likely beat me to it lol... but maybe I'll merge their PR ;)
Thank you so much man!
That would be amazing if you implement syncing full projects! No worries if your schedule is too packed, though!
Hi, thanks for the great introduction! Can I paste an image to get the code?
Thank you and not yet! That isn't a part of the open source Bolt.new so we are working on implementing that ourselves.
you have earned a new subscriber 😊
I appreciate it man!
One of the only AI projects I've found useful beyond normal chatbots. I truly believe this project will go a long way; it has the potential to become a necessity for developers and designers, like ChatGPT did. Though something like this would be expensive to provide for free, even if it's only limited tokens.
Thank you very much!! It depends on the model - using something like DeepSeek through OpenRouter is actually super cheap and still really powerful!
Great project Mr. Medin! Why do I see computer-run integration here, completely hands off, controlled by agent teams?
Thank you! Could you clarify what you are asking/saying here? If you're saying that would be a cool feature to have agents just work by themselves to code up an entire project, I sure agree!
I hope we see PHP and Python in the future!
Yeah that would be awesome!
Is there any way to upload a reference knowledge base? Like many files or a lot of script templates or some reference for UI design?
I like this idea. Then when making a new project or even importing one, it can reference the KB and we don't repeat ourselves.
Not yet but this is a really awesome suggestion! It certainly would be a tougher one to implement but let me think about how that could be done!
LM Studio integration should be very similar to OpenAI's.
Indeed!
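That's because LM Studio serves an OpenAI-compatible API locally, so an integration mostly reuses the OpenAI request shape with a different base URL. A minimal sketch (the default port `1234` and the model name are assumptions based on LM Studio's defaults; `buildChatRequest` is a hypothetical helper, not the fork's actual code):

```typescript
// Building an OpenAI-style chat completion request whose base URL can be
// swapped between OpenAI and a local LM Studio server.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildChatRequest(baseUrl: string, model: string, messages: ChatMessage[]) {
  return {
    // Strip a trailing slash so the path always joins cleanly.
    url: `${baseUrl.replace(/\/$/, "")}/chat/completions`,
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, messages }),
  };
}

// Same builder, different base URL: that's most of the "integration".
const local = buildChatRequest("http://localhost:1234/v1", "my-local-model", [
  { role: "user", content: "Hello" },
]);
```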
Whoooooooooo, another video!
I'm looking for a Bolt tutorial, especially on running a production website, like a SaaS type: how to structure it with GitHub, staging areas and stuff... I would be willing to pay for a course on that.
Can you make a how-to guide for installing this and running it?
Coming this Sunday!
Does it already support extensions or plugins? That might open contribution up more for the nice-to-haves.
Could you expand on what you're thinking for extensions/plugins? I like where your head is at with that!
@ColeMedin Many of the things you named that would be nice to have could perhaps be opened up to extension developers if you manage to get an extension framework set up. That would ease the load on the core team, who would otherwise be reviewing many nice-to-haves and not doing much else due to time constraints. Or you lose the people that were contributing because no one is taking the time to look at their work. Putting a framework in between would help the core team enable others to contribute without having to review all the changes.
I really love this, thank you for sharing your thoughts! This would be amazing but might be pretty tough to set up unless you have an idea for how this could be done pretty easily!
@ColeMedin Without having a look first, I would just be talking out of my behind... I haven't gotten around to spinning it up yet. Will it run on a 3070 Ti laptop?
Sometimes the preview session doesn't even load or update 😕. I wish there would be a fix for that in the future!
Yeah, I believe I've seen this happen before, and so have others once in a while. I would love to look into this and get a fix! Seems like a bug with the open source version of Bolt.new, not related to any of the changes in this fork.
@@ColeMedin it would be really good and your reply means a lot.
Loading of external projects, to let Bolt analyze them, add features, solve errors, and so on...
Yes both are super important features!
Now Bolt offers to publish directly to GitHub? I don't see it; I searched a lot.
Bolt.new Any LLM (fork), not Bolt.new (official).
@@thedmellow Please, how do I get the link for the forked version so I can use it?
@@thedmellow Oh I see, cool, it's like an edited version of Bolt. Thanks bro :)
When I ask it to create a project a little more complex than the basic templates, it does not launch it, even if I try to launch it through the console myself. This is a very important detail to fix - without this there is ZERO sense in it. Please fix it ❤️
Which model are you using? Sometimes the smaller LLMs aren't able to handle as many requirements!
Please make the tutorial for getting this version ASAP! I can't wait and don't wanna waste time anymore :)
I will VERY soon!!
I love you bro🎉
This fork is amazing, although the actual project itself is suffering: the UI it generates is terrible, the server doesn't start by itself, it doesn't solve errors on its own, and it consumes lots of tokens fixing errors again and again.
If someone has any suggestions for a platform that can generate better UIs I would be grateful (even if it's a static app - something like v0).
Thank you and I get where you are coming from! I hope to evolve this fork to really solve those problems with the original Bolt.new.
Hey Cole... Can you make a video on how to integrate an API into the project?
What do you mean by this? :)
Will the ability to develop directly in our local environments, without a web container, be a possibility in the near future?
That's more where something like Cursor or Cline comes in! But we do have that as a feature to potentially implement in the future!
Have you been able to upload images to the database with Bolt? I'm also having trouble creating new fields or collections with Bolt. Lots of permission denied errors all the time. Got a response that they're aware of the problem... but it's QUITE the problem. Pretty important to be able to have those functions.
The open source version of Bolt.new doesn't have the image upload functionality and I use open source so I haven't tried... I assume you are using the paid version with that functionality? I agree that's a big deal!
Hey top G, thank you for sharing!
Please add cohere models as provider
Thanks for the suggestion! I'll add it to the list!
Can you make a website version too, bro?
I am planning that now!
Yeah, can we PLEASE get rid of the rewriting of files? Bolt is great and can give a great first output, but every time I want to add/fix something, something else gets removed or commented out.
Yeah it is definitely annoying! It's on the list of improvements for a reason! :D
@@ColeMedin Is it also possible to increase response lengths so we can get codebases that are closer to 20k lines? And add an orchestration aspect, or at least an architect mode like Aider has?
Not sure on the first one but I think that would be great! For orchestration aspect do you mean using AI agents behind the scenes, or creating agents?
But why was this fork not merged to the main bolt.new repo?
Main reason is there are so many changes I'm not sure if it would be accepted. And if I tried to merge one at a time it would move too slow. But it would be cool and I would be down to merge it into the main repo!
I'm getting error messages and can't continue. VERY FRUSTRATING that I can't find anyone for assistance.
Same here 🥺
I'm sorry! What's the error you are running into?
@@ColeMedin bolt.new-any/app/entry.server.tsx'
at nodeImport (file:///C:/Users/Shashinda%20Eshan/Documents/Github%20New/bolt.new-any/node_modules/.pnpm/vite@5.3.1_@types+node@20.14.9_sass@1.77.6/node_modules/vite/dist/node/chunks/dep-BcXSligG.js:53484:19)
at ssrImport (file:///C:/Users/Shashinda%20Eshan/Documents/Github%20New/bolt.new-any/node_modules/.pnpm/vite@5.3.1_@types+node@20.14.9_sass@1.77.6/node_modules/vite/dist/node/chunks/dep-BcXSligG.js:53349:22)
at eval (eval at instantiateModule (file:///C:/Users/Shashinda%20Eshan/Documents/Github%20New/bolt.new-any/node_modules/.pnpm/vite@5.3.1_@types+node@20.14.9_sass@1.77.6/node_modules/vite/dist/node/chunks/dep-BcXSligG.js:53398:24), :12:37)
at async instantiateModule (file:///C:/Users/Shashinda%20Eshan/Documents/Github%20New/bolt.new-any/node_modules/.pnpm/vite@5.3.1_@types+node@20.14.9_sass@1.77.6/node_modules/vite/dist/node/chunks/dep-BcXSligG.js:53408:5
Click outside, press Esc key, or fix the code to dismiss.
You can also disable this overlay by setting server.hmr.overlay to false in vite.config.ts.
This was the error, but a fresh clone made it right. And thank you very much! I've been trying this since the day your video was uploaded. Fantastic work!
@@ColeMedin "[vite] Internal Server Error /home/project/src/components/layout/dashboards/DistrictDashboard.tsx: Unexpected token (32:12) 30 | 31 | {/* Navigation Tabs */} > 32 |
Glad a fresh clone fixed it for you! You bet!!
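As an aside on the overlay message pasted above: `server.hmr.overlay` is a standard Vite option, and turning it off in `vite.config.ts` looks roughly like this. Note it only hides the error overlay in the browser; the underlying error is still logged to the console and still needs fixing:

```typescript
// vite.config.ts - disable the HMR error overlay (a sketch; merge this
// into whatever config options the project already sets).
import { defineConfig } from "vite";

export default defineConfig({
  server: {
    hmr: {
      overlay: false,
    },
  },
});
```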
I cannot get it to run, neither your fork nor the original local install.
When I run "pnpm preview" I get an error: "Received structured exception #0xc0000005: access violation..."
Could you make a video explaining in detail?
Or is there a way to get in touch with some talented user?
Try running the terminal as an administrator first!
@ColeMedin Admin for the VS Code terminal before trying "pnpm preview"?
Or do you mean the Windows shell?
Because I had to run the shell as an admin to change some Windows settings to even get a version output from the "pnpm -v" command in VS Code.
But I never tried VS Code as an admin.
I did try running VS Code as an admin. No change on the error :/
But I think I found the error:
I needed to install/update the VC Redistributable.
Oh nice! Thanks for circling back and pointing that out!
I am a newbie, please make a video guide on how to install it.
That's coming soon!
Add prompt enhance
Hi Cole, I'm using OpenAI's GPT-4o models, but my preview is only showing a blank white screen with a 'No preview available' message. Could you please help me troubleshoot this? Thanks!
Typically that means the LLM actually messed up and didn't produce good code or didn't run the right commands. I would try restarting with a similar prompt!
Hey buddy... please help people like us with how to install the bolt.new fork on our local computers.
Video on that coming on Sunday!