My friend, you are a hero to me. I was about to spend so, so much money. Thank you, thank you, so much respect!
Thank you soo much
Same here.
Agreed, just yesterday I was thinking of purchasing the paid plan, since the free tokens were exhausted in 5 minutes.
Little tip for CMD: in Windows, when you browse to a directory, type cmd in the address bar and a prompt will open in that directory, no need to cd and specify the drive :)
This video was awesome, dude. You gained a subscriber. Keep posting!
Getting an error when I run npm run dev; it says "Could not locate @remix-run/serve. Please verify you have it installed to use the dev command."
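That Remix error usually just means the dependency isn't in node_modules; that's a guess from the error's wording, not from this repo specifically. Run the install step again and make sure it finishes cleanly before retrying the dev command:
pnpm install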
You should have shown the results of the Llama models using the same prompt you used on bolt.new. That would have given a better idea of whether it's worth it or not; not everyone has such high specs to try it out in the first place. But the video was informative. Thanks!
Buddy, my god! This was the best tutorial ever! You earned a sub, man! Thanks for this beautiful information.
@@AdarshChandruofficial thank you so much ❤️❤️
Man, this is so frustrating. I knew before I clicked the video that it would be one of those tutorials that don’t go into detail. Y’all are sitting on thousands of views from this topic, but there are literally thousands of no-code users like me who love Bolt and are searching for a complete tutorial on this. Not one video out there breaks it down in detail or shows exactly what to do. Everyone assumes we already know everything. I’m done with this.🤦🏾♂️
@@trilloclock3449 please watch this video for the detailed step-by-step tutorial:
ruclips.net/video/Kk9YOF6URnI/видео.htmlsi=E4wjbgzsLQbVZR1F
How much more detail do you need, bro? This video has already covered everything.
Stop crying and go learn how to code properly.
I think you want him to also show you how to turn off your computer 🤣🤣🤣🤣
For me it keeps loading for 10 minutes and then gives this error: "Failed after 3 attempts. Last error: Cannot connect to API: Headers Timeout Error". I've been trying for 2 days and can't find the issue; I followed every step. Can you help me, please?
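A common cause of that timeout, as an assumption since your setup isn't visible here: the app can't reach the Ollama server at all. Check that Ollama is listening on its default port:
curl http://localhost:11434
It should reply "Ollama is running". Then confirm that the base URL the app uses (OLLAMA_API_BASE_URL in the repo's .env, if your version has it) points at that same address.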
Nice work! Thanks for sharing. Now a question: how do I load it back up and continue working on it? It doesn't save prior chats to recall them?
It does have some limitations, though: after using a lot of tokens it starts giving errors and stops working on any free LLM model.
Would be great to see a video on how to make a fully functional web application using Bolt.new, I mean including the backend and DB.
@@KTTha thank you for the idea. I have just added it to my list. Will record this video in a couple of days ✌️❤️
@@TheMetaverseGuy it'll be highly appreciated if you do this this week, before the weekend. ❤
Hi, thank you for this video. I have a query: if we have some API documentation online, how do we deal with that? How do we fetch the sample code and implement it in our project in a local environment? Does this local Bolt search online? This might be a silly query 😢
@@gondalaprasad the easiest way to add documentation in this version is to paste it directly into your prompt, or convert the documentation to a PDF or doc and attach it as a file.
@@TheMetaverseGuy thank you very much for your bolt-speed reply 😀. I will try this, but the documentation is spread across 30-40 sub-links and I can't create a PDF out of it.
Any suggestions would be great.
Spent almost half a day on it. I followed all the steps, but when I enter the command I don't get any response; it just shows an error.
It is very easy and straightforward.
Thanks!
What hardware do you have?
Can we do it without Docker? That would make it a tad easier. Suggestions?
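Judging by other comments in this thread, and assuming the repo hasn't changed, Docker is optional; the bare route is just Node plus pnpm:
npm install -g pnpm
pnpm install
pnpm run dev
Verify against the repo's README, since this is inferred rather than confirmed by the video.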
Also, repeating the same process doesn't lead to any results. It's not rendering any code or visuals, just behaving like a minimal LLM. Maybe you want to check your repo or debug the instructions, please.
Used Ollama with Qwen. All of it refers to Claude by default.
Is it okay to use Windsurf instead?
Bolt only uses the CPU; my GPU is idle. Any fix for that?
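One thing to check, as a guess: inference happens inside Ollama, not in Bolt itself, so GPU vs CPU is decided by whether Ollama detected your GPU and drivers. With a model loaded, this shows which processor it is running on:
ollama ps
If that reports CPU, the fix is on the Ollama/driver side, not in Bolt.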
For Bolt to work with Ollama models, please change app/utils/constants.ts, line 9, and put your Ollama model there.
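For reference, that edit looks like the lines below; the replacement model name is a placeholder, so use whatever tag you actually pulled or created in Ollama:
// app/utils/constants.ts, line 9, before:
export const DEFAULT_MODEL = 'claude-3-5-sonnet-latest';
// after (placeholder tag):
export const DEFAULT_MODEL = 'qwen2.5-coder:7b';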
You’re soo smart. Thank you ❤
No, @ColeMedin is... lol, it's not his work...
It does not work for me; I get an error at "remix vite:dev".
I am a finalist at SIH 2024 and this vid is a lifesaver. I want to create a website for our project. RESPECT++ ❤❤
@@renosenseiyt you are most welcome. I am glad I am able to add value 🙏
The setup does not work because "pnpm run dev" returns an error at "remix vite:dev"
You left us in the middle, bro :). Please help us get the final output. I followed the video completely but am getting an error at the end. You also didn't show any output :)
Bro, when I try to install this using ollama run qwen2.5 in PowerShell, it downloads to 25-30% and then restarts. The Ollama app is running. Could you suggest something, please?
Sir, I faced an error when I ran npm install; then when I went for the bypass workaround it also threw an error, module not found or similar. What should I do, sir? Please help me. Also, when I go for npm install it does not install through my VS Code. Please tell me.
Paste your error into ChatGPT. That's how I fixed mine.
The command "ollama create -e Qwen2.5coder qwen2.5-coder-:7b does not work. Giving me error message below; ollama is running on my PC
E:\MyBizApps\BOLTDOTNEW\bolt.new-any-llm-main\bolt.new-any-llm-main\modelfiles>ollama create -e Qwen2.5coder qwen2.5-coder-:7b
'create' is not recognized as an internal or external command,
operable program or batch file.
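Two guesses, since the exact command from the video isn't visible here. First, if the quote mark got pasted into the terminal along with the command, CMD will misparse the line, so retype it without quotes. Second, the model tag has a stray hyphen: the library tag is qwen2.5-coder:7b, not qwen2.5-coder-:7b. Also note that per the Ollama docs a Modelfile is passed with -f, along the lines of:
ollama create Qwen2.5coder -f .\Qwen2.5coder
with the -f argument pointing at whichever file in that modelfiles folder the video uses.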
Hello bro, I followed the same path you showed, but it's still not working. Can you please help me fix this issue?
When I have a website I like, how do I deploy it to production?
What PC specifications are needed for this installation?
Very helpful, thanks man❤
I got "There was an error processing your request". Please help.
Bro, it's showing "error processing your request".
Great video.....thank you for sharing bro....newly subscribed ...cheers :)
I have done everything but it's not working. My laptop is an i5 on Windows; where can I download the Ollama API key?
Will my 4050 with 16 GB RAM be able to run this smoothly?
@@pushpenderkumar2319 should be able to run easily
@@TheMetaverseGuy please help, when I type npm install it shows something like "disabled on this system".
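"...disabled on this system" is almost certainly PowerShell's execution policy blocking the npm.ps1 wrapper; that's the standard Windows error with this wording, nothing specific to this repo. Running this once in PowerShell usually fixes it:
Set-ExecutionPolicy -Scope CurrentUser RemoteSigned
Alternatively, run the same commands from cmd instead of PowerShell; there npm is invoked as npm.cmd and no script policy applies.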
Bro, tell me how I can connect with you.
My laptop can't run any local LLM 😢
Try this one: ollama.com/library/tinyllama
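That one is small enough for most laptops. Pulling and chatting with it is a single standard Ollama command:
ollama run tinyllama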
Enjoying your content. Can you make a video on how to edit SVG files in Bolt.new or import Figma files?
Subscriber number 720!
Good luck, all the best.
Thank you bro
Counting every single one✌✌
Hello sir, I followed you step by step but got an error at the end.
@@ROHITKUMAR-gn4pi can you please share the error that you are getting?
@TheMetaverseGuy Thank you for replying. Sir, I solved that error; now I'm getting an error about insufficient RAM. I installed the 7b variant, but I have 8 GB of RAM, so I think I now have to install a 4b or something smaller.
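That matches the usual guidance: Ollama's docs recommend at least 8 GB of RAM just for 7b models, so with a browser and a dev server also open, 8 GB is tight. Assuming the model from the video, the Ollama library also lists smaller qwen2.5-coder tags, for example:
ollama run qwen2.5-coder:3b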
The local version of Bolt is BS. I have tried it, doesn't work as you would expect even with the best models. If you are looking to develop full stack app, please don't bother. You will end up wasting your time.
You need to be transparent about GPU requirements, as these videos trick people into trying it and liking it. If you aren't running something over 32b, and you won't be on anything under, say, a 3090, you won't get very far with Bolt.anything. 😅😅
There is a 1000x difference between cloud Bolt and local Bolt.
Yeah, I realized that after creating a few projects locally 😄
@@TheMetaverseGuy Can you please elaborate? Is it bad locally?
Honestly, this version of Bolt is so buggy it's not worth messing with AT ALL. It's better to just use Windsurf or Cline. The Bolt-any-llm is a complete waste of time; it wants to error out on almost everything you try to do.
It's just using Qwen in the backend with Bolt's frontend 😂😂 Why have such a complex process for this UI? We have extensions that give us a UI for Ollama models easily.
Please comment with those, for everyone else's information 🙏🏻
With the new version it doesn't work.
❤
Here's the stupidity of it all - we're paying them to get it wrong. Do you see what they did there...? Do you see?
You digress a lot, smh.
The command "ollama create -e Qwen2.5coder qwen2.5-coder-:7b does not work. Giving me error message below;
E:\MyBizApps\BOLTDOTNEW\bolt.new-any-llm-main\bolt.new-any-llm-main\modelfiles>ollama create -e Qwen2.5coder qwen2.5-coder-:7b
'ollama' is not recognized as an internal or external command,
operable program or batch file.
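Unlike the 'create' error further up, this one has a standard meaning: Windows can't find the Ollama executable on PATH. The installer normally adds it, but only newly opened terminals pick it up, so close and reopen CMD after installing. Failing that, Ollama installs to %LOCALAPPDATA%\Programs\Ollama by default on Windows, and you can sanity-check it by full path:
"%LOCALAPPDATA%\Programs\Ollama\ollama.exe" list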
Hello, and thank you for the video. First of all, you didn't show that the project works at the end; you didn't show even one prompt for the Ollama bot, because it won't work! You need to change the default model in bolt.new-any-llm-main/app/utils/constants.ts, line 9: change export const DEFAULT_MODEL = 'claude-3-5-sonnet-latest'; to export const DEFAULT_MODEL = 'your ollama model'; and put in the Ollama model that you started. Only then will it work with Ollama on any prompt. Greetings.
Please remove the breathing sounds.
Thank you for mentioning that, I will keep it in mind for future videos.
Hello brother, I followed the same path you showed but it's still not working. Can you please help me?