🚨 Subscribe To The FREE Newsletter For Regular AI Updates: intheworldofai.com/
I only want solutions that work like GitHub Copilot in VS Code. I need to be able to download a lightweight LLM; I think LM Studio seems to be best for now. Then connect to the service from other machines and have it reason over my solution, generating, fixing, and testing code.
🎉good stuff homie keep it coming..badassness❤
In the preview it doesn't show anything. Any help?
OK, it's free to run DeepSeek R1 locally, but it requires a very powerful computer. Specifically, you'll need at least 128 GB of RAM and a top-tier NVIDIA graphics card, which can cost around $4,000.
The smaller distilled models don’t require high end gpus like the 4090.
Theres people on X running it off their phones and raspberry pi
You're talking without any experience, brother.
@@Pencilini_the_3rd The smallest model, maybe. But the smallest isn't even worth it.
@@ricko13 Dude, what do you mean "worth"? It's open source and free. Don't be greedy and want things that are both free and perfect.
I'm running deepseek-r1:14b on an RTX 4070 with 32 GB of RAM, and it uses the entire 12 GB of VRAM and around 20 GB of RAM. The problem is that this bolt.diy just answers like ChatGPT or the simple Ollama web chat; it doesn't make any files or anything... Are you sure you're demoing the DeepSeek reasoner in this video and not DeepSeek Coder? It looks like you're capping 🧢 sir
I have the same system as you. 4070 and 32gb ram.
Do you do coding?
Which model works better for you?
Have you tested HTML, CSS, JS, and C#?
@ I didn't do that much testing; I tried some easy coding tasks. DeepSeek Coder and other coding-related models did better than this reasoner, which simply answered like the free version of ChatGPT. Still, this decent PC feels a bit slow for AI, unfortunately. 24+ GB GPUs are recommended.
Why are you referencing baseball caps? That has nothing to do with the content.
Want to HIRE us to implement AI into your Business or Workflow? Fill out this work form: www.worldzofai.com/
💗 Thank you so much for watching guys! I would highly appreciate it if you subscribe (turn on the notification bell), like, and comment what else you want to see!
🚨 Subscribe To The Newsletter For Regular AI Updates: intheworldofai.com/
🔥 Become a Patron (Private Discord): patreon.com/WorldofAi
🧠 Follow me on Twitter: twitter.com/intheworldofai
👾 Join the World of AI Discord! : discord.gg/NPf8FCn4cD
📆 Book a 1-On-1 Consulting Call With Me: calendly.com/worldzofai/ai-consulting-call-1
Love y'all and have an amazing day fellas. Thank you so much guys! Love y'all!
I use AnyLM to run various LLMs locally. Is there any difference between using Ollama and AnyLM?
I'm encountering an issue after loading the Ollama API key. The error message says: 'There was an error processing your request: Custom error: No models found for provider Ollama.' Ollama is already installed on my PC. Can you help me resolve this?
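For anyone hitting the same 'No models found for provider Ollama' message: it usually means bolt.diy can't reach the Ollama server at the base URL it was given, or that no models have been pulled yet. Here is a minimal sketch of a check, assuming the default Ollama endpoint at http://localhost:11434 and the Python requests package (the exact setting name inside bolt.diy may differ):

# Hypothetical quick check: is the Ollama server reachable, and has it pulled any models?
# Assumes the default Ollama endpoint; adjust OLLAMA_URL if you run it elsewhere.
import requests

OLLAMA_URL = "http://localhost:11434"

try:
    resp = requests.get(f"{OLLAMA_URL}/api/tags", timeout=5)
    resp.raise_for_status()
except requests.RequestException as exc:
    print(f"Could not reach Ollama at {OLLAMA_URL}: {exc}")
else:
    models = resp.json().get("models", [])
    if not models:
        print("Ollama is running but has no models pulled, e.g. run: ollama pull deepseek-r1:14b")
    else:
        for m in models:
            print("Available model:", m["name"])

If the list prints but bolt.diy still reports no models, the base URL configured for the Ollama provider in bolt.diy is probably not pointing at the same address (note that Ollama itself does not use an API key).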
Help please: when running bolt.diy and creating a site, it runs 'npm run dev' in the workspace and doesn't build the site properly. It seems like it gets stuck on that.
If you want to run it locally or on your own hosting, just ask bolt to give step-by-step instructions. It will tell you exactly what you have to do.. 😊
[Must Watch]:
Deepseek-R1 Computer Use: FULLY FREE AI Agent With UI CAN DO ANYTHING! (Beats OpenAI Operator): ruclips.net/video/PRbCFgSvaco/видео.htmlsi=-2Pdd_3OOYt7iei1
DeepSeek-R1 + Cline: BEST AI Coding Agent! Develop a Full-stack App Without Writing ANY Code!: ruclips.net/video/OBc9xheI2dc/видео.htmlsi=o6mpLg8dqMoilWiP
Deepseek-R1 (Tested): BEST LLM EVER That's Opensource? AGI IS HERE! (Beats O1 & 3.5 Sonnet): ruclips.net/video/hXA15NkEHNU/видео.htmlsi=CQybrodUTSp6hXcy
Please do Cline with local DeepSeek R1.
Does Bolt.DIY ask for an Ollama key? It doesn't detect the models I have. How should I update it?
Is it like GitHub Copilot? Does it edit code based on prompts like Copilot does? I'm a bit concerned about privacy with Copilot.
Has anyone checked to make sure local versions of R1 aren't communicating back to a remote location?
I assume ollama wouldn't allow it if so
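For what it's worth, one rough way to check is to watch which network sockets the Ollama process holds while a local model is generating. The sketch below is only an illustration; it assumes the stock ollama serve process (which listens on 127.0.0.1:11434 by default) and the Python psutil package:

# Rough sketch: list sockets owned by Ollama processes and flag non-loopback peers.
# Assumes psutil is installed; may need elevated permissions on some systems.
import psutil

LOOPBACK = ("127.", "::1")

for proc in psutil.process_iter(["name"]):
    if "ollama" not in (proc.info["name"] or "").lower():
        continue
    try:
        conns = proc.connections(kind="inet")
    except psutil.AccessDenied:
        print(f"PID {proc.pid}: need elevated permissions to inspect connections")
        continue
    for c in conns:
        if not c.raddr:
            print(f"PID {proc.pid}: listening on {c.laddr.ip}:{c.laddr.port}")
        elif str(c.raddr.ip).startswith(LOOPBACK):
            print(f"PID {proc.pid}: loopback peer {c.raddr.ip}:{c.raddr.port}")
        else:
            print(f"PID {proc.pid}: non-loopback peer {c.raddr.ip}:{c.raddr.port} ({c.status})")

Expect non-loopback peers while pulling a model from the registry or if a frontend on another machine is connected; the interesting question is whether anything shows up during plain local inference.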
How do you get images in bolt.diy? I can't get it to work.
Which model are you using? DeepSeek can't process images. I recommend using Claude or Llama for images.
Their v3 can take images
I mean the DeepSeek V3 model can take images.
@@intheworldofai I wanted to upload images in the app/website like you did, but drag and drop in the project doesn't work, and neither does using Lorem Picsum.
Has anyone tested the 32b/14b Ollama version? Does it produce the same application as the DeepSeek cloud version shown in the video?
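For anyone who wants to run that comparison themselves, below is a minimal sketch that prompts the 14b distill through the ollama Python client. It assumes Ollama is running and the model has already been pulled with ollama pull deepseek-r1:14b; the prompt is just a placeholder:

# Minimal sketch: send one prompt to a local DeepSeek-R1 distill via the ollama Python client.
# Assumes the deepseek-r1:14b model has already been pulled.
import ollama

response = ollama.chat(
    model="deepseek-r1:14b",
    messages=[{"role": "user", "content": "Build a single-page to-do app in plain HTML, CSS, and JavaScript."}],
)
print(response["message"]["content"])  # R1 distills include their reasoning in a <think> block

Pasting the same prompt into the DeepSeek cloud chat gives a direct side-by-side; judging by other comments here, the smaller distills don't always emit the file artifacts bolt.diy expects, so the results may well differ.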
I have a laptop with 8 GB RAM, and my dedicated graphics card only has 2 GB.
So will my system be capable of running it locally?
(If not, can you please create a video comparing and suggesting the best options for building a "full-stack app" under different system requirements?)
I can't see anywhere to actually download DeepSeek; I think you're just using it through their UI.
The typical computer can't run it locally; it would cost a lot of money in hardware. If you're looking to write code with AI, use a service like Cursor. Unless you're fully committed to running it locally, I would learn about it all before investing.
@@kyleDoesCoding A computer's minimum RAM requirement will soon be 128 GB. We are below $2 per GB at the moment. An AI accelerator card like the Mac mini M4, aka an Nvidia Jetson, should not cost more than $150.
I only got $1 when I logged into Hyperbolic 😢😢
Thank you tons!
It looks promising .
This is awesome
I didn't understand.
With bolt.diy, how do you stop it from rewriting the entire file? I figured it out in bolt.new, but how do you do it in DIY?
I haven't figured it out yet, but you probably have to install a bunch more packages, diff etc. Still working on it. I can never get my DeepSeek key to take; there's always an error. DeepSeek says it's the wrong endpoint with bolt.diy.
Can you share how you did it in bolt.new? Then I'll be able to find the same thing in DIY. Thanks.
@@librakhan25 In bolt.new it's a setting, experimental.
If you go to settings, there is something called Diff. It is still a beta version, but it works. You can even ask bolt to split the files into different modules; it will do that, so you can reduce token usage..
@@juleatkr I see the Diff on bolt.new, but I don't see it on DIY.
Hyperbolic provides only $1 for free, not $10.
Thanks
How do you get unlimited free API access?
I think R1 is overhyped! I tried to make some WordPress plugins with this model and... it was a disaster every time. Even cheaper and free models like Codestral do a better job. Sonnet is still the king... sadly...
It's just average at coding, but in other fields, like scientific domains, it's amazing. But have you tried DeepSeek R1-Zero? It uses test-time compute, not just chain of thought.