*System Requirements 💪*
🖥 1.5B - Any PC (Avoid Win XP/Vista-era hardware)
🎮 7B & 8B - 6GB VRAM or higher
🚀 14B - 16GB VRAM or higher
🔥 32B - 24GB VRAM or higher
⚡ 70B - 48GB VRAM or higher
💀 671B - 480GB VRAM or higher
⚠️ You CAN run larger "B" models with lower VRAM, but expect slower responses.
💀 671B? Forget it: 99.9% of PCs can't handle it. If yours can, consider yourself extremely lucky 🍀; this one is built for specialized server hardware. 🖥️🔧
i have GTX 1060 6gb with 16 gb RAM (Laptop) - which model is suitable for this specs?
@shortclipse based off your 6gb VRAM. 7B/8B would be the best suited for your GTX 1060
Thank you! Finally, these are the only instructions I have followed that actually work and aren't just clickbait
Thanks for this amazing tutorial! One of the best guides on how to run DeepSeek locally.
00:26 GTA 6 and Half-Life 3 ahahahahah. you are the best
W content, can't even be mad at the goofy soundtracks x)
Mind-blowing content! Keep it up!
Thank youuu that means a lot! 🙏
Thanks for this amazing tutorial!
You're welcome! Glad it helped you. 😉
I have an RTX 3060 laptop and I got DeepSeek Coder V2 (the 16B model that's 9GB in size) and it runs great. Great tutorial, I like the memes 👍
What's your RAM and VRAM size?
I pinned the System Requirements in the comments
@@WhiteScreen-i7v 16GB of RAM and 6GB of VRAM
Thanks for the compliment on the memes :)))
@ So for you, 7B or 8B will be the highest ones that work well
Your memes got me 1 sub 👌
Thank you!!
Great video bro… did you download the latest R1 version separately, or does it come with Ollama?
R1 is the latest model XD. It doesn't come with Ollama. You need to install Ollama first, and once that's done you can use the DeepSeek R1 command to download and install it through Ollama. :)
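For anyone who wants the exact command: assuming the tags on the Ollama library page haven't changed, running "ollama run deepseek-r1:7b" in a terminal downloads the 7B model the first time and then drops you into a chat. Swap 7b for 1.5b, 8b, 14b and so on to grab a different size, and "ollama pull deepseek-r1:7b" just downloads without starting a chat.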
bro how to get your mouse?
Which one should I use? (RTX 4070 Ti Super)
You can choose from multiple models.
The highest one it could run well is 14B.
It might be able to run 32B or even 70B, but it will produce answers really slowly.
Forget 671B: no normal consumer-level PC can run that well, if at all, unless you have something like a server PC.
@@Zortec Yeah. I just wanted to know if it can run 32B. Obviously it wouldn't run the 671B version xD
Thank you bro
No worries. Feel free to share this with a friend if they need it :)
What if I want to install another version of DeepSeek and delete the previous one? How should I do that?
Yes, you can add multiple versions of it. Each version is a separate download with its own command.
So there's no need to delete a previous version. I just didn't mention this because I wasn't sure most people would want more than one.
But if you want to delete a specific model manually,
you can go to C:\Users\%YOURPCusername%\.ollama\models
and you will be able to tell which one to delete based on the file size.
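A possibly safer route than deleting files by hand, assuming a standard Ollama install: "ollama list" in a terminal shows every model you've downloaded, and "ollama rm deepseek-r1:7b" (with whichever tag you want gone) removes that one cleanly.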
System requirements? 🤡
Also, I just want to analyse my documents, and the server error has been bothering me for a long time, so I believe even the lowest model will be able to analyse documents locally?
Yeah, it will work. I have a gaming laptop, and when I play a game like Minecraft Java my laptop's performance drops, but with DeepSeek the performance is actually pretty good.
Yes, you can analyse documents locally by following my tutorial.
Hello! Which model should I choose?
My PC specifications are:
Intel i5 12th generation with an AMD RX 6600 (8GB VRAM) and 16GB RAM.
Check the pinned System Requirements. For you, I recommend 8B maximum. You can take your chances with 14B or 32B, but it will be really slow to use!
Is there a way to turn off deep thinking on the 14B model, like is possible on the DeepSeek website?
It's baked into the model itself, so it will be static. Will let you know if I figure this out.
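One thing worth checking, though I haven't tested it on the 14B model myself: newer Ollama releases added a thinking toggle for reasoning models, e.g. "ollama run deepseek-r1:14b --think=false", or typing "/set nothink" inside an interactive session. If your Ollama version predates that feature, it won't work.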
I am going to run the 1.5B model, but what's the difference between these models? Will the 1.5B model give me inaccurate responses, or maybe not respond at all because it lacks enough info?
By the way, I am going to run it on my smartphone. My phone runs Gemma-2-2B-it perfectly fine, so I hope I can run this too.
Correct. Due to its limited knowledge, many answers will be either incorrect or outdated. It can do general things pretty well, but I still think 7B would be a much better option; it just depends on whether your device can handle it.
@@Zortec Okay, I got the DeepSeek R1 7B model on my phone and it's kinda dumb: it cannot tell whether 9.11 or 9.9 is bigger, and it's very slow.
I downloaded another model called "Qwen2.5-3B-Instruct" and it got the maths question correct, and it's faster.
Yeah, that is strange. However, keep in mind that 7B has only about 1% of the parameters of the complete 671B model.
Generally, the more parameters, the better the answers.
The Qwen2.5-3B-Instruct model might be more efficient and accurate at mathematical reasoning, often outperforming larger models on those tasks. But this doesn't represent the true power of DeepSeek R1.
@@Zortec Does the official DeepSeek website use the R1 671B-parameter model?
How do I install it on drive D:?
Can you send it attachments?
Yes, you can if you install AnythingLLM as shown in the video.
Yes you can, like documents and spreadsheets :)
My laptop is an i5 13th gen with 16GB RAM and a 6GB VRAM RTX 4050. Which one should I use?
See the pinned System Requirements: with your 6GB of VRAM, the 7B/8B models are the best fit.
So would it automatically use both the DeepThink and Search features if connected to the internet, and get the latest information? Also, the DeepSeek website doesn't let you attach a file when using Search; is that possible when running locally? And is there any chance the 7B-parameter model runs on my Intel Iris Xe graphics with 8GB GPU RAM and 16GB normal RAM?
Why not try it out and find out? I left the System Requirements in the pinned comment.
Ok, thanks for the reply
Hey great tutorial but I have a few questions and I was hoping you could answer them.
I have a Dell Latitude 5310 laptop; below are my specs:
Processor Intel(R) Core(TM) i7-10610U CPU @ 1.80GHz 2.30 GHz
Installed RAM 16.0 GB (15.7 GB usable)
System type 64-bit operating system, x64-based processor
Pen and touch Touch support with 10 touch points
Q1: Which model is suitable for these specs?
Q2: Which model is AnythingLLM using?
When you asked in the console "Who are you?", it said "DeepSeek-R1", but when you installed AnythingLLM, it answered "I'm Gen". So I was confused about whether AnythingLLM is also using the DeepSeek-R1 model or not.
Q3: One last question: if I install any other model instead of DeepSeek-R1, would that also work with AnythingLLM?
I'm a beginner so I asked pretty basic questions, I guess; would really appreciate your reply. Thank you.
Yeah, to be honest I was confused too when it said Gen. I think it's because there is a built-in prompt in AnythingLLM that tells it to say it's Gen; however, it is the Ollama DeepSeek R1 model. Just to be sure, I will look into it again for you.
Yes, if it's an Ollama model then it should work with AnythingLLM.
For you, I recommend starting with 1.5B and seeing how that goes; then you can also get 7B and try that out.
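If you ever want to double-check which model is actually answering, "ollama ps" in a terminal lists the models Ollama currently has loaded (assuming a reasonably recent Ollama version).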
Okay got it!
Thanks for your response, much appreciated!! 👍🏻
Does it work without internet?
yes it does :)
THANKS
thanks my afghan friend :)
Bro, can we change this model's coding?
I guess you could. I haven't tried it myself, but you can get the R1 model alongside the coding model.
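If you mean coding specifically: the Ollama library has a dedicated DeepSeek coding model, and assuming its tag hasn't changed, "ollama run deepseek-coder-v2:16b" should pull it. It can live alongside R1, since each model is a separate download.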
Does this learn locally?
Good question. I am not sure.
I do think it can self-learn within the chat you're using, though.
This is what I found:
"DeepSeek R1's parameters remain static, and it does not continue to learn or adapt in real time during usage.
Therefore, when running DeepSeek-R1 locally, it operates as a 'fixed' model without the ability to perform reinforcement learning or self-improvement during its deployment. The RL component is exclusive to the training environment and is not a feature of the deployed model, regardless of the platform."
A few questions, if you don't mind; it would be great if you made a more detailed video about all this 👍😃👍
1. Can we add an image and ask questions about it, like in ChatGPT?
2. Is it possible to access the internet to get the latest updates, like ChatGPT's online search?
3. What about the token system? Are there any limits on how many questions we can ask?
4. Is it possible to upgrade the DeepSeek model from a lower one to the next size up? In the beginning I downloaded 7B; can I upgrade easily, or do I need to redo all the steps again?
5. How do we create something like custom GPTs in ChatGPT? Can we do that in DeepSeek locally?
6. What is the difference between the DeepSeek models (2B, 7B, 14B, 32B...)? I know it's billions of parameters, something like that.
7. Does a higher number mean a lower response time, or less knowledge?
Idk how to answer all of them, but if you install AnythingLLM and then install a new DeepSeek model, you can just select the new model in AnythingLLM's settings; you don't need to download AnythingLLM again. Also, if you get a web UI (like Open WebUI) it can access the internet, but idk how to install that.
@@anibration Thanks for the input; if you find anything, please let me know.
Thanks for your question, Arunneser. Quick context first: DeepSeek R1 and most local LLMs are designed primarily for text-based interaction; to process images you would need a multimodal model trained on both text and images, which R1 is not. Taking your questions one by one:
1. Can we add images to ask questions, like in ChatGPT?
At the moment, DeepSeek R1 and most local LLMs are built for text-based interactions. They don't handle image inputs unless they're specifically designed as multimodal models. So, for now, adding images to your questions isn't supported.
2. Is it possible for DeepSeek R1 to access the internet for the latest updates, like ChatGPT does?
DeepSeek R1 operates offline and doesn't have built-in internet access. However, with some technical know-how, you can integrate an external search API to fetch real-time information and pass it to the model. This involves additional setup and programming; a rough sketch is further down in this comment.
3. How does the token system work? Are there limits on how many questions we can ask?
LLMs process text in units called tokens, which can be as short as a single character or as long as a word. Each model has a maximum context length; local setups often default to around 2,048 to 4,096 tokens, though many models support more if you raise the context setting. This limit applies to the combined length of your input and the model's response. There's no hard cap on the number of questions you can ask, but longer conversations may require trimming earlier parts to stay within the token limit.
4. Can I upgrade from a lower DeepSeek model to a higher one without redoing all the steps?
Yes, upgrading is straightforward. You download the larger model with its own command and select it in your existing setup; nothing needs to be redone. Just make sure your hardware can handle the increased demands of the larger model.
5. Is it possible to create custom models like GPTs in ChatGPT with DeepSeek locally?
Creating custom models involves fine-tuning the base model on specific datasets to tailor its responses. With DeepSeek R1, you can perform fine-tuning locally if you have the necessary computational resources. This allows you to adapt the model to specific tasks or domains.
6. What's the difference between DeepSeek models like 2B, 7B, 14B, 32B, etc.?
The numbers indicate the number of parameters in billions. Generally, more parameters mean the model can understand and generate more complex responses. However, larger models also require more computing power and memory. It's a balance between performance and resource availability.
7. Does a higher parameter count mean faster responses or better knowledge?
Larger models usually have a better grasp of language and can provide more detailed answers. However, they might respond more slowly due to the increased computational load. Optimizations like model quantization can help speed things up, but there's always a trade-off between size, speed, and performance.
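For question 2, here is a minimal sketch of what that search integration could look like, assuming Ollama is running locally on its default port (11434) and the 7B model is installed. fetch_search_results is a hypothetical placeholder; you would wire it up to whatever search API you have access to:

# Minimal sketch: feed web search results into a local DeepSeek R1 model via
# Ollama's REST API. Assumes Ollama is running on its default port (11434)
# and that deepseek-r1:7b has already been pulled.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"

def fetch_search_results(query: str) -> str:
    # HYPOTHETICAL placeholder: replace with a call to a real search API
    # of your choice, returning the top results as plain text.
    raise NotImplementedError("plug in your search API here")

def ask_with_search(question: str) -> str:
    # Paste fresh search results into the prompt so the offline model
    # can answer with up-to-date information.
    context = fetch_search_results(question)
    prompt = (
        "Use the following web search results to answer the question.\n\n"
        f"Search results:\n{context}\n\nQuestion: {question}"
    )
    # Non-streaming request: Ollama returns a single JSON object whose
    # "response" field holds the model's full answer.
    resp = requests.post(
        OLLAMA_URL,
        json={"model": "deepseek-r1:7b", "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_with_search("What is the latest version of Ollama?"))

The whole trick is just prompt stuffing: the model itself never goes online, you fetch the information and hand it over as part of the prompt.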
I hope this helps clarify things!
@@Zortec 🫡 Thanks for responding! 2) Is it possible to make a video on getting DeepSeek R1 to access the internet? 5) How do I make a custom DeepSeek for a specific purpose, like script writing, generating ideas, etc.? And about the token system: could you make a detailed video on these topics? It would be helpful! (What can I do with an LLM (AI) that has no internet connection?!)
Next, just make a video on how to install it on a MacBook. Notion nerds will follow!
ahahah good idea XD
@@Zortec ohhhr nooo my chatgpt said Bruh it is now gen za
gen ZA???
bro send me a link to HL3 :)
you want Half Life 3??
@Zortec who doesn't 🤣