🐋 How to Download DeepSeek R1 Locally | Install DeepSeek AI Locally ✅

  • Published: 4 Feb 2025

Comments • 76

  • @Zortec
    @Zortec  2 days ago +8

    *System Requirements 💪*
    🖥 1.5B - Any PC (avoid Win XP/Vista-era hardware)
    🎮 7B & 8B - 6GB VRAM or higher
    🚀 14B - 16GB VRAM or higher
    🔥 32B - 24GB VRAM or higher
    ⚡ 70B - 48GB VRAM or higher
    💀 671B - 480GB VRAM or higher
    ⚠️ You CAN run larger "B" models with less VRAM, but expect slower responses.
    💀 671B? Forget it - 99.9% of PCs can't handle it. If yours can, consider yourself extremely lucky 🍀 - this tier is built for specialized server hardware. 🖥️🔧
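    If you want the table above as code, here's a rough sketch (the tags are the real Ollama tags for DeepSeek R1, but treat the VRAM thresholds as guidance from this list, not official figures):

```python
def recommend_model(vram_gb: float) -> str:
    """Pick the largest DeepSeek R1 variant from the table above
    that should fit comfortably in the given VRAM."""
    # (VRAM threshold in GB, Ollama model tag), largest first.
    tiers = [
        (480, "deepseek-r1:671b"),
        (48, "deepseek-r1:70b"),
        (24, "deepseek-r1:32b"),
        (16, "deepseek-r1:14b"),
        (6, "deepseek-r1:8b"),
    ]
    for threshold, tag in tiers:
        if vram_gb >= threshold:
            return tag
    return "deepseek-r1:1.5b"  # runs on almost any PC

print(recommend_model(6))   # GTX 1060 6GB → deepseek-r1:8b
print(recommend_model(24))  # 24GB card → deepseek-r1:32b
```

    Remember: you can go one tier up if you accept slower responses.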

    • @shortclipse
      @shortclipse 1 day ago +1

      I have a GTX 1060 6GB with 16GB RAM (laptop) - which model is suitable for these specs?

    • @Zortec
      @Zortec  14 hours ago

      @shortclipse Based on your 6GB VRAM, 7B or 8B would be best suited for your GTX 1060.

  • @rmuzz76
    @rmuzz76 34 minutes ago

    Thank you! Finally, instructions that actually work and aren't just clickbait.

  • @Zninja145
    @Zninja145 2 days ago +5

    Thanks for this amazing tutorial! One of the best guides on how to run DeepSeek locally.

  • @Vincenzo-iw3be
    @Vincenzo-iw3be 1 day ago +4

    00:26 GTA 6 and Half-Life 3 ahahahahah. You're the best.

  • @iwenameh
    @iwenameh 23 minutes ago

    W content, can't even be mad at the goofy soundtracks x)

  • @User-r3s2h
    @User-r3s2h 1 day ago +2

    Mind-blowing content! Keep it up!

    • @Zortec
      @Zortec  1 day ago

      Thank youuu that means a lot! 🙏

  • @MohammedAlaouna
    @MohammedAlaouna 1 day ago +1

    Thanks for this amazing tutorial!

    • @Zortec
      @Zortec  16 hours ago

      You're welcome! Glad it helped you. 😉

  • @anibration
    @anibration 20 hours ago +2

    I have an RTX 3060 laptop and I got DeepSeek Coder V2 (the 16B model that's 9GB in size) and it runs great. Great tutorial, I like the memes 👍

    • @WhiteScreen-i7v
      @WhiteScreen-i7v 17 hours ago +1

      What's your RAM and VRAM size?

    • @Zortec
      @Zortec  16 hours ago

      I pinned the System Requirements in the comments.

    • @anibration
      @anibration 15 hours ago +1

      @@WhiteScreen-i7v 16GB of RAM and 6GB of VRAM

    • @Zortec
      @Zortec  14 hours ago +1

      Thanks for the compliment on the memes :)))

    • @Zortec
      @Zortec  14 hours ago

      @ So for you, 7B or 8B will be the highest ones that work well.

  • @hoango9677
    @hoango9677 13 hours ago +1

    Your memes got me 1 sub 👌

  • @idrishammed9524
    @idrishammed9524 1 hour ago

    Thank you!!

  • @Vangod
    @Vangod 9 hours ago

    Great video bro… did you download the latest R1 version, or does it come with Ollama?

    • @Zortec
      @Zortec  7 hours ago +1

      R1 is the latest model XD. It doesn't come bundled with Ollama - you need to install Ollama first, and once that's done you can run the DeepSeek R1 command to download and install the model through Ollama. :)

  • @Compilations-4k
    @Compilations-4k 4 hours ago +1

    Bro, how do I get your mouse?

  • @isas213
    @isas213 1 day ago +1

    Which one should I use? (RTX 4070 Ti Super)

    • @Zortec
      @Zortec  1 day ago +2

      You can install multiple models.
      The highest one it can run well is 14B.
      It might also manage 32B or even 70B, but it will produce answers really slowly.
      Forget 671B - no normal consumer-level PC can run that well, if at all, unless you have something like a server PC.

    • @isas213
      @isas213 1 day ago

      @@Zortec Yeah. I just wanted to know if it can run 32b. Obviously it wouldn't run the 671b version xD
      Thank you bro

    • @Zortec
      @Zortec  1 day ago +1

      No worries. Feel free to share this with a friend if they need it :)

  • @coldfieldgamer108
    @coldfieldgamer108 1 day ago +1

    What if I want to install another version of DeepSeek and delete the previous one? How should I do that?

    • @Zortec
      @Zortec  1 day ago +1

      Yes, you can have multiple versions installed. Each version is a separate download with its own command, so there's no need to delete a previous version. I just didn't mention this because I wasn't sure most people would want more than one.
      But if you want to delete a specific model manually, go to C:\Users\%YOURPCusername%\.ollama\models
      You will be able to tell which one to delete based on the size of the file.
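      If you'd rather not eyeball the folder, here's a rough Python sketch that lists the files by size (the path is assumed to be Ollama's default model store; `ollama list` and `ollama rm <model>` are the cleaner built-in way):

```python
from pathlib import Path

def blobs_by_size(root: Path):
    """Return (size_in_gb, path) for every file under root, largest first."""
    files = (p for p in root.rglob("*") if p.is_file())
    return sorted(((p.stat().st_size / 1e9, p) for p in files), reverse=True)

# Default Ollama model store (adjust if yours lives elsewhere).
store = Path.home() / ".ollama" / "models"
if store.exists():
    for gb, path in blobs_by_size(store)[:5]:
        print(f"{gb:6.2f} GB  {path.name}")
```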

  • @CoolShaikh007
    @CoolShaikh007 1 day ago +1

    System requirements? 🤡
    Also, I just want to analyse my documents, and the server error has been bothering me for a long time. So even the lowest model will be able to analyse documents locally?

    • @anibration
      @anibration 20 hours ago +1

      Yeah, it will work. I have a gaming laptop, and when I play a game like Minecraft Java the performance drops, but with DeepSeek the performance is actually pretty good.

    • @Zortec
      @Zortec  16 hours ago

      System Requirements 💪 are in the pinned comment above.

    • @Zortec
      @Zortec  16 hours ago

      Yes, you can analyse documents locally by following my tutorial.

  • @Arijitgg
    @Arijitgg 13 hours ago

    Hello! Which model should I choose?
    My PC specifications are:
    Intel i5 12th gen with an AMD RX 6600 (8GB VRAM) and 16GB RAM.

    • @Zortec
      @Zortec  11 hours ago +1

      System Requirements 💪 are in the pinned comment above.
      So for you I recommend 8B maximum. You can take your chances with 14B or 32B, but they will be really slow to use!

  • @w4lt3r19
    @w4lt3r19 7 hours ago

    Is there a way to turn off deep thinking on the 14B model, like you can on the DeepSeek website?

    • @Zortec
      @Zortec  6 hours ago

      It's programmed into the model file, so it's static. I'll let you know if I figure this out.

  • @shnz_edits
    @shnz_edits 9 hours ago

    I am going to run the 1.5B model, but what's the difference between these models? Will the 1.5B model give me inaccurate responses, or maybe not respond at all, because it lacks enough info?

    • @shnz_edits
      @shnz_edits 9 hours ago

      By the way, I am going to run it on my smartphone; my phone runs Gemma-2.2b-it perfectly fine, so I hope I can run this too.

    • @Zortec
      @Zortec  9 hours ago

      Correct. Due to the lack of information, many answers will be incorrect or outdated. It can do general things pretty well, but I still think 7B would be a much better option; it just depends on whether your device can handle it.

    • @shnz_edits
      @shnz_edits 8 hours ago

      @@Zortec Okay, I got the DeepSeek R1 7B model on my phone and it's kinda dumb - it can't tell whether 9.11 or 9.9 is bigger - and it's very slow.
      I downloaded another model called "Qwen2.5-3B-Instruct" and it got the maths question correct, and it's faster.

    • @Zortec
      @Zortec  7 hours ago

      Yeah, that is strange; however, keep in mind that 7B is only about 1% the size of the complete 671B model.
      The more parameters, the better the answers.
      The Qwen2.5-3B-Instruct model might be more efficient and accurate at mathematical reasoning, often outperforming larger models on those tasks, but this doesn't represent the true power of DeepSeek R1.

    • @shnz_edits
      @shnz_edits 2 hours ago

      @@Zortec Does the official DeepSeek website use the R1 671B-parameter model?

  • @Marc-andreCournoyer
    @Marc-andreCournoyer 58 minutes ago

    How do I install it on drive D:?

  • @markrosenberg4369
    @markrosenberg4369 1 day ago +2

    Can you send it attachments?

    • @anibration
      @anibration 20 hours ago +3

      Yes, you can if you install AnythingLLM as shown in the video.

    • @Zortec
      @Zortec  16 hours ago +1

      Yes you can - like documents and spreadsheets :)

  • @WhiteScreen-i7v
    @WhiteScreen-i7v 17 hours ago

    My laptop is an i5 13th gen with 16GB RAM and an RTX 4050 with 6GB VRAM. Which one should I use?

    • @Zortec
      @Zortec  16 hours ago +1

      System Requirements 💪 are in the pinned comment above.

  • @pranavsaini69
    @pranavsaini69 3 hours ago

    So would it automatically use both the DeepThink and Search features if connected to the internet, and get the latest information? Also, the DeepSeek website doesn't let you attach a file when using Search - is that possible when running locally? And is there any chance the 7B-parameter model runs on my Intel Iris Xe graphics with 8GB GPU RAM and 16GB normal RAM?

    • @Zortec
      @Zortec  3 hours ago

      Why not try it out and find out? I left the System Requirements in the pinned comment.

    • @pranavsaini69
      @pranavsaini69 3 hours ago

      OK, thanks for the reply.

  • @bilalirfan2558
    @bilalirfan2558 6 hours ago

    Hey great tutorial but I have a few questions and I was hoping you could answer them.
    I have Dell Latitude 5310 laptop, below are my specs
    Processor Intel(R) Core(TM) i7-10610U CPU @ 1.80GHz 2.30 GHz
    Installed RAM 16.0 GB (15.7 GB usable)
    System type 64-bit operating system, x64-based processor
    Pen and touch Touch support with 10 touch points
    Q1: Which model is suitable for these specs?
    Q2: Which model is AnythingLLM using?
    When you asked in the console "Who are you?", it said "DeepSeek-R1", but after you installed AnythingLLM it answered "I'm Gen". So I was confused about whether AnythingLLM is also using the DeepSeek-R1 model or not.
    Q3: One last question - if I install any other model instead of DeepSeek-R1, would that also work with AnythingLLM?
    I'm a beginner so I asked pretty basic questions I guess, would really appreciate your reply. Thank You.

    • @Zortec
      @Zortec  4 hours ago +1

      Yeah, to be honest I was also confused when it said Gen. I think it's because AnythingLLM has a built-in prompt that tells it to say it's Gen; however, it is the Ollama DeepSeek R1 model. Just to be sure, I'll look into it again for you.
      Yes, if it's an Ollama model then it should work with AnythingLLM.
      For you, I recommend starting with 1.5B and seeing how that goes; then you can also get 7B and try that out.

    • @bilalirfan2558
      @bilalirfan2558 4 hours ago

      Okay got it!
      Thanks for your response, much appreciated!! 👍🏻

  • @f4xalif
    @f4xalif 18 hours ago

    Does it work without internet?

    • @Zortec
      @Zortec  16 hours ago

      Yes, it does :)

  • @CHADAFGHAN
    @CHADAFGHAN 1 day ago +1

    THANKS

    • @Zortec
      @Zortec  1 day ago

      thanks my afghan friend :)

  • @mkcricket6876
    @mkcricket6876 16 hours ago

    Bro, can we change this model's coding?

    • @Zortec
      @Zortec  16 hours ago +1

      I guess you could. I haven't tried it myself, but you can get the R1 model alongside the coding model.

  • @josephseger6053
    @josephseger6053 1 day ago

    Does this learn locally?

    • @Zortec
      @Zortec  1 day ago

      Good question. I am not sure.
      I do think it can self-learn within the chat you're using, though.

    • @Zortec
      @Zortec  1 day ago +1

      This is what I found:
      "DeepSeek R1 model's parameters remain static, and it does not continue to learn or adapt in real-time during usage.
      Therefore, when running DeepSeek-R1 locally, it operates as a "fixed" model without the ability to perform reinforcement learning or self-improvement during its deployment. The RL component is exclusive to the training environment and is not a feature of the deployed model, regardless of the platform."

  • @Arunnesar
    @Arunnesar 1 day ago +1

    A few questions: Can we add an image to ask questions about, like in ChatGPT? Is it possible to access the internet to get the latest updates, like ChatGPT's search online? What about the token system - are there any limits on how many questions we can ask? Is it possible to upgrade a DeepSeek model from a lower one to the next size up - say I downloaded 7B at the start, can I upgrade easily or do I need to redo all the steps again? How do we create our own GPTs like in ChatGPT - can we do that in DeepSeek (locally)? What is the difference between the DeepSeek models 2B, 7B, 14B, 32B...? I know it's billions of parameters or something like that - does a higher number mean slower responses or less knowledge? Please answer all these questions, and if you don't mind, make a more detailed video about this 👍😃👍

    • @anibration
      @anibration 20 hours ago +2

      Idk how to answer all the questions, but install AnythingLLM and a new DeepSeek model; in the settings you can just select the new model - you don't need to download AnythingLLM again. Also, if you get WebUI it can access the internet, but idk how to install that.

    • @Arunnesar
      @Arunnesar 16 hours ago +2

      @@anibration Thanks for the input; if you find anything, please let me know.

    • @Zortec
      @Zortec  16 hours ago +1

      Thanks for your questions, Arunnesar.
      1. Can we add images to ask questions, like in ChatGPT?
      At the moment, DeepSeek R1 and most local LLMs are built for text-based interactions. They don't handle image inputs unless they're specifically designed as multimodal models, so for now adding images to your questions isn't supported.
      2. Is it possible for DeepSeek R1 to access the internet for the latest updates, like ChatGPT does?
      DeepSeek R1 operates offline and doesn't have built-in internet access. However, with some technical know-how, you can integrate external search APIs to fetch real-time information. This would involve additional setup and programming.
      3. How does the token system work? Are there limits on how many questions we can ask?
      LLMs process text in units called tokens, which can be as short as a single character or as long as one word. Each model has a maximum context length, typically ranging from 2,048 to 4,096 tokens. This limit applies to the combined length of your input and the model's response. While there's no hard cap on the number of questions you can ask, longer conversations might require trimming earlier parts to stay within the token limit.
      4. Can I upgrade from a lower DeepSeek model to a higher one without redoing all the steps?
      Yes, upgrading is straightforward. You can download the new model weights and load them into your existing setup. Just make sure your hardware can handle the increased demands of the larger model.
      5. Is it possible to create custom models like GPTs in ChatGPT with DeepSeek locally?
      Creating custom models involves fine-tuning the base model on specific datasets to tailor its responses. With DeepSeek R1, you can perform fine-tuning locally if you have the necessary computational resources. This allows you to adapt the model to specific tasks or domains.
      6. What's the difference between DeepSeek models like 2B, 7B, 14B, 32B, etc.?
      The numbers indicate the number of parameters in billions. Generally, more parameters mean the model can understand and generate more complex responses. However, larger models also require more computing power and memory. It's a balance between performance and resource availability.
      7. Does a higher parameter count mean faster responses or better knowledge?
      Larger models usually have a better grasp of language and can provide more detailed answers. However, they might respond more slowly due to the increased computational load. Optimizations like model quantization can help speed things up, but there's always a trade-off between size, speed, and performance.
      I hope this helps clarify things!
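      A rough sketch of the trimming described in point 3 (the 4-characters-per-token figure is a common rule-of-thumb estimate, not DeepSeek's actual tokenizer):

```python
def rough_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token on average."""
    return max(1, len(text) // 4)

def trim_history(messages: list[str], max_tokens: int) -> list[str]:
    """Drop the oldest messages until the conversation fits the context window."""
    kept: list[str] = []
    budget = max_tokens
    for msg in reversed(messages):  # newest messages get priority
        cost = rough_tokens(msg)
        if cost > budget:
            break
        kept.append(msg)
        budget -= cost
    return list(reversed(kept))

history = ["old question " * 50, "old answer " * 50, "latest question?"]
print(trim_history(history, max_tokens=60))  # → ['latest question?']
```

      This is why long chats can "forget" their beginning: older messages get trimmed away to stay under the model's context limit.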

    • @Arunnesar
      @Arunnesar 10 hours ago +1

      @@Zortec 🫡 Thanks for responding. 2) Is it possible to make a video on getting DeepSeek R1 to access the internet? 5) How do I make a custom DeepSeek for separate purposes like script-making, generating ideas, etc.? And about the token system - can you make a detailed video on these topics? It would be helpful! (What can I do with an LLM (AI) that has no internet connection?!)

  • @notdenisxd
    @notdenisxd 2 days ago +1

    Next, just make a video on how to install Notion - nerds with a MacBook will follow.

    • @Zortec
      @Zortec  2 days ago

      ahahah good idea XD

    • @notdenisxd
      @notdenisxd 1 day ago

      @@Zortec ohhhr nooo my chatgpt said Bruh it is now gen za

    • @Zortec
      @Zortec  1 day ago

      gen ZA???

  • @michaelh9667
    @michaelh9667 1 day ago

    bro send me a link to HL3 :)

    • @Zortec
      @Zortec  1 day ago

      you want Half Life 3??

    • @michaelh9667
      @michaelh9667 1 day ago

      @Zortec who doesn't 🤣