How to choose a GPU for Computer Vision? | Deep Learning, CUDA, RAM and more

  • Published: 27 Sep 2024

Comments • 38

  • @dimitrisspiridonidis3284
    @dimitrisspiridonidis3284 2 years ago +9

    don't forget, more memory in many cases means more speed, because you can use bigger batch sizes
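A rough back-of-the-envelope sketch of this point: per-step GPU memory grows roughly linearly with batch size, so a bigger VRAM pool allows a bigger batch per training step. The fixed model cost and per-image activation cost below are made-up placeholder numbers, not measurements; profile your own model for real values.

```python
def max_batch_size(vram_gb, model_gb=2.0, per_image_mb=150.0):
    """Rough estimate of the largest batch that fits in GPU memory.

    Assumes a fixed cost for weights/optimizer state (model_gb) plus a
    per-image activation cost (per_image_mb); both are placeholder values.
    """
    free_mb = (vram_gb - model_gb) * 1024
    return max(int(free_mb // per_image_mb), 0)

# Under these assumptions, a 12 GB card fits more than twice the batch of a 6 GB card:
print(max_batch_size(6))   # 27
print(max_batch_size(12))  # 68
```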

  • @cyberhard
    @cyberhard 2 years ago +3

    Do you need a GPU? Yes.
    Nvidia or AMD? Nvidia. Unless you're strictly going to develop using PyTorch; then you can use AMD and ROCm.
    How much RAM? As much as you can afford.

  • @ilkay3359
    @ilkay3359 2 years ago +1

    Hello sir, honestly, people who live in Turkey can rarely afford these components, because of currency issues.
    Still, thank you. I'll wait for your Keras, DNN, machine learning, and model training videos. One more thing: recently I figured out that I could not use .pt files in my code. Maybe you could make a video about YOLOv5 PyTorch, deploying the pretrained model in our code in PyCharm. Thank you again.

  • @SoundChaser_
    @SoundChaser_ 1 year ago

    Thanks for the explanations!

  • @pankajjoshi4206
    @pankajjoshi4206 1 year ago

    Sir, which graphics card should I buy in 2023? I am doing a project on live face detection in a shopping mall, using deep learning, PyTorch, OpenCV, and YOLO. Thank you

  • @gplgomes
    @gplgomes 2 years ago +1

    So will we need two GPUs, one for the monitor and the other for deep learning?

    • @pysource-com
      @pysource-com  2 years ago +1

      Nope, the same GPU can be used for training while at the same time driving the monitor.
      Usually normal computer usage (with a browser open and some other light programs) takes only a few hundred megabytes of GPU memory and approximately 10-15% of GPU utilization
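To see these numbers on your own machine, `nvidia-smi` can report per-GPU memory use. A minimal sketch that parses its CSV output; the query flags are standard `nvidia-smi` options, and the sample string lets the example run without a GPU:

```python
import subprocess

def gpu_memory_used_mb(smi_output=None):
    """Used GPU memory in MiB per device, parsed from nvidia-smi CSV output.

    With smi_output=None the real nvidia-smi binary is invoked; pass a
    captured string instead to parse it directly (handy for testing).
    """
    if smi_output is None:
        smi_output = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=memory.used",
             "--format=csv,noheader,nounits"],
            text=True,
        )
    return [int(line.strip()) for line in smi_output.splitlines() if line.strip()]

# Sample output from a machine with one GPU and a browser open:
print(gpu_memory_used_mb("512\n"))  # [512]
```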

  • @maloukemallouke9735
    @maloukemallouke9735 1 year ago +1

    Thank you so much.
    How about CPU and memory?
    i9 or Ryzen? 64 GB or 128?

    • @KiraSlith
      @KiraSlith 9 months ago +1

      (Speaking from limited experience here, new field, not a lot of documentation) If you're just building your rig for AI training/use and nothing else, system RAM over 32 GB won't have much effect unless you're using LLM models with a C++ back end, or some extremely complex CV models. It's all about that VRAM pool size.

    • @maloukemallouke9735
      @maloukemallouke9735 9 months ago

      @@KiraSlith Thank you, it's for intensive training

  • @FederickTan
    @FederickTan 5 months ago

    Is there a big difference in performance and speed in AI tasks like Stable Diffusion and video rendering between the RTX 4080 Super and the RTX 4090? Which one should I buy, as I seldom play games? Or should I wait for the 5090 at the end of the year? I am not a video editor and don't hold any job related to designing or editing, just a casual home user.

    • @pysource-com
      @pysource-com  5 months ago +1

      Yes, there is quite a big difference between the 4080 Super and the 4090 in terms of speed and also memory.
      If you're not tight on budget I would definitely go for the 4090

  • @shankn3520
    @shankn3520 1 year ago

    What about laptops? The laptop RTX 3060 is 6 GB and not 12 GB. So are you saying purchasing a laptop with an RTX 3060 is of no use?

  • @muhammadsabrimas2016
    @muhammadsabrimas2016 1 year ago

    Does an LHR graphics card affect deep learning?

  • @Mr_K283
    @Mr_K283 2 years ago

    Can you use a laptop with Nvidia Quadro T1000 graphics for computer vision?

    • @pysource-com
      @pysource-com  2 years ago

      It's not ideal. There are 2 types of Nvidia T1000 (one 4 GB and the other 8 GB). Both are low on memory. 12 GB is recommended to fully use your laptop for Computer Vision

    • @pysource-com
      @pysource-com  2 years ago

      If you need it only to run models in realtime they're fine, but not for training

    • @Mr_K283
      @Mr_K283 2 years ago

      @@pysource-com thank you.

    • @Mr_K283
      @Mr_K283 2 years ago

      @@pysource-com but can it be used for any other machine learning or neural networks apart from computer vision?

    • @pysource-com
      @pysource-com  2 years ago +1

      @@Mr_K283 yes, it will work with all the deep learning frameworks: TensorFlow, Theano, PyTorch, Keras, and others

  • @zainbaloch5541
    @zainbaloch5541 2 years ago +6

    CUDA cores are one thing, but these days we also consider Tensor cores, and I have a suggestion for those who may want to buy a GTX 1080 Ti. My suggestion is to go for the RTX 2060 instead, as it has 240 Tensor cores compared to no Tensor cores in the GTX 1080 Ti. In my opinion, the RTX 2060 is the right option at the right amount of money!

    • @pysource-com
      @pysource-com  2 years ago +2

      I totally agree with this. The RTX 2060 is the best buy for price/performance

    • @rock53355
      @rock53355 1 year ago

      @@pysource-com How does one use the tensor cores in an RTX card? Is it 'cuda' in pytorch?

    • @michal5869
      @michal5869 1 year ago

      @@pysource-com What? The RTX 3060 12GB VRAM is much faster for DL, by about 20%, and now cheaper by 30%; 7 months ago I bet the same. The best guide is a test, not cores or tensors, because memory speed is as important as core speed, technology, etc.
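On the tensor-core question above: in PyTorch there is no separate "tensor core" API; Tensor cores are engaged automatically when matmul-heavy ops run in reduced precision, usually via `torch.autocast`. A minimal sketch; on a machine without a GPU it falls back to CPU bfloat16 autocast so it still runs:

```python
import torch

# Tensor cores are used automatically when matmuls run in float16/bfloat16;
# torch.autocast is the usual way to opt in. Without a CUDA device this
# sketch falls back to CPU bfloat16 autocast so it runs anywhere.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.bfloat16

a = torch.randn(64, 64, device=device)
b = torch.randn(64, 64, device=device)

with torch.autocast(device_type=device, dtype=dtype):
    c = a @ b  # on an RTX GPU in float16 this matmul runs on tensor cores

print(c.dtype)  # torch.float16 on GPU, torch.bfloat16 on CPU
```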

  • @Mr_K283
    @Mr_K283 2 years ago +1

    Is it the dedicated VRAM that should be more than 4 GB?

  • @kdzvocalcovers3516
    @kdzvocalcovers3516 10 months ago

    3070 Ti or 2080: if you're not creating huge-sized pics, these cards work just fine, obviously more for hobbyists, but not suitable for the giant projects that the pros create. The 3060 and 2060 are ultra slow and have inferior VRAM, tests have shown. You never mentioned CUDA cores or 256-bit bandwidth, not to mention the much faster GDDR6X VRAM. Something like the 3080 Ti is the minimum; the 4080 is the sweet spot: 16 GB VRAM, large bandwidth and fast VRAM with tons of CUDA cores

  • @mpoltav2
    @mpoltav2 1 year ago +1

    What about the 1060 3 GB?

  • @liftup8895
    @liftup8895 2 years ago +1

    Hi, how can we size the requirements if we want to run object detection from multiple IP cameras? Thank you

    • @pysource-com
      @pysource-com  2 years ago

      Hi, it depends on the specific case, as there are many ways to approach this scenario:
      - a lighter model, so that the GPU can handle more streams at the same time
      - lowering the FPS for each stream (if we don't need many FPS), so that we can process more streams together
      - a multi-GPU setup to handle many streams at the same time
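The trade-offs above can be sized with simple arithmetic: divide the detector throughput one GPU can sustain by the frame rate each camera actually needs. The throughput numbers here are hypothetical placeholders; benchmark your own model to get real ones.

```python
def max_streams(gpu_fps_capacity, fps_per_stream):
    """How many camera streams one GPU can serve, given the total detector
    throughput it sustains (frames/s) and the rate each stream needs."""
    return gpu_fps_capacity // fps_per_stream

# Hypothetical GPU that runs the detector at 120 FPS in total:
print(max_streams(120, 30))  # 4 cameras at full 30 FPS
print(max_streams(120, 10))  # 12 cameras if 10 FPS per camera is enough
```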

    • @liftup8895
      @liftup8895 2 years ago

      @@pysource-com Thank you very much. One more question: I tried running object detection using the GPU, and the FPS is much better, but detection is not working; it was working before (using the CPU). What am I missing? Thank you
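A frequent cause of the problem described above is a device mismatch: after moving inference to the GPU, the model and every input tensor must be on the same device, and outputs must come back to the CPU before OpenCV can draw them. A hedged PyTorch sketch of the pattern, with a stand-in convolution instead of a real detector:

```python
import numpy as np
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

model = torch.nn.Conv2d(3, 8, 3).to(device).eval()   # stand-in for a detector

frame = np.zeros((224, 224, 3), dtype=np.uint8)      # e.g. a frame from OpenCV
x = torch.from_numpy(frame).permute(2, 0, 1).float().unsqueeze(0)
x = x.to(device)          # forgetting this .to(device) is a classic bug

with torch.no_grad():
    out = model(x)

detections = out.cpu().numpy()   # back to CPU before drawing with OpenCV
print(detections.shape)          # (1, 8, 222, 222)
```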

  • @gplgomes
    @gplgomes 2 years ago +1

    Is a GPU for mining the same as one for deep learning?

    • @xcastel6234
      @xcastel6234 1 year ago

      Technically no, but a GPU can have multiple uses.

  • @CaleMcCollough
    @CaleMcCollough 1 year ago

    If I could only afford $150 I would get a 3050 8GB. I personally use two 3060 12GB cards and a 3090 24GB, but I would go for the 4060 Ti 16GB for $500 if I couldn't upgrade my power supply. The 3090 24GB is a good deal, but the 4090 is way more power efficient, so you'll end up paying the difference in the power bill.

  • @amrzakaria5290
    @amrzakaria5290 1 year ago

    Good information, you help a lot.

  • @RossTang
    @RossTang 2 years ago

    I have two RTX A2000 6GB cards; will it be considered 12GB of memory and double the cores?

    • @pysource-com
      @pysource-com  2 years ago +1

      Hi, nope, it will consider them as 2 separate GPUs with 6GB each. It's still good because working with multiple GPUs divides the work among them, significantly speeding up the training, though 6GB is still low on memory and will be a limit depending on the model you're going to train.

    • @RossTang
      @RossTang 2 years ago

      @@pysource-com yes. I can't train YOLOv7 with my cards.

    • @Mr_K283
      @Mr_K283 2 years ago

      Please, is it the dedicated VRAM that should be more than 4 GB?