Don't forget: more memory in many cases means more speed, because you can use bigger batch sizes.
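To make the memory/batch-size link concrete, here is a back-of-the-envelope sketch. All numbers (image size, the activation fudge factor) are assumptions for illustration, not measurements; real usage depends on the model, optimizer and framework.

```python
# Back-of-the-envelope VRAM estimate for a training batch.
# The activation_multiplier is a made-up fudge factor standing in for
# all the intermediate feature maps a CNN keeps around for backprop.

def batch_vram_gb(batch_size, height=640, width=640, channels=3,
                  activation_multiplier=50, bytes_per_value=4):
    """Rough activation memory for one batch, in GB."""
    values = batch_size * channels * height * width * activation_multiplier
    return values * bytes_per_value / 1024**3

for bs in (4, 8, 16):
    print(f"batch {bs:2d} -> ~{batch_vram_gb(bs):.1f} GB of activations")
```

The point is just that activation memory grows linearly with batch size, so a card with more VRAM lets you push larger batches before running out.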
Do you need a GPU? Yes.
Nvidia or AMD? Nvidia. Unless you're strictly going to develop using PyTorch; then you can use AMD with ROCm.
How much RAM? As much as you can afford.
Hello sir, honestly, people who live in Turkey can rarely afford these components, because of currency issues.
Still, thank you. I'll wait for your Keras, DNN, machine learning and model training videos. And one more thing: I recently figured out that I could not use .pt files in my code. Maybe you could make a video about YOLOv5 in PyTorch, deploying the pretrained model in our code in PyCharm. Thank you again.
Thanks for the explanations!
Sir, which graphics card should I buy in 2023? I am doing a project on live face detection in a shopping mall, using deep learning with PyTorch, OpenCV and YOLO. Thank you
So will we need two GPUs, one for the monitor and the other for deep learning?
Nope, the same GPU can be used for training while it drives the monitor at the same time.
Normal computer usage (with a browser open and some other light programs) typically uses only a few hundred megabytes of GPU memory and roughly 10-15% of GPU compute.
Thank you so much.
How about CPU and memory?
i9 or Ryzen? 64 GB or 128?
(Speaking from limited experience here; it's a new field and there isn't a lot of documentation.) If you're building your rig for AI training/use and nothing else, system RAM over 32 GB won't have much effect unless you're running LLMs with a C++ backend or some extremely complex CV models. It's all about that VRAM pool size.
@@KiraSlith Thank you, it's for intensive training.
Is there a big difference in performance and speed for AI tasks like Stable Diffusion, video rendering, etc. between the RTX 4080 Super and the RTX 4090? Which one should I buy, given that I seldom play games, or should I wait for the 5090 at the end of the year? I am not a video editor and don't hold any job related to design or editing, just a casual home user.
Yes, there is quite a big difference between the 4080 Super and the 4090 in terms of speed and also memory.
If you're not tight on budget I would definitely go for the 4090.
What about the laptop RTX 3060? It is 6 GB and not 12 GB. So are you saying that purchasing a laptop with an RTX 3060 is of no use?
Does an LHR graphics card affect deep learning?
Can you use a laptop with Nvidia Quadro T1000 graphics for computer vision?
It's not ideal. There are 2 types of Nvidia T1000 (one 4 GB and the other 8 GB). Both are low on memory. 12 GB is recommended to fully use your laptop for computer vision.
If you need it only to run models in real time, they're fine, but not for training.
@@pysource-com thank you.
@@pysource-com But can it be used for any other machine learning or neural networks apart from computer vision?
@@Mr_K283 Yes, it will work with all the deep learning frameworks: TensorFlow, Theano, PyTorch, Keras and others.
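Whichever framework you pick, it's worth verifying that it actually sees the GPU before training. A minimal PyTorch check might look like this (assuming PyTorch is installed; on a CPU-only machine it simply reports no GPU):

```python
import torch

# Reports whether PyTorch can see a CUDA GPU. If this is False,
# training will silently run on the CPU instead.
has_gpu = torch.cuda.is_available()
print("CUDA available:", has_gpu)
if has_gpu:
    print("Device:", torch.cuda.get_device_name(0))
```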
CUDA cores are one thing, but these days we also consider Tensor Cores, and I have a suggestion for those who may want to buy a GTX 1080 Ti: go for an RTX 2060 instead, as it has 240 Tensor Cores compared to none in the GTX 1080 Ti. In my opinion, the RTX 2060 is the right option for the right amount of money!
I totally agree with this. The RTX 2060 is the best buy in terms of price/performance.
@@pysource-com How does one use the Tensor Cores in an RTX card? Is it 'cuda' in PyTorch?
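For what it's worth, Tensor Cores are engaged when matrix multiplications and convolutions run in reduced precision, which in PyTorch usually means autocast rather than anything explicit. A minimal sketch (assuming PyTorch is installed; it falls back to CPU bfloat16 so it also runs on a machine without a GPU):

```python
import torch

# On a CUDA-capable RTX card, autocast runs eligible ops in float16,
# which is what lets the hardware dispatch them to Tensor Cores.
# On CPU we fall back to bfloat16 so the sketch still runs anywhere.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.bfloat16

x = torch.randn(64, 128, device=device)
w = torch.randn(128, 32, device=device)

with torch.autocast(device_type=device, dtype=dtype):
    y = x @ w  # low-precision matmul; Tensor Cores on an RTX GPU

print(y.dtype, tuple(y.shape))
```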
@@pysource-com What? The RTX 3060 12 GB is about 20% faster for DL and now about 30% cheaper; 7 months ago I said the same. The best guide is an actual benchmark, not core or Tensor Core counts, because memory speed matters as much as core speed, technology, etc.
Is it the dedicated VRAM that should be more than 4 GB?
A 3070 Ti or 2080: if you're not creating huge images, these cards work just fine, though obviously more for hobbyists; they are not suitable for the giant projects that pros create. The 3060 and 2060 are ultra slow, and tests have shown they have inferior VRAM. You never mentioned CUDA cores or 256-bit bandwidth, not to mention the much faster GDDR6X VRAM. The 3080 Ti is the minimum; the 4080 is the sweet spot: 16 GB of VRAM, large bandwidth and fast VRAM with tons of CUDA cores.
What about a 1060 3 GB?
Hi, how can we size the requirements if we want to run object detection on multiple IP cameras? Thank you
Hi, it depends on the specific case, as there are many ways to approach this scenario:
- a lighter model, so that the GPU can handle more streams at the same time
- lowering the FPS of each stream (if we don't need many FPS), so that we can process more streams together
- a multi-GPU setup to handle many streams at the same time
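The trade-offs above can be sketched with some rough arithmetic. The inference times here are made-up placeholders; you'd measure your own model's latency on your own GPU:

```python
# Rough capacity estimate for serving multiple IP camera streams on one GPU,
# assuming frames are processed one at a time with no batching.

def max_streams(inference_ms, target_fps):
    """How many camera streams a single GPU can roughly sustain."""
    frames_per_second = 1000 / inference_ms  # total GPU throughput
    return int(frames_per_second // target_fps)

# Lighter model at full FPS vs. heavier model at reduced FPS:
print(max_streams(inference_ms=10, target_fps=10))
print(max_streams(inference_ms=25, target_fps=5))
```

Lowering the per-stream FPS or switching to a lighter (faster) model both increase the stream count, which is exactly the first two options listed above.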
@@pysource-com Thank you very much. One more question: I tried running object detection using the GPU. The FPS is much better, but detection is not working; detection was working before (using the CPU). What am I missing? Thank you
Is a GPU for mining the same as one for deep learning?
Technically no, but a GPU can have multiple uses.
If I could only afford $150 I would get a 3050 8 GB. I personally use two 3060 12 GB cards and a 3090 24 GB, but I would go for the 4060 Ti 16 GB at $500 if I couldn't upgrade my power supply. The 3090 24 GB is a good deal, but the 4090 is way more power efficient, so with the 3090 you'll end up paying the difference in your power bill.
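The power-bill point can be made concrete with some rough arithmetic. The wattages and electricity price below are assumptions for illustration; plug in your own numbers:

```python
# Rough yearly electricity cost of a GPU under training load.
# All figures are illustrative assumptions, not measured values.

def yearly_cost(watts, hours_per_day, price_per_kwh=0.15):
    kwh_per_year = watts / 1000 * hours_per_day * 365
    return kwh_per_year * price_per_kwh

# Assumed effective draws for comparable work: 3090 ~350 W, 4090 ~300 W
# (the more efficient card finishes the same job using less energy).
print(f"3090: ${yearly_cost(350, 8):.0f}/year")
print(f"4090: ${yearly_cost(300, 8):.0f}/year")
```

Over a few years of heavy use, that gap can offset part of the purchase-price difference.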
Good information, you helped a lot.
I have two RTX A2000 6 GB cards. Will they be considered as 12 GB of memory and double the cores?
Hi, nope, they will be considered as 2 separate GPUs with 6 GB each. It's still good, because working with multiple GPUs divides the work among them, significantly speeding up the training, though 6 GB is still low on memory and will be a limit depending on the model you're going to train.
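To see why the 6 GB per card stays a limit, here is the data-parallel picture in numbers. The model and batch sizes are made up for illustration:

```python
# Data parallelism splits the *batch* across GPUs, not the model:
# each GPU keeps a full copy of the weights and optimizer state,
# so only the activation share shrinks. Illustrative numbers.

model_gb = 5.5   # assumed: full model + optimizer state per GPU
batch_gb = 3.0   # assumed: activations for the whole batch
n_gpus = 2

per_gpu = model_gb + batch_gb / n_gpus
print(f"per-GPU memory needed: {per_gpu:.1f} GB")
print("fits in 6 GB each:", per_gpu <= 6.0)
```

Adding a second card halves the activation share but the model copy doesn't shrink, so a model that's too big for one 6 GB card is still too big for two of them.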
@@pysource-com Yes, I can't train YOLOv7 with my cards.
Please, is it the dedicated VRAM that should be more than 4 GB?