Check out the corresponding blog and other resources for this video at:
deeplizard.com/learn/video/6stDhEA0wFQ
So I can't use computer vision programs that require a GPU because I'm an AMD card user?
@@kudoamv Actually, you can, I think. AMD has invested in that field too; google "GPUOpen".
Great job speeding Jensen Huang up. xD
Can I use a 1050 Ti 4GB for data science?
Of course.
But it will be quite slow.
5:13 Ballmer ambush, panic clicking to skip (thank you for the awesome video)
lol. You are welcome!
This channel seriously deserves million subs.Have been watching many series from this channel.Great work !!!!! keep going I'm sure this channel gonna flow with lots of subscribers someday .
Thank you, Majeed! We're glad to hear you've been enjoying multiple series here, and we're happy to have you as an engaged member of the community! Always appreciate seeing your comments :)
Congratulations, you've impressed me. Very professional series. Right to the good stuff, clear and sharp voice, broad yet specific explanations.
This channel should have more subscribers, seriously
Amazing video and loved the short clips! Thank you!
The BEST VIDEO on this topic!
Beautifully done, Chris. Wow. Thanks. I learned a lot.
Rich, informative video!! No explanation is better than yours!!
Wow, I watched the first 4 videos of the PyTorch series and am impressed by how much time & effort you put into these tutorials. Thanks a lot.
Also, you have developed enormously (although the older tutorials were already very good)
Good overview. Also, having 8 cores won't necessarily speed up computation by exactly 8x, perhaps by 7x in practice.
I just wish you would mention that processors have SSE2, AVX2 and similar instruction sets that allow each core to do 8 summations/multiplications/shifts/etc. at a time, rather than one by one. This lets the CPU's registers process arrays in chunks of 8. Many C/C++ programmers don't know about these, and build programs that are doomed to underperform by default.
So I feel everyone is always unfair towards the CPU. Everybody points at the cores, but each core can (and should) use intrinsics, doing things in parallel.
Especially with various RNNs, where we only win if we move the entire algorithm to the GPU (to avoid data transfer bottlenecks) and when the RNN is decently wide in each layer for the GPU.
Also, the CPU is really flexible when it comes to 'if/else' or while loops, reacting faster and more nimbly when a branch occurs.
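For anyone following along from Python rather than C/C++, the same chunk-at-a-time idea is why vectorized NumPy code beats an element-by-element loop: NumPy's element-wise operations are compiled loops that can use SSE/AVX under the hood. A minimal sketch, assuming NumPy is installed (exact timings vary by machine):

```python
import time
import numpy as np

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)

t0 = time.perf_counter()
slow = [a[i] + b[i] for i in range(n)]   # one element per iteration
t1 = time.perf_counter()
fast = a + b                             # vectorized, SIMD-friendly compiled loop
t2 = time.perf_counter()

print(f"python loop: {t1 - t0:.3f}s, numpy vectorized: {t2 - t1:.4f}s")
```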
Hey Igor - Thanks for adding these details. Great stuff. Much appreciated! 🙏
Very professional video. Good information.
Beautifully explained.
I ended up here because my daughter is learning "AI" at high school, and now I need to understand how this all works to build her a PC.
😅😅
Did you build that PC just fine, or did you need further help?
Thanks for putting in all the efforts.
You are welcome!
Any idea how the RTX graphics cards and their tensor cores compare to the standard GTX GPUs? Is that something TensorFlow or PyTorch take advantage of?
Haven't seen the comparisons. And. Yes. www.geforce.com/hardware/technology/cuda/supported-gpus
Thank you very much, currently learning deep learning and this was perfect to explain why I need a good GPU
Hi Marcello, how's your deep learning experience going?
Thank you for sharing. Very helpful.
Hey James - You are welcome!
My girlfriend and I have been doing a lot of deep learning lately
you sure it's not just shallow computations ? :p
@@osumanaaa9982 🤣
I hope you are not hoping for any output.
@@Aditya_Kumar_12_pass RUclips should have a Haha react xD
How many layers for protection? Are you clear on how backpropagation is supposed to work? 😂😂😂
Nice video, I like all the graphics you used. Where do you find them?
Very Helpful Series
What I learned from this video is that NVIDIA GPUs got their speed from the CEO
Jesus @2:41 I spit out my water lol
Thank you. This series is so helpful for me
Glad to hear that, jesse! You're welcome!
brilliant explanation
PLEASE HELP. Is it something like -
"you have to download PyTorch with CUDA if you want to use the GPU, or else you will only be able to use the CPU"?
I am an AMD user.
CUDA is NVIDIA's platform and so only supports NVIDIA cards like the GTX GPUs. For AMD, the framework for parallel programming is OpenCL... which unfortunately does not have a development community as big as the CUDA community.
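If you do have an NVIDIA card, a quick way to confirm that your PyTorch install can actually see it is something like the following (a minimal sketch, assuming PyTorch was installed with CUDA support; on an AMD or CPU-only setup it simply falls back to the CPU):

```python
import torch

# Check whether this PyTorch build can reach a CUDA-capable (NVIDIA) GPU.
if torch.cuda.is_available():
    device = torch.device("cuda")
    print("Using GPU:", torch.cuda.get_device_name(0))
else:
    device = torch.device("cpu")
    print("No CUDA GPU found, falling back to CPU")

# Tensors run wherever you send them.
t = torch.ones(3, 3, device=device)
print(t.device)
```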
I can feel your pain 😔 😔
Maybe if we start telling people the brain is an app they will start using it.
That's some cringeworthy joke my grandma would share on FB.
@@clbl8706 actually, it's not a joke.
@@e4r281 Unlike you
ok commie
Great video
you just nailed it
I have an HP Envy Core i7 laptop with a GeForce RTX 2050 card; can it be used for machine learning tasks?
So which graphics card should I buy for deep learning?
😄very nice video
When multiple apps are using CUDA, how is that managed by the GPU? Can the GPU execute different kernels at the same time?
Did they remove all those stats/functions in a newer version of Cudo? Because I just recently downloaded it, and the only thing I can see on the screen is CPU, XMRig, h/s on the left and Payout Coin on the right. That's it! I'm using the CPU but want to use the GPU, but can't see any option. Please help if you can.
I'd like to know whether you can use a dedicated graphics card for deep learning when you don't have a CPU with an iGPU.
This would help me a lot with my screen cable management issue (I'm new to this)!
Thanks!
You have a mistake in the quiz section:
Q. Different PyTorch components are written in different programming languages. PyTorch is written in all of the following programming languages except?
Ans. Java (on the blog, the correct answer is showing as Python)
Hey Anuj - Thank you so much for pointing this out! I've fixed it. You may need to clear your cache to see the change.
Chris
Finally these nerds got the guts to call something a funny name: "embarrassingly parallel" xD
🤓
@@deeplizard Can the new Xe server GPU from Intel handle AI or deep learning workloads like an NVIDIA GPU?
Aren't GPUs used for image processing (i.e., conversion of binary code into graphics pixels)? If so, how can we use them for mathematical computations?
Anyone have that video link regarding "python is slow"?
Added it to the description. Here you go: ruclips.net/video/DBVLcgq2Eg0/видео.html
GIL is evil 😓
You could explain this whole AI trend 5 YEARS AGO
ty! Peeling the plastic off a brand new GPU is a good day, lol.
thank you
I agree, since Soumith has said it.
nVidia is holding back processing power in order to make selling their products sustainable.
This is important. A company's incentive to make a profit can be a double-edged-sword. Consider the same problem in healthcare or biotech.
@@deeplizard conflict of interests indeed, every software developer knows that LOL
5:25 I was like wtf. Can anyone share the link to the whole video? 😂😂 Man, I got excited and started shouting at home
It's a popular one. Google and you will find 😂
Sir, is there any way to do CUDA programming online? I mean, is any online compiler available now? My system doesn't support CUDA... please help
You can do it in Jupyter
At the research stage, I can see how Python is an acceptable choice. However, for production systems, Python is too large and not fast enough!
Although I'm still a bit confused, I just picked up some new knowledge as a layman.
Is a GTX 960M or 1050M worth using?
Subscribed.............
So when I play games with advanced AI, I should make sure my GPU is ready
Exactly. 🤖
excellent
Everyone wishes they saw your video 5 years back 😅
OH MY GOD WOW are you a lizard too? i love ai and stuff as well :D
🤣
Guys, don't use CUDA; use HIP so it runs everywhere,
or use OpenCL or SYCL, but don't get your software stuck on proprietary, platform-specific hardware and software
Thank you, thank you
I just learned the name of deep learning
😵
Can I combine an Nvidia GTX 1070 or higher with an AMD Ryzen 5?
no amd at the moment.
Exciting times, and if you read the WBW blog, thrilling times
I came here to learn how to utilize deep learning cores for training my own AI... I still don't know why I should buy cores I can't use.
😵
The forward pass can rely on matrix math, which can be run through CUDA (the software layer) and done on an NVIDIA GPU. The more GPU cores, the faster the process. A GPU with 100 cores will perform this step 10x faster than a GPU with 10 cores (in general...).
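A rough sketch of what that looks like in PyTorch (assuming a CUDA-enabled install; without a GPU the same math just runs on the CPU):

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# The kind of matrix math a forward pass is built from:
# a batch of inputs multiplied by a layer's weight matrix.
x = torch.randn(256, 1024, device=device)   # batch of 256 input vectors
w = torch.randn(1024, 512, device=device)   # weights of one layer

y = torch.matmul(x, w)   # dispatched through CUDA to the GPU when available
print(y.shape, y.device)
```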
wow!
You're such a genius!! Thanks
Lmao, I can't believe they actually named it embarrassingly parallel.
Never in my lifetime would I have ever imagined "Embarrassingly [something]" would be an actual technical term.
😅😅
It's a pity that AMD doesn't seem to support CUDA. Their new Big Navi cards look really nice apart from that
Does that mean a GPU with more CUDA cores is better for deep learning?
Yes (and more GPU RAM...)
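If you want to see what your own card offers, PyTorch can report it (a minimal sketch, assuming an NVIDIA GPU and a CUDA-enabled PyTorch build; note it reports streaming multiprocessors rather than CUDA cores directly):

```python
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print("GPU:", props.name)
    print("Memory (GB):", round(props.total_memory / 1024**3, 1))
    print("Streaming multiprocessors:", props.multi_processor_count)
else:
    print("No CUDA-capable GPU detected")
```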
Wow, was it this OK to do so much coke back in the day? 5:12 damn dude, take it down a notch
You need some music in the vid... attracts views... cool vid
What kind of music do you like?
@@deeplizard PewDiePie's "Bitch Lasagna"? jk... any relaxing music when you talk... when you're changing camera shots, etc... ;)
haha. That sent us on a tangent. Hadn't seen that before.🤣
I have a doubt: does Nvidia have a monopoly on such hardware? If not, does CUDA only work on Nvidia hardware?
Yes. Nvidia built CUDA. It only works with their hardware.
SUBBED !
EVERYBODY SUB THIS CHAN !
THIS ONE KNOWS HIS STUFF !
GO LOOK AT THE PLAYLIST LIB !
Thanks Robert! Note that there are two of us here. 🦎🦎
5:57 Tokyo Tech showing up is hilarious lol
I will now send this to anyone who asks why I bought a 3090! RIP wallet though
🤣💰
Jensen is better at 2x
Former CEO - and we can see why.
Though he did become a billionaire from his tenure at Microsoft 😄
Anyone here who bought $NVDA in 2018?
I really dislike CUDA because it's not open source and AMD is not able to use it; it makes development for both AMD and NVIDIA much harder.
He doesn't explain tensors very well, but overall a good job
"Deep learning" should not be on the title in my opinion.
CUDA not explained at all
CUDA is a software layer that interfaces with Nvidia GPUs to allow porting some problems (think forward pass) to the GPU, where they can be done in parallel. (Your PC has an Nvidia GPU; with software like PyTorch, you tell the PC that CUDA is available and to send certain operations to the GPU for parallel processing.) Vastly oversimplified.
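In PyTorch terms, "telling the PC CUDA is available and sending work to the GPU" comes down to a couple of lines (a simplified sketch, assuming a CUDA-enabled PyTorch install):

```python
import torch
import torch.nn as nn

# Pick the GPU if CUDA is available, otherwise stay on the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(1024, 10).to(device)        # move the layer's weights to the device
batch = torch.randn(64, 1024, device=device)  # put the input batch there too

output = model(batch)   # this forward pass runs on the GPU via CUDA when available
print(output.shape, output.device)
```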
I'm glad I don't need to listen to Jensen Huang.
You did a really bad job of explaining why GPUs are better for parallel computing than CPUs.