This is the best course on Neural Networks and PyTorch I have ever watched in my life, and I'm barely done with it. I could cry. Everything is explained in detail, amazing. I am completely new to Neural Networks and I feel somewhat comfortable already. Thank you!
Amazing, freecode should reserve a prize for their efforts.
Yes
Definitely Yes!
I'm impressed by the quality of this course, he knows the topic very well and explains it clearly!
This tutorial is complete and materials are available across multiple platforms. Thank you!
Perfect course for someone who wants start learning PyTorch. Amazing work, thanks a ton!
I really liked the second-to-last part of the training program, where the instructor brought in a game changer that accelerates a data science model's performance from 50% to over 95% using various techniques. ..Well, what this means is that data science models can now train faster and predict more accurate insights.
Great job, instructor; some more use cases in the medical field would be appreciated in your next video. Thank you so much, great work!!
What is the use of building intuition by trying out different hyper parameters, when we have Gaussian process optimization that can learn a function of hyperparameters and estimated loss and accuracy?
can you please explain the disadvantages of this method?
This way trial and error approach is replaced by a mathematical model ?
This video was uploaded 3 hours ago and has a duration of 9 hrs. Folks have already commented praising the video. I'm wondering if they have really watched it fully and commented, or are just posting randomly.
Millennials like attention
@@alangobryan5022 But the only thing is it wouldn't take 9 hrs to tell if a girl is beautiful or ugly
Maybe they developed 3x speed and used it to watch this video.
kidding bro
I usually hate ads on YouTube videos, but when it comes to ads on FCC, I gladly watch them cause I know it's gonna help them provide us with more free quality content like this. Like seriously, I tried to learn some PyTorch from a book and wasn't making much progress, but then this video really propelled me forward.
If you don't click through the advertisement to its website, the channel doesn't earn anything ;)
@@maanasverma7659 oh I see...thanks for letting me know
I love it every time he says "r"
Summary : Coding = Food + Digestion = $hit
Artificial Intelligence = Food + $hit = digestion.
basically
It is perfect with the Indian accent
Hahaha
@bronskie1974 it's easy being racist, ain't it?
@bronskie1974 you're welcome! (I'm the tutor)
@@AakashNS Thanks a lot for making this tutorial man. Very comprehensive. I love it.
@@AakashNS You are a hero. Thanks for the tutorial!
Wow, first time I heard an Indian voice on this channel.
You'd be surprised how amazingly Indians can teach; nothing better than a YouTube tutorial made by an Indian
Fantastic content, thank you so much for sharing this course for free.
One small suggestion: I understand it's additional work, but it would be great to update the video to match the latest notebook content. For instance, notebook 3 is now at version 32, which is substantially different from the video content, and in order to get the matching version you have to revert the notebook back to version 6. Note this is not a complaint, just a constructive suggestion for improvement. Thank you again for sharing this.
Cool, found someone who might be trying ML stuff in Houdini
Hi.
Do you know how to change to a previous version of the notebook? For example, how to change to version 6 of notebook 3 as you mentioned in the comment.
Thank you
I just started to learn PyTorch this evening, phew, just got some good material to learn from, thanks :)
I would recommend that viewers check out the notebooks in the description. They're pretty good.
Perfect course for someone who wants to start learning PyTorch.
Wow, I've been waiting for this. I'm 13 and learning AI. I've already learned the basics of Python, but this is my first time watching such a thorough tutorial.
One of the best videos for students getting started with PyTorch
Pls generate English subs so we can learn even more easily! Anyway, this course is amazing!
Why? His English is perfect.
This is the first time I am learning deep learning, and it is explained so beautifully by Aakash Sir. Thank you, Sir.
Why is this channel so good! Loved the Jovian notebooks!:)
Been looking forward to this forever..thanks ..saved
WOW! I would say this is the best and most comprehensive tutorial I have ever seen! Thanks
It is perfect within quarantine time
This is what i need to pass my semester Thank you deeply
I'm confused a bit. Are you calling the "test set" a "validation set"? Because we're not updating hyperparameters according to the validation loss. And the figure at 6:48:55 is a train-test error figure.
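For what it's worth, the usual distinction is that the validation set guides hyperparameter choices during training, while the test set is held out for a single final evaluation. A minimal sketch of carving a validation split out of the training data (toy tensors standing in for the course's MNIST loaders; the shapes are illustrative):

```python
import torch
from torch.utils.data import TensorDataset, random_split

# Toy dataset standing in for MNIST (hypothetical shapes).
data = TensorDataset(torch.randn(1000, 784), torch.randint(0, 10, (1000,)))

# Carve a validation set out of the training data, keeping the
# real test set untouched for the final, one-time evaluation.
train_ds, val_ds = random_split(
    data, [800, 200], generator=torch.Generator().manual_seed(42))

print(len(train_ds), len(val_ds))  # 800 200
```

With this split, hyperparameters are tuned against `val_ds`, and the test set only measures the finished model once.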
One video to watch before bedtime!
🤣🤣
The timestamps are a little bit wrong
1:33:04 - Image Classification with Logistic Regression
Really well done. Aakash explains the concepts very well, and the code is easy to follow. Unfortunately, the Residual Network (Part 6) file from Jovian doesn't match the video - seems the Jovian version doesn't use the FastAI modules. But the explanation of ResNets was well done. Highly recommended!
Thanks for providing a video on PyTorch, Aakash sir
we should support this channel more!
I wish I could like this course 1,000,000 times; it is very helpful
Thanks a lot
About the tricky generator loss function: 9:04:21
Thank you so much. This helped me to transition from TensorFlow. SO well explained!
Wdym, helped transition from TensorFlow?
What's with the discrepancies between notebook 04 on Jovian and what Aakash has in the video? The MnistModel class has a bunch of extra methods on Jovian, and it doesn't have a loss_batch function anywhere.
Edit: I needed to look at a previous version. Specifically version 10, it seems.
Thanks!
saving for future...
I just paid $10 for an online Python machine learning course, not knowing there's a deep learning tutorial free on YouTube and without ADS! SUBSCRIBED!
Hi Aakash, thanks for the video. I am still watching. One doubt/clarification: at around 01:20:00, regarding SGD, I think in SGD we process one example/observation at a time and then take the gradient for that observation. When we process in batches, it is called mini-batch/batch gradient descent.
I was also thinking the same
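For reference: `torch.optim.SGD` itself just applies the plain gradient-descent update; whether training is "stochastic", mini-batch, or full-batch is decided entirely by the DataLoader's `batch_size`. A minimal sketch (toy data; the sizes are illustrative, not from the course):

```python
import torch
import torch.nn.functional as F
from torch.utils.data import TensorDataset, DataLoader

ds = TensorDataset(torch.randn(100, 3), torch.randn(100, 1))

# batch_size=1      -> classic stochastic gradient descent (one example per step)
sgd_loader = DataLoader(ds, batch_size=1, shuffle=True)

# batch_size=16     -> mini-batch gradient descent
minibatch_loader = DataLoader(ds, batch_size=16, shuffle=True)

# batch_size=len(ds) -> full-batch gradient descent
batch_loader = DataLoader(ds, batch_size=len(ds))

# torch.optim.SGD is agnostic: it just applies w -= lr * w.grad per step.
model = torch.nn.Linear(3, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.01)

for xb, yb in minibatch_loader:
    loss = F.mse_loss(model(xb), yb)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

So the video's usage (batches of examples with `torch.optim.SGD`) is what the comment calls mini-batch gradient descent; the optimizer name covers all three cases.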
I highly recommend people check out the newer, 24-hr course on PyTorch over this one. You need to do your due diligence when it comes to learning deep learning. This course skips over a thorough explanation of concepts such as convolutions, backpropagation, and gradient descent, to name a few. I started learning PyTorch with this course, but I felt lost because he skipped over the WHY and was just focusing on the HOW.
That 24-hour course by Jason, is it better than this one? Can we skip this entirely, or should we go through both?
When I try doing from fastai.basic_data import DataBunch on Kaggle, it throws an error that the module fastai.basic_data doesn't exist. What should I do?
What's the difference between this course and the ongoing live pytorch course on Saturdays?
This is an amazing series. Looking forward to another new series on NLP with tensorflow/pytorch that covers RNNs, transformers and more. Thousands of thanks
Yeessss
What is the use of building intuition by trying out different hyperparameters when we have Gaussian process optimization, which can learn a function from hyperparameters to estimated loss and accuracy?
Can you please explain the disadvantages of this method?
This way the trial-and-error approach is replaced by a mathematical model.
You are a great teacher. I got the eureka moment when you showed the Linear Regression implementation: when you showed the 3D graph with one axis for the output prediction and two axes for the input, with the hyperplane and with the linear algebra matrix, everything fell into place in my head. Best-explained example of training a linear regression task, thanks. Best regards, Olle Welin
A 9:41-hr-long video uploaded 8 hrs ago, and ppl are already praising it. How can they in such a short time?? This is beyond machine learning.
2x
I'm starting right now. Anyone else with me, to track our progress?
Could you also share the assignment solutions? Appreciated!
We are supposed to figure it out ourselves, but I must say I enjoy getting stuck ... brain just fries.
Hi Jovian, I followed the same tutorial and got 95% accuracy.
For a first-timer like me, things got really complicated from the latter half of the Logistic Regression part. :(
I fell in love with the instructor
The third notebook doesn't match the one in the video at some points.
Why does SGD work so poorly on the CNN from part 5a? Or is that just me? Using Adam I get similar results, but with SGD it pretty much just bounces around whatever accuracy it's already at.
In the linear regression example, a graph is shown of loss vs. a weight.
If the loss is quadratic with respect to any weight, then why is the graph not a parabola but such a curve?
Can you please do a Complete systematic step by step video on Android Development too.... Please
And thanks for all the video guys
At 28:00-29:00 he talked about how tensors are better for the GPU and that NumPy arrays and tensors have the same functionality. So why should we use NumPy if tensors are better?
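A common answer (not from the video): NumPy still matters because most of the scientific Python ecosystem speaks NumPy arrays, and conversion between the two is cheap. On CPU, `torch.from_numpy` even shares memory with the array. A quick sketch:

```python
import numpy as np
import torch

a = np.array([1.0, 2.0, 3.0])

# On CPU, from_numpy shares memory with the array (no copy):
t = torch.from_numpy(a)
t[0] = 10.0
print(a[0])        # 10.0 -- the NumPy array sees the change

# Converting back is just as cheap:
b = t.numpy()
print(b.shape)     # (3,)
```

So tensors are the right type for training, while NumPy stays useful for loading, preprocessing, and handing data to libraries that don't know about PyTorch.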
Thank you for helping us to learn, sir
FCC keeps getting better and better..
great course, comprehensible and fluent
Very excited to see this...Thank you
Can you please do a full MEAN stack course?
The code used in the vid is different from the notebook shared in the comment section.
Can we get the notebooks of the code used in the vid?
Thanks
Small suggestion: instead of following a 10-hour video, it's more convenient for me to follow a playlist broken up into small topics.
some people like to binge watch tech videos.....so it's better this way....and if it's split up...we have to search for each and every bit...
Follow Aakash's channel. He does have them split up per lesson.
You can find the playlist here
ruclips.net/p/PLWKjhJtqVAbm3T2Eq1_KgloC7ogdXxdRa
@@shivamkhamble thank you!
Guys be thankful that there are people like this who share their knowledge
Third stamp - 2:34:47
Good course! Thank you for putting on youtube!
I couldn't find the records tab in Jovian. Is anyone else facing the same problem?
But you not only moved the tensors to the GPU but also the previously saved models with their weights and biases? Do the parameters need to be moved to the GPU as well when making evaluations, as you did in the video?
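To the question above: yes, the model's parameters must live on the same device as the inputs, and `model.to(device)` moves every registered parameter and buffer in place. A minimal sketch (falls back to CPU when no GPU is available; the layer sizes are illustrative):

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(4, 2)
model.to(device)               # moves every parameter and buffer in place

x = torch.randn(8, 4, device=device)
out = model(x)                 # works: inputs and weights share a device

# If the inputs and parameters were on different devices, PyTorch would
# raise a RuntimeError about a device mismatch.
print(next(model.parameters()).device == x.device)  # True
```

The same applies after loading a saved state dict: either load it with a `map_location` or call `.to(device)` on the model before evaluating.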
For the image classification section, the code shown in the video is quite different from the one posted on Jovian. Did you update it? Which one should I follow? Thank you. Btw, great video!
5:24:00 why don't we sum the channels in the beginning?
Thank you.
After three hours, this video has already taught me a lot.
Thank you so much for this tutorial. Excellent tutor.
Course Prerequisites??
- Basic Python programming
- Basic linear algebra (matrix multiplication)
- Basic knowledge of calculus (differentiation)
Such a wonderful explanation of the basics of ML. Thanks a lot
The code links in the description box are not opening. Can you please look into that?
This course is very useful, thank you so much for sharing.
You guys are just awesome
What are the prerequisites??
Python
Linear algebra (basics)
Great content!! Sincere thank you.
Edit: The explanation of the training steps is repetitive. In each section he explains almost the same training steps. The presenter could instead have spent more time on model architectures.
freeCodeCamp should get a Nobel Peace Prize
Support from IRAN
Huge respect to them. Free and unlimited.
How's your nuclear weapons project going? do you have enough food to last?
@Mhammad Umair Racist against who? I just wish Iranians have enough food to last, since the economy is collapsing, the regime is shooting protesters on the streets, and wastes all of their money on a failed nuclear weapons program.
Aakash is a godsend!
Wow, fantastic. You're really a great person.. You taught almost 99% of PyTorch.. for free.... I have shared this with many of my colleagues... Thanks a lot
Can you do more in Java? Thank you!
Have you seen this Java playlist ruclips.net/p/PLWKjhJtqVAbnRT_hue-3zyiuIYj0OlpyG
Java definitely needs more Spring Boot / Spring MVC :/
Thanks for the great lecture. Please use a high-pass filter on the audio to get rid of the annoying puffs.
Thanks for your efforts! This has been a great learning experience!
Why is w.grad.zero_() done?
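For context: PyTorch accumulates gradients into `.grad` across `backward()` calls, so without zeroing, each training step would add to the previous step's gradients instead of starting fresh. A tiny demonstration:

```python
import torch

w = torch.tensor(3.0, requires_grad=True)

(w * 2).backward()           # d/dw (2w) = 2
first = w.grad.item()        # 2.0

(w * 2).backward()           # gradients ACCUMULATE into w.grad
second = w.grad.item()       # 4.0 (2 + 2), not 2.0

w.grad.zero_()               # this is why training loops reset grads each step
after_reset = w.grad.item()  # 0.0

print(first, second, after_reset)
```

That accumulation is occasionally useful (e.g. simulating larger batches), but for a plain training loop you zero the gradients before each backward pass.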
I tried the second exercise (of the ones shown at 34:41), using the same x, w, b values as originally defined and letting y = torch.tensor([[w, w*x], [w*x+b, b]]). I keep getting an "element 0 of tensors does not require grad and does not have a grad_fn" error though; does anyone know how I could fix this?
same here
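A likely cause (assuming values like x=3, w=4, b=5 with requires_grad on w and b, as in the notebook): `torch.tensor()` copies the values and drops the computation graph, so the result has no grad_fn and backward fails. Building the matrix with `torch.stack` keeps the graph connected; a sketch:

```python
import torch

x = torch.tensor(3.0)
w = torch.tensor(4.0, requires_grad=True)
b = torch.tensor(5.0, requires_grad=True)

# torch.tensor([[w, w*x], ...]) would COPY the values and detach them
# from the graph. torch.stack preserves the graph instead:
y = torch.stack([torch.stack([w, w * x]),
                 torch.stack([w * x + b, b])])

y.sum().backward()           # reduce to a scalar, then backpropagate
print(w.grad, b.grad)        # tensor(7.) tensor(2.)
```

Here y.sum() = w + wx + (wx + b) + b = 7w + 2b for x = 3, so dw = 7 and db = 2, which is what `.grad` reports.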
I really didn't want to know about all that jovian stuff but eh it's okay if he wants to promote it
But it's good tho
Why transpose the (2,3) matrix? Why can't we generate a (3,2) matrix for the weights in the first place?
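Both layouts work; the (2,3)-then-transpose convention stores one row per output unit, which is also how `nn.Linear` keeps its weights. A quick sketch showing the two are equivalent (random illustrative shapes):

```python
import torch

torch.manual_seed(0)
x = torch.randn(5, 3)              # 5 samples, 3 input features

# Tutorial's convention (also nn.Linear's): one ROW per output unit.
w_rows = torch.randn(2, 3)         # (outputs, inputs)
out1 = x @ w_rows.t()              # (5, 2)

# Equally valid: store the weights pre-transposed and skip the .t()
w_cols = w_rows.t().contiguous()   # (inputs, outputs) = (3, 2)
out2 = x @ w_cols

print(torch.allclose(out1, out2))  # True
```

So the transpose is a convention, not a necessity; keeping rows-per-output just makes each row readable as "the weights of one output".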
You are doing an amazing job.
How do we know what needs to go to the CUDA device? Is it just any matrices that need CUDA for computation?
OCEAN OF KNOWLEDGE !! Keep bathing in it
Do you mind if I use your material to translate it into a Spanish video?
Please go ahead, Ivan! The knowledge and material are free to use and build upon. I'd really appreciate it if you could link back from your translation to the original notebooks. :)
At 34:37 I tried exercise #1 and kept getting a "grad can be implicitly created only for scalar outputs" error. Is this supposed to happen, or am I doing something wrong?
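That error is expected: `backward()` can only infer the output gradient when the output is a scalar. Either reduce the output to a scalar first, or pass an explicit gradient tensor. A sketch (toy values, not the exercise's exact ones):

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = x * 2                        # y is a vector, not a scalar

# Calling y.backward() here would raise:
#   "grad can be implicitly created only for scalar outputs"

# Option 1: reduce to a scalar first
y.sum().backward()
print(x.grad)                    # tensor([2., 2., 2.])

x.grad.zero_()

# Option 2: supply the output gradients explicitly
y = x * 2
y.backward(torch.ones_like(y))
print(x.grad)                    # tensor([2., 2., 2.])
```

Both options give the same gradients here, since summing is equivalent to backpropagating a vector of ones.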
Big shout out to the haters who always thumbs-down (👎) great YouTube videos they see....
Great video by the way... 👍👌
What would be the advantage of using PyTorch instead of TensorFlow or Keras?
In the logistic regression part, in the MnistModel class, how come we are not calling the forward method, and it is enough to write model(images)?
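The short answer: calling `model(images)` goes through `nn.Module.__call__`, which runs any registered hooks and then calls your `forward` for you, which is why `forward` is never invoked directly. A simplified stand-in for the tutorial's class (the real MnistModel has more methods):

```python
import torch
import torch.nn as nn

class MnistModel(nn.Module):     # simplified stand-in for the tutorial's class
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(784, 10)

    def forward(self, xb):
        # Flatten each 28x28 image into a 784-vector, then apply the layer.
        return self.linear(xb.reshape(-1, 784))

model = MnistModel()
images = torch.randn(4, 1, 28, 28)

# model(images) == nn.Module.__call__(model, images), which dispatches
# to model.forward(images) after running any registered hooks.
out = model(images)
print(out.shape)                 # torch.Size([4, 10])
```

Calling `model.forward(images)` directly would usually give the same result, but it bypasses hooks, so the `model(images)` form is the idiomatic one.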
Can you please make a video on image captioning in PyTorch in Jupyter?
Does this cover backpropagation?
Thank you, this is awesome!!
You're a legend bro!!