I highly appreciate that you didn't pollute the video with too many deep learning concepts. The main focus should be: "you know deep learning, you're familiar with the concepts and maybe another framework, but you want to get started with PyTorch, and here is what you should know"
Thank you!
Exactly! So concise!
Pollute??? 🤣🤣🤣
Best tutorial I have ever gone through. To the point, No fluff! Congrats on building such a neat video!
This video is recommended to everyone who is starting with PyTorch. With my 5 years of experience in this field, I can assure you that this video will greatly sharpen your understanding.
I am not smart enough to understand any of this :(
@@imveryhungry112 Maybe you need to start from the ground up or need some extra explanation. Try a video with more explanation, or maybe try ChatGPT, although it can give incorrect answers. You'll understand it with time. Some of us understand things differently. 🧐
+1, this was a great refresher for me.
Thanks, watching it now
This is a 6-month course in one package. Thank you.
This 50-minute video is better/more productive than a whole 24-hour video... if you know, you know
😂😂😂
Exactly @@deepak_sharma_z
++++++++++++++
u mean freecodecamp? 🤣
no Daniel Bourke was harmed 😂
This is the best crash course I have seen online; I was able to write my own model for signal processing.
Wow interesting
Great tutorial, it took me around 2 hrs to complete while asking ChatGPT for help throughout, but now I understand it all quite well. Thanks a lot
At 20:20, don't make the mistake I did of writing w = w - learning_rate * w.grad, as it basically creates a new w and messes up the autograd stuff (sorry if I'm using the wrong terminology). To keep the update in-place you can also write w.sub_(learning_rate * w.grad)
The model is not doing any gradient descent: from epoch 0 onwards w = 2 and the loss is always 0.
print(w.grad) is always tensor(0)
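For anyone who runs into this: a minimal sketch of the manual training step, roughly following the video's y = 2x example (the numbers and variable names here are my own, not the notebook's). The key part is doing the update in-place inside torch.no_grad() and zeroing the gradient afterwards:

import torch

X = torch.tensor([1., 2., 3., 4.])
Y = torch.tensor([2., 4., 6., 8.])           # targets for y = 2 * x
w = torch.tensor(0.0, requires_grad=True)
learning_rate = 0.01

for epoch in range(100):
    y_pred = w * X                           # forward pass
    loss = ((y_pred - Y) ** 2).mean()        # MSE loss
    loss.backward()                          # populates w.grad
    with torch.no_grad():
        w -= learning_rate * w.grad          # in-place update keeps w a leaf tensor
        # w = w - learning_rate * w.grad     # <-- this rebinds w to a new, non-leaf tensor
    w.grad.zero_()                           # reset the gradient for the next epoch

print(w)                                     # close to 2.0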
Clear and efficient, no BS, thank you very much for this very concrete lesson!
This is a great, quick tutorial for someone with some experience in python and in other deep learning frameworks like Keras but looking to expand into PyTorch. You don't waste any time! I found myself frequently pausing the video while following along, so it took a good 5 hours for me to get through this 50-minute video. It was time well spent, though.
The learning curve may be a little steep for someone just starting out with deep learning, but then such people usually won't be using PyTorch right away.
I have been working in the DL sphere for 6 years, and this is a golden tutorial! Well done!
A clear, precise, concise tutorial, superb work thank you very much
This is art! Short, sharp and to the point!
In step 4, First Neural Net, the code breaks on the line "example_data, example_targets = examples.next()"; it throws an AttributeError because instead of examples.next() it should now be next(examples)
or examples.__next__()
Appreciate your efforts on the video. Pretty comprehensive and no fluff!
This is an excellent video; it told me what I wanted and needed to know, efficiently. It was so condensed that I probably spent about 5 hours on it, because I wanted to run it on my computer, see some of the partial outputs, and play around. But now I feel like I get how PyTorch workflows go. I have not found PyTorch as intuitive as TensorFlow, though there are a lot of really great things about how it works (I learned programming from people who did it old school). Thank you very much for making this!
Awesome!!! Highly recommended!!
I usually work with TF, but due to some research work I have to learn PyTorch!!
This tutorial is like getting the big-picture idea of coding with PyTorch!!
Bravo!!
I don't comment on videos, but for this one I have to. This is the definition of a crash course: everything you need to know is in here. Thanks so much, this has really given me confidence in PyTorch.
ikr, this is the best beginner PyTorch tutorial I've seen, super clear and straight to the point, best vid out there for ppl with an understanding of how a simple NN works in terms of the math
Perfect, thanks a lot! I've learned so much from this tutorial.
We're glad it helped!
@AssemblyAI I'll always come back here for a refresher.
This was a wonderful crash course for new beginners like me! Thank you!
Thank you for sharing this video. The explanation was fantastic and incredibly helpful!🙌
Very helpful
Well explained, keep it up!
Thank you very much, Patrick!!! You considered my request from the previous video!!! Thank you so much!! It's very helpful for students like me
You're most welcome!
Awesome video! Best tutorial on PyTorch!
Great video! Very informative
Thank you so much for this tutorial!!!
super clear thanks!!
Thanks man!
No problem!
Very well illustrated! Thanks
At 30:57 I got an error: AttributeError: '_SingleProcessDataLoaderIter' object has no attribute 'next'
I fixed it by changing the line to:
example_data, example_targets = next(examples)
I had the same problem. Thanks!
or by calling examples.__next__() if you're on Python 3
@37:30 Should it be argmax instead of max, to give a label id from 0 to 9?
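Not the author, but both work: torch.max with a dim argument returns the max values and their indices, and the indices are exactly what argmax gives, i.e. the label ids 0-9. A quick sketch (the logits below are made up):

import torch

outputs = torch.randn(4, 10)                        # fake logits for a batch of 4 images
values, predictions = torch.max(outputs, dim=1)     # returns (values, indices) along dim 1
print(predictions)                                  # predicted class ids 0..9
print(torch.equal(predictions, torch.argmax(outputs, dim=1)))  # True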
Great tutorial. Thanks for the amazing video!
Great course, well done!
Very cool. A pity that not more people get to enjoy this; a fashion influencer can easily get 100K views in 3 days.
Really great introduction!
Thank you!!! It would be awesome if you could add also some exercises!
Good tutorial. Two drawbacks:
1. The input dim has to be inferred.
2. Saving the model: what if the ConvNet requires some arguments in its __init__ function? That would mean the init args also need to be persisted.
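One common workaround is to save the constructor arguments right next to the weights, then rebuild the model from them before calling load_state_dict. A rough sketch (the ConvNet below is a made-up toy, not the one from the video):

import torch
import torch.nn as nn

class ConvNet(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=3)      # 28x28 -> 26x26 feature maps
        self.fc = nn.Linear(8 * 26 * 26, num_classes)   # classifier head
    def forward(self, x):
        x = torch.relu(self.conv(x))
        return self.fc(x.flatten(1))

model = ConvNet(num_classes=10)

# persist the init args together with the state_dict
torch.save({'init_args': {'num_classes': 10},
            'state_dict': model.state_dict()}, 'convnet.pth')

# later: rebuild the model from the saved init args, then load the weights
checkpoint = torch.load('convnet.pth')
restored = ConvNet(**checkpoint['init_args'])
restored.load_state_dict(checkpoint['state_dict'])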
I really enjoyed watching this video
You are the best thank you💪💪💪💪💪💪💪💪💪💪
Perfect, thank you.
Thank you, von Braun.
This is a great course!
Thanks a lot!
I have one question though: is it right that the test data comes from the same dataset but loaded again? So the test data has already been seen by the model? Wouldn't it be better if we split the dataset into a training and a test subset?
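In case this helps anyone with the same question: torchvision's MNIST ships separate train and test splits, selected with the train flag, so the test set is not the training data loaded a second time. A minimal sketch (standard torchvision usage, which may differ slightly from the exact notebook code):

import torchvision
import torchvision.transforms as transforms

train_dataset = torchvision.datasets.MNIST(root='./data', train=True,
                                           transform=transforms.ToTensor(), download=True)
test_dataset = torchvision.datasets.MNIST(root='./data', train=False,
                                          transform=transforms.ToTensor(), download=True)

print(len(train_dataset), len(test_dataset))  # 60000 10000 -- two disjoint splits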
Awesome tutorial, thx
Great tutorial, thank you for sharing!
Thanks
awesome crash course!
Thank you for the video
Loved it 💯
Thanks for the great explanation
Thanks a lot! Do you have videos similar to this but focusing on RNN, GRU, LSTM and Transformer using PyTorch?
Thanks a lot
Any reason why you didn't softmax the output layer in the MNIST Neural Net?
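Not the author, but the usual reason: nn.CrossEntropyLoss expects raw logits and applies log-softmax internally, so putting a softmax layer in the model would apply it twice. A tiny sketch (the logits and targets below are made up):

import torch
import torch.nn as nn

logits = torch.randn(4, 10)                 # raw outputs of the last linear layer
targets = torch.tensor([3, 7, 0, 9])        # class labels

loss = nn.CrossEntropyLoss()(logits, targets)   # no softmax needed in the model

# same thing done by hand: log-softmax followed by negative log-likelihood
manual = nn.NLLLoss()(torch.log_softmax(logits, dim=1), targets)
print(torch.allclose(loss, manual))         # True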
🤖 Feeling quite accomplished after training one neuron to output ŷ = 2x. But seriously, this was the best PyTorch tutorial in that it didn't gloss over all the prerequisite pieces like other videos do, leaving gaping holes that, once it's done, just leave you standing in the sh*t.
I am from a non-English-speaking country, and your voice is friendly to me.
But more importantly, this is a wonderful tutorial, thank you༼ つ ◕_◕ ༽つ
Thanks patrick
Thanks !
very helpful
22:47 Why is input_size also equal to n_features?
input_size is always equal to the number of features; it just says how many features each input sample has. If you meant to ask why the output size is also equal to n_features: it just so happens that the input had 1 feature and we were also predicting a scalar output, but that's generally not the case.
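A small sketch of the shapes, assuming a toy dataset like the one in the video (the numbers below are made up):

import torch
import torch.nn as nn

X = torch.tensor([[1.], [2.], [3.], [4.]])   # shape (n_samples, n_features) = (4, 1)
n_samples, n_features = X.shape

input_size = n_features                      # one feature per sample
output_size = 1                              # one scalar prediction per sample
model = nn.Linear(input_size, output_size)

print(model(X).shape)                        # torch.Size([4, 1])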
Fantastic tutorial, Patrick. Would you like to give a tech talk at the software company that I work for? It would be great to hear you talk :)
Does everyone start making these from scratch?
What if I want to make a draw_dot graph of this architecture?
Created a model that can recognize a word in a short audio file. But how do I use it on longer audio files to detect spoken words, and also have it tell the time at which they were spoken?
Chunk the audio file and store the text each time. At the end, join the text pieces and print them.
@@__________________________6910 What if the chunks get cut in the wrong place, e.g. 'hello' gets cut into 'hel' and 'lo'? Then how will it know whether it was 'hell', 'help', or 'hello'?
@@jawadmansoor2456 Good question; I ran into the same problem. But I think with this technique you can cut the chunks exactly where you want.
For example, say I have 6 minutes of audio and want roughly 30-second chunks. Instead of cutting at exactly 30 s, I allow each chunk to run longer, maybe 40-60 s. I create a spectrogram of that window and check whether there is any signal at the 30 s mark: if it's silent I cut there, otherwise I move the cut point a little further (30.1 s, 30.2 s, and so on) until I find the next silent moment, i.e. where there is no data. That way the chunk durations come out as 30 s, 30.2 s, 30.5 s, 32 s, 40 s, and so on, and a long audio file can be split into small chunks. If there is a better way, tell me; I'm also looking for a solution.
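A rough sketch of that idea: aim for ~30 s chunks but only cut where a short frame is quiet. It uses a simple energy threshold instead of a spectrogram, and the file name, frame size, and threshold are made-up placeholders to tune:

import torch
import torchaudio

waveform, sr = torchaudio.load("long_audio.wav")       # (channels, samples)
signal = waveform.mean(dim=0)                          # mix down to mono

target = 30 * sr                                       # aim for ~30 s per chunk
frame = int(0.05 * sr)                                 # 50 ms frames for the silence check
threshold = 1e-4                                       # "silent" energy level (tune this)

chunks, start = [], 0
while start < signal.numel():
    cut = min(start + target, signal.numel())
    # push the cut forward frame by frame until we land on a quiet spot
    while cut < signal.numel() and signal[cut:cut + frame].pow(2).mean() > threshold:
        cut += frame
    chunks.append(signal[start:cut])
    start = cut

print([round(c.numel() / sr, 1) for c in chunks])      # chunk lengths in seconds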
Thanks, I watched it until the end and rewrote most of the code in my IDE. Here is my criticism:
1. Sometimes it is hard to follow what you are saying; your pronunciation and speech tone are a bit unusual. Please work on your pronunciation.
2. You are not writing the code line by line, you are just showing it quickly. Writing and explaining the code line by line would make many points clearer. As it is, there are some lines of code where I did not understand why you wrote them that way, and I had to figure it out on my own.
Your critiques are harsh for someone passionately teaching an in-demand topic for free in a non-native language to reach a wider audience
Thanks for the awesome lecture :) What if I do not use shuffle in the train_loader?
23:35
just reading things out :)
8:39
I wish I was smart enough to understand this :(
Knowing the prerequisites has nothing to do with being smart :)
Idk what's wrong, but I got accuracy 10.27% in the First Neural Net 😂
same idk how to fix it
nvm I fixed it, spelling mistake -__-
21/06/2024: begin lesson
did u finish yet?
WORST LEC EVER
The MNIST example in the notebook now throws an error:
AttributeError Traceback (most recent call last)
in ()
36
37 examples = iter(test_loader)
---> 38 example_data, example_targets = examples.next()
39
40 for i in range(6):
AttributeError: '_SingleProcessDataLoaderIter' object has no attribute 'next'
All that is needed is this:
example_data, example_targets = next(examples)
YouTube university
Well explained, keep it up!
Great course, well done !!
Thanks