Machine Learning / Deep Learning Tutorials for Programmers playlist: ruclips.net/p/PLZbbT5o_s2xq7LwI2y8_QtvuXZedL6tQU
Keras Machine Learning / Deep Learning Tutorial playlist: ruclips.net/p/PLZbbT5o_s2xrwRnXk_yCPtnqqo4_u2YGL
I already asked this on another video, but just to cover as much ground as possible:
Could I normalize the weights to have mean 0 and variance 1 at weight initialization?
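The operation the question describes — rescaling freshly initialized weights to mean 0 and variance 1 — can be sketched in plain Python. The layer size and seed here are arbitrary, for illustration only:

```python
import random
import statistics

random.seed(0)

# Hypothetical layer: 4 inputs x 3 outputs, initialized uniformly.
weights = [random.uniform(-0.5, 0.5) for _ in range(4 * 3)]

# Standardize to mean 0, variance 1 (population statistics).
mean = statistics.fmean(weights)
std = statistics.pstdev(weights)
weights = [(w - mean) / std for w in weights]

print(abs(statistics.fmean(weights)) < 1e-9)       # True: mean is ~0
print(abs(statistics.pstdev(weights) - 1) < 1e-9)  # True: std is ~1
```

Note this only fixes the statistics at initialization; it doesn't keep the distribution of activations stable during training, which is what batch norm addresses.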
I am indebted to you for teaching me so much in one day. I would have kissed your hand in gratitude if you were in front of me. NNs are such a convoluted mess, but you have made things easier.
Could we make a game where AIs have their own lives, and we live as their family and social system with our friends?
I'm deeply impressed by the quality of your videos. Allow me to say that these are, by far, the most helpful video tutorials on Neural Networks. I seriously appreciate the time you spend researching such information and then putting it in such a concise, pleasant way that's also easy to comprehend. Trust me, without you I wouldn't have been able to understand what changes these parameters make in the network. Thank you very, very much for both the time and the effort you put into this! And please, please keep making more tutorials.
Also, I'd like to remark that the topics of these videos are sequential, so if you follow the playlist from the very beginning you'll absolutely be able to make sense of everything in the videos, regardless of your prior knowledge of Neural Networks. Besides, the Keras playlist is complementary and adds a lot to the learning experience.
All in all, this is - in one word - "professional work".
Wow kareem, thank you so much for leaving such a thoughtful comment! I'm very happy to hear the value you're getting from this series, and we're really glad to have you here!
i don't allow you to say..!!
That's the purpose of these *deep learning* videos: to leave you *deeply* impressed by the *learning* you get.
I spent several days reading articles trying to understand what Batch Norm really does, and then I found your video. Perfectly explained, thanks a lot!
God bless you, my dear teacher. I saw in every lesson that you put the whole ocean in a small jar. This is a unique quality, and very few teachers have it.
Thank you, Hafiz!
One of the few YouTube series I have completed in my life. Instead of beating around the bush, you kept it to the point with tons of info in just a few minutes. Hope to see more series like this.
Literally watched all 38 videos in one go. Thank you so much!
This is the video I was desperately searching the internet for to help me understand steps 2 and 3 of batch norm. Here it is, finally! Thank you so much for the great work; I really appreciate it. Such a simple, calm, and informative explanation of a very important topic.
Oh brother, I've found a 💰💰💰treasure💰💰💰
Finally completed the deep learning series. Thank you for such amazing videos and blogs, given away free on RUclips. It's great quality!!!
Top-notch, I finished it all. Kudos to the deeplizard team, love you all. Love you Mandy, your sweet voice keeps us going.
THANK YOU SO MUCH FOR THIS AMAZING PLAYLIST! One of the best channels for learning deep learning. Absolutely loved your content. It was explained in the easiest possible way and awesome graphical illustrations. You really worked hard on the editing! Thanks again!
Thanks, I'm writing my thesis thanks to your explanations!
I found, pure gold ... ! Great video ! I understood perfectly !
This is one of the most comprehensive videos I have ever watched.
Really, thank you, and I am looking forward to the advanced concepts.
worth watching all the videos because of the content delivery and quality. big thumbs up for the entire team
Just like all the other comments: I have just finished your video series and I am impressed by the quality of the explanations. Many videos go into tiny details way too fast, before making sure that everyone at least understands the terms. Kudos! I hope you make many more.
Thank you Robin! Much more content available on deeplizard.com :)
Great video! But from my understanding, only g and b are trainable. At 4:23, it is mentioned that the mean and std are trainable parameters as well ("these four parameters ... are all trainable").
Thanks Fernando, you’re right! The blog for this video has the correction :)
deeplizard.com/learn/video/dXB-KQYkzNU
came looking for this comment! thanks for stopping me losing my mind trying to reconcile this explanation to the paper
Thank you very, very much. I'm posting this comment in 2020, under house quarantine. I needed to learn about deep learning for my internship, and thanks to this playlist, I now have good knowledge of the fundamental theories of neural networks.
Wonderful!
I am really fascinated by the hard work that brings such quality to your videos! I would be really happy if you could make as much more content as possible. Channels like yours keep the spirits of students like us really high! Just one word to sum it up... OUTSTANDING!!
Thank you, Deeplizard!
The Machine Learning & Deep Learning Fundamentals playlist helped me understand the concepts of ML super easily.
Thank you so much :D
Great playlist!! I went through the entire Deep Learning playlist, and have to say it's probably one of the best at explaining deep learning in a simplistic way. Thanks for sharing your knowledge!! 👍
Completed the whole playlist. Now I am confident about the basics of neural networks. Thanks a lot for the great series!!
Hurray, Completed the series (The only series on RUclips which I have seen from the first video to last without skipping a second). Amazing job Deep Lizard Team. Highly Appreciated!
Now I am going to watch the Keras playlist, then the PyTorch series, and then Reinforcement Learning.
Congratulations! 🎉 Keep up the great work as you progress to the next courses!
The online tutorial is very useful and helped me understand the batch normalization concept in detail, which had confused me for a long time. Thanks very much for sharing.
You are welcome!
These are among the best tutorial videos I could find. The explanations are extremely lucid and so easy to understand. I really hope you expand your pool of videos to include other topics such as RNNs. You could also dedicate some videos to hyper-plane classifiers, SVMs, RL, and even some optimization methods. All in all, the set of videos is just amazing!
Wow. Such a nice explanation. Thank you!
This is the best intro to deep learning I have seen anywhere, be it a textbook or a video lecture series. You have definitely put serious effort and thought into breaking down this dense topic into bite-size tutorials with a logical chain of thought that is easy to follow. Thanks a lot :)
Thank you so much mandy... i have gone through all the videos... 😍😍😍 .
I love your tutorial. The illustration is just so concise and easy to understand. Thank you for all your effort in making these videos!
I think every machine learning specialist, even a highly specialized one, will find something new in your course. :) Great course, thanks a lot!
Cleared the concept. Thnx
I completed this series of videos; can't wait to watch more of your playlists!
Awesome job! See all of our deep learning content on deeplizard.com :)
Wow, this is awesome. Kudos to you! Perfect explanation. Was trying to understand batchnorm from some websites and articles, this was much better than any of them. Thanks!
Great quality content, subscribed ️🔥
Very Excellent, I hope you continue this series. Your explanation is so clear.
0:10 intro
0:30 normalize and standardize
1:25 why normalize
3:05 problem of large weights, and batch normalization
5:46 Keras
Thank you for your contribution of the timestamps for several videos! Will review soon for publishing :)
Wow, thanks for putting this up. You deserve every like and every subscribe. Great job.
Beautiful !! super clear !
thank you really you are the best teacher in the world. I appreciate your efforts
Happy to hear the value you're getting from the content, qusay!
@@deeplizard I am so happy for your reply to my comment ^_^
Thank you so much for your explanations! I'm writing my PhD thesis and your tutorial helped me a lot :)
Wonderful explanation
Thanks for all of your hard work in putting this series together. I just finished this last video & I can say that with your help I am much further ahead in understanding deep learning. God bless!
finding this channel has been a great help for my studies!
Thank you very much for this whole series! It was really enjoyable to watch and I learnt a lot!
Simple and lucid explanation. loved it. Thanks
Nice tutorial, clear, professional voice and animations !
Looking forward to more deep learning videos :)
(I'm aware of your Keras tutorial series and I'm going to watch it right now !)
Thank you, Jonathan! I'm glad you're liking the videos so far!
Very well explained!
The best explanation I have ever watched.
Thanks for the amazing series! I really enjoyed your videos! Keep up the good work! Hope to see more complex networks made simple by you!
Amazing explanation!
nice short video and great way of explaining!
I will follow this channel and watch more videos!
Keep up the great work
Great video, very clear and understandable. However, I want to point out some mistakes. In batch norm, only b and g are trainable, not m and s. Moreover, batch norm is applied after fully connected/convolutional layers but before activation functions. Therefore, it doesn't normalize the output of the activation function; it normalizes the input to the activation function.
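To make this correction concrete, here is a minimal pure-Python sketch of the batch norm forward pass for a single unit across a mini-batch. The sample values and the g and b settings are arbitrary placeholders; in a real network g and b are learned, while the mean and variance are computed from the batch itself:

```python
import statistics

def batch_norm(batch, g, b, eps=1e-5):
    """Normalize a mini-batch of pre-activations, then scale and shift.

    g (gamma) and b (beta) are the trainable parameters; the mean and
    variance are *computed* from the batch, not learned.
    """
    mean = statistics.fmean(batch)
    var = statistics.pvariance(batch)
    normalized = [(x - mean) / (var + eps) ** 0.5 for x in batch]
    return [g * x_hat + b for x_hat in normalized]

def relu(x):
    return max(0.0, x)

# Pre-activation outputs of one unit for a batch of 4 samples.
pre_activations = [2.0, -1.0, 0.5, 3.5]

# Batch norm is applied to these values *before* the activation.
normalized = batch_norm(pre_activations, g=1.0, b=0.0)
out = [relu(z) for z in normalized]
print(abs(statistics.fmean(normalized)) < 1e-9)  # True: mean ~0 before scale/shift
```

Note the ordering: the linear output is normalized first, and ReLU is applied to the normalized values, matching the "input to the activation function" point above.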
Very well explained
Loved your video. I am going to complete this series. Can you include RNNs, LSTMs and GRUs, and also complete the video series? I am looking forward to this as I start and complete this series.
Nice explanation. However, there is a small mistake you can correct. We batch normalize the outputs of a layer (Conv or Linear) before squashing them through the activation function. That way, the activations never overshoot or undershoot, leading to a stable output and easier convergence. This also allows us to use bigger learning rates.
I hope that helps.
The best tutorial that I've ever seen. Thanks!
AMAZING SERIES
According to the paper, batch norm is actually applied before the activation function, not after. For this reason, the authors even recommend dropping the bias parameter of the layer itself, because batch norm comes with its own learnable bias term. The output of batch norm then goes to the activation function.
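The "drop the layer bias" point can be demonstrated numerically: because batch norm subtracts the batch mean, any constant bias added by the preceding layer cancels out exactly. A small pure-Python sketch (the batch values and the 0.7 bias are arbitrary, for illustration only):

```python
import statistics

def batch_norm(batch, g=1.0, b=0.0, eps=1e-5):
    mean = statistics.fmean(batch)
    var = statistics.pvariance(batch)
    return [g * (x - mean) / (var + eps) ** 0.5 + b for x in batch]

# Pre-activations of one unit over a batch, without and with a layer bias.
z = [2.0, -1.0, 0.5, 3.5]
z_biased = [x + 0.7 for x in z]

# Batch norm subtracts the batch mean, so a constant layer bias cancels
# out; its role is taken over by batch norm's own learnable shift b (beta).
same = all(abs(a - c) < 1e-9 for a, c in zip(batch_norm(z), batch_norm(z_biased)))
print(same)  # True
```

Since the bias has no effect on the output, keeping it would only waste parameters and gradient updates, which is why the recommendation is to disable it on layers followed by batch norm.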
Thanks for the amazing explanation!! By far the best tutorial video I've seen!
Great content. Like many others have said, one of the best series on ML out there.
That was very helpful, thanks
These videos are SO helpful, thank you
I'm deeply thankful 🤓
I have successfully binged (across 2 weeks) this playlist and found them really helpful! Thank you for all you do and keep up the good work. Hope to watch more vids getting added here or elsewhere on the channel. Lots of love:)
Thank you, and great work! Check out the homepage of deeplizard.com to see all other DL courses and the order in which to take them after this one!
Just WoW! Amazing content. Please make series on Explainig research papers
just woaaa ..! Please keep making these videos, it's by far the best explanation I got here
Ohhh what a wonderful narrative. I really like the way you explained it. Thank you and I’ve just Subscribed to your channel👍🏻
I spotted a slight issue in the article for this video.
At the end of the article, it says "I’ll see ya in the next one!", with a link to the Zero Padding article, but by that point that article has already been covered.
I really enjoy your courses so far, by the way. I've stopped and started a few times with studying ML in the past, but this has been a pleasure to go through.
Fixed, thanks Chris! :D
I've rearranged the course order since the initial posting of these videos/blogs, so I removed the hyperlink.
Just wanted to say kudos and thanks so much for your awesome series :D I have learned so much! Now I'm off to your Keras w/TF series :)
Great job getting through this course!
@@deeplizard Thanks! moving to your Deep Learning and Keras series next :)
I really enjoyed learning with your videos. Can you please create videos on RNNs?
great video. precise and concise. Thanks!
Excellent series!
Clearly explained, good animation, covered most areas. Thanks
I watched this complete deep learning playlist. It was absolutely amazing. My suggestion is to please add some videos about RNNs, and also make separate playlists about Supervised Learning, Unsupervised Learning, Imitation Learning, and Deep Reinforcement Learning. Thank you, ma'am.
You're welcome, I'm glad you enjoyed it! We have some of the topics you've suggested already available in other courses. Check them out here:
deeplizard.com/courses
great series
You have amazing teaching skills, madam.
thank you
As always, very well done and clear, thank you!!
I am sorry that this is the last video in the playlist I want more😢
This is a really good set of videos on neural networks. I liked it a lot and enjoyed watching it. Great work. Just one thing I would like to suggest: you have explained backpropagation really well, better than most I have seen, but it would be really helpful for understanding backpropagation if you could add a small numerical example of the backpropagation calculation.
gentle and to the point. Thank you.
Wonderful work. Thank you for setting up all this content.
awesome...I am going to watch the playlist....
Thanks a lot.
Thanks for the video. So do we have to normalize the data before feeding it to the model, or does batch normalization handle that inside the model?
This is a gem! Thank you very much!!!
Thank you so much for your great work ❤
Very good explanation. Watched this whole playlist. Thanks for making understanding DL so easy and fun. Moreover, your funny stuff made me laugh.
so helpful!
I finished all 38 videos. Great great great explanation!
Can you also do some sample projects?
Great job finishing the course, Zehra! Many projects are included in our other various deep learning courses. Check out all the courses listed on the home page of deeplizard.com. We give the recommended order for which to take the courses there as well.
@@deeplizard Sure! I will check the website. I also recommend it to my friends. Thank you, Mandy!
Amazing and concise video, thank you!
Thank you for the amazing explanation.
I have seen all your videos. I am a Ph.D. student and have truly learned many things from you. If you have time, please teach how a variational autoencoder can be used with a CNN.
this was an amazing explanation. thank you.
Thanks, Nika!
Once again, you did it!
Ahh, explained in human language. Thank you :). What I don't understand is where to insert those layers? Intuition tells me just everywhere, right?
Btw, the scale problem and such equations are called stiff equations (the NN is an equation solved using numerical methods). But another problem is denormal (subnormal) numbers close to 0, which can cause 200-300x slowdowns even on modern CPUs.
Thanks, peteabc1!
Yeah, you would want to insert batch norm after your "typical" layers, like dense, conv, etc. that are followed by an activation. From my experience, determining when/where to add batch norm involves testing and analyzing my training results after adding or removing more batch norm layers. But yes, you certainly can add a batch norm layer after _all_ of these typical layers and observe how your model performs.
Also, thanks for the stiff equations info!
Love your channel
@deeplizard please do a series on transfer learning, or more in-depth teaching on NLP/CV :)
{
"question": "What kind of parameters are g and b?",
"choices": [
"Learnable Parameters",
"Hyperparameters",
"g learnable and b hyperparameter",
"g hyperparameter and b learnable "
],
"answer": "Learnable Parameters",
"creator": "Sanaulla Haq",
"creationDate": "2021-07-19T09:27:20.333Z"
}
Thanks
i started to fall in love with the voice
I haven't watched the video yet, but I know it's good.
I was right.
Could you please clarify why one would normalize a ReLU layer, as shown in the example? In the discussion you suggest that it is to prevent cascading effects due to overly large weights. However, later you state that BN normalizes the output of the unit. How are the two related? Further, why normalize a ReLU unit that is already bound to [0, 1]?