Check out the corresponding blog and other resources for this video at: deeplizard.com/learn/video/ZjM_XQa5s6s
Thanks, Daniel! We did go to university in the US, but we've graduated already :)
These videos are more useful than half a year of a university course on neural networks. Thanks for making them!
The way you explain and then finish with a code example is a really nice way to grasp the concepts!!!
I beg you! Keep making these videos. Your videos just light up my inner neural network.
My new favorite channel. It has saved me at times in my undergrad
I love the "Hey, what's going on, everyone?" at the beginning. :)
Great explanations, very clear and concise.
You are such an awesome teacher! I am a medical doctor with zero background in ML, and your playlists are my go-to place to grasp concepts before I dive in deep. I'm grateful.
Love from ZIMBABWE
0:19 What is max pooling?
5:42 Why do we use max pooling?
7:06 Examples of other types of pooling
7:28 How is it done in code?
Awesome explanation!
Added to the description. Thanks so much!
Having watched your great explanatory videos on CNN and Zero padding, I am actually going to give a thumbs up on every video of yours I see before I even start watching! :)
Have I found the best Deep learning channel on youtube? Um, I guess so!
I've always struggled to understand pooling and this to-the-point explanation was the missing piece in the puzzle. I cannot thank you guys enough for the great work and taking the time to explain everything in so much detail. I owe so much of my knowledge of Deep learning to this channel
0:48 intro
1:40 example
4:25 toy example
5:40 why max pooling
7:27 Keras code
This is the best channel for machine learning on youtube! Thank you so much you really helped me out when I was studying for my exams. Keep up the good work!
Glad to hear that Abdullah! Thank you!
Probably the most intuitive explanation I have ever seen =)
This video is a major reason why I got a job as a computer vision ML eng. Thank you a lot!
Woah, awesome! Thanks for sharing, Sam! Were you asked about how max pooling is implemented in your interview?
Hey first of all I thank you for uploading this series, secondly "Deeplizard sounds cool and unorthodox" and lastly I liked the way you structured this entire series, short and crisp at the same time easy to understand and lot to learn for a newbie like me. Keep up the good work.
Now this is as simple as it can be explained. Great work.
I'm blown away by how good these explanations are!
This is the best explanation video I have ever heard in my life.
I am currently binge-watching all your CNN videos; great work with top-quality content.
This is a great series! You do a wonderful job explaining and teaching. Thanks!
That's just the best explanation out there. Keep up the great work!
Thank you very much for this clear and helpful explanation.
Words fail to express my gratitude.
Thank you so much for putting together this series, it has really helped me with understanding concepts behind deep learning:)
Super clear and helpful video. Many thanks!
Thank you for making this excellent video!
Thank you for continuing the series!
For sure, Kotki!
I've just discovered your channel; the video is really clear, and your way of presenting things made it easy to understand.
Big thumbs up!
Thanks, Wael! Glad to have you here!
Extremely simple and easy to understand. Love it. Thank you.
Really good video, congrats!! Better than the TensorFlow guides.
Thanks for explaining. Really helpful and easy to understand!
Ms. Lizard... I love your videos. The explanations are very clear with neat illustrations/animations! :)
Best CNN tutorial ever... that girl rocks.
Thank you for these wonderful videos
Awesome again.. Deep learning is "simple learning" now with the way you explain 😊👍
Haha I like that, thanks!
Really nice explanation. Thanks!
Thanks for these great explanations
I really appreciate this awesome video. It's very helpful for my study.
Great explanation. Thank you very much.
Thanks for this video, it was super helpful!
Thank you for this great explanation!
Thanks, the visualizations are excellent.
Excellent and detailed explanation. Thanks!!
Thanks, chetan! Glad you liked it!
Your videos are amazing!! Keep it up :)
I don't think I've ever seen a youtube channel that beautifully sums up DL/ML concepts in a way that idiots and master coders can understand. I am genuinely disappointed that I didn't find your channel before I spent ages on reddit/stackoverflow! Hahah +1 Sub, Keep up the good work from all of us here in the comments!
Love your tutorials, keep it going. Hands down.
Thank you soooo much!!!! This was very hard for me to understand.
Perfectly explained... thanks so much.
Such a great video !!!
Excellent Tutorial. I love you Ma'am!
Amazing tutorial...you simplify concepts so well and think very clearly. I am an admirer and I just subscribed :)
Thank you, Nikola!
At 7:37, why do you have a Dense layer as the first layer in the CNN?
Another great video. Thank you.
You’re marvellous, thanks very much!!!
God bless you, dear. More knowledge to you.
{
"question": "Stride refers to:",
"choices": [
"how many units the filter slides between each operation.",
"how many operations performed on each row.",
"the size of the batch the operations are applied to at a time.",
"the distance between the results of the operation in the resultant matrix."
],
"answer": "how many units the filter slides between each operation.",
"creator": "Chris",
"creationDate": "2020-02-06T05:03:54.547Z"
}
Thanks, Chris! Just added your question to deeplizard.com :)
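Since the quiz defines stride in words, here is a minimal NumPy sketch (my own, not code from the video; the max_pool_2d helper is hypothetical) that shows the same idea: the stride is how many units the window slides between each max operation.

```python
import numpy as np

def max_pool_2d(image, pool_size=2, stride=2):
    """Naive max pooling: slide a pool_size x pool_size window over the
    image, moving `stride` units between operations, and keep each max."""
    h, w = image.shape
    out_h = (h - pool_size) // stride + 1
    out_w = (w - pool_size) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            window = image[i * stride:i * stride + pool_size,
                           j * stride:j * stride + pool_size]
            out[i, j] = window.max()
    return out

image = np.arange(16).reshape(4, 4)
print(max_pool_2d(image, pool_size=2, stride=2))
# [[ 5.  7.]
#  [13. 15.]]
```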
So clearly explained, awesome job!
Amazing series of videos 🙌
You made something that is supposed to be complicated and difficult... easy.
Mind making a guide on quantum computing next? xD
Fantastic work!
Thank you
Just what I was looking for. Thanks!
Some questions:
-does it make sense to have a grid of Y x Z, where Y ≠ Z, and/or to have the stride differ from either of those two?
-what happens if at the edges of the image we don't have a full block (remainders)? Do we still take the max?
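These questions go unanswered in the thread, but a quick sketch of my own (assuming TensorFlow/Keras, as used in the video) shows what the framework allows: non-square pooling windows and per-axis strides are accepted, and the padding argument decides whether leftover edge cells are dropped or still pooled.

```python
import tensorflow as tf

x = tf.random.normal((1, 5, 5, 1))  # a batch of one 5x5 single-channel image

# A non-square 2x3 pooling window with different strides per axis is allowed
rect_pool = tf.keras.layers.MaxPooling2D(pool_size=(2, 3), strides=(2, 1))
print(rect_pool(x).shape)  # (1, 2, 3, 1)

# padding='valid' (the default) drops edge cells that don't fill a complete
# window; padding='same' pads so they are still included in a max operation
print(tf.keras.layers.MaxPooling2D(2, strides=2, padding='valid')(x).shape)  # (1, 2, 2, 1)
print(tf.keras.layers.MaxPooling2D(2, strides=2, padding='same')(x).shape)   # (1, 3, 3, 1)
```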
Thank you, really nice explanation.
You are a life saver !!
Not all heroes wear capes 😜
Lol glad you enjoyed the video, Mostafa!
The visuals are amazing.
Thank you so much! Nice understanding :)
Thanks for the great tutorial.
Great work!
Wow, this was such a good explanation, including the previous one on CNNs.
Thank you! Have a look at this one as well: ruclips.net/video/kt6iUG0Gfm0/видео.html
you are a legend, thank you.
You are the best! You are the best! You are the best! You are the best! You are the best!
Nice video, you nailed it.
What software do you use to make the animations, such as the one at 3:09 where the filter moves? Thanks.
and thank you for the clear explanation!
You're welcome, Abimael! The software is called Camtasia (link below).
www.techsmith.com/video-editor.html
I have a question. During the video on zero padding, you indicated that padding was useful to maintain the size of the original matrix. In your example in this video you include padding='same' on both of your Conv2D layers. But then you include a MaxPooling2D layer which cuts the matrix from 20x20 down to 10x10. This seems to negate or contradict the benefits of padding='same' on the Conv2D layers. Please explain why keeping the original size of the matrix is good for the Conv2D layer, while reducing the original size of the matrix is good for the MaxPooling2D layer. Thanks!
When doing a convolution operation, if not using padding, then the data at the edges of the images will be completely thrown away and lost. To prevent this data loss, we use padding. Max Pooling, on the other hand, will indeed reduce the image size, but it does not throw data away. The original data from the image is used in the pooling operation to create the lower resolution image. Let me know if this makes sense.
@@deeplizard I understand the reason for the padding (to not lose data), but I'm not sure I understand your comment that pooling "does not throw any data away". Given a 2x2 filter, it looks at 4 items in the image, uses the max value, and throws the other 3 away. So we go out of our way (padding) to lose as little as possible with the Conv2D operation, just to lose 75% of the image with the pooling operation. Everyone does it this way, so I know it is right. I simply can't wrap my mind around why this is not an issue.
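To make the shape arithmetic in this exchange concrete, here is a minimal Keras sketch; the 20x20 spatial size comes from the thread above, while the filter count and kernel size are placeholder values rather than the exact model from the video.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    # padding='same' keeps the spatial size at 20x20, so edge pixels still
    # contribute fully to the convolution
    tf.keras.layers.Conv2D(32, kernel_size=3, padding='same', activation='relu',
                           input_shape=(20, 20, 3)),       # output: 20 x 20 x 32
    # max pooling halves the spatial size to 10x10; every input value is still
    # visited, each 2x2 block is just summarized by its maximum
    tf.keras.layers.MaxPooling2D(pool_size=2, strides=2),  # output: 10 x 10 x 32
])
model.summary()
```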
Thanks!!!! Amazing video.
Great video! I have a question though. Is it a standard procedure to have a max pooling layer after every convolution layer? Furthermore, how does one decide whether to put a max pooling operation after a conv layer and in which cases should we not put a max pooling layer after a conv layer?
Excellent video!
Great video! FYI, on the deeplizard site, MaxPooling2D() is missing a comma at the end.
Good eye, thanks! Just corrected it on the site.
Great explanation
Well explained!
Thanks, Raghavendra!
Awesome videos! Thank you very much.
Great explanation... thank you!
This is very informative!
Machine Learning / Deep Learning Tutorials for Programmers playlist: ruclips.net/p/PLZbbT5o_s2xq7LwI2y8_QtvuXZedL6tQU
Keras Machine Learning / Deep Learning Tutorial playlist: ruclips.net/p/PLZbbT5o_s2xrwRnXk_yCPtnqqo4_u2YGL
Nicely done. Thanks!
Great video! But can you explain what the "Flatten()" layer does? Thanks!
Thanks, ivzlccs! A Flatten() layer transforms the output from the previous convolutional layer into a 1D tensor so that it can be provided as input to the following Dense() layer.
The two videos on learnable parameters in a CNN below may be helpful as well. There, when transitioning from the convolutional layer to the output layer, we discuss the flatten operation.
ruclips.net/video/gmBfb6LNnZs/видео.html
ruclips.net/video/8d-9SnGt5E0/видео.html
@@deeplizard Can you kindly also explain what the Dense layer does? The explanation that I have is that it connects layers, but why would you have unconnected layers in the first place?
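For readers with the same questions, here is a small sketch of my own (not the model from the video; the tensor shape and the 10-unit Dense layer are arbitrary choices for illustration). "Dense" does not mean the other layers were unconnected; a Dense layer is a fully connected layer, meaning every value in its input connects to every one of its units, which is why the convolutional output is flattened into a 1D vector first.

```python
import tensorflow as tf

# Pretend this is the output of the last conv/pooling block: 10 x 10 spatial, 32 channels
conv_output = tf.random.normal((1, 10, 10, 32))

# Flatten reshapes it into a single 1D vector per sample
flat = tf.keras.layers.Flatten()(conv_output)
print(flat.shape)    # (1, 3200)  ->  10 * 10 * 32 values

# Dense (fully connected) connects all 3200 values to each of its units;
# 10 units here is arbitrary, e.g. one per class
logits = tf.keras.layers.Dense(10)(flat)
print(logits.shape)  # (1, 10)
```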
Great videos! Easy to understand. It would be more understandable if the operations and coding parts were zoomed in.
Thanks for the feedback, Akanksha. In later videos, the font size is increased, and I zoom in on the code :)
Very nice visualizations... why is this channel so underrated?
Thanks, akshay! Not enough people know about deeplizard. Help spread the word! :D
THANK YOU!
You're welcome, Luis!
You are the best
{
"question": "If we have an image of size n by n, (or for python users: image.shape == (n,n)) and we perform max pooling with a filter size of 2 by 2 and a stride of 2.
How much smaller will the output image of the Max Pooling Layer be than our input?",
"choices": [
"2",
"n",
"2/n",
"n/2"
],
"answer": "2",
"creator": "Kaffafel",
"creationDate": "2021-04-18T13:22:11.978Z"
}
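A quick check of the quiz answer using the standard output-size formula for pooling without padding (a small sketch of my own):

```python
# Output size of max pooling on an n x n image with a k x k filter and stride s
# (no padding): (n - k) // s + 1. With k = 2 and s = 2 this equals n / 2 for
# even n, so each output dimension is 2 times smaller than the input.
def pooled_size(n, k=2, s=2):
    return (n - k) // s + 1

for n in (4, 8, 28):
    print(n, '->', pooled_size(n))  # 4 -> 2, 8 -> 4, 28 -> 14
```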
Great video.
Thank you very much for this video. I have a question: why does the Dense layer have 16 units? Greetings from Spain :) Keep on doing such good work!
Hey, this is great, thank you!
What does Flatten do in the Sequential model declaration?
Thanks for your tutorials, deeplizard. But would you please explain why we even need max pooling? When would we need it?
You're welcome! It's mentioned towards the end of the video and also in the corresponding blog:
deeplizard.com/learn/video/ZjM_XQa5s6s
Love it!
Thanks for more videos. Great
No problem, Raju! Thanks for keeping up with the new releases!
Good thing I paid for grad school when the stuff on RUclips is 10x more useful.
🤦😅
Great articulation! Thank you.
Good explanation.