Groups, Depthwise, and Depthwise-Separable Convolution (Neural Networks)
- Published: 22 Feb 2023
- Patreon: / animated_ai
Fully animated explanation of the groups option in convolutional neural networks, followed by an explanation of depthwise and depthwise-separable convolution in neural networks.
Animations: animatedai.github.io/
Intro sound: "Whoosh water x4" by beman87 freesound.org/s/162839/
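For readers who want to map the video's three concepts onto code, here is a minimal PyTorch sketch; the layer sizes are illustrative assumptions, not values from the video:

```python
import torch
import torch.nn as nn

# Grouped convolution: with groups=g, the input channels are split into
# g groups and each filter only sees in_channels // g of them.
grouped = nn.Conv2d(in_channels=8, out_channels=8, kernel_size=3, padding=1, groups=4)

# Depthwise convolution: groups == in_channels, so each filter sees exactly
# one input channel.
depthwise = nn.Conv2d(8, 8, kernel_size=3, padding=1, groups=8)

# Depthwise-separable convolution: a depthwise convolution followed by a
# 1x1 (pointwise) convolution that mixes information across channels.
pointwise = nn.Conv2d(8, 16, kernel_size=1)
separable = nn.Sequential(depthwise, pointwise)

x = torch.randn(1, 8, 32, 32)
print(separable(x).shape)  # torch.Size([1, 16, 32, 32])
```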
Please don't stop making videos. They are of great help. Thank you for your efforts.
One of the best channels on DL I’ve seen so far. Please publish more!
This is great stuff. Please continue to make more, you are saving new scholar lives here!
This is such an underrated channel, I will share it as much as I can. Thank you, Mr AI Animator!
Hey, I want to thank you for spending time making great animated content! I've been using depthwise convolution for a while and it has always been a little hazy
This is amazing! There is a lot of great material out there, and your channel is a really solid and valuable contribution to that. Thanks a batch!
This was an incredible video. You can see the amount of work and dedication, and you explain really well! Thanks, please keep on making these videos
By far the best explanation of depthwise-separable convolutions I found! This is a service, thanks!
amazing! as an AI researcher I missed these videos back in the days when I studied convolutions, hope they'll bring more understanding to the people just coming to the field!
I just wanted to say a huge THANK YOU for all the incredible animations you've been creating. Your work has been a game-changer for me, making complex concepts so much easier to understand.
Incredible video! Brilliant visualisations and perfect explanation. Keep it up
Absolutely loved the way the instructor used animations to explain concepts like Groups, Depthwise, and Depthwise-Separable Convolution. It made understanding the topic so much easier and engaging. Keep up the great work!
Your work is truly amazing, please keep enlightening us!
The best conv layer visualization so far👍
Thank you for your great work💥💥
absolutely love these videos! doing gods work
Incredibly helpful, keep up the good work!
Please continue making such amazing videos... they really helped me
Thanks, the explanation of this mechanic is exceedingly lucid.
You are a freaking saint. I gotta sub for the effort you put in.
Great work. I am a Master's student in ML and your animations are really helpful in understanding this concept!! Thanks a lot.
Amazing video, so well explained and to the point.
Great stuff... the algorithm should give your content more attention!
This is so great!
That was so intuitive. Thanks for that!
Congratulations for amazing class
Very intuitive to understand, thank you.
Bro its amazing, continue please !!
Truly awesome!
Finally understood. Thanks. Really helpful videos.
Thank you for making videos like this.
great material!
Great video! Keep up the good work
this animation really helps me , thanks!
thank you very much brother, this video means a lot for people like me 😍
You got a new subscriber. You are 3b1b of AI. Thanks for existing.
Fantastic!
Thanks! Best explanation ever
Great work, thank you
so so good
amazing job !!!
underrated channel
this is too cool to handle!!!!
Great video... keep it going... Thanks a lot
awesome thanks!
Nice job!
Excellent!!
Thank you!
Thank you
Thanks!
Fantastic explanations; even though I understand the paper diagrams, this makes it much clearer. Would you cover cascaded/DenseNet someday?
Another great video. Can't wait for you to go into animating Transformers!
Thank you!
great video
Thanks! Could you also cover convolutions for audio processing?
thx!!
Please sir, also make visualizations like these for RNNs, LSTMs and, most importantly, Transformers. I would be really thankful to you. Also, your videos on CNNs are just gems in the ocean of youtube.
Do remember in future vids to invite viewers to smash the like button, as it improves your ranking as per the Algorithm. I just realised I watched half a dozen of these without hitting it.
Awesome job! I have a question out of curiosity: how did you create this? Which programs did you use to produce this video?
Thank you very much for sharing. It helps me a lot. I would really appreciate it if you could add subtitles.
I want to know how you made this video. What software tools did you use?
Great video! Your website will be a very useful resource.
May I ask you what tool you are using for creating these animations?
I'm using Blender and relying heavily on the Geometry Nodes feature.
ok visualize transformers next please, Vision Transformers would be nice.
yes, a good visualization of transformers would be great
Also graphnet
So the output has the same number of channels as the input? Or can you modify that with a 1x1 convolution at the end? Also, doesn't this double the required storage for feature maps?
In practice, it probably doesn't make a huge difference where you increase the number of channels. You could increase the channels in the depthwise convolution itself, as long as the output channel count is a multiple of the input. EfficientNet actually increases the number of channels with an extra pointwise convolution before the depthwise convolution.
Yes, it increases the storage required during training in TensorFlow and PyTorch. Post-training, you don't necessarily need to keep around all the intermediate feature maps. So whether or not it doubles the required storage is dependent on the library (if any) that you're using for deployment.
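The EfficientNet-style ordering mentioned above (pointwise expand, then depthwise, then pointwise project) can be sketched in PyTorch; the channel count and expansion ratio here are illustrative assumptions:

```python
import torch
import torch.nn as nn

def inverted_bottleneck(channels, expand_ratio=4):
    """MBConv-style block: pointwise expand -> depthwise -> pointwise project."""
    mid = channels * expand_ratio
    return nn.Sequential(
        nn.Conv2d(channels, mid, kernel_size=1),                    # expand channels
        nn.Conv2d(mid, mid, kernel_size=3, padding=1, groups=mid),  # depthwise
        nn.Conv2d(mid, channels, kernel_size=1),                    # project back
    )

block = inverted_bottleneck(16)
x = torch.randn(1, 16, 8, 8)
print(block(x).shape)  # torch.Size([1, 16, 8, 8])
```

(The real EfficientNet block also adds normalization, activations, and a residual connection, omitted here for brevity.)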
Great work, thanks !
Maybe FPNs next time ? :-D
Hi Animated AI, for clarification, are the stacks of cubes in the first 30 seconds of the video feature maps? Also, how exactly did the depth increase as we get into the deeper layers? Based on my understanding, the lecture you provided was focused more on maintaining the depth while increasing its efficiency. I hope to hear from you soon! Your work is great!
That's correct, they're the feature maps which are the inputs/outputs of the layers.
The depth of a feature map is equal to the number of filters in the convolutional layer that created it. So the depth increases that you're seeing are simply layers that have more filters than the number of features in their input. Let me know if that isn't what you meant by your question.
This video shows the depth staying the same in a depth-wise separable convolution, but you can still depth-wise separate a layer that increases the number of filters and get the performance benefits. You can just take the input depth and use twice (or some other multiple of) that value for the filter count in both the depth-wise and point-wise convolutions.
@animatedai I see, I see. So if the input is an RGB image, and the first convolutional layer uses 5 filters, then the depth of the feature map will be 5. If that feature map goes to another convolutional layer with 5 filters, will the output contain a feature map with a depth of 25?
In that example, both outputs would have a depth of 5 because both layers have a filter count of 5. My video on filter count might help you visualize the relationship there: ruclips.net/video/YSNLMNnlNw8/видео.html
These videos are both part of this playlist on convolution: ruclips.net/p/PLZDCDMGmelH-pHt-Ij0nImVrOmj8DYKbB
@animatedai Hi animatedAI! I'll check the link out. I hope I'll get it afterwards haha. Thanks for sharing!
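The depth relationship discussed in this thread can be checked directly; a small PyTorch sketch using the sizes from the example above:

```python
import torch
import torch.nn as nn

rgb = torch.randn(1, 3, 32, 32)                      # RGB image: depth 3
layer1 = nn.Conv2d(3, 5, kernel_size=3, padding=1)   # 5 filters
layer2 = nn.Conv2d(5, 5, kernel_size=3, padding=1)   # 5 filters

out1 = layer1(rgb)
out2 = layer2(out1)
# Each output's depth equals that layer's filter count (5), not 5 * 5 = 25.
print(out1.shape[1], out2.shape[1])  # 5 5
```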
From your example, it would be nice to give the number of computations as an example of the roughly 9x speedup :)
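Counting multiply-accumulates makes that rough 9x figure concrete; in this back-of-the-envelope sketch, the spatial size and channel counts are assumptions, not values from the video:

```python
# Multiply-accumulate counts for one layer on an H x W feature map.
H, W = 56, 56            # spatial size (illustrative)
c_in, c_out, k = 128, 128, 3

standard  = H * W * c_out * c_in * k * k   # standard convolution
depthwise = H * W * c_in * k * k           # one k x k filter per input channel
pointwise = H * W * c_out * c_in           # 1x1 convolution
separable = depthwise + pointwise

ratio = standard / separable
print(ratio)  # ~8.4x fewer MACs; approaches k*k = 9 as c_out grows
```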
Can we convert trained standard ConvNets to depthwise ones?
You could theoretically separate any kernel into a depthwise-separated one. But you'd need a lot more filters in the depthwise convolution, so the result would be about the same performance. The performance improvement comes from training the network to take advantage of depthwise-separated convolutions.
It should be noted that this will not scale well with tensor cores and may even be slower.
❤
These videos are excellent, but I suspect your ability to discern adjacent colors on a color wheel greatly outpaces mine. I have to pause and stare back and forth between blocks. It would be nice if it were easier to see. Tools like Viz Palette can help pick better colors for data visualization.
I appreciate your feedback! I could rant for hours about how hard it is to pick colors :). I have two clarification questions: 1) Which part of the video did you pause to stare back and forth between blocks? 2) Which feature of Viz Palette do you think would have helped pick colors for that instance?
Please be more productive. Your videos are amazing.
hi
jiff
geefs, not jiffs