I love the videos you make.
The good thing is that you explain the concept right to the point and don't waste time, which shows you have a strong command of the subject.
I really hope you don't lose the motivation to make such tutorials, because there are enthusiasts like me and my colleagues who are literally waiting for your future videos.
So please keep making videos
Your channel is underrated and is pure gold
love what you are doing, your recent videos were really helpful to me, keep up the good work, keep exploring and uploading videos 👍
Thanks! Glad you like the videos!
This is a wonderful tutorial which deserves (and in the future will get) way more views
Rahul Gore Hoping the same. Thanks!
Very crisp explanation loved it.
thanks for your high-quality videos which really help me a lot
That put down so simply. Just loved it :) Thanks a lot
Abinash Ankit Raut glad you liked it! Thanks!
Awesome video, subscribed! Had a really good understanding of what depth wise separable convolution is at the end of the video.
Finally understood MobileNets and DSCs. Thanks for the clear video!
Excellent video, an easy and well-explained treatment of depthwise separable convolution. Really grateful to you.
Well explained, beautifully demonstrated. Thanks!
Brilliant explanation, described in a very understandable way.
Shahriar Mohammad Shoyeb thanks! Glad you liked it !
You explained it in the best way.
really helpful for me to understand the depthwise separable convolution! Thank you!
absolute banger! well done
omg this came out 4 years ago? I am living under a rock
This provided a clear understanding for me. So glad, thank you!
Simply brilliant, thank you so much for the detailed information about Xception.
well explained , you made it look really easy !
Perfect explanation. I appreciate it. Thank you!
That was a very lucid explanation, thanks.
Glad you found it useful, Sangeet
Best explanation I've found on this, thanks
As long as it helps :)
Nice video! I look forward to future videos on object detection and semantic segmentation.
Great video, reading the reference paper is going to be much easier now
Impeccable explanations as always!
Great help for understanding Depthwise Separable Convolution!!!
this is a fantastic explanation
Thank you so much!!
This video is really helpful. Thank you!
Thanks a lot for this! Very helpful.
Great! Neatly put. Thanks for the video. One thought -- we could add a parameter lambda as a multiplication factor in the combination step and treat it as a trainable parameter. That increases the total trainable parameters by 1 but may help the solution converge faster, I guess. Depthwise sep conv = Depthwise conv + lambda * Pointwise conv.
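If it helps, here's a rough NumPy sketch of where that lambda would sit. All shapes and names here are made up for illustration, and in a real network `lam` would be a trainable parameter initialized to 1.0 rather than a plain float:

```python
import numpy as np

# Hypothetical sketch of the suggestion: scale the pointwise (combination)
# step by a learnable scalar lam. dw stands in for the depthwise output
# (M channels, H x W each); P is the 1x1 pointwise weight of shape (N, M).
rng = np.random.default_rng(0)
M, N = 3, 4
dw = rng.normal(size=(M, 8, 8))
P = rng.normal(size=(N, M))
lam = 1.0  # the one extra trainable parameter

# Pointwise mix of depthwise outputs, scaled by lam.
out = lam * np.einsum('nm,mij->nij', P, dw)
print(out.shape)  # (4, 8, 8)
```

With lam = 1.0 this reduces exactly to the plain pointwise step, so it can only help, not hurt, at initialization.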
Where to use depthwise separable convolution?
How do we know where to use it? 🤔
@@strongsyedaa7378 Wherever you want to reduce the number of trainable parameters. Many modern networks are built with these depthwise convolutions.
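To make the savings concrete, here's a quick back-of-the-envelope parameter count (the layer sizes are my own example values, not from the video):

```python
# Parameter counts for one conv layer, ignoring biases.
M, N, k = 64, 128, 3  # input channels, output channels, kernel size (example values)

standard = k * k * M * N       # one k x k x M filter per output channel
separable = k * k * M + M * N  # M depthwise k x k filters + N pointwise 1x1 filters

print(standard, separable, round(standard / separable, 1))  # 73728 8768 8.4
```

So for this (fairly typical) layer shape, the separable version has roughly 8x fewer parameters.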
Thank you so much for making such a nice video that is so easy to understand.
GUO GUANHUA For Sure! I'm glad you understood it :)
Very clear, makes it easy to understand! Thanks!
Zhuotun Zhu anytime! Thanks for watching
It is so useful and clear
incredible !
Thank you. You saved me a lot of time.
It's what it do. Thanks for watching :)
Amazing Explanation!
Great video, looking forward to more!
Awesome explanation . Loved it.
Mayank Chaurasia So glad you loved it :)
very helpful video, thanks
Explained it so simply. Thanx
No worries. Glad it helps!
Great video. Helped a lot!
Okay, now I get it.... Thanks!
Great explanation! Thank you very much!
Super explanation
Thank you for the explanation, but please use more intuitive designations (like H for height and W for width).
Very clear explanation.. Thanks a lot.
Welcome! Glad you got some use out of it
CodeEmporium Yeah.. I was reading W-Net where they have used it..
Amazing .. explained so clearly !! Thank you
Harsha Vardhana anytime! Glad you liked it!
Awesome video dude
thank you very much.
Loved it!
thank you, it was of great help !!
Many many thanks.
Amazing video sir.
Can you make a video on Resnet Architecture for beginners?
Awesome explanation
This is excellent
thank you, understood
This was great.
Good Explanation! Thanks
omg. you just saved the day!
You can always count on your friendly neighborhood data scientist..
can you do a video on Binarized Neural Networks?
amazing content. thanks alot :)
Easy to understand. I suggest adding animations for better understanding if possible. Thanks!
Thanks for the explanation
This video was very helpful, thank you :)
Welcome. Glad it was useful!
very clear!
Thanks!
Are the standard convolution here and depthwise separable convolution functionally equivalent? That is, will they both give the same outputs for a given input? Is it just that depthwise separable convolution saves on computation, but is otherwise functionally the same?
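For what it's worth, they are not equivalent in general. A depthwise separable conv computes the same thing as a standard conv whose kernel is constrained to factor as W[n, m] = P[n, m] * D[m], so every separable conv is a special standard conv, but an arbitrary standard kernel need not factor that way. A naive loop-based NumPy sketch of that identity (just for illustration, with made-up sizes):

```python
import numpy as np

def conv2d(x, w):
    # Naive "valid" cross-correlation: x is (M, H, W), w is (N, M, k, k).
    M, H, W = x.shape
    N, _, k, _ = w.shape
    out = np.zeros((N, H - k + 1, W - k + 1))
    for n in range(N):
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                out[n, i, j] = np.sum(w[n] * x[:, i:i + k, j:j + k])
    return out

rng = np.random.default_rng(0)
M, N, k = 3, 4, 3
x = rng.normal(size=(M, 8, 8))
D = rng.normal(size=(M, k, k))  # depthwise: one k x k filter per input channel
P = rng.normal(size=(N, M))     # pointwise: 1x1 conv mixing channels

# Depthwise step (each channel filtered independently), then pointwise mix.
dw = np.stack([conv2d(x[m][None], D[m][None, None])[0] for m in range(M)])
sep = np.einsum('nm,mij->nij', P, dw)

# The same map as ONE standard conv with the factored kernel W[n, m] = P[n, m] * D[m].
W_std = P[:, :, None, None] * D[None]
print(np.allclose(sep, conv2d(x, W_std)))  # True
```

Going the other way fails in general: a random (N, M, k, k) kernel has N*M*k*k free values, while the factored form only has M*k*k + N*M, so the separable layer is a cheaper approximation, not a drop-in equivalent.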
excellent!very nice video
Well. I can't understand why the input size of the second phase is still M. Is that a typo?
Hey really helpful Thank You. Can you also make a video on Winograd Convolution?
Nice video. Thanks.
worth the time!!
2:00 shouldn't it be (Dk^3) * M? For a matrix multiplication of size (n x m) . (m x p), the number of multiplications is n x m x p.
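I'm not sure of the exact frame at 2:00, but the totals I know from the MobileNet-style analysis are below: each output value needs Dk * Dk * M multiplications (a Dk x Dk x M volume), so Dk^3 would only appear if the kernel depth happened to equal Dk. Example sizes are my own assumptions:

```python
# Multiplication counts in MobileNet-style notation:
# Dk = kernel size, M = input channels, N = output channels, Dp = output map size.
Dk, M, N, Dp = 3, 64, 128, 56  # example values (assumptions, not from the video)

standard = Dk * Dk * M * N * Dp * Dp                 # Dk*Dk*M mults per output value
separable = Dk * Dk * M * Dp * Dp + M * N * Dp * Dp  # depthwise + pointwise steps

print(separable / standard)  # equals 1/N + 1/Dk**2
```

That closed-form ratio 1/N + 1/Dk^2 is why the savings are dominated by the kernel size term for large N.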
What would pointwise convolution look like in a 1D separable convolution???
Thank you for this. What are you using for animations?
Why is the number of output features always an integral multiple of the number of input channels?
excellent
Thanks!
very nice!
Where to use depthwise separable convolution?
Is it correct that an arbitrary standard convolution cannot be expressed as a depthwise convolution (except in some special cases)? Depthwise convolution is just another type of convolution, right?
Thanks.
"immediately" hahaha. Thanks bro. Subscribed
Thanks
very helpful, thanks
Glad it was helpful. Thanks for watching!
OK genius, I'm also approaching the problem the same way you do; I don't use the mathematical way. My question is simple, because LTI depends on convolution. Here's my question below:
Convolution is nothing but stacking and scaling the input; that's why the input to an amplifier is stacked and scaled (amplified). But in filter design it attenuates frequencies, so I don't understand how it rejects certain frequencies just by stacking and scaling the input. If possible, can someone explain this to me?
Do you have a python code 3d depthwise separable convolution?
good
How does this do with ResNets and DenseNets?
In the Xception research paper they actually used skip connections and dense layers; skip connections were reported to give a major boost to the final accuracy.
Hey, I am making a video using some of your animations. Hope that's cool!? It's on MobileNets
bluesky314 Absolutely. Just list this video in your references. Send a link to your video here when you're done. I'd like to see it :)
Thanks! Here it is: ruclips.net/video/HD9FnjVwU8g/видео.html Would love your feedback
hey
Sakkath (superb) video!
First, thank you for making this helpful video.
Second, why can't comp sci people agree on one notation for anything at all?! It's like for every video I watch I gotta learn a new set of notations... BOY. And why is F the input and not the filters? that's just straight up confusing man.
humans really can't agree on anything.
Good1
This is like MapReduce
Thank you so much for this amazing explanation!
Very helpful, thank you!
Thanks! Glad it was of use.