Depthwise Separable Convolution - A FASTER CONVOLUTION!

  • Published: 31 Jan 2025

Comments • 123

  • @masoudmasoumimoghaddam3832
    @masoudmasoumimoghaddam3832 5 years ago +4

    I love the videos you make.
    The good thing is that you explain the concept right to the point and don't waste time, which shows you have a real command of the subject.
    I really hope you don't lose the motivation to make such tutorials, because there are enthusiasts like me and my colleagues who are literally waiting for your future videos.
    So please keep making videos

  • @sarthaknarayan2159
    @sarthaknarayan2159 4 years ago

    Your channel is underrated and is pure gold

  • @VikasSingh8
    @VikasSingh8 6 years ago +6

    love what you are doing, your recent videos were really helpful to me, keep up the good work, keep exploring and uploading videos 👍

    • @CodeEmporium
      @CodeEmporium  6 years ago +1

      Thanks! Glad you like the videos!

  • @GunnuBhaiya
    @GunnuBhaiya 6 years ago +1

    This is a wonderful tutorial which deserves (and in the future will get) way more views

    • @CodeEmporium
      @CodeEmporium  6 years ago

      Rahul Gore Hoping the same. Thanks!

  • @IndiaNirvana
    @IndiaNirvana 1 year ago +1

    Very crisp explanation loved it.

  • @王大哥-e1c
    @王大哥-e1c 2 years ago

    thanks for your high-quality videos which really help me a lot

  • @7688abinash
    @7688abinash 6 years ago +2

    That put down so simply. Just loved it :) Thanks a lot

    • @CodeEmporium
      @CodeEmporium  6 years ago

      Abinash Ankit Raut glad you liked it! Thanks!

  • @Kumar731995
    @Kumar731995 6 years ago

    Awesome video, subscribed! Had a really good understanding of what depth wise separable convolution is at the end of the video.

  • @starcraftpain
    @starcraftpain 5 years ago

    Finally understood MobileNets and DSCs. Thanks for the clear video!

  • @mamuncse068
    @mamuncse068 5 years ago

    Excellent video, an easy and clear explanation of Depthwise Separable Convolution. Really grateful to you.

  • @ParniaSh
    @ParniaSh 3 years ago +2

    Well explained, beautifully demonstrated. Thanks!

  • @ShahriarMohammadShoyeb
    @ShahriarMohammadShoyeb 6 years ago +1

    Brilliant explanation, described in a very understandable way.

    • @CodeEmporium
      @CodeEmporium  6 years ago

      Shahriar Mohammad Shoyeb thanks! Glad you liked it !

  • @deepaksingh5607
    @deepaksingh5607 4 years ago

    You explained it in the best way.

  • @张程-f6d
    @张程-f6d 3 years ago

    really helpful for me to understand the depthwise separable convolution! Thank you!

  • @maxsch.2367
    @maxsch.2367 7 months ago

    absolute banger! well done

  • @thecurious926
    @thecurious926 2 years ago

    omg this came out 4 years ago? I am living under a rock

  • @esrabetulelhusseini2330
    @esrabetulelhusseini2330 4 years ago

    It provided a clear understanding to me. So glad, thank you

  • @vamsiKRISHNA-io1yi
    @vamsiKRISHNA-io1yi 4 years ago

    Simply brilliant, thank you so much for the detailed information about Xception

  • @myhofficiel4612
    @myhofficiel4612 7 months ago

    well explained , you made it look really easy !

  • @yungwoo729
    @yungwoo729 4 years ago +1

    Perfect explanation. I appreciate it. Thank you!

  • @sami-h5y
    @sami-h5y 6 years ago +1

    That was a very lucid explanation, thanks.

    • @CodeEmporium
      @CodeEmporium  6 years ago +1

      Glad you found it useful, Sangeet

  • @austinmw89
    @austinmw89 6 years ago

    Best explanation I've found on this, thanks

  • @srinathkumar1452
    @srinathkumar1452 6 years ago

    Nice video! I look forward to future videos on object detection and semantic segmentation.

  • @maheshwaranumapathy
    @maheshwaranumapathy 6 years ago

    Great video, reading the reference paper is going to be much easier now

  • @keithchua1723
    @keithchua1723 2 years ago

    Impeccable explanations as always!

  • @senli2229
    @senli2229 6 years ago

    Great help for understanding Depthwise Separable Convolution!!!

  • @Frostbyte-Game-Studio
    @Frostbyte-Game-Studio 2 years ago

    this is a fantastic explanation

  • @jeamsdere9636
    @jeamsdere9636 2 years ago

    This video is really helpful. Thank you

  • @Vinay1272
    @Vinay1272 1 year ago

    Thanks a lot for this! Very helpful.

  • @gopsda
    @gopsda 3 years ago

    Great! Neatly put. Thanks for the video. One thought -- we can add one parameter lambda as multiplication factor in the combination step, and treat as a trainable parameter which increases total trainable parameters by 1 but may help converge the solution faster, I guess. Depthwise sep conv = Depthwise conv + lambda * Pointwise conv.

    • @strongsyedaa7378
      @strongsyedaa7378 3 years ago

      Where to use depthwise separable convolution?
      How do we come to know to where to use it? 🤔

    • @gopsda
      @gopsda 3 years ago

      @@strongsyedaa7378 Wherever you want to reduce the number of trainable parameters. Many modern networks are defined with this depthwise conv.
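The parameter savings gopsda mentions are easy to verify with a quick calculation (symbols as in the video: Dk = kernel size, M = input channels, N = output channels; the channel counts below are just an illustrative MobileNet-style example):

```python
def conv_params(Dk, M, N):
    """Weights in a standard Dk x Dk convolution with M input, N output channels."""
    return Dk * Dk * M * N

def separable_params(Dk, M, N):
    """Depthwise (one Dk x Dk filter per input channel) plus pointwise (1x1) weights."""
    return Dk * Dk * M + M * N

# Example: 3x3 kernel, 256 -> 256 channels.
standard = conv_params(3, 256, 256)        # 589,824 weights
separable = separable_params(3, 256, 256)  # 2,304 + 65,536 = 67,840 weights
print(standard, separable, standard / separable)  # roughly an 8.7x reduction
```

The same ratio grows with N and Dk, which is why the factorization pays off most in wide layers.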

  • @bearflamewind
    @bearflamewind 6 years ago

    Thank you so much for making such a nice video that is so easy to understand.

    • @CodeEmporium
      @CodeEmporium  6 years ago

      GUO GUANHUA For Sure! I'm glad you understood it :)

  • @zhuotunzhu8660
    @zhuotunzhu8660 6 years ago

    Very clear, make it easy to understand! Thanks!

    • @CodeEmporium
      @CodeEmporium  6 years ago

      Zhuotun Zhu anytime! Thanks for watching

  • @huythai6210
    @huythai6210 3 years ago

    It is so useful and clear

  • @duongkstn
    @duongkstn 3 months ago

    incredible !

  • @PalashKarmore
    @PalashKarmore 6 years ago

    Thank you. You saved me a lot of time.

    • @CodeEmporium
      @CodeEmporium  6 years ago +1

      It's what I do. Thanks for watching :)

  • @AmartyaMandal7
    @AmartyaMandal7 4 years ago

    Amazing Explanation!

  • @win-n6k
    @win-n6k 6 years ago

    great video, looking forward to more

  • @mayankchaurasia4483
    @mayankchaurasia4483 6 years ago

    Awesome explanation . Loved it.

    • @CodeEmporium
      @CodeEmporium  6 years ago

      Mayank Chaurasia So glad you loved it :)

  • @dufrewu7437
    @dufrewu7437 3 years ago

    very helpful video, thanks

  • @vinitakumari5913
    @vinitakumari5913 6 years ago

    Explained it so simply. Thanx

  • @felippewick
    @felippewick 5 years ago

    Great video. Helped a lot!

  • @knowhowww
    @knowhowww 3 years ago

    Okay, now I get it.... Thanks!

  • @roberttlange8607
    @roberttlange8607 5 years ago

    Great explanation! Thank you very much!

  • @Ganitadava
    @Ganitadava 2 years ago

    Super explanation

  • @nexushotaru
    @nexushotaru 1 year ago

    Thank you for the explanation, but please use more intuitive designations (like H for height and W for width)

  • @RishabhGoyal
    @RishabhGoyal 6 years ago

    Very clear explanation.. Thanks a lot.

    • @CodeEmporium
      @CodeEmporium  6 years ago

      Welcome! Glad you got some use out of it

    • @RishabhGoyal
      @RishabhGoyal 6 years ago

      CodeEmporium Yeah.. I was reading W-Net where they have used it..

  • @harv609
    @harv609 6 years ago

    Amazing .. explained so clearly !! Thank you

    • @CodeEmporium
      @CodeEmporium  6 years ago +1

      Harsha Vardhana anytime! Glad you liked it!

  • @lonewolf2547
    @lonewolf2547 6 years ago

    Awesome video dude

  • @roonyyu5374
    @roonyyu5374 17 days ago

    thank you very much.

  • @Lucas7Martins
    @Lucas7Martins 5 years ago

    Loved it!

  • @sounakbhowmik2841
    @sounakbhowmik2841 5 years ago

    thank you, it was of great help !!

  • @RafiqulIslam-je4zy
    @RafiqulIslam-je4zy 4 years ago

    Many many thanks.

  • @reactorscience
    @reactorscience 4 years ago

    Amazing video sir.

  • @UniversalRankingOfficial
    @UniversalRankingOfficial 2 years ago

    Can you make a video on Resnet Architecture for beginners?

  • @sahibsingh1563
    @sahibsingh1563 5 years ago

    Awesome explanation

  • @willz3222
    @willz3222 4 years ago

    This is excellent

  • @harutyunyansaten
    @harutyunyansaten 3 years ago

    thank you, understood

  • @vaibhavsingh1049
    @vaibhavsingh1049 4 years ago

    This was great.

  • @wuxb09
    @wuxb09 6 years ago

    Good Explanation! Thanks

  • @jo-of-joey
    @jo-of-joey 6 years ago

    omg. you just saved the day!

    • @CodeEmporium
      @CodeEmporium  6 years ago +1

      You can always count on your friendly neighborhood data scientist..

    • @jo-of-joey
      @jo-of-joey 6 years ago

      can you do a video on Binarized Neural Networks?

  • @meyouanddata9338
    @meyouanddata9338 4 years ago

    amazing content. thanks a lot :)

  • @kartikpodugu
    @kartikpodugu 6 years ago

    easy to understand. I suggest adding animations for better understanding if possible. thanks

  • @gaussian3750
    @gaussian3750 5 years ago

    Thanks for explanation

  • @joruPT
    @joruPT 6 years ago

    This video was very helpful, thank you :)

  • @ducpham9991
    @ducpham9991 5 years ago

    very clear!

  • @busy_beaver
    @busy_beaver 2 years ago

    Thanks!

  • @virajwadhwa6782
    @virajwadhwa6782 1 year ago

    Are the standard convolution here and the depth-wise separable convolution functionally equivalent? That is, will they both give the same outputs for a certain input? Is it just that depth-wise separable convolution saves on computation but is otherwise functionally the same?

  • @zhengxiangyan3654
    @zhengxiangyan3654 5 years ago

    excellent! very nice video

  • @yx9873
    @yx9873 2 years ago

    Well. I can't understand why the input size of the second phase is still M. Is that a typo?

  • @rabhinav
    @rabhinav 6 years ago +1

    Hey, really helpful, thank you. Can you also make a video on Winograd Convolution?

  • @artsyfadwa
    @artsyfadwa 5 years ago

    Nice video. Thanks.

  • @digvijayyadav3633
    @digvijayyadav3633 4 years ago

    worth the time!!

  • @bhuvneshkumar1970
    @bhuvneshkumar1970 3 years ago

    2:00 Shouldn't it be (Dk^3) * M? For a matrix multiplication of size (n x m) . (m x p), the number of multiplications is n x m x p.
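On the count bhuvnesh asks about: sliding a Dk x Dk x M filter to one output position is a dot product of Dk^2 * M numbers (the filter's depth is M, not Dk), not an (n x m).(m x p) matrix product. With Dp the output feature-map size, the standard and separable multiplication counts, and the well-known cost ratio 1/N + 1/Dk^2, can be checked numerically (the layer sizes below are just an illustrative example):

```python
def standard_mults(Dk, Dp, M, N):
    # Each of the Dp*Dp output positions, for each of the N filters,
    # needs a dot product over a Dk x Dk x M window.
    return Dk * Dk * M * N * Dp * Dp

def separable_mults(Dk, Dp, M, N):
    depthwise = Dk * Dk * M * Dp * Dp  # one Dk x Dk filter per input channel
    pointwise = M * N * Dp * Dp        # 1x1 conv mixing M channels into N
    return depthwise + pointwise

Dk, Dp, M, N = 3, 112, 32, 64
ratio = separable_mults(Dk, Dp, M, N) / standard_mults(Dk, Dp, M, N)
print(ratio, 1 / N + 1 / Dk**2)  # the two values agree: ratio = 1/N + 1/Dk^2
```

So a 3x3 separable layer costs roughly an eighth to a ninth of the standard layer, dominated by the 1/Dk^2 term.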

  • @Maciek17PL
    @Maciek17PL 3 years ago

    What would the pointwise convolution look like in a 1-D separable convolution?

  • @MasayoMusic
    @MasayoMusic 5 years ago

    Thank you for this. What are you using for animations?

  • @sourishsarkar5281
    @sourishsarkar5281 4 years ago

    Why are the output number of features always an integral multiple of the number of input channels?

  • @judyhwang9342
    @judyhwang9342 3 years ago

    excellent

  • @melihaslan9509
    @melihaslan9509 6 years ago

    very nice!

  • @strongsyedaa7378
    @strongsyedaa7378 3 years ago

    Where to use depthwise separable convolution?

  • @sergeyi2518
    @sergeyi2518 4 years ago

    Is it correct that an arbitrary standard convolution cannot be expressed as a depthwise convolution (except in some special cases)? Depthwise convolution is just another type of convolution, right?
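Sergey's question (and viraj's equivalence question above) has a concrete answer: composing the depthwise and pointwise steps is the same linear map as ONE standard convolution whose kernel factors as w[n, m] = pw[n, m] * dw[m]. So every depthwise separable conv is a constrained special case of a standard conv, but a generic standard kernel does not factor this way, so the two are not functionally equivalent in general. A minimal NumPy check (valid padding, no bias; all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, k, S = 3, 4, 3, 6                 # in/out channels, kernel size, input size
x = rng.standard_normal((M, S, S))
dw = rng.standard_normal((M, k, k))     # one spatial filter per input channel
pw = rng.standard_normal((N, M))        # 1x1 channel-mixing weights

def conv2d(x, w):
    """Standard 'valid' conv; x: (M, S, S), w: (N, M, k, k) -> (N, So, So)."""
    n_out, _, kk, _ = w.shape
    So = x.shape[1] - kk + 1
    out = np.zeros((n_out, So, So))
    for i in range(So):
        for j in range(So):
            out[:, i, j] = np.tensordot(w, x[:, i:i+kk, j:j+kk], axes=3)
    return out

# Depthwise step: filter each channel separately, then a pointwise 1x1 mix.
So = S - k + 1
depth = np.zeros((M, So, So))
for i in range(So):
    for j in range(So):
        depth[:, i, j] = (dw * x[:, i:i+k, j:j+k]).sum(axis=(1, 2))
separable_out = np.tensordot(pw, depth, axes=([1], [0]))

# The same map as ONE standard conv with the factorized kernel pw[n,m] * dw[m].
expanded = pw[:, :, None, None] * dw[None, :, :, :]  # shape (N, M, k, k)
assert np.allclose(conv2d(x, expanded), separable_out)
print("depthwise separable == standard conv with a factorized (rank-1) kernel")
```

The `expanded` kernel has N*M*k*k entries but only M*k*k + N*M degrees of freedom, which is exactly where the savings, and the loss of generality, come from.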

  • @ah-rdk
    @ah-rdk 5 months ago

    Thanks.

  • @mohitpilkhan7003
    @mohitpilkhan7003 4 years ago +2

    "immediately" hahaha. Thanks bro. Subscribed

  • @abhishekchaudhary6975
    @abhishekchaudhary6975 2 years ago

    Thanks

  • @varchitalalwani3802
    @varchitalalwani3802 6 years ago

    very helpful, thanks

    • @CodeEmporium
      @CodeEmporium  6 years ago

      Glad it was helpful. Thanks for watching!

  • @ranam
    @ranam 3 years ago

    Ok genius, I am also approaching the problem the same way as you, without the mathematical formalism. My question is simple, since LTI systems depend on convolution. Here it is:
    Convolution just shifts and scales the input; that's why the input to an amplifier is scaled, i.e. amplified. But in filter design it attenuates frequencies, and I don't see how it rejects certain frequencies merely by shifting and scaling the input. If possible, can someone explain this to me?

  • @66tuananh88
    @66tuananh88 1 year ago

    Do you have Python code for a 3-D depthwise separable convolution?
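Since no code was posted in reply: one common way to write this (not the video author's code, just a sketch) is with PyTorch, where a 3-D depthwise convolution is `nn.Conv3d` with `groups=in_channels`, followed by a 1x1x1 `Conv3d` for the pointwise mix:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv3d(nn.Module):
    """Depthwise 3-D conv (one kxkxk filter per channel) + 1x1x1 pointwise mix."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        # groups=in_ch makes each input channel get its own k x k x k filter.
        self.depthwise = nn.Conv3d(in_ch, in_ch, k, padding=k // 2, groups=in_ch)
        # 1x1x1 conv combines the M filtered channels into N output channels.
        self.pointwise = nn.Conv3d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

block = DepthwiseSeparableConv3d(4, 8)
y = block(torch.randn(1, 4, 8, 8, 8))  # (batch, channels, D, H, W)
print(y.shape)                         # torch.Size([1, 8, 8, 8, 8])
```

With `padding=k // 2` the spatial size is preserved for odd k; drop the padding for a "valid" convolution.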

  • @aq555
    @aq555 2 years ago

    good

  • @willemprins4564
    @willemprins4564 6 years ago

    How does this do with Res and Densenets?

    • @harshnigm8759
      @harshnigm8759 6 years ago

      In the Xception paper they actually used skip connections and dense layers; the skip connections were reported to give a major boost to the final accuracy.

  • @rahuldeora5815
    @rahuldeora5815 6 years ago

    Hey, I am making a video using some of your animations, hope that's cool? It's on MobileNets.

    • @CodeEmporium
      @CodeEmporium  6 years ago

      bluesky314 Absolutely. Just list this video in your references. Send a link to your video here when you're done. I'd like to see it :)

    • @rahuldeora5815
      @rahuldeora5815 6 years ago

      Thanks! Here it is: ruclips.net/video/HD9FnjVwU8g/видео.html Would love your feedback

    • @rahuldeora5815
      @rahuldeora5815 6 years ago

      hey

  • @harshadj13
    @harshadj13 5 years ago

    Sakkath video!

  • @DarkLordAli95
    @DarkLordAli95 3 years ago

    First, thank you for making this helpful video.
    Second, why can't comp sci people agree on one notation for anything at all?! It's like for every video I watch I gotta learn a new set of notations... BOY. And why is F the input and not the filters? that's just straight up confusing man.
    humans really can't agree on anything.

  • @pawansj7881
    @pawansj7881 6 years ago

    Good1

  • @mathematicalninja2756
    @mathematicalninja2756 6 years ago

    This is like MapReduce

  • @Jonas-qz2gb
    @Jonas-qz2gb 3 years ago

    Thank you so much for this amazing explanation!

  • @santhoshkolloju
    @santhoshkolloju 6 years ago

    Very helpful Thank you