Swin Transformer: Hierarchical Vision Transformer using Shifted Windows (paper illustrated)

  • Published: 26 Dec 2024

Comments • 63

  • @mahmoudimus
    @mahmoudimus 10 months ago +1

    Great explanation. Love the music + the voice :)

    • @AIBites
      @AIBites  10 months ago

      Thanks. Glad you liked it!

  • @phattailam9814
    @phattailam9814 1 year ago +1

    Thank you so much for the explanation!

  • @robosergTV
    @robosergTV 6 months ago +1

    huh? ViT was the first backbone Transformer arch for vision, not Swin

    • @AIBites
      @AIBites  4 months ago

      awesome spot. And thanks for this info.

  • @suke933
    @suke933 2 years ago +3

    Thanks for the video, dear AI Bites. I was struggling to understand the Swin architecture. It was elaborated very clearly up to that point, but I would like to ask about "the motivation for different C value selection". Why is it important? If you could explain, it would give me a more meaningful understanding.

  • @kalluriramakrishna5732
    @kalluriramakrishna5732 2 years ago +1

    Thank you for your fabulous Explanation

  • @deadbeat_genius_daydreamer
    @deadbeat_genius_daydreamer 1 year ago

    This is seriously underrated. I enjoyed this visual approach. Thanks and regards for your efforts to make this explanation. Cheers🎊👍

    • @AIBites
      @AIBites  1 year ago

      Thank you so much Harshad! 😊

  • @JC-ru4bp
    @JC-ru4bp 3 years ago +1

    Very clear explanation of the paper idea, thanks.

    • @AIBites
      @AIBites  3 years ago

      very encouraging to keep making videos :)

    • @JC-ru4bp
      @JC-ru4bp 3 years ago

      @@AIBites Keep it up, man!

  • @tonywang7933
    @tonywang7933 8 months ago +1

    Thank you!! So nicely explained

    • @AIBites
      @AIBites  8 months ago

      You're welcome. Would you like to see more papers explained, or more coding videos?

  • @muhammadsalmanali1066
    @muhammadsalmanali1066 3 years ago

    Thank you so much for the explanation. Please keep the videos coming.

    • @AIBites
      @AIBites  3 years ago +1

      Sure will do!

  • @arpitaingermany
    @arpitaingermany 2 years ago +1

    Thank you for illustrating this architecture. Can you make more videos on the segmentation algorithms being used nowadays, please? Thanks.

    • @AIBites
      @AIBites  2 years ago +2

      Sure. Will plan to make one on SegFormers.

    • @arpitaingermany
      @arpitaingermany 2 years ago

      @@AIBites cool ❤️
      And thanks for this presentation

  • @MangalisoMngomezulu-y3b
    @MangalisoMngomezulu-y3b 9 months ago +1

    This is brilliant!

    • @AIBites
      @AIBites  9 months ago

      Thanks 👍

  • @TheMomentumhd
    @TheMomentumhd 3 years ago

    Do you think these Swin Transformers would be useful in real-time object detection (are they fast enough)?

  • @sanjeetpatil1249
    @sanjeetpatil1249 2 years ago

    Can you kindly explain this line in the paper, related to the patch merging layer: "The first patch merging layer concatenates the features of each group of 2 × 2 neighboring patches, and applies a linear layer on the 4C-dimensional concatenated features".
    Thank you for the video
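
For anyone puzzling over the same line: here is a minimal PyTorch sketch of that patch-merging step, assuming a (B, H, W, C) feature layout. The names are illustrative, not the paper's official code.

```python
import torch
import torch.nn as nn

def patch_merging(x: torch.Tensor, linear: nn.Linear) -> torch.Tensor:
    """Group each 2x2 neighborhood of patches, concatenate their
    C-dim features into 4C, then project 4C -> 2C with a linear layer."""
    B, H, W, C = x.shape
    x0 = x[:, 0::2, 0::2, :]  # top-left patch of every 2x2 group
    x1 = x[:, 1::2, 0::2, :]  # bottom-left
    x2 = x[:, 0::2, 1::2, :]  # top-right
    x3 = x[:, 1::2, 1::2, :]  # bottom-right
    merged = torch.cat([x0, x1, x2, x3], dim=-1)  # (B, H/2, W/2, 4C)
    return linear(merged)                         # (B, H/2, W/2, 2C)

# usage: halve the spatial resolution, double the channels (C=96 -> 192)
reduction = nn.Linear(4 * 96, 2 * 96, bias=False)
out = patch_merging(torch.randn(1, 8, 8, 96), reduction)
```

So "4C-dimensional concatenated features" is just the four C-dim neighbors stacked along the channel axis, and the linear layer halves that to 2C.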

  • @tensing2009
    @tensing2009 2 years ago

    Great Video!
    Thanks for making it! :)

  • @triminh3849
    @triminh3849 3 years ago

    great video with excellent visualization, thanks a lot

    • @AIBites
      @AIBites  3 years ago

      Glad you like it! :)

  • @anhminhtran7609
    @anhminhtran7609 3 years ago

    Can you cover a bit more on using Swin for object detection please?

  • @JagannathanK-y5e
    @JagannathanK-y5e 1 year ago +1

    Great explanation

  • @saeedataei269
    @saeedataei269 2 years ago +1

    Thanks for the explanation. plz review more SOTA papers.

    • @AIBites
      @AIBites  2 years ago +1

      Sure will do Saeed! Thx. 🙂

  • @garyhuntress6871
    @garyhuntress6871 3 years ago +1

    Excellent review, thanks. I've subscribed for future papers! Do you use manim for your animations?

    • @AIBites
      @AIBites  3 years ago

      Hi Gary, Thanks for your comments! In some places I use manim but not always. :)

  • @EngRiadAlmadani
    @EngRiadAlmadani 3 years ago +2

    Thanks for this great video. Just one question: why do we use a linear layer in patch merging when we could reshape the input patches directly with a reshape method?

    • @AIBites
      @AIBites  3 years ago +2

      Great question. One thing I can think of is efficiency. I believe reshape is also challenging to propagate gradients backwards.

    • @Deshwal.mahesh
      @Deshwal.mahesh 2 years ago +1

      Maybe they're trying to make the model learn how to merge with knowledge? Just like solving a graphical puzzle?

    • @suke933
      @suke933 2 years ago

      @@AIBites Can we use the convolution within this scenario?
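
On the convolution question: yes, in principle. Concatenating non-overlapping 2×2 patch groups and applying a linear layer touches exactly the same inputs per output location as a kernel-2, stride-2 convolution, so with suitably reshaped weights the two are equivalent. A hedged sketch with illustrative sizes:

```python
import torch
import torch.nn as nn

# A kernel-2, stride-2 convolution sees exactly one 2x2 patch group per
# output location, mirroring concat-then-linear patch merging.
C = 96
conv = nn.Conv2d(C, 2 * C, kernel_size=2, stride=2, bias=False)

x = torch.randn(1, C, 8, 8)  # (B, C, H, W) layout, as Conv2d expects
y = conv(x)                  # (1, 2C, 4, 4): resolution halved, channels doubled
```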

  • @manub.n2451
    @manub.n2451 2 years ago +1

    Thank you so much

  • @muhammadwaseem_
    @muhammadwaseem_ 1 year ago +1

    Good explanation

  • @keroldjoumessi
    @keroldjoumessi 3 years ago +1

    Thanks for the video. It was very awesome and easy to follow. Still, even though the window architecture reduces the complexity of computing self-attention, I think we still have this computational issue for the overall image, and the attention becomes local as in CNNs instead of global. Anyway, thanks for your explanation.

    • @readera84
      @readera84 3 years ago +1

      How are you saying such complex things so easily 😫 I couldn't even understand what he said 🤕

    • @keroldjoumessi9597
      @keroldjoumessi9597 3 years ago

      @@readera84 What don't you understand? Maybe I can give you a hand.

    • @readera84
      @readera84 3 years ago

      @@keroldjoumessi9597 The windows shifting diagonally... can you make it clearer to me?
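
On the diagonal shift: the paper implements the shifted-window configuration efficiently by cyclically rolling the feature map (so the regular window partition can be reused) and masking attention across the wrapped boundary. A minimal sketch of the roll, with an illustrative function name:

```python
import torch

def shift_windows(x: torch.Tensor, shift: int) -> torch.Tensor:
    # x: (B, H, W, C). Rolling the map up and to the left by `shift`
    # realizes the diagonal window shift; attention across the wrapped
    # boundary is then suppressed with a mask (omitted here).
    return torch.roll(x, shifts=(-shift, -shift), dims=(1, 2))

x = torch.randn(1, 8, 8, 3)
shifted = shift_windows(x, 3)
restored = torch.roll(shifted, shifts=(3, 3), dims=(1, 2))  # reverse shift
```

Rolling is cheap and exactly invertible, which is why it is preferred over padding the oddly-sized border windows.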

  • @harutmargaryan9980
    @harutmargaryan9980 3 years ago

    Thank you, well done!

  • @knowhowww
    @knowhowww 3 years ago

    Thank you for the great effort.

    • @AIBites
      @AIBites  3 years ago

      My pleasure!

  • @anonymous-random
    @anonymous-random 3 years ago

    The video is awesome! Thanks a lot!

    • @AIBites
      @AIBites  3 years ago

      Glad you liked it!

  • @rajatayyab7737
    @rajatayyab7737 3 years ago +1

    Next should be Dynamic Head: Unifying Object Detection Heads with Attentions

    • @rybdenis
      @rybdenis 3 years ago

      agreed

    • @AIBites
      @AIBites  3 years ago

      Thanks Raja for pointing out. We will try to prioritise the paper at some point.

  • @taoufiqelfilali2224
    @taoufiqelfilali2224 3 years ago

    Great explanation, thank you

    • @AIBites
      @AIBites  3 years ago

      Thanks for your positive comment! :)

  • @parveenkaur2747
    @parveenkaur2747 3 years ago +1

    Very informative video!

    • @AIBites
      @AIBites  3 years ago

      Thanks! Glad you liked it.

  • @kashishbansal2651
    @kashishbansal2651 3 years ago

    AMAZING EXPLANATION!

  • @rybdenis
    @rybdenis 3 years ago +1

    cool, thank you

  • @jialima8298
    @jialima8298 3 years ago

    Love the voice!

  • @harshkumaragarwal8326
    @harshkumaragarwal8326 3 years ago

    great work, thanks :)

  • @peddisaivivek6676
    @peddisaivivek6676 2 years ago

    Great video. But can you refrain from putting music in the background while explaining? It's a little distracting when viewing at higher speed.

    • @AIBites
      @AIBites  2 years ago

      Sure will take it on board when making the future ones 👍

  • @nguyenanhnguyen7658
    @nguyenanhnguyen7658 3 years ago

    In NLP, you have at most 100,000 words to permute and train with. With images? Well, ViT with 400M images can hardly manage to match ImageNet :)