CoAtNet: Marrying Convolution and Attention for All Data Sizes

  • Published: 31 Jan 2025
  • Science

Comments • 23

  • @dfdiasbr
    @dfdiasbr 4 months ago +1

    Thank you for the video. I've been studying this model and it helped me a lot.

    • @AIBites
      @AIBites  4 months ago

      Glad it helped 👍

  • @wilhelm8735
    @wilhelm8735 3 years ago +2

    Super interesting! This is the first time I've seen one of your videos, and I subscribed right away. Thank you!

    • @AIBites
      @AIBites  3 years ago

      Thank you Wil. Very encouraging to continue my work 😊

  • @manub.n6223
    @manub.n6223 2 years ago

    Thank you so much for the brilliant explanation!!!
    I really wish you had explained the self-attention equation too.

    • @AIBites
      @AIBites  2 years ago

      Will try and make one about it. Thanks 👍
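
      For reference, the self-attention equation asked about here is the standard scaled dot-product attention from the original Transformer paper, where Q, K, V are the query, key, and value matrices and d_k is the key dimension:

      \[
      \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V
      \]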

  • @frenchmarty7446
    @frenchmarty7446 2 years ago +1

    3:00 The rotation example you use would actually confuse a CNN just as much as a transformer. Convolutions are translation invariant; they are *not* rotation or scale invariant without modification. Just wanted to point that out.
    Otherwise, excellent video.

    • @AIBites
      @AIBites  2 years ago

      Yup, I should have used a better example. This reminds me of the whole idea behind RotNet now :)
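
A quick numeric check of the point raised in the comment above: a plain convolution is translation-equivariant (shifting the input shifts the output by the same amount) but it is not rotation-invariant. The sketch below uses NumPy with a hand-rolled `conv2d` helper; it is an illustration only, not code from the paper or the video:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, as computed by a CNN layer (no padding)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
image = rng.random((8, 8))
kernel = rng.random((3, 3))   # a generic, non-symmetric filter

out = conv2d(image, kernel)

# Translation equivariance: shifting the input shifts the output identically
# (comparing only the region unaffected by np.roll's wrap-around).
shifted = np.roll(image, shift=2, axis=1)
out_shifted = conv2d(shifted, kernel)
print(np.allclose(out_shifted[:, 2:], out[:, :-2]))   # True

# No rotation invariance: rotating the input does NOT simply rotate the output.
rotated = np.rot90(image)
out_rotated = conv2d(rotated, kernel)
print(np.allclose(out_rotated, np.rot90(out)))        # False
```

The rotation check fails because the filter itself is not rotation-symmetric; the convolution's weight sharing is defined over translations of the grid, not rotations.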

  • @ShravanKumar147
    @ShravanKumar147 1 year ago

    Thanks for all your work, your explanation is impressive with such beautiful illustrations.
    What tool do you use to create those illustrations?

    • @AIBites
      @AIBites  1 year ago

      Thanks for your feedback, Shravan! I mostly use Google Slides; I've created my own template for AI Bites. Every now and then I use Manim, which is an amazing math animation library written in Python. Hope it helps.

  • @africandawahrevival
    @africandawahrevival 2 years ago +1

    Great explanation, please do (Slimmable Net) next.

    • @AIBites
      @AIBites  2 years ago

      Yup, sure, will try to do Slimmable Nets next. Thanks.

  • @maximm5181
    @maximm5181 2 years ago

    Nice, informative video, keep going!

    • @AIBites
      @AIBites  2 years ago

      Thanks, will do!

  • @filipelauar2686
    @filipelauar2686 3 years ago

    Your videos are great, congratulations, and thanks a lot for making them!!

    • @AIBites
      @AIBites  3 years ago

      Thanks for your compliments.

  • @prathyyyyy
    @prathyyyyy 2 years ago

    Can you talk about CoCa (Contrastive Captioners) and how to use it?

    • @AIBites
      @AIBites  2 years ago

      Sure will do at some point. Thanks for watching the other videos meanwhile :)

  • @ahmedchaoukichami9345
    @ahmedchaoukichami9345 2 years ago +1

    Thank you so much, it's a great explanation. I'm looking for the code to apply it to my dataset; please share it if you do an implementation.

    • @AIBites
      @AIBites  2 years ago

      Will try to do implementation videos at some point. Thanks for the constructive feedback.

  • @aleksandramikhailova354
    @aleksandramikhailova354 3 years ago +1

    I would say Conv with self-attention🙃
    Similar to CNN+BERT

  • @LidoList
    @LidoList 1 year ago

    Dude, do something with your mic. It cuts in and out during your video, sometimes right when you're saying something very important.

    • @AIBites
      @AIBites  1 year ago

      Hey, thanks for pointing that out. Let me check the setup again and see what's causing the intermittent dropouts.