Vision Transformer Basics

  • Published: Dec 25, 2024

Comments • 47

  • @rldp
    @rldp 1 year ago +44

    This is one of the best explanations of not just ViT, but transformers in general that I have watched. Excellent video

  • @whale27
    @whale27 1 year ago +27

    Unbelievable quality. Happy to be here before this channel blows up.

  • @capsbr2100
    @capsbr2100 9 months ago +8

    Goodness, what a remarkable video. This is by far the best explanation video I have watched about vision transformers.

  • @UWIMANALowamiLowami
    @UWIMANALowamiLowami 2 hours ago

    Thank you for this incredible piece of content. I feel like I understand transformers more

  • @thetechnocrack
    @thetechnocrack 10 months ago +6

    This is one of the cleanest explanation of ViTs I have come across. Amazing work Samuel! Inspiring.

  • @newbie8051
    @newbie8051 2 months ago +1

    Took an Image Processing course last semester, and one of the topics the prof suggested learning about was ViTs.
    I knew the basics of Deep Learning and over the summer explored more about autoencoders.
    Starting in August, I delved into the Transformer architecture, and now ViT seems very simple.
    It's just that we divide the input image into patches, line these patches up in a sequence, convert them to embeddings, add positional vectors, and boom, feed it to the transformer module (a rough sketch of this pipeline follows below).
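The pipeline described in the comment above (patchify, embed, add positional vectors, feed to the encoder) can be sketched in a few lines of PyTorch. This is an illustrative sketch assembled from that description, not code from the video; the image size, patch size, depth, and head count are assumed ViT-Base-like values.

import torch
import torch.nn as nn

# Assumed hyperparameters, purely illustrative.
image_size, patch_size, dim = 224, 16, 768
num_patches = (image_size // patch_size) ** 2            # 14 * 14 = 196 patches

patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
cls_token = nn.Parameter(torch.zeros(1, 1, dim))
pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=12, dim_feedforward=4 * dim,
                               activation="gelu", batch_first=True, norm_first=True),
    num_layers=12,
)

x = torch.randn(1, 3, image_size, image_size)            # dummy image batch
tokens = patch_embed(x).flatten(2).transpose(1, 2)       # (1, 196, 768): one embedding per patch
tokens = torch.cat([cls_token, tokens], dim=1)           # prepend the class token
tokens = tokens + pos_embed                              # add learned positional embeddings
out = encoder(tokens)                                    # (1, 197, 768)
print(out.shape)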

  • @jesusalpaca7170
    @jesusalpaca7170 9 months ago +1

    For a beginner like me, I would say this is the intro video we were waiting for :')

  • @srinjoy.bhuiya
    @srinjoy.bhuiya 6 months ago +4

    One of the greatest explanations of transformer concepts for a Computer Vision researcher.

  • @continuallearning8366
    @continuallearning8366 1 year ago +5

    Excellent video! Honored to be here before it goes viral 🙏🏾

  • @RayWang-m6b
    @RayWang-m6b 1 year ago +5

    Thank you for making this wonderful video. So clear! Please continue your awesome video work!

  • @aakashsharma3179
    @aakashsharma3179 2 months ago

    I have never been so HOOKED while watching a video that goes fairly "deep" into such topics. Very well presented. Keep up the good work, man.

  • @thecheekychinaman6713
    @thecheekychinaman6713 10 months ago

    I was studying up on Transformers and ViTs half a year ago, and recently checked back to find this (to my surprise). Great clear explanations, can tell CAML is in great hands!

  • @PotatoKaboom
    @PotatoKaboom 1 year ago +4

    I've held guest lectures on the inner workings of transformers myself, but I still learned a bunch from this! Everything after 22:15 was very exciting to watch, very well presented and easy to understand! Very well done, I subscribed for more :)

  • @abhimanyuyadav2685
    @abhimanyuyadav2685 1 year ago +1

    Your weekly AI news was really useful.
    Please bring it back.

  • @vil9386
    @vil9386 10 months ago

    Wow, this video helped me a lot in understanding Attention and ViT. Packed with all the logic needed to design a solution using the latest techniques as of today.

  • @이연우-i2n
    @이연우-i2n 1 year ago +2

    🎯 Key Takeaways for quick navigation:
    00:00 🧠 *The Evolution of AI and Computer Vision*
    - General methods leveraging computation prove most effective in AI development.
    - Evolution from handcrafted features to Convolutional Neural Networks (CNNs) and then to Transformers, showcasing a reduction in inductive biases and an increase in data-driven approaches.
    01:09 🤖 *Neural Network Architectures*
    - Importance of network architecture in building intelligent machines.
    - Distinction between network architecture and network parameters, focusing on resource limitations and efficient design.
    02:32 💡 *Introduction to Transformers*
    - Transformers' dominance in AI, initially in Natural Language Processing (NLP) and then in Computer Vision.
    - Discussion on why Transformers took time to transition from NLP to Computer Vision.
    03:57 🌐 *Understanding Transformers: Encoder and Decoder*
    - Explanation of the Transformer architecture with its encoder and decoder components.
    - Different variants of Transformers: Encoder-only, Decoder-only, and Encoder-Decoder architectures.
    05:33 🔍 *Applying Transformers to Computer Vision*
    - Vision Transformers (ViT) process images by slicing them into patches, using position embeddings and Transformer encoders.
    - The methodology of transforming images into a sequence of embeddings for the Transformer encoder.
    07:08 🔗 *Multi-Head Attention in Transformers*
    - Detailed explanation of the multi-head attention mechanism in Transformers.
    - Role of queries, keys, and values in facilitating communication between different embeddings.
    09:12 🧩 *Transformer Encoder Blocks and Scaling*
    - The structure and function of Transformer encoder blocks, including multi-head attention and MLP.
    - Importance of residual connections and layer normalization in optimizing Transformer models.
    11:05 🚀 *Scaling and Hardware Influence in AI*
    - The impact of scaling and hardware advancements on Transformer model performance.
    - Discussion on the exponential increase in computational resources for training large models.
    13:50 🛠 *MLP and Optimization in Transformers*
    - Role of the multi-layer perceptron (MLP) in Transformer architecture for independent processing of embeddings.
    - Importance of non-linearities like ReLU and GELU in Transformer models.
    15:00 ⚙️ *Residual Connections and Layer Normalization*
    - Implementation and significance of residual connections and layer normalization in Transformers.
    - These components facilitate gradient flow and stable learning in deep network training.
    17:05 🌐 *Positional Embeddings in Transformers*
    - Explanation of positional embeddings in Transformers, necessary for maintaining spatial information in sequences.
    - Different methods of implementing positional embeddings in Transformer models.
    19:27 🔄 *Cross Attention and Causal Attention in Transformers*
    - Discussion of
    Made with HARPA AI
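
The 07:08 to 15:00 takeaways above (multi-head attention, the per-token MLP with a GELU non-linearity, residual connections, and layer normalization) all describe the parts of a single encoder block. Below is a minimal pre-norm sketch of such a block in PyTorch; the embedding size, head count, and MLP ratio are assumptions, not values taken from the video.

import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    def __init__(self, dim=768, num_heads=12, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, mlp_ratio * dim),
            nn.GELU(),                                   # non-linearity, applied to each token independently
            nn.Linear(mlp_ratio * dim, dim),
        )

    def forward(self, x):                                # x: (batch, tokens, dim)
        # Attention lets embeddings exchange information; the residual keeps gradients flowing.
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        # The MLP then processes every token on its own, again with a residual connection.
        x = x + self.mlp(self.norm2(x))
        return x

tokens = torch.randn(1, 197, 768)                        # e.g. 196 patch tokens + 1 class token
print(EncoderBlock()(tokens).shape)                      # torch.Size([1, 197, 768])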

  • @piclkesthedrummer6439
    @piclkesthedrummer6439 7 months ago

    This is by far one of the most accurate, yet understandable and intuitive explanations of such a hard concept; you did a better job at explaining it than the authors! Very impressive!

  • @mattsong6875
    @mattsong6875 1 year ago +2

    Thanks for such an informative and educational video

  • @aminkarimi1068
    @aminkarimi1068 7 months ago

    The best video to easily understand ViT

  • @MdAkmolMasud
    @MdAkmolMasud 6 months ago

    The best explanation of ViT..

  • @sbdzdz
    @sbdzdz 1 year ago +2

    Very well presented!

  • @rmmajor
    @rmmajor 8 months ago

    That is a masterpiece of a video! Many thanks for your work!

  • @ShravanKumar147
    @ShravanKumar147 5 months ago

    Beautifully put together. Keep it going @Sam

  • @soylentpink7845
    @soylentpink7845 1 year ago +2

    Very good video - the content & its presentation!

  • @amoghjain
    @amoghjain 1 year ago +2

    Thank you so very much for sharing your insights and intuition behind soooo many concepts.

  • @251_satyamrai4
    @251_satyamrai4 3 months ago

    beautifully explained.

  • @plutophy1242
    @plutophy1242 2 months ago

    excellent contents and slides!!!

  • @gnorts_mr_alien
    @gnorts_mr_alien 8 months ago

    man, what a video. thank you!

  • @minute_machine_learning5362
    @minute_machine_learning5362 7 months ago

    great explanation

  • @flamboyanta4993
    @flamboyanta4993 1 year ago +2

    Excellent and clearly communicated. Thanks.
    Question: at 20:05, when discussing positional embeddings, the legend of the waves says dim 4, ..., dim 7. Here, does dim refer to the length of the patch embedding D? As in, we'll get as many sine waves as there are D dims?
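
On the question above: in standard sinusoidal positional embeddings, each of the D embedding dimensions gets a sine or cosine wave with its own frequency over positions, so a legend like "dim 4 ... dim 7" usually labels individual embedding dimensions, and there are as many waves as there are dimensions. Whether the video uses exactly this scheme is not confirmed here; the NumPy sketch below shows the usual construction from the original Transformer paper, with assumed sizes.

import numpy as np

def sinusoidal_positions(num_positions: int, dim: int) -> np.ndarray:
    # "Attention Is All You Need" construction: one frequency per pair of dimensions.
    positions = np.arange(num_positions)[:, None]                  # (N, 1)
    freqs = np.exp(-np.log(10000.0) * np.arange(0, dim, 2) / dim)  # (D/2,)
    table = np.zeros((num_positions, dim))
    table[:, 0::2] = np.sin(positions * freqs)                     # even dims: sine waves
    table[:, 1::2] = np.cos(positions * freqs)                     # odd dims: cosine waves
    return table                                                   # (N, D)

pe = sinusoidal_positions(num_positions=196, dim=768)
print(pe.shape)   # (196, 768); plotting pe[:, 4] ... pe[:, 7] gives the per-"dim" waves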

  • @shyb8079
    @shyb8079 7 months ago

    Thank you for your content.

  • @siriuscoding
    @siriuscoding 1 month ago

    superb

  • @geomanisgod
    @geomanisgod 9 months ago

    A+++ quality from other planets.

  • @zainbaloch5541
    @zainbaloch5541 8 months ago

    Thank you so much!

  • @EigenA
    @EigenA 9 months ago

    Great work!

  • @tomrichter9021
    @tomrichter9021 10 months ago

    Great video

  • @flamboyanta4993
    @flamboyanta4993 1 year ago +1

    Another question:
    At 30:00, discussing how early attention layers tend to focus on local features and deeper ones on more global features of the input: I didn't understand the significance of the x-axis (sorted attention head). Is this just a count of how many attention heads there are in the respective block? And does this suggest that, in the large-data regime, even early attention blocks with 14+ heads will also tend to observe features globally? Is this correct?
    And thank you in advance!

  • @miraclemaxicl
    @miraclemaxicl 9 months ago +1

    More Compute Is All You Need

  • @iez
    @iez 10 months ago

    any ViTs that are open source?

  • @capsbr2100
    @capsbr2100 9 months ago

    So for someone approaching this now, working on resource-constrained devices for both training and inference, does it make more sense to just stick to CNNs?

  • @felipesuarez5041
    @felipesuarez5041 4 months ago

    Crazy how transformers are beating all these classical architectures like CNNs, which have been used since ancient Greek times.

  • @AKD-le2kb
    @AKD-le2kb 6 months ago

    w
