Transformers explained | The architecture behind LLMs

  • Published: 25 Dec 2024

Comments • 113

  • @YuraCCC
    @YuraCCC 11 months ago +15

    Thanks for the explanation. At 9:19: shouldn't the order of multiplication be the opposite here? E.g. x1 (vector) * Wq (matrix) = q1 (vector). Otherwise I don't understand how we get the 1x3 dimensionality at the end.

    • @AICoffeeBreak
      @AICoffeeBreak  11 months ago +9

      Oh, shoot, messed up the order in the animations there. You are right. Sorry, pinning your comment.

    • @YuraCCC
      @YuraCCC 11 months ago +1

      No problem, thanks for clarifying that, and thanks again for the great video! @@AICoffeeBreak

    • @scifaipy9301
      @scifaipy9301 6 months ago

      The vectors should be column vectors.
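For anyone following along, the corrected order is easy to check numerically. A minimal sketch, assuming toy dimensions (4-dimensional embeddings, 3-dimensional queries; all sizes illustrative):

```python
import numpy as np

# Toy sizes (illustrative only): 4-dim input embedding, 3-dim query.
d_model, d_k = 4, 3

rng = np.random.default_rng(0)
x1 = rng.standard_normal((1, d_model))     # row vector, shape (1, 4)
W_q = rng.standard_normal((d_model, d_k))  # query projection, shape (4, 3)

# Row-vector convention: vector times matrix gives the 1x3 query.
q1 = x1 @ W_q                              # shape (1, 3)

# The column-vector convention (as noted above) flips the order:
q1_col = W_q.T @ x1.T                      # shape (3, 1), same numbers
```

Either convention works; what matters is that the vector's dimension lines up with the matching side of the matrix.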

  • @420_gunna
    @420_gunna 11 months ago +6

    Awesome video, thank you! I love the idea of you revisiting older topics -- either as a 201 or as a re-introduction. "Attention combines the representation of input vector's value vectors, weighted by the importance score (computed by the query and key vectors)."

    • @AICoffeeBreak
      @AICoffeeBreak  11 months ago +3

      Thanks for your appreciation!
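The quoted one-line summary maps almost word-for-word onto code. A minimal sketch with made-up toy shapes (the learned projections from the input are omitted):

```python
import numpy as np

def attention(Q, K, V):
    # Importance scores computed from queries and keys, softmax-normalized,
    # then used to weight and combine the value vectors.
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = e / e.sum(axis=-1, keepdims=True)   # each row sums to 1
    return weights @ V                            # weighted mix of values

rng = np.random.default_rng(1)
Q = rng.standard_normal((5, 3))   # 5 positions, 3-dim vectors (toy shapes)
K = rng.standard_normal((5, 3))
V = rng.standard_normal((5, 3))
out = attention(Q, K, V)          # one combined value vector per position
```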

  • @MachineLearningStreetTalk
    @MachineLearningStreetTalk 11 months ago +5

    Epic as always 🤌

  • @abhishek-tandon
    @abhishek-tandon 11 months ago +7

    One of the best videos on transformers that I have ever watched. Views 📈

    • @AICoffeeBreak
      @AICoffeeBreak  11 months ago +1

      Do you have examples of others you liked?

  • @xyphos915
    @xyphos915 11 months ago +9

    Wow, this explanation on the difference between RNNs and Transformers at the end is what I was missing!
    I've always heard that Transformers are great because of parallelization but never really saw why until today, thank you! Great video!

  • @heejuneAhn
    @heejuneAhn 6 months ago +2

    BEST of BEST explanation: 1) visually, 2) intuitively, 3) by numerical examples. And your English is easier for non-native listeners to follow than a native speaker's.

  • @DerPylz
    @DerPylz 11 months ago +12

    Wow, you've come a long way since your first transformer explained video!

  • @jcneto25
    @jcneto25 11 months ago +4

    Best didactic explanation of Transformers so far. Thank you for sharing it.

    • @AICoffeeBreak
      @AICoffeeBreak  10 months ago +1

      Wow, thanks! Glad it's helpful.

  • @DaveJ6515
    @DaveJ6515 11 months ago +9

    You know how to explain things. This one is not easy: I can see the amount of work that went into this video, and it was a lot. I hope that your career takes you where you deserve.

    • @AICoffeeBreak
      @AICoffeeBreak  11 months ago +1

      Thanks for watching and thanks for the kind words. All the best to you as well!

  • @cosmic_reef_17
    @cosmic_reef_17 11 months ago +5

    Thank you very much for the very clear explanations and detailed analysis of the transformer architecture. You're truly the 3blue1brown of machine learning!

  • @mumcarpet109
    @mumcarpet109 11 months ago +6

    your videos have helped visual learners like me so much, thank you

  • @l.suurmeijer1382
    @l.suurmeijer1382 11 months ago +5

    Absolute banger of a video. Wish I had seen this when I was learning about transformers in uni last year :-)

    • @AICoffeeBreak
      @AICoffeeBreak  11 months ago +1

      Haha, glad I could help. Even if a bit late.

  • @davidespinosa1910
    @davidespinosa1910 3 months ago

    Time is quadratic, but memory is linear -- see the FlashAttention paper.
    But the number of parameters is constant -- that's the magic !
    Thanks for the excellent videos ! 👍
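The "constant number of parameters" point is easy to verify: the learned projection matrices are sized by the model width alone, never by the sequence length. A toy sketch (all sizes illustrative):

```python
import numpy as np

# Toy model width (illustrative): the projection matrices depend only
# on d_model, so the parameter count is fixed before any input arrives.
d_model = 8
rng = np.random.default_rng(2)
W_q = rng.standard_normal((d_model, d_model))
W_k = rng.standard_normal((d_model, d_model))
W_v = rng.standard_normal((d_model, d_model))
n_params = W_q.size + W_k.size + W_v.size   # constant: 3 * 8 * 8 = 192

for seq_len in (4, 64, 1024):
    X = rng.standard_normal((seq_len, d_model))
    scores = (X @ W_q) @ (X @ W_k).T        # (seq_len, seq_len): quadratic work
    # ...the score matrix grows with the sequence; the weights never do.
```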

  • @partywen
    @partywen 6 months ago +2

    Super informative and helpful! Thanks a lot!

  • @Thomas-gk42
    @Thomas-gk42 11 months ago +6

    Understood about 10%, but I like these videos and intuitively feel their usefulness.

  • @Clammer999
    @Clammer999 7 months ago +2

    Thanks so much for this video. I’ve gone through a number of videos on transformers and this is much easier to grasp and understand for a non-data scientist like myself.

  • @darylallen2485
    @darylallen2485 8 months ago +3

    Letitia, you're awesome and I look forward to learning more from you.

  • @zahrashah6567
    @zahrashah6567 8 months ago +1

    What a wonderful explanation😍 Just discovered your channel and absolutely loving the explanations as well as visuals😘

  • @MuruganR-tg9yt
    @MuruganR-tg9yt 10 months ago +3

    Thank you. Nice explanation 😊

  • @dannown
    @dannown 11 months ago +4

    Really appreciate this video.

  • @mccartym86
    @mccartym86 10 months ago +3

    I think I had at least 10 aha moments watching this, and I've watched many videos on these topics. Incredible job, thank you!

    • @AICoffeeBreak
      @AICoffeeBreak  10 months ago +1

      Wow, thank you for this wonderful comment!

  • @manuelafernandesblancorodr6366
    @manuelafernandesblancorodr6366 10 months ago +3

    What a wonderful video! Thank you so much for sharing it!

    • @AICoffeeBreak
      @AICoffeeBreak  10 months ago +1

      Thank you too for this wonderful comment!

  • @rahulrajpvr7d
    @rahulrajpvr7d 11 months ago +6

    Tomorrow I have my thesis evaluation and I was thinking about watching this video again, but the YouTube algorithm suggested it without me searching for anything. Thank you, YouTube algo..
    😅❤🔥

  • @connor-shorten
    @connor-shorten 11 months ago +5

    Awesome! Epic Visuals!

  • @DatNgo-uk4ft
    @DatNgo-uk4ft 11 months ago +4

    Great Video!! Nice improvement over the original

  • @muhammedaneesk.a4848
    @muhammedaneesk.a4848 11 months ago +4

    Thanks for the explanation 😊

  • @xxlvulkann6743
    @xxlvulkann6743 8 months ago +2

    This is a very well-made explanation. I hadn't known that the feedforward layers only received one token at a time. Thanks for clearing that up for me! 😁
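That per-token behaviour is easy to demonstrate: a position-wise feed-forward layer applied to a whole sequence gives exactly the same result as applying it to each token separately. A small sketch with made-up dimensions:

```python
import numpy as np

def ffn(x, W1, b1, W2, b2):
    # Position-wise feed-forward: the same weights applied to each
    # token vector independently (ReLU nonlinearity in between).
    return np.maximum(0, x @ W1 + b1) @ W2 + b2

rng = np.random.default_rng(3)
d_model, d_ff = 4, 16                      # toy sizes
W1 = rng.standard_normal((d_model, d_ff)); b1 = np.zeros(d_ff)
W2 = rng.standard_normal((d_ff, d_model)); b2 = np.zeros(d_model)

tokens = rng.standard_normal((5, d_model))            # 5 token vectors
batched = ffn(tokens, W1, b1, W2, b2)                 # all at once
one_by_one = np.stack([ffn(t, W1, b1, W2, b2) for t in tokens])
# Identical results: the FFN never mixes information across positions.
```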

  • @realbenjoyo
    @realbenjoyo 24 days ago +1

    This was really great; I never really understood query, key and value before.

  • @GarySuffield-w9p
    @GarySuffield-w9p 11 months ago +5

    Really well done and easy to follow, thank you

  • @gettingdatasciencedone
    @gettingdatasciencedone 29 days ago +1

    Great explanation -- loving your videos. The time codes for specific topics are really useful.

  • @jonas4223
    @jonas4223 11 months ago +4

    Today I had the problem that I needed to understand how Transformers work. I searched on YouTube and found your video 20 minutes after release. What perfect timing!

  • @HarishAkula-df8gs
    @HarishAkula-df8gs 8 months ago +2

    Amazing explanation, Thank you! Just discovered your channel and I really like how the difficult topics are demystified.

  • @supanutsookkho2749
    @supanutsookkho2749 5 months ago +2

    Great video and a good explanation. Thanks for your hard work on this amazing video!!

  • @SamehSyedAjmal
    @SamehSyedAjmal 11 months ago +4

    Thank you for the video! Maybe an explanation on the Mamba Architecture next?

    • @AICoffeeBreak
      @AICoffeeBreak  10 months ago +3

      The Mamba and SSM beans are roasting as we speak.

  • @phiphi3025
    @phiphi3025 11 months ago +3

    Thanks, you helped so much in explaining Transformers to my PhD advisors.

    • @AICoffeeBreak
      @AICoffeeBreak  11 months ago +1

      This is really funny. In what field are you doing your PhD? 😅

  • @ehudamitai
    @ehudamitai 11 months ago +3

    At 11:14, the weighted sum is the sum of 3 vectors of 3 elements each, but the result is a vector of 4 elements. Which, conveniently, is the same size as the input vector. Could there be a missing step there?

    • @AICoffeeBreak
      @AICoffeeBreak  11 months ago +2

      Yes, there is a missing back transformation to 4 dimensions I skipped. :) Well spotted!
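For readers puzzled by the same step: the missing back-transformation is just one more learned matrix (the output projection, often written W_O). A toy sketch with the dimensions from the question:

```python
import numpy as np

rng = np.random.default_rng(4)
d_model, d_v = 4, 3                         # dimensions from the question

values = rng.standard_normal((3, d_v))      # three 3-element value vectors
weights = np.array([0.2, 0.5, 0.3])         # attention weights (sum to 1)

mixed = weights @ values                    # weighted sum: still 3 elements
W_o = rng.standard_normal((d_v, d_model))   # the skipped back-transformation
out = mixed @ W_o                           # 4 elements, matching the input size
```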

  • @tildarusso
    @tildarusso 11 months ago +4

    As far as I am aware, word embedding has changed from legacy static embeddings like Word2Vec/GloVe (with the famous queen = woman + king - man metaphor) to BPE & unigram tokenization. This change gave me quite a headache, as most papers do not mention any details of their "word embedding". Perhaps, Letitia, you can make a video to clarify this a bit for us.

  • @volpir4672
    @volpir4672 11 months ago +5

    That's great. I'm a little stuck on the special mask token... I'll keep digging. Good info; the video is a good explanation. It allows for more experimentation instead of relying on open-source models whose components can look like a black box to noobs like me :)

  • @tomoki-v6o
    @tomoki-v6o 11 months ago +3

    Well explained, as you promised.

  • @pfever
    @pfever 10 months ago +2

    Just discovered your channel and this is great! Thank you! :D

    • @AICoffeeBreak
      @AICoffeeBreak  10 months ago +1

      Thank you! Hope to see you again soon in the comments.

  • @uw10isplaya
    @uw10isplaya 6 months ago +2

    Had to go back and rewatch a section after I realized I'd been spacing out staring at the coffee bean's reactions.

  • @paprikar
    @paprikar 11 months ago +3

    here we go!
    TY for content

  • @ArthasDKR
    @ArthasDKR 11 months ago +3

    Excellent explanation. Thank you!

  • @bartlomiejkubica1781
    @bartlomiejkubica1781 10 months ago +2

    Thank you! Finally, I start to get it...

  • @M4ciekP
    @M4ciekP 11 months ago +5

    How about a video explaining SSMs?

    • @AICoffeeBreak
      @AICoffeeBreak  11 months ago +2

      ✍️

    • @AICoffeeBreak
      @AICoffeeBreak  10 months ago +2

      Psst: This will be the video coming up in a few days. It's in editing right now.

    • @M4ciekP
      @M4ciekP 10 months ago

      Yaay! @@AICoffeeBreak

  • @l3nn13
    @l3nn13 11 months ago +4

    great video

    • @AICoffeeBreak
      @AICoffeeBreak  11 months ago +1

      Thanks for the visit and for leaving the comment!

  • @ai-interview-questions
    @ai-interview-questions 11 months ago +3

    Thank you, Letitia!

  • @Ben_D.
    @Ben_D. 9 months ago +2

    ...ok. After binging some of your vids, I now need to go make coffee. 😆

  • @zbynekba
    @zbynekba 11 months ago +3

    ❤ Letitia, thank you for the great visualization and intuition. For inspiration: in the original paper, the decoder utilizes the output of the encoder by running a cross-attention process. Why does GPT not use an encoder? As you've mentioned, the encoder is typically used for classification, while the decoder is for text generation. They are never used in combination. Why is this the case?
    Missing intuition: why does the cross-attention layer inside the decoder take the values from the ENCODER's output to create the enhanced embeddings (as a weighted mix)? Intuitively, I would use the values from the DECODER.

    • @AICoffeeBreak
      @AICoffeeBreak  11 months ago +3

      Thanks for your thoughts! Encoders are sometimes used in combination with decoders, right? The most famous example is the T5 architecture.

    • @zbynekba
      @zbynekba 11 months ago +2

      Thanks for your prompt reply. Hence, understanding the concept and intuition behind feeding the encoder output into the decoder is essential. I found only this one video on encoder-decoder cross-attention:
      ruclips.net/video/Dqjq4Gxdhng/видео.htmlsi=gtLzNxAU0pUGyLvk
      In it, Lennart emphasizes that, based on the original equations, the enhanced embeddings are calculated as a weighted sum of ENCODER values. Inside a DECODER, I would rather expect the DECODER values to pass through.
      Letitia, I am sure you will resolve this mystery. 🍀
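The asymmetry discussed in this thread shows up directly in a minimal cross-attention sketch (toy shapes, learned projections omitted): queries come from the decoder, while keys AND values come from the encoder, so the output is one encoder-informed vector per decoder position:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(5)
d = 3                                     # toy width
enc_out = rng.standard_normal((6, d))     # encoder output: 6 source positions
dec_hid = rng.standard_normal((4, d))     # decoder states: 4 target positions

# Cross-attention: queries from the DECODER, keys and values from the ENCODER.
Q, K, V = dec_hid, enc_out, enc_out       # learned projections omitted
out = softmax(Q @ K.T / np.sqrt(d)) @ V   # (4, 3): a mix of ENCODER values,
                                          # selected by DECODER queries
```

The decoder's own values still flow through the residual stream and the self-attention sublayer; cross-attention is the step where encoder information is pulled in.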

  • @LEQN
    @LEQN 9 months ago +1

    Awesome video :) thanks!

    • @AICoffeeBreak
      @AICoffeeBreak  9 months ago +1

      Thank you for watching and for your wonderful comment!

  • @Jeshhhhhh
    @Jeshhhhhh 4 months ago +2

    Oh my goddess in disguise, I thank you for saving me from depths of hell. Lots of love

  • @TheAlexBell
    @TheAlexBell 2 months ago

    Good explanation. Most videos on attention focus on how it's implemented, not on the design choices behind it. To my understanding, the goal was to mitigate the computational inefficiencies of RNNs and the spatial limitations of CNNs in order to achieve a universal representation of a sequence. I wanted to clarify one thing: you depicted multiple FFNNs similarly to how RNNs are usually rolled out. Is it just the same one FFNN that takes a single attention-encoded vector as input and predicts the next token from this ONE vector? By the way, what brand is that sweater? Loro Piana? :)

  • @kallamamran
    @kallamamran 11 months ago +3

    Phew 😳

  • @LinkhManu
    @LinkhManu 1 month ago

    You’re the best 👏👏👏

  • @benjamindilorenzo
    @benjamindilorenzo 10 months ago

    What a great video.
    It could still expand more and really sum up every sub-part, connecting each one to a clear visualization or a clear step of what happens to the information at each time step and how its "transformation" progresses over time.
    So I think you could redo this video and really make it monkey-proof for folks like me.
    But beware: the StatQuest version, for example, is too slow and too repetitive, and also does not really capture what goes on inside the Transformer once all the steps are stacked together.
    Great work!

  • @heejuneAhn
    @heejuneAhn 5 months ago

    Thanks for your video. I have a question on the inference process. For example, when I have an input prompt of 2 tokens = {t1, t2}, we get the output {o1, o2, o3}. We take only o3 and make a new input sequence {t1, t2, o3}. Then we get another output {o'1, o'2, o'3, o'4}.
    Here are my questions: when we use causal masking for attention, is o1 = o'1 and o2 = o'2, and so on? Another question: even though the mask guarantees causal attention, the matrix calculation is still performed, which means the computation is used anyway. How can we reduce the computational cost in this case?
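On the recomputation part of this question: in practice, decoders avoid redoing the masked work by caching keys and values, so each new token attends only over what is already stored. This is the usual KV-cache idea, sketched here with toy vectors (learned projections omitted):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

d = 3                         # toy width; learned projections omitted
k_cache, v_cache = [], []     # keys/values of tokens processed so far

def step(x):
    # One decoding step: store this token's key/value, then attend only
    # over the cache. Causality comes for free and earlier tokens'
    # keys/values are never recomputed.
    k_cache.append(x)
    v_cache.append(x)
    K, V = np.stack(k_cache), np.stack(v_cache)
    return softmax(x @ K.T / np.sqrt(d)) @ V

rng = np.random.default_rng(6)
outs = [step(rng.standard_normal(d)) for _ in range(4)]
```

With a causal mask, earlier positions cannot see later ones, which is also why their outputs do not change when the sequence is extended.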

  • @nmfhlbj
    @nmfhlbj 9 months ago

    Hi! Can I ask how you get the dimension (d)? Because all I know is that dimension can be read off square matrices, and the dot product in the attention formula says Q•K^T. If we're using 1x3 matrices, we'll get a 1x1 matrix, i.e. 1 dimension; how do you get 3? Unless it's a 3x1 matrix beforehand, so we get a 3x3, i.e. 3-dimensional, matrix.
    Thank you!

    • @AICoffeeBreak
      @AICoffeeBreak  9 months ago +1

      Hi, if you mean the mistake at 10:00, then the problem is that I have written matrix times vector when I should have written vector times matrix!
      (or I could have used column vectors instead of row vectors). Is this what you mean?
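To illustrate the reply above: d (usually written d_k) is the length of the query/key vectors themselves, used for scaling, not the shape of the score matrix. A tiny sketch:

```python
import numpy as np

q = np.array([[1.0, 2.0, 3.0]])    # one query as a 1x3 row vector
K = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])    # two keys, each 3-dimensional

scores = q @ K.T                   # (1, 3) @ (3, 2) -> (1, 2): one score per key
d_k = q.shape[-1]                  # d_k = 3: the vector length, used for scaling
scaled = scores / np.sqrt(d_k)
```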

  • @DaeOh
    @DaeOh 11 months ago +4

    Everything makes sense except multiple attention heads. Each layer has only one set of Q, K, V, O matrices. But 8 attention heads per layer? I want to understand that.

    • @AICoffeeBreak
      @AICoffeeBreak  11 months ago +5

      Think about it this way: in one layer, instead of having one head telling you how to pay attention to things, you have 8.
      In other words, instead of having one person shouting at you the things they want you to pay attention to, you have 8 people shouting simultaneously.
      This is beneficial because it has an ensembling effect (the effect of a voting parliament; think of Random Forests, which are an ensemble of Decision Trees).
      I do not know if this helps, but I thought I'd give explaining this another shot.
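To make the "8 people shouting" picture concrete: in the common implementation, a layer's single set of projection matrices is logically sliced into per-head pieces, each head attends independently, and the results are concatenated and mixed by the output matrix. A toy sketch with 2 heads and made-up sizes:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(7)
seq, d_model, n_heads = 5, 8, 2
d_head = d_model // n_heads     # each head works in a smaller subspace

X = rng.standard_normal((seq, d_model))
# One Q/K/V/O matrix per layer, logically split into n_heads slices:
W_q = rng.standard_normal((d_model, d_model))
W_k = rng.standard_normal((d_model, d_model))
W_v = rng.standard_normal((d_model, d_model))
W_o = rng.standard_normal((d_model, d_model))

def split(M):  # (seq, d_model) -> (n_heads, seq, d_head)
    return M.reshape(seq, n_heads, d_head).transpose(1, 0, 2)

Q, K, V = split(X @ W_q), split(X @ W_k), split(X @ W_v)
heads = softmax(Q @ K.transpose(0, 2, 1) / np.sqrt(d_head)) @ V
# Concatenate the heads back together and mix them with W_o:
out = heads.transpose(1, 0, 2).reshape(seq, d_model) @ W_o
```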

  • @josephvanname3377
    @josephvanname3377 11 months ago

    I want to train a transformer that eats a row of matrices instead of just a row of vectors.

  • @davide0965
    @davide0965 6 days ago

    Terrible

    • @DerPylz
      @DerPylz 6 days ago

      If you don't like her videos, why do you keep coming back to them just to comment that you didn't like it? Just watch something else.