CS480/680 Lecture 19: Attention and Transformer Networks

  • Published: 17 Nov 2024

Comments • 218

  • @SuperOnlyP
    @SuperOnlyP 4 years ago +279

    Finally, someone who can explain simply what queries, keys, and values are in the transformer model. Thank you, Sir!!!

    • @mathisve
      @mathisve 4 years ago +23

      Yeah, I don't understand why nobody else goes over this seemingly pretty important detail.

    • @stackoverflow8260
      @stackoverflow8260 4 years ago +14

      Wow, I was going to ask why he didn't explain or give an example of query, key, and value for a simple language translation or modelling task. The machine learning community is not very good at conveying its ideas: when you can't put stuff in rigorous mathematics, at least use a lot of pictures and many examples at every possible step.

    • @andrii5054
      @andrii5054 3 years ago +9

      I can also recommend this explanation: ruclips.net/video/mMa2PmYJlCo/видео.html
      It has helped me a lot

    • @SuperOnlyP
      @SuperOnlyP 3 years ago +1

      @@andrii5054 The video really simplifies the concept. Thanks for sharing!

    • @Darkev77
      @Darkev77 3 years ago +1

      Where was that?

  • @pascalpoupart3507
    @pascalpoupart3507  4 years ago +136

    The slides are posted here: cs.uwaterloo.ca/~ppoupart/teaching/cs480-spring19/schedule.html

    • @tarunluthrabk
      @tarunluthrabk 3 years ago +7

      Hello Professor. Your explanations are amazing. Kindly pin this comment or add it to the description so that it is visible to everyone.

    • @majidheidarystories
      @majidheidarystories 2 years ago +1

      Dear Pascal, I was wondering if you have any presentations describing the paper titled "Neural Machine Translation by Jointly Learning to Align and Translate".

  • @zengrz
    @zengrz 4 years ago +196

    00:00 Attention
    31:32 Transformer
    47:15 Masked Multi-head Attention
    1:01:45 Layer normalization, Positional embedding

  • @graceln2480
    @graceln2480 3 years ago +5

    One of the best explanations of attention & transformers on RUclips. Most of the other videos are junk, with authors pretending to understand the concepts and just adding to the RUclips clutter.

  • @drdr3496
    @drdr3496 1 year ago +13

    This is the single best video on "Attention is all you need", attention, transformers, etc. on the Internet. It's as simple as that. Thanks, Dr Poupart.

    • @bleacherz7503
      @bleacherz7503 1 year ago

      Why does a dot product correlate to attention?

    • @drdr3496
      @drdr3496 1 year ago +1

      @@bleacherz7503 a dot product between two vectors shows how similar they are (see the toy sketch below this thread)

    • @seldan6698
      @seldan6698 1 year ago

      @@drdr3496 Nice. Can you explain the whole query, key, and value process for an example like "the cat sat on the mat"? What are the query, keys, and values for this sentence?

    • @robn2497
      @robn2497 8 months ago

      ty
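
A toy numpy sketch of the point in this thread: the dot product between a query and each key scores similarity, and a softmax turns those scores into attention weights. The sentence and the 4-dimensional vectors below are made up for illustration; in a real transformer they come from learned embeddings and projections.

```python
import numpy as np

# Made-up 4-dimensional vectors for the tokens of "the cat sat on the mat".
rng = np.random.default_rng(0)
tokens = ["the", "cat", "sat", "on", "the", "mat"]
keys = rng.normal(size=(len(tokens), 4))    # one key vector per token
query = keys[2] + 0.1 * rng.normal(size=4)  # a query deliberately close to "sat"

scores = keys @ query                    # dot products: higher = more similar
weights = np.exp(scores - scores.max())  # softmax (numerically stable form)...
weights /= weights.sum()                 # ...so the weights sum to 1

for tok, w in zip(tokens, weights):
    print(f"{tok:>4}: {w:.2f}")          # "sat" should get the largest weight
```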

  • @manikanth2166
    @manikanth2166 2 months ago +3

    For those who find the picture at 29:00 difficult to digest, here's a short explanation, taking a 4-word sentence as an example:
    • k_i represents each word in some n-dimensional space (say 512), with i = 1...4 (since there are 4 words). So each k_i is a vector.
    • q represents a single word in that 512-dimensional space. So q is also a vector.
    • s_i is the similarity operation (dot product) of q with each k_i: a matmul of (1x512, 512x1) = 1x1. So each s_i is a scalar.
    • a = softmax(s), which makes the a_i sum to 1. So each a_i is a scalar.
    • By now, each a_i holds a normalized number that represents how similar the query is to each word (key k_i) in the sentence. The influence between "q" and each "k_i" is already attained at this critical step. But the outputs are just scalars: they express the influence as a weight but don't encode the word itself (as a 512-dimensional vector).
    • To encode all the influences of the words for that query, a final linear combination Sum(a_i * v_i) is done to produce the "context" vector. This output context is a single 1x512 vector; "context" because it explains how the query fits in the context of the values.
    Refer to the image in this wiki section (and the sketch just below) for more clarification:
    en.wikipedia.org/wiki/Attention_(machine_learning)#Core_calculations
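
The walkthrough above, written out as a minimal numpy sketch. Random vectors stand in for the learned ones, and the 1/sqrt(d) scaling is the standard scaled dot-product variant from the paper, which the bullet list omits:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 4, 512                 # 4 words, each living in a 512-dimensional space

K = rng.normal(size=(n, d))   # k_i: one key vector per word
V = rng.normal(size=(n, d))   # v_i: one value vector per word
q = rng.normal(size=d)        # q: the query, a single 512-dim vector

s = K @ q / np.sqrt(d)        # s_i = q . k_i, one scalar per word (scaled)
a = np.exp(s - s.max())
a = a / a.sum()               # softmax: a_i >= 0 and the a_i sum to 1

context = a @ V               # context = sum_i a_i * v_i -> one 512-dim vector
print(a.round(3), context.shape)
```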

  • @sudhirghandikota1382
    @sudhirghandikota1382 4 years ago +55

    Thank you very much Dr. Poupart. This is the best explanation of transformers I have come across on the internet

  • @GotUpLateWithMoon
    @GotUpLateWithMoon 4 years ago +15

    This is the best lecture on the attention mechanism I can find! Thank you Dr. Poupart! Finally all the details made sense to me.

  • @JMRC
    @JMRC 4 years ago +24

    Thank you to the person asking the question at 28:49! The softmax gave it away, but I wasn't sure.

  • @momusi.9819
    @momusi.9819 4 years ago +24

    Thank you very much, this was by far the best explanation of Transformers that I found online!

  • @AI_ML_iQ
    @AI_ML_iQ 1 year ago +10

    Recent work on transformers, titled "Generalized Attention Mechanism and Relative Position for Transformer", shows that different matrices for query and key are not required for the attention mechanism, thus reducing the number of parameters to be trained for the Transformers of GPT, other language models, and Transformers for images/videos.

  • @moustafa_shomer
    @moustafa_shomer 2 years ago +3

    This is the best Transformer / Attention explanation ever. Thank you!

  • @tagrikli
    @tagrikli 4 years ago +178

    This video just cured my depression.

    • @judgeomega
      @judgeomega 3 years ago +22

      Don't worry, I'm sure the next visit to a public internet forum will once again obliterate hope in humanity.

    • @100vivasvan
      @100vivasvan 3 years ago +3

      haha same here

    • @dilettante9576
      @dilettante9576 2 years ago +2

      Cured my ADHD

    • @Mrduralexx
      @Mrduralexx 1 year ago +2

      This video gave me depression…

    • @UmerBashir
      @UmerBashir 1 year ago +1

      @@Mrduralexx yeah, it's a different level of anxiety that it instills

  • @TylerMosaic
    @TylerMosaic 3 years ago +22

    Wow! Love the way he answers that great question at around 50:52: "why don't we implement the mask with a Hadamard product outside of the softmax?". Brilliant prof.

  • @ghostoftsushimaps4150
    @ghostoftsushimaps4150 1 year ago

    Brother, love from India. I will watch this lecture at my leisure.

  • @weichen1
    @weichen1 4 years ago +4

    I am not able to find a better video than this one explaining attention and transformers on the internet.

  • @richard126wfr
    @richard126wfr 2 years ago

    The best explanation of the attention mechanism I found on RUclips is the pizza-making analogy by Alfredo Canziani.

  • @insoucyant
    @insoucyant 4 months ago

    Best video on attention that I have come across

  • @Siva-Kumar-D
    @Siva-Kumar-D 2 years ago

    This is the best video on the Internet about Transformer networks.

  • @utkarshgupta7364
    @utkarshgupta7364 4 years ago +1

    Most awesome video on transformers one could find on YouTube.

  • @aadeshingle7593
    @aadeshingle7593 1 year ago

    Thanks a lot, Professor Poupart; one of the best explanations of the maths behind transformers!

  • @benjamindeporte3806
    @benjamindeporte3806 1 year ago +1

    I eventually understood the Q,K,V in attention. Many thanks.

  • @xhulioxhelilai9346
    @xhulioxhelilai9346 7 months ago

    Thank you for the very comprehensive and understandable course. It being 2024, I can say that I understand this course even better and more easily using GPT-4.

  • @sandipbnvnhjv
    @sandipbnvnhjv 1 year ago +1

    I asked chatGPT for the best video on Attention and it brought me here

  • @dennishuang3498
    @dennishuang3498 3 years ago +1

    Really enjoyed your lecture, Professor Poupart! Very informative, and it simplified many complicated concepts. Thank you very much!

  • @Vartazian360
    @Vartazian360 11 months ago +3

    Little did anyone know just how groundbreaking this foundation would be for ChatGPT / GPT-4.

  • @justinkim2973
    @justinkim2973 1 year ago

    Best video to watch on the first day of 2023

  • @vihaanrajput8082
    @vihaanrajput8082 2 years ago

    His tutorial videos are my favorite pastime, especially at night. Hail to Prof. Poupart!

  • @cwtan501
    @cwtan501 3 years ago

    By far the best explanation of multi-head attention I have seen.

  • @mi9807
    @mi9807 1 year ago

    One of the best videos!

  • @parmidagranfar4861
    @parmidagranfar4861 3 years ago

    Finally understood what is going on. Most of the other videos are too simple and skip the math. I liked this one.

  • @davidingham3409
    @davidingham3409 2 months ago

    Good motivation and understanding.

  • @orhan4876
    @orhan4876 1 year ago

    thank you for being so thorough!

  • @aileensengupta
    @aileensengupta 1 year ago

    Big fan, big fan Sir!!
    Finally understood this!

  • @HeshamChannel
    @HeshamChannel 2 years ago +1

    Very good explanation. Thanks.

  • @Hotheaddragon
    @Hotheaddragon 3 years ago +4

    You are a blessing, finally understood a very important concept.

  • @benjaminw2194
    @benjaminw2194 2 years ago

    I'm a novice and have been praying to get someone who discusses these papers. You're an answered prayer! Great lecturer.

  • @autripat
    @autripat 3 years ago +2

    At 1:18:22, the professor refers to BERT and a "Decoder transformer that predicts a missing word".
    To me, BERT is a masked Encoder (not decoder).
    After all, BERT stands for bidirectional *encoder* representation from transformers.
    It's minor (and doesn't subtract from this great presentation), but can anyone comment?

    • @abdelrahmanhammad1020
      @abdelrahmanhammad1020 3 years ago +1

      Great lecture. And I believe you are correct; it seems there is a typo here. I was questioning the same!

  • @MustafaQamarudDin
    @MustafaQamarudDin 4 years ago +2

    Thank you very much. It is very detailed and captures the intuition.

    • @syphiliticpangloss
      @syphiliticpangloss 4 years ago

      Could you explain what the model class looks like, then? What is the capacity? What is the "unconstrained" version with higher capacity? I want full statistical-learning-theory-style discussion in all pedagogical discussions; I don't understand how people think they understand this.
      If your life depended on it, would you feel confident recommending one of these setups? What questions would you have to ask about the data, the model architecture, the observation process? You need worst-case bounds, model complexity, etc. I see none of that here.

    • @1Kapachow1
      @1Kapachow1 4 years ago +1

      @@syphiliticpangloss Well, in deep learning the theory is far behind the engineering.
      When people say they understand this lecture, they don't mean worst-case bounds (which I strongly doubt anyone in the world knows how to calculate for this without adding so many relaxing assumptions, such as convexity, that the result becomes basically irrelevant);
      they just mean that:
      1. Engineering-wise, they understand how to build and use it.
      2. They feel they grasp enough intuition about the purpose of each sub-block and why it was added.
      I don't think anyone truly "understands" even much simpler DL models than transformers, which perform at a far superior level to classical machine learning methods.
      For example, fully convolutional neural networks trained with the Adam optimizer, based on back-propagation, using BN.

    • @syphiliticpangloss
      @syphiliticpangloss 4 years ago

      @@1Kapachow1 So can someone explain, in a precise way, what the transformer is doing? I would accept answers that reference probability distributions and predictive goals, or computational descriptions of components like NAND gates, etc.
      Also accepted would be anything related to the eigenvalues, stability, curvature, etc.
      There are lots of people trying to talk about this stuff. For example arxiv.org/abs/2004.09280
      Or Vapnik.
      To be perfectly clear, I think today we tend to say there are really only two things: a) "data", i.e. observations, usually dozens to millions, from some process we take to be slowly changing at most; and b) predicates/models/architecture/constraints ... "observations" usually fewer than dozens, usually manually constructed (from other experiments and observation sets, perhaps). For each of these we usually have some sort of "narrative" about where it came from, a way of describing it to humans.
      The second thing is what I'm getting at. "Architecture" is a model constraint. If it is just pulled from thin air, without understanding the problem, the meta-problem, etc., it is quite likely that there are buried problems: secret reasons for architecture choices that are not being disclosed or realised.
      Getting better at describing these models/architectures/predicates is how we progress.

  • @fengxie4762
    @fengxie4762 4 years ago +5

    A great lecture! Highly recommended!

  • @giorgioregni2639
    @giorgioregni2639 3 years ago

    Best explanation of transformers I have ever seen. Thank you, Dr Poupart.

  • @shifaspv2128
    @shifaspv2128 1 year ago

    Thank you so much for the brainstorming.

  • @jelenajokic9184
    @jelenajokic9184 2 years ago +1

    The simplest explanation of attention, thanks a lot for sharing, great lectures🤗!

  • @brandonleesantos9383
    @brandonleesantos9383 2 years ago

    Truly fantastic wow

  • @yd42330
    @yd42330 3 years ago +2

    Question about positional encoding.
    If we sum the word embedding (WE) with the positional encoding (PE), how does the model
    tell the difference between WE = 0.5, PE = 0.2 and WE = 0.4, PE = 0.3?
    (Different words at different positions can yield the same value.)
    Why not keep the PE separate from the WE?

  • @chakibchemso
    @chakibchemso 1 year ago +1

    And that's how GPT was born, my fellas.

  • @ibrahimkaibi4200
    @ibrahimkaibi4200 3 years ago

    A very interesting explanation (wonderful)

  • @kungchun9461
    @kungchun9461 3 years ago +2

    This year should be the "transformer year", as there has been a breakthrough in the domain of CV.

  • @minhajulhoque2113
    @minhajulhoque2113 2 years ago

    Great video!

  • @weiyaox6896
    @weiyaox6896 3 years ago +1

    Best explanation

  • @aponom84
    @aponom84 4 years ago +1

    Nice lecture! Thanks!

  • @shavkat95
    @shavkat95 2 years ago

    He sounds bored and depressed, but the content is high class.

  • @opencvitk
    @opencvitk 1 year ago +1

    The explanation of K, V and Q is great. Unfortunately I lost him as soon as he started on multi-head. Must be that the single head I possess is empty :-)

  • @larryobrien
    @larryobrien 10 months ago

    Fantastic lecture, but I became confused reviewing minute 36, which says something like "When we do this in one block we essentially look at pairs of words... In the first block we look at pairs of words and the second block we're looking at pairs of pairs... We're combining more than just two words but [rather] groups of words that get larger and larger...." This would be similar to how we think of features in convolutional layers. But I don’t understand it here, since all of Q, K, and V are projections of the _whole_ input context, are they not? How do we get from that to the first attention block “essentially look[s] at pairs of words”?

  • @alexanderblumin6659
    @alexanderblumin6659 3 years ago +1

    Very interesting lecture. Something is not totally clear at minute 46: the multiple heads are presented intuitively as three explicit filters, as in a CNN, producing three corresponding feature maps; but in the earlier part of the lecture it is said that attention blocks are stacked one after another, so that the first produces info from (word i, word j) pairs and the next produces pairs of those pairs, i.e. one is the input to the other. So what is the right way to understand it? At minute 46 the inputs to each of the linear layers seem to be the same, but in the earlier part it looks like one block comes after another, and intuitively the pairs-of-pairs composition changes the output size.

  • @evgenysavelev837
    @evgenysavelev837 9 months ago

    There is a better answer to the question asked at @1:04:30, regarding the positional encodings polluting the word embeddings. If the positional embedding vectors lie in a subspace of their own, then adding the positional encodings will never obfuscate the information encapsulated by the word embeddings. Since the word embeddings are usually learned during network training, the network quickly learns to confine word embeddings to a subspace that is orthogonal, or at least linearly independent, to the positional encodings. So, TLDR, this is not a problem. In fact, it is advantageous, since information about position is evenly spread throughout all vector components, rather than being concentrated in a few coefficients at the head or tail. This removes the problem of network-node specialization, where some neurons become solely dedicated to dealing with positional information.
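
For concreteness, a minimal numpy sketch of the sinusoidal positional encodings from "Attention Is All You Need", added elementwise to made-up word embeddings. Whether the learned embeddings then keep clear of the positional directions is up to training, as the comment above argues:

```python
import numpy as np

def positional_encoding(max_len: int, d: int) -> np.ndarray:
    """PE[pos, 2i] = sin(pos / 10000^(2i/d)), PE[pos, 2i+1] = cos(same angle)."""
    pos = np.arange(max_len)[:, None]        # (max_len, 1)
    i = np.arange(0, d, 2)[None, :]          # even dimension indices, (1, d/2)
    angles = pos / np.power(10000.0, i / d)  # (max_len, d/2)
    pe = np.zeros((max_len, d))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

rng = np.random.default_rng(2)
seq_len, d_model = 10, 512
word_embeddings = rng.normal(size=(seq_len, d_model))  # stand-ins for learned ones
x = word_embeddings + positional_encoding(seq_len, d_model)  # what the model sees
print(x.shape)  # (10, 512)
```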

  • @seminkwak
    @seminkwak 3 years ago

    Beautiful explanations

  • @hariaakashk6161
    @hariaakashk6161 4 years ago +1

    Great explanation, sir... Thank you! Please post more such lectures; I will be the first to look at them...

  • @greyreynyn
    @greyreynyn 4 years ago +1

    41:14 Question: on the output side, why isn't there an additional feed-forward layer between the masked self-attention on the output and the attention to the input? And, more broadly, what are those feed-forward units doing?

  • @cedricmanouan2333
    @cedricmanouan2333 4 years ago

    Very interesting and useful. Thanks, Sir.

  • @faatemehch96
    @faatemehch96 3 years ago

    thank you, the video is really useful. 👍🏻👍🏻

  • @zugzwangelist
    @zugzwangelist 4 years ago +6

    31:34 The lecturer has changed his shirt.

    • @atithi8
      @atithi8 4 years ago +1

      Also, there seems to be a discontinuity: the discussion on the generalization of attention models is cut short at the same instant.

    • @abhikbanerjee3719
      @abhikbanerjee3719 4 years ago +2

      These are 2 different classes clubbed together.

  • @444haluk
    @444haluk 3 years ago

    I heard queries, keys & values were primitive concepts and counter-intuitive, but I didn't know it was THIS primitive.

  • @gudepuvenkateswarlu5648
    @gudepuvenkateswarlu5648 3 years ago

    Excellent session... Thank you, professor.

  • @syedhasany1809
    @syedhasany1809 4 years ago +3

    This was a great lecture, thank you.

  • @pred9990
    @pred9990 4 years ago +1

    Cool lecture!

  • @sienloonglee4238
    @sienloonglee4238 1 year ago

    very good video!😀

  • @markphillip9950
    @markphillip9950 3 years ago

    Great lecture.

  • @jinyang4796
    @jinyang4796 4 years ago

    Thank you for the clear explanation and well-illustrated examples!

  • @AnonTrash
    @AnonTrash 1 year ago

    Beautiful.

  • @user-or7ji5hv8y
    @user-or7ji5hv8y 4 years ago +1

    Does anyone know which NMT video has the previous intro to attention that the professor cites in this video? I couldn't find his video on neural machine translation.

  • @janasandeep
    @janasandeep 2 years ago +1

    23:09 - don't the query and keys both encode questions, with the answers being the values?
    35:53 - why are the values going to be the same as the keys?
    36:02 - "attention mechanism merges information from _pairs_ of the words". Attention merges information of one word with _all_ the other words, doesn't it? For each different word, the values corresponding to all the words are weighted differently and added up.

    • @soumyajitganguly2593
      @soumyajitganguly2593 1 year ago +1

      I am confused about this "pairs of words" too. Let me assume that every word is represented by a linear combination of all the other words. Now what is the point of stacking N (N=6 in the original paper) of these attention layers? It would still be a linear combination, right?

    • @soumyajitganguly2593
      @soumyajitganguly2593 1 year ago

      Why are the values going to be the same as the keys? - I would guess that the Prof. was referring to the same word/token, not the same representation. The representations come after multiplying with Wv and Wk, so they would be different.

  • @compmeist
    @compmeist 1 year ago

    Perhaps the reason we can't concatenate positional information is that we are trying to share that information among the dimensions of the word vector.

  • @greyreynyn
    @greyreynyn 4 years ago +2

    45:50 - For the multiple linear transformations, are we applying the same linear transform to each set of Q/K/V in a "head"? Or does each Q/K/V get its own unique linear transform?

    • @knoxvoxx
      @knoxvoxx 3 years ago +1

      A unique linear transform each time, I guess. (In the original paper, under section 3.2.2, they mention "h times with different learned linear projections to dk, dk and dv respectively".)
      If we repeat scaled dot-product attention 3 times, then we will have a total of 9 linear projections. (See the sketch just below this thread.)

    • @ryanwhite7401
      @ryanwhite7401 2 years ago +2

      They each get their own learned parameters.
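
To make the replies above concrete, a minimal numpy sketch of multi-head attention in which every head gets its own projection matrices (random stand-ins for learned weights), followed by the concatenation and the single output projection from the paper:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(3)
n, d_model, h = 5, 512, 8          # 5 tokens, model width 512, 8 heads
d_k = d_model // h                 # per-head width: 64

X = rng.normal(size=(n, d_model))  # token representations entering the block

heads = []
for _ in range(h):
    # Each head gets its OWN Wq, Wk, Wv (random stand-ins for learned weights).
    Wq = rng.normal(size=(d_model, d_k))
    Wk = rng.normal(size=(d_model, d_k))
    Wv = rng.normal(size=(d_model, d_k))
    Q, K, V = X @ Wq, X @ Wk, X @ Wv              # (n, d_k) each
    A = softmax(Q @ K.T / np.sqrt(d_k), axis=-1)  # (n, n) attention weights
    heads.append(A @ V)                           # (n, d_k) per-head output

concat = np.concatenate(heads, axis=-1)   # (n, h*d_k) = (n, d_model)
Wo = rng.normal(size=(d_model, d_model))  # the single output projection W^O
out = concat @ Wo                         # (n, d_model)
print(out.shape)
```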

  • @akashpb4183
    @akashpb4183 3 years ago

    Beautifully explained; things seem clear to me now. Thanks a lot, sir!

  • @yashrajwani3322
    @yashrajwani3322 3 years ago

    great explanation

  • @goldencircle4331
    @goldencircle4331 1 year ago

    Huge thanks for putting this online.

  • @samson6707
    @samson6707 7 months ago

    46:11 I don't understand how there can be a concatenation of the outputs followed by a linear combination. In my mind it doesn't make sense to do both: either the outputs are concatenated or they are added up in a linear combination, but both...?

  • @sheikhjubair7133
    @sheikhjubair7133 4 years ago

    Very clear explanation

  • @aricircle8040
    @aricircle8040 1 year ago

    Thank you very much for sharing this great lecture!
    Shouldn't it be the attention vector instead of the value at 27:44?

  • @aymensekhri2133
    @aymensekhri2133 2 years ago

    Thank you very much Sir!

  • @firstnamelastname3106
    @firstnamelastname3106 3 years ago

    thank you my man, u saved me

  • @abhishekrohra9457
    @abhishekrohra9457 3 years ago

    Good explanation

  • @diffpizza
    @diffpizza 1 year ago

    Why not just use a complex number for the positional embedding? The imaginary part could keep track of the position, and all multiplication and gradient operations should still work.

  • @SaNDRiTa1919
    @SaNDRiTa1919 1 year ago +1

    I'm sorry, but the database example was extremely easy to understand, and then it goes to the similarity and I don't get any of it. For a translation problem, for example, would it be the similarity between the word in English and the text in the other language? How is there going to be any similarity between the words in this case? Or is it between the embeddings?
    And if we're trying to predict the next word in a sentence, what is the similarity in that case?

  • @ephysics3801
    @ephysics3801 1 year ago

    Hi Sir, where can we find the lecture on the basic concepts of the attention mechanism?

  • @fit_with_a_techie
    @fit_with_a_techie 3 years ago

    Thank you Professor :)

  • @nafeesahmad9083
    @nafeesahmad9083 3 years ago

    Woohoo... Thank you so much

  • @prof_shixo
    @prof_shixo 4 years ago +1

    Thanks for the nice lecture. I am still confused about how the transformer model can replace RNNs or LSTMs for general sequence learning. In some applications a sequence might be very long, rather than just a sentence (which can be designed to be fixed in length), so how do we deal with this, especially since we need to keep the complete sequence with us, as there is no recurrence? If the answer is to divide the sequence, then how do we link the different chunks over time without a recurrence or a carry-over? (Loops over time.)

    • @JAKKOtutorials
      @JAKKOtutorials 4 years ago +1

      Transformers are able to "query the recurrences": think of it as, instead of repeating the operation as in RNNs, you just query a database of the possible values and their given inputs x times and check whether it matches the requirements. And because it's not a recurrence (a repetition), you can make multiple of these queries at the same time, each being a new operation! Each operation can be resolved without interference, creating new tokens, or pieces, which represent convergence points in the data universe you are travelling.
      It's a huge improvement, confirmed by the models shown at the end of the lecture. Hope this helps :)

    • @venkateshdas5422
      @venkateshdas5422 4 years ago

      As JAKKO mentioned, transformers use the attention mechanism in a very efficient manner. The sequence can be considerably longer than a sentence and the attention mechanism will still be able to capture the dependencies between words at different positions. This creates an efficient contextual representation of the sequence, better than the plain input embedding vector, and it is how the complete input sequence is captured by the model without recurrence.
      It is really a beautiful approach. (Personal opinion.)

  • @mohamedabbashedjazi493
    @mohamedabbashedjazi493 4 years ago

    Softmax is computationally expensive. I wonder if it can somehow be replaced with another function that produces probabilities, since softmax is present in many places in all the blocks of the transformer network.

  • @reuben3648
    @reuben3648 1 year ago

    Thank you soo much!!!

  • @hackercop
    @hackercop 2 years ago

    This was a great lecture - really explained this to me thanks

  • @ephremtadesse3195
    @ephremtadesse3195 2 years ago

    Very helpful

  • @yen-linchen7398
    @yen-linchen7398 2 years ago

    Thank you!

  • @blasttrash
    @blasttrash 8 months ago

    Just curious: if transformer networks were already known 4 years ago, why did ChatGPT take such a long time to be developed?

  • @varungoel185
    @varungoel185 3 years ago

    Around the 29:50 mark, he first mentions that the key vectors correspond to each output word, but the slide says input word. Could someone please clarify this?

  • @abhijeetnarharshettiwar6175
    @abhijeetnarharshettiwar6175 3 years ago

    Thank you so much for great explanation, professor.

  • @evennot
    @evennot 4 years ago

    19:00 It's basically an exclusionary perceptron layer, isn't it? (It could also be called a fuzzy LUT.) I'm sure it was used before for attention emulation.

  • @mohamedabdo-dl9dd
    @mohamedabdo-dl9dd 3 years ago

    Thanks, professor, for the easy explanation... Can you share the PowerPoint with us?

  • @anatolicvs
    @anatolicvs 2 years ago

    Dear Prof. Dr. Poupart, could we please have the presentation used in lecture 19?