Visual Guide to Transformer Neural Networks - (Episode 2) Multi-Head & Self-Attention

  • Published: 26 Jun 2024
  • Visual Guide to Transformer Neural Networks (Series) - Step by Step Intuitive Explanation
    Episode 0 - [OPTIONAL] The Neuroscience of "Attention"
    • The Neuroscience of “A...
    Episode 1 - Position Embeddings
    • Visual Guide to Transf...
    Episode 2 - Multi-Head & Self-Attention
    • Visual Guide to Transf...
    Episode 3 - Decoder’s Masked Attention
    • Visual Guide to Transf...
    This video series explains the math as well as the intuition behind the Transformer Neural Network, first introduced in the “Attention Is All You Need” paper.
    --------------------------------------------------------------
    References and Other Great Resources
    --------------------------------------------------------------
    Attention is All You Need
    arxiv.org/abs/1706.03762
    Jay Alammar - The Illustrated Transformer
    jalammar.github.io/illustrated...
    The A.I Hacker - Illustrated Guide to Transformers Neural Networks: A step by step explanation
    jalammar.github.io/illustrated...
    Amirhossein Kazemnejad Blog Post - Transformer Architecture: The Positional Encoding
    kazemnejad.com/blog/transform...
    Yannic Kilcher YouTube Video - Attention is All You Need
    www.youtube.com/watch?v=iDulh...

Comments • 616

  • @HeduAI
    @HeduAI  3 years ago +36

    *CORRECTIONS*
    A big shoutout to the following awesome viewers for these 2 corrections:
    1. @Henry Wang and @Holger Urbanek - At (10:28), "dk" is actually the hidden dimension of the Key matrix, not the sequence length. In the original paper (Attention Is All You Need), the embedding dimension is 512, which with 8 heads gives dk = 512 / 8 = 64.
    2. @JU PING NG - The result of the concatenation at (14:58) is supposed to be 7 x 9 instead of 21 x 3 (that is, the z matrices are concatenated horizontally, not vertically). With this we can apply nn.Linear(9, 5) to get the final 7 x 5 shape (see the sketch after the timestamps below).
    Here are the timestamps associated with the concepts covered in this video:
    0:00 - Recaps of Part 0 and 1
    0:56 - Difference between Simple and Self-Attention
    3:11 - Multi-Head Attention Layer - Query, Key and Value matrices
    11:44 - Intuition for Multi-Head Attention Layer with Examples
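
    For reference, here is a minimal PyTorch sketch of the corrected shapes (toy sizes from the video's example: sequence length 7, 3 heads of dimension 3, output dimension 5):

    import torch
    import torch.nn as nn

    seq_len, num_heads, d_head, d_out = 7, 3, 3, 5

    # Each attention head produces a z matrix of shape (seq_len, d_head).
    z_per_head = [torch.randn(seq_len, d_head) for _ in range(num_heads)]

    # Concatenate horizontally (along the feature dimension), not vertically:
    z_concat = torch.cat(z_per_head, dim=-1)    # 7 x 9, not 21 x 3

    # The final linear layer projects 9 -> 5, giving the 7 x 5 output.
    w_o = nn.Linear(num_heads * d_head, d_out)
    out = w_o(z_concat)                         # 7 x 5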

    • @amortalbeing
      @amortalbeing 2 years ago +2

      Where's the first video?

    • @HeduAI
      @HeduAI  2 years ago +4

      ​@@amortalbeing Episode 0 can be found here - ruclips.net/video/48gBPL7aHJY/видео.html

    • @amortalbeing
      @amortalbeing 2 years ago

      @@HeduAI thanks a lot, really appreciate it :)

    • @omkiranmalepati1645
      @omkiranmalepati1645 1 year ago

      Awesome... So the dk value is 3?

    • @jasonwheeler2986
      @jasonwheeler2986 1 year ago +1

      @@omkiranmalepati1645 d_k = embedding dimensions // number of heads
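
      For instance, with the paper's numbers this works out to 64 per head; the toy value of 3 would follow from 9 embedding dimensions split across 3 heads (assumed sizes, for illustration):

      d_model, num_heads = 512, 8
      d_k = d_model // num_heads   # 512 // 8 = 64 in the paper
      # Video's toy example (assumed): 9 embedding dims // 3 heads = 3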

  • @thegigasurgeon
    @thegigasurgeon 1 year ago +159

    Need to say this out loud: I saw Yannic Kilcher's video, read tons of material on the internet, and went through at least 7 playlists, and this is the first time I really understood the inner mechanism of the Q, K and V vectors in transformers. You did a great job here.

    • @HeduAI
      @HeduAI  1 year ago +8

      This made my day :,)

    • @afsalmuhammed4239
      @afsalmuhammed4239 11 months ago +1

      True

    • @exciton007
      @exciton007 9 months ago +1

      Very intuitive explanation!

    • @EducationPersonal
      @EducationPersonal 8 months ago +1

      Totally agree with this comment

    • @VitorMach
      @VitorMach 7 months ago +1

      Yes, no other video actually explains what the actual inputs to these are.

  • @nitroknocker14
    @nitroknocker14 3 years ago +201

    All 3 parts have been the best presentation I've ever seen of Transformers. Your step-by-step visualizations have filled in so many gaps left by other videos and blog posts. Thank you very much for creating this series.

    • @HeduAI
      @HeduAI  3 years ago +9

      This comment made my day :,) Thanks!

    • @bryanbaek75
      @bryanbaek75 2 years ago

      Me, too!

    • @lessw2020
      @lessw2020 2 years ago +1

      Definitely agree. These videos really crystallize a lot of knowledge, thanks for making this series!

    • @devstuff2576
      @devstuff2576 2 years ago

      ​@@HeduAI absolutely awesome . You are the best.

  • @nurjafri
    @nurjafri 3 years ago +72

    Damn. This is exactly what a developer coming from another background needs.
    Simple analogies for rapid understanding.
    Thanks a ton.
    Keep uploading, please!

    • @Xeneon341
      @Xeneon341 3 years ago +1

      Agreed, very well done. You do a very good job of explaining difficult concepts to a non-industry developer (FYI, I'm an accountant) without assuming a lot of prior knowledge. I look forward to your next video on masked decoders!!!

    • @HeduAI
      @HeduAI  3 years ago +4

      @@Xeneon341 Oh nice! Glad you enjoyed these videos! :)

  • @ML-ok9nf
    @ML-ok9nf 8 months ago +7

    Absolutely underrated, hands down one of the best explanations I've found on the internet

  • @HuyLe-nn5ft
    @HuyLe-nn5ft 10 months ago +5

    The important detail that sets you apart from other videos and websites is that you not only presented the model's architecture with its many formulas but also demonstrated them with vectors and matrices, successfully walking us through every concept, complicated or trivial. You really did a good job!

  • @Clammer999
    @Clammer999 21 days ago

    I’ve gone through dozens of videos on transformers, and multi-head attention is one of the most complex mechanisms; it requires not only a step-by-step explanation but also a step-by-step animation, which many videos tend to skip over. This video really nails it. Thanks so much!

  • @rohtashbeniwal9202
    @rohtashbeniwal9202 1 year ago +4

    This channel needs more love (the way she explains is out of the box). I can say this because I have 4 years of experience in data science; she did a lot of hard work to achieve so much clarity in these concepts (love from India).

    • @HeduAI
      @HeduAI  1 year ago +1

      Thank you Rohtash! You made my day! :) धन्यवाद

  • @chaitanyachhibba255
    @chaitanyachhibba255 3 years ago +10

    Were you the one who wrote the Transformer in the first place? Because no one else explained it like you did. This is undoubtedly the best info I have seen. I hope you keep posting more videos. Thanks a lot.

    • @HeduAI
      @HeduAI  3 years ago +1

      This comment made my day! :) Thank you.

  • @malekkamoua5968
    @malekkamoua5968 2 years ago +11

    I've been stuck for so long trying to understand Transformer Neural Networks, and this is by far the best explanation! The examples are so fun, making it easier to comprehend. Thank you so much for your effort!

    • @HeduAI
      @HeduAI  10 months ago

      Cheers!

  • @forresthu6204
    @forresthu6204 2 years ago +3

    Self-attention is a villain that has haunted me for a long time. Your presentation has helped me better understand this genius idea.

  • @rohanvaidya3238
    @rohanvaidya3238 3 years ago +10

    Best explanation ever on Transformers !!!

  • @adscript4713
    @adscript4713 2 months ago +1

    As someone NOT in the field reading the Attention paper, after having watched DOZENS of videos on the topic, this is the FIRST explanation that laid it out in an intuitive manner without leaving anything out. I don't know your background, but you are definitely a great teacher. Thank you.

    • @HeduAI
      @HeduAI  2 months ago

      So glad to hear this :)

  • @rishiraj8225
    @rishiraj8225 1 month ago

    Coming back after a year, just to revise the basic concepts. It is still the best video on YT. Thanks Hedu AI

  • @rayxi5334
    @rayxi5334 1 year ago +1

    Better than the best Berkeley professor! Amazing!

  • @EducationPersonal
    @EducationPersonal 8 months ago +1

    This is one of the best Transformer videos on YouTube. I hope YouTube always recommends this Value (V), aka the video, as the first Key (K), aka the video title, whenever someone uses the Query (Q) "Transformer"!! 😄

    • @HeduAI
      @HeduAI  8 months ago

      😄

  • @ja100o
    @ja100o 1 year ago +1

    I'm currently reading a book about transformers and was scratching my head over the reason for the multi-head attention architecture.
    Thank you so much for the clearest explanation yet, which finally gave me that satisfying 💡 moment.

  • @andybrice2711
    @andybrice2711 2 months ago

    This really is an excellent explanation. I had some sense that self-attention layers act like a table of relationships between tokens, but only now do I have a real sense of how the Query, Key, and Value mechanism actually works.
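
    That "table of relationships" is the attention-weight matrix itself. A minimal sketch of one self-attention head, assuming the toy sizes used elsewhere on this page (sequence length 7, d_k = 3):

    import torch

    seq_len, d_k = 7, 3
    Q, K, V = (torch.randn(seq_len, d_k) for _ in range(3))

    # Scaled dot-product attention (Attention Is All You Need, Eq. 1):
    scores = Q @ K.T / d_k ** 0.5             # 7 x 7 table of token-to-token affinities
    weights = torch.softmax(scores, dim=-1)   # each row sums to 1
    Z = weights @ V                           # 7 x 3: each token's value-weighted summary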

  • @kafaayari
    @kafaayari 2 years ago

    I won't say this is the best explanation so far, but this is the only explanation. Others are just repeating the original paper.

  • @sebastiangarciaacosta5468
    @sebastiangarciaacosta5468 3 years ago +15

    The best explanation I've ever seen of such a powerful architecture. I'm glad to have found this joy after searching for positional encoding details while implementing a Transformer from scratch today. Valar Morghulis!

    • @HeduAI
      @HeduAI  3 years ago +2

      Valar Dohaeris my friend ;)

  • @devchoudhary8892
    @devchoudhary8892 1 year ago +1

    Best, best, best explanation of transformers; you are adding so much value to the world.

  • @wireghost897
    @wireghost897 11 months ago

    Finally a video on transformers that actually makes sense. Not a single lecture video from any of the reputed universities managed to cover the topic with such brilliant clarity.

  • @MGMG-li6lt
    @MGMG-li6lt 3 years ago +19

    Finally! You delivered me from long nights of searching for good explanations of transformers! It was awesome! I can't wait to see part 3 and beyond!

    • @HeduAI
      @HeduAI  3 years ago +1

      Thanks for this great feedback!

    • @HeduAI
      @HeduAI  3 years ago +2

      “Part 3 - Decoder’s Masked Attention” is out. Thanks for the wait. Enjoy! Cheers! :D
      ruclips.net/video/gJ9kaJsE78k/видео.html

  • @alankarmisra
    @alankarmisra 8 months ago

    3 days, 16 different videos, and your video "just made sense". You just earned a subscriber and a lifelong well-wisher.

  • @shubheshswain5480
    @shubheshswain5480 3 years ago +1

    I went through many videos from Coursera, YouTube, and some online blogs, but none explained the Query, Key, and Value matrices so clearly. You made my day.

    • @HeduAI
      @HeduAI  3 years ago

      Glad to hear this Shubhesh :)

  • @Srednicki123
    @Srednicki123 1 year ago

    I'll just repeat what everybody else said: these videos are the best! Thank you for the effort!

  • @sujithkumar5415
    @sujithkumar5415 1 year ago

    This is quite literally the best attention mechanism video out there, guys.

  • @Abhi-qf7np
    @Abhi-qf7np 2 years ago +1

    You are the best 😄😄. This is THE best explanation of the Transformer model I have ever seen on YouTube. Thank you so much for this video.

  • @persianform
    @persianform 1 year ago

    The best explanation of attention models on Earth!

  • @wolfie6175
    @wolfie6175 2 years ago

    This is an absolute gem of a video.

  • @frankietank8019
    @frankietank8019 9 months ago +1

    Hands down the best video on transformers I have seen! Thank you for taking the time to make this video.

  • @fernandonoronha5035
    @fernandonoronha5035 2 years ago

    I don't have words to describe how much these videos saved me, thank you!

  • @skramturbo8499
    @skramturbo8499 1 year ago

    I really like that you ask questions within the video; in fact, those are the same questions one has when first reading about transformers. Keep up the awesome work!

  • @ghostvillage1
    @ghostvillage1 1 year ago

    Hands down the best series I've found on the web about transformers. Thank you!

  • @raunakdey3004
    @raunakdey3004 1 year ago

    Really love coming back to your videos for a recap on multi-head attention and transformers! Sometimes I need to build my own specialized attention layers for the dataset in question, and sometimes it just helps to listen to you talk about transformers and attention! Really intuitive, and it helps me break out of whatever weird loop of algorithm design I might have gotten stuck in. So thank you so, so much :D

  • @madhu1987ful
    @madhu1987ful 1 year ago

    Wow. Just wow!! This video deserves the top spot when anyone searches for content explaining transformers.

    • @HeduAI
      @HeduAI  1 year ago +1

      So glad to see this feedback! :)

  • @geetanshkalra8340
    @geetanshkalra8340 2 years ago

    This is by far the best video for understanding attention networks. Awesome work!!

  • @hubertkanyamahanga2782
    @hubertkanyamahanga2782 9 months ago

    I am just speechless, this is unbelievable! Bravo!

  • @user-ne2nr2yi1h
    @user-ne2nr2yi1h 6 months ago

    The best video I've ever seen explaining the Transformer.

  • @oliverhu1025
    @oliverhu1025 1 year ago

    Probably the best explanation of transformers I’ve found online. I read the paper, watched Yannic’s video, some paper-reading videos, and a few others, but the intuition was still missing. This connects the dots; keep up the great work!

  • @pythondev2631
    @pythondev2631 1 year ago

    The best video on multihead attention by far!

  • @adityaghosh8601
    @adityaghosh8601 2 years ago

    Blown away by your explanation. You are a great teacher.

  • @Scaryder92
    @Scaryder92 2 years ago

    Amazing video, showing how the attention matrix is created and what values it assumes is really awesome. Thanks!

  • @oludhe7
    @oludhe7 2 months ago

    Literally the best series on transformers, even clearer than StatQuest and Luis Serrano, who also make things very clear.

  • @artukikemty
    @artukikemty 1 year ago

    Thanks for posting; by far this is the most didactic Transformer presentation I've ever seen. AMAZING!

  • @jonathanlarkin1112
    @jonathanlarkin1112 3 years ago +6

    Excellent series. Looking forward to Part 3!

    • @HeduAI
      @HeduAI  3 years ago +1

      “Part 3 - Decoder’s Masked Attention” is out. Thanks for the wait. Enjoy! Cheers! :D
      ruclips.net/video/gJ9kaJsE78k/видео.html

  • @shivam6565
    @shivam6565 1 year ago

    Finally I understood the concept of query, key and value. Thank you.

  • @davidlazaro3143
    @davidlazaro3143 11 months ago

    This video is GOLD; it should be everywhere! Thank you so much for doing such an amazing job 😍😍

  • @cracksomeface
    @cracksomeface 1 year ago +1

    I'm a grad student currently applying NLP - this is literally the best explanation of self-attention I have ever seen. Thank you so much for a great vid!

  • @adithyakaravadi8170
    @adithyakaravadi8170 1 year ago +1

    You are so good; thank you for breaking down a seemingly scary topic for all of us. The original paper requires a lot of background to understand clearly, and not everyone has it. I personally felt lost. Videos like this help a lot!

  • @giridharnr6742
    @giridharnr6742 1 year ago

    It's one of the best explanations of Transformers. Just mind-blowing.

  • @1HourBule
    @1HourBule 1 year ago

    The best video on self-attention.

  • @bendarodes61
    @bendarodes61 2 years ago

    I've watched many video series about transformers, this is by far the best.

  • @Ariel-px7hz
    @Ariel-px7hz 1 year ago

    Such a fantastic and detailed yet digestible explanation. As others have said in the comments, other explanations leave so many gaps. Thank you for this gem!

  • @chenlim2165
    @chenlim2165 1 year ago

    Bravo! After watching dozens of other explainer videos, I can finally grasp the reason for multi-headed attention. Excellent video. Please make more!

  • @pedroviniciuspereirajunho7244
    @pedroviniciuspereirajunho7244 1 year ago

    Visualizing the matrices helped me understand transformers better.
    Again, thank you very much!

  • @an_experienced_guy
    @an_experienced_guy 8 months ago

    This is by far the best, clearest, and most insightful explanation of transformers. I've tried to understand them through multiple blogs, videos, and Stack Exchange answers; this is the first time every component became clear to me, along with how they all work in conjunction. Thanks a lot for this series. Amazing explanations.

  • @MCMelonslice
    @MCMelonslice 1 year ago

    This is the best resource for an intuitive understanding of transformers. I will without a doubt point everyone towards your video series. Thank you so much!

  • @wayneqwele8847
    @wayneqwele8847 5 months ago

    Thank you for taking the time to explain, from a linear algebra perspective, what actually happens. Many teachers on YouTube are comfortable just leaving it at math symbols and labels. Showing what actually happens to the matrix values has sharpened my intuition of what goes on under the hood. Thank you. 🙏

  • @danielarul2382
    @danielarul2382 1 year ago

    One of the best explanations on Attention in my opinion.

  • @jasonpeloquin9950
    @jasonpeloquin9950 1 year ago

    Hands down the best explanation of the use of Query, Key and Value matrices. Great video with an easy example to understand.

  • @kennethm.4998
    @kennethm.4998 2 years ago

    You have a gift for explanations... Best I've seen anywhere online. Superb.

  • @DeepakSadulla
    @DeepakSadulla 2 years ago

    The YouTube search example helps a lot with understanding Q, K, and V. Really good explanation... Thanks!!

  • @cihankatar7310
    @cihankatar7310 1 year ago

    This is the best explanation of the transformer architecture, full of simple analogies! Thanks a lot!

  • @vanhell966
    @vanhell966 1 month ago

    Amazing work. I really appreciate you making complex topics simple, with a touch of anime and TV series. Amazing.

  • @abdot604
    @abdot604 1 year ago

    Brilliant explanation; your channel deserves way more ATTENTION.

  • @mrmuffyman
    @mrmuffyman 1 year ago +1

    You are awesome!! I watched Yannic Kilcher's video first and was still confused by the paper, probably because so much detail is skipped over in the paper and in Kilcher's video. Your video, however, goes much slower and more in depth, so the explanations were simple to understand, and the whole picture makes sense now. Thank you!

  • @sowmendas812
    @sowmendas812 1 year ago

    This is literally the best explanation for self-attention I have seen anywhere! Really loved the videos!

  • @VADemon
    @VADemon 1 year ago

    Excellent examples and explanation. Don't shy away from using more examples of things that you love; this love shows and will translate to better work overall. Cheers!

  • @RafidAslam
    @RafidAslam 2 months ago

    Thank you so much! This is by far the clearest explanation that I've ever seen on this topic

  • @nizamphoenix
    @nizamphoenix 8 months ago

    Having been a professional in this field for ~5 years, I can say this is by far the best explanation of attention.
    Amused as to why this doesn't pop up at the top of YT's recommendations for attention. Probably YT's attention needs some attention to fix its Qs, Ks, and Vs.

    • @HeduAI
      @HeduAI  8 months ago

      You made my day :)

  • @cw9249
    @cw9249 1 year ago

    You are amazing. I've watched other videos and read materials, but nothing compares to your videos.

  • @onthelightway
    @onthelightway 2 years ago

    Incredibly well explained! Thanks a lot

  • @franzanders7762
    @franzanders7762 2 years ago

    I can't believe how good this is.

  • @jamesshady5483
    @jamesshady5483 1 year ago

    This explanation is incredible and better than 99% of what I found on the Internet. Thank you!

  • @SOFTWAREMASTER
    @SOFTWAREMASTER 10 months ago

    Most underrated video about transformers. Going to recommend this to everyone. Thank you!

  • @darkcrafteur165
    @darkcrafteur165 1 year ago

    I never post, but right now I need to thank you. I really don't believe a better way exists to understand self-attention than watching your video. Thank you!

  • @JDechnics
    @JDechnics 2 years ago

    Holy shit, was this a good explanation! Other blogs literally copy what the paper states (which is kind of confusing), but you explained it in such an intuitive and fun way! That's what I call talent!!

  • @carlosandresrocharuiz2555
    @carlosandresrocharuiz2555 2 years ago

    It's the most incredible channel on YouTube, and people don't appreciate it :(

  • @jirasakburanathawornsom1911
    @jirasakburanathawornsom1911 2 years ago

    Hands down the best transformer explanation. Thank you very much!

  • @mrkshsbwiwow3734
    @mrkshsbwiwow3734 1 month ago

    This is the best explanation of transformers on YouTube.

  • @bhavyaghai1924
    @bhavyaghai1924 1 year ago

    Educational + Entertaining. Nice examples and figures. Loved it!

  • @mariosconstantinou8271
    @mariosconstantinou8271 1 year ago

    These videos are amazing, thank you so much! Best explanation so far!!

  • @adarshkone9384
    @adarshkone9384 11 months ago

    I have been trying to understand this topic for a long time; glad I found this video now.

  • @McBobX
    @McBobX 2 years ago

    This is what I've been looking for, for 3 days now! Thanks a lot!

  • @hesona9759
    @hesona9759 1 year ago

    The best video I've ever watched, thank you so much

  • @jackderrida
    @jackderrida 1 year ago

    Holy crap, this tutorial is good! I've had GPT-4 generate so many analogies to refresh my understanding of the same concepts you perfectly explain here.

  • @freaknextdoor9040
    @freaknextdoor9040 3 years ago +1

    Hands down, this series is the best one explaining the essence of transformers that I have found online!!
    Thanks a lot; you are awesome!!!!

    • @HeduAI
      @HeduAI  3 years ago

      Cheers! 🙌

  • @ClaudiaAcquistapace
    @ClaudiaAcquistapace 6 months ago

    Totally in love with your explanations. You are the light at the end of my personal tunnel of trying to understand transformers in preparation for a lecture I have to give on this topic. I will mention your videos throughout my lecture. Thanks so much for explaining it so clearly.

  • @kazeemkz
    @kazeemkz 6 months ago

    Spot on analysis. Many thanks for the clear explanation.

  • @melihekinci7758
    @melihekinci7758 1 year ago

    This is the best explanation I've ever seen!

  • @binhle9475
    @binhle9475 1 year ago +1

    Your attention to detail and information structuring is just exceptional. The Avatar and GoT references on top were hilarious and make things perfect. You literally made a story out of complex deep learning concepts. This is just brilliant.
    You have such a beautiful mind (if you get the reference :D). Please consider making more videos like this; such a gift is truly precious. May the force always be with you. 🤘

  • @marcosmartinez9241
    @marcosmartinez9241 3 years ago

    This is the best series of videos; here I could finally find a good explanation of the Transformer network. Thanks a lot!!

    • @HeduAI
      @HeduAI  3 years ago

      Cheers! 🙌

  • @sheerazahmad3131
    @sheerazahmad3131 1 year ago

    Just wow........... One of the best explanations out there.
    Thank you so much :)

  • @dominikburkert2824
    @dominikburkert2824 3 years ago +1

    Best transformer explanation on YouTube!

    • @HeduAI
      @HeduAI  3 years ago

      So glad to hear this! :D

  • @minruihu
    @minruihu 1 year ago

    It is impressive how you explain such complicated topics in a vivid and easy way!!!

  • @suttonmattp
    @suttonmattp 1 year ago

    Honestly, I understood this far better from this 15-minute video than from the 90-minute university lecture I went to on the subject. Really excellent explanation.

  • @nicholasabad8361
    @nicholasabad8361 2 years ago

    By far the best explanation of Multi-Head Attention I've ever seen on YouTube! Thanks!

    • @HeduAI
      @HeduAI  2 years ago

      Glad to hear this :)

  • @rodi4850
    @rodi4850 2 years ago

    Such an underrated video! Thank you!!!

  • @srikanthkarapanahalli
    @srikanthkarapanahalli 1 year ago

    Awesome analogy and explanation!