For more details and code on building a translator using a transformer neural network, check out my playlist "Transformers from scratch": ruclips.net/video/QCJQG4DuHT0/видео.html
what a hugely underrated video. You did such a better job at explaining this on multiple abstraction layers in such a short video than most videos I could find on the topic which were more than twice as long.
Thanks a ton Jeffrey! Means a lot. I've come to realize (fairly recently) that only speaking in jargon isn't going to help. Peeling it down from highly abstract to more technical goes a long way for viewers and myself. I understand more when I break the jargon down. I'll be using this more in future videos
@@CodeEmporium Of course everyone has a different approach to understanding a topic. I recently had to get into a few topics quite quickly, and for me the best way of getting there fast was to start out with very general videos to get a sort of feel for the general ideas and how everything works together on a high level. Then I would watch some more detailed videos or switch to reading more detailed articles until finally reading the actual papers and looking at the formulas and all that stuff.
Having an understanding of the bigger picture helped me comprehend the details better.
I also think no explanation can ever be "too simple", because sometimes when explanations try to save time by glossing over parts or taking things for granted, you spend way more time rewinding and trying to wrap your head around some small detail, just because you might be missing some needed knowledge.
I think an explanation is like spices on food: better keep it simple and easy. Individuals can always skip parts for themselves. Same with the spices: better not add too much thinking everyone will be able to take it; rather add a little, and if it's not enough for someone, they can add more themselves.
So true man!! I scraped the net to find a simple explanation!! You are a genius :)
True, I did not understand the use of attention until watching this video
@@CodeEmporium Great approach that you've taken. A high-level understanding followed by a deeper understanding of the topic pretty much clears up the concept. Subscribed.
Great video! Watched it a few times already so these timestamps will help me out:
0:00 Problems with RNNs and LSTMs
3:34 First pass overview over transformer architecture
8:10 Second pass with more detail
10:34 Third pass focusing on attention networks and normalization
11:57 Other resources (code & blog posts)
Thanks for this! It'll help others watching too.
@@CodeEmporium Pin this comment.
Thank you so much! I planned to watch this a few times for reference as I delve into transformer code. This will be very useful.
Incredibly well explained and concise. I can't believe you pulled off such a complete explanation in just 13 minutes!
Thank you for the kind words. Super glad you liked it :)
Wow.
I've watched 45+ minute lectures trying to understand this architecture, even lectures from the original authors.
Your video was hands-down the best, really helped me piece some key missing intuition pieces together.
You have a gift for teaching and explaining -- I wholeheartedly hope you're able to leverage that in your professional career!
@2:02 that vsauce thing was cool
Ikr 😅
...or was it?
Love that so many people got the vsauce ref
I came here to say that. Excelsior
The multi-pass approach to progressively explaining the internals worked well. Thanks for your content!
The understanding converges!
I bought a Udemy course on Transformers and BERT, but it was no help and wasted my time, money, and energy. This video and your BERT video made my day. Thanks. Now I may explain it well in my interview. :)
This is by far the best explanation of the Transformers architecture that I have ever seen! Thanks a lot!
2:03 I died at the Vsauce reference. Well played.
Me too, I had to scroll back around 30s because he just continued explaining and I completely lost focus :D
:v
I love the multi-pass way of explanation so that the viewer can process high level concepts and then build upon that knowledge, great job.
By far, the MOST comprehensible explanation of Transformers available on the whole internet.
You deserve 1M subscribers at least.
Thank you for the kind words! Maybe one day
Great video!! I am taking a course at my university and one of the lectures was about RNNs and transformers. Your 13-minute video explains it way better than the 100-minute lecture I attended. Thank you!
I had to make two passes of your video to fully understand and appreciate the underlying mathematics and workings of the model. You have put great effort into making it simpler to understand with illustration and animation.
This is awesome!!! Thank you for breaking it down concisely, understandably, and deeply! It’s hard to find explanations that aren’t so simplistic they’re useless, or so involved they don’t save time in achieving understanding. Thank you!!
My pleasure! If you are into building the transformer piece by piece from scratch, I suggest checking out the “Transformers from scratch” playlist.
This might be the best explanation for someone who has some experience with ANN and CNN but want to understand Transformers. Thanks!
Thanks for the kind words!
This was a phenomenal video. You managed to explain transformers in 13 minutes better than my professor could in three hours.
Thank you and keep on creating content!
One of the best or probably the best explanation I've seen. Thank you very much for the effort.
Went through several videos on 'Attention is all you need' paper before this, all the details you managed to cover in thirteen minutes is amazing. Could not find explanation that is so easy to understand anywhere else. Great job!
Right? I couldn't believe this video is only 13 minutes! That's a very good talent to have.
One of the cleanest explanation for transformers without dabbling too much into the theory!! Thanks man
2:04 or are we... Not gonna lie, this is the best channel and best explanation ever...
Bro
TBH, no words to appreciate such a well-structured video in such a short time, and the explanation was easily understandable even for people with less knowledge.
Thanks for the video man.
I saw Yanik's explanation and now I saw yours. Yanik does a terrible job at explaining papers; he usually just jokes around. Your explanation is probably one of the best I have seen so far. Thanks man.
I watched this video four times. After each time, I feel I understand this topic better than the previous one.
The easiest understanding video about this topic, as I can see! Thank you
You are very welcome :)
Thank you for making this! As a curious outsider I have been anxious about falling behind in recent years and this was perfect to bring me up to speed - at least enough to follow the conversation.
You are very welcome! Thanks for watching! :)
Your way of breaking things down step by step is very effective! Congrats and thanks. School systems should use it more.
I watched many videos on this topic, but this video taught me in the easiest way...
Thanks for watching! And for the compliments :) I try my best for my more recent videos too
Thanks, man. This is a really clear and high-level explanation. Really helpful for some guys like me who just stepped into this area.
I read many explanations online. They give tons of details but fail to explain these abstract items. These explanations always use other abstract definitions to explain this one, and the same problem happens again in the explanation of the "other abstract item". Sometimes I just forget what I originally wanted to understand. Or even worse, they form a circle...
Thank you so much! This video helped me a lot in understanding the paper
I paused every 10 seconds and took notes, such an excellent video!
Had to watch it like 2-3 times to get the entire thing, but worth it. Somehow so concise, yet every relevant detail is included.
Thanks a lot for watching and commenting! Super happy this video gave you value
What a guide for the transformer in just 13 minutes.
Thanks a lot for this simplicity.
Really great video. As someone transitioning from pure math into machine learning and AI, I find the language barrier to be the biggest hurdle and you broke down these concepts in a really clear way. I love the multiple layer approach you took to this video, I think it worked really well to first give a big picture overview of the architecture before delving deeper.
Super happy this helped!
Thank you very much for your effort! You just adjusted my attention layer; the pieces start to fall into place and I have a much better understanding of why TNNs are so revolutionary.
That was the best way of explaining things, in my opinion. Start with the big picture, getting more detailed over time.
I am halfway through this video. I have not finished it. But I felt like giving a like before watching the next half. Thank you so much.
With videos like this one you should be having 100,000+ subscribers soon. Adding a bit of humor to uncompromising technical content is a very good way to go.
The only thing that surprised me more than this high quality explanation is how you pronounce "mahtrix"
We live in the mahtrix...
.
.
Sorry I'll leave.
.
.
Psych. I'm staying since it's my channel. Haha.
.
.
Please don't leave (Thanks for watching and reading this pointless rant)
BEAUTIFUL......deep, concise, pithy, each word is meaningful.....well done.
Best explanation ever. I have an interview with Microsoft tomorrow. This was the best brushing up I could get.
How do you pass the output tokens when they are something we want to predict? I think you pass whatever is generated as output, and since nothing is generated at the start, you pass the start token.
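That's the right intuition: at inference time the decoder is seeded with a special start token, and each generated token is fed back in for the next step. A minimal sketch of this loop (the `toy_decoder` stand-in, the token names, and the tiny translation table are all made up for illustration, not taken from the video or any real model):

```python
# Toy illustration of how a transformer decoder is fed during inference:
# generation begins with a special start token, and each newly produced
# token is appended to the decoder input for the next step.

START, END = "<start>", "<end>"

def toy_decoder(encoder_input, decoder_tokens):
    """Stand-in for a trained model: emits the next French token per call."""
    translation = {"a big dog": ["un", "gros", "chien"]}
    out = translation[encoder_input]
    step = len(decoder_tokens) - 1        # tokens generated so far (minus <start>)
    return out[step] if step < len(out) else END

def greedy_translate(sentence, max_len=10):
    decoder_tokens = [START]              # nothing generated yet: feed the start token
    for _ in range(max_len):
        next_token = toy_decoder(sentence, decoder_tokens)
        if next_token == END:
            break
        decoder_tokens.append(next_token) # feed generated output back in
    return decoder_tokens[1:]

print(greedy_translate("a big dog"))      # ['un', 'gros', 'chien']
```

During training this loop isn't needed: the whole shifted target sentence is fed to the decoder at once (teacher forcing), with masking so each position can't peek at later tokens.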
This video was one of the best learning videos I EVER SAW.
First you give a high-level overview, then you step in deeper,
every step with an understandable example.
THANK YOU SO MUCH!!!
You are very welcome. Thank you for the compliments :)
Best explanation! The "multi-pass" way of explaining is genius.
Great video! I love how you go through multiple passes, each getting into deeper specifics!
LOVE the multipass strategy for explaining the architecture. I don't think I've seen this approach used with ML, and it's a shame as this is an incredibly useful strategy for people like me trying to play catch up. I hopped on the ML train a little late, but stuff like this makes me feel not nearly as lost.
Finally a basic explanation I can understand. I tried reading the original "Attention is all you need" paper, but it felt like it was assuming I was already familiar with the basics of NLP, like the encoder-decoder setup. Which I wasn't.
I am now more confident to talk about transformers, it had been abstract to me before. Thank you so much, great explanation.
Glad and you're welcome! :)
Outstanding explanations: to the point and well illustrated. Thank you.
I don't know how many attention videos I have watched. But I swear I'm getting there. Repetition is the key to understanding this stuff.
This is by far the most comprehensive yet short video I have seen.
Really appreciate how the explanation is going from high level and down, thanks.
Super glad you enjoyed this :)
This is heavily underrated!! Such an awesome video! Thanks Man!
Really great video. I have taken so many classes that have discussed this model and never understood it until now. Thanks!
Perfect! Glad this helped!
KEEP IT UP! Please! This is outstanding :) love the diagrams, dry crisp active speaking, & overview technique.
Outstanding. Best explanation of transformers that I’ve seen by far.
Thanks a ton for watching :)
This is one of the best explanations for transformers I've come across online! Awesome job, man! Thanks. I'll totally recommend your channel to some classmates!! :)
Thanks! And glad it's helpful! Spreading the word about my channel is the best thing you can do to help :)
A very good explanation about transformers, Thank you :)
Wow. Amazing video. Better than anything I've watched on the topic, all in thirteen minutes.
Didn't know what the transformer hype was until I landed on this video. Thanks a lot! Subscribed. Gotta check out more content on this channel now.
Best channel for AI and ML on RUclips
Great video bro! You're underrated af. One of the best, if not the best, explanations of a neural network architecture. Keep up the good work. Kudos!
Thanks so much! I am making more related videos. So do check ‘em out :)
Good job CodeEmporium! Very well made overview. thanks.
How is it possible that such a beautiful video doesn't have more views/likes? Thank you CodeEmporium :)
Hey, thanks a lot, the explanation is great. The video explains this much more clearly than the lecture at my university.
Thanks for the high praise
Great video! I love the layered approach for explaining the concepts. Very well done. Thank you!
I am super glad that approach helped
Great job putting attention in simple words. Very intuitive!
Thanks so much!
TOP OF THE TOP, what a clear explanation you provided!!!
Just want to say this video is amazing. Watched like three other 30+ mins videos but they all failed to train my stupid brain. This 13 minutes video is intuitive, detailed, and beginner-friendly. Thank you :3
This is the best explanation! I came in search for transformers but I found Gold.
Wow, I am so glad I found your channel. The concept is clearly explained and assumes an intelligent audience. Well done!
It's been a year and I still come back to this video to revise my transformer knowledge :)
Thanks man!
Thanks for coming back :)
Watched Andrew Ng, then watched this, and you got me to stick through the video. Andrew, who I consider one of the best in this field, did not manage to express it as clearly as you did! Cheers man, amazing video!
Oh man I’m going through this now! Didn’t even realise you had a vid on it, this is brilliant. Love that you did a bird’s eye pass then dove in.
You're saying you liked my content without watching this video? You must be a true fan of mine mwahahaa..also thanks for the kind words :)
@@CodeEmporium I mean, tbf, I clicked like before watching it because it's you, then I was like dayyyyuuumm this is great.
Really nice explanation, although even after the video most of the Transformer still seems like a big magic black box. Fascinating stuff.
I can't believe you explained such complicated things in a crystal-clear format. Excellent job dude.
That was an amazing, content-heavy explanation for just 13 minutes. Thanks a lot!
You are super welcome!
Sir, you just earned yourself a like and subscription for this amazing video.
My background is Mechanical Engineering, but I was still able to easily follow each step. Thanks man!
Perfect! Glad this was useful to you. And thanks for your background info. Helps to know my audience to tailor these videos
Prolly the best explanation on this topic on YouTube.
Good stuff, best explanation I've found so far; it had been so confusing reading through the jargon on other sites.
Glad this was useful. The idea was to avoid as much jargon as possible. And if used, make sure it's explained
Best video on Transformers I have seen so far! The examples really help to understand how the architecture works. Subscribed.
Thank you for watching!
Wow, the best explanation on YouTube! Had to subscribe after watching!
Even though this definitely assumes some intermediate working level knowledge of ML, best layman explanation on Transformers/Attention so far.
It is a really straightforward video that covers exactly what we need to learn!!
I was searching for this particular explanation for a long time! Thanks for this!
Watching your videos for a last-minute revision. You're awesome.
Awesome! And thank you! I do not deserve such compliments
Man! Brilliant video. I saw a 27-minute video and was totally worn out and didn't even understand much. But this was just awesome, and in half the time!
The only thing lacking might be examples of keys, values, and queries, but I mostly got the hang of it.
Brilliant piece of work! You nailed it in a few minutes!
Thanks for watching!
Amazing way of explaining complex things in simple terms.
Thanks !
Superb explanation in 13 minutes. I have been watching videos over an hour long to get this concept. Well done and keep it up. Regards
This channel is so under-rated. Amazing video
Thank you, man.
The underrated AI channel.
Thanks homie. Spread the word if you can! :)
Thanks. I had a lecture about Transformers and couldn't quite follow it at first.
Best channel on the topic! So glad that I found this channel.
Thanks a ton :)
Thank you so much ㅠㅜㅜ Thanks to you, it finally clicked. I totally agree with the top comment. Why on earth isn't this more famous?!! Thank you for your clear explanation!! This is what I was looking for!!!!!!
This is an excellent video. This resembles the TLDR or the introduction to transformers.
Thanks for the watch :)
Best explanation.. even a beginner can understand 👍
Thanks for watching!
This video presents such a natural logical flow. It is very satisfying to watch. Could someone help me with some answers to these questions? Answers to these would really help orient me so I could then understand the deeper points of the video.
1) This process of feeding the English and the French words is training right? (as opposed to the process of using the model to calculate a translation desired by the user)
2) Assuming that yes, this video is about training, at what point do you say to the algorithm "The next French word was actually supposed to be 'chien'. Learn from that!"
3) Is this video an example of NSP training? How would it look different if it were MLM? I know what MLM is, but I'm asking more from a practical standpoint, like what would you feed to the algorithm and at what time.
Really great video nonetheless, I can tell it would be perfect if I just had a little more background (which I am working to develop)
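On questions 1) and 2): yes, feeding both the English and (shifted) French sentences is training, done with teacher forcing. The decoder sees the true French sentence shifted right by one position, and a cross-entropy loss at each position compares the model's prediction with the true next word; the position where the answer "was actually supposed to be 'chien'" is exactly where that loss bites. A toy sketch with entirely made-up probabilities and a tiny invented vocabulary (for illustration only):

```python
import math

# Toy teacher-forcing step: the whole shifted French sentence is fed to
# the decoder at once, and the loss at each position compares the model's
# predicted distribution with the true next word.

vocab = ["un", "gros", "chien", "chat"]

def cross_entropy(predicted_probs, target_word):
    """Negative log-probability the model assigned to the correct word."""
    return -math.log(predicted_probs[vocab.index(target_word)])

decoder_input = ["<start>", "un", "gros"]  # target shifted right by one
target_output = ["un", "gros", "chien"]    # what each position should predict

# Pretend per-position model outputs (softmax over the toy vocab):
predictions = [
    [0.7, 0.1, 0.1, 0.1],    # after <start>: mostly "un" — good
    [0.1, 0.8, 0.05, 0.05],  # after "un": mostly "gros" — good
    [0.1, 0.1, 0.2, 0.6],    # after "gros": model prefers "chat" — wrong!
]

losses = [cross_entropy(p, t) for p, t in zip(predictions, target_output)]
print([round(l, 3) for l in losses])  # → [0.357, 0.223, 1.609]
```

The third loss is the largest; backpropagating through the summed losses is the "learn from that!" step. On question 3): NSP and MLM are BERT pre-training objectives, while this video's translation setup is a plain sequence-to-sequence objective, so neither label quite applies here.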
The best video I have seen on this topic. Great job
Glad it was helpful! And Thanks for watching :)
It always blows my mind how anyone figures out any of this.
I agree 😂
Please, watch this video before reading the paper. Amazing.