Thanks for breaking this down so well. As a privacy lawyer, this is the first time someone has managed to explain this to me in a way that I can actually understand!
Hello, you are one of the best teachers I have found in my life.
I was trying to get the full picture of this for so long… seeing this felt just like a days-long headache going away. Thank you!
One of the best videos on BERT on YouTube.
As a student, I really appreciate your videos. On the Internet there are barely any guides on how to code a complete model; most videos only talk about the theory. But if we don't walk through the code, we can't understand and use the model well. So I subscribed to your channel to watch your videos on how to code a model from scratch. It's really awesome! Very clear and comprehensible. Can't wait to see Coding a BERT from scratch!
Love the way you explain things, so clear. Excellent Work!
Man! You know how to explain these topics! Please continue uploading these great videos for the good of the community! Really appreciate it!
AMAZING summarisation of BERT and all of the intuitiveness behind it!
Explained much more clearly than my prof. Great!
This video is awesome! Worth watching multiple times as refresher. Please keep up the good work!🎉🎉🎉
Thank you sir, you explain these topics in great detail. Please keep uploading; excellent work.
I'm coming here from the transformers video, and again really, really good and detailed explanations! Keep up the good work, and thank you!
Thanks Umar, I really appreciate the time and effort you put into creating these videos. I would very much appreciate a video coding BERT from scratch with PyTorch.
Thanks Umar. I really appreciate the time and effort you put into creating these videos. I am anxiously waiting for the implementation video.
Excellent video sir! Can't wait to see coding BERT from scratch.
Great primer on BERT. Excellent illustrations, and you explained the concepts very well.
Man, you are an excellent teacher. I must say.
Excellent BERT deep-dive video!!
I've been using a BERT model's summary endpoint and it's pretty good! BERT is underrated AF.
Thank you for explaining BERT in layman's language 👍
Thank you man! The best explanation I've ever seen.
We need more videos from you, sir, like explaining more papers and LLMs.
The explanation was amazing, thanks a lot 🙂.
Thank you so much Jamil, it really helps a lot!!😁
Brilliant video, thank you for sharing.
Thanks a lot, I love all your presentations. Please talk about GPT and other models as well.
best video explanation by far
Thank you for your feedback. Make sure to like and subscribe! Hopefully you'll love my future videos as much as this one.
Thank you! Hope you enjoy life in China.
Wow, what an explanation!! Thanks a lot.
I watched this video and found it very interesting, thanks, everything is getting clearer. But it would be even better if you gave an end-to-end text classification task by fine-tuning Llama 3. Thanks.
Best explanations, delivered in such a beautiful manner. Please share a video on GPT as well; I would be highly thankful. It's great to learn from you. Thanks a lot Sir ☺☺. I highly request a video on GPT.
Thank you for the informative video...
After this video, I got an idea for my work ❤
Best video on BERT.
If you could make a video on one of the fine-tuning tasks, with an example dataset and fine-tuning BERT on it, that would basically complete this lecture, as one video would then contain every single piece of information.
Great video thanks so much
Amazing explanations!
Please create more content; it is helping us a lot... Thanks for the video.
Waiting for the BERT implementation from scratch !
Thanks for providing good quality videos as always. As a newbie to computer vision tasks, I still find myself struggling with the training and inference of source code on a dataset. Would you please upload an informative video showing and explaining the entire process of how we can do this with open-source code? Thanks.
Great video explanation. Could you please explain how the embeddings are generated from the token ids, and how the positional encodings are added?
Superb explanation, love your channel :)
Excellent work!
Great Explanation😍
Could you please make a video on Complete Gpt architecture, different versions....your explanations are really good!!!!
Great video, it’s explained really well 😊
Thanks for the lucid explanation. In the Q&A case, how does BERT know that the answer is Shanghai?
very clear! thank you!
I think I followed your explanation of how BERT can be fine-tuned for Q&A, but I am still amazed that it can work. For example, fine-tuning it so that it knows the financial capital of China is Shanghai doesn't mean that it knows the financial capital of Indonesia.
Of course. The LLM will only know the concepts it is taught. If you never mention Indonesia in the training or fine-tuning dataset, the LLM will never know what Indonesia is, or that its capital is Jakarta.
@umarjamilai Sir, I have some confusion. You mentioned MLM and NSP, but how does that work in code? Does it pass one sentence (A) with some masking (call that loss L) and simultaneously guess sentence B from sentence A (call that loss L'), and then train itself by minimising L + L'? Like, does it work in one loop like:
for i in sthg:
Fill MLM
Evaluate loss in guessing word
Guess sentence B from current filled sentence
Evaluate loss in guessing sentence
Minimise the total loss?
Is there something (like tutorials) that I can learn from, to understand how filling in the blanks and guessing the next sentence work?
@@umarjamilai Or is it calculated as MLM first and then NSP (like two different loops)?
@@AdityaRaut765 Thanks for your question; I too am waiting for someone to answer this.
@@AdityaRaut765 Hi! It is calculated as two separate tasks, whose losses are summed up as L1 + L2. I saw many different implementations of BERT online, and so far the most trustworthy is the Hugging Face one.
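For anyone who wants to see the mechanics, here is a minimal, hypothetical PyTorch sketch of that single training loop. The `bert` encoder, head definitions and shapes are assumptions for illustration, not the actual video code:

```python
import torch
import torch.nn as nn

# Hypothetical sketch: joint MLM + NSP training step, assuming a `bert`
# encoder that returns hidden states of shape (batch, seq_len, hidden).
vocab_size, hidden = 30522, 768
mlm_head = nn.Linear(hidden, vocab_size)     # predicts the masked tokens
nsp_head = nn.Linear(hidden, 2)              # IsNext / NotNext from [CLS]
ce = nn.CrossEntropyLoss(ignore_index=-100)  # -100 marks unmasked positions

def training_step(bert, input_ids, mlm_labels, nsp_labels, optimizer):
    hidden_states = bert(input_ids)                  # (B, T, H)
    mlm_logits = mlm_head(hidden_states)             # (B, T, V)
    nsp_logits = nsp_head(hidden_states[:, 0])       # [CLS] -> (B, 2)
    loss_mlm = ce(mlm_logits.view(-1, vocab_size), mlm_labels.view(-1))
    loss_nsp = ce(nsp_logits, nsp_labels)
    loss = loss_mlm + loss_nsp   # one loop, one backward pass: L1 + L2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss
```

So it is a single loop over the batches, not two alternating loops: both losses are computed on the same forward pass and summed before backpropagation.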
Nice narration
Thank you Umar for this great video! One confusion I have is about the CLS token at 47:49. You mentioned we should use it because it can attend to all other words, since the first row doesn't have any 0 values (in the orange matrix). But wouldn't that also hold true for the other tokens/rows in this matrix? Since this orange matrix is derived from softmax(QK^T/sqrt(d)), each row (word) in Q multiplies with every column of the K^T matrix while calculating it. Wouldn't this process enable each word to interact with all the other words of the input? Sorry if I missed anything.
Great as usual ❤❤
Thanks, Umar Jamil!
You're welcome 🤓
Thanks for the wonderful break down!
There was one point where I was confused. You stated that the Q, K and V matrices were identical and represented the rows of the input embeddings from the input sequence. My understanding was that Q, K and V are different, as they're calculated by multiplying the input matrix with the learnt weight matrices Wq, Wk and Wv respectively. Could you clarify? Thanks!
Yes, you are correct in the multi-head scenario. But Q, K and V are the same in the single-head case.
Great video. Thank you. Could you please do coding BERT from scratch? I'm very curious to learn how to implement MLM and NSP tasks in PyTorch
Your channel is pretty awesome.
Hello Umar. Could you please make a video coding BERT, MLM and NSP from scratch like the previous video on Transformer? That would be very helpful to us.
Thanks for this video :)
Awesome explanation.
Is the CLS token important just because there are no zero values in its row, unlike the other tokens?
Thank you, Umar, for the video. I genuinely appreciate and enjoy your content. I'm looking forward to your implementation video.
I have a question. At 6:14, you mentioned that the output of the Encoder would be a sequence of 10 tokens. In the case of BERT, wouldn't the output simply be the contextualized embeddings of the input tokens (rather than the next token of the sequence)? Thank you
Hi! It depends on which task BERT has been fine-tuned for. If you fine-tune it for the next-token prediction task, it will return 10 tokens, of which the last one is the next token. If you fine-tune it for another task, the output will still be 10 tokens (since a transformer is a sequence-to-sequence network), but you need to interpret the output differently based on the task.
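To make that concrete, here is a small, hypothetical PyTorch sketch: the random tensor stands in for BERT's real output, and the heads and sizes are illustrative assumptions, not the video's code:

```python
import torch
import torch.nn as nn

# Hypothetical sketch: BERT's encoder always emits one contextualized
# vector per input token; the task only changes how we read that output.
batch, seq_len, hidden, vocab, n_classes = 1, 10, 768, 30522, 2
encoder_out = torch.randn(batch, seq_len, hidden)  # stand-in for BERT output

# Sentence classification: read only the [CLS] position (token 0).
cls_head = nn.Linear(hidden, n_classes)
sentence_logits = cls_head(encoder_out[:, 0])      # (1, 2)

# Next-token prediction: read only the last position.
lm_head = nn.Linear(hidden, vocab)
next_token_logits = lm_head(encoder_out[:, -1])    # (1, 30522)
print(sentence_logits.shape, next_token_logits.shape)
```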
What you said about the CLS token, isn't it also true for the other tokens? Aren't their values also getting multiplied 768 times in the same way? 47:34
Isn't the value vector for each of Q, K, V supposed to be different? Or is it just not necessary for them to be equal? And shouldn't the output of the softmax function be multiplied again with the value tensor V?
Thanks for your efforts. I would love it if you could make a tutorial on how to code OCR from scratch. Thanks
Hi Umar, great video. Around 19:37 you say that Q, K, V are identical copies, but is that actually the case? If Q = K, then QK^T would be a symmetric matrix, but it doesn't look like it is in the result of the matrix product shown later. Probably I'm just missing something here. Thanks again for the great explanation!
Best explanation so far. Can you please add some more videos on the different architectures related to BERT and transformers?
Excellent explanation of BERT, thank you Umar. Can you suggest how to implement this for an NER type of task?
In the future I plan to make a video on how to code BERT from scratch, but it's gonna take some time :D
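While waiting for that video, here is a hedged sketch of NER as token classification with the Hugging Face API. The model name, the 5-label scheme, and the assumption that each word maps to a single wordpiece are all illustrative choices, not something from the video:

```python
import torch
from transformers import BertTokenizerFast, BertForTokenClassification

# Minimal NER-as-token-classification sketch (assumed setup, not the video's).
labels = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC"]
tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
model = BertForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=len(labels)
)

enc = tokenizer("John lives in Shanghai", return_tensors="pt")
# One label id per wordpiece; -100 tells the loss to skip special tokens.
# Adjust the tensor if your tokenizer splits a word into several pieces.
label_ids = torch.tensor([[-100, 1, 0, 0, 3, -100]])  # [CLS] John lives in Shanghai [SEP]
out = model(**enc, labels=label_ids)   # out.loss, out.logits of shape (1, 6, 5)
out.loss.backward()                    # an optimizer step would follow
```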
At 12:56 you mentioned the idea of a cosine function to denote similarities between words. Can you tell me in which research paper this idea is presented most skillfully?
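For readers new to the idea, a toy sketch of cosine similarity between embedding vectors; the numbers are invented purely for illustration:

```python
import torch
import torch.nn.functional as F

# Toy sketch of the idea from 12:56: similar words should have embedding
# vectors pointing in similar directions, measured by cosine similarity.
cat = torch.tensor([0.9, 0.8, 0.1])
kitten = torch.tensor([0.85, 0.75, 0.2])
car = torch.tensor([0.1, 0.0, 0.95])
print(F.cosine_similarity(cat, kitten, dim=0))  # close to 1.0
print(F.cosine_similarity(cat, car, dim=0))     # much lower
```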
Hey bro, great and amazing work. Please do a video about coding BERT from scratch
43:43 Shouldn't the sentence after the [SEP] token be "I could imagine that it's frost on the ground"?
I have trouble understanding the last part, question answering. You train the model to say where the start and the end of the answer are, OK, but when/where do you train (fine-tune) it to actually answer properly for some custom domain knowledge?
What an awesome video!!!!!!!
Thank you!
Thanks, most of it is great! But the Q&A fine-tuning is confusing.
Thanks for your wonderful tutorial, waiting for the code part👀👀
Thanks!
Shouldn't the linear layer for the Q&A task have 3 classes for each token? Because for all the other tokens apart from T10 and T27, there should be another class which says NOT START/END.
Nope. Since we have two linear layers, each token will either be "Start/NOT_Start" or "End/Not_End". This means each token can be in 4 possible states: "Start - End", "Not_Start - End", "Start - Not_End" and "Not_Start - Not_End". This also covers the case in which the same token is the start and the end token at the same time, because the softmax score for that token will be the highest for both linear layers for the "Start" and the "End" class.
@@umarjamilai Oh, by 2 linear layers you mean parallel layers. I assumed 2 linear layers in series, like an MLP.
Also, now that I think about it, we don't require two separate layers, right? We could use a single layer with 2 output neurons, where one neuron is binary START/NOT_START and the other END/NOT_END.
@umarjamilai Then in this case, how do we ensure the end token always comes after the start token?
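Here is a small, hypothetical PyTorch sketch of the span-extraction idea discussed above: two parallel linear heads plus a decoding rule that never lets the end precede the start. Shapes and names are illustrative assumptions, not the video's exact code:

```python
import torch
import torch.nn as nn

# Two parallel heads score every token as a possible start or end.
seq_len, hidden = 30, 768
encoder_out = torch.randn(1, seq_len, hidden)        # stand-in for BERT output

start_head = nn.Linear(hidden, 1)                    # "is this token the start?"
end_head = nn.Linear(hidden, 1)                      # "is this token the end?"
start_logits = start_head(encoder_out).squeeze(-1)   # (1, 30)
end_logits = end_head(encoder_out).squeeze(-1)       # (1, 30)

# Decode: pick the (start, end) pair with the best combined score,
# keeping only pairs where the end does not precede the start.
scores = start_logits.unsqueeze(-1) + end_logits.unsqueeze(-2)  # (1, 30, 30)
valid = torch.ones(seq_len, seq_len).triu().bool()              # end >= start
scores = scores.masked_fill(~valid, float("-inf"))
best = scores.view(1, -1).argmax(dim=-1)
start_idx, end_idx = best // seq_len, best % seq_len
print(start_idx.item(), end_idx.item())
```

The mask is what enforces the ordering: training alone doesn't guarantee it, so the constraint is applied at decoding time.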
Thank you, that was really helpful. Could you please do vision-language models?
thanks sir
Fantastic content, as always. Just worth noting that Q and K (and V) don't necessarily have to be the same in self-attention.
In Self-Attention the Q, K and V are always the same, that's why it's called self-attention. When K, V are coming from somewhere else (and are different from Q), in that case we talk about cross-attention.
@@umarjamilai No, Q K and V are not the same. They are computed separately with trainable parameters.
@@HowToTechChannel If you refer to the parameters Wq, Wk and Wv, yes they're separate and trainable parameters. But they all receive as input the same sequence in Self-Attention.
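A minimal sketch of the point above, assuming a toy sequence length and model size: the input x is the same for all three projections, but the learned layers make Q, K and V differ, so QK^T need not be symmetric:

```python
import torch
import torch.nn as nn

# Self-attention: one input sequence, three separate learnable projections.
seq_len, d_model = 10, 512
x = torch.randn(seq_len, d_model)         # the same input for all three

w_q = nn.Linear(d_model, d_model, bias=False)
w_k = nn.Linear(d_model, d_model, bias=False)
w_v = nn.Linear(d_model, d_model, bias=False)

q, k, v = w_q(x), w_k(x), w_v(x)          # same input, three different outputs
attn = torch.softmax(q @ k.T / d_model**0.5, dim=-1)  # (10, 10)
out = attn @ v                            # weighted mix of value vectors
print(torch.allclose(attn, attn.T))       # False in general: not symmetric
```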
In the framework of scaled dot-product attention, particularly within the context of Transformer architectures, a key consideration is the interaction between the query (q), key (k), and value (v) matrices. This mechanism typically involves scaling the product of q and k by the square root of the dimensionality (d), followed by the application of the softmax function, and subsequently multiplying this result with the v matrix. A critical aspect of this process is the role of the attention heads in processing tokens. If we consider a scenario where each attention head is exposed to the entirety of the token set throughout the training process, how might this influence the effectiveness and efficiency of the model? This question assumes particular relevance in a setting where the embedding dimension is fixed at 512, and the model is dealing with a substantial number of tokens, potentially in the range of 10,000. Could such a comprehensive exposure of all heads to all tokens potentially compromise the model's performance, or are there mechanisms within the architecture that mitigate this concern?
To be honest, within the context of multihead attention, all the tokens are exposed to each head, but each head is only working with a different part of the embedding of each token. This is exactly what happens in the vanilla Transformer and also in BERT and works wonderfully. For example, if the embedding size is 512 and we have 4 attention heads, the first attention head will see the first 128 dimensions of each token, that is, the range [0...127], the second head will see the range [128...255], the third the next 128 dimensions and so on...
So I don't understand your question: the transformer is already working like this, and so does BERT. In BERT, particularly, each token attends also to tokens coming after it in the sentence, which is called its "right context".
Hope my explanation clarifies your doubts
@@umarjamilai First of all, I'd like to express my gratitude for your insightful videos. They've been a great resource as I embark on my PhD journey in generative AI in Spain. Though I'm originally from Pakistan, I've been living in Spain since I was nine years old. Now, regarding my query about the Transformer model: I appreciate your explanation of the multihead attention mechanism and how each attention head interacts with a different segment of the token's embedding. However, my question specifically focuses on the logic behind multiplying the attention weights by the value matrix (V). While I understand that the product of the key (K) and query (Q) matrices indicates the relationship between tokens, the rationale behind subsequently multiplying this result by V isn't clear to me. Could you please clarify the purpose or reasoning behind this step in the attention mechanism?
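A hedged sketch of both points in this thread, with made-up sizes: each head attends over its own slice of the embedding, and the softmax weights are finally multiplied with V, so every output position becomes a weighted mix of the value vectors of the tokens it attends to:

```python
import torch

# Multi-head splitting: 512 dims, 4 heads -> 128 dims per head.
seq_len, d_model, n_heads = 10, 512, 4
d_head = d_model // n_heads

q = torch.randn(seq_len, d_model).view(seq_len, n_heads, d_head).transpose(0, 1)
k = torch.randn(seq_len, d_model).view(seq_len, n_heads, d_head).transpose(0, 1)
v = torch.randn(seq_len, d_model).view(seq_len, n_heads, d_head).transpose(0, 1)

weights = torch.softmax(q @ k.transpose(-2, -1) / d_head**0.5, dim=-1)
out = weights @ v     # each output row is a weighted average of value rows
out = out.transpose(0, 1).reshape(seq_len, d_model)  # heads concatenated back
print(out.shape)      # torch.Size([10, 512])
```

The `weights @ v` line is the step being asked about: QK^T only says how related tokens are, while the multiplication with V actually moves information, blending the value vectors of the related tokens into each output.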
Very good guys!
The question-answering part at 53:31 didn't make sense to me. What is the point of [SEP]? We know the question ends there, right? Therefore the answer must begin there!
Can you do a video on QWEN 2.5 model?
Thanks!
Hi Umar, please let me know the difference between language models and foundation models.
A Foundation Model is a large language model, just trained on a massive amount of data that only big tech can afford (Meta, Google, etc.). Foundation Models have been pre-trained on a variety of data, so they can easily be fine tuned for a specific task or even used with zero shot prompting on unseen tasks.
Hit the like button if you are impatiently waiting for " code bert from scratch" video🎉
Please tell us about ALiBi, using Llama's embeddings as an example 🙃, instead of rotary encoding 🙂
I like it
Please do a video on GPT and LLaMA too....!!
I have two videos on LLaMA, check them out ;-)
BERT has embedding vectors of size 768 and 1024, but positional embeddings of 512. How does that work? I am confused.
BERT has 512 positions. 512 is not the size of the positional embedding vector. It means the max sequence length can be up to 512 tokens.
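You can verify this quickly with the Hugging Face config; the values shown are for bert-base (bert-large uses hidden_size=1024):

```python
from transformers import BertConfig

# The two numbers being mixed up live in different config fields.
config = BertConfig.from_pretrained("bert-base-uncased")
print(config.hidden_size)               # 768: size of each embedding vector
print(config.max_position_embeddings)   # 512: max number of token positions
```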
What's the dimension in most LLMs, like Mixtral?
is BERT outdated now?
No, BERT is very important for sentence similarity tasks, and is still widely used.
It would be better if you covered the coding part as well.
25:00 Your input sequence is different, and you are explaining the matrix for a different sequence!! Isn't that a mistake?
Hi! I didn't understand what the mistake is... Throughout the video I have referenced the first line of the Chinese poem. Sometimes, even for different inputs, I reference the same matrix so that people know we are talking about the same concept.
You were using the sequence about China... but the table was not referring to that. In fact, you talked about the relation with [EOS], but the table was confusing as it didn't refer to the same sequence you were basing it on. All in all, I loved all your videos; it's just that this one appeared out of sync in that part, the rest is cool. Already watching your 2-hour video :)
At 25:00 he was just trying to explain the causal mask concept for self-attention in the usual vanilla Transformer, where a token doesn't interact with tokens that come after it; those entries are set to -inf before applying softmax, and so they become 0 after softmax is applied. But this doesn't apply to BERT, which instead uses the MLM idea, masking certain tokens that have context on either side. Again, this is my understanding of his explanation. Once @Umar does the coding, it will be clearer, I believe.
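A tiny sketch of that causal mask (decoder-style attention, which BERT does not use), with an invented 4-token example:

```python
import torch

# Causal mask: future positions get -inf before softmax, so their
# attention weights become exactly 0 afterwards.
seq_len = 4
scores = torch.randn(seq_len, seq_len)            # raw QK^T / sqrt(d) scores
causal = torch.tril(torch.ones(seq_len, seq_len)).bool()
scores = scores.masked_fill(~causal, float("-inf"))
weights = torch.softmax(scores, dim=-1)
print(weights)   # upper triangle is all zeros: no attention to future tokens
```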
At 19:20 you say that Key, Query and Value matrices are identical to each other. I think that is wrong. Those matrices are calculated using different linear layers with learnable parameters.
Also, if they were identical as you said, the 10x10 matrix that you show at 21:17 should have been symmetrical along the diagonal.
I think I misworded it. The input sequence to the Wq, Wk and Wv matrices is the same during Self-Attention, but their projections are different. So it all depends on what you call Q, K and V: the input or the output of these three linear projections? The inputs are the same (it's the same sequence), while the outputs are obviously different.
@@umarjamilai Well, not so obviously, because in the video you say Q, K and V are the same matrices. And then you multiply Q with K^T, then with V. You also never mention a projection. So that piece of information is missing, and misleading for a viewer who doesn't know better.
@@HowToTechChannel Also found it misleading. Using an encoder architecture for the 'next word prediction' task raises concerns. I have watched the Llama 2.0 videos and built my understanding of many ideas, like Multi-Head Attention, on them; hopefully they were prepared more carefully :(
Tomorrow
You are a legend
Code BERT from scratch!
Thanks!