Coding a Transformer from scratch in PyTorch, with full explanation, training and inference.

  • Published: 9 Jul 2024
  • In this video I teach how to code a Transformer model from scratch using PyTorch. I highly recommend watching my previous video to understand the underlying concepts, but I will also review them again in this video while coding. All of the code is mine, except for the attention visualization function used to plot the chart, which I found online on Harvard University's website.
    Paper: Attention is all you need - arxiv.org/abs/1706.03762
    The full code is available on GitHub: github.com/hkproj/pytorch-tra...
    It also includes a Colab Notebook so you can train the model directly on Colab.
    Chapters
    00:00:00 - Introduction
    00:01:20 - Input Embeddings
    00:04:56 - Positional Encodings
    00:13:30 - Layer Normalization
    00:18:12 - Feed Forward
    00:21:43 - Multi-Head Attention
    00:42:41 - Residual Connection
    00:44:50 - Encoder
    00:51:52 - Decoder
    00:59:20 - Linear Layer
    01:01:25 - Transformer
    01:17:00 - Task overview
    01:18:42 - Tokenizer
    01:31:35 - Dataset
    01:55:25 - Training loop
    02:20:05 - Validation loop
    02:41:30 - Attention visualization
  • Science

Comments • 298

  • @comedyman4896
    @comedyman4896 9 months ago +87

    Personally, I find that seeing someone actually code something from scratch is the best way to get a basic understanding

    • @zhilinwang6303
      @zhilinwang6303 5 months ago

      indeed

    • @user-jb2ex2ux9i
      @user-jb2ex2ux9i 5 months ago

      indeed

    • @CM-mo7mv
      @CM-mo7mv 4 months ago

      I don't need to see someone typing... but you might also enjoy watching the grass grow or paint dry

    • @FireFly969
      @FireFly969 2 months ago +1

      Yeah, and you see how these technologies work.
      It's insane that, in the end, it looks easy enough that you can do what companies worth millions and billions of dollars do. In a small way, but with the same idea in the end.

    • @maskedvillainai
      @maskedvillainai 2 months ago

      Yeah, kinda ironic how that works. The simplest stuff requires the most complex explanations.

  • @faiyazahmad2869
    @faiyazahmad2869 3 days ago +2

    One of the best tutorials for understanding and implementing the Transformer model... Thank you for making such a wonderful video!

  • @umarjamilai
    @umarjamilai  1 year ago +125

    The full code is available on GitHub: github.com/hkproj/pytorch-transformer
    It also includes a Colab Notebook so you can train the model directly on Colab.
    Of course nobody reinvents the wheel, so I have watched many resources about the transformer to learn how to code it. All of the code is written by me from zero except for the code to visualize the attention, which I have taken from the Harvard NLP group article about the Transformer.
    I highly recommend all of you to do the same: watch my video and try to code your own version of the Transformer... that's the best way to learn it.
    Another suggestion I can give is to download my git repo, run it on your computer while debugging the training and inference line by line, while trying to guess the tensor size at each step. This will make sure you understand all the operations. Plus, if some operation was not clear to you, you can just watch the variables in real time to understand the shapes involved.
    Have a wonderful day!
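
    A minimal sketch of the shape-guessing exercise suggested above (the names batch, seq_len and d_model are illustrative, not taken from the repo):

        import torch

        batch, seq_len, d_model = 8, 350, 512
        x = torch.randn(batch, seq_len, d_model)  # e.g. the output of the input embedding

        # While stepping through with the debugger, guess the shape first,
        # then verify it (or simply print x.shape and compare).
        assert x.shape == (batch, seq_len, d_model)

        # After a linear projection, only the last dimension changes:
        q = torch.nn.Linear(d_model, d_model)(x)
        assert q.shape == (batch, seq_len, d_model)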

    • @AiEdgar
      @AiEdgar 1 year ago +1

      The best video ever

    • @odyssey0167
      @odyssey0167 9 months ago

      Can you provide the pretrained models?

    • @wilfredomartel7781
      @wilfredomartel7781 3 months ago

      🎉Is this the BERT architecture?

  • @aiden3085
    @aiden3085 7 months ago +4

    Thank you Umar for your extraordinarily excellent work! The best transformer tutorial I have ever seen!

  • @ArslanmZahid
    @ArslanmZahid 7 months ago +20

    I have browsed YouTube for the perfect set of videos on the transformer, but your set of videos (the video explanation you did on the transformer architecture) and this one are by far the best!! Take a bow, brother, you have contributed to your viewers in an amount you can't even imagine. Really appreciate this!!!

  • @shresthsomya7419
    @shresthsomya7419 5 months ago +2

    Thanks a lot for such a detailed video. Your videos on the transformer are the best.

  • @maxmustermann1066
    @maxmustermann1066 9 months ago +4

    This video is incredible, never understood it like this before. I will watch your next videos for sure, thank you so much!

  • @yangrichard7874
    @yangrichard7874 7 months ago +29

    Greetings from China! I am a PhD student focused on AI research. Your video really helped me a lot. Thank you so much, and I hope you enjoy your life in China.

    • @umarjamilai
      @umarjamilai  7 months ago

      Thank you! Let's connect on LinkedIn.

  • @shakewingo3216
    @shakewingo3216 8 months ago +2

    Thanks for making it so easy to understand. I definitely learned a lot and gained much more confidence from this!

  • @goldentime11
    @goldentime11 2 months ago +1

    Thanks for your detailed tutorial. Learned a lot!

  • @raviparihar3298
    @raviparihar3298 1 month ago +2

    Best video on the transformer model I have ever seen on the whole of YouTube. Thank you so much, sir!

  • @prajolshrestha9686
    @prajolshrestha9686 8 months ago +1

    I appreciate this explanation. Great video!

  • @abdulkarimasif6457
    @abdulkarimasif6457 1 year ago +5

    Dear Umar, your video is full of knowledge; thanks for sharing.

  • @VishnuVardhan-sx6bq
    @VishnuVardhan-sx6bq 6 months ago +1

    This is such great work. I don't really know how to thank you, but this is an amazing explanation of an advanced topic such as the transformer.

  • @user-db8nb5wz2z
    @user-db8nb5wz2z 8 months ago +1

    Really great explanation for understanding the Transformer; many thanks to you.

  • @zhengwang1402
    @zhengwang1402 7 months ago +2

    It feels really fantastic watching someone write a program from the bottom up

  • @lyte69
    @lyte69 8 months ago +1

    Hey there! I enjoyed watching that video, you did a wonderful job explaining everything, and I found it super easy to follow along. Overall, it was a really great experience!

  • @manishsharma2211
    @manishsharma2211 7 months ago +2

    WOW WOW WOW. Though it was a bit tough for me, I was able to understand around 80% of the code. Beautiful. Thank you so much.

  • @ghabcdef
    @ghabcdef 5 months ago +1

    Thanks a ton for making this video and all your other videos. Incredibly useful.

    • @umarjamilai
      @umarjamilai  5 months ago

      Thanks for your support!

  • @godswillanosike896
    @godswillanosike896 3 months ago +1

    Great explanation! Thanks very much

  • @MuhammadArshad
    @MuhammadArshad 8 months ago +7

    Thank God it's not one of those "ML in 5 lines of Python code" or "learn AI in 5 minutes" videos. Thank you. I cannot imagine how much time you must have spent making this tutorial. Thank you so much. I have watched it three times already and wrote the code while watching the second time (with a lot of typos :D).

  • @salmagamal5676
    @salmagamal5676 6 months ago +1

    I can't possibly thank you enough for this incredibly informative video

  • @sagarpadhiyar3666
    @sagarpadhiyar3666 2 months ago +1

    Best video I came across for the transformer from scratch.

  • @terryliu3635
    @terryliu3635 1 month ago +1

    I learned a lot from following the steps in this video and created a transformer myself step by step!! Thank you!!

  • @skirazai7591
    @skirazai7591 6 months ago +2

    Great video, you are insanely talented btw.

  • @user-ru4nb8tk6f
    @user-ru4nb8tk6f 8 months ago +1

    You are a great professional, thanks a ton for this

  • @jeremyregamey495
    @jeremyregamey495 7 months ago +1

    I love your videos. Thank you for sharing your knowledge; I can't wait to learn more.

  • @michaelscheinfeild9768
    @michaelscheinfeild9768 10 months ago

    I'm enjoying the clear explanation of the Transformer coding!

  • @ansonlau7040
    @ansonlau7040 3 months ago +1

    A big thank you for the video; it makes the transformer so easy to learn (also the explanation video)👍👍

  • @Mostafa-cv8jc
    @Mostafa-cv8jc 7 months ago +1

    Very good video. Thank you so much for making this, you are making a difference.

  • @nhutminh1552
    @nhutminh1552 6 months ago +1

    Thank you admin. Your video is great. It helps me understand. Thank you very much.

  • @toxicbisht4344
    @toxicbisht4344 6 months ago +1

    Amazing explanation
    Thank you for this

  • @kozer1986
    @kozer1986 8 months ago +3

    I'm not sure if it's because I have studied this content 1,000,000 times or not, but this is the first time that I understood the code and felt confident about it. Thanks!

  • @abdullahahsan3859
    @abdullahahsan3859 8 months ago +22

    Keep doing what you are doing. I really appreciate you taking out so much time to spread such knowledge for free. I have been studying transformers for a long time, but never have I understood them so well. The theoretical explanation in the other video combined with this practical implementation: just splendid. I will be going through your other tutorials as well. I know how time-consuming it is to produce such high-level content, and all I can really say is that I am grateful for what you are doing and hope that you continue. Wish you a great day!

    • @umarjamilai
      @umarjamilai  8 months ago +1

      Thank you for your kind words. I wish you a wonderful day and success for your journey in deep learning!

  • @balajip5030
    @balajip5030 8 months ago +2

    Thanks, bro. With your explanation, I was able to build the transformer model for my application. You explained it so well. Please keep doing what you are doing.

  • @keflatspiral4633
    @keflatspiral4633 6 months ago +1

    What to say... just WOW! Thank you so much!!

  • @saziedhassan3976
    @saziedhassan3976 10 months ago +3

    Thank you so much for taking the time to code and explain the transformer model in such detail. You are amazing; please do a series on how transformers can be used for time series anomaly detection and forecasting!

    • @amiralioghli8622
      @amiralioghli8622 9 months ago

      My question and request are the same as yours. If you find any tutorial, please share it with me.

  • @codevacaphe3763
    @codevacaphe3763 1 month ago +1

    Hi, I just happened to see your video. It's really amazing; your channel is so good, with valuable information. I hope you keep this up, because I really love your content.

  • @LeoDaLionEdits
    @LeoDaLionEdits 1 month ago +1

    Thank you so much for these videos

  • @mohamednabil374
    @mohamednabil374 1 year ago +4

    Thanks Umar for this comprehensive tutorial. After watching many videos, I would say this is AWESOME! It would be really nice if you could provide us with more tutorials on Transformers, especially on training them for longer sequences. :)

    • @umarjamilai
      @umarjamilai  1 year ago

      Hi mohamednabil374, stay tuned for my next video on LongNet, a new transformer architecture that can scale up to 1 billion tokens.

  • @divyanshbansal2321
    @divyanshbansal2321 5 months ago +1

    Thank you mate. You are a godsend!

  • @mikehoops
    @mikehoops 8 months ago +2

    Just to repeat what everyone else is saying here - many thanks for an amazing explanation! Looking forward to more of your videos.

  • @dapostop7384
    @dapostop7384 1 month ago +1

    Wow, super useful! Coding really helps me understand the process better than visuals alone.

  • @pawanmoon
    @pawanmoon 9 months ago +1

    Great work!!

  • @user-eu3ok8dc8b
    @user-eu3ok8dc8b 9 months ago +1

    One of the best videos; thanks a lot.

  • @coc2912
    @coc2912 10 months ago +1

    Thanks for your video and code.

  • @jihyunkim4315
    @jihyunkim4315 8 months ago +1

    Perfect video!! Thank you so much. I always wondered about the detailed code and its explanation, and now I understand almost all of it. Thanks :) You are the best!

  • @SyntharaPrime
    @SyntharaPrime 7 months ago +1

    Great job. Amazing. Thanks a lot. I really appreciate you; it is so much effort.

  • @user-qo7vr3ml4c
    @user-qo7vr3ml4c 1 month ago +1

    Thank you very much, this is very useful.

  • @gunnvant
    @gunnvant 10 months ago +1

    This was really good. I understood multihead attention better with the code explanation.

  • @texwiller7577
    @texwiller7577 3 months ago +1

    Doctor... you're great!

  • @omidsa8323
    @omidsa8323 5 months ago +1

    Great Job!

  • @amiralioghli8622
    @amiralioghli8622 9 months ago +4

    Thank you so much for taking the time to code and explain the transformer model in such detail. You are amazing; if possible, please do a series on how transformers can be used for time series anomaly detection and forecasting. It is extremely needed on YouTube!
    Thanks in advance.

  • @sypen1
    @sypen1 7 months ago +1

    This is amazing thank you 🙏

  • @physicswithbilalasmatullah
    @physicswithbilalasmatullah 3 months ago +3

    Hi Umar. I am a first year student at MIT who wants to do AI startups. Your explanation and comments during coding were really helpful. After spending about 10 hours on the video, I walk away with great learnings and great inspiration. Thank you so much, you are an amazing teacher!

    • @umarjamilai
      @umarjamilai  3 months ago

      Best of luck with your studies and thank you for your support!

  • @si0n4ra
    @si0n4ra 11 months ago +1

    Umar, thank you for the amazing example and clear explanation of all your steps and actions.

    • @umarjamilai
      @umarjamilai  11 months ago

      Thank you for watching my video and your kind words! Subscribe for more videos coming soon!

    • @si0n4ra
      @si0n4ra 11 months ago

      @@umarjamilai , mission completed 😎.
      Already subscribed.
      All the best, Umar

  • @JohnSmith-he5xg
    @JohnSmith-he5xg 8 months ago +5

    Loving this video (only 13 minutes in). I really like that you use type hints, comments, descriptive variable names, etc. Way better coding practices than most of the ML code I've looked at.
    At 13:00, for the 2nd argument of the array indexing, you could just use ":" and it would be identical.

    • @tonyt1343
      @tonyt1343 6 months ago +1

      Thank you for this comment! I'm coding along with this video and I wasn't sure if my understanding was correct. I'm glad someone else was thinking the same thing. Just to be clear, I am VERY THANKFUL for this video and am in no way complaining. I just wanted to make sure I understand because I want to fully internalize this information.
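
      For reference, the slicing point above can be checked directly. A standalone sketch (random tensors standing in for the real buffers; as a later comment notes, the dataset pads every sentence to the full seq_len, which is what makes the two spellings identical):

          import torch

          seq_len, d_model = 350, 512
          pe = torch.randn(1, seq_len, d_model)  # stand-in for the positional-encoding buffer
          x = torch.randn(8, seq_len, d_model)   # batches are padded to seq_len in the dataset

          a = pe[:, :x.shape[1], :]  # slicing as written in the video
          b = pe[:, :, :]            # with fixed-length padding, x.shape[1] == seq_len
          assert torch.equal(a, b)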

  • @sypen1
    @sypen1 7 months ago +1

    Mate you are a beast!

  • @user-ul2mw6fu2e
    @user-ul2mw6fu2e 6 months ago +1

    Wow, your explanation is amazing

  • @cicerochen313
    @cicerochen313 8 months ago +2

    Awesome! Highly appreciated. Super awesome! Thank you very much.

  • @shengjiadiao3166
    @shengjiadiao3166 2 days ago +1

    The contents are crazy!!!!

  • @Patrick-wn6uj
    @Patrick-wn6uj 3 months ago +1

    Hi Umar, thank you for all the work you are doing. Please consider making a video like this on vision transformers.

  • @dengbuqi
    @dengbuqi 2 months ago +1

    What a WONDERFUL example of a transformer! I am Chinese and I am doing my PhD program in Korea. My research is also about AI. This video helps me a lot. Thank you!
    BTW, your Chinese is very good!😁😁

  • @linyang9536
    @linyang9536 6 months ago +3

    This is the most detailed video on building a Transformer model from scratch that I have seen, from code implementation to data processing to visualization. The creator really breaks everything down in fine detail. Thank you!

    • @decarteao
      @decarteao 5 months ago

      I didn't understand anything! But I left my like.

    • @astrolillo
      @astrolillo 3 months ago +1

      @@decarteao The guy from China is very funny in the video

  • @oborderies
    @oborderies 7 months ago +2

    Sincere congratulations on this fine and very useful tutorial! Much appreciated 👏🏻

  • @michaelscheinfeild9768
    @michaelscheinfeild9768 10 months ago

    I enjoyed the video! Now I can transform the world!

  • @MrSupron00
    @MrSupron00 9 months ago

    This is excellent! Thank you for putting this together. I do have one point of confusion with how the final multi-head attention concatenation takes place. I believe the concatenation happens on line 110, where V' = (V1, V2, ..., Vh) has shape (seq_len, h*d_k). This is intended to be multiplied by a matrix W0 of shape (h*d_k, d_model) to give something of shape (seq_len, d_model), as required. However, here you implement a linear-layer operation that takes the concatenated V' of shape (seq_len, d_model) and feeds it into a linear layer computing W*V' + b, where the dimensions of W and b are chosen to match the output dimension. This is different from multiplying directly by a predefined trainable matrix W0. Now, I can see how these are nearly the same thing, and in practice it may not matter, but it would be helpful to point out these tricks of the trade so folks like myself don't get bogged down in the subtleties. Thanks
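
    A small sketch of the near-equivalence discussed above (illustrative sizes): nn.Linear stores a trainable weight matrix and bias, so it plays exactly the role of a predefined trainable W0, up to the extra bias term.

        import torch
        import torch.nn as nn

        h, d_k = 8, 64
        d_model = h * d_k
        w_o = nn.Linear(d_model, d_model)  # the linear layer applied after concatenating the heads

        x = torch.randn(2, 10, d_model)    # concatenated heads: (batch, seq_len, h*d_k)

        # nn.Linear computes x @ W.T + b: a trainable matrix multiply plus a bias.
        assert torch.allclose(w_o(x), x @ w_o.weight.T + w_o.bias, atol=1e-6)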

  • @angelinakoval8360
    @angelinakoval8360 7 months ago +1

    Dear Umar, thank you so so much for the video! I don't have much experience in deep learning, but your explanations are so clear and detailed that I understood almost everything 😄. It will be a great help for me at my work. Wish you all the best! ❤

    • @umarjamilai
      @umarjamilai  7 months ago

      Thank you for your kind words, @angelinakoval8360!

  • @FailingProject185
    @FailingProject185 5 months ago

    You are one of the coolest dudes in this area. It'd be helpful if you provided a roadmap to reach your expertise. I'd really love to learn from you, but I can't follow everything yet. A roadmap would help so many of your subscribers.

  • @user-sp5pf5du3m
    @user-sp5pf5du3m 4 months ago +1

    You are a genius

  • @forresthu6204
    @forresthu6204 8 months ago +1

    At 22:39, it describes the essentials of the self-attention computation in a very clear and easy-to-understand way.
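
    For readers who want that computation in code, a minimal scaled dot-product attention sketch (shapes are illustrative):

        import math
        import torch

        batch, h, seq_len, d_k = 2, 8, 10, 64
        q = torch.randn(batch, h, seq_len, d_k)
        k = torch.randn(batch, h, seq_len, d_k)
        v = torch.randn(batch, h, seq_len, d_k)

        # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
        scores = (q @ k.transpose(-2, -1)) / math.sqrt(d_k)  # (batch, h, seq_len, seq_len)
        out = scores.softmax(dim=-1) @ v                     # (batch, h, seq_len, d_k)
        assert out.shape == (batch, h, seq_len, d_k)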

  • @phanindraparashar8930
    @phanindraparashar8930 1 year ago +10

    It is a really amazing video. I tried understanding the code from various other YouTube channels, but I was always getting confused. Thanks a lot :). Can you make a series on BERT & GPT as well, where you build these models and train them on custom data?

    • @umarjamilai
      @umarjamilai  1 year ago +21

      Hi Phanindra! I'll definitely continue making more videos. It takes a lot of time and patience to make just one video, not considering the preparation time to study the model, write the code and test it. Please share the channel and subscribe, that's the biggest motivation to continue providing high-quality content to you all.

    • @rubelahmed5458
      @rubelahmed5458 7 months ago

      A coding example for BERT would be great! @@umarjamilai

  • @ebadsayed487
    @ebadsayed487 18 days ago

    Your video is truly amazing, thanks a lot for this. I want to train this model on a summarization task, so what changes do I need to make?

  • @ngocchienchu109
    @ngocchienchu109 11 months ago +1

    Thank you so much

  • @aspboss1973
    @aspboss1973 10 months ago +1

    It's a really awesome video with clear explanations, and the flow of the code is very easy to understand. One question: how would one implement this transformer architecture for a question-answering model? (Q/A on a very specific topic, let's say the manual of an instrument...)
    Thank you so much for this video!!!

  • @md.shahabulalam9484
    @md.shahabulalam9484 2 months ago +1

    Umar thank you so much.........

  • @tonyt1343
    @tonyt1343 6 months ago +1

    Thanks!

  • @babaka1850
    @babaka1850 1 month ago

    For determining the max length of the tgt sentence, I believe you should point to tokenizer_tgt rather than tokenizer_src: tgt_ids = tokenizer_tgt.encode(item['translation'][config['lang_tgt']]).ids

  • @JohnSmith-he5xg
    @JohnSmith-he5xg 8 months ago +1

    OMG. And you also note matrix shapes in comments! Beautiful. I actually know the shapes without having to trace some variable backwards.

  • @Hdjandbkwk
    @Hdjandbkwk 10 months ago +2

    Just want to say thank you!! This is easily one of my favorite videos on YouTube! I have watched a few videos on transformers, but none explained it as clearly as you. At first I was scared by the length of the video, but you managed to hold my attention for the full 3 hours! Following your instructions, I am now able to train my very first transformer!
    Btw, I am using the tokenizer the way you do, but looking at the tokenizer file it looks like my tokenizer didn't split the sentences into words and is using the whole sentence as a token. Do you have any idea why? I am using a Mac, if that matters.

    • @umarjamilai
      @umarjamilai  10 months ago +1

      Hi! Thanks for your kind words! Make sure your PreTokenizer is the "Whitespace" one and that the Tokenizer is the "WordLevel" tokenizer. As a last resort, you can clone the repository from my GitHub and compare my code with yours. Have a wonderful rest of the day!

    • @Hdjandbkwk
      @Hdjandbkwk 10 months ago

      I have the PreTokenizer set to Whitespace and am using the WordLevel tokenizer and trainer, but it still encodes the sentence as a whole. I did a direct swap to the BPE tokenizer and that correctly encodes the sentences; maybe there is a bug in the WordLevel tokenizer on macOS.
      Another question I have: what determines the max context size for LLMs? Is it the d_model size? @@umarjamilai

  • @user-gj2cl2rr9x
    @user-gj2cl2rr9x 4 months ago

    Really helpful video. I watched it many times. Hope you enjoy your life in China. Good luck in the Year of the Dragon!

    • @umarjamilai
      @umarjamilai  4 months ago +1

      Thank you, boss, for the "targeted poverty alleviation"!

  • @kailazarov107
    @kailazarov107 9 months ago

    Really great video - learned a lot. Your inference notebook uses dataset batching, but how can you run inference on user-typed sentences?

  • @therealvortex100
    @therealvortex100 11 months ago

    Hello, at 51:16, why do we add normalization at the end of the encoder?

    • @umarjamilai
      @umarjamilai  11 months ago +1

      Hi! About the layer normalization, there are different opinions on where to add it in the model. I suggest you read this paper (arxiv.org/abs/2002.04745) which discusses this issue. Have a nice day!
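
      For context, a sketch of the two orderings that paper compares (a stand-in sublayer; real models use a separate LayerNorm instance per sublayer). With the pre-norm ordering used in the video, the output of the last block is never normalized, which is why an extra LayerNorm is applied at the end of the encoder:

          import torch
          import torch.nn as nn

          d_model = 512
          norm = nn.LayerNorm(d_model)
          sublayer = nn.Linear(d_model, d_model)  # stand-in for attention / feed-forward
          x = torch.randn(2, 10, d_model)

          post_norm = norm(x + sublayer(x))  # original "Attention Is All You Need" ordering
          pre_norm = x + sublayer(norm(x))   # pre-norm ordering; output is not normalized
          output = norm(pre_norm)            # hence the final LayerNorm after the stack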

  • @user-gj2cl2rr9x
    @user-gj2cl2rr9x 4 months ago

    At 13:13 (of 2:59:23), when we build the PositionalEncoding function,
    in the line x = x + (self.pe[:,:x.shape[1],:]).requires_grad_(False), the x.shape[1] is effectively unused in this transformer model, because when we build dataset.py we pad all the sentences to the same length, and then we load (batch, seq_len, input_embedding_dim) into the PositionalEncoding function, where x.shape[1] in every batch is seq_len, instead of varying with the original sentence length.

  • @vigenisayan2343
    @vigenisayan2343 4 months ago +1

    It was very useful to watch. Question: what books or learning sources would you suggest for learning PyTorch deeply? Thanks

  • @FireFly969
    @FireFly969 2 months ago

    Thank you Umar Jamil for this wonderful content. To be honest, as a beginner in PyTorch, I find it hard to keep up with each part and what happens in each line of code.
    I wonder what I need to know before starting one of your videos.
    I think I need to read the paper multiple times until I understand it?

  • @wellhellothere1785
    @wellhellothere1785 10 months ago

    How would you suggest applying this model to time series forecasting? What do you think should be changed? So far I believe there is no source language and target language in forecasting; it is only time-based. Also, for the error or loss function, I should use MSE in this case. Is there anything else I might be missing?

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 8 months ago

    The code is really well written. Very easy to follow and nicely organized.

  • @rafa_br34
    @rafa_br34 1 month ago

    Great video! I'm wondering, is there any reason to save the positional encoding vector? I don't see why you would need to save it, since it seems to always have the same value given that the init parameters don't change.
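
    For context on this question: in the video the encoding is registered as a buffer, which is why it ends up in the checkpoint. A buffer is stored in the state_dict and follows the module across devices even though it is not trainable. A standalone illustration (shapes are arbitrary):

        import torch
        import torch.nn as nn

        class PE(nn.Module):
            def __init__(self, seq_len=350, d_model=512):
                super().__init__()
                # The real buffer holds deterministic sin/cos values; zeros here for brevity.
                self.register_buffer('pe', torch.zeros(1, seq_len, d_model))

        m = PE()
        print('pe' in m.state_dict())  # True: the buffer is checkpointed with the model
        print(list(m.parameters()))    # []: no trainable parameter was added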

  • @zhuxindedongchang4229
    @zhuxindedongchang4229 3 months ago

    Hello Umar, really impressive work on the Transformer. I have followed your steps in this experiment. One small thing I am not sure about: when you compute the loss you use the nn.CrossEntropyLoss() method, and this method already applies the softmax itself. As its documentation says: "The input is expected to contain the unnormalized logits for each class (which do not need to be positive or sum to 1, in general)". But the project method of the built Transformer model applies a softmax. I wonder if we should output only the logits, without this softmax, to fit the nn.CrossEntropyLoss() method? Thank you anyway.
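
    The point about nn.CrossEntropyLoss can be checked directly: it applies log-softmax internally, so it should be fed raw logits, and feeding it probabilities that have already been through a softmax effectively applies softmax twice. A standalone sketch:

        import torch
        import torch.nn as nn

        logits = torch.randn(4, 100)           # (batch*seq_len, vocab_size) raw outputs
        target = torch.randint(0, 100, (4,))

        loss = nn.CrossEntropyLoss()(logits, target)
        # The same value computed by hand: log-softmax followed by NLL.
        manual = nn.NLLLoss()(torch.log_softmax(logits, dim=-1), target)
        assert torch.allclose(loss, manual)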

  • @albert4392
    @albert4392 9 months ago +1

    This is an excellent video! Your explanation is so clear, and the live coding helps understanding!
    Can you give us tips for debugging such a huge model? It is really hard to make sure the model works well.
    My tip for debugging is to print out the shape of the tensor at each step, but this only ensures the shapes are correct; there may be logical errors I miss. Thank you!

    • @umarjamilai
      @umarjamilai  9 months ago +1

      Hi! I'd love to give a golden rule for debugging models, but unfortunately, it depends highly on the architecture/loss/data itself.
      One thing that you can do is, before training the model on a big dataset, it is recommended to train it on a very small dataset to make sure everything is working and the model should overfit on the small dataset. For example, if instead of training on many books, you train a LLM on a single book, hopefully it should be able to write sentences from that book, given a prompt.
      The second most important thing is to validate the model as the training is proceeding to verify that the quality is improving over time.
      Last but not least, use metrics to decide if the model is going in the right direction and make experiments on hyper parameters to verify assumptions, do not just make assumptions without validating them. When you have a model with billions of parameters, it is difficult to predict patterns, so every assumption must be verified experimentally.
      Have a nice day!
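
      As a concrete version of the first tip, a tiny overfitting sanity check (the model and data are placeholders; substitute the transformer and a handful of real batches):

          import torch
          import torch.nn as nn

          model = nn.Linear(16, 4)  # placeholder; use your transformer here
          x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
          opt = torch.optim.Adam(model.parameters(), lr=1e-2)
          loss_fn = nn.CrossEntropyLoss()

          for _ in range(200):
              opt.zero_grad()
              loss = loss_fn(model(x), y)
              loss.backward()
              opt.step()

          # On a dataset this small the loss should approach zero;
          # if it does not, something in the pipeline is broken.
          print(loss.item())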

  • @reannwu3283
    @reannwu3283 9 months ago +1

    This is a work of art.

  • @ageofkz
    @ageofkz 4 months ago

    At 29:14, the part on multi-head attention: we multiply each of Q, K, V by Wq, Wk, Wv, then split the results into h heads, take the dot products, and concat them again. But should we not split first, then apply Wq_h, where Wq_h is the weight matrix for the h-th query head, and the same for K and V? Because it seems like we just project, split, apply attention, then concat?
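
    On the question above: projecting with one big Wq and then splitting is mathematically the same as applying a separate per-head projection, because each head only reads its own block of the output columns. A quick sketch (illustrative sizes):

        import torch
        import torch.nn as nn

        h, d_k = 4, 8
        d_model = h * d_k
        x = torch.randn(2, 5, d_model)
        w_q = nn.Linear(d_model, d_model, bias=False)

        # The video's approach: one big projection, then split into heads.
        q = w_q(x).view(2, 5, h, d_k).transpose(1, 2)  # (batch, h, seq_len, d_k)

        # Per-head view: head i's projection is a (d_k, d_model) slice of the same weight.
        for i in range(h):
            w_q_h = w_q.weight[i * d_k:(i + 1) * d_k, :]
            assert torch.allclose(q[:, i], x @ w_q_h.T, atol=1e-6)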

  • @mohsinansari3584
    @mohsinansari3584 1 year ago +1

    Just finished watching. Thanks so much for the detailed video. I plan to spend this weekend coding this model. How long did it take to train on your hardware?

    • @umarjamilai
      @umarjamilai  1 year ago +1

      Hi Mohsin! Good job! It took around 3 hours to train 30 epochs on my computer. You can train even for 20 epochs to see good results.
      Have a wonderful day!

    • @NaofumiShinomiya
      @NaofumiShinomiya 1 year ago

      @@umarjamilai What is your hardware? I just started studying deep learning a few days ago and I didn't know transformers could take this long to train

    • @umarjamilai
      @umarjamilai  1 year ago

      @@NaofumiShinomiya Training time depends on the architecture of the network, on your hardware and the amount of data you use, plus other factors like learning rate, learning scheduler, optimizer, etc. So many conditions to factor in.

  • @kindahall666
    @kindahall666 10 days ago

    Thank you for such a great video. However, it seems that the softmax layer after the decoder is not included in your code. I tried implementing it myself, but after adding the final softmax, the loss function becomes extremely difficult to converge and decreases very slowly. How can this be resolved?

  • @kareemsaid8863
    @kareemsaid8863 2 months ago

    I tried to modify the code a bit for a code-generation task and I am stuck at a loss of 1.2. What do you think is the problem? It is literally the same code; I just changed a bit in the get_ds and get_sequances functions and treated the src tokenizer and tgt tokenizer as the same tokenizer. What is wrong with my changes to your implementation?

  • @rafabaranowski513
    @rafabaranowski513 4 months ago

    At 1:42:03 you are using the SOS special token from the source-language tokenizer in a sentence in the target language. The tokenizers are trained on different languages, so is it correct to use special tokens across them? Won't the SOS token from the source-language tokenizer have a different idx compared to the SOS from the target-language tokenizer?

  • @AyushRaj-nt3ot
    @AyushRaj-nt3ot 1 month ago

    Sir, your explanation is just beyond awesome!!! Thank you so much for creating such content. Sir, I didn't get the residual-connections part. As I am from India, I was working on Indic languages, so I had to write more code, but that's okay. I just wish you could help me understand the beam search code, the one you also gave in the GitHub repo. Also, it would be great if you could give the code for evaluating the BLEU score. I'll be really grateful to you.
    And again, thank you so much for such comprehensive content. We'd love to see more of your videos, especially on generative AI!
    P.S.: I didn't understand how you wrote it. What I've understood is that we have to take the input of the previous layer, add it to the output of the same layer, and then apply layer norm to that. Basically add and then LayerNorm. Please help me correct myself!

  • @sudo_codex
    @sudo_codex 11 months ago

    Thanks Umar for the amazing video, but I'm still confused about how we can build and apply transformers from scratch for multi-label classification.

  • @stipepavic843
    @stipepavic843 4 months ago

    Respect!!!