LoRA: Low-Rank Adaptation of Large Language Models - Explained visually + PyTorch code from scratch

  • Published: 18 Jan 2025

Comments • 90

  • @umarjamilai
    @umarjamilai  1 year ago +21

    As usual the full code and slides are available on my GitHub: github.com/hkproj/pytorch-lora

  • @aleksandarcvetkovic7045
    @aleksandarcvetkovic7045 7 months ago +9

    I looked at many blogs and explanations but none of them got to the practical usage of LoRA and showed exactly how it is used in practice. This is exactly what I was looking for.

  • @Anonymous-os2vj
    @Anonymous-os2vj 15 days ago +1

    These videos are a life-saver for my upcoming interview. So in-depth, and they show the code as well. Easy to follow along. Way better than my university. Thank you so much for the uploads.

  • @mahmoudtarek6859
    @mahmoudtarek6859 11 months ago +14

    Perfect.
    Genius.
    Simple.
    To the point.
    Theoretical.
    Practical.

  • @lakshman587
    @lakshman587 2 months ago +1

    I have seen so many videos on LoRA, none of them contained this kind of explanation.
    Thanks for the video!!!

  • @AnnManMS
    @AnnManMS 1 year ago +4

    I'm genuinely impressed by the content and presentation you've crafted for the ML/AI community. The way you've structured the presentation is both user-friendly and cohesive, allowing for a gradual and understandable flow of information.

  • @davidde7620
    @davidde7620 8 months ago +4

    One of the best explanations out there. Also the hands-on code piece was just awesome!

  • @IvanFioravanti
    @IvanFioravanti 2 months ago +1

    Clear, simple and concise. You rock Umar!

  • @mosca204
    @mosca204 4 months ago +1

    I have to say, one of the best YouTube channels out there. And thanks for sharing the code!

  • @mamotivated
    @mamotivated 1 year ago +4

    Rock solid content once again. From scratch implementations are soo beneficial.

  • @GrifinsBrother
    @GrifinsBrother 3 months ago +1

    As always, one of the best explainers on YouTube

  • @lordapprin
    @lordapprin 11 months ago +2

    Thank you so much for your explanations, they are helping me out tremendously during my master thesis work!

  • @bosepukur
    @bosepukur 15 days ago +1

    Exceptional video, best 1 hr spent on a weekend

  • @AiEdgar
    @AiEdgar 1 year ago +6

    This channel is the best, 😊❤

  • @wiseconcepts774
    @wiseconcepts774 1 month ago +1

    This is very nicely explained, thanks Umar

  • @TheRohit901
    @TheRohit901 11 months ago

    Another awesome video, you're a gem. Thank you for your work, do keep making these kinds of videos on the latest research papers.

  • @hussainshaik4390
    @hussainshaik4390 1 year ago +2

    Simple use case and clear explanation, thanks for this. Please do more of these implementing-from-scratch videos

  • @benji6296
    @benji6296 7 months ago +1

    Umar, thank you for the content; it really helps to grasp what the concepts are.

  • @sauravrao234
    @sauravrao234 9 months ago +1

    So you assume there is no activation function, neither in the layers of the frozen W matrix nor in the lower-rank AB representation?

  • @Yo-rw7mq
    @Yo-rw7mq 1 year ago +1

    Such a great YouTube channel. Keep up the great work!!!

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 10 months ago +1

    this is the best video on LoRA.

  • @LudaNeva
    @LudaNeva 6 months ago +1

    Very good and clear explanation, thank you!

  • @Itay12353
    @Itay12353 9 months ago +1

    Your videos are pure gold

  • @luis96xd
    @luis96xd 1 year ago +1

    Amazing video, everything was well explained. It's just what I was looking for, explanations and coding. Thank you so much!

  • @bayuwicaksono7970
    @bayuwicaksono7970 4 months ago +1

    Best explanation of LoRA, thank you...

  • @alexandredamiao1365
    @alexandredamiao1365 11 months ago +1

    This is such quality content! Thank you!

  • @alirahmanian5127
    @alirahmanian5127 2 months ago +1

    Great as usual!

  • @maddai1764
    @maddai1764 8 months ago +1

    this was flawless. FLAWLESS!

  • @benhall4274
    @benhall4274 1 year ago

    Thanks!

  • @emir5146
    @emir5146 3 months ago +1

    22:57 Why can the other digits' accuracy decrease? I don't understand this part.

  • @Akash5130
    @Akash5130 11 months ago +1

    Amazing explanation! Thank you.

  • @baba.ai.2056
    @baba.ai.2056 5 months ago +1

    Loved your explanation

  • @useless_deno
    @useless_deno 2 months ago +1

    Great Explanation!

  • @Jayveersinh_Raj
    @Jayveersinh_Raj 1 year ago +1

    Great video, really impressed by the video and channel, deserves a like.

  • @arch-verse
    @arch-verse 7 months ago

    Thanks!

  • @k1tajfar714
    @k1tajfar714 4 months ago +1

    LOVE YOUR VIDEOS LOVE YOUR KITTY'S VOICE! I MISS MY KITTY YOUR KITTEN EXACTLY MEOWS LIKE MINE WHEN I USED TO RECORD!!!! Thanks🖤👑🖤.

  • @NamanJain77
    @NamanJain77 9 months ago +1

    This was insanely clear!

  • @Snyder0317
    @Snyder0317 1 year ago +1

    Very good explanation. Thank you!

  • @Akuma7499
    @Akuma7499 7 months ago +1

    How do you save and load the LoRA weights? Can anyone explain?

  • @pravingaikwad1337
    @pravingaikwad1337 9 months ago

    Is it like the base model is stored in 4-bit, and as the data (the X vector) passes through a layer, that layer is first dequantized and then the matrix multiplication X*W is done? Is it the same for LoRA as well? And after we get Y (by adding the outputs of the LoRA and base layers), are the W and LoRA layers quantized back to 4-bit, and Y passed on to the next layer?
    Also, if the LoRA is at the base of the model, does that mean that to update the parameters of this LoRA we need to calculate the gradients of the loss w.r.t. all the W and LoRA matrices above it?

  • @thecutestcat897
    @thecutestcat897 9 months ago +1

    perfect, this really helps me a lot

  • @shriharinair1999
    @shriharinair1999 6 months ago

    So while inferencing, we'll only use A and B? But aren't the A and B matrices trained to handle only the digit 9?

  • @SuperRia33
    @SuperRia33 8 months ago

    I was going insane until I came across this amazing LoRA video, an oasis for me. Can you also explain QLoRA?

  • @marearts.
    @marearts. 8 months ago

    Thank you for the great video.
    Now I am wondering how an LLM or a diffusion model is trained with LoRA.
    These models have many layers: attention, dense, fully connected... how is LoRA adapted to them?
    In the digit example, the number '9' gets better results after the LoRA adaptation,
    but the other numbers' accuracy becomes much worse.
    Is this natural for LoRA adaptation? Or can we make all numbers accurate with LoRA (which is trained for 9)?
    Thank you very much!

  • @kunalnikam9112
    @kunalnikam9112 9 months ago

    In LoRA, W_updated = W0 + BA, where B and A are decomposed matrices with low rank. So I wanted to ask: what do the parameters of B and A represent? Are they both parameters of the pre-trained model, or both parameters of the target dataset? Or does one (B) represent the pre-trained model's parameters and the other (A) the target dataset's? Please answer as soon as possible.
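
For reference, in the LoRA paper W0 is the frozen pre-trained weight, while B and A are new parameters trained on the fine-tuning data. A minimal sketch of the shapes involved (the dimensions here are illustrative, borrowed from the d=1000, k=5000 example later in this thread):

```python
import torch

# Illustrative dimensions; W0 is the frozen pre-trained weight,
# while A and B are NEW trainable parameters learned on the target dataset.
d, k, r = 1000, 5000, 1
W0 = torch.randn(d, k)
W0.requires_grad_(False)          # frozen during fine-tuning

# Per the LoRA paper: A starts Gaussian, B starts at zero,
# so B @ A is the zero matrix and training begins exactly from W0.
A = torch.randn(r, k) * 0.01      # trainable
B = torch.zeros(d, r)             # trainable

W_updated = W0 + B @ A            # same shape as W0
assert W_updated.shape == W0.shape
assert torch.allclose(W_updated, W0)   # identical at initialization
```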

  • @lukeskywalker7029
    @lukeskywalker7029 10 months ago

    To push LoRA to its efficient limit, does it make sense to find the rank of the original weight matrices by finding statistically significant singular values with the Marchenko-Pastur law, and use that to choose the rank of the LoRA matrices?
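
The idea in this comment can be sketched with plain PyTorch: compute a matrix's singular values and count how many exceed the Marchenko-Pastur noise edge, roughly sigma * (sqrt(d) + sqrt(k)) for a d x k matrix with i.i.d. noise of standard deviation sigma. This is a simplified illustration of the commenter's suggestion, not a statement about how the rank is chosen in the video:

```python
import torch

def estimate_rank(W: torch.Tensor, sigma: float = 1.0) -> int:
    """Count singular values above the Marchenko-Pastur noise edge.

    For a d x k matrix with i.i.d. noise of std `sigma`, the singular
    values of pure noise concentrate below roughly sigma*(sqrt(d)+sqrt(k));
    anything above that edge is treated as statistically significant.
    """
    d, k = W.shape
    edge = sigma * (d ** 0.5 + k ** 0.5)
    s = torch.linalg.svdvals(W)
    return int((s > edge).sum())

# A rank-3 signal buried in unit-variance noise:
torch.manual_seed(0)
d, k, true_rank = 200, 300, 3
signal = torch.randn(d, true_rank) @ torch.randn(true_rank, k) * 2.0
W = signal + torch.randn(d, k)
print(estimate_rank(W))   # close to the true rank of 3
```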

  • @flakky626
    @flakky626 9 months ago

    Hello everyone, I have been in the deep learning space for some time now. Sometimes when I come across new code like the one in this video, I just can't keep up; many things in the code feel new and it gets overwhelming to understand.
    How do I bridge this gap effectively?

  • @parasetamol6261
    @parasetamol6261 1 year ago +2

    That's great. Thank you. You are a god!!

  • @Tiger-Tippu
    @Tiger-Tippu 1 year ago

    Hi Umar, are instruction fine-tuning and full fine-tuning the same?

  • @VisheshKumar-z3z
    @VisheshKumar-z3z 10 months ago

    Great presentation. I just want to know: are there open-source libraries for LLMs so that I can fine-tune them?

  • @123playwright
    @123playwright 3 months ago

    Was hoping you would combine LoRA with your Stable Diffusion video.

  • @subhamkundu5043
    @subhamkundu5043 1 year ago

    For fine-tuning, I have a question: suppose we store the pre-trained matrix on the CPU and load the AB matrix onto the GPU for fine-tuning. Will this work?

    • @umarjamilai
      @umarjamilai  1 year ago

      Hi! Putting the AB matrix on the GPU while keeping the rest of the model on the CPU still has one problem: the loss. I have never tried it, but I believe PyTorch would complain when it tries to compute the loss (which involves both the frozen weights and the AB matrix). You can try using my notebook (freely available on my GitHub) and comment with the result of the experiment :D

    • @subhamkundu5043
      @subhamkundu5043 1 year ago

      Thanks for the reply. So in LoRA we also need to store the pre-trained weights on the GPU.
      Also, can you make a detailed video on Flash Attention and the Retentive Transformer?

  • @马国鑫
    @马国鑫 5 months ago +1

    Such an amazing tutorial

  • @agenticmark
    @agenticmark 7 months ago

    Please do a video where you show the process from scratch so we can do this with voice models ✊🏼

  • @davidromero1373
    @davidromero1373 1 year ago

    Hi, a question: can we use LoRA just to reduce the size of a model and run inference, or do we always have to do the fine-tuning?

    • @umarjamilai
      @umarjamilai  1 year ago +1

      As of now, LoRA is used for fine-tuning. For reducing the "size" of the model, there are quantization techniques. I'll make a video about them in the future.
      Have a nice day!

  • @Im.nobody0
    @Im.nobody0 1 year ago

    Thanks for your great work! May I ask a question? When LoRA is enabled, the accuracy is 84.3%, which is much worse than the original accuracy. So is it really beneficial to enable LoRA?

    • @umarjamilai
      @umarjamilai  1 year ago +1

      Of course the accuracy may degrade depending on the rank of the LoRA matrices, because the model has fewer parameters and so fewer degrees of freedom. But it's not a rule: an overparameterized model may not suffer at all from degradation when using LoRA.

  • @JohnSmith-he5xg
    @JohnSmith-he5xg 1 year ago +1

    Great job!

  • @MachineScribbler
    @MachineScribbler 1 year ago +1

    Amazing Explanation.

  • @tipiripro11
    @tipiripro11 1 year ago

    Thank you for the very cool video! Can you suggest any ways to combine the fine-tuned and the pre-trained models so they can perform well on all digits?

  • @aag7651
    @aag7651 9 months ago

    Why are two matrices, A & B, needed instead of just one?

    • @umarjamilai
      @umarjamilai  9 months ago

      Because the multiplication of the two matrices produces the original one.

  • @TanmayDikshit-xm3ve
    @TanmayDikshit-xm3ve 1 month ago +1

    Genius!

  • @aiden3085
    @aiden3085 1 year ago

    Great video! Would you consider doing a tutorial on fine-tuning the LLaMA 2 7B model using LoRA?

  • @tljstewart
    @tljstewart 1 year ago +1

    🎉Top-tier content, thank you! I was looking at the net results for the other digits in your demo and realized they were worse off. Then I thought about it a bit more deeply: it looks like you trained a single B and A matrix and added it to all layers, where I think an improvement would be a separate BA matrix for each layer. Curious about your thoughts on this?

    • @umarjamilai
      @umarjamilai  1 year ago +1

      Hi @tljstewart
      Actually, in my code we train 3 different pairs of A and B, one for each of the layers. That's why I call the "register_parametrization" method 3 times, once for each layer. Each A and B matrix has different dimensions, because the dimensions of the layers are different.
      Usually we can't know which layers we should fine-tune, unless we have a clue about what each layer may be doing (which can be said only for very specific architectures, like the Transformer).
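
The pattern described above, one low-rank parametrization per layer, sized to that layer's weight, can be sketched with PyTorch's torch.nn.utils.parametrize. The network shape, rank, and LoRAParametrization class below are illustrative, not necessarily the exact ones from the video:

```python
import torch
import torch.nn as nn
import torch.nn.utils.parametrize as parametrize

class LoRAParametrization(nn.Module):
    """Adds a trainable low-rank update B @ A on top of a frozen weight."""
    def __init__(self, rows: int, cols: int, rank: int = 1):
        super().__init__()
        self.A = nn.Parameter(torch.randn(rank, cols) * 0.01)  # trainable
        self.B = nn.Parameter(torch.zeros(rows, rank))         # trainable
    def forward(self, W):
        # W is the original (frozen) weight; the output replaces it on the fly.
        return W + self.B @ self.A

net = nn.Sequential(nn.Linear(784, 100), nn.ReLU(), nn.Linear(100, 10))

# One parametrization per linear layer, sized to that layer's weight:
for layer in (net[0], net[2]):
    layer.weight.requires_grad_(False)   # freeze the pre-trained weight
    rows, cols = layer.weight.shape
    parametrize.register_parametrization(
        layer, "weight", LoRAParametrization(rows, cols, rank=1)
    )

x = torch.randn(2, 784)
assert net(x).shape == (2, 10)           # forward pass works unchanged
```

Because B starts at zero, registering the parametrization leaves the network's outputs unchanged until the A/B pairs are trained.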

    • @tljstewart
      @tljstewart 1 year ago

      Ah, thanks @umarjamilai. I reviewed the code again, and it appears you do freeze the original model and train a LoRA matrix for each layer. That leads me to a couple of questions: how do you save the LoRA weights, and how would you load them back in, for sharing, say, on Hugging Face? Just as an example, how might you load Stable Diffusion and then load a LoRA programmatically?

    • @PengfeiXue
      @PengfeiXue 9 months ago

      I think you should save the LoRA parameters beside the original model, and during the inference stage you can enable, or even add, different LoRAs to get the fine-tuned result @@tljstewart
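
One plain-PyTorch way to do what this reply suggests, assuming the LoRA matrices are registered as parametrizations with parameters named A and B (the LoRAParametrization class below is an illustrative stand-in, not the video's exact code), is to filter the state dict by key name and reload with strict=False:

```python
import io
import torch
import torch.nn as nn
import torch.nn.utils.parametrize as parametrize

class LoRAParametrization(nn.Module):
    """Illustrative LoRA parametrization: trainable B @ A over a frozen weight."""
    def __init__(self, rows: int, cols: int, rank: int = 1):
        super().__init__()
        self.A = nn.Parameter(torch.randn(rank, cols) * 0.01)
        self.B = nn.Parameter(torch.zeros(rows, rank))
    def forward(self, W):
        return W + self.B @ self.A

layer = nn.Linear(20, 10)
parametrize.register_parametrization(layer, "weight", LoRAParametrization(10, 20))

# Save ONLY the LoRA matrices: their state-dict keys end in ".A"/".B"
# (e.g. "parametrizations.weight.0.A"); the frozen weight is ".original".
lora_state = {k: v for k, v in layer.state_dict().items()
              if k.endswith(".A") or k.endswith(".B")}
buf = io.BytesIO()                 # stands in for a file on disk
torch.save(lora_state, buf)

# Load: rebuild the model, register the same parametrizations, then load
# with strict=False so the missing (frozen, pre-trained) keys are left alone.
layer2 = nn.Linear(20, 10)
parametrize.register_parametrization(layer2, "weight", LoRAParametrization(10, 20))
buf.seek(0)
layer2.load_state_dict(torch.load(buf), strict=False)
assert torch.equal(layer2.parametrizations.weight[0].A,
                   layer.parametrizations.weight[0].A)
```

Libraries like Hugging Face PEFT package this same save-the-adapter-separately workflow behind a higher-level API.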

    • @PengfeiXue
      @PengfeiXue 9 months ago

      @umarjamilai ^^

  • @thisurawz
    @thisurawz 1 year ago

    Can you do a video on fine-tuning a multimodal LLM (Video-LLaMA, LLaVA, or CLIP) with a custom multimodal dataset containing images and texts, for relation extraction or another specific task? Could you use an open-source multimodal LLM and open multimodal datasets, such as Video-LLaMA's, so anyone can further their experiments with the help of your tutorial? Can you also talk about how we can boost the performance of the fine-tuned model using prompt tuning in the same video?

  • @wiktorm9858
    @wiktorm9858 1 year ago

    Cool video, mainly due to the topic. Sometimes I had to rewind, because I could not get something, mainly why the reduction rank was 2. Is this just a chosen parameter?

    • @umarjamilai
      @umarjamilai  1 year ago +1

      Hi! The rank of the matrix is a hyper-parameter, and in my PyTorch implementation I had chosen a rank of 1. The lower the rank, the smaller the matrices, but also the higher the loss of "precision", because the matrix may have an intrinsic dimension higher than the chosen hyper-parameter. If it doesn't make sense to you, I suggest you read up on what the rank of a matrix is and how dimensionality reduction works in PCA. That should give you the math background.
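
The trade-off described here, that a rank below the matrix's intrinsic dimension loses precision, can be illustrated with a truncated SVD, which gives the best rank-r approximation of a matrix (the dimensions below are made up for illustration):

```python
import torch

torch.manual_seed(0)
# A matrix whose intrinsic rank is 5: ranks below 5 must lose precision.
W = torch.randn(100, 5) @ torch.randn(5, 200)

U, S, Vh = torch.linalg.svd(W, full_matrices=False)

def best_rank_r_error(r: int) -> float:
    """Frobenius-norm error of the best rank-r approximation of W."""
    W_r = U[:, :r] @ torch.diag(S[:r]) @ Vh[:r, :]
    return torch.linalg.matrix_norm(W - W_r).item()

errors = [best_rank_r_error(r) for r in (1, 2, 5)]
assert errors[0] > errors[1] > errors[2]   # more rank, less error
assert errors[2] < 1e-2                    # rank 5 captures W (near) exactly
```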

  • @ramendrachaudhary9784
    @ramendrachaudhary9784 9 months ago

    Good explanation. 👍👍

  • @anilaxsus6376
    @anilaxsus6376 1 year ago

    Why don't they LoRA the entire model's weights, both the original and the changes?

    • @umarjamilai
      @umarjamilai  1 year ago

      How would you LoRA the original weights?

    • @anilaxsus6376
      @anilaxsus6376 1 year ago

      @@umarjamilai OK, I just thought about it and, uhhh, yeah, I don't see how. I had a misconception in my head: I forgot that the input data goes through the weights one layer at a time, hence the output of layer 1 is the input of layer 2, plus there are activation functions that might make the process non-linear. My bad, have a nice day.

  • @EkShunya
    @EkShunya 1 year ago +1

    thank you :)

  • @Patrick-wn6uj
    @Patrick-wn6uj 9 months ago

    15:30 🤣🤣The comment is hilarious, rich boy net

  • @weiyaoli6977
    @weiyaoli6977 1 year ago

    Why B + A and not B * A?

    • @umarjamilai
      @umarjamilai  1 year ago

      Where did you read B + A? 🤔

    • @weiyaoli6977
      @weiyaoli6977 1 year ago

      d=1000, k=5000, p=5000 (original). LoRA: 1000*1 + 1*5000 = 6000. So from the formula it is A*B; why A+B here? Thanks @@umarjamilai

    • @umarjamilai
      @umarjamilai  1 year ago

      @@weiyaoli6977 That's the number of parameters due to LoRA, which is the size of the two matrices. When you save the model, you save the two matrices separately, so you only need to consider the size of each separately and sum them together. When you use LoRA, on the other hand, you need to multiply the two matrices.
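
A quick sketch of the distinction, using the numbers from the comment above: the sizes of A and B add when the matrices are stored, while the matrices themselves are multiplied when they are used:

```python
import torch

d, k, r = 1000, 5000, 1   # figures from the comment above

W = torch.randn(d, k)     # original weight: 5,000,000 parameters
B = torch.randn(d, r)
A = torch.randn(r, k)

# Stored separately, so their parameter counts ADD: d*r + r*k
assert B.numel() + A.numel() == 6_000

# Used together, they MULTIPLY into a d x k update with W's shape
assert (B @ A).shape == W.shape
```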