LoRA explained (and a bit about precision and quantization)

  • Published: 4 Jun 2024
  • ▬▬ Papers / Resources ▬▬▬
    LoRA Paper: arxiv.org/abs/2106.09685
    QLoRA Paper: arxiv.org/abs/2305.14314
    Huggingface 8bit intro: huggingface.co/blog/hf-bitsan...
    PEFT / LoRA Tutorial: www.philschmid.de/fine-tune-f...
    Adapter Layers: arxiv.org/pdf/1902.00751.pdf
    Prefix Tuning: arxiv.org/abs/2101.00190
    ▬▬ Support me if you like 🌟
    ►Link to this channel: bit.ly/3zEqL1W
    ►Support me on Patreon: bit.ly/2Wed242
    ►Buy me a coffee on Ko-Fi: bit.ly/3kJYEdl
    ►E-Mail: deepfindr@gmail.com
    ▬▬ Used Music ▬▬▬▬▬▬▬▬▬▬▬
    Music from #Uppbeat (free for Creators!):
    uppbeat.io/t/danger-lion-x/fl...
    License code: M4FRIPCTVNOO4S8F
    ▬▬ Used Icons ▬▬▬▬▬▬▬▬▬▬
    All Icons are from flaticon: www.flaticon.com/authors/freepik
    ▬▬ Timestamps ▬▬▬▬▬▬▬▬▬▬▬
    00:00 Introduction
    00:20 Model scaling vs. fine-tuning
    00:58 Precision & Quantization
    01:30 Representation of floating point numbers
    02:15 Model size
    02:57 16 bit networks
    03:15 Quantization
    04:20 FLOPS
    05:23 Parameter-efficient fine-tuning
    07:18 LoRA
    08:10 Intrinsic Dimension
    09:20 Rank decomposition
    11:24 LoRA forward pass
    11:49 Scaling factor alpha
    13:40 Optimal rank
    14:16 Benefits of LoRA
    15:20 Implementation
    16:25 QLoRA
    ▬▬ My equipment 💻
    - Microphone: amzn.to/3DVqB8H
    - Microphone mount: amzn.to/3BWUcOJ
    - Monitors: amzn.to/3G2Jjgr
    - Monitor mount: amzn.to/3AWGIAY
    - Height-adjustable table: amzn.to/3aUysXC
    - Ergonomic chair: amzn.to/3phQg7r
    - PC case: amzn.to/3jdlI2Y
    - GPU: amzn.to/3AWyzwy
    - Keyboard: amzn.to/2XskWHP
    - Bluelight filter glasses: amzn.to/3pj0fK2

Comments • 39

  • @khangvutien2538
    @khangvutien2538 4 months ago +27

    This is one of the easiest to follow explanations of LoRA that I’ve seen. Thanks a lot.

    • @DeepFindr
      @DeepFindr  4 months ago +1

      Glad you found it useful!

  • @InturnetHaetMachine
    @InturnetHaetMachine 9 months ago +13

    Another great video. I appreciate that you don't skip giving context and that you lay a good foundation. It makes understanding a lot easier. Thanks!

  • @teleprint-me
    @teleprint-me 8 months ago +3

    I've been scouring for a video like this. This is the best explanation so far!

  • @chrisschrumm6467
    @chrisschrumm6467 9 months ago +2

    Nice job with summarizing transfer learning and LoRA!

  • @aurkom
    @aurkom 9 months ago +3

    Awesome! Waiting for a video on implementing LoRA from scratch in PyTorch.

  • @k_1_1_2_3_5
    @k_1_1_2_3_5 1 month ago

    What an excellent video!! Congrats!!

  • @mohamedezzat5048
    @mohamedezzat5048 1 month ago

    Thanks a lot! Amazing explanation, very clear and straightforward.

  • @omgwenxx
    @omgwenxx 2 months ago

    Amazing video, I feel like I finally understood every aspect of LoRA, thank you!

    • @DeepFindr
      @DeepFindr  2 months ago

      Glad it was helpful :)

  • @marjanshahi979
    @marjanshahi979 3 months ago

    Amazing explanation! Thanks a lot!

  • @aron2922
    @aron2922 6 months ago

    Another great video, keep it up!

  • @beyond_infinity16
    @beyond_infinity16 24 days ago

    Explained quite well!

  • @binfos7434
    @binfos7434 5 months ago

    Really Helpful!

  • @user-in2dd6by9q
    @user-in2dd6by9q 7 days ago

    great video to explain lora! thanks

  • @dennislinnert5476
    @dennislinnert5476 9 months ago

    Amazing!

  • @moonly3781
    @moonly3781 6 months ago

    I'm interested in fine-tuning a Large Language Model to specialize in specific knowledge, for example about fish species, such as which fish can be found in certain seas or which are prohibited from fishing. Could you guide me on how to prepare a dataset for this purpose? Should I structure it as simple input-output pairs (e.g., 'What fish are in the Mediterranean Sea?' -> 'XX fish can be found in the Mediterranean Sea'), or is it better to create a more complex dataset with multiple columns containing various details about each fish species? Any advice on dataset preparation for fine-tuning an LLM in this context would be greatly appreciated.
    Thanks in advance!
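
One common option for supervised fine-tuning is simple instruction–response pairs stored as JSONL (one JSON object per line). A minimal sketch; the file name and the field names ("instruction", "response") are illustrative assumptions, not from the video:

```python
# Minimal sketch of an instruction-tuning dataset in JSONL format.
# File name and field names ("instruction", "response") are illustrative
# assumptions, not taken from the video.
import json

examples = [
    {
        "instruction": "What fish are in the Mediterranean Sea?",
        "response": "XX fish can be found in the Mediterranean Sea.",
    },
    {
        "instruction": "Is fishing XX prohibited?",
        "response": "Fishing XX is prohibited in region YY.",
    },
]

with open("fish_qa.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```

Whether flat pairs or a richer multi-column dataset is better depends on the training setup; for supervised fine-tuning with PEFT/LoRA, each example is usually serialized into a single prompt/completion text anyway.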

  • @henrywang4010
    @henrywang4010 5 months ago

    Great video! Liked and subscribed

  • @unclecode
    @unclecode 7 months ago

    Yes, indeed it was helpful! Do you have a video on quantization?

  • @nomad_3d
    @nomad_3d 9 months ago +1

    Good summary! Next time it would be great if you added headings to the tables shown in the video; sometimes it is hard to follow. For example, what is computational efficiency? Is it inference time, or the increase in inference time relative to the increase in performance (e.g., accuracy, recall)? Thanks.

  • @flecart
    @flecart 20 days ago

    good job!

  • @Canbay12
    @Canbay12 1 month ago

    Thank you very much for this amazing video. However, although it was probably only for demo purposes, the modified forward pass you've shown after LoRA fine-tuning might be misleading, since it assumes the layer is entirely linear. So, does the addition of the LoRA fine-tuned weights to the base model weights happen directly within the model weights file (like .safetensors), or can it be done at a higher level in PyTorch or TensorFlow?
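
The merge is typically done at the framework level rather than by editing the weights file: the adapter product (alpha/r)·B·A is added into the frozen weight tensor in PyTorch, and only the merged result is then saved (e.g., to .safetensors). A minimal sketch, assuming a plain nn.Linear layer and illustrative shapes (not the video's exact code):

```python
# Minimal sketch of an (unmerged) LoRA forward pass and of merging the
# adapter back into the base weight. Shapes and names are illustrative
# assumptions, not the video's exact code.
import torch
import torch.nn as nn

d_in, d_out, r, alpha = 512, 512, 8, 16
base = nn.Linear(d_in, d_out, bias=False)   # pretrained weight W, kept frozen
A = torch.randn(r, d_in) * 0.01             # LoRA down-projection
B = torch.zeros(d_out, r)                   # LoRA up-projection, zero-initialized

def lora_forward(x):
    # h = W x + (alpha / r) * B A x  -- base path plus low-rank update
    return base(x) + (alpha / r) * (x @ A.t() @ B.t())

# Merging happens on the weight tensors, not inside the .safetensors file itself:
# W' = W + (alpha / r) * B @ A, after which the adapter can be discarded
# and W' saved like any ordinary checkpoint.
with torch.no_grad():
    base.weight += (alpha / r) * (B @ A)
```

Libraries such as Hugging Face PEFT wrap this step (e.g., merge_and_unload()), so the merge can be done entirely in PyTorch before exporting the weights.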

  • @sougatabhattacharya6703
    @sougatabhattacharya6703 2 months ago

    Good explanation

  • @msfasha
    @msfasha 1 month ago

    Brilliant

  • @ahmadalis1517
    @ahmadalis1517 8 months ago

    XAI techniques on LLMs are a really interesting topic! When would you consider covering it?

  • @ibongamtrang7247
    @ibongamtrang7247 5 months ago

    Thanks

  • @darshandv10
    @darshandv10 4 months ago

    What software do you use to make videos?

  • @ArunkumarMTamil
    @ArunkumarMTamil 1 month ago

    How does LoRA fine-tuning track changes through the two decomposition matrices? How is ΔW determined?
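
ΔW is never stored or tracked separately: only the two small matrices A and B receive gradients while the pretrained weight W stays frozen, and ΔW is simply their product B·A. A minimal sketch under assumed shapes (not the video's code):

```python
# Minimal sketch: ΔW is determined implicitly by training B and A with
# ordinary backprop while the pretrained W stays frozen. Shapes are
# illustrative assumptions.
import torch

d, k, r = 256, 256, 8
W = torch.randn(d, k)                          # pretrained weight, frozen
A = torch.randn(r, k, requires_grad=True)      # trainable down-projection
B = torch.zeros(d, r, requires_grad=True)      # trainable up-projection (so ΔW starts at 0)

x = torch.randn(16, k)
y_target = torch.randn(16, d)
opt = torch.optim.SGD([A, B], lr=1e-2)

for _ in range(100):
    delta_W = B @ A                            # ΔW = B A, rank at most r
    y = x @ (W + delta_W).t()                  # forward pass with adapted weight
    loss = ((y - y_target) ** 2).mean()
    opt.zero_grad()
    loss.backward()                            # gradients flow only into A and B
    opt.step()
```

Because B is initialized to zero, the model starts out identical to the pretrained one, and the effective update ΔW grows only as far as training pushes the two small matrices.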

  • @xugefu
    @xugefu 3 months ago

    Thanks!

  • @Menor55672
    @Menor55672 2 months ago

    How do you make the illustrations?

  • @ad_academy
    @ad_academy 4 months ago

    Good video

  • @prashantlawhatre7007
    @prashantlawhatre7007 9 months ago +2

    Please make a video on QLoRA.

  • @alkodjdjd
    @alkodjdjd 8 months ago +11

    As clear as mud

    • @truck.-kun.
      @truck.-kun. 4 months ago +1

      Sounds like AI

    • @anudeepk7390
      @anudeepk7390 3 months ago

      Is it a compliment or not? Because mud is not clear.

    • @iloos7457
      @iloos7457 2 months ago

      ​@@anudeepk7390😂😂😂😂😂😂

  • @kutilkol
    @kutilkol 1 month ago

    Idiot, read the paper. Lol

  • @susdoge3767
    @susdoge3767 3 months ago +1

    gold