Fine-tuning LLMs with Hugging Face SFT 🚀 | QLoRA | LLMOps

  • Published: 15 Nov 2024
  • In this session, Harpreet from Deci AI talked about the nuances of supervised fine-tuning, instruction tuning, and the powerful techniques that bridge the gap between model objectives and user-specific requirements. He also demonstrated how to fine-tune LLMs using Hugging Face SFT.
    Topics that were covered:
    ✅ Specialized Fine-Tuning: Adapt LLMs for niche tasks using labeled data.
    ✅ Introduction to Instruction Tuning: Enhance LLM capabilities and controllability.
    ✅ Dataset Preparation: Format datasets for effective instruction tuning.
    ✅ BitsAndBytes & Model Quantization: Optimize memory and speed with the BitsAndBytes library.
    ✅ PEFT & LoRA: Understand the benefits of the PEFT library from ‪@HuggingFace‬ and the role of LoRA in fine-tuning.
    ✅ TRL Library Overview: Delve into the TRL (Transformers Reinforcement Learning) library's functionalities.
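    The dataset-preparation step above can be sketched as a small formatting helper. This is a minimal, hypothetical example assuming an Alpaca-style instruction template; the exact prompt format used in the session is not shown here, and instruction-tuning pipelines such as TRL's `SFTTrainer` accept a formatting function like this one.

    ```python
    # Render raw (instruction, output) records into single training strings
    # for instruction tuning. The Alpaca-style template is an assumption;
    # adapt it to whatever format your base model expects.

    ALPACA_TEMPLATE = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n{instruction}\n\n"
        "### Response:\n{output}"
    )

    def format_example(record: dict) -> str:
        """Render one dataset record into a single prompt+response string."""
        return ALPACA_TEMPLATE.format(
            instruction=record["instruction"], output=record["output"]
        )

    records = [
        {"instruction": "Translate 'hello' to French.", "output": "Bonjour"},
    ]
    formatted = [format_example(r) for r in records]
    ```

    Mapping such a function over every row yields the single text column that supervised fine-tuning trainers consume.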
    About LLMOps Space -
    LLMOps.Space is a global community for LLM practitioners. 💡📚
    The community focuses on content, discussions, and events around topics related to deploying LLMs into production. 🚀
    Join Discord: llmops.space/d...

Comments • 4

  • @avishvijayaraghavan • 3 months ago

    Great tutorial, appreciate it guys!

  • @pravingaikwad1337 • 6 months ago

    What is the loss function used?

  • @coldedkiller1125 • 9 months ago

    Can you please share the Colab notebook?
    Also, is there a way to point to the dataset and model locally (like a path from my PC)?

    • @ShaiHaaron • 7 months ago

      colab.research.google.com/drive/1-xGUUad2O3Y3V0-qBxES7JPD4pKB4FI9?usp=sharing