Building with Instruction-Tuned LLMs: A Step-by-Step Guide

  • Published: 2 Jan 2025

Comments • 59

  • @steveking5858
    @steveking5858 1 year ago +1

    Great session. Really helps in starting to understand the key building blocks and considerations required to do model fine-tuning. Great job Chris and Greg - and thanks!

  • @prizmaweb
    @prizmaweb 1 year ago +2

    Outstanding! I was looking around for exactly this for the last week.

  • @redfield126
    @redfield126 1 year ago +1

    This is very educational content. I found answers to almost all of my main questions. Fantastic wrap-up. Thank you, guys!

  • @RaymonddeLacaze
    @RaymonddeLacaze 1 year ago +17

    That was an excellent presentation. I feel like I learned a lot. I am frequently disappointed by these 1-hr webinars. I really appreciated the way both of you complemented each other. It was great to get the top-level view, and Chris did a great job of walking through the code. He understandably moved a bit fast, so it was hard to ingest all the code, which is normal, and then I really appreciated Greg giving a recap and the takeaways of what Chris had demoed. It really helped me retain something constructive from the code demo. All in all, I think you both did a great job. Thank you for doing this. I would love to get a copy of the slides and the code that was demoed, to walk through it at my own pace and try it out.
    Will you guys be making the slides and code available?

    • @Deeplearningai
      @Deeplearningai 1 year ago +1

      We'll be following up with the slides!

    • @Jyovita1047316
      @Jyovita1047316 1 year ago

      @@Deeplearningai when?

    • @lysanderAI
      @lysanderAI 1 year ago +1

      You can find a link to the slides in the chat around the 45-minute mark of the video.

  • @archit_singh15
    @archit_singh15 1 year ago

    Such excellent explanations, perfect understanding achieved! Thanks.

  • @chukypedro818
    @chukypedro818 1 year ago +1

    Awesome webinar.
    Thanks, Chris and Greg!

  • @fabianaltendorfer11
    @fabianaltendorfer11 1 year ago

    Love the energy. Thanks for the session!

  • @fal57
    @fal57 1 year ago

    Thank you so much; you've made the idea very simple.

  • @wangsherpa2801
    @wangsherpa2801 1 year ago +1

    Excellent session, thanks!

  • @amortalbeing
    @amortalbeing 11 months ago

    Where are the slides? I want to read the paper suggested at 30:10.
    What am I supposed to do?
    Thanks a lot in advance.

  • @membershipyuji
    @membershipyuji 1 year ago +2

    The session was great and informative. For the second part, I would also like to see inference results before fine-tuning. BLOOMZ is instruction-tuned already and might be good at writing marketing emails even before feeding it the 16 examples.

    • @temp_temp3183
      @temp_temp3183 1 year ago +3

      100% agree; it wasn't clear what the value add of the unsupervised training was.

    • @chrisalexiuk
      @chrisalexiuk 1 year ago

      Great question!
      If you load up the model without the fine-tuning, you will see that it does "fine" on the task - but it doesn't achieve the same "style" that we're training for with the unsupervised fine-tuning. You can imagine it as more of an extension of pre-training, which uses a largely unsupervised process.

  • @llohannsperanca
    @llohannsperanca 1 year ago +3

    Dear all, great presentation! Thank you very much!
    I wonder where the material will be made available?

  • @seulkeelee4655
    @seulkeelee4655 1 year ago +2

    Thanks for the great session! Just one question... I tried the supervised instruct-tuning in exactly the same way. After the training completed, I tried to push the model to the hub, but I got an error message: "NotImplementedError: You are calling `save_pretrained` on a 4-bit converted model. This is currently not supported." But you seemed to have no issue pushing. Do you have any insight or advice? Thank you!

  • @MS-en7el
    @MS-en7el 1 year ago +4

    Hi! Thank you for the valuable content! I still have a question, though.
    @chrisalexiuk Do I correctly assume that in both cases (instruct tuning and "unsupervised" fine-tuning) the model during the training (tuning) phase actually performs the next-token prediction task and calculates loss based on that (as in typical autoregressive training of a decoder)? My point is that in both cases we simply create the text input in different formats (e.g., input combined with response [or target] in the first case) and pass it through the base model. Is there any crucial "technical" difference underneath in the presented cases?
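
    The commenter's reading matches standard causal-LM fine-tuning: both setups minimize the next-token negative log-likelihood over the formatted text, and only the prompt template differs. A minimal pure-Python sketch of that shared objective (toy probabilities, not a real model):

    ```python
    import math

    def next_token_nll(probs_of_true_next_token):
        """Average negative log-likelihood of the true next token at each
        position -- the loss used in autoregressive (decoder-only) training,
        whether the text is a formatted instruction/response pair or raw
        unformatted text."""
        n = len(probs_of_true_next_token)
        return sum(-math.log(p) for p in probs_of_true_next_token) / n

    # A model that is certain of every true next token has zero loss...
    print(next_token_nll([1.0, 1.0, 1.0]))  # → 0.0
    # ...while uncertainty about the next token raises the loss.
    print(next_token_nll([0.5, 0.25]))
    ```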

  • @fox_trot
    @fox_trot 1 year ago +7

    Will you guys be making the slides and code available?

  • @akibulhaque8621
    @akibulhaque8621 1 year ago

    For the supervised instruction set, can I use any model, like a Llama 2 base model, and train it?

  • @androide551
    @androide551 1 year ago +2

    When will the slides be available?

  • @amortalbeing
    @amortalbeing 11 months ago

    Thanks a lot, really appreciate it. To what extent does quantizing affect the training, or the output of the model in terms of generation capabilities? Does it dumb it down? Does it affect the loss?

  • @anujanand6
    @anujanand6 1 year ago +1

    That was a great presentation! Brief yet clear and to the point!
    I have a question about the make_inference function: based on the code, both outputs (the good and the bad) seem to come from the same fine-tuned model. In the inference function, the good outputs are from 'base_model' and the bad outputs are from 'model'. But base_model is the model that was fine-tuned and pushed to the hub, and later we import that model and store it in the object 'model'. The only difference seems to be that max_new_tokens is smaller when predicting the bad outputs. Please correct me if I'm wrong. Thanks!

  • @ashsha-y5f
    @ashsha-y5f 1 year ago

    @chris - I wanted to fine-tune a Llama model on my Mac M1, but it seems bitsandbytes does not have a package for Apple Silicon yet. Any suggestions?

  • @bhaveshsethi6876
    @bhaveshsethi6876 1 year ago +2

    @chris Alexiuk How did you push the 4-bit model to Hugging Face?

    • @weizhili6732
      @weizhili6732 1 year ago

      I got the same error today: NotImplementedError: You are calling `save_pretrained` on a 4-bit converted model. This is currently not supported

    • @bhaveshsethi6876
      @bhaveshsethi6876 1 year ago

      @@weizhili6732 I looked into it; it can't be saved, and loading the 4-bit model requires more GPU, so you have to go through the same process again and again.

    • @chrisalexiuk
      @chrisalexiuk 1 year ago +1

      Hey!
      You'll just want to save and push the adapter - I'll modify the notebook to a format that includes that process. You can expect that to be pushed tomorrow!

  • @MrLahcenDZ
    @MrLahcenDZ 1 year ago +2

    I think there's an error in formatting_func: it should be example.get("context", ""), not example.get("input", ""), since I assume the key is "context". So in your case the function will always fall through to the else branch, and all the data will be formatted with only an instruction and response, never with a context - or maybe I'm missing something.

    • @chrisalexiuk
      @chrisalexiuk 1 year ago +3

      Excellent catch! This is a relic of trying it out on a few different datasets! It should be updated as of now.
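
      For readers who hit the same issue, a minimal sketch of a corrected helper. It assumes Alpaca/Dolly-style records with "instruction", "context", and "response" keys (the exact prompt template in the notebook may differ); the point is simply that the lookup key must match the dataset's actual column name ("context", not "input"):

      ```python
      def formatting_func(example):
          # Use the dataset's real key ("context"); example.get("input", "")
          # would always return "" and silently drop the context from every prompt.
          if example.get("context", ""):
              return (
                  f"### Instruction:\n{example['instruction']}\n\n"
                  f"### Context:\n{example['context']}\n\n"
                  f"### Response:\n{example['response']}"
              )
          # No context for this record: format with instruction and response only.
          return (
              f"### Instruction:\n{example['instruction']}\n\n"
              f"### Response:\n{example['response']}"
          )
      ```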

  • @Pouya..
    @Pouya.. 9 months ago

    Are these notebooks available?

  • @seyedmohammadseyedmahdi8913
    @seyedmohammadseyedmahdi8913 1 year ago +1

    Thanks!

  • @ashwinrachha1694
    @ashwinrachha1694 1 year ago +2

    I tried instruction-tuning on a custom dataset and got this error:
    ValueError: num_samples should be a positive integer value, but got num_samples=0

    • @chrisalexiuk
      @chrisalexiuk 1 year ago

      This has been corrected in the notebook now; there were some changes to the libraries that caused a few errors.

    • @ShyamSunderKumarNITD
      @ShyamSunderKumarNITD 1 year ago +4

      @@chrisalexiuk Where can I access the notebook?

  • @karrtikiyer1987
    @karrtikiyer1987 1 year ago

    Thanks for the nice tutorial. How do you create a custom dataset for the second part (single-task unsupervised learning)? Say I have a bunch of documents; is there a framework or library available to create a single-task dataset for unsupervised learning?

  • @ChiliJ
    @ChiliJ 1 year ago +1

    If I'm looking to teach the LLM a new programming language, should I go with instruction tuning or fine-tuning?

    • @chrisalexiuk
      @chrisalexiuk 1 year ago +1

      Fine-tuning will likely have the best results!

    • @ChiliJ
      @ChiliJ 1 year ago

      @@chrisalexiuk Thank you for being responsive. I've got to check out your channel as well. Very informative!

  • @pec8377
    @pec8377 1 year ago

    Your first model is repeating itself and does tons of weird things. What would you do to correct this? More steps? A larger dataset?

  • @ashishsharma-fy7ox
    @ashishsharma-fy7ox 1 year ago

    I get this error when I try to push to the hub: NotImplementedError: You are calling `save_pretrained` on a 4-bit converted model. This is currently not supported. Has anyone seen this error?

  • @MauricioGomez-e9e
    @MauricioGomez-e9e 1 year ago +1

    Magnificent!

  • @prayagpurohit148
    @prayagpurohit148 1 year ago

    Hey guys, I come from a non-data-science background and am trying to automate a task: giving feedback to students. However, I am having a hard time coming up with the logic for fine-tuning. If anyone is interested in helping me out, please reply to this comment. I'll give you more context about the problem if you decide to help.

  • @paparaoveeragandham284
    @paparaoveeragandham284 8 months ago

    Look it

  • @fintech1378
    @fintech1378 1 year ago

    I fine-tuned Llama 2 on Colab, but it says CUDA ran out of memory. What is the problem here? The video says it's possible.

  • @josephmalkom2902
    @josephmalkom2902 7 days ago

    Chris looked so obnoxious! Like "I own you".

  • @ashishsharma-fy7ox
    @ashishsharma-fy7ox 1 year ago +1

    I am using openlm-research/open_llama_7b_v2. The training starts with a loss of around 1.26, and after 5K steps the loss goes down to 1.02. I am not sure why the numbers are so different from the presentation and why the model is learning so slowly. Any suggestions?

  • @EXPERIMENTGPT
    @EXPERIMENTGPT 1 year ago +1

    @Chris Alexiuk I am getting this warning: WARNING:accelerate.utils.modeling:The model weights are not tied. Please use the `tie_weights` method before using the `infer_auto_device` function.

    • @chrisalexiuk
      @chrisalexiuk 1 year ago

      In which notebook is this occurring?

    • @EXPERIMENTGPT
      @EXPERIMENTGPT 1 year ago +1

      @@chrisalexiuk Google Colab

    • @chrisalexiuk
      @chrisalexiuk 1 year ago

      @@EXPERIMENTGPT Is it in the Supervised Fine-tuning notebook?

    • @EXPERIMENTGPT
      @EXPERIMENTGPT 1 year ago +1

      @@chrisalexiuk Yes, sir.

    • @chrisalexiuk
      @chrisalexiuk 1 year ago

      @@EXPERIMENTGPT Hey! Sorry for the late reply; I don't wind up getting notifications on these comments. I didn't encounter this issue - could you send me a copy of your notebook?