Efficient Fine-Tuning for Llama-v2-7b on a Single GPU

  • Published: 22 Dec 2024

Comments • 64

  • @thelinuxkid
    @thelinuxkid 1 year ago +15

    Very helpful! Already trained llama-2 with custom classifications using the cookbook. Thanks!

  • @dinupavithran
    @dinupavithran 1 year ago +1

    Very informative. Direct and to-the-point content in an easily understandable presentation.

  • @craigrichards5472
    @craigrichards5472 4 months ago

    Amazing, can’t wait to play and train my first model 🎉

  • @thedelicatecook2
    @thedelicatecook2 7 months ago

    Well this was simply excellent, thank you 🙏🏻

  • @manojselvakumar4262
    @manojselvakumar4262 1 year ago +1

    Great content, well presented!

  • @tomhavy
    @tomhavy 1 year ago +2

    Thank you!

  • @andres.yodars
    @andres.yodars 1 year ago +1

    One of the most complete videos. Must watch

  • @karanjakhar
    @karanjakhar 1 year ago +1

    Really helpful. Thank you 👍

  • @KarimMarbouh
    @KarimMarbouh 1 year ago

    🖖 alignment by sectoring hyperparameters in behaviour, nice one

  • @zubairdotnet
    @zubairdotnet 1 year ago +15

    The Nvidia H100 GPU on Lambda Labs is just $2/hr; I have been using it for the past few months, unlike the $12.29/hr on AWS shown in the slide.
    I get it, it's still not cheap, but it's worth mentioning here.

    • @pieromolino_pb
      @pieromolino_pb 1 year ago +2

      You are right. We reported the AWS price because it's the most popular option, and it was not practical to show pricing for every vendor. But yes, you can get them cheaper elsewhere, like from Lambda. Thanks for pointing it out.

    • @rankun203
      @rankun203 1 year ago

      Last time I tried, H100s were out of stock on Lambda.

    • @zubairdotnet
      @zubairdotnet 1 year ago

      @@rankun203 They are available only in a specific region (mine is in Utah), and I don't think they have expanded it. Plus, there is no storage available in that region, meaning if you shut down your instance, all data is lost.

    • @Abraham_writes_random_code
      @Abraham_writes_random_code 1 year ago +2

      Together AI is $1.40/hr on your own fine-tuned model :)

    • @PieroMolino
      @PieroMolino 1 year ago +2

      @@Abraham_writes_random_code Predibase is cheaper than that

  • @ggm4857
    @ggm4857 1 year ago +6

    I would like to kindly request @DeepLearningAI to prepare a similar hands-on workshop on fine-tuning source code models.

    • @Deeplearningai
      @Deeplearningai 1 year ago +3

      Don't miss our short course on the subject! www.deeplearning.ai/short-courses/finetuning-large-language-models/

    • @ggm4857
      @ggm4857 1 year ago

      @@Deeplearningai Wow, thanks.

  • @ab8891
    @ab8891 1 year ago

    Excellent crystal-clear surgery on GPU VRAM utilization...

  • @Ev3ntHorizon
    @Ev3ntHorizon 1 year ago

    Excellent coverage, thank you.

  • @Ay-fj6xf
    @Ay-fj6xf 1 year ago

    Great video, thank you!

  • @msfasha
    @msfasha 1 year ago +1

    Clear and informative, thanks.

  • @PickaxeAI
    @PickaxeAI 1 year ago +1

    At 51:30 he says don't repeat the same prompt in the training data. What if I am fine-tuning the model on a single task, but with thousands of different inputs for the same prompt?

    • @brandtbealx
      @brandtbealx 1 year ago +2

      It will cause overfitting. It would be similar to training an image classifier with 1000 pictures of roses and only one lily, then asking it to predict both classes with good accuracy. You want the data to have a normal distribution around your problem space.

    • @satyamgupta2182
      @satyamgupta2182 1 year ago

      @PickaxeAI Did you come across a solution for this?

    • @manojselvakumar4262
      @manojselvakumar4262 1 year ago

      Can you give an example of the task? I'm trying to understand in what situation you'd need different completions for the same prompt.

  • @nguyenanhnguyen7658
    @nguyenanhnguyen7658 1 year ago

    Very helpful. Thanks.

  • @goelnikhils
    @goelnikhils 1 year ago

    Amazing content on fine-tuning LLMs.

  • @ayushyadav-bm2to
    @ayushyadav-bm2to 10 months ago +1

    What's the music in the beginning? Can't shake it off.

  • @jirikosek3714
    @jirikosek3714 1 year ago

    Great job, thumbs up!

  • @rajgothi2633
    @rajgothi2633 1 year ago

    Amazing video.

  • @bachbouch
    @bachbouch 1 year ago

    Amazing ❤

  • @nekro9t2
    @nekro9t2 1 year ago +2

    Please can you provide a link to the slides?

  • @ggm4857
    @ggm4857 1 year ago +1

    Hello everyone, I would be so happy if the recorded video had captions/subtitles.

    • @kaifeekhan_25
      @kaifeekhan_25 1 year ago +1

      Right

    • @dmf500
      @dmf500 1 year ago +2

      It does, you just have to enable them! 😂

    • @kaifeekhan_25
      @kaifeekhan_25 1 year ago +1

      @@dmf500 Now it is enabled 😂

  • @rgeromegnace
    @rgeromegnace 1 year ago

    Eh, that was great. Thank you very much!

  • @stalinamirtharaj1353
    @stalinamirtharaj1353 1 year ago

    @pieromolino_pb Does Ludwig allow you to locally download and deploy the fine-tuned model?

  • @dudepowpow
    @dudepowpow 4 months ago

    28 Zoom notifications! Travis is working too hard.

  • @hemanth8195
    @hemanth8195 1 year ago

    Thank you

  • @nminhptnk
    @nminhptnk 1 year ago

    I ran a Colab T4 and still got “RuntimeError: CUDA out of memory”. Anything else I can do, please?
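
    A minimal sketch of memory-reducing settings for the question above, assuming the Ludwig LLM fine-tuning schema (0.8+) that the cookbook is built on; the column names "instruction" and "output" and the exact values are illustrative, not taken from the video. 4-bit quantization (QLoRA), a LoRA adapter, batch size 1 with gradient accumulation, and a shorter sequence length are the usual levers for fitting Llama-2-7b into a 16 GB T4.

    ```python
    # Hypothetical Ludwig config sketch: verify key names against your Ludwig version.
    from ludwig.api import LudwigModel

    config = {
        "model_type": "llm",
        "base_model": "meta-llama/Llama-2-7b-hf",
        "quantization": {"bits": 4},            # QLoRA: keep base weights in 4-bit
        "adapter": {"type": "lora"},            # train only small adapter matrices
        "input_features": [{
            "name": "instruction",              # illustrative column name
            "type": "text",
            "preprocessing": {"max_sequence_length": 512},  # shorter context, smaller activations
        }],
        "output_features": [{"name": "output", "type": "text"}],
        "trainer": {
            "type": "finetune",
            "batch_size": 1,                    # smallest per-device batch
            "gradient_accumulation_steps": 16,  # preserve the effective batch size
        },
    }

    model = LudwigModel(config=config)
    # results = model.train(dataset="train.csv")  # swap in your own dataset path or DataFrame
    ```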

  • @TheGargalon
    @TheGargalon 1 year ago +6

    And I was under the delusion that I would be able to fine-tune the 70B param model on my 4090. Oh well...

    • @iukeay
      @iukeay 1 year ago

      I got a 40b model working on a 4090

    • @TheGargalon
      @TheGargalon 1 year ago +2

      @@iukeay Did you fine-tune it, or just run inference?

    • @ahsanulhaque4811
      @ahsanulhaque4811 9 months ago

      70B param? hahaha.

  • @pickaxe-support
    @pickaxe-support 1 year ago +2

    Cool video. If I want to fine-tune it for a single specific task (keyword extraction), should I first train an instruction-tuned model and then train that on my specific task? Or mix the datasets together?

    • @shubhramishra8698
      @shubhramishra8698 1 year ago

      Also working on keyword extraction! I was wondering if you'd had any success fine-tuning?

  • @rachadlakis1
    @rachadlakis1 4 months ago

    Can we have the slides, please?

  • @SDAravind
    @SDAravind 1 year ago

    Can you share the slides, please?

  • @feysalmustak9604
    @feysalmustak9604 1 year ago +3

    How long did the entire training process take?

    • @edwardduda4222
      @edwardduda4222 8 months ago

      Depends on your hardware, dataset, and the hyperparameters you're manipulating. The training process is the longest phase in developing a model.

  • @kevinehsani3358
    @kevinehsani3358 1 year ago

    epochs=3: since we are fine-tuning, would epochs=1 suffice?

    • @pieromolino_pb
      @pieromolino_pb 1 year ago +3

      It really depends on the dataset. Ludwig also has an early stopping mechanism where you can specify the number of epochs (or steps) without improvement before stopping, so you could set epochs to a relatively large number and let early stopping take care of not wasting compute time.
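
      A minimal sketch of the setup described in this reply, assuming Ludwig's trainer exposes an `early_stop` setting (consecutive evaluation rounds without validation improvement before training halts); the exact numbers are illustrative.

      ```python
      # Partial Ludwig trainer config sketch: large epoch budget, early stopping does the real work.
      trainer_config = {
          "type": "finetune",
          "epochs": 20,       # relatively large upper bound, rarely reached
          "early_stop": 3,    # stop after 3 evaluations with no validation improvement
      }
      ```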

  • @arjunaaround4013
    @arjunaaround4013 1 year ago

    ❤❤❤

  • @Neberheim
    @Neberheim 1 year ago

    This seems to make a case for Apple Silicon for training. The M3 Max performs close to an RTX 3080, but with access to up to 192GB of memory.

  • @leepro
    @leepro 8 months ago

    Cool! ❤

  • @mohammadrezagh4881
    @mohammadrezagh4881 1 year ago

    When I run the code in Perform Inference, I frequently receive ValueError: If `eos_token_id` is defined, make sure that `pad_token_id` is defined.
    What should I do?

    • @arnavgrg
      @arnavgrg 1 year ago

      This is now fixed on Ludwig master!
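
      For anyone pinned to an older release, a minimal sketch of the usual workaround at the Hugging Face transformers level (a general fix for this error message, not necessarily what Ludwig does internally): Llama-2 ships without a pad token, so reuse the EOS token for padding.

      ```python
      # Sketch: define pad_token_id explicitly so generate() no longer raises the ValueError.
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_name = "meta-llama/Llama-2-7b-hf"
      tokenizer = AutoTokenizer.from_pretrained(model_name)
      model = AutoModelForCausalLM.from_pretrained(model_name)

      tokenizer.pad_token = tokenizer.eos_token          # reuse EOS as the pad token
      model.config.pad_token_id = tokenizer.eos_token_id

      inputs = tokenizer("Summarize: Ludwig fine-tunes LLMs.", return_tensors="pt")
      outputs = model.generate(**inputs, max_new_tokens=32,
                               pad_token_id=tokenizer.eos_token_id)
      print(tokenizer.decode(outputs[0], skip_special_tokens=True))
      ```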