Fine tuning gpt2 | Transformers huggingface | conversational chatbot | GPT2LMHeadModel

  • Published: 18 Sep 2024

Comments • 65

  • @gingerbread_GB
    @gingerbread_GB 1 year ago +2

    This is a very important learning point. When I was watching your video, I agreed with you that it was not good the bot kept saying "I'm a very experienced person." But later on, when I dove into your json file, I realized that the dialogue in the file was bad. When I searched for "experienced" in the json, I got 61 hits, with the same "I'm not sure. I'm a very experienced person." over and over again.
    So the model worked, your code worked, the bot learns exactly what you taught it in the dataset. If your dataset is bad, your bot will talk nonsense.

    • @programming_hut
      @programming_hut  1 year ago +1

      I am planning to release a series, about 30 days long, all about embeddings, LangChain, datasets, HuggingFace, and these LLMs.
      Currently I am researching these extensively and will surely upload.
      And thanks for your insight; I will look into this too.

    • @gingerbread_GB
      @gingerbread_GB 1 year ago

      @@programming_hut Thanks for the response, I'm an amateur learning about this stuff too. I wish you would do a tutorial on LoRA with HuggingFace Transformers; for us at home, we don't have access to powerful GPUs to train big models, and LoRA makes it more feasible.
      Another big part I'm struggling with is how to get big data. There are plenty of datasets out there scraped from the internet, but they are mostly filled with trash, like inappropriate language and messy grammar. Maybe that's something you can talk about.

    • @programming_hut
      @programming_hut  1 year ago

      Sure, I will try to cover it after some research.
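
A minimal LoRA sketch for the request above, using the separate peft library on top of GPT-2 (this is not from the video; the hyperparameters and target modules are illustrative only):

from transformers import GPT2LMHeadModel
from peft import LoraConfig, get_peft_model, TaskType  # pip install peft

model = GPT2LMHeadModel.from_pretrained("gpt2")

# Illustrative settings; "c_attn" is GPT-2's fused attention projection layer.
lora_cfg = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the small adapter matrices are trainable

The base weights stay frozen and only the tiny adapter matrices are updated, which is what makes this feasible without a powerful GPU.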

  • @ilyas8523
    @ilyas8523 1 year ago +3

    great video, I was stuck on some steps until I found your video.

  • @bennievaneeden2720
    @bennievaneeden2720 1 year ago +2

    Great video. Glad to see people actually debugging their code. It really helps to better grasp what's going on.

  • @kaenovama
    @kaenovama 1 year ago +3

    Ayy, this is what I've been looking for

  • @rishabhramola448
    @rishabhramola448 1 year ago +1

    Really helpful!!! Deserves more views.

  • @fp-mirzariyasatali1985
    @fp-mirzariyasatali1985 1 year ago +2

    It's great to try multiple techniques.

  • @plashless3406
    @plashless3406 1 year ago

    This is amazing. Really appreciate your efforts, brother.

  • @ammar46
    @ammar46 10 months ago +3

    Nothing is generated for me. I used the exact code you used. Can you help me understand why this is happening?

  • @ucduyvo4552
    @ucduyvo4552 1 year ago +1

    Great Video, thanks

  • @shubhamkumar8093
    @shubhamkumar8093 1 year ago

    Very Informative 😊

  • @muntazkaleem7977
    @muntazkaleem7977 1 year ago +1

    Nice work

  • @Websbird
    @Websbird 3 months ago

    Great help. Can you recommend an editor for Windows just like yours? And is it possible for you to create a Colab or Kaggle notebook for the same project?

  • @elibrignac8050
    @elibrignac8050 1 year ago +1

    Epic video, just one question: after saving the model, how can I load it and make inferences?
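
One possible way to reload a saved checkpoint and run inference, assuming the model and tokenizer were written out with save_pretrained (the folder path and prompt below are placeholders):

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("./chatbot-model")  # placeholder path
model = GPT2LMHeadModel.from_pretrained("./chatbot-model")
model.eval()

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=50,
        do_sample=True,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))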

  • @amitvyas7905
    @amitvyas7905 1 year ago +1

    Hi!
    It was a good video.
    I would like to know, once the model is trained, how can we check the accuracy? Can you generate a ROUGE score? That way we know how good or bad the model is.
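
If ROUGE is the metric wanted here, a small sketch with the separate evaluate library; the example strings are made up, and perplexity on a held-out set is another common check for a chatbot:

import evaluate  # pip install evaluate rouge_score

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["i am doing well thanks"],       # model outputs (made up)
    references=["i am doing well, thank you"],    # reference replies (made up)
)
print(scores)  # rouge1 / rouge2 / rougeL / rougeLsum F-measures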

  • @arjunparasher7005
    @arjunparasher7005 1 year ago

    🙌🙌

  • @lekhwar
    @lekhwar 1 year ago

    Great

  • @tobi418
    @tobi418 1 year ago +1

    Hello, thank you for sharing your code. I have a question: why don't you use the Trainer class from the HuggingFace Transformers library for training?

    • @programming_hut
      @programming_hut  1 year ago

      Nvm I just wanted to try this way

    • @tobi418
      @tobi418 1 year ago

      @@programming_hut Is there any advantage to this way? Can I use the Trainer class? Also, can I use the TextDataset class to import and tokenize my dataset?
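
A rough sketch of the Trainer-based route asked about here; TextDataset still works but is deprecated in recent transformers releases in favour of the datasets library, and "train.txt" is a placeholder text file:

from transformers import (GPT2LMHeadModel, GPT2Tokenizer, TextDataset,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# TextDataset chunks the raw text file into fixed-size blocks of token ids.
train_dataset = TextDataset(tokenizer=tokenizer, file_path="train.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(output_dir="gpt2-chatbot", num_train_epochs=3,
                         per_device_train_batch_size=2)
Trainer(model=model, args=args, train_dataset=train_dataset,
        data_collator=collator).train()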

  • @georgekokkinakis7288
    @georgekokkinakis7288 1 year ago

    What if we don't add the < , and to the training data? I have a dataset where each sample is formatted as [context] [user_question] [bot_answer] and each sample is separated from the next one by an empty line. I am using the pretrained model lighteternal/gpt2-finetuned-greek.
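
The separator tokens in the comment above were stripped by the page, but the general pattern is: if marker tokens are added to the training text, they can also be registered with the tokenizer so they stay single tokens. A sketch with example markers (the token strings below are placeholders):

from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Shown with plain "gpt2"; the same steps apply to a checkpoint such as
# lighteternal/gpt2-finetuned-greek loaded with its own tokenizer.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Example marker tokens; use whatever separators actually appear in the data.
tokenizer.add_special_tokens({"additional_special_tokens": ["<user>", "<bot>"]})
model.resize_token_embeddings(len(tokenizer))  # make room for the added tokens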

  • @sanvisundarrajan
    @sanvisundarrajan 9 months ago +1

    Hey, I debugged the exact same way... the bot doesn't reply though. Can someone help with this?

  • @rakesh2233
    @rakesh2233 11 months ago

    Great video, thanks. I am a beginner studying these LLMs. I have a small doubt:
    I have seen people use different data formats to fine-tune different LLMs. For example, the following format can be used for Llama-2:
    {
      "instruction": "",
      "input": "",
      "output": ""
    }
    and sometimes the format below is used for chatglm2-6b:
    {
      "content": "",
      "summary": ""
    }
    Is it related to what format was used for pre-training, or can both actually be used for different LLMs? How do I organize my custom data if I want to fine-tune an LLM?
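
On the format question: the JSON keys are just each project's convention; before tokenization the fields are usually flattened into a single prompt string, and the template is a free choice as long as training and inference use the same one. An illustrative sketch (field names taken from the comment above, template arbitrary):

def build_prompt(example):
    # Alpaca / Llama-2 style record
    if "instruction" in example:
        return (f"### Instruction:\n{example['instruction']}\n"
                f"### Input:\n{example['input']}\n"
                f"### Response:\n{example['output']}")
    # chatglm2-6b style record
    return f"{example['content']}\n{example['summary']}"

print(build_prompt({"instruction": "Translate to French",
                    "input": "Hello",
                    "output": "Bonjour"}))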

  • @sanketgaikwad996
    @sanketgaikwad996 1 year ago

    thanks a lot bro

  • @SabaMomeni-i1n
    @SabaMomeni-i1n 1 year ago

    great video. just wondering why u used

  • @jackeyhua
    @jackeyhua 4 months ago

    Hi there, very good video, really appreciate it. Currently we are facing a problem where the input does not generate any bot output, like . Can you help figure it out?

  • @sharathmandadi
    @sharathmandadi 1 year ago +1

    What changes are required to use this code for SQuAD question answering training?

  • @miningahsean886
    @miningahsean886 1 year ago

    Good video, sir. Where did you source your dataset?

  • @nicandropotena2682
    @nicandropotena2682 1 year ago

    Is it possible to train this model with the history of the conversation too? To keep track of what the user said, in order to maintain a logical sense to the conversation.

    • @programming_hut
      @programming_hut  1 year ago +1

      Working on that; I will probably figure it out and then make a tutorial.
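
One simple approach, not from the video, is to concatenate the previous turns into the prompt and truncate from the left so it fits GPT-2's 1024-token context; the speaker labels and limit below are arbitrary:

def build_context(history, tokenizer, max_tokens=900):
    # history is a list of (speaker, utterance) pairs, oldest first.
    text = "\n".join(f"{speaker}: {utterance}" for speaker, utterance in history)
    ids = tokenizer.encode(text)
    return tokenizer.decode(ids[-max_tokens:])  # keep only the most recent tokens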

  • @nitron3006
    @nitron3006 1 year ago

    After I run your code, training doesn't work; it just stays stuck on 0% | | 0/12 [00:00

    • @joshwaphly
      @joshwaphly 4 months ago

      Same error; try changing the GPU to T4 in the Runtime tab's runtime selection. Thanks a lot to @programming_hut, you have enlightened me.

  • @manthanghonge007
    @manthanghonge007 5 months ago

    Hey, can you tell me how to connect this GPT-2 model to a front end?

  • @甘楽-u7v
    @甘楽-u7v 1 year ago

    I wonder where I can find more datasets for fine-tuning a GPT-2 chatbot. If anybody has an idea, please tell me, thanks.

    • @programming_hut
      @programming_hut  1 year ago

      You can use ChatGPT itself to generate more data.
      You can look at the Alpaca model's dataset.
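
The Alpaca instruction data mentioned above is mirrored on the HuggingFace Hub; a sketch with the datasets library (the repo id "tatsu-lab/alpaca" is an assumption and may differ):

from datasets import load_dataset  # pip install datasets

ds = load_dataset("tatsu-lab/alpaca", split="train")
print(ds[0])  # fields: instruction, input, output, text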

  • @kotcraftchannelukraine6118
    @kotcraftchannelukraine6118 1 year ago

    Is it possible to fine-tune OPT-125M or GPT-Neo-125M using this?

    • @MrJ3
      @MrJ3 1 year ago +2

      It's using the HuggingFace API so it's easy to swap models as long as the models support training on the task you are interested in. Just swap the model name out.
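
A sketch of that swap; any causal-LM checkpoint from the Hub should load the same way (the model ids below are assumed to be the current Hub names):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-125m"  # or "EleutherAI/gpt-neo-125m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)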

  • @coldbubbyguy6206
    @coldbubbyguy6206 1 year ago

    Is it normal for the training to get stuck at 0% if I only have access to a CPU?

  • @ucduyvo4552
    @ucduyvo4552 1 year ago

    Thanks sir, but the language in the fine-tuning GPT-2 video is English; what about languages other than English?

    • @programming_hut
      @programming_hut  1 year ago

      I might work on that and will try making a video for it…

  • @quantumjun
    @quantumjun 1 year ago

    Does fine-tuning need to add a new layer?

    • @programming_hut
      @programming_hut  1 year ago +2

      Fine-tuning trains a pretrained model on a new dataset without training from scratch.
      Now it's your choice to add or remove layers.

    • @quantumjun
      @quantumjun 1 year ago

      @@programming_hut thank you very much

    • @quantumjun
      @quantumjun 1 year ago

      I reckon if we add a new layer for the chatbot, it may overfit the chat data?

    • @quantumjun
      @quantumjun 1 year ago

      since we don’t have enough data for that?
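
On the layer question above: fine-tuning normally keeps GPT-2's architecture and simply continues training. If overfitting a small chat dataset is the worry, a common alternative to adding layers is freezing most of the existing ones; a hedged sketch:

from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")

# Freeze all but the last two transformer blocks; the embeddings and the tied
# LM head stay trainable, which limits how much the small chat set can shift.
for block in model.transformer.h[:-2]:
    for param in block.parameters():
        param.requires_grad = False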

  • @UncleDavid
    @UncleDavid 1 year ago

    why u breathing so fast? are u nervous?

  • @neel_aksh
    @neel_aksh 1 year ago

    Can I use this with gpt-2-simple?

  • @mohammedismail6872
    @mohammedismail6872 1 year ago

    Bro, please place your mic away from the keyboard.

  • @robosergTV
    @robosergTV 1 year ago

    Please speak faster; the video is too slow.

    • @programming_hut
      @programming_hut  1 year ago

      RUclips has a feature to speed it up via playback speed 😬

  • @balarajagopalan4981
    @balarajagopalan4981 1 year ago

    Talk slowly, man. Can't understand what you're saying.

    • @programming_hut
      @programming_hut  1 year ago +1

      Sorry, but you can reduce the speed.

    • @rigeshyogi
      @rigeshyogi 1 year ago +1

      @@programming_hut If you want to teach something, speak slowly and clearly so that you're understood. Otherwise, you're not going to reach a broader audience.

    • @programming_hut
      @programming_hut  1 year ago

      Sure, I will keep that in mind next time.

    • @sloowed_reveerb
      @sloowed_reveerb 1 year ago

      I'm not an English speaker and I understood everything! (and I'm not from India 😅😀) You can turn on subtitles if you're struggling :) Amazing tutorial btw, thanks to the author!

    • @programming_hut
      @programming_hut  1 year ago

      Thanks, so kind of you.