How-To Fine-Tune a Model and Export it to Ollama Locally

  • Published: Dec 2, 2024

Comments • 27

  • @jason77nhri · 5 days ago

    Thank you for sharing the tutorial.
    I’m currently using Ollama + OpenWebUI to run LLMs on my local computer.
    I’d like to ask if it’s possible to fine-tune small-scale models solely on a local machine with Ollama + OpenWebUI, or is it necessary to connect to the internet?
    Thank you!

    • @fahdmirza · 5 days ago +1

      You can definitely fine-tune models locally without internet access.

    • @jason77nhri · 5 days ago

      @fahdmirza Thank you for your reply!
      Does this video include a related tutorial?
      Or do you have other videos showing how to fine-tune models using Ollama + OpenWebUI?
      Thank you!

  • @SpicyMelonYT · 3 months ago +1

    So I trained the model and then built it with ollama and the Modelfile. However, when I run the model it seems a little broken. For instance, I trained it so that if I write TMAWJ (tell me a weird joke) it should respond with a dad joke. But instead it started questioning me and its existence LOL. And whenever it did kinda respond, it responded NOT with the joke but WITH THE chat_template. So it said the "Below are some instructions...". This seems wrong. I am not supposed to parse that, am I?

    • @SpicyMelonYT · 3 months ago

      Ah ok, I may have figured out the issue. The llama3 model it provides is not an instruct model, so it did not have the conversational element the instruct model does. That's why it was responding with such small answers. Also, the chat_template that is selected by default is not the one for llama3, so I had to switch it to the llama3 one. Now it seems to work!

    • @fahdmirza · 3 months ago

      awesome.

  • @user-hydestory · 3 months ago

    thank you so much for your code!

  • @spotnuru83 · 3 months ago

    thank you so much, but can you show us an example of how to fine-tune with our own custom data? I have been struggling for 6 months.. no one is actually showing the proper way.. everyone shows only the alpaca dataset, which is available on Hugging Face and is not of much use. Also, can I train this on my local machine? I might have sensitive data and need to use it for confidential purposes only. Anything along these lines will be of great help. thanks in advance.

    • @nguyentran7068 · 3 months ago

      1. Format your dataset as CSV or JSON, up to you. Make sure you have 2 columns; if you have more, you can try to merge those columns into "input", "output" or something along those lines.
      2. The guide uses dataset = some dataset from Hugging Face; just change it to yours.
      3. Unsloth models can be used locally, so you don't have to worry about sensitive data being processed elsewhere.
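To make the steps above concrete, here is a minimal sketch (not from the video; the column names and the Alpaca-style template are assumptions that should be matched to whatever your training script actually expects):

```python
# Hypothetical two-column records, as described in step 1.
records = [
    {"input": "TMAWJ", "output": "Why don't skeletons fight each other? They don't have the guts."},
]

# An Alpaca-style prompt template similar to the ones Unsloth notebooks use
# (assumed here; copy the exact template from your own training script).
template = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{input}\n\n"
    "### Response:\n{output}"
)

# Per step 2, each record becomes one training text.
formatted = [template.format(**r) for r in records]
print(formatted[0])
```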

    • @mr.gk5 · 2 months ago

      So I can load my local dataset without having to upload it to Hugging Face? Would something like dataset = pd.read_csv(file_path) work?

    • @nguyentran7068 · 2 months ago +1

      @mr.gk5 Yes, it should work.
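A quick sketch of that idea (the column names and the row here are made up): pandas reads the CSV entirely locally, and the resulting DataFrame can then be handed to the training code.

```python
import io

import pandas as pd

# Stand-in for a local file path; pd.read_csv accepts either a path or a
# file-like buffer, and nothing is uploaded anywhere.
csv_data = io.StringIO(
    "input,output\n"
    "TMAWJ,Why did the scarecrow win an award? He was outstanding in his field.\n"
)
df = pd.read_csv(csv_data)

# With the Hugging Face `datasets` library installed, this DataFrame can be
# wrapped for a trainer via datasets.Dataset.from_pandas(df).
print(df.columns.tolist())
```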

  • @SpicyMelonYT · 3 months ago +1

    I get this error at the last step:
    ollama create unsloth_model -f ./model/Modelfile
    transferring model data
    Error: invalid model reference: ./model/unsloth.Q8_0.gguf
    All the files are in their correct location.
    The only thing I can think of is that I downloaded the files from the Colab to my home computer, which is running Windows. I don't know why that would be a problem though.
    Please help o coding savior!!!

    • @SpicyMelonYT · 3 months ago +1

      OMG I figured it out almost instantly by accident. Ok, you have to change the Modelfile so that instead of this line:
      FROM ./model/unsloth.Q8_0.gguf
      it's this line:
      FROM unsloth.Q8_0.gguf
      I guess it was just a simple pathing issue.
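For reference, the fix above suggests that ollama resolves a relative FROM path against the Modelfile's own directory rather than the shell's working directory, which is why ./model/unsloth.Q8_0.gguf failed. A minimal layout matching this thread would be:

```
model/
├── Modelfile           # contains the single line:  FROM unsloth.Q8_0.gguf
└── unsloth.Q8_0.gguf

ollama create unsloth_model -f ./model/Modelfile
ollama run unsloth_model
```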

    • @fahdmirza · 3 months ago +1

      cool, thanks

  • @mr.caca6917 · 1 month ago

    Hello, if I just want to save the model in Keras, can I do that? I would like it to be an .h5 model.

  • @chaithanyachaganti4305 · 4 months ago

    Thank you for the video 👍

  • @drmetroyt · 4 months ago +1

    Can I use a PDF to fine-tune a model?

    • @RedSky8 · 3 months ago

      You normally want it to be in CSV format since that's what's clearest for the model to read. You may want to save your PDF as a CSV file and try it that way. I think there is a way to use RAG with PDF documents so your model has access to the information in your PDF.

    • @mr.gk5 · 2 months ago

      @RedSky8 So after creating the CSV, do I feed it to the script directly, or do I upload it to Hugging Face? How does it work?
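One way to sketch RedSky8's suggestion (the question/answer pairs here are invented for illustration, and actual PDF text extraction would need a library such as pypdf): flatten the PDF's content into a two-column CSV, which the fine-tuning script can then read as a local file, with no upload required.

```python
import csv
import io

# Hypothetical instruction/response pairs pulled out of a PDF, either by
# hand or with a PDF text-extraction library.
rows = [
    {"input": "What is the warranty period?", "output": "Two years from the date of purchase."},
    {"input": "How do I reset the device?", "output": "Hold the power button for ten seconds."},
]

# Write the two-column CSV that the fine-tuning script will read locally.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["input", "output"])
writer.writeheader()
writer.writerows(rows)
csv_text = buf.getvalue()
print(csv_text.splitlines()[0])
```

The resulting file never has to go to Hugging Face; it can be loaded locally, for example with pandas or the `datasets` library's CSV loader.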

  • @armandosilvera6314 · 3 months ago +1

    I struggled for days with save_pretrained_gguf. It showed an error: "/bin/sh: 1: python: not found". The problem was that I had python3 (Python 3.10.4) installed and the python alias was not defined. I solved it with: sudo apt install python-is-python3. Nice video, simple and complete.

  • @sriharimohan618 · 4 months ago

    The Modelfile is not generated for me when I do save_pretrained_gguf ...
    Up to here everything completed - Unsloth: Conversion completed! Output location: ./model/unsloth.Q8_0.gguf
    Any idea?

  • @nc9024 · 1 month ago

    Unsloth requires a GPU, but I don't have one.