Thank you for sharing the tutorial.
I’m currently using Ollama + OpenWebUI to run LLMs on my local computer.
I’d like to ask: is it possible to fine-tune small models entirely on a local machine with Ollama + OpenWebUI, or is an internet connection required?
Thank you!
You can definitely fine-tune models locally without internet access.
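To be precise, Ollama and OpenWebUI only serve models; the fine-tuning itself is done with a library like Unsloth, which runs fully offline once the base model has been downloaded. A minimal sketch, assuming a CUDA GPU; the model name, file name, and hyperparameters here are illustrative, and the argument layout follows the Unsloth notebooks of the time (newer trl versions move some of these into SFTConfig):

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Base model is cached locally after the first download; training itself needs no network.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Local file; nothing is uploaded anywhere.
dataset = load_dataset("json", data_files="local_data.json", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(per_device_train_batch_size=2, max_steps=60, output_dir="outputs"),
)
trainer.train()
```

The resulting model can then be exported to GGUF and loaded into Ollama, all without leaving your machine.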
@fahdmirza Thank you for your reply!
Does this video include a related tutorial?
Or do you have other videos showing how to fine-tune models using Ollama + OpenWebUI?
Thank you!
So I trained the model and then built it with ollama and the Modelfile. However, when I run the model it seems a little broken. For instance, I trained it so that if I write TMAWJ (tell me a weird joke) it should respond with a dad joke. But instead it started questioning me and its existence LOL. And whenever it did kinda respond, it responded not with the joke but with the chat_template itself, so it said the "Below are some instructions..." part. This seems wrong. I'm not supposed to parse that, am I?
Ah ok, I may have figured out the issue. The llama3 model it provides is not an instruct model, so it doesn't have the conversational element the instruct model does. That's why it was responding with such short answers. Also, the chat_template selected by default is not the one for llama3, so I had to switch it to the llama3 one. Now it seems to work!
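For anyone hitting the same thing, a rough sketch of the two fixes described above (the model name and the mapping follow the standard Unsloth notebook conventions, so treat the details as assumptions):

```python
from unsloth import FastLanguageModel
from unsloth.chat_templates import get_chat_template

# Fix 1: start from the instruct variant, not the base model.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-Instruct-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Fix 2: explicitly select the llama-3 chat template instead of the default.
tokenizer = get_chat_template(
    tokenizer,
    chat_template="llama-3",
    mapping={"role": "from", "content": "value", "user": "human", "assistant": "gpt"},
)
```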
awesome.
thank you so much for your code!
sure thing
Thank you so much, but can you show us an example of how to do this with our own custom data? I've been struggling for 6 months.. no one is actually showing the proper way. Everyone shows it only with Alpaca, which is available on Hugging Face and isn't much use. Also, can I train this on my local machine? I might have sensitive data to train on and use for confidential purposes only. Anything along these lines would be a great help. Thanks in advance.
1. Format your dataset, CSV or JSON, up to you. Make sure you have 2 columns; if you have more, you can try to merge them into "input", "output" or something along those lines.
2. The guide uses dataset = some dataset from Hugging Face; just change it to yours (see the sketch below).
3. Unsloth models can be trained and used locally, so you don't have to worry about sensitive data being processed elsewhere.
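Concretely, step 2 can look like this (the file path, column names, and prompt format are assumptions; adapt them to the notebook you're following, including appending tokenizer.eos_token if its formatter does):

```python
from datasets import load_dataset

# Point at your own file instead of a Hub dataset; nothing leaves your machine.
dataset = load_dataset("csv", data_files="my_data.csv", split="train")

# Collapse your two columns into the single "text" field the trainer expects.
def format_row(row):
    return {"text": f"### Instruction:\n{row['input']}\n\n### Response:\n{row['output']}"}

dataset = dataset.map(format_row)
```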
So I can load my local dataset without having to upload it to Hugging Face? Would something like dataset = pd.read_csv(file_path) work?
@mr.gk5 Yes, it should work
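One caveat: pd.read_csv returns a pandas DataFrame, while the trainer expects a datasets.Dataset, so wrap it (file_path is your placeholder from above):

```python
import pandas as pd
from datasets import Dataset

df = pd.read_csv(file_path)        # plain DataFrame
dataset = Dataset.from_pandas(df)  # the type SFTTrainer can consume
```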
I get this error at the last step:
ollama create unsloth_model -f ./model/Modelfile
transferring model data
Error: invalid model reference: ./model/unsloth.Q8_0.gguf
All the files are in their correct location.
The only thing I can think of is that I downloaded the files from the Colab to my home computer, which is running Windows. I don't know why that would be a problem though.
Please help o coding savior!!!
OMG, I figured it out almost instantly, by accident. You have to change the Modelfile so that instead of this line:
FROM ./model/unsloth.Q8_0.gguf
it's this line:
FROM unsloth.Q8_0.gguf
I guess it was just a simple pathing issue.
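For anyone else hitting this, here is the layout that follows from the fix above (file and model names taken from the thread; the explanation of why it works is my reading, not something the video states):

```
model/
├── Modelfile            # first line: FROM unsloth.Q8_0.gguf   (bare filename, not ./model/...)
└── unsloth.Q8_0.gguf

# run from the directory that contains model/:
ollama create unsloth_model -f ./model/Modelfile
```

The bare filename working while ./model/... fails suggests Ollama resolves the FROM path relative to the Modelfile's own directory rather than the shell's working directory, so the original prefix pointed one level too deep.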
cool, thanks
Hello, if I just want to save the model in Keras, can I do that? I would like it to be an h5 model.
Yes, should be fine.
Thank you for the video 👍
Welcome 👍
Can I use a PDF to fine-tune a model?
You normally want it to be in CSV format, since that's what's cleanest for the model to read. You may want to save your PDF as a CSV file and try it that way. I think there is also a way to use RAG with PDF documents so your model has access to the information in your PDF.
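If it helps, a rough sketch of getting PDF text into a two-column CSV (pypdf and the column names are my choices, not from the video; you still have to decide how the raw text maps into instruction/response pairs):

```python
from pypdf import PdfReader
import csv

reader = PdfReader("document.pdf")  # placeholder path
with open("dataset.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["input", "output"])
    for page in reader.pages:
        text = (page.extract_text() or "").strip()
        if text:
            # crude example pairing: a generic prompt with each page's text
            writer.writerow(["Summarize this section of the document.", text])
```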
@RedSky8 So after creating the CSV, do I feed it to the script directly, or do I upload it to Hugging Face? How does it work?
I struggled for days with save_pretrained_gguf. It showed an error: "/bin/sh: 1: python: not found". The problem was that I had python3 (Python 3.10.4) installed but the python alias was not defined. I solved it with: sudo apt install python-is-python3. Nice video, simple and complete.
thanks
The Modelfile is not being generated for me when I do save_pretrained_gguf ...
everything up to this point worked - Unsloth: Conversion completed! Output location: ./model/unsloth.Q8_0.gguf
any idea?
Unsloth requires a GPU, but I don't have one.
ok