Great video! I just want to add that as of January 2023, tokens you use with your fine-tuned model are about 6 times more expensive than the base model (Davinci: $0.12 vs $0.02 per 1k tokens). So you might not save money, but you will get more accurate outputs if you fine-tune it correctly.
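At the January 2023 prices quoted above, the difference is easy to sanity-check in a few lines (the 500k-token monthly workload is just an assumed example):

```python
def usage_cost(tokens: int, price_per_1k: float) -> float:
    """Return the dollar cost of running `tokens` tokens at a given per-1k rate."""
    return tokens / 1000 * price_per_1k

# Assumed workload: 500k completion tokens per month.
tokens = 500_000
base = usage_cost(tokens, 0.02)        # base Davinci, $0.02 / 1k tokens
fine_tuned = usage_cost(tokens, 0.12)  # fine-tuned Davinci, $0.12 / 1k tokens

print(f"base: ${base:.2f}, fine-tuned: ${fine_tuned:.2f}")
```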
Excellent lesson. I expected the OpenAI Fine-tuning API to improve the general model itself with new training data, but what we see is a significant deterioration of the original model's responses when it's fine-tuned on new data. Is there any way to overcome this problem?
As we see here: 38:40
Thanks for your tutorial. I got a question. Is it possible to fine-tune a model that answers questions the same way it would with the Answers endpoint of the OpenAI API?
I mean, is there a parameter for the `fine_tunes.create` call to which I can send a set of documents to search? So the model can answer questions based on the information in those documents.
Thank you again!
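For what it's worth, fine-tuning doesn't take a documents parameter; the usual pattern is retrieve-then-ask: find the most relevant document yourself and paste it into the prompt. Here is a toy sketch of that pattern, using simple word overlap in place of real embeddings (the documents and scoring are made up for illustration):

```python
def score(question: str, doc: str) -> int:
    """Toy relevance score: number of question words that appear in the doc.
    A real system would use embeddings and cosine similarity instead."""
    q_words = set(question.lower().split())
    d_words = set(doc.lower().split())
    return len(q_words & d_words)

def build_prompt(question: str, docs: list[str]) -> str:
    """Pick the most relevant document and prepend it to the question."""
    best = max(docs, key=lambda d: score(question, d))
    return f"Answer using this context:\n{best}\n\nQuestion: {question}\nAnswer:"

# Hypothetical documents for illustration only.
docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Shipping takes 3 to 5 business days within the EU.",
]
prompt = build_prompt("How long does shipping take?", docs)
# `prompt` would then be sent to the (base or fine-tuned) model.
```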
Hi, the Slack community link you provided in the description has expired; can you please share a new one? Thanks!
Hey! Sorry for the late reply. If you are still interested in joining our community, here's a link join.slack.com/t/buzzrobot/shared_invite/zt-1zsh7k8pd-iMu_M8bUxIK3pOJgqJgCRQ
@@BuzzRobot Thanks
Hey, quick question, how did you format the CSV file before uploading it to the CLI?
It takes JSONL, right? So `{"prompt": "… ->", "completion": "…"}`. You can convert CSV to JSONL in Python using the csv and srsly libraries.
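In case it helps, you don't even need srsly; the standard library's csv and json modules are enough. A minimal sketch, assuming a CSV with `prompt` and `completion` columns (the ` ->` suffix and leading-space conventions are the ones OpenAI's data-prep tool suggested for completion-style fine-tuning; adjust to your data):

```python
import csv
import io
import json

def csv_to_jsonl(csv_text: str) -> str:
    """Convert CSV text with 'prompt' and 'completion' columns to JSONL,
    appending the ' ->' separator to prompts and a leading space to
    completions, as the OpenAI fine-tuning data-prep tool suggests."""
    reader = csv.DictReader(io.StringIO(csv_text))
    lines = []
    for row in reader:
        record = {
            "prompt": row["prompt"] + " ->",
            "completion": " " + row["completion"],
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)

sample = "prompt,completion\nGreat food!,positive\nSlow service.,negative"
print(csv_to_jsonl(sample))
```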
Hello, thank you for the video. Is there anywhere we could get the yelp_review_sentiment.csv file?
Thank you!
Sorry, I'm an idiot and new to this. Will fine-tuning allow me to make a chatbot with a backstory, memory, and personality of any size I choose? Fine-tune on previous chats so it will be able to retrieve the information from there instead of using tons of tokens for every question? Or are you saying I would only be able to draw on that information by creating custom prompts, in which case that doesn't really help me with memory and backstory; that would be too much work.
This is incredible technology, and it's so fun to play with. The potential is huge. I love that you guys have let random idiots like me play with such a powerful tool 😅.
any solution?
Thank you for the tutorial. I followed it, but it is not running on Windows. The code given in the Developer quickstart -> Making requests section of the OpenAI API docs isn't running on Windows either. Is this tutorial only for Mac?
how to update my existing fine-tune model with my new custom data?
I want to teach my model to generate a JSON string with properties that I have strict names for. I tried to get ChatGPT to do it, and it did a nice job, but it takes the liberty of changing some of the property names.
For example, if my prompt tells it to generate the JSON string, it will name one of the JSON properties in its own way. If I train my model, can the completion property in my JSONL file contain an inner JSON string?
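Yes, the completion can itself be a JSON string; you just have to escape it, which is easiest by letting json.dumps do the nesting for you. A sketch with made-up property names:

```python
import json

# The strict property names you want the model to learn (hypothetical names).
inner = {"orderId": 123, "customerName": "Alice"}

# One JSONL training line: the completion holds the inner JSON as a string,
# so json.dumps escapes its quotes correctly when serializing the record.
record = {
    "prompt": "Generate the order JSON for Alice, order 123 ->",
    "completion": " " + json.dumps(inner),
}
line = json.dumps(record)
print(line)

# Round-trip check: the inner JSON is recoverable from the completion.
parsed = json.loads(json.loads(line)["completion"])
```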
Can we fine-tune same GPT-3 model more than once?
Hi, did you find the answer? I'm interested too!
@@sadrahel2142 No, we can't do it more than once.
If we want to bias GPT-3's language, is that possible? For example, I want my model to use only my field-related vocabulary during generation. How can I do that?
Were you able to do that, sir?
I want to do the same thing; is it achievable?
@@redouanaf6378 only fine-tuning worked in my case
Is the fine-tuned model context-aware? I mean, if I trained it with a specific logical QnA flow, will it be aware of the context? ...especially with domain-specific QnA training data?
No. Ask GPT-3 and it will also answer you with a no.
Can anyone show a video of the paid account? What will it be like, and what is the minimum subscription? Moreover, when fine-tuning a model using the $18 free credit, it is not creating a model; instead it throws an "incorrect API" error.
DO ONE FOR GPT-4
I want that csv file
What time is it today?
A bee finds a name
In the computer
I'm getting this tattooed on my chest. What time is it yesterday?