Fine-Tune ChatGPT For Your Exact Use Case

  • Published: 22 Aug 2024
  • In this video, I show you how to fine-tune ChatGPT 3.5 Turbo. This newly released fine-tuning feature lets you customize ChatGPT to your exact needs. Plus, I show you the easiest way to generate fine-tuning datasets, which is always the most challenging part of fine-tuning.
    Enjoy!
    Join My Newsletter for Regular AI Updates 👇🏼
    www.matthewber...
    Need AI Consulting? ✅
    forwardfuture.ai/
    Rent a GPU (MassedCompute) 🚀
    bit.ly/matthew...
    USE CODE "MatthewBerman" for 50% discount
    My Links 🔗
    👉🏻 Subscribe: / @matthew_berman
    👉🏻 Twitter: / matthewberman
    👉🏻 Discord: / discord
    👉🏻 Patreon: / matthewberman
    Media/Sponsorship Inquiries 📈
    bit.ly/44TC45V
    Links:
    Google Colab - colab.research...
    Blog Announcement - openai.com/blo...
    Matt Schumer - / mattshumer_

Comments • 116

  • @zacharypump5910
    @zacharypump5910 11 months ago +59

    Cool example, but keep in mind that without any fine-tuning, vanilla GPT will likely yield the same results just by using that same system prompt. It's very challenging to evaluate the benefits of fine-tuning without using a truly private and distinct dataset that wouldn't be part of its base training.

    • @jimmytaylor1279
      @jimmytaylor1279 11 months ago +6

      I tried this. I did get a very similar result. I used the prompt before asking the question. I think the advantage is that if you need the chatbot to respond the same way every time, you won't have to input the prompt every time. The other advantage is you can have multiple setups and not have to change it in your settings every time. I can see this being useful with much larger prompts when you want a specific style of writing and later need to do some coding, and even change coding languages. I think it has its uses.

    • @avi7278
      @avi7278 11 months ago +4

      Yeah, this is like custom instructions for the API, without having to spend tokens on the prompt for every response, which brings down cost over time. I think that is the point and the biggest advantage, not that you can't sometimes get the same result via a prompt.

    • @raj_talks_tech
      @raj_talks_tech 11 months ago

      Agreed. It depends on a very good quality data-set. Fine-tuning chatgpt doesn't make a huge difference on general topics.

    • @anshgoestoschool
      @anshgoestoschool 11 months ago

      Yes, however there is a limit to what you can fit in the context window for a standalone API call. Multi-shot prompting can only get you so far.

  • @Boneless1213
    @Boneless1213 11 months ago +11

    This is a great video. I would love a follow-up that tackles either a real use case, or at least a use case that isn't something I could just ask ChatGPT to do for me already. Like maybe inventing a new concept, giving it an understanding of it, and having it do intelligent tasks with it. I guess I just don't see a point to fine-tuning unless it helps with something that just adding to the prompt couldn't do.

  • @kummbaratr
    @kummbaratr 2 days ago

    That's the cleanest video I've ever seen about fine-tuning. I still wonder how to fine tune on a book and make an expert model on an extremely niche topic.

  • @dameanvil
    @dameanvil 7 months ago +1

    00:03 🛠 Fine-tuning ChatGPT allows customization for specific use cases, reducing costs and improving efficiency.
    00:37 📄 Fine-tuning with GPT-3.5 Turbo offers improved steerability, reliable output formatting, and a custom tone for desired behavior.
    01:07 📋 Three steps to fine-tuning ChatGPT: prepare data, upload files, and create a fine-tuning job (see the sketch after this list).
    01:42 🖥 Google Colab simplifies fine-tuning and synthetic dataset creation, making it easy with a few clicks.
    01:52 🔄 Use a provided Google Colab script to generate synthetic datasets for fine-tuning, specifying prompts and parameters.
    02:37 🤖 API key creation and system message generation are part of the Google Colab script to facilitate dataset creation.
    03:19 🧾 Format your examples for fine-tuning with a system message, a user message, and an example assistant output.
    04:32 🔄 Upload the formatted file to initiate the fine-tuning process, which takes about 20 minutes.
    05:21 ⌛ Track the fine-tuning progress using the Google Colab script, getting updates on each step.
    05:23 ✔ Successfully completed fine-tuning results in a new custom GPT-3.5 Turbo model with a specific name.
    05:36 🧾 Save the model name for future API calls and testing.
    05:40 🧪 Test the fine-tuned model by replacing sample values with custom prompts to get desired responses.
    06:16 🚀 Customized models can be used for personal, business, or chatbot applications, offering a tailored AI experience.
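
    A minimal sketch of those three steps with the openai Python SDK (v1+); the file name, system prompt, and example content below are placeholders, not the notebook's exact code:

    import json
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Step 1: prepare data - one JSON object per line (JSONL), each with system/user/assistant messages
    example = {"messages": [
        {"role": "system", "content": "You are a helpful assistant with a custom tone."},
        {"role": "user", "content": "Example question"},
        {"role": "assistant", "content": "Example answer in the desired style"},
    ]}
    with open("training_data.jsonl", "w") as f:
        f.write(json.dumps(example) + "\n")  # in practice, write at least ~10 such lines

    # Step 2: upload the file
    training_file = client.files.create(file=open("training_data.jsonl", "rb"), purpose="fine-tune")

    # Step 3: create the fine-tuning job, then poll until it finishes (about 20 minutes in the video)
    job = client.fine_tuning.jobs.create(training_file=training_file.id, model="gpt-3.5-turbo")
    job = client.fine_tuning.jobs.retrieve(job.id)   # repeat until job.status == "succeeded"
    print(job.fine_tuned_model)                      # save this model name for later API calls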

  • @jackiekerouac2090
    @jackiekerouac2090 11 months ago +3

    Great video, which brought up 4 questions:
    1) Can you use that process with free accounts?
    2) How secure is it, especially if you upload personal files?
    3) Let's say I have a 300-page novel in draft mode, can I "securely" upload it in GPT?
    4) Is there a way to use ChatGPT as a standalone tool, for your own stuff only?

    • @colecrum3542
      @colecrum3542 11 months ago

      I recommend you research tokens and how this language model ingests lots of data. It can't take a 300-page novel all at once. In theory you need a way to break your book up into word chunks for GPT to understand. Also look into Pinecone and vector data storage to do that.
      OpenAI says they do not use private documents to tune their models, so it's safe to upload your own info.
      Using GPT as your own tool is a vague question, but if you mean as your own personal chatbot, kinda. You can see in this video that you can tune your bot to give specific answers, but feeding it your own data and then asking it to take action on that data is difficult for it to do. I again refer you back to Pinecone and data storage with LLMs.
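
      As a rough sketch of the "break it into chunks" idea (chunk size and overlap are arbitrary; the actual vector store, e.g. Pinecone, is left out):

      from openai import OpenAI

      client = OpenAI()

      def chunk_text(text, chunk_words=800, overlap=100):
          # naive word-based chunking; real pipelines often split on tokens or paragraphs instead
          words = text.split()
          step = chunk_words - overlap
          return [" ".join(words[i:i + chunk_words]) for i in range(0, len(words), step)]

      with open("novel.txt") as f:
          chunks = chunk_text(f.read())

      # embed each chunk; the vectors (plus the chunk text as metadata) would then go into a vector DB
      embeddings = [client.embeddings.create(model="text-embedding-ada-002", input=c).data[0].embedding
                    for c in chunks]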

  • @mirmisbahuddin9921
    @mirmisbahuddin9921 11 months ago +3

    Please prepare a video on finetuning of llama-2-7B using colab

  • @tunestyle
    @tunestyle 11 months ago

    Working on this now. You are an absolute wealth of information!!!

  • @mokiloke
    @mokiloke 11 months ago +2

    Yo, yo, thanks for the great videos. Makes my day.

  • @SRSGMAG
    @SRSGMAG 11 months ago +2

    Why can't you just type the same command in the chat prompt instead of all this?

  • @ramsesmendoza8951
    @ramsesmendoza8951 11 months ago +7

    I was really excited about this, until I found out that the training is also censored. I even got an email from them giving me 2 weeks to fix the problem.
    BTW, the data was not even NSFW.
    So right now it's great for business and family-friendly stuff.

    • @mattbarber6964
      @mattbarber6964 11 months ago +2

      Same here. What a bummer. It's basically nothing more than a dumbed down version of what langchain or Code Interpreter can already produce from a PDF upload.

    • @blacksage81
      @blacksage81 11 months ago

      I had a feeling it would end up like this. It looks like people who need a fine-tuned LLM for off-the-beaten-path purposes will need to grab a 7B model with at least 10GB of VRAM to run it comfortably locally, or rent a GPU. Hm, I wonder if there will be a market for 1B or 3B LLMs?

  • @CoachDeb
    @CoachDeb 9 months ago +1

    GREAT video on fine-tuning!
    One of the BEST!
    Now that GPT-4 Turbo was just released -- will we still be able to do this fine-tuning programming?
    Or will it be obsolete -- now that we will all have GPTs and assistants to do things like this for us?

  • @federicoloffredo1656
    @federicoloffredo1656 11 months ago +3

    Great video, it worked for me and that's already great. Just a question: now I have a "personal" model, but in practice how can I use it? How can I change it? It's not so clear to me...

    • @Truevined
      @Truevined 11 months ago

      As of right now, the models can only be used when making API calls. These models don't show up in the ChatGPT web UI - hopefully they add that in the future. I don't believe you can change your fine-tuned model, but the only way to know for sure is to test ;)
      Also, keep in mind you can achieve maybe 90% of what Matt showed in this video via prompting in the ChatGPT web UI (even the example in the video) - but this is a very powerful feature to have for more advanced use cases (see the blog post Matt linked in the description for more details).
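
      For reference, calling a fine-tuned model through the API is just a normal chat-completion call with the ft: model name (a sketch; the model name below is made up):

      from openai import OpenAI

      client = OpenAI()
      response = client.chat.completions.create(
          model="ft:gpt-3.5-turbo-0613:my-org::abc123",  # hypothetical fine-tuned model id from the job
          messages=[
              {"role": "system", "content": "The same system message used during fine-tuning"},
              {"role": "user", "content": "Your question here"},
          ],
      )
      print(response.choices[0].message.content)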

  • @johnnvula7246
    @johnnvula7246 7 months ago +1

    That is cool.
    So can I reuse the same API to integrate it into some other platform like Flutter to create an app? Or do I have to create another API according to the fine-tune?

  • @rochtm
    @rochtm 11 months ago +1

    Thank you 👍

  • @testadrome
    @testadrome 11 months ago +3

    How is fine-tuning helping to reduce the cost of inference? Price per 1K tokens is 8x higher on fine-tuned vs. base models

    • @matthew_berman
      @matthew_berman 11 months ago +2

      Because you can use fewer tokens (less explanation) but still get the result you want.

    • @thenoblerot
      @thenoblerot 11 months ago +3

      The price compared to untuned 3.5 is higher, but the fine tuned model can perform as well as gpt-4, for cheaper.

  • @RichardGetzPhotography
    @RichardGetzPhotography 11 months ago +3

    When fine-tuning, do I always have to use the roles format? Can I upload a bunch of docs and have it gain the voice from there? Say I want it to speak in an engineering tone, would uploading our engineering papers aid in that? If I do have to use the roles format, then how do I fine-tune on my data for knowledge?

    • @matthew_berman
      @matthew_berman 11 months ago +2

      Wow thank you so much!
      Yes, you have to use the roles format. If you want it to have access to additional knowledge, such as from your engineering papers, you’re probably looking to do RAG, aka using a vector DB to store info and provide additional context at inference-time. Does that make sense? Feel free to shoot me an email (address in my bio) if you want more info.
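
      A bare-bones illustration of that RAG pattern (the vector search is stubbed out; in practice it would query Pinecone, pgvector, or similar):

      from openai import OpenAI

      client = OpenAI()

      def search_vector_db(query_embedding, k=3):
          # stub: return the k stored chunks whose embeddings are closest to the query embedding
          return ["...relevant excerpt 1...", "...relevant excerpt 2...", "...relevant excerpt 3..."]

      question = "What does section 4.2 of the spec require?"
      query_embedding = client.embeddings.create(model="text-embedding-ada-002", input=question).data[0].embedding
      context = "\n\n".join(search_vector_db(query_embedding))

      answer = client.chat.completions.create(
          model="gpt-3.5-turbo",
          messages=[
              {"role": "system", "content": "Answer using only the provided context."},
              {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
          ],
      )
      print(answer.choices[0].message.content)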

    • @echofloripa
      @echofloripa 11 months ago

      @@matthew_berman I have a similar question. I need ChatGPT to be aware of the information in some laws. I know about the vector database, but wouldn't it be better to have both? Training with the law texts and also having access to the vector database with the exact text?

    • @joepropertykey3612
      @joepropertykey3612 11 months ago

      @@matthew_berman Curious about this too. I wouldn't want to use one of the usual suspects like Supabase or whatnot, just because Postgres extensions for creating vector databases are booming. The Postgres extensions pg_embedding and pgvector will let you install a vector DB on a local machine with Postgres.

    • @rasterize
      @rasterize 11 months ago +2

      In my understanding, you don't actually store specific data when you fine-tune. It's more like changing the flavor of how responses will be delivered back to you. It might remember bits and pieces when using a fine-tune, but it is a very inaccurate and cumbersome way to store data for retrieval. With RAG and a vector DB, the model will search for and retrieve the specific data that is most likely to be what you're looking for. With a fine-tune, it will respond with data that seems most likely to look correct in the context you give it. So it will look like law data and feel very official, but references and numbers are just made to feel right.
      Hope this makes sense 😊 @@echofloripa

    • @echofloripa
      @echofloripa 11 months ago +1

      @@rasterize I understand your point, yes, but I'm still not totally convinced. 😀 I guess I'll have to test it out. I did with a small LLM; I will try with Llama 2 and GPT-3.5.

  • @chanansiegel834
    @chanansiegel834 11 months ago +1

    Can I fine-tune on my own computer and then upload the model to OpenAI? I have some medical reports which I would like the AI to learn how to write, but I have to be careful about who has access to those reports.

  • @p.c.336
    @p.c.336 6 months ago

    Thank you very much for this compact and helpful video.
    Do you have any idea why OpenAI made this fine-tuning thing "too" structured and not flexible?
    Some other third-party tools let you just upload your files, Q&As, etc. and start asking questions.
    I mean, I can expect structured answers for my use case by giving some detailed instructions, but why should I feed it in a very strict way to get that result?

  • @VaibhavPatil-rx7pc
    @VaibhavPatil-rx7pc 11 months ago

    Excellent information, thanks

  • @IrmaRustad
    @IrmaRustad 11 months ago

    You are just brilliant!

  • @echofloripa
    @echofloripa 10 months ago

    I did a test using questions and answers about the election process in Brazil. It had 67 questions and answers. I tried the default 3 epochs, then 5, 7, and even 12. In none of the cases did I manage to get the same response I had trained on, for the exact same system message and user message. I tried in Portuguese and in English, and the result was the same.
    Yes, it gave a different response compared to the base model, but still never a correct answer.
    For the English dataset test I trimmed the 67 questions down to only 10. You can check the training loss using the API, and the numbers were erratic.
    I guess that, at least with gpt-3.5-turbo fine-tuning, it's not possible to get it to increase its knowledge. I did some tests with open-source LLMs, but I still have to train with Llama 2.
    Maybe fine-tuning isn't really fit for that, and you have to use embeddings and vector databases to achieve it.

  • @nor3299
    @nor3299 11 months ago +2

    Yeah, that was amazing, but is there a method through which we can create the dataset using the data from a PDF file, etc.?

    • @matthew_berman
      @matthew_berman 11 months ago +4

      Yes. But likely you just want to use a vector db instead of fine-tuning.

    • @GenericMeme42
      @GenericMeme42 11 months ago

      @@matthew_berman I’d much rather train a model on my data. The vector db seems like it would fail on large topics where the relevant answer requires a significant return from the vector db that may exceed the context window or cost a lot in tokens.

  • @richardbasile
    @richardbasile 11 months ago

    Hey Matthew! I love your videos, keep up the great work. I was wondering how I could deploy a fine-tuned LLM to a service or ChatBot like you mentioned at the end of the video? It seems like an interesting concept but I have yet to find any videos on it.

  • @echofloripa
    @echofloripa 11 months ago +1

    What if I want to train with questions and answers from my niche (a specific area of law) and afterwards train with the full text of several laws - can I do that?

    • @benmak5326
      @benmak5326 11 months ago +2

      I am doing that. You want to get a specific case law or statute and do a start-to-finish output (case, breach, remedy, outcome, precedent); give it high-quality samples - text - and refine :)

    • @echofloripa
      @echofloripa 11 months ago +1

      @@benmak5326 could you detail that please?

    • @1242elena
      @1242elena 11 months ago +1

      Please share once it's done

    • @echofloripa
      @echofloripa 10 months ago

      @@benmak5326 I did a test using questions and answers about the election process in Brazil. It had 67 questions and answers. I tried the default 3 epochs, then 5, 7, and even 12. In none of the cases did I manage to get the same response I had trained on, for the exact same system message and user message. I tried in Portuguese and in English, and the result was the same.
      Yes, it gave a different response compared to the base model, but still never a correct answer.
      For the English dataset test I trimmed the 67 questions down to only 10. You can check the training loss using the API, and the numbers were erratic.
      I guess that, at least with gpt-3.5-turbo fine-tuning, it's not possible to get it to increase its knowledge. I did some tests with open-source LLMs, but I still have to train with Llama 2.
      Maybe fine-tuning isn't really fit for that, and you have to use embeddings and vector databases to achieve it.

  • @kingturtle6742
    @kingturtle6742 5 months ago

    Can the content for training be collected from ChatGPT-4? For example, after chatting with ChatGPT-4, can the desired content be filtered and integrated into ChatGPT-3.5 for fine-tuning? Is this approach feasible and effective? Are there any considerations to keep in mind?

  • @MetaphoricMinds
    @MetaphoricMinds 11 months ago +1

    The only problem I have with some of your videos is, they are too high-level. Sometimes you rush through sections that you may have explained in a previous video. I understand you can't go all the way into each aspect due to time, but maybe a quick reference on "how to do that". Just a suggestion. Otherwise, great content!

    • @glenraymond379
      @glenraymond379 11 months ago

      I think it's on purpose, so we feel the need to buy his Patreon...

  • @MetaphoricMinds
    @MetaphoricMinds 11 months ago +1

    There seems to be an awful lot of work to get this to work. How hard would it be to create an application that requires no code? You just open it, input your requirements in the fields provided, and voilà...

  • @p0gue23
    @p0gue23 11 months ago +2

    Is there a tool anywhere that will convert text to system/user/assistant JSON format for fine-tuning?

    • @mungojelly
      @mungojelly 11 months ago +1

      Um, ask GPT-4 to write you a converter that fits your data. Give it examples of the data and the format you want, and it should be able to write you a script to convert it.

    • @p0gue23
      @p0gue23 11 months ago

      @@mungojelly Interesting. I've actually used gpt to convert text to JSONL, but I didn't consider having it write a script. Time to experiment...
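
      The kind of converter GPT can write is usually just a few lines; for example, assuming the source text is tab-separated question/answer pairs (file names and the system message are made up):

      import json

      SYSTEM = "You are a helpful assistant."  # whatever system message the fine-tune should use

      with open("qa_pairs.tsv") as src, open("train.jsonl", "w") as out:
          for line in src:
              question, answer = line.rstrip("\n").split("\t", 1)
              record = {"messages": [
                  {"role": "system", "content": SYSTEM},
                  {"role": "user", "content": question},
                  {"role": "assistant", "content": answer},
              ]}
              out.write(json.dumps(record) + "\n")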

  • @hqcart1
    @hqcart1 11 months ago +1

    I don't get it - what was the data you trained the model on??

    • @matthew_berman
      @matthew_berman 11 months ago

      What do you mean?

    • @TzaraDuchamp
      @TzaraDuchamp 11 months ago +1

      It’s synthetic data that is based on the prompt that you create in the Colab Notebook by calling GPT-4 (recommended, but more costly) or 3.5. In this video, he used number_of_examples = 50. That means 50 synthetic examples are created.
      Why use GPT-4 for this? Because that model is more advanced than 3.5 and gives more consistent and expected output. When creating synthetic data for fine-tuning you want it to conform to your standards as much as possible.
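
      Conceptually, the generation step boils down to something like this (a loose sketch, not the notebook's actual code; the meta-prompt wording is invented):

      from openai import OpenAI

      client = OpenAI()
      number_of_examples = 50
      task_description = "A chatbot that answers cooking questions in a cheerful tone."

      examples = []
      for _ in range(number_of_examples):
          resp = client.chat.completions.create(
              model="gpt-4",  # pricier than 3.5, but its output is more consistent, which matters for training data
              messages=[{"role": "user", "content":
                  f"Write one example user question and an ideal assistant answer for this task: {task_description}"}],
          )
          examples.append(resp.choices[0].message.content)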

  • @wiseshopinfo
    @wiseshopinfo 11 months ago

    Hi Matthew, thank you so much for this tutorial, this is mind-blowing.
    I was wondering if you could help me with the create-completion process so I could use the generated model as an integration for other platforms?

  • @mernsolution
    @mernsolution 7 months ago

    After building the model, I use it in Node.js. Thanks!

  • @mdnpascual
    @mdnpascual 9 months ago

    I want to make a chatbot for a retailer. A customer can prompt "suggest a gift item for an 8-year-old girl who loves etc. etc." Is this the right solution for me? If yes, does the training data always need to be in question/response form? I already have the retailer's dataset of their catalogue plus tags, like toys, education, kitchen, etc. How can I format the data so ChatGPT can do it?

  • @jeffnall5206
    @jeffnall5206 3 months ago

    The Colab example no longer functions. I get an APIRemovedInV1 exception. I tried to run "openai migrate" as suggested but that won't work here.

    • @moe_chami
      @moe_chami 1 month ago

      Were you able to find a fix to the colab?
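
      If it is only the SDK change, the pre-1.0 calls the notebook used map to the new client-style calls roughly like this (an assumption, not a verified fix; the other common workaround is pinning the old SDK with pip install "openai<1.0"):

      from openai import OpenAI

      client = OpenAI()

      # pre-1.0: openai.File.create(...) and openai.FineTuningJob.create(...)
      # 1.0+ equivalents:
      f = client.files.create(file=open("training_data.jsonl", "rb"), purpose="fine-tune")
      job = client.fine_tuning.jobs.create(training_file=f.id, model="gpt-3.5-turbo")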

  • @themvpdev
    @themvpdev 4 months ago

    When, where or how can we use ChatGPT-4-turbo instead of 3.5-turbo?

  • @Aidev7876
    @Aidev7876 11 months ago

    Just to understand: the data you generated with the 50 entries, the system prompt, temperature, etc. - is everything stored under your OpenAI account somewhere in the cloud? And does GPT append it as context before running the query?

  • @Nick_Tag
    @Nick_Tag 11 months ago

    Might be a silly question but for actually using the fine tuned model (either as a chat bot or within other apps) how would you achieve that? I guess there would be a unique API key and model name to put in relevant places for apps, but is there a recommended 'ChatGPT-like app' where you would just paste those in as variables and get a similar experience. Doing it inside of the same colab seems a little clunky?

  • @marcfruchtman9473
    @marcfruchtman9473 11 months ago

    Awesome.

  • @AshiqKhan-ky5cu
    @AshiqKhan-ky5cu 8 months ago

    Hey, thanks for this video. I have some large PPT files and I want to fine-tune on their content. How can I achieve this?

  • @mrmastr
    @mrmastr 6 months ago +1

    This doesn't work anymore

  • @muh6131
    @muh6131 10 months ago

    Very good. I have a question: can I use gpt-3.5 in your code? I got an error. Thanks

  • @vKILLZ0NEv
    @vKILLZ0NEv 9 months ago

    Does the format of the dataset have to be system, user, assistant as shown here?

  • @anshgoestoschool
    @anshgoestoschool 11 months ago

    Can a fine-tuned model be fine-tuned further by adding more examples and training again?

  • @trackerprince6773
    @trackerprince6773 9 months ago

    Matthew, if I have a database of PDF docs and I want to fine-tune gpt-3.5-turbo on a private knowledge base, how can I use GPT-4 to create the training data? Also, would I need to fine-tune again if new docs are added to my knowledge base, or would I just add them to the vector DB and query my custom model?

    • @p.c.336
      @p.c.336 6 months ago

      I recently asked a similar question and noticed that you haven't received an answer yet. Could you share your experience if you've found a solution? If not, here's my plan for my use case that might inspire you or others: I plan to consolidate my knowledge base into a few PDF files and ask ChatGPT (4) to convert it into a dataset that I can use to feed my own fine-tuned model. And proceed with the rest as explained here.

  • @wiseshopinfo
    @wiseshopinfo 11 months ago

    Sorry for taking advantage of your goodwill, but I have one more question.
    I am trying to integrate the model trained through Colab (fine-tuned) into a chatbot; the chatbot I am using currently only supports text models like davinci-003. Is there a Colab similar to the one you shared with us on YouTube that does the same with models like davinci?
    Also, is there a way to delete or rename models that have already been created? I've been scratching my head trying to find that.
    And last one (so, so sorry): I trained a model to act like an employee of an online commerce site, using a detailed prompt with all the details and everything else, but when testing the model in the Playground it does not act the way it was supposed to. Am I doing something wrong?

  • @oryxchannel
    @oryxchannel 11 months ago

    0:46 OpenAI can make us behave the way they want us to behave, lol.

  • @09jake12
    @09jake12 11 months ago

    Hi Matthew, I'm looking for something like this that searches the internet for actual data to train on, rather than synthetic data, because my use case requires updated and recent data. (say 2021 and later) Could you point me in the right direction? Thanks!

  • @andreaswinsnes6944
    @andreaswinsnes6944 11 months ago +1

    Can you make a video about how to fine-tune an LLM for game modding?

    • @matthew_berman
      @matthew_berman 11 months ago

      Can you clarify ?

    • @ZuckFukerberg
      @ZuckFukerberg 11 months ago

      I would like to know if you are referring to the model roleplaying as an NPC or if you want the model to help you create mods for a game.
      I'm interested in both cases hehe.

    • @andreaswinsnes6944
      @andreaswinsnes6944 11 months ago

      Would be nice to have an AI co-pilot that can quickly search and replace things in AAA games that are based on any significant engine, like Unity or Unreal for instance.
      For example, if I want to find all instances of the word “Sevastopol” in an Aliens game and replace it with “Sierra Leone”, then it can be very tedious to do this manually, since that word can be found on walls in the game, in texts discovered in the game, in spoken NPC dialogues and in subtitles. Would be amazing to have an LLM that can do all this “search and replace” after a single prompt.
      Similarly, it would be cool if an LLM can find all instances of a certain type of vehicle in a game, maybe in Stalker or Fallout 4, and replace it with another type of vehicle, after having been given a single prompt.
      Is it possible to fine-tune an LLM for this kind of modding?

  • @drgnmsr
    @drgnmsr 11 months ago

    Would there be a way to use this to fine-tune a model based on a collection of PDFs?

  • @otaviopmartins
    @otaviopmartins 11 months ago

    Awesome

  • @p0gue23
    @p0gue23 11 months ago

    Useful to see the process, but how is fine-tuning gpt-3.5 with its own output any different from just using stock gpt-3.5 with the same training system message? The fine-tuned version costs 8x more to run.

    • @mungojelly
      @mungojelly 11 months ago

      You've got to actually train it on something, with enough examples that it becomes more than 8x easier to get the right answers - that's a high bar. You can train it to be concise and get to the point, so that helps a bit. But mostly I'd think it's most useful for things you really can't get to within the context window: even if you give a bunch of style notes for a character, that's not going to be as effective as training on a big corpus that sounds right, where it can pick up a zillion little details.

    • @jimigoodmojo
      @jimigoodmojo 11 months ago

      It's not, BUT...
      Specialized fine-tuned GPT-3.5 can replace expensive general models like GPT-4 and get equal or better performance for certain tasks, saving on per-token costs.
      Shorter prompts enabled by specialization directly reduce input token costs.
      Higher quality outputs from fine-tuned models reduce wasted tokens from failures/retries.

  • @thiagofelizola
    @thiagofelizola 11 months ago

    What is the limit of data set for fine tuning?

  • @achille_king
    @achille_king 11 months ago

    I mean, this is nice but I wanted to fine tune chat GPT with the knowledge I got in PDFs.

  • @thenext9537
    @thenext9537 11 months ago +1

    Fine-tuning is a train wreck; if you get really deep you'll find it's not that hot. I don't appreciate being treated like a child with data that's been lobotomized. The pace is accelerating and yet all I keep finding is walls. I keep hitting a great idea, try to execute something, and realize I'm going to need an 11-step flow to accomplish it. Fragmented reality.
    I know there are people out there with a lot of experience thinking the same thing. It's frustrating.

    • @dhananjaywithme
      @dhananjaywithme 11 months ago

      So are you saying the fine-tuning output wasn't that great?
      #FragmentedRealityIndeed

    • @thenext9537
      @thenext9537 11 months ago

      @@dhananjaywithme If you spend enough time, ask something like whether 27 is a prime number. Then tell it it isn't, then tell it it is. Of course, you provide evidence of both outcomes, and it will change its mind constantly. I did this 2 weeks ago, and I tried it now and it's still *SORT OF* true; I think they updated it a bit and these types of things are less so now. I.e., I tell it it is NOT a prime number, it apologizes. I say it IS a prime number, I get "Of course, my apologies". When that happened, I lost all confidence in it and realized this is a tool that needs heavy vetting.
      My point is: exploit and learn. You find something? Keep it close until you can prove it.

  • @aadityamundhalia
    @aadityamundhalia 11 months ago

    How can you do the same with Llama 2 locally?

  • @kocahmet1
    @kocahmet1 5 months ago

    awesome

  • @gajyapatil5224
    @gajyapatil5224 11 months ago

    What is the difference between using ChatGPT fine-tuning and using LangChain?
    I find LangChain more general-purpose and useful for fine-tuning not only GPT models but also open-source models.

    • @thenoblerot
      @thenoblerot 11 months ago +1

      Lang chain is for chaining together prompts and for doing information retrieval. Fine-tuning aligns the model for your desired style of output.

    • @BlissfulBloke
      @BlissfulBloke 11 months ago +2

      @@thenoblerot So one could perhaps use both of these methods for a refined output style on a specific dataset? A set of podcast transcript PDFs, using fine-tuning to have the responses sound like Joe Rogan, for example?

    • @thenoblerot
      @thenoblerot 11 months ago +1

      @@BlissfulBloke Sure. Although you don't NEED LangChain. IMHO, it adds unnecessary abstraction and complexity.
      It seems rude to directly link to another YouTuber here lol, but search for "create Data Star Trek fine tuning gpt-3.5" for a demo on how to fine-tune a persona from a script.

  • @eyemazed
    @eyemazed 11 months ago

    What's the difference between fine-tuning and just using custom instructions?

    • @mungojelly
      @mungojelly 11 months ago

      Custom instructions just add something automatically to every prompt. "Fine-tuning", as we're euphemistically calling it, means actually waking the robot up, letting it see some data, and having it learn something. You can show it any dataset, including things that aren't in what it was already trained on at all, or things shaped in a very different way, and it'll learn new facts and ideas and processes and formats from that - eventually. It won't really learn much from fifty examples; it needs more like thousands, ideally hundreds of thousands or millions, and then it'll actually grok things.

    • @eyemazed
      @eyemazed 11 months ago

      @@mungojelly But for practical purposes it's much the same as careful prompt engineering or custom instructions, as in you can achieve the same or very similar responses by prefixing your prompts instead of fine-tuning the entire model?

  • @trailblazer7108
    @trailblazer7108 11 months ago

    Please can you test the Phind CodeLlama 32B model - apparently better than ChatGPT-4.

  • @mastamindchaan387
    @mastamindchaan387 11 months ago

    I'm sorry, but the tutorial is really bad. He just reads out what's on the page, and I can read on my own. I still don't know what I'm supposed to do next.
    I don't understand. So, when I click on upload, my model is automatically uploaded to OpenAI's servers? And then I can use my model on the ChatGPT site just like usual?
    What should I put in the Tokens field? The default is 1000 tokens. But what if I want to train with more than 1000 words? Do I change it to 200k tokens when I want to train with 200k words, or not?
    I also don't get the "Role, System, Content" part. So, do I have to set everything up beforehand? Like, if I input "dog" as the prompt, should ChatGPT respond with "nice"?!
    Example:
    Role: You are my teacher
    User: Tell me how much 5+5 is
    Role/Answer/output should be: Sure, I will help you; 5+5 is 10
    Role: You are answering me as someone interested in books
    User: Name of the guy from Harry Potter?
    Role/output should be: His name is Harry Potter
    I still have no idea how to train ChatGPT, for example, to learn the content of a book and then ask it questions. I can't preconfigure every question and answer beforehand?
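
    For what it's worth, each training example is just one JSON line containing those three roles; mirroring the example above, it would look roughly like this (content invented for illustration; each example sits on a single line in the actual .jsonl file, wrapped here for readability):

    {"messages": [{"role": "system", "content": "You are my teacher."},
                  {"role": "user", "content": "Tell me how much 5+5 is"},
                  {"role": "assistant", "content": "Sure, I will help you: 5+5 is 10."}]}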

  • @moon8013
    @moon8013 11 months ago

    nice

  • @claudiodev8094
    @claudiodev8094 11 months ago

    You don't want to finetune your models, it's not worth it. What little you gain in stability and save in prompting is lost immediately since you now have a static model that is what it is and nothing else. Invest in proper prompts and validation instead

  • @428manish
    @428manish 11 months ago

    getting error: Exception has occurred: InvalidRequestError
    Resource not found
    File "C:\D-Drive\AI-ChatBot\ChatBot-chatGpt\fineTune-upload.py", line 22, in
    response = openai.FineTuningJob.create(
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    openai.error.InvalidRequestError: Resource not found