323 - How to train a chatbot on your own documents?

  • Published: 8 Nov 2024

Comments • 46

  • @Ethan-gs5ib
    @Ethan-gs5ib a year ago +8

    Better than most paid courses online! Thanks.

  • @alisonwright2189
    @alisonwright2189 a year ago +13

    I've been using the class ChatOpenAI() rather than OpenAI() to call the model "gpt-3.5-turbo", which costs $0.002 rather than $0.025. Cheaper and more powerful, and it can still be used for standard querying.
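The savings the comment describes are easy to sanity-check with a little arithmetic (a sketch using the per-1K-token prices quoted above, which are the commenter's figures, not necessarily current OpenAI pricing):

```python
def cost_usd(tokens: int, price_per_1k_tokens: float) -> float:
    """Estimate the cost of a request from its token count."""
    return tokens / 1000 * price_per_1k_tokens

tokens = 10_000  # e.g. a day's worth of small queries
print(cost_usd(tokens, 0.002))  # gpt-3.5-turbo at the quoted rate
print(cost_usd(tokens, 0.025))  # the pricier model at the quoted rate
```

At these rates the cheaper model is 12.5x less expensive for the same token volume.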

    • @vishnuvardhanvaka
      @vishnuvardhanvaka a year ago

      Hello ma'am, can you please make a video on usage costs and other cost factors of the OpenAI API?

  • @robosergTV
    @robosergTV a year ago +8

    It would be nice to make the same video but for Llama-2. Llama-2 can run in our private cloud. Many companies don't want to use OpenAI because of data privacy concerns. Also, Llama-2 is completely free and can run locally.

    • @turck_pharao
      @turck_pharao 6 months ago

      Still would be useful.

  • @pabolusatyavivek9481
    @pabolusatyavivek9481 a year ago +1

    Thanks, Sreeni. Your content is always the best!

  • @BlazeArteryak
    @BlazeArteryak a year ago +4

    I have a PDF with thousands of pages. Is GPT-4 able to understand and memorize all of it? My questions about this big PDF need to correlate all the information.

  • @souravran
    @souravran a year ago +2

    GPT is general purpose and has been trained on millions of pieces of text so that it can understand human language. Sure, it might be able to answer specific questions based on the information it was trained on - for example, "Who is the CEO of Google?" - but as soon as you need to produce specific results based on your product, results will be unpredictable and often just wrong. GPT-3 is notorious for confidently making up answers that are just plain wrong.
    There are two approaches to address this:
    1) Fine-tune the model - retrain the model with your own custom data, and again every time new data is added
    2) Context injection - pre-process your knowledge base into embeddings, store them as objects or in a database; based on the user's query, search your knowledge base for the most relevant info and inject the top matches into the actual prompt as context
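The context-injection approach above can be sketched end to end with a toy embedding (a bag-of-words count standing in for a real embedding model; the chunks and query are made-up examples):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words count. Real systems use a learned
    embedding model instead, but the retrieval logic is the same."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# 1) Pre-process the knowledge base into embeddings and store them.
chunks = [
    "Our product ships with a REST API and a Python SDK.",
    "Refunds are processed within 14 days of purchase.",
]
index = [(chunk, embed(chunk)) for chunk in chunks]

# 2) Based on the user's query, search for the most relevant chunk...
query = "how long do refunds take"
best = max(index, key=lambda item: cosine(embed(query), item[1]))[0]

# 3) ...and inject it into the actual prompt as context.
prompt = f"Answer using only this context:\n{best}\n\nQuestion: {query}"
print(best)
```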

    • @carlos.duclos
      @carlos.duclos a year ago

      For very specific data extraction, do you think it'd be better to train your own model, for instance using LayoutLMv3?

  • @humaitrix
    @humaitrix 7 months ago

    Great material! Thanks for sharing, good job 🚀

  • @amnn8507
    @amnn8507 3 months ago

    Thank you for your great videos. Just a quick note: you are not training anything here, you're building a RAG system. You could say "training" if you were optimizing the parameters of a model (e.g. neural nets) to minimize a loss function.

  • @AdnanKhan-mi2kf
    @AdnanKhan-mi2kf a year ago +1

    Hi Sreeni,
    I enjoy your content every time I see it.
    Just a question: why did you jump from 311 to 323?

    • @DigitalSreeni
      @DigitalSreeni  a year ago

      Good observation. I have already created content and written code for the remaining videos (312-322) and they focus on image analysis and optimization techniques. I recorded a couple more language model videos based on viewer questions so I had to assign them new numbers that do not follow the sequence. I don't want to reshuffle all numbers or wait a few months to release another language model video.

  • @AlexDerBar
    @AlexDerBar a year ago +1

    Hi Sreeni! Love the content, everything's always amazingly explained. I was wondering if you were planning on covering the YOLOv7 algorithm. It would be really interesting to see a video of you covering it and your takes on it.
    Keep up the good content :)

  • @develom_ai
    @develom_ai a year ago +1

    Great video. Thanks!👍

  • @happyg8682
    @happyg8682 a year ago +1

    Thank you very much for this great video! Could you please let me know whether we used ChatGPT or GPT-4 here? And it's not fine-tuning here, it's embedding, right? Which one do you think is better, fine-tuning or embedding? Thank you very much!

  • @TLogan-eu7qt
    @TLogan-eu7qt a year ago +1

    Great vid. Thank you for your time and effort on these vids.

  • @deanstarkey4375
    @deanstarkey4375 a year ago

    This was awesome! I never do any coding, and I was still able to follow along and do it.

  • @drayhancolak
    @drayhancolak a year ago

    You are amazing, mate. Thank you for the awesome lectures.

  • @amedyasar1021
    @amedyasar1021 a year ago

    Nice tutorial... how could I limit the chatbot to only the topics in the PDFs? For example, so that the chatbot must not answer off-topic questions.

  • @91255438
    @91255438 a year ago

    Thank you! It's exactly what I was looking for.

  • @vishnuvardhanvaka
    @vishnuvardhanvaka a year ago

    Sir, can you please make a video on the usage costs of the API and other cost factors!

  • @mdabdullahalhasib2920
    @mdabdullahalhasib2920 a year ago

    Always appreciate your work. Thanks, sir...

  • @ronaldgourgeot2759
    @ronaldgourgeot2759 a year ago

    Thanks!

  • @a3hindawi
    @a3hindawi 6 months ago

    Thanks

  • @romanemul1
    @romanemul1 a year ago

    The biggest problem is the API key. Try to make it work without the OpenAI company altogether. What happens if you don't extend your API key subscription? Will the pipeline just stop working?

  • @kai-yihsu3556
    @kai-yihsu3556 a year ago

    May I ask if this tutorial example simply extracts the content from the PDF article as context and sends it along with the question to the OpenAI API? Or is any training being done locally? I'm curious about this because the video mentioned the use of an API key. Thank you.

    • @guiomoff2438
      @guiomoff2438 a year ago

      Regarding tokenization, when you use the OpenAI API, both your PDF data and your question will go through tokenization processes. The text from your PDF file will be tokenized to prepare it for input to the model, and your question will also be tokenized to match the model's input format. The tokenization ensures that the text is divided into smaller units that the model can process.
      The tokenizations for your PDF data and question are independent of each other. The model doesn't directly compare the tokenizations to extract relevant content from your PDF file. Instead, the model processes the tokenized input and generates responses based on its understanding of the language and context. The model doesn't have direct access to the original PDF data or its specific tokenization.
      OpenAI doesn't have access to your data!
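A toy tokenizer illustrates the point above that the PDF text and the question are tokenized independently and only then combined into one model input (a whitespace-splitting sketch with made-up text; real models use subword tokenizers such as byte-pair encoding):

```python
def toy_tokenize(text: str) -> list[str]:
    """Toy whitespace tokenizer standing in for a real subword tokenizer."""
    return text.lower().split()

pdf_text = "The warranty covers parts and labour for two years."
question = "What does the warranty cover?"

# The two inputs go through separate tokenization passes...
pdf_tokens = toy_tokenize(pdf_text)
question_tokens = toy_tokenize(question)

# ...and only then are they concatenated into one model input.
model_input = question_tokens + pdf_tokens
print(len(model_input))
```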

    • @guiomoff2438
      @guiomoff2438 a year ago

      You need an API key to add the OpenAI API layer to your model.

    • @DigitalSreeni
      @DigitalSreeni  a year ago +2

      No training is happening, just a vector match of embeddings. I've used the term 'training' in the tutorial but what I should have said was that embeddings are being matched.
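That vector match can be pictured as a cosine-similarity comparison between fixed vectors; nothing is trained and no weights are updated (toy numbers below, not real embeddings):

```python
import math

def cosine(a, b):
    """Cosine similarity between two fixed vectors; no learning involved."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

query_vec = [0.9, 0.1, 0.0]        # toy embedding of the user's question
chunk_vecs = [[0.8, 0.2, 0.1],     # toy embeddings of document chunks
              [0.0, 0.1, 0.9]]
scores = [cosine(query_vec, c) for c in chunk_vecs]
print(scores.index(max(scores)))   # index of the best-matching chunk
```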

    • @kai-yihsu3556
      @kai-yihsu3556 a year ago

      @DigitalSreeni Thank you so much! 😊

  • @BlazeArteryak
    @BlazeArteryak a year ago

    Is it better than the chatwithpdf plugin model?

  • @anshikak3
    @anshikak3 7 months ago

    Does it work for a CSV filled with numeric data, converted to PDF and then imported as the file?

  • @bropocalypseteam3390
    @bropocalypseteam3390 a year ago

    Where's the training?

  • @elibrignac8050
    @elibrignac8050 a year ago

    Can you link the txt file you used?

  • @shubhamdubey9181
    @shubhamdubey9181 7 months ago

    But LangChain is free?

  • @Driftwood-f8d
    @Driftwood-f8d 8 months ago

    But IDK how to code😢😢😢😢😢😂😂

    • @DigitalSreeni
      @DigitalSreeni  8 months ago

      Don't worry. There are a lot of service providers out there that allow you to train your own chatbots; it just costs some $$$.

  • @telexiz
    @telexiz 6 months ago

    Thanks!