Better than most paid courses online! Thanks.
Thank you very much :)
I've been using the function ChatOpenAI() rather than OpenAI() to call the model "gpt-3.5-turbo", which costs $0.002 rather than $0.025. It's cheaper and more powerful, and it can still be used for standard querying.
Hello ma'am, can you please make a video on usage costs and other cost factors of the OpenAI API?
Would be nice to make the same video but for Llama-2, which can run in our private cloud. Many companies don't want to use OpenAI because of data privacy concerns. Also, Llama-2 is completely free and can be run locally.
Still would be useful.
Thanks, Sreeni. Your content is always the best!
Thank you very much.
I have a PDF with thousands of pages. Is GPT-4 able to understand and memorize all of it? My questions about this big PDF need to correlate all of the information.
GPT is general purpose, and it's been trained on millions of pieces of text so that it can understand human language. Sure, it might be able to answer specific questions based on the information it was trained on - for example, "Who is the CEO of Google?" - but as soon as you need to produce specific results based on your product, results will be unpredictable and often just wrong. GPT-3 is notorious for confidently making up answers that are just plain wrong.
There are two approaches to address this:
1) Fine-tune the model - you need to retrain the model with your own custom data, and again every time new data is added
2) Context injection - pre-process your knowledge base into embeddings and store them as objects or in a database; for each user query, search the knowledge base for the most relevant info and inject the top pieces into the actual prompt as context
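The context-injection flow above can be sketched in a few lines of Python. This is only a toy illustration, not the video's actual code: the bag-of-words "embedding" and the chunk texts are stand-ins for a real embedding model (e.g. OpenAI embeddings) and a real knowledge base, and the final prompt would be sent to an LLM rather than printed.

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy embedding: a bag-of-words vector (stand-in for a real embedding model)."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# 1) Pre-process the knowledge base into embeddings and store them.
chunks = [
    "Our product ships with a 2-year warranty.",
    "Support is available by email on weekdays.",
    "The CEO of Google is Sundar Pichai.",
]
store = [(chunk, embed(chunk)) for chunk in chunks]

# 2) For each user query, search the store for the most relevant chunks...
query = "What warranty does the product have?"
q_vec = embed(query)
top = sorted(store, key=lambda item: cosine(q_vec, item[1]), reverse=True)[:2]

# 3) ...and inject the top pieces into the actual prompt as context.
context = "\n".join(chunk for chunk, _ in top)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```

The key design point is that the model's weights never change: only the prompt changes from query to query, which is why new documents can be added by just embedding and storing them.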
For very specific data extraction, do you think it'd be better to train your own model, for instance using LayoutLMv3?
Great material! Thanks for sharing, good job 🚀
Thank you for your great videos. Just a quick note: you are not training anything here, you're building a RAG system. You could say "training" if you were optimizing the parameters of a model (e.g. a neural net) to minimize a loss function.
Hi Sreeni,
I enjoy your content every time I see it.
Just a question: why did you jump from 311 to 323?
Good observation. I have already created content and written code for the remaining videos (312-322) and they focus on image analysis and optimization techniques. I recorded a couple more language model videos based on viewer questions so I had to assign them new numbers that do not follow the sequence. I don't want to reshuffle all numbers or wait a few months to release another language model video.
Hi Sreeni! Love the content, everything's always amazingly explained. I was wondering if you were planning on covering the YOLOv7 algorithm. It would be really interesting to see a video of you covering it and hearing your take on it.
Keep up the good content :)
Great video. Thanks!👍
Thank you too!
Thank you very much for this great video! Could you please let me know whether we used ChatGPT or GPT-4 here? And it's not fine-tuning here, it's embedding, right? Which one do you think is better: fine-tuning or embedding? Thank you very much!
Great vid. Thank you for your time and effort on these vids.
This was awesome! I never do any coding, and I was able to follow along and do it.
You are amazing, mate. Thank you for the awesome lectures.
Nice tutorial... how could I limit the chatbot to only the topics in the PDFs? For example, in cases where the chatbot must not answer.
Thank you! It's exactly what I was looking for.
Sir, can you please make a video on API usage costs and other cost factors!
Always appreciate your work. Thanks sir...
Thanks!
Thank you
Thanks
Thank you
The biggest problem is the API key. Try to make it without the OpenAI company involved. What happens if you don't renew your API subscription? Will the pipeline just stop working?
May I ask if this tutorial example simply extracts the content from the PDF article as context and sends it along with the question to the OpenAI API? Or is there any training being done locally? I'm curious about this because the video mentioned the use of an API key. Thank you.
Regarding tokenization, when you use the OpenAI API, both your PDF data and your question will go through tokenization processes. The text from your PDF file will be tokenized to prepare it for input to the model, and your question will also be tokenized to match the model's input format. The tokenization ensures that the text is divided into smaller units that the model can process.
The tokenizations for your PDF data and question are independent of each other. The model doesn't directly compare the tokenizations to extract relevant content from your PDF file. Instead, the model processes the tokenized input and generates responses based on its understanding of the language and context. The model doesn't have direct access to the original PDF data or its specific tokenization.
OpenAI doesn't have access to your data!
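To make the tokenization point above concrete, here is a toy sketch. Real OpenAI models use a byte-pair-encoding tokenizer (available via the `tiktoken` library); this simplified word-level version only illustrates the idea that the PDF text and the question are each turned into ID sequences by the same tokenizer, and the model sees only those IDs, not the raw PDF.

```python
import re

def tokenize(text, vocab):
    """Toy word-level tokenizer: map each word to an integer ID,
    assigning new IDs on first sight (stand-in for real BPE tokenization)."""
    ids = []
    for word in re.findall(r"\w+", text.lower()):
        if word not in vocab:
            vocab[word] = len(vocab)
        ids.append(vocab[word])
    return ids

vocab = {}  # shared tokenizer vocabulary, applied independently to each text
pdf_tokens = tokenize("The warranty lasts two years.", vocab)
question_tokens = tokenize("How long does the warranty last?", vocab)

# The two texts are tokenized independently; shared words get shared IDs,
# but the model only ever receives these ID sequences.
print(pdf_tokens)       # [0, 1, 2, 3, 4]
print(question_tokens)  # [5, 6, 7, 0, 1, 8]
```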
You need an API key to add the OpenAI API layer to your model.
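For anyone wondering how the key is usually wired in: the common pattern is to keep it in an environment variable rather than hard-coding it in the script. A minimal sketch using only the standard library (the `sk-placeholder` value below is a made-up placeholder, not a real key):

```python
import os

# Keep the key out of source code: export OPENAI_API_KEY in your shell,
# then read it at runtime. The placeholder below is only for illustration
# so this snippet runs even without a real key set.
os.environ.setdefault("OPENAI_API_KEY", "sk-placeholder")

api_key = os.environ["OPENAI_API_KEY"]
print("Key loaded:", api_key[:3] + "...")  # never log the full key
```

The OpenAI client libraries conventionally look for this `OPENAI_API_KEY` variable, so once it is exported you don't need to pass the key around in code.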
No training is happening, just a vector match of embeddings. I've used the term 'training' in the tutorial, but what I should have said was that embeddings are being matched.
@DigitalSreeni Thank you so much! 😊
Is it better than the chatwithpdf plugin model?
Does it work on a CSV filled with numeric data, converted to PDF and then imported?
Where's the training?
Can you link the txt file you used?
But LangChain is free?
But IDK how to code😢😢😢😢😢😂😂
Don't worry. There are a lot of service providers out there that let you train your own chatbots; it just costs some $$$
Thanks!
Thank you.