Brilliantly explained with clarity and insight, thank you!
Also really pleased you point out that RAG emerged from IR ideas and wasn't brand new: when I saw it I was like, haven't people seen Facebook's DrQA from 2017?! And even that wasn't out of the blue, there's a long-established history with IR 👍
Thank you. I agree, in most cases we are reinventing the wheel and giving old approaches new names. Interestingly enough, a simple keyword-based search (BM25) will still outperform dense embeddings in many cases!
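For anyone curious what BM25 actually does under the hood, here is a rough pure-Python sketch of the scoring formula (the parameters k1=1.5 and b=0.75 are the usual defaults; a real system would use a tested library such as rank_bm25 rather than this toy version):

```python
import math

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each whitespace-tokenized doc against the query with BM25."""
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    avgdl = sum(len(d) for d in tokenized) / n  # average document length
    scores = [0.0] * n
    for term in query.lower().split():
        df = sum(1 for d in tokenized if term in d)  # document frequency
        if df == 0:
            continue
        idf = math.log((n - df + 0.5) / (df + 0.5) + 1)
        for i, d in enumerate(tokenized):
            tf = d.count(term)  # term frequency in this doc
            denom = tf + k1 * (1 - b + b * len(d) / avgdl)
            scores[i] += idf * tf * (k1 + 1) / denom
    return scores

docs = ["the cat sat on the mat",
        "dogs chase cats in the park",
        "quantum computing basics"]
print(bm25_scores("cat mat", docs))
```

The first document matches both query terms exactly and gets the highest score; note BM25 does no stemming here, so "cats" in the second document does not match "cat".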
This is exactly what I've been trying to find for the last couple of days. Simple instructions on how to do this with pure python and local LLM. Thank you!
x2! thanks @prompt engineering!
I just got done implementing an almost identical setup. Used SQLite and fastBart, all in C#. It's amazing.
Nice, I've been wanting to start in C# for RAG... Any tips or guidance for a newbie? I was using KoboldCPP's web UI for LLM generation, but have NO idea where to go. None of these videos even hint at anything with C#, let alone Kobold.
Excellent and concise description. Thank you.
The problem with RAG solutions is they don't hold up with larger amounts of unstructured data. I wish there were a solution that includes long-term memory for chat agents, so they get smarter about your context as you chat with them.
Google released context caching for their long context models. This could be a solution
@engineerprompt is there a way to save and load the vector store that you made here, sir?
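In a no-framework setup like this, the "vector store" is often just a numpy array of embeddings plus the chunk texts, so one simple way to persist it is to save those two pieces side by side (a sketch; the file names and the random stand-in embeddings are made up for illustration):

```python
import json
import numpy as np

def save_store(path_prefix, embeddings, chunks):
    """Persist embeddings as a .npy file and chunk texts as JSON next to it."""
    np.save(path_prefix + ".npy", embeddings)
    with open(path_prefix + ".json", "w", encoding="utf-8") as f:
        json.dump(chunks, f)

def load_store(path_prefix):
    """Reload the embeddings array and the parallel list of chunk texts."""
    embeddings = np.load(path_prefix + ".npy")
    with open(path_prefix + ".json", encoding="utf-8") as f:
        chunks = json.load(f)
    return embeddings, chunks

emb = np.random.rand(3, 4).astype("float32")  # stand-in for real embeddings
chunks = ["chunk one", "chunk two", "chunk three"]
save_store("my_store", emb, chunks)
emb2, chunks2 = load_store("my_store")
```

Row i of the array and entry i of the chunk list must stay aligned, which is why both are saved and loaded together.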
The GraphRAG solution may work better for large amounts of unstructured data.
I never liked RAG frameworks... thanks for the useful content.
Yes! I did the same a year ago during my research. It works.
great work! very well explained
Brilliant! Thanks for this one
Great job. I'd try to make this work with free/open-source AI models.
I also want to see if this will work with a bigger corpus.
It should work with open models. For a bigger corpus, you will need to think about latency in retrieval. You might want to look into quantized embeddings in that case.
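A quick sketch of the simplest form of embedding quantization, binary quantization: keep only the sign of each dimension and compare vectors by how many bits match. The vectors below are random stand-ins for real embedding-model output:

```python
import numpy as np

def binarize(embeddings):
    """Quantize float embeddings to 1 bit per dimension (the sign)."""
    return (embeddings > 0).astype(np.uint8)

def bit_similarity(query_bits, doc_bits):
    """Fraction of matching bits; higher means closer (1 - normalized Hamming distance)."""
    return (query_bits == doc_bits).mean(axis=-1)

rng = np.random.default_rng(0)
docs = rng.standard_normal((1000, 384))          # 1000 fake document embeddings
query = docs[42] + 0.1 * rng.standard_normal(384)  # a query very close to doc 42
sims = bit_similarity(binarize(query), binarize(docs))
best = int(np.argmax(sims))
```

A float32 embedding shrinks 32x (1 bit per dimension instead of 32), and the bitwise comparison is far cheaper than a float dot product, which is where the retrieval-latency win comes from; unrelated random vectors agree on only about half their bits, so the near-duplicate still stands out clearly.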
Thank you!
Hello sir!
I want to build a question-answering chatbot in Python that answers from a provided knowledge base in PDF or text format. I've been working on this for the last 10 days but haven't managed it yet. Can you please guide me through this project, sir?
As a newbie I'm hooked on this channel. I'm about to take your RAG course; the issue I have is, every time I've tried to use LangChain I get crazy errors about upgrades and incompatibilities with Python versions. How do you address this issue? Frustrating to resolve, if at all.
My recommendation is to stick to one version of LangChain and don't chase the latest release. You can pin that in requirements.txt; you don't need the latest version in most cases. For Python, use 3.10. Hope this helps.
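Pinning means using exact `==` versions in requirements.txt instead of leaving them open. The version numbers below are purely illustrative; use whichever versions your course code was actually tested with:

```
langchain==0.1.20
langchain-community==0.0.38
openai==1.30.1
```

With exact pins, `pip install -r requirements.txt` reproduces the same environment every time, instead of silently pulling a newer, possibly incompatible release.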
Can this also be implemented with a local model through Ollama?
Of course, there is no restriction.
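Ollama exposes an OpenAI-compatible HTTP endpoint, so the same chat-completion pattern works against a local model. A minimal sketch (this assumes Ollama is running on its default port 11434 with a model such as llama3 already pulled; the code only builds the request body, and the actual network call is left commented out since it needs a live server):

```python
# OpenAI-compatible endpoint exposed by a local Ollama server
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_request(model, context, question):
    """Build the JSON body for a RAG-style chat completion against Ollama."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": question},
        ],
        "stream": False,
    }

body = build_request("llama3",
                     "Paris is the capital of France.",
                     "What is the capital of France?")

# To actually send it (requires a running Ollama server):
# import json, urllib.request
# req = urllib.request.Request(
#     OLLAMA_URL, data=json.dumps(body).encode(),
#     headers={"Content-Type": "application/json"})
# resp = json.loads(urllib.request.urlopen(req).read())
# print(resp["choices"][0]["message"]["content"])
```

Because the endpoint mimics the OpenAI API shape, the official `openai` Python client can also be pointed at it by setting `base_url` to `http://localhost:11434/v1`.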
Could you please make a video on a chatbot that can interact with PDF files and answer questions with recent tech? I'm having the most difficulty with outdated tutorials. It would be a great help!
Hi, could you convert complex PDF documents (with graphics and tables) into an easily readable text format, such as Markdown? The input file would be a PDF and the output file would be a text file (.txt).
Yes, check out this video: ruclips.net/video/mdLBr9IMmgI/видео.html
Can you also show how to produce structured output?
Great video, nice style and easy to listen to, subscribed 👍🏼
Hello!
I have a question. Is the similarity search a way to reduce the number of tokens sent to the OpenAI API? So basically, when you make a query to the LLM, you are not sending the entire text of the Wikipedia page?
I ask because of token costs, to know exactly what OpenAI will charge us.
Your content is probably the best on youtube! Really appreciate all your videos
Probably. He used a wiki page, but you may have a 1,000-page PDF that would cost a lot to process, and maybe most of it is irrelevant to what you want.
When you break up the text and then retrieve the n most relevant chunks, you get what you want faster and cheaper.
And if you run an AI locally, the more context you send, the slower it gets, so this can let a not-so-powerful PC do the job too.
Yes, there are two parts, as mentioned by @luizemanoel. First, the document can contain a lot of irrelevant info; you only want to provide what is relevant to the query to the LLM. This will improve the responses. The added benefit is reduced tokens, which means less cost as well.
@engineerprompt @luizemanoel2588 OK, thanks to both!
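To make the token-saving point concrete, the chunk-selection step is just a cosine-similarity top-k over the chunk embeddings; only the winning chunks are sent to the LLM. A plain-numpy sketch (the 2-D embeddings here are toy stand-ins for real embedding-model output):

```python
import numpy as np

def top_k_chunks(query_emb, chunk_embs, chunks, k=2):
    """Return the k chunks whose embeddings are most cosine-similar to the query."""
    q = query_emb / np.linalg.norm(query_emb)
    c = chunk_embs / np.linalg.norm(chunk_embs, axis=1, keepdims=True)
    sims = c @ q                       # cosine similarity per chunk
    best = np.argsort(sims)[::-1][:k]  # indices of the k highest scores
    return [chunks[i] for i in best]

chunks = ["chunk about cats", "chunk about dogs", "chunk about physics"]
chunk_embs = np.array([[1.0, 0.0],
                       [0.8, 0.2],
                       [0.0, 1.0]])
query_emb = np.array([1.0, 0.1])  # points roughly at the first chunk
print(top_k_chunks(query_emb, chunk_embs, chunks, k=2))
```

Instead of paying for the whole document in the prompt, you pay only for the k retrieved chunks plus the question.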
What are the best ways of importing documents into the RAG system from corporate systems such as Google Docs, Confluence, or Notion without asking your IT department?
I have done a few things manually, for example using scraping tools and Chrome extensions, but they are very labour-intensive. Is there something a bit more streamlined?
Also: how do you add indexing, link-backs, and more nuanced chunking mechanisms (aware of context and type of info)?
You are looking for data connectors in this case. Each of these services has its own API, or you can use the data loaders from LangChain (python.langchain.com/v0.2/docs/integrations/document_loaders/). This is one aspect where I would recommend using a framework.
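If you do end up rolling your own, the usual pattern is a small per-source "connector" that all normalizes to the same document shape. A toy sketch (the class and field names are made up for illustration; real Confluence/Notion/Google Docs connectors would implement the same `load()` on top of each service's REST API and auth flow):

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Document:
    source: str  # where the text came from (path, URL, page ID, ...)
    text: str    # the raw text to be chunked and embedded

class Connector(Protocol):
    """Anything with a load() that yields normalized Documents."""
    def load(self) -> list[Document]: ...

class LocalFileConnector:
    """Simplest possible connector: reads plain-text files from disk."""
    def __init__(self, paths: list[str]):
        self.paths = paths

    def load(self) -> list[Document]:
        docs = []
        for p in self.paths:
            with open(p, encoding="utf-8") as f:
                docs.append(Document(source=p, text=f.read()))
        return docs
```

Because every connector emits the same `Document` shape, the downstream chunking and embedding pipeline never needs to know which system the text came from.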
Thank you so much!
great work! thanks!
Great work 👍🏻 Thanks
Legend!
500 likes, keep it up !
Thanks for this great video. I tried to run your Jupyter notebook. When calling the line "from google.colab import userdata"
I get the error: ModuleNotFoundError: No module named 'google'. And somewhere I see that pkg_resources is deprecated as an API.
Is Python 3.12.3 too new?
OK, I replaced the Google part; there are other ways to create an OpenAI client!
Now it works!
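For anyone hitting the same thing: `google.colab` only exists inside Colab. Outside Colab, a common swap is reading the key from an environment variable instead (a sketch; the client line is commented out since it needs the `openai` package and a real key):

```python
import os

def get_api_key(name="OPENAI_API_KEY"):
    """Read the API key from the environment instead of google.colab.userdata."""
    key = os.environ.get(name)
    if key is None:
        raise RuntimeError(f"Set the {name} environment variable first")
    return key

# from openai import OpenAI
# client = OpenAI(api_key=get_api_key())
```

Set the variable in your shell (e.g. `export OPENAI_API_KEY=sk-...`) before launching the notebook, and the rest of the code runs unchanged.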
Great! Thanks!
Is the Arabic language supported or not?
Thanks for the video! However, RAG has never convinced me. I'm looking for fine-tuning in 10 lines of code.
...yes, you can do it that way, but you lose accuracy in terms of relevance between topics.
Coooolll
Your course is too expensive
No frameworks, but please install RAGatouille? WTF!
Are you also mad he used numpy? Hahahahah wtf
Framework: a collection of libraries to build applications.
Library: a tool to leverage functionality.
@@Yocoda24 , well: if the claim is pure python, no frameworks, yes. WTF.
@@MeinDeutschkurs not sure where you’re pulling “pure python” from? Can you give me a timestamp to when it is said in the video?
@@Yocoda24 Read the video title:
“RAG from Scratch in 10 lines Python - No Frameworks Needed!”
@@MeinDeutschkurs oh okay so it doesn’t say pure python, and he doesn’t use any frameworks. Glad we could come to an understanding
"10 lines" 🤣
Thank you