If you are interested in learning more about how to build robust RAG applications, check out this course: prompt-s-site.thinkific.com/courses/rag
Hi there, personally I find the price too steep for only 2 hours of content, but maybe you can convince us with a preview! Cheers
Very nice idea with this 'code display window' in your video: now the code is much easier to read and much easier to follow step by step. Thanks.
Excellent video I’ve been needing this. Very slick way to combine the responses from semantic and keyword search.
Fantastic Video and very timely. Thanks for the advice. I have made some massive progress because of it.
Glad it was helpful and thank you for your support 🙏
This video is really helpful to me! Thanks a lot!
Thanks 😊
It's great that the example code uses free LLM inference like Hugging Face (or OpenRouter)!
But can we host them locally? Working in an industry that can’t use public SaaS stuff.
How do you handle multiple unrelated documents to find the answer for the user?
I have the same question, how do we handle multiple documents of similar types, let's say office policies for different companies?
The similarity search will return all similar chunks (k=5) as context to the LLM, which may contain different answers depending on each company's policy. There is a lot of ambiguity here.
Also, how do we handle tables in PDFs? When asked questions about them, the model doesn't give the correct answer.
Can anyone help me out here?
One way would be to have an agent select a specific database based on the query, or have a variable for the user stating which company they work for. You would then have multiple databases, one for each company involved.
This would also keep the databases smaller.
Handling it that way would also speed up the search and response.
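For anyone trying the per-company split, here is a rough sketch of one way to do it with LangChain and Chroma. It assumes the chunks already carry a "company" field in their metadata and reuses the `chunks` / `embeddings` names from the video's notebook; the collection names and persist directory are placeholders, and import paths may differ by LangChain version.

```python
from collections import defaultdict
from langchain_community.vectorstores import Chroma

# Assumption: `chunks` is the list of Document objects from the existing pipeline,
# and each chunk's metadata records which company's policy it came from.
chunks_by_company = defaultdict(list)
for doc in chunks:
    chunks_by_company[doc.metadata["company"]].append(doc)

# One Chroma collection per company, all persisted under the same directory.
stores = {
    company: Chroma.from_documents(
        docs,
        embeddings,
        collection_name=f"policies_{company}",
        persist_directory="db",
    )
    for company, docs in chunks_by_company.items()
}

# At query time, route to the right collection (based on the user's company or an
# agent's routing decision) so the k=5 chunks can only come from that company's policy.
def retrieve(company: str, question: str, k: int = 5):
    retriever = stores[company].as_retriever(search_kwargs={"k": k})
    return retriever.get_relevant_documents(question)
```

An alternative that keeps a single database is to store the company in each chunk's metadata and pass a metadata `filter` in `search_kwargs`, which Chroma also supports.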
Hey, these videos are really helpful. What do you think about scalability? When the number of documents grows from a few to thousands, the performance of semantic search decreases. Also, have you tried Qdrant? It worked better than Chroma for me.
Scalability is potentially an issue. Will be making some content around it. In theory, retrieval speed will decrease as the number of documents grows by orders of magnitude, but in that case approximate nearest-neighbor search will work. Haven't looked at Qdrant yet but it's on my list. Thanks for sharing.
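Since Qdrant came up: it builds an HNSW (approximate nearest neighbor) index by default, so swapping it in for Chroma in LangChain is mostly a one-liner. A minimal sketch, assuming the same `chunks` and `embeddings` objects as before; it needs the `qdrant-client` package, and the path and collection name below are placeholders.

```python
from langchain_community.vectorstores import Qdrant

# Build a local Qdrant collection from the existing chunks; HNSW approximate search
# keeps retrieval fast as the corpus grows into the thousands of documents.
vectorstore = Qdrant.from_documents(
    chunks,
    embeddings,
    path="qdrant_db",            # local on-disk storage; use url=... for a Qdrant server
    collection_name="documents",
)

retriever = vectorstore.as_retriever(search_kwargs={"k": 5})
```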
Great - while you can persist the Chroma DB, is there a way to persist the BM25Retriever? Or do you always have to chunk again when starting the application?
You can fetch the documents from the DB and feed them to it.
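To spell that out: a sketch of rebuilding the BM25 retriever at startup from the chunks already stored in the persisted Chroma collection, so nothing has to be re-parsed or re-chunked. It assumes the store was created with a `persist_directory` and that `embeddings` is the same embedding object as before; import paths may vary by LangChain version.

```python
from langchain_community.vectorstores import Chroma
from langchain_community.retrievers import BM25Retriever
from langchain.schema import Document

# Reopen the persisted Chroma collection (no re-chunking of the PDFs needed).
vectorstore = Chroma(persist_directory="db", embedding_function=embeddings)

# Pull the stored chunk texts and metadata back out of the collection.
data = vectorstore.get()  # dict with "ids", "documents", "metadatas"
docs = [
    Document(page_content=text, metadata=meta or {})
    for text, meta in zip(data["documents"], data["metadatas"])
]

# Rebuild BM25 over the same chunks; this is cheap compared to re-parsing PDFs.
bm25_retriever = BM25Retriever.from_documents(docs)
bm25_retriever.k = 5
```

If even this rebuild gets slow on very large corpora, pickling the BM25Retriever object to disk is another option.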
Excellent video, it's helping me with my proof of concept. Thank you.
Glad to hear that!
@@engineerprompt I finally got my POC up and running to search for parts and materials using hybrid search, and it works really well. Thanks for doing this video.
@@kenchang3456 this is great news.
Hello! First of all, thank you very much for the video! Secondly, at minute 10:20 you mention that you are going to create a new video about obtaining the metadata of the chunks. Do you have that video? Again, thank you very much for the material.
Thank you for sharing the guide. One question: how do I make the response longer? I have tried changing the max_length parameter, as you suggested in the video, but the response is always only ~300 characters long.
It depends on the model too. Maybe your LLM doesn't support more than 300? Which model are you using, btw?
Which model are you trying? How long is your context?
@@engineerprompt I've experienced a similar issue; I'm using the zephyr-7b-beta model. Also, I don't want the AI to get answers from the internet, only to respond when the answer is available in the provided database. I tried prompting for that, but it didn't help. Any tips?
@@sarcastic.affirmations did you get what you were trying to find?
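In case it helps others hitting the ~300-character cap: with the Hugging Face Hub wrapper commonly used in this kind of setup, the output length is usually governed by `max_new_tokens` rather than `max_length`, and the "answer only from the provided context" behaviour is normally enforced through the prompt. A hedged sketch (not necessarily the exact code from the video), assuming zephyr-7b-beta via LangChain's HuggingFaceHub class:

```python
from langchain_community.llms import HuggingFaceHub
from langchain.prompts import PromptTemplate

# Raise the generation budget; max_new_tokens controls how much new text is produced.
llm = HuggingFaceHub(
    repo_id="HuggingFaceH4/zephyr-7b-beta",
    model_kwargs={"max_new_tokens": 1024, "temperature": 0.1},
)

# Ask the model to answer strictly from the retrieved context.
prompt = PromptTemplate.from_template(
    "Answer the question using ONLY the context below. "
    'If the answer is not in the context, reply "I don\'t know."\n\n'
    "Context:\n{context}\n\nQuestion: {question}\nAnswer:"
)
```

Small models can still ignore such instructions occasionally, so it is worth checking that the retrieved context actually contains the answer before blaming the prompt.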
Great stuff! Thanks!
Thank you for the video:)
Great, do you have videos for using docx files?
Thanks. The same approach will work, but you will need to use a separate loader for it. Look into unstructured.io.
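For reference, a minimal sketch of swapping in a Word loader; UnstructuredWordDocumentLoader is the unstructured.io-based one and needs the `unstructured` package installed, and the file name and chunk sizes below are just placeholders.

```python
from langchain_community.document_loaders import UnstructuredWordDocumentLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Load the .docx instead of a PDF; the rest of the pipeline stays the same.
loader = UnstructuredWordDocumentLoader("policies.docx")  # placeholder file name
documents = loader.load()

# Split into overlapping chunks for BM25 and the vector store.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(documents)
```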
Amazing video! How can you use this in a conversational chat engine? I have built conversational pipelines that use RAG, however how would I do this here while having different retrievers?
This should work out of the box, you will need to replace your current retriever with the ensemble one.
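A rough sketch of what that swap can look like, assuming `bm25_retriever`, `vectorstore`, and `llm` already exist as in the video's notebook (the sample question is made up, and the class names are the standard LangChain ones, which may move between versions):

```python
from langchain.retrievers import EnsembleRetriever
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory

# Hybrid retriever: keyword (BM25) + semantic (vector store), weighted equally.
ensemble_retriever = EnsembleRetriever(
    retrievers=[bm25_retriever, vectorstore.as_retriever(search_kwargs={"k": 5})],
    weights=[0.5, 0.5],
)

# Chat memory so follow-up questions are condensed with the conversation history.
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

chat_chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=ensemble_retriever,
    memory=memory,
)

result = chat_chain({"question": "What does the leave policy say about carry-over?"})
print(result["answer"])
```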
Amazing video , thanks
🙏
Great effort and good content..😇😇
@engineerprompt - Could you convert the notebook to LlamaIndex, if you don't mind?
Really helpful, thank you ❤
I'll have to try this one. Great video!
Glad it was helpful
I get a KeyError: 0 when I run this:
# Vector store with the selected embedding model
vectorstore = Chroma.from_documents(chunks, embeddings)
What am I doing wrong? I added my HF token with read access the first time and then with write access too...
I would appreciate the help.
Thanks for the video, though. It's amazing.
I am getting the same error.
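Hard to say without the full traceback, but one way to narrow it down is to test the pieces that feed `Chroma.from_documents` on their own; if the Hugging Face Inference API embedding call is failing (bad token, model still loading, rate limit), the problem usually shows up there rather than inside Chroma. A small diagnostic sketch, assuming the same `chunks` and `embeddings` objects:

```python
# Sanity-check the inputs to Chroma.from_documents separately.
print(len(chunks), "chunks")               # should be > 0
print(repr(chunks[0].page_content[:200]))  # should be real text, not empty

# Call the embedding model directly; errors from the HF endpoint surface here.
vec = embeddings.embed_query("hello world")
print(type(vec), len(vec))                 # expect a list of floats (e.g. length 384/768/1024)
```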
I don't know which RAG approach to implement. Are there benchmarks out there for the best solution? My use case will be hundreds of LONG documents, even textbooks.
Thanks! I have 500k documents. I want to compute the keyword retriever once and load it the same way I have an external index for the dense vector DB. Is there a way?
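One approach that tends to work (a sketch, not something benchmarked at 500k scale): build the BM25Retriever once, pickle it to disk, and reload it at startup, mirroring how the dense index is persisted. BM25Retriever is normally picklable because its default preprocess_func is a plain module-level function; the file path below is hypothetical.

```python
import pickle
from langchain_community.retrievers import BM25Retriever

INDEX_PATH = "bm25_retriever.pkl"  # hypothetical location for the persisted retriever

def build_and_save(docs):
    """One-time, expensive step: tokenize all chunks and fit BM25."""
    retriever = BM25Retriever.from_documents(docs)
    retriever.k = 5
    with open(INDEX_PATH, "wb") as f:
        pickle.dump(retriever, f)
    return retriever

def load_retriever():
    """Fast startup path: no re-chunking or re-tokenizing."""
    with open(INDEX_PATH, "rb") as f:
        return pickle.load(f)
```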
Hello! Thanks for the video. I was wondering if we can use it on CSV files instead of PDFs? How would that affect the architecture?
Hi, I have a question, hope you reply. If we give it a PDF with a bunch of video transcripts and ask it to formulate a creative article based on the info given, can it actually do tasks like that? Or is it just useful for finding relevant information in the source files?
RAG is good for finding the relevant information. For the use case you are describing, you will need to add everything in the context window of the LLM in order for it to look at the whole file. Hope this helps.
@@engineerprompt Can you point me to a good video/channel that focuses on accomplishing such things using local LLMs or even ChatGPT-4?
I'm using RAG for a coding model. Can anyone suggest a good retriever for this task? Thanks in advance!
Can you add this functionality to localGPT?
"Where can I find the PDF data?"
You will need to provide your own PDF files.
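For anyone looking for a starting point, a minimal sketch of loading your own PDFs from a folder and chunking them; the folder name and chunk sizes are placeholders, and PyPDFLoader needs the `pypdf` package.

```python
from langchain_community.document_loaders import DirectoryLoader, PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Load every PDF under ./data (placeholder folder) into per-page Document objects.
loader = DirectoryLoader("data", glob="**/*.pdf", loader_cls=PyPDFLoader)
documents = loader.load()

# Split into overlapping chunks for the keyword and semantic retrievers.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(documents)
print(f"Loaded {len(documents)} pages -> {len(chunks)} chunks")
```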
Wait, this doesn't seem like RAG at all? If I'm following, the LLM is not using embedding vectors at all in the actual inference step. It seems you're using a complex text -> embedding -> search-engine step as a way to build a text search engine that just injects regular text into the context, but the embeddings are never added directly to the model. Couldn't you generate extra 'ad-hoc' search text to plop into the context window in any number of ways, with embeddings -> DB -> text being only one of them? And this method has none of the advantages of actually 'grafting' embeddings directly onto the model, since you're using up the context window?
The whole point is to fix the broken part of RAG: the typical RAG implementation doesn't do too well with anything larger than a few docs. Retrieving text from an external index and injecting it into the context window is what RAG is; the embeddings are only used for the retrieval step, not fed into the model.
The background is a little distracting; it's better to avoid the flashy one. I couldn't concentrate on your lecture. Please. Thank you.