Hi, thanks for the video, it really covered a lot of relevant questions for me. Open question to the community: I have been struggling with retrieval relevance for relatively small chunks using ada-002 (OpenAI embeddings). For example, I do a similarity search on a keyword ("sea slug") that I know only appears a few times, and the top-k results don't even include either of the words. It appears in the text as "sea-slug", but this feels extremely brittle and like something the embeddings should capture. Is this somewhat expected? Hence the need for more complicated retrieval?
Since embeddings capture the context of a whole chunk, they aren't focused on specific words (this is where hybrid search can come into play). My thought is that the embedding model doesn't know much context around a word like "sea-slug", so fine-tuning the embedding model with some examples using that phrase, or using a hybrid search method, would help.
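A minimal sketch of the hybrid idea, blending a lexical score with a semantic one. Here a toy term-overlap score stands in for real BM25, and the vector scores are hypothetical numbers standing in for embedding similarities; the point is that lexical tokenization that splits on hyphens lets "sea slug" match "sea-slug" even when the embedding side misses it:

```python
import re
from collections import Counter

def tokenize(text):
    # Lowercase and split on non-letters, so "sea-slug" and "sea slug"
    # both yield the tokens ["sea", "slug"].
    return re.findall(r"[a-z]+", text.lower())

def lexical_score(query, doc):
    # Simple term-overlap score (a toy stand-in for BM25).
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    return sum(min(q[t], d[t]) for t in q) / max(len(q), 1)

def hybrid_score(query, doc, vector_score, alpha=0.5):
    # Blend semantic and lexical relevance; alpha weights the vector side.
    return alpha * vector_score + (1 - alpha) * lexical_score(query, doc)

docs = ["The sea-slug is a marine gastropod.", "Cats are mammals."]
# Hypothetical vector scores, as if from an embedding model.
vec_scores = [0.40, 0.35]
ranked = sorted(
    range(len(docs)),
    key=lambda i: hybrid_score("sea slug", docs[i], vec_scores[i]),
    reverse=True,
)
# ranked[0] is the sea-slug document: its exact keyword overlap
# dominates even though the vector scores are close.
```

In a real system you'd get the lexical score from BM25 (e.g. a search engine or rank-bm25) and the vector score from your embedding model, then fuse the two rankings.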
Great video, learnt a lot! Had a question: what should the chunking approach be for a RAG application that scrapes the Internet for context? Since the documents would be web pages, I get that you'd start off with the HTML splitter, but what approach should you use to get as much relevant context as possible while limiting the number of pages you embed? Especially considering that embeddings will be made in real time, you'd want the process to be as fast as possible. Would the approach be very different from using an offline document corpus?
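A rough sketch of what "HTML splitter" style chunking does, using only the standard library: split a page into chunks at heading boundaries so each chunk stays topically coherent. This is a toy stand-in for a real header-based splitter (e.g. LangChain's HTML splitters), not that implementation:

```python
from html.parser import HTMLParser

class SectionSplitter(HTMLParser):
    # Cuts a page into chunks at h1-h3 boundaries, so each chunk
    # roughly corresponds to one section of the page.
    def __init__(self):
        super().__init__()
        self.chunks, self.current = [], []

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3") and self.current:
            self.chunks.append(" ".join(self.current))
            self.current = []

    def handle_data(self, data):
        if data.strip():
            self.current.append(data.strip())

    def close(self):
        super().close()
        if self.current:  # flush the final section
            self.chunks.append(" ".join(self.current))
            self.current = []

html_page = "<h1>Intro</h1><p>First section.</p><h2>Details</h2><p>Second section.</p>"
splitter = SectionSplitter()
splitter.feed(html_page)
splitter.close()
# splitter.chunks now holds one chunk per heading-delimited section.
```

For real-time scraping, section-level splitting like this also gives you a cheap relevance filter: you can keyword-score sections against the query and only embed the ones that look promising, rather than embedding every page wholesale.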
Great video! I have a question about chunk decoupling. Shouldn't the vector-store embedding do pretty much the same abstraction of the large text as the summary does? I mean, wouldn't the summary and the original end up in the same place in the vector space, rendering the summary more or less pointless?
Thanks for the question! In this context, the summary should highlight the key points and concepts in the original document, which should make retrieval more accurate, especially in cases where there are documents covering similar/adjacent concepts. A full document can contain unnecessary information that throws off vector search. The quality of the summary needs to be high for this to work: if the summary is poor and doesn't present the key points of the original document, then yes, it would be better to just embed the original document as a whole.
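The decoupling pattern can be sketched in a few lines: embed the summary for search, but keep a pointer back to the full document and return that at retrieval time. A toy bag-of-words counter stands in for a real embedding model, and all names and texts here are illustrative:

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding" standing in for a real model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

index = []

def add_document(doc_id, full_text, summary):
    # Index the summary's vector, but store the full text alongside it.
    index.append({"id": doc_id, "vec": embed(summary), "full": full_text})

def retrieve(query, k=1):
    qv = embed(query)
    ranked = sorted(index, key=lambda e: cosine(qv, e["vec"]), reverse=True)
    # Search happened over summaries; return the full documents.
    return [e["full"] for e in ranked[:k]]

add_document("a", "Long report on reef ecology with many digressions...",
             "Survey of coral reef ecology.")
add_document("b", "Long report on corporate tax law with many digressions...",
             "Overview of corporate tax law.")
```

The design point is exactly the one above: matching happens against the distilled key concepts, so off-topic filler in the full document can't drag its vector away from the query, yet the LLM still gets the complete text as context.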
Good job guys, valuable talk, thanks!
Thanks! Very good content, full of details.
excellent presentation!
Thank you, Ryan! Awesome lecture.
Keep up the good work!
Sometimes a document has images and figures inside it; I think that's a hard part to deal with in RAG. 😊
Hi Sir,
Can you suggest the best chunking strategy for 10-K reports (PDFs) to chat with?