Chunking Best Practices for RAG Applications

  • Published: Nov 23, 2024

Comments • 13

  • @reiniervaneijk • 7 months ago +1

    Good job guys, valuable talk, thanks!

  • @deepaksingh9318 • 8 months ago +1

    Thanks! It was very good content, full of details.

  • @tizulis2 • 8 months ago +1

    Excellent presentation!

  • @mauriciolopes8502 • 10 months ago +2

    Thank you, Ryan! Awesome lecture.

  • @IgnacioLlorca-v9r • 11 months ago +2

    Keep up the good work!

  • @tonylv6119 • 8 months ago +1

    Sometimes a document has images and figures inside; I think that's a hard part of RAG to deal with. 😊

  • @Jaybearno • 10 months ago +4

    Hi, thanks for the video, it really covered a lot of relevant questions for me. Open question to the community:
    I have been struggling with retrieval relevance for relatively small chunks using ada-002 (OpenAI's embedding model). For example, I do a similarity search on a keyword ("sea slug") that I know appears only a few times, and the top-k results don't include either part of the word. It appears in the text as "sea-slug", but this feels extremely brittle and like something the embeddings should capture. Is this somewhat expected? Hence the need for more complicated retrieval?

    • @RyanSieglerAI • 10 months ago

      Since the embeddings capture the context of a chunk, they aren't focused on specific words (this is where hybrid search can come into play). My thought is that the embedding model doesn't know much context around a word like "sea-slug", so fine-tuning the embedding model with some examples using that phrase, or using a hybrid search method, would help.
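
A minimal sketch of the hybrid approach suggested in this reply, assuming the `rank_bm25` and `sentence-transformers` packages are installed; the corpus, the model name, and the 50/50 weighting are illustrative choices, not anything the talk prescribes. Tokenizing on word characters splits "sea-slug" into "sea" and "slug", so the keyword side of the score is no longer brittle to hyphenation.

```python
import re

import numpy as np
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer

chunks = [
    "The sea-slug is a marine gastropod often found in tide pools.",
    "Tide pools host crabs, anemones, and other invertebrates.",
    "Gastropods include snails and slugs, on land and in the sea.",
]

def tokenize(text: str) -> list[str]:
    # \w+ splits "sea-slug" into ["sea", "slug"], unlike str.split().
    return re.findall(r"\w+", text.lower())

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice
chunk_vecs = model.encode(chunks, normalize_embeddings=True)
bm25 = BM25Okapi([tokenize(c) for c in chunks])

def hybrid_search(query: str, alpha: float = 0.5) -> list[tuple[float, str]]:
    """Blend dense cosine similarity with normalized BM25 keyword scores."""
    q_vec = model.encode([query], normalize_embeddings=True)[0]
    dense = chunk_vecs @ q_vec  # cosine similarity: vectors are unit-norm
    sparse = np.asarray(bm25.get_scores(tokenize(query)))
    if sparse.max() > 0:
        sparse = sparse / sparse.max()  # scale keyword scores into [0, 1]
    scores = alpha * dense + (1 - alpha) * sparse
    return sorted(zip(scores.tolist(), chunks), reverse=True)

for score, chunk in hybrid_search("sea slug"):
    print(f"{score:.3f}  {chunk}")
```

With a pure dense search the hyphenated chunk can lose to topically similar neighbors; the BM25 term pushes the exact-keyword match back to the top while `alpha` keeps semantic matching in play.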

  • @maryamashraf6370 • 9 months ago

    Great video, learned a lot! I had a question: what should the chunking approach be for a RAG application scraping the internet for context? Since the documents would be web pages, I get that you'd start off with the HTML splitter, but what approach should you use to get as much relevant context as possible while limiting the number of pages you embed? Especially considering that embeddings will be made in real time and the process needs to be as fast as possible. Would the approach be very different from using an offline document corpus?
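
One way to sketch the HTML-first pipeline this question describes, assuming LangChain's `langchain-text-splitters` package (the HTML splitter also needs `lxml`); the sample page, header levels, and size budget are illustrative, not a recommendation from the talk.

```python
from langchain_text_splitters import (
    HTMLHeaderTextSplitter,
    RecursiveCharacterTextSplitter,
)

html = """
<html><body>
  <h1>Chunking</h1><p>Why chunking matters for RAG...</p>
  <h2>Strategies</h2><p>Fixed-size, recursive, semantic...</p>
</body></html>
"""

# Pass 1: split on document structure so header metadata travels with
# each chunk and irrelevant sections can be filtered before embedding.
header_splitter = HTMLHeaderTextSplitter(
    headers_to_split_on=[("h1", "Header 1"), ("h2", "Header 2")]
)
sections = header_splitter.split_text(html)

# Pass 2: enforce a size budget so real-time embedding stays fast.
chunker = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = chunker.split_documents(sections)

for chunk in chunks:
    print(chunk.metadata, chunk.page_content[:60])
```

The two-pass split is the same idea as with an offline corpus; what changes online is the filtering step, since dropping boilerplate sections before embedding is what keeps the page count and latency down.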

  • @vijaybrock • 7 months ago

    Hi Sir,
    Can you suggest the best chunking strategy for 10-K reports (PDFs) to chat with?

  • @soren81 • 9 months ago

    Great video! I have a question about chunk decoupling. Shouldn't the vector-store embedding do pretty much the same abstraction of the large text as the summary does? I mean, wouldn't the summary and the original end up in the same place in the vector space, rendering the summary more or less pointless?

    • @RyanSieglerAI • 9 months ago +2

      Thanks for the question! In this context, the summary should highlight the key points and concepts of the original document, which should make retrieval more accurate, especially when there are documents covering similar or adjacent concepts. This is because a full document can contain unnecessary information that throws off vector search. The quality of the summary needs to be high for this to work; if the summary is poor and does not present the key points of the original document, then yes, it would be better to just embed the original document as a whole.
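
A minimal sketch of the chunk-decoupling idea from this thread, assuming `sentence-transformers`; the documents and summaries here are hand-written stand-ins for LLM-generated ones. The summary is what gets embedded and searched, while the full document is what gets returned.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Full documents, keyed by ID; only their summaries get embedded.
docs = {
    "doc-1": "Full text of a long report on chunking strategies ...",
    "doc-2": "Full text of a long report on embedding models ...",
}
summaries = {
    "doc-1": "Compares fixed-size, recursive, and semantic chunking for RAG.",
    "doc-2": "Surveys embedding models and how to fine-tune them for a domain.",
}

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice
ids = list(summaries)
summary_vecs = model.encode(
    [summaries[i] for i in ids], normalize_embeddings=True
)

def retrieve(query: str) -> str:
    """Search over summary vectors, then hand back the original document."""
    q_vec = model.encode([query], normalize_embeddings=True)[0]
    best = ids[int(np.argmax(summary_vecs @ q_vec))]
    return docs[best]  # the decoupled payload: full text, not the summary

print(retrieve("How should I chunk documents?")[:60])
```

The decoupling pays off exactly as the reply says: the index only ever sees the distilled key points, so filler in the full text cannot pull the embedding away from the document's real topic, yet the LLM still receives the complete document as context.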