RAG from scratch: Part 6 (Query Translation -- RAG Fusion)

  • Published: 4 Jun 2024
  • Query rewriting is a popular strategy to improve retrieval. RAG-fusion is an approach that re-writes a question from multiple perspectives, performs retrieval on each re-written question, and performs reciprocal rank fusion on the results from each retrieval, giving a consolidated ranking.
    Slides:
    docs.google.com/presentation/...
    Code:
    github.com/langchain-ai/rag-f...
    Reference:
    github.com/Raudaschl/rag-fusion
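
    The fusion step described above can be sketched as follows. This is a minimal illustration of reciprocal rank fusion, not the code from the linked repos; the function name, the k = 60 default, and the sample document IDs are assumptions for the example.

    ```python
    def reciprocal_rank_fusion(ranked_lists, k=60):
        """Fuse several ranked lists of doc IDs into one consolidated ranking.

        Each appearance of a document contributes 1 / (k + rank); documents
        that rank highly across many lists accumulate the largest scores.
        k = 60 is the commonly used smoothing constant.
        """
        scores = {}
        for ranked in ranked_lists:
            for rank, doc_id in enumerate(ranked):
                scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
        # Highest fused score first.
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    # Example: retrieval results for three rewritten versions of one question.
    results = [
        ["doc_a", "doc_b", "doc_c"],
        ["doc_b", "doc_a", "doc_d"],
        ["doc_a", "doc_d", "doc_b"],
    ]
    fused = reciprocal_rank_fusion(results)
    # doc_a ends up first: it ranked at or near the top in all three lists.
    ```

    In a real pipeline the inner lists would come from running the retriever on each LLM-generated query rewrite, and only the top few fused documents would be passed on as context for generation.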

Comments • 12

  • @girijeshthodupunuri1300
    @girijeshthodupunuri1300 2 months ago

    Hey Lance,
    Just wanted to drop a quick note to say thanks for your fantastic work! I'm currently catching up on the series and I've already implemented one of the methods you shared into my app. Your explanation really made RAG much easier to understand. Thanks a bunch!

  • @landon.wilkins
    @landon.wilkins 3 months ago

    Hey Lance, it's Landon. Loving your videos -- thanks!

  • @egonkirchof
    @egonkirchof 1 month ago +1

    You are not really using previous_score inside your ranking function, are you?

  • @andrybratun7064
    @andrybratun7064 2 months ago +2

    previous_score is not used

  • @ShengLUO
    @ShengLUO 3 months ago +3

    Does the order after reranking affect the final answer, since all docs are used as the context? Am I missing anything here?

    • @jaydeepthik1201
      @jaydeepthik1201 3 months ago

      I believe that with RAG-Fusion you can rank the overall top-k results across different retrievers or vector stores and use them in the generation stage. ruclips.net/video/77qELPbNgxA/видео.html

    • @clemlysergy3335
      @clemlysergy3335 3 months ago +1

      Yes, I have the same question. I can't see anywhere that the LLM gets told it should give priority to the context that comes first. Unless most of them have a tendency to do that by default? Or I could also well be missing something!

    • @darwingli1772
      @darwingli1772 3 months ago +1

      My understanding is only the top few docs will be fed to the context window.

    • @hardiknahata4328
      @hardiknahata4328 2 months ago

      Yes, the order of documents in the context window can impact the output of a RAG model. RAG models have a retriever component for selecting relevant documents and a reader component for extracting information. The order of documents can influence the reader's understanding and prioritization of information. Consequently, it affects the final output of the RAG model, although the extent of this impact depends on the model's architecture and design choices.

    • @eschoepis
      @eschoepis 1 month ago

      I'd also assume you would only feed the top n documents to the LLM. The ordering argument does not make a lot of sense, since most LLMs seem to pay more attention to the beginning and the end of a prompt, so the model would pay more attention to the most and least important documents.

  • @luke43591
    @luke43591 2 months ago

    reciprocal_rank_fusion is confusing. I tried to use it on my own document retrieval and got the same score for every page_content: 0.016666666666666666.
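
    A possible explanation of that number (a guess, not verified against the commenter's setup): 0.016666… is exactly 1 / (k + 0) with the common RRF default k = 60, so an identical score for every document suggests each one is being credited exactly once, at rank 0 — for example because deduplication of serialized documents failed and each appears in only one list, at its top position. The arithmetic:

    ```python
    k = 60  # common smoothing constant in reciprocal rank fusion

    # RRF credits a document 1 / (k + rank) per list it appears in.
    # Scores for the first few ranks with k = 60:
    scores = [1 / (k + rank) for rank in range(3)]
    # rank 0 yields exactly 0.016666666666666666, matching the value
    # reported above -- consistent with every document being scored
    # a single time at rank 0.
    ```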