RAG from scratch: Part 5 (Query Translation -- Multi Query)

  • Published: 10 May 2024
  • Query rewriting is a popular strategy to improve retrieval. Multi-query is an approach that rewrites a question from multiple perspectives, performs retrieval on each rewritten question, and takes the unique union of all retrieved docs.
    Slides:
    docs.google.com/presentation/...
    Code:
    github.com/langchain-ai/rag-f...
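
The description above can be sketched in plain Python. This is a toy illustration of the multi-query idea, not the notebook's code: `rewrite_query` stands in for an LLM call that generates paraphrases, and `retrieve` stands in for a vector-store retriever (both are hypothetical helpers invented here).

```python
def rewrite_query(question):
    # In practice an LLM generates these paraphrases; hard-coded here.
    return [
        question,
        "What methods decompose tasks for LLM agents?",
        "How is task breakdown done in autonomous agents?",
    ]

def retrieve(query, corpus, k=2):
    # Toy retriever: rank docs by word overlap with the query.
    q_words = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

def multi_query_retrieve(question, corpus):
    # Retrieve per rewritten query, then take the unique union of all docs.
    seen, union = set(), []
    for q in rewrite_query(question):
        for doc in retrieve(q, corpus):
            if doc not in seen:
                seen.add(doc)
                union.append(doc)
    return union
```

The final union is deduplicated but order-preserving, so docs surfaced by the original phrasing come first.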

Comments • 14

  • @anonymous6666
    @anonymous6666 2 months ago +3

    i love lance's hand motions, so freakin' entertaining

  • @paraconscious790
    @paraconscious790 24 days ago

    God this is amazing series from LangChain and Lance!!! Lance is an angel!!! 🙌🙏

  • @Wiktor-rf3tu
    @Wiktor-rf3tu 2 days ago

    Great piece of knowledge! I am not a professional Python developer (yet), and the syntax for building a chain with " | " broke my brain. Could you either explain it a bit or use more explicit syntax in the future?
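
The " | " the comment refers to is LangChain Expression Language (LCEL), where runnables overload Python's `|` operator to compose steps left to right. A minimal toy reimplementation (not LangChain itself, just a sketch of the mechanism) shows there is no magic beyond operator overloading:

```python
class Step:
    """Toy stand-in for a LangChain Runnable."""

    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # "a | b" builds a new Step that runs a, then feeds its output to b.
        return Step(lambda x: other.invoke(self.invoke(x)))

    def invoke(self, x):
        return self.fn(x)

prompt = Step(lambda q: f"Question: {q}")
shout = Step(lambda s: s.upper())

# Equivalent to the explicit call shout.invoke(prompt.invoke(q)):
chain = prompt | shout
```

So `prompt | model | parser` in LCEL is just nested function composition written in pipeline order.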

  • @mrchongnoi
    @mrchongnoi 2 months ago +2

    Thank you for the video; I am enjoying the series. I do have a question regarding this method.
    If I think about conversations I have had with others, there are times when either I or the person I am speaking with may say, "I do not understand your question. What are you trying to say? What are you asking me?" Would it be better for the LLM to engage the user to see if there is a deeper meaning to the question? Once the LLM gains an understanding, retrieval can take place using multi-query.
    An investment expert who asks "What was Tesla's performance in FY2022 compared to FY2023?" will have a different expectation for the answer than a layman who asks the same question.
    Just thinking out loud.

  • @hasszhao
    @hasszhao 2 months ago

    Correct me if I'm wrong: is the topic about what the class
    langchain.retrievers.multi_query.MultiQueryRetriever
    does?
    It is very similar to the LlamaIndex SubQuestionQueryEngine; the only difference is that LlamaIndex breaks the original question down into sub-questions instead of using an LLM to generate similar questions.

  • @user-xv2mx8rx7y
    @user-xv2mx8rx7y 2 months ago

    How did you do the graphics ?

  • @jay-dj4ui
    @jay-dj4ui 1 month ago

    So the charging profit is tracing? Like a log system? What about LlamaIndex?

  • @girijeshthodupunuri1300
    @girijeshthodupunuri1300 1 month ago

    Can you share the notebook?

  • @B0tch0
    @B0tch0 2 months ago

    How do you use embeddings WITHOUT OpenAI ?????

    • @theartofwar1750
      @theartofwar1750 2 months ago +1

      You don't need OpenAI for embeddings. The only benefit of using it is that it is fast, plug-and-play, and potentially more accurate.
      You can use any embedding model you want, e.g. any embedding model from Hugging Face. For example, you can swap out the OpenAI embeddings for a CLIP model from Hugging Face.

    • @B0tch0
      @B0tch0 2 months ago

      @@theartofwar1750 If I tell you to build those shelves from scratch, and then you realize you need a subscription to IKEA, how does that make you feel?
      Using other methods would certainly be more "building RAG from scratch", don't you think?

  • @mitast
    @mitast 2 months ago

    LCEL is the most confusing thing you guys have ever invented... No need for that sh*t