Better RAG: Hybrid Search in Chat with Documents | BM25 and Ensemble

  • Published: 10 Dec 2024

Comments • 59

  • @engineerprompt · 6 months ago

    If you are interested in learning more about how to build robust RAG applications, check out this course: prompt-s-site.thinkific.com/courses/rag

    • @awakenwithoutcoffee · 6 months ago

      Hi there, personally I find the price too steep for only 2 hours of content, but maybe you can convince us with a preview! Cheers

  • @poloceccati · 10 months ago +2

    Very nice idea with this 'code display window' in your video:
    now the code is much easier to read, and much easier to follow step by step. Thanks.

  • @TomanswerAi · 10 months ago +2

    Excellent video I’ve been needing this. Very slick way to combine the responses from semantic and keyword search.

  • @paulmiller591 · 10 months ago +3

    Fantastic Video and very timely. Thanks for the advice. I have made some massive progress because of it.

    • @engineerprompt · 10 months ago

      Glad it was helpful and thank you for your support 🙏

  • @MikewasG · 10 months ago +4

    This video is really helpful to me! Thanks a lot!

  • @attilavass6935 · 10 months ago +2

    It's great that the example code uses free LLM inference like Hugging Face (or OpenRouter)!

    • @morespinach9832 · 9 months ago +1

      But can we host them locally? I'm working in an industry that can't use public SaaS tools.
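
    A minimal sketch of fully local inference, using LangChain's HuggingFacePipeline wrapper; the model name is illustrative, and any local text-generation model works:

        from langchain_community.llms import HuggingFacePipeline

        # Loads and runs the model locally via transformers -- no external API calls.
        llm = HuggingFacePipeline.from_model_id(
            model_id="HuggingFaceH4/zephyr-7b-beta",
            task="text-generation",
            pipeline_kwargs={"max_new_tokens": 512},
        )
        print(llm.invoke("Summarize hybrid search in one sentence."))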

  • @mrchongnoi · 10 months ago +4

    How do you handle multiple documents that are unrelated to find the answer for the user?

    • @parikshitrathode4578 · 10 months ago +1

      I have the same question: how do we handle multiple documents of similar types, say office policies for different companies?
      The similarity search will return all similar chunks (k=5) as context to the LLM, which may contain different answers depending on the company's policy. There is a lot of ambiguity here.
      Also, how do we handle tables in PDFs? When asked questions about them, the model doesn't give correct answers.
      Can anyone help me out here?

    • @texasfossilguy · 10 months ago +1

      One way would be to have an agent select a specific database based on the query, or to have a variable for each user stating which company they work for. You would then have multiple databases, one for each company involved.
      This would also keep each database smaller, which speeds up both search and response.
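
    A minimal sketch of the metadata-filter variant of this idea, assuming Chroma and reusing the chunks and embeddings objects from the video; the "company" field and its value are illustrative:

        from langchain_community.vectorstores import Chroma

        # Tag each chunk with its company at ingestion time.
        for chunk in chunks:
            chunk.metadata["company"] = "acme_corp"

        vectorstore = Chroma.from_documents(chunks, embeddings)

        # At query time, restrict retrieval to that company's documents only.
        retriever = vectorstore.as_retriever(
            search_kwargs={"k": 5, "filter": {"company": "acme_corp"}}
        )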

  • @lakshay510 · 10 months ago +4

    Hey, these videos are really helpful. What do you think about scalability? When the number of documents grows from a few to thousands, the performance of semantic search decreases. Also, have you tried Qdrant? It worked better than Chroma for me.

    • @engineerprompt · 10 months ago +2

      Scalability is potentially an issue; I will be making some content around it. In theory, retrieval speed will decrease as the number of documents grows by orders of magnitude, but in that case approximate nearest-neighbor search will work. I haven't looked at Qdrant yet, but it's on my list. Thanks for sharing.

  • @saqqara6361 · 10 months ago +4

    Great! While you can persist the Chroma DB, is there a way to persist the BM25Retriever? Or do you have to re-chunk everything every time the application starts?
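
    One possible answer, sketched on the assumption that the retriever (with its default preprocessing function) is picklable: build the BM25Retriever once, serialize it, and load it on later starts instead of re-chunking.

        import pickle
        from langchain_community.retrievers import BM25Retriever

        # Build once from the chunks, then serialize to disk.
        bm25 = BM25Retriever.from_documents(chunks)
        with open("bm25_retriever.pkl", "wb") as f:
            pickle.dump(bm25, f)

        # On subsequent starts, load instead of rebuilding.
        with open("bm25_retriever.pkl", "rb") as f:
            bm25 = pickle.load(f)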

  • @kenchang3456 · 8 months ago

    Excellent video, it's helping me with my proof of concept. Thank you.

    • @engineerprompt · 8 months ago +1

      Glad to hear that!

    • @kenchang3456 · 7 months ago

      @engineerprompt I finally got my POC up and running to search for parts and materials using hybrid search, and it works really well. Thanks for doing this video.

    • @engineerprompt · 7 months ago +1

      @kenchang3456 This is great news!

  • @SRV900 · 4 months ago

    Hello! First of all, thank you very much for the video! Secondly, at minute 10:20 you mention that you are going to create a new video about obtaining the metadata of the chunks. Do you have that video? Again, thank you very much for the material.

  • @rafaf6838 · 10 months ago +1

    Thank you for sharing the guide. One question: how do I make the response longer? I have tried changing the max_length parameter, as you suggested in the video, but the response is always only ~300 characters long.

    • @linuxmanju · 10 months ago +1

      It depends on the model too. Maybe your LLM doesn't support more than 300? Which model are you using, btw?

    • @engineerprompt · 10 months ago +1

      Which model are you trying? How long is your context?

    • @sarcastic.affirmations · 10 months ago +2

      @engineerprompt I've experienced a similar issue; I'm using the zephyr-7b-beta model. Also, I don't want the AI to get answers from outside sources; it should only respond if the context is available in the database provided. I tried prompting for that, but it didn't help. Any tips?

    • @PallaviChauhan91 · 9 months ago

      @sarcastic.affirmations Did you find what you were looking for?
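
    A sketch touching both issues in this thread, assuming LangChain's HuggingFaceEndpoint wrapper: max_new_tokens (rather than max_length) usually sets the output budget, and a restrictive prompt nudges, but cannot force, the model to answer only from the retrieved context.

        from langchain_community.llms import HuggingFaceEndpoint
        from langchain.prompts import PromptTemplate

        llm = HuggingFaceEndpoint(
            repo_id="HuggingFaceH4/zephyr-7b-beta",
            max_new_tokens=1024,  # raises the response length budget
            temperature=0.1,
        )

        # Context-only answering via prompting; helpful, but not a hard guarantee.
        prompt = PromptTemplate.from_template(
            "Answer using ONLY the context below. If the answer is not in the "
            "context, reply 'I don't know.'\n\nContext:\n{context}\n\nQuestion: {question}"
        )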

  • @andaldana · 9 months ago

    Great stuff! Thanks!

  • @micbab-vg2mu · 10 months ago +1

    Thank you for the video:)

  • @hassentangier3891 · 8 months ago

    Great! Do you have videos on using .docx files?

    • @engineerprompt · 8 months ago

      Thanks! The same approach will work, but you will need a separate loader for those files. Look into unstructured.io.
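
    A minimal sketch with the unstructured-backed Word loader from langchain_community (requires the unstructured package; the filename is illustrative):

        from langchain_community.document_loaders import UnstructuredWordDocumentLoader

        # Drop-in replacement for the PDF loader; chunking, embedding,
        # and the retrievers downstream stay exactly the same.
        loader = UnstructuredWordDocumentLoader("policies.docx")
        documents = loader.load()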

  • @zYokiS · 10 months ago +1

    Amazing video! How can you use this in a conversational chat engine? I have built conversational pipelines that use RAG; however, how would I do this here while using different retrievers?

    • @engineerprompt · 10 months ago +1

      This should work out of the box; you just need to replace your current retriever with the ensemble one.
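
    A minimal sketch of that swap in a conversational pipeline, reusing the chunks, vectorstore, and llm objects from the video:

        from langchain.chains import ConversationalRetrievalChain
        from langchain.memory import ConversationBufferMemory
        from langchain.retrievers import EnsembleRetriever
        from langchain_community.retrievers import BM25Retriever

        # Keyword + semantic retrieval, fused with weighted reciprocal rank fusion.
        bm25 = BM25Retriever.from_documents(chunks)
        bm25.k = 5
        semantic = vectorstore.as_retriever(search_kwargs={"k": 5})
        ensemble = EnsembleRetriever(retrievers=[bm25, semantic], weights=[0.5, 0.5])

        # The conversational chain takes the ensemble exactly like any other retriever.
        memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
        chat = ConversationalRetrievalChain.from_llm(llm=llm, retriever=ensemble, memory=memory)
        print(chat.invoke({"question": "What does the document say about X?"}))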

  • @12351624 · 10 months ago +1

    Amazing video, thanks!

  • @KOTAGIRISIVAKUMAR · 10 months ago

    Great effort and good content! 😇😇

  • @JanghyunBaek · 8 months ago

    @engineerprompt - Could you convert the notebook to LlamaIndex, if you don't mind?

  • @aneerpa8384 · 10 months ago

    Really helpful, thank you ❤

  • @TheZEN2011 · 10 months ago

    I'll have to try this one. Great video!

  • @clinton2312 · 10 months ago +2

    I get KeyError: 0 when I run this:
    # Vector store with the selected embedding model
    vectorstore = Chroma.from_documents(chunks, embeddings)
    What am I doing wrong? I added my HF token with read access the first time, and then with write access too...
    I would appreciate the help.
    Thanks for the video, though. It's amazing.

    • @goel323 · 8 months ago

      I am getting the same error.
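
    The thread doesn't resolve this, but one likely cause of KeyError: 0 at this step is the embeddings call returning an error payload (bad token, model still loading) instead of vectors, or an empty chunks list. A quick sanity check before building the store:

        from langchain_community.vectorstores import Chroma

        assert len(chunks) > 0, "No chunks -- check the loader/splitter."

        # Fails loudly here if the HF token or the embedding model name is wrong.
        probe = embeddings.embed_query("test")
        print(f"Embedding dimension: {len(probe)}")

        vectorstore = Chroma.from_documents(chunks, embeddings)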

  • @karanv293 · 9 months ago

    I don't know which RAG setup to implement. Are there benchmarks out there for the best solution? My use case will be hundreds of LONG documents, even textbooks.

  • @Tofipie · 9 months ago

    Thanks! I have 500k documents. I want to compute the keyword retriever once and call it the same way I call my external index for the dense vector DB. Is there a way?

  • @deixis6979 · 10 months ago

    Hello! Thanks for the video. I was wondering if we can use this on CSV files instead of PDFs? How would that affect the architecture?
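
    A minimal sketch assuming LangChain's CSVLoader (filename illustrative). Each row becomes one Document, so row-level chunking comes for free and the retrievers downstream don't change:

        from langchain_community.document_loaders import CSVLoader

        # One Document per row; metadata records the source file and row number.
        loader = CSVLoader(file_path="data.csv")
        documents = loader.load()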

  • @PallaviChauhan91 · 9 months ago

    Hi, I have a question, hope you reply. If we give it a PDF with a bunch of video transcripts and ask it to formulate a creative article based on the given info, can it actually do tasks like that? Or is it only useful for finding relevant information in the source files?

    • @engineerprompt · 9 months ago +1

      RAG is good for finding relevant information. For the use case you are describing, you would need to put everything into the LLM's context window so it can look at the whole file. Hope this helps.

    • @PallaviChauhan91 · 9 months ago

      @engineerprompt Can you point me to a good video/channel that focuses on accomplishing such things with local LLMs, or even ChatGPT-4?

  • @chrismathew638 · 10 months ago

    I'm using RAG for a coding model. Can anyone suggest a good retriever for this task? Thanks in advance!

  • @denb1568 · 10 months ago

    Can you add this functionality to localGPT?

  • @abhinandansharma3983 · 8 months ago

    "Where can I find the PDF data?"

    • @engineerprompt · 7 months ago

      You will need to provide your own PDF files.

  • @googleyoutubechannel8554 · 10 months ago +2

    Wait, this doesn't seem like RAG at all? If I'm following, the LLM is not using embedding vectors at all in the actual inference step. It seems you're using a complex text -> embedding -> search step to build a text search engine that just injects regular text into the context, rather than feeding embeddings directly into the model. Couldn't you generate that extra 'ad-hoc' search text to drop into the context window by any number of methods, embeddings -> DB -> text being only one of them? And this method has none of the advantages of actually 'grafting' embeddings onto the model, since you're using up the context window?

    • @s11-informationatyourservi44 · 10 months ago

      The whole point is to fix the broken part of RAG: the typical RAG implementation doesn't do too well with anything larger than a few docs.

  • @vamshi3676 · 9 months ago +1

    The background is a little distracting; it's better to avoid the flashy one. I couldn't concentrate on your lecture. Thank you.
