LangChain - Advanced RAG Techniques for better Retrieval Performance

  • Published: Sep 6, 2024

Comments • 54

  • @codingcrashcourses8533
    @codingcrashcourses8533  7 months ago +1

    Many requested a follow-up video with an example - Two-Stage Retrieval with Cross-Encoders: ruclips.net/video/3w_D1L0F-uE/видео.html
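
The linked follow-up covers two-stage retrieval: a fast first stage over-fetches candidates, then a cross-encoder re-scores each (query, document) pair and keeps only the top few. A minimal sketch of the rerank step, with a toy word-overlap scorer standing in for a real cross-encoder (e.g. one from the sentence-transformers library):

```python
from typing import Callable, List, Tuple

def rerank(query: str, candidates: List[str],
           score: Callable[[str, str], float], top_k: int = 3) -> List[str]:
    # Stage 2: re-score every (query, candidate) pair and keep the best.
    scored: List[Tuple[float, str]] = [(score(query, doc), doc) for doc in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:top_k]]

def overlap_score(query: str, doc: str) -> float:
    # Toy stand-in for a real cross-encoder, which would jointly encode
    # the query and the document instead of counting shared words.
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

docs = [
    "vector stores index embeddings",
    "cross encoders score query document pairs",
    "llms generate answers from retrieved context",
]
print(rerank("how do cross encoders score a query", docs, overlap_score, top_k=2))
```

In practice the first stage would be the vector store's similarity search with a large k, and `score` would call the cross-encoder model on each pair.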

  • @ultrainstinct6715
    @ultrainstinct6715 26 days ago

    Very informative content. Thank you so much for sharing.

  • @santasalo86
    @santasalo86 2 months ago

    Nice work! A few new methods of LangChain I was not aware of :)

  • @say.xy_
    @say.xy_ 8 months ago +1

    Already love your content ❤
    Would love to see you make Production Ready Chatbot Pt 2 along with the deployment part. Thank you for producing quality content for free.

    • @codingcrashcourses8533
      @codingcrashcourses8533  8 months ago

      Thank you! I'm currently working on a Udemy course which explains how to deploy a production-grade chatbot on Microsoft Azure. It's not free, but only costs a few bucks 🙂. I will release it in January. But of course I will continue to do videos on YT which are completely free.

    • @Peter-cd9rp
      @Peter-cd9rp 7 months ago

      @@codingcrashcourses8533 Very cool. Where is it? :D

  • @wylhias
    @wylhias 4 months ago

    Great, useful content with a clear explanation. 👍

  • @StyrmirSaevarsson
    @StyrmirSaevarsson 7 months ago

    Thank you so much for this tutorial! It is exactly the stuff I was looking for!

  • @Davi-do8iz
    @Davi-do8iz 8 days ago

    Awesome! Very useful


  • @sivajanumm
    @sivajanumm 8 months ago

    Thanks for the great video on this topic.
    Can you also post some videos related to LoRA with any LLM of your choice?

  • @gangs0846
    @gangs0846 8 months ago

    Absolutely fantastic

  • @newcooldiscoveries5711
    @newcooldiscoveries5711 7 months ago

    Excellent information!! Thank you. Liked and Subscribed.

    • @codingcrashcourses8533
      @codingcrashcourses8533  7 months ago +1

      Nice! Will release a follow-up video with a practical example on Monday ;-)

  • @danielbusquets3282
    @danielbusquets3282 4 months ago

    Liked and subscribed. Spot on!

  • @saurabhjain507
    @saurabhjain507 8 months ago

    Nice video. Can you please create a video on evaluation of RAG? I think a lot of people would be interested in this.

    • @codingcrashcourses8533
      @codingcrashcourses8533  8 months ago +1

      Thank you! That kind of video is currently not planned, since it's actually quite expensive to evaluate RAG output, and designing that experiment is PROBABLY something not many people would watch on RUclips. In addition, I am not really an expert on that topic. In my company our data scientists currently work on this^^

    • @prateek_alive
      @prateek_alive 8 months ago

      @@codingcrashcourses8533 What would be the right technique for evaluating a RAG pipeline? Could you share your thoughts in the chat?

  • @quengelbeard
    @quengelbeard 6 months ago +1

    Fantastic video! :D
    Quick question: Do you know how it's possible to create a local vector database that's queried via code, so the database doesn't get initialised each time the script is run?
    Would really appreciate your help!

    • @codingcrashcourses8533
      @codingcrashcourses8533  6 months ago +1

      You just have to use the correct constructor for that database class. Methods like from_documents are just helper functions to make that easier. Not sure if I understood your question correctly, though.

    • @quengelbeard
      @quengelbeard 6 months ago

      @@codingcrashcourses8533 Yeah, that answered my question pretty much, thanks a lot! Do you know which function I can use to create a local database that can also be passed to the SelfQueryRetriever.from_llm() constructor?
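
For readers with the same question: the trick is the build-once / load-afterwards pattern. With LangChain's Chroma integration that is roughly `Chroma.from_documents(docs, embeddings, persist_directory=path)` on the first run, then `Chroma(persist_directory=path, embedding_function=embeddings)` on later runs; the resulting store can be passed to `SelfQueryRetriever.from_llm()` like any other vector store. A dependency-free sketch of the pattern itself:

```python
import json
import os
import tempfile

def open_index(path: str, docs):
    # Later runs: the file already exists, so just load it.
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    # First run: build the index (the expensive embedding step in real
    # life) and persist it to disk.
    index = {str(i): doc for i, doc in enumerate(docs)}
    with open(path, "w") as f:
        json.dump(index, f)
    return index

path = os.path.join(tempfile.mkdtemp(), "toy_index.json")
index = open_index(path, ["doc one", "doc two"])
print(index["0"])
```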

  • @Chevignay
    @Chevignay 8 months ago

    Thank you so much, this is really good stuff.

    • @codingcrashcourses8533
      @codingcrashcourses8533  8 months ago

      Thanks for your comment :)

    • @Chevignay
      @Chevignay 8 months ago

      @@codingcrashcourses8533 You're welcome! I just bought your course, actually 🙂

  • @moonly3781
    @moonly3781 6 months ago

    Thank you for the amazing tutorial! I was wondering, instead of using ChatOpenAI, how can I utilize a Llama 2 model locally? Specifically, I couldn't find any implementation, for example, for contextual compression, where you pass compressor = LLMChainExtractor.from_llm(llm) with ChatOpenAI as the llm. How can I achieve this locally with Llama 2? My use case involves private documents, so I'm looking for solutions using open-source LLMs.

    • @codingcrashcourses8533
      @codingcrashcourses8533  6 months ago +1

      Sorry, I only use the OpenAI models due to my old computer. Can't really help you with that.
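
For what it's worth, LLMChainExtractor.from_llm() takes any LangChain LLM wrapper, so a locally hosted Llama 2 (e.g. via the HuggingFacePipeline or Ollama integrations) should slot in where ChatOpenAI was used. The idea behind the compressor, sketched with a toy keyword extractor instead of an LLM call:

```python
def compress(query: str, document: str) -> str:
    # Keep only the sentences that share at least one term with the query;
    # the real LLMChainExtractor asks the model to do this extraction.
    query_terms = set(query.lower().split())
    kept = [s for s in document.split(". ")
            if query_terms & set(s.lower().split())]
    return ". ".join(kept)

doc = "Llama 2 is open source. The weather is nice. Llama 2 runs locally"
print(compress("llama 2 local", doc))
```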

  • @syedhaideralizaidi1828
      @syedhaideralizaidi1828 8 months ago

    Thank you so much for making this video! You create valuable content. I just have one question. I'm currently utilizing the Azure Search Service, and I'm curious if it's feasible to integrate all the retrievers. I've attempted to use LangChain with it, but my options seem limited to searching with specific parameters and filters. Unfortunately, there's not a lot of information available on how to effectively use these retrievers in conjunction with the Azure Search Service.

    • @codingcrashcourses8533
      @codingcrashcourses8533  8 months ago

      I tried ACS before and also was not too happy with it. My biggest con is that ACS does not support the indexing API. I prefer Postgres/pgvector :)

  • @ghazouaniahmed766
    @ghazouaniahmed766 5 months ago

    Thank you! Can you handle the problem of retrieval when we ask a question outside the RAG context, or a greeting, for example?
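
One common answer (not covered in the video): route the message before retrieval, and only run the RAG pipeline for real domain questions. A hypothetical keyword router is sketched below; in practice a classifier or an LLM prompt usually makes this decision:

```python
def route(message: str) -> str:
    # Hypothetical smalltalk detector; a production router would use a
    # classifier or an LLM prompt ("is this a domain question?").
    greetings = {"hi", "hello", "hey", "thanks", "good morning"}
    if message.lower().strip("!?. ") in greetings:
        return "smalltalk"   # answer directly, skip retrieval
    return "rag"             # run the retrieval pipeline

print(route("Hello!"))        # smalltalk branch
print(route("What is RAG?"))  # rag branch
```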

  • @micbab-vg2mu
    @micbab-vg2mu 8 months ago

    Thank you for the video :). In your opinion, which method of retrieval will give me the most accurate output (cost is not as important in my case)? I work in the pharma industry - tolerance to LLM mistakes is very low.

    • @codingcrashcourses8533
      @codingcrashcourses8533  8 months ago +1

      I cannot give you a blueprint for that. Just try it out and experiment. You know your data, and there are so many different ways to improve performance. If cost does not matter, the easiest way is to use GPT-4 instead of GPT-3.5. Also try chain-of-thought prompting, and then use one of the techniques I showed in the notebooks. There are so many ways to improve performance :)

  • @theindianrover2007
    @theindianrover2007 7 months ago

    Thanks for the video. What are the x and y dimensions in the scatter plot (5:19)?

  • @yazanrisheh5127
    @yazanrisheh5127 8 months ago

    I'm a beginner here and I've been using LangChain from your videos. Is advanced RAG just something like my code below, where instead of using the search type "similarity" I use the types you showed in the video, while everything else stays the same (ConversationalRetrievalChain, prompt, memory, etc.)?
    retriever = knowledge_base.as_retriever(search_type="similarity_score_threshold", search_kwargs={"score_threshold": 0.8})
    Also, which would you recommend for retrieving from large documents? I need to do RAG over 80 PDF documents and have been struggling with accuracy.
    Lastly, in your OpenAI embeddings, why are you using chunk_size=1 when by default it's chunk_size=1000? Can you explain this part as well, please? Thank you in advance.

    • @codingcrashcourses8533
      @codingcrashcourses8533  8 months ago +1

      The advanced techniques also work with memory etc., but with the high-level chains I showed it may become a little bit difficult and "hacky".
      In general I don't set any scores, but just retrieve the best documents. I also don't have an answer for setting a good threshold. In general I recommend using the get_relevant_documents method on the retriever interface for getting documents.
      I set the chunk_size to 1 due to rate-limit errors I often experienced. With higher chunk sizes it seems to make too many requests at once.
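
For clarity, the similarity_score_threshold search type conceptually reduces to filtering candidates by a normalised relevance score; picking the threshold value (0.8 in the question above) is the hard part. A toy sketch of that filtering step:

```python
def threshold_retrieve(scored_docs, score_threshold=0.8):
    # Keep only candidates whose (normalised) similarity clears the bar;
    # an empty result means nothing was similar enough to the query.
    return [doc for doc, score in scored_docs if score >= score_threshold]

candidates = [("chunk about RAG", 0.91),
              ("unrelated chunk", 0.42),
              ("borderline chunk", 0.80)]
print(threshold_retrieve(candidates))
```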

  • @vicvicking1990
    @vicvicking1990 2 months ago

    Wait, what? I thought FAISS didn't support metadata filters?
    Weird that TimeWeighted works with it, no?

    • @codingcrashcourses8533
      @codingcrashcourses8533  2 months ago +1

      I am not too familiar with every change; FAISS is also a work in progress. Maybe they added it in some version :)

    • @vicvicking1990
      @vicvicking1990 2 months ago

      @@codingcrashcourses8533 In any case, your video is amazing and you are greatly helping me with my internship project.
      Many thanks, keep up the great work 💪👍
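
On the FAISS question above: LangChain's FAISS wrapper does accept a filter argument on similarity search, implemented (to my understanding) as a metadata post-filter over an over-fetched candidate set rather than inside the FAISS index itself. The post-filtering step is just:

```python
def metadata_filter(docs, flt):
    # Keep documents whose metadata matches every key/value in the filter.
    return [d for d in docs
            if all(d["metadata"].get(k) == v for k, v in flt.items())]

docs = [
    {"text": "old note", "metadata": {"year": 2021}},
    {"text": "new note", "metadata": {"year": 2023}},
]
print([d["text"] for d in metadata_filter(docs, {"year": 2023})])
```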

  • @karthikb.s.k.4486
    @karthikb.s.k.4486 8 months ago

    Nice tutorial. May I know the theme used for Visual Studio Code, please?

    • @codingcrashcourses8533
      @codingcrashcourses8533  8 months ago

      Material Theme dark :)

    • @karthikb.s.k.4486
      @karthikb.s.k.4486 8 months ago

      @@codingcrashcourses8533 A link for the theme, please, as I see a lot of Material themes among the marketplace extensions.

  • @akshaykumarmishra2129
    @akshaykumarmishra2129 8 months ago

    Hi, in RetrievalQA from LangChain we have a retriever that retrieves docs from a vector DB and provides context to the LLM. Let's say I'm using GPT-3.5, whose max tokens is 4096... how do I handle a huge context to be sent to it? Any suggestions will be appreciated.

    • @codingcrashcourses8533
      @codingcrashcourses8533  8 months ago

      GPT-3.5 Turbo allows 32k tokens I guess, GPT-4 Turbo 128k. If you really need that large context window, my go-to approach as of the end of 2023 would be to use models with larger context windows. There are also map-reduce methods to reduce the context, but these also make many requests before sending a final one.
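
The map-reduce approach mentioned here can be sketched without any LLM calls: summarise (map) each chunk separately, then combine (reduce) the partial results into one prompt that fits the context window. The summarize and combine functions below are toy stand-ins for LLM calls:

```python
from typing import Callable, List

def map_reduce(chunks: List[str], summarize: Callable[[str], str],
               combine: Callable[[List[str]], str]) -> str:
    partial = [summarize(chunk) for chunk in chunks]  # map: shrink each chunk
    return combine(partial)                           # reduce: merge summaries

# Toy stand-ins: a real pipeline would call the LLM in both steps.
result = map_reduce(
    ["first long chunk of text", "second long chunk of text"],
    summarize=lambda c: " ".join(c.split()[:2]),
    combine=lambda parts: " | ".join(parts),
)
print(result)
```

The trade-off the reply points out: map-reduce fits any context window, but costs one request per chunk plus the final combine request.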

  • @rafaykhattak4470
    @rafaykhattak4470 1 month ago

    Can we combine all of them?

    • @codingcrashcourses8533
      @codingcrashcourses8533  1 month ago

      Yes, but you probably should not, since latency is also a key part of an app.

  • @whitedeviljr9351
    @whitedeviljr9351 8 months ago

    PDFInfoNotInstalledError: Unable to get page count. Is poppler installed and in PATH?

  • @lefetznove3185
    @lefetznove3185 1 month ago

    Hmm... you forgot to remove your OpenAI API key from the source code!

  • @alex.5801
    @alex.5801 23 days ago

    What is your email for business?