Build RAG application with Gemini using Langchain | How to use Gemini with Langchain | Karndeep Singh

  • Published: 1 Nov 2024

Comments • 50

  • @meetarpitjain · 6 months ago +1

    Watched dozens of videos on RAG. One of the best tutorials. Thanks.

  • @Techwithusman-fq2hd · 17 days ago

    Excellent video on RAG systems. Thanks for the informative overview of RAG.

  • @anigilajeyusuf8201 · 19 hours ago

    Hi, I want to build a RAG system to solve a logistics supply problem, one that can recommend the nearest logistics company in real time by retrieving information from a Google API and data about the logistics companies. How do I go about it, please?

  • @gandharvsikri1378 · 1 month ago

    Good explanation, just a query:
    Is this a RAG-on-the-fly scenario?
    Like, if there is a use case where we don't reuse the KB, and we have to create a new KB for different types of queries, is that scenario similar to what is explained in this code?

  • @Thanks_Only_For_God · 3 months ago

    Very good explanation, thanks.

  • @ExpoDev_Dash · 42 minutes ago

    Hey, what's the difference between this and Vertex AI?

  • @bharanidharansundar907 · 8 months ago +1

    Hi, instead of PDFs, can we embed a bunch of images and retrieve them based on the prompt's similarity with the images? If the prompt says 'red saree', the result should be the images of red sarees from the vector DB. If it's possible, any recommended embedding models for that?

    • @karndeepsingh · 6 months ago

      Yes, you can. Maybe you can use an open-source CLIP model, or LLaVA models.
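The idea behind that answer: encode the images and the text prompt into the same embedding space, then rank by similarity. A minimal sketch of the ranking step, using hard-coded toy vectors in place of real CLIP embeddings (in practice you would encode both sides with the same CLIP model, e.g. sentence-transformers' "clip-ViT-B-32"; the file names and vectors here are made up):

```python
import numpy as np

# Toy stand-ins for CLIP embeddings; real ones come from encoding
# the images and the prompt with the same CLIP model.
image_embeddings = {
    "red_saree.jpg":  np.array([0.9, 0.1, 0.0]),
    "blue_shirt.jpg": np.array([0.1, 0.9, 0.0]),
}
prompt_embedding = np.array([0.8, 0.2, 0.0])  # pretend embedding of "red saree"

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank images by similarity to the prompt, most similar first.
ranked = sorted(image_embeddings,
                key=lambda name: cosine(prompt_embedding, image_embeddings[name]),
                reverse=True)
print(ranked[0])  # the red saree image ranks first
```

A vector database such as ChromaDB performs this same nearest-neighbour ranking at scale, so you would store the image embeddings there rather than in a dict.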

  • @vinodsagar2412 · 3 months ago

    Bro, thank you for this project.

  • @venkateshpolisetty5624 · 10 months ago

    Hi, nice explanation. You used RetrievalQA. My question is: what is the use of load_qa_chain?

    • @karndeepsingh · 10 months ago

      load_qa_chain is also used to extract answers from documents, but there you need to pass all the documents, whereas RetrievalQA selects the top documents based on query-document similarity provided by the vector database.
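That difference can be sketched in plain Python, with no LangChain required (`answer_from` is a hypothetical stand-in for the LLM call, and keyword overlap stands in for vector similarity):

```python
docs = ["RAG combines retrieval with generation.",
        "ChromaDB stores embedding vectors.",
        "Bananas are yellow."]

def answer_from(selected_docs, question):
    # Hypothetical stand-in for the LLM call: reports what it was given.
    return f"answered {question!r} from {len(selected_docs)} document(s)"

# load_qa_chain style: every document is stuffed into the prompt.
stuffed = answer_from(docs, "What is RAG?")

# RetrievalQA style: a retriever first narrows to the top-k relevant docs.
def retrieve(question, k=1):
    words = [w.strip("?").lower() for w in question.split()]
    score = lambda d: sum(w in d.lower() for w in words)
    return sorted(docs, key=score, reverse=True)[:k]

focused = answer_from(retrieve("What is RAG?"), "What is RAG?")
```

With many documents, the stuffed variant overflows the context window and wastes tokens on irrelevant text, which is why the retrieval step matters.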

  • @syedahafsadeveloper · 17 days ago

    thanks

  • @SagarAhirrao2709 · 4 months ago

    How can I get an answer related to the previously asked question, to maintain context with RAG?
    Submitting the previous question-answer pairs would consume more tokens and might differ from the context.
    Thanks ❤

  • @Alessandro-un9dr · 6 months ago

    In the way you implemented it, is the model capable of knowing what was previously asked? Or does it only retrieve documents, but not the content of previous interactions?

    • @karndeepsingh · 6 months ago

      We can also submit the previous chat history to the LLM.
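A toy sketch of what "submitting chat history" means in practice: previous (question, answer) turns get folded into the prompt alongside the retrieved context, so follow-up questions can refer back to earlier answers. (This is plain illustrative Python; in LangChain, conversational retrieval chains do something similar by condensing history into the query.)

```python
def build_prompt(context, history, question):
    """Fold previous (question, answer) turns into the prompt so the
    model can resolve follow-ups that refer back to earlier answers."""
    turns = "\n".join(f"User: {q}\nAssistant: {a}" for q, a in history)
    return (f"Context:\n{context}\n\n"
            f"Chat history:\n{turns}\n\n"
            f"User: {question}\nAssistant:")

history = [("What is RAG?", "Retrieval-augmented generation.")]
prompt = build_prompt("...retrieved documents go here...",
                      history,
                      "Does it reduce hallucinations?")
```

As noted above, every turn you keep costs tokens, so long conversations are usually truncated or summarized before being re-submitted.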

  • @jayanthAILab · 8 months ago +3

    Very low sound, bro.

  • @karthikb.s.k.4486 · 10 months ago

    Nice. What is the purpose of RAG? Why do we need to use it? Can you please explain?

    • @karndeepsingh · 10 months ago +3

      RAG has some good advantages:
      1. With RAG, LLMs are able to respond based on knowledge they were not trained on.
      2. It reduces the LLM's hallucinations and restricts the context to the application's needs.
      3. RAG also helps guide and improve an LLM's results by giving it relevant context.

    • @karthikb.s.k.4486 · 10 months ago

      @@karndeepsingh Thank you

  • @ananyaredhu744 · 9 months ago

    Will the model always access Gemini's API to generate answers, or does it have the capability to answer FAQs from the knowledge base rather than going to the API every time?

    • @karndeepsingh · 9 months ago

      Ideally you should prepare a list of FAQs so that the LLM answers with respect to the context of your specific knowledge base, rather than relying on its own knowledge, which could lead to hallucinations.

  • @thefurreverfriends · 8 months ago

    Hello sir, I want to make a social media caption generator web app using the Gemini API, but for some inputs it gives answers in a random language. I am not sure why that is happening, because I have specified that I want the output in English.

    • @shoutdaola4452 · 1 month ago

      Use a caption-generator model from Hugging Face, like BLIP, etc.

  • @ishavmahajan · 10 months ago

    Suppose I only have tabular data in a PDF file. Will the same code be able to generate answers?

  • @shaktidharreddy6822 · 10 months ago

    Is the Gemini Pro model you used here free, or chargeable like GPT-4?

    • @karndeepsingh · 10 months ago

      Right now it's free, with 60 requests per minute.

  • @PratheekBabu · 4 months ago

    Hi, if I pass 10 PDFs, can I get the name of the PDF from which the answer is retrieved in the source documents?

    • @alexbuccheri5635 · 2 months ago +1

      Yes, he has actually enabled source retrieval by setting `return_source_documents=True` when creating qa_chain (see 24:06, for example). The sources are returned in the result dict; you just don't see them here because he prints result['result']. That said, this API is already deprecated. QA chains now look like:
      qa_chain = (
          {"context": vs.as_retriever(search_kwargs={"k": 3}) | format_docs,
           "question": RunnablePassthrough()}
          | prompt
          | llm
          | StrOutputParser()
      )

  • @_mohamedesmat · 8 months ago

    Can I train Gemini on custom data and export this new model into an online chatbot app?

    • @alexbuccheri5635 · 2 months ago

      You're not training the LLM. RAG gives a context window of better-quality or more focused information on which to query. But yes, you can create a chatbot or agent with a RAG layer and deploy it online.

  • @ananyaredhu744 · 9 months ago

    I used the exact same code as yours, but the model is not generating answers. Sometimes it generates answers, but most of the time it shows "I don't know the answer". (I used the same PDF as yours.)

    • @karndeepsingh · 9 months ago

      It may possibly be because the answers to the questions you asked are not present in the PDF. Or you need to improve the similarity algorithm so that it finds the nearest answers to your questions from the PDF.

    • @ananyaredhu744 · 9 months ago

      @@karndeepsingh The questions that I asked were related to the topics presented in the PDF, but it is still not able to answer.
      How should I improve the similarity algorithms in your code?

    • @karndeepsingh · 9 months ago

      You can change from mpnet to a bge model or m2-bert.
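With LangChain's Hugging Face wrapper, swapping the embedding model is a one-line configuration change. A sketch (the model IDs below are the commonly used Hugging Face names for mpnet and bge; verify them before use, since downloading and running the models requires the sentence-transformers backend):

```python
from langchain_community.embeddings import HuggingFaceEmbeddings

# mpnet baseline used in the video's setup
embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-mpnet-base-v2")

# swap in a bge model for (often) stronger retrieval quality
embeddings = HuggingFaceEmbeddings(
    model_name="BAAI/bge-base-en-v1.5")
```

Note that after changing the embedding model you must re-embed and re-index the documents; vectors from different models are not comparable.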

    • @santhoshmanoharan8969 · 7 months ago

      @@karndeepsingh Even I'm getting an "I don't know the answer" response.

  • @markjoshua8971 · 10 months ago

    What about using Pinecone as a vector DB? I tried to switch from ChromaDB to Pinecone, but it doesn't work.

    • @karndeepsingh · 10 months ago

      You need to check the Pinecone integration for handling multimodal inputs and outputs.

    • @markjoshua8971 · 10 months ago

      @@karndeepsingh So it's not the same approach as using PaLM? Because I already have a working RAG system using PaLM and RetrievalQA, but when I changed the model to Gemini it doesn't work anymore (of course, I already installed and imported the necessary packages for Gemini, like in the video).

  • @kamitp4972 · 9 months ago

    Can it extract tables from PDFs?

    • @aquiveal · 8 months ago

      You need to build a parser that first extracts the tables properly.

    • @kamitp4972 · 8 months ago

      @@aquiveal How can I do it? Please help.
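One workable recipe for the table question above: extract each table from the PDF, then flatten every row into a sentence so it embeds and retrieves well. A sketch, assuming the pdfplumber library for the extraction step; `table_to_text` is a hypothetical helper, not part of any library:

```python
def table_to_text(header, rows):
    """Flatten an extracted table into row-wise sentences for embedding."""
    return [", ".join(f"{h}: {v}" for h, v in zip(header, row))
            for row in rows]

lines = table_to_text(["Item", "Qty"], [["Apples", "3"], ["Pears", "5"]])
print(lines[0])  # Item: Apples, Qty: 3

# With pdfplumber the extraction loop would look roughly like:
# import pdfplumber
# with pdfplumber.open("doc.pdf") as pdf:
#     for page in pdf.pages:
#         for table in page.extract_tables():   # list of rows
#             chunks = table_to_text(table[0], table[1:])
```

The flattened "column: value" sentences can then be chunked and embedded like any other document text, which usually answers row-level questions much better than embedding the raw table layout.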

  • @kroax9720 · 3 months ago

    Is Gemini AI Pro free?

    • @alexbuccheri5635 · 2 months ago

      No, but the free trial lasts for 2 months. In my experience, Gemini pro was piss-poor with code generation, I much prefer GPT-4o.

  • @AJITHKODAKATERIPUDHIYAVEETIL · 6 months ago

    Hi Karn, this is a great video. I had a question: suppose I have some customer conversation data from our chat application on the website, and I want a question-answering system where I can ask questions about the data, like "what are the top concerns customers come to chat about" or "how can we improve the customer experience". Do you suggest going with the RAG approach, or is there a better way? The reason I'm asking is that in this case the data is not going to be structured like it would be in a PDF document. Looking forward to your reply.

    • @karndeepsingh · 6 months ago

      You can use RAG to build a restricted question-answering system for your use case.