RAG with Llama-Index: Vector Stores

  • Published: 25 Oct 2024

Comments • 26

  • @engineerprompt
    @engineerprompt  1 year ago

    Want to connect?
    💼Consulting: calendly.com/engineerprompt/consulting-call
    🦾 Discord: discord.com/invite/t4eYQRUcXB
    ☕ Buy me a Coffee: ko-fi.com/promptengineering
    🔴 Join Patreon: Patreon.com/PromptEngineering

  • @AchiniHewawitharana
    @AchiniHewawitharana 11 months ago +2

    This tutorial series is great! The best one I've found so far. Thank you for sharing it.

  • @sanjaybhatikar
    @sanjaybhatikar 1 month ago

    Outstanding video content

  • @gregorykarsten7350
    @gregorykarsten7350 1 year ago +3

    Great work, excellent topic. LlamaIndex opens up so many more possibilities for RAG. I'm very interested in building a knowledge base that gets added to on a daily basis. What do you think of a knowledge graph in this context?

  • @vitalis
    @vitalis 1 year ago

    Super interesting, looking forward to the video

  • @fuba44
    @fuba44 1 year ago

    This was great, love this kind of content! ❤❤❤

  • @anilshinde8025
    @anilshinde8025 11 months ago +1

    Great video, thanks. Waiting for the addition of a local LLM to the same code.

  • @hassentangier3891
    @hassentangier3891 1 year ago +1

    Awesome work, as always.
    Can you point to documentation or a video on how to update the ChromaDB in this context?

  • @kdlin1
    @kdlin1 8 months ago +1

    Why is an OpenAI API key needed when it does not use OpenAI? Thanks!

  • @Rahul-zq8ep
    @Rahul-zq8ep 10 months ago

    Great, I understood most of the explanation in the video, but where is the RAG implementation in it? I also created a vector_store, storage_context, index, etc. when implementing a chatbot with my data, but I'm confused about how to implement RAG as an added functionality.

  • @smoq20
    @smoq20 1 year ago +1

    I always seem to run into the problem of exclusions when using vector similarity search for RAG. I.e., when you run a query like "Tell me everything you know about dogs other than Labradors," guess which documents will be returned in the first 10 (assuming you have a lot of chunks)? Yes, the ones about Labradors. Has anyone figured out a way around that yet?
    I've been attempting to filter out results when queries include exclusions using additional LLM passes, but only GPT-4 seems to have enough brains to do it correctly. PaLM 2 gets it right in 50% of cases.

  • @Kishorekkube
    @Kishorekkube 1 year ago +1

    Self-hosting? Seems interesting.

  • @saikashyapcheruku6103
    @saikashyapcheruku6103 8 months ago

    Is there a way to get around the rate-limit error for the OpenAI API?
    Additionally, why is OpenAI being used even after specifically setting the service context?

  • @arkodeepchatterjee
    @arkodeepchatterjee 1 year ago +1

    Please make the video comparing different embedding models.

  • @scorpionrevenge
    @scorpionrevenge 9 months ago

    I keep receiving this error:
    cannot import name 'Doc' from 'typing_extensions'
    I am trying to run your code in a Jupyter notebook environment. Can you please help and let me know how to create a vector DB?

  • @toannn6674
    @toannn6674 1 year ago

    I have 2 million chunks of text data; I used ChromaDB but it didn't work. Can you help me?

  • @shubhamanand9095
    @shubhamanand9095 1 year ago

    Can you share the full architecture diagram?

  • @memsSudar
    @memsSudar 9 months ago

    Hey, I have a question: now that we have ingested our data into the vector DB, how do we retrieve answers without running the ingestion code every time?

    • @chrisksjdvs603
      @chrisksjdvs603 6 months ago

      Setting up the vector store as persistent should help, as he says in the video. Once your data is stored, you just need to load the vector store to query the data, if I understand it correctly.

  • @hiramcoriarodriguez1252
    @hiramcoriarodriguez1252 1 year ago

    Is this a LangChain competitor library?

  • @devikasimlai4767
    @devikasimlai4767 4 months ago

    1:30 onwards

  • @srikanth1107262
    @srikanth1107262 9 months ago

    Would like to have a video on a locally downloaded model (Llama 2 GGML/GGUF) using LlamaIndex to build a RAG pipeline with ChromaDB. Thank you for the videos; they help a lot.