Reliable, fully local RAG agents with LLaMA3.2-3b

  • Published: 22 Nov 2024

Comments • 46

  • @homeandr1
    @homeandr1 1 month ago +11

    Great explanation, would be great to do one more tutorial using multimodal local RAG, considering the different chunks like tables, texts, and images, where you can use unstructured, Chroma, and MultiVectorRetriever completely locally.

  • @ytaccount9859
    @ytaccount9859 1 month ago +3

    Awesome stuff. Langgraph is a nice framework. Stoked to build with it, working through the course now!

  • @leonvanzyl
    @leonvanzyl 1 month ago +25

    The tutorial was "fully local" up until the moment you introduced Tavily 😜😉.
    Excellent tutorial Lance 👍

    • @sergeisotnik
      @sergeisotnik 1 month ago

      Any internet search, by definition, is no longer local. However, embeddings here are used from a third-party service (where only the first 1M tokens are free).

    • @starbuck1002
      @starbuck1002 1 month ago +4

      @@sergeisotnik He's using the nomic-embed-text embedding model locally, so there is no token cap at all.

    • @sergeisotnik
      @sergeisotnik 1 month ago +7

      @@starbuck1002 It looks like you're right. I saw that `from langchain_nomic.embeddings import NomicEmbeddings` is used, which usually means an API call. But in this case, the initialization is done with the parameter `inference_mode="local"`. I didn’t check the documentation, but it seems that in this case, the model is downloaded from HuggingFace and used for local inference. So, you’re right, and I was wrong.

  • @ravivarman7291
    @ravivarman7291 1 month ago +1

    Amazing session and content explained very nicely in just 30 mins; Thanks so much

  • @becavas
    @becavas 1 month ago +5

    Why did you use llama3.2:3b-instruct-fp16 instead of llama3.2:3b?

  • @sunderrajan6172
    @sunderrajan6172 1 month ago

    Beautifully done; thanks

  • @adriangpuiu
    @adriangpuiu 1 month ago +4

    @lance, please add the LangGraph documentation to the chat. The community will appreciate that. Let me know what you think.

  • @LandryYvesJoelSebeogo
    @LandryYvesJoelSebeogo 2 days ago

    may GOD bless you Bro

  • @joxxen
    @joxxen 1 month ago

    You are amazing, like always. Thank you for sharing

  • @marcogarciavanbijsterveld6178
    @marcogarciavanbijsterveld6178 25 days ago +1

    I'm a med student interested in experimenting with the following: I'd like to have several PDFs (entire medical books) from which I can ask a question and receive a factually accurate, contextually appropriate answer, thereby avoiding online searches. I understand this could potentially work using your method (omitting web searches), but am I correct in thinking this would require a resource-intensive, repeated search process?
    For example, if I ask a question about heart failure, the model would need to sift through each book and chapter until it finds the relevant content. This would likely be time-consuming initially. However, if I then ask a different question, say on treating systemic infections, the model would go through the entire set of books and chapters again, rather than narrowing down based on previous findings.
    Is there a way for the system to 'learn' where to locate information after several searches? Ideally, after numerous queries, it would be able to access the most relevant information efficiently without needing to reprocess the entire dataset each time, while maintaining factual accuracy and avoiding hallucinations.

    • @JesterOnCrack
      @JesterOnCrack 4 days ago

      I'll take a minute to try and answer your question to the best of my ability.
      Basically, what you are describing are ideas that seem sound for your specific application, but are not useful everywhere. Whenever you restrict search results, there is a chance you're not finding the one correct answer you needed. Speaking from experience, even a tiny chance of not finding what you need is enough to deter many customers.
      Of course, your system would gain efficiency in the tradeoff, completing queries more quickly.
      The bottom line is, there are ways to achieve this with clever data and AI engineering. I don't think there is a single straightforward fix to your problem, though.

  • @arekkusub6877
    @arekkusub6877 1 day ago

    Interesting: you basically use an old-school workflow to orchestrate the steps of LLM-based atomic tasks. But what about letting the LLM execute the workflow and also perform all the required atomic tasks itself? That would be more of an agentic approach...

  • @VictorDykstra
    @VictorDykstra 1 month ago

    Very well explained.😊

  • @Togowalla
    @Togowalla 1 month ago

    Great video. What tool did you use to illustrate the nodes and edges in your notebook?

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 1 month ago

    Could you consider doing an example of the contextual retrieval that Anthropic recently introduced?

  • @davesabra4320
    @davesabra4320 28 days ago

    Thanks, it is indeed very cool. Last time you used 32 GB; do you think this will run with 16 GB of memory?

  • @thepeoplesailab
    @thepeoplesailab 1 month ago

    Very informative ❤❤

  • @AlexEllis
    @AlexEllis 1 month ago

    Thanks for the video and sample putting all these parts together. What did you use to draw the diagram at the beginning of the video? Was it generated by a DSL/config?

  • @henson2k
    @henson2k 1 month ago +1

    You make the LLM do all the hard work of candidate filtering.

  • @developer-h6e
    @developer-h6e 24 days ago

    Is it possible to make an agent that, when provided with a few hundred links, extracts the info from all of the links and stores it?

  • @hari8568
    @hari8568 1 month ago

    Is there an elegant way to handle recursion errors?
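    One common pattern, sketched here in plain Python with hypothetical names (not the video's exact code): bound the loop explicitly in the graph state, so a conditional edge routes to a terminal node once a retry counter hits a cap and the run ends cleanly instead of raising a recursion error.

```python
# Sketch of a conditional-edge routing function: give up deterministically
# once a retry counter in the graph state reaches a cap, rather than
# looping until the framework's recursion limit is exceeded.
def route_after_grading(state: dict, max_retries: int = 3) -> str:
    if state["loop_step"] >= max_retries:
        return "max_retries_reached"   # route to END / a fallback node
    if state["answer_is_grounded"]:
        return "finish"
    return "retry_generation"

# Keeps retrying while the answer is ungrounded and under the cap...
assert route_after_grading({"loop_step": 1, "answer_is_grounded": False}) == "retry_generation"
# ...and stops cleanly at the cap.
assert route_after_grading({"loop_step": 3, "answer_is_grounded": False}) == "max_retries_reached"
```

    LangGraph also takes a `recursion_limit` in the invoke config as a hard backstop, but the counter-in-state pattern fails gracefully rather than raising.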

  • @beowes
    @beowes 1 month ago

    Question: you have operator.add on the loop_step, but then increment the loop_step in the state too… Am I wrong in thinking that it would then be incorrect?
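    On the reducer point: with a state key declared as `Annotated[int, operator.add]`, LangGraph combines a node's returned value with the existing one rather than overwriting it, so a node should return the bare increment (e.g. `1`), not the current value plus one. A minimal plain-Python mimic of that merge semantics (a hypothetical helper, not LangGraph internals):

```python
import operator

# Hypothetical mimic of a reducer merge: keys that have a reducer
# (as declared via Annotated[int, operator.add] in LangGraph) combine
# the old value with the node's returned value; other keys are overwritten.
def merge_update(state: dict, update: dict, reducers: dict) -> dict:
    merged = dict(state)
    for key, value in update.items():
        if key in reducers:
            merged[key] = reducers[key](merged[key], value)
        else:
            merged[key] = value
    return merged

reducers = {"loop_step": operator.add}
state = {"loop_step": 0, "generation": None}

# Each node returns the increment (1); the reducer adds it to the total.
state = merge_update(state, {"loop_step": 1, "generation": "draft"}, reducers)
state = merge_update(state, {"loop_step": 1}, reducers)
assert state["loop_step"] == 2
```

    If a node instead returned the already-incremented value (current + 1), the reducer would add it on top of the existing count and double-count the steps.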

  • @fernandobarros9834
    @fernandobarros9834 1 month ago

    Great tutorial! Is it necessary to add a prompt format?

    • @skaternationable
      @skaternationable 1 month ago +1

      Using PromptTemplate/ChatPromptTemplate works as well. It seems that the .format here is equivalent to the `input_variables` param within those two classes.

    • @fernandobarros9834
      @fernandobarros9834 1 month ago

      @@skaternationable Thanks!
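    For reference, plain `str.format` works without any prompt classes; `PromptTemplate` performs the same substitution with declared `input_variables`. A minimal sketch (the prompt wording is illustrative, not the video's exact prompt):

```python
# Plain str.format: the template is an ordinary Python string with named
# placeholders, filled in just before the message is sent to the model.
rag_prompt = """You are an assistant for question-answering tasks.

Here is the context to use:

{context}

Now answer this question concisely:

{question}"""

formatted = rag_prompt.format(
    context="LangGraph lets you build stateful, multi-step LLM workflows.",
    question="What is LangGraph for?",
)

assert "What is LangGraph for?" in formatted
assert "{context}" not in formatted   # all placeholders were substituted
```

    With `PromptTemplate(template=..., input_variables=["context", "question"])`, calling `.format(...)` on the template object produces the same string.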

  • @sidnath7336
    @sidnath7336 1 month ago

    If different tools require different keyword arguments, how can these be passed in for the agent to access?

  • @johnrogers3315
    @johnrogers3315 1 month ago

    Great tutorial, thank you

  • @andresmauriciogomezr3
    @andresmauriciogomezr3 1 month ago

    thank you

  • @jamie_SF
    @jamie_SF 1 month ago

    Awesome

  • @serychristianrenaud
    @serychristianrenaud 1 month ago

    thanks

  • @ericlees5534
    @ericlees5534 25 days ago

    Why does he make it look so easy…

  • @SavvasMohito
    @SavvasMohito 1 month ago

    That's a great tutorial that shows the power of LangGraph. It's impressive you can now do this locally with decent results. Thank you!

  • @ephimp3189
    @ephimp3189 1 month ago

    Is it possible to add a "fact checker" method? What if the answer is obtained from a document that gives false information? It would technically answer the question, just not truthfully.

  • @aiamfree
    @aiamfree 1 month ago

    it's sooooo fast!

  • @ghostwhowalks2324
    @ghostwhowalks2324 1 month ago

    Amazing stuff which can be done with a few lines of code. Disruption coming everywhere.

  • @HarmonySolo
    @HarmonySolo 1 month ago +10

    LangGraph is too complicated; you have to implement State, Node, etc. I would prefer to implement the agent workflow by myself, which is much easier; at least I do not need to learn how to use LangGraph.

    • @generatiacloud
      @generatiacloud 25 days ago +1

      Any repo to share?

    • @RazorCXTechnologies
      @RazorCXTechnologies 24 days ago

      Excellent tutorial! Another, easier option is to use n8n instead, because it has LangChain integration with AI agents built in, and almost no code is required to achieve the same functionality. n8n also has an automatic chatbot interface and webhooks.

    • @kgro353
      @kgro353 16 days ago

      Langflow is the best solution.

  • @_Learn_With_Me_EraofAI
    @_Learn_With_Me_EraofAI 1 month ago

    Unable to access ChatOllama.

  • @HELLODIMENSION
    @HELLODIMENSION 1 month ago

    You have no idea how much u saved me 😂 salute 🫡 thank u.