Great explanation. It would be great to do one more tutorial on multimodal local RAG, covering different chunk types like tables, text, and images, using unstructured, Chroma, and MultiVectorRetriever completely locally.
Awesome stuff. Langgraph is a nice framework. Stoked to build with it, working through the course now!
The tutorial was "fully local" up until the moment you introduced Tavily 😜😉.
Excellent tutorial Lance 👍
Any internet search, by definition, is no longer local. However, the embeddings here come from a third-party service (where only the first 1M tokens are free).
@@sergeisotnik He's using the nomic-embed-text embedding model locally, so there is no token cap at all.
@@starbuck1002 It looks like you're right. I saw that `from langchain_nomic.embeddings import NomicEmbeddings` is used, which usually means an API call. But in this case, the initialization is done with the parameter `inference_mode="local"`. I didn’t check the documentation, but it seems that in this case, the model is downloaded from HuggingFace and used for local inference. So, you’re right, and I was wrong.
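For reference, roughly what that local initialization looks like; this is a sketch, and the exact model name and defaults may differ from what the video uses:

```python
# Sketch: local Nomic embeddings, no API calls (the model weights are pulled
# from Hugging Face on first use and inference runs on your machine).
from langchain_nomic.embeddings import NomicEmbeddings

embeddings = NomicEmbeddings(
    model="nomic-embed-text-v1.5",   # assumed model name
    inference_mode="local",          # run locally instead of calling the Nomic API
)

vector = embeddings.embed_query("What is agentic RAG?")
print(len(vector))  # embedding dimension
```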
Amazing session, and the content is explained very nicely in just 30 minutes. Thanks so much!
Why did you use llama3.2:3b-instruct-fp16 instead of llama3.2:3b?
Beautifully done; thanks
@lance, please add the LangGraph documentation to the chat. The community will appreciate that. Let me know what you think.
may GOD bless you Bro
You are amazing, like always. Thank you for sharing
I'm a med student interested in experimenting with the following: I'd like to have several PDFs (entire medical books) from which I can ask a question and receive a factually accurate, contextually appropriate answer, thereby avoiding online searches. I understand this could potentially work using your method (omitting web searches), but am I correct in thinking this would require a resource-intensive, repeated search process?
For example, if I ask a question about heart failure, the model would need to sift through each book and chapter until it finds the relevant content. This would likely be time-consuming initially. However, if I then ask a different question, say on treating systemic infections, the model would go through the entire set of books and chapters again, rather than narrowing down based on previous findings.
Is there a way for the system to 'learn' where to locate information after several searches? Ideally, after numerous queries, it would be able to access the most relevant information efficiently without needing to reprocess the entire dataset each time, while maintaining factual accuracy and avoiding hallucinations.
I'll take a minute to try and answer your question to the best of my ability.
Basically, what you are describing are ideas that seem sound for your specific application, but are not useful everywhere. Whenever you restrict search results, there is a chance you're not finding the one correct answer you needed. Speaking from experience, even a tiny chance of not finding what you need is enough to deter many customers.
Of course, your system would gain efficiency in return, completing queries more quickly.
Bottom line: there are ways to achieve this with clever data and AI engineering, but I don't think there is a single straightforward fix to your problem. One common building block, indexing the books into a persistent vector store once so later queries don't reprocess them, is sketched below.
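A sketch of that building block, not the video's exact setup; file paths, model names, and chunk sizes are placeholders. The books are embedded and indexed once; afterwards each question only embeds the query and does a similarity search, so the PDFs are never reprocessed:

```python
# Sketch: one-time indexing of the books, then fast lookups on later queries.
from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_ollama import OllamaEmbeddings
from langchain_chroma import Chroma

embeddings = OllamaEmbeddings(model="nomic-embed-text")  # local embedding model

# --- run once: load, split, and persist the books ---
docs = []
for path in ["cardiology.pdf", "infectious_diseases.pdf"]:  # hypothetical files
    docs.extend(PyPDFLoader(path).load())

chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200).split_documents(docs)
Chroma.from_documents(chunks, embeddings, persist_directory="./medical_index")

# --- every later question: reuse the persisted index, no re-embedding of the books ---
store = Chroma(persist_directory="./medical_index", embedding_function=embeddings)
hits = store.similarity_search("first-line treatment for heart failure", k=4)
```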
Interesting, you basically use an old-school workflow to orchestrate the steps of LLM-based atomic tasks. But what about letting the LLM execute the workflow and also perform all the required atomic tasks? That would be more like an agentic approach...
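For contrast, a rough sketch of that "let the model drive" version using LangGraph's prebuilt ReAct agent; the tool and model here are assumptions, not code from the video:

```python
# Sketch: the LLM decides which tool to call and when to stop,
# instead of following a hand-wired graph of fixed steps.
from langgraph.prebuilt import create_react_agent
from langchain_ollama import ChatOllama
from langchain_core.tools import tool

@tool
def search_docs(query: str) -> str:
    """Search the local vector store for relevant passages."""
    return "...retrieved passages..."  # placeholder retrieval

llm = ChatOllama(model="llama3.2:3b-instruct-fp16", temperature=0)
agent = create_react_agent(llm, [search_docs])

result = agent.invoke({"messages": [("user", "What is adaptive RAG?")]})
print(result["messages"][-1].content)
```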
Very well explained.😊
Great video. What tool did you use to illustrate the nodes and edges in your notebook?
Can you consider doing an example of the contextual retrieval that Anthropic recently introduced?
Thanks, it is indeed very cool. Last time you used 32GB of memory; do you think this will run with 16GB?
Very informative ❤❤
Thanks for the video and sample putting all these parts together. What did you use to draw the diagram at the beginning of the video? Was it generated by a DSL/config?
Looks like Excalidraw.
You make the LLM do all the hard work of filtering candidates.
Is it possible to make an agent that, when provided with a few hundred links, extracts the info from all of them and stores it?
Is there an elegant way to handle recursion errors?
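One option, as far as I know (a sketch; `graph` here stands for the compiled workflow from the video): set the recursion_limit in the run config and catch GraphRecursionError so the app can fail gracefully instead of crashing:

```python
# Sketch: bound the number of graph steps and handle the error cleanly.
from langgraph.errors import GraphRecursionError

config = {"recursion_limit": 25}  # max super-steps before the run is aborted

try:
    result = graph.invoke({"question": "What is agent memory?"}, config=config)
except GraphRecursionError:
    result = {"generation": "Sorry, I couldn't converge on an answer - please rephrase."}
```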
Question: You have operator.add on the loop step, but then increment the loop step in the state too… am I wrong in thinking that would then be incorrect?
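For anyone puzzling over the same thing, a small sketch of how I understand LangGraph's reducer semantics (not a quote of the video's code): with operator.add as the reducer, a node should return the increment, not current + 1, otherwise the values get added together.

```python
# Sketch: operator.add is a reducer; whatever a node returns for loop_step
# is ADDED to the existing value, not assigned over it.
import operator
from typing import Annotated, TypedDict

class GraphState(TypedDict):
    loop_step: Annotated[int, operator.add]

def generate(state: GraphState) -> dict:
    # Correct with the reducer: return the delta.
    return {"loop_step": 1}

def generate_wrong(state: GraphState) -> dict:
    # Would double-count: with loop_step == 3 this returns 4,
    # and the reducer makes the new value 3 + 4 = 7, not 4.
    return {"loop_step": state["loop_step"] + 1}
```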
Great tutorial! Is it necessary to add a prompt format?
Using PromptTemplate/ChatPromptTemplate works as well. It seems that the .format here is equivalent to the `input_variables` param within the former two classes.
@@skaternationable Thanks!
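For anyone comparing the two, a small sketch of the equivalence described above (the prompt text is just a placeholder):

```python
# Sketch: plain .format on a template string vs. PromptTemplate with input_variables.
from langchain_core.prompts import PromptTemplate

template = "Answer the question using this context:\n{context}\nQuestion: {question}"

# Approach 1: plain string formatting, as in the video
prompt_str = template.format(context="...docs...", question="What is agent memory?")

# Approach 2: PromptTemplate with explicit input variables
prompt = PromptTemplate(template=template, input_variables=["context", "question"])
prompt_str2 = prompt.format(context="...docs...", question="What is agent memory?")

assert prompt_str == prompt_str2  # both produce the same rendered prompt
```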
If different tools require different keyword arguments, how can these be passed in for the agent to access?
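One way this commonly works (a sketch, not the video's code): each tool declares its own arguments, and a tool-calling model fills them in per call from the tool's schema:

```python
# Sketch: tools with different argument schemas; the model picks the tool
# and supplies the matching keyword arguments.
from langchain_core.tools import tool
from langchain_ollama import ChatOllama

@tool
def web_search(query: str, max_results: int = 3) -> str:
    """Search the web and return the top results."""
    return f"results for {query!r}"

@tool
def calculator(expression: str) -> str:
    """Evaluate a math expression."""
    return str(eval(expression))  # demo only; don't eval untrusted input

llm = ChatOllama(model="llama3.2:3b-instruct-fp16", temperature=0)
llm_with_tools = llm.bind_tools([web_search, calculator])

msg = llm_with_tools.invoke("What is 12 * 7?")
print(msg.tool_calls)  # e.g. [{"name": "calculator", "args": {"expression": "12 * 7"}, ...}]
```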
Great tutorial, thank you
thank you
Awesome
thanks
why does he make it so easy..
That's a great tutorial that shows the power of LangGraph. It's impressive you can now do this locally with decent results. Thank you!
Is it possible to add a "fact checker" method? What if the answer is obtained from a document that gives false information? It would technically answer the question, it just wouldn't be true.
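A rough way to bolt that on, assuming you keep a small set of trusted reference documents to check against (the grader prompt and JSON format below are mine, not the video's):

```python
# Sketch: an extra grading step that checks a draft answer against
# trusted reference text before returning it.
import json
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.2:3b-instruct-fp16", temperature=0, format="json")

FACT_CHECK_PROMPT = """You are a fact checker. Given trusted reference text and a
draft answer, say whether the answer is supported by the reference.
Reference:
{reference}
Draft answer:
{answer}
Return JSON with a single key "supported", value "yes" or "no"."""

def fact_check(answer: str, reference: str) -> bool:
    resp = llm.invoke(FACT_CHECK_PROMPT.format(reference=reference, answer=answer))
    return json.loads(resp.content).get("supported") == "yes"
```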
it's sooooo fast!
Amazing stuff which can be done with a few lines of code. Disruption coming everywhere.
LangGraph is too complicated; you have to implement State, Nodes, etc. I would prefer to implement the agent workflow myself, which is much easier, and at least I don't need to learn how to use LangGraph.
Any repo to share?
Excellent tutorial! Another, easier option is to use n8n instead, because it has LangChain integration with AI agents built in, and almost no code is required to achieve the same functionality. n8n also has an automatic chatbot interface and webhooks.
Langflow is the best solution.
Unable to access ChatOllama.
You have no idea how much u saved me 😂 salute 🫡 thank u.