Really nice work, Eden. Thank you for such a great content.
Really great and intuitive refactoring of the original code - well done!
Thank you for clearly explaining the system architecture, helps everyone understand.
yet another awesome tutorial that takes advanced AI concepts and makes them dead simple 🎲
thanks Eden !
Thank you very much. It's really cool
Thank you! One idea I saw and think is a good improvement to the architecture is adding a search over a knowledge graph module, like DBpedia or a similar KG database, with the possibility of adding triples extracted from the RAG documents themselves. The results of the semantic and keyword queries to the vector DB and the KG DB would enrich the context provided to the LLM.
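For illustration, a minimal sketch of what that enrichment step could look like (the helper names are hypothetical; it assumes DBpedia's public SPARQL endpoint and an existing LangChain-style retriever, not anything from the repo):

```python
# Hypothetical sketch: enrich the RAG context with triples from a knowledge graph.
# Assumes DBpedia's public SPARQL endpoint and an existing vector-store retriever.
import requests

DBPEDIA_SPARQL = "https://dbpedia.org/sparql"

def kg_facts(entity_uri: str, limit: int = 10) -> list[str]:
    """Fetch literal-valued triples about an entity from DBpedia."""
    query = f"SELECT ?p ?o WHERE {{ <{entity_uri}> ?p ?o . FILTER(isLiteral(?o)) }} LIMIT {limit}"
    resp = requests.get(DBPEDIA_SPARQL, params={"query": query, "format": "json"})
    rows = resp.json()["results"]["bindings"]
    return [f"<{entity_uri}> {r['p']['value']} \"{r['o']['value']}\"" for r in rows]

def enriched_context(question: str, retriever, entity_uri: str) -> str:
    """Concatenate vector-store hits with KG triples before prompting the LLM."""
    docs = retriever.invoke(question)      # semantic / keyword retrieval
    triples = kg_facts(entity_uri)         # structured facts from the KG
    return "\n\n".join(d.page_content for d in docs) + "\n\nKG facts:\n" + "\n".join(triples)
```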
Amazing! But I'm struggling to understand when RAG should be used and when it should not be used.
Can you make a video going through at a high level each branch in order?
Also could you cover LangGraph workflows involving tool use / function calling? Thank you!
Eden, LangGraph doesn't have any good checkpoint libraries apart from SQLite for production use cases, like you have for LangChain. Do you know anything about that?
Great question ... what about some NoSQL options like Redis for checkpointing? I also ended up creating my own way of selecting the last K messages ... you can't pass the whole conversation history for a thread to the model (i.e. when implementing a ReAct agent with memory).
@todormishinev I am just using a history-aware retriever with RedisChatMessageHistory to get around this memory thingy. Works flawlessly
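For what it's worth, a minimal sketch of the "last K messages" idea (the helper name is hypothetical; it assumes the graph state keeps a plain list of LangChain messages):

```python
# Minimal sketch: pass only the last K messages of a thread to the model
# instead of the full conversation history. Helper name is hypothetical.
from langchain_core.messages import BaseMessage

def last_k_messages(messages: list[BaseMessage], k: int = 6) -> list[BaseMessage]:
    """Keep the most recent k messages, preserving the first (system) message."""
    if len(messages) <= k:
        return messages
    return [messages[0]] + messages[-(k - 1):]

# Inside a LangGraph node you would then call the model on the trimmed window:
# response = llm.invoke(last_k_messages(state["messages"], k=8))
```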
🎯 Key points for quick navigation:
00:13 *📁 The speaker has been working on a public GitHub repository that implements advanced RAG workflows using LangGraph.*
00:40 *💡 The speaker felt that the existing notebook was missing a software engineering perspective on how to structure an advanced LangGraph application and write maintainable code.*
01:07 *🔩 The speaker refactored the notebook to make it more maintainable, splitting it into sub-modules and writing tests for each chain.*
01:47 *📊 The speaker emphasizes the importance of writing unit tests for code.*
02:44 *🚀 The Advanced RAG workflow involves choosing whether to retrieve documents from a vector store or use a web search, grading documents, and generating an answer while checking for hallucinations and relevance (a rough LangGraph sketch of this routing appears after this comment).*
04:23 *💡 The implementation combines three Advanced RAG papers: corrective RAG, adaptive RAG, and self-RAG.*
Made with HARPA AI
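As referenced at 02:44, here is a rough sketch of how that routing can be wired with LangGraph's StateGraph (node and state names are hypothetical and the node bodies are stubs, not the repository's actual code):

```python
# Rough sketch of the routing described in the summary above; node bodies are stubs.
from typing import List, TypedDict
from langgraph.graph import END, StateGraph

class GraphState(TypedDict):
    question: str
    documents: List[str]
    generation: str

def retrieve(state):        return {"documents": []}   # pull docs from the vector store
def web_search(state):      return {"documents": []}   # fall back to web search
def grade_documents(state): return {"documents": state["documents"]}  # drop irrelevant docs
def generate(state):        return {"generation": ""}  # draft the answer

def route_question(state):     return "vectorstore"    # or "web_search"
def decide_to_generate(state): return "generate"       # or "web_search" if nothing relevant
def grade_generation(state):   return "useful"         # or "not useful" / "hallucination"

workflow = StateGraph(GraphState)
workflow.add_node("retrieve", retrieve)
workflow.add_node("web_search", web_search)
workflow.add_node("grade_documents", grade_documents)
workflow.add_node("generate", generate)

# Route the question either to the vector store or straight to web search.
workflow.set_conditional_entry_point(
    route_question, {"vectorstore": "retrieve", "web_search": "web_search"})
workflow.add_edge("retrieve", "grade_documents")
workflow.add_edge("web_search", "generate")
# After grading, either generate an answer or fall back to web search.
workflow.add_conditional_edges(
    "grade_documents", decide_to_generate,
    {"generate": "generate", "web_search": "web_search"})
# Check the answer for hallucinations and relevance; loop back or finish.
workflow.add_conditional_edges(
    "generate", grade_generation,
    {"useful": END, "not useful": "web_search", "hallucination": "generate"})

app = workflow.compile()
```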
drop that bullshit thumbnail. Be better!
So true :) LOL