This is wild. Really loving the LangChain + LangGraph + LangSmith suite. Looking forward to LangGraph Studio (drag and drop) which is a logical next step. Amazing work.
Suggestion - use an instruction-tuned model as your "module coordinator" and have it print a temp file at the beginning of the process to serve as an incorruptible cache in which the product can be developed during the research and creation process. You can use the top section of this temp file to store the original user prompt - which is useful not only for record-keeping, but also because you can then instruct the appropriate model to always refer to the original user input to maintain an appropriate perspective and scope.

Then, use your module coordinator to break down and plan each subtask along the way to keep your sub-modular LLMs from getting overwhelmed and having to compress their context window (thereby losing detailed continuity), so that when you break the essay-writing process into sub-tasks (summary, section 1, section 2, conclusion) the entire process maintains structural integrity.

You can also write it into the instructions for each node to print its output at each stage of the process into another log file, so that you (or another module designed for critical behavioral analysis of LLM outputs and processes, for self-correction and long-term world-view development) can monitor the process and adjust your protocol system accordingly.

From what I've been reading, it might be a good idea to use an SSM as the main node of your world-view/self-image module (if you're going for developing an AGI).
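The coordinator/temp-file flow I'm describing could be sketched roughly like this in plain Python (no framework; the function names, file layout, and `run_subtask` placeholder are all hypothetical stand-ins for real LLM calls):

```python
# Hypothetical sketch of a "module coordinator" that caches the original
# prompt at the top of a temp file and logs each node's output separately.
import json
import tempfile
from pathlib import Path

def init_cache(user_prompt: str) -> Path:
    """Write the original user prompt to the top of a temp 'cache' file."""
    tmp = tempfile.NamedTemporaryFile(mode="w", suffix=".md", delete=False)
    tmp.close()
    path = Path(tmp.name)
    path.write_text(f"# ORIGINAL PROMPT\n{user_prompt}\n\n# DRAFT\n")
    return path

def run_subtask(name: str, original_prompt: str) -> str:
    """Placeholder for a sub-modular LLM call; each call re-reads the
    original prompt so scope never drifts."""
    return f"[{name} written with reference to: {original_prompt!r}]\n"

def coordinate(user_prompt: str, log_path: Path) -> str:
    cache = init_cache(user_prompt)
    subtasks = ["summary", "section 1", "section 2", "conclusion"]
    log = []
    for task in subtasks:
        output = run_subtask(task, user_prompt)
        # Append each node's output to the cache file...
        cache.write_text(cache.read_text() + output)
        # ...and record it separately for later behavioral analysis.
        log.append({"node": task, "output": output})
    log_path.write_text(json.dumps(log, indent=2))
    return cache.read_text()

log_tmp = tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=False)
log_tmp.close()
draft = coordinate("Write an essay on state-space models.", Path(log_tmp.name))
print(draft)
```

In a real LangGraph build, `coordinate` would be a graph of nodes rather than a loop, but the cache-then-log discipline is the same.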
Also, I have a question - I hear that Rust is much more resource-efficient and safe for developing such complex modular projects, and is easier to compile. So why don't I see any tutorials on how to do stuff like this in Rust? Why does everybody use Python when it's more resource-intensive and less secure?
LMAO... nvm, I guess. You ended up describing the system I was talking about. But the part about the long-term evolving memory SSM being used as a world-view/self-image builder is still something I'd like to hear feedback on.
That's an interesting approach. The only issue I could see with having different instances (agents) writing different sections is managing consistency of tone and style. For that, are you prompting your section agents with few-shot prompts to maintain consistency? Or has this not been an issue at all in your outputs?
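Roughly, the kind of few-shot style anchoring I mean is prepending the same exemplars to every section agent's prompt (the exemplar text and prompt layout here are invented purely for illustration):

```python
# Every section agent gets the same style exemplars prepended, so tone
# stays uniform across sections written by different agent instances.
STYLE_EXEMPLARS = [
    "Example paragraph in the target voice: concise, active, no hedging.",
    "Another example paragraph, same cadence and vocabulary register.",
]

def section_prompt(section_name: str, outline: str) -> str:
    """Build a few-shot prompt for one section agent."""
    shots = "\n\n".join(f"STYLE EXAMPLE:\n{ex}" for ex in STYLE_EXEMPLARS)
    return (
        f"{shots}\n\n"
        f"Write the '{section_name}' section of the essay, matching the "
        f"style of the examples above.\n\nOUTLINE:\n{outline}"
    )

prompt = section_prompt("conclusion", "1. Recap. 2. Open questions.")
print(prompt)
```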
Great demo with source code! Thank you. It was nice to see that it is so easy to use the NVIDIA NIM service.
This is a fantastic dive into using LangChain and NIM. Thank you. Looking forward to going through your channel.
Love you guys! Would be so cool to see LangGraph Studio for Linux and Windows as well. That would open worlds to people.
Fantastic demo. Thx😇
Nice video man, good stuff
I guess the NVIDIA API is a paid service? Can we easily replace the NVIDIA API with alternatives? Any particular reason why NVIDIA NIM?
The API isn't, by the sounds of it, but you have to pay for model hosting.
Llama 3.3-70b for automated reports? That's a big leap. Intrigued by the NVIDIA NIM integration.
Customer service is the most used.