Hey David, I really enjoy your amazing videos! However, I wanted to mention that $197 is quite a hefty sum; it's nearly a month's rent in many cities in India! 😄
This is literally the best way to make custom agentic flows.
Awesome as usual, Ondrej! However, I think you forgot to include the part about how to create the venv with conda. Beginners might not notice.
I already showed that in many previous videos.
if you don't have conda, just ask ChatGPT "how do i install miniconda and create a new env?"
Hey Ondrej, are you from Prague?
Nice! But when you generate or create graphs, it's worth noting how they actually work: nodes and edges, plus a root node and an end node.
These form the tree of nodes. The edges are only used to create or denote paths, so they mainly matter for drawing the route, defining paths, or checking whether routes are complete, broken, or missing (so edges can be created or not depending on the operations on the tree). Your node is doing the real work: each node points to its next node, and even conditional nodes point to their own set of nodes. The edges don't execute the tree; you just execute the first node and it returns (via recursion) the result of the final node.
Between these nodes you can pass a State, i.e. the data packet that each node can handle and change as required, and at the final node the state also carries the output. A minimal sketch of this idea follows.
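Here is a minimal Python sketch of that idea, assuming a hypothetical Node class (not from LangGraph or any library): each node points at its next node, and executing the first node recursively returns the state produced by the final node.

# Minimal sketch: nodes point at their next node and pass a state dict along.
# The Node class and handlers are illustrative, not from any framework.

class Node:
    def __init__(self, name, handler, next_node=None):
        self.name = name
        self.handler = handler      # function that takes the state and returns the updated state
        self.next_node = next_node  # the node this one points to (None = end node)

    def run(self, state):
        state = self.handler(state)          # each node can change the state packet
        if self.next_node is None:           # end node: the state is the output
            return state
        return self.next_node.run(state)     # recurse until the final node returns

# wire a tiny chain: root -> work -> end
end  = Node("end",  lambda s: {**s, "output": s["text"].upper()})
work = Node("work", lambda s: {**s, "text": s["text"].strip()}, next_node=end)
root = Node("root", lambda s: {**s, "text": s["input"]}, next_node=work)

print(root.run({"input": "  hello graphs  "}))   # the final state carries the output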
For the functionality mentioned earlier, we create an edge map (or matrix) of all the edges: all the edges that connect to A, then all the edges that connect to B, then C, down to the end node. Now we have a list of lists and a dictionary of nodes (A, B, C), and when we look up A we find all its possible edges, so we can traverse its paths and run all the algorithms, breadth-first, depth-first, whatever (we will mostly use shortest path only). We only need to give Graphviz the edge list to draw the map. A small sketch is below.
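A small sketch of the edge map, assuming plain dicts and string node names; breadth-first search gives the shortest path, and the flat edge list is what you would hand to a drawing tool such as Graphviz.

# Sketch of the edge map: a dict of node -> list of nodes it connects to.
from collections import deque

edges = {"a": ["b", "c"], "b": ["end"], "c": ["b", "end"], "end": []}

def shortest_path(edges, start, goal):
    # breadth-first search returns the shortest path in an unweighted graph
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # route is broken or does not exist

print(shortest_path(edges, "a", "end"))   # e.g. ['a', 'b', 'end']

# the flat edge list is all a drawing tool needs to render the map
edge_list = [(src, dst) for src, dsts in edges.items() for dst in dsts]
print(edge_list)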
So in truth you do not need LangGraph (lovely as it looks), just the programming principle, because it is basically a decision tree.
Your first node should always be your intent-detector node, since it determines which route through the tree to take; in modern terms it is the Router.
Each task will have its own planner, which is prompted to plan its route, e.g. web research, essay writing, data querying, coding tasks, or app development.
So your graph could include many subtrees, all branching off your conditional node or Router (the intent detector).
When creating your graph class, you should also give these conditional nodes special treatment: the conditional node comes first and its sub-nodes are attached to it by edges, so constructing it should be an operation of its own, i.e. it needs to create the sub-nodes first and then add them to the conditional node (itself). A sketch follows.
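A rough sketch of such a conditional (router) node, with a toy keyword-based intent check; the class, handlers, and intents are illustrative, not from any framework.

# Sketch of a conditional (router) node: its sub-nodes (here plain handler functions)
# are created first, then attached to the router itself as routes.

class ConditionalNode:
    def __init__(self, name):
        self.name = name
        self.routes = {}                 # intent -> sub-node handler (the edges this node owns)

    def add_route(self, intent, handler):
        self.routes[intent] = handler    # sub-node is created first, then added to the router

    def run(self, state):
        intent = self.detect_intent(state["input"])   # the router picks which subtree to take
        state["intent"] = intent
        return self.routes[intent](state)

    @staticmethod
    def detect_intent(text):
        # toy keyword check standing in for a real intent detector
        return "research" if "research" in text.lower() else "coding"

router = ConditionalNode("intent_detector")
router.add_route("research", lambda s: {**s, "output": "research plan"})
router.add_route("coding",   lambda s: {**s, "output": "coding plan"})

print(router.run({"input": "please research graph libraries"}))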
Start nodes are very important. Besides giving you a greeting bot (which is also an intent detector, requirements gatherer, or router), multiple start nodes give you jump points into your graph that you can trigger from other UI elements or conversational cues. You can always add a chatbot interface on top of a chat model: add your keyword detect-and-respond layer (the old-school chatbot system), backed by the LLM. If the chatbot detects medical terms in the chat, you can send the prompt as a doctor; if it detects fitness talk, it creates a sports system prompt. So as you're talking, your input is being enhanced by your chatbot front end before it reaches your LLM. This is how it will be done, as it gives you the LLM as a brain only, and you can create an NPC as a RAG! A keyword sketch follows.
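A toy sketch of that keyword front end; the keyword sets and prompt text are purely illustrative.

# Sketch of a keyword front end that enhances the prompt before it reaches the LLM.

PERSONA_RULES = [
    ({"fever", "medication", "symptom"}, "You are a careful medical assistant."),
    ({"workout", "protein", "fitness"},  "You are an encouraging sports coach."),
]
DEFAULT_PROMPT = "You are a helpful assistant."

def build_system_prompt(user_text):
    words = set(user_text.lower().split())
    for keywords, prompt in PERSONA_RULES:
        if words & keywords:        # any keyword hit switches the persona
            return prompt
    return DEFAULT_PROMPT

print(build_system_prompt("my workout plan needs more protein"))  # sports persona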
Graphs as dialogue or task trees: I'm sure you have already realised that chains are just in-depth closures!
Now, something you may not have realised because you are using online resources: the more tools you load, the slower the model responds. So we need to stop loading all the tools onto the model and restrict it to its own subset of tools only when required. Hence your chatbot front end should detect that you are discussing the weather and load the weather tools in case it needs them, and if the conversation changes to research, add the research toolkit ready to be prompted. Throughout the conversation the model may load and unload the various tools it could potentially need to answer the next few questions, and if a tool is not asked for it gets unloaded, so the response speed stays high. Your front end will also be intercepting some inputs and even serving some tasks via normal function pathways that do not require AI at all, such as ingesting a specific folder: such a request can be intercepted by the front end, which initiates the entry point into the task instead of calling the LLM; by the time that task returns its final state, the user may already have entered a new input, which may then be sent to the model. A sketch of this pattern follows.
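A rough sketch of that pattern; the toolkit names, the /ingest command, and call_llm are all hypothetical placeholders.

# Sketch: keep only the toolkit for the current topic loaded, and intercept
# some requests with plain functions so the LLM is never called for them.

TOOLKITS = {"weather": ["get_forecast"], "research": ["web_search", "summarise"]}
active_tools = []

def update_toolkit(user_text):
    global active_tools
    for topic, tools in TOOLKITS.items():
        if topic in user_text.lower():
            active_tools = tools          # load only this topic's tools
            return
    active_tools = []                     # topic changed or no match: unload everything

def handle_input(user_text):
    if user_text.startswith("/ingest "):  # intercepted task: no LLM call at all
        return ingest_folder(user_text.split(" ", 1)[1])
    update_toolkit(user_text)
    return call_llm(user_text, tools=active_tools)   # hypothetical LLM call

def ingest_folder(path):
    return f"ingested {path} via a normal function pathway"

def call_llm(text, tools):
    return f"LLM answered '{text}' with tools {tools}"

print(handle_input("/ingest ./docs"))
print(handle_input("what's the weather tomorrow?"))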
Hence, knowing when to send a task to the model and when to handle it via some other automated method, blending the old chatbot-style agent (look up RASA / Dialogflow) with the LLM and RAG, gives you both the agentic system and the standard intent / slot-filling dialogue-management system, which in turn can perform like AGI as well as have a personality. You will have created your NPC character dialogues for local tasks, your dynamic LLM task solver and information back end, and your chat abilities with the model, simply by implementing a personality as a tool (the same idea as a think tool, but as a formal response tool and an informal response tool), so the model can choose a response type. The prompt inside the tool is the magic, because it creates the personality and role, just as you do with AutoGen agents: they have personalities, which is what lets them work well, and you can access the same personalities with flair. A sketch of that idea follows.
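A small sketch of personality as a tool, with two response-type prompts; the tool names, prompt text, and call_llm are illustrative only.

# Sketch: personality as a pair of response tools the model can choose between.
# The prompt inside each tool is where the personality lives.

RESPONSE_TOOLS = {
    "formal_response": "Respond as a precise, professional consultant. Question: {q}",
    "informal_response": "Respond as a witty, friendly companion. Question: {q}",
}

def respond(question, tool_name):
    prompt = RESPONSE_TOOLS[tool_name].format(q=question)
    return call_llm(prompt)              # hypothetical LLM call

def call_llm(prompt):
    return f"[LLM output for prompt: {prompt}]"

print(respond("How do graphs execute?", "informal_response"))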
So! Right now everything has gone quiet again, so you can take a back-step and go back to building the old-style chatbot setup and backing it with an LLM. This will show you how to make an NPC: the dialogues you create in your chatbot front end become the entry points into your LLM back end. So Jarvis is the front-end chatbot and the LLM is the back-end brain: the front-end chatbot is connected to the UI and to the back-end LLM, hence its importance.
amazing video, thx David, very informative
Thanks David bro. I appreciate your help and inspiring mission
That's fresh news! Thanks for sharing this.
Thanks for your excellent information. I study best with use cases, and the example is so simple to understand... Can I get your 'engineer' code?
How do you use LangGraph Engineer after completing all the steps?
I am hitting a 404 Not Found with LangGraph Engineer when I submit my message. The error on screen says "h.tasks is not iterable". I reported this to the LangGraph team, but would anyone know what the error is about?
Did you recommend Langflow? And if yes, can you do a video about it?
Unfortunately it doesn't work. Guess with what? Podman 😞
Is this competing with Agency Swarm? I really like VRSEN’s vids and guides and I’ve been trying to learn his framework but this seems much more user friendly especially with the UI. Would you say these provide similar services?
David, can you make a more focused video on how to deliver agents to clients? Do you recommend using the client's API key or putting in our own, setting message limits, etc.? Something more oriented to doing business.
Great post! Thank you for sharing
Would it be possible to recreate Pietro’s Maestro framework using this UI as a foundation for an agency and then input the same script for Claude engineer to act as a subagent? Does that even make sense lol
Is there a way to use it with Llama 3.1?
Is there any way to use it on Windows?
No, though they promise they want to do it soon, whatever that means.
Thank you! 🙏
Can we use Gemini instead of OpenAI?
Dolphin-Llama is no longer unfiltered. Any suggestions for NSFW AI chats?
Claude Sonnet 3.5 can do that without agents.
Windows?
I really like your video, but my ADHD is telling you to put the headset on correctly.
This is how professionals do it - one ear in, one ear out - so you can hear yourself clearly.
I was just asking the universe for this!!!!
What happened to Langflow?
Can I have the URL for the langCloud engineer? Thanks
In the description
Only for Apple Silicon? Pass.
I'm pretty sure he showed most of the demo using the web interface.
@louiscoetzee7744 I didn't see the link to the langstudio webapp. I can't find it.
@louiscoetzee7744 Yes, but to run it yourself you currently need an Apple Silicon based Mac. Windows and Linux versions will follow, the docs say.
It's for both Intel and Apple Silicon Macs.
Not wearing headphones properly. Dislike 🙂
💀
🔥 Start making money with AI Agents: www.skool.com/new-society