If you are interested in building LLM agents, fill out the form below and let me know what types of agents you would like examples of.
Building LLM Agents Form: drp.li/dIMes
Thanks Sam, I already did that
I really like the idea of integrating graph theory into this. You can experiment with different agents and tools for certain types of tasks, then start playing around with network measures and give edges weight based on the successful completion of types of tasks. The network essentially ends up balancing itself out as you start to direct traffic along your high-weight edges. You can run another network and experiment with different models for different tasks. It's like a simulation of a workplace where people end up going to the most productive people to accomplish tasks.
This is ABSOLUTELY FANTASTIC!!! I've been dealing with a manual "orchestrator" that felt so dumb before... This is a game changer! You really deliver on your content, man!
Thank you for going through the notebooks line by line. Helps noobs like me follow along.
Sir, please make a full course using LangChain with OpenAI, Hugging Face, Llama, and fine-tuning models and chatbots. Keep it a little bit affordable, like $100; it would be really great. Lots of love from India
Awesome work once again !
Very interested in this LangGraph for more complex use cases! For us, building a team-augmentation platform with many agents (some of which are agents, others just chains), it could allow us to have a big and powerful super agent with a supervisor, as you show in the third part. To be continued
This video is timely as I was ready to start exploring LangGraph to get a feel for what use cases can fit. Deeper dive video will be much appreciated.
Sam, I watch all your videos from Colombia. They are awesome!! They are explained really well.
Thanks much appreciated!!
You. Are. Fantastic... Thank you Sam
Thanks, this is great stuff. I've been teaching myself to build agents in langchain for some months and it is slow going. I think I need to step back and re-architect to use LangGraph instead. Looking forward to seeing more of your material on this stuff!
Can you let me know what resources you suggest for getting better at agents using LangChain?
Great video as usual. Yes, more videos and use cases on building agents with the updated version of LangChain would be great.
Super useful. I would say this was explained in a better way than the official Langchain channel.
Next Video: it would be cool to build the perplexity’s copilot feature. So, ask for clarifying questions if needed with human-in-the-loop feature. Then give access to the internet to get the results.
You're a good teacher, Sam, and I appreciate your efforts. One question I have about frameworks in general is what they provide over and above Python functions that can branch on conditions and call other functions. I have a feeling there is a good answer; people wouldn't be sinking so much effort into them if there wasn't. But it's not obvious. So a great video might compare a roll-your-own agent with a LangGraph (e.g.) agent, and highlight what is different between them and what is better about the LangGraph path. If you've already done this, drop a link in the replies.
Thanks Sam. For Colab 01, I tried inputs = {"input": "Give me a random number and then write in words", "chat_history": []} and it is still calling the to_lower_case tool. Is that expected, or do we have to be more explicit in our input?
This is such a great intro, thank you so much for the effort.
Hi Sam, how much has this changed since 9 months ago? breaking changes? Is it still relevant? Thank you!
Pretty much every LLM API has a large set of parameters: temperature, max output length, top P, [top K], frequency penalty, presence penalty.
Shrink-wrapped UIs like ChatGPT don't give access to these. The defaults differ in some APIs: sometimes temperature is set to 1, sometimes 0.8.
Some experiments I've done indicate that changing these parameters has serious impact on the results. But I've hardly ever seen benchmarks, papers, videos that discuss this. As far as I can tell, most LLM benchmarks only test the "default" settings.
I'd love to see some more in-depth experiments that compare models and change these parameters.
The community has been trying a lot of elaborate optimizations to get the most desired results out of LLMs. But my partial experiments suggest that there's a fair bit of untapped potential with the model parameters.
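To make the comment above concrete, here is a minimal, hedged sketch (not from the video) of sweeping a couple of these parameters with the OpenAI Python client; the model name and prompt are just placeholders, and note that the OpenAI API exposes top_p but not top_k.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

for temperature in (0.0, 0.7, 1.0):
    for top_p in (0.5, 1.0):
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",          # placeholder model
            temperature=temperature,
            top_p=top_p,
            max_tokens=200,
            frequency_penalty=0.0,
            presence_penalty=0.0,
            messages=[{"role": "user", "content": "Name three uses of graphs."}],
        )
        # Compare how the wording and structure shift across settings
        print(temperature, top_p, resp.choices[0].message.content[:80])
```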
Another matter is the way the community discusses ChatGPT: late last year OpenAI added a new model to the ChatGPT app: GPT 4 Turbo. This is a model that's as different from GPT-4 as GPT 3.5 is from GPT-4. Or at least, it's different. It's smaller, distilled, simplified, dumber.
Yet some discussions have shown that users didn't accept that as a fact: they thought GPT-4 Turbo is just some "faster version" of GPT-4. Magically :)
But there's no magic. In the ChatGPT app, you can select the "ChatGPT" mode, which is GPT-4 Turbo with Vision, DALL-E and tool switching, or you can choose ChatGPT Classic mode, which is the real GPT-4 model. They're very different, and should be treated as separate models in comparisons.
How do you change to other LLMs? I tried but it was not successful.
@@Ken129100 You can simply import a new model when setting the llm (for example, llm = ChatOpenAI(model="gpt-3.5-turbo-1106", temperature=0, verbose=True, streaming=True)), or use Gemini or Claude 2. (Don't forget to include the API key at the top.)
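As a rough sketch of the swap described in the reply above: the imports below are the current LangChain integration packages and the model IDs are only examples, so check what your account actually has access to.

```python
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic
from langchain_google_genai import ChatGoogleGenerativeAI

# Pick one; the rest of the agent code stays the same.
llm = ChatOpenAI(model="gpt-3.5-turbo-1106", temperature=0, streaming=True)
# llm = ChatAnthropic(model="claude-3-sonnet-20240229", temperature=0)
# llm = ChatGoogleGenerativeAI(model="gemini-pro", temperature=0)
```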
You mentioned something right on point with what I was wondering, Sam. In your experience, which of the open-source LLMs support function calling as of today? Which one would you try out first? And if you do, please make a video about LangGraph with an HF LLM and function calling maybe! ☺ Love your work btw!
Thanks for making this video. A question: if I understand correctly, it would be perfect if the Coder node could be routed to by the supervisor and executed to generate the chart by leveraging the PythonREPLTool. Did you try removing the PythonREPLTool from the Lotto_Manager agent and only providing it in the Coder agent? Does that make sense?
What open source models support function calling? I must admit, I don't know what it is about function calling that needs to be supported. Recently, I tried AutoGen function calling with a few different _local_ 7B parameter models without any luck.
There aren't many which excel. The ones that do are fine-tuned on the task. First was Gorilla; now we have Functionary. And apparently Qwen 1.5 (even the 0.5B model) can reach near GPT-4 reliability (though not in my personal testing).
As far as I know there aren't any great drag-and-drop solutions. You may also need to use multiple models (one for function calling and one for higher-level reasoning).
Great! Do you have any example notebook showing how to use LangGraph for code generation in an externally compiled language, like C? For example, how do you replace the "exec" command (which works for Python code only) with something that can call the C compiler, run it against the generated (and saved) code file, collect the compiler errors, put them back into the LangGraph flow in the relevant node, and so on?
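This isn't covered in the video, but one hedged way to approach it is to wrap the C toolchain in a custom tool so a graph node just calls the tool and feeds any compiler errors back into the state for the coder node to retry. The function name and file layout here are hypothetical.

```python
import os
import subprocess
import tempfile

from langchain_core.tools import tool


@tool
def compile_and_run_c(code: str) -> str:
    """Compile C source with gcc, run the binary, and return compiler
    errors or program output so the graph can route on the result."""
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, "main.c")
        exe = os.path.join(tmp, "main")
        with open(src, "w") as f:
            f.write(code)
        build = subprocess.run(["gcc", src, "-o", exe],
                               capture_output=True, text=True)
        if build.returncode != 0:
            # Return the compiler errors so the coder node can revise the code
            return "COMPILE ERROR:\n" + build.stderr
        run = subprocess.run([exe], capture_output=True, text=True, timeout=30)
        return run.stdout or run.stderr
```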
Amazing! But it's not clear how the agent understands that it should repeat the random_number() function 10 times. Every time it finishes, does it call OpenAI again and ask whether the task is accomplished? If that's how it works, why don't we see it in LangSmith?
Thanks a lot. I want to know: how does the agent executor know to convert the answer "4" to the capitalized "FOUR" and then send it to the lower_case tool? Is there another built-in LLM doing that?
I tried to follow along in the provided file, but in the third example, my supervisor tells the coder to run, then keeps telling it to run over and over. The supervisor keeps choosing "coder" as the next step. Any idea why this difference in result even though I haven't changed anything about the code and simply ran it as-is?
How can you do this with Anthropic Claude? I did this and created my own parser, but after that it gives the error: A conversation must alternate between user and assistant roles.
very insightful but heavy stuff to master. thank you, Sam. ❤
Thanks for your great video 🙏
Well explained. Could you please show these examples using VS Code with production file structure?
What do you think about Microsoft's Semantic Kernel and PromptFlow?
At 9:30 you said you can have multiple agents with LangGraph, but isn't LangChain originally a single-agent framework, unlike AutoGen and CrewAI? I'm a bit confused.
At what timestamp is the demo where we can see it in use?
Thanks for the very informative video. Do you know which OSS models support function calling?
Thanks again for the awesome content; I've learned a lot from your videos, please keep doing what you do. I was also wondering if you plan on making videos about production-ready RAG with the methods that you talked about in your RAG series. Thanks a lot, and please keep enriching us with your content.
Yeah I will go back to the RAG stuff again.
I really respect what Sam does. He is one of the few YouTubers who avoids just hyping up trendy things and simply makes very useful videos.
But am I the only one who thinks that LangChain's syntax is just insane? Having looked at the 3rd notebook, I find that I create an agent that has tools, then create an agent executor passing the agent and the aforementioned tools (why again?). Then I create an agent node that invokes some kind of agent, and pass the created agent executor as the agent argument to the node... How can anyone understand this Russian doll...
😀 Hey Sasha, I can totally relate to how you feel. It is very low level, and I think they have also changed some things since this video. Also, they are finally supporting function calling better across multiple models. I have been playing with some new notebooks for this and will make a video about it soon. LangGraph is good at a low level, but I agree it can be insanely frustrating at times. You can use something like CrewAI if you want to stay really high level, but I find that frustrating when it runs into issues as well. I promise I will try to get some new vids on this out soon, hopefully with some open-source models like the new Llama etc. as well.
Your video is great. However, it presumes prior knowledge of the LangGraph ecosystem. For example, at 11:23 the Trace page and the setup behind it are not explained. Your Colab 01 as well: try running it incognito as a viewer; once you reach the 'prompt' cell, things start breaking apart. Overall, you are engaging and knowledgeable, but the video could use a warning at the beginning to inform the viewer of the requirements, like knowing the LangGraph ecosystem etc. I'm subscribing nonetheless; hope you see this as a constructive comment, Sam.
Wait, why do you need to explicitly define an array of tools and forward it into an agent creator if you have decorators in there? What is the use of the decorators then? I'm confused.
This is a really excellent tutorial. I want to develop a use case where I call an API (e.g., the Google Maps API) based on a location and use the returned result to filter down customers around that location (within 2 kilometers), where the customer and ordering information is stored in a relational data store (say SQLite, PostgreSQL, or MySQL). Can you provide any implementation suggestions? Just to clarify, user input could lead to 3 scenarios for queries: 1) API only, 2) API + RDBMS, 3) RDBMS only.
Put the effort into making it a tool, and then the agent just uses the tool with simple commands; i.e., put the heavy lifting on the tools side. I have a CrewAI tutorial coming out later this week that goes into this.
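To make that advice concrete for the use case above, here is a rough, untested sketch of keeping the heavy lifting in two tools, one that would call a geocoding API and one that queries the relational store, so the agent only decides which tool(s) to call for the three scenarios. The helper names, database file, and table schema are all hypothetical.

```python
import sqlite3

from langchain_core.tools import tool


@tool
def find_coordinates(address: str) -> str:
    """Return 'lat,lng' for an address (would call a geocoding API such as Google Maps)."""
    # Placeholder: call the real geocoding API here and parse the response.
    return "1.2921,103.7764"


@tool
def customers_near(lat: float, lng: float, radius_km: float = 2.0) -> str:
    """Query the relational store for customers within radius_km of a point."""
    conn = sqlite3.connect("orders.db")  # hypothetical SQLite database
    rows = conn.execute(
        # Crude bounding box; for real distances use PostGIS or the haversine formula.
        "SELECT name, lat, lng FROM customers "
        "WHERE lat BETWEEN ? AND ? AND lng BETWEEN ? AND ?",
        (lat - radius_km / 111, lat + radius_km / 111,
         lng - radius_km / 111, lng + radius_km / 111),
    ).fetchall()
    conn.close()
    return str(rows)


tools = [find_coordinates, customers_near]
# Bind these to the agent; it can then cover API-only, API + RDBMS,
# and RDBMS-only requests simply by choosing which tools to call.
```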
By the way, why use agents and agent executors?
I have seen so many tutorials with just models with bound tools. What is the benefit/difference of using AgentExecutor?
What would I do with memory if I am using an agent executor? Create an agent executor with memory, or create memory that saves the state of the graph? How do multiple agents access the memory then?... omg, LangChain...
The AgentExecutor was more the old way of doing agents before LangGraph. Think of the graph as a big state machine that you just pass around; multiple agents can be like different nodes on the graph. I am still thinking of some simple examples to show off the basics, but these are great questions and I will address them in a video.
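For anyone who wants the smallest possible picture of that "graph as state machine" idea, here is a minimal hedged sketch (not from the video) with a single node; the state keys are made up purely for illustration.

```python
from typing import TypedDict

from langgraph.graph import StateGraph, END


class AgentState(TypedDict):
    input: str
    result: str


def agent_node(state: AgentState) -> dict:
    # In a real graph this would invoke a model or an agent runnable;
    # a node just reads the shared state and returns the keys it updates.
    return {"result": f"processed: {state['input']}"}


workflow = StateGraph(AgentState)
workflow.add_node("agent", agent_node)
workflow.set_entry_point("agent")
workflow.add_edge("agent", END)  # more nodes here means more "agents" sharing the state

app = workflow.compile()
print(app.invoke({"input": "hello", "result": ""}))
```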
@@samwitteveenai thank you so much for responding to the comments) Thank you for your attention. Keep up the great work, while I am integrating self-querying RAG for my startup based on your tutorial)
how is it different from agent executor?
Hi Sam, your videos are always very insightful and have helped me keep up with the latest developments in the LLM space. I do have a question: when we pass a Python function as a tool into an LLM, how does the execution work? Let's say there is a very long function which is to be executed next. Is the whole function along with its parameters passed on to the LLM (using precious tokens), with the LLM running the function on its server and returning the output? Or does the LLM just decide which function to run, with the function then running locally and providing the output to the LLM for the next action?
Also, is the behaviour the same for Python REPL functions?
The functions are executed on the process that executes the runnable chain, not remotely on the LLM. The LLM only determines which function to run and what the parameters should be, then LangChain / LangGraph executes the code "locally."
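A small sketch to illustrate that split, assuming the langchain-openai integration; the tool here is invented for the example. The model only ever sees the tool's name, description, and argument schema, while the function body runs in your own process.

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def get_word_length(word: str) -> int:
    """Return the number of characters in a word."""
    return len(word)  # executes locally, never on the model provider's servers


llm = ChatOpenAI(model="gpt-4o", temperature=0).bind_tools([get_word_length])

# The model replies with which tool to call and what arguments to use...
msg = llm.invoke("How many letters are in the word 'supervisor'?")
for call in msg.tool_calls:
    # ...and LangChain / your own code runs the function here, locally.
    print(get_word_length.invoke(call["args"]))
```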
@@pnhbs392 Thanks, this was really helpful.
How does this work with open source LLM instead of OpenAI?
Thank you so much for the course!
Can we use a local LLM via HuggingFaceTextGenInference?
Can I ask what you used to draw the StateGraph slide? Looks cool
Excalidraw. It works very well for things like this.
I am curious how you would compare this with CrewAI for setting up agents. I feel setting up an agent with LangGraph has too many steps...
CrewAI is much higher level, yes, but it is not as flexible as LangGraph. That said, both are built on LangChain, so they should be able to do much of the same stuff. I will make some vids soon about that.
Interesting, Thank you!!
Beautiful......... Thank you so much.
Hey Sam, do you know if it is possible to integrate memory in a graph and how to do it?
Yeah, you can save and load etc. and use the normal ways of adding memory. I will make some more vids for LangGraph when I get a chance.
Please do one deep dive on dspy also.
This is a great video. It seems overly complicated, though, compared to AutoGen, which seems to hide a lot of the complexity. We built something similar using regular agents as tools (nodes), which then have their own tools. A more dynamic agent with multiple personalities can be built with this, but it would be hard to manage.
Great job, sir. All the examples in the docs use OpenAI; can you please do a video where you use a different model like Gemini for this? Also, if I have a complex input like a list of objects containing messages from different users and I want to work on each of them, can you show us how to go about this? Maybe send a response message to each of the users in the list after reading their messages?
I have been doing research on NLP and software engineering for 6 years. I have some good research publications as well, in IEEE Transactions on Software Engineering, the Journal of Systems and Software, and the Requirements Engineering conference. I have also developed skills in RAG and agent-based frameworks. Can I get a good job in the field of GenAI and LLM orchestration? If you have any openings, please ask for my CV. Thanks in advance.
I would say yes. I have some research background, with papers at EMNLP and NeurIPS workshops etc., and I see that as an advantage for a lot of the new skills. Understanding the basics of NLP and NLU really helps for a lot of this. That said, you certainly need to update your skills etc.
Thank you for the great video:)
This is sick Sam! Keep it up!
It would be a great idea to make an agent that makes other agents.
🎯 Key Takeaways for quick navigation:
00:18 🐉 *Celebrating Chinese heritage*
- Emphasizing pride in being descendants of the dragon, symbolizing deep roots and a rich history.
01:18 📱 *Cultural contradictions*
- Discussing the tension between traditional values and modern practices, such as buying iPhones and vacationing in Japan despite historical conflicts.
02:17 🎉 *Emphasizing unity and cultural pride*
- Encouraging the celebration of Chinese culture, showcasing elements such as the dragon dance, and stressing the importance of staying connected to one's roots.
Made with HARPA AI
Thanks again, Sam!
Thank you! 🙏
📝 Summary of Key Points:
📌 Langgraph is a graph-based system for building custom agents in the Langchain ecosystem. Nodes represent different components of an agent, and edges connect these nodes to enable decision-making and conditional routing within the agent.
🧐 The video provides coding examples to demonstrate Langgraph's functionality. Examples include building an agent executor using custom tools, using a chat model and a list of messages for more complex conversations, and creating an agent supervisor to route user requests to different agents based on predefined conditions.
💡 Additional Insights and Observations:
💬 "Langgraph is a powerful tool for building custom agents with decision-making capabilities."
📊 No specific data or statistics were mentioned in the video.
🌐 The Langchain ecosystem and Langgraph provide a flexible framework for creating various types of agents.
📣 Concluding Remarks:
Langgraph is an innovative tool within the Langchain ecosystem that allows users to build custom agents with decision-making capabilities. The video showcases coding examples to demonstrate the functionality of Langgraph and encourages viewers to explore different use cases. Langgraph provides a flexible and powerful framework for creating agents, making it a valuable tool for developers.
Generated using TalkBud
Thanks so much
Great. Great
Markov chains meet LLMs.
Sam, the GPT-4 model you used is not working.
you will have to change to the latest GPT-4o
is it me or does he sound very close to @3Blue1Brown
Nope… Still lazy after a while. It takes time just to search the output code to see where they are summarizing etc. instead of giving full code. It really breaks the flow when you're actually working well together, then the program generates a NEW error and you've copied over good code with a bunch of fixed code interspersed with some random "put your stuff in here" sections. 😮
ok
Thanks for the video + code examples, Sam. I have consistent trouble with early stopping; is there any way to prevent it? For example, the should_continue function receives an AgentFinish message, but the output will look like this: 'agent_outcome': AgentFinish(return_values={'output': 'Please call the following function: {"function":{...
So it knows it should keep calling functions, but it fires a Finish anyway. I've tried changing the system prompt to make it not finish until all its functions are done, but it will still do this. Any suggestions?
Try adding in another self-check step, i.e. having another node check, and if it thinks all is done it can trigger the agent END etc.
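A rough sketch of what that extra check could look like as a conditional edge, assuming a StateGraph named workflow with an "agent" node and an "agent_outcome" state key already exist as in the notebook; the string heuristic is just an example taken from the output quoted above.

```python
from langgraph.graph import END

# Assumes `workflow` (a StateGraph) and the "agent" node are already defined.

def check_done(state) -> str:
    """Self-check: refuse to end while the 'finish' text still contains a
    pending function request (heuristic based on the output shown above)."""
    outcome = state["agent_outcome"]
    text = str(getattr(outcome, "return_values", ""))
    if "Please call the following function" in text:
        return "continue"  # loop back to the agent instead of finishing
    return "end"


workflow.add_conditional_edges(
    "agent", check_done, {"continue": "agent", "end": END}
)
```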
@@samwitteveenai Your response is much appreciated. If I understand you correctly, it would mean looking for isinstance of AgentFinish in the outcome returned by agent.invoke(), then ignoring that msg and generating an AgentAction manually inside of run_agent. I couldn't figure out how to create the AgentAction yet, but also that almost feels like a hack to me--maybe the better solution is to split the tasks better among multiple agents, using your supervisor code (I will try this next). However, it feels like one agent should be able to handle a few tools each.
Additional information on my setup is that I consistently get the early stop problem when trying to get the same agent to call the same tool twice (for different inputs). Presumably, the agent looks at the log and sees it has already called the tool, then gives up. I have tried altering the sys/user prompts to avoid that behavior, to no success.
Let me know if there is an error in my comprehension.
@@samwitteveenai Looking at the other examples, I think I see what is happening. Instead of putting the function request in the additional_kwargs of the last message, the agent sometimes puts 'Please use function x' in the body of the response, which results in an AgentFinish firing.
@samwitteveen very nicely explained
Hi @samwitteveenai I sent you a LI request :). Great video!
great