Can we, for the love of god, please do a different use case than grabbing the weather in SF in every demo?
Maybe it's the new hello world.
Hahaha, same thought!!!
Or just a DuckDuckGo search
maybe the weather in LA?
Same with the snake game… it’s a conspiracy at this point
Is it possible to bake more instructions into the single-LLM first example's input prompt so it just follows the JSON format for its output? It's very easy for GPT-4o to provide direct JSON output, for example. Just append ", answering in JSON format with the following structure: {}" to whatever the input message to the LLM is? -Jason asking about JSON
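For what it's worth, a minimal sketch of the prompt-only approach Jason describes, assuming langchain-openai (the schema fields here are placeholders I made up, not from the video):

```python
# Prompt-only JSON: append the format instruction to the user message.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")

question = "What is the weather in SF?"
prompt = (
    question
    + ", answering in JSON format with the following structure: "
    + '{"conditions": "<string>", "temperature_f": <number>}'
)

result = llm.invoke(prompt)
print(result.content)  # usually a JSON string matching the structure
```

On OpenAI models you can also pass model_kwargs={"response_format": {"type": "json_object"}} to ChatOpenAI to guarantee syntactically valid JSON, but neither trick validates field names or types the way with_structured_output does.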
In the second example, if we are extracting the content of the tool message from the ToolNode and passing it as a HumanMessage in "respond", couldn't you have created a direct edge from the "tools" node to the "respond" node for less token usage, instead of coming back to the "agent" node? Or am I missing a possible error?
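A hedged sketch of that direct "tools" -> "respond" wiring — node names follow the video, but the schema, tool, and routing lambda are illustrative placeholders, not the notebook's exact code:

```python
from typing import Annotated, TypedDict

from langchain_core.messages import BaseMessage, HumanMessage
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.graph import END, StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode
from pydantic import BaseModel

class WeatherResponse(BaseModel):  # placeholder output schema
    temperature_f: float
    conditions: str

@tool
def get_weather(city: str) -> str:
    """Look up the current weather for a city (stubbed here)."""
    return f"It's 60 degrees and foggy in {city}."

tools = [get_weather]
model = ChatOpenAI(model="gpt-4o").bind_tools(tools)
structured_model = ChatOpenAI(model="gpt-4o").with_structured_output(WeatherResponse)

class AgentState(TypedDict):
    messages: Annotated[list[BaseMessage], add_messages]
    final_response: WeatherResponse

def call_model(state: AgentState):
    return {"messages": [model.invoke(state["messages"])]}

def respond(state: AgentState):
    # Pass the last ToolMessage's content straight to the structured model --
    # no extra "agent" turn, which is the token saving the question is after.
    answer = structured_model.invoke(
        [HumanMessage(content=state["messages"][-1].content)]
    )
    return {"final_response": answer}

workflow = StateGraph(AgentState)
workflow.add_node("agent", call_model)
workflow.add_node("tools", ToolNode(tools))
workflow.add_node("respond", respond)
workflow.set_entry_point("agent")
# If the agent requested a tool, run it; otherwise go format the answer.
workflow.add_conditional_edges(
    "agent",
    lambda state: "tools" if state["messages"][-1].tool_calls else "respond",
)
workflow.add_edge("tools", "respond")  # the direct edge, skipping "agent"
workflow.add_edge("respond", END)
graph = workflow.compile()
```

The trade-off: looping "tools" back to "agent" (as in the video) lets the model decide whether it needs more tool calls before answering; the direct edge saves one LLM round trip but only works when a single tool call is always enough.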
Pro-tip for the first diagram: put the nodes that are the same in the same place, so I don't have to parse the whole thing to see there is only one extra node.
Are there any real-world problem examples?
Is there a way you guys could provide a transposed version of the code for TypeScript, or mention in the videos how we could do it ourselves? I love LangGraph, but doing anything in TypeScript is a bit difficult compared to the Python version.
Is it true that only Mac users can use LangGraph?
Is Agent Studio going to be available on other platforms?
No, the LangGraph Python library is available on Windows and Linux as well.
@OrestisStefanis How do you get LangGraph Studio working locally on Windows?
Please kindly provide the notebook links for all these videos. Thx
Is this using the native Structured Outputs support that OpenAI provides?
Nope, this is using the with_structured_output method from LangChain, which basically adds a tool and extends the prompt used by the LLM.
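For reference, a minimal sketch of that method (WeatherResponse is a made-up placeholder schema):

```python
from langchain_openai import ChatOpenAI
from pydantic import BaseModel

class WeatherResponse(BaseModel):  # placeholder schema
    temperature_f: float
    conditions: str

structured_llm = ChatOpenAI(model="gpt-4o").with_structured_output(WeatherResponse)
result = structured_llm.invoke(
    "It's 60 degrees Fahrenheit and foggy in San Francisco right now."
)
print(result)  # a validated WeatherResponse instance, not raw JSON text
```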
If the output is longer than 4000 tokens, how can we generate longer output? I have an expected structured output of ~5000 tokens since the JSON is so large.
Infer the first 4k tokens, append them to messages with role == assistant, and rerun inference. It will continue completing its first output.
@danieldvali9128 Could you please expand on that? I have the same problem.
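A rough sketch of the continuation trick @danieldvali9128 describes, using the raw OpenAI client (the model name and prompt are placeholders):

```python
# Keep appending the truncated output as an assistant message until the
# model stops on its own instead of hitting the token cap.
from openai import OpenAI

client = OpenAI()
messages = [{"role": "user", "content": "Generate the full JSON report..."}]
chunks = []

while True:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        max_tokens=4096,
    )
    choice = resp.choices[0]
    chunks.append(choice.message.content)
    if choice.finish_reason != "length":
        break  # the model finished on its own
    # Hit the token cap: feed the partial output back as an assistant
    # message so the next call continues where it left off.
    messages.append({"role": "assistant", "content": choice.message.content})

full_output = "".join(chunks)
```

Chat models don't always resume mid-token perfectly, so it's worth validating the stitched result (e.g. json.loads, retrying on failure) before trusting it.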
Massive Applause 👏
The best introduction to structured output with LangChain out there!
Appreciate you keeping it so simple and explanatory with both examples!
These are just the docs, bro!
In fact, these guys are making the video docs.