Whenever I see Lance, I know the tutorial is going to be awesome. He teaches slowly and precisely, connecting the dots. Thanks, Lance from LangChain (it has a nice ring to it).
I shit you not, 2 months ago I had the idea of using Abstract Syntax Trees and Control Flow Diagrams in advanced RAG mechanisms, as a way to take codegen over whole repositories to the next level.
Those moments are called "Ascension Symptoms" HAKUNA MATATA it's a quick and beautiful ride
Can you explain a little bit so I'd know what to Google to find out more about this? @ForTheEraOfLove
GPT 5 will hit RAG really hard
How? @perrygoldman612
You mean we will not need RAG? @perrygoldman612
This guy is awesome. He explains advanced code and ideas so nicely and easy to understand 👍👍👍
Incredible! I see several practical use cases. Thanks
Really awesome concept and presentation Lance! I'm really intrigued by all the work that's going into solving the code generation problem. I hear a lot of people dismissing the notion that AI will be able to generate more than a few lines of simple code at a time any time soon. They're typically thinking of LLM limitations. While this concept obviously doesn't quite get us to the "full-blown, complex app" level, it definitely shows that progress is being made, despite limitations of current LLMs. Nice work!
Watching these, it's hard not to get lost in the possibilities. Thanks for sharing. I'm putting time into learning more and practicing some of the concepts. It's fun to brainstorm ideas, and I learn a little more with each video.
Thanks Lance. Amazing tutorial!
Pretty nice. It seems like this automates what we'd do manually in the ChatGPT interface. Would love to see more examples, maybe around other verticals like text-to-SQL (how do you automatically validate that?) and something more general, like a chatbot solving a problem such as writing a blog post.
Thank you very much!! I appreciate your work on this topic with advanced flows and langchain
LangGraph is now on fire!
Very informative and useful. Would appreciate it if you could do a video on LangGraph with a SQL/graph DB chain as the nodes. Thanks!
(This is Lance from the video.) Good feedback! I will think about a good example use-case here.
Really cool, thanks for sharing!
Would be nice to compare results w/ various foundation models. I'm assuming the obvious case here is using a weaker but cheaper model and eating the cost of multiple inference runs to get a potentially better result, versus fewer runs of a more expensive model.
(This is Lance from LangChain.) Yes. This is a good point; I want to update this w/ an OSS model and run eval. On my list!
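A back-of-the-envelope way to frame that tradeoff while waiting on a proper eval (all costs and success rates below are made-up placeholders, not real model pricing or measured numbers):

```python
# Expected spend of a retry loop that stops on the first success.
# All numbers are hypothetical placeholders, not real prices or measured rates.
def expected_cost(cost_per_attempt: float, p_success: float, max_iters: int = 3) -> float:
    """Expected cost when retrying up to max_iters times, stopping on success."""
    total, p_still_failing = 0.0, 1.0
    for _ in range(max_iters):
        total += p_still_failing * cost_per_attempt  # pay only if we got this far
        p_still_failing *= (1 - p_success)
    return total

cheap = expected_cost(cost_per_attempt=0.002, p_success=0.6)   # cheaper, less reliable
strong = expected_cost(cost_per_attempt=0.02, p_success=0.9)   # pricier, more reliable
print(f"cheap model loop ≈ ${cheap:.4f}, strong model loop ≈ ${strong:.4f}")
```

Under these placeholder numbers the cheap-model loop still comes out far cheaper, but the crossover obviously depends on the real per-attempt success rates, which is exactly what an eval would measure.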
This is a brilliant idea. The only problem is that this coding assistant will probably bankrupt my OpenAI account if the iteration runs more than 3 times, which is quite likely to happen in a real-world case...
(This is Lance.) Ya, good point. I am working on an update that will run locally for free :) The only issue is that local LLMs have a smaller context window.
Great! Do you have any example notebook showing how to use LangGraph for code generation in a compiled language, like C? For example, how do you replace the exec call (which only works for Python) with something that invokes the C compiler, runs it against the generated (and saved) code file, collects the compiler errors, and feeds them back into the LangGraph flow at the relevant node, and so on?
Maybe run a subprocess to compile and run the code, or use Python bindings for the language.
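For anyone attempting this, a minimal sketch of such a check step for C, assuming gcc is on the PATH; the function name and return keys are illustrative, not taken from the video's notebook:

```python
# Hypothetical helper for a code-check node: compile and run C instead of exec().
import os
import subprocess
import tempfile


def compile_and_run_c(c_source: str, timeout: int = 10) -> dict:
    """Compile C source with gcc, run the binary, and return any errors."""
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, "main.c")
        binary = os.path.join(tmp, "main")
        with open(src, "w") as f:
            f.write(c_source)

        # Compiler errors play the role that import/exec errors play for Python.
        compile_proc = subprocess.run(
            ["gcc", src, "-o", binary],
            capture_output=True, text=True, timeout=timeout,
        )
        if compile_proc.returncode != 0:
            return {"error": "compile", "message": compile_proc.stderr}

        run_proc = subprocess.run(
            [binary], capture_output=True, text=True, timeout=timeout,
        )
        if run_proc.returncode != 0:
            return {"error": "runtime", "message": run_proc.stderr}
        return {"error": None, "message": run_proc.stdout}
```

A graph node can call this and, when `error` is set, append the compiler or runtime stderr to the message history so the next generation attempt sees what to fix.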
Is there some network visualization thing for this???
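If the question is about rendering the graph itself, recent LangGraph versions can emit a Mermaid diagram of the compiled graph; the exact API has shifted across releases, so treat this toy example as a sketch:

```python
# Toy graph purely to show the visualization call; not the video's workflow.
from typing import TypedDict

from langgraph.graph import END, StateGraph


class State(TypedDict):
    messages: list


def generate(state: State) -> State:
    return state


workflow = StateGraph(State)
workflow.add_node("generate", generate)
workflow.set_entry_point("generate")
workflow.add_edge("generate", END)
app = workflow.compile()

# Mermaid source you can paste into any Mermaid viewer.
print(app.get_graph().draw_mermaid())
# In a notebook (with the optional extras installed):
# display(Image(app.get_graph().draw_mermaid_png()))
```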
Thanks for this video❤
Thank you for the amazing tutorial. Could you also share the Notion note you're using at the beginning of the video?
LangSmithNotFoundError: Dataset lcel-teacher-eval not found
Why am I getting this error?
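That dataset lives in the video author's LangSmith organization, so it won't resolve from another account. A rough sketch of creating a stand-in dataset of your own (the example inputs/outputs are placeholders, not the original eval set):

```python
# Create your own copy of the eval dataset in LangSmith.
# Assumes a LangSmith API key is set in the environment (LANGCHAIN_API_KEY).
from langsmith import Client

client = Client()
dataset = client.create_dataset(dataset_name="lcel-teacher-eval")
client.create_examples(
    inputs=[{"question": "How do I build a simple RAG chain in LCEL?"}],
    outputs=[{"answer": "..."}],  # placeholder reference answer
    dataset_id=dataset.id,
)
```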
"Hello everyone! Does anyone know of a video or tutorial that shows how to achieve the self-correcting coding like demonstrated here, but locally using Ollama? I've seen tons of videos about tool calling and RAG agents, but not specifically for coding assistance.
I tried replacing the model with llm = ChatOllama(model="llama3.1", temperature=0), but it seems that ChatOllama doesn't handle code well-it gave me a TypeError with the message: 'ModelMetaclass' object is not iterable. Any advice would be greatly appreciated!". and Lance love the way you explain
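A hedged guess: that TypeError often shows up when a Pydantic class is passed where a list is expected (e.g. bind_tools(Schema) instead of bind_tools([Schema])). A sketch of local structured code generation with ChatOllama; the schema fields here are illustrative rather than the video's exact ones:

```python
# Local structured output with Ollama via langchain-ollama.
# Assumes Ollama is running locally with the llama3.1 model pulled.
from langchain_ollama import ChatOllama
from pydantic import BaseModel, Field


class CodeSolution(BaseModel):
    prefix: str = Field(description="Short explanation of the approach")
    imports: str = Field(description="Import statements")
    code: str = Field(description="Code body, excluding imports")


llm = ChatOllama(model="llama3.1", temperature=0)
structured_llm = llm.with_structured_output(CodeSolution)

solution = structured_llm.invoke("Write a function that reverses a string.")
print(solution.imports)
print(solution.code)
```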
Great job. I'd probably name the state nodes something like 'node_generate', though, and the conditional edge 'edge_check_code_imports'.
Cool.
However, the execution success rate is not related to LangGraph per se, but to the corrective ability of the design, which has been demonstrated by plenty of non-LangChain OSS projects, namely open-interpreter, autogen, rawdog, etc.
And the really juicy part of those implementations is about two things: how you handle package installation failures and code execution failures, which you've done in the video as well.
So the point is: it's not about LangChain or LangGraph, but about the mechanism for doing corrective coding.
Maybe it would be better if you compared how complex it is to implement corrective coding in bare-metal, non-LangChain Python code versus how easy it is using LangChain/LangGraph.
(This is Lance.) For sure. AlphaCodium did it as well (not using LangChain, of course). LangChain/LangGraph are absolutely not required to implement this idea, but I found LangGraph to be one reasonable and fairly easy way to implement it. I've been chatting w/ Itamar from CodiumAI about augmenting AlphaCodium to include retrieval so that it can be applied over any codebase / set of docs.
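For comparison, a bare-Python corrective loop can be sketched in a few lines; ask_llm below is a hypothetical wrapper around whatever chat API you use, and this is not the video's implementation:

```python
# A minimal corrective-coding loop in plain Python, no LangChain/LangGraph.
def corrective_codegen(question: str, ask_llm, max_iters: int = 3) -> str:
    feedback = ""
    for _ in range(max_iters):
        code = ask_llm(f"{question}\n{feedback}\nReturn only Python code.")
        try:
            exec(code, {})          # execution check, analogous to the video's node
            return code             # success: stop iterating
        except Exception as e:      # feed the error back into the next prompt
            feedback = f"Your previous attempt failed with: {e!r}. Fix it."
    return code  # last attempt, even if it still fails
```

The loop itself is simple either way; the graph framing mostly pays off once you add more branches (retrieval, reflection, separate import vs. execution checks) and want tracing over the whole flow.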
ESLint straight from the dryer????
Awesome video. Does LangGraph / LangChain support running the generated code inside a Docker container? It would be good to have guardrails for this.
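As far as I know, LangGraph doesn't sandbox execution itself; the execution node is ordinary Python you write, so one option is to shell out to Docker. A sketch, assuming Docker is installed locally; the image and flags here are illustrative choices, not a LangChain feature:

```python
# Run generated code in a throwaway container instead of exec() in-process.
import os
import subprocess
import tempfile


def run_in_container(code: str, timeout: int = 30) -> subprocess.CompletedProcess:
    with tempfile.TemporaryDirectory() as tmp:
        path = os.path.join(tmp, "main.py")
        with open(path, "w") as f:
            f.write(code)
        return subprocess.run(
            [
                "docker", "run", "--rm",
                "--network", "none",        # no network access for generated code
                "-v", f"{tmp}:/app:ro",     # mount the code read-only
                "-w", "/app",
                "python:3.11-slim",
                "python", "main.py",
            ],
            capture_output=True, text=True, timeout=timeout,
        )
```

The returned stdout/stderr can then be fed back into the graph state exactly like the in-process exec errors in the video.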
You draw some excellent diagrams. Are you using LangChain / LLMs to do that?
@DonBranson1 (This is Lance from the video.) I use Excalidraw. And yes, we put out a video on deployment using Modal: ruclips.net/video/X3yzWtAkaeo/видео.html
LETS GOOOOOO
What if you trained another network on the correct paths through the graph (i.e., whenever a solution is found), and then used that network as a heuristic to improve path finding? Would that improve the search?
❤️
This can be Devin
It doesn't seem to work with AzureOpenAI.
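If the issue is swapping in Azure, langchain-openai ships an AzureChatOpenAI class that is usually a drop-in for ChatOpenAI; the deployment name, API version, and endpoint below are placeholders for your own Azure resource:

```python
# Swap ChatOpenAI for AzureChatOpenAI; reads AZURE_OPENAI_API_KEY from the environment.
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_deployment="my-gpt-4o-deployment",            # your deployment name
    api_version="2024-06-01",                           # a version your resource supports
    azure_endpoint="https://<resource>.openai.azure.com/",
    temperature=0,
)

# Structured output should then work the same way as in the notebook:
# structured_llm = llm.with_structured_output(CodeSolution)
```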
Could we maybe post-edit the "ums" out? An LLM could do it.
Give this guy $5 for each time he says "like".
(This is Lance from the video.) I'll send you my Venmo :D ... happily accepting payment.
Quick tip: open your next video with a slow, steady overview of the technology itself and what it's trying to do. It helps to imagine you're presenting a person who matters, with feelings, ambitious goals, and reasons behind their logic. When tutoring, your job should be just that: explain to the 'kids' in your pretend audience why the approach is logical, which parts got you excited, and give an overview of what you'll do to follow that plan. That's much better than repeating things we can already see on screen. Then dive deep. These clips have felt fairly shallow and sometimes hard to stay engaged with since I first saw one last week. When someone says "I do this, then that," they're just reporting that they made something, without ever explaining why, when not to, or what each step expects. Teaching happens when you explain the approach; it never happens by simply reporting it.