Building a self-corrective coding assistant from scratch

  • Published: 2 Dec 2024

Comments •

  • @Slimshady68356
    @Slimshady68356 9 months ago +7

    Whenever I see Lance, I know the tutorial is going to be awesome; he teaches very slowly and accurately, connecting the dots. Thanks, Lance from LangChain (it has a nice ring to it)

  • @robertputneydrake
    @robertputneydrake 9 months ago +28

    I shit you not, 2 months ago I had the idea of using Abstract Syntax Trees and Control Flow Diagrams as part of advanced RAG mechanisms as an element to bring codegen based on whole repositories to the next level.

    • @ForTheEraOfLove
      @ForTheEraOfLove 9 months ago +4

      Those moments are called "Ascension Symptoms" HAKUNA MATATA it's a quick and beautiful ride

    • @EzraSchroeder
      @EzraSchroeder 9 months ago

      Can you explain a little bit so I'd know what to Google to find out more about this? @@ForTheEraOfLove

    • @perrygoldman612
      @perrygoldman612 9 months ago +5

      GPT 5 will hit RAG really hard

    • @seansullivan6263
      @seansullivan6263 9 months ago

      How? @@perrygoldman612

    • @alestar22
      @alestar22 9 months ago

      You mean we will not need RAG? @@perrygoldman612

  • @maof77
    @maof77 9 months ago +8

    This guy is awesome. He explains advanced code and ideas so nicely and makes them easy to understand 👍👍👍

  • @chrisogonas
    @chrisogonas 8 months ago +1

    Incredible! I see several practical use cases. Thanks

  • @TimKitchens7
    @TimKitchens7 9 months ago +1

    Really awesome concept and presentation Lance! I'm really intrigued by all the work that's going into solving the code generation problem. I hear a lot of people dismissing the notion that AI will be able to generate more than a few lines of simple code at a time any time soon. They're typically thinking of LLM limitations. While this concept obviously doesn't quite get us to the "full-blown, complex app" level, it definitely shows that progress is being made, despite limitations of current LLMs. Nice work!

  • @TraderE-fl2zm
    @TraderE-fl2zm 6 months ago

    Watching these, it's hard not to get lost in the potential. Thanks for sharing. I'm putting time into learning more and practicing some concepts. It's fun to brainstorm the possibilities, and I learn a little more each video.

  • @ibbbyscode
    @ibbbyscode 9 months ago +2

    Thanks Lance. Amazing tutorial!

  • @RobertoMartin1
    @RobertoMartin1 9 months ago +3

    Pretty nice; it seems like this is just doing what we'd manually do when using the ChatGPT interface. Would love to see more examples, maybe around other verticals like text-to-SQL (how do you automatically validate that?) and something general like chatbots solving a problem like writing a blog post.

  • @wSevenDays
    @wSevenDays 9 months ago

    Thank you very much!! I appreciate your work on this topic with advanced flows and langchain

  • @youngzproduction7498
    @youngzproduction7498 6 months ago

    LangGraph is now on fire!

  • @kishorkukreja7733
    @kishorkukreja7733 9 months ago +2

    Very informative and useful. Would appreciate it if you could do a video on LangGraph with a SQL/graph DB chain as the nodes. Thanks!

    • @r.lancemartin7992
      @r.lancemartin7992 8 months ago +1

      (This is Lance from the video.) Good feedback! I will think about a good example use-case here.

  • @proteusnet
    @proteusnet 9 months ago +1

    Really cool, thanks for sharing!

  • @dv_interval42
    @dv_interval42 9 months ago +4

    Would be nice to compare results w/ various foundational models. I'm assuming an obvious case here is using a crappier but cheaper model, eating the cost of multiple inference runs to get a potentially better result, compared to fewer runs from a more expensive model.

    • @r.lancemartin7992
      @r.lancemartin7992 8 months ago

      (This is Lance from LangChain.) Yes. This is a good point; I want to update this w/ an OSS model and run eval. On my list!

  • @perrygoldman612
    @perrygoldman612 9 months ago +2

    This is a brilliant idea. The only problem is that this coding assistant will probably bankrupt my OpenAI account if the iteration runs more than 3 times, which is more likely to happen in a real case....

    • @r.lancemartin7992
      @r.lancemartin7992 9 months ago

      (This is Lance.) Ya, good point. I am working on an update that will run locally for free :) The only issue is that local LLMs have a smaller context window.

  • @PrashantSaikia
    @PrashantSaikia 8 months ago +2

    Great! Do you have any example notebook showing how to use LangGraph for code generation in an externally compiled language, like C? How do you replace the "exec" command (which executes Python code only) with something that can call the C compiler, run it against the generated (and saved) code file, collect the compiler errors, put them back into the LangGraph flow in the relevant node, and so on?

    • @nadavlev4534
      @nadavlev4534 7 months ago

      Maybe run a bash subprocess to compile/run the code, or use some Python bindings for the language
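
For what the thread above suggests, here is a minimal sketch of a compile-and-collect-errors step that swaps Python's exec for an external C toolchain via a subprocess call to gcc. The function names (compile_c, parse_gcc_errors) and the exact gcc invocation are illustrative assumptions, not something shown in the video.

```python
# Sketch: run the generated C source through gcc and collect compiler errors
# so they can be fed back into a corrective-codegen loop.
import os
import re
import subprocess
import tempfile

def compile_c(source: str):
    """Write the generated C source to a temp file, invoke gcc,
    and return (succeeded, raw_stderr)."""
    with tempfile.NamedTemporaryFile("w", suffix=".c", delete=False) as f:
        f.write(source)
        path = f.name
    try:
        proc = subprocess.run(
            ["gcc", path, "-o", path + ".out"],
            capture_output=True, text=True, timeout=30,
        )
        return proc.returncode == 0, proc.stderr
    finally:
        os.unlink(path)

def parse_gcc_errors(stderr: str):
    """Extract (line_number, message) pairs from gcc-style
    'file.c:LINE:COL: error: message' diagnostics."""
    pat = re.compile(r":(\d+):\d+:\s+error:\s+(.*)")
    return [(int(m.group(1)), m.group(2))
            for line in stderr.splitlines()
            if (m := pat.search(line))]
```

The parsed (line, message) pairs are what you would put back into the graph state in the relevant node, in place of the Python traceback.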

  • @mohamedfouad1309
    @mohamedfouad1309 9 months ago +2

    Is there some network visualization thing for this???

  • @Jakolo121
    @Jakolo121 9 months ago +2

    Thanks for this video❤

  • @AnilUttani
    @AnilUttani 7 months ago

    Thank you for the amazing tutorial. Can you also share the Notion note that you are using at the beginning of the video?

  • @mayanklohani19
    @mayanklohani19 9 months ago +1

    LangSmithNotFoundError: Dataset lcel-teacher-eval not found
    Why am I getting this error?

  • @dileepprabhu8941
    @dileepprabhu8941 1 month ago

    Hello everyone! Does anyone know of a video or tutorial that shows how to achieve the self-correcting coding demonstrated here, but locally using Ollama? I've seen tons of videos about tool calling and RAG agents, but not specifically for coding assistance.
    I tried replacing the model with llm = ChatOllama(model="llama3.1", temperature=0), but it seems that ChatOllama doesn't handle code well; it gave me a TypeError with the message: 'ModelMetaclass' object is not iterable. Any advice would be greatly appreciated! And Lance, love the way you explain.

  • @jonclement
    @jonclement 7 months ago

    Great job. I'd probably name the state nodes 'node_generate', though, and the edges 'edge_check_code_imports'.

  • @chengduman
    @chengduman 9 months ago +1

    Cool.
    However, the execution success rate is not related to LangGraph per se, but to the corrective ability of the design, which has been demonstrated by many non-LangChain OSS projects, namely open-interpreter, autogen, rawdog, etc.
    And the really juicy part of those implementations is two things: how you handle package-installation failure and code-execution failure, which you have done in the video as well.
    So the point is: it's not about LangChain or LangGraph, but the mechanism for doing corrective coding.
    Maybe it would be better if you could compare how complex it is to implement corrective coding in bare-metal, non-LangChain Python code vs. how easy it is using LangChain/LangGraph.

    • @r.lancemartin7992
      @r.lancemartin7992 9 months ago +1

      (This is Lance.) For sure. AlphaCodium did it as well (not using LangChain, of course). LangChain/LangGraph are absolutely not required to implement this idea, but I found LangGraph to be one reasonable and fairly easy way to implement it. I've been chatting w/ Itamar from CodiumAI about augmenting AlphaCodium to include retrieval so that it can be applied over any codebase / set of docs.
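
As a rough illustration of the comparison this thread asks for, the bare, non-LangChain corrective loop fits in a few lines of plain Python. Here fake_llm is a stand-in stub invented for the example (a real system would call a model); the names and the toy "fix" it returns are assumptions, not the video's code.

```python
# Bare-metal corrective-codegen loop: generate -> execute -> on failure,
# feed the traceback back into the next prompt and retry.
import traceback

def fake_llm(prompt: str) -> str:
    # Stub standing in for a real model call: returns broken code on the
    # first attempt and "fixes" it once the prompt contains the error.
    if "NameError" in prompt:
        return "result = sum([1, 2, 3])"
    return "result = sum(numbers)"  # first draft: 'numbers' is undefined

def corrective_codegen(question: str, max_iters: int = 3):
    prompt = question
    for _ in range(max_iters):
        code = fake_llm(prompt)
        scope = {}
        try:
            exec(code, scope)  # the "check code execution" step
            return code, scope.get("result")
        except Exception:
            # Reflect: append the traceback so the next generation can fix it
            prompt = question + "\nPrevious error:\n" + traceback.format_exc()
    return None, None

code, result = corrective_codegen("Sum the numbers 1, 2, 3.")
# result is 6 after one corrective retry
```

A LangGraph version layers explicit state, conditional edges, and an iteration cap on top of this same generate/execute/reflect cycle, which is where the framework earns (or doesn't earn) its keep.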

  • @AtomicPixels
    @AtomicPixels 8 months ago

    ESLint straight from the dryer????

  • @DonBranson1
    @DonBranson1 9 months ago +5

    Awesome video. Does LangGraph / LangChain support instantiating the code generation within a Docker container? It would be good to have guardrails for this.

    • @DonBranson1
      @DonBranson1 9 months ago +1

      You draw some excellent diagrams. Are you using LangChain / LLMs to do that?

    • @r.lancemartin7992
      @r.lancemartin7992 8 months ago

      @@DonBranson1 (This is Lance from the video.) I use Excalidraw. And yes, we put out a video on deployment using Modal: ruclips.net/video/X3yzWtAkaeo/видео.html

  • @preston_is_on_youtube
    @preston_is_on_youtube 9 months ago +2

    LETS GOOOOOO

  • @zedmor
    @zedmor 9 months ago +3

    What if you trained another network on the correct paths in the graph (when a solution is found in the graph) and then used that network as a heuristic function to improve path finding? Would that improve search?

  • @excalidraw
    @excalidraw 9 months ago +1

    ❤️

  • @jessiondiwangan2591
    @jessiondiwangan2591 8 months ago +1

    This could be Devin

  • @prajwal3114
    @prajwal3114 6 months ago

    It doesn't work with AzureOpenAI

  • @fkxfkx
    @fkxfkx 9 months ago

    Could we maybe edit the "ums" out in post? An LLM could do it.

  • @ShaunPrince
    @ShaunPrince 9 months ago

    Give this guy $5 for each time he says "like".

    • @r.lancemartin7992
      @r.lancemartin7992 8 months ago +2

      (This is Lance from the video.) I'll send you my Venmo :D ... happily accepting payment.

  • @AtomicPixels
    @AtomicPixels 8 months ago +1

    Quick tip: open your next video with a slow, steady overview of the technology itself and what it's for. It helps to imagine you're presenting a person who matters, who has all the feelings you do, with ambitious goals and reasons of their own. Your job when tutoring should be just that: explain to the 'kids' in your pretend audience why the approach is logical, the parts that got you excited, and an overview of what you'll do to follow that plan. This is much better than just repeating stuff we can already see without you saying it's there. Then dive deep. These clips have been fairly shallow and sometimes impossible to stay engaged with since I first saw one last week. When someone says "I do this, then that," they're simply reporting that they made something without ever explaining why, or when not to, or what it's for. Teaching happens when you explain the approach; it never happens by simply reporting the approach.