Building and Testing Reliable Agents

  • Published: 30 Jul 2024
  • This talk was given as a workshop at the AI Engineering World's Fair on June 24, 2024. LLM-powered agents hold tremendous promise for autonomously performing tasks, but reliability is often a barrier to deployment and productionization. Here, we'll show how to design and build reliable agents using LangGraph. We'll cover ways to test agents using LangSmith, examining both the agent's final response and its tool-use trajectory. We'll compare a custom LangGraph agent to a ReAct agent for RAG to showcase the reliability benefits of building custom agents with LangGraph.
    Slides:
    docs.google.com/presentation/...
    CoLab:
    drive.google.com/file/d/1KUCI...
    Notebook:
    github.com/langchain-ai/langg...
    LangGraph:
    blog.langchain.dev/langgraph-...

Comments • 12

  • @andydataguy
    @andydataguy 1 month ago +2

    Great summary of everything 🙌🏾💜

  • @darkmatter9583
    @darkmatter9583 1 month ago +1

    You're always doing great work, huge fan of your videos ❤ keep it up

  • @awakenwithoutcoffee
    @awakenwithoutcoffee 1 month ago +1

    Hi Lance, just wanted to drop a thank-you from me and my team for always being on top of the RAG game. This is a complex field with fast-evolving concepts, and LangGraph seems to be the tool we have been looking for.
    What is your take on GraphRAG: is it production-ready, and will it eventually replace or complement current RAG systems?

    • @andydataguy
      @andydataguy 1 month ago +1

      I'm shaken awake nightly, drenched in a cold sweat from nightmares about contemplating this exact concept.
      Please do a video series about GraphRAG 💜💜💜

  • @adrenaline681
    @adrenaline681 1 month ago +2

    Trying to learn more about these types of processes. Am I correct in understanding that the agent for-loop would also need to make more LLM calls (and thus be more expensive), since it needs an extra call to decide which step to take next? Whereas with the mixed method you only make that extra call when it is grading.

    • @lala-kq5ho
      @lala-kq5ho 1 month ago

      Yes, that's correct, and the chance of the LLM incorrectly calling tools also makes it less reliable. You can add retry loops, but those will increase the number of LLM calls, since you will have to include a correction prompt as well.
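      The cost tradeoff discussed in this thread can be sketched with a toy calculation (the function names are hypothetical, and the counts assume one LLM call per routing decision plus one for the final answer):

      ```python
      def react_loop_calls(num_tool_steps: int) -> int:
          # A ReAct-style agent spends one LLM call per step to decide
          # which tool to invoke next, plus one final call to produce
          # the answer once it decides to stop.
          return num_tool_steps + 1

      def fixed_graph_calls(num_grading_calls: int) -> int:
          # In a custom graph, retrieval and other steps are hard-wired
          # edges (no LLM call); the only extra calls are the grading
          # step(s) plus the final answer generation.
          return num_grading_calls + 1

      # For a task needing 3 tool steps, with a single grading pass:
      print(react_loop_calls(3))   # 4 LLM calls
      print(fixed_graph_calls(1))  # 2 LLM calls
      ```

      Retry loops add to either count: each retry is one more correction prompt and one more LLM call.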

  • @pphodaie
    @pphodaie 1 month ago

    It would be very helpful if LangGraph had built-in code interpreter support: the LLM is prompted to generate code instead of calling predefined functions (tools), and the framework executes the code and returns the results back to the LLM.
    Both the OpenAI Assistants API and AutoGen have this.
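    The pattern being requested can be sketched as a plain Python node (this is a toy sketch, not LangGraph's API; the function name is hypothetical, and a real implementation would sandbox execution rather than use `exec` on untrusted code):

    ```python
    import io
    import contextlib

    def run_generated_code(source: str) -> str:
        """Execute LLM-generated Python and capture stdout as the tool result."""
        buf = io.StringIO()
        try:
            with contextlib.redirect_stdout(buf):
                exec(source, {})  # WARNING: not safe for untrusted code
        except Exception as exc:
            # Feeding the error back lets the LLM attempt a correction.
            return f"Error: {exc}"
        return buf.getvalue()

    # The string below stands in for code emitted by the LLM:
    result = run_generated_code("print(sum(range(10)))")
    print(result)  # "45"
    ```

    The result string would then be appended to the conversation so the LLM can continue reasoning over it.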

  • @codekiln
    @codekiln 14 days ago

    What is the conference that you gave this presentation at?

  • @nessrinetrabelsi8581
    @nessrinetrabelsi8581 1 month ago

    Thanks! Was the workshop recorded?

  • @xuantungnguyen9719
    @xuantungnguyen9719 26 days ago

    Tool docstrings become part of the prompt. How do you manage these docstrings?
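    One common answer to the question above is to keep the docstring as the single source of truth and extract it programmatically when building the tool's prompt description, so edits propagate automatically. A minimal sketch using only the standard library (the tool and helper names are hypothetical):

    ```python
    import inspect

    def web_search(query: str) -> str:
        """Search the web for the given query and return the top result."""
        raise NotImplementedError  # hypothetical tool body

    def tool_description(fn) -> str:
        # inspect.getdoc normalizes indentation, so the docstring can be
        # written naturally in source and still render cleanly in a prompt.
        return f"{fn.__name__}: {inspect.getdoc(fn)}"

    print(tool_description(web_search))
    # web_search: Search the web for the given query and return the top result.
    ```

    Frameworks like LangChain build tool schemas from function docstrings in a similar spirit, which keeps the prompt text versioned alongside the code it describes.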