Build Agents from Scratch (Building Advanced RAG, Part 3)

  • Published: 19 Jun 2024
  • In this third video of the series, we teach you how to build LLM-powered agentic pipelines - specifically, we show you how to build a ReAct agent (Yao et al.) from scratch!
    We do this in two parts:
    1. First, we define a single execution step using LlamaIndex query pipelines to express an agentic DAG. We use special components to maintain mutable state, which can be carried over to the next step of execution.
    2. We then wrap this DAG in an agent worker that can execute it step-by-step or e2e until the task is complete (a minimal sketch follows the timeline below).
    Colab: colab.research.google.com/dri...
    Timeline:
    00:00-10:42 Intro
    10:42-13:04 Setup Data + SQL Tool
    13:04-23:06 Define Agent Modules
    23:06-26:27 Define Links between Modules
    26:27 Setup and Run Agent
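
    A minimal sketch of this pattern, assuming the llama-index 0.10.x query-pipeline API shown in the video (the "step" here is a placeholder that finishes after one pass, not a real ReAct loop with an LLM and tools):

    ```python
    # Sketch (assumed llama-index 0.10.x API, not the notebook's full code):
    # an AgentInputComponent seeds mutable state, an AgentFnComponent runs a
    # placeholder step, and the final component returns (response, is_done).
    from llama_index.core.agent import AgentRunner, QueryPipelineAgentWorker
    from llama_index.core.chat_engine.types import AgentChatResponse
    from llama_index.core.query_pipeline import (
        AgentFnComponent,
        AgentInputComponent,
        QueryPipeline,
    )

    def agent_input_fn(task, state):
        # Mutable state lives in this dict and carries across steps.
        state.setdefault("count", 0)
        state["count"] += 1
        return {"input": task.input}

    def agent_step_fn(task, state, input):
        # Placeholder for the real work (LLM call, ReAct parsing, tool use).
        return f"step {state['count']}: {input}"

    def agent_output_fn(task, state, step_str):
        # The worker expects (AgentChatResponse, is_done); finish in one step.
        return AgentChatResponse(response=step_str), True

    qp = QueryPipeline(
        modules={
            "input": AgentInputComponent(fn=agent_input_fn),
            "step": AgentFnComponent(fn=agent_step_fn),
            "output": AgentFnComponent(fn=agent_output_fn),
        }
    )
    qp.add_chain(["input", "step", "output"])

    # Wrap the DAG in a worker; the runner executes it step-by-step or e2e.
    agent = AgentRunner(QueryPipelineAgentWorker(qp))
    print(agent.chat("hello"))
    ```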

Comments • 13

  • @amansingh.ai.01 • 4 months ago

    Awesome! Thank you, Jerry. Absolutely love tutorials like these.

  • @seanbergman8927 • 2 months ago

    Great video! This entire three-part series is exactly what I needed. Can't wait for the next video you mentioned that will take user feedback into account.

  • @AngusLou • 19 days ago

    Suggestion: Place your avatar on the right side of the screen so that it doesn't block the text as much. Thank you.

  • @joaooliveira7051 • 4 months ago

    Great!
    Is version 0.10.5 already available?

  • @vittoriohalfon • 1 month ago

    Amazing video. However, please make some video tutorials in Node.js / TypeScript!! There aren't only Python devs out there 😇

  • @orhandag540 • 2 months ago

    Hi guys, this was a great video, but I have a question: is it possible to build this agent with Hugging Face LLMs?
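
    In principle the LLM is pluggable. A minimal sketch, assuming the llama-index-llms-huggingface integration package (the model name is just an example, not from the video):

    ```python
    # Sketch: set a Hugging Face model as the global default LLM.
    from llama_index.core import Settings
    from llama_index.llms.huggingface import HuggingFaceLLM

    Settings.llm = HuggingFaceLLM(
        model_name="HuggingFaceH4/zephyr-7b-beta",
        tokenizer_name="HuggingFaceH4/zephyr-7b-beta",
    )
    ```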

  • @fintech1378 • 4 months ago

    How is it different from AutoGen or CrewAI?

  • @explorer945 • 4 months ago +1

    Will this give the flexibility to use any model, or is it tightly coupled to how OpenAI models respond (choices, etc.)?

    • @LlamaIndex • 4 months ago +2

      Yeah, you can use any model. Practically speaking, GPT-4 will give you the best results, but we've seen good results with mistral-7b/zephyr.

    • @explorer945 • 4 months ago

      @LlamaIndex The reason I asked is to experiment with some of the local-only models using Ollama, completely disconnected from the internet. You know, the apocalypse scenario 😁

    • @ashishsingh-bv1rq • 3 months ago

      @LlamaIndex Can you share the same solution with an open-source model?
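
      For the local, offline scenario above, a minimal sketch assuming the llama-index-llms-ollama integration package (not from the video); if the pipeline wires the LLM in as its own module (as in the video), you would pass an instance like this there instead of the OpenAI one:

      ```python
      # Sketch: route the agent's LLM through a locally served Ollama model.
      from llama_index.llms.ollama import Ollama

      # "mistral" is just an example; any locally pulled chat model works.
      local_llm = Ollama(model="mistral", request_timeout=120.0)
      ```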

  • @renaudgg • 4 months ago +3

    Did I understand correctly at the end that we can just write agent.chat instead of writing all those 800 lines of code?
    Also, does all this work well with gpt-3.5-turbo? Will that agent help get better answers?
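
    For reference, the "one-liner" at the end amounts to something like this sketch (the worker is built once from the DAG earlier in the video; the question string is illustrative):

    ```python
    # Sketch: once the DAG is wrapped in a worker, usage is just agent.chat.
    from llama_index.core.agent import AgentRunner

    agent = AgentRunner(agent_worker)  # agent_worker wraps the pipeline DAG
    response = agent.chat("Which city in the table has the largest population?")
    print(str(response))
    ```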

  • @austinmw89 • 3 months ago

    Shouldn't `parse_react_output_fn` also append the `reasoning_step` (e.g. an `ActionReasoningStep` or `ResponseReasoningStep`), and shouldn't `agent_input_fn` only append a `reasoning_step` on the first run, so you'd get the following as the chat history:
    ```
    user: ObservationReasoningStep (original query)
    assistant: ActionReasoningStep
    user: ObservationReasoningStep (tool output)
    assistant: ResponseReasoningStep
    ```
    Instead of the current:
    ```
    user: ObservationReasoningStep (original query)
    user: ObservationReasoningStep (tool output)
    user: ObservationReasoningStep (original query)
    ```
    Where currently no assistant messages are mixed in, and the original query appears twice?
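
    A hedged sketch of the proposed tweak (the function name and return dict follow the comment above; the `state["memory"]` internals are assumptions, not necessarily the video's exact code):

    ```python
    # Hypothetical tweak: record the parsed step as an ASSISTANT turn so the
    # history alternates user/assistant instead of repeating the query.
    from llama_index.core.agent.react.output_parser import ReActOutputParser
    from llama_index.core.llms import ChatMessage, MessageRole

    def parse_react_output_fn(task, state, chat_response):
        reasoning_step = ReActOutputParser().parse(chat_response.message.content)
        # Proposed change: append the assistant's reasoning to memory so the
        # next step's prompt interleaves roles.
        state["memory"].put(
            ChatMessage(role=MessageRole.ASSISTANT,
                        content=reasoning_step.get_content())
        )
        return {"done": reasoning_step.is_done, "reasoning_step": reasoning_step}
    ```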