Building a multi-agent concierge system using LlamaIndex Workflows

  • Published: 5 Nov 2024

Comments • 9

  • @shubhammittal7832 · 2 months ago

    I'm seeing errors in the project: if I type "stop" as the first input, it goes into some kind of loop of errors and takes a while to come out of it. [A guard sketch appears after the thread.]

  • @xuexileader · 2 months ago · +4

    I'm confused: your project doesn't use llama-agents; instead it does from llama_index.core.agent import FunctionCallingAgentWorker. What is this? [A usage sketch appears after the thread.]

  • @chirwatra · 2 months ago · +1

    Can we do this using a local LLM that we host on vLLM? We don't want to send customer data to OpenAI. [A vLLM sketch appears after the thread.]

    • @aryansakhala3930 · 12 days ago

      If you have a solution for the vLLM implementation, please let me know.

  • @eneskosar4649 · 1 month ago

    It has so many build errors. Please provide a requirements.txt file with pinned versions. [An illustrative pin file appears after the thread.]

  • @BerndPrager · 6 days ago

    If you want to explain LlamaIndex Workflows, I believe you would be far better off simplifying the example: fewer agents, fewer tools, fewer events... for a first-time viewer this quickly becomes confusing and convoluted.

  • @GuruprasadGV · 1 month ago · +2

    I think this is overly complicated. 90% of it has nothing to do with LLMs, and those are all solved problems. I don't think LlamaIndex should try to reinvent application-building mechanics like workflows.

  • @RajeshGupta-gx3yz · 8 days ago

    I wish the explanation were better! Disappointed!

  • @SamiSabirIdrissi · 2 months ago

    First
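
Editor's notes

On @shubhammittal7832's "stop" loop: without the project source at hand this is only a guess, but a typical fix is to handle the quit keyword before any agent logic runs. A minimal sketch using the LlamaIndex Workflows API; the class and step names here are hypothetical, not the video's:

    from llama_index.core.workflow import StartEvent, StopEvent, Workflow, step

    class ConciergeWorkflow(Workflow):  # hypothetical name
        @step
        async def handle_input(self, ev: StartEvent) -> StopEvent:
            user_input = input("> ")
            # Exit cleanly if the user quits immediately, instead of
            # falling through to agents that expect a real request.
            if user_input.strip().lower() == "stop":
                return StopEvent(result="Goodbye!")
            return StopEvent(result=f"Routing request: {user_input}")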
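
On @xuexileader's question: llama-agents is a separate framework for deploying agents as services, while FunctionCallingAgentWorker ships with llama-index core and appears to be what the video's workflow drives. A minimal usage sketch, assuming an OpenAI API key is configured; the multiply tool is just an illustration:

    from llama_index.core.agent import FunctionCallingAgentWorker
    from llama_index.core.tools import FunctionTool
    from llama_index.llms.openai import OpenAI

    def multiply(a: int, b: int) -> int:
        """Multiply two integers."""
        return a * b

    # Wrap a plain function as a tool and hand it to the agent worker.
    tool = FunctionTool.from_defaults(fn=multiply)
    worker = FunctionCallingAgentWorker.from_tools(
        [tool], llm=OpenAI(model="gpt-4o-mini"), verbose=True
    )
    agent = worker.as_agent()
    print(agent.chat("What is 6 times 7?"))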
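
On @chirwatra's vLLM question: vLLM serves an OpenAI-compatible API, so one common approach is the OpenAILike wrapper (pip install llama-index-llms-openai-like) pointed at the local server; no data leaves your host. A sketch in which the URL and model name are placeholders for your deployment:

    from llama_index.llms.openai_like import OpenAILike

    llm = OpenAILike(
        model="meta-llama/Meta-Llama-3-8B-Instruct",  # whatever vLLM serves
        api_base="http://localhost:8000/v1",  # your vLLM endpoint
        api_key="unused",  # vLLM does not require a real key by default
        is_chat_model=True,  # route through /chat/completions
    )
    print(llm.complete("Say hello."))

Note that the function-calling agents in the video only work if the served model supports tool calls; in that case also pass is_function_calling_model=True.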
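
On @eneskosar4649's request: no official pin list ships with the demo, so the pins below are assumptions for late 2024; the reliable route is to freeze a working environment. An illustrative requirements.txt:

    # Illustrative pins only -- regenerate with `pip freeze > requirements.txt`
    # from an environment where the demo actually runs.
    llama-index-core==0.11.*
    llama-index-llms-openai==0.2.*
    llama-index-agent-openai==0.3.*
    llama-index-utils-workflow==0.2.*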