If you want to explain LlamaIndex workflows, I believe you would be far better off simplifying the example: fewer agents, fewer tools, fewer events ... for a first-time viewer this gets confusing and convoluted very quickly.
I think this is overly complicated. 90% of this has nothing to do with LLMs - and these are all solved problems. I don't think LlamaIndex should try to reinvent application-building mechanics like workflows etc.
Errors are showing up in the project: if I type "stop" as the first input, it goes into some kind of error loop and takes a while to come out of it.
I'm confused: your project isn't using llama-agents; instead it does `from llama_index.core.agent import FunctionCallingAgentWorker`. What is this?
Can we do this using a local LLM that we host on vLLM? We don't want to send customer data to OpenAI.
If you have a solution for the vLLM implementation, please let me know.
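Regarding the vLLM question above: vLLM can serve an OpenAI-compatible API, so any OpenAI-style client (including LlamaIndex's `OpenAILike` LLM class) can usually be pointed at it just by swapping the base URL, keeping customer data on your own hardware. A minimal stdlib-only sketch of the request shape, assuming a hypothetical server at `http://localhost:8000/v1` and a placeholder model name (both are assumptions, not the video's setup):

```python
import json
from urllib import request

# Assumed endpoint: vLLM's OpenAI-compatible server, which by default
# listens on http://localhost:8000/v1 when started with something like
#   vllm serve <model-name>
VLLM_BASE = "http://localhost:8000/v1"


def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat-completion payload that vLLM accepts."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.0,
    }


payload = build_chat_request("my-local-llama", "Hello, are you local?")
body = json.dumps(payload).encode()

# Sending the request (left commented so this sketch runs without a server):
# req = request.Request(
#     f"{VLLM_BASE}/chat/completions",
#     data=body,
#     headers={"Content-Type": "application/json"},
# )
# print(json.loads(request.urlopen(req).read()))
```

In LlamaIndex itself the equivalent move would be constructing the LLM with an `api_base` pointing at the vLLM server instead of OpenAI; the payload above is just the wire format both sides speak.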
It has so many build errors. Please provide a requirements.txt file with pinned versions.
I wish the explanation was better! Disappointed!
First