Local Function Calling with Llama3 using Ollama and Phidata

  • Published: Jun 9, 2024
  • In this video, we test the phidata library and build a system where a raw LLM uses Memory, Knowledge, and Tools. We implement function-calling code using the Llama 3 LLM with Ollama and phidata.
    We will see how to turn any LLM into an Assistant.
    Follow along for a hands-on, code-along session.
    Try Phidata: www.phidata.com/#pricing
    Cookbook: github.com/phidatahq/phidata/...
    #phidata #function-calling #assistants #tools #memory #autogen #memgpt #crewai
    CHANNEL LINKS:
    🕵️‍♀️ Join my Patreon: / promptengineer975
    ☕ Buy me a coffee: ko-fi.com/promptengineer
    📞 Get on a Call with me at $125 Calendly: calendly.com/prompt-engineer4...
    ❤️ Subscribe: / @promptengineer48
    💀 GitHub Profile: github.com/PromptEngineer48
    🔖 Twitter Profile: / prompt48
    TIME STAMPS:
    0:00 Intro
    1:08 Argument
    2:39 Documents
    5:28 Github Code
    6:02 Ollama installation
    6:34 Coding
    12:30 Running Streamlit app
    18:18 Conclusion
  • Science
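The workflow the video builds (a local LLM that decides to call Python functions) can be sketched without any framework. Below is a minimal, framework-free illustration of the dispatch pattern that libraries like phidata automate; the tool name, stub data, and the simulated model reply are all hypothetical, and a real app would obtain the reply from Ollama rather than hard-coding it.

```python
import json

# Hypothetical tool the assistant may call; name and data are
# illustrative stubs, not phidata's API.
def get_stock_price(symbol: str) -> str:
    prices = {"NVDA": "120.91"}          # stubbed data for the sketch
    return prices.get(symbol, "unknown")

TOOLS = {"get_stock_price": get_stock_price}

def dispatch(model_reply: str) -> str:
    """Parse a JSON tool call emitted by the LLM and execute it."""
    call = json.loads(model_reply)
    fn = TOOLS[call["tool"]]
    return fn(**call["args"])

# Simulated LLM output (a real app would get this from Ollama):
reply = '{"tool": "get_stock_price", "args": {"symbol": "NVDA"}}'
print(dispatch(reply))  # -> 120.91
```

The framework's job is generating the tool schemas, prompting the model to emit that JSON shape, and looping the result back into the conversation.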

Comments • 34

  • @uberalus
    @uberalus A month ago +5

    Pretty awesome find in phidata. Seems like a solid open source alternative for private, local AI assistants to get up and running quickly. The helper methods for embedding unstructured data look very convenient. Much more intuitive and less complicated than other systems I have seen. Hopefully, they will expand to support multi-agent applications that divide responsibilities and cooperate to craft the ultimate response.

    • @PromptEngineer48
      @PromptEngineer48  A month ago +1

      Indeed. I'm noting your requirements and will get back to the developers about meeting them.

  • @free_thinker4958
    @free_thinker4958 A month ago +2

    Thanks for the video 💯👏, we would like to see a use case where you combine the phidata with the crewai framework 🙌

    • @PromptEngineer48
      @PromptEngineer48  A month ago

      Welcome. But I think phidata and crewai are both frameworks, so I don't see the point of mixing them. 😁 If you could give more hints on what exactly you would like to do, I will rethink it and try to make a use case.

    • @free_thinker4958
      @free_thinker4958 A month ago +1

      @@PromptEngineer48 Can we integrate a phidata assistant as a tool in a crewai agent?
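Whether crewai accepts this directly depends on its tool API, but the general adapter pattern is simple: wrap the assistant behind a plain callable that an agent framework can register as a tool. Everything below is a stand-in sketch; the `Assistant` class and method names are hypothetical, not phidata's or crewai's actual API.

```python
# A framework-agnostic sketch: hide an assistant object behind a plain
# function so another agent framework can register it as a tool.
# The Assistant class here is a stand-in, NOT phidata's or crewai's API.
class Assistant:
    def run(self, prompt: str) -> str:
        return f"answered: {prompt}"     # stub response for the sketch

def make_assistant_tool(assistant):
    """Return a plain function that a tool registry can accept."""
    def assistant_tool(query: str) -> str:
        """Delegate a query to the wrapped assistant."""
        return assistant.run(query)
    return assistant_tool

tool = make_assistant_tool(Assistant())
print(tool("what is phidata?"))  # -> answered: what is phidata?
```

With a real library, you would attach the returned callable wherever that framework expects custom tools (in crewai's case, its custom-tool mechanism).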

  • @IdPreferNot1
    @IdPreferNot1 A month ago +1

    Looking forward to the series! Local inference continues to be slow, especially for agentic work. Would love it if you could try Groq with their Llama 3 70B endpoint. Otherwise I would prefer just to use the best paid OpenAI inference to see what the framework's limits are.

    • @PromptEngineer48
      @PromptEngineer48  A month ago

      Thanks. You can expect another video today, in about 7 hours.

  • @mernik5599
    @mernik5599 A month ago +2

    This is great! Pretty much what I've been looking for. Just wondering if it would be possible to run this through the Ollama web UI? It feels the most intuitive, and I've been using it on my phone by tunneling through an ngrok server. I would also love to see you build different use cases with this, like asking what files are in a particular folder: we could start by creating a folder in a virtual environment and placing a small CSV file in it, plus another function that reports the contents of that CSV. That would allow for so many use cases.
    And although it might be far-fetched, I would love to see what level of personalised agent can be created with this, like a food-ordering assistant where we define menu items and prices. When a user tries to place an order, it can list the available options from the predefined menu and calculate the price of the order using function calling, so the maths is done properly.

    • @PromptEngineer48
      @PromptEngineer48  A month ago +1

      Ollama web UI: I will have to check that out.
      Use cases: I am preparing and will release several videos on phidata.
      Your use cases are pretty awesome. I will try to build one or two.

  • @pavguy
    @pavguy A month ago +1

    Cool, thanks for sharing this. I am wondering how I can integrate phidata's agentic tools into my current RAG application, built with Chainlit. I was thinking earlier of crewai; I guess I will try both to check them out. Any input on this use case?

  • @Techonsapevole
    @Techonsapevole A month ago +2

    Thanks, is it possible to use phidata with Open WebUI instead of Streamlit?

    • @PromptEngineer48
      @PromptEngineer48  A month ago

      I'm not sure, because Open WebUI seems to be made for Ollama. I will definitely find out.

  • @mazziskin
    @mazziskin A month ago +1

    Can I use os.environ to access the Groq API with Llama 3 70B instead of my local Ollama model?
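Reading the key from `os.environ` is the usual pattern for switching to a hosted endpoint. The sketch below only illustrates selecting a backend by environment variable; the backend strings and model identifier are assumptions for illustration, not a specific library's API.

```python
import os

# Sketch: choose an LLM backend based on whether a hosted API key is set.
# The identifier strings are placeholders, not a library's actual config.
def pick_backend() -> str:
    if os.environ.get("GROQ_API_KEY"):
        return "groq:llama3-70b"         # hosted endpoint (assumed id)
    return "ollama:llama3"               # local default

os.environ["GROQ_API_KEY"] = "sk-example"   # normally exported in your shell
print(pick_backend())  # -> groq:llama3-70b
```

A real phidata setup would pass the chosen model object to the Assistant instead of a string, but the env-var gate is the same idea.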

  • @gnorts_mr_alien
    @gnorts_mr_alien A month ago +2

    When the models do not have native function-calling capabilities, how reliable does this get? How does this convince Llama 3 to call a function, or make it notice that it needs to call one? Just hope?

    • @PromptEngineer48
      @PromptEngineer48  A month ago

      Yes, I understand your point. But part of the reliability of function calling depends on the structure of the prompt as well; no doubt the LLM itself plays a big role. In this video, I have demonstrated tool calling.
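Concretely, for models without native function calling, frameworks typically inject tool descriptions into the system prompt and ask the model to reply in a parseable JSON shape. Here is a hedged sketch of that prompt construction (not phidata's actual implementation; the tool and wording are illustrative):

```python
import inspect

def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    return "sunny"    # stub for the sketch

def build_tool_prompt(*tools) -> str:
    """Describe the available tools in a system prompt so the LLM
    knows to answer with a JSON tool call when appropriate."""
    lines = ['You can call these tools by replying with JSON '
             '{"tool": <name>, "args": {...}}:']
    for fn in tools:
        sig = inspect.signature(fn)       # e.g. (city: str) -> str
        lines.append(f"- {fn.__name__}{sig}: {fn.__doc__}")
    return "\n".join(lines)

print(build_tool_prompt(get_weather))
```

Reliability then hinges on the model following that instruction, which is why prompt structure matters so much for non-native tool calling.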

    • @gnorts_mr_alien
      @gnorts_mr_alien A month ago +1

      @@PromptEngineer48 Thank you, that was not a dig at you at all; sorry if it sounded like criticism. I was curious about how phidata is trying to solve the problem. At the end of the day, even for models that are trained for tool use, we rely on some sort of "hope" that the probabilities will work in our favor. I was curious what types of guarantees phidata tries to give.

    • @PromptEngineer48
      @PromptEngineer48  A month ago

      No, it did not sound like criticism. :) Phidata alone cannot guarantee it, but the setup is so simple to use and the integrations are so easy that you can fail and iterate faster. Ultimately the LLM and the prompts win.

  • @samyio4256
    @samyio4256 A month ago +1

    How about licensing? Can I use phidata for my company's AI apps? Or is it not for commercial usage by default?

    • @phidata
      @phidata A month ago +2

      Hi there, phidata is open source so please feel free to use it as you'd like :)

    • @PromptEngineer48
      @PromptEngineer48  A month ago

      Thanks for clarifying.

    • @oltipreka3599
      @oltipreka3599 A month ago +1

      What is your use case? Just out of curiosity.

    • @samyio4256
      @samyio4256 A month ago +2

      I'm trying to build an advanced AI agent that delegates to mini LLMs for specific tasks. The ultimate goal is to handle the entire business intelligence of the company with it.
      From my view, phidata looks like a good foundation for that agent.

    • @PromptEngineer48
      @PromptEngineer48  A month ago

      So instead of the usual route of tool selection, you are going with LLM selection.

  • @nholmes86
    @nholmes86 A month ago +1

    Can you show me the best web UI available for Ollama?

  • @themax2go
    @themax2go A month ago +1

    Agent swarm with phidata? Deviation using phidata?

  • @JarppaGuru
    @JarppaGuru A month ago +1

    Is that Windows 11? ...skipping. Have fun with the ads.