Automate AI Research with Crew.ai and Mozilla Llamafile

  • Published: 25 Jun 2024
  • In this video we'll walk through how to set up crew.ai with Mozilla Llamafile to run a local large language model on your computer and automate multi-step tasks using a model of your choosing.
    Sample Code: github.com/heaversm/crew-llam...
    List of Llamafile Models on hugging face: huggingface.co/models?library...
    Serper Search API: serper.dev/
    CrewAI documentation - docs.crewai.com/how-to/LLM-Co...
    Langchain Docs - python.langchain.com/v0.2/doc...
    0:00 - Intro
    0:25 - Example - gathering job candidate data and assigning scores
    1:20 - Installing Crew
    1:53 - Using Langchain for local AI models
    2:52 - Using the sample code
    4:12 - Adding our API keys for Serper and OpenAI
    5:08 - Setting up our agents and tasks in Crew
    6:15 - Running our workflow
    7:45 - Looking at the output file
    8:35 - Switching to a local LLM with Mozilla Llamafile
    10:35 - Next steps
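
The core of the "switching to a local LLM" step is that Llamafile exposes an OpenAI-compatible chat-completions endpoint, so any OpenAI-style client can talk to it. Below is a minimal stdlib-only sketch of that idea, not the video's sample code: `http://localhost:8080/v1` assumes Llamafile's default server port, and the `"LLaMA_CPP"` model name is a placeholder (Llamafile largely ignores this field).

```python
import json
import urllib.request

# Llamafile serves an OpenAI-compatible API; localhost:8080 is its
# default port, but check your own server's startup log.
LLAMAFILE_URL = "http://localhost:8080/v1/chat/completions"

def build_payload(system_prompt, user_prompt, model="LLaMA_CPP"):
    """Build an OpenAI-style chat-completion request body."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": 0.2,
    }

def ask_local_llm(payload):
    """POST the payload to the local Llamafile server and return the reply text."""
    req = urllib.request.Request(
        LLAMAFILE_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    payload = build_payload(
        "You are a recruiting assistant.",
        "Score this candidate: 5 years of Python, 2 years of ML.",
    )
    print(json.dumps(payload, indent=2))
    # With the Llamafile server running, uncomment to get a real answer:
    # print(ask_local_llm(payload))
```

CrewAI never needs to know it's talking to a local model: pointing an OpenAI-compatible client at this URL is all the "switch" amounts to.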

Comments • 17

  • @Nick__X 15 days ago

    awesome content dude!

  • @mattc3265 a month ago +1

    Love being able to run this locally. Great vid
    👏👏👏

  • @SixTimesNine a month ago

    Very good vid - thanks!

  • @vladimirmiletin4486 a month ago +1

    Nice, thanks

  • @iliadobrevqqwqq a month ago

    Tks

  • @practical-ai-prototypes a month ago +2

    Update - I made an `app-input.py` script that allows you to create your own agent and task just by answering some questions in the command line.
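
A script like the `app-input.py` described above presumably just collects the agent and task descriptions at the command line and feeds them to CrewAI. The following is a hypothetical sketch of that pattern, not the channel's actual script; the field names (`role`, `goal`, `backstory`, `description`, `expected_output`) mirror CrewAI's `Agent` and `Task` parameters.

```python
def build_config(role, goal, backstory, task_description, expected_output):
    """Bundle command-line answers into agent/task keyword arguments."""
    return {
        "agent": {"role": role, "goal": goal, "backstory": backstory},
        "task": {
            "description": task_description,
            "expected_output": expected_output,
        },
    }

def prompt_user():
    """Ask the questions interactively, app-input.py style."""
    return build_config(
        input("Agent role? "),
        input("Agent goal? "),
        input("Agent backstory? "),
        input("Task description? "),
        input("Expected output? "),
    )

if __name__ == "__main__":
    # Non-interactive demo; swap in prompt_user() to answer the
    # questions at the command line instead.
    config = build_config(
        "Recruiter", "Score job candidates", "An experienced recruiter",
        "Score each resume from 1-10", "A CSV of names and scores",
    )
    print(config)
    # With CrewAI installed, this is roughly:
    #   agent = Agent(**config["agent"], llm=local_llm)
    #   task = Task(**config["task"], agent=agent)
    #   Crew(agents=[agent], tasks=[task]).kickoff()
```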

    • @JofnD a month ago

      Seems very useful! Is there an update video for this?

    • @practical-ai-prototypes 22 days ago

      @@JofnD no - but same instructions, just run `python app-input.py` from the command line.

  • @dbreardon a month ago

    Now everyone needs a new computer and a CUDA graphics card, which are massively expensive due to crypto mining and now AI servers. Running locally is way too slow on my 3-4 year old laptop.
    We'll have to see if the new Intel and AMD chips with embedded NPUs provide any support for running multiple LLMs on local machines.

    • @practical-ai-prototypes 22 days ago

      Fair point - local performance is not as good as running on cloud infrastructure. It seems like "AI-enabled" PCs will be the new trend.

  • @gymlin123 29 days ago

    but don't I still have to pay for tokens?

  • @ryanbthiesant2307 a month ago +1

    Not good. It does not show the problems of CrewAI working with Ollama or any other local LLM. CrewAI persistently asks for an OpenAI key. On the plus side, I discovered the Mozilla Llamafile server, thank you. CrewAI is really bad.

    • @mandelafoggie9359 28 days ago

      So what is better than Crewai?

    • @practical-ai-prototypes 22 days ago +1

      You don't have to use OpenAI or the API key - you can just remove it from the code. The Ollama sample file in the GitHub repo shows you how to use Ollama. Note that Ollama is not an LLM - it just lets you run LLMs locally.
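
One common workaround for the "persistently asks for an OpenAI key" problem is to point the OpenAI client settings at the local server and supply a dummy key, since OpenAI-compatible clients generally require *some* key string even when the local server ignores it. A minimal sketch, assuming Llamafile's default port and the standard `OPENAI_API_BASE` / `OPENAI_API_KEY` environment variables that Langchain's OpenAI client reads:

```python
import os

# Point any OpenAI-compatible client at the local Llamafile server.
# The base URL assumes Llamafile's default port; adjust to match yours.
os.environ["OPENAI_API_BASE"] = "http://localhost:8080/v1"
# Placeholder key: the local server ignores it, and it is never sent
# to OpenAI's actual API.
os.environ["OPENAI_API_KEY"] = "sk-no-key-required"

# With langchain-openai installed, the same idea expressed explicitly:
#   from langchain_openai import ChatOpenAI
#   local_llm = ChatOpenAI(
#       base_url="http://localhost:8080/v1",
#       api_key="sk-no-key-required",
#   )

print(os.environ["OPENAI_API_BASE"])
```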

    • @practical-ai-prototypes 22 days ago

      @@mandelafoggie9359 You can try AutoGPT if you want - I found it harder to use.

    • @ryanbthiesant2307 15 days ago

      @@practical-ai-prototypes Thank you, I will check that out again. I think it still asks for a key, even a fake key, even if you want to use Ollama.