Automate AI Research with Crew.ai and Mozilla Llamafile
- Published: Jun 25, 2024
- In this video we'll walk through how to set up crew.ai with Mozilla Llamafile to run a local large language model on your computer and automate multi-step tasks using a model of your choosing.
Sample Code: github.com/heaversm/crew-llam...
List of Llamafile Models on hugging face: huggingface.co/models?library...
Serper Search API: serper.dev/
CrewAI documentation - docs.crewai.com/how-to/LLM-Co...
Langchain Docs - python.langchain.com/v0.2/doc...
0:00 - Intro
0:25 - Example - gathering job candidate data and assigning scores
1:20 - Installing Crew
1:53 - Using Langchain for Local AI models
2:52 - Using the Sample Code
4:12 - Adding our API keys for Serper and OpenAI
5:08 - Setting up our agents and tasks in Crew
6:15 - Running our workflow
7:45 - Looking at the output file
8:35 - Switching to a local LLM with Mozilla Llamafile
10:35 - Next steps
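The core trick the video builds on is that llamafile serves an OpenAI-compatible API (by default at http://localhost:8080/v1), so any OpenAI-style client can talk to the local model. Here's a minimal stdlib-only sketch of that request shape — `build_local_llm_request` is a hypothetical helper name, not code from the video's repo:

```python
import json
import urllib.request

def build_local_llm_request(prompt, base_url="http://localhost:8080/v1"):
    """Build an OpenAI-style chat-completions request for a local llamafile server."""
    payload = {
        # llamafile accepts a placeholder model name
        "model": "LLaMA_CPP",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer no-key",  # llamafile does not validate the key
        },
    )

req = build_local_llm_request("Score this job candidate from 1-10.")
# Actually sending it requires a running llamafile server:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

Because the endpoint speaks the OpenAI wire format, the same pattern is what lets LangChain (and therefore CrewAI) swap the cloud model for a local one.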
awesome content dude!
Love being able to run this locally. Great vid
👏👏👏
Very good vid - thanks!
Nice, thanks
Thanks
Update - I made an `app-input.py` script that allows you to create your own agent and task just by answering some questions in the command line.
Seems very useful! Is there an update video for this?
@JofnD No, but the instructions are the same - just run `python app-input.py` from the command line.
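A hypothetical sketch of what an `app-input.py`-style script might do — gather an agent and task definition from command-line answers. The field names here are illustrative, not the repo's actual code:

```python
def collect_agent_config(ask=input):
    """Collect an agent/task definition by asking questions on the command line."""
    return {
        "role": ask("Agent role (e.g. researcher): "),
        "goal": ask("Agent goal: "),
        "task": ask("Task description: "),
    }

# Non-interactive example with canned answers instead of live input():
answers = iter(["researcher", "score job candidates", "rank resumes 1-10"])
config = collect_agent_config(lambda prompt: next(answers))
```

The resulting dict would then be handed to CrewAI's `Agent` and `Task` constructors; passing `ask` as a parameter keeps the function easy to test without a terminal.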
Now everyone needs a new computer and a CUDA graphics card, which are massively expensive thanks to crypto mining and now AI servers. Local inference runs way too slowly on my 3-4 year old laptop.
Will have to see if the new Intel and AMD chips with embedded NPUs provide any support for running multiple LLMs on local machines.
Fair point - performance on local is not as good as running on cloud infrastructure. Seems like "AI-enabled" PCs will be the new trend.
But don't I still have to pay for tokens?
You don't have to pay for tokens if you run a local model.
Not good. It doesn't show the problems of CrewAI working with Ollama or any other LLM. CrewAI persistently asks for an OpenAI key. On the plus side, I discovered the Mozilla llamafile server - thank you. CrewAI is really bad.
So what is better than CrewAI?
You don't have to use OpenAI or the API key - you can just remove it from the code. The Ollama sample file in the GitHub repo shows you how to use Ollama. Note that Ollama is not an LLM itself - it just lets you run LLMs locally.
@@mandelafoggie9359 You can try autogpt if you want - I found it harder to use.
@@practical-ai-prototypes Thank you I will check that out, again. I think it still asks for a key, even a fake key. Even if you want to use ollama.