Build Anything with Local Agents, Here’s How
- Published: 30 May 2024
- If you're serious about AI, and want to learn how to build Agents, join my community: www.skool.com/new-society
Follow me on Twitter - x.com/DavidOndrej1
Please Subscribe.
Credits: @matthew_berman
This video will show you how to build Agents using open-source models.
Hi David, thank you for making the demo right off the bat. As much as I want to follow along with your tutorial, it seems like this isn't the entire algorithm, right? I'm having trouble locating the model after installing crewai — the code from your demo simply could not find the model. (I'm new to this.)
Hey Dave, thanks for the tutorial, I think I finally figured it out. It was my env not being set up in the right folder that caused the issue. I think my M2 MacBook is too slow for the model — each prompt takes more than a couple hrs to execute haha
Great video! Suggestion: use the "word wrap" feature in VS Code (under the View tab, or Alt+Z), so that there is no need to scroll for long lines of code. Cheers!
Is there a reason why the vid sometimes gets cut off when David is reading out a command? I'd be typing along with the vid and it just skips to the next step...
Just a thought: it's very easy to /save checkpoints of the same model in ollama with custom prompts and context. Passing those checkpoints to distinct agents could help fine-tune behavior without any training.
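For anyone who hasn't tried this: the same checkpoint idea can be done declaratively with an Ollama Modelfile, which bakes a system prompt and parameters into a named model. The role text and model name below are only an illustration.

```
# Modelfile — build with: ollama create reviewer-agent -f Modelfile
FROM llama3
SYSTEM "You are a terse code-review agent. Point out bugs only."
PARAMETER temperature 0.2
```

Each agent in a crew can then point at its own named checkpoint (reviewer-agent, planner-agent, ...) with no actual fine-tuning involved.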
Great content. Keep it up
How about publishing an exact procedural map of what to do, as some of us are a tad confused?
Hey, do you have an idea of how we could integrate vision models such as 'gpt-4-vision-preview' or 'gpt-turbo' with crewAI agents to analyze images?
What is the difference between this type of agent and a GPT from OpenAI? I see the advantage of choosing different LLMs, but there's also the ease of configuring a GPT through a GUI...
This is extremely useful, thank you!
LLMs run locally use RAM? I thought they used VRAM, or is that just for training?
Are there any other PC requirements needed to get the biggest LLM model to run well locally?
Yes, you can run LLMs with RAM. Generally you'd choose the "quantized" version of the LLM you want to run. Also, regarding compute requirements, you still need a lot. I have an 8-core i9 and 40GB RAM. With that I can get by running quantized 13-billion-parameter models.
@@austinpatrick1871 Hmmm, I have an i7-11700KF. I was hoping I could just buy more RAM and get it to 256GB or something, but from the sounds of it there will be other bottlenecks.
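The sizing rule these two comments are circling can be sketched as a back-of-the-envelope calculation: a quantized model needs roughly parameters × bits-per-weight ÷ 8 bytes, plus runtime overhead. The 20% overhead factor below is an assumption, not a measured number.

```python
def approx_model_ram_gb(params_billions: float,
                        bits_per_weight: int = 4,
                        overhead: float = 1.2) -> float:
    """Rough RAM needed to load a quantized model.

    params_billions: model size, e.g. 13 for a 13B model
    bits_per_weight: 4 for typical Q4 quantization, 16 for fp16
    overhead:        fudge factor for KV cache / runtime (assumed ~20%)
    """
    bytes_needed = params_billions * 1e9 * bits_per_weight / 8
    return bytes_needed * overhead / 1e9


# A Q4 13B model fits comfortably in 40 GB of RAM...
print(round(approx_model_ram_gb(13), 1))                       # ~7.8
# ...while an unquantized fp16 47B (Mixtral-class) model would not fit in 64 GB.
print(round(approx_model_ram_gb(47, bits_per_weight=16), 1))   # ~112.8
```

This is why quantized builds are usually the only practical option on consumer hardware, and why adding RAM alone only helps until memory bandwidth and CPU throughput become the bottleneck.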
How do I go about adding these agents to an existing ollama instance running in Docker? I have been using AnythingLLM, which has pre-built agents, but I have no idea how to add my own for it to use.
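A hedged sketch, assuming your container is named `ollama` and publishes the default port 11434: models live inside the container, and outside tools (CrewAI, AnythingLLM) talk to it over HTTP.

```
# Pull a model inside the running container
docker exec -it ollama ollama pull llama3

# Verify the API is reachable from the host (lists installed models)
curl http://localhost:11434/api/tags
```

Agent frameworks then only need the base URL `http://localhost:11434` in their Ollama (or OpenAI-compatible) LLM config; the agents themselves live in your framework's code, not inside the container.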
You chose mixtral, but that is 47B parameters. I thought you said you were on a Mac and that wouldn't work?
Damn, it would be amazing if this had a nice UI. The first company that implements some eye candy — an easy-to-understand, working UI — will skyrocket instantly in this space.
I'm pretty tech savvy and I have no issues installing and running this, but I'm too spoiled to watch a wall of text every time when I use it.
Totally agree with you. It badly needs someone to create a GUI that my mom could use. That would be worth a ton of money.
@@roccovergoglini7670 Yep, Steve Jobs was one of the biggest geniuses in the world not because he had a great engineering mind, but because he was able to see things through the eyes of the everyday layman and put complex technology to use for the everyday user.
Right now these agents are limited to more engineering-minded people like us. Until there is a nice, sexy GUI that most people can use, we won't see big adoption or use-case breakthroughs.
One of the biggest reasons Stable Diffusion gained traction is that tons of easy-to-use GUIs popped up (from A1111 to Fooocus).
Until then, agents will be a niche buzzword.
Yep, this is like being back in the day using DOS. We need someone to invent Windows for this stuff. It's ridiculous that it takes programmers so long to figure out that it's not about them.
@@slddive9025 You nailed the analogy to Windows. That's when the breakthrough might come. But I suppose this is all still in its infancy.
@@roccovergoglini7670 100% agree
Don't use a 50B model and then tell us we can do this with a 1B model. Small models suck for agents. I'm still trying to get a 7B model to work and I've had no luck with crewai, langchain, autogen, etc. I'm having to build my own from scratch.
I'm using llama 7B on my Pentium dual-core, 16GB RAM desktop. It works well actually, only it responds very late, around 20-40 min.
Well, that sounds like... what's the point?
@@b.d.y.k.1081 That's the main reason I prefer Groq for all of this. It's still free lol
How do I give my agent Google Serper access so that it can search the internet? Please help.
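Most agent frameworks let you wrap a plain function as a tool. Below is a minimal standard-library sketch of a Serper-backed search call; the endpoint and `X-API-KEY` header follow Serper's public REST API, but treat the details as assumptions and check their docs. The request builder is split out so it can be inspected without hitting the network.

```python
import json
import urllib.request

SERPER_URL = "https://google.serper.dev/search"  # Serper's search endpoint


def build_search_request(query: str, api_key: str) -> urllib.request.Request:
    """Build the POST request Serper expects: JSON body, X-API-KEY header."""
    return urllib.request.Request(
        SERPER_URL,
        data=json.dumps({"q": query}).encode("utf-8"),
        headers={"X-API-KEY": api_key, "Content-Type": "application/json"},
        method="POST",
    )


def web_search(query: str, api_key: str) -> dict:
    """Run the search and return Serper's parsed JSON response."""
    with urllib.request.urlopen(build_search_request(query, api_key)) as resp:
        return json.load(resp)
```

You would then register `web_search` as a tool on your agent. CrewAI's tools package also ships a ready-made `SerperDevTool` that, as far as I know, does essentially this for you.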
Does the 'llama' model name need to include the '3'?
How do I give this access to the internet?
That's what I want.
What type of Mac Book would you suggest I get to run everything locally? What memory requirements as well?
They don't make one yet. The chip technology jump is crazy; save up for 9 more months, the M4s are going to be a huge leap.
I have a M2 chip 2022 mac air and it can’t run Ollama fast enough to be useful
@@qAidleX great insight
If you're gonna buy a Macbook go with an M3 - they have integrated Neural Engines shared between the CPU and the GPU, which are made exactly for tasks like running AI inference
Choose an option with at least 32GB of RAM, ideally 64
@@DavidOndrej thanks Dave. I signed today for your class too.
Will agents still make sense as a business opportunity in a few months when Google and Microsoft start building their own agents that are integrated directly into the OS?
Nope, and Microsoft already has an OS with agents.
Yes, they will. But I foresee them being more profitable if you are selling a B2B product focused on specific domain problems.
My main problem using CrewAI with local models is that they have a very hard time using tools. What model do you recommend for tools / function calling?
From my experience, making local models behave requires creating custom prompts, which is tough when using pre-made agent frameworks like crewai and autogen. If you can find and modify the prompts they're using, you'll have a chance. My strategy is to build my own agents so I have control over their inner workings.
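The "control the prompt yourself" approach can be as simple as a template function. Everything below — the role, the tool list, the JSON reply convention — is an illustrative sketch, not any framework's actual prompt format.

```python
def build_agent_prompt(role: str, tools: dict[str, str], task: str) -> str:
    """Assemble a system prompt for a local model acting as a tool-using agent."""
    tool_lines = "\n".join(f"- {name}: {desc}" for name, desc in tools.items())
    return (
        f"You are {role}.\n"
        f"You may call exactly one of these tools:\n{tool_lines}\n"
        'Reply ONLY with JSON like {"tool": "<name>", "input": "<string>"}.\n'
        f"Task: {task}"
    )


prompt = build_agent_prompt(
    role="a research assistant",
    tools={"web_search": "search the web for a query"},
    task="Find the latest Llama release notes.",
)
print(prompt)
```

Because you own the template, you can tighten it per model (smaller local models usually need much stricter output-format instructions than GPT-4-class models) instead of fighting a framework's baked-in prompt.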
Maybe it would have been helpful to clearly state the minimum system requirements.
It could have been an interesting video. But all those windows opening and closing, and all this rush to explain... after 5 minutes I gave up.
My laptop is a Core i3 from 2010, poor thing. What is the minimum hardware needed to set up all these apps and LLMs?
afaik, min $2k with a proper AI GPU
As a rule of thumb: models smaller than 7B can fit into about 8GB of RAM, so if your laptop has 16GB of RAM you should be able to use 'ollama run llama2:7b', for example.
Did you check out Pearsonai? It's crewai and autogen combined.
Link? All I see is the edu company
pip install crewai doesn't work. It says "command not found: pip". Does anyone know what I'm supposed to do?
You don't have pip installed, or it's not on your PATH. Try `python -m pip install crewai` (or `python3 -m pip install crewai`), or find where Python is installed, open a terminal there, and use pip.
Thanks. Your thoughts on Mind Studio?
I'm going to take you up on your AI agent building offer. I just dove into the intro to Python via Harvard's CS50 (excellent and free), so my head is jammed for the next several weeks, but this is where it's at, no question. I want to be fully versed; I am very clever and will use it to great effect.
Big fan of you and your show.
and now we have llama3
I LIKED MY COMMENT 😂😅
Very cool tutorial. I got to the end, and I think this may be the downfall of my laptop. An i7 with 8 cores (and HT), 32GB of memory, and an Nvidia RTX 3500 GPU just aren't good enough.
source code?
Build anything with Agents?
Can you build a 2D platformer game with 150 different levels, animations for all sprites, great level design, amazing player controller script for top-tier platformer, several types of menus (beautiful UI), beautiful consistent art, fitting music for the game, bug-free, high-performing popular game, compatible with many game engines, with agents?
If you can have Agents do Game Dev professionally then you'll have my attention.
imagine recreating sword art online
Brb downloading some RAM for my 8 GB MacBook
But is crewAI free?
yes
Pydantic is telling me that crewai's Agent doesn't have an argument "model", but it does have "llm".
If you actually watched the video instead of commenting at the first hiccup, you would've learned that.
@@toyvo Thank you for pointing out my flaws, Comment Jesus. You have swooped in to save me from my sins and I thank you from the bottom of my heart… but alas, I have fallen.
@@stonedoubtwhat other sins? homosexuality?
What kind of machine are you using xD are you rich?
RAM isn't that expensive, bro.
@@Noqtis I'm not up to date on hardware stuff. But I also prefer notebooks/laptops.
You guys have heard of the YouTube pause function, right? 🤔
You can even slow down the playback! 🙂
56 GB???
I recognize this is good content. I’m still just too dumb to follow it
How the hell do you have a Mac with 128GB RAM??? 😮😅😅
I bought it
would be cheaper to rent than own tbh.
@@DavidOndrej You must really love Apple 😂😂😂 I built my Linux machine with an RTX 3090 and 128gb ram for under half of that.
Another good video that turns out to be useless for the majority of people (non-coders). Thanks for the effort, but you lose us in the first minutes. I wish content creators would think this through step by step to make sure everyone can follow.
Sorry David. Don't say that you teach when you are running through this like a cheetah, where no one can follow. Feels like you are fishing for people to join your training. Also, posing with good results by using a huge model doesn't make it easier for people running on low-end computers. Make it more accessible ;)
I think he's really trying to sell his "community" support. At $77 a month, it's not going to appeal to many people. I have lots better ways of spending my hard earned cash.
I beg to differ. I enjoyed the fact that for once it was fast. Nowadays a lot of tutorials don't go deep enough because they start from scratch, or else they are too detailed.