*UPDATE:* Thanks to viewer @tryingET's great suggestion, I managed to improve the prompts and make the output consistent. You can check out the improved version on my GitHub.
Thanks for watching and I'm curious, what were your experiences with crewAI like?
Thanks to you, I will start this in my home lab :) btw good job, waiting for more content. I see a ring on your finger, does that mean it's too late?
Top ASMR experience👌
Thanks for this deep dive! The problem with open source templates is that they don't handle function calls, which are necessary for the crew to function. It seems that OpenHermes handles it well and the scripts work as expected, but even GPT-3.5 gives better results. Thanks again for sharing.
i have to agree @@TheGalacticIndian
@@tryingET This was just a great suggestion! I just ran the script a couple of times, and you're right, the results are much more consistent. I only changed the part with "(linkToURL)"; for some reason it was throwing the agent off, but it works with a simple "(link to project)". I'll update the repo, thanks a lot for this help 🙏🏻
crewAI creator here, so cool to see videos like this! I also automated parts of my work with crewAI and it's an "a-ha" moment for sure! Great content! keep it up 💪
Wow I was trying to find your video my guy… someone posted a tutorial with your video and didn’t link you, great videos @BuildNewThings
Great program! Any chance we will be able to use memgpt with it?
Is it possible to run them in parallel instead of sequentially (maybe via threads)?
The idea is that there's a manager who, of course, manages the worker agents (researchers, writers, ...). The worker agents then hand off their work to analyst agent(s) who determine if more work needs to be done; the result is then handed to the manager, who passes the to-do work to task-creator agents to define what else needs to be done based on metrics from the analyst(s). That is then brought back to the manager, who assigns tasks to agents based on available workload (an agent workload queue would be cool). Also, dynamically "spawning" (instantiating) agents based on need would be cool too, to conserve resources. Maybe some features are still missing in crewAI to do that - what do you think?
Huh wat? The video eventually comes to the conclusion that the results are useless and basically a waste of time. ☹️
@@themax2go Not at the moment, it only does sequential execution; maybe pair it with AutoGen.
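(A minimal workaround sketch, assuming you're fine with launching separate, independent crews from Python threads rather than true parallelism inside one crew; the roles, tasks, and topics below are hypothetical, and it uses the early crewAI Agent/Task/Crew API - newer versions may also require an expected_output on each Task.)

```python
# Hypothetical workaround: run two independent crews concurrently with threads.
# Each crew still executes its own tasks sequentially internally.
from concurrent.futures import ThreadPoolExecutor

from crewai import Agent, Task, Crew

def run_crew(topic: str) -> str:
    researcher = Agent(
        role="Researcher",
        goal=f"Collect key facts about {topic}",
        backstory="A diligent analyst who summarizes findings concisely.",
    )
    task = Task(
        description=f"Summarize the current state of {topic} in five bullet points.",
        agent=researcher,
    )
    return Crew(agents=[researcher], tasks=[task]).kickoff()

# Launch several crews in parallel; results come back once all threads finish.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(run_crew, ["AI agents", "local LLMs"]))
print(results)
```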
Literally every LLM video
Title: "I automated everything"
Video: "Wow no model can understand the task"
Still a long way to go!
When humans take their organic brain inside their skulls for granted...
Even if we could reach Singularity, we will always be n-1 away from the 'n'th civilization that Created our n-1 universe. This is the fundamental limitation of pixel based evolution. Hence, we would go a longer way being organic hackers than materialists.
@hidroman1993 A little late to the party. Has it gone any better by now?
@@syedirfanajmal No, lol
This is by far the most clear explanation I've found on agents, how to use them and how to run them locally. Congrats!
I hardly comment on YouTube channels. But this is another level. The way you explain things, the organization, referencing, and pace are spot on. Great content. Please keep up the great work. Thanks :)
I'm relieved to find someone else who faced challenges while running local models. Your sincere and practical review is appreciated. Unlike many others who simply join the hype train without discussing their struggles, your honesty is refreshing. Thank you.
Try Ollama or GPT4All. Anyway, to run a SOTA model you will need a GPU or Apple silicon.
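(A minimal sanity-check sketch for getting a local model responding, assuming the Ollama daemon is installed and a model such as mistral has already been pulled; the LangChain community wrapper shown is just one of several ways to call it.)

```python
# Hypothetical sanity check: call a locally served Ollama model once.
# Assumes the Ollama daemon is running and `ollama pull mistral` was done first.
from langchain_community.llms import Ollama

llm = Ollama(model="mistral")
print(llm.invoke("Explain in two sentences what an AI agent is."))
```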
Me too. Local models, besides being a pain in the butt and freezing your whole damn dev environment, then give you a shitty output. I'm surprised she tested so many models; I would have given up much faster.
Not enough people talk about this. I follow the steps and run into error after error. Versions don't match, missing dependencies, the list goes on. AI development really takes a toll on your life force lmao
I only tested one model weeks ago and I had no issues installing it and running it. It's uncensored and I was curious. Outputs were great. If you have at least 16GB RAM, there's no way a 7B model will crash your PC. Of course it's slow as fuck, like 1 word per second generation.
@@VizDope You assume you have nothing else running, dude; how do you develop this way? You need at least 64GB just to feel something!
First time I've encountered your channel/videos, and you immediately got a subscribe from me. I love the clear explanation, straight to the point, no overpromising or "Make $3000 a day with these 10 simple bla bla". Thank you for keeping things real and useful. Breath of fresh air!
It's shocking to me how quickly someone made an AI to do this, as I created my own autonomous agents in Python a few months back to do similar things.
One tip I have for people trying to come up with large & detailed tasks/descriptions is to write them in a .TXT file and then reference it in your code. That keeps the code clean and also makes it easy to modify the descriptions and tasks in the future without changing anything in the code.
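(A minimal sketch of that tip, assuming crewAI-style agents and hypothetical prompts/*.txt files; newer crewAI versions may also require an expected_output field on the Task.)

```python
# Hypothetical sketch of the tip above: keep long agent/task descriptions in
# .txt files and load them at runtime, so prompts change without code edits.
from pathlib import Path

from crewai import Agent, Task

def load_prompt(name: str) -> str:
    # e.g. prompts/analyst_goal.txt sits next to this script
    return Path("prompts", f"{name}.txt").read_text(encoding="utf-8").strip()

analyst = Agent(
    role="Market Research Analyst",
    goal=load_prompt("analyst_goal"),
    backstory=load_prompt("analyst_backstory"),
)

research_task = Task(
    description=load_prompt("market_research"),
    agent=analyst,
)
```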
Excellent idea! You could even create services to update it from a GUI without touching your code. I'd probably set it up like a traditional JSON config file.
How do I use a .txt file from my laptop and feed it as input to the LLM? Please guide.
@@DAN_1992 I don't know the answer, but could you just type that question into ChatGPT and it'll tell you?
Very good and clear explanation. I rarely comment, but when I do it's because it's worth it. So GPT-4 was the best model for all the tasks.
This mirrors my experience with local models vs automation. I've come to the conclusion I either need to massively upgrade my hardware or just wait it out for a new breakthrough model. I'm a bit jaded with all the hype that never seems to live up to real-world use.
totally agree
Welllllll, yes and no. I've been able to triple my productivity and I got 100% on two separate essays using AI to workshop some ideas and build the essay outlines.
We already have "Agency Swarm" in our business so there is no need for CrewAI or Autogen for sure. Try it out.
Love this. Really transparent and talking about limitations of models..
🎯 Key Takeaways for quick navigation:
00:41 🧠 *The video discusses the differences between System 1 (fast, subconscious thinking) and System 2 (slow, deliberate thinking) in the context of AI capabilities, highlighting that current language models primarily operate on System 1 thinking.*
01:48 💡 *Introduces two methods to simulate System 2 thinking in AI: "Tree of Thought" prompting and the use of platforms like CrewAI, which enable the construction of custom AI agents capable of collaborative problem-solving.*
03:26 🚀 *Outlines the process of setting up AI agents using CrewAI, emphasizing the importance of defining specific roles, goals, and backstories for each agent to ensure effective task execution.*
07:22 📈 *Describes how AI agents can be made more intelligent by granting them access to real-world data, and how to avoid fees and maintain privacy by running models locally.*
14:31 💸 *Discusses the cost implications of using models like GPT-4 for AI-driven tasks and explores local models as a more cost-effective and private alternative, despite their varying performance and capabilities.*
Made with HARPA AI
Maya, have to say - that was in-depth. Love the detail. Expensive running GPTs with GPT-4. The output is definitely worth it though. I guess getting your own custom newsletter every day for less than a dollar does save research time. The next step is to get that file data into the actual newsletter now. Cheers for the free resources on LangChain. Just getting into APIs with Python and deployment apps. I predict that you will have a great future on YouTube. Keep up the good work.
everybody can do it.. then opens a terminal and starts typing 😂.. loving this video already.. can’t wait till we start pip installing stuff
some encouragement is not bad ;) but also, the creator of crewai is already working on UI, so hopefully, people won't be pip installing for too long
😂 yea that install she skipped at the beginning would have been helpful for people like me, luckily I have GPT4 😅
Thanks for the refreshingly honest results rather than the usual fake hype. It looks to me that LLMs have a long way to improve before autonomous agents can become actually useful.
One of the best, honest reviews - love this! Thank you!
Huge thanks for such detailed, well structured and illustrated information! The best video I’ve watched on AI so far.
Good Job Maya :) I thought about many ideas while watching your video!
I used this same method, only through Google Apps Script and integrated into spreadsheets. Nice work
That sounds interesting. I know Apps script better than Python and I want to work with sheets. Would you share your work, or some of your insights?
@@lausianne sure. I have a few videos on my channel and if you shoot me an email, I can share the code I used
This was an awesome video Maya. Thank you so very much for the wonderful and very helpful information! 🙏
Excellent explainer... Congrats and keep going !!!! Hello from Dominican Republic.
Thank you for your clear and lucid explanation about CrewAI !!!
I want to thank you for this video! It is one of the most informative videos I have seen. Now that I watched it through I realized you really did your homework. I appreciate it.👏
Super work there Maya. You earned a subscriber, and a follow. I built an agent on node with run tools with custom functions over RAG. But this is next level only, will try this next. Thanks again. Keep shining
Really nice work, friend. Nice narrative style and prosody while pursuing such a structured goal. And it's definitely awesome to listen to the discussions on the topics and the decision making; this marks the difference. ... For trivial coding there is AI and the Internet... for the core reasons and concepts there is us, the humans.
I had some unsalted Pringles, last week. This week, I had a Four Loko Gold
I have learned a few very important things from your video. Thank you for an amazing video 🙏
Great video, Maya! Please keep creating more valuable content about agent creation.
This is incredible. Thank you so much for sharing. Very inspiring!
Damn, you really are DOING THE WORK and then reporting back to us for free, dude !
Thanks so much for such gem. Much appreciated !
INCREDIBLE CONTENT. You just got a new follower
This video is fantastic! The content is thoroughly explained and incredibly helpful for professionals in the IT space. At Xerxes, we truly value such clarity and effort. Keep up the great work-looking forward to more insightful content like this!
Appreciate the way you've explained the difference with the analogy of "Thinking, Fast and Slow"
Amazing video thanks for the insights.
This is awesome! Building a team of AI agents that can access real-world data sounds incredibly powerful.
Am I the only one who appreciates a good educational video that has no overly hyped reactions?
Thanks for testing local AI models, I had a similar experience
This is a great video!! THANK YOU!
What about fine-tuning a local model to perform better by training individual agents?
Copywriter: trained on how to write effective copy
Proofreader: trained to review the copy for edits and suggestions
Project Manager: review work against requirements
Marketer: brings in marketing experience for evaluating concepts
Researcher: skilled in researching against the requirements and working with the marketer.
Risk Manager: identifies risks and how to mitigate them
Venture Capitalist: reviews the project and provides feedback on how to get funding.
Can you have a router with crewAI? It would start with the PM to scope the work, assign tasks, and validate deliverables.
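(A rough sketch of how per-role models could be wired, assuming each fine-tune is served locally through Ollama; the model names below are placeholders, and as far as I know allow_delegation on a manager-style agent is the closest built-in thing to a router in crewAI.)

```python
# Hypothetical sketch: one locally served (fine-tuned) model per role.
# Model names are placeholders for whatever fine-tunes you create in Ollama.
from langchain_community.llms import Ollama
from crewai import Agent

copywriter = Agent(
    role="Copywriter",
    goal="Write effective marketing copy for the product",
    backstory="Trained on persuasive copywriting.",
    llm=Ollama(model="copywriter-ft"),    # placeholder fine-tuned model
)

proofreader = Agent(
    role="Proofreader",
    goal="Review the copy and suggest concrete edits",
    backstory="Specialized in editing and style review.",
    llm=Ollama(model="proofreader-ft"),   # placeholder fine-tuned model
)

project_manager = Agent(
    role="Project Manager",
    goal="Check every deliverable against the requirements",
    backstory="Keeps the crew on scope and on schedule.",
    allow_delegation=True,                # lets the PM hand work to the others
)
```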
I had the same idea about fine-tuning LLMs for specific tasks. Since the agents are created with prompts (afaik), I think it makes sense to fine-tune LLMs for those prompts. Maybe there is already a dataset for that. If someone knows about it, please share where it is.
Really good foundational information and great content. Thank you!!
Absolutely Stunning Maya, Thanks for sharing these golden information.🤩
Wow. Thank you for such a great video and for sharing these insights. Really good.
this is so in-depth and really appreciate your hard-work and dedication Maya.
Ha, never heard anyone mention Thinking, Fast and Slow anywhere - that is a great book
You look like a creator with 500k+ subscribers, great job and great video btw
thanks a lot :)
Many thanks, you just saved me a lot of time trying to figure out if a compound model with agents will solve a particular problem. I have basic Python knowledge and that was going to eat a lot of my time. So thank youuuuuuu! ❤
That's the most mental use of a boom arm I've seen anywhere.
😆
Commented for creativity, and the engagement boost.
This is awesome Maya, thank you for sharing.
This is an incredible guide!!! Thank you so much for making this video
You just won a New Subscriber, great video 🎉
This was probably the most helpful video I have ever watched.
I’m not a coder but want to learn how to build and tinker with these. Thanks for the clear explanations!
Great presentation of the use cases and functionality. Thanks, Maya! 🙂
This is one of the best videos on ai assistants.! 🎉 thank you Maya!
Thank you for the video. At the moment, I am experimenting with CrewAI and AutoGen (the latter uses the cheaper GPT-4 Turbo) - these tools are improving every month. In practice, I still achieve better results when I closely collaborate with LLMs - but who knows, in 6-12 months it might be possible to fully automate my workflows.
thanks for the feedback! that's interesting, I also can only automate the parts of my work that require processing big amounts of data. but who knows what's going to be possible in 6-12 months!
@micbab-vg2mu Could you share insights into where you create your workflow for optimal results? I'm curious to know if you have any specific advice or insights for optimizing your workflows with LLMs? Any tips you can share would be appreciated!
Hello, could you share your thoughts about crewAI vs Autogen? Which one provides better results? Maybe which one is simpler to use? Or which one gives more opportunities?
Adam - both methods are very simple to use (I don't have an IT background and I manage fine). If you're planning on open source, I recommend CrewAI; if GPT-4, then AutoGen2. Even though the workflows aren't perfect, they're worth knowing :) @@aszmajdzinski
I just discovered your channel, beautiful concept, all the best insh'Allah
Long time no see... A lot has happened since. I was expecting a video.
Side note: when showing something like the "AI agent landscape", it would be neat if there was a reference for where to find it. (There was enough info to do so, but having it alongside the repo would be sweet.)
good callout, thanks! just included the link in the description box!
It's quite possible that GPT-4 is much more adept at understanding the premise of function calling, as it likely has a fine-tuned expert in its MoE to deal with "GPTs", thus making it more capable when dealing with OOTB solutions like CrewAI et al. I'd hazard that only once someone fine-tunes an open-source model with a variety of function calling methods, and tools like CrewAI move on to more dynamic conversation flows rather than just sequential ones, will we begin to see the benefits of offline multi-agent setups.
Hello, just found your channel. I was expecting some mediocre video under a somewhat clickbait title, but I quickly realized this was some actually interesting content, and I am quite impressed by this thing. Will definitely give it a try, although I can already see the terrible social and economic impacts of using it in the real world in enterprises.
PS: at the end, it sounds to me like what you did to get your result was overfitting.
thanks a lot :) yeah you're right, I didn't even think about it, but overfitting might be the problem!
Yes well said definitely got my attention even clicked the bell for the fourth time ever. Funny intro then actual genius-like content behind it. This channel must be AI already that's the only possibility.... ;]
Thanks for the info. Didn't even know about CrewAI.
00:28 The inner dialogue shows the two systems of thinking: slow, deliberate system 2 vs. fast, automatic system 1. 🤔
01:36 Some people found two ways to simulate system 2 rational thinking in AI: tree of thought prompting and platforms like CrewAI. 💡
03:13 I’ll set up 3 AI agents on CrewAI to analyze my startup idea: a marketer, a technologist, and a business consultant. 👥
05:42 I defined specific tasks for each agent: analyze market demand, make product suggestions, and create biz plan. 📝
08:12 Making agents smarter is easy - add tools to give them access to real-time data like Google and Wikipedia. 🧠
11:06 I wrote a custom Reddit scraper tool to get higher quality info for my agents. 🤖
15:25 I tested 13 local AI models - most failed, but Llama-13B worked best to understand the task. 📈
Supported by NoteGPT
As I loved the video, I'm going to create my agents 🕵♀; Thank you Maya! 😄
I have been considering doing this. Thanks Maya.
Maya, wonderful! I love learning from you. About local models, I bet Nous-2-Pro-7B could do a good job but have yet to try it. Keep up the good work!
Their efficiency in handling tasks like data processing and research is astounding. Have you ever attempted to coordinate several AI agents with SmythOS?
great! so much of ur tests and trials, thank you for sharing all
Great video! Thanks for all your hard work!
Thank you for your great research and video. Concerning Daniel Kahneman's System 1 and System 2, the French neuroscientist Olivier Houdé proposes a continuation of this theory based on the latest neurology discoveries. He published a short book called “L’Intelligence humaine n’est pas un algorithme.” that is easy to read and understandable, and as it is short, it might be easy to translate to English with an LLM.
Excellent review. New subscriber earned 😊. Would be interesting to see your take on AutoGen Studio and compare it with CrewAI.
crewAI UI is coming 😎👉👉
Actually a great video to start with AI agents, thanks
My understanding is that using a larger quantized model works better. I’m planning on trying it soon on my computer, maybe with AutoGen. I’ve got a 4090, i9-10900K, and 64GB RAM, so I’m hoping I can run maybe a 30B quantized model on it.
I read that the ~5-bit quantized models are the sweet spot that reduces your memory footprint without any significant loss in quality of responses. 4-bit is still good but takes enough of a hit to matter. Again, I haven’t tried it myself, so maybe I’m mistaken, but that’s what I read.
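(A minimal sketch of loading a ~5-bit GGUF quant with llama-cpp-python, assuming the library is installed with GPU support; the file path is a placeholder for whatever Q5_K_M model you download.)

```python
# Hypothetical sketch: load a ~5-bit quantized GGUF model with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="models/mixtral-8x7b-instruct.Q5_K_M.gguf",  # placeholder path
    n_gpu_layers=-1,   # offload as many layers as fit on the GPU
    n_ctx=4096,        # context window; raise it if memory allows
)

out = llm("Q: What is 5-bit quantization good for?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```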
Great video and LLM review.
Impressive video. The Reddit scraper, thinking about changing Ollama settings :D I'm sad your system limited your choice of LLM. But now I'm really motivated to try it on my system, to test Mixtral.
Great work! I'll try it out tomorrow! 👏
Great video! Congrats!
On 8GB VRAM I had no problem running 10B or 13B models, though I run Q5 GGUF quants. On 24GB VRAM I am able to run a 70B Q4 GGUF. For slow tasks it's an acceptable speed.
title: How I Made AI Assistants Do My Work For Me
reality : agents didn't understand what the task is......
I like your setup and vibe keep this good work up
Watching from BerylCNC... what's your opinion on using CrewAI or AutoGen versus creating a GPT within OpenAI, and providing instructions that frame out the functionality in a similar way? My development work is mostly related to CNC tool path utilities. LLMs do a poor job of inferring and understanding geometry, so I have to bake in a lot of rules and math. GPTs seem like a cool way to get noticed, but I really need to include libraries and Python code. One of ours is called "Beryl of Widgets", and it helps makers figure out what to make and sell with the tools they have available. It could be so much more with CrewAI, I think, but then I need a way to deploy it. Great content, thank you!
It's very calming to listen to you. Doesn't happen with technical videos a lot. Great vid!
Sometimes it's faster and more efficient to do the work (write the business plan or blog) yourself as a human rather than spending time and programming AI to do it. But I'm old, with an efficient creative mind. But, good video.
It's just a tool. Simple old human is still the best to get work done!😂
Thanks a lot for this video. I like it. It's very useful. :)
Wow! Very good video! So much GREAT info - thank you!
Amazing video 🤘
Will be learning more about this CrewAI. I have been using ollama to run LLMs on my Raspberry Pi5. I really need more Pi5s to run in parallel.
Excellent work... keep going and soon you can just go to bed and stay there for the rest of your life. Amazing!
Your channel is really good! Eager to see more of the AI content.
Thanks for making this type of content. I like it.
Thank you! This content is so good.
WOW, incredible value... your work is enterprise-grade and enterprise-ready. Thank you so much for this huge work. I'm a software engineer (Java, Unity..) passionate about programming and I love this practical video, this is amazing... thanks so much, beautiful Maya!!❤ Subscribing!!!!
Hi Maya! I’m so fascinated by your technical abilities. I barely understand the whole thing, but I’ve always been fascinated by AI and started learning AI tools. I saw on your TikTok that you taught yourself Python. It would be awesome if you could also share your learning process coming from a non-tech background and how your progress has been so far. Thank you :)
Thanks! It's a great idea, I'll definitely make a video about that :)
This is like a book in one video, thank you so much! Just curious, but how would you compare the latest autogen studio to crew ai? Lots of wonderful ideas here and beautifully presented, thank you so much for publishing this, you are indeed a knowledge sharing master and the world needs more intellectual contributions like this. Thanks again!
thanks a lot! I'm working on an AutoGen Studio video and I'll compare it to crewAI
Great content and overview Maya. Thanks for sharing.
Wow, what an intense research on the topic! Thank you for sharing this info with us 🙏
Tremendous! incredible research and findings ! THx
I watched Matthew Berman’s video on AutoGen, what made you decide to use CrewAI and have you tried/compared it to AutoGen?
Wow, what great research and presentation. Thanks for sharing.
Very informative, thank you. Something like a carpet or room dampening could help the sound quality of the stream. Thanks again.
I just started to learn about AI and... I barely understand what's going on here, but I'm fascinated :). I was thinking about the possibility of creating these kinds of agents/assistants for tasks like searching for information about specific topics online. I will follow you :)
great, very useful & interesting. Thanks a lot for your great video