This dude is a humble and cool CEO
I know this is a Crew video, but it inadvertently turned into the best advertising Cursor could ever ask for. Insane.
Great to see an up-and-coming creator on the channel!
I'm commenting as I watch, so this may already have been answered.
For longer replies, mentioning "long-form" in the prompt helps.
But what really helps is a pipeline like this (a sketch follows the list):
Agent 1 brainstorms the overall structure
Agent 2 takes that structure and fleshes it out in detail
Agent 3 iterates through the structure and fills in each section, keeping a running summary of what has been written so far (for context)
Agent 4 reviews the combined output for consistency
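A minimal sketch of that four-stage pipeline in plain Python; the model name, the prompts, and the one-section-per-line outline convention are all assumptions for illustration, not anyone's actual setup:

```python
# Hypothetical sketch of the four-agent long-form pipeline described above.
# Model name, prompts, and one-section-per-line convention are assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def call_llm(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def write_long_form(topic: str) -> str:
    # Agent 1: brainstorm the overall structure
    outline = call_llm(f"Outline a long-form article on '{topic}', one section per line.")
    # Agent 2: take that structure and make it detailed
    plan = call_llm(f"Expand each line into a detailed section goal, one per line:\n{outline}")
    # Agent 3: fill in each section, carrying a running summary for context
    sections, summary = [], "(nothing written yet)"
    for goal in filter(str.strip, plan.splitlines()):
        text = call_llm(f"Write the section: {goal}\nSummary of what was written so far:\n{summary}")
        sections.append(text)
        summary = call_llm(f"Briefly summarize all content so far:\n{summary}\n{text}")
    # Agent 4: review the joined-together output for consistency
    return call_llm("Review this draft for consistency and return a corrected version:\n\n" + "\n\n".join(sections))
```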
Nice. I see it as useful to think about this agentic workflow in terms of its context: what it is bringing in, and the qualifying agent's understanding of what the final outcome is supposed to be.
@worldbridgerone What I find really useful when architecting agent frameworks is basically trying out the steps with you in the driving seat.
That way you get a good grip on which prompts work well.
Once a few iterations reach the standard you want, you orchestrate it that way and simply remove yourself from the role of passing the content between stages.
In this particular scenario, I guess it comes down to those models being tuned for shorter replies.
🎯 Key points for quick navigation:
00:00 *🤖 Introduction to Crew AI with Joe*
- Joe, CEO of Crew AI, joins the session to guide the setup and answer questions,
- Host discusses the goal to use Crew AI for creating educational AI content, starting from basic research,
- Initial setup and project context shared, emphasizing Crew AI's utility in producing structured content.
02:01 *🔍 Exploring Language Models and Model Selection*
- Host describes testing different language models, comparing the comprehensive outputs of the o1 models with other models,
- Discussion on balancing model selection with cost-effectiveness and detail generation, with Joe’s input.
05:10 *📑 Structuring Content Generation and Workflow*
- Joe suggests segmenting content creation into “planning” and “writing” stages to improve structure,
- Host plans to apply multi-stage Crew flows for collaborative agent interaction, enhancing overall efficiency.
07:28 *🖥️ Integrating LangTrace AI for Observability*
- Host mentions using LangTrace AI to track process metrics, offering insights into cost and time per task,
- Discussion of benefits of detailed observability in building AI-driven content workflows.
08:22 *🌐 Vultr Cloud Sponsor Segment*
- Overview of Vultr Cloud's advantages, such as GPU accessibility and deployment at scale for machine learning,
- Promotion of Vultr's offer, with a $300 credit for new users for GPU workload testing.
09:31 *🚀 Initializing Crew AI Flow Setup*
- Joe guides the host in setting up Crew AI flows and terminal commands to organize project files,
- Explanation of “flows” for chaining Crew processes, allowing multiple Crews to function in stages (a minimal sketch of the pattern follows).
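A hedged sketch of that chaining pattern, using the `@start`/`@listen` decorators from CrewAI's Flow API as documented around the time of this video; the crew calls inside each step are hypothetical stand-ins for real Crew instances:

```python
# Illustrative sketch of chaining two stages with a CrewAI flow; the
# decorator API matches the crewai flows docs, but the crew kickoff
# calls mentioned in comments are hypothetical stand-ins.
from crewai.flow.flow import Flow, listen, start

class EducationalContentFlow(Flow):
    @start()
    def plan_content(self):
        # Stage 1: a "research" crew would produce the content plan here,
        # e.g. ResearchCrew().crew().kickoff(inputs={"topic": "AI agents"})
        return "content plan from the research crew"

    @listen(plan_content)
    def write_content(self, plan):
        # Stage 2: the "content writer" crew consumes stage 1's output
        return f"article written from: {plan}"

if __name__ == "__main__":
    print(EducationalContentFlow().kickoff())
```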
13:08 *🔄 Organizing Project Files for Flow Management*
- Host follows Joe’s instructions to move Crew components into structured flow folders,
- Joe clarifies directory organization and environment setup, ensuring proper Crew integration.
17:01 *🧩 Configuring Initial Research and Content Writing Functions*
- Host and Joe establish functions for the research and content writing phases, defining how tasks are processed,
- Code adjustments ensure flow interoperability, preparing the flow system for advanced content generation.
20:28 *🖼️ Visualizing Flow Structure with Crew AI Flow Plot*
- Host uses `crewai flow plot` to visualize the flow setup, assessing the flow structure for task collaboration,
- Joe explains benefits of a structured flow system, which simplifies CMS integration and complex data handling.
23:18 *🔧 Setting Up New "Research" and "Content Writer" Crews*
- New "research" and "content writer" crews are created within the educational flow, with separate roles for research and content planning,
- Explanation of folder structure and comparison with previous setup, emphasizing how new setups are more compact and focused.
25:21 *📝 Defining Agent and Task Files for Research Crew*
- Agents and tasks are added to the new "research" crew, using older definitions as a reference,
- Tips on efficiently setting up tasks and refining the output format to ensure each agent performs specific content planning and research.
27:38 *💡 Planning Output Details and Refining Task Descriptions*
- Planning tasks are refined to ensure they include subtitles, a high-level goal, and relevant sources,
- Recommendations on maintaining raw information for improved planning agent performance.
31:57 *🚀 Preparing Flow Kickoff and Import Adjustments*
- Import and setup of the "research" crew, enabling flow kickoff within the main script file,
- Explanation of flow kickoff command adjustments to ensure correct instantiation and parameter configuration.
47:41 *🔍 Error Resolution and Final Flow Execution*
- Debugging process for the kickoff command, including necessary code adjustments and correct instantiation methods,
- Final verification and troubleshooting tips for flow execution using the Crew AI setup.
50:23 *🔍 Reviewing Crew AI Plan Output and Refining Results*
- Generated plan for educational content is reviewed, confirming key sections and sources are included for high-quality output,
- Discussion on converting the plan into a structured object format for improved crew interaction.
53:08 *🛠️ Creating a Pydantic Object for Structured Data Output*
- Host sets up a Pydantic object for the content plan to improve data handling and readability,
- Explanation of how structured data enables better code interaction and seamless integration between crews.
55:17 *🧩 Implementing Educational Plan Object in Main Flow*
- Crew AI’s structured object is used in the main file, allowing looped processing for each content section,
- Plan setup enables the crew to handle each section individually, paving the way for detailed and organized content generation (a sketch of this idea follows).
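A hedged sketch of the structured-plan idea from the two entries above (CrewAI tasks can target a Pydantic model via their `output_pydantic` option); the field names and the per-section crew call are assumptions, not the video's exact definitions:

```python
# Illustrative Pydantic model for the content plan, plus the per-section
# loop; field names are assumptions, not the video's exact definitions.
from pydantic import BaseModel

class Section(BaseModel):
    subtitle: str
    high_level_goal: str
    sources: list[str]

class EducationalPlan(BaseModel):
    sections: list[Section]

def write_all_sections(plan: EducationalPlan) -> str:
    outputs = []
    for section in plan.sections:
        # In the real flow, a content-writer crew would be kicked off per
        # section, e.g. with inputs={"section": section.model_dump()}
        outputs.append(f"## {section.subtitle}\n(section text goes here)")
    return "\n\n".join(outputs)
```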
59:41 *✍️ Defining Agents and Tasks for Content Writing Crew*
- Content writer and quality reviewer agents are added, setting up structured tasks for each to manage writing and reviewing,
- Enhanced agent-task setup ensures the output aligns with the research phase, adding modularity and control over quality.
01:07:09 *🔄 Testing Flow and Identifying Fixes*
- Flow execution begins, verifying that the structured content plan is processed correctly,
- Adjustments are made to crew files and variable handling, ensuring smooth data transfer between research and content writing phases.
01:13:25 *⚙️ Model Optimization and Cleanup*
- Host confirms model and configuration adjustments, optimizing for lower-cost, efficient model usage,
- Old files and references are removed to streamline project structure, improving organization and maintainability.
01:16:30 *🔍 Reviewing and Adjusting Agent Task References*
- Ensuring correct agent references in task files, verifying that each task is linked to its corresponding agent for proper flow,
- Adjustments help avoid mismatches in output, particularly in quality assessment and review stages.
01:18:03 *📝 Testing the Flow and Verifying Output of Sections*
- Crew AI flow is tested, with each section processed by the content writer agent before quality review,
- Each section flows individually, allowing for sequential, manageable content creation that builds the final output.
01:20:08 *⚙️ Structuring Inputs for Content Writer Crew Tasks*
- Adjustments ensure that the content writer crew receives relevant input for each section, enhancing the agent’s ability to produce coherent text,
- By setting the section input in each task, the system can process information consistently, section by section.
01:23:08 *🛠️ Implementing JSON Conversion and Final Content Output*
- JSON conversion setup allows the agent to handle structured data, enabling an accurate text file output of final content,
- Outputs from each section are saved as one cohesive document, simplifying the final review process (a small sketch follows).
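A small sketch of that final step; the `title`/`content` key names are assumptions, since the real keys depend on the Pydantic model used:

```python
# Illustrative: parse each section's JSON output (if it arrives as a
# string) and append everything to one text file. Key names are assumed.
import json

def save_sections(section_outputs, path="educational_content.txt"):
    with open(path, "w", encoding="utf-8") as f:
        for raw in section_outputs:
            data = raw if isinstance(raw, dict) else json.loads(raw)
            f.write(data["title"] + "\n\n" + data["content"] + "\n\n")
```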
01:27:10 *🔄 Cost-Effective Model Selection and Agent Optimization*
- Testing of GPT-4-turbo for agent flows to improve performance while maintaining cost efficiency,
- Consideration for model adjustments based on latency and output quality, confirming that robust agent definitions enable lower-cost models to perform well.
01:33:38 *🔍 Fine-Tuning Agent Output for Readability and Quality*
- Refinements to prompt definitions to improve section readability and avoid redundant phrasing, like unnecessary summaries,
- Emphasis on specific, concise outputs for agents to ensure final content flows naturally without excessive formatting.
01:40:07 *📈 Refining Next Steps for Workflow Improvements*
- Discussion on possible improvements, including adding a web scraper to the research agent for more thorough data validation,
- The second agent could also be equipped with a scraper to ensure data accuracy and reduce hallucination, enhancing overall content quality.
01:41:00 *🚀 Finalizing Content Output and Reviewing Workflow Efficiency*
- Suggestions to increase efficiency by integrating Groq for faster processing once definitions are complete,
- Final output is checked in the educational content file, with notes to potentially extend and improve the content based on readability and completeness.
01:41:43 *🧹 Reviewing Content Quality and Planning Further Refinements*
- Plans to review and enhance content length and detail by refining agent and task prompts,
- Noted approach to iteratively adjust task definitions using Cursor for faster, more refined adjustments.
Made with HARPA AI
This was an awesome tutorial! Easy to follow. I've been following your tutorials for a LONG time and this is by far the most useful one to date. I really appreciate you sharing this with the masses. Keep it up, Matthew!
Loving that these research agents keep improving. So many use cases outside research as well, awesome content!
I know creating and editing this long video was hard; I respect the effort you put into it. Thank you.
This is one of my favorite Crew AI videos. 🤩
Great video, good to see advice direct from the CEO.
I have been watching this channel for a long time now but this video has to be the most inspiring one I’ve seen! Not only on this channel but all of RUclips.
Great video Matt. I appreciate the mostly uncut nature of it. Cool to get down into the weeds with your development process and THE expert on Crew AI.
This was mind-blowing; I was smiling ear to ear with you.
Yes, the CrewAI Agent/Flow Builder UI would be nice. People could use it to initialize teams and flows and then open them in Cursor to flesh out agent behavior and tooling. I had thought that was what the Enterprise tool was, but alas, it's more of an add-on for erecting an API onto a Crew that is already built.
Thank you very much, João and Matthew. Incredible content!
Matthew and Joao... I want to have coffee with these guys.
That would be awesome
CrewAI is so cool !
Wow. That was outstanding. I don't often respond, but I watched the entire thing to the end! My only issue is that I don't program in Python, but I was able to follow along and get the basics. I would love to see a GUI one day (I respect the code though, really well done).
Love this, one of the best I've seen. Great vibe, and a great experience to see the wins and challenges in real time. Plus Joao is so gentle and patient, and what an inspiration, both to use Cursor now and to build out my own teams of crews. I just have to find a way to afford it (yes, the costs are very low, but for me quite literally pennies matter). That said, this is awesome. I was going to glance and then chill, but I remained engaged all the way through and the time flew by.
28:13 Watching Matthew trying to predict his next token. He even backs up and sends again to build momentum :)
This is incredibly amazing and yet so, SO far from what the average power user could even HOPE to do. Hopefully in a year or two, someone who isn't a professional Python expert could do something like this.
"I'll see you in the next one" should really be, "You'll see me in the next one."
Fantastic video, you two work really well together and are really interesting to watch and learn from! Thank you so much, guys!
Great
Folder is a Windows term. Linux and Mac guys, we call them directories. Just saying.
So exciting!!! Wicked!!🎉🎉🎉
I get that same grin when interacting with Cursor on a daily basis. Definitely gonna give CrewAI a shot. I've been using Swarm with Ollama for the time being.
Excellent video btw…exactly what I needed for the project I’m developing now. Thank you!
I honestly do not know what the use case for CrewAI is. I get it, it has agents, researchers, planners, etc....
But what can it do that Perplexity/o1 can't???
Also, your examples are so simple that virtually no one would consider them.
Here is a real-life example:
find me today's stocks that have P/E < 15 and are up > 5%
Can CrewAI do this?
It would be nice to see this applied to app development / code generation.
This channel is the best ever!
Wow, that was fun. Cheers!
I saw that Groq LLM models can be used in this CrewAI application. I wonder how good the results are using those, especially in comparison with the results of the o1 model!
Joe!! Love this episode and the product bullishness! I'm visiting Sao Paulo on the 20th and will probably try to see Joe then.
Creating graph agents from scratch with no frameworks, with edges and conditionals, will give you the best results, along with some good meta-prompting (see the sketch below).
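A minimal from-scratch illustration of that approach, with nodes as plain functions and one conditional edge; everything here is a made-up example, and in a real version each node would call an LLM:

```python
# Illustrative from-scratch graph agents: nodes are plain functions,
# edges are return values, and one edge is conditional. No framework.
def research(state):
    state["notes"] = f"notes on {state['topic']}"
    return "write"                              # unconditional edge

def write(state):
    state.setdefault("drafts", 0)
    state["drafts"] += 1
    state["draft"] = f"draft v{state['drafts']} using {state['notes']}"
    # conditional edge: loop back once, then finish
    return "write" if state["drafts"] < 2 else "done"

NODES = {"research": research, "write": write}

def run_graph(state, node="research"):
    while node != "done":
        node = NODES[node](state)
    return state

print(run_graph({"topic": "AI agents"}))
```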
I think that agent frameworks are still bad, but they will get there. Normal Python scripts (with OpenAI API calls when needed) can still automate this better.
Couldn't you have just called customer support? You had to go all the way to the top? 😂
Very cool video - I learned a lot.
Glad to see I'm not the only one with execution errors lol.
🍀Outstanding!🍀
Yeah! Great!🤩
If AI were to stop progressing now, we would already have enough to keep us going for at least 10 years.
I was kind of hoping that in 1 hour and 42 minutes they would have started a project from scratch rather than shoehorning an old project into the flow-planner way of doing things. It got confusing trying to understand the directory structure and how it describes the agents, the planner, the project, and the Python environment. Once I understand what they were trying to accomplish and make some sense of the directory structure and naming conventions, I can move on to working with the planner. For now, I'm just trying to figure out how to build the directories and why the pieces are where they are.
Watch part 1.
@author Can you help me choose a laptop for AI development? Is a MacBook M3 with 16GB of RAM enough, or is an Intel Core Ultra 9 with 32GB of RAM better? Could you please help me pick between these two options?
Does every developer get an onboarding session with Joao? lol
What extension are you using to generate code (that Cmd+K)?
Not sure if yet another tool makes any difference for your GTM.
CMD-L was set to gpt-4o
Very rich content!
Now we know why Cline never got tested 🤣
@1:10:30 Sometimes you just gotta do it manually… wait, how do I write code again? 😂😂
"joe" lol, Americans really struggle to say Joǎo correctly "Joe-ow"
No GitHub links anymore?
Why not Flowise?
I want a virtual Joao when I'm coding.
Can you run CrewAI on Colab??
Proudly not using best practices for virtual environments and environment management?
Why isn't there a UI for this?
This is merely a feedback loop of system prompts calling a local LLM or an API endpoint.
There are so many other similar git projects that have nothing and do nothing to truly simplify this and make it useful... Editing weighty spreadsheets of prompts?
GO BRAZIL!!!
Please explain, this is super critical for my company. I gave up on LangGraph, and Swarm was hopelessly complicated.
Let me guess, another researcher agent? For the 100,000th time? Boring.
The world is boring if you stick your thumbs in your ears and try to guess everything
@threepe0 Not if I'm right
The channel has been going downhill. Hit and miss.
It needs to focus on one topic style.
Not even close.
Fuuuuuuck
First😂
Hi @matthew, I would like to ask Joe: why interpolate the topic in front of the Agent? Isn't interpolating the topic inside the task enough? (A small illustration of the question follows.)
By the way, thanks for the content.
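For readers unfamiliar with the question, a hedged illustration: CrewAI interpolates `{placeholders}` from kickoff inputs into both agent and task definitions, so the topic can sit in either or both. The role/goal/description strings below are made up, not the video's definitions:

```python
# Illustrative only: {topic} placeholders are filled from kickoff inputs;
# the exact strings below are assumptions, not the video's definitions.
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Researcher on {topic}",                    # topic in front of the agent
    goal="Gather accurate material about {topic}",
    backstory="An experienced researcher.",
)

research_task = Task(
    description="Research {topic} and list the key findings.",  # topic inside the task
    expected_output="A bullet list of findings about {topic}.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[research_task])
result = crew.kickoff(inputs={"topic": "AI agents"})
```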