Llama3 + CrewAI + Groq = Email AI Agent

  • Published: 9 Jan 2025

Comments • 93

  • @deemo16
    @deemo16 8 months ago +11

    Great content! This was my first exposure to groq... the potential here is pretty amazing! Thank you for your perspective and candid explanations that really help to grasp at the "ground truth" of these technologies. I love watching the progress in ML and LLMs, as people collectively explore boundaries and breakthroughs!

  • @jimofthehill
    @jimofthehill 6 months ago

    This is really inspiring. It opens up all sorts of possibilities in terms of document processing and combining it with web search.

  • @kate-pt2ny
    @kate-pt2ny 8 months ago +4

    Thanks for sharing, looking forward to the follow-up videos with LangGraph and Ollama. Thank you for your work.

  • @yellowboat8773
    @yellowboat8773 8 months ago +3

    Thanks for the video, mate. I feel like you're one of only a few on YouTube who actually dive a little deeper into these tools using different examples, other than the standard copy-paste examples we see from everyone else. Appreciate it.

  • @jdallain
    @jdallain 8 months ago +3

    Really looking forward to your LangGraph video. I think it’s the best option, but also the hardest to learn

    • @samwitteveenai
      @samwitteveenai  8 months ago +5

      I do feel LangGraph is more stable and it feels more like programming to me. I agree it is more work at the start to learn etc. I want to try and bake in RAG as a tool with the next one too

    • @ArjunKrishnaUserProfile
      @ArjunKrishnaUserProfile 8 months ago

      Yes, looking forward to the LangGraph video.

  • @caleboleary182
    @caleboleary182 8 months ago +2

    Super cool! Wild to see it run through the whole agent flow in 11 seconds.

    • @samwitteveenai
      @samwitteveenai  8 months ago +1

      Yeah, I kept thinking it was a previous run when I was testing it, until I realized it is just that quick.

    • @clray123
      @clray123 8 months ago

      Not so wild if you're used to llama.cpp rather than Python crap.

    • @JJaitley
      @JJaitley 8 months ago +1

      @samwitteveenai. Excellent video looking forward to the LangGraph video on the same use case.

    • @watchdog163
      @watchdog163 7 months ago

      @@clray123
      I want to do stuff, not read instructions.

  • @DonBranson1
    @DonBranson1 8 months ago +3

    An awesome video focusing on effective use of Groq and Llama3 for agentic workflows. Looking forward to the LangGraph video even more. Some minor issues toward the end of the lab, but still teaches key concepts and capabilities. (couldn't find the CSV file)
    Would be interesting to integrate LangSmith as well to compare answer quality between Mixtral and Llama3.

    • @samwitteveenai
      @samwitteveenai  8 months ago

      Glad you liked it. There is no csv file, not sure what you are referring to there.

  • @jayhu6075
    @jayhu6075 8 months ago +2

    What a great explanation!
    Perhaps next time, you could delve into the topic of constructing an internal knowledge retrieval system (RAG system) for information retrieval, offering an alternative to relying solely on web searches?

  • @MattSimmonsSysAdmin
    @MattSimmonsSysAdmin 8 months ago +23

    This is a reminder, from a human, to other humans who are using this channel to learn how to implement things like this, to not fully hand your brain to AI. Customer complaint emails might be able to be handled automatically, but the humans running the system can't fix things if they don't know what people are complaining about. Make sure to include some feedback mechanism in what you're doing so that humans can maintain observability of the AI system and the world that the AI system is processing.

    • @mysticaltech
      @mysticaltech 8 months ago +4

      Yep, super well said. That's how the flywheel of service improvement works, and AI will make it spin way harder.

    • @zatoichi1
      @zatoichi1 8 months ago

      Yet if the humans observe, for example, hallucinations from the AI, there is no way to troubleshoot that due to the black-box aspect of high-parameter AI. The AI will most likely stop its hallucinations, but perhaps only after two or three (ten 😂?) customers, and then the only solution is prompting; there will never be the reliability of programming or of finding and fixing broken code.

    • @zatoichi1
      @zatoichi1 8 months ago

      Perhaps the answer is bots watching other bots and fixing their mistakes 😅

  • @kepenge
    @kepenge 8 months ago

    Hi @Sam, I can't thank you enough for all the effort you are putting in to share this incredible content. It helped, and is still helping, me build my project around agents.

  • @kazoooou
    @kazoooou 8 months ago +2

    Great job! I'm super excited to try CrewAI. With LLAMA 3, it’s so promising!
    The future is being written now, friends :)

  • @helix8847
    @helix8847 8 months ago +1

    Llama3 is Amazing. I have replaced so many tasks that I used to use ChatGPT for with Llama3 70B.

    • @samwitteveenai
      @samwitteveenai  8 months ago

      Totally agree. GPT-3.5 is looking pretty old now.

  • @SamCodeMan
    @SamCodeMan 8 months ago

    Thanks @Sam Witteveen, it has been very informative, I will start working on my RAG based project now with the help of your colab notebook!

  • @Trashpanda_404
    @Trashpanda_404 8 months ago

    Dude thank you for all of your videos. You’re awesome.

  • @ravipratapmishra7013
    @ravipratapmishra7013 8 months ago +1

    Video is great as always, but this time the thumbnail is awesome.

    • @samwitteveenai
      @samwitteveenai  8 months ago +2

      truth be told the thumbnail is what convinced me to make the video. 😀

  • @Aidev7876
    @Aidev7876 8 months ago +1

    Good video. Can you maybe address SQL-based RAG with Llama 3 and CrewAI? Such as a recommendation system for a product over an SQL inventory, with maybe internet search as a fallback?

  • @BrentBuildsOnline
    @BrentBuildsOnline 8 months ago

    First off, great video - but I have a question and I realize this is a noob question so sorry 'bout it. In the part right around 4:00 you say, "okay here we're setting up our groq api key..." But I'm confused where to put this key. I have the api key already but I can't find where you put the API in the code. Does it go in the os.environ or does it go in the userdata.get? Or both? Or neither? Thanks so much for the video again. If you could help me figure out this part that's the only thing I'm confused about. Thanks.
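The two halves the question mentions usually work together in a Colab notebook: `userdata.get()` reads the key you stored via Colab's Secrets panel (the key icon in the sidebar), and `os.environ` is where it gets placed so the Groq client and CrewAI can find it. A minimal sketch, assuming the secret is named `GROQ_API_KEY` (the fallback value is a placeholder, not a real key):

```python
import os

# In Colab, a secret saved in the Secrets panel (key icon in the left
# sidebar) is read with userdata.get(); putting it into os.environ lets
# the Groq client / CrewAI pick it up automatically.
try:
    from google.colab import userdata  # only importable inside Colab
    os.environ["GROQ_API_KEY"] = userdata.get("GROQ_API_KEY")
except ImportError:
    # Outside Colab: fall back to an environment variable you set yourself
    os.environ.setdefault("GROQ_API_KEY", "gsk_your_key_here")  # placeholder
```

So the answer is: both. `userdata.get` fetches the stored secret, and `os.environ` is where the key ultimately has to live.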

  • @quito96
    @quito96 8 months ago +1

    Llama 3 is amazing for sure, but so is Sam. 😀 Thanks for sharing.

  • @thefutureisbright
    @thefutureisbright 8 months ago

    Hi Sam another excellent tutorial. I've posted it in the crewai discord channel. Thanks Paul

  • @ratral
    @ratral 8 months ago

    @Sam, thanks. That was excellent help, as always.

  • @bartoszludera2604
    @bartoszludera2604 8 months ago +2

    Thanks Sam, but is there any option to connect this Python code with a real Gmail or other inbox?
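The video itself doesn't hook into a real inbox, but Python's standard library can provide one. A rough sketch using `imaplib`/`smtplib` (the host names, app-password login, and helper names are assumptions, not from the video; Gmail requires an app password rather than your normal password):

```python
import email
import imaplib
import smtplib
from email.message import EmailMessage

# Hypothetical account details -- replace with your own.
IMAP_HOST, SMTP_HOST = "imap.gmail.com", "smtp.gmail.com"
USER, APP_PASSWORD = "you@gmail.com", "app-password"

def fetch_unread(host=IMAP_HOST):
    """Yield unread messages as email.message.Message objects."""
    with imaplib.IMAP4_SSL(host) as imap:
        imap.login(USER, APP_PASSWORD)
        imap.select("INBOX")
        _, data = imap.search(None, "UNSEEN")
        for num in data[0].split():
            _, msg_data = imap.fetch(num, "(RFC822)")
            yield email.message_from_bytes(msg_data[0][1])

def build_reply(original, body):
    """Build a reply that threads correctly under the original message."""
    reply = EmailMessage()
    reply["To"] = original["From"]
    reply["Subject"] = "Re: " + (original["Subject"] or "")
    reply["In-Reply-To"] = original["Message-ID"] or ""
    reply.set_content(body)
    return reply

def send(msg, host=SMTP_HOST):
    with smtplib.SMTP_SSL(host) as smtp:
        smtp.login(USER, APP_PASSWORD)
        smtp.send_message(msg)

# Offline demo of build_reply (no network needed):
original = EmailMessage()
original["From"] = "customer@example.com"
original["Subject"] = "Refund request"
original["Message-ID"] = "<abc@example.com>"
print(build_reply(original, "Drafted by the agent...")["Subject"])
# -> Re: Refund request
```

The agent's drafted text would go in as `body`; `fetch_unread` feeds the classifier and `send` dispatches the approved reply.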

  • @yazanrisheh5127
    @yazanrisheh5127 8 months ago

    Hey Sam. Thanks again for the video. I was wondering if, in the next video with LangGraph or the one after, you could show us how we can do the internal RAG you mentioned for production-level apps.

  • @BradleyKieser
    @BradleyKieser 8 months ago

    Great video, added points for clever references to The Beatles.

  • @humzaahmed5608
    @humzaahmed5608 8 months ago

    Thanks for all your uploads Sam your explanations are always amazing! I was wondering if you provide consulting sessions or advice on other AI projects as well, I've got a conversational AI agent that I've been trying to build up for a specific use case in valuations that I would love to talk to you about.

    • @samwitteveenai
      @samwitteveenai  8 months ago

      Thanks for the kind words. Best to contact me on Linkedin for any consulting etc.

  • @felixkuria1250
    @felixkuria1250 a month ago

    When I'm using Groq with Llama 3, I get this error message when I execute the agent:
    ERROR:root:Failed to get supported params: argument of type 'NoneType' is not iterable
    What am I doing wrong?

  • @alberjumper
    @alberjumper 8 months ago +2

    This is not an agentic flow, this is just a regular pipeline built with CrewAI.

    • @landob6393
      @landob6393 7 months ago

      Hey, I'm new to the agent space and was thinking the same thing. Can you give me some ideas where frameworks like CrewAI would actually be useful and not just an unnecessary layer of abstraction? My current understanding is that they help if you need to implement some sort of loop/cycle into the workflow.

  • @clray123
    @clray123 8 months ago +5

    And make sure you comply with GDPR when you pass on all that private, confidential, personal customer information they send you in the email to some external service such as "groq"... You will need to make customers read 40 pages beforehand and sign a release before they are allowed to send you email.

    • @gavinknight8560
      @gavinknight8560 8 months ago

      And this is why a local LLM makes a lot of sense. The other element is, whatever you do needs to be discoverable. When the lawsuits start flying, the logging will need to be of high quality.

    • @clray123
      @clray123 8 months ago

      @@gavinknight8560 You can always ask an LLM to fake the logs. The reality is that GDPR is only used to extort money and kickbacks from big companies, because there is absolutely no way to check compliance (e.g. if I say I deleted all your data, there is no way - or technical possibility - to prove that I did).

  • @drlordbasil
    @drlordbasil 8 months ago

    I never really use CrewAI, but I have an email assistant like this with 2 brains (still upgrading): one for tools for researching/leaving notes/reading notes/etc., and another for the response to the email after reviewing the returns from the tool agent. It replies directly to the clients and has RAG via Ollama!
    Although it's all Python.
    I love this type of agentic workflow. I really should learn more about CrewAI but I dunno, it still seems annoying to me for some odd reason haha. Any other tips that weren't in the vid for coders who are hesitant to use CrewAI?

  • @74Gee
    @74Gee 8 months ago

    This is very impressive indeed, thanks for sharing! I suspect the upcoming Ollama 7b version might not be quite as accurate but this gave me an idea.
    I was thinking it might work to generate 2 replies for each of say 50 emails from a huge AI, and use the manually chosen replies as examples for the smaller models. It feels like cheating but I think it might give the smaller models an extra accuracy boost they might need.

    • @samwitteveenai
      @samwitteveenai  8 months ago +1

      yeah creating good ICL exemplars works really well. I have done a project recently that makes good use of this with the Haiku model.

    • @74Gee
      @74Gee 8 months ago

      @@samwitteveenai Makes sense it wasn't an original idea! Thanks for classifying it for me, now I can look into it further :)
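The exemplar idea discussed in this thread (reusing human-approved replies from a large model as in-context examples for a smaller one) can be sketched as a simple few-shot prompt builder. The exemplar data and function name below are hypothetical:

```python
# Hypothetical exemplars: (incoming email, human-approved reply) pairs
# hand-picked from a larger model's drafts.
EXEMPLARS = [
    ("My order arrived broken.",
     "So sorry to hear that! We'll ship a replacement today at no charge."),
    ("Do you ship to Canada?",
     "Yes, we ship to Canada; delivery usually takes 5-7 business days."),
]

def build_fewshot_prompt(new_email, exemplars=EXEMPLARS):
    """Prepend the chosen exemplars so a smaller model can imitate them."""
    shots = "\n\n".join(f"Email: {q}\nReply: {a}" for q, a in exemplars)
    return f"{shots}\n\nEmail: {new_email}\nReply:"
```

The resulting prompt goes to the smaller model as-is; swapping in better exemplars is how you tune quality without retraining.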

  • @Derick99
    @Derick99 8 months ago

    How can we get it to make complex WordPress plugins with multiple files without losing progress along the way? It just forgets stuff and leaves placeholders, causing us to go in circles if it's too complex.

  • @yellowboat8773
    @yellowboat8773 8 months ago

    Any chance of showing how to integrate LangChain tools into CrewAI? Specifically gptresearcher?

  • @cnclubmember23
    @cnclubmember23 8 months ago

    Great, but is it possible to allow internet search in the Ollama web UI?

  • @SaikatDeshmukh
    @SaikatDeshmukh 8 months ago

    How did you create the thumbnail?

  • @landob6393
    @landob6393 7 months ago

    What's the need for crewAI here, or for similar examples? From my understanding, this could be passed into a simple sequential LLM chain and be much simpler. I'm new to AI agents and LLM applications so bear with me, just a genuine question. Any replies would be awesome!

    • @samwitteveenai
      @samwitteveenai  7 months ago

      It's the decision points, and parsing (and acting on) those decisions. In many ways I set this up to compare with the LangGraph example that followed it. The CrewAI framework can also be used to decide the next steps itself, though I feel this often isn't reliable.

  • @eyad_aiman
    @eyad_aiman 8 months ago

    Sam, is CrewAI production ready? It causes a lot of internal server errors in production.

    • @samwitteveenai
      @samwitteveenai  8 months ago

      I would say it isn't production ready currently. I use it more for trying ideas out quickly and then remaking them in LangGraph or my own little framework

  • @tuneshverma2582
    @tuneshverma2582 7 months ago

    awesome video, really helpful

  • @hqcart1
    @hqcart1 8 months ago

    I don't understand why we need to complicate a simple email-reply task with crews and agents???
    A simple prompt is sufficient to categorize the email and reply based on the prompt.
    Someone please explain why the complication??

    • @samwitteveenai
      @samwitteveenai  8 months ago +1

      I don't disagree; this could be done with just a chain of prompts. I tried to keep this simple. Where the agent elements come into their own is with multiple decision points where the LLM is choosing the flow path.

  • @WhyitHappens-911
    @WhyitHappens-911 8 months ago

    Nice! Do you know if Llama 3 powered by Groq is usable with AutoGen instead of CrewAI?

    • @samwitteveenai
      @samwitteveenai  8 months ago

      I haven't tried it but I think it should be.

    • @WhyitHappens-911
      @WhyitHappens-911 8 months ago

      Thank you! It would be nice to have the same kind of tutorial with Autogen. I really appreciate the quality of the work you are providing

  • @flavorbot
    @flavorbot 8 months ago

    great tutorial thanks a lot

  • @tubingphd
    @tubingphd 8 months ago

    Thank you Sam

  • @sayanosis
    @sayanosis 8 months ago

    Amazing video ❤

  • @MarcelMilcent
    @MarcelMilcent 8 months ago +1

    Please do it with Ollama locally. It would be really nice to have some more multi-agent examples. By the way, as per what you asked in the other video about other languages, Llama 3 is working pretty nicely in Brazilian Portuguese. Thanks for everything!

    • @samwitteveenai
      @samwitteveenai  8 months ago

      Interesting to hear it is doing well in Brazilian Portuguese

  • @teprox7690
    @teprox7690 8 months ago +1

    Why do the 3rd and 4th have orange sunglasses?

  • @drlordbasil
    @drlordbasil 8 months ago

    damnit I was busy coding >,< lol brb watching

  • @pensiveintrovert4318
    @pensiveintrovert4318 8 months ago

    Llama 3 70B Instruct starts producing junk output once the conversation gets beyond 8k. Pretty unusable with gpt-pilot, for example.

    • @alizhadigerov9599
      @alizhadigerov9599 8 months ago

      How can you go beyond 8k if the context length is a maximum of 8k?

    • @pensiveintrovert4318
      @pensiveintrovert4318 8 months ago

      @@alizhadigerov9599 I guess the only thing to do is employ a sliding window of some kind. Maybe compress old content. There are articles about context-size extension. I was using Ollama; it may have a problem with how it handles context size.

    • @choiswimmer
      @choiswimmer 8 months ago

      Why are you stuffing all that in there? You can summarize the conversation or do other techniques to manage that. It's just lazy to stuff things in and let the model take care of it

    • @pensiveintrovert4318
      @pensiveintrovert4318 8 months ago

      @@choiswimmer when I need to have gpt-pilot agents lectured on being lazy, I am sure to get in touch with you.

    • @ZacMagee
      @ZacMagee 8 months ago

      Bit like the current state of chatGPT
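The sliding-window idea mentioned in this thread can be sketched as a history trimmer that keeps the system prompt plus the newest turns that fit a budget. This is a rough illustration: the 8k budget and the 4-characters-per-token estimate are assumptions, and a real version would count with the model's tokenizer.

```python
def trim_history(messages, max_tokens=8000, chars_per_token=4):
    """Keep the system message plus the most recent turns that fit the budget.

    Token counts are approximated as len(text) // chars_per_token; a real
    implementation would use the model's tokenizer instead.
    """
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    budget = max_tokens - sum(len(m["content"]) // chars_per_token for m in system)
    kept = []
    for m in reversed(rest):  # walk newest -> oldest
        cost = len(m["content"]) // chars_per_token
        if budget - cost < 0:
            break
        budget -= cost
        kept.append(m)
    return system + list(reversed(kept))
```

Called before each request, this drops the oldest turns first, which is usually better than letting the model silently degrade past its context limit.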

  • @clray123
    @clray123 8 months ago

    419 scammers / scambaiters you better listen up...

  • @clray123
    @clray123 8 months ago

    I suspect that signing machine-generated emails as "Sarah, the resident manager" when there is no "Sarah" is at the very least unethical, and potentially illegal (depending on the context).

    • @sd5853
      @sd5853 8 months ago +3

      I mean when Indian call center guys try to reach out to you do they present themselves as Radesh from Mumbai or as Paul from Missouri ?

    • @clray123
      @clray123 8 months ago

      @@sd5853 You mean Indian scam centers? Yes, scammers usually assume a different identity from their own because it aids their scam. Do you want your company to be perceived as liars and scammers?

  • @megaimpian986
    @megaimpian986 8 months ago

    Fuck, my brain! I can't understand!!

  • @user-ue9bi2ui2q
    @user-ue9bi2ui2q 8 months ago

    There are some issues when you do stuff like this in real life:
    1. Groq is not actually that fast when you use it with CrewAI.
    2. In every use case I have tried, just using code is faster and more accurate than using a team of agents.
    3. Most LLMs can't even consistently format the agent messages properly, resulting in a massive waste of tokens as wrongly formatted messages repeat over and over.
    This makes the utility of this method for writing reliable production code very limited right now.
    That is the REALITY I am seeing with dirty hands.
    Please share your own thoughts or experiences with me!

    • @samwitteveenai
      @samwitteveenai  8 months ago

      I wouldn't use CrewAI for production at all currently. It is like an idea-testing tool/toy. It makes the trade-off of getting fast and easy creation of agents by giving up full control, custom checking, validations etc.

    • @user-ue9bi2ui2q
      @user-ue9bi2ui2q 8 months ago

      @@samwitteveenai OK, thanks for the reply! Yeah, makes sense 👍