AutoGen FULL Tutorial with Python (Step-By-Step) 🤯 Build AI Agent Teams!

  • Published: 6 Jan 2025

Comments • 588

  • @matthew_berman
    @matthew_berman  A year ago +132

    Advanced guide coming soon. What topics do you want me to cover?

    • @tarekayoubi1913
      @tarekayoubi1913 A year ago +46

      Autogen with langchain :)

    • @techguy2342
      @techguy2342 A year ago +62

      Autogen or aider working with local LLM! OpenAI API is way too expensive. ($0.50 just running the YTD stock example notebook once)

    • @tmhchacham
      @tmhchacham A year ago +8

      Would this work (to some extent) with creating an app in either Android Studio or Visual Studio (MAUI or even Xamarin)?

    • @togglebone2320
      @togglebone2320 A year ago +5

      Second! @@techguy2342

    • @duaazprayer
      @duaazprayer A year ago +22

      Working with Docker. Logging and troubleshooting, and finally integration of a private LLM hosted in the cloud.

  • @spencerphillips9533
    @spencerphillips9533 A year ago +55

    Just have to say it. You’re an awesome dude, and I look forward to your videos every day. Your passion is clearly genuine and certainly contagious. I wish you nothing but continued and growing success in life and your video-making ventures! You rock Matthew Berman.

    • @matthew_berman
      @matthew_berman  A year ago +10

      Thank you so much, this comment means a lot to me!

    • @richardbeare11
      @richardbeare11 A year ago +1

      100% agree! The word that came to my mind was "infectious", but the good kind - I've been infected with "stokedness".
      I've been really inspired to learn all I can about this subject, and explore tools and developments in this area - and have been applying it in my own work and life. It's very exciting. Since finding this channel, it's the place I keep coming back to over and over.
      I'll double down with another thank you. Thank you Matt!

    • @FabioEloi
      @FabioEloi A year ago

      Just one more nerd here, addicted to the infected stokedness of Matthew and his ability to give me goosebumps after each new video on AI.

  • @dclutter01
    @dclutter01 A year ago +6

    Yes! Please keep doing this series! On a side note, it's been awesome to watch you grow over the last 7ish months. Your content here, and your personal knowledge and skills have shown a tremendous improvement! Your walkthroughs have helped me, and many others keep up with the incredible volume of AI developments lately. So.... Thank you 🙏.

  • @ciaopizzabella
    @ciaopizzabella A year ago +76

    Very good tutorial! ... BUT... I have seen a ton of videos using LLM agents to make trivial examples, but never a more substantial app. It would be great if you could demonstrate how to make a more significant app or game that consists of multiple files, image resources, etc., and where you also participate in editing the code (for fixing some bugs or making changes, this is often much easier and faster than getting the agents to do it).

    • @therealsharpie
      @therealsharpie A year ago +14

      That and context windows, the bane of AI programming. Eventually, any project is just going to get too big to be seen holistically.

    • @zef3k
      @zef3k A year ago +4

      @@KEKW-lc4xi I wonder if it's just a matter of a lot more, but very specific, agents. This one handles functions, this one handles file I/O, one can research which dependencies might work best or be more efficient, and several can do QA along the way.

    • @ciaopizzabella
      @ciaopizzabella A year ago +5

      @@KEKW-lc4xi I believe applications of moderate size and complexity (maybe up to a few thousand lines of code) can be done with current technology. It just requires setting things up right and having the human be part of the team to correct any mistakes.

    • @marsrocket
      @marsrocket A year ago +5

      This is my comment exactly. All we see is an AI doing things that a human could easily do themselves, but maybe faster.

    • @pdjinne65
      @pdjinne65 A year ago +1

      Making an app requires interacting with the UI, having a backend and a frontend, debugging... The tech is far from ready to do any of this; it's way too complex. That's why it's limited to simple things for now.
      My guess is that it's going to be perfect for writing isolated pieces of code, or papers and articles.

  • @Dave-cg9li
    @Dave-cg9li A year ago +12

    As a postgraduate student, I'd love to see how it could be used in academic research. While I wouldn't trust it to write a paper, it could be helpful in assessing which papers are worth reading when researching a specific topic. Or it could browse the internet and search for relevant papers that the researcher has missed, but that could get quite expensive.
    I'm sure many people would find it useful, since it would be applicable even outside of Academia. And there are definitely more ways in which it could simplify research that I haven't thought of (other than writing fake papers haha) :)

  • @mercypark
    @mercypark A year ago

    Bro right?! It’s the obsession I can’t get away from. Starting this second video now. I’m hoping we’ll focus on coding agents. If not more of that would be wonderful! You’re the best btw.

  • @alexjensen990
    @alexjensen990 A year ago +13

    I would really like to see you do some more advanced stuff. I'm new to coding and kind of jumped into the deep end by focusing on AI. I had spent roughly 100 hours working with ChatDev and I've already started porting my agents over to AutoGen, but I would love to see how far the rabbit hole goes.
    What would be super cool would be a video-game-type visualization of what AutoGen does from a dialog, similar to ChatDev. I love being able to watch my little design firm work through the task I give it visually. Being able to organize by "company" like ChatDev would be awesome too. That way you could have different sets of agents built for any series of given tasks, but if they aren't needed you could lean out the code by removing unnecessary agents. You'll end up creating profiles anyway, but the concept of creating almost an entire ecosystem of different "companies" that do different things (SEO, branding, design, front end/back end integration, etc.) would be very interesting.

  • @93cutty
    @93cutty A year ago +2

    I've been messing with AutoGen since you introduced it and I love it. I am at work, so I am going to listen to it here, and then when I get home I'll work along if there's anything like that. The only thing I wish I knew is whether you could have multiple other AIs working together, i.e. Bard, GPT, some HuggingFace models, etc. I bought a 4090 and a solid rig to run some AI locally and I'm ready to learn! lol

  • @Levicandoit
    @Levicandoit A year ago +33

    You mentioned multi-agents, Matt, but I would love to see some more tutorials on how to actually get all those agents to work together

    • @AnjewTate
      @AnjewTate A year ago +1

      Introduce them to each other. Name them.

    • @boukm3n
      @boukm3n A year ago +3

      Set up some sort of loop where their answers are all forwarded to each other. You can do this with LangChain, Flowise, or Botpress
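      AutoGen itself also ships a built-in pattern for exactly this: a GroupChat whose GroupChatManager forwards each agent's reply to the others. Below is a minimal sketch, assuming pyautogen is installed and OPENAI_API_KEY is set; the agent names and the task are purely illustrative.

      ```python
      # Sketch: several agents "working together" via AutoGen's GroupChat.
      # Assumes: pip install pyautogen, OPENAI_API_KEY in the environment.
      import os
      import autogen

      config_list = [{"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}]
      llm_config = {"config_list": config_list}

      coder = autogen.AssistantAgent(name="coder", llm_config=llm_config)
      reviewer = autogen.AssistantAgent(
          name="reviewer",
          system_message="Review the coder's work and suggest fixes.",
          llm_config=llm_config,
      )
      user_proxy = autogen.UserProxyAgent(
          name="user_proxy",
          human_input_mode="NEVER",
          code_execution_config={"work_dir": "coding", "use_docker": False},
      )

      # The manager routes each message to the next speaker, so the agents
      # effectively see and build on each other's answers.
      groupchat = autogen.GroupChat(agents=[user_proxy, coder, reviewer], messages=[], max_round=12)
      manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

      user_proxy.initiate_chat(manager, message="Write and review a Python script that prints 1 to 100.")
      ```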

  • @maxivy
    @maxivy A year ago

    This man is a fixture of the early day AI revolution just by virtue of spreading accessibility, knowledge and passion. Sincerely thank you

  • @johnrmilton1966
    @johnrmilton1966 A year ago +1

    These AutoGen tutorials are extremely helpful, Matthew. Please continue rolling out this content.

  • @MetaphoricMinds
    @MetaphoricMinds A year ago +1

    Matt.... THANK YOU for breaking these down step-by-step. You are giving us so much knowledge and capability. Truly, thank you.

  • @CursorBl0ck
    @CursorBl0ck A year ago +4

    Thanks for this! One heads-up: on a fresh install of Anaconda, you may first need to run 'conda init' before activating the environment, as shown at 2:28.

    • @matthew_berman
      @matthew_berman  A year ago +2

      Thanks for sharing.

    • @pnddesign
      @pnddesign A year ago +1

      This conda thing was a pain for me ... it had conflicts with a python3 install via brew.

    • @DreamingConcepts
      @DreamingConcepts A year ago

      You also need to have %USERPROFILE%\Anaconda3\condabin on your PATH environment variable.

  • @Chasingaxl
    @Chasingaxl A year ago

    Glad to see someone else is so excited about this. I am here with you. Let's push this to the limits. I am ready to create my army of agents and groups. Great work buddy!

    • @erwan5482
      @erwan5482 A year ago

      Can you give a use for all this? With the exception of toy problems, I don't see any use.

  • @manuelherrerahipnotista8586
    @manuelherrerahipnotista8586 A year ago

    In only two days I've become a big fan of your work, man. Simple and to the point. Way to go.

  • @elwii04
    @elwii04 A year ago +4

    Looking forward to the Video with an open source model😍

  • @almahmeed
    @almahmeed A year ago

    This is so interesting. It worked from the 1st time after correcting one of my spelling mistakes :) I made it generate a list of chemical elements with their details ..
    Can't wait to watch the open source and other parts of those videos. Thank you so much.

  • @animeshdevarshi
    @animeshdevarshi A year ago

    This tutorial is so good that I am watching it for the third time. Many thanks for this work.

  • @paraconscious790
    @paraconscious790 A year ago +3

    Indeed, AutoGen is incredible technology. I am trying to use it for real, serious business stuff. Let's see how it performs. Thank you very much!

  • @johnrmilton1966
    @johnrmilton1966 A year ago

    Thanks!

  • @naytron210
    @naytron210 A year ago

    Dude I am game for any and all Autogen content you can possibly create 🙌

  • @lucassaccone
    @lucassaccone A year ago

    I thought this would actually be intermediate :( I really love this topic. I saw your first video on it, and in the comments a lot of people asked for a more advanced use case, but you didn't do one. Btw I really like your videos, looking forward to the advanced tutorial!

  • @timkarsten8610
    @timkarsten8610 A year ago

    I am more than excited about AutoGen!
    Keep 'em coming!

  • @pnddesign
    @pnddesign A year ago

    This is actually THE tutorial that gave me the curiosity to try this thing. Ready for the advanced stuff!

  • @karanjain9707
    @karanjain9707 A year ago

    Thank you so much for creating this! I have 0 coding experience or knowledge but I still managed to make my own CXO Agents! Cheers!

  • @changethementality
    @changethementality 2 months ago

    You're a lifesaver! Thanks for the awesome tutorial!

  • @IrmaRustad
    @IrmaRustad A year ago +3

    Great video! Please show how to use different agents that work together to solve a task.

  • @zyxwvutsrqponmlkh
    @zyxwvutsrqponmlkh A year ago +1

    Having considerable problems getting code execution running under Windows. Not sure how to make this integrate with WSL.

  • @Tenly2009
    @Tenly2009 A year ago

    I wanted to make sure you saw my reply about temperature - so I’m posting this as a new comment. Check the API documentation again and scroll down to the “Create Chat Completion” and “Create Completion” sections. They definitely specify the temperature range is 0 to 2.
    “temperature
    number or null
    Optional
    Defaults to 1
    What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.”
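    For anyone wiring that value into AutoGen: temperature is simply passed through llm_config to the underlying chat-completion call, and the 0 to 2 range quoted above is what the OpenAI Chat Completions docs state (the video just stays in the 0 to 1 part of it). A minimal sketch, assuming pyautogen and an OPENAI_API_KEY in the environment:

    ```python
    import os
    import autogen

    config_list = [{"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}]

    # temperature is forwarded to the chat completion request; lower values
    # make the output more deterministic, higher values more random.
    llm_config = {
        "config_list": config_list,
        "temperature": 0,
    }

    assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)
    ```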

  • @alwattsj
    @alwattsj A year ago +1

    Matt, great tutorial. Thank you. For the community, in some limited testing I have had success using the 'gpt-3.5-turbo-16k' model for the assistant and 'gpt-4' for the user proxy. It seems like (specifically for code execution) the user proxy won't actually initiate execution of code from the assistant if the user proxy is using gpt-3.5. Probably worth exploring deeper as these agent 'teams' grow in size.
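    If you want to reproduce that split, a hedged sketch of giving each agent its own config list follows; the model names mirror the comment above, and the task message is only an example.

    ```python
    import os
    import autogen

    api_key = os.environ["OPENAI_API_KEY"]

    # Separate config lists so the assistant and the user proxy use different models.
    assistant_config = {"config_list": [{"model": "gpt-3.5-turbo-16k", "api_key": api_key}]}
    proxy_config = {"config_list": [{"model": "gpt-4", "api_key": api_key}]}

    assistant = autogen.AssistantAgent(name="assistant", llm_config=assistant_config)

    # The proxy only needs an llm_config if it should generate replies itself;
    # code execution still happens locally in work_dir either way.
    user_proxy = autogen.UserProxyAgent(
        name="user_proxy",
        human_input_mode="NEVER",
        llm_config=proxy_config,
        code_execution_config={"work_dir": "coding", "use_docker": False},
    )

    user_proxy.initiate_chat(assistant, message="Plot the YTD stock gains of NVDA and TSLA.")
    ```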

  • @ddwinhzy
    @ddwinhzy A year ago

    Thanks for solving a problem that's been bothering me, I wish there were more tutorials on autogen, more agents!

  • @francoisneko
    @francoisneko A year ago +1

    Love your videos, and I appreciate that you take care to explain every step, as I am just learning how to code. I would love it if you did a tutorial about how to write a book autonomously, from finding ideas, to writing a summary, and then completing each chapter. That would be a great project starting point for me. Really excited about your next videos!

  • @MrSuntask
    @MrSuntask A year ago +7

    Great! Would love to use it with one of the LLAMA2 models.

    • @mrquicky
      @mrquicky A year ago +2

      It can do it, but it requires FastChat loading the LLM on the backend.

  • @davidbaity7399
    @davidbaity7399 A year ago

    GO GO GO!
    Awesome, Thank you.
    I will be doing this tonight and extending it.
    My goal is to write applications in C++ or C#; it will be interesting to see how it does.
    Python is OK for writing developer applications, not consumer applications.

  • @jamesmillsnicholas7813
    @jamesmillsnicholas7813 A year ago

    Wow! Just ran the article summary example with gpt-4, a bit slower but it installed all the necessary packages and generated a great summary on an article. I can see this being a useful addition to my tool-kit ;-)

  • @sozno4222
    @sozno4222 A year ago

    Thanks!

  • @cedricsipakam2601
    @cedricsipakam2601 A year ago

    This tutorial was just what I needed. You are the best!

  • @ZenchantLive
    @ZenchantLive A year ago +1

    Love seeing your growth. Weren't you at like 32k a few months ago? Either way, man! Great content!

    • @matthew_berman
      @matthew_berman  A year ago +1

      Thanks so much! Yea I started earlier this year :)

  • @BunniesAI
    @BunniesAI A year ago +2

    1. Awesome, please do more advanced stuff (not that this was trivial or anything), but would love to see something that is tangential to a commercial project. Like, how will people use this in production? How can it be deployed to something we can expose to the world? - Brilliant work, keep it up 🙏🏻😍

  • @rayhinojos5063
    @rayhinojos5063 A year ago +1

    Thank you for this video series, it has really helped getting things going.

  • @will2462
    @will2462 A year ago +4

    I would be super interested in using this with Llama or another open source local LLM

  • @Dr_Tripper
    @Dr_Tripper A year ago +5

    Matt, you can't say 'locally' and then use OpenAI!! Is there a way to use AutoGen with Flowise or Langflow?

    • @matthew_berman
      @matthew_berman  A year ago +5

      I went back and forth on this question of using the word local. Ultimately, I am building this on my local computer. There's one piece that hits an external API. And I'm still planning my video using an open-source model. So my distinction is: local means on my computer, and open source means using a non-OpenAI model.

    • @Dr_Tripper
      @Dr_Tripper A year ago

      @@matthew_berman Cool. Will wait for it with anticipation. I am learning Langflow and would like to incorporate an Executor function, but this is a ways down the road for me. Great teachings.

  • @maybe3392
    @maybe3392 10 months ago

    At 9:43 you say to run via the run button, however the run button opens a new terminal which does not have the same Anaconda context. How do you have this configured to be this way? I am not seeing a way to do this.

  • @satheng931
    @satheng931 A year ago

    I'm not a big fan of using AI outside of my local machine because I don't like sharing my data with large AI companies. Anything you cover highlighting how to install an AI 100% locally is what keeps me coming back for more.

  • @XiOh
    @XiOh A year ago +2

    Yes, I really would love to see how a local open-source LLM works with AutoGen.

  • @TrilochanSatapathy
    @TrilochanSatapathy A year ago +2

    Great tutorial 🙌

  • @seanbergman8927
    @seanbergman8927 A year ago +1

    This was great! You make the best LLM tutorials. This was fun!! Got it running except it tended to have trouble running the code script on my system…sometimes, saying it was blocked. But then in the 1-200 round it ran the script. It was wild seeing the agents discussing how to deal with the script blocking issue!!! Yes, most definitely let’s dive in deeper here, both with local OS LLMs and more challenging tasks using gpt. How can it be made to surf the web using gpt, for example? Maybe a plugin or langchain integration?

    • @matthew_berman
      @matthew_berman  A year ago +1

      Much appreciated. Advanced tutorial coming next week :)

  • @codescholar7345
    @codescholar7345 A year ago +4

    Awesome! Can you get a local open-source LLM and local code interpreters working with AutoGen? I'm trying to get FastChat working as a local LLM like it says in the docs, but it's getting stuck at the last step before launch. Please advise. Thanks!

    • @matthew_berman
      @matthew_berman  A year ago +3

      I'm working on getting a local model set up. AutoGPT is already a code interpreter ;)

    • @alexandresemenov8671
      @alexandresemenov8671 A year ago +4

      There is fastgen, which can provide an OpenAI-like API for working with AutoGen.

    • @codescholar7345
      @codescholar7345 A year ago

      Sounds great! This tutorial is good. After the agents thank themselves about 10 times, I'm out of GPT tokens... 🤣 We've got to get the local model setup soon, thanks! @@matthew_berman

  • @panikos
    @panikos A year ago +1

    Thanks

  • @adriannairda7922
    @adriannairda7922 A year ago +2

    Thank you for such a good tutorial!

  • @yorth8154
    @yorth8154 A year ago +5

    Hey Matt, love your videos! Here is an idea you might want to consider: a book-writing agency. You'll have a team of different agents: a scene writer, a chapter manager (or author), a book outliner, an editor, and a critic. The reason for this specific team is that I found GPT agents have a really hard time writing full chapters, so making them just focus on one scene might be better. I also believe you should have some retrieval system with a vector database or LangChain so that the agents at a higher level of abstraction can still get information through hundreds of pages of text. I'm sure there are multiple tweaks that need to be made to this plan as it is put into practice, but I think it's a decent starting point! I would love to even work with you on this kind of project if you're up for it. It sounds fun!

    • @ryzikx
      @ryzikx A year ago +2

      I'm planning on doing something like this as a novelist myself

  • @hqcart1
    @hqcart1 A year ago +2

    You should give a real-world example, like a complete Chrome extension, or a back-end and front-end web app.

  • @JohnLewis-old
    @JohnLewis-old A year ago +3

    Can't wait to see a local LLM. Can you try it with the 7B Mistral?

  • @keithncs
    @keithncs A year ago +4

    Hi, thank you Matthew for creating videos like these. In terms of video suggestions, I'm interested in the marketing side of things. Maybe an example video would be "How AutoGen can help you build a 1-man social media marketing agency".

  • @mrNashmann
    @mrNashmann A year ago

    Thanks Matthew, I am new, but I will rewatch it; some points were a bit complicated.
    Appreciate your wisdom

  • @gregortidholm
    @gregortidholm A year ago +1

    The future is here 💪

  • @jslime
    @jslime A year ago

    This is so cool, your videos are the best man! Keep up the good work.
    I would LOVE to see an example of a low-level marketing agency with multiple agents working together with something like this.
    Say one that researches and generates ideas for industry-relevant twitter (x) posts (agent #1).
    Then another one that plans a schedule for them (agent #2), and finally an agent that actually posts them to twitter for you via an API (agent #3) according to the schedule agent #2 came up with.
    Basically, multiple agents working together to finish a multi-step task.

  • @bobbob-mi6pq
    @bobbob-mi6pq A year ago +9

    Careful while playing around with this: your OpenAI bill can come to $10 in a couple of days, or even hours, depending on how many times you use AutoGen

    • @matthew_berman
      @matthew_berman  A year ago +1

      Correct.

    • @mcombatti
      @mcombatti A year ago +1

      How much would it cost to pay an employee to do the same?

    • @bobbob-mi6pq
      @bobbob-mi6pq A year ago

      @@mcombatti That's not the point; no matter where you are, you could find cheap labor if you outsource. The point is that costs can add up quickly. I made a basic Pong game with ChatDev for way cheaper than it cost with AutoGen, for example. I feel like this tool is a great stepping stone for better tools in the future. Spotting and fixing mistakes can be mentally taxing when you want to customize it, especially when you fix one problem but another one comes up, then another, and so on. I realized I was already paying $20 per month for GPT Pro and had set up my custom instructions very well; I could have just put all the prompts into GPT Pro to save time, reduce stress, and cut extra costs.

  • @werwardas1
    @werwardas1 A year ago

    Thank you this was very helpful! Small start, but I see so much potential as LLMs progress..

  • @BobHigley-ne3fk
    @BobHigley-ne3fk A year ago +1

    Wow, this is so awesome. Thank you so much for this.

  • @abelarvizu2898
    @abelarvizu2898 10 months ago

    What kind of file are you creating @1:20? A Python file, a text file, or something else?

  • @jayanth22
    @jayanth22 A year ago

    00:05 Autogen is an AI technology that allows you to set up multiple AI agents to accomplish any task.
    01:49 Create a new conda environment to manage Python versions and modules
    03:37 Setting up the OpenAI API key and configuring the LLM
    05:44 Creating AI Agent Teams with AutoGen
    07:36 Setting the termination message and code execution config in AutoGen FULL Tutorial with Python
    09:29 Demonstration of using AutoGen with Python and GPT
    11:23 The script successfully generates numbers 1 to 100 and stores them in a file.
    13:23 The script successfully executed and created a file named numbers.txt.
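    Putting those timestamps together, here is a compact sketch of the kind of setup the video walks through: one assistant, one user proxy that executes the generated code locally and stops on a TERMINATE message, and a single initiate_chat call. It assumes pyautogen and an OPENAI_API_KEY; the exact prompt is paraphrased from the demo.

    ```python
    import os
    import autogen

    config_list = [{"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}]
    llm_config = {"config_list": config_list, "temperature": 0}

    assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)

    user_proxy = autogen.UserProxyAgent(
        name="user_proxy",
        human_input_mode="NEVER",
        max_consecutive_auto_reply=10,
        # Stop once the assistant signals that it is done.
        is_termination_msg=lambda m: (m.get("content") or "").rstrip().endswith("TERMINATE"),
        # Generated code is executed locally in ./coding (no Docker).
        code_execution_config={"work_dir": "coding", "use_docker": False},
    )

    user_proxy.initiate_chat(
        assistant,
        message="Write the numbers 1 to 100 to a file called numbers.txt, then verify the file.",
    )
    ```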

  • @scitechtalktv9742
    @scitechtalktv9742 A year ago +2

    I very much appreciate your content!
    I would like to see more on running open source LLMs such as all the Llama 2 LLMs.
    How to run them in a COLAB (free version) notebook by prompting a local API / URL endpoint. When using OLLAMA and LITELLM for this: what exactly do these two tools do, what are their functions in the process?
    Is it possible to also use vLLM because of the large speed benefits of vLLM (paged attention)? Does that work in conjunction with OLLAMA and/or LITELLM? My view of LITELLM is that it can act as a server for LLMs while having the same interface to the server API as OpenAI has (so the server LLM can act as a drop-in replacement for an OpenAI closed-source LLM). Is that a correct view?
    I hope you answer these questions. Thanks in advance!
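    On the LiteLLM question: that view is broadly right; LiteLLM (and Ollama's own endpoint) can expose an OpenAI-compatible API, and AutoGen can be pointed at it by overriding the base URL in the config list. A sketch under those assumptions follows; the host, port, and model name are placeholders, and depending on your pyautogen version the key is base_url or api_base.

    ```python
    import autogen

    # Point AutoGen at a local OpenAI-compatible server (e.g. a LiteLLM proxy or Ollama).
    config_list = [
        {
            "model": "local-model",                   # whatever name the local server exposes
            "base_url": "http://localhost:8000/v1",   # "api_base" in older pyautogen versions
            "api_key": "not-needed",                  # many local servers ignore the key
        }
    ]

    assistant = autogen.AssistantAgent(name="assistant", llm_config={"config_list": config_list})
    ```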

  • @Tenly2009
    @Tenly2009 A year ago +1

    @4:55 did you misspeak about the range for temperature, or is it actually different in pyautogen? For OpenAI, the range is 0 to 2, but in this video you said it was 0 to 1.

    • @matthew_berman
      @matthew_berman  A year ago +1

      Really? It's always 0 to 1...I've never heard of 0 to 2.
      I just checked the docs for OpenAI, it's 0 to 1.

    • @Tenly2009
      @Tenly2009 A year ago

      @@matthew_berman Scroll down further on the API page to the “Create Chat Completion” and “Create Completion” sections.

  • @ralphamhof2664
    @ralphamhof2664 A year ago +1

    Thank you. Great content and really helpful😎

  • @rabihbadr54
    @rabihbadr54 A year ago +4

    Requesting LLama2 (13B-70B) (or any other powerful alternative model) with Autogen

  • @1242elena
    @1242elena A year ago +1

    Please do a tutorial on the best way to intertwine agents and provide them with memory. Also address the issues of rate and token limits. The problem I keep running into is the agents going through the actions, then completing with minimal feedback, as opposed to asking me for more prompting for more actions to complete.

  • @marcfruchtman9473
    @marcfruchtman9473 A year ago

    Thanks for the video. This kind of AI programming Tool is really useful. Good to see it starting to gel.

  • @hellblade-kaos
    @hellblade-kaos A year ago

    Hey Matt, you are awesome! Thank you so much for taking the time to share your knowledge. Really, thank you

  • @kingofall770
    @kingofall770 A year ago

    Mind blowing. I am hooked

  • @illuminated2438
    @illuminated2438 A year ago

    My agents are having a truly fantastic day!

  • @MoTab78
    @MoTab78 A year ago +1

    First of all, you are making amazing videos for people interested in AI.
    If possible, can you make a video about AutoGen working locally using text-generation-webui with its API or OpenAI extensions? Also, I wonder if AutoGen can connect to different models for different agents while running only one instance of textgen.

  • @alwarya
    @alwarya A year ago

    Wow just when I asked for it. Thank you for creating such informational videos and providing resources as well. May god bless you for your work 🙏🏻 And really really interested in the Autogen series and advanced tutorials

  • @RushyNova
    @RushyNova A year ago +3

    Please upload a video using open source models with AutoGen 🙏🏼

  • @prodigiart
    @prodigiart A year ago +1

    Amazing tutorial on Autogen! I look forward to seeing Autogen work with other models like Llama-2 that can be instanced using cloud based gpu architecture.

  • @matt_4329
    @matt_4329 A year ago +1

    How did you get "(base) -> autogen" in the terminal at 1:45? Stuck at the beginning! :/

    • @matthew_berman
      @matthew_berman  A year ago +1

      That just means I'm in the autogen folder I created.

    • @matt_4329
      @matt_4329 A year ago

      Thanks! Working now! :) @@matthew_berman

  • @WINDSORONFIRE
    @WINDSORONFIRE A year ago +3

    I've been coding using aider for a couple of months now and it's pretty good. I wonder if you could do some kind of comparison video to show why this would be more compelling? I mean for something more than a snake game or counting from 1 to 100 lol. I'm doing a serious application/website from scratch. Aider has been amazing, but I'm always looking for new tools.

    • @mrquicky
      @mrquicky A year ago +1

      This is the first time I'm hearing of aider. The readme indicates GitHub functionality, which AutoGen doesn't do. Does it actually spawn separate processes to test the code it generates? Reluctantly, Microsoft's AutoGen seems to be the only software available that will actually utilize a local LLM to generate the code. I do not fault Matt's use of an online model for code generation. While AutoGen does support this, it requires FastChat as a backend and the very latest version of the transformers module. Open-interpreter states that it will work with local LLMs, but in a future version. In other words, it absolutely does not support it now.

  • @GoofyGuy-WDW
    @GoofyGuy-WDW A year ago

    Excellent how to, looking forward to testing tomorrow/later today

  • @javi_park
    @javi_park A year ago +1

    Can't wait to see these experiments. Would love to see examples of mini apps (i.e. recreate Twitter, Instagram, etc.) or simple recreations of existing apps.

  • @stickmanland
    @stickmanland A year ago +2

    Thanks man!!

  • @zvimelkman4407
    @zvimelkman4407 A year ago

    Hi, I find this content very interesting. Thanks for it. I would be interested in knowing if far more complex tasks may be done this way. All the Best.

  • @shreejipaliwal1215
    @shreejipaliwal1215 A year ago +1

    Can't install it on my PC. How do I do it?
    Stuck on the first step

  • @cotiew
    @cotiew A year ago

    I would love to see a deeper dive into a piece of what you started here. Task 1 and Task 2, using multiple group chats that pass information down to each other like going from one team to another.

  • @henrywithu
    @henrywithu A year ago +2

    I think you should use a more complicated task (like a snake game via pygame, etc.), because this doesn't show the advantage of multi-agent setups compared to a simple one-turn prompt

  • @peterc7144
    @peterc7144 A year ago +13

    Hi Matt, by far the most valuable thing for me would be a series of videos on how to use AutoGen with free, locally running LLMs - for example with Text-Generation Web UI. Then another series of videos on how to do the same with SuperAGI, then the other tools you made videos about. All of it is nice, but without the ability to use locally running free models, none of these are usable for me and probably for many others.
    I know that SuperAGI can be hooked to Text-Gen-WebUI, but I still have not figured it out.
    AutoGen looks amazing, but I am unable to run it as I would want to and use it at OpenAI's prices.
    OpenAI, if you are listening, I am willing to pay, say, $100 a month, but I need to have unlimited use of your API for my agents etc.
    What do you think? What do other folks think? Does that make sense?

    • @henk.design
      @henk.design A year ago +1

      Open-source will triumph, why rely on OpenAI?

    • @civilianemail
      @civilianemail A year ago +1

      Wow, what do you want to use it for? I've been using OpenAI pretty extensively in personal projects and I've never hit $5 in monthly usage, let alone $100. Outside of splurging on AWS Cloud Compute I've never spent that much. So the idea that you want to run something locally that would cost you $100 a month in api calls is intriguing.

    • @richardbeare11
      @richardbeare11 A year ago +2

      I spent $10 just translating one book yesterday (gpt-4) 😛

    • @richardbeare11
      @richardbeare11 A year ago

      It also cost me $3 to run ChatDev to generate me a small solution for a unity game I'm working on (fairly simple spline follower algorithm - it was great cause it pulled it off and I didn't have to do the gruntwork, but it has to run lots of iterations to converge on solution, like my brain does lmao)

    • @richardbeare11
      @richardbeare11 A year ago +1

      I'm opting to go the local LLM route. I've started work on integrating that into some simpler tools. Curious how much effort it'd be to integrate into autogpt and/or ChatDev.

  • @danialmirmartinez
    @danialmirmartinez A year ago

    Hey Matthew, love how you easily explain it; please, touch on how to use it in real life cases🙌🏼🙌🏼🙌🏼

  • @coryreeve1
    @coryreeve1 11 months ago

    If anyone is just coming into this now and trying to follow along on Windows without adding Anaconda to your PATH (the official Anaconda doc says not to), you can run 'conda' commands from the prompt in VS Code if you launch VS Code from inside Anaconda Navigator (assuming the theme doesn't blind you). This will allow you to follow along the same as the video. Otherwise, you will need to have a separate "Anaconda Prompt" Window open to run the commands.
    Hope this helps!

  • @akibulhaque8621
    @akibulhaque8621 A year ago

    AutoGen with LangChain, where the Llama 2 model can be loaded locally through LangChain and can also use private data such as a PDF through embeddings, so that the LLM will generate a Q&A reply using the data in the PDF.

  • @ericchastain1863
    @ericchastain1863 A year ago

    Yes, I have been working on multiple-qutrit Hadamard gates, mostly using Notepad++.
    So now I have .csv, .bin, .hex, QKD auth, and work on game engine design.
    So now I have my first job as a self-taught dev.
    .
    Live broadcast audio to translate: STT, translation, and TTS back in Ukrainian (or a choice of audio and text visuals) well within a minute.
    .
    I do have a TTS tutorial on building from Arduino.

  • @ITSupport-q1y
    @ITSupport-q1y A year ago

    Brilliant, thanks for the learning. Cheers Terence (Nelson) New Zealand.

  • @judge_li9947
    @judge_li9947 A year ago

    Nice vid. Would be nice to see a tutorial on setting up an autonomous trading company: agents for live trading, risk analysis, market analysis, programming & support, etc.

  • @Finnious
    @Finnious A year ago +1

    Keep going! Very clean clear explanations.
    Have you used Google colab instead of running locally?

    • @matthew_berman
      @matthew_berman  A year ago +1

      Yes. Check out my previous AutoGen overview where I give an example using Colab :)

  • @themartdog
    @themartdog A year ago +1

    Can AutoGen just hook up to any API endpoint for the LLM? For example, if I have my own LLM endpoint? Or if I wanted to use an AWS Bedrock API?

  • @XRubiconWay
    @XRubiconWay A year ago

    dude, that was fun, thank you

  • @IanFHood
    @IanFHood A year ago +1

    Excellent work Matt, I'm right behind you LOL. Still trying to get all the examples running in Colab. Curious if you ran into problems doing that. Is the full development environment going better?
    Thanks for this. Also very excited about AutoGen.

    • @matthew_berman
      @matthew_berman  A year ago

      I didn’t run into any issues using Google colab. I like the full development environment better because I just have more control. I have another video coming soon with even more advanced techniques.

  • @Wouldntyouliketoknow2
    @Wouldntyouliketoknow2 A year ago +2

    I'd love a review of the competition in this collaborative AI agents space... I've heard about ChatDev, and I guess there must be others. It would be great to get an overview.

  • @hojank
    @hojank 6 months ago

    This tutorial is so easy to follow. Curious, what software do you use for the screen capture?

  • @_joshwalter_
    @_joshwalter_ A year ago +3

    I would love a tutorial for ChatDev and AutoGen for people with literally no coding background (maybe also how to use a GUI).

  • @toddgattfry5405
    @toddgattfry5405 A year ago

    Looks fantastic! Is there a way to automate my MS OneNote inbox so that captured data can be organized and moved to the correct notebook? Thanks!

  • @rmnvishal
    @rmnvishal A year ago +2

    I tried building a functional chess game using AutoGen but failed... it just created a basic UI and no game logic. Any ideas what could have gone wrong and how I can make it work?