AutoGen Studio Tutorial - NO CODE AI Agent Builder (100% Local)

  • Published: 18 Jan 2025

Comments • 544

  • @matthew_berman
    @matthew_berman  1 year ago +58

    What tutorial do you want next about AutoGen Studio?

    • @DougBohm
      @DougBohm 1 year ago +30

      Set up a workflow for a specific department of a traditional company, e.g. Human Resources or the billing department.

    • @thespecialnoob
      @thespecialnoob 1 year ago +18

      Build a SaaS

    • @enesgul2970
      @enesgul2970 1 year ago +6

      It would be a workflow, a lot of agents, and more examples.

    • @DaleNeil
      @DaleNeil 1 year ago +23

      Use it with LM Studio, MemGPT, and a RAG like Chroma DB 🙏🏿

    • @build.aiagents
      @build.aiagents 1 year ago +7

      AutoGen workflows need to be a site after this video. This will be the coming thing, AutoGen templates and workflows… so maybe building a platform or infrastructure for that 😅 This is the new GPT store: AutoGen workflows. Said it here first ☝🏽😂 A Civitai of sorts.

  • @ManiSaintVictor
    @ManiSaintVictor 1 year ago +77

    I'm so grateful for how quickly you figure this stuff out and then articulate it. Thank you for the hundreds of hours you save me.

  • @john849ww
    @john849ww 1 year ago +1

    Really appreciate these videos!

  • @DaleNeil
    @DaleNeil 1 year ago +3

    Thanks!

  • @MakilHeru
    @MakilHeru 1 year ago +16

    Very cool. Amazing how over the span of 3 months we went from an all-command-line version to this interactive web UI solution. Can't wait to try it out.

  • @jakeparker918
    @jakeparker918 1 year ago +62

    This is great. I would love to see a comparison breaking down CrewAI vs AutoGen and their pros/cons/use cases.

    • @kylealcazar8878
      @kylealcazar8878 1 year ago +8

      Second this!

    • @Tokaint
      @Tokaint 1 year ago

      Third this

    • @devonkincaid360
      @devonkincaid360 1 year ago

      4th this!

    • @zacharywalker9102
      @zacharywalker9102 1 year ago

      Can someone please tell me what the fuck just happened here?? Not laughing at all. As a business owner, how would this help me? What real-world applications?
      PS: When he says AGENT, is that similar to, say, an A.I. companion aka Large Action Model, like the new 🐇 RABBIT tech???

    • @noaicancode
      @noaicancode 11 months ago

      @@zacharywalker9102 That's what everyone is trying to figure out. It all depends on the tasks you do every day that take hours of your work day. For instance, you keep sending emails to clients, or you do blogging for the company, or you need to get some data from the internet. You have people to do these tasks, right? Why not fire them and leave one person who is going to use AI to complete these tasks?

  • @RamirosLab
    @RamirosLab 1 year ago +70

    For Windows users: set OPENAI_API_KEY= (Instead of 'export OPENAI_API_KEY=' )
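
    A minimal sketch of both forms, assuming cmd.exe on Windows and bash/zsh on macOS/Linux (the key value is a placeholder):

    REM Windows Command Prompt
    set OPENAI_API_KEY=sk-your-key-here

    # macOS / Linux
    export OPENAI_API_KEY=sk-your-key-here

    Note that PowerShell uses a different syntax again (see the @officialdiadonacs comment further down the page).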

    • @Anime-Manwha-Manga-webnovel
      @Anime-Manwha-Manga-webnovel 1 year ago +1

      Thank you🥰🥰

    • @Tmorriso_
      @Tmorriso_ 1 year ago +1

      THANK YOU!!

    • @starblaiz1986
      @starblaiz1986 1 year ago +1

      Thank you this was driving me bonkers! XD

    • @AngusLou
      @AngusLou 11 months ago

      How do you set the API base URL?

    • @starblaiz1986
      @starblaiz1986 11 months ago

      @@AngusLou If you're using conda like in the tutorial and the above isn't working, try the following:
      -----------------------------------------------------------
      conda env config vars set OPENAI_API_KEY=[YOUR API KEY] -n ag
      -----------------------------------------------------------
      This will permanently set the OPENAI_API_KEY environment variable for the conda environment "ag" (obviously change that at the end of the command if you named your conda environment something different).
      ALSO NOTE: I noticed there's a bug in AutogenStudio - if you change an agent to use a different LLM, it does NOT update that agent in the Workflows. You either have to create a new Workflow with the updated agent, or modify the agent in the Workflow itself.
      I only realized this when I noticed Mistral wasn't outputting anything in the console, and then realized AutogenStudio was actually using GPT-4 instead of Mistral like I told it to. Only after a lot of digging did I find the Workflows didn't update after I changed the agents to use Mistral. Hopefully they fix that bug, but that might also be why it's calling for the OPENAI_API_KEY even if you're "using" local LLMs.
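
      To confirm the variable actually took effect, a quick check (a sketch, assuming the env is named "ag" as above):
      -----------------------------------------------------------
      conda env config vars list -n ag    # should list OPENAI_API_KEY
      conda activate ag                   # re-activate so the new variable is picked up
      echo $OPENAI_API_KEY
      -----------------------------------------------------------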

  • @spaceflights
    @spaceflights 1 year ago +7

    Holy moley... There is not enough time in the day to experiment with all these new toys! What an exciting time to be alive! Thank you for the easy-to-understand video.

    • @CaliJumper
      @CaliJumper 11 months ago

      Yes this is exactly the selling point for Brain computer interfaces.... the rate of acceleration is slowly and steadily increasing

    • @CaliJumper
      @CaliJumper 10 months ago

      Yeah, but you still have the same rate of interaction with the data, which I guess is not a real problem yet. If there is no groundbreaking shift in the rate at which information can be transmitted from computer systems to biological systems, then I don't think we will be maximizing value extraction from these systems, or addressing more concerning issues such as monitoring behavior for deviancy, rebellion, and other potential "internal" issues buried within the weights of the models. For the incredible amount of money invested, it would be prudent to consider.

  • @isaiassoares8458
    @isaiassoares8458 11 months ago +2

    Matthew, you are amazing, a blessed man!!! I am an engineer that wants to learn about AI and I enjoy your videos so much!!! Thank you so much for the high quality videos!!!!!

  • @Kwasyuk
    @Kwasyuk 1 year ago +8

    Yesss! I've been waiting on this video from you! great job. Keep up the amazing work, your videos are very helpful even for the more "experienced" people working within AI

  • @sephirothcloud3953
    @sephirothcloud3953 1 year ago +8

    You are the only useful channel about AI, others talk about theory, but you show us how to accomplish real life projects. Bravo.

    • @zacharywalker9102
      @zacharywalker9102 1 year ago

      Can someone please tell me what the fuck just happened here?? Not laughing at all. As a business owner, how would this help me? What real-world applications?
      PS: When he says AGENT, is that similar to, say, an A.I. companion aka Large Action Model, like the new 🐇 RABBIT tech???

  • @vishnunallani
    @vishnunallani 1 year ago +1

    The speed at which innovation is happening is staggering and the speed at which you are making the videos is amazing

    • @zacharywalker9102
      @zacharywalker9102 1 year ago

      Have you seen the new RABBIT device? It runs off the first Large Action Model.
      Now the Rabbit could literally be working for you 24/7 and making you money while you sleep 🛌!!

  • @AmplifyAmbition
    @AmplifyAmbition 1 year ago +120

    Matthew I am going to revoke this API key Berman

    • @MakeKasprzak
      @MakeKasprzak 1 year ago +14

      To be fair, he'd get spammed with folks saying he should revoke it if he didn't.

    • @PigOnPCIn4K
      @PigOnPCIn4K 1 year ago +5

      He used to get half the comments giving him "advice" to revoke the key 😂

    • @theden0minat0r
      @theden0minat0r 1 year ago +8

      This comment had me rolling with laughter. Thank you.

    • @bits_of_bryce
      @bits_of_bryce 1 year ago +3

      Haha I started saying it with him. It just feels right now.

    • @matthew_berman
      @matthew_berman  1 year ago +3

      @@MakeKasprzak exactly lol

  • @Save_Humanity_Save_Children
    @Save_Humanity_Save_Children 1 year ago +23

    Amazing. Could you do a video for coding? A developer team of agents (backend, front end, quality assurance, testing team, business consultant, project requirements drafter, project planner, etc.) and have this produce a ready product.

  • @MultiWolfxxx
    @MultiWolfxxx 1 year ago

    Thank you for how quickly, clearly, and directly you move through the steps. Usually one has to skip parts and listen to the rest at high speed to get the info.

  • @srinub523
    @srinub523 1 year ago +18

    Thank you. Could you please create a tutorial on how the different options in MemGPT (function calling, custom instructions, RAG with local files) work together?

  • @micbab-vg2mu
    @micbab-vg2mu 1 year ago +1

    The biggest benefit is that the new AutoGen uses GPT-4 Turbo (now the cost is OK to play with it) - the old AutoGen used the old, expensive GPT-4. Thank you for the video.

  • @FlyinEye
    @FlyinEye 1 year ago +1

    This is awesome. I started working with AI after seeing your AutoGen video months ago, but my coding skills aren't strong and I've moved on to other things like LM Studio, Faradev, Coze, MindStudio, LOLLM, etc. Every time I scroll past my autogen folder with the Python code I feel kinda sad for it. This is amazing, I can't wait to delve back into it. FYI, Gemini API keys are available for free now too.

  • @sabofx
    @sabofx 1 year ago +1

    Great tutorial! I would love to see you demonstrate some original complex multistep examples (that could not be executed by a single prompt in chatgpt).

    • @awee1234
      @awee1234 11 months ago

      Exactly my thought!

  • @AnotherMaker
    @AnotherMaker 1 year ago +1

    I freaking love your videos. Can I make one request? Will you consider not quite doing your screencap to the very bottom of the screen? Since your code snippets aren't in the description, I'm constantly fighting the youtube playback bar to see the last thing you've typed. Keep up the awesome work.

    • @AnotherMaker
      @AnotherMaker 1 year ago

      i.imgur.com/wKlvbcp.png

    • @user-cz8ks8ve7x
      @user-cz8ks8ve7x 1 year ago

      You can navigate through YouTube videos with these keyboard shortcuts:
      Use the "Left Arrow" key to rewind the video by 5 seconds.
      Use the "Right Arrow" key to fast forward the video by 5 seconds.
      For more precise control:
      Hold down the "Shift" key.
      Use the "Left Arrow" key to rewind by 1 second.
      Use the "Right Arrow" key to fast forward by 1 second.

  • @GuildOfCalamity
    @GuildOfCalamity 1 year ago +5

    Love these tutorial videos.
    Hopefully Ollama will have a release for Windows soon.

  • @michaelmarkoulides7068
    @michaelmarkoulides7068 1 year ago

    Matthew you save me so much time figuring stuff out ! I appreciate you and your channel so much

  • @Ullibrightfalls
    @Ullibrightfalls 1 year ago +2

    If you can't run Ollama because you're on Windows like me, you can use LM Studio to do the same. There is a local server function as well.

  • @aumatto
    @aumatto 1 year ago

    A++ love your work Matty! Lots of love from Perth, Western Australia - keep up the good work!

  • @officialdiadonacs
    @officialdiadonacs 1 year ago

    Note: If you're doing this in a Windows PowerShell conda environment, the command is $env:OPENAI_API_KEY = "sk-youropenaiapikeyhere"
    Thanks for all the great work, Mr. Berman 🙏
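
    If you want the key to survive new PowerShell sessions, one option (a sketch; the key value is a placeholder) is to persist it at the user level:

    # current session only
    $env:OPENAI_API_KEY = "sk-your-key-here"

    # persists for your user account; takes effect in newly opened shells
    [System.Environment]::SetEnvironmentVariable("OPENAI_API_KEY", "sk-your-key-here", "User")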

    • @zerohcrows
      @zerohcrows 1 year ago

      Just tried this and it's not working for me.

    • @officialdiadonacs
      @officialdiadonacs 1 year ago

      @@zerohcrows Yeah, I am having issues too connecting LiteLLM with WSL Ubuntu. Trying to install the conda env on the Linux side but running into issues with that as well. I will get back to you if I can find a fix.

  • @gbengaomoyeni4
    @gbengaomoyeni4 1 year ago +1

    Thanks a mil, Matthew. Your tutorials are always top-notch.

  • @korseg1990
    @korseg1990 1 year ago +19

    That's cool. It would be great to try to set up a team of developers (project manager, frontend, backend, QA, DevOps, UI/UX) with local models, and give them a simple project to accomplish, as for instance a business owner would. I'm really wondering how good it can be for something related to a simple business case, like a landing page or promo product page.

    • @Hedonist87
      @Hedonist87 1 year ago +1

      Everything is ready, the whole team

    • @PigOnPCIn4K
      @PigOnPCIn4K 1 year ago +1

      I am a small business owner here for the same thing. Been using similar models including ChatDev for a while, but I haven't found many good uses yet; it's mainly been me spending time making the outputs viable or troubleshooting the output scripts, etc... I know some folks are automating the video lead process though.

    • @r34ct4
      @r34ct4 1 year ago +3

      We are still limited by context windows are we not? Having this many layers would overflow the buffer.

    • @wasjosh
      @wasjosh 1 year ago

      I've been doing this with chatdev, crew ai and gpt pilot, it's pretty neat but looks like autogen studio might have em beat.

    • @sabino.software
      @sabino.software 1 year ago

      @@r34ct4 Still limited, yes, but think of each agent having its own token limit. In a multi-agent setup, the token limit constraint becomes less of a bottleneck, especially in workflows involving multiple, smaller tasks.

  • @brianhansen6481
    @brianhansen6481 1 year ago

    I like that you always add the local llm twist. Thank you

  • @RogueExplorer75
    @RogueExplorer75 1 year ago +3

    Great video. It would be cool if you could demonstrate how AutoGen projects are deployed in SaaS production environments - perhaps do a complete case study from ideation, through development, to final deployment - thanks 🙂

  • @ameet2000
    @ameet2000 11 months ago

    Nice work, your videos are much appreciated! You've caught me up so quickly and now I'm experimenting with my own agents. thank you!

  • @joshuaprivett3552
    @joshuaprivett3552 11 months ago

    I really appreciate this. I don't pay for the monthly sub to GPT- I only have the API access, so I don't have access to the image generator. This allows me to use their image generator without paying for the monthly sub!

  • @Steve-iz5li
    @Steve-iz5li 11 months ago

    Awesome video man - thank you for making this clean and easy.

  • @restrollar8548
    @restrollar8548 1 year ago +2

    Great vid, Matt. Would be great if you could do a more in-depth vid on skills, e.g. web search with Google or other similar things that limit agents at the moment.

  • @danialn
    @danialn 1 year ago +1

    “And it’s going to insult all the packages you need” 1:11

  • @agentesAi
    @agentesAi 11 months ago

    Thanks a lot Matthew !! Great tutorial !!!

  • @TimKitchens7
    @TimKitchens7 1 year ago

    I watch nearly every video you create Matthew! I really like the way you teach these topics. I always leave with something that I can actually start using right away! I had toyed with AutoGen Studio for a few minutes and you clarified several important parts that hadn't been clear to me. Please do continue to include steps for running with local LLMs. I think that part is super important.

  • @leegregory5617
    @leegregory5617 1 year ago +4

    Great video! I'm going to try this. Is there any way to connect this locally to stable diffusion models to create images in the way that you created images using DALL-E3 with gpt?

  • @maxivy
    @maxivy 1 year ago +1

    “Hi welcome to McDonald’s”
    “Can I have a large iced coffee”
    “Sure! Will that be with GPT4 or local models”

  • @pebre79
    @pebre79 1 year ago

    Awesome. Thanks for posting. This will exponentially improve productivity Matt🤓

  • @CatLee22
    @CatLee22 1 year ago +8

    We encounter issues when configuring the agents to use local models instead of an OpenAI key. Are there similar problems or proposed solutions? We implemented it with LM Studio instead of Ollama because we faced error messages in LiteLLM: 'ModuleNotFoundError: No module named 'pkg_resources'.' However, we consistently receive the following error message: 'openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable.'
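
    One thing that has unblocked this for others in the thread, assuming LM Studio's local server is running on its default port 1234: give the OpenAI client a placeholder key and check the server is actually reachable before blaming AutoGen.

    export OPENAI_API_KEY=sk-placeholder      # any non-empty value; a local server never validates it
    curl http://localhost:1234/v1/models      # should list the model loaded in LM Studio

    The model entry in AutoGen Studio would then point at http://localhost:1234/v1 as the base URL with that same placeholder key (this is a sketch based on LM Studio's OpenAI-compatible server, not something shown in the video).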

    • @thefutureisbright
      @thefutureisbright 1 year ago +5

      I'm also getting this error using litellm and ollama

    • @CatLee22
      @CatLee22 1 year ago

      @@thefutureisbright You need to set the environment variable for ChatGPT; then you can fix the API key error.

    • @FinnNegrello
      @FinnNegrello 10 months ago +1

      Any luck? I'm also having this error

    • @CatLee22
      @CatLee22 10 months ago

      @@FinnNegrello not really

  • @Chris-se3nc
    @Chris-se3nc 11 months ago +1

    It's like a trimmed-down, open-source version of the enterprise-ready Watson X Orchestrate on AWS.

  • @CodewithFemi
    @CodewithFemi 1 year ago

    For new beginners, in case you run into these errors trying to follow the video example exactly:
    1. Error occurred while processing message: Connection error.
    2. Cannot generate chart
    Problem 1 Solution
    - Make sure your payment method is updated in the OpenAI account where you generated your API key
    - Ensure that your credit balance is more than $0.00
    Problem 2 Solution
    Run the below before you start up the autogenstudio ui:
    pip install yfinance pandas matplotlib numpy
    You are welcome!
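
    For context, the install would go in the same conda environment that runs the UI, before relaunching it (a sketch; the port below is the one used in the video and may differ on your machine):

    pip install yfinance pandas matplotlib numpy
    autogenstudio ui --port 8081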

  • @DonAngelo333
    @DonAngelo333 1 year ago

    Impressive! Thanks for sharing so quickly!

  • @MeinDeutschkurs
    @MeinDeutschkurs 1 year ago

    That's promising. Just downloaded CrewAI, but this matches my needs more.

  • @samuelsamuel5505
    @samuelsamuel5505 1 year ago +2

    Awesome. Really awesome. Thank you so much for this. Can you do a real-world application tutorial on how to use AutoGen? Sort of a case study.

  • @henrychien9177
    @henrychien9177 1 year ago +3

    Is it possible to use the Gemini API or LM Studio?

  • @Jibs-HappyDesigns-990
    @Jibs-HappyDesigns-990 1 year ago

    Real nice, Matthew! Thank you so much for exposing me to your channel!!! You're the super helpful quicky King! :) Good luck!

  • @JohnLewis-old
    @JohnLewis-old 1 year ago +5

    Thank you for explaining this! Can you do a detailed comparison between AutoGen Studio and CrewAI? I'm torn between them.

    • @luismatabrito
      @luismatabrito 1 year ago

      Autogen has way more money to develop faster I guess..

  • @build.aiagents
    @build.aiagents 1 year ago +1

    Oh boy, excited to see what’s in store, grabs 🍿 😁

  • @amitkumarsingh4489
    @amitkumarsingh4489 1 year ago

    Thank you for an excellent introduction.

  • @PnchBagTF2
    @PnchBagTF2 1 year ago

    13:04 Classic Mistral. I think the Mistral dataset had no jokes in it except for this one.

  • @boriskrumrey6501
    @boriskrumrey6501 1 year ago +1

    Great video. I followed all the instructions, but when I wanted to test the stock price example it came back with "Error occurred while processing message: Connection error." Is there something that needs to be enabled on macOS to run it?

  • @wood6454
    @wood6454 1 year ago

    The future of AI really is what Karpathy said: a bunch of specialized AI models working together to perform complex tasks, like a computer.

  • @akilja2011
    @akilja2011 1 year ago +7

    Since Ollama is still only available for Mac users it would be great to see how to set up local LLMs with something like LM Studio

    • @matthew_berman
      @matthew_berman  1 year ago +5

      The process is very similar, just start a server and plug in the URL to the agent.

    • @albyt3403
      @albyt3403 1 year ago

      @@matthew_berman For the life of me I can't make AutoGen in WSL2 and LM Studio communicate, no matter what I use (localhost, the WSL IPv4, the computer's IPv4) or if I turn off the firewall. It won't even register as an event in the LM Studio console, and I tried another app outside of WSL2 and it's working.

    • @KolTregaskes
      @KolTregaskes 1 year ago

      @@matthew_berman What URL from LM Studio do you use? localhost:1234/?

    • @akilja2011
      @akilja2011 1 year ago

      @@matthew_berman I’ll give it a shot - thanks for the reply!

    • @wasjosh
      @wasjosh 1 year ago +1

      I use Ollama regularly on windows without fuss, I just run it from WSL.

  • @ponsaravanan
    @ponsaravanan 4 months ago

    Great video.
    Could be a recent development: now you may be able to connect the Ollama API directly, as it is OpenAI-compliant.
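
    A quick way to check that route (an assumption based on recent Ollama builds, which expose an OpenAI-compatible API under /v1 on port 11434):

    curl http://localhost:11434/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"model": "mistral", "messages": [{"role": "user", "content": "Say hello"}]}'

    If that returns a completion, http://localhost:11434/v1 can be used directly as the model's base URL, with any placeholder API key.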

  • @GearForTheYear
    @GearForTheYear 1 year ago +2

    For anyone wondering, no, code execution does not seem to work with the Mistral/Mixtral models. The system prompts that Autogen creates are a bit too complicated for these local models to be useful. I’d say wait a few months for some better local models to be released and then try it again.

    • @john849ww
      @john849ww 1 year ago

      Could it be an issue with the context length limitation? If not, we should try to unpack what "too complicated" means.

    • @GearForTheYear
      @GearForTheYear 1 year ago

      @@john849ww By 'too complicated' I mean that the smaller parameter models have trouble following detailed prompts. Context length is not usually the limiting factor for system prompt performance.

    • @john849ww
      @john849ww 1 year ago +1

      @@GearForTheYear ok thanks

    • @MrWizardGG
      @MrWizardGG 1 year ago

      ​@GearForTheYear has anyone tried it with the code llama models

  • @jpmottin
    @jpmottin 1 year ago

    ❤ Thank you so much for sharing your knowledge. You will probably save me hours of experimentation… Great and simple video!

  • @JuanRodriguez-ko7eh
    @JuanRodriguez-ko7eh 1 year ago +9

    The introduction of a Bitcoin ETF marks a groundbreaking moment in the cryptocurrency world, merging digital currencies with traditional investment methods. This innovation could stabilize Bitcoin prices and broaden its appeal to a wider range of investors, potentially increasing demand and value. At the heart of this evolution is Jason graystone fx, whose deep understanding of both cryptocurrency and traditional trading has been instrumental. His holistic approach to investment and commitment to staying abreast of market trends make him an invaluable ally in navigating this new era in cryptocurrency investment.

    • @doodee2392
      @doodee2392 1 year ago +1

      What I appreciate about Jason graystone fx is his ability to tailor strategies to individual needs. He recognizes that each investor has unique goals and risk tolerances, and he adapts his advice accordingly.

    • @camerontita7661
      @camerontita7661 1 year ago +2

      In a field as rapidly evolving as cryptocurrency, staying updated is crucial. Jason Graystone fx continual research and adaptation to the latest market changes have been instrumental in helping me make informed decisions

    • @sulaimanbala8873
      @sulaimanbala8873 1 year ago +2

      jason. is about limiting losses when you're wrong and maximising gains when you're right, not about being correct all of the time

    • @jonathanmorgan4803
      @jonathanmorgan4803 1 year ago +2

      Please I'm very much interested. How can I get in touch with Jason Graystone fx.

    • @SusanMelson-kk6ll
      @SusanMelson-kk6ll 1 year ago +1

      Bitcoin's role as a store of value and its potential for future growth make it an attractive investment option. BTC trading can be a thrilling way to participate in this digital asset's journey

  • @RichardGetzPhotography
    @RichardGetzPhotography 1 year ago

    This is great. Thanks Matthew! It would be helpful to do a multi-agent programming workflow.

  • @ChrisOrillia
    @ChrisOrillia 1 year ago

    said something, he already did it, explains, incorporates quick cuts, upload Bermanogen

  • @Tokaint
    @Tokaint 1 year ago

    Wait, so can this be used as an alternative to custom GPTs with the same features?
    If so, can you please make a tutorial specifically dedicated to local custom GPTs? I've seen a lot of people also needing something like this.
    If not, then I'd love for someone to explain the differences between Agents and GPTs, or at least the use cases. Of course everyone uses them differently; for example, some people think GPTs are useless, but for someone like me who needs a model to output based on specific premade knowledge bases, GPTs are insanely helpful. I never understood the use case for Agents though.

  • @tarmiziizzuddin337
    @tarmiziizzuddin337 1 year ago +3

    Will LM Studio work with AutoGen?

  • @VastIllumination
    @VastIllumination 1 year ago +3

    Thank you for this. I was just wondering last night the best way to run local LLM's with AutoGen Studio.

  • @hansgruber3495
    @hansgruber3495 1 year ago +3

    I found a small error when using only local models.
    You still have to define the OPENAI_API_KEY variable, or AutoGen will complain about it, even though GPT is not used anywhere.
    When I set it to anything, AutoGen is happy 🤖
    Thanks a lot for this video, as always a pleasure 👍♥

    • @brianhansen6481
      @brianhansen6481 1 year ago +3

      I think I'm having the same issue: "The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable". I did export OPENAI_API_KEY=dummy, but still get the error. I'm in the correct conda environment as well. Restarted both litellm and autogenstudio. Any other changes you made? I'm running Linux, which might have something to say.
      Edit: I figured it out, I copied the wrong port when setting up litellm. This resulted in a 404 error. The key part is also needed though. Thanks Hans, you led me to the answer.
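
      For anyone hitting the same 404: the thing to copy is the proxy address LiteLLM prints when it starts (a sketch, assuming the LiteLLM + Ollama setup from the video; the exact port can vary between versions):

      litellm --model ollama/mistral
      # LiteLLM prints the proxy URL it is serving on, e.g. http://0.0.0.0:8000
      # that URL (not Ollama's own port 11434) is what goes into the model's base URL in AutoGen Studio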

    • @AndreasAliasPeterPan
      @AndreasAliasPeterPan 1 year ago

      @@brianhansen6481 Having the same issue; a random key does not solve it (using Ollama under a Win11/WSL2 setup). Found no solution so far 😞

    • @mog22utube
      @mog22utube 1 year ago

      @@brianhansen6481 I'm on Windows and having the same error. After setting the API key in AutoGen I'm now getting a connection error. Any tips?

    • @brianhansen6481
      @brianhansen6481 1 year ago

      @@mog22utube Might be the port number from litellm. It writes 2 addresses when you start hosting; make sure you pick the right one. And remember to change the model for your workflow as well.

    • @VastIllumination
      @VastIllumination 1 year ago +1

      Thank you. Ran into this issue as well. Your solution fixed it instantly!

  • @serhiilytvyn8753
    @serhiilytvyn8753 1 year ago

    Thanks a lot! Very good tutorial!

  • @lenderzconstable
    @lenderzconstable 1 year ago +1

    I wish I could somehow be brought up to speed on the level of skill needed to be able to do this confidently and comfortably and be able to explain any aspect you mentioned. I would pay someone to teach me.
    Edit: 06:06:00-06:17:00 Beautiful 👍🏻👍🏻👍🏻

    • @Elsombrero512
      @Elsombrero512 1 year ago

      Learn Linux; that'll allow you to run and understand the commands being run here. Then learn some basic Python so you know how to configure these models. Once comfortable with that, I would learn web development so you can interface with the AI tools you build in Python. After learning that, learn the mathematics of machine learning, i.e. linear algebra, calculus, statistics and so on. Finally, learn a framework like TensorFlow. TensorFlow is a machine learning framework.

    • @Elsombrero512
      @Elsombrero512 1 year ago

      With those first 2 steps though you can do most things that exist, like set up a RAG system or Agent

  • @weedsandwildflowersshop4461
    @weedsandwildflowersshop4461 11 months ago

    Worked like a charm 🎉🎉❤❤

  • @Baleur
    @Baleur 1 year ago +2

    Can't solve it with the local Mistral model.
    When I finally got the OPENAI_API_KEY set, it just freezes on sending any input to the LLM via AutoGen.
    Chatting normally in the Ubuntu terminal works, but nothing from AutoGen gets properly sent.
    (Using AutoGen in my Windows browser, while everything else is running in Ubuntu terminals.)

    • @devonkincaid360
      @devonkincaid360 1 year ago

      I'm having trouble in Ubuntu as well; I keep getting "Error occurred while processing message: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable", even though I'm using local models with Ollama. I'm not sure what the issue is since there's no api_key for local. I might just stick to Crew for now.

  • @Imakemvps
    @Imakemvps 7 months ago

    What I have the most trouble with is understanding how to apply this in the real world. Could you do a use case from start to finish? For example, can you get an agent to create some sort of content and post it on TikTok? I would love to see how to create that from scratch and build from there.

  • @peralser
    @peralser 1 year ago

    Amazing video and explanation. Thanks a lot for your effort and time.

  • @ernstmayer3868
    @ernstmayer3868 1 year ago +2

    Tried to replicate the Mistral workflow: "Error occurred while processing message: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable".
    I have no GPT-4 model or similar left, only Mistral everywhere.

  • @Hedonist87
    @Hedonist87 11 months ago

    I can finally hire 100 employees who don’t call in sick

  • @teleprint-me
    @teleprint-me 1 year ago +4

    "100% local": proceeds to use OpenAI API key.

  • @BoogieBoogsForever
    @BoogieBoogsForever 1 year ago

    I like the vids and have subbed.
    Not trying to be a downer, just trying to understand the hassle:benefit ratio in this.

  • @shrn680
    @shrn680 1 year ago +1

    Is there a way to have more than 2 agents in a workflow? I couldn't see how to add any more in the GUI?

  • @ProfessorOfCookies
    @ProfessorOfCookies 9 months ago

    Dude, you are the spitting image of Rich Fulcher when he was younger; you even have his speaking mannerisms xD

  • @ieye5608
    @ieye5608 1 year ago +1

    It's finally here :D

  • @macnfoster
    @macnfoster 11 months ago +1

    When trying to use Ollama with "ollama run mistral" I get: "'ollama' is not recognized as an internal or external command,
    operable program or batch file." I have it installed, and I can see it's running. Am I missing something?
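
    At the time of this video Ollama had no native Windows build, so the usual workaround was to run it from inside WSL rather than the Windows command prompt (a sketch; the install URL may have changed since):

    wsl                                             # drop into your WSL / Ubuntu shell
    curl -fsSL https://ollama.com/install.sh | sh   # install Ollama inside WSL
    ollama run mistral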

    • @macnfoster
      @macnfoster 11 months ago

      Also, it tells me gpt-4 does not exist.

  • @michamohe
    @michamohe 1 year ago

    I'd like to see something that breaks down how to have multiple local LLMs controlling agents at the same time.
    The swarm I'm wanting to make is going to have GPT-4, nous-hermes2-mixtral, mixtral, dolphin-mixtral, and mistral-openorca, because a Mixtral Orca variant hasn't made its way to Ollama yet.
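
    One way to serve several local models behind a single endpoint is a LiteLLM proxy config (a sketch with a hypothetical file name; the model names must match what you have already pulled in Ollama):

    # litellm_config.yaml
    model_list:
      - model_name: mixtral
        litellm_params:
          model: ollama/mixtral
      - model_name: dolphin-mixtral
        litellm_params:
          model: ollama/dolphin-mixtral

    # start the proxy, then point each agent's model entry at the URL it prints
    litellm --config litellm_config.yaml

    Each AutoGen Studio model entry can then share the same base URL but use a different model name.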

  • @stanTrX
    @stanTrX 8 months ago

    Thanks. How do you use the workflow description correctly? Does it affect the output?

  • @kawwabonga
    @kawwabonga 1 year ago

    Would be nice to see some real-world agent applications for personal use. Maybe something like a tool that would help you research and compare different options when you want to buy something, and then search for the best deals. For instance, set a goal to find the best wireless headphones in a certain price range that have a certain set of features.

  • @sennetor
    @sennetor 1 year ago +1

    Cool - Running it locally on my 4xA100 system and I'm thrilled by this UI platform's promise.

    • @JakeHall-o9g
      @JakeHall-o9g 1 year ago +1

      I hope you are using something more powerful than Mistral with that set up 😂

    • @sennetor
      @sennetor 1 year ago

      @@JakeHall-o9g Dolphin uncensored based on Mixtral 8x7b full precision and another couple of smaller multimodal models alongside.

    • @akhilsharma2712
      @akhilsharma2712 1 year ago

      @@JakeHall-o9g 4xA100 is wild lol

    • @daryladhityahenry
      @daryladhityahenry 1 year ago +1

      Hi. How is it going with the local LLM? Is it performing well on workflows/teams?

    • @sennetor
      @sennetor 1 year ago

      @@daryladhityahenry Going great; keeping inferencing on a separate GPU platform seems to be the way to go, unless I'm putting it into a factory or something critical for high-speed video analytics like flying drones.

  • @robertheinrich2994
    @robertheinrich2994 1 year ago

    I see another massive idea for what this thing could do:
    I don't know if the Austrian RIS (Rechtsinformationssystem, basically the Austrian collection of all laws and court decisions) has an API to connect to, but if there is one, an LLM with various agents could try to find all the relevant info for a potential case.
    This of course should work with other countries' online law collections too.
    Need to do something special? Ask the agents and they start cracking.

  • @samsontan1141
    @samsontan1141 1 year ago +1

    How can we use LM Studio instead of Ollama on Windows?

  • @khaledalshammari857
    @khaledalshammari857 1 year ago +1

    What about Windows if I want to run a local LLM? We don't have Ollama :/

  • @wardehaj
    @wardehaj 1 year ago

    Nice video. When would you recommend using AutoGen, and when Open Interpreter?

  • @fbravoc9748
    @fbravoc9748 1 year ago

    Amazing tutorial!! Thanks!! It would be great to learn how we could connect Agents to an SQL database

  • @seppimweb5925
    @seppimweb5925 1 year ago +1

    Would be cool to see more sophisticated examples

  • @julianmaya3753
    @julianmaya3753 1 year ago

    @mathew can you talk about hardware requirements or point to an existing video?

  • @RiddleMaster-wi9yw
    @RiddleMaster-wi9yw 1 year ago

    I just had a zapier ad when you were talking about connecting to zapier 💀

  • @PigOnPCIn4K
    @PigOnPCIn4K 1 year ago +1

    I'd love to see how to do something unique with this; all these AutoGen vids are just the same default tasks it was released with...

  • @Order_of_the_Night
    @Order_of_the_Night 1 year ago +1

    I haven't figured out how to set up my own group chat agent. AutoGen gives the Travel Agent Group Chat Workflow, but in there under Receiver>group_chat_manager>Group Chat Agents, I can't actually add my custom agents. Any ideas?

  • @jguillengarcia
    @jguillengarcia 1 year ago

    Which one do you recommend? CrewAI or AutoGen Studio?

  • @darkmelodiesay
    @darkmelodiesay 1 year ago +1

    Step by step on running with LM Studio? I get an OpenAI API error.

  • @lesmilansdevany8014
    @lesmilansdevany8014 10 months ago

    Do you plan to make a tutorial about LangGraph? Or is AutoGen just better?

  • @AizenAwakened
    @AizenAwakened 1 year ago +5

    Anyone else getting an error message asking for an OpenAI API key when trying to run on a local model?

    • @tobiakilo3413
      @tobiakilo3413 11 months ago

      I did. Did you figure it out?

    • @AizenAwakened
      @AizenAwakened 10 months ago +1

      @tobiakilo3413 Yeah, sort of. The version of AutoGen in this video needs to have a placeholder key (i.e. "sk-not-needed"). Although I ran into the same issue with AutoGen 2 and it took some tinkering to get past that error.
      If you are savvy with Python notebooks, I recommend the non-Studio version of AutoGen. More control, less UI bug complexity.

  • @lenderzconstable
    @lenderzconstable 1 year ago +1

    Did you say you were a hobbyist recently? You seem like a Pro!

    • @matthew_berman
      @matthew_berman  1 year ago

      If I never call myself a pro, I force myself to keep learning!

  • @rajeshberry6264
    @rajeshberry6264 1 year ago

    0:46: Is Conda something we need to install after Python is already installed, or is Conda a standalone app?

  • @penthoy
    @penthoy 11 months ago +1

    How do you turn on dark mode for autogenstudio? For me it's white by default; I couldn't find how at all from Google. Is it because of your default system/browser settings?

    • @penthoy
      @penthoy 11 months ago

      Nvm, found it, it's the button on the top right side next to the profile.

  • @chengduman
    @chengduman 1 year ago +1

    Where is your Autogen Expert tutorial? Still looking forward to it!

    • @matthew_berman
      @matthew_berman  1 year ago +2

      Got multiple coming. Autogen studio with tools is next