Wait, so can this be used as an alternative to custom GPTs with the same features? If so, can you please make a tutorial specifically dedicated to local custom GPTs? I've seen a lot of people also needing something like this. If not, then I'd love for someone to explain the differences between Agents and GPTs, or at least the use cases. Of course everyone uses them differently. For example, some people think GPTs are useless, but for someone like me who needs a model to output based on specific premade knowledge bases, GPTs are insanely helpful. I never understood the use case for Agents, though.
I found a small error when using only local models. You still have to define the OPENAI_API_KEY variable, or Autogen will complain about it, even though GPT is not used anywhere. When I set it to anything, Autogen is happy 🤖 Thanks a lot for this video, as always a pleasure 👍♥
I think I'm having the same issue: "The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable". I did export OPENAI_API_KEY=dummy, but still get the error. I'm in the correct conda environment as well. Restarted both litellm and autogenstudio. Any other changes you made? I'm running Linux, which might have something to say. Edit: I figured it out. I copied the wrong port when setting up litellm, which resulted in a 404 error. The key part is also needed, though. Thanks Hans, you led me to the answer.
@@mog22utube might be the port number from litellm. It writes 2 addresses when you start hosting. Make sure you pick the right one. And remember to change the model for your workflow as well
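For anyone landing here with the same error, the workaround described in this thread boils down to the following (the exact value is arbitrary, it just has to be non-empty; "sk-not-needed" is a made-up placeholder):

```shell
# Placeholder key for local-only setups. The OpenAI client just checks that
# the variable is set; the value is never validated against OpenAI's servers.
export OPENAI_API_KEY="sk-not-needed"
echo "$OPENAI_API_KEY"
```

Remember it only persists for the current terminal session, so set it in the same shell you launch autogenstudio from.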
I wish I could somehow be brought up to speed on the level of skill needed to be able to do this confidently and comfortably and be able to explain any aspect you mentioned. I would pay someone to teach me. Edit: 06:06:00-06:17:00 Beautiful 👍🏻👍🏻👍🏻
Learn Linux; that'll allow you to run and understand the commands being run here. Then learn some basic Python so you know how to configure these models. Once comfortable with that, I would learn web development so you can interface with the AI tools you build in Python. After that, learn the mathematics of machine learning, i.e. linear algebra, calculus, statistics, and so on. Finally, learn a framework like TensorFlow. TensorFlow is a machine learning framework.
Can't solve it with the local Mistral model. When I finally got the OPENAI_API_KEY set, it just freezes on sending any input to the LLM via Autogen. Chatting normally in the Ubuntu terminal works, but nothing from Autogen gets properly sent. (Using Autogen in my Windows browser, while everything else is running in Ubuntu terminals.)
I'm having trouble in Ubuntu as well. I keep getting "Error occurred while processing message: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable", even though I'm using local models with Ollama. I'm not sure what the issue is, since there's no api_key for local. I might just stick to crew for now.
What I have the most trouble with is understanding how you apply this in the real world. Could you do a use case from start to finish? For example, can you get an agent to create some sort of content and post it on TikTok? I would love to see how to create that from scratch and build from there.
Tried to replicate the Mistral workflow: "Error occurred while processing message: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable". I have no GPT-4 model or similar left, only Mistral everywhere.
When trying to use the ollama with "ollama run mistral" I get: "'ollama' is not recognized as an internal or external command, operable program or batch file." I have it installed, and I can see it's running. Am I missing something?
I'd like to see something that breaks down how to have multiple local LLMs controlling agents at the same time. the swarm I'm wanting to make is going to have GPT4, nous-hermes2-mixtral, mixtral, dolphin-mixtral, and mistral-openorca because a mixtral orca variant hasn't made its way to ollama yet.
Would be nice to see some real-world agent applications for personal use. Maybe something like a tool that helps you research and compare different options when you want to buy something, and then searches for the best deals. For instance, set a goal to find the best wireless headphones in a certain price range that have a certain set of features.
@@daryladhityahenry going great, keeping inferencing on a separate GPU platform seems to be the way to go unless I'm putting it into a factory or something critical for high speed video analytics like flying drones
I see another massive idea for what this thing could do: I don't know if the Austrian RIS (Rechtsinformationssystem, basically the Austrian collection of all laws and court decisions) has an API to connect to, but if there is one, an LLM with various agents could try to find all the relevant info for a potential case. This should of course work with other countries' online law collections too. Need to do something special? Ask the agents and they start cracking.
I haven't figured out how to set up my own group chat agent. AutoGen gives the Travel Agent Group Chat Workflow, but in there under Receiver>group_chat_manager>Group Chat Agents, I can't actually add my custom agents. Any ideas?
@tobiakilo3413 yeah, sort of. The version of Autogen in this video needs to have a placeholder key (i.e. "sk-not-needed"). Although I ran into the same issue with Autogen 2, and it took some tinkering to get past that error. If you are savvy with Python notebooks, I recommend the non-studio version of AutoGen. More control, less UI bug complexity.
How do you turn on dark mode for AutoGen Studio? For me it's white by default, and I couldn't find out how at all from Google. Is it because of your default system/browser settings?
What tutorial do you want next about AutoGen Studio?
Set up a workflow for a specific department of a traditional company, i.e. Human Resources or the billing department.
Build a SaaS
It would be workflows, a lot of agents, and more examples.
Use it with lmstudio, memgpt and a rag like chroma DB 🙏🏿
Autogen Workflows need to be a site after this video, this will be the coming thing, Autogen templates and workflows… so maybe building a platform or infrastructure for that 😅 this is the new gpt store, Autogen workflows, said it here first ☝🏽😂 a civitai of sorts
I'm so grateful for how quickly you figure this stuff out and then articulate it. Thank you for the hundreds of hours you save me.
+100% agree
For real
Really appreciate these videos!
Thanks so much!
Thanks!
thanks so much!
@@matthew_berman anytime sir 🫡
Very cool. Amazing how over the span of 3 months we went from an all command line version to this interactive webui solution. Can't wait to try it out.
This is great. I would love to see a comparison breaking down CrewAI vs Autogen and theirs pros/cons/use cases.
Second this!
Third this
4th this!
Can someone please tell me what the fuck just happened here?? Not laughing at all . As a business owner how would this help me ? What real world applications?
PS. When he says AGENT, is that similar to, say, an A.I. Companion, aka a Large Action Model like the new 🐇 RABBIT tech???
@@zacharywalker9102 that's what everyone is trying to figure out. It all depends on the tasks you do every day that take hours of your work day. For instance, you keep sending emails to clients, or you do blogging for the company, or you need to get some data from the internet. You have people to do these tasks, right? Why don't you fire them and leave one guy who is going to use AI to complete these tasks?
For Windows users: set OPENAI_API_KEY= (Instead of 'export OPENAI_API_KEY=' )
Thank you🥰🥰
THANK YOU!!
Thank you this was driving me bonkers! XD
How to set API URL base?
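Not the OP, but here's a sketch of one way, assuming the client is the openai Python library v1+ (which reads this environment variable as its base URL) and that your local server listens on port 1234 (LM Studio's default; adjust for your setup):

```shell
# Point OpenAI-compatible clients at a local server instead of api.openai.com.
# The URL below is an assumption -- use whatever host/port your server prints.
export OPENAI_BASE_URL="http://localhost:1234/v1"
echo "$OPENAI_BASE_URL"
```

If the tool has its own model-config UI (like AutoGen Studio's base URL field), setting it there per-model may be more reliable than the environment variable.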
@@AngusLou If you're using conda like in the tutorial and the above isn't working, try the following:
-----------------------------------------------------------
conda env config vars set OPENAI_API_KEY=[YOUR API KEY] -n ag
-----------------------------------------------------------
This will permanently set the OPENAI_API_KEY environment variable for the conda environment "ag" (obviously change that at the end of the command if you named your conda environment something different).
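Spelled out as a sketch (placeholder key, and "ag" is just the env name from the tutorial):

```shell
# Persist the variable inside the conda env, verify it took, then
# re-activate the env so the change is picked up.
conda env config vars set OPENAI_API_KEY="sk-your-key-here" -n ag
conda env config vars list -n ag
conda deactivate && conda activate ag
```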
ALSO NOTE: I noticed there's a bug in AutogenStudio - if you change an agent to use a different LLM, it does NOT update that agent in the Workflows. You either have to create a new Workflow with the updated agent, or modify the agent in the Workflow itself.
I only realized this when I noticed Mistral wasn't outputting anything in the console, and then realized AutogenStudio was actually using GPT4 instead of Mistral like I told it to. Only after a lot of digging did I find the Workflows didn't update after I changed the agents to use Mistral. Hopefully they fix that bug, but that might also be why it's calling for the OPENAI_API_KEY even if you're "using" local LLMs.
Holy moley! There is not enough time in the day to experiment with all these new toys! What an exciting time to be alive! Thank you for the easy-to-understand video.
Yes this is exactly the selling point for Brain computer interfaces.... the rate of acceleration is slowly and steadily increasing
Yeah, but you still have the same rate of interaction with the data, which I guess is not a real problem yet. But if there is no groundbreaking shift in the rate at which information can be transmitted from computer systems to biological systems, then I don't think we will be maximizing value extraction from these systems. There are also more concerning issues, like monitoring behavior for deviancy, rebellion, and other potential "internal" issues buried within the weights of the models. For the incredible amount of money invested, it would be prudent to consider.
Matthew, you are amazing, a blessed man!!! I am an engineer that wants to learn about AI and I enjoy your videos so much!!! Thank you so much for the high quality videos!!!!!
Yesss! I've been waiting on this video from you! great job. Keep up the amazing work, your videos are very helpful even for the more "experienced" people working within AI
You are the only useful channel about AI, others talk about theory, but you show us how to accomplish real life projects. Bravo.
The speed at which innovation is happening is staggering and the speed at which you are making the videos is amazing
Have you seen the new RABBIT device? It runs on the first
Large Action Model
Now the Rabbit could literally be working for you 24/7 and making you money while you sleep 🛌!!
Matthew I am going to revoke this API key Berman
To be fair, he'd get spammed with folks saying he should revoke if he didn't
He used to get half the comments giving him "advice" to revoke the key 😂
This comment had me rolling with laughter. Thank you.
Haha I started saying it with him. It just feels right now.
@@MakeKasprzak exactly lol
Amazing. Could you do a video for coding? A developer team of agents (backend, frontend, quality assurance, testing team, business consultant, project requirements drafter, project planner, etc.) and have this produce a ready product.
Thank you for how quickly and clearly you move straight through the steps. Usually one has to skip parts and listen to the rest on high speed to get the info.
Thank you. Could you please create a tutorial on how different options in MemGPT, like function calling, custom instructions, and RAG with local files, work together?
The biggest benefit is that the new Autogen uses GPT-4 Turbo (now the cost is OK to play with it); the old Autogen used the old, expensive GPT-4. Thank you for the video.
This is awesome! I started working with AI after seeing your AutoGen video months ago. But my coding skills aren't strong, and I've moved on to other things like LM Studio, Faradev, Coze, MindStudio, LOLLM, etc. Every time I scroll past my autogen folder with the Python code, I feel kinda sad for it. This is amazing; I can't wait to delve back into it. FYI, Gemini API keys are available for free now too.
Great tutorial! I would love to see you demonstrate some original complex multistep examples (that could not be executed by a single prompt in chatgpt).
Exactly my thought!
I freaking love your videos. Can I make one request? Will you consider not quite doing your screencap to the very bottom of the screen? Since your code snippets aren't in the description, I'm constantly fighting the youtube playback bar to see the last thing you've typed. Keep up the awesome work.
i.imgur.com/wKlvbcp.png
You can navigate through RUclips videos with these keyboard shortcuts:
Use the "Left Arrow" key to rewind the video by 5 seconds.
Use the "Right Arrow" key to fast forward the video by 5 seconds.
For more precise control:
Hold down the "Shift" key.
Use the "Left Arrow" key to rewind by 1 second.
Use the "Right Arrow" key to fast forward by 1 second.
Love these tutorial videos.
Hopefully Ollama will have a release for Windows soon.
Matthew you save me so much time figuring stuff out ! I appreciate you and your channel so much
If you can't run Ollama because you're on Windows like me, you can use LM Studio to do the same. There is a local server function as well.
A++ love your work, Matty! Lots of love from Perth, Western Australia. Keep up the good work!
Note: If you're doing this in a Windows PowerShell conda environment, the command is $env:OPENAI_API_KEY = "sk-youropenaiapikeyhere"
Thanks for all the great work, Mr. Berman 🙏
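For reference, the session-scoped equivalents across shells (placeholder key; only the first line is live shell here, the others are shown as comments since they belong to other shells):

```shell
# bash/zsh (macOS, Linux, WSL):
export OPENAI_API_KEY="sk-youropenaiapikeyhere"
# Windows cmd.exe:      set OPENAI_API_KEY=sk-youropenaiapikeyhere
# Windows PowerShell:   $env:OPENAI_API_KEY = "sk-youropenaiapikeyhere"
echo "$OPENAI_API_KEY"
```

All three only last for the current session; close the terminal and you set it again.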
just tried this and its not working for me.
@@zerohcrows Yeah, I am having issues connecting LiteLLM with WSL Ubuntu as well. Trying to install the conda env on the Linux side, but running into issues with that too. I will get back to you if I can find a fix.
Thanks a mil, Matthew. Your tutorial is always top-notch
That's cool. It would be great to try to set up a team of developers: project manager, frontend, backend, QA, DevOps, UI/UX. With local models, and give them a simple project to accomplish as, for instance, a business owner would. I'm really wondering how good it can be for something related to a simple business case, like a landing page or promo product page.
Everything is ready, the whole team
I am a small business owner here for the same thing. I've been using similar models, including ChatDev, for a while, but I haven't found many good uses yet; it's mainly been me spending time making the outputs viable or troubleshooting the output scripts, etc. I know some folks are automating the video lead process though.
We are still limited by context windows are we not? Having this many layers would overflow the buffer.
I've been doing this with chatdev, crew ai and gpt pilot, it's pretty neat but looks like autogen studio might have em beat.
@@r34ct4still limited yes, but think of each agent having its own token limit. In a multi-agent setup, the token limit constraint becomes less of a bottleneck, especially in workflows involving multiple, smaller tasks
I like that you always add the local llm twist. Thank you
Great video. It would be cool if you could demonstrate how AutoGen projects are deployed in saas production environments - perhaps do a complete case study from ideation, development and final deployment - thanks 🙂
Nice work, your videos are much appreciated! You've caught me up so quickly and now I'm experimenting with my own agents. thank you!
I really appreciate this. I don't pay for the monthly sub to GPT- I only have the API access, so I don't have access to the image generator. This allows me to use their image generator without paying for the monthly sub!
Awesome video man - thank you for making this clean and easy.
Great vid Matt. Would be great if you could do a more in depth vid on skills, e.g. web search with google or other similar things that limit agents at the moment.
“And it’s going to insult all the packages you need” 1:11
Thanks a lot Matthew !! Great tutorial !!!
I watch nearly every video you create Matthew! I really like the way you teach these topics. I always leave with something that I can actually start using right away! I had toyed with AutoGen Studio for a few minutes and you clarified several important parts that hadn't been clear to me. Please do continue to include steps for running with local LLMs. I think that part is super important.
Great video! I'm going to try this. Is there any way to connect this locally to stable diffusion models to create images in the way that you created images using DALL-E3 with gpt?
“Hi welcome to McDonald’s”
“Can I have a large iced coffee”
“Sure! Will that be with GPT4 or local models”
Awesome. Thanks for posting. This will exponentially improve productivity Matt🤓
We encounter issues when configuring the agents to use local models instead of an OpenAI key. Has anyone seen similar problems or proposed solutions? We implemented it with LM Studio instead of Ollama because we faced error messages in LiteLLM: 'ModuleNotFoundError: No module named 'pkg_resources''. However, we consistently receive the following error message: 'openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable.'
I'm also getting this error using litellm and ollama
@@thefutureisbright You need to set the environment variable for ChatGPT, then you can fix the API key error.
Any luck? I'm also having this error
@@FinnNegrello not really
It's like a trimmed-down, open-source version of the enterprise-ready watsonx Orchestrate on AWS.
For the new beginners, in case you run into these errors trying to follow the video example exactly:
1. Error occurred while processing message: Connection error.
2. Cannot generate chart
Problem 1 Solution
- Make sure your payment method is up to date in the OpenAI account platform where you generated your API key
- Ensure that your credit balance is more than $0.00
Problem 2 Solution
Run the below before you start up the AutoGen Studio UI:
pip install yfinance pandas matplotlib numpy
You are welcome!
Impressive! Thanks for sharing so quickly!
That's promising. Just downloaded CrewAI, but this matches my needs more.
Awesome. Really awesome. Thank you so much for this. Can you do a real world application tutorial on how to use Autogen? Sorta a case study.
Is it possible to use the Gemini API or LM Studio?
Real nice, Matthew! Thank you so much for exposing me to your channel!!! You're the super helpful quickie king! :) Good luck!
Thank you for explaining this! Can you do a detail comparison between AutoGen Studio and CrewAI? I'm torn between them.
Autogen has way more money to develop faster I guess..
Oh boy, excited to see what’s in store, grabs 🍿 😁
Hope you enjoy!
@@matthew_berman🔥🔥🔥 as usual
thank you for an excellent introduction
13:04 Classic Mistral. I think the Mistral dataset had no jokes in it except for this one.
Great video. I followed all instructions but when I wanted to test the stock price example. It came back with "Error occurred while processing message: Connection error." Is there something that needs to be enabled on MacOS to run it?
The future of AI really is what Karpathy said where a bunch of specialized AI models will work together to perform complex tasks like a computer.
Since Ollama is still only available for Mac users it would be great to see how to set up local LLMs with something like LM Studio
The process is very similar, just start a server and plug in the URL to the agent.
@@matthew_berman For the life of me I can't make Autogen in WSL2 and LM Studio communicate, no matter what I use (localhost, the WSL IPv4, the computer's IPv4) or if I turn off the firewall. It won't even register as an event in the LM Studio console, and I tried another app outside of WSL2 and it's working.
@@matthew_berman What URL from LM Studio do you use? localhost:1234/?
@@matthew_berman I’ll give it a shot - thanks for the reply!
I use Ollama regularly on Windows without fuss; I just run it from WSL.
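One hypothetical way to narrow down the WSL2/LM Studio problem above: check whether the server is reachable at all from inside WSL2 before involving Autogen. Port 1234 is LM Studio's default, and from WSL2, localhost may not reach Windows, so you might need to substitute the Windows host IP:

```shell
# If this prints a JSON model list, networking is fine and the problem is in
# the Autogen config; if it prints the fallback, it's a WSL2/firewall issue.
curl -s --max-time 2 http://localhost:1234/v1/models || echo "server not reachable"
```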
Great video.
Could be a recent development. Now you may be able to connect the Ollama API directly, as it is OpenAI-compliant.
For anyone wondering, no, code execution does not seem to work with the Mistral/Mixtral models. The system prompts that Autogen creates are a bit too complicated for these local models to be useful. I’d say wait a few months for some better local models to be released and then try it again.
Could it be an issue with context length limitation? If not, should try to unpack what "too complicated" means.
@@john849ww By 'too complicated' I mean that the smaller parameter models have trouble following detailed prompts. Context length is not usually the limiting factor for system prompt performance.
@@GearForTheYear ok thanks
@GearForTheYear has anyone tried it with the code llama models
❤ Thank you so much for sharing your knowledge. You will probably save me hours of experimentation… Great and simple video!
This is great. Thanks Matthew! It would be helpful to do a multi-agent programming workflow.
said something, he already did it, explains, incorporates quick cuts, upload Bermanogen
Wait, so can this be used as an alternative to custom GPTs with the same features?
If so, can you please make a tutorial specifically dedicated to local custom GPTs? I've seen a lot of people also needing something like this.
If not, then I'd love for someone to explain the differences between agents and GPTs, or at least the use cases. Of course everyone uses them differently; for example, some people think GPTs are useless, but for someone like me who needs a model to output based on specific premade knowledge bases, GPTs are insanely helpful. I never understood the use case for agents, though.
Will LM Studio work with AutoGen?
Thank you for this. I was just wondering last night about the best way to run local LLMs with AutoGen Studio.
You're welcome, it's easy!
I found a small error when using only local models.
You still have to define the OPENAI_API_KEY variable, or AutoGen will complain about it, even though GPT isn't used anywhere.
When I set it to anything, Autogen is happy 🤖
Thanks a lot for this video, as always a pleasure 👍♥
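For anyone hitting the same complaint, a one-line sketch of the workaround (the value is an arbitrary placeholder, not a real key):

```python
# Workaround sketch: AutoGen Studio checks that OPENAI_API_KEY exists
# even when every workflow points at a local model. Setting any
# placeholder satisfies the check; "sk-not-needed" is arbitrary.
import os

os.environ.setdefault("OPENAI_API_KEY", "sk-not-needed")
print("key set:", bool(os.environ.get("OPENAI_API_KEY")))  # → key set: True
```

The same effect comes from `export OPENAI_API_KEY=sk-not-needed` in the shell before launching AutoGen Studio.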
I think I'm having the same issue: "The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable". I did export OPENAI_API_KEY=dummy but still get the error. I'm in the correct conda environment as well, and restarted both LiteLLM and AutoGen Studio. Any other changes you made? I'm running Linux, which might have something to do with it.
Edit: I figured it out. I copied the wrong port when setting up LiteLLM, which resulted in a 404 error. The key part is also needed, though. Thanks Hans, you led me to the answer.
@@brianhansen6481 having the same issue. a random key does not solve it. (using ollama under WIN11/WSL2 setup). found no solution so far 😞
@@brianhansen6481 I'm on Windows and having the same error. After setting the API key in AutoGen I'm now getting a connection error instead. Any tips?
@@mog22utube Might be the port number from LiteLLM. It prints two addresses when you start hosting; make sure you pick the right one. And remember to change the model for your workflow as well.
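A quick way to check which of the printed addresses is actually live. This is a generic TCP port probe, nothing LiteLLM-specific; 8000 is only LiteLLM's usual default:

```python
# Sketch: probe whether anything is listening on a given host:port.
# Useful when LiteLLM prints two addresses and the agent's base_url
# must match the live one; pointing at a dead port surfaces as 404s
# or connection errors inside AutoGen Studio.
import socket

def port_open(host="127.0.0.1", port=8000, timeout=0.5):
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

print(port_open(port=8000))  # True only if your proxy is running there
```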
Thank you. Ran into this issue as well. Your solution fixed it instantly!
Thanks a lot! Very good tutorial!
I wish I could somehow be brought up to speed on the level of skill needed to be able to do this confidently and comfortably and be able to explain any aspect you mentioned. I would pay someone to teach me.
Edit: 06:06:00-06:17:00 Beautiful 👍🏻👍🏻👍🏻
Learn Linux; that'll let you run and understand the commands being run here. Then learn some basic Python so you know how to configure these models. Once comfortable with that, I would learn web development so you can build interfaces for the AI tools you write in Python. After that, learn the mathematics of machine learning, i.e. linear algebra, calculus, statistics and so on. Finally, learn a framework like TensorFlow, which is a machine learning framework.
With those first 2 steps though you can do most things that exist, like set up a RAG system or Agent
Worked like a charm 🎉🎉❤❤
Can't solve it with the local Mistral model.
When I finally got the OPENAI_API_KEY set, it just freezes on sending any input to the LLM via AutoGen.
Chatting normally in the Ubuntu terminal works, but nothing from AutoGen gets properly sent.
(Using AutoGen in my Windows browser, while everything else is running in Ubuntu terminals.)
I'm having trouble in Ubuntu as well. I keep getting "Error occurred while processing message: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable", even though I'm using local models with Ollama. I'm not sure what the issue is, since there's no API key for local models. I might just stick with CrewAI for now.
What I have the most trouble with is understanding how to apply this in the real world. Could you do a use case from start to finish? For example, can you get an agent to create some sort of content and post it on TikTok? I would love to see how to create that from scratch and build from there.
Amazing video and explanation. Thanks a lot for your effort and time.
Tried to replicate the Mistral workflow: "Error occurred while processing message: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable".
I have no GPT-4 model or similar left, only Mistral everywhere.
I can finally hire 100 employees who don’t call in sick
"100% local": proceeds to use OpenAI API key.
Just export any value. Good to go.
I like the vids and have subbed.
Not trying to be a downer, just trying to understand the hassle:benefit ratio in this.
Is there a way to have more than 2 agents in a workflow? I couldn't see how to add any more in the GUI?
Dude, you are the spitting image of Rich Fulcher when he was younger; you even have his speaking mannerisms xD
It's finally here :D
Yes!
When trying to use Ollama with "ollama run mistral" I get: "'ollama' is not recognized as an internal or external command, operable program or batch file." I have it installed, and I can see it's running. Am I missing something?
Also, it tells me GPT-4 does not exist.
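One way to confirm whether the shell you're typing in can actually see the binary. This is a generic PATH check; on Windows the command often fails in cmd while the Ollama service itself runs fine elsewhere (e.g. inside WSL):

```python
# Diagnostic sketch: "'ollama' is not recognized" usually means the
# binary isn't on PATH for the current shell, even though the Ollama
# service is up and running in another environment.
import shutil

def find_tool(name):
    """Return the resolved path of a command, or a not-found message."""
    path = shutil.which(name)
    return path if path else f"{name} not found on PATH"

print(find_tool("ollama"))
```

If it reports not found, either add Ollama's install directory to PATH or run the command from the environment where it was installed.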
I'd like to see something that breaks down how to have multiple local LLMs controlling agents at the same time.
The swarm I want to make will have GPT-4, nous-hermes2-mixtral, mixtral, dolphin-mixtral, and mistral-openorca, because a Mixtral Orca variant hasn't made its way to Ollama yet.
Thanks. How do you use the workflow description correctly? Does it affect the output?
Would be nice to see some real-world agent applications for personal use. Maybe something like a tool that helps you research and compare different options when you want to buy something, and then searches for the best deals. For instance, set a goal to find the best wireless headphones in a certain price range with a certain set of features.
Cool - Running it locally on my 4xA100 system and I'm thrilled by this UI platform's promise.
I hope you are using something more powerful than Mistral with that set up 😂
@@JakeHall-o9g Dolphin uncensored based on Mixtral 8x7b full precision and another couple of smaller multimodal models alongside.
@@JakeHall-o9g 4xA100 is wild lol
Hi. How is it going with the local LLM? Is it performing well on workflows/teams?
@@daryladhityahenry Going great. Keeping inference on a separate GPU platform seems to be the way to go, unless I'm putting it into a factory or something critical for high-speed video analytics like flying drones.
I see another massive idea for what this thing could do:
I don't know if the Austrian RIS (Rechtsinformationssystem, basically the Austrian collection of all laws and court decisions) has an API to connect to, but if there is one, an LLM with various agents could try to find all the relevant information for a potential case.
This should of course work with other countries' online law collections too.
Need to do something special? Ask the agents and they start cracking.
How can we use LM Studio instead of Ollama on Windows?
What about Windows if I want to run a local LLM? We don't have Ollama :/
Nice video. When would you recommend using AutoGen and when Open Interpreter?
Amazing tutorial!! Thanks!! It would be great to learn how we could connect Agents to an SQL database
Would be cool to see more sophisticated examples
@mathew Can you talk about hardware requirements or point to an existing video?
I just had a zapier ad when you were talking about connecting to zapier 💀
I'd love to see how to do something unique with this; all these AutoGen vids are just the same default tasks it was released with...
I haven't figured out how to set up my own group chat agent. AutoGen gives the Travel Agent Group Chat Workflow, but in there under Receiver>group_chat_manager>Group Chat Agents, I can't actually add my custom agents. Any ideas?
Which one do you recommend, CrewAI or AutoGen Studio?
Step by step on running on LM Studio? I get an OpenAI API error.
Do you plan to make a tutorial about LangGraph? Or is AutoGen just better?
Anyone else getting an error message asking for an OpenAI API key when trying to run on a local model?
I did. Did you figure it out
@tobiakilo3413 yeah sort of. The version of Autogen in this video needs to have a placeholder key (i.e. "sk-not-needed"). Although I ran into the same issue with Autogen 2 and it took some tinkering to get past that error.
If you are savvy with Python notebooks, I recommend the non-Studio version of AutoGen. More control, less UI bug complexity.
Did you say you were a hobbyist recently? You seem like a Pro!
If I never call myself a pro, I force myself to keep learning!
0:46: Is Conda something we need to install after Python is already installed, or is Conda a standalone app?
How do you turn on dark mode for AutoGen Studio? For me it's white by default, and I couldn't find how anywhere on Google. Is it because of your default system/browser settings?
Nvm, found it. It's the button on the top right next to the profile.
Where is your Autogen Expert tutorial? Still looking forward to it!
Got multiple coming. Autogen studio with tools is next