Keep up the good work! Your tutorials are always thorough and informative - even for the experienced.
Hey I appreciate that! Thank you for watching
Finally a good tutorial on AG studio! Next suggested content: how to integrate AGS in a frontend able to manage input/output like text and files (pdf, images). For example:
external frontend > input (text) >> Autogen Studio flow >>> external frontend >>>> output (downloadable pdf with text and images inside).
#AutoGenStudioChallenge
Thank you, and that's a great workflow. AutoGen Studio will have some great updates soon, and I'll have something like this in a video soon.
You are a kind dude. Thanks for your hard work showing us so much information!
Great work, Tyler. Got a bit stuck because I am on an Ubuntu 22.04 virtual machine, but got it all working in the end. Great examples. I am fairly up to speed with Autogen and LangSmith, but I always pick up bits and pieces from watching resources like these. LM Studio is a nice touch.
Awesome, glad you were able to get it! I need to play around with Langsmith a little bit.
Awesome tutorial and subbed! It didn't say it, but this was the only one I found that allowed me to use both local and paid models simultaneously!
Thank you, I appreciate that so much!
Any chance you can do a video on calling the Claude 3 API? Also, would you consider doing some CrewAI tutorials? Thanks in advance! @@TylerReedAI
Hi Tyler, as an idea for future videos, I would suggest AutoGen Studio LOW-CODE real-world examples (maybe building a skills cookbook?). I.e.: agents that do web search + retrieval and embed the results together with uploaded PDFs to get an offline RAG considering both offline docs and online search results. Using Ollama would be a premium feature! Thank you for your enlightening videos, explained in a way that even a non-English speaker like me can understand well. Piero
That's a good idea.
I would suggest building a workflow for processing documents like a PDF.
1. Have agent 1 load enough of the document to stay within agent 2's context window.
2. Pass the information extracted from the document to agent 2.
3. Have agent 2 process the data - examples: summarization, extraction, conversion, question/answer extraction (the transformation required would be given in the initial prompt)
4. Have agent 2 or agent 3 progressively (or at the end - whatever works) save output of agent 2 to a file.
5. Repeat until the document is fully processed (a rough code sketch of this loop follows below).
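A rough sketch of what that loop could look like in plain pyautogen with pypdf (a minimal example with assumed file names and chunk size, not something from the video):

```python
import autogen
from pypdf import PdfReader

llm_config = {"config_list": [{"model": "gpt-3.5-turbo"}]}  # api_key is read from OPENAI_API_KEY

# Agent 2: does the actual processing (summarization in this sketch).
summarizer = autogen.AssistantAgent(
    name="summarizer",
    system_message="Summarize the text you are given in a few sentences.",
    llm_config=llm_config,
)

# Agent 1: feeds chunks and collects replies; no code execution or auto-replies needed.
loader = autogen.UserProxyAgent(
    name="loader",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=0,
    code_execution_config=False,
)

# Step 1: load the document and split it into context-window-sized chunks.
reader = PdfReader("document.pdf")
full_text = "\n".join(page.extract_text() or "" for page in reader.pages)
chunk_size = 3000  # characters per chunk; tune this for agent 2's context window
chunks = [full_text[i:i + chunk_size] for i in range(0, len(full_text), chunk_size)]

# Steps 2 to 5: pass each chunk to agent 2 and append its output to a file.
with open("summary_output.txt", "w") as out:
    for chunk in chunks:
        loader.initiate_chat(summarizer, message=f"Summarize this:\n{chunk}")
        out.write(loader.last_message(summarizer)["content"] + "\n\n")
```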
These are good suggestions! I have videos coming using LangChain and vector DBs for more context for the LLMs! And then this leads into real-world examples. I like your web search and PDF combiner suggestion for an offline RAG. 👍
I like this
You nailed it
Thank you, appreciate that!
Power Packed Teaching! Awesome Thanks Tyler.
My pleasure!
This is a really useful tutorial... Thanks for sharing this!👍
Excellent video. Will be wonderful to see some videos on building skill (library). Thank you.
Thank you, and I think a more advanced video on building the skills would be a good idea as well
Excellent course on building AI agents without code! Zapier is another fantastic no-code tool that can automate your workflows effortlessly. #NoCode #Zapier
It's super cool, thanks for sharing!
I hope Autogen people integrate a premade workflow to generate, test, and save skills/functions.
This would usually be done using a more capable model like GPT4, but then could be executed using a less advanced model like GPT3.5 or a local model.
As far as just the Autogen framework goes, I think they are working towards more of this idea. I agree, it would be nice to have more of a 'suite' to just run. 👍
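In plain pyautogen you can already get close to that split by giving each agent its own llm_config. A minimal sketch, assuming an OpenAI key is set in the environment:

```python
import autogen

# The "skill author" gets the more capable model for writing and testing functions...
author = autogen.AssistantAgent(
    name="skill_author",
    system_message="Write and test small Python utility functions.",
    llm_config={"config_list": [{"model": "gpt-4"}]},
)

# ...while the agent that later reuses the saved skills can run on a cheaper model.
runner = autogen.AssistantAgent(
    name="skill_runner",
    system_message="Use the provided utility functions to complete tasks.",
    llm_config={"config_list": [{"model": "gpt-3.5-turbo"}]},
)
```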
Hi Tyler, thank you for your videos on Autogen. They were very helpful for getting started. However, they are also very basic, like for those who are too lazy to read the Autogen tutorials. What about creating something interesting and more complex? Something that uses prebuilt OpenAI Assistants and talks with the external world through an API by sending and getting data?
Hey, thank you for watching my videos. I do agree about some of them being basic, but I am working towards integrations with LangChain tools and sending requests for more complex scenarios. Autogen is definitely more of an orchestration framework, meaning we can integrate many things into the framework and then have AI agents perform certain things from those integrations.
Thank you for the suggestion 👍
📝 Summary of Key Points:
📌 The video provides a comprehensive guide on using Autogen Studio UI, covering topics like setting up a new project, installing Autogen Studio, creating agents, models, skills, and workflows, and running sessions.
🧐 It demonstrates how to work with different components in Autogen Studio UI, such as agents, models, skills, and workflows, and how to interact with them to generate responses, create conversations, and even generate images using skills.
🚀 The video also delves into advanced topics like downloading workflows, programmatically using them, integrating local open-sourced language models, and utilizing LM Studio for local model deployment, showcasing a wide range of functionalities available in Autogen Studio.
💡 Additional Insights and Observations:
💬 Quotable Moments: Autogen Studio provides a user-friendly interface for creating agents, models, and workflows, making it easy to prototype and test conversational AI applications without extensive coding.
📊 Data and Statistics: The video demonstrates practical examples of creating agents, models, and workflows, showcasing the versatility and capabilities of Autogen Studio in developing conversational AI solutions.
🌐 References and Sources: The video references LM Studio for local model deployment, highlighting the flexibility of integrating different tools and platforms within the Autogen ecosystem.
📣 Concluding Remarks:
The video offers a detailed walkthrough of Autogen Studio UI, from basic setup to advanced functionalities like skill-based image generation and local model deployment. By following the steps outlined in the video, users can gain a solid understanding of how to leverage Autogen Studio for creating conversational AI applications efficiently.
Generated using TalkBud
I can't find the prompt you have in the video in the git repo. I thought I would add it here.
Create 3 python functions:
1. This one returns a random number.
2. This one returns a random number.
3. This one checks if a number is even or odd.
Make sure we check to see if the code is correct.
Yes, I don't think I added those prompts in the code as I just wrote them in the UI. Thank you, I should add that to the description though. Appreciate you doing that here 🙌
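For reference, the kind of code the agents typically come back with for that prompt looks something like this (a minimal sketch; the number range and function names are assumptions, since the prompt doesn't specify them):

```python
import random

def random_number_one():
    """Return a random integer (the 1-100 range is an assumption)."""
    return random.randint(1, 100)

def random_number_two():
    """Return another random integer."""
    return random.randint(1, 100)

def is_even(number):
    """Return True if the number is even, False if it is odd."""
    return number % 2 == 0

# The prompt asks to check that the code is correct, so a quick sanity check:
if __name__ == "__main__":
    n = random_number_one()
    print(n, "is even" if is_even(n) else "is odd")
    assert is_even(2) and not is_even(3)
```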
Nice tutorial thanks man!
Really great work!
Thank you, appreciate it!
I love your videos, but your screen is kinda out of focus. I mean, we can kinda see what you're clicking on, but if you're going fast there will be several pauses as I have to make sure I'm clicking the right thing. Also wanted to add: you can run this in LM Studio's server mode and select Zephyr. Of course, have several models downloaded, but it picked it up after changing the port to the one in the LM Studio server port string.
I hear ya! And thank you, I'm fixing that moving forward! I'll have a video out Sunday, and I'd love to hear your feedback on the quality of the video.
And thank you for that clarification!
Thanks for the video!
You're welcome!
Amazing video! like and subscribed! Can you use stable diffusion models in skill?
Thank you, I appreciate it! So, I think right now it could be done if you can use something like an inference server API inside of a skill.
Phenomenal
I am looking into a workflow for reviewing the legality and feasibility of a lot of draft documents and verifying that they follow the rules of a bunch of template documents. Are there ways I can make a knowledge base consisting of a lot of documents, as well as a way of uploading and reading through documents?
One way I can think of is to use the AskYourPDF API and route all the documents through that.
Yeah, you could use AutoGen's RAG agents, or use LangChain to get documents into a vector DB and then ask questions against that to retrieve answers.
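A minimal sketch of that LangChain + Chroma route (import paths assume a recent LangChain release where loaders and vector stores live in langchain_community / langchain_openai; the file name and question are placeholders):

```python
from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings

# Load a draft or template document and split it into retrievable chunks.
docs = PyPDFLoader("draft_contract.pdf").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

# Embed the chunks into a Chroma vector store.
db = Chroma.from_documents(chunks, OpenAIEmbeddings())

# Ask questions against the knowledge base; the hits can be fed to an agent as context.
hits = db.similarity_search("Does this draft follow the template's termination clause?", k=3)
for hit in hits:
    print(hit.page_content[:200])
```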
@@TylerReedAI Thanks a lot! If I get it to work, 70-80% of my workload at my current job can be outsourced to agents.
Keep up the great work!
how did you solve the looping issues, or the caching issues?
Hey, what caching issues are you having?
As far as looping issues, are your agents constantly thanking each other? Make sure you have a termination message if that is an issue. Let me know the error you are getting and I can try to help!
Excellent video! Is there any way to start LM Studio in server mode automatically when the computer starts?
Thank you! I don't believe there is a way yet to have it auto-start a server. That may be a feature down the line they add, but it would be nice
Hello, Tyler! Your tutorials are good and easy to understand. I am an architect, not an AI expert. Do you think it is possible to create some architect agents to design buildings together? I mean, they could discuss the project and create some design drawing images...
Hey, thank you! So what I think would need to happen first is: what kind of files could we feed a model so it understands what you do? Are they images or some other type?
So if we can have a model look at an image and describe it, then based on things like (I am not an architect, so forgive me) how many rooms, what style, and any other architecture options, we can have it discuss what a client might need.
I think it is easier to implement it myself with langchain. This whole thing looks super confusing
How come there is no find_papers_arxiv skill? The primary assistant agent has it as a skill.
Hey, so they have example skills in their backend code just to show something off, even if you can't see them. But it's not there in the skill list for you?
I think Autogen has a big advantage over Crew. What I'd like to know is how I can get the agents to auto-generate the tools they need without having to load them beforehand.
Hey, so I don't know if AutoGen Studio can auto-load skills; we would still need to create them and tell it which agents to connect them with, if that's what you meant. If that didn't answer the question, let me know!
How can you keep the context of previous messages from the chat history after it terminates? So that it can respond based on previous messages plus the new ones as context for the next conversation?
As far as with AutoGen Studio goes, I don't know if that can be done...yet. Unless they had an update very recently, it will probably come in the future. They will keep making updates!
If you are just using autogen with python code, you can try autogen.initiate_chats(...). Or use a vector database to store previous messages.
If you need help with that, let me know. I have videos coming out soon with some of this information to help!
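A minimal sketch of the vector-database option using the chromadb package directly (the collection name, path, and messages are placeholders):

```python
import chromadb

# A persistent client keeps the stored messages across sessions.
client = chromadb.PersistentClient(path="./chat_memory")
history = client.get_or_create_collection("chat_history")

# After a conversation terminates, store the messages you want to remember.
history.add(
    documents=["User asked about PDF OCR; the agent suggested pytesseract."],
    ids=["session-1-msg-1"],
)

# At the start of the next conversation, pull relevant context back in.
results = history.query(query_texts=["How did we handle PDFs last time?"], n_results=3)
previous_context = "\n".join(results["documents"][0])
# previous_context can then be prepended to the first message of the new chat.
```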
Tyler!.. I can't get my agent to run the script. It looks at it and suggests some other code instead, but the script runs fine when executed by itself. What to do?
Hey, so what script are you giving it? I guess you mean you're giving it a skill and it's not running it? Sorry, if you don't mind clarifying! I'll do my best to help.
Hi, is it possible to make LLM agent speak with another LLM agent, and another agent put the answer in a knowledge graph base? if autogen cannot do this, please guide me to an alternative.
Hey, this is a good question. I haven't used knowledge graphs before, but I think another alternative would be to save it to a vector database, and then the LLM would have more context for other agents in the process. LangChain is good at this, along with a vector database such as Chroma or FAISS.
When I try to run local LLM via LM Studio (marianna13/llava-phi-2-3b-GGUF and Blombert/llava-phi-2-3b-GGUF) my responses are truncated to 2 tokens. Same prompt works perfectly in LM Studio. WTAF?
Yeah, this seems to be a current bug, but they are working on it. One of the last releases introduced it. There are workarounds some people got working, but hopefully they fix it soon. Nothing you did wrong on your end!
So am I right in saying that this is generally just for Python users? I created a C# workflow and, although it did spit out the correct C# methods in your example, I got a few errors that the code is incompatible with the environment.
Yeah, Autogen itself is for Python. So if you have it try to execute C# code natively, I don't believe it would work. If there were a Python library that helped run native C# code, then maybe; the agent or function would just need some help with that. It can write C# code, but I don't think actually executing it would work. But, I'll be honest, I haven't tried it.
@@TylerReedAI OK, thanks for confirming that. So I was able to get it to post C# code, but I have issues setting up agents and workflows. I am using Python functions for the skills setup.
Somebody help, how do I set up for GPT-3.5? Visual Studio didn't work, and everything is running in conda. Exporting the OpenAI key throws errors; only set seems to work... but what now?? Or where can I find the JSON config file... for any LLM???
oops.. i forgot to say Free :)
Hey! So I'm a bit unsure what you mean. AutoGen Studio is a UI, and you don't really need to do anything else with code. You just export or set the OpenAI key (depending on your machine), and then start AutoGen Studio. You won't have a JSON config file; it will be handled in the model or agent when you open the UI.
Great video. I am struggling to get my own skill to search the web and have an agent in the group chat use it. I could not get it to work. Have you done something like that?
Hey, I don't believe I've tried that with Studio. What kind of error are you getting?
The skill does not get called even though I specifically ask to use it in my system prompt. Maybe there is a certain way a skill needs to be written or documented. It would be nice if we could generate code from a whole workflow, including the agents and skills involved.
Hmm, and you also have it assigned in the agent? So when you go to the agent, you assign it the skill at the bottom when you click on the agent?
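For what it's worth, a Studio skill is just a Python function, and a clear name, type hints, and docstring seem to help the agent decide to call it. A minimal sketch of a web-search skill, using DuckDuckGo's Instant Answer API as a stand-in (swap in whatever search API you prefer):

```python
import requests

def search_web(query: str, max_results: int = 3) -> str:
    """Search the web for `query` and return a short text summary of the top results."""
    resp = requests.get(
        "https://api.duckduckgo.com/",
        params={"q": query, "format": "json", "no_html": 1},
        timeout=10,
    )
    data = resp.json()
    snippets = [data.get("AbstractText", "")]
    for topic in data.get("RelatedTopics", [])[:max_results]:
        if isinstance(topic, dict) and topic.get("Text"):
            snippets.append(topic["Text"])
    return "\n".join(s for s in snippets if s)
```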
I have some PDFs with English and another language's text. I want Autogen to do OCR on them and extract the English text from the PDF; a 2nd agent should give a 3rd agent that text, and the 3rd agent should create a PDF of that English text only. Is it possible? Please reply.
I would definitely imagine this is possible; you can extract the English part, then simply send that into another PDF document!
@@TylerReedAI How can I do that? Please reply.
I will think about this. I have a video coming out using LangChain to take in a PDF and, while not exactly extracting just the English text, doing actions with that PDF in the workflow. Let me see how we can do this! Maybe I can make a video on something similar 👍
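For the OCR idea itself, a rough sketch outside of Autogen could look like the following, assuming pytesseract, pdf2image, langdetect, and fpdf are installed (plus the Tesseract and Poppler system packages) and with placeholder file names; each block is the kind of step one agent could own:

```python
from pdf2image import convert_from_path
from langdetect import detect
from fpdf import FPDF
import pytesseract

# Agent 1's job: OCR every page of the mixed-language PDF.
pages = convert_from_path("input.pdf")
lines = []
for page in pages:
    lines.extend(pytesseract.image_to_string(page).splitlines())

# Agent 2's job: keep only the lines detected as English.
english_lines = []
for line in lines:
    try:
        if line.strip() and detect(line) == "en":
            english_lines.append(line)
    except Exception:
        pass  # langdetect raises on very short or non-text lines

# Agent 3's job: write the English text into a new PDF.
pdf = FPDF()
pdf.add_page()
pdf.set_font("Helvetica", size=12)
pdf.multi_cell(0, 8, "\n".join(english_lines))
pdf.output("english_only.pdf")
```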
Hey Tyler, from a non-tech perspective, I really appreciate your content! Unfortunately, I get an error as soon as I want to add a new agent: "Error occurred while creating agent: table agents has no column named description". I cannot understand why; do you have any tips? I also use the free PyCharm Community version and followed all the steps.
Hey thank you for watching, I appreciate that! So I think from what I read in their github, they just released an update that may fix this.
pip install -U autogenstudio
Run this and try again, let me know if that fixes it up. This just upgrades the library
@@TylerReedAI wow, it finally works now! Thank you!
Can AutoGen Studio upload an image, i.e., using a GPT Vision model?
As far as I know, right now you can't upload images, though I am making a video on how to set up a local server where you can upload an image and have it analyzed! There are also free options like LLaVA that you can set up locally and test out.
Also, do you know how to delete Gallery sessions?
Congrats, great tutorial! I have one question: what is pyautogen for? Why do they evolve it separately from autogenstudio? Isn't Studio supposed to substitute for the Python version? Which version is best to use? I am a bit confused; pyautogen has many features that Studio doesn't (graph, etc.). Thanks!
AutoGen Studio is just a web browser UI.
Hey, thank you! Okay, so pyautogen is the library we install. This contains the Autogen framework as a whole. The reason they don't have us install "autogen" is because that package name was already taken and is not related at all. This is the original framework that is always being updated. AutoGen Studio came after and is just a UI version that helps us do the same thing without code. There are a lot of features that need to be tested, so it will take some time to catch up, and I also think it's separate. I believe a separate group is doing AutoGen Studio.
Just takes time, that's all
When will you upload autogen-studio-fc?
uploaded! (thanks for keeping me on my toes) 😀
I keep getting the following error: "Error occurred while processing message: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable". I've tried inputting "", NULL, not-needed, and leaving it empty with no luck.
Hey! So right before you start up AutoGen Studio, type this in the terminal: export OPENAI_API_KEY=sk-1111. Then just put in whatever your API key is. This still isn't quite fixed yet, and I hope in future releases we won't need to do this.
@@TylerReedAI If you have a Windows machine, the keyword is set, not export.
Ah right, is it SET?
"Got unexpected extra arguments (port 8084)": I get this error no matter what port number I use?
autogenstudio ui
DO NOT PUT A PORT NUMBER AT ALL, IT WORKS THEN
I got lost at 0:27. What's an IDE? And what does PyCharm do? Doesn't anyone have a tutorial for people who haven't been coding for years?
Hey, okay sorry, I will be coming out with a video about this to help out! But, an IDE is an integrated development environment, even something like notepad could technically do this. It's just software to help you code! PyCharm has a free version to make it much easier for you. If you have questions, email me!
Great! But how do I configure an Ollama model? Error: 503 😢
Hey! So, type this in the terminal before you start autogen studio: export base_url=localhost:11434/v1
@TylerReedAI Hi, Tyler! I set it up like this but it still doesn’t work. Do you have time to make a teaching video? THANK YOU!
Great video, but when I try to run Autogen the second time I keep getting this error: [Errno 10048] error while attempting to bind on address ('127.0.0.1', 8081): only one usage of each socket address. Please help! Thanks in advance.
Sorry, I don't know why YouTube had this under review. 😖 So if you haven't figured it out yet, I would say make sure you have the first one stopped, because they will try to run on the same port, which is what you are getting, I believe. Meaning, if you tried to run it again, you may have forgotten to stop the first one.
OR, you can just change the port up when you run, so instead of 8081, try 8082. Let me know if that helps!
How can you generate with Stable Diffusion instead of DALL-E?
Good question. I've seen documentation to run a local web ui for stable diffusion: github.com/AUTOMATIC1111/stable-diffusion-webui.
I believe you can also run it locally with just terminal commands, but I'm not sure how that would be used in the Studio UI just yet. I'll have to think and look into that.
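If the webui is started with its --api flag, a skill could call it over HTTP. A minimal sketch, assuming the default local port and that project's txt2img endpoint (the prompt, steps, and file name are placeholders):

```python
import base64
import requests

def generate_image_sd(prompt: str, filename: str = "sd_output.png") -> str:
    """Generate an image with a locally running stable-diffusion-webui and save it to disk."""
    resp = requests.post(
        "http://127.0.0.1:7860/sdapi/v1/txt2img",
        json={"prompt": prompt, "steps": 20},
        timeout=300,
    )
    image_b64 = resp.json()["images"][0]
    with open(filename, "wb") as f:
        f.write(base64.b64decode(image_b64))
    return filename
```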
Thanks!
Thank you so much, really appreciate this!!
The installation was fine... it all runs in conda. But I don't have GPT-4, and I can't seem to set up the GPT-3.5 configuration; it keeps throwing errors. In the CLI, export doesn't work. This is all in conda, mind you... Visual Studio was a mess.
Export for the API key, sorry.
Hey, are you running windows, Linux or Mac?
When I connect to 127.0.0.1 it logs me in as a guest, and I can't sign out, and I get a 404 when I send a message, and I can't switch to GPT-3.5 or a 7B model.
Hey, so let's tackle this one at a time. So when you sign in as a guest, that's fine, but you're trying to sign out? I'm not sure if you can sign out properly, so you'll have to stop the service and then start it up again. You can hit "Ctrl+C" in the terminal to do that.
As for switching to 3.5 or another open-source model, you would have to export the OpenAI key before you start it up, or the base URL.
Hope that helped somewhat. If not, let me know and I can try to help more.
@@TylerReedAI If I log in, the page looks distorted and it won't allow sign-out unless I change the page and log out in DOS. I can't send a message. It gives me a 404 error, and the model only shows GPT-4 and preview and Bloke. I've then gone in to change settings and added my API key, and it does nothing but hang.
@virtualassistantbureau Gotcha, okay. So before you start autogenstudio, type this: export OPENAI_API_KEY=sk-your-api-key
Then start up AutoGen Studio. I'll take a look tomorrow as well; it could be something with a recent update.
@@TylerReedAI I've added that to my variables; where do I add it where I haven't added it? I've got .py files with it, but this is the UI itself. It looks all messed up, and when I build the agents it only has the code writer and image creator. It doesn't give me any other options, and I don't want GPT-4. I want the 7B.
I have Win 10 64-bit.
It has endless error messages; it seems to still be in a very early beta version.
Ah, sorry about that. Yeah, it is getting upgrades. What messages are you getting?
Thanks!
Wow! Thank you so much for the super 🙏🙌👍