There are a LOT of channels offering ~10 minute videos diving into the most recent and powerful LLM frameworks... most offering far less impactful examples (often minimal transformations of tutorials published in the repositories themselves), with far less clear explanations, with far less fluency both in the code and their walkthroughs.
Your presentation style is clear, concise, and dense, yet friendly and approachable :) And using Kubernetes as an example, built on top of local LLM (including explanations as to the how and why) are not only practical, but help illustrate the range of use cases beyond yet another sqlite+gpt-4 "research agent swarm!" video.
Keep up the great work! You're going to rise to the top of these in no time!!!
Thank you so much for the kind words. I really hope my videos add value to anyone who watches them. This motivates me to keep going.
I know this channel's gonna become huge, so I wanna be one of the guys that followed from the start ❤
This really means a lot. Thank you so much!
This is legit the best video explaining how AutoGen works, and I also love that you use local models. Keep on doing amazing things. I would like to see what other real-world use cases there are for the different types of agents.
Thank you so much for the kind words. I'm planning to make videos on WebSearch and RAG soon.
🎯 Key Takeaways for quick navigation:
00:00 🤖 *[Introduction and Restrictions]*
- Setting the stage for using AutoGen to create AI-powered applications.
- Three self-imposed restrictions: Open-source models only, code explanation in detail, and ensuring replicability in viewers' projects.
- Emphasizing the commitment to using open-source models contrary to common beliefs.
02:30 🛠️ *[Building External System Adapter]*
- Creating an instance of an external system adapter for Kubernetes.
- Explaining the structure of the adapter class and its get resources method.
- Discussing the flexibility of the method parameters and the use of AI to determine values.
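For readers following along, the adapter described above could be sketched roughly like this. This is a hypothetical illustration, not the video's actual code: the class name, the kubectl-based approach, and the `kind`/`namespace` parameters are all assumptions.

```python
import json
import subprocess


class KubernetesAdapter:
    """Hypothetical sketch of an external system adapter for Kubernetes.
    It shells out to kubectl; a real adapter might use the Kubernetes
    Python client instead."""

    def _build_cmd(self, kind: str, namespace: str) -> list:
        # The AI agents, not the programmer, decide these parameter values
        # at runtime (e.g. kind="pods", namespace="kube-system").
        return ["kubectl", "get", kind, "-n", namespace, "-o", "json"]

    def get_resources(self, kind: str, namespace: str = "default") -> str:
        # Run kubectl and return either an error message or the resource
        # names as a JSON string, which the agent can read as plain text.
        result = subprocess.run(
            self._build_cmd(kind, namespace), capture_output=True, text=True
        )
        if result.returncode != 0:
            return f"Error: {result.stderr.strip()}"
        items = json.loads(result.stdout).get("items", [])
        return json.dumps([item["metadata"]["name"] for item in items])
```

Returning a string (rather than raising) keeps errors visible to the agents, so they can reason about a failed call instead of crashing the chat.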
04:19 🌐 *[Configuring Autogen for Kubernetes]*
- Configuring Autogen for AI-powered interaction with Kubernetes.
- Setting up the llama.cpp inference server for better performance.
- Adjusting parameters like cache, response timeout, and temperature for optimal AI responses.
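As a rough illustration of the configuration step described above, an AutoGen LLM config pointing at a local OpenAI-compatible server might look like the following. The endpoint, model name, and exact values are placeholders, not the video's actual settings.

```python
# Hypothetical AutoGen llm_config for a local, OpenAI-compatible inference
# server such as the llama.cpp server. Host, port, and model are assumptions.
llm_config = {
    "config_list": [
        {
            "model": "mistral-7b-instruct",          # any local open-source model
            "base_url": "http://localhost:8080/v1",  # local server endpoint
            "api_key": "not-needed",                 # SDK requires some value
        }
    ],
    "cache_seed": None,   # disable response caching while iterating
    "timeout": 120,       # local inference is slower; allow a long response timeout
    "temperature": 0.0,   # deterministic replies help reliable function calling
}
```

A temperature of 0 is a common choice when the model must emit exact function-call arguments rather than creative prose.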
06:25 🤝 *[Agent Coordination and Workflow]*
- Introducing the Kubernetes engineer agent responsible for calling the function.
- Describing the role of the Kubernetes expert agent in researching values.
- Explaining the user proxy agent as a substitute for human input and the group chat manager for agent coordination.
07:35 🔄 *[Agent Coordination Workflow]*
- Detailing the workflow of agents' coordination in Autogen.
- Explaining how the group chat manager orchestrates the conversation between agents.
- Highlighting the role-playing game analogy used for model decision-making.
09:36 🤔 *[Testing the Multi-Agent System]*
- Demonstrating the interaction and coordination of agents in action.
- Checking the logs for successful execution and agent collaboration.
- Acknowledging the efficiency of the agents in working as a team for the intended task.
Made with HARPA AI
This is interesting
Hey man. Good videos. You should make one on Hashicorp Nomad. Seems everybody is running behind k8s and it is overkill for most cases. New and early stage startups would benefit from a Nomad tutorial.
I kinda like that idea. Let me prepare something really quick
You are the best, you are the best, you are the best. Easily the best AutoGen tutorial creator out there.
Thanks for the kind words
well done, very underrated content
Thank you. Glad you liked it.
Hi, very nice tutorial. Would you do a follow-up to show how data can be passed across agents?
Yeah. I've been thinking about doing something on that.
Is there any specific use case you are trying to achieve?
Thanks for explaining AutoGen!
You're welcome. I'm glad you found it to be helpful.
Dude this is REALLY good. Well done & thank you 👏🏽
I really appreciate it. Glad it was helpful.
Thanks for this video. It's really great. I would love to see a video about how to get the output from AutoGen into a webapp, including the human input. Would be great. Thanks!
Thanks. I'm glad you found it to be helpful.
A video to integrate all this with a web app is definitely in the works. Will share that soon.
Amazing video ❤, excited for the series!
Thanks. Glad you liked it!
Do you have a specific requirements.yml file for the conda environment you say to set up in step 1 of your "Setup conda env", or can I just create a blank one?
I just realised that I made a mistake in the Readme. You don't need conda since we are using Poetry. I have updated the Readme to reflect that.
I need part 2!!
Haha. Glad you liked it. I just posted a part two last week. Do check it out and let me know your thoughts.
@@YourTechBudCodes thank you! I will check it out. I realized that we need our own Open AI key, may I ask why do we need it if we are running our own inference server and open source model?
The OpenAI SDK is annoying; it forces you to provide one. Just put a dummy key and you'll be fine.
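For anyone hitting the same question, a minimal sketch of the workaround (the endpoint URL and key value here are assumptions):

```python
import os

# The OpenAI SDK (which AutoGen uses under the hood) refuses to start
# without an API key, even when every request goes to a local server.
# Any placeholder value satisfies the check:
os.environ["OPENAI_API_KEY"] = "sk-dummy-local-only"

# Requests are routed to the local endpoint, so the key is never validated.
local_config = {
    "base_url": "http://localhost:8080/v1",  # assumed local inference server
    "api_key": os.environ["OPENAI_API_KEY"],
}
```

The key never leaves your machine; it only exists to keep the SDK's startup validation happy.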
This is very interesting!!
Ikr. AutoGen is awesome!
I am trying to run this with lm studio instead of Ollama and the model just generates text instead of running the function. Maybe autogen changed something since this video got out?
Actually... I have written my own wrapper on top of Ollama to power function calling. Most open-source inference servers don't support it. Try using inferix as your server.
@@YourTechBudCodes Interesting, thank you!
Is there a possibility to run an "Autogen Inference Server" with an API? I think that could be really powerful.
Uhm. I'm not sure I understand the question. The inference server does set up an API.
Or are you talking about some kind of SaaS service you can integrate with?