AutoGen DeepDive: Building Conversational Agents for Kubernetes!

  • Published: 2 Jan 2025

Comments • 37

  • @CaptTerrific • 1 year ago • +2

    There are a LOT of channels offering ~10-minute videos diving into the most recent and powerful LLM frameworks... most offer far less impactful examples (often minimal transformations of the tutorials published in the repositories themselves), far less clear explanations, and far less fluency in both the code and the walkthroughs.
    Your presentation style is clear, concise, and dense, yet friendly and approachable :) And using Kubernetes as an example, built on top of a local LLM (including explanations of the how and why), is not only practical but helps illustrate the range of use cases beyond yet another sqlite+gpt-4 "research agent swarm!" video.
    Keep up the great work! You're going to rise to the top in no time!!!

    • @YourTechBudCodes • 11 months ago • +1

      Thank you so much for the kind words. I really hope my videos add value to everyone who watches them. This motivates me to keep going.

  • @suseendaran5690 • 1 year ago • +1

    I know this channel's gonna become huge, so I wanna be one of the people who followed it from the start ❤

  • @tocutandrei9465 • 1 year ago • +3

    This is legit the best video explaining how AutoGen works, and I also love that you use local models. Keep on doing amazing things. I would like to see what other real-world use cases there are for the different types of agents.

    • @YourTechBudCodes • 1 year ago • +1

      Thank you so much for the kind words. I'm planning to make videos on WebSearch and RAG soon.

  • @-Evil-Genius- • 1 year ago • +2

    🎯 Key Takeaways for quick navigation:
    00:00 🤖 *[Introduction and Restrictions]*
    - Setting the stage for using AutoGen to create AI-powered applications.
    - Three self-imposed restrictions: open-source models only, detailed code explanations, and replicability in viewers' own projects.
    - Emphasizing the commitment to open-source models, contrary to common assumptions.
    02:30 🛠️ *[Building External System Adapter]*
    - Creating an instance of an external system adapter for Kubernetes.
    - Explaining the structure of the adapter class and its get_resources method.
    - Discussing the flexibility of the method parameters and the use of AI to determine values.
    04:19 🌐 *[Configuring Autogen for Kubernetes]*
    - Configuring Autogen for AI-powered interaction with Kubernetes.
    - Setting up the llama.cpp inference server for better performance.
    - Adjusting parameters like cache, response timeout, and temperature for optimal AI responses.
    06:25 🤝 *[Agent Coordination and Workflow]*
    - Introducing the Kubernetes engineer agent responsible for calling the function.
    - Describing the role of the Kubernetes expert agent in researching values.
    - Explaining the user proxy agent as a substitute for human input and the group chat manager for agent coordination.
    07:35 🔄 *[Agent Coordination Workflow]*
    - Detailing the workflow of agents' coordination in Autogen.
    - Explaining how the group chat manager orchestrates the conversation between agents.
    - Highlighting the role-playing game analogy used for model decision-making.
    09:36 🤔 *[Testing the Multi-Agent System]*
    - Demonstrating the interaction and coordination of agents in action.
    - Checking the logs for successful execution and agent collaboration.
    - Acknowledging the efficiency of the agents in working as a team for the intended task.
    Made with HARPA AI

  • @adpandehome996 • 9 months ago • +1

    Hey man. Good videos. You should make one on HashiCorp Nomad. Seems like everybody is chasing k8s, and it is overkill for most cases. New and early-stage startups would benefit from a Nomad tutorial.

    • @YourTechBudCodes • 9 months ago • +1

      I kinda like that idea. Let me prepare something really quick

  • @mcdaddy42069 • 1 year ago • +1

    You are the best, you are the best, you are the best. Best AutoGen tutorial creator out there, easily.

  • @supernewuser • 11 months ago • +1

    well done, very underrated content

  • @rsjain1978 • 3 months ago • +1

    Hi, very nice tutorial! Would you do a follow-up to show how data can be passed across agents?

    • @YourTechBudCodes • 2 months ago

      Yeah. I've been thinking about doing something on that.
      Is there any specific use case you are trying to achieve?

  • @shubhamnazare3525 • 1 year ago

    Thanks for explaining AutoGen!

  • @bawbee27 • 10 months ago

    Dude this is REALLY good. Well done & thank you 👏🏽

    • @YourTechBudCodes • 10 months ago

      I really appreciate it. Glad it was helpful.

  • @Matthias-c4p • 11 months ago

    Thanks for this video. It's really great. I would love to see a video about how to get the output from Autogen into a web app, including the human input. That would be great. Thanks!

    • @YourTechBudCodes • 11 months ago

      Thanks. I'm glad you found it to be helpful.
      A video to integrate all this with a web app is definitely in the works. Will share that soon.

  • @golangNinja29 • 1 year ago

    Amazing video ❤, excited for the series

  • @IdPreferNot1 • 1 year ago • +1

    Do you have a specific requirements .yml file for the conda environment you say to set up in step 1 of your "Setup conda env", or can I just create a blank one?

    • @YourTechBudCodes • 1 year ago • +1

      I just realised that I made a mistake in the README. You don't need conda since we are using Poetry. I have updated the README to reflect that.

  • @ianng8243 • 3 months ago • +1

    I need part 2!!

    • @YourTechBudCodes • 3 months ago • +1

      Haha. Glad you liked it. I just posted a part two last week. Do check it out and let me know your thoughts.

    • @ianng8243 • 2 months ago

      @YourTechBudCodes Thank you! I will check it out. I realized that we need our own OpenAI key; may I ask why we need it if we are running our own inference server and an open-source model?

    • @YourTechBudCodes • 19 days ago

      The OpenAI SDK is annoying; it forces you to provide one. Just put in a dummy key and you'll be fine.

  • @abhishekkhanna1349 • 1 year ago

    This is very interesting!!

  • @dekeleli • 10 months ago

    I am trying to run this with LM Studio instead of Ollama, and the model just generates text instead of running the function. Maybe autogen changed something since this video came out?

    • @YourTechBudCodes • 10 months ago

      Actually... I have written my own wrapper on top of Ollama to power function calling. Most open-source servers don't support it. Try using inferix as your server.

    • @dekeleli • 10 months ago • +1

      @YourTechBudCodes Interesting, thank you!

  • @MrMoonsilver • 11 months ago

    Is there a possibility to run an "Autogen Inference Server" with an API? I think that could be really powerful.

    • @YourTechBudCodes • 11 months ago

      Uhm. I'm not sure I understand the question. The inference server does set up an API.
      Or are you talking about some kind of SaaS service you can integrate with?