Autogen Full Beginner Course

  • Published: May 27, 2024
  • Welcome to a Full AutoGen Beginner Course! Whether you know everything there is to know about AI agents or are a complete beginner, I believe there is something to learn here. Topics range from an introduction to AutoGen, to group chats, to a Reddit project at the end. There are many topics to go over, and also a few projects for you to do. You can pause the video when you get to them, then unpause to see how I did it.
    By the time you are done with this course you will be able to understand what AutoGen is and create your own Multi-Agent workflow.
    You can download the IDE I use and you can use the Conda Environment with the following download as well:
    🥧 PyCharm Download: www.jetbrains.com/pycharm/dow...
    🐍 Anaconda Download: www.anaconda.com/download
    Bonus Project URL: / apps
    AutoGen Beginner Course Code: github.com/tylerprogramming/a...
    Nested Sequential Chat Video: • AutoGen Tutorial | Seq...
    Don't forget to sign up for the FREE newsletter below to get updates on AI, what I'm working on, and struggles I've dealt with (which you may have too!):
    =========================================================
    📰 Newsletter Sign-up: bit.ly/tylerreed
    =========================================================
    Join me on Discord: / discord
    Connect With Me:
    🐦 X (twitter): @TylerReedAI
    🙋‍♂️ GitHub: github.com/tylerprogramming/ai
    📸 Instagram: TylerReedAI
    💼 LinkedIn: / tylerreedai
    📆 31 Day Challenge Playlist: • 31 Day Challenge AutoGen
    🙋‍♂️ GitHub 31 Day Challenge: github.com/tylerprogramming/3...
    🦙 Ollama Download: ollama.com/
    🤖 LM Studio Download: lmstudio.ai/
    The paper: arxiv.org/abs/2403.04783
    📖 Chapters:
    00:00:00 Welcome to the Course!
    00:00:46 Autogen Introduction
    00:02:59 Download PyCharm
    00:04:17 01 - two way chat
    00:11:51 01.1 - human interaction
    00:13:30 02 - group chat
    00:19:38 Project #1 - Snake
    00:22:25 04 - sequential chat
    00:28:50 05 - nested chat
    00:33:54 06 - logging
    00:40:16 07 - vision agent
    00:46:57 openai vs. local
    00:47:31 09 - lm studio
    00:53:00 10 - function calling
    01:04:48 brief intermission
    01:05:15 11 - tools
    01:14:51 12 - create images!
    01:17:54 Project #2 - autogen + img + save file
    01:20:16 Bonus Reddit Project
    💬 If you have any issues, let me know in the comments and I will help you out!

Comments • 57

  • @TylerReedAI
    @TylerReedAI  1 month ago +12

    Hey! With this course from beginning to end, you will be familiar with AutoGen and be able to create your own AI agent workflow. Like, subscribe and comment 😀 Have a good day coding!

  • @nestorcolt
    @nestorcolt 28 days ago +3

    You earned a new subscriber and loyal follower, gentleman. Great speech modulation and clarity.

    • @TylerReedAI
      @TylerReedAI  28 days ago

      Thank you so much, I appreciate this 🙌

  • @jimg8296
    @jimg8296 1 month ago +3

    Great summary. I've been looking for more examples of Autogen. I'd love to see a comparison of CrewAI vs Autogen and the code behind the test.

  • @ahmedadly
    @ahmedadly 29 days ago +2

    Thank you Tyler, awesome as usual!

  • @robgruhl3439
    @robgruhl3439 12 days ago +1

    This course is fantastic, thank you!!

  • @abhaymishra7991
    @abhaymishra7991 20 days ago +1

    I usually do not comment, but I am commenting because you are just awesome. I was confused for the last 3 days about LangGraph vs Autogen, but now you have cleared up all my doubts with this video, thanks.

    • @TylerReedAI
      @TylerReedAI  17 days ago

      Hey thank you so much! I'm so glad to help clear things up 👍

  • @RetiredVet1
    @RetiredVet1 1 month ago +1

    Sounds fantastic. I will take it as soon as I can.

  • @BerkGoknil
    @BerkGoknil 23 days ago +1

    Tyler, excellent video. I learned a lot. God bless you.

    • @TylerReedAI
      @TylerReedAI  17 days ago

      Thank you, glad it was helpful!

  • @AndyPandy-ni1io
    @AndyPandy-ni1io 2 hours ago

    So if I want to run an LLM locally, is there a way to update the OAI config list for all the files in one go? This bit confused me badly.

  • @padhuLP
    @padhuLP 27 days ago +2

    A wonderful beginner's tutorial. Thanks for providing the code so we can copy, paste, and test it quickly. Appreciate it.

    • @AndyPandy-ni1io
      @AndyPandy-ni1io 2 hours ago

      How did you get it working? I get so many issues with it running locally.

    • @TylerReedAI
      @TylerReedAI  2 hours ago

      @AndyPandy-ni1io what issues are you having?

    • @AndyPandy-ni1io
      @AndyPandy-ni1io 1 hour ago

      @@TylerReedAI The main thing is: when I build from scratch but want it to run a local LLM, do I still need the config.json, or do I just put the equivalent API stuff in main.py?

  • @user-td2od6xx1h
    @user-td2od6xx1h 12 days ago

    Thanks for sharing this video, it helps me a lot.
    I have one question: is it possible to dynamically change the base prompt (system_message)?
    By "dynamically" I mean changing system_message during the conversation.

    • @TylerReedAI
      @TylerReedAI  6 days ago

      Hey, I'm glad it could help! And I will look into that. I can think of adding context in each iteration to shape the output, but you would need to set human_input_mode="ALWAYS".
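      As an illustrative note on the question above: AutoGen's ConversableAgent exposes an update_system_message() method that replaces the system prompt mid-conversation. The sketch below uses a hypothetical stub class in place of the real agent so the pattern runs without an API key; only the method name mirrors the library.

      ```python
      # Hedged sketch: a stub standing in for autogen.ConversableAgent,
      # showing the update_system_message() pattern for changing the
      # system prompt between turns.
      class StubAgent:
          """Minimal stand-in for autogen.ConversableAgent (illustration only)."""
          def __init__(self, name, system_message):
              self.name = name
              self.system_message = system_message

          def update_system_message(self, system_message):
              # Same name and behavior as the library method: replace the prompt.
              self.system_message = system_message

      agent = StubAgent("assistant", "You are a helpful assistant.")
      # ...after a few turns, steer the agent differently:
      agent.update_system_message("You are a strict code reviewer. Be terse.")
      ```

      With the real library, the same call on a live agent changes how it responds from the next turn onward.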

  • @jcdenton1664
    @jcdenton1664 1 month ago

    This is excellent! Thank you for your efforts in bringing this tutorial out. Can I ask: can we add PDFs to agents? Like ask agents to digest PDFs at particular points in the workflow and contribute to the discussion based on what they learn there?

    • @TylerReedAI
      @TylerReedAI  28 days ago

      Hey thank you, and absolutely you can. This would be using RAG. I will have a video soon on how to do just this!

  • @RetiredVet1
    @RetiredVet1 25 days ago +1

    I don't see the Reddit URL you mention at 1:20:28. I tried to read the URL, but I can't make it out even when I zoom in on the video.

  • @steveknows6126
    @steveknows6126 24 days ago +1

    Thanks Tyler. I see you suggested going with the OAI config instead of a .env, and they both appear to do the same thing. What's the difference?

    • @TylerReedAI
      @TylerReedAI  23 days ago

      Hey, yeah, there really is no functional difference; it's just how they get the properties. You could even just import os and then use os.environ["OPENAI_API_KEY"], something like that, and you would have that pulled from your environment into the configuration. The OAI JSON is just the way I like to do it.
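      The environment-variable route mentioned in this reply can be sketched like this (a minimal illustration; the key value is a placeholder, and in practice the variable would be set in your shell rather than in code):

      ```python
      import os

      # Placeholder key for illustration only; normally set via `export OPENAI_API_KEY=...`
      os.environ["OPENAI_API_KEY"] = "sk-proj-1111"

      # Equivalent of the OAI_CONFIG_LIST file, built from the environment instead.
      config_list = [
          {
              "model": "gpt-3.5-turbo",
              "api_key": os.environ["OPENAI_API_KEY"],  # mapping lookup, not a call
          }
      ]
      ```

      Either way, the agent's llm_config ends up holding the same model/key pair; the only difference is where the values come from.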

  • @PrinzMegahertz
    @PrinzMegahertz 21 days ago +1

    Thank you for this excellent introduction! I have one question: I would like to have two agents performing an interview with a human on a certain topic. The first agent should ask the questions, while the second agent should reflect on their understanding of the topic and decide whether additional messages are needed. This seems like a good case for a Nested Chat. However, the nested chat seems to be bound to the number of turns you define at the beginning. Is there a way to have the nested agent decide when to finish the interaction?

    • @TylerReedAI
      @TylerReedAI  17 days ago

      Hey, I'm glad it has helped, and thank you! So yeah, you can determine how many max_turns a chat can have in the nested chat. It is sequential, but I guess for that...you may just need to say something in the prompt of each agent. For instance, the AssistantAgent could say, "...When the task is done, reply TERMINATE". Then the UserAgent checks for that in the termination message.
      res = user.initiate_chats(
          [
              {"recipient": assistant_1, "message": tasks[0], "max_turns": 1, "summary_method": "last_msg"},
              {"recipient": assistant_2, "message": tasks[1]},
          ]
      )
      Here, in this example, you can increase the max turns where the user will initiate a chat with another assistant. I get what you're saying, and I think the answer is...no, not exactly. The closest would be with the prompting. Hope this helps; if it didn't, let me know!
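      As an illustrative aside on the TERMINATE convention suggested in this reply: AutoGen agents accept an is_termination_msg callable at construction, and the check itself is plain Python. A minimal sketch, assuming the assistant ends its final message with the word TERMINATE:

      ```python
      # Returns True when a chat message's content ends with "TERMINATE".
      # Pass this as is_termination_msg=... when constructing the agent that
      # should stop the conversation.
      def is_termination_msg(message):
          # message is a dict like {"content": "...", ...}; content may be None.
          content = (message.get("content") or "").rstrip()
          return content.endswith("TERMINATE")
      ```

      This lets the prompting approach above actually end the chat, instead of relying solely on max_turns.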

    • @PrinzMegahertz
      @PrinzMegahertz 15 days ago

      @@TylerReedAI Thank you very much, I'll give it a try!

  • @artemkhomenko2317
    @artemkhomenko2317 28 days ago

    Is there any way to use AutoGen to log in to a website and perform a job?
    I mean functionality where I can describe in text how to log in to a specific website with my credentials and do specific tasks, without manually specifying CSS or XPath elements and without writing (or generating) code for Selenium or similar tools?

    • @TylerReedAI
      @TylerReedAI  28 days ago

      Hey, I don't think you can do that with their native tools just yet; however, I know they are working hard (as of last week) on making things like this happen. They mentioned it in a Discord call they had.

  • @AndyPandy-ni1io
    @AndyPandy-ni1io 2 hours ago

    Hi, so I want to do this with a locally run LLM. How do I change
    [
        {
            "model": "gpt-3.5-turbo",
            "api_key": "sk-proj-1111"
        }
    ]
    to run with, say, LM Studio with Llama 3?

    • @TylerReedAI
      @TylerReedAI  2 hours ago

      model: the actual model from LM Studio you are using
      api_key: "lm-studio"
      base_url: "the URL found in LM Studio as well"
      There is a snippet of Python code shown when you start the local server; both the model and the base URL can be found there.

    • @AndyPandy-ni1io
      @AndyPandy-ni1io 2 hours ago

      import autogen

      def main():
          llama3 = {
              "config_list": [
                  {
                      "model": "Meta-Llama-3-8B-Instruct-GGUF",
                      "base_url": "http://localhost:1234/v1",  # scheme required by the OpenAI client
                      "api_key": "lm-studio",
                  },
              ],
              "cache_seed": None,
              "max_tokens": 1024,
          }

          phil = autogen.ConversableAgent(
              "Phil (Phi-2)",
              llm_config=llama3,
              system_message="""
              Your name is Phil and you are a comedian.
              """,
          )

          # Create the agent that represents the user in the conversation.
          user_proxy = autogen.UserProxyAgent(
              "user_proxy",
              code_execution_config=False,
              default_auto_reply="...",
              human_input_mode="NEVER",
          )

          user_proxy.initiate_chat(phil, message="Tell me a joke!")

      if __name__ == "__main__":
          main()

  • @wpuncensored
    @wpuncensored 26 days ago

    Can I request code written in Next.js (TypeScript) or .NET (C#), or does it strictly work with Python?

    • @TylerReedAI
      @TylerReedAI  23 days ago

      You are in luck! They just added .NET support!

  • @frankkujath3501
    @frankkujath3501 1 month ago

    Well, I wonder: the first program runs with no error, but it doesn't create the coding folder, and doesn't create or run the files where I'd see the chart, even if I create the folder beforehand, and also in mode "NEVER". So it only produces console output but not the resulting scripts. Any idea? I tried with Python 3.10.

    • @frankkujath3501
      @frankkujath3501 1 month ago

      I also don't see the 3 dots in the output log...

    • @frankkujath3501
      @frankkujath3501 1 month ago +1

      Found the reason: after creating the project, I have to select Conda as the environment in the third tab and use Python 3.10.11. It seems there is a problem with my 3.10.6, which was installed for automatic1111.

    • @TylerReedAI
      @TylerReedAI  28 days ago +1

      Sorry for the late reply; I'm glad you got it figured out. Yeah, this is why I'm soon going to be creating Docker images so everybody can have the same workflow with the same settings I have. Then we won't have issues like this.

  • @LeviZortman
    @LeviZortman 24 days ago +1

    Hi Tyler, I was following along with your repo and it vanished mid-tutorial. Any ideas? Great work btw.

    • @TylerReedAI
      @TylerReedAI  24 days ago

      Hey thank you, what do you mean…like the repo doesn’t exist?

    • @LeviZortman
      @LeviZortman 24 days ago +1

      @@TylerReedAI I was following along with your /autogen_beginnner_course repo and I refreshed at one point and got a 404. It's gone.

    • @TylerReedAI
      @TylerReedAI  24 days ago +1

      I see, I had a different one and migrated because of some issue, I apologize. Try this: github.com/tylerprogramming/autogen-beginner-course

  • @georgewestbrook4512
    @georgewestbrook4512 2 days ago +1

    Amazing tutorial, very clear and packed full!

  • @Oliv-B
    @Oliv-B 1 month ago

    I'm at 10:45 in your tutorial and my code just popped up a META & TESLA stock price graph! Just one issue: it was a success with the Assistant sending "TERMINATE", but then "user" wouldn't stop sending empty messages to the Assistant, and the Assistant kept responding "good bye" / "feel free to ask more questions"... in an infinite loop; CTRL-C helped me get out of there! (^_^)

    • @TylerReedAI
      @TylerReedAI  28 days ago +1

      I'm glad you got the graph! Yeah sorry, it happens, but I will try to update the code with better termination replies and prompts so you don't run into this issue nearly as often. But yeah ctrl + c gets you out of it :D

  • @RetiredVet1
    @RetiredVet1 1 month ago +1

    When I ran the code, I got the following repeated about 12 or more times. Maybe we need to limit replies?
    Assistant (to user):
    If you have any more questions or need assistance in the future, please feel free to ask. Have a great day! Goodbye!
    TERMINATE
    --------------------------------------------------------------------------------
    user (to Assistant):
    I also did not get the same results you did, but I now think I know why. Since I have Docker running, I set "use_docker" to True. When I set "use_docker" to False, I get results closer to yours.
    I was thinking I needed to use the Docker executor, but that causes other issues. You might want to try using Docker and see if there are any differences. If so, it might be the subject of another video.
    I had more consistent results when I set the temperature to 0 and use_docker to False.
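    As an illustrative note, the commenter's fix can be sketched as the two config fragments below. The field names (use_docker, work_dir, temperature) are real AutoGen/OpenAI options; the values are the ones described in this comment, and "coding" is assumed as the working directory used in the course code.

    ```python
    # use_docker=False runs generated code directly on the host instead of
    # inside a Docker container; "coding" is the directory the scripts land in.
    code_execution_config = {
        "work_dir": "coding",
        "use_docker": False,
    }

    # Temperature 0 makes sampling deterministic, which gave more consistent runs.
    llm_config = {"temperature": 0}
    ```

    These dicts would be passed to the UserProxyAgent and the AssistantAgent respectively.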

    • @TylerReedAI
      @TylerReedAI  28 days ago +1

      Gotcha, we talked in discord but yeah it's really interesting to have differences like this.

  • @AndyPandy-ni1io
    @AndyPandy-ni1io 2 hours ago

    WHY NOT JUST SHOW HOW TO RUN WITH LOCAL LLM FROM THE START SO WE CAN LEARN WITHOUT THE COSTS. NOW THE WHOLE THING DONT WORK CAUSE I HAVE TO CHANGE SOMETHING IN EVERY MAIN OR CONFIG????

    • @TylerReedAI
      @TylerReedAI  2 hours ago

      Hey man, it’s all there! It’s just a beginner course so there are multiple ways. I just chose to show how an agent actually works and then how to do it locally. I responded to your other comment how to do it

    • @AndyPandy-ni1io
      @AndyPandy-ni1io 1 hour ago

      @@TylerReedAI ahhhh makes sense now sorry im a noob

  • @TheLombudXa
    @TheLombudXa 1 month ago +1

    After like 3 weeks of fiddling around with AI, the way to go is to fine-tune the model itself directly to create agents. There's no need for any tool. The AI itself has it all already.

    • @b.861
      @b.861 1 month ago

      😂😂

    • @kareldaulatram
      @kareldaulatram 1 month ago

      How?

    • @brandonheaton6197
      @brandonheaton6197 21 days ago

      Using Llama 3, that is a viable strategy. However, consider that the AgentOptimizer AutoGen workflow from Zhang and Zhang lets you get the same effect while still using top-of-the-line models.
      gpt-4-turbo is currently $30 per million tokens. Until the SLM agent swarm gets traction, this is going to be the best option.