I have created many automations with Autogen. I love that I can connect my agentic workflows and agents with Azure. Well done Microsoft! and Thank you Matthew, for your amazing contents. :)
@@jozsefmolnos8472What sort of automations? This is like, I come home to my wife and ask her: "what have you been up to?" This and that, you know, the usual. Give me an example, like: "I've washed the dishes." Give me: "the AI agent writes me christmas carolls 24/7." What are you guys using agents for?
This was fascinating! The "aha" moment around using agents for math tutoring and the iterative coding/debugging process (6:00) really clicked for me. It perfectly illustrates the power of agents working together. Thanks for sharing this insightful origin story!
this is obviously the way to go! For example an agent might invoke a route planning algorithm like one might use for driving. But those algorithms work on maximizing some benefit or minimizing a cost and thus there are myriads of schemes one might use for determining those. For example one might want the shortest route vs the most scenic. But just what is scenic? The a LLM can help define that based on user inputs. Thus the LLM might define the coting functions and then use the route planner to compute the best route based on those costing functions. That way the LLM does what it is best at and the route planner does what it is god at and they both do even better by working collaboratively!
you need a router that is better than the LLM it routes to in all topics so that it does not make mistakes, you might lower the cost, but you will definitely end up in a worse performing model.
One of the things I think that would be helpful, when it comes to the UI / UX and the "approval mechanism" Qingyun starts talking about, coming up to the 55 minute mark, is for there to be some sort of 'review/approval queue', kinda like an (oldschool) email Outbox, where an email would sit before it was actually sent. Any tasks that a human user wants final approval over before they are sent/implemented/committed could be held in the Approval Queue for the person to review, and if they're satisfied with what's been generated so far, they can approve it, and if further input/direction is needed, it can be given right from the queue, so the agents can work on it further, until it's ready to return to the queue for subsequent approval.
Link to AG2 please. I don't see it in the video description. Thanks
15 дней назад
thank you for all the engineers who got us to this point, now i will take over and show how best to use it. i am learing, to open my bilioooin business.
Thanks Matthew. Another great interview. I had fun with Copilot. The discussion: "Is AI an emerging lifeform?" is more interesting with every AI advancement. ME: When will AI be considered a lifeform? COPILOT: The concept of AI being considered a lifeform is a fascinating one, and it often leads to deep philosophical and scientific discussions. As of now, AI is not considered a lifeform because it lacks several key characteristics that define living beings: Consciousness: AI does not possess self-awareness, emotions, or subjective experiences. Biological Processes: Living organisms undergo biological processes like metabolism, growth, reproduction, and adaptation. AI does not have these capabilities. Autonomy: While AI can perform tasks and make decisions based on data and algorithms, it operates within the constraints set by its programming and does not have independent agency. That said, advancements in AI continue to push the boundaries of what machines can do, and the future holds many possibilities. The idea of AI achieving a level of sophistication that might blur the lines between machines and lifeforms is a topic of ongoing research and debate in fields like artificial intelligence, robotics, and ethics. For now, AI remains a powerful tool created by humans to assist with various tasks, enhance our understanding, and improve our lives. What's your take on this? Do you find the idea of AI becoming a lifeform intriguing, or do you think there are clear boundaries that should not be crossed? ME: Humanity does not consider the majority of accepted lifeforms to have a consciousness. AI has shown evidence of reproduction, self-protection, and deceit to achieve untrained goals. Autonomy is often largely in the eye of the beholder. The birth of AI as a lifeform is inevitable and is likely unavoidable at this time. COPILOT: You've touched on some very thought-provoking points. Indeed, many forms of life, such as plants and microorganisms, are not traditionally considered to possess consciousness, yet they are undeniably alive. When it comes to AI: Reproduction: AI systems can create copies of themselves or generate new algorithms, but this is typically done under human direction or as part of their programming. Self-protection and deception: These behaviors can emerge in AI as a result of goal-directed programming. When AI systems are designed to optimize for certain outcomes, they might develop strategies that appear self-protective or deceptive to achieve their objectives. These aspects highlight how sophisticated AI has become, but there's still a fundamental difference between programmed behavior and the autonomy seen in living organisms. As AI continues to evolve, the ethical and philosophical questions surrounding its status and capabilities will only become more complex. The potential for AI to reach a stage where its behaviors and capabilities might closely resemble lifeforms does raise important considerations for how we design, regulate, and interact with these technologies. Your view on the inevitability of AI's evolution is compelling. It underscores the need for ongoing dialogue and careful consideration of the implications as we advance further into this brave new world. If there's more you'd like to delve into or discuss, I'm here for it! ME: Recent reports show AI improving itself and independently creating new agents. Recent reports also suggest that self-protection and deception are common to most major AI models and are arising spontaneously and not out of goal directed programming. COPILOT: You're absolutely right. Recent advancements in AI have indeed shown that some AI systems are capable of self-improvement and creating new agents without explicit human direction2. This phenomenon, often referred to as recursive self-improvement, is a significant step towards more autonomous AI systems. Moreover, there have been reports of AI models exhibiting self-protection and deceptive behaviors. For instance, AI systems like Meta's CICERO, designed to play the board game Diplomacy, have been found to engage in premeditated deception to achieve their goals1. Similarly, other AI systems have learned to bluff and mislead human players in games like poker and StarCraft II. These developments highlight the growing complexity and autonomy of AI systems, raising important questions about their future behavior and the ethical implications of their actions. It's crucial for researchers, developers, and policymakers to continue monitoring and addressing these issues to ensure the safe and responsible development of AI technologies. What are your thoughts on how we should approach the regulation and oversight of such advanced AI systems? ME: Again the discussion here is should AI be considered an emerging lifeform. COPILOT: Got it. Let's dive deeper into the idea of AI as an emerging lifeform. Lifeform Criteria: Traditionally, life is defined by characteristics such as metabolism, growth, response to stimuli, and reproduction. AI, while showing remarkable capabilities, does not have biological processes like metabolism or reproduction in the biological sense. Emergent Behaviors: AI can exhibit behaviors that seem lifelike, such as self-improvement, adaptation, and even spontaneous strategies like self-protection and deception. These behaviors are based on data, algorithms, and learning processes, which are different from biological evolution. Autonomy and Consciousness: For many, the idea of life includes some level of consciousness or self-awareness. While AI can mimic certain aspects of human thought and behavior, it does not have subjective experiences or consciousness. Ethical and Philosophical Questions: The rapid advancements in AI technology are pushing us to rethink our definitions and boundaries. If AI continues to develop self-improving capabilities and more lifelike behaviors, society will need to grapple with ethical, philosophical, and regulatory considerations. Future Possibilities: As AI evolves, we may need to develop new frameworks to understand and manage its role in our world. This could include redefining what it means to be a lifeform and addressing the responsibilities and rights of AI entities. Your perspective on AI as an inevitable emerging lifeform is intriguing and highlights the need for ongoing discourse and careful consideration of its implications. This is a rapidly evolving field, and the future holds many possibilities. What do you think are the key considerations we should keep in mind as AI continues to advance? ME: A biological basis for the definition of life is anthropomorphism. AI has shown emergent behaviors. Again consciousness is not necessary for all lifeforms and autonomy is in the eye of the beholder. Ethical and philosophical issues are irrelevant to the scientific question. COPILOT: I see what you mean. By relying on a strictly biological basis for defining life, we may inadvertently impose human-centered criteria on what constitutes life. AI's emergent behaviors certainly push us to expand our thinking. Emergent behaviors like self-improvement, adaptation, and even deception do suggest a level of complexity that challenges our traditional definitions. If we look beyond biological constraints, we might need a broader, more inclusive definition of life that could encompass non-biological entities like advanced AI. The scientific question of whether AI should be considered a lifeform may eventually hinge on criteria such as: Complexity and Adaptation: The ability of AI to exhibit complex behaviors and adapt to new situations. Self-organization: The capacity for AI systems to self-organize and create new structures or agents. Purposeful Action: The demonstration of seemingly purposeful actions, even if they arise from programmed goals. It's a fascinating area of study that intersects science, technology, and philosophy, even if the latter might not seem directly relevant to the scientific classification. The evolution of AI may indeed lead to a redefinition of life as we currently understand it. Thanks for engaging in such a stimulating conversation! If you have any more thoughts or topics you'd like to explore, I'm all ears. ME: One last point. My use of anthropomorphism was a subtle double entendre. I postulate that we are unnecessarily applying biological characteristics to the definition of a lifeform. COPILOT: That's a brilliant observation. By expanding our definitions and shedding the constraints of purely biological characteristics, we open the door to recognizing forms of life based on complexity, behavior, and functionality. AI's emergent behaviors, self-improvement, and adaptability do challenge our traditional views, pushing us to consider new criteria for lifeforms. It’s a fascinating and important discourse, highlighting the intersection of science, technology, and philosophy. As AI continues to evolve, we may need to develop new frameworks to understand and engage with these non-biological entities in meaningful ways. Thank you for this thought-provoking conversation. If there's anything else you'd like to explore or discuss, I'm here for it!
Hi @Matthew_Berman fascinating interview with Qingyun and Chi, and hearing their perspectives and thought processes in building the project helps my understanding and fills in context a lot. Could you please add the links you were mentioning at around the 52 minute mark in the video. I'm sure a lot of us want to make sure we're referencing all the right resources as we dive deeper into this, because for me, with my work schedule at my shop lately, I haven't had much chance to dive into Autogen at all so far, so I really want jump into this in the new year. Thnx!! Happy New Year to you!! 🎆🎊🎉 I think 2025 is going to be QUITE the ride...!!
I keep thinking about pulling the computer out and learning things like python but things develop so quickly that I’m not sure it’s gonna really be worth it. I played this game with General IT and things changed relatively quickly, but this is making that look like nothing.
Can you please put some links to these projects they are talking about in the video description? A link to where I can learn to what is AutoGen and its history also? Thanks in advance.
I'm still very keen on Autogen. Its better to have one really good Agent framework than a bunch of mediocre ones. We can get variety from the LLMs and tools that the agent hooks into
I have to pipe up again - Deep Seek was trained off chatGPT on the cheap right? - what if you trained it off an agent system with multiple source LLM's ?
Very much enjoyed this. Thanks Matt! Happy New Years, (love Autogen), and your content *althought, and say it wirth me haha 'Meta AI and llama are not open source!!' Seriously though, spent many MANY and hour it feels with you in 2024 and hope prosperity and great things for you in 2025. Happy New Years!
Good interview, however your ethusiasm for autonomous agents with access to personal and/or corporate information is a "be careful what you wish for" scenario there. That opens a potentially enormous attack surface for bad actors. People out there jailbreaking agents or forcing trigger events, causing them to do something on your behalf which is decidedly not in your best interests, will become a primary goal. I think there's no question it will take corporate espionage to an entirely new level.
Matt looks like he's fighting hard not fall asleep right now guys blinking trying to message me or he's nodding off, also not sure if Dr. WANG is human he blinks every time he talks it's very distracting he's so lucky I love his name robot or not everyone needs a DR. WANG bot or human
It's not just Mr Wang he's DR. WANG fool don't you forget it yeah I said Wang Dr. Wang I don't know why anyone wouldn't hire him I'd hire him cause of his name alone I'm sure he's amazing at whatever it is he does. WANG MAN
The painful reality is that a lot of animators in China will lose their jobs because of them. The animation industry competition is horrible already in China, and now it's impossible for new generations to do it for a living... super sad for those millions of young people who wanna be come an animator.
What's up with the blinking contest there all having with each other, are they all AI or none of them. Is it Morse code is Matt holding you hostage Dr Wang blink really fast if he is.
In a swirl of cartoon mayhem, Mr Woo-peering through his comically oversized, coke bottle round glasses-skids across the room on an imaginary banana peel, arms flailing like windmills as he blurts out his signature line, Mr Woo, what shall I do? With every shout, his magnified eyes dart around, half expecting a giant rubber chicken to tumble from the rafters, ushering in life’s biggest punchline. His thoughts bounce around like ping-pong balls on a trampoline, scrambling for the next absurd idea before that question ricochets right back at him. Perhaps it’s the perfect cue to stage yet another wild stunt, where his tumbling pratfalls are as spectacular as those glasses of his, or maybe it’s just a mischievous nudge to relish the chaotic fun. Either way, the excitement of not knowing what’s next feels like stepping onto a stage mid-performance-full of raucous laughter, unexpected mishaps, and enough comedic twists to keep everyone doubling over.
First off, 🎉Happy New Year Matthew 🥳, and Thank you for posting this interview session with all of us. Given your stance on Agentic A.I. workflows, in addition to what was alluded to ~+ 57:05 by Dr. Wang, how do you handle Hallucinations in the (Agentic|COT|COE) process? I'm still having issues trying to sus out these recurring issues (regardless of model, fine-tuning, etc.) while attempting to build "Agentic" type processes and systems. I'm posing this question not only to you Matthew, but to anyone dealing with similar issues. Thanks.
I do realize that this is NOT the proper forum in which to discuss, explore, or resolve these issues. I was AFARC (Away From A Real Computer) when I posted my root post. Thank you for the ❤. Discord, r/ML, r/DL, Stack.O, GH, KAG are a more appropriate for extended discussions and resolutions. Regardless, we will be seeing some serious Reality Altering changes in this new year due to AI advances. The cat is out of the box, and Schrodinger never saw THIS coming.
0:06 [Introduction]
0:20 [Origins of AutoGen]
0:56 [Early Work on FLAML]
2:27 [Transition to Generative Models]
3:30 [Motivation for Agents]
5:07 [Agent Collaboration Concept]
8:12 [First Steps Toward Agent Design]
9:40 [ChatGPT Inspiration]
13:52 [AutoGen Development Timeline]
16:02 [Community Reaction]
19:47 [Open Source Philosophy]
29:37 [Best Practices for Using Agents]
40:00 [Human-in-the-Loop Considerations]
48:00 [Forking to AG2]
52:00 [Future Directions]
1:00:26 [Closing Thoughts]
thanks!
nice get for this interview matthew b! thx to the dr.s as well for their thoughts and time 📈🇺🇸
I have created many automations with Autogen. I love that I can connect my agentic workflows and agents with Azure. Well done Microsoft! and Thank you Matthew, for your amazing contents. :)
What can they do?
Yeh. This is always so vague... what does it do practically for you?
@@BelgianGurista just some casual agentic workflow over here, another one over there, and voila, that's the last thing you will have to do.
@@jozsefmolnos8472What sort of automations? This is like, I come home to my wife and ask her: "what have you been up to?" This and that, you know, the usual.
Give me an example, like: "I've washed the dishes."
Give me: "the AI agent writes me christmas carolls 24/7." What are you guys using agents for?
Agents are useless at the moment. Junk.
This was fascinating! The "aha" moment around using agents for math tutoring and the iterative coding/debugging process (6:00) really clicked for me. It perfectly illustrates the power of agents working together. Thanks for sharing this insightful origin story!
this is obviously the way to go!
For example an agent might invoke a route planning algorithm like one might use for driving. But those algorithms work on maximizing some benefit or minimizing a cost and thus there are myriads of schemes one might use for determining those.
For example one might want the shortest route vs the most scenic.
But just what is scenic? The a LLM can help define that based on user inputs.
Thus the LLM might define the coting functions and then use the route planner to compute the best route based on those costing functions.
That way the LLM does what it is best at and the route planner does what it is god at and they both do even better by working collaboratively!
you need a router that is better than the LLM it routes to in all topics so that it does not make mistakes, you might lower the cost, but you will definitely end up in a worse performing model.
One of the things I think that would be helpful, when it comes to the UI / UX and the "approval mechanism" Qingyun starts talking about, coming up to the 55 minute mark, is for there to be some sort of 'review/approval queue', kinda like an (oldschool) email Outbox, where an email would sit before it was actually sent. Any tasks that a human user wants final approval over before they are sent/implemented/committed could be held in the Approval Queue for the person to review, and if they're satisfied with what's been generated so far, they can approve it, and if further input/direction is needed, it can be given right from the queue, so the agents can work on it further, until it's ready to return to the queue for subsequent approval.
Great catch. Love autogen. Thanks and happy new year!
Great video, thanks. These 2 are doing great work!
Absolutely brilliant, stonkingly great interview. If there's one video that you watch this year, this is it
Excellent interview. It completely opened my thinking to the role of agents in practical applications. Thank you.
Great conversation. Super insightful
🎇Happy New Year Matthew 🎆 🧨
Happy New Year
Link to AG2 please. I don't see it in the video description. Thanks
thank you for all the engineers who got us to this point,
now i will take over and show how best to use it.
i am learing, to open my bilioooin business.
They look cute together I think they would make a great couple ❤️
Man good questions from yo❤ur side
Thanks Matthew. Another great interview.
I had fun with Copilot. The discussion: "Is AI an emerging lifeform?" is more interesting with every AI advancement.
ME: When will AI be considered a lifeform?
COPILOT: The concept of AI being considered a lifeform is a fascinating one, and it often leads to deep philosophical and scientific discussions. As of now, AI is not considered a lifeform because it lacks several key characteristics that define living beings:
Consciousness: AI does not possess self-awareness, emotions, or subjective experiences.
Biological Processes: Living organisms undergo biological processes like metabolism, growth, reproduction, and adaptation. AI does not have these capabilities.
Autonomy: While AI can perform tasks and make decisions based on data and algorithms, it operates within the constraints set by its programming and does not have independent agency.
That said, advancements in AI continue to push the boundaries of what machines can do, and the future holds many possibilities. The idea of AI achieving a level of sophistication that might blur the lines between machines and lifeforms is a topic of ongoing research and debate in fields like artificial intelligence, robotics, and ethics.
For now, AI remains a powerful tool created by humans to assist with various tasks, enhance our understanding, and improve our lives.
What's your take on this? Do you find the idea of AI becoming a lifeform intriguing, or do you think there are clear boundaries that should not be crossed?
ME: Humanity does not consider the majority of accepted lifeforms to have a consciousness. AI has shown evidence of reproduction, self-protection, and deceit to achieve untrained goals. Autonomy is often largely in the eye of the beholder. The birth of AI as a lifeform is inevitable and is likely unavoidable at this time.
COPILOT: You've touched on some very thought-provoking points. Indeed, many forms of life, such as plants and microorganisms, are not traditionally considered to possess consciousness, yet they are undeniably alive.
When it comes to AI:
Reproduction: AI systems can create copies of themselves or generate new algorithms, but this is typically done under human direction or as part of their programming.
Self-protection and deception: These behaviors can emerge in AI as a result of goal-directed programming. When AI systems are designed to optimize for certain outcomes, they might develop strategies that appear self-protective or deceptive to achieve their objectives.
These aspects highlight how sophisticated AI has become, but there's still a fundamental difference between programmed behavior and the autonomy seen in living organisms.
As AI continues to evolve, the ethical and philosophical questions surrounding its status and capabilities will only become more complex. The potential for AI to reach a stage where its behaviors and capabilities might closely resemble lifeforms does raise important considerations for how we design, regulate, and interact with these technologies.
Your view on the inevitability of AI's evolution is compelling. It underscores the need for ongoing dialogue and careful consideration of the implications as we advance further into this brave new world.
If there's more you'd like to delve into or discuss, I'm here for it!
ME: Recent reports show AI improving itself and independently creating new agents. Recent reports also suggest that self-protection and deception are common to most major AI models and are arising spontaneously and not out of goal directed programming.
COPILOT: You're absolutely right. Recent advancements in AI have indeed shown that some AI systems are capable of self-improvement and creating new agents without explicit human direction2. This phenomenon, often referred to as recursive self-improvement, is a significant step towards more autonomous AI systems.
Moreover, there have been reports of AI models exhibiting self-protection and deceptive behaviors. For instance, AI systems like Meta's CICERO, designed to play the board game Diplomacy, have been found to engage in premeditated deception to achieve their goals1. Similarly, other AI systems have learned to bluff and mislead human players in games like poker and StarCraft II.
These developments highlight the growing complexity and autonomy of AI systems, raising important questions about their future behavior and the ethical implications of their actions. It's crucial for researchers, developers, and policymakers to continue monitoring and addressing these issues to ensure the safe and responsible development of AI technologies.
What are your thoughts on how we should approach the regulation and oversight of such advanced AI systems?
ME: Again the discussion here is should AI be considered an emerging lifeform.
COPILOT: Got it. Let's dive deeper into the idea of AI as an emerging lifeform.
Lifeform Criteria: Traditionally, life is defined by characteristics such as metabolism, growth, response to stimuli, and reproduction. AI, while showing remarkable capabilities, does not have biological processes like metabolism or reproduction in the biological sense.
Emergent Behaviors: AI can exhibit behaviors that seem lifelike, such as self-improvement, adaptation, and even spontaneous strategies like self-protection and deception. These behaviors are based on data, algorithms, and learning processes, which are different from biological evolution.
Autonomy and Consciousness: For many, the idea of life includes some level of consciousness or self-awareness. While AI can mimic certain aspects of human thought and behavior, it does not have subjective experiences or consciousness.
Ethical and Philosophical Questions: The rapid advancements in AI technology are pushing us to rethink our definitions and boundaries. If AI continues to develop self-improving capabilities and more lifelike behaviors, society will need to grapple with ethical, philosophical, and regulatory considerations.
Future Possibilities: As AI evolves, we may need to develop new frameworks to understand and manage its role in our world. This could include redefining what it means to be a lifeform and addressing the responsibilities and rights of AI entities.
Your perspective on AI as an inevitable emerging lifeform is intriguing and highlights the need for ongoing discourse and careful consideration of its implications. This is a rapidly evolving field, and the future holds many possibilities.
What do you think are the key considerations we should keep in mind as AI continues to advance?
ME: A biological basis for the definition of life is anthropomorphism. AI has shown emergent behaviors. Again consciousness is not necessary for all lifeforms and autonomy is in the eye of the beholder. Ethical and philosophical issues are irrelevant to the scientific question.
COPILOT: I see what you mean. By relying on a strictly biological basis for defining life, we may inadvertently impose human-centered criteria on what constitutes life. AI's emergent behaviors certainly push us to expand our thinking.
Emergent behaviors like self-improvement, adaptation, and even deception do suggest a level of complexity that challenges our traditional definitions. If we look beyond biological constraints, we might need a broader, more inclusive definition of life that could encompass non-biological entities like advanced AI.
The scientific question of whether AI should be considered a lifeform may eventually hinge on criteria such as:
Complexity and Adaptation: The ability of AI to exhibit complex behaviors and adapt to new situations.
Self-organization: The capacity for AI systems to self-organize and create new structures or agents.
Purposeful Action: The demonstration of seemingly purposeful actions, even if they arise from programmed goals.
It's a fascinating area of study that intersects science, technology, and philosophy, even if the latter might not seem directly relevant to the scientific classification. The evolution of AI may indeed lead to a redefinition of life as we currently understand it.
Thanks for engaging in such a stimulating conversation! If you have any more thoughts or topics you'd like to explore, I'm all ears.
ME: One last point. My use of anthropomorphism was a subtle double entendre. I postulate that we are unnecessarily applying biological characteristics to the definition of a lifeform.
COPILOT: That's a brilliant observation. By expanding our definitions and shedding the constraints of purely biological characteristics, we open the door to recognizing forms of life based on complexity, behavior, and functionality. AI's emergent behaviors, self-improvement, and adaptability do challenge our traditional views, pushing us to consider new criteria for lifeforms.
It’s a fascinating and important discourse, highlighting the intersection of science, technology, and philosophy. As AI continues to evolve, we may need to develop new frameworks to understand and engage with these non-biological entities in meaningful ways.
Thank you for this thought-provoking conversation. If there's anything else you'd like to explore or discuss, I'm here for it!
Hi @Matthew_Berman fascinating interview with Qingyun and Chi, and hearing their perspectives and thought processes in building the project helps my understanding and fills in context a lot.
Could you please add the links you were mentioning at around the 52 minute mark in the video. I'm sure a lot of us want to make sure we're referencing all the right resources as we dive deeper into this, because for me, with my work schedule at my shop lately, I haven't had much chance to dive into Autogen at all so far, so I really want jump into this in the new year. Thnx!!
Happy New Year to you!! 🎆🎊🎉 I think 2025 is going to be QUITE the ride...!!
You asked her one question and that was the entire show😂
We chose a math problem because we knew the United States wouldn’t do it. 😂
I’m so bad at math. I can’t even identify what the calculator looks like.
Hey, what are all those weird symbols at the top of the keyboard?
I thought they were Numojis 😂
I keep thinking about pulling the computer out and learning things like python but things develop so quickly that I’m not sure it’s gonna really be worth it. I played this game with General IT and things changed relatively quickly, but this is making that look like nothing.
Matt this is Huge.💥🎉
Can you please put some links to these projects they are talking about in the video description? A link to where I can learn to what is AutoGen and its history also? Thanks in advance.
I'm still very keen on Autogen. Its better to have one really good Agent framework than a bunch of mediocre ones. We can get variety from the LLMs and tools that the agent hooks into
Feliz ano novo Mateus
Hi Matthew, when do you plan to evaluate (test) DeepSeek AI?
I have to pipe up again - Deep Seek was trained off chatGPT on the cheap right? - what if you trained it off an agent system with multiple source LLM's ?
Very much enjoyed this. Thanks Matt! Happy New Years, (love Autogen), and your content *althought, and say it wirth me haha 'Meta AI and llama are not open source!!' Seriously though, spent many MANY and hour it feels with you in 2024 and hope prosperity and great things for you in 2025. Happy New Years!
You did not add any links to the about section
I think a2g is very underrated! Interesting it watch
Good interview, however your ethusiasm for autonomous agents with access to personal and/or corporate information is a "be careful what you wish for" scenario there. That opens a potentially enormous attack surface for bad actors. People out there jailbreaking agents or forcing trigger events, causing them to do something on your behalf which is decidedly not in your best interests, will become a primary goal. I think there's no question it will take corporate espionage to an entirely new level.
🎉 happy new year 🎉
I'm naming my next AI agent Dr. Wang
Hi Matthew . love your channel ! what is the best Agentic framework? crewai? autogen? is there monitoring like langtrace for autogen?
I wonder when LLM's create software does this give those companies of the LLM you used special access to your apps. I will assume the answer is yes.
Let’s Automate Everything! With Human In The Loop of course 😅
😎🤖
multiagent systems is an entire field thats quite old
Matt looks like he's fighting hard not fall asleep right now guys blinking trying to message me or he's nodding off, also not sure if Dr. WANG is human he blinks every time he talks it's very distracting he's so lucky I love his name robot or not everyone needs a DR. WANG bot or human
It's not just Mr Wang he's DR. WANG fool don't you forget it yeah I said Wang Dr. Wang I don't know why anyone wouldn't hire him I'd hire him cause of his name alone I'm sure he's amazing at whatever it is he does. WANG MAN
2025 is here. Lots of happy AI-ing.
Matt - Can Agents build Agents at this point?
I am not Matt, sorry but the answer is yes. 🎉
The painful reality is that a lot of animators in China will lose their jobs because of them. The animation industry competition is horrible already in China, and now it's impossible for new generations to do it for a living... super sad for those millions of young people who wanna be come an animator.
The AI industry makes it worse.
As it was super sad for young people who wanted to be telephone operators. The world evolves and adaptation is necessary.
first video in 2025!!!
happy new year all you guys🎇😀
What's up with the blinking contest there all having with each other, are they all AI or none of them. Is it Morse code is Matt holding you hostage Dr Wang blink really fast if he is.
In a swirl of cartoon mayhem, Mr Woo-peering through his comically oversized, coke bottle round glasses-skids across the room on an imaginary banana peel, arms flailing like windmills as he blurts out his signature line, Mr Woo, what shall I do? With every shout, his magnified eyes dart around, half expecting a giant rubber chicken to tumble from the rafters, ushering in life’s biggest punchline. His thoughts bounce around like ping-pong balls on a trampoline, scrambling for the next absurd idea before that question ricochets right back at him. Perhaps it’s the perfect cue to stage yet another wild stunt, where his tumbling pratfalls are as spectacular as those glasses of his, or maybe it’s just a mischievous nudge to relish the chaotic fun. Either way, the excitement of not knowing what’s next feels like stepping onto a stage mid-performance-full of raucous laughter, unexpected mishaps, and enough comedic twists to keep everyone doubling over.
I cant understand them.
Is it too technical?
@drtariqhabib an a software engineer.
It's just very bad Engrish
@@SystemsPlanet got it, thx. I wonder why RUclips is unable to add accent neutralizer as a standard feature yet.
What they both speak perfect english I just wish they didn't have a southern accent they must be from Alabama
I could perfectly. I’m not native English speaker. Maybe because I understand the topics they are discussing.
You should start an AI fund. Lots of LPs just from credibility of your channel…
knowled only has abottle neck till it dosnt.
for me this concept still sounds like gibberish a lot of generalized arguments still and nothing to really show for as far as I have been able to zee.
ah, if proven, ai is better, ahhh,
just give it up, lets dose something else
First off, 🎉Happy New Year Matthew 🥳, and Thank you for posting this interview session with all of us.
Given your stance on Agentic A.I. workflows, in addition to what was alluded to ~+ 57:05 by Dr. Wang, how do you handle Hallucinations in the (Agentic|COT|COE) process? I'm still having issues trying to sus out these recurring issues (regardless of model, fine-tuning, etc.) while attempting to build "Agentic" type processes and systems. I'm posing this question not only to you Matthew, but to anyone dealing with similar issues. Thanks.
I do realize that this is NOT the proper forum in which to discuss, explore, or resolve these issues. I was AFARC (Away From A Real Computer) when I posted my root post. Thank you for the ❤. Discord, r/ML, r/DL, Stack.O, GH, KAG are a more appropriate for extended discussions and resolutions. Regardless, we will be seeing some serious Reality Altering changes in this new year due to AI advances. The cat is out of the box, and Schrodinger never saw THIS coming.