The enthusiasm of the interviewer and her ability to ask meaningful questions make the whole interview very informative and enjoyable!
Interesting. Great podcast, I’ve enjoyed every episode. The production and sound are especially good 👏👏👏
Great interview. Clear display of knowledge. Thx
Outstanding interview. Great interviewer. Very professional
I think about agents planning in terms of paths (or call them chains or loops) in abstraction space. Abstraction space is a graph of concepts. I think this is the most useful way for them to learn: identify abstractions and then use them. Think about that and apply it to your research, please. It makes the users interacting with agents capable of understanding their planning and knowing what they would do. That's essential for trustworthy AI agents.
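To make the comment's idea concrete: if abstraction space is a graph of concepts, a "plan" is just a path through it. Here is a minimal sketch, assuming a hand-built toy graph (all concept names are hypothetical, invented for illustration), using breadth-first search to find the shortest chain of abstractions from a starting concept to a goal:

```python
from collections import deque

def plan_path(graph, start, goal):
    """Breadth-first search for a shortest chain of concepts
    linking `start` to `goal` in an abstraction graph."""
    frontier = deque([[start]])  # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path  # first goal hit is a shortest path (BFS)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None  # goal unreachable from start

# Hypothetical concept graph: an edge means "this abstraction enables that one".
concepts = {
    "open inventory": ["select axe"],
    "select axe": ["chop tree"],
    "chop tree": ["collect wood"],
    "collect wood": ["craft plank"],
}

print(plan_path(concepts, "open inventory", "craft plank"))
# → ['open inventory', 'select axe', 'chop tree', 'collect wood', 'craft plank']
```

Because the returned path is an explicit sequence of named concepts, a user can read off exactly what the agent intends to do before it acts, which is the interpretability benefit the comment is pointing at.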
Great video with great people
this is some great content right here
This podcast felt somewhat detached from how Reinforcement Learning and Large Language Model research are merging. It still talks about the world of RL in games.
Rather than a benchmark for assessment, there could be a comparison between a fairly generalist human's abilities and a computational system as a base for assessment.
Games may offer some variables and potential for decision-making and action assessment, but real-world challenges may be broadly diverse and richer, so working with a generalist may provide examples, and equally the model may improve the weaknesses of the generalist. Learning from each other would be key to providing real-world understanding for machine learning and would create opportunities for millions of training examples.
I'm just curious... I've been using NotebookLM a lot lately. Is the host of this podcast the voice of the female host in there? I noticed some familiarity in the voice... but I don't know if I'm tripping 😅
The NotebookLM host has a thick American accent; they do not sound very similar, imo.
@@torbenhr450 yeah the voices are very different
Glad to see Hannah getting stuck in with AI these days. Two of my favorite things.
40:32 Look up Clone Robotics. They have intricate hands with hydraulics that can manipulate small objects and are 10x stronger than other hydraulic hands.
The most important objectives in life are highly subjective: politics, religion, fashion, architecture, etc. How does AI deal with aesthetic experience, emotions, thoughts?
Bring back the 60fps
I can't help but think about the perfect slave conditions: no reward, they are just ready to work at all times. Imitation learning 22:38
Somebody once said, I forget who, that no one they had ever talked to had put forward a possible utopia where robots have taken over all the labor. But I have one: just food distribution and multiple ARGs, whereby 10,000+ people can all play Star Trek and many other universes in real time on play sets built for them by an ASI system that knows what everyone is doing on all of Earth and can make us all believe we have transcended into superhumans through mastery of the total environs.
just thought this would be a good place to put that. ✌️
Awesome stuff. Unfortunately, the hypothalamus is a trigger for most individuals when thinking about AI and the future; understandable, but not ideal. Great to see you're out here spreading hope and optimism!
In just a few years we will be talking to NPCs in games which are smarter than any human we've ever met in real life.
Makes sense
I wanna be at places like DeepMind one day.
What about double agents? Be careful out there.
Why, why, why, I ask. I'm not questioning whether we can achieve it, but are we so bored and have no other problems left to solve?
You thought RuneScape had a botting problem before? Wait until the AIs take over, loool.
Have a bunch of Shroud agents owning noobs at the games.
same
Maybe I didn’t understand the guy completely, but he didn’t get across how this SIMA could scale to larger and more varied environments. So much so that it seems somewhat ineffective, inconsequential, incapable, and irrelevant. Perhaps it will lead to a new paradigm of AI development, but from what was discussed here, it seems it could be a dead end.
I believe it would require the collection of a lot of data of humans just ... doing stuff. Then, like LLMs, these models could learn to mimic humans doing stuff and do stuff for themselves.
It does seem a bit underwhelming at the moment, but I wonder if it will get to a point where the agents are good enough that they can explore games and explain what they are doing. Then, that autonomously generated exploration data could be used as better training data, similar to what people expect OpenAI to have done with o1 when training their next larger LLM. You would just need human data to learn the language to use and to bootstrap development.
A real missing mood: talking all about building agents but saying nothing about the dangers or risks.