This is the best intro to advancing AI toward AGI with SOAR, showing deep insight into how to make AI that is fundamentally capable of doing what humans do, along with an explanation of how to develop the system. As someone who has always loved cognitive psychology and how we explain what we do, this insight into AI, and a clever plan to do the same with just math, language, and code, is in short - absolutely fabulous!
I'm just a dumb layman but I'm enjoying these classes immensely.
I've needed to build a mind since 2003 - anything that exhibits goal-oriented problem-solving capability, from an insect to a reptile, to mammals, and beyond. I think people are focusing too much on "human" intelligence. We have yet to produce a dynamic system that can be put into a robotic insect body and at least operate on par with insect-level intelligence, and I believe that if we can do that, then we can scale up the abstract processing capabilities (i.e. neocortical/hippocampal function) toward more mammalian and even human-level intelligence.

How does a bee calibrate its wing flaps to steer toward the flower its attention has zeroed in on, so that it can land? Then it follows through on a series of behaviors, each adapting to the unique instant it finds itself in - moving each leg as it has many times before, but uniquely for each moment. Then, once some need has been satisfied, it flies off to another bloom, or finds its way back to its nest, visually. This is some advanced stuff! I don't believe that it's "instinctual" or "hard-coded". I believe that the configuration of the bee's brain, and its physicality, lead to it learning how to do these things within seconds of emerging from incubation, quickly calibrating based on the inherent reward of novelty and of furthering potential entropy, and cementing these behaviors within a day. It can lose a leg and it will adapt, to the best of its ability, which is a far cry more advanced than anything Boston Dynamics has managed, for instance. Insects exhibit the same flexibility and awareness that all creatures do, just on a much smaller scale, but it's still exactly the thing we have yet to figure out.

The complex systems we've been engineering for decades to emulate human-level understanding and learning approach the problem in the wrong way. It's like making a cake that looks like a cake and smells like a cake, but when you go to take a slice, underneath it's glued-together sawdust - a facsimile that isn't doing anything near what the time-tested, evolved biological intelligence mechanisms do, which are clearly very powerful. Insects obviously are not modeling reality anywhere near the level of humans or other animals, but whatever they're doing is very effective. How do they modulate behavior in pursuit of a biologically driven motivation? How do they passively learn their environment in passing and then apply that awareness in the context of pursuing a goal? These are questions I believe we'll answer much faster by studying insect brains, and they are the same answers we've been looking for with regard to producing animal/human-level intelligence. There's an ingredient, a mechanism, that is universal across all creatures that exhibit awareness, goal pursuit, and problem-solving ability, and as far as I've seen we've come no closer to figuring out what it is in the 15 years I've been studying brains and AI.
Amazing lecture. Thank you so much!
Hey Lex, please invite Jeff Hawkins of Numenta for one of your talks.
Deric Pinto 👍🏾!!
I agree; in my opinion, Jeff Hawkins has made the furthest progress on cognitive AI.
And a harsh comment: this guy doesn't get it and goes straight to coding without having a clue first... (like those AI winter harbingers). We need a genius who can think outside the box, like Galois.
Jeff Hawkins is tops.
I make a point of watching him every year, to catch up on new developments.
There doesn't need to be a bunch of derivatives on the board, but of course there are.
I love AGI
Hi Dr. Fridman,
Is there any chance of Dr. Derbinsky's lecture notes being posted?
Thank you,
At 24:23
There shouldn't be a "memory" in the architecture! "Memory" is a description of what's happening in the present. It's always about the present. But if you think about it, you can never actually know a "present" - by the time you know it, it's already the past. I'm working on AGI too.
The focus on the importance of forgetting reminds me of a 60 Minutes episode about a few people who hardly ever forget, so this approach falters a bit for me. Source: www.cbsnews.com/news/marilu-henner-super-memory-totally-a-gift/ But I can understand it if memory is severely limited, for example in your application of a game on a smartphone.
It is very valuable to forget. It lets us be succinct in describing all the hypotheses we found to be correct. Kind of love that about cognitive architectures. We associate while encoding, and depend on our core tree of knowledge. It allows AGI to distill things down, so to speak. Very innovative, and needed to be human-competitive.
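For anyone curious what decay-based forgetting can look like in practice, here is a minimal sketch in the spirit of the base-level activation used by architectures like ACT-R and Soar's working-memory decay. The class names, decay rate, and threshold are illustrative assumptions, not the lecture's actual parameters:

```python
import math
import time

class WorkingMemoryElement:
    """Tracks access times so an activation value can be computed."""
    def __init__(self, content):
        self.content = content
        self.access_times = [time.time()]

    def touch(self):
        # Re-accessing an element boosts its activation.
        self.access_times.append(time.time())

    def activation(self, now, decay=0.5):
        # Base-level activation (ACT-R style): log of summed power-law decay
        # over past accesses; the small epsilon avoids a zero time delta.
        return math.log(sum((now - t + 1e-3) ** -decay for t in self.access_times))

class WorkingMemory:
    """Forgets elements whose activation decays below a threshold."""
    def __init__(self, threshold=-2.0):
        self.threshold = threshold
        self.elements = []

    def add(self, content):
        self.elements.append(WorkingMemoryElement(content))

    def forget(self):
        # Drop anything whose activation has decayed below threshold.
        now = time.time()
        self.elements = [e for e in self.elements
                         if e.activation(now) >= self.threshold]
```

An element that keeps getting touched stays above threshold; one that is encoded once and never revisited decays away, which is the "distilling down" the comment describes.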
Isn't it crazy that a stunt is collecting the debate of native design, where the outcome is the "dare" - the complications of fused mental awareness of robotics that runs on a "human design"?
just remember dude, it's 'its' unless you mean to convey 'it is'. just a tiny assist from an old gardener.
and thanks for these awesome uploads, just what an old gardener's brain likes to chew on at the end of another brainless day!
Really like this lecture. Is it possible to get the slides as a quick summary/reference?
Will you post the next lecture videos?
The lectures are really amazing!
No mention of "Westworld" in the intro? :)
Make a YouTube bot that posts links from the video references. For instance, the recommended reading links :)
hanselberry I'm on it
@sumlercorp8414 did you achieve this? Would be really useful!
pretty dank
In Britain we call them the noughties, guy.
❤️
The Spaun audio is awful. I can't understand what is being said.
"ChatGPT please write me an rsync script" Oh boy.
The detailed reasons why we think any profession can be replaced by a robot: medium.com/@caps.raaj/humans-are-sophisticated-robots-40c57ddb97a6
Does anyone have any thoughts on the Sigma (∑) cognitive architecture developed at USC's ICT?
Here's the SPAUN video: ruclips.net/video/1AO2g1EgcWE/видео.html
You're welcome.
I wish engineers would stop working with the defense industry.
AI needs to have an independent, always-running unit that compares input information with concepts stored in its databases. AI will then be able to perform recognizing, accepting, rejecting, differentiating, summing, and other "thinking" operations, and even emotions: ibb.co/hvObLS
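A minimal sketch of what such an always-running comparison unit might look like, assuming stored concepts are feature vectors and recognition is just a similarity threshold. The names, thresholds, and the choice of cosine similarity are my assumptions, not the commenter's design:

```python
import math

def cosine_similarity(a, b):
    # Similarity between two feature vectors, in [-1, 1].
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

class ComparisonUnit:
    """Continuously compares incoming feature vectors against stored concepts."""
    def __init__(self, accept_threshold=0.8, novelty_threshold=0.3):
        self.concepts = {}                    # name -> feature vector
        self.accept_threshold = accept_threshold
        self.novelty_threshold = novelty_threshold

    def store(self, name, features):
        self.concepts[name] = features

    def compare(self, features):
        """Return ('recognized', name), ('rejected', name), or ('novel', None)."""
        if not self.concepts:
            return ("novel", None)
        name, score = max(
            ((n, cosine_similarity(features, f)) for n, f in self.concepts.items()),
            key=lambda pair: pair[1],
        )
        if score >= self.accept_threshold:
            return ("recognized", name)       # matches a stored concept
        if score <= self.novelty_threshold:
            return ("novel", None)            # nothing similar is stored yet
        return ("rejected", name)             # close, but not close enough to accept
```

On each tick, sensor input would flow through compare(), with "novel" results triggering storage of a new concept and "recognized" or "rejected" results feeding the downstream "thinking" operations the comment lists.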
👍
12:15
Science! Forward progress scientifically.
Again, a claimed AGI lecture without any AGI content. This is only AI; it has nothing to do with AGI. Down vote.
@gespilk AGI theory includes human cognition, cognitive evolutionary theory, and AGI. In other words, how humans are able to reason and comprehend, how that evolved from non-intelligent roundworms, and how it could be replicated non-biologically.
AGI is not derived from computational theory and is not directly related to AI. It can't run on a computer. AI that asks questions is still AI and has nothing to do with AGI. This is not what it is to me; this is what the scientific research indicates that it is.
@gespilk That isn't a contradiction. Again, AGI is not computational. There are machines other than computers.
@gespilk Oh, I see. You are still trying to insist (desperately believe) that everything can be simulated. We've burned through three entire generations of computer scientists trying that without any progress toward AGI. But somehow if we just keep doing the same thing it will work at some point. Good luck.
@gespilk I don't know what to tell you. From what has been published, I'm in the lead on AGI theory by a considerable margin and continuing to make progress. What I'm doing is vastly more effective than any other effort I'm aware of. If you can point to another group that is making better progress then you would have a point. The groups trying to simulate brains will never have a working model. But, you can keep hoping.
@gespilk So, if I told you that I wrote a disproof of an algorithmic solution two years ago, I guess you can always fall back on the hope that I'm wrong.