The goal of this video is to address a problem with current LLMs: they memorize the data fed to them rather than genuinely learning from it. These are some thoughts on how we can develop AI that thinks like a human and understands language like a human.
For discussion, you can join our Discord server:
discord.com/invite/pBj2MwHhVk
The “thinking” and “reasoning” that we call intelligence are very similar to the way an LLM thinks and reasons. We forget that when we were young, we learned by imitating the behavior our parents trained us on. As we got older, we learned how to “think” by repeating the lessons our teachers taught us.
LLMs are missing long-term memory, recursive thinking, and an oracle whose knowledge they can trust.
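To make the comment concrete, here is a minimal, purely hypothetical sketch of what "long-term memory plus a trusted oracle plus recursive refinement" could look like wrapped around a language model. `call_llm()` is a stand-in for any text-generation call, and the oracle is just a dictionary of facts the system is allowed to trust over the model's own output; none of this is a real API.

```python
# Hypothetical sketch: memory + oracle + recursive refinement around an LLM.
def call_llm(prompt: str) -> str:
    """Placeholder for a real language-model call."""
    return f"(model's best guess for: {prompt})"


TRUSTED_FACTS = {"boiling point of water": "100 °C at 1 atm"}  # toy oracle
long_term_memory: list[str] = []  # persists across questions


def answer(question: str, max_revisions: int = 2) -> str:
    # Recall anything relevant from long-term memory (naive keyword match).
    recalled = [m for m in long_term_memory if question.lower() in m.lower()]
    draft = call_llm(f"Context: {recalled}\nQuestion: {question}")

    # Recursive refinement: feed the draft back into the model a few times.
    for _ in range(max_revisions):
        draft = call_llm(f"Revise and improve: {draft}")

    # Defer to the oracle whenever it actually knows the answer.
    final = TRUSTED_FACTS.get(question.lower(), draft)
    long_term_memory.append(f"Q: {question} -> A: {final}")
    return final


print(answer("boiling point of water"))
```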
Nah, LLMs have no understanding of any meaning behind any word. They have no observation; they are built on the simple idea of next-word prediction, scaled up into complex models. We humans, on the other hand, have interacted with the world to create meaning behind words. We have ideas, and we create words to describe those ideas or observations. LLMs have no observations or ideas. There is no thinking in LLMs, just inference of learned patterns from the trained vocabulary.
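As a toy illustration of what "next-word prediction" means in its simplest form: count which word follows which in a small corpus, then always emit the most frequent successor. Real LLMs use neural networks over subword tokens, but the training objective is the same "predict the next token" idea this comment describes.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count successors for every word (a bigram table).
successors: dict[str, Counter] = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1


def predict_next(word: str) -> str:
    """Return the most frequently observed next word."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else "<unk>"


# Generate a short continuation purely by repeated next-word prediction.
word = "the"
generated = [word]
for _ in range(5):
    word = predict_next(word)
    generated.append(word)
print(" ".join(generated))  # e.g. "the cat sat on the cat"
```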
Amazing video. What software did you use for this?
We should give LLMs understanding, critical thinking, emotions, reasoning, insight, and even faith. But how do we do that?
The base idea of LLMs, being next-word predictors, is the part that makes these models unable to develop any understanding or critical thinking. I guess AGI will be based on agent models that interact with their environments.
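A minimal sketch of the observe-act loop this comment gestures at: an agent forms "meaning" (here, a belief about where the target lies) only by acting in an environment and observing the feedback. Everything below is a toy example, not a real AGI architecture.

```python
import random


class GuessEnvironment:
    """Environment: hides a number and returns feedback on each guess."""

    def __init__(self, low: int = 1, high: int = 100) -> None:
        self.target = random.randint(low, high)

    def step(self, guess: int) -> str:
        if guess < self.target:
            return "higher"
        if guess > self.target:
            return "lower"
        return "correct"


class BinarySearchAgent:
    """Agent: updates its internal belief (lo, hi) from observations."""

    def __init__(self, low: int = 1, high: int = 100) -> None:
        self.lo, self.hi = low, high

    def act(self) -> int:
        return (self.lo + self.hi) // 2

    def observe(self, guess: int, feedback: str) -> None:
        if feedback == "higher":
            self.lo = guess + 1
        elif feedback == "lower":
            self.hi = guess - 1


env, agent = GuessEnvironment(), BinarySearchAgent()
for step in range(1, 10):
    guess = agent.act()
    feedback = env.step(guess)
    print(f"step {step}: guessed {guess} -> {feedback}")
    if feedback == "correct":
        break
    agent.observe(guess, feedback)
```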