Why LLMs Are Going to a Dead End Explained | AGI Lambda

  • Published: 22 Dec 2024

Comments • 20

  • @jamesrav
    @jamesrav 3 days ago

    I like to get all viewpoints, critical and 'hype'. It sounds like a combination of pure neural nets and other techniques (opposed by Hinton) may move things forward. But your comparison of LLMs to a compressed database is persuasive. It's hard to see how people think the current methods for LLMs will lead to novel discoveries. (Good video; you should correct some spelling errors.)

  • @spectator59
    @spectator59 1 day ago

    Good content. I've been saying similar things; it's shocking how many people refuse to see or accept what's actually happening with LLMs. (Minor point: I think you could find a better TTS these days.)

  • @dan-cj1rr
    @dan-cj1rr 6 hours ago

    What now with o3? Everyone is saying it's AGI.

  • @NebulaNomad1337
    @NebulaNomad1337 20 days ago +1

    Well done!!!!!

  • @bestmoment151
    @bestmoment151 1 month ago

    Amazing video. What software did you use for this?

  • @insolace
    @insolace 1 month ago +4

    The “thinking” and “reasoning” that we call intelligence is very similar to the way an LLM thinks and reasons. We forget that when we were young, we learned by imitating behavior that our parents trained us on. As we got older, we learned how to “think” by repeating the lessons our teachers taught us.
    The LLMs are missing long-term memory, recursive thinking, and an oracle whose knowledge the LLM can trust.

    • @Square-Red
      @Square-Red 1 month ago

      Nah, LLMs have no understanding of any meaning behind any word. They have no observation; they are built on the simple idea of next-word prediction, scaled up into complex models. We humans, on the other hand, have interacted with the world to create meaning behind words. We have ideas, and we create words to describe those ideas or observations. LLMs have neither observation nor ideas. There is no thinking in LLMs, just inference of patterns learned from the training vocabulary (see the sketch below).
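
      To make "next-word prediction" concrete, here is a minimal toy sketch (plain Python over made-up bigram counts; this is not how any production LLM is built — real models train neural networks on vastly more data — but the objective is still predicting the next token):

      import random
      from collections import Counter, defaultdict

      # Toy corpus; a real LLM trains a neural network on trillions of tokens,
      # but the training objective is still "predict the next token".
      corpus = "the cat sat on the mat and the cat slept on the mat".split()

      # Count bigram statistics: which token tends to follow which.
      following = defaultdict(Counter)
      for current, nxt in zip(corpus, corpus[1:]):
          following[current][nxt] += 1

      def predict_next(token: str) -> str:
          """Sample the next token in proportion to how often it followed `token`."""
          tokens, counts = zip(*following[token].items())
          return random.choices(tokens, weights=counts, k=1)[0]

      # Generate text by repeatedly feeding the model its own output.
      word = "the"
      generated = [word]
      for _ in range(6):
          word = predict_next(word)
          generated.append(word)
      print(" ".join(generated))

      The model never "means" anything by these words; it only reproduces statistics of what followed what in its training text, which is the commenter's point.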

    • @tomblock7774
      @tomblock7774 1 day ago +2

      It's actually completely different from how we learn. We don't need to see everything thousands of times to understand how to repeat it. We also have a world model from a very young age, one that many (if not all) animals have as well, and it doesn't involve language. A great example is something like Sora: any mammal has a far better understanding of physics than Sora does. Our brains are not like neural networks, and it's really a shame those networks are named as such, because it makes people believe they are similar when they are not.

  • @dfas1497tcf3
    @dfas1497tcf3 1 month ago

    We should give LLMs understanding, critical thinking, emotions, reasoning, insight, and even faith. But how do we do that?

    • @Square-Red
      @Square-Red 1 month ago

      The base idea of LLMs, being a next-word predictor, is the part that makes these models unable to develop any understanding or critical thinking. I guess AGI will be based on agent models that interact with their environments.

    • @zandrrlife
      @zandrrlife 2 days ago

      @Square-Red?? That's wholly wrong. There is research showing that LMs are better at next-word prediction than humans, and that next-word prediction does learn reasoning if it's encoded in the data. Procedural knowledge drives these models. Are there pitfalls to next-token prediction? Yes, which is why we need to generate multiple tokens. The issue isn't the LM, bro, it's the data.

    • @Square-Red
      @Square-Red 2 days ago

      @zandrrlife That's exactly it: data. Biased data. Memorisation of data. That is not reasoning, understanding, or critical thinking. We don't speak via next-word prediction. We have ideas, and then we try to communicate those ideas, which results in the evolution of language. LMs have never even interacted with the environment whose language they are using. They might predict the word 'up' but not even know the physical meaning of up.

  • @lukasritzer738
    @lukasritzer738 2 days ago

    Anyone else think it's funny that they didn't call generative AI "GAI"?

  • @nanotech_republika
    @nanotech_republika 10 hours ago

    Nice presentation, and some of the ideas are good. But you are wrong on a few major points in that video. I felt like you made this video to have somebody explain it to you, because you have so many questions. Your main misunderstanding is that the LLM does not have understanding of the information it learns. The fact is that even the smallest neural network (say, from the 1986 Hinton paper) has some understanding. Not perfect, never perfect, but it definitely has some. It is easy to show.
    Another main point you are wrong about: you mentioned static neural nets having problems updating their knowledge, and you said that this is in contrast to humans. How do you know how this is done in humans? Don't we have huge "static weights" in our brains? What is the difference? You will be surprised when you find out!
    Therefore, I give you a big thumbs down for the factual content.

    • @AGI.Lambdaa
      @AGI.Lambdaa  9 hours ago +1

      For the first point, you can refer to this article: LLMs and Artificial General Intelligence Part IV: Counter-Arguments, Searle’s Chinese Room, and Its Implications. I hope this will provide you with some useful insights, as I am unable to provide extensive details at the moment. However, I am currently working on a second video that will explain these concepts further.
      ahmorse.medium.com/llms-and-artificial-general-intelligence-part-iv-counter-arguments-searles-chinese-room-and-its-9cc798f9b659
      To gain a better understanding, I recommend exploring how decentralized reinforcement learning (RL) agents acquire language. Comparing this process to how large language models (LLMs) operate will help highlight the fundamental differences between the two approaches.
      Regarding the second point, it's kind of funny to think we have "static weights" in our brains. I thought we have continuously learning neurons, which adapt over time. Strange.
      For now, I suggest watching this video: ruclips.net/video/zEMOX3Di2Tc/видео.html
      By the way, I have already shared arguments for why LLMs do not have language understanding, and I am expecting responses with reasoning. It seems there is a conceptual distinction between the "understanding" I am discussing in the video and the one you are referring to.

  • @daburritoda2255
    @daburritoda2255 2 days ago

    o3 is AGI