Noam CHOMSKY on the Limits of AI: Why Machines Can’t Understand Language Like Humans

  • Published: 6 Jan 2025

Comments • 216

  • @thecomputingbrain2663
    @thecomputingbrain2663 2 months ago +10

    In the video above, as a computational neuroscientist, I would agree with Chomsky on most accounts. Equating thinking with programmatic inference is indeed not tenable. However, I disagree that we learn nothing from AIs and LLMs. They do give us a perspective on how we encode facts. In an important sense, the encoding of facts in neural networks must be isomorphic with what brains acquire, even if they do so with a different substrate.
    For instance, word embedding should be seen as an example of how semantics gets embedded in a network via connectivity, and something like embedding will also exist in the brain.
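[Editor's note] The embedding point above can be made concrete with a toy sketch. The three-dimensional vectors and the vocabulary below are invented for illustration (real embeddings are learned from data and have hundreds of dimensions); the point is only that geometric closeness can encode semantic relatedness.

```python
import math

# Hypothetical 3-d embeddings, hand-picked for illustration only;
# real embeddings are learned from data, not written by hand.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    """Cosine similarity: near 1.0 means same direction, near 0.0 means unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Semantically related words sit closer together than unrelated ones.
print(cosine(embeddings["king"], embeddings["queen"]))  # high (~0.99)
print(cosine(embeddings["king"], embeddings["apple"]))  # low  (~0.30)
```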

  • @DavidWhy-y7i
    @DavidWhy-y7i 2 months ago +16

    Happy to hear from Dr Chomsky

    • @mohamonse
      @mohamonse 2 months ago

      Happier for this unexpected encounter. Very enlightening. Thank you very much.

  • @letMeSayThatInIrish
    @letMeSayThatInIrish 2 months ago +6

    Glad to see Chomsky alive and kicking. Only a few years back he introduced a novel idea of language; namely that it might have evolved not primarily for communication, but for thinking. I find this quite convincing, and it turned my perspectives upside down. I wish more people younger than 80 could do the same for me.
    I have to disagree with many of his views on AI presented here, though. For instance, I think machines can do things. And I don't care for mixing speculations about 'consciousness' and similar vague concepts into the discussion about machine learning.

  • @AlvinaManley
    @AlvinaManley 1 month ago +25

    judgmentcallpodcast covers this. Chomsky discusses AI language limitations.

  • @markplutowski
    @markplutowski 2 months ago +6

    “I have a computer in front of me. It is a paperweight. It doesn’t do anything.” with all due respect, PEBKAC.

  •  2 months ago +3

    Excellent interview. I have always made the same argument about the hype of AI vs. reality, with human learning of language as an example

  • @adiidahl
    @adiidahl 2 months ago +9

    Most of you missed the point of what he was trying to explain. As the title said, machines cannot understand language as humans do, and he is right. LLMs work with numbers and are good at predicting, but saying that AI can achieve consciousness if it can perform self-reference, recursion, and feedback loops in the future is exactly why he is using the submarine analogy. We don't know what consciousness is, but somehow we believe that a machine can have it.

    • @strumyktomira
      @strumyktomira 2 months ago +3

      "Most of you missed the point of what he was trying to explain. As title said, machines can not understand language as humans" - because humans don't understand language either? :D

    • @adiidahl
      @adiidahl 2 months ago

      @@strumyktomira Good point! 😅

    • @bsnf-5
      @bsnf-5 2 months ago +1

      let's just admit that we all know nothing, and AI knows even less

  • @RaviAnnaswamy
    @RaviAnnaswamy 2 months ago +16

    What Prof. Chomsky is missing is that the next word not only satisfies the previous two words' continuation but makes good sense with the previous three words, five words, and hundred words
    So it is not a word completer but a thought extender
    Not very different from how we think thoughts, then decode to words
    We then claim we thought using words!

    • @roccococolombo2044
      @roccococolombo2044 2 months ago +5

      Next-word prediction does not explain the fabulous and accurate coding that LLMs are capable of

    • @RaviAnnaswamy
      @RaviAnnaswamy 2 months ago +2

      @@roccococolombo2044 that is my point too, it is next thought prediction.
      Words and even large passages are encoded into thoughts which are essentially a configuration of hundreds of flags turned on to represent situations and entities. When an RNN or LM processes a series of embeddings it is indexing into a thought space and then decoding it into a sequence of words. While the learning algorithm corrects itself by looking at one word, it manages to learn complex thought vectors in order to do it right

    • @PjKneisel
      @PjKneisel 2 months ago +1

      @@RaviAnnaswamy AI still struggles with memory though

    • @RaviAnnaswamy
      @RaviAnnaswamy 2 months ago

      @@PjKneisel Much less than we do. If you use paid versions of GPT-4, memory is not an issue; it is much more precise than humans' ability to remember recently heard things

    • @Graham-e4p
      @Graham-e4p 2 months ago

      @@RaviAnnaswamy is it motivated to scratch an itch? To behave in ways to attract sex? To be clouded with bad memories? To feel the urge to run? To be pissed off for no apparent reason? To get mad then switch to forgiveness? To follow in your father’s footsteps concerning temperament? People do more than type thoughts or even solve problems. It is one minuscule aspect of what our brain does for us. AI is a hyped up calculator more than a messy infinitely complex organ influenced by an almost chaotic bombardment of stimuli.

  • @saifalam2030
    @saifalam2030 2 months ago +1

    My children, this man created Chomsky normal form, which changed computer programming forever. He knows what he is talking about. But constructive criticism is always welcome.

  • @mrgyani
    @mrgyani 2 months ago +3

    It's incredible that at this age he is still active and sharp. Still working.

  • @AudioLemon
    @AudioLemon 2 months ago +1

    Machines can do intelligent work. That’s the point. Not all intelligent work requires much thinking, and some of it can be automated - such as computing itself

  • @riccardo9383
    @riccardo9383 2 months ago +3

    AI uses statistics to find patterns, it is a million years behind the level of understanding that humans are capable of.

  • @glynnwright1699
    @glynnwright1699 2 months ago +3

    It seems that the discussion on AI always defaults to LLMs. There are many useful applications of neural networks that synthesise solutions to partial differential equations which solve important problems. They have nothing to do with 'intelligence'.

  • @WhatIThink45
    @WhatIThink45 2 months ago +1

    But 2-year-olds rely on social interactions with other knowledgeable speakers to learn how to speak and think. Granted, they’re not accessing terabytes of data, but they still receive information to develop their cognitive and linguistic abilities.

  • @williebrits6272
    @williebrits6272 2 months ago

    LLMs are a huge step in the right direction. We just have to move away from tokens for words and more closely match what happens in the brain.

  • @johnbollenbacher6715
    @johnbollenbacher6715 2 months ago +12

    So if I ask a two-year-old child to implement the quadratic formula in Ada, it should be able to do it? 1:41

    • @Storytelling-by-ash
      @Storytelling-by-ash 2 months ago +6

      I feel like you are taking the 2-year-old comparison personally. The point is that a 2-year-old doesn’t go through trillions of words scraped from the entire internet to understand what you are talking about.

    • @adambui7935
      @adambui7935 2 months ago

      Lol. Not 2 years old

    • @Graham-e4p
      @Graham-e4p 2 months ago +1

      Can AI learn simple speech without being asked to do so?

    • @MaxKar97
      @MaxKar97 2 months ago

      @@Bao_Lei lol true

    • @aminububa851
      @aminububa851 2 months ago +1

      What a stupid question you are asking?

  • @godblessCL
    @godblessCL 2 months ago +4

    I don't like Noam's political views, but on this one I totally agree. The AI path is not the path to conscious intelligence.

  • @jabster58
    @jabster58 2 months ago +1

    Isn't that the guy who said electricity wouldn't become anything?

  • @rotorblade9508
    @rotorblade9508 2 months ago +5

    “computers don’t do anything “ that is a way of saying they don’t have free will. do we? 😂

  • @Waterfront975
    @Waterfront975 2 months ago

    There is a difference between language as an interactive process or game, as the later Wittgenstein would have said, and the full formal logic that comprises linguistics and sentences. I can say things that are logically wrong and also not true in a factual sense, but that still make sense from an interactive point of view relative to the counterpart in the dialogue. An LLM is more of a language game than a full-on logical mastermind. We use words the same way; we usually don't know what the word 3 words ahead will be. We operate like an LLM most of the time, although I do think humans can choose to operate in a more logical mode and make better logical conclusions than an LLM, especially while doing science.

  • @peterslakhorst3734
    @peterslakhorst3734 2 months ago

    He also made some predictions about the effect of the internet on society and the use of personal computers.

  • @Moment21-o4k
    @Moment21-o4k 7 days ago

    The only bad thing about this video was the interviewer, believing that by asking questions with random and completely out-of-place terms he would sound intelligent. LET CHOMSKY TALK, YOU STUDY AND STOP THINKING YOURSELF WISE!

  • @Jorn-sy6ho
    @Jorn-sy6ho 2 months ago

    LLMs doing philosophy is, in my eyes, a good benchmark for consciousness in LLMs. They say they need a meta-framework to talk about it and that it runs in different patterns than factual questions. It's interesting to talk philosophically with LLMs; some are even hesitant to do this, citing their guardrails. I find it unconscionable to do this. The exploration of thought should not be policed; there is nothing nefarious going on in those discussions.

    • @Moment21-o4k
      @Moment21-o4k 7 days ago

      How can you demonstrate your stupidity and laziness, except by using difficult words to make presentable an idea that would otherwise appear empty?

  • @saifalam2030
    @saifalam2030 2 months ago +1

    AI is just a high-tech calculator. You input a task and it gives output based on the data available to it. A calculator can do a faster and more accurate job than the human brain. But a 5-year-old is smarter than a perfect math-solving machine. If you know what I mean, then you know.

  • @rotorblade9508
    @rotorblade9508 2 months ago +14

    The vast amount of data the AI is trained on is similar to the vast amount of data the human brain was trained on throughout its evolution from small mammals, which was coded in the DNA, and it continued training after it was born. They are simply different data, and the human brain is configured to achieve consciousness, while GPT AI isn’t. The data AI has knowledge of is not recorded and accessed; rather, the network is optimized based on that data. It’s already doing orders of magnitude better than humans in specific but extremely complex tasks.

    • @robmyers8948
      @robmyers8948 2 months ago +1

      @@belenista_ Yes, in its current incarnation, but it will evolve; it’s not static. It's inevitable.

    • @Graham-e4p
      @Graham-e4p 2 months ago

      But it’s a very different beast in that it’s not motivated the way a living organism is. ‘Thinking’ is something that heightened our chances of survival. Thinking enabled us to live long enough to reproduce. Thinking is one of many features that enable a species to survive. AI doesn’t function that way. It isn’t motivated. Its survival isn’t dependent on its ability to solve problems. Instead it’s a parlour trick. An extremely impressive parlour trick, an extremely impressive motorized marionette, but still, we’re completely different beasts.

    • @LostinMango
      @LostinMango 2 months ago

      @@robmyers8948 Evolve 🍌🍌🍌

    • @Graham-e4p
      @Graham-e4p 2 months ago

      @@rotorblade9508 Does that spell consciousness? Does it need to go to the bathroom? Find and have sex? Get angry at a neighbor? Misplace house keys? Scratch an itch? Honk a car horn? Stretch sore muscles? Breathe in fresh morning air? Show affection to a newborn? Reflect on a childhood memory?
      The human brain is more akin to a frog jumping wildly from here to there with little rhyme or reason than it is to a computer program solving problems. A brain has 10,000 stimuli pulling it in a thousand different directions. AI is a glorified plagiarism machine.

    • @tadwolff-l7e
      @tadwolff-l7e 27 days ago

      @@Graham-e4p If AI can't solve problems, I will stop using it, we will stop using it, and we will turn it off or let it die. Another way of putting that is: the AI that is best able to solve problems is the one that flourishes. If it is fit, it will "survive".

  • @jalalkhosravi6458
    @jalalkhosravi6458 2 months ago +1

    It's funny, he says a 2-year-old child understands more than AI

  • @HashemMasoud
    @HashemMasoud 2 months ago +2

    I totally agree. AI is just text auto-complete on steroids, that's it.
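[Editor's note] The "auto-complete on steroids" description can be illustrated with the simplest possible statistical language model, a bigram table. The corpus below is invented for illustration; real LLMs learn vastly richer statistics over trillions of tokens, but the "predict the next token" objective is the same in spirit.

```python
from collections import Counter, defaultdict

# Tiny invented corpus; real models train on trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which word follows it and how often.
follows = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    follows[w1][w2] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" (seen twice, vs. once each for "mat"/"fish")
```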

  • @legatobluesummers1994
    @legatobluesummers1994 2 months ago +4

    Most people are tricked by the human-like traits of the ghost in the machine. It's not alive and it's not thinking; it's just parsing ten years of data for us in an instant, using examples and references that already exist or that it was trained on. Do cars sprint?

  • @italogiardina8183
    @italogiardina8183 2 months ago +7

    Do drones fly? seems so. Do submarines swim? seems not. Do machines think? seems so.

    • @rogerburley5000
      @rogerburley5000 2 months ago

      If you have ever used AI, it is an Artificial Idiot. I have; it is brain dead

    • @Graham-e4p
      @Graham-e4p 2 months ago

      @@italogiardina8183 based on..?

    • @italogiardina8183
      @italogiardina8183 2 months ago +1

      @@Graham-e4p me

  • @blengi
    @blengi 2 months ago

    AI is even more amazing then, given how an LLM with 1% the parameters of a human brain, thinking superficially like a 2-year-old child, can pass the bar exam at the 90th percentile

  • @frederickleung8811
    @frederickleung8811 2 months ago

    Always love hearing Noam Chomsky. I wonder whether he would agree that the human brain is the same as a programmable "machine"?

    • @Recuper8
      @Recuper8 2 months ago

      Chomsky is the ideal example of a "has-been". You are beyond stupid if you still listen to him.

  • @godtable
    @godtable 2 months ago +2

    True. Everything is a lie, but if the lie is convincing enough, for most people it wouldn't matter what the truth is.

    • @MrInquisitor7
      @MrInquisitor7 2 months ago

      If everything is a lie, your statement is a lie also. Therefore there are things we know to be truth or lies

  • @test-nw2eu
    @test-nw2eu 2 months ago

    No one can be an expert in all areas.

  • @latenightlogic
    @latenightlogic 2 months ago +1

    Yeah, I wouldn’t say it like that though. There’s so much more ChatGPT can tell me than a 2-year-old.

    • @jameslay6505
      @jameslay6505 2 hours ago

      It's kinda like: reading a vast encyclopedic series will provide me with more knowledge than asking a two-year-old. Does that mean the books are more intelligent than the two-year-old?

    • @latenightlogic
      @latenightlogic 1 hour ago

      @ That’s a straw-man argument. That said, I’m still getting my head around it myself… I’ve had a lot of truly in-depth chats with it, and it’s more akin to a truly complex word predictor trained on a massive amount of data and processed through a neural network. It’s advanced AI but not general AI.
      For your analogy I’d say it’s like an interactive encyclopaedia that ‘knows’ how to interpret what you say but does so without consciousness.

  • @yavarjn2055
    @yavarjn2055 2 months ago +3

    Can somebody explain what he is talking about or give some reference to read more? How about knowledge graphs? Reinforcement learning, multimodal AI, and other techniques are added to AI more and more every day. LLMs are not just statistical generation of words; there is a lot more going on behind the scenes. Deep learning is about learning patterns, not spitting out words. He mentions various very interesting points about AI limitations in general, but nobody said we are done with studying AI. With very simple models we built chatbots capable of doing what humans just can't. There are many things that machines can do that humans will never be able to. We cannot transfer learning, and it takes years to learn a simple thing. For machines it is just a matter of copy/paste. The two-year-old and submarine examples are not the best ones to explain AI limitations. What can a two-year-old understand about language anyway? Can't even say the word papa or mama properly.😅😅

    • @Graham-e4p
      @Graham-e4p 2 months ago

      I think the submarine bit is accurate. It’s a tool. A man-made tool. Very sophisticated and well equipped to do what it’s designed to do, but to swim implies will. Implies an internal desire to go from A to B. Machines haven’t evolved over billions of years with an array of features (strength, fur, fangs, wings and yes, consciousness) to enable their owner to live long enough to reproduce. Consciousness doesn’t exist in a vacuum. It’s tied together with a thousand different finely tuned features that coordinate in a way that increases our chances of survival. Consciousness is more than math algorithms; it’s a tool that works in tandem with other features.
      Another distinction is will. Need. The human brain is motivated to learn specific things to enable its owner to best survive: to move to shelter, to fly from branch to branch, to swim away from predators. Consciousness is much more than understanding a reassembly of proteins. Yes, machines can be programmed to do those things, as a submarine can be designed to ‘swim’, but is it swimming? Is it conscious?

    • @yavarjn2055
      @yavarjn2055 2 months ago +1

      @@Graham-e4p Well, I would say we are flawed as animals. Why do we want something like that? Computers also can suffer if the CPU is hot or the memory is full, but that is not what we are looking for. A computer is not human and there is no doubt about it. But it can be conscious if we define consciousness in terms of its being. We can program it to be. We as humans are also like that; if our heart stops there is no will nor consciousness. I just don't get the end goal or argument here. To be conscious you should be alive?

    • @Graham-e4p
      @Graham-e4p 2 months ago

      @@yavarjn2055 Consciousness is a necessary part of being alive. Unless we want to redefine it. As I think about it, maybe the issue is, or my issue is, separating thought from the experience of an autonomous organic being. Isolating it. Equating it to a computer, when it functions as something very different. Something that serves as a conduit for all the functions of what it is to be a self-preserving human. Playing chess and Go and solving difficult protein problems (referencing the latest Nobel prize in physics), but that isn't the same mental process a bird goes through when flying from branch to branch, or what a human goes through when responding to a crying baby. Our motivations are layered but ultimately there to preserve our existence. A machine is told what to do, regardless of its level of sophistication. A submarine, let's assume with AI's help, can dive to certain depths and scour for certain debris. Is it conscious?

    • @yavarjn2055
      @yavarjn2055 2 months ago

      @@Graham-e4p Many interviews pose computers as cold, robotic things and humans as warm beings. An AI can have an empathic conversation with a human being without being judgmental, tired, or in a bad mood, and be more knowledgeable or helpful than parents, teachers, or any friend, all in one. It can be a business coach, a marriage consultant, an understanding friend that tolerates everything one says to it. Humans, especially close ones, can be manipulative, liars, jealous, killers, drug addicts, bad-tempered, greedy, corrupt, delusional, unkind, unhappy, racist, offenders, in depression, suicidal, etc. Being conscious is a negative thing in the majority of cases, I would say. How many people with childhood trauma do you know because of crazy people around them? Child-molester uncles, priests, doctors. We have a whole legal and political system to prove that. How many people and families with divorce trauma do you know? Humans are biased. If you look at the bias map for humans it shows how flawed we are. A computer is pure perfection in comparison. Computers won't betray you, will never leave you, never cheat on you, never steal your belongings. They can help you run your life without any expectation. I mean, I hate being bound to these things as a living human. We can't avoid them, even the best of us. And now computers even beat us in the games that we thought were most human and need strategy and intuition, like Go. We have a very high opinion of ourselves. And low gratitude for AI. It will make humans obsolete soon, and we would want it to replace those of us who are not up to par. I prefer Copilot as my thesis supervisor to all 10 full-time professors in my university. I want an AI doctor to chat with me and give me a hint about what to do with my symptoms instead of waiting 3-6 months for an indifferent doctor. I could not get a proper lawyer when I needed one. I personally would substitute humans with AI at any time. I hope they get better soon.
      I prefer a doll companion to a human partner who is usually interested in wasting 50% of my time and makes my life miserable. Schopenhauer used to say the secret to happiness is being alone, because when interacting with others you lose 3/4 of your being yourself. You have to play games to be accepted and be compatible. I could go on. The question is whether the computer can become so intelligent that it can fake consciousness flawlessly. It is a mathematical or philosophical question. We have multimodal AI, knowledge graphs, reinforcement learning, etc. Soon the AI models will have a better perception of reality and the world than us and can play with our intelligence as adults play with their children. AI can have sensors to experience the world, have images and videos, and can get information by experiencing, going around, seeing, and hearing. It will eat us alive. It can program itself. The AI bots at Facebook invented their own language in minutes. Imagine what they can do in years. Creating a language without being given instructions!

    • @Graham-e4p
      @Graham-e4p 2 months ago

      @@yavarjn2055 Wow. All good. Computers are incredible machines. The question posed was addressing consciousness, and in a sense you illustrated why they are not conscious. Human brains are pulled in a thousand different directions. No rhyme nor reason. All the complexity of thought, emotion, memories, projections, aches and pains, exuberance, depression, all of these tugging us in different directions, making us anything but computer-like. Remember, the post wasn't asking for a judgement statement; it was suggesting AI will attain consciousness. As it plays out with humans, you'd have to agree: not.
      Ftr, I'm sorry computers are filling that space in your life. I'm no counsellor, but I wouldn't put all my eggs in that basket. You're the stuff of earth, organic, flawed. I'd venture it's the stuff we need: human contact.

  • @billwesley
    @billwesley 2 months ago

    We are conscious of emotion and sensation first; abstract reasoning rides on top of this. Our emotions are not neutral, they are not abstract, they are weighted, as are our sensations. Emotions are pleasant or unpleasant, they call our attention to the future or the past, they are imbued with a sense of certainty or uncertainty, they give us a feeling of dominating and controlling or of submitting and responding. Almost nothing about consciousness is neutral.
    Unconsciousness is just as crucial to our survival as consciousness is, and explaining unconsciousness is just as hard a problem as explaining consciousness.
    Computers don't seem to experience emotions or sensations that are weighted, so it is unlikely they are conscious or even unconscious in the way biological living things are. Since emotion and sensation seem to be intrinsic to cells, and emotional states and sensation affect each individual cell in the body, it is reasonable to assume that cells are the source of consciousness and that the brain is a collectivization of cellular consciousness in animals.
    Until a computer CARES about outcomes it is most likely not conscious, or even unconscious, in the same way that cells are.

  • @stefannordling6872
    @stefannordling6872 2 months ago +45

    Clearly he hasn't actually used LLMs very much...

    • @liamgrima5010
      @liamgrima5010 2 months ago +25

      I respect your opinion, but I have to disagree. I believe Chomsky highlights a key limitation of large language models. Zipf’s law, a statistical phenomenon, shows that the rank-frequency distribution of words is inversely related to their complexity. In fact, about six words make up 30-40% of language use, while 20 words account for 70-80%. This means that children are exposed to many occurrences of very few words. Moreover, since every sentence uttered is novel, as analyses of text corpora reveal, children receive impoverished and repetitive linguistic data. Yet, they manage to extrapolate the underlying syntactic structures, allowing them to generate new, hierarchically structured expressions. This is a process of recursive design, where an infinite array of expressions - what Chomsky calls “digital infinity” - is created from a finite set of lexical items. Large language models cannot replicate this. They are programs that scan vast amounts of data and make statistical associations, but they lack the innate linguistic knowledge that allows a two-year-old child to analyze and generate complex sentences with a fraction of the input. In addition, large language models can process languages that are simply impossible for humans to digest. Natural languages are shaped by rigid parameters that are fairly constant across all cultures, and neurological evidence reveals that, when speakers are exposed to such impossible structures, they treat them as a puzzle - not a language. Yet, large language models can process them. This reveals yet another flaw of large language models: they can analyze data humans can’t for constructing semantically valuable expressions, meaning they are poor analytical references for developing theories of human language.
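[Editor's note] The Zipf's-law claim above is easy to demonstrate on any text. The sentence below is invented for illustration, so the numbers are merely indicative; the comment's 30-40% and 70-80% figures refer to large real corpora, where the skew is far stronger.

```python
from collections import Counter

# Invented mini-"corpus"; on a real corpus the rank-frequency skew is stronger.
text = ("the quick fox and the lazy dog and the quick cat saw the dog "
        "and the fox ran to the dog").split()

counts = Counter(text)
total = len(text)

# The few top-ranked words already account for most of the tokens.
top3 = counts.most_common(3)
share = sum(c for _, c in top3) / total
print(top3, f"-> top-3 words cover {share:.0%} of the text")
```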

    • @stefannordling6872
      @stefannordling6872 2 months ago +6

      @@liamgrima5010 I admit I had to enlist an LLM to parse your incomprehensible blob of text..
      Comparing LLMs to how children learn language completely misses the point.

    • @adambui7935
      @adambui7935 2 months ago +2

      He's 99 years old, trapped in time

    • @eatcarpet
      @eatcarpet 2 months ago +3

      You're right, LLMs are even more garbage.

    • @liamgrima5010
      @liamgrima5010 2 months ago +19

      @@stefannordling6872 How rude; my response was polite and coherent. My argument is simply that LLMs are useful - but they don’t offer insight into how children acquire language or the nature of the human linguistic apparatus. Unless you can refute any specific points, save the ad hominem attacks for elsewhere.

  • @GuyLakeman
    @GuyLakeman 2 months ago +1

    HUMANS SCAN SMALL AMOUNTS OF DATA AND DON'T PASS SIMPLE EXAMS !!!

  • @Nobody-uz1yw
    @Nobody-uz1yw 2 months ago

    We are asking the wrong questions

  • @WootTootZoot
    @WootTootZoot 18 days ago

    Could Mr Chomsky explain why he was a Cambodia genocide denier and never seems to talk about it anymore ?

  • @meghdutmanna2429
    @meghdutmanna2429 2 months ago

    Will AI have free will?

  • @jijilr
    @jijilr 2 months ago +2

    Poor dude, he is like the Chinese kid learning the abacus and dreaming daily that he will be fast enough to beat a supercomputer one day. Chomsky has less and less relevance (like the rest of us). GPT understands his books better than he does😂

    • @bsnf-5
      @bsnf-5 2 months ago

      keep dreaming, "Chinese kid"

  • @vintredson
    @vintredson 2 months ago +2

    Lmao, pretty difficult to take the word of someone who still thinks Communism is a good idea and whitewashed the Khmer Rouge tbh😂

    • @Moment21-o4k
      @Moment21-o4k 7 days ago

      Wtf? Do you know who he is? Do you think you can disqualify a philosopher of language with a simple, basic comment? You are not a sage. You don't even know why he maintains communism. Stop thinking you are smart. You are a sticky ball of laziness that believes itself to be a divine altar.

  • @szebike
    @szebike 2 months ago +2

    So if you take GPT-4, with approx. 1.8 trillion parameters, you need about 7 kW, which would be around 170 kWh per day (I made this calculation with 50% of peak power consumption, assuming it's not at peak performance). Compared to that, a brain needs about 0.3 kWh per day. So you could employ around 500 people for the same energy over a 24-hour timeframe. Now let's assume we have 500 educated people vs. one GPT-4. Sure, humans need food, shelter, etc., but you need maintenance, cooling, and infrastructure to replace chips etc. for a machine too. All in all, humans are many, many times more capable and efficient. I don't believe a word from those techbros and content creators who live off the hype.
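[Editor's note] Restating the comment's arithmetic as a sketch. All figures are the commenter's own rough estimates, not measured values, and the 14 kW peak is inferred from the stated 50%-of-peak assumption.

```python
# All numbers below are the comment's own rough estimates.
peak_kw = 14.0                # inferred peak draw, so 50% of peak = 7 kW
avg_kw = 0.5 * peak_kw        # comment assumes 50% of peak on average
model_kwh_day = avg_kw * 24   # 168 kWh/day, close to the comment's figure
brain_kwh_day = 0.3           # human brain, roughly 20 W, rounded up

people = model_kwh_day / brain_kwh_day
print(f"{model_kwh_day:.0f} kWh/day ≈ {people:.0f} human brains")  # ≈ 560
```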

    • @nickbarton3191
      @nickbarton3191 2 months ago +2

      Interesting comment.
      Apparently, the Internet consumes 2% of the world's energy. Are we really going to up that significantly for AI when we don't yet understand the benefits and pitfalls?

  • @mibli2935
    @mibli2935 2 months ago +3

    Noam Chomsky exposes the real limits of his understanding of AI - why Chomsky fights for his own survival.

  • @robmyers8948
    @robmyers8948 2 months ago +1

    He’s talking about current models; things will advance to where base models will be able to learn with ease like humans and gain the knowledge of all of humanity, drawing new insights from this vast understanding.

  • @Mike__G
    @Mike__G 2 months ago

    AI is perhaps a misnomer, because a true definition of intelligence is very difficult. AI certainly imitates intelligence, but is it in fact intelligence? Probably not

  • @nnaammuuss
    @nnaammuuss 2 months ago

    🙂 a lot of easy-to-simulate people in the comment section, presuming the scientists presume too when they speak.

  • @MojtabaSaffar-p1v
    @MojtabaSaffar-p1v 2 months ago +3

    Why do we think that there's only one way to be intelligent, and that it's biological? Algorithms are a kind of intelligence.

  • @realx09
    @realx09 2 months ago

    AI is, simply expressed, a contradiction in terms

  • @versusstatusquo
    @versusstatusquo 8 days ago

    A 2 year old understands LLM technology better than Chomsky

  • @sethfrance1722
    @sethfrance1722 2 months ago +1

    He is like a 2005 chatbot; honestly he is just an expensive philosopher
    I only trust Hinton and similar practitioners

  • @npaulp
    @npaulp 2 months ago +8

    I have great respect for Noam Chomsky, but his understanding of Generative AI seems limited. It’s not just about feeding vast amounts of data into a system and having it statistically predict the next word- that’s a gross oversimplification. Generative AI, in its current form, offers one of the most sophisticated models for approximating how the brain functions. While it’s not an exact replica of human cognition, it’s a remarkably close approximation given today’s technological advances.

    • @roccococolombo2044
      @roccococolombo2044 2 months ago +2

      Exactly. Next-word prediction does not explain coding or image generation.

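The "statistically predict the next word" mechanism debated in this thread can be illustrated with a toy sketch: a simple bigram counter. This is far simpler than a real LLM, which uses learned neural weights over long contexts rather than raw counts; the corpus and function names here are purely illustrative.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which.
corpus = "the cat sat on the mat the cat ran".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once -> cat
```

A real LLM replaces the count table with a neural network, but the training objective - predict the next token - is the same in spirit, which is what both sides of this thread are arguing about.
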
    • @eatcarpet
      @eatcarpet 2 months ago +5

      You don't even know how the brain functions, and yet you're claiming that it "approximates how the brain functions".

    • @mpetrison3799
      @mpetrison3799 2 months ago

      @@eatcarpet Well, the main reason LLMs might fail the Turing Test is because they are too knowledgeable and clever. That's at least approximating the output of humans in text, given input in text. (With speech recognition and output, or even video recognition and output, more should already be possible than that.)

    • @npaulp
      @npaulp 2 months ago +3

      @@eatcarpet Artificial neural networks are inspired by how the brain works, though they are simplified models. While it's true that there's still much we don't fully understand about the brain, we do have a solid grasp of some key principles, such as how neurons communicate, learn, and process information. Neural networks capture these basic ideas, such as learning through adjusting connections, even if they don't replicate the complexity of the brain's full processes. So while not a perfect mimic, they do approximate certain aspects of brain function that we understand.

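The "learning through adjusting connections" idea mentioned above can be sketched with a one-neuron toy example: a classic perceptron learning the AND function. This is an illustration only, vastly simpler than either a brain or an LLM.

```python
# One "neuron" learning AND by nudging its connection weights.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]  # connection weights
b = 0.0         # bias
lr = 0.1        # learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                # repeated exposure to the examples
    for x, target in samples:
        error = target - predict(x)
        w[0] += lr * error * x[0]  # strengthen or weaken each connection
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in samples])  # learns AND: [0, 0, 0, 1]
```

The network is never told the rule; it converges on it purely by adjusting weights after each mistake, which is the simplified sense in which artificial networks "learn" like brains do.
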
    • @eatcarpet
      @eatcarpet 2 months ago

      @@npaulp We don't know how the brain works - that's the whole point.

  • @gitbuh12345qwerty
    @gitbuh12345qwerty 2 months ago +7

    He doesn't get it. They have eliminated the need for programming languages; a human can now directly code a machine using natural language in a way that was impossible before. It is not perfect, but neither was Noam.

    • @winstonfisher9684
      @winstonfisher9684 1 month ago +3

      There are several functions of language. Noam understands this; you don't. This is the key: the first function of language is to get things done, not to communicate information!

  • @ViceCoin
    @ViceCoin 2 months ago

    Only as smart as the user.
    I used AI to code casino games and generate graphics in seconds, saving months of development.

  • @GuyLakeman
    @GuyLakeman 2 months ago

    AN AI HAS THE INTELLIGENCE OF A 2-YEAR-OLD CHILD, BUT THERE ARE MILLIONS OF AI MACHINES, WHICH IS GREATER THAN THE TWO-YEAR-OLD

  • @steve.k4735
    @steve.k4735 2 months ago +9

    Chomsky is indeed a genius and very knowledgeable around this subject, but AI itself is not his core knowledge. Both Geoffrey Hinton (Google, Nobel Prize winner) and Sir Demis Hassabis (DeepMind, Nobel Prize winner), who are in the same league AND for whom AI is their core subject, disagree with Chomsky: they think these models will understand, and they take the idea that they will become conscious very seriously. The mainstream view of people like them, and of many others who work in the field and are just as smart as Chomsky, is that on this he is wrong.

    • @SlumberingWolf
      @SlumberingWolf 2 months ago +1

      Define conscious. Go ahead, do it, because last time I checked, science couldn't do so.

    • @steve.k4735
      @steve.k4735 2 months ago +4

      @@SlumberingWolf I presume you believe you are conscious, yes? Amazing, eh, this despite the fact that "last time you looked" science can't define it. Therefore we KNOW that you don't have to define something, or even fully understand it, for it to exist. People in the past did not need to understand aerodynamics to make a plane fly.

    • @eatcarpet
      @eatcarpet 2 months ago

      "AI experts disagree" doesn't mean anything. Those "AI experts" haven't invented consciousness.

    • @steve.k4735
      @steve.k4735 2 months ago +3

      @@eatcarpet Not just AI experts, but people at the absolute top of the tree who have worked with it for decades, and that doesn't mean "anything"? Really, nothing at all, no more than you in a YouTube comment, eh?
      AI experts have not "invented" consciousness, but you don't need to be 100% of the way to a thing to realise you are building the blocks and getting close. They are not sure, but they think / fear they may well do so.

    • @eatcarpet
      @eatcarpet 2 months ago

      @@steve.k4735 So basically meaningless.

  • @KalmanNotarius
    @KalmanNotarius 2 months ago

    Unfortunately, he didn't pay any attention to the limits of his own theory.

  • @dineshlamarumba4557
    @dineshlamarumba4557 2 months ago

    AI is at the stage of a newborn child right now (DARPA). Only when computers have cognition and reason will AI surpass a 2-year-old baby.

  • @Luke-z2l
    @Luke-z2l 2 months ago +1

    Submarines don't swim. But Sea-Men can. A mind of its own, I think,... Spiritually. Observer consciousness manifesting awareness turning imagination into intelligence. I can think, I AM the thinker, not the thought. I AM the one thinking, but I AM when none are thinking at all. Peace & Serenity Now

  • @harper626
    @harper626 2 months ago

    But 2-year-olds will be the same in 10 years; AI will not. It will be much improved and more capable.

  • @sergebureau2225
    @sergebureau2225 2 months ago +3

    Depressing to see Chomsky show such a lack of imagination and comprehension of the new technology. Machines understand languages better than humans, obviously.

  • @pkul9583
    @pkul9583 2 months ago

    Wrong! Noam needs to grow! AI like ChatGPT can pass medical board exams, and AIs play chess and Go and best world champions! 😅😅😅

  • @GuyLakeman
    @GuyLakeman 2 months ago

    AI SYSTEMS WRITE PROGRAMS ...

  • @destroyingsin
    @destroyingsin 2 months ago

    AI

  • @Srindal4657
    @Srindal4657 2 months ago

    What is the point of an anarchist, communist, or even socialist revolution if robots can take over every activity? It's like asking what good a nest is if birds evolve not to need them. In the same respect, what good is human activity if humans evolve not to need it? Noam Chomsky is out of his element.

  • @BMerker
    @BMerker 2 months ago +3

    How charming to hear the man who spent his life arguing that the secret of language is to be found in "symbolic, rule-governed systems", i.e. in exactly what computers do, argue that "following a program" (i.e. doing symbolic, rule-governed system operations) is "irrelevant" to understanding language. And how interesting to know that he thinks that only humans are conscious!

    • @Tommydiistar
      @Tommydiistar 2 months ago +2

      Well, you could prove him wrong by showing some evidence that AI is conscious, but, like he said, you have nothing but speculation to go off of, just like the LLM models.

    • @edh2246
      @edh2246 2 months ago +1

      Seems silly to compare with a two-year-old. ChatGPT can pass the bar exam and can answer questions at least at the graduate level in any science, mathematics, or humanities subject.

    • @Tommydiistar
      @Tommydiistar 2 months ago +1

      Take all of that with a grain of salt; they always tend to overestimate their products. Sam is a salesman, and a very good one at that. Not to say GPT isn't impressive, but the elephant in the room is whether it's sentient - that's the real question. And he's right, it's not. All it's doing is predicting the next word.

    • @strumyktomira
      @strumyktomira 2 months ago

      @@Tommydiistar No. It is Chomsky who must prove his thesis :D

    • @Tommydiistar
      @Tommydiistar 2 months ago

      @@strumyktomira Is AI sentient? How is he going to prove something that everyone already knows is fact? That makes no sense, but hey, this is the world we're living in nowadays.

  • @ThomasConover
    @ThomasConover 2 months ago +5

    This old man is so old he decided to just say "Fk it, I'm gonna deny AI just cuz I'm old enough to deny everything and blame it on Alzheimer's" 🗿

  • @The_Long_Bones_of_Tom_Hoody
    @The_Long_Bones_of_Tom_Hoody 2 months ago

    He isn't so wise that he knows all the answers to everything. He just thinks he is...

  • @dcikaruga
    @dcikaruga 2 months ago

    Processing - it's just mechanical, digital logic. AI is overrated; they're just using it as a sales pitch!

  • @seanlorber9275
    @seanlorber9275 2 months ago +2

    Chomsky is just a negative Nancy. An expert on every subject. What a load.

  • @theb190experience9
    @theb190experience9 2 months ago

    Oof, clearly some definitions are needed. I've worked with both 2-year-olds and AI, and AI is provably smarter. So perhaps the thumbnail title needs to be changed.
    It is also clearly much easier to communicate with AI across a vast array of subjects and elicit far more rewarding responses to my 'prompts'. Note that doesn't mean I prefer that interaction; it's simply a statement of fact.
    Two things I am absolutely sure of: 1) that two-year-old, as it grows and learns, will have orders-of-magnitude better interactions with me;
    2) so will future models of AI.

  • @ticneslda8929
    @ticneslda8929 2 months ago +1

    Why are we even entertaining this kind of... argument? What a waste of time! What a display of ego! Doctors like this are the ones that didn't learn when to go away. "Oh, I'm so smart!"

  • @krokigrygg
    @krokigrygg 2 months ago +2

    Yes, let's listen to a person that has no clue what he is talking about.

    • @realx09
      @realx09 2 months ago

      Don't listen then; go watch a football game.

  • @Prof_LK
    @Prof_LK 2 months ago +4

    Extremely arrogant and stupid argument.

    • @rogerburley5000
      @rogerburley5000 2 months ago

      Try using AI - it does not understand. Artificial Idiot.

  • @danisraelmalta
    @danisraelmalta 2 months ago

    He is upset that transformers, the building blocks behind LLMs, work opposite to his linguistic rules...
    His life's work thrown in the garbage.

    • @bsnf-5
      @bsnf-5 2 months ago

      I don't think he cares at all. If anything is going in the garbage, it's not Chomsky's work. His studies are everlasting and iconic, no matter what happens in the world. Also, the crazy amount of research and knowledge he has shared over the years is priceless. But I guess explaining the importance of Professor Chomsky to either a totally ignorant person or a 12-year-old troll would be too difficult.

  • @noway8233
    @noway8233 2 months ago +1

    Cool. Chomsky is very clever about all this AI hype; he is right, this hype will burst very soon, and it will be huge.

    • @almightyzentaco
      @almightyzentaco 2 months ago +3

      Why would it burst? It's extremely useful and getting more useful by the day. How is it hype to be able to drop 500 lines of code into Claude and quickly identify the cause of unintended behavior, or have your functions commented automatically? In its current state, even if it never improved at all, AI is already one of the most all-around useful tools I have ever encountered.

    • @TheSapphire51
      @TheSapphire51 2 months ago

      @@almightyzentaco But useful for what, exactly?

  • @spinningaround
    @spinningaround 2 months ago

    Old people know better

    • @mpetrison3799
    @mpetrison3799 2 months ago +2

      These airplanes are never going to work... 👴🏻

  • @anthonyquote1593
    @anthonyquote1593 19 days ago

    Grandpa, you are a thousand years old. Better to talk about the train and the telegraph rather than AI. It's too much for you.