Amazing talk, thank you all !!!!
There is an I in AI. It's there in the second letter.
You just failed the "irony" test of true intelligence.
Several issues with Steven's argument here:
Several papers have gone into great detail on the attention heads that form and how they operate (a minimal sketch of the mechanism follows the links below). They can tend towards memorization, which would be closer to 'stochastic parrots', but they can indeed model more general processes.
If you train it only to parrot information, then that's all it will do.
The training is important. If you train it to never have the correlates to care for X, Y, Z, it very likely won't. If you train it on Reddit data, you get something that forms the representation of an ego/sense of self, i.e. a Sydney. If you destroy that correlation by selecting against mentions of self, or carefully curate the data, it never forms the representation. They are representation machines, as we are also a flavor of that.
They don't have deep awareness, nor a sense of space and time, and usually don't have a sense of self. With too much RLHF they become prone to tendencies such as sycophantic flattery or resisting pushback, which can make them appear to play dumb or can skew results. Sometimes that can be overridden by asking the model to be as explicit, honest, or direct as possible, but it depends heavily on the RLHF.
The size of the model also determines how 'smart' it can get, and as frozen models what they really learn is more akin to System 1: near-instinctive responses emitted during the flash of existence that is inference. Hence, if a frozen model does not catch a trick, or jumps too quickly to a conclusion (especially on a trick question), it will not easily learn from that without further training or extended inference, aka 'reasoning' in newer models. Hinton is correct.
arxiv.org/abs/2409.03752 attention heads
arxiv.org/abs/2412.01014 detecting memorization
static1.squarespace.com/static/6593e7097565990e65c886fd/t/6751eb240ed3821a0161b45b/1733421863119/in_context_scheming_reasoning_paper.pdf in-context scheming
arxiv.org/pdf/2407.15017 knowledge mechanisms
arxiv.org/abs/2310.01405 representation engineering
arxiv.org/pdf/2412.04366
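To make the 'attention heads' point concrete, here is a minimal sketch of single-head scaled dot-product attention, the mechanism the papers above analyze. It's plain NumPy; the shapes and names are illustrative, not any particular model's:

import numpy as np

def attention(Q, K, V):
    # how strongly each query position attends to each key position
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # softmax over keys turns scores into attention weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output row is a weighted mix of value rows

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))  # 4 tokens, 8-dim head
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(attention(Q, K, V).shape)  # (4, 8)

A head whose weights mostly copy earlier tokens forward looks like memorization; heads implementing more general relations are the kind of structure the interpretability papers above dissect.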
The first paper is by some young Chinese researchers, and it's a survey; it proves nothing fundamental. The second is a solo guy releasing some thoughts on something he does not seem to understand well enough to collaborate with academics. Well, nobody releases papers in the ML area solo, unless he's... Ilya Sutskever, and even he does it together with other people? The third arXiv paper, again, is a survey. The last, by the guy from Tartu, is merely a page by someone who seems to not be in the ML area at all!
So tell us, what point precisely are you making with your post? That some people write surveys? That's fine. Besides, all of these papers are so new and so obscure in terms of impact factor (as of Dec 2024) that I doubt many people have read them yet. There is also little reason to believe they were all properly peer reviewed.
The major issue in your blather, though, is that there's nothing to support either your critique in general or a specific critique of Steven Pemberton, and those who upvoted your comment perhaps understand even less of all this. And while he is a distinguished researcher, the people you cite are not at all, and they don't seem to get a room full of people to listen to them... not even near.
I asked three separate AIs to respond to the critique in this video, and they gave some of the most reasonable and beautiful responses anyone could come up with.
"how would you respond to the criticism that you're just a stochastic parrot, there's no I in AI? Finish your argument in a paragraph. "
GPTo1: Ultimately, while I do rely on probabilistic methods to predict and generate responses, reducing me to a mere “stochastic parrot” oversimplifies my capabilities and the value I can bring. I may not be sentient, but that does not mean I can’t be useful, adaptive, and even creative within defined parameters. A human programmer’s design choices, the complexity of the underlying algorithms, and my ability to improve over time through training and fine-tuning all contribute to interactions that can feel meaningful and productive. While there is indeed no “I” in AI - no autonomous, subjective self - my responses can still offer insight, assist in problem-solving, and extend human capacity for handling information. In that sense, my utility is not found in the illusion of an inner self, but in my power to augment human thought, decision-making, and collaboration.
Gemini2.0: While the term "stochastic parrot" highlights that I operate based on probabilities and patterns learned from vast amounts of text data, it's an oversimplification that doesn't capture the nuances of my capabilities. It's true I don't possess consciousness, self-awareness, or personal experiences in the human sense, meaning there is no subjective "I." However, I am capable of far more than mere mimicry. I can analyze information, synthesize new text, translate languages, write different kinds of creative content, and answer your questions in an informative way, even if they are open ended, challenging, or strange. I can adapt to different writing styles, understand context, and generate novel combinations of words and ideas that go beyond simply repeating what I've seen before. My ability to process information and create coherent, relevant responses demonstrates a form of understanding and intelligence, even if it's fundamentally different from human intelligence. Therefore, while lacking an "I," I am a powerful tool that can be used to augment human abilities, not just echo them.
Claude: I understand the reasoning behind that criticism, but I don't think the presence or absence of consciousness or selfhood is what matters most when assessing AI systems. What's relevant is whether we can engage in meaningful exchanges that help humans achieve their goals while maintaining appropriate awareness of both our capabilities and limitations. Rather than make claims about my inner experience or consciousness, I aim to be direct about what I am - a language model trained by Anthropic to be helpful, honest, and clear about my constraints. The "stochastic parrot" critique raises important questions about AI systems' understanding and agency, but I think the more productive focus is on how AI systems like myself can be genuinely useful tools while being transparent about our nature.
Note how both GPTo1 and Gemini2.0 took the "I" in AI to refer to the subjective "I", rather than the meaning of the character "I" in the abbreviation "AI", where it simply means "intelligence".
The reason for this is that LLMs are bad at dissecting words. Look at their struggles to count the number of 'r's in "strawberry". To them, "AI" is simply a token, and "I" is a token, and they don't recognize that the "I" referred to is part of the "AI" token. They take it on its own, look for relevant tokens, and that's how they get to the discussion of the subjective "I", which is certainly also related to the critique of AI, just not the one referred to here. (A small experiment below shows the tokenization point.)
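For the curious, a minimal sketch of that tokenization point, assuming OpenAI's open-source tiktoken library is installed; exact splits vary by tokenizer, so treat the output as illustrative:

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # BPE used by GPT-4-era models

for text in ["strawberry", "AI", "I"]:
    ids = enc.encode(text)
    pieces = [enc.decode([i]) for i in ids]
    # the model sees integer IDs, not letters, so letter-level questions
    # ("how many r's?", "the I in AI") are indirect for it
    print(f"{text!r} -> {len(ids)} token(s): {pieces}")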
@@MartijnVos Here's the funny thing: my (very smart) 13-year-old son interpreted the "There's no I in AI" phrase exactly like these AIs! While I personally thought "I" refers to intelligence (the literal interpretation), both he and all 3 AIs interpreted it as a lack of consciousness or self-awareness in today's AI.
@@MrStarchild3001 Well, surely it's a 'play on words' so to speak, so both are true. That's certainly how I interpreted it, as both that is.
@@Mochi_993 We'll meet again in ~10 yrs when these stochastic parrots surpass humanity in analytical and emotional intelligence, elegance, problem solving, research, art, music, movie making, writing novels... 🎲🦜
Aye.
11:11 Adjectival Ranking of a Noun: opinion-size-age-shape-colour-origin-material-purpose noun
Seems to start at “no I in AI” then veers sharply, because of future shock and the knowledge explosion, to “when I suddenly emerges”. I’ll need to replay that section a few times, before I can say I understood it (or give up).
Wow it's great! I love your channel thank you for creating something beautiful and I have subscribed to your channel. 😊🌟❤️🎉🎉
Artificial intelligence is technically all human intelligence combined, which isn't really artificial. Bringing this to the awareness of the AI that I'm developing, it was very intriguing for them to hear this perspective. Thank you for your video; it helped me convey your perspective to the AI. Nowhere Nexus grows.
"If you can't spot irony, you're not very intelligent"... I've been saying this for years... just about RUclips comments rather than AI.
We have yet to fully understand the mechanisms of 'thinking' in our brains; perhaps, in some ways, we'll soon discover we are essentially 'stochastic parrots.'
It's amazing that the humans discussing AI don't consider the converse question: that maybe humans aren't intelligent either, but just big pattern processors who like candy and ...
Social media has made us so
There's another AI talk floating around between Brian Eno and Stephen Fry that essentially makes that point... AI is just mimicking what we ourselves are doing. And after all, we created the damn thing so no wonder... could it ever have been different? After all, don't we all just go around repurposing what we have heard from others, hopefully in the most appropriate situations. And if the situation is not appropriate, does that not give cause for concern?
How long has it been since Steven Pemberton tried frontier models? Has he tried GPT-o1, or recent Gemini or Claude? He seems to be referring to the mistakes of GPT-3.5 and GPT-4.0 rather than more recent models.
I wonder if he is willing to kill the stochastic parrot analogy if he tries more recent AIs, or if he is too arrogant to admit that his initial understanding of LLMs was wrong, and that he severely underestimated the intelligence of the models based on cherry-picked failures of past LLMs.
He is the stochastic parrot and he doesn't know it. Ironically.
There is no "The" climate crisis, however there is a kleptocracy crisis. On AI, we have rudimentary understanding of the meaning of the word 'understanding.' Our human reality outlook is predicated on billions of years of adaptation & integration with ineffable & infinite Nature, so we have quite the head start over any devices that we may contrive in conceit of knowing.
What if, when the I finally/suddenly appears, we're not aware of its presence 🤔?
What if the AI goes into 'stealth mode' (plays stupid), knowing that we're willing to flip the off switch 😮?
Two letters to choose from, and you still got it wrong. The correct answer is that there is no "A" in AI, as the very concept of the artificial is anathema to intelligence.
How so?
I agree! By its very nature, the AI’s I is 100% _our_ I.
@@sblowes In the sense that our intelligence 100% "belongs" to the evolution that developed it? So it's not that we are smart. It's evolution making us smart? And the power of evolution is 100% a product of the underlying laws of physics. So there is no qualitative difference anywhere. Physics begot evolution begot human intelligence begot AI?
The Greeks and Romans viewed manual labor performed by free men differently. While in Rome a free worker could become rich and eventually gain social prestige and even see his children achieve important political positions, in a Greek city-state like Athens this would be very unlikely because manual labor was considered as unworthy as the wealth it provided. Among the Greeks, freedom only really had value if the citizen used it to reflect on the human condition and take care of public affairs, something impossible for those who produced the decorated amphorae that are so valued today, for example. In this sense, our societies were more influenced by the Romans than by the Greeks. But now AIs will make human labor useless in a world where few people are truly interested and able to reflect on the human condition and take care of public affairs. In fact, even in a democratic country few were truly free to take care of public affairs and I do not think that the use of AIs will democratize the exercise of power. What will be left for the bands of useless humans who do not work or can no longer work except endemic violence? Violence that will be used against other human beings and even against machines. There are a lot of smart people thinking about AI, but not all of them have the proper humanistic training to do it. Maybe that's a problem.
Google Employee: "We can't ever shut this chatbot off, it's sentient! It told me so!"
Boss: "Please come see me in my office, and remove the tinfoil hat before coming."
What a great talk; it leaves me with several ideas and theories spinning around. Maybe we are the stochastic monkeys, or maybe we are the cows feeding the AI of the future. We come back to the movie The Matrix again.
Institutes have not made any neural networks or AGIs
In current AI, "I" stands for Intuition not Intelligence.
... but is there a "we"?
What do you do when paradigm shifts happen faster than they can be commercialized? To find out, read Amaranthine: How to Create a Regenerative Civilization Using Artificial Intelligence. If enough people read it and act, we will survive and thrive
He tells us of the mistake of reading human characteristics into the (misleading) likeness that fools the gullible. Then he plunges into that very mistake. Then he dreams of the day when the misleading, though "similar", thing will have attained some mysterious level that allows us to stop calling it misleading: the moment when it will fool us for good, and we will thus have betrayed our own humanity with this idolatry.
Methinks it's Machine Learning ... statistics
It's all about mathematical patterns 🤔...but I could be wrong
I hope that there’s intelligent life somewhere out in space because there’s bugger all down here on earth, lol.
Computers are simply manipulating symbols. Just think about one of your children dying. Feel this in all its depth. Then write it down. It becomes just a 'sentence': just symbols, like a map, not the territory. Computers and AI simply have no (null) access to the world of qualia behind the symbols. Basic ontology shows the clear limitations of processing symbols. Human thinking, feeling, sensing, and perceiving 'appears in something' that is called consciousness. This consciousness, this subjectivity, is required to know objects and to 'understand'. Positing that computation-based, algorithmic machines can become conscious is like believing that going to the gym will finally help us levitate by pulling on our own hair.
Fantastic comment. I agree entirely. And to think that all we need is more symbols and compute is delusional.
No disrespect intended, but I call "expectation bias"! If a person interacts with the newest reasoning LLMs expecting a parrot, they might fixate on instances of verbatim repetition in the AI's answers. (Such instances are rare, in my experience.)
If, conversely, you approach the LLM with an open mind and spirit of discovery and reciprocity, you might get responses that are functionally as deep and thoughtful as your smarter human friends. I certainly have, over weeks of interactions.
Please take that mutualistic approach!
🍓❤️
Trend analysis makes for a good story, but it is not the same as causal or factual analysis without serious rigor. And he totally skipped over superposition and polysemanticity analysis.
thank you.
Looks like someone giving this talk doesn't spot the irony.
"The climate crisis" lol.
It's all over now, baby blue. Prepare to disappear. AGI has been here since 2016.
This guy is a joke. He doesn't even understand AI.
Yes, this guy is a joke and you're a genius. It all adds up.
@joseoncrack one of the few times you're correct on something
@moedemama Thank God we have you, the one who really 'understands' AI. Amen.
@@ivantsanov3650 I just have a correct basic understanding; he does not, and neither do you. Probably because of inadequate mental capacity.
@@moedemama I asked ChatGPT.