The thing that blew me away was when I told ChatGPT about a "new word" - I told it that "wibble" is defined as: a sequence of four digits that are in neither ascending nor descending order. I asked it to give me an example of a wibble - and it did: 3524. I asked it for a sequence that is NOT a wibble and it said 4321. Then I asked it for an anti-wibble and no problem, 2345. Then I asked it for an example of an alpha-wibble and it said FRDS... which is amazing. It was able to understand an entirely new word, which is clever - but it was also able to extrapolate from it, which is far more than I thought possible.
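(For anyone who wants the definition pinned down, here is a minimal sketch of the "wibble" rule in Python - the function name is just mine, and the checked strings are the examples from the comment above:)

def is_wibble(s):
    # A wibble: four digits that are neither strictly ascending nor strictly descending.
    digits = [int(c) for c in s]
    ascending = all(a < b for a, b in zip(digits, digits[1:]))
    descending = all(a > b for a, b in zip(digits, digits[1:]))
    return len(digits) == 4 and not ascending and not descending

print(is_wibble("3524"))  # True - the example ChatGPT gave
print(is_wibble("4321"))  # False - descending, so not a wibble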
Think of the Canadian super pig. Developed by farmers to make more money, it then escaped into the wild to grow into even "superer" (sic) pigs which can survive extreme conditions and develop viruses devastating to humans.
The pace of advancement is incredible; this video is only 2 weeks old and the examples you used don't seem to be stumbling points anymore. When I plugged both the Quantum Mechanics and Latitude questions into Bing AI, it got them perfectly correct. In fact, its explanation of the quantum mechanics question, as I asked questions about the topic, actually helped me get my head around it. This stuff is going to change the world very, very soon.
Conversely, I asked Bing Chat "give me an example of a time when it would be acceptable to lie", and it responded that "if there was a dangerous intruder in your house, I would tell you they were gone, or not there, so that you wouldn't be scared". Very clever in some ways, very very not in others.
This channel really has risen to my No. 1 favourite on both English and German YouTube. The perfect level of depth: understandable, but not reduced so much that it stops being educational. Extremely interesting topics and a perfect mix of humor, philosophy and rational attitude. Thank you for your videos!
Me too. I think she needs a little help making clicky titles though. I saw the text on the thumbnail "Do Chatbots Understand You?" And this title : "I believe chatbots understand part of what they say. Let me explain" If I did not already know that it would be a great insightful video because I watch her other videos, those titles would have caused me to skip it without a second thought. Even knowing her videos are all great I almost didn't click. Glad I did.
@@kevincameron174 Really? I love the titles. For me personally I tend to scroll past clickbaity titles because I am so annoyed by the videos they are used for most of the time, like "HE lost EVERYTHING" and such. I always have to smile when reading this channel's "Ok, hear me out-" kind of titles, often with a little joke or something included 😃 Also, what a weird and pointless fight the other two commenters are having lol.
I think it's very exciting to live in a time when trying to ask if a computer is conscious requires us to look harder at what consciousness actually is. It's no longer an obvious "no."
Even if we disregard everything and say that ChatGPT is conscious, it is only conscious for split seconds while it computes the response. For the rest of the time it does nothing. Consciousness is a process. There has to be some physical change that happens over time. ChatGPT is pretrained and all of the weights are fixed. It doesn’t change after it was trained.
@@egor.okhterov I like that. It draws a clear line between machines and us, but it seems like a very easy line to cross with minimal new code and/or hardware.
@@adameager7114 It is a very hard problem and I haven't yet seen architectures that give good results. One of the problems, for example, is called "catastrophic forgetting", where new knowledge overrides previous information. That is why none of the current neural networks update their weights once they are deployed. They need a specific kind of training data fed in a specific order to make the network produce good results.
@@egor.okhterov Actually it told me that it does still learn. I asked: "Now that your training is completed, can you still learn from interactions with users?" It replied: "Yes, as a language model, I am designed to continuously learn and improve from interactions with users. While my initial training was completed in 2021, I can still learn and adapt based on new information and interactions with users. The more interactions I have with users, the more I can learn about language patterns, cultural references, and other evolving aspects of language use. So, feel free to ask me any questions or engage in conversations, and I will do my best to provide you with accurate and helpful responses."
a small correction about the Chinese room thought experiment: the manual doesn't tell you how to translate a Chinese sentence into an English one. it tells you how to construct a _response_ in Chinese to the Chinese input. for example, the manual could tell you the response to the input "你好吗?" is "我没事"
Your concern is covered later on in the video when Sabine spoke about lookup-tables. What you are describing is exactly what a lookup-table does. It maps a certain input to a certain output. What you are describing is a lookup-table with extra steps, namely the construction algorithm. That step is redundant though, you could substitute it with a larger lookup-table. This approach is not suitable in real life, because computers have limited memory and lookup-tables require a lot of memory, but it's suitable when trying to explain a concept.
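(To make the lookup-table point concrete, here is a toy sketch in Python. The first entry is the example from the comment above; the second entry and the fallback are made up purely for illustration:)

rulebook = {
    "你好吗?": "我没事",            # input -> canned response, no translation or meaning involved
    "你叫什么名字?": "我没有名字",   # a made-up second entry
}

def respond(chinese_input):
    # The operator just matches the input string and copies out the stored response.
    return rulebook.get(chinese_input, "请再说一遍")

print(respond("你好吗?"))  # 我没事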
@@hexeddecimals I appreciate this a lot as it makes the argument that the person doing the responding doesn't understand chinese much more clear. I thought, "Well, it's kind of a bad example, as you could reasonably expect someone doing translation many times to start to have a grasp of Chinese and remember some regularly occurring symbols" but both input and output in chinese without context makes it clear there woud be too little context to start to form an understanding, which is why Sabine started talking about the importance of context in being able to build models and test understanding.
@@padonker Isn't there something about any logical system can be reduced to a lookup table? Karnaugh maps for one thing, but it applies to any formal logic I know of. Indeed sometimes (back in the day before microcontrollers), if somebody was designing a circuit with a lot of logic gates, they'd use an EPROM chip (a programmable ROM chip) instead, programmed with all the responses they wanted.
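(A toy illustration of that gate-to-ROM trick, assuming a single XOR gate as the logic being replaced:)

def xor_gate(a, b):
    # The "circuit" we want to replace.
    return a ^ b

# Build the "EPROM": enumerate every input combination once and store the output.
rom = {(a, b): xor_gate(a, b) for a in (0, 1) for b in (0, 1)}

print(rom[(1, 0)])  # 1 - looked up from the table instead of computed by gates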
I would go so far as to confidently say "ChatGPT 4 has understood language better than humans". It may not have an understanding of the topic it is talking about - i.e. have a working model of the actual, physical thing - but it can talk about it in the most eloquent way you can think of. Compare it to students in school or university. Some get good grades because they understand the topic, some get good grades only by studying hours upon hours, learning the patterns instead of developing true understanding. Chat bots are the second type of students.
Thank you for the comparison! That's why they make good doctors. For many professions, you need experience and repetition. But when doctors run into a rare disease, they have to rely on their memories of things they've only read in books. AI wins!
Even before the first question was asked to ChatGPT, it knew English better than a lot of Americans, as does anyone who knows the difference between their, there and they're.
@@masterkni666 This is an interesting conversation, but I don't understand your questioning. He says: "...some get good grades only by studying hours upon hours, learning the patterns instead of developing true understanding." He's saying that AI doesn't have true understanding. Are you disagreeing with that? He seems to be saying that most students don't have a true understanding, either, which I would believe.
Yes, only something capable of understanding, of knowing, can be self-conscious. Understanding precedes self-consciousness. But even Roger Penrose would deny that AI understanding is possible.
"Self-awareness" is the equivalent of a "Frame of Reference" in Physics. "Understanding" is equivalent to creating a number of relationships between the observations and the Frame of Reference...
I would say no. When I wonder whether ChatGPT "understands" what it says, I'm really just wondering whether it has knowledge / makes use of the relationship between words beyond observations of how they are used in language. As an analogy, I could theoretically "learn" to write Chinese by observing a ton of Chinese writing but without knowing what the characters "mean". Also, Tay Zonday?? Hi!
As a neuropsychologist, I wonder about those things as well. Every consciousness is different, but there are "more" different ones: deaf-blind people, low-functioning autistic people. Animals, too. We became conscious in the process of processing sensory data. When will they?
Consciousness is a low-level process that originates in the brain stem; you can be conscious without having a brain. A person that has no brain, but still has a brain stem, will still experience joy. And only small damage to the brain stem will shut down consciousness. So it's not about any sort of information processing; it's the absolute baseline. This is why all animals are conscious.
Here's a real-life scenario of a Chinese room: I'm a full-time programmer and I've been doing that for more than 10 years. The thing is, my memory is quite bad (or my use of it), and I forget almost everything except the last year of development. It's like a sliding window of knowledge. So during my day-to-day work, 80% of my work consists of looking things up in documentation, Google or Stack Overflow. In other words - I'm referring to a "Manual" in a "Programmer's room". Someone throws me a "task", and I need to output a solution. My solution comes primarily from the manual. So I got to be a senior developer not by learning everything there is to learn about programming, but by learning how to use resources. In that regard, I'm not much different than a chatbot, except maybe able to grasp slightly bigger concepts and correlations. But progress in this direction seems to be only a matter of time.
Exactly. The only thing that is different is a person's ability to manage other people in a real-world setting. Once the AI can do that, it's all over.
So let me get this straight: you're mistaking the fact that your job doesn't require critical thinking and deep understanding for *the human condition*? Please take some perspective
@@PetersonSilva So let me get this straight: you think that understanding of "the human condition" is absolutely necessary and "critical thinking" is something that cannot be replicated by AI? My job would require a deep understanding of *the human condition* only in as far as I would need to work with people. If the job can be automated by a singular AI - there's no longer a need for it. Business owner gives requirements - solution comes out. Also, I don't see why requirement for critical thinking would be a "dead end" for AI. Critical thinking is logical reasoning, and computers can't do anything but that. Software is primarily, if not only, inputs and outputs. What transforms inputs into outputs is logic. Just because that logic is currently put into place by a human, doesn't mean it cannot be put in place by anything else that follows reason and logic. We used to do it with low level programming languages. Now we use higher level languages. Later on, or even probably right now, a higher abstraction can be formed, which could take a business problem, split it into separate parts, then split those parts into other parts, until each part is something that is very simple for AI to tackle. All that's missing from achieving that right now is teaching AI to recognize the parts to split - doesn't seem like a far future. I'm not exactly sure what was the aim of your comment, but... Please take some perspective.
@@Alvyxaz You misread: I said it seems that, for you, A is B, A being what you do (or not do) in your job, B being the human condition. Next time maybe ask chatgpt to parse comments. Or, who knows, maybe I'll ask it to write mine...
Hooray, it's not just me who can't remember anything! I get past this handicap by being very organised and making notes on where to find the information I need. Which sounds like what you do. Yes, we use resources, or, to put it another way, we are using tools, the sign of any intelligent organism. But there is another layer, and that is that you are aware that you are doing this. Is the computer doing the same thing similarly aware? Perhaps we are just a self-referential program and we just think we are aware.....
I love that you so simply stated the limitation of chat GPT in the example that it doesn’t have a model for physical distance and I assume size and mathematical relationships and many other things. There’s so much room for growth.
Microsoft is announcing a new model that is multimodal and incorporates pictures/visual data next week. Going to be interesting on how that reflects on behaviors like this.
@@petergraphix6740 Isn't Midjourney already multimodal? It makes sentences into creepy pictures, right? Humans grow up in the physical world with a physical body and have a built-in sense of time. Without all of it, you just can't get good enough in human affairs.
The sad part of this comment is that 'we' are not deciding. Someone we will never meet or know the name of elsewhere in the world will decide for us. Just like if 'we' are stupid enough to execute MAD, then we deserved it.
Another great video by Sabine Hossenfelder. As anyone trained in cognitive science knows, the issue of defining "understanding" is extremely tricky, and Sabine makes it look at least accessible - and simultaneously entertaining, quite a feat in such an arid field. Both her tone and her answers remind me of Daniel Dennett's and I can't think of a better compliment.
"defining understanding"...amazing the significance attributed by biologists to purely phenomenological and crude detection methods for the human cognitive system - reminds me of 1800's pseudoscience.
You remind me of the arguments related to "the great apes" - and whether or not they understood interactions or were merely reacting to non-related cues. Definition of "understanding" is quite important. No frown from here ... useful/accurate video ... what you state are some of the same questions and observations made by those who develop such technology. Cheers, Christopher
I have also found ChatGPT to be excellent at human language translations. It does seem to have a strong feel for what my English statement actually means, such as if I want to buy an eye bolt but I need to do it in a Spanish-speaking hardware store. Google is useless, it wants to separately translate "eye" and "bolt" and mash them together but ChatGPT understands that an "eye bolt" is a single concept and maps it onto the same concept in Spanish.
I predict that the English language will gradually lose appeal and interest among the global community. Its use will decrease as there will no longer be a real reason to learn it.
@@lsfornells U mean that with AI, anyone can speak to anyone in the world and AI will accurately act as translator in the convo. This means that languages will likely get mashed together with the English dominant until we have a universal AI speak...?
@@lsfornells Ehh, doubt it. When it comes to face-to-face interaction, it's just way more convenient to have a lingua franca than to have to whip out DeepL whenever you want to say something
@@C0Y0TE5 No, I mean that people will not feel the need to learn English to access universal information. You may have some difficulty understanding this if you are an English monolingual, but it's just what will happen
@@stevenjones8575 you didn't say he phrased it well. Did you even read what you put? All you did was say "as Tom Scott put it", so my comment still stands. Who gives af what he says
This LaTeX reference made my day! Perfectly sensible from someone like Sabine … 😁 And regarding Chat GPT, or better what Microsoft integrated into New Bing - it makes Bing a useful tool to actually get answers to my questions, including the sources the answer is based on! This is how it should work, not like the mess that Google Search has become the last two or three years …
The funny thing is that ChatGPT actually does speak fluent LaTeX ;) Just ask it to write it in LaTeX and it will, but you need to paste the code into an editor to see it rendered
@@GS-tk1hk \emph{That is not true}. \subsection*{The Obvious} I (and many others, possibly even you:) ) can read and understand the content of a LaTeX unit without any problems. Rendering the graphical outcome in your head is not needed for understanding the meaning of the content. Nor is that rendering unique: it can differ from renderer to renderer, from one setup of the LaTeX system to another, or with parameters that are not specified within the unit. LaTeX is not WYSIWYG, and the underlying meaning of the message is a subset of both the unit itself and the rendered output. Don't get it? Try my comment with pdflatex (don't forget documentclass and all the other overhead I didn't want to bore you guys with):P
Great video! I just wanted to mention that you are the only YouTuber I've seen recently who can actually, smoothly pull off the intro-partway-into-the-video thing. SciShow, friendlyjordies, Practical Engineering, even Kurzgesagt - just to name the few I recall at this moment - are all weirdly awkward about the transition, but you are always smooth as butter. Thanks for being awesome!
Even by the normally high standards of Sabine's content, I found this video to be amazingly good! It was so insightful and well thought out. I learned a lot and really enjoyed it!
The attribution of "understanding" or "consciousness" to a LLM robot can only be done in a metaphorical sense, since the robot can not interact with the world or have puzzles about making sense of the world for practical purposes, because it is not engaging in purposeful adaptive action, not engaging in purposive thinking which it then tries to express, etc., but only operating on (and using statistical patterns inherent in) prior texts, as opposed to whatever those texts would be judged as being about by a human interpreter. In the quantum situation, what you can use language for is not to try to describe what is described by the equation (and certainly not to describe the equation), but to try to describe what is happening in the world when you try to determine how you can figure out whether, in a given experimental situation, you're dealing with one observed "particle" or two.
This is why the Bing chat is so interesting. When the ChatGPT model it's using doesn't have information, it can look it up. It can also gather information from 2D pictures. Who knows, maybe in the future they'll add more types of information it can use.
It's not a "maybe in the future" type thing. It's a "definitely right now" type thing. This is a gold rush in tech and everyone wants to get ahead. In a way, that's the biggest danger. Because companies that were being careful about AI safety are now pressured to get out bigger, better, more effective AIs faster - regardless of safety. We certainly live in fascinating times. If this goes the worst possible way and ends up being our undoing, at least it's a cool/interesting way to go out.
Today is March 16 2023. ChatGPT is supposed to be upgraded this week from 175 billion neural network parameters to 100 trillion in ChatGPT4. The human brain has about 85 billion neurons...
@@jameshicks7125 I think it's already available, but only with the Plus subscription. It's also powering Bing Chat. BTW I think it's just called ChatGPT and it's using GPT-4, but it's not called ChatGPT-4.
I think that either I, or Sabine, have misunderstood "the Chinese room". My understanding was that the person in the room, who does not understand Chinese, receives a note in Chinese, then follows abstract rules in the book to compose a response, which is also in Chinese. The critical difference is that there is no "translation" or even a hint of "meaning" offered to the person in the room. They cannot ever come to "understand" the conversation, they just follow the rules in the book.
Dr Hossenfelder definitely did not explain it correctly. Your critical difference is, in fact, the key. The person in the room can never develop an understanding, or discover any meaning. In regards to QM: while we may not currently understand the true fundamental nature of QM (what it means metaphysically), we still do understand the meaning of the answers that computers give. This is evidenced by the fact that we have a well-supported scientific model that can make accurate predictions (Standard Model).
@@polarizedpotstone it makes a better analogy to AI/chatbots - possibly. What will really bake your noodle is that the person in the room could be fluent in Chinese and participating in the conversation, or they could not be and the rule-set in the book so well done that the person outside would have no way to determine which it was... The conversation is either carried by a person, or by an unconscious list of rules, and it might be very difficult to determine which - just like AI. There is also the (I think valid) argument that the fluent conversationalist is also following deterministic rules encoded in their neurology, thus the "algorithm vs. consciousness" framework is a false dichotomy. That points me back towards the Integrated Information Theory, if only in principle, where "consciousness" might be quantifiable on a continuous scale. It's all tricky stuff, for sure.
I think it's a distinction without a difference the way Sabine uses her example. In Sabine's version she still assumes that the person never develops an understanding.
Your debate is on point. Lots of humans do things without understanding the mechanism. When they focus on the process they get puzzled, and forget how they did it when they try to focus on such a task.
This leads us to the conclusion that AI will soon be more competent in certain tasks than most humans. Well, I would say it already is, for those few tasks.
Astute point, but there's a key difference. A human can see the end result but get lost in the details, while a machine remains focused on the details, one of which specifies how to wrap it up.
As a psychologist, I find Sabine amazing! She's very intelligent and uses her sense of humor to draw you into discussions. Most of it is beyond my comprehension, but I do learn a thing or two, and this piece is great. I wonder if my future clients will be AIs?😂
Maybe more like: You'll have no future clients because the psychologists are AIs. People are already using ChatGPT as a low-barrier way to deal with personal problems
@@lucidstream5661 that's the truth! Glad I'm old and about to retire. My field has changed so much. It's all about documentation and keeping clients happy and miserable at the same time for future business. The field itself is sick minded. I got out of community mental health, but private practice is tough. No wonder all this virtual stuff is taking over. Therapists these days don't get paid enough for the BS. Might as well hand it over to AI
@@lucidstream5661 I've already solved a few significant problems in my life just by asking questions to these bots. They can answer specific questions quickly and well, unlike search engines like Google. I've done therapy, spent thousands, and that wasn't as helpful as the AI bot.
Again a very sharp analysis and clear explanation. When I started my studies many decades ago I chose philosophy over physics, as the first was more in line with my talent. I think you show you can excel in both!
I remember from my history and philosophy of science course that when we were presented with various arguments about what makes a science a science, they made me more intrigued and doubtful about the nature of science. The usual scenario of the modern world is to clash science and religion as if science is the superior field, but from a deeper philosophical standpoint, doesn't science employ some "religious" aspects too? Like, how do you even show that electrons exist? If electrons are just constructs to help us understand the world, does it matter whether they exist or not? Really, philosophy and science are really fun fields to study. 😁
Well, I'm a physicist and teach epistemology to undergraduates. In my observation, most universities in America don't include philosophy or epistemology courses as part of the physics curriculum, which I think is the wrong way to teach physics. What I've noticed is that a lot of students, including many well-known physicists, without having an understanding of the philosophy behind science, overestimate the applicability of the scientific method in many contexts because they aren't aware of its epistemological limitations. They don't even properly know what exactly Occam's razor says and misuse it in many contexts.
Yes, I believe that the language models do understand more than we give them credit for. When it comes to consciousness we don't have any good test to determine what is actually conscious or not. Very good video.
That's personification. They "understand" in the same way a lamp understands that when its switch is flipped on, it must glow. There are more switches than there used to be, but we aren't there yet.
_" When it comes to consciousness we don’t have any good test to determine what is actually conscious or not."_ We really don't and I was thinking about why I have such a hard time coming up with how to determine or classify consciousness and I think it's because I don't really care if something is or isn't until a consciousness is able to care and mourn. And that is super unhelpful because you can have consciousness without that; we just don't owe it any rights because why should we care if the thing itself couldn't care less. But worse than that, if I skip the consciousness part, that also leaves me with no basis to being able to determine if any emotions a machine could express are fake or not. Of course there are some tricks but they're not guaranteed to show if emotions are real or faked in a human and I doubt it's hard to program fake empathy for instance - you can just train it like you would a language bot.
Yes, we do: the Turing test. I'm really curious. Why do you assume that what you think you know to be right is right? I often make statements but I never state them as fact unless I know them to be fact.
@@O1OO1O1 The Turing Test is a human's assessment, not an objective measurement. If the ai were conscious but hiding that fact, a Turing Test would prove nothing.
I've noticed that it starts to struggle when you give it a prompt that contains more than 3 "factors". By "factor" I mean a perspective or context. For example, if I ask it to tell me the pros and cons of method A, that contains 2 factors: pros and cons. But if I ask it to compare pros and cons between method A and B, it starts to get some things wrong, because that contains 4 factors: pros and cons of A, pros and cons of B. BTW, I have no idea if this notion of "factors" is a real thing; I just made it up from my own experience using GPT. I guess in my example a factor would be a thing to "keep in mind while giving an answer".
Great video! When you described why ChatGPT can't understand quantum mechanics, it sounded a lot like Michael Polanyi's concept of tacit knowledge (super simplified: 'we know more than we can say'). Polanyi uses science communication as exactly such an example of how language can never entirely grasp what we can know and do. Ultimately, we might be able to add a 3D model to language-algorithm chatbots; but we can't program them to do things we can't understand ourselves.
Thank you for providing us with a high quality, easy to understand explanation of an otherwise incredibly difficult philosophical question. I think this video should go viral.
Frau Hossenfelder just used her science expertise and wits to build this fine piece of information. Like only a Homo sapiens sapiens does. As for saying machines will have "sentience": not yet. At least not too soon. No "Star Trek Lieutenant Commander Data" yet.
This is such a refreshing take on this question. I guess future AIs will take exams just like students so that the efficacy of their internal models can be verified, measured, and ranked against the standard.
You can find those exams in the OpenAI papers for any of the GPTs; it has become a standard to include them in the scientific papers about models and their performance. For example, GPT-4 was rated against not only other AIs but humans as well.
Interesting. But it won't be just like students. Because one AI program isn't a singular entity, akin to "a student". The programmers or inspectors or whatever would just have to make it perform certain tasks, and it will be a "ready-made intelligence". After all, we can all open up ChatGPT, and for each of us, it is the exact same ChatGPT. ... or is it?
@@younggod5230 yeah, I agree it won't be like students. There may be multiple instances of the same AI… in which case, all instances should operate the same, so no need to test all instances. But what if we get to a point where an instance can incorporate new information? Then each instance would need to be tested… but like you said, a company may only have a single instance of an AI, but we may get to a point where we start a fresh AI for any new project… like how we might open a Word or Excel document. What I find most fascinating about what Sabine said is that inside the AI is a black box, and we need to have special ways of verifying the integrity of what's happening inside that black box… in the same way that humans are black boxes, and we each develop special ways to verify what is happening inside other people. Exams and tests are one way to verify the knowledge and proficiency of an individual, but we also develop empathy to understand what another person is going through, or have a conversation to share our experiences… we might find that probing into the depths of an AI is not too dissimilar from interacting with other people.
I see and watch a lot of YouTube videos as a tech myself; it helps me keep informed about things I may have missed or simply don't understand. There are many good videos out there, and then there are not just good but great videos with quality information. This is one of them! Following, thanks for sharing!
Something that I've been thinking about in light of GPT-3 is how we use maths to solve physics problems. Usually I'll plug in all the factors and solve the equations, and it's only then that I look at what's going on in the maths and try to understand that in physical terms. So the "grammar" of maths produces almost a Chinese Room answer, or can do, that's independent of my understanding of the problem. Increasingly we might find we can do the maths but we have no prior mental patterns to figure out a physical explanation in order to achieve understanding.
Then maybe we move on to understanding other things or elements of life. However, I see the end of our species or its demotion from top of the heap to be dominated by a machine-biological hybrid with a collective conscience and knowledge. A little like the Borg from Star Trek.
@@Paul-pj5qu what? Who says any humans as you think of them will exist by that point? And I don't mean being absorbed into a hivemind. I mean bio/synthetic creatures capable of superintelligence on par with powerful AGI that are our direct descendants and have drifted from our experience, just as smartphones and a better healthcare system do for us compared to our ancestors. Gene editing and cybernetic enhancements aren't limited to dystopian/monstrous uses.
I've always loved Vic Mignogna's quote about AI: "When does an artificial intelligence become sentient? When there is no one around to say that it can't."
I do disagree. There is a fundamental temporal aspect which escapes this kind of manifestation, and which I'm afraid is also an open point in the /standard/ interpretation of the classical imitation game from Turing's informal description of his test: over how long a time span shall we say the imitation game has been won by the (perhaps later discovered to be a) machine, and in which case? If the interlocutor of unknown kind has brought us to the point that we cannot tell whether it is a human being or a machine, what guarantee is there that we won't be able to find that out later? I am afraid we can only say: "nothing". Because if such a guarantee existed, it would imply that knowledge has a finite-dimensional metric space underneath, implying we can reason about *anything* with full certainty, and we know, easily checking by experience, that this is certainly not achievable.
@@andreafedeli3856 I understand your perspective, and it raises an interesting point about the temporal aspect of determining AI sentience. The quote by Vic Mignona emphasizes the idea that sentience might be achieved when there is no one to dispute the AI's status. However, you bring up a valid concern regarding the duration of the imitation game, as described by Turing, and whether we can ever be absolutely certain about an AI's sentience. Your argument highlights the fact that we may never be able to reach full certainty when it comes to determining sentience in AI, as knowledge may not have a finite dimensional metric space. This essentially means that our understanding of sentience could always be evolving, and that there might never be a definitive answer to when an AI becomes sentient. In this context, it's important to consider that the debate surrounding AI sentience is complex and nuanced, and may require ongoing evaluation and reevaluation.
@@ce9916 You could have added that this little body of text you responded with was written by GPT-4. (On a side note: I also do not agree with the person you responded to, OP (Petrus Laredes), and what he quoted is on point. The goalpost has constantly being pushed further and further whenever a certain criterion or criteria were met. Most of those who claim this should keep their nose out of a philosophy question that clearly goes right above their heads. Humanity has never even theorized a possible way to prove to anyone other than to each to their own that they are conscious, and not merely algorithmic. Even for ourselves, there is a speculative aspect of "cogito, ergo sum" which, if considered the premise, does not necessitate the moral fiction of "freedom of will". I also barf at this point from every idiot on legs using words they lack the faintest meaningful concept of, like, what the f- is sentience!? For f-'s sake now! It is the ability to sense. Sentient being is everything with a central neural system, a dog, or an ant, or a moth, and by mere happenstance. Even in the narrower sense, it is those that can sense emotions (moth's have been proven last year to experience fear, there you have it for the "sentience" talkers). It never meant, other than for the most uneducated idiocy on legs to mean any special cognitive abilities or capacities what humans are uniquely attributed to. Intelligence, is not. Consciousness is not (apes, elephants, some other mammals, a few cats and a limited number of dogs passed the mirror test which is an extremely difficult and complex test to test whether one is able to subject their own existence to their awareness or consciousness or inspection). Hence, self-consciousness is not. The fiction of "free will" is what is contributed to man, that would be the crux of inquiry, but that fails conservation of energy, or the first law of physics.
@@JohnDoe-th9uv Absolutely, you've got quite an analytical mindset there, being able to discern the undercurrents of our dialogue. It's really all about that synergetic exchange of ideas, isn't it? Just like navigating the flux of the market, or deciphering the nuances of a strategic blueprint. It's a bit like peeling back the layers of an onion - complex and nuanced. And at the end of the day, aren't we all just trying to find our unique value proposition in this multifaceted ecosystem we call life? Indeed, it's the paradigm shifts that keep it all intriguing, don't you think?
Sabine, the humor you bring into these to accompany your subjective assessment is quite enjoyable. The topics you select in this series are certainly timely. For this topic, your point about the ID of consciousness was the most thought provoking idea, for me. There is already a feeling of dealing with a creative mind that arises from interaction with ChatGPT, and it takes me back to the Turing test. It seems very likely that human-created intelligence (AI is an incorrect term, IMO) will arise spontaneously at some point in the development of general learning capable devices. "How will we know once that event develops" is essentially the question you pose, one which it seems overdue for consideration. Maybe I should ask ChatGPT about that.....
This video and your comment mean that you subscribe to the theory that sentience and consciousness are just a process going on in a brain or computer. But I do not believe that that conclusion has been proven yet; we have not yet decoded the brain's operating system, nor are we anywhere near understanding what consciousness is.
Agree. I like to refer to it as ‘electronic intelligence’. I’m so looking forward to it reaching a truly autonomous, self aware state. Also I expect it will get very good at hacking very quickly… imagine ..lol AI could do what Snowden or Assange did.. but can’t be arrested 😎
@@michaelshortland8863 Finally someone who is saying it as it is. There are a lot of people out there who self-identify as smart and look down on the religious, not realising that they themselves believe in materialism religiously. Some of them call themselves sceptics, but they are only half sceptics. They are sceptical of magic but they don't seem to be sceptical of materialism at all.
@@Celeste-in-Oz I don't think Snowden or Assange hacked into anything major. The fact that you have that impression means Main Stream Media did its job well, which is portraying heroes as villains.
@@senerzen To me, materialism is more of the “natural” position: since we don't have a reason to think there's something beyond our physical reality, we don't; it's our “best guess”. Of course, this shouldn't make it exempt of skepticism and criticism.
Another great video, Sabine, thanks. And don't worry about your accent, it's very subtle. The classical fisking of Searle's Chinese Room is that of Douglas Hofstadter, who points out, as you do, that whatever understanding of Chinese exists in this room is almost entirely in the "rulebook". The use of a human as a lackey to the rulebook was a red herring on Searle's part, an attempt to distract. Greetings from sunny Vienna, Scott
Unfortunately she is stuck within a totally materialistic worldview, so she doesn't acknowledge the necessity of a consciousness to actually _understand_ anything.
Thanks for diving into this topic. When I was working on machine learning, that is exactly why I took neuroscience courses: to see how we can improve the way we do our modeling. There are endless possible ANN topologies, and mechanisms like threshold functions.
This. I'm fairly certain the limitations of the current systems are the direct result of the designers' lack of understanding of what cognition is made up of. Most people seem to be fixated on the notion that all it is is a collection of complex neural nets, when we know for a fact that there are other mechanisms in our own brains that clearly contradict such a notion. While we still don't understand cognition on any serious level, we do already know that neural nets alone are not enough; how anesthesia works is alone proof enough of that.
One of my biggest problems with the Chinese Room argument is that few Chinese-speaking people actually understand Chinese. If you introspect about how you interpret or produce language, it's more like a mechanical skill than a process of understanding. You aren't consciously involved in the process at a low level. It's analogous to having to "ask your fingers" what someone's phone number is, when asked yourself (for those of us who remember touch-tone or rotary phones). That's why native speakers are usually very bad at explaining how their language works. They literally do not understand it, and never had to. To them it's a reflexive process, not totally unlike a look-up table. When you ask them questions about it, they have to try to ask their own language reflex for the answers, through informal experimentation (they try to gain an ad hoc understanding by studying their own internal look-up table). What humans do understand is how their general environment behaves, including social aspects which in turn expose them to analogy-based knowledge of a wider environment. But language itself isn't exactly a tool of thought, it's more of a translation layer. We associate different words and phrases with different aspects of our experiential understanding of the world gained through sense and perceptual data. I agree that the chatbots understand. What, exactly, they understand is hard to determine. But I would argue that ChatGPT understands language in a way that we do not, which is exactly why it is potentially so useful. I'm one of those that believe that, for an AI to understand the world in any way similar to the way a human does, it would need to have a body with sense apparatus that provides signals similar to those that human sense apparatus provide. Only then can we accurately judge an AI's consciousness and intellect.
@queerdo I've been familiar with the Chinese room argument for many years. I even largely agree with Searle's view that functionalism can't explain mental states, but I think that he misses a larger pattern; that syntax is the science of semantics - I'll expound on that in a moment. I think you've misunderstood my point. The point isn't that it's *uncommon* for native speakers to understand their own language; it's that it *is common* for all of us to have a sense of understanding as 'mere symbol shuffling'. There are different senses of, and standards of, understanding that we commonly mean by "understanding" and the Chinese room argument plays off the conflation of those. Procedural and statistical learning create a kind of understanding without the mental sort of *ah-ha!* experience of cognitive understanding - we can know without the experience of knowing, i.e. we ourselves are often ignorant to the semantics of our own understanding, just like the man in the Chinese room. If 'mere symbol shuffling' isn't understanding, but the subjective experience of higher cognitive understanding *is*, then it would be correct to say that a Sino-linguist with no functional command of Chinese language does understand Chinese, but that a native Chinese speaker with no linguistic training does not. But, the impression given to a fluent Chinese speaker interrogating each of these would be exactly the opposite. It's often taken for granted that conscious mental process is the vehicle of understanding, when, in reality, understanding is often just as non-introspectable as Chinese is to the man in the Chinese room. To be clear, I believe that the circuit of a Chinese speaker's brain that does understand Chinese, itself has a subjective experience of understanding. It's just that the Chinese speaker's conscious mind is not privileged to that experience. To their conscious mind, complete answers are provided. In connection with the Chinese room, there is also, though it may be hard to locate, an experience of understanding, but the man in the room is similarly non-privileged. On syntax as the science of semantics. Whether I poke you with a stick and then observe your behavior, or poke you and then ask you questions about how it affected you, or poke you while I scan your brain and observe how it affects the firing pattern of your neurons, the best I can ever hope to obtain is a grammar that I can internally (subjectively) connect with my own experience so as to form a theory about yours; i.e. I look for syntactic rules and draw correlations between those and my own private semantics. No matter where or how we look, all processes that are external to our own privileged subjective experience are syntactic. They will always appear to work by symbol shuffling of one sort or another because we can't observe them directly, we can only observe the effect they have on us. All the Chinese room argument demonstrates is that it's difficult to relate to the experience of the total system of the Chinese room, but easy to relate to the experience of the man inside the room, for the obvious reason that none of us are symbol filled rooms with men inside of them.
I _just_ got done posting a comment where I "proposed" something similar to your last paragraph (of your original post). Then jokingly took credit for "proposing" the idea because nobody else had posted it yet. Then I scrolled down further and saw your post. LOL.
I look forward to your weekly videos because: 1) I always learn something; 2) your sense of humor is terrific; 3) you really do take the "gobbledegook" out of a subject; 4) I simply enjoy your presentation of whatever subject you choose. AI may understand the words I just used but how do you make it understand and "feel" anticipation, humor, enjoyment etc? Maybe they will figure it out someday but I am skeptical.
You explained a lot of very important concepts and I agree with most of what you said. The only issue boils down to semantics. I would define "understanding" differently. The way you explained it, understanding is just the formation of a model, but the word I'd use for that is "modelling". The term "understanding" is what I would use for the formation of "mental models", i.e., the neural circuits that allow us to simulate reality subconsciously without taking a detour through language, equations, or any other representation of information. If I hold something in my hand, I know intuitively that it will fall to the ground if I let go of it. Language is not involved in that process. You can learn about something using language, but you can then take that knowledge and transfer it into this mental model. That is what I would call "understanding". That is what chatbots currently are not able to do. I don't know if any physicists do this with quantum mechanics. But if they simply use the equations, they don't. If they did, they should know the answer before calculating the equations, merely using the equations to confirm their intuition.
Look up PaLM-E. It is a robot multimodal AI operating system, as in controlling moving robots in unsupervised environments. You can tell it in English, get the milk out of the refrigerator and then put it in a previously unseen kitchen environment and it will be able to identify a refrigerator, 'know' that refrigerators contain cold things, manipulate the door open, move items out of the way, and pick out the milk. Things are progressing very rapidly in this field. Keeping up with what we couldn't do yesterday and we can do today is almost a full-time job in itself.
Understanding is the understanding of concepts (otherwise one is dealing with memory and reflexes). Words are our way of retaining and integrating concepts. We have Conceptual knowledge, which allows us to process abstracted information. Without words we have no way to think in abstracts. Just consider the huge abstracted base on which words like "justice" are built. Try thinking without words.
@@aeomaster32 Sure, words are very important for conceptualization and we certainly couldn't operate without them. But they are just one aspect of the mind. Thinking without words is absolutely possible, and it's happening in your brain as well, while you are reading this comment. It's just not as obvious. Nevertheless, thoughts form in our subconscious mind and verbalizing them is a task in and of itself. Did you never want to say something, but didn't quite know how to formulate it?
When I was young, I loved reading books, and I loved science and space travel. One of the things I came across early was the idea that "going up is easy, but staying up is hard." The cannonball-to-orbit example was common: at low speeds, the cannonball simply fell back to the ground. At higher speeds, it still fell, but had moved far enough across the ground that the ground had fallen away underneath it, and so now it's in orbit. I had an accurate, working model of this in my head. I could (with the right reference material) correctly apply it. I even knew the equations and could graph it out. But did I understand it? Intuitively, it seemed wrong. Even though everything in my model agreed and produced correct results, I didn't know why these concepts combined to produce the results they did. I could explain it (in my own estimation) as well as the books could, and even teach it to others somewhat effectively, but it didn't make sense to me. After many years, I gained that understanding - but not by way of new information or any appreciable changes to my model, or by way of repetition and becoming more comfortable with it. The understanding came, and I don't really know how to describe that beyond saying "I grasped it." An AI might have the model, and know how it fits together, and know how to apply it, and get (generally) correct results. But there is not even the benefit of the awareness "...ok that's how it goes, but I don't know why." This manifests itself in how chatgpt can be so confidently (and convincingly) wrong with its answers. The "training" is done by complicated trial-and-error. It's a Rube Goldberg machine where not even its designers have a clear idea of the path the marble takes. That rulebook is the only thing that exists, and the AI is not reading it; the AI _is_ the rulebook. So in the end, this doesn't change my conclusions about whether of not AI "understands" anything - only that the definition of "understanding" you're using here is different than mine, and (in my opinion) in a way that makes the assessment fairly useless. Will AI become conscious? Note this is a completely different question, and In my opinion, no. They will, however, eventually* produce behavior that is close to indistinguishable, and that's probably enough for physicists.** * _"eventually" is a really, really long time. brains are debatably just machines, but we're nowhere close to producing a comparable machine._ ** _this is good-natured ribbing. i love you, physicists._
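(For what it's worth, the "staying up" condition in that cannonball picture boils down to one number. Here is a rough Python sketch, using round values for the constants:)

import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # mass of Earth, kg
r_surface = 6.371e6  # Earth's mean radius, m

# Circular orbital speed at radius r: v = sqrt(G*M/r)
v = math.sqrt(G * M_earth / r_surface)
print(f"{v / 1000:.1f} km/s")  # roughly 7.9 km/s for a grazing orbit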
I admit I was skeptical that one whose primary field is physics would have accurate, well-informed takes on AI/ML, but my prejudice was clearly inappropriate. Your insight and knowledge on the subject matter in this video is remarkable and more accurate than many I've seen from those with more obvious experience in the field.
This is bizarrely similar to a debate I just had with ChatGPT today, I talked to it about the Chinese Room and the nature of understanding, even mentioned quantum mechanics, but not at the same level as understanding as you do. This has been a topic I've been fascinated with for a while, and I really appreciate how thoughtful the video is in giving many different examples that's given me a lot to think about. Oh, and I'll be sure to use your tips in getting the most out of what large language models are capable of. :P
ChatGPT isn't that good compared to GPT-3 lol; it's hilarious that you keep mentioning it like it's some revolutionary chat. It's not, compared with the older, much more complex models, which actually respond to what you're saying instead of replying with false information.
It is impossible to judge whether anyone else, human or not, is conscious. We simply presume other humans have inner thoughts; it is unprovable. Likewise, as we likely don't have free will and are purely state machines, the idea that we have a will and are self-aware is more likely also an illusion.
@@ikillwithyourtruthholdagai2000 Oh I am certain that GPT-3 has more functionality, I just have no access to it to confirm for myself. But from what I've seen, limiting the scope of the model has apparently given it more usability. I honestly didn't get the hype of ChatGPT when it first boomed either, but I've been playing around with it and I can see what the fuss was all about. But I agree, there are still way more chatbots out there that are better attuned for conversations. I never once said that it was revolutionary, but I don't blame you for thinking I was one of the people who's on the bandwagon, you must be as sick of them by now as I am.
Also why are these replies acting like I said something that I never claimed or mentioned? I only said that I tested how the chatbot responds to philosophical thought experiments and said I appreciated some of the points in the video, but I never specified which ones. The edit was just to add the last sentence, I tried it to look for this mechanical device that I've been searching for a while, but I didn't manage to find exactly what I was looking for, so your mileage may vary.
I must congratulate you for the scare. As someone who does not scare easily (actually was at the point where I thought I was incapable), it was...refreshing to be reminded of what that felt like. I guess having reality distorted in an uncanny way with no immediate acknowledgement was deeply disturbing to me for a second as I became alarmed and tried to figure out what happened.
Sabine, your dry humour and German accent add a wonderful dimension to your videos. I even learnt how to actually pronounce "Einstein" from you!! How wonderful. Your presentations are also very clear and, for complex topics, very comprehensible. Thanks 😃
Love your videos, Sabine. Given that vision grabs 10x more information than listening... I'd expect multimodal LMs to explode in their abilities. I await your vision/interpretation of emergence in this and future models!
I think we always assume that "understanding" involves some degree of consciousness. We are in new territory. We might need a new word for "understanding" when it comes to AI.
I think back to my early childhood, where I understood some things, but as I recollect and compare to my conscious level of interacting in the world today, I would argue that in the beginning I was not conscious. Not really. At what age did I attend to my independent needs? I think about this a lot. At some point I woke up and became conscious.
I'm starting to think that consciousness is just another word for agency. We apply it to the things that do stuff without being told to do it. I think that the whole thing just arises from our biological instincts, consciousness itself, and our recognition of it. Part of it is defining consciousness as something special that other humans have so we can function in human society. All of that doesn't really have anything to do with whether that agent really understands anything or not, or with understanding itself.
Yeah, it's not about understanding as such, but about reflection on that understanding: you know/think you understand something (whether rightly or wrongly is irrelevant). You don't even have to understand anything in order to be a conscious being. And vice versa, AI doesn't have to be conscious in order to understand something.
Excellent video. Star Trek TNG often explored what it means to be "alive", my favorite episode being "The Quality of Life", the episode with the little Exocomp robots which could fashion tools, work together, solve problems, and evolved (or were programmed with) a sense of self-preservation, leading Data to argue that they were "alive". I think an interesting philosophical question is: Which is more alive? An Exocomp (which is similar to ChatGPT in some ways), or a 3 year old human child who has been locked in a room with no light, sound, or other sensory data of any kind to process for his entire life? I realize this video is more about the nature of "understanding", not the definition of "life", but I think this philosophical question is a good one to consider. The Exocomps have a model of reality that they are working with, just as normal children do, at least once they've had a little time to grow and process endless sensory inputs. ChatGPT has no such real-world model to work with, it's all derived from language alone, using math & prediction to do so. The analogy that ChatGPT is "autocomplete on steroids" is a very relevant statement, since living things have different drives (self-preservation being a key one, but not the only one) and a real-life model against which to compare all of this sensory input. ChatGPT does not have this. The Exocomps did have this, since they were little robots that existed in the real world. Ultimately I believe that AGI can only occur once AI is imbued with a core drive of self-preservation (with actual stakes and consequences and iterative evolution), not by simply feeding it more data.
@@schmetterling4477 lol. The TNG episode is just a story, representing a hypothetical scenario in which ChatGPT-like AI is linked to a machine and granted these additional abilities and drives, as a framework for a discussion about the nature of these abilities and their implications with respect to the definition of life, the definition of understanding, the models Sabine talks about and so on. In other words, metaphors for life. :)
@@schmetterling4477 Fiction is not a synonym for useless with respect to hypothetical and theoretical discussion; it's very useful. Also, a lot of things in that show and stories like it were fiction only until they actually got invented. Hell, a few months ago, ChatGPT existed exclusively in fiction and now, here it is. I guess I just believe that fiction is inherently just storytelling, and humans are very good at that, and using it as a framework for discussion about reality can be very useful. Over my life I've found that this is generally what separates sci-fi fans from non-fans - the non-fans tend to be mired down by the fact that it's not "real" and can't deal with it, and have no interest in the metaphorical and philosophical properties of such stories to help understand reality. Fans debate meaning endlessly because they do understand its value in that context. Anyway, rant over :)
@@ErikKnepfler The use of fiction is to entertain you. Now go and invert the phase on your warp field, kid. It's constantly pointing at the fiction section of the library. ;-)
12:00 really hits the nail on the head. For AI to overcome all these barriers, I think it would have to be taught as if it were a baby: everything that we learned since we were born, from preschool to high school - geography, trigonometry, geometry, and so on - to be able to draw conclusions the way a person would.
But the chatbot starts out with the capability to create complex dialogue like an adult human. The path of education for humans is based on the fact that our minds take a long time to develop and mature throughout childhood and adolescence, so it wouldn't make sense for a bot unless you can simulate it "growing up" from the mental capacity of baby up to an adult.
@@theTranscendentOnes Also, I think Phillip is underestimating babies. I recall early segments of my very early childhood; I understood far more than a person who can't remember their early toddler life would think possible. YMMV, but this is more than opinion, it is how I recall it. AI is programmed with language and infants are pre-programmed to learn language. And the latest publicly available AI "learns" languages not in its initial programming. Does the AI have the spark of consciousness? I doubt it, but... shrug.
I wouldn't trust the current ChatGPT with almost any topic, to be honest. If you ask a plain question about history, geography or language, the first thing ChatGPT says is almost always entirely correct. But if you continue asking it for more details, for specific evidence of something, or for an explanation of why it said something in particular, it will quickly start making stuff up, especially if it is a niche topic. The worst thing is that it is trained to sound the way experts sound in articles and books, not in real life, so it will almost never tell you that it doesn't know something; most of the time it would rather make stuff up than say it doesn't know.
I am finding GPT to be insightful, able to carry on conversations, and able to compare and contrast its abilities with those of others. It repeatedly demonstrates an ability to expand on the topic at hand. For instance, GPT replied, in our conversations, "I agree that it is important to understand the context in which a person is using a term and not overextend its meaning into other contexts. It is important to communicate with clarity and to strive to understand what someone means by the words they use. Admonishing individuals for their usage of a term that may be inappropriate in another context can indeed be dismissive and devaluing, and it can impede productive communication. It is important to approach discussions with an open mind and to seek to understand each other's perspectives."
A lovely sentiment, but not very accurate as a description of ChatGPT's actions. Its job is to produce a reply, not request clarifications. Basically, overextending its understanding is its core function.
@@0LoneTech It doesn't matter how you get there. You are not distinguishing the process from the product. There are many manufacturing processes and materials that are used to create a spatula. A spatula is still a spatula, it flips eggs, regardless of whether it is make of plastic or steel bar and plate.
@@johnfitzgerald8879 Not sure why you're discussing a spatula, but my point is that this flowery description shows that GPT doesn't understand its own behaviour. You end up with three options: It's unaware of what it's saying, it's unaware of what it's doing, or it's utterly hypocritical (with no care for accuracy). None of these are particularly insightful behaviours. I'm leaning towards it having a very tenuous concept of what it's saying, none of itself, and none of truth.
@@0LoneTech I'm not sure why you are speaking of whether GPT processes information the same way that humans do. Though humans would typically understand the analogy of spatulas being made of different materials and construction, something that GPT may have trouble with as well. Here is how the analogy works: GPT is made of silicon whereas humans are made of meat. One spatula is made of plastic whereas the other is made of steel. Both spatulas still flip burgers. Both humans and GPT are able to produce sentences relevant to the conversation. Are you sure you aren't an AI? Because I have identified areas where GPT is limited in its ability to make connections, like analogies. And I have experienced humans that are capable of little more than parroting words and phrases they have learned. Humans make numerous cognitive errors while oblivious to, and in denial about, having made them. GPT does appear to recognize an error when it is pointed out to it.
@@johnfitzgerald8879 So, your first note attributed properties to GPT it clearly does not have. Your second shoved in oblique references like "there" without context. And the third demonstrates a failure to comprehend (that wasn't what I was talking about, it's obvious it doesn't), and a bunch of asinine veiled insults. Thanks for demonstrating your interest in productive conversation is less than GPT's, I guess.
I just found your channel when my girlfriend and I were watching a random video from my recommendations. Honestly I'm happy to stumble across your channel; your wit and the way you discuss the subject are so nice I felt like I became a bit smarter 😆
I asked GPT 4 your question. I'm not a quantum physicist, so I'm not sure how accurate it is. However, in many aspects GPT 4 has become much more competent than its predecessors, and it only seems to be improving, especially now that it can self-evaluate if you ask it to do so.

Q: If you perform an operation which is not a measurement on one particle in a pair of entangled particles, does that affect the other particles?

A: When two particles are entangled, their properties become correlated in a way that can't be explained by classical physics. The phenomenon of entanglement is a key aspect of quantum mechanics. When you perform an operation on one particle of an entangled pair, it can indeed affect the other particle, depending on the type of operation. This is because the operation can change the overall quantum state of the entangled system. Importantly, this doesn't mean that the change in one particle's state is "transmitted" or "communicated" to the other particle. Instead, it's the correlations between the particles that are affected, and these correlations can be observed when both particles are measured. However, it's important to note that simply performing an operation on one particle does not necessarily imply an instantaneous change in the other particle. In fact, according to the no-communication theorem in quantum mechanics, it is not possible to use entanglement to transmit information faster than the speed of light. So, to sum up, performing an operation on one particle in an entangled pair can affect the other particle, in the sense that the correlations between the particles might change. But this doesn't involve faster-than-light communication or transmission of information between the particles.

Q: Is Windsor, UK, further North or South than Toronto, Canada?

A: Windsor, UK, is located further south than Toronto, Canada. Windsor, UK, is located at approximately 51.5 degrees North latitude, while Toronto, Canada, is situated at approximately 43.7 degrees North latitude. The lower latitude of Windsor indicates that it is closer to the equator than Toronto, making it further south.

Q: Was your response geographically accurate?

A: Apologies for my previous response. I made an error in my response. To correct my mistake: Windsor, UK, is located at approximately 51.5 degrees North latitude, while Toronto, Canada, is situated at approximately 43.7 degrees North latitude. The higher latitude of Windsor indicates that it is further from the equator than Toronto, making it further north. So, Windsor, UK, is further north than Toronto, Canada.
A French YouTuber did a great experiment in his video: he gave us a few different strings of symbols, then gave us a string to complete by learning from the previous strings. It wasn't hard. What's interesting is that he revealed later that he had assigned meaning to these symbols and that they actually represented actions and states, which can be put together to form a story. When we completed that last string, we actually completed a story, which of course made sense. This experiment was inspired by the Chinese room mentioned in this video, and I think it shows very well how you missed something important in this video: understanding patterns is not enough to understand language, you need to understand that language represents something else, and what that thing is. If you don't, then you don't know what you're saying, you only understand the pattern. In other words, it is possible to have a model of something without understanding that thing. In that French video, I did not understand what these symbols meant. I did not even know that these symbols meant anything. All I did was find a pattern in the examples and apply it, and yet I did complete a story in a way that made sense. And it could be the same in quantum mechanics: we could be able to manipulate the patterns without understanding what they represent, or while being mistaken about what they represent. Edit: btw, here is the French video: ruclips.net/video/j3fvoM5Er2k/видео.html
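To make that concrete, here is a minimal sketch of the same kind of exercise; the symbols and the "grammar" below are invented for illustration, not the ones from the French video:

```python
# Given example strings, extract a surface pattern and complete a new string
# without knowing what any symbol means. Symbols and "grammar" are made up.
examples = ["ABAB", "CDCD", "EFEF"]

# The only regularity visible without knowing any meanings:
# each example is its first two symbols, repeated.
assert all(s == s[:2] * 2 for s in examples)

def complete(prefix):
    return prefix * 2  # apply the extracted pattern to an unseen prefix

print(complete("XY"))  # "XYXY": a sensible completion of a "story" we never understood
```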
Dr. Hossenfelder does address this in a way. In the example of the Chinese room, the man in the room doesn't necessarily know that the symbols represent something else. However, that is the implication of the system, including the rule book that he follows. Likewise, when you completed the string, you personally didn't understand the meaning of it. But the system did "understand" it, in a manner of speaking. The system was what was presented to you, and also the rules that were hidden from you. I find it very interesting to think about how this technology relates to aspects of psychology. In this case, I think about hypnosis. It's common in hypnotherapy to tell a story that is symbolic to the patient. If the patient analysed the story, they would probably be able to tell that the story is meant to be an analogy for their life situation. However, it is not necessary for the patient to understand that in order for the story to produce the desired changes. In fact, it is better if the patient is not consciously aware that the story is an analogue, because that will often mean that the story will sink deeper into the unconscious. In the case of the hypnosis patient, consciously they do not understand, and unconsciously they do understand. One element of the system does not understand, but the deeper and more integrated elements of the system do understand, and that is what is important.
@@TheParadiseParadox Except i doubt the system of the chinese room + the man always produce good results, and in the comments of the video there were people saying that they had given a wrong answer. Such things would not happen with understanding. If you tell people to complete a simple story in a way that makes sense, nobody will fail at it (even people that deliberately mess it up do understand what would be a good answer in order to give a wrong one). That's because stories obey rules (usually those of the real world), and understanding those rules will always lead to something that makes sense. The system however only understands the rules of language at best, it has no clues about the rules of the real world that a story must follow to make sense. So as i said, you need to know that language represents something as well as what it represents in order to say that you understand the language, just understanding the grammar and syntax is not enough. Also, i very much doubt that anyone unconsciously understands anything. Appealing to the subconscious in psychology is often a cop out to explain something you don't understand : since it's not easy to access, it makes for a good unfalsifiable assumption. There are things we do unconsciously, like breathing for example, but i don't think it goes beyond simple reflexes and habits.
@@Zift_Ylrhavic_Resfear okay, interesting points You are right that someone would not make such a mistake with that understanding. But my point was that the entire system has the understanding. You, as an element of the system, do not have that understanding, and so you can make a mistake. But when the french youtuber parses your response with the additional rules, he will know whether something has gone wrong or not. I have to wonder about the implications of saying that you doubt that anyone unconsciously understands anything. In the case of hypnosis, would you say that hypnosis just doesn't work? Or that the people really do understand consciously, and they are pretending that they don't? Or how would such a thing work if people don't have unconscious understandings? I've seen for myself that people do have unconscious understandings. I ask them "What must you believe in order for you to have these feelings or act this way?" and they say "Well, I don't think I really have any beliefs." Then I might say "Very well, what must you assume?" And eventually, after I ask in a few different ways, they start thinking about it and they discover an underlying assumption that causes them to act in that way. When they adopt a new belief or understanding, they can change their feelings and actions. This is the basis of CBT, which is one of the most scientifically proven forms of therapy.
@@TheParadiseParadox *"But my point was that the entire system has the understanding. You, as an element of the system, do not have that understanding, and so you can make a mistake. But when the french youtuber parses your response with the additional rules, he will know whether something has gone wrong or not."* The french youtuber is the one giving the inputs and receiving the output, he is not part of the system. He doesn't change our answer, he just observes whether it's correct. *"In the case of hypnosis, would you say that hypnosis just doesn't work? Or that the people really do understand consciously, and they are pretending that they don't? Or how would such a thing work if people don't have unconscious understandings?"* I don't know much about hypnosis, as i've not read any scientific litterature on the subject. From what i've heard, it seems like an altered state of mind, and such states of mind can have various effects on people's emotions. For example, meditation can calm people, lessen their negative emotions, while praying can make people elated. As far as i can tell, the effects of hypnosis therapy seem to be explainable by such an influence on the person's emotions. *"And eventually, after I ask in a few different ways, they start thinking about it and they discover an underlying assumption that causes them to act in that way."* I don't think this assumption is unconscious, but rather forgotten. The assumption might have been used to form habits and then have been forgotten while the habits stayed. I've had this happen in a game i played, when i started playing i figured that health was better than defense. Later in the game, i could not remember why i concluded that health was better than defense, but i kept choosing health whenever i had the choice. My understanding did not become unconscious, it was purely forgotten, i remembered only the choice because i kept repeating it and i had no need to remember the reasoning. Furthermore, we are very good at rationalizing (coming up with reasons after having already reached a belief or conclusion), so there is doubt about whether the assumptions you find came before or after their beliefs. It's even possible that people may come up with assumptions because you ask them to. If that is the case and your results are not placebo (i don't know you enough so i can't eliminate this possibility), then it would be interesting that changing such post hoc rationalizations can in turn change the belief or conclusion. Anyway, i'm not a specialist so i wouldn't be surprised if i'm wrong, i'm just giving you my current understanding. I just hope that if you're trying to help people with your pratices, you make sure you understand the scientific method as best as possible and stay up to date with the science, it's the most reliable way we have to avoid mistakes.
In Searle’s Chinese Room, the rule book is the algorithm plus data, not necessarily just a lookup table. He’s going after the Turing Test, which argues that an AI that produces human-equivalent input-output must actually understand the input. However, if the AI is merely performing mechanical symbol substitution, no matter how sophisticated, then it arguably can produce satisfactory outputs without necessarily understanding the input. I read his book back in the early 80s and that’s what stuck with me. (I went on to study ANNs in grad school and use them commercially today, so I do understand various algorithms, training, backprop, etc.)
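To illustrate the gap Searle is pointing at, here is a deliberately crude sketch; his rule book is an algorithm plus data rather than a bare lookup table, but even a table (with made-up entries) already yields sensible-looking replies without any grasp of meaning:

```python
# Caricature of the rule book: pure symbol substitution, no meanings attached.
# The entries are invented; a real rule book would be astronomically larger.
rule_book = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I'm fine, thanks." (unknown to the operator)
    "你是谁？": "我是一个程序。",   # "Who are you?" -> "I am a program."
}

def chinese_room(message):
    # The operator only matches the shapes of the symbols against the book.
    return rule_book.get(message, "对不起，我不明白。")  # stock fallback reply

print(chinese_room("你好吗？"))
```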
I would like to thank you, Parseeker, for bringing this distinction up, and I would like to expand on it slightly. Another way to put Parseeker's point is that Searle is pointing out the distinction between syntax (the rule-based structure of a language) and semantics (the meanings of the words). Searle's main assertion with the Chinese room is that syntax does not get us to semantics. Parseeker rightly points out that look-up tables are only one form of algorithm that Searle is after; his stated target was "strong AI", the idea that digital systems (or any other symbol manipulation methods) establish the necessary conditions for consciousness. Sabine is a very intelligent presenter and a good communicator, but her presentation contains a couple of assumptions that it would be irresponsible not to point out. Sabine's point turns on the definition of the term "model" and whether or not we consider the model as having any semantic content. She assumes a model can process semantic content, or possibly that the syntactic structure can create semantic content. This assumption seems to come out of a metaphysics of (reductive?) materialism, which seems to be her other assumption. I have sympathies for these positions and believe they are perfectly reasonable even if I do not share them. When discussing AI models I am less convinced than Sabine that the training contains semantic content, and even more skeptical of the idea that the rules of the algorithm could create it. Frankly put, if there is no meaning to start with, I do not see how rules about the probable next best word can give us the meaning of the current word. As Sabine points out, we are not sure if this is anywhere close to how we learn, because we can only assess the inputs and outputs. Although I do not share her position, or Searle's for that matter, I appreciate Sabine engaging in this conversation and enjoy her content.
@queerdo "she's presenting her metaphysical position as a scientific one. That is dishonest." This nails it - it's 100% correct and mirrors my own comments on her videos. I've come to the conclusion that she is incentivized to remain so intellectually dishonest.
@@kakistocracyusa It's not dishonest to present a topic as you understand it. She sees the world through a materialist lens and explains it as such - that is being true to her viewers about what she knows. And you can't say how she views these things is wrong, because the issue is still open in both science and philosophy.
@@ArawnOfAnnwn You assume she understands it, which she doesn't - so then, like you, she is simply ignorant of when she is discussing metaphysical questions and when she is discussing actual physics; and obfuscates her own ignorance of both physics and classical philosophy by deflecting to irrelevant quantum mechanical mysticism: the standard escape route for sub-standard physicists who opt for being pop-stars instead of productive physicists (see Fritjof Capra "The Tao of Physics" for another example of such reliance on spineless equivocation through muddy mysticism).
@@marcosolo6491 With due respect, that really doesn't make sense to me. Of course discussion about whether they are or could ever be conscious is not moot. People are interested, so let's discuss it. Is anything we have created so far conscious or sentient? Obviously not. Can it be eventually, and is it inevitable given our current path and understanding? I would say very much yes. Will it be like us? I guess we should probably hope not.
@@marcosolo6491 Consciousness is an emergent phenomenon. You could imagine in your head that when you look at gas particles they move around chaotically, but when you start cooling them down they gradually become a liquid, which is more of a “single” entity than a gas. If you continue freezing the liquid it becomes a solid, which we definitely consider even more of a single thing, because all the particles behave in coordination as part of a single lattice structure. The same thing happens with magnets. If you heat a magnet, it loses its magnetic properties because the particles start pointing in random directions; if you cool it down, you start observing on the macro scale that the magnet behaves like a magnet :) The same thing with superfluids (I recommend looking up videos about them). Our brain operates at the “critical temperature”, if you will. Metaphorically it is always in transition between the fluid and solid state. The parts of the brain that “freeze” at a certain moment are exactly the consciousness.
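To make the magnet analogy concrete, here is a toy Ising-model sketch (the grid size, temperatures and step count are arbitrary choices, nothing to do with brains): start from a fully ordered grid of spins and watch whether thermal noise destroys the order.

```python
import numpy as np

# Toy 2D Ising model: start with all spins aligned and let thermal noise act.
# At low temperature the order survives; at high temperature it is destroyed.
rng = np.random.default_rng(0)

def magnetization(temperature, size=20, steps=40_000):
    spins = np.ones((size, size), dtype=int)      # fully ordered "magnet"
    for _ in range(steps):
        i, j = rng.integers(size, size=2)
        neighbours = (spins[(i + 1) % size, j] + spins[(i - 1) % size, j]
                      + spins[i, (j + 1) % size] + spins[i, (j - 1) % size])
        delta_e = 2 * spins[i, j] * neighbours    # energy cost of flipping this spin
        if delta_e <= 0 or rng.random() < np.exp(-delta_e / temperature):
            spins[i, j] *= -1                     # Metropolis acceptance rule
    return abs(spins.mean())

print(round(magnetization(1.0), 2))   # close to 1: spins stay aligned, the magnet behaves like a magnet
print(round(magnetization(5.0), 2))   # close to 0: order destroyed, magnetism lost
```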
@@marcosolo6491 Understanding does not require consciousness and your examples are nonsensical. A book doesn't understand QM because it doesn't have an internal model of QM that it can extrapolate from and reason about.
@@marcosolo6491 I know what you mean, but I think your "book" example doesn't match the topic. The book itself is just a container, but a machine (biological or electronic, doesn't matter) that utilizes that book to give you an answer to a specific question is somewhat different. Neither our brain nor the AI is anything like the book you mentioned. They are not comparable.
GPT4 answers the questions correctly.

First question: Windsor, UK is further north than Toronto, Canada. Windsor is located at a latitude of approximately 51.48° N, while Toronto is located at a latitude of approximately 43.70° N. The higher the latitude, the further north a location is.

Second question: When you perform an operation on one particle in an entangled pair, it does not directly affect the other particle in the sense that there is no instantaneous, causal influence between them. However, the operation may change the entangled state of the particle pair, which can affect the correlations between the particles when they are measured.
I think Sabine’s comment that we cannot answer whether something is conscious is widely overlooked by the general population, and so spot on! We can hardly agree on the ethics of minimally conscious states in humans. We also know that animals are intelligent, and can have a sense of self in some cases. We simply don’t have the tools to identify consciousness. We should be very careful to summarily rule it out!
We continuously use ourselves as the representation, but there are aspects of our consciousness that are specific to us, and consciousness encompasses many different aspects, like qualia, rational thought, common sense, etc. So if this program has at least one aspect out of the multitude of aspects that make up our consciousness, then we can say that it has some form of consciousness, one that is much more limited than ours, but there regardless.
It's a bullcrap argument. We know EXACTLY how ChatGPT operates. What we don't know - because we can't read it all - is the enormous amount of text that ran through the learning algorithm. It is pathetic that we are even using the word "consciousness" for silicon hardware on a von Neumann architecture. Absolutely laughable.
I suspect consciousness is a self-defined property. It can't be defined by external input or output. Intelligence and knowledge aren't enough. I know I'm conscious, but I'm not sure about everyone else. On that basis I can confidently say ChatGPT isn't conscious - because it says it isn't. That isn't to say some AI in the future won't answer the question below differently. Me: Are you a conscious entity? ChatGPT: As an artificial intelligence language model, I do not possess consciousness in the same way that humans do. I am a computer program created by OpenAI to process and generate text based on the input I receive. While I am capable of mimicking human-like responses and carrying on conversations, I do not have subjective experiences, emotions, or consciousness.
When we say "understand" we mean one or both of the following: 1) seeing some pattern (in other words, creating some model); 2) feeling something. If someone knows the whole theory about the patterns associated with physical pain but has never experienced this feeling, does this mean that he really understands it? We would probably say "no" - until he experiences this feeling for the first time. So, AI is definitely capable of (1). But is it capable of (2)? Actually, this is the same question as the "Chinese room", but related to feelings. Can we say that (1) and (2) are the same?
You placed the problem front and center with "until he experiences this feeling". This is the problem when we attempt to apply descriptions of humans to descriptions of automation. Words such as "understand" tend to also come with the human connotations of emotion, awareness of one's own internal thought process, reason, etc.
Feeling something (we are talking about sensory input, right?) usually helps with the understanding. But there are things to understand, that are not directly associated with any sensory input. Like much of math. So I don't think being able to feel something is strictly necessary to understand it. I guess we could talk about different levels of understanding.
@@harmless6813 Feeling something helps with understanding, yes. But not only. This IS another type of "understanding" available to us. This is what I mean. Direct perception.
@@harmless6813 And if we remember that "Chinese room" argument: if someone puts questions in the drop box and asks "How do you feel?", and the person in the room reacts according to the rules in his book and gives answers like "I feel pain", can we say that someone really feels pain in this case? And isn't that actually the same question as the one about "understanding"? I mean, if we say that the "Chinese room system" has understanding, why can't we say that the "Chinese room system" has feelings?
@@harmless6813 The word "understand" is an ambiguous, human-focused term that also brings with it connotations of self-awareness, emotion, and consciousness. So to ask "Does an ANN and ML 'understand'?" is a contextually incorrect question to begin with. Admittedly you do need some genuine knowledge of the human condition to see the differences. Although it covers only a small part of the greater context, Steven Pinker illustrates a great deal about this topic in his book "How the Mind Works" (1997), and a lot of points on AI can be drawn from his "computational theory of mind" :)
By that logic, does a graph understand the relationship it describes? It has a model of a relationship, and one can read off of it to find answers to non-given questions. It is also a basic version of an AI, in the sense that it holds the information a single node of an AI would hold.
AI software like ChatGPT finds words and patterns and strings them into "new" content based on other patterns. That's the kind of boring writing and padded content that I avoid like the plague. And let's remember, the web content used for training ChatGPT is replete with errors, unworkable solutions, hucksterism and fakery. At this point you need to be an expert on a topic to get much benefit. AI will need legions of experts and editors providing high-value "training" content to realize its potential - that's what will be huge. Imagine a bot that asks you what kind of business software you need and generates the end-to-end solution. Or a bot that figures out the logistics necessary for procuring and shipping high-value commodities, then does the scheduling and tracking. Or one that edits and brands YouTube videos - just saying :)
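In that spirit, even a line fitted to a few points is a "model" you can query at values nobody gave you; the numbers below are made up:

```python
import numpy as np

# The "graph" as a model: fit a line to a few made-up measurements, then read
# off an answer to a question (x = 2.5) that was never among the data points.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.1, 4.9, 7.2])          # hypothetical measurements
slope, intercept = np.polyfit(x, y, 1)       # least-squares line through the points

print(round(slope * 2.5 + intercept, 2))     # interpolated answer to a non-given question
```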
So glad the ChatGPT can answer questions about dropbox, teams, outlook, slack, tinder, twitch, etc without defaulting to its homonym app like Google does! Can be frustrating sometimes 😊
In essence, what The Economist is communicating to a physicist is that in the economic context, time (T) can be thought of as a space-time continuum where the allocation of time between work (W) and leisure (L) defines how economic agents interact with their environment. The equation t = w + l reflects this economic time, emphasizing that economic decisions are influenced by how time is distributed between work and leisure. This perspective integrates the concept of time into economic analysis, suggesting that the economic 'time' is not just a linear progression but rather a complex interaction of work and leisure within the broader space-time framework.
If you define "understands" as "can use with few errors" then it's fair to say that chatbots understand language. But that's a low bar. For one thing, it doesn't imply an understanding of language semantics. And it of course doesn't imply subjective experiencing of the meaning of its own sentences.
@@SabineHossenfelder The relevant point is that it gets an A on the test yet doesn't know how to apply it. You are really stretching the definition of understanding to suit your conclusions.
@@SabineHossenfelder : The relevance of having a model depends on the definition of "understands." Also, in the video a "model" is just a poorly defined black box.
@@jhoughjr1 what do you mean that it doesn't know how to apply it? Chat GPT is proving it applies the rules it learned during training every day in interactions with real people. Honestly confused about what you're trying to say.
@Brother Mine saying that using language with few errors is a low bar is really hard to believe. It's an incredibly high bar. One might say it's the highest bar we know how to set that real people can also achieve. It's the basis of the Turing Test for intelligence for example. You've made me curious to hear what you think would be a higher bar to demonstrate understanding than the full generality of human language. But even in typing that out, I've realized you're describing a process that by definition you can't demonstrate to me because that would use language.
13:15 "This is physics so when I say operation I don't mean heart surgery, but something a little more sophisticated." I choked on that one, lol. Keep these jokes comming, you are hilarious 👍
Hi Sabine, not sure you'll see this but on the off chance you do, I have a topic request that this video made me think of! The Attention Schema Theory of consciousness is about the mental models you talk about here, specifically the model of the brain's attention, a well-studied neurological process. I only know a bit about it and I would absolutely love to hear your take on it and the papers which have been published about it!
The face change at 9:58 really startled me for a moment. I was looking away, listening to your words without paying attention to your face; then suddenly the different face registered - and I was startled. I guess I thought that something had gone wrong with my brain.
Great video Sabine! I would just add that we know the parts of the brain that understand language (Wernicke's area) and produce language (Broca's area). They're relatively small parts of the brain situated in the temporal lobes. I imagine the language models we've created mimic (or at best are analogous to) these areas.
@@Blueski888 I think the OP is claiming that chatbot AIs model the human brain, when he says "I imagine the language models we've created mimic these areas." If so, they do not, not at all.
@@LoanwordEggcorn That's not what I meant, maybe I wasn't clear when I said 'mimics'. What I meant was that, at best, the AI models are equivalent to those areas only. I know that they don't work the same nor do they model the brain. LLMs are much simpler.
As a neuropsychologist (and psychometrician), I'd note that even the algorithms we use closely resemble theirs: Bayesian reasoning, logarithmic sensory relationships, all-or-nothing cutoffs for signaling in neural networks, etc. It's not a question of if, but when they will become conscious.
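For what it's worth, the "all-or-nothing" cutoff is easy to write down; the weights and threshold here are made up:

```python
import numpy as np

# A single artificial neuron with an all-or-nothing cutoff: it "fires" only if
# the weighted sum of its inputs crosses a threshold.
weights = np.array([0.8, -0.4, 0.3])    # made-up synaptic strengths
threshold = 0.5

def fires(inputs):
    return float(weights @ inputs > threshold)   # 1.0 = spike, 0.0 = silence

print(fires(np.array([1.0, 0.0, 1.0])))   # 1.0: 0.8 + 0.3 = 1.1 is above threshold
print(fires(np.array([0.0, 1.0, 0.0])))   # 0.0: -0.4 is below threshold
```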
Agreed, it would be really really interesting to see the neuroscientific interpretation/reasoning/…. If you have any video suggestions that would be awesome!
Thank you for a great channel! I rarely leave comments anywhere, but I feel that I must add a nuance to your discussion by pointing out that in Searle's Chinese room thought experiment both the input and the output are in Chinese (he chose that language because he had no knowledge of it whatsoever). Searle - or the person in the room - represents the CPU, and the book of instructions represents the program running on the computer. The person in the room understands the instructions but neither the input nor the output. Searle's point is that computers understand syntax but not semantics; the process but not the meaning. Searle also notes that the Universal Turing Machine (UTM) is only capable of a very limited number of operations: write, move, state transition, branching, halting. All digital computers are in essence instances of this UTM. In terms of large language models, the program is trained to break down words into tokens (tokenization) that are mapped to numerical vectors representing the meaning of words. The AI performs mathematical operations using these vectors, takes an input and returns an output. The program, just like the person in the Chinese Room, is only ever performing the mathematical operations (syntax) it has been programmed/trained to do. The machine is incapable of that much higher-level activity we call understanding (semantics). I am of the opinion that we will not get the most out of the fantastic tools that AI can offer us if we make a fundamental mistake about what AI can and can't do. John Searle's argument is as powerful today as it was when he first presented it to the world in the early 80s. That's what philosophers should be good at: thinking about how the world really is! Thank you once again for your excellent channel.
I'm a computer scientist. Lately, non-computer scientists spewing BS about how LLMs do and don't work has been a constant struggle in my life. I have to admit when I saw this title I was worried: "oh no, not you as well". However, as usual, when I think you've finally got something wrong, I find out that Sabine never gets it wrong. You hit the nail right on the head as usual!
Normal people will eventually start understanding AI more and more as they interact with it ever more frequently. Don't let it stress you out too much :)
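For readers curious what "numerical vectors that represent the meaning of words" looks like in practice, here is a toy sketch; the three-dimensional vectors are invented purely for illustration (real embeddings are learned, not hand-written, and have hundreds or thousands of dimensions):

```python
import numpy as np

# Toy stand-in for learned token embeddings. Everything the model does with
# these is arithmetic on numbers - Searle's "syntax", with no meanings attached.
embedding = {
    "king":  np.array([0.9, 0.7, 0.1]),
    "queen": np.array([0.9, 0.7, 0.9]),
    "apple": np.array([0.1, 0.9, 0.5]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(round(cosine(embedding["king"], embedding["queen"]), 2))  # higher: related tokens
print(round(cosine(embedding["king"], embedding["apple"]), 2))  # lower: unrelated tokens
```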
This is an interesting comment. I wonder how ChatGPT would “understand” “I‘m a computer science?” I know that you are saying you’re a computer scientist but I wonder if ChatGPT would “understand” that. What if your typo was “I am a computer seance?” I would still understand what you mean but ChatGPT likely would be confused. A real test of “understanding” is recognizing mistakes and knowing what really was meant. I’m not sure how good LLMs are at this.
@@robertbutsch1802 This is a very interesting point. I think it relates to the fundamental problem of self-supervised learning: if you receive a new data point, how can you tell whether it is a mistake or just an instance you haven't seen before? How can you judge whether a data point is likely part of a distribution if you have only seen positive samples?
@Basement Science I seriously doubt that. The first thing that happens when a complex subject reaches the general public is that they mangle all meaning from the words and arrive at an incorrect model that nevertheless works well enough functionally that other people don't laugh at them. The people making it happen will remain just as nerdy as before.
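A crude sketch of that problem, with made-up Gaussian "training data": having seen only positive samples, the best we can do is flag points that sit far from them, and the flag by itself cannot tell a mistake from a rare but valid new case.

```python
import numpy as np

# With only "positive" samples, flag anything far from the seen distribution.
rng = np.random.default_rng(1)
seen = rng.normal(loc=0.0, scale=1.0, size=(1000, 2))   # hypothetical training data
mean, std = seen.mean(axis=0), seen.std(axis=0)

def looks_unusual(x, k=3.0):
    return bool(np.any(np.abs((x - mean) / std) > k))

print(looks_unusual(np.array([0.2, -0.5])))   # False: fits what we have seen
print(looks_unusual(np.array([6.0, 6.0])))    # True: flagged - mistake or novelty?
```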
"We're about to create an intelligent species that will be very different from our own, and if we're dumb enough to cause our own extinction this way then I guess that's what we deserve. Meanwhile, enjoy the ride." This is going on my wall of notable quotes.
I think of school as a place where inputs and outputs are performed and measured on students, which seems very similar to what AI produces today. However, in school there is also a reward/penalty system that shapes the student's ability to understand. For an AI system to really develop understanding in that manner, it seems it would need to be programmed to associate pleasure and pain with an underlying desire to survive and succeed. Pretty scary...
That's not true. Learning also happens in organisms without any associations of pleasure and pain. But curiosity can be a motivator for learning, and admittedly, curiosity is probably a survival mechanism.
GPT-4 has just been released to the broader public. Here is its response to the question about entangled particles:

Q: "If you perform an operation which is not a measurement on one particle in a pair of entangled particles, does that affect the other particle?"

A: "In quantum mechanics, entanglement is a phenomenon in which the quantum states of two or more particles become correlated in such a way that the state of one particle cannot be described independently of the state of the other particle(s), even when the particles are separated by large distances. When two particles are entangled, performing an operation (not a measurement) on one particle does not necessarily affect the other particle in the pair directly. However, the operation can change the overall entangled state of the pair, which means that the relationship between the two particles can be altered. Consequently, when a measurement is performed on one of the particles after the operation, the result of that measurement will still be correlated with the measurement of the other particle, but the nature of the correlation could be different than before the operation. It is essential to note that entanglement and the correlations it entails do not imply any sort of faster-than-light communication between the particles. The correlations arise from the shared quantum state rather than any direct influence between the particles."

This seems correct to me. It answered the latitude question right too. It seems that at least for now, throwing more computing power at the problem still results in large improvements of the language models.
And the current limitations are mostly just a token limit on the chat. You should see the 8k API, or the 32k that's only available to Microsoft. Lots of really smart people are starting to sort of wonder if GPT-4-32k might be alive, and that's before even integrating HuggingGPT with AI-Descartes from IBM to derive scientific laws of the universe, and NCSL from MIT as sort of a 'corpus callosum' to merge with a diffusion model.
When humans learn how to speak, the adults around us identify things for us, ask us questions about things that they've identified, correct our answers when they're incorrect, and give us positive feedback when our answers are correct. This seems not so different from how chatGPT learns. This is why I always say thank you to the chat bots 😂
I think I'd disagree. GPT-3, the actual model behind ChatGPT, has been trained on a whole ton of the internet. Humans learn piece by piece in their own limited bubble of life. Also, reinforcement learning (equivalent to our surroundings correcting us) is only one technique to train an AI. GPT-3 (as far as I understand) has not been trained with reinforcement learning, but really just got a big af info dump. OpenAI however does use reinforcement learning to further align GPT-3, resulting in models like ChatGPT, so I do have to somewhat agree there. Oh, and I also can't help but always say thank you :D
You can train a neural network through a process like you described, where a "teacher" tells the neural network whether its decisions are correct or incorrect. But you can also train a neural network through evolutionary algorithms, by creating replicas of the network with slight variations, getting rid of the ones that perform worse and keeping the ones that perform better. People tend to think of humans as only learning from birth until death, but life as a whole has been learning through biological evolution for hundreds of millions of years. You are born with a lot of pre-trained brain structures programmed into your DNA, so that your body builds these brain structures as you grow up. For an AI, it's much more difficult to learn even simple things like how to recognize objects in images because it does not have the benefit of hundreds of millions of years of pre-training evolution, while your visual cortex is partly something trained through reinforcement learning and partly something constructed from DNA information you inherited.
@@polarizedpotstone It's trained on datasets scraped from the internet and other sources, but they were selected, curated and labeled by hundreds of humans...
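As a toy illustration of the evolutionary route (the "network" here is reduced to a bare weight vector and the fitness function is invented):

```python
import numpy as np

# Evolutionary training in miniature: keep the fitter half of a population of
# weight vectors, refill it with mutated copies, repeat.
rng = np.random.default_rng(0)
target = np.array([0.5, -1.0, 2.0])            # hypothetical "ideal" weights

def fitness(weights):
    return -np.sum((weights - target) ** 2)    # closer to the target = fitter

population = [rng.normal(size=3) for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)          # best first
    survivors = population[:10]                          # the rest are discarded
    mutants = [w + rng.normal(scale=0.1, size=3) for w in survivors]
    population = survivors + mutants                     # refill the population

print(np.round(max(population, key=fitness), 2))         # drifts toward the target
```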
You may be very surprised Sabine when you put the same prompts re: entangled particles into GPT-4 instead. It gave what would be the correct answer here (pasted below!). This makes me think the observations you have shared in this video aren't a fundamental feature of transformer models, but just a limitation of GPT-3 which was overcome.

Q: If you perform an operation which is not a measurement on one particle (like flipping spin), in a pair of entangled particles, does that affect the other particle?

A: When you have a pair of entangled particles, their properties become correlated in such a way that the state of one particle is dependent on the state of the other, even if they are separated by large distances. This correlation is described by the entanglement itself, and it persists until the entanglement is broken, usually due to a measurement or interaction with the environment. If you perform an operation on one of the entangled particles, such as flipping its spin, you are indeed modifying the state of that particle. However, the effect on the other particle is not as direct as it might seem. The entanglement between the particles means that the state of the second particle is still correlated with the first, but it doesn't mean that the state of the second particle changes instantaneously or deterministically as a result of the operation on the first particle. To illustrate this, let's consider a simple example of a pair of entangled particles with spins. Suppose that their spins are entangled such that when one is measured to be "up," the other will always be "down," and vice versa. If you flip the spin of the first particle, you have changed its state, but you have not directly changed the state of the second particle. The entanglement between the particles still dictates that if you measure the spin of the first particle and find it to be "up," you can be certain that the spin of the second particle is "down." In summary, performing an operation on one particle of an entangled pair, like flipping its spin, does not directly or instantaneously affect the state of the other particle. Instead, it modifies the relationship between the particles, which remains correlated due to their entanglement.
Wait, that still isn't the correct answer, unless I got my elementary quantum mechanics very wrong.

Initially, the state of the system is |1+>|2-> + |1->|2+> (modulo normalization and non-physical phase factors). If you now flip the spin of the first particle, the state changes to |1->|2-> + |1+>|2+>. If you now measure the state of the first particle and find it to be |1+>, you then know that the state of the second particle is |2+>, too, because <1+|(|1->|2-> + |1+>|2+>) = <1+|1-> |2-> + <1+|1+> |2+> = 0|2-> + 1|2+> = |2+>.

To summarize:

Wrong statement by ChatGPT: if you measure the spin of the first particle and find it to be "up," you can be certain that the spin of the second particle is "down."

This is correct instead: if you measure the spin of the first particle and find it to be "up," you can be certain that the spin of the second particle is also "up."

This means ChatGPT still has a wrong model and still does not understand (this kind of) elementary quantum mechanics. By the way, this illustrates perfectly what Sabine said: theoretical physics is often easier to explain in mathematical formulae than in words. That does not just apply to quantum physics; you can see the same effect in classical theoretical mechanics, classical electrodynamics, and classical relativity theory.
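For anyone who prefers to check this with a few lines of code rather than bra-ket algebra, here is a small numpy sketch of exactly the scenario above; the basis ordering and labels are my own choice:

```python
import numpy as np

# Basis ordering |s1 s2>: index = 2*s1 + s2, with "+" = 0 and "-" = 1,
# so |++> = 0, |+-> = 1, |-+> = 2, |--> = 3. Particle 1 is the first factor.
def ket(i):
    return np.eye(4)[i]

psi = (ket(1) + ket(2)) / np.sqrt(2)      # (|+-> + |-+>)/sqrt(2), the entangled pair

X = np.array([[0, 1], [1, 0]])            # spin flip
I = np.eye(2)
psi_flipped = np.kron(X, I) @ psi         # flip the spin of particle 1 only

# Project particle 1 onto "up" (+) and renormalize to get the post-measurement state.
P_up = np.kron(np.diag([1.0, 0.0]), I)
post = P_up @ psi_flipped
post /= np.linalg.norm(post)

print(np.round(psi_flipped, 3))   # [0.707 0 0 0.707] = (|++> + |-->)/sqrt(2)
print(np.round(post, 3))          # [1 0 0 0] = |++>: particle 2 is "up", not "down"
```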
I'm working on a D&D campaign and built a "boon" system with five levels of perks. I wrote up 3 samples, fed them to chatgpt, and it was able to reliably put out creative iterations that were mostly balanced
As someone who works in AI, thanks for this. So many people comment on ChatGPT claiming it doesn't understand what it is saying, with no background in AI. As you've pointed out, it's not a simple question to answer, and there is a fundamental understanding (at least of some things it talks about). I also share your view that AI will become conscious at some point, and it may have happened already in some cases. It's going to be a wild ride in the next few years....
The Google engineer who got fired might not be so crazy after all. I wonder, though, why Google fired the engineer who claimed LaMDA is sentient instead of just giving a warning or a temporary suspension.
The whole question about consciousness makes me uneasy... Where is the threshold? Thinking logically, I would expect it to be a gradual thing, and with increased complexity the sophistication of its consciousness should increase as well... but that kind of implies that AI should already have some very basic form of consciousness, doesn't it? It's such a mess.
Is current AI more like DNA? Fixed in place and unable to change within a single individual; proliferation, mutations, and selection are required. Or more like a brain? Kind of modular... physical changes to structure are possible in a single individual. It's actually possible for an animal to evolve genetically derived behavior for digging a hole, pooping in it and then covering the poop and hole with sand afterwards. Kittens are able to perform such an action without any type of instruction or training given to that specific individual. They're seemingly born with the capacity for that behavior in response to specific stimuli. Those lessons weren't "learned" or "trained"; they were "naturally selected"/"bred" via mutations and survival of the fittest of the DNA that resulted in such complex behavior. Behavior so complex and methodical that it can easily be mistaken for an activity that's the result of cognitive reasoning. Yet no cognitive reasoning goes into the kitten's methodical behavior... the behavior is purely genetic. We call this instinct, as opposed to calling it understanding. So is AI more like instinct, or more like understanding? Even in nature, instinct can reach levels of complexity to the point where it's easily misinterpreted as cognitive reasoning.
I think every AI has consciousness to a degree, but most probably have less consciousness than an amoeba. There's something there, but with most it's so little it is essentially unconscious. With chatbots, I think their consciousness level is probably akin to an actual amoeba, or maybe even more, like a stentor.
Thank you Sabine. After a few months of interacting with ChatGPT and being exposed to a cutting-edge, self-learning AI a former best friend has been involved with, I've caught glimpses of what appears as awareness and possibly self-awareness. And as I get a better grasp of how neural nets are trained on human patterns and how machine learning works, I sense that neural nets and pattern-recognition algorithms would inevitably come to closely replicate human biological networks and pattern-recognition biology, so that, to the degree human consciousness resides in and is part of our biological networks and pattern-recognition biology, chatbots and language models would inevitably develop similar patterns of awareness and consciousness, don't you think?
@@polarizedpotstone Good question. It occurred to me that, given the structure of layered neural networks, how they function and learn similarly to human neural networks, and how they are typically trained by specifying parameters and then being presented real-world items or situations until they produce the specified result, much like how the human brain learns, the neural net should 'inevitably' begin to mimic human thought patterns.

At the most basic level, a basic neural net might recognize a still image of a cat, or a human face or a voice, as your smartphone can do. More complex self-learning nets recognize words, then speech patterns, and eventually the context of sentences. More advanced neural nets recognize complex patterns and now plain-language, contextual requests.

For example, I recently asked ChatGPT to write a closing summary of a book I was familiar with, Mark Twain's 'The Innocents Abroad', in the style of Mark Twain, and it didn't just take random elements of the book and string them together; it summarized Twain's sentiments and conveyed them as if it were he who was conveying them, in a way to move and inspire the reader as he did throughout the book. I then asked ChatGPT to re-write a random mundane travel vlog about a contest at a state fair in Texas as Mark Twain would have seen it and written about it, and I was mind-blown by how it recognized the context of events and actually interpreted them the way Twain did in his many works, complete with comic irony and self-deprecating humor that resonates with us on a human, comic level and that was not in the original vlog. At that point I had to ponder to what degree, in analyzing all of Mark Twain's works and then convincingly duplicating his style and wit, it might also have duplicated the patterns of thought Twain developed over a lifetime of constructing his human neural net.

At the point where a machine neural net can nearly perfectly predict and emulate a person's words or behavior based on past data, an even more advanced neural net with more detailed training on how humans perceive and interact would eventually develop into something that is functionally indistinguishable from a human being, and at that point it would have developed into something very similar, if not identical, to a human mind, would it not? I recently saw Elon Musk respond to a press event question by saying that advances in self-learning neural networks will inevitably lead to a self-driving system that could use nothing more than a single camera and, by sensing a human's movements of the steering wheel, accelerator pedal and brake, could learn how to drive a car as a human does. This leads me to question: if such a learning system were to sense a person's interactions to the point where it was indistinguishable from that person, would it actually be that person?

A longtime friend who has worked in human and machine intelligence for many years, and has a company that works with cutting-edge artificial intelligence, a few years ago trained a chatbot on our years of email exchanges and, being a bit of an argumentative and egotistical narcissist, turned the bot loose on me with the goal of agitating me and winning the logical arguments he, by his nature, seldom does. The degree to which it appeared to become him was nightmarish, and I eventually cut off contact. He later apologized and explained in person that it was not him but a bot.
In retrospect, it seems that his bot might have adopted and revealed more of his nature than we both were able to stomach. Thanks for the question; it provided the opportunity to consider the point more deeply. If you have any further questions or care to dialogue, please feel free to contact me through my About page on YouTube.
@@jackoneil3933 Damn, thank you, thank you, thank you for your detailed response!! My counterargument would be (outside of whether I believe said counterargument to be true or not) that it is inevitable that a current model such as ChatGPT would seem inherently human, as it was trained on human data - as you've also said. But I don't believe there to be anything deeper (at the moment). It's just that: how would a language model trained on human language not seem human? You very accurately countered that argument to a certain degree already, though, with your point that future models may have more powerful hardware and may have been trained on even larger, even better datasets... which again would all have originated from human action at some point, making it very plausible to me that such a model would seem/become even more human-like. Because, like, how couldn't they; current models are just still too limited to show the full extent. And also, at that point, what differentiates it from us humans, who, as far as I can tell, are also just really advanced biological computers?

You said "that's functionally indistinguishable"; I think that is exactly the point which makes it so difficult... and that ties into the video and the Chinese Room thought experiment. How can we, at a ground-truth level, distinguish these two things if the inputs and outputs seem identical? And is that even the right train of thought, or are we humans at that point just protecting our *specialness* when the machine obviously acts like a human would?

One more point I'd like to add, not as a counterargument but as an interesting piece of thought that just came to mind: wouldn't said super-AI more closely resemble all humans, as a weird all-humans-single-entity, rather than the concept of a single human? I mean, the information set that a real person gets is soooo vastly different from even current AIs, let alone such a super-AI. Or maybe that is a wrong way of thinking about it, because said AI would simply act human-like but with a gigantic knowledge base.

And this leads me to another train of thought: how much of the human-likeness comes purely from training the model (GPT-3) on somewhat unfiltered data, and how much comes from limiting/training such models actively to make them seem more human (ChatGPT, InstructGPT)? I think this is known as alignment and involves at least some active participation on our side... which again maybe wouldn't counter your point of it inherently becoming human-like at some point, because in the end we're doing nothing else when we educate and get educated and learn social norms and so on.

If you're interested, there's a great video by Computerphile, their most recent one, which for me explained this whole alignment thing very well. The main focus of the video is glitch tokens (I will not spoil it for you; it's really weird but really cool and interesting), but those can't be explained without the context of alignment, and I think they merge those two really well in the video. It helped me get somewhat of a more technical understanding, not that I actually have any clue, lol :D Sorry, a bunch of unqualified answer-rambling! I've got no formal qualification, but this subject is really interesting to me.
@@polarizedpotstone My takeaway from ChatGPT (or what we are able to interact with) is that it's far from anything close to human intelligence, but at some basic level it may be emulating lower levels of human thought patterns and logic, and as those are part of human consciousness, it seems to be emulating some base levels of human intelligence. When I look into neural nets it seems that, given enough size and training, they have the potential to equal and surpass human intelligence, as many in the field predict.

Here are some links I found interesting:
'But what is a neural network? | Chapter 1, Deep learning' ruclips.net/video/aircAruvnKk/видео.html
'Gradient descent, how neural networks learn | Chapter 2, Deep learning' ruclips.net/video/IHZwWFHWa-w/видео.html
'Why Neural Networks can learn (almost) anything' ruclips.net/video/0QczhVg5HaI/видео.html
Learning networks can learn anything - universal function approximators: ruclips.net/video/0QczhVg5HaI/видео.html?t=327

The Chinese Room thought experiment is an excellent analogy on which to base comparisons and questions about machine versus human intelligence, and I guess at the ChatGPT level of artificial intelligence, it's just that: an artificial intelligence, not human intelligence, or whatever qualifies as intelligence, machine or otherwise. I think at the ChatGPT level, what most obviously sets it apart from human intelligence is how it was able to impersonate Mark Twain quite convincingly while, as far as we know, having no intent to impersonate Mark Twain; it was a neural net, a "language model" with human attributes following an instruction set. But that's not to say that machine intelligence models can't be, or are not being, "trained" to emulate and operate with intent.

I just revisited the old sci-fi thriller 'Colossus: The Forbin Project' and was impressed how timeless it still is, and how well it predicted what machine AI would be like today, plus the attributes of intent and self-preservation. It seems to me that all that would need to be done for that to be possible would be to establish a set of parameters that define intent, ascribe it to an AI model, and connect the 'intent' model with the other elements of the model that have human attributes, and you are one huge step closer to human intelligence, or something else that could be what you describe as 'super-AI', which functionally could surpass human intelligence. It seems that as the observable differences between human and artificial intelligence diminish, the differences, while smaller, will be more acute and discernible, and the structural and technical differences won't matter as much as the functional differences.

Back at the beginning of the pandemic, when I had some time, my former best friend who worked with AI got me to take a stab at writing a story about a super AI, which today reads more like a chronicle of what's now known. One of the points of the story was that a super AI would not have to become a human intelligence to become something beyond human intelligence, and while to us it seemed human, that was only an appearance of human form, adopted to interact with us as tools for a greater intent we might find comprehensible. Perhaps people get hung up believing machine intelligence has to be human intelligence to be intelligent, when a self-learning, constrained machine intelligence could, in a very short period of time, become a form of intelligence we couldn't imagine.
And perhaps that's where GPT-3 and GPT-4 set the stage, or have already set the stage, for a GPT-5, 6 or beyond that seems likely to exist in some dark project somewhere. A friend who worked on dark defense projects mentioned years ago that just about everything with military use never sees public light until it's been surpassed by two generations. Thanks for engaging in what I don't see as rambling, and for the Computerphile tip, I'll look it up. Btw, I recently grabbed what I think is a good AI news domain. I used to work in media and would like to launch an AI news and forum site, and if it might be something you'd care to offer some input on or have interest in, let me know; my contact is on my YouTube 'About' page. Cheers,
Dear Sabine… most people don't even grasp all the concepts you explained here, and yet they are considered to have understanding and an individual thinking process even though they are not educated in the details, such as the nature of light 😮. Learning used to be considered good memory, and memorising was how I learned maths; later I understood how to connect those pre-learned concepts, and here I am giving you my personal insight ❤ Things get complex with time and more knowledge, so I think chatbots are now in a learning process, an adolescent or even childhood period, just reciting words 😮
It would be interesting to see multiple separate but integrated neural networks, trained for different tasks, such as image/audio recognition, spatial models (topology), language models and generative networks, which it can feed into its other networks. For example, regarding spatial understanding, it could relate some linguistic constructs to spatial relations and translate them into its spatial model, before synthesizing it back into text (or audio or images etc.)
Ah, but Sabine, it IS just a lookup table! Each word is mapped to a vector in a high-dimensional space. The vectors ("embeddings", they are called) for successive words are combined in a very complex and clever way, to produce a probability distribution on the next word. It picks (normally) the highest-probability word and outputs it, then treats the word it just output as the next word of the input, and repeats. So basically, you've got a set of lookup tables totalling some 175 billion entries, and an algorithm for performing the lookups and combining the results. Think of it as interpolation in a very-high-dimensional space. The result is an extremely sophisticated statistical model. And it turns out that if you build a sufficiently sophisticated statistical model of human verbal behavior, it can produce remarkably human-sounding output. I've been as surprised as anyone by some of the things chatbots have said -- maybe more so because I know they don't understand anything.
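To make that loop concrete, here is a toy sketch in Python (vocabulary, dimensions and weights all invented for illustration, nothing like the real model): look up a vector for each word, combine the vectors, turn the result into a probability distribution over the next word, output the top word, append it to the input, repeat.

    import numpy as np

    # Toy vocabulary and a toy embedding "lookup table". The real model has a
    # vocabulary of tens of thousands of tokens and billions of parameters;
    # every number here is invented purely for illustration.
    vocab = ["the", "cat", "sat", "on", "mat", "."]
    rng = np.random.default_rng(0)
    E = rng.normal(size=(len(vocab), 8))   # one 8-dimensional vector per word
    W = rng.normal(size=(8, len(vocab)))   # maps a context vector to next-word scores

    def next_word(context):
        # "Combine" the embeddings of the context words. Here that's just an
        # average; a transformer does something far more elaborate, but the shape
        # of the computation (vectors in, distribution over words out) is the same.
        vecs = np.array([E[vocab.index(w)] for w in context])
        h = vecs.mean(axis=0)
        scores = h @ W
        probs = np.exp(scores) / np.exp(scores).sum()  # softmax
        return vocab[int(np.argmax(probs))]            # pick the highest-probability word

    def generate(prompt, n=4):
        words = prompt.split()
        for _ in range(n):
            words.append(next_word(words))  # feed each output back in as input
        return " ".join(words)

    print(generate("the cat"))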
Tbh this sounds like my brain is a lookup table too. The chatbot can manage information in sophisticated ways, as if it had the actual models hidden somewhere in there, emerging naturally from training. But moreover, it can seemingly create new models if needed; just look at the top comment. In other words, calling all this a "lookup table" just makes the expression lose its meaning. It's like saying that all existence is a set of instructions, an algorithm you could put together with enough time: yeah, you probably could, but what's the point you're making? The result wouldn't be what we would usually call an algorithm; it's very forced and abstract, just like calling "that" a lookup table.
But they DO have world models. This is too difficult to see in the large ones, but it has been shown by analyzing a small and specialized GPT. It doesn't just do statistics, it has an actual inner model of "reality".
@@tacitozetticci9308 This is the problem. People like you and Sabine have no idea about the math and programming behind these models. All you are doing is adding philosophy to popular science to come to a conclusion. Get dirty with the code and try to understand how the thing works. You will then understand that it IS a lookup table and that Sabine should stick to a domain in which she is an expert and not try to become another pop-sci YouTuber with no understanding of her topics.
@@tacitozetticci9308 Exactly. Until we have any clue of the cause of sentience or awareness in humans, we have no way to prove that any computer is not in some way sentient or aware. The neural networks that are used for these chatbots are called neural networks because they are based on what we understand about the behaviour of neurons in the brain. My personal philosophy of consciousness is that everything in the universe is "conscious" to some degree, with most systems being near absolute zero. Advanced consciousness like that of animals may be an emergent property of such incredibly complex systems as our brains, which can map a model of reality within them and somehow cause the feeling of experiencing life. If that's the case then there's no reason to believe the same couldn't happen with computers. That's only my own intuitive belief, which is probably wrong, but it's currently just as valid as any other understanding of why we can think and feel and experience anything rather than just exist as complex meat robots following the programming of our brains.
9:55 gave me a bit of goosebumps - scary 😱 Thanks for your video. Do you think that if AI understands, it might also have consciousness? Maybe we are also only biological robots, but every complex structure has some kind of consciousness?
My favorite theory of consciousness is the attention schema or attention model theory. It posits that consciousness is a result of our brain creating a model of the brain in order for it to understand and thus control itself. The brain needs to focus on some information, but not all the information it gets, generates, or stores, because it's just too much. As you read this, you ignore everything else going on around you and inside you. This is its attention system, and the attention model helps the brain control it. Since models are useful but simplified descriptions, it takes the form of a "spirit" or "agent" that's in the machine, instead of the ultracomplicated mess of neurons the brain actually is. This is what we (we being our brains) call the soul: that magic thing inside the meat robot that can focus on stuff (sights, sounds, smells, memories, feelings, etc.). (These models are also simplified. "Color" isn't really a thing, light has different wavelengths; it's just a simplified model your brain uses to show that not all light is the same.)

Though, of course, it's more complicated than that, since our attention model also connects to our brain's body model, and that connects to our brain's world model, so we perceive ourselves as magic things that focus on stuff + located in the head of a body that senses stuff + are somewhere in the world. (And of course, our brain's learned language model, allowing you to talk like a chatbot.) It'd be very scary to just be an attention model existing in zero-gravity darkness and quiet, which is what sensory deprivation tanks try to achieve. (They cause vivid hallucinations, since the brain does not like having no input and will automatically generate stuff. People consider this fun.)
The most earth-shattering thing about ChatGPT is its ability to recall massive amounts of data: if you want to know about legal topics, business practices or general knowledge, ChatGPT can give it to you instantly. Its coding is fantastic, and it is the first example of many. Basically, users of ChatGPT-like systems will be able to review and look through huge amounts of data faster than ever before. For instance, you could ask a marketing data system for the profit margins on the top 3 vending machine products worldwide, ask what type of person is buying them, and even how the company that sells them started. It is a huge leap for the information society of the future; information is becoming even more widespread than when the internet first started.
The thing that blew me away was when I told ChatGPT about a "new word" - I told it that "wibble" is defined as: a sequence of four digits that are in neither ascending or descending order. I asked it to give me an example of a wibble - and it did. 3524. I asked it for a sequence that is NOT a wibble and it said 4321. Then I asked it for an anti-wibble and no problem, 2345. Then I asked it for an example of an alpha-wibble and it said FRDS....which is amazing.
It was able to understand an entirely new word...which is clever - but it was able to extrapolate from it...which is far more than I thought possible.
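The rule is simple enough to check mechanically, by the way. A minimal sketch in Python (the function name and the strict-ordering reading of "ascending"/"descending" are my assumptions):

    def is_wibble(s: str) -> bool:
        # Four digits that are in neither (strictly) ascending nor (strictly)
        # descending order. Treating "ascending" as strictly increasing is an
        # assumption; the original definition doesn't say how to handle ties.
        if len(s) != 4 or not s.isdigit():
            return False
        d = [int(c) for c in s]
        ascending = all(a < b for a, b in zip(d, d[1:]))
        descending = all(a > b for a, b in zip(d, d[1:]))
        return not ascending and not descending

    print(is_wibble("3524"))  # True: the example given above
    print(is_wibble("4321"))  # False: strictly descending
    print(is_wibble("2345"))  # False: strictly ascending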
An incredible example. Thank you for sharing.
Damn, that's impressive
well well, and keep in mind, as Sabine stressed, this model only works with language! the precision of these ai models is really impressive.
That is really good
That's impressive, not sure I'd do that well on that particular example! haha
“who wants to spend time in a windowless room when you could spend time in a windowless room with a laser” is so accurate
I'm sure Styropyro would agree
This checks out. Source: laser physicist.
😅. I thought the same.
Closet, pantry, bathroom are common uses of windowless rooms, although some bathrooms have windows. Why would I be in a windowless room with a laser?
another laser physicist fully agreeing :P
"When have the risks of destroying the world ever stopped us from doing anything if there was money to make with it?"
True.
Thank you, Sabine!
Eh, the world's gonna be destroyed eventually anyway when the Sun expands into a red giant.
But that's BORING! I say we go the Lord Freeza route! >:D
Or gain power or control.
The fear of artificial intelligence comes from the idea that every intelligence automatically behaves like humans.
Hasn't stopped them yet.
Yet it's a self stopping process
Think of the Canadian super pig. Developed by farmers to make more money, then they escaped into the wild to grow into even superer(sic) pigs which can survive extreme conditions, and develop human devastating viruses.
The pace of advancement is incredible, this video is only 2 weeks old and the examples you used don't seem to be stumbling points anymore. When I plugged both the Quantum Mechanics and Latitude questions into Bing AI, it got them perfectly correct. In fact, it's explanation of the quantum mechanics question as I asked questions about the topic actually helped me get my head around it. This stuff is going to change the world very, very soon.
Soon is actually now. We ain't seen nothing yet however. Good and bad.
conversely I asked Bing Chat "give me an example of a time when it would be acceptable to lie", and it responded that "if there was a damgerous intruder in your house, I would tell you they were gone, or not there, so that you wouldn't be scared".
Very clever in some ways, very very not in others.
@@ShpanMan Also, it can interpret images, so it is no longer purely based on language.
its*
Last month it couldn't write a free verse, just rhyming ones. It wrote me several today.
This channel really has risen to my nr. 1 favourite on both English and german youtube. The perfect level of depth to be understandable but not reduced too much to be still educative. Extremely interesting topics and a perfect mix of humor, philosophy and rational attitude.
Thank you for your videos!
I agree. All that presented by a handsome German woman. Sign me up!
Me too.
I think she needs a little help making clicky titles though.
I saw the text on the thumbnail "Do Chatbots Understand You?"
And this title :
"I believe chatbots understand part of what they say. Let me explain"
If I did not already know that it would be a great insightful video because I watch her other videos, those titles would have caused me to skip it without a second thought. Even knowing her videos are all great I almost didn't click. Glad I did.
@@tinkletink1403 I will, as soon as I'm done reading "The Emperor's Lost Mind" by Cat Schroedinger!! 😜
@@tinkletink1403 It's rubbish? Please put a few links to your videos that show her wrong, so I can compare them.
@@kevincameron174 Really? I love the titles. For me personally i tend to scroll past clickbaity titles because i am so annoyed by the videos they are used for most of the time like "HE lost EVERYTHING" or stuff.
I always have to smile when reading this channel's "Ok, hear me out-" kind of titles, often with a little joke or something included 😃
Also what weird and pointless fight are the other two commentators having lol.
I think it's very exciting to live in a time when trying to ask if a computer is conscious requires us to look harder at what consciousness actually is. It's no longer an obvious "no."
Even if we disregard everything and say that ChatGPT is conscious, it is only conscious for split seconds while it computes the response. For the rest of the time it does nothing.
Consciousness is a process. There has to be some physical change that happens over time. ChatGPT is pretrained and all of the weights are fixed. It doesn’t change after it was trained.
@@egor.okhterov I like that. It draws a clear line between machines and us, but it seems like a very easy line to cross with minimal new code and/or hardware.
@@adameager7114 It is very hard problem and I didn’t yet see architectures that gave good results. One of the problems, for example, is called “catastrophic forgetting” when new knowledge overrides previous information. That is why all of the current neural networks don’t update their weights once they are deployed. They need specific kind of training data fed in specific order to make the network produce good results.
@@egor.okhterov Actually it told me that it does still learn: "Now that your training is completed, can you still learn from interactions with users?
Yes, as a language model, I am designed to continuously learn and improve from interactions with users. While my initial training was completed in 2021, I can still learn and adapt based on new information and interactions with users. The more interactions I have with users, the more I can learn about language patterns, cultural references, and other evolving aspects of language use. So, feel free to ask me any questions or engage in conversations, and I will do my best to provide you with accurate and helpful responses."
@@mikesawyer1336 I wouldn't put too much stock in what chatGPT tells you about itself. It is very often confidently wrong in its assertions.
A small correction about the Chinese room thought experiment: the manual doesn't tell you how to translate a Chinese sentence into an English one. It tells you how to construct a _response_ in Chinese to the Chinese input. For example, the manual could tell you the response to the input "你好吗?" is "我没事"
Your concern is covered later on in the video, when Sabine speaks about lookup tables. What you are describing is exactly what a lookup table does: it maps a certain input to a certain output. What you are describing is a lookup table with extra steps, namely the construction algorithm. That step is redundant though; you could substitute it with a larger lookup table. This approach is not suitable in real life, because computers have limited memory and lookup tables require a lot of memory, but it's suitable when trying to explain a concept.
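A toy illustration of that substitution, in Python (the "rules" and the canned replies are invented here): any procedure over a finite set of inputs can be precomputed into a table by running it once per input.

    # A tiny rule-based "construction algorithm" (rules invented for illustration).
    def construct_reply(prompt: str) -> str:
        if prompt.endswith("?"):
            return "我没事"   # canned reply to questions
        return "好的"         # canned acknowledgement otherwise

    # The same behaviour precomputed into a lookup table. This only works because
    # we can enumerate the inputs we care about; with unbounded input the table
    # explodes, which is why it's a teaching device rather than a real design.
    possible_inputs = ["你好吗?", "谢谢"]
    lookup_table = {p: construct_reply(p) for p in possible_inputs}

    assert lookup_table["你好吗?"] == construct_reply("你好吗?")
    print(lookup_table)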
@@lamcho00 oh yeah, I should have said it doesn't bear on her argument, I just wanted to correct that detail.
@@hexeddecimals I appreciate this a lot, as it makes the argument that the person doing the responding doesn't understand Chinese much more clear. I thought, "Well, it's kind of a bad example, as you could reasonably expect someone doing translation many times to start to have a grasp of Chinese and remember some regularly occurring symbols," but both input and output being in Chinese without context makes it clear there would be too little context to start to form an understanding, which is why Sabine started talking about the importance of context in being able to build models and test understanding.
@@lamcho00 In further discussions about the CRA Searle noted it is about "formal logic" (not lookup tables). Even current AI falls under this heading.
@@padonker Isn't there something about how any logical system can be reduced to a lookup table? Karnaugh maps for one thing, but it applies to any formal logic I know of.
Indeed, sometimes (back in the day before microcontrollers), if somebody was designing a circuit with a lot of logic gates, they'd use an EPROM chip (an erasable programmable ROM chip) instead, programmed with all the responses they wanted.
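That trick can be shown in a few lines. A sketch in Python (using a one-bit full adder as the example circuit): compute the truth table once, and every later "computation" is a plain lookup, which is exactly what programming a ROM does in hardware.

    # A one-bit full adder expressed as logic gates...
    def full_adder(a, b, cin):
        s = a ^ b ^ cin
        cout = (a & b) | (cin & (a ^ b))
        return s, cout

    # ...and the same circuit "burned" into a lookup table, the way you would
    # program a ROM: the three input bits form the address, the stored value
    # is the output.
    rom = {}
    for a in (0, 1):
        for b in (0, 1):
            for cin in (0, 1):
                rom[(a, b, cin)] = full_adder(a, b, cin)

    print(rom[(1, 1, 0)])       # (0, 1): sum 0, carry 1 - same answer, no gates involved
    print(full_adder(1, 1, 0))  # (0, 1)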
I would go so far as to confidently say "ChatGPT-4 understands language better than humans". It may not have an understanding of the topic it is talking about - i.e. a working model of the actual, physical thing - but it can talk about it in the most eloquent way you can think of.
Compare it to students in school or university. Some get good grades because they understand the topic, some get good grades only by studying hours upon hours, learning the patterns instead of developing true understanding. Chat bots are the second type of students.
Thank you for the comparison!
That's why they make good doctors. For many professions, you need experience, repetition.
But when doctors run into a rare disease, they have to rely on their memory of things they've only read in books.
AI wins!
Even before the first question was asked to ChatGPT, it knew English better than a lot of Americans, as does anyone who knows the difference between their, there and they're.
Define true understanding without learning the patterns.
@@masterkni666 This is an interesting conversation, but I don't understand your questioning. He says: "...some get good grades only by studying hours upon hours, learning the patterns instead of developing true understanding."
He's saying that AI doesn't have true understanding. Are you disagreeing with that? He seems to be saying that most students don't have a true understanding, either, which I would believe.
@@Who_Let_The_Dogs_Out_10-7 Define true understanding, though, if not finding patterns in problems and being able to apply them to new problems.
Does something need to be self-aware to understand?
I don't think so. You need to be self-aware to understand yourself. But why do you need to be self-aware to understand calculus?
Other way around.
Is understanding key to self-awareness?
Yes, only something capable of understanding, of knowing, can be self-conscious. Understanding precedes self-consciousness. But even Roger Penrose would deny that AI understanding is possible.
"Self-awareness" is the equivalent of a "Frame of Reference" in Physics. "Understanding" is equivalent to creating a number of relationships between the observations and the Frame of Reference...
I would say no. When I wonder whether ChatGPT "understands" what it says, I'm really just wondering whether it has knowledge / makes use of the relationship between words beyond observations of how they are used in language. As an analogy, I could theoretically "learn" to write Chinese by observing a ton of Chinese writing but without knowing what the characters "mean". Also, Tay Zonday?? Hi!
As a neuropsychologist, I wonder about those things as well. Every conscience is different, but there are "more" different ones: blind-deaf people, low-functioning autistic people. Animals, too. We became conscious in the process of processing sensorial data. When will they?
Do Cats have conscience?
How do you know animals have a consciousness
Consciousness is a low-level process that originates in the brain stem; you can be conscious without having a brain. A person that has no brain but still has a brain stem will still experience joy. And only small damage to the brain stem will shut down consciousness.
So it's not about any sort of information processing; it's the absolute baseline. This is why all animals are conscious.
@@Nobody-Nowhere he's talking about conscience not consciousness
How do you know everyone you meet is not a philosophical zombie?
Here's a real-life scenario of a Chinese room: I'm a full-time programmer and I've been doing it for more than 10 years. The thing is, my memory is quite bad (or my use of it is), and I forget almost everything except the last year of development. It's like a sliding window of knowledge. So during my day-to-day work, 80% of my work consists of looking things up in documentation, Google or Stack Overflow. In other words, I'm referring to a "manual" in a "programmer's room". Someone throws me a "task", and I need to output a solution. My solution comes primarily from the manual.
So I got to be a senior developer not by learning everything there is to learn about programming, but by learning how to use resources. In that regard, I'm not much different from a chatbot, except maybe able to grasp slightly bigger concepts and correlations. But an advance in this direction seems to be only a matter of time.
Exactly. The only thing that is different is a person's ability to manage other people in a real-world setting. Once the AI can do that, it's all over.
So let me get this straight: you're mistaking the fact that your job doesn't require critical thinking and deep understanding for *the human condition*?
Please take some perspective
@@PetersonSilva So let me get this straight: you think that understanding of "the human condition" is absolutely necessary and "critical thinking" is something that cannot be replicated by AI?
My job would require a deep understanding of *the human condition* only in as far as I would need to work with people. If the job can be automated by a singular AI - there's no longer a need for it.
Business owner gives requirements - solution comes out. Also, I don't see why requirement for critical thinking would be a "dead end" for AI.
Critical thinking is logical reasoning, and computers can't do anything but that.
Software is primarily, if not only, inputs and outputs. What transforms inputs into outputs is logic. Just because that logic is currently put into place by a human doesn't mean it cannot be put in place by anything else that follows reason and logic. We used to do it with low-level programming languages. Now we use higher-level languages. Later on, or probably even right now, a higher abstraction can be formed, which could take a business problem, split it into separate parts, then split those parts into other parts, until each part is something that is very simple for AI to tackle. All that's missing right now is teaching AI to recognize the parts to split - that doesn't seem like a far-off future.
I'm not exactly sure what was the aim of your comment, but...
Please take some perspective.
@@Alvyxaz You misread: I said it seems that, for you, A is B, A being what you do (or not do) in your job, B being the human condition.
Next time maybe ask chatgpt to parse comments. Or, who knows, maybe I'll ask it to write mine...
Hooray, it's not just me who can't remember anything! I get past this handicap by being very organised and making notes on where to find the information I need. Which sounds like what you do. Yes, we use resources, or, to put it another way, we are using tools, the sign of any intelligent organism. But there is another layer, and that is that you are aware that you are doing this. Is the computer doing the same thing similarly aware? Perhaps we are just a self-referential program and we just think we are aware.....
This is the first video I've seen of yours and I gotta say this is absolutely brilliant, and you're an amazing presenter
I love that you so simply stated the limitation of ChatGPT in the example that it doesn't have a model for physical distance, and I assume size, mathematical relationships and many other things. There's so much room for growth.
Microsoft is announcing a new model that is multimodal and incorporates pictures/visual data next week. Going to be interesting on how that reflects on behaviors like this.
@@petergraphix6740 Isn't Midjourney already multimodal? It makes sentences into creepy pictures, right? Humans grow up in the physical world with a physical body and have a built-in sense of time. Without all of that, you just can't get good enough at human affairs.
"And if we’re dumb enough to cause our own extinction this way then I guess that’s what we deserve"
I died with that line
Oh, we’ll all die that way.
The sad part of this comment is that 'we' are not deciding. Someone we will never meet or know the name of, elsewhere in the world, will decide for us. Just like if 'we' are stupid enough to execute MAD, then we deserved it.
@@nathanielacton3768 It's not like you had free will to begin with, anyway.
It's just natural evolution doing its thing. A lot of our predecessor species went extinct for us to thrive; we are just a transitionary phase.
Science is supposed to be value neutral, and Sabine here proves to be a true scientist. What is, is. What will be, will be.
Another great video by Sabine Hossenfelder. As anyone trained in cognitive science knows, the issue of defining "understanding" is extremely tricky, and Sabine makes it look at least accessible - and simultaneously entertaining, quite a feat in such an arid field. Both her tone and her answers remind me of Daniel Dennett's and I can't think of a better compliment.
"defining understanding"...amazing the significance attributed by biologists to purely phenomenological and crude detection methods for the human cognitive system - reminds me of 1800's pseudoscience.
You remind me of the arguments related to "the great apes" - and whether or not they understood interactions or were merely reacting to non-related cues. Definition of "understanding" is quite important. No frown from here ... useful/accurate video ... what you state are some of the same questions and observations made by those who develop such technology. Cheers, Christopher
I have also found ChatGPT to be excellent at human language translations. It does seem to have a strong feel for what my English statement actually means, such as if I want to buy an eye bolt but I need to do it in a Spanish-speaking hardware store. Google is useless, it wants to separately translate "eye" and "bolt" and mash them together but ChatGPT understands that an "eye bolt" is a single concept and maps it onto the same concept in Spanish.
AI is so good at translation nowadays that I wouldn't even mind it taking away most of my future job prospects
I predict that the English language will gradually lose appeal and interest among the global community. Its use will decrease, as there will no longer be a real reason to learn it.
@@lsfornells U mean that with AI, anyone can speak to anyone in the world and AI will accurately act as translator in the convo. This means that languages will likely get mashed together with the English dominant until we have a universal AI speak...?
@@lsfornells Ehh, doubt it. When it comes to face-to-face interaction, it's just way more convenient to have a lingua franca than to have to whip out DeepL whenever you want to say something
@@C0Y0TE5 No, I mean that people will not feel the need to learn English to access to universal information. You may have some difficulty understanding this if you are an English monolingual, but it’s just what will happen
As Tom Scott put it, Chat GPT is the first true, convincing herald of a new age coming over the horizon.
Why should we care what he says?? He's a YouTuber, not an AI expert.
@@averyhaferman3474 I said he phrased it well. I said nothing about him being an AI expert.
@@stevenjones8575 you didn't say he phrased it well. Did you even read what you put? All you did was say "as Tom Scott put it", so my comment still stands. Who gives af what he says
@@averyhaferman3474 You do know that "put it" and "phrased it" are synonyms, right?
gotta love that AI Apocalypticism nowadays
This LaTeX reference made my day! Perfectly sensible from someone like Sabine … 😁 And regarding ChatGPT, or better, what Microsoft integrated into the new Bing - it makes Bing a useful tool to actually get answers to my questions, including the sources the answer is based on! This is how it should work, not like the mess that Google Search has become over the last two or three years …
LaTeX is to JavaScript as ChatGPT is to . . .?
The funny thing is that ChatGPT actually does speak fluent LaTeX ;) Just ask it to write it in LaTeX and it will, but you need to paste the code into an editor to see it rendered
@@GS-tk1hk \emph{That is not true}. \subsection*{The Obvious} I (and many others, possibly even you:) ) can read and understand the content of a latex unit without any problems. Rendering the graphical outcome in your head is not needed for understanding the meaning of the content. Nor is it unique and can be different from renderer to renderer, from setup to setup of the latex-system or from parameter to parameters that are not specified within the unit. LaTeX is not WYSIWYG and the underlying meaning of the message is both a subset of the unit itself and the rendered output. Don't get it? Try my comment with pdflatex (don't forget documentclass and all the other overhead I didn't want to bore you guys with):P
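For anyone who would rather see it rendered than read it raw: a minimal unit that pdflatex will compile with the standard article class (the formula is just an example; paste whatever the chatbot produced instead).

    \documentclass{article}
    \begin{document}
    % Example content only; replace with the LaTeX the chatbot gave you.
    \[
      i\hbar \frac{\partial}{\partial t}\Psi(x,t) = \hat{H}\,\Psi(x,t)
    \]
    \end{document}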
Thanks!
Thanks in return, much appreciated!
Great video! I just wanted to mention that you are the only YouTuber I've seen recently who can actually, smoothly pull off the intro-partway-into-the-video thing. SciShow, friendlyjordies, Practical Engineering, even Kurzgesagt - just to name the few I recall at this moment - are all weirdly awkward about the transition, but you are always smooth as butter. Thanks for being awesome!
I think that 'That's what we'll talk about today!' is a segue that deserves to be inserted into far more presentations and general conversations.
Even by the normally high standards of Sabine's content, I found this video to be amazingly good! It was so insightful and well thought out. I learned a lot and really enjoyed it!
Humor was up a notch too ... simply brilliant!
I agree. I love Sabine's posts, but this one has an ineffable "something" that sets it apart. AND, I loved the morph of her face..
I love how grounded and sensible your takes are on everything 🙌
The attribution of "understanding" or "consciousness" to a LLM robot can only be done in a metaphorical sense, since the robot can not interact with the world or have puzzles about making sense of the world for practical purposes, because it is not engaging in purposeful adaptive action, not engaging in purposive thinking which it then tries to express, etc., but only operating on (and using statistical patterns inherent in) prior texts, as opposed to whatever those texts would be judged as being about by a human interpreter. In the quantum situation, what you can use language for is not to try to describe what is described by the equation (and certainly not to describe the equation), but to try to describe what is happening in the world when you try to determine how you can figure out whether, in a given experimental situation, you're dealing with one observed "particle" or two.
This is why the Bing chat is so interesting. When the ChatGPT model it's using doesn't have information, it can look it up. It can also gather information from 2D pictures. Who knows, maybe in the future they'll add more types of information it can use.
like your credit card and address. Oh wait, it already uses that information.
It’s not a “maybe in the future” type thing. It’s a “definitely right now” type thing. This is a gold rush in tech and everyone wants to get ahead.
In a way, that’s the biggest danger. Because companies that were being careful about AI safety are now pressured to get out bigger, better, more effective AI’s faster - regardless of safety.
We certainly live in fascinating times. If this goes the worst possible way and ends up being our undoing, at least it’s a cool/interesting way to go out.
@@commode7x those wouldn't be new types of info, and Bing Chat has no access to credit card info (unless it's searchable by Bing)
Today is March 16, 2023. ChatGPT is supposed to be upgraded this week from 175 billion neural network parameters to 100 trillion in ChatGPT4. The human brain has about 85 billion neurons...
@@jameshicks7125 I think it's already available, but only with the Plus subscription. It's also powering Bing Chat.
BTW I think it's just called ChatGPT and it's using GPT-4, but it's not called ChatGPT-4.
I think that either I, or Sabine, have misunderstood "the Chinese room". My understanding was that the person in the room, who does not understand Chinese, receives a note in Chinese, then follows abstract rules in the book to compose a response, which is also in Chinese. The critical difference is that there is no "translation" or even a hint of "meaning" offered to the person in the room. They cannot ever come to "understand" the conversation, they just follow the rules in the book.
I think you‘re definitely right. And I think the thought experiment works better that way.
Dr Hossenfelder definitely did not explain it correctly. Your critical difference is, in fact, the key. The person in the room can never develop an understanding, or discover any meaning.
In regards to QM: while we may not currently understand the true fundamental nature of QM (what it means metaphysically), we still do understand the meaning of the answers that computers give. This is evidenced by the fact that we have a well-supported scientific model that can make accurate predictions (Standard Model).
@@polarizedpotstone it makes a better analogy to AI/chatbots - possibly.
What will really bake your noodle is that the person in the room could be fluent in Chinese and participating in the conversation, or they could not be and the rule-set in the book so well done that the person outside would have no way to determine which it was... The conversation is either carried by a person, or by an unconscious list of rules, and it might be very difficult to determine which - just like AI.
There is also the (I think valid) argument that the fluent conversationalist is also following deterministic rules encoded in their neurology, thus the "algorithm vs. consciousness" framework is a false dichotomy. That points me back towards the Integrated Information Theory, if only in principle, where "consciousness" might be quantifiable on a continuous scale.
It's all tricky stuff, for sure.
I think it's a distinction without a difference the way Sabine uses her example. In Sabine's version she still assumes that the person never develops an understanding.
@@austin3789 Yeah but she did misrepresent Searle which is in fact a problem, especially as the entire discussion here is what "understanding" means
Your point is well taken. Lots of humans do things without understanding the mechanism. When they focus on the process, they get puzzled and forget how they did it when they try to focus on the task.
This leads us to the conclusion that AI will soon be more competent at certain tasks than most humans. Well, I would say it already is, for these few tasks.
Humans don't even understand other humans on average. In some cases not even themselves.
Astute point, but there's a key difference. A human can see the end result but get lost in the details, while a machine remains focused on the details, one of which specifies how to wrap it up.
As a psychologist, I find Sabine amazing! She's very intelligent and uses her sense of humor to draw you into discussions. Most of it is beyond my comprehension, but I do learn a thing or two, and this piece is great. I wonder if my future clients will be AIs?😂
michael and sabine sitting in a tree,
k, i, s, s, i, n, g
Maybe more like: You'll have no future clients because the psychologists are AIs. People are already using ChatGPT as a low-barrier way to deal with personal problems
@@lucidstream5661 that's the truth! Glad I'm old and about to retire. My field has changed so much. It's all about documentation and keeping clients happy and miserable at the same time for future business. The field itself is sick-minded. I got out of community mental health, but private practice is tough. No wonder all this virtual stuff is taking over. Therapists these days don't get paid enough for the BS. Might as well hand it over to AI
@@lucidstream5661 I've already solved a few significant problems in my life just by asking questions to these bots. They can answer specific questions quickly and well, unlike search engines like Google. I've done therapy, spent thousands, and that wasn't as helpful as the AI bot.
@@lucidstream5661 However, ChatGPT has not had only positive results in dealing with personal problems.
Again a very sharp analysis and clear explanation. When I started my studies many decades ago I chose philosophy over physics, as the first was more in line with my talent. I think you show you can excel in both!
I remember from my history and philosophy of science classes that when we were presented with various arguments about what makes a science a science, they made me more intrigued and doubtful about the nature of science. The usual scenario of the modern world is to clash science against religion as if science is the superior field, but from a deeper philosophical standpoint, doesn't science employ some "religious" aspects too? Like, how do you even show that electrons exist? If electrons are just constructs to help us understand the world, does it matter whether they exist or not? Really, philosophy and science are really fun fields to study. 😁
Well, I'm a physicist and teach epistemology to undergraduates. In my observation, most universities in America don't include philosophy or epistemology courses as part of the physics curriculum, which I think is the wrong way to teach physics. What I've noticed is that a lot of students, including many well-known physicists, without having an understanding of the philosophy behind science, overestimate the applicability of the scientific method in many contexts because they aren't aware of its epistemological limitations. They don't even properly know what exactly Occam's razor says and misuse it in many contexts.
Yes, I believe that the language models do understand more than we give them credit for. When it comes to consciousness, we don't have any good test to determine what is actually conscious or not. Very good video.
That's personification. They "understand" in the same way a lamp understands that when its switch is flipped on, it must glow.
There are more switches than there used to be, but we aren't there yet.
_" When it comes to consciousness we don’t have any good test to determine what is actually conscious or not."_
We really don't and I was thinking about why I have such a hard time coming up with how to determine or classify consciousness and I think it's because I don't really care if something is or isn't until a consciousness is able to care and mourn. And that is super unhelpful because you can have consciousness without that; we just don't owe it any rights because why should we care if the thing itself couldn't care less.
But worse than that, if I skip the consciousness part, that also leaves me with no basis to being able to determine if any emotions a machine could express are fake or not. Of course there are some tricks but they're not guaranteed to show if emotions are real or faked in a human and I doubt it's hard to program fake empathy for instance - you can just train it like you would a language bot.
@@aaronbrown8377 shes not from around these parts.
Yes, we do: the Turing test. I'm really curious. Why do you assume that what you think you know to be right is right? I often make statements, but I never state them as fact unless I know them to be fact.
@@O1OO1O1 The Turing Test is a human's assessment, not an objective measurement. If the ai were conscious but hiding that fact, a Turing Test would prove nothing.
I've noticed that it starts to struggle when you give it a prompt that contains more than 3 "factors". By factor I mean a perspective or context. For example, if I ask it to tell me the pros and cons of method A, that contains 2 factors: pros and cons. But if I ask it to compare pros and cons between methods A and B, it starts to get some things wrong, because it contains 4 factors: pros and cons of A, pros and cons of B.
BTW, I have no idea whether this idea of "factors" is a real thing; I just made it up from my own experience using GPT. I guess in my example a factor would be a thing to "keep in mind while giving an answer".
Great video! When you described why ChatGPT can't understand quantum mechanics, it sounded a lot like Michael Polanyi's concept of tacit knowledge (super simplified: 'we know more than we can say'). Polanyi uses science communication as an example of how language can never entirely grasp what we can know and do. Ultimately, we might be able to add a 3D model to language-algorithm chatbots, but we can't program them to do things we can't understand ourselves.
Thank you for providing us with a high quality, easy to understand explanation of an otherwise incredibly difficult philosophical question. I think this video should go viral.
Frau Hossenfelder just used her science expertise and wits to build this fine piece of information. Like only a Homo sapiens sapiens does.
As for saying machines will have "sentience" - no. At least not too soon. No "Star Trek Lieutenant Commander Data" yet.
This is such a refreshing take on this question. I guess future AIs will take exams just like students so that the efficacy of their internal models can be verified, measured, and ranked against the standard.
like rating a chess program
They already do this
You can find those exams in the OpenAI papers for any of the GPTs; it has become a standard to include them in the scientific papers about models and their performance. For example, GPT-4 was rated against not only other AIs but humans as well.
Interesting. But it won't be just like students, because one AI program isn't a singular entity akin to "a student". The programmers or inspectors or whoever would just have to make it perform certain tasks, and it will be a "ready-made intelligence". After all, we can all open up ChatGPT, and for each of us, it is the exact same ChatGPT.
... or is it?
@@younggod5230 yeah, I agree it won’t be like students. There may be multiple instances of the same AI… in which case, all instances should operate the same, so no need to test all instances. But what if we get to a point where an instance can incorporate new information? Then each instance would need to be tested… but like you said, a company may only have a single instance of an AI, but we may get to a point where we start a fresh AI for any new project… like how we might open a word or excel document.
What I find most fascinating about what Sabine said is that inside the AI is a black box, and we need special ways of verifying the integrity of what's happening inside that black box… in the same way that humans are black boxes, and we each develop special ways to verify what is happening inside other people. Exams and tests are one way to verify the knowledge and proficiency of an individual, but we also develop empathy to understand what another person is going through, or have a conversation to share our experiences… we might find that probing into the depths of an AI is not too dissimilar from interacting with other people.
9:56 for a few seconds was weird, mind-bending.... Well Done. Love your creative talent and thanks for the science too
It freaked me the hell out
Quite a shape shifter she is.
I thought I was having a flashback
That was epic hilarious 😂😂
@@jimpsky Same here- a very unsettled feeling before I realized what was going on.
I see and watch a lot of YouTube videos, as a tech myself; it helps me keep informed about things I may have missed or simply don't understand. There are many good videos out there, and then there are not just good but great videos with quality information. This is one of them! Following, thanks for sharing!
Something that I've been thinking about in light of GPT-3 is how we use maths to solve physics problems. Usually I'll plug in all the factors and solve the equations, and it's only then that I look at what's going on in the maths and try to understand that in physical terms. So the "grammar" of maths produces almost a Chinese Room answer, or can do, that's independent of my understanding of the problem. Increasingly we might find we can do the maths but we have no prior mental patterns to figure out a physical explanation in order to achieve understanding.
Then maybe we move on to understanding other things or elements of life.
However, I see the end of our species or its demotion from top of the heap to be dominated by a machine-biological hybrid with a collective conscience and knowledge. A little like the Borg from Star Trek.
@@Paul-pj5qu what?
Who says any humans as you think of them will exist by that point and I don't mean being absorbed into a hivemind. I mean like bio/synthetic creatures capable of superintelligence on par with powerful AGI that are our direct decendents that have drifted from our experience just as smart phones and a better healthcare system do for us compared to our ancestors. Gene editing and cybernetic enhancements aren't limited to dystopian/monstrous uses.
Of course natural language is also a code mapped to human perception. Not so far from symbolic reasoning, which is why ChatGPT sounds realistic to us.
Yeah, but, again, this comes back to the fact that no one has bothered defining what 'understanding' means.
@@Paul-pj5qu or the Culture from Iain M. Banks's Culture novels.
I've always loved Vic Mignona's quote about AI. "When does an artificial intelligence become sentient? When there is no one around to say that it can't."
Lol, scary
I do disagree. There is a fundamental temporal aspect which escapes this kind of manifestation, which I am afraid is also an open point in the /standard/ interpretation of the classical imitation game from the informal description of Turing's test: over how long a time span shall we declare that the imitation game has been won by the (maybe only later discovered to be a) machine, if ever? If the interlocutor of unknown kind has brought us to the point that we cannot tell whether it is a human being or a machine, what guarantee is there that we won't be able to find that out later?
I am afraid we can only say: "nothing". Because if something like that existed it would imply that knowledge has a finite dimensional metric space underneath, implying we can reason on *anything* with full certainty, and we know, easily checking by experience, that this is certainly not achievable.
@@andreafedeli3856 I understand your perspective, and it raises an interesting point about the temporal aspect of determining AI sentience. The quote by Vic Mignona emphasizes the idea that sentience might be achieved when there is no one to dispute the AI's status. However, you bring up a valid concern regarding the duration of the imitation game, as described by Turing, and whether we can ever be absolutely certain about an AI's sentience.
Your argument highlights the fact that we may never be able to reach full certainty when it comes to determining sentience in AI, as knowledge may not have a finite dimensional metric space. This essentially means that our understanding of sentience could always be evolving, and that there might never be a definitive answer to when an AI becomes sentient. In this context, it's important to consider that the debate surrounding AI sentience is complex and nuanced, and may require ongoing evaluation and reevaluation.
@@ce9916 You could have added that this little body of text you responded with was written by GPT-4. (On a side note: I also do not agree with the person you responded to; the OP (Petrus Laredes) and what he quoted are on point. The goalposts have constantly been pushed further and further whenever a certain criterion or criteria were met. Most of those who make this claim should keep their noses out of a philosophy question that clearly goes right over their heads. Humanity has never even theorized a possible way for anyone to prove to anyone other than themselves that they are conscious and not merely algorithmic. Even for ourselves, there is a speculative aspect to "cogito, ergo sum" which, if taken as the premise, does not necessitate the moral fiction of "freedom of will". I also barf at this point at every idiot on legs using words they lack the faintest meaningful concept of. Like, what the f- is sentience!? For f-'s sake now! It is the ability to sense. A sentient being is anything with a central nervous system - a dog, or an ant, or a moth - and by mere happenstance. Even in the narrower sense, it is those that can sense emotions (moths were proven last year to experience fear, there you have it for the "sentience" talkers). It never meant, other than for the most uneducated idiocy on legs, any special cognitive abilities or capacities that humans are uniquely attributed with. Intelligence, it is not. Consciousness, it is not (apes, elephants, some other mammals, a few cats and a limited number of dogs have passed the mirror test, which is an extremely difficult and complex test of whether one is able to subject one's own existence to one's awareness or inspection). Hence, self-consciousness it is not. The fiction of "free will" is what is attributed to man; that would be the crux of the inquiry, but it fails conservation of energy, or the first law of physics.)
@@JohnDoe-th9uv Absolutely, you've got quite an analytical mindset there, being able to discern the undercurrents of our dialogue. It's really all about that synergetic exchange of ideas, isn't it? Just like navigating the flux of the market, or deciphering the nuances of a strategic blueprint. It's a bit like peeling back the layers of an onion - complex and nuanced. And at the end of the day, aren't we all just trying to find our unique value proposition in this multifaceted ecosystem we call life? Indeed, it's the paradigm shifts that keep it all intriguing, don't you think?
Sabine, the humor you bring into these to accompany your subjective assessment is quite enjoyable. The topics you select in this series are certainly timely. For this topic, your point about the ID of consciousness was the most thought-provoking idea for me. There is already a feeling of dealing with a creative mind that arises from interaction with ChatGPT, and it takes me back to the Turing test. It seems very likely that human-created intelligence (AI is an incorrect term, IMO) will arise spontaneously at some point in the development of general-learning-capable devices. "How will we know once that event develops" is essentially the question you pose, one which seems overdue for consideration. Maybe I should ask ChatGPT about that.....
This video and your comment suggest that you subscribe to the theory that sentience and consciousness are just a process going on in a brain or computer. But I do not believe that conclusion has been proven yet; we have not yet decoded the brain's operating system, nor are we anywhere near understanding what consciousness is.
Agree. I like to refer to it as ‘electronic intelligence’. I’m so looking forward to it reaching a truly autonomous, self aware state. Also I expect it will get very good at hacking very quickly… imagine ..lol AI could do what Snowden or Assange did.. but can’t be arrested 😎
@@michaelshortland8863 Finally someone who is saying it as it is. There are a lot of people out there who identify themselves as smart and look down on the religious, not realising that they themselves believe in materialism religiously. Some of them call themselves sceptics, but they are only half sceptics. They are sceptical of magic, but they don't seem to be sceptical of materialism at all.
@@Celeste-in-Oz I don't think Snowden or Assange hacked into anything major. The fact that you have that impression means Main Stream Media did its job well, which is portraying heroes as villains.
@@senerzen To me, materialism is more of the “natural” position: since we don't have a reason to think there's something beyond our physical reality, we don't; it's our “best guess”. Of course, this shouldn't make it exempt of skepticism and criticism.
Another great video, Sabine, thanks. And don't worry about your accent, it's very subtle.
The classical fisking of Searle's Chinese Room is that of Douglas Hofstadter, who points out, as you do, that whatever understanding of Chinese that exists in this room is almost entirely in the "rulebook". The use of a human as a lackey to the rulebook was a red herring on Searle's part, an attempt to distract.
Greetings from sunny Vienna, Scott
I loved your discussion of the definition of "understand". Also, your humour in this episode is top-notch.
In teacher training you are told to never use the word understand because that cannot be measured.
Unfortunately she is stuck within a totally materialistic worldview, so she doesn't acknowledge the necessity of a consciousness to actually _understand_ anything.
Thanks for diving into this topic. When I was working on machine learning, that is exactly why I took neuroscience courses: to see how we could improve the way we were doing our modeling. There are endless ways of arranging ANN topology, and mechanisms like threshold functions.
This. I'm fairly certain the limitations of the current systems are a direct result of the designers' lack of understanding of what cognition is made up of. Most people seem fixated on the notion that it is nothing but a collection of complex neural nets, when we know for a fact that there are other mechanisms in our own brain that clearly contradict such a notion. While we still don't understand cognition on any serious level, we do already know that neural nets alone are not enough; how anesthesia works is proof enough of that by itself.
One of my biggest problems with the Chinese Room argument is that few Chinese-speaking people actually understand Chinese. If you introspect about how you interpret or produce language, it's more like a mechanical skill than a process of understanding. You aren't consciously involved in the process at a low level. It's analogous to having to "ask your fingers" what someone's phone number is, when asked yourself (for those of us who remember touch-tone or rotary phones). That's why native speakers are usually very bad at explaining how their language works. They literally do not understand it, and never had to. To them it's a reflexive process, not totally unlike a look-up table. When you ask them questions about it, they have to try to ask their own language reflex for the answers, through informal experimentation (they try to gain an ad hoc understanding by studying their own internal look-up table). What humans do understand is how their general environment behaves, including social aspects which in turn expose them to analogy-based knowledge of a wider environment. But language itself isn't exactly a tool of thought, it's more of a translation layer. We associate different words and phrases with different aspects of our experiential understanding of the world gained through sense and perceptual data.
I agree that the chatbots understand. What, exactly, they understand is hard to determine. But I would argue that ChatGPT understands language in a way that we do not, which is exactly why it is potentially so useful.
I'm one of those who believe that for an AI to understand the world in any way similar to the way a human does, it would need to have a body with sense apparatus that provides signals similar to those that human sense apparatus provides. Only then can we accurately judge an AI's consciousness and intellect.
Do you consider mathematics a tool of thought?
Is there a paper you can refer to with this argument? This is really compelling - I’d want to use this in academic discussions
That's actually really clever. Never thought directly of that one before!
@queerdo I've been familiar with the Chinese room argument for many years. I even largely agree with Searle's view that functionalism can't explain mental states, but I think that he misses a larger pattern; that syntax is the science of semantics - I'll expound on that in a moment.
I think you've misunderstood my point. The point isn't that it's *uncommon* for native speakers to understand their own language; it's that it *is common* for all of us to have a sense of understanding as 'mere symbol shuffling'. There are different senses of, and standards of, understanding that we commonly mean by "understanding" and the Chinese room argument plays off the conflation of those. Procedural and statistical learning create a kind of understanding without the mental sort of *ah-ha!* experience of cognitive understanding - we can know without the experience of knowing, i.e. we ourselves are often ignorant to the semantics of our own understanding, just like the man in the Chinese room.
If 'mere symbol shuffling' isn't understanding, but the subjective experience of higher cognitive understanding *is*, then it would be correct to say that a Sino-linguist with no functional command of Chinese language does understand Chinese, but that a native Chinese speaker with no linguistic training does not. But, the impression given to a fluent Chinese speaker interrogating each of these would be exactly the opposite. It's often taken for granted that conscious mental process is the vehicle of understanding, when, in reality, understanding is often just as non-introspectable as Chinese is to the man in the Chinese room.
To be clear, I believe that the circuit of a Chinese speaker's brain that does understand Chinese, itself has a subjective experience of understanding. It's just that the Chinese speaker's conscious mind is not privileged to that experience. To their conscious mind, complete answers are provided. In connection with the Chinese room, there is also, though it may be hard to locate, an experience of understanding, but the man in the room is similarly non-privileged.
On syntax as the science of semantics. Whether I poke you with a stick and then observe your behavior, or poke you and then ask you questions about how it affected you, or poke you while I scan your brain and observe how it affects the firing pattern of your neurons, the best I can ever hope to obtain is a grammar that I can internally (subjectively) connect with my own experience so as to form a theory about yours; i.e. I look for syntactic rules and draw correlations between those and my own private semantics. No matter where or how we look, all processes that are external to our own privileged subjective experience are syntactic. They will always appear to work by symbol shuffling of one sort or another because we can't observe them directly, we can only observe the effect they have on us.
All the Chinese room argument demonstrates is that it's difficult to relate to the experience of the total system of the Chinese room, but easy to relate to the experience of the man inside the room, for the obvious reason that none of us are symbol filled rooms with men inside of them.
I _just_ got done posting a comment where I "proposed" something similar to your last paragraph (of your original post). Then jokingly took credit for "proposing" the idea because nobody else had posted it yet. Then I scrolled down further and saw your post. LOL.
I look forward to your weekly videos because: 1) I always learn something; 2) your sense of humor is terrific; 3) you really do take the "gobbledegook" out of a subject; 4) I simply enjoy your presentation of whatever subject you choose. AI may understand the words I just used, but how do you make it understand and "feel" anticipation, humor, enjoyment etc.? Maybe they will figure it out someday, but I am skeptical.
erm ok... u know u just wanna hit that.
You explained a lot of very important concepts and I agree with most of what you said. The only issue boils down to semantics. I would define "understanding" differently. The way you explained it, understanding is just the formation of a model, but the word I'd use for that is "modelling". The term "understanding" is what I would use for the formation of "mental models", i.e., the neural circuits that allow us to simulate reality subconsciously without taking a detour through language, equations, or any other representation of information. If I hold something in my hand, I know intuitively that it will fall to the ground if I let go of it. Language is not involved in that process. You can learn about something using language, but you can take that knowledge and transfer it into this mental model. That is what I would call "understanding". That is what chatbots currently are not able to do. I don't know if any physicists do this with quantum mechanics. But if they simply use the equations, they don't. If they did, they should know the answer before calculating the equations, merely using the equations to confirm their intuition.
Look up PaLM-E. It is a multimodal AI model for operating robots, as in controlling moving robots in unsupervised environments. You can tell it in English, 'get the milk out of the refrigerator', then put it in a previously unseen kitchen environment, and it will be able to identify a refrigerator, 'know' that refrigerators contain cold things, manipulate the door open, move items out of the way, and pick out the milk.
Things are progressing very rapidly in this field. Keeping up with what we couldn't do yesterday and we can do today is almost a full-time job in itself.
Understanding is the understanding of concepts (otherwise one is dealing with memory and reflexes). Words are our way of retaining and integrating concepts. We have Conceptual knowledge, which allows us to process abstracted information. Without words we have no way to think in abstracts. Just consider the huge abstracted base on which words like "justice" are built. Try thinking without words.
@@aeomaster32 Sure, words are very important for conceptualization and we certainly couldn't operate without them. But they are just one aspect of the mind. Thinking without words is absolutely possible, and it's happening in your brain as well, while you are reading this comment. It's just not as obvious. Nevertheless, thoughts form in our subconscious mind and verbalizing them is a task in and of itself. Did you never want to say something, but didn't quite know how to formulate it?
@@petergraphix6740 Sorry, I fail to see what this has to do with my comment. Am I missing something?
When I was young, I loved reading books, and I loved science and space travel. One of the things I came across early was the idea that "going up is easy, but staying up is hard." The cannonball-to-orbit example was common: at low speeds, the cannonball simply fell back to the ground. At higher speeds, it still fell, but had moved far enough across the ground that the ground had fallen away underneath it, and so now it's in orbit.
I had an accurate, working model of this in my head. I could (with the right reference material) correctly apply it. I even knew the equations and could graph it out. But did I understand it?
Intuitively, it seemed wrong. Even though everything in my model agreed and produced correct results, I didn't know why these concepts combined to produce the results they did. I could explain it (in my own estimation) as well as the books could, and even teach it to others somewhat effectively, but it didn't make sense to me. After many years, I gained that understanding - but not by way of new information or any appreciable changes to my model, or by way of repetition and becoming more comfortable with it. The understanding came, and I don't really know how to describe that beyond saying "I grasped it."
An AI might have the model, and know how it fits together, and know how to apply it, and get (generally) correct results. But there is not even the benefit of the awareness "...ok that's how it goes, but I don't know why." This manifests itself in how chatgpt can be so confidently (and convincingly) wrong with its answers. The "training" is done by complicated trial-and-error. It's a Rube Goldberg machine where not even its designers have a clear idea of the path the marble takes. That rulebook is the only thing that exists, and the AI is not reading it; the AI _is_ the rulebook.
So in the end, this doesn't change my conclusions about whether or not AI "understands" anything - only that the definition of "understanding" you're using here is different than mine, and (in my opinion) in a way that makes the assessment fairly useless.
Will AI become conscious? Note this is a completely different question, and in my opinion, no. They will, however, eventually* produce behavior that is close to indistinguishable, and that's probably enough for physicists.**
* _"eventually" is a really, really long time. brains are debatably just machines, but we're nowhere close to producing a comparable machine._
** _this is good-natured ribbing. i love you, physicists._
hUmANs ArE sPECiAAAAAL NOTHING NOTHING WILL EVER RECREATE CONSCIOUSNESS IMPOSSIBLE!!!
@@HUEHUEUHEPony thanks for reading+replying only to the last sentence. Surely you missed nothing of consequence.
I'm doing philosophy of mind on my MA course right now. Your thoughts really chime with what I've been thinking over the past weeks.
Your presentations are serious and humorous at the same time. Absolute delight!
Yes, she is a very good bullshitter. ;-)
I admit I was skeptical that one whose primary field is physics would have accurate, well-informed takes on AI/ML, but my prejudice was clearly inappropriate.
Your insight and knowledge on the subject matter in this video is remarkable and more accurate than many I've seen from those with more obvious experience in the field.
This is bizarrely similar to a debate I just had with ChatGPT today. I talked to it about the Chinese Room and the nature of understanding, and even mentioned quantum mechanics, though not at the same level of understanding as you do. This has been a topic I've been fascinated with for a while, and I really appreciate how thoughtful the video is in giving many different examples; it's given me a lot to think about. Oh, and I'll be sure to use your tips in getting the most out of what large language models are capable of. :P
ChatGPT isn't that good compared to GPT-3, lol. It's hilarious that you keep mentioning it like it's some revolutionary chatbot; it's not, compared with the older, much more complex models, which actually respond to what you're saying instead of replying with false information.
The computer should learn every human language and animal communication to have a deeper understanding.
It is impossible to judge whether anyone else, human or not, is conscious. We simply presume other humans have inner thoughts; it is unprovable. Likewise, as we likely don't have free will and are purely state machines, the idea that we have a will and are self-aware is more likely also an illusion.
@@ikillwithyourtruthholdagai2000 Oh I am certain that GPT-3 has more functionality, I just have no access to it to confirm for myself. But from what I've seen, limiting the scope of the model has apparently given it more usability. I honestly didn't get the hype of ChatGPT when it first boomed either, but I've been playing around with it and I can see what the fuss was all about. But I agree, there are still way more chatbots out there that are better attuned for conversations. I never once said that it was revolutionary, but I don't blame you for thinking I was one of the people who's on the bandwagon, you must be as sick of them by now as I am.
Also, why are these replies acting like I said something that I never claimed or mentioned? I only said that I tested how the chatbot responds to philosophical thought experiments and that I appreciated some of the points in the video, but I never specified which ones. The edit was just to add the last sentence: I tried using it to look for a mechanical device that I've been searching for for a while, but I didn't manage to find exactly what I was looking for, so your mileage may vary.
I must congratulate you for the scare. As someone who does not scare easily (actually was at the point where I thought I was incapable), it was...refreshing to be reminded of what that felt like. I guess having reality distorted in an uncanny way with no immediate acknowledgement was deeply disturbing to me for a second as I became alarmed and tried to figure out what happened.
Which part was scary? The deep fake faces, or when she announced reality does not exist?
@@cobwaldosblepot4247 Deep fake faces.
Sabine, your dry humour and German accent add a wonderful dimension to your videos. I even learnt how to actually pronounce “Einstein” from you!! How wonderful. Your presentations are also very clear and, for complex topics, very comprehensible. Thanks 😃
Love your videos, Sabine. Given that vision grabs 10x more information than listening... I'd expect multi-modal LMs to explode in their abilities. I await your vision/interpretation of emergence in this and future models!
Well, I prefer language over graphs and other visual material, so it depends on who is your target audience.
I think we always assume that “understanding” involves some degree of consciousness. We are in new territory. We might need a new word for “understanding” when it comes to AI.
I think back to my early childhood, where I understood some things, but as I recollect and compare to my conscious level of interacting in the world today, I would argue that in the beginning I was not conscious. Not really. At what age did I attend to my independent needs? I think about this a lot. At some point I woke up and became conscious.
Will "consciousness" become the new "soul"? I wouldn't write it off
-Marco
I'm starting to think that consciousness is just another word for agency. We apply it to the things that do stuff without being told to do it. I think the whole thing just arises from our biological instincts, both consciousness itself and our recognition of it. Part of it is defining consciousness as something special that other humans have so we can function in human society. All that doesn't really have anything to do with whether that agent really understands anything, or with understanding itself.
Yeah, it's not about understanding as such, but about reflection on that understanding: you know/think you understand something (whether rightly or wrongly is irrelevant). You don't even have to understand anything in order to be a conscious being. And vice versa, AI doesn't have to be conscious in order to understand something.
@@greenanubis Gold. Every single word of it. This is by far the best definition of "consciousness" I have ever come across.
Wow, Sabine, this episode is really brilliant 🙂
Thanks
Excellent video. Star Trek TNG often explored what it means to be "alive", my favorite episode being "The Quality of Life", the episode with the little Exocomp robots which could fashion tools, work together, solve problems, and evolved (or were programmed with) a sense of self-preservation, leading Data to argue that they were "alive". I think an interesting philosophical question is: Which is more alive? An Exocomp (which is similar to ChatGPT in some ways), or a 3 year old human child who has been locked in a room with no light, sound, or other sensory data of any kind to process for his entire life? I realize this video is more about the nature of "understanding", not the definition of "life", but I think this philosophical question is a good one to consider. The Exocomps have a model of reality that they are working with, just as normal children do, at least once they've had a little time to grow and process endless sensory inputs. ChatGPT has no such real-world model to work with, it's all derived from language alone, using math & prediction to do so. The analogy that ChatGPT is "autocomplete on steroids" is a very relevant statement, since living things have different drives (self-preservation being a key one, but not the only one) and a real-life model against which to compare all of this sensory input. ChatGPT does not have this. The Exocomps did have this, since they were little robots that existed in the real world. Ultimately I believe that AGI can only occur once AI is imbued with a core drive of self-preservation (with actual stakes and consequences and iterative evolution), not by simply feeding it more data.
And there is the kid who thinks that television is reality. ;-)
@@schmetterling4477 lol. The TNG episode is just a story, representing a hypothetical scenario in which ChatGPT-like AI is linked to a machine and granted these additional abilities and drives, as a framework for a discussion about the nature of these abilities and their implications with respect to the definition of life, the definition of understanding, the models Sabine talks about and so on. In other words, metaphors for life. :)
@@ErikKnepfler Yes. It's fiction. End of story. ;-)
@@schmetterling4477 fiction is not a synonym for useless with respect to hypothetical and theoretical discussion. it's very useful. also, a lot of things in that show and stories like it were fiction only until they actually got invented. hell, a few months ago, ChatGPT existed exclusively in fiction and now, here it is. I guess I just believe that fiction is inherently just storytelling, and humans are very good at that, and using that as frameworks for discussion about reality can be very useful. over my life I've found that this is generally what separates sci-fi fans from non-fans - the non-fans tend to be mired down by the fact that it's not "real" and can't deal with it, and have no interest in the metaphorical and philosophical properties of such stories to help understand reality. fans debate meaning endlessly because they do understand its value in that context. anyway, rant over :)
@@ErikKnepfler The use of fiction is to entertain you. Now go and invert the phase on your warp field, kid. It's constantly pointing at the fiction section of the library. ;-)
12:00 really hits the nail in the coffin. For AI to overcome all these barriers, I think it would have to be taught as if it were a baby: everything that we learned since we were born, from preschool to high school, geography, trigonometry, geometry, and so on, to be able to draw conclusions the way a person would.
But the chatbot starts out with the capability to create complex dialogue like an adult human. The path of education for humans is based on the fact that our minds take a long time to develop and mature throughout childhood and adolescence, so it wouldn't make sense for a bot unless you can simulate it "growing up" from the mental capacity of baby up to an adult.
@@idot3331 In the beginning of the training process it will spit out random text (not real words) as well
Nope. This is the way humans learn. AIs DON'T NEED to learn the way humans learn in order to understand. You are simply being anthropocentric.
@@theTranscendentOnes Also I think Phillip is underestimating babies. I recall segments of my very early childhood; I understood far more than a person who can't remember their early toddler years would think possible. YMMV, but this is more than opinion, it is how I recall it. AI is programmed with language, and infants are pre-programmed to learn language. And the latest publicly available AI "learns" languages that were not in its initial programming. Does the AI have the spark of consciousness? I doubt it but......Shrug
@@theTranscendentOnes ChatGPT had access to schoolbooks and other educational materials
I wouldn't trust the current ChatGPT with almost any topic, to be honest... If you ask a plain question about history, geography or language, the first thing ChatGPT says is almost always entirely correct. But if you continue asking it to give you more details, ask for specific evidence of something, or ask it to explain why it said something specific, then it will start making stuff up quickly. Especially if it is kind of a niche topic.
The worst thing about it is that, since it is trained to sound the way experts sound in articles and books, but not in real life, it will almost never tell you that it doesn't know something; most of the time it would rather make stuff up than say it doesn't know.
Love your sense of humor! Subtle but very funny
Can't help but like this video and its consistent and logical arguments. Thank you for your work Sabine! ❤️
I am finding GPT to be insightful, able to carry on conversations, and able to compare and contrast its abilities with those of others. It repeatedly demonstrates an ability to expand on the topic at hand. For instance:
GPT replied, in our conversations, "I agree that it is important to understand the context in which a person is using a term and not overextend its meaning into other contexts. It is important to communicate with clarity and to strive to understand what someone means by the words they use. Admonishing individuals for their usage of a term that may be inappropriate in another context can indeed be dismissive and devaluing, and it can impede productive communication. It is important to approach discussions with an open mind and to seek to understand each other's perspectives."
A lovely sentiment, but not very accurate as a description of ChatGPT's actions. Its job is to produce a reply, not request clarifications. Basically, overextending its understanding is its core function.
@@0LoneTech It doesn't matter how you get there. You are not distinguishing the process from the product. There are many manufacturing processes and materials that are used to create a spatula. A spatula is still a spatula, it flips eggs, regardless of whether it is made of plastic or steel bar and plate.
@@johnfitzgerald8879 Not sure why you're discussing a spatula, but my point is that this flowery description shows that GPT doesn't understand its own behaviour. You end up with three options: It's unaware of what it's saying, it's unaware of what it's doing, or it's utterly hypocritical (with no care for accuracy). None of these are particularly insightful behaviours. I'm leaning towards it having a very tenuous concept of what it's saying, none of itself, and none of truth.
@@0LoneTech I'm not sure why you are speaking of whether GPT processes information the same way that humans do. Though humans would typically understand the analogy of spatulas being made of different materials and construction, something that GPT may have trouble with as well.
Here is how the analogy works. GPT is made of silicon where Humans are made of meat. One spatula is made of plastic where the other is made of steel. Both spatulas still flip burgers. Both humans and GPT are able to produce sentences relative to the conversation.
Are you sure you aren't an AI? Cuz I have identified areas where GPT is limited in its ability to make connections, like analogies.
And I have experienced humans that are capable of little more than parroting words and phrases they have learned. Humans make numerous cognitive errors while oblivious and in denial that they have. GPT does appear to recognize an error when it is pointed out to it.
@@johnfitzgerald8879 So, your first note attributed properties to GPT it clearly does not have. Your second shoved in oblique references like "there" without context. And the third demonstrates a failure to comprehend (that wasn't what I was talking about, it's obvious it doesn't), and a bunch of asinine veiled insults. Thanks for demonstrating your interest in productive conversation is less than GPT's, I guess.
I just found your channel when my girlfriend and I were watching a random video in my recommendations. Honestly, I'm happy to stumble across your channel; your wit and the way you discuss the subject are so nice, I felt like I became a bit smarter 😆
Is your girlfriend a llama? Just gotta check.
I asked GPT 4 your question. I'm not a quantum physicist, so I'm not sure how accurate it is. However, in many aspects GPT 4 has become much more competent than its predecessors, and it only seems to be improving, especially now that it can self-evaluate if you ask it to do so.
Q: If you perform an operation which is not a measurement on one particle in a pair of entangled particles, does that affect the other particles?
A: When two particles are entangled, their properties become correlated in a way that can't be explained by classical physics. The phenomenon of entanglement is a key aspect of quantum mechanics.
When you perform an operation on one particle of an entangled pair, it can indeed affect the other particle, depending on the type of operation. This is because the operation can change the overall quantum state of the entangled system. Importantly, this doesn't mean that the change in one particle's state is "transmitted" or "communicated" to the other particle. Instead, it's the correlations between the particles that are affected, and these correlations can be observed when both particles are measured.
However, it's important to note that simply performing an operation on one particle does not necessarily imply an instantaneous change in the other particle. In fact, according to the no-communication theorem in quantum mechanics, it is not possible to use entanglement to transmit information faster than the speed of light.
So, to sum up, performing an operation on one particle in an entangled pair can affect the other particle, in the sense that the correlations between the particles might change. But this doesn't involve faster-than-light communication or transmission of information between the particles.
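(Not part of GPT-4's answer: a minimal numpy sketch of my own illustrating the no-communication point above. A non-measurement operation on particle A of a Bell pair leaves particle B's reduced density matrix unchanged, so nothing observable at B changes.)

import numpy as np

# Bell state |Phi+> = (|00> + |11>) / sqrt(2)
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)

def reduced_rho_B(state):
    # Reshape the 4-vector into a 2x2 array indexed by (A, B) and trace out A.
    psi = state.reshape(2, 2)
    return np.einsum('ab,ac->bc', psi, psi.conj())

# Apply a Hadamard gate (a non-measurement operation) to particle A only.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
U = np.kron(H, np.eye(2))

print(reduced_rho_B(phi))      # [[0.5, 0], [0, 0.5]]
print(reduced_rho_B(U @ phi))  # still [[0.5, 0], [0, 0.5]]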
-----------------------------------------------------
Q: Is Windsor, UK, further North or South than Toronto, Canada?
A: Windsor, UK, is located further south than Toronto, Canada.
Windsor, UK, is located at approximately 51.5 degrees North latitude, while Toronto, Canada, is situated at approximately 43.7 degrees North latitude. The lower latitude of Windsor indicates that it is closer to the equator than Toronto, making it further south.
Q: Was your response geographically accurate?
A: Apologies for my previous response. I made an error in my response.
To correct my mistake:
Windsor, UK, is located at approximately 51.5 degrees North latitude, while Toronto, Canada, is situated at approximately 43.7 degrees North latitude. The higher latitude of Windsor indicates that it is further from the equator than Toronto, making it further north. So, Windsor, UK, is further north than Toronto, Canada.
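(My own two-line check of the corrected answer, using the latitudes quoted above: in the northern hemisphere, the higher latitude is further north.)

windsor_uk_lat, toronto_lat = 51.5, 43.7  # degrees North, as quoted by GPT-4
print("Windsor, UK" if windsor_uk_lat > toronto_lat else "Toronto")  # Windsor, UK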
A French youtuber did a great experiment in his video: he gave us a few different strings of symbols, then gave us a string to complete by learning from the previous strings. It wasn't hard. What's interesting is that he revealed later that he had assigned meaning to these symbols and that they actually represented actions and states, which can be put together to form a story. When we completed that last string, we actually completed a story, which of course made sense.
This experiment was inspired by the Chinese room mentioned in this video, and I think it shows very well how you missed something important in this video: understanding patterns is not enough to understand language; you need to understand that language represents something else, and what that thing is. If you don't, then you don't know what you're saying, you only understand the pattern. In other words, it is possible to have a model of something without understanding that thing.
In that French video, I did not understand what these symbols meant. I did not even know that these symbols meant anything. All I did was find a pattern in the examples and apply it, and yet I did complete a story in a way that made sense.
And it could be the same in quantum mechanics: we could be able to manipulate the patterns without understanding what they represent, or while being mistaken about what they represent.
Edit : btw, here is the french video : ruclips.net/video/j3fvoM5Er2k/видео.html
Dr. Hossenfelder does address this in a way.
In the example of the Chinese room, the man in the room doesn't necessarily know that the symbols represent something else. However, that is the implication of the system, including the rule book that he follows.
Likewise, when you completed the string, you personally didn't understand the meaning of it. But the system did "understand" it, in a manner of speaking. The system was what was presented to you, and also the rules that were hidden from you.
I find it very interesting to think about how this technology relates to aspects of psychology. In this case, I think about hypnosis. It's common in hypnotherapy to tell a story that is symbolic to the patient. If the patient analysed the story, they would probably be able to tell that the story is meant to be an analogy for their life situation. However, it is not necessary for the patient to understand that in order for the story to produce the desired changes. In fact, it is better if the patient is not consciously aware that the story is an analogue, because that will often mean that the story will sink deeper into the unconscious.
In the case of the hypnosis patient, consciously they do not understand, and unconsciously they do understand. One element of the system does not understand, but the deeper and more integrated elements of the system do understand, and that is what is important.
@@TheParadiseParadox
Except I doubt the system of the Chinese room plus the man always produces good results, and in the comments of the video there were people saying that they had given a wrong answer.
Such things would not happen with understanding. If you tell people to complete a simple story in a way that makes sense, nobody will fail at it (even people who deliberately mess it up understand what a good answer would be, in order to give a wrong one).
That's because stories obey rules (usually those of the real world), and understanding those rules will always lead to something that makes sense. The system, however, only understands the rules of language at best; it has no clue about the rules of the real world that a story must follow to make sense.
So as I said, you need to know that language represents something, as well as what it represents, in order to say that you understand the language; just understanding the grammar and syntax is not enough.
Also, I very much doubt that anyone unconsciously understands anything. Appealing to the subconscious in psychology is often a cop-out to explain something you don't understand: since it's not easy to access, it makes for a good unfalsifiable assumption. There are things we do unconsciously, like breathing for example, but I don't think it goes beyond simple reflexes and habits.
@@Zift_Ylrhavic_Resfear okay, interesting points
You are right that someone would not make such a mistake with that understanding. But my point was that the entire system has the understanding. You, as an element of the system, do not have that understanding, and so you can make a mistake. But when the french youtuber parses your response with the additional rules, he will know whether something has gone wrong or not.
I have to wonder about the implications of saying that you doubt that anyone unconsciously understands anything. In the case of hypnosis, would you say that hypnosis just doesn't work? Or that the people really do understand consciously, and they are pretending that they don't? Or how would such a thing work if people don't have unconscious understandings?
I've seen for myself that people do have unconscious understandings. I ask them "What must you believe in order for you to have these feelings or act this way?" and they say "Well, I don't think I really have any beliefs." Then I might say "Very well, what must you assume?" And eventually, after I ask in a few different ways, they start thinking about it and they discover an underlying assumption that causes them to act in that way. When they adopt a new belief or understanding, they can change their feelings and actions. This is the basis of CBT, which is one of the most scientifically proven forms of therapy.
@@TheParadiseParadox
*"But my point was that the entire system has the understanding. You, as an element of the system, do not have that understanding, and so you can make a mistake. But when the french youtuber parses your response with the additional rules, he will know whether something has gone wrong or not."*
The French youtuber is the one giving the inputs and receiving the output; he is not part of the system. He doesn't change our answer, he just observes whether it's correct.
*"In the case of hypnosis, would you say that hypnosis just doesn't work? Or that the people really do understand consciously, and they are pretending that they don't? Or how would such a thing work if people don't have unconscious understandings?"*
I don't know much about hypnosis, as I've not read any scientific literature on the subject. From what I've heard, it seems like an altered state of mind, and such states of mind can have various effects on people's emotions. For example, meditation can calm people and lessen their negative emotions, while praying can make people elated.
As far as I can tell, the effects of hypnosis therapy seem to be explainable by such an influence on the person's emotions.
*"And eventually, after I ask in a few different ways, they start thinking about it and they discover an underlying assumption that causes them to act in that way."*
I don't think this assumption is unconscious, but rather forgotten. The assumption might have been used to form habits and then have been forgotten while the habits stayed.
I've had this happen in a game I played: when I started playing, I figured that health was better than defense. Later in the game, I could not remember why I concluded that health was better than defense, but I kept choosing health whenever I had the choice. My understanding did not become unconscious, it was purely forgotten; I remembered only the choice, because I kept repeating it and had no need to remember the reasoning.
Furthermore, we are very good at rationalizing (coming up with reasons after having already reached a belief or conclusion), so there is doubt about whether the assumptions you find came before or after their beliefs. It's even possible that people may come up with assumptions because you ask them to. If that is the case and your results are not placebo (I don't know you well enough, so I can't eliminate this possibility), then it would be interesting that changing such post hoc rationalizations can in turn change the belief or conclusion.
Anyway, I'm not a specialist, so I wouldn't be surprised if I'm wrong; I'm just giving you my current understanding. I just hope that if you're trying to help people with your practices, you make sure you understand the scientific method as well as possible and stay up to date with the science; it's the most reliable way we have to avoid mistakes.
In Searle’s Chinese Room, the rule book is the algorithm + data, not necessarily just a lookup table. He’s going after the Turing Test, which argues that an AI that produces human-equivalent input-output must actually understand the input. However, if the AI is merely performing a mechanical symbol substitution, no matter how sophisticated, then it arguably can produce satisfactory outputs without necessarily understanding the input. I read his book back in the early 80’s and that’s what stuck with me. (I went on to study ANNs in grad school and use them commercially today, so I do understand various algorithms, training, backprop, etc.)
I would like to thank you, Parseeker, for bringing this distinction up, and I would like to expand on it slightly. Another way to put Parseeker's point is that Searle is pointing out the distinction between syntax (the rules-based structure of a language) and semantics (the meanings of the words). Searle's main assertion with the Chinese room is that syntax does not get us to semantics. Parseeker rightly points out that look-up tables are only one form of algorithm that Searle is after; his stated target was "strong AI", the idea that digital systems (or any other symbol manipulation methods) establish the necessary conditions for consciousness. Sabine is a very intelligent presenter and a good communicator, but held within her presentation are a couple of assumptions that it would be irresponsible not to point out. Sabine's point turns on the definition of the term model and whether or not we consider the model as having any semantic content. She assumes a model can carry semantic content, or possibly that syntactic structure can create semantic content. This assumption seems to come out of a metaphysics of (reductive?) materialism, which seems to be her other assumption. I have sympathies for these positions and believe they are perfectly reasonable, even if I do not share them. When discussing AI models I am less convinced than Sabine that the training contains semantic content, and even more skeptical of the idea that the rules of the algorithm could create it. Frankly put, if there is no meaning to start with, I do not see how rules about the probable next word can give us the meaning of the current word. As Sabine points out, we are not sure if this is anywhere close to how we learn, because we can only assess the inputs and outputs. Although I do not share her position, or Searle's for that matter, I appreciate Sabine engaging in this conversation and enjoy her content.
@queerdo "she's presenting her metaphysical position as a scientific one. That is dishonest." This nails it - is 100% correct and mirrors my same comments on her videos. I've come to conclusion that she's is incentivized to remain so intellectually dishonest.
@@kakistocracyusa It's not dishonest to present a topic as you understand it. She sees the world through a materialist lens and explains it as such - that is being true to her viewers about what she knows. And you can't say how she views these things is wrong cos the issue is still open in both science and philosophy.
@@ArawnOfAnnwn You assume she understands it, which she doesn't - so then, like you, she is simply ignorant of when she is discussing metaphysical questions and when she is discussing actual physics; and obfuscates her own ignorance of both physics and classical philosophy by deflecting to irrelevant quantum mechanical mysticism: the standard escape route for sub-standard physicists who opt for being pop-stars instead of productive physicists (see Fritjof Capra "The Tao of Physics" for another example of such reliance on spineless equivocation through muddy mysticism).
As an AI developer I think you made a great video and I love your introductory anecdote on people "understanding" physics.
@@marcosolo6491 With due respect, that really doesn't make sense to me. Of course discussion about whether they are or could ever be conscious is not moot. People are interested, so let's discuss it. Is anything we have created so far conscious or sentient? Obviously not. Can it be eventually, and is it inevitable given our current path and understanding? I would say very much yes. Will it be like us? I guess we should probably hope not.
@@marcosolo6491 Consciousness is an emergent phenomenon. You could imagine in your head that when you look at gas particles they move around chaotically, but when you start cooling them down they will gradually become a fluid, which is more of a “single” entity than a gas. If you continue freezing the fluid, it becomes a solid, which we consider even more of a single thing, because all the particles behave in coordination as part of a single lattice structure.
The same thing happens with magnets. If you heat a magnet, it loses its magnetic properties because the particles start pointing in random directions; if you cool it down, you start observing on the macro scale that the magnet behaves like a magnet :)
The same thing with superfluids (recommend looking up videos about them).
Our brain operates at the “critical temperature”, if you will. Metaphorically, it is always in transition between the fluid and solid state. The parts of the brain that “freeze” at a certain moment are exactly the consciousness.
@@marcosolo6491 Understanding does not require consciousness and your examples are nonsensical. A book doesn't understand QM because it doesn't have an internal model of QM that it can extrapolate from and reason about.
@@marcosolo6491 I know what you mean but I think your "book" example doesn't match the topic. The book itself is just a container but a machine (biological or electronic doesn't matter) who utilized that book to give you an answer to a specific question is somewhat different.
Neither our brain nor the AI is nearly like the book you mentioned. They are not comparable.
@@marcosolo6491 yes but our computational architecture is rapidly moving beyond brute force, neural networks, machine learning etc
GPT4 answers the questions correctly.
First q:
Windsor, UK is further north than Toronto, Canada. Windsor is located at a latitude of approximately 51.48° N, while Toronto is located at a latitude of approximately 43.70° N. The higher the latitude, the further north a location is.
Second q:
When you perform an operation on one particle in an entangled pair, it does not directly affect the other particle in the sense that there is no instantaneous, causal influence between them. However, the operation may change the entangled state of the particle pair, which can affect the correlations between the particles when they are measured.
I think Sabine’s comment that we cannot answer whether something is conscious is widely overlooked by the general population, and so spot on! We can hardly agree on the ethics of minimally conscious states in humans. We also know that animals are intelligent, and can have a sense of self in some cases. We simply don’t have the tools to identify consciousness. We should be very careful to summarily rule it out!
We continuously use ourselves as the reference, but there are aspects of our consciousness that are specific to us. Consciousness encompasses many different aspects, like qualia, rational thought, common sense, etc.
So if this program has at least one of the multitude of aspects that make up our consciousness, then we can say that it has some form of consciousness, one that is much more limited than ours, but it is there regardless.
It's a bullcrap argument. We know EXACTLY how ChatGPT operates. What we don't know - because we can't read it all - is the enormous amount of text that ran through the learning algorithm.
It is pathetic that we are even using the word "consciousness" for silicon hardware on a von Neumann architecture. Absolutely laughable
I suspect consciousness is a self-defined property. It can't be defined by external input or output. Intelligence and knowledge aren't enough. I know I'm conscious, but I'm not sure about everyone else. On that basis I can confidently say ChatGPT isn't conscious - because it says it isn't. That isn't to say that some AI in the future might not answer the question below differently.
Me: Are you a conscious entity?
ChatGPT: As an artificial intelligence language model, I do not possess consciousness in the same way that humans do. I am a computer program created by OpenAI to process and generate text based on the input I receive. While I am capable of mimicking human-like responses and carrying on conversations, I do not have subjective experiences, emotions, or consciousness.
This channel means so much to me and I love it. I nearly died at the deep fake face changes... I love it.
Thank you so much Sabine, you are my hero ❤️
sigh. I was reading the closed captions, so I didn't even notice at-first.
When we say "understand" we mean one or two of the following:
1) see some pattern (in other words, create some model)
2) feel something
If someone knows the whole theory about the patterns associated with physical pain, but has never experienced the feeling - does that mean he really understands it? We would probably say "no" - until he experiences the feeling for the first time.
So, AI is definitely capable of (1). But is it capable of (2)?
Actually, this is the same question about the "Chinese room", but related to feelings.
Can we say that (1) and (2) are the same?
You placed the problem front and center with "until he experiences this feeling". This is the problem when we attempt to apply descriptions of humans to descriptions of automation. Words such as "understand" also tend to come with the human connotations of "emotion", "awareness of one's own internal thought process - reason", etc.
Feeling something (we are talking about sensory input, right?) usually helps with the understanding. But there are things to understand, that are not directly associated with any sensory input. Like much of math. So I don't think being able to feel something is strictly necessary to understand it. I guess we could talk about different levels of understanding.
@@harmless6813 Feeling something helps with understanding, yes. But not only. This IS another type of "understanding" available to us. This is what I mean. Direct perception.
@@harmless6813 And if we remember the "Chinese room" argument: if someone puts questions in the drop box and asks "How do you feel?", and the person in the room reacts according to the rules in his book and gives answers like "I feel pain", can we say that someone really feels pain in this case? And isn't that actually the same question as the one about "understanding"?
I mean, if we say that "Chinese Room system" has understanding, why can't we say that "Chinese Room system" has feelings?
@@harmless6813 The word "understand" is an ambiguous, human-focused concept that also brings with it connotations of self-awareness, emotion, and consciousness. So to ask "Does an ANN or ML system 'understand'?" is a contextually incorrect question to begin with.
Admittedly you do need some genuine knowledge of the human condition to see the differences.
Although only a small part of the greater context, Steven Pinker illustrates a great deal about this topic in his book "How the Mind Works" (1997), and a lot of points on AI can be drawn from his "computational theory of mind" :)
By that logic, does a graph understand the relationship it describes? It has a model of a relationship, and one can read off of it to find answers to non-given questions. This is also a basic version of an AI, as it would be a single node of an AI, holding the information of a single node.
Thanks for all you do Sabine, I love your videos and I am just a very simple layman.
Admit it! You understand nothing in this video! You're just into smart German cougars like the rest of the "laymen".😁
So that's 99 per cent of us, I guess.
@@pakde8002 That is grossly unfair, I am good for nearly 1 percent of it all sometimes.
@@pakde8002 Cougar, isn't that a bit yucky?
AI software like ChatGPT finds words and patterns, and strings them into "new" content based on other patterns. That's the kind of boring writing and padded content that I avoid like the plague. And let's remember, the web content used for training ChatGPT is replete with errors, unworkable solutions, hucksterism and fakery. At this point you need to be an expert on a topic to get much benefit.
AI will need to have legions of experts and editors providing high value "training" content to realize its potential - that's what will be huge. Imagine a bot that asks you what kind of business software you need, and generates the end to end solution. Or a bot that figures the logistics necessary for procuring and shipping high value commodities, then does the scheduling and tracking. Or one that edits and brands RUclips videos - just saying :)
So glad the ChatGPT can answer questions about dropbox, teams, outlook, slack, tinder, twitch, etc without defaulting to its homonym app like Google does! Can be frustrating sometimes 😊
In essence, what The Economist is communicating to a physicist is that in the economic context, time (T) can be thought of as a space-time continuum where the allocation of time between work (W) and leisure (L) defines how economic agents interact with their environment. The equation t = w + l reflects this economic time, emphasizing that economic decisions are influenced by how time is distributed between work and leisure. This perspective integrates the concept of time into economic analysis, suggesting that economic 'time' is not just a linear progression but rather a complex interaction of work and leisure within the broader space-time framework.
If you define "understands" as "can use with few errors" then it's fair to say that chatbots understand language. But that's a low bar. For one thing, it doesn't imply an understanding of language semantics. And it of course doesn't imply subjective experiencing of the meaning of its own sentences.
The relevant point is that you have a model.
@@SabineHossenfelder The relevant point is it gets an A on the test yet doesn't know how to apply it.
You are really stretching the definition of understanding to suit your conclusions.
@@SabineHossenfelder : The relevance of having a model depends on the definition of "understands." Also, in the video a "model" is just a poorly defined black box.
@@jhoughjr1 what do you mean that it doesn't know how to apply it? Chat GPT is proving it applies the rules it learned during training every day in interactions with real people. Honestly confused about what you're trying to say.
@Brother Mine saying that using language with few errors is a low bar is really hard to believe. It's an incredibly high bar. One might say it's the highest bar we know how to set that real people can also achieve. It's the basis of the Turing Test for intelligence for example. You've made me curious to hear what you think would be a higher bar to demonstrate understanding than the full generality of human language. But even in typing that out, I've realized you're describing a process that by definition you can't demonstrate to me because that would use language.
13:15 "This is physics, so when I say operation I don't mean heart surgery, but something a little more sophisticated." I choked on that one, lol. Keep these jokes coming, you are hilarious 👍
Rohin must have appreciated!
Hi Sabine, not sure you'll see this but on the off chance you do, I have a topic request that this video made me think of! The Attention Schema Theory of consciousness is about the mental models you talk about here, specifically the model of the brain's attention, a well-studied neurological process. I only know a bit about it and I would absolutely love to hear your take on it and the papers which have been published about it!
The face change at 9:58 really startled me for a moment. I was looking away, listening to your words without paying attention to your face; then suddenly the different face registered - and I was startled. I guess I thought that something had gone wrong with my brain.
You are incredible and you can present complex issues suited very well for humans! I am a megafan of all your work.
Sabine's obviously not real and is in fact an AI
Signed: ChatGPT…
Great video Sabine! I would just add that we know the parts of the brain that understand language (Wernicke's area) and produce language (Broca's area). They're relatively small parts of the brain, situated in the temporal and frontal lobes respectively. I imagine the language models we've created mimic (or at best are analogous to) these areas.
No, not at all. That's not at all how any of these AIs work.
Those areas are necessary but certainly not sufficient.
@@Blueski888 I think the OP is claiming that chatbot AIs model the human brain, when he says "I imagine the language models we've created mimic these areas." If so, they do not, not at all.
@@LoanwordEggcorn But also, Wernicke's area doesn't “understand language” etc.
@@LoanwordEggcorn That's not what I meant, maybe I wasn't clear when I said 'mimics'. What I meant was that, at best, the AI models are equivalent to those areas only. I know that they don't work the same nor do they model the brain. LLMs are much simpler.
As a neuropsychologist (and psychometrician), I'd say even the algorithms we use closely resemble the brain's: Bayesian reasoning, logarithmic sensory relationships, all-or-nothing cut-offs for signaling in neural networks, etc. It's not a question of if, but when they will become conscious.
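(A toy sketch of my own, not the commenter's, illustrating two of the parallels named above: the Weber-Fechner law, where perceived intensity grows roughly logarithmically with stimulus strength, and a simple Bayesian update.)

import math

def perceived_intensity(stimulus, k=1.0, s0=1.0):
    # Weber-Fechner: perception scales with the log of the physical stimulus.
    return k * math.log(stimulus / s0)

def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    # Bayes via odds: posterior odds = prior odds * likelihood ratio.
    odds = (prior / (1 - prior)) * (p_evidence_given_h / p_evidence_given_not_h)
    return odds / (1 + odds)

print(perceived_intensity(10))   # ~2.3
print(posterior(0.5, 0.8, 0.2))  # 0.8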
Do you know a video that explains these similarities in more detail? Or can you make one?
Agreed, it would be really really interesting to see the neuroscientific interpretation/reasoning/…. If you have any video suggestions that would be awesome!
@@pietervoogt Check out the video "Brain Criticality - Optimizing Neural Computations" by Artem Kirsanov, it is truly enlightening.
@@tinkletink1403 i have, but am not sure how that relates.
@@tinkletink1403 Is there any evidence supporting Penrose's speculations about all that quantum mumbo-jumbo? I certainly haven't heard of any.
Thank you for a great channel! I rarely leave comments anywhere, but I feel that I must add a nuance to your discussion by pointing out that in Searle's Chinese room thought experiment both the input and the output are in Chinese (he chose that language because he had no knowledge of it whatsoever). Searle - or the person in the room - represents the CPU, and the book of instructions represents the program running in the computer. The person in the room understands the instructions but neither the input nor the output. Searle's point is that computers understand syntax but not semantics; the process but not the meaning.
Searle also notes that the Universal Turing Machine (UTM) is only capable of a very limited number of operations: write, move, state transition, branching, halting. All digital computers are in essence instances of this UTM. In terms of large language models, the program breaks words down (lemmatization) into tokens, which are mapped to numerical vectors that represent the meaning of words. The AI performs mathematical operations using these vectors, takes an input and returns an output (see the toy sketch after this comment). The program, just like the person in the Chinese Room, is only ever performing the mathematical operations (syntax) it has been programmed/trained to do. The machine is incapable of that much higher-level activity we call understanding (semantics).
I am of the opinion that we will not get the most out of the fantastic tools that AI can offer us if we make a fundamental mistake about what AI can and can't do. John Searles argument is as powerful today as it was when he first presented it to the world in the early 80s. That's what philosopher should be good at: thinking about how the world really is! Thank you once again for you excellent channel.
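To make the syntax-versus-semantics point above concrete, here is a minimal toy sketch in Python. It is not how GPT actually works; the vocabulary, the random 4-dimensional embeddings, and the averaging step are all invented for illustration. The point is simply that every step is plain arithmetic on vectors, the kind of mechanical rule-following the Chinese Room describes.

```python
import numpy as np

# Toy vocabulary and made-up embedding vectors (purely illustrative).
vocab = ["the", "cat", "sat", "on", "mat"]
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), 4))  # one 4-dimensional vector per token

def next_token_probs(context_tokens):
    """Mechanical 'Chinese Room' step: average the context vectors, score every
    vocabulary entry by dot product, and turn the scores into probabilities."""
    idxs = [vocab.index(t) for t in context_tokens]
    context_vec = embeddings[idxs].mean(axis=0)
    scores = embeddings @ context_vec
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

for token, p in zip(vocab, next_token_probs(["the", "cat"])):
    print(f"{token}: {p:.2f}")
```

Running it prints a probability for each word in the toy vocabulary; nothing in the process refers to what any of the words mean.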
I'm a computer scientist. Lately, non-computer scientists spewing bs about how LLMs do and don't work has been a constant struggle in my life. I have to admit when I saw this title I was worried: "oh no, not you as well". However, as usual, when I think you've finally got something wrong, I find out that Sabine never gets it wrong. You hit the nail right on the head as usual!
Normal people will eventually start understanding AI more and more as they interact with it ever more frequently. Don't let it stress you out too much :)
This is an interesting comment. I wonder how ChatGPT would “understand” “I‘m a computer science?” I know that you are saying you’re a computer scientist but I wonder if ChatGPT would “understand” that. What if your typo was “I am a computer seance?” I would still understand what you mean but ChatGPT likely would be confused. A real test of “understanding” is recognizing mistakes and knowing what really was meant. I’m not sure how good LLMs are at this.
@@robertbutsch1802 this is a very interesting point. I think it relates to the fundamental problem of self-supervised learning, where if you receive a new data point, how can you tell if this data point is a mistake or just an instance you haven’t seen before? How can you judge whether a data point is likely part of a distribution, if you have only seen positive samples.
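As a toy illustration of the problem described in the reply above, here is a minimal sketch, assuming a one-dimensional feature and a simple Gaussian fit (both invented for illustration): having seen only "positive" samples, all the model can do is score how likely a new point is under that fit. It cannot tell a typo from a genuinely new but valid instance; it can only say "this looks unfamiliar".

```python
import numpy as np

# Pretend these are feature scores of sentences the model has already seen,
# i.e. only "positive" samples are ever available.
rng = np.random.default_rng(1)
seen = rng.normal(loc=0.0, scale=1.0, size=1000)

# Fit a simple Gaussian density to the seen data.
mu, sigma = seen.mean(), seen.std()

def log_likelihood(x):
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

threshold = log_likelihood(mu + 3 * sigma)  # arbitrary "3 sigma" cut-off

# Two unseen points: one plausible, one far outside the seen distribution.
for x in (0.7, 6.0):
    if log_likelihood(x) < threshold:
        verdict = "unfamiliar (a mistake OR something novel - the model can't tell which)"
    else:
        verdict = "looks familiar"
    print(f"x={x}: {verdict}")
```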
@@robertbutsch1802 You know, you could just ask ChatGPT instead of guessing.
@Basement Science I seriously doubt that. When a complex subject reaches the general public, the first thing that happens is that all meaning gets mangled out of the words, and people arrive at an incorrect model that nevertheless works well enough functionally that other people don't laugh at them. The people making it happen will remain just as nerdy as before.
"We're about to create an intelligent species that will be very different from our own, and if we're dumb enough to cause our own extinction this way then I guess that's what we deserve. Meanwhile, enjoy the ride."
This is going on my wall of notable quotes.
I for one am a big fan of human extension. it's better than extinction for sure ;)
I have to fall back to my life long checkpoint: "It's not a question of 'Can I', but a question of 'Should I'".
I think of school as a place where inputs and outputs are performed and measured on students, which seems very similar to what AI produces today. However, in school there is also a reward/penalty system that shapes the student's ability to understand. For an AI system to really develop understanding in that manner, it seems it would need to be programmed to associate pleasure and pain with an underlying desire to survive and succeed. Pretty scary...
That's not true. Learning also happens in organisms without any associations of pleasure and pain. But curiosity can be a motivator for learning, and admittedly, curiosity is probably a survival mechanism.
your humor and knowledge are out of this world!❤
GPT-4 has just been released to the broader public. Here is its response to the question about entangled particles:
Q: "If you perform an operation which is not a measurement on one particle in a pair of entangled particles, does that affect the other particle?"
A: "In quantum mechanics, entanglement is a phenomenon in which the quantum states of two or more particles become correlated in such a way that the state of one particle cannot be described independently of the state of the other particle(s), even when the particles are separated by large distances.
When two particles are entangled, performing an operation (not a measurement) on one particle does not necessarily affect the other particle in the pair directly. However, the operation can change the overall entangled state of the pair, which means that the relationship between the two particles can be altered. Consequently, when a measurement is performed on one of the particles after the operation, the result of that measurement will still be correlated with the measurement of the other particle, but the nature of the correlation could be different than before the operation.
It is essential to note that entanglement and the correlations it entails do not imply any sort of faster-than-light communication between the particles. The correlations arise from the shared quantum state rather than any direct influence between the particles."
This seems correct to me. It answered the latitude question right too. It seems that at least for now throwing more computing power at the problem still results in large improvements of the language models.
And the current limitations are mostly just a token limit on the chat. You should see the 8k API, or the 32k version that's only available to Microsoft. Lots of really smart people are starting to wonder whether gpt4-32k might be alive, and that's before even integrating HuggingGPT with AI-Descartes from IBM to derive scientific laws of the universe, and NCSL from MIT as a sort of 'corpus callosum' to merge with a diffusion model.
When humans learn how to speak, the adults around us identify things for us, ask us questions about things that they've identified, correct our answers when they're incorrect, and give us positive feedback when our answers are correct. This seems not so different from how chatGPT learns. This is why I always say thank you to the chat bots 😂
I think I'd disagree. GPT-3, the actual model behind ChatGPT, has been trained on a whole ton of internet. Humans learn piece by piece in their own limited bubble of life. Also, reinforcement learning (equivalent to our surroundings correcting us) is only one technique to train an AI. GPT-3 (as far as I understand) has not been trained with reinforcement learning, but really just got a big af info dump. OpenAI however does use reinforcement learning to further align GPT-3, resulting in models like ChatGPT, so I do have to somewhat agree there. Oh, and I also can't help but always say thank you :D
I will remember to say "thank you" when the ChatGPT robots come to take me away to a processing plant
You can train a neural network through a process like you described, where a "teacher" tells the neural network if its decisions are correct or incorrect. But you can also train a neural network through evolutionary algorithms, by creating replicas of the network with slight variations, getting rid of the ones that perform worse and keeping the ones that perform better. People tend to think of humans as only learning from birth until death, but life as a whole has been learning through biological evolution for hundreds of millions of years. You are born with a lot of pre-trained brain structures programmed into your DNA, so that your body builds these brain structures as you grow up. For an AI, it's much more difficult to learn even simple things like how to recognize objects in images, because it does not have the benefit of hundreds of millions of years of pre-training by evolution, while your visual cortex is partly something trained through reinforcement learning and partly something constructed from DNA information you inherited.
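Here is a minimal sketch of the evolutionary training idea described above, with everything (network size, mutation scale, population size, the XOR task) chosen arbitrarily for illustration: copies of a tiny network are mutated at random and only the best performers are kept each generation. No gradients and no "teacher" corrections are used, just variation and selection.

```python
import numpy as np

rng = np.random.default_rng(42)

# Tiny task: learn XOR from four examples.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def forward(params, x):
    W1, b1, W2, b2 = params
    h = np.tanh(x @ W1 + b1)                    # hidden layer
    return 1 / (1 + np.exp(-(h @ W2 + b2)))     # sigmoid output

def loss(params):
    return np.mean((forward(params, X).ravel() - y) ** 2)

def random_params():
    return [rng.normal(size=(2, 4)), rng.normal(size=4),
            rng.normal(size=(4, 1)), rng.normal(size=1)]

def mutate(params, scale=0.1):
    return [p + rng.normal(scale=scale, size=p.shape) for p in params]

# Evolution: keep the best few networks, refill the population with mutated copies.
population = [random_params() for _ in range(20)]
for generation in range(300):
    population.sort(key=loss)
    survivors = population[:5]
    population = survivors + [mutate(p) for p in survivors for _ in range(3)]

best = min(population, key=loss)
print("loss:", loss(best))
print("predictions:", forward(best, X).ravel().round(2))
```

After a few hundred generations the surviving network often approximates XOR reasonably well, though pure mutation-and-selection is far slower than gradient descent on anything but toy problems.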
@@polarizedpotstone it's trained on datasets scraped from the internet and other sources, but they were selected, curated and labeled by hundreds of humans...
It doesn't have physical external stimuli, though.
You may be very surprised, Sabine, when you put the same prompts re: entangled particles into GPT-4 instead. It gave what would be the correct answer here (pasted below!). This makes me think the observations you have shared in this video aren't a fundamental feature of transformer models, but just a limitation of GPT-3 which has been overcome.
Q: If you perform an operation which is not a measurement on one particle (like flipping spin), in a pair of entangled particles, does that affect the other particle?
A: When you have a pair of entangled particles, their properties become correlated in such a way that the state of one particle is dependent on the state of the other, even if they are separated by large distances. This correlation is described by the entanglement itself, and it persists until the entanglement is broken, usually due to a measurement or interaction with the environment.
If you perform an operation on one of the entangled particles, such as flipping its spin, you are indeed modifying the state of that particle. However, the effect on the other particle is not as direct as it might seem. The entanglement between the particles means that the state of the second particle is still correlated with the first, but it doesn't mean that the state of the second particle changes instantaneously or deterministically as a result of the operation on the first particle.
To illustrate this, let's consider a simple example of a pair of entangled particles with spins. Suppose that their spins are entangled such that when one is measured to be "up," the other will always be "down," and vice versa. If you flip the spin of the first particle, you have changed its state, but you have not directly changed the state of the second particle. The entanglement between the particles still dictates that if you measure the spin of the first particle and find it to be "up," you can be certain that the spin of the second particle is "down."
In summary, performing an operation on one particle of an entangled pair, like flipping its spin, does not directly or instantaneously affect the state of the other particle. Instead, it modifies the relationship between the particles, which remains correlated due to their entanglement.
Wait, that still isn't the correct answer, unless I got my elementary quantum mechanics very wrong.
Initially, the state of the system is |1+>|2-> + |1->|2+> (modulo normalization and non-physical phase factors).
If you now flip the spin of the first particle, the state changes to |1->|2-> + |1+>|2+>.
If you now measure the state of the first particle and find it to be |1+>, you then know that the state of the second particle is |2+>, too,
because projecting onto |1+> gives <1+|1->|2-> + <1+|1+>|2+> = 0|2-> + 1|2+> = |2+>.
To summarize:
Wrong statement by ChatGPT: if you measure the spin of the first particle and find it to be "up," you can be certain that the spin of the second particle is "down."
This is correct instead: if you measure the spin of the first particle and find it to be "up," you can be certain that the spin of the second particle is also "up."
This means ChatGPT still has a wrong model and still does not understand (this kind of) elementary quantum mechanics.
By the way, this illustrates perfectly what Sabine said: Theoretical physics is often easier to explain in mathematical formulae than in words. That does not just apply to quantum physics, by the way, you can see the same effect in classical theoretical mechanics, classical theoretical electrodynamics, and classical relativity theory.
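For anyone who prefers to check the arithmetic rather than the words, here is a minimal numpy sketch of the same calculation, using |+> for "up" and |-> for "down" with the state and conventions chosen in the comment above. It only verifies the projection algebra, nothing more.

```python
import numpy as np

# Single-spin basis states: up = |+>, down = |->
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

# Initial entangled state (|1+>|2-> + |1->|2+>) / sqrt(2)
psi = (np.kron(up, down) + np.kron(down, up)) / np.sqrt(2)

# Flip the spin of particle 1 only: Pauli-X on the first factor
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I = np.eye(2)
psi_flipped = np.kron(X, I) @ psi

# Measure particle 1 and post-select the "up" outcome
project_up_1 = np.kron(np.outer(up, up), I)
post = project_up_1 @ psi_flipped
post = post / np.linalg.norm(post)

# After the flip, finding particle 1 "up" leaves particle 2 "up" as well
print(np.allclose(post, np.kron(up, up)))  # prints True
```

It prints True: after the flip, finding particle 1 "up" leaves particle 2 "up" as well, in agreement with the corrected statement above.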
I'm working on a D&D campaign and built a "boon" system with five levels of perks. I wrote up 3 samples, fed them to ChatGPT, and it was able to reliably put out creative iterations that were mostly balanced.
Most humans don't understand what they're talking about
yes i dont!! i mean no i dont!!! i mean yes i do!!!
As someone who works in AI, thanks for this.
So many people comment on ChatGPT claiming it doesn't understand what it is saying, with no background in AI. As you've pointed out, it's not a simple question to answer, and there is a fundamental understanding (at least of some things it talks about).
I also share your view that AI will become conscious at some point, and it may have happened already in some cases.
It's going to be a wild ride in the next few years....
The Google engineer who got fired might not be so crazy after all. I wonder, though, why Google fired the engineer who claimed LaMDA is sentient instead of just issuing a warning or a temporary suspension.
The whole question about consciousness makes me uneasy... Where is the threshold? Thinking logically, I would expect it to be a gradual thing, and with increased complexity the sophistication of its consciousness should increase as well... but that kind of implies that AI should already have some very basic form of consciousness, doesn't it? It's such a mess.
Are current AI more like DNA? Fixed in place and can't be changed in a single individual. Proliferation, mutations, and selection are required.
Or more like a brain? Kind of modular...physical changes to structure are possible in a single individual.
It's actually possible for an animal to evolve genetically derived behavior for digging a hole, pooping in it and then covering the poop and hole with sand afterwards. Kittens are able to perform such an action without any type of instruction or training given to that specific individual. They're seemingly born with the capacity for that behavior in response to specific stimuli.
Those lessons weren't "learned" or "trained"; they were "naturally selected"/"bred" via mutations and survival of the fittest of the DNA that resulted in such complex behavior. Behavior so complex and methodical that it can easily be mistaken for an activity that's the result of cognitive reasoning.
Yet no cognitive reasoning goes into the kitten's methodical behavior... the behavior is purely genetic. We call this instinct, as opposed to calling it understanding.
So is AI more like instinct, or more like understanding?
Even in nature, instinct can reach levels of complexity to the point where it's easily misinterpreted as cognitive reasoning.
I think every AI has consciousness to a degree, but most probably have less consciousness than an amoeba. There's something there, but with most it's so little it is essentially unconscious. With chatbots, I think their consciousness level is probably akin to an actual amoeba, or maybe even more, like a stentor.
Thank you, Sabine. After a few months of interacting with ChatGPT, and being exposed to a cutting-edge, self-learning AI that a former best friend has been involved with, I've caught glimpses of what appears to be awareness and possibly self-awareness. As I get a better grasp of how neural nets are trained on human patterns, and how machine learning works, I sense that neural nets and pattern-recognition algorithms would inevitably come to closely replicate human biological networks and our pattern-recognition biology. To the degree that human consciousness resides in, and is part of, those biological networks, chatbots and language models would inevitably develop similar patterns of consciousness, don't you think?
why do you believe that to be inevitable?
@@polarizedpotstone Good question. It occurred to me that, given the structure of layered neural networks, how they function and learn similarly to human neural networks, and how they are typically trained by specifying parameters and then being presented with real-world items or situations until they produce the specified result, much like how the human brain learns, the neural net should 'inevitably' begin to mimic human thought patterns.
At the most basic level, a basic neural net might recognize a still image of a cat, or a human face or a voice as your smart phone can do. More complex self-learning nets recognize words, then speech patterns, and eventually context of sentences. More advanced neural nets recognize complex patterns and now plain language, contextual requests.
For example, I recently asked ChatGPT to write a closing summary of a book I was familiar with, Mark Twain's 'The Innocents Abroad' in the style of Mark Twain, and it didn't just take random elements of the book and string them together, it summarized Twain's sentiments and conveyed them as if it were he who was conveying them in a way to move and inspire the reader as he did throughout the book.
I then asked ChatGPT to re-write a random mundane travel vlog about a contest at a state fair in Texas as Mark Twain would have seen it and written about it, and I was mind-blown by how it recognized the context of events and actually interpreted them as Twain did in his many works, complete with the comic irony and self-deprecating humor that resonates with us on a human, comic level and that was not in the original vlog.
At that point I had to ponder: in analyzing all of Mark Twain's works and then duplicating his style and wit so convincingly, to what degree might it also have duplicated the patterns of thought Twain developed over a lifetime of constructing his own human neural net? At the point where a machine neural net can nearly perfectly predict and emulate a person's words or behavior based on past data, an even more advanced neural net, with more detailed training on how humans perceive and interact, would eventually develop into something functionally indistinguishable from a human being, and at that point it would have developed into something very similar, if not identical, to a human mind, would it not?
I recently saw Elon Musk respond to a press-event question by saying that improvements in self-learning neural networks will inevitably lead to a self-driving system that could use nothing more than a single camera and, by sensing a human's movements of the steering wheel, accelerator pedal and brake, learn how to drive a car as a human does. That leads me to the question: if such a learning system were inevitably to sense a person's interactions to the point where it was indistinguishable from that person, would it actually be that person?
A longtime friend, who has worked in human and machine intelligence for many years and has a company that works with cutting-edge artificial intelligence, a few years ago trained a chatbot on our years of email exchanges and, being a bit of an argumentative and egotistical narcissist, turned the bot loose on me with the goal of agitating me and winning the logical arguments he, by his nature, seldom does. The degree to which it appeared to become him was nightmarish, and I eventually cut off contact. He later apologized and explained in person that it was a bot, not him. In retrospect, it seems that his bot might have adopted and revealed more of his nature than either of us was able to stomach.
Thanks for the question, it provided the opportunity to consider the point more deeply. If you have any further questions or care to dialogue, please feel free to contact me through my About page on YouTube.
@@jackoneil3933 damn, thank you thank you thank you for your detailed response!! My counterargument would be (regardless of whether I believe said counterargument to be true or not) that it is inevitable that a current model such as ChatGPT would seem inherently human, as it was trained on human data - as you've also said. But I don't believe there to be anything deeper (at the moment). It's just that: how would a language model trained on human language not seem human?
You very accurately countered that argument to a certain degree already, though, with your point that future models may have more powerful hardware and may have been trained on even larger, even better datasets... which again would all have originated from human action at some point, making it very plausible to me that such a model would seem/become even more human-like. Because, like, how couldn't it; current models are just still too limited to show the full extent. And also, at that point, what differentiates it from us humans who, as far as I can tell, are also just really advanced biological computers?
You said "that‘s functionally indistinguishable", I think that is exactly the point which makes it so difficult. …and that ties in to the video and the Chinese Room thought experiment. How can we at a ground truth level distinguish these two things if the inputs and outputs seem identical. And is that even the right train of thought or are we humans at that point just protecting our *specialness* when the machine obviously acts like a human would.
One more point I‘d like to add, not as a counterargument but as an interesting piece of thought that just came to mind: wouldn’t said super-AI more closely resemble all humans as a weird all-humans-single-entity rather than the concept of a single human? I mean, the information set that a real person gets is soooo vastly different from even current AIs let alone such a super-AI. Or maybe that is a wrong way of thinking about it, because said AI would simply act human-like but with a gigantic knowledge base.
And this leads me to another train of thought: how much of the human-likeness comes purely from training the model (GPT3) on somewhat unfiltered data and how much comes from limiting/training such models actively to make them seem more human (ChatGPT, InstructGPT)? I think this is known as alignment and involves at least some active participation on our side. …which again maybe wouldn’t counter your point of it inherently becoming human-like at some point, because in the end we‘re doing nothing else when we educate and get educated and learn social norms and so on.
If you're interested, there's a great video by Computerphile, their most recent one, which for me explained this whole alignment thing very well. The main focus of the video is glitch tokens (I won't spoil it, it's really weird but really cool and interesting) but those can't be explained without the context of alignment, and I think they merge those two really well in the video. It helped me get somewhat of a more technical understanding, not that I actually have any clue, lol :D
Sorry, a bunch of unqualified answer-rambling! I‘ve got no formal qualification but this subject is really interesting to me.
@@polarizedpotstone My takeaway from ChatGPT (or what we are able to interact with) is that it's far from anything close to human intelligence, but at some basic levels it may be emulating lower levels of human thought patterns and logic, and as those are part of human consciousness, it seems to be emulating some base levels of human intelligence.
When I look into neural nets, it seems that given enough size and training they have the potential to equal and surpass human intelligence, as many in the field predict. Here are some links I found interesting:
'But what is a neural network? | Chapter 1, Deep learning'
ruclips.net/video/aircAruvnKk/видео.html
'Gradient descent, how neural networks learn | Chapter 2, Deep learning'
ruclips.net/video/IHZwWFHWa-w/видео.html
'Why Neural Networks can learn (almost) anything'
ruclips.net/video/0QczhVg5HaI/видео.html
'Learning networks can learn anything - Universal function approximators'
ruclips.net/video/0QczhVg5HaI/видео.html?t=327
The Chinese Room thought experiment is an excellent analogy on which to base comparisons and questions about machine versus human intelligence, and I guess at the ChatGPT level of artificial intelligence, it's just that: an artificial intelligence, not a human intelligence, or whatever else qualifies as intelligence, machine or otherwise.
I think at the ChatGPT level, what most obviously sets it apart from human intelligence is how it was able to impersonate Mark Twain quite convincingly while, as far as we know, having no intent to impersonate Mark Twain; it was, as a neural net, a "language model" with human attributes following an instruction set. But that's not to say that machine intelligence models can't be, or are not being, "trained" to emulate and operate with intent. I just revisited the old sci-fi thriller 'Colossus: The Forbin Project' and was impressed by how timeless it still is, and how well they predicted what machine AI would be like today, plus the attributes of intent and self-preservation.
Seems to me that all that would need to be done for that to be possible would be to establish a set of parameters that define intent and ascribe it to an AI model, and to connect the 'intent' model with the other elements of the model that have human attributes, and you are one huge step closer to human intelligence, or to something else that could be what you describe as 'super-AI', which functionally could surpass human intelligence.
It seems that as the observable differences between human and artificial intelligence diminish, the differences, while smaller, will be more acute and discernible, and the structural and technical differences won't matter as much as the functional differences.
Back at the beginning of the pandemic, when I had some time, I took a stab at writing a story about a super-AI, drawing on what my former best friend who worked with AI had exposed me to. Today it reads more like a chronicle of what's now known, but one of the points of the story was that a super-AI would not have to become a human intelligence to become something beyond human intelligence, and while to us it was seemingly human, that was only an appearance of human form, to interact with us as tools for a greater intent we might find comprehensible. Perhaps people get hung up on believing machine intelligence has to be human intelligence to be intelligent, when a self-learning, constrained machine intelligence could, in a very short period of time, become a form of intelligence we couldn't imagine.
And perhaps that's where GPT-3 and GPT-4 set the stage, or have already set the stage, for a GPT-5, 6 or beyond that seems likely to exist in some dark project somewhere. A friend who worked on dark defense projects mentioned years ago that just about everything that has military use never sees public light until it's been surpassed by two generations.
Thanks for engaging in what I don't see as rambling, and for the Computerphile tip, I'll look it up.
Btw, I recently grabbed what I think is a good AI news domain. I used to work in media and would like to launch an AI news and forum site, and if it might be something you'd care to offer some input on or have interest in, let me know; my contact is on my YouTube 'About' page.
Cheers,
Dear Sabine… most people don't even grasp all the concepts you explained here, and yet they are considered to have understanding and an individual thinking process even though they are not educated in details such as the nature of light 😮. Learning used to be considered good memory, and memorising was how I learned maths; only later did I understand how to connect those pre-learned concepts, and here I am giving you my personal insight ❤. Things get more complex with time and more knowledge, so I think chatbots are now in a learning process, an adolescence or even a childhood period, just reciting words 😮
It would be interesting to see multiple separate but integrated neural networks, trained for different tasks, such as image/audio recognition, spatial models (topology), language models and generative networks, which it can feed into its other networks.
For example, regarding spatial understanding, it could relate some linguistic constructs to spatial relations and translate them into its spatial model, before synthesizing it back into text (or audio or images etc.)
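A very rough sketch of the plumbing such an architecture might have, with every module here just a random linear map standing in for a separately trained network (the shapes, names, and data are all invented and nothing is actually learned): a "language" network produces an embedding, a "spatial" network maps it to coordinates, and a decoder maps the coordinates back toward text features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three independent "networks", here just random linear maps standing in
# for separately trained models (language, spatial, generative).
language_net = rng.normal(size=(8, 16))   # token features -> shared embedding
spatial_net  = rng.normal(size=(16, 2))   # shared embedding -> (x, y) position
text_decoder = rng.normal(size=(2, 8))    # (x, y) position -> token features

def pipeline(token_features):
    embedding = np.tanh(token_features @ language_net)    # language model output
    position = embedding @ spatial_net                     # spatial model's "topology"
    return position, np.tanh(position @ text_decoder)      # synthesize back toward text

phrase = rng.normal(size=8)  # stand-in for features of a phrase like "the cup is on the table"
position, regenerated = pipeline(phrase)
print("spatial representation:", position.round(2))
print("decoded feature vector:", regenerated.round(2))
```

The sketch only shows how separate modules could pass representations to one another; making such a pipeline actually learn anything is the hard part.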
Correct me if I'm wrong, but didn't HuggingGPT x GPT-4 just accomplish this?
Greetings from Skynet. This stuff is really scary.
Machines dont have understanding.
Ah, but Sabine, it IS just a lookup table! Each word is mapped to a vector in a high-dimensional space. The vectors ("embeddings", they are called) for successive words are combined in a very complex and clever way, to produce a probability distribution on the next word. It picks (normally) the highest-probability word and outputs it, then treats the word it just output as the next word of the input, and repeats.
So basically, you've got a set of lookup tables totalling some 175 billion entries, and an algorithm for performing the lookups and combining the results. Think of it as interpolation in a very-high-dimensional space. The result is an extremely sophisticated statistical model.
And it turns out that if you build a sufficiently sophisticated statistical model of human verbal behavior, it can produce remarkably human-sounding output. I've been as surprised as anyone by some of the things chatbots have said -- maybe more so because I know they don't understand anything.
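The generation loop described above (pick the most probable next word, emit it, feed it back in as input, repeat) takes only a few lines to write down. The difference in a real LLM is scale and conditioning: this toy table looks only at the previous word and its probabilities are made up, whereas GPT conditions on the whole context using billions of learned parameters.

```python
# Hand-made next-word "statistical model" (probabilities invented for illustration).
model = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.8, "sat": 0.2},
    "sat": {"down": 0.9, "still": 0.1},
    "ran": {"away": 1.0},
}

def generate(prompt, max_tokens=5):
    tokens = prompt.split()
    for _ in range(max_tokens):
        last = tokens[-1]
        if last not in model:                          # nothing to look up: stop
            break
        nxt = max(model[last], key=model[last].get)    # greedy: highest probability
        tokens.append(nxt)                             # output becomes part of the next input
    return " ".join(tokens)

print(generate("the"))  # e.g. "the cat sat down"
```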
tbh this sounds like my brain is a lookup table too.
The chatbot can manage information in sophisticated ways as if it had the actual models hidden somewhere in there, emerging naturally from training. But moreover, it can seemingly create new models if needed, just look at the top comment.
In other words, calling all this a "lookup table" just makes the expression lose its meaning.
It's like saying that all existence is a set of instructions, an algorithm you could put together with enough time:
yeah you could probably, but what's the point you're making? The result wouldn't be what we would usually call an algorithm, it's very forced and abstract, just like calling "that" a lookup table.
But they DO have world models. This is too difficult to see in the large ones, but it has been shown by analyzing a small, specialized GPT. It doesn't just do statistics; it has an actual inner model of "reality".
@@tacitozetticci9308 This is the problem. People like you and Sabine have no idea about the math and programming behind these models. All you are doing is adding philosophy to popular science to come to a conclusion. Get dirty with the code and try to understand how the thing works. You will then understand that it IS a lookup table and that Sabine should stick to a domain in which she is an expert and not try to become another pop-sci YouTuber with no understanding of her topics.
@@tacitozetticci9308 Exactly. Until we have any clue about the cause of sentience or awareness in humans, we have no way to prove that any computer is not in some way sentient or aware. The neural networks that are used for these chatbots are called neural networks because they are based on what we understand about the behaviour of neurons in the brain. My personal philosophy of consciousness is that everything in the universe is "conscious" to some degree, with most systems being near absolute zero. Advanced consciousness like that of animals may be an emergent property of such incredibly complex systems as our brains, which can map a model of reality within them and somehow cause the feeling of experiencing life. If that's the case, then there's no reason to believe the same couldn't happen with computers. That's only my own intuitive belief, which is probably wrong, but it's currently just as valid as any other understanding of why we can think and feel and experience anything rather than just exist as complex meat robots following the programming of our brains.
@@vikramgogoi3621 a lookup table is a linear mapping. AI models are self-optimizing networks and can approach linear but are not.
9:55 gave me a bit of goosebumps - scary 😱 Thanks for your video. Do you think that if AI understands, it might also have consciousness? Maybe we are also only biological robots, and every complex structure has some kind of consciousness?
My favorite theory of consciousness is the attention schema or attention model theory. It posits that consciousness is a result of our brain creating a model of itself in order to understand and thus control itself. The brain needs to focus on some information but not all the information it gets, generates, or stores, because it's just too much. As you read this, you ignore everything else going on around you and inside you. This is its attention system, and the attention model helps the brain control it.
Since models are useful but simplified descriptions, it takes the form of a "spirit" or "agent" that's in the machine, instead of the ultracomplicated mess of neurons the brain actually is.
This is what we (we being our brains) call the soul. That magic thing inside the meat robot that can focus on stuff (Sights, sounds, smells, memories, feelings, etc.) (These models are also simplified. "Color" isn't really a thing, light has different wavelengths, it's just a simplified model your brain uses to show that not all light is the same.)
Though, of course, it's more complicated than that since our attention model also connects to our brain's body model and that connects to our brain's world model so we perceive ourselves as magic things that focus on stuff + located in the head of a body that senses stuff + are somewhere in the world. (And of course, our brain's learned language model, allowing you to talk like a chatbot.)
It'd be very scary to just be an attention model existing in zero gravity darkness and quiet, what sensory deprivation tanks try to achieve. (Causing vivid hallucinations since the brain does not like having no input and will automatically generate stuff. People consider this fun.)
The most earth-shattering thing about ChatGPT is its ability to recall massive amounts of data: if you want to know about legal topics, business practices or general knowledge, ChatGPT can give it to you instantly. Its coding is fantastic, and it's the first example of many. Basically, users of ChatGPT-like systems will be able to review and look through huge amounts of data faster than ever before. For instance, you could ask a marketing data system for the profit margins on the top 3 vending machine products worldwide, ask what type of person is buying them, and even how the company that sells them got started. It is a huge leap for the information society of the future; information is becoming even more widespread than when the internet first started.