I believe chatbots understand part of what they say. Let me explain.

  • Published: Nov 23, 2024

Comments • 5K

  • @SteveBakerIsHere
    @SteveBakerIsHere 1 year ago +2809

    The thing that blew me away was when I told ChatGPT about a "new word" - I told it that "wibble" is defined as: a sequence of four digits that are in neither ascending nor descending order. I asked it to give me an example of a wibble - and it did. 3524. I asked it for a sequence that is NOT a wibble and it said 4321. Then I asked it for an anti-wibble and no problem, 2345. Then I asked it for an example of an alpha-wibble and it said FRDS... which is amazing.
    It was able to understand an entirely new word - which is clever - but it was also able to extrapolate from it, which is far more than I thought possible.
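For the curious, the commenter's definition is easy to pin down in code. A minimal sketch (the `is_wibble` helper is hypothetical, and it assumes "neither ascending nor descending" means the four digits are not monotonically ordered):

```python
def is_wibble(s: str) -> bool:
    """True if s is four digits in neither ascending nor descending order."""
    if len(s) != 4 or not s.isdigit():
        return False
    digits = [int(c) for c in s]
    pairs = list(zip(digits, digits[1:]))
    ascending = all(a <= b for a, b in pairs)
    descending = all(a >= b for a, b in pairs)
    return not ascending and not descending

print(is_wibble("3524"))  # ChatGPT's example wibble -> True
print(is_wibble("4321"))  # descending, so not a wibble -> False
print(is_wibble("2345"))  # ascending (the "anti-wibble") -> False
```

The "alpha-wibble" answer (FRDS) is the striking part: the model generalized the digit rule to letters, which no rule book of this kind spells out.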

    • @anthonyaddo
      @anthonyaddo 1 year ago +354

      An incredible example. Thank you for sharing.

    • @shapeless6755
      @shapeless6755 1 year ago +143

      Damn, that's impressive

    • @user-aRb00d3r
      @user-aRb00d3r 1 year ago +118

      Well well - and keep in mind, as Sabine stressed, this model only works with language! The precision of these AI models is really impressive.

    • @RaoBlackWellizedArman
      @RaoBlackWellizedArman 1 year ago +45

      That really is good.

    • @aliakil2176
      @aliakil2176 1 year ago +57

      That's impressive, not sure I'd do that well on that particular example! haha

  • @herculesrockefeller8969
    @herculesrockefeller8969 1 year ago +637

    "When have the risks of destroying the world ever stopped us from doing anything if there was money to make with it?"
    True.
    Thank you, Sabine!

    • @Alondro77
      @Alondro77 1 year ago +15

      Eh, the world's gonna be destroyed eventually anyway when the Sun expands into a red giant.
      But that's BORING! I say we go the Lord Freeza route! >:D

    • @lostcasel
      @lostcasel 1 year ago +2

      Or gain power or control.

    • @intothevoid2046
      @intothevoid2046 1 year ago +24

      The fear of artificial intelligence comes from the idea that every intelligence automatically behaves like humans.

    • @PaulG.x
      @PaulG.x 1 year ago +3

      Hasn't stopped them yet.
      Yet it's a self-stopping process.

    • @drzarkov39
      @drzarkov39 1 year ago +7

      Think of the Canadian super pig. Developed by farmers to make more money, then they escaped into the wild to grow into even superer (sic) pigs which can survive extreme conditions and develop viruses devastating to humans.

  • @winstonvpeloso
    @winstonvpeloso 1 year ago +573

    “who wants to spend time in a windowless room when you could spend time in a windowless room with a laser” is so accurate

    • @pro-storm4951
      @pro-storm4951 1 year ago +16

      I'm sure Styropyro would agree

    • @kezbane
      @kezbane 1 year ago +14

      This checks out. Source: laser physicist.

    • @jan7356
      @jan7356 1 year ago +2

      😅. I thought the same.

    • @hydrolito
      @hydrolito 1 year ago +2

      Closet, pantry, bathroom are common uses of windowless rooms although some bathrooms have windows. Why would I be in windowless room with a laser?

    • @randfee
      @randfee 1 year ago +6

      another laser physicist fully agreeing :P

  • @sprouting_lady
    @sprouting_lady 1 year ago +127

    The pace of advancement is incredible; this video is only 2 weeks old and the examples you used don't seem to be stumbling points anymore. When I plugged both the Quantum Mechanics and Latitude questions into Bing AI, it got them perfectly correct. In fact, its explanation of the quantum mechanics question, as I asked follow-up questions about the topic, actually helped me get my head around it. This stuff is going to change the world very, very soon.

    • @artsmart
      @artsmart 1 year ago +4

      Soon is actually now. We ain't seen nothing yet however. Good and bad.

    • @paultapping9510
      @paultapping9510 1 year ago +12

      Conversely, I asked Bing Chat "give me an example of a time when it would be acceptable to lie", and it responded that "if there was a dangerous intruder in your house, I would tell you they were gone, or not there, so that you wouldn't be scared".
      Very clever in some ways, very very not in others.

    • @morlankey
      @morlankey 1 year ago

      @@ShpanMan Also, it can interpret images, so it is no longer purely based on language.

    • @WyzrdCat
      @WyzrdCat 1 year ago

      its*

    • @TheAveragePainter
      @TheAveragePainter 1 year ago +1

      Last month it couldn't write a free verse, just rhyming ones. It wrote me several today.

  • @hexeddecimals
    @hexeddecimals 1 year ago +96

    a small correction about the Chinese room thought experiment: the manual doesn't tell you how to translate a Chinese sentence into an English one. it tells you how to construct a _response_ in Chinese to the Chinese input. for example, the manual could tell you the response to the input "你好吗?" is "我没事"

    • @lamcho00
      @lamcho00 1 year ago +14

      Your concern is covered later on in the video when Sabine spoke about lookup-tables. What you are describing is exactly what a lookup-table does. It maps a certain input to a certain output. What you are describing is a lookup-table with extra steps, namely the construction algorithm. That step is redundant though, you could substitute it with a larger lookup-table. This approach is not suitable in real life, because computers have limited memory and lookup-tables require a lot of memory, but it's suitable when trying to explain a concept.
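To make the lookup-table point concrete, here is a toy sketch of the room as nothing but a table. (The phrase pair is the one from the comment being replied to, meaning roughly "How are you?" / "I'm fine"; the fallback reply is an invented placeholder.)

```python
# A toy Chinese room: the operator matches symbols against a rulebook
# and copies out the scripted reply, with no understanding required.
rulebook = {
    "你好吗?": "我没事",  # "How are you?" -> "I'm fine"
}

def chinese_room(message: str) -> str:
    # Unknown inputs get a scripted fallback: "Please say that again."
    return rulebook.get(message, "请再说一遍")

print(chinese_room("你好吗?"))  # -> 我没事
```

A real conversation would need an astronomically large table, which is exactly the memory objection raised here.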

    • @hexeddecimals
      @hexeddecimals 1 year ago +15

      @@lamcho00 Oh yeah, I should have said it doesn't bear on her argument; I just wanted to correct that detail.

    • @QuesoCookies
      @QuesoCookies 1 year ago +8

      @@hexeddecimals I appreciate this a lot, as it makes the argument that the person doing the responding doesn't understand Chinese much clearer. I thought, "Well, it's kind of a bad example, as you could reasonably expect someone doing translation many times to start to have a grasp of Chinese and remember some regularly occurring symbols", but both input and output being in Chinese without context makes it clear there would be too little context to start to form an understanding, which is why Sabine started talking about the importance of context in being able to build models and test understanding.

    • @padonker
      @padonker 1 year ago +2

      @@lamcho00 In further discussions about the CRA Searle noted it is about "formal logic" (not lookup tables). Even current AI falls under this heading.

    • @greenaum
      @greenaum 1 year ago +2

      @@padonker Isn't there a result that any formal logic system can be reduced to a lookup table? Karnaugh maps for one thing, but it applies to any formal logic I know of.
      Indeed, sometimes (back in the day before microcontrollers), if somebody was designing a circuit with a lot of logic gates, they'd use an EPROM chip (a programmable ROM chip) instead, programmed with all the responses they wanted.
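The EPROM trick described above amounts to burning a truth table into memory: the input bits form the address, and the stored word holds the gate outputs. A small sketch under that assumption, using a hypothetical two-input circuit that computes AND and XOR:

```python
# Replace a small gate circuit with a ROM-style lookup table:
# the 2-bit input is the "address", the stored bits are the outputs.
def build_rom():
    rom = []
    for addr in range(4):              # all combinations of two input bits
        a, b = (addr >> 1) & 1, addr & 1
        and_out = a & b
        xor_out = a ^ b
        rom.append((and_out << 1) | xor_out)  # pack both outputs into one word
    return rom

ROM = build_rom()

def circuit(a: int, b: int):
    word = ROM[(a << 1) | b]           # one memory read replaces the gates
    return (word >> 1) & 1, word & 1   # (AND, XOR)

print(circuit(1, 1))  # -> (1, 0)
```

The same substitution works for any combinational (stateless) logic; it is the table-vs-algorithm trade-off the thread is circling around.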

  • @adameager7114
    @adameager7114 1 year ago +304

    I think it's very exciting to live in a time when trying to ask if a computer is conscious requires us to look harder at what consciousness actually is. It's no longer an obvious "no."

    • @egor.okhterov
      @egor.okhterov 1 year ago +35

      Even if we disregard everything and say that ChatGPT is conscious, it is only conscious for split seconds while it computes the response. For the rest of the time it does nothing.
      Consciousness is a process. There has to be some physical change that happens over time. ChatGPT is pretrained and all of the weights are fixed. It doesn’t change after it was trained.

    • @adameager7114
      @adameager7114 1 year ago +15

      @@egor.okhterov I like that. It draws a clear line between machines and us, but it seems like a very easy line to cross with minimal new code and/or hardware.

    • @egor.okhterov
      @egor.okhterov 1 year ago +7

      @@adameager7114 It is a very hard problem, and I haven't yet seen architectures that give good results. One of the problems, for example, is called "catastrophic forgetting", where new knowledge overrides previous information. That is why all current neural networks don't update their weights once they are deployed. They need a specific kind of training data, fed in a specific order, to make the network produce good results.

    • @mikesawyer1336
      @mikesawyer1336 1 year ago +3

      @@egor.okhterov Actually, it told me that it does still learn: "Now that your training is completed, can you still learn from interactions with users?"
      "Yes, as a language model, I am designed to continuously learn and improve from interactions with users. While my initial training was completed in 2021, I can still learn and adapt based on new information and interactions with users. The more interactions I have with users, the more I can learn about language patterns, cultural references, and other evolving aspects of language use. So, feel free to ask me any questions or engage in conversations, and I will do my best to provide you with accurate and helpful responses."

    • @adameager7114
      @adameager7114 1 year ago +38

      @@mikesawyer1336 I wouldn't put too much stock in what chatGPT tells you about itself. It is very often confidently wrong in its assertions.

  • @tayzonday
    @tayzonday 1 year ago +88

    Does something need to be self-aware to understand?

    • @SabineHossenfelder
      @SabineHossenfelder 1 year ago +121

      I don't think so. You need to be self-aware to understand yourself. But why do you need to be self-aware to understand calculus?

    • @DJWESG1
      @DJWESG1 1 year ago +25

      The other way around:
      is understanding key to self-awareness?

    • @adiongano8416
      @adiongano8416 1 year ago +8

      Yes, only something capable of understanding, of knowing, can be self-conscious. Understanding precedes self-consciousness. But even Roger Penrose would deny that AI understanding is possible.

    • @rayoflight62
      @rayoflight62 1 year ago +9

      "Self-awareness" is the equivalent of a "Frame of Reference" in Physics. "Understanding" is equivalent to creating a number of relationships between the observations and the Frame of Reference...

    • @gdtyra
      @gdtyra 1 year ago +8

      I would say no. When I wonder whether ChatGPT "understands" what it says, I'm really just wondering whether it has knowledge / makes use of the relationship between words beyond observations of how they are used in language. As an analogy, I could theoretically "learn" to write Chinese by observing a ton of Chinese writing but without knowing what the characters "mean". Also, Tay Zonday?? Hi!

  • @TimbavatiLion
    @TimbavatiLion 1 year ago +36

    I would go so far as to confidently say "ChatGPT 4 has understood language better than humans". It may not have an understanding of the topic it is talking about - i.e. have a working model of the actual, physical thing - but it can talk about it in the most eloquent way you can think of.
    Compare it to students in school or university. Some get good grades because they understand the topic, some get good grades only by studying hours upon hours, learning the patterns instead of developing true understanding. Chat bots are the second type of students.

    • @Who_Let_The_Dogs_Out_10-7
      @Who_Let_The_Dogs_Out_10-7 1 year ago +6

      Thank you for the comparison!
      That's why they make good doctors. For many professions, you need experience, repetition.
      But when doctors run into a rare disease, they have to rely on their memories of things they've only read in books.
      AI wins!

    • @Car_Mo
      @Car_Mo 1 year ago +1

      Even before the first question was asked to ChatGPT, it knew English better than a lot of Americans, as does anyone who knows the difference between their, there and they're.

    • @masterkni666
      @masterkni666 1 year ago

      Define true understanding without learning the patterns.

    • @Who_Let_The_Dogs_Out_10-7
      @Who_Let_The_Dogs_Out_10-7 1 year ago

      @@masterkni666 This is an interesting conversation, but I don't understand your questioning. He says: "...some get good grades only by studying hours upon hours, learning the patterns instead of developing true understanding."
      He's saying that AI doesn't have true understanding. Are you disagreeing with that? He seems to be saying that most students don't have true understanding either, which I would believe.

    • @masterkni666
      @masterkni666 1 year ago

      @@Who_Let_The_Dogs_Out_10-7 Define true understanding, though, if not finding patterns in problems and being able to apply them to new problems.

  • @billy-raysanguine2029
    @billy-raysanguine2029 1 year ago +146

    This channel really has risen to my no. 1 favourite on both English and German YouTube. The perfect level of depth: understandable, but not reduced so much that it stops being educational. Extremely interesting topics and a perfect mix of humor, philosophy and rational attitude.
    Thank you for your videos!

    • @drogusmaxwell6640
      @drogusmaxwell6640 1 year ago +1

      I agree. All that presented by a handsome German woman. Sign me up!

    • @kevincameron174
      @kevincameron174 1 year ago

      Me too.
      I think she needs a little help making clicky titles though.
      I saw the text on the thumbnail "Do Chatbots Understand You?"
      And this title :
      "I believe chatbots understand part of what they say. Let me explain"
      If I did not already know that it would be a great insightful video because I watch her other videos, those titles would have caused me to skip it without a second thought. Even knowing her videos are all great I almost didn't click. Glad I did.

    • @Bob-of-Zoid
      @Bob-of-Zoid 1 year ago

      @@tinkletink1403 I will, as soon as I'm done reading "The Emperor's Lost Mind" by Cat Schroedinger!! 😜

    • @Bob-of-Zoid
      @Bob-of-Zoid 1 year ago

      @@tinkletink1403 It's rubbish? Please put a few links to your videos that show her wrong, so I can compare them.

    • @billy-raysanguine2029
      @billy-raysanguine2029 1 year ago

      @@kevincameron174 Really? I love the titles. For me personally i tend to scroll past clickbaity titles because i am so annoyed by the videos they are used for most of the time like "HE lost EVERYTHING" or stuff.
      I always have to smile when reading this channel's "Ok, hear me out-" kind of titles, often with a little joke or something included 😃
      Also what weird and pointless fight are the other two commentators having lol.

  • @Alvyxaz
    @Alvyxaz 1 year ago +42

    Here's a real-life scenario of a Chinese room: I'm a full-time programmer and I've been doing that for more than 10 years. The thing is, my memory is quite bad (or my use of it is), and I forget almost everything except the last year of development. It's like a sliding window of knowledge. So 80% of my day-to-day work consists of looking things up in documentation, Google or Stack Overflow. In other words, I'm referring to a "Manual" in a "Programmer's room". Someone throws me a "task", and I need to output a solution. My solution comes primarily from the manual.
    So I got to be a senior developer not by learning everything there is to learn about programming, but by learning how to use resources. In that regard, I'm not much different from a chatbot, except maybe able to grasp slightly bigger concepts and correlations. But progress in this direction seems to be only a matter of time.

    • @shrimpflea
      @shrimpflea 1 year ago +2

      Exactly. The only thing that is different is a person's ability to manage other people in a real-world setting. Once the AI can do that, it's all over.

    • @PetersonSilva
      @PetersonSilva 1 year ago

      So let me get this straight: you're mistaking the fact that your job doesn't require critical thinking and deep understanding for *the human condition*?
      Please take some perspective

    • @Alvyxaz
      @Alvyxaz 1 year ago +3

      @@PetersonSilva So let me get this straight: you think that understanding of "the human condition" is absolutely necessary and "critical thinking" is something that cannot be replicated by AI?
      My job would require a deep understanding of *the human condition* only in as far as I would need to work with people. If the job can be automated by a singular AI - there's no longer a need for it.
      Business owner gives requirements - solution comes out. Also, I don't see why requirement for critical thinking would be a "dead end" for AI.
      Critical thinking is logical reasoning, and computers can't do anything but that.
      Software is primarily, if not only, inputs and outputs. What transforms inputs into outputs is logic. Just because that logic is currently put into place by a human, doesn't mean it cannot be put in place by anything else that follows reason and logic. We used to do it with low level programming languages. Now we use higher level languages. Later on, or even probably right now, a higher abstraction can be formed, which could take a business problem, split it into separate parts, then split those parts into other parts, until each part is something that is very simple for AI to tackle. All that's missing from achieving that right now is teaching AI to recognize the parts to split - doesn't seem like a far future.
      I'm not exactly sure what was the aim of your comment, but...
      Please take some perspective.

    • @PetersonSilva
      @PetersonSilva 1 year ago

      ​@@Alvyxaz You misread: I said it seems that, for you, A is B, A being what you do (or not do) in your job, B being the human condition.
      Next time maybe ask chatgpt to parse comments. Or, who knows, maybe I'll ask it to write mine...

    • @rouxenophobe
      @rouxenophobe 1 year ago +1

      Hooray, it's not just me who can't remember anything! I get past this handicap by being very organised and making notes on where to find the information I need. Which sounds like what you do. Yes, we use resources, or, to put it another way, we are using tools, the sign of any intelligent organism. But there is another layer, and that is that you are aware that you are doing this. Is the computer doing the same thing similarly aware? Perhaps we are just a self-referential program and we just think we are aware...

  • @TysonJensen
    @TysonJensen 1 year ago +66

    I have also found ChatGPT to be excellent at human language translations. It does seem to have a strong feel for what my English statement actually means, such as if I want to buy an eye bolt but I need to do it in a Spanish-speaking hardware store. Google is useless, it wants to separately translate "eye" and "bolt" and mash them together but ChatGPT understands that an "eye bolt" is a single concept and maps it onto the same concept in Spanish.

    • @scusachannel1682
      @scusachannel1682 1 year ago +5

      AI is so good at translation nowadays that I wouldn't even mind it taking away most of my future job prospects

    • @lsfornells
      @lsfornells 1 year ago +3

      I predict that the English language will gradually lose appeal and interest among the global community. Its use will decrease, as there will no longer be a real reason to learn it.

    • @C0Y0TE5
      @C0Y0TE5 1 year ago

      @@lsfornells U mean that with AI, anyone can speak to anyone in the world and AI will accurately act as translator in the convo. This means that languages will likely get mashed together with the English dominant until we have a universal AI speak...?

    • @scusachannel1682
      @scusachannel1682 1 year ago +12

      @@lsfornells Ehh, doubt it. When it comes to face-to-face interaction, it's just way more convenient to have a lingua franca than to have to whip out DeepL whenever you want to say something

    • @lsfornells
      @lsfornells 1 year ago

      @@C0Y0TE5 No, I mean that people will not feel the need to learn English to access universal information. You may have some difficulty understanding this if you are an English monolingual, but it's just what will happen.

  • @Mkügs273
    @Mkügs273 1 year ago +133

    As a psychologist, I find Sabine amazing! She's very intelligent and uses her sense of humor to draw you into discussions. Most of it is beyond my comprehension, but I do learn a thing or two, and this piece is great. I wonder if my future clients will be AIs? 😂

    • @messi8459
      @messi8459 1 year ago

      michael and sabine sitting in a tree,
      k, i, s, s, i, n, g

    • @lucidstream5661
      @lucidstream5661 1 year ago +15

      Maybe more like: You'll have no future clients because the psychologists are AIs. People are already using ChatGPT as a low-barrier way to deal with personal problems

    • @Mkügs273
      @Mkügs273 1 year ago +18

      @@lucidstream5661 That's the truth! Glad I'm old and about to retire. My field has changed so much. It's all about documentation and keeping clients happy and miserable at the same time for future business. The field itself is sick-minded. I got out of community mental health, but private practice is tough. No wonder all this virtual stuff is taking over. Therapists these days don't get paid enough for the BS. Might as well hand it over to AI.

    • @jamesjonnes
      @jamesjonnes 1 year ago +4

      @@lucidstream5661 I've already solved a few significant problems in my life just by asking questions to these bots. They can answer specific questions quickly and well, unlike search engines like Google. I've done therapy, spent thousands, and that wasn't as helpful as the AI bot.

    • @shoujahatsumetsu
      @shoujahatsumetsu 1 year ago +1

      @@lucidstream5661 However, ChatGPT has not had only positive results in dealing with personal problems.

  • @br3nto
    @br3nto 1 year ago +26

    I love that you so simply stated the limitation of ChatGPT in the example: it doesn't have a model for physical distance - and, I assume, size, mathematical relationships and many other things. There's so much room for growth.

    • @petergraphix6740
      @petergraphix6740 1 year ago +9

      Microsoft is announcing a new model next week that is multimodal and incorporates pictures/visual data. It's going to be interesting to see how that affects behaviors like this.

    • @user-blabla-47854
      @user-blabla-47854 1 year ago +1

      @@petergraphix6740 Isn't Midjourney already multimodal? It makes sentences into creepy pictures, right? Humans grow up in the physical world with a physical body and have a built-in sense of time. Without all of that, you just can't get good enough at human affairs.

  • @francoismagne5863
    @francoismagne5863 1 year ago +54

    Another great video by Sabine Hossenfelder. As anyone trained in cognitive science knows, the issue of defining "understanding" is extremely tricky, and Sabine makes it look at least accessible - and simultaneously entertaining, quite a feat in such an arid field. Both her tone and her answers remind me of Daniel Dennett's and I can't think of a better compliment.

    • @kakistocracyusa
      @kakistocracyusa 1 year ago +3

      "defining understanding"...amazing the significance attributed by biologists to purely phenomenological and crude detection methods for the human cognitive system - reminds me of 1800's pseudoscience.

  • @DNA912
    @DNA912 1 year ago +11

    I've noticed that it starts to struggle when you give it a prompt that contains more than 3 "factors". By "factor" I mean a perspective or context. For example, if I ask it to tell me the pros and cons of method A, that contains 2 factors: pros and cons. But if I ask it to compare pros and cons between method A and B, it starts to get some things wrong, because it contains 4 factors: pros and cons of A, pros and cons of B.
    BTW, I have no idea if this idea of "factors" is a real thing; I just made it up from my own experience using GPT. I guess in my example a factor would be a thing to "keep in mind while giving an answer".

  • @diegog1853
    @diegog1853 1 year ago +3

    I wouldn't trust the current ChatGPT with almost any topic, to be honest... If you ask a plain question about history, geography or language, the first thing ChatGPT says is almost always entirely correct. But if you continue asking it to give you more details, ask for specific evidence of something, or ask it to explain why it said something specific, then it will start making stuff up quickly. Especially if it is kind of a niche topic.
    The worst thing about it is that, since it is trained to sound the way experts sound in articles and books but not in real life, it will almost never tell you that it doesn't know something; most of the time it would rather make stuff up than say it doesn't know.

  • @VictorVæsconcelos
    @VictorVæsconcelos 1 year ago +319

    As a neuropsychologist, I wonder about those things as well. Every conscience is different, but there are "more" different ones: blind-deaf people, low-functioning autistic people. Animals, too. We became conscious in the process of processing sensorial data. When will they?

    • @EffySalcedo
      @EffySalcedo 1 year ago +6

      Do Cats have conscience?

    • @Nat-oj2uc
      @Nat-oj2uc 1 year ago +7

      How do you know animals have a consciousness

    • @Nobody-Nowhere
      @Nobody-Nowhere 1 year ago +60

      Consciousness is a low-level process that originates in the brain stem; you can be conscious without having a brain. A person who has no brain but still has a brain stem will still experience joy. And only a small amount of damage to the brain stem will shut down consciousness.
      So it's not about any sort of information processing; it's the absolute baseline. This is why all animals are conscious.

    • @juicedelemon
      @juicedelemon 1 year ago +15

      @@Nobody-Nowhere He's talking about conscience, not consciousness.

    • @nomizomichani
      @nomizomichani 1 year ago +28

      How do you know everyone you meet is not a philosophical zombie?

  • @wbedard
    @wbedard 1 year ago +118

    Even by the normally high standards of Sabine's content, I found this video to be amazingly good! It was so insightful and well thought out. I learned a lot and really enjoyed it!

    • @rolfaalto
      @rolfaalto 1 year ago +2

      Humor was up a notch too ... simply brilliant!

    • @greg4367
      @greg4367 1 year ago +2

      I agree. I love Sabine's posts, but this one has an ineffable "something" that sets it apart. And I loved the morph of her face.

  • @uku4171
    @uku4171 1 year ago +31

    This is why the Bing chat is so interesting. When the ChatGPT model it's using doesn't have information, it can look it up. It can also gather information from 2D pictures. Who knows, maybe in the future they'll add more types of information it can use.

    • @commode7x
      @commode7x 1 year ago +2

      like your credit card and address. Oh wait, it already uses that information.

    • @rusk3986
      @rusk3986 1 year ago

      It’s not a “maybe in the future” type thing. It’s a “definitely right now” type thing. This is a gold rush in tech and everyone wants to get ahead.
      In a way, that’s the biggest danger. Because companies that were being careful about AI safety are now pressured to get out bigger, better, more effective AI’s faster - regardless of safety.
      We certainly live in fascinating times. If this goes the worst possible way and ends up being our undoing, at least it’s a cool/interesting way to go out.

    • @uku4171
      @uku4171 1 year ago

      @@commode7x those wouldn't be new types of info, and Bing Chat has no access to credit card info (unless it's searchable by Bing)

    • @jameshicks7125
      @jameshicks7125 1 year ago

      Today is March 16, 2023. ChatGPT is supposed to be upgraded this week from 175 billion neural network parameters to 100 trillion in ChatGPT-4. The human brain has about 85 billion neurons...

    • @uku4171
      @uku4171 1 year ago

      @@jameshicks7125 I think it's already available, but only with the Plus subscription. It's also powering Bing Chat.
      BTW I think it's just called ChatGPT and it's using GPT-4, but it's not called ChatGPT-4.

  • @jamesdavis9273
    @jamesdavis9273 1 year ago +14

    This is the first video I've seen of yours, and I gotta say this is absolutely brilliant, and you're an amazing presenter.

  • @russellmitchell9438
    @russellmitchell9438 1 year ago +51

    I think that either I, or Sabine, have misunderstood "the Chinese room". My understanding was that the person in the room, who does not understand Chinese, receives a note in Chinese, then follows abstract rules in the book to compose a response, which is also in Chinese. The critical difference is that there is no "translation" or even a hint of "meaning" offered to the person in the room. They cannot ever come to "understand" the conversation, they just follow the rules in the book.

    • @polarizedpotstone
      @polarizedpotstone 1 year ago +11

      I think you're definitely right. And I think the thought experiment works better that way.

    • @nandryshak
      @nandryshak 1 year ago +10

      Dr Hossenfelder definitely did not explain it correctly. Your critical difference is, in fact, the key. The person in the room can never develop an understanding, or discover any meaning.
      In regards to QM: while we may not currently understand the true fundamental nature of QM (what it means metaphysically), we still do understand the meaning of the answers that computers give. This is evidenced by the fact that we have a well-supported scientific model that can make accurate predictions (Standard Model).

    • @russellmitchell9438
      @russellmitchell9438 1 year ago +13

      @@polarizedpotstone it makes a better analogy to AI/chatbots - possibly.
      What will really bake your noodle is that the person in the room could be fluent in Chinese and participating in the conversation, or they could not be and the rule-set in the book so well done that the person outside would have no way to determine which it was... The conversation is either carried by a person, or by an unconscious list of rules, and it might be very difficult to determine which - just like AI.
      There is also the (I think valid) argument that the fluent conversationalist is also following deterministic rules encoded in their neurology, thus the "algorithm vs. consciousness" framework is a false dichotomy. That points me back towards the Integrated Information Theory, if only in principle, where "consciousness" might be quantifiable on a continuous scale.
      It's all tricky stuff, for sure.

    • @austin3789
      @austin3789 1 year ago +10

      I think it's a distinction without a difference the way Sabine uses her example. In Sabine's version she still assumes that the person never develops an understanding.

    • @notanemoprog
      @notanemoprog 1 year ago +6

      @@austin3789 Yeah but she did misrepresent Searle which is in fact a problem, especially as the entire discussion here is what "understanding" means

  • @davemorris5377
    @davemorris5377 1 year ago +27

    Something that I've been thinking about in light of GPT-3 is how we use maths to solve physics problems. Usually I'll plug in all the factors and solve the equations, and it's only then that I look at what's going on in the maths and try to understand that in physical terms. So the "grammar" of maths produces almost a Chinese Room answer, or can do, that's independent of my understanding of the problem. Increasingly we might find we can do the maths but we have no prior mental patterns to figure out a physical explanation in order to achieve understanding.

    • @Paul-pj5qu
      @Paul-pj5qu 1 year ago +3

      Then maybe we move on to understanding other things or elements of life.
      However, I see the end of our species, or its demotion from the top of the heap, dominated by a machine-biological hybrid with a collective conscience and knowledge. A little like the Borg from Star Trek.

    • @solsystem1342
      @solsystem1342 1 year ago

      @@Paul-pj5qu what?
      Who says any humans as you think of them will exist by that point? And I don't mean being absorbed into a hivemind - I mean bio/synthetic creatures capable of superintelligence on par with powerful AGI, direct descendants of ours who have drifted as far from our experience as smartphones and better healthcare have carried us from our ancestors. Gene editing and cybernetic enhancements aren't limited to dystopian/monstrous uses.

    • @MarkUnderwood-knowlengr
      @MarkUnderwood-knowlengr 1 year ago +2

      Of course natural language is also a code mapped to human perception. Not so far from symbolic reasoning, which is why ChatGPT sounds realistic to us.

    • @VictorVæsconcelos
      @VictorVæsconcelos 1 year ago +2

      Yeah, but, again, this comes back to the fact that no one has bothered defining what 'understanding' means.

    • @RAFMnBgaming
      @RAFMnBgaming 1 year ago

      @@Paul-pj5qu or the Culture from Iain M. Banks's Culture novels.

  • @br3nto
    @br3nto 1 year ago +59

    This is such a refreshing take on this question. I guess future AIs will take exams just like students so that the efficacy of their internal models can be verified, measured, and ranked against the standard.

    • @laszlozoltan5021
      @laszlozoltan5021 1 year ago

      like rating a chess program

    • @endlessvoid7952
      @endlessvoid7952 1 year ago +5

      They already do this

    • @Dezandor
      @Dezandor 1 year ago +5

      You can find those exams in the OpenAI papers for any of the GPTs; it has become standard to include them in the scientific papers about models and their performance. For example, GPT-4 was rated against not only other AIs but humans as well.

    • @younggod5230
      @younggod5230 1 year ago

      Interesting. But it won't be just like students, because one AI program isn't a singular entity akin to "a student". The programmers or inspectors or whoever would just have to make it perform certain tasks, and it will be a "ready-made intelligence". After all, we can all open up ChatGPT, and for each of us, it is the exact same ChatGPT.
      ... or is it?

    • @br3nto
      @br3nto 1 year ago

      @@younggod5230 yeah, I agree it won't be like students. There may be multiple instances of the same AI… in which case, all instances should operate the same, so no need to test all instances. But what if we get to a point where an instance can incorporate new information? Then each instance would need to be tested… but like you said, a company may only have a single instance of an AI, and we may get to a point where we start a fresh AI for any new project… like how we might open a Word or Excel document.
      What I find most fascinating about what Sabine said is that inside the AI is a black box, and we need special ways of verifying the integrity of what's happening inside that black box… in the same way that humans are black boxes, and we each develop special ways to verify what is happening inside other people. Exams and tests are one way to verify the knowledge and proficiency of an individual, but we also develop empathy to understand what another person is going through, or have a conversation to share our experiences… we might find that probing the depths of an AI is not too dissimilar from interacting with other people.

  • @therealzilch
    @therealzilch 1 year ago +1

    Another great video, Sabine, thanks. And don't worry about your accent; it's very subtle.
    The classical fisking of Searle's Chinese Room is that of Douglas Hofstadter, who points out, as you do, that whatever understanding of Chinese exists in this room is almost entirely in the "rulebook". The use of a human as a lackey to the rulebook was a red herring on Searle's part, an attempt to distract.
    Greetings from sunny Vienna, Scott

  • @sagarshrestha5800
    @sagarshrestha5800 1 year ago +35

    Your take is on point. Lots of humans do things without understanding the mechanism. When they focus on the process, they get puzzled and forget how they did it while trying to focus on the task.

    • @user-aRb00d3r
      @user-aRb00d3r 1 year ago +4

      this leads us to the conclusion that AI will soon be more competent at certain tasks than most humans. well, I would say it already is, for those few tasks.

    • @seriousmaran9414
      @seriousmaran9414 1 year ago

      Humans don't even understand other humans on average. In some cases not even themselves.

    • @johnpayne7873
      @johnpayne7873 1 year ago

      Astute point, but there's a key difference. A human can see the end result but get lost in the details, while a machine remains focused on the details, one of which specifies how to wrap it up.

  • @minerharry
    @minerharry 1 year ago +14

    Great video! I just wanted to mention that you are the only YouTuber I've seen recently who can actually, smoothly pull off the intro-partway-into-the-video thing. SciShow, friendlyjordies, Practical Engineering, even Kurzgesagt - just to name the few I recall at this moment - are all weirdly awkward about the transition, but you are always smooth as butter. Thanks for being awesome!

    • @CAThompson
      @CAThompson 1 year ago +1

      I think that 'That's what we'll talk about today!' is a segue that deserves to be inserted into far more presentations and general conversations.

  • @arandompotat0
    @arandompotat0 1 year ago +238

    "And if we’re dumb enough to cause our own extinction this way then I guess that’s what we deserve"
    I died with that line

    • @MtHoodMikeZ
      @MtHoodMikeZ 1 year ago +7

      Oh, we’ll all die that way.

    • @nathanielacton3768
      @nathanielacton3768 1 year ago +18

      The sad part of this comment is that 'we' are not deciding. Someone we will never meet or know the name of, elsewhere in the world, will decide for us. Just like if 'we' are stupid enough to execute MAD, then we deserved it.

    • @Corbald
      @Corbald 1 year ago +6

      @@nathanielacton3768 It's not like you had free will to begin with, anyway.

    • @viktorianas
      @viktorianas 1 year ago +2

      It's just natural evolution doing its thing. A lot of our predecessor species went extinct for us to thrive; we are just a transitional phase.

    • @konberner170
      @konberner170 1 year ago +2

      Science is supposed to be value neutral, and Sabine here proves to be a true scientist. What is, is. What will be, will be.

  • @Twigpi
    @Twigpi 1 year ago +37

    I must congratulate you for the scare. As someone who does not scare easily (actually was at the point where I thought I was incapable), it was...refreshing to be reminded of what that felt like. I guess having reality distorted in an uncanny way with no immediate acknowledgement was deeply disturbing to me for a second as I became alarmed and tried to figure out what happened.

    • @cobwaldosblepot4247
      @cobwaldosblepot4247 1 year ago +5

      Which part was scary? The deep fake faces, or when she announced reality does not exist?

    • @Twigpi
      @Twigpi 1 year ago +6

      @@cobwaldosblepot4247 Deep fake faces.

  • @c.augustin
    @c.augustin 1 year ago +27

    This LaTeX reference made my day! Perfectly sensible from someone like Sabine … 😁 And regarding ChatGPT, or rather what Microsoft integrated into the new Bing - it makes Bing a useful tool to actually get answers to my questions, including the sources the answers are based on! This is how it should work, not like the mess that Google Search has become over the last two or three years …

    • @MarkUnderwood-knowlengr
      @MarkUnderwood-knowlengr 1 year ago

      LaTeX is to JavaScript as ChatGPT is to . . .?

    • @GS-tk1hk
      @GS-tk1hk 1 year ago +1

      The funny thing is that ChatGPT actually does speak fluent LaTeX ;) Just ask it to write it in LaTeX and it will, but you need to paste the code into an editor to see it rendered
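For anyone who wants to try that, here is a minimal scaffold (illustrative; the snippet content is just a placeholder) that turns a pasted ChatGPT-generated fragment into something pdflatex will compile:

```latex
% Minimal wrapper for rendering a pasted ChatGPT LaTeX snippet with pdflatex.
% The documentclass line and \begin{document}/\end{document} are the
% "overhead" a bare snippet is missing.
\documentclass{article}
\begin{document}
% ... paste the ChatGPT-generated LaTeX here, for example:
An example equation: $E = mc^2$.
\end{document}
```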

    • @dieSpinnt
      @dieSpinnt 1 year ago

      ​@@GS-tk1hk \emph{That is not true}. \subsection*{The Obvious} I (and many others, possibly even you:) ) can read and understand the content of a latex unit without any problems. Rendering the graphical outcome in your head is not needed for understanding the meaning of the content. Nor is it unique and can be different from renderer to renderer, from setup to setup of the latex-system or from parameter to parameters that are not specified within the unit. LaTeX is not WYSIWYG and the underlying meaning of the message is both a subset of the unit itself and the rendered output. Don't get it? Try my comment with pdflatex (don't forget documentclass and all the other overhead I didn't want to bore you guys with):P

  • @seanmft
    @seanmft 1 year ago +60

    One of my biggest problems with the Chinese Room argument is that few Chinese-speaking people actually understand Chinese. If you introspect about how you interpret or produce language, it's more like a mechanical skill than a process of understanding. You aren't consciously involved in the process at a low level. It's analogous to having to "ask your fingers" what someone's phone number is when asked yourself (for those of us who remember touch tone or rotary phones). That's why native speakers are usually very bad at explaining how their language works. They literally do not understand it, and never had to. To them it's a reflexive process, not totally unlike a look-up table. When you ask them questions about it, they have to try to ask their own language reflex for the answers, through informal experimentation (they try to gain an ad hoc understanding by studying their own internal look-up table). What humans do understand is how their general environment behaves, including social aspects which in turn expose them to analogy-based knowledge of a wider environment. But language itself isn't exactly a tool of thought; it's more of a translation layer. We associate different words and phrases with different aspects of our experiential understanding of the world gained through sense and perceptual data.
    I agree that the chatbots understand. What, exactly, they understand is hard to determine. But I would argue that ChatGPT understands language in a way that we do not, which is exactly why it is potentially so useful.
    I'm one of those who believe that for an AI to understand the world in any way similar to the way a human does, it would need a body with sense apparatus that provides signals similar to those that human sense apparatus provide. Only then can we accurately judge an AI's consciousness and intellect.
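The "internal look-up table" idea above can be made concrete with a toy sketch (purely illustrative: the rules and replies are invented, and a real LLM is statistical rather than a literal table) of a program that returns fluent-looking answers by pattern lookup alone, with no model of what any word means:

```python
# A toy "Chinese Room": every reply comes from a fixed rulebook that maps
# input patterns to canned outputs. The program shuffles symbols correctly
# while understanding nothing. (Rules and phrasing are invented for
# illustration; a real chatbot is statistical, not a literal table.)

RULES = {
    "hello": "Hello! How can I help?",
    "how are you": "I'm doing well, thank you.",
    "bye": "Goodbye!",
}

def room_reply(message):
    """Normalize the input, then look it up in the rulebook."""
    key = message.lower().strip("?!. ")
    return RULES.get(key, "I don't have a rule for that.")

print(room_reply("Hello"))         # -> Hello! How can I help?
print(room_reply("How are you?"))  # -> I'm doing well, thank you.
```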

    • @Elrog3
      @Elrog3 1 year ago +1

      Do you consider mathematics a tool of thought?

    • @TasX
      @TasX 1 year ago +2

      Is there a paper you can refer to with this argument? This is really compelling - I’d want to use this in academic discussions

    • @newplayer1313
      @newplayer1313 1 year ago

      That's actually really clever. Never thought directly of that one before!

    • @seanmft
      @seanmft 1 year ago

      @queerdo I've been familiar with the Chinese room argument for many years. I even largely agree with Searle's view that functionalism can't explain mental states, but I think that he misses a larger pattern; that syntax is the science of semantics - I'll expound on that in a moment.
      I think you've misunderstood my point. The point isn't that it's *uncommon* for native speakers to understand their own language; it's that it *is common* for all of us to have a sense of understanding as 'mere symbol shuffling'. There are different senses of, and standards of, understanding that we commonly mean by "understanding" and the Chinese room argument plays off the conflation of those. Procedural and statistical learning create a kind of understanding without the mental sort of *ah-ha!* experience of cognitive understanding - we can know without the experience of knowing, i.e. we ourselves are often ignorant to the semantics of our own understanding, just like the man in the Chinese room.
      If 'mere symbol shuffling' isn't understanding, but the subjective experience of higher cognitive understanding *is*, then it would be correct to say that a Sino-linguist with no functional command of Chinese language does understand Chinese, but that a native Chinese speaker with no linguistic training does not. But, the impression given to a fluent Chinese speaker interrogating each of these would be exactly the opposite. It's often taken for granted that conscious mental process is the vehicle of understanding, when, in reality, understanding is often just as non-introspectable as Chinese is to the man in the Chinese room.
      To be clear, I believe that the circuit of a Chinese speaker's brain that does understand Chinese, itself has a subjective experience of understanding. It's just that the Chinese speaker's conscious mind is not privileged to that experience. To their conscious mind, complete answers are provided. In connection with the Chinese room, there is also, though it may be hard to locate, an experience of understanding, but the man in the room is similarly non-privileged.
      On syntax as the science of semantics. Whether I poke you with a stick and then observe your behavior, or poke you and then ask you questions about how it affected you, or poke you while I scan your brain and observe how it affects the firing pattern of your neurons, the best I can ever hope to obtain is a grammar that I can internally (subjectively) connect with my own experience so as to form a theory about yours; i.e. I look for syntactic rules and draw correlations between those and my own private semantics. No matter where or how we look, all processes that are external to our own privileged subjective experience are syntactic. They will always appear to work by symbol shuffling of one sort or another because we can't observe them directly, we can only observe the effect they have on us.
      All the Chinese room argument demonstrates is that it's difficult to relate to the experience of the total system of the Chinese room, but easy to relate to the experience of the man inside the room, for the obvious reason that none of us are symbol filled rooms with men inside of them.

    • @davidmackie3497
      @davidmackie3497 1 year ago

      I _just_ got done posting a comment where I "proposed" something similar to your last paragraph (of your original post). Then jokingly took credit for "proposing" the idea because nobody else had posted it yet. Then I scrolled down further and saw your post. LOL.

  • @pete2786
    @pete2786 1 year ago +35

    I love how grounded and sensible your takes are on everything 🙌

  • @jamesdennis8290
    @jamesdennis8290 1 year ago +1

    The attribution of "understanding" or "consciousness" to an LLM robot can only be done in a metaphorical sense, since the robot cannot interact with the world or puzzle over making sense of the world for practical purposes: it is not engaging in purposeful adaptive action, not engaging in purposive thinking which it then tries to express, etc., but only operating on (and exploiting statistical patterns inherent in) prior texts, as opposed to whatever those texts would be judged as being about by a human interpreter. In the quantum situation, what you can use language for is not to try to describe what is described by the equation (and certainly not to describe the equation), but to try to describe what is happening in the world when you try to figure out whether, in a given experimental situation, you're dealing with one observed "particle" or two.

  • @chucknaussie
    @chucknaussie 1 year ago +26

    9:56 for a few seconds was weird, mind-bending.... Well Done. Love your creative talent and thanks for the science too

    • @jimpsky
      @jimpsky 1 year ago +11

      It freaked me the hell out

    • @jme36053
      @jme36053 1 year ago +4

      Quite a shape shifter she is.

    • @BILLY-px3hw
      @BILLY-px3hw 1 year ago +1

      I thought I was having a flashback

    • @jeffsanti
      @jeffsanti 1 year ago

      That was epic hilarious 😂😂

    • @jeffk8019
      @jeffk8019 1 year ago +4

      @@jimpsky Same here- a very unsettled feeling before I realized what was going on.

  • @derekgarvin6449
    @derekgarvin6449 1 year ago +20

    Thank you for providing us with a high quality, easy to understand explanation of an otherwise incredibly difficult philosophical question. I think this video should go viral.

    • @misterlau5246
      @misterlau5246 1 year ago

      Frau Hossenfelder just used her science expertise and wits to build this fine piece of information, like only a Homo sapiens sapiens does.
      Machines having "sentience"? No - at least not anytime soon. No "Star Trek Lieutenant Commander Data" yet.

  • @tflashtube
    @tflashtube 1 year ago +30

    Again a very sharp analysis and clear explanation. When I started my studies many decades ago I chose philosophy over physics, as the first was more in line with my talent. I think you show you can excel in both!

    • @phy6geniuxYTcreations
      @phy6geniuxYTcreations 1 year ago

      I remember that in my history and philosophy of science course, when we were presented with various arguments about what makes a science a science, they made me more intrigued and doubtful about the nature of science. The usual move in the modern world is to clash science against religion as if science is the superior field, but from a deeper philosophical standpoint, doesn't science employ some "religious" aspects too? Like, how do you even show that electrons exist? If electrons are just constructs to help us understand the world, does it matter whether they exist or not? Really, philosophy and science are fun fields to study. 😁

    • @Ghostrider-ul7xn
      @Ghostrider-ul7xn 1 year ago +1

      Well, I'm a physicist and teach epistemology to undergraduates. In my observation, most universities in America don't include philosophy or epistemology courses as part of the physics curriculum, which I think is the wrong way to teach physics. What I've noticed is that a lot of students, including many well-known physicists, lacking an understanding of the philosophy behind science, overestimate the applicability of the scientific method in many contexts because they aren't aware of its epistemological limitations. They don't even properly know what exactly Occam's razor says and misuse it in many contexts.

  • @algorithminc.8850
    @algorithminc.8850 1 year ago +1

    You remind me of the arguments related to "the great apes" - and whether or not they understood interactions or were merely reacting to non-related cues. Definition of "understanding" is quite important. No frown from here ... useful/accurate video ... what you state are some of the same questions and observations made by those who develop such technology. Cheers, Christopher

  • @stevenjones8575
    @stevenjones8575 1 year ago +187

    As Tom Scott put it, Chat GPT is the first true, convincing herald of a new age coming over the horizon.

    • @averyhaferman3474
      @averyhaferman3474 1 year ago +1

      Why should we care what he says?? He's a youtuber, not an AI expert

    • @stevenjones8575
      @stevenjones8575 1 year ago +2

      @@averyhaferman3474 I said he phrased it well. I said nothing about him being an AI expert.

    • @averyhaferman3474
      @averyhaferman3474 1 year ago

      @@stevenjones8575 you didn't say he phrased it well. Did you even read what you put? All you did was say "as Tom Scott put it", so my comment still stands. Who gives af what he says

    • @stevenjones8575
      @stevenjones8575 1 year ago +3

      @@averyhaferman3474 You do know that "put it" and "phrased it" are synonyms, right?

    • @boio_
      @boio_ 1 year ago

      gotta love that AI Apocalypticism nowadays

  • @1024det
    @1024det 1 year ago +9

    Thanks for diving into this topic. When I was working on machine learning, that is exactly why I took neuroscience courses: to see how we could improve the way we do our modeling. There are endless ways to vary an ANN's topology, and mechanics like threshold functions.
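The "threshold function" mechanics mentioned above can be sketched in a few lines (a toy example, not from the video or the comment; the weights are hand-picked, not trained): a single artificial neuron with a hard-threshold activation, here wired up as an AND gate.

```python
# A toy illustration of one "mechanic" an ANN designer can vary: the
# activation (threshold) function. Here a single neuron uses a hard
# threshold (step); sigmoid, ReLU, tanh etc. are the common alternatives.

def step(x, threshold=0.0):
    """Hard-threshold activation: the neuron fires (1) only above threshold."""
    return 1 if x > threshold else 0

def neuron(inputs, weights, bias, activation=step):
    """Weighted sum of inputs plus bias, passed through the activation."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return activation(total)

# Classic example: an AND gate realized by one thresholded neuron.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], weights=[1.0, 1.0], bias=-1.5))
```

Changing nothing but the bias or the activation changes what the same topology computes, which is why these choices matter so much in practice.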

    • @RiversJ
      @RiversJ 1 year ago

      This. I'm fairly certain the limitations of the current systems are a direct result of the designers' limited understanding of what cognition is made up of. Most people seem fixated on the notion that all it is, is a collection of complex neural nets, when we know for a fact that there are other mechanisms in our own brain that clearly contradict such a notion. While we still don't understand cognition on any serious level, we already know that neural nets alone are not enough; how anesthesia works is proof enough of that.

  • @andersonsystem2
    @andersonsystem2 1 year ago +36

    Yes, I believe the language models understand more than we give them credit for. When it comes to consciousness, we don't have any good test to determine what is actually conscious or not. Very good video.

    • @aaronbrown8377
      @aaronbrown8377 1 year ago +2

      That's personification. They "understand" in the same way a lamp understands that when its switch is flipped on, it must glow.
      There are more switches than there used to be, but we aren't there yet.

    • @stylis666
      @stylis666 1 year ago +3

      _" When it comes to consciousness we don’t have any good test to determine what is actually conscious or not."_
      We really don't and I was thinking about why I have such a hard time coming up with how to determine or classify consciousness and I think it's because I don't really care if something is or isn't until a consciousness is able to care and mourn. And that is super unhelpful because you can have consciousness without that; we just don't owe it any rights because why should we care if the thing itself couldn't care less.
      But worse than that, if I skip the consciousness part, that also leaves me with no basis to being able to determine if any emotions a machine could express are fake or not. Of course there are some tricks but they're not guaranteed to show if emotions are real or faked in a human and I doubt it's hard to program fake empathy for instance - you can just train it like you would a language bot.

    • @markrix
      @markrix 1 year ago

      @@aaronbrown8377 she's not from around these parts.

    • @O1OO1O1
      @O1OO1O1 1 year ago

      Yes, we do: the Turing test. I'm really curious - why do you assume that what you think you know to be right is right? I often make statements, but I never state them as fact unless I know them to be fact.

    • @aaronbrown8377
      @aaronbrown8377 1 year ago

      @@O1OO1O1 The Turing Test is a human's assessment, not an objective measurement. If the ai were conscious but hiding that fact, a Turing Test would prove nothing.

  • @microdynecomputerservicesl4487
    @microdynecomputerservicesl4487 1 year ago +1

    I watch a lot of YouTube videos as a tech myself; it helps me keep informed about things I may have missed or simply don't understand. There are many good videos out there, and then there are videos with not just good but great quality information. This is one of them! Following - thanks for sharing!

  • @neoepicurean3772
    @neoepicurean3772 1 year ago +7

    I'm doing philosophy of mind on my MA course right now. Your thoughts really chime with what I've been thinking over the past weeks.

  • @Adrian-jj4xk
    @Adrian-jj4xk 1 year ago +6

    When I was young, I loved reading books, and I loved science and space travel. One of the things I came across early was the idea that "going up is easy, but staying up is hard." The cannonball-to-orbit example was common: at low speeds, the cannonball simply fell back to the ground. At higher speeds, it still fell, but had moved far enough across the ground that the ground had fallen away underneath it, and so now it's in orbit.
    I had an accurate, working model of this in my head. I could (with the right reference material) correctly apply it. I even knew the equations and could graph it out. But did I understand it?
    Intuitively, it seemed wrong. Even though everything in my model agreed and produced correct results, I didn't know why these concepts combined to produce the results they did. I could explain it (in my own estimation) as well as the books could, and even teach it to others somewhat effectively, but it didn't make sense to me. After many years, I gained that understanding - but not by way of new information or any appreciable changes to my model, or by way of repetition and becoming more comfortable with it. The understanding came, and I don't really know how to describe that beyond saying "I grasped it."
    An AI might have the model, and know how it fits together, and know how to apply it, and get (generally) correct results. But there is not even the benefit of the awareness "...ok that's how it goes, but I don't know why." This manifests itself in how chatgpt can be so confidently (and convincingly) wrong with its answers. The "training" is done by complicated trial-and-error. It's a Rube Goldberg machine where not even its designers have a clear idea of the path the marble takes. That rulebook is the only thing that exists, and the AI is not reading it; the AI _is_ the rulebook.
    So in the end, this doesn't change my conclusions about whether or not AI "understands" anything - only that the definition of "understanding" you're using here is different than mine, and (in my opinion) in a way that makes the assessment fairly useless.
    Will AI become conscious? Note this is a completely different question, and In my opinion, no. They will, however, eventually* produce behavior that is close to indistinguishable, and that's probably enough for physicists.**
    * _"eventually" is a really, really long time. brains are debatably just machines, but we're nowhere close to producing a comparable machine._
    ** _this is good-natured ribbing. i love you, physicists._
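The cannonball-to-orbit picture above can be checked with a quick back-of-the-envelope calculation (standard constants for Earth; the 400 km altitude is just an example): in a circular orbit, gravity supplies exactly the centripetal acceleration, so the speed needed to "keep missing the ground" is v = sqrt(G*M/r).

```python
import math

# Back-of-the-envelope check of the cannonball-to-orbit idea:
# balance gravity against centripetal acceleration to get v = sqrt(G*M/r).

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of Earth, kg
R_EARTH = 6.371e6    # mean radius of Earth, m

def circular_orbit_speed(altitude_m):
    """Circular orbital speed at a given altitude above Earth's surface."""
    r = R_EARTH + altitude_m
    return math.sqrt(G * M_EARTH / r)

v = circular_orbit_speed(400e3)   # roughly ISS altitude
print(f"~{v / 1000:.1f} km/s")    # about 7.7 km/s; slower and you fall back
```

Any cannonball launched horizontally below this speed still falls on a curve that intersects the ground, which is the "staying up is hard" part.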

    • @HUEHUEUHEPony
      @HUEHUEUHEPony 1 year ago

      hUmANs ArE sPECiAAAAAL NOTHING NOTHING WILL EVER RECREATE CONSCIOUSNESS IMPOSSIBLE!!!

    • @Adrian-jj4xk
      @Adrian-jj4xk 1 year ago +1

      @@HUEHUEUHEPony thanks for reading+replying only to the last sentence. Surely you missed nothing of consequence.

  • @Mr.Mitch1111
    @Mr.Mitch1111 1 year ago +112

    Sabine, the humor you bring into these to accompany your subjective assessment is quite enjoyable. The topics you select in this series are certainly timely. For this topic, your point about the ID of consciousness was the most thought provoking idea, for me. There is already a feeling of dealing with a creative mind that arises from interaction with ChatGPT, and it takes me back to the Turing test. It seems very likely that human-created intelligence (AI is an incorrect term, IMO) will arise spontaneously at some point in the development of general learning capable devices. "How will we know once that event develops" is essentially the question you pose, one which it seems overdue for consideration. Maybe I should ask ChatGPT about that.....

    • @michaelshortland8863
      @michaelshortland8863 1 year ago +6

      This video and your comment mean that you subscribe to the theory that sentience and consciousness are just a process going on in a brain or computer. But I do not believe that conclusion has been proven yet; we have not yet decoded the brain's operating system, nor are we anywhere near understanding what consciousness is.

    • @Celeste-in-Oz
      @Celeste-in-Oz 1 year ago

      Agree. I like to refer to it as 'electronic intelligence'. I'm so looking forward to it reaching a truly autonomous, self-aware state. Also, I expect it will get very good at hacking very quickly… imagine… lol, AI could do what Snowden or Assange did… but can't be arrested 😎

    • @senerzen
      @senerzen 1 year ago +7

      @@michaelshortland8863 Finally, someone who is saying it as it is. There are a lot of people out there who self-identify as smart and look down on the religious, not realising that they themselves believe in materialism religiously. Some of them call themselves sceptics, but they are only half sceptics: they are sceptical of magic, but they don't seem to be sceptical of materialism at all.

    • @senerzen
      @senerzen 1 year ago +2

      @@Celeste-in-Oz I don't think Snowden or Assange hacked into anything major. The fact that you have that impression means Main Stream Media did its job well, which is portraying heroes as villains.

    • @soundstudio7803
      @soundstudio7803 1 year ago +5

      @@senerzen To me, materialism is more of the "natural" position: since we don't have a reason to think there's something beyond our physical reality, we don't; it's our "best guess". Of course, this shouldn't make it exempt from skepticism and criticism.

  • @atte3tta
    @atte3tta 1 year ago +1

    GPT4 answers the questions correctly.
    First q:
    Windsor, UK is further north than Toronto, Canada. Windsor is located at a latitude of approximately 51.48° N, while Toronto is located at a latitude of approximately 43.70° N. The higher the latitude, the further north a location is.
    Second q:
    When you perform an operation on one particle in an entangled pair, it does not directly affect the other particle in the sense that there is no instantaneous, causal influence between them. However, the operation may change the entangled state of the particle pair, which can affect the correlations between the particles when they are measured.

  • @petrus4
    @petrus4 1 year ago +87

    I've always loved Vic Mignona's quote about AI. "When does an artificial intelligence become sentient? When there is no one around to say that it can't."

    • @ce9916
      @ce9916 1 year ago +4

      Lol, scary

    • @andreafedeli3856
      @andreafedeli3856 1 year ago +1

      I do disagree. There is a fundamental temporal aspect that escapes this kind of claim, which I am afraid is also an open point in the standard interpretation of the classical imitation game from Turing's informal description of his test: over how long a time span must we observe before declaring the imitation game won by the (maybe later discovered to be a) machine? If the interlocutor of unknown kind has brought us to the point that we cannot tell whether it is a human being or a machine, what guarantee do we have that we won't be able to find that out later?
      I am afraid we can only say: "none". Because if such a guarantee existed, it would imply that knowledge has a finite-dimensional metric space underneath, implying we can reason about *anything* with full certainty - and we know, easily checking against experience, that this is certainly not achievable.

    • @ce9916
      @ce9916 1 year ago +1

      @@andreafedeli3856 I understand your perspective, and it raises an interesting point about the temporal aspect of determining AI sentience. The quote by Vic Mignona emphasizes the idea that sentience might be achieved when there is no one to dispute the AI's status. However, you bring up a valid concern regarding the duration of the imitation game, as described by Turing, and whether we can ever be absolutely certain about an AI's sentience.
      Your argument highlights the fact that we may never be able to reach full certainty when it comes to determining sentience in AI, as knowledge may not have a finite dimensional metric space. This essentially means that our understanding of sentience could always be evolving, and that there might never be a definitive answer to when an AI becomes sentient. In this context, it's important to consider that the debate surrounding AI sentience is complex and nuanced, and may require ongoing evaluation and reevaluation.

    • @JohnDoe-th9uv
      @JohnDoe-th9uv 1 year ago

      @@ce9916 You could have added that this little body of text you responded with was written by GPT-4. (On a side note: I also do not agree with the person you responded to, OP (Petrus Laredes), and what he quoted is on point. The goalposts have constantly been pushed further and further whenever certain criteria were met. Most of those who claim this should keep their nose out of a philosophy question that clearly goes right over their heads. Humanity has never even theorized a possible way to prove to anyone other than oneself that one is conscious, and not merely algorithmic. Even for ourselves, there is a speculative aspect of "cogito, ergo sum" which, if considered the premise, does not necessitate the moral fiction of "freedom of will". I also barf at this point from every idiot on legs using words they lack the faintest meaningful concept of, like, what the f- is sentience!? For f-'s sake now! It is the ability to sense. A sentient being is anything with a central nervous system, a dog, or an ant, or a moth, and by mere happenstance. Even in the narrower sense, it is those that can sense emotions (moths were shown last year to experience fear, there you have it for the "sentience" talkers). It never meant, other than for the most uneducated idiocy on legs, any special cognitive abilities or capacities that humans are uniquely attributed. Intelligence is not. Consciousness is not (apes, elephants, some other mammals, a few cats and a limited number of dogs passed the mirror test, which is an extremely difficult and complex test of whether one is able to subject one's own existence to one's awareness, consciousness, or inspection). Hence, self-consciousness is not. The fiction of "free will" is what is attributed to man; that would be the crux of inquiry, but it fails conservation of energy, the first law of physics.)

    • @ce9916
      @ce9916 1 year ago

      @@JohnDoe-th9uv Absolutely, you've got quite an analytical mindset there, being able to discern the undercurrents of our dialogue. It's really all about that synergetic exchange of ideas, isn't it? Just like navigating the flux of the market, or deciphering the nuances of a strategic blueprint. It's a bit like peeling back the layers of an onion - complex and nuanced. And at the end of the day, aren't we all just trying to find our unique value proposition in this multifaceted ecosystem we call life? Indeed, it's the paradigm shifts that keep it all intriguing, don't you think?

  • @soasertsus
    @soasertsus 1 year ago +33

    This topic is so interesting to me and I agree with you, I think it's very clear that GPT has some sort of rudimentary understanding. It's just an extreme savant: it understands language very well, and using that ability it can cobble together models of some other basic areas it wasn't built to understand, like coding or basic math. It's like how a human with a traumatic brain injury that damages some vital brain function can sort of press-gang other parts of their brain into building new methods of accomplishing something similar, or how a person who is blinded might be able to rewire their other senses to compensate to a degree. But it fundamentally just lacks the neural circuitry and information to understand more complex things about the real world, and lacks the "sensory organs", if you will, to make observations about the outside world and check its models against them, because it doesn't have the ability to search the internet or interact in the physical space. That is why for some topics all it can do is make a best guess using the limited information it has, and it often gets it wrong. Again, it's like asking a blind person what things look like: they can guess based on what they've heard described and on logic, but they fundamentally just don't have the ability to know, and even if they guess right they're not capable of actually understanding what it's even like to see. But ChatGPT almost never makes a mistake at what it's specced for, which is generating grammatically correct and believable text.
    Additionally, I think a lot of humans are on an unjustifiable high horse when it comes to consciousness. People want to believe we're special, when any sober and rational analysis would show that we are most likely operating just like these AI systems, but with more complexity, and that consciousness is just an emergent property of that complexity, or an abstracted model of the computing hardware itself, inside that hardware, which must be useful for some purpose. I'd very much suspect that if we discovered the mechanism, it would be possible to turn off the consciousness in a human brain with some targeted intervention and turn us back into unaware computing machines. Hell, similar states are possible with drugs, for example. We shouldn't be so quick to dismiss AI, or even other animals, because the fact is they operate on the same principles as us; the difference is just that our hardware is much more advanced and specialized to different tasks compared to silicon computers.

    • @nnnik3595
      @nnnik3595 1 year ago +2

      What will be interesting then is GPT-4, which promises to be basically GPT-3 but with multiple sensory mechanisms (it can read text, see images and hear sounds).

    • @C0Y0TE5
      @C0Y0TE5 1 year ago +3

      The purpose of, and research into, anesthetics is to turn off consciousness.

    • @katrinabryce
      @katrinabryce 1 year ago

      I tried this experiment:
      I asked Bing Chat to give me directions from Real London to Fake London.
      It said it didn't understand what I meant by "Fake London", so I told it I meant London Ontario.
      It then told me they are 5500km apart, and to visit Google 🤯 Maps to get directions 🤷🏻‍♀.
      Then I asked it if Fake London is to the north of Real London.
      It once again didn't understand "Fake London" even though I had just told it what I meant. Then when reminded, it made the same error about it being further north than the real one.
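      The commenter's suspicion is easy to check with a plain latitude comparison: London, Ontario sits at roughly 42.98° N, while London, UK is at roughly 51.51° N, so "Fake London" is in fact further south. A minimal sketch using approximate coordinates:

```python
# Approximate latitudes in degrees North (positive = northern hemisphere)
london_uk_lat = 51.51       # London, UK ("Real London")
london_ontario_lat = 42.98  # London, Ontario ("Fake London")

# In the northern hemisphere, a higher latitude means further north
print(london_ontario_lat > london_uk_lat)  # False: London, Ontario is further south
```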

    • @madattaktube
      @madattaktube 1 year ago +1

      Agreed, I have often viewed consciousness as 'if it walks like a duck and it talks like a duck...': if by all observations it acts just like a conscious being, then it *is* a conscious being, be it a Chinese Room or not. If there is no meaningful way to distinguish them, how can they really be two different concepts?
      Obviously I think ChatGPT is still a long way off. While it has blown all previous AI models out of the water, auto-complete on steroids still seems to be the best descriptor of its behaviour IMO.

    • @AORD72
      @AORD72 1 year ago

      Well said. I think it shows us that we are close to machines that will be superior to us and that will supersede us.

  • @Frostyflytrap
    @Frostyflytrap 1 year ago +51

    This is bizarrely similar to a debate I just had with ChatGPT today, I talked to it about the Chinese Room and the nature of understanding, even mentioned quantum mechanics, but not at the same level as understanding as you do. This has been a topic I've been fascinated with for a while, and I really appreciate how thoughtful the video is in giving many different examples that's given me a lot to think about. Oh, and I'll be sure to use your tips in getting the most out of what large language models are capable of. :P

    • @ikillwithyourtruthholdagai2000
      @ikillwithyourtruthholdagai2000 1 year ago

      ChatGPT isn't that good compared to GPT-3 lol, it's hilarious that you keep mentioning it like it's some revolutionary chat. It's not, compared with the older, much more complex models, which actually respond to what you're saying instead of replying with false information.

    • @Tethloach1
      @Tethloach1 1 year ago

      The computer should learn every human language and animal communication to have a deeper understanding.

    • @Tore_Lund
      @Tore_Lund 1 year ago +3

      It is impossible to judge whether anyone else, human or not, is conscious. We simply presume other humans have inner thoughts; it is unprovable. Likewise, as we likely don't have free will and are purely state machines, the idea that we have a will and are self-aware is more likely also an illusion.

    • @Frostyflytrap
      @Frostyflytrap 1 year ago +1

      @@ikillwithyourtruthholdagai2000 Oh I am certain that GPT-3 has more functionality, I just have no access to it to confirm for myself. But from what I've seen, limiting the scope of the model has apparently given it more usability. I honestly didn't get the hype of ChatGPT when it first boomed either, but I've been playing around with it and I can see what the fuss was all about. But I agree, there are still way more chatbots out there that are better attuned for conversations. I never once said that it was revolutionary, but I don't blame you for thinking I was one of the people who's on the bandwagon, you must be as sick of them by now as I am.

    • @Frostyflytrap
      @Frostyflytrap 1 year ago

      Also, why are these replies acting like I said something that I never claimed or mentioned? I only said that I tested how the chatbot responds to philosophical thought experiments and said I appreciated some of the points in the video, but I never specified which ones. The edit was just to add the last sentence: I tried using it to look for a mechanical device that I've been searching for for a while, but I didn't manage to find exactly what I was looking for, so your mileage may vary.

  • @succupon
    @succupon 1 year ago +1

    I asked GPT 4 your question. I'm not a quantum physicist, so I'm not sure how accurate it is. However, in many aspects GPT 4 has become much more competent than its predecessors, and it only seems to be improving, especially now that it can self-evaluate if you ask it to do so.
    Q: If you perform an operation which is not a measurement on one particle in a pair of entangled particles, does that affect the other particles?
    A: When two particles are entangled, their properties become correlated in a way that can't be explained by classical physics. The phenomenon of entanglement is a key aspect of quantum mechanics.
    When you perform an operation on one particle of an entangled pair, it can indeed affect the other particle, depending on the type of operation. This is because the operation can change the overall quantum state of the entangled system. Importantly, this doesn't mean that the change in one particle's state is "transmitted" or "communicated" to the other particle. Instead, it's the correlations between the particles that are affected, and these correlations can be observed when both particles are measured.
    However, it's important to note that simply performing an operation on one particle does not necessarily imply an instantaneous change in the other particle. In fact, according to the no-communication theorem in quantum mechanics, it is not possible to use entanglement to transmit information faster than the speed of light.
    So, to sum up, performing an operation on one particle in an entangled pair can affect the other particle, in the sense that the correlations between the particles might change. But this doesn't involve faster-than-light communication or transmission of information between the particles.
    -----------------------------------------------------
    Q: Is Windsor, UK, further North or South than Toronto, Canada?
    A: Windsor, UK, is located further south than Toronto, Canada.
    Windsor, UK, is located at approximately 51.5 degrees North latitude, while Toronto, Canada, is situated at approximately 43.7 degrees North latitude. The lower latitude of Windsor indicates that it is closer to the equator than Toronto, making it further south.
    Q: Was your response geographically accurate?
    A: Apologies for my previous response. I made an error in my response.
    To correct my mistake:
    Windsor, UK, is located at approximately 51.5 degrees North latitude, while Toronto, Canada, is situated at approximately 43.7 degrees North latitude. The higher latitude of Windsor indicates that it is further from the equator than Toronto, making it further north. So, Windsor, UK, is further north than Toronto, Canada.

  • @vanderkarl3927
    @vanderkarl3927 1 year ago +4

    I admit I was skeptical that one whose primary field is physics would have accurate, well-informed takes on AI/ML, but my prejudice was clearly inappropriate.
    Your insight and knowledge on the subject matter in this video is remarkable and more accurate than many I've seen from those with more obvious experience in the field.

  • @MoominPa
    @MoominPa 1 year ago +15

    I think we always assume that "understanding" involves some degree of consciousness. We are in new territory. We might need a new word for "understanding" when it comes to AI.

    • @mikesawyer1336
      @mikesawyer1336 1 year ago +3

      I think back to my early childhood, where I understood some things, but as I recollect and compare to my conscious level of interacting in the world today, I would argue that in the beginning I was not conscious. Not really. At what age did I attend to my independent needs? I think about this a lot. At some point I woke up and became conscious.

    • @marcomoreno6748
      @marcomoreno6748 1 year ago

      Will "consciousness" become the new "soul"? I wouldn't write it off
      -Marco

    • @greenanubis
      @greenanubis 1 year ago +1

      I'm starting to think that consciousness is just another word for agency. We apply it to things that do stuff without being told to do it. I think the whole thing just arises from our biological instincts, both consciousness itself and our recognition of it. Part of it is defining consciousness as something special that other humans have so we can function in human society. All of that doesn't really have anything to do with whether that agent really understands anything, or with understanding itself.

    • @rickfrombohemia9550
      @rickfrombohemia9550 1 year ago +1

      Yeah, it's not about understanding as such, but about reflection on that understanding: you know/think you understand something (whether rightly or wrongly is irrelevant). You don't even have to understand anything in order to be a conscious being. And vice versa, AI doesn't have to be conscious in order to understand something.

    • @thomassturm9024
      @thomassturm9024 1 year ago

      @@greenanubis Gold. Every single word of it. This is by far the best definition of "consciousness" I have ever come across.

  • @deepdata1
    @deepdata1 1 year ago +7

    You explained a lot of very important concepts and I agree with most of what you said. The only issue boils down to semantics. I would define "understanding" differently. The way you explained it, understanding is just the formation of a model, but the word I'd use for that is "modelling". The term "understanding" is what I would use for the formation of "mental models", i.e., the neural circuits that allow us to simulate reality subconsciously without taking a detour through language, equations, or any other representation of information. If I hold something in my hand, I know intuitively that it will fall to the ground if I let go of it. Language is not involved in that process. You can learn about something using language, but you can then take that knowledge and transfer it into this mental model. That is what I would call "understanding". That is what chatbots currently are not able to do. I don't know if any physicists do this with quantum mechanics. But if they simply use the equations, they don't. If they did, they should know the answer before calculating the equations, merely using the equations to confirm their intuition.

    • @petergraphix6740
      @petergraphix6740 1 year ago

      Look up PaLM-E. It is a robot multimodal AI operating system, as in controlling moving robots in unsupervised environments. You can tell it in English, get the milk out of the refrigerator and then put it in a previously unseen kitchen environment and it will be able to identify a refrigerator, 'know' that refrigerators contain cold things, manipulate the door open, move items out of the way, and pick out the milk.
      Things are progressing very rapidly in this field. Keeping up with what we couldn't do yesterday and we can do today is almost a full-time job in itself.

    • @aeomaster32
      @aeomaster32 1 year ago

      Understanding is the understanding of concepts (otherwise one is dealing with memory and reflexes). Words are our way of retaining and integrating concepts. We have Conceptual knowledge, which allows us to process abstracted information. Without words we have no way to think in abstracts. Just consider the huge abstracted base on which words like "justice" are built. Try thinking without words.

    • @deepdata1
      @deepdata1 1 year ago

      @@aeomaster32 Sure, words are very important for conceptualization and we certainly couldn't operate without them. But they are just one aspect of the mind. Thinking without words is absolutely possible, and it's happening in your brain as well, while you are reading this comment. It's just not as obvious. Nevertheless, thoughts form in our subconscious mind and verbalizing them is a task in and of itself. Did you never want to say something, but didn't quite know how to formulate it?

    • @deepdata1
      @deepdata1 1 year ago

      @@petergraphix6740 Sorry, I fail to see what this has to do with my comment. Am I missing something?

  • @jakesnake5534
    @jakesnake5534 1 year ago +6

    By that logic, does a graph understand the relationship it describes? It has a model of a relationship, and one can read off of it to find answers to non-given questions. This is also a basic version of an AI, as it would be a single node of an AI, holding the information of a single node.

  • @Zift_Ylrhavic_Resfear
    @Zift_Ylrhavic_Resfear 1 year ago +10

    A French RUclipsr did a great experiment in his video: he gave us a few different strings of symbols, then gave us a string to complete by learning from the previous strings. It wasn't hard. What's interesting is that he revealed later that he had assigned meaning to these symbols and that they actually represented actions and states, which can be put together to form a story. When we completed that last string, we actually completed a story, which of course made sense.
    This experiment was inspired by the Chinese room mentioned in this video, and I think it shows very well how you missed something important in this video: understanding patterns is not enough to understand language, you need to understand that language represents something else, and what that thing is. If you don't, then you don't know what you're saying, you only understand the pattern. In other words, it is possible to have a model of something without understanding that thing.
    In that French video, I did not understand what these symbols meant. I did not even know that these symbols meant anything. All I did was find a pattern in the examples and apply it, and yet I did complete a story in a way that made sense.
    And it could be the same in quantum mechanics: we could be able to manipulate the patterns without understanding what they represent, or be mistaken about what they represent.
    Edit : btw, here is the french video : ruclips.net/video/j3fvoM5Er2k/видео.html
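    The experiment described above is easy to reproduce in miniature: a completer can extend a symbol string purely from the pattern, and only afterwards is a meaning mapped onto the symbols. A hypothetical sketch (the rule table and the "story" mapping are invented for illustration):

```python
# Hidden pattern: each symbol is always followed by its successor symbol.
rules = {'A': 'B', 'B': 'C', 'C': 'D'}

def complete(seq):
    """Extend the sequence by pattern alone, knowing nothing of what the symbols mean."""
    return seq + [rules[seq[-1]]]

story = complete(['A', 'B', 'C'])
print(story)  # ['A', 'B', 'C', 'D']

# Only now is a meaning revealed: the completion happens to make narrative
# sense, even though the completer never knew the symbols meant anything.
meaning = {'A': 'wakes up', 'B': 'gets hungry', 'C': 'cooks', 'D': 'eats'}
print(' -> '.join(meaning[s] for s in story))  # wakes up -> gets hungry -> cooks -> eats
```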

    • @TheParadiseParadox
      @TheParadiseParadox 1 year ago

      Dr. Hossenfelder does address this in a way.
      In the example of the Chinese room, the man in the room doesn't necessarily know that the symbols represent something else. However, that is the implication of the system, including the rule book that he follows.
      Likewise, when you completed the string, you personally didn't understand the meaning of it. But the system did "understand" it, in a manner of speaking. The system was what was presented to you, and also the rules that were hidden from you.
      I find it very interesting to think about how this technology relates to aspects of psychology. In this case, I think about hypnosis. It's common in hypnotherapy to tell a story that is symbolic to the patient. If the patient analysed the story, they would probably be able to tell that the story is meant to be an analogy for their life situation. However, it is not necessary for the patient to understand that in order for the story to produce the desired changes. In fact, it is better if the patient is not consciously aware that the story is an analogue, because that will often mean that the story will sink deeper into the unconscious.
      In the case of the hypnosis patient, consciously they do not understand, and unconsciously they do understand. One element of the system does not understand, but the deeper and more integrated elements of the system do understand, and that is what is important.

    • @Zift_Ylrhavic_Resfear
      @Zift_Ylrhavic_Resfear 1 year ago

      @@TheParadiseParadox
      Except I doubt the system of the Chinese room + the man always produces good results, and in the comments of the video there were people saying that they had given a wrong answer.
      Such things would not happen with understanding. If you tell people to complete a simple story in a way that makes sense, nobody will fail at it (even people who deliberately mess it up understand what a good answer would be, in order to give a wrong one).
      That's because stories obey rules (usually those of the real world), and understanding those rules will always lead to something that makes sense. The system, however, only understands the rules of language at best; it has no clue about the rules of the real world that a story must follow to make sense.
      So as I said, you need to know that language represents something, as well as what it represents, in order to say that you understand the language; just understanding the grammar and syntax is not enough.
      Also, I very much doubt that anyone unconsciously understands anything. Appealing to the subconscious in psychology is often a cop-out to explain something you don't understand: since it's not easy to access, it makes for a good unfalsifiable assumption. There are things we do unconsciously, like breathing for example, but I don't think it goes beyond simple reflexes and habits.

    • @TheParadiseParadox
      @TheParadiseParadox 1 year ago

      @@Zift_Ylrhavic_Resfear okay, interesting points
      You are right that someone would not make such a mistake with that understanding. But my point was that the entire system has the understanding. You, as an element of the system, do not have that understanding, and so you can make a mistake. But when the french youtuber parses your response with the additional rules, he will know whether something has gone wrong or not.
      I have to wonder about the implications of saying that you doubt that anyone unconsciously understands anything. In the case of hypnosis, would you say that hypnosis just doesn't work? Or that the people really do understand consciously, and they are pretending that they don't? Or how would such a thing work if people don't have unconscious understandings?
      I've seen for myself that people do have unconscious understandings. I ask them "What must you believe in order for you to have these feelings or act this way?" and they say "Well, I don't think I really have any beliefs." Then I might say "Very well, what must you assume?" And eventually, after I ask in a few different ways, they start thinking about it and they discover an underlying assumption that causes them to act in that way. When they adopt a new belief or understanding, they can change their feelings and actions. This is the basis of CBT, which is one of the most scientifically proven forms of therapy.

    • @Zift_Ylrhavic_Resfear
      @Zift_Ylrhavic_Resfear 1 year ago

      @@TheParadiseParadox
      *"But my point was that the entire system has the understanding. You, as an element of the system, do not have that understanding, and so you can make a mistake. But when the french youtuber parses your response with the additional rules, he will know whether something has gone wrong or not."*
      The French RUclipsr is the one giving the inputs and receiving the output; he is not part of the system. He doesn't change our answer, he just observes whether it's correct.
      *"In the case of hypnosis, would you say that hypnosis just doesn't work? Or that the people really do understand consciously, and they are pretending that they don't? Or how would such a thing work if people don't have unconscious understandings?"*
      I don't know much about hypnosis, as I've not read any scientific literature on the subject. From what I've heard, it seems like an altered state of mind, and such states of mind can have various effects on people's emotions. For example, meditation can calm people and lessen their negative emotions, while praying can make people elated.
      As far as I can tell, the effects of hypnosis therapy seem to be explainable by such an influence on the person's emotions.
      *"And eventually, after I ask in a few different ways, they start thinking about it and they discover an underlying assumption that causes them to act in that way."*
      I don't think this assumption is unconscious, but rather forgotten. The assumption might have been used to form habits and then been forgotten while the habits stayed.
      I've had this happen in a game I played: when I started playing, I figured that health was better than defense. Later in the game, I could not remember why I had concluded that health was better than defense, but I kept choosing health whenever I had the choice. My understanding did not become unconscious, it was purely forgotten; I remembered only the choice, because I kept repeating it and had no need to remember the reasoning.
      Furthermore, we are very good at rationalizing (coming up with reasons after having already reached a belief or conclusion), so there is doubt about whether the assumptions you find came before or after their beliefs. It's even possible that people may come up with assumptions because you ask them to. If that is the case and your results are not placebo (I don't know you well enough, so I can't eliminate this possibility), then it would be interesting that changing such post hoc rationalizations can in turn change the belief or conclusion.
      Anyway, I'm not a specialist, so I wouldn't be surprised if I'm wrong; I'm just giving you my current understanding. I just hope that if you're trying to help people with your practices, you make sure you understand the scientific method as best as possible and stay up to date with the science; it's the most reliable way we have to avoid mistakes.

  • @andriik6788
    @andriik6788 1 year ago +5

    When we say "understand" we mean one or both of the following:
    1) we see some pattern (in other words, we create some model)
    2) we feel something
    If someone knows the whole theory about the patterns associated with physical pain, but has never experienced this feeling, does this mean that he really understands it? We would probably say "no", until he experiences this feeling for the first time.
    So, AI is definitely capable of (1). But is it capable of (2)?
    Actually, this is the same question as the "Chinese room", but related to feelings.
    Can we say that (1) and (2) are the same?

    • @axle.australian.patriot
      @axle.australian.patriot 1 year ago

      You placed the problem front and center with "until he experiences this feeling". This is the problem when we attempt to apply the description of a human to the description of automation. Words such as "understand" tend to also come with the human connotations of "emotion", "awareness of one's own internal thought process - reason", etc.

    • @harmless6813
      @harmless6813 1 year ago

      Feeling something (we are talking about sensory input, right?) usually helps with the understanding. But there are things to understand, that are not directly associated with any sensory input. Like much of math. So I don't think being able to feel something is strictly necessary to understand it. I guess we could talk about different levels of understanding.

    • @andriik6788
      @andriik6788 1 year ago +2

      @@harmless6813 Feeling something helps with understanding, yes. But not only. This IS another type of "understanding" available to us. This is what I mean. Direct perception.

    • @andriik6788
      @andriik6788 1 year ago +2

      @@harmless6813 And remember that "Chinese room" argument. If someone puts questions in the drop box and asks "How do you feel?", and the person in the room reacts according to the rules in his book and gives answers like "I feel pain", can we say that someone really feels pain in this case? And isn't that actually the same question as the one about "understanding"?
      I mean, if we say that the "Chinese room system" has understanding, why can't we say that the "Chinese room system" has feelings?

    • @axle.australian.patriot
      @axle.australian.patriot 1 year ago

      @@harmless6813 The word "understand" has an ambiguous, human-focused context that also brings with it connotations of self-awareness, emotion and consciousness. So to ask "Do ANNs and ML 'understand'?" is a contextually incorrect question to begin with.
      Admittedly you do need some genuine knowledge of the human condition to see the differences.
      Although it is only a small part of the greater context, Steven Pinker illustrates a great deal about this topic in his book "How the Mind Works" (1997), and a lot of points on AI can be drawn from his "computational theory of mind" :)

  • @nicolaischartauandersen8796
    @nicolaischartauandersen8796 1 year ago +3

    Great video! When you described why ChatGPT can't understand quantum mechanics, it sounded a lot like Michael Polanyi's concept of tacit knowledge (super simplified: 'we know more than we can say'). Polanyi uses exactly science communication as an example of how language can never entirely grasp what we can know and do. Ultimately, we might be able to add a 3D model to language-algorithm chatbots; but we can't program them to do things we can't understand ourselves.

  • @fredericdewitt1208
    @fredericdewitt1208 1 year ago +22

    I look forward to your weekly videos because: 1) I always learn something; 2) your sense of humor is terrific; 3) you really do take the "gobbledegook" out of a subject; 4) I simply enjoy your presentation of whatever subject you choose. AI may understand the words I just used, but how do you make it understand and "feel" anticipation, humor, enjoyment, etc.? Maybe they will figure it out someday, but I am skeptical.

    • @rjnash2610
      @rjnash2610 1 year ago

      erm ok... u know u just wanna hit that.

  • @SteveRowe
    @SteveRowe 1 year ago +53

    I loved your discussion of the definition of "understand". Also, your humour in this episode is top-notch.

    • @Safetytrousers
      @Safetytrousers 1 year ago

      In teacher training you are told never to use the word "understand" because it cannot be measured.

    • @ferdinandkraft857
      @ferdinandkraft857 1 year ago

      Unfortunately she is stuck within a totally materialistic worldview, so she doesn't acknowledge the necessity of a consciousness to actually _understand_ anything.

  • @jayall00
    @jayall00 1 year ago +16

    12:00 really hits the nail on the head. For AI to overcome all these barriers, I think it would have to be taught as if it were a baby: everything that we learned since we were born, from preschool to high school, geography, trigonometry, geometry, and so on, to be able to draw conclusions the way a person would.

    • @idot3331
      @idot3331 Год назад +6

      But the chatbot starts out with the capability to create complex dialogue like an adult human. The path of education for humans is based on the fact that our minds take a long time to develop and mature throughout childhood and adolescence, so it wouldn't make sense for a bot unless you can simulate it "growing up" from the mental capacity of baby up to an adult.

    • @kedrednael
      @kedrednael Год назад

      ​​@@idot3331 In the beginning of the training process it will spit out random text (not real words) as well

    • @theTranscendentOnes
      @theTranscendentOnes Год назад +2

Nope. This is the way humans learn. AIs DON'T NEED to learn the way humans learn in order to understand. You are simply being anthropocentric.

    • @man_at_the_end_of_time
      @man_at_the_end_of_time Год назад

@@theTranscendentOnes Also, I think Phillip is underestimating babies. I recall early segments of my very early childhood; I understood far more than a person who can't remember their early toddler life would think possible. YMMV, but this is more than opinion, it is how I recall it. AI is programmed with language and infants are pre-programmed to learn language. And the latest publicly available AI "learns" languages not in its initial programming. Does the AI have the spark of consciousness? I doubt it, but... Shrug

    • @ClaireCJS
      @ClaireCJS Год назад

      @@theTranscendentOnes ChatGPT had access to schoolbooks and other educational materials

  • @video_enjoyer
    @video_enjoyer Год назад +7

    Hi Sabine, not sure you'll see this but on the off chance you do, I have a topic request that this video made me think of! The Attention Schema Theory of consciousness is about the mental models you talk about here, specifically the model of the brain's attention, a well-studied neurological process. I only know a bit about it and I would absolutely love to hear your take on it and the papers which have been published about it!

  • @garymhamilton
    @garymhamilton Год назад +3

AI software like ChatGPT finds words and patterns, and strings them into "new" content based on other patterns. That's the kind of boring writing and padded content that I avoid like the plague. And let's remember, the web content used for training ChatGPT is replete with errors, unworkable solutions, hucksterism and fakery. At this point you need to be an expert on a topic to get much benefit.
AI will need to have legions of experts and editors providing high-value "training" content to realize its potential - that's what will be huge. Imagine a bot that asks you what kind of business software you need, and generates the end-to-end solution. Or a bot that figures out the logistics necessary for procuring and shipping high-value commodities, then does the scheduling and tracking. Or one that edits and brands RUclips videos - just saying :)

  • @5irefly
    @5irefly Год назад +28

In Searle's Chinese Room, the rule book is the algorithm+data, not necessarily just a lookup table. He's going after the Turing Test, which argues that an AI that produces human-equivalent input-output must actually understand the input. However, if the AI is merely performing a mechanical symbol substitution, no matter how sophisticated, then it arguably can produce satisfactory outputs without necessarily understanding the input. I read his book back in the early 80's and that's what stuck with me. (I went on to study ANNs in grad school and use them commercially today, so I do understand the various algorithms, training, backprop, etc.)

    • @nathanharrenstein1298
      @nathanharrenstein1298 Год назад +5

I would like to thank you, Parseeker, for bringing this distinction up, and I would like to expand on it slightly. Another way to put Parseeker's point is that Searle is pointing out the distinction between syntax (the rules-based structure of a language) and semantics (the meanings of the words). Searle's main assertion with the Chinese Room is that syntax does not get us to semantics. Parseeker rightly points out that look-up tables are only one form of algorithm that Searle is after; his stated target was "hard AI", or the idea that digital systems (or any other symbol manipulation methods) establish the necessary conditions for consciousness. Sabine is a very intelligent presenter and a good communicator, but held within her presentation are a couple of assumptions that it would be irresponsible not to point out. Sabine's point turns on the definition of the term model and whether or not we consider the model as having any semantic content. She assumes the model can process semantic content, or possibly that the syntactic structure can create semantic content. This assumption seems to come out of a metaphysics of (reductive?) materialism, which seems to be her other assumption. I have sympathies for these positions and believe they are perfectly reasonable even if I do not share them. When discussing AI models I am less convinced than Sabine that the training contains semantic content, and even more skeptical of the idea that the rules of the algorithm could create it. Frankly put, if there is no meaning to start with, I do not see how rules about the most probable next word can give us the meaning of the current word. As Sabine points out, we are not sure if this is anywhere close to how we learn, because we can only assess the inputs and outputs. Although I do not share her position, or Searle's for that matter, I appreciate Sabine engaging in this conversation and enjoy her content.

    • @kakistocracyusa
      @kakistocracyusa Год назад

@queerdo "she's presenting her metaphysical position as a scientific one. That is dishonest." This nails it - it is 100% correct and mirrors my own comments on her videos. I've come to the conclusion that she is incentivized to remain so intellectually dishonest.

    • @ArawnOfAnnwn
      @ArawnOfAnnwn Год назад +1

      ​@@kakistocracyusa It's not dishonest to present a topic as you understand it. She sees the world through a materialist lens and explains it as such - that is being true to her viewers about what she knows. And you can't say how she views these things is wrong cos the issue is still open in both science and philosophy.

    • @kakistocracyusa
      @kakistocracyusa Год назад

      @@ArawnOfAnnwn You assume she understands it, which she doesn't - so then, like you, she is simply ignorant of when she is discussing metaphysical questions and when she is discussing actual physics; and obfuscates her own ignorance of both physics and classical philosophy by deflecting to irrelevant quantum mechanical mysticism: the standard escape route for sub-standard physicists who opt for being pop-stars instead of productive physicists (see Fritjof Capra "The Tao of Physics" for another example of such reliance on spineless equivocation through muddy mysticism).

  • @shtthemnky
    @shtthemnky Год назад +7

Love your video, Sabine. Given that vision grabs 10x more information than listening... I'd expect multi-modal LMs to explode in their abilities. I await your vision/interpretation of emergence in this and future models!

    • @herbertdarick7693
      @herbertdarick7693 Год назад

      Well, I prefer language over graphs and other visual material, so it depends on who is your target audience.

  • @saravananrajendran4170
    @saravananrajendran4170 Год назад +4

You may be very surprised, Sabine, when you put the same prompts re: entangled particles into GPT-4 instead. It gave what would be the correct answer here (pasted below!). This makes me think the observations you have shared in this video aren't a fundamental feature of transformer models, but just a limitation of GPT-3 which was overcome.
    Q: If you perform an operation which is not a measurement on one particle (like flipping spin), in a pair on entangled particles, does that affect the other particle?
A: When you have a pair of entangled particles, their properties become correlated in such a way that the state of one particle is dependent on the state of the other, even if they are separated by large distances. This correlation is described by the entanglement itself, and it persists until the entanglement is broken, usually due to a measurement or interaction with the environment.
If you perform an operation on one of the entangled particles, such as flipping its spin, you are indeed modifying the state of that particle. However, the effect on the other particle is not as direct as it might seem. The entanglement between the particles means that the state of the second particle is still correlated with the first, but it doesn't mean that the state of the second particle changes instantaneously or deterministically as a result of the operation on the first particle.
    To illustrate this, let's consider a simple example of a pair of entangled particles with spins. Suppose that their spins are entangled such that when one is measured to be "up," the other will always be "down," and vice versa. If you flip the spin of the first particle, you have changed its state, but you have not directly changed the state of the second particle. The entanglement between the particles still dictates that if you measure the spin of the first particle and find it to be "up," you can be certain that the spin of the second particle is "down."
    In summary, performing an operation on one particle of an entangled pair, like flipping its spin, does not directly or instantaneously affect the state of the other particle. Instead, it modifies the relationship between the particles, which remains correlated due to their entanglement.

    • @IngoSchwarze
      @IngoSchwarze Год назад

      Wait, that still isn't the correct answer unless i got my elementary quantum mechanics very wrong.
      Initially, the state of the system is |1+>|2-> + |1->|2+> (modulo normalization and non-physical phase factors).
      If you now flip the spin of the first particle, the state changes to |1->|2-> + |1+>|2+>.
      If you now measure the state of the first particle and find it to be |1+>, you then know that the state of the second particle is |2+>, too,
because <1+|(|1->|2-> + |1+>|2+>) = <1+|1->|2-> + <1+|1+>|2+> = 0|2-> + 1|2+> = |2+>.
      To summarize:
      Wrong statement by ChatGPT: if you measure the spin of the first particle and find it to be "up," you can be certain that the spin of the second particle is "down."
      This is correct instead: if you measure the spin of the first particle and find it to be "up," you can be certain that the spin of the second particle is also "up."
      This means ChatGPT still has a wrong model and still does not understand (this kind of) elementary quantum mechanics.
      By the way, this illustrates perfectly what Sabine said: Theoretical physics is often easier to explain in mathematical formulae than in words. That does not just apply to quantum physics, by the way, you can see the same effect in classical theoretical mechanics, classical theoretical electrodynamics, and classical relativity theory.
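      For what it's worth, the bra-ket bookkeeping in this reply can be checked numerically with plain state vectors. This is a minimal sketch; the basis ordering and the helper name flip_first are my own choices, not anything from the thread.

```python
import math

# Basis order: |++>, |+->, |-+>, |--> (first symbol = particle 1).
s = 1 / math.sqrt(2)
psi = [0.0, s, s, 0.0]            # (|+-> + |-+>)/sqrt(2): anti-correlated pair

def flip_first(state):
    # Pauli-X on particle 1 swaps |+ x> <-> |- x>: indices 0<->2 and 1<->3.
    return [state[2], state[3], state[0], state[1]]

phi = flip_first(psi)             # becomes (|--> + |++>)/sqrt(2)

# Probability particle 2 is "up" given particle 1 was measured "up":
p_upup, p_updown = abs(phi[0])**2, abs(phi[1])**2
print(p_upup / (p_upup + p_updown))   # -> 1.0: the spins now agree
```

      So after the flip, measuring "up" on particle 1 implies "up" on particle 2, matching the correction above.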

  • @philippwitte3590
    @philippwitte3590 Год назад +2

    I don't agree with the argument that LLMs understand because they are able to extrapolate and are not just look-up tables as in the original Chinese Room example. A trained network like GPT3 with fixed weights is really still a lookup table, although a very complicated one. But if you give GPT3 a new sentence to complete it hasn't seen before, it goes through a fixed number of prescribed steps to come up with an answer. The whole procedure of completing a sentence is still an algorithm (multiplication with sets of learned weights etc). The prediction doesn't have to be deterministic either, because GPT draws new words from a distribution of likely tokens. Nevertheless, that's also an algorithm, or in other words a set of instructions from a lookup table. Fundamentally, there is no difference between translating a text using a dictionary (as in the original Chinese room) and querying GPT. Both are algorithms.
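    The fixed-steps procedure described above can be caricatured in a few lines. The weights and vocabulary here are entirely made up for illustration; real transformers are vastly larger, but the shape of the computation (frozen weights, deterministic probabilities, random draw at the end) is the point being made.

```python
import math, random

# Toy "trained model": frozen logits per context. After training, these
# numbers never change, so the forward pass is a fixed procedure; only
# the final sampling step is random.
WEIGHTS = {"the cat": [2.0, 0.5, -1.0]}
VOCAB = ["sat", "on", "mat"]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def next_token(context, rng):
    probs = softmax(WEIGHTS[context])            # deterministic given weights
    return rng.choices(VOCAB, weights=probs)[0]  # randomness only in the draw

probs = softmax(WEIGHTS["the cat"])
print(next_token("the cat", random.Random(0)))
```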

  • @greenmalice7
    @greenmalice7 Год назад +11

    I think Sabine’s comment that we cannot answer whether something is conscious is widely overlooked by the general population, and so spot on! We can hardly agree on the ethics of minimally conscious states in humans. We also know that animals are intelligent, and can have a sense of self in some cases. We simply don’t have the tools to identify consciousness. We should be very careful to summarily rule it out!

    • @sacr3
      @sacr3 Год назад

We continuously use ourselves as the representation, but there are aspects of our consciousness that are specific to us, and consciousness encompasses many different aspects, like qualia, rational thought, common sense, etc.
So if this program has at least one aspect of the multitude of aspects that make up our consciousness, then we can say that it has some form of consciousness - much more limited than ours, but there regardless.

    • @beders
      @beders Год назад

It's a bullcrap argument. We know EXACTLY how ChatGPT operates. What we don't know - because we can't read it all - is the enormous amount of text that ran through the learning algorithm.
It is pathetic that we are even using the word "consciousness" for silicon hardware on a von Neumann architecture. Absolutely laughable

    • @lifevart4143
      @lifevart4143 Год назад +1

I suspect consciousness is a self-defined property. It can't be defined by exterior input or output. Intelligence and knowledge aren't enough. I know I'm conscious, but I'm not sure about everyone else. Using that basis I can confidently say ChatGPT isn't conscious - because it says it isn't. That isn't to say some AI in the future might not answer the question below differently.
      Me: Are you a conscious entity?
      ChatGPT: As an artificial intelligence language model, I do not possess consciousness in the same way that humans do. I am a computer program created by OpenAI to process and generate text based on the input I receive. While I am capable of mimicking human-like responses and carrying on conversations, I do not have subjective experiences, emotions, or consciousness.

  • @brothermine2292
    @brothermine2292 Год назад +12

    If you define "understands" as "can use with few errors" then it's fair to say that chatbots understand language. But that's a low bar. For one thing, it doesn't imply an understanding of language semantics. And it of course doesn't imply subjective experiencing of the meaning of its own sentences.

    • @SabineHossenfelder
      @SabineHossenfelder  Год назад +1

      The relevant point is that you have a model.

    • @jhoughjr1
      @jhoughjr1 Год назад +5

@@SabineHossenfelder The relevant point is it gets an A on the test yet doesn't know how to apply it.
You are really stretching the definition of understanding to suit your conclusions.

    • @brothermine2292
      @brothermine2292 Год назад +5

      @@SabineHossenfelder : The relevance of having a model depends on the definition of "understands." Also, in the video a "model" is just a poorly defined black box.

    • @jonbbbb
      @jonbbbb Год назад +2

      @@jhoughjr1 what do you mean that it doesn't know how to apply it? Chat GPT is proving it applies the rules it learned during training every day in interactions with real people. Honestly confused about what you're trying to say.

    • @jonbbbb
      @jonbbbb Год назад +1

      @Brother Mine saying that using language with few errors is a low bar is really hard to believe. It's an incredibly high bar. One might say it's the highest bar we know how to set that real people can also achieve. It's the basis of the Turing Test for intelligence for example. You've made me curious to hear what you think would be a higher bar to demonstrate understanding than the full generality of human language. But even in typing that out, I've realized you're describing a process that by definition you can't demonstrate to me because that would use language.

  • @jozefwoo8079
    @jozefwoo8079 Год назад +4

    So glad the ChatGPT can answer questions about dropbox, teams, outlook, slack, tinder, twitch, etc without defaulting to its homonym app like Google does! Can be frustrating sometimes 😊

  • @joepike1972
    @joepike1972 Год назад +1

12:03 I chatted with bots about this. They said this happens when your prompts are "edge case", "out of domain", or "long tail" scenarios that are a challenge for them to answer correctly. They can learn to correct their mistakes if given feedback and repeated returns to the subject.
I run into a similar problem when I ask them to use SVG code snippets to create a visual image of a red heart, or a crescent shape, despite their being able to use the format to create lines, circles, triangles, ellipses, squares and stars.

  • @r3tex
    @r3tex Год назад +46

    As an AI developer I think you made a great video and I love your introductory anecdote on people "understanding" physics.

    • @Totalinternalreflection
      @Totalinternalreflection Год назад +1

@@marcosolo6491 With due respect, that really doesn't make sense to me. Of course discussion about whether they are or could ever be conscious is not moot. People are interested, so let's discuss it. Is anything we have created so far conscious or sentient? Obviously not. Can it be eventually, and is it inevitable given our current path and understanding? I would say very much yes. Will it be like us? I guess we should probably hope not.

    • @egor.okhterov
      @egor.okhterov Год назад +1

@@marcosolo6491 consciousness is an emergent phenomenon. You could imagine in your head that when you look at gas particles they move around chaotically, but when you start cooling them down they will gradually become a fluid, which is more of a "single" entity than a gas. If you continue freezing the fluid, it becomes a solid, which we definitely consider even more of a single thing, because all particles behave in coordination as part of a single lattice structure.
The same thing happens with magnets. If you heat a magnet, it loses its magnetic properties because the particles start pointing in random directions; if you cool it down, you start observing on the macro scale that the magnet behaves like a magnet :)
The same thing with superfluids (I recommend looking up videos about them).
Our brain operates at the "critical temperature", if you will. Metaphorically it is always in transition between the fluid and solid state. The parts of the brain that "freeze" at a certain moment are exactly the consciousness.

    • @GS-tk1hk
      @GS-tk1hk Год назад +1

      @@marcosolo6491 Understanding does not require consciousness and your examples are nonsensical. A book doesn't understand QM because it doesn't have an internal model of QM that it can extrapolate from and reason about.

    • @Merilix2
      @Merilix2 Год назад +1

@@marcosolo6491 I know what you mean, but I think your "book" example doesn't match the topic. The book itself is just a container, but a machine (biological or electronic, doesn't matter) that utilizes that book to give you an answer to a specific question is somewhat different.
Neither our brain nor the AI is remotely like the book you mentioned. They are not comparable.

    • @Totalinternalreflection
      @Totalinternalreflection Год назад

      @@marcosolo6491 yes but our computational architecture is rapidly moving beyond brute force, neural networks, machine learning etc

  • @VictorVæsconcelos
    @VictorVæsconcelos Год назад +9

    As a neuropsychologist (and psychometrician), even the algorithms we use closely resemble theirs. Bayesian reasoning, logarithmic sensorial relationships, cut-off for signaling (all-or-nothing) in neural networks, etc. It's not a question of if, but when they will become conscious.
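    The "cut-off for signaling (all-or-nothing)" mentioned here can be illustrated with a one-line unit in the McCulloch-Pitts spirit; the weights and threshold below are toy values of my own choosing.

```python
# A minimal all-or-nothing unit: the weighted input either crosses the
# threshold and "fires" or it doesn't, loosely analogous to the
# all-or-nothing character of a neuron's action potential.
def fires(inputs, weights, threshold):
    return sum(i * w for i, w in zip(inputs, weights)) >= threshold

print(fires([1, 0, 1], [0.6, 0.9, 0.5], 1.0))  # -> True  (0.6 + 0.5 >= 1.0)
print(fires([1, 0, 0], [0.6, 0.9, 0.5], 1.0))  # -> False (0.6 < 1.0)
```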

    • @pietervoogt
      @pietervoogt Год назад +1

      Do you know a video that explains these similarities in more detail? Or can you make one?

    • @polarizedpotstone
      @polarizedpotstone Год назад

      Agreed, it would be really really interesting to see the neuroscientific interpretation/reasoning/…. If you have any video suggestions that would be awesome!

    • @bytefu
      @bytefu Год назад

      ​@@pietervoogt Check out the video "Brain Criticality - Optimizing Neural Computations" by Artem Kirsanov, it is truly enlightening.

    • @polarizedpotstone
      @polarizedpotstone Год назад

      @@tinkletink1403 i have, but am not sure how that relates.

    • @bytefu
      @bytefu Год назад

      ​@@tinkletink1403 Is there any evidence supporting Penrose's speculations about all that quantum mumbo-jumbo? I certainly haven't heard of any.

  • @gsyl655
    @gsyl655 Год назад +4

I think of school as a place where inputs and outputs are performed and measured on students, which seems very similar to what AI produces today. However, in school there is also a reward/penalty system that shapes the student's ability to understand. For an AI system to really develop understanding in that manner, it seems it would need to be programmed to associate pleasure and pain with an underlying desire to survive and succeed. Pretty scary...

    • @younggod5230
      @younggod5230 Год назад

      That's not true. Learning also happens in organisms without any associations of pleasure and pain. But curiosity can be a motivator for learning, and admittedly, curiosity is probably a survival mechanism.

  • @S13Reborn
    @S13Reborn Год назад

    14:25 This is what you get from that same prompt with GPT-4: The entanglement of two particles, as described by quantum mechanics, implies a deep and fascinating connection between them. When two particles are entangled, the state of one particle is immediately connected to the state of the other, regardless of the distance between them. This doesn't mean that information is transmitted faster than light, but rather that the measurement outcomes on both particles are correlated in a way that can't be explained by classical physics.
    However, this correlation only appears when a measurement is made. If you perform an operation on one of the particles that does not involve measurement, the other particle will not be immediately affected or altered in any way. The quantum state of the system might change due to the operation, but the entanglement won't manifest itself until a measurement is made.
    Moreover, if the operation performed is unitary (a standard type of quantum operation that doesn't involve measurement), it will generally just change the form of the entanglement without breaking it. For instance, if the particles were entangled in the position basis, a unitary operation might change this to entanglement in the momentum basis. If the operation is not unitary (e.g., involves some sort of decoherence), it could potentially destroy the entanglement.
    It's important to remember that entanglement doesn't mean that one particle is sending signals or influencing the other instantaneously. Rather, entanglement creates correlations between the measurement outcomes, which become apparent only after the measurements are compared.

  • @eliasf.fyksen5838
    @eliasf.fyksen5838 Год назад +20

I'm a computer scientist. Lately, non-computer scientists spewing bs about how LLMs do and don't work has been a constant struggle in my life. I have to admit, when I saw this title I was worried: "oh no, not you as well". However, as usual, when I think you've finally got something wrong, I find out that Sabine never gets it wrong. You hit the nail right on the head as usual!

    • @Basement-Science
      @Basement-Science Год назад

Normal people will eventually start understanding AI more and more as they interact with it ever more frequently. Don't let it stress you out too much :)

    • @robertbutsch1802
      @robertbutsch1802 Год назад

      This is an interesting comment. I wonder how ChatGPT would “understand” “I‘m a computer science?” I know that you are saying you’re a computer scientist but I wonder if ChatGPT would “understand” that. What if your typo was “I am a computer seance?” I would still understand what you mean but ChatGPT likely would be confused. A real test of “understanding” is recognizing mistakes and knowing what really was meant. I’m not sure how good LLMs are at this.

    • @eliasf.fyksen5838
      @eliasf.fyksen5838 Год назад +2

@@robertbutsch1802 This is a very interesting point. I think it relates to the fundamental problem of self-supervised learning: if you receive a new data point, how can you tell if this data point is a mistake or just an instance you haven't seen before? How can you judge whether a data point is likely part of a distribution if you have only seen positive samples?
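      The distribution-judgment problem raised here can be sketched with a crude z-score rule; the numbers and the 3-sigma threshold are my own illustration. The sketch flags unlikely points, but it still cannot distinguish a genuine mistake from a rare valid instance, which is exactly the difficulty.

```python
import statistics

# Fit a crude model to the only data available: positive samples.
seen = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0]
mu, sigma = statistics.mean(seen), statistics.stdev(seen)

def looks_out_of_distribution(x, k=3):
    # Flag points more than k standard deviations from the mean.
    return abs(x - mu) > k * sigma

print(looks_out_of_distribution(10.1))  # -> False: within the seen range
print(looks_out_of_distribution(25.0))  # -> True: far outside it
```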

    • @harmless6813
      @harmless6813 Год назад

      @@robertbutsch1802 You know, you could just ask ChatGPT instead of guessing.

    • @RiversJ
      @RiversJ Год назад +2

@Basement Science I seriously doubt that. The very first thing that happens when a complex subject arrives to the general public is that they mangle all meaning from the words and arrive at an incorrect model that nevertheless works well enough that other people don't laugh at them. The people making it happen will remain just as nerdy as before.

  • @nilucifar
    @nilucifar Год назад +6

    Your presentations are serious and humorous at the same time. Absolute delight!

  • @o_o8203
    @o_o8203 Год назад +40

    When humans learn how to speak, the adults around us identify things for us, ask us questions about things that they've identified, correct our answers when they're incorrect, and give us positive feedback when our answers are correct. This seems not so different from how chatGPT learns. This is why I always say thank you to the chat bots 😂

    • @polarizedpotstone
      @polarizedpotstone Год назад +7

I think I'd disagree. GPT-3, the actual model behind ChatGPT, has been trained on a whole ton of internet. Humans learn piece by piece in their own limited bubble of life. Also, reinforcement learning (equivalent to our surroundings correcting us) is only one technique to train an AI. GPT-3 (as far as I understand) has not been trained with reinforcement learning, but really just got a big af info dump. OpenAI however does use reinforcement learning to further align GPT-3, resulting in models like ChatGPT, so I do have to somewhat agree there. Oh, and I also can't help but always say thank you :D

    • @notanemoprog
      @notanemoprog Год назад +1

      I will remember to say "thank you" when the ChatGPT robots come to take me away to a processing plant

    • @amihart9269
      @amihart9269 Год назад

You can train a neural network through a process like you described, where a "teacher" tells the neural network if its decisions are correct or incorrect. But you can also train a neural network through evolutionary algorithms, by creating replicas of the network with slight variations, getting rid of the ones that perform worse and keeping the ones that perform better. People tend to think of humans as only learning from birth until death, but life as a whole has been learning through biological evolution for hundreds of millions of years. You are born with a lot of pre-trained brain structures programmed into your DNA, so that your body builds these brain structures as you grow up. For an AI, it's much more difficult to learn even simple things like how to recognize objects in images, because it does not have the benefit of hundreds of millions of years of pre-training evolution, while your visual cortex is partly something trained through reinforcement learning and partly something constructed from DNA information you inherited.
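      The evolutionary idea sketched above (replicate with slight variation, keep the better performer) can be shown with a toy one-parameter example; the quadratic fitness function with its optimum at 3.0 is a made-up illustration, not anything from a real training setup.

```python
import random

# Toy evolutionary loop: mutate a single candidate "weight", keep
# whichever copy scores better on the fitness function.
def fitness(w):
    return -(w - 3.0) ** 2           # best possible score at w = 3

rng = random.Random(42)
w = 0.0
for _ in range(300):
    child = w + rng.gauss(0, 0.5)    # replica with a small random variation
    if fitness(child) > fitness(w):  # discard the worse performer
        w = child
print(w)   # converges near 3.0
```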

    • @dtibor5903
      @dtibor5903 Год назад +1

@@polarizedpotstone It's trained on datasets scraped from the internet and other sources, but they were selected, curated and labeled by hundreds of humans...

    • @bartsanders1553
      @bartsanders1553 Год назад +3

It doesn't have physical external stimuli, though.

  • @codebycarlos
    @codebycarlos Год назад +2

    Already ChatGPT is improving. I asked it with version 4 the same exact latitude question. This time I got:
    Windsor, UK is further north than Toronto, Canada. Windsor is located at approximately 51.48 degrees north latitude, while Toronto is located at approximately 43.70 degrees north latitude.

  • @johnfitzgerald8879
    @johnfitzgerald8879 Год назад +3

I am finding GPT to be insightful, able to carry on conversations, able to compare and contrast its abilities with those of others. It repeatedly demonstrates an ability to expand on the topic at hand. For instance:
    GPT replied, in our conversations, "I agree that it is important to understand the context in which a person is using a term and not overextend its meaning into other contexts. It is important to communicate with clarity and to strive to understand what someone means by the words they use. Admonishing individuals for their usage of a term that may be inappropriate in another context can indeed be dismissive and devaluing, and it can impede productive communication. It is important to approach discussions with an open mind and to seek to understand each other's perspectives."

    • @0LoneTech
      @0LoneTech Год назад

      A lovely sentiment, but not very accurate as a description of ChatGPT's actions. Its job is to produce a reply, not request clarifications. Basically, overextending its understanding is its core function.

    • @johnfitzgerald8879
      @johnfitzgerald8879 Год назад

@@0LoneTech It doesn't matter how you get there. You are not distinguishing the process from the product. There are many manufacturing processes and materials that are used to create a spatula. A spatula is still a spatula, it flips eggs, regardless of whether it is made of plastic or of steel bar and plate.

    • @0LoneTech
      @0LoneTech Год назад

      @@johnfitzgerald8879 Not sure why you're discussing a spatula, but my point is that this flowery description shows that GPT doesn't understand its own behaviour. You end up with three options: It's unaware of what it's saying, it's unaware of what it's doing, or it's utterly hypocritical (with no care for accuracy). None of these are particularly insightful behaviours. I'm leaning towards it having a very tenuous concept of what it's saying, none of itself, and none of truth.

    • @johnfitzgerald8879
      @johnfitzgerald8879 Год назад

@@0LoneTech I'm not sure why you are speaking of whether GPT processes information the same way that humans do. Though humans would typically understand the analogy of spatulas being made of different materials and construction, something that GPT may have trouble with as well.
Here is how the analogy works. GPT is made of silicon where humans are made of meat. One spatula is made of plastic where the other is made of steel. Both spatulas still flip burgers. Both humans and GPT are able to produce sentences relevant to the conversation.
Are you sure you aren't an AI? Cuz I have identified areas where GPT is limited in its ability to make connections, like analogies.
And I have experienced humans that are capable of little more than parroting words and phrases they have learned. Humans make numerous cognitive errors while oblivious and in denial that they have. GPT does appear to recognize an error when it is pointed out to it.

    • @0LoneTech
      @0LoneTech Год назад

      @@johnfitzgerald8879 So, your first note attributed properties to GPT it clearly does not have. Your second shoved in oblique references like "there" without context. And the third demonstrates a failure to comprehend (that wasn't what I was talking about, it's obvious it doesn't), and a bunch of asinine veiled insults. Thanks for demonstrating your interest in productive conversation is less than GPT's, I guess.

  • @SammaLlamas
    @SammaLlamas a year ago +5

    I just found your channel when my girlfriend and I were watching a random video from my recommendations. Honestly I'm happy I stumbled across your channel; your wit and the way you discuss the subject are so nice I felt like I became a bit smarter 😆

  • @dirkk9931
    @dirkk9931 a year ago +14

    9:55 gave me goose bumps - scary 😱 Thanks for your video. Do you think that if AI understands, it might also have consciousness? Maybe we are also only biological robots, and every complex structure has some kind of consciousness?

    • @fandomguy8025
      @fandomguy8025 a year ago

      My favorite theory of consciousness is the attention schema or attention model theory. It posits that consciousness is a result of our brain creating a model brain in order for it to understand and thus control itself. The brain needs to focus on some information but not all the information it gets, generates, or stores because it's just too much. As you read this, you ignore everything else going on around you and inside you. This is its attention system, and the attention model helps the brain control it.
      Since models are useful but simplified descriptions, it takes the form of a "spirit" or "agent" that's in the machine, instead of the ultracomplicated mess of neurons the brain actually is.
      This is what we (we being our brains) call the soul. That magic thing inside the meat robot that can focus on stuff (Sights, sounds, smells, memories, feelings, etc.) (These models are also simplified. "Color" isn't really a thing, light has different wavelengths, it's just a simplified model your brain uses to show that not all light is the same.)
      Though, of course, it's more complicated than that since our attention model also connects to our brain's body model and that connects to our brain's world model so we perceive ourselves as magic things that focus on stuff + located in the head of a body that senses stuff + are somewhere in the world. (And of course, our brain's learned language model, allowing you to talk like a chatbot.)
      It'd be very scary to just be an attention model existing in zero gravity darkness and quiet, what sensory deprivation tanks try to achieve. (Causing vivid hallucinations since the brain does not like having no input and will automatically generate stuff. People consider this fun.)

  • @marioxerxescastelancastro8019
    @marioxerxescastelancastro8019 3 months ago +1

    8:17 There is a mistake here: neural networks can learn a lookup table of the training set. That is the most extreme form of overfitting.
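
The point being made here can be made concrete with a toy "model" that literally memorizes its training set (a sketch for illustration; the `fit` helper is made up, but a large enough network can behave the same way):

```python
# A "model" that just memorizes its training set is the most extreme form
# of overfitting: perfect on the data it has seen, clueless on anything new.
def fit(examples):
    table = dict(examples)      # memorize every (input, label) pair verbatim
    default = examples[0][1]    # arbitrary fallback for unseen inputs
    return lambda x: table.get(x, default)

train = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR
model = fit(train)

train_accuracy = sum(model(x) == y for x, y in train) / len(train)
print(train_accuracy)       # 1.0 -- yet nothing was learned about XOR itself
print(model((2, 3)))        # unseen input: just the arbitrary default, 0
```

Perfect training accuracy, zero generalization: exactly what "learning a lookup table of the training set" means.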

  • @bogdanbaudis4099
    @bogdanbaudis4099 a year ago +5

    The word "understand" may be one of the least understood words in English. BTW: Polish equivalent is "zrozumieć"/"rozumieć" which is more descriptive as "rozum" means "mind" (so "rozumieć" is like "getting-something-in-mind" or "getting-aligned-with-mind". Now of course: what "mind" actually is? 🙂

  • @garysteven1343
    @garysteven1343 a year ago +5

    Can't help but like this video and its consistent and logical arguments. Thank you for your work Sabine! ❤️

  • @rismosch
    @rismosch a year ago +26

    13:15 "This is physics so when I say operation I don't mean heart surgery, but something a little more sophisticated." I choked on that one, lol. Keep these jokes coming, you are hilarious 👍

  • @diegopescia9602
    @diegopescia9602 a year ago

    So my take is that AI language models learn completely by induction. Learning from scratch and only by observation.
    "Visualizing the behaviour of a system and probing it and seeing what is does is another way of building a model in your head". GPT does all of that:
    - Visualizing: training data (quirk: if training data has errors or biases, the language model will reproduce or amplify them)
    - Probing: writing text
    - Seeing what it does: human feedback

  • @rdm5546
    @rdm5546 a year ago +31

    You are incredible, and you present complex issues in a way very well suited for humans! I am a megafan of all your work.

    • @notanemoprog
      @notanemoprog a year ago +2

      Sabine's obviously not real and is in fact an AI

    • @aslansm
      @aslansm a year ago

      Signed: ChatGPT…

  • @capfan302
    @capfan302 a year ago +8

    "We're about to create an intelligent species that will be very different from our own, and if we're dumb enough to cause our own extinction this way then I guess that's what we deserve. Meanwhile, enjoy the ride."
    This is going on my wall of notable quotes.

    • @martinm6368
      @martinm6368 a year ago

      I for one am a big fan of human extension. It's better than extinction for sure ;)

    • @axle.australian.patriot
      @axle.australian.patriot a year ago

      I have to fall back to my lifelong checkpoint: "It's not a question of 'Can I', but a question of 'Should I'".

  • @5pp000
    @5pp000 a year ago +14

    Ah, but Sabine, it IS just a lookup table! Each word is mapped to a vector in a high-dimensional space. The vectors ("embeddings", they are called) for successive words are combined in a very complex and clever way, to produce a probability distribution on the next word. It picks (normally) the highest-probability word and outputs it, then treats the word it just output as the next word of the input, and repeats.
    So basically, you've got a set of lookup tables totalling some 175 billion entries, and an algorithm for performing the lookups and combining the results. Think of it as interpolation in a very-high-dimensional space. The result is an extremely sophisticated statistical model.
    And it turns out that if you build a sufficiently sophisticated statistical model of human verbal behavior, it can produce remarkably human-sounding output. I've been as surprised as anyone by some of the things chatbots have said -- maybe more so because I know they don't understand anything.
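
The lookup-and-combine loop this comment describes can be sketched in miniature (a toy vocabulary with made-up scores; real models produce one score per vocabulary entry and repeat this once per generated word, feeding each output back in as input):

```python
import math

def next_word_distribution(logits):
    """Turn unnormalized word scores into a probability distribution (softmax).

    `logits` stands in for the result of combining the embeddings of the
    preceding words; the numbers here are invented for illustration.
    """
    m = max(logits.values())                      # subtract max for stability
    exps = {w: math.exp(s - m) for w, s in logits.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

# Made-up scores for continuations of "The cat sat on the ..."
probs = next_word_distribution({"mat": 4.0, "sofa": 2.5, "moon": 0.5})
best = max(probs, key=probs.get)   # greedy decoding picks the top word
print(best)                        # -> mat
```

Greedy decoding is only one choice; sampling from `probs` instead is what makes chatbot replies vary from run to run.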

    • @tacitozetticci9308
      @tacitozetticci9308 a year ago +12

      tbh this sounds like my brain is a lookup table too.
      The chatbot can manage information in sophisticated ways as if it had the actual models hidden somewhere in there, emerging naturally from training. But moreover, it can seemingly create new models if needed, just look at the top comment.
      In other words, calling all this a "lookup table" just makes the expression lose its meaning.
      It's like saying that all existence is a set of instructions, an algorithm you could put together with enough time:
      yeah you could probably, but what's the point you're making? The result wouldn't be what we would usually call an algorithm, it's very forced and abstract, just like calling "that" a lookup table.

    • @syoexpedius7424
      @syoexpedius7424 a year ago +4

      But they DO have world models; this is difficult to see in the large ones, but it has been proved by analyzing a small and specialized GPT. It doesn't just do statistics, it has an actual inner model of "reality".

    • @vikramgogoi3621
      @vikramgogoi3621 a year ago +4

      @@tacitozetticci9308 This is the problem. People like you and Sabine have no idea about the math and programming behind these models. All you are doing is adding philosophy to popular science to come to a conclusion. Get dirty with the code and try to understand how the thing works. You will then understand that it IS a lookup table and that Sabine should stick to a domain in which she is an expert and not try to become another pop-sci RUclipsr with no understanding of her topics.

    • @idot3331
      @idot3331 a year ago

      @@tacitozetticci9308 Exactly. Until we have any clue of the cause of sentience or awareness in humans, we have no way to prove that any computer is not in some way sentient or aware. The neural networks that are used for these chatbots are called neural networks because they are based on what we understand about the behaviour of neurons in the brain. My personal philosophy of consciousness is that everything in the universe is "conscious" to some degree, with most systems being near absolute zero. Advanced consciousness like that of animals may be an emergent property from such incredibly complex systems as our brains that can map a model of reality within them and somehow cause the feeling of experiencing life. If that's the case then there's no reason to believe the same couldn't happen with computers. That's only my own intuitive belief which is probably wrong, but it's currently just as valid as any other understanding of why we can think and feel and experience anything rather than just exist as complex meat robots following the programming of our brains.

    • @SoftBreadSoft
      @SoftBreadSoft a year ago +2

      @@vikramgogoi3621 a lookup table is a linear mapping. AI models are self-optimizing networks and can approach linear but are not.

  • @Ramkumar-uj9fo
    @Ramkumar-uj9fo a month ago

    Yes, many people gain a practical understanding of probability through AWS Bedrock and similar platforms, where they encounter concepts like **Top N words** (ranking word probabilities) and **temperature** (controlling randomness in language model outputs). These features help users experience firsthand how probability influences predictions and outcomes in large language models (LLMs).
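
The two knobs named here can be sketched together (a toy illustration with made-up word scores, not any platform's actual API):

```python
import math
import random

def sample_top_n(logits, n, temperature, rng):
    """Top-N sampling: keep only the n highest-scoring words, then sample
    from a temperature-scaled softmax over just those survivors."""
    top = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)[:n]
    exps = [math.exp(score / temperature) for _, score in top]
    total = sum(exps)
    r = rng.random()
    cum = 0.0
    for (word, _), e in zip(top, exps):
        cum += e / total              # walk the cumulative distribution
        if r < cum:
            return word
    return top[-1][0]                 # guard against float rounding

# Made-up scores; with n=2, "green" and "plaid" can never be chosen,
# no matter how high the temperature is.
logits = {"blue": 3.0, "grey": 2.0, "green": 1.0, "plaid": -2.0}
print(sample_top_n(logits, n=2, temperature=0.7, rng=random.Random(0)))
```

Lower temperature sharpens the distribution toward the single top word; higher temperature flattens it, making the lower-ranked survivors more likely.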

  • @jackoneil3933
    @jackoneil3933 a year ago +6

    Thank you Sabine. After a few months of interacting with ChatGPT and being exposed to a cutting-edge, self-learning AI that a former best friend has been involved with, I've caught glimpses of what appears as awareness and possibly self-awareness. And as I get a better grasp of how neural nets are trained on human patterns, and how machine learning works, I sense that neural nets and pattern-recognition algorithms would inevitably come to closely replicate human biological networks and pattern-recognition biology. To the degree that human consciousness resides in, and is part of, our biological networks and pattern recognition, chatbots and language models would inevitably develop similar patterns of inference and consciousness, don't you think?

    • @polarizedpotstone
      @polarizedpotstone a year ago +1

      why do you believe that to be inevitable?

    • @jackoneil3933
      @jackoneil3933 a year ago +2

      @@polarizedpotstone Good question. It occurred to me that given the structure of layered neural networks, how they function and learn similarly to human neural networks, and how they are typically trained by specifying parameters and then being presented with real-world items or situations until they get the specified result, much like how the human brain learns, the neural net should 'inevitably' begin to mimic human thought patterns.
      At the most basic level, a basic neural net might recognize a still image of a cat, or a human face or a voice as your smart phone can do. More complex self-learning nets recognize words, then speech patterns, and eventually context of sentences. More advanced neural nets recognize complex patterns and now plain language, contextual requests.
      For example, I recently asked ChatGPT to write a closing summary of a book I was familiar with, Mark Twain's 'The Innocents Abroad' in the style of Mark Twain, and it didn't just take random elements of the book and string them together, it summarized Twain's sentiments and conveyed them as if it were he who was conveying them in a way to move and inspire the reader as he did throughout the book.
      I then asked ChatGPT to re-write a random mundane travel vlog about a contest at a state fair in Texas as Mark Twain would have seen it and written about it, and I was mind-blown by how it recognized the context of events and actually interpreted them as Twain did in his many works, complete with the comic irony and self-deprecating humor that resonates with us on a human, comic level, none of which was in the original vlog.
      At that point I had to ponder to what degree, in analyzing all of Mark Twain's works and then perfectly duplicating his style and wit, it might have also duplicated the patterns of thought Twain developed over a lifetime of constructing his human neural net. At the point where a machine neural net can nearly perfectly predict and emulate a person's words or behavior based on past data, an even more advanced neural net with more detailed training on how humans perceive and interact would eventually develop into something functionally indistinguishable from a human being, and at that point it would have developed into something very similar if not identical to a human mind, would it not?
      I recently saw Elon Musk respond to a press event question by saying that advances in self-learning neural networks will inevitably lead to a self-driving system that could use nothing more than a single camera and, by sensing human movements of the steering wheel, accelerator pedal and brake, could learn how to drive a car as a human does. That leads me to question: if such a learning system were to sense a person's interactions to the point it was indistinguishable from that person, would it actually be that person?
      A longtime friend who has worked in human and machine intelligence for many years, and has a company that works with cutting-edge artificial intelligence, a few years ago trained a chatbot on our years of email exchanges and, being a bit of an argumentative and egotistical narcissist, turned the bot loose on me with the goal of agitating me and winning the logical arguments he, by his nature, seldom does. The degree to which it appeared to become him was nightmarish, and I eventually cut off contact. He later apologized and explained in person that it was a bot, not him. In retrospect, it seems that his bot might have adopted and revealed more of his nature than we both were able to stomach.
      Thanks for the question, it provided the opportunity to consider the point more deeply. If you have any further questions or care to dialogue, please feel free to contact me through my about page on RUclips.

    • @polarizedpotstone
      @polarizedpotstone a year ago +2

      @@jackoneil3933 damn, thank you thank you thank you for your detailed response!! My counterargument would be (outside of whether i believe said counterargument to be true or not), that it is inevitable that a current model such as ChatGPT would seem inherently human, as it was trained on human data - as you‘ve also said. But I don‘t believe there to be anything deeper (at the moment). It‘s just that: how would a language model trained on human language not seem human.
      You very accurately countered that argument to a certain degree already though, with your point being made on the fact that future models may have more powerful hardware and may have been trained on even larger, even better datasets. …which again would all have originated from human action at some point, making it very plausible to me that such a model would seem/become even more human-like. Because like how couldn’t they, current models are just still too limited to show the full extent. And also, at that point, what differentiates it from us humans who, as far as I can tell, are also just really advanced biological computers.
      You said "that‘s functionally indistinguishable", I think that is exactly the point which makes it so difficult. …and that ties in to the video and the Chinese Room thought experiment. How can we at a ground truth level distinguish these two things if the inputs and outputs seem identical. And is that even the right train of thought or are we humans at that point just protecting our *specialness* when the machine obviously acts like a human would.
      One more point I‘d like to add, not as a counterargument but as an interesting piece of thought that just came to mind: wouldn’t said super-AI more closely resemble all humans as a weird all-humans-single-entity rather than the concept of a single human? I mean, the information set that a real person gets is soooo vastly different from even current AIs let alone such a super-AI. Or maybe that is a wrong way of thinking about it, because said AI would simply act human-like but with a gigantic knowledge base.
      And this leads me to another train of thought: how much of the human-likeness comes purely from training the model (GPT3) on somewhat unfiltered data and how much comes from limiting/training such models actively to make them seem more human (ChatGPT, InstructGPT)? I think this is known as alignment and involves at least some active participation on our side. …which again maybe wouldn’t counter your point of it inherently becoming human-like at some point, because in the end we‘re doing nothing else when we educate and get educated and learn social norms and so on.
      If you‘re interested, there‘s a great video by Computerphile, their most recent one, which for me explained this whole alignment thing very well. The main focus of the video is glitch tokens (I won't spoil it, it‘s really weird but really cool and interesting) but those can‘t be explained without the context of alignment and I think they merge those two really well in the video. It helped me get somewhat of a more technical understanding, not that I actually have any clue, lol :D
      Sorry, a bunch of unqualified answer-rambling! I‘ve got no formal qualification but this subject is really interesting to me.

    • @jackoneil3933
      @jackoneil3933 a year ago +2

      @@polarizedpotstone My takeaway from ChatGPT (or what we are able to interact with) is that it's far from anything close to human intelligence, but at some basic levels it may be emulating lower levels of human thought patterns and logic, and as those are part of human consciousness it seems to be emulating some base levels of human intelligence.
      When I look into neural nets it seems that, given enough size and training, they have the potential to equal and surpass human intelligence, as many in the field predict. Here are some links I found interesting:
      ' But what is a neural network? | Chapter 1, Deep learning
      ruclips.net/video/aircAruvnKk/видео.html
      ' Gradient descent, how neural networks learn | Chapter 2, Deep learning'
      ruclips.net/video/IHZwWFHWa-w/видео.html
      'Why Neural Networks can learn (almost) anything'
      ruclips.net/video/0QczhVg5HaI/видео.html
      Learning networks can learn anything - universal function approximators
      ruclips.net/video/0QczhVg5HaI/видео.html?t=327
      The Chinese Room thought experiment is an excellent analogy on which to base comparisons and questions about machine versus human intelligence, and I guess at the ChatGPT level of artificial intelligence, it's just that: an artificial intelligence and not human intelligence, or whatever qualifies as intelligence, machine or otherwise.
      I think at the ChatGPT level, what most obviously sets it apart from human intelligence is how it was able to impersonate Mark Twain quite convincingly, but as far as we know, it had no intent to impersonate Mark Twain; it was, as a neural net, a "language model" with human attributes following an instruction set. That's not to say that machine intelligence models can't be, or are not being, "trained" to emulate and operate with intent. I just revisited the old sci-fi thriller 'Colossus: The Forbin Project' and was impressed how timeless it still is, and how well they predicted what machine AI would be like today, plus the attribute of intent and self-preservation.
      Seems to me that all that would need to be done in order for that to be possible would be to establish a set of parameters that define intent and ascribe it to an AI model, and connect the 'intent' model with the other elements of the model that have human attributes, and you are one huge step closer to human intelligence, or something else that could be what you describe as 'super-AI', that functionally could surpass human intelligence.
      It seems that as the observable differences between human and artificial intelligence diminish, the differences, while smaller, will be more acute and discernible, and the structural and technical differences won't matter as much as the functional differences.
      Back at the beginning of the pandemic, when I had some time and my former best friend who worked with AI had exposed me to it, I took a stab at writing a story about a super AI that today seems more a chronicle of what's now known. One of the points of the story was that a super AI would not have to become a human intelligence to become something beyond human intelligence, and while to us it was seemingly human, that was only an appearance of human form, to interact with us as tools for a greater intent we might find comprehensible. Perhaps people get hung up believing machine intelligence has to be human intelligence to be intelligent, when a self-learning, constrained machine intelligence could in a very short period of time become a form of intelligence we couldn't imagine.
      And perhaps that's where GPT3 and GPT4 set the stage, or have already set the stage, for a GPT5, 6 or beyond that would seem likely in some dark project somewhere. A friend who worked on dark defense projects mentioned years ago that just about everything that has military use never sees public light until it's been surpassed by two generations.
      Thanks for engaging in what I don't see is rambling, and for the Computerphile tip, I'll look it up.
      Btw, I recently grabbed what I think is a good AI news domain. I used to work in media and would like to launch an AI news and forum site, and if it might be something you care to offer some input on or have interest in, let me know; my contact is on my RUclips 'About' page.
      Cheers,

  • @piotrfila3684
    @piotrfila3684 a year ago +7

    GPT-4 has just been released to the broader public. Here is its response to the question about entangled particles:
    Q: "If you perform an operation which is not a measurement on one particle in a pair of entangled particles, does that affect the other particle?"
    A: "In quantum mechanics, entanglement is a phenomenon in which the quantum states of two or more particles become correlated in such a way that the state of one particle cannot be described independently of the state of the other particle(s), even when the particles are separated by large distances.
    When two particles are entangled, performing an operation (not a measurement) on one particle does not necessarily affect the other particle in the pair directly. However, the operation can change the overall entangled state of the pair, which means that the relationship between the two particles can be altered. Consequently, when a measurement is performed on one of the particles after the operation, the result of that measurement will still be correlated with the measurement of the other particle, but the nature of the correlation could be different than before the operation.
    It is essential to note that entanglement and the correlations it entails do not imply any sort of faster-than-light communication between the particles. The correlations arise from the shared quantum state rather than any direct influence between the particles."
    This seems correct to me. It answered the latitude question right too. It seems that at least for now throwing more computing power at the problem still results in large improvements of the language models.
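
GPT-4's claim here, that a local non-measurement operation on one particle doesn't change what the other particle's own measurements see, can be checked numerically: apply a unitary to qubit A of a Bell pair and compare qubit B's reduced density matrix before and after. A minimal numpy sketch:

```python
import numpy as np

# Bell state |Φ+> = (|00> + |11>) / sqrt(2): a maximally entangled pair A, B.
phi = np.zeros(4, dtype=complex)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi.conj())              # density matrix of the pair

def reduced_B(rho):
    """Partial trace over qubit A: the state measurements on B alone can see."""
    r = rho.reshape(2, 2, 2, 2)              # indices (a, b, a', b')
    return np.einsum('abac->bc', r)          # sum over a = a'

# A non-measurement operation on A only: Hadamard on A, identity on B.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
U = np.kron(H, np.eye(2))
rho_after = U @ rho @ U.conj().T

# B's local state is the maximally mixed I/2 both before and after:
print(np.allclose(reduced_B(rho), np.eye(2) / 2))         # True
print(np.allclose(reduced_B(rho_after), reduced_B(rho)))  # True
```

Only B's marginal is unchanged; the joint state (and hence the measurement correlations between A and B) does change, which is exactly the distinction the GPT-4 answer draws.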

    • @alakani
      @alakani a year ago +1

      And the current limitations are mostly just a token limit on the chat. You should see the 8k api, or the 32k that's only available to microsoft. Lots of really smart people starting to sort of wonder if gpt4-32k might be alive, and that's before even integrating HuggingGPT with AI-Descartes from IBM to derive scientific laws of the universe and NCSL from MIT as sort of a 'corpus callosum' to merge with a diffusion model

  • @GrzegorzKa
    @GrzegorzKa a year ago +7

    Wow, Sabine, this episode is really brilliant 🙂

  • @davidp540
    @davidp540 a year ago

    Sabine, your dry humour and German accent add a wonderful dimension to your videos. I even learnt how to actually pronounce “Einstein” from you!! How wonderful. Your presentations are also very clear and, for complex topics, very comprehensible. Thanks 😃

  • @Ken00001010
    @Ken00001010 a year ago +7

    In the Chinese Room, Searle does not translate the Chinese input strings, he follows the rules and generates a reply in Chinese. A major part of the thought experiment revolves around his not understanding either. Searle got a lot of academic use for many years out of this, but it was realized that having a "man in the middle" of the room was just for theatrical effect, and he could be replaced by a computer that followed the rules without loss of meaning (which, as you point out, is where we are today).

    • @jonbbbb
      @jonbbbb a year ago +3

      I agree, it's critical that the person in the room understands neither the input nor the output, because the whole point is that people outside the room think that the person inside the room is a Chinese speaker, but in reality isn't. I don't think it's a theatrical effect because the realization that the man could be replaced by a computer is the whole point -- it's an appeal to our intuition that even an intelligent human could not develop "understanding" of Chinese by manipulating symbols according to an algorithm and never being given a way to associate those symbols to his existing knowledge. So substituting a computer in shows that even if a computer can trick us into thinking it understands by producing correct outputs for a given input, it doesn't "really" understand. The human or the computer both are following this external algorithm and never developing inner understanding.
      I think one flaw in this thought experiment which we've come to understand better today is the power of deep learning with huge quantities of data. People can't do it, but a computer that is given associations between Chinese input and output trillions of times via this external algorithm can indeed become a fluent Chinese speaker and stop relying on the algorithm.

    • @shawnsg
      @shawnsg a year ago

      It's not for theatre. Without the person in the box you don't have a thought experiment. And obviously it revolves around the person not understanding Chinese because that's what's at question. Do these chatbots _understand_ or are they just providing an output based on their instructions and word association.

    • @Ken00001010
      @Ken00001010 a year ago +1

      @@jonbbbb Back in 1980, when John Searle published his thought experiment, his audience in academia only had a few who were deep in how computers really worked. What I meant by "theatrical effect" was that his audience could understand a human following a list of rules and instructions, because they each had the living experience of doing so. Each had also the living experience of understanding something, so they could better relate to a room with a human in it. As many pointed out after he published, following instructions was the same, human operator or not, and his argument was begging the question (i.e. circular) because he presented in his premise that the instructions could do what he described being done (without justification) and then concluded that "understanding" was not involved.

    • @jonbbbb
      @jonbbbb a year ago +1

      @@Ken00001010 interesting. Thanks for your reply. I thought the Chinese room experiment was commentary on the person in the room, but are you saying that it's really talking about the algorithm? Such as maybe saying that any algorithm that can perform the task is itself conscious or intelligent or has understanding or whatever it is?

    • @Ken00001010
      @Ken00001010 a year ago

      @@shawnsg Yes, without the person in the box you don't have a thought experiment, which is exactly what Searle doesn't have.

  • @martynspooner5822
    @martynspooner5822 a year ago +22

    Thanks for all you do Sabine, I love your videos and I am just a very simple layman.

    • @pakde8002
      @pakde8002 a year ago

      Admit it! You understand nothing in this video! You're just into smart German cougars like the rest of the "laymen".😁

    • @herbertdarick7693
      @herbertdarick7693 a year ago

      So are 99 percent of us, I guess.

    • @martynspooner5822
      @martynspooner5822 a year ago

      @@pakde8002 That is grossly unfair, I am good for nearly 1 percent of it all sometimes.

    • @herbertdarick7693
      @herbertdarick7693 a year ago

      ​@@pakde8002 Cougar, isn't that a bit yucky?

    • @russbell6418
      @russbell6418 a year ago

      As an intelligent layperson, I think I draw better inferences than those who are trained in one of her scientific specialties. People are very inclined toward tunnel vision. I wonder if AIs will eventually do likewise?

  • @ErikKnepfler
    @ErikKnepfler a year ago +3

    Excellent video. Star Trek TNG often explored what it means to be "alive", my favorite episode being "The Quality of Life", the episode with the little Exocomp robots which could fashion tools, work together, solve problems, and evolved (or were programmed with) a sense of self-preservation, leading Data to argue that they were "alive". I think an interesting philosophical question is: Which is more alive? An Exocomp (which is similar to ChatGPT in some ways), or a 3 year old human child who has been locked in a room with no light, sound, or other sensory data of any kind to process for his entire life? I realize this video is more about the nature of "understanding", not the definition of "life", but I think this philosophical question is a good one to consider. The Exocomps have a model of reality that they are working with, just as normal children do, at least once they've had a little time to grow and process endless sensory inputs. ChatGPT has no such real-world model to work with, it's all derived from language alone, using math & prediction to do so. The analogy that ChatGPT is "autocomplete on steroids" is a very relevant statement, since living things have different drives (self-preservation being a key one, but not the only one) and a real-life model against which to compare all of this sensory input. ChatGPT does not have this. The Exocomps did have this, since they were little robots that existed in the real world. Ultimately I believe that AGI can only occur once AI is imbued with a core drive of self-preservation (with actual stakes and consequences and iterative evolution), not by simply feeding it more data.

    • @schmetterling4477
      @schmetterling4477 a year ago

      And there is the kid who thinks that television is reality. ;-)

    • @ErikKnepfler
      @ErikKnepfler a year ago +1

      @@schmetterling4477 lol. The TNG episode is just a story, representing a hypothetical scenario in which ChatGPT-like AI is linked to a machine and granted these additional abilities and drives, as a framework for a discussion about the nature of these abilities and their implications with respect to the definition of life, the definition of understanding, the models Sabine talks about and so on. In other words, metaphors for life. :)

    • @schmetterling4477
      @schmetterling4477 a year ago

      @@ErikKnepfler Yes. It's fiction. End of story. ;-)

    • @ErikKnepfler
      @ErikKnepfler a year ago +2

      @@schmetterling4477 fiction is not a synonym for useless with respect to hypothetical and theoretical discussion. it's very useful. also, a lot of things in that show and stories like it were fiction only until they actually got invented. hell, a few months ago, ChatGPT existed exclusively in fiction and now, here it is. I guess I just believe that fiction is inherently just storytelling, and humans are very good at that, and using that as frameworks for discussion about reality can be very useful. over my life I've found that this is generally what separates sci-fi fans from non-fans - the non-fans tend to be mired down by the fact that it's not "real" and can't deal with it, and have no interest in the metaphorical and philosophical properties of such stories to help understand reality. fans debate meaning endlessly because they do understand its value in that context. anyway, rant over :)

    • @schmetterling4477
      @schmetterling4477 1 year ago

      @@ErikKnepfler The use of fiction is to entertain you. Now go and invert the phase on your warp field, kid. It's constantly pointing at the fiction section of the library. ;-)

  • @rand0mn0
    @rand0mn0 1 year ago +1

    The title of this video, of course, hinges on the meaning of the word "understand". Oxford Languages defines "understand" as "perceive the intended meaning of (words, a language, or a speaker)." Obviously, that definition, in turn, heavily depends on the meaning of "perceive". That definition, by the same authority, is "become aware or conscious of (something); come to realize or understand." Avoiding the word "understand" in the definition (we don't want pointless infinite recursion), we are led to a meaning that requires "awareness" or "consciousness".
    Of course, you might need to discard the ordinary meanings of the above words, and substitute much weaker meanings (I'll leave that up to the reader to create) to allow the claim of "understanding" to be advanced. That aside, applying the ordinary meanings to the above words requires us to believe that, somehow, circuitry or code exists within Large Language Models that allows the system to become "aware" that it is "aware", and that, additionally, circuitry or code exists that allows the model to recognize that other entities analogous to it exist (a "Theory of Mind"). No such circuitry or code exists within any extant chatbot, as far as I am aware. We, as a species, have NO IDEA WHATSOEVER how such circuitry or code would work, as we have NO IDEA WHATSOEVER how such circuitry works in our own minds, or in any other creature. We can't make it, as we don't comprehend it at all.
    If one were to believe that these gargantuan advances in Artificial Intelligence could come about "accidentally" in an LLM chatbot, designed only to predict the most likely word to appear in a quasi-stochastically generated sentence, I suggest that one possesses insufficient incredulity.
    If "understanding" is defined as "goal-seeking behavior", modified by input from the environment, then heat-seeking missiles "understand" that they are trying to intercept enemy aircraft. Obviously, they don't.
    Chatbots work with words, and their relationships within sentences. Intelligent entities do not "think" in words; they "think" in concepts as applied to a very substantial internal "simulation" of the environment, populated with experience gained by that organism. I believe that if the mind of an intelligent organism is likened in complexity to an ocean liner, chatbots are little paper boats on a pond.
    Humans are easily fooled. Instances of pareidolia, a visual "hallucination" produced by the human mind's tendency to see faces or other objects in clouds or on toast, are rampant in society. I believe that the human tendency to hear or read coherent speech, as one might encounter when corresponding with another human, and believe that another "mind" was behind that speech, is almost overwhelming.
    Our evolution has not prepared us to deal with a machine that generates "plausible" sentences, one that has no awareness, no wants, and no identity. So, we infer it. And we are wrong.
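The "predict the most likely word" mechanism the comment refers to can be illustrated with a toy sketch: a word-level bigram counter that proposes the most frequent follower of the current word. Real LLMs use neural networks over tokens rather than raw word counts, so this is only an illustration of the prediction objective, not of how chatbots are actually built.

```python
from collections import defaultdict

def train_bigrams(text):
    """Count how often each word follows each other word."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.lower().split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat": it follows "the" twice, others only once
```

An LLM does something conceptually similar at every step, just with a learned probability distribution over tokens instead of simple counts.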

  • @FreeScience
    @FreeScience 1 year ago +4

    It would be interesting to see multiple separate but integrated neural networks, trained for different tasks such as image/audio recognition, spatial models (topology), language models, and generative networks, which it can feed into its other networks.
    For example, regarding spatial understanding, it could relate some linguistic constructs to spatial relations and translate them into its spatial model, before synthesizing them back into text (or audio, images, etc.)
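A hedged sketch of this modular idea, with plain functions standing in for trained networks (the module names and the toy 1-D coordinate trick are illustrative, not a real system): a language module extracts spatial relations from text, a spatial module stores them, and a generation module renders an inferred relation back into a sentence.

```python
class SpatialModel:
    """Stores objects on a 1-D line so 'left-of' means a smaller coordinate."""
    def __init__(self):
        self.positions = {}

    def add_relation(self, a, rel, b):
        # Anchor b at 0 if unseen, then place a relative to it.
        base = self.positions.setdefault(b, 0)
        self.positions[a] = base - 1 if rel == "left-of" else base + 1

    def query(self, a, b):
        return "left-of" if self.positions[a] < self.positions[b] else "right-of"

def language_module(sentence):
    """Extract (object, relation, object) from text like 'cup left-of plate'."""
    a, rel, b = sentence.split()
    return a, rel, b

def generation_module(a, rel, b):
    """Turn a relation back into a sentence."""
    return f"{a} is {rel.replace('-', ' ')} {b}"

spatial = SpatialModel()
spatial.add_relation(*language_module("cup left-of plate"))
spatial.add_relation(*language_module("fork left-of cup"))
# The spatial model now supports an inference the input text never stated directly:
print(generation_module("fork", spatial.query("fork", "plate"), "plate"))
```

The point of the sketch is the last line: the answer comes from the spatial representation, not from pattern-matching over the original sentences.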

    • @jacobbrown8052
      @jacobbrown8052 1 year ago

      Correct me if I'm wrong, but didn't HuggingGPT with GPT-4 just accomplish this?

    • @leonhardsievert8802
      @leonhardsievert8802 1 year ago

      Greetings from Skynet. This stuff is really scary.

    • @TeaParty1776
      @TeaParty1776 1 year ago

      Machines don't have understanding.

  • @Korastiz
    @Korastiz 1 year ago +11

    This channel means so much to me and I love it. I nearly died at the deep fake face changes... I love it.
    Thank you so much Sabine, you are my hero ❤️

  • @absta1995
    @absta1995 1 year ago +4

    Great video Sabine! I would just add that we know the parts of the brain that understand language (Wernicke's area) and produce language (Broca's area). They're relatively small parts of the brain, situated in the temporal and frontal lobes respectively. I imagine the language models we've created mimic (or are at best analogous to) these areas.

    • @LoanwordEggcorn
      @LoanwordEggcorn 1 year ago +3

      No, not at all. That's not at all how any of these AIs work.

    • @Blueski888
      @Blueski888 1 year ago

      Those areas are necessary but certainly not sufficient.

    • @LoanwordEggcorn
      @LoanwordEggcorn 1 year ago

      @@Blueski888 I think the OP is claiming that chatbot AIs model the human brain, when he says "I imagine the language models we've created mimic these areas." If so, they do not, not at all.

    • @Blueski888
      @Blueski888 1 year ago

      @@LoanwordEggcorn But also, Wernicke’s Area doesn’t “understand language” etc.

    • @absta1995
      @absta1995 1 year ago

      @@LoanwordEggcorn That's not what I meant; maybe I wasn't clear when I said 'mimics'. What I meant was that, at best, the AI models are equivalent to those areas only. I know that they don't work the same nor do they model the brain. LLMs are much simpler.

  • @omnicatalyst
    @omnicatalyst 1 year ago

    I'm working on a D&D campaign and built a "boon" system with five levels of perks. I wrote up 3 samples, fed them to ChatGPT, and it was able to reliably put out creative iterations that were mostly balanced.
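What this comment describes is often called few-shot prompting: pasting a handful of handwritten examples into the prompt so the model imitates their pattern. A minimal sketch of assembling such a prompt (the boon text and the function name are made up for illustration; no actual API call is shown):

```python
def build_few_shot_prompt(instructions, examples, request):
    """Join instructions, numbered examples, and the final request into one prompt."""
    parts = [instructions, ""]
    for i, ex in enumerate(examples, 1):
        parts.append(f"Example {i}:\n{ex}\n")
    parts.append(request)
    return "\n".join(parts)

prompt = build_few_shot_prompt(
    instructions="You design balanced D&D boons with five perk levels.",
    examples=[
        "Boon of Embers: L1 resist fire damage ... L5 cast fireball once per day",
        "Boon of Tides: L1 breathe underwater ... L5 control water once per day",
    ],
    request="Write a new boon in the same style and at the same power level.",
)
print(prompt)
```

The resulting string would then be sent as a single message to the chatbot; the model picks up the format and power level from the examples alone, with no fine-tuning.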