The Chinese Room Thought Experiment Debunked: Joscha Bach's Analysis

  • Published: 28 Nov 2024

Comments • 13

  • @KnowTime
    @KnowTime  2 days ago

    Do you agree with Joscha Bach? Is the Chinese Room an intelligent system?

  • @briancunning423
    @briancunning423 3 days ago +2

    Very profound.

  • @Bronco541
    @Bronco541 3 days ago +2

    Brilliant! I felt there was something off about that thought experiment but couldn't quite put my finger on it.

    • @KnowTime
      @KnowTime  3 days ago

      Glad you liked it!

  • @NilsEchterling
    @NilsEchterling 3 days ago +3

    The point is simple: The guy in the box may not understand Mandarin, but the box does.

    • @glamdrag
      @glamdrag 2 days ago

      Does it though? That depends on your definition of "understanding". For me personally, understanding something comes with an experience: a feeling that confirms that the information I receive corresponds with my training or logic. A system that checks whether the input is correct and then flashes a red or green light does not understand, by my definition, because it does not consciously feel whether it makes sense. So for me, to understand is to be conscious and to feel; the feeling itself is the understanding. A computer system that can check by logic whether something is correct, but doesn't feel it, is not something I would call understanding. Maybe it's semantics, but this feels right to me, and it's why humans often disagree on things: we attribute understanding to feeling, not to deeply fact- and logic-checked analysis.

    • @NilsEchterling
      @NilsEchterling 2 days ago

      @@glamdrag You can use that definition, but I don't think it is useful.
      Language is only used to describe observed behaviour.
      If something is observed to behave like something that understands, it is not helpful to assume there is something we cannot observe that could contradict that.
      Joscha makes this point in the longer video: maybe simulated water does not make you wet, but it is simply not useful to assume so.
      If water behaves like something that makes you wet, then we should assume it is something that makes you wet, because our words never described anything else in the first place.
      A box (including the guy inside) that appears to understand Mandarin understands Mandarin, according to any helpful definition of the word "understanding".
      The guy inside the box does not understand Mandarin, just as the atoms in your brain are not conscious. Yet you are. The whole is more than the sum of its parts.

    • @KnowTime
      @KnowTime  2 days ago

      @@NilsEchterling this is a great summary, so the only thing I'll add here is that Joscha is espousing the philosophy of functionalism in the video: the idea that mental states are defined by their function, or role, in the cognitive system, rather than by their internal composition.

  • @Maciej-Komosinski
    @Maciej-Komosinski 2 days ago +1

    So, in other words, *emergence*.
    I'm curious to hear your thoughts on formalizing these concepts through "mapping functions" between different levels of description, as outlined in our open-access paper "Mappism: formalizing classical and artificial life views on mind and consciousness".

  • @mattiawatchingvideos-bg1ok
    @mattiawatchingvideos-bg1ok 2 days ago

    Turn down the bass bro

  • @aceofspades25
    @aceofspades25 2 days ago

    He didn't debunk it; instead, he failed to understand or engage with the problem.
    The problem is not that the whole cannot understand what it is doing if the parts don't understand what they are doing. For example, my brain can understand the meaning conveyed by this sentence, but my individual neurons cannot each do that. That was obviously not John Searle's point.
    His point was that there is a difference between appearing to give a meaningful response to a question and actually understanding the question. Our current generation of LLMs do not actually understand concepts, and this can be demonstrated by testing them against novel problems they have not encountered in their training. What they do instead is link related words together in a probabilistic fashion (sketched below this thread) to give the appearance of understanding the question.

    • @KnowTime
      @KnowTime  1 day ago +1

      Is that just a problem with the training data? You could make the case that we don't really understand natural language either, but rather link words together in a probabilistic fashion. There's a network effect with words and sentences, and we all agree on what a word represents, but when it comes to understanding, you can't really build natural language from first principles (as Wittgenstein tried). In a way, children also learn natural language the way LLMs do: by brute-forcing the meaning of words and seeing what gets a positive response?
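
To make "link related words together in a probabilistic fashion" concrete, here is a minimal sketch of next-word sampling from a toy bigram model. This is an illustration only, not how production LLMs work (they use neural networks over token sequences, not word counts), and every word and count in it is hypothetical.

```python
import random

# Toy bigram model: how often each word followed the previous one
# in some hypothetical corpus (all counts invented for illustration).
bigram_counts = {
    "the": {"room": 5, "man": 3, "symbols": 2},
    "man": {"follows": 4, "outputs": 1},
}

def next_word(word):
    """Sample a next word, weighted by how often it followed `word`."""
    followers = bigram_counts[word]
    candidates = list(followers.keys())
    weights = list(followers.values())
    return random.choices(candidates, weights=weights, k=1)[0]

print(next_word("the"))  # usually "room", sometimes "man" or "symbols"
```

Scaled up to billions of parameters and far longer contexts, this kind of statistical next-token generation is the mechanism the comment above argues merely produces the appearance of understanding; whether it can ever amount to understanding is exactly what this thread disputes.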

  • @hk95030
    @hk95030 2 days ago

    Useless argument?