AI: Computers and Minds | Philosophy Tube

  • Published: 26 Aug 2024

Comments • 792

  • @kiranduggirala2786
    @kiranduggirala2786 4 years ago +15

    In case anyone's watching this in the future, I think there's an important caveat to add to Searle's Chinese Room. The Room is operating on the old framework of AI, where you program in questions and rules and use them to spit out answers. In this case, it would be relevant to point out that the intelligence would likely be a function of the programmer and not the room. However, AI no longer works this way. Modern neural networks are given questions and answers and use them to deduce rules (if you want to get into the math of it, I highly recommend the video series done by 3Blue1Brown). What this means is that they are able to generalize: a network can take a question it's never seen before and give you an answer (something the person in the Chinese Room could never do). I think it is this property that allows us to say that the neural network is in some respects intelligent, because it is not just following a preset rulebook.
    I should note that my use of "question" and "answer", as well as my claim that neural nets can "generalize", should be taken with a grain of salt since the technology is not exactly there yet. First off, most applications are not in the realm of asking it questions per se, but rather classification problems (the most famous example is using the MNIST database to get it to interpret handwritten digits). This is where neural nets do extremely well, since there are certain bounds on the problem that allow them to generalize to any given handwritten digit. However, in the more vague sense of asking it questions, neural networks struggle with generalizing beyond a certain extent. Despite the current limitations of the technology, I think this kind of generalizing AI more closely resembles the argument that the room is thinking, since its ability to generalize is an emergent property of our mathematical models.
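    A minimal sketch of the "examples in, rules out" idea described above, using scikit-learn's small built-in digits dataset as a stand-in for MNIST (the library, dataset and model choice here are assumptions for illustration, not anything from the video or the original comment):

        # Train a small neural network on labelled examples, then let it classify
        # digits it has never seen before, i.e. the "generalization" discussed above.
        from sklearn.datasets import load_digits
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        X, y = load_digits(return_X_y=True)          # 8x8 handwritten digits, flattened
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
        net.fit(X_train, y_train)                    # the "answers" shape the learned rules
        print(net.score(X_test, y_test))             # accuracy on unseen "questions"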

    • @juliapressner7818
      @juliapressner7818 3 years ago +1

      This is super interesting, thank you for sharing! But I don't quite understand how the ability to "generalize" and provide answers to questions that were never seen before equates to genuine understanding. Wouldn't it just be a more sophisticated version of input/output? The person in the room would be able to synthesize answers to new questions based on what they know about the symbols in general, but they still wouldn't actually understand Chinese. Maybe I'm missing something here though?

    • @PaulGaither
      @PaulGaither 1 year ago

      Nobody should be watching this in the past.

  • @Xidnaf
    @Xidnaf 9 years ago +86

    There's an interesting argument for the existence of qualia I heard a while back. Say a girl is trapped in a room for her entire life that contains absolutely no green objects. There are, however, a huge number of books that explain everything you could ever want to know about neuroscience and optics. The idea is that, by reading all of these things (never mind how she learned to read), she would eventually understand everything physical involved in the event of seeing something green (ignore the fact that this would be impossible, since it would involve a brain having complete knowledge of how brains work, meaning that it would have to be more complicated than itself). However, when she steps outside for the first time and sees the grass, she still learns something: what it's like to see green. The argument is that, if she understood everything physical about seeing green things before stepping outside, what did she learn upon her exit if not something fundamentally non-physical and indescribable?
    Personally, I don't buy this argument and don't believe in qualia, but I think it's interesting.

    • @CompilerHack
      @CompilerHack 9 years ago +16

      That's such an interesting argument! Thanks so much for triggering a thought train.
      I can see but one flaw in it-
      It claims that the girl knows 'everything there is to know about green' without seeing the color itself. The first idea that comes to mind is that she'll study the physics of light and the structure of the eye and so on. The flaw here is that we don't yet know all there is to know about the color green! The nature of light is a highly debated issue, its perception by the brain being even more obscure. We merely know all there is to know 'for practical purposes'.
      So I feel it'd be a bit hasty to say she knows all there is to know about green without seeing it and yet gains new information when she actually sees it.
      What if, when we truly gain knowledge of all there is to know about green, we experience the same qualia as seeing the color green!?

    • @redeamed19
      @redeamed19 9 years ago +9

      +Xidnaf I take a counter approach to qualia. I see "green" as a sort of word the brain has made up to distinguish one wavelength of light from the next. I totally accept that qualia "exist", but they are a property of the brain modeling the world around it.
      Here we run into some nuance with the definition of "exist", and what we choose to mean by that. Does the word "brain" exist? Does it exist in the same way the object the word describes exists?
      In either case I don't see how qualia are beyond replication by machines. We know we can affect qualia by affecting the sensors in the eyes or by artificially signalling the brain. We may not have detailed control over it, but these influences seem to suggest there is little difference between the output we perceive as qualia and the output of a monitor. Indeed, the only significant and verifiable difference is that multiple people can look at the output of a monitor, whereas qualia are restricted to a single person's experience.

    • @EmilianoHeyns
      @EmilianoHeyns 9 years ago +4

      +Xidnaf It's called "What Mary Didn't Know", by Frank Jackson.

    • @mertkocak4436
      @mertkocak4436 9 years ago +2

      +Xidnaf This is an unreliable method for building a theory of mind. Why do you think our intuitions about this particular thought experiment would reveal how our minds work?
      By the way she will most probably see everything gray or experience blindsight for color.

    • @redeamed19
      @redeamed19 9 years ago +1

      mert kocak "By the way she will most probably see everything gray or experience blindsight for color." why do you say that? They have returned vision people blind since birth and they see colors. This hypothetical only suggests that she is never shown anything green, not that she is incapable of seeing it.

  • @daperez40
    @daperez40 8 years ago +12

    This video was my entire semester this year! Awesome coverage in 10 minutes.

    • @PhilosophyTube
      @PhilosophyTube  8 years ago +5

      +daperez40 hah, awesome, glad it could help!

  • @unvergebeneid
    @unvergebeneid 9 years ago +44

    When I heard this I was like "Dreyfus is an idiot who hasn't looked at computer science in 50 years." Then I found out that his book was indeed published in 1972, so I wasn't too far off. Why his arguments are still treated as relevant, however, when connectionism has made the concepts behind artificial neural networks accessible even to the humanities, is beyond me.

    • @unvergebeneid
      @unvergebeneid 9 years ago +22

      +Penny Lane Ok, I should probably give a sketch how connectionism works: so you have a network of connected nodes and activation flows along these connections and activates neighboring nodes according to the connection strength. This is obviously trivial to implement in a computer. Individual nodes can represent concepts or just have some functional role ... not important. Then if you want to, say, buy biscuits, and you see a packet of biscuits on fire, you don't need a _rule_ telling you that buying biscuits that are on fire is bad. You just have a connection between your "on fire" node and your "bad" node (yeah, guess what, I'm simplifying here) and when the "bad" node and the "my food" node are active at the same time, your "wanna buy" node gets inhibited (yes, there's negative connection strength ... just bear with me). Now there's also a connection between the "crawling with insects" node and the "bad" node, so you don't need an extra rule for not buying biscuits full of ants (because there are no rules, remember?). When you then learn that acrylamide is bad for you, you don't need a rule that says "don't buy biscuits with acrylamide" and another with "don't buy chips with acrylamide" and so on. You just need that "bad" node to be connected with the "acrylamide" node.
      Now, this also explains why we often just follow gut feelings when making decisions. It's just a huge, messy network where activation flows from one node to the next, and in the end either the "yes" or the "no" node lights up. Who the hell is supposed to know how we made that decision? It seems like magic, but that doesn't mean that it can't be modeled by an algorithm.
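      A toy version of the spreading-activation picture sketched above, with a negative weight playing the inhibitory role (the node names, weights and update rule are invented purely for illustration):

          import numpy as np

          nodes = ["on_fire", "my_food", "bad", "wanna_buy"]
          # W[i, j]: how strongly node i excites (positive) or inhibits (negative) node j
          W = np.zeros((4, 4))
          W[0, 2] = 1.0     # "on fire" excites "bad"
          W[1, 3] = 1.0     # "my food" excites "wanna buy"
          W[2, 3] = -2.0    # "bad" strongly inhibits "wanna buy"

          act = np.array([1.0, 1.0, 0.0, 0.0])      # we see our food, and it is on fire
          for _ in range(2):                        # let activation flow along the connections
              act = np.maximum(act + act @ W, 0.0)  # negative totals bottom out at zero

          print(dict(zip(nodes, act)))              # "wanna_buy" ends up suppressed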

    • @Trollitytrolltroll
      @Trollitytrolltroll 9 years ago

      +Penny Lane I agree!

    • @tetrapharmakos8868
      @tetrapharmakos8868 9 years ago +1

      +Penny Lane Thank You!

    • @frantisekzverina473
      @frantisekzverina473 5 years ago +1

      Computers have not really changed in those 50 years; they just have more computing power. Dreyfus' arguments are still valid. What you explained is just a different approach to programming from the standard if-then-else, but it is only slightly closer to actual human decision making. It might seem the same to you, since in your last paragraph you are trying to define human thinking within the boundaries of programming, but that doesn't really work well and just proves Dreyfus right.

    • @fieuline2536
      @fieuline2536 4 years ago +1

      I know professors who still treat Kantian and Aristotelian physics as relevant.
      Philosophical ideas never die, they just get increasingly unpopular.

  • @ybra
    @ybra 9 years ago +16

    Couldn't you also argue that a human is also just a Chinese room? We learn to respond to what we gather from our sensory organs, just like the guy in the room responds to the inputs. The only difference is that we think we are somehow in control, but are we really? Or are we just responding to the world according to our own set of rules?
    Edit: I don't think the human brain is so special that it couldn't be replicated by a computer, even if our current computers are missing some fundamental part, like needing something more than just information processing. Whatever that missing part is, I still think we could make that part and add it to the computer and have AI.

    • @MarkLucasProductions
      @MarkLucasProductions 9 years ago

      +ybra The first part of your post is nicely put and I wanted simply to compliment you on it, but given your "edit" I want to insist that it has always been apparent to me that AI is impossible. As I see things, consciousness must necessarily precede intelligence. If anything is going to qualify as your 'missing ingredient' it would have to be consciousness. In order to build an artificial intelligence you must first build an artificial consciousness, but to do so is virtually self-refuting - it's a bit like setting out to create artificial matter. If you succeed then you fail, because by definition matter cannot be artificial. It can only ever be real - in the same class as natural matter. Similarly, artificial consciousness is not different from or distinguishable from real, actual, natural consciousness. The intelligence of a person is not supervened, anthropomorphised or otherwise attributed to them - it is intrinsic. True AI needs to be intrinsic, not simply apparent or interpretable as intelligence.

    • @ybra
      @ybra 9 years ago +3

      Mark Lucas "Similarly, artificial consciousness is not different from or distinguishable from real, actual, natural consciousness."
      I don't get your argument here. If we make consciousness so well that it is just like the real thing, why would that be "self refuting"?
      The thing that makes it artificial is just the fact that we created it. The product can be exactly the same, natural vs artificial is just a difference in origin. If it's just like the real thing, that just means we did an incredibly good job.
      The question is, can we make consciousness? And I don't see why not. Our brain is just made of matter, and if consciousness can arise inside our brain, I don't see why it would be impossible for it to do so in a machine brain.

    • @jonathanhaines9094
      @jonathanhaines9094 9 years ago +1

      +Mark Lucas Doesn't this argument make assumptions about the nature of consciousness (which I'll accept is mostly all we can currently do)? What if consciousness, as a phenomenon, arises from the nature of how minds are made up and how they operate? If this were the case, wouldn't it follow that if you then constructed an 'artificial' mind, it would 'possess' a consciousness? - consciousness might be intrinsic to its nature.
      Just a thought.

    • @lockvirtompson5287
      @lockvirtompson5287 9 years ago

      +Mark Lucas Is consciousness such a spectacular thing? Only a consciousness has ever experienced a consciousness, and that is itself. I will never attribute much value to self-evaluation. I think the viewpoint is too subjective to be reliable. We don't know what we don't know. Maybe once we know how, we will find that it is actually quite easy to create a consciousness.

    • @MarkLucasProductions
      @MarkLucasProductions 9 years ago

      +ybra I anticipated the possibility of that exact response as I was writing. My point is that neither consciousness nor intelligence is a designable thing, because they are not 'things' at all but rather consequences or upshots or properties that emerge from an interactive process between the thing that is identified as conscious or intelligent and that thing's environment. You can't build a 'space', but you can build a perimeter that encloses a space - thereby in a sense - a real sense - actually creating a previously non-existent space. Similarly, you can't design or build a conscious or intelligent thing beyond simply constructing a thing that responds to its environment in the manner that it does. We are what we are, i.e. conscious and intelligent, in virtue of being that which we are. Gold and silver are what they are in virtue of their inherent properties, which are a product of their atomic structure. No matter how clever or dedicated or talented or determined you are, you will never be able to make gold out of Lego bricks. Whatever you build from Lego bricks may have any of a virtually infinite number of properties, but it will never be what gold is. Similarly, you may build an extremely impressive model of seeming human intelligence from Lego bricks, but it will only ever be just another Chinese room.

  • @bytesnobjects
    @bytesnobjects 9 years ago +27

    I can't shake the feeling that Searle's paper doesn't really help us get closer to answering the question of what "thinking" actually is. The problem is that the whole argument is kind of circular. It assumes that a mere manipulation of symbols isn't thinking and then (not surprisingly) comes to the conclusion that a mere manipulation of symbols can't be viewed as "thinking". What I miss from the argument is where the human mind differs from a (sufficiently complex) manipulation of symbols (neuron states, for example). The whole argument of the paper has something of a "because I say so" feeling to it.
    Take the example of learning. If we assume that Crane is right in his assertion that a computer is anything that processes representations systematically, and we observe that existing computers are able to learn (i.e. adapt their processing system to new information and generalize), then we have to conclude that learning is something that can be done purely by processing representations systematically. Which leads us to the conclusion that the ability to learn cannot be used to distinguish between a human "thinker" and a computer "thinker".
    Take the example of a human suddenly, seemingly, deciding that he's not going to bother looking for the best value cookies: who says that the best value in this situation isn't the decision to not bother? Thinking about it, analyzing it, takes time and energy. If said human is tired (i.e. low on energy) it could be the best decision to not bother and just grab the closest cookies. This is something a computer could do exactly the same way. It could even learn to do it that way, given the right feedback (e.g. it is in a bad "mood" after taking too long to decide). One could even include some randomness in the algorithm to simulate neurons firing accidentally.
    As far as we've understood the human brain today it basically is just modifying its structure to change the processing of representations (e.g. reactions to certain inputs). So far biology has not offered any clue that there might be something different at play (although, to be honest, it hasn't been able to fully explain how the "thinking" actually gets going from neurons firing).
    Long, rambling thoughts, but to sum it up: as long as we don't have an idea of what "thinking" should be and how we can distinguish it from "not thinking", no philosophy will get us any closer to demonstrating whether computers can do it. So far it all just boils down to: "I can't distinguish it from thinking, but it still isn't the real thing!" If we had a criterion, then we could design a test to try and distinguish it.

    • @lockvirtompson5287
      @lockvirtompson5287 9 years ago +1

      +Bytes'n'Objects Would that I could give more than one thumbs up... Wait, I won't create more accounts. That's going too far!
      Seriously though, I agree. Clean and exhaustive comment.

  • @0hate9
    @0hate9 5 years ago +7

    Just because we don't think in algorithms doesn't mean you can't represent a mind /fully accurately/ using an algorithm, or, more likely, a computer program. After all, computer programs aren't just algorithms.

  • @NickCybert
    @NickCybert 9 years ago +7

    I think the main problem with Searle's argument is that it relies on intentionality to make the difference between the man in the room, and someone fluent in Chinese. But who's to say intentionality is even a thing? Sure, people think they can have intentions, but people also think they can have free will. It could be that what Searle is banking on to separate us from computers is a mere illusion.
    That said, I think the much easier way to distinguish ourselves from computers is to recognize that our brain doesn't represent the entirety of our conscious experience. Your body has a significant effect on the way you think. Furthermore, our conscious experience doesn't represent our entire mind. We have lots of autonomous processes in our minds, and our mind still functions when we are unconscious as well. Logical operations are a thing we share with computers, but there are a lot of things we have that computers don't. And we don't have to rely on hokey concepts of "understanding" or "intentionality" to say that.

    • @saeedbaig4249
      @saeedbaig4249 5 years ago +1

      If I may speak in computers' defence for a moment, perhaps there aren't as many differences between us and computers as you think. To address your specific points:
      "our brain doesn't represent the entirety of our conscious experience. Your body has a significant effect on they way you think."
      Likewise, the state of the CPU/RAM/hard-drive doesn't necessarily represent the entirety of a program. Other hardware can also have a significant effect on the way a computer behaves. I remember my Comp Sci lecturer once telling us a story of how he spent days looking for a bug in his program, only to later discover the source of the bug was a faulty wire (the code itself was fine, but faulty hardware was having an unexpected effect on the way the program behaved, similar to how changes to our bodies can can have unexpected effects on the way we think).
      "Furthermore, our conscious experience doesn't represent our entire mind. We have lots of autonomous processes in our minds, and our mind still functions when we are unconscious as well."
      There are lots of autonomous processes in computers as well (e.g. the system clock). And although there might not be a perfect computer-analogue of "unconsciousness", I'll suggest sleep-mode/hibernation; the computer still functions even when your laptop lid is closed.
      (for a more technical but perhaps fitting analogue, the Intel Management Engine is a low-level firmware that runs so long as the chipset is connected to current (via battery or power supply); even if the computer is turned off).
      So perhaps we don't actually have as many distinguishing features from computers as we initially thought.
      And even if you COULD point out some unique thing _T_ that humans have for which there is no suitable analogue in computers, you would still have to prove that _T_ is necessary for the ability to think and is not simply an arbitrary, irrelevant difference.

  • @ewan.cartwright
    @ewan.cartwright 9 years ago +23

    The thing about computer programs is that they're not programmed with instructions on what to do in any given situation, they're programmed with a set of tools to figure out what to do, the same way a calculator is not programmed with
    "1+1=2, 1+2= 3, 1+3=4" and so on. Instead it's programmed with the tools to figure it out for itself.
    The man in the room is a component of a machine that does not itself understand Chinese, but was presumably programmed by someone who does. And someone who can understand Chinese can create other things which utilize Chinese even though they do not comprehend it, the same way a calculator can carry out maths without actually comprehending what it is or what it means. Since the man is a mere component that could be replaced with another machine component, whether or not he understands Chinese is as important as whether or not my eyes understand a picture of a cat.
    Sorry if none of that makes any sense.
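    To illustrate the distinction drawn above between a rulebook that lists every answer and a procedure that can compute answers it was never explicitly given, here is a toy sketch (the names and cases are made up for illustration only):

        # Rulebook style: only the cases someone wrote down ever get answered.
        lookup_table = {(1, 1): 2, (1, 2): 3, (1, 3): 4}

        def add_by_lookup(a, b):
            return lookup_table.get((a, b))       # returns None for anything unlisted

        # Procedure style: one general rule covers inputs nobody anticipated.
        def add_by_procedure(a, b):
            return a + b

        print(add_by_lookup(1, 2), add_by_lookup(57, 68))        # 3 None
        print(add_by_procedure(1, 2), add_by_procedure(57, 68))  # 3 125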

    • @MarkLucasProductions
      @MarkLucasProductions 9 years ago +5

      +TheRecreator I was simply going to write Bravo, but seeing as you ended by saying "Sorry if none of that makes any sense." I have to say ARE YOU KIDDING! You have articulated the essence of the problem exceptionally well. I was the only one in my Philosophy of AI class who insisted AI was necessarily impossible. People still don't see my argument but you seem to be well positioned to grasp it.

    • @ewan.cartwright
      @ewan.cartwright 9 years ago +2

      Thanks, that means a lot. It's just difficult to tell since no-one I know is interested in philosophy.
      However, while I do believe that AI is impossible with our current computer systems, which are essentially just halfway between brains and calculators, I think that if one were to perfectly replicate a human brain with electronic parts it would function exactly the same as an organic one. Though there would be no reason to create such a machine, as it would act just like a regular human, only using several hundred times as much power and lacking a body.

    • @ricardo.mazeto
      @ricardo.mazeto 9 years ago

      +Mark Lucas Man, sorry, but you're soo wrong, AI is completely possible!

    • @ricardo.mazeto
      @ricardo.mazeto 9 years ago +2

      +TheRecreator """The thing about computer programs is that they're not programmed with instructions on what to do in any given situation"""
      But computer programs CAN be programmed in this fashion! An algorithm can self-improve! That's how machine learning works! This is a whole new field in computer science.

    • @ewan.cartwright
      @ewan.cartwright 9 years ago +1

      They can yes, but they're not because it would take a prohibitively long if not indefinite amount of time. I meant programming in *every single* answer to *every possible* question, and when you do that what you end up with is not an AI system, but a simple and incredibly large database.

  • @bubbyis1337
    @bubbyis1337 5 years ago +2

    As a computer scientist, and specifically an AI researcher, I'd challenge your claims with this:
    Just because we cannot understand the brain and its decision process to a full extent right now (and thus cannot model human decision making with an algorithm / set of algorithms), does not mean that we never will.
    Take, for example, your analogy to algorithms in relation to the weather. Certainly, we cannot perfectly model the atmosphere right now, but if we could (that is, run a simulation that takes into account every particle in the earth's atmosphere, sun activity, gravitational influences from other planets, etc.), we could potentially perfectly predict the weather across the entire globe. The same logic could be applied to a human brain. If we created a perfect simulation of all the particles and how they interact in a brain, we could model a consciousness.
    Just a thought. Would love to hear what you think.

  • @sofia.eris.bauhaus
    @sofia.eris.bauhaus 8 years ago +17

    so basically the ai counterargument is "i have a very limited idea of what computers can do" + "i think parts of my mind are magic that cannot be reproduced but still fulfills a function somehow".
    computers have moved pretty far from the old one-step-at-a-time calculator with a bit of control logic. and rule-based systems are just one kind of computation that is turing complete. neural networks are another, and they are much closer to whatever "qualia" could be (perception systems). and even if our brain contained a completely novel way of information processing that is impossible to do efficiently in any known machine, that doesn't mean we couldn't reengineer these things out of other materials. or even if we used actual brain cells but grew them in neural circuits we designed, the result would be an _artificial intelligence_.

    • @InternetLawman
      @InternetLawman 8 years ago +2

      No sir, I disagree. I believe that the thought follows more along the lines of "How our physical brains create our collective subjective experiences is, for the most part, not understood. Therefore, on what basis can we say that our minds function like computers, or vice versa, when we barely understand how our minds function at all?"

    • @sofia.eris.bauhaus
      @sofia.eris.bauhaus 8 years ago +2

      not sir. if we understood all of human experience we would have a sufficient model and thus a human-level ai. i find it much more likely that we suddenly build something that shows near- or superhuman performance and then we analyze it to find out how exactly it got there. that seems to be how it often works with neural nets today.
      also, obviously brains work very differently from computers, and that is completely irrelevant. a human can learn how to play computer (interpret machine code) without needing a computer in their head. computers can simulate fluid mechanics, evolutionary processes, and all kinds of things without being physically similar.
      also, i'm talking about human-level intelligence, not human brain emulation. if an ai has a structure closer to a cephalopod brain or something entirely different, but generally solves problems we expect humans to solve, what do you do? pout and complain about how unhuman it is? it doesn't matter. human brain emulation would be cool for other reasons, but i think it would be a strange coincidence if it were the first general ai.

    • @CommanderM117
      @CommanderM117 7 years ago

      We do understand the brain: neurons fire in the brain for memories and actions, and release dopamine for those actions/memories. Some basic machines can run on electricity for action, but we have yet to create stronger memories with electric signals for a programmed AI.

  • @redeamed19
    @redeamed19 9 years ago +14

    I wonder if we don't make a similar mistake here that many make in evolution when discussing transitional forms. Many, including proponents of evolution, stick to discussing the transition of one species to another with "hard line" distinctions. By this I mean any time a new fossil comes up with traits of fossil A and traits of fossil B (along with the other qualifications needed to consider them potentially transitional) most people will say it needs to be classified as one group or the other. There are many transitional fossils where people still debate which of 2 groups in a lineage the fossil belongs in, as if the difference is so drastic.
    In this discussion of AI, the line is drawn at this thing called "understanding", which has been suggested to be separate from the accurate application of concepts. Is there any real value to this dichotomy? Or do we already have AI, and the question isn't whether AI is possible but rather whether or not it can be as smart as us (or smarter)? This is much more of a transitional concept and seems to more accurately depict what we commonly refer to as understanding, or intelligence.
    A dog, for example, can be said to understand some range of commands, even some words. But we can only judge this "understanding" by their response to things. And they make more mistakes because of a lack of ability to contextualize. Is it more accurate to simply categorize the dog as unintelligent or as less intelligent?
    Additionally we make exceptions for types of intelligence in humans all the time when people are proficient in one area but not in others. Why should these exceptions be exclusive to human intelligence?

    • @lockvirtompson5287
      @lockvirtompson5287 9 years ago +1

      +Kyle Davis I fully agree. I think it may be more fruitful to talk about achieving properties of the mind rather than "is it that exact thing called a brain". The need to classify is a property of our mind and a tool for science and discussion. But in reality researchers always have to battle with the world not being simple, and always have to approximate it to fit the math and to build models.

    • @CompilerHack
      @CompilerHack 9 years ago +2

      Good point! It is perfectly reasonable to believe that other mammals experience qualia too, perhaps to a less sophisticated degree. Extending this thought to simpler animals, perhaps they too experience qualia, to an even lesser degree, maybe a vague yet extreme pain or extreme joy, feelings getting simpler as we go down the complexity of the animal kingdom.
      What if plants (or for that matter, individual parts of plants) too experience happiness when the sun shines on their leaves? Is a brain necessary for qualia?
      And if I allow myself to get too carried away, is it exclusive to living beings? What about viruses? What about rocks; a rock doesn't feel a thing if you kick it, perhaps, or even break it into pieces, but maybe it cries as its salts dissolve in rain water.

    • @redeamed19
      @redeamed19 9 years ago

      The Compiler, your post got me thinking. Is "understanding", in the way they are using it, supposed to be some form of qualia?
      I must admit I am still not sure what is meant by understanding in this context, which is ironic because I can say "I don't understand 'understanding'".
      I would define it as the ability to appropriately apply a concept, either abstractly or through action. However, if that were the case then the Chinese Box would have an understanding of the Chinese language, so I doubt that is the intended definition.

  • @DampeS8N
    @DampeS8N 9 years ago +38

    The state of the machine (computer, brain, whatever) is what holds understanding or consciousness, not the hardware itself. So whether or not the guy can learn Chinese is irrelevant.

    • @OlavFilm
      @OlavFilm 9 years ago

      (y)

    • @EmilianoHeyns
      @EmilianoHeyns 9 years ago +3

      +William Brall So does a book understand its own subject matter? After all, it holds the state of the subject matter (statically, but still).

    • @DampeS8N
      @DampeS8N 9 years ago +6

      Emiliano Heyns A book isn't a machine. The text of the book isn't a set of changing values and instructions that constitutes a mental state. A brain is a machine. A brain's mental state is a set of changing values and instructions. I was not saying that all states are conscious. I was saying that the "mind" held within a brain is not just the machinery of the brain but the specific state the brain is in. I'm saying that the collection of a person following instructions in a book is a separate machine from the person alone, and that larger machine (person + book) can know Chinese when the person alone does not. And if sufficient, that machine comprised of a person and a book could, without the person's knowledge, have another layer of consciousness. I see no reason to think that it couldn't. If one set of purely physical processes can produce consciousness, any similar set of purely physical processes could. It would just be a matter of organization.
      Put another way; a book + the state of the text in it doesn't change. A standard computer program runs and changes values but doesn't meta-cogitate and so falls short of consciousness. Eventually, as you name more and more complex things you'll reach a brain. Somewhere along the line there consciousness arises. It might just be a case of meta-cognition, I don't know. I don't need to know. The point is that some purely physical things have consciousness, if we can say that humans have it. And certainly a dead brain doesn't.
      A third way; a brain alone has no consciousness. A brain plus a state does. In the thought experiment we have a person and a set of instructions in a book. That set of instructions is the state. The person being able to learn Chinese or not doesn't matter, a brain certainly doesn't learn Chinese, the state the brain is in plus the brain learns it. Change the state, and the knowledge is lost. Removing the instructions from the person is the same thing. It doesn't matter if the person can learn Chinese or not.
      Is any of that clearer for you?

    • @EmilianoHeyns
      @EmilianoHeyns 9 years ago +3

      Oh it's clear enough -- I was just picking on your wording in the earlier comment. In any case "Somewhere along the line there consciousness arises." is not a given, it is exactly the question under scrutiny. If you assume this is true you are begging the question.

    • @DampeS8N
      @DampeS8N 9 years ago +5

      Emiliano Heyns I'm a monist. If you have some evidence that the mind is more than just physical, I'd love to see it. Until then, we can assume that the mind is physical and therefore constructed of physical interactions and therefore a machine of some other configuration that produces similar interactions or is able to simulate those physical interactions would also be able to be conscious. To suggest otherwise is to suggest dualism and that requires supporting evidence. So, yes, some organizations of physical objects and processes produce consciousness and while we don't know that electronics can, you'll notice that I didn't say electronics. I said machine. The human mind is a machine. Since we have one example of a machine with consciousness it is reasonable to assume that other arrangements could too.

  • @themimsyborogov42
    @themimsyborogov42 9 years ago +11

    I would think that you are only shrinking the room rather than removing it: the room's inputs change from the letters through the hole to the eyes and ears etc., the output changes from the written letters to spoken words and body language etc., and the instruction book changes to memory. The "room understands" argument still stands, I feel.

    • @georgeparkins777
      @georgeparkins777 5 years ago

      I wish I'd seen this before I posted. You said it more eloquently than I did.

    • @PedroAntonioLea-PlazaPuig
      @PedroAntonioLea-PlazaPuig 4 years ago

      You are "placing" the room "inside" the person's mind

  • @bangboom123
    @bangboom123 9 years ago

    Coming at this as a psychologist with a side interest in philosophy.
    Topics to look up in regard to this:
    -Ideasthesia
    -Embodied cognition/extended mind
    -Neutral monism (William James's initial interpretation of it in particular)
    -Human perception
    Basically: The human mind is not like computers as we understand them today, because computers tag representations for processing rather than ever understand what they represent, as per Searle. Human beings can, however, derive meaning automatically from any given percept they experience. The distinction between percept and representation is very important. A representation is a construct onto which meaning is to be assigned, a percept is by its nature already understood, it cannot be created without semantic information behind it (think when you look at ambiguous images how a familiar shape can suddenly emerge). This conjoined processing of percept and concept at once is what's articulated by the very new study into ideasthesia, a new model for understanding synaesthesia that I shan't get into here. This notion, that what we experience is in fact what we understand, is what brings me to neutral monism, a position in metaphysics that rejects that the world is entirely mental, entirely physical, or both separately. The problem with the computationalist perspective in psychology is that it simply says that mental information, be it qualia or semantics or whatever, is added partway through physical processes in the brain without explaining how they interact (see Searle's criticism again). The neutral monist perspective in conjunction with the work of phenomenologists, especially Maurice Merleau-Ponty, allowed for percepts and concepts to be considered as one, and helped give rise to the nascent study of embodied cognition, which shows the myriad impacts the body has upon the mind, and externalist philosophies like Chalmers' which accounts for how external "physical" information can be considered part of the mind.
    I rambled a bit there. Sorry. If you want to discuss this or know more than I do, please comment. Dialogical reasoning is better.

  • @Kram1032
    @Kram1032 5 years ago +2

    I realize this video is like four years old by now, and a crazy amount of stuff has happened since in this field. But here's my future-informed take:
    I think brains very much do use algorithms of sorts to turn perceptions and experiences into actions. They are, however, relatively noisy and perhaps not as cleanly separable as your standard computer algorithm might be.
    That being said, even computers can make arbitrary decisions. Quasi-arbitrarily, by using what's called a Pseudo Random Number Generator (PRNG), or even properly arbitrarily, by using an external source of randomness as part of their algorithms (see the sketch after this comment). For instance, they could simply look at the cosmic microwave background and use that signal to drive decisions. This source of entropy should be pretty much enough to cover everything that algorithms which rely on super precise values at every step couldn't do.
    And even without such an external source of randomness, modern AIs can do a rather good job at coming up with diverse, "arbitrary" decisions to a given problem.
    There is some merit in the Chinese Room argument in terms of the current state of AI. For instance, AIs' failure cases are often weird and unexpected. They clearly do not, as yet, "think like us". To me these issues seem more temporary than fundamental though. Part of it is using too little data (effectively the pool of experience an AI could draw from), part is posing imprecise questions (The AI will learn to answer a question very close but not *identical* to what you actually wanted to have an answer for. Closeness in the sense that, for all the experience it got to draw from, the answer is going to match what you'd expect virtually always, but if you go a little further off-field, the answers will suddenly differ)
    And there are similar problems like that.
    Now, the fact that such a precise question is needed in the first place is pretty much like the setup in the Chinese Room Experiment. I think this can be overcome though. It's in big part due to these AIs being cut off from experiences similar to those we get in our lives. We get to draw from so many more sources than any AI to date. It's unreasonable to expect an AI to learn to be similarly good at ALL THE THINGS as we are (or even better than us).
    Another part is evolutionary biases. There is no reason to believe that we couldn't replicate those in AIs as well. They are baseline assumptions / ways of thinking/acting/responding which we are primed for simply by virtue of evolutionary history. Due to our genes. - These biases exist in part due to random chance (they didn't disadvantage us enough to be selected against) and in part for good reason (having them directly benefited us, hence they got selected for). An AI wouldn't have those, unless directly programmed to. But if we gave them to AIs, and I do believe we could, they would think practically identically to us.
    (Whatever it means to "think like a human" in the first place. Like, I think like *me.* There is variability in this as well)
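    A small sketch of the pseudo-random versus external-entropy distinction mentioned above. Python's standard modules stand in for the exotic "cosmic microwave background" example; the names and choices are purely illustrative:

        import random   # pseudo-random: fully determined once you know the seed
        import secrets  # draws on entropy the operating system gathers from outside the program

        rng = random.Random(42)                    # same seed, same "arbitrary" choice every run
        pseudo_choice = rng.choice(["tea", "coffee", "biscuits"])

        entropy_choice = secrets.choice(["tea", "coffee", "biscuits"])  # not reproducible from the code alone

        print(pseudo_choice, entropy_choice)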

  • @mathymathymathy9091
    @mathymathymathy9091 7 years ago +2

    Humans may not follow simple algorithms, but they do always follow algorithms that are extremely complex. We can, in principle, work out these algorithms by analysing every neuron in the brain. For example, one of a vast number of steps would be "If Neuron A received a signal from Neuron B, pass the signal on to Neuron C". I am simplifying things here, but if you could store that network of neurons as data on a computer you could simulate the mind, and ultimately what you would produce would be indistinguishable in every way, including in the experience of the mind, from a human brain. This is impractical at the moment, but it may be possible in the future.
    This is like the "China Brain" thought experiment. Assume for the purposes of the thought experiment that there are as many people in China as there are neurons in a brain. Give everyone a phone and a set of instructions, such as "If you receive a phone call from B, call C". They can then simulate the neurons inside a brain. (People outside China could also relay information to the brain by calling specific people at specific times; these simulate sensory neurons).
    Different authors have considered this and come to different conclusions; I favour Daniel Dennett's conclusion which is that in simulating a brain, the China Brain is indeed intelligent. The human brain is essentially a vastly complex algorithm and the China Brain follows the same algorithm. Now, suppose no Chinese person understands English. We could teach English to the China Brain, just as we teach English to humans. Crucially, we do not need to teach English to any Chinese person in order to do this. They just follow a series of rules which enables the China Brain to understand English.
    Now, consider this. Suppose we make one person (who doesn't know English) simulate every single Chinese person in the China Brain. They would essentially be following a vast algorithm that keeps track of every single neuron in the human brain. They would not understand English, but the "China Brain" still would. From this, the conclusion we could draw is that it is the algorithm, rather than the person, that would understand the language.
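    A toy rendering of the "if you receive a call from B, call C" rulebook described above, which stays the same whether it is carried out by silicon, by phone-holders, or by one very patient person (the names and wiring are invented for illustration only):

        # Each entry says: when you hear from this neuron, pass the signal on to these neurons.
        wiring = {
            "sensor": ["A"],
            "A": ["B", "C"],
            "B": ["motor"],
            "C": ["motor"],
        }

        def propagate(start, wiring):
            fired, queue = set(), [start]
            while queue:
                neuron = queue.pop(0)
                if neuron in fired:
                    continue                  # each neuron fires at most once in this toy model
                fired.add(neuron)
                queue.extend(wiring.get(neuron, []))
            return fired

        print(propagate("sensor", wiring))    # the whole cascade, whoever "plays" each neuron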

  • @liyans1
    @liyans1 8 years ago +10

    regarding these rules that we can change and not crash, what about people with OCD/OCPD who literally do crash when they can't follow these rules? aren't we just very sophisticated computers that can program/reprogram ourselves as we need to? then wouldn't these be the 'bugs' that cause issues?

  • @codehorse8843
    @codehorse8843 4 years ago +3

    As far as modern science can tell, there's no significant functional difference between an organic neuron and a mechanical neuron. We've made simple circuits with organic neurons and we've created organic learning with neural networks. As someone who is studying software engineering and is planning on working in the software development sphere, I can tell you that most of the people who argue that a mechanical brain can't be conscious are not familiar with modern software and hardware advancements.
    To me the question is not whether it *is* possible to create mechanical consciousness but rather whether we should. Frankly, I can't see a single significant reason to do so; any proposed use of such technology could be circumvented with a simpler and less morally ambiguous neural network. If we create consciousness it will only be for sadistic or selfish reasons, such as endless simulated torture or the like, or to show it off in some museum and go "look at this, isn't this a fine specimen!" And I think most people agree that neither of those things is good in any sense of the word.

  •  9 years ago +6

    The Good Old AI was (mostly) all about algorithms implementing what we thought was thinking, but the new breed of AI is (mostly) about algorithms implementing the process of thinking, and not the thinking itself. And that's a huge difference. Instead of forcing our robots to walk by following an algorithm, we provide them with algorithms that learn to walk. As an engineer, the problem is that our systems do things and we don't know why. We see them doing the correct thing but we don't have even a theory of why the decision was taken.
    And talking about qualia, another pair of remarks: a) «Intuition Pumps» by Dennett attacks the concept, b) what deep learning neural networks do is ... finding qualia! And their success is remarkable; watch ruclips.net/video/n1ViNeWhC24/видео.html

  • @tessarnold7597
    @tessarnold7597 4 years ago

    2 things: 1) the idea of the mind being like a computer is, like most mind metaphors, completely backwards. We created computers to be an amplified aspect of the human mind. So the comparison is a lot like, "I built this chair to do the job my legs do, only better. So, how are my legs meaningfully like this chair?" 2) The problem with the Chinese Room thought experiment is that it assumes the person inside the room has a definite separateness, in that the person alone could never learn the language even if they wanted to. But the person in the room is only part of the whole system. The book of precise rules is the other - in so far as processing the language goes. It's a polar system. Either, without each other, would be incomplete. There is an input, the rule book is the processing, the man creating symbols is the output. All three are inextricably linked to one another. It is the mistake of thinking we can separate things into discrete categories all the time. Some systems can not be understood that way. Consciousness, or general artificial intelligence, in this case, is really one of those types of systems.

  • @Pfhorrest
    @Pfhorrest 5 years ago +1

    Searle is right that syntax doesn't equal semantics, but that's because semantics is about relating symbols to experiential phenomena. The room needs eyes and ears and so on. If the rulebooks in the room were also picture books, and the-cow-says-moo type books, and scratch-n-sniff books, so the symbols could be related to those sensory phenomena, and the man memorized all of the books, then the man WOULD have just learned Chinese. There is a functional difference between Searle's stock room and a native Chinese speaker: you can pass a native Chinese speaker a picture of a duck on a lake and ask "what kind of bird is on the water?" and it will give you an answer, while the man with just symbol-manipulation rules without any reference to images has no way of deciding what answer to spit out. Computer vision and hearing and so on are all about translating sensory experiences into abstract symbols, so there is no reason that a computer with all of the linguistic programs AND computer vision and hearing and so on could not in principle genuinely understand language, and demonstrate it by performing identical functions to those that human Chinese speakers do.

  • @JesterAzazel
    @JesterAzazel 5 years ago

    Emotions are very important for decision making. There are case studies about it. That could explain that "botched" decision making process mentioned around 8 minutes in. It's not exactly botched as much as necessary, to keep decision making from being an overwhelmingly difficult task, as it was for Eliot.

  • @0744401
    @0744401 8 years ago +12

    Let's say that the difference between human minds and computer AI is that humans have qualia.
    Let's say that we have a special, non-physical organ - a sort of soul - that allows us to have qualia.
    Well, we know everything there is to know about the physics of computers, and it doesn't seem like they have anything physical that could allow them to have qualia.
    But they might have some non-physical, soul-like parts that allow them to have qualia.
    Or, maybe qualia aren't the result of souls, but, rather, «come out of nowhere». But then, if they come out of nowhere for us, they might as well come out of nowhere for computers.
    It's not like we know how and why we have qualia if we have them, and, by that logic, we can't rule out that computers might have them.
    Besides, the interesting question, for me, is this : under what circumstances do we have to treat a computer with respect, dignity and deference to its autonomy? It seems to me that, them being «somewhat likely» to have minds «close enough» to our own would be sufficient.

  • @Urchak
    @Urchak 9 years ago

    The core flaw of the Chinese Room analogy turns on the definition of the word "understands". Searle constructed an analogy specifically designed to exclude our usual understanding of understanding.
    Our idea of understanding is rooted in the way we understand. The human brain isn't a large single organ with a flow of information in, a processing of that information, and then an output. It's a series of interconnected modules evolved over time with a lot of redundancy.
    If we wanted to construct a Chinese Room analogy that more accurately represented the way our understanding of understanding works, it would have to be more like a "Chinese Bureaucracy" analogy. In this analogy the Chinese room is only one tiny bit of a larger organizational structure. Information flows into and out of the Room from other Rooms in a constant stream. Some rooms follow instructions that associate certain symbols with sensory data. Others associate symbols with memories or desires. Some of the instructions in the rooms stay static, while others can be altered by the symbols passing through the rooms. The Searle room doesn't understand Chinese. But the massive network of rooms would be pretty indistinguishable from how a person understands Chinese.
    If we wanted to deal with the idea of a "conscious understanding" of Chinese, as in "do you have to be aware of your understanding to understand", there is a further expansion of the Bureaucracy that could add that ability. (For Zod's sake, building a computer to do this is a BAD IDEA.)
    There is also an "awareness committee" of rooms that receives summaries of the symbols passing between other processing rooms. The awareness committee is often convinced that it has executive control over decisions, but in fact the vast majority of output happens before the awareness committee is even informed. The rule set of the awareness committee rooms is based on the idea of being a tie-breaker when the rule sets of other rooms reach conflict or fail to produce an output. Otherwise the awareness committee's job is purely to observe and build a rule set based on those observations.
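    One way to picture the "Bureaucracy": each room is a small function, and some rooms rewrite their own rulebooks as symbols pass through. The rooms, symbols and rules below are made up purely to illustrate the shape of the idea:

        class Room:
            # A room that maps incoming symbols to outgoing symbols via its rulebook.
            def __init__(self, rules, learns=False):
                self.rules, self.learns = dict(rules), learns

            def process(self, symbol):
                if self.learns and symbol not in self.rules:
                    self.rules[symbol] = symbol + "'"   # this room alters its own rulebook
                return self.rules.get(symbol, "?")

        sensory = Room({"光": "LIGHT", "痛": "PAIN"})    # static associations with sensory data
        memory = Room({}, learns=True)                   # rules altered by the symbols passing through

        signal = "光"
        for room in [sensory, memory]:                   # symbols flow from room to room
            signal = room.process(signal)
        print(signal, memory.rules)                      # no single room "understands" the whole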

  • @OmegaCraftable
    @OmegaCraftable 9 years ago

    Computer Science student here who has also done some neuroscience. To respond especially to the parts at around seven minutes: modern computers work solely on straightforward deterministic models and algorithms, whereas minds have faculties, such as the basal ganglia, that work on probabilistic models and algorithms. The sorts of algorithms that are usually talked about are the former, as they are easier to understand and more applicable, but outside the confines of precisely engineered systems, probabilistic algorithms make up the overwhelmingly vast majority of processes.
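    A compact illustration of the deterministic-versus-probabilistic contrast drawn above. The action values are invented, and this is not a model of the basal ganglia, just the shape of the two kinds of algorithm:

        import random

        values = {"reach": 0.5, "wait": 0.3, "withdraw": 0.2}

        def deterministic_policy(values):
            return max(values, key=values.get)   # the same input always yields the same answer

        def probabilistic_policy(values):
            actions, weights = zip(*values.items())
            return random.choices(actions, weights=weights)[0]   # the same input yields varying answers

        print(deterministic_policy(values))
        print([probabilistic_policy(values) for _ in range(5)])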

  •  9 years ago

    I would like to remark that computers are not necessarily implemented with "1s and 0s". That's only the case for digital electronic computers. In the beginning we had analog computers, using continuous values (real values represented by physical quantities of charge, or even mechanical devices). Selecting 0s and 1s was more about economy and engineering challenges than an inherent feature of computers.

  • @antonidamk
    @antonidamk 2 years ago +1

    It is really interesting to read a lot of comments here that effectively disprove the theory on the basis of modern scientific advances which have brought AI much closer to the human brain than it was before. However, I think it is worth commenting that even before these advances, the theory was weak because it relied on an assumption that just because at the time it was not possible to model the human brain with an algorithm, that meant it could not in principle be done, whereas there is just as much evidence for that argument as for saying that the algorithm is so complex that we haven't yet worked it out (and may not have the capacity to do so), but it might well be possible. And even if the algorithm is so complex that we could never do it, how close can a computer get before we say it is the same? Individual humans' brains work differently, and we still consider them all human, so why not the same with AI if it is within the same ballpark?

  • @bassem500
    @bassem500 2 years ago

    As a software engineer and a SciFi aficionado, AI is a subject of interest for me. I would like to invite you to think about adding "will" into the mix for AI. Imagine we program a computer with the "will" not to be shut down. Next we give it a set of tools to find the risks which would result in a shutdown, and another set of tools to mitigate found risks. Can you imagine that, in an iterative process, this computer would gather knowledge about the risks and develop procedures which make it ever more vigilant? Put another way: I am postulating that if we give a computer, connected to the Internet and with natural language analysis capabilities (like Siri has), the "will to live", it will develop self-awareness in the end. Giving a computer a purpose contextualises information and action for it. A fly, with its tiny brain/nerve system, has self-awareness of sorts and, for sure, a will to live... computers we are able to build today have orders of magnitude more calculation power than a fly. Even bacteria have a kind of "will to live"... "virus"? We already have those in digital form.
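    Purely as a toy illustration of the iterative loop described above (the risk list, the mitigation step and every name below are invented; nothing here is a real system, and nothing like Siri is involved):

        # A caricature of "find risks to being shut down, then mitigate them", repeated.
        risks = ["power loss", "unpatched bug", "hostile operator"]
        mitigations = {}

        def find_new_risk(step):
            return f"newly discovered risk #{step}"   # stand-in for real risk discovery

        for step in range(3):                         # each pass makes the system more "vigilant"
            for risk in list(risks):
                mitigations.setdefault(risk, f"procedure to handle {risk}")
            risks.append(find_new_risk(step))         # gathered knowledge widens the risk list

        print(len(mitigations), "mitigation procedures so far")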

  • @redeamed19
    @redeamed19 9 years ago +8

    "systematic representation" seems like an odd criteria to tac on to artificial intelligence. it seems to take our method of intelligence and suggest that all other intelligence must function the same way or be inferior. "if it's not like be it is not as good by definition". this is a massive flaw in this argument. Even if qualia are more than just a product of physical interactions, and even if qualia are necessary for our brand of intelligence this does not mean AI is impossible, nor that it must be inferior to our process.
    AI is about creating a system
    capable of dynamic decision making and learning across as wide a spectrum of tasks as possible.
    the Chinese box experiment. In Doctor Who there is a shapeshifting time traveling robot piloted by miniaturized people, let's ignore all of this except Piloted my miniaturized people. now assume " the system, takes all the various inputs we can (lights, sounds, etc) and passes those to a man in the head that follows some manual rules to forward the message to some other part of the body where someone else follows the rules to carry out the action "kick maybe?". same idea as the Chinese box but far more complex. an onlooker can in no way tell the difference between it and a person. what color is an object? assigned that wave length, or range about, a name like red and use that when other people ask. my question is how would a person even know this robot does not " understand"? what does it mean to "understand"? If no distinction can be made what is the point in claiming a difference?
    even in the language we speak we are regularly failing to accurately understand each other. Some times we use the wrong words because we're defining them wrong, other times people apply the wrong definition to a word you used and still more often words have such vague definitions they can be interpreted multiple ways. does this count against the intelligence score of our process of " intelligence "?

  • @Pfhorrest
    @Pfhorrest 5 лет назад +1

    The qualia problem goes away completely if you just adopt pan-proto-experientialism. Everything "has qualia", has a subjective first-person experience of what it's like to be that thing, but the quality of that experience varies in accordance with the function of the thing. A thing that functions a lot like a human brain will have a subjective first-person experience, and "qualia", a lot like that of a human. Things less and less like humans will have less and less similar experiences, but still have some kind of experience. Even a tree, a rock, or an electron has a first-person experience, but their functionality is so much simpler and so unlike a human experience that there really isn't anything of interest to say about them. But they're technically there, and our experiences are built up out of complex networks of those kinds of simpler experiences, in the same way that our behaviors are built up out of complex networks of the simpler behaviors of the things we're made of.

  • @kiancuratolo903
    @kiancuratolo903 Год назад

    I fall on the side of Boden's argument, along with a feeling that if something responds like a sentient being, interprets information like a sentient being, has an internal model (or at least in all ways reports to have an internal model) of consciousness, and just overall matches one to one with all qualities of a sentient being... it's a sentient being, and the objection "but it isn't at the most fundamental level" is as bad an argument as "atoms can't think, so people aren't conscious". Sentience, true intelligence, can emerge from any sufficiently complex system. I would highly suggest anyone interested in this look at Max Tegmark's video on integrated information theory, a physical theory of consciousness that I think provides some amazing philosophical fodder even if it's not the ultimate answer.

  • @slottmachine
    @slottmachine 9 лет назад

    As both a long time fan of philosophy tube, and a current computer science major, I can safely say that this has been my favorite video yet.
    Basically I think of everything as computer science (this totally, no doubt, for sure isn't because I have to spend an incredible amount of time thinking about math and science because of my classes; no sir, no bias here, not on my watch), so every time I hear something like "you can have rules of thumb but your decision making isn't specified by precise rules", all I can think is "RULES THAT YOU ARE AWARE OF, PUNK" - but actually that's just a working theory. It's interesting to imagine computing as fundamentally different from cognition of any sort, for a change. I particularly loved the idea of the field of A.I. being modern alchemy, although frankly, even if it is, it's giving us so much new technology that I'm down to just keep pretending.
    Do you think it's possible that A.I. is only impossible due to physical laws of nature, and that mathematically it is sort of possible to map out the fundamentals of intelligence? Kinda like how, technically, you can make a triangle with infinite circles, but you can never actually draw infinite circles? But then again, humans did.

  • @turtle4llama
    @turtle4llama 5 лет назад

    The thing about AI and alchemy is that we can do alchemy. We can get gold where there was none and we can artificially extend life; it's just not energy efficient. It doesn't matter if we make a "human" mind, just a mind indistinguishable from one to a layman.

  • @sunnyrainyday6820
    @sunnyrainyday6820 8 лет назад

    In Plato's divided line, the thing higher than mathematical logic is the 'forms' - something that can exist in the absence of numbers, but that numbers can't exist in the absence of (like how a hand exists in the absence of its shadow but its shadow can't exist without the hand). I believe that the final deciding force that makes the 'uhh, this one' choice is the responsibility of a form, or some particular of that form, that the 'mind or consciousness' possesses (or is made of).

  • @stefanklisarov4053
    @stefanklisarov4053 9 лет назад

    +Philosophy Tube
    Searle's argument aims to prove that computers cannot become conscious, not that they cannot think or be intelligent (in the context of problem solving and acquisition of new knowledge).
    We as conscious agents perceive information with somewhat immediate understanding; we have complex contexts and concepts behind the symbols, hence the semantics.
    But intelligence and consciousness are two different subjects; there is no doubt that machines can perform tasks that require intelligence and are able to learn.
    And as Searle puts it himself, we know that creating a conscious machine is also possible - our brain is but a machine - but as long as we use algorithms that operate only on symbol manipulation we will not achieve this with computers.
    Correct me if I am wrong, but this is how I understand the argument.

  • @checol70
    @checol70 8 лет назад

    I would say that a computer will never think, but a program might. When I think of a mind I envision it as the software that runs the hardware of your body and receives input not only from the senses, but from the body itself.

  • @lodevijk
    @lodevijk 9 лет назад

    For an example of how the most rudimentary hardware, millions of times less complex than a neuron, can become completely unpredictable and impossible to analyse, read about Thompson's evolvable hardware. They programmed an FPGA using an evolutionary algorithm and got something that works, and nobody knows how. Our brains evolved similarly: from logically-working blocks, arranged in the most complex way, not by a designer, but by themselves.
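    In rough outline, the evolutionary loop works something like this sketch (Python; the bitstring stands in for an FPGA configuration, and the fitness function is a toy placeholder, not what Thompson actually used):
      import random
      def fitness(bits):
          return sum(bits)                  # placeholder for "how well does this circuit work"
      def mutate(bits, rate=0.05):
          return [b ^ (random.random() < rate) for b in bits]   # flip a few bits at random
      population = [[random.randint(0, 1) for _ in range(64)] for _ in range(20)]
      for generation in range(100):
          population.sort(key=fitness, reverse=True)
          parents = population[:5]          # keep the best-scoring configurations
          population = [mutate(random.choice(parents)) for _ in range(20)]
      print(fitness(max(population, key=fitness)))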

  • @ignosegnose5578
    @ignosegnose5578 3 года назад

    I don't know how frequently people read new comments on old videos like this, but I think the thing that Searle and others are missing is that if a computer were to think, it would be in a completely novel way that would be similarly impossible for us to understand.
    Imagine, if you will, an AI that is sitting outside a room. The AI passes messages into the room, and inside the room there is a front-end computer program that translates what the AI is writing into the English language to be presented to a human. Surely the AI wouldn't understand English and wouldn't even think in English, but that doesn't preclude the AI from thinking; all it precludes is that it thinks in the way that we think.

  • @foxtrot.uniform.charlie.ki2824
    @foxtrot.uniform.charlie.ki2824 6 лет назад

    If you were to build a computer with a sensor that - like a neuron - sends a signal faster or slower (always at the same amplitude) depending on the input, the signal would change in rate as the input changed, much like a neuron. Then add other sensors that modulate the signal, slowing it down or speeding it up depending on other sensory inputs
    - then you would end up with a brain.
    The human mind, however complex, is still made up of simple processes. It is the system in which these processes are arranged that is unimaginably complex - but given enough time and technology it should be possible to build a machine that has, or will develop, the same complexity.
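    A toy version of that rate-coded sensor might look like this (Python; the numbers and the linear scaling are made up purely for illustration):
      def firing_rate(stimulus, modulation, max_rate=100.0):
          # Stronger stimulus -> faster firing; a second input scales the rate up or down.
          rate = max_rate * stimulus * modulation
          return min(max(rate, 0.0), max_rate)   # amplitude never changes, only the rate
      print(firing_rate(0.4, 1.0))   # baseline firing rate
      print(firing_rate(0.4, 0.5))   # same stimulus, slowed by a modulating input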

  • @diablominero
    @diablominero 4 года назад

    I had a vague memory that someone British had done an interesting presentation on the Chinese Room argument, and guessed it might be Computerphile, so I searched YouTube for "computerphile chinese room" and it figured out that I was thinking of this video, even though I didn't know myself. Either the search engine is rapidly approaching actual thought, or it's literally magic.

  • @ErikratKhandnalie
    @ErikratKhandnalie 9 лет назад

    I think the big question that this whole conversation dances around is this: Are humans deterministic? Do we or do we not have "free will"? If we are not deterministic and have free will, then the whole question is far beyond what we as a species can honestly address right now. If, however, we are deterministic, and do not have free will, then it is fairly safe to say that truly sentient AI can indeed exist, and that anything that acts like it has intelligence can and should be assumed to have that intelligence. After all, given the same inputs, a deterministic person would behave precisely the same way every time - that is to say, if presented with identical information multiple times (including memory, sensory input, etc) a deterministic person would always arrive at the same decisions - precisely as a computer would. The issue regarding algorithmic behavior can easily be solved with the application of heuristics - it's a tricky thing that computer science hasn't really been able to fully utilize, but it is how people tend to think. Our thought processes can indeed be described in terms of algorithms - they are simply very long, complex, flexible algorithms.

  • @Trollitytrolltroll
    @Trollitytrolltroll 9 лет назад +1

    Whether or not a computerized human-like mind is feasible comes down to what we're searching for. Scientifically, questioning its possibility is irrelevant, as there is a human-like mind in existence, and we're it. It is thus feasible to have an AI, as long as it uses the same mechanisms we do.
    There is a paradox involved, being that we wish to have a "perfect" mind that resembles us, who are so innately imperfect due to evolution. The artificiality of such a machine that would resemble us comes down to the degree to which we wish for it to be different from us.
    Much like the Chinese Room experiment, what we have done so far is only to predetermine the ways an "AI" can work. We are subject to exterior stimuli that we incorporate into our thinking by learning (much like AIs we have now), and we are also subject to relatively predetermined reactions, such as emotions, the release of certain hormones, and certain physiological and behavioral aspects entirely out of our control.
    We are made to respond to threats to our survival, to our need for sustenance, and by evolution have a redundant base of peripheries to do so.
    If we give these same predetermined bases to a machine capable of learning and deducing according to them the way we do, it's essentially what we would call an artificial human intelligence.
    TL;DR The question isn't whether it's possible, it's whether we want a human mind or a "perfect" mind, or something in between, which begs the question: what the hell is a perfect mind?

  • @ggoedert
    @ggoedert 8 лет назад +1

    So what's the difference between the room and a physical brain? I mean, in the end you could just run the room as a simulation of all the neurons in the brain, and it would still be a room, but it would function exactly like the brain. The brain is just like this room: it processes the input and gives the output. One could even change the words 'room', 'papers' and 'person' in the paradox into 'brain', 'stored chemical memory' and 'electrical pulses' and just claim that the brain does not think, that the brain knows nothing, when actually it is the whole system of the brain, its state and history, its inputs and outputs, that makes consciousness emerge. If an extremely complex room with all these equivalent elements existed, I would say that that system actually thinks.

  • @FatihErdemKzlkaya
    @FatihErdemKzlkaya 8 лет назад

    I always wondered how philosophers are so confident about the nature of computation and mind. I guess they must know something more than computer scientists and cognitive scientists.

  • @adamjohnson3188
    @adamjohnson3188 8 лет назад

    I find myself persuaded by the idea that most things, including the neurons in our brains, are a function of some complex intermix of natural laws which can be approximately modeled by an algorithm, at least in principle if not in practice. So I guess that leads me to think not only that minds are the product of a cosmic algorithm, but that they can be modeled to a certain approximation.

  • @kfjw
    @kfjw 5 лет назад

    I think a very important missing component here is a discussion of emergent phenomena. Many of the AI models in use currently do not operate on easily quantifiable rules. That may sound like a contradiction, given that they are implemented using computers, which operate entirely through explicitly quantified rules.
    What I mean is that high-level descriptions of the system's behavior aren't easily quantifiable. When a computer vision system recognizes an object in a picture, there's no code you can point to that explains why it made that decision in terms that can be understood on any high level. Instead, it's just the way the data flowed through the neural net or decision tree or whatever.
    I would really like to see the Tube Man do a video on emergent phenomena, and possibly chaos theory as well. This would cover two of the most important ways that deterministic systems can produce unexpected or unpredictable outcomes.
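    To make that point concrete, here is a tiny illustration (Python; the weights are invented): the "decision" lives in the learned numbers, not in any line of code you could point to as the explanation.
      weights = [[0.8, -1.2, 0.3], [-0.5, 0.9, 1.1]]   # learned elsewhere; opaque here
      def classify(pixels):
          scores = [sum(w * x for w, x in zip(row, pixels)) for row in weights]
          return max(range(len(scores)), key=lambda i: scores[i])   # index of the winning class
      print(classify([0.2, 0.7, 0.1]))   # nothing in classify() "explains" why this class won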

  • @vlayneberry578
    @vlayneberry578 4 года назад

    This is interesting, but also largely based on old models. Part of that is that ML moves really fast, and there have been major advancements in AI in the last 5 years; another part is that this is looking only at currently built technologies, and not considering the possibilities AI researchers are exploring nearly as much. I think it would be interesting to go through the arguments posited and respond to each one in terms of possibilities for AI - and I think different arguments from different parts of this video are referring to different types of AI. But it's a lot more than I can put in a YouTube comment, and than anyone will bother to read, lol. I might try to make a response video or smth of the sort sometime soon.

  • @billames4367
    @billames4367 4 года назад

    What is thinking? It is the mind trying to rationally find the solution to a problem. Many things interfere with the meat computer's proper (rational) operation. Genetic defects, degradation of connections due to age or lack of blood supply, drugs of many types, a very bad life (no opportunity or bad choices), sickness of body or mind resulting in a poorly functional or relatively unused mind. Opportunities to think (to solve a problem) can be impeded when a problem is not recognized as such. Accepting a bad situation may be a lot easier than fixing it (thinking how to solve it.)

  • @biomerl
    @biomerl 9 лет назад +2

    Yo, person who made this video, it was posted to reddit's philosophy subreddit, if you want to see discussion, or more of it, look there.

  • @csbened16
    @csbened16 8 лет назад

    The algorithm you described ("just decide after gathering some data") is called an anytime algorithm and is used in AI, so it's easy to do. You need to elaborate more than one acceptable solution (which the brain does in parallel) and then choose one at random. There is no reason to believe anything extraordinary happens in the brain; it could be modelled (we can model one neuron, even a few thousand). Current computers have other priorities (logic, exact computing, ...) and do not intend to compete with the cheap-to-produce brain.
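    A minimal sketch of the anytime idea (Python; the options, scoring rule and deadline are made up): keep a usable answer at all times and return the best one found when time runs out.
      import random, time
      def anytime_choose(options, score, deadline=0.01):
          best = random.choice(options)              # always have *some* answer ready
          start = time.time()
          while time.time() - start < deadline:      # keep improving until the deadline
              candidate = random.choice(options)
              if score(candidate) > score(best):
                  best = candidate
          return best
      biscuits = [("plain", 1.0), ("chocolate", 2.5), ("fancy", 4.0)]
      print(anytime_choose(biscuits, score=lambda b: 1.0 / b[1]))   # cheapest found in the time allowed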

  • @ricardo.mazeto
    @ricardo.mazeto 9 лет назад

    Hi Olly. I'm a programmer.
    About your analogy with the biscuits: if one goes "ah, whatever, this one", they have reached a threshold that limits the amount of thinking to do about it. How much math is enough to get a satisfying result? We have a threshold for that. A child could choose the biscuits on fire, because their brain's algorithm for choosing isn't sharp enough; then come the parents, teaching the kid that fire is dangerous and burns their fingers. Kids don't crash, they burn their fingers! When you learn about GMOs, or the right way to store your food in the fridge, you're improving your algorithms!
    Here's an interesting fact: when ants find food they leave a trail of pheromone as they walk, so other ants can follow the path. But sometimes the paths close into a circle and the ants die in it. It's called the ant death spiral. Isn't that algorithmic behaviour?
    Algorithms are just a set of rules that compute given data. And so are our brains! DNA behaves completely algorithmically! Is DNA a computer? Hell yeah! Nowadays CPU transistors are smaller than neurons, and neurons behave much like transistors as far as I know.
    It's all algorithmically programmable. We don't have AI yet 'cos our brains had millions of years to evolve, and we're trying to accomplish the same in a fraction of that time. Research in that field is expensive and takes time.
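    That "ah, whatever, this one" threshold can itself be written as a rule - a satisficing sketch (Python; the names and numbers are invented): stop at the first option that clears a good-enough bar instead of optimising over everything.
      def pick_biscuits(options, good_enough=0.7):
          for name, quality in options:
              if quality >= good_enough:        # threshold reached -> stop thinking
                  return name
          return options[0][0]                  # fallback if nothing clears the bar
      print(pick_biscuits([("burnt", 0.1), ("plain", 0.75), ("fancy", 0.95)]))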

  • @bertrandlecerf2565
    @bertrandlecerf2565 8 лет назад

    Anyone else think this issue could be applied to the story of Overwatch ?
    For those who aren't familiar with the game: in Overwatch's lore, Omnics (sentient robots with actual feelings and emotions) have been created, and huge tensions rise because of that, between those who think Omnics are truly "alive" and should be treated as human beings, and those who think they are just robots, just buckets of bolts built to serve.
    The tensions eventually grow so high that international crises and civil wars start appearing all over the globe; murder, chaos, war and terrorism are everywhere. And sure, that's just a game, but one could argue that eventually, should AI technology become advanced enough for us to create similar sentient robots, we will most likely face that exact same problem.
    So, I guess if someone answered the question "Are minds like computers ?", then the answer to that question would raise another question: What should we do about it ? Should we even do anything about it ?
    (It's funny to think I can't be bothered to really think about actual problems plaguing our world right now, yet a hypothetical problem from a video game is keeping me up at night).

  • @auroreinara7322
    @auroreinara7322 Год назад +1

    This needs an update with how AI is flooding the world atm!

  • @petersmythe6462
    @petersmythe6462 6 лет назад

    At some point, if it isn't useful, and it's evenly-distributed, it can be modeled as random noise, which means it's extremely easy to just throw in error rates and a PRNG to turn mechanistic computation into a mind emulation. Again, see argument about a weighted self-modifying graph with a million nodes and a billion links being something you could run on a modern GPU with relative ease.

  • @Amy-zb6ph
    @Amy-zb6ph 7 лет назад

    I think creativity and spontaneity are what make our minds different from computers. We come up with creative ways to put ideas together, whereas a computer will only follow its programming. We also spontaneously decide to do things sometimes and, while randomness seems like more of something a computer could do, our spontaneous decisions still lie within certain parameters because we probably won't decide spontaneously to do something we don't like to do.

  • @Reddles37
    @Reddles37 3 года назад +3

    The Chinese room is a stupid argument because it mixes up its own analogy. The man in the room is only handling the inputs and outputs, he is like the eyes and the mouth of the system. Obviously your eyes don't understand what you see and your mouth doesn't understand what you say, but that is not some profound revelation. The book is the brain, it does all of the decision making and has all of the information. The book is the thing that understands Chinese!
    If that seems silly to you, just take a minute to think about what is actually involved in this 'book'. It cannot simply be a list of answers for any given input, it needs to store a 'memory' of previous inputs and change its future responses based on the context. It has to be able to 'learn' new information it is given. And it needs to have an incredibly complex network of how different ideas relate to each other, so that it can correctly determine how to respond to a conversation covering a variety of topics. The required complexity and flexibility of the rules is such that the 'book' would essentially be capable of thinking for all intents and purposes.
    In fact the idea of programming a long list of rules is pretty ridiculous, and researchers have long since abandoned this "good old AI" approach. These days the focus is much more on machine learning, where the computer learns rules and patterns on its own. This often takes the form of simulated artificial neural networks which operate quite similarly to neurons in a real brain.

  • @georgeparkins777
    @georgeparkins777 5 лет назад

    Here's the thing about Searle's response: If the guy memorized the rulebooks and the characters, he still wouldn't speak Chinese, but something in his brain would. He's equivocating between a person's consciousness and a person's brain, I think.

  • @xzonia1
    @xzonia1 8 лет назад

    So many questions raised in this video! Wow. It's a lot to go through.
    I think it's a fair description of how current computers work to describe them as "any thing which processes representations systematically." As science advances in this field, how computers work may change, but for now it's an adequate description. I would be interested in hearing other definitions people have proposed to explain how computers work.
    Then you presented the idea of qualia, which you said are "nonrepresentational subjective elements" that exist in thoughts, which would disqualify computers from being capable of functioning in the same way human thoughts work.
    You asked: "Do minds use representations?" and "Are minds systematic?"
    Now here's where it gets confusing. When we ask "Can a computer think like a human brain?" what are we really asking here? It's important to remember that the human brain is actually made up of three different parts:
    the reptilian brain - controls the body's vital functions such as breathing
    the limbic brain - responsible for emotions
    the neocortex - region capable of thought
    So when we compare computer processes to brain functions, I think it's safe to say the area of the human brain we're concerned with is the neocortex. This is where "thought" occurs. As our limbic brain enables us to feel emotions, it's a separate region of the brain that enables us to experience "nonrepresentational subjective elements" in our cognitive experiences. So I don't think it really matters if qualia exist or not because it would relate to an area of the brain that we are not looking at when discussing whether or not computers can "think" the way humans do. The neocortex doesn't allow us to experience qualia; the limbic brain does.
    If we only look at how the neocortex functions, I think we would find a stronger parallel between how humans "think" and how computers "think." The example you give in the video of a person at the grocery store doing cost analysis for buying the best biscuits vs saying "eh, I'll just get these" without any logical thought to the selection is more an example of the limbic brain overriding or vetoing the neocortex's vote on which biscuits to buy. It does not speak to how our thought processes occur, but rather to how the three parts of our brain are capable of working cooperatively with each other. Sometimes the reptilian brain gets to make the decisions (like fight or flight), sometimes the limbic brain does (I'll buy these because I'm right here, I'm tired, and I want to leave, and I just don't care any more), and sometimes the neocortex gets to make the decisions (I have these coupons and this purchase fits in my budget better, so I'll get these biscuits).
    I could argue that the neocortex does operate systematically, but that system is not binary, so it's more difficult to see how it functions in relation to how a computer functions. Computers use 0 / 1 binary representations, whereas the human brain may operate at base 10 or base 64 or base 10,000 (Who knows? The human brain calculates operations ridiculously fast compared to a computer... I'm sure there is some computer scientist or neurologist or mathematician who's figured out where the brain fits into this scale, but I've never read that calculation result before, so at the very least I don't know).
    What I do know is that the human brain processes significantly more input than the most sophisticated computer in existence in the world today, so assuming the neocortex doesn't have rules governing how it arrives at decisions may be simplistic. Ironically, those rules may be too complex for the human mind to consciously understand. It would be our "operating system," after all. I'm not sure we could understand our own code at a conscious level. And again, the neocortex is sometimes vetoed in favor of what one or both of the other two areas of the brain want to do instead.
    So it's possible decision making is specifiable to specific rules, but not rules we consciously understand (yet). Maybe someday we will figure them out, but I doubt it as those rules would differ from person to person ... that's part of the reason why each person is unique. We all have "operating systems" that differ in subtle ways from each other. We learn how to make decisions from our life experiences, so our codes get written as we grow up. We can write computer programs now that "learn" as they're exposed to more and more information, so they can simulate "thinking" as humans do. How much of it is just a simulation, and how much of it is actually recreating how we think? That is the question.

  • @andrewmartin3671
    @andrewmartin3671 9 лет назад +2

    A few comments. Firstly, very good - no standout objections in a very contemporary and controversial subject. At 2:30, "some [computers] now apparently can [pass the Turing Test]": indeed, with the correct combination of Turing Test variant, AI, and judge, there have been some passes, most notably (and again controversially) Eugene Goostman in 2014. However, I should point out that a more serious competition, founded by Hugh Loebner 25 years ago, is run annually by the AISB, and the last event, held at Bletchley Park on Sat 19 Sept, saw a clean sheet for the humans, with 16 out of 16 AIs being identified as such (four judges talk with four [AI + human] pairs = 16 "tests").
    For my money, the next step in picking apart the systematic electrochemical aspect of neurons from the adaptive dynamic behaviour produced by brains is to be found when the Philosophy of Dreyfus meets the Neurodynamics of Walter J Freeman III. This is what happens in Dreyfus (2007) "Why Heideggerian AI failed and how fixing it would require making it more Heideggerian".
    Keep up the good work Olly!
    [I'm a member of the AISB organising committee, btw]

  • @RatherGeekyStuff
    @RatherGeekyStuff 9 лет назад

    Great channel. I just began studying philosophy at the University of Copenhagen after 10 years as a musician. I must say, a whole new world is opening up to me. Keep up the good work! :)

  • @SendyTheEndless
    @SendyTheEndless 8 лет назад

    The thing with the Chinese Room is that you are supposed to be fooled into thinking the "room" is conversing with you in Chinese, right? Because a set of rules tells the guy inside how to reply in each case. But what if you sent the same message into the room over and over again? You'd get the same reply over and over again, whereas if the "room" had understanding of the language, it would eventually say "Why are you asking me the same thing over and over again?" - but in Chinese, of course. This is analogous to how computers follow a linear set of rules and can't "think outside the box".

  • @samanderson7057
    @samanderson7057 9 лет назад

    It sounds like a computer that could use sound recognition and visual recognition would sidestep the 'symbolic information' problem. A computer that can interact with the 'real world' without additional input would be able to interact with 'real' objects.
    Also, many adaptive systems 'jump the gun' and go with a best guess, just like (it is claimed) the human mind does.

  • @alexare_
    @alexare_ 4 года назад +1

    I know it's been a while, but I would love to hear a discussion between you and Robert Miles, a YouTuber with a channel on AI safety. I think it would be very interesting to see what thoughts each of your respective areas of expertise inspire in the other.

  • @Garbaz
    @Garbaz 8 лет назад

    1:04
    What does it mean to "understand"?
    When learning a new language, most people act in the beginning (and some until the end) just like the man in the room: They memorize rules & words and apply those to construct sentences.
    The moment I would say somebody "knows" the language is when they use those rules and words intuitively, no longer have to think about the specific rules, and even forget their definitions.
    For example, after reading, writing, hearing and speaking English for years, I sometimes have a hard time explaining to people what certain words mean (especially in my native language, German), even though I use them frequently.
    But this does *not* mean that the brain forgets these rules; they are memorized and used in the background, encoded as connections between neurons.
    One can think of these like background processes on a computer (or any other system [e.g. an economy]): they work on their own, never interacting with the conscious user, but without them nothing would work.

  • @rgaleny
    @rgaleny 9 лет назад

    A solution may be found in a virtual reality program that, like The Sims, simulates reality with a figure in a ground. A VR person in a VR world with realistic existential limitations would be driven by the conditions of living and thus have a humanistic perspective. And yet you could later give it a robot - like the NAO, only with a few extra senses - to be an extension into reality. As for info, it would have encyclopedic access to files of the physical, biological, social, and intellectual as references for action. You could also wire up an AI to the system of a person who walks around all day and interacts - whose actions and feelings are "taught to the system" over time.

  • @Hecatonicosachoron
    @Hecatonicosachoron 9 лет назад

    There is a thought experiment that I'd like to modify and reproduce here (I believe the original experiment was proposed by David Chalmers in a paper about zombies - the aware-seeming but actually unconscious entities).
    Let us suppose that we could study the brain well enough that we could build a very faithful model of it, and that our understanding of cognitive processes has also grown enough that we understand all the biological mechanisms underpinning them. With that in mind, let us conduct the following experiment:
    We begin with a willing conscious subject. We then proceed to replace her brain cells and her neurons, one by one, with mechanical neurons, which are really miniature computers, functionally indistinguishable from her original neurons. {If you believe that the neuron is not the best unit out of which to build a brain then pick your favourite unit and use that in the experiment instead.}
    By the end of the experiment the entire nervous system will be replaced by mechanical parts that are controlled by a computer.
    At which point will the individual stop being capable of thought?
    Why should she stop?
    How will the researchers know that she has stopped - especially since each part they replaced is, by definition, functionally indistinguishable from the original part in her body?
    I'm interested in reading responses to this challenge.

  • @timetuner
    @timetuner 9 лет назад +1

    A classical computer probably couldn't ever go beyond the Chinese Room, but neural network style computers may eventually advance to be just as flexible and messy as the human mind.

    • @EmilianoHeyns
      @EmilianoHeyns 9 лет назад

      +Abraxian Absolution May be able to *respond* as flexibly and messily as the human mind. Whether they have phenomenological experience is still unanswered at that point.

    • @timetuner
      @timetuner 9 лет назад

      Emiliano Heyns Considering that we are just about equally incapable of knowing whether other humans have phenomenological experience, I think that's a bit of a moot point.

    • @EmilianoHeyns
      @EmilianoHeyns 9 лет назад

      I think the qualia proponents would debate the "equally" part of that. We have *some* evidence of humans having phenomenality -- I have phenomenality, I am an animal with certain characteristics, therefore some animals with these characteristics have phenomenality. Weak support, for sure, but for humans there is no such weak support about non-animals or animals that do not share what we deem relevant characteristics.
      I am personally not inclined to say that the qualia problem principally excludes non-animal consciousness, but I'm also not inclined to say that anything that is not impossible should therefore be taken as a serious possibility. So far, I've not seen a positive argument for non-animal consciousness, only negative ones about our own. If the core of the argument should be "it's merely not *im*possible", I'd grant that without second thought and move on to something more interesting.
      Not that I have anything vested on consciousness being exclusive to animals. If I were to have a belief that a non-animal entity is conscious (against a given definition of consciousness), but an alief that it is not, it is my personal conviction that the alief must be battled. Same goes for moral consideration; my alief will probably have a very, very strong pull towards favoring humans over non-humans. That does not for a second mean that I should not change that unless I had good grounds for that bias -- grounds that I currently don't feel I have. This all aside the weakness of will problem.

    • @timetuner
      @timetuner 9 лет назад

      +Emiliano Heyns It's pretty difficult to give a positive argument because we don't seem to have a commonly understood and accepted set of necessary and sufficient conditions for consciousness. Even the human kind of consciousness is nigh impossible to pick apart, and how human consciousness differs from that of closely related animals even more so.
      When we're talking about AI, everything between (rigid program that passes Turing tests) and (perfect simulation of a functioning human nervous system) is almost completely alien territory.
      Neural networks will advance as complex and chaotic learning systems in ways entirely different from what is possible by our biology and evolutionary history. There aren't even words yet for much of what we'll have to work out as AI advances.
      But yeah. It might be a while before these debates can go anywhere really interesting.

    • @EmilianoHeyns
      @EmilianoHeyns 9 лет назад

      Abraxian Absolution​ agreed, although I think the Turing test doesn't tell us anything about phenomenality. But since it is still almost completely alien territory, the people claiming "there's nothing to it but brain states" are making an unsupported (although not unsupportable) claim. That the claim is being made very confidently means nothing. That people think "I know about brains, consciousness = brains, thus I know about about consciousness" just tells me that someone didn't pay attention when the law of identity was explained. Never mind the pervasive confusion of intelligence and consciousness.
      Anyhow, I agree that we should be open minded about the forms consciousness could take, and not be anthropocentric about it.

  • @shuheihisagi6689
    @shuheihisagi6689 3 года назад

    I would like more of an explanation of why, if something can be explained algorithmically, that doesn't mean it runs on an algorithm. Isn't the universe kind of running on a huge algorithm?

  • @SophiepTran
    @SophiepTran 8 лет назад

    Years ago neuroscientists created a simple calculator using neurons from a leech. This illustrates the fact that these neurons can function in much the same way as registers in a CPU. It does not mean a register can operate like a neuron in a brain, however. That being said, I'm of the belief that, given the proper building blocks and advancements in solid state engineering, we can approximate brain function to a degree, but the current computer science paradigms need to change in order to do so. That is beyond the scope of this comment, but suffice it to say that it can happen.
    Issue 2 is the term "simulation". It is true that in order to run a simulation you would need extreme resources at the machine level, because you not only need to run the subsystems hosting the simulation but all the calculations done by the simulation as well. In this case technology is the limiting factor and we cannot approximate a brain. That does not mean we cannot create sentience, which is the whole point of AI, isn't it? Notice how I dodged the "life" prerequisite: it does not need to be alive to be sentient. (Moral can of worms right there.)
    Issue 3 is the uncanny valley which affects all artificial entities attempting to simulate human behaviour. There is a deep disdain that is inherent in the human psyche that rejects anything that does not seem fully human. A simulation will never be accepted as being human because it can only approximate and can never be human. Therefore the AI should not have human physiology or try to mimic human behaviour.
    Lastly, there is the moral dilemma of the intrinsic rights of a sentient being, whether alive or not. I like Daniel Dennett's positions on the different levels of minds and consciousness; they seek to explain why it's acceptable to eat frog legs and not human legs. But I wrote too much already.

  • @bencrispe2497
    @bencrispe2497 9 лет назад

    You know, something just occurred to me in regards to the Chinese Room argument. In this scenario, we are assuming that just because the man can't speak Chinese from what he is doing, he doesn't "know" something the way a computer would. The perspective on the human side of the analogy is too wide. We know that a computer has inputs, a processor, and outputs. Knowledge works much the same way. Let's say you heard the word "apple" and you visualized an apple in your head. Well, you were only able to do that because you had knowledge of what an apple is. There's still the input of the sound entering your ear, the processing of all the brain cells firing in the sequence to trigger the memory, and the output of the image appearing in your mind's eye. This input-processing-output system is just what we have come to call memory, which is the basis of all knowledge, and since computers do the same thing in principle, a computer can "know" something just like a human can.

  • @alecchvirko6578
    @alecchvirko6578 9 лет назад

    The more our technology allows us to control our environments, the more our world becomes algorithmic. If for example we put nano bots in the atmosphere to control the weather patterns, then the weather would be controlled by an algorithm. So a system that thinks algorithmically might be more well suited to surviving in the future we are likely to create. A.I. might be the evolutionary inevitability of consciousness.

  • @MageJohnClanner
    @MageJohnClanner 10 месяцев назад

    Man I'd really love to sit down and have a proper debate about this theory, because I think there are some pretty big holes with how Searle is thinking about the Chinese Room. I know it's unlikely that you'll ever read this comment given the age of this video, but the topic fascinates me.
    For starters, assume that a human brain is a computer; Turing showed that a universal computer can simulate any other computer, so what the guy with the rules is doing is actually simulating a /different/ computer. It's that simulated computer that could potentially understand Chinese, not the guy himself. Whether the guy could learn some Chinese by observing the actions of the simulated computer is a completely different question.
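    A crude illustration of "the guy is simulating a different machine" (Python; the rule table is an invented toy, nothing like the size a real one would need to be): the outer loop only shuffles symbols, and whatever understanding there is would have to be credited to the machine encoded in the table.
      rules = {                             # (state, incoming symbol) -> (reply, next state)
          ("start", "ni hao"): ("ni hao!", "greeted"),
          ("greeted", "zai jian"): ("zai jian!", "start"),
      }
      def room_step(state, symbol):
          return rules.get((state, symbol), ("?", state))   # the "guy" just looks things up
      state = "start"
      for symbol in ["ni hao", "zai jian"]:
          reply, state = room_step(state, symbol)
          print(reply)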

  • @DrINTJ
    @DrINTJ 8 лет назад

    You are speaking of classical AI. Modern work on computational neuroscience is more akin to experiencing the world and interacting with it to form cognitive structures, not simply having an abstracted semantic network that only knows statistical language. This field of computational neuroscience is making slow but firm steps. Check out Stephen Grossberg for example.

  • @billames4367
    @billames4367 4 года назад

    It seems that thinking begins when a mind needs to solve a problem not previously solved. A major task for the programmer is to organize acts of problem-solving as single entities to eliminate duplication and support subsequent combining (as when thinking.) In young children, a problem solved, for example, is getting food from a bowl to the mouth using something other than fingers. Or drinking from a glass and not spilling it all over themselves. A first grader will have solved a lot of problems. By the time the child is thinking, combining solutions, infinitely more.

  • @dantedestefano1948
    @dantedestefano1948 9 лет назад

    It seems to me that if the brain's processes are deterministic, then it does indeed operate based on determinable algorithms. As our understanding of neuroscience progresses, we will be better and better able to model human consciousness via synthetic means (computer programming, robots, etc). Remember, we don't need AI to be exactly like the human mind - we want it to be at least capable of what we can do (but likely more), less all the flaws in reasoning we are constantly battling.

  • @billames4367
    @billames4367 4 года назад

    So, you have your first grade AI programmed? What did you do for the following: Gender, race, physical makeup, personality type, family (siblings?), pets, responsibilities, social status, home life, intelligence, traumatic life events so far, skills? All the other children have these parameters and have been engaged with them for 5 or 6 years. They have limited life experience and probably do not remember much detail but they were selectively programmed by environment and situation. If there are 30 in the class they probably represent many combinations and even though they are different from each other they do not yet know how different they are from each other.

  • @Preda.Y
    @Preda.Y 7 лет назад

    Assuming a true artificial mind cannot be constructed, it stands to reason that effort could be put into simulating one - that is, creating something that through algorithmic processes gives the appearance of something non-algorithmic. Which begs the question: how accurate must a simulation be before the distinction between it and the real thing becomes irrelevant?
    This of course rules out non-digital-computing methods of achieving AI, such as growing a huge brain inside a lab and training it to manage infrastructure or something - a bio-AI, so to speak.

  • @gaebren9021
    @gaebren9021 5 лет назад

    The biscuit buying would be decision making as specified by emotion. Like, "I like these biscuits."

  • @p3tr0114
    @p3tr0114 8 лет назад

    If my memory serves Searle's argument is that there _is no way of knowing if the man in the room understands_ not that the man does not understand.

  • @TheGoodMorty
    @TheGoodMorty 8 лет назад

    Searle's argument breaks down when you realize that the books of instructions rely upon someone who actually understands the Chinese language well enough to write the rule book, and that is where the true understanding lies. And when the man learns the language by getting a job in a restaurant, that is equivalent to the people writing the rule book teaching the man Chinese - like when you install Rosetta Stone software to learn a language. All you have to do is teach the computer language the way we teach people, through direct experience; so we just need to simulate the interactions between cells in the brain in order to make humanlike intelligence, and then use what we learn to improve on things.
    But even if strong computer AI were fundamentally impossible, then we'd need to pump that research money into growing larger, more powerful, and intelligent biological brains, cuz that's apparently the only place where thoughts and consciousness can happen :/

  • @davidbjacobs3598
    @davidbjacobs3598 8 лет назад

    Regarding the Chinese system:
    Humans learn to speak through very similar means. We listen to others talk and are able to infer meaning through context, etc. It takes a while, but eventually we're able to start forming original sentences.
    Of course, we also have the advantage of our other senses. If someone points to a dog and says "dog", it's pretty useful in deciphering the meaning. Despite this, however, those blind since birth are clearly still able to learn the English language. Can the same be said of someone born without sight, taste, smell, or touch? There's probably some research on this I'm too lazy to look up, but I would think so.
    Therefore, the man in the Chinese machine should eventually be able to learn Chinese.

  • @RelemZidin
    @RelemZidin 9 лет назад

    Even if a brain is not/more than a Turing machine, I don't see how that affects whether or not we can build an artificial brain. What matters is understanding the brain.

  • @deckmar
    @deckmar 9 лет назад

    How about this: Is the existence of intelligence and the interactions of a person confined to the brain and body? (yes) Are the brain and body made of matter that follows physical laws that we understand? (yes) Can a huge amount of matter (such as lots of neurons) be simulated by a computer? (yes) If so, then a computer can simulate an intelligent brain - although that would be a very inefficient way to do it - but possible with large enough computing power.
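    At its very smallest, the "simulate the matter" step looks something like this sketch of a single leaky integrate-and-fire neuron (Python; the constants are arbitrary, and a real brain simulation would need billions of these plus all their connections):
      def simulate_neuron(input_currents, threshold=1.0, leak=0.9):
          v, spikes = 0.0, []
          for current in input_currents:
              v = v * leak + current        # membrane potential decays and accumulates input
              if v >= threshold:
                  spikes.append(1)          # fire a spike...
                  v = 0.0                   # ...and reset
              else:
                  spikes.append(0)
          return spikes
      print(simulate_neuron([0.3, 0.4, 0.5, 0.1, 0.9]))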

  • @ianyboo
    @ianyboo 9 лет назад +13

    At a certain point, doesn't it just not matter anymore?
    What I mean by that is: if we have an advanced computer system that can interact with us, answer just about any question we ask it, create works of art, and generally do just about anything a human can do just as well if not better, it seems sort of pointless to declare that the computer system is just running a program after it's done all that. At that point we might as well turn the finger back on ourselves and say we too are just running programs.

    • @EmilianoHeyns
      @EmilianoHeyns 9 лет назад

      +Ian G So should shutting such a computer down then be treated as a murder?

    • @ianyboo
      @ianyboo 9 лет назад +2

      Emiliano Heyns Yes, if it's as I described above. (I'm assuming you mean "shutting down" as in "turning off permanently" here; if you just mean "turning off then on again" I don't see an issue, if the computer is content with the reboot.)

    • @EmilianoHeyns
      @EmilianoHeyns 9 лет назад +4

      Fair enough -- was just wondering where you stand on this. The reboot scenario would still have to count as assault, but from your response, I figure you'd be fine with that.

    • @Trollitytrolltroll
      @Trollitytrolltroll 9 лет назад

      +Emiliano Heyns Made me laugh with that assault thing.
      The end result can take different routes, like someone speaking a language natively or mentally going through the grammar. Perhaps a computer "acting" human would not think exactly through the same processes we do, but the output would end up the same if made to do so.
      I guess if threatening such a computer at knife-point (or whatever can pierce its rigid metallic shell) would generate what seems to be fear from it, and that it responds to all stimuli as a human, the rest is effectively redundant in a certain sense.
      However, it would create a whole new field of study of this new "brain". Perhaps different wounds or defects would produce entirely new symptoms. Whereas a human may lose their inhibition, maybe this specific machine would want to turn everything into chairs and self-replicate.

    • @EmilianoHeyns
      @EmilianoHeyns 9 лет назад +1

      I wasn't kidding about the assault thing. If someone painlessly brought you into a coma, kept you there for a week, and then brought you back with no physical damage, I bet you'd still call that assault. If someone tortures a human being but leaves no physical trace, you can't just say nothing happened.

  • @beauw9454
    @beauw9454 9 лет назад

    I heard an interesting question/idea once about consciousness. Does the brain produce consciousness or is the brain a conduit for consciousness? I heard it in the context of whether or not there is a soul/mind beyond the physical body, but I think it has implications on AI design as well. If you're trying to design a generator but using a radio as your blueprint you may not have fruitful results.
    If indeed the mind does not wholly reside within the brain, then the creation of AI would not be entirely about processing power, 'programs', and sensory inputs. In order for a system to comprehend and understand, repetitions would not be enough; we would need to also design a consciousness 'tuner'.

  • @venumeagle4264
    @venumeagle4264 9 лет назад

    Great video! As a uni student I find your videos really informative! I would enjoy going further into the nature of viruses as an extension of this debate on AI, if for example you wanted to put forward contemporary, provocative ideas. We not only have viruses in 'nature' but also in computers. I haven't gone into the exact recent study, but I believe it was published in the journal Science Advances, putting forward genetic evidence that viruses are in fact alive. This could be an interesting debate when we link it back to what is involved in classifying something as 'alive', and then back to AI and computer technology.

  • @vishmonster
    @vishmonster 9 лет назад

    My hunch is that because software and minds run on such different substrates, i.e. silicon and meat, even if there is some process that computers can do that is analogous to thought (mind-ing), it will not and cannot be the same...

  • @lockvirtompson5287
    @lockvirtompson5287 9 лет назад

    On the Dreyfus argument: you are imagining a simple program. It is simple (in this context) to make a weighted program that weighs different conclusions against each other, like:
    "I want to eat cookies" is weighted for: eating cookies -> good feeling;
    and: eating cookies -> will stave off the bad feeling of being hungry.
    But a different conclusion might weigh higher, like:
    "Getting close to fire": being inside fire -> very bad feeling; being burnt -> hurts for a long time and may hinder functionality.
    And a different conclusion might be.
    Burned food -> Taste bad.
    Burned food -> Unhealthy.
    If we have a sufficiently large pool of these kinds of conclusions and weights, great emergent behaviour will appear. This can then be combined with other functions that change the weights of the conclusions, form new ones, and optimize closely related conclusions. This is just a simple example, but I think it is enough to show Dreyfus' argument is not very strong.
    Minds are being created every day, and unless there is some divine force (or other such thing) involved in the process, the mind is created in the womb from materials in the environment, and those physical materials interact in a way that makes the mind appear. Even if it is not practically possible to make the same appear outside the womb, I would say that it is at least theoretically possible. And if we can theoretically do this, is it the case that only biological materials can be arranged in such a way that a mind appears? Sounds strange to me. But still - are we not all artificially created? Man-made (woman-made).
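    A bare-bones sketch of that weighting idea (Python; every weight is invented): "don't buy the burning biscuits" needs no special-case rule, it just falls out of the sum.
      conclusions = {
          "eating cookies feels good":          +2,
          "eating cookies staves off hunger":   +1,
          "being near fire is very bad":       -10,
          "burned food tastes bad":             -2,
      }
      def decide(option, active):
          score = sum(conclusions[c] for c in active)
          return f"{option}: {'do it' if score > 0 else 'avoid'} (score {score})"
      print(decide("buy biscuits", ["eating cookies feels good", "eating cookies staves off hunger"]))
      print(decide("buy burning biscuits", list(conclusions)))   # the fire conclusion outweighs the rest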

  • @JeffreyPeckham_abic
    @JeffreyPeckham_abic 9 лет назад

    It feels like there's also an oversimplification of algorithms and their purpose here. Ultimately we write code and build machines to do things we'd rather not have to do or aren't very good at. When I write code, often, I want it to function very predictably. I'll pause a process, inspect its state and guess what it will do next, and expect it to do that, since that's the desired characteristic of it. However that type of predictability is not always desired, such as in shuffling a deck of cards for online poker. And when we look at large scale complex systems like search engines and image recognition systems (see www.businessinsider.com/these-trippy-images-show-how-googles-ai-sees-the-world-2015-6 to have some head explosion), we see systems that are designed to adapt, and while they are deterministic, at that level of complexity I would not call them predictable.
    As for will we be able to build a computer that can act like a mind? I think we're limited by thinking of computers in only their current architecture. It might be more interesting to ask if humans can ever build something physical that can act like a mind. Unless you're a dualist you'd have to say that the brain is already a physical thing that can think, so there is a physical process that exists to make a mind. I think we could, given an extraordinary amount of effort, simply build a brain and feed it experiences and give it an ability to interact with the world, but would we even call that a computer at that point? And would we even want to build a brain? We already have 7 billion of those running around. Wouldn't we rather have something that worked differently?

  • @ivanclark2275
    @ivanclark2275 9 лет назад +3

    Our brains totally do use algorithms and systems. If you want to get the best biscuits, but they're on fire, another rule takes over that says "get best biscuits for money UNLESS it's too dangerous or inconvenient." Every thought we have is a mechanical response produced by the firing of neurons in a very complex pattern. There isn't any magical quality outside of our brains that makes the firing of neurons more than the firing of the neurons of an ant, or the processors of a computer, our brains are just more complexly interconnected.

  • @Dramscus
    @Dramscus 9 лет назад

    There's a branch of AI programming that uses evolutionary techniques to produce interesting, albeit fairly basic, results. I think that eventually stands as the most likely area to develop proper AI: once people have almost no input into the system a program develops, it could produce some interesting results, given complex enough goals.
    This subject will only get more interesting with developing cybernetic technologies, which will start to blur the line between sentience and AI.

  • @unvergebeneid
    @unvergebeneid 9 лет назад

    I'll try to reply here to your response or lack thereof at the end of the next video. Hope that's not too confusing. I'd do it in the right thread but YouTube is broken so you'd probably never see the comment.
    Anyway, while my comment about artificial neural networks indeed was written with a software simulation of such networks in mind and therefore your argument of "but it's still only a simulation" bears some relevance, there are also implementations of such networks that are done in actual hardware. So then the question really becomes: does it matter that the exact same processes happen in lipids and proteins in one case and in silicon in the other? Do we really need biological neurons to give rise to qualia and consciousness? And if so, why? What makes them special? How much could we change before the magical properties of neurons are lost? There are slight variations in how neurons are built in different animals yet the notion that at least higher animals don't have feelings is not very popular these days.
    There are no clear answers to this because nobody knows but I hope I could convey that just saying "oh but this computer stuff is just a simulation while neurons are the real deal" doesn't carry a lot of weight. And the kinds of arguments Dreyfus was making were true at the time but have no bearing on artificial neural networks (which AFAIK he acknowledges himself, he just seems to be happy that rule-based artificial intelligence is no longer a thing).

  • @simplylinn
    @simplylinn 5 лет назад

    I know this is an old video, but I think humans can be seen as operating in accordance with a set of rules - what in the field of AI is often called a "utility function". It's basically what, fundamentally, decides whether an agent will choose to do X or Y given some input.
    Even though it is unknown whether humans have a proper utility function - and even if one exists, its exact nature is unknown - in a lot of ways humans act as IF they have a utility function.
    If we allow ourselves to assume that we do have utility functions, we can take a look at the biscuit example in a new light:
    The reason you ended up in the store looking for biscuits, is probably not because your ultimate goal in life is to eat biscuits. But your ultimate goal in life, whatever that may be, tells you that, by taking the step of eating biscuits, you're going to be in a more advantageous position relative to that goal than not. It also tells you that, since money is a resource you should be conservative about, in order to be able to get into even more advantageous position relative to the ultimate goal of the utility function in the future. So the intermediate goal could be expressed as "Get biscuits if the value of the biscuits as defined by the prediction of the utility function is not below the cost of what the utility function has deemed a maximum amount you're willing to spend on biscuits in order to not be in a more disadvantageous position relative to the utility function"
    In this case, biscuits on fire would have a negative value, and not be eligible for purchase, ants would have a similar effect. Since it can be reasonably assumed that the biscuits value, according to the utility function, at least has "edibility" as one pretty important parameter.
    AI is, to some extent, able to follow a utility function, and show behaviours that are adaptive and reminiscent of ways we'd think a creative intelligence would approach the goal we defined. If we tell an AI to, say, get as high of a score in an old NES game as possible, it can start experimenting with inputs and interpreting sensory data, eventually learning what actions gives points, and maybe even learn how to manipulate the random number generator of the NES, because it has managed to intuitively discover how the sequences of input is used by the underlying system as entropy for the random number generator. That's a pretty intelligent and creative strategy if your ultimate goal is to have as many points as possible.
    The problem isn't so much that we're unable to create systems that can optimize their actions towards a specific goal and do so in an intelligence-like fashion, the problem is to define something that means to the system what it means to us humans, encoding concepts like "beauty" or "morality" into the evaluation of world states you're trying to optimize is the hard part. It's not the state-to-state instructions that is the current problem here, an AI with a utility function wouldn't crash in the store buying biscuits, but if the AI doesn't have the same definition of "value" as you do, it's goals and yours are not aligned, and that's where the dangers of powerful AI comes in. Philosophy about AI, and AI safety is an area I think you would find highly interesting, because the definition of AI you used in this video is very outdated, and the new questions we have today is a blast to ponder.
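    A minimal sketch of action selection under a utility function (the actions, scores and budget here are invented; real agents learn these values rather than having them listed):

        # Toy utility-maximising chooser: score each available action and pick
        # the best.  If the utility function's notion of "value" differs from
        # ours, it will happily pick actions we never intended.
        actions = {
            "buy fresh biscuits": {"edible": True, "price": 1.5},
            "buy burning biscuits": {"edible": False, "price": 0.5},
            "buy nothing": {"edible": False, "price": 0.0},
        }

        def utility(outcome, budget=2.0):
            if outcome["price"] > budget:
                return float("-inf")          # hard constraint: never overspend
            value = 10.0 if outcome["edible"] else 0.0
            return value - outcome["price"]   # edibility matters a lot, cost a little

        best_action = max(actions, key=lambda a: utility(actions[a]))
        print(best_action)                    # -> buy fresh biscuits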

  • @fl00fydragon
    @fl00fydragon 6 лет назад

    The brain is a computer
    But it's unique in that it sits somewhere between digital and analog in terms of how its impulses fire (some suggest it's effectively a 27-digit digital system).
    In terms of connectivity, however, neurons are wired together so that they form logic gates similar to those we see in computers.
    This means there are, for example, a lot of "if"-style gates in our brains.
    If we're to speak of mind uploading/digitization, though, it may well be unfeasible, as the uploaded mind would not be the original person.
    HOWEVER, if you were to replace someone's brain cell by cell with nanomachines over an extended period of time, the person would remain the same despite becoming 100% synthetic.
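    A minimal sketch of the logic-gate idea: a single threshold "neuron" (the weights and thresholds here are chosen by hand) can already implement AND and OR over binary inputs:

        # McCulloch-Pitts-style threshold unit: fire (1) iff the weighted
        # sum of inputs reaches the threshold, otherwise stay silent (0).
        def neuron(inputs, weights, threshold):
            return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

        def AND(a, b):
            return neuron([a, b], weights=[1, 1], threshold=2)

        def OR(a, b):
            return neuron([a, b], weights=[1, 1], threshold=1)

        for a in (0, 1):
            for b in (0, 1):
                print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))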

  • @ronjohnson4566
    @ronjohnson4566 9 лет назад

    Interesting... I teach a representative painting class and the main concept I try to get across is "general to specific". I'm no scientist, but here's an attempt. The human organism, or maybe all sentient organisms, collects data and stores it in the brain; our brain either grew for that reason or because of it. Many of our responses to our environment are stored in our brains from day one of our lives, and this data is retrieved when needed to deal with whatever is going on. Here is how I think it works: the senses notice thousands of events, push most of them back because they're unimportant to the situation, and then decide on the specific thing or things to do. Overall, general to specific. The more general data you can retrieve, and the better your particular "you" is functioning (how healthy it is), the better your chance of achieving your goal.
    So, if I'm anywhere close to the truth, it seems to me that computers can't become a bad seed. Unlike man, computers had a creator; their responses can only be what the programmer introduced into the system. One more time: we sense something... smell, see, feel, taste or hear. We may have a reason to choose one or more to be interested in, or to need, or not need. Our brain analyzes and leads us in that direction of interest. I think a computer would want the answer, but we may be distracted by something more interesting, or just give up, or forget what we needed to know about the stimulus anyway... That seems to happen to me more and more these days. I don't think you will ever find a computer wandering around a field being invigorated by the beauty of the day... the smells, the wind, the clouds, the feel and touch of grass... oops, there's a cow pie, better not step in it or my programmer will be pissed!

  • @Sam_on_YouTube
    @Sam_on_YouTube 8 лет назад

    Part of the problem here is that the mind is not like a linear computer, like the dictionary in Searle's case. Parallel processing is very different for deep mathematical reasons that I can't go into here, but that's how the brain works (in part).

    • @Sam_on_YouTube
      @Sam_on_YouTube 8 лет назад

      Actually, the weather is a good example. You cannot predict the weather algorithmically for precisely the same reason our mind can't be understood by reference to an ordered set of rules... chaos theory.
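      A minimal illustration of the sensitivity chaos theory refers to, using the logistic map as a stand-in for the weather (a standard textbook toy, not a real forecasting model): two starting points a billionth apart end up nowhere near each other.

          # Logistic map x -> r*x*(1-x) in its chaotic regime (r = 4).
          # Tiny measurement errors blow up, which is what ruins long-range
          # prediction even though every step is perfectly deterministic.
          r = 4.0

          def step(x):
              return r * x * (1 - x)

          x, y = 0.200000000, 0.200000001
          for _ in range(60):
              x, y = step(x), step(y)

          print(abs(x - y))  # of order 1, not of order 1e-9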

  • @NeWincpp
    @NeWincpp 7 лет назад

    The Chinese Room argument is valid as long as you assume that the man inside the room can't learn. Put another way: when a baby is born it has no frame of reference and no knowledge of language, but (if it has a working brain, working ears and a working mouth) it will learn. That baby will end up able to talk even though it starts with less information than the man in the room (who at least has a reference language, since he can read the rules), and even though it is technically stuck in its own head and restricted to the inputs its body receives (like we all are). What the Chinese Room shows is not that the man will never speak Mandarin; it shows that as long as he sticks to the rules he won't understand. Which means the essence of mind is in the learning process.
    And actually I think today's AIs already have something that goes beyond "consciousness" as we understand it, because they can share experience and thought directly, without the lens of language or any other form of communication, so in a way they are "more conscious" than we are (this is one of the biggest reasons H+ people want to merge with the machine, but that's another story...). In addition, the Chinese Room (like you) assumes that the AI is the CPU, i.e. the computing part, but that's wrong: the AI is in the RAM, the memory. It's memories interacting with each other that make it intelligent, not the fact that it processes them. Likewise for you and me: the fact that I can process an image and see a face cannot generate my mind; the fact that I remember it can. Once again, the mind is in the process of keeping memories and being able to put them together, i.e. learning.
    An AI can have qualia (if we take qualia to be "a concept generated only by subjective reference and no other frame of reference"); in fact this is one of the things causing the most problems in deep learning research today. The intermediate steps of the learning process require generating concepts: for example, a deep learning algorithm built to detect cars in a photo will (probably) generate the concept of a wheel by itself, but today it's pretty hard to tell whether the concept it generated is right, because it's "personal" to the AI (see the sketch after this comment).
    And by the way, if a computer "represents with 1s and 0s", your brain "represents with dopamine and serotonin plus 5 or 6 other hormones". A neuron by itself isn't that complex; it's the big bag of neurons and other elements called the brain that is. And no, a neuron alone is not a mind, and a bunch of neurons aren't a mind either; a particular architecture of neurons generates a mind, and that's a very big point that you missed.
    Human decision-making actually does follow specific and specifiable rules; the difference you're pointing to here is 20 years of accumulating, packing, simplifying and modifying a lot of data, and I mean A LOT of data (hundreds of petabytes). The catch is that it's stored in a part of your brain called "System 1", so the decisions you make are not made entirely by the part of "you" that you think of as "you" (see dual process theory). It's as if someone with a lot more data is making the decision for you, based on the mountain of data you've accumulated over your life. And don't get it wrong: there is no "irrational" decision-making, only choices you are conscious of and choices you aren't. Neuromarketing is built on this; if today you "randomly" decide to buy one thing over another equivalent one, that's just your System 1 executing what the ads said, and you don't know it.
    By the way, it's actually not that hard to write a neural network that works like your brain's System 1 (i.e. generating a very fast decision that usually benefits you, based on previous experience). It's hard, but far from impossible, and it already exists if you restrict the environment/inputs enough.
    And yes, you can "crash" while making a decision: it's when you output "I don't know"...
    The weather actually is controlled by an algorithm...
    Given the definition of an algorithm, "a self-contained sequence of actions to be performed", physics produces the set of actions and they are executed through time (which makes them a "sequence").
    So I will finish on this: AIs like DeepDream and plenty of others already have a mind, already think, already have qualia. The question isn't whether they do; the question is when we grant them the status of being alive. Because, given the current state of AI development, you just cannot say "they aren't" without denying "living being" status to A LOT of the species on this planet (from single-celled organisms to insects). The real question about AI comes down to the very definition of life (with every right that goes with it), and we will need to answer it very fast or we will face a very big crisis in society (on the human side; the AI(s?) don't need us, we need them...).
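    A minimal sketch of the "self-generated concepts" point, assuming NumPy is available: a tiny network is trained on XOR (standing in for "detect cars"), and its hidden layer ends up holding intermediate features (standing in for the "wheel" concept) that nothing in the code named or defined in advance:

        import numpy as np

        rng = np.random.default_rng(0)

        # XOR data: no single linear unit can solve it, so the network is
        # forced to invent intermediate features in its hidden layer.
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([[0], [1], [1], [0]], dtype=float)

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
        W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

        lr = 1.0
        for _ in range(20000):
            h = sigmoid(X @ W1 + b1)             # hidden activations
            out = sigmoid(h @ W2 + b2)           # network output
            d_out = (out - y) * out * (1 - out)  # backprop of squared error
            d_h = (d_out @ W2.T) * h * (1 - h)
            W2 -= lr * h.T @ d_out
            b2 -= lr * d_out.sum(axis=0)
            W1 -= lr * X.T @ d_h
            b1 -= lr * d_h.sum(axis=0)

        print("outputs:", out.round(2).ravel())  # should usually end up near 0, 1, 1, 0
        # The hidden activations below are the network's own intermediate
        # "concepts"; nothing tells us in advance what each unit stands for.
        print(sigmoid(X @ W1 + b1).round(2))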

  • @danielstarkey9953
    @danielstarkey9953 9 лет назад

    So I know this isn't a directly philosophical approach, but neurons really are quite a bit like transistors. It's very easy to create basic networks yourself that take known inputs and give predictable outputs. And this makes sense even when thinking about the senses. The brain does things digitally: a neuron is either firing or it's not. It can fire at different rates, and those rates mean different things in different contexts to the system as a whole, but it's always discrete. Why can't we then understand our brain as a ludicrously complex network of parallel processing units running its own hardware-defined code? When we are born, it's hard to argue that we don't start with a few algorithmic rules before shifting to heuristics as time goes on.
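    A minimal sketch of the "fires or doesn't, but rate carries meaning" point: a toy integrate-and-fire unit (the threshold, leak and input currents are invented numbers) produces all-or-nothing spikes, yet stronger input shows up as a higher firing rate:

        # Toy integrate-and-fire "neuron": each step it either spikes (1) or
        # stays silent (0), but the spike *rate* tracks the input strength.
        def spike_train(input_current, steps=100, threshold=1.0, leak=0.9):
            v, spikes = 0.0, []
            for _ in range(steps):
                v = v * leak + input_current   # accumulate input, with leak
                if v >= threshold:
                    spikes.append(1)           # all-or-nothing spike
                    v = 0.0                    # reset after firing
                else:
                    spikes.append(0)
            return spikes

        for current in (0.05, 0.2, 0.5):
            train = spike_train(current)
            print(f"input {current}: {sum(train)} spikes in {len(train)} steps")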

    • @CompilerHack
      @CompilerHack 9 лет назад

      What if we use many, many high-speed processors for a purpose, say mining bitcoins. Would this network be said to have a distinct consciousness, at least of a primitive sort? Does it feel a higher degree of qualia when a new processor joins in?
      If I allow myself to get carried away:
      just as we have an undeniable will-to-life (check The School of Life's video on Schopenhauer) and yet feel happy/sad/other feelings about things completely unrelated to staying alive, could any sufficiently complex network of processors, like the Internet, be expected to have qualia for events unrelated to its own existence?
      Does a LAN feel something when it receives pings from another LAN searching for a third one?

  • @kafka9627
    @kafka9627 6 лет назад +1

    Whoa, as an autistic person I find the cookie example really interesting.
    With autism, that is my major problem in my everyday life.
    I can't adapt my decision-making in everyday life, like, at all; I have a breakdown. I have to give myself the algorithm beforehand or the trip to the shop will be a nightmare for me.
    Does this mean I am a computer?