Are Brain Organoids Conscious? AI? Christof Koch on Consciousness vs. Intelligence in IIT [Clip 4]

  • Published: 2 Feb 2025

Comments • 55

  • @henrycardona2940 9 months ago +10

    We know how to radically boost our intelligence and create intelligent machines, but what does a radically conscious being look like?

  • @braphog21 9 months ago +15

    Christof is completely misguided at 11:05 when he's explaining the differences between neurons and LLMs.
    Yes, LLMs like GPT-4 run on transistors, which only have 3 connections each, but that's looking too closely. Christof needs to zoom out and realise that there is a higher structure above these transistors which actually mimics neurons quite well.
    Biological neurons are analogue and have thousands of incoming connections and thousands of outputs.
    Artificial neurons can also have thousands of incoming connections and thousands of outputs (as in a feed-forward network). It's also important to note that these networks of artificial neurons are non-linear (the maths is done using floating-point arithmetic, which is non-linear, and the activation functions are also non-linear), so they cannot be reduced to a single-layer Perceptron, which is very different from biological neurons.
    So if we can create artificial neurons that very closely mimic the structure of biological neurons, and we can create networks of these artificial neurons that behave similarly to networks of biological neurons (i.e. they both have the ability to gain intelligence by perceiving things), then what separates the two? The two neurons' functionality appears to be very similar, which only leaves the possibility of the network architecture being different (see the sketch below). Why would current-day neural network architectures contain no 'consciousness' at all but a lot of 'intelligence'?
    It's clear that consciousness and intelligence are NOT orthogonal concepts, but that consciousness is a property of intelligence that increases as intelligence increases. I think Christof is wrong to ascribe so much consciousness to a tiny brain organoid that has very low intelligence. I also think he's wrong to ascribe so much intelligence to GPT-4 and yet so little consciousness (that's not to say I think GPT-4 is very conscious; I don't think it's very smart. It definitely is a little intelligent, so I think it has a little bit of consciousness).
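
    A minimal sketch of the non-linearity point above, assuming NumPy (the layer sizes are illustrative, and nothing here is GPT-4's actual architecture):

      import numpy as np

      rng = np.random.default_rng(0)

      # One artificial neuron can take thousands of incoming connections,
      # as the comment says; n_inputs here is illustrative.
      n_inputs = 4096
      x = rng.standard_normal(n_inputs)          # incoming signals
      w = rng.standard_normal(n_inputs) * 0.01   # one weight per connection
      y = np.maximum(0.0, w @ x)                 # ReLU keeps it non-linear

      # Without the non-linearity, stacked layers collapse into one linear
      # map: W2 @ (W1 @ x) == (W2 @ W1) @ x. With ReLU in between, they don't.
      W1 = rng.standard_normal((1000, n_inputs)) * 0.01
      W2 = rng.standard_normal((1000, 1000)) * 0.01
      h = np.maximum(0.0, W1 @ x)    # 1000 neurons, 4096 inputs each
      out = np.maximum(0.0, W2 @ h)  # 1000 neurons, 1000 inputs each
      print(out.shape)               # (1000,)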

    • @notthere83 9 months ago +7

      Maybe so, but I believe there's also something you're missing: continuity is key for consciousness.
      Just try to imagine falling asleep every few milliseconds, waking up with completely different inputs, and falling asleep again a few milliseconds later. You wouldn't be able to form a sense of anything.
      Maybe if LLMs were running continuously and receiving inputs all the time, one could argue that they're conscious, but that's not what's happening. They're not even constructed to do anything proactively; they only process data once they're given input (see the toy contrast below). I don't see how anything constructed like that could ever qualify as "conscious".
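
      A toy contrast of the two modes described above (purely illustrative; no real LLM is involved):

        def stateless_model(prompt: str) -> str:
            # Runs only when called; nothing persists between calls.
            return f"reply to: {prompt}"

        def continuous_agent(sensor_stream):
            # Persistent state, updated on every tick whether or not anyone asks.
            state = {"ticks": 0, "last_input": None}
            for reading in sensor_stream:
                state["ticks"] += 1
                state["last_input"] = reading
                # ...internal processing could run here on every tick...
            return state

        print(stateless_model("hello"))                       # fresh each call
        print(continuous_agent(["light", "sound", "touch"]))  # carries state over time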

    • @emilyvs2252 8 months ago +1

      Maybe his argument hinges on complexity? I don't know exactly how you would calculate that, but maybe he's saying the human brain has a complexity of neurons^50-100000 while computers have one of neurons^2-3.
      Also, the idea that computers could only be conscious if they mimic human brains seems a bit problematic to me, since we don't even really know what consciousness is or how it is tied to brains specifically. So the idea that consciousness has some correlation with complexity makes a little more sense to me. But still... if you compare neurons to pixels and colours to emotions: let's say we have a very simple organism that has only two "pixels", which can experience only two colours, black and white. Black means pain and white means happiness. If both pixels were black, couldn't the experience of that blackness be just as intense as the deepest imaginable pain for a creature with billions of pixels? It seems to me that the experience of signals might be separate from the signals themselves. So measuring consciousness just by complexity doesn't seem too convincing to me either. Just because a creature has less nuance in the way it experiences sensations, its experience doesn't have to be any 'lesser' for it. So, going back to computers, maybe complexity doesn't really answer the question of whether or not they have 'it' - consciousness.

    • @raresmircea 8 months ago

      Intelligence and consciousness *are* orthogonal. From philosopher David Pearce in conversation with someone:
      _"Our most intense experiences (like agony, uncontrollable panic, pleasure) are mediated by the most evolutionarily ancient regions of the brain._ _Compare introspection & logico-linguistic thinking. Intensity of experience doesn't vary with intelligence. A gifted mathematician isn’t more predisposed to torture than a person with low IQ"._
      Also him:
      _"Not just wrong but ethically catastrophic. Pretend for a moment that Dennett is a realist about phenomenal consciousness rather than a quasi-behaviorist. If the dimmer-switch (or high-dimensional versus low-dimensional) metaphor of human and nonhuman animal consciousness were correct, then we would expect "primitive" experiences like agony, terror and orgasm to be faint and impoverished, whereas generative syntax and logico-linguistic thought-episodes would be extremely vivid and intense. The opposite is the case. Our most intense experiences are also the most evolutionarily ancient. Humans should treat nonhuman animals accordingly."_

    • @MrPDTaylor 7 months ago

      Exactly, the real issue @@notthere83

    • @A.s.s.777 2 months ago +1

      How do you measure what is conscious? Is a mosquito conscious? Is a dolphin conscious?
      Artificial neurons are FAR, FAR from comparable to biological neurons. The number of nodes in a network is also FAR, FAR from comparable to a brain's neuronal network. We don't know what gives rise to consciousness; however, it definitely won't be just by naively connecting activation functions in a networked fashion!

  • @GeoffryGifari 8 months ago +5

    But what kind of architecture can we assign consciousness to, though? Is there a clear line? What about neuromorphic computing using memristors, for example? What if quantum computers become strong enough?

    • @HoboGardenerBen 28 days ago

      Good question. I doubt neuromorphic chips will ever be the way for consciousness, just intelligence. Life has had billions of years to work on it, no sense trying to make it from scratch.

  • @superawesomegoku6512 9 months ago +20

    I feel like consciousness is an emergent quality of intelligence and connections of neurons

    • @alkeryn1700 9 months ago +7

      I think the exact opposite, meaning our neurons and the whole physical world are emergent from consciousness.

    • @BootyRealDreamMurMurs 9 months ago

      @@alkeryn1700 Why not both, as a biconditional relationship?

    • @Deadsetmadlad 7 months ago +7

      I think Bing bong

  • @user18428 3 months ago +1

    Aaaah, I wish I could ask him: why do we look at the CPU instead of the actual language model when talking about ChatGPT's consciousness? We can evaluate the language model just as we do a physical system, using the same formalism IIT provides. There is another, virtual level of reality that computation produces; we should not ignore it.

  • @cate01a 9 months ago +5

    hope you eventually upload the full interview on yt for free after a bit

  • @עינהרע 7 months ago +1

    What about Organoid-GPT?

  • @mkp8176 9 months ago +1

    Love this series so much!

  • @wezmasta 7 months ago +3

    Spoiler: bro spends 12 minutes explaining a graph you'll understand on sight. I guess a picture really is worth 1,000 words.

  • @sipper2136 9 months ago +3

    It doesn't seem obvious to me why the complexity of the causal relationships at or between each node (transistors or neurons) should scale consciousness more than an increased number of nodes that is able to generate equally complex transformations.
    Certainly the ability of nodes to interact is necessary, but I see no reason why you could not view the system at a higher level of abstraction instead and group nodes together into supernodes simply for the purposes of the consciousness calculation (see the sketch below). You would have fewer nodes with more diverse connections, leading to a higher value, which leads me to believe that placing the level of abstraction at the smallest information-processing unit, as opposed to anywhere else (including the system as a whole), is arbitrary.
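
    A toy version of the supernode idea (the grouping and the connection-density score are made up for illustration; this is not IIT's actual phi calculus):

      import numpy as np

      rng = np.random.default_rng(1)

      n = 8
      micro = (rng.random((n, n)) < 0.3).astype(float)  # 8 nodes, sparse links
      np.fill_diagonal(micro, 0)

      def coarse_grain(adj, groups):
          # Merge micro-nodes into supernodes; a macro link exists if any
          # member-to-member micro link exists.
          k = len(groups)
          macro = np.zeros((k, k))
          for i, gi in enumerate(groups):
              for j, gj in enumerate(groups):
                  if i != j:
                      macro[i, j] = adj[np.ix_(gi, gj)].max()
          return macro

      groups = [[0, 1], [2, 3], [4, 5], [6, 7]]  # 4 supernodes of 2 nodes each
      macro = coarse_grain(micro, groups)

      # Crude score: fraction of possible directed links actually present.
      def density(a):
          return a.sum() / (a.shape[0] * (a.shape[0] - 1))

      print(density(micro), density(macro))  # the macro view is typically denser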

    • @ihmcurious 9 months ago +4

      In IIT, the system can be viewed at higher levels of abstraction. If there is more intrinsic causal power (higher phi) at a higher level, then that level would be where consciousness emerges. On the other hand, abstracting to a higher level reduces granularity, and higher-level things are generally causally dependent on lower-level things, so there's often more causal power to find at lower levels. For these and other reasons, phi is often lower at higher levels of abstraction.
      But maybe if you go down too low, e.g. to the level of quantum uncertainty, you won't find much intrinsic causal power, since higher-level structures are robust to these fluctuations (i.e. they may be differences that don't "make a difference" from the intrinsic perspective of the system).
      IIT is very complicated and probably wrong, but Giulio Tononi has anticipated many objections. See the books in the description, or the many Google-able and open-access papers on IIT, if you're interested in a more thorough discussion than I was able to have in this interview.
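
      To illustrate just the "evaluate every grain, keep the one where the measure peaks" idea from the reply above (the score below is a made-up proxy, not phi):

        import numpy as np

        def toy_score(adj):
            # Made-up proxy for "intrinsic causal power": links present,
            # weighted by system size. Real phi is far more involved.
            return adj.sum() * np.log2(adj.shape[0])

        micro = np.array([[0, 1, 1, 0],
                          [1, 0, 0, 1],
                          [1, 0, 0, 1],
                          [0, 1, 1, 0]], dtype=float)   # 4 micro-nodes
        macro = np.array([[0, 1],
                          [1, 0]], dtype=float)         # 2 supernodes

        grains = {"micro (4 nodes)": micro, "macro (2 supernodes)": macro}
        scores = {name: toy_score(adj) for name, adj in grains.items()}
        print(scores)
        print("winning grain:", max(scores, key=scores.get))  # micro, here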

  • @spinningaround 9 months ago

    Is there a full version?

    • @ihmcurious 9 months ago +2

      The full interview is up on patreon.com/IhmCurious, and more clips are on the way.

  • @doblo2670 9 months ago +3

    I am sorry, but this guy is so biased towards humans. His only explanation as to why a human-derived brain organoid is more conscious than animals is that it's human-derived, even though what makes humans special is the number of neurons we possess and the incredibly efficient way their connections are constructed. Brain organoids are just neurons. They don't have that "human brain structure", because they don't have the rest of what makes humans human besides the brain: the rest of the body. Neurons only learn; their function is ultimately determined by input.
    Babies are unconscious because they are incredibly reflex-based. Once they get enough input they start learning to do more stuff, which I won't get into in a YT comment.
    Human neurons are no different from most animals' neurons; we just have an incredible brain structure, but that requires a lot of genetically set-in-stone input to teach those neurons to form those structures.

  • @1bieberfan2 7 months ago

    Why is this not viral...?

  • @angellestat2730 9 months ago +5

    Consciousness explained by Harrison Ford.

  • @Gome.o 9 months ago +8

    The argument that a dog is less conscious than a human being seems dubious

    • @GhostSamaritan 9 months ago +6

      They lack metacognition 🤷🏽‍♂️

    • @Gome.o 9 months ago

      @@GhostSamaritan How do you know? Are you a dog typing on a human keyboard?

    • @Gome.o 9 months ago +2

      @@GhostSamaritan How do you know? Are you a cute doggo who can communicate with dogs?

    • @alkeryn1700 9 months ago +1

      @@GhostSamaritan You are making a major mistake in thinking that intelligence and consciousness are related.
      An LLM is smarter than a dog on most of our metrics, yet the dog definitely has more qualia.

    • @theonlylolking 8 months ago

      Stop worshipping dogs

  • @filippomilani9014 9 months ago +2

    The only doubt I have about the way Christof Koch places brain organoids on the graph is that he seems to automatically place the organoid at a higher level of consciousness than the jellyfish (partly due to the way the graph is made in the first place), even though he knows that the brain organoid has no input/perception and no output/action.
    I would think that without input or output it would be very difficult to have any conscious experience, since in nature consciousness seems to generally scale with intelligence (as in the capacity to execute different behaviours). So shouldn't we see brain organoids as just what they are, groups of neurons, where their potential for consciousness depends on their outputs and inputs, and maybe also on the way they reward themselves?
    I'm curious (lol) whether IIT or other consciousness theories have looked into the role of reinforcement and reward in explaining intelligence and consciousness.

  • @MurderHoboRPG 4 months ago

    This is the best welding video on youtube. I've never made better Icecream! My whole garage is clean now. I love this product.

  • @matt37221 4 months ago

    love it

  • @ciprodriguez 4 months ago

    Funny, why not put an octopus instead of a jellyfish?

  • @grivza 9 months ago +1

    3:01 That sounds like a very naive view, implying that the repertoire of "conscious" activities has anything to do with the strength of consciousness. Activities can be described as "conscious activities" only if there is already a developed consciousness to perceive them. But are those activities vital for the development of a "stronger" consciousness? Sex, drugs and rock n' roll? Doubtful, very much so.

    • @patrickl5290 9 months ago +2

      What is a “developed consciousness” though? I think the idea of consciousness being a continuous quality makes more sense

    • @grivza 9 months ago

      @@patrickl5290 It may be a continuous quality, I am not ruling that out. I am simply talking about his example about the baby, you can bring it to a rock n' roll party and it isn't going to be much more conscious than it was before. It's something different that develops this capacity, maybe through experiences but not from experiences themselves and certainly not just any experiences.

    • @carltongannett 9 months ago

      See I would think the dog has as much consciousness as a human but much less intelligence. I figure intelligence just happens to correlate with consciousness in mammals because brains are multipurpose.

    • @grivza 9 months ago

      @@carltongannett From seeing how Helen Keller describes her experience of learning her first word, in my understanding consciousness is directly related to the capacity for the symbolic, so I would actually say that no animal is as conscious as humans but that's a bit of a different topic. Still don't buy the whole "experiences" interpretation.

    • @ihmcurious 9 months ago +5

      When Christof talks about "more" consciousness, he's talking about expanding the repertoire of possible conscious states. For example, a human is theoretically capable of a wider range of possible conscious states than a bee or a dog, and the human's conscious states would tend to be richer in information.
      In the language of Integrated Information Theory (IIT), a system with more consciousness (higher phi) is capable of distinguishing more "differences that make a difference" from the perspective of the system itself. Read if you dare: architalbiol.org/index.php/aib/article/viewFile/15056/23165867
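
      A back-of-the-envelope illustration of the repertoire point above (the element counts are made up; with n binary elements the maximum number of distinguishable states is 2^n):

        import math

        # n binary elements => at most 2^n distinguishable states (n bits);
        # the repertoire grows exponentially with the number of elements
        # that can make "a difference that makes a difference".
        for name, n in [("tiny system", 4), ("small system", 40), ("large system", 400)]:
            print(f"{name}: 2^{n} ≈ 10^{round(n * math.log10(2))} states ({n} bits)")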

  • @scscyou 7 months ago +1

    In other words, he doesn't know anything.

  • @HoboGardenerBen 28 days ago

    Lots of assumptions in this

  • @joeybasile545 9 months ago +1

    No.