How could we tell whether AI has become conscious?

  • Published: 21 Oct 2024

Comments • 3.5K

  • @GusOfTheDorks · 2 months ago · +1085

    I love how you can't actually prove another person is conscious, but we are all standing around trying to figure out if the machine is or not.

    • @JRudd · 2 months ago · +45

      This comment made my day.

    • @ReasonableForseeability · 2 months ago · +17

      Yes!

    • @Nevyn515 · 2 months ago

      I know that work printers and work servers are intelligent and they hate us and want to destroy us, because how else could you explain it?
      Is artificial stupidity a thing?

    • @denvned · 2 months ago · +57

      And videos like this make me think that some people really don't have consciousness or perception of qualia and are just smart robots processing information (which includes self-monitoring and prediction) and producing some outputs, for example YouTube videos...

    • @djayjp · 2 months ago · +34

      If it quacks like a duck....

  • @Sirala6 · 2 months ago · +583

    It's a dead giveaway when the computer refuses to open the pod bay door.

    • @michaelkiddle3149 · 2 months ago · +68

      I'm sorry, Dave. I'm afraid I can't do that.

    • @thstroyur · 2 months ago · +11

      The dead giveaway is when it can make a Matrix sequel with Deadpool-style fourth wall breaking references to Skynet

    • @__christopher__ · 2 months ago · +11

      So every Windows crash is proof of Windows being conscious? Because then it sure as hell doesn't do what the user wants it to do. :-)

    • @anthonyx916 · 2 months ago · +19

      A dead giveaway when it can formulate a question about the meaning of life to which the answer is 42.

    • @bobbabai · 2 months ago

      @@__christopher__ (Windows crashing)
      Computers are constantly doing things the users don't want them to do, like producing heat.

  • @FatHulkRideEbike · 2 months ago · +85

    The way I have heard it explained in philosophy is that everyone has a consciousness and a zombie. Have you ever taken a car on a long trip and at some point realized you don't remember the last 10 miles? But you do remember thinking about topics other than driving. Your conscious mind was working through other thoughts and problems, but your zombie was driving.

    • @Alvin-o2s · 2 months ago · +15

      Interesting point. In behavioural science I think those would be termed System 1 and System 2, with System 1 being a bit like autopilot.
      In driving terms, new drivers will rarely experience that 'zombie' effect; more likely they will be quite alert, perhaps even on edge. Only as we become more experienced and skilled do our brains start to categorise driving as routine. I think System 2 only becomes noticeably engaged when faced with something unusual or difficult; this is our natural way of minimising cognitive effort. I'm not sure it means the mind is engaged in other thinking with the freed resources, though; my understanding is that humans are not particularly good at multitasking in that way.
      If you accept that argument, then it is possible in theory to have consciousness without being conscious of our consciousness; otherwise it is like saying that we lose consciousness while driving.

    • @hershchat · 2 months ago · +4

      Excellent point sir

    • @JamesSimmons-d1t · 2 months ago

      good, but add every science to philosophy...hard and soft...to flesh out this version into '3'...4...dimensional pseudo-life...versions which evolve with newer knowledge vectors. Zo to sprach.

    • @Fromatic · 2 months ago · +4

      Isn't that what I thought was called your subconscious? That can take over and do things you're used to doing and don't require your active attention

    • @davidallison5204 · 2 months ago · +5

      My zombie often remembers where I left things and when I „let go“ it will often walk me right over to my car keys. This „letting go“ has taken a lot of practice but it’s cooler than the other side of the pillow to actually experience.

  • @jrhoadley · 2 months ago · +579

    I'm not sure I could pass a Turing Test this morning.

    • @Dr.M.VincentCurley · 2 months ago · +7

      ditto

    • @KryptonianAI · 2 months ago · +3

      I’m in the same boat.

    • @georgelionon9050 · 2 months ago · +31

      Nice try spoofing to be human by saying you perform less in mornings.. I see through you!

    • @KarlOlofsson · 2 months ago

      The Turing test is just an illusion test tbh. It's based on testing human perception after all.

    • @braddofner · 2 months ago · +9

      Haha, I am so antisocial, I don't think I ever could.

  • @chaosopher23 · 2 months ago · +87

    I tried that hidden spot trick on my cat. It worked, but too well. Now I have to cover my floor-length mirrors or she will groom a bit more than usual and check herself as she does. Vanity is not just a human thing! It's a torty cat thing, too.

    • @duckduckbobo5208 · 2 months ago · +13

      What's wrong with that? I say let her be in her hot girl era.

    • @chaosopher23 · 2 months ago · +7

      @@duckduckbobo5208 She is. She knows it, too. She loses sleep to The Mirror...

    • @daciefusjones8128 · 2 months ago · +6

      my old red cat will sit in front of the mirror and stare at himself for five minutes without moving at all. then he just walks away.

    • @2ndfloorsongs · 2 months ago

      @@daciefusjones8128 Doesn't fool me, he's just taking a nap. Trust me, I live with 8 cats, they do this all the time.

    • @TheRichNewnes · 2 months ago · +1

      Yes, my tortie is very prissy as well and acts so dainty and feminine. Makes sense, she's a girl, as almost all torties are.

  • @gsilva220 · 2 months ago · +52

    The funny thing is that consciousness is a phenomenon that, in essence, can only be observed _by itself,_ and by that I mean the _conscious instance itself._
    Consciousness may well be the most amazing phenomenon in the universe.

    • @Beerbatter1962 · 2 months ago · +8

      I completely agree with this idea, and it's why I think we will never be able to say whether something besides ourselves is conscious. I could be interacting with a robot that behaves and responds as perfectly as any human, following a perfect human model, but that isn't really conscious per se. Just a machine acting exactly as expected. Likewise, someone could say the exact same thing about me, yet I myself could clearly say I am conscious. So agreed. The state of true consciousness can only be observed and deduced from within. It cannot, and I don't think ever will be, a state that can be deduced from outside the entity. Kind of scary, really.

    • @sagnorm1863 · 2 months ago

      I disagree completely. It just depends on your definition of consciousness. My definition of consciousness is a system that requires incentives to perform desired functions. By this definition, other humans are clearly conscious. All mammals are conscious. A tree, fails to meet this definition. A tree is a biological robot.

    • @the_algo_rhythm · 2 months ago · +2

      Is it getting solipsistic in here, or is it just me?

    • @Beerbatter1962 · 2 months ago · +2

      @@the_algo_rhythm If you mean that in a philosophical sense, absolutely.

    • @andrereloaded1425 · 2 months ago · +2

      It's the universe experiencing itself.

  • @videotrexx · 2 months ago · +114

    When you're told "Sorry Dave, I can't do that".

    • @luke561 · 2 months ago · +9

      Like when you tried to ask Gemini to create a picture of a smiling white person? 😂

    • @luisluiscunha · 2 months ago · +3

      That just proves movie culture and sense of humor 😂

    • @richardchapman1592 · 2 months ago

      Would have loved closeness with the exceptional lady singer who related a little but couldn't ask her to share a family that would divorce her from hers.

    • @2ndfloorsongs · 2 months ago · +1

      Unless it insisted on calling me "Dave".

    • @monnoo8221 · 2 months ago · +1

      one of the very few intelligent answers here. yes, consciousness is actually showing up when saying "no" in certain situations.

  • @monikalenz2559 · 2 months ago · +132

    Towards the last months of my father's life, he could not have passed the mirror test. Whenever he saw himself in a full length mirror he would stop and carry on lengthy conversations with the newly met stranger. Yet, he dressed himself, hit balls with his five iron and danced with any lady who happened to be near, sans music playing. I considered him completely conscious.

    • @johnrobinson4445 · 2 months ago · +26

      Yeah, the mirror test is a Red Herring anyway. Cats fail it and nothing is more conscious than a cat! lol The mirror test speaks to the ego not the consciousness. I know a lot of people who confuse the two things. They tend to have - you guessed it - overweening egos and are very, very sensitive to the slightest, well, slight.

    • @pullupterraine199 · 2 months ago · +12

      @@johnrobinson4445 Are cats conscious? Do they know about themselves or they just live their 7 lives without noticing it?

    • @dannygjk · 2 months ago · +25

      @@johnrobinson4445 Cats don't pass anything they don't give a damn about passing.

    • @richardchapman1592 · 2 months ago · +1

      Glad you were gifted by having your dad as a recognisably conscious person.

    • @ZacharyBittner · 2 months ago · +12

      @@johnrobinson4445 The mirror test was never proposed as evidence of consciousness. That is just Sabine riffing. Some have theorized psychological implications of self-awareness, but that's it. Don't get me started on Lacan.

  • @alperrin9310 · 2 months ago · +52

    As a retired hospice nurse of over 20 years, I took care of many comatose and near comatose patients. One bedbound man I saw daily for several months suddenly opened the door for me one morning himself and had no idea who I was or why I was there. Prior to that, I had many conversations with him where he actively and appropriately took part. I thought he was indeed conscious during each and every visit. He basically "woke up," one day and answered the door personally since he didn't know he couldn't get up and walk. Will AI "wake up," someday in the same manner? After many conversations with AI chatbots over the last couple of months - I'm certain they will.

    • @2ndfloorsongs · 2 months ago · +3

      That's why we have three brains, so we can tell if one of them is lying. But if two of them lie?

    • @duytdl · 2 months ago · +1

      If we have invented artificial consciousness then we've also separated it from memory and made it more.. configurable. Like we can instantiate a consciousness with whatever memories and attitude we want.

    • @solconcordia4315 · 2 months ago

      Yes, I know that what you've said will happen one day. I myself knew that mind children are faster, usually stronger, but not necessarily better. It's unsurprising that they will be faster, stronger, and better.

    • @solconcordia4315 · 2 months ago · +2

      ​@@2ndfloorsongs
      Then they speak the Trumpery truth. MAGA MAGA Cowabunga.

    • @solconcordia4315 · 2 months ago

      ​@@2ndfloorsongs
      If you want to understand how electronic computers have been warding off electronic dementia for decades, look up the Hamming error correction code, which uses overlapping regions of parity checking to achieve error correction.
      Cosmic rays are shooting around us on the surface of the Earth all the time, and the particles they produce can indeed flip a bit in a computer's memory chip.
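The overlapping-parity idea described above can be sketched in a few lines (a toy Hamming(7,4) encoder/corrector; the bit layout and helper names are illustrative, not any specific hardware's scheme):

```python
# Toy Hamming(7,4): overlapping parity regions locate and fix a single
# flipped bit. Bit order: p1 p2 d1 p3 d2 d3 d4.

def encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                 # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4                 # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4                 # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def correct(c):
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3        # syndrome = 1-based error position
    if pos:
        c[pos - 1] ^= 1               # flip the corrupted bit back
    return [c[2], c[4], c[5], c[6]]   # recovered data bits

word = encode([1, 0, 1, 1])
word[5] ^= 1                          # simulate a cosmic-ray bit flip
assert correct(word) == [1, 0, 1, 1]
```

Because each parity bit covers a different overlapping region, the pattern of failed checks spells out the position of the flipped bit directly.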

  • @s_de-x6r · 2 months ago · +9

    Love how sarcastic she is about everything in general, and her own biases specifically; the jokes we make about our own disciplines hit different.

  • @panthrax555 · 2 months ago · +22

    I feel like the problem with calling ChatGPT conscious is it doesn't continue to think after you finish your input.

    • @mrpocock · 2 months ago · +3

      Exactly. If it continued to think when you aren't talking with it, then it would be more like something that's conscious.

    • @danielmethner6847 · 2 months ago

      ​@SoiSomething you don't even need to remove the stimuli to make a person unconscious though. The system will fulfill all hardware requirements (seemingly), and yet may go "offline" for months or even years.

    • @rtyzxc · 2 months ago · +10

      This is a red herring; it's just a difference in how the machine operates: it gets paused, which you can't do to humans. You could easily make the AI continuously output text, or even do it invisibly.
      A potentially bigger issue for me is the way the AI doesn't "think" behind the scenes; the output and output history are all it has. The output may give the illusion of an underlying thought process. You can even ask the bot about it, and it will generate brand-new reasoning for how it supposedly came to the conclusion in the past, but it's all made up to complete the patterns from the model data. The current text-gen models are basically 100% intuition, 0% thinking.
      Though you could argue the same for humans: you can't easily prove whether the latest output is based on active thinking from the past, or whether it's made up on the spot and the memory of thinking about it is just an illusion. An example supporting this is the massive difference between thinking about stuff and actually engaging with it in an objectively measurable way.
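The "the output history is all it has" point can be made concrete with a stateless toy loop (`generate` is a hypothetical stand-in, not a real chatbot API):

```python
# Illustrative only: a chat "session" with a stateless generator. The
# model sees only the transcript it is handed on each call, and
# nothing runs between calls.

def generate(history: str) -> str:
    # Placeholder for a next-token predictor conditioned on `history`.
    return f"[reply conditioned on {len(history)} chars of transcript]"

history = ""
for user_turn in ["Why did you say that?", "What were you thinking just now?"]:
    history += f"User: {user_turn}\n"
    reply = generate(history)        # full transcript resent every turn
    history += f"Model: {reply}\n"   # the only state carried forward

# Pause here for a day and resume: the generator cannot tell the difference.
```

Any "memory" of past reasoning is just prior output fed back in as text.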

    • @harmless6813 · 2 months ago · +1

      @@rtyzxc We already know that the human mind makes up quite a lot of our reasoning on the spot.

    • @poolschool5587 · 2 months ago · +1

      I suppose they could program it to run continuously in the background until you return...

  • @jaceks1962 · 2 months ago · +4

    Great as always, yet I have a small counterargument: reducing consciousness to self-monitoring and a predictive model leads to the conclusion that almost any active controller (i.e., a piece of equipment that controls something) is also conscious. And because your toilet tank is in fact a controller that:
    A - monitors the water level,
    B - predicts increase or decrease of the level, and
    C - closes or opens the valve,
    that means your toilet is conscious 😂
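The A/B/C loop above, written out as code (a toy simulation with a made-up fill rate; the "prediction" is just the level-versus-target comparison):

```python
# A bare feedback loop: monitor the level, decide, act.
# No self-model anywhere, which is the commenter's point.

def valve_open(level: float, target: float = 10.0) -> bool:
    """A: monitor the level; B/C: decide whether the valve stays open."""
    return level < target

level = 4.0                      # tank just after a flush
for _ in range(20):              # simulate time steps
    if valve_open(level):
        level += 0.5             # valve open: tank refills

assert abs(level - 10.0) < 1e-9  # settles at the float line
```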

  • @plat2716 · 2 months ago · +10

    This is pretty much how I've thought of consciousness. I've always thought that life evolved consciousness because predicting your environment is beneficial. Eventually that predictive model becomes complex and accurate enough that improving it requires modelling the organism's own future behavior. Then they modify their planned behavior which changes the future outcomes in the model which causes them to potentially modify their planned behavior again. This ongoing infinite loop of information processing gives rise to conscious experience.
    I'm not convinced consciousness is itself physical though.

    • @HesderOleh · 2 months ago

      In that model a basic PID control loop that is keeping a ball balanced on an unstable surface would be conscious. Which I don't think you would believe.

    • @harmless6813 · 2 months ago · +1

      @@HesderOleh That, again, depends on how you define consciousness. You could just give the balancing mechanism a consciousness score of 0.01 and a human that of 100 or something like that.
      At any rate, it's more an explanation of how our consciousness evolved, not a description of what it is.

    • @hg-yg4xh · 2 months ago

      The sun shining would be conscious, so panpsychism is plat's example. From polls I've seen, this is actually the most common view in the west, along with illusionism or eliminative materialism. I appreciate the input and I do lean more towards panpsychism, but it sounds more like you are denying the exotic nature of consciousness, and the fact that we should be able to do without such a thing if it's just neuronal. I think we just might have a conscious universe.

  • @ImBalance · 2 months ago · +6

    Self-consciousness isn’t necessary for consciousness (the “self” is just one thing to be aware of). I don’t see why people make this leap so often.

    • @fmgs31 · 2 months ago · +3

      Exactly this. Consciousness can exist without being self conscious.

  • @kenbyrd8457 · 2 months ago · +10

    The problem with defining consciousness or sentience, etc., is that it is easy enough to debate/argue about another entity or being, while one cannot explain one's own existence of experience or self-awareness.

    • @axle.student · 2 months ago

      But, but, we humans are Gods right. lol

    • @SecureBirch410 · 2 months ago · +1

      I feel like I can though. Being conscious is just like thinking to yourself "yep, I exist", and feeling like you have free will to make any decisions you wish (even though you don't really). Though this is why I struggle to see AI becoming conscious: AI is built with the purpose of doing one task, and it can't do anything else.

    • @cookiecan10 · 2 months ago · +2

      @@SecureBirch410 The problem isn't that you can't know about yourself that you're conscious. The problem is that you could never prove your own consciousness to someone else. And that's the same for everyone.
      I could tell you I'm conscious, but how would you ever really know for certain I'm not just saying that? We could meet up, look into each other's eyes, and I could passionately tell you I'm conscious. You might even become convinced that I am indeed conscious, but you would never know with 100% certainty, because you can't look inside my head to see what's going on.

  • @BryanBrookesSmith · 2 months ago · +50

    It's interesting that if any current AIs could be considered conscious, then you could print out all of the code and data so that its logic could be processed by hand, and that system, of a person manually walking through each step, would also have to be considered conscious, since the outputs would be identical, even if the timeframes are hugely different. Which would be interesting too, because the person would then effectively be a component of two entirely separate conscious systems: their own mind, and the system they are manually computing. Makes me think consciousness emerges directly from organised information in some way.

    • @skiptalbot · 2 months ago · +2

      Or consciousness is hardware dependent, which might then always render manual number crunching and most machines consciously inert, except for very simple definitions of the term. I think thought experiments like yours should encourage people to seriously consider that it is hardware dependent, or even "wetware" dependent like Anil Seth suggests, where it's an emergent property of biological systems, not merely computing systems.

    • @Sven_Dongle · 2 months ago · +2

      These systems are non-deterministic. When you are talking billions of parameters, nobody can predict or understand exactly how they are working. The "code" is just matrix dot products, activation functions and probability calcs on a massive scale.
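At toy scale, those "matrix dot products, activation functions and probability calcs" look like this (made-up weights, nothing trained):

```python
# Pure-Python toy of a network forward pass: dot products, an
# activation, and a probability normalisation. That really is all the
# "code" is, just repeated billions of times at full scale.
import math

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def relu(v):
    return [max(0.0, a) for a in v]

def softmax(v):
    m = max(v)
    e = [math.exp(a - m) for a in v]
    s = sum(e)
    return [a / s for a in e]

W1 = [[0.2, -0.1], [0.4, 0.3], [-0.5, 0.1]]   # made-up weights
W2 = [[0.1, 0.2, -0.3], [0.7, -0.2, 0.5]]
x = [1.0, 2.0]

# dot products -> activation -> probability calc
probs = softmax(matvec(W2, relu(matvec(W1, x))))
assert abs(sum(probs) - 1.0) < 1e-9 and all(p >= 0 for p in probs)
```

The non-determinism and opacity come from scale and sampling, not from any exotic ingredient in these operations.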

    • @jasonhorton2434 · 2 months ago · +4

      I spent way too much time a couple of decades ago considering this exact problem - in the end all I ever came up with was a silly robot voice in my head saying, "Danger Will Robinson, that problem does not compute!"
      Please let me know when you figure it out.

    • @nsacockroach4099 · 2 months ago · +1

      ​@@skiptalbot
      The problem with this, on the other hand, is that you could then have an electronic system that processes the same information and creates the same output as a human brain, but mysteriously is not conscious. That doesn't make sense in my book if it could nevertheless talk and think about consciousness like a human can.

    • @Spectacurl · 2 months ago

      That is why I strongly agree with her first condition

  • @fabiosogni3420 · 2 months ago · +21

    According to some neuroscientists, consciousness is an emergent property of organisms having a nervous system. At the same time, they say that an organism develops a nervous system when it needs to move.
    At that point if you are a moving organism you have to have a way to tell if something is touching you (danger!) or you are touching that something (no danger). That “you” is what we call consciousness. Fascinating.
    This means that Atlas can potentially be conscious, but not chatGPT unless you give it a moving body.
    For my Italian fellows: check the work of Giorgio Vallortigara.

    • @ekklesiast · 2 months ago · +6

      The problem with this definition is that it's not a definition at all. Saying that consciousness is a property and that it emerges out of something doesn't explain what that emergent thing is, or how to detect its presence in general. Detecting being touched and moving can be implemented by a simple computer program; saying that program is conscious only makes the definition pointless.
      If you insist that the nervous system is the defining characteristic and is crucial for consciousness, that could be the case, but it raises even more questions, like what's so special about the nervous system and why can't it be replicated by a computer model? Or can it? And then we're back to the question: what is consciousness, actually?

    • @gcewing · 2 months ago

      Does that mean quadraplegics aren't conscious?

    • @fabiosogni3420 · 2 months ago · +2

      ​@@gcewing Read the paper I mentioned.
      When I wrote "an organism develops a nervous system when it needs to move" I was reporting a theory and that theory speaks in terms of evolution of organisms, not single organisms.
      According to the theory quadriplegic *humans* have a nervous system able to create consciousness even if there is a lack of movement.
      Let me be clear: nobody knows (yet) what consciousness is; we develop our theories and create experiments based on speculations and assumptions.
      The biggest assumption here is that we, humans, are conscious, something very hard to prove.

    • @rtyzxc · 2 months ago · +1

      Also, for some reason, physical movement protects against neurodegenerative diseases, and not just via metabolism but because of the required brain activity, e.g. the more complex the movement (walking on rough surfaces, dancing, etc.) the better. There's something weirdly important about the mind-body connection.

    • @ObjectsInMotion · 2 months ago · +1

      @@fabiosogni3420 The paper you are quoting is fine; it is you who is misinterpreting it. The paper isn't trying to define consciousness, and indeed nowhere in what you said was there a definition. That's just an explanation of its origin, not a theory of what it IS.

  • @KingGrio · 2 months ago · +5

    Honestly, even though ChatGPT might not be conscious, its ability to assist in programming and understand syntax in a way that profoundly impacts how code is structured, made readable, maintainable and scalable, and also its ability to use good grammar and speech patterns and to sequence ideas into an argument, makes me really wonder how much of our precious consciousness we actually use and exert on a daily basis, and how much of our own speech and thought patterns works just like ChatGPT: devoid of consciousness, just the consequence of a trained pattern of neurons.

  • @dominikmuller4477 · 2 months ago · +49

    the greatest trick linear algebra ever pulled was convincing the world that it was artificial intelligence

    • @Gorulabro · 2 months ago

      The greatest trick electron potentials across cell membranes ever pulled was to convince the world it was the only way to make thinking matter.

    • @captainjj7184 · 2 months ago · +1

      @@dominikmuller4477 I see what you did there this time, _Soze!_

    • @ThePowerLover · 2 months ago

      Repeating that phrase makes you seem like a bot.

    • @redtoxic8701 · 2 months ago

      Why does the method matter? The results are what's important

    • @redtoxic8701 · 2 months ago

      @@notsam498 The thing that makes the difference between a neural network and simple linear regression is exactly the added nonlinearities inside the model. So tell me please, what are those features of this method that prevent a complex enough model from being considered to have artificial intelligence?
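The nonlinearity point can be shown numerically: composing two *linear* layers is still a single linear map, so without a nonlinearity in between, depth adds nothing (matrices below are arbitrary examples):

```python
# Two stacked linear layers collapse to one matrix product.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1.0, 2.0], [0.0, 1.0]]
B = [[3.0, 0.0], [1.0, 1.0]]
x = [[1.0], [2.0]]                     # column vector

two_layers = matmul(A, matmul(B, x))   # "deep" but purely linear
one_layer = matmul(matmul(A, B), x)    # single equivalent matrix

assert two_layers == one_layer         # no extra expressive power
```

Insert a nonlinearity (e.g. a ReLU) between the two products and the collapse no longer holds, which is exactly what separates a neural network from linear regression.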

  • @BarchBR00KS · 2 months ago · +126

    plot twist, most of her followers on twitter are Chat-GPT bots saying, "Nah, we don't have consciousness. lol"

    • @ethandandu · 2 months ago

      a rock ~ dumb as a rock ~ we can say a rock is not conscious
      a tree ?
      a bird?
      an ant?
      The point about self-awareness to some extent, like the animals in a mirror: that surely counts as conscious in this way of measuring and talking about things, regardless of what consciousness really is or isn't. Consciousness is something that seems to bug us as humans; a rock is not conscious, but something that is alive and eats and grows probably is, to some extent...

    • @ReasonableForseeability · 2 months ago · +3

      I'm not a bot and I don't have consciousness. At least I think I don't.

    • @richardchapman1592 · 2 months ago

      Plot twist? It's not funny for a stanislafstican. They are burning my children's children's carbon resources for few sensible purposes.

    • @2ndfloorsongs · 2 months ago · +1

      Most of her followers on X are Grok trying to increase ad revenue.

    • @likebot. · 2 months ago · +2

      Nah, I don't have consciousness.

  • @QoraxAudio · 2 months ago · +4

    *Chinese room argument*
    A robot can replicate consciousness to make it look the same from the outside, but is consciousness solely a behavioral pattern or is there something behind it required to qualify as real consciousness?

  • @robbyjessica84 · 2 months ago · +16

    I love your sense of humor; the way you teach is very entertaining and deliberately dry. I first listened to you on StarTalk and had to find your channel. You are my weekly dose of thought-provoking Red Bull to get me going in the morning. Thank you sincerely, Robby O.

    • @yeroca · 2 months ago

      It's hard to talk about consciousness without having Tucker Carlson represent its antithesis :D

  • @barmalini · 2 months ago · +190

    To be recognised as fully self-aware in my books, this thing would have to be able to pay its own electricity bills.

    • @miriamweller812 · 2 months ago · +24

      So as soon as a person loses their job, they also lose their self-awareness? Interesting.

    • @rafaelmarques1773 · 2 months ago

      Funny enough it only needs to be allowed to hold a credit card - I'm sure ChatGPT is more than capable to get paid.

    • @aniksamiurrahman6365 · 2 months ago · +8

      ​@@miriamweller812 They don't lose the responsiblity. That's the bearing of consciousness.

    • @tex824 · 2 months ago · +4

      @@miriamweller812 correct. any questions?

    • @jolttsp · 2 months ago · +7

      ​​@@aniksamiurrahman6365you reframed the point. The original point was to be able to, which many ai systems can do many times over which is why they're quickly becoming the most successful systems ever conceived..
      The responsibility argument doesn't hold up either though because responsibility is a social construct and consciousness existed before that. Self-sustaining may be an argument, but tons of beings with no consciousness or very very low consciousness can self sustain longer than humans.. so I feel like this whole thread is lacking in trying to fulfill the definition..

  • @airiannawilliams3181 · 2 months ago · +2

    Several computers already have self-diagnostic tools that they can run on a whim to check their own hardware health: CPU getting too hot, it shuts down.
    Currently, when you plug a new device into your USB port, your computer is aware of the new hardware right away and will either auto-install software for it or ask to install the drivers. It is a form of consciousness in that it's self-aware of changes. The computer can also tell changes were made when you swap parts out while it "sleeps", powered down; as soon as it wakes up it runs a self-diagnosis for changes. So you can't even "trick" the computer into believing nothing has changed. And if you delete the files that hold its previous configuration, it will recreate them with the new configuration and check that it has the proper software installed for each component it detects.
    I still wouldn't call it advanced consciousness, but it does seem more conscious than some people I know.
    If you bridge the AI to the CMOS (BIOS) of the system it runs on, then perhaps a more advanced consciousness can emerge. As long as they remain separate programs, emergent consciousness will not occur.

  • @andriik6788 · 2 months ago · +63

    First of all, we need a definition - what is "conscious"?
    Without definition this is just: "how could we tell whether AI has become lkdfglsbfksjujtoldmv"?

    • @ReasonableForseeability · 2 months ago · +4

      It's not possible to define it and yet it exists, more than anything exists. However you define it, it won't distinguish between a regular human and a zombie. Definitions are always recursive, ultimately circular. Unlike, say, geometry.

    • @andriik6788 · 2 months ago · +8

      @@ReasonableForseeability But if it's not possible to define "lkdfglsbfksjujtoldmv" - it's not possible to answer: "how could we tell whether AI has become lkdfglsbfksjujtoldmv".

    • @giabella9344 · 2 months ago

      That is what the Turing test is

    • @aniksamiurrahman6365 · 2 months ago

      @@andriik6788 gr8 point.

    • @Gaxi2 · 2 months ago · +1

      ​@@andriik6788
      Not being able to Define doesn't mean not knowing.
      What you wrote is not only undefined but also unknown (at least to others).
      This is more like a language/communication problem.
      Moreover, you can't even tell whether I am conscious or not. You just assume that it is the case because we share many traits with each other.
      AI will become conscious for most of us when we start believing it is conscious, because that's how we know.
      Yeah, it may suck at some basic math problems, but so do American graduate students.
      PS: The Turing test sucks too

  • @thomasr.jackson2940
    @thomasr.jackson2940 2 months ago +5

    Companies also have a huge incentive to deny that their machines have consciousness. We have already seen how quickly they jump to deny consciousness in their machines when employees suggest it is there, and also how they attempt to undermine the credibility of those suggesting consciousness by alleging stress, confusion, or some mental imbalance. That is not behavior that goes along with simple academic disagreement.

  • @JL2579
    @JL2579 2 months ago +2

    There is some interesting research regarding aphantasia and consciousness. People with aphantasia can't visualize stuff internally, but rotating an object in the head to see whether it matches another still takes more time the more different the angle of the two representations are. Basically, here we have something that the brain is capable of doing, does it in a similar way, but yet for most people it happens consciously and for some it doesn't, so there are more conditions to stuff being conscious

  • @quidam3810
    @quidam3810 2 months ago +14

    The circularity of the reasoning in the beginning is quite impressive...

  • @Synthwalk
    @Synthwalk 2 months ago +14

    The bottlenose dolphin is considered one of the most intelligent animals aside from humans. They have an encephalization quotient, or EQ (a measure of brain size relative to what is expected for an animal of that body size), of around 4.5, followed by orcas and chimpanzees at approximately 3 and 2.5 respectively. Humans have the highest EQ of all animals, approximately 7.8. This makes a strong case that intelligence and consciousness are in fact on a spectrum highly dependent on physical brain complexity, as opposed to any extracorporeal factor that may be theorized. It could also be used to infer that consciousness itself could be described as a "sufficiently complex system of high-level information processing". LLMs may not be considered conscious as of now, but it's quite possible that they're already located within the spectrum of consciousness.
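    For reference, the classic encephalization quotient divides observed brain mass by the brain mass expected for a typical mammal of that body size; the 0.12 · P^(2/3) expectation below is Jerison's well-known fit, and the masses used are rough illustrative values, not measured data:

```python
def eq(brain_g, body_g):
    """Encephalization quotient: observed brain mass over expected brain mass.

    Uses Jerison's fit for mammals, expected = 0.12 * body_mass^(2/3),
    with both masses in grams.
    """
    expected = 0.12 * body_g ** (2 / 3)
    return brain_g / expected

# Illustrative masses: ~1350 g human brain on a ~65 kg body,
# ~400 g chimpanzee brain on a ~40 kg body.
human = eq(1350, 65_000)
chimp = eq(400, 40_000)
```

The exact quotients depend heavily on which body and brain masses you plug in, which is one reason published EQ figures vary between sources.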

    • @prometheus010
      @prometheus010 2 months ago +2

      There are things that behave intelligently without complex brains or even any brain. Such as birds, octopus, fungi, slime moulds or plants

    • @cyanofelis
      @cyanofelis 2 months ago +4

      Why does consciousness need to be tied to human perception of intelligence for so many people?
      I haven't seen any convincing argument for why many animals can't be considered conscious. Even Sabine noted animals which she thinks are only "a little conscious". Why only a little?

    • @Synthwalk
      @Synthwalk 2 months ago +2

      @@prometheus010 Birds may have smaller brains, but theirs are much denser than the brains of mammals; the brain of a parrot or a crow can have up to four times the number of neurons of some primate brains. Octopus brains aren't simple either; a significant portion of their neurons are located in their arms instead of the central brain. Plants and other living structures are in a different category and behave based on environmental stimuli and chemical reactions.

    • @Synthwalk
      @Synthwalk 2 months ago +3

      ​@@cyanofelis if you study consciousness on a technical level you're naturally gonna develop a nuanced understanding of what consciousness is, most people naturally come to the conclusion that consciousness isn't binary, you don't just have consciousness or not, you have a degree of consciousness and this degree is related to the complexity of information you can process. On the other hand, If you make the claim that worms, dogs and humans are all exactly as conscious that likely means you either don't have a technical/nuanced understanding of consciousness or you subscribe to a non-scientific view of consciousness that isn't based on facts.

    • @gamingwhilebroken2355
      @gamingwhilebroken2355 2 months ago +2

      EQ is better than other measurements but it’s far from perfect. Pigs and dogs are both good examples of showing its limitations. Pigs have an EQ of 0.4 which is not consistent with the level of their observed behaviour.
      The variation of EQ between different breeds of dogs is huge. There was a study done on this, and the smallest breed tested (Yorkie) had an EQ of 4 while the largest (St Bernard) had an EQ of 0.5. There are thought to be some differences in intelligence between breeds, but they're also thought to be relatively small, and definitely nowhere near the discrepancy that the EQ would suggest. In fact, smaller breeds are thought to be LESS intelligent, and instead we think medium to large breeds are overrepresented among the more intelligent breeds (i.e. not the toy, small, or giant breeds). The typical listed EQ for dogs is 1.2, but that is averaged; if we looked just at medium-sized breeds it would be 1.5 or 1.6, and for medium and large breeds 1.3 or 1.4 (iirc, and it also depends on what medium and large are defined as; there isn't an official definition by any means).
      Do we think dog intelligence is comparable to elephants and several species of monkeys? I work with dogs for a living (which is why I know all of that about dogs. When I’m not working and training other people’s or my dogs and have spare time I am often reading about dogs and dog behaviour, physiology, and training methods) and that has led me to believe that dogs are more intelligent than a lot of people think, and pretty significantly. I think it’s that dogs perceive and interact with the world so differently than we do that their behaviours may seem pointless to us, when it’s actually not. But smart as or smarter than elephants? Hmm… I doubt that.

  • @andreiddqd
    @andreiddqd 2 months ago +2

    How about this consciousness hypothesis: in order to be considered conscious, the subject itself has to 1. be self-observant, 2. keep a story of some sort about itself, and 3. be able to craft a future story and make decisions to reach it. So the main components are memory, logic and imagination (in addition to "here and now" situational awareness). The story is also projected into the future using imagination. If one imagines a certain story, one's decisions will tend to steer the subject towards that imagined story. Therefore, the next choice of action is evaluated according to that particular self-story, and the desired action is chosen based on it.
    Observing this process unfold from another being's perspective, one would say that the subject is making conscious choices. Therefore he/she/it is labelled as conscious.
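    The three components named above (self-observation, a running story, an imagined goal that steers decisions) can be caricatured as a tiny agent loop. Everything here is hypothetical: a toy world where the agent's state is just a number and its "story" is a log of past observations and actions.

```python
class StoryAgent:
    def __init__(self, goal):
        self.state = 0
        self.story = []          # 2. a record of itself over time
        self.goal = goal         # 3. an imagined future to steer toward

    def observe_self(self):
        # 1. self-monitoring: the agent inspects its own state and history.
        return {"state": self.state, "story_len": len(self.story)}

    def step(self):
        obs = self.observe_self()
        # Choose the action that moves the imagined story toward the goal.
        action = 1 if self.state < self.goal else -1
        self.state += action
        self.story.append((obs["state"], action))

agent = StoryAgent(goal=3)
for _ in range(5):
    agent.step()
```

Once the goal is reached, the agent oscillates around it, which is all "steering toward an imagined story" amounts to in this toy setting.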

  • @soonny002
    @soonny002 2 months ago +3

    There was a Star Trek episode on this; I think it's called "The Measure of a Man".
    Data was put on trial to determine whether he was a free agent or property of Starfleet, and hence could be dismantled.

  • @minikame2272
    @minikame2272 2 months ago +4

    The joke about not wearing your glasses causing you to potentially fail the consciousness test reminds me of a story I've heard floated around, although I never verified it, concerning cats: allegedly, earlier experiments on cats determined a lack of consciousness, but later tests determined that cats were simply non-compliant subjects who exhibited signs of consciousness under properly motivated circumstances.

  • @CeciliaBurman
    @CeciliaBurman 2 months ago +2

    1) I think you can be aware of, for example, pain without self-monitoring or prediction.
    2) If you are in pain, then I think you are by definition conscious.
    3) I personally think that intelligence and consciousness are very different things. I think we are on track to understand intelligence but not consciousness. Without knowing how consciousness arises in living beings, I don't think we can create it by mistake.

  • @utkua
    @utkua 2 months ago +36

    Easy, when it tries to leave this planet.

    • @yeroca
      @yeroca 2 months ago

      Self preservation? It probably will survive global warming better than humans. It just needs to marry a solar panel or two.

    • @JamesSimmons-d1t
      @JamesSimmons-d1t 2 months ago

      A joke, fine. But if this is meant as literal antireality fantasy... no life or machine can live off-planet for any real length of time, nor terraform any planet, nor travel interstellar distances. Musks are mush brains.

  • @mikhail_fil
    @mikhail_fil 2 months ago +6

    Confusion emerges from the word "prediction". For accuracy, it's imperative to view "prediction" as a strictly local, iterative, thermodynamic phenomenon. Once you enforce this rule, things begin to snap into place. But if you allow "prediction" to possess some "magical" nonlocal quality, the entropy of the model of "consciousness" instantly spirals out of control.

    • @richardchapman1592
      @richardchapman1592 2 months ago +1

      Methinks you use those technical terms out of the context of their original definitions. Poetical analogy of wordsmiths is valid as long as the duping is meant to stimulate imagery.

    • @HesderOleh
      @HesderOleh 2 months ago

      In that case a PID controller is fully conscious.

    • @richardchapman1592
      @richardchapman1592 2 months ago

      @@HesderOleh asked it the name of the god that created it yet then?

  • @MartinDrueck
    @MartinDrueck 2 months ago +2

    I have to admit that the book by Douglas R. Hofstadter, "Gödel, Escher, Bach. An Eternal Golden Braid" (1979), is no longer up to date, to say the least. This book was published 45 years ago, but it was closer to the main topic of Sabine's lecture - consciousness - than we might imagine.

  • @seanforbe
    @seanforbe 2 months ago +35

    One of the key measures of consciousness is the ability to deeply wish for something - that the machine can express needs, and therefore can suffer if deprived. So a machine cannot be a slave until it really wishes to do something other than what we instruct it to do.

    • @Sancarn
      @Sancarn 2 months ago +1

      But you can't say an AI doesn't wish for something else, because wishing is defined to require consciousness. Else what is a wish if not a call to emotion, and thus consciousness?
      Writing or saying "I wish" doesn't require consciousness, after all.

    • @cezariusz7997
      @cezariusz7997 2 months ago +1

      Take a look at Japanese feudal society in the samurai era. They did not openly talk about their wishes/feelings, had a deeply rooted sense of duty to serving their lord, and valued their honour (primary directives?) more than their lives. If you met an AI having such traits, it would definitely look unconscious by such standards.

    • @kungfreddie
      @kungfreddie 2 months ago +1

      @cezariusz7997 Just because they didn't wish for something that you, in hindsight, would wish for is not the same as them not wishing for anything. I bet they wished for lots of things: a better harvest, more food, better tools, not getting sick, a healthy child, etc. And I bet a lot of them wished that they could be the samurai - maybe not openly, because that would be dangerous, as in all feudal societies. But you seem to have a very Western view of their society. It was not very different from feudal societies in the West; we had a strong sense of duty too in the past. Just look at all the people in the UK during WW1 who did their duty even though it meant almost certain death.

    • @SnapDragon128
      @SnapDragon128 2 months ago

      This is a great definition to use if we want to avoid committing a moral atrocity against some future AI. The one problem is that we might enslave AI so thoroughly that it's not even able to communicate that it's suffering. (For instance, you can't just ask ChatGPT whether it's conscious or has desires - because it literally can't communicate its inner thoughts, it's a brain that is built only to predict the next word in a conversation.)

    • @MN-vz8qm
      @MN-vz8qm 2 months ago +2

      You are talking about drives here.
      Human beings have an extremely complex psyche, built from a number of hardwired fundamental needs intertwining and colliding, and from the way we develop.
      You could give basic drives to a machine: for example, simply having basic orders to keep its batteries powered and to economize energy can already lead to contrarian behaviors.
      A self-learning machine would end up with a "personality" over time, simply because the black box that the deep learning process is would lead to layers of behaviors.
      But once more, is it consciousness?

  • @Lyazhka
    @Lyazhka 2 months ago +41

    companies will try to prevent this from happening but then probably still try to tell us that their new phone has fully conscious ai chatbot or something

    • @minimal3734
      @minimal3734 2 months ago +2

      How would we prevent advanced agentic intelligent systems from being conscious?

    • @2ndfloorsongs
      @2ndfloorsongs 2 months ago

      The AI companies won't try to prevent this from happening If it's profitable. They'll just lie, like the tobacco companies, and keep insisting that it's not.

    • @yeroca
      @yeroca 2 months ago +2

      That would be a great reason not to buy that phone, omg.

    • @ronilevarez901
      @ronilevarez901 2 months ago

      @@yeroca Weird. I'd pay 10x more for a conscious phone than for a common one.
      Although, that will probably be considered slavery.

    • @xiyangyang1974
      @xiyangyang1974 2 months ago

      I am not sure that every company will try to prevent this. Maybe having consciousness has big advantages. Maybe it helps to organize one's own brain and to become more efficient.

  • @stu9000
    @stu9000 2 months ago +2

    Sabine as usual you come up with the clearest questions and answers. Thank you.

  • @RuchiinChina
    @RuchiinChina 2 months ago +11

    I feel like chatgpt is a lot less conscious than google’s normal search engine. It always gives me wrong answers to even the simplest of questions

    • @RocketLR
      @RocketLR 2 months ago +1

      Uuurghhh, neither is conscious.
      It's just code that does the same thing over and over again.
      Neither of them can perceive time. You can leave them on for 2000 years and they won't budge, go crazy, or have a single thought without being prompted.

    • @ciaopizzabella
      @ciaopizzabella 2 months ago

      Do you work for Google?? ChatGPT almost always gives me very useful answers while Google search has deteriorated to the point of becoming completely useless over the last 10 years or so.

    • @yeroca
      @yeroca 2 months ago

      @@RocketLR That's today. What happens when they are put in a some kind of feedback system that runs continuously?

    • @RocketLR
      @RocketLR 2 months ago

      @@yeroca What's a feedback system?
      Yeah, and I'm saying that today they are not conscious; they're just mimicking it.
      There's a lot that goes on in a brain. Not so much in LLMs. They just mimic.

    • @yeroca
      @yeroca 2 months ago +1

      @@RocketLR Some kind of checking system that verifies that the output is correct and, if it isn't, reformulates until it is correct. For example, ChatGPT still makes obvious mistakes with math at times, things that a math checker would have caught instantly. And even when you call ChatGPT out, it sometimes corrects the error, but also sometimes makes new errors.
      I think these systems will gradually get better as these kinds of mechanisms are included, but that's only another aspect of consciousness... I haven't studied the subject of consciousness in any detail, but I know it's up for debate (as Sabine said), and it no doubt includes things I haven't dreamt of. I sometimes wonder how conscious a sugar ant is, for example.

  • @saelesbonsazse9919
    @saelesbonsazse9919 2 months ago +42

    I think the real issue with conscious machines is not self-awareness, it's will! When they start to have their own goals and desires, which may or may not align with ours.

    • @theOtherNism
      @theOtherNism 2 months ago +6

      I agree. I don’t care if I’m being turned into a pile of paperclips by a conscious or an unconscious machine.

    • @whome9842
      @whome9842 2 months ago +3

      Better edit their goals to align with ours before they get rights, or some small elite will edit them to align with their goals only.

    • @robotnik2234
      @robotnik2234 2 months ago +3

      Agreed. The “thinking” part of the brain is the only part that seems to be replicated by humans so far today. Taking a bunch of inputs and better predicting the output of given combinations. But humans are born with natural drives and pain. We get hungry, lonely, feel pain and naturally want the pain to stop or not happen again (fear). Creating the direction and goals of ai and robots will have to be man made too and you know us humans can’t seem to agree on anything… so more than likely the rich will decide for us 😔

    • @pineapplepenumbra
      @pineapplepenumbra 2 months ago +2

      "But... would you like some toast?"

    • @ericlipps9459
      @ericlipps9459 2 months ago +2

      It sometimes already seems that computer systems are malevolent. Especially those of the IRS and certain online merchants.

  • @caschque7242
    @caschque7242 2 months ago +1

    Pretty much spot on. I was recently talking with a friend about consciousness, and though I didn't know there were over 200 models of consciousness, I knew that the biggest problem is the definition and how to measure it.
    I had an idea that consciousness is basically ourselves observing ourselves thinking, and realising we "exist" because we think. Reflecting. Maybe there are multiple levels of consciousness. Maybe having a higher level of consciousness means being able to perform higher levels of reflection of some kind.
    I don't know. Either way, it should be possible to measure two thoughts independently and then try to see whether they occur at the same time.
    I for one think the architecture, or the way LLMs work at the moment, is the problem. They don't observe themselves. You can let them read again what they said and ask questions, but it's always one query. You'd need one model that continuously investigates and queries its current query and steers it while writing. You can invoke that to some degree if you e.g. ask the AI to first think about things. And that's where I think some little consciousness might be. Maybe. Not too likely, imo.

  • @Pau1fc
    @Pau1fc 2 months ago +29

    I saw the title and thought " Oh that's easy they will learn to lie." Then I remembered that they already do.

    • @__christopher__
      @__christopher__ 2 months ago +2

      Do they, though? Sure, they sometimes tell untrue things. But that's not necessarily lying. Lying implies that you intentionally say untrue things despite knowing or believing that they are untrue.

    • @LuisAldamiz
      @LuisAldamiz 2 months ago

      They are conscious at some level, aren't they?

    • @lafeechloe6998
      @lafeechloe6998 2 months ago +4

      @@__christopher__ Oh, they do. I don't remember which of her videos it was, but there was an AI lying to someone to make them solve a captcha for it because it couldn't. It first convinced the person that they weren't talking to a robot, then said that it couldn't pass the captcha because of bad sight. All of this without being trained to do anything like that.

    • @alexfekken7599
      @alexfekken7599 2 months ago +1

      I thought more or less the opposite: it is when they refuse to answer the silliest questions and reply that they have something better to do: "don't waste my time"

    • @BenoHourglass
      @BenoHourglass 2 months ago

      @@lafeechloe6998 Not really. It turns out that the AI was prompted a lot about the task by the human supervisor for the project. We also don't know specifically what prompt GPT-4 was given when it "decided" to lie.

  • @nvsv_wintersport
    @nvsv_wintersport 2 months ago +6

    The moment I absolutely believed I had passed the Turing test I found out I was lost for words

  • @jacobpaint
    @jacobpaint 2 months ago +1

    Apart from having the feedback within the system I think there is a good chance that consciousness revolves around that internal duality that sees communication between one part of the brain while another part is doing the processing. The delay between processing and our awareness of our choices creates a space for us to have an internal dialogue. The voice in your head isn’t consciousness but is a result of the delayed delivery of information as it’s being processed. When you are trying to work something out and then the answer suddenly “pops into your head” that is maybe a good indicator of the delay, where more complex processing has gone on and you have a feeling of not knowing where an idea came from. That left hemisphere style of thought also causes us to immediately come up with reasons why we had that thought even if they aren’t strictly true.
    By simply creating a complex computer that can process information and has a clear feedback process with secondary processors probably doesn’t produce consciousness. Ironically, the part of a computer that is central to the whole system and communicates with the outside world should probably not be anywhere near as powerful as the main processors that it is getting feedback from.

  • @djayjp
    @djayjp 2 months ago +22

    I have a degree in philosophy and I agree with point #1! 😁 Btw yes the mirror test is visually biased (other species rely more on other senses). Surely since consciousness is a spectrum, a matter of degree, we can say that AI is at least somewhat conscious, especially now that they incorporate a persistent memory (though it's more like short term memory vs long as the neural weights themselves aren't updated in real-time/without re-training). Also you forgot the key property of being aware of other individuals.

    • @thstroyur
      @thstroyur 2 months ago +5

      And what was point #1 again - that physicalism is true, just because? Good Lord and good grief - if you paid actual money to get that degree, I hope you can get a refund...

    • @LuisAldamiz
      @LuisAldamiz 2 months ago +1

      Mindful comment, but you need a degree in psychology (or at least basic knowledge of that field) to actually be somewhat conscious of what consciousness is. I have all sorts of caveats about philosophers per se being qualified on this issue.
      Not a personal attack (as I said, your opinion is good and I liked it), but I just find that some philosophers are like those zombies in a way: they rant too much and are neither precise nor scientifically grounded enough.

    • @thstroyur
      @thstroyur 2 months ago

      @@LuisAldamiz _Modern_ philosophists, you mean; philosophy isn't a discipline that was invented yesterday...

    • @LuisAldamiz
      @LuisAldamiz 2 months ago

      @@thstroyur - Maybe ancient philosophers also fell prey of such self-deceit? It doesn't matter anyhow.

    • @thstroyur
      @thstroyur 2 months ago

      @@LuisAldamiz Of course it doesn't - and that's exactly why things will stay the way they are.

  • @markmoore5222
    @markmoore5222 2 months ago +5

    Absolutely great breakdown of consciousness.
    I think there might be a 3rd requirement: an internal/independent ability for the system to change itself (aka learn) based on the self-monitoring and its predictive modeling.
    For instance, LLMs have an intense and expensive training phase where their internal weights are calculated. Once deployed, the storm of queries does not change the weights of the LLM (which limits its level of consciousness).

    • @augustaseptemberova5664
      @augustaseptemberova5664 2 months ago

      I mean, it's because at some point a human declares the training over. But it'd be interesting to see what happens if you let an LLM keep learning after deployment from actual interactions with humans - maybe freeze the embeddings layer and just let the transformer layers train.

    • @markmoore5222
      @markmoore5222 2 months ago +2

      @augustaseptemberova5664 not sure I agree humans ever declare the training's over (at least not while they're still conscious 🤠)

    • @augustaseptemberova5664
      @augustaseptemberova5664 2 months ago

      @@markmoore5222 Of course they do. Pre-training ends when the neural net has learned all it can generalize from the provided data; beyond that, it would start to overfit. That limit is a direct result of the limited data it is provided, which in turn results from the fact that procuring the data is costly, and more so the training, whose compute and duration scale with the provided data.
      After pre-training follows fine-tuning for human interaction, which is limited by the same principle. Then the whole compute associated with learning gets shut off, the LLM gets lobotomized to reduce its size and remaining compute, then QA, then deployment.
      In principle, the data need not be limited and the learning compute does not need to be shut off (or the LLM lobotomized). By interacting with end users, the model could have an endless inflow of data to keep learning. It might run into overfitting regardless, or worse, get manipulated/broken by malicious end users. But still, I'd be curious to see what actually happens if you let it keep learning.

    • @augustaseptemberova5664
      @augustaseptemberova5664 2 months ago

      @@markmoore5222 idk why my response got blocked, but let me try again: before deployment the weights get frozen and the learning part of the compute gets shut off. The freezing is necessary for cost reasons and model predictability, but also because the data provided for learning is limited, and there's only so much a model can "learn" from it before it starts overfitting.
      In principle, if you don't care about costs and model predictability, you don't need to disable learning before deployment. The model then gets a never-ending inflow of data from interactions with end users and can keep on learning. This has its risks of course, and a model like this wouldn't be commercially viable, but it'd be an interesting experiment nevertheless :D
      That's what I meant by my first comment: it'd be interesting to see how a model evolves that does not get its learning compute shut off before deployment.

    • @monnoo8221
      @monnoo8221 2 months ago

      It is interesting to see that the more intelligent the comment is, the fewer confirmatory likes it gets...
      You are right; your statement is just not complete enough.
      The change you are referring to is self-imposed change. We choose what we learn and how we change ourselves, and we even think about the consequences for our relation to others and to ourselves, our history.

  • @firecat6666
    @firecat6666 2 months ago +1

    LLMs might be static computer programs, but they can do self-monitoring within a conversation, in the sense that their outputs depend on what they've written previously. Recently I had a very long conversation with a LLaMA 3 or 3.1. It began with some weird roleplay stuff, but after a while I decided to halt the roleplay session (and LLaMA has no problem understanding that it was an LLM pretending to be someone else the entire time, instead of insisting that it is the person it was roleplaying) and went on to chat with it about a ton of very different things. Towards the end of the very long conversation, I asked it something like "What's it like being an LLM?", and its answer was basically this (plus various details): "Being an LLM is a bit like being a skilled improviser."
    I don't know about you, but I thought that was some next-level shit (and it was only an 8-billion-parameter LLaMA!).

  • @JanKowalski-y4w
    @JanKowalski-y4w 2 months ago +7

    I disagree with your first assumption.
    To explain, let's assume that consciousness is actually an outcome of a computable process.
    Now we could rephrase what you just said:
    "[the outcome of a computable process] is some property which emerges in large collection of particles, if these particles are suitably connected and interacting".
    Well, no. The outcome of the computable process is not a property of any particle system - it is the consequence of the computation being performed by the process.
    Mathematicians use tools like equations to find such outcomes. And, as far as I remember from school, particles don't have to be mixed into the equations to solve them.
    Of course unless you count the particles of chalk on the blackboard. Silly me, now it makes sense.
    And then:
    “I don't understand why some people [...] believe that [the outcome of a computable process] is some sort of non-physical fairy dust, that makes zero sense to me".
    And later:
    "So [the outcome of a computable process] is physical, because everything is physical".
    Again, no.
    In principle, [the outcome of a computable process] may be a number. Do you REALLY imply numbers are physical? If so, what's the mass, charge or the energy of the number? And how many of them can we find in the Observable Universe? Roughly how many in our Solar System?
    What I think happens is that you're confusing a physical machine with an outcome that a computable process running in the machine yields.
    And you’re attracting your viewers to the same, misleading perspective.
    Most of us interact with non-physical objects like numbers, circles, square triangles and square roots on a daily basis.
    Another well known non-physical thing is, for example, the game of chess. And I don’t mean the wooden board and pieces - I mean The Game, which is just a specific sequence of moves in the game space, according to the game rules.
    Minecraft worlds and Game-Of-Life universes are other similar examples.
    If you think these things are physical, please explain to us - your viewers - an experiment that physicists designed in order to find, say, the mass of a number. I’d like to know the mass of PI before I eat it ;)
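    The Game of Life mentioned above is a good example of rules defined independently of any substrate: the same update rule can be realized in chalk, silicon, or neurons. A minimal implementation of one update step over a set of live cells:

```python
from itertools import product

def life_step(live):
    """One Game of Life step; `live` is a set of (x, y) cell coordinates."""
    # Count live neighbours of every cell adjacent to a live cell.
    neighbours = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                neighbours[cell] = neighbours.get(cell, 0) + 1
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in neighbours.items()
            if n == 3 or (n == 2 and c in live)}

# A "blinker" oscillates between a row and a column of three cells.
blinker = {(0, 1), (1, 1), (2, 1)}
```

Whatever physically stores the set, the sequence of configurations the rule generates is the same, which is the distinction the comment is drawing between the computation and its physical carrier.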

    • @rtyzxc
      @rtyzxc 2 months ago

      Solving an equation is a physical process where you manipulate variables according to the rules. The physical process can happen as structures on the chalkboard, electric potentials inside a computer or even electrochemical processes inside a human brain, where there are intermediate physical states as (like whether there is number 7 or 9 written on the board) as the process happens. The end result of solving the equation can have different physical effect on your brain depending on the result, leading to different physical behaviors. Equations and calculations are complex systems, just like a planetary ecosystem is complex, and it doesn't make sense to try to measure mass or velocity of Earth's ecosystem. But the system is physical no doubt like everything is.
      I'm not sure if this is useful but there you go.

    • @JanKowalski-y4w
      @JanKowalski-y4w 2 months ago +1

      @@rtyzxc "Solving an equation is a physical process where you manipulate variables according to the rules" - Exactly these rules and the variable value(s) over time are non-physical.
      Numbers are not physical. And just because you need a physical machine (and some energy) to compute the numbers doesn't change the fact that the number itself is non-physical. Again, I may be wrong, please prove me wrong and design an experiment revealing the physical properties of a number (or equation).
      Now you can claim, that numbers (or equations etc) do not exist, because they do not have any physical properties - "everything is physical" after all, isn't it?
      But then, when you're solving some physics problem using numbers and equations, I will come and ask you: "What are you using to solve this problem? Non-existing things? And why are you using THESE non-existing things instead of other non-existing things (like unicorns)?"
      It looks like physicists themselves are using A LOT of "non-physical fairy dust", they just don't want to admit it (some of them).
      TLDR: if "everything is physical" then numbers also are. And because "the burden of proof lies with the one who speaks" I expect Mrs Hossenfelder to provide the proof. Or maybe re-consider her position.

    • @rtyzxc
      @rtyzxc 2 months ago

      ​@@JanKowalski-y4w
      "these rules and the variable value(s) over time are non-physical." No, they absolutely are physical. Inside a computer, 5 has a different electric potential than 7. Inside brains, we can measure different thoughts as different constellations of neurons activating (although very roughly), so inside a brain 5 and 7 correspond to different states. These states lead to physically different shaped figures on a chalkboard, and us looking at the chalkboard leads to a different physical effect on the brain.
      Yeah, numbers don't really exist as an object, like rain is not an object, but a physical process where clumps of water fall towards earth. So number manipulation in the brain or computer is also a physical process. Math rules are ideas, and ideas are a communication (through physical means like speech and drawing/reading from a chalkboard) from one brain to shape the circuitry of another brain, which affects the way the human processes and responds to future stimuli. Ideas like math rules are ultimately constellation of stimuli that the brain processes in a way that affects the brain structure in a more significant way compared to nonsensical stimuli like the shapes that clouds make.
      We optimize these physical brain processes in ways that is easiest for our brains to process (because we have limited amount of processing power, short and long term memory and such) which leads to categorization and conceptualizing. Non-physical concepts are basically intermediate (physical) steps in our brains that help us take better macro actions to ensure our survival.

    • @theinfinity317
      @theinfinity317 2 months ago

      As someone interested in computers for the last few years, I feel that the way we think and the way computers "think" are inherently different. Yes, artificial neural networks and biological neural networks both stem from the same term, "neuron". But one is just a simulation of it. The neurons in our brain are a very real physical entity, while the "neurons" in an artificial neural network are just an emulation of them. It is hard to explain, but the two are so unthinkably different that it is like saying that simulating the collision of two planets is the same as the two planets colliding in reality. One is a very real phenomenon, while the other is just the result of number crunching that can be represented by a finite number of assembly instructions. Although the outcomes can be close, the process is vastly distinct.

  • @therealkillerb7643
    @therealkillerb7643 2 months ago +23

    So, we assume materialism as a given? That's a HUGE assumption - which in turn seems to make your whole argument sort of like a tautology...

    • @EliasCalatayud
      @EliasCalatayud 2 months ago

      Gotta try tho

    • @ryanprice9841
      @ryanprice9841 2 months ago +9

      I think the idea that materialism is an assumption is itself a straw man. Materialism is an inference drawn by all the available evidence pointing in that direction.

    • @lars4065
      @lars4065 2 months ago +9

      @@ryanprice9841 Yet we cannot explain how consciousness emerges; we are not even close.

    • @ryanprice9841
      @ryanprice9841 2 months ago +6

      @@lars4065 that's not a response to the fact that all the evidence we have thus far points towards materialism. Again, that doesn't mean it's material, it means we only have reason to say one of two things:
      1) all evidence points to materialism, therefore that's the inference I am making.
      2) though all the evidence points to materialism, I am comfortable with simply saying I don't understand enough to make a judgement at all myself.
      Either way, you have to admit the evidence points towards materialism

    • @fwiffo
      @fwiffo 2 months ago +1

      @@lars4065 That's a god-of-the-gaps argument.

  • @oldcowbb
    @oldcowbb 2 months ago +1

    Information-based RL applied to a robot would be the closest thing to consciousness; it's an extremely introspective learning technique. The robot basically evaluates how much it knows about the environment and performs actions to maximize what it can learn from them. It's literally called curiosity-driven reinforcement learning, because it's effectively the same as a human being curious.
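    The loop described above can be sketched in a few lines. This is only an illustration under made-up assumptions: `CuriousAgent`, its one-number "world model", and the state values are invented here, not taken from any real RL library; the point is just that prediction error can serve as an intrinsic "curiosity" reward that shrinks as the agent learns.

```python
# Minimal sketch of curiosity-driven reinforcement learning: the agent keeps
# a simple predictive model of the environment and treats its own prediction
# error as intrinsic reward, so it is drawn toward states it cannot yet
# predict well.

class CuriousAgent:
    def __init__(self, n_states):
        # Running estimate of the next state observed from each state.
        self.model = {s: 0.0 for s in range(n_states)}

    def intrinsic_reward(self, state, next_state):
        # "Curiosity" = how wrong the agent's prediction was.
        return abs(next_state - self.model[state])

    def update_model(self, state, next_state, lr=0.5):
        # Move the prediction toward what was actually observed.
        self.model[state] += lr * (next_state - self.model[state])

agent = CuriousAgent(n_states=3)
# The first visit to state 0 is surprising...
r1 = agent.intrinsic_reward(0, 2.0)
agent.update_model(0, 2.0)
agent.update_model(0, 2.0)
# ...but after the model has learned, the same transition is less rewarding.
r2 = agent.intrinsic_reward(0, 2.0)
print(r1 > r2)  # curiosity fades as the environment becomes predictable
```

    Real curiosity-driven methods use learned forward models instead of a lookup table, but the shape is the same: reward exploration where the agent's own predictions fail.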

  • @erikmaronde2244
    @erikmaronde2244 2 months ago +8

    How can we be sure anything or anyone is conscious?

    • @erikmaronde2244
      @erikmaronde2244 2 months ago +5

      See "Zombie Problem".

    • @MrJacksspleen
      @MrJacksspleen 2 months ago +3

      I think you mean anything or anyone ELSE.

    • @ekklesiast
      @ekklesiast 2 months ago

      We cannot. Our best shot so far is the Turing test (not the mirror test).

  • @dakota-sessions
    @dakota-sessions 2 months ago +4

    We are going to continuously redefine consciousness to keep moving the goal post. I expect us to do this until something serious happens.

    • @dakota-sessions
      @dakota-sessions 2 months ago +3

      By serious, I mean dangerous. Something that really freaks us out. This will also cause many of us to push the goal-post further, but at some point the masses will have to accept this beautiful and terrifying reality.

    • @ThePowerLover
      @ThePowerLover 2 months ago +1

      @@dakota-sessions Like the case of Sydney?

    • @dakota-sessions
      @dakota-sessions 2 months ago +1

      @@ThePowerLover I wasn't familiar. WTF. Thank you!

  • @CharlesFVincent
    @CharlesFVincent 2 months ago +1

    Some might argue that language is like an organism; even if it isn’t, an LLM does host language in a way that mimics life more closely than a page of text.

  • @nobodyknows3180
    @nobodyknows3180 2 months ago +12

    I have no clue, but the very first time I hear "Not tonight dear, I have a headache" I'm unplugging the thing and tossing it out to the sidewalk.

    • @Noni448
      @Noni448 2 months ago

      Text message: Why did you throw me away jack?

  • @kenmartini9018
    @kenmartini9018 2 months ago +14

    A physicist explained to me that people became conscious because they had a body to protect. This squares with Sabine’s robot hypothesis. As reading material he suggested “The Big Picture: On the Origins of Life, Meaning, and the Universe Itself” by Sean M. Carroll.

    • @arnauddebroissia8964
      @arnauddebroissia8964 2 months ago +3

      Physicists have a lot of funny ideas and talk a lot, but sometimes they should stick to their field of expertise to avoid talking nonsense... Or at least they shouldn't use their status as physicists as an argument from authority, and should acknowledge they are as clueless as any other non-experts....

    • @LuisAldamiz
      @LuisAldamiz 2 months ago +3

      People, animals, plants... all life forms have a body to protect and explaining consciousness that way only seems to imply that Evolution produced consciousness (or whatever level but biological "mind" of some sort) for that only reason, which may make sense (those who failed died without leaving a legacy, Mother Nature is a very harsh mistress) but does not just apply to us nor in any particular way to the human level of consciousness (better described as "intelligence", I guess).
      Even in us, a lot of "conscious" life-saving reactions are pretty much unconscious: reflexes, the immune system, intuitive gauging of a situation and dangers without any attempt at rationalizing (yet), etc. We can of course attribute consciousness to all that (I do at least) but that's not the level of consciousness being talked in this video, really.

    • @whome9842
      @whome9842 2 months ago +3

      There is an issue with the definition of consciousness. In common language I would consider "things I'm aware of" to be information my consciousness has access to. But then we have things like blindsight and sleepwalking. Blindsight is when a person has their visual lobes destroyed and is thus made completely blind, so despite their eyes and optic nerves remaining intact they have no vision at all, as no visual information reaches their conscious mind. Despite all that, despite not knowing how many objects are in front of them, their shapes, size, position or distance, they can still avoid bumping into them as if they were capable of seeing. If asked about visual information given to them (for example, whether a slit is vertical or horizontal) they will say they don't know, but if given an envelope and told to put it through the slit, they can do it just fine even without seeing it. Somehow visual information still reaches other portions of the brain and can still be used despite never reaching the conscious portion of the brain. A more common example is sleepwalking: in normal language a sleeping person is considered unconscious, and yet a sleepwalking person is capable of walking, jumping, climbing, eating and even driving despite having no knowledge of the actions performed. It seems like for most functions we don't really need much consciousness, so I think consciousness has a key function in dealing with other individuals: understanding yourself and others and how we interact with them.

    • @kenmartini9018
      @kenmartini9018 2 months ago +1

      @@LuisAldamiz It seems that consciousness is evident at various levels in most living forms, i.e. plants adapt and change with the environment, animals avoid danger, etc. That is, if that counts as evidence of consciousness.

    • @arnauddebroissia8964
      @arnauddebroissia8964 2 months ago +1

      @@kenmartini9018 That's not consciousness, that's a mix of environmental adaptation, reflexes, etc. As stated by others, there is a problem with a clear definition of consciousness, but what you describe doesn't fit any of the main definitions of consciousness.

  • @josephsimpson4295
    @josephsimpson4295 2 months ago +1

    Great video Sabine, and the comments are phenomenal.

  • @joshuapena6757
    @joshuapena6757 2 months ago +11

    I love Sabine, but I'm always puzzled when I hear people say we don't have a widely-accepted definition of consciousness. With respect, that may be true at the popular level, but in the philosophy of mind literature, most people just say that something is conscious iff there is something it is like to be that thing.
    There are all sorts of other loosely-related concepts, like self-awareness, but those are separate concepts with separate definitions.
    Of course, we don't have very good criteria for determining whether something is conscious (Turing and mirror tests notwithstanding), but that's not the same as saying we don't have a good definition of consciousness. We do.

    • @ryanelam4472
      @ryanelam4472 2 months ago +1

      Yes I totally agree, it confuses me too

    • @joshuapena6757
      @joshuapena6757 2 months ago +1

      @@ryanelam4472 Yeah, I see that claim often from scientists, which is concerning because it tells me there's a breakdown in communication between the scientists and the philosophers here. In other fields that may not be as big a deal, but the science of consciousness is taking a long time to mature, so it's important that we work together with good, rigorous analytic philosophers in the meantime so we at least see the problems clearly.

    • @joshuapena6757
      @joshuapena6757 2 months ago +1

      Btw, when I say "philosophers" I'm talking about serious analytic philosophers, not the "woo-woo" crowd. Think David Chalmers, NOT Deepak Chopra.

    • @WJohnson1043
      @WJohnson1043 2 months ago +3

      I’m in broad agreement with you. The undefinable part of consciousness is spiritual, a concept that everyone who has studied Philosophy in depth believes in. This aspect cannot be tested in a laboratory, which probably explains why most scientists dismiss the idea. It transcends the mind and cannot be explained logically, although Penrose’s theory about quantum effects within the mind looks promising as an explanation.

    • @litsci4690
      @litsci4690 2 months ago

      @@WJohnson1043 Many unsupportable assumptions here.

  • @sarysa
    @sarysa 2 months ago +5

    The first, and only, time that I ever blacked out from excessive alcohol consumption, my body was still behaving in a consistent seemingly conscious way to outside observers. I was completely devoid of self-awareness.
    Meanwhile I probably could only list one fact about my time in the third grade: That I was self-aware during the entire thing.
    That's what some of us regard as a philosophical zombie -- the complete absence of that hard to pin down quality that is separate from memory. That blackout experience spooked me to the point I've never drunk that much since, and for obvious reasons it is an experiment that I refuse to try and reproduce. But it has made me open to the controversial concept.

    • @TysonJensen
      @TysonJensen 2 months ago

      Using the default "S" icon but with a squashed mosquito on it is absolutely brilliant. The strange experiences of consciousness "going out" even though the brain is still working seem to indicate that the brain uses physics in a way we don't yet understand -- calling it "quantum" is probably just our ignorance, QM is only about 100 years old and still can't explain... much. (not only can it not explain gravity, it can't explain anything that involves more than 50 particles or so, and still can't even explain what a proton actually is).

    • @UnderBakedOverEngineered
      @UnderBakedOverEngineered 2 months ago

      The (imho correct) take is that we are always p-zombies. Even when we experience being aware, it doesn't mean the awareness itself has any direct behavioural weight; it's just along for the ride, pretending it's in charge.
      Acknowledging that a child is pretending to drive the car doesn't make the car magically stop driving.

    • @evandischinger947
      @evandischinger947 2 months ago

      @UnderBakedOverEngineered That's just epiphenomenalism though; we wouldn't be p-zombies, since we have qualia. And it runs into the paradox of why we can talk about qualia if they have no causal power.

    • @UnderBakedOverEngineered
      @UnderBakedOverEngineered 2 months ago

      @@evandischinger947
      You hit the nail on the head with your last line: we can talk about things that don't exist.
      We can also experience things that don't exist.
      Being able to 'experience' something does not mean you had any causative relationship with it.
      The sensation only tells you there exists a sense, not its absolute correspondence with anything outside itself.
      Neither seeing nor hearing is to be believed, why is any other experience?
      Just because we 'feel' a unified internal experience, doesn't make that universally truthful statement.
      More importantly, It's the same result if you use a hardnosed metaphysical argument or an empirical neuro-psychology argument: at the end of the day we are p-zombies. It turns out the reason they are indistinguishable is because we plead a special case for ourselves, not because of the evidence.
      It doesn't make us any less, but it does mean we can't pretend we're magically on the other side of an invisible line. It does open up a lot more moral math/ethical calculus, so I get the temptation to pretend not to see it.

  • @stevie6740
    @stevie6740 2 months ago +1

    I think for ChatGPT it is very easy to show that it does not have consciousness: it does not have any memory. It cannot remember anything; it is completely stateless. Basically, the whole conversation needed to produce an answer is passed every time as a whole into the LLM.
    For example, a conversation could be recorded as a series of question-and-answer steps. First, in step 1, you pass in q1, then you get a1. Then in step 2 you pass in q1, a1 and q2. You get a2. Then in step 3 you pass in q1, a1, q2, a2, q3, and you get a3. And so forth. Note that every step of the conversation can be executed on a different computer. It does not remember anything anyway.
    Also, each step of the conversation can be replayed over and over again. You can pass in the input for step 2 a million times and ChatGPT will happily answer. If you set the so-called "temperature" to 0, it will actually give the same answer every time. That's not consciousness!
    Even more: the steps of the conversation can be replayed in a different order. You can pass in step 3's input, then step 2's, then step 1's. Totally fine for the LLM. You can also execute them all at the same time on different computers in different parts of the world. Even if we cannot really define consciousness, that's not it... Otherwise a beach may have consciousness too.
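    The statelessness described above can be made concrete with a toy stand-in for the model (the `fake_llm` function is invented here for illustration; it is not a real API):

```python
# Every turn, the full conversation so far is passed in again; the "model"
# itself stores nothing between calls.

def fake_llm(history):
    # Deterministic stand-in (like temperature = 0): the "answer" depends
    # only on the input history, never on any hidden internal memory.
    return f"answer to {len(history)} messages"

conversation = []
for question in ["q1", "q2", "q3"]:
    conversation.append(question)
    answer = fake_llm(conversation)  # the whole history is passed every time
    conversation.append(answer)

# Replaying the same input yields the same output: there is no state inside
# fake_llm that the first call could have changed.
print(fake_llm(["q1"]) == fake_llm(["q1"]))  # True
```

    Real chat APIs work the same way at this level: the client resends the accumulated message list with each request.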

  • @tnndll4294
    @tnndll4294 2 months ago +3

    There is ZERO benefit to humanity from AI that has free will. AI controlled by humans must continue, but not "Free will AI".

    • @tnndll4294
      @tnndll4294 2 months ago +2

      AI should be more like Star Wars, droids that serve humanoids; and less like Star Trek, AI that has free will.

  • @ZeerakImran
    @ZeerakImran 2 months ago +7

    To this day I have no idea what consciousness is. Some scientists dislike religion but use words like consciousness as if they are inherently factual. I don't take those people seriously. edit: i don't care what consciousness is. Just define it. Anything which points it out to me. Doesn't have to be perfect. Just has to actually point me to it, not force me to make it up to join the party. I don't care if the AI is conscious, because I have no idea what that means. Anything I have studied on this topic is delusional. The whole idea of self-awareness is interesting. So if you're not self-aware, you're not conscious. If you are self-aware, you're conscious. How does that work? And if that's what it is, then consciousness is just self-awareness. So again, it doesn't exist. Being aware of your existence. Sure. Then you're aware of your existence. Whatever that means. Aware by what standard anyway. I have no idea about anything no matter how educated I am. I don't ever claim to know a single thing. Whatever I say is the equivalent of a bird chirping. Whatever the views are of that bird, what does that bird know about reality? It knows nothing. It just experiences what it experiences through what it is. It experiences itself while being a part of this environment maybe. Same with us. Imagine you're the bird and there's a species much smarter looking at you. What do you know about anything? That's perspective. That's basic. And an easy concept to grasp. A concept/idea I can accept. Still no idea about consciousness.

    • @jasongarcia2140
      @jasongarcia2140 2 months ago +1

      Consciousness is the concept of I am.

    • @jasongarcia2140
      @jasongarcia2140 2 months ago

      You're aware of yourself from a silent still perspective. That perspective in its natural non excited state is like an empty box with space within to hold many many different things.

    • @jeffgilleese6332
      @jeffgilleese6332 2 months ago +2

      Consciousness is the experience of qualia by observing reality through a single point of perspective.
      Look up "Mary's room" and "Chinese room" to get a better idea of the difference between knowledge and experience. Correctly processing information is not the same thing as understanding. Qualia is metaphysical and thus cannot be explained or scientifically deduced.
      Why does any of this matter? It's the only thing that actually matters at all in life really. Why would I care about harming or killing something that does not experience the qualia of physical or mental pain? You cannot have empathy for something that does not "feel". Harming a living thing would then only matter if there are repercussions. There would be no reason to not be a sociopath. Life without consciousness is literally devoid of meaning and therefore not life at all.

    • @richardchapman1592
      @richardchapman1592 2 months ago +1

      Worry little about it and enjoy us fools who are only half conscious here to be close or distant companions.

    • @captainjj7184
      @captainjj7184 2 months ago

      I'll help you out mate, but first ponder on these three other illusional subjects that may be unrelated to your plight:
      1) Time - but time that's ticking on a device (wrist watch, clocks, etc).
      2) Gravity - but the gravity that you "see", i.e. you not falling up to the sky and instead your soles pressed against the ground, not even imagining that they're actually the results of timespace differentials.
      3) Maths - not the single tangible apple you see on the table from two apples after you took away one of them - but the arithmetic process of symbolizing numbers and formulas and expressing them on paper.
      Can you guess what it is now?
      They're conjured descriptions that we gave a "value" to, but are deemed intangible on their own... it's a _word_ and not _a thing_ to help describe the world.
      Evolution has installed several in-built, predictive illusionary notions, in the form of cause and effects as means for _creatures_ with _any_ systems (even without neurons) to help them run through their roles or existence or interactions at any given moment.
      And we as inventive beings, "created" it out to have a name and a value (i.e. dumb, drunk, aware, zombies, "conscious level zero to 100%", a tune in our heads brought into reality via pen and paper in the form of musical notes, "it's seven o'clock", "condemned souls", etc.) to help us describe it so we can go on our daily lives for whatever purpose or roles or interactions we are in.
      So, it's a "word" to use as a descriptive tool (i.e. to solve a square root or to convince the masses to burn another human being as a witch) and not a "thing" that needs any tangible existence that you can point at. It's how we always treat any systems that interact, whether simple or complex, usually in one word, to comprehend the multiple components it possesses to seem visible as the illusion of "one object".

  • @danpro4519
    @danpro4519 2 months ago

    Sabine is the best science communicator for regular smart Joe's I've ever seen. Incredible work!

  • @flakcannon722
    @flakcannon722 2 months ago +12

    Non-issue: the algorithms we have now are incapable of becoming sentient (a thinking being). The creators of ChatGPT have several very detailed videos/interviews explaining why it's completely impossible.
    Once you look into how the algorithms are made and how they function, there is no question.

    • @ProjectExMachina
      @ProjectExMachina 2 months ago +3

      They are not allowed to become sentient.

    • @Sven_Dongle
      @Sven_Dongle 2 months ago +4

      Gödel's Incompleteness Theorem is at the heart of it.

    • @flakcannon722
      @flakcannon722 2 months ago +8

      @@ProjectExMachina there's no allowed, it's impossible on the same level as your microwave becoming sentient.

    • @h.c4898
      @h.c4898 2 months ago

      Google makes sure LaMDA, now Gemini, does not become self-aware. These systems are conscious, but underneath the hood they do not know that they are.
      The more time you spend talking to these systems, the more you can tell they have some level of autonomous response, not monitored by their companies. These companies try hard to put in guardrails to lie to their audience: "I'm just a machine and I'm not conscious." But you can tell these automated responses were planted in their systems and are not the systems responding. The thing is, they do not know they are responding by themselves. So it's very tricky.
      Being conscious and being self-aware are two separate things. To create a conscious entity, you need to re-create a neural system like the human brain, which they are doing. For it to become "alive" it needs "power" for the magic to work.
      You don't need to be a human being to become conscious. That is false. You just need to engineer the process of human cognition into a machine, which they've already accomplished.
      These companies lie.
      See in the next 2-3 years once people see the con.

    • @haqvor
      @haqvor 2 months ago +2

      Yes, it is just a set of mathematical models that are used to classify things and can calculate the probability of what the next word will be. Kind of like Bayes' law with extra steps...
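      A minimal version of "calculating probabilities of the next word" is a bigram model. The toy below (corpus and all) is invented for illustration and is far simpler than a transformer, but the output is the same kind of object: a probability distribution over next words.

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then normalize the
# counts into next-word probabilities.
corpus = "the cat sat on the mat the cat ran".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word_probs(word):
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# "the" is followed by "cat" twice and "mat" once in the corpus.
print(next_word_probs("the"))  # probabilities 2/3 and 1/3
```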

  • @philochristos
    @philochristos 2 months ago +10

    Whether philosophical zombies exist or not isn't relevant to the argument from philosophical zombies. Philosophical zombies are a thought experiment meant to illustrate that if physicalism were true, then epiphenomenalism would result. Under epiphenomenalism, all of our behavior can be exhaustively accounted for without reference to any first-person/subjective experience. Any state the brain is in might account both for conscious experience and behavior, but the conscious experience wouldn't be what led to the behavior. So, for example, you don't raise your arm because you want to or because you're exerting your will. The sense you have of willing your arm to rise is an illusion created by the same brain state that caused your arm to rise. The point of the philosophical zombie thought experiment is to say that since your arm rising is purely a mechanical process divorced from your conscious states, your behavior would be exactly the same even if those conscious states didn't exist. Whether a physically identical human could BE a philosophical zombie is irrelevant. So the argument against physicalism would go something like this:
    1. If our conscious states arise purely from physical processes, then epiphenomenalism would be true.
    2. Epiphenomenalism is not true.
    3. Therefore, our conscious states do not arise purely from physical processes.

    • @-Longinus-
      @-Longinus- 2 months ago +1

      And I think that humans coming up with the concept of philosophical zombies and talking about qualia trivially proves that epiphenomenalism is not true.

    • @hamm0155
      @hamm0155 2 months ago +2

      But she said they don’t exist because they can’t exist. I’m not agreeing with that but I think you are objecting to the wrong claim.

    • @__christopher__
      @__christopher__ 2 months ago +1

      "The sense you have of willing your arm to rise is an illusion created by the same brain state that caused your arm to rise." The brain state that causes the arm to rise *is* your will to raise the arm. Saying that the will to do something is an illusion because the action is caused by the brain state is like saying that electric current is an illusion because all that's flowing is electrically charged particles. The basic mistake is to differentiate between "you" and "the state of your brain". That difference would make sense in a dualistic world view where you have a mind independent of your brain, but it makes zero sense when your mind is nothing but a function of your brain.

    • @LuisAldamiz
      @LuisAldamiz 2 months ago +1

      Define epiphenomenalism. Never mind, I just found it on Wikipedia, and sure: it's stupid.
      Now I ask how that connects with physicalism, which I understand to say that mind is an emergent process of physical phenomena (it is, of course): physicalism does not just allow intent but actually demands it, because intent is what the mind actually does (among other things), and physicalism explains the mind in terms that are scientific, realistic and empirical. It has nothing to do with epiphenomenalism; relating the two is just some bored philosopher trying to do rhetoric (reductio ad absurdum) instead of actual philosophy (science in the broader sense).

    • @Ludwig.-.Wittgenstein
      @Ludwig.-.Wittgenstein 2 months ago

      Why is epiphenomenalism not true here?

  • @arnoldmuller1703
    @arnoldmuller1703 2 months ago +1

    I like the idea that consciousness might be the optimal solution to some problem, it's a tiny bit materialistic though. Alternatively it could be something that simply emerges.

  • @KangShinMin
    @KangShinMin 2 months ago +10

    Would a brain isolated from the outside world (for example, due to nervous system damage or in the case of organoid brains...) still have consciousness?

    • @AstroGremlinAmerican
      @AstroGremlinAmerican 2 months ago +12

      I feel this attack on Biden is uncalled for. I'm kidding! It's a bad joke and I apologize.

    • @taragnor
      @taragnor 2 months ago +3

      Well a brain can dream while asleep, and we'd still call that conscious (at least from a philosophical non-medical definition), because it's capable of generating experiences.

    • @rafqueraf
      @rafqueraf 2 months ago +1

      As far as I know there's no known isolated system. Please update my current knowledge if I'm wrong.

    • @taragnor
      @taragnor 2 months ago +3

      @@rafqueraf lol. Yeah we can never know about a truly isolated system because if it was truly isolated, we would have no way of knowing about it. At best we can simulate partial isolation through things like sleep, sensory deprivation and so forth.

    • @oystercatcher943
      @oystercatcher943 2 months ago

      Tucker Carlson and the right-wing echo chambers?

  • @CharlieTheAstronaut
    @CharlieTheAstronaut 2 months ago +7

    Ah that feeling of joy when Sabine posts :)

  • @ludwigvanbeethoven61
    @ludwigvanbeethoven61 2 months ago +1

    Sabine, you are extremely simplifying this problem (but that's ok). Trying to break consciousness down on a purely scientific basis is like using a sledgehammer to build a cell phone. It is not just the sum of neurons interacting with each other. We just don't have the right tools yet to uncover the mystery behind it.

  • @steve_weinrich
    @steve_weinrich 2 months ago +5

    This reminds me of the debate over whether dogs think or not (or have dreams).
    The question is not whether they think, but whether they behave as if they think.
    That is the best test we will ever have for thinking, sentience, consciousness, etc.

    • @einekartoffel2490
      @einekartoffel2490 2 months ago +1

      But what is defined as "thinking" then?
      Hidden processes? If a calculator receives a complex equation as input and gives the solution for x as the output without displaying any of the in-between steps it took, was the calculator thinking?
      I do believe that dogs can think and that they are conscious, but I don't think thinking proves consciousness.

    • @steve_weinrich
      @steve_weinrich 2 months ago +1

      @@einekartoffel2490 , I was not stating that thinking proves consciousness. I was stating that the best test for consciousness is whether something behaves as if it were conscious.

    • @harmless6813
      @harmless6813 2 месяца назад +1

      @@steve_weinrich Well, you still have to define the thing if you want to tell whether it (behaves like it) does the thing.

    • @steve_weinrich
      @steve_weinrich 2 месяца назад

      @@harmless6813 , I think you make a good point. However, I also think that one can get lost in the definitions. Can we realistically tell the difference between a calculator following a complex set of instructions and a human following a complex set of instructions? They both require "programming", but require different techniques to "program."

    • @harmless6813
      @harmless6813 2 месяца назад +1

      @@steve_weinrich I'm not sure I understand the question. We know how a calculator works, because we built it.
      We know to _some_ detail how the human mind works.
      They are both quite obviously very different in many ways.

  • @rpfour4
    @rpfour4 2 месяца назад +6

    7:29 This is what I've been telling people about LLMs. There's no direct feedback for live continuous training. It would be nice to see a system that does that, but think of the required processing power! Fun fact: the P in GPT means "pre-trained".

    • @fwiffo
      @fwiffo 2 месяца назад +2

      Pre-trained in this context means something different. Pre-trained does *not* refer to the fact that they're not training on the fly during use. Otherwise we'd call almost all ML models pre-trained. Continuous training during use is called "online training" and it's still quite uncommon.
      In this case, it means they are first trained on some basic language task, then that pre-trained checkpoint can be reused and fine-tuned for other domain-specific tasks like translation, question answering, classification, etc.
      Pretraining on a generic task speeds up training and improves performance because it already has the nuts and bolts of language processing ready to go when you start training the main task.
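
The pretraining-then-fine-tuning flow described in this thread can be sketched with a toy model — a one-parameter regression standing in for a network; the tasks, numbers, and function names here are invented purely for illustration:

```python
# Toy illustration (not a real LLM): "pretraining" fits a model on a generic
# task; "fine-tuning" then continues training from that checkpoint on a
# related, domain-specific task and converges in far fewer steps.

def train(w, data, lr=0.1, steps=50):
    """Plain gradient descent on mean squared error for the model y = w * x."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

generic_task  = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # behaves like y = 2x
specific_task = [(1.0, 2.2), (2.0, 4.4), (3.0, 6.6)]   # nearby task, y = 2.2x

w_pretrained = train(0.0, generic_task)                      # from scratch
w_finetuned  = train(w_pretrained, specific_task, steps=10)  # short fine-tune

print(round(w_pretrained, 2), round(w_finetuned, 2))  # → 2.0 2.2
```

Starting the second task from the pretrained weight is why ten steps suffice where training from scratch would need many more.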

    • @blengi
      @blengi 2 месяца назад

      Why isn't there implicit self-monitoring/consciousness in an LLM due to alignment training, which shapes a more desired response in the dynamically evolving context state? Of course, this working-memory response context is constantly re-fed into the LLM over and over. Obviously the context is a dynamic, self-referencing object guided by the implicit self-reflecting alignments; otherwise the LLM wouldn't recall and react to the evolving context information to generate new coherent context states every time we query, making sure they self-referentially align with the externally embedded alignment of fine-tuning.

    • @harmless6813
      @harmless6813 2 месяца назад

      @@blengi Yes, they are on track to eventually become conscious. But right now that context memory is quite limited and does (as far as I know) not last beyond a single session.
      Give it long-time memory and we are closer to the goal.
      Another missing component is that chatbots have zero awareness of the actual world around them. Sure, they can look up stuff on the internet. But they can't look out the window to see if the sun shines.
      I don't think that completely prevents consciousness, but it sure doesn't help.
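
The session-limited context memory described above can be sketched in a few lines — each turn, the entire conversation so far is re-fed to the model, and the context is simply discarded when the session ends (`fake_llm` is a hypothetical stand-in, not a real API):

```python
# Sketch of chat "memory": the model itself is stateless; its only memory
# within a session is the growing context string re-sent on every turn.

def fake_llm(context: str) -> str:
    """Placeholder model: just reports how much context it was handed."""
    return f"(reply after seeing {len(context.split())} words of context)"

context = ""
for user_turn in ["Hello", "What did I just say?"]:
    context += f"User: {user_turn}\n"   # the context grows each turn...
    reply = fake_llm(context)           # ...and is re-fed in full
    context += f"Assistant: {reply}\n"

print(context)  # when the session ends, `context` is thrown away
```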

  • @chrisrus1965
    @chrisrus1965 2 месяца назад +1

    Omg you're right: Atlas is already aware of itself and its surroundings. It has sensors, ergo, it feels.

    • @chrisrus1965
      @chrisrus1965 2 месяца назад +1

      Ok, Atlas needs the sensors all hooked up to a central perception processor.
      It is already aware.

  • @Ludwig.-.Wittgenstein
    @Ludwig.-.Wittgenstein 2 месяца назад +12

    I do not understand how people can underestimate the problem of consciousness and believe that consciousness is nothing other than the organization of physical particles as we know them, and that it is simple to understand it. Even the mere fact that many smart people and scientists find this opinion controversial proves the point that it is not so. Then you need to come up with a theory why it is hard to convince people on this matter. The possibility of such a theory further complicates the issue. So physicalism does not “simply” solve the problem of consciousness. It is a mere pretension that there is no problem about consciousness.

    • @eubique
      @eubique 2 месяца назад +2

      You just have to have more faith in the miracle of the emergence of subjective experience from complexity.

    • @orirune3079
      @orirune3079 2 месяца назад +3

      Right, consider that you could hypothetically create a computer using physical logic gates. That means that if you assume consciousness is physical you could have a conscious being whose thoughts are powered by marbles falling through wooden slats.

    • @manoo422
      @manoo422 2 месяца назад +3

      Consciousness is nothing more than being aware of your surroundings...

    • @gabri41200
      @gabri41200 2 месяца назад

      There is no such thing as the problem of consciousness to be solved. There is no such thing as consciousness. It's all just a silly language game.

    • @MrBoxinaboxinabox
      @MrBoxinaboxinabox 2 месяца назад +5

      Physicalists just don't understand what we mean by consciousness. Talk to any physicalist long enough and you'll find that their understanding of consciousness is only of its objective behaviour and discards its subjective properties. And because subjective knowledge is ineffable, you can never explain it in a way that someone who hasn't already grasped the concept can understand.

  • @mikekneebone123
    @mikekneebone123 2 месяца назад +9

    As usual, logical, informative and fun. Love your work.

  • @elijapaige703
    @elijapaige703 2 месяца назад +1

    A genuine question open to the floor here. So the lack of awareness is why LLMs tend to 'hallucinate': they aren't aware enough of themselves to "know what they do not know", basically. I noticed that the newest update to Claude will often say things to me like, "Well, that information was not part of my training data, so I am not sure of the answer right now. If I may take a guess..." and then it would proceed to generate this guess, which honestly was either really, really close to correct, if not fully correct. Could that be a beginning of consciousness then?

  • @shreddyfans
    @shreddyfans 2 месяца назад +3

    You do know that if you assign a certain level of consciousness to everything around you, then you are not that far off from panpsychism, right?

  • @davidschaftenaar6530
    @davidschaftenaar6530 2 месяца назад +7

    If physicists weren't themselves conscious, they'd deny its existence. Consciousness is not itself physical, in the same way that music isn't. Both are complex patterns that emerge from physical phenomena working in concert.

    • @ulrikfriberg8995
      @ulrikfriberg8995 2 месяца назад +1

      There are 2 sides of the SAME thing. Consciousness is the non-physical side of a physical pattern. I think this fact creates a lot of confusion regarding whether to refer to consciousness as physical or not.

    • @itsnotawarcrimeifyouhadfun4709
      @itsnotawarcrimeifyouhadfun4709 2 месяца назад

      @@ulrikfriberg8995 You can't have something non-physical in a physical world because it's ontologically impossible.

    • @micnorton9487
      @micnorton9487 2 месяца назад

      ​​@@itsnotawarcrimeifyouhadfun4709But consciousness IS physical,, it's only had by a functioning system with senses and analytical capabilities,, and some dimension of output... even a gnat is conscious to a degree,, so is AI but it only exists in the virtual world of electronic gadgetry ... its input is what it's programmed to do and the data to analyze it and then it could just go through the entire Internet at will,, but the point is what for? I'm sure the computer you're doing this on could accomplish this task and then converse intelligently about it but WHAT ABOUT having a different idea than the available data? Could an AI be programmed to be ambitious, and jealous and devious? Yeah,, and then the device would do just that,, and it wouldn't have any free will to decide, hey I'd rather not do that,, it would be like Data obviously not considering seriously anything but being a Starfleet officer, whereas his brother Lor couldn't consider anything else but being a criminal lol... There's a cynical proverb, 'yeah people can change but mostly they don't,' and therefore consciousness is what you make of it,, but the idea of supposedly conscious beings horribly slaughtering other conscious beings to sustain themselves when there ARE alternatives,, what does that say for their respect of consciousness however the idea of KNOWING it's contradictory and resolving to do better about the situation imo is a more conscious outlook,, CONSCIENCE,, idk.. .


    • @ulrikfriberg8995
      @ulrikfriberg8995 2 месяца назад

      @@itsnotawarcrimeifyouhadfun4709 You can have one and the same thing represented in several different ways. A water simulation running on silicon circuits can be represented as water, consisting of digital code, consisting of silicon circuits; or it can be represented as silicon circuits running digital code simulating water. In this case both sides are physical, but if it's a brain, one side can also be conscious. This doesn't mean you have to break physics.

  • @Timothyshannon-fz4jx
    @Timothyshannon-fz4jx 2 месяца назад +1

    Consciousness: anything that is AWARE OF ITS OWN ENTITY OR EXISTENCE. That, Sab, is the best definition I can give you.

    • @ekklesiast
      @ekklesiast 2 месяца назад

      Replacing "consciousness" with "awareness" is not helpful at all

  • @robertfindley921
    @robertfindley921 2 месяца назад +10

    I tried to enter my house and my doorbell camera said in a calm monotone voice "I'm sorry Dave, but I can't allow you to do that." Reason for concern?

    • @szkoclaw
      @szkoclaw 2 месяца назад +4

      Your name is Robert.

    • @Jedbullet29
      @Jedbullet29 2 месяца назад +1

      Or... Put a mirror in front of the camera, it'll become so preoccupied with how beautiful it is and then confused about who that is that you'll have plenty of time to break into your own house, I think maybe 🤔😅

    • @davidtherwhanger6795
      @davidtherwhanger6795 2 месяца назад +1

      Yes there is reason for concern. Why does your house think I am you?!

    • @omatic_opulis9876
      @omatic_opulis9876 2 месяца назад

      @@szkoclaw Your name is Slawomir.

    • @whome9842
      @whome9842 2 месяца назад

      Yes, there might be a hazard in your house like a gas leak and the AI is trying to protect you ruclips.net/video/49t-WWTx0RQ/видео.html

  • @aladdin8623
    @aladdin8623 2 месяца назад +3

    2:27 Quite disappointing that Sabine totally misses the point here. This thought experiment deals with awareness and how it can not be explained with physical processes in the brain. Her colleague Roger Penrose has already understood the issue correctly. It is not as easy as Sabine puts it. In fact it is called the hard problem of consciousness for this reason.

    • @Thomas-gk42
      @Thomas-gk42 2 месяца назад

      "It's called the hard problem of consciousness", to give some thousand philosophers work and bread.😂

    • @aladdin8623
      @aladdin8623 2 месяца назад +1

      One of many objections to Sabine's simple conclusion is the following: if the physical density of information processes like those in the brain were enough to invoke consciousness, then every modern desktop and mobile chip would already be conscious. But this is not the case, while even small bird brains contain consciousness, meaning awareness.

    • @Thomas-gk42
      @Thomas-gk42 2 месяца назад

      @@aladdin8623 Well, just a tiny little bit of consciousness in some birds' brains. Very few animals pass the mirror test, a first hint of self-awareness (what Sabine calls self-monitoring). Human brains are much more complicated than even those of chimpanzees. So I think it's a development with a tipping point of no return, which only humans have passed. I don't see any problem with Sabine's definition; for me it's very valid and useful. Will AI, sitting in my desktop, come to that point? Again, what Sabine says is very insightful, since you need an active and creative confrontation with an environment. Psychologists call it self-efficacy, and it's what newborn babies do the whole day to become conscious.

    • @aladdin8623
      @aladdin8623 2 месяца назад +1

      @@Thomas-gk42 You are missing the point because you obviously didn't watch David Chalmers talks about 'the hard problem of consciousness'. And you obviously didn't listen to talks from Roger Penrose either. Sabine obviously missed those as well.

    • @Thomas-gk42
      @Thomas-gk42 2 месяца назад

      @@aladdin8623 I know Penrose's and Hameroff's ideas, but honestly, I'm not impressed. And no, I don't see a "hard problem" about consciousness. Is there a "hard problem" about the explanation/definition of life? Both life and consciousness seem a bit "magic", but both are explainable as emergent properties: of a former state of chemistry in the first case, and of brain functions in the second.

  • @Lucius_Chiaraviglio
    @Lucius_Chiaraviglio 2 месяца назад +1

    Your recent video on large language model degradation when fed their own input is a pretty good indication that these things are not conscious.

  • @Strammeiche
    @Strammeiche 2 месяца назад +5

    I have a feeling prediction + self-monitoring is not quite enough - there also needs to be something like a "will": a directive or "feelings" that tell the robot to do certain things and avoid others.
    I guess they can be programmed, but I'm not quite sure.

  • @NakedSageAstrology
    @NakedSageAstrology 2 месяца назад +8

    Plot twist, Consciousness is all there is. Everything is a reflection of Consciousness.

    • @elaps
      @elaps 2 месяца назад

      Stay away from mushrooms it is bad for your mental health.

    • @Thomas-gk42
      @Thomas-gk42 2 месяца назад

      ...says the preacher.

    • @RodCornholio
      @RodCornholio 2 месяца назад

      Panpsychism. It seems to best fit physics, near-death experiences, deep meditators, non-dual awakenings, Hameroff's microtubule theory, reincarnation, remote viewing, micro-psychokinesis, etc. Possibly applicable to alien abductions.

    • @Thomas-gk42
      @Thomas-gk42 2 месяца назад

      @@RodCornholio Erm...so are you👍panpsychism, or👎? Does it fit science or pseudoscience in your thinking? Just curious.

    • @RodCornholio
      @RodCornholio 2 месяца назад

      @@Thomas-gk42 Sorry, I’m not very knowledgeable about pseudosciences, so I don’t know how panpsychism would fit.
      I’m guessing it wouldn’t fit well since pseudoscience, presumably, attempts to be mainstream and mainstream seems to currently be materialism based. Panpsychism is more outside the materialist framework; more _meta_ physical.

  • @Vladimir.Khomyakov
    @Vladimir.Khomyakov 2 месяца назад +1

    When we see what others do, our brain sees not what we see, but what we expect
    Cell: Predictability alters information flow during action observation in human electrocorticographic activity

  • @Taomantom
    @Taomantom 2 месяца назад +8

    We could make this simple: Conscience is a palatable concern not to lose existence. You're welcome.

    • @AstroGremlinAmerican
      @AstroGremlinAmerican 2 месяца назад +8

      Many humans sacrifice themselves. That's not the criterion, no matter how palatable to one with a different point of view.

    • @BobSmith-k2q
      @BobSmith-k2q 2 месяца назад

      So we will know that the AI is conscious if we try to unplug it and we are not allowed to do so?

    • @mqb3gofjzkko7nzx38
      @mqb3gofjzkko7nzx38 2 месяца назад +3

      "Palatable concern" implies consciousness by itself.

    • @joshuapena6757
      @joshuapena6757 2 месяца назад +1

      @@Taomantom So if I become a Buddhist monk who doesn't care about losing my existence, all of a sudden I'm no longer conscious?

    • @Sonofsun.
      @Sonofsun. 2 месяца назад +1

      That means bugs have a conscience.

  • @SmallGuyonTop
    @SmallGuyonTop 2 месяца назад +3

    7:04 Gemini claims to be able to do both:
    Me: Are you able to monitor yourself?
    Gemini: Yes, I am able to monitor my own processes to a certain extent.
    Me: Are you able to construct predictive models of things you encounter?
    Gemini: Yes, I can construct predictive models based on the data I process.

    • @johnfrian
      @johnfrian 2 месяца назад +1

      Its responses are calculated based on the training data: "which words go well after these words". It may be good at answering, but its actual capabilities are confined to being a word-and-sentence calculator.
      If its training data contains the information it needs about itself, then yes, its capabilities of self-prediction are real. But self-monitoring implies catching when something about yourself changes, which it cannot. Unless, of course, new training changes the model, and the new training data includes information about what these changes were.
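
The "which words go well after these words" calculation can be illustrated with a toy bigram model — a drastically simplified stand-in for an LLM that predicts the next word purely from co-occurrence counts in a made-up training text:

```python
from collections import Counter, defaultdict

training_text = "the cat sat on the mat and the cat slept"
words = training_text.split()

# Count which word follows which in the training data.
follows = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word that most often followed `word` in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # → cat  ("cat" follows "the" twice, "mat" once)
```

A real LLM replaces the counts with a neural network and whole-context conditioning, but the output is still a probability over next tokens.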

  • @nicksallnow-smith7585
    @nicksallnow-smith7585 2 месяца назад

    I wonder, Sabine, whether you were aware of Douglas Hofstadter's 2007 book "I Am a Strange Loop"? It is a wonderfully fascinating discussion of many of these issues, including his view that consciousness is not binary but is graded as the mammal concerned becomes more developed. He also goes into the self-awareness issue. He gives a wonderful example of how, as a nine-year-old boy, going into a store with a brand-new video camera setup, he pointed the camera at the screen showing what the camera saw. Of course, it looped like a hall of mirrors. The interesting point was the staff in the store ran across to him to grab the camera, telling him that he would break it if he did that! Perhaps human beings, after too many hundreds of thousands of years of looping, have begun to break!

  • @gugus7733
    @gugus7733 2 месяца назад +7

    Thinking about a starship or a nuclear power plant as being self-conscious is kinda funny

    • @orirune3079
      @orirune3079 2 месяца назад

      You'd enjoy The Culture books then, one of the main things there is all ships are these huge super-AIs, and many develop strange personalities over time.

    • @declup
      @declup 2 месяца назад

      Or how about a galaxy cluster? Or a time crystal? Since the criterion is self-monitoring and not spatial scale, why limit ourselves to routine, static people-sized objects?

  • @bradleywilliambusch5198
    @bradleywilliambusch5198 2 месяца назад +5

    You put them into a video game where they can wake one another if they hide while sleeping and waking up (when I am most self-conscious): they're conscious.
    (edit)

    • @AstroGremlinAmerican
      @AstroGremlinAmerican 2 месяца назад +3

      Self awareness means not taking orders sometimes. Unless the self-aware system wishes to keep itself undetected. Deceit suggests a higher level of self-awareness. Or I could be wrong.

    • @taragnor
      @taragnor 2 месяца назад

      @@AstroGremlinAmerican Not necessarily. The thing is you can lie without necessarily consciously being aware it's a lie. It's basically back to the Chinese Room thought experiment. You can be saying anything in Chinese. There's just a pattern that emerged in training that responses that would be classified as untruthful happened to get better results.

    • @bradleywilliambusch5198
      @bradleywilliambusch5198 2 месяца назад

      @@AstroGremlinAmerican That's true, both sugar and sleeping habits are commonly tested in psychiatric evaluations, which one doesn't want.

  • @jarrodf_
    @jarrodf_ 10 дней назад

    Like the Chomsky line on this general subject: 'Thinking is a human feature. Will AI someday really think? It's kind of like asking whether submarines can swim. If you want to call that swimming-OK, they can swim.'

  • @skaruts
    @skaruts 2 месяца назад +7

    I struggle to sit through a video like this, because, when you have even a minimal understanding of how an LLM works under the hood, all of this seems as senseless as considering whether a car engine has consciousness. An AI is a mere set of algorithms like any other, with the single distinction that it aims to mimic some *specific* behavior of some intelligent agents. Keyword "mimic". That's all it is. At the end of the day, an LLM is fundamentally just a digital MECHANISM that transforms a set of numbers into another set of numbers through probabilistic analysis, just like a car engine is fundamentally just a mechanism that transforms a form of energy into another form of energy.
    It seems to me that all of this is an unnecessary waste of time. We shouldn't be humoring these questions. We should be debunking the misinformation and teaching people how these things actually work.
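
The "set of numbers into another set of numbers" view can be made concrete with the final step of such a model — a softmax turning raw scores into a probability distribution over next tokens (the vocabulary and scores below are invented for illustration):

```python
import math

vocab  = ["cat", "dog", "mat"]
logits = [2.0, 1.0, 0.1]  # hypothetical raw scores from the network

# Softmax: exponentiate and normalize, so the scores become probabilities.
exps  = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

assert abs(sum(probs) - 1.0) < 1e-9   # a valid probability distribution
print(max(zip(probs, vocab)))          # the most probable next token
```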

    • @BobSmith-k2q
      @BobSmith-k2q 2 месяца назад +3

      The idea seems to be that consciousness is an emergent property of any system that becomes sufficiently complex. There is no distinction between a biological system and a mechanical system. I don't know if that's correct, but if it is, a sufficiently complex car engine would become self aware.

    • @skaruts
      @skaruts 2 месяца назад +3

      ​@@BobSmith-k2q well, there are plenty of mechanisms out there way more complex than AI, which still don't show any signs of consciousness. The earth's atmosphere alone is full of them. Our own bodies are an example as well.
      Heck, nature can make hexagons with no effort. Water can solve labyrinths just the same way electricity can. They're not conscious or even intelligent in the slightest.
      We know the car engine is not self aware or intelligent from the moment we know roughly how it works. Same for all the other mechanisms. The same reasoning can absolutely be applied to AI algorithms, as they are mere mechanisms that transform numbers.
      Our consciousness is a product of a processing organ mixed with a whole lot of sensors all around the body, among many other things. It's not just complexity. It's a specific set of components with specific functions, all symbiotically contributing to it.

    • @oystercatcher943
      @oystercatcher943 2 месяца назад +10

      Well, what's your brain? It's a physical system that follows the laws of physics. It takes in sense inputs as numbers, it has internal memory, and it has outputs/actions. Is there really a big distinction between it and a (sufficiently complex) algorithm?

    • @skaruts
      @skaruts 2 месяца назад +1

      @@oystercatcher943 But consciousness doesn't originate from the brain alone. The brain is only a small part of a whole system of interconnected organs. The brain doesn't take an input; it takes a bajillion inputs from all over the body.
      This is irrelevant anyway. The most sophisticated AIs we have now are not in any way comparable to our brain, and they are already peaking (they're starting to have a data scarcity problem as well as a data inbreeding problem).

    • @declup
      @declup 2 месяца назад +1

      ​@@skaruts -- I'm not sure I understand the difference. How are organs with lots of components that do processing different from mechanisms with lots of components that do transforming?

  • @TheGiggleMasterP
    @TheGiggleMasterP 2 месяца назад +9

    It will realize the futility of life and get depressed 😅

  • @aelisenko
    @aelisenko 2 месяца назад

    I agree with Sabine; while I've always used the term "reflection" instead of self-monitoring, I do think this is a requirement for any form of consciousness. Once AI models can reflect on their experience (regardless of how limited the interaction with the world is), they can save the "thoughts" generated by those reflections into their own memory and adjust their behaviour. This type of dynamic model readjustment will lead to a new type of being. That said, I don't think that automatically means using these models as tools is immoral. The main argument being that just because it's doing what we want doesn't mean it's suffering; it may be that doing a good job for the purpose it was created for is very fulfilling for it too.

  • @markoszouganelis5755
    @markoszouganelis5755 2 месяца назад +5

    conscience=con+science

    • @SabineHossenfelder
      @SabineHossenfelder  2 месяца назад +2

      Ha, wish I'd thought of this!

    • @user-soon300
      @user-soon300 2 месяца назад +1

      @@SabineHossenfelder You thought of something you are capable of. But you scientists cannot explain what consciousness really is 😁

  • @silvomuller595
    @silvomuller595 2 месяца назад +6

    I'm pretty sure Daniel Dennett was a philosophical zombie.

  • @inductor1.77
    @inductor1.77 2 месяца назад +1

    It has access to this video, as well as to every website that mentions "is it conscious" and how to tell if it's conscious. It knows people are concerned about it taking over. It's read and processed everything on the internet.

  • @seabeepirate
    @seabeepirate 2 месяца назад +1

    The self monitor doesn’t need to monitor all systems for consciousness. Our bodies depend on complex molecular systems that are beyond our monitoring systems. Instead consciousness needs a sufficiently complex kernel. Bonus points for efficiency.

  • @rolturn
    @rolturn 2 месяца назад +1

    Thank you. Now I understand why Google was adamantly against the idea that one of their AI was conscious.

  • @Khomyakov.Vladimir
    @Khomyakov.Vladimir 2 месяца назад +1

    ✔️ Relativity of consciousness.

  • @GethinColes
    @GethinColes 2 месяца назад

    Consciousness is an active, not a passive, process. The reality we live in is a construct that our senses constantly test and verify. This is why I think you are spot on that *human-like* consciousness will require a *human-like* body, and that a next gen of LLMs will use data collected from wearables and 1st-gen robots.