Fascinating discussion. The whole Machine Learning Street Talk atmosphere seems like a continuous developmental process. It not only shows many outstanding individuals, such as Prof. Mark Solms, but also produces sparks of joy by stimulating curiosity every single episode!
Kudos and keep on going with incredible content!
Please do a follow-up interview that is twice as long. Many great insights.
I agree.
You could also read his book. It's fascinating.
This guy is great. I agree about not getting lost in the arcana. It's something I detest. I think a lot of people don't see the forest for the trees. I'm often told I'm being simplistic, but I see it as elegance. Prof. Solms phrases his ideas really well. At their core, these are not difficult notions. Why cloud them with Byzantine terminology and academic non sequiturs? Occam's razor is a kind of error-function indicator.
Feeling life through many joys and many sorrows. You don't need an AI model to reflect it back to say it feels like something.
In my humble opinion, these are ground-breaking insights and a fundamental theory: a perfectly logical, naturalistic explanation.
Mark is such a wonderful thinker, his thoughts on consciousness are most compelling.
He has done great things for psychoanalysis through his work utilising neuroscientific methods. I enjoyed chatting with him about dreaming and consciousness 🎉
Indeed. But I don't get why he rejects IIT so readily. His objection seems weak: he asks who the subject is, and the subject is obviously the neural system itself, which is rendered conscious by sensing its own state and assigning values to it. Furthermore, why wouldn't a neural model running in a computer feel like something, à la Nagel, when it is in a certain state with a proper model architecture that associates such a state with vital urges for its running instance? I don't see how feelings could be anything other than just that: categorized state values that are strongly associated with certain preprogrammed urges (such as fleeing, aggression, etc.).
what a great and profound discussion.. thank you so much. all my respect to prof. Solms and the fantastic interviewer
I really enjoy Mark Solms' passion to understand and explain the processes of consciousness. I also learned much from his book, although I remain a constructivist with regard to the shaping of subcortical affective signalling into contextualised PFC emotional expressions. Great to hear the linking of value and meaning to information. Information is just noise without a primed encoding receiver.
Mark is a manic genius; you can tell his mind is just bursting at the seams with insight.
I had a lucid dream in which I went to the mirror and I was an Asian guy. I looked totally different, and I could sense my personality was also different, yet I felt totally like myself, just a different self. It took me a few minutes as I woke to remember who I was. That convinced me that the sense of self is not dependent on the perception and experience of self.
This was totally convincing to me. A discussion of consciousness, its relationship to a physicalist theory, the fundamental role of affect and the parts of our central nervous system that produce it, how affect arises and produces the relatively inefficient product of consciousness, just a bunch of great stuff
This conversation is a real gem. Thank you!
Solms has a lot of interesting insights, a great interview and I would love to hear from him again!
Mark Solms is on my list of intellectual heroes.
One of this month's best content you could listen to!
omg I love Mark Solms' The Hidden Spring. It totally changed my perspective on how the mind works.
I just ordered it on Amazon.
One of the best books I've ever read.
It doesn’t mine. Just planc it all down with a supreme of understanding of 5. Then a thousand years talk about it
I very much like Prof. Solms' use of the phrase "mental experience." It's an important distinction from the general term "experience" as it is often thrown about in discussions of consciousness. Mental experience implies cognitive processing. Experience merely implies being the object of an action.
Well done lads. Some nice probes into the brain and a sober consideration of the role and purpose of emotions as central to individualization and survival, and not just a superfluous, extraneous, repressible behavior.
I agree with consciousness emerging from "feelings", because affect guides us towards our "goals", sets our standards, and tells us what we want. Thus, from positive and negative feelings we learn a better, more cognitively capable version of ourselves, and ultimately achieve consciousness.
However, two comments:
1 - Would we say the rain is explained by the clouds, and not by water evaporation or the sun? We can't say consciousness comes from affect. In my opinion, it comes from the free energy principle's drive toward prediction, which evolved out of Darwinian survival. Strong predictions require better cognitive systems in which the agent is able to study its actions and the environment's responses (thus acknowledging itself as a "self" in an environment).
2 - Even if consciousness were derived from affect, how could we get insight into it using only affect? It is a really difficult problem to make machines "feel", but rather easy to make machines follow a loss function that optimizes free energy.
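To illustrate the "rather easy" half of that point, here is a minimal toy sketch of an agent following a surprise-minimizing loss: it holds a belief about a hidden environmental value and nudges it to reduce prediction error. All names and numbers are invented for illustration; this is not Solms' or Friston's actual model.

```python
import random

def observe(true_value: float, noise: float = 0.5) -> float:
    """Noisy sensory sample from the environment."""
    return true_value + random.uniform(-noise, noise)

def minimize_surprise(true_value: float, steps: int = 200, lr: float = 0.1) -> float:
    """Gradient descent on squared prediction error (a crude free-energy proxy)."""
    belief = 0.0  # the agent's internal prediction
    for _ in range(steps):
        error = observe(true_value) - belief  # prediction error ("surprise")
        belief += lr * error                  # update belief to reduce it
    return belief

print(minimize_surprise(3.0))  # belief converges near 3.0
```

Whether anything in such a loop "feels" its errors is, of course, exactly the commenter's open question.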
Please don't limit the time!
Make a 3 or 4 hour podcast, we could handle any duration! :)
Thank you!
Beautiful. This discussion reminded me of the book Zen and the Art of Motorcycle Maintenance. Especially the part where it’s mentioned that the brain filters information based on an underlying notion of quality.
interesting connection
so dense and I'm only 1 minute in. amazing.
We are constrained to act. In an agent, we can do this in a loop: recognize a state using a model, for example, and select an action in however complicated a way you want. Affect is chemical context and its cascading effects. In pythonic terms, it's a decorator.
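Taking the decorator analogy literally, a minimal sketch might look like this: affect wraps the agent's action-selection function and modulates it with a global valence signal. Every name here (`affect`, `select_action`, the state labels) is made up for illustration.

```python
from functools import wraps

def affect(state_values):
    """Decorator injecting affective context that modulates action choice."""
    def wrap(select_action):
        @wraps(select_action)
        def modulated(state):
            mood = state_values.get(state, 0.0)  # valence of the current state
            action = select_action(state)
            # Cascading effect: strong negative valence overrides deliberation.
            return "flee" if mood < 0 else action
        return modulated
    return wrap

@affect({"predator_nearby": -1.0, "food_nearby": +1.0})
def select_action(state):
    return "approach"  # the default deliberate policy

print(select_action("food_nearby"))      # approach
print(select_action("predator_nearby"))  # flee
```

The cognitive policy never changes; the affective wrapper changes what actually gets executed, which is roughly the analogy's point.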
Another fascinating episode. Your work is very much appreciated. Thank you!!!
Reticular activator... Love this bit of brain .. not heard much about it recently... Had almost forgotten about it!! 🎉🎉
Wow, loved this episode. ❤🎉
The neocortex is involved with future-time actions.
The older brain stem has to do with current time, being in the now.
Carl G. Jung was an insightful person and has a lot of thoughts on dreams. Plus he had a sharp wit, if you watch anything of his.
Amazing story about Mark's brother and his research
I lost my sister in a fire. It takes a long time to recover. I can hear the same emotion in his voice.
Solms' progress here is to locate the correlates of consciousness in the reticular activating system (RAS). He makes no progress on the hard problem of consciousness, admitting in the end that consciousness is indeed real and that tracking it to its precise physical correlates is enough. He declares he can work with that. I seem to recall that Solms has a project of building a conscious system. I question whether he can do that without an ontology of consciousness. Without one, he cannot say whether consciousness can be built in silicon, or whether substrate matters at all. A question I would ask him is whether consciousness is what many would characterize as an information process, as in the control exerted by motor neurons, or whether it is a purely physical thing like muscle, bone, and ligament: teleological mechanical devices being controlled. We can get different motor responses by how we signal our motor system. We can also get different feelings by how we signal the RAS. How, in this sense, is the RAS not simply another muscle?
You could answer that and understand the progress Solms has made on the ontology of consciousness by reading The Hidden Spring. He comes closer to providing the "why consciousness arose" aspect of the hard question than anyone else has so far.
@@jameswilliams-ey9dq And in a nutshell, how does Solms explain how consciousness arose?
@@wp9860 The proposition that Solms and Karl Friston use to explain how consciousness arose is the free energy principle. Consciousness, and specifically affect, results in an advantage when "feeling one's way" around novel challenges.
@@jameswilliams-ey9dq Friston has stated that the Free Energy Principle is not a theory of consciousness. He does offer a more speculative theory on how consciousness developed. I'm struggling to recall the context; it may have been relative to some high-level or interpersonal function. I also do not recall whether he addresses why certain things must be done consciously. Why couldn't a philosophical zombie, acting without conscious thought, perform everything that we as conscious human beings can? We control our blood pressure in an unconscious, zombie-like fashion. Why not everything else? Additionally, what explains the experience of perceiving red?
Ingenious ❤
Just for the record: he is borderline saying here that Commander Data is unconscious.
Ray Liotta's big brother! (once you see it you will never unsee it)
Elegant as all this is, it is haunted by the same question that haunts all consciousness theories: why is there subjective experience? We can imagine affective decisions being taken without anyone experiencing affects. The only way out is to assume the first-person perspective as the substrate of reality, what AGI researcher Colin Hales calls the FPP. What we call waking consciousness is where there is a greater degree of integration, memory, and self-reflection. Many have said that this is required to handle novelty, but that idea has always puzzled me. Do unconscious systems not handle novelty? What about the immune system? Is it conscious? If not, how does it cope with novel pathogens? Why does solving math problems in our head require consciousness while computers can do it without being conscious?
> why is there subjective experience?
Because it offers an evolutionary advantage and was thus selected for by virtue of survival.
@@Gnaritas42 How so?
@@sirkiz1181 Moving the goalposts. The question is why, not how. All the features you have exist for the same reason, the same why: they were an advantage to your survival, so they persisted. That's it; that's the end; there is no deeper to dig.
@@Gnaritas42 No I meant why do you think it was an evolutionary advantage to have a subjective conscious experience?
@@sirkiz1181 a sense of self is the ultimate reward function for learning, and thus surviving. This is the missing piece in AI everyone is working on now, a generalized reward function for training.
Great episode, but I wonder: does feeling actively help us survive directly in the moment, or is it something that comes after actions have already been decided subconsciously, and thus its only purpose is to allow communication to others about how we feel?
We need more dual-aspect monists. The popular story of serotonin causing happiness as the causal flow, when you consider that happiness ALSO causes serotonin, makes me wonder why more people aren't dual-aspect monists.
Interesting comments about reporting consciousness vs measuring it, especially in view of ongoing chain of thought LLM hype ;)
The functions and processes of consciousness are probably simpler than most people think. They are probably present even in arthropods and mollusks.
We just need to find the right cognitive architecture to activate consciousness in artificial agents. And I think that could happen quite soon, if we do the proper experiments with the software.
You should talk with Naotsugu Tsuchiya.
I said this 10 years ago and it's still a mystery to me.
Given the existence of LLMs, does it prompt us to question whether there is a specific subjective experience associated with speaking?
Are you thinking that LLMs may be conscious when they're generating text?
@@dungeon_architect No, I don't think that but it seems to follow from what Prof. Solms was saying.
Can you get György Buzsáki on the show, please?
Feeling should not be equated with or bound to chemical context injections. I think what we call feeling is fundamentally a property of emotional intelligence. In other words, in humans, all sorts of context is assembled for processing, including chemical messages, but there is nothing intrinsically special about chemical context. It's just more context. What humans do with this context is extraordinary, but given the same complex pipeline, an entity could do this just as well from the value of a state variable.
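One way to make that "just more context" claim concrete is a sketch in which a chemical channel is assembled and scored exactly like any other state variable. All field names and weights below are invented; nothing here comes from the episode.

```python
def assemble_context(sensory, memory, chemical):
    """Merge heterogeneous inputs into one flat context; no channel is special."""
    context = {}
    for channel in (sensory, memory, chemical):
        context.update(channel)
    return context

def evaluate(context, weights):
    """Score the assembled context with one uniform rule for every channel."""
    return sum(weights.get(k, 0.0) * v for k, v in context.items())

ctx = assemble_context(
    sensory={"light": 0.8},
    memory={"familiar_place": 1.0},
    chemical={"cortisol": 0.9},  # a plain state variable would serve equally well
)
print(evaluate(ctx, {"light": 0.1, "familiar_place": 0.5, "cortisol": -1.0}))
```

Once merged, the downstream evaluation cannot tell which value arrived "chemically", which is the commenter's point.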
It's a very logical and nice story, but it still doesn't explain HOW the brain generates inner subjective feelings. How does the firing of neurons in the brainstem give rise to feelings?
I'm very convinced of the purpose of consciousness, as Solms describes it. But yes it doesn't explain where consciousness comes from, and for me the door is still open for theories like panpsychism, or any other theory of consciousness as fundamental. Evolution may simply have leveraged consciousness for the purpose that Solms describes.
I am my subconscious. I don't have a consciousness in the sense that it is modeled today. What is mistakenly considered to be our consciousness is actually just a part of our brain that reflects back awareness of our subconscious.
So according to you the brain is both like a mirror and a magician since it not only "reflects" awareness of subconsciousness but simultaneously somehow reveals itself by taking the "sub" out of subconsciousness. Not a convincing hypothesis...
Whatever consciousness is, it is not computation!
That's ridiculous. Of course it is just a computation. A self-referential one.
@arde4 You definitely need to read a book about the theory of computation and Turing machines!
@@BoosterShot1010 already did, several. There's no reason why computation couldn't implement consciousness. Check out Blum's Conscious Turing Machine, or Graziano's model.
@@BoosterShot1010 Don't you say. I have an MSc in CS. What would your objection be?
I think your opinion is a result of your conception of what computation is and misses the profound, amazing complexity that lies behind that word, complexity and beauty that springs from simplicity.
Why did you invite Chris Hemsworth's father 😂
People worry about what a super intelligent AI might do but I’d look at it as payback for all the times one of us miserable monkeys scraped the cortex off a neonatal mammal to see what would happen.
There exists an amazing questionable thing about that. Hooking up two hallucinations of being perfectly acceptable to both.
Sorry, but I dissent. The reticular activating system is not about feeling or consciousness; it is only the dumb power supply of the wet computer.
His definition of information is... odd. He seems to think that once you know something it stops being information; that's just not how the word is used. Knowledge seems to be what he is talking about, knowledge being information that one knows. Being dismissive of Chalmers because he adopts nonstandard definitions doesn't make sense.
Also, Mary's room has an easy answer. The brain is categorically a computer, and consciousness is part of the software it is running. Mary's room becomes equivalent to wondering why, when you know everything there possibly is to know about a Nintendo console and cartridge and how they work, you still have to use the console to play Super Mario Bros and cannot just emulate the circuits and run the code in your brain.
Philosophy of mind seems inexplicably stuck in Enlightenment-era ideas. I don't get it, but as public-facing communicators about AI research, you guys should really be applying the consciousness-as-software paradigm to these supposedly mysterious consciousness conundrums when they come up in these conversations.
1:12:00 w(°o°)w