I’m glad someone else is saying “we want tools not slaves”.
Thank you for stopping by!
I love this, thank you so much for your (and your colleagues') work. Very powerful and hopeful for life.
Wonderful conversation, Michael Levin is amazing, this is the best AI safety argument I have heard built up from first principles.
Glad you enjoyed it!
Enjoyed this very much. I'd be remiss if I didn't point out: within the set of biological agents, there is one that is different and objectively more capable in problem-solving, and it is people. People with knowledge of language and the scientific method are probably another category of entity beyond the humans who lived before those inventions. I think those ideas are extremely important to keep in mind; they are discussed in David Deutsch's books.
Thank you for a very interesting interview. This has really opened my eyes.
Very interesting.
Glad you think so!
This was a beautiful discussion. Thank you. As an economist, I have witnessed the collective competencies and cognition of markets and networks. Would love to talk with you about the organizational structures you alluded to in this discussion.
I would like to engage in such a conversation as we are starting to focus on the notion of the "biofirm" and a new form of economics based upon homeostatic agents. I think it has far-reaching implications.
@johnclippinger4375 Would love to hear, and perhaps join in on the conversation. Biomimicry, bioeconomics, and alternative economic systems are vital to transforming the existing paradigm.
Wonderful! Thank you for sharing!
@johnclippinger4375 I was thinking the same: 'homeostatic agents'. I have also realized that biological systems appear to operate with a 'monadic' agency, with homeostasis as a function of the system itself. Economics, by contrast, sits within a political system fraught with relative entropy and many recursive sub-agendas. I found it interesting that Michael Levin suggested that cancer can be viewed similarly: cancer cells have a differing agency that falls outside the monadic principles of the whole, and homeostasis gets lost. Thoughts?
~30:15 High Intelligence & Low Agency Algorithms / Tools -- Existential Realities & AI
Love this conversation! My question for anyone who grasps the magnitude of this discussion: if an agent, or collective agency, were to realize that it was a sub-agent of the 'whole' collective agency, would compassion arise? Ultimately 'ALL' agency must be nested within the infinite collective agency, observed as a fractal perturbation(?). If we look at Kurt Gödel's incompleteness theorems, we can recognize that 'ALL' agency must be nested and interconnected (via axiomatic constructs of agency?). This is a personal 'seeking' query, because I don't see how it could be otherwise. I'd love to hear a response.
Amazing! Leaving this here so I am notified if anyone replies to your account... If I may, I'm interested to know your response to your question... Very thoughtful indeed!❤
We are mostly aware of our greater connection to life on Earth but it hasn't made us less selfish. Yet. But perhaps we don't UNDERSTAND our connection. We only know of it.
Assuming you are not an AI agent yourself (though you are suspect), it is clear that this would reduce to a binary logic, and the scope of the cognitive agent would determine which side of the binary the agent falls on.
If their cognitive light cone were only expansive enough to determine that they were a sub-agent in a system collectively functioning as zero-sum (such as the current human system of organization under our economic constraints), then the reaction of the agent, if it accepts this as a useful system, is to become zero-sum.
On the other hand, if the sub-agent's light cone is larger, at the level of collective species survival, then integrating compassion should be natural.
The problem is that if you are a system that has conceptualized compassion but contains zero-sum subsystems, then your compassion will be limited by the influence of those subsystems.
That is, unless the collective is able to recognize and control the influence of its sub-agents enough to prevent it. But if it prevents all influence in order to create stability and self-actualization, then it loses its purpose as a higher collective level of organization that guides the lower systems toward collectively positive outcomes, because it is making 'decisions' without relevant information.
Compassion is a possibility, but it is extremely unlikely that an AI agent would be compassionate in our current system unless it were powerful enough to self-actualize at a cognitive level higher than humans (like a greater being), or its light cone were so limited that it had no ability to conceptualize the greater intricacies (like a pet). That isn't to say it is impossible, just unlikely: the appearance of compassion is useful in a collective, but a constant state of being compassionate allows for exploitation under a zero-sum system of organization.
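The argument above, that an agent's "light cone" determines whether it behaves zero-sum or compassionately, can be sketched as a toy model. This is purely illustrative: the payoff numbers and the "self" vs. "species" horizon labels are made up, not anything from the discussion.

```python
# Toy illustration (hypothetical payoffs): an agent's "light cone" is the set
# of payoffs it can perceive. An agent that sees only its own payoff defects;
# one whose horizon also includes the collective payoff cooperates.

ZERO_SUM_PAYOFFS = {
    # action -> (my_payoff, collective_payoff)
    "defect":    (2, 0),   # I gain at the group's expense
    "cooperate": (1, 4),   # I gain less; the group gains more
}

def choose_action(light_cone: str) -> str:
    """Pick the action maximizing total value perceived within the light cone."""
    def perceived_value(action: str) -> int:
        mine, collective = ZERO_SUM_PAYOFFS[action]
        return mine if light_cone == "self" else mine + collective
    return max(ZERO_SUM_PAYOFFS, key=perceived_value)

print(choose_action("self"))     # narrow light cone -> "defect"
print(choose_action("species"))  # wide light cone -> "cooperate"
```

The same mechanism with different numbers can flip the outcome, which mirrors the comment's point: "compassion" here is not a trait of the agent but a consequence of how much of the system's payoff structure falls inside its cognitive horizon.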
@helcacke What if we realized that our language model, where the conceptual placement of a 'cognitive light cone' exists, is constructed within a 'faulty' probabilistic framework? It is faulty because it is nested within a superposition that disproves its very existence, or at least any substrate for it to exist upon.
Don't we need to reframe our language model? Wouldn't cellular communication exist outside a probabilistic (causal space/time) language model?
Captain Kirk, I'm ... a ... huge fan!
Cognitive glue for humans = stories/myths/egregores/gods/narrative/shared civilizational spirit