To answer the first question in the Q&A related to Church-Turing (1:09:00) in a way that might be more colloquial: the Church-Turing thesis is a statement that all systems can be computed by a Turing machine, whereas the Principle of Computational Equivalence is a statement that all systems are Turing machines. He spent the NKS book largely arguing this by showing that these CAs can emulate each other, so that one can in principle build a chain of emulations down to Rule 110, thereby proving the universality of the whole rule class and, by extension, that all systems following rules (even simple ones) are Turing machines.
As a result, if all systems are Turing machines according to computational equivalence, then they also sit in the same problem class as the halting problem, which means one cannot construct an algorithm that decides whether a given system will halt. Therefore all systems, even finite simple ones, are irreducibly difficult to predict.
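A minimal Python sketch of the Rule 110 automaton that the chain of emulations above bottoms out in (my illustration, not part of the original comment; the width, step count, and wraparound edges are arbitrary choices). The irreducibility point is that, in general, the only way to know row n is to compute every row before it.

```python
RULE = 110  # Wolfram code for the elementary CA discussed above

def step(cells, rule=RULE):
    """One synchronous update of an elementary cellular automaton (wraparound edges)."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell; watch the irreducible-looking pattern grow row by row.
cells = [0] * 40 + [1] + [0] * 40
for _ in range(30):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```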
Stephen has a truly modern mind.
He's definitely pointing the right way
The brain operates as a hierarchical Bayesian inference system, continuously minimizing free energy across nested Markov blankets to perceive and interact with a branching universe. The complexity and dynamism of this process lead to computational irreducibility, where the exact state of the system cannot be simplified without performing the entire computation.
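A toy illustration of the free-energy idea, under the simplifying assumption of a single Gaussian level rather than a full hierarchy of Markov blankets (this is a generic predictive-coding sketch, not the commenter's model): a belief mu descends the gradient of its prediction errors after each observation.

```python
import random

# Free energy for one Gaussian level with unit variances (up to constants):
#   F(mu) = 0.5*(obs - mu)**2 + 0.5*(mu - prior)**2
# "Perception" here is just sliding the belief mu down the gradient of F.

mu, prior, lr = 0.0, 0.0, 0.1
for _ in range(200):
    obs = 2.0 + random.gauss(0, 0.3)   # noisy sensory sample from a hidden cause at 2.0
    grad = (mu - obs) + (mu - prior)   # dF/dmu with unit variances
    mu -= lr * grad                    # descend the free-energy gradient
print(round(mu, 2))                    # settles near 1.0, between the prior (0.0) and the cause (2.0)
```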
Thank you
Perhaps free will is made possible as an experience due to the nature of computational irreducibility and the bounds of the computational capacity of the observer.
How is program sophistication defined?
How can we conclude that a program's sophistication is equivalent to that of an observing brain?
Is it when the observer no longer is able to formalize generalizations or abstractions about the program?
- "Time goes more slowly (when moving) because you have used up more computational budget of recreating yourself moving across space."
Is time a function of computation?
From a neuroscience perspective, time-slowing is considered a function of recollection rather than perception; are these ideas compatible?
I find what Stephen Wolfram is doing to be truly fascinating.
The inclination to map out the concepts with semantic significance is brilliant. I wonder if it is possible to generate embeddings of the theorems that entail this, rather than the superficial details.
Can a cellular automaton be used as a benchmark to test the intelligence of a program and how well it is able to predict patterns? Like an inductive reasoning ARC-puzzle.
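A speculative sketch of that benchmark idea: generate "predict the next row" puzzles from an elementary cellular automaton and score a model's guesses. The rule number, grid width, context length, and scoring metric here are arbitrary illustrative choices, not anything proposed in the talk.

```python
import random

def ca_step(cells, rule):
    """One update of an elementary CA with wraparound edges."""
    n = len(cells)
    return tuple(
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    )

def make_puzzle(rule=110, width=21, context=5, seed=0):
    """Return `context` consecutive rows plus the hidden next row to predict."""
    rng = random.Random(seed)
    row = tuple(rng.randint(0, 1) for _ in range(width))
    rows = [row]
    for _ in range(context):
        rows.append(ca_step(rows[-1], rule))
    return rows[:-1], rows[-1]

def score(prediction, answer):
    """Fraction of cells predicted correctly."""
    return sum(p == a for p, a in zip(prediction, answer)) / len(answer)

history, target = make_puzzle()
baseline = history[-1]                    # naive model: repeat the last visible row
print(round(score(baseline, target), 2))  # a real model should beat this baseline
```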
Love it!!!!!!!!
Probably the most profound and revolutionary world view, and it is provable... Wolfram may go down as the greatest genius of the century. No one has ever had such a profound vision of reality or been able to explain the central role of observation. The fact that observers like us can only perceive a universe like this is incredibly counterintuitive and powerful.
A bit hyperbolic. But Stephen Wolfram is certainly a creative and original thinker and a prolific inventor.
Wolfram is the most underrated physicist of our time, IMHO.
@@SentientsSave I don't think it's hyperbolic. I'm currently on the same endeavor as him, and making a lot of the same discoveries and conclusions internally and in my writings. But I would never be able to articulate them like he does, so eloquently and clearly. To add to that is the sheer amount of supportive data, simulations, and examples he provides with these discoveries as well. He's an incredibly creative, deep and original thinker.
But what brings him into the realm of "the greatest genius of the century" is the whole package he provides. He's incredibly sharp and quick, mentally. He's able to process these concepts and information much faster than the average person, and implement strategies to probe them much faster than others. He has an incredible intuition. There are a lot of people with some of these traits, but he stands out because he has all of them, at a high level. He really is in a class of his own.
He's the Musk of mathematics; he just has a lot of time and money.
So this stepwise progress is akin to steps of time, but what about the size of the steps and their effect? We actually do not know much about time: what is the step size (quantum, but surely random?), or maybe it is continuous, and first of all, what is stepping? Well, it is connected to gravity, maybe even such that space is the emergent part and time creates causality?
In simple words: LLM "understanding" is the dataset, and it's only as good as how accurately it contains that dataset. Forgetting the dataset and abstracting away from it isn't happening.
❤
Adding a parallel cost analysis, and comparing the use of developed technologies vs. non-conventional ones, could help limit the computational search. Language is a real problem, and the AI needs to use a base language that is greatly simplified, eliminating ambiguity of the branches. I have come to the conclusion that many languages have formulated ambiguities in order to obscure and give the speaker an advantage (causing many lawsuits, e.g. troy oz vs. standard oz). AI needs limits to prevent computational 'over-run'.
He is a physicist.
Is the thought of a unicorn a real thought?
Why not? Unicorn is a fiction, but the thought about it is real.
@@SentientsSave Is the unicorn a fiction... maybe one day the thought of the unicorn will lead to genetic engineers creating a living unicorn. In which case, what is fiction and what is real, and when?
@@kalliste23 Nope, unicorns are not real. Neither is the curvature of space, nor the quantum waves. Being real means being perceived, being physical. The acceleration of particles is real, force is not real.
@@saimbhat6243 Apparently your reading comprehension is lacking: I postulated genetically engineered unicorns, which would make them real. People are talking about turning chickens back into dinosaurs, for that matter.
does he not tire of giving the same lecture over and over?
He has integrated new ideas into it. For example, the idea that hobbling AI means limiting its abilities to computationally reducible algorithms is new and profound.
Through the years I have seen similar parts, but I always hear something interesting, something to think about.
Bro thinks he's unified physics! Delusional
"We cannot recognize the best among us, because we simply do not have the competency to be able to recognize how competent those people are."
@@synthclub You can recognize the best, or they wouldn't have your attention. Spending time amongst them is addictive, and you learn a few things.