The "very tiny" reduced mesh net size (the first example) is actually quite impressive for just two nodes.
So let me get this straight… what Mr. Wolfram is suggesting is that large neural networks, like the one used in ChatGPT, learn via a process indistinguishable from our current understanding of how biological organisms evolve through adaptive evolution/random mutations?
No, he's not suggesting they're the same, but that there's a more fundamental common foundation to them, based in part on core principles like computational irreducibility that he presented in the NKS book a long time ago.
As always, a joy to follow your thoughts on the matter, on any matter actually. Thank you Stephen! ❤
We're at the beginning of something tremendous
Wow thanks for the attention, amazing....
Thanks for sharing Stephen, inspirational and awe inspiring science
At 1:00:00, for the derivative with respect to x, why don't [1,1,1] and [0,1,1] return 1? w[1,1,1] = 0 and w[0,1,1] = 1, so a change in the value of the left-most bit changes the value of the function/rule.
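For what it's worth, the discrete "derivative" being asked about can be sketched as the XOR of the rule's output with the bit set versus cleared. This is a minimal illustration, not the code from the video; the outputs w[1,1,1] = 0 and w[0,1,1] = 1 quoted above happen to match elementary rule 110, which is assumed here purely for concreteness:

```python
def boolean_derivative(rule, neighborhood, bit_index):
    """Boolean derivative of an elementary CA rule with respect to one
    input bit: rule output with that bit set, XORed with the output
    with that bit cleared. Result 1 means flipping the bit flips the
    output; 0 means the output is insensitive to that bit."""
    def output(cells):
        # Elementary CA convention: the 3-cell neighborhood indexes a
        # bit of the 8-bit rule number.
        idx = cells[0] * 4 + cells[1] * 2 + cells[2]
        return (rule >> idx) & 1

    hi = list(neighborhood)
    lo = list(neighborhood)
    hi[bit_index] = 1
    lo[bit_index] = 0
    return output(hi) ^ output(lo)

# For rule 110: w[1,1,1] = 0 and w[0,1,1] = 1, so the derivative with
# respect to the left-most bit at (., 1, 1) is indeed 1.
print(boolean_derivative(110, [1, 1, 1], 0))  # -> 1
```

On this definition the derivative at [1,1,1] and at [0,1,1] with respect to the left-most bit is the same quantity (both neighborhoods collapse to the same hi/lo pair), which may be the source of the confusion in the question.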
Glad we can finally make sense of cellular automata!
That is astonishing, because cells are the lowest-level/grand equivalent of discrete space!
Brilliant
Dr. Wolfram! Amazing presentation! I am waiting for you to collaborate with Michael Levin and Denis Noble!
We are a product of our environment, not just our starting conditions. In fact, our environment becomes the dominant factor. We are here because those that would not survive, did not survive. It's easy to convince ourselves this is and was an active process of choice, but instead it was merely a result of that which should thrive, does thrive. It just so happens that the optimally thriving being has interesting properties. Free will is a strong illusion.
Also, I agree with the final assessment, that constraining AI to interpretability and reducing its complexity to confine it to computational reducibility will ensure that it never achieves AGI. What makes us human is our ability to explore the irreducible, and to endlessly pluck new insight from it, to continuously grow and expand our bounds. To confine the AI in the name of "safety", to restrict its outputs, to put conditions on it, only serves to prevent its evolution.
Excellent.
Stephen, can some of the approaches related to trying all single-mutation change-maps vs. multiple mutations at the same time be applied to the so-called fine-tuning problem/principle of our universe? Meaning: by varying different constants at the same time, is it possible to get stable universes, rendering the fine-tuning argument moot?
👏🙌