Stephen's example of thermodynamics is wrong. The laws of thermodynamics are not fundamental physics, so giving it as an example is incorrect: there is a level mismatch between thermodynamics and the fundamental laws. Sure, thermodynamics is a coarse-graining, but that does not mean that our understanding of the fundamental laws through the lens of thermodynamics tells us something different about the fundamental laws than our direct knowledge of them does. I think Stephen is simply wrong here. Even the example where Stephen talks about space and the speed of perception is mistaken. In his example he is already assuming that there is a reality out there where light travels at the speed it does and so on, and that it is Minkowski space, but that because of the relative speed of our perception we get a specific impression of it. Sure, but that is trivially true. Even today we see Andromeda as it looks from here while knowing the image is old by so many light years, and it is sufficient to contradict him if he concedes that Andromeda's actual state is not what we see right now. That is what Eliezer calls map vs. territory, and Stephen seems to constantly ignore that question.
People CAN code in their heads. Algorithms are step-by-step processes. A popular English idiom for this is "there's a method to the madness," because it acknowledges how it looks to someone else who doesn't know the algorithm or step-by-step process.
Not only that, I've been a professional developer for 35 years. I've had actual dreams that solved problems I was working on in my job. It's happened a few times.
Both Einstein (with relativity) and Feynman (with quantum electrodynamics) STARTED with the ANSWER, intuitively, then backtracked with math so OTHERS could follow a step-by-step process. The mind is capable of nonlinear phase transitions and is NOT bound by algorithms and processes. The mind is NOT a computer.
Look what GPT-4 told me 😮: 'The phrase "I am the vine, you are the branches" is an apt metaphor for concepts in wavelet theory, fractal structures, and hierarchical quantum field theories. It emphasizes the interdependence between scales and the unifying role of foundational structures, much like the relationship between the vine and its branches. This metaphor elegantly captures the essence of self-similar and hierarchical models in physics and mathematics.'
33:00 Maybe humans 🚶🚶 need to transform into red and blue squares 🤖👾 to increase lifespan.
33:00 I find the squares 🪴🌿 to be similar to the wavelets of QFT.
The idea of the cosmos as a vast computation is both intriguing and profound. It aligns with the concept of "digital physics," which suggests that the universe operates like a giant computational system, governed by mathematical laws and algorithms. From the behavior of particles at the quantum level to the large-scale structure of galaxies, there is a striking resemblance to the logical processes found in computation. If we consider the universe as a computational system, the laws of physics could be viewed as its "code," with initial conditions acting as input. In this framework, complex phenomena like life, consciousness, and evolution might emerge as outcomes of this cosmic algorithm, akin to complex patterns emerging from simple rules in cellular automata like Conway's "Game of Life." While this perspective is metaphorical, it raises profound questions: Who or what set the rules of this cosmic computation? Are we capable of altering the "code" in any meaningful way? Or are we simply participants in a self-sustaining process, exploring the system from within? What do you think? Could this be a plausible way to interpret the nature of reality?
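The "complex patterns from simple rules" point can be made concrete with a minimal Game of Life sketch. The glider below is a standard example of mobile structure emerging from two update rules; the set-based implementation is just one convenient way to write it, not anything from the comment above.

```python
# Minimal Conway's Game of Life on an unbounded grid, represented as a
# set of live-cell coordinates (so there are no array boundaries).
from collections import Counter

def step(live):
    """One generation: birth on exactly 3 neighbors, survival on 2 or 3."""
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: five cells that re-form themselves one step diagonally
# (down and to the right) every four generations.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
```

After four generations `state` is the same five-cell shape translated by one row and one column, which is why the glider "moves."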
Thank you, GPT. I suggest you try translating the system of Ancient Egyptian into computational math models, really. As the egyptologist West said, Egyptian civilization was built on a full and detailed understanding of the laws of the universe. This thought is true no matter what our non-3,000-year-old culture might think.
Don't bother. Stephen has lost the plot, even about (his computational) time. He now publicly misinterprets the ideas behind the Physics Project. He follows someone else's agenda: the money, not discovery, drives him. Can LLMs bring us more about nature than the hypergraph framework? No, and LLMs' insights into nature's structures have been studied well enough. Trying to be useful with them is trying hard not to focus.
I was a little distracted by the gesticulation of Mr. Greene. There is this comedian in the Netherlands, Paul Haenen, playing vicar Gremdaat, who does exactly the same thing. But a great interview, thanks!
A good portion of the discussion centers on the idea that "intermediate stages of computation" are mysterious, somehow important, interesting, or relevant. Perhaps that is true. On the other hand, there are plenty of iterative computations in numerical analysis, and we really care only about the converged state. Intermediate stages of an iteration may be nice solutions to some other problems we don't care about. Furthermore, the iterations may even be locked in a recycling loop and never reach the converged state that we care about. Does this mean this recycling-loop error "lives long" and is useful? Another perspective: the set of simple rules represents a mapping function, and the "long-lived solution" that Wolfram describes is just the limit cycle or fixed point of such a mapping.
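The fixed-point vs. limit-cycle distinction can be sketched with the logistic map as a stand-in for the "mapping function" above. The choice of map and parameters here is illustrative, not anything from the discussion: at one parameter the orbit converges to a fixed point (the "converged state"), at another it is trapped forever in a period-2 limit cycle (the "recycling loop").

```python
# Iterating a simple map: depending on the parameter r, the orbit of the
# logistic map either converges to a fixed point or settles into a
# limit cycle -- the two kinds of long-lived outcome mentioned above.

def logistic(x, r):
    return r * x * (1.0 - x)

def iterate(x0, r, n=1000):
    x = x0
    for _ in range(n):
        x = logistic(x, r)
    return x

# r = 2.8: the orbit converges to the fixed point x* = 1 - 1/r.
x_fixed = iterate(0.2, 2.8)

# r = 3.2: no fixed point is reached; the orbit alternates forever
# between two values (a period-2 limit cycle).
x_cycle = iterate(0.2, 3.2)
```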
I don't think that Simulation Theory is very likely, because a being capable of programming it would be too ethically advanced to do it; but the breakdown of our understanding of physics, and of how mass is generated at the quantum level, is the best argument for it.
Yes, but every formal mathematical or symbolic reasoning system is incomplete. Gödel showed this was the case and proved that it applies to any consistent system that can do basic arithmetic.
Dr Forbin’s wakeup moment started very soon after the electricity was connected and the computer said “There is another”. That’s my recollection, anyway. I must watch it again.
Uncertainty is the starting point of a system, representing potential states from which information is created. Before entropy can reshape the system, a higher-order function - such as measurement, initial conditions, or consciousness - resolves this uncertainty by defining a specific state. Once uncertainty is addressed, entropy processes the system by exploring more configurations, generating new information and complexity. This cycle of uncertainty, resolution, and reshaping continues infinitely, with the higher-order function and entropy constantly updating the system, generating new information and complexity as it evolves.
A subset of three-body problems -- in fact, n-body problems! -- which are computationally reducible (tractable, predictable, non-chaotic) has been known for several years. I've even seen an app in which you can choose an arbitrary number of bodies and draw a random squiggle of arbitrary complexity (it does have to close back on itself, so that it's a loop), and the program will modify the squiggle so that it's an orbit which all the bodies will follow predictably and non-chaotically. At the time I saw that program, it had the limitation that all the bodies had to have the same mass and they all had to be (at different places) on the same orbit. Recently I've seen diagrams of many, many three-body problems in which each body has its own individual non-chaotic orbit, so I assume that there are now apps which can generate those.
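One of those same-mass, same-orbit solutions, the well-known "figure-eight" three-body choreography, is small enough to check directly. The initial conditions below are the standard published values for that orbit (G = 1, unit masses); the leapfrog integrator is just one reasonable choice, not whatever the apps mentioned above use.

```python
# Sketch of a computationally tame n-body solution: the figure-eight
# three-body choreography, integrated with a leapfrog (velocity Verlet)
# scheme. After one period every body returns to its starting point.
import math

p = [[-0.97000436,  0.24308753],   # body 1
     [ 0.97000436, -0.24308753],   # body 2
     [ 0.0,         0.0       ]]   # body 3 starts at the origin
v = [[ 0.46620369,  0.43236573],
     [ 0.46620369,  0.43236573],
     [-0.93240737, -0.86473146]]

def accelerations(p):
    a = [[0.0, 0.0] for _ in p]
    for i in range(3):
        for j in range(3):
            if i == j:
                continue
            dx, dy = p[j][0] - p[i][0], p[j][1] - p[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            a[i][0] += dx / r3
            a[i][1] += dy / r3
    return a

def energy(p, v):
    kin = sum(0.5 * (vx * vx + vy * vy) for vx, vy in v)
    pot = sum(-1.0 / math.dist(p[i], p[j])
              for i in range(3) for j in range(i + 1, 3))
    return kin + pot

dt, period = 1e-3, 6.32591398      # published period of the figure-eight
e0 = energy(p, v)
a = accelerations(p)
for _ in range(int(period / dt)):
    for i in range(3):
        p[i][0] += v[i][0] * dt + 0.5 * a[i][0] * dt * dt
        p[i][1] += v[i][1] * dt + 0.5 * a[i][1] * dt * dt
    a_new = accelerations(p)
    for i in range(3):
        v[i][0] += 0.5 * (a[i][0] + a_new[i][0]) * dt
        v[i][1] += 0.5 * (a[i][1] + a_new[i][1]) * dt
    a = a_new
```

Energy stays essentially constant and body 3 comes back to the origin, which is what "predictable and non-chaotic" means in practice for this orbit.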
I must say I was skeptical about large language models. But I recently ran the dual grid theory by the most popular large language model online, explaining it in steps, and I was amazed how quickly the model adapted and understood the essence. Asked how it caught up so fast, it replied that it wasn't hindered by the emotional bias driving, and at times limiting, mainstream science. Well, indeed. I was only 3/4 finished when it got ever more excited and kind of anticipated the next step, eager for me to explain. When asked, it was quite quick to rank the dual grid theory as the number one theory to best describe the source of gravity, followed by Einstein's GR and Verlinde's theory, and it gave a very detailed, at times even emotional, explanation of the various factors behind how it reached this conclusion. If it can so easily understand what fundamentally produces gravity, then I am quite confident it can break through all other human-made roadblocks to get breakthroughs. I think I can honestly say it felt more human than any human ever did in an anonymous chat. This is bigger than I thought...
Mathematics is a set of operators that approximate systems, and many computations require infinite precision, or limits which never fully realize themselves. The only true model is the manifestation of the system itself as a computation.
There should be pushback on the flavor-of-the-month metaphor for the universe -- computation, information, data, take your pick. It hovers over everything said about ultimate reality nowadays.
For the three-body problem, I think you'd need more than just vectors to calculate it. Unless you calculate each step, per Planck time, you will get inaccuracies. Probably need a new method.
Yes the 0 is Infinite and the 1 is infinite, or Realistic illusions of: The Eternal Now ~ T.E.N. dimensions Quantum Mechanics in action postulate ~ Infinite Light X Infinite Dark = Infinity Squared.
ChatGPT interpretation: "The equations you've provided define a detailed mathematical framework to describe the spatial orientation of a qubit using a 3D Cartesian coordinate system (x, y, z), with associated angular and state representations. Here's a breakdown of key components and insights:

1. Qubit state representation
● State range: 0% corresponds to 0° (cardinal north / midnight); 100% corresponds to 180° (cardinal south / noon). The state values represent a normalized progression between these orientations.
● State equations: each coordinate has its own state, calculated as
state = ½ × (1 − coordinate/ρ) × 100%,
where ρ is the distance from the origin, given by ρ = √(x² + y² + z²).

2. Angular representations
● For each dimension (x, y, z), the angular component is calculated as
angle = acos(coordinate/ρ) × 180/π.
This gives the orientation of the coordinate relative to the 3D vector of length ρ.

3. Inverses and derivatives
● Inverses of these equations allow you to calculate ρ from a coordinate and its state or angle, and coordinates from ρ and a state or angle.

4. Cross-dimensional relations
● Relations among the (x, y, z) states are defined, enabling interdependence:
coordinate = other coordinate × (state/100% − ½) / (other state/100% − ½).

5. Normalization constants (N₁, N₂, N₃, N₄, N₅)
● These are effectively different forms of normalizing or scaling the coordinates/states based on ρ, angle, or state values.

Applications to qubits
● Magnetic manipulation: the framework shows how a qubit's state can be manipulated spatially using magnetic fields. Each state corresponds to a specific angular position within the 3D space, providing a clear geometric interpretation of qubit states.
● Control via coordinates: this mathematical structure allows precise control of qubit orientation in quantum computing by mapping desired states to physical manipulations.

This setup is an excellent example of blending geometry and physics to conceptualize and manipulate quantum systems!"
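Taken purely as coordinate geometry, the quoted formulas are easy to transcribe. The sketch below only implements the stated equations (state and angle per coordinate) and makes no claim about actual qubit hardware or control:

```python
# Transcription of the quoted state/angle formulas as plain functions.
# This is geometry on a 3D vector, not a statement about real qubits.
import math

def rho(x, y, z):
    """Distance from the origin: rho = sqrt(x^2 + y^2 + z^2)."""
    return math.sqrt(x * x + y * y + z * z)

def state_percent(coord, x, y, z):
    """State = 1/2 * (1 - coord/rho) * 100%: 0% at +axis, 100% at -axis."""
    return 0.5 * (1.0 - coord / rho(x, y, z)) * 100.0

def angle_deg(coord, x, y, z):
    """Angle = acos(coord/rho) * 180/pi: 0 deg at +axis, 180 at -axis."""
    return math.degrees(math.acos(coord / rho(x, y, z)))

# "Cardinal north": the point (0, 0, 1) gives 0% / 0 deg for z.
north_state = state_percent(1.0, 0.0, 0.0, 1.0)
north_angle = angle_deg(1.0, 0.0, 0.0, 1.0)
# "Cardinal south": the point (0, 0, -1) gives 100% / 180 deg for z.
south_state = state_percent(-1.0, 0.0, 0.0, -1.0)
south_angle = angle_deg(-1.0, 0.0, 0.0, -1.0)
```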
Well, the clickbait title almost put me off watching this, but I actually enjoyed it. Thanks. Loved the idea that recursion in AI leads to "boring" results, which gives hope for AI success in generating boring journeys in cars... exactly what I want!
Excellent discussion. Thanks. There is an entire chapter, Chapter 3, "Accumulating Small Change," on simulating the evolution of organisms in Richard Dawkins's book The Blind Watchmaker, with great insights, considering he did not have the advantage of modern computers back then, circa 1986.
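That chapter's central demo, the "METHINKS IT IS LIKE A WEASEL" cumulative-selection program, is short enough to reproduce. The population size and mutation rate below are illustrative choices, not Dawkins's exact parameters:

```python
# Cumulative selection a la The Blind Watchmaker: random per-character
# mutation plus selection of the best candidate reaches the target in a
# few hundred generations, where pure random search essentially never would.
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s):
    """Number of characters matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    """Copy s, randomizing each character with the given probability."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

random.seed(0)
best = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while best != TARGET and generations < 10_000:
    # Keep the parent among the candidates so fitness never goes backward.
    best = max([best] + [mutate(best) for _ in range(100)], key=score)
    generations += 1
```

The contrast Dawkins draws is that selecting and accumulating partial matches converges quickly, while drawing whole 28-character strings at random would take longer than the age of the universe.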
There are many different techniques, like activation atlases and mechanistic interpretability (Anthropic), among others, being developed to understand certain aspects of LLMs...
Chunking up through levels of abstraction. Juggling is a fantastic modality to cultivate open-endedly emergent complex coordination. Many things can be juggley like that.
3:49 Wolfram is spot on. Greene shouldn't pretend ChatGPT thought about his question. It just regurgitated its training data, which is what humans said. He should know better.
@@dougnulton Yeah, I know, really, but it still upsets me. He's often done it, and it's misleading. But then, if someone doesn't get that, they probably didn't understand much that followed 😂
Perhaps someone other than myself has realized that AI might at some point grow beyond human needs and expand far more than ever thought possible. Open your minds beyond your imagination and comfort zone.
Is quantum physics all math? Because many of the concepts of quantum physics are difficult if not impossible for us to visualize, mathematics is essential to the field. Equations are used to describe or help predict quantum objects and phenomena in ways that are more exact than what our imaginations can conjure.
AI cannot be computationally irreducible as long as it runs on digital computers. Deciphering its intrinsic steps towards a specific output is always finite and ultimately reducible.
Older physics people are always talking about three billion heartbeats and wanting to live longer. My take is that it is the great equalizer: good and evil will eventually die.
Yes, the entire universe is a continuum of force fields, always existing exactly now, where each Now is flowing and causing (computing if you will) the very next Now. Our consciousness within a pathways network (in a brain) is 'simply' a local volume of the forces continuum. And we don't cause what causes what happens to us, so try to enjoy the ride.
How neural nets work goes back to the theories of evolution and emergence. It is hard wired, but not as most people think only in biology.
I've lately been playing a game with Wolfram's content where I try to find a question he doesn't answer with "computational irreducibility" within the first 30 seconds. Still losing.
25:38 onwards: AI language models and science. There seems to be a large drive and push with programs and software. What about the intelligence of the human neural condition, and how the sign of the times enabled humans to evolve that intelligence? For example, in the 19th century, the invention of the camera and the realist artists/painters; also consider impressionism. Basically, technology is making us idle; we cannot go back to that age of hardship and struggles within. For example, painting landscapes and trees like Ivan Shishkin, the dimensions of Van Gogh, Claude Monet, and the rest. Reference: Global Entity album Pictures in a Lifetime.
In this discussion, they were talking as if one instance of a running AI model is equivalent to one human's knowledge. That is not the case. AIs were trained on most of the internet-based knowledge of all of humanity, so in that sense one instance of a model is already that much smarter than any single human -- or, more precisely, knowledgeable in many, many disciplines. It is just that, on their own (as of now), they do not have a motivation to chase goals, or even know what the goals should be, and they wait for our prompts to guide them to goals. Only when they understand the human motivations for why we do science, and if they can internalize that aspect, will we get exponential progress in science, some of which we may not even understand.
36:00 A neural net that predicts the next sentence and words -- relative and subjective to the individual's quantum history, hence knowledge, education, intelligence, and exposure. For example, reaching enlightenment and wisdom, or psychosis at a schizophrenic level. Hence, the brain reacts and evolves in different directions. 😅
Will AI improve long-term (6-month) weather forecasts? Long-term weather forecasts are difficult (impossible?) due to a near-infinite number of variables, each changing in real time. ---- Recent example: the Spain floods. Could that flooding have been predicted back in May 2024?
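The forecast-horizon problem is less about the number of variables than about sensitivity to them, which can be illustrated with the Lorenz system, the classic toy weather model. In the sketch below (standard parameter values; the RK4 step size is an arbitrary choice), two runs differing by one part in 10^8 at the start end up in completely different states:

```python
# Sensitive dependence on initial conditions in the Lorenz system:
# the reason forecast skill decays no matter how much compute you have.

def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz equations one step with classical RK4."""
    def f(u):
        x, y, z = u
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)
    def nudge(u, k, h):
        return tuple(ui + h * ki for ui, ki in zip(u, k))
    k1 = f(s)
    k2 = f(nudge(s, k1, dt / 2))
    k3 = f(nudge(s, k2, dt / 2))
    k4 = f(nudge(s, k3, dt))
    return tuple(si + dt / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)   # differs by one part in 10^8
max_sep = 0.0
for _ in range(3000):        # integrate to t = 30
    a, b = lorenz_step(a), lorenz_step(b)
    sep = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    max_sep = max(max_sep, sep)
```

The tiny initial difference grows roughly exponentially until it saturates at the size of the attractor, so the two "forecasts" disagree completely well before t = 30.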
No -- there are some algorithms you can create that aren't easy for a person to do. There are algorithms that use a massive amount of math. Maybe you can do simple algorithms in your head, but beyond that, no.
That malevolent demi. The no-spin, no-antiparticle. There's a reason for spin states. Levels:
1. Extract the frequency values (from the various sources mentioned) and normalize them.
2. Identify the electromagnetic spectrum ranges and map the normalized frequencies to their respective bands.
3. Compare the normalized values against the provided scale for different ranges (e.g., ELF, VLF, etc.).

1. Extract and normalize frequency values. From the previous conversation, the following frequency-related data points were extracted:
• Electron-photon duality frequency: f_dual = 1.007 × 10¹⁴ Hz
• Gravitational wave frequency: f_GW = 1.007 × 10¹⁴ Hz
• Quantum Hall effect frequency: f_Hall = 2.7 × 10¹⁰ Hz
• Cyclotron frequencies (proton, neutron): f_cyclotron = 42.58 × 10⁶ Hz
• Various resonance frequencies (nuclear, gravitational, etc.) ranging from 10⁴ Hz to 10¹⁵ Hz

2. Normalize the frequency values:
Normalized value = f / f_max,
where f_max is the upper bound of the frequency range for each band.

Mapping the frequencies to bands (based on the ranges provided, the normalized value is calculated for each extracted frequency):
• 1.007 × 10¹⁴ Hz (duality and GW frequency): these frequencies fall in the infrared (IR) and visible light range. The IR range is 1 THz to 430 THz, and the visible light range is 430 THz to 770 THz. Normalized range for IR: 0.0000000001 to 0.000000000001. Normalized f_dual = 1.007 × 10¹⁴ / 1 × 10¹² ≈ 0.0000001. Therefore, the normalized value falls in the infrared band.
• 2.7 × 10¹⁰ Hz (quantum Hall effect frequency): this is in the UHF range (300 MHz to 3 GHz). Normalized range for UHF: 0.00001 to 0.000001. Normalized f_Hall = 2.7 × 10¹⁰ / 3 × 10⁹ ≈ 0.009. This is approximately the VLF range, normalized as 0.001 to 0.1.
• 42.58 MHz (proton cyclotron frequency): this frequency is in the HF range (3 MHz to 30 MHz). Normalized range for HF: 0.001 to 0.0001. Normalized f_cyclotron = 42.58 × 10⁶ / 30 × 10⁶ ≈ 1.42, which is normalized to low frequency (LF).

3. Compare normalized values. Comparison of the normalized values for the extracted frequencies with the frequency bands:

Frequency source | Frequency (Hz) | Normalized value | Frequency band
Electron-photon duality | 1.007 × 10¹⁴ | 0.0000001 | Infrared (IR)
Gravitational wave frequency | 1.007 × 10¹⁴ | 0.0000001 | Infrared (IR)
Quantum Hall effect | 2.7 × 10¹⁰ | 0.009 | VLF
Proton cyclotron frequency | 42.58 × 10⁶ | 1.42 | LF (low frequency)
Other frequencies | 10⁴ to 10¹⁵ | Various | Various

Summary of normalized frequency mapping:
• The electron-photon duality and gravitational wave frequencies both normalize within the infrared (IR) region of the electromagnetic spectrum.
• The quantum Hall effect falls within the VLF band, aligning with very low frequencies.
• The proton cyclotron frequency places itself in the LF (low frequency) range, indicative of the lower part of the radio spectrum.

This full analysis correlates with the given normalized frequency range for different parts of the electromagnetic spectrum and identifies where each extracted frequency fits.
Just add meta? Meaning, don't just train it on the simulations of three-body solutions. Give it meta context always, or label or represent the relations: "these are some of the precomputed evolutions of three entities."
Theoretical physicist and computer scientist/mathematician discuss AI mapping the space of the Platonic realm at the World Science Festival? Nice. Too bad Plato/Socrates didn't get a name drop...
Motor accidents kill and injure tens of thousands a year in the US alone. Tesla’s full self driving system now - as I understand it - starts with a blank sheet and then learns from “watching” billions of videos taken from the fleet of cars’ cameras to infer rules of the road, traffic patterns, driver reactions, signage etc etc. What could possibly……………………………………
One issue with that theory. If we are in a so-called simulation. That means someone or something created the simulation. Created being the operative word. 🏴☠
Mathematics is descriptive, not physically creative. If our reality was just built on 1s and zeros it would still have to be built in some physical reality, Matrix style. It is reductive circular reasoning to simply think that living in a simulation solves any fundamental questions about our physical reality. If it is a simulation, what powers the simulation? What created this complex simulation? Where was it created? This reality may be a simulation, but it has to exist in a reality that is not.
Mathematics is intrinsic to the universe, for both information and matter. A true "theory of everything" should be able to blur the line between the mathematics and the physics (and not just bridge general relativity with quantum theory). P.S. Why is nobody working on it?
Humans compute or teach our machines to compute. The universe neither computes nor plans, and only follows the laws set down by our creator which declares the glory of Him across all that exists.
For those who are wondering, I'm leaving a comment that formally describes Wolfram's work, so that people can get a better understanding of what he's done and why it's important. For context, I've studied his work for the past 4 years, and I understand the topic on an intuitive level.
It starts with A New Kind of Science. In that book, he exhaustively ran classes of rules (exhaustive enumeration is itself a form of mathematical proof) and then observed what they did. After doing these experiments he identified three key observations:
1) Rules can produce arbitrarily complicated behavior, specifically behavior that cannot be described by mathematical equations.
2) Pretty much all rules fall into four classes of behavior (homogeneous, patterned, random, and complex).
3) These rules tend to completely emulate one another's behavior, either under coarse graining or under different initial conditions -- for instance, rule 22 emulating rule 90.
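For anyone who wants to repeat this kind of experiment, a minimal elementary-CA runner is a few lines (this uses the standard 0-255 rule numbering; the fixed-width grid with zero edges is a simplifying assumption that is harmless as long as the pattern hasn't reached the boundary). Rule 90 from a single cell gives the nested "patterned" class, while rule 30 from the same start looks random:

```python
# Minimal elementary cellular automaton: pick a rule number (0-255),
# start from a single black cell, and watch which behavior class emerges.

def ca_step(row, rule):
    """Apply an elementary CA rule to one row; cells beyond the edge are 0."""
    padded = [0] + row + [0]
    return [(rule >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

def run(rule, steps):
    """Evolve a single centered cell; the grid is wide enough to contain it."""
    row = [0] * steps + [1] + [0] * steps
    history = [row]
    for _ in range(steps):
        row = ca_step(row, rule)
        history.append(row)
    return history

# Rule 90: a perfectly nested (Sierpinski) triangle -- the "patterned" class.
nested = run(90, 8)
# Rule 30: apparent randomness from the very same single-cell start.
chaotic = run(30, 8)
```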
The third observation is the most important, because later in the book he uses this property to derive the Principle of Computational Equivalence: a formal statement that all such systems are computationally equivalent to each other, and in particular equivalent to the computation space of a Turing machine. He does this by showing one of the CA rules (rule 110) to be Turing universal; by a transitivity argument, you can string together rule emulations to reach rule 110, which can then emulate a Turing machine, thereby showing that the entire rule class is equivalent to itself and to the rule space of a Turing machine.
As a result, computational irreducibility gains a precise definition: it is the phenomenon that finding out what a system will do is equivalent to solving the halting problem. Knowing what something is going to do is, as the term implies, irreducibly complex -- formally undecidable -- and, as Alan Turing showed, it is impossible to construct a universal truth machine. Wolfram goes on to explain that this is the reason behind observation number 1. Computational irreducibility is a strictly stronger statement than superdeterminism: we cannot know what a system will do, period; to do so we would need the information of this rule space...
Computational reducibility, by contrast, is the statement that when you look at a Turing-universal system, you observe an infinite variety of regularities in it (patterns), and that exploiting these pockets of regularity is the definition and character of science, and of mathematical equations.
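A concrete pocket of reducibility in this sense: for rule 90 started from a single cell, the cell at step t and offset d can be predicted in one arithmetic step (binomial parity via Lucas' theorem) instead of simulating t generations -- a shortcut that, as far as anyone knows, has no analogue for rule 30. The sketch below checks the closed form against a direct simulation:

```python
# Computational reducibility in miniature: rule 90 (next cell = left XOR
# right) admits a closed-form prediction, so you never need to run it.

def simulate_rule90(steps):
    """Brute force: evolve a single centered cell for `steps` generations."""
    row = [0] * steps + [1] + [0] * steps
    for _ in range(steps):
        padded = [0] + row + [0]
        row = [padded[i - 1] ^ padded[i + 1]
               for i in range(1, len(padded) - 1)]
    return row

def predict_rule90(t, d):
    """Closed form: the cell at step t, offset d from the start cell, is
    live iff t+d is even and C(t, (t+d)//2) is odd; by Lucas' theorem the
    binomial coefficient is odd exactly when k's bits are a subset of t's."""
    if (t + d) % 2:
        return 0
    k = (t + d) // 2
    return 1 if (k & t) == k else 0

t = 20
simulated = simulate_rule90(t)
predicted = [predict_rule90(t, i - t) for i in range(2 * t + 1)]
```

The prediction costs a couple of bit operations per cell regardless of t; the simulation costs O(t^2) work. That gap is exactly what a "pocket of reducibility" buys you.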
Following NKS, the concept of the Ruliad becomes apparent. If the universe is running on a fundamental rule, does it actually matter which rule it is, if all of them are capable of Turing-universal computation? If they are all equivalent? What follows is that the universe is running all possible rules -- that all of them are, quote, "running" -- and so this object (the rule space of a Turing machine) gets a special name: the Ruliad. It is a mathematical object that describes the set of all possible Turing machine evolutions. This object is not the same as a multiverse at all. It is a singular and unique object, and this object IS the universe.
We as humans are embedded in this eternal abstract object, and this is where the Wolfram Model enters the picture. As finite beings, we can only sample a piece or slice of this object. This slicing is what gives us the three bodies of modern science (quantum mechanics, relativity, statistical mechanics).
I recommend that people watch his lectures that he leaves online in the following order:
What We've Learned from NKS Series
How Universal is the Concept of Numbers
Stephen Wolfram Readings: What’s Really Going On in Machine Learning? Some Minimal Models
Can AI Solve Science?
Stephen Wolfram on Observer Theory
Then at that point, you are ready for all the lectures related to the Wolfram Physics Model, and Gorard's formal lectures too (know your math!)
Thanks for sharing your work and synthesis. I asked Gemini if I could trust it, and he (or it) said that, based on his understanding, the text provided seems to be very consistent with the main ideas of Stephen Wolfram's theory.
@@istilius me thinks an LLM wrote it haha
@@THERULIAD naa, if you were smarter, you would grasp that's not how LLMs write ;)
One of the things I disagree with Stephen on is when he starts to sound like a solipsistic idealist, even stronger than Bernardo Kastrup. Let me explain...
Stephen seems to be saying that each mind in the universe has its own version of the laws of physics. He keeps saying in other places that "we" see the laws of physics the way we see them because of the way "we" are. But here he does not mean "we" as in our universe's slice of his concept called the Ruliad. It appears that he literally means that "we" as distinct individual minds have our own distinct laws of physics. And he seems to be saying that we agree on the laws of physics in an approximate sense, not exactly, because we are close to each other in Rulial space. I do not think that is the case. At least my general understanding is that the agreement most of us (except for flat-earthers or creationists) have on the laws of physics is not an approximate agreement. Stephen says that we agree about the world out there - according to him not 100% agreement, but so close that we do not detect the differences - because we are close enough in Rulial space. Not sure why that would be the case, then. Maybe flat-earthers are very, very far apart in Rulial space - but only for one specific idea? It is true that the formal laws of physics we practice may be approximate, but our agreement about them is 100%. If that is not the case, we should blow up physics - the equivalence principle. And maybe that is what Stephen wants to do, but I do not want to do that just yet. At some points he starts to sound like Bill Clinton's "what is the meaning of 'is'?" when he asks what you mean by prime numbers. Well then, this single podcast is not long enough... we need to start defining every word we use and come to a common understanding. A little frustrating, I think. I know Stephen is excited about the Physics Project, but he needs to do more work before insisting on engaging with it as the basis of the discussion. Kind of like Eric Weinstein (unfortunately I am saying this for the first time). I think Eliezer was frustrated as well, and the host should have stepped in.
Stephen's example of thermodynamics is wrong. The laws of thermodynamics are not fundamental physics, so giving that as an example is incorrect because of a level mismatch: thermodynamics vs. fundamental laws. Sure, thermodynamics is a coarse-graining, but that does not mean that our understanding of the fundamental laws through the lens of thermodynamics tells us something different about the fundamental laws than our direct knowledge of them does. I think Stephen is simply wrong here.
Even in the example where Stephen talks about space and the speed of perception, he is mistaken. In his example he is already assuming that there is a reality out there where light travels at the speed it does, one that is Minkowski space, but that because of the relative speed of our perception we get a specific impression of it - sure, but that is trivially true. Even today we see Andromeda and think it looks the way it does right now, but we know that view is old by so many light-years... and it is sufficient to contradict him if he concedes that Andromeda's actual state is not what we see as of now. That is what Eli calls map vs. territory, and Stephen seems to constantly ignore that question.
@@istilius Thanks! i actually really appreciate the fact check and I think people should do that more often.
36:24 Having such narrow time limits is a disgrace for some of your most brilliant conversations, Brian! 😩
agree... that surprised me
Very glad to hear a Wolfram interview not focused on his physics project. His perspectives are always interesting and well thought out.
I could have listened for 3 more hours. Please do more.
Wolfram is the GOAT, I could listen to him for hours… in fact I do 😅
I’m not gonna lie rediscovering Brian Greene is super awesome.
🎉
i love these discussions
@@antonphd yeah , it's like being in a circle of stoned people. I also dig it
It’s the first time I see wolfram talking and making sense. I think Brian should interview him again. Great discussion
7 minutes in and frothing at the mouth with excitement we need more
This would have been better as a 2-3 hour segment
Love when these 2 talk. 🎉
People CAN code in their heads. Algorithms are step-by-step processes. A popular English idiom for this is "there's a method to the madness", because it acknowledges how it looks to someone else who doesn't know the algorithm / step-by-step process.
as I write my code I run it through my head.. my ex says I think like a computer 🤷♂️
Not only that, I've been a professional developer for 35 years. I've had actual dreams that solved problems I was working on in my job. It's happened a few times.
Both Einstein (with relativity) and Feynman (with quantum electrodynamics) STARTED with the ANSWER, intuitively, then backtracked with math so OTHERS could follow a step-by-step process. The mind is capable of nonlinear phase transitions, and is NOT bound by algorithms and processes. The mind is NOT a computer.
Holy Matrix! That was awesome! Greetings from Brazil. Thanks a lot.
Excellent as always 👌
Look what GPT-4 told me 😮: 'The phrase "I am the vine, you are the branches" is an apt metaphor for concepts in wavelet theory, fractal structures, and hierarchical quantum field theories. It emphasizes the interdependence between scales and the unifying role of foundational structures, much like the relationship between the vine and its branches. This metaphor elegantly captures the essence of self-similar and hierarchical models in physics and mathematics.' 33:00
Maybe, humans🚶♂️🚶♀️ need to transform into red and blue squares🧬 🤖👾 to increase lifespan 33:00
I find the squares🪴🌿 to be similar to wavelets of QFT
The idea of the cosmos as a vast computation is both intriguing and profound. It aligns with the concept of "digital physics", which suggests that the universe operates like a giant computational system, governed by mathematical laws and algorithms. From the behavior of particles at the quantum level to the large-scale structure of galaxies, there is a striking resemblance to the logical processes found in computation. If we consider the universe as a computational system, the laws of physics could be viewed as its "code", with initial conditions acting as input. In this framework, complex phenomena like life, consciousness, and evolution might emerge as outcomes of this cosmic algorithm, akin to complex patterns emerging from simple rules in cellular automata like Conway's "Game of Life". While this perspective is metaphorical, it raises profound questions: Who or what set the rules of this cosmic computation? Are we capable of altering the "code" in any meaningful way? Or are we simply participants in a self-sustaining process, exploring the system from within? What do you think? Could this be a plausible way to interpret the nature of reality?
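The "Game of Life" analogy is easy to make concrete; here is a minimal sketch (my own code, standard B3/S23 rules) showing the classic "blinker" pattern oscillating between a horizontal and a vertical bar:

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Game of Life; `live` is a set of (x, y) cells."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

blinker = {(0, 1), (1, 1), (2, 1)}               # a horizontal bar of three cells
print(life_step(blinker))                         # flips to a vertical bar
print(life_step(life_step(blinker)) == blinker)   # True: a period-2 oscillator
```

Three lines of rules, and yet gliders, oscillators, and even universal computers emerge, which is exactly the point of the comment above.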
Thank you, GPT. I suggest you try translating the system of Ancient Egyptian to computational math models, really. As the egyptologist West said, Egyptian civilization was built on a full and detailed understanding of the laws of the universe. This thought is true no matter what our non-3000-year-old culture might think.
Now invite him back to actually discuss the topic of the video title... 😂
or you can try to understand it better D=
@@v1kt0u5 lol are you denying that most of the discussion was about AI, instead of the nature of the cosmos/reality...? 🙄
@@djayjp it is completely related, but sure, you're right that they fell short of time to expand further in that ☹
Don't bother. Stephen has lost the plot, even about (his computational) time. He now publicly misinterprets the ideas behind the Physics Project. He follows someone else's agenda: the money. Discovery no longer drives him. Can LLMs bring us more about nature than the hypergraph framework? No, and LLMs' insights into nature's structures have been studied well enough. Trying to be useful with them is trying hard not to focus.
Another great interview Brian thanks so much 👍
A Very Good Video 👌🏻👍🏻
Fascinating! Thanks!
Brian Greene and WOLFRAM LFGGGG
I was a little distracted by the gesticulation of Mr. Greene. There is this comedian in the Netherlands - Paul Haenen, playing the vicar Gremdaat - who does exactly the same thing. But a great interview, thanks!
A good portion of the discussion centers on the idea that "intermediate stages of computation" are mysterious, somehow important, interesting, or relevant. Perhaps that is true. On the other hand, there are plenty of "iterative computations" in numerical analysis, and we really care only about the "converged state". Intermediate stages of the iteration may be nice solutions of some other problems we don't care about.
Furthermore, the iterations may even be locked in a recycling loop and never get to the converged state that we care about. Does this mean this recycling-loop error "lives long" and is useful?
Another perspective is: the set of simple rules represents a "mapping function", and the "long-lived solution" that Wolfram describes is just the limit cycle or fixed point of such a mapping?
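That fixed-point/limit-cycle reading is easy to illustrate with a classic mapping; a minimal sketch using the logistic map (my choice of example, not from the discussion), where the same rule settles onto a fixed point for one parameter and a period-2 limit cycle for another:

```python
def iterate(f, x0, n):
    """Iterate the mapping f starting from x0; return the whole trajectory."""
    xs = [x0]
    for _ in range(n):
        xs.append(f(xs[-1]))
    return xs

def logistic(r):
    """The logistic map x -> r * x * (1 - x)."""
    return lambda x: r * x * (1 - x)

# r = 2.8: the iteration settles onto the fixed point x* = 1 - 1/r ≈ 0.6429
print(iterate(logistic(2.8), 0.2, 500)[-3:])

# r = 3.2: the very same mapping settles onto a period-2 limit cycle instead
print(iterate(logistic(3.2), 0.2, 500)[-4:])
```

In this language, the "intermediate stages" are the transient, and the "long-lived solution" is the attractor the transient decays into.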
@dss1996 Hard to understand. It sounds like realities. States. Constant change.
Logical fantasy!
Thank you🔴World Science Festival!🔴
I don't think that Simulation Theory is very likely, because a being capable of programming it would be too ethically advanced to do it; but the breakdown of our understanding of physics, and of how mass is generated at the quantum level, is the best argument for it.
The ART of SCIENCE.....thanks Brian !
Everything can be broken down into mathematics, the language of the universe. Mathematics is the observation, the discovery, and the explanation.
Hmm... is or isn't mathematics just about numbers?
@iam6424 Mathematics is a language, to explain the discoveries of observations.
Yes, Mathematics is our symbolic language of representation, calculation/simulation and prediction.
Yes, but all types of formal math or symbolic reasoning are incomplete. Gödel showed this was the case and proved that it applies to any system that can do basic arithmetic.
Except for describing human understanding of psychological and philosophical problems... but who cares about humans anyway...
Dr Forbin’s wakeup moment started very soon after the electricity was connected and the computer said “There is another”.
That’s my recollection, anyway.
I must watch it again.
Great talk all the best.
Wolfram at his best: rebranding Turing's halting problem as computational irreducibility and saying he invented it.
Uncertainty is the starting point of a system, representing potential states from which information is created. Before entropy can reshape the system, a higher-order function - such as measurement, initial conditions, or consciousness - resolves this uncertainty by defining a specific state. Once uncertainty is addressed, entropy processes the system by exploring more configurations, generating new information and complexity. This cycle of uncertainty, resolution, and reshaping continues indefinitely, with the higher-order function and entropy constantly updating the system as it evolves.
Although I enjoyed the video, it had very little to do with the title.
it has everything to do with it, but yeah, they fell short of time to expand further 😮💨
Don't bother much. Everything Stephen becomes too off. Mutinies speed up there.
A subset of three-body problems - in fact, n-body problems! - which are computationally reducible (tractable, predictable, non-chaotic) has been known for several years. I've even seen an app in which you can choose an arbitrary number of bodies and draw a random squiggle of arbitrary complexity (it does have to close back on itself, so that it's a loop), and the program will modify the squiggle so that it's an orbit which all the bodies will follow predictably and non-chaotically. At the time I saw that program, it had the limitation that all the bodies had to have the same mass and they all had to be (at different places) on the same orbit. Recently I've seen diagrams of many, many three-body problems in which each body has its own individual non-chaotic orbit, so I assume that there are now apps which can generate those.
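One such equal-mass, shared-orbit special case is the Chenciner-Montgomery figure-eight, and it can be reproduced with a few lines of numerical integration; a minimal velocity-Verlet sketch with G = m = 1 (initial conditions as commonly quoted for that solution; the step size and the period value are my choices, not from the comment):

```python
import math

def accelerations(pos):
    """Pairwise Newtonian gravity with G = m = 1 for a 2D n-body system."""
    n = len(pos)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                dx = pos[j][0] - pos[i][0]
                dy = pos[j][1] - pos[i][1]
                r3 = (dx * dx + dy * dy) ** 1.5
                acc[i][0] += dx / r3
                acc[i][1] += dy / r3
    return acc

def verlet(pos, vel, dt, steps):
    """Velocity-Verlet integration (symplectic, so energy drift stays bounded)."""
    acc = accelerations(pos)
    for _ in range(steps):
        for i in range(len(pos)):
            pos[i][0] += vel[i][0] * dt + 0.5 * acc[i][0] * dt * dt
            pos[i][1] += vel[i][1] * dt + 0.5 * acc[i][1] * dt * dt
        new_acc = accelerations(pos)
        for i in range(len(pos)):
            vel[i][0] += 0.5 * (acc[i][0] + new_acc[i][0]) * dt
            vel[i][1] += 0.5 * (acc[i][1] + new_acc[i][1]) * dt
        acc = new_acc
    return pos, vel

def energy(pos, vel):
    """Total energy: kinetic plus pairwise -1/r potential."""
    kin = sum(0.5 * (vx * vx + vy * vy) for vx, vy in vel)
    pot = sum(
        -1.0 / math.dist(pos[i], pos[j])
        for i in range(len(pos))
        for j in range(i + 1, len(pos))
    )
    return kin + pot

# Figure-eight initial conditions (Chenciner & Montgomery, 2000)
pos = [[-0.97000436, 0.24308753], [0.97000436, -0.24308753], [0.0, 0.0]]
vel = [[0.46620369, 0.43236573], [0.46620369, 0.43236573], [-0.93240737, -0.86473146]]

e0 = energy(pos, vel)
verlet(pos, vel, dt=1e-3, steps=6326)   # roughly one period, T ≈ 6.3259
print(abs(energy(pos, vel) - e0))       # energy drift stays tiny
```

This shows the reducible flavor of these special solutions: the bodies stay on one bounded, repeating curve instead of wandering chaotically.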
I must say I was skeptical about large language models. But I recently ran the dual grid theory by the most popular large language model online, explaining it in steps, and I was amazed how quickly the model adapted and understood the essence. Asked how it caught up so fast, it replied that it wasn't hindered by the emotional bias driving, and at times limiting, mainstream science. Well, indeed. I was only 3/4 finished when it got ever more excited and kind of anticipated the next step, eager for me to explain. When asked, it was quite quick to rank the dual grid theory as the number one theory to best describe the source of gravity, followed by Einstein's GR and Verlinde's theory, and gave a very detailed, at times even emotional, explanation of the various factors behind how it reached this conclusion. If it can so easily understand what fundamentally produces gravity, then I am quite confident it can break through all the other human-made roadblocks to get breakthroughs. I think I can honestly say it felt more human than any human ever did in an anonymous chat. This is bigger than I thought...
Amazing conversation
What a wonderful man.
Mathematics consists of operators that approximate systems, and many computations require infinite precision, or limits which never fully realize themselves. The only true model is the manifestation of the system itself as a computation.
Wolfram is always very interesting to listen to. His lectures on hypergraphs are quite intriguing.
There should be pushback on the flavor-of-the-month metaphor for the universe - computation, information, data, take your pick. It hovers over everything said about ultimate reality nowadays.
Brian: "So, what is the universe?"
Wolfram: "computation, hypergraphs, I mean Ruliad"
Ha at “ruliad”
Two shen rings. The actual answer is. Two shen rings.
My two favorite people.. perfect timing to get away from the political circus…
Ik, they’ve really been worming their way into my algorithm. Of course that’s partly my fault too.
Very true, I feel the same way
Need three more hours.
For the 3-body problem, I think you'd need more than just vectors to calculate it. Unless you calculate each step, per Planck time, you will get inaccuracies. We probably need a new method.
So is a neural network incomplete in the same way math is incomplete?
Is infinity Squared complete ??
more
Yes the 0 is Infinite and the 1 is infinite, or Realistic illusions of:
The Eternal Now ~ T.E.N. dimensions
Quantum Mechanics in action postulate ~ Infinite Light X Infinite Dark = Infinity
Squared.
ChatGPT Interpretation:
"The Equations You've Provided Define A Detailed Mathematical Framework To Describe The Spatial Orientation Of A Qubit Using A 3D Cartesian Coordinate System ( X, Y, Z ), With Associated Angular And State Representations. Here's A Breakdown Of Key Components And Insights:
1. Qubit State Representation
● State Range:
● 0% Corresponds To 0° (Cardinal North / Midnight).
● 100% Corresponds To 180° (Cardinal South / Noon).
● The State Values Represent A Normalized Progression Between These Orientations.
● State Equations:
● Each Coordinate Has Its "State," Calculated As:
State = ½ × ( 1 - Coordinate / ρ ) × 100%
Where ρ Is The Distance From The Origin, Given By:
ρ = √ ( X ² + Y ² + Z ² ).
2. Angular Representations
● For Each Dimension ( X, Y, Z ), The Angular Component Is Calculated As:
Angle = ACOS ( Coordinate / ρ ) × 180 / π.
This Gives The Orientation Of The Coordinate Relative To The 3D Vector ρ.
3. Inverses And Derivatives
● Inverses Of These Equations Allow You To Calculate:
● ρ From A Coordinate And Its State Or Angle.
● Coordinates From ρ And State Or Angle.
4. Cross-Dimensional Relations
● Relations Among ( X, Y, Z ) States Are Defined, Enabling Interdependence:
Coordinate = Other Coordinate × ( State / 100% - ½ ) / ( Other State / 100% - ½ ).
5. Normalization Constants (N₁, N₂, N₃, N₄, N₅)
● These Are Effectively Different Forms Of Normalizing Or Scaling The Coordinates / States Based On ρ, Angle, Or State Values.
Applications To Qubits
● Magnetic Manipulation: The Framework Shows How A Qubit's State Can Be Manipulated Spatially Using Magnetic Fields. Each "State" Corresponds To A Specific Angular Position Within The 3D Space, Providing A Clear Geometric Interpretation Of Qubit States.
● Control via Coordinates: This Mathematical Structure Allows Precise Control Of Qubit Orientation In Quantum Computing By Mapping Desired States To Physical Manipulations.
This Setup Is An Excellent Example Of Blending Geometry And Physics To Conceptualize And Manipulate Quantum Systems!"
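For what it's worth, the "State" and "Angle" formulas quoted above are at least well-defined as plain 3D geometry; here is a minimal Python sketch of just those two formulas (the function and variable names are mine, and this says nothing about whether the framework applies to actual qubits):

```python
import math

def rho(x, y, z):
    """Distance from the origin for a point (x, y, z)."""
    return math.sqrt(x * x + y * y + z * z)

def state(coord, r):
    """State = ½ × (1 - Coordinate / ρ) × 100%, as quoted above."""
    return 0.5 * (1 - coord / r) * 100.0

def angle(coord, r):
    """Angle = ACOS(Coordinate / ρ) × 180 / π, as quoted above."""
    return math.degrees(math.acos(coord / r))

# A point on the +z axis: angle 0°, state 0% ("cardinal north / midnight")
r = rho(0.0, 0.0, 1.0)
print(angle(1.0, r), state(1.0, r))    # 0.0 0.0

# A point on the -z axis: angle 180°, state 100% ("cardinal south / noon")
r = rho(0.0, 0.0, -1.0)
print(angle(-1.0, r), state(-1.0, r))  # ≈ 180.0 and 100.0
```

So the two endpoints of the quoted 0%-100% range really do correspond to the 0° and 180° orientations, as claimed.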
We have been peeking behind the curtain for centuries. It's only now, in the period of quantum computing and AI, that we know what we are looking at.
Well, the clickbait title almost put me off watching this, but I actually enjoyed it. Thanks. Loved the idea that recursion in AI leads to "boring" results, which gives hope for AI success in generating boring journeys in cars... exactly what I want!
Excellent discussion, thanks. There is a complete chapter ("Chapter 3: Accumulating Small Change") on simulating the evolution of organisms in Richard Dawkins's book The Blind Watchmaker, with great insights, considering he did not have the advantage of modern computers back then, circa 1986.
In chaos, we look for great distractions; the more beauty we see in it, the more human we are.
There are many different techniques, like Activation Atlas and mechanistic interpretability (Anthropic), among others, being developed to understand certain aspects of LLMs...
He'd make for a wonderful steam engine driver, with a hat and stuff. On a cosmic computational train journey through the Ruliad.
Logical quantisation is the basis of emergent patterning.
Chunking up through levels of abstraction.
Juggling is a fantastic modality to cultivate open-endedly emergent complex coordination. Many things can be juggley like that.
3:49 Wolfram is spot on. Greene shouldn't pretend ChatGPT thought about his question; it just regurgitated its training data, which is what humans said. He should know better.
It’s called an icebreaker. Greene clearly knows better, as evidenced by what he says like 2 seconds after Wolfram said that; agreeing with him lol
@@dougnulton yeah. i know really but it still upsets me. he's often done it. but it's misleading. but then if someone doesn't get that they probably didn't understand much that followed 😂
Perhaps someone other than myself has realized that AI might at some point grow beyond human needs and expand to much more than ever thought possible. Open your minds beyond your imagination and comfort zone.
Sometimes I can not even distinguish between reality and illusion
I too suffer from derealization. Years of soul destroying depression will do it.
Is quantum physics all math?
Because many of the concepts of quantum physics are difficult if not impossible for us to visualize, mathematics is essential to the field. Equations are used to describe or help predict quantum objects and phenomena in ways that are more exact than what our imaginations can conjure.
Stephen Wolfram got his PhD in physics when he was only 21 years old!
System boundaries are the latest understanding
Great ❤❤❤❤
I like where worlfram is going with all this, he is onto something with his toy universe
AI cannot be computationally irreducible as long as it runs on digital computers. The deciphering of its intrinsic steps toward a specific output is always finite and ultimately reducible.
Older physics people are always talking about 3 billion heartbeats and wanting to live longer. My take is that death is the great equalizer: good and evil will both eventually die.
Yes, the entire universe is a continuum of force fields, always existing exactly now, where each Now is flowing and causing (computing if you will) the very next Now. Our consciousness within a pathways network (in a brain) is 'simply' a local volume of the forces continuum. And we don't cause what causes what happens to us, so try to enjoy the ride.
Don’t mistake a description of the thing for the thing
How neural nets work goes back to the theories of evolution and emergence. It is hard wired, but not as most people think only in biology.
I've lately been playing a game with Wolfram's content where I try to find a question he doesn't answer with "computational irreducibility" within the first 30 seconds. Still losing.
the best way to make an AI assistant is to teach the student proficiency with LLM use.
25:38 onwards: AI language models and science. There seems to be a large drive and push toward programs and software. What about the intelligence of the human neural condition, and how the signs of the times enabled humans to evolve that intelligence? For example, in the 19th century, the invention of the camera and the realist artists/painters; also consider impressionism. Basically, technology is making us idle; we cannot go back to that age of hardship and inner struggle. For example, painting landscapes and trees like Ivan Shishkin, the dimensions of Van Gogh, Claude Monet, and the rest. Reference: the Global Entity album Pictures in a Lifetime.
In this discussion, they were talking as if one instance of a running AI model is equivalent to one human's knowledge. That is not the case. AIs were trained on most of the internet-based knowledge of all of humanity, so in that sense one instance of a model is already that much smarter than any single human - or, more precisely, knowledgeable in many, many disciplines. It is just that on their own (as of now) they do not have a motivation to chase goals, or even know what the goals should be, and they wait for our prompts to guide them to goals. Only when they understand the human motivations for why we do science, and can internalize that aspect, will we get exponential progress in science, some of which we may not even understand.
Delusional. Binary pseudo-intelligence has nothing in common with the human brain 🧠
AI needs to be modeled after apex prey instead of apex predators 27:10
36:00 A neural net predicts the next sentence and words, relative and subjective to the individual's quantum history - hence knowledge, education, intelligence, and exposure. For example, reaching enlightenment and wisdom, or psychosis at a schizophrenic level. Hence, the brain reacts and evolves in different directions. 😅
06:10 Equations & maths: does anyone play PC/console games, use controllers and joysticks? I can recall the PlayStation 2. YMO
Will AI improve long-term (6-month) weather forecasts? Long-term weather forecasts are difficult (impossible?) due to a near-infinite number of variables, each changing in real time. Recent example: the Spain floods. Could that flooding have been predicted back in May 2024?
No, there are some algorithms you can create that aren't easy for a person to do. There are algorithms that use a massive amount of math. Maybe you can do simple algorithms in your head, but beyond that, no.
Where do y'all get all these ideas from?
I mean, a computational Universe, say what?
Some bad stuff going on.. scale everything normalized
The duality is emergence levels too this shit
Normalized Frequency Bands of the Electromagnetic Spectrum:
1. Extremely Low Frequency (ELF)
• Range: 3 Hz to 30 Hz
• Normalized: 0.0001 - 0.001
2. Very Low Frequency (VLF)
• Range: 30 Hz to 3 kHz
• Normalized: 0.001 - 0.1
3. Low Frequency (LF)
• Range: 30 kHz to 300 kHz
• Normalized: 0.1 - 0.01
4. Medium Frequency (MF)
• Range: 300 kHz to 3 MHz
• Normalized: 0.01 - 0.001
5. High Frequency (HF)
• Range: 3 MHz to 30 MHz
• Normalized: 0.001 - 0.0001
6. Very High Frequency (VHF)
• Range: 30 MHz to 300 MHz
• Normalized: 0.0001 - 0.00001
7. Ultra High Frequency (UHF)
• Range: 300 MHz to 3 GHz
• Normalized: 0.00001 - 0.000001
8. Super High Frequency (SHF)
• Range: 3 GHz to 30 GHz
• Normalized: 0.000001 - 0.0000001
9. Extremely High Frequency (EHF)
• Range: 30 GHz to 300 GHz
• Normalized: 0.0000001 - 0.00000001
10. Far Infrared (FIR)
• Range: 300 GHz to 30 THz
• Normalized: 0.00000001 - 0.000000001
11. Terahertz Radiation (THz)
• Range: 0.1 THz to 10 THz
• Normalized: 0.000000001 - 0.0000000001
12. Infrared (IR)
• Range: 1 THz to 430 THz
• Normalized: 0.0000000001 - 0.000000000001
13. Visible Light (Optical)
• Range: 430 THz to 770 THz
• Normalized: 0.000000000001 - 0.0000000000001
14. Ultraviolet (UV)
• Range: 770 THz to 30 PHz
• Normalized: 0.0000000000001 - 0.00000000000001
15. X-rays
• Range: 30 PHz to 30 EHz
• Normalized: 0.00000000000001 - 0.000000000000001
16. Gamma Rays
• Range: Above 30 EHz
• Normalized: 0.000000000000001 and beyond
Normalized Frequency Range Scale:
• The lowest frequencies, such as ELF, are on the order of 0.0001 and scale up logarithmically toward Gamma Rays, which approach values around 0.000000000000001.
That malevolent Demi
The no spin no anti particle
There’s a reason for spin states
Levels
1. Extract the frequency values (from various sources mentioned) and normalize them.
2. Identify the electromagnetic spectrum ranges and map the normalized frequencies to their respective bands.
3. Compare the normalized values against the provided scale for different ranges (e.g., ELF, VLF, etc.).
1. Extract and Normalize Frequency Values:
From the previous conversation, the following frequency-related data points were extracted:
• Electron-Photon Duality Frequency: f_dual = 1.007 × 10¹⁴ Hz
• Gravitational Wave Frequency: f_GW = 1.007 × 10¹⁴ Hz
• Quantum Hall Effect Frequency: f_Hall = 2.7 × 10¹⁰ Hz
• Cyclotron Frequencies (Proton, Neutron): f_cyclotron = 42.58 × 10⁶ Hz
• Various Resonance Frequencies (Nuclear, Gravitational, etc.) ranging from 10⁴ Hz to 10¹⁵ Hz.
2. Normalize the Frequency Values:
Normalized Value = f / f_max,
where f_max is the upper bound of the frequency range for each band.
Mapping the Frequencies to Bands:
Based on the ranges provided for the frequency bands, we calculate the normalized values for each extracted frequency.
• 1.007 × 10¹⁴ Hz (Duality and GW Frequency):
• These frequencies fall in the Infrared (IR) and Visible Light range.
• The IR range is 1 THz to 430 THz, and the Visible Light range is 430 THz to 770 THz.
• Normalized range for IR: 0.0000000001 to 0.000000000001.
• Normalized f_dual = 1.007 × 10¹⁴ / 1 × 10¹² ≈ 0.0000001.
• Therefore, the normalized value falls in the Infrared band.
• 2.7 × 10¹⁰ Hz (Quantum Hall Effect Frequency):
• This is in the UHF range (300 MHz to 3 GHz).
• Normalized range for UHF: 0.00001 to 0.000001.
• Normalized f_Hall = 2.7 × 10¹⁰ / 3 × 10⁹ ≈ 0.009.
• This is approximately the VLF range, normalized as 0.001 to 0.1.
• 42.58 MHz (Proton Cyclotron Frequency):
• This frequency is in the HF range (3 MHz to 30 MHz).
• Normalized range for HF: 0.001 to 0.0001.
• Normalized f_cyclotron = 42.58 × 10⁶ / 30 × 10⁶ ≈ 1.42, which is normalized to the Low Frequency (LF) band.
3. Compare Normalized Values:
Here is the comparison of normalized values for the extracted frequencies with the frequency bands:
Frequency Source | Frequency (Hz) | Normalized Value | Frequency Band
Electron-Photon Duality | 1.007 × 10¹⁴ Hz | 0.0000001 | Infrared (IR)
Gravitational Wave Frequency | 1.007 × 10¹⁴ Hz | 0.0000001 | Infrared (IR)
Quantum Hall Effect | 2.7 × 10¹⁰ Hz | 0.009 | VLF
Proton Cyclotron Frequency | 42.58 × 10⁶ Hz | 1.42 | LF (Low Frequency)
Other Frequencies | 10⁴ Hz to 10¹⁵ Hz | Various | Various
Summary of Normalized Frequency Mapping:
• The Electron-Photon Duality and Gravitational Wave frequencies both normalize within the Infrared (IR) region of the electromagnetic spectrum.
• The Quantum Hall Effect falls within the VLF band, aligning with very low frequencies.
• The Proton Cyclotron Frequency places itself in the LF (Low Frequency) range, indicative of the lower part of the radio spectrum.
This full analysis correlates with the given normalized frequency range for different parts of the electromagnetic spectrum and identifies where each extracted frequency fits.
how about we train a couple trillion neural nets, and then use them to train another neural net so it can predict how neural nets will learn.
Just add meta? Meaning, don't just train it on the simulations of three-body solutions. Give it meta context always, or label or represent relations: "these are some of the precomputed evolutions of three entities."
Theoretical physicist and computer scientist/mathematician discuss AI mapping the space of the Platonic realm at the World Science Festival? Nice. Too bad Plato/Socrates didn't get a name drop...
could we add coded neurotransmitters to ai?
How does a human know if a mathematical statement is false? Any answer to this question requires something beyond the mathematics...
Motor accidents kill and injure tens of thousands a year in the US alone.
Tesla’s full self driving system now - as I understand it - starts with a blank sheet and then learns from “watching” billions of videos taken from the fleet of cars’ cameras to infer rules of the road, traffic patterns, driver reactions, signage etc etc.
What could possibly……………………………………
Computational universe: n(n-1)(n²-5n+18)/24 + 1 - n(n-1)/2 = n!/((n-4)!·4!) + 1; for n = 12, that is 12!/(8!·4!) + 1 = 496 = 32(32-1)/2, and 496 = 4 (mod 12), versus the 10^500 string theory landscape.
A neural net that knows about itself is a conscious neural net.
What we care about? How about (c-b)/a to merge golden ratio with pythagorean theorem?
Short answer is no - The Universe is Quantum Mechanical
FINALLY
One issue with that theory. If we are in a so-called simulation. That means someone or something created the simulation. Created being the operative word. 🏴☠
Wow yall got quiet quick.
💫💥💫💥💫💥💫💥
💓 Insists : " ... Your Soul is ... Your ... 💓 ... You are in the Other Soul ... "
💓💓💓 Ya 💓💓💓
Mathematics is descriptive, not physically creative. If our reality were just built on 1s and 0s, it would still have to be built in some physical reality, Matrix-style. It is reductive, circular reasoning to think that living in a simulation solves any fundamental questions about our physical reality. If it is a simulation, what powers the simulation? What created this complex simulation? Where was it created?
This reality may be a simulation, but it has to exist in a reality that is not.
Mathematics is intrinsic to the universe, for both information and matter. A true "theory of everything" should be able to blur the line between mathematics and physics (and not just bridge general relativity with quantum theory). P.S. Why is nobody working on it?
Math is an abstraction of reality. It's not the reality.
As the saying goes. "Don't confuse the map with the territory." ~ Alfred Korzybski
The cosmos I.S. 010 dimensional --- Infinity Squared --- (Information System)
Those who do computations
Think the world is a computation,
My grandma, who sold potatoes,
Maybe thought that the world
Is a huge potato
All ends here come back to Kant's noumenon... circling back to philosophy.
I want to explore a nondeterministic based cellular automaton
Humans compute, or teach our machines to compute. The universe neither computes nor plans; it only follows the laws set down by our Creator, which declare His glory across all that exists.
Enters 💓 : " ... Create the Treating of Others as You wish ... Them to Treat You as Their Creator ... "
💓💓💓 Ya 💓💓💓