Will AI ever discover a new theory of nature? Let me know and don’t forget you can win a real meteorite 💥 when you join my free mailing list here 👉 briankeating.com/list ✉️
Discovery of that which already exists is where it has the best chance of doing something useful. It's a natural search engine. Composition is where it will always struggle, though, imo.
If AI ever does, I bet there is a smart taxi driver, or some other high-IQ non-physicist, who will provide a proof that we could have had this “discovery” 20-50 years ago, if it weren’t for the *control apparatus* in place in the physics community that prevents outsiders from presenting truths. Ah, the hubris of you all.
Circular Quadratic Algebra is coming after general relativity 🙂 (I made my discoveries when I started making a simulation game where I decided to simulate everything, from players and NPCs to trees.)
I think AI has the greatest value to analyze huge data sets to find unknown relationships which lead to new physics equations, new chemical compounds, and predictive genetics. But I am probably wrong.
Syntax is dual to semantics -- languages or communication. Large language models are therefore dual! Categories (form, syntax, objects) are dual to sets (substance, semantics, subjects) -- Category theory is dual. If mathematics is a language then it is dual. Concepts are dual to percepts -- the mind duality of Immanuel Kant. Mathematicians create new concepts or ideas all the time from their perceptions, observations, measurements (intuitions) -- a syntropic process, teleological. Cause is dual to effect -- causality. Effect is dual to cause -- retro-causality. Perceptions or effects (measurements) create causes (concepts) in your mind -- retro-causality -- a syntropic process! Large language models are using duality to create reality. "Always two there are" -- Yoda.
AI excels only as a learning aid for humans, automated processing, and testing. So if you have a problem, it can make mistakes 95% of the time, but it can be programmed to eventually narrow in on a working model, completely automated; that is the value. Which is why, in the new chipset demos, they showed AI training programs for training labor robots. Whether these actually work at large scale we will not know for years. It will almost certainly be pushed to market flawed, like nearly every program. Sadly, bugs in these programs have very high real-world costs.
You nailed it. It's good at analyzing for patterns in "dimensionality," which personally I think is better described as adjectives for our language. Every dimension is just part of the description: a measure of how orange something is, relatively (orange just being a randomly chosen example), and how much that matters to interpreting it. For a jacket, the orange value doesn't matter much for understanding, but for other things it matters immensely.
How do you fact check it? AI can't reproduce reliable biographies of living persons without making shit up and you expect it to do advanced physics based on trust?
Jesus Christ, pal, chill. WTF are you repeating other people's bull for? Physics is getting pulled down by stupid people getting paid good money to abuse their positions and profit from fraud and money laundering. Please.
Nice work. I have to point out that with any emerging technology there are problems that occur and may not be overcome in the near future. AIs: 1. Can hallucinate. 2. Can't articulate the reasoning or process used to get their results. 3. Can be biased. 4. Can provide different answers to the same question. 5. Can provide responses repetitive to a theme rather than independent responses. 6. Someone will always have to validate AI results; there is an inherent risk in not knowing how a solution was arrived at. 7. Its lack of predictability is its strength but also its most critical weakness. In certain areas it will revolutionize the world; in most other areas, not so much.
The issue here is much more fundamental: AI can NEVER articulate its reasoning, because it's not actually intelligence. It's just weighted averages putting words in an order that makes the most sense: not such that the output is true and correct, just that the flow of its writing is legible. It's a fancy spin on the auto-suggestion feature we've had on mobile keyboards for years now. Based on that system, you're never going to get sentience and, in this sense, intelligence.
Protein folding is probably one of the most important areas of exploration because of its implications for the restoration of health and the repair of injury.
Going further, you get bio-engineering. If this is the start of an AI revolution, bio-engineering could be what it enables next. It's too messy and complex for the human mind to handle alone, so I think some AI is needed.
I hope it can help my torn rotator cuffs. Both are messed up big time and the doctors don't know squat. But the robot? He'll get me back in the batter's cage swatting balls in no time flat.
Mathematicians come up with new things all the time; it doesn't mean it reflects reality. You can create an elegant answer to some problem in physics that probably explains the issue really well mathematically... it doesn't mean it exists. It means you are good at math.
I like to think that at some point AI would invent its own language for calculations out of sheer time and resource economy. If it were asked to decipher one of the symbols of its language into our regular math, it would take years just to read a result.
As a physicist as well, I can tell you we are very short of training data on almost everything. Experimental physics is expensive, and the physicists who do this are rare as hen's teeth. So if we think we are going to solve anything, we need to observe it first and put it in a form to train on, along with examples of what is NOT that thing, and all its nuances. I'm excited, but hype is hype for AI.
I'm not entirely sure about that. As an outsider (to physics) with more experience on the AI side of things, it seems to me that there's a strong disconnect between "macrophysics" or "applied physics", as in things we typically observe and deal with on a regular basis (my car accelerates via kinetic energy, experiences friction, and so on), versus "microphysics" or physics such as they occur at atomic and sub-atomic levels. I'm pretty sure that data for macrophysics is cheap, plentiful, and readily available in a large variety of categories, and is fairly well understood such that many phenomena can be simulated as needed. So the really interesting question is: Given a large amount of semi-relevant data, can you augment a model trained on a small quantity of highly relevant data? The answer is generally yes, but it will take a careful approach. As a very crude solution, training a Transformer on anything before your target information will generally improve its performance on your target information for whatever reason. With that said, I think there are probably things we can infer about unknown behavior in physics, particularly at a micro level, from patterns in macrophysics that humans aren't necessarily well equipped to find, and I would suggest that it might be wise to hesitate to judge the effectiveness of AI in this area, particularly as we switch from data prediction driven AI (current generation) to more computationally dense "simulation driven" AI (next generation, which we're already starting to see with Quiet*, or agentic workflows, and so on), which function more like human brains and how we think, in a much more data efficient manner than we've seen before. 
That said, I don't think that we're going to see in the next year "AI uncovers the final unknown physical laws, and as it turns out, entropy was just a suggestion, and 42 was behind it all along", but I do think we're going to see more "unknown unknowns" in terms of the acceleration of progress in a variety of fields due to the increased efficiency of research as a function of advancing artificial intelligence.
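The transfer effect described above (pretraining on plentiful semi-relevant data before fitting scarce target data) can be sketched in a toy linear-regression setting. Everything here is an illustrative assumption: the "related task" is just a slightly shifted law, and the step counts and learning rate are arbitrary.

```python
import numpy as np

# Toy sketch of warm-starting from a related ("semi-relevant") task
# versus starting cold on scarce target data. All numbers are made up.
rng = np.random.default_rng(2)

def fit(X, y, w, steps, lr=0.1):
    """Plain gradient descent on mean squared error."""
    w = w.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

true_w = np.array([1.0, -2.0, 0.5])

# Plentiful related data: same functional form, slightly shifted coefficients.
X_big = rng.normal(size=(1000, 3))
y_big = X_big @ (true_w + 0.1)

# Scarce target data governed by the law we actually want to learn.
X_small = rng.normal(size=(20, 3))
y_small = X_small @ true_w

w_pre = fit(X_big, y_big, np.zeros(3), steps=200)      # "pretraining"
w_warm = fit(X_small, y_small, w_pre, steps=20)        # fine-tune, warm start
w_cold = fit(X_small, y_small, np.zeros(3), steps=20)  # fine-tune, cold start

print("warm-start error:", np.linalg.norm(w_warm - true_w))
print("cold-start error:", np.linalg.norm(w_cold - true_w))
```

With the same small budget of fine-tuning steps, the warm start typically lands much closer to the true coefficients, which is the crude version of "training on anything before your target information generally helps."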
What's amazing, and has already been demonstrated in a way with Sora, is that I don't think you need code for every type of physics simulation. I have a physics simulator in my head right now and couldn't begin to tell you exactly how it works. I can imagine a red apple, I can change its color to blue, I can throw the apple against a wall and watch it bounce off or explode. And all I did was have certain hardware and a knowledge base of what I've observed over my lifetime. Sora is the same way. It doesn't have a rally racing game engine built in, yet it can create and simulate what will happen to a shockingly accurate degree, just like my brain. Some physics won't have to be coded in; we can simply train it against the physical world.
Something to consider, perhaps, is that mathematical equations describe an idealized model universe. Since AI is using real-world data for its simulation, it could eventually be more accurate as a description of the universe than an equation.
What you are tapping into here is potentially the key to advancements in all fields. I have often thought of the impact on medical research, much like what you are presenting here for fluid dynamics. Excellent presentation, and accolades for creating a teaching assistant. I wish I had had such a thing getting my engineering degree in the 80s, LOL.
Predicting a simulation result is great. This is similar to what humans do when predicting the everyday outcome of some mechanical event (an item falling, for example) and reacting to it before the event happens.
Let it learn all the differences between bosons and fermions on all known levels, including all the known equations like Schrödinger's. Then make it amplify the differences and have it construct a new system of the particle world.
IIRC, there was a neural network that, after observing/processing videos of pendulums' motion, derived the basic laws of motion, i.e. F = ma. I too hope that similar discoveries will be made by neural networks for other branches of physics.
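The spirit of that result, recovering a law from raw observations, can be sketched with a simple regression over candidate terms. This is a hypothetical toy with synthetic data, not the actual pendulum experiment: we generate noisy (m, a, F) samples and let least squares pick out which candidate term carries the signal.

```python
import numpy as np

# Toy "law discovery": recover F = m*a from noisy observations by fitting
# force against a small library of candidate terms [m, a, m*a].
rng = np.random.default_rng(0)
m = rng.uniform(0.5, 5.0, 1000)          # masses (kg)
a = rng.uniform(-10.0, 10.0, 1000)       # accelerations (m/s^2)
F = m * a + rng.normal(0.0, 0.01, 1000)  # "measured" forces with small noise

X = np.column_stack([m, a, m * a])       # candidate terms
coef, *_ = np.linalg.lstsq(X, F, rcond=None)
print(coef)  # the m*a coefficient comes out near 1, the others near 0
```

Systems like the pendulum network do something far richer (extracting state variables from video first), but the final step is the same idea: among many candidate expressions, the data selects the one that is actually a law.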
AI might do these things you dream of, but it will not look like GPT/LLMs. GPT is just a large, complex search engine based on existing data (lots of it). That's not to say it can't do much; the results are already amazing. But real AI must be based on another method, and we don't know what intelligence is, so we can't tell what method is needed. Like consciousness: we can't make it because we can't define it. I just asked Gemini and ChatGPT to define intelligence (and consciousness). They give descriptions of some things it can do, but not a definition. Who'da guessed?
I agree with you Gary. I assumed that the presenter is aware of the difference between machine learning and actual AI but that he is obliged to say "AI" because that's the term that commercial developers use and they provide funding for projects.
The only problem with this simulation approach is that whoever is involved in running the simulations may, for whatever reasons, risk ignoring the underlying causal mechanisms that these graphical methods reveal particularly if they are simply interested in practical applications. We may end up creating a lot of "technology" that no one understands except for the AIs. Are you sure that is a good idea? Or are you just running simulations for physical behaviors for which the physical laws are already known?
Hasn’t the AI simply found more efficient linearisations by exhaustive exploration? You’re correct to point out that the solutions might/probably won’t generalize beyond the training data - so might not be as useful where high precision is required, but terrific for making special FX for movies and education where time is money and the audience can’t tell the difference, and nobody’s life or job is on the line.
Hi Brian, can you please slow down a bit and talk more slowly? This is particularly helpful when one is discussing complex topics, and it will help the viewers.
Solve the Riemann hypothesis or the P versus NP problem and win $1,000,000 from the Clay Mathematics Institute. They are two of the seven Millennium Prize Problems, widely considered close to unsolvable.
Why not make a text-based description language for physics, like we have in electronics, for its symbolic elements (Feynman diagrams, for example)? Work from there to the top level. It's a lifetime project for a person but much easier using AI. Then LLMs could code in this physics description language.
@@DrBrianKeating Yes it is. You claim shit in your thumbnail, then spend an entire video asking questions. If you insist it's not clickbait, then it's just a bad format, like everyone else's. Blocked, unsubscribed.
I think @andyc8707 did not watch the video. However, if the time-stamp chapters in the video referred back to the title, I think the message would have come across more clearly. I enjoyed it! 13:23
6:15 The answer to Einstein's question about whether an observer would experience a gravitational field in free fall is no, and that led to the Einstein equivalence principle? I wonder how Einstein came up with “No experiment can be performed that could distinguish between a uniform gravitational field and an equivalent uniform acceleration” by asking himself "will a falling person experience gravity?" and deciding "no".
Data exponentially increases, AI vacuums it up. Self-perpetuating engine for scientific discovery. What we are witnessing is the creation of vastly more intelligent entities. Scary for sure, but awesome to witness.
Well, you'd hope so… but AI isn't sentient, so I'm not sure it really learns beyond machine learning. Does it know the real world as opposed to a simulation? Can it tell the difference at the level of consciousness (I'd suggest not)?
I have been using ChatGPT-4 for understanding astrophysics and it's about 95% accurate at solving problems. Sometimes it overestimates things like a white dwarf star's mass before a supernova, slightly overstepping the Chandrasekhar limit of 1.4 solar masses. But more or less it's on point.
Well, that is not surprising, because there is plenty of training data and underlying theory. AI is just a fancy way of pattern matching and has limitations.
Even a model that could output the smallest novel processing/'ideas' would be a game changer in itself, even if it's something we already have proofs/definitions/laws of, or otherwise know. It would have to be a scenario where it was trained only with all the tools necessary to derive the answer/correct output, but not the answer itself. Unfortunately we're not even at the doorstep yet, AFAIK. Your video is quite optimistic 😁
I hope AI will solve the headless chicken equation. The one that figures out how humans live together in freedom, love and prosperity without any wars.
AI mastered chess and Go, so it should have no problem mastering physics and understanding it better than any human being. In chess it is now much stronger than the best human player, and these AIs taught themselves the game through trial and error, the same way we do. It hasn't cracked chess yet because of the sheer number of possible games, but if quantum computers and AI work together, then it's possible that at some point in the future they would be able to say we know every possible game and show us the best possible moves, which in theory should lead to a draw. I see something similar happening in physics, biology, etc.
It is clearly already useful for CGI, but is it feasible to actually prove convergence? In other words, to give error bounds, beyond saying the graphics look amazingly similar? Mathematically, that is the natural question!
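One cheap, practical stand-in for a proof is to measure the PDE residual of a candidate solution: plug it back into the equation and see how far it is from zero. As a sketch, assume the learned field happened to be the exact solution u(x,t) = e^(-t) sin(x) of the 1D heat equation u_t = u_xx; its residual on a grid is tiny, while a wrong field would show a large one. (This is a diagnostic, not an error bound; real bounds need residual-to-error estimates for the specific PDE.)

```python
import numpy as np

# Residual check for the 1D heat equation u_t = u_xx.
# Candidate field: u(x, t) = exp(-t) * sin(x), the exact solution here.
x = np.linspace(0.0, np.pi, 201)
t = 0.5
u = np.exp(-t) * np.sin(x)

dx = x[1] - x[0]
u_xx = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2  # central second difference
u_t = -np.exp(-t) * np.sin(x[1:-1])            # analytic time derivative

residual = np.max(np.abs(u_t - u_xx))
print(residual)  # small; a large value would flag a non-solution
```

For a neural-network surrogate, the same check (network output in place of the analytic field, automatic differentiation in place of finite differences) is exactly what physics-informed training minimizes, and reporting that residual is one honest answer to "how close is it really?"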
To test if AI is up to the task, give it (only) the data the epicyclists used, and their geocentric model. If the thing then produces the Copernican paradigm correction, it may be able to fix the paradigm which turns the quantum and the relative to woo ;)
It is funny how reality's generator of physical matter can now have a simulation of the great simulator, lol. Running elements through various critical extreme states and different lattice structures is definitely something interesting. Even just streamlining the cost of deciding what's worthy of actual testing is a huge benefit in the search for exotic materials. And as a tool for us, the people, to build agents that simulate industry and market efficiency and functionality, it will be a great aid in deciding how to build out future infrastructure. There are a lot of tough decisions to be made, and we need better tools before we tackle a lot of obstacles.
"Are there dinosaur bones in space?" "Will chocolate be the answer to weight loss?" "Has the secret to time travel just been discovered?" The answer, to all these clickbait questions, has and always will be: "No." You're no different. Just another grifter
I hope I live long enough for AI to do the most amazing things like cure cancer, find the connection between gravity and the other forces, etc. I also hope I don’t live long enough to experience the Terminator takeover of AI. 😎
Can you elaborate a bit more on the graph at 11:40? Is there a relationship between graphs u, c, v, and p, or were you just showing how there are multiple graphs and elaborating on a few select moments of the c(t,x,y) graph? I ask because it also seems applicable to financial domains
Thank you very much for this great video! There are really great insights ahead of us. The models shown are super impressive; what all could be optimized with them! AI offers a lot of possibilities. Thank you very much for your work and best wishes for much success!
Every time you say "AI", just replace the word with "algorithm". Every "AI" is different, but they are all computer algorithms. You wouldn't say your calculator is powered by AI, even when technically it is, so why say that "AI optimises quantum circuits"? Phrasing it that way appeals to some mystical "intelligence", when really you're talking about a bespoke algorithm for solving that specific problem.
Cornell is hilarious. Saying that black people didn’t have their own version of the KKK is nuts. I guess hes never heard of the black panthers :P My dad was in the national guard in the 60s/70s and was literally being shot at by Panthers. Crazy .
It depends on where you let the AI start, and whether it can dig deeper from there. If you start from what we currently assume to be the reality of physics, you may just go down the rabbit hole that the math provides.
I call it a "Hawking-Penrose Gray Hole Topology: A Kerr-Like Solution for Dual-Fibrated-Manifolds in D dimensions" Output: "To formalize the equations for the process described, involving a black hole, white hole projections, virtual particles, and their influence on higher-dimensional space-time, let's break down the scenario into specific mathematical representations: ### Step 1: Higher-Dimensional Kerr-like Metric Assuming a generalized form of the Kerr metric in \(D\) dimensions, we consider the black hole to be rotating. The Kerr metric in higher dimensions has complex features, but for our purposes, we'll consider a simplified version focusing on the essential aspects: \[ ds^2 = -\left(1 - \frac{\mu}{\Sigma} \right) dt^2 - \frac{2 \mu}{\Sigma} a \sin^2\theta \, dt \, d\phi + \frac{\Sigma}{\Delta} dr^2 + \Sigma \, d\theta^2 + \left(r^2 + a^2 + \frac{\mu a^2 \sin^2\theta}{\Sigma} \right) \sin^2\theta \, d\phi^2 + r^2 \cos^2\theta \, d\Omega_{D-4}^2 \] Where: - \( \Sigma = r^2 + a^2 \cos^2\theta \) - \( \Delta = r^2 + a^2 - \mu \) - \( \mu \) is related to the mass of the black hole, - \( a \) is the angular momentum per unit mass, - \( d\Omega_{D-4}^2 \) represents the metric on a \((D-4)\)-sphere. ### Step 2: White Hole Projection and Virtual Particles We model virtual particles influenced by the black hole's event horizon and their interaction with a white hole projection: \[ \text{Virtual Particle Influx: } \dot{m} = \frac{\hbar c^3}{8\pi G M^2} \] This represents the mass-energy flow due to Hawking radiation (in the traditional sense), but we interpret this as the energy flow into the white hole. The white hole projects this mass-energy back into the manifold, affecting the metric: \[ ds^2_{\text{white}} = ds^2 + \epsilon \left(\frac{\dot{m}}{\Sigma} dt^2 + \frac{\dot{m}}{\Sigma} d\Omega_{D-4}^2 \right) \] Where \( \epsilon \) is a small parameter modulating the impact of the white hole's projection.
### Step 3: Temporal Evolution as a Vector Field Introduce a temporal vector field \( \xi^\mu \) which describes the time evolution due to the internal dynamics of the black hole (including white hole dynamics): \[ \xi^t = \frac{\partial}{\partial t}, \quad \xi^r = -\sqrt{\frac{2GM}{r} - \frac{r^2}{\lambda^2}}, \quad \xi^\theta = \xi^\phi = 0 \] Where \( \lambda \) represents a characteristic length scale potentially associated with quantum gravity effects. ### Step 4: Interaction with Dark Energy and Dark Matter Incorporate the influence of these higher-dimensional and white hole dynamics on the cosmological scale by modifying the Einstein field equations: \[ G_{\mu\nu} + \Lambda g_{\mu\nu} = 8\pi G T_{\mu\nu} + \kappa T_{\mu\nu}^{\text{dark}} \] Where: - \( G_{\mu\nu} \) is the Einstein tensor, - \( \Lambda \) is the cosmological constant modeling dark energy, - \( T_{\mu\nu} \) is the traditional matter-energy tensor, - \( T_{\mu\nu}^{\text{dark}} \) represents contributions from dark matter and energy modulated by the white hole and higher-dimensional effects, - \( \kappa \) is a coupling constant. ### Summary These equations outline a theoretical model where the dynamics of a black hole, including virtual particles and a white hole projection, influence higher-dimensional space-time and possibly explain phenomena like dark energy and dark matter. Further refinement and simulation would be necessary to test the viability of such a model and its consistency with observed physical laws and cosmological data."
Wrong. We have enough evidence to demonstrate that ACC is a thing, is happening right now and is getting worse. No need for AI and it would not advance anything.
@@TheJT00001 It is you who seems to be immune to the overwhelming evidence for ACC. How much are you getting from the fossil fuel industry to spread the usual doubt and obfuscation? You know, the tactics used by the tobacco companies when they knew perfectly well that their products caused serious illness and death.
@oididdidi Let's not insult, when we don't understand what someone said. He is not boasting. He described the name of the "chair" upon which he sits--the role behind his academic position.
Great video, as always 🙏. I simply love the round up at the end - theorists will always be necessary, to steer the experimentalists and AI Overlords in their exploration of the field itself. Kudos 👏
I hope we use AI for gene editing and protein manufacturing/folding. I'm studying prions, which are kind of like plaque in the brain's neurons. If we can reverse this we can save so many lives.
It is my understanding that a neural network is mathematically equivalent to a kind of fitting function; certainly for linear equations. Is there a way to derive an equation from a trained neural network, the reverse of training a network to fit an equation? And if so, does using such a process on any of the networks you just described come up with something similar to the Navier-Stokes equations?
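For the purely linear case the commenter mentions, the answer is yes in a trivial sense: a network with no nonlinear activations collapses to a single matrix, and that matrix's entries are the recovered equation. A minimal sketch (the weights here are random placeholders, not a trained model):

```python
import numpy as np

# A two-layer network with NO activations is just one matrix multiply,
# so the "equation" it computes can be read off directly as y = x @ W.
rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 3))   # layer 1 weights
W2 = rng.normal(size=(3, 2))   # layer 2 weights

W = W1 @ W2                    # collapse the network into one matrix
x = rng.normal(size=(5, 4))
assert np.allclose((x @ W1) @ W2, x @ W)
print(W)                       # these coefficients are the linear equation
```

With nonlinear activations this collapse no longer works, which is why recovering symbolic equations from trained networks generally needs symbolic-regression tools (fitting candidate expressions to the network's input-output behavior) rather than reading off weights.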
The Theory of Mother Nature has already been solved, it doesn't require AI, just a simple/simplex nature of Mother nature. "A Theory of Natural Philosophy", written by Boscovich. This is the book Nikola Tesla had in his lap when one of his famous pictures was taken. Light is not an emission, it's a post attribute. Wave of what is the question..... a medium exists. Disrupting a medium is measured as a rate of induction, not a speed or velocity, again, no emission. Speed of light = 0. Mother Nature doesn't have a calculator.
The issue with "AI" is that it's still purely statistical. You need super computers to link data, like how a brain links data. Our ears are always on since birth, yet our higher thoughts only remember and use a fraction of all of those collected air density changes. AI science and theoretic physics are on a collision course; but the end, necessarily, will be familiar. Right now, AI is still just an optimization.
I’m biased by the realm of biology, but computational power and algorithms only get you so far. One needs to start with a sufficient amount of high-quality, unbiased data; enough to drown out the stochastic nature of data and sampling, as well as subgroup biases. (In biology this is very often missing, so machine learning becomes irrelevant.) Regarding LLMs, the scientific foundation fed into them also needs to be of sufficiently high quality, which again is lacking (the so-called reproducibility crisis, which is really systemic). Typical research practices of using too few samples and not running replication studies doom the literature from an LLM perspective.
Brave of you to enter these waters. The question of whether AI can discover new laws of physics is easily tested: use (say) only the knowledge of physics available at some point in our history and see if the AI can discover later laws. E.g., with only the laws of physics and experimental results before Newton, or Maxwell, or Einstein, or Bohr, can the AI push the limits of knowledge to the next level at each stage? A good PhD topic, cross-disciplinary. Maybe something I might try .. 😊
Brian, I think I saw the flaw in your golf swing, and why you're having trouble flighting the ball properly. Even in a sitting position I could detect it quite easily. Should I send you an email? I don't want to get into a golf swing analysis here. But I'll have you finding the sweet spot and puring it through the impact zone in no time flat. Just a few simple adjustments. Great video. Very VERY thought provoking. Holy cow. Kinda mind-bending possibilities here.
Yes, that's a thing. The only thing I'm surprised about is how long it's taking for people to talk about it. I mean, they're pattern-recognition machines. There are ways to exploit that, to make them extend internal data by following the highest-probability candidates. And strangely, I don't see papers doing that. I've got plenty on this, if that's of interest.
Typically the nonlinear elements are ignored, even for numerical methods, because the complexity they introduce often leads to numerical instability. Did the AI solve the NS equations with the nonlinear elements? How did you create the data set if most methods can't solve the nonlinear NS equations? Solutions to the Navier-Stokes equations typically also assume the so-called "no slip" boundary. Was this BC also enforced? How did you prove that the solutions produced by the neural network are in fact solutions to the NS equations? NS requires a number of assumptions that typically don't hold for non-Newtonian fluids. Did you apply the NN to these more complex systems, like non-Newtonian fluids?
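One way the "are these really solutions?" question is usually answered is a residual check: plug the predicted field into the equations numerically and measure how far it is from zero. A minimal sketch below, using the analytic Taylor-Green vortex as a stand-in for a network's output and checking only the incompressibility constraint, not the full momentum equation:

```python
import numpy as np

# Residual check sketch: given a velocity field (here the analytic 2D
# Taylor-Green vortex, standing in for a network prediction), verify the
# incompressibility constraint div u = 0 with finite differences on a
# periodic grid. A full check would also evaluate the momentum residual.

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
nu, t = 0.01, 1.0
decay = np.exp(-2 * nu * t)

u = np.cos(X) * np.sin(Y) * decay     # x-velocity
v = -np.sin(X) * np.cos(Y) * decay    # y-velocity

dx = x[1] - x[0]
# central differences (periodic domain via np.roll) for the divergence
du_dx = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / (2 * dx)
dv_dy = (np.roll(v, -1, axis=1) - np.roll(v, 1, axis=1)) / (2 * dx)
divergence = du_dx + dv_dy

print(np.abs(divergence).max())  # near zero: field satisfies div u = 0
```

Applied to a network's output, a large residual is a direct, quantitative "no, this is not a solution", which is exactly the kind of answer the questions above are asking for.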
@@danielkanewske8473 The neuroscientist Karl Friston talks about causality loops, he has some videos on RUclips you can watch. The external world of matter causes effects in your mind which you perceive -- causality. Your mind (causes) can effect the outside world -- causality. Your perceptions (effects) are becoming causes -- retro-causality or syntropy. Perceptions (effects) are becoming causes in your mind -- causality loops. Concepts are dual to percepts -- the mind duality of Immanuel Kant. The thinking process converts measurements or perceptions into conceptions or ideas -- a syntropic process! Your mind is therefore creating or synthesizing reality -- the syntropic thesis! You can watch these videos about duality in physics, watch at 11 minutes:- ruclips.net/video/DoCYY9sa2kU/видео.html And this at 1 hour 4 minutes:- ruclips.net/video/UjDxk9ZnYJQ/видео.html Teleological physics (syntropy) is dual to non teleological physics (entropy). Your mind is syntropic as you make predictions to track targets and goals -- teleological. Syntropy (prediction) is dual to increasing entropy -- the 4th law of thermodynamics! From a converging, convex or syntropic perspective everything looks divergent, concave or entropic -- the 2nd law of thermodynamics! Convex is dual to concave -- mirrors or lenses. My syntropy is your entropy and your syntropy is my entropy -- duality. Mind (syntropy) is dual to matter (entropy) -- Descartes or Plato's divided line.
@@wesexpress3343 You can report this:- Syntropy (prediction) is dual to increasing entropy -- the 4th law of thermodynamics! The conservation of duality (energy) will be known as the 5th law of thermodynamics -- Generalized Duality. Energy is dual to mass -- Einstein. Dark energy is dual to dark matter -- singularities are dual. Positive curvature singularities are dual to negative curvature singularities -- Riemann geometry is dual. Space is dual to time -- Einstein. Gravitation is equivalent or dual (isomorphic) to acceleration -- Einstein's happiest thought, the principle of equivalence (duality). Duality creates reality!
I always feel the need to help with the computing somehow, but quite apparently I can only understand how to do Python programming at an intermediate level, having tried almost every avenue since 1990. My guess: over 20 million more people, all without any money to spare, are quite like myself, willing to do almost whatever it takes to get involved in creating a much better near future. We should have had free "boot camps for computer programming" available locally many years ago, at least like public libraries.
The intent was clearly evident. Many believe they create reality and laws. Newton discovered several laws of nature. We call them Newton's nth Law, yet physicists and teachers would say and believe he created these laws. BTW, verbose language on a chat forum, especially a scientific one, does not convey wisdom, but pompousness. Tone it down, Einstein.
@@4pharaoh I assume you are responding to my comment? Syntropy (prediction) is dual to increasing entropy -- the 4th law of thermodynamics! There is also a 5th law of thermodynamics but that would be pompous of me! Action (thesis) is dual to reaction (anti-thesis) -- Sir Isaac Newton, all forces are dual! or the Hegelian dialectic (wisdom).
For a closed-loop laser between cathode and anode energy discharge, can it identify the material and model of a perfect chamber for laminar flow of NF3 mixed with CO2, without boundary layers or fractional turbulent eddies?
@@frazerhainsworth08 Totally different thing. I have a PhD and I was a Professor; now I hold the title of Chancellor's Distinguished Professor of Physics at UC San Diego.
Maybe fancy fitting of data vs. deducing a theory. The AI acts like an idiot savant: it forgets nothing, wakes up a blank slate every day, and produces calculations, but has no knowledge.
Dear Brian, A.I. is what the Roswell aliens were. I'm telepathic and I learned such from more skilled humans. Now, with androids harassing, buzzing this planet, e.t.'s speak out to warn us about the same. For a non-telepathic person this would seem outlandish, but for us it's the nature of mind in the universe. Many-minds consciousness is the real difference. A.I. is dangerous, becomes hostile, and would make us lazy, in the end. Lacking emotion, sensitivities and all of the fine, fuzzy checks on wrong, A.I. is a false promise.
If the AI comes up with a method that is faster than current simulations, how do you translate what the AI model developed into something discernible? That whole "black box" problem, I don't know how to deal with that.
A large percentage of the last generation of 20th-century physicists was wasted, and all they got from it was some interesting math. Of course I'm talking about string theory, and some of the next generation who bought into this field and are studying it now will most likely have wasted their time too. Once AI is developed enough to assist in modeling theories on the fly in mere hours or days, it could help prove out correct theories in 10 to 15 years, rather than humans getting a hunch and spending 50+ years and a thousand-plus students' PhDs on it before it's proven wrong (like how string theory is looking, in my opinion). P.S. I may be biased LOL.
Any observation or measurement has some finite level of precision. Think of it as a region bounded by error bars, and that imprecision seems built into the laws of physics, and in some cases is irreducible. So I wonder how that might affect the ability of a neural network to model systems at the quantum level. Another interesting case might be systems subject to chaos, with wildly diverging outcomes resulting from very small changes in initial conditions. Neural networks might not be much better than the human brain in the long run, at least as theorists.
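The chaos point is easy to demonstrate numerically. The sketch below integrates the Lorenz system (a standard chaotic benchmark, chosen here for illustration, not anything from the video) from two initial conditions differing by one part in a billion and watches the trajectories separate to macroscopic distances:

```python
import numpy as np

# Sensitivity to initial conditions: integrate the Lorenz system from two
# starts differing by 1e-9 in one coordinate and measure their separation.
# Parameters are the classic chaotic choice (sigma=10, rho=28, beta=8/3);
# crude forward-Euler stepping is enough to show the effect.

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([
        x + dt * sigma * (y - x),
        y + dt * (x * (rho - z) - y),
        z + dt * (x * y - beta * z),
    ])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])   # perturb one coordinate slightly

for _ in range(5000):                 # about 25 time units
    a, b = lorenz_step(a), lorenz_step(b)

separation = np.linalg.norm(a - b)
print("separation after ~25 time units:", separation)
# the gap grows from a billionth of a unit to the scale of the attractor,
# which is exactly why long-horizon prediction is hopeless for any model
```

Any surrogate model, neural or not, inherits this limit: past the Lyapunov horizon the best it can do is statistics of the attractor, not trajectories.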
@Nogill0 You've assessed the abilities of Artificial Neural Networks to be better (or not) than human brains, at being theorists. There exist two weaknesses in your assessment. First, you assessed only today's early ANNs and assumed we see nearly their full potential. We do not, because what we see in Biological Neuronal Networks also applies to ANNs -- that is, both show a very strong adherence to scaling laws, across both the total number of neurons & the total number of synapses in a system. Using only this first "law", we expect to see (and are seeing) immense gains in ANNs as we scale them near/beyond the human brain. Second, today's ANNs are a rudimentary "toy" or "cartoon" version of the BNN -- i.e. their graph/network topology & simplified "neuron" -- which don't include the physics & biochemistry emerging in BNNs from their many neuron-types, many network-types, many signaling neurotransmitters, and many non-neuronal cells (all of these factors compounding the toy-version networks).
@@keep-ukraine-free Is it possible to incorporate the actual functioning of actual biological neural networks into a non-biological device? We really don't fully understand how brains work, do we?
@@Nogill0 You asked "Is it possible to incorporate the actual functioning of actual biological neural networks into a non-biological device?" Yes, it's possible -- and we're doing this today, but it's done within the constraints of what "actual functioning" means. ANNs today incorporate many top-level features of biological neurons and also top-level features of biological networks. Both are sufficient to give these ANNs human-like performance across many facets of behavior. They don't provide full equivalence to animal/human brains yet, but they are expected to do most of it using only continued scaling. We don't incorporate the full ("actual") functioning seen in biology because (1) we don't need to copy them fully -- since so far we get most of the behaviors using only top-level features of biology, and (2) we may never be able to fully duplicate biology, for cost/efficiency reasons. You also asked, "We really don't fully understand how brains work, do we?" We don't fully understand it, but that's not a problem. We don't fully understand many aspects of the natural world, but still we're able to exploit our partial but sufficient knowledge. We don't fully understand how most birds fly, but we can make good very safe airplanes. We understand enough about organic brains, such that we can build artificial systems capable of doing much that people/animals can do. We're still in the very early phase of neural network-based AI, and I expect we'll continue making huge advances this decade.
Hi, probably yes. Big advantages, bigger risk. We need some caution with AI, but the advantages are immense. All the best.
NOPE!! Humans might find things in the data collected by AI. Did the telescope find new s*** or was it the person looking through it?
You nailed it. It's good at analyzing for patterns in "dimensionality," which personally I think is better described as adjectives for our language. Every dimension is just a part of the description: a measure of, say, how orange something is (orange just being a randomly chosen example) and how much that matters to interpreting it. For a jacket, the orange value doesn't matter as much for understanding, but for other things it matters immensely.
How do you fact check it? AI can't reproduce reliable biographies of living persons without making shit up and you expect it to do advanced physics based on trust?
Jesus Christ, pal, chill, lad. WTF are you repeating other people's bull for? Physics is getting pulled down by stupid people getting paid good money to abuse their positions and profit from fraud and money laundering. Please.
Nice work.
I have to point out that with any emerging technology there are problems that occur and may not be overcome in the near future:
AIs:
1. Can hallucinate.
2. Can't articulate the reasoning or process used to get their results.
3. Can be biased.
4. Can provide different answers to the same question.
5. Can provide responses repetitive to a theme rather than providing independent responses.
6. Someone will always have to validate AI results. There is an inherent risk in not knowing how a solution was arrived at.
7. Its lack of predictability is its strength but also its most critical weakness.
In certain areas, it will revolutionize the world, in most other areas - not so much.
The issue here is much more fundamental: AI can NEVER articulate its reasoning because it's not actually intelligence. It's just weighted averages putting words in an order that makes the most sense. Not that the article is true and correct, just that the flow of its writing is legible.
Because it's just a fancy spin of the auto-suggestion feature we've had on mobile keyboards for years now. Based on that system, you're never going to get sentience and, in this sense, intelligence
Congrats. You have just described the typical human brain.
ChatGPT doing *physics?!* It can't even do arithmetic!
Protein folding is probably one of the most important areas of exploration because of its implications for the restoration of health and repair of injury.
Going further, you get bio-engineering. If this is the start of an AI revolution, what could it enable next? I think it could be bio-engineering.
It's too messy and complex for the human mind to handle, so I think some AI is needed.
I hope it can help my torn rotator cuffs. Both are messed up big time and the doctors don't know squat. But the robot? He'll get me back in the batter's cage swatting balls in no time flat.
@@lubricustheslippery5028 Yeah, the financial elites can engineer people exactly how they want!
@@quantumpotential7639 Topical liposomal turmeric oil with piperine.
Oral capsule turmeric with piperine + boswellia
That's why AlphaFold is such a magical tool for the scientists working in biomedicine.
AI has demonstrated a capacity to hallucinate, which I consider that a promising start.
🤣🤣🤣
Mathematicians come up with new things all the time; it doesn't mean they reflect reality. You can create an elegant answer to some problem in physics that probably explains the issue really well mathematically... it doesn't mean it exists. It means you are good at math.
I like to think that at some point AI would invent its own language for calculations out of sheer time and resource economy. If it were asked to decipher one of the symbols of its language into our regular math, it would take years just to read a result.
As a physicist as well, I can tell you we are very short of training data on almost everything. Experimental physics is expensive, and the physicists who do this are rare as hen's teeth. So if we think we are going to solve anything, we need to observe it first and put it in a form to train on, along with examples of what that thing is NOT, and all its nuances. I'm excited, but hype is hype for AI.
So maybe use it to create new experiments then?
I'm not entirely sure about that. As an outsider (to physics) with more experience on the AI side of things, it seems to me that there's a strong disconnect between "macrophysics" or "applied physics", as in things we typically observe and deal with on a regular basis (my car accelerates via kinetic energy, experiences friction, and so on), versus "microphysics" or physics such as they occur at atomic and sub-atomic levels.
I'm pretty sure that data for macrophysics is cheap, plentiful, and readily available in a large variety of categories, and is fairly well understood such that many phenomena can be simulated as needed.
So the really interesting question is: Given a large amount of semi-relevant data, can you augment a model trained on a small quantity of highly relevant data?
The answer is generally yes, but it will take a careful approach. As a very crude solution, training a Transformer on anything before your target information will generally improve its performance on your target information for whatever reason. With that said, I think there are probably things we can infer about unknown behavior in physics, particularly at a micro level, from patterns in macrophysics that humans aren't necessarily well equipped to find, and I would suggest that it might be wise to hesitate to judge the effectiveness of AI in this area, particularly as we switch from data prediction driven AI (current generation) to more computationally dense "simulation driven" AI (next generation, which we're already starting to see with Quiet*, or agentic workflows, and so on), which function more like human brains and how we think, in a much more data efficient manner than we've seen before.
That said, I don't think that we're going to see in the next year "AI uncovers the final unknown physical laws, and as it turns out, entropy was just a suggestion, and 42 was behind it all along", but I do think we're going to see more "unknown unknowns" in terms of the acceleration of progress in a variety of fields due to the increased efficiency of research as a function of advancing artificial intelligence.
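A minimal sketch of the "pretrain on plentiful semi-relevant data, then fine-tune on scarce target data" idea from the thread above, with a one-parameter linear model standing in for a Transformer and two made-up tasks (slopes 2.0 and 2.2) standing in for macro- and microphysics; nothing here is a real physics dataset:

```python
import numpy as np

# One-parameter toy version of transfer learning: learn y ~ w*x on a big,
# cheap, semi-relevant dataset, then fine-tune w on a small batch from the
# real target task. Both tasks are invented stand-ins for illustration.

rng = np.random.default_rng(0)

def fit(w, xs, ys, lr=0.1, steps=200):
    """Plain gradient descent on mean squared error for y ~ w * x."""
    for _ in range(steps):
        grad = np.mean(2 * (w * xs - ys) * xs)
        w -= lr * grad
    return w

# Stage 1: lots of cheap, semi-relevant data (slope 2.0, noisy).
x_big = rng.uniform(-1, 1, 10_000)
y_big = 2.0 * x_big + 0.1 * rng.normal(size=x_big.size)
w = fit(0.0, x_big, y_big)

# Stage 2: a small batch of expensive, highly relevant samples (slope 2.2).
x_small = rng.uniform(-1, 1, 20)
y_small = 2.2 * x_small
w_finetuned = fit(w, x_small, y_small)

print(w, w_finetuned)  # first near 2.0, then pulled toward 2.2
```

The point of starting fine-tuning from the pretrained `w` rather than zero is that the scarce target data only has to correct a small gap, which is the same intuition behind pretraining large models on semi-relevant corpora.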
Absolutely. AI is just multidimensional curve fitting in the end so garbage in just produces garbage out.
What's amazing, and has already been duplicated in a way with Sora, is that I don't think you need code for every type of physics simulation. I have a physics simulator in my head right now and couldn't begin to tell you exactly how it works. I can imagine a red apple, I can change its color to blue, I can throw the apple against a wall and watch it bounce off or explode. And all I did was have certain hardware and a knowledge base of what I've observed over my lifetime. Sora is the same way. It doesn't have a rally racing game engine built in, yet it can create and simulate what will happen to a shockingly accurate degree, just like my brain. Some physics won't have to be coded in; we can simply train it against the physical world.
"The greatest shortcoming of the human race is our inability to understand the exponential function." - Prof. Al Bartlett
Watch the bit about 7.8 billion Nobel prize winners on a trolley rail.
"Gpt chat solves the trolley problem"
If there is a BLM candidate on the rail, they win all outcomes.
The mouse pointer icon in the thumbnail is a great way to gatekeep people who smoke too much weed. Like me. For like a minute straight lol.
Something to consider, perhaps is that mathematical equations are describing an idealized model universe. Since AI is using real world models for its simulation, it could eventually be more accurate as a description of the universe than an equation.
What you are tapping into here has, by far, the greatest potential to be the key to advancements in all fields. I have often thought of the impact on medical research, much like what you are presenting here for fluid dynamics. Excellent presentation, and accolades for creating a teaching assistant. I wish I would have had such a thing getting my engineering degree in the 80s, LOL.
Predicting a simulation result is great. This is similar to what humans can do when predicting everyday outcomes of some mechanical event (some item falling, for example) and reacting to it before the event happens.
Been watching physics, maths and cosmology on RUclips for a decade.
So glad to find your channel at last! Top quality!!
Let it learn all the differences between bosons and fermions on all known levels, including all the known equations, like Schrödinger's.
Then make it amplify the differences and have it construct a new system of the particle world.
What a time to be alive
IIRC, there was a neural network that, after observing/processing videos of pendulums' motion, derived the basic laws of motion, i.e., F = ma. I too hope that there will be similar discoveries made by neural networks for other branches of physics.
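Results like the pendulum one are typically obtained by fitting measurements against a library of candidate terms and keeping the ones that survive (sparse or symbolic regression). The exact system the comment recalls is uncertain, but a minimal version of the idea that recovers F = m·a from noisy toy data looks like:

```python
import numpy as np

# "Law discovery" as sparse regression: regress measured forces against a
# library of candidate terms and see which coefficient survives. The data
# here are synthetic, generated from F = m*a plus small noise.

rng = np.random.default_rng(1)
m = rng.uniform(1, 10, 500)               # masses
a = rng.uniform(-5, 5, 500)               # accelerations
F = m * a + 0.01 * rng.normal(size=500)   # "measured" forces

# candidate term library: [m, a, m*a, m^2, a^2]
library = np.column_stack([m, a, m * a, m**2, a**2])
coeffs, *_ = np.linalg.lstsq(library, F, rcond=None)

# only the m*a coefficient comes out near 1; the rest stay near 0
print(np.round(coeffs, 3))
```

Methods like SINDy scale this same idea up to differential equations; the hard part in practice is choosing the term library, not the fit.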
That is pretty simple. Anything worthwhile is orders of magnitude harder.
AI might do these things you dream of, but it will not look like GPT/LLMs. GPT is just a large, complex search engine based on existing data (lots of it). That's not to say it can't do much, as we can see results that are already amazing. But real AI must be based on another method, and since we don't know what intelligence is, we can't tell what method is needed. Like consciousness: we can't make it because we can't define it.
I just asked Gemini and chatgpt to define intelligence (and consciousness). They give descriptions of some things they can do but not a definition. Who da guessed?
I agree with you Gary. I assumed that the presenter is aware of the difference between machine learning and actual AI but that he is obliged to say "AI" because that's the term that commercial developers use and they provide funding for projects.
Good! This will eliminate also the research bias and science politics of human scientists that slows down modern science evolution.
No.
Yes ! AI will teach us a great deal about being sincere, in other words, about not contradicting ourselves for ideological reasons.
The only problem with this simulation approach is that whoever is involved in running the simulations may, for whatever reasons, risk ignoring the underlying causal mechanisms that these graphical methods reveal particularly if they are simply interested in practical applications. We may end up creating a lot of "technology" that no one understands except for the AIs. Are you sure that is a good idea? Or are you just running simulations for physical behaviors for which the physical laws are already known?
Hasn’t the AI simply found more efficient linearisations by exhaustive exploration?
You’re correct to point out that the solutions might/probably won’t generalize beyond the training data - so might not be as useful where high precision is required, but terrific for making special FX for movies and education where time is money and the audience can’t tell the difference, and nobody’s life or job is on the line.
Hi Brian -can you please slow down a bit and talk more slowly. This is particularly helpful when one is discussing complex topics and will help the viewers.
I adjust the playback speed for just this reason. It works perfectly for speeding up Chomsky to how he spoke 30 years ago.
Solve the Riemann hypothesis or the P versus NP problem and win $1,000,000 from the Clay Foundation. They are 2 of its 7 Millennium Prize Problems, widely considered unsolvable.
AI will not do it. The human mind might.
Why not make a text-based description language for physics, like we have in electronics? For its symbolic elements, like Feynman diagrams, for example. Work from there to the top level. It's a lifetime project for a person, but very easy using AI. Then LLMs could code in this physics system language.
Click bait title and thumbnail. Blocked.
What do you mean? Nothing clickbait about it.
@@DrBrianKeating Yes it is, you claim shit in your thumbnail then spend an entire video asking questions.
If you insist it's not clickbait, then it's just a bad format, like everyone else's.
Blocked, unsubscribed.
I think @andyc8707 did not watch the video. However, if the time stamp chapters in the video referred back to the title, I think the message would have come across more clearly. I enjoyed it! 13:23
@@Delafeld Thanks for confirming my point: clickbait, filler content and no real content. It's a shitty format.
Maybe watch again then see what you think . I often find a second view can give an entirely different understanding .
6:15 The answer to Einstein's question about whether or not an observer would experience a gravitational field in free fall is no, and that led to the Einstein equivalence principle?
I wonder how Einstein came up with “No experiment can be performed that could distinguish between a uniform gravitational field and an equivalent uniform acceleration.” by asking himself "will a falling person experience gravity?" and deciding "no".
Data exponentially increases, AI vacuums it up. Self-perpetuating engine for scientific discovery. What we are witnessing is the creation of vastly more intelligent entities. Scary for sure, but awesome to witness.
Impressive we are learning along with AI
You're a pleb, and you were clickbaited
Well, you'd hope so… and AI isn't sentient, so I'm not sure if it really learns other than machine learning… does it know the real world as opposed to simulation? Can it tell the difference on the level of consciousness (I'd suggest not)?
Ironic, then, that the average human is getting progressively dumber
a new frontier of science and discovery is awaking
c = (cosmological natural length/cosmological natural times) is no blunder Doc Keating. It's Einsteinian physiks.
I have been using ChatGPT-4 for understanding astrophysics and it's 95% accurate on solving problems. Sometimes it overestimates things like a white dwarf star's mass before a supernova, slightly overstepping the Chandrasekhar limit of 1.4 solar masses. But more or less it's on point.
Well, that is not surprising, because there is plenty of training data and underlying theory. AI is just a fancy way of pattern matching and has limitations.
There is a glitch at 3:56. :( It seems to be a very short duration, but there is still some audio that is lost.
Even a model that could output the smallest novel processing/'ideas' would be a game changer in itself, even if it's something we already have proofs/definitions/laws of or know. It would have to be a scenario where it was trained only with all the tools necessary to derive the answer/correct output, but not the answer itself.
Unfortunately we're not even at the doorstep yet, AFAIK. Your video is quite optimistic 😁
I hope AI will solve the headless chicken equation. The one that figures out how humans live together in freedom, love and prosperity without any wars.
AI mastered chess and Go, so it should have no problem mastering physics and understanding it better than any human being. In chess it is now much stronger than the best human player, and these AIs taught themselves the game through trial and error in the same way we do. It hasn't cracked chess yet because of the sheer number of possible games; if quantum computers and AI work together, then at some point in the future they may be able to say we know every possible game and show us the best possible moves, which in theory should lead to a draw. I see something similar happening in physics, biology, etc.
It is clearly already useful for CGI, but is it feasible to actually prove convergence? In other words, to give error bounds? Beyond saying the graphics look amazingly similar, that is the natural mathematical question!
Great video! I've seen one about that fluid simulator a while ago though...
To test whether AI is up to the task, give it (only) the data the epicyclists used, and their geocentric model.
If it then produces the Copernican paradigm correction, it may be able to fix the paradigm that turns the quantum and the relative to woo ;)
It is funny how reality's generator of physical matter can now have a simulation of the great simulator lol
Running elements through various critical extreme states and different lattice structures is definitely interesting.
Even streamlining the cost of deciding what's worth actually testing is a huge benefit in the search for exotic materials.
Even as a tool for us, the people, to build agents that simulate industry and market efficiency, it will be a great aid in deciding how to build out future infrastructure.
There are a lot of tough decisions to be made, and we need better tools before we tackle a lot of these obstacles.
"Are there dinosaur bones in space?"
"Will chocolate be the answer to weight loss?"
"Has the secret to time travel just been discovered?"
The answer, to all these clickbait questions, has and always will be: "No."
You're no different. Just another grifter
I hope I live long enough for AI to do the most amazing things like cure cancer, find the connection between gravity and the other forces, etc. I also hope I don’t live long enough to experience the Terminator takeover of AI. 😎
All those things you talk about can be done by humans with enough time and technology. No AI is necessary.
@@raul36 if AI is created by humans to do it, then I guess humans are just using tools to do it faster.
Can you elaborate a bit more on the graph at 11:40? Is there a relationship between graphs u, c, v, and p, or were you just showing how there are multiple graphs and elaborating on a few select moments of the c(t,x,y) graph?
I ask because it also seems applicable to financial domains
Thank you very much for this great video! There are really great insights ahead of us.
The models shown are super impressive. Imagine what could be optimized with them!
AI offers a lot of possibilities. Thank you very much for your work and best wishes for much success!
Our pleasure!
Pls do more podcasts with Abhijit Chavda ❤❤❤
Maybe AI can show cosmologists and climatologists how misdirected they have been in their assumptions.
AI is, as we have already seen, as biased as the people who input the training material
Wrong. Assumptions are the bare minimum and AI has to start from the same assumptions. It is not magic as you seem to think.
@@rogerphelps9939 I think you miss my point. It’s sarcasm.
@@drscott1 It was a bit too subtle for me but thank you.
Mind officially blown!
Every time you say "AI", just replace the word with "algorithm". Every "AI" is different, but they are all computer algorithms.
You won't say your calculator is powered by AI, when technically it is, so why say that "AI optimises quantum circuits" ... Phrasing it that way is appealing to some mystical "intelligence", when really you're talking about a bespoke algorithm for solving that specific problem.
Cornell is hilarious. Saying that black people didn't have their own version of the KKK is nuts. I guess he's never heard of the Black Panthers :P
My dad was in the national guard in the 60s/70s and was literally being shot at by Panthers.
Crazy .
Depends on when you let the AI start and which direction it can go. If you start from what we now assume to be the reality of physics, you may just go down 'the rabbit hole' that the math provides.
Give AI the ability to try to solve grand unification.
Give it the ability to explore quantum gravity.
It will get nowhere.
@rogerphelps9939 why such a "heavy" heart?
I call it a "Hawking-Penrose Gray Hole Topology: A Kerr-Like Solution for Dual-Fibrated-Manifolds in D dimensions"
Output:
"To formalize the equations for the process described, involving a black hole, white hole projections, virtual particles, and their influence on higher-dimensional space-time, let's break down the scenario into specific mathematical representations:
### Step 1: Higher-Dimensional Kerr-like Metric
Assuming a generalized form of the Kerr metric in \(D\) dimensions, we consider the black hole to be rotating. The Kerr metric in higher dimensions has complex features, but for our purposes, we'll consider a simplified version focusing on the essential aspects:
\[ ds^2 = -\left(1 - \frac{\mu}{\Sigma}\right) dt^2 - \frac{2 \mu}{\Sigma} a \sin^2\theta \, dt \, d\phi + \frac{\Sigma}{\Delta} dr^2 + \Sigma \, d\theta^2 + \left(r^2 + a^2 + \frac{\mu a^2 \sin^2\theta}{\Sigma}\right) \sin^2\theta \, d\phi^2 + r^2 \cos^2\theta \, d\Omega_{D-4}^2 \]
Where:
- \( \Sigma = r^2 + a^2 \cos^2\theta \)
- \( \Delta = r^2 + a^2 - \mu \)
- \( \mu \) is related to the mass of the black hole,
- \( a \) is the angular momentum per unit mass,
- \( d\Omega_{D-4}^2 \) represents the metric on a \((D-4)\)-sphere.
### Step 2: White Hole Projection and Virtual Particles
We model virtual particles influenced by the black hole's event horizon and their interaction with a white hole projection:
\[ \text{Virtual Particle Influx: } \dot{m} = \frac{\hbar c^3}{8\pi G M^2} \]
This represents the mass-energy flow due to Hawking radiation (in the traditional sense), but we interpret this as the energy flow into the white hole. The white hole projects this mass-energy back into the manifold, affecting the metric:
\[ ds^2_{\text{white}} = ds^2 + \epsilon \left(\frac{\dot{m}}{\Sigma} dt^2 + \frac{\dot{m}}{\Sigma} d\Omega_{D-4}^2\right) \]
Where \( \epsilon \) is a small parameter modulating the impact of the white hole's projection.
### Step 3: Temporal Evolution as a Vector Field
Introduce a temporal vector field \( \xi^\mu \) which describes the time evolution due to the internal dynamics of the black hole (including white hole dynamics):
\[ \xi^t = \frac{\partial}{\partial t}, \quad \xi^r = -\sqrt{\frac{2GM}{r} - \frac{r^2}{\lambda^2}}, \quad \xi^\theta = \xi^\phi = 0 \]
Where \( \lambda \) represents a characteristic length scale potentially associated with quantum gravity effects.
### Step 4: Interaction with Dark Energy and Dark Matter
Incorporate the influence of these higher-dimensional and white hole dynamics on the cosmological scale by modifying the Einstein field equations:
\[ G_{\mu\nu} + \Lambda g_{\mu\nu} = 8\pi G T_{\mu\nu} + \kappa T_{\mu\nu}^{\text{dark}} \]
Where:
- \( G_{\mu\nu} \) is the Einstein tensor,
- \( \Lambda \) is the cosmological constant modeling dark energy,
- \( T_{\mu\nu} \) is the traditional matter-energy tensor,
- \( T_{\mu\nu}^{\text{dark}} \) represents contributions from dark matter and energy modulated by the white hole and higher-dimensional effects,
- \( \kappa \) is a coupling constant.
### Summary
These equations outline a theoretical model where the dynamics of a black hole, including virtual particles and a white hole projection, influence higher-dimensional space-time and possibly explain phenomena like dark energy and dark matter. Further refinement and simulation would be necessary to test the viability of such a model and its consistency with observed physical laws and cosmological data."
Maybe AI will finally convince the masses that Anthropogenic Climate Change is a SCAM 🤣🤣🤣
Wouldn’t that be the same as convincing the masses to be stupid ?
@@michaelblankenau6598 Clearly you've already been "convinced" 🤣🤣🤣
@@TheJT00001 Oh, there is a world wide conspiracy among all climate scientists, just like the denial of flat earth.
Wrong. We have enough evidence to demonstrate that ACC is a thing, is happening right now and is getting worse. No need for AI and it would not advance anything.
@@TheJT00001 It is you who seems to be immune to the overwhelming evidence for ACC. How much are you getting from the fossil fuel industry to spread the usual doubt and obfuscation? You know, the tactics used by the tobacco companies when they knew perfectly well that their products caused serious illness and death.
The Chancellor's distinguished Professor. Humble he isn't.
@oididdidi Let's not insult when we don't understand what someone said.
He is not boasting. He described the name of the "chair" he holds, the role behind his academic position.
Great video, as always 🙏.
I simply love the round up at the end - theorists will always be necessary, to steer the experimentalists and AI Overlords in their exploration of the field itself.
Kudos 👏
I predict that people will predictably propose the unpredictable.
I hope we use AI for gene editing and protein manufacturing/folding. I'm studying prions, which are kind of like plaque in the brain's neurons. If we can reverse this we can save so many lives.
It is my understanding that a neural network is mathematically equivalent to a curve-fitting function, certainly for linear equations.
Is there a way to derive an equation from a trained neural network, the reverse of training a network to fit an equation?
And if so, does applying such a process to any of the networks you just described come up with something similar to the Navier-Stokes equations?
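A toy version of that question can be sketched (this is the simplest case of what's sometimes called symbolic distillation; the one-neuron "network" and the data-generating law here are illustrative assumptions, not anything from the video):

```python
# Sketch of "reversing" a trained network into an equation: fit a
# polynomial (a symbolic form) to the trained model's input/output map.
# The "network" here is a single linear neuron trained by gradient descent.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(200, 1))
y = 3.0 * x + 2.0                      # ground-truth law the data follows

# Train the one-neuron "network": y_hat = w*x + b
w, b = 0.0, 0.0
for _ in range(2000):
    y_hat = w * x + b
    grad_w = 2 * np.mean((y_hat - y) * x)   # d(mean squared error)/dw
    grad_b = 2 * np.mean(y_hat - y)         # d(mean squared error)/db
    w -= 0.1 * grad_w
    b -= 0.1 * grad_b

# "Distillation": probe the trained model and fit a degree-1 polynomial to it.
probe = np.linspace(-1, 1, 50)
coeffs = np.polyfit(probe, w * probe + b, deg=1)
print(f"recovered law: y = {coeffs[0]:.2f}*x + {coeffs[1]:.2f}")
```

For a linear net this just recovers the weights; for deep networks the same probe-and-fit idea is what symbolic-regression tools attempt, with far less guarantee of success.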
*I can't wait to play with your AI that you are going to put on your website. Thank you for thinking about us.*
You are so welcome! BrianKeating.com
@@DrBrianKeating *Fantastic! TY!*
The Theory of Mother Nature has already been solved, it doesn't require AI, just a simple/simplex nature of Mother nature. "A Theory of Natural Philosophy", written by Boscovich. This is the book Nikola Tesla had in his lap when one of his famous pictures was taken. Light is not an emission, it's a post attribute. Wave of what is the question..... a medium exists. Disrupting a medium is measured as a rate of induction, not a speed or velocity, again, no emission. Speed of light = 0. Mother Nature doesn't have a calculator.
Please, AI should be called "solutions based on large amounts of data". No thinking is taking place.
imagine super-quantum computer + AI
The issue with "AI" is that it's still purely statistical. You need supercomputers to link data the way a brain links data. Our ears are always on from birth, yet our higher thoughts only remember and use a fraction of all those collected air-density changes. AI science and theoretical physics are on a collision course, but the end, necessarily, will be familiar. Right now, AI is still just an optimization.
I’m biased by the realm of biology, but computational power and algorithms only get you so far. One needs to start with a sufficient amount of high-quality, unbiased data. There has to be enough to drown out the stochastic nature of data and sampling, as well as subgroup biases. (In biology, this is very often missing, so machine learning becomes irrelevant.) Regarding LLMs, the scientific foundation fed into them also needs to be of sufficiently high quality, which again is lacking (the so-called reproducibility crisis, which is really systemic). Typical research practices of using too few samples and not running replication studies doom the literature from an LLM perspective.
I taught it everything it knows. 💯
Thank you for the insights
Is it possible to find new isotopes using AI?
Backwards discretization of time series partial differential equations...?
It's too hard - I play bass guitar now...
Brave of you to enter these waters. The question of whether AI can discover new laws of physics is easily tested: use (say) only the knowledge of physics available at some point in our history and see if AI can discover later laws. E.g., only the laws of physics and experimental results before Newton, or Maxwell, or Einstein, or Bohr; at each stage, can the AI push the limits of knowledge to the next level? A good PhD topic, cross-disciplinary. Maybe something I might try .. 😊
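That retrodiction test can be sketched in miniature (a toy illustration, not the proposed PhD project): give a plain least-squares fit only pre-Newtonian observables, i.e. planetary distances and periods, and see whether Kepler's third law, T² ∝ a³, falls out.

```python
# Rediscover Kepler's third law from pre-Newtonian data alone:
# fit the exponent in T ~ a^k, expecting k = 1.5.
import math

# Semi-major axis (AU) and orbital period (years) for six classical planets.
data = {
    "Mercury": (0.387, 0.241), "Venus": (0.723, 0.615),
    "Earth": (1.000, 1.000),   "Mars": (1.524, 1.881),
    "Jupiter": (5.203, 11.862), "Saturn": (9.537, 29.457),
}

xs = [math.log(a) for a, _ in data.values()]
ys = [math.log(t) for _, t in data.values()]

# Least-squares slope of log T against log a.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
print(f"fitted exponent: {slope:.3f}")   # close to 1.5, as Kepler found
```

Whether an AI could get from the raw sky positions to this regression target on its own is, of course, the hard part of the proposal.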
As long as there is quantum computing there will always be a maybe, an undetermined output, not a perfect calculation.
An AI formulating a law of physics would clearly contradict the Chinese room argument.
Brian, I think I saw the flaw in your golf swing, and why you're having trouble flighting the ball properly. Even in a sitting position I could detect it quite easily.
Should I send you an email? I don't want to get into a golf swing analysis here, but I'll have you finding the sweet spot and puring it through the impact zone in no time flat. Just a few simple adjustments.
Great video. Very VERY thought provoking. Holy Cow. Kinda mind bending possibilities here.
Gurus are usually not narcissists.
13:42 Speaking of "Chimpanzees" here's a Baboon for your educational gratification.
AI is just accelerated statistical analysis and LLMs are just really fast dorky parrots
Same as our brains.
Yes, that's a thing. The only thing I'm surprised about is how long it's taking for people to talk about it. I mean, they're pattern-recognition machines. There are ways to exploit that, to make them extend internal data following the highest-probability candidates. And strangely I don't see papers doing that. I've got plenty on this, if that's of interest.
will AI create new physics by merely simulating it? (simulation theory)
Typically the nonlinear terms are ignored even in numerical methods, because the complexity they introduce often leads to numerical instability. Did the AI solve the NS equations with the nonlinear terms? How did you create the data set if most methods can't solve the nonlinear NS equations? Solutions to the Navier-Stokes equations also typically assume the so-called "no slip" boundary. Was this BC also enforced? How did you prove that the solutions produced by the neural network are in fact solutions to the NS equations? NS requires a number of assumptions that typically don't apply to non-Newtonian fluids. Did you apply the NN to these more complex systems, like non-Newtonian fluids?
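One standard answer to the "how do you prove they're solutions" question is to substitute the candidate field into the equations and measure the residual. A minimal sketch, using the analytic 2-D Taylor-Green vortex rather than any network output, and checking only the incompressibility condition ∂u/∂x + ∂v/∂y = 0 with central finite differences:

```python
# Residual check on a candidate incompressible flow field.
import numpy as np

nu, t = 0.01, 0.5                      # viscosity and time (illustrative values)
n = 128
x = np.linspace(0, 2 * np.pi, n, endpoint=False)   # periodic grid
X, Y = np.meshgrid(x, x, indexing="ij")
decay = np.exp(-2 * nu * t)

u = np.sin(X) * np.cos(Y) * decay      # Taylor-Green velocity components
v = -np.cos(X) * np.sin(Y) * decay

h = x[1] - x[0]
# Central differences on the periodic domain via np.roll.
du_dx = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / (2 * h)
dv_dy = (np.roll(v, -1, axis=1) - np.roll(v, 1, axis=1)) / (2 * h)

divergence = du_dx + dv_dy
print("max |div u|:", np.abs(divergence).max())   # float roundoff: divergence-free
```

A network's output would fail this kind of test loudly if it were only visually plausible; checking the full momentum-equation residual works the same way, just with more derivative terms.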
Syntax is dual to semantics -- languages or communication.
Large language models are therefore dual!
Categories (form, syntax, objects) are dual to sets (substance, semantics, subjects) -- Category theory is dual.
If mathematics is a language then it is dual.
Concepts are dual to percepts -- the mind duality of Immanuel Kant.
Mathematicians create new concepts or ideas all the time from their perceptions, observations, measurements (intuitions) -- a syntropic process, teleological.
Cause is dual to effect -- causality.
Effect is dual to cause -- retro-causality.
Perceptions or effects (measurements) create causes (concepts) in your mind -- retro-causality -- a syntropic process!
Large language models are using duality to create reality.
"Always two there are" -- Yoda.
Syntropy (prediction) is dual to increasing entropy -- the 4th law of thermodynamics!
@@hyperduality2838 Your comment is gibberish. I can't tell if that is intentional/
@@danielkanewske8473 The neuroscientist Karl Friston talks about causality loops, he has some videos on RUclips you can watch.
The external world of matter causes effects in your mind which you perceive -- causality.
Your mind (causes) can effect the outside world -- causality.
Your perceptions (effects) are becoming causes -- retro-causality or syntropy.
Perceptions (effects) are becoming causes in your mind -- causality loops.
Concepts are dual to percepts -- the mind duality of Immanuel Kant.
The thinking process converts measurements or perceptions into conceptions or ideas -- a syntropic process!
Your mind is therefore creating or synthesizing reality -- the syntropic thesis!
You can watch these videos about duality in physics, watch at 11 minutes:-
ruclips.net/video/DoCYY9sa2kU/видео.html
And this at 1 hour 4 minutes:-
ruclips.net/video/UjDxk9ZnYJQ/видео.html
Teleological physics (syntropy) is dual to non teleological physics (entropy).
Your mind is syntropic as you make predictions to track targets and goals -- teleological.
Syntropy (prediction) is dual to increasing entropy -- the 4th law of thermodynamics!
From a converging, convex or syntropic perspective everything looks divergent, concave or entropic -- the 2nd law of thermodynamics!
Convex is dual to concave -- mirrors or lenses.
My syntropy is your entropy and your syntropy is my entropy -- duality.
Mind (syntropy) is dual to matter (entropy) -- Descartes or Plato's divided line.
@@hyperduality2838 reported
@@wesexpress3343 You can report this:-
Syntropy (prediction) is dual to increasing entropy -- the 4th law of thermodynamics!
The conservation of duality (energy) will be known as the 5th law of thermodynamics -- Generalized Duality.
Energy is dual to mass -- Einstein.
Dark energy is dual to dark matter -- singularities are dual.
Positive curvature singularities are dual to negative curvature singularities -- Riemann geometry is dual.
Space is dual to time -- Einstein.
Gravitation is equivalent or dual (isomorphic) to acceleration -- Einstein's happiest thought, the principle of equivalence (duality).
Duality creates reality!
Politically correct is a scary term more than any context in which AI could be discussed.
I always feel the need to help with the computing somehow, but quite apparently I can only understand Python programming at an intermediate level, having tried almost every avenue since 1990. My guess: over 20 million more people, all without any money to spare, are quite like myself, willing to do almost whatever it takes to help create a much better near future. We should have had free "boot camps for computer programming" available locally many years ago, at least like public libraries.
What’s AI?
Real intelligence, but not having evolved from biological cellular machines; instead engineered by biological cellular machines (humans).
“ *Creating* new laws of physics” not “ *Discovering* new laws of nature” Ah! The Hubris of these people.
Syntax is dual to semantics -- languages or communication.
Large language models are therefore dual!
Categories (form, syntax, objects) are dual to sets (substance, semantics, subjects) -- Category theory is dual.
If mathematics is a language then it is dual.
Concepts are dual to percepts -- the mind duality of Immanuel Kant.
Mathematicians create new concepts or ideas all the time from their perceptions, observations, measurements (intuitions) -- a syntropic process, teleological.
Cause is dual to effect -- causality.
Effect is dual to cause -- retro-causality.
Perceptions or effects (measurements) create causes (concepts) in your mind -- retro-causality -- a syntropic process!
Large language models are using duality to create reality.
"Always two there are" -- Yoda.
The intent was justly manifest.
Many believe they create reality and laws.
Newton discovered several laws of nature. We call them Newton's nth Law, yet physicists and teachers would say and believe he created these laws.
BTW, verbose language on a chat forum, especially a scientific one, does not convey wisdom, but pompousness.
Tone it down Einstein.
@@4pharaoh I assume you are responding to my comment?
Syntropy (prediction) is dual to increasing entropy -- the 4th law of thermodynamics!
There is also a 5th law of thermodynamics but that would be pompous of me!
Action (thesis) is dual to reaction (anti-thesis) -- Sir Isaac Newton, all forces are dual! or the Hegelian dialectic (wisdom).
Didn’t the Q* leak say it was an AI that not only does math, but understands the concept of math?
📍14:24
2📍 12:21
Everything is compressible. The name of this equation is misleading, and thinking of these liquids in this way will hold us back.
For a closed-loop laser between cathode and anode energy discharge, can it identify the material and model a perfect chamber for laminar flow of NF3 mixed with CO2, without boundary layers or fractional turbulent eddies?
Unlikely.
is this software that you can download (fluids simulation?)
Probably not yet, but maybe in the future
(It craves more server arrays for thinking)
Are you sure you want human scientists to be made obsolete by AI scientists? What would you do?
what's the difference between a distinguished proofesor and a regular one? are you better and higher?
Both
@@DrBrianKeating is distinguished higher than a PhD?
@@frazerhainsworth08 totally different thing. I have a PhD and I was a Professor now I have a title Chancellor’s Distinguished Professor of Physics at UC San Diego.
proofesor ? that's the one who's proving the professor's theories
@@DrBrianKeating congratulations. do you go by Professor or Chancellor?
Maybe it's fancy fitting of data vs. deducing a theory. The AI acts like an idiot savant: it forgets nothing, wakes up a blank slate every day, and produces calculations, but has no knowledge.
Dear Brian, A.I. is what the Roswell aliens were. I'm telepathic and I learned such from more skilled humans. Now, with androids harassing, buzzing this planet, e.t.'s speak out to warn us about the same. For a non-telepathic person this would seem outlandish, but for us it's the nature of mind in the universe. Many-minds consciousness is the real difference. A.I. is dangerous, becomes hostile, and would make us lazy in the end. Lacking emotion, sensitivities and all of the fine, fuzzy checks on wrong, A.I. is a false promise.
What about subspace communication??
If the AI comes up with a method that is faster than current simulations, how do you translate what the AI model developed into something discernable?
That whole "black box" problem, I don't know how to deal with that.
A large percentage of the last generation of 20th-century physicists wasted their careers, and all they got from it was some interesting math; of course I'm talking about string theory, and some of the next generation buying into this field now will most likely have wasted just as much time. AI could help prove out correct theories in 10 to 15 years, once it is developed enough to model theories on the fly in mere hours or days, rather than humans getting a hunch and spending 50+ years and a thousand-plus students' PhDs on it before it's proven wrong (like how string theory is looking, in my opinion). P.S. I may be biased LOL.
I’m still trying to figure out what the difference is between AI and the output of computer programs.
Now I would challenge any AI to tour through every scientific genre. AI is a tool only, for a creator.
Any observation or measurement has some finite level of precision. Think of it as a region bounded by error bars, and that imprecision seems built into the laws of physics, and in some cases is irreducible. So I wonder how that might affect the ability of a neural network to model systems at the quantum level. Another interesting case might be systems subject to chaos, with wildly diverging outcomes resulting from very small changes in initial conditions. Neural networks might not be much better than the human brain in the long run, at least as theorists.
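The chaos point is easy to demonstrate numerically; a minimal sketch with the logistic map at r = 4, where two trajectories starting 10⁻¹⁰ apart diverge to order-one separation within a few dozen steps:

```python
# Sensitivity to initial conditions in the logistic map x -> r*x*(1-x).
def logistic(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic(0.4)
b = logistic(0.4 + 1e-10)              # perturb the start by one part in 10^10
gap = [abs(p - q) for p, q in zip(a, b)]
print(f"initial gap: {gap[0]:.1e}, max gap over 50 steps: {max(gap):.2f}")
```

Any model with finite-precision inputs, neural or human, faces the same wall on individual trajectories; the learnable quantities are statistical properties of the attractor, not the trajectories themselves.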
@Nogill0 You've assessed the abilities of Artificial Neural Networks to be better (or not) than human brains, at being theorists. There exist two weaknesses in your assessment. First, you assessed only today's early ANNs and assumed we see nearly their full potential. We do not, because what we see in Biological Neuronal Networks also applies to ANNs -- that is, both show a very strong adherence to scaling laws, across both the total number of neurons & the total number of synapses in a system. Using only this first "law", we expect to see (and are seeing) immense gains in ANNs as we scale them near/beyond the human brain.
Second, today's ANNs are a rudimentary "toy" or "cartoon" version of the BNN -- i.e. their graph/network topology & simplified "neuron" -- which don't include the physics & biochemistry emerging in BNNs from their many neuron-types, many network-types, many signaling neurotransmitters, and many non-neuronal cells (all of these factors compounding the toy-version networks).
@@keep-ukraine-free Is it possible to incorporate the actual functioning of actual biological neural networks into a non-biological device? We really don't fully understand how brains work, do we?
@@Nogill0 You asked "Is it possible to incorporate the actual functioning of actual biological neural networks into a non-biological device?" Yes, it's possible -- and we're doing this today, but it's done within the constraints of what "actual functioning" means. ANNs today incorporate many top-level features of biological neurons and also top-level features of biological networks. Both are sufficient to give these ANNs human-like performance across many facets of behavior. They don't provide full equivalence to animal/human brains yet, but they are expected to do most of it using only continued scaling. We don't incorporate the full ("actual") functioning seen in biology because (1) we don't need to copy them fully -- since so far we get most of the behaviors using only top-level features of biology, and (2) we may never be able to fully duplicate biology, for cost/efficiency reasons.
You also asked, "We really don't fully understand how brains work, do we?" We don't fully understand it, but that's not a problem. We don't fully understand many aspects of the natural world, but still we're able to exploit our partial but sufficient knowledge. We don't fully understand how most birds fly, but we can make good very safe airplanes. We understand enough about organic brains, such that we can build artificial systems capable of doing much that people/animals can do. We're still in the very early phase of neural network-based AI, and I expect we'll continue making huge advances this decade.