Artificial Einstein: Did AI just do the impossible?

  • Published: 1 Jun 2024
  • Join my mailing list briankeating.com/list to win a real 4 billion year old meteorite! All .edu emails in the USA 🇺🇸 will WIN!
    Artificial intelligence has already proven its ability to produce entertaining and sometimes surprising creations, from text to images and even video. But can it learn physics? Maybe even discover new laws of physics? Today, we will venture into the fascinating intersection of artificial intelligence and physics, where computational fluid dynamics, machine learning, and even computer game design all meet.
    Key Takeaways:
    00:00:00 Intro
    00:01:04 The role of AI in quantum computing
    00:03:31 Can AI predict outcomes better than humans?
    00:07:09 A new way of simulating fluid dynamics
    00:18:33 Outro
    Additional resources:
    ➡️ Follow me on your fav platforms:
    ✖️ Twitter: / drbriankeating
    🔔 RUclips: ruclips.net/user/DrBrianKeatin...
    📝 Join my mailing list: briankeating.com/list
    ✍️ Check out my blog: briankeating.com/cosmic-musings/
    🎙️ Follow my podcast: briankeating.com/podcast
    Into the Impossible with Brian Keating is a podcast dedicated to all those who want to explore the universe within and beyond the known.
    Make sure to subscribe so you never miss an episode!
    #intotheimpossible #briankeating #chatgpt #AI
  • Science

Comments • 443

  • @DrBrianKeating
    @DrBrianKeating  1 month ago +24

    Will AI ever discover a new theory of nature? Let me know and don’t forget you can win a real meteorite 💥 when you join my free mailing list here 👉 briankeating.com/list ✉️

    • @TonyMountjoy
      @TonyMountjoy 1 month ago +3

      Discovery of that which already exists is where it has the best chance of doing something useful. It's a natural search engine. Composition is where it will always struggle, though, imo.

    • @evo1ov3
      @evo1ov3 1 month ago

      Joined and Subscribed. Umm... Probably. The stuff I've put into Google's A.I.? I'm gonna need FBI witness protection. 🤣 loljk! Love Your Show Mr/Dr Keating!!! Stay Free!

    • @4pharaoh
      @4pharaoh 1 month ago

      If AI ever does, I bet there is a smart taxi driver, or some other high-IQ non-physicist, who will provide a proof that we could have had this "discovery" 20-50 years ago, if it wasn't for the *control apparatus* in place in the physics community that prevents outsiders from presenting truths.
      Ah the Hubris of you all.

    • @MikePaixao
      @MikePaixao 1 month ago +1

      Circular Quadratic Algebra is coming after general relativity 🙂
      (I made my discoveries when I started making a simulation game where I decided to simulate everything, from players and NPCs to trees)

    • @nunomaroco583
      @nunomaroco583 1 month ago +1

      Hi, probably yes. Big advantages, bigger risks; we need some caution with AI, but the advantages are immense... all the best.

  • @danielmccarthyy
    @danielmccarthyy 1 month ago +53

    I think AI's greatest value is in analyzing huge data sets to find unknown relationships, which could lead to new physics equations, new chemical compounds, and predictive genetics.
    But I am probably wrong.

    • @DanteS-119
      @DanteS-119 1 month ago +10

      No, you're not wrong -- that's exactly where AI excels.

    • @hyperduality2838
      @hyperduality2838 1 month ago +3

      Syntax is dual to semantics -- languages or communication.
      Large language models are therefore dual!
      Categories (form, syntax, objects) are dual to sets (substance, semantics, subjects) -- Category theory is dual.
      If mathematics is a language then it is dual.
      Concepts are dual to percepts -- the mind duality of Immanuel Kant.
      Mathematicians create new concepts or ideas all the time from their perceptions, observations, measurements (intuitions) -- a syntropic process, teleological.
      Cause is dual to effect -- causality.
      Effect is dual to cause -- retro-causality.
      Perceptions or effects (measurements) create causes (concepts) in your mind -- retro-causality -- a syntropic process!
      Large language models are using duality to create reality.
      "Always two there are" -- Yoda.

    • @Blueprint4Murder
      @Blueprint4Murder 27 days ago +3

      AI excels only as a learning aid for humans, for automated processing, and for testing. So if you have a problem it can make mistakes 95% of the time, but it can be programmed to eventually narrow in on a working model completely automatically - that is the value. Which is why, in the new chip set presentations, they showed AI training programs for training labor robots. Whether they actually work at large scale we will not know for years. It will almost certainly be pushed to market flawed, like nearly every program. Sadly, bugs in these programs have very high real-world costs.

    •  27 days ago +1

      You nailed it. It's good at analyzing for patterns in "dimensionality", which I personally think is better described as adjectives in our language. Every dimension is just part of the description: a measure of how orange something is, relatively speaking (orange just being a randomly chosen example), and how much that matters to interpreting it. For a jacket the orange value doesn't matter as much for understanding, but for other things it matters immensely.

    • @katewerk
      @katewerk 23 days ago

      How do you fact check it? AI can't reproduce reliable biographies of living persons without making shit up and you expect it to do advanced physics based on trust?

  • @aquietpatron7281
    @aquietpatron7281 25 days ago +5

    Nice work.
    I have to point out that, as with any emerging technology, there are problems that occur and may not be overcome in the near future.
    AIs:
    1. Can hallucinate.
    2. Can't articulate the reasoning or process used to get their results.
    3. Can be biased.
    4. Can provide different answers to the same question.
    5. Can provide responses repetitive to a theme rather than providing independent responses.
    6. Someone will always have to validate AI results. There is an inherent risk in not knowing how a solution was arrived at.
    7. Their lack of predictability is their strength but also their most critical weakness.
    In certain areas it will revolutionize the world; in most other areas, not so much.

    • @mossy3565
      @mossy3565 20 days ago

      The issue here is much more fundamental: AI can NEVER articulate its reasoning because it's not actually intelligence. It's just weighted averages putting words in an order that makes the most sense. Not that what it writes is true and correct, just that the flow of its writing is legible.
      Because it's just a fancy spin on the auto-suggestion feature we've had on mobile keyboards for years now. Based on that system, you're never going to get sentience and, in this sense, intelligence.

  • @noam65
    @noam65 1 month ago +30

    Protein folding is probably one of the most important areas of exploration because of its implications for the restoration of health and the repair of injury.

    • @lubricustheslippery5028
      @lubricustheslippery5028 28 days ago +2

      Going further, you get bio-engineering. If this is the start of an AI revolution, what could it enable and what comes next? I think it could be bio-engineering.
      It's too messy and complex for the human mind to handle, so I think some AI is needed.

    • @quantumpotential7639
      @quantumpotential7639 28 days ago +2

      I hope it can help my torn rotator cuffs. Both are messed up big time and the doctors don't know squat. But the robot? He'll get me back in the batting cage swatting balls in no time flat.

    • @ForageGardener
      @ForageGardener 27 days ago

      @@lubricustheslippery5028 yeah, the financial elites can engineer people exactly how they want!

    • @ForageGardener
      @ForageGardener 27 days ago

      @@quantumpotential7639 topical liposomal turmeric oil with piperine.
      Oral turmeric capsule with piperine + boswellia.

    • @martiddy
      @martiddy 27 days ago +3

      That's why AlphaFold is such a magical tool for the scientists working in biomedicine.

  • @TheMrCougarful
    @TheMrCougarful 21 days ago +4

    AI has demonstrated a capacity to hallucinate, which I consider a promising start.

  • @edgarjeparchin2382
    @edgarjeparchin2382 1 month ago +30

    I am working on a quantum physics LLM. What I find hardest is the reinforcement part, as the initial training data is pretty much standardised. That said, I want to try to reinforce it with literature related to the funkier side of possibilities, such as anti-gravity or maybe time travel.

    • @trevorwhitechapel2403
      @trevorwhitechapel2403 1 month ago

      We have already been able to accelerate particles in a controlled setting to within a fraction of a fraction of the speed of light. For a next step towards time travel, what if we build a particle accelerator whose inside "track" is big enough in diameter to accelerate a 12oz can of Coca-Cola to that speed? Huh? Huh? What new laws of physics would the cola be exhibiting right after you first slow it back down and pop it open? What would it even taste like at that point? (I don't land on "can of soda" randomly. They are the optimized shape and the mass seems like it would be both ambitious but doable.) ♠

    • @quantumpotential7639
      @quantumpotential7639 28 days ago +3

      Yes, feed the robot dessert FIRST so it gets really excited and a sugar boost, with vivid theoretical fancy fantasies, so that once it sits down for the meat-and-potatoes main course (with a salad, of course), it can get busy producing amazing out-of-this-world stuff we never imagined before, but it now has. Fancy fantasies and AI are like the two pillars of Solomon's Temple. Now let us pray for divine guidance, as this is powerful stuff and we gotta be careful here going forward. 🙏

    • @honkytonk4465
      @honkytonk4465 28 days ago

      Really?

    • @635574
      @635574 27 days ago +2

      Imo LLMs will always have trouble differentiating fact from fiction because it's all just text and they have no experience outside of that.

  • @stephenallen224
    @stephenallen224 26 days ago +3

    Mathematicians come up with new things all the time; it doesn't mean they reflect reality. You can create an elegant answer to some problem in physics that probably explains the issue really well mathematically... it doesn't mean it exists. It means you are good at math.

  • @Killer_Kovacs
    @Killer_Kovacs 1 month ago +8

    Watch the bit about 7.8 billion Nobel prize winners on a trolley rail.
    "ChatGPT solves the trolley problem"

  • @dmitrychirkov4206
    @dmitrychirkov4206 26 days ago +3

    I like to think that at some point AI would invent its own language for calculations, due to sheer time and resource economy. If it were asked to decipher one of the symbols of its language into our regular math, it would take years just to read the result.

  • @cheradenine1980
    @cheradenine1980 27 days ago

    Been watching physics, maths and cosmology on RUclips for a decade.
    So glad to find your channel at last! Top quality!!

  • @martymcfly7628
    @martymcfly7628 29 days ago +5

    As a physicist as well, I can tell you we are very short of training data on almost everything. Experimental physics is expensive and the physicists who do it are as rare as hen's teeth, so if we think we are going to solve anything, we need to observe it first and put it in a form to train on - alongside what that thing is NOT as well, and all its nuances. I'm excited, but hype is hype when it comes to AI.

    • @das_it_mane
      @das_it_mane 27 days ago +1

      So maybe use it to create new experiments then?

    • @novantha1
      @novantha1 27 days ago +3

      I'm not entirely sure about that. As an outsider (to physics) with more experience on the AI side of things, it seems to me that there's a strong disconnect between "macrophysics" or "applied physics", as in things we typically observe and deal with on a regular basis (my car accelerates via kinetic energy, experiences friction, and so on), versus "microphysics" or physics such as they occur at atomic and sub-atomic levels.
      I'm pretty sure that data for macrophysics is cheap, plentiful, and readily available in a large variety of categories, and is fairly well understood such that many phenomena can be simulated as needed.
      So the really interesting question is: Given a large amount of semi-relevant data, can you augment a model trained on a small quantity of highly relevant data?
      The answer is generally yes, but it will take a careful approach. As a very crude solution, training a Transformer on anything before your target information will generally improve its performance on your target information for whatever reason. With that said, I think there are probably things we can infer about unknown behavior in physics, particularly at a micro level, from patterns in macrophysics that humans aren't necessarily well equipped to find, and I would suggest that it might be wise to hesitate to judge the effectiveness of AI in this area, particularly as we switch from data prediction driven AI (current generation) to more computationally dense "simulation driven" AI (next generation, which we're already starting to see with Quiet*, or agentic workflows, and so on), which function more like human brains and how we think, in a much more data efficient manner than we've seen before.
      That said, I don't think that we're going to see in the next year "AI uncovers the final unknown physical laws, and as it turns out, entropy was just a suggestion, and 42 was behind it all along", but I do think we're going to see more "unknown unknowns" in terms of the acceleration of progress in a variety of fields due to the increased efficiency of research as a function of advancing artificial intelligence.

    • @rogerphelps9939
      @rogerphelps9939 25 days ago

      Absolutely. AI is just multidimensional curve fitting in the end so garbage in just produces garbage out.

  • @DankUser
    @DankUser 27 days ago +3

    The mouse pointer icon in the thumbnail is a great way to gatekeep people who smoke too much weed. Like me. For like a minute straight lol.

  • @jamiethomas4079
    @jamiethomas4079 1 month ago +2

    What's amazing, and has already been duplicated in a way with Sora, is that I don't think you need code for every type of physics simulation. I have a physics simulator in my head right now and couldn't begin to tell you exactly how it works. I can imagine a red apple, I can change its color to blue, I can throw the apple against a wall and watch it bounce off or explode. And all I did was have certain hardware and a knowledge base of what I've observed over my lifetime. Sora is the same way. It doesn't have a rally racing game engine built in, yet it can create and simulate what will happen to a shockingly accurate degree, just like my brain. Some physics won't have to be coded in; we can simply train it against the physical world.

  • @sean2susini
    @sean2susini 27 days ago +1

    Something to consider, perhaps, is that mathematical equations describe an idealized model universe. Since AI uses real-world models for its simulation, it could eventually be more accurate as a description of the universe than an equation.

  • @user-tk2jy8xr8b
    @user-tk2jy8xr8b 24 days ago +2

    What a time to be alive

  • @4thorder
    @4thorder 27 days ago +2

    By far, the potential you are tapping into here is the key to advancements in all fields. I have often thought of the impact on medical research, much like what you are presenting here for fluid dynamics. Excellent presentation, and accolades for creating a teaching assistant. I wish I had had such a thing when getting my engineering degree in the '80s, LOL.

  • @vast634
    @vast634 19 days ago

    Predicting a simulation result is great. This is similar to what humans can do when predicting the everyday outcome of some mechanical event (an item falling, for example) and reacting to it before the event has happened.

  • @patrickmchargue7122
    @patrickmchargue7122 1 month ago +2

    IIRC, there was a neural network that, after observing/processing videos of pendulums' motion, derived the basic laws of motion, i.e. F = ma. I too hope that similar discoveries will be made by neural networks in other branches of physics.
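
    A minimal sketch of that kind of law recovery - assuming synthetic (mass, acceleration, force) samples and plain least squares over a few candidate terms, not the video-trained network described above:

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic "observations": random masses and accelerations, with F = m*a plus noise
        m = rng.uniform(0.5, 5.0, 200)            # kg
        a = rng.uniform(-10.0, 10.0, 200)         # m/s^2
        F = m * a + rng.normal(0.0, 0.05, 200)    # N, with measurement noise

        # Candidate terms a symbolic-regression step might propose
        names = ["m", "a", "m*a", "m^2", "a^2"]
        X = np.column_stack([m, a, m * a, m**2, a**2])
        coef, *_ = np.linalg.lstsq(X, F, rcond=None)

        for name, c in zip(names, coef):
            print(f"{name:4s} {c:+.3f}")
        # The m*a coefficient comes out near 1 and the others near 0, i.e. F ≈ m*a.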

    • @rogerphelps9939
      @rogerphelps9939 25 days ago

      That is pretty simple. Anything worthwhile is orders of magnitude harder.

  • @neovxr
    @neovxr 23 days ago +2

    Let it learn all the differences between bosons and fermions on all known levels, including all the known equations like Schrödinger's.
    Then make it amplify the differences and have it construct a new system of the particle world.

  • @Zirrad1
    @Zirrad1 21 days ago +1

    Hasn’t the AI simply found more efficient linearisations by exhaustive exploration?
    You’re correct to point out that the solutions might/probably won’t generalize beyond the training data - so might not be as useful where high precision is required, but terrific for making special FX for movies and education where time is money and the audience can’t tell the difference, and nobody’s life or job is on the line.

  • @user-li7ec3fg6h
    @user-li7ec3fg6h 1 month ago +1

    Thank you very much for this great video! There are really great insights ahead of us.
    The models shown are super impressive. What all could be optimized with them!
    AI offers a lot of possibilities. Thank you very much for your work and best wishes for much success!

    • @DrBrianKeating
      @DrBrianKeating  1 month ago +2

      Our pleasure!

  • @liberty-matrix
    @liberty-matrix 22 days ago +2

    "The greatest shortcoming of the human race is our inability to understand the exponential function." - Prof. Al Bartlett

  • @garylcamp
    @garylcamp 21 days ago +2

    AI might do these things you dream of, but it will not look like GPT/LLMs. GPT is just a large, complex search engine based on existing data (lots of it). That's not to say it can't do much, as we can see results that are just amazing already. But real AI must be based on another method, and we don't know what intelligence is, so we can't tell what method is needed. Like consciousness, we can't make it because we can't define it.
    I just asked Gemini and ChatGPT to define intelligence (and consciousness). They give descriptions of some things they can do, but not a definition. Who'da guessed?

    • @richardhall5489
      @richardhall5489 21 days ago

      I agree with you Gary. I assumed that the presenter is aware of the difference between machine learning and actual AI but that he is obliged to say "AI" because that's the term that commercial developers use and they provide funding for projects.

  • @Charles-Darwin
    @Charles-Darwin 28 days ago

    Even a model that could output the smallest of novel processing/'ideas' would be a game changer in itself, even if it's something we already have proof/definitions/laws of or know. It would have to be a scenario where it was trained only with all the tools necessary to derive the answer/correct output, but not the answer itself.
    Unfortunately we're not even at the doorstep yet, afaik. Your video is quite optimistic 😁

  • @dandantheideasman
    @dandantheideasman 27 days ago

    Great video, as always 🙏.
    I simply love the round up at the end - theorists will always be necessary, to steer the experimentalists and AI Overlords in their exploration of the field itself.
    Kudos 👏

  • @ConnoisseurOfExistence
    @ConnoisseurOfExistence 26 days ago +1

    Great video! I've seen one about that fluid simulator a while ago though...

  • @greatgatsby6953
    @greatgatsby6953 26 days ago +2

    Hi Brian - can you please slow down a bit and talk more slowly? This is particularly helpful when one is discussing complex topics and will help the viewers.

    • @ericpmoss
      @ericpmoss 26 days ago

      I adjust the playback speed for just this reason. It works perfectly for speeding up Chomsky to how he spoke 30 years ago.

  • @KyleBaran90
    @KyleBaran90 28 days ago +1

    Can you elaborate a bit more on the graph at 11:40? Is there a relationship between graphs u, c, v, and p, or were you just showing how there are multiple graphs and elaborating on a few select moments of the c(t,x,y) graph?
    I ask because it also seems applicable to financial domains

  • @user-rl4tu8yd6e
    @user-rl4tu8yd6e 24 days ago +1

    The only problem with this simulation approach is that whoever is involved in running the simulations may, for whatever reason, risk ignoring the underlying causal mechanisms that these graphical methods reveal, particularly if they are simply interested in practical applications. We may end up creating a lot of "technology" that no one understands except for the AIs. Are you sure that is a good idea? Or are you just running simulations for physical behaviors for which the physical laws are already known?

  • @ElliotSchreuders-bf1dl
    @ElliotSchreuders-bf1dl 21 days ago +1

    Thank you for the insights

  • @justinalvarado7351
    @justinalvarado7351 29 days ago +1

    I have been using ChatGPT-4 for understanding astrophysics and it's 95% on solving problems. Sometimes it overestimates things like a white dwarf star's mass before supernova, slightly overstepping the Chandrasekhar limit of 1.4 solar masses. But more or less it's on point.

    • @rogerphelps9939
      @rogerphelps9939 25 days ago

      Well that is not surprising, because there is plenty of training data and underlying theory. AI is just a fancy way of pattern matching and has limitations.

  • @mrhassell
    @mrhassell 1 month ago +2

    Solve the Riemann hypothesis or the P versus NP problem and win $1,000,000 from the Clay Mathematics Institute. They are 2 of its 7 Millennium Prize Problems, widely considered all but "unsolvable".

    • @rogerphelps9939
      @rogerphelps9939 25 days ago

      AI will not do it. The human mind might.

  • @dadsonworldwide3238
    @dadsonworldwide3238 1 month ago +1

    It is funny how reality's generator of physical matter can now have a simulation of the great simulator, lol.
    Running elements through various critical extreme states and different lattice structures is definitely something interesting.
    Even streamlining the cost of deciding what's worthy of actual testing is a huge benefit in the search for exotic materials.
    Even as a tool for we the people to build agents that simulate industry and market efficiency and functionality, it will be a great aid in how we decide to build out future infrastructure to accommodate.
    There are a lot of tough decisions to be made, and we need better tools before we tackle a lot of the obstacles.

  • @pahom2
    @pahom2 29 days ago

    An AI formulating a law of physics would clearly violate the Chinese room thought experiment.

  • @gregoryhead382
    @gregoryhead382 1 month ago +1

    c = (cosmological natural length / cosmological natural time) is no blunder, Doc Keating. It's Einsteinian physiks.

  • @gareththomas3234
    @gareththomas3234 26 days ago +2

    Why not make a text-based description language for physics, like we have in electronics? For its symbolic elements - Feynman diagrams, for example. Work from there to the top level. It's a lifetime project for a person but very easy using AI. Then LLMs could code in this physics system language.

  • @stephenkolostyak4087
    @stephenkolostyak4087 27 days ago

    6:15 the answer to Einstein's question about whether or not an observer would experience a gravitational field in free fall is no, and that led to the Einstein equivalence principle?
    I wonder how Einstein came up with “No experiment can be performed that could distinguish between a uniform gravitational field and an equivalent uniform acceleration.” by asking himself "will a falling person experience gravity?" and deciding "no".

  • @hmccoy99
    @hmccoy99 21 days ago

    A new frontier of science and discovery is awakening.

  • @gammaraygem
    @gammaraygem 1 month ago +1

    I hope AI will solve the headless chicken equation. The one that figures out how humans live together in freedom, love and prosperity without any wars.

  • @Markoul11
    @Markoul11 27 days ago +2

    Good! This will also eliminate the research bias and science politics of human scientists that slow down the evolution of modern science.

    • @rogerphelps9939
      @rogerphelps9939 25 days ago

      No.

    • @Matx5901
      @Matx5901 24 days ago +1

      Yes! AI will teach us a great deal about being sincere - in other words, about not contradicting ourselves for ideological reasons.

  • @quantumpotential7639
    @quantumpotential7639 28 days ago

    Brian, I think I saw the flaw in your golf swing, and why you're having trouble flighting the ball properly. Even in a sitting-down position I could detect it quite easily.
    Should I send you an email? I don't want to get into a golf swing analysis here. But I'll have you finding the sweet spot and puring it through the impact zone in no time flat. Just a few simple adjustments.
    Great video. Very VERY thought provoking. Holy cow. Kinda mind-bending possibilities here.

  • @konstantinos777
    @konstantinos777 1 month ago +2

    Mind officially blown!

  • @duggydo
    @duggydo 1 month ago +2

    I hope I live long enough for AI to do the most amazing things like cure cancer, find the connection between gravity and the other forces, etc. I also hope I don’t live long enough to experience the Terminator takeover of AI. 😎

    • @raul36
      @raul36 27 days ago

      All those things you talk about can be done by humans with enough time and technology. No AI is necessary.

    • @duggydo
      @duggydo 27 days ago

      @@raul36 if AI is created by humans to do it, then I guess humans are just using tools to do it faster.

  • @whateverwhenever8170
    @whateverwhenever8170 25 days ago +1

    I hammered on ChatGPT and it seems to lack a spark; a human is still needed. So I got it writing and executing code, and was able to watch it and push it in a direction for a solution. Its best role is as a helper for thinking and testing ideas; it can write code you describe very quickly, but you do need to understand what it's doing to get the maximum out of it. TL;DR: if you can't code, that will be an issue in your interactions with current AI.

  • @balasubr2252
    @balasubr2252 22 days ago

    Great thoughts 💭

  • @GadZookz
    @GadZookz 1 month ago +2

    I never suspected he might be an AI but… 🤔

    • @AORD72
      @AORD72 27 days ago

      Bound to catch us all out at some stage. Videos at first, then perhaps robots, if humanity lasts that long (AI might extinguish us).

  • @MikeMcMulholland
    @MikeMcMulholland 1 month ago +1

    I predict that, at the minimum, the internet will go out and a new one will have to be built, because it needs to be many trillions of times more secure.

    • @hypergraphic
      @hypergraphic 1 month ago

      Rewrite the Internet in Rust? 😊

  • @samrowbotham8914
    @samrowbotham8914 1 month ago +2

    AI mastered chess and Go, so it should have no problem mastering physics and understanding it better than any human being. In chess it is now much stronger than the best human chess player, and these AIs taught themselves the game through trial and error in the same way we do. It hasn't cracked chess yet because of the sheer number of possible games; if quantum computers and AI work together, then it's possible that at some time in the future they would be able to say we know every possible game and show us what the best possible moves are, which in theory should lead to a draw. I see something similar happening in physics, biology, etc.

  • @pratyushsays
    @pratyushsays 1 month ago +1

    Pls do more podcasts with Abhijit Chavda ❤❤❤

  • @pspicer777
    @pspicer777 26 days ago +1

    Brave of you to enter these waters. Whether AI can discover new laws of physics is easily tested: use (say) only the knowledge of physics available at some point in our history and see if the AI can discover later laws. E.g. only laws of physics and experimental results before Newton, or Maxwell, or Einstein, or Bohr - at each stage, can the AI push the limits of knowledge to the next level? A good PhD topic - cross-disciplinary. Maybe something I might try .. 😊

  • @oididdidi
    @oididdidi 27 days ago +1

    The Chancellor's Distinguished Professor. Humble he isn't.

    • @keep-ukraine-free528
      @keep-ukraine-free528 26 days ago

      @oididdidi Let's not insult when we don't understand what someone said.
      He is not boasting. He described the name of the "chair" upon which he sits - the role behind his academic position.

  • @raktoda707
    @raktoda707 29 days ago +9

    Impressive we are learning along with AI

    • @mossy3565
      @mossy3565 20 days ago

      You're a pleb, and you were clickbaited

    • @alastairleith8612
      @alastairleith8612 20 days ago +1

      Well, you'd hope so… and AI isn't sentient, so I'm not sure it really learns beyond machine learning… does it know the real world as opposed to a simulation, can it tell the difference on the level of consciousness (I'd suggest not)?

    • @mossy3565
      @mossy3565 20 days ago

      Ironic, then, that the average human is getting progressively dumber

  • @lopezb
    @lopezb 25 days ago +1

    It is clearly already useful for CGI, but is it feasible to actually prove convergence? In other words, to give error bounds, beyond saying the graphics look amazingly similar... Mathematically, that is the natural question!

  • @advaitrahasya
    @advaitrahasya 24 days ago +1

    To test if AI is up to the task, give it (only) the data the epicyclists used, and their geocentric model.
    If the thing then produces the Copernican paradigm correction - it may be able to fix the paradigm which turns the quantum and the relative to woo ;)

  • @drscott1
    @drscott1 27 days ago +7

    Maybe AI can show cosmologists and climatologists how misdirected they have been in their assumptions.

    • @davidchapman370
      @davidchapman370 26 days ago +3

      AI is, as we have already seen, as biased as the people who input the training material.

    • @rogerphelps9939
      @rogerphelps9939 25 days ago

      Wrong. Assumptions are the bare minimum and AI has to start from the same assumptions. It is not magic as you seem to think.

    • @drscott1
      @drscott1 25 days ago

      @@rogerphelps9939 I think you miss my point. It’s sarcasm.

    • @rogerphelps9939
      @rogerphelps9939 25 days ago +1

      @@drscott1 It was a bit too subtle for me but thank you.

  • @MultiSteveB
    @MultiSteveB 24 days ago +1

    There is a glitch at 3:56. :( It seems to be of very short duration, but there is still some audio that is lost.

  • @UnKnown-xs7jt
    @UnKnown-xs7jt 24 days ago +1

    Please - AI should be "solutions based on large amounts of data". No thinking is taking place.

  • @captmaverick
    @captmaverick 21 days ago +1

    I taught it everything it knows. 💯

  • @topexmystery
    @topexmystery 26 days ago +1

    Imagine a super quantum computer + AI.

  • @Cotten-
    @Cotten- 28 days ago +1

    *I can't wait to play with your AI that you are going to put on your website. Thank you for thinking about us.*

    • @DrBrianKeating
      @DrBrianKeating  27 days ago +2

      You are so welcome! BrianKeating.com

    • @Cotten-
      @Cotten- 27 days ago +2

      @@DrBrianKeating *Fantastic! TY!*

  • @evo1ov3
    @evo1ov3 1 month ago +2

    Damn. Poor Mr. Keating. He's just trying to do what he loves. With all that ** drama going down at UCLA. Right now.

  • @petevenuti7355
    @petevenuti7355 21 days ago +1

    It is my understanding that a neural network is mathematically equivalent to a form-fitting function, for sure with linear equations.
    Is there a way to derive an equation from a trained neural network - the reverse of training a network to fit an equation?
    And if so, does using such a process on any of the networks you just described come up with something similar to the Navier-Stokes equations?
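
    One way to picture that "reverse" step - a toy sketch assuming a tiny hand-trained network and a polynomial fit to its outputs, not any of the networks from the video:

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy target the "black box" will learn
        x = np.linspace(-2.0, 2.0, 256).reshape(-1, 1)
        y = x**2

        # One-hidden-layer tanh network, trained by full-batch gradient descent
        W1 = rng.normal(0, 1.0, (1, 16)); b1 = np.zeros(16)
        W2 = rng.normal(0, 0.1, (16, 1)); b2 = np.zeros(1)
        lr = 0.05

        for step in range(20000):
            h = np.tanh(x @ W1 + b1)      # forward pass
            pred = h @ W2 + b2
            err = pred - y                # pieces of the mean-squared-error gradient
            gW2 = h.T @ err / len(x); gb2 = err.mean(0)
            dh = (err @ W2.T) * (1 - h**2)
            gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
            W1 -= lr * gW1; b1 -= lr * gb1
            W2 -= lr * gW2; b2 -= lr * gb2

        # "Reverse" step: fit a closed-form polynomial to what the trained network computes
        net_out = (np.tanh(x @ W1 + b1) @ W2 + b2).ravel()
        coeffs = np.polyfit(x.ravel(), net_out, deg=3)   # highest power first
        print(np.round(coeffs, 3))   # the x^2 coefficient should dominate if training converged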

  • @fhsp17
    @fhsp17 28 days ago

    Yes, that's a thing. The only thing I'm surprised about is how long it's taking for people to talk about that. I mean, they're pattern recognition machines. There are ways to exploit that, to make a model extend its internal data following the highest-probability candidates. And strangely I don't see papers doing that. I've got plenty on this, if that's something of interest.

  • @agentxyz
    @agentxyz 27 days ago +1

    Data increases exponentially, and AI vacuums it up: a self-perpetuating engine for scientific discovery. What we are witnessing is the creation of vastly more intelligent entities. Scary for sure, but awesome to witness.

  • @MS-od7je
    @MS-od7je 27 days ago +1

    I predict that people will predictably propose the unpredictable.

  • @ericgoz3858
    @ericgoz3858 27 days ago +1

    For a closed-loop laser with an energy discharge between cathode and anode, can it identify the material and the model of a perfect chamber for laminar flow of NF3 mixed with CO2, without boundary layers or fractional turbulent eddies?

  • @user-tf7uo9tv8d
    @user-tf7uo9tv8d 1 month ago +1

    Backwards discretization of time-series partial differential equations...?
    It's too hard - I play bass guitar now...

  • @NOYFB982
    @NOYFB982 1 month ago +1

    I'm biased by the realm of biology, but computational power and algorithms only get you so far. One needs to start with a sufficient amount of high-quality, unbiased data. There has to be enough of it to drown out the stochastic nature of data and sampling, as well as subgroup biases. (In biology, this is very often missing, so machine learning becomes irrelevant.) Regarding LLMs, the scientific foundation fed into them also needs to be of sufficiently high quality, which again is lacking (the so-called reproducibility crisis, which is really systemic). Typical research practices of using too few samples and not running replication studies doom the literature from an LLM perspective.

  • @merlepatterson
    @merlepatterson 29 days ago +1

    13:42 Speaking of "chimpanzees", here's a baboon for your educational gratification.

  • @spiritzweispirit1st638
    @spiritzweispirit1st638 5 days ago

    Thank you! Excellent video, and very inspiring about a subject that a few are claiming might be ominous?! 😏🖖🌐

  • @kraftwurx_Aviation
    @kraftwurx_Aviation 27 days ago +1

    Give AI the ability to try to solve grand unification.
    Give it the ability to explore quantum gravity.

  • @johndoolan9732
    @johndoolan9732 16 days ago

    Now I would challenge any AI to tour through every scientific genre. AI is only a tool for a creator.

  • @ThankYouESM
    @ThankYouESM 27 days ago

    I always feel the need to help with the computing somehow, but quite apparently... I can only understand how to do Python programming at an intermediate level, having very much tried almost every avenue since the year 1990. My guess... over 20 million more people, all without any money to spare, are quite like myself: willing to do almost whatever it takes to get involved in creating a many times better near future. We should, many years ago, have had free "boot camps for computer programming" available locally, at least like public libraries.

  • @oryxchannel
    @oryxchannel 27 days ago +1

    If they discover a geometry to this science with Google's AlphaGeometry, then....

  • @user-sf3dw2sm3b
    @user-sf3dw2sm3b 28 days ago +1

    I wish I was part of it. I want to merge with AI now!

  • @everybot-it
    @everybot-it 24 days ago +1

    Will AI create new physics by merely simulating it? (simulation theory)

  • @danielkanewske8473
    @danielkanewske8473 1 month ago +1

    Typically the nonlinear elements are ignored, even for numeric methods, because the complexity they introduce often lends itself to numeric instability. Did the AI solve the NS equations with the nonlinear elements? How did you create the data set if most methods can't solve the nonlinear NS equations? Solutions to the Navier-Stokes equations typically also assume the so-called "no slip" boundary. Was this BC also enforced? How did you prove that the solutions produced by the neural network are in fact solutions to the NS equations? NS requires a number of assumptions that typically don't apply to non-Newtonian fluids. Did you apply the NN to these more complex systems, like non-Newtonian fluids?
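
    On the "are they actually solutions" question, the usual check is to plug the network's output back into the PDE and measure the residual. A minimal sketch of that idea, using an analytic divergence-free field as a stand-in for a network's predicted velocities (not data from the video):

        import numpy as np

        # Grid over one periodic cell
        n = 128
        x = np.linspace(0.0, 2.0 * np.pi, n)
        y = np.linspace(0.0, 2.0 * np.pi, n)
        X, Y = np.meshgrid(x, y, indexing="ij")

        # Stand-in for a network's predicted velocity field (Taylor-Green-like, divergence-free)
        u = np.sin(X) * np.cos(Y)
        v = -np.cos(X) * np.sin(Y)

        # Continuity residual: du/dx + dv/dy should be ~0 for an incompressible flow
        du_dx = np.gradient(u, x, axis=0)
        dv_dy = np.gradient(v, y, axis=1)
        divergence = du_dx + dv_dy
        print("max |div(u)| =", np.abs(divergence).max())   # limited only by finite-difference error
        # The same recipe extends to the momentum equations: assemble each derivative term,
        # sum them, and report the residual norm over the grid (and over time).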

    • @hyperduality2838
      @hyperduality2838 1 month ago

      Syntax is dual to semantics -- languages or communication.
      Large language models are therefore dual!
      Categories (form, syntax, objects) are dual to sets (substance, semantics, subjects) -- Category theory is dual.
      If mathematics is a language then it is dual.
      Concepts are dual to percepts -- the mind duality of Immanuel Kant.
      Mathematicians create new concepts or ideas all the time from their perceptions, observations, measurements (intuitions) -- a syntropic process, teleological.
      Cause is dual to effect -- causality.
      Effect is dual to cause -- retro-causality.
      Perceptions or effects (measurements) create causes (concepts) in your mind -- retro-causality -- a syntropic process!
      Large language models are using duality to create reality.
      "Always two there are" -- Yoda.
      Syntropy (prediction) is dual to increasing entropy -- the 4th law of thermodynamics!

    • @danielkanewske8473
      @danielkanewske8473 1 month ago

      @@hyperduality2838 Your comment is gibberish. I can't tell if that is intentional.

    • @hyperduality2838
      @hyperduality2838 1 month ago

      @@danielkanewske8473 The neuroscientist Karl Friston talks about causality loops, he has some videos on RUclips you can watch.
      The external world of matter causes effects in your mind which you perceive -- causality.
      Your mind (causes) can effect the outside world -- causality.
      Your perceptions (effects) are becoming causes -- retro-causality or syntropy.
      Perceptions (effects) are becoming causes in your mind -- causality loops.
      Concepts are dual to percepts -- the mind duality of Immanuel Kant.
      The thinking process converts measurements or perceptions into conceptions or ideas -- a syntropic process!
      Your mind is therefore creating or synthesizing reality -- the syntropic thesis!
      You can watch these videos about duality in physics, watch at 11 minutes:-
      ruclips.net/video/DoCYY9sa2kU/видео.html
      And this at 1 hour 4 minutes:-
      ruclips.net/video/UjDxk9ZnYJQ/видео.html
      Teleological physics (syntropy) is dual to non teleological physics (entropy).
      Your mind is syntropic as you make predictions to track targets and goals -- teleological.
      Syntropy (prediction) is dual to increasing entropy -- the 4th law of thermodynamics!
      From a converging, convex or syntropic perspective everything looks divergent, concave or entropic -- the 2nd law of thermodynamics!
      Convex is dual to concave -- mirrors or lenses.
      My syntropy is your entropy and your syntropy is my entropy -- duality.
      Mind (syntropy) is dual to matter (entropy) -- Descartes or Plato's divided line.

    • @wesexpress3343
      @wesexpress3343 1 month ago

      @@hyperduality2838 reported

    • @hyperduality2838
      @hyperduality2838 29 days ago

      @@wesexpress3343 You can report this:-
      Syntropy (prediction) is dual to increasing entropy -- the 4th law of thermodynamics!
      The conservation of duality (energy) will be known as the 5th law of thermodynamics -- Generalized Duality.
      Energy is dual to mass -- Einstein.
      Dark energy is dual to dark matter -- singularities are dual.
      Positive curvature singularities are dual to negative curvature singularities -- Riemann geometry is dual.
      Space is dual to time -- Einstein.
      Gravitation is equivalent or dual (isomorphic) to acceleration -- Einstein's happiest thought, the principle of equivalence (duality).
      Duality creates reality!

  • @y1.5
    @y1.5 1 month ago +1

    Is it possible to find new isotopes using AI?

  • @Jimjef
    @Jimjef 25 days ago

    I don't know about you, but I am in the camp that it will eventually be an A.I. that comes up with the ever-elusive "theory of everything."

  • @tomyocom5886
    @tomyocom5886 27 days ago

    As long as there is quantum computing, there will always be a maybe - an undetermined output, not a perfect calculation.

  • @sonorousheartbeat3446
    @sonorousheartbeat3446 28 days ago

    "Politically correct" is a scarier term than any context in which AI could be discussed.

  • @chiefalien
    @chiefalien 27 days ago +1

    Gurus are usually not narcissists.

  • @JP-re3bc
    @JP-re3bc 24 days ago +1

    Brian Keating is a great science communicator, but like many intelligent laymen he sees too much in what is just a statistical pattern-recognition device or system. For practitioners it is slightly hilarious, watching all that starry-eyed, buoyant expectation of Great Things To Come - but also ridiculously overboard. haha

  • @fuzzyorangetv
    @fuzzyorangetv 22 days ago

    Is this software that you can download (the fluid simulation)?

  • @dougg1075
    @dougg1075 1 month ago +1

    Didn't the Q Star leak say it was an AI that not only does math, but understands the concept of math?

  • @UFOUAPFirstContact
    @UFOUAPFirstContact 1 month ago +1

    Does AI doing physics not give you pause, Dr? AI "gets it" but we can barely grasp it, except on a tiny level? An electronic Oppenheimer or Einstein somehow gives me a twinge. Trying to articulate here as best I can.

  • @tabcaps5819
    @tabcaps5819 27 days ago

    Probably not yet, but maybe in the future
    (It craves more server arrays for thinking)

  • @betepolitique4810
    @betepolitique4810 24 days ago +1

    AI shapes our world now?

  • @MacarthurLouissaint-rz7tl
    @MacarthurLouissaint-rz7tl 24 days ago +1

    What about subspace communication??

  • @frazerhainsworth08
    @frazerhainsworth08 28 days ago +1

    What's the difference between a distinguished proofesor and a regular one? Are you better and higher?

    • @DrBrianKeating
      @DrBrianKeating  28 days ago +1

      Both

    • @frazerhainsworth08
      @frazerhainsworth08 28 days ago +1

      @@DrBrianKeating is distinguished higher than a PhD?

    • @DrBrianKeating
      @DrBrianKeating  28 days ago

      @@frazerhainsworth08 Totally different thing. I have a PhD and I was a Professor; now I have the title Chancellor's Distinguished Professor of Physics at UC San Diego.

    • @nemlehetkurvopica2454
      @nemlehetkurvopica2454 27 days ago

      Proofesor? That's the one who's proving the professor's theories.

    • @frazerhainsworth08
      @frazerhainsworth08 27 days ago

      @@DrBrianKeating Congratulations. Do you go by Professor or Chancellor?

  • @Jsurf66
    @Jsurf66 27 days ago +1

    Will AI be able to find through data what Einstein found through thought experiments?

  • @weylinstoeppelmann9858
    @weylinstoeppelmann9858 27 days ago

    If the AI comes up with a method that is faster than current simulations, how do you translate what the AI model developed into something discernible?
    That whole "black box" problem - I don't know how to deal with that.

  • @evo1ov3
    @evo1ov3 1 month ago +1

    Jesus.... I was just putting this into Google's Gemini around 7am this morning. 02 May 24

  • @tsclly2377
    @tsclly2377 1 month ago +1

    Depends on when you let the AI start, and on how far down it can go. If you start at what we assume to be the reality of physics now, you may just go down 'the rabbit hole' that math provides.

  • @Nogill0
    @Nogill0 27 days ago +1

    Any observation or measurement has some finite level of precision. Think of it as a region bounded by error bars, and that imprecision seems built into the laws of physics, and in some cases is irreducible. So I wonder how that might affect the ability of a neural network to model systems at the quantum level. Another interesting case might be systems subject to chaos, with wildly diverging outcomes resulting from very small changes in initial conditions. Neural networks might not be much better than the human brain in the long run, at least as theorists.
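
    The chaos point is easy to make concrete. A small sketch - just the Lorenz system with a crude forward-Euler integrator, nothing from the video:

        import numpy as np

        def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            # One forward-Euler step of the Lorenz system
            x, y, z = state
            dxdt = sigma * (y - x)
            dydt = x * (rho - z) - y
            dzdt = x * y - beta * z
            return state + dt * np.array([dxdt, dydt, dzdt])

        a = np.array([1.0, 1.0, 1.0])
        b = a + np.array([1e-9, 0.0, 0.0])    # tiny perturbation of the initial condition

        for step in range(1, 40001):          # 40 time units at dt = 0.001
            a = lorenz_step(a)
            b = lorenz_step(b)
            if step % 10000 == 0:
                print(f"t = {step * 0.001:5.1f}   separation = {np.linalg.norm(a - b):.3e}")
        # The separation grows by many orders of magnitude before saturating at the size of
        # the attractor, so tiny errors in the data swamp any long-horizon prediction.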

    • @keep-ukraine-free528
      @keep-ukraine-free528 26 days ago

      @Nogill0 You've assessed the abilities of Artificial Neural Networks to be better (or not) than human brains, at being theorists. There exist two weaknesses in your assessment. First, you assessed only today's early ANNs and assumed we see nearly their full potential. We do not, because what we see in Biological Neuronal Networks also applies to ANNs -- that is, both show a very strong adherence to scaling laws, across both the total number of neurons & the total number of synapses in a system. Using only this first "law", we expect to see (and are seeing) immense gains in ANNs as we scale them near/beyond the human brain.
      Second, today's ANNs are a rudimentary "toy" or "cartoon" version of the BNN -- i.e. their graph/network topology & simplified "neuron" -- which don't include the physics & biochemistry emerging in BNNs from their many neuron-types, many network-types, many signaling neurotransmitters, and many non-neuronal cells (all of these factors compounding the toy-version networks).

    • @Nogill0
      @Nogill0 26 days ago

      @@keep-ukraine-free528 Is it possible to incorporate the actual functioning of actual biological neural networks into a non-biological device? We really don't fully understand how brains work, do we?

    • @keep-ukraine-free528
      @keep-ukraine-free528 19 days ago

      @@Nogill0 You asked "Is it possible to incorporate the actual functioning of actual biological neural networks into a non-biological device?" Yes, it's possible -- and we're doing this today, but it's done within the constraints of what "actual functioning" means. ANNs today incorporate many top-level features of biological neurons and also top-level features of biological networks. Both are sufficient to give these ANNs human-like performance across many facets of behavior. They don't provide full equivalence to animal/human brains yet, but they are expected to do most of it using only continued scaling. We don't incorporate the full ("actual") functioning seen in biology because (1) we don't need to copy them fully -- since so far we get most of the behaviors using only top-level features of biology, and (2) we may never be able to fully duplicate biology, for cost/efficiency reasons.
      You also asked, "We really don't fully understand how brains work, do we?" We don't fully understand it, but that's not a problem. We don't fully understand many aspects of the natural world, but still we're able to exploit our partial but sufficient knowledge. We don't fully understand how most birds fly, but we can make good very safe airplanes. We understand enough about organic brains, such that we can build artificial systems capable of doing much that people/animals can do. We're still in the very early phase of neural network-based AI, and I expect we'll continue making huge advances this decade.

  • @umeng2002
    @umeng2002 1 month ago

    The issue with "AI" is that it's still purely statistical. You need supercomputers to link data, like how a brain links data. Our ears are always on from birth, yet our higher thoughts only remember and use a fraction of all of those collected air density changes. AI science and theoretical physics are on a collision course; but the end, necessarily, will be familiar. Right now, AI is still just an optimization.

  • @AquaMarineFBVA
    @AquaMarineFBVA 27 days ago

    Based on this video you should be familiar with Chris Lehto's Light Luv theory then? Any thoughts on that?

  • @Reach41
    @Reach41 19 days ago

    I’m still trying to figure out what the difference is between AI and the output of computer programs.

  • @LowestofheDead
    @LowestofheDead 27 days ago

    It's interesting that half of the use cases are making better missiles, drones and fighter jets.

  • @4pharaoh
    @4pharaoh 1 month ago

    "*Creating* new laws of physics", not "*discovering* new laws of nature". Ah! The hubris of these people.

    • @hyperduality2838
      @hyperduality2838 1 month ago

      Syntax is dual to semantics -- languages or communication.
      Large language models are therefore dual!
      Categories (form, syntax, objects) are dual to sets (substance, semantics, subjects) -- Category theory is dual.
      If mathematics is a language then it is dual.
      Concepts are dual to percepts -- the mind duality of Immanuel Kant.
      Mathematicians create new concepts or ideas all the time from their perceptions, observations, measurements (intuitions) -- a syntropic process, teleological.
      Cause is dual to effect -- causality.
      Effect is dual to cause -- retro-causality.
      Perceptions or effects (measurements) create causes (concepts) in your mind -- retro-causality -- a syntropic process!
      Large language models are using duality to create reality.
      "Always two there are" -- Yoda.

    • @4pharaoh
      @4pharaoh 1 month ago

      The intent was justly manifest.
      Many believe they create reality and laws.
      Newton discovered several laws of nature. We call them Newton's nth law, yet physicists and teachers would say and believe he created these laws.
      BTW, verbose language on a chat forum, especially a scientific one, does not convey wisdom, but pompousness.
      Tone it down Einstein.

    • @hyperduality2838
      @hyperduality2838 29 days ago

      @@4pharaoh I assume you are responding to my comment?
      Syntropy (prediction) is dual to increasing entropy -- the 4th law of thermodynamics!
      There is also a 5th law of thermodynamics but that would be pompous of me!
      Action (thesis) is dual to reaction (anti-thesis) -- Sir Isaac Newton, all forces are dual! or the Hegelian dialectic (wisdom).