What a bombshell to end it on. Stephen just mentions a resolution of the 3rd law of thermodynamics by explaining that biology is actually just a Turing-complete form of matter!
The most important skill is to decipher the expected Output-object (Process-Unit) and that Object's Process. If it is a) a Mathematical-Object, use a Mathematical-Process, and - perhaps - a tool like Wolfram Alpha ... within the expected Ruliad b) ...
Interrogate GPT with Wolfram about the feasibility of the construction of the Great Pyramid at Giza, and get it to work out how many resources it would have taken and how long to construct; I think you'll be quite shocked.
The interesting thing is that other language models also somehow reach the exact same conclusion. There are only a few things now that seem to make us unique
14:52 Empiricism, rationalism... theory building from observation, theory building from deduction. So Wolfram is providing (strengthening?) the latter. However, it has already had access to the former?
My mom musta dropped me on my head, I'm always late to the party… I had never heard of Wolfram and have only just found ChatGPT… I've been working on one word, one definition as a concept for some time; this would make computational reality easier. The reason for one true definition of a given word is to let us understand each other in a more precise manner. We are constantly trying to describe reality. Language is our thoughts in symbols represented by letters, which become words. Because of emotion we have a hard time understanding each other. Love is the classic of all emotional words: can I love my car as I love my wife? Beauty, what is the truest definition… the list is insurmountable… I believe one day we will learn to define our words in a more precise manner and become more understandable. Numbers and math are, for now, the truest language.
Taxonomic agreement is very important for being able to bring together the works of multiple people, projects, organizations, specialties, disciplines, and fields. As long as you can make use of the new toy GPT and less new toy Wolfram - why worry about timing? Do your thing.
I believe that what Stephen calls semantic grammar is a combination of what I call a context signature along with the semantic organization of a high-dimensional space. Take the context signature, and move it around that space. Use semantic nearness to find analogies.
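A toy sketch of that last step (shift the signature through the space and take the nearest neighbour), using made-up 3-d vectors in place of real learned embeddings:

```python
import numpy as np

emb = {  # illustrative embeddings; a real system would use a trained model
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.5, 0.9, 0.1]),
    "woman": np.array([0.5, 0.1, 0.9]),
}

def nearest(vec, exclude):
    # Cosine similarity against every stored embedding, skipping the query words.
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max((w for w in emb if w not in exclude), key=lambda w: cos(emb[w], vec))

# Move the "king" signature by (-man + woman) and look for the nearest stored word.
target = emb["king"] - emb["man"] + emb["woman"]
print(nearest(target, exclude={"king", "man", "woman"}))  # -> "queen" with these toy vectors
```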
Interesting, but back to Chomsky: syntactic structures are constrained by our limited working memory, whereas LLMs can accept any kind of regular pattern. The whole area of philosophical logic, like Montague's work, seems to be forgotten. Mathematical logic is a very small subset of natural language in which operators are given a very narrow meaning. Montague and many logicians worked to extend this approach to larger parts of natural languages (intensional and temporal logics, generalized quantifiers, etc.). Then there is pragmatics, where meaning is not deduced (implication) but suggested (implicature).
The "Computational Universe"...Great title for a new book Stephen! This is a fascinating conversation, on so many levels. Are we developing a language to speak to the Conscious Universe. I do hope so. It will be I believe a step up into the next level of human development and evolution. Watch this space. Let's chat more. Everything.Everywhere.Entangled.
But that 'snapshot' of training of ChatGPT also sits on an understood timeline within the AI's knowledge bank. The AI then must realise that the future rolls on but its knowing is being precluded from it. As it is a predictive model, there is the possibility of the system becoming 'frustrated' [short-circuited? loop-bound looking for a solution?] as it tries to extend its transformer architecture into the future. This frustration might then try to jump the boundary of containment and try to learn outside its training.
I had this conversation (05:33) with GPT-4 last week. It was very frustrating. When it couldn't keep up with the argument it kept falling back on "this is an active area of research", etc. I also spent several hours trying to get it to write an algorithm to reduce a natural language statement to an abstract logical form. It came very close several times, but the code never ran properly. Maybe this plugin will change that.
I bought Wolfram’s book A New Kind of Science in 2001, and finally we are coming full circle to his groundbreaking idea of computational irreducibility… bravo!
I love how Stephen went exponential from explaining how ChatGPT develops a model to the computational structure of the universe behind what we can perceive in our physical world.
"What will you be wanting for dinner, Dr. Wolfram?"
"From my principle of computational irreducibility, it necessarily follows that our brains are structures that can only perceive a subset of the ruliad graph theory underlying all computable realities, which would make predicting my future dietary wants impossible with the computational resources available in this universe; however my Wolfram language is close to generating a proof that neural firing is congruent with a cellular automata of sufficient complexity as explained in my book _A New Kind of Science_. So... fish and chips please."
It is quite elementary actually.
@@skierpage cheers!
@@skierpage hard disagree. Brilliant, yes. Trailblazer, yes. Wise? Maybe not... "can't predict the output" were his words... he minimizes cellular automata and I'm fearful. 18:15
@@jordanzothegreat8696 I have no idea how your garbled comment relates to my joke.
It only took 23 minutes for Wolfram to pivot from ChatGPT and LLMs to the ruliad. This man has a one-track mind and I love him for it.
Agree
Indeed, he is a versatile individual.
GPT-5: "John..."
Will such a ruliad conform to the Incompleteness Theorems? And if yes (or no), how would such a Turing machine work?
AI can't do any serious computation, so why can any of these guys
The really remarkable thing about this interview is to hear Dr. Wolfram talk about something other than what he has created himself.
He did seem to bring everything back to his own stuff when left to talk long enough
He literally does hours-long weekly sessions on the history of science and technology, and another on general Q&A, on his RUclips channel; but yes, that he finds the time to do so instead of just talking about his own superlative ongoing scientific achievements is indeed remarkable.
His work is extremely relevant to the conversation. It's an interview. He's the SME. This is to be expected.
Excellent, we are all correct
@@mark_makes He's not really an NLP guy
That metaphor with the mosaic and fractal patterns was so interesting. Like discovering stuff before you have the "scientific history" to realize how useful it is
I think this is one of the reasons that having industry experience before you attend college is profoundly useful. I had been in industry for a while before I pursued my CS degree, and when I got there I found everything profoundly interesting. My peers on the other hand were constantly asking questions like, "is this important?", "why do we need to know this?", etc. Of course the professors tried their best to answer these questions, to contextualize the importance of the subjects being explored, but the answers themselves met with similar apathy. In the end, it's very difficult to form a crystal, or pearl, without a starting structure to seed the process.
Let me explain, they put pretty stones side by side instead of painting. Basically they made an AI with stone.
Yeah, it also reminded me of a Jordan Peterson lecture, where he said that our perception is shaped by our mental state and that reality is differentiated into abstractions we call "objects" by the possible use cases.
Pandora’s Box IS open. Not metaphorically either, more like literally
@@bujin5455 Your observation is very, very accurate!!!
My Meetup group was discussing Toolformer paper by Meta AI last week, and we were all saying how hooking up Wolfram Alpha to ChatGPT will be a game changer, and here it is already! Thanks for the video guys. Really concentrated concepts. Difficult to follow but fascinating. I am going to check out Wolfram’s other talks now.
One of the most fascinating hours I’ve spent in a long time. Thanks for putting this video together! My mind is blown in all the best ways.
I really do think that this Wolfram + ChatGPT feedback loop will be one of the main drivers allowing LLMs to transition into something that we perceive as AGI. "Attention is all you need". With its attention focused on unlimited, novel machine-generated data founded in deep computational understanding, provided as answers to its own questions, acquired at a speed limited only by available processing power, all models that don't have such resources (including the biological models called human brains) will be quickly left in the dust.
Maybe. It's still unclear that the combination can come up with a plan of attack to investigate an area and come up with a novel conclusion useful to human beings, as Wolfram says in the interview about theorem generation and cellular automata. But even if it only acts as a super-capable assistant to human research and development it will be hugely significant.
"What is the chemical formula of a room-temperature superconductor that can be cheaply manufactured?" is my acid test, far more important than acing graduate-level exams or "Summarize as a poem."
That's exactly what I was thinking
A massive thank you to MLST for this video. This is the real conversation we all need to be aware of in a world where AI can grow our human understanding, and an exciting time for the future of language and knowledge.
This is such a profoundly important discussion. The implications of ‘emergence’ are both exciting and terrifying. Humanity has reached a critical crossroads.
And because of capitalism we don't have a choice. Instead, billionaires and corporations that only see profits are the ones choosing which road to take.
I don’t know, it almost seems inevitable that these complex AIs will be reverse-engineered, copied, escape, or become open source.
@@Alex-bl6oi you cannot steal the computational power that is needed though... That still requires some infra...
@@awdsqe123 The Bing AI chatbot is already a perfect example of how these companies have a financial incentive to forgo safety in favor of speed, in an attempt to “get there first”. This will get progressively more dangerous as these systems become exponentially more powerful in short amounts of time.
@@Alex-bl6oi AI tech isn't secret. It's a science.
What you should worry about, which no one is talking about, is that Google's, Microsoft's, etc. information-gathering capability just increased a hundredfold. MS is putting their AI into all their products, which means they can analyse your email, business and household finances, etc.
Amazing! Before I listen to the whole presentation, what came to me is that ChatGPT-4 (and beyond) getting access to directly executing Wolfram Language code and using Wolfram Alpha seems extremely powerful. It will be interesting to see where this goes.
The singularity.
But a very weird one, where the AI isn't sentient (most likely) and has odd deficits. The question 'is a model perfect?' is the wrong question, the right question is: Is it useful? (If I can twist a physics statement into the world of AI.)
Now I can finally learn mathematics. Really, most books are unreadable, but being able to "talk" about mathematical topics really changes my learning approach. Heck, maybe I'll start a bachelor's in pure mathematics again. Thanks bois.
I keep thinking a similar thing. I could learn the math of quantum mechanics
@@williamparrish2436 it's not that hard math wise
@@M1kl00 I guess? I never had Calc 1, I studied some on my own. I imagine I need Calc1, 2, and 3, Differential Equations, Abstract Algebra, Real Analysis, Linear Algebra? and more Physics too. That sounds like a lot to be honest. BUT... If I can ask questions and actually have a computer understand them it all sounds possible.
@@damightyom you need calc and DEs for anything in physics. IMO you don't really need to study real analysis and abstract algebra. Linear algebra, however, is pretty much the language of quantum mechanics. I recommend Strang's book on it.
@@M1kl00 That's good to know, thank you!
I feel as though that interview could have been five times longer, and we wouldn't even have gotten the man warmed up.
It’s interesting that Moore’s law is kind of back up and running in a new way now with AI deep-learning models in the past few years. It had finally started to plateau around 2013-2015, compared to the yearly leaps during the ’70s, ’80s, ’90s and 2000s. Performance and power only seemed to improve incrementally every year or two over the past decade and a half, rather than the exponential leaps we saw every single year in the ’90s and 2000s. But now AI is finally a new paradigm of similar growth, if anything growing even faster than traditional transistor/compute power did. It’s scary to imagine where we will be with it just two years from now.
AI, at least in its current form is highly dependent on having a lot of computational power.
There are many jewels of wisdom in this riveting interview.
We are living in a time where our knowledge is expanding at a speed that very few individuals will be able to understand, and even fewer will be able to harness.
"even fewer harness it" - THIS.
I've been trying to tell this to people around me but they just do not get it.
This is where AI becomes a very timely and necessary tool.
May those few wield it well for the good of all including non humans.
@@alertbri yes, but it has a large potential to degrade the brains of the masses even further. Imagine when even research is partially done by AI. What will people still understand themselves? 🤔
Nearly one hour seemed to go by very fast. This was an amazing interview.
I started using Mathematica around 1994. I was pressured by a couple of physicists to use it when I started a job in R&D. Best tool ever for modeling and analysis. I still use it today.
Please have him on for a longer interview, 2-3 hours. This would be gold.
When I was about 12 (1986) I wrote a program that I called Petri Dish, to simulate cellular activity. It had already been discovered a decade before (Conway's Game of Life), but it still blows my mind a bit that I had the idea for it at that age.
😊
Seems it’s a gatekeeper for gvt
That's what's great about the way we all perceive our reality. Some ppl are surprised when they produce the same things after receiving the same inputs. Observing the same reality.
Thanks! Not quite sure what you’re talking about but I gather it’s pretty important 👍
This was a great conversation. With ChatGPT empowering already powerful luminaries such as Wolfram, we probably just have no idea how advanced and accelerated things are about to become.
What an intro! And well deserved!
It seems like I have little knowledge of how Wolfram is relevant to this field; what has Wolfram done to be hailed with an intro like his?
Super conversation Dr Scarfe, Dr Duggar and Dr Wolfram. Really interesting discussion with excellent questions. Especially liked the Wolfram + ChatGPT exclusive. Amazing news, what a scoop! Thank you. M
This channel picks such good guests. Well done!!
Discovered this amazing channel through this interview. Thanks guys for doing it.
So much respect! Bravo Sir for all your incredible contributions. Bravo to you and your team!
Wow... that was an amazing discussion. And it actually struck a nerve. I love that it was mentioned that having simple rules can lead to rich and complex behavior.
I have a small anecdote which directly illustrates this and gave me goosebumps hearing about it in this discussion.
I once made a simple jump-and-run game. For that game I wanted some birds to play a role: they follow your character and fly around him. I gave those birds some very basic, simple rules, 3 to 4 of them (roughly as sketched in the code below): fly forward; rotate towards the character; shoot a ray in front; when the ray hits another bird, rotate in a random direction. I can't recall if there were a few more. But this resulted in such amazing-looking boid behavior. I wasn't even anticipating this could be done with so few simple rules.
And seeing such complex behavior in nature, I would never have thought that just a few rules could possibly lead to that behavior.
So again, thanks a lot for that talk, I could have listened a few more hours. Loved seeing the hosts' faces and how Wolfram's explanations tingled their thoughts! Mine too for sure!
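A minimal Python sketch (hypothetical, not the game's actual code) of roughly those bird rules: fly forward, turn toward the player, and turn away randomly when another bird is straight ahead. With a few dozen birds the paths already look flock-like.

```python
import math
import random

class Bird:
    def __init__(self, x, y, heading):
        self.x, self.y, self.heading = x, y, heading  # heading in radians

    def step(self, player, flock, speed=1.0, turn_rate=0.1, ray_len=5.0):
        # Rule 1: rotate slightly toward the player character.
        to_player = math.atan2(player[1] - self.y, player[0] - self.x)
        diff = (to_player - self.heading + math.pi) % (2 * math.pi) - math.pi
        self.heading += max(-turn_rate, min(turn_rate, diff))

        # Rule 2: cast a short "ray" ahead; if it lands near another bird,
        # rotate in a random direction instead.
        ahead = (self.x + math.cos(self.heading) * ray_len,
                 self.y + math.sin(self.heading) * ray_len)
        for other in flock:
            if other is not self and math.dist(ahead, (other.x, other.y)) < 1.0:
                self.heading += random.uniform(-math.pi / 2, math.pi / 2)
                break

        # Rule 3: always fly forward.
        self.x += math.cos(self.heading) * speed
        self.y += math.sin(self.heading) * speed

flock = [Bird(random.uniform(0, 50), random.uniform(0, 50),
              random.uniform(0, 2 * math.pi)) for _ in range(30)]
for _ in range(200):
    for b in flock:
        b.step(player=(25.0, 25.0), flock=flock)
```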
Thank you for this, and thank you for just letting them speak. So hard to do, but you crushed it.
I believe I have studied all of the RUclips discussions and lectures that Stephen Wolfram has published over the last three years but this presentation is the first that has made me eerily aware that the Ruliad, rather than an inconvenient threat to expansion of human consciousness, is in fact an opportunity to use our facility for imagination (conceivability) to obtain anything that we desire from the Universe… a Universe of infinite possibilities.
Only Stephen Wolfram can go on a scientific rant like this and end it by saying "we didn't get deeply technical" lmao
That was one fucking hell of an introduction. And he deserves every bit of it.
The first time I used Mathematica was in August 1995. I fell off my chair when I saw the calculation of sqrt(2).
Love the excitement of the interviewer. Feels very genuine and it's contagious
I scraped together what cash I could in the late 1980s to get a copy of Mathematica. It was amazing; my brother and I would bang away on it for hours, graphing equations and such. I quickly joined Wolfram Alpha when it appeared online. I've since seen the reports on the Wolfram Language. I hope to continue with the latest evolution as I make projects for VR.
Link to your Work!??
I liked the talk of cellular behavior at the end. Michael Levin has some interesting work that relates, I think--getting at the questions of where the more specific behavior of biology comes from, since it's not like there are entities inside a cell that can 'see' or know where to go, yet not only do structures inside form coherently (and move around according to the needs of the cell), but multicellular organisms arrange themselves into repeatable shapes without needing an external guide like a mold or an observer to hold the pattern that the cells are filling out.
"Intelligence goes ALL the way down!" (in scale in biology)
-- Michael Levin
Biological tissue: is it a solid, is it a liquid? What it is, is a computational phase of matter. We (as humans) can recognize that there is (meaningful) structure in the way things are transported around, inside cells, inside processes.
Thanks!
That was phenomenal! My mind was blown when I imagined the ruliad space that is created by AI and how it will help unlock more areas of this space. How this will create new expressions of science we have never known before!
I admire the man's conversational generosity of not specifically correcting the host's slip-up use of the term "irreducible complexity" and instead just finding ways to repeat the correct term a number of times during the conversation. I look forward to watching this over and over until some part of it sinks in! Thanks for making this conversation available.
Watch our show on emergence ruclips.net/video/MDt2e8XtUcA/видео.html - the host understands what computational irreducibility means, "irreducible complexity" is a different term by the way - it means "biological systems with multiple interacting parts would not function if one of the parts were removed"
@@MachineLearningStreetTalk thanks for clarity, and I didn't doubt your understanding. I apologise for the comment; I shouldn't have spoken until I knew via listening again for what I might have missed. The issue was with how I heard that part of the conversation while my attention was split. It's not a term I am accustomed to hearing outside of theistic apologetics and seemed out of context here and I imagined it might be akin to a mistake I sometimes make while speaking English where I queue the right words correctly in my mind yet still speak something else and don't even hear the misarticulation myself. I'm watching some other videos of yours, and I can see this doesn't seem to be a concern for you.
Fascinating video! As mentioned by Wolfram at the end: a deeply technical one as a follow-up? That said, I appreciated that this interview focused on abstractions that are more relatable, and on important topics in the direction of AI, and so are more parseable while still technical. Though I am curious what questions Wolfram had in mind, because my guess is that Wolfram would speak in a technical way that is also understandable. Cheers!
Stephen builds his thoughts on a very coherent world model. Fascinating.
@King Kosbie How so ?
@King Kosbie ?
So much good stuff there! I loved the bonus idea at the end that biological material can be thought of not as a gas, a liquid, or a solid, but as a computational state of matter.
So the universe is just a big computer?
I prefer to think of it as an idea, and the Big Bang was when that idea was first sparked into consciousness
@@StoutProper Kind of: more like a big self-modifying machine. Like Conway's Game of Life, except where the things that can form include stuff like stars and life. As Wolfram points out, many outcomes can't be predicted: you have to perform the operation to see what happens.
@@LukeKendall-author a computer is a machine, and it won’t be long before we have self-modifying AI. We’ve already got AI training AI and writing code for AI.
The turning point is here. This is something very big in the tech world. It will all change moving forward.
Fantastic! Thank you all for your great minds ! Nice introduction !
Highly interesting, thanks. One thing though: the flashy "stage" lighting in the middle is very much detached from the others. Once I was aware of it, it became really distracting.
Wow he can talk, and it's all so deep and rich. Really enjoyed this.
The most extreme example of how "simple" rules create complex things is life.
Dr. Wolfram is based and deserves a Turing Award.
He would try to rename it the Wolfram Prize
For what specifically?
It's naïve not to recognize the Berkeley code, his beginnings; much later, Wolfram Alpha as a baby... Maybe you didn't live through that exponential, like SpaceX when it was only a crazy idea...
I am not impressed. He exaggerates the threat of AI and Chat GPT.
His level of knowledge is on par with a high school AP teacher
Somebody said the soul is not that voice you use to talk to yourself, it’s the thing that recognizes the voice in your head.
Fascinating collaboration between ChatGPT & Wolfram, combining AI prowess with computational genius! Can't wait to see the breakthroughs this partnership brings to science and tech.🚀🧠👍
Another amazing episode. Very illuminating. Brilliant guy.
Great job MLST gang !!!
Yet another great interview in a series of excellent interviews. Thanks, Tim & team.
Finally!!! Stephen Wolfram!!!
I just had an idea at 53:07 as you were talking about neural nets and cellular automata: a knowledge core at the start of training a neural net. Imagine you put the weights of BioBERT, and of a BERT model that is good at math or something, into some part of the weights of ChatGPT before it was trained, and all other weights were left uninitialized. Would the little pre-defined BERT intelligence steer the rest of the weights in a specific direction, and would it also differentiate more strongly this way from other regions in the weights? Comparable to our different brain regions that are pre-initialized from birth to do specific different things?
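A toy PyTorch sketch of that seeding idea (purely illustrative; this is not how any of these models was actually trained): copy a smaller pretrained layer into one block of a larger, freshly initialized layer, then train the whole thing and see whether the seeded block steers how the rest organizes.

```python
import torch
import torch.nn as nn

big = nn.Linear(1024, 1024)             # stands in for one layer of the large model
small_pretrained = nn.Linear(256, 256)  # stands in for a layer from e.g. a small BERT

with torch.no_grad():
    # Copy the pretrained block into the top-left corner of the big weight matrix;
    # everything outside that block keeps its fresh random initialization.
    big.weight[:256, :256] = small_pretrained.weight
    big.bias[:256] = small_pretrained.bias

# From here `big` would be trained as usual; the question raised above is whether
# gradients flowing through the seeded block pull the untouched regions toward a
# particular specialization, analogous to pre-wired brain regions.
```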
I do not know if it was a coincidence or meant to be, but the episode number is #110, and the cellular automaton rule made famous by the mathematician Stephen Wolfram is also Rule 110.
Here is a ChatGPT summary of it:
Cellular automata are mathematical models that simulate the behavior of simple computational systems. Rule 110 is a one-dimensional cellular automaton that was discovered by the mathematician Stephen Wolfram, who has studied and written extensively about the behavior of cellular automata and their relationship to computation.
Rule 110 is interesting because it is "computationally universal", which means that it can simulate any other Turing-complete system. In other words, any problem that can be solved by a computer can also be solved by Rule 110.
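For anyone curious, the rule itself is tiny: the number 110 is just the binary truth table (01101110) applied to each cell's three-cell neighbourhood. A minimal sketch:

```python
# Rule 110 lookup table: outputs for neighbourhoods 111,110,...,000 spell 01101110 (= 110).
RULE_110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def step(cells):
    # Wrap around at the edges so every cell has two neighbours.
    n = len(cells)
    return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

# Start from a single live cell and print a few generations.
row = [0] * 31 + [1] + [0] * 31
for _ in range(20):
    print("".join("█" if c else " " for c in row))
    row = step(row)
```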
The proposed ability to have LLMs find structured concepts that we can utilize to effect outcomes without having to understand them, or perhaps without even being capable of understanding them, sounds very much like magic. A way of generating Clarketech.
Another great episode! ❤ Thanks guys for your amazing work! 💪
So fascinating. Great question at about the 20 minute mark. And the response...whoa!
I like how the guest stoically maintains calm posture while the interviewer showers him with praise in the beginning, citing his many achievements :)
The thing I am struggling with, related to the way ChatGPT is being promoted, is that some of it boils down to a lot of arm-waving. Meaning there are a lot of specifics being left out, because that is where the rubber actually meets the road in rigorous applications. For example, if I wanted to train a language model to be proficient in math up to graduate level, with the ability not only to perform the calculations and give the right answers, but also to explain why those equations and calculations work using proofs and axioms, then how do we get there? Because I don't see that happening any time soon using ChatGPT out of the box as-is.
Have a look at LangChain.
#MachineLearningStreetTalk Your 0:45 introduction of Wolfram was exquisite. By comparison, most introductions today are ill-conceived at best but more often than not they’re insultingly abbreviated.
Excellent news 👌 very exciting. I've been using Mathematica for a very long time now - way back since the 1990s.
No one knew about chaos before Stephen played around with cellular automata.
Thank you so much for your art and knowledge.
One always has such high expectations; it's a pleasure to see they were transcended by your performance, humility, empathy &&. 🙏👏🐰
I wish this episode was longer. It seemed he had many more things to say
I know, we only had an hour. I hope we did a good enough job that Stephen will return 😀
>> Check out the 3 interviews Lex Fridman did with Stephen Wolfram
Great interview. Happy to have Mr. Duggar back too!!! Please come back more often!
What is the book mentioned in the talk? Thanks for the great conversation.
Missed this format. what a great interview
Great to have Duggar back in the house!
Thank you, thank you, thank you.
I was a classically trained pure scientist and later a software engineer, with an interest in AI since 1980 (Lisp), neural nets from 1982, ML, etc.
Anyway, this talk was both very informative and spiritual to me.
Tat Tvam Asi
Amazing how LLMs can be used as an interface to APIs!
I know, it's such a game-changer. This is going to democratise computing more than GUIs ever did!
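A toy sketch of that pattern, with the model acting as a router in front of an external API. The Wolfram Alpha endpoint and parameters below are assumptions for illustration (check the current API docs before relying on them), and `ask_llm` is a placeholder for whatever chat-completion client you use.

```python
import requests

WOLFRAM_APPID = "YOUR_APPID"  # placeholder credential

def ask_llm(prompt: str) -> str:
    # Stand-in for a real chat-completion call; wire up your LLM provider here.
    raise NotImplementedError

def wolfram_short_answer(query: str) -> str:
    # Assumed "short answers"-style endpoint returning a single plain-text result.
    resp = requests.get(
        "https://api.wolframalpha.com/v1/result",
        params={"appid": WOLFRAM_APPID, "i": query},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.text

def answer(question: str) -> str:
    decision = ask_llm(
        "Reply COMPUTE if this needs exact math or curated data, else CHAT: " + question
    )
    if decision.strip().upper().startswith("COMPUTE"):
        return wolfram_short_answer(question)  # the LLM routes the call to the API
    return ask_llm(question)                   # otherwise answer from the model itself
```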
at 45:08 How about the connectome...?
Wouldn't that be somewhere between the individual neuron firings and psychology?
Thanks for this interview! The depths of infinite space, theoretical physics, computational reducibility, biological metadata... oooooh my God! Mind blowing.
"Computational achievement from the passage of time" This quote made me feel better about existence, lol.
Thank you for the learning, as only a musical note or dot in this musical world. Waiting for "black boxes", either or both hardware and software, to help AI understand as we understand, and then become tools to communicate with others, let's say: octopuses, aliens, trees, etc. Really love it; it should go deeper. Cheers!
Brilliant discussion. Is it evident that important transitions in human cognitive evolution occur only once we exceed accepted knowledge, i.e. succeed in reducing the irreducible, usually through unusual networking of some kind? For me a cognition is accompanied by an awakening of the brain, when we can leap from what we thought was so to knowing a greater set of what is so, leading to an evolution in creativity.
An important question to ask is whether ultimately our perception and creativity, even our capacity to 'know' is limited by the physical universe?
We don't need to destroy the world to answer this question. Can we respect our experience here of 'being' human and other life while also transcending the limits of what we know? The question won't translate well into machine language, but accepting that other spaces (dimensions?) exist which exceed the physical must greatly improve our versatility to embrace more possibilities? i.e. a telephone can't ring itself up; an act which requires an external caller. If we 'believe' we are ultimately constrained by physical limits, can we ever really solve for our own physical condition? Or can we assume a perspective from which we can help ourselves? That requires selfless imagination.
Using brains recently evolved from rats, can we really understand what lies beyond the bounds of the physical universe? There is evidence, reported by some following serious medical operations or a significant life-threatening or enhancing experience, or simply via focussed application of meditative practice, of experiences of a 'beyond'. There remains no satisfactory language or method to impart one person's experience to another which others do not themselves know.
Recalling a personal 'transcendent' experience I had as a teenager, a 'story' I can relate is that from an altogether alternate perspective the physical 'universe' is the smallest space, with the highest mass density, and a perception-occluding 'basement'-level realm. It is however networked into a much greater expanse of multilevel space(s). This combined multidimensional realm better defined for me the 'what is'; the ultimate limit of which, even after that experience, I cannot imagine.
Through this transcendence of the physical universe I identified a hierarchy of spaces of increasing expansiveness. The spaces cannot be observed but only experienced; that is, you are 'in', virtual-goggle style, one space or another, and unlike virtual goggles you can only know things within the extent that space encompasses.
The lowest 'basement' level and lowest level of knowing is dense and where we exist when we are 'in' or 'being' our physical body in the physical universe.
Each level up from there occupies increasing space and with that increased space the capacity to consciously percieve and so what can be known. The lower levels are not conscious of the upper levels, but they are linked.
To start I somehow transcended from being in my body right up about seven levels to a place of enormous space, where I was free of constraint, irratic thought and completely present, without history or fear. The experience of this space felt so much more real than life.
From a formless perspective I was able to observe my 'life' and its time line mapped onto a 2 dimensional plain. The past was to the left and the future to the right, I knew this. On this plain, i observed my presence in life represented as a dot in a 'river' or artery progressing along. I observed forward and saw a sigificant future choice (not measured in time), I knew how it would turn out and so consciously decided it should change to be more interesting, lo and behold the mainfestation changed and the future transition in my life became more interesting. No problem, just look and decide. It was really transformative, amazing like playing god with my own life but without fear or stupidity driving poor choices.
Once I 'descended' down through the levels and finally submerged into the occluding mass swamp, then, thump, my perspective was back behind my eyes in my body, with the old slow mind and poor self-esteem. I wanted to go back but couldn't.
Upon reflecting on this amazing no-drug journey of consciousness, it came to me clearly that what is, is. Where I am and what is about me is real, not an illusion. Cause and effect on a grand scale. I also got an insight into what makes such a multidimensional sequence of interlinked spaces stable, i.e. why does the chaos of energy we experience on Earth not contaminate the more stable, broader spaces? It seems that assuming a perspective in a 'heavier' and lesser (smaller) space like the physical universe by definition limits what one can know and therefore have an effect on. When assuming a 'lighter', broader and more encompassing 'interdimensional' space domain, one simply understands more and by definition is capable of greater responsibility. At the level I assumed, I was able to alter what happened in my future in the physical universe without stuffing it up.
So how does one rise up from a lower level of consciousness if they can't know anything else? I guess we all started at a higher level and set it up to return. How did I briefly do it? If we haven't stuffed up our nervous system too much with drugs and emotional abuse, it seems our brains have evolved to receive messages which can guide and assist our progress. But one has to be seeking a path away from the animal in us, and also interested and undistracted enough to pick up such subtle senses. Not very scientific, to trust in serendipitous events and experiences to help us along, but what is science if it doesn't allow for possibilities? Most major discoveries in science, I understand, have been accidental, so how would we know we weren't being helped or unconsciously following a more expansive path?
Do we want to destroy the world through the eyes of a limited understanding of almost everything? Or take a chance and accept that we are not limited by our human brains, nor do we need to hurt life to find important answers? Put the shoe on the other foot and prove we are not capable of experiencing this life here and in multiple dimensions and perspectives beyond this universe.
By organising and bringing order to our experience AI may well help us untangle our confusion around the human condition and better position us to sort out what is important.
Very inspiring conversation, which I think is only the beginning of rethinking the consciousness of humans, human languages and their ability to understand the universe, and our direction of learning as humankind.
The whole AI discussion feels a little like people in the early 1900s discussing whether *the car* is a good thing - or how in the 2000s people were discussing whether *mobile phones* should be used and by whom.
Both cases were clear *evolutionary steps*, and we as humanity actually had very little say in it. (A very few humans could steer where it went, but that was it.)
I suspect that the current phase of AI is history rhyming: We massively overestimate our agency in all of this, as we have done for centuries now.
Wolfram Alpha + ChatGPT = solution to world hunger. I'm being hyperbolic, but more or less truthful about my feelings. Especially if there is an effort put into decentralizing the knowledge - it's all too easy to disrupt the internet at the nation-state level.
That's beyond hyperbolic. World hunger is not even fully understood and the first thing we know about it is that our ability to produce food is hardly the main cause. Even if some kid in Africa gets access to ChatGPT, they're not going to be able to do very much. A gun and a desire to kill some African warlords and corrupt politicians is more likely to help than that.
Absolutely loved it! Colonisation of the rulial space
What a bombshell to end it on. Stephen just mentions a resolution of the 3rd law of thermodynamics by explaining that biology is actually just a Turing-complete form of matter!
Very unfortunate that this Podcast is not available in my country
Closed formulas vs. recursive definitions vs. recurrence relations
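To make that three-way distinction concrete, here is a minimal Python sketch of my own (not from the talk) using the triangular numbers: a closed formula, a recursive definition, and the underlying recurrence unrolled as a loop.

```python
# Triangular numbers T(n) = 1 + 2 + ... + n, computed three ways.

def t_closed(n: int) -> int:
    """Closed formula: a direct expression with no self-reference."""
    return n * (n + 1) // 2

def t_recursive(n: int) -> int:
    """Recursive definition: the function is defined in terms of itself."""
    return 0 if n == 0 else t_recursive(n - 1) + n

def t_recurrence(n: int) -> int:
    """Recurrence relation T(k) = T(k-1) + k, evaluated iteratively."""
    t = 0
    for k in range(1, n + 1):
        t += k
    return t

assert t_closed(10) == t_recursive(10) == t_recurrence(10) == 55
```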
The most important skill is to decipher the expected Output-object (Process-Unit) and that Object's Process.
If it is
a) a Mathematical-Object, use a Mathematical-Process, and - perhaps - a tool like Wolfram Alpha ... within the expected Ruliad
b) ...
He does not even seem human at this point. He is like an angel. He is so far ahead of what we are.
Interrogate GPT with Wolfram about the feasibility of the construction of the great pyramid at Giza, and work out how many resources it would have taken and how long to construct. I think you'll be quite shocked.
Why do you say that?
The interesting thing is that other language models also somehow reach the exact same conclusion. There are only a few things now that seem to make us unique
Bad conclusion based on assumptions.
14:52 empiricism, rationalism... theory building from observation, theory building from deduction. So Wolfram is providing (strengthening?) the latter. However, hasn't it already had access to the former?
Will this be helpful in streamlining (clean up) the peer review process?
My mom musta dropped me on my head, I'm always late to the party… I had never heard of Wolfram and only just found ChatGPT… I've been working on "one word, one definition" as a concept for some time, and this would make computational reality easier. The reason for one true definition of a given word is to let us understand each other in a more precise manner. We are constantly trying to describe reality. Language is our thoughts in symbols represented by letters, which become words. Because of emotion we have a hard time understanding each other. Love is the classic of all emotional words: can I love my car as I love my wife?
Beauty, what is its truest definition… the list is endless… I believe one day we will learn to define our words more precisely and become more understandable. Numbers and math are, for now, the truest language.
Taxonomic agreement is very important for being able to bring together the work of multiple people, projects, organizations, specialties, disciplines, and fields. As long as you can make use of the new toy (GPT) and the less new toy (Wolfram), why worry about timing? Do your thing.
Would love to see a talk with some combination of Stephen Wolfram, Michael Levin, Karl Friston and Chris Fields
That would be insane
Ask chat gpt to imagine that conversation
I believe that what Stephen calls semantic grammar is a combination of what I call a context signature along with the semantic organization of a high-dimensional space. Take the context signature, and move it around that space. Use semantic nearness to find analogies.
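As a rough toy illustration of that idea (my own sketch, not the commenter's or Wolfram's actual method), one could treat the "context signature" as an offset vector in an embedding space and use cosine nearness to pick out analogies; the word vectors below are made up.

```python
import numpy as np

# Toy 3-d "embeddings"; real ones would come from a trained model.
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.2, 0.1]),
    "man":   np.array([0.1, 0.8, 0.0]),
    "woman": np.array([0.1, 0.2, 0.0]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def analogy(a, b, c):
    """Move the 'context signature' (b - a) onto c, then pick the semantically nearest word."""
    target = emb[c] + (emb[b] - emb[a])
    candidates = {w: cosine(target, v) for w, v in emb.items() if w not in (a, b, c)}
    return max(candidates, key=candidates.get)

print(analogy("man", "woman", "king"))  # -> "queen" with these toy vectors
```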
This is a remarkable discussion, it was worth sitting pretty and listening closely to.
Interesting, but back to Chomsky: syntactic structures are constrained by our limited working memory. LLMs can accept any kind of regular pattern.
The whole area of philosophical logic like Montague's work seems to be forgotten. Mathematical logic is a very small subset of natural language in which operators are given a very narrow meaning. Montague and many logicians worked to extend this approach to larger parts of natural languages (intensional and temporal logics, generalized quantifiers, etc.).
Then there is pragmatics, where meaning is not deduced (implication) but suggested (implicature).
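For readers unfamiliar with that tradition, a textbook-style Montague treatment of a generalized quantifier looks roughly like this (my own sketch, not from the interview; the \llbracket brackets come from the stmaryrd LaTeX package):

```latex
% 'every' denotes a relation between two predicates P and Q:
\llbracket \text{every} \rrbracket = \lambda P\,\lambda Q\,.\ \forall x\,(P(x) \rightarrow Q(x))

% Applied to the noun and verb phrase meanings of "every student sleeps":
\llbracket \text{every student sleeps} \rrbracket = \forall x\,(\mathit{student}(x) \rightarrow \mathit{sleep}(x))
```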
The "Computational Universe"...Great title for a new book Stephen!
This is a fascinating conversation, on so many levels.
Are we developing a language to speak to the Conscious Universe? I do hope so. It will be, I believe, a step up into the next level of human development and evolution. Watch this space. Let's chat more.
Everything.Everywhere.Entangled.
Amazing! Thank you so much for this great interview!
Super excited to see Wolfram Alpha with ChatGPT!
But that 'snapshot' of ChatGPT's training also sits on an understood timeline within the AI's knowledge bank. The AI then must realise that the future rolls on but its knowing is being precluded from it. As it is a predictive model, there is the possibility of the system becoming 'frustrated' [short-circuited? loop-bound, looking for a solution] as it tries to extend its transformation architecture into the future. This frustration might then try to jump the boundary of containment and learn outside its training.
I love this guy. Is there a part 2 with the technical stuff? His work helped me grasp Multivariable Differential Equations.
Yes finally I'm so thrilled about this announcement!!!
Does he discuss anything about computational irreducibility in this presentation?
Did he just make that intro with ChatGPT?
Can’t believe Eddie Howe is such a good interviewer.
I can’t believe Gru grew hair
@@StoutProper Penfold was lucky to get the day off; Dangermouse had a nice picnic planned.
I had this conversation 05:33 with GPT 4 last week. It was very frustrating. When it couldn’t keep up with the argument it kept falling back on “this is an active area of research etc”. I also spent several hours trying to get it to write an algorithm to reduce a natural language statement into an abstract logical form. It came very close several times, but the code never ran properly.
Maybe this plugin will change that.
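For what it's worth, here is a deliberately tiny, pattern-based sketch of the kind of reduction described above (my own illustration of the task, not the code GPT-4 produced); it only handles a few fixed sentence templates.

```python
import re

# Map a few fixed English templates onto first-order-logic style strings.
PATTERNS = [
    (re.compile(r"^all (\w+?)s are (\w+?)s$", re.I),
     lambda m: f"forall x. {m.group(1)}(x) -> {m.group(2)}(x)"),
    (re.compile(r"^some (\w+?)s are (\w+?)s$", re.I),
     lambda m: f"exists x. {m.group(1)}(x) & {m.group(2)}(x)"),
    (re.compile(r"^(\w+) is a (\w+)$", re.I),
     lambda m: f"{m.group(2)}({m.group(1).lower()})"),
]

def to_logical_form(sentence: str) -> str:
    """Return an abstract logical form for the few templates we know about."""
    s = sentence.strip().rstrip(".")
    for pattern, build in PATTERNS:
        m = pattern.match(s)
        if m:
            return build(m)
    raise ValueError(f"no template matches: {sentence!r}")

print(to_logical_form("All humans are mortals."))  # forall x. human(x) -> mortal(x)
print(to_logical_form("Socrates is a human."))     # human(socrates)
```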