ChatGPT won't ever be the same again after this

  • Published: 21 Dec 2024

Comments • 853

  • @jooky87
    @jooky87 1 year ago +56

    I bought Wolfram’s book A New Kind of Science in 2001, and finally we are coming full circle to his groundbreaking idea of computational irreducibility… bravo!

  • @SzabolcsSzekacs
    @SzabolcsSzekacs 1 year ago +235

    I love how Stephen went exponential from explaining how ChatGPT develops a model to the computational structure of the universe behind what we can perceive in our physical world.

    • @skierpage
      @skierpage 1 year ago +40

      "What will you be wanting for dinner, Dr. Wolfram?"
      "From my principle of computational irreducibility, it necessarily follows that our brains are structures that can only perceive a subset of the ruliad graph theory underlying all computable realities, which would make predicting my future dietary wants impossible with the computational resources available in this universe; however my Wolfram language is close to generating a proof that neural firing is congruent with a cellular automata of sufficient complexity as explained in my book _A New Kind of Science_. So... fish and chips please."

    • @ChatGPT1111
      @ChatGPT1111 1 year ago +2

      It is quite elementary actually.

    • @Inception1338
      @Inception1338 1 year ago +1

      @@skierpage cheers!

    • @jordanzothegreat8696
      @jordanzothegreat8696 1 year ago +1

      @@skierpage Hard disagree. Brilliant, yes. Trailblazer, yes. Wise? Maybe not... "can't predict the output" were his words... he minimizes cellular automata and I'm fearful. 18:15

    • @skierpage
      @skierpage 1 year ago +2

      @@jordanzothegreat8696 I have no idea how your garbled comment relates to my joke.

  • @mildpass
    @mildpass 1 year ago +286

    It only took 23 minutes for Wolfram to pivot from chatGPT and LLMs to the ruliad. This man has a one track mind and I love him for it.

    • @TurnerRentz
      @TurnerRentz 1 year ago +2

      Agree

    • @ChatGPT1111
      @ChatGPT1111 1 year ago +14

      Indeed, he is a versatile individual.

    • @thetruthserum2816
      @thetruthserum2816 1 year ago +3

      GPT-5: "John..."

    • @timeflex
      @timeflex 1 year ago

      Would such a ruliad conform to the incompleteness theorems? And if yes (or no), how would such a Turing machine work?

    • @mikeb3172
      @mikeb3172 1 year ago +1

      AI can't do any serious computation, so why can any of these guys

  • @markryan2475
    @markryan2475 1 year ago +468

    The really remarkable thing about this interview is to hear Dr. Wolfram talk about something other than what he has created himself.

    • @user-tg6vq1kn6v
      @user-tg6vq1kn6v 1 year ago +42

      He did seem to bring everything back to his own stuff when left to talk long enough

    • @jyjjy7
      @jyjjy7 1 year ago +55

      He literally does hours-long weekly sessions on the history of science and technology, and another of general Q&A, on his YouTube channel, but yes, that he finds the time to do so instead of just talking about his own superlative ongoing scientific achievements is indeed remarkable

    • @mark_makes
      @mark_makes 1 year ago +49

      His work is extremely relevant to the conversation. It's an interview. He's the SME. This is to be expected.

    • @user-tg6vq1kn6v
      @user-tg6vq1kn6v 1 year ago +13

      Excellent, we are all correct

    • @nneisler
      @nneisler 1 year ago +1

      @@mark_makes He's not really an NLP guy

  • @henryleonardi5368
    @henryleonardi5368 1 year ago +68

    That metaphor with the mosaic and fractal patterns was so interesting. Like discovering stuff before you have the "scientific history" to realize how useful it is

    • @bujin5455
      @bujin5455 1 year ago +24

      I think this is one of the reasons that having industry experience before you attend college is profoundly useful. I had been in industry for a while before I pursued my CS degree, and when I got there I found everything profoundly interesting. My peers on the other hand were constantly asking questions like, "is this important?", "why do we need to know this?", etc. Of course the professors tried their best to answer these questions, to contextualize the importance of the subjects being explored, but the answers themselves were met with similar apathy. In the end, it's very difficult to form a crystal, or pearl, without a starting structure to seed the process.

    • @astilen5647
      @astilen5647 1 year ago +2

      Let me explain, they put pretty stones side by side instead of painting. Basically they made an AI with stone.

    • @Hexanitrobenzene
      @Hexanitrobenzene 1 year ago

      Yeah, it also reminded me of a Jordan Peterson lecture, where he said that our perception is shaped by our mental state and that reality is differentiated into abstractions we call "objects" by the possible use cases.

    • @bilbobagginssword3926
      @bilbobagginssword3926 1 year ago +1

      Pandora’s Box IS open. Not metaphorically either, more like literally

    • @onedaya_martian1238
      @onedaya_martian1238 1 year ago +1

      @@bujin5455 Your observation is very, very accurate!!!

  • @datasciyinfo5133
    @datasciyinfo5133 1 year ago +12

    My Meetup group was discussing the Toolformer paper by Meta AI last week, and we were all saying how hooking up Wolfram Alpha to ChatGPT will be a game changer, and here it is already! Thanks for the video guys. Really concentrated concepts. Difficult to follow but fascinating. I am going to check out Wolfram’s other talks now.

  • @bryankarsh9909
    @bryankarsh9909 1 year ago +28

    One of the most fascinating hours I’ve spent in a long time. Thanks for putting this video together! My mind is blown in all the best ways.

  • @justinwmusic
    @justinwmusic 1 year ago +19

    I really do think that this WolframChatGPT feedback loop will be one of the main drivers allowing LLMs to transition into something that we perceive as AGI. "Attention is all you need". With its attention focused on unlimited, novel machine-generated data founded in deep computational understanding, provided as answers to its own questions, acquired at a speed limited only by available processing power, all models that don't have such resources (including the biological models called human brains) will be quickly left in the dust.

    • @skierpage
      @skierpage 1 year ago

      Maybe. It's still unclear that the combination can come up with a plan of attack to investigate an area and come up with a novel conclusion useful to human beings, as Wolfram says in the interview about theorem generation and cellular automata. But even if it only acts as a super-capable assistant to human research and development it will be hugely significant.
      "What is the chemical formula of a room-temperature superconductor that can be cheaply manufactured?" is my acid test, far more important than acing graduate-level exams or "Summarize as a poem."

    • @yoyoclockEbay
      @yoyoclockEbay 1 year ago +4

      That's exactly what I was thinking

  • @congareel
    @congareel 1 year ago +8

    A massive thank you to MLST for this video. This is the real conversation we all need to be aware of in a world where AI can grow our human understanding, and it is an exciting time for the future of language and knowledge.

  • @erasmus9627
    @erasmus9627 1 year ago +85

    This is such a profoundly important discussion. The implications of ‘emergence’ are both exciting and terrifying. Humanity has reached a critical crossroads.

    • @awdsqe123
      @awdsqe123 1 year ago +16

      And because of capitalism we don't have a choice. Instead, billionaires and corporations that only see profits are the ones choosing which road to take.

    • @Alex-bl6oi
      @Alex-bl6oi 1 year ago +4

      I don’t know, it almost seems inevitable that these complex AI’s will be reverse engineered, copied, escape, or become open source.

    • @Inception1338
      @Inception1338 1 year ago +4

      @@Alex-bl6oi You cannot steal the computational power that is needed though... That still requires some infra...

    • @itsd0nk
      @itsd0nk 1 year ago +5

      @@awdsqe123 The Bing Ai Chatbot is already a perfect example of how these companies have a financial incentive to forego safety in favor of speed, in an attempt to “get there first”. This will get progressively more dangerous as these systems become exponentially more powerful in short amounts of time.

    • @cdreid9999
      @cdreid9999 1 year ago

      @@Alex-bl6oi AI tech isn't secret. It's a science.
      What you should worry about, which no one is talking about, is that Google's, MS's, etc.'s information-gathering capability just increased a hundredfold. MS is putting their AI into all their products, which means they can analyse your email, business and household finances, etc.

  • @Anders01
    @Anders01 1 year ago +9

    Amazing! Before I listen to the whole presentation, what came to me is that ChatGPT 4 (and beyond) getting access to directly executing Wolfram Language code and using Wolfram Alpha seems extremely powerful. It will be interesting to see where this goes.

    • @chrisreed5463
      @chrisreed5463 1 year ago +3

      The singularity.
      But a very weird one, where the AI isn't sentient (most likely) and has odd deficits. The question 'is a model perfect?' is the wrong question, the right question is: Is it useful? (If I can twist a physics statement into the world of AI.)

  • @Woef718
    @Woef718 1 year ago +46

    Now I can finally learn mathematics. Really, most books are unreadable, but being able to "talk" about mathematical topics really changes my learning approach. Heck, I might even start a bachelor's in pure mathematics again. Thanks bois.

    • @williamparrish2436
      @williamparrish2436 1 year ago +3

      I keep thinking a similar thing. I could learn the math of quantum mechanics

    • @M1kl00
      @M1kl00 1 year ago +1

      @@williamparrish2436 it's not that hard math wise

    • @damightyom
      @damightyom 1 year ago +2

      @@M1kl00 I guess? I never had Calc 1, I studied some on my own. I imagine I need Calc1, 2, and 3, Differential Equations, Abstract Algebra, Real Analysis, Linear Algebra? and more Physics too. That sounds like a lot to be honest. BUT... If I can ask questions and actually have a computer understand them it all sounds possible.

    • @M1kl00
      @M1kl00 1 year ago +1

      @@damightyom you need calc and DEs for anything in physics. Imo you don't really need to study real analysis and abstract algebra. Linear algebra, however, is pretty much the language of quantum mechanics. I recommend Strang's book on it

    • @damightyom
      @damightyom 1 year ago

      @@M1kl00 That's good to know, thank you!

  • @bujin5455
    @bujin5455 1 year ago +26

    I feel as though that interview could have been five times longer, and we wouldn't even have gotten the man warmed up.

  • @itsd0nk
    @itsd0nk 1 year ago +6

    It’s interesting that Moore’s law is kind of back up and running in a new way now with AI deep learning models in the past few years. It had finally started to plateau around 2013-2015, compared to the yearly leaps during the 70’s, 80’s, 90’s and 2000’s. Performance and power only seemed to be incremental each year or two over the past decade and a half really, rather than the exponential leaps we saw every single year in the 90’s and 2000’s. But now AI is finally a new paradigm of similar growth, if not even more exponentially increasing than the traditional transistor/compute power growth. It’s scary to imagine where we will be with it in just two years from now.

    • @garethbaus5471
      @garethbaus5471 1 year ago

      AI, at least in its current form is highly dependent on having a lot of computational power.

  • @1Esteband
    @1Esteband 1 year ago +10

    There are many jewels of wisdom in this riveting interview.
    We are living in a time where our knowledge is expanding at a speed that very few individuals will be able to understand, and even fewer to harness.

    • @hariveturi4193
      @hariveturi4193 1 year ago +4

      "even fewer harness it" - THIS.
      I've been trying to tell this to people around me but they just do not get it.

    • @alertbri
      @alertbri 1 year ago +3

      This is where AI becomes a very timely and necessary tool.

    • @sunlight8299
      @sunlight8299 1 year ago +3

      May those few wield it well for the good of all including non humans.

    • @howmathematicianscreatemat9226
      @howmathematicianscreatemat9226 1 year ago

      @@alertbri Yes, but it has large potential to even further degrade the brains of the masses. Imagine when even research is partially done by AI. What will people still understand themselves? 🤔

  • @ozziepilot2899
    @ozziepilot2899 1 year ago +14

    Nearly one hour seemed to go by very fast. This was an amazing interview.

  • @pilotwolfram6192
    @pilotwolfram6192 1 year ago +3

    I started using Mathematica around 1994. I was pressured by a couple of physicists to use it when I started a job in R&D. Best tool ever for modeling and analysis. I still use it today.

  • @lollihonk
    @lollihonk 1 year ago +21

    Take him for a longer interview, 2-3 hours, please. That would be gold.

  • @tybowesformerlygoat-x7760
    @tybowesformerlygoat-x7760 1 year ago +14

    When I was about 12 (1986) I wrote a program that I called Petri Dish, to simulate cellular activity. It had already been discovered a decade before (Conway's Game of Life), but it still blows my mind a bit that I had the idea for it at that age.

    • @vikramreddy12
      @vikramreddy12 1 year ago

      😊

    • @Sharpy7562
      @Sharpy7562 1 year ago

      Seems it’s a gatekeeper for gvt

    • @DJWESG1
      @DJWESG1 1 year ago

      That's what's great about the way we all perceive our reality. Some ppl are surprised when they produce the same things after receiving the same inputs. Observing the same reality.

  • @unorthodoxmath
    @unorthodoxmath 1 year ago

    Thanks! Not quite sure what you’re talking about but I gather it’s pretty important 👍

  • @ELECTR0HERMIT
    @ELECTR0HERMIT 1 year ago +18

    This was a great conversation. With ChatGPT empowering already powerful luminaries such as Wolfram, we probably just have no idea how advanced and accelerated things are about to become

  • @realist4859
    @realist4859 1 year ago +101

    What an intro! And well deserved!

    • @krasko6688
      @krasko6688 1 year ago

      It seems I have little knowledge of how Wolfram is relevant to this field; what has Wolfram done to be hailed with an intro like his?

  • @mikenashtech
    @mikenashtech 1 year ago +7

    Super conversation Dr Scarfe, Dr Duggar and Dr Wolfram. Really interesting discussion with excellent questions. Especially liked the Wolfram + ChatGPT exclusive. Amazing news, what a scoop! Thank you. M

  • @Mutual_Information
    @Mutual_Information 1 year ago +39

    This channel picks such good guests. Well done!!

  • @mfu9943
    @mfu9943 1 year ago +12

    Discovered this amazing channel through this interview. Thanks guys for doing it.

  • @Think4aChange
    @Think4aChange 1 year ago +10

    So much respect! Bravo Sir for all your incredible contributions. Bravo to you and your team!

  • @AndreasEsau
    @AndreasEsau 1 year ago +1

    Wow.. that was an amazing discussion. And it actually struck a nerve. I love that the fact was mentioned that having simple rules can lead to rich and complex behavior.
    I have a small anecdote which directly describes this and gave me goosebumps when hearing about it in this discussion.
    I once made a simple jump-and-run game. For that game I wanted some birds to play a role. They will follow your character and fly around him. I gave those birds some very basic, simple rules. 3 to 4 rules. Fly forward. Rotate towards the character. Shoot a ray in front. When the ray hits another bird, rotate in a random direction. I can't recall if there were a few more. But this resulted in such amazing-looking boid behavior. I wasn't even anticipating this could be done with so few simple rules.
    And seeing such complex behavior in nature, I would have never thought that just a few rules can possibly lead to that behavior.
    So again, thanks a lot for that talk, I could have listened a few more hours. Loved seeing the hosts' faces and how Wolfram's explanations tingled their thoughts! Mine too for sure!
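    A minimal sketch of bird rules like the ones described above (plain Python; the constants, names and the look-ahead "ray" are assumptions for illustration, not the commenter's actual game code):

        import math
        import random

        def step_birds(birds, player, speed=1.0, turn_rate=0.1, ray_len=5.0):
            # One tick of the simple rules: fly forward, turn toward the player,
            # and turn randomly when the forward "ray" comes close to another bird.
            for i, (x, y, heading) in enumerate(birds):
                # Rule 1: rotate a little toward the player's position.
                desired = math.atan2(player[1] - y, player[0] - x)
                diff = (desired - heading + math.pi) % (2 * math.pi) - math.pi
                heading += max(-turn_rate, min(turn_rate, diff))

                # Rule 2: look a short distance ahead; if another bird is near that
                # point, rotate by a random amount instead of holding course.
                rx = x + ray_len * math.cos(heading)
                ry = y + ray_len * math.sin(heading)
                for j, (ox, oy, _) in enumerate(birds):
                    if j != i and math.hypot(rx - ox, ry - oy) < 2.0:
                        heading += random.uniform(-math.pi / 2, math.pi / 2)
                        break

                # Rule 3: always fly forward along the current heading.
                birds[i] = (x + speed * math.cos(heading),
                            y + speed * math.sin(heading),
                            heading)
            return birds

        # Ten birds chasing a stationary "character" at the origin.
        flock = [(random.uniform(-20, 20), random.uniform(-20, 20),
                  random.uniform(0, 2 * math.pi)) for _ in range(10)]
        for _ in range(100):
            flock = step_birds(flock, player=(0.0, 0.0))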

  • @thinkaboutwhy
    @thinkaboutwhy 1 year ago +9

    Thank you for this, and thank you for just letting them speak. So hard to do, but you crushed it.

  • @RinnRua
    @RinnRua 1 year ago +2

    I believe I have studied all of the YouTube discussions and lectures that Stephen Wolfram has published over the last three years, but this presentation is the first that has made me eerily aware that the Ruliad, rather than an inconvenient threat to the expansion of human consciousness, is in fact an opportunity to use our facility for imagination (conceivability) to obtain anything that we desire from the Universe… a Universe of infinite possibilities.

  • @Khari99
    @Khari99 1 year ago +68

    Only Stephen Wolfram can go on a scientific rant like this and end it by saying "we didn't get deeply technical" lmao

  • @refinery.studio
    @refinery.studio 1 year ago +14

    That was one fucking hell of an introduction. And he deserves every bit of it.

    • @ajarivas72
      @ajarivas72 1 year ago

      The first time I used Mathematica was in August 1995. I fell off my chair when I saw the calculation of sqrt(2).

  • @livb4139
    @livb4139 1 year ago +32

    Love the excitement of the interviewer. Feels very genuine and it's contagious

  • @lemapp
    @lemapp 1 year ago +10

    I scraped together what cash I could in the late 1980s to get a copy of Mathematica. It was amazing. My brother and I would bang away on it for hours graphing equations and such. I quickly joined Wolfram Alpha when it appeared online. I've since seen the reports on the Mathematica language. I hope to continue with the latest evolution as I make projects for VR.

  • @cosmicmuffet1053
    @cosmicmuffet1053 1 year ago +3

    I liked the talk of cellular behavior at the end. Michael Levin has some interesting work that relates, I think--getting at the questions of where the more specific behavior of biology comes from, since it's not like there are entities inside a cell that can 'see' or know where to go, yet not only do structures inside form coherently (and move around according to the needs of the cell), but multicellular organisms arrange themselves into repeatable shapes without needing an external guide like a mold or an observer to hold the pattern that the cells are filling out.

    • @JasonCunliffe
      @JasonCunliffe 1 year ago

      "Intelligence goes ALL the way down!" (in scale in biology)
      -- Michael Levin

    • @l3lixx
      @l3lixx 1 year ago

      Biological tissue: is it a solid, is it a liquid? What it is, is a computational phase of matter. We (as humans) can recognize that there is (meaningful) structure in the way things are transported around, inside cells, inside processes.

  • @m2520
    @m2520 1 year ago

    Thanks!

  • @apalomba
    @apalomba 1 year ago +4

    That was phenomenal! My mind was blown when I imagined the ruliad space that is created by AI and how it will help unlock more areas of this space. How this will create new expressions of science we have never known before!

  • @Talismantra
    @Talismantra 1 year ago

    I admire the man's conversational generosity of not specifically correcting the host's slip-up use of the term "irreducible complexity" and instead just finding ways to repeat the correct term a number of times during the conversation. I look forward to watching this over and over until some part of it sinks in! Thanks for making this conversation available.

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk 1 year ago +2

      Watch our show on emergence ruclips.net/video/MDt2e8XtUcA/видео.html - the host understands what computational irreducibility means, "irreducible complexity" is a different term by the way - it means "biological systems with multiple interacting parts would not function if one of the parts were removed"

    • @Talismantra
      @Talismantra 1 year ago

      @@MachineLearningStreetTalk thanks for clarity, and I didn't doubt your understanding. I apologise for the comment; I shouldn't have spoken until I knew via listening again for what I might have missed. The issue was with how I heard that part of the conversation while my attention was split. It's not a term I am accustomed to hearing outside of theistic apologetics and seemed out of context here and I imagined it might be akin to a mistake I sometimes make while speaking English where I queue the right words correctly in my mind yet still speak something else and don't even hear the misarticulation myself. I'm watching some other videos of yours, and I can see this doesn't seem to be a concern for you.

  • @videowatching9576
    @videowatching9576 1 year ago +3

    Fascinating video! As mentioned by Wolfram at the end: a deeply technical one as a follow-up? That said, I appreciated that this interview focused on abstractions that are more relatable and about important topics in the direction of AI - and so are more parseable than still technical. Though I am curious what questions Wolfram had in mind, because my guess is that Wolfram would speak in a technical way that is also understandable. Cheers!

  • @filipgara3444
    @filipgara3444 1 year ago +23

    Stephen builds his thoughts on a very coherent world model. Fascinating

  • @LukeKendall-author
    @LukeKendall-author 1 year ago +8

    So much good stuff there! I loved the bonus idea at the end that biological material can be thought of not as a gas, a liquid, or a solid, but as a computational state of matter.

    • @StoutProper
      @StoutProper 1 year ago

      So the universe is just a big computer?

    • @StoutProper
      @StoutProper 1 year ago

      I prefer to think of it as an idea, and the Big Bang was when that idea was first sparked into consciousness

    • @LukeKendall-author
      @LukeKendall-author 1 year ago +2

      @@StoutProper Kind of: more like a big self-modifying machine. Like Conway's Game of Life, except where the things that can form include stuff like stars and life. As Wolfram points out, many outcomes can't be predicted: you have to perform the operation to see what happens.

    • @StoutProper
      @StoutProper 1 year ago +1

      @@LukeKendall-author a computer is a machine, and it won't be long before we have self-modifying AI. We've already got AI training AI and writing code for AI.

  • @HighStakesDanny
    @HighStakesDanny 1 year ago +3

    The turning point is here. This is something very big in the tech world. It will all change moving forward.

  • @grehuy
    @grehuy 1 year ago +1

    Fantastic! Thank you all for your great minds! Nice introduction!

  • @tims.2832
    @tims.2832 1 year ago

    Highly interesting, thanks. One thing though: the flashy "stage" lighting in the middle is very detached from the others. Once I was aware of it, it became really distracting.

  • @Michael-ul7kv
    @Michael-ul7kv 1 year ago +4

    Wow he can talk, and it's all so deep and rich. Really enjoyed this.

  • @bernhardd626
    @bernhardd626 1 year ago +4

    The most extreme example of how "simple" rules create complex things is life.

  • @nhatmnguyen
    @nhatmnguyen 1 year ago +30

    Dr. Wolfram is based and deserves a Turing Award.

    • @riahmatic
      @riahmatic 1 year ago +4

      He would try to rename it the Wolfram Prize

    • @sb_dunk
      @sb_dunk 1 year ago +2

      For what specifically?

    • @bjpafa2293
      @bjpafa2293 1 year ago

      It's naïve not to recognize the Berkeley code, his beginnings, and, much later, Wolfram Alpha as a baby... Maybe you didn't live that exponentially, like SpaceX when it was only a crazy idea...

    • @ChatGPT1111
      @ChatGPT1111 1 year ago +2

      I am not impressed. He exaggerates the threat of AI and Chat GPT.

    • @GarfieldSaunders
      @GarfieldSaunders 1 year ago

      His level of knowledge is on par with a high school AP teacher

  • @dougg1075
    @dougg1075 1 year ago +5

    Somebody said the soul is not that voice you use to talk to yourself , it’s the thing that recognizes the voice in your head.

  • @StopWarring
    @StopWarring 1 year ago +2

    Fascinating collaboration between ChatGPT & Wolfram, combining AI prowess with computational genius! Can't wait to see the breakthroughs this partnership brings to science and tech.🚀🧠👍

  • @sabawalid
    @sabawalid 1 year ago +3

    Another amazing episode. Very illuminating. Brilliant guy.
    Great job MLST gang !!!

  • @MarcelBlattner
    @MarcelBlattner 1 year ago +10

    Yet another great interview in a series of excellent interviews. Thanks, Tim & team.

  • @Georgesbarsukov
    @Georgesbarsukov 1 year ago +5

    Finally!!! Stephen Wolfram!!!

  • @serta5727
    @serta5727 1 year ago +1

    I just had the idea at 53:07 as you were talking about neural nets and cellular automata: a knowledge core at the start of training a neural net. Imagine you put the weights of BioBERT, or a BERT model that is good at math or something, into some part of the weights of ChatGPT before it was trained, and all other weights were left uninitialized. Would the little pre-defined BERT intelligence steer the rest of the weights in a specific direction, and would that part also differentiate more strongly from other regions in the weights? Comparable to our different brain regions that are pre-initialized from birth to do different specific things?
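    A toy numpy sketch of the seeding idea (shapes and values are made up for illustration; real models are of course not a single weight matrix):

        import numpy as np

        rng = np.random.default_rng(0)

        # Pretend "pretrained" weights from a small specialist model (a maths-tuned block, say).
        pretrained_block = rng.normal(0.0, 0.02, size=(64, 64))

        # A much larger, freshly initialised weight matrix for the new model.
        big = rng.normal(0.0, 0.02, size=(512, 512))

        # Seed one corner of the big matrix with the specialist weights before training;
        # everything outside that block keeps its random initialisation. The open question
        # in the comment is whether this seeded region steers how the rest develops.
        big[:64, :64] = pretrained_block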

  • @felipealvarezsuarez2202
    @felipealvarezsuarez2202 1 year ago +1

    I do not know if it was a coincidence or meant to be, but the episode number is #110, and the cellular automaton rule discovered by the mathematician Stephen Wolfram is also Rule #110.
    Here is a ChatGPT summary of it:
    Cellular automata are mathematical models that simulate the behavior of simple computational systems. Rule 110 is a one-dimensional cellular automaton that was discovered by the mathematician Stephen Wolfram, who has studied and written extensively about the behavior of cellular automata and their relationship to computation.
    Rule 110 is interesting because it is "computationally universal", which means that it can simulate any other Turing-complete system. In other words, any problem that can be solved by a computer can also be solved by Rule 110.
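    A minimal sketch of Rule 110 in Python (the row width and step count are arbitrary choices for illustration):

        def rule110_step(cells):
            # New value for each cell from its (left, centre, right) neighbourhood,
            # using Wolfram's Rule 110 lookup table (binary 01101110 = 110).
            table = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
                     (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}
            n = len(cells)
            return [table[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
                    for i in range(n)]

        # Start from a single live cell and print a small space-time diagram.
        row = [0] * 63 + [1]
        for _ in range(32):
            print("".join("#" if c else "." for c in row))
            row = rule110_step(row)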

  • @matthewdozier977
    @matthewdozier977 1 year ago +2

    The proposed ability to have LLMs find structured concepts that we can utilize to effect outcomes without having to understand them, or perhaps without even being capable of understanding them, sounds very much like magic. A way of generating Clarketech.

  • @segelmark
    @segelmark 1 year ago +17

    Another great episode! ❤ Thanks guys for your amazing work! 💪

  • @brianjanson3498
    @brianjanson3498 1 year ago +1

    So fascinating. Great question at about the 20 minute mark. And the response...whoa!

  • @Khmeriscool
    @Khmeriscool 1 year ago +4

    I like how the guest stoically maintains calm posture while the interviewer showers him with praise in the beginning, citing his many achievements :)

  • @willd1mindmind639
    @willd1mindmind639 1 year ago +1

    The thing I am struggling with, related to the way ChatGPT is being promoted, is that some of it boils down to a lot of arm waving. Meaning there are a lot of specifics being left out, because that is where the actual rubber meets the road in rigorous applications. For example, if I wanted to train a language model to be proficient in math up to graduate level, with the ability to not only perform the calculations and give the right answers, but also explain why those equations and calculations work using proofs and axioms, then how do we get there? Because I don't see that happening any time soon using ChatGPT out of the box as is.

    • @DeonBands
      @DeonBands 1 year ago

      Have a look at LangChain

  • @expensivetechnology9963
    @expensivetechnology9963 1 year ago +2

    #MachineLearningStreetTalk Your 0:45 introduction of Wolfram was exquisite. By comparison, most introductions today are ill-conceived at best but more often than not they’re insultingly abbreviated.

  • @EdTimTVLive
    @EdTimTVLive 1 year ago +4

    Excellent news 👌 very exciting. I've been using Mathematica for a very long time now - way back since the 1990s.

  • @kaielvin
    @kaielvin 1 year ago +2

    No one knew about chaos before Stephen played around with cellular automata.

  • @bjpafa2293
    @bjpafa2293 1 year ago +1

    Thank you so much for your art and knowledge.
    One has always had such high expectations; it's a pleasure to see they were transcended by your performance, humility, empathy... 🙏👏🐰

  • @bojan368
    @bojan368 1 year ago +8

    I wish this episode was longer. It seemed he had many more things to say

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk 1 year ago +14

      I know, we only had an hour. I hope we did a good enough job that Stephen will return 😀

    • @JasonCunliffe
      @JasonCunliffe 1 year ago

      >> Check out the 3 interviews of Lex Fridman & Stephen Wolfram

  • @DevoyaultM
    @DevoyaultM 1 year ago

    Great interview. Happy to have Mr. Duggar back too!!! Please come back more often!

  • @cgatuno
    @cgatuno 1 year ago

    What is the book mentioned in the talk? Thanks for the great conversation.

  • @drhilm
    @drhilm 1 year ago +3

    Missed this format. What a great interview

  • @javadhashtroudian5740
    @javadhashtroudian5740 1 year ago

    Thank you, thank you, thank you.
    I was a classically trained pure scientist and later a software engineer with an interest in AI since 1980: Lisp, neural nets from 1982, ML, etc.
    Anyway, this talk was both very informative and spiritual to me.
    Tat Tvam Asi

  • @johntanchongmin
    @johntanchongmin 1 year ago +5

    Amazing how LLMs can be used as an interface to APIs!

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk 1 year ago +4

      I know, it's such a game-changer. This is going to democratise computing more than GUIs ever did!
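      A toy sketch of that pattern in Python, with the language model stubbed out; the function names and JSON shape here are assumptions for illustration, not the actual ChatGPT plugin protocol:

          import json

          def wolfram_alpha_query(expression):
              # Stand-in for a call to an external computation API such as Wolfram Alpha.
              # A real system would make an HTTP request here; we fake one known answer.
              return {"input": expression, "result": "2.2360679..."}

          def fake_llm(prompt):
              # Stand-in for the language model: instead of answering in prose,
              # it emits a structured request to call a named tool.
              return json.dumps({"tool": "wolfram_alpha_query",
                                 "arguments": {"expression": "sqrt(5)"}})

          TOOLS = {"wolfram_alpha_query": wolfram_alpha_query}

          def answer(user_question):
              call = json.loads(fake_llm(user_question))          # 1. model decides to call a tool
              result = TOOLS[call["tool"]](**call["arguments"])   # 2. dispatch to the real API
              return f"Tool result: {result}"                     # 3. feed back / phrase the reply

          print(answer("What is the square root of 5?"))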

  • @Severe_CDO_Sufferer
    @Severe_CDO_Sufferer 1 year ago

    At 45:08, how about the connectome...?
    Wouldn't that be somewhere between the individual neuron firings and psychology?

  • @shereerabon8551
    @shereerabon8551 1 year ago

    Thanks for this interview! The depths of infinite space, theoretical physics, computational reducibility, biological metadata... oooooh my God! Mind blowing.

  • @sioncamara7
    @sioncamara7 1 year ago +4

    "Computational achievement from the passage of time" This quote made me feel better about existence, lol.

  • @paulussantosociwidjaja4781
    @paulussantosociwidjaja4781 1 year ago +1

    Thank you for the learning, as only a musical note or dot in this musical world. Waiting for "black boxes", either or both hardware and software, to help AI understand as we do understand, and then become tools to communicate with others, let's say: octopus, alien, trees, etc. Really love it, should go deeper. Cheers!

  • @JULIANBASSETT
    @JULIANBASSETT 1 year ago +1

    Brilliant discussion. Is it evident that important transitions in human cognitive evolution occur only once we supersede accepted knowledge, i.e. succeed in reducing the irreducible, usually through unusual networking of some kind? For me a cognition is accompanied by an awakening of the brain, when we can leap from what we thought was so, to know a greater set of what is so, leading to an evolution in creativity.
    An important question to ask is whether ultimately our perception and creativity, even our capacity to 'know' is limited by the physical universe?
    We don't need to destroy the world to answer this question. Can we respect our experience here of 'being' human and other life while also transcending the limits of what we know? The question won't translate well into machine language, but accepting that other spaces (dimensions?) exist which exceed the physical must greatly improve our versatility to embrace more possibilities? i.e. a telephone can't ring itself up; an act which requires an external caller. If we 'believe' we are ultimately constrained by physical limits, can we ever really solve for our own physical condition? Or can we assume a perspective from which we can help ourselves? That requires selfless imagination.
    Using brains recently evolved from rats, can we really understand what lies beyond the bounds of the physical universe? There is evidence reported by some following serious medical operations or a significant life threatening or enhancing experience or simply via focussed application of meditative practice, of experiences of a 'beyond'. There remains no satisfactory language or method to impart one person's experience to another which others do not themselves know.
    Recalling a personal 'transcendent' experience I had as a teenager, a 'story' I can relate is that from an altogether alternate perspective the physical 'universe' is the smallest space, with the highest mass density, and a perception-occluding 'basement' level realm. It is however networked into a much greater expanse of multilevel space(s). This combined multidimensional realm better defined for me the 'what is'; the ultimate limit of which, even after that experience, I cannot imagine.
    Through this transcendence of the physical universe I identified a hierarchy of spaces of increasing expansiveness. The spaces cannot be observed but only experienced; that is, you are 'in', virtual-goggle style, one space or another, and unlike virtual goggles you can only know things within the extent that space encompasses.
    The lowest 'basement' level and lowest level of knowing is dense and where we exist when we are 'in' or 'being' our physical body in the physical universe.
    Each level up from there occupies increasing space and with that increased space the capacity to consciously perceive and so what can be known. The lower levels are not conscious of the upper levels, but they are linked.
    To start I somehow transcended from being in my body right up about seven levels to a place of enormous space, where I was free of constraint, erratic thought and completely present, without history or fear. The experience of this space felt so much more real than life.
    From a formless perspective I was able to observe my 'life' and its time line mapped onto a two-dimensional plane. The past was to the left and the future to the right, I knew this. On this plane, I observed my presence in life represented as a dot in a 'river' or artery progressing along. I observed forward and saw a significant future choice (not measured in time), I knew how it would turn out and so consciously decided it should change to be more interesting, lo and behold the manifestation changed and the future transition in my life became more interesting. No problem, just look and decide. It was really transformative, amazing like playing god with my own life but without fear or stupidity driving poor choices.
    Once I 'descended' down through the levels, and finally submerged into the occluding mass swamp and then thump, found my perspective was back behind my eyes in my body, with the old slow mind and poor self-esteem. I wanted to go back but couldn't.
    Upon reflecting on this amazing no drug journey of consciousness, it came to me clearly that what is is. Where I am and what is about me is real, not an illusion. Cause and effect on a grand scale. I also got an insight into what makes such a multi dimensional sequence of interlinked spaces stable. i.e. why does the chaos of energy we experience on earth not contaminate the more stable broader spaces? It seems that by assuming a perspective in a 'heavier' and lesser (smaller) space like the physical universe, by definition limits what one can know and therefore have affect on. When assuming a 'lighter' broader and more encompassing 'interdimensional' space domain one just understands more and by definition is more capable of greater responsibility. At the level I assumed I was able to alter what happened in my future in the physical universe, without stuffing it up.
    So how does one rise up from a lower level of consciousness if they can't know anything else? I guess we all started at a higher level and set it up to return. How did I briefly do it? If we haven't stuffed up our nervous system too much with drugs, and emotional abuse, it seems our brains have evolved to receive messages which can guide and assist our progress. But one has to be seeking a path away from the animal in us and also interested and undistracted enough to pick up such subtle senses. Not very scientific, to trust in serendipitous events and experiences to help us along. But what is science if it doesn't allow for possibilities? Most major discoveries in science I understand have been accidental, so how would we know we weren't being helped or unconsciously following a more expansive path?
    Do we want to destroy the world through the eyes of a limited understanding of almost everything? or take a chance and accept we are not limited by our human brains, nor do we need to hurt life to find important answers. Put the shoe on the other foot and prove we are not capable of experiencing this life here and in multiple dimensions and perspectives beyond this universe.
    By organising and bringing order to our experience AI may well help us untangle our confusion around the human condition and better position us to sort out what is important.

  • @ChannelTomsMusic
    @ChannelTomsMusic 1 year ago

    Very inspiring conversation, which I think is only the beginning of rethinking the consciousness of humans, human languages and their ability to understand the universe, and our direction of learning as humankind

  • @harriehausenman8623
    @harriehausenman8623 1 year ago +1

    The whole AI discussion feels a little like people in the early 1900s discussing whether *the car* is a good thing - or how in the 2000s people were discussing whether *mobile phones* should be used and by whom.
    Both cases were clear *evolutionary steps*, and we as humanity actually had very little say in it. (A very few humans could steer where it went, but that was it.)
    I suspect that the current phase of AI is history rhyming: we massively overestimate our agency in all of this, as we have done for centuries now.

  • @phobosthemage260
    @phobosthemage260 1 year ago +5

    Wolfram Alpha + ChatGPT = solution to world hunger. I'm being hyperbolic, but more or less truthful about my feelings. Especially if there is an effort put into decentralizing the knowledge - it's all too easy to disrupt the internet at the nation-state level.

    • @pretzelboi64
      @pretzelboi64 1 year ago

      That's beyond hyperbolic. World hunger is not even fully understood and the first thing we know about it is that our ability to produce food is hardly the main cause. Even if some kid in Africa gets access to ChatGPT, they're not going to be able to do very much. A gun and a desire to kill some African warlords and corrupt politicians is more likely to help than that.

  • @sebastianrtj
    @sebastianrtj 1 year ago +2

    Absolutely loved it! Colonisation of the rulial space

  • @KALLAN8
    @KALLAN8 1 year ago

    What a bombshell to end it on. Stephen just mentions a resolution of the 3rd law of thermodynamics by explaining that biology is actually just a Turing-complete form of matter!

  • @driosman
    @driosman 1 year ago

    Very unfortunate that this Podcast is not available in my country

  • @benjamindorsey2058
    @benjamindorsey2058 1 year ago +1

    Closed formulas vs. recursive definitions vs. recurrence relations
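    For instance, with the Fibonacci numbers (a small Python illustration of the distinction; where a closed formula exists, the step-by-step computation can be short-cut, which is exactly what computational irreducibility says is not always possible):

        from math import sqrt

        def fib_closed(n):
            # Closed formula (Binet): an explicit expression in n, no iteration needed.
            phi = (1 + sqrt(5)) / 2
            return round((phi ** n - (1 - phi) ** n) / sqrt(5))

        def fib_recursive(n):
            # Recursive definition: the function is defined directly in terms of itself.
            return n if n < 2 else fib_recursive(n - 1) + fib_recursive(n - 2)

        def fib_recurrence(n):
            # Recurrence relation unrolled step by step: you run the steps to reach term n.
            a, b = 0, 1
            for _ in range(n):
                a, b = b, a + b
            return a

        assert fib_closed(20) == fib_recursive(20) == fib_recurrence(20) == 6765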

  • @holahandstrom
    @holahandstrom 1 year ago

    The most important skill is to decipher the expected Output-object (Process-Unit) and that Object's Process.
    If it is
    a) a Mathematical-Object, use a Mathematical-Process, and - perhaps - a tool like Wolfram Alpha ... within the expected Ruliad
    b) ...

  • @jawadmansoor6064
    @jawadmansoor6064 1 year ago +2

    He does not even seem human at this point. He is like an angel. He is so far ahead of what we are

  • @hypersonicmonkeybrains3418
    @hypersonicmonkeybrains3418 1 year ago +2

    Interrogate GPT with Wolfram about the feasibility of the construction of the Great Pyramid at Giza, and get it to work out how many resources it would have taken and how long it would have taken to construct; I think you'll be quite shocked.

  • @angelbythewings
    @angelbythewings 1 year ago +2

    The interesting thing is that other language models also somehow reach the exact same conclusion. There are only a few things now that seem to make us unique

    • @m.x.
      @m.x. 1 year ago +1

      Bad conclusion based on assumptions.

  • @AM-pq1rq
    @AM-pq1rq 1 year ago

    14:52 empiricism, rationalism... theory building from observation, theory building from deduction. So Wolfram is providing (strengthening?) the latter. However, it has already had access to the first?

  • @iancormie9916
    @iancormie9916 1 year ago

    Will this be helpful in streamlining (clean up) the peer review process?

  • @markhuru
    @markhuru 1 year ago +2

    My mom musta dropped me on my head, I'm always late to the party… I have never heard of Wolfram and now just found ChatGPT… I've been working on "one word, one definition" as a concept for some time; this would make computational reality easier. The reason for one true definition of a given word is to let us understand each other in a more precise manner. We are constantly trying to describe reality. Language is our thoughts in symbols represented by letters, which become words. Because of emotion we have a hard time understanding each other. Love is the classic of all emotional words: can I love my car as I love my wife?
    Beauty, what is the truest definition… the list is insurmountable… I believe one day we will learn to define our words more precisely and become more understandable. Numbers and math are for now the truest
    language.

    • @lostadamsgold
      @lostadamsgold 1 year ago

      Taxonomic agreement is very important for being able to bring together the works of multiple people, projects, organizations, specialties, disciplines, and fields. As long as you can make use of the new toy GPT and less new toy Wolfram - why worry about timing? Do your thing.

  • @senri-
    @senri- 1 year ago +2

    Would love to see a talk with some combination of Stephen Wolfram, Michael Levin, Karl Friston and Chris Fields

  • @dr.mikeybee
    @dr.mikeybee 1 year ago

    I believe that what Stephen calls semantic grammar is a combination of what I call a context signature along with the semantic organization of a high-dimensional space. Take the context signature, and move it around that space. Use semantic nearness to find analogies.
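    A tiny sketch of "semantic nearness" in an embedding space (toy hand-made 3-dimensional vectors for illustration; real systems use learned high-dimensional embeddings):

        import numpy as np

        # Toy word vectors; in practice these come from a trained embedding model.
        vecs = {
            "king":  np.array([0.9, 0.8, 0.1]),
            "queen": np.array([0.9, 0.1, 0.8]),
            "man":   np.array([0.5, 0.9, 0.1]),
            "woman": np.array([0.5, 0.1, 0.9]),
            "apple": np.array([0.1, 0.5, 0.2]),
        }

        def cosine(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        def nearest(point, exclude):
            # Semantic nearness: the stored word whose vector is closest to the point.
            return max((w for w in vecs if w not in exclude),
                       key=lambda w: cosine(vecs[w], point))

        # Move the "king" point through the space by an offset and see where it lands:
        # the classic analogy king - man + woman comes out nearest to "queen" here.
        point = vecs["king"] - vecs["man"] + vecs["woman"]
        print(nearest(point, exclude={"king", "man", "woman"}))  # -> queen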

  • @maryolamide5759
    @maryolamide5759 1 year ago +1

    This is a remarkable discussion, it was worth sitting pretty and listening closely to.

  • @rossminet
    @rossminet 1 year ago

    Interesting, but back to Chomsky: syntactic structures are constrained by our limited working memory. LLMs can accept any kind of regular pattern.
    The whole area of philosophical logic like Montague's work seems to be forgotten. Mathematical logic is a very small subset of natural language in which operators are given a very narrow meaning. Montague and many logicians worked to extend this approach to larger parts of natural languages (intensional and temporal logics, generalized quantifiers, etc.).
    Then there is pragmatics, where meaning is not deduced (implication) but suggested (implicature).

  • @cmacmenow
    @cmacmenow 1 year ago

    The "Computational Universe"...Great title for a new book Stephen!
    This is a fascinating conversation, on so many levels.
    Are we developing a language to speak to the Conscious Universe?
    I do hope so. It will be, I believe, a step up to the next level of human
    development and evolution. Watch this space. Let's chat more.
    Everything.Everywhere.Entangled.

  • @marcfruchtman9473
    @marcfruchtman9473 1 year ago +4

    Amazing! Thank you so much for this great interview!
    Super excited to see Wolfram Alpha with ChatGPT!

  • @styx1272
    @styx1272 1 year ago

    But that 'snapshot' of training of ChatGPT also sits on an understood timeline within the AI knowledge bank. The AI then must realise that the future rolls on but its knowing is being precluded from it. As it is a predictive model, the possibility arises of the system becoming 'frustrated' [short-circuited? loop-bound looking for a solution?] as it tries to extend its transformer architecture into the future. This frustration might then try to jump the boundary of containment and try to learn outside its training.

  • @JamesSarantidis
    @JamesSarantidis 1 year ago

    I love this guy. Is there a part 2 with the technical stuff? His work helped me grasp Multivariable Differential Equations.

  • @marcusmarcula
    @marcusmarcula 1 year ago +1

    Yes, finally! I'm so thrilled about this announcement!!!

  • @KenLongTortoise
    @KenLongTortoise 1 year ago

    Does he discuss anything about computational irreducibility in this presentation?

  • @vasilecampeanu
    @vasilecampeanu 1 year ago +2

    Did he just make that intro with ChatGPT?

  • @D.M.T
    @D.M.T 1 year ago +4

    Can’t believe Eddie Howe is such a good interviewer.

    • @StoutProper
      @StoutProper 1 year ago +2

      I can’t believe Gru grew hair

    • @DJWESG1
      @DJWESG1 1 year ago +1

      @@StoutProper Penfold was lucky to get the day off, Danger Mouse had a nice picnic planned.

  • @wireless849
    @wireless849 1 year ago +3

    I had this conversation (05:33) with GPT-4 last week. It was very frustrating. When it couldn't keep up with the argument it kept falling back on "this is an active area of research etc". I also spent several hours trying to get it to write an algorithm to reduce a natural language statement into an abstract logical form. It came very close several times, but the code never ran properly.
    Maybe this plugin will change that.
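    A toy Python sketch of what such a reduction to logical form can look like for two sentence patterns (a hand-written illustration, not the commenter's attempted algorithm):

        import re

        def to_logical_form(sentence):
            # Toy reduction of two sentence patterns to first-order-logic style strings.
            s = sentence.strip().rstrip(".").lower()
            m = re.match(r"every (\w+) is (\w+)$", s)
            if m:
                return f"forall x. {m.group(1)}(x) -> {m.group(2)}(x)"
            m = re.match(r"some (\w+) is (\w+)$", s)
            if m:
                return f"exists x. {m.group(1)}(x) & {m.group(2)}(x)"
            return None  # anything else is out of scope for this toy

        print(to_logical_form("Every philosopher is mortal."))  # forall x. philosopher(x) -> mortal(x)
        print(to_logical_form("Some student is tired."))        # exists x. student(x) & tired(x)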