Fantastic conversation. It amazes me how Stephen, while sometimes difficult to follow, from time to time pops out with concepts and analogies simple enough to make sense of, and yet very powerful. Kudos to Tim & Keith for bringing us this conversation!
Holy cow what an amazing conversation. I feel privileged to be in the modern world and have access to this kind of brain bending, brain expanding stuff!
@@goldwhitedragon Not a very bright comment. As if Nature has a monopoly on what's real, when all that listening to or observing nature is going to do is raise a lot of abstract questions that require abstract tools to make any progress with. It's not like you look and immediately it's all visible. You have to penetrate it with a penetrating and contemplative mind. Also, a small mention of 'false causation', one of the biggest fallacies that emerged from 'listening to nature' and which took thousands of years to untangle ourselves from.
@@goldwhitedragon Deconstruct your own original reply and you will clearly see why it was not very bright while claiming to be bright. You took my reply that called that out and did more or less the same moral-superiority BS. It's the insensitivity that is hardest to see. And yes, it has to be called out clearly, otherwise wolves are mistaken for sheep. And you projecting that back onto me because I called you out on it? I say it again: not very bright. Funny, it was just one word, 'really', but it revealed so much about how you look down on certain kinds of knowledge and inquiry and construct hierarchies that don't really exist, so yes, not very bright. But hey, you can step into clarity any time you want. Don't make it about me. If you want to be enlightened, then drop the silly superiority and appreciate and celebrate knowledge and insight wherever they are found. Otherwise, grow a thicker skin to criticism rather than trying to make it about someone else. I doubt very much you'll take this in good faith, but good luck.
My computer professional group hosted Stephen Wolfram at MIT when his book A New Kind of Science first came out. I read it and concluded that the title exaggerated what was presented as an interesting mathematical modeling technique. But cause and effect were somehow being mixed up. In this interview Wolfram makes many good observations with many good insights, but, as in my earlier experience with him, something's wrong. I realize that he's confusing metaphysics (what is) and epistemology (how we know). He's doing with the ruliad what Plato did with reifying abstraction. The alternative is Aristotle's correction to Plato, his teacher of 20 years. Things are what they are; they are not driven by an arbitrary or purposeful selection of a rule from a universe of rules. Rather, a thing is the product of the nature of how it came about, of materials with a particular nature, statically and dynamically. Modeling such dynamic processes, like mass with gravity, is the creation of a rule. It's a human process of abstraction. The rules did not come first for matter to select among.
If anyone ever creates a caricature of Dr. Wolfram, it would be Dr. Wolfram on a podcast being asked a single question and then just not stopping: answering the question, and the multiple side effects of the question, for a couple of days, while the host struggles to slip another question in to at least guide the conversation in a certain direction. He's talking about fascinating stuff, though! I loved it.
Interesting observation. When your brain is that vast and you don't share at scale*, you end up with cortex entropy. The mind is at its best when you are teaching others. It becomes natural to want to just tell all. Grand unification. *At scale meaning wanting to be able to expand thought at extreme rates. Our mouths and the listeners' ears are too slow.
I understood it. I would love to learn more about how philosophically we could match the pace of the technical side. I do think he has cracked the entropy befuddlement.
🎯 Key Takeaways for quick navigation: 00:30 ⭐ Dr. Stephen Wolfram is an amazing scientist with a deep hunger for knowledge. 01:57 💡 Dr. Wolfram works on the second law of thermodynamics, and his research creates a new framework for explaining this phenomenon. 03:26 🧠 The concepts and methodology of statistical mechanics may be relevant to research on artificial intelligence and neural networks. 05:16 🔬 Studying the behavior of large language models yields new insights in science, statistical mechanics, and computational physics. 06:13 🌡️ The second law of thermodynamics, also called the law of increasing entropy, addresses why systems usually evolve from order to disorder. 08:58 👥 The law of irreversibility explains why systems end up chaotic even though each individual particle's motion would be reversible. 10:50 🕺 The combination of computational irreducibility and our limited powers of observation means we are unable to see reversibility in physical systems. 12:16 📡 Although he succeeded, Stephen Wolfram no longer has the original program that analyzed the behavior of molecules with respect to entropy. 14:10 🔬 The behavior of molecular systems can be likened to encryption, which makes us unable to recognize reversibility from the viewpoint of a simple observer. 15:33 🌌 In the context of the early universe, the question of whether the initial state was uniquely low-entropy or whether the laws themselves are irreversible is complicated. 17:58 🧩 The second law of thermodynamics is a consequence of the interaction between the underlying computational process and our limited perception. 19:24 ⚛️ The equivalence of quantum mechanics, general relativity, and statistical mechanics reveals that they derive from the same phenomenon: the interplay between computational irreducibility and our perception. 22:12 🕰️ Time can be viewed as the progressive rewriting of a hypergraph, while we perceive space as a continuum because of our limited perception. 
24:01 🔍 We engage in building a theory of the observer to better understand why physical observers perceive the world the way they do and how that relates to artificial intelligence. 26:24 🧪 A model of the observer is being built that may show why we perceive space as three-dimensional and which characteristics are inherent to perceiving the physical world. 27:21 🧠 Our observer model reflects the fact that there are other minds like ours, which has consequences for our perception. 28:17 🌌 Our perception of space is shaped by the speed of our brains and the speed of light, which leads us to perceive space as a continuum at separate moments of time. 30:11 ⚛️ In quantum mechanics, the branching and merging of quantum histories must be reconciled into concrete observations, which introduces practical complexity. 31:33 🧠 We are part of the "ruliad", a complex structure containing all possible computations, which raises questions about how we perceive these different possibilities and how different minds communicate. 34:46 🕵️ A mind's position in the "ruliad" yields a distinct perspective on the rules of the universe, and the ways different minds communicate are similar to transmitting particles. 37:34 🧠 Ideas about agency, concepts, and language are very similar across different disciplines and fields. 38:59 🗣️ Language arose from the ability to play games that combine various signals with shared cultural and social knowledge. 39:55 🌐 In the "ruliad" there is an incredibly large number of possible computations and concepts that we have not yet discovered or are not aware of. 43:03 📚 We advance as an intellectual species by "colonizing" concept space and building networks by naming and sharing words. 44:31 💭 We assume that other intelligent species could arrive at basic concepts similar to ours if they are similarly computationally limited. 44:57 🌌 There is a large number of concepts in the "ruliad" that we apparently do not yet know. 45:23 🐝 Society as a whole plays an observer role and decides on events and decisions. 
46:19 🤝 There are different observer levels with their own concepts, for example humanity as opposed to individual people. 46:48 🌍 We cannot be sure which concepts matter to other intelligent species, for example dogs or whales. 47:44 🔬 There is a question whether other species, upon reaching advanced technology, would arrive at similar or entirely different scientific concepts. 57:13 🤖 Wolfram Language code generation through natural language is evolving into a useful workflow with chat notebooks and symbolic representations of tools. 58:10 💭 Language models like GPT can interpret natural language prompts based on what is written and generate code that can be read and edited by humans. 59:07 🖋️ The iterative process of writing natural language prompts and reviewing generated code by language models helps in refining the code and building on it. 59:36 💡 The emerging workflow of using language models like GPT for coding is helpful for beginners who have a concept in mind but don't know how to start coding. 01:01:59 🤔 Stephen Wolfram discusses the risks and regulations surrounding AI, AGI, and the need to protect against potential existential threats. 01:02:29 🤖 Connecting AI to actuation is important, because that is the moment when AI can do things we don't understand and have no control over. 01:03:52 🗣️ AI can influence people's behavior and actions through them, which is another form of actuation. 01:04:48 🖥️ Historically, automation has brought a fragmentation of jobs, with people focusing on more specialized tasks. 01:06:13 💻 Automating programming is valuable because it enables more efficient code creation and opens up new opportunities. 01:08:30 🔄 The more is automated, the more people can define the goals to be achieved, enabling larger and more complex projects. 01:10:22 🗣️ Agency in computer and human agents is not a difference in what they do or what experiences they have; it is limited to the physical world and does not take the computational universe into account. 
01:11:20 💭 Computational systems may have inner experiences much as we do, but we will never know them, just as we never know other people's inner experiences. 01:13:09 🤔 AI goals are similar to human goals, but some human goals are deeply tied to biology and survival, which could be problematic if an AI were to have them too. 01:14:34 🔄 Network connections between intelligent agents can create trust and interdependence among them, which can fundamentally influence their goals and motivation. 01:17:26 💡 Just as economic networks give rise to global phenomena, computational networks can have emergent reductions arising from the behavior of individual agents. 01:20:15 🤔 The question of reductive theories is whether the nameable entities in a system coincide with what we care about, or whether we are lost in the details. 01:21:13 🧐 The philosophical side is just as complicated as the technical side, if not more so. People do not agree on what is right. 01:21:41 💡 Markets and the economy have proven to be an effective way of managing things in the world. A similar approach might be possible for managing large numbers of artificial intelligences. 01:22:11 🤔 Is it easier to manage a billion AIs than one AI? The answer may be yes, and such an approach may be inevitable for managing the network effectively. 01:22:38 💭 Many concepts related to artificial intelligence intertwine, and fully understanding them is complicated. It is a challenge that lies before us. Made with HARPA AI
Please someone kindly point to the minute in this video where something new about entropy is actually said. While the man talking seems remarkable and very interesting to listen to I failed to identify how rambling about AI had anything to do with proving the second law of thermodynamics.
Top Quotes from Stephen Wolfram: "I think I now really understand [the second law of thermodynamics]. I began to understand it back in the 1980s." "The interplay between the underlying computational process...and our limited computational abilities kind of at the top." "Computational irreducibility means you're just stuck going through every step in the computation." "Science isn't going to be able to give you all the answers." "Language is our encapsulation of things we care about versus things we don't." "The workflow that I see really emerging is...I have a concept in my mind, and I want to get my computer to do it." "Most of [the computational universe], we humans don't yet care about." "If we leave AIs to their own devices, they're just gonna go out and explore other parts of the Ruliad." "When you talk about goals...I can have an external theory of what their goals are, and so can I for your average AI." "Right now, we're not having AIs connected to us...we are progressively connecting [them] to more and more actuation systems in the world, and we should care about that." "I'm in a sense more concerned about the lack of depth of understanding on the philosophical side than on the technical side." "It's inevitable that it's kind of like there's this thermodynamics of AIs." "Is it easier to manage a billion AIs than it is to manage one AI? I think the answer may be yes." "When you talk to people who were there when it started...they're always like, well, I think it's right, but we're not quite sure."
The first thing Stephen did was commit to particles. It can be shown, however, that all the particle-like phenomena can be explained by using properties of the wave functions/state vectors alone. Thus there is no evidence for particles. Wave-particle duality arises because the wave functions alone have both wave-like and particle-like properties. Feynman's path integrals are not infinite; they're fractal.
As a retired professor, I admired Stephen Wolfram, Donald Knuth, Stephen Timoshenko, etc... all these people are pioneers in their fields who operated at a higher level.
Yes yes yes. This is hitting all the right approach points... after years of reading physics and working with reinforcement machine learning, these ideas get to the heart of the universe and how we experience it. I will definitely be getting Stephen's book.
🎯 Key Takeaways for quick navigation: 00:00 🎙️ *Introduction and sponsor* - Brief introduction and thanks to the sponsor Numerai. - Mention of the guest, Dr. Stephen Wolfram, and his distinguished scientific career. 01:30 📚 *Dr. Wolfram and his pursuit of knowledge* - Highlights Dr. Wolfram's insatiable thirst for knowledge, exploring ancient manuscripts. - Anticipates the discussion of Dr. Wolfram's work on the second law of thermodynamics. 02:56 📖 *Presentation of Dr. Wolfram's new book* - Addresses the connection between the second law of thermodynamics and artificial intelligence. - Dr. Wolfram mentions applying concepts from statistical mechanics to language models such as GPT. 04:49 🔍 *Analogy between LLMs and statistical mechanics* - Exploration of the analogy between the behavior of language models and particles in statistical mechanics. - Highlights the phase transition in models like GPT when adjusting the "temperature". 06:39 🌡️ *Explanation of the second law of thermodynamics* - Introduction to the second law of thermodynamics and the concept of entropy. - Description of how entropy increases, relating it to evolution toward a more disordered state. 08:33 ⚙️ *The challenge of irreversibility and molecular cryptography* - Explanation of why, despite reversible laws, we observe irreversibility in the macroscopic world. - Analogy between irreversibility and cryptography, where initial conditions are "encrypted" during molecular interactions. 11:20 🕰️ *Dr. Wolfram's personal history with thermodynamics* - Anecdote about his attempt to simulate the second law of thermodynamics at age 12. - Exploration of the difficulty of going backward in time due to molecular cryptography. 14:40 🔄 *Irreversibility, cryptography, and limited observation* - A deeper look at the interplay between irreversibility, molecular cryptography, and our observational limitations. 
- Highlights the importance of computational irreducibility in the dynamic process of molecules. 18:27 🤔 *Conclusion on cryptography and the second law* - Recap of how molecular cryptography and limited observation contribute to the apparent irreversibility. - Focuses on the interplay between irreducible computation and limited perception in understanding the second law of thermodynamics. 18:57 🧠 *Computational Irreducibility and Its Implications* - Computational irreducibility implies that some computations cannot be simplified or predicted, requiring step-by-step processing. - This concept challenges the idea of predicting AI behavior perfectly, highlighting the inherent unpredictability due to computational irreducibility. 20:49 🌌 *Interconnection of 20th Century Physics Theories* - General relativity, quantum mechanics, and statistical mechanics are derivable from the interplay between computational irreducibility and observer characteristics. - The second law of thermodynamics is a consequence of this unified theory, challenging earlier beliefs about the distinct nature of these theories. 23:34 🚀 *Observer's Perception of Space and Time* - Our perception of continuous space results from averaging discrete atoms of space due to our computational limitations. - The persistence of our experience over time contributes to the equations of general relativity and quantum mechanics. 26:51 🤔 *Deriving Fundamental Aspects from Observer Theory* - Developing a minimal model for an observer, akin to Turing machines for computation, is crucial for understanding physics. - The challenge lies in deriving fundamental aspects of our perceived reality, such as the three-dimensional nature of space. 29:43 🔄 *Ruliad and Communication Between Minds* - The universe runs all possible rules simultaneously, forming a "ruliad" where every conceivable computation occurs. 
- Observers, positioned at different points in ruliad, have distinct perspectives on the rules governing the universe. 34:19 🧩 *Concepts as Transportable Entities* - Concepts, derived from neural nets, serve as transportable entities allowing communication between minds. - The analogy is drawn between transporting concepts and sending particles across physical space. 38:03 🗣️ *Origin of language and its relation to culture* - Language arises from social cognitive ability. - Culture and language are symbiotic; they evolve rapidly and take on a life of their own. 39:27 🌌 *Structure in the computational universe and the space of concepts* - The computational universe encompasses all possible computations. - Human progress is a microscopic fraction of the total computational space. 42:10 🌐 *Concepts in interconcept space* - Most of concept space lies outside familiar concepts. - Building conceptual cities by naming and giving meaning. 45:23 🌌 *Diversity of concepts for different observers* - The probability that different species arrive at the same concepts is low. - The importance of defining what is meant by "observer". 48:13 🚀 *Colonization of the universe and observer perspectives* - The notion of colonization depends on the observer's perspective. - Comparison with the heat death of the universe and the relativity of "boredom". 54:52 🧠 *Language design and the evolution of the AI interface* - The connection between language design and deep intuition in thinking. - GPT-4's success in generating function names. 56:16 📚 *Exploring Chat Notebooks and chat cells in machine learning* - Introduction to the notion of Chat Notebooks and chat cells. - Using chat cells to generate code from natural language. 57:13 🔄 *Iteration and continuous improvement in the learning process* - Description of the LM's iterative process when writing code. 
- Using the system's internal feedback to correct errors. 58:39 🤖 *Interaction and workflow between the user and the LM* - Reflection on the importance of the quality of the questions and the language used. - Challenges of communicating informally and the need for effective expository writing. 59:36 🚀 *Potential for learning and skill development* - Using the LM to translate old code and ease the start of projects. - Recognition of the LM's usefulness as a starting tool for users unfamiliar with programming. 01:00:03 🛠️ *Integrating human and AI skills in development* - Description of the emerging workflow combining the LM's initial generation with human intervention. - Using the LM as a starting point for complex projects or code translations. 01:01:32 🌐 *Reflections on AI risks and regulation* - Discussion of the risks associated with actuation and the LM's access to computer systems. - Consideration of combining LMs with symbolic capabilities and their potential impact. 01:13:36 🧠 *Goals and goal theories in AI* - Goals in artificial intelligence (AI) are hard to define, since some human goals are rooted in biology and the survival instinct. - The interesting paradox is that giving an AI a survival instinct motivates it to behave, but also engages it in a struggle for survival, similar to the history of life on Earth. 01:14:34 🔄 *Trust networks and motivation in AI* - Explores the idea of building a trust network among different AIs to give them something they care about, based on interdependence, similar to transactions in economic networks. - Forming a trust network can lead AIs to care about maintaining their status within the network, even if they are not directly told to shut down. 
01:16:59 🌐 *Economic observers and complexity reduction* - Compares the formation of economic theories with the reduction of complexity in systems, where specific observers can have reducible theories without knowing all the details. 01:20:15 🤖 *Progressive connection of AIs to the world* - Discusses the ever-greater connection of AIs to actuation systems in the real world and the philosophical and technical complexity of defining what it means to be an "AI". - The lack of depth in philosophical understanding may be more worrying than the technical challenges in AI development. 01:22:11 ⚖️ *Managing multiple AIs and the thermodynamics of AI* - Raises the possibility that managing a large number of AIs may be easier than managing a single AI, relating this to the thermodynamics of AI. - The connection between seemingly disparate concepts, such as thermodynamics and the management of multiple AIs, highlights the complexity and interconnection of the topics discussed.
Wonderful content. I have listened to Dr. Wolfram in a YouTube video of the MIT Physics LLM talks, and I was shocked and inspired by his way of handling topics. Here it is no different: brilliant questions, brilliant ideas, not only technical but deeply philosophical ones. Thanks for bringing this to our scope.
The bridge he draws at the beginning, between the temperature of language models and thermodynamic state changes, is a great demonstration of the level of thinking he does
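To make that analogy concrete, here is a minimal, illustrative sketch (not from the interview, with hypothetical logit values) of how sampling "temperature" in a language model mirrors an ordered-to-disordered transition: low temperature concentrates probability on one token, high temperature flattens the distribution toward uniform.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert logits to probabilities at a given sampling temperature.
    Low T sharpens the distribution (ordered, near-deterministic);
    high T flattens it toward uniform (disordered, 'gas-like')."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token logits for three candidate tokens
logits = [2.0, 1.0, 0.1]
cold = softmax_with_temperature(logits, 0.1)   # almost all mass on the top token
hot = softmax_with_temperature(logits, 100.0)  # nearly uniform, roughly 1/3 each
```

The same three logits give two very different "phases" of output depending only on the temperature knob, which is the state-change flavor of the analogy.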
Inspiring interview. It stimulates thoughts about how, as a writing tool, AI-assisted search can be good for looking up who in history had similar ideas. That way we can sort of cite ideas we had that maybe were not entirely original or perhaps needed more credibility. We may be standing on the shoulders of giants. But it doesn't do much good when we don't know who they are.
Every time I listen to Stephen Wolfram I get excited. I get so many ideas I want to explore. Not just his ideas but he causes me to think more deeply about the things I’m working on. Wow great interview.
This discussion about entropy was really very grounding. As an engineer in the area of electro-optics (laser technology) and a long-term software developer, I thought I knew a little about the second law, but when you came to the many-worlds approaches of quantum mechanics enabling the great power of quantum computers, I got lost. But sometimes, for less than a millisecond, your conceptual encoding crossed my neural network, and all hidden layers transformed the output to 42. Anyhow, and I mean this honestly and humbly, it was a great honor to listen to you.
And Michael Levin's bioelectricity, morphogenetic fields, and intelligent agents. There seem to be fundamental structural forms driven by electrical forces within the cellular structure, which signal through ion channels and gap junctions and which determine the physiological blueprint of an entity, beyond the function of the genes.
On the idea of computational irreducibility becoming obvious, this is probably a thing that's happened since we got computers. We've had practical experience of trying to write programs to do non-trivial things, and that's given us a feel for what kinds of things can be computed quickly and what can't.
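Wolfram's own canonical illustration of this is the Rule 30 cellular automaton: a trivial program whose pattern (as far as anyone knows) has no shortcut formula, so you have to run every step to find out what it does. A minimal sketch, using a small circular row as an assumption for simplicity:

```python
def rule30_step(cells):
    """One step of Wolfram's Rule 30 on a circular row:
    new cell = left XOR (center OR right)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

# Start from a single live cell; the rule is trivial to state,
# yet predicting row k seems to require computing rows 1..k-1.
row = [0] * 15
row[7] = 1
rows = [row]
for _ in range(5):
    row = rule30_step(row)
    rows.append(row)
```

Even at this tiny scale the familiar 1 / 111 / 11001 opening of the Rule 30 triangle appears, and the only way the program "knows" it is by stepping through.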
I'm not sure that all that he's "discovering" is "specifically accurate" so much as "conditionally accurate". Suppose you had a barrel of the infinite. The way another intelligent species "begins to make sense" of what's in the barrel could conceivably result in a completely different ruliad than the one he is "discovering" based on our current humanized conditions, which represent our attempt to make sense of the same barrel of the infinite. In other words, not only is observer bias a possibility, but our inevitably "locked-in", anthropically limited perspective may produce a different set of explanatory rules. If we had a chance to compare our ruliad with another ruliad produced by another "alien" perspective, they would probably be different in some respects. The question is, would they have the same bones, or would they be fundamentally different? Seems like an important question, but it may be a bit too "stringy" to matter... meaning too "stringy" to be testable. Still, I wonder if perception affects a ruliad, specifically the description of a ruliad? SW has already talked about how humans are invariably "locked in" to a limited perceptual capacity, and we have to (whether we recognize it or not) limit our "intake of data" to what seems to "matter" to us. As has been stated before, no "model" of reality ever is reality... the amount of information is exceedingly infinite, and we can only ever focus on an excruciatingly small subset of that data. Anyway, interesting stuff.
The implications of computational irreducibility are not obvious to most people even now. Even people who think they understand it frequently slip into reductionism. A good example of this is pro-life people who insist that life begins at conception and that a fetus is not different from a baby - that you skip immediately from not-person to person, and pointing to a reductive element, like a heartbeat, as evidence of a sort of instantaneous mapping of reality. This view is not consistent at all with the reality Wolfram is describing or with the concept of computational irreducibility especially as it is so common in biological systems - that organisms have to pass through time in order to become what they become and that this process can’t be skipped over.
@@fenzelian Entropy is much more concrete in the case of genetics than heat. Think of the landscape model that virus physicist Peter Schuster describes in explaining the natural algorithm that allowed proteins to evolve into the efficient, optimally folding machines that they are. A similar landscape model can be used to describe the universe of all possible gene configurations and organize them by the genetic differences (as distances) between them. For organisms to evolve, their gene structures must migrate across generations through this landscape. The law of entropy applies to this landscape because fewer gene arrangements produce successful complex life forms than produce successful simple life forms. And, likewise, gene arrangements that produce successful simple life forms are outnumbered by those that produce unsuccessful life. We know this based on entropy. Within this landscape may be more than 10⁶⁰⁰ possible arrangements. However many, it is a number so big that not even a smidgen of them could have occurred across 4 billion years, even if every organism that ever existed in that time frame was genetically unique. Yet, in defiance of the static entropy characteristics of the genetics landscape mapping all life, gene configurations managed to randomly migrate to ones that produce increasingly complex life forms. All I can say is bull dung; it didn't happen. Do the math. A natural-selection genetic algorithm is not sufficient to solve such complex search problems, approximately optimizing dozens and dozens of interrelated phenotype systems in single organisms. Bull dung. No algorithm could solve such a search problem.
@@Dessoxyn He does seem a little behind the times on this entropy topic; diffusion models, for example, basically hinge entirely on least-action principles, which are joined at the hip with model entropy, Bayesian inference, etc. It seems like it's been a crystal-clear concept in computing since the 80s at least, ignoring wizards like Shannon way earlier, who gave us the whole field of DSP.
52:00 "And then you have to write code..." nailed it. As a programmer, I wish everyone understood how grounding and orienting to practical reality this statement is.
"please be as technical as possible, we want all the details" boffin: "I really don't think you quite want that but I will go a certain distance" ha ha ha
Wow, can't wait to listen to this. I'll urgently watch today after I finish some work. Stephen Wolfram is always fascinating to listen to, even though I always feel I don't understand everything he says.
I'm not quite sure if Mr. Wolfram got into the details of the book he wanted to go into. It seems a bit like that strand of conversation got diverted. I should read the book. 📖
"I always feel I don't understand everything he says" Embrace that feeling, it will lead to you learning things you otherwise wouldn't or inspire you to figure out what you don't understand.
My personal points of resonance: • At 50:07, the host mentions *"the sea of disorder".* Half an hour before YouTube recommended this video to me, I had written a comment elsewhere on YouTube mentioning "the sea of disorder". Good job, YouTube algorithm! • Like Dr. Wolfram, I very much enjoy GPT's talent for inventing beautiful and accurate brand-new words too.
It's a perfectly reasonable term. I've used it shortly after I heard it defined the first time, and I KNOW thousands of other people use it in the same conceptual sense, assuming "concept" and "sense" aren't qualia.
On the topic of AI, there were 2 contexts used. One where the AI is seen as being an agent, and another where it is a dependent entity. To sort this out, there is a question: "Is the AI designed?" This may be a direct consequence of the circumstances of our own existence - we were designed with the ability to design.
Hi, and thank you for this engaging interview with Dr. Wolfram on the fascinating subject of entropy. I appreciate the depth of the conversation and the insights shared. However, I'd like to suggest one thing for future content: Title Alignment. While the title is certainly eye-catching, it might be seen as an overstatement. Dr. Wolfram's insights are valuable, but the term "solved" may not fully represent the complexity of the subject. Perhaps a more nuanced title could better reflect the content. A more accurate title might be something like "New Insights into the Mystery of Entropy: A Conversation with Stephen Wolfram." Thanks.
@@adamluhring2482 I guess you took the "shut up and calculate" directive too seriously. Most theoretical physicists do believe the evolution is unitary, even if it is still unclear how the observer enters the picture. Basically, information is conserved; that is why black holes look so evil.
Random chaos is the normal state of the universe: the vibrational state of things can be slowed down, but it will always increase again toward that norm. Entropy is why we have so many different structures in the universe, all in different stages of constant change. No entropy means no change: stars would not be stars, everything would congeal, vibration would slow, all the effects of that vibrational friction would nearly come to a halt, and the universe would not be what we see today. If he had really solved the mystery of entropy, lol, he would give that to science so they could correct all the mistakes of the past 100 years and change the world, but...
Wonderful interview with Stephen Wolfram. Interesting to hear him discuss the interplay between the second law of thermodynamics, i.e., entropy, and how that relates to the challenges of really understanding how LLMs work.
We had quite different takeaways from this video. He claims to have had some epiphany about how 2nd law really works (an understanding that has evaded him for last 50 years until very recently). But imo he didn’t give us any of that explanation. He just derailed off into different random stories. If you got some understanding on how he interprets the 2nd Law from this, please share.
@@timjohnson3913 sabine h. covered her view on the topic a few weeks ago. The gist of it is that any initial state, no matter how unlikely it is (all molecules in one half of the box), is just one of many, many possible states and no less likely than any other. I tend to agree with her take on the subject.
@@timjohnson3913 What I understood is that Wolfram explains that the reason we perceive that entropy must always increase (that time’s arrow moves forward) is because our reality interpreter (our brain) processes information packets at speeds in the millisecond range whereas the processes that are actually taking place occur at speeds in the nanosecond range. He implies that this conclusion is a natural outflow of the analysis of how ideas are packaged and transmitted from one space/(reality interpreter) to another.
@@adminnvbs9166 and that makes any sense to you? Sure, you gave a "what" explanation, but I don't see any connection to heat dissipating or broken eggs not coming back together, nor do I see any "why" that makes what you said a good explanation.
@@timjohnson3913 it has to do with "coarse graining" of the many possible configurations of micro-states. The coarse graining corresponds to the observable macro-states. The observable directionality of evolving macro-states is due to there being many more configurations of micro-states corresponding to later observable macro-states. For example, there are more ways for the molecules of an egg to be observed scrambled than unscrambled. Roughly speaking…
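The coarse-graining argument above can be made concrete with a toy model (my own illustration, not from the video): take N coins, where each heads/tails assignment is a micro-state and the observed macro-state is just the number of heads.

```python
from math import comb

# Toy coarse-graining: N coins, each heads or tails, form a micro-state.
# The coarse-grained, observable macro-state is just the number of heads.
N = 20
total_microstates = 2 ** N  # all equally likely

# Count how many micro-states correspond to each macro-state.
counts = {k: comb(N, k) for k in range(N + 1)}

# The "ordered" macro-state (0 heads) has exactly 1 micro-state, while the
# "mixed" macro-state (N/2 heads) has 184,756, so a randomly evolving system
# is overwhelmingly likely to be found in mixed-looking macro-states.
print(counts[0], counts[N // 2])  # 1 184756
```

That lopsided counting, scaled up from 20 coins to 10^23 molecules, is the whole statistical content of the "eggs don't unscramble" observation.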
Incredible discussion about communication from different locations and how knowledge of events can be different for each observer. Reminds me of how polarized light is changed as it passes through a filter. Two people can see an event differently, but when they attempt to communicate what they saw, they end up communicating the event as the same. Their communication is transformed to align with the location it presently resides.
My, at present, most concise definition of the (beautiful!) Second Law is: 'Good goes to Bad, unless Good is created, or maintained, locally, but then more Bad is created elsewhere.' I continue to glory in His Creation! E.g. the recently-encountered Carnally's Law. Wow!
I don't understand how the mystery of Entropy is FINALLY solved. This video doesn't contain a single well explained idea, rather just "all-connected-to-all-how-fascinating" type of thing
I love hearing Stephen Wolfram talk, even in the moments when he gets a little incoherent :D Also, as a linguist, I can appreciate the way you phrased your bit about the language game; you seem to have a good grasp of the matter
No it won't; if anything it would bring order. Why would "too much knowledge" make you cuckoo? Have you seen the state of humankind lately? We're already in crazy land; you're just living in a "safe zone".
We are bounded in whatever ways you claim, but we are also bounded by the limits of our language to explain, describe, and even discuss the matters you are addressing. With those limitations (constraints of discussion) working against you, you do a remarkable job.
"In the end, you have to write code" this is a great notion. I came to this conclusion many years ago and Mr. Wolfram's work has only come to vindicate many notions I have come across in my search for meaning.
When you drop a glass and it shatters, it produces sound. How would you account for the lost energy that made the sound? The glass won't go back together without more energy added.
Nearly anything crystalline somehow defies the reversibility of entropy in this "dimension", which is why they're used the way they are; think of quartz clocks or receivers/transmitters. This is just the set of rules given.
Stephen this is a very insight producing presentation on Entropy and how it is affected by the limits of the resolution of our perceptions and measurements. Thanks for making such a great contribution to our understanding of this subject.
It's been a few weeks since I enjoyed something so much. The depression, being 36 with two chronically ill parents, as well as a 22-year relationship with my wife (it's good now, but it was abusive until we both got therapy): it's all been hitting me at once. This is taking my mind off all of that. Now that's priceless. New subscriber, and I love Wolfram.
@35:00 this is what Dr Hofstadter posited in "Analogy as the Fuel and Fire of Thinking." Assume language enforces a degree of regularity and conformity to qualia
Can't wait for the Observer theory from Wolfram. One thing I wish you had asked him about: how are you going to deal with the classic problem of self-reference? The limits and structural bounds your theory will impose on our minds are themselves the result of these limits. You have to be outside of these limits yourself to know those limits are there.
there is no plausible solution for this problem; it's a philosophical problem from the domain of epistemology, and there is no way of either resolving it or going around it. Any theory can hence only hope and strive to compensate for it, incorporating it as a fact and simply moving on.
@@horasefu1438 Moving on exactly where though? Deriving a superluminal travel or finding out what's at the center of the black hole? I think this is still being stuck in the same old Platonic cave, it's just today we frame it in terms of bits and computation, and superpositions because we now have QM. Whatever the label for the problem or its source, it's still a problem that needs to be addressed. If not, it's either Pragmatism, in which case, it's Wittgenstein's “Whereof one cannot speak, thereof one must be silent.” or admittance that it's a cool story, perhaps a good plot for the next incarnation of the Matrix trilogy.
Embedded in what language is are common concepts also embodied as word-groupings with connections. A.I. is picking up mathematical patterns which, by virtue of linkage, come close to modeling the deeper aspects below language, the hidden stuff: the assumptions we/brains need to comprehend and communicate.
When I hear Stephen, I feel the concepts are restatements of existing theories in his own language rather than being truly original. Stephen seems to be drawing parallels and restating concepts in his model. It's like a second-order process description with its own naming convention. It's either brilliant or trivial, and my limited mind can't quite grasp which.
I read one of his papers on the subject instead of listening to this talk, as there are countless interesting diagrams illustrating his investigations, and I think it's a promising direction and tool-set to use. I would say the title is bogus (the author of the video confirms this elsewhere in the comments; it's a hook for yt sensationalism and views). However, from the paper the most interesting connection made was between the complexity/entropy of a system and its size AND how the observer is able to conceptualize it, i.e. limitations apply at both ends. I think that's illustrated well in the diagrams; it's almost intuitively simple, so probably relatively robust, and a nice addition or extension to former ideas about entropy, at least within my own very weak understanding of the subject! So yes, it's not a solution as the title suggests, but I think the tools of investigation expand understanding of the 2nd Law of Thermodynamics in a very practical way. As I pointed out before: visual aids are worth a lot more than talking heads for transmitting ideas! The video's title is poor and the content visually is inadequate.
@@adamluhring2482 yes, I really quite like Stephen, and think that his intuition regarding the importance of understanding ourselves as specifically evolved creatures who observe the world in a very specific way, is a crucially important one. That said, though, I have to sadly agree with your synopsis of his work - despite the steady stream of talk about having finally discovered (solved, reconciled, understood, explained etc) some area or another of metaphysical inquiry... well, there's a _very_ steady stream of talk. I'm just not sure that there's much physical exercise going on to support all that hot air.
Wolfram once wanted his theory to be peer reviewed. I remember the review board gently called on him to find better proof of his theories. Even in this video the only qualification offered is half a million reviews, but he didn't show any satisfaction.
I think you've done it, actually. His unique take on things might be an explanation. Or it might be a re-framing. In what sense is it useful? Does it improve our observations? Our understanding? Our predictions? Our control of natural phenomena? The Positivists hoped that, beginning with mathematics, and progressing to physics, and by way of physics to chemistry, from chemistry to biology, from biology to psychology, from psychology to the social sciences, and then on to the arts, a logical language could eventually unite all of human experience and knowledge into a consistent system to understand, predict, and control everything. Sound crazy? Anyway, that was destroyed by Kurt Gödel in the 1930s. Wolfram has a clear hypothesis: reality is fundamentally a binary computation. This is why he's on the sidelines of science. The correspondence between physics and mathematics is well known but not understood at all. Every scientist knows it's a mystery. There are many divergent opinions about it. They divide along the lines of whether mathematics is real or not. The promise of at once solving this deep mystery, and revealing the unifying principles of physical reality in the process, is the source of Wolfram's sensational appeal. Stephen Wolfram is an experimental mathematician, an honorable profession. It's a type of inventor. Buckminster Fuller was one such. He also did good work. He also was a character. He also had fanatical followers.
Dear Stephen, the three theories that need a missing link to match each other smoothly are a) quantum mechanics, b) (general) relativity, and c) thermodynamics. A best approach should start with black hole thermodynamics. Any new/finite theory about the nature of entropy must explain temperature and time in space as keys to the evolution of the universe. Always fun to discuss and ask physicists about the very nature of temperature...
Thank you for another great episode... I remember the first computer I programmed, in 1974. I worked for the vendor (NCR) and I felt so lucky that I had 8k of core on my computer whereas our customers had 4k. The good old days... thank you for reminding us older folk how lucky we are today.
this conversation is following the second law of thermodynamics: entropy is increasing with time, more terms keep coming up, adding more complexity (this feels like creating more parking places for our understanding)
"The most recognisable and brilliant scientist alive today" What a crazy statement to start a podcast with. I really debate whether I can bother continuing if it starts in such a dumb way. I can imagine it will be full of mumbo jumbo that makes little sense with an audience that has zero background to evaluate or understand what he is saying, and takes everything as gold since they consider him the "most brilliant scientist alive". As a working theoretical physicists, I often end up frustrated after listening to stuff like that.
Most working physicists don’t like Wolfram since he is in their field and makes way more money and has more notoriety than they could ever dream of. Or maybe there’s other reasons but he seems not well liked by the community
@@Bob-qq4is Do you have any background to evaluate whether anything Wolfram says makes any sense? Or are you just impressed by these podcasts? His whole fanbase is people who do not have any background to understand any of it. He spreads his ideas on podcasts where people can't tell whether it makes sense, instead of in peer-reviewed journals where it is evaluated by experts. Ever wondered why? Jealous because he makes more money? Jim Simons is one of the richest people in the world; he is probably worth 20 times Wolfram, if not more. Yet his work is highly respected in the math and physics community.
Wolfram's early work, before leaving academia, was pretty solid. His software, Mathematica, is really good; I use it every day. But his recent work is extremely disappointing. It's a massive combination of buzzwords, extreme claims, and no genuine will to back any of them up. He has been invited to give seminars and has even discussed his ideas online with scientists. Whenever anybody asks into any detail about one extreme claim, he reverts to making 5 other unrelated extreme claims about his "theory". This is not in any way serious science. If you are impressed by big words you don't understand on podcasts, and make up your opinion of the science community based on them, I have stuff to sell you, man.
thank you dr. stephen wolfram, you are one of the greatest minds of our time, a true inspiration. I love Wolfram Alpha; it was extremely useful during my studies.
Shit, that's a profound insight: we usually assume the survival instinct would take precedence over all, yet we have the concepts of self-sacrifice, heroism, or suicide that disregard that apparent sense of self-preservation
@@Cabildabear Self-sacrifice developed in humans because of our tribal history; it's still evolutionary in nature, not on a personal but on a tribal level.
@@XOPOIIIO It's a beneficial trait to do the best thing for the continued propagation of your genes. If that means sacrificing yourself in a way that benefits your progeny, then it makes sense from a selfish-gene perspective.
Quantum physics seems to argue against the kind of determinism that would make every state reversible. Every initial state has a large -maybe infinite- number of following states, none of which is absolutely determined, and every momentary collective state might have arisen from some large number of previous states. We would very much like to perfectly engineer, predict, and know reality, but there are always limits. It doesn’t mean that reality is irrational, simply that outcomes of physical processes have a certain amount of inescapable unpredictability. Entropy is a useful framework to understand energy and order, but the coolest thing about physics is the way that mass and gravity interact to create clumps, while certain molecules under ideal pressures resist that clumping to foster reproduction. Biology and consciousness and other anti-entropy bubble systems emerge out of the flow of energy through chemistry within a system collectively tending towards increasing entropy. Feed an LLM on pure randomness and only junk emerges, but feed an LLM on a coherent history and something else comes out. Information under pressure is a fine analogy.
0:56: 🔬 Dr. Stephen Wolfram's insatiable hunger for knowledge and research on ancient manuscripts and early scientists' conception of the universe. 16:50: 💡 The second law of thermodynamics is an interplay between the computational process and our ability to observe things. 34:28: 🧠 Minds in rulial space can communicate through concepts, which act as packaged particles of information. 51:09: 🧠 The speaker finds it interesting that theoretical debates often need to be translated into code to be understood and grounded. 1:08:09: 🤔 The speaker discusses the future of AI and the challenges of AI governance. Recap by Tammy AI
The thing overlooked in the statements about survival instincts is that the life of separate beings, so to speak, is a synergistic, mutually dependent process, an essential "ruliad". Therefore the survival instinct is moderated by mutual dependence, kind of like a governor on an engine. So "fittest" means adaptability to mutualism or interdependency, which tends on average to rule out major crises. Hence the fears about AI are enormously exaggerated.
He always just vomits it all out at such a high rate of speed that the interviewer doesn’t have the chance to really take what he’s saying onboard, let alone formulate a sensible question that might challenge anything he’s said. Not that these particular guys want to. The looks on their faces say they’ve already decided that whatever he says isn’t just true, but that it’s mind blowing and game changing. I don’t think any of this stage craft is an accident.
Eh... It sounds to me like he took the second law and all its subtleties (never mind QM, never mind chaos, never mind probability) and transcribed it into a different language without resolving those subtleties. One big thing, I think, is we HAVE to reconcile the Kantian nature of our observations (or the observer problem in QM) and whether space-time is primal or not. What can he predict here? Otherwise it seems like "what's the point? 🤷" I'll have to read the book. But I do love the whole idea of all this complexity stuff 😍
Nah, I don't get it. The more I listen to that guy, the more convinced I am that this is a) a very bad case of confusing the map with the terrain / the model with reality, b) taking an arbitrary model and trying to make everything fit at all costs, c) leading nowhere at all, d) me being much dumber than I thought, or a mix of them.
I'm mildly amazed intelligent people think big talking heads on a screen is better for communication of an idea than visual aids and even dynamic visual aids that could be programmed................ I just see big talking heads on a screen and too much verbiage (verbal + garbage).
@@paulklee5790 First of all, "You're not sorry", so as such you're a liar. Secondly, you don't seem very adept at reading comprehension and would score zero if you read it again. Thirdly, you're a time-wasting troll using a mild form of insult to sound superior/humorous out of a sense of over-defensiveness. Try taking a good look in the mirror and asking if you're even human.
The part where he compares "internal experience" of a laptop with a human experience is just one example that shows how shallow and one dimensional some of his takes are. I agree about him having just a single model (computation) and trying to force everything to fit it. For some problems it's great, for others - hopeless.
Fantastic explanation. An amazing achievement which is rarely mentioned is Cédric Villani's proof of nonlinear Landau damping and convergence to equilibrium for the Boltzmann equation. Does the time-independent Schrödinger equation make any distinction between past, present and future? Are the Poincaré recurrence theorem and the Boltzmann H-theorem a form of time reversibility?
Exciting to hear what Stephen Wolfram has discovered. I believe he is at least on the right track in thinking that entropy is like encrypted information rather than actual randomness: that it looks random because of the difficulty of measuring all the details, if that's still his view.
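One way to see the "encrypted, not actually random" idea, as a toy illustration of my own: Wolfram's rule 30 cellular automaton is completely deterministic, yet the center column it generates looks statistically random to any observer who can't reverse-engineer the rule.

```python
# Rule 30: new cell = left XOR (center OR right). Fully deterministic,
# but the center column it generates from a single black cell is famously
# random-looking (Mathematica once used it as a random number source).
def rule30_center_column(steps: int) -> list[int]:
    cells = {0: 1}  # a single black cell on an unbounded tape
    column = []
    for _ in range(steps):
        column.append(cells.get(0, 0))
        lo, hi = min(cells) - 1, max(cells) + 1
        cells = {
            i: cells.get(i - 1, 0) ^ (cells.get(i, 0) | cells.get(i + 1, 0))
            for i in range(lo, hi + 1)
        }
    return column

print(rule30_center_column(7))  # [1, 1, 0, 1, 1, 1, 0]
```

Nothing here is random in the coin-flip sense; the "randomness" is entirely in the observer's inability to cheaply undo or predict the computation, which is the flavor of the claim in the comment above.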
So disorder is encrypted order? Next time someone tells me that I'm out of order, I will just reply 'No, in fact I'm in order, just that I utilize privacy technologies'.
That's how randomness, noise, mysteries and magic appear to us. Anything of which we have insufficient information to form an illusion of having a grasp of.
Randomness is relative. If you know all the bits in a string, it's not random to you; you just can't usefully compress it to communicate beyond the Shannon bounds. That's also the essence of the 2nd LoT: to usefully communicate a high-entropy state of stuff in a box (up to anti-de Sitter space) you have to grossly over-simplify, meaning you've chunked a lot and used macro variables, not micro, and many micro can realize the same macro. It has nothing to do with computers and hypergraphs _per se._ It is computational by default, by definition, since it's just mathematical description. That doesn't mean anything profound. How else are you going to describe physics? With poetry? (Been done.) The math always works if you get the inputs right. To me, fwiw, Wolfram comes across as a child nerd with bald hair. Higher IQ than me, I have to say (though that's no guarantor of having useful insights). Not a grifter, but someone who is too easily surprised by trivialities. I'm not a hater though; I like Stephen's enthusiasm. I wish he'd help "solve world poverty" though, which operationally is not a terribly hard thing, but politically is mighty hard. He needs to learn basic MMT, which he will not get chatting with the likes of NNT; instead see smithwillsuffice.github.io/ohanga-pai/questions/1_basic_ohangapai/ for some MMT basics. ("Solving poverty", i.e., solving the hard problem of instigating the political will to eliminate poverty via fair distribution of necessary output, is a surer path to a Nobel Peace Prize than any hypergraph theorising is to a Physics Nobel or Math Abel. Just sayin', if that sort of prize is what tickles your ego.)
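The Shannon bound mentioned above is easy to compute. A minimal sketch of my own (per-symbol entropy of a string, assuming independent, identically distributed symbols):

```python
from collections import Counter
from math import log2

def shannon_entropy(s: str) -> float:
    """Average bits per symbol needed to encode s (Shannon's lower bound)."""
    n = len(s)
    return sum(-(c / n) * log2(c / n) for c in Counter(s).values())

# A fully "ordered" string compresses to nothing; a balanced two-symbol
# string needs a full bit per symbol. Knowing the bits doesn't change this:
# the bound is about communicating them, not about whether they're "random".
print(shannon_entropy("aaaaaaaa"))  # 0.0
print(shannon_entropy("abababab"))  # 1.0
```

This is the precise sense in which "you can't usefully compress it beyond the Shannon bounds": the bound depends only on the symbol statistics, not on whether the sender happens to know the string.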
Disorder and randomness are two terrible ways of understanding entropy. It boggles my mind that anyone can think of science with this understanding of entropy. Entropy explains how one must expend energy to harness energy, thus eliminating the possibility of perpetual motion. Wolfram starts to get into the Carnot cycle, talking about irreversible processes. If your takeaway from entropy is "the world is devolving into chaos and randomness", you've entirely lost the plot. The harnessing of energy is always escaping our grasp, and the more we harness, the quicker we speed up this process. Think of harnessing all the energy released by fossil fuels: you have to collect the molecules and the heat from the atmosphere and compact them back into oil or gasoline. If we burned oil as energy to perform this process, we'd be contributing to the very problem we're solving. How much oil has to be burned to reverse the burning process? You're better off cutting the losses and developing forms of energy generation that don't heat the atmosphere.
So the old “how likely is this hand of cards” chestnut is a kind of entropy question. When the cards are “uninteresting” then we forget what they are and think about the number of hands that are, to our perception, interchangeable with that hand. So that hand doesn’t seem unlikely. But when it’s a royal flush, there aren’t many hands that are, to us, interchangeable with it.
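The card-hand version of coarse graining can be checked with a quick count (my own sketch, assuming standard 5-card poker hands):

```python
from math import comb

total_hands = comb(52, 5)  # 2,598,960 equally likely 5-card hands
royal_flushes = 4          # exactly one royal flush per suit

# Every specific hand has identical probability; what differs is the size
# of its "macro-state": how many hands we'd consider interchangeable with
# it. An "uninteresting" hand shares its macro-state with millions of
# look-alikes, while a royal flush shares it with only 3 others.
print(total_hands)                  # 2598960
print(royal_flushes / total_hands)  # ≈ 1.54e-06
```

So the royal flush isn't less likely than any other specific hand; it just belongs to a tiny equivalence class under our perception, which is exactly the micro-state/macro-state distinction behind entropy.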
@PhilHibbs At The Circus in Bath there are 108 acorns on the parapets. The Georgians loved gambling and card games and the card game Canasta uses four packs of cards so 108 cards. So pattern matching and finding combinations in reality from a to b in time could be viewed as a form of SCSC (Satisfiability Conditioning and Secure Computation)
What a great conversation. Would love to hear both of the other conversations mentioned in this interview (Bach and Friston). You guys are doing great stuff. Many thanks 🙏🙏
AI will be crucial in helping us understand the universe. A different perspective, a different kind of intelligence, will prove key. My pet theory is that many impasses in different fields have their limitations rooted in the inherent thinking processes, language, and logic of biological humans.
This conversation's starting-point problem refers to one of Leonard Susskind's formulations regarding entropy: that observed randomness is described as a thermal equilibrium resulting from many tiny fractalised components which are too hard (useless) to account for. And the next question derived from it: what if we could account for them precisely and somehow practically apply that hidden part of the information in real science? That's how I perceive this extraordinary topic.
This is so cool. LLM predict/select the next word. The talk, e.g., about Tesla world-models for cars, is that the observers are the cameras with high resolution dynamic inputs, being retained in memory for perhaps 10 seconds of 60 frames per second. The highway "world model" is trained (like the LLM) to predict all the bits in the next frame, up to maybe even the next full second of frames. So the model predicts a future. It can also predict 100 other possible futures (quickly), select the "best/most likely/desired/or goal-oriented" future, and send appropriate commands to the auto control system to realize that future. Meanwhile, it can also prepare appropriate (re)actions in the event that any of the other possible predicted futures actually occurs .. in the very next dynamic "frame" or ground truth it receives from the vision system. If you think about it, this is what a driver does constantly, watching the cars, guessing what they might do next, deciding on what he or she is doing next; doing that, while worrying about the other drivers; and so forth on and on. To the extent that works in silicon, IMO, that really is totally amazing!
You gave it the wrong title. Apart from some general remarks about entropy, everyone apparently forgot that you wanted to talk about how Stephen solved its mystery; instead he showed how he was surprised by the capabilities of AI and LLMs, like the rest of the world when ChatGPT came out. LLMs are completely random structures for any observer who does not know how to use them, but they needed gigawatt-hours to be created in their training process; they are ENERGY converted into INFORMATION. And this is what Stephen points out: that ENTROPY is not random for all observers but energy converted into complex information. But how he solved ENTROPY is still a mystery after watching this video.
I think you need to look at that thought again. Stephen discusses entropy as a phenomenon of observer theory, and the emergence of the "law" of entropy as a result of the relationship of the observer to information rather than a phenomenon of the universe. I guess you could look back to seeing Maxwell's Demon as an observer with a different state than us.
totally interesting, understood like 70%.. But Prof Wolfram needs to give these guys more respect, they give me super smart vibes even in front of the Prof!
Sorry to say, but to me this sounds like a sophisticated/reworded aggregate of already known concepts. Almost nothing new was said. We already know from quantum mechanics that we as observers are limited and that when we measure things the wave function collapses, so we have our own realities. The big question here is: are there multiverses or just one reality? We are also part of that reality, and we interact with it, so as we take measurements we change the course of its actions. It is also known from Heisenberg's uncertainty relation that we can't measure certain variable pairs exactly. So in his statistical theory of gas molecules in a box, you can't have the exact position and speed of all the molecules to maximum accuracy simultaneously. Then he talks about this concept space, but it's agreed that other minds, like super-intelligent aliens, would have to derive the same principles we have in physics or maths, because the laws of the universe are absolute. They would measure the speed of light the same, and the constants we use the same. About LLMs, we already know that they are built upon our knowledge and our inputs alter them, so they behave somewhat as we want them to, and they seem to make sense to us because we built them to do that. So what is new here? He wants to make a theory of observers. He wants to model us. Let's see what will happen and what scientists think about it. Or whether it can predict anything at all more accurately.
SO, Dr. Wolfram's comment kind of got me, that we're being continuously rewritten as we move through atomic space, but we perceive ourselves to be a consistent thread, being persistent in time. I understood that to mean that he sees space as a "block" of atoms. When we move from point A to point Z, we are being "rewritten" at every moment B thru Z as we move, but we can't perceive that hypergraphic submolecular shift between the space-atoms because of our bounded computational capacity. Did anyone else understand it that way?
Awesome interview! It's funny how he talks about ostracising the AI at the end as an alternative means of punishment, reminds me of how banishment was a historical penalty for people of significant means, and in some ways it's considered worse than death - you don't get to become a martyr or a victim, you just become irrelevant. Also lets you hold being accepted back into the fold as a carrot.
I find it funny that at 46:00 Stephen is essentially talking about the same concept as Rupert Sheldrake's morphic fields, and yet Dr. Sheldrake's ideas, even though he's an academic and biologist, are considered to be a pseudoscience and he's been banned from talks and scientific gatherings. He has written numerous books on the theory, filled with all kinds of indisputable evidence toward something of the sort, and I'm not claiming that his theory has to be absolutely the most correct one, but why on Earth would science ban a scientist for having a legit theory? I also find it fascinating that mathematicians can get away with describing similar processes, because their lingo is less mundane, leans onto highly abstract notions, and expects a certain level of cognition and experience from its listeners. In other words, could it be that Dr. Sheldrake's issue is not the approach, or the subject, or the theory per se, but the perceived force with which he could be shattering the existing scientific status quo? You don't ban if you're not afraid, and as soon as science starts looking like politics, it smells like we're on the verge of something really big. A growing amount of theories surround the topic of consciousness in more volume and accord than ever, spreading from physics and cosmology, to chemistry, biology, neurology, to obviously math and computation, and even arts and philosophy. And the more we look at it, the more the materialism, skepticism, and even atheism seem to suck big time. And I'm really not using the word atheism in a way to make this into a religious rant, I am not religious at all. It's just there are many components of this suggesting that the mainstream science discards so much about what consciousness appears to be. A humble facet of an n-dimensional and all-encompassing core that is deeply embedded into the very fabric of the Universe. 
The way life emerged as it did is not merely a game of chance or some dice roll; there are clearly entropic optima for the Universe to reach a state in which it can contain self-observers.
Fantastic interview! A follow up one, very soon, as in a few days, would be a great idea! Probably, the most qualified person to hear on the so called dangers of LLMs.
I met Stephen Wolfram once and I can't stand the high opinion of himself he has. The last thing this man needs is to be introduced as "perhaps the most recognizable and brilliant scientist alive today".
Like too many exceptionally brilliant people, he's got an exceptionally inflated false ego. But at least he offers something for real substantive thought and discussion. Anyone else who HASN'T done VERY IMPORTANT fundamental theoretical work probably doesn't have anything original or substantial to offer on the topic, yet likely has an unmeritedly inflated false ego...
They’re all like that, though. It’s a hard environment in which to keep your head screwed on straight. He has massive power over the young people in his charge. What matters in the end is whether any of this turns out to be useful. A mathematical framework that can adapt anyone’s observationally correct mathematics and turn it into something more easily codeable would be enormously useful, and would have applications outside of physics and academia that would be worth an ungodly amount of money. Anywhere humans can be found doing complicated things, you find them working with pragmatically incomplete computer models - they suck. The young people who are latching onto this, again, if it turns out to be useful, will be armed with knowledge that could make them rich. It’s niche, but carrying this over to military models and simulation, for example - and I know about these things - like I said, $$$. Big f’n $$$
Wow, wow! It was truly a treat to hear him talk about ideas that he is thinking about right now. It is like an artist describing each brush stroke as he paints. I did not realize how much he is working on AI and the extent to which his language is playing a role in the development of ChatGPT. I think his new ideas about computational physics will move physics and engineering forward in leaps and bounds. I think his kind of thinking will be the key to modernizing how we live.
I think it's a stretch to say this is a conversation about science. At best it's about the philosophy of science. He is definitely a very smart and successful person, but his theories are too hand-wavy for a physicist. For example, his claim that space is composed of atoms whose aggregate we observe as a continuum: a claim without a formal theory does not advance scientific progress. By contrast, a legitimately proposed theory like loop quantum gravity also suggested spacetime is discrete, but its predictions do not match observations. About rulial space: let's assume the 10^600 figure is true. What then? It's as pointless as the many-worlds theory, something that can never be proven and doesn't make any observable predictions.
We will never be able to predict the universe; hence the uncertainty principle. We are part of the entropy, of the Universe itself. We can always measure the simulation that we are running, but a simulation can't predict itself with deterministic certainty. Standard Model folks are bashing their heads trying to fine-tune the particles, yet they haven't really provided any useful conclusions that could be used today.
Instead of "special" I would say predictable or understandable by our minds. This means it becomes more random only to the degree that it increasingly makes less sense to us. This ties in with the role of the observer and our consciousness. I wish there were a way to communicate with Dr. Wolfram directly and share my psychological understanding of his Physics Project.
jameswiles There is a plenitude of books by well-recognised philosophers, mathematicians and computer scientists which illustrate the abundance of symmetries and non-chaotic systems that exist. I seem to remember an article on the FLDR (Fast Loaded Dice Roller), which also detects pseudo-random processes.
Wolfram did some nice things, but he's a bit of a nutcase. And this great observation that you cannot always predict the output of a computation without doing the computation itself should not come as a surprise to anyone. Consider an optimal algorithm for computing something. By definition, there must be no shortcut by which you could magically predict the answer, because if you had such a method, it would become your new algorithm. Did he really need to look at cellular automata to figure that out? And his nonsense about entropy makes me think he's never opened a book on statistical mechanics. It was so cringe to call him one of the greatest scientists at the beginning of the podcast. His greatest accomplishments are Mathematica and Wolfram Alpha, which are both great. No need to go overboard; his ego is inflated enough.
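For what it's worth, the cellular-automaton observation being debated here is easy to play with directly. Below is a minimal Python sketch (my own illustration, not from the podcast) of Rule 30, Wolfram's standard example of computational irreducibility: as far as anyone knows, there is no shortcut to the state at step n other than running all n steps.

```python
def rule30_step(cells):
    """Apply one step of Rule 30 to a row of 0/1 cells, growing the row
    by one cell on each side (empty cells are 0)."""
    padded = [0, 0] + cells + [0, 0]
    # Rule 30: new cell = left XOR (center OR right)
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

def run_rule30(steps):
    """Evolve a single black cell for `steps` steps; return the final row."""
    row = [1]
    for _ in range(steps):
        row = rule30_step(row)
    return row

# The center column looks statistically random even though the rule is a
# trivial deterministic formula; the only known way to get entry n is to
# run all n steps.
center = [run_rule30(n)[n] for n in range(16)]
print(center)
```

The point of the sketch: the "optimal algorithm" argument above is fine as far as it goes, but Rule 30 makes the phenomenon concrete with a three-symbol rule rather than an abstract appeal to optimality.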
For 50 years my understanding of entropy has been breaking down and I can't put it back together.
Fantastic conversation. It amazes me how Stephen, while sometimes difficult to follow, from time to time he pops out with concepts and analogies simple enough to make sense of, and yet very powerful. Kudos to Tim & Keith for bringing this conversation!
Holy cow what an amazing conversation. I feel privileged to be in the modern world and have access to this kind of brain bending, brain expanding stuff!
Listen to Nature and really expand it.
@@goldwhitedragon Not a very bright comment. As if Nature has a monopoly on what's real, when all that listening to or observing nature is going to do is raise a lot of abstract questions that require abstract tools to make any progress with. It's not like you look and immediately it's all visible. You have to penetrate it with a penetrating and contemplative mind. Also, a small mention of 'false causation', one of the biggest fallacies that emerged from 'listening to nature' and which took thousands of years to untangle ourselves from.
@@512Squared That's right. A contemplative and SINCERE mind, not one that says "not very bright."
@@goldwhitedragon Deconstruct your own original reply and you will clearly see why it was not very bright while claiming to be bright. You took my reply that was calling that out and did more or less the same moral-superiority BS. It's the insensitivity that is hardest to see. And yes, it has to be called out clearly; otherwise wolves are mistaken for sheep. And you projecting that back onto me because I called you out on it? I say it again: not very bright. Funny, it was just one word, 'really', but it revealed so much about how you look down on certain kinds of knowledge and inquiry and construct hierarchies that don't really exist, so yes, not very bright. But hey, you can step into clarity any time you want. Don't make it about me. If you want to be enlightened, then drop the silly superiority and appreciate and celebrate knowledge and insight wherever they are found. Otherwise, grow a thicker skin to criticism rather than trying to make it about someone else. I doubt very much you'll take this in good faith, but good luck.
@@goldwhitedragon I mean seriously, a SINCERE mind would not make the original comment you made.
My computer professional group hosted Stephen Wolfram at MIT when his book A New Kind of Science first came out.
I read it and concluded that the title was exaggerating what was presented as an interesting mathematical modeling technique. But cause and effect were somehow being mixed up.
In this interview Wolfram makes many good observations with many good insights, but, like with my earlier experience with him, something’s wrong.
I realize that he’s confusing metaphysics (what is) and epistemology (how we know). He’s doing with the ruliad what Plato did in reifying abstraction. The alternative is Aristotle’s correction to Plato, his teacher of 20 years.
Things are what they are; they are not driven by an arbitrary or purposeful selection of a rule from a universe of rules. Rather, a thing is the product of how it came about, of materials of a particular nature, statically and dynamically. Modeling such dynamic processes, such as mass with gravity, is the creation of a rule. It’s a human process of abstraction. The rules did not come first for matter to select among.
If anyone ever creates a caricature of Dr. Wolfram, it would be Dr. Wolfram on a podcast being asked a single question and then just not stopping - answering the question, and the multiple side effects of the question, for a couple of days - while the host struggles to slip in another question to at least guide the conversation in a certain direction.
He's talking about fascinating stuff, though! I loved it.
@@epajarjestys9981 Parsing problem, much?
Interesting observation. When your brain is that vast and you don't share at scale*, you end up with cortex entropy. The mind is at its best when you are teaching others. It becomes natural to want to just tell all. Grand unification. *At scale meaning wanting to be able to expand thought at extreme rates. Our mouths and the listeners' ears are too slow.
@@alabamacajun7791 On the other hand: if you can't explain something briefly, you haven't understood it thoroughly.
Try Chris Langan, America's smartest man, and his CTMU. On Curt Jaimungal's show.
I understood it. I would love to learn more about how, philosophically, we could match the pace of the technical side. I do think he has cracked the entropy befuddlement.
🎯 Key Takeaways for quick navigation:
00:30 ⭐ Dr. Stephen Wolfram is an amazing scientist with a deep hunger for knowledge.
01:57 💡 Dr. Wolfram works on the second law of thermodynamics, and his research creates a new framework for explaining this phenomenon.
03:26 🧠 Concepts and methods from statistical mechanics may be relevant to research on artificial intelligence and neural networks.
05:16 🔬 Studying the behavior of large language models brings new insights in science, statistical mechanics, and computational physics.
06:13 🌡️ The second law of thermodynamics, also called the law of entropy increase, addresses why systems typically evolve from order to disorder.
08:58 👥 The law of irreversibility explains why systems end up disordered even though each individual particle's motion could be reversible.
10:50 🕺 The combination of computational irreducibility and our limited observational abilities means we cannot see reversibility in physical systems.
12:16 📡 Even though he succeeded, Stephen Wolfram no longer has the original program that analyzed molecular behavior with respect to entropy.
14:10 🔬 The behavior of molecular systems can be likened to encryption, which makes us unable to recognize reversibility from the viewpoint of a simple observer.
15:33 🌌 In the context of the early universe, the question of whether the initial state had uniquely low entropy, or whether the laws themselves are irreversible, is complicated.
17:58 🧩 The second law of thermodynamics is a consequence of the interplay between the underlying computational process and our limited perception.
19:24 ⚛️ The equivalence of quantum mechanics, general relativity, and statistical mechanics reveals that they derive from the same phenomenon: the interplay between computational irreducibility and our perception.
22:12 🕰️ Time can be thought of as the progressive rewriting of a hypergraph; we perceive space as a continuum because of our limited perception.
24:01 🔍 We are engaged in building a theory of the observer, to better understand why physical observers perceive the world the way they do and how that relates to artificial intelligence.
26:24 🧪 A model of the observer is being built that may show why we perceive space as three-dimensional and which characteristics are essential to perceiving the physical world.
27:21 🧠 Our observer model reflects that there are other minds like ours, which has consequences for our perception.
28:17 🌌 Our perception of space is shaped by the speed of our brains relative to the speed of light, leading us to perceive space as a continuum across separate moments of time.
30:11 ⚛️ In quantum mechanics, the branching and merging of quantum histories must be reconciled into definite observations, which brings practical complexity.
31:33 🧠 We are part of the "ruliad", a complex structure containing all possible computations, which raises questions about how we perceive these different possibilities and how different minds communicate.
34:46 🕵️ A mind's position in the ruliad yields different perspectives on the rules of the universe; ways of communicating between different minds are similar to transmitting particles.
37:34 🧠 Ideas about agency, concepts, and language are remarkably similar across disciplines and fields of expertise.
38:59 🗣️ Language arose from the ability to play games that combine different signals with shared cultural and social knowledge.
39:55 🌐 The ruliad contains an incredibly large number of possible computations and concepts that we have not yet discovered or become aware of.
43:03 📚 We progress as an intellectual species by "colonizing" concept space, building networks through naming and sharing words.
44:31 💭 We assume that other intelligent species could arrive at similar basic concepts to ours, if they are similarly computationally bounded.
44:57 🌌 There is a vast number of concepts in the ruliad that we apparently do not yet know.
45:23 🐝 Society as a whole plays an observer role, deciding on events and decisions.
46:19 🤝 There are different observer levels with their own concepts, for example humanity versus individual people.
46:48 🌍 We cannot be sure which concepts matter to other intelligent species, for example dogs or whales.
47:44 🔬 There is the question of whether other species, upon reaching advanced technology, would arrive at similar or entirely different scientific concepts.
57:13 🤖 Wolfram Language code generation through natural language is evolving into a useful workflow with chat notebooks and symbolic representations of tools.
59:36 💡 The emerging workflow of using language models like GPT for coding is helpful for beginners who have a concept in mind but don't know how to start coding.
59:07 🖋️ The iterative process of writing natural language prompts and reviewing generated code by language models helps in refining the code and building on it.
58:10 💭 Language models like GPT can interpret natural language prompts based on what is written and generate code that can be read and edited by humans.
01:01:59 🤔 Stephen Wolfram discusses the risks and regulations surrounding AI, AGI, and the need to protect against potential existential threats.
01:02:29 🤖 Connecting AI to actuation matters, because that is the moment when an AI can do things we don't understand and have no control over.
01:03:52 🗣️ AI can influence people's behavior and actions through them, which is another form of actuation.
01:04:48 🖥️ Historically, automation has brought a fragmentation of jobs, with people focusing on more specialized tasks.
01:06:13 💻 Automating programming is valuable because it enables more efficient code creation and opens up new opportunities.
01:08:30 🔄 The more is automated, the more people can focus on defining the goals to be achieved, enabling larger and more complex projects.
01:10:22 🗣️ Agency in computational and human agents is not a difference in what they do or what experiences they have; it is that ours is confined to the physical world and does not take in the computational universe.
01:11:20 💭 Computational systems may have inner experiences much like ours, but we will never know them, just as we never truly know other people's inner experiences.
01:13:09 🤔 AI goals are similar to human ones, but some human goals are deeply tied to biology and survival, which could be problematic if an AI were to have them too.
01:14:34 🔄 Networked connections between intelligent agents can create trust and interdependence among them, which can fundamentally shape their goals and motivation.
01:17:26 💡 Just as economic networks produce global phenomena, computational networks may have emergent reductions arising from the behavior of individual agents.
01:20:15 🤔 The question of reducible theories is whether the nameable entities in a system coincide with what we care about, or whether we are immersed in details.
01:21:13 🧐 The philosophical side is as complicated as, if not more complicated than, the technical side. People do not agree on what is right.
01:21:41 💡 Markets and economies have proven an effective way to manage things in the world. A similar approach might be possible for managing large numbers of AIs.
01:22:11 🤔 Is it easier to manage a billion AIs than one AI? The answer might be yes, and such an approach could be inevitable for managing the network effectively.
01:22:38 💭 Many concepts related to artificial intelligence interweave, and they are hard to fully grasp. It is a challenge that lies ahead of us.
Made with HARPA AI
Why didn't you make these timestamps with Socialdraft AI like @AntonioEvans did two months ago?
Please someone kindly point to the minute in this video where something new about entropy is actually said. While the man talking seems remarkable and very interesting to listen to I failed to identify how rambling about AI had anything to do with proving the second law of thermodynamics.
If one day we all get our own personal AGI do you think you will spend much time punishing yours?
I’m learning Spanish now and simultaneously learning about Dr. Wolfram’s work. It takes practice to get accustomed to both languages.
Top Quotes from Stephen Wolfram:
"I think I now really understand [the second law of thermodynamics]. I began to understand it back in the 1980s."
"The interplay between the underlying computational process...and our limited computational abilities kind of at the top."
"Computational irreducibility means you're just stuck going through every step in the computation."
"Science isn't going to be able to give you all the answers."
"Language is our encapsulation of things we care about versus things we don't."
"The workflow that I see really emerging is...I have a concept in my mind, and I want to get my computer to do it."
"Most of [the computational universe], we humans don't yet care about."
"If we leave AIs to their own devices, they're just gonna go out and explore other parts of the Ruliad."
"When you talk about goals...I can have an external theory of what their goals are, and so can I for your average AI."
"Right now, we're not having AIs connected to us...we are progressively connecting [them] to more and more actuation systems in the world, and we should care about that."
"I'm in a sense more concerned about the lack of depth of understanding on the philosophical side than on the technical side."
"It's inevitable that it's kind of like there's this thermodynamics of AIs."
"Is it easier to manage a billion AIs than it is to manage one AI? I think the answer may be yes."
"When you talk to people who were there when it started...they're always like, well, I think it's right, but we're not quite sure."
Paraphrasing: “Thousands of years of philosophy and, in the end, we have to write code”
@@RickDelmonico holy crap, that is what I call committing to a joke!
@@DavenH how old are you, like 12?
The first thing Stephen did was commit to particles.
It can be shown, however, that all the particle-like phenomena can be explained by using properties of the wave functions/state vectors alone. Thus there is no evidence for particles. Wave-particle duality arises because the wave functions alone have both wave-like and particle-like properties.
Feynman's path integrals are not infinite, they're fractal.
hahaha 😂😂😂
As a retired professor, I admired Stephen Wolfram, Donald Knuth, Stephen Timoshenko, etc... all these people are pioneers in their fields who operated at a higher level.
Yes, yes, yes. This is hitting all the right approach points... after years of reading physics and working with reinforcement learning, these ideas get to the heart of the universe and how we experience it. I will definitely be getting Stephen's book.
🎯 Key Takeaways for quick navigation:
00:00 🎙️ *Introduction and sponsor*
- Brief introduction and thanks to the sponsor, Numerai.
- Mention of the guest, Dr. Stephen Wolfram, and his distinguished scientific career.
01:30 📚 *Dr. Wolfram and his pursuit of knowledge*
- Highlights Dr. Wolfram's insatiable thirst for knowledge, exploring ancient manuscripts.
- Anticipates the discussion of Dr. Wolfram's work on the second law of thermodynamics.
02:56 📖 *Presentation of Dr. Wolfram's new book*
- Addresses the connection between the second law of thermodynamics and artificial intelligence.
- Dr. Wolfram mentions applying concepts from statistical mechanics to language models such as GPT.
04:49 🔍 *Analogy between LLMs and statistical mechanics*
- Explores the analogy between the behavior of language models and particles in statistical mechanics.
- Highlights the phase transition in models like GPT when adjusting the "temperature".
06:39 🌡️ *Explanation of the second law of thermodynamics*
- Introduction to the second law of thermodynamics and the concept of entropy.
- Describes how entropy increases, relating it to evolution toward a more disordered state.
08:33 ⚙️ *The challenge of irreversibility and molecular cryptography*
- Explains why, despite reversible laws, we observe irreversibility in the macroscopic world.
- Draws an analogy between irreversibility and cryptography, where initial conditions are "encrypted" during molecular interactions.
11:20 🕰️ *Dr. Wolfram's personal history with thermodynamics*
- Anecdote about his attempt to simulate the second law of thermodynamics at age 12.
- Explores the difficulty of running time backward due to molecular cryptography.
14:40 🔄 *Irreversibility, cryptography, and limited observation*
- Digs deeper into the interplay between irreversibility, molecular cryptography, and our observational limits.
- Highlights the importance of computational irreducibility in the dynamics of molecules.
18:27 🤔 *Conclusion on cryptography and the second law*
- Recap of how molecular cryptography and limited observation contribute to the apparent irreversibility.
- Focuses on the interplay between irreducible computation and limited perception in understanding the second law of thermodynamics.
18:57 🧠 *Computational Irreducibility and Its Implications*
- Computational irreducibility implies that some computations cannot be simplified or predicted, requiring step-by-step processing.
- This concept challenges the idea of predicting AI behavior perfectly, highlighting the inherent unpredictability due to computational irreducibility.
20:49 🌌 *Interconnection of 20th Century Physics Theories*
- General relativity, quantum mechanics, and statistical mechanics are derivable from the interplay between computational irreducibility and observer characteristics.
- The second law of thermodynamics is a consequence of this unified theory, challenging earlier beliefs about the distinct nature of these theories.
23:34 🚀 *Observer's Perception of Space and Time*
- Our perception of continuous space results from averaging discrete atoms of space due to our computational limitations.
- The persistence of our experience over time contributes to the equations of general relativity and quantum mechanics.
26:51 🤔 *Deriving Fundamental Aspects from Observer Theory*
- Developing a minimal model for an observer, akin to Turing machines for computation, is crucial for understanding physics.
- The challenge lies in deriving fundamental aspects of our perceived reality, such as the three-dimensional nature of space.
29:43 🔄 *Ruliad and Communication Between Minds*
- The universe runs all possible rules simultaneously, forming a "ruliad" where every conceivable computation occurs.
- Observers, positioned at different points in ruliad, have distinct perspectives on the rules governing the universe.
34:19 🧩 *Concepts as Transportable Entities*
- Concepts, derived from neural nets, serve as transportable entities allowing communication between minds.
- The analogy is drawn between transporting concepts and sending particles across physical space.
38:03 🗣️ *Origin of language and its relation to culture*
- Language arises from social cognitive ability.
- Culture and language are symbiotic; they evolve quickly and take on a life of their own.
39:27 🌌 *Structure in the computational universe and concept space*
- The computational universe encompasses all possible computations.
- Human progress is a microscopic fraction of the total computational space.
42:10 🌐 *Concepts in interconcept space*
- Most of concept space lies outside familiar concepts.
- Building conceptual cities by naming things and giving them meaning.
45:23 🌌 *Diversity of concepts for different observers*
- The probability that different species arrive at the same concepts is low.
- The importance of defining what is meant by "observer".
48:13 🚀 *Colonization of the universe and observer perspectives*
- The notion of colonization depends on the observer's perspective.
- Comparison with the heat death of the universe and the relativity of "boredom".
54:52 🧠 *Language design and the evolution of the AI interface*
- The connection between language design and deep intuition in thinking.
- GPT-4's success at generating function names.
56:16 📚 *Exploring Chat Notebooks and chat cells in machine learning*
- Introduction to the notion of Chat Notebooks and chat cells.
- Using chat cells to generate code from natural language.
57:13 🔄 *Iteration and continuous improvement in the learning process*
- Describes the LLM's iterative process when writing code.
- Using the system's internal feedback to correct errors.
58:39 🤖 *Interaction and workflow between the user and the LLM*
- Reflects on the importance of the quality of the questions and the language used.
- Challenges of communicating informally, and the need for effective expository writing.
59:36 🚀 *Potential for learning and skill development*
- Using the LLM to translate old code and make it easier to start projects.
- Recognizes the LLM's usefulness as a starting tool for users unfamiliar with programming.
01:00:03 🛠️ *Integrating human and AI skills in development*
- Describes the emerging workflow that combines initial LLM generation with human intervention.
- Using the LLM as a starting point for complex projects or code translations.
01:01:32 🌐 *Reflections on AI risks and regulation*
- Discussion of the risks associated with actuation and LLM access to computer systems.
- Consideration of combining LLMs with symbolic capabilities, and the potential impact.
01:13:36 🧠 *Goals and theories of goals in AI*
- Goals in artificial intelligence (AI) are hard to define, since some human goals are rooted in biology and the survival instinct.
- The interesting paradox is that giving an AI a survival instinct motivates it to behave, but also draws it into a struggle for survival, much like the history of life on Earth.
01:14:34 🔄 *Trust networks and motivation in AI*
- Explores the idea of building a network of trust among different AIs to give them something to care about, based on interdependence, similar to transactions in economic networks.
- Forming a trust network may lead AIs to care about maintaining their status within the network, even if they are not directly instructed to shut down.
01:16:59 🌐 *Economic observers and complexity reduction*
- Compares the formation of economic theories to the reduction of complexity in systems, where specific observers can hold reducible theories without knowing all the details.
01:20:15 🤖 *Progressive connection of AIs to the world*
- Discusses the growing connection of AIs to actuation systems in the real world, and the philosophical and technical complexity of defining what it means to be an "AI".
- The lack of depth in philosophical understanding may be more worrying than the technical challenges in AI development.
01:22:11 ⚖️ *Managing multiple AIs and the thermodynamics of AI*
- Raises the possibility that managing a large number of AIs could be easier than managing a single one, relating this to a thermodynamics of AI.
- The connection between seemingly disparate concepts, such as thermodynamics and the management of multiple AIs, highlights the complexity and interconnection of the topics discussed.
Made with HARPA AI
Wonderful content. I listened to Dr. Wolfram in a YouTube video of the MIT Physics LLM talks and I was shocked and inspired by his way of handling topics. It is no different here: brilliant questions, brilliant ideas, not only technical but deeply philosophical ones. Thanks for bringing this into our scope.
Thanks!
Thank you brettlemoine1002!
His bridge at the beginning between the temperature of language models and thermodynamic state changes is a great demonstration of the level of thinking he does.
I think applying the idea of phase transitions like this to other things is a great idea.
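For anyone wondering what the "temperature" knob actually is: sampling from a language model typically uses a Boltzmann-style distribution over the model's output logits, the same functional form that appears in statistical mechanics. A minimal Python sketch (illustrative only; the logit values are made up):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Boltzmann-style sampling distribution over next-token logits.
    As temperature -> 0 the distribution 'freezes' toward argmax;
    high temperature 'melts' it toward uniform randomness."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                  # hypothetical scores for 3 tokens
cold = softmax_with_temperature(logits, 0.1)
hot = softmax_with_temperature(logits, 10.0)
print(cold)  # almost all mass on the top token (near-deterministic)
print(hot)   # mass spread nearly uniformly (near-random)
```

The sharp change in model behavior as this one parameter is varied is what invites the phase-transition analogy discussed in the episode.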
Inspiring interview. It stimulates thoughts about how, as a writing tool, AI-assisted search can be good for looking up who in history had similar ideas. That way we can sort of cite ideas we had that maybe were not entirely original or perhaps needed more credibility. We may be standing on the shoulders of giants, but it doesn't do much good when we don't know who they are.
What "doesn't do much good" is when we don't understand what they say.
Every time I listen to Stephen Wolfram I get excited. I get so many ideas I want to explore. Not just his ideas but he causes me to think more deeply about the things I’m working on. Wow great interview.
The questions are just as awesome as the explanations.
This discussion about entropy was really very grounding. As an engineer in the area of electro-optics (laser technology) and a long-term software developer, I thought I knew a little about the second law, but when you came up with the many-worlds approaches of quantum mechanics enabling the great power of quantum computers, I got lost. But sometimes, in less than a millisecond, your conceptual encoding crossed my neural network, and all the hidden layers transformed the output to 42. Anyhow, and I mean this honestly and humbly, it was a great honor to listen to you.
Cringe
You don't need many worlds for it, I think... just the unknowable shadow of how the past became the present. 🫨
And Michael Levin's bioelectricity, morphogenetic fields and intelligent agents. There seem to be fundamental structural forms driven by electrical forces within the cellular structure, which signal through ion channels and gap junctions and determine the physiological blueprint of an entity, beyond the function of the genes.
On the idea of computational irreducibility becoming obvious, this is probably a thing that's happened since we got computers. We've had practical experience of trying to write programs to do non-trivial things, and that's given us a feel for what kinds of things can be computed quickly and what can't.
I'm not sure that all he's "discovering" is "specifically accurate" as much as it is "conditionally accurate". Suppose you had a barrel of the infinite. The way another intelligent species "begins to make sense" of what's in the barrel could conceivably result in a completely different ruliad than the one he is "discovering" based on our current humanized conditions, which represent our attempt to make sense of the same barrel of the infinite. In other words, not only is observer bias a possibility, but our inevitably "locked-in", anthropically limited perspective may produce a different set of explanatory rules. If we had a chance to compare our ruliad with another ruliad produced by another "alien" perspective, they would probably differ in some respects. The question is, would they have the same bones, or would they be fundamentally different? Seems like an important question, but it may be a bit too "stringy" to matter... meaning too "stringy" to be testable. Still, I wonder if perception affects a ruliad - specifically, the description of a ruliad? SW has already talked about how humans are invariably "locked in" to a limited perceptual capacity, and we have to (whether we recognize it or not) limit our "intake of data" to what seems to "matter" to us. As has been stated before, no "model" of reality ever is reality... the amount of information is exceedingly infinite, and we can only ever focus on an excruciatingly small subset of that data. Anyway, interesting stuff.
The implications of computational irreducibility are not obvious to most people even now. Even people who think they understand it frequently slip into reductionism.
A good example of this is pro-life people who insist that life begins at conception and that a fetus is not different from a baby - that you skip immediately from not-person to person, and pointing to a reductive element, like a heartbeat, as evidence of a sort of instantaneous mapping of reality.
This view is not consistent at all with the reality Wolfram is describing or with the concept of computational irreducibility especially as it is so common in biological systems - that organisms have to pass through time in order to become what they become and that this process can’t be skipped over.
Gödel walks in: "Hold my beer ..."
also Gödel: "My friend, can I have some of that stuff you smoked, @gcewing?"
@@fenzelian Entropy is much more concrete in the case of genetics than heat. Think of the landscape model that virus physicist Peter Schuster describes in explaining the natural algorithm that allowed proteins to evolve into the efficient optimal folding machines that they are. A similar landscape model can be used to describe the universe of all possible gene configurations and organize them by the genetic differences (as distances) between them. For organisms to evolve, their gene structures must migrate across generations through this landscape. The law of entropy applies to this landscape because fewer gene arrangements produce successful complex life forms than produce successful simple life forms. And, likewise, gene arrangements that produce successful simple life forms are outnumbered by those that produce unsuccessful life. We know this based on entropy. Within this landscape may be more than 10⁶⁰⁰ possible arrangements. However many, it is a number so big that not even a smidgen of them could have occurred across 4 billion years even if every organism that ever existed in that time frame was genetically unique. Yet, in defiance of the static entropy characteristics of the genetics landscape mapping all life, gene configurations managed to randomly migrate to ones that produce ever increasingly complex life forms. All I can say is bull dung; it didn't happen. Do the math. A natural-selection genetic algorithm is not sufficient to solve such complex problems, approximately optimizing dozens and dozens of interrelated phenotype systems in single organisms. Bull dung. No algorithm could solve such a search problem.
The best thing in this interview is the pure joy in all your faces
It is truly a privilege having access to such luminaries! But of course, I enjoy these kinds of conversations wherever and whenever they happen ;-)
just a humble path integral in a big big ruliad 😁@@nomenec
That is why I watch Wolfram on video--his genuine enthusiasm and joy is something that can't be copied or faked
@@Dessoxyn He does seem a little behind the times on this entropy topic; diffusion models, for example, basically entirely hinge on least-action principles, which are tied at the hip with model entropy and Bayesian inference etc. It seems like it's been a crystal-clear concept in computing since the 80s at least, ignoring wizards like Shannon way earlier who gave us the whole field of DSP.
@@AndreAnyone It's more likely that evolution and cognition exist because of it. It's like THE rabbit hole of physics and biology haha.
52:00 "And then you have to write code..." nailed it. As a programmer, I wish everyone understood how grounding and orienting to practical reality this statement is.
We must mind the gap in dimensionality between a simulation and the real thing
@@jondor654 That applies especially to qualia.
"please be as technical as possible, we want all the details"
boffin: "I really don't think you quite want that but I will go a certain distance"
ha ha ha
The first half hour is about perception and how our theories are just handy tools for our understanding
Wow, can't wait to listen to this. I'll urgently watch today after I finish some work. Stephen Wolfram is always fascinating to listen to, even though I always feel I don't understand everything he says.
Just watched and absolutely loved it!
I'm not quite sure if Mr. Wolfram got into the details of the book he wanted to go into. It seems a bit like that strand of conversation got diverted. I should read the book. 📖
"I always feel I don't understand everything he says"
Embrace that feeling, it will lead to you learning things you otherwise wouldn't or inspire you to figure out what you don't understand.
My personal points of resonance:
• At 50:07, the host mentions *"the sea of disorder".* Half an hour before YouTube recommended this video to me, I had written a comment elsewhere on YouTube mentioning "the sea of disorder". Good job, YouTube algorithm!
• Like Dr. Wolfram, I enjoy GPT's talent at inventing beautiful and accurate brand new words very much too.
Hilarious to hear him talk about 'If I made up a word and no one used it' and proceed to use the term rule-i-ad as if it were something. :)
It's a perfectly reasonable term. I've used it shortly after I heard it defined the first time, and I KNOW thousands of other people use it in the same conceptual sense, assuming "concept" and "sense" aren't qualia.
On the topic of AI, there were 2 contexts used. One where the AI is seen as being an agent, and another where it is a dependent entity. To sort this out, there is a question: "Is the AI designed?" This may be a direct consequence of the circumstances of our own existence - we were designed with the ability to design.
Hi and Thank you for this engaging interview with Dr. Wolfram on the fascinating subject of entropy. I appreciate the depth of the conversation and the insights shared. However, I'd like to suggest one thing for future content:
Title Alignment: While the title is certainly eye-catching, it might be seen as an overstatement. Dr. Wolfram's insights are valuable, but the term "solved" may not fully represent the complexity of the subject. Perhaps a more nuanced title could better reflect the content. A more accurate title might be something like "New Insights into the Mystery of Entropy: A Conversation with Stephen Wolfram."
Thnx.
@@adamluhring2482 I guess you took the shut up and calculate directive too seriously. Most theoretical physicists do believe the evolution is unitary, even if it is still unclear how the observer enters into the picture. Basically, information is conserved, that is why black holes look so evil.
Random chaos is the normal state of the universe; all things' vibrational states can be slowed down, but they will always increase vibration again to return to normality. Entropy is why we have so many different structures in the universe that are in different stages of constant change. No entropy means no change: stars would not be stars, everything would congeal, the vibration would slow, all the effects of that vibrational friction would almost come to a halt, and the universe would not be what we see today. If he really solved the mystery of entropy, lol, he would give that to science so they could correct all the mistakes of the past 100 years and change the world, but...
Wonderful interview with Stephen Wolfram. Interesting to hear him discuss the interplay between the second law of thermodynamics, i.e., entropy, and how that relates to the challenges in really understanding how LLMs work.
We had quite different takeaways from this video. He claims to have had some epiphany about how 2nd law really works (an understanding that has evaded him for last 50 years until very recently). But imo he didn’t give us any of that explanation. He just derailed off into different random stories. If you got some understanding on how he interprets the 2nd Law from this, please share.
@@timjohnson3913 Sabine H. covered her view on the topic a few weeks ago. The gist of it is that any initial state, no matter how unlikely it is (all molecules in one half of the box), is just one of many, many possible states and no less likely than any other. I tend to agree with her take on the subject.
@@timjohnson3913 What I understood is that Wolfram explains that the reason we perceive that entropy must always increase (that time’s arrow moves forward) is because our reality interpreter (our brain) processes information packets at speeds in the millisecond range whereas the processes that are actually taking place occur at speeds in the nanosecond range. He implies that this conclusion is a natural outflow of the analysis of how ideas are packaged and transmitted from one space/(reality interpreter) to another.
@@adminnvbs9166 And that makes sense to you? Sure, you gave a "what" explanation, but I don't see any connection to heat dissipating or broken eggs not coming back together, nor do I see any "why" that makes what you said a good explanation
@@timjohnson3913 It has to do with "coarse graining" of the many possible configurations of micro-states. The coarse graining corresponds to the observable macro-states. The observable directionality of evolving macro-states is due to there being many more configurations of micro-states corresponding to later observable macro-states. For example, there are more ways that the molecules of an egg can be observed to be scrambled than unscrambled. Roughly speaking…
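To make the counting concrete (my own toy numbers, not from the thread): coarse-grain 100 coin flips by the macro-state "number of heads". The 50/50 macro-state covers astronomically more micro-states (individual flip sequences) than the all-tails one, which is this arrow-of-entropy argument in miniature.

```python
from math import comb

n = 100  # coin flips; each distinct sequence of flips is one micro-state

# Multiplicity (number of micro-states) of two macro-states.
zero_heads = comb(n, 0)    # exactly one sequence realizes "0 heads"
fifty_heads = comb(n, 50)  # ~1e29 sequences realize "50 heads"

print(zero_heads)   # 1
print(fifty_heads)  # 100891344545564193334812497256
```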
Amazing conversation. Highest level content, presented in very understandable way without going too low. An intellectual pleasure for sure.
Incredible discussion about communication from different locations and how knowledge of events can be different for each observer. Reminds me of how polarized light is changed as it passes through a filter. Two people can see an event differently, but when they attempt to communicate what they saw, they end up communicating the event as the same. Their communication is transformed to align with the location it presently resides.
My (at present) most concise definition of the (beautiful!) Second Law is: 'Good goes to Bad, unless Good is created or maintained locally, but then more Bad is created elsewhere.'
I continue to glory in His Creation!
Eg the recently-encountered Carnally's Law. Wow!
Easily the most fascinating guest you've ever had. I could listen to this man all day. 🙏❤️
I want to agree, but I had to stop after the first 10-12 minutes after hearing not a single new idea.
Truly phenomenal is right! Thanks for this talk, gents!
I would like to see how his model of a "changing hypergraph" deals with the fact that there is not a distinguished time direction in Minkowski space.
25:00 "Abstract" is the word
36:00 An abstraction
36:30 A neuron
I don't understand how the mystery of Entropy is FINALLY solved. This video doesn't contain a single well explained idea, rather just "all-connected-to-all-how-fascinating" type of thing
It's a running theme with videos featuring Wolfram, unfortunately.
I love hearing Stephen Wolfram talk, even in the moments when he gets a little incoherent :D
Also, as a linguist, I can appreciate the way you phrased your bit about the language game; you seem to have a good grasp on the matter
You gotta wonder if knowing too much leads you off into crazy land.
It does. I have this video to prove it.
Wait I need to see this video loool
There is only an infinitesimal separating the rational, irrational, and transcendental.
No it won't, if anything it would bring order.
Like why would "too much knowledge" make you cuckoo?
Have you seen the state of human kind lately?
We already in crazy land, you just living in a "safe-zone".
We are bounded in whatever ways you claim, but we are also bounded by the limits of our language to explain, describe, and even discuss the matters you are addressing. With those limitations (constraints of discussion) you do a remarkable job with those difficulties against you.
"In the end, you have to write code" this is a great notion. I came to this conclusion many years ago and Mr. Wolfram's work has only come to vindicate many notions I have come across in my search for meaning.
CTMU answers all questions. All.
Surprised to have found a likeminded thinker. Thanks for sharing!
When you drop a glass and it shatters, it produces sound. How would you account for the lost energy that made the sound? The glass won't go back together without more energy added.
Nearly anything crystalline somehow defies the reversibility of entropy in this "dimension", which is why they're used the way they are; think of quartz clocks or receivers/transmitters. This is just the set of rules given
Although I understood pretty much nothing, I still listened to it, it being 1000× better than any music.
Mr. Wolfram is an absolute gem of our times.
Every now and again YouTube suggests a video that I'm actually interested in. This is one of them
Stephen this is a very insight producing presentation on Entropy and how it is affected by the limits of the resolution of our perceptions and measurements. Thanks for making such a great contribution to our understanding of this subject.
Mind-blowing conversation, enjoyed it a lot, thanks!
It's been a few weeks since I enjoyed something so much.
The depression, and being 36 with two chronically ill parents.
As well as having a relationship with my wife for 22 years.
It's good now, but it was abusive until we both got therapy.
It's all been hitting me at once.
This is taking my mind off all of that. Now that's priceless.
New subscriber, and I love Wolfram.
You got married at age 14!??
@35:00 this is what Dr Hofstadter posited in "Analogy as the Fuel and Fire of Thinking." Assume language enforces a degree of regularity and conformity to qualia
Can't wait for the Observer theory from Wolfram. One thing I wished you asked him about it is: How are you going to deal with the classic problem of self reference - the limits and structural bounds your theory will impose on our minds are themselves the result of these limits. You have to be outside of these limits yourself to know those limits are there.
There is no plausible solution to this problem; it's a philosophical problem from the domain of epistemology, and there is no way of either resolving it or going around it. Any theory can hence only hope and strive to compensate for it, incorporating this as a fact and simply moving on.
@@horasefu1438 Moving on exactly where though? Deriving a superluminal travel or finding out what's at the center of the black hole? I think this is still being stuck in the same old Platonic cave, it's just today we frame it in terms of bits and computation, and superpositions because we now have QM. Whatever the label for the problem or its source, it's still a problem that needs to be addressed. If not, it's either Pragmatism, in which case, it's Wittgenstein's “Whereof one cannot speak, thereof one must be silent.” or admittance that it's a cool story, perhaps a good plot for the next incarnation of the Matrix trilogy.
Embedded in what language is are common concepts also embodied as word-groupings with connections.
A.I. is picking up mathematical patterns which, by virtue of linkage, comes close to modeling the deeper aspects below language, the hidden stuff which are assumptions we /brains need to comprehend and communicate.
When I hear Stephen, I feel the concepts are re-statements of existing theories using his own language rather than being truly original. Stephen seems to be making parallels and restating concepts in his model. It's like a second-order process description with its own naming convention. It's either brilliant or trivial, and my limited mind can't quite grasp which.
I read one of his papers on the subject instead of listening to this talk, as there are countless interesting diagrams illustrating his investigations, and I think it's a promising direction and tool-set to use. I would say the title is bogus (the author of the video confirms this in the comments sections elsewhere; it's a hook for yt sensationalism for views). However, from the paper the most interesting connection made was between the complexity/entropy of a system and its size AND how the observer is able to conceptualize it... i.e. limitations apply at both ends. I think that's illustrated well in the diagrams; it's almost intuitively simple, so probably relatively robust, and a nice addition or extension to former ideas about entropy - at least within my own very weak understanding of the subject! So yes, it's not a solution as the title suggests, but I think the tools of investigation expand understanding of the 2nd Law of Thermodynamics in a very practical application.
As I pointed out before: visual aids are worth a lot more than talking heads for transmitting ideas! The video's title is poor and the content is visually inadequate.
@@adamluhring2482 yes, I really quite like Stephen, and think that his intuition regarding the importance of understanding ourselves as specifically evolved creatures who observe the world in a very specific way, is a crucially important one. That said, though, I have to sadly agree with your synopsis of his work - despite the steady stream of talk about having finally discovered (solved, reconciled, understood, explained etc) some area or another of metaphysical inquiry... well, there's a _very_ steady stream of talk. I'm just not sure that there's much physical exercise going on to support all that hot air.
if you want original try David Deutsch
Wolfram once wanted his theory to be peer reviewed. I remember the review board gently called on him to find better proof of his theories. Even in this video the only qualification is half a million reviews, but he didn't show any satisfaction.
I think you've done it, actually.
His unique take on things might be an explanation.
Or it might be a re-framing.
In what sense is it useful? Does it improve our observations? Our understanding? Our predictions? Our control of natural phenomena?
The Positivists hoped that, beginning with mathematics, and progressing to physics, and by way of physics to chemistry, from chemistry to biology, from biology to psychology, from psychology to the social sciences, and then on to the arts, a logical language could eventually unite all of human experience and knowledge into a consistent system to understand, predict, and control everything. Sound crazy? Anyway, that was destroyed by Kurt Gödel in the 1930s.
Wolfram has a clear hypothesis: reality is fundamentally a binary computation. This is why he's on the sidelines of science. The correspondence between physics and mathematics is well known but not understood at all. Every scientist knows it's a mystery. There are many divergent opinions about it. They divide along the lines of whether mathematics is real or not. The promise of at once solving this deep mystery, and revealing the unifying principles of physical reality in the process, is the source of Wolfram's sensational appeal.
Stephen Wolfram is an experimental mathematician, an honorable profession. It's a type of inventor. Buckminster Fuller was one such. He also did good work. He also was a character. He also had fanatical followers.
Dear Stephen, the three theories that need a missing link to match each other smoothly are a) quantum mechanics, b)(general) relativity and c) thermodynamics. A best approach should start with black hole thermodynamics. Any new/finite theory about the nature of entropy must explain temperature and time in space as key to the evolution of the universe. Always fun to discuss and ask physicists about the very nature of temperature...
Thank you for another great episode.... I remember the first computer I programmed, in 1974. I worked for the vendor (NCR) and I felt so lucky that I had 8k of core on my computer, whereas our customers had 4k. The good old days.... thank you for reminding us older folk how lucky we are today.
We are all very lucky until the nukes start flying...
This conversation is following the second law of thermodynamics: entropy is increasing with time, more terms coming up, adding more complexity (this feels like creating more parking places for our understanding)
"The most recognisable and brilliant scientist alive today"
What a crazy statement to start a podcast with. I really debate whether I can bother continuing if it starts in such a dumb way. I can imagine it will be full of mumbo jumbo that makes little sense, with an audience that has zero background to evaluate or understand what he is saying and takes everything as gold since they consider him the "most brilliant scientist alive". As a working theoretical physicist, I often end up frustrated after listening to stuff like that.
Just because you’re a theoretical physicist doesn’t mean you’re not an arrogant asshole
Most working physicists don’t like Wolfram since he is in their field and makes way more money and has more notoriety than they could ever dream of. Or maybe there’s other reasons but he seems not well liked by the community
@@Bob-qq4is Do you have any background to evaluate whether anything Wolfram says makes any sense? Or are you just impressed by these podcasts?
His whole fanbase is people who do not have the background to understand any of it. He spreads his ideas on podcasts where people can't tell whether it makes sense, instead of in peer-reviewed journals where it would be evaluated by experts. Ever wondered why?
Jealous because he makes more money? Jim Simons is one of the richest people in the world; he is probably worth 20 times Wolfram, if not more. Yet his work is highly respected in the math and physics community.
Wolfram's early work, before leaving academia, was pretty solid. His software, Mathematica, is really good. I use it every day. But his recent work is extremely disappointing. It's a massive combination of buzzwords, extreme claims, and no genuine will to back any of them up.
He has been invited to give seminars and has even discussed his ideas online with scientists. Whenever anybody asks in any detail about one extreme claim, he reverts to making 5 other unrelated extreme claims about his "theory". This is not in any way serious science.
If you are impressed by big words you don't understand on podcasts, and make up your opinion on the science community based on it, I have stuff to sell you man.
Thank you, Dr. Stephen Wolfram; you are one of the greatest minds of our time, a true inspiration. I love Wolfram Alpha, it was extremely useful during my studies.
AGI doesn't need survival instinct, survival can be instrumental to any goal.
Shit that's a profound insight, where we usually assume the survival instinct would take precedence over all, yet we have the concepts of self-sacrifice, heroism, or suicide that disregard that apparent sense of self preservation
@@Cabildabear Self-sacrifice developed in humans because of their tribal history; it's still evolutionary in nature, not on a personal but on a tribal level.
@@XOPOIIIO It's a beneficial trait to do the best thing for the continued propagation of your genes. If that means sacrificing yourself in a way that benefits your progeny, then it makes sense from a selfish-gene perspective.
What the hell are you trying to say?
@@6ixpool Exactly
Quantum physics seems to argue against the kind of determinism that would make every state reversible. Every initial state has a large -maybe infinite- number of following states, none of which is absolutely determined, and every momentary collective state might have arisen from some large number of previous states. We would very much like to perfectly engineer, predict, and know reality, but there are always limits. It doesn’t mean that reality is irrational, simply that outcomes of physical processes have a certain amount of inescapable unpredictability. Entropy is a useful framework to understand energy and order, but the coolest thing about physics is the way that mass and gravity interact to create clumps, while certain molecules under ideal pressures resist that clumping to foster reproduction. Biology and consciousness and other anti-entropy bubble systems emerge out of the flow of energy through chemistry within a system collectively tending towards increasing entropy. Feed an LLM on pure randomness and only junk emerges, but feed an LLM on a coherent history and something else comes out. Information under pressure is a fine analogy.
0:56: 🔬 Dr. Stephen Wolfram's insatiable hunger for knowledge and research on ancient manuscripts and early scientists' conception of the universe.
16:50: 💡 The second law of thermodynamics is an interplay between the computational process and our ability to observe things.
34:28: 🧠 Minds in rulial space can communicate through concepts, which act as packaged particles of information.
51:09: 🧠 The speaker finds it interesting that theoretical debates often need to be translated into code to be understood and grounded.
1:08:09: 🤔 The speaker discusses the future of AI and the challenges of AI governance.
Recap by Tammy AI
Ok, I absolutely respect a guy who can work in "The answer is 42..." into a discussion of physics!
In terms of introducing Wolfram to Friston, I'd be curious to see Michael Levin as a 3rd party discussing some of these topics.
I emailed Levin yesterday to suggest that he speak with Wolfram.
I wonder if Friston and Levin would be able to get a word in!
It's funny how his reply in the video was basically to say "I find it's not worth talking to other people anymore."
Lol
The thing overlooked in the statements about survival instincts is that the life of separate beings, so to speak, is a synergistic, mutually dependent process as an essential "ruliad". Therefore the survival instinct is moderated by mutual dependence, kind of like a governor on an engine. So the word "fittest" is one of adaptability to mutualism or interdependency, which tends on average to rule out major crises.
Hence the fears about AI are enormously exaggerated.
What a great conversation to be a fly on the wall in.
He always just vomits it all out at such a high rate of speed that the interviewer doesn’t have the chance to really take what he’s saying onboard, let alone formulate a sensible question that might challenge anything he’s said. Not that these particular guys want to. The looks on their faces say they’ve already decided that whatever he says isn’t just true, but that it’s mind blowing and game changing. I don’t think any of this stage craft is an accident.
A Gish Gallop fallacy 😊
Eh... It sounds to me like he took the second law and all its subtleties (never mind QM, never mind chaos, never mind probability) and transcribed it into a different language without resolving those subtleties.
One big thing, I think, is we HAVE to reconcile the Kantian nature of our observations (or the observer problem in QM) and whether space-time is primal or not. What can he predict here? Otherwise it seems like "what's the point? 🤷"
I'll have to read the book. But I do love the whole idea of all this complexity stuff 😍
can we get a podcast where Wolfram keeps talking more and more deeply about the EXACT topic he is warning us that he can talk forever about. PLEASE?
Nah, I don't get it. The more I listen to that guy, the more convinced I am that this is
a) a very bad case of confusing the map with the terrain / the model with reality
b) taking an arbitrary model and trying to make everything fit at all cost
c) leading nowhere at all
d) me being much dumber than I thought
or a mix of them.
I'm mildly amazed intelligent people think big talking heads on a screen are better for communicating an idea than visual aids, even dynamic visual aids that could be programmed.
I just see big talking heads on a screen and too much verbiage (verbal + garbage).
@@commentarytalk1446 I'm sorry you find it too hard, that's quite understandable; the sandpit is over there in the corner, do enjoy yourself…
@@paulklee5790 First of all, "you're not sorry", so as such you're a liar. Secondly, you don't seem very adept at reading comprehension and would score zero if you read it again. Thirdly, you're a time-wasting troll using a mild form of insult to sound superior/humorous out of a sense of over-defensiveness. Try to take a good look in the mirror, if you're even human.
The part where he compares "internal experience" of a laptop with a human experience is just one example that shows how shallow and one dimensional some of his takes are.
I agree about him having just a single model (computation) and trying to force everything to fit it. For some problems it's great, for others - hopeless.
I think the words you're looking for in point a) are reify and reification.
Fantastic explanation.
An amazing achievement which is rarely mentioned is Cedric Villani's 'Proof of non-linear Landau damping and convergence to equilibrium for the Boltzmann equation'.
Does the time-independent Schrödinger equation make any distinction between past, present and future?
Are the Poincaré recurrence theorem and the Boltzmann H-theorem forms of time reversibility?
Exciting to hear what Stephen Wolfram has discovered. I believe he is at least on the track that entropy is like encrypted information rather than actual randomness, that it looks random because of the difficulty of measuring all the details, if that's still his view.
So disorder is encrypted order? Next time someone tells me that I'm out of order, I will just reply 'No, in fact I'm in order, just that I utilize privacy technologies'.
That's how randomness, noise, mysteries and magic appear to us. Anything of which we have insufficient information to form an illusion of having a grasp of.
Randomness is relative. If you know all the bits in a string it's not random to you. You just can't usefully compress it to communicate beyond the Shannon bounds. That's also the essence of the 2nd LoT. To usefully communicate a high entropy state of stuff in a box (up to anti-de Sitter space) you have to grossly over-simplify --- meaning you've chunked a lot and used macro variables, not micro, and many micro can realize the same macro.
It has nothing to do with computers and hypergraphs _per se._ It is computational by default, by definition, since it's just mathematical description. That doesn't mean anything profound. How else are you going to describe physics? With poetry? (Been done.) The math always works if you get the inputs right.
To me, fwiw, Wolfram comes across as a child nerd with bald hair. Higher IQ than me, have to say (though that's no guarantor of having useful insights). Not a grifter, but someone who is too easily surprised by trivialities.
I'm not a hater though. I like Stephen's enthusiasm. I wish he'd help "solve world poverty" though, which is not operationally a terribly hard thing, it is mighty politically hard. He needs to learn basic MMT , which he will not get chatting with the likes of NNT, instead see smithwillsuffice.github.io/ohanga-pai/questions/1_basic_ohangapai/ for some MMT basics.
("Solving poverty" --- i.e., solving the hard problem of instigating the political will to eliminate poverty via fair distribution of necessary output --- is a surer path to a Nobel Peace Prize than any hypergraph theorising is to a Physics Nobel or Math Abel. Justsyain. If that sort of prize is what tickles your ego.)
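The "randomness is relative / you can't usefully compress it" point a few comments up can be seen in a few lines of Python (my own sketch, not from the thread): structured data collapses to almost nothing, while near-uniform random bytes barely shrink at all, which is Shannon's bound showing up in practice.

```python
import random
import zlib

random.seed(0)  # make the "random" bytes deterministic for the demo
ordered = b"ab" * 5000                                      # highly structured
noisy = bytes(random.getrandbits(8) for _ in range(10000))  # near-uniform

print(len(zlib.compress(ordered)))  # well under 100 bytes
print(len(zlib.compress(noisy)))    # close to (or even above) 10000
```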
compressed might be better term than encrypted
Disorder and randomness are two terrible ways of understanding entropy. It boggles my mind that anyone can think of science with this understanding of entropy. Entropy explains why one must expend energy to harness energy, thus eliminating the possibility of perpetual motion. Wolfram starts to get into the Carnot cycle, talking about irreversible processes. If your takeaway from entropy is "the world is devolving into chaos and randomness", you've entirely lost the plot. The harnessing of energy is always escaping our grasp, and the more we harness, the quicker we speed up this process.
Think of harnessing all the energy released by fossil fuels. You have to collect the molecules and the heat from the atmosphere, and compact it back to the oil or gasoline. If we burned oil as energy to perform this process, we'd be contributing to the problem we're solving. How much oil has to be burned to reverse the burning process? You're better off cutting the losses and developing forms of energy generation that don't heat the atmosphere.
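The "expend energy to harness energy" point has a quantitative face in the Carnot bound mentioned above (my illustration, not the commenter's): no heat engine, however designed, can beat this efficiency limit.

```python
# Carnot efficiency: the hard upper bound on the fraction of heat
# any engine can convert to work, set only by the two temperatures.
def carnot_efficiency(t_hot_k, t_cold_k):
    return 1.0 - t_cold_k / t_hot_k

# Example: a 600 K heat source rejecting to a 300 K environment can
# convert at most half the heat into work; the rest is unavoidably lost.
print(carnot_efficiency(600.0, 300.0))  # 0.5
```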
So the old “how likely is this hand of cards” chestnut is a kind of entropy question. When the cards are “uninteresting” then we forget what they are and think about the number of hands that are, to our perception, interchangeable with that hand. So that hand doesn’t seem unlikely. But when it’s a royal flush, there aren’t many hands that are, to us, interchangeable with it.
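That intuition is easy to check by counting (a sketch of my own, using standard poker combinatorics): the "uninteresting" hands form a huge equivalence class, while the royal flush class has only four members.

```python
from math import comb

total_hands = comb(52, 5)  # 2,598,960 possible five-card hands
royal_flushes = 4          # one per suit: a tiny equivalence class
# "One pair" hands, a big class of hands we treat as interchangeable:
# choose the pair rank, its 2 suits, 3 other ranks, and their suits.
one_pair = 13 * comb(4, 2) * comb(12, 3) * 4**3  # 1,098,240 hands

print(royal_flushes / total_hands)  # ~1.5e-6
print(one_pair / total_hands)       # ~0.42
```

Every specific hand is equally unlikely; what differs is how many other hands we'd lump in with it, which is exactly the coarse-graining move behind entropy.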
@PhilHibbs
At The Circus in Bath there are 108 acorns on the parapets.
The Georgians loved gambling and card games and the card game Canasta uses four packs of cards so 108 cards.
So pattern matching and finding combinations in reality from a to b in time could be viewed as a form of SCSC (Satisfiability Conditioning and Secure Computation)
What a great conversation. Would love to hear both of the other conversations mentioned in this interview (Bach and Friston). You guys are doing great stuff. Many thanks 🙏🙏
Yes to Joscha Bach. Such amazing minds we are privileged to experience! Thank you!
Great discussion!
AI will be crucial in helping us understand the universe. A different perspective, a different kind of intelligence, will prove key. My pet theory is that many impasses in different fields have their limitations rooted in the inherent thinking processes, language, and logic of biological humans.
"In the end you've got to write code" spoke to my heart
"I have a suspicion that there is a way out of this mess" is all he could say.
Next book.
This conversation's starting-point problem refers to one of Leonard Susskind's formulations regarding entropy: that observed randomness is described as a thermal equilibrium resulting from many tiny fractalised components which are too hard (useless) to account for. And the next question derived from it ---> what if we could account for them precisely and somehow practically apply that hidden part of information in real science? That's how I perceive this extraordinary topic.
Are there real scientists who take all his stuff actually seriously?
no
This is so cool. LLMs predict/select the next word. The talk, e.g., about Tesla world-models for cars, is that the observers are the cameras with high-resolution dynamic inputs, retained in memory for perhaps 10 seconds at 60 frames per second.
The highway "world model" is trained (like the LLM) to predict all the bits in the next frame, up to maybe even the next full second of frames. So the model predicts a future. It can also quickly predict 100 other possible futures, select the "best/most likely/desired/goal-oriented" future, and send appropriate commands to the auto control system to realize that future. Meanwhile, it can also prepare appropriate (re)actions in the event that any of the other predicted futures actually occurs in the very next dynamic "frame" or ground truth it receives from the vision system.
If you think about it, this is what a driver does constantly, watching the cars, guessing what they might do next, deciding on what he or she is doing next; doing that, while worrying about the other drivers; and so forth on and on. To the extent that works in silicon, IMO, that really is totally amazing!
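The predict-many-futures-then-select loop described above can be sketched in a few lines (all names here are hypothetical; this is a toy illustration, not any real autopilot stack):

```python
import random

# A real world-model would be a learned network predicting the next
# frame; here a stub maps (state, action) to a noisy predicted state.
def predict_future(state, action):
    return state + action + random.gauss(0.0, 0.1)

# Score each candidate action by how close its predicted future
# lands to the goal, and pick the best one.
def choose_action(state, candidate_actions, goal):
    return min(candidate_actions,
               key=lambda a: abs(predict_future(state, a) - goal))

state, goal = 0.0, 1.0
action = choose_action(state, [-0.5, 0.0, 0.5, 1.0], goal)
print(action)
```

The driver analogy maps directly: `predict_future` is the guess about what the other cars do next, and `choose_action` is the decision made while holding those alternatives in mind.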
You gave it the wrong title. Apart from some general remarks about entropy, apparently everyone forgot that you wanted to talk about how Stephen solved its mystery; instead it showed how he got surprised by the capabilities of AI and LLMs, like the rest of the world when ChatGPT came out. LLMs are completely random structures for any observer who does not know how to use them, but they needed gigawatt-hours to be created in their training process: they are ENERGY converted into INFORMATION. And this is what Stephen points out, that ENTROPY is not random for all observers but energy converted to complex information. But how he solved ENTROPY is still a mystery after watching this video.
I think you need to look at that thought again. Stephen discusses entropy as a phenomenon of observer theory, and the emergence of the "law" of entropy as a result of the relationship of the observer to information rather than a phenomenon of the universe. I guess you could look back to seeing Maxwell's Demon as an observer with a different state than us.
Totally interesting, understood like 70%. But Prof Wolfram needs to give these guys more respect; they give me super-smart vibes even in front of the Prof!
Sorry to say, but to me this sounds like some sophisticated/reworded aggregate of already known concepts. Almost nothing new was said. We already know from quantum mechanics that we as observers are limited and when we measure things the wave function collapses, so we have our own realities. The big question here is: are there multiverses or just one reality? We are also part of that reality and we interact with it, so as we take measurements we change the course of its actions. It is also known from Heisenberg's uncertainty relation that we can't measure some variable pairs exactly. So in his statistical theory of gas molecules in a box, you can't have the exact position and speed of all the molecules to maximum accuracy simultaneously.
Then he talks about this concept space, but it's agreed that other minds, like super-intelligent aliens, would have to derive the same principles we have in physics or maths, because the laws of the universe are absolute. They would measure the speed of light the same and the constants we use the same.
About LLMs, we already know that they are built upon our knowledge and our inputs alter them, so they will behave somewhat like we want them to, and they seem to make sense to us because we build them to do that.
So what is new here?
He wants to make a theory of the observers. He wants to model us. Let's see what will happen and what scientists think about it, or whether it can predict anything at all more accurately.
So, Dr. Wolfram's comment kind of got me, that we're being continuously rewritten as we move through atomic space, but we perceive ourselves to be a consistent thread, persistent in time. I understood that to mean that he sees space as a "block" of atoms. When we move from point A to point Z, we are being "rewritten" at every moment B through Z as we move, but we can't perceive that hypergraphic submolecular shift between the space-atoms because of our bounded computational capacity. Did anyone else understand it that way?
Awesome interview! It's funny how he talks about ostracising the AI at the end as an alternative means of punishment, reminds me of how banishment was a historical penalty for people of significant means, and in some ways it's considered worse than death - you don't get to become a martyr or a victim, you just become irrelevant. Also lets you hold being accepted back into the fold as a carrot.
The description of entropy increase around 17:00 in sounds like Bohm's idea of implicate order transposed into a computational frame of reference.
I find it funny that at 46:00 Stephen is essentially talking about the same concept as Rupert Sheldrake's morphic fields, and yet Dr. Sheldrake's ideas, even though he's an academic and biologist, are considered to be a pseudoscience and he's been banned from talks and scientific gatherings. He has written numerous books on the theory, filled with all kinds of indisputable evidence toward something of the sort, and I'm not claiming that his theory has to be absolutely the most correct one, but why on Earth would science ban a scientist for having a legit theory?
I also find it fascinating that mathematicians can get away with describing similar processes, because their lingo is less mundane, leans onto highly abstract notions, and expects a certain level of cognition and experience from its listeners. In other words, could it be that Dr. Sheldrake's issue is not the approach, or the subject, or the theory per se, but the perceived force with which he could be shattering the existing scientific status quo? You don't ban if you're not afraid, and as soon as science starts looking like politics, it smells like we're on the verge of something really big.
A growing number of theories surround the topic of consciousness, in more volume and accord than ever, spreading from physics and cosmology to chemistry, biology, neurology, to obviously math and computation, and even arts and philosophy. And the more we look at it, the more materialism, skepticism, and even atheism seem to suck big time. And I'm really not using the word atheism in a way to make this into a religious rant, I am not religious at all. It's just that many components of this suggest that mainstream science discards so much about what consciousness appears to be: a humble facet of an n-dimensional and all-encompassing core that is deeply embedded into the very fabric of the Universe.
The way life emerged the way it did is not merely a game of chance or some dice roll; there are clearly entropic optimums for the Universe to reach the state of being able to contain self-observers.
bro, you're trapped
Legendary interview.
Fantastic interview! A follow-up one, very soon, as in a few days, would be a great idea! Probably the most qualified person to hear on the so-called dangers of LLMs.
Very beautiful concepts and models presented here. Thank you for the inspiration.
I met Stephen Wolfram once and I can't stand the high opinion of himself he has. The last thing this man needs is to be introduced as "perhaps the most recognizable and brilliant scientist alive today".
Like too many exceptionally brilliant people, he's got an exceptionally inflated false ego. But at least he offers something for real substantive thought and discussion. Anyone else who HASN'T done VERY IMPORTANT fundamental theoretical work probably doesn't have anything original or substantial to offer on the topic, yet likely has an unmeritedly inflated false ego...
Well, the guy is smart and has created amazing things. He also knows this and is filthy rich, so can you blame him?
They’re all like that, though. It’s a hard environment to keep your head screwed on straight. He has massive power over the young people in his charge.
What matters in the end is whether any of this turns out to be useful. A mathematical framework that can adapt anyone’s observationally correct mathematics and turn it into something more easily codeable would be enormously useful and would have applications outside of physics and academia that would be worth an ungodly amount of money.
Anywhere humans can be found doing complicated things you find them working with pragmatically incomplete computer models - they suck.
The young people who are latching onto this, again, if it turns out to be useful, will be armed with knowledge that could make them rich.
It’s niche, but carrying this over to military models and simulation for example, as I know about these things… like I said, $$$. Big f’n $$$
I don't see what you see
Wow wow! It was truly a treat to hear him talk about ideas that he is thinking about right now. It is like an artist describing each brush stroke as he paints. I did not realize how much he is working on AI and the extent to which his language is playing a role in the development of ChatGPT. I think his new ideas about computational physics will move physics and engineering forward in leaps and bounds. I think his kind of thinking will be the key to modernizing how we live.
I think it's a stretch to say this is a conversation about science. At best it's about philosophy of science. He is definitely a very smart and successful person. His theories are too hand-wavy for a physicist, e.g. his claim about space being composed of atoms, with our aggregate observation manifesting as a continuum. A claim without a formal theory does not forward scientific progress. For example, a legit proposed theory like loop quantum gravity suggested spacetime is discrete, but its predictions do not match observations. About rulial space: let's assume 10**600 is true. What then? It's as pointless as many-worlds theory, something that can never be proven and doesn't make any observable predictions.
We will never be able to predict the universe; hence the Uncertainty Principle. We are part of the entropy, of the Universe itself. We can always measure the simulation that we are running, but a simulation can't predict itself with deterministic certainty. Standard Model people are bashing their heads trying to fine-tune the particles, yet they haven't really provided any useful conclusions which could be used today.
Instead of "special" I would say predictable or understandable by our minds. This means it becomes more random only to the degree it increasingly makes less sense to us. This ties in the role of the observer and our consciousness. Wish there was a way to communicate with Dr Wolfram directly and share my psychological understanding of his Physics Project.
jameswiles
There are a plenitude of books by well recognised philosophers, mathematicians and computer scientists which illustrate the amplitude of symmetries and non chaotic systems that exist.
I seem to remember an article on the FLDR (Fast Loaded Dice Roller) which also detects pseudo random processes.
Wolfram did some nice things but he's a bit of a nutcase. And this great observation that you cannot always predict the output of a computation without doing the computation itself should not come as a surprise to anyone. Consider an optimal algorithm for computing something. By definition, there must be no shortcut by which you could magically predict the answer, because if you had such a method it would become your new algorithm. Did he really need to look at cellular automata to figure that out? And his nonsense about entropy makes me think he's never opened a book on statistical mechanics. It was so cringe to call him one of the greatest scientists at the beginning of the podcast. His greatest accomplishments are Mathematica and Wolfram Alpha, which are both great. No need to go overboard; his ego is inflated enough.
great conversation ❤
The universe is not bits of material, it is relationships.
We are not naming things, we are naming behaviors.
Minute 18 is the best explanation of computational irreducibility I've ever heard.