Computational mathematician master race checking in. Enjoy your single-subject expertise in your silos, pedagogues. Just kidding. Good talk. The world of mathematics will be blown wide open once we fully harness the power of the PC.
The speaker was somewhat dismissive of Logic, claiming that "it is all checked and OK". However these computer-based proof systems are going to be based on a Constructive Logic, and as a result some theorems may not quite "say" what they say in the standard mathematical literature. Even Set Theory (which he was also somewhat dismissive of) will be formalised in a Constructive way (and thus be a different theory). The Foundational Framework in use here is a form of (Dependent) Type Theory. In some mathematical areas, perhaps including Combinatorics and Number theory, many proofs are (essentially) constructive, though it is tedious to dot the i's and cross the t's, as he seems to be saying - hence a proof tool is helpful there. My understanding is that many of these proof tools were originally developed by Computer Scientists to assist with combinatorially tedious and error-prone tasks in Formal Software Engineering. The corresponding Foundational Type theories could fit well there. However, to convince Mathematicians in general of their overall validity I think that he will need to spend some time clarifying the foundational differences - including any foundational differences between Coq and Lean, etc. There are discussions of these kinds of topics in the Constructive Mathematics literature, and I think he will have to refer to that in future.
I'm not sure what you mean here; Lean is not purely constructive. It has the option of using classical logic, and this is used all the time in the Lean maths library. Can you elaborate on what you mean by this?
@@alexbest8905 Hi. I have just read the Lean PDF "Logic and Proof", which answers most of my points above. I did not necessarily intend to criticize Lean; it's just that this presentation did not give me what I would have wanted to see. So Lean does have a Prop type which helps encode two-valued logic, and the document does confirm the other points I made and discusses the link (in outline) between Lean's type theory and ZFC set theory. As the document points out, there are some subtleties because Lean's core is constructive (e.g. the existential elimination rule needs a value of the given type to be pre-declared). There are further questions to be answered though: what about higher ZF axioms (i.e. doing ZF research in Lean); what about declaring another logic (e.g. modal logic); and the proofs of some non-constructive theorems (the Model Existence theorem was mentioned in the document as "tricky"). The speaker did say that he learned new things trying to prove results using Lean, and I guess that these issues were in there.
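To make the constructive/classical point concrete, here is a minimal Lean 4 sketch (an illustration using only the core library, not anything from the talk): the excluded middle is not part of the constructive core, but it is available as the axiom-backed Classical.em whenever classical reasoning is wanted.

    -- Excluded middle is not derivable constructively, but Lean ships it as a
    -- classical axiom; mathlib-style proofs use it freely.
    example (p : Prop) : p ∨ ¬p := Classical.em p

    -- By contrast, this direction is purely constructive (no axioms needed):
    example (p : Prop) (h : p) : ¬¬p := fun hn => hn h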
Could Lean be used in math learning and question solving? I am learning abstract algebra in my spare time, and it feels really lonely; I'm not sure whether I'm getting the practice questions right.
Logic programs have been around for a while. When it comes to proving theorems using computer logic, that's a tall order. You effectively have to feed in an entire scaffolding of logic in order to set the context, before you can even begin to turn the handle and generate results. It's not surprising that MS is involved with this problem; they have a history of researching code completion in developer environments. A new project of theirs does the same thing but uses machine learning to cross-reference instances of code from large pools of developer data. In fact, if you look closely you might recognise that this is the same technology used in language translation on the Internet. The results aren't exactly spectacular, but it gets the job done to a reasonable degree. I believe processes in contemporary machine learning are actually more mechanistic than heuristic, which is saying a lot when it comes to mapping that notion onto the mathematical intuition used in solving problems and proofs. Mathematical intuition is very much a human characteristic.
Skylark, Gödel's work on the incompleteness theorems already showed that logic can be codified. I fail to see why it's so hard to believe that tech could solve some of our deepest problems.
I agree. It's a great tool, but reality itself is not a logical condition; it is an intuition. The limitations within the field of AI will slowly demonstrate that logic is not equal to reality.
Seems like the would-be turning point for theorem provers would be to prove an interesting mathematical theorem in a way where most mathematicians wouldn't have the skills to access the proof, but which is ultimately correct as verified by software. Something similar happened with the four-color theorem: it was proven by Appel and Haken, but the "confidence of the elders" was only solidified by the computers, because it was basically a proof by exhaustion. You need a sort of "you don't need the 'elders', it compiled, QED" moment. Currently the ABC conjecture is still believed solved in Japan but not outside of Japan; if the proof were formalized and compiled, it would certainly end the sort of screaming-match issues that seem to have soured the lecturer on some level of human trust.
23:31 * He's thinking, "If I can do this I can do anything." * I'm thinking, "If I could understand what he said, I could probably figure out why I'm so broke."
It's funny, because while I think you can't sell formal proofs to British mathematicians in the "Gauss and Euler" tradition, I have a feeling the same argument would have more success in the Bourbaki-influenced French tradition, where they like to formalize quite a lot and be really sure that math is founded on solid foundations.
Great! Your work might have much more profound implications. Now, in 2024, AI can create average-quality research articles (in the machine learning field), from conceptualization to implementation. With the creation of strong AI in the future, I see no reason it couldn't invent new theories the way human mathematicians do. Formalizing math theories and definitions will make future AI able to do true mathematics (not something of ambiguous correctness).
Science trends are more trendy than fashion trends. The same is often said about computer programming - recently, especially in the context of JavaScript frameworks.
The idea of formalizing math with the help of computers has been around for a long time. One of the first serious attempts is the ongoing Mizar project: since 1989 people have been building a library of formal mathematical definitions and proofs called Mizar. mizar.org/ There is an academic research journal called Formalized Mathematics where Mizar proofs are publishable. The Mizar library mizar.org/library/ is pretty big. Here is a list of 100 proofs, 69 of which have been completed in Mizar: mizar.org/100/index.html
*_...the future of mathematics, in this vein, is the compilation of theorems, proofs, lemmas, corollaries, partial proofs, partial proof spaces, special cases, formal and informal (cf. induction vs. back-induction, e.g. if the kth step has the same form as the 0th then the −kth must have too), equivalences, arbitrary proof generators... (computationing)..._*
Around 1:04:30, how does LEAN avoid setoid hell, if Coq doesn't? Is this something to do with making type classes available with quotient spaces? What does that even mean?
The Coq people weren't happy about that comment. There's an axiom in Lean that lets you do quotients, which could in principle be added to Coq but isn't. I think what they do instead is just use a type equipped with an equivalence relation, instead of actually forming the quotient, and use lemmas of the form x ~ y -> f(x) ~ f(y) whenever they need to make a substitution. This does indeed sound like hell.
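For readers wondering what that looks like in practice, here is a minimal Lean 4 sketch (my own toy example, core library only) of the built-in quotient machinery: the respects-the-relation obligation is discharged once, at Quot.lift, rather than at every substitution.

    -- Naturals identified when they have the same parity.
    def SameParity (a b : Nat) : Prop := a % 2 = b % 2

    def Parity := Quot SameParity

    -- Quot.lift only accepts functions that respect the relation; this is the
    -- x ~ y → f x = f y obligation, paid once here instead of per rewrite.
    def parityOf : Parity → Nat :=
      Quot.lift (fun n => n % 2) (fun _ _ h => h)

    #eval parityOf (Quot.mk SameParity 7)  -- 1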
Software is written by groups/teams of humans, so it is made up of the combined knowledge of many humans, and this is what makes it surpass the knowledge of the individual human.
Really enjoyed the lecture. Wonder whether the Xena project will accept/encourage contributions from amateur mathematicians, similar to the polymath project. I'm sure many computer scientists would love to contribute.
At 9:10 he said computers will beat humans at math in 10 years. Well, the most it will take is 7-8 years, once o3/GPT7 is released. They will have far more advanced reasoning than humans, have infinite context, and find connections across various literatures.
Mathematics is a language of intuition. Computer science is an interface between languages. Language itself is an intuition of correspondence. All these systems will eventually prove is that intuition is more fundamental than logic.
@@jonhillery7736 Ah, but they provide nice isomorphisms, so if something is discovered in category theory, then we will have an analogous concept discovered elsewhere. : ) which is pretty cool and worthwhile in my opinion. People are attracted to a particular topic, naturally, and they will indirectly solve a problem in another topic!
I think he means that "proper" mathematicians, the vast majority, are not working in foundations and don't bother to try to follow current work in foundations like category theory. Many research mathematicians will use a bit of category theory as a tool or a language for something they also think of in another way, but they won't see category theory as a source of new ideas that they care about. Category theory works at the level of "objects" and what connects them (i.e., groups and homomorphisms, up to isomorphism and never "on the nose," it won't care about constructing a particular homomorphism). By design categories are blind to the elements of objects and things like multiplication tables (Cayley tables) that define particular groups. Category theory can't work at the level where one element equals another, it only works at the level where two objects are isomorphic (or categorically equivalent, or left adjoint or something...) and so will only take a "proper" mathematician so far before they have to move back to the level of the elements of the structures they are interested in. Many mathematicians know something about universal properties and think they are a good thing, but it doesn't make them want to read papers on category theory, or approve applications to their department from category theorists or mathematical logic researchers. It's like most mathematicians will use set operations and axiom systems, but are completely uninterested in recent work in set theory or logic. They think, those fields are like Euclidean plane geometry, it's nice and useful but completely understood. There is nothing new to discover for a researcher.
That's a whole lot of railing against the old guard without ever using the words "old guard". Though the word "elder" is used, which I suppose means the same thing.
I have an actual problem: I am a programmer without a degree collaborating with a mathematician, both of us outside any institution. I often don't understand many of the things he says to me, and vice versa :D. But I thought mathematicians didn't like to collaborate with programmers, or "computer scientists", and you embrace that collaboration in a way that is inspirational for me. I like your way of thinking a lot: accepting error and doubting everything, even when you say you prefer rigor over beauty :D. And Lean seems like something that has been on my mind for many years... and it seems to have been built: a way to use "new languages" to build mathematics. We are refuting an old theorem, in a way that seems to work very well. I can't speak your language properly... it is a proof "by competition". I built some structures that give natural numbers super powers, and then started analysing every possible case as far as I can understand. But everything is not done in a "rigorous way"... just with examples. The theorem says something is impossible, so we build an example that proves it is possible... The idea is surrounding the theorem in a jail it cannot escape from, even stealing its best tools. But you seem to be an expert in algebra. You like rigor and, like me, you trust computers to test your ideas. I used mine to test the formulas, to see if I had forgotten some "+1" somewhere... things like that. What would be your opinion of a work based purely on examples and observations? I mean... I can show a mathematical fact, like "see, this is happening...", but for me it is totally impossible to write it in "standard language". My partner said that writing all of our work in a "rigorous way" could take years. BUT IT EXISTS, AND IT IS HAPPENING, AND IT SEEMS TO WORK VERY WELL. I always tried to find a mathematician to write it all rigorously, but instead he accepted to work in my way :D. I don't know if my abstract structures are hard to translate to Lean... but it is probably the tool for "writing it in a proper way". My level in mathematics is very low, but simple ideas can have complex proofs. The other doubt is: I haven't made it public, but... when I said in a forum what I can do, someone told me it would be impossible because there is no way to deal with non-computable numbers... but she/he was wrong. I haven't told them the trick, because I want the prestige... bla bla bla... but the point is that Lean could probably reach a bad decision, and for that reason we always need humans. It could follow a set of definitions... and build the tree down to the axioms (if I understood well)... but if some theorem is wrong, it could reach a bad decision. Am I wrong?
If your axioms are fine, and Lean accepts your proof in terms of them, it's very likely (apart from Lean being possibly buggy) that your theorem is true. It follows the rules of logic mechanically. The potential trouble comes when we start picking axioms that are questionable in the first place. For a long time, this will be necessary in order to do anything a mathematician considers interesting, because the proofs of certain foundational things might not have been formalized. But a great deal of mathematics could theoretically be formalized in dependent type theory with no axioms at all - just the rules of type theory. Probably you'll end up wanting the law of excluded middle at some points and the axiom of choice in others, and there may be one or two other things depending on what sort of mathematics you're working with, but not much more is required.
@@cgibbard My history is a weird one. I don't have "axioms" in my work... I tried to use some to justify my ideas, but formalizing the paper is costing much more than actually solving the problem. The real problem is just an indexation problem that is supposed to be impossible. As a programmer I have my own tools to solve that problem, but now my partner and I are trying to translate it into math, to try to get the math community to accept the solution. But the solution "works by itself" without formalization in software :D. For that reason Lean caught my attention - searching for a way to formalize it all, because we are going to publish without formalizing it at all. Because it "works"... and many things can be explained in a simple way that is very hard to deny. I guess I work like the old mathematicians, because I am not a mathematician: lots of text explaining concepts.
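The earlier point about formalizing a great deal of mathematics with no axioms at all can be checked directly in Lean; a tiny sketch (Lean 4, core library only, my own example):

    -- A small arithmetic fact proved with nothing beyond the type theory itself.
    theorem add_comm_example (a b : Nat) : a + b = b + a := Nat.add_comm a b

    #print axioms add_comm_example
    -- reports that 'add_comm_example' does not depend on any axioms;
    -- classical proofs, by contrast, report their dependence on Classical.choice etc.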
But if every single piece of maths depends on ZFC, and ZFC is an infinite set of axioms because of the schemata, isn't it impossible to implement any sort of general theorem prover that is guaranteed to run in a finite amount of time?
I'm not sure what the question is, but it's certainly possible to write a program that checks ZFC proofs in finite time. The axiom schemata mean there are indeed in some sense infinitely many axioms, but you can check in finite time if a formula is one of the axioms, just like there are infinitely many prime numbers but you can still check in finite time if a particular number is prime.
@@ch2263 That makes sense. I guess the total number of axiom instances "in the schemata" that are relevant to what you are trying to prove would be finite. Thanks!
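The primality analogy above can be made concrete; a tiny Lean 4 sketch (naive trial division, purely illustrative): even though there are infinitely many primes, whether one particular number is prime is a finite computation, just as checking whether one particular formula is an instance of a schema is a finite syntactic check.

    -- Naive finite check: 97 has no divisor d with 2 ≤ d < 97, so it is prime.
    #eval ((List.range 97).drop 2).all (fun d => 97 % d != 0)  -- true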
I spent 50 years playing with coq. Many of my friends have dedicated their lives to playing with coq. For a lot of guys I know, they're no longer just playing with coq, coq is their life. They absolutely use coq to generate new things, and see what comes out of it. Sometimes they share what comes out of their coq with other coq enthusiasts. Sometimes it's not easy, though, you really have to work at that coq. Sometimes you gotta get angry at that coq. Sometimes I get so mad, I give my coq a smack. Coq smack, I call it. I've literally pounded my coq, I got so mad. Some guys are very empathetic, and will pound other guys' coq for them. I don't do that, I just watch. It's pretty safe to say that there are coq lovers out there. Some are just coq-curious, while others are in coq denial. At the end of the day, I still get joy from playing with coq and the good stuff that eventually comes out of it.
@@Neavris Oh, I get it. You're a moron. Unless a joke is graspable to a toddler, you're not going to get it. Even the slightest bit of subtlety will throw you off. You're the reason the world is not a pleasant place to be.
I am just fascinated by the degree to which mathematicians are not benefiting from tools that essentially emerged from mathematics. Yet, thinking about the level of technology utilisation by educational institutions, for educational purposes, I come back down to Earth, remembering the 'sad' reality in which we are wasting our collective efforts on trivial endeavours...
First of all, you can explain p-adic Galois representations to undergraduates; I saw it done. The question is: how much of the theory do they need to understand in order to formalize things like that in Lean? Secondly, a better test of his approach is to try things that are simple to observe but seem unfit for an automated prover. Basic topology works best here. Show me a proof that, if you cut any disc from a manifold, you get the same resulting space.
I don't think that's true, unless the manifold is connected. If you let the manifold be the disjoint union of a sphere and a torus, then you get either a disk and a torus, or a sphere and a torus-with-a-hole.
@@jamesh625 Good point- I know other similar examples. Maybe it's the French getting back at prudish English and Americans! They put it in the terminology and can pretend innocence!!!
Automatically enumerate _All_The_Proofs_, in magnitude order :) In x^x time complexity :( Except it's also the Xth hyper-operation :( And all but a vanishingly small subset are provably unprovable :( I hope it's not that hard. 🤷🏻♂️
A century after Kurt Gödel, a number theorist uses terms like "intellectual domination" and says that other fields "have nothing to offer". Really! I feel sad and sorry for him, his students and his audience.
@@paulgoogol2652 You have to be a little patient to understand the issue, by the way. (Do you see my comment from 2 hours ago? I think it is still at the top of the comments, as I see it from my computer.) Thanks
There is a chance that Gödel proved his theorem in an insufficient way. So be careful about 'the way' you say what you are thinking: you will probably need to eat those words in the near future. Remember to always be nice; it's a better way to express yourself.
@@FistroMan Hey! Probability is a sub-story of number theory, so that's also discardable. I get that there should be civility in public discourse. But that shouldn't mean we start respecting ape-shittery. I should be able to call something stupid "stupid"; otherwise what's the point of freedom of expression, if it's just for flowery words which mean nothing.
It seems to me they are teaching computers how to do math, just as a few decades ago they were able to teach computers to play chess... I think they will succeed
(International philosophers project) Theorems conditions. International philosophers project are and have to be the same thing as a standing on their own. If and when put together are and have to become different. International on its own is all encompassing. Philosophers on its own is all encompassing. Project on its own is all encompassing. (International philosophers project), as one statement is not (is not meant or taken as) all encompassing, it refers to three different structural group types which if and when put together become a different group type. Yet by always being exactly the same thing as three different types, quantum probabilistic predictions can be derived at pleasure under all circumstances exactly as with and within the none quantum states. Sigma is/= X,Y,Z no matter what, how or when. Bell is wrong, all bells are wrong. In fact forget all what is written before this sentence and use only the sentence (Bell is wrong, all bells are wrong), and that very sentence shows Bell's inequality theorem to be wrong, as clearly as the very statement Bell is wrong, all bells are wrong, which in itself is a theorem that obeys preset conditions and a, any and possible combinations of quantum probabilistic predictions can be derived at pleasure with and within all possible conditions from, for, by, off, and to it.
Coming here to happily report, for those who think that someone who thinks like Kevin SHOULD be incentivized to keep going: in 2024 he successfully got a grant to formalize Fermat's Last Theorem in Lean!
Early in school, I remember learning about proofs with Euclidean geometry. Then I remember as the math got more advanced, the proofs had giant gaps compared to Euclid. Then I hear about things like Banach-Tarski, and proofs hundreds of pages long of free-hand mathematics, and now I'm very skeptical of modern math.
Banach-Tarski almost certainly doesn't have anything logically wrong with it - it has been formalised in Coq even. But arbitrary infinite sets of points are not terribly intuitive beasts, and the axiom of choice gives you some powerful leverage to construct interesting sets with very peculiar properties.
Mathematics doesn't make any claims about "reality" as such. Physics does though, so one might have an argument that systems involving the axiom of choice are to some degree not as well suited to physics. Of course, if one is careful about which statements are allowed to be interpreted as predictions, it may be fine to have things like this around. I think the main thing that the axiom of choice does to harm physics is actually *not* the wacky infinite stuff that it does, but rather, that when taken unrestricted, it lets you prove the law of excluded middle, which already comes with a big downside for physics: it destroys the computational nature of proofs, meaning that it's possible to define numbers (perhaps numbers involved in "predictions") which can't be computed. That might be problematic for a setting where we'd like to guarantee that statements are falsifiable. Then again, what really constitutes a prediction? Is it sufficient to put real number bounds on the quantity we're measuring even if we can't compute the bounds at all? I don't think most physicists would regard that as adequate -- though it's quite hard to be certain until someone actually tries to calculate something that it's possible to really do the computation. But yeah, the predictions themselves only really tend to be considered made once we have something like a rational approximation with a handful of digits and error bars around it -- just having some variable which is a specific real number that we don't know how to calculate yet isn't really good enough. Similarly, the sets which are constructed in the proof of the Banach-Tarski paradox really have no physical interpretation whatsoever.
@@cgibbard Math, historically, started out as based in reality. Simple counting. Then geometry. Of course, things got more abstract with zero, negative numbers, square roots, imaginary numbers, etc., but they at least had a useful correspondence to the real world. Things got more confusing when alternative geometries came around, though they ended up turning out useful as well. I'm not sure how the law of excluded middle leads to uncomputable, but definable, numbers. The weird thing to me seems to be the reals, where you can have a number that takes an infinite amount of information to define.
Well, the explanation for how removing excluded middle gets you a sense of computation is a bit involved, but the gist of it is that proofs themselves end up having an interpretation in terms of computational steps. For example, in classical logic, we usually prove that A implies B by first assuming A to be true, and then taking some steps to try to conclude B, and if we succeed, we're allowed to conclude A -> B. Type theory / intuitionistic logic elaborates this by saying that in order to construct a function of type A -> B, we're allowed to assume that we have a variable x of type A, and in terms of this, we're to produce an expression e of type B, and if we succeed, then (λx. e) (or if you prefer, we could choose some syntax like x ↦ e) is a function of type A -> B. That is, this is a notation for a function parameterised over x, whose result is the expression e, and our proofs of implications are descriptions of functions of this sort. Similarly, a proof of "A and B" corresponds to a pair of proofs of A and of B, and a proof of "A or B" turns into a proof of either A or of B, together with a tag saying which of the two we're opting to take. It's this last point which makes the assumption "A or (not A)" a bit magical: in order for this interpretation to hold, we would need to know which of the two is the case for any statement, which we don't actually know much of the time. Thinking of proofs as programs, a program of type "(A or B) -> C" might take a different branch depending on which of A or B holds, producing a different result of type C in each case. This is essentially the heart of what type theories like Lean and Coq do -- they're programming languages where the types of values are sufficiently expressive to express all the theorems of mathematics, and the programs constitute honest proofs of those propositions. So long as you don't throw in additional axioms (which are treated roughly like variables whose evaluation just gets stuck), every proof of existence of something ends up implicitly consisting of a description of how to compute that thing, together with a proof that it satisfies the property required. It's still possible in such a system to define something like the computable real numbers, where each real number becomes a function from a natural number n to a rational number within 1/2^n of the given real (i.e. the usual Cauchy sequences approach with a slightly stronger condition to make sure we can actually work out the digits of any such number). So if I can prove that some such computable real number exists, I get a function into which I can plug any natural number I want, and get an approximation which is as good as I'd like, and unlike the functions of classical mathematics, this is actually a mechanical (though arbitrarily complex) computational process. Although in a practical sense, I might write down proofs which correspond to algorithms that are entirely impractical for actually computing the numbers we want in a reasonable amount of time, there's something nice about knowing at least in principle that it could be done.
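A minimal Lean 4 sketch of the proofs-as-programs reading described above (an illustration only, using just the core library): an implication really is a function, a conjunction a pair, and a disjunction a tagged value you can branch on.

    -- A proof of A ∧ B → B ∧ A is literally a program that swaps a pair.
    example (A B : Prop) (h : A ∧ B) : B ∧ A := ⟨h.2, h.1⟩

    -- A proof of (A ∨ B) → C branches on which disjunct it was handed, which is
    -- exactly why assuming A ∨ ¬A "for free" breaks this reading: the program
    -- would need to know which branch to take.
    example (A B C : Prop) (f : A → C) (g : B → C) : A ∨ B → C :=
      fun h => match h with
        | Or.inl a => f a
        | Or.inr b => g b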
I am gonna call it now. "Computer Science" will be the new religious name people use when talking about formal discoveries. Mathematics will become like philosophy, that "unscientific" or "unrigorous" department on the other end of campus.
@@DanielBWilliams That is the job left for the computers to complete. Mathematicians will only need to focus on discovering and inventing new theories, which depends more on imagination than on logical thinking.
I find this worth investing in. There is potential for this type of application in ALL fields. As a proper ally/friend/family we should look out for him. It's a much wiser and safer investment compared to just smashing particles together.
The irony is that it is the first search result for "e theorem prover". If the exact name of your software isn't enough to find it, you're definitely not hot stuff at the moment. (Note that you just have to believe me on that - I did the same in Tor, and it was the second result. YT, like every other big player, just likes your personal data too much.)
If you have any machine capable of creating its own evolution matrix, then that machine has its own self!! Now, how to write the evolution matrix code... that's a problem I leave to you??
A great seminar on a new and exciting road for math discovery. THAT SAID, recall the Star Trek episode 'Court Martial' (1966), when algorithmic jurisprudence was not enough to save Captain Kirk.
KIRK: Yes. (Notices the piles of books everywhere) What is all this?
COGLEY: I figure we'll be spending some time together, so I moved in.
KIRK: I hope I'm not crowding you.
COGLEY: What's the matter? Don't you like books?
KIRK: Oh, I like them fine, but a computer takes less space.
COGLEY: A computer, huh? I got one of these in my office. Contains all the precedents. The synthesis of all the great legal decisions written throughout time. I never use it.
KIRK: Why not?
COGLEY: I've got my own system. Books, young man, books. Thousands of them. If time wasn't so important, I'd show you something. My library. Thousands of books.
KIRK: And what would be the point?
COGLEY: This is where the law is. Not in that homogenised, pasteurised synthesiser. Do you want to know the law, the ancient concepts in their own language? Learn the intent of the men who wrote them, from Moses to the tribunal of Alpha 3? Books.
This also comes down to a circular argument (as far as I know), because some trig identity used in that proof cannot be proven without the derivative of sine.
Guys... numbers are an intellectual invention: created to map order and metric onto the Real-World fabric of The Universe. Geometry is PRIMARY. Numbers are Ideal, created out of whole cloth, and PROVISIONAL and SECONDARY. People who put numbers ahead of the geometry of Reality are simply Platonic Idealists, at root -- and might as well be discussing how many angels can dance on the head of a pin.
Trotskisty You are confused. That which follows from a collection of premises follows from those premises. Topics in math do not follow a single linear progression. Many topics can be introduced in a variety of orders, and often either of two topics can be used to introduce or define the other topic. There is no problem in doing this. Either can be done first. The end results are the same. There are many equivalent ways to define the same structures.
By the way, have you ever tried to look at Gödel's argument/formalism/theorem regarding the existence of God using the Lean Theorem Prover? The idea goes something like: "(i) positive traits are necessarily positive, (ii) being God is inherently positive, (iii) existing is a positive trait, and (iv) God comprises all positive traits". Based on these assumptions Gödel deduced that God must necessarily exist. Now, as a strong agnostic I have no ideological stake in this game whatsoever, nor am I trying to confirm, refute, or influence anyone's belief system. Also, as I recall, Gödel himself was a staunch atheist as well, and only extended this theorem as a logical exercise. That said, supposedly the formalization/theorem has been examined by an automated theorem prover and subsequently confirmed. From a purely mathematical curiosity point of view, do you know of any similar result using Lean?
I believe the proof was carried out in Isabelle. As far as anyone is concerned, it's a trivial logical exercise with no ontological consequences; it's ten minutes' work to prove it in Lean. I think the philosophers mainly think that the problem is assumption (iii): existence is not a property of an object, because calling something a thing is already asserting its existence (unless you mean something else by "exists", and then it becomes a linguistic game). There is nothing interesting here from a formal proving standpoint, only a philosophical question.
So Prof Kevin Buzzard's goal is to formalize all the elementary textbooks of mathematics, the whole body of math, and create the modern "Elements." This could be the turning point for mathematics in this century.
It seems to me that at some point the journals should make this demand of mathematicians... submit computer proofs along with your papers. In the long run this will benefit everyone, but the mathematicians are currently not interested because they don't see the immediate benefit.
But the reviewers will have an immediate benefit. How much easier it will be to review papers for correctness. I'd think math journals would be clamoring for this type of thing. I think this would speed up peer review.
We have no structured database of knowledge... not even in mathematics... knowledge is scattered in raw text across the internet. What a waste. With computer formalization, all that knowledge can be synced and integrated. I think this approach would be valuable for all knowledge, not just mathematics.
I really think this needs to be part of the journal process... so that our mathematics database remains immediately up to date... otherwise you're going to have a backlog of papers that need formalization in the future.
This is a great comment and I think you're 100% correct. I think the work this guy's doing is amazing, and the computer formalization of mathematics will have tremendous benefits for the world.
RE: "no structured database of mathematics" check out metamath.org. example proof in the database:
us.metamath.org/mpeuni/aleph0.html
They aren't interested because it's not feasible to expect every maths professor and researcher to learn Lean or Coq or whatever. It's very difficult to formalise even a simple proof, let alone the kind of research-level mathematics these people are doing. Plus, these tools are under active development. Lean is what, one year old? It's just not feasible to make that a requirement.
It might get there one day, but we're clearly not there yet.
@@andrepduarte I agree we aren't there yet. We need a high-level language... something that is in between a formal proof and a human proof... that could be compiled down to a formal proof.
The idea of AI actually being able to read and verify human-written math papers is ridiculous... The idea of mathematicians writing complete formal proofs is also ridiculous. We need something in between that a computer can compile.
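Something in that direction already exists in embryo: tactic proofs, which the machine elaborates down to the fully formal term. A minimal Lean 4 illustration (my own example, not the commenter's):

    -- The same fact at two levels of abstraction.
    example (a b : Nat) (h : a = b) : a + 1 = b + 1 := congrArg (· + 1) h  -- term level
    example (a b : Nat) (h : a = b) : a + 1 = b + 1 := by rw [h]           -- tactic level, compiled to a term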
Based on my personal experience, more and more mathematicians are starting to engage with computer science. The next generation of mathematicians must be proficient in programming. Converting mathematical proofs into valid computer code is a great idea and feasible to achieve in the long run.
I remember listening to this when it came out four years ago and now I came back here after I learned that Lean is being used by DeepMind to expand and train their A.I models to learn and prove new theorems. Turns out the speaker was spot on. This is the future.
Ha! I love this guy. His statement about "proper mathematicians" is gold!
Give him the money!!
"Give them what they want"
-Winston Churchill
We have money.
Come over.
-United States of We Can Fund This and Change the World with Your Logical Dreams of a Better Future of Mathematics-
Done!
Brilliant talk, the speaker has some very valid points about formalization and assisted proofs.
This is the best lecture and the best YouTube video ever! I've never watched anything more enjoyable!
Please include the presenter's name in the description.
Kevin Buzzard
I think the presenter is:
leodemoura.github.io/
I've been using a programming language called Kind that supports theorem proving (and algorithms - they're isomorphic through the Curry-Howard correspondence).
While it still lacks an automated proof searcher, writing software and then writing proofs for it, or proving mathematical statements by writing them as algorithms and then proving them, is really fun and has really expanded my view on programming.
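The same workflow is available in Lean, sketched here since Lean is the system under discussion (the definitions are my own toy illustration): define a program, then state and prove a fact about it in the same language; via Curry-Howard the proof is itself just another well-typed term.

    -- A program...
    def double (n : Nat) : Nat := n + n

    -- ...and a theorem about it, checked by the same kernel that ran the program.
    theorem double_spec (n : Nat) : double n = n + n := rfl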
Electrifying. Really generates enthusiasm.
Great lecture by Kevin Buzzard, especially knowing him from number theory circles.
Somehow his name is nowhere in the video description or title.
Are you doing research in number theory using the circle method?
As a student, this amazes me
As a physics student, I feel like everything is out of this world
As a person interested in programming, this Lean looks interesting to dig into
But most important for me? The plan of the presentation put on the side of the slide is the best invention I've seen and I'm going to -steal- use that
I feel that it's very important, perhaps even moreso than for mathematics, that we find a way to do this for physics as well. A physical theory with an undetected logical inconsistency in it is more or less unfalsifiable (though it'll make individual claims which won't work so perhaps we'll find out that way).
A lot of the mathematics which gets used by modern physics doesn't make it easy to turn a crank and get out predictions in a computable way. If physics could be formalized in something like dependent type theory in such a way that a significant portion of it was axiom-free, then the proofs of many theoretical results would be directly useful in computing numerical predictions. (They're literally computer programs which compute relevant things, even if not necessarily very efficiently.)
His presentation is done with the LaTeX beamer package; there is a style which does exactly this.
deic.uab.es/~iblanes/beamer_gallery/individual/Hannover-default-default.html.
\useoutertheme{sidebar} in beamer :p
_"But most important for me? The plan of presentation put on the side of the slide is the best invention I've seen and I'm going to steal use that"_
I've seen a lot of beamer talks use that, don't worry about "stealing" it.
IMO though, I think the best thing is to go completely minimal. The template for each slide should be blank. But that's just me.
69420 views, nice. Also, automated theorem proving is absolutely the future! How much do you want to bet that the first AGI will be a higher monad implemented in a formal proof language?
What I'm kinda hoping for is a transpiler of sorts which can take a valid proof written in Lean and transform it into a valid proof of the same statement in Coq, and vice versa, and the same thing with HOL, etc.
(Of course, because some of these don’t have quite the same foundations, not everything will translate right, but I imagine that all the main mathematical content will translate.)
It would be **really** cool if, in addition to such a transpiler, we could have theorems saying that it could translate theorems provided that the proof has certain properties (like, not depending too much on the details of the foundation).
Like, wouldn't that be really cool? Then we could have our formal proofs relatively independent of the system we are using to prove them. One person could prove something in Coq, another could automatically translate it to Lean and use its conclusion to prove something else in Lean, and then it could go back into Coq.
Wouldn’t that be great!
There are people who work in that area (I am one of them), but it's not that fashionable for some reason. Translating a theorem from system X to system Y usually results in a kind of garbage statement in system Y (it's not stated idiomatically and the proofs are probably very low level and unreadable), and there is usually a postprocessing stage where you turn the translated statements into the equivalent in the target system. You would not want to do this for the average theorem, because the overhead is probably higher than just proving it directly in system Y, but it might be good for the big name theorems with succinct statements like the odd order theorem.
Translating between different dependent type theories is hard because definitional equality is finicky, but I have done a wholesale translation of the Metamath database to HOL and Lean.
@@digama0 Woah, cool!
When you say that it produces garbage statements, is that as like, the statement of the theorem, or just as the proof steps?
I don't think I particularly mind the translated proof steps not being nice to read, as I've generally found proof-system proofs to be nowhere near as nice to read as proofs designed primarily to be read by people (though my sample size for that is small - most of the ones I've seen have just been applications of a few tactics).
Like, if I'm already willing to sacrifice the aesthetic nice-to-read-ness of the proof by writing it in a system like this in order to buy very high confidence in the correctness, I don't see making it even worse to read as losing much?
But, if the _statement_ of the auto-translated proofs can be manually massaged into a nice enough form without too much effort (and still being able to use the translated proof), I think that still sounds very nice.
So, what I mean to say by all that, is that I think what you are doing sounds really cool and impressive :)
@@drdca8263 Usually both the statements and the proof come out pretty bad unless you work really hard to make them not-bad. The proofs usually end up worse than the statements though. Making the statements actually good usually requires human intervention, not necessarily a huge amount, but generally proportional to the size of the definitional stack leading to the statement of the theorem. For super advanced definitions like Kevin's perfectoid spaces that's not a trivial thing. The example I played with for Metamath -> Lean was to translate Dirichlet's theorem (on infinitely many primes in arithmetic progressions). Like Fermat's last theorem, the proof requires lots of advanced number theory but the statement is pretty simple, so it was reasonable to massage the statement that came out of the translation into the "idiomatic" way to state it in Lean, using Lean's natural numbers, Lean's primes, and so on, rather than a bunch of statements about some model of ZFC inside Lean (which has some set which is the translation of Metamath's natural numbers, Metamath's primes and so on).
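For a sense of what "idiomatic" means here, a hedged sketch of how such a statement might look over Lean's own naturals (this is an illustration assuming Mathlib's Nat.Prime and Nat.Coprime, not the actual statement produced by the translation):

    import Mathlib

    -- Dirichlet, informally: for d > 0 coprime to a, there are infinitely many
    -- primes p ≡ a (mod d). Statement only; the point is that it talks about
    -- Lean's ℕ and primes directly, not an encoded model of ZFC's naturals.
    def DirichletStatement : Prop :=
      ∀ a d : ℕ, 0 < d → Nat.Coprime a d →
        ∀ n : ℕ, ∃ p, n < p ∧ Nat.Prime p ∧ p % d = a % d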
Amazing lecture. I would love to hear more from this guy.
Very informative! I will start looking into Lean! I have a number theory proof I have been working on for 2 years. Python's types really can't handle the infinite number sets. This is very cool! Great lecture!
best of luck in your proofs!
@@Blox117 Thank You!
@@emilywong4601 Awesome thank you very much!
_"Python's types really can't handle the infinite number sets"_
I'd suggest Haskell. If you know some category theory there are some really neat ideas in it inspired by category theory. Agda has an even more powerful type system. You may already know about these things of course.
I actually followed a course in Orsay, by the very teacher mentioned in the lecture. Absolutely top notch. I instantly fell in love with Lean
Great talk. Mathematicians can learn so much from computer scientists when it comes to languages and tools. Computer scientists are forced to come up with languages that actually work in practice, solving real world problems together with other people. The design process of mathematical languages can roughly be characterised as 'what felt right to a guy perhaps centuries ago', and is devoid of any pragmatic feedback loops. The tendency of programmers to argue languages is sometimes derided; and they can be hung up on trivialities. But as a mathematician it also came as a shock to me when I realised most mathematicians seem actually opposed to any introspection as to what it is they are doing or how. Its a radically different culture.
As someone who equally (little) identifies as either mathematician or computer scientist, I am very pleased for researchers to take the plunge into this scary interdisciplinary abyss.
Actually writing down such an encyclopaedia of mathematics in a single sufficiently expressive language seems to me like an absolutely wonderful task. And for sure the process is going to teach us a lot of new things about mathematics and language itself. I get the impression that Lean isn't the last word in mathematical language when it comes to expressing high level mathematical reasoning; clearly human language is more expressive still, if formalizing in Lean feels comparatively tedious. If set-theoretic relations are like machine instructions, Lean feels more like C; not like what we would today consider a high-level programming language. And I'd be willing to bet it will not just reveal unstated assumptions, but also reveal subtle contradictions in widely held beliefs.
>the design process of mathematical languages can roughly be characterised as 'what felt right to a guy perhaps centuries ago',
No.
@@aa888zz It's a bit of a simplification; the language does of course undergo a slow cultural evolution. But insofar as there are people proactively thinking about what the language of mathematics ought to be, this is mostly an academic exercise, not something that impacts actual mathematicians, even over the centuries.
Perhaps mathematicians are just better at getting it right the first time. But the fact that their language is either still too imprecise or too cumbersome for them to be able to write their own encyclopaedia in the sense discussed here, rather belies that notion.
Algorithms are applied mathematics.
@THGIF Can you stop spamming this everywhere.
The language of mathematics evolves generationally, and sometimes more quickly than that. Good notations get propagated quickly. You'll be hard pressed to find "f: X -> Y" much earlier than the 1950s. Category theory came along and coloured all the notation of mathematics quite rapidly. If you pick up a mathematics book from even the 1800s, it can often be really challenging to deal with the unfamiliar notation!
An important thing to keep in mind is that at present, the notations of mathematics serve mainly the goal of explaining things to other humans, and not computers. Making notational changes comes with a human cost of making many people in your audience uncomfortable, so it had better come along with a sufficient benefit. We see this in programming languages too, but at least with programming languages, you can teach the computer first, and get useful things happening before you have to convince another human to try.
hehe, he played with "coq" for two weeks. yes I am too dumb to understand Coq.
Fascinating. Amazing how proofs are accepted and not fully validated. The software looks super cool and is key to helping.
Yea it's crazy how ramshackle and piecemeal math is. Crazy how cool mathematicians seem to be about that status quo. When computers take over for us they may throw out a lot of stuff that didn't actually logically flow.
It's amazing just how similar mathematics is to software development once either gets to a large enough scale. When he's talking about "the proof is correct because the elders said so", I couldn't help but think of a parallel with all the vulnerabilities and bugs software engineers import on a daily basis from "libraries the elders have accepted", and we are okay with it because we know it works and "small errors don't matter and we can work around them". No mathematician fully understands Wiles' proof just as no programmer fully understands the Linux kernel, but we are all obligated to accept both.
Doron Zeilberger's book A = B has talked about this. Zeilberger's opinions on formal methods and criticisms in mathematics should be read more.
Computational mathematician master race checking in. Enjoy your single-subject expertise in your silos, pedagogues.
Just kidding. Good talk. The world of mathematics will be blown wide open once we fully harness the power of the PC.
The most amazing thing is how a professor who talks about the formalization of math consistently uses the word "invented".
The speaker was somewhat dismissive of Logic, claiming that "it is all checked and OK". However these computer-based proof systems are going to be based on a Constructive Logic, and as a result some theorems may not quite "say" what they say in the standard mathematical literature. Even Set Theory (which he was also somewhat dismissive of) will be formalised in a Constructive way (and thus be a different theory). The Foundational Framework in use here is a form of (Dependent) Type Theory. In some mathematical areas, perhaps including Combinatorics and Number Theory, many proofs are (essentially) constructive, though it is tedious to dot the i's and cross the t's, as he seems to be saying - hence a proof tool is helpful there.
My understanding is that many of these proof tools were originally developed by Computer Scientists to assist with combinatorially tedious and error prone tasks in Formal Software Engineering. The corresponding Foundational Type theories could fit well there. However to convince Mathematicians in general of their overall validity I think that he will need to spend some time clarifying the foundational differences - including any foundational differences between Coq and Lean etc. There are discussions of these kind of topics in the Constructive Mathematics literature, and I think he will have to refer to that in future.
I'm not sure what you mean here; Lean is not purely constructive? It has the option of using classical logic, and this is used all the time in the lean maths library. Can you elaborate what you mean by this?
@@alexbest8905 Hi. I have just read the Lean PDF "Logic and Proof", which answers most of my points above. I did not necessarily intend to criticize Lean; this presentation just did not give me what I would have wanted to see. So Lean does have a Prop type which helps encode two-valued logic, and the document does confirm the other points I made and discusses the link (in outline) between Lean's Type theory and ZFC Set Theory. As the document points out there are some subtleties because Lean is ultimately constructive (e.g. the existential (∃) elimination rule needs a value of the given type to be pre-declared). There are further questions to be answered though: what about higher ZF axioms (i.e. doing ZF research in Lean); what about declaring another Logic (e.g. Modal logic); the proofs of some non-constructive theorems (the Model Existence theorem was mentioned in the document as "tricky"). The speaker did say that he learned new things trying to prove results using Lean, and I guess that these issues were in there.
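For what it's worth, classical principles are available on demand even though the kernel is constructive by default. A minimal sketch, in plain Lean 4:

```lean
-- Excluded middle, provided by the Classical namespace (ultimately derived
-- from the choice axiom):
example (P : Prop) : P ∨ ¬P := Classical.em P

-- Double-negation elimination, a classically valid step that a purely
-- constructive system would reject:
example (P : Prop) (h : ¬¬P) : P := Classical.byContradiction h
```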
Could LEAN be used in math learning and question solving? I am learning abstract algebra in my spare time, and it feels really lonely, and I'm not sure if I got the practice questions right.
Yep definitely possible
Yes, you can program proofs in Lean, I am sure Lean has packages for Monoids, Groups, Rings and Fields. Not sure about vector spaces though.
@@VasuJaganath There seems to be definitions for linear algebra in mathlib leanprover-community.github.io/mathlib_docs/linear_algebra/basic.html
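A rough sketch of the kind of exercise you could check against mathlib while self-studying (these lemma names are mathlib's at the time of writing; they occasionally move between versions):

```lean
import Mathlib

-- Left cancellation in an arbitrary group, via mathlib's cancellation lemma:
example {G : Type*} [Group G] (a b c : G) (h : a * b = a * c) : b = c :=
  mul_left_cancel h

-- The "socks and shoes" inverse law, again using an existing mathlib lemma:
example {G : Type*} [Group G] (a b : G) : (a * b)⁻¹ = b⁻¹ * a⁻¹ :=
  mul_inv_rev a b
```

Writing these statements yourself and trying to prove them from scratch (rather than finding the library lemma) is a decent way to check your exercise answers.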
This was an inspiring talk, well done Prof. Buzzard.
I'm curious if he got his new computer, it seemed important... 32:10
Alternative title: "Number Theorist Finds Cheap and Highly Productive Manner In Which to Have a Mid-Life Crisis" XD 16:58
Logic programs have been around for a while. When it comes to proving theorems using computer logic that's a tall order. You effectively have to feed in an entire scaffolding of logic in order to set the context. This is before you can even begin to turn the handle to generate results.
It's not surprising that MS is involved with this problem. They have a history in researching code completion in developer environments. A new project of theirs does the same thing but uses machine learning to cross reference instances of code from large pools of developer data. In fact if you look closely you might recognise, this is the same technology used in language translation on the Internet. The results aren't exactly spectacular but it gets the job done to a reasonable degree. I believe processes in contemporary machine learning are actually more mechanistic than heuristic. This is saying a lot when it comes to superimposing the notion to mathematical intuition in solving problems/proofs. Mathematical intuition is very much a human characteristic.
Skylark the work in Gödel’s incompleteness theorem already showed that logic can be codified. I fail to see why it’s so hard to believe that tech could solve some of our deepest problems.
I agree.
It's a greet tool, but reality itself is not a logical condition, it is an intuition.
The limitations within the field of AI will slowly demonstrate that logic is not equal to reality.
Seems like the would-be turning point for theorem provers would be to prove an interesting mathematical theorem in a way that most mathematicians wouldn't have the skills to access the proof, but which is ultimately correct as proved by software. While something similar happened with the four-color theorem, which was proven by Appel and Haken, "confidence of the elders" was only solidified by the computers, because it was basically a proof by exhaustion. You need a sort of "you don't need the 'elders': it compiled, QED" moment.
Currently the ABC conjecture is still believed solved in Japan but not outside of Japan, if the proof were formalized and compiled, it would certainly end the sort of screaming match issues that sort of turned the lecturer off from some level of human trust.
23:31
* He’s thinking, “If I can do this I can do anything.”
* I’m thinking, “If I could understand what he said, I could probably figure out why I’m so broke .”
Hahahaha
It's funny because, while I think you can't sell formal proofs to British mathematicians in the "Gauss and Euler" tradition, I have a feeling the same argument would have more success in the Bourbaki-influenced French tradition, where they like to formalize quite a lot and be really sure that math is founded on solid foundations.
Great! Your work might have much more profound implications. Now in 2024 AI can create average-quality research articles (in machine learning field), from conceptualization to implementation. With the creation of strong AI in the future, I see no reason that it can't invent new theories like human mathematicians. Formalizing math theories and defining things will make future AI able to do true mathematics (not something ambiguous in correctness).
lucky enough to enjoy two lecture courses from big Kev at Imperial
Wow, I never realized academia is so political and, well, human and subject to all of the foibles of ego.
Oh yeah it is. Even more so than many other fields.
Good sort of lecturer to have around as a student though. :)
That's the one good thing about computer science. People are using it to make money, so if a system is broken we often notice.
The bad part is that once we notice we don't necessarily fix it
Science trends are more trendy than fashion trends. Same is also often said about computer programming. Recently, especially in the context of JavaScript frameworks.
The idea of formalizing math with the help of computers has been around for a long time. One of the first serious attempts is the ongoing Mizar project.
Since 1989 people have been building a library of formal mathematical definitions and proofs called Mizar mizar.org/
There is an academic research journal called Formalized Mathematics where Mizar proofs are publishable.
The Mizar library mizar.org/library/ is pretty big.
Here is a list of 100 proofs, 69 of which have been completed in Mizar: mizar.org/100/index.html
*_...the future of mathematics, in this vein, is, compilation of theorems, proofs, lemmas, corollaries, partial proofs, partial proof spaces, special cases, formal and informal (cf induction vs back-induction e.g. if the kth-step has the same form as the 0th then the −kth must've too), equivalences, arbitrary-proof-generators... (computationing)..._*
Around 1:04:30, how does LEAN avoid setoid hell, if Coq doesn't? Is this something to do with making type classes available with quotient spaces? What does that even mean?
The coq people weren't happy about that comment. There's an axiom in Lean that lets you do quotients that could in principle be added to Coq but isn't. I think what they do instead is just use a type equipped with an equivalence relation instead of quotienting by the equality and using lemmas of the form x ~ y -> f(x) ~ f(y) whenever they need to make a substitution. This does indeed sound like hell.
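Roughly, the Lean side looks like this; a toy sketch in plain Lean 4 (not mathlib's actual constructions), just to show where the quotient machinery earns its keep:

```lean
-- ℕ quotiented by "same parity".  Quotient.sound (backed by the quotient axiom)
-- is what identifies related elements outright, which is what spares you from
-- carrying x ~ y -> f(x) ~ f(y) lemmas around by hand.
def paritySetoid : Setoid Nat where
  r a b := a % 2 = b % 2
  iseqv := ⟨fun _ => rfl, fun h => h.symm, fun h₁ h₂ => h₁.trans h₂⟩

def Mod2 := Quotient paritySetoid

def Mod2.mk (n : Nat) : Mod2 := Quotient.mk paritySetoid n

-- 0 and 2 land on the same equivalence class, as a genuine equality:
example : Mod2.mk 0 = Mod2.mk 2 := Quotient.sound (rfl : 0 % 2 = 2 % 2)
```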
my coq is extremely hard
Great stuff. Yet another part of the world where software is going to supersede humanity and leave us behind. Bring it on...
I think you underestimate the difficulty of this domain.
It's not going to leave "us" behind.
It's going to interface with us. To what degree, is hopefully up to the individual.
@@dialecticalmonist3405 Likely to be true. The future of "us" is probably cyborg...
@@MikeKleinsteuber
Book
www.amazon.com/Artilect-War-Controversy-Concerning-Intelligent/dp/0882801546
Software is written by groups/teams of humans, so it is made up of the combined knowledge of many humans, and this is what makes it surpass the knowledge of the individual human.
This stuff looks really cool. Might have to check out this scene a little
Really enjoyed the lecture. Wonder whether the Xena project will accept/encourage contributions from amateur mathematicians, similar to the polymath project. I'm sure many computer scientists would love to contribute.
At 9:10 he said computers will beat humans at math in 10 years. Well, the most it will take is 7-8 years, when o3/GPT7 is released. They will have far more advanced reasoning than humans, have infinite context, and find connections across various literatures.
The future of mathematics is digital!
let's get digital, digital. I want to get digital.
@@paulgoogol2652 Yes! play the video :) ruclips.net/video/0hFstusDZNk/видео.htmlm56s
Mathematics is a language of intuition.
Computer science is an interface between languages.
Language itself is an intuition of correspondence.
All these systems will eventually prove, is that intuition is more fundamental than logic.
I hope those pants are an indication of future mathematics!
Category theory and type theory are nice. Why regard them as something other than proper?
Because nobody outside of those areas really cares about them currently.
@@jonhillery7736 Ah, but they provide nice isomorphisms, so if something is discovered in category theory, then we will have an analogous concept discovered elsewhere.
: ) which is pretty cool and worthwhile in my opinion. People are attracted to a particular topic, naturally, and they will indirectly solve a problem in another topic!
I think he means that "proper" mathematicians, the vast majority, are not working in foundations and don't bother to try to follow current work in foundations like category theory. Many research mathematicians will use a bit of category theory as a tool or a language for something they also think of in another way, but they won't see category theory as a source of new ideas that they care about.
Category theory works at the level of "objects" and what connects them (i.e., groups and homomorphisms, up to isomorphism and never "on the nose," it won't care about constructing a particular homomorphism). By design categories are blind to the elements of objects and things like multiplication tables (Cayley tables) that define particular groups. Category theory can't work at the level where one element equals another, it only works at the level where two objects are isomorphic (or categorically equivalent, or left adjoint or something...) and so will only take a "proper" mathematician so far before they have to move back to the level of the elements of the structures they are interested in. Many mathematicians know something about universal properties and think they are a good thing, but it doesn't make them want to read papers on category theory, or approve applications to their department from category theorists or mathematical logic researchers. It's like most mathematicians will use set operations and axiom systems, but are completely uninterested in recent work in set theory or logic. They think, those fields are like Euclidean plane geometry, it's nice and useful but completely understood. There is nothing new to discover for a researcher.
Too general to create something substantial.
That's a whole lot of railing against the old guard without ever using the words "old guard". Though the word "elder" is used which I suppose means the same thing
There's a concrete plan and call to action
The expectation theorem, what is the next move?!
Where can i get that shirt?
I have an actual problem:
I am a programmer without a degree collaborating with a mathematician, both of us outside any institution. I often don't understand many of the things he tells me, and vice versa :D. I had thought mathematicians didn't like to collaborate with programmers, or "computer scientists", but you embrace that collaboration in a way that is inspirational for me.
I really like your way of thinking: accepting error and doubting everything, even when you say you prefer rigour over beauty :D. And Lean seems like something that has been on my mind for many years... and it turns out it has been built. A way to use "new languages" to build mathematics.
We are refuting an old theorem, in a way that seems to work very well. I can't speak your language properly... it is a proof "by competition". I built some structures that give the natural numbers superpowers, and then started analysing every possible case as far as I can understand them. But none of it is done in a "rigorous way"... just with examples. The theorem says something is impossible, so we build examples that prove it is possible...
The idea is to surround the theorem in a jail it cannot escape from, even stealing its best tools.
But you seem to be an expert in algebra. You like rigour and, like me, you trust computers to test your ideas. I used mine to test the formulas, to see if I had forgotten some "+1" somewhere... things like that.
What would be your opinion of work based only on examples and observations? I mean... I can show a mathematical fact, like "see, this is happening..." but for me it is totally impossible to write it in "standard language". My partner says that writing all of our work in a "rigorous way" could take years. BUT IT EXISTS, AND IT IS HAPPENING, AND IT SEEMS TO WORK VERY WELL.
I always tried to find a mathematician to write it all rigorously, but instead he agreed to work my way :D. I don't know whether my abstract structures are hard to translate to Lean... but it is probably the right tool for "writing it in a proper way". My level in mathematics is very low, but simple ideas can need complex proofs.
But my other doubt is this: I haven't made it public, but... when I described in a forum what I can do, someone said it would be impossible because there is no way to deal with non-computable numbers... but they were wrong. I haven't told them the trick, because I want the prestige... bla bla bla... but the point is that Lean could probably reach a bad conclusion too... and for that reason we will always need humans. It can follow a set of definitions... and build the tree down to the axioms (if I understood correctly)... but if some theorem is wrong, it could reach a bad conclusion. Am I wrong?
If your axioms are fine, and Lean accepts your proof in terms of them, it's very likely (apart from Lean being possibly buggy) that your theorem is true. It follows the rules of logic mechanically.
The potential trouble comes when we start picking axioms that are questionable in the first place. For a long time, this will be necessary in order to do anything a mathematician considers interesting, because the proofs of certain foundational things might not have been formalized. But a great deal of mathematics could theoretically be formalized in dependent type theory with no axioms at all - just the rules of type theory. Probably you'll end up wanting the law of excluded middle at some points and the axiom of choice in others, and there may be one or two other things depending on what sort of mathematics you're working with, but not much more is required.
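One practical consequence: you can always ask Lean which axioms a given proof ended up depending on. A minimal sketch with a toy declaration of our own:

```lean
-- A classical-looking fact, proved via the classical machinery:
theorem em_demo (P : Prop) : P ∨ ¬P := Classical.em P

-- Ask Lean which axioms the proof actually used; for `em_demo` this reports
-- propext, Classical.choice and Quot.sound.  A proof built without any of
-- them reports no axioms at all.
#print axioms em_demo
```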
@@cgibbard My story is a weird one. I don't have "axioms" in my work... I tried to use some to justify my ideas, but formalizing the paper is costing much more effort than actually solving the problem.
The real problem is just an indexation problem that is supposed to be impossible. As a programmer I have my own tools to solve that problem, but now my partner and I are trying to translate it into maths, to try to get the maths community to accept the solution. But the solution "works by itself" without formalization in software :D.
That is why Lean caught my attention: I am searching for a way to formalize it all, because otherwise we are going to publish without formalizing it at all. Because it "works"... and many things can be explained in a simple way that is very hard to deny. I guess I work like the old mathematicians, because I am not a mathematician: lots of text explaining concepts.
But if every single piece of maths depends on ZFC, and ZFC has infinitely many axioms because of the schemata, isn't it impossible to implement any sort of general theorem prover that is guaranteed to run in a finite amount of time?
I'm not sure what the question is, but it's certainly possible to write a program that checks ZFC proofs in finite time. The axiom schemata mean there are indeed in some sense infinitely many axioms, but you can check in finite time if a formula is one of the axioms, just like there are infinitely many prime numbers but you can still check in finite time if a particular number is prime.
@@ch2263 that makes sense. I guess the total number of axioms "in the schemata" that have some relation to the other axioms relevant to what you are trying to prove would be finite. thanks!
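Concretely, the primality half of the analogy in Lean (this sketch leans on mathlib's norm_num extension; the point is just that deciding membership of one element in an infinite, recursively described set is a finite computation):

```lean
import Mathlib

-- The set of primes is infinite, but checking one candidate is finite work;
-- norm_num carries out that finite check here.
example : Nat.Prime 101 := by norm_num
```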
@1:02:50 does anyone know what system he is talking about ("errands")?
I spent two weeks playing with coq - Kevin buzzard
Dead 😂
I spent 50 years playing with coq. Many of my friends have dedicated their lives to playing with coq. For a lot of guys I know, they're no longer just playing with coq, coq is their life. They absolutely use coq to generate new things, and see what comes out of it. Sometimes they share what comes out of their coq with other coq enthusiasts. Sometimes it not easy, though, you really have to work at that coq. Sometimes you gotta get angry at that coq. Sometimes I get so mad, I give my coq a smack. Coq smack, I call it. I've literally pounded my coq, I got so mad. Some guys are very empathetic, and will pound other guys' coq for them. I don't do that, I just watch. It's pretty safe to say, that they're are coq lovers out there. Some are just coq-curious, while others are in coq denial. At the end of the day, I still get joy from playing with coq and the good stuff that eventually comes out of it.
@@lonnybulldozer8426 Coq: Initial release: 1 May 1989; 30 years ago (version 4.10)
@@Neavris What's your point?
@@lonnybulldozer8426 Obviously, the 50 years brag needs an explanation. Thierry Coquand isn't even 60. Are you 70? Just explain, plain and simple.
@@Neavris Oh, I get it. You're a moron. Unless a joke is graspable to a toddler, you're not going to get it. Even the slightest bit of subtlety will throw you off. You're the reason the world is not a pleasant place to be.
Great talk. Great pants.
I am just fascinated by the degree to which mathematicians are not benefiting from tools that essentially emerged from mathematics, yet thinking about the level of technology utilisation by educational institutions, for educational purposes, I kind of get back on Earth remembering the 'sad' reality where we are wasting our collective efforts on trivial endeavours ..
Tradition in teaching and Lean (including CIC) is pretty hard to understand
What a talk man!!! ❤
First of all, you can explain p-adic Galois representations to undergraduates. I saw it done. The question is: how much of the theory do they need to understand in order to formalize things like that in LEAN?
Secondly, a better test of his approach is to test things that are simple to observe but seem unfit for an automatic prover. Basic topology works best here. Show me a proof that, if you cut any disc from a manifold, you get the same resulting space.
I don't think that's true, unless the manifold is connected. If you let the manifold be the disjoint union of a sphere and a torus, then you get either a disk and a torus, or a sphere and a torus-with-a-hole.
@@digama0 Indeed. I meant a connected manifold.
Did they have to call it Coq... just saying what everyone else is thinking...
The French knew exactly what they were doing haha
@@jamesh625 Good point- I know other similar examples. Maybe it's the French getting back at prudish English and Americans! They put it in the terminology and can pretend innocence!!!
Some people pronounce it like "coke" (which isn't the official pronunciation, but it is the pronunciation that I prefer)
Automatically enumerate _All_The_Proofs_, in magnitude order :) In x^x time complexity :( Except it's also the Xth hyper-operation :( And all but a vanishingly small subset are provably unprovable :( I hope it's not that hard. 🤷🏻♂️
Where can I look for more information about Lean?
Literally outlines the issues with current academic systems at 13:00 ish
This will become Modern Mathematics.
5:50 You have to ask the Elders of the Internet
A century after Kurt Gödel, a number theorist says terms like "intellectual domination, and others have nothing to offer". Really! I feel sad and sorry for him, his students and his audience.
lol what a nerd
@@paulgoogol2652 You have to be a little patient to understand the issue, by the way. (Do you see my comment from 2 hours ago? I think it is still at the top of the comments, as I see it from my computer.) Thanks
There is a possibility that Gödel proved his theorem in an insufficient way. So be careful about 'the way' you say what you are thinking: you will probably need to eat those words in the near future. Remember to always be nice; it's a better way to express yourself.
@@bassamkarzeddin6419 I see. Good comment.
@@FistroMan hey! Probability is a substory of number theory, so that's also discardable. I get there should be civility in public discourse. But, that shouldn't imply we start respecting ape shitery. I should be able to call something stupid, "stupid", otherwise what's the point of freedom of expression, if it's just for flowery words, which mean nothing.
That's fantastic❤!
It seems to me they are teaching computers how to do math, just as a few decades ago they were able to teach computers to play chess... I think they will succeed
(International philosophers project)
Theorems conditions.
International philosophers project are and have to be the same thing as a standing on their own. If and when put together are and have to become different.
International on its own is all encompassing.
Philosophers on its own is all encompassing.
Project on its own is all encompassing.
(International philosophers project), as one statement is not (is not meant or taken as) all encompassing, it refers to three different structural group types which if and when put together become a different group type.
Yet by always being exactly the same thing as three different types, quantum probabilistic predictions can be derived at pleasure under all circumstances exactly as with and within the none quantum states. Sigma is/= X,Y,Z no matter what, how or when.
Bell is wrong, all bells are wrong. In fact forget all what is written before this sentence and use only the sentence (Bell is wrong, all bells are wrong), and that very sentence shows Bell's inequality theorem to be wrong, as clearly as the very statement Bell is wrong, all bells are wrong, which in itself is a theorem that obeys preset conditions and a, any and possible combinations of quantum probabilistic predictions can be derived at pleasure with and within all possible conditions from, for, by, off, and to it.
Imagine having every proven theorem encoded in Lean and you could verify all of them at the press of a button.
Great video!
Who's the person introducing the speaker? His accent sounds Brazilian to me.
It is me :) Yes, I am Brazilian.
@@leonardodemoura3510 hahaha, I knew it! Excellent lecture, thank you for making it available!
Links in the hyperlinked talk slides don't work.
It looks like they just photocopied the slides for some reason....
Onwards & Forwards Kevin (y)
When can we expect Lean 4?
Coming here to happily report, for those who think that someone who thinks like Kevin SHOULD be incentivized to keep going: in 2024 he successfully got a grant to formalize Fermat's Last Theorem in Lean!
Early in school, I remember learning about proofs with Euclidean geometry. Then I remember as the math got more advanced, the proofs had giant gaps compared to Euclid. Then I hear about things like Banach-Tarski, and proofs hundreds of pages long of free-hand mathematics, and now I'm very skeptical of modern math.
Banach-Tarski almost certainly doesn't have anything logically wrong with it - it has been formalised in Coq even. But arbitrary infinite sets of points are not terribly intuitive beasts, and the axiom of choice gives you some powerful leverage to construct interesting sets with very peculiar properties.
@@cgibbard It may be logically correct in that it follows from axioms, but its connection to reality is dubious.
Mathematics doesn't make any claims about "reality" as such. Physics does though, so one might have an argument that systems involving the axiom of choice are to some degree not as well suited to physics. Of course, if one is careful about which statements are allowed to be interpreted as predictions, it may be fine to have things like this around.
I think the main thing that the axiom of choice does to harm physics is actually *not* the wacky infinite stuff that it does, but rather, that when taken unrestricted, it lets you prove the law of excluded middle, which already comes with a big downside for physics: it destroys the computational nature of proofs, meaning that it's possible to define numbers (perhaps numbers involved in "predictions") which can't be computed. That might be problematic for a setting where we'd like to guarantee that statements are falsifiable.
Then again, what really constitutes a prediction? Is it sufficient to put real number bounds on the quantity we're measuring even if we can't compute the bounds at all? I don't think most physicists would regard that as adequate -- though it's quite hard to be certain until someone actually tries to calculate something that it's possible to really do the computation. But yeah, the predictions themselves only really tend to be considered made once we have something like a rational approximation with a handful of digits and error bars around it -- just having some variable which is a specific real number that we don't know how to calculate yet isn't really good enough.
Similarly, the sets which are constructed in the proof of the Banach-Tarski paradox really have no physical interpretation whatsoever.
@@cgibbard Math, historically, started out as based in reality. Simple counting. Then geometry. Of course, things got more abstract with zero, negative numbers, square roots, imaginary numbers, etc, but they at least had a useful correspondence to the real world. Things got more confused when alternative geometries came around, though they ended up turning out useful as well.
I'm not sure how the law of excluded middle leads to uncomputable, but definable, numbers. The weird thing to me seems to be the reals, where you can have a number that takes an infinite amount of numbers to define.
Well, the explanation for how removing excluded middle gets you a sense of computation is a bit involved, but the gist of it is that proofs themselves end up having an interpretation in terms of computational steps. For example, in classical logic, we usually prove that A implies B by first assuming A to be true, and then taking some steps to try to conclude B, and if we succeed, we're allowed to conclude A -> B. Type theory / intuitionistic logic elaborates this by saying that in order to construct a function of type A -> B, we're allowed to assume that we have a variable x of type A, and in terms of this, we're to produce an expression e of type B, and if we succeed, then (λx. e) (or if you prefer, we could choose some syntax like x ↦ e) is a function of type A -> B. That is, this is a notation for a function parameterised over x, whose result is the expression e, and our proofs of implications are descriptions of functions of this sort.
Similarly, a proof of "A and B" corresponds to a pair of proofs of A and of B, and a proof of "A or B" turns into a proof of either A or of B, together with a tag saying which of the two we're opting to take. It's this last point which makes the assumption "A or (not A)" a bit magical: in order for this interpretation to hold, we would need to know which of the two is the case for any statement, which we don't actually know much of the time. Thinking of proofs as programs, a program of type "(A or B) -> C" might take a different branch depending on which of A or B holds, producing a different result of type C in each case.
This is essentially the heart of what type theories like Lean and Coq do -- they're programming languages where the types of values are sufficiently expressive to express all the theorems of mathematics, and the programs constitute honest proofs of those propositions. So long as you don't throw in additional axioms (which are treated roughly like variables whose evaluation just gets stuck), every proof of existence of something ends up implicitly consisting of a description of how to compute that thing, together with a proof that it satisfies the property required.
It's still possible in such a system to define something like the computable real numbers, where each real number becomes a function from a natural number n to a rational number within 1/2^n of the given real (i.e. the usual Cauchy sequences approach with a slightly stronger condition to make sure we can actually work out the digits of any such number). So if I can prove that some such computable real number exists, I get a function into which I can plug any natural number I want, and get an approximation which is as good as I'd like, and unlike the functions of classical mathematics, this is actually a mechanical (though arbitrarily complex) computational process.
Although in a practical sense, I might write down proofs which correspond to algorithms that are entirely impractical for actually computing the numbers we want in a reasonable amount of time, there's something nice about knowing at least in principle that it could be done.
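Spelled out in Lean notation, the correspondence sketched above looks roughly like this (a minimal sketch in plain Lean 4):

```lean
-- An implication is proved by a function, a conjunction by a pair,
-- a disjunction by a tagged choice of one side.
example (A B : Prop) (f : A → B) (a : A) : B := f a

example (A B : Prop) (ha : A) (hb : B) : A ∧ B := ⟨ha, hb⟩

example (A B : Prop) (ha : A) : A ∨ B := Or.inl ha

-- Using a disjunction is exactly a branch on the tag:
example (A B C : Prop) (f : A → C) (g : B → C) (h : A ∨ B) : C :=
  match h with
  | Or.inl a => f a
  | Or.inr b => g b
```

Assuming excluded middle amounts to postulating a term of type `A ∨ ¬A` for every `A`, with no tag you can actually branch on by computation, which is where the computational reading breaks down.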
I am gonna call it now. "Computer Science" will be the new religious name people use when talking about formal discoveries. Mathematics will become like philosophy, that "unscientific" or "unrigorous" department on the other end of campus.
How can math be unrigorous? We prove everything we say.
@@DanielBWilliams That is the job left for the computers to complete. Mathematicians will only need to focus on discovering and inventing new theories which depends more on imagination than logical thinking.
@@jmw1500 Knowing what kind of logic you need isn't really necessary to be rigorous.
Ahh open source LEAN got it!
CS myself, lots of computing stuff is too hard to use, nobody really benefits from that.
Wow. This is an inspiring talk.
lmao @ 20:00 "I'm shit hot"
I find this worthy to invest in. There is potential for this type of application in ALL fields. As a proper ally/friend/family we should look out for him. It's a much wiser and safer investment compared to just smashing particles together.
Thank you for allowing me to be a part of the conversation.
❤
The irony is that it is the first search result for "e theorem prover". If the exact name of your software isn't enough to find it, you're definitely not hot stuff at the moment.
(Note that you just have to believe me on that - I did the same in Tor, and it was the second result. YT, like every other big player, just likes your personal data too much.)
1:11:30
If you have any machine capable of creating its own Evolution Matrix, then that machine has its own self!!
Now, how to write the Evolution Matrix code... that's a problem posed to you by me??
"Can the too tell us anyhing new?" is a stupid question.
The tool isn't a mathematician. Not yet. The question is "how does this tool help us?"
A great seminar on a new and exciting road for math discovery.
THAT SAID, recall the Star Trek episode 'Court martial' (1966) when algorithmic jurisprudence was not enough to save Captain Kirk.
KIRK: Yes. (Notices the piles of books everywhere) What is all this?
COGLEY: I figure we'll be spending some time together, so I moved in.
KIRK: I hope I'm not crowding you.
COGLEY: What's the matter? Don't you like books?
KIRK: Oh, I like them fine, but a computer takes less space.
COGLEY: A computer, huh? I got one of these in my office. Contains all the precedents. The synthesis of all the great legal decisions written throughout time. I never use it.
KIRK: Why not?
COGLEY: I've got my own system. Books, young man, books. Thousands of them. If time wasn't so important, I'd show you something. My library. Thousands of books.
KIRK: And what would be the point?
COGLEY: This is where the law is. Not in that homogenised, pasteurised, synthesiser. Do you want to know the law, the ancient concepts in their own language, Learn the intent of the men who wrote them, from Moses to the tribunal of Alpha 3? Books.
1921
Control A
wow.
Why didn't he just prove that the derivative of sin is cos by taking the limit as h -> 0 of (sin(x+h) - sin x)/h?
This also comes down to a circular argument (as far as I know), because some identity used in that proof cannot be proven without the derivative of sine.
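For reference, the limit computation in question, assuming the angle-addition formula and the two standard limits:

$$\frac{\sin(x+h)-\sin x}{h} = \sin x\cdot\frac{\cos h - 1}{h} + \cos x\cdot\frac{\sin h}{h} \;\longrightarrow\; \sin x\cdot 0 + \cos x\cdot 1 = \cos x \quad (h \to 0),$$

which needs $\lim_{h\to 0}\tfrac{\sin h}{h}=1$ and $\lim_{h\to 0}\tfrac{\cos h - 1}{h}=0$. The usual circularity worry is that the standard geometric proof of $\lim_{h\to 0}\tfrac{\sin h}{h}=1$ appeals to arc length or sector area, which on some definitions of sine are themselves developed using its derivative.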
Hi
There are no numbers in number theory, that's what number theory has become.
If numbers themselves divulged structure and pattern of numbers then the field would be barren.
Guys... numbers are an intellectual invention: created to map order and metric onto the Real-World fabric of The Universe.
Geometry is PRIMARY. Numbers are Ideal, created out of whole cloth, and PROVISIONAL and SECONDARY.
People who put numbers ahead of the geometry of Reality are simply Platonic Idealists, at root -- and might as well be discussing how many angels can dance on the head of a pin.
it's a common theme in math to jump from simple things to more complex, and use the more complex framework to prove stuff about the simple things
Guys it's a quote from the talk...
Trotskisty You are confused. That which follows from a collection of premises follows from those premises. Topics in math do not follow a single linear progression. Many topics can be introduced in a variety of orders, and often either of two topics can be used to introduce or define the other topic. There is no problem in doing this. Either can be done first. The end results are the same. There are many equivalent ways to define the same structures.
those pants tho
Great!!!
By the way, have you ever tried to look at Gödel's argument/formalism/theorem regarding the existence of God, using the Lean Theorem Prover? The idea goes something like: "(i) positive traits are necessarily positive, (ii) being God is inherently positive, (iii) existing is a positive trait, and (iv) God comprises all positive traits". Based on these assumptions Gödel deduced that God must necessarily exist. Now, as a strong agnostic I have no ideological stake in this game whatsoever, nor am I trying to confirm, refute, or influence anyone's belief system. Also, as I recall Gödel himself was a staunch atheist as well, and only extended this theorem as a logical exercise. That said, supposedly the formalization/theorem has been examined by an automated theorem prover and subsequently confirmed. From a purely mathematical curiosity point of view, do you know of any similar result using Lean?
I believe the proof was carried out in Isabelle. As far as anyone is concerned, it's a trivial logical exercise with no ontological consequences. It's ten minutes' work to prove that in Lean. I think the philosophers mainly think that the problem is assumption (iii); existence is not a property of an object, because calling something a thing is already asserting its existence (unless you mean something else by "exists", and then it becomes a linguistic game). There is nothing interesting here from a formal proving standpoint, only a philosophical question.
This argument only proves that the idea of God exists, not that God itself exists.
Sounds not dissimilar to Descartes' original third meditation argument
A pencil that self sharpens? Eraser that leaves 1/2 the marks? Paper that don’t rip? Idk but it will be pencil and paper till Jesus comes back
Mathematicians are the nerds of science.
I love his clothes :)
21:17 "so i think that's kinda weird" lol