DeepMind’s AlphaGeometry AI: 100,000,000 Examples!

  • Published: 29 Sep 2024
  • ❤️ Check out Weights & Biases and take their great courses for free: wandb.me/paper...
    📝 The paper "#AlphaGeometry: An Olympiad-level AI system for geometry" is available here:
    deepmind.googl...
    Me on Twitter/X: / twominutepapers
    📝 My latest paper on simulations that look almost like reality is available for free here:
    rdcu.be/cWPfD
    Or this is the orig. Nature Physics link with clickable citations:
    www.nature.com...
    🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
    Alex Balfanz, Alex Haro, B Shang, Benji Rabhan, Bret Brizzee, Gaston Ingaramo, Gordon Child, Jace O'Brien, John Le, Kyle Davis, Lukas Biewald, Martin, Michael Albrecht, Michael Tedder, Owen Skarpness, Richard Putra Iskandar, Richard Sundvall, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Tybie Fitzhugh, Ueli Gallizzi.
    If you wish to appear here or pick up other perks, click here: / twominutepapers
    Thumbnail background design: Felícia Zsolnai-Fehér - felicia.hu
    Károly Zsolnai-Fehér's research works: cg.tuwien.ac.a...
    Twitter: / twominutepapers

Comments • 330

  • @marshallmcluhan33
    @marshallmcluhan33 8 months ago +923

    What a time to be a sine

    • @jakeroper1096
      @jakeroper1096 8 months ago +95

      don’t go off on a Tangent now

    • @spookyrays2816
      @spookyrays2816 8 months ago +7

      I chuckled

    • @Speedrunner.007
      @Speedrunner.007 8 months ago +8

      here before this blows up

    • @danisob3633
      @danisob3633 8 months ago +40

      what a time to be A line

    • @MeesterG
      @MeesterG 8 months ago +6

      What a time to be a five

  • @vectoralphaSec
    @vectoralphaSec 8 months ago +116

    This is really impressive. AGI will revolutionize the world when it happens.

    • @CatfoodChronicles6737
      @CatfoodChronicles6737 8 months ago +8

      Or, in the wrong hands, it could exploit people's ignorance and hand all their power to whoever writes the prompts.

    • @stell4you
      @stell4you 8 months ago +22

      Maybe we can create our own god.

    • @JackCrossSama
      @JackCrossSama 8 months ago +5

      Instead of a Paradise, we will create our own hell.

    • @nemonomen3340
      @nemonomen3340 8 months ago +12

      True, but I think it’s worth noting that AI would drastically change the world even if real AGI never came.

    • @-BarathKumarS
      @-BarathKumarS 8 months ago

      How is this Olympiad solver gonna help though?

  • @yurirodrigues2216
    @yurirodrigues2216 8 months ago +1

    It pulled the hat out of the rabbit

  • @krox477
    @krox477 8 months ago

    This is a huge discovery

  • @Dron008
    @Dron008 8 months ago

    Can they prove theorems and do the same for other math?

  • @NeroDefogger
    @NeroDefogger 8 months ago

    it's math... it's not surprising... dunno... I don't know how to say it... it's just not... math is just math...

  • @Jacobk-g7r
    @Jacobk-g7r 8 months ago

    I told you last video.

  • @shadowdragon3521
    @shadowdragon3521 8 months ago +247

    This is a huge breakthrough. Right now LLMs tend to struggle with logic and reasoning, but that's about to change. This will open up so many possibilities!

    • @LincolnWorld
      @LincolnWorld 8 months ago +38

      A lot of people tend to struggle with logic and reasoning too. At least half the population.

    • @martiddy
      @martiddy 8 months ago +44

      If AI is capable of mastering mathematics, it will create a snowball effect in every branch of science and engineering, and it will eventually master those as well.

    • @Anton_Sh.
      @Anton_Sh. 8 months ago +1

      @martiddy Damn. Is that good or bad? We don't know yet...

    • @memegazer
      @memegazer 8 months ago

      ruclips.net/video/ulwUkaKjgY0/видео.html

    • @asdfasdfasdf1218
      @asdfasdfasdf1218 8 months ago +19

      @@LincolnWorld That's why most scientific and technological advancement is done by the top 0.01% of the population. Almost all of humanity is "ignorable" in terms of human progress, it's only the top of the top that pushed the whole thing forward. In other words, the kind of AI that can really push things forward is not one mimicking the average human, but the top 0.01%.

  • @Baekstrom
    @Baekstrom 8 months ago +21

    Someone should show this to Roger Penrose. Those are the kinds of problems that he argued could never be solved by a classical computer.

  • @Ken1171Designs
    @Ken1171Designs 8 months ago +209

    This is impressive, but the part I am most interested in is the recent trend of making models much smaller. We are also seeing smaller models become more efficient than larger ones. I have hopes this will eventually bring the power of GPT-4 to consumer-grade GPUs running locally. That would really be something.

    • @ryzikx
      @ryzikx 8 months ago +20

      Models like Nous Hermes, Mixtral, and SOLAR already surpass GPT-3.5. We just need multimodal models like LLaVA to improve a bit more and we'll have GPT-4-level capabilities.

    • @Ken1171Designs
      @Ken1171Designs 8 months ago +10

      @ryzikx Yes, that would be the thing. However, the Nous Hermes Mixtral models I have found don't fit into my 24 GB of GPU VRAM, so I can't use them locally. There is also the context-token limit, which bottlenecks the size of the domain the AI can be used for. The majority of models are limited to only 4K tokens, so they soon forget what they were talking about. More recently I have seen models with 8K tokens, but the models themselves weren't very capable. I have tried a few 30B models, but they were too slow on my GPU to be usable, so I am sticking to 13B. Which 13B model would you say is the most capable that fits into 24 GB of VRAM?

    • @Mega4est
      @Mega4est 8 months ago +3

      @Ken1171Designs From my experience, models below 30B are just not good enough. Even though you cannot fit those models fully on the GPU, you can offload some of the layers to speed up inference. I have been getting good results with quantized Mixtral 8x7B at a speed of 8-9 tokens per second by loading ~15 layers onto the GPU and leaving the rest to the CPU. Not very fast, but the results are of better quality, and I would not recommend going lower than that.

    • @jacobnunya808
      @jacobnunya808 8 months ago +2

      consumer grade GPU or an AI processor on things as small as smartphones.

    • @Ken1171Designs
      @Ken1171Designs 8 months ago +3

      @Mega4est This basically summarizes my original comment above: the models that can actually do something (30B and up) are too large for consumer-grade hardware. The articles on small 7B models being more efficient than their 30B or even 70B counterparts describe a trend that gives me some hope for the future. What worries me, however, is that smaller models tend to be limited to small domains due to the context-token bottleneck. There is still some way to go. ^___^

  • @Gingnose
    @Gingnose 8 months ago +48

    It would be cool if AI started to see intricate geometry, in ways we never thought of, in things like art, architecture, and engineering principles. Geometry carries over into more than just other realms of mathematics. This achievement really showcases what is to come in the future.

    • @Gigizhvania
      @Gigizhvania 8 months ago

      Too much AI idealism will kill our every function and we'll end up bald and grey with the only thing left to do is to observe other civilizations like ants.

    • @me_hanics
      @me_hanics 8 months ago +1

      It would be cool, but as long as AI is "trained on data" I think it can only use ideas we have already invented. However, non-mathematicians (e.g. artists) may have come up with ideas long ago which went unnoticed and may be breakthroughs for seeing intricate geometries.

  • @feynstein1004
    @feynstein1004 8 months ago +108

    I'm literally watching Skynet being born. What a time to be alive indeed 😁

    • @ArlindoBuriti
      @ArlindoBuriti 8 months ago +5

      Yeah bro... the idea of my novel is coming to light, where humans can only fight the AI with a personal jammer on their bodies, because without that it never misses.

    • @feynstein1004
      @feynstein1004 8 months ago +1

      @@ArlindoBuriti Sounds lit 😀

    •  8 months ago

      Not a huge worry soon >:).

    • @feynstein1004
      @feynstein1004 8 months ago

      Eh?

    • @TheFartoholic
      @TheFartoholic 8 months ago +5

      What a time to still be alive!

  • @user-mm8ts9ht4l
    @user-mm8ts9ht4l 8 months ago +14

    This is not something I expected we would make significant progress on so soon.

  • @spookyrays2816
    @spookyrays2816 8 months ago +14

    Thank you for making my go-to videos to watch while I'm eating lunch. You really improve the quality of my day. Thank you!

  • @LetsGenocide
    @LetsGenocide 8 months ago +10

    A terrifying amount of progress for a single research project! Can't wait for it to be applied to other AIs.

  • @ayufkhan-bv2um
    @ayufkhan-bv2um 8 months ago +2

    Is your voice AI-generated?

  • @vash2698
    @vash2698 8 months ago +25

    This would work well in a mixture-of-experts type model or as a specialized agent in a swarm, right? GPT-4's limitations here would be mitigated by fine-tuning it to leverage a model like AlphaGeometry as a tool. As long as it can get as far as asking the question and validating the answer, it would likely be an effective approach.

    • @markonfilms
      @markonfilms 8 months ago

      I've heard rumors of GPT-4 Turbo being distilled into an MoE with about 230(?)B active parameters out of a total of something like 1.4 trillion.

  • @gix10000
    @gix10000 8 months ago +18

    This might be the breakthrough needed for AI to start designing general AI. Imagine it comes up with a general mathematical theory of intelligence more complete than what we've had up to now, and then that model improves upon itself, and so on. The next 5 years are going to be amazing to watch.

  • @hectoris919
    @hectoris919 8 months ago +33

    I know very little about AI, but an idea has been bouncing around my head for a while: what would happen if you made a dataset containing a ton of trained AI models, with each model labeled with a text description of what it can do, along with its dimensions and parameter count?
    Would it be possible to make a text-to-model AI that lets you describe a functionality and have it churn out a model with weights close to what is needed to achieve that functionality?

    • @alansmithee419
      @alansmithee419 8 months ago +10

      It already takes huge quantities of data to train language models (like, internet-encompassing quantities).
      To train a model on text-to-model data would require millions (bare minimum conservative estimate to my mind) of examples of high-quality AIs, and we simply don't have that many. Not to mention architecture, training method, and data are all just as if not more important than parameter/node/layer count.
      I'm not an expert, so I could easily be wrong, but I don't think there's remotely enough data to achieve this usefully, if it could even be particularly useful at all.

    • @j3ffn4v4rr0
      @j3ffn4v4rr0 8 months ago +3

      That's a super interesting idea, and I admit I'm also fairly uneducated about AI... but I suspect one hurdle to implementing it is that a model is essentially a "black box": we don't know what's inside. So writing a text description of "what it can do" would be prohibitively difficult. In fact, its capabilities might be fully described only by the model itself. But I'd be interested to know if I'm wrong about that.

    • @BlooFlame
      @BlooFlame 8 months ago +1

      langchain

    • @okirschner001
      @okirschner001 8 months ago

      @j3ffn4v4rr0 You are correct; I use this sentence a lot too. It also implies that "description" can be exchanged with "understanding". This is also the problem with AIs trying to understand other AIs to tackle the black-box problem: an AI that understood another AI 100% would basically be a copy of it. Any deviation introduces uncertainty (blurriness); the similarity to the uncertainty principle in physics is no coincidence. Interesting metaphysical concepts arise from the study of AIs. Everything that exists is just a different expression of the same fundamental concepts, on a higher complexity plane; it fractals down and up all the way. We are creating what created us!

    • @hectoris919
      @hectoris919 8 months ago +1

      @alansmithee419 Do models normally require that much data? I would've thought a thousand or so might be reasonable to start with, and Hugging Face is full to the brim with people making experimental models.

  • @usamazaheer9194
    @usamazaheer9194 8 months ago +14

    4:46 What impresses me more is that there is a class of human beings with such sheer brilliance that they can beat a trained neural network at logical deduction.

    • @quantumspark343
      @quantumspark343 8 months ago

      Humans are neural networks themselves...

    • @mysticalword8364
      @mysticalword8364 8 months ago +3

      well, if you have a billion people try to do something and take the absolute peak accuracy it would be more like having a billion variants of the AI and cherry picking the absolute best results for each specific domain, which is fine, but has different implications I think

    • @nonameplsno8828
      @nonameplsno8828 8 months ago +10

      Not for long. Guess what happens just two more papers down the line?

    • @Mojkanal1234
      @Mojkanal1234 8 months ago +2

      If I understand correctly, it's high schoolers, which is even more impressive.

    • @swordofkings128
      @swordofkings128 8 months ago +2

      Right? I think we forget how impressive humans are compared to machine learning algorithms. Computers aren't limited by the flaws of our human bodies: they don't need to eat or sleep, they have no emotions compromising their performance, they don't need to be entertained or to relax, they need nothing to motivate them, they have no human needs. So it's no surprise that, given enough time and resources, a computer can be good at human-like mathematical logic. Or anything, really.
      Given that humans work under the restrictions of being human, it is in a way more impressive that humans can still do better than machines, considering how insanely good modern ML has become.

  • @pridefulobserver3807
    @pridefulobserver3807 8 months ago +23

    A new all-powerful mathematician... hmm, I can already smell the new physics coming in a few years.

    • @asdfasdfasdf1218
      @asdfasdfasdf1218 8 months ago +8

      All of science and engineering is applied mathematics, starting with some empirical observations at its base. If AI can master mathematics, it can essentially master the creation of any kind of technology.

    • @AnAncient76
      @AnAncient76 8 months ago

      Mathematics is not reality.

    • @asdfasdfasdf1218
      @asdfasdfasdf1218 8 months ago

      @@AnAncient76 Mathematics is reasoning and logic. In fact, math is simply another word for logic, these two things are the same. So math is reality insofar as concluding new things from previous things.

    • @AnAncient76
      @AnAncient76 8 months ago

      @asdfasdfasdf1218 Mathematics is a concept people use to explain reality. And obviously they can't fully explain it, because reality is not mathematics. Numbers and lines do not exist in nature.
      The same applies to "logic". The Universe does not know logic, because to know logic it would have to know non-logic. That would imply the Universe can produce non-logic, which is wrong. The Universe at its fundament does not create non-logic, mistakes, etc. People do that. The Universe also does not think like people.
      Space and time are also human concepts, i.e. they do not exist.

    • @asdfasdfasdf1218
      @asdfasdfasdf1218 8 months ago +4

      ​@@AnAncient76 If you think mathematics is "numbers and lines," that means you don't know what mathematics is. Mathematics at its core is not numbers and lines, it's formal languages and model theory. You should educate yourself more before commenting these irrelevant things.

  • @marcfruchtman9473
    @marcfruchtman9473 8 months ago +1

    This is great progress. However, you are incorrect @5:06: the AI doesn't "learn from scratch". Its training data is generated by human-created symbolic engines, which encode the baseline rules of geometry; the model doesn't invent those rules itself. The synthetic data is 100,000,000 examples produced from those rules, which the model can then use to answer more complex questions, or questions not included in the data set.
    It is still very exciting progress, because it can extrapolate from what it was trained on to problems it has not seen.
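The synthetic-data pipeline described in this comment can be caricatured in a few lines of Python: sample random premises, run a rule-based symbolic engine to deduce their consequences, and keep the (premises, derived fact) pairs as training examples. The single segment-equality rule here is an illustrative stand-in for the paper's full geometry engine; none of this is DeepMind's actual code.

```python
import random

def deductive_closure(premises):
    """Close a set of segment-equality facts (pairs like ('A', 'B'),
    read as 'segment A equals segment B') under symmetry and
    transitivity, returning every derivable equality."""
    facts = set(premises)
    changed = True
    while changed:
        changed = False
        for x, y in list(facts):
            if (y, x) not in facts:                 # symmetry
                facts.add((y, x))
                changed = True
        for x, y in list(facts):
            for w, z in list(facts):
                if y == w and (x, z) not in facts:  # transitivity
                    facts.add((x, z))
                    changed = True
    return facts

def sample_training_example(points, n_premises, rng):
    """Random premises in, (premises, newly derived facts) out."""
    premises = {tuple(rng.sample(points, 2)) for _ in range(n_premises)}
    derived = deductive_closure(premises) - premises
    return premises, derived

premises, derived = sample_training_example("ABCDE", 3, random.Random(0))
print(premises, "=>", derived)
```

Scaling this idea up — richer predicates, auxiliary constructions, and proof traces instead of single facts — is what yields the 100,000,000 examples, which is why the human contribution lives in the rules rather than in hand-written solutions.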

  • @marcmarc172
    @marcmarc172 8 months ago +12

    Incredible! Specialized, but absolutely incredible! What a time to be alive!

    • @tuseroni6085
      @tuseroni6085 8 months ago +3

      You just need a bunch of such specialized expert AIs and a mixture-of-experts model to end up with ASI.

    • @marcmarc172
      @marcmarc172 8 months ago

      @@tuseroni6085 that's one approach that could work - good point!

    • @asdfasdfasdf1218
      @asdfasdfasdf1218 8 months ago +2

      If an AI can master all math, then that means it can do any kind of deductive reasoning. This AI is specialized in the sense that it can only find auxiliary points for 2D geometry proofs, but perhaps they'll soon find a way to branch out to all other kinds of math.

  • @carlsonbench1827
    @carlsonbench1827 8 months ago +4

    listening to this is like racing down a bumpy road

  • @virgilbarnard4343
    @virgilbarnard4343 7 months ago +1

    Yes, I’ve been obsessed with this paper… but to say it’s without human assistance is misleading; they developed a sophisticated set of methodologies and functional graph procedures to generate the training data from scratch, even setting a new state of the art in proof discovery in order to feed it.

  • @freedom_aint_free
    @freedom_aint_free 8 months ago +3

    GPT-4 is quite bad at geometric problems. I've tested it with a little coding exercise, and so far it has never gotten it right, nor has it gotten any better as far as I know:
    1) I ask it to generate an isometric 3D maze, simple, with every wall of the same height and width.
    2) Then I explain to it how it could do it:
    2.1) Use a classical algorithm, say Kruskal's, to build a minimum spanning tree (MST) out of a random grid of M x N points;
    2.2) Where points are connected there's a wall, and where they aren't there's an open cell (the opposite also works; overall it's about 50/50 between open and closed cells);
    2.3) It needs to have an opening at the top left (entrance) and another at the bottom right (exit).
    These points (2.1-2.3) it gets right after a little back and forth.
    3) I say that it should use a linear transformation to make it isometric and then extrude the walls upwards.
    Then it keeps getting it wrong, and I've shown it dozens of images of simple isometric mazes, but as far as I know it never gets it right. If somebody has been able to do it, please leave a message!
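For what it's worth, the construction described in steps 2.1-3 can be sketched directly: randomized Kruskal's over the grid's walls yields a perfect maze (a spanning tree of the cells), and a standard 2:1 transform maps grid cells to isometric screen coordinates. This is a minimal illustration of the algorithm the comment asks for, with arbitrary tile sizes; it makes no claim about what GPT-4 can or cannot produce.

```python
import random

def kruskal_maze(w, h, seed=0):
    """Randomized Kruskal's: return the list of opened walls, i.e. the
    edges of a random spanning tree over the w x h cell grid."""
    parent = list(range(w * h))

    def find(a):  # union-find with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    # every wall between horizontally or vertically adjacent cells
    walls = [(y * w + x, y * w + x + 1) for y in range(h) for x in range(w - 1)]
    walls += [(y * w + x, (y + 1) * w + x) for y in range(h - 1) for x in range(w)]
    random.Random(seed).shuffle(walls)

    opened = []
    for a, b in walls:          # knock a wall down iff it joins two trees
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            opened.append((a, b))
    return opened               # a spanning tree has w*h - 1 open walls

def to_isometric(x, y, tile_w=32, tile_h=16):
    """Classic 2:1 isometric screen transform for grid cell (x, y)."""
    return ((x - y) * tile_w // 2, (x + y) * tile_h // 2)
```

Rendering is then a matter of drawing each cell's floor tile at `to_isometric(x, y)` and extruding closed walls upward by a fixed pixel height, back-to-front.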

  • @juhor.7594
    @juhor.7594 8 months ago +2

    Makes me wonder how the problems in a mathematical olympiad are made.

    • @Adhil_parammel
      @Adhil_parammel 8 months ago

      Those who get 20 or above become a group of professors.

  • @prilep5
    @prilep5 8 months ago +2

    I dropped my papers

  • @tsarprince
    @tsarprince 8 months ago +3

    Very, very scarily brilliant

  • @nefaristo
    @nefaristo 8 months ago +5

    6:38 About this AI being "still" relatively narrow: I think it would be a good thing if it stayed that way while progressing in its own field. Since models are black boxes, to minimize the alignment problem we want narrow superintelligent AIs to communicate with each other (and with themselves; see chain-of-thought methods, etc.) in natural language, so that humans (and other AIs) can check on what's going on. I think that's a good trade-off between security and efficiency.

  • @peetiegonzalez1845
    @peetiegonzalez1845 8 months ago +3

    OK, now I'm having a really hard time believing that this is anything close to what an LLM is supposed to be able to do. This is not a language predictor. It has short-term memory and seemingly logically consistent views of what it's talking about. What is actually happening here?

    • @vighneshkannan7896
      @vighneshkannan7896 8 months ago +2

      From what I understand, it's a combination of two models: one that understands the semantics of the question, and another that performs logical operations. I think they play off each other, in a kind of dance, to reach the end result.

    • @CalebTerryRED
      @CalebTerryRED 8 months ago +1

      This is an LLM-ish AI making guesses at what a construction could be, and feeding each guess into a solver that validates it using formal logic. The second part has existed for a while: mathematicians use such systems a bit like we use calculators, for the tiresome deductions that are long and repetitive, but they have no intelligence about which deductions need to be done. It's like the Wolfram Alpha plugin in ChatGPT taken to the next level: use an LLM to translate and understand the problem, but use regular symbolic machinery to do the math so that the LLM doesn't make basic mistakes.
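The loop this reply describes can be sketched as follows. `deduce` is a toy stand-in for the symbolic engine (the paper's DD+AR), and the canned `proposals` list plays the role of the language model suggesting auxiliary constructions; everything here is illustrative pseudocode made runnable, not DeepMind's implementation.

```python
def deduce(facts):
    """Toy symbolic engine: close facts under one rule,
    midpoint(M, A, B) -> equal(MA, MB)."""
    derived = set(facts)
    for f in facts:
        if f[0] == "midpoint":
            _, m, a, b = f
            derived.add(("equal", m + a, m + b))
    return derived

def prove(premises, goal, proposals):
    """Alternate symbolic deduction with model-proposed constructions."""
    facts = set(premises)
    for step in range(len(proposals) + 1):
        facts = deduce(facts)
        if goal in facts:
            return True, step          # proven after `step` constructions
        if step < len(proposals):
            facts.add(proposals[step]) # "LM" adds an auxiliary point
    return False, len(proposals)

ok, steps = prove(premises={("midpoint", "M", "A", "B")},
                  goal=("equal", "NC", "ND"),
                  proposals=[("midpoint", "N", "C", "D")])
print(ok, steps)
```

The point of the design is that the language model is only ever asked the creative question ("what should we construct next?"); every fact that reaches the proof has been checked by the symbolic side, so hallucinations cannot leak into the result.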

    • @peetiegonzalez1845
      @peetiegonzalez1845 8 months ago

      Thanks for your answers but I still don't understand how it's possible for these models to do what they are currently capable of. As Terence Tao himself said quite recently... We are on the cusp of these models being actual literal co-authors of new mathematical research. Hold on to your papers, indeed.

    • @peetiegonzalez1845
      @peetiegonzalez1845 8 months ago

      @CalebTerryRED The results show way more contextual intelligence than your description would suggest. [citation needed]. I've watched probably every Two Minute Papers video in the last 2 years, but that's nowhere near enough to understand what's going on in the so-called "AI" world right now.
      Yes, I watch other videos and try to keep up with the research, but it's coming so thick and fast right now that it's practically impossible to keep up unless it's literally your job to report or expand on this stuff.

  • @tim40gabby25
    @tim40gabby25 8 months ago +2

    Forget the paper, my subtitles spelt Karol's name perfectly! What a time to be a guy.

  • @luizpereira7165
    @luizpereira7165 8 months ago +1

    Is geometry easier than algebra for AI? Can they make a general AlphaMath with this approach?

  • @tuseroni6085
    @tuseroni6085 8 months ago +1

    Take general-purpose AIs like this that outperform humans, make one for every domain, and make an AI that can detect which domain or set of domains a prompt relates to, then bring in the relevant expert AI to solve it. If the question spans multiple domains, break it down into the relevant domains, ask each expert to solve its part, and put the answers together. You now have artificial superintelligence.
    I feel like research papers in the future will be:
    Methodology: I gave the question to an AI.
    Results:

  • @tmdquentin5095
    @tmdquentin5095 8 months ago +2

    Can you please talk about the new LLM called "Mixtral-8x7B"?
    Thanks

  • @imsatoboi
    @imsatoboi 8 months ago +1

    Whaat? Open source? Damn, I'm having chills, because this will be able to do things that we maybe don't know about, because it learned from scratch (I think).

  • @DreckbobBratpfanne
    @DreckbobBratpfanne 8 months ago +1

    I wonder if we'll see a development where a major frontier general AI uses lots of specialized systems as tools, and then the next-gen frontier AI is trained on this entire structure so it learns to do them by itself, out of the box.

  • @KilgoreTroutAsf
    @KilgoreTroutAsf 8 months ago +5

    Look at this paper title! Look at these graphics! Let's not discuss anything about the method! What a time to be alive!

  • @BosonCollider
    @BosonCollider 8 months ago +7

    It may be worth mentioning that you can already use a classical computer algorithm to solve these problems. Tarski's geometry is decidable (via quantifier elimination over real closed fields), so a theorem can in principle be checked mechanically, and there are plenty of existing algorithms that enumerate large numbers of theorems. The innovation here is making the system find short proofs in a way similar to a human, using said algorithms to generate practice problems.

    • @denisbaudouin5979
      @denisbaudouin5979 8 months ago

      Are you sure about that?
      One difficulty here is finding where to put the auxiliary points needed to complete the proof, and it seems that Tarski's axioms don't say anything about that.

    • @denisbaudouin5979
      @denisbaudouin5979 8 months ago

      And I am not sure that enumerating a large number of theorems really helps; what you want is a way to find the correct one quickly.

    • @Spiritusp
      @Spiritusp 8 months ago +1

      I think you are underestimating the enumeration space. We can also see other non-AI methods in the comparison, including the Gröbner basis method.
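A quick way to see the size of that enumeration space: even counting only ground statements over a handful of points, and ignoring proofs entirely, the number of candidates grows steeply. The predicate names and arities below are made up for illustration, not AlphaGeometry's actual predicate set.

```python
from math import perm

# Arity of each toy geometry predicate (illustrative only).
PREDICATES = {
    "collinear": 3,
    "concyclic": 4,
    "equal_segment": 4,   # equal(AB, CD) mentions four points
    "perpendicular": 4,
}

def candidate_statements(n_points):
    """Number of ordered ground atoms over n_points distinct points."""
    return sum(perm(n_points, k) for k in PREDICATES.values())

for n in (5, 10, 20):
    print(n, candidate_statements(n))
```

And this counts only single statements; proofs are sequences of them, so naive enumeration compounds this growth at every step, which is why guided search over generated practice problems matters.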

  • @waadeland
    @waadeland 8 months ago +1

    2023: “It is just a fancy autocomplete” - human talking about ai
    2024: “It is just a fancy autocomplete” - ai talking about human

  • @charliebaby7065
    @charliebaby7065 8 months ago +1

    Notice how people always refer to thinking in ways that are different from every previous attempt or methodology as
    "thinking outside of the box".
    Hardy har har... ahhh, sigh.
    I love your vids.
    Thank you for your passion and your diligence.

  • @infowarriorone
    @infowarriorone 8 months ago +1

    What I wonder is how AI can help decode previously undecipherable written languages from the past. No doubt people are already training AI to do that.

  • @ivanleon6164
    @ivanleon6164 8 months ago +1

    Google: here is a new paper and new techniques.
    OpenAI: thanks Google, how can we copy this... again?

  • @kanetsb
    @kanetsb 8 months ago +1

    That moment, when you're walking happily in the streets of computer technology and there's a singularity standing in the next dark alley, waiting to shank you...

  • @yyaa2539
    @yyaa2539 7 months ago +1

    I saw the first few seconds of the video and I don't know why I am SO sad 😢😢😢

    • @yyaa2539
      @yyaa2539 7 months ago

      See the stars come falling down from the sky,
      Gently passing, they kiss your tears when you cry.
      See the wind come softly blow your hair from your face,
      See the rain hide away in disgrace.
      Still I'm sad.
      For myself my tears just fall into dust,
      Day will dry them, night will find they are lost.
      Now I find the wind is blowing time into my heart,
      Let the rain fall, for we are apart.
      How I'm sad,
      How I'm sad,
      Oh, how I'm sad.
      The Yardbirds

  • @joeewert4503
    @joeewert4503 8 months ago +1

    6:05 This doesn't really demonstrate what you are talking about in the video. The AI-revised version seems worse in this case.

  • @AK-ox3mv
    @AK-ox3mv 8 months ago +1

    If an AI knows math perfectly, it can know how to build anything from scratch.

  • @alphablender
    @alphablender 8 months ago +2

    Man, thanks for your insights and speedy news, it's great.

  • @AnthonyWilsonOlympian
    @AnthonyWilsonOlympian 8 months ago +1

    Impressive, but what are the implications or applications?

  • @NirvanaFan5000
    @NirvanaFan5000 8 months ago +1

    A great paper for the start of 2024. Can't even imagine where we'll be by December. Hard to believe ChatGPT is barely a year old.

  • @lasagnadipalude8939
    @lasagnadipalude8939 8 months ago +1

    Next version: optimize for the fewest steps possible and put it in a mixture-of-experts agent swarm to make AGI.

  • @ibrahim47x
    @ibrahim47x 8 months ago +1

    Integrating these different models together, the multi-model approach, is what will make this useful; GPT-4 will be able to do math really well using it.

  • @dreamok732
    @dreamok732 8 months ago

    I don't know who thinks the voice-over is English. It's almost unlistenable, and the transcript almost unreadable. You expect it from cat videos; it's a shame you can't do better in this context.

  • @TroyRubert
    @TroyRubert 8 months ago +1

    Are you going to be here for the eclipse?

  • @thechadeuropeanfederalist893
    @thechadeuropeanfederalist893 7 months ago

    How long until an AI solves one of the Millennium Prize Problems?
    I guess only 2 years.

  • @GarethDavidson
    @GarethDavidson 7 months ago

    Wow, so can we spin this up as an API in Docker, have LLMs pose problems in the same distribution as its input data, and use the outputs? How much GPU RAM do we need?

  • @xSeonerx
    @xSeonerx 8 months ago +1

    Awesome! Imagine what will happen some years in the future!

  • @spiderjerusalem
    @spiderjerusalem 8 months ago

    I love your channel, but I will keep saying that your narration is very bad. You cannot emphasize every word in a sentence; it is very tiring to listen to, and the emphasis loses its meaning.

  • @emmastewart3581
    @emmastewart3581 7 months ago

    Can it create new, even more advanced maths problems, for the future AI Olympiads?

  • @carloslemos6919
    @carloslemos6919 8 months ago

    They should compare the length of the AI's solutions with the humans'; that is where the intelligence is.

  • @3DProgramming
    @3DProgramming 8 months ago +1

    It's all fun and games until it solves the RSA cryptography problem.

  • @phutureproof
    @phutureproof 8 months ago

    This AI stuff is really just a very refined search engine. It fools us, though. I'm getting bored; when it can invent stuff, give me a nudge.

  • @yoverale
    @yoverale 8 months ago +2

    What a time to be alive!! 🤯

  • @Jacobk-g7r
    @Jacobk-g7r 8 months ago +1

    5:20 No, fast thinking is the quick response, like muscle memory, and slow thinking is working out a process, kind of like building a blueprint: the mathematical process is the blueprint, and we build toward the answer. Slow thinking is better thought out; it is not just a first prediction from all the available info, but a prediction that uses the info to find the right answer. Understanding the process allows it to be smarter.

    • @capitalistdingo
      @capitalistdingo 8 months ago

      I think he is referencing the terminology of a cognitive science book by Daniel Kahneman rather than physiological terminology. Sort of like how the term "metal" means something different to an astronomer than it does to a chemist.

  • @ruutjormun2262
    @ruutjormun2262 8 months ago

    GPT-4 has its strengths, but is horrendous at cryptographic logic.

  • @vitruviuscorvin3690
    @vitruviuscorvin3690 8 months ago +1

    Would this be able to work with tabular data? I've been looking for ways to have an LLM crunch some tabular data I have and do some actual math on it (nothing too complicated, though). LLMs have been incredible with text/string data so far, but when it comes to numbers, oh boy, the hallucinations start easily.

    • @alectoireneperez8444
      @alectoireneperez8444 8 месяцев назад

      It’s specialized for geometry problems & finding proofs

  • @Ctrl_Alt_Sup
    @Ctrl_Alt_Sup 5 месяцев назад

    Can AlphaGeometry solve the Riemann Hypothesis?

  • @espace1832
    @espace1832 8 месяцев назад +1

    In what intellectual domain are humans still superior?

  • @sganicocchi5337
    @sganicocchi5337 8 месяцев назад

    I wish I could train on 100 million synthetic geometry problems

  • @AttilioAltieri
    @AttilioAltieri 8 месяцев назад

    Imagine the developers' reaction after they figured out they created something smarter than they are...

  • @shishirsunar4680
    @shishirsunar4680 8 месяцев назад

    I thought pure mathematicians were safe from AI. Seems like they are not.

  • @aroemaliuged4776
    @aroemaliuged4776 8 месяцев назад

    Will Weinstein's Geometric Unity be surpassed😆

  • @Dark_Brandon_2024
    @Dark_Brandon_2024 8 месяцев назад

    "It cannot play starcraft..."
    chinese cheater: "for now..."

  • @metalim
    @metalim 8 месяцев назад

    Myyy goodness, that annoying voice again, hold on to your papers

  • @rodericksasu6976
    @rodericksasu6976 8 месяцев назад +1

    Shit this is too good too fast 😨

  • @herp_derpingson
    @herp_derpingson 8 месяцев назад

    Take the zero score of GPT4 with a grain of salt. There is more to it than that.

  • @Nulley0
    @Nulley0 8 месяцев назад

    But... can it play Geometry Dash?

  • @kphk3428
    @kphk3428 8 месяцев назад

    4:03 The titles you put up make it look like the AI made the proof worse.

  • @jmoreno6094
    @jmoreno6094 8 месяцев назад

    Your tone speaking is a boxcar(t) function

  • @LabGecko
    @LabGecko 8 месяцев назад

    Regarding _"Dear fellow scholars, this is Dr Károly Zsolnai-Fehér"_ here is a suggestion: as you seem intent on including that phrase, perhaps put the _"Dear fellow scholars"_ at the start, and _"This has been Two Minute Papers with Dr Károly Zsolnai-Fehér"_ at or near the end?
    I ask because it is very jarring to be listening, considering the ramifications of the study you are describing, and then be jolted by what should be an intro or outro phrase.

  • @bobbobbob321
    @bobbobbob321 7 месяцев назад

    Was ChatGPT actually able to solve that USACO problem?

  • @adrid951
    @adrid951 4 месяца назад

    How can we use AlphaGeometry??? Thx

  • @okirschner001
    @okirschner001 8 месяцев назад

    Black-box problem and stochastic parrots: (Originally an answer hidden deeply here, so context is missing.) An AI that understands another AI 100% would basically be a copy, or at least would have this "copy" integrated into itself. Any derivation introduces uncertainty (blurriness). It is no coincidence this resembles the uncertainty principle in physics. Interesting metaphysical concepts arise from the study of AIs. Everything that exists is just a different expression of the same fundamental concepts, just on a higher complexity plane. It fractals down and up all the way. We are creating what created us!
    These fundamental similarities in concepts can also be seen in human neural networks, even though AI only models one layer of our multilayered thinking and biology. We will finally understand everything much better with and through AI.
    AI is the last invention humans will make.
    People saying that LLMs are only stochastic parrots don't really understand what's going on. But I don't feel like writing a novel here.
    Basically, both sides of the argument describe the same thing through an incompleteness lens.
    It turtles all the way up and down.
    The very first time I heard that story of the old lady who got laughed at by all the scientists, I knew in my heart that she was correct.
    The most ironic outcome is the most likely.
    But it was actually my intuition that knew.
    And where intuition comes from, and where it ties into this bigger picture (and QM), is very fascinating.
    Since AlphaGo, the bigger picture has become much clearer for me. And we are just at the start.
    We are always just at the start.
    Prepare for the biggest ride ever known to mankind.
    (Sometimes I feel like I have to write a book; many concepts in my head I never read anywhere else.)
    I am not even really starting here; I just wanted to reply with a few words, but can't help myself.

  • @jonathansung8197
    @jonathansung8197 7 месяцев назад

    For me, doing the mechanical work in maths class was learnable, but pulling "rabbits out of hats" was what I really struggled with when doing proofs at uni. Seeing this AI perform almost as well as a Gold IMO contestant is very impressive!

  • @antoniobortoni
    @antoniobortoni 8 месяцев назад

    This is HUGE. Big. I mean, imagine having problem-solving capabilities in your pocket, wow... The exponential growth is happening. We imagine robots as idiots, but they can be poets and genius inventors. Just apply the same to inventions, and all the possible inventions and discoveries ever become possible now. Wow

  • @harrybarrow6222
    @harrybarrow6222 8 месяцев назад

    I took Maths and Physics at university.
    It always seemed to me that the logical presentation of a proof said very little about how the proof was first found.
    🎩🐇
    I came to the conclusion that we use perceptual abilities to discern the key features of the problem, and then use analogy with similar problems to suggest possible approaches.
    You might even have to break the task into a sequence of stages with sub-proofs for each.
    This by itself, of course, does not give you a watertight proof.
    You then have to go through the process of filling in the steps and details.

  • @asdfasdfasdf1218
    @asdfasdfasdf1218 8 месяцев назад

    Mathematics is, in other words, deductive reasoning, one foundation of all actual knowledge. The other foundation is empirical observation. If AI can master mathematics, it would not take much more before it could master the creation of new technology.

  • @anywallsocket
    @anywallsocket 8 месяцев назад

    The problem is how to reduce the domain to deductive and inductive techniques. Once you can do that, you can train a NN on anything, and it will be better than the best. Why? Because it is fitting a function in the hyperspace, the limit of which is full memorization of the training data. We shouldn't be so blown away by this, as the NN doesn't know it's doing geometry, and if you change anything fundamental it will fail spectacularly.

  • @danielebaldanzi8383
    @danielebaldanzi8383 8 месяцев назад

    4:50 Don't want to ruin the excitement, but this is incorrect. The gold medal isn't awarded just to the winner, but to a large number of contestants. Still impressive, but not as much.

  • @smetljesm2276
    @smetljesm2276 8 месяцев назад

    Wow, people found such an efficient way to make geometry as complicated as they could!
    This looks nothing like the geometry we learned in school 😂😂😂

  • @TheSolsboer
    @TheSolsboer 8 месяцев назад

    I bet this model is not good at anything other than math

  • @BryanLu0
    @BryanLu0 8 месяцев назад

    I don't know if you can call the proof better. It requires lower-level deductions but more steps. It's easier to understand, but provides less intuition into what is going on.

  • @guncolony
    @guncolony 8 месяцев назад +1

    It still seems like a pretty narrow AI, but one that is incredibly good at what it does.
    There's still a long road to making this more general, but if that is accomplished, you essentially get an AGI that can figure out how to solve a complex problem on its own. Imagine giving it access to a simulation (so the AI can check its solutions), and you could use it to develop drugs, computer algorithms, optimized mechanical designs... basically replacing a whole lot of science and engineering.

    • @michaelleue7594
      @michaelleue7594 8 месяцев назад +1

      Getting a machine to narrowly do a task very well is something we've been doing for 200 years, so let's not put the cart before the horse here. Generalizing this to *anything* more than what it's doing now may not be possible at all, let alone all of the stuff you're talking about.

    • @Afkmuds
      @Afkmuds 8 месяцев назад

      @@michaelleue7594 But when combined with the others, it can act as a section of the brain for processing math😏

    • @michaelleue7594
      @michaelleue7594 8 месяцев назад

      @@Afkmuds I guess, but at what point does frankensteining a bunch of individual models together look less like a reasoning model and more like just a regular computer?

    • @Afkmuds
      @Afkmuds 8 месяцев назад

      @@michaelleue7594 Which is the next step: a computer that can access itself. The fact that the file explorer is so stupid, for instance. Computers are about to be so much better at being computers.

  • @jeffreyspinner9720
    @jeffreyspinner9720 8 месяцев назад

    When will you be replaced by AI, so you can be like half my channel recommendations now? AI hasn't advanced until you are replaced on your own channel. What a time to be alive!

  • @BlooFlame
    @BlooFlame 8 месяцев назад

    Wow, first the MoE revisited approach, and now we get a narrow expert to augment the MoE with exceptional mathematical ability!?
    What is going on with all these optimizations in AI right now?

  • @JoshKings-tr2vc
    @JoshKings-tr2vc 8 месяцев назад +1

    This is a beautiful paper. It made me think about the way we train AI and how to allow it to grow on its own.

    • @JoshKings-tr2vc
      @JoshKings-tr2vc 8 месяцев назад

      One example is Machine Vision being greatly improved by vectorized images.

    • @JoshKings-tr2vc
      @JoshKings-tr2vc 8 месяцев назад

      The synthetic data process they used really made this AI shine.
      The symbolic deduction and traceback method used for training is very intriguing. Similar to how our brains have common sense and certain concepts that are bent and molded for deduction. But we also learn from seeing the logical reasoning of tracing back the deductions.
      Awesome paper.

  • @PrajwalDSouza
    @PrajwalDSouza 8 месяцев назад

    Note that a smart solution isn't the long one.

  • @spicycondiments3043
    @spicycondiments3043 8 месяцев назад

    At 3:42 the AI claimed angle congruencies that don't visually add up. Are these mistakes on the AI's part or mine?

  • @ericray7173
    @ericray7173 8 месяцев назад

    Here I am quieroing Taco Bell once again.

  • @ivanleon6164
    @ivanleon6164 8 месяцев назад

    DeepMind is way more open than OpenAI, lmao. Thanks, Microsoft...