Google AI dominates the Math Olympiad. But there's a catch

  • Published: 9 Sep 2024
  • Google DeepMind crushed several Olympiad problems, solving one problem in a mere 19 seconds. But is AI ready to replace human mathematicians? Not quite. There are important caveats you need to know.
    Google DeepMind announcement
    deepmind.googl...
    Problem 4 solution
    storage.google...
    My video IMO 2024 question 4 (human solution)
    • Geometry question to t...
    References
    arstechnica.co...
    en.wikipedia.o...
    x.com/wtgowers...
    www.technology...
    www.nature.com...
    www.theguardia...
    DeepMind's AlphaGeometry AI Two Minute Papers
    • DeepMind’s AlphaGeomet...
    Subscribe: www.youtube.co...
    Send me suggestions by email (address at end of many videos). I may not reply but I do consider all ideas!
    If you purchase through these links, I may be compensated for purchases made on Amazon. As an Amazon Associate I earn from qualifying purchases. This does not affect the price you pay.
    Book ratings are from January 2023.
    My Books (worldwide links)
    mindyourdecisi...
    My Books (US links)
    Mind Your Decisions: Five Book Compilation
    amzn.to/2pbJ4wR
    A collection of 5 books:
    "The Joy of Game Theory" rated 4.3/5 stars on 290 reviews
    amzn.to/1uQvA20
    "The Irrationality Illusion: How To Make Smart Decisions And Overcome Bias" rated 4.1/5 stars on 33 reviews
    amzn.to/1o3FaAg
    "40 Paradoxes in Logic, Probability, and Game Theory" rated 4.2/5 stars on 54 reviews
    amzn.to/1LOCI4U
    "The Best Mental Math Tricks" rated 4.3/5 stars on 116 reviews
    amzn.to/18maAdo
    "Multiply Numbers By Drawing Lines" rated 4.4/5 stars on 37 reviews
    amzn.to/XRm7M4
    Mind Your Puzzles: Collection Of Volumes 1 To 3
    amzn.to/2mMdrJr
    A collection of 3 books:
    "Math Puzzles Volume 1" rated 4.4/5 stars on 112 reviews
    amzn.to/1GhUUSH
    "Math Puzzles Volume 2" rated 4.2/5 stars on 33 reviews
    amzn.to/1NKbyCs
    "Math Puzzles Volume 3" rated 4.2/5 stars on 29 reviews
    amzn.to/1NKbGlp
    2017 Shorty Awards Nominee. Mind Your Decisions was nominated in the STEM category (Science, Technology, Engineering, and Math) along with eventual winner Bill Nye; finalists Adam Savage, Dr. Sandra Lee, Simone Giertz, Tim Peake, Unbox Therapy; and other nominees Elon Musk, Gizmoslip, Hope Jahren, Life Noggin, and Nerdwriter.
    My Blog
    mindyourdecisi...
    Twitter
    / preshtalwalkar
    Instagram
    / preshtalwalkar
    Merch
    teespring.com/...
    Patreon
    / mindyourdecisions
    Press
    mindyourdecisi...

Comments • 337

  • @johnm5928
    @johnm5928 A month ago +494

    Yesterday, ChatGPT insisted to me that 184 was a prime number.

    • @trueriver1950
      @trueriver1950 A month ago +70

      Well, it certainly has prime factors. Isn't that good enough?

    • @howlu9086
      @howlu9086 A month ago

      @trueriver1950 Then every number is prime to ChatGPT 😂

    • @ubncgexam
      @ubncgexam A month ago +11

      In the complex world, even 1 is not. 😊

    • @IoT_
      @IoT_ A month ago

      @ubncgexam There are Gaussian prime numbers.

    • @film-gq1wg
      @film-gq1wg A month ago +53

      ChatGPT is a large *language* model,
      with the emphasis on LANGUAGE.
      It's not your calculator or math homework robot.
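For what it's worth, the claim in the thread above takes a few lines of ordinary code to check deterministically; a quick sketch:

```python
def is_prime(n: int) -> bool:
    """Deterministic trial division; plenty for numbers this small."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime(184))                                # False
print([d for d in range(2, 184) if 184 % d == 0])   # [2, 4, 8, 23, 46, 92]
```

184 = 2³ × 23, so it is very much not prime, which is the point of the "language model, not calculator" reply.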

  • @YoungGandalf2325
    @YoungGandalf2325 A month ago +309

    AI is great, but it can't clean the blackboard and clap the erasers.

  • @mathgeniuszach
    @mathgeniuszach A month ago +19

    Everybody's all hyped over AI until AI solves the Riemann Hypothesis and everyone fights over the cash.

  • @caspermadlener4191
    @caspermadlener4191 A month ago +168

    IMO gold medalist here, with a perfect score on the first day in 2022.
    Let's just say that if I had gotten 3 days, I would have also gotten a perfect score on the second day.
    I am fine with the geometry solution though, because it is only fair that the IMO problems are translated into your own native language.
    It is actually insane that the solution only took 19 seconds!

    • @-sh
      @-sh A month ago +15

      Ahhh, I remember you, you had an insane improvement from 2021 to 2022. How did you do it?

    • @redfinance3403
      @redfinance3403 A month ago +13

      Please be real with me: do you have to be a genius to make it to the IMO? Just in case the answer is no (lol), how much time did you spend preparing for it, and how? I appreciate your time answering this/these question(s).

    • @Ephemeral_EuphoriaYT
      @Ephemeral_EuphoriaYT A month ago +3

      @redfinance3403 I don't think he will answer these questions. Tbh I think he doesn't even have notifications on.

    • @caspermadlener4191
      @caspermadlener4191 A month ago +25

      @-sh My skill level would have been mid silver in 2021 and high silver in 2022, but I experienced both sides of Murphy's law.
      The questions on the first day of 2021 are infamously hard, completely shattering my confidence for the second day, while I got a perfect score on the first day of 2022, which just left me wanting more.
      I also got really lucky with 4 points on problem 6 in 2022; 3 points would already have been quite high for my partial solution, but I apparently checked all the right boxes. These extra points helped me turn my low gold into a mid gold.

    • @Ephemeral_EuphoriaYT
      @Ephemeral_EuphoriaYT A month ago +8

      @caspermadlener4191 Damn, I thought you guys didn't use YouTube or phones and just studied all day...

  • @Bruno_Haible
    @Bruno_Haible A month ago +15

    Plane geometry is relatively easy for a computer, because it's a closed world and the main difficulty is the combinatorial explosion. Combinatorics and inequalities are much harder for a computer.

    • @birdbeakbeardneck3617
      @birdbeakbeardneck3617 A month ago

      Nah, they're way harder than the IMO. I think one of the proved ones needed entire theories to be developed for it to work, but hopefully I'm wrong.

  • @Rafix_989
    @Rafix_989 A month ago +136

    The fact that AI got 28 points on the IMO while I'm struggling to even get 12 points on the Polish junior mathematical olympiad is amazing

    • @bene2451
      @bene2451 A month ago +36

      humble brag lol

    • @cewka_zaplonowa_dwuiskrowa
      @cewka_zaplonowa_dwuiskrowa A month ago +6

      Flexing

    • @howareyou4400
      @howareyou4400 A month ago +13

      AI took extra time and had human translators...
      I'm wondering: when they were doing this, if the AI couldn't solve it, did they modify their code and try again?
      Because I don't think AI needs 3 days to solve a few problems.
      The fact that AI took 3 days makes me suspect that there were several tries and they modified the code, which could be the AI's internal code as well as the problem description code.

    • @somanathdash3153
      @somanathdash3153 A month ago +1

      You're doing way better than 90% of people, bruh 😂

    • @redfinance3403
      @redfinance3403 A month ago +2

      Took a look at some of the questions … Nothing you couldn't practice for, but definitely still hard compared to a lot of other junior olympiads. Great job 🙂

  • @BUtheBabyUnicorn
    @BUtheBabyUnicorn A month ago +31

    “Just as we use calculators for doing intricate calculations, we soon may be using computers to assist with mathematical proofs.” This statement is redundant when you consider that Lean is an interactive theorem prover, or what is otherwise known as a proof assistant. It is literally supposed to assist with mathematical proofs. I don’t think you can compare this AI to a calculator. I think a tool like Lean or Rocq (formerly known as Coq) is much more analogous to a calculator. Even if you’re talking about a very fancy calculator like a TI-89 which has a symbolic derivative and integral solver, these proof assistants have special tools that solve many problems instantly.
    This “future” that you’re proposing is already here. The Four Color Theorem was proved using a computer, and a proof in the proof assistant Rocq exists. Other new mathematical proofs of previously unproven theorems are being worked on in Lean every day.
    I think it’s much less impressive that the AI came up with a solution to a problem that is known to be solvable as well. The fact that it took 3 days to solve the other problems is less than encouraging.

  • @Mr.Not_Sure
    @Mr.Not_Sure A month ago +54

    There are "direct" problems which are solved step by step, and "reverse" problems where one has to "seek a path in the darkness".
    An example of a "direct" problem is "Find ((1+x)*2+3)*4+5 where x=4". You know the path to the solution; you just have to follow it.
    An example of a "reverse" problem is "Find an adequate theory of gravity", where one can propose one theory after another, test it, then choose the more adequate one.
    "Reverse" problems are harder because one does not know the path to the solution; one must go through trial and error.
    Translating a problem's text into formal language is a "direct" problem. It's the easy part. Actually providing a solution for the formalized problem is the hard part.

    • @Fishy_17
      @Fishy_17 A month ago +11

      That's not the whole picture though. If you make a mistake in translating the problem, you don't know that you made a mistake. This is why translating is the difficult part. It's a lot harder to make mathematical errors when solving the problem (at least as a computer), as mathematical concepts are much more concrete than informal language.

    • @yairklein-mp8vo
      @yairklein-mp8vo A month ago +1

      57
      The apple falls down because it's attracted to the ground, duh.

    • @mujtabaalam5907
      @mujtabaalam5907 A month ago

      The AI provided the solution for the formalized problems
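The "direct" example in the thread really does have a single path to follow (the reply's "57" is its answer), and the "reverse" flavor appears as soon as only the result is known; a minimal sketch of the contrast:

```python
# 'Direct' problem: follow the path step by step.
x = 4
result = ((1 + x) * 2 + 3) * 4 + 5
print(result)  # 57

# 'Reverse' problem: only the target is known; we must search for an x that fits.
target = 57
found = next(c for c in range(100) if ((1 + c) * 2 + 3) * 4 + 5 == target)
print(found)  # 4
```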

  • @mr._jgp
    @mr._jgp A month ago +107

    As with almost everything, W Mathematicians and L Journalists

  • @SodaWithoutSparkles
    @SodaWithoutSparkles A month ago +4

    I wonder if it can finally say whether 9.9 is bigger than 9.11

  • @XetXetable
    @XetXetable A month ago +5

    To be fair, the catch is not that huge. Application-wise, the proof search has the most utility. If you want to prove there exists some design/program satisfying some collection of theorems, the proof (at least if it's constructive) is where the value lies. Translating English into formal language is useful, but it's more of a convenience than anything.

    • @prakharsrivastava6644
      @prakharsrivastava6644 A month ago +2

      Also, English-to-Lean translation seems like a relatively less complex problem compared to proof construction.

  • @howareyou4400
    @howareyou4400 A month ago +41

    Geometry problems are actually among the ones where computers easily beat humans.
    You don't need an AI to do that.

    • @hypnogri5457
      @hypnogri5457 A month ago +3

      Elaborate how they can 'easily' do it

    • @mujtabaalam5907
      @mujtabaalam5907 A month ago +1

      Not just geometry

    • @theaveragepro1749
      @theaveragepro1749 A month ago

      @hypnogri5457 Well, I'm not super familiar with the problem, but if you have one or two numbers and are trying to find the value of some angle, you can use relations and formulas like A+B+C=180 to slowly explore what can be learned about the figure from the initial information until you find the value you wanted. That's a related problem anyway. I'm guessing this one also has extra complications, like constructing new lines as a tool to find new relations, which probably leads to an infinite number of possible relations to explore, requiring a good heuristic to determine what to search first. Reinforcement learning basically learns a good heuristic through experience and then executes the steps that maximize that heuristic.

    • @urisinger3412
      @urisinger3412 A month ago +8

      @hypnogri5457 I'm not an expert on this, but geometry can probably be brute-forced

    • @hypnogri5457
      @hypnogri5457 A month ago +1

      @urisinger3412 Sure it can. But math problems aren't generally brute-forceable in polynomial time. Most problems are exponential time, making brute force infeasible
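The "slowly explore what can be learned" idea from this thread can be sketched as naive forward chaining over a single rule, A+B+C=180. This is a toy of my own construction, nothing like AlphaGeometry's actual engine:

```python
def deduce_angles(triangles, known):
    """triangles: list of (A, B, C) angle-name triples, each obeying A+B+C=180.
    known: dict name -> degrees. Repeatedly fill in any angle whose two
    partners are known, until no new fact appears (saturation)."""
    known = dict(known)
    changed = True
    while changed:
        changed = False
        for tri in triangles:
            unknown = [a for a in tri if a not in known]
            if len(unknown) == 1:  # two angles known -> the third is forced
                known[unknown[0]] = 180 - sum(known[a] for a in tri if a in known)
                changed = True
    return known

# Two triangles sharing angle 'C': the first fact unlocks the second.
facts = deduce_angles([('A', 'B', 'C'), ('C', 'D', 'E')],
                      {'A': 50, 'B': 60, 'D': 40})
print(facts)  # C deduced as 70, then E as 70
```

Real olympiad geometry needs many such rules plus new constructions, which is where the search space blows up and a learned heuristic earns its keep.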

  • @GodbornNoven
    @GodbornNoven A month ago +14

    Yes, but the progress we're making with AI is remarkable. That's the entire point. We're making amazing progress. They're not good enough to be considered reliable yet, and won't be for a while. But it's just a show of work. We should take these results more as "oh, it's getting better again" than as "pfft, this won't ever replace mathematicians/coders/whatever".

  • @vitriolicAmaranth
    @vitriolicAmaranth A month ago +2

    Wow, their overly complex linear algebra calculator can now do linear algebra correctly

  • @seeibe
    @seeibe A month ago +1

    As someone who's tried to build LLM-based formalizers myself, this is exactly what I expected, pff

  • @z000ey
    @z000ey A month ago +7

    Do you have any info on whether any of the students who solved the geometry problem also used the approach the AI did? If yes, approximately what percentage of the students did so?

    • @rennleitung_7
      @rennleitung_7 A month ago

      Sorry, I have no information. But back in the day I would have extended some lines to find parallels, right angles, and congruent triangles, just to see if they were useful. So I think there might be students that went down a similar path to the AI's. (If they weren't useful, there was still an eraser.)

  • @52flyingbicycles
    @52flyingbicycles A month ago +22

    What people think the AI achieved: a deeper understanding of mathematics
    What the AI actually achieved: very fast guess-and-check

    • @Eval48292
      @Eval48292 A month ago +11

      Is there really any functional difference? Why would we do any different? Intelligence is the pruning of non-useful paths, compressing what must be explored so a solution can be found in a finite amount of time. If it were just guess-and-check, it could get "stuck" down a particular path forever. I would argue it is understanding (NN mechanistic interpretability research shows that novel features are "discovered" and can be generalized to new zero-shot prompts)

    • @leif1075
      @leif1075 A month ago

      WHY would the AI or anyone draw that circle? And its solution has to prove that the other half of the circle encompasses the lines, so how did it do that? If not, you don't know what the other half of that circle encompasses. See what I mean? And why not use one of the other two triangle sides as the diameter of a circle? Did it try those first?

    • @potatopotato6704
      @potatopotato6704 A month ago +6

      @leif1075 I'm sorry, but this is barely readable
      I don't see what you mean

    • @MrTomyCJ
      @MrTomyCJ A month ago +9

      It is not random guessing, otherwise it would be impossible because the possibilities are just too many. The neural network develops some sort of intuition about what guesses to make, just like we humans do when solving these geometry problems.

    • @howareyou4400
      @howareyou4400 A month ago

      @MrTomyCJ For 2D geometry problems, you can just brute-force it with the formal language code. You don't actually need an AI to do that (maybe AI makes it slightly faster, but I don't think its process is fundamentally different from a calculation program)

  • @marcelobrenha
    @marcelobrenha A month ago +2

    I am pretty sure you are capable of solving many of these problems, you are just being humble

  • @creativenametxt2960
    @creativenametxt2960 A month ago +1

    Kinda expected formal proofs to be automated faster, tbh. It's literally a completely verifiable process with not that many options to choose from, that also happens to have a large database of theorems and problems to get trained on, verified completely automatically. All you need is to provide the description of each problem in Lean, and if some problem turns out to be unsolvable by the AI, you can even easily-ish check whether you have messed up the task by providing the formal proof yourself.
    At least AI has been used in some simpler tasks aimed specifically at boolean logic for a while, from what I heard; doesn't seem all that different.
    Wondering how that automatic prover will turn out in the future

    • @creativenametxt2960
      @creativenametxt2960 A month ago +1

      @Not_Even_Wrong Well, there are around 10 rules of inference; that's what I meant.
      The options for what to plug into those do grow fast, though.
      But still, most make no "intuitive" sense, and that's the kind of thing AI should be pretty decent at

  • @Stelios.Posantzis
    @Stelios.Posantzis A month ago +8

    Ha! I find it mind-boggling that they allowed a human to translate the problem into formal language. But is that language complete? I'd like to know more about Lean. What are all these definitions in it, like "triangle", "incircle" etc.? How are they constructed?
    And what makes Lean different from, for example, English? How many different versions of the same Lean problem could be created in English, and vice versa, by the AI machine?

    • @pierrecurie
      @pierrecurie A month ago +3

      Lean is pretty rigorous, and not that similar to English. As for the competence of the machine translator, I don't know.

    • @JohnDoe-ti2np
      @JohnDoe-ti2np A month ago +6

      Lean is an interactive theorem prover that is programmed with the basic axioms of mathematics and the rules of logic. If you give a formal mathematical proof of a mathematical theorem, then it can check that the proof is correct. The mathematical theorem can in principle be in any branch of mathematics whatsoever. The catch is that you have to manually feed it all the definitions and all the steps in the proof. Fortunately, though, other people have laid a lot of groundwork, and created a library called "mathlib" with a lot of basic math. So you don't have to start from scratch, but can build on the math that other people have already coded up.
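As a tiny illustration of the "feed it definitions and steps" workflow described above, here is what two checkable statements look like in Lean 4 (`Nat.add_comm` is a lemma that ships with the core library):

```lean
-- A proof Lean accepts definitionally: both sides compute to the same value.
theorem two_plus_two : 2 + 2 = 4 := rfl

-- A proof that cites an existing library lemma instead of re-deriving it.
theorem swap_ok (a b : Nat) : a + b = b + a := Nat.add_comm a b
```

The mathlib library mentioned in the comment plays the role of `Nat.add_comm` at scale: thousands of pre-proved lemmas a new proof can build on.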

    • @BryanLu0
      @BryanLu0 A month ago

      You could technically write the translation in Lean in different ways, but the result is the same if translated correctly

    • @Keldor314
      @Keldor314 A month ago +5

      The Lean program is just a big list of constraints as given by the problem. For instance, "triangle a b c" is saying "let abc be a triangle", "midpoint k a c" means "let the point k be at the midpoint of the segment ac", "x != c" means "let x and c be distinct points", and so forth. A computer theorem prover will then manipulate the expressions in ways permitted by formal rules expressing how geometry works until it happens to stumble upon a path to the desired theorem.
      What are these formal rules? Well, an example for a system of algebra may be that you can always substitute anything matching the pattern "a+b" with "b+a": the axiom of commutativity.
      Anyway, a traditional theorem prover might blindly make substitutions, churning out further true statements randomly until it arrives at the solution by brute force. The problem is, the space of possible statements grows exponentially, as every permutation of substitutions could potentially be the one that leads to the answer. There are ways to make the theorem prover a little bit smarter at picking which permutations to try first, but it's really not at all trivial to look at a mess of statements and say "Hmm... Try THIS next!" That's where the AI comes in. AIs are good at building intuition, at seeing what steps have often worked in other problems, and so the hope is that the AI can suggest to the formal theorem prover what things to try first, guiding it away from mindless manipulations unlikely to result in progress.

  • @SonnyBubba
    @SonnyBubba A month ago +2

    Any chance that AI is going to tackle the 6 remaining Millennium Problems?

  • @tullochgorum6323
    @tullochgorum6323 A month ago

    I had a classmate who came 2nd in the Olympiad. He was mad as a frog, but a savant when it came to pure maths.

  • @Hubythereal
    @Hubythereal A month ago +3

    Hope there aren't many rappers watching this; they'd be lost when you talk about Lean

  • @vardanrathi7777
    @vardanrathi7777 A month ago +6

    Time for it to solve the Collatz Conjecture ! 😂

  • @Pongo8844
    @Pongo8844 A month ago +12

    No matter what, computers work on the "garbage in, garbage out" principle. AI learns from the material fed to it.

    • @EdinoRemerido
      @EdinoRemerido A month ago +3

      Not all of them.

    • @drdca8263
      @drdca8263 A month ago +3

      Not sure what point you are trying to make.

    • @aship-shippingshipshipsshippin
      @aship-shippingshipshipsshippin A month ago

      @drdca8263 I think he wanted to say that not all DeepMind models work that way. AlphaGo, for example, didn't use any human data to train; they train by themselves, and all you have to do is tell them the rules. The documentary from DeepMind about how they beat the champion at the game Go is amazing, and they talk about this there. And today we even have better models than back then.
      (Sorry for my English.)

    • @william41017
      @william41017 A month ago +1

      Garbage?

    • @f.b.i6889
      @f.b.i6889 A month ago

      @EdinoRemerido Most of the ones we consider "real/modern AI" do. That's because they're not trained on correct implications but rather on accurate predictions, which aren't the same. If you know that 2² is 4 and 4² is 16, you may be able to predict that 3² is between 4 and 16, but you may not guess that it is 9. If you know that n² is n added to itself n times, you will always be able to get the correct answer, because that rule implies that when n is 3, n² is 3+3+3, which is 9.
      Note: This example is oversimplified to make it easily digestible.
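The squaring contrast in the last reply can be run directly; the function names and the linear-interpolation stand-in for "prediction from examples" are my illustration, not from the comment:

```python
def square_by_rule(n: int) -> int:
    """n^2 as n added to itself n times: an exact procedure, correct for all n >= 0."""
    total = 0
    for _ in range(n):
        total += n
    return total

def square_by_examples(n, known=((2, 4), (4, 16))):
    """Interpolate between known examples: plausible-looking, often wrong."""
    (x0, y0), (x1, y1) = known
    return y0 + (y1 - y0) * (n - x0) / (x1 - x0)

print(square_by_rule(3))      # 9
print(square_by_examples(3))  # 10.0: between 4 and 16, as predicted, but not 9
```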

  • @akashverma5756
    @akashverma5756 17 days ago

    Recently, companies have been lying a lot about their AI models, so they can't be trusted.

  • @kmjohnny
    @kmjohnny A month ago +2

    Ngl, I'm more interested in how humans are solving the problems. I want to see nice, simplified solutions.

  • @stigmistergaming3561
    @stigmistergaming3561 20 days ago

    It sounds to me like the AI was just brute-forcing any possible solutions. With the geometry question, it may have gotten lucky by guessing a correct solution quickly. With computing speeds being what they are today, a computer taking three days to solve a problem a human is tasked to do in 1.5 hours is quite slow.

  • @MichaelPiz
    @MichaelPiz A month ago +10

    Do human contestants receive the questions in their native languages, or are they given translations from the original problem statements (likely in English)? If the latter, how is that different from the AI receiving a translation into Lean?

    • @tollspiller2043
      @tollspiller2043 A month ago +6

      Students are given problems in up to 3 languages, at least one of them an official IMO language.
      Translation is done by leaders and checked by a jury to ensure fairness

    • @superneenjaa718
      @superneenjaa718 A month ago +5

      It's different in the way that reading and comprehending a question differs from having the data streams transmitted directly to each individual neuron in your brain. A tough math question often requires quite a bit of time to understand properly, especially something like geometry.

    • @erinsgeography3619
      @erinsgeography3619 A month ago

      I may be completely wrong, so if anyone reading this has sat the IMO, please correct me.
      I don't think that's always the case. On the IMO website, in the part where you can download the papers, some languages of participating countries are missing, like Filipino for example. So maybe not every human participant is given the chance to receive the questions in their language.
      As someone majoring in math in the PH, it was a bit disappointing. I was looking forward to learning mathematical terms in my native language; I don't even know what "cube" (the 3D figure) is in Filipino.
      But anyway, I'm not even mad lmao. The PH fares fairly well in the IMO even so.
      (Maybe a PH math olympian can tell me there actually are Filipino papers at the IMO, just not uploaded on the website.)

    • @tollspiller2043
      @tollspiller2043 A month ago

      @erinsgeography3619 No, if you request your paper in Filipino, it will be translated by the Filipino leaders and you can write the exam in that language. It just seems as if the entire Filipino team preferred English or some other language.

    • @koibubbles3302
      @koibubbles3302 A month ago +2

      Your brain doesn't speak English; it speaks in neurons and electricity. The computer getting the question pre-translated is equivalent to me having the question zapped into my brain by a machine. Very different things.

  • @katrinabryce
    @katrinabryce A month ago +1

    If you translated it into something like Maple or SageMath, then surely it could do it with 100% accuracy in less than a second?
    Which makes it not really impressive at all, because the intelligence is in understanding the question and composing the steps to solve it.

  • @vcvartak7111
    @vcvartak7111 A month ago +1

    I appreciate the poser of this geometry problem more than its solver.

  • @kringucitis2287
    @kringucitis2287 A month ago

    Reminds me of the time when Kasparov was battling Deep Blue. I think this thing could soon become better than humans at math

  • @jssamp4442
    @jssamp4442 A month ago +1

    I wonder if the Fields Medal is made of Field's metal.

  • @bengoodwin2141
    @bengoodwin2141 A month ago +1

    Are the problems it was tested on included in the training data at all? Most past Olympiad problems are available online, I would be most impressed if the questions used were ones that did not exist in the training data, which I would guess might be online data.
    Edit: this seems likely, because it constructed a novel solution.

    • @explosionspin3422
      @explosionspin3422 A month ago

      The problems the AI got tested on came from this year's competition, and were therefore not part of the training data.

    • @bengoodwin2141
      @bengoodwin2141 A month ago

      @@explosionspin3422 I see. That's impressive then.

  • @SuperPassek
    @SuperPassek A month ago +26

    Today, most people accept that a mathematical proof is correct because other mathematicians say it is correct. We cannot understand the proof of Fermat's Last Theorem; it is hard even to try. The reason we accept that Andrew Wiles's proof is correct is that other mathematicians approved it. Possibly, in the future, professional mathematicians will not understand a proof from an AI and will accept it as correct simply because other AIs approve it. For most people, nothing changes... sort of.

    • @superneenjaa718
      @superneenjaa718 A month ago +10

      I don't think so. Verification of a proof isn't anywhere near as difficult as solving the problem.

    • @Houshalter
      @Houshalter A month ago +9

      This has already happened. IIRC the proof of the four color theorem was very large and automatically generated. The mathematicians did the hard work of dividing the problem into many possible cases. Then the computer brute-forced each case, to prove it held for all such cases. The result is not readable or understandable though; we just accept the computer's verification.

    • @Sebo.
      @Sebo. A month ago +7

      @Houshalter You can mathematically prove that an algorithm gives the correct result, and that proof is correct, otherwise it wouldn't have been accepted; the problem was that the method wasn't very clean (or clever)

    • @hhhhhh0175
      @hhhhhh0175 A month ago

      Automated proof verification is a tool that basically solves this problem completely. All the AIs have to do is write their proofs in a standardized language, and then the proof verifier just checks that each step is valid

    • @superneenjaa718
      @superneenjaa718 A month ago +8

      @Houshalter We accept the verification because we know the algorithm running on the computer is correct.
      That's like saying "we don't really know whether the millions of digits calculated for pi are correct, since a computer calculated them". We know they're correct because the algorithm has been proven to be correct.

  • @joaoguerreiro9403
    @joaoguerreiro9403 A month ago +1

    Next is physics… it will not be long until it can do anything!

  • @clementihammock7572
    @clementihammock7572 A month ago +1

    Why not? After IBM's Deep Blue and AlphaGo, maybe it will happen extremely soon?

  • @thecoffeejesus
    @thecoffeejesus A month ago

    This is amazing. So much scientific research will be unlocked by this breakthrough

  • @brittanyfriedman5118
    @brittanyfriedman5118 A month ago +1

    How much water and electricity did it use?

  • @spl420
    @spl420 A month ago

    What's important here is that the AI there is supposed to be just a translator, and the solving is probably closer to something like Wolfram Alpha and other software that can do math for you.

    • @explosionspin3422
      @explosionspin3422 A month ago

      Not really; the AI was used more as a search algorithm. The Lean programming language is specifically designed for checking the validity of formal proofs (quite different from Wolfram Alpha). The issue with trying to automate the generation of proofs is the combinatorial explosion of possibilities at each step. The AI was used to guide this search based on the "intuition" it gained through training.

  • @v.f.38
    @v.f.38 Месяц назад +5

    I am trying my best to work in Machine learning. I am very capable in mathematics and I once won a math olympiad with a 35 out of 50. I hoped Machine learning would be a math heavy field, but my computer science college companions keep telling me I chose the wrong degree because the field has little mathematics. I am currently pursuing Andrew NG and IBM data science certifications. Do you think I chose the wrong fields? All this AI craze I was excited about is confusing me so much. There are so many directions I can't choose. I am among math enthusiast, so please, suggest me what I should do. Geometry was my least favourite math subject, so I wasn't too unhappy to know it would be heavily assisted with computers, but now I am so confused. How can I break into AI if AI keeps break everything I can do? Please, to everyone who got a math related degree, help me understand what I should do.

    • @pierrecurie
      @pierrecurie Месяц назад +6

      ML research requires math, but normal usage does not. All the heavy lifting is covered by libraries/packages. 95% of the math is covered by "import pytorch".
      If you're still interested in ML, you need to get better at programming/CS. The bulk of the time is spent mucking around with data, and most of that is ordinary coding. At larger scale, there is a lot of engineering with managing compute clusters.

    • @solanaceous
      @solanaceous Месяц назад +1

      really? im in the cs field and i think machine learning seems more math heavy than cs.

    • @jamescole3152
      @jamescole3152 Месяц назад

      No you are in the right field. Your expertise in needed in writing Algorithms, Go though all of the Algorithms and see how they are used. Just look at any of the imaging or sound Algorithms and see how they work. Then look at encryption Algorithms. I would like to have time to see if there is a reverse Algorithm for the Bitcoin encryption. They use a curve to encrypt the key. I have wondered if there is another reverse curve that will un encrypt it. You can access the encryption. I would start by making a few million keys to see if I can see any pattern at all. Put them on a chart or graph. Use numbers that are close together in sequence,

    • @leif1075
      @leif1075 Месяц назад

      ​​@@pierrecurie That sounds really BORING and FRUSTRATING. Do you spend all day staring at a screen? How do you not get fed up? And surely you don't have to do that 8 hours a day, 5 days a week? Surely like 4 hours a day is enough? Hope you can respond when you can.

    • @pierrecurie
      @pierrecurie Месяц назад

      @@leif1075 I think that's the case with most jobs. I imagine most youtubers spend all day staring at a screen LOL.
      FWIW, I'm currently between jobs. When I was doing deep learning, most of my time was spent mucking with data. Once the data pipeline is ready and the chosen architecture is ready, I set it training and wait hours to weeks. During that time, I spend most of it reading arXiv or cleaning up the code base.

  • @tuppyglossop222
    @tuppyglossop222 Месяц назад +1

    If the AI was given a translation of the problem, why were the students not given a diagram?

    • @birdbeakbeardneck3617
      @birdbeakbeardneck3617 Месяц назад

      I haven't read any papers, but I think the model deals with the information in an algebraic way.
      For example, given this problem:
      - segment AB
      - point M such that AM = BM
      - I, the midpoint of AB
      ? prove AB orthogonal to MI
      it proves it like this, using a database of theorems:
      - I is the midpoint of AB, thus IA = IB (from the definition of midpoint)
      - define L, the perpendicular bisector of AB
      - by the definition of the bisector, and given IA = IB, I is on L
      - same for M
      - I and M are on L, thus L is the line (IM)
      - L is perpendicular to (AB) (theorem on the perpendicular bisector), thus the same holds for (IM) (by identity)
      proof finished.
      So basically it uses definitions and theorems from some database, combines those with the definitions in the problem to add more facts, and is able to propose new constructions (points and so on). The proof assistant is probably used to iteratively check its facts, and the neural-network prover is there to make constructions that are more optimal than brute force.
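The little perpendicular-bisector argument in that comment can be sanity-checked numerically. This is just an illustrative Python sketch (not how AlphaGeometry works): build random points M on the perpendicular bisector of a random segment AB, then confirm both facts the proof uses, MA = MB and IM ⟂ AB.

```python
import math
import random

def check(trials=100):
    # For random segments AB, construct M on the perpendicular bisector of AB
    # (M = midpoint I plus a multiple of AB rotated 90 degrees), then verify
    # the two facts from the proof sketch: MA = MB, and IM is orthogonal to AB.
    random.seed(0)
    for _ in range(trials):
        ax, ay = random.uniform(-5, 5), random.uniform(-5, 5)
        bx, by = random.uniform(-5, 5), random.uniform(-5, 5)
        ix, iy = (ax + bx) / 2, (ay + by) / 2             # I, midpoint of AB
        t = random.uniform(0.5, 3)                        # offset along the bisector
        mx, my = ix - t * (by - ay), iy + t * (bx - ax)   # M on the bisector
        ma = math.dist((mx, my), (ax, ay))
        mb = math.dist((mx, my), (bx, by))
        dot = (bx - ax) * (mx - ix) + (by - ay) * (my - iy)  # AB . IM
        if not (math.isclose(ma, mb, abs_tol=1e-9) and abs(dot) < 1e-9):
            return False
    return True

print(check())  # -> True
```

A numeric check like this proves nothing, of course; the point of systems like Lean is that the symbolic proof itself is verified.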

    • @birdbeakbeardneck3617
      @birdbeakbeardneck3617 Месяц назад

      But yeah, I think they should be given a diagram; I always keep redrawing lol

    • @birdbeakbeardneck3617
      @birdbeakbeardneck3617 Месяц назад

      For the above reply, quoting from the Google blog on AlphaGeometry (you can find the link in the description):
      Formal languages offer the critical advantage that proofs involving mathematical reasoning can be formally verified for correctness. Their use in machine learning has, however, previously been constrained by the very limited amount of human-written data available.
      In contrast, natural language based approaches can hallucinate plausible but incorrect intermediate reasoning steps and solutions, despite having access to orders of magnitude more data. We established a bridge between these two complementary spheres by fine-tuning a Gemini model to automatically translate natural language problem statements into formal statements, creating a large library of formal problems of varying difficulty.
      and
      AlphaGeometry 2 employs a symbolic engine

    • @arturlima3210
      @arturlima3210 Месяц назад +3

      The students weren't competing with the AI. The IMO is a real Olympiad and has been held for decades. It just so happens that the AI was tested on this occasion, but the students don't need a fair chance against the robot because it was not competing. It is like reading that cheetahs are able to run way faster than Usain Bolt and asking why Usain Bolt wasn't given 4 legs: it's just a way to make a comparison, not a real competition.

  • @ookjannesplanting1296
    @ookjannesplanting1296 Месяц назад +1

    I'd be able to solve them too if I could Google it

  • @Maxime-fo8iv
    @Maxime-fo8iv Месяц назад +10

    "Just as we use calculators for doing intricate calculations, we soon may be using computers to assist with mathematical proofs."
    I'm worried as to how this will affect jobs in math...

    • @thecoffeejesus
      @thecoffeejesus Месяц назад

      Cyberpunk future or Star Trek future
      Hopefully less Matrix or Terminator future and more Jetsons or Big Hero 6 future

    • @wumi2419
      @wumi2419 Месяц назад +7

      It probably won't, as we are already using computers for verification. Formalizing everything is annoying, but it can help find an error. It still does not help others understand your work, though, which is the main issue: you can know that something is correct, but the implications of that correctness might be hard to see.

    • @Maxime-fo8iv
      @Maxime-fo8iv Месяц назад +3

      ​@@wumi2419 I feel like this is a different kind of assistance we're talking about here. It's one thing for computers to check a proof, but it's something else entirely for computers to prove things by themselves, especially once it reaches a level where it can generate very complex proofs comparable to, or even exceeding, human level.
      Also, about AI helping humans understand proofs: I think there is nothing preventing that. It seems just like the type of thing transformers could be good at, on the condition that we explore this area of research a bit further.

    • @wumi2419
      @wumi2419 Месяц назад +3

      @@Maxime-fo8iv Computers deriving proofs has been tried, but it was computationally expensive because the process is similar to brute force: you can derive exponentially many true expressions from a set of axioms. It did, however, result in some useful theorems in a very specific field, where the axiom set (already-proven theorems) was limited. I do not remember where exactly, and I think it also caused a dispute over authorship.
      You can use machine learning to pick a direction, but in the end it will depend on whether machine learning, human intuition, or a combination of the two is more efficient. In a way, both are looking through an infinite sea of valid expressions. And it still comes down to interdisciplinary communication: math is good, but what you want to see in the end is mathematical instruments being used to solve engineering problems.
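The brute-force blow-up described above can be sketched in a toy way (this is not how real provers work, just an illustration): start from three atomic facts, let the only inference rule be "conjoin any two known facts", and count distinct derivable expressions per round.

```python
from itertools import product

# Naive forward chaining: every round, derive "(p&q)" for all known p, q.
# The count of distinct expressions explodes within a few rounds, which is
# why undirected proof search drowns and some heuristic (human or learned)
# is needed to pick a direction.
facts = {"A", "B", "C"}
sizes = []
for _ in range(3):
    facts |= {f"({p}&{q})" for p, q in product(sorted(facts), repeat=2)}
    sizes.append(len(facts))

print(sizes)  # -> [12, 147, 21612]
```

Three rounds of a single trivial rule already yield tens of thousands of expressions; real axiom systems with many rules are far worse.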

    • @Maxime-fo8iv
      @Maxime-fo8iv Месяц назад +1

      @@wumi2419 Yeah, until now there wasn't much "intelligence" to those proof assistants, so that makes sense. And you talk about interdisciplinary communication, but when we're talking about proving theorems that's more about pure math, and I think many mathematicians do math just for the sake of it, without aiming to pass their work on to physicists or whoever (think of the seven Millennium Prize Problems). And in that domain you have an awful lot of unproven conjectures that mathematicians are stuck on, and there's a chance those could be solved by cleverly using transformers and other current tools, in my opinion, causing a huge leap in the field of mathematics.

  • @loikcarothers
    @loikcarothers Месяц назад

    At some point, the only thing we humans will need is energy, because everything, or almost everything, will be automated.

  • @elreturner1227
    @elreturner1227 Месяц назад +1

    The funniest part of this to me is that I use AI as a calculator that can do custom commands, and the other week it failed at basic addition 5 times… IN A ROW (Gemini), and yet it can do Math Olympiad questions.

    • @redfinance3403
      @redfinance3403 Месяц назад +4

      Quite the difference between a ChatBot and an AI trained specifically for solving maths problems.

    • @olafschluter706
      @olafschluter706 Месяц назад +4

      @@redfinance3403 I am not even sure that those bots are trained in the ML sense. Automated mathematical proof systems existed before. It is likely more of an expert system that knows all of the proven mathematical theorems and then uses backtracking algorithms to apply them to a given problem for a proof. LLMs like ChatGPT or Gemini do not understand math, as they do not understand any language. To them, any input is a stream of symbols correlated by the model's neural network to a stream of output symbols, based on statistics derived from the training material. You can't do math this way; you can't even sum two arbitrary numbers this way.

    • @redfinance3403
      @redfinance3403 Месяц назад

      I was going by the assumption that DeepMind's AI should be able to do operations such as addition, since that might be required in a proof; regardless of how many patterns are simply applied, you still need to input the numbers. If that is done separately, then yes, I agree.

    • @explosionspin3422
      @explosionspin3422 Месяц назад +2

      It's more of a combination of the two. The issue with naive search algorithms is that, no matter how optimized, they'll always fail on complex enough proofs because of the combinatorial explosion at each step. The techniques used for the AI in the video are more similar to the way their Go AI has worked in the past (essentially playing a single-player game against the Lean theorem prover as the opponent).

  • @corpsie666
    @corpsie666 Месяц назад

    The main takeaway is that part of the "math" competition isn't math at all, it is written language.

  • @kyoai
    @kyoai Месяц назад +3

    Ah yes, Gemini, the totally unbiased and totally apolitical Google AI model that totally does not make up facts that don't line up with reality. That being said, it's interesting that it found alternative solutions, though I would be highly skeptical of anything AI produces, no matter how accurate it may look at first glance.

    • @TheManinBlack9054
      @TheManinBlack9054 Месяц назад

      No AI is unbiased or apolitical, since it's based on human data, unless you count your preferred worldview as apolitical and unbiased and all others as not.

    • @explosionspin3422
      @explosionspin3422 Месяц назад +1

      The solutions are written in Lean, a programming language specifically designed for checking formal proofs. The proofs are therefore correct by construction, as long as the code compiles.
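As a tiny illustration of what "correct by construction" means (a generic Lean 4 example, not DeepMind's actual output): Lean accepts a file only when the proof term really establishes the stated claim, so a compiling proof is a checked proof.

```lean
-- Lean rejects this file unless the term on the right actually proves
-- the statement on the left; there is no way to "bluff" past the checker.
example (a b : Nat) : a + b = b + a := Nat.add_comm a b
```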

  • @Rollermonkey1
    @Rollermonkey1 Месяц назад +1

    Sorry, but the amount of electricity used by an AI to solve a problem is staggering when compared to how much is used by a calculator. Sometimes progress isn't really progress, particularly if the environmental impact is truly considered.

    • @Singularity606
      @Singularity606 Месяц назад +2

      Can you please stop using the internet? You're killing the rainforests.

  • @SG49478
    @SG49478 Месяц назад +12

    I gave it a chance and gave 4 problems from the 1st round of the German math olympiad to ChatGPT. The first round is the easiest of 4 rounds; usually the medalists of the 4th round are considered for IMO participation. ChatGPT did not solve a single one of the 4 first-round problems correctly.

    • @snowfloofcathug
      @snowfloofcathug Месяц назад +19

      You gave *ChatGPT* the problem, which is a very different AI than is talked about in the video. ChatGPT is a glorified autocorrect, it can’t even reliably do basic arithmetic

    • @Buphido
      @Buphido Месяц назад +4

      ChatGPT has way better odds of solving a problem if you rephrase it in logical terms first. Give it the puzzle with the 4 upside-down wine glasses, where you have to turn all of them up in the fewest moves but must turn exactly three different glasses each time, and it fails. Rephrase it as a 4-bit binary number being XORed with 4-bit masks whose digits sum to 3 and it succeeds. Similar to this AI. But as the previous commenter said, it is a different AI entirely. ChatGPT is mostly a parrot made to regurgitate what it is fed, with only a very baseline level of logical consistency.
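For the record, the glass puzzle mentioned above is small enough to brute-force. A sketch (my own encoding, not the commenter's exact one): represent the four glasses as 4 bits, a move as XOR with a 3-bit mask, and run BFS for the shortest path from all-down to all-up.

```python
from collections import deque
from itertools import combinations

def min_moves(n=4, flips_per_move=3):
    # States are n-bit integers (bit i set = glass i upright). Each move XORs
    # the state with a mask having exactly `flips_per_move` bits set. BFS
    # finds the fewest moves from all-down (0) to all-up (2^n - 1).
    goal = (1 << n) - 1
    masks = [sum(1 << i for i in c)
             for c in combinations(range(n), flips_per_move)]
    dist = {0: 0}
    queue = deque([0])
    while queue:
        state = queue.popleft()
        if state == goal:
            return dist[state]
        for mask in masks:
            nxt = state ^ mask
            if nxt not in dist:
                dist[nxt] = dist[state] + 1
                queue.append(nxt)
    return None  # unreachable for these parameters

print(min_moves())  # -> 4
```

Four moves are also easy to justify by hand: each move flips an odd number of glasses, so the upright count changes parity every move, ruling out any odd total, and two moves can be checked to net-flip at most two glasses.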

    • @beyondscience004
      @beyondscience004 Месяц назад +2

      GPT is a calamity when it comes to original thinking towards solving practical problems

    • @pierrecurie
      @pierrecurie Месяц назад +4

      @@Buphido There's also the issue of data leakage. ChatGPT was trained on internet text, and the solutions to a lot of the older problems are included in the training data.

    • @cbnewham5633
      @cbnewham5633 Месяц назад +6

      GPT4 is a Large Language Model. The clue is in the name. It's just predicting words. That's clever, but pretty useless for solving even simple maths questions.

  • @jucom756
    @jucom756 29 дней назад

    The IMO questions are always fun to try, but P3 and P6 are the only ones that are really worth anything. The problem is that P1 and P2 are always only 2 ideas away from a solution, so with a pretty simple path solver in LEAN you would get those for way less time and resources. P3 and P6 as well, but those at least require 5 steps or a creative trick that would take a lot of ordinary steps to represent. My point is that a human with a year's worth of formal training can solve problems on the IMO in 2 hours (and 10 minutes with computer assistance) that the AI couldn't even solve, and the only reason the rest take more time is that writing is inefficient. No matter how much braindead tech bros try, they are not qualified enough in math to design algorithms that can actually do a good mathematician's job.

  • @marat61
    @marat61 Месяц назад

    Cherry picking is all Google needs

  • @jespermikkelsen7553
    @jespermikkelsen7553 Месяц назад

    What does Google AI think about the Riemann hypothesis?

    • @MrTomyCJ
      @MrTomyCJ Месяц назад +5

      This video focuses on an AI that is only capable of solving geometry problems like these. Because of how it works, it wouldn't work for other areas of math, like the Riemann hypothesis. Those are still far out of reach, as far as I know.

  • @jameshiggins-thomas9617
    @jameshiggins-thomas9617 Месяц назад

    I'll assume Google's system is *not* an LLM?

  • @swbn6673
    @swbn6673 Месяц назад +1

    Maybe I didn't understand it correctly, but AI didn't solve the problems-Lean did! The difference is that no language model did the job, but a system that solves deterministic problems did. The challenge is that the language model has to determine whether the problem is deterministic or not.

    • @explosionspin3422
      @explosionspin3422 Месяц назад +2

      Lean does not solve problems, it only verifies proofs written by the user.

  • @thecriss88
    @thecriss88 Месяц назад +3

    Why go to college anymore? What's the purpose of the humans now?

    • @MrTomyCJ
      @MrTomyCJ Месяц назад +3

      That is the same as asking what is the purpose of your life. Just find a goal that would make you happy and work towards it.

    • @thecriss88
      @thecriss88 Месяц назад +2

      @@MrTomyCJ What if my goal used to be X, Y, and Z, but it's now pointless due to AI? Don't you see that sooner or later humans will be degraded to puppets, similar to what cats and dogs are to us now?

    • @HedwigBleicher
      @HedwigBleicher Месяц назад

      To helpdesk and to take care of other humans

    • @thecriss88
      @thecriss88 Месяц назад

      @@HedwigBleicher Why are you assuming the AI won't be able to do it?

    • @TheManinBlack9054
      @TheManinBlack9054 Месяц назад +2

      Same as it was before, to make the world a better place than it was when you were born. So far AI hasn't been taught to do that yet, so we got our work cut out for us.

  • @jimjimmy2179
    @jimjimmy2179 Месяц назад +3

    AI is only as good as the people who trained it and the data used to train it.
    Also, I'm somehow missing what this is all good for except being a toy. E.g. to this day I don't see a single application where it has a dominant role the way other technologies do in their own areas. Anything I see is either
    - talk about what it could be doing
    - demonstrations of what it can be trained to do
    - use as a toy: useless "chat bot" advisory search engines with a voice interface.
    The way it's being pushed, though, is fascinating.

    • @chocolatemodelsofficial5859
      @chocolatemodelsofficial5859 Месяц назад +1

      It's amazing that 99% of people won't see your point. This AI push is all smoke and mirrors.

  • @puliverius
    @puliverius Месяц назад

    So you can revisit older videos on how AI will solve them :)

  • @bendono
    @bendono Месяц назад +1

    I do not think that the fact that humans had to "translate" the problem into LEAN is necessarily a problem. Word problems are often the most difficult for students as well, because not all students interpret the human language as the teacher / textbook expected. Human language is not necessarily unambiguous, and multiple interpretations are often possible.

  • @adityaaman1928
    @adityaaman1928 Месяц назад +3

    Is there any olympiad for college students? Please help!

    • @pierrecurie
      @pierrecurie Месяц назад +6

      Putnam. It includes calculus and is much harder than IMO.

    • @wowyok4507
      @wowyok4507 Месяц назад +5

      @@pierrecurie no the IMO is a bit harder than the Putnam, Evan Chen agrees, and look at the official AOPS scale

    • @JohnDoe-ti2np
      @JohnDoe-ti2np Месяц назад

      @@wowyok4507 Well, in the Putnam, you get only 30 minutes to solve each problem, whereas you get 1.5 hours per problem in the IMO. Back when I was a student, graders for the IMO were more generous with partial credit than graders for the Putnam were (but maybe times have changed).

  • @ToguMrewuku
    @ToguMrewuku Месяц назад

    The time has come. Let's start digging caves, the robots are about to conquer the surface.

  • @boxmanatee
    @boxmanatee Месяц назад

    Just like in any programming or logical language, math included: there are infinitely many solutions to any solvable problem.

  • @calholli
    @calholli Месяц назад +9

    All this will do is make the majority much more ignorant: like how we now don't remember phone numbers because they are stored in our cell phones. When you create a tool that can do problems for you, you'll end up forgetting how to do them yourself. Use it or lose it

    • @GrifGrey
      @GrifGrey Месяц назад +11

      There are WAY better examples than memorizing phone numbers.

    • @leif1075
      @leif1075 Месяц назад +1

      Yeah, memorization is boring; that's what AI should and can be used for. But is AI already smarter than many humans, since many humans cannot solve this, or what?

    • @MrTomyCJ
      @MrTomyCJ Месяц назад +2

      That's the same for every tool that mankind has invented.
      The novelty now is that these tools replace mental work, not just physical work. So humanity needs to learn to keep training our minds, just as we learned to keep exercising our bodies.

    • @farhanrejwan
      @farhanrejwan Месяц назад +1

      lol, phone numbers aren't knowledge and remembering them isn't wisdom. A phone number is a piece of information, if anything, and memorizing information doesn't make you "not ignorant".
      At least give a relevant example.

    • @farhanrejwan
      @farhanrejwan Месяц назад

      What you people are actually worried about is AIs having gained the "wisdom" to do things.
      This is certainly not the case lol. AIs haven't suddenly gained a deeper understanding of math by being able to solve this problem; at best, what they got is a faster guessing ability.
      This might only cut some jobs, but certainly not all. Just like how calculators still haven't replaced accountant jobs, AIs won't be able to replace many jobs. Why? Because AIs only have intelligent guessing, not any sentience or consciousness to drive themselves to do something "willingly".

  • @clarenceauerbach7934
    @clarenceauerbach7934 Месяц назад

    I mean, I've been learning Lean for a semester, and writing a proof in Lean is almost closer to coding than math. So yeah, it's cool to see Lean being useful, but the translating is like 50% of the work.

  • @user-iy2fd3bt7w
    @user-iy2fd3bt7w Месяц назад

    Next time it will solve more problems, and faster.

  • @commontater652
    @commontater652 Месяц назад

    I hope AI can fix the Englishes problems also.

  • @henrymarkson3758
    @henrymarkson3758 Месяц назад

    USA, China, Korea and Russia dominate the IMO.
    They also seem to dominate the AI field.

  • @wearron
    @wearron Месяц назад +1

    Isn't this a repost? I think you already made this type of vid...

  • @mikelord93
    @mikelord93 Месяц назад +5

    You are missing a key detail when saying it's not fair that the AI earned a silver medal because it got extra time. Time, to a machine, is not a constant. Next year's hardware might run the exact same AI on the exact same problems at ten times the speed. Translating the problems should also be integrated into the models and slowly but surely get better and better. We're this close to math-capable models.

    • @roxas8999
      @roxas8999 Месяц назад +3

      I mean, you're correct that hardware keeps improving, but we're not using next year's tech. We are evaluating the current state of the tech, from hardware to AI model, and it took the current tech three days to accomplish it.
      By way of analogy, though perhaps not the best and I'm sure someone could come up with a better one, we don't wait to grade children's performance in school until they're fully grown and graduated, we grade their current performance. Of course things can always get better, but we are evaluating the current state of things.

    • @explosionspin3422
      @explosionspin3422 Месяц назад

      Well, I think this is still not accurate. They could've just thrown triple the computing power at it and gotten the solutions in a single day (assuming the algorithm is parallel enough).

  • @simplefahrenheit4318
    @simplefahrenheit4318 Месяц назад

    Gemini? 8=9?

  • @MS-sv1tr
    @MS-sv1tr Месяц назад +1

    Solved the puzzle in 5 seconds just from the thumbnail. Too easy

    • @leif1075
      @leif1075 Месяц назад

      You're kidding right?

  • @soki9303
    @soki9303 Месяц назад

    This isn’t a mathematical breakthrough

  • @Mayank-lv9vb
    @Mayank-lv9vb Месяц назад +2

    3 views 6 likes 😮

    • @juianmoeil9682
      @juianmoeil9682 Месяц назад +2

      193 views 26 likes

    • @micahlong2073
      @micahlong2073 Месяц назад +4

      There are all kinds of delays in updating that might be going on

    • @TheWestVirginianGuy
      @TheWestVirginianGuy Месяц назад +1

      Yeah, wack.

    • @L17_8
      @L17_8 Месяц назад +1

      ​@@TheWestVirginianGuy God sent His only son Jesus to die for our sins on the cross. This was the ultimate expression of God's love for us. Then God raised Jesus from the dead on the third day. Jesus loves you ❤️ but the end times written about in the Holy Bible are already happening in the world. Please REPENT now and turn to Jesus and receive Salvation before it is too late. Time is almost up.

    • @L17_8
      @L17_8 Месяц назад +1

      ​@@juianmoeil9682 Jesus loves you soooo much ❤️

  • @Research.3735
    @Research.3735 Месяц назад +3

    America yaaa , Hallloooooooooooo

    • @L17_8
      @L17_8 Месяц назад +1

      God sent His only son Jesus to die for our sins on the cross. This was the ultimate expression of God's love for us. Then God raised Jesus from the dead on the third day. Jesus loves you ❤️ but the end times written about in the Holy Bible are already happening in the world. Please REPENT now and turn to Jesus and receive Salvation before it is too late. Time is running out.

    • @Research.3735
      @Research.3735 Месяц назад +3

      @@L17_8 Ok. But I'm happy with my religion. Happy salvations to yall.

  • @tvsettv
    @tvsettv Месяц назад

    So what? Evolution will find what to do. I support AI!

  • @tsunningwah3471
    @tsunningwah3471 Месяц назад

    Guide

  • @aryanshvikramsingh9842
    @aryanshvikramsingh9842 Месяц назад +1

    Yoo

  • @HoodiesAreAwesome
    @HoodiesAreAwesome Месяц назад +9

    You thought nobody would notice you using AI for the voice-over didn't you?

    • @wowyok4507
      @wowyok4507 Месяц назад +3

      ???

    • @Nick-lm9hg
      @Nick-lm9hg Месяц назад +3

      The truth is this whole channel is AI. There's no human behind the scenes

    • @wowyok4507
      @wowyok4507 Месяц назад

      @@Nick-lm9hg WHAT EXPLAIN

    • @Nick-lm9hg
      @Nick-lm9hg Месяц назад

      @@wowyok4507 this channel is a relatively early example of AI generated content. It's been going for years

    • @wowyok4507
      @wowyok4507 Месяц назад

      @@Nick-lm9hg please explain more

  • @jamescole3152
    @jamescole3152 Месяц назад

    Strange. Computers are so fast at math. This is where AI is missing the boat. Children learn by having teachers. My AI program, which I call HAL, would have millions of teachers. The teachers would be proven experts in whatever they are teaching. Any of the students who can solve this problem would teach the program their method. So the computer would have the knowledge of all the best people in the world.
    The limitation at this time is the amount of RAM that can be used at once. But that will not be a problem as the hardware gets better every day. One day these kinds of problems will be solved in seconds. They are really simple. You can think about the problem like this: the movie The Matrix. When the woman needed to know how to fly a certain helicopter, she loaded the helicopter pilot program. Now she can fly the helicopter with the same expertise as the pilot. Actually better, because the program is the combined knowledge of all of the best pilots.
    So this is how the AI will work. Math problem? Load the math program...

  • @KevinJohnson-gc2kn
    @KevinJohnson-gc2kn Месяц назад

    Stopped listening at "Toughed".

  • @cricri593
    @cricri593 Месяц назад

    doping, doping, doping, doping and, uh, doping

  • @user-xx3zj3xb9b
    @user-xx3zj3xb9b Месяц назад

    Engineers dab on mathematicians once again

  • @DougTheDouglyDragon
    @DougTheDouglyDragon Месяц назад +2

    Yippee, I'm early 😃

  • @user-fed-yum
    @user-fed-yum Месяц назад

    Funny how none of what Google succeeded with was "AI". Gemini didn't work at all. And AlphaZero is an incredible mathematical solving engine that doesn't scrape the web to plagiarize, and it's also not AI. Better luck next time. Or have we hit peak "AI"?

    • @TheManinBlack9054
      @TheManinBlack9054 Месяц назад

      How is AlphaZero not AI? What is it then? I don't think you know what you're talking about

    • @TheManinBlack9054
      @TheManinBlack9054 Месяц назад

      And Gemini IS part of that system too, btw. Could've researched that

  • @JNET_Reloaded
    @JNET_Reloaded Месяц назад +1

    It's irrelevant because of the training time it took to make the LLM in the first place!

    • @minhcongnguyen5917
      @minhcongnguyen5917 Месяц назад +8

      To be fair, the students also train for years for this Olympiad.

  • @L17_8
    @L17_8 Месяц назад +2

    God sent His only son Jesus to die for our sins on the cross. This was the ultimate expression of God's love for us. Then God raised Jesus from the dead on the third day. Jesus loves you ❤️ but the end times written about in the Holy Bible are already happening in the world. Please REPENT now and turn to Jesus and receive Salvation before it's too late. Time is almost up.

    • @cbnewham5633
      @cbnewham5633 Месяц назад +6

      No thanks. I'd rather use my limited time on this planet learning new things than reading some dusty old book that was written by numerous simple people long ago.

    • @tollspiller2043
      @tollspiller2043 Месяц назад

      Jesus womp womp

    • @trueriver1950
      @trueriver1950 Месяц назад +1

      I'm waiting for the AI monk to be invented that can do the believing for me. ...

  • @turtletom8383
    @turtletom8383 Месяц назад +1

    "Found", not "came up with".

    • @trueriver1950
      @trueriver1950 Месяц назад +2

      "Came up with" is also correct: it conveys the idea that the AI put together knowledge it already had to "come up with" something it didn't know at the outset.
      That's a better choice of words than "found", because "found" could mean that the solution was already in a big lookup table in memory.
      So I'm not clear what your problem is with this choice of words. Please explain?

    • @turtletom8383
      @turtletom8383 Месяц назад

      @@trueriver1950 Answers exist before we come upon them.