ChatGPT can't multiply, but can AI do math?

  • Published: 7 May 2023
  • A discussion about AI and math research.
    Resources to learn more and other interesting notes:
    SAT solvers:
    en.wikipedia.org/wiki/SAT_solver
    Pythagorean Triples paper: arxiv.org/pdf/1605.00723.pdf
    A nice writeup on the use of SAT solvers on another recent problem: bsubercaseaux.github.io/blog/...
    Neural Networks in pure math:
    arxiv.org/pdf/2104.14516.pdf
    arxiv.org/pdf/2304.12602.pdf
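    The Pythagorean Triples paper linked above asks whether {1, ..., n} can be 2-colored so that no Pythagorean triple is monochromatic (a SAT solver famously settled this at n = 7825). A minimal stdlib-only sketch of the underlying question, with brute force standing in for the SAT solver (only feasible for tiny n):

    ```python
    from itertools import product

    def pythagorean_triples(n):
        """All triples (a, b, c) with a < b < c <= n and a^2 + b^2 = c^2."""
        return [(a, b, c)
                for a in range(1, n + 1)
                for b in range(a + 1, n + 1)
                for c in range(b + 1, n + 1)
                if a * a + b * b == c * c]

    def two_colorable(n):
        """Brute-force check: can {1..n} be 2-colored with no monochromatic
        Pythagorean triple?  A real SAT solver replaces this 2^n loop."""
        triples = pythagorean_triples(n)
        for coloring in product((0, 1), repeat=n):
            if all(len({coloring[a - 1], coloring[b - 1], coloring[c - 1]}) > 1
                   for a, b, c in triples):
                return True
        return False
    ```

    For small n such a coloring always exists; the paper's result is that it stops existing at n = 7825, which brute force could never reach.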
    Corrections:

Comments • 28

  • @saaah707 • 1 year ago +12

    I played with this a bit. Very interesting. It can tell you how to multiply, but it can't follow its own instructions.
    I tried telling it not to give me an answer without first double-checking the result by dividing back to the original number, and to show its work. It proceeded to walk me step by step to the wrong answer, and then through the "check" step, also done incorrectly but magically arriving at the original multiplicand as "proof" of correctness.
    Oddly enough, this is the hallmark behavior of an undergrad who doesn't want to learn -- these things are getting more and more humanlike by the day. 😅
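    The self-check this comment describes (multiply, then verify by dividing back) is trivial to state precisely in code, which is what makes the model's failure to follow it striking. A minimal sketch:

    ```python
    def checked_multiply(a, b):
        """Multiply two integers, then verify the result by dividing the
        product back by one factor -- the check the commenter asked
        ChatGPT to perform before answering."""
        product = a * b
        if b != 0:
            quotient, remainder = divmod(product, b)
            # A correct product divides back to the original multiplicand.
            assert quotient == a and remainder == 0
        return product
    ```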

  • @FireyDeath4 • 2 months ago +2

    It also ingests text as tokens. Since patterns like "123" can be single tokens, this creates a lot of confusion when it tries to process data with unique numbers in it. It's much harder to train it to predict digit-level interactions properly when random variation like that is introduced.
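    A toy illustration of the point: if a tokenizer greedily groups digits into multi-character chunks (as BPE-style tokenizers often do), token boundaries don't line up with place value, so the same trailing digits land in different tokens depending on the number's length. This is a made-up stand-in tokenizer, not a real one:

    ```python
    def chunk_tokens(text, size=3):
        """Toy stand-in for a subword tokenizer: characters are greedily
        grouped into chunks of up to `size`, like BPE merging "123" into
        one token.  Illustrative only -- real tokenizers are learned."""
        return [text[i:i + size] for i in range(0, len(text), size)]
    ```

    `chunk_tokens("1234")` gives `["123", "4"]` while `chunk_tokens("12345")` gives `["123", "45"]`: the hundreds/tens/units columns that grade-school arithmetic aligns are split differently in each case.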

  • @metachirality • 10 months ago +2

    In theory, given enough data, a language model can do arithmetic accurately in general. For example, the researcher Neel Nanda trained a network to do modular arithmetic, and remarkably it learned an algorithm that generalizes to every case.
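    The task in the modular-arithmetic ("grokking") experiments this comment refers to is tiny to specify: learn (a, b) → (a + b) mod p from examples. A sketch of the full task table (p = 113 is the modulus I believe those experiments used; treat that as an assumption):

    ```python
    def modular_addition_dataset(p=113):
        """All (a, b, (a + b) mod p) triples -- the complete task table
        for the modular-addition experiment.  A transformer trained on a
        fraction of these rows eventually generalizes to the rest."""
        return [(a, b, (a + b) % p) for a in range(p) for b in range(p)]
    ```

    The striking finding was that, long after memorizing the training split, the network abruptly switched to an internal algorithm (Fourier-based, per the interpretability analysis) that is correct on all p² inputs.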

  • @anonymousOrangutan • 1 year ago

    awesome video btw! (:

  • @M.O.Valent • 1 year ago

    I also noticed that when I tried to have it work through some mathematical problems

  • @BlackBull. • 1 year ago +12

    Now it makes sense. I knew it was beefy autocomplete, but I never understood why it got close yet never hit.

    • @izzyonyt • 1 year ago +1

      It's not *just* autocomplete. 🤦‍♀️ Well, GPT 4 at least.

    • @jmarvins • 10 months ago +3

      @@izzyonyt GPT4 is much closer to "just autocomplete" than it is to "general intelligence" in any philosophically relevant sense

    • @izzyonyt • 10 months ago +1

      @@jmarvins That's an irrelevant statement to make though. It's still not just autocomplete

    • @jmarvins • 10 months ago +3

      @@izzyonyt I suppose we should stop using the inaccurate term "AI" as well then, but nobody will do that.
      The workings of LLMs have much more to do with autocomplete than with however human brains produce general intelligence.
      Facepalm someone making an autocomplete joke all you want, but then you should be facepalming every "AI" comment as well.

    • @ckq • 10 months ago +1

      @@jmarvins AGI might be impossible, but if it is possible, don't you think it would be made in a similar manner to GPT-4 (training and fine-tuning an LLM to maximize general intelligence)?
      You're overestimating human intelligence because GPT4 is better than 90-99% of humans in many tasks.
      If you disagree, is it because you think the only way to achieve AGI is some other way that's more similar to how the human brain functions?

  • @baerlauchstal • 1 year ago +2

    I had fun trying to get ChatGPT to admit it couldn't solve y' = x - y^3 symbolically. Just a load of bluster, signifying nothing.
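    For context on why ChatGPT's bluster is hopeless here: y' = x - y^3 has no elementary closed-form solution, so the honest answer is a numerical one. A standard fourth-order Runge-Kutta sketch (classic textbook method, nothing specific to this video):

    ```python
    def rk4(f, x0, y0, x_end, steps=1000):
        """Classic RK4 integrator for y' = f(x, y): the usual numeric
        fallback when, as with y' = x - y^3, no symbolic solution exists."""
        h = (x_end - x0) / steps
        x, y = x0, y0
        for _ in range(steps):
            k1 = f(x, y)
            k2 = f(x + h / 2, y + h * k1 / 2)
            k3 = f(x + h / 2, y + h * k2 / 2)
            k4 = f(x + h, y + h * k3)
            y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
            x += h
        return y

    # Approximate y(2) for y' = x - y^3 with y(0) = 0.
    y_at_2 = rk4(lambda x, y: x - y**3, 0.0, 0.0, 2.0)
    ```

    For growing x the solution hugs the nullcline y ≈ x^(1/3), which a symbolic "answer" from a chatbot will not tell you.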

  • @Turalcar • 10 months ago

    2:39 To be clear, all such problems can be converted; it's just that the representation can be too large to be tractable.
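    A concrete feel for that blowup: the naive CNF encoding of the pigeonhole principle ("n + 1 pigeons can't fit in n holes") uses a variable per pigeon-hole pair and grows cubically in clauses. A stdlib sketch that just builds and counts the clauses (DIMACS-style signed-integer literals):

    ```python
    def pigeonhole_cnf(n):
        """Naive CNF for 'n+1 pigeons, n holes': variable v(p, h) means
        pigeon p sits in hole h.  Returns clauses as lists of signed ints
        (positive = literal, negative = negated literal)."""
        def v(p, h):
            return p * n + h + 1
        clauses = []
        # Each pigeon sits in some hole.
        for p in range(n + 1):
            clauses.append([v(p, h) for h in range(n)])
        # No two pigeons share a hole (pairwise exclusion).
        for h in range(n):
            for p in range(n + 1):
                for q in range(p + 1, n + 1):
                    clauses.append([-v(p, h), -v(q, h)])
        return clauses
    ```

    The clause count is (n + 1) + n·C(n+1, 2), i.e. Θ(n³) clauses for a statement a human proves in one line, and resolution-based solvers provably need exponential time on this family.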

  • @geekjokes8458 • 9 months ago +1

    what exactly do you mean by "graphs closer to disproving the conjecture"? i feel like that wouldn't translate to most things
    like, parker squares are almost perfect magic squares, but they don't really do anything toward proving or disproving whether perfect magic squares exist, and it's not inconceivable that a neural network could "believe" it does and keep tweaking its dataset and keep training down a useless path

    • @ASackVideo • 9 months ago

      The idea is that you have some way of measuring how close you are to disproving it. It's particularly well-suited to the situation where you have a function that takes in a graph and returns a real number, and you conjecture that the function is bounded below by some constant. "Being closer to disproving it" just means being closer to that constant.
      It's not a technique that will work on every problem, and it will certainly sometimes go down useless paths. But it was shown to be useful on a couple of problems. If you're interested in more details, I recommend reading the paper.
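    As a toy version of that setup (everything here is a made-up stand-in, not the paper's actual conjecture or method): take a real-valued graph function, conjecture f(G) ≥ bound, and score each candidate graph by how close it gets to the bound. Random sampling plays the role that a learned policy plays in the reinforcement-learning approach:

    ```python
    import random

    def f(adj):
        """Toy graph invariant: the edge count.  A stand-in for the
        real-valued function in the conjecture (the actual paper uses
        eigenvalue-based invariants)."""
        n = len(adj)
        return sum(adj[i][j] for i in range(n) for j in range(i + 1, n))

    def search_counterexample(n, bound, tries=1000, seed=0):
        """Random search for a graph with f(G) < bound.  The RL method the
        video describes replaces this blind sampling with a policy that is
        rewarded for scores closer to (and past) the bound."""
        rng = random.Random(seed)
        best = None
        for _ in range(tries):
            adj = [[0] * n for _ in range(n)]
            for i in range(n):
                for j in range(i + 1, n):
                    adj[i][j] = adj[j][i] = rng.randint(0, 1)
            score = bound - f(adj)   # positive score = counterexample found
            if best is None or score > best[0]:
                best = (score, adj)
        return best
    ```

    The score is exactly the "distance to disproving" in this bounded-below setting, which is why the technique fits some conjectures (those with a numeric margin) and not others, like the magic-square existence question above.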

  • @otmanalami6621 • 1 year ago +1

    Is GPT-4 still struggling with simple calculations?!

    • @ASackVideo • 10 months ago +2

      Yes, if you don't let it use plugins.

    • @satunnainenkatselija4478 • 1 month ago

      The two most significant digits are correct so as far as an engineer is concerned, the calculation is correct.

  • @BooleanDisorder • 2 months ago +1

    Pretty sure the problem is the tokenization, not the neural network per se.

    • @Takyodor2 • 2 months ago

      Neural networks don't understand multiplication; getting the correct result every time would mean training on enough samples to "remember" every solution. I don't think it would be very difficult to train a neural network to recognize an arithmetic problem and hard-code the behavior "put the numbers and operator into a calculator instead of answering directly, and return the calculator's result".
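    The routing idea in this comment (detect arithmetic, hand it to a calculator, return the calculator's answer) is essentially what plugin/tool use does. A hypothetical stdlib-only sketch, with a safe AST-based evaluator standing in for the "calculator app":

    ```python
    import ast
    import operator
    import re

    OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv,
           ast.USub: operator.neg}

    def calc(expr):
        """Safely evaluate a plain arithmetic expression via Python's AST --
        the 'calculator' the model would delegate to."""
        def ev(node):
            if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                return node.value
            if isinstance(node, ast.BinOp) and type(node.op) in OPS:
                return OPS[type(node.op)](ev(node.left), ev(node.right))
            if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
                return OPS[type(node.op)](ev(node.operand))
            raise ValueError("not plain arithmetic")
        return ev(ast.parse(expr, mode="eval").body)

    def answer(prompt):
        """Toy router: if the prompt looks like bare arithmetic, use the
        calculator instead of 'predicting' digits; otherwise decline."""
        m = re.fullmatch(r"\s*([\d\s.+\-*/()]+?)\s*=?\s*", prompt)
        return calc(m.group(1)) if m else None
    ```

    `answer("1234 * 5678 =")` goes through the calculator and is exact every time, which is precisely what token-by-token prediction can't guarantee.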

  • @anonymousOrangutan • 1 year ago +4

    actually chatgpt can get multiplications right with 100% accuracy if you turn "engineering mode" off

    • @saaah707 • 1 year ago +1

      How do you do that?

    • @adamclement2002 • 15 hours ago

      i tested this and it works, but WHY???????

  • @anonymousOrangutan • 1 year ago +13

    1:00 AI can't do simple arithmetic, so mathematicians aren't going out of business anytime soon...
    ***my math major friends asking me if 7 * 8 is 48***

  • @s4br3 • 1 year ago +1

    first lol
    also cool video