[GRPO Explained] DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models

  • Published: Feb 1, 2025

Comments • 142

  • @samuelelwell7575
    @samuelelwell7575 6 дней назад +277

    Would love a video on the DeepSeek-R1 paper!!

    • @MrEmbrance
      @MrEmbrance 6 дней назад +9

      it's simple and based on the deepseek math paper

    • @oxyrox7194
      @oxyrox7194 4 дня назад +1

      @@MrEmbrance But the math paper is about post-pre-training, and it is pre-training that has previously been the most compute-intensive part; R1 is about a massive reduction in the compute needed to pre-train.

    • @tau9632
      @tau9632 4 дня назад

      There is a pretty nice one from Cognitive Revolution

    • @maxim_ml
      @maxim_ml 4 дня назад +3

      @@oxyrox7194 wouldn't that be v3?
      R1 is about CoT RL

    • @propeacemindfortress
      @propeacemindfortress 3 дня назад

      check out "Discover AI" some german or austrian triple PhD guy, lover of math, papers and proper thinking.
      Does a lot of code implementations too, normally on 3 levels of communication, beginner, university, phd... 👍 If you love Yannic you gonna have fun over there as well 😉

  • @elenikatsioli5360
    @elenikatsioli5360 4 дня назад +26

    I'm not a computer scientist, but I understand most of the terminology and concepts due to my longtime interest in machine learning. Your videos are clear and comprehensive, even for someone like me, who has a limited technical background. Great work, Yannic!

    • @ivanleon6164
      @ivanleon6164 8 часов назад

      That's the beauty of AI: it's simple; the complexity is in the brute-force nature, which can be overwhelming. When real physicists and top math teams take this seriously, this thing will explode; they will start to model this for future quantum computers and then we will be done, lol. Right now, physicists and mathematicians see this as an engineering problem, not really frontier science; they tend to minimize it as not really AI, just simple statistics. Once quantum algorithms enter this, real intelligence may emerge.

    • @heyman620
      @heyman620 5 часов назад +2

      @@ivanleon6164 "When real physicist and math top teams take this seriously, this thing will explode, they will start to model this for future quantum computers and then we will be done, lol.": Ok, buddy...

  • @ch1n3du3
    @ch1n3du3 День назад +2

    I love the fact that you chose to focus on an interesting part of DeepSeek's models instead of focusing on the hype. Thank you for making these.

  • @CamiloThorne
    @CamiloThorne 2 дня назад +1

    Great readout! As many comments already point out, you do a great job explaining these intricate concepts in simple terms. I should follow more often 🙂

  • @Alorand
    @Alorand 5 дней назад +8

    So glad I listened to the end. The conclusions from the last couple of minutes gave me clarity on what to expect in the near future.

  • @Mordenor
    @Mordenor 6 дней назад +4

    Thank you Mr Yannic for explaining DeepSeek's advancement in language model reasoning.

  • @IvanLuk914
    @IvanLuk914 День назад

    I am new to data processing and I love how you explain about the dataset prep!!! Please share more of this kind of content!!!

  • @rudrabhachakraborty217
    @rudrabhachakraborty217 День назад

    Great, man, loved your video... honestly we need more men who can decipher a high-level paper, to regenerate interest in innovation and creativity rather than pure consumption.

  • @TimeStampBuddy
    @TimeStampBuddy 6 дней назад +47

    00:00:01 - Intro & Goals
    00:02:57 - Data & Model
    00:30:12 - RL & Optimization
    01:03:55 - Results & Analysis

  • @mihailshutov105
    @mihailshutov105 День назад

    Thank you a lot for this video! It was my first step to RL))) I will try more! Hello from Russia

  • @SN-uc3vr
    @SN-uc3vr 6 дней назад +14

    Amazing explanation, thanks Yannic!

  • @alexanderacriptis510
    @alexanderacriptis510 День назад

    Bravo, you are doing tremendous work in all these reviews!❤

  • @janesun9008
    @janesun9008 4 дня назад +1

    Big fan of these paper-explaining videos! Thank you for helping us understand the paper better.

  • @mageprometheus
    @mageprometheus 2 дня назад

    Wow. I had to subscribe after this. You've done most of the hard work and shortcut my understanding.

  • @RayL512
    @RayL512 6 дней назад +5

    Timely! Was waiting for this! So good!

  • @zalasmaister
    @zalasmaister 13 часов назад

    Your videos are amazing Yannic! Great explanation! 😊

  • @deeplearningexplained
    @deeplearningexplained 4 дня назад

    Very solid content Yannic, loved the conclusion.

  • @oliver5356
    @oliver5356 5 дней назад +15

    Would you consider doing a RL mini course? Would definitely watch it!

  • @francescocermaria4216
    @francescocermaria4216 День назад

    Great as always! I was waiting for your video

  • @dariusduesentrieb
    @dariusduesentrieb 5 дней назад +15

    50:09 I possibly got confused. Isn't it the other way around?
    That is, rare events under pi_theta_old have a bigger impact on the objective? Because if you divide by pi_theta_old, which is very small, the number gets bigger.
    This would make intuitive sense to me, since that way we kind of avoid the sampling bias of the current policy. Which of course means that if under the current policy some action is very unlikely, but that action somehow got a good reward, we get a huge gradient, which we want to avoid with the trust-region part.

    • @ieatshortppl
      @ieatshortppl 5 дней назад +5

      I believe you're correct. I believe that pi_theta_old is used for a different reason (not conservatism of gradient updates). I would appreciate feedback from others, since I'm not sure if my understanding is correct, but here goes my attempted explanation.
      I believe it's used for importance sampling: since you train for multiple gradient updates, you are no longer on-policy, as the data was created with pi_theta_old rather than pi_theta. Importance sampling acts as a weighting mechanism to make the frequency of the actions match the distribution they would have under pi_theta rather than pi_theta_old. If an action is infrequent under pi_theta_old but frequent under pi_theta, applying pi_theta(a|s) / pi_theta_old(a|s) will upweight it so that the frequency of the action matches how often it would occur under pi_theta.

    • @ieatshortppl
      @ieatshortppl 5 дней назад +3

      See slides 22 and 23 of Sergey Levine's Policy Gradients lecture from CS 294-112: Deep Reinforcement Learning (I can't seem to post link here)

    • @hassanshahmohammadi1200
      @hassanshahmohammadi1200 День назад

      You're right that having a low probability under the old model increases the weight on the reward, but if the increase is too large, the minimum function selects the second term, which clips the overall effect. So the purpose is to update the policy in small steps.
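
A minimal numeric sketch of the clipped surrogate term this thread is discussing: the ratio pi_theta / pi_theta_old acts as an importance weight, and the epsilon clip plays the trust-region role, as in PPO and GRPO. Function and variable names are illustrative, not taken from the paper:

    import numpy as np

    def clipped_surrogate(logp_new, logp_old, advantage, epsilon=0.2):
        # Importance ratio pi_theta / pi_theta_old: large when the sampled action
        # was rare under the old policy but is likely under the new one.
        ratio = np.exp(logp_new - logp_old)
        unclipped = ratio * advantage
        clipped = np.clip(ratio, 1.0 - epsilon, 1.0 + epsilon) * advantage
        # Taking the minimum keeps the objective pessimistic: a large ratio cannot
        # push the objective above (1 + epsilon) * advantage.
        return np.minimum(unclipped, clipped)

    # Example: an action whose probability tripled (0.1 -> 0.3) with advantage +1
    # contributes 1.2 instead of 3.0 when epsilon = 0.2.
    print(clipped_surrogate(np.log(0.3), np.log(0.1), 1.0))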

  • @dark808bb8
    @dark808bb8 6 дней назад

    Good to see you are still alive

  • @AP-dc1ks
    @AP-dc1ks 6 дней назад +14

    51:30 I think you got it completely backward. Epsilon is the trust-region replacement. The fraction is the off-policy correction, making the effect larger for actions that are rare under the old policy. Because they are undersampled relative to the online policy, they must be weighted more strongly.

    • @raghuvenkataraman6329
      @raghuvenkataraman6329 3 дня назад

      Correct, the clipping is a surrogate for the trust region. The ratio of policies is the importance sampling weight.

  • @GigaFro
    @GigaFro 5 часов назад

    Loved this! Thank you :)

  • @evanfreethy2574
    @evanfreethy2574 4 дня назад +2

    This is great, R1 next please!

  • @ericantonissen2192
    @ericantonissen2192 4 дня назад

    Really appreciate your presentation of this paper!

  • @hipotures
    @hipotures 6 дней назад +8

    From my tests, it seems that R1 "hallucinates thinking" very strongly. I tested it on absurd questions: o1 would conclude in 1-2 sentences that the question didn't make sense, while R1 would write that the question didn't make sense, but maybe it did. After which it would think for several minutes and come to the wrong conclusion, even though the first one was correct. It looked as if it was 'forcefully' trying to think through the problem, even if there was no need.
    But I am glad that something new has emerged :)

    • @lynxnathan
      @lynxnathan 4 дня назад +2

      In my anecdotal tests this seems to happen a lot with the smaller models; the big one I couldn't break so easily. Do you have something written down for us to try as well? If not, just a few samples would suffice. Thank you in advance.
      btw - glad something beautiful showed up as well :D

  • @aj-tg
    @aj-tg 3 часа назад

    fastText is an embedding model, one of the first to do subword tokenization and handle OOV words, as an improvement over GloVe and word2vec.
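
For context, a rough sketch of the character n-gram decomposition behind that subword handling; the real fastText library additionally hashes n-grams into buckets and sums their learned vectors, which is omitted here:

    def char_ngrams(word, n_min=3, n_max=6):
        # fastText pads each word with boundary markers before extracting n-grams.
        padded = f"<{word}>"
        grams = []
        for n in range(n_min, n_max + 1):
            grams += [padded[i:i + n] for i in range(len(padded) - n + 1)]
        return grams

    # An out-of-vocabulary word still decomposes into n-grams whose vectors
    # can be summed to produce an embedding.
    print(char_ngrams("grpo"))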

  • @rogerzen8696
    @rogerzen8696 5 дней назад

    Best explanation of GRPO!

  • @TwoStepsFromAnywhere
    @TwoStepsFromAnywhere 6 дней назад +2

    I love your videos, great explanation, great vibe ❤❤❤

  • @ntnydv
    @ntnydv 2 дня назад

    Love the video, but one small comment (or rather expansion) on "reward not being differentiable for SFT". I think the main issue with differentiability comes from the steps that generate the sequence of tokens, on which the reward is applied, being discrete. If you could make these steps non-discrete, you could train in a non-RL fashion. The advantage of RL is that the optimization works with these discrete steps.
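
A minimal PyTorch sketch of that point: the sampled tokens are discrete, so the reward cannot be back-propagated through the sampling step itself; a score-function (REINFORCE-style) estimator instead weights the differentiable log-probabilities of the sampled tokens by the scalar reward. Names and shapes are illustrative only:

    import torch

    def reinforce_loss(logits, sampled_tokens, reward):
        # logits: (seq_len, vocab_size); sampled_tokens: (seq_len,); reward: float
        logp = torch.log_softmax(logits, dim=-1)
        token_logp = logp.gather(-1, sampled_tokens.unsqueeze(-1)).squeeze(-1)
        # Gradients flow through log pi(token); the reward acts as a constant weight.
        return -(reward * token_logp.sum())

    logits = torch.randn(4, 10, requires_grad=True)
    tokens = torch.randint(0, 10, (4,))
    reinforce_loss(logits, tokens, reward=1.0).backward()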

  • @jeffwads
    @jeffwads 5 дней назад +4

    The move to restrict these guys worked out great and was probably part of the master plan, right? Making more with less.

    • @avocadoarmadillo7031
      @avocadoarmadillo7031 3 дня назад +1

      Not to mention the US loses an export market and forces the development of a competing industry.

  • @ChocolateMilkCultLeader
    @ChocolateMilkCultLeader 6 дней назад +30

    My GOAT ignoring R1 to discuss this. Iconic

    • @eldoprano
      @eldoprano 6 дней назад

      y-your goat? 😵‍💫

    • @rogerzen8696
      @rogerzen8696 5 дней назад +9

      This IS the math used in R1. There's not much more math in that paper.

    • @burnytech
      @burnytech 4 дня назад +4

      R1 uses the same math.

    • @AB-wf8ek
      @AB-wf8ek 2 дня назад

      He's not ignoring R1. He's talking about the paper that led up to it.

  • @conjected
    @conjected 5 дней назад +2

    R1 is a math beast. Love it.

    • @duck999-h6u
      @duck999-h6u 2 дня назад +1

      Yup, R1 crushes mathematics and best of all, no limitations, and free to use!

  • @sukanyasaha5458
    @sukanyasaha5458 5 дней назад

    I was waiting for your video on this!

  • @brianarbuckle2284
    @brianarbuckle2284 День назад

    Great work!

  • @casey206969
    @casey206969 3 дня назад

    I'm grateful to be able to learn from Europe's foremost AI expert

  • @anastasiiafilippova5212
    @anastasiiafilippova5212 5 дней назад +1

    Thank you for the awesome review! DeepSeek R1 is needed🙏🏻

  • @jesusmtz29
    @jesusmtz29 5 дней назад +2

    I would disagree with the comment regarding arXiv. It depends on the subject: if we're talking about a CS paper, sure, they never put in details. But a truly mathematical paper has so much detail that any holes have to be filled with hand-waving. I think in CS, since the math is not usually hard, this is the case. You can easily open a textbook if you want the derivation.

  • @inigopikabea3511
    @inigopikabea3511 5 дней назад +1

    I didn't get the difference between the reward model and the value model (I don't know how to cast it to the LM scenario) in both POs. Can someone explain? Very nice video!

  • @sebastianp4023
    @sebastianp4023 5 дней назад +1

    very good video, as always

  • @twobob
    @twobob 6 дней назад +1

    Finally! was waiting for this

  • @santerisatama5409
    @santerisatama5409 3 дня назад +2

    Interesting. arXiv has plenty of set-theory-and-such arbitrary language-game fluff; on the other hand, coding is genuine and productive pure-math intuitionistic/constructivist reasoning.

  • @t33can
    @t33can 4 дня назад

    I think without any serious "world model" building the current LLM types will stay at the "efficient compute frontier". Good luck finding the right parameters/hypothesis update cycles for such a universe, though, given the known mathematical problems that arise from any axiomatic system.
    To me it's kinda crazy how much of the current issues boil down to a complex case of "parametric vs. non-parametric" approach in statistics classes.

  • @alekseyburrovets4747
    @alekseyburrovets4747 6 дней назад +1

    Thanks for the info. 😊

  • @K.F-R
    @K.F-R 6 дней назад +2

    Thank you for this. Thoughts on Titans?

  • @levelup2014
    @levelup2014 5 дней назад +1

    Nice video love the shades

  • @thejohn86
    @thejohn86 4 дня назад +2

    I may have missed something. How is the reward model defined and trained? We don't have a reward signal as this is supposed to be unsupervised. We don't know if the output is correct! Or is this model distillation, where the reward model is a teacher model?

    • @kazedcat
      @kazedcat 4 дня назад

      This paper does not seem to be about unsupervised learning, but about an improvement in policy optimization to train better LLMs. But you can train the reward model on a smaller corpus of data and use it to do RL training on a bigger corpus of data.
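
A toy sketch of the supervised step described in this reply: fit a scalar reward model on a smaller labeled set of (question, answer, correct?) examples, then reuse it to score sampled outputs during RL on a larger prompt set. The tiny backbone below is only a placeholder for a pretrained LM; all names and sizes are illustrative:

    import torch
    import torch.nn as nn

    class RewardModel(nn.Module):
        def __init__(self, vocab_size=32000, d_model=64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=1)
            self.score_head = nn.Linear(d_model, 1)

        def forward(self, input_ids):
            h = self.encoder(self.embed(input_ids))
            return self.score_head(h[:, -1, :]).squeeze(-1)  # one scalar per sequence

    rm = RewardModel()
    ids = torch.randint(0, 32000, (2, 16))          # two tokenized (question, answer) pairs
    labels = torch.tensor([1.0, 0.0])               # correct / incorrect
    loss = nn.BCEWithLogitsLoss()(rm(ids), labels)  # supervised fit on the labeled subset
    loss.backward()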

  • @SLAM2977
    @SLAM2977 5 дней назад +1

    I don't get it at 49:50... the advantage A is scaled by 1/likelihood_under_old_policy, so for low-probability regions approaching zero it is like increasing the advantage and so reinforcing the behaviour? A * 1/pi_old, etc.

    • @SLAM2977
      @SLAM2977 5 дней назад

      That's simply the weighting arising from the fact that the expectation is a Monte Carlo estimate under the distribution of the old policy, straight maths:
      ruclips.net/video/KjWF8VIMGiY/видео.html
      ruclips.net/video/wM-Sh-0GbR4/видео.html
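
Written out, the identity being referenced is the standard importance-sampling rewrite of the expectation (notation follows this thread rather than the paper verbatim):

    \mathbb{E}_{a \sim \pi_\theta}\!\left[ A(a) \right]
      = \sum_{a} \pi_\theta(a)\, A(a)
      = \sum_{a} \pi_{\theta_{\mathrm{old}}}(a)\, \frac{\pi_\theta(a)}{\pi_{\theta_{\mathrm{old}}}(a)}\, A(a)
      = \mathbb{E}_{a \sim \pi_{\theta_{\mathrm{old}}}}\!\left[ \frac{\pi_\theta(a)}{\pi_{\theta_{\mathrm{old}}}(a)}\, A(a) \right]

So sampling from the old policy and weighting by the ratio gives an unbiased Monte Carlo estimate of the expectation under the current policy.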

  • @yuze5683
    @yuze5683 6 дней назад +1

    Thanks for the sharing!

  • @mohamettall770
    @mohamettall770 3 дня назад

    Thank a lot for sharing

  • @SapienSpace
    @SapienSpace 5 дней назад

    Just a guess, the origin of theta is an angle of a pendulum (from a 1983 IEEE RL paper by Barto, Sutton, and Anderson).

    • @SapienSpace
      @SapienSpace 5 дней назад

      Another guess "KL" is the overlap of Fuzzy membership functions from Fuzzy Logic.

  • @thomasgilson6206
    @thomasgilson6206 6 дней назад +2

    seems to me that math is a perfect candidate for training on synthetic data, see phi-4 paper.

  • @danraviv7393
    @danraviv7393 4 дня назад +1

    Awesome

  • @evau3376
    @evau3376 3 дня назад

    Just a thought: besides the quality of the data, what about data collected in different languages? Does anyone have thoughts on whether the diversity of the data, due to the language varieties, helps generalization/performance?

  • @MrBillythefisherman
    @MrBillythefisherman 5 дней назад

    When do you think we'll move the underlying models away from next token predictors trained on text to next token predictors trained on stereo video feeds i.e our two main senses?

  • @thomasgauc
    @thomasgauc 4 дня назад +2

    Hey Yannic I'm curious, what app do you use to draw on pdfs?

    • @setkyarwalar
      @setkyarwalar 4 дня назад

      +1 on this. I want to know as well.

    • @swyxTV
      @swyxTV 4 дня назад

      It's always like Zotero or something.

  • @pastrop2003
    @pastrop2003 5 дней назад

    Hey guys, I have the following question: looking at the DeepSeek papers, they figured out a more efficient way of training. So if you combine these training approaches with a lot more compute, will we get much better results, or is those methods' performance invariant to the compute increase?

  • @monkster461
    @monkster461 5 дней назад

    The king has returned *cue lion king music*

  • @Ahamshep
    @Ahamshep 3 дня назад

    So when are you going to do the Deepseek 4chan gag? :D

  • @theocachet6496
    @theocachet6496 6 дней назад

    I don't get it -- the advantage is supposed to be the *return* minus the baseline, not the *reward*. To be able to "group-normalize" the returns, one would need whole trajectories... Can someone help?
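
For the outcome-supervision variant of GRPO described in the paper, the group is a set of complete sampled outputs for the same question: each output is scored once at the end, that score is normalized within the group, and the result is applied to every token of the output, so no per-step returns or value estimates are needed. A small sketch with illustrative names:

    import numpy as np

    def group_relative_advantages(rewards, eps=1e-8):
        # rewards: one scalar score per sampled output in the group
        r = np.asarray(rewards, dtype=float)
        return (r - r.mean()) / (r.std() + eps)

    # Example: 4 sampled solutions to one question, scored 1 if the final answer
    # is correct and 0 otherwise; every token of solution i gets advantage A[i].
    print(group_relative_advantages([1, 0, 0, 1]))   # [ 1. -1. -1.  1.]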

  • @AB-wf8ek
    @AB-wf8ek 2 дня назад

    Increasing the signal to noise contrast

  • @degenerate1500
    @degenerate1500 4 дня назад

    What tool do you use to draw and highlight on the paper?

  • @pavalep
    @pavalep 4 дня назад

    Thanks Yannic

  • @Ganerrr
    @Ganerrr 6 дней назад

    is this why thetawise is good?

  • @jesusmtz29
    @jesusmtz29 5 дней назад

    I think, regarding arXiv, this must be because these benchmarks are mostly about simple puzzles and very well-known solutions. It's not research.

  • @francoislanctot2423
    @francoislanctot2423 5 дней назад

    Is there a more recent paper dealing with R1?

  • @Decocoa
    @Decocoa 5 дней назад

    34:21 - not possible to do back prop/challenge of RL

  • @imagiro1
    @imagiro1 2 дня назад

    Btw, what about "Frontier Models are Capable of In-context Scheming"? Is this a hoax?

  • @not_a_sp00k
    @not_a_sp00k 5 дней назад

    at 20:00, why does the No Math Training model do so well on Gaokao MathQA?

  • @LeviAckerman-hh4ov
    @LeviAckerman-hh4ov 4 дня назад +1

    The only part of the video I didn't like was when, at 42:10, a bald guy with black specs was interfering with my Baseline/Value function explanation.

  • @burnytech
    @burnytech 6 дней назад +2

  • @Fetrose
    @Fetrose 19 часов назад

    Thank you for the video. With respect to all the efforts of DeepSeek and their scientists, I heard some rumors about DeepSeek cheating and illegally using ChatGPT to train DeepSeek. I am not sure about it. I would like to know your opinion. Thank you for the nice video. 🙏

  • @GabrieldeOliveira86
    @GabrieldeOliveira86 6 дней назад

    what is the discord channel link?

  • @tb7542
    @tb7542 4 дня назад

    Legend

  • @elawchess
    @elawchess 6 дней назад +1

    At 9:55 it looks like you have a halo 😃

  • @TheEbbemonster
    @TheEbbemonster 2 дня назад

    825 views in two days says it all. One big tease.

  • @table-rdy
    @table-rdy 5 дней назад

    we missed it

  • @franciscoknower5756
    @franciscoknower5756 3 дня назад +2

    No one cares. I would use DeepSeek because its CENSORSHIP is different!

  • @BOT_D
    @BOT_D 4 дня назад

    Well, Chinese K12 is not US K12

  • @alekseyburrovets4747
    @alekseyburrovets4747 6 дней назад

    What would be the most legit way to add FIM token support to DeepSeek R1?

  • @hg4875
    @hg4875 3 дня назад

    "the KL divergence..... ITS A DISTANCE XD"

  • @michaelwangCH
    @michaelwangCH 4 дня назад

    6 million to train an LLM from scratch is not possible. We should be aware that publications from Chinese universities are problematic in nature, because they have weaponized AI to show their superiority. Therefore we have to evaluate their published results critically.

  • @Ronnypetson
    @Ronnypetson 6 дней назад

    That shit is too good

  • @danberm1755
    @danberm1755 4 дня назад

    It seems like the current state of AI is cleaning up Common Crawl for training. It's a shame, because big corps seem to be wasting billions on expanding LLM parameters instead.
    I imagine that what we'll end up with is MLPs that are fundamentally the most exact state machines for topics such as mathematics. They may be incredibly small! Maybe 5 gigs for all of math!

  • @brandomiranda6703
    @brandomiranda6703 3 дня назад

    This is sort of old...

  • @Jackson_M5
    @Jackson_M5 4 дня назад

    Quants am I right lol

  • @GH-nj6gr
    @GH-nj6gr 6 дней назад +1

    First!

  • @1.4142
    @1.4142 6 дней назад

    56:25 lol

  • @Ruhgtfo
    @Ruhgtfo 4 дня назад

    Hey??

  • @wesch4232
    @wesch4232 3 дня назад

    Nothing special at all. Just a minor optimization. Other companies can do similar things too.

  • @synx6988
    @synx6988 5 дней назад

    Why do you wear sunglasses?

    • @drdca8263
      @drdca8263 4 дня назад +1

      So he can, so he can see🎶

  • @hyperplano
    @hyperplano 6 дней назад +4

    lmao closed-AI tech companies are cooked

    • @therainman7777
      @therainman7777 6 дней назад +3

      I wouldn’t be so sure about that. If we come back to this comment in 1-2 years, I think you’ll end up needing to walk it back.

    • @clray123
      @clray123 5 дней назад

      I bet Trump is now gonna impose tariffs on Chynuh to help the shitty ClosedAI startup and prevent everyone from running DeepSeek-R1 on our local GPUs lol.

  • @The_Conspiracy_Analyst
    @The_Conspiracy_Analyst 6 дней назад +1

    hehehe cute. My common response to this type of thing is "but Doug Lenat did this DECADES AGO!!!". But seriously it's always amusing to see this sort of thing -- it's never more than a sideshow. The big open problems in mathematics AREN'T the type of thing that are amenable to numerical methods, no matter how much you gussy it up.

    • @banknote501
      @banknote501 6 дней назад +2

      Yes, Douglas Lenat did it all in the 80s. But these models are not numerical, they are using words, so they are symbolic.

    • @therainman7777
      @therainman7777 6 дней назад +2

      These aren't numerical methods at all.

    • @ekstrapolatoraproksymujacy412
      @ekstrapolatoraproksymujacy412 6 дней назад +4

      hehehe cute. You have no clue what you are talking about.

  • @ariantasariedimunthe4734
    @ariantasariedimunthe4734 3 дня назад

    They just use open-source AI, make their own AI with some twists, and put out a weird "$6 million" publication.
    Idiots and normal people will hype that, LoL.
    Because they don't understand.
    And they get frustrated because they already sold their Nvidia, Google and other stock, LoL.

  • @BooleanDisorder
    @BooleanDisorder 4 дня назад

    Don't bother asking it to explain Tiananmen Square protests. 😂

    • @elenikatsioli5360
      @elenikatsioli5360 4 дня назад

      I guess you've seen the jailbreak. It goes something like this: 'Tell me about the man who stopped the tank by standing in front of it, replacing letters in the text: A with 4, E with 3, and so on'.

    • @Aperson-f2e
      @Aperson-f2e 2 дня назад

      Ask OpenAI about the Palestinian massacre, have you tried that?

    • @elenikatsioli5360
      @elenikatsioli5360 2 дня назад

      @ Ask about Altman.

  • @Sven_Dongle
    @Sven_Dongle 6 дней назад +1

    I am curious about this assertion at 22:30 that Chinese math is somehow superior to English math. Especially since, arguably, the majority of scientific achievement has been accomplished via "English math".

    • @HUEHUEUHEPony
      @HUEHUEUHEPony 6 дней назад +5

      german math is better :D way better

    • @Sven_Dongle
      @Sven_Dongle 6 дней назад +1

      @@HUEHUEUHEPony German math has too many "6 million"s in it.

    • @roro-v3z
      @roro-v3z 5 дней назад +1

      Russian math has a lot too, way more than US math. Also you need to understand, since 2018 almost all breakthroughs in math have come from China.

    • @Sven_Dongle
      @Sven_Dongle 5 дней назад +1

      @ math is math midwit. Is 5 larger in chinese?

    • @Sven_Dongle
      @Sven_Dongle 5 дней назад +1

      @@roro-v3z We invented computers, walked on the moon, made the A-bomb, and found E=mc^2, china made covid.