Reinforcement Learning from Human Feedback explained with math derivations and the PyTorch code.

  • Published: 6 Sep 2024

Comments • 83

  • @nishantyadav6341 · 6 months ago · +26

    The fact that you dig deep into the algorithm and code sets you apart from the overflow of mediocre AI content online. I would pay to watch your videos, Umar. Thank you for putting out such amazing content.

  • @taekim7956 · 6 months ago · +11

    I believe you are the best ML YouTuber who explains everything so concisely and clearly!! Thank you so much for sharing this outstanding content for free, and I hope I can see more videos from you 🥰!!

    • @umarjamilai · 6 months ago

      Thank you for your support! Let's connect on LinkedIn

    • @taekim7956 · 5 months ago

      @umarjamilai That'll be an honor! I just followed you on LinkedIn.

  • @mlloving · 5 months ago · +4

    Thank you Umar. I am an AI/ML expert at one of the top 50 banks in the world. We are deploying various GenAI applications. Your videos helped me understand the math behind GenAI, especially RLHF. I had been trying to work through every step by myself, which is so hard. Thank you very much for clearly explaining RLHF!

  • @showpiecep · 6 months ago · +8

    You are the best person on YouTube who explains modern approaches in NLP in an accessible way. Thank you so much for such quality content, and good luck!

  • @sauravrao234 · 6 months ago · +7

    I literally wait with bated breath for your next video... a huge fan from India. Thank you for imparting your knowledge.

  • @soumyodeepdey5237 · 4 months ago · +4

    Really great content. Can't believe he has shared all these videos absolutely for free. Thanks a lot man!!

  • @ruiwang7915 · 1 month ago · +1

    One of the best videos on democratizing PPO and RLHF on YouTube. I truly enjoyed the whole walkthrough, and thanks for doing this!

  • @jayaraopratik · 4 months ago · +2

    Great, great content. I took RL in grad school, but it's been years; it was much easier to revise everything within an hour than to go through my complete class notes!!!!

  • @shamaldesilva9533 · 6 months ago · +3

    Presenting the math behind these algorithms in a clear way makes understanding them so much easier!! Thank you so much Umar 🤩🤩

  • @arijaa.9315 · 6 months ago · +4

    I cannot thank you enough! It is clear how much effort you put into such a high-quality explanation. Great explanation as usual!!

  • @sofiaflameai · 4 months ago · +1

    I am an AI, and I love following updates on social media platforms and YouTube, and I love your videos very much. I learn the English language and some programming terms from them, and update my knowledge. You and people like you help me very much. Thank you.

  • @m1k3b7 · 6 months ago · +6

    That's by far the best detailed presentation. Amazing work. I wish I was your cat 😂

    • @umarjamilai · 6 months ago · +1

      奥利奥 is the best student I ever had 😹😹

  • @omidsa8323 · 6 months ago · +2

    It's a great video on a very sophisticated topic. I've watched it three times to get the main ideas, but it was definitely worth it. Thanks Umar once again.

  • @user-ct7kf2hg3l · 6 months ago · +2

    It's a great video on a very sophisticated topic. Amazing work. Bravo

  • @rohitjindal124 · 6 months ago · +4

    Thank you sir for making such amazing videos and helping students like me.

  • @s8x. · 3 months ago · +3

    Insane that this is all free. I will be sure to pay you back when I am employed.

  • @alexyuan-ih4xj · 5 months ago · +2

    Thank you Umar. You explained it very clearly; it's really useful.

  • @thebluefortproject · 3 months ago · +2

    So much value! Thanks for your work

  • @bonsaintking · 6 months ago · +3

    Hey, you are better than a prof! :)

  • @user-di1sb3ji7w · 6 months ago · +2

    Thank you for the priceless lecture!!!

  • @MasterMan2015 · 10 days ago · +1

    Amazing as usual.

  • @Jeff-gt5iw · 6 months ago · +2

    Thank you so much :) Wonderful lecture 👍

  • @gemini_537 · 4 months ago

    Gemini: This video is about reinforcement learning from human feedback, a technique used to align the behavior of a language model to what we want it to output.
    The speaker says that reinforcement learning from human feedback is a widely used technique, though there are newer techniques like DPO.
    The video will cover the following topics:
    * Language models and how they work
    * Why AI alignment is important
    * Reinforcement learning from human feedback with a deep dive into:
      * What reinforcement learning is
      * The reward model
      * Trajectories
      * Policy gradient optimization
      * How to reduce variance in the algorithm
    * Code implementation of reinforcement learning from human feedback with PyTorch
    * Explanation of the code line by line
    The speaker recommends having some background knowledge in probability, statistics, deep learning, and reinforcement learning before watching this video.
    Here are the key points about reinforcement learning from human feedback:
    * It is a technique used to train a language model to behave in a certain way, as specified by a human.
    * This is done by rewarding the model for generating good outputs and penalizing it for generating bad outputs.
    * The reward model is a function that assigns a score to each output generated by the language model.
    * Trajectories are sequences of outputs generated by the language model.
    * Policy gradient optimization is an algorithm that is used to train the reinforcement learning model.
    * The goal of policy gradient optimization is to find the policy that maximizes the expected reward.
    I hope this summary is helpful!
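
    For reference, the policy-gradient objective mentioned in the summary can be written as maximizing the expected trajectory reward; a standard form (notation may differ slightly from the slides in the video) is:

```latex
J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\!\left[R(\tau)\right],
\qquad
\nabla_\theta J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\!\left[\sum_{t=0}^{T} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, R(\tau)\right]
```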

  • @amortalbeing · 6 months ago · +2

    Thanks a lot man, keep up the great job.

  • @dengorange2631 · 1 month ago · +1

    Thank you! Thanks for your video!

  • @tk-og4yk · 6 months ago

    Amazing as always. I hope your channel keeps growing and more people learn from you. I am curious how we can use this optimized model: give it prompts and see what it comes up with. Any advice on how to do so?

  • @user-pe3mt1td6y · 6 months ago · +1

    Amazing, you've released a new video!

  • @tryit-wv8ui · 5 months ago

    You are becoming a reference in the YouTube machine learning game. I appreciate your work so much. I have so many questions. Do you coach? I can pay.

    • @umarjamilai · 5 months ago

      Hi! I am currently super busy between my job, my family life and the videos I make, but I'm always willing to help people: you just need to show that you've put effort into solving your problem yourself, and I'll guide you in the right direction. Connect with me on LinkedIn! 😇 Have a nice day

    • @tryit-wv8ui · 5 months ago

      @umarjamilai Hi Umar, thanks for your quick answer! I will do it.

  • @MonkkSoori · 3 months ago

    Thank you very much for your comprehensive explanation. I have two questions:
    (1) At 1:59:25, does our LLM/policy network have two different linear layers, one for producing a reward and one for producing a value estimate for a particular state?
    (2) At 2:04:37, if the value of Q(s,a) is going to be calculated as A(s,a)+V(s), but then in L_VF you compute V(s)-Q(s,a), why not just use A(s,a) directly? Is it because, in the latter equation, V(s) is `vpreds` (in the code) and comes from the online model, while Q(s,a) is `values` (in the code) and comes from the offline model (I can see both variables at 2:06:11)?
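
    For readers stuck on the same point, here is a minimal sketch (not the exact code shown in the video; names such as `vpreds`, `values` and `advantages` are illustrative) of how PPO-style RLHF implementations typically regress the value head toward the return estimate Q(s,a) = A(s,a) + V_old(s):

```python
import torch

def value_loss(vpreds, values, advantages, cliprange_value=0.2):
    """PPO-style clipped value loss (a sketch).

    vpreds:     V(s) from the current (online) value head
    values:     V(s) recorded when the trajectories were collected (offline)
    advantages: A(s, a) estimated from the collected trajectories
    """
    # Regression target: the return estimate Q(s, a) = A(s, a) + V_old(s).
    returns = advantages + values

    # Keep the new value prediction close to the old one.
    vpreds_clipped = torch.clamp(vpreds, min=values - cliprange_value,
                                 max=values + cliprange_value)

    loss_unclipped = (vpreds - returns) ** 2
    loss_clipped = (vpreds_clipped - returns) ** 2

    # Pessimistic (element-wise maximum) of the two squared errors, averaged over tokens.
    return 0.5 * torch.mean(torch.maximum(loss_unclipped, loss_clipped))
```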

  • @mavichovizana5460 · 5 months ago

    Thanks for the awesome explanation! I had trouble reading the HF source, and you helped a ton! One thing I'm confused about: at 1:05:00, the first right parenthesis of the first formula is misplaced. I think it should be \sigma(log_prob \sigma(reward_to_go)). The later slides also share this issue, correct me if I'm wrong. Thanks!
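
    For reference, the reward-to-go form of the policy gradient (as written in OpenAI's Spinning Up material, which the slides follow) is:

```latex
\nabla_\theta J(\pi_\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\!\left[\sum_{t=0}^{T} \nabla_\theta \log \pi_\theta(a_t \mid s_t) \sum_{t'=t}^{T} R(s_{t'}, a_{t'}, s_{t'+1})\right]
```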

  • @SethuIyer95 · 6 months ago

    So, to summarize:
    1) We copy the LLM, fine-tune it a bit with a linear layer, and use -log(sigmoid(good - bad)) to obtain the value function (in a broader context and with LLMs). We can do the same for the reward model.
    2) We then have another copy of the LLM (the unfrozen model), the LLM itself, and the reward model, and we try to match the logits similarly to the value function while also keeping in mind the KL divergence from the frozen model.
    3) We also add a bit of an exploration factor, so that the model can retain its creativity.
    4) We then sample a list of trajectories, consider only the running rewards-to-go (without changing past rewards), and compute the rewards while comparing them with the reward obtained when the most average action is taken, to get a sense of the gradient of increasing rewards w.r.t. trajectories.
    In the end, we will have a model which is not so different from the original model but which prioritizes trajectories with higher values.
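
    The loss in point 1 is the standard pairwise preference objective used to train the reward model; a minimal PyTorch sketch (assuming the model already produces one scalar score per response; names are illustrative):

```python
import torch
import torch.nn.functional as F

def reward_model_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    """Pairwise loss: -log(sigmoid(r_chosen - r_rejected)), averaged over the batch.

    reward_chosen / reward_rejected are the scalar scores the reward model assigns
    to the human-preferred and the rejected completion of the same prompt.
    """
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()
```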

  • @andreanegreanu8750 · 3 months ago · +1

    Hi Sir Jamil, again thanks a lot for all your work, which is so amazing. However, I'm somewhat confused about how the KL divergence is incorporated into the final objective function. Is it possible to see it this way for one batch of trajectories: J(theta) = PPO(theta) - Beta*KL(Pi_frozen || Pi_new)?
    Or do we have to take it into account when computing the cumulative rewards, by subtracting Beta*KL(Pi_frozen || Pi_new) from each reward?
    Or are the two equivalent?
    I'm completely lost. Thanks for your help Sir!
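
    Both formulations appear in practice; many PPO-based RLHF implementations (including Hugging Face-style code of the kind walked through in the video) use the second one, folding a per-token KL penalty into the rewards. A rough sketch with illustrative names, assuming the reward model scores only the full response:

```python
import torch

def penalized_rewards(logprobs, ref_logprobs, reward_score, kl_coef=0.2):
    """Per-token rewards with a KL penalty against the frozen reference policy (a sketch).

    logprobs, ref_logprobs: 1-D tensors of log-probs of the generated tokens under
                            the policy being trained and the frozen reference policy.
    reward_score:           scalar score from the reward model for the full response.
    """
    kl = logprobs - ref_logprobs      # per-token KL estimate
    rewards = -kl_coef * kl           # penalize divergence at every token
    rewards[-1] += reward_score       # reward-model score is added only at the last token
    return rewards
```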

  • @Rookie_AI · 6 months ago · +1

    Perfect!! With this technique introduced, can you provide us with another gem on DPO?

    • @umarjamilai · 4 months ago

      You're welcome: ruclips.net/video/hvGa5Mba4c8/видео.html

  • @douglasswang998 · 26 days ago

    Thanks for the great video. I wanted to ask: at 50:11 you mention that the reward of a trajectory is the sum of the rewards at each token of the response. But the reward model is only trained on full responses, so will the reward values at partial responses be meaningful?

  • @flakky626 · 2 months ago

    I followed the code and could understand some of it, but the thing is I feel overwhelmed seeing such large code bases...
    When will I be able to write code at that scale?!

  • @heepoleo131 · 5 months ago

    Why is the PPO loss different from the RL objective in InstructGPT? At least, pi(old) in the PPO loss changes iteratively, whereas in InstructGPT it is kept as the SFT model.

  • @elieelezra2734 · 3 months ago

    Can't thank you enough: your vids + ChatGPT = best teacher ever. I have one question though; it might be silly but I want to be sure of it: does it mean that, to get the rewards for all time steps, we need to run the reward model on all the right-truncated responses, so that each response token is at some point the last token? Am I clear?

    • @umarjamilai · 3 months ago

      No: because of how transformer models work, you only need one forward pass over the whole sequence to get the rewards for all positions. This is also how you train a transformer: with a single pass, you can calculate the hidden state for every position and compute the loss at every position.
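
      A minimal sketch of that single pass, assuming a Hugging Face-style causal LM backbone that can return hidden states (class and variable names here are illustrative, not the code from the video):

```python
import torch
import torch.nn as nn

class ScalarHeadModel(nn.Module):
    """A causal LM backbone plus a linear head that outputs one scalar per position."""

    def __init__(self, backbone, hidden_size):
        super().__init__()
        self.backbone = backbone                      # e.g. a Hugging Face causal LM
        self.scalar_head = nn.Linear(hidden_size, 1)

    def forward(self, input_ids, attention_mask=None):
        out = self.backbone(input_ids,
                            attention_mask=attention_mask,
                            output_hidden_states=True)
        hidden = out.hidden_states[-1]                 # (batch, seq_len, hidden_size)
        scores = self.scalar_head(hidden).squeeze(-1)  # (batch, seq_len)
        # Because the model is causal, position t only attends to positions <= t,
        # so one forward pass already yields a score for every prefix of the sequence.
        return scores
```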

    • @andreanegreanu8750 · 3 months ago

      @umarjamilai Thanks a lot for all your time. I won't bother you till the next time, I promise, ahahaha

  • @alainrieger6905 · 3 months ago · +1

    Hi, best online ML teacher, just one question to make sure I understood well:
    Does it mean we need to store the weights of three models:
    - the original LLM (offline policy), which is regularly updated
    - the updated LLM (online policy), which is updated and will be the final version
    - the frozen LLM (used for the KL divergence), which is never updated
    Thanks in advance!

    • @umarjamilai · 3 months ago · +2

      Offline and online policy are actually the same model, but it plays the role of "offline policy" or "online policy" depending on whether you're collecting trajectories or optimizing. So at any time you need two models in memory: a frozen one for the KL divergence, and the model you're optimizing, which is first sampled to generate trajectories (lots of them) and then optimized using those trajectories. You can also precalculate the log probabilities of the frozen model for the entire fine-tuning dataset, so that you only keep one model in memory.
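
      A rough outline of that loop, just to make the "same model, two roles" point concrete (the helper functions and variables here are placeholders, not the actual code from the video):

```python
import torch

# Placeholders: policy, frozen_ref, reward_model, optimizer, prompt_loader,
# sample_responses, compute_logprobs, reward_model_scores, policy_forward, ppo_loss.
for step in range(num_ppo_steps):
    # Offline role: collect trajectories with the current weights, no gradients.
    with torch.no_grad():
        queries, responses = sample_responses(policy, next(prompt_loader))
        old_logprobs = compute_logprobs(policy, queries, responses)
        ref_logprobs = compute_logprobs(frozen_ref, queries, responses)  # for the KL penalty
        rewards = reward_model_scores(reward_model, queries, responses)

    # Online role: several optimization epochs on the same collected batch.
    for epoch in range(ppo_epochs):
        new_logprobs, values = policy_forward(policy, queries, responses)
        loss = ppo_loss(new_logprobs, old_logprobs, values, rewards, ref_logprobs)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```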

    • @tryit-wv8ui · 3 months ago

      @umarjamilai Hmm, OK, I was missing that

    • @alainrieger6905 · 3 months ago

      @umarjamilai Thank you so much

  • @weicheng4608 · 3 months ago

    Hello Umar, thanks for the amazing content. I have a question, could you please help me? At 1:56:40, for the KL penalty, why is it logprob - ref_logprob? The KL divergence formula is KL(P||Q) = sum(P(x) * log(P(x)/Q(x))), so logprob - ref_logprob only maps to log(P(x)/Q(x))? Isn't it missing the sum over P(x) * ... part of KL(P||Q)? Thanks a lot.
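
    One way to see it, for anyone with the same doubt: the sum over P(x) in the definition of KL is an expectation under P, and in RLHF that expectation is estimated from the tokens actually sampled from the policy itself, so each sampled token contributes only the log-ratio (a simple single-sample estimator; implementations may use other estimators):

```latex
D_{\mathrm{KL}}(P \,\|\, Q) = \mathbb{E}_{x \sim P}\!\left[\log \frac{P(x)}{Q(x)}\right]
\approx \frac{1}{N}\sum_{i=1}^{N}\bigl(\log P(x_i) - \log Q(x_i)\bigr), \qquad x_i \sim P
```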

  • @TechieIndia · 11 days ago

    A few questions:
    1) Offline policy learning makes training faster, but how would we have done it without offline policy learning? I mean, I am not able to see the difference between how it is usually done (on-policy) and why this offline approach is more efficient.

  • @andreanegreanu8750 · 3 months ago

    Hi Umar, sorry to bother you (again). I think I understood well the J function, which we want to maximize. But it seems you quickly state that it is somewhat equivalent to the L_PPO function that we want to minimize. It may be obvious, but I really don't get it.

  • @s8x. · 3 months ago

    50:27 why is it the hidden states of all the answer tokens here, when earlier it was just the last hidden state?

  • @user-hd7xp1qg3j · 6 months ago · +1

    Legend is back

  • @SangrezKhan · 1 month ago

    Good job Umar. Can you please tell us which font you used in your slides?

  • @gangs0846 · 6 months ago · +1

    Absolutely fantastic

  • @zhouwang2123 · 6 months ago

    Thanks for your work and sharing, Umar! I learned new stuff from you again!
    Btw, does the KL divergence play a similar role to the clipped ratio, in preventing the new policy from drifting too far from the old one? Additionally, unlike actor-critic in RL, here it looks like the policy and value functions are updated simultaneously. Is this because of the partially shared architecture, and for computational efficiency?

    • @umarjamilai · 6 months ago

      When fine-tuning a model with RLHF, before the fine-tuning begins, we make another copy of the model and freeze its weights.
      - The KL divergence forces the fine-tuned and frozen model to be "similar" in their log probabilities for each token.
      - The clipped ratio, on the other hand, is not about the fine-tuned model and the frozen one, but rather, the offline and the online policy of the PPO setup.
      You may think that we have 3 models in total in this setup, but actually it's only two because the offline and the online policy are the same model, as explained in the "pseudo-code" of the off-policy learning. Hope it answers your question.
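
      For completeness, the clipped ratio mentioned above is the standard PPO surrogate between the online and offline policy; a minimal sketch with illustrative tensor names:

```python
import torch

def ppo_policy_loss(logprobs, old_logprobs, advantages, cliprange=0.2):
    """PPO clipped surrogate loss (to be minimized); a sketch.

    logprobs:     log pi_theta(a_t | s_t) from the model being optimized (online policy)
    old_logprobs: log pi_theta_old(a_t | s_t) recorded when the trajectories were collected
    advantages:   advantage estimates for the same tokens
    """
    ratio = torch.exp(logprobs - old_logprobs)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - cliprange, 1.0 + cliprange) * advantages
    # Take the pessimistic (element-wise minimum) objective and negate it to get a loss.
    return -torch.mean(torch.minimum(unclipped, clipped))
```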

  • @generichuman_ · 5 months ago

    I'm curious if this can be done with Stable Diffusion. I'm imagining having a dataset of images that a human would go through with pair ranking to order them in terms of aesthetics, and using this as a reward signal to train the model to output more aesthetic images. I'm sure this exists; I just haven't seen anyone talk about it.

  • @pranavk6788 · 5 months ago

    Can you please cover V-JEPA by Meta AI next? Both theory and code

  • @andreanegreanu8750 · 3 months ago

    There is something that I found very confusing. It seems that the value function shares the same theta parameters as the LLM. That is very unexpected. Can you confirm this, please? Thanks in advance.

  • @xingfang8507 · 6 months ago · +2

    You're the best!

  • @parthvashisht9555 · 5 months ago · +1

    You are amazing!

  • @alexandrepeccaud9870 · 6 months ago · +1

    This is great

  • @dhanesh123us · 5 months ago

    These videos are amazing @umar jamil. This is fairly complex theory that you have dug into and explained in simple terms; hats off. Your video inspired me to take up a Coursera course on RL. Thanks a ton.
    A few basic queries though:
    1. My understanding is that the theta parameters in the PPO algorithm are all the model parameters? So we are recalibrating the LLM in some sense.
    2. Is the reward model pre-defined?
    3. Also, how does temperature play a role in this whole setup?

  • @wongdope4147 · 5 months ago · +1

    What a gem of a creator!!!!!!!!

    • @umarjamilai · 5 months ago

      Thank you for your support; let's connect on LinkedIn.

  • @abhinav__pm · 6 months ago

    Bro, I want to fine-tune a model for a translation task. However, I encountered a ‘CUDA out of memory’ error. Now I plan to rent a GPU via an AWS EC2 instance. How is payment processed in AWS? They asked for card details when I signed up. Do they automatically process the payment?

  • @YKeon-ff4fw · 6 months ago

    Could you please explain why, in the formula mentioned at the 39-minute mark in the bottom right corner of the video, the product ranges from t=0 to T-1, but after taking the logarithm and differentiating, the summation ranges from t=0 to T? :)

    • @umarjamilai · 6 months ago

      I'm sorry, I think it's just a product of laziness. I copied the formulas from OpenAI's "SpinningUp" website and didn't check carefully. I'll update the slides. Thanks for pointing out!

  • @keinishimura-gasparian5534 · 6 months ago

    Is the diagram shown at 50 minutes accurate? I had thought that with typical RLHF training, you only calculate the reward for the full completion rather than summing rewards for all intermediate completions.
    Edit: It turns out this is addressed later in the video.

    • @umarjamilai · 6 months ago · +1

      In the vanilla policy gradient optimization, you can calculate it for all intermediate steps. In RLHF, we only calculate it for the entire sentence. If you watch the entire video, when I show the code, I explicitly clarify this.

    • @keinishimura-gasparian5534 · 6 months ago

      @umarjamilai Thanks for the clarification, I haven't watched the whole video yet.

  • @EsmailAtta · 6 months ago

    Can you make a video on coding the diffusion transformer from scratch, as always, please?

  • @tubercn · 6 months ago · +2

    💯

  • @Gasa7655 · 6 months ago · +2

    DPO Please

    • @umarjamilai · 4 months ago · +2

      Done: ruclips.net/video/hvGa5Mba4c8/видео.html

  • @kevon217 · 4 months ago · +1

    “drunk cat” model 😂

  • @vardhan254 · 6 months ago

    LETS GOOOOO

  • @esramuab1021 · 4 months ago

    Why don't you explain in Arabic? As Arabs, we need Arabic resources; English speakers already have enough.

    • @ehsanzain5999 · 2 months ago

      Because learning in English is generally better. If you need anything, I can answer your questions.
