Direct Preference Optimization (DPO) explained: Bradley-Terry model, log probabilities, math

  • Published: 19 Jan 2025

Comments • 68

  • @Patrick-wn6uj
    @Patrick-wn6uj 9 months ago +15

    The legend returns; always excited for your videos. I am an international student at Shanghai Jiao Tong University. Your videos have given me a very strong foundation in transformers. Many blessings your way

    • @umarjamilai
      @umarjamilai  9 months ago +3

      Let's connect on LinkedIn; I have a small WeChat group you can join.

    • @汪茶水
      @汪茶水 8 months ago

      @@umarjamilai I'd like to join too

    • @汪茶水
      @汪茶水 8 months ago

      @@umarjamilai I saw you also have an account on Bilibili

    • @EliRahimkhani
      @EliRahimkhani 3 months ago

      Same here from Canada!
      I can't thank you enough @umarjamilai

    • @谭立力
      @谭立力 2 days ago

      @@umarjamilai I'd like to join too

  • @RudraPratapDhara
    @RudraPratapDhara 9 months ago +7

    The legend is back, the GOAT. If my guess is right, the next video will be ORPO or Q*

    • @umarjamilai
      @umarjamilai  9 months ago +13

      Actually, the next video is going to be a totally new topic not related specifically to language models. Stay tuned!

    • @olympus8903
      @olympus8903 9 months ago

      @@umarjamilai waiting

  • @luxorska5143
    @luxorska5143 9 months ago +4

    Wow, your explanation is so clear and complete... you are a godsend, keep doing it. You're a phenomenon

  • @pietrogirotto6414
    @pietrogirotto6414 3 months ago +1

    Your explanations are on a whole other level compared to whatever else you can find online. Keep up the amazing work and thank you!

  • @cken27
    @cken27 9 months ago +2

    Thanks for making these videos. Concise and clear

  • @mlloving
    @mlloving 9 months ago +3

    Thank you! It's a very clear explanation. It helps with reading the original paper. Looking forward to new topics.

  • @kmalhotra3096
    @kmalhotra3096 9 months ago +2

    Amazing! Great job once again!

  • @amanattheedge9056
    @amanattheedge9056 5 months ago +1

    Very clear explanations!! Please continue making such good videos!

  • @sauravrao234
    @sauravrao234 9 months ago +6

    I humbly request you to make videos on how to build a career in machine learning and AI. I am a huge fan of your videos, and I thank you for all the knowledge that you have shared

    • @umarjamilai
      @umarjamilai  9 months ago +6

      Hi! I will for sure make a video in the future about my personal journey. I hope that can help more people in navigating their own journeys. Have a nice day!

  • @nwanted
    @nwanted 7 months ago +1

    Thanks so much Umar, I always learn a lot from your videos!

  • @janigiovanni6075
    @janigiovanni6075 1 month ago +1

    Great video, thank you very much for this!

  • @vanmira
    @vanmira 8 months ago +1

    These lectures are amazing. Thank you!

  • @DiegoSilva-dv9uf
    @DiegoSilva-dv9uf 9 months ago +1

    Thanks!

  • @amankhurana2154
    @amankhurana2154 4 months ago +1

    Awesome, thank you so much for putting this out, super helpful!

  • @yinghaohu8784
    @yinghaohu8784 6 months ago +2

    You explained it very clearly. Thanks!

  • @xugefu
    @xugefu 11 days ago

    Thanks!

  • @alexm1815
    @alexm1815 3 months ago +1

    This is very, very good. Thank you!

  • @lukeskywalker7029
    @lukeskywalker7029 9 months ago

    New video 🎉 can't wait to watch, although I've been using DPO in production for a while now!

  • @binjianxin7830
    @binjianxin7830 5 months ago +1

    I believe the most evident insight of DPO is that it turns an RL problem into an equivalent MLE problem, while the optimal reward model is guaranteed by the human preference input, by definition. That's the meat. But the efficiency still depends on the human annotators' consistency.
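
    For reference, the objective being described is the DPO loss, which indeed has the form of a maximum-likelihood (binary classification) objective over human preference pairs; in the paper's notation,

        \mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}})
          = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\!\left[
              \log\sigma\!\left(
                \beta\log\frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
                - \beta\log\frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
              \right)
            \right]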

  • @SaiKiran-jc8yp
    @SaiKiran-jc8yp 9 months ago +1

    Best explanation so far!!!

  • @k1tajfar714
    @k1tajfar714 5 months ago +1

    Awesome video. Please continue.

  • @sidward
    @sidward 9 months ago +2

    Thanks for the great video! Very intuitive explanation, and particular thanks for the code examples. Question: at 37:41, how do we know that solving the optimization problem will yield pi_*? Is there a guaranteed unique solution?

    • @umarjamilai
      @umarjamilai  9 months ago +1

      Please check the paper I linked in the description for a complete derivation of the formula. It is also done in the DPO paper, but in my opinion the other paper is better suited for this particular derivation.
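
      For context, the KL-constrained reward-maximization problem at that timestamp has a unique closed-form maximizer (a sketch, following the derivation in the DPO paper): the optimum is a Gibbs distribution built on the reference policy,

          \pi^{*}(y \mid x) = \frac{1}{Z(x)}\,\pi_{\mathrm{ref}}(y \mid x)\,\exp\!\left(\tfrac{1}{\beta}\,r(x,y)\right),
          \qquad Z(x) = \sum_{y}\pi_{\mathrm{ref}}(y \mid x)\,\exp\!\left(\tfrac{1}{\beta}\,r(x,y)\right),

      and it is unique because the objective (expected reward minus a β-weighted KL term) is strictly concave in the policy over the probability simplex.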

  • @koiRitwikHai
    @koiRitwikHai 4 months ago

    Great explanation, but I have some doubts, please help:
    36:50 In L_DPO, π* was replaced with π_theta... why is π_theta considered the optimal policy?
    44:13 You said "each hidden state contains information about itself and all the tokens that come before it", but this is applicable only to the decoder part of the transformer. So is this transformer layer actually a decoder layer, like GPT?

  • @olympus8903
    @olympus8903 9 months ago +1

    My kind request: please increase the volume a little bit, just a little bit. Otherwise your videos are outstanding. Best I can say.

  • @elieelezra2734
    @elieelezra2734 7 months ago

    Hello Umar,
    Great as usual. However, why do you say at 46:11 that you need to sum the log probabilities up? The objective function is the expectation of the logarithm of the difference of two weighted log-probability ratios. I don't get what exactly you want to sum up. Thank you
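
    For context, the sum at that timestamp refers to how each sequence-level term log π(y | x) inside those ratios is computed in practice: as the sum of the per-token log probabilities of the completion. A minimal PyTorch-style sketch (function and variable names are illustrative, not taken from the video):

        import torch

        def sequence_log_prob(logits: torch.Tensor, labels: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
            # logits: (batch, seq_len, vocab) model outputs
            # labels: (batch, seq_len) token ids of the completion
            # mask:   (batch, seq_len) 1 for completion tokens, 0 for prompt/padding
            log_probs = torch.log_softmax(logits, dim=-1)
            token_log_probs = log_probs.gather(-1, labels.unsqueeze(-1)).squeeze(-1)
            # Summing the per-token log probs gives the log probability of the whole sequence.
            return (token_log_probs * mask).sum(dim=-1)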

  • @mahdisalmani6955
    @mahdisalmani6955 8 months ago +1

    Thank you very much for this video. Please cover ORPO as well.

  • @mrsmurf911
    @mrsmurf911 8 months ago +1

    Love from India sir, you are a legend 😊😊

  • @jak-zee
    @jak-zee 9 months ago +1

    Enjoyed the style in which the video is presented. Which video editor/tools do you use to make your videos? Thanks.

    • @umarjamilai
      @umarjamilai  9 months ago +1

      I use PowerPoint for the slides and Adobe Premiere for video editing.

    • @jak-zee
      @jak-zee 9 months ago

      @@umarjamilai What do you use to draw on your slides? I am assuming you connected an iPad to your screen.

  • @abdullahalsaadi5991
    @abdullahalsaadi5991 9 months ago

    Amazing explanation. Would it be possible to make a video on the theory and implementation of automatic differentiation (autograd)?

  • @AptCyborg
    @AptCyborg 9 months ago

    Amazing video! Please do one on SPIN (Self-Play Fine-Tuning) as well

  • @tuanduc4892
    @tuanduc4892 8 months ago

    Thanks for your lecture. I wonder, could you explain vision-language models?

  • @AndriiLomakin
    @AndriiLomakin 4 months ago

    Thank you for the video! Can you provide a video that explains AgentQ training in detail?

  • @vardhan254
    @vardhan254 9 months ago +1

    Love your videos, Umar!!

  • @tommysnowy3068
    @tommysnowy3068 9 months ago

    Amazing video. Would it be possible for you to explain video transformers or potential guesses at how Sora works? Another exciting idea is explaining GFlowNets.

  • @ai.mlvprasad
    @ai.mlvprasad 8 months ago

    What presentation software are you using, sir?

  • @kevinscaria
    @kevinscaria 2 months ago +1

    Brilliant!!!!!!

  • @plslokeshreddy
    @plslokeshreddy 7 months ago

    Thanks for the video. Do you know how we can create a dataset for DPO training? I currently have only question-answer pairs. Is it fine if I take y_w as the answer and y_l as some random text (which would obviously have lower preference than the answer) and then train on that?

    • @plslokeshreddy
      @plslokeshreddy 7 months ago

      The potential problem I think could happen is that having random text may decrease the loss while the policy may not even change much.
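
      For reference, DPO training expects pairwise preference data for the same prompt rather than random negatives: each example pairs a prompt with a preferred answer y_w and a genuinely dispreferred answer y_l. A minimal sketch of that format (the field names follow the common "prompt"/"chosen"/"rejected" convention; the rows themselves are illustrative):

          # Each row is one (x, y_w, y_l) triple.
          preference_dataset = [
              {
                  "prompt": "Where is Shanghai?",
                  "chosen": "Shanghai is a city in China.",      # y_w: preferred answer
                  "rejected": "Shanghai is a city in Europe.",   # y_l: dispreferred answer
              },
              # ... more rows
          ]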

  • @nguyenhuuuc2311
    @nguyenhuuuc2311 9 months ago

    Hi Umar,
    If I use LoRA for fine-tuning a chat model with the DPO loss, what should I use as the reference model?
    - The chat model with LoRA applied
    - Or the chat model itself, without LoRA?

    • @umarjamilai
      @umarjamilai  9 months ago

      Considering LoRA is just a way to "store" fine-tuned weights with a smaller computation/memory footprint, the model WITHOUT LoRA should be used as the reference model.

    • @nguyenhuuuc2311
      @nguyenhuuuc2311 9 months ago

      @@umarjamilai With my limited GPU, I can only fine-tune by combining a 4-bit-quantized model + LoRA. Surprisingly, using just the 4-bit model leads to NaN weight updates after one batch. But once LoRA is added, my loss updates smoothly without any problems.

    • @nguyenhuuuc2311
      @nguyenhuuuc2311 9 months ago

      Thank you SO much for the quick answer and your excellent video. I did get the hang of the DPO loss and was able to implement the DPO loss + training loop in vanilla PyTorch.
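
      Since the thread mentions implementing the DPO loss in vanilla PyTorch, here is a minimal sketch of what such a loss function can look like (a sketch only: it assumes sequence-level log probabilities have already been computed by summing per-token log probs, and all names are illustrative):

          import torch.nn.functional as F

          def dpo_loss(policy_chosen_logps, policy_rejected_logps,
                       ref_chosen_logps, ref_rejected_logps, beta=0.1):
              # log [pi_theta / pi_ref] for the chosen and rejected completions.
              chosen_logratio = policy_chosen_logps - ref_chosen_logps
              rejected_logratio = policy_rejected_logps - ref_rejected_logps
              # -log sigmoid(beta * (difference of the two log-ratios)), averaged over the batch.
              # The reference log probs come from the frozen base model (no gradients through them).
              return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()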

  • @CarterKira-g9s
    @CarterKira-g9s 8 months ago

    Great explanation, thanks. How about the recent work "KTO: Model Alignment as Prospect Theoretic Optimization"? Can you compare it with DPO? 😁

  • @OGIMxGaMeR
    @OGIMxGaMeR 9 months ago

    Thank you very much for the explanation.
    I had one question: are preference datasets always made of two and only two answers?

    • @umarjamilai
      @umarjamilai  9 months ago

      According to the Hugging Face library, yes, it looks like you need a dataset with a prompt and two answers: one is called the "chosen" one and the other the "rejected" one. I'm pretty sure there are ways to convert more than two preferences into a dataset of two preferences.

    • @OGIMxGaMeR
      @OGIMxGaMeR 9 months ago

      @@umarjamilai Thank you! Yes, of course. I am just wondering why it wouldn't help to have more than one rejected answer per accepted one. I guess the formula does not consider this case, but it might add value.
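
      One common way to use more than two ranked answers (an assumption about common practice, not something stated in the video) is to expand each ranking into all pairwise chosen/rejected examples before training:

          from itertools import combinations

          def ranking_to_pairs(prompt, answers_best_to_worst):
              # Every earlier answer in the ranking is preferred over every later one.
              return [
                  {"prompt": prompt, "chosen": better, "rejected": worse}
                  for better, worse in combinations(answers_best_to_worst, 2)
              ]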

  • @mohammadsarhangzadeh8820
    @mohammadsarhangzadeh8820 8 months ago

    I love your videos so much. Please make a video about Mamba or Mamba Vision.

    • @umarjamilai
      @umarjamilai  8 months ago

      There's already a video about Mamba, check it out

  • @TemporaryForstudy
    @TemporaryForstudy 9 months ago +1

    Great video. Love from India.

  • @ernestbeckham2921
    @ernestbeckham2921 9 months ago

    Thank you. Can you make a video about liquid neural networks?

  • @trungquang1581
    @trungquang1581 9 months ago

    Thank you so much for your effort! Could you make a video about tokenizers like BPE and SentencePiece from scratch? I would very much appreciate it!

  • @agnarCS
    @agnarCS 1 month ago +1

    thank you

  • @samiloom8565
    @samiloom8565 9 months ago +1

    I enjoy your videos, Umar, on my phone while commuting or sitting in a coffee shop. Only the small font on a phone is tiring me... if you made it a bit bigger, that would be better.

    • @umarjamilai
      @umarjamilai  9 months ago +1

      Sorry for the trouble, I'll keep it in mind for the next videos!

  • @Mortazaghafaripour
    @Mortazaghafaripour 7 months ago +1

    Great 👍

  • @kevon217
    @kevon217 9 months ago

    “digital biscuits”, lol