Direct Preference Optimization (DPO) - How to fine-tune LLMs directly without reinforcement learning

  • Published 24 Jul 2024
  • Direct Preference Optimization (DPO) is a method used for training Large Language Models (LLMs). DPO is a direct way to train the LLM without the need for reinforcement learning, which makes it more effective and more efficient.
    Learn about it in this simple video!
    This is the third one in a series of 4 videos dedicated to the reinforcement learning methods used for training LLMs.
    Full Playlist: • RLHF for training Lang...
    Video 0 (Optional): Introduction to deep reinforcement learning • A friendly introductio...
    Video 1: Proximal Policy Optimization • Proximal Policy Optimi...
    Video 2: Reinforcement Learning with Human Feedback • Reinforcement Learning...
    Video 3 (This one!): Direct Preference Optimization
    00:00 Introduction
    01:08 RLHF vs DPO
    07:19 The Bradley-Terry Model
    11:25 KL Divergence
    16:32 The Loss Function
    14:36 Conclusion
    Get the Grokking Machine Learning book!
    manning.com/books/grokking-ma...
    Discount code (40%): serranoyt
    (Use the discount code on checkout)
  • Entertainment

Comments • 24

  • @Cathiina
    @Cathiina several months ago +1

    Hi Mr. Serrano! I am doing your Coursera course on linear algebra for machine learning at the moment and I am having so much fun! You are a brilliant teacher, and I just wanted to say thank you! I wish more teachers would bring theoretical mathematics down to a more practical level. Obviously loving the very expensive fruit examples :)

    • @SerranoAcademy
      @SerranoAcademy  several months ago +1

      Thank you so much @Cathiina, what an honor to be part of your learning journey, and I’m glad you like the expensive fruit examples! :)

  • @miklefeldman
    @miklefeldman 9 hours ago

    Thank you very much for the video!
    Do I understand correctly that RLHF still has some advantages, namely that by using it we can gather a small amount of human preference data, train a reward model on that data, and then have the reward model itself evaluate many more new examples?
    So by having trained the reward model, we basically have a free human annotator that can rate endless new examples.
    In the case of DPO, however, we only have the initial human preference data and that's it.
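
For context on the tradeoff raised here: DPO has no separately trained reward model, but the fine-tuned policy defines an implicit reward through its ratio to the reference model. A rough sketch in the DPO paper's notation (β is the strength of the KL penalty; the equality holds up to a term that depends only on the prompt x):

    r(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}

During training this implicit reward is only ever evaluated on the labeled preference pairs themselves, which is the limitation the comment describes; an RLHF reward model, once trained, can also score brand-new responses.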

  • @subhamkundu5043
    @subhamkundu5043 4 days ago

    Thanks for sharing. Is there any hands-on resource to try DPO?
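
One hands-on option is to implement the DPO loss directly. The snippet below is a minimal, illustrative PyTorch sketch, assuming you have already computed the summed log-probabilities of each chosen and rejected response under the trainable policy and under the frozen reference model; in practice, libraries such as Hugging Face TRL provide a ready-made DPOTrainer that handles this for you.

    import torch
    import torch.nn.functional as F

    def dpo_loss(policy_chosen_logps, policy_rejected_logps,
                 ref_chosen_logps, ref_rejected_logps, beta=0.1):
        # Log-ratios of the trainable policy vs. the frozen reference model
        chosen_logratio = policy_chosen_logps - ref_chosen_logps
        rejected_logratio = policy_rejected_logps - ref_rejected_logps
        # The preferred (chosen) response should earn the higher implicit reward
        margin = beta * (chosen_logratio - rejected_logratio)
        return -F.logsigmoid(margin).mean()

    # Toy example with made-up log-probabilities for two preference pairs
    loss = dpo_loss(torch.tensor([-12.3, -8.1]), torch.tensor([-14.0, -9.5]),
                    torch.tensor([-13.0, -8.0]), torch.tensor([-13.5, -9.0]))
    print(loss)  # scalar loss; in real use only the policy's log-probs carry gradients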

  • @AravindUkrd
    @AravindUkrd several months ago

    Thanks for the simplified explanation. Awesome as always.
    The book link in the description is not working.

    • @SerranoAcademy
      @SerranoAcademy  several months ago

      Thank you so much! And thanks for letting me know, I’ll fix it

  • @mekuzeeyo
    @mekuzeeyo 28 days ago

    Great video as always. I have a question: in practice, which one works better, DPO or RLHF?

    • @SerranoAcademy
      @SerranoAcademy  28 days ago

      Thank you! From what I've heard, DPO works better, as it trains the network directly instead of using RL and two networks.

    • @mekuzeeyo
      @mekuzeeyo 27 days ago

      @@SerranoAcademy Thank you sir for the great work. Your Coursera courses have been awesome.

  • @IceMetalPunk
    @IceMetalPunk several months ago +2

    I'm a little confused about one thing: the reward function, even in the Bradley-Terry model, is based on the human-given scores for individual context-prediction pairs, right? And πθ is the probability from the current iteration of the network, and πRef is the probability from the original, untuned network?
    So then after that "mathematical manipulation", how does the human-given set of scores become represented by the network's predictions all of a sudden?

    • @peace-it4rg
      @peace-it4rg 19 days ago

      Same, I was also thinking about that. I also think it may be incomplete, maybe because it is the DPO loss for just one training pair at a time, while the human evaluator keeps rating pairs throughout training to find better responses 😅. It looks tedious, but I think the main idea is all about training one neural network at a time. If I've got it wrong, correct me.

    • @KeahiXie
      @KeahiXie 10 days ago

      @@peace-it4rg if you find an answer, please let me know, thanks!!

    • @ryanhewitt9902
      @ryanhewitt9902 4 days ago

      I'm also confused here. It seems that in DPO, reward is still an *input* to the Bradley-Terry probabilities. I thought the reason RLHF trained a reward model was to abstract this human preference so that it can be applied to data not explicitly rated by humans. How does representing that reward in the form of a probability obviate the need for abstraction?
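
For what it's worth, here is a rough sketch of the step this thread is asking about, following the derivation in the DPO paper. The KL-constrained reward-maximization problem that RLHF solves has a closed-form optimal policy, which lets the reward be rewritten in terms of the policy itself:

    \pi^{*}(y \mid x) = \frac{1}{Z(x)}\, \pi_{\mathrm{ref}}(y \mid x)\, \exp\!\big(r(x, y)/\beta\big)
    \quad\Longrightarrow\quad
    r(x, y) = \beta \log \frac{\pi^{*}(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)} + \beta \log Z(x)

Substituting this into the Bradley-Terry probability \sigma\big(r(x, y_w) - r(x, y_l)\big), the intractable \log Z(x) terms cancel, leaving a loss that involves only the policy being trained:

    \mathcal{L}_{\mathrm{DPO}} = -\,\mathbb{E}\Big[\log \sigma\Big(\beta \log \tfrac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \tfrac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\Big)\Big]

The human labels enter only through which response of each pair is marked preferred (y_w) versus dispreferred (y_l); no separate reward model is ever fit, so, as the comments note, there is also nothing left over to score unlabeled data with.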

  • @frankl1
    @frankl1 several months ago

    Really love the way you broke down the DPO loss, this direct way is more welcome by my brain :). Just one question on the video: I am wondering how important it is to choose the initial transformer carefully. I suspect that if it is very bad at the task, then we will have to change the initial response a lot, but because the loss function prevents changing too much in one iteration, we will need to perform a lot of tiny changes toward the good answer, making the training extremely long. Am I right?

    • @SerranoAcademy
      @SerranoAcademy  several months ago +1

      Thank you, great question! This method is used for fine-tuning, not specifically for training. In other words, it's crucial that we start with a fully trained model. For training, you'd use normal backpropagation on the transformer, and lots of data.
      Once the LLM is trained and well trusted, you use DPO (or RLHF) to fine-tune it (meaning, post-train it to get from good to great). So we should assume that the model is as trained as it can be, and that's why we trust the LLM and only try to change it marginally.
      If we were to use this method to train a model that's not fully trained... I'm not 100% sure it would work. It may or may not, but we'd have to penalize the KL divergence much less. Also, human feedback gives a lot less data than scraping the whole internet, so I still wouldn't use this as a training method, more as a refining one.
      Let me know if you have more questions!
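
As a reference point for this exchange, the objective DPO is derived from (the same one RLHF optimizes with PPO) makes the "don't change the model too much" idea explicit through a KL term, with β setting how strongly the fine-tuned model is pulled back toward the reference model:

    \max_{\pi_\theta}\; \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_\theta}\big[r(x, y)\big] \;-\; \beta\, D_{\mathrm{KL}}\big(\pi_\theta(\cdot \mid x)\,\|\,\pi_{\mathrm{ref}}(\cdot \mid x)\big)

A larger β keeps each update closer to the reference model, which is the slow-but-careful behavior described in the question; starting from a poorly trained model would indeed require weakening that term, as the reply above notes.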

    • @frankl1
      @frankl1 several months ago

      @@SerranoAcademy Thanks for the answer, I understand better. I forgot that this design is for fine-tuning.

    • @peace-it4rg
      @peace-it4rg 19 days ago

      @@SerranoAcademy Thank you, that was also one of my doubts: the transformer should already be trained well before we can use DPO 😅

  • @guzh
    @guzh several months ago +1

    DPO main equation should be PPO main equation.

  • @frankl1
    @frankl1 several months ago

    Did anyone else expect the Bradley-Terry model to be something different from softmax, like I did? 😅

    • @SerranoAcademy
      @SerranoAcademy  several months ago

      lol, I was expecting something different too initially 🤣
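
For completeness, that intuition is exactly right: with two candidate responses, the Bradley-Terry preference probability is the two-way softmax over their scores, which is the same thing as a sigmoid of the score difference:

    p(y_1 \succ y_2) = \frac{e^{r_1}}{e^{r_1} + e^{r_2}} = \sigma(r_1 - r_2)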

  • @VerdonTrigance
    @VerdonTrigance several months ago

    It's kinda hard to remember all of these formulas and it's demotivating me from further learning.

    • @javiergimenezmoya86
      @javiergimenezmoya86 several months ago +1

      You do not have to remember those formulas. You only have to understand the logic behind them.

    • @SerranoAcademy
      @SerranoAcademy  24 days ago

      Thanks for your comment @VerdonTrigance! I also can't remember these formulas, since to me, they are the worst way to convey information. That's why I like to see it with examples. If you understand the example and the idea underneath, then you understand the concept. Don't worry about the formulas.

    • @SerranoAcademy
      @SerranoAcademy  24 days ago

      Agreed @javiergimenezmoya86!