ReFT: Representation Finetuning for Language Models | AI Paper Explained

  • Published: 5 Oct 2024

Comments • 4

  • @aryamanarora4967 • 5 months ago • +1

    Thank you for making this excellent video about our work! Minor note: at the end you mention an 18-minute training time for our instruction-following ReFT, but that number applies only to the small 1K subset of UltraFeedback (last row in the table). Training on the whole dataset takes a couple of hours; we included that number to show that ReFT is also data-efficient.

    • @aipapersacademy • 5 months ago • +1

      Thank you Aryaman for the kind feedback and for the correction 🙏

  • @jameswhitaker4357 • 5 months ago

    So interesting! 👀

  • @xuantungnguyen9719 • 5 months ago

    Thanks