Knowledge Distillation in Deep Learning - DistilBERT Explained

  • Published: 18 Jan 2025

Comments • 9

  • @251_satyamrai4 • 4 months ago

    Clear explanation. Thank you for your work

  • @ilhamafounnas8279 • 3 years ago +2

    Hey! A very good explanation, waiting for more videos on KD.

    • @dingusagar • 3 years ago

      Thanks 😊. Yes, more videos coming soon.

  • @Gokulhraj • 2 years ago

    Can you please make a video on what happens during fine-tuning of BERT? For example, does it freeze the layers of the core encoder and train only the head, and how do weight updates take place for a question-answering task?
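
The setup asked about in the comment above is a modelling choice rather than something BERT enforces: you can update all layers, or freeze the core encoder and train only the task head. Below is a minimal sketch of the frozen-encoder variant for question answering, assuming the Hugging Face transformers API; the model name, example text, and answer-span indices are purely illustrative.

    import torch
    from transformers import BertForQuestionAnswering, BertTokenizerFast

    model = BertForQuestionAnswering.from_pretrained("bert-base-uncased")
    tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

    # Freeze the core BERT encoder; only qa_outputs (the span-prediction head)
    # keeps requires_grad=True and therefore receives weight updates.
    for param in model.bert.parameters():
        param.requires_grad = False

    optimizer = torch.optim.AdamW(
        [p for p in model.parameters() if p.requires_grad], lr=5e-5
    )

    # One toy training step: the labels are the start/end token positions of the answer span.
    inputs = tokenizer(
        "Who introduced DistilBERT?",
        "DistilBERT was introduced by Hugging Face.",
        return_tensors="pt",
    )
    outputs = model(
        **inputs,
        start_positions=torch.tensor([9]),   # hypothetical answer-start token index
        end_positions=torch.tensor([11]),    # hypothetical answer-end token index
    )
    outputs.loss.backward()  # gradients flow only into the unfrozen QA head
    optimizer.step()

Dropping the freezing loop gives the more common full fine-tuning setup, where the same backward pass updates every layer's weights, not just the head.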

  • @dhirajkumarsahu999 • 2 years ago

    Thank you Dingu

  • @sadikaljarif9635 • 2 years ago +1

    Very good.

  • @sadikaljarif9635 • 2 years ago

    Could you please share the slides?

    • @dingusagar • 2 years ago +1

      docs.google.com/presentation/d/1IkPeSGOcUSO_qyCwtrP9ZBMx-l2aBzj7FDqwPLK9Ekk/edit?usp=drivesdk

    • @sadikaljarif9635 • 2 years ago +1

      @dingusagar Thank you so much.