To Fine Tune or Not Fine Tune? That is the question

  • Published: 15 Dec 2024

Comments • 5

  • @dipteshbose
    @dipteshbose 1 year ago +3

    Awesome, simple, and easy to understand.

  • @ai4sme
    @ai4sme 9 months ago +1

    Awesome explanation! Thanks!

  • @akilja2011
    @akilja2011 1 year ago

    Great tutorial! I’m interested in learning more about how to iterate between testing and training until you reach sufficient inference quality.

  • @JackGuo-l2x
    @JackGuo-l2x 1 year ago

    Thanks! Easy to understand

  • @TheHorse_yes
    @TheHorse_yes 1 year ago

    🐴 Fascinating! I have a localized web platform that uses OpenAI API function calls to query and fetch extra data, and I have been wondering whether I should try fine-tuning a GPT-3.5-16k instance for specific use cases, such as customer service bots that stay up to date and need less extra data fetching. This is especially important for non-English primary use cases, where I find GPT-3.5's wording a bit lacking at times. Will definitely have to take a look at it. Thanks for the video. Regards, Horse
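
    For anyone weighing the same fine-tuning-versus-function-calling trade-off, below is a minimal sketch of starting an OpenAI fine-tuning job with the Python SDK. The file name, model choice, and surrounding details are illustrative assumptions, not anything shown in the video.

    ```python
    # Minimal sketch, assuming the openai Python SDK (v1.x) and a prepared
    # chat-formatted JSONL dataset at "training_data.jsonl" (hypothetical path).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Upload the training examples for fine-tuning.
    training_file = client.files.create(
        file=open("training_data.jsonl", "rb"),
        purpose="fine-tune",
    )

    # Start a fine-tuning job; a tuned model can answer domain questions
    # directly, reducing how often function calls must fetch extra data.
    job = client.fine_tuning.jobs.create(
        training_file=training_file.id,
        model="gpt-3.5-turbo",
    )
    print(job.id, job.status)
    ```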