Self-Extend LLM: Upgrade your context length

  • Published: 23 Jan 2025

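The technique named in the title, Self-Extend, extends an LLM's context window without fine-tuning by remapping relative positions: tokens inside a neighbor window keep their exact relative distances, while more distant tokens fall back to coarser "grouped" positions obtained by floor division. A minimal sketch of that mapping, assuming the scheme described in the Self-Extend paper; the `window` and `group` values here are illustrative, not recommendations:

```python
# Hedged sketch of Self-Extend's relative-position remapping (assumed
# from the Self-Extend / "LLM Maybe LongLM" idea, not this video's code).
# Nearby tokens use exact relative positions (neighbor attention);
# distant tokens use floor-divided "grouped" positions, shifted by
# (window - window // group) so the two regimes join without a gap.

def self_extend_rel_pos(i: int, j: int, window: int = 512, group: int = 4) -> int:
    """Mapped relative position between query index i and key index j (i >= j)."""
    d = i - j
    if d <= window:
        # Neighbor attention: keep the exact relative distance.
        return d
    # Grouped attention: coarse positions via floor division by the
    # group size, offset so the mapping stays continuous at the boundary.
    return (i // group) - (j // group) + window - window // group
```

Because distant positions are compressed by the group size, the largest mapped position stays well below the true distance, which is how a model trained on short contexts can attend over longer inputs without any parameter updates.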
Comments • 4

  • @s11-informationatyourservi44 · 1 year ago

    Sincerely appreciate the deep dives. Another awesome post I'm watching on loop ❤

  • @joebarhouch2742 · 1 year ago · +3

    Super sick video!! Thanks for sharing all of this information; I hope you keep going, I love the content :)
    Would it be possible to share the LLM you trained on LLM knowledge? It would be super useful.

  • @kamleshpaul414 · 1 year ago · +2

    Can you make a video on which fine-tuning parameters give the best results? I've tried fine-tuning so many times and never got below 1 validation loss. I got 0.98 training loss but the validation loss stayed poor. My dataset is 2K rows; is that too small for a 7B model like Mistral 7B, or am I doing something wrong?

  • @pensiveintrovert4318 · 1 year ago · +6

    This is what they claim, but does it actually work in reality?