[QA] Thoughts Are All Over the Place: On the Underthinking of o1-Like LLMs

  • Published: Feb 7, 2025
  • The paper identifies "underthinking" in o1-like LLMs: frequent switching between lines of reasoning prevents any single thought from being explored in depth. It proposes a decoding strategy that improves accuracy on complex reasoning tasks without fine-tuning the model.
    arxiv.org/abs/...
    RUclips: / @arxivpapers
    TikTok: / arxiv_papers
    Apple Podcasts: podcasts.apple...
    Spotify: podcasters.spo...
