Large Language Models Are Zero-Shot Reasoners

  • Published: 31 May 2023
  • Explore AI built for business: watsonx → ibm.biz/meet-watsonx
    When you create a prompt for a large language model, are the answers sometimes wrong or just plain weird? It may be you! Or more accurately, the way you are formulating your question. In the video, Martin Keen explains why LLMs are led astray and offers suggestions on prompting techniques to reduce these mishaps.
    Get started for free on IBM Cloud → ibm.biz/buildonibmcloud
    Subscribe to see more videos like this in the future → ibm.biz/subscribe-now
    #llm #ai #watsonx
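
A minimal sketch of the zero-shot chain-of-thought technique behind the video's title (Kojima et al., 2022): appending a reasoning trigger such as "Let's think step by step" to the question instead of asking for the answer directly. The question below is only illustrative, and the model call itself is omitted.

    # Zero-shot chain-of-thought prompting: the same question, asked two ways.
    # Appending "Let's think step by step" (Kojima et al., 2022) often turns a
    # wrong direct answer into a correct, worked-through one.
    question = (
        "A juggler can juggle 16 balls. Half of the balls are golf balls, "
        "and half of the golf balls are blue. How many blue golf balls are there?"
    )

    direct_prompt = f"Q: {question}\nA:"                          # zero-shot, direct
    cot_prompt = f"Q: {question}\nA: Let's think step by step."   # zero-shot CoT

    print(direct_prompt)
    print(cot_prompt)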

Comments • 13

  • @arijitgoswami3652 1 year ago +1

    Thanks for this video! Would love to see the next video on the Tree of Thoughts method of prompting.

  • @manomancan 1 year ago +5

    Thank you for this video! Though, wasn't it already published? I can even remember the beats of the first few lines.

    • @IBMTechnology 1 year ago +6

      You're right. On occasion we have to republish a video to fix an issue found after publishing. We hope you'll enjoy some of our truly new content as well.

  • @EvanBoyar 3 months ago +1

    There's such a strange, uncanny-valley feeling watching someone who's been inverted (flipped along the vertical axis, as a mirror appears to do).

  • @Zulu369 11 months ago +1

    Excellent explanation. I suggest that next time you add a little history at the beginning of the video about where the term comes from (see the original publication where the term was first coined).

  • @yuchentuan7011 11 months ago +1

    Thanks. I have one question. When doing prompt tuning on a foundation model, how do you choose datasets that cover the general domain (rather than a specific one), and under which circumstances should we train with few-shot prompts versus zero-shot prompts? Thanks.
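
For the few-shot vs. zero-shot part of the question, a short sketch of how the two prompt styles differ; the sentiment task and labels here are hypothetical placeholders, not from the video.

    # Zero-shot: the task description alone. Few-shot: the same task with
    # worked examples prepended, whose format the model can imitate.
    zero_shot = (
        "Classify the sentiment of the review as positive or negative.\n"
        "Review: The battery died after two days.\n"
        "Sentiment:"
    )

    few_shot = (
        "Review: I love this phone.\nSentiment: positive\n\n"
        "Review: The screen cracked within a week.\nSentiment: negative\n\n"
        "Review: The battery died after two days.\nSentiment:"
    )

    print(zero_shot)
    print(few_shot)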

  • @michaeldausmann6066 4 months ago

    Good video. Is there an established way to provide step-by-step examples to the LLM? E.g., will I get better results if I explicitly number my steps and provide enumerated examples? Can I use arrows to indicate example -> step -> final?
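
One plausible format for the enumerated examples the question describes (an assumption, not an established standard): models tend to imitate the structure of the exemplar, so numbered steps in the example usually elicit numbered steps in the answer.

    # A few-shot exemplar with explicitly numbered steps, followed by a new
    # question primed to continue in the same format.
    exemplar = (
        "Q: A shirt costs $20 and is discounted 25%. What is the final price?\n"
        "Step 1: 25% of $20 is $5.\n"
        "Step 2: $20 - $5 = $15.\n"
        "Final answer: $15\n\n"
    )

    new_question = "Q: A book costs $40 and is discounted 10%. What is the final price?\n"

    prompt = exemplar + new_question + "Step 1:"
    print(prompt)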

  • @enthanna 11 months ago +4

    Great video. Thank you.
    Can you make a video about the current state of LLMs in the marketplace? There are lots of claims out there of models as capable as GPT, but it's really hard to separate fact from fiction. Thanks again.

  • @fredericc2184 7 months ago

    I just tried the same direct prompt with GPT-4, and it gave the correct answer!

  • @sirk3v 1 year ago +1

    Has this been re-uploaded, or do I just have a really bad case of déjà vu? I'm 100% sure I have watched this video before, and it wasn't anywhere within the past 18 hours.

  • @amparoconsuelo9451 8 months ago

    Can subsequent SFT and RLHF with different, additional, or reduced content change the character of a GPT model, improve it, or degrade it? Can you modify a GPT model?