Chain of Thought Prompting | Self Consistency Prompting Explained | Prompt Engineering 101

  • Published: 31 Jan 2025

Comments • 13

  • @Analyticsvidhya
    @Analyticsvidhya  6 months ago

    Book FREE 1:1 Mentorship for Gen AI / Data Science
    Link 🔗 bit.ly/3wlIIGz

  • @arvindmathur6574
    @arvindmathur6574 1 month ago +1

    Very well explained. Also, the examples provided are great illustrations! Congratulations!

  • @NLPprompter
    @NLPprompter 1 year ago

    More prompts please, love your explanation.

  • @swetasharma8467
    @swetasharma8467 1 year ago

    Very well explained!! Thank you!

  • @tofulover09
    @tofulover09 9 months ago

    Thank you for sharing. What types of prompting techniques would you recommend to incorporate empathy and increase the model's understanding of nuance in an LLM?

    • @Analyticsvidhya
      @Analyticsvidhya  9 months ago

      Adding memory (via RAG) is a great way to make LLM responses nuanced.
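
      For the curious, here is a minimal Python sketch of that idea. It is an assumption-laden illustration, not a specific library's API: `embed` stands in for whatever embedding model you use, and the helper names are made up for the example.

      ```python
      import numpy as np

      def embed(text: str) -> np.ndarray:
          """Placeholder: swap in a real embedding model (e.g. a sentence-transformer)."""
          raise NotImplementedError

      def retrieve(query: str, docs: list[str], doc_vecs: np.ndarray, k: int = 3) -> list[str]:
          """Return the k stored documents most similar to the query (cosine similarity)."""
          q = embed(query)
          sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
          return [docs[i] for i in np.argsort(sims)[::-1][:k]]

      def rag_prompt(query: str, docs: list[str], doc_vecs: np.ndarray) -> str:
          """Prepend the retrieved 'memory' so the model can ground its answer in it."""
          context = "\n".join(retrieve(query, docs, doc_vecs))
          return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
      ```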

  • @arpitsisodia1428
    @arpitsisodia1428 1 year ago

    lol.. upping the prompting game.
    Arithmetic, logical reasoning: chain of thought, step by step.
    The reasoning shows up in the answers, not in the prompt. Where is this information added to the model at run time?

    • @Analyticsvidhya
      @Analyticsvidhya  1 year ago

      Can you please mention the timestamp you are referring to for this query?

  • @Jackson_M5
    @Jackson_M5 9 months ago

    Self-Consistency Prompting is really just a poor man's version of a Stacked Ensemble with Diverse Reasoning... why not call it Ensemble Prompting, or extend the concept to Sparse Stacked Ensemble Prompting?

  • @graciasnara
    @graciasnara 9 months ago

    Dear Sir, thank you for your generous explanation. I understand that adding "step by step" applies CoT. But is there a keyword I can use to apply the "self-consistency" method in combination with CoT?

    • @Analyticsvidhya
      @Analyticsvidhya  9 months ago

      Self-consistency prompting is a technique used to improve the performance of CoT prompting on tasks involving arithmetic and common-sense reasoning [18:22]. It works by sampling the language model multiple times on the same prompt (with some randomness, so the reasoning paths differ) and taking the answer that appears most often as the final one. This essentially simulates how humans explore different reasoning paths before arriving at a well-informed decision.
      So yes, you can use self-consistency prompting in combination with CoT prompting: call the model multiple times on the same CoT prompt and take the most consistent answer as the final output. There is no special keyword for it; the repetition and majority vote happen outside the prompt.
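
      As a minimal sketch (assuming a generic `call_llm` function in place of a real API, and an "Answer:" convention that is purely illustrative), the whole method is just sampling plus a majority vote:

      ```python
      from collections import Counter

      def call_llm(prompt: str, temperature: float = 0.7) -> str:
          """Placeholder: replace with a call to your actual LLM API."""
          raise NotImplementedError

      def extract_final_answer(completion: str) -> str:
          """Naive extraction: assumes the completion ends with 'Answer: <value>'."""
          return completion.rsplit("Answer:", 1)[-1].strip()

      def self_consistency(question: str, n_samples: int = 5) -> str:
          """CoT + self-consistency: sample several reasoning paths at non-zero
          temperature, then take the most frequent final answer."""
          prompt = question + "\nLet's think step by step. End with 'Answer: <value>'."
          answers = [
              extract_final_answer(call_llm(prompt, temperature=0.7))
              for _ in range(n_samples)
          ]
          # The answer reached by the most reasoning paths wins the vote.
          return Counter(answers).most_common(1)[0][0]
      ```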

    • @graciasnara
      @graciasnara 9 months ago

      @@Analyticsvidhya Thank you, teacher. It's clear to me now.