Omar Khattab, DSPy: Compiling Declarative Language Model Calls into Self-Improving Pipelines

  • Published: 2 Feb 2025

Comments • 12

  • @zedler12
    @zedler12 10 days ago

    I have been looking for this. I have a question: how do we generate a prompt for a Conversation Agent? Do you have an example?

  • @beedr.metwallykhattab115
    @beedr.metwallykhattab115 8 months ago

    Very good, Omar Khattab.
    May God protect you, watch over you, and bless you.

    • @user-0j27M_JSs
      @user-0j27M_JSs 3 months ago

      Why do you think he's Muslim? He might have already passed this stage in human development.

  • @vbywrde
    @vbywrde 11 months ago

    Great! Thank you!

  • @420_gunna
    @420_gunna 1 year ago +6

    dank

  • @julianrosenberger1793
    @julianrosenberger1793 1 year ago +2

    🙏

  • @pensiveintrovert4318
    @pensiveintrovert4318 8 months ago +1

    The whole point of LLMs is the ability to interact with them in natural language, directly. If that is gone, then FMs should be built around automation and NOT using English.

    • @sathishgangichetty685
      @sathishgangichetty685 7 months ago

      You are still interacting (as an end user) via natural language. This is showing how to make SOTA prompting techniques available to the masses without them having to learn those techniques.
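
To make that reply concrete, here is a minimal sketch of how this looks in code. It assumes an older DSPy API (dspy.OpenAI and dspy.settings.configure; newer releases configure models differently) and a hypothetical model choice. The caller still passes plain natural language, and the chain-of-thought prompt is assembled by the library rather than written by hand.

```python
import dspy

# Configure a language model once; no prompts are hard-coded anywhere below.
# (dspy.OpenAI is the older client; model choice here is just an example.)
lm = dspy.OpenAI(model="gpt-3.5-turbo")
dspy.settings.configure(lm=lm)

# Declare *what* the step does ("question -> answer"), not *how* to prompt for it.
qa = dspy.ChainOfThought("question -> answer")

# The end user still speaks natural language; DSPy expands the signature into a
# full chain-of-thought prompt (instructions, fields, rationale) under the hood.
result = qa(question="What is the capital of France?")
print(result.answer)
```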

  • @thannon72
    @thannon72 6 months ago +1

    Another rubbish presentation on DSPy. Do these people really understand it? Just a regurgitation of the documentation.

  • @campbellhutcheson5162
    @campbellhutcheson5162 11 months ago +6

    I've met a lot of people skeptical of DSPy, and these kinds of videos do nothing to dispel the skepticism. I'm 10 minutes in and we haven't seen any examples of how this is any different from ordinary prompting with an LLM. The "goal" that he describes is literally just the prompt without explicit CoT language, and CoT language will probably be unnecessary with stronger models, which will better infer when they need to do CoT to reach a good result (excluding cases where output is coerced into JSON mode, etc.).

    • @campbellhutcheson5162
      @campbellhutcheson5162 11 months ago

      I literally paused at 16:44 to read the produced prompt. It's fine. But you literally had to do all the work to get there, and I'm not sure that's substantially less than writing the prompt yourself, especially when you're going to get GPT-4 to write the first version of the prompt for you (remember, turbo-preview knows what LLM prompts are).

    • @GURUPRASADIYERV
      @GURUPRASADIYERV 11 months ago +3

      The magic is in the compiling engine underneath. The optimization will get better with open-source contributions.
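
As a rough illustration of that compiling step: this is a sketch only, assuming the older BootstrapFewShot teleprompter API plus a hypothetical metric and training set (names may differ in newer DSPy releases). The optimizer runs the program over the training examples, keeps the traces that pass the metric, and bakes them back into the program as few-shot demonstrations.

```python
import dspy
from dspy.teleprompt import BootstrapFewShot

# Hypothetical model choice; newer DSPy versions configure models differently.
dspy.settings.configure(lm=dspy.OpenAI(model="gpt-3.5-turbo"))

# A tiny, hypothetical training set of question/answer pairs.
trainset = [
    dspy.Example(question="Who wrote Hamlet?", answer="William Shakespeare").with_inputs("question"),
    dspy.Example(question="What is 2 + 2?", answer="4").with_inputs("question"),
]

# Metric the optimizer tries to satisfy while searching for good demonstrations.
def answer_match(example, pred, trace=None):
    return example.answer.lower() in pred.answer.lower()

program = dspy.ChainOfThought("question -> answer")

# "Compiling": run the program over the trainset, keep traces that pass the
# metric, and attach them to the program as few-shot demonstrations.
optimizer = BootstrapFewShot(metric=answer_match)
compiled_program = optimizer.compile(program, trainset=trainset)

print(compiled_program(question="Who painted the Mona Lisa?").answer)
```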