Complete DSPy Tutorial - Master LLM Prompt Programming in 8 amazing examples!

  • Published: 29 Jan 2025

Comments • 92

  • @avb_fj
    @avb_fj  4 months ago +4

    If you found DSPy useful, check out this TextGrad tutorial: ruclips.net/video/6pyYc8Upl-0/видео.html.
    TextGrad is another awesome LLM Prompt Optimization library that also tries to reduce prompt engineering in favor of a more programmatic approach.

  • @AIExplorer365
    @AIExplorer365 8 days ago +2

    Excellent content, presentation bhai, keep it up!!

  • @c17hawke
    @c17hawke 18 days ago +1

    By far the best content on DSPy. Until now I had completely ignored this library, but now I can see it has immense potential. Thanks for the content; I liked and subscribed!

  • @JimMendenhall
    @JimMendenhall 5 months ago +16

    This is the only video or resource I've seen on DSPy that makes ANY sense. Great job!

  • @fojo_reviews
    @fojo_reviews 5 months ago +22

    I kinda feel guilty that I am seeing such content without paying anything! This is gold...Thank you!

    • @avb_fj
      @avb_fj  5 months ago

      Thanks!

    • @caseyhoward8261
      @caseyhoward8261 5 months ago +3

      You could always donate to his channel 😉

    • @zes7215
      @zes7215 3 months ago

      wrr

  • @spookymv
    @spookymv 5 months ago +8

    Thanks to you, my friend, I learned what I hadn't been able to understand for days. I had been determined to learn DSPy, but I just didn't get it.
    Thanks a lot.

    • @avb_fj
      @avb_fj  5 months ago +2

      Glad to hear that! Your insistence has paid off! Good luck with your DSPy journey!

    • @gaims1
      @gaims1 4 months ago

      Same here, learnt what I couldn't from my other attempts

  • @AyanKhan-dc3eu
    @AyanKhan-dc3eu 5 months ago +4

    When this module came out, the docs were very confusing. Thank you for such a great explanation.

  • @security_threat
    @security_threat 2 months ago +1

    Thanks!

  • @haralc6196
    @haralc6196 5 months ago +1

    Thanks!

  • @MrMoonsilver
    @MrMoonsilver 5 months ago +2

    Yes boss! Subscribed! Great video and very much untapped territory; the only well-made tutorial for DSPy!

    • @avb_fj
      @avb_fj  5 months ago

      Thanks!

  • @erfanrezaei7791
    @erfanrezaei7791 2 months ago +1

    Very good and clear explanation, thanks buddy👌

  • @thareejanp3933
    @thareejanp3933 4 months ago +1

    Wonderful content and presentation; loved the way you explained it. Keep it up!

  • @mazsorosh
    @mazsorosh 5 months ago

    I love the content and presentation in this video! Keep it up!💙

    • @avb_fj
      @avb_fj  4 months ago

      😇

  • @jeffrey5602
    @jeffrey5602 5 months ago +2

    As a German I enjoyed the chosen example a lot 😄

    • @avb_fj
      @avb_fj  5 months ago

      😂

  • @trashchenkov
    @trashchenkov 5 months ago +1

    thanks for the video! It will be great to see how to use DSPy for agents.

  • @tedhand6237
    @tedhand6237 2 months ago +1

    I've been impressed with the ability of LLMs to summarize academic research but also getting really frustrated with the limits and hallucinations. I wonder if programming my own LLM is the answer.

  • @shoaibsh2872
    @shoaibsh2872 5 months ago +1

    Great video man, also loved the one piece T-shirt ;)

    • @avb_fj
      @avb_fj  5 months ago

      Thanks for noticing!

  • @GagandeepBhattacharya22
    @GagandeepBhattacharya22 4 months ago

    I love the content & found it really useful. Thank You!
    I have only one suggestion for you: "ZOOM IN" on the code sections, as it's really difficult to see the code.

    • @avb_fj
      @avb_fj  4 months ago

      Thanks for the suggestion! Will keep it in mind next time.

  • @arirajuh
    @arirajuh 4 months ago

    Thank you for sharing the step-by-step tutorial. I tried DSPy with local Ollama (Llama 3.1), and Chain of Thought provided a different answer. I have shared the result below. [I don't know anything about football.]
    Reasoning: Let's think step by step in order to answer this question. First, we need to identify the team that won the World Cup in 2014. The winner of the tournament was Germany. Next, we should find out who scored the final goal for Germany. It was Mario Götze who scored the winning goal against Argentina. To determine who provided the assist, we can look at the details of the game and see that Toni Kroos made a long pass to André Schürrle, who then crossed the ball to Götze.

    • @avb_fj
      @avb_fj  4 months ago

      Yes the results will vary depending on the underlying LLM and the specific instructions as well.

  • @litttlemooncream5049
    @litttlemooncream5049 1 month ago +1

    Thanks for your contribution! I used to think that writing a prompt was just stringing together a long, long series of words, but I've realized there is more behind it, since there's a whole role called prompt engineering. DSPy helps us write prompts in a modularized way. Sadly, it doesn't seem to support Chinese, so I still have to struggle with writing prompts that put all the information in one place, which is the only way LangChain supports lol

    • @avb_fj
      @avb_fj  1 month ago +1

      I'm very surprised you are unable to generate text in Chinese. The underlying LLMs that you are using with LangChain should be callable with DSPy as well. I would try adding more instructions in the system prompt (the docstring of your Signature class) and maybe passing a couple of few-shot examples in Chinese. If possible, I'd use Chinese characters wherever possible when calling the DSPy module as well (rough sketch below).
      There is also a DSPy Discord server which you can join, and they should be able to help you set up the code.
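      Roughly what I have in mind, as a sketch (assuming a recent DSPy release with dspy.LM/dspy.configure; the signature, field names, and model are just examples):
        import dspy
        from dspy.teleprompt import LabeledFewShot
        dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))  # any model that handles Chinese well
        class ChineseQA(dspy.Signature):
            """请用简体中文回答问题。Always answer in Simplified Chinese, never in English."""
            question = dspy.InputField(desc="用户的问题 (the user's question)")
            answer = dspy.OutputField(desc="用中文写的简短回答 (a short answer written in Chinese)")
        # A couple of labeled few-shot examples, also written in Chinese.
        trainset = [
            dspy.Example(question="法国的首都是哪里？", answer="法国的首都是巴黎。").with_inputs("question"),
            dspy.Example(question="太阳从哪边升起？", answer="太阳从东边升起。").with_inputs("question"),
        ]
        qa = LabeledFewShot(k=2).compile(dspy.Predict(ChineseQA), trainset=trainset)
        print(qa(question="为什么天空是蓝色的？").answer)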

  • @krlospatrick
    @krlospatrick 5 months ago

    Thanks for sharing this amazing tutorial

  • @jaggyjut
    @jaggyjut 1 month ago

    Thank you for building this tutorial. What about the RAG preprocessing stage: cleaning data and redacting PII before sending it to the LLM for chunking and chunk enrichment?

    • @avb_fj
      @avb_fj  1 month ago +1

      If you don't want to use an LLM to do PII redaction, I'd suggest looking into services like Google DLP, which can do this for you through an API call.
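      For reference, a rough sketch of what that API call can look like with the google-cloud-dlp Python client (the project ID and info types are placeholders; adjust them to your data):
        from google.cloud import dlp_v2
        def redact_pii(text: str, project_id: str) -> str:
            """De-identify common PII in text before it is sent to an LLM."""
            client = dlp_v2.DlpServiceClient()
            response = client.deidentify_content(
                request={
                    "parent": f"projects/{project_id}/locations/global",
                    # Which kinds of sensitive data to look for.
                    "inspect_config": {"info_types": [{"name": "PERSON_NAME"}, {"name": "EMAIL_ADDRESS"}, {"name": "PHONE_NUMBER"}]},
                    # Replace each finding with its info type, e.g. "[EMAIL_ADDRESS]".
                    "deidentify_config": {"info_type_transformations": {"transformations": [
                        {"primitive_transformation": {"replace_with_info_type_config": {}}}
                    ]}},
                    "item": {"value": text},
                }
            )
            return response.item.value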

    • @jaggyjut
      @jaggyjut 1 month ago

      @avb_fj From what I know, DLP (data loss prevention) solutions are meant to block PII or sensitive info altogether. I was wondering if there is a library or cloud solution that redacts sensitive information while extracting data from documents.

  • @ashadqureshi4412
    @ashadqureshi4412 5 months ago +1

    Thanks man!!!

  • @VedantiSaxena
    @VedantiSaxena 4 months ago +1

    Hi! I am trying to use an LLM (Mistral-Nemo) for sentiment analysis. The issue I'm facing is that for the same input text, it returns different responses. Sometimes it identifies the sentiment as positive, and other times as negative. I have set the temperature to 0, but this hasn't resolved the problem. Can using DSPy help solve this inconsistency, or is there another solution?
    Also, great video with a precise and crisp explanation! Kudos 👏

    • @avb_fj
      @avb_fj  4 months ago

      That's indeed strange. Setting the temperature to zero and keeping the entire prompt the same generally returns the same output, because the LLM chooses the next tokens greedily. When I say "entire prompt", that includes the instructions as well as the input question. I don't know the implementation details of Mistral-Nemo, but if it still samples tokens with t=0 during decoding, then I'm afraid we can't do much to make it deterministic. Again, I'm not sure; you may want to test with other LLMs once.
      DSPy might help, you could try it. Note that DSPy by default caches the results of past prompts in the session, basically reusing the cached outputs when the same prompt is re-run. This means that to test the consistency correctly, you must first turn caching off. (Ctrl+F "cache" here: dspy-docs.vercel.app/docs/faqs)
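      Roughly what I mean, as a sketch (assuming the DSP_CACHEBOOL environment variable described in that FAQ still controls caching in your DSPy version; the model name is just an example):
        import os
        os.environ["DSP_CACHEBOOL"] = "false"  # must be set before importing dspy (see the FAQ); newer releases also accept dspy.LM(..., cache=False)
        import dspy
        dspy.configure(lm=dspy.LM("openai/gpt-4o-mini", temperature=0.0))  # swap in Mistral-Nemo via your provider
        classify = dspy.Predict("text -> sentiment")
        for _ in range(3):
            # With caching off, identical calls actually hit the LLM each time,
            # so you can check whether temperature=0 really gives stable outputs.
            print(classify(text="The battery life is disappointing.").sentiment)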

    • @VedantiSaxena
      @VedantiSaxena 4 months ago

      @@avb_fj I guess trying DSPy with a few-shot setup might help get consistent results. Thank you so much though!

  • @boaz9798
    @boaz9798 5 months ago +2

    Excellent video. Thank you. Can I grab the resulting prompt? I know it is supposedly a new paradigm which abstracts it away, but some may still want to revert back to using a simple prompt in prod post-optimization.

    • @avb_fj
      @avb_fj  5 months ago

      Yes, you can use the inspect_history() function, as shown in the video (around 4:30), to check out all the previous prompts run by a module.
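      A minimal sketch (in recent DSPy versions this is exposed as dspy.inspect_history(); older versions put it on the LM object, e.g. lm.inspect_history(n=1)):
        import dspy
        dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))
        qa = dspy.ChainOfThought("question -> answer")
        qa(question="Who assisted Mario Götze's goal in the 2014 World Cup final?")
        # Prints the full prompt and completion of the last call the module made,
        # which you can copy out and reuse as a plain prompt in prod if you prefer.
        dspy.inspect_history(n=1)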

  • @ritikatanwar3013
    @ritikatanwar3013 2 months ago

    Thank you for making this video. This has been a great hands-on experience learning DSPy.
    What are your thoughts on this: for building AI agents or more robust prompt programming, what other frameworks can be used?

    • @avb_fj
      @avb_fj  2 months ago +1

      My current go-to framework is definitely DSPy, although it does have its fair share of issues (like no async programming support, continuous changes, etc.). There are a bunch of frameworks that try to do prompt tuning - I have played around with TextGrad, which uses feedback loops to improve LLM-generated content/prompts. Many people swear by LangChain, although I personally haven't used it much. Langfuse is a good tool for maintaining prompts and tracking usage/sessions, etc. - unnecessary for small-scale apps. I have also used litellm, which is a good solution for load-balancing if you are using multiple different LLM providers in your application.
      It might sound funny, but the most useful framework I have used for prompting is good old function composition in Python with some Jinja2 templating. :)
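      In case it helps, a toy illustration of that last point (the templates and the llm callable are made up; plug in whatever client you use):
        from jinja2 import Template
        SUMMARIZE = Template(
            "You are a careful research assistant.\n"
            "Summarize the following abstract in {{ n_sentences }} sentences:\n\n{{ abstract }}"
        )
        CRITIQUE = Template(
            "Point out anything in this summary that is not supported by the abstract.\n\n"
            "Abstract:\n{{ abstract }}\n\nSummary:\n{{ summary }}"
        )
        def summarize_then_critique(llm, abstract: str) -> str:
            # Plain function composition: the output of one prompt feeds the next.
            summary = llm(SUMMARIZE.render(abstract=abstract, n_sentences=3))
            return llm(CRITIQUE.render(abstract=abstract, summary=summary))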

  • @vineetsuvarna8185
    @vineetsuvarna8185 5 months ago

    Awesome! Can you please share the Colab link for the examples shown in the video?

  • @shinigamiryuk4183
    @shinigamiryuk4183 5 months ago

    Pretty Nice explanation

    • @avb_fj
      @avb_fj  4 months ago

      Nice username 🍎

  • @vaioslaschos
    @vaioslaschos 4 months ago

    Really nice content. I liked and subscribed :-). Is there something that can be done easily with DSPy that can't be done with LangChain?

  • @daniloMgiomo
    @daniloMgiomo 5 months ago

    Awesome content! Could you please try TextGrad? The Stanford guys released a paper about it in July.

    • @avb_fj
      @avb_fj  5 months ago

      Thanks for the suggestion! Sounds like a good idea for a future video.

  • @zappist751
    @zappist751 4 months ago

    GOAT

  • @haralc
    @haralc 5 months ago

    I just tried it. However, it failed on the first try of the "basic" stuff. With GPT-4, BasicQA keeps returning "Question: ... Answer: ...", but I only need the answer itself, not the whole "Question: ... Answer: ..." block. So which part is it where I don't have to worry about prompting?

  • @lakshaynz
    @lakshaynz 5 months ago

    Thank you:)

  • @aillminsimplelanguage
    @aillminsimplelanguage 5 months ago

    Great video! Can you also please share the notebook with this code? It would help us do some hands-on practice ourselves. Thanks!

    • @avb_fj
      @avb_fj  5 months ago

      Thanks! As mentioned in the video, currently all the code produced in the channel is for Patreon/Channel members.

  • @prashlovessamosa
    @prashlovessamosa 5 months ago +1

    Excellent 👌

  • @ProgrammerRajaa
    @ProgrammerRajaa 5 months ago

    Great explanation.
    Can I have a link to the notebook that you showed in the video?

    • @avb_fj
      @avb_fj  5 months ago

      As I mentioned in the video and on the description, the code is currently members/patrons only.

  • @JeevaPadmanaban
    @JeevaPadmanaban 5 months ago

    I have a question: can I use models other than OpenAI's? I'm running my own models on DeepInfra.

    • @avb_fj
      @avb_fj  5 months ago +1

      Haven't tested this myself, but I assume that you can call DeepInfra models using the OpenAI APIs by changing the base_url parameter.
      deepinfra.com/docs/openai_api
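      Untested sketch of what I mean, assuming a recent DSPy version that routes OpenAI-compatible endpoints through api_base (the model name is just an example from their catalog):
        import dspy
        lm = dspy.LM(
            "openai/meta-llama/Meta-Llama-3.1-70B-Instruct",  # "openai/" prefix = OpenAI-compatible API
            api_base="https://api.deepinfra.com/v1/openai",
            api_key="YOUR_DEEPINFRA_API_KEY",
        )
        dspy.configure(lm=lm)
        print(dspy.Predict("question -> answer")(question="What is DSPy?").answer)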

    • @JeevaPadmanaban
      @JeevaPadmanaban 5 months ago

      @@avb_fj Thanks man ✨

    • @JeevaPadmanaban
      @JeevaPadmanaban 5 months ago

      @@avb_fj Also, another question: can I use other vector DBs, like Astra DB, for RAG?
      Thanks.

    • @avb_fj
      @avb_fj  5 months ago +1

      @@JeevaPadmanaban check out the supported ones here:
      dspy-docs.vercel.app/docs/category/retrieval-model-clients

  • @artmusic6937
    @artmusic6937 5 months ago

    If the 60M-parameter model gets 50% accuracy, how can you improve this without using a bigger model? Because if you use a bigger model, then it actually just memorizes the data better. So it is basically overfitting, isn't it?

  • @Anonymous-lw1zy
    @Anonymous-lw1zy 5 months ago +2

    Great video.
    However, DSPy seems to be very fragile; it breaks easily.
    E.g., at 11:00 you ask "What is the capital of the birth state of the person who provided the assist for Mario Gotze's in the football World Cup finals in 2014?" and it answers 'Mainz', which you said is correct.
    But if I make the question slightly different by adding "goal" after "Gotze's", so the question is now "What is the capital of the birth state of the person who provided the assist for Mario Gotze's goal in the football World Cup finals in 2014?", it answers "Research".

    • @avb_fj
      @avb_fj  5 months ago +1

      In general it’s the underlying LLM that could be “fragile”. Remember that DSPy is just converting your program into a prompt and sending to the LLM. The LLM generates the answer which depends on input prompts and temperature settings. Either way, as long as the concepts make sense, don’t worry about replicating the test cases shown in the video!

    • @RazorCXTechnologies
      @RazorCXTechnologies 5 months ago +1

      @@avb_fj I tested your exact code with and without "goal". It responded correctly to both prompts using a local Ollama model: gemma2:27b. DSPy seems to work well with local models that are >20B parameters. Smaller local models, especially Mistral-Nemo:12b, work in some cases but tend to fail with multi-step (ChainOfThought) modules.

    • @TheGenerationGapPodcast
      @TheGenerationGapPodcast 7 days ago

      Which LLM is not brittle in some context? LLMs, like people, will never be perfect in all cases. But this is the worst they will ever be; from here they will only get better and better.

  • @Karan-Tak
    @Karan-Tak 3 months ago

    Can we get an optimized prompt out of DSPy, like we can with TextGrad? If yes, how can we do it?

    • @avb_fj
      @avb_fj  3 months ago

      I believe so. It's on my bucket list of things to try out one day. Look into this page: dspy-docs.vercel.app/docs/building-blocks/optimizers
      and look for the COPRO and MIPROv2 optimizers.
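      Rough sketch of the idea (the metric and data are placeholders, you'd want a reasonably sized trainset, and the exact optimizer arguments vary between DSPy releases):
        import dspy
        from dspy.teleprompt import MIPROv2
        dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))
        program = dspy.ChainOfThought("question -> answer")
        def exact_match(example, pred, trace=None):
            return example.answer.lower() == pred.answer.lower()
        trainset = [
            dspy.Example(question="What is 2+2?", answer="4").with_inputs("question"),
            # ... more labeled examples ...
        ]
        optimized = MIPROv2(metric=exact_match, auto="light").compile(program, trainset=trainset)
        # The tuned instructions/demos live inside the compiled program; the easiest way
        # to see the final prompt text is to run it once and inspect the history.
        optimized(question="What is 5+7?")
        dspy.inspect_history(n=1)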

    • @TheGenerationGapPodcast
      @TheGenerationGapPodcast 7 days ago

      You are asking if DSPy is TextGrad? They are solving different problems. Decide which is best for your use case.

  • @Anonymous-lw1zy
    @Anonymous-lw1zy 5 months ago

    Also at 12:42 I am getting:
    answer='Mario Götze' confidence=0.9
    not Andre Schurrle
    I quadruple-checked that my code is the same as yours.

  • @ehsanmousavi7149
    @ehsanmousavi7149 21 days ago

    Can I have the code shown in this video?

    • @avb_fj
      @avb_fj  21 days ago

      Hello. The code is shared on my Patreon.

  • @leeharrold98
    @leeharrold98 3 months ago

    Champ

  • @AbhijeetSingh-lf3uu
    @AbhijeetSingh-lf3uu 4 months ago

    you didn't show us the goal :(

    • @avb_fj
      @avb_fj  4 months ago

      Haha, I had a 5-second clip of it before, but FIFA copyright-claimed it so I had to remove it.

    • @AbhijeetSingh-lf3uu
      @AbhijeetSingh-lf3uu 4 months ago

      @@avb_fj ah no worries, but thanks so much for teaching us I really appreciate it.

  • @xtema1624
    @xtema1624 5 months ago

    Please share the Colab from this video, not the DSPy example.

  • @jjolla6391
    @jjolla6391 1 month ago

    Not sure how an agent is going to know which DSPy modules it should invoke, since an agent doesn't know what the question is about. What you have shown is you (a human) reasoning about which libraries to use... after a few retries, no less.

    • @avb_fj
      @avb_fj  1 month ago

      That is right. The human developer will write the pipeline once. Once it is written, there is no more human involvement, it’s just a series of instructions that get executed in a sequence

  • @haralc
    @haralc 5 months ago

    Btw, I'm not a football fan, and although you mentioned in the note that it is okay .... no, it's really not ... even when I watched this video a few times, who did what still doesn't register in my brain...

    • @avb_fj
      @avb_fj  5 months ago

      Fair criticism. I regretted using these examples soon after shooting the video. Future tutorials won’t have examples like these.

  • @h3xkore
    @h3xkore 5 months ago

    Was hoping the prompts would be automated away. I feel like you still need to understand prompting well before you can use this :(

  • @gslvqz8812
    @gslvqz8812 8 days ago

    So really, this library is a prompt engineer, nothing more and nothing less. It just structures your prompt and sends it to the LLM in a more organized manner, but that's it!!

  • @orthodox_gentleman
    @orthodox_gentleman 5 months ago

    You do realize that GPT-3.5 Turbo was deprecated, i.e. it no longer exists?

    • @avb_fj
      @avb_fj  5 months ago +1

      Thanks for your comment. The DSPy documentation and official tutorial still use it (links below), and it worked for the examples I was going for in the tutorial. Whether that particular LM is deprecated or not doesn't really matter: you can replace it with whichever model you prefer… the concepts remain the same.
      dspy-docs.vercel.app/docs/tutorials/rag
      github.com/stanfordnlp/dspy/blob/main/intro.ipynb
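      For example, swapping in a current model is a one-line change (a sketch assuming the newer dspy.LM interface; use whichever model you have access to):
        import dspy
        # Everything else in the tutorial stays the same.
        dspy.configure(lm=dspy.LM("openai/gpt-4o-mini", max_tokens=500))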

  • @pensiveintrovert4318
    @pensiveintrovert4318 1 month ago

    This looks like a total mess. You are basically forced to hack up more and more code for every problem. How does this save time? It does not seem general enough to be useful.

    • @avb_fj
      @avb_fj  1 month ago +1

      To each their own. In the end, you should pick a framework that you are most comfortable with, as long as it works for your project. For me, I find DSPy quite useful, and the PyTorch-like APIs quite intuitive. I'll give my opinions below, but again, you should choose libraries that you find helpful!
      I feel like it's less of a mess than pure prompting methods. Writing long prompts to instruct the LLM to output things in a particular format is not very fun and, in my experience, introduces long-term technical debt into the codebase. It is also annoying to write multi-step prompt chains as pure Python functions, where the output of one module may cause cascading failures in later modules.
      Personally, I like several functionalities of DSPy: mainly the signature stuff, the ability to specify pydantic BaseModels for input/output (small sketch below), the simple Modules API for building multi-step prompt graphs, and the assertion framework for error handling. Specifically, not needing to write custom logic for output validation is a huge win for me. There are some features, like the optimization and fine-tuning stuff, that I honestly don't find sophisticated enough, so I barely interact with them. Another big negative, and a reason I might choose not to use DSPy, is the lack of asynchronous programming support.
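      To make the pydantic point concrete, a minimal sketch (assuming DSPy 2.5+, where Predict parses typed output fields directly; older versions used dspy.TypedPredictor):
        import dspy
        from pydantic import BaseModel
        dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))
        class Verdict(BaseModel):
            label: str         # e.g. "positive" / "negative" / "neutral"
            confidence: float  # 0.0 - 1.0
        class ClassifyReview(dspy.Signature):
            """Classify the sentiment of a product review."""
            review: str = dspy.InputField()
            verdict: Verdict = dspy.OutputField()
        result = dspy.Predict(ClassifyReview)(review="Battery died after two days.")
        print(result.verdict.label, result.verdict.confidence)  # already validated and parsed, no custom parsing code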

    • @KennyPowers-dx3mz
      @KennyPowers-dx3mz 23 days ago +1

      @avb_fj I got a little discouraged by OP's comment, but after reading your reply (which I think deserves attention) we are back at it, much appreciated. I'd rather have the code; I don't care about the time "loss". Thanks for the tutorial!