Omni Engineer sounds like a dream tool for developers, generating applications in seconds! With Ollama support, it's taking coding efficiency to the next level. This kind of innovation is what makes the tech world so exciting right now.
Browser automation can be a game-changer, and Claude Dev seems to make it easier than ever. Whether you're automating tasks for efficiency or exploring new ways to enhance your workflow, this tool is definitely worth checking out.
Cool presentation. Also great to see the mention of our Prompt Engineering Guide. :)
Amazing tutorials you're creating :)
🎯 Key points for quick navigation:
00:00 *Introduction to DSPy and Prompt Engineering*
- The speaker introduces themselves and the topic,
- Explains the inspiration behind creating the video,
- Introduces the concept of prompt engineering and the importance of effective prompt strategies.
[:05](ruclips.net/video/eAZ2LtJD5k/видео.html) 🔄 Popular Prompt Engineering Techniques
- Mentions the integration of these techniques in tools like ChatGPT,
- Explores other strategies such as Retrieval-Augmented Generation (RAG).
04:21 *🛠️ Workflow in Prompt Engineering*
- Details the typical workflow for creating a RAG application,
- Describes the human role in generating inputs, crafting prompts, and evaluating outputs,
- Discusses the iterative nature of prompt tuning and the challenges faced.
07:02 *🧠 DSPy Optimization Mechanism*
- Compares DSPy optimization to classical machine learning processes,
- Explains how DSPy allows for automated tuning without manual prompt adjustments,
- Highlights the value of optimizing program architecture over manual prompt engineering.
[14:26](ruclips.net/video/eAZ2LtJ6D5/видео.html) 🛠️ Setting Up DSPy and Initial Testing
- Walks through setting up dependencies for DSPy,
- Discusses configuring LLMs and retrieval mechanisms,
- Provides a brief introduction to the benchmark dataset used for evaluations.
17:23 *🔧 Building a DSPy Program*
- Outlines steps to create a simple DSPy program,
- Describes how DSPy modules are structured similarly to PyTorch,
- Walks through defining input and output fields for prompt generation.
20:38 *🚀 Running the DSPy Program*
- Demonstrates the initial DSPy program with a sample query,
- Shows how the program interacts with the configured retrieval and LLM components,
- Provides an example of the system's output performance before optimization.
21:58 *🧪 Optimizing DSPy Programs*
- Introduces the process for optimizing DSPy programs,
- Describes setting up visualization of optimization progress,
- Lays groundwork for defining evaluation metrics and measuring program performance improvements.
23:06 *Introduction to Few-Shot and Chain of Thought Prompting*
- Discusses the use of few-shot prompting and Chain of Thought methods for improving model responses,
- Explains the process of generating reasoning as a step before answering questions.
25:08 *🧪 Optimizer Results*
- Demonstrates running an optimizer with few-shot examples,
- Reports on accuracy improvements and cost considerations.
[29:](ruclips.net/video/eAZ2LtJ6Dk/видео.html) 💾 Saving and Evaluating Optimized Prompts
- How to save the optimized prompts in JSON format,
- Overview of the improvement from baseline accuracy to enhanced accuracy using the optimizer,
- Basics of inspecting the JSON file containing the prompts.
[30:24](ruclips.net/video/eAZ2LtJ6Dk/видео.html) 📈 Advanced Optimization Techniques
- Introduction to the MIPRO optimizer and its capabilities,
- Explanation of using GPT-4 as a teacher model to improve the performance of GPT-3.5,
- Discussion of trying different prompts and the role of signatures in optimizing performance.
35:02 *🚀 Directions and Closing Remarks*
- Emphasizes the potential of and progress in automated prompt engineering,
- Information about the development of a DSPy Builder for a more accessible interface,
- Invitation to try out the new tools and participate in the beta testing.
Made with HARPA AI
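The "Saving and Evaluating Optimized Prompts" chapter above describes persisting a compiled program's prompts to JSON and inspecting the file. A rough, framework-free sketch of that idea (file name and demo fields are hypothetical; DSPy's own `save()` writes its own schema):

```python
import json

# Hypothetical bootstrapped few-shot demos, of the kind an optimizer collects.
demos = [
    {
        "question": "Who wrote the novel Dune?",
        "reasoning": "Dune is a 1965 science-fiction novel by Frank Herbert.",
        "answer": "Frank Herbert",
    }
]

# Persist the optimized prompt state so it can be reloaded
# without re-running the optimizer (and re-paying for LLM calls).
with open("compiled_program.json", "w") as f:
    json.dump({"demos": demos}, f, indent=2)

# Inspecting the JSON file later:
with open("compiled_program.json") as f:
    restored = json.load(f)
print(restored["demos"][0]["answer"])  # Frank Herbert
```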
Hi, everywhere there is this same notebook using HotpotQA.
An actual use case on any other dataset would better explain the use of it.
Great! Any simple UI for this?
Great stuff. Can you help me understand where the context comes from during training? The dataset has only two fields: question and answer.
Thanks!! The contexts used during training actually come from the retriever itself (ColBERT wiki abstracts), not from the dataset. This is one of the interesting things about DSPy optimizers: they can bootstrap input data that the training set does not have. For example, it also bootstraps the `reasoning` field used by ChainOfThought.
The way it works is that it attempts to generate those fields (or, in the case of contexts, retrieve them from ColBERT) and checks whether they pass the metric; if so, they are kept as successfully bootstrapped examples.
In this case we could even use the `gold_titles` field, which is available in the dataset, in the metric to check that the correct passages were retrieved, but we didn't do that in this notebook.
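To make the bootstrapping idea above concrete, here is a toy, plain-Python sketch (all names are hypothetical; DSPy's optimizers do this with real LLM and retriever calls rather than the fake program below): run the program on each training example to generate the missing fields, then keep only the examples whose outputs pass the metric.

```python
from types import SimpleNamespace

def gold_passages_retrieved(example, pred):
    """Metric sketch: did we retrieve every gold passage title?

    Assumes each retrieved passage is a string formatted "Title | body",
    as in the ColBERT wiki-abstracts index used in the notebook.
    """
    found_titles = {passage.split(" | ")[0] for passage in pred.context}
    return set(example.gold_titles).issubset(found_titles)

def bootstrap(trainset, run_program, metric):
    """Keep only examples whose generated fields pass the metric."""
    demos = []
    for example in trainset:
        pred = run_program(example)        # generates context, reasoning, answer
        if metric(example, pred):
            demos.append((example, pred))  # successfully bootstrapped demo
    return demos

# Tiny fake "program" standing in for retriever + ChainOfThought:
def fake_program(example):
    return SimpleNamespace(
        context=["Alan Turing | British mathematician and computer scientist."],
        reasoning="The passage says Turing was a mathematician.",
        answer="Alan Turing",
    )

trainset = [
    SimpleNamespace(question="Who broke Enigma?", gold_titles=["Alan Turing"]),
    SimpleNamespace(question="Who wrote Dune?", gold_titles=["Frank Herbert"]),
]
demos = bootstrap(trainset, fake_program, gold_passages_retrieved)
print(len(demos))  # 1: only the first example passes the metric
```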
@@LangWatch Thanks for the explanation. So the metric is used to check whether the given gold title is present in the retrieved context and whether the predicted answer matches the ground truth, and based on the metric score the weight optimization and prompt optimization happen internally, right?
Can I use any other metric, like semantic similarity?
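As far as the API goes, a DSPy metric is just a Python callable over an example and a prediction, so a similarity-based metric should be possible. A toy sketch using stdlib `difflib` as a crude stand-in for real embedding-based similarity (function and field names are hypothetical):

```python
from difflib import SequenceMatcher
from types import SimpleNamespace

def semantic_answer_match(example, pred, threshold=0.8):
    """Similarity-based metric sketch.

    A real version would compare embeddings; SequenceMatcher only measures
    character-level overlap, which is a rough proxy for demonstration.
    """
    ratio = SequenceMatcher(
        None, example.answer.lower(), pred.answer.lower()
    ).ratio()
    return ratio >= threshold

example = SimpleNamespace(answer="Frank Herbert")
pred = SimpleNamespace(answer="frank herbert")
print(semantic_answer_match(example, pred))  # True: identical after lowercasing
```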
Thank you for the presentation, it was very interesting. I've successfully compiled my first DSPy program to structure text into an XML representation.
Regarding the UI tool you're developing, I could not access the form; I receive an "access denied" message.
Hey @NasreddineCheniki. That's very cool to hear! Oops, a little mistake on our side. You should be able to access the form right now.
Also, you can already sign up for the visualization tool via: app.langwatch.ai
Happy to support you on the integration itself.