Excited to launch our new course catalog!
Use code YouTube20 to get an extra 20% discount when enrolling in our DAIR.AI Academy: dair-ai.thinkific.com/
IMPORTANT: The discount is limited to the first 500 students.
00:41 🔑 Prompt engineering involves using instructions and context to leverage language models effectively for various applications beyond just language tasks.
02:18 🔍 Prompt engineering is crucial for understanding language model capabilities, applicable in research and industry, as highlighted by job postings emphasizing this skill.
03:37 🛠 Components of a prompt include instructions, context, input data, and an output indicator, all of which affect the model's response; sampling settings like temperature and top-p influence output diversity.
05:45 📚 Prompt engineering applies to various tasks like text summarization, question answering, text classification, role playing, code generation, and reasoning, showcasing diverse applications.
09:57 💻 Language models, like OpenAI's, exhibit impressive code generation abilities, handling queries from natural language prompts for tasks such as SQL query generation.
10:51 🤔 While language models can reason to an extent, specific prompts and techniques like Chain of Thought prompting aid in improving their reasoning capabilities, although it's an evolving field.
11:19 📝 The lecture delves into code examples and tools, showcasing how prompt engineering techniques are applied practically, using OpenAI's Python client and other tools.
19:34 🚀 Advanced techniques like few-shot prompting, Chain of Thought (CoT) prompting, and zero-shot CoT prompting boost performance on complex tasks by providing demonstrations and step-by-step reasoning instructions to the language model.
23:13 🌟 Prompt engineering is an exciting space where crafting clever prompts empowers language models, allowing for powerful capabilities and advancements in various applications.
23:27 🧠 Prompt engineering aims to improve language models for complex reasoning tasks, as these models aren't naturally adept at such tasks.
24:22 🗳 Self-consistency in prompting involves generating multiple diverse reasoning paths and selecting the most consistent answer, boosting performance on tasks like arithmetic and commonsense reasoning.
25:16 🔍 Demonstrating steps to solve problems within prompts guides models to produce correct answers consistently.
26:37 📚 Using language models to generate knowledge for specific tasks has emerged as a promising technique, even without external sources or APIs.
30:15 🐍 Program-aided language models use interpreters like Python to generate intermediate reasoning steps, enhancing complex problem-solving.
32:35 🔄 The ReAct framework interleaves language model reasoning traces with actions that consult external sources, supporting action plans and task handling.
35:20 📊 Tools and platforms for prompt engineering offer capabilities for development, evaluation, versioning, and deployment of prompts.
40:08 🧰 Various tools allow combining language models with external sources or APIs for sophisticated applications, augmenting the generation process.
44:45 📝 Leveraging tools like LangChain allows building on language models by chaining calls and augmenting them with external data when generating responses.
46:22 🧠 Prompt engineering involves combining ReAct-style actions with language models, showcasing the thought, action, and observation sequence for varied tasks.
47:53 🛠 Updated and accurate information from external sources is crucial for prompt engineering applications, highlighting the importance of up-to-date data stores.
48:34 📊 Data augmentation in prompt engineering involves reliance on external sources and tools to generate varied content, requiring data preparation and formatting.
50:34 💬 Prompt engineering explores clever problem-solving techniques to engage language models effectively, like converting questions into different languages while maintaining context and sources.
52:40 ⚠ Model safety is a critical aspect of prompt engineering, focusing on understanding and mitigating language model limitations, biases, and vulnerabilities, including techniques like prompt injection testing to identify system weaknesses.
55:12 🔒 Potential vulnerabilities like prompt injection, prompt leaking, and jailbreaking highlight risks of manipulating language model outputs, emphasizing the importance of reinforcing system safety measures.
58:30 🎯 Reinforcement Learning from Human Feedback (RLHF) aims to train language models to meet human preferences, emphasizing the relevance of high-quality prompt datasets in this training process.
01:00:06 🌐 Prompt engineering facilitates the integration of external sources into language models, enabling diverse reasoning capabilities and applications, particularly useful for scientific tasks requiring factual references.
01:01:27 🔄 Understanding emerging language model capabilities, such as thought prompting, multi-modality, and graph data handling, is a crucial area for future exploration and development in AI research.
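As a concrete illustration of the techniques summarized above (11:19, 19:34), here is a minimal sketch of zero-shot Chain of Thought prompting in Python. The model name and the exact client call are assumptions for illustration and may differ from what the lecture demonstrates.

```python
# Minimal zero-shot Chain-of-Thought sketch. The OpenAI client usage shown
# in the comments is an assumption; check the current SDK documentation.

def make_zero_shot_cot_prompt(question: str) -> str:
    """Append the zero-shot CoT trigger phrase to a question."""
    return f"{question}\n\nLet's think step by step."

prompt = make_zero_shot_cot_prompt(
    "When I was 6, my sister was half my age. Now I am 70. How old is my sister?"
)
print(prompt)

# Sending it to a model would look roughly like this (requires an API key):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4o-mini",  # hypothetical model choice
#     messages=[{"role": "user", "content": prompt}],
#     temperature=0,  # low temperature for more deterministic reasoning
# )
# print(resp.choices[0].message.content)
```

The trigger phrase is the whole technique: it nudges the model to emit intermediate reasoning steps before the final answer, which tends to improve accuracy on arithmetic and logic problems.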
I am a complete alien on this topic, yet I can see the value of your videos. Great job bro 👏
My obsidian notebook on one monitor, this video on the other. Taking notes, thinking things through. Tip of the hat for this fine video.
This is going to be a widespread field and one of the hottest areas of interest for businesses.
And he who masters this will have opportunities coming out their ears..
he/she :)
This is the best overview of prompt engineering that I have seen! Thank you!
25:29, the text is, 'She bought 5 bagels for $3 each. This means she spent 5'. Apart from that - great video, thanks!
It's very beneficial for learning prompting for beginners. Thank you for your effort.
My friend, where's the flashy thumbnail with screaming/amazed people? Where's the promise of making $500,000 overnight? WHERE'S YOUR MATRIX SCREENSAVER?
What's that? You don't feel the need to insult your viewers, yourself, or the science by promising the impossible and overemploying hyperbole?
Whatever dude. You just keep on producing the absolute best video I've seen yet on prompt engineering. See if that gets you some kind of amazing career or something.
By the way, that was all sarcasm. Thank you so much for this video!
Prompt: Instructions, Context, Input data, output Indicator
Tasks:
Question Answering
Text Classification
Code Generation
Summarizing
Role playing
Reasoning
Great Video Elvis👍👍
amazing video and great resources! Thank you Elvis!
Hi Elvis, you made an excellent video with good content
Simply great content, it is sincerely appreciated! Keep up the good work Elvis 💪😎
Excellent video 🎉
In the end, prompting seems to be just a higher-level programming construct, closer to natural everyday language. Precision still matters somewhat to get the most accurate results, but much less so than in your 3GL/4GL languages. Soon, the machine will be so good at understanding context with additional input sensors that it'll almost feel like you can create with thought alone. Exciting times we're living through.
We could get our 4GL languages closer to ChatGPT and AutoGPT by having a huge amount of defaults and programming with relations & constraints, although I guess from now on "our code" will just be a reference that is tweaked.
Great video - Thank you for putting this together. Quick question: is Data Augmented Generation the same as Retrieval Augmented Generation? They sure seem very similar in concept and implementation.
Just found out there is prompt engineering; thanks for the lecture.
Thank you Elvis, this one is very useful.
I need this for generating long blog posts.
Any suggestions regarding this use case?
What needs to be done for generating long blog posts?
Thanks for putting this together Elvis.
Thank you soo much for putting all of this together!!
Thank you for sharing this video; it's really great learning. Can you elaborate on prompt engineering for the Multiple Choice Question Answering task?
One of the best lectures I have ever attended.
Fantastic info. Thank you for your hard work.
Great lecture - Thank you.
can you explain a bit more about the future directions?
Thank you so much. Good video.
Awesome Lecture thank you a lot, can you mention some of the open source large language models which have a decent output and we can make a lot of experiments on other than OpenAI models?
Hi Youssef. This is an important question. I am doing a bit of research on this as I haven't found an open-source model that shows similar capabilities to GPT-3 and can work with the prompting techniques I am covering here. Have you tried nat.dev/? It was free but now you need to top up to use it. I saw some open-source models in the list which should allow for quick experimentation.
LLaMA and Alpaca as well.
8:22 role playing
8:33 code prompting
Interesting. Good job!
Thanks for this. Brilliant.
Thanks for the detailed presentation, really helpful :)
Great content, excellent intro.
What an amazing lecture!
Awesome
Nice job
Nice content 👍
I have an interview in this field soon. I have no idea what they will ask; can you please give some hints on what they could ask, since I am a fresher?
Good Content.👌✌
Amazing lecture, well done :)
But could you please explain why we cannot directly use the Playground instead, where we can give the prompt in natural language and get the response, without using the Python code?
Great content!!! - thank you
Is there any type of certification available?
I have partnered with Sphere to deliver a course that will include a certificate. www.getsphere.com/cohorts/prompt-engineering-for-llms
For now, this is the best option as the course will cover all the topics in this lecture and more hands-on exercises.
Hi, we want to add your video link to our website.
Nice!
Hi Elvis, I would like to take your course but am not fluent in Python. How do you think I should prepare?
Hi Kyle. Basic knowledge of Python should be enough for this course. It would be good if you are familiar with the basic topics in this book: greenteapress.com/wp/think-python-2e/
We won't be needing advanced Python knowledge as the goal of the course will be to showcase prompt engineering with existing techniques and tools.
@@elvissaravia Enrolled! Thanks for the resource. I'm excited.
Cool
You keep mentioning a mole. Is that a reference or a nickname for AI?
Thanks, I will make notes on this in Bahasa Indonesia.
Please suggest an approach for generating blog posts!
Haven't really thought about this application, but I think it shouldn't be too hard. It might require a bit of instructing the model on the format (i.e., title, subtitle, and so on) and stating what exactly you would like to generate for each subsection. Be advised that these systems do tend to generate what looks like coherent text but might be inaccurate. There are ways to make the generation more reliable, like relying on external sources, knowledge bases, etc. These all depend on the application. Generating something like a code-related tutorial/blog post might be an interesting experiment. Let me try something and add it to the guide if I get interesting results.
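One way to do the "instructing the model on the format" step described above is to build the prompt programmatically. Here is a minimal sketch in Python; the section names and wording are purely illustrative assumptions, not anything from the lecture.

```python
# Hypothetical prompt builder for a long blog post: spell out the structure
# (title, sections, per-section instructions) so the model knows exactly
# what to generate for each part.

def build_blog_prompt(topic: str, sections: list[str]) -> str:
    lines = [
        f"Write a detailed blog post about: {topic}",
        "Follow this exact structure:",
        "",
        f"# Title: a catchy title about {topic}",
    ]
    for i, section in enumerate(sections, start=1):
        lines.append(f"## Section {i}: {section}")
        lines.append(f"   Write 2-3 paragraphs covering '{section}'.")
    lines.append("")
    lines.append("Cite sources where possible and avoid making up facts.")
    return "\n".join(lines)

prompt = build_blog_prompt(
    "prompt engineering",
    ["What it is", "Core techniques", "Common pitfalls"],
)
print(prompt)
```

Since long outputs can exceed a single completion, one practical variant is to call the model once per section with the same outline in context and stitch the results together.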
Can you please make a clip on exactly how to best use the openAI playground?
I think this could be interesting to showcase. I think it's important to know the settings well to make the most use of the playground. I follow the documentation for this.
I got davinci 3 to solve the "When I was 6, my sister was half my age..." problem with the following one-shot prompt:
When I was 6, my sister was half my age. Now I am 70, how old is my sister? Let's think step by step and break down the problem in parts.
By adding "and break down the problem in parts", the AI was able to give me the right answer. I'm guessing this can be used for deeper one-shot prompts.
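The arithmetic the model has to reproduce here is simple enough to check directly. A quick sanity check in Python (the constant 3-year gap follows from "half my age at 6"):

```python
# Sanity check for the age riddle: at age 6 the sister was half that (3),
# so the age gap stays a constant 3 years for life.

def sister_age(my_age_then: int, my_age_now: int) -> int:
    gap = my_age_then - my_age_then // 2  # 6 - 3 = 3
    return my_age_now - gap

print(sister_age(6, 70))  # 67
```

The riddle trips models up because the tempting wrong answer is 35 (halving the current age instead of preserving the fixed age gap), which is exactly the kind of error step-by-step prompting helps avoid.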
Well, that sure didn't work in ChatGPT. It produces the giant wall of text that is avoided with a few-shot chain-of-thought prompt.
@@MajorBorris In the end, few-shot examples become substituted with input data once you start fine-tuning the models, so I think it is better to figure out zero-shot ways of prompting to create larger-scale applications.
Plus, my prompt was for davinci, not ChatGPT (fine-tuning ChatGPT isn't possible yet).
@@MajorBorris Also, few-shot prompting eats up a lot of the available tokens, so when you need to generate large chunks of text, providing multiple examples, or even one, can eat up all your available tokens and leave you with a truncated response.
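The token trade-off described above can be made concrete with back-of-the-envelope numbers. This sketch uses the rough ~4 characters per token heuristic; the window size and example texts are illustrative assumptions, and a real tokenizer (e.g. tiktoken) would give exact counts.

```python
# Rough illustration of how few-shot demonstrations eat into the completion
# budget. approx_tokens() uses the crude ~4 chars/token heuristic.

def approx_tokens(text: str) -> int:
    return max(1, len(text) // 4)

CONTEXT_WINDOW = 4096  # e.g. the window of an older davinci-class model

instruction = "Summarize the following article in three paragraphs."
few_shot_examples = ["Example article... Example summary..."] * 5  # 5 demos

prompt_tokens = approx_tokens(instruction) + sum(
    approx_tokens(ex) for ex in few_shot_examples
)
# Whatever the prompt consumes is no longer available for the completion.
completion_budget = CONTEXT_WINDOW - prompt_tokens
print(prompt_tokens, completion_budget)
```

With realistic multi-paragraph demonstrations instead of the tiny placeholders here, a handful of examples can consume thousands of tokens, which is why zero-shot prompting is attractive when the output itself needs to be long.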
11:11 Demo
Prompt: When I was 6, my sister was half my age. I am 70 now. How old is my sister?
Chat GPT Answer:
If you were 6 years old when your sister was half your age, that means she was 3 years old at that time.
Now that you are 70 years old, your sister would be 67 years old, assuming she is still alive.
Nice!! Thanks!
The use of "somehow" in all these cases is quite noticeable! AI is becoming so complex.
Could you please add accurate subtitles for this video?
The example of Olivia at 25:51 is inconsistent. She has $23 left.
So you can now become an engineer without knowing math or code?
The trick is just to add "engineer" to whatever you call yourself, create a YouTube video, and bam, you're in. I'm a YouTube comment engineer... right... now.
@jason white Me too, I'm a YouTube comment engineer with ChatGPT capabilities. Would you like Python code for this statement?
yes
Absolutely. And anyone with a phone is now a photographer, the DoorDash app makes a delivery driver, and access to Airbnb makes a hotelier. I love the no-code movement. At the end of the day, it's not what one calls themselves, but what they can deliver to the client.
@@pauls7534 I am a YouTube Comment Brain Surgeon, and my expert engineering opinion is that you need back surgery.
In computer science research, which encompasses fields such as computer science, computer engineering, and artificial intelligence, ethical standards have been neglected for at least two decades. A recurring problem is the renaming of well-established concepts without properly acknowledging their origins. For example, "prompt engineering" is simply a renaming of the concept of relevance feedback, but existing work on relevance feedback often goes unnoticed. This trend is pervasive: in deep learning, research unrelated to deep learning is frequently ignored and thus avoids comparison with lightweight or frugal methods. Random projection has been renamed compressive sensing. Even basic concepts like the dot product, correlation, and convolution have been renamed to create an illusion of innovation. The examples are numerous.
Where are the intellectuals whose responsibility it is to denounce such abuses?
I'm curious what you think the word "engineering" actually means? Human to AI: Is this a cat? AI to Human: No, it's an engineer, a prompt engineer.
It's hard to understand something you can't see, but IT engineers build things that would take humans millions of hours and billions of dollars to complete. Prompt engineers understand how to communicate with and program large language models. Many consider it an art, since few IT people are actually good at it.
Should call it "Bot tickling"
Ask a dictionary
@@dowlrod66 I did, but it didn't answer...looks like I'll have to engineer a prompt for ChatGPT
@@MaxWinner careful, that could be dangerous.
Please put subtitles in Portuguese; in Brazil not all people speak English.
Thank you!