Discover Prompt Engineering | Google AI Essentials

  • Published: 26 Dec 2024

Comments • 70

  • @deekana
    @deekana 5 months ago +11

    The best presenter I have ever listened to. Kudos! Perfect and purposeful use of tone, voice, and body language. Effective delivery. I learned more about presentation skills from this video than about prompt engineering lol. The team also did very well with filming, editing, and directing. Well done!

    • @teenytinytoons
      @teenytinytoons 2 months ago

      Thought I was watching an Apple keynote. He's incredible.

  • @djarshad009
    @djarshad009 4 months ago +7

    One of the best and most informative videos about prompt engineering. The presenter is extremely clear and easy to understand, and his passion for AI comes through strongly in his speech.
    Good job!

  • @AIRecruiterTraining
    @AIRecruiterTraining 4 months ago +10

    Table of Contents
    - [00:00] Introduction to Prompt Engineering
      - Prompt engineering is designing prompts to get the desired output from conversational AI tools.
      - Prompt phrasing can influence the AI model's response, similar to how we use language in daily interactions.
    - [01:09] Motivation for Prompt Engineering
      - Yufeng, a Google engineer, was motivated by the inefficiency of getting useful responses from language models.
      - The goal is to improve the efficiency of conversational AI tools.
    - [03:00] Understanding LLMs for Effective Prompt Engineering
      - Understanding how LLMs work and their limitations is crucial for effective prompt engineering.
      - LLMs are AI models trained on massive amounts of text to identify patterns and generate responses to prompts.
    - [04:01] How LLMs Predict Words and Generate Text
      - LLMs are trained on vast amounts of text data to learn language patterns and predict the next word in a sequence.
      - They analyze probabilities to choose the most likely word based on the context, but may vary in their responses.
    - [04:57] ⚠ Limitations of LLMs: Bias and Unexpected Outputs
      - LLM outputs can be biased due to biases present in their training data.
      - You may not always get the desired output because LLMs rely on statistical analysis of training data.
    - [05:26] ⚠ Limitations of LLMs: Bias and Unexpected Outputs (continued)
      - LLM outputs can be factually inaccurate due to limitations in training data or its analysis methods (hallucinations).
      - Examples of hallucinations include generating incorrect historical data or company information.
    - [06:54] Critical Evaluation of LLM Outputs
      - It's crucial to critically evaluate LLM outputs for factual accuracy, bias, relevance, and information sufficiency.
      - This applies to all LLM outputs, regardless of the task (summarization, marketing ideas, project plans).
    - [08:22] Prompt Engineering for Effective LLM Use
      - The quality of the prompt affects the quality of the LLM output, similar to how high-quality ingredients affect a dish.
      - Prompt engineering involves designing clear, specific prompts with relevant context to get the desired output from an LLM.
    - [10:23] The Importance of Context in Prompt Engineering
      - Providing clear and specific context in prompts is crucial for getting the desired output from LLMs.
      - Example: A prompt about a professional conference will lead to different results than a prompt about a party.
    - [11:25] Prompt Engineering as an Iterative Process
      - Even well-designed prompts may require iteration to achieve the best results from LLMs.
      - The first attempt may not be perfect, so revising the prompt based on the initial output can be necessary.
    - [12:21] LLMs for Content Creation and Summarization Tasks
      - LLMs can be used to create content (articles, outlines, emails) and summarize lengthy documents.
      - Provide clear instructions and specify the desired task (create, summarize) in the prompt.
      - Example prompts are given for creating an article outline and summarizing a paragraph.
    - [14:24] LLMs for Text Classification and Data Extraction
      - LLMs can be used to classify text data (e.g., sentiment analysis of reviews) and extract information from text to a structured format (e.g., creating tables from reports).
      - Examples are provided for classifying customer reviews and extracting city and revenue data from a report.
    - [15:25] LLMs for Translation and Text Editing
      - LLMs can translate text between languages and edit text for tone, style, and grammar.
      - Examples are given for translating a training session title and changing the tone of a technical analysis.
    - [18:22] Iterative Prompt Engineering for Optimal Output
      - Prompt engineering is an iterative process, similar to developing a product or presentation.
      - It often requires multiple tries and revisions to the prompt before achieving the desired LLM output.
      - Don't be discouraged by initial failures; instead, evaluate the output and revise the prompt accordingly.
    - [19:19] Iterative Prompt Improvement for Better LLM Output
      - Getting the best output from an LLM often requires iteration on the prompt.
      - Evaluate the output after each attempt and revise the prompt to address shortcomings.
      - Reasons to revise the prompt include missing context, inaccurate information, or unsuitable formatting.
    - [21:42] Refining Prompts for Well-Structured Output
      - LLMs can be instructed to deliver output in a specific format (e.g., table).
      - Use clear and specific language to indicate the desired format in the prompt.
      - The example focuses on creating a well-organized table with relevant college information.
    - [24:11] Using Examples to Improve Prompt Efficiency
      - Including examples (or "shots") in prompts can significantly improve the quality of LLM output.
      - There are different prompting techniques based on the number of examples provided: zero-shot, one-shot, and few-shot.
      - Zero-shot prompts give no examples, while few-shot prompts provide two or more examples to guide the LLM.
    - [26:06] Using Few-Shot Prompting to Achieve Specific Style or Format
      - Few-shot prompting provides multiple examples (2+) to guide the LLM towards a desired output style or format (see the sketch right after this table of contents).
      - The prompt should specify the number of adjectives, sentence length, and overall style desired.
      - The example focuses on creating a product description that matches the style of previous examples.
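
    To make the zero-shot and few-shot items above concrete, here is a minimal sketch in Python. It only builds the kind of prompt strings described in the video; the call_llm helper and the example reviews are hypothetical placeholders (not from the course or any specific SDK), and you would wire call_llm up to whichever conversational AI tool you use.

    ```python
    # Minimal sketch: zero-shot vs. few-shot prompts for sentiment classification.
    # call_llm is a hypothetical placeholder, not a real SDK call.

    def call_llm(prompt: str) -> str:
        # Placeholder: replace with a real call to the conversational AI tool you use.
        return "(model response would appear here)"

    # Zero-shot: the task and the input, with no examples.
    zero_shot = (
        "Classify the sentiment of this customer review as positive, negative, or neutral.\n"
        "Review: The checkout was slow, but support resolved my issue quickly."
    )

    # Few-shot: two or more worked examples before the new input, so the model
    # copies the desired format and style.
    few_shot = (
        "Classify the sentiment of each customer review as positive, negative, or neutral.\n\n"
        "Review: I love how easy the app is to use.\n"
        "Sentiment: positive\n\n"
        "Review: The package arrived two weeks late and the box was damaged.\n"
        "Sentiment: negative\n\n"
        "Review: The checkout was slow, but support resolved my issue quickly.\n"
        "Sentiment:"
    )

    print(call_llm(few_shot))
    ```

    The same pattern extends to the style-matching example at [26:06]: paste two or three product descriptions in the style you want before asking for a new one.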

  • @capt2026
    @capt2026 7 months ago +10

    Crystal clear presentation. Good job!

  • @looitaiyew6142
    @looitaiyew6142 4 months ago +1

    Hi Yufeng, many thanks for the presentation. Well done.

  • @chrismeyers4678
    @chrismeyers4678 3 days ago

    Great video, thanks!
    At 7:15 you suggest we need to check that the output is accurate, unbiased, and relevant. But if something is a fact, why do I need to check whether it's biased? According to Gemini, a fact itself cannot be biased. Help, please. At what point can I just trust that it works correctly and not have to check? Is this a free vs. paid version thing?

  • @abhijeetcreates
    @abhijeetcreates 4 months ago +3

    What an amazingly crafted learning video; this man has become my most favourite teacher (who has actually added to my skills). Although I knew most of the concepts already, this lecture enhanced those skills like an upgrade. These 30 minutes are going to help me out many times in the future. A big thanks to you.

  • @Anthony-dj4nd
    @Anthony-dj4nd 6 months ago +7

    This guy is a great presenter

  • @mahmoudabdelazizabdelaziz2793
    @mahmoudabdelazizabdelaziz2793 7 months ago +9

    I am against using the word engineering in “prompt engineering”.
    I understand that coding is undergoing significant changes with the introduction of LLMs such as GPT; however, in my opinion we need to be more precise in choosing words that match the actual task being done by the software developer.
    Engineering design has a very precise meaning, and it incorporates a lot of stages: setting quantitative, measurable specifications; proposing multiple design alternatives; evaluating them against the specifications; iterative analysis and design, usually involving mathematics and physics; selecting a design after examining trade-offs among conflicting metrics; implementing the selected design; evaluating the implementation on different platforms (if feasible); module testing; integration testing; and more. Those engineering design stages constitute a cycle that can be revisited with each iteration, including even the specifications stage, which can be modified if necessary, but with caution and after agreement with the project stakeholders.
    In that context, software engineering for example might not satisfy all the above while being considered engineering nevertheless, but with some reservation in my opinion.
    However, prompt engineering barely includes a very small fraction of engineering design stages, and thus in my opinion should be given another name.

    • @zealousprogrammer4539
      @zealousprogrammer4539 7 months ago +2

      I agree. I am also against the term data "scientists" in data science.

    • @andremaxville4936
      @andremaxville4936 7 months ago +3

      I always refer to it as prompt writing instead. It's not a job for a coder, but for a skilled writer.

    • @ErwinSalasErwin
      @ErwinSalasErwin 6 months ago

      Engineering is meant to solve problems with technology; since prompt engineering does that, it's OK to say engineering.

    • @Balloonbot
      @Balloonbot 6 months ago +1

      "Prompt pooping"

    • @otenyop
      @otenyop 5 months ago

      I agree 100%

  • @hmnemonic
    @hmnemonic 6 months ago +3

    I love his voice…❤ Thanks for your presentation!

    • @Berserq
      @Berserq 6 months ago

      Seems like an AI

  • @ailearnershub
    @ailearnershub 5 months ago +1

    Superb presentation, explanation, and emphasis. Thanks, gentleman...

  • @saurabhbanerjee2998
    @saurabhbanerjee2998 1 month ago

    Loved this presentation!❤

  • @LAG455
    @LAG455 6 months ago +5

    Thank you for your teaching ☺️

  • @Footballfantastic07
    @Footballfantastic07 7 months ago +2

    Thanks for this kind of knowledge. Please keep updating us with these types of videos in the future.

  • @bahlechonco211
    @bahlechonco211 7 months ago +26

    I love this video, but why are you crouching? The furniture looks small.

    • @coolfrisbee
      @coolfrisbee 7 months ago +3

      I created a prompt for a tiny chair

    • @bahlechonco211
      @bahlechonco211 7 months ago +1

      @@OfficialUIUX 😂😂😂😂

    • @bahlechonco211
      @bahlechonco211 7 months ago

      @@coolfrisbee 😂😂😂

    • @dasalsakid
      @dasalsakid 7 months ago

      😂😂😂😂 so true

  • @vimalkumarv
    @vimalkumarv 2 months ago

    Excellent teacher.

  • @yogasetiawan2089
    @yogasetiawan2089 28 days ago

    Does he use a script for the presentation?

  • @krishangopal4808
    @krishangopal4808 7 months ago +2

    Can Gemini and ChatGPT help in writing a research paper in a way that minimises plagiarism?

    • @richpoorworstbest4812
      @richpoorworstbest4812 7 months ago +2

      I did exactly that

    • @PeterPan-hs5tu
      @PeterPan-hs5tu 7 months ago +2

      My understanding of plagiarism is a carbon copy of the exact combination of a collection of words from a publication made by an author, and what ChatGPT is capable of doing is rearranging the placement of those words. It directly challenges the very meaning of plagiarism as we know it. It's interesting to reflect on how we define plagiarism; perhaps we have to move its goalposts in a post-AI era 🤔 … or maybe it becomes a concept of the past.

    • @teenytinytoons
      @teenytinytoons 2 months ago

      you're only cheating yourself

  • @AlliyuAbdullihi
    @AlliyuAbdullihi 2 months ago

    I love this video and I am happy ❤🎉

  • @Psrk4287
    @Psrk4287 6 months ago

    What do you do when you don't know what the output should be? How will you use critical reasoning to evaluate the output then?

    • @bravosierra2447
      @bravosierra2447 6 months ago +1

      Good question. In my case, I start with a general prompt that involves at least three values. From the output I get, I refine my prompts until the output satisfies what I am looking for, even when I start off unsure. Hope this helps.
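
      In code terms, that refine-until-satisfied loop might look like the minimal sketch below. Both call_llm and looks_good are hypothetical placeholders (not anything from the video): one stands in for your own model call, the other for your own check of accuracy, relevance, and format.

      ```python
      # Minimal sketch of an iterative prompt-refinement loop with placeholder helpers.

      def call_llm(prompt: str) -> str:
          # Placeholder: replace with a real call to the conversational AI tool you use.
          return "draft output for: " + prompt

      def looks_good(output: str) -> bool:
          # Placeholder evaluation: in practice, check accuracy, relevance, bias, and format.
          return "bullet" in output.lower()

      prompt = "Summarize this report in three bullet points for a non-technical audience."
      for attempt in range(3):
          output = call_llm(prompt)
          if looks_good(output):
              break
          # Revise the prompt to address whatever was missing in the last output:
          # extra context, a stricter format, a different tone, and so on.
          prompt += "\nKeep each bullet under 20 words and avoid jargon."
      ```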

  • @manuelherrera5615
    @manuelherrera5615 4 months ago

    Great job!

  • @prashantsawant4587
    @prashantsawant4587 7 months ago

    Good description & details on how LLMs can be used. Thank you...

  • @อนิรุทธ์ตุลสุข-ง2ฉ

    😊❤thank you

  • @MrNgues
    @MrNgues 10 days ago

    Thank you ❤❤❤

  • @proflead
    @proflead 4 months ago

    Thanks for sharing!

  • @chrisbrown4539
    @chrisbrown4539 4 months ago +1

    "Shot" is synonymous w/ "example." 0-shot prompts provide no examples for the LLM. Non-zero shot prompts are a basic method to fine-tune LLM output, getting less generalized results.

  • @PeterPan-hs5tu
    @PeterPan-hs5tu 7 months ago +1

    amazing 🎉 thanks buddy

  • @bawiliankhum
    @bawiliankhum 4 months ago

    Thanks for sharing

  • @begeshgangadharan666
    @begeshgangadharan666 6 months ago

    There is some glitch in Google Maps; can you guys fix that?

    • @56585656587
      @56585656587 6 months ago

      Your prompt did not include sufficient detail and requires a multimodal approach.

  • @juanlugofitness
    @juanlugofitness 7 months ago

    Thank you

  • @bleacherz7503
    @bleacherz7503 6 months ago

    Sanitation Engineer - Garbageman
    Prompt Engineer - Googler

  • @khunaukmbs7610
    @khunaukmbs7610 7 months ago

    Thanks!

  • @WorldRecordRapper
    @WorldRecordRapper 4 months ago

    Make a comment for a video on prompt engineering.

  • @ndubuisisarah7147
    @ndubuisisarah7147 1 month ago

    Good

  • @karim3548
    @karim3548 2 months ago

    ❤❤❤

  • @-aris-an
    @-aris-an 6 months ago

    👑

  • @Abaddonian-sz3lo
    @Abaddonian-sz3lo 6 months ago

    👍

  • @PearsonLester-m5q
    @PearsonLester-m5q 2 months ago

    Hernandez David Lewis Daniel Allen Gary

  • @-aris-an
    @-aris-an 7 months ago

    😊

    • @VduheBfube
      @VduheBfube 6 months ago

      0:29

  • @Xtrone96
    @Xtrone96 7 months ago

  • @MaiconAraujo
    @MaiconAraujo 7 months ago +4

    Prompt engineering is very offensive to what engineering actually is.

    • @otenyop
      @otenyop 5 months ago

      Absolutely, it's an abuse of the proper engineering sector.

    • @AmalRafeeq
      @AmalRafeeq 5 months ago +2

      That’s probably because you don’t understand prompt engineering.

    • @otenyop
      @otenyop 5 months ago

      @@AmalRafeeq Check my bio. I am an AI engineer.

  • @SantaPacienciaRadio
    @SantaPacienciaRadio 2 months ago

    Hallucinations? Really? God almighty...

  • @smejia6362
    @smejia6362 6 months ago

    The important boring theory, explained in a boring way. I gave it a like anyway.

    • @frederickwelsh
      @frederickwelsh 1 month ago

      I suspect you learned this long ago. The video is great for most people who haven't.

  • @MClub-i2z
    @MClub-i2z 7 months ago +1

    Bro is talking about biases when Gemini AI itself is extremely biased due to the woke data fed to it by its devs.

    • @teenytinytoons
      @teenytinytoons 2 months ago

      What does woke mean? I just woke up this morning.