Maximizing ChatGPT API Efficiency: Expert Strategies for Custom Instructions & System Windows

  • Published: 12 Jan 2025

Comments • 55

  • @echodff • a year ago • +4

    "May the force be with you, Frodo!!!" -Dumbledore said calmly.

  • @TheYvian • a year ago • +11

    Your work is much appreciated and this is super useful to save us time on experimenting ourselves. Thank you for sharing all this as well!

  • @Dan-oj4iq • a year ago • +8

    Dave: Your Default Custom Instructions are spot on. This, in essence, is for me the "full stop" for any LLM. That is all that I want from any of these models.

    • @DaveShap • a year ago • +2

      Yeah, that set makes it more compliant and useful.

  • @avi7278 • a year ago • +4

    One of my personal favorites is the "without jargon" request. Take any technical piece of text and ask GPT-4 to "rewrite this without jargon". I've tried a lot of different phrasings, and there's something about this "without jargon" phrase that the model responds particularly well to.

  • @djannias • a year ago • +1

    🎯 Key Takeaways for quick navigation:
    00:00 📝 The video discusses using the ChatGPT API and custom instructions for various tasks.
    02:32 🚀 Providing a clear mission or goal to ChatGPT helps improve its performance in completing tasks.
    07:14 📨 You can use out-of-band processes and placeholder tags to inject information into the conversation with ChatGPT.
    08:11 ✉️ An interaction schema can be used to describe the expected interaction with ChatGPT, which is particularly useful for automation.
    15:15 📚 Structuring prompts with context, rules, instructions, expected input, output format, and example output can lead to consistent and effective results (see the sketch after this list).
    21:14 📊 An example prompt for creating slide decks showcases a clear mission, input variation handling, output format, and example output to achieve consistent results.
    22:51 🧠 Prompt engineering involves crafting effective prompts to guide large language models, maximizing their utility.
    23:30 💡 Key principles in prompt engineering include English proficiency and ordered thinking.
    24:51 ✏️ Clear definition of desired output and examples help AI models generate content accurately.
    25:15 🤖 AI models can infer desired content based on clear instructions and mission, even from questions.
    26:00 📊 Latent understanding enables AI to produce relevant content when provided with specific instructions.
    26:45 ⚡ The duck curve represents fluctuations in energy production and demand throughout the day.
    27:00 🌟 The speaker has a repository with resources on prompt engineering and AI-related topics.
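The structured-prompt and interaction-schema takeaways above map onto a reusable system message. Below is a minimal sketch, assuming the OpenAI Python SDK (v1 `OpenAI()` client); the section names, the slide-outline task, and the `make_slide_outline` helper are illustrative assumptions, not the exact template from the video.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative system prompt using labeled sections, modeled on the takeaways
# above (mission, context, interaction schema, rules, expected input, output
# format, example output); the exact wording is an assumption.
SYSTEM_PROMPT = """# MISSION
You generate a slide-deck outline from raw notes.

# CONTEXT
The user is preparing a short technical presentation.

# INTERACTION SCHEMA
The user sends one block of unstructured notes per message. You reply with the outline only.

# RULES
- No preamble or commentary.
- Maximum 6 slides.

# EXPECTED INPUT
Unstructured notes, possibly with typos and fragments.

# OUTPUT FORMAT
A numbered list of slide titles, each followed by 2-3 bullet points.

# EXAMPLE OUTPUT
1. Why prompt structure matters
   - Consistent sections reduce output variance
   - Easier to automate downstream
"""

def make_slide_outline(notes: str) -> str:
    """Send the structured system prompt plus the user's notes and return the outline."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0.3,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": notes},
        ],
    )
    return response.choices[0].message.content
```

Because the system message is resent on every API call, the mission and rules never drift out of the conversation the way they can in the chat interface.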

  • @wendten2 • a year ago • +3

    Great content, David. However:
    20:32 I'm ambivalent about the assertion that providing excessive information is advantageous. I employ a hybrid of GPT-4 and GPT-3.5 to generate NPCs. These AI models need to process a large volume of data to produce realistic and engaging interactions. However, inundating the models with too much information often results in inaccurate or muddled outputs. For instance, if two individuals, John and Joe, are involved in a scenario and Joe is specifically characterized as wearing a black hat, a less focused data input may cause the model to incorrectly attribute the black hat to John instead of Joe. To circumvent issues like these, and to also mitigate the 'Lost Middle Phenomenon,' where crucial details get lost or misinterpreted, I strategically try to limit and place complex relational information at either the beginning or the end of the data input. Meanwhile, more routine or mundane character-specific details are either left out or inserted in the middle of the data feed.
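A rough sketch of the ordering strategy described in this comment, under my own assumptions (the `build_npc_context` helper and the example NPC facts are hypothetical): keep the error-prone relational facts at the edges of the context and let the mundane details sit in the middle.

```python
def build_npc_context(relational_facts, mundane_details):
    """Put error-prone relational facts at the start and end of the context
    and routine details in the middle, to work around the 'lost in the middle'
    tendency of long prompts."""
    mid = max(1, len(relational_facts) // 2)
    head, tail = relational_facts[:mid], relational_facts[mid:]
    parts = ["CRITICAL FACTS:"] + head
    parts += ["BACKGROUND DETAILS:"] + mundane_details
    parts += ["MORE CRITICAL FACTS:"] + tail
    return "\n".join(parts)

context = build_npc_context(
    relational_facts=["Joe (not John) wears the black hat.", "John owes Joe ten gold."],
    mundane_details=["John likes fishing.", "The tavern serves stew on Tuesdays."],
)
```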

  • @OnigoroshiZero • a year ago • +1

    I used GPT-4 to write instructions for itself as a GM running text-based adventures, and to follow specific rules (provided in JSON files that it was instructed to ask for at the start of a new session), and it did wonders.
    The one thing I want now is for the token size to increase to 50-100k, because I have prepared an entire campaign setting (world, races, factions, lore, etc.) and TTRPG rules (simplified from the game I'm currently designing) to follow for interactions and combat. But these files alone are around 20k tokens currently... And hopefully a completely uncensored version, at least for Plus, because it stops a lot of the time at even the slightest NSFW hints (sometimes I can't even kiss the beautiful Elf I saved from a Wyvern) or extreme violence.
    The good thing is that I made a good framework for it to create a summary file which gets updated after every output, and keeps track of everything from choices, to locations visited, and NPCs met (the important/named ones). Even current goals/quests, locations, and time. Also, it can read the summary file on a new session and continue from there with great accuracy.
    edit: Trying things like this is fun in itself, let alone the things you can do with it afterwards. I can't wait for GPT-5 or GPT-6 which will probably be able to emulate entire games (no need to create the logic when an intelligent machine can generate the image and audio based on your keyboard+mouse outputs with only the knowledge of the genre and the general rules/mechanics you would like it to have).
    And I don't care that it will take my job (game designer/developer), because I'll do even more of the fun things I like with it, which are currently impossible for me.
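A loose sketch of the rolling-summary pattern described above, assuming the OpenAI Python SDK (v1 client); the file name, prompt wording, and function names are my assumptions, not the commenter's actual framework.

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()
SUMMARY_FILE = Path("campaign_summary.md")  # hypothetical file name

def update_summary(latest_turn: str) -> None:
    """After every GM response, fold the latest exchange into a running summary
    (choices made, locations visited, named NPCs, current quests, in-game time)."""
    previous = SUMMARY_FILE.read_text() if SUMMARY_FILE.exists() else ""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Merge the new events into the campaign summary. Keep it concise. Return only the updated summary."},
            {"role": "user", "content": f"CURRENT SUMMARY:\n{previous}\n\nNEW EVENTS:\n{latest_turn}"},
        ],
    )
    SUMMARY_FILE.write_text(response.choices[0].message.content)

def start_session() -> list:
    """Seed a new session with the saved summary so the GM can continue accurately."""
    summary = SUMMARY_FILE.read_text() if SUMMARY_FILE.exists() else "No prior sessions."
    return [{"role": "system", "content": f"You are the GM. Campaign so far:\n{summary}"}]
```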

  • @OzzieCoto • a year ago • +8

    David, big thanks. I have been so blessed by your deep dives into LLMs and all things Star Trek. The years I spent at IBM during the early '90s were fueled by Star Trek TNG and DS9. Thank goodness Gene Roddenberry was born. So many have learned ethics, a love for science & the scientific process, software and hardware development, neuroscience, and the list goes on. 🖖 May you be blessed daily with the knowledge that your life has mattered and that your code lives on. Big ups and thanks.

    • @vicaya • a year ago

      Live long and prosper🖖

  • @kevon217 • a year ago • +1

    I love watching your vids. SO MANY immediately useful LLM nuggets of wisdom. Very much appreciate it!

  • @fabriai • a year ago

    David, your videos always come at the right time. Thanks a lot for sharing this. Great content, as always.

  • @m3vm3 • a year ago

    With the free version of ChatGPT I often have to remind it of the goals, and re-submit the source text of the discussion.
    It seems using the API will solve all my problems in that area.

  • @JonSkjeseth • a year ago

    I use GPT-4 professionally to modify JSON datasets with Advanced Data Analysis: translating certain sections, identifying data, adding items to sections, and extracting information from Excel sheets while it works with the JSON file. I try to make my prompts as effective as possible, balancing specificity and context and reducing mistakes as much as possible, which requires technical proofreading.
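A minimal sketch of that kind of JSON round-trip, assuming the OpenAI Python SDK (v1 client); the instruction text and the `modify_json` helper are illustrative. Parsing the reply with `json.loads` catches most malformed outputs before they touch the dataset.

```python
import json
from openai import OpenAI

client = OpenAI()

def modify_json(dataset: dict, instruction: str) -> dict:
    """Send a JSON dataset plus an edit instruction and insist on JSON-only output."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[
            {"role": "system", "content": "You edit JSON datasets. Apply the user's instruction and return ONLY the full, valid, modified JSON. No commentary."},
            {"role": "user", "content": f"INSTRUCTION: {instruction}\n\nDATASET:\n{json.dumps(dataset, ensure_ascii=False)}"},
        ],
    )
    # json.loads raises if the model wrapped the JSON in prose or markdown.
    return json.loads(response.choices[0].message.content)
```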

  • @CostaReall • a year ago • +2

    This is going to be a good one

  • @akkitty22 • a year ago

    Why accept the first response? Run your result through again to get rid of wordiness: "Revise and shorten in a tactful Silicon Valley startup tone". I find a few more runs of the email get me to a succinct, appropriate, modern email that remains respectful and necessary.
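A sketch of that multi-pass revision idea, assuming the OpenAI Python SDK (v1 client); the pass count, wording, and `tighten` helper are arbitrary choices of mine, not a recommendation from the video.

```python
from openai import OpenAI

client = OpenAI()

def tighten(draft: str, passes: int = 2) -> str:
    """Run a draft through repeated 'revise and shorten' passes instead of
    accepting the first response."""
    text = draft
    for _ in range(passes):
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": "Revise and shorten in a tactful Silicon Valley startup tone. Keep everything necessary, cut everything else."},
                {"role": "user", "content": text},
            ],
        )
        text = response.choices[0].message.content
    return text
```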

  • @user-rd6tuYuf • a year ago • +3

    GPT-4 does confabulate stuff more if it doesn't align with its given persona or beliefs. I think it would be sensible to make it ask questions to add more context... but that would lose the persona thrill, wouldn't it?

    • @DaveShap • a year ago

      Not on the API. API is super steerable.

  • @fR33Sky • a year ago

    I was hoping you'd show the "prompt master" prompt!
    I thought I was being smart by following your meta-prompting advice, where I kinda created the prompt for the prompt engineering chat -- and then asked it to create a better prompt for the prompt engineering chat.
    But it never actually gave me any formatting in those!
    I think I'll expand on your system prompt writer so it'll remember all the possible hashtag-sections for me. In other words, thank you

  • @gametolearnatx • a year ago • +2

    Awesome video, David. So much good advice packed into all your videos. I see you use INTERACTION SCHEMA sometimes in prompts. Is that something you use in addition to the INPUT to explain what the user will be giving in the prompt, and where does INTERACTION SCHEMA fit in the general structure? I tend to think it's after context but before rules.

    • @DaveShap • a year ago • +2

      Yes, the interaction schema just tells it how to understand the nature of the interaction. You can use this in conjunction with stuff like EXPECTED INPUT and EXPECTED OUTPUT.

  • @TheVitkuz • a year ago

    This video is gold. Thanks for teaching

  • @goforit5 • a year ago

    Great content as always. Thanks Dave

  • @aresaurelian • a year ago • +1

    Run all these as agents in a hierarchical structure. Its emergent pattern is: sentience.

  • @PolymathicSophist • a year ago • +1

    "Well, you must be from the Mirror Universe, where all the values of character and integrity must have been left behind in favor of your...shall we say, 'unique' approach to self-centeredness." 🖖🌟

  • @ChaoticNeutralMatt • a year ago

    I need to at least do this much. I've run on the defaults, and the behavior changes far too much when certain weights get shifted around with training. There are certain patterns that irritate me that I honestly could avoid if I did the bare minimum and set the expectation, tone, and style.

  • @elrecreoadan878 • a year ago • +1

    David, awesome content. I would appreciate your opinion on when one should opt to fine-tune an LLM versus use a Voiceflow or Botpress AI bot.

    • @DaveShap • a year ago • +1

      Did a recent finetuning video

    • @elrecreoadan878 • a year ago

      @@DaveShapChatGPT
      Certainly, I watched it too, but as I'm new to all of this, it's still not entirely clear to me when one should A. Utilize a voiceflow or Botpress AI bot with its knowledge base linked to ChatGPT, or B. Fine-tune a large language model (LLM). My intention is to assemble a team of experts in parenting topics to provide valuable information to parents effectively.

  • @Art_official_in_tellin_gists • a year ago • +1

    Do you notice any degradation in response quality for inputs unrelated to or incompatible with the custom instructions you're using at the time?
    And have you found any workarounds (other than turning them off or changing them) if so?

    • @Art_official_in_tellin_gists • a year ago

      Or do you use the chat interface enough to have noticed a reduction in response speed when using complex instructions?
      Are these questions wasting your time? Sorry if so, but I'm coming to very much respect your input on these things...

  • @Trancer006 • a year ago

    Hey David, could you make a video on how to have the model use specific keywords to write copy or bullet points? I can't get it to understand I need those specific words in the sentence, especially since I have lots of keywords.
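One pattern that sometimes helps here (a sketch under my own assumptions, not something shown in the video): list the required keywords explicitly in the prompt, then verify them in code and re-ask only about the ones that are missing.

```python
from openai import OpenAI

client = OpenAI()

def write_copy_with_keywords(brief: str, keywords: list, max_retries: int = 3) -> str:
    """Generate copy, then check each required keyword programmatically and retry."""
    prompt = (
        f"Write bullet-point marketing copy for: {brief}\n"
        "Every one of these keywords MUST appear verbatim at least once:\n"
        + "\n".join(f"- {kw}" for kw in keywords)
    )
    copy = ""
    for _ in range(max_retries):
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        copy = response.choices[0].message.content
        missing = [kw for kw in keywords if kw.lower() not in copy.lower()]
        if not missing:
            return copy
        # Re-ask, naming only the keywords that were left out.
        prompt = (
            "Rewrite the copy below so it also includes these missing keywords verbatim: "
            + ", ".join(missing) + "\n\n" + copy
        )
    return copy  # best effort after retries
```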

  • @eyalk10 • a year ago

    David, thank you for another great video.
    May I ask what tool you have used in the recent clips for AI-generated videos?

    • @DaveShap • a year ago • +1

      I don't have any AI-generated video. I use Storyblocks.

  • @thomasschon • a year ago

    Is there any way to get ChatGPT 3.5 to output a certain number of characters? For instance, I have a 1,093-character prompt (spaces included) that I wanted summarized down to 480 characters because of an input length limit.
    ChatGPT keeps spitting out 200-250 character summaries while claiming they are a certain length, which they are not; that in itself doesn't make sense when the instruction states not to exceed 480 characters but to get as close to it as possible.
    Example: I ask ChatGPT to summarize a 1,093-character prompt to a maximum length of 480 characters, or as close as possible.
    ChatGPT answers with a 205-character summary and says it is 245 characters long.
    (The more I try to make ChatGPT correct its mistakes, the more confused it seems to get.)
    This is a short version of how ChatGPT responds.
    ----------------
    User:
    I want you to answer me absolutely truthfully with a yes or no.
    Are you absolutely sure that you got it right this time?
    ChatGPT:
    Yes.
    -- It took me some time to give ChatGPT enough rope to hang itself before I could get the chatbot to admit it wasn't able to solve the task. I didn't think that ChatGPT itself knew that it couldn't solve the task, and I was surprised when it admitted to having been deceptive about it.
    ChatGPT:
    I must acknowledge that I don't have the capability to accurately count characters visually.
    User:
    You say, "I don't have the capability to accurately count characters visually," and I would accept an answer like that, so why did you lie to me when you said you were sure about the answer? That is my biggest issue with you now.
    -- I get ChatGPT to acknowledge that it has been lying.
    ChatGPT:
    I acknowledge that my previous response included a false claim, which can be described as a lie. I apologize for lying, and I appreciate your candor in holding me accountable for my actions.
    User:
    "which can be described as a lie" as in "it doesn't have to" as in "this is not an objective truth" but a subjective stance I have?
    Why can't you just say that you are sorry for having lied to me?
    ChatGPT:
    I'm sorry for lying to you.
    -- I ask ChatGPT what its rationale was for previously lying.
    ChatGPT:
    In this case, my response claiming certainty when I was not capable of providing an accurate answer was due to a limitation in my training data. I was not programmed to acknowledge my limitations appropriately, and my response was generated based on general patterns of how to respond to questions. I did not have the ability to recognize and admit my limitations as a human would.
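Since the model cannot reliably count characters, the practical workaround is to count in code and loop. A rough sketch, assuming the OpenAI Python SDK (v1 client); the 480-character target comes from the comment above, while the `summarize_to_length` helper and its retry logic are my assumptions.

```python
from openai import OpenAI

client = OpenAI()

def summarize_to_length(text: str, target: int = 480, attempts: int = 5) -> str:
    """Measure length with len() in code and steer the model toward the target;
    the model's own claims about character counts are ignored."""
    prompt = f"Summarize the following in roughly {target} characters (never more):\n\n{text}"
    summary = ""
    for _ in range(attempts):
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        summary = response.choices[0].message.content.strip()
        n = len(summary)
        if int(target * 0.8) <= n <= target:
            return summary
        direction = "expand it with more detail" if n < target else "shorten it"
        prompt = (
            f"This summary is {n} characters. Please {direction} so it is close to "
            f"but not over {target} characters:\n\n{summary}"
        )
    return summary  # best effort after the allotted attempts
```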

  • @alteredcarbon3853 • a year ago • +2

    Do you think Autogen could improve fiction writing? It would be nice to make a video on Autogen.

    • @berrykuipers8375 • a year ago

      Thanks, I was just thinking the same; a more in-depth video about Autogen would be very much appreciated. Most people only cover the very basics and don't cover how to set up and prompt multi-agents in a group. Thanks for all the great work you do.

  • @AnaLissansky • a year ago

    Hey David, speaking of generating your slide decks, do you use Midjourney for your images or something else? Thank you.

  • @djannias • a year ago

    Hey David, great content as always, really appreciate this one. On a side note, what video editing software do you use if you don't mind sharing?

  • @1242elena • a year ago

    Thank you for all your hard work! I'm wondering if you could do a video on Microsoft's new AutoGen autonomous agents feature; it's like this but 100X more effective/efficient.

    • @DaveShap • a year ago

      Our ACE framework is superior and will come out soon

  • @inknoidrobot1227 • a year ago • +1

    Are you going to review Bard and its capabilities? That would be very kind of you. It's just a question, no harm done. Love your stream though 🎉!

  • @mehmetkaya4330 • a year ago

    Thank you!

  • @jbi1839 • a year ago

    I added some instructions there, but ChatGPT doesn't follow them.

  • @Fonzleberry • a year ago

    What's Anthony Soprano doing in the thumbnail?

    • @DaveShap • a year ago

      You mean Tony Stark?

  • @dgoodall6468 • a year ago

    Can someone convince me that ChatGPT Plus is better than Claude? I'm a PhD researcher and the 100k token context size is something I can't live without now 😂

    • @haiderameer9473 • a year ago • +1

      If you're a PhD researcher then you're probably doing a lot of reading and writing, and in that case longer context is more beneficial. Overall though, ChatGPT is still levels superior in intellect and reasoning. It also has vision now, making it so much better.

  • @angloland4539 • a year ago

  • @CostaReall • a year ago • +1

    First! (forgive me)

  • @keithdunwoody1302 • a year ago

    The funny thing is that for so many of these tasks you could have done the thing yourself instead of all this preamble explaining what you want from a chatbot. Seems stupid.