Explaining Prompting Techniques In 12 Minutes - Stable Diffusion Tutorial (Automatic1111)

  • Published: 13 May 2024
  • Want to learn prompting techniques in Stable Diffusion to produce amazing results from your ideas? Well, look no further than this short, straight-to-the-point tutorial where we will demystify prompting techniques.
    -LINKS-
    (When using an affiliate link, I earn a commission which is a fantastic way to support the channel)
    ➤ GENERAL ACCESSORIES - amzn.to/3Jl9zpe
    ➤ NVIDIA GRAPHICS CARDS - amzn.to/3TZNGRn
    ➤ AMD GRAPHICS CARDS - amzn.to/3PYIaNN
    ➤ INTEL CPU - amzn.to/3vRlYya
    ➤ RAM - amzn.to/3Jm53Hf
    ➤ CORSAIR VENGEANCE 16GB - amzn.to/3PY8JTm
    ➤ CORSAIR VENGEANCE 32GB - amzn.to/4aRTwLx
    ➤ CORSAIR VENGEANCE 64GB - amzn.to/3Q1NboQ
    ➤ GENERAL MONITORS - amzn.to/3xEk6td
    ➤ GENERAL KEYBOARDS AND MICE - amzn.to/4aAaEpp
    ➤ GAMING KEYBOARDS AND MICE - amzn.to/49wpYly
    Support the Channel:
    ➤ Patreon: / bitesizedgenius
    ➤ Buy Me Coffee: bmc.link/bitesizedgenius
    Chapters
    0:00 - Introduction
    0:21 - Prompt Ordering
    0:48 - Prompt Style
    1:05 - Prompt Token
    1:33 - Prompt
    2:21 - Negative Prompt
    3:02 - Parentheses
    3:44 - Square Brackets
    4:23 - Prompt Weighting
    5:11 - Angled Brackets
    5:51 - Prompt Editing
    7:22 - Use Literal
    7:55 - BREAK
    8:19 - Alternating Words
    8:55 - CFG Scale
    9:36 - Prompt Matrix
    10:30 - Prompt From File/Text
    11:09 - X/Y/Z Plot
    11:49 - Conclusion
  • Science

Comments • 123

  • @landmimes
    @landmimes 10 months ago +12

    I don't know why it took so long to find a clear tutorial and explanation of this topic online, ty for the secret sauce

  • @jzwadlo
    @jzwadlo 15 days ago +1

    Legend - finally someone who just gives clear, concise explanations of how everything works! Subbed

  • @bmorg7244
    @bmorg7244 10 months ago +12

    This is an excellent and comprehensive guide to Stable Diffusion. Well thought out and informative, thank you for this!

  • @user-qp1bn3nn9w
    @user-qp1bn3nn9w 10 months ago +2

    Awesome breakdown. I went through and tested each one as I watched this video, and I have leveled up for sure; I now know a bit more about how SD advanced prompting works. Hope to see more good stuff!

  • @coffeebot7016
    @coffeebot7016 9 months ago +4

    Great video. My SD setup is quite complex and I'm making some startlingly photorealistic stuff, but even I learned a couple tips here. Thanks for the upload.

  • @DejayClayton
    @DejayClayton 9 months ago +2

    The format of your videos is so incredible, and the value contained within this video in particular is amazing. Thanks much!

    • @BitesizedGenius
      @BitesizedGenius  9 months ago

      Thanks, let me know if I missed anything as I'm thinking of additional prompting tips!

  • @gaurav123vaghela
    @gaurav123vaghela 4 months ago

    I had no proper pipeline before I saw this video. Thank you so much @Bitesized Genius

  • @NitroGlace
    @NitroGlace 9 days ago

    I just got into Stable Diffusion and the whole week I was generating images without knowing these techniques. Thank you.

  • @kevinkennedy-spaien8163
    @kevinkennedy-spaien8163 10 months ago +47

    Break is perfect for describing individual people or items in an image. Never heard of it before watching this, but it was exactly what I needed to have two people of different races and dress styles conversing in the same frame!

    • @BitesizedGenius
      @BitesizedGenius  10 months ago +6

      Ah good idea... It can also be good for weakening the impact of prompts by placing some padding so it's lower on the weighting!

    • @rere439
      @rere439 10 months ago +1

      hey, thank you bro

    • @megamegathom
      @megamegathom 10 months ago +1

      Thank you for explaining this. Game changer.

  • @WifeWantsAWizard
    @WifeWantsAWizard 9 months ago +9

    (8:29) Those lines are vertical, not horizontal. Their proper name is "pipes".

  • @wizviz6986
    @wizviz6986 1 month ago

    lovely - to the point - clear and concise - thank you

  • @zuke-aep
    @zuke-aep 8 months ago

    Very informative video. Thanks

  • @kakuzadezukade5260
    @kakuzadezukade5260 5 months ago

    Excellent video. Very detailed explanations. Thanks for your work.

  • @joeyi27
    @joeyi27 2 months ago

    Thank you sooo much for making this! 💎✨

  • @TheBattleRabbit860
    @TheBattleRabbit860 1 month ago

    This is fantastic. I work for an AI image gen site, and I'm always trying to explain the principles of prompting and what ( ) does vs [ ], or how to structure a prompt from most to least important. You concisely break down so many things here, thank you!

  • @YooAzula8262
    @YooAzula8262 10 months ago +1

    Underrated YouTube channel. You deserve way more attention.

  • @Pircla
    @Pircla 1 month ago

    Omg, the best explanation I've ever seen about prompting in Stable Diffusion. Thank you for being clear and fast.

  • @moneyonmars3754
    @moneyonmars3754 10 months ago

    This was very helpful! Thank you :)

  • @3diva01
    @3diva01 10 months ago +1

    Thank you for another very informative and helpful video! It's much appreciated! Regarding the BREAK, it actually looks like a slight bump in quality WITH the break. Nice!

  • @gamingking1
    @gamingking1 6 months ago

    This is really great stuff.

  • @feyntmistral1110
    @feyntmistral1110 9 months ago +3

    BREAK helps you keep certain sections of your prompt cohesive. Since every chunk is 75 tokens, if you were to run into a situation where one of your token segments was abruptly ended (such as "wearing a white/shirt and blue jeans") it can lead to unwanted results. If you know you're close to your token limit before starting another section (like before the clothing portion), you can put BREAK into the prompt and ensure everything sticks together in the next chunk.
    The AI also seems to put more emphasis on things when processed alone. Splitting colours into separate segments can ensure they don't bleed into each other or colour incorrect items.
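
As a concrete illustration of the chunk splitting described above, a prompt that keeps each subject in its own chunk might look like this (the wording is hypothetical, not from the video):

```text
a tall man with short black hair, wearing a red jacket, standing on the left
BREAK
a short woman with long blonde hair, wearing a white dress, standing on the right
```

Everything before BREAK is padded out to its own 75-token chunk, so the two descriptions are encoded separately and are less likely to bleed into each other.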

    • @BitesizedGenius
      @BitesizedGenius  9 months ago

      Thanks, I'll be revisiting this keyword soon in an upcoming video, so this reaffirms my research!

  • @RedsNotAColor
    @RedsNotAColor 10 months ago

    Awesome video dude!

  • @MikeisRelic
    @MikeisRelic 10 months ago

    best guide I've seen yet.

  • @griftgfx
    @griftgfx 10 months ago +4

    One of the more useful introductions to SD prompts I've watched. How do you only have 900 subs?

  • @bgill7475
    @bgill7475 10 months ago

    Very nice, thanks for your tips 🙂

  • @UltraRealisticAI
    @UltraRealisticAI 10 months ago

    Subscribed! This information has been tremendously useful for my channel.

  • @VoJnIk90
    @VoJnIk90 10 months ago +1

    Great video and channel! You got a new subscriber!
    Would love to see how to create characters based on training the model with images of myself.

  • @MrSongib
    @MrSongib 10 months ago +6

    7:55 Try putting BREAK after each concept keyword and see what happens.

  • @noobicorn_gamer
    @noobicorn_gamer 5 months ago

    the BEST explanation of SD that's out there. Feels like I found an oasis. This vid needs to get more views :(

  • @mhmwoodar2678
    @mhmwoodar2678 10 months ago +1

    Thank you

  • @VIDEOAC3D
    @VIDEOAC3D 7 months ago

    Fantastic tutorial, man!
    I've been toying around with Auto1111 on and off since early December. I've generated thousands of cool images, but... the output consistency is all over the place. Some great, but with lots of crap in between.
    I've mostly been winging it because I haven't seen anyone cover the features together well anywhere, and there are just so many variables that can completely disrupt your output. It's time consuming.
    Also, many tutorials assume you already know much of this info and are hence obscure and incomplete lessons.
    E.g., I was pushing buttons, changing prompts, and observing the destruction... lol 😆 🤣 So that's lots of "oh, hmmm, that's a neat, um, FAIL... f...uggggly."
    So thanks again! Great content, liked 👍 👌, subbed ❤, saved.

  • @jrbirdmanpodcast
    @jrbirdmanpodcast 9 months ago

    Nice work

  • @JohnVanderbeck
    @JohnVanderbeck 10 months ago +3

    Personally, for my characters, I wrote a Python script that uses ChatGPT to generate a bunch of prompts based on a template, and then writes the resulting prompts out to a file, which I then run through SD using the Prompts from File feature.
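
A minimal sketch of the workflow described above, with hard-coded variation lists standing in for the ChatGPT call (the template, subjects, styles, and the prompts.txt filename are all hypothetical). A1111's "Prompts from file or textbox" script accepts a file with one prompt per line:

```python
# Fill a prompt template with combinations of variations, then write one
# prompt per line to a file for A1111's "Prompts from file or textbox"
# script. In the setup described above, the variation lists would come
# back from ChatGPT instead of being hard-coded.

template = "portrait of {subject}, {style}, detailed face, studio lighting"

subjects = ["a knight in silver armor", "an elven archer"]
styles = ["oil painting", "octane render"]

# One prompt per (subject, style) combination.
prompts = [
    template.format(subject=subject, style=style)
    for subject in subjects
    for style in styles
]

with open("prompts.txt", "w") as f:
    f.write("\n".join(prompts))
```

Each line of the resulting file is then run as its own generation when the script is selected in the txt2img Script dropdown.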

  • @ashtyler8
    @ashtyler8 10 months ago

    Thanks!

  • @sidheart8905
    @sidheart8905 10 months ago

    wow, I was confused by the circular and square brackets and numbers... thanks for clarifying

  • @sb6934
    @sb6934 10 months ago

    Thanks

  • @user-xy2zp7ze9k
    @user-xy2zp7ze9k 2 months ago

    Hi Bitesized Genius - regarding the prompt from file/text, how do you generate negative prompts here? I'm really stuck and would love your help, thanks!

  • @JohnVanderbeck
    @JohnVanderbeck 10 months ago +2

    I use what you call "Alternating Words" all the time. It is one of my favorite tricks for creating realistic-looking faces: by alternating through 3 or 4 different real people with this technique, the result generally won't look like any one of them.
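
For reference, A1111's alternating-words syntax cycles through the bracketed options, switching to the next one on every sampling step. A hypothetical three-way face blend along the lines described above:

```text
portrait photo of [Person A|Person B|Person C], detailed face, 85mm lens
```

The names are placeholders; over, say, 30 steps, each option is active for roughly a third of the steps, which is why the result tends not to resemble any single one of them.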

    • @BitesizedGenius
      @BitesizedGenius  10 months ago +2

      It's also useful for getting some consistency with how characters look, regardless of whether you use a different seed or not!

    • @AIchemywithXerophayze-jt1gg
      @AIchemywithXerophayze-jt1gg 10 months ago +1

      I figured that if I wanted to create consistent faces I should use a name, but then I found you can just plug in a number instead of a name. So something like this: "a tall woman with brown hair named 5437" would give me the same face in all the images from that generation.

  • @Beokabatuka
    @Beokabatuka 10 months ago +2

    4:57 You may have missed the colons in the middle two generations, which is why they look so similar, but I think the point still stands.

  • @hillcrestvideoprod1
    @hillcrestvideoprod1 5 months ago +1

    Thank you for explaining this so well! I am reinstalling SDXL because my first install was flawed somehow… this makes experimenting with other interfaces like Comfy easier! Such amazing software!

  • @Xungames
    @Xungames 9 months ago +1

    Hi, thanks for the video!
    4:30 Does the ":" character not matter?
    You use it for the value 1.8 but not for 1.2 and 1.3.

    • @BitesizedGenius
      @BitesizedGenius  9 months ago

      It matters, I just didn't notice it was missing 😂

  • @meredithhurston
    @meredithhurston 6 months ago

    Thanks for this amazing SD prompt tutorial. Around the 07:00 mark you talk about prompt editing and I got a bit confused. Are we able to save images from the sampling steps? If so, how? And what is the benefit of using prompt editing? Thank you!

    • @BitesizedGenius
      @BitesizedGenius  6 months ago +1

      So prompt editing is just deciding at which sampling step a prompt should be used, via either a number or a percentage of the total sampling steps. It's a bit like swapping prompts from one to the next at a stage you specify. The benefit would be getting a unique image that blends one prompt with another, or delaying the start of a prompt until later in the image generation. You will get the resulting image when the generation is complete. Hopefully that helps?
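
For reference, the prompt-editing syntax in A1111 is [from:to:when], where "when" is either an absolute step number or a fraction of the total steps. A few illustrative examples (the keywords are hypothetical):

```text
[castle:ruins:0.5]   draw "castle" for the first 50% of steps, then switch to "ruins"
[forest::10]         use "forest" for the first 10 steps, then drop it
[:fog:0.75]          leave "fog" out until 75% of the steps are done, then add it
```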

    • @meredithhurston
      @meredithhurston 6 months ago +1

      @@BitesizedGenius this does help. I also found a Medium article on the topic that helped me understand it better. I realize now it influences the final image, but the step at which it's introduced determines how much influence. Thanks again for the great tutorials 👏🏾👏🏾👏🏾

    • @cardanochrome2452
      @cardanochrome2452 5 months ago

      Is there a way to use prompt editing with LoRAs? When I try something like [,keyword1:,keyword2:0.7] it doesn't seem to work. Do you have to do some trick to escape the ':' in the LoRA? Is it even possible (with an extension or addon)? Thanks so much for this awesome guide! I come back to it again and again! @@BitesizedGenius

  • @stopthink7202
    @stopthink7202 29 days ago

    Is there a way to control angle and perspective? Like the low angles that imply the viewer of the picture would be small and looking up at a huge object or person that towers over them?

    • @BitesizedGenius
      @BitesizedGenius  28 days ago +1

      Use prompts like "from below", "from above", "birdseye view", "fisheye lens", etc., and add weighting if required.

    • @stopthink7202
      @stopthink7202 28 days ago

      @@BitesizedGenius Thanks dude

  • @Epulsenow
    @Epulsenow 10 months ago

    Hello sir, can you help with my query about hands and fingers deforming when creating artwork, like extra legs or extra fingers? Sometimes they go missing or aren't correct per human anatomy. I tried negative prompts, but the issue remains.

    • @BitesizedGenius
      @BitesizedGenius  10 months ago

      Hey, I've got a video about this on my list, so keep an eye out!

  • @user-qe4ge3jv6c
    @user-qe4ge3jv6c 3 months ago

    This is a really good tutorial, great! I have a problem: the AI keeps creating a duplicated character, like a "twin", and I can't resolve it by writing the negative prompt.
    Please help!

    • @BitesizedGenius
      @BitesizedGenius  3 months ago

      Hard to say based on that information. What checkpoint are you using, and do you get this issue when copying the generation data from the checkpoint's example image?

  • @motionpablo
    @motionpablo 2 months ago

    Hi! Does anyone know how to fix an error when trying to use the X/Y/Z plot with Prompt S/R? It doesn't give me the different images; instead it reads the following error: RuntimeError: Prompt S/R did not find Batman in prompt or negative prompt. Help would be greatly appreciated. Thank you!

    • @BitesizedGenius
      @BitesizedGenius  2 months ago +1

      Hey, that means that your prompt in XYZ cannot be found in the prompt box, so check the wording, as I think it's case sensitive, and ensure your commas are in the right place to signify the separation of prompts.
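
For reference, Prompt S/R (search/replace) in the X/Y/Z plot takes a comma-separated list whose first value must appear verbatim in the prompt; each run substitutes the next value for it. A hypothetical setup:

```text
Prompt:      Batman standing on a rooftop at night, cinematic lighting
S/R values:  Batman, Superman, Wonder Woman
```

If the first value (here "Batman") is not found exactly, including case, the script raises the RuntimeError quoted in the question.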

  • @marhensa
    @marhensa 10 months ago +2

    BadDream and UnrealisticDream are negative embeddings, not just simple negative prompts; I think you didn't explain that (or am I missing it?). Without properly installing those two negative embeddings in \stable-diffusion-webui\embeddings\, those words have no real meaning and aren't processed properly. Those two files are not LoRAs either; they're embeddings, which can be downloaded on civitai /models/72437 (I can't put the real URL because it always gets deleted). From the video, in the result log below the generated images, I saw those two words were not processed as embeddings as intended. If those negative embeddings are used properly, there's a log like this:
    medium shot of a woman with full lips, golden eyes and a white crop top, long black hair, snowy weather, octane render
    Negative prompt: BadDream, (UnrealisticDream:1.2), (NSFW)
    Used embeddings: BadDream [48d0], UnrealisticDream [5f55]

    • @BitesizedGenius
      @BitesizedGenius  10 months ago

      Yes, this is something I missed, and I plan to make a video explaining embeddings, LoRAs, etc. soon, as it's not as simple as copying the generation data. That should then give some context for future videos, without explaining it each time.

  • @ImInTheMaking
    @ImInTheMaking 10 months ago

    The formatting of the prompt is very helpful. I tend to get rattled whenever I'm generating a prompt because I'm overwhelmed by all the settings A1111 has.

    • @BitesizedGenius
      @BitesizedGenius  10 months ago

      Thanks, hopefully this video makes you less rattled. I also did a video on TXT2IMG and IMG2IMG where the settings are explained.

    • @ImInTheMaking
      @ImInTheMaking 10 months ago

      @@BitesizedGenius I've also watched those and added them to my SD playlist. I'm eager to grasp the prompt techniques so I won't have any trouble when I have an idea.

    • @AIchemywithXerophayze-jt1gg
      @AIchemywithXerophayze-jt1gg 10 months ago +1

      The most frustrating thing for me about generating prompts was how long it took to do that for each prompt. And a lot of the time I wanted to generate multiple prompts with different settings in each one. I know there are extensions in Stable Diffusion that kind of help with that, but I developed a prompt generator for ChatGPT that structures my prompts, and I can ask it whatever I want. You should check it out sometime.

  • @user-gq2bq3zf1f
    @user-gq2bq3zf1f 5 months ago

    Thanks as always! I have an off-topic question: is there any way to make Stable Diffusion show not people but only clothes? I put no human, no girl, etc. in the negative prompt and it still shows people.

    • @BitesizedGenius
      @BitesizedGenius  5 months ago

      So that will likely depend on your checkpoint, and most are trained on people. You can either try to find a checkpoint geared towards clothes, train your own, or generate people with clothes and then remove the person through ControlNet, photo editing, etc.

  • @cyberprompt
    @cyberprompt 10 months ago +3

    vertical is now horizontal. :/

  • @aa-nw5mq
    @aa-nw5mq 3 months ago

    How can I check the images used to train the checkpoint?

    • @BitesizedGenius
      @BitesizedGenius  3 months ago

      Images from my video are available on Patreon

  • @TheMagista88
    @TheMagista88 4 months ago

    Hi, I am new to SD, and I tried those prompts yesterday, but my results were nowhere near as good as yours even though I used everything you used in your prompt. Can you help? Thank you.

    • @BitesizedGenius
      @BitesizedGenius  4 months ago

      Are you using a good checkpoint and not the default one installed?

    • @TheMagista88
      @TheMagista88 4 months ago

      @@BitesizedGenius i figured that out and ima try it. Thanks! I am a total newb

    • @TheMagista88
      @TheMagista88 4 months ago

      @@BitesizedGenius checkpoints are what "models" are called right?

    • @BitesizedGenius
      @BitesizedGenius  4 months ago +1

      @@TheMagista88 No worries, check out my other videos, some gems in there for beginners :)

    • @TheMagista88
      @TheMagista88 4 months ago

      @@BitesizedGenius def subbed. Thanks ;)

  • @vallejomach6721
    @vallejomach6721 10 months ago

    Token limit - 'isn't that important' - how? Why? If 75 is the limit, what happens to the rest? How are they used? Randomly selected from what's left, or what happens to them? I've seen people say that 75 is the limit, but on the other hand many example prompts from places like civitai are a mile long... especially negative prompts. I'd be interested to know how those long prompts are used by SD.

    • @AIchemywithXerophayze-jt1gg
      @AIchemywithXerophayze-jt1gg 10 months ago

      With Stable Diffusion there really is no token limit. It reaches that limit and then just resubmits more information. The problem is that the longer the prompt, the less likely it is to incorporate details later in the prompt without emphasis.

    • @BitesizedGenius
      @BitesizedGenius  10 months ago +1

      Hey, I'll do a separate video breaking it down in future and showing how it can be useful, but information is scarce, so I just gave the example most commonly found online.

  • @spagettification
    @spagettification 1 month ago

    Where can I download "Lora add details"?

    • @NitroGlace
      @NitroGlace 9 days ago

      civitai detail tweaker

  • @stevo728822
    @stevo728822 9 months ago +1

    SD works well with easy subjects such as women's faces. But when you apply it to something that doesn't already exist, it really struggles.

  • @frixid31
    @frixid31 10 months ago

    i just understood prompt from file/text XD

    • @BitesizedGenius
      @BitesizedGenius  10 months ago

      It's self-explanatory, but it also doesn't explain much 🤣

    • @frixid31
      @frixid31 10 months ago

      @@BitesizedGenius 🤣

  • @chibuikeoffu
    @chibuikeoffu 5 months ago

    Very nice video. How do I remove the background bokeh from a human subject?

    • @BitesizedGenius
      @BitesizedGenius  5 months ago +1

      Try putting bokeh into the negative prompt, or writing sharp in the positive?

    • @chibuikeoffu
      @chibuikeoffu 5 months ago

      @BitesizedGenius Didn't work, but thanks. I use Fooocus btw. I put bokeh, depth of field in negative and small aperture in positive, still no dice.

  • @TheShafted178
    @TheShafted178 4 months ago

    what's this software tho?

    • @BitesizedGenius
      @BitesizedGenius  4 months ago

      ruclips.net/video/WJUsETmv_r8/видео.html

    • @TheShafted178
      @TheShafted178 4 months ago

      @@BitesizedGenius thanks :D

  • @deadringer-cultofdeathratt8813
    @deadringer-cultofdeathratt8813 9 months ago +1

    Nope. I don't have enough brain cells for this. I'm going back to Midjourney 💀💀💀

  • @ohokcool
    @ohokcool 28 days ago

    In the prompt weighting section you didn't put colons, so it didn't interpret the weights the way you intended.

  • @saperos
    @saperos 2 months ago

    Like for the Manbearpig reference ;)

  • @zacharysherry2910
    @zacharysherry2910 8 months ago

    "Yee Hee!"

  • @lioncrud9096
    @lioncrud9096 9 months ago

    So confused as to why the StabilityAI team doesn't just explain this stuff, instead of people having to theorize and endlessly test to find out how best to use their tools.

  • @NegociosLend
    @NegociosLend 8 months ago

  • @mrplease66
    @mrplease66 2 months ago

    vertical line

  • @tranceemerson8325
    @tranceemerson8325 5 months ago

    BadDream is a negative embedding, and you need to have that file before it will even do anything.

  • @RareTechniques
    @RareTechniques 4 months ago

    This is sub-worthy content for sure.

  • @qwasd0r
    @qwasd0r 4 months ago

    Mine still look like nightmares, lol

  • @Gen-mhi-lt5oo
    @Gen-mhi-lt5oo 10 months ago

    You should get a pop filter on your microphone.

    • @BitesizedGenius
      @BitesizedGenius  10 months ago

      I have one on my microphone, but I'm clearly doing something wrong with my audio setup, so I'll look into it 😊

  • @davedm6345
    @davedm6345 10 months ago

    How do you save prompts? I found a button myself, down to the right of Skip, and it was very hard to find; none of the beginner tutorials talk about it. The UI of Stable Diffusion is incredibly bad, like wtf, isn't the save button always in the upper-left corner!? And the model selector in the upper left is supposedly so intuitive to use, but when I changed the model nothing happened and Stable Diffusion didn't work at all lol. Sorry, but it's such a stupid UI.

    • @BitesizedGenius
      @BitesizedGenius  10 months ago

      Hey, so I've done a video on the TXT2IMG and IMG2IMG UI, which may help, and I've got a video on UI settings on the way.
      You can save prompts using the Save Style button under the Generate button. Check my video on where files are saved for useful locations
      😁

    • @DejayClayton
      @DejayClayton 9 months ago

      SD Web UI could definitely benefit from some serious usability improvements, such as saving and loading of prompt settings beyond what is currently supported by "Save style". However, Web UI is so amazing that no one is complaining. I plan to look into ComfyUI to see if that solves some of these usability issues.

  • @KINGLIFERISM
    @KINGLIFERISM 8 months ago

    Why do you speak in questions?

  • @tdreamgmail
    @tdreamgmail 5 months ago

    What gpu are you running?

    • @BitesizedGenius
      @BitesizedGenius  5 months ago

      For this video it was an NVIDIA 1060 6GB, but now I'm using an NVIDIA 4080 16GB.

  • @Maker_of_Creation
    @Maker_of_Creation 1 month ago

    ai photo movie
    ruclips.net/video/uP04emczDi8/видео.html