Stable Diffusion Prompt Guide

  • Published: Sep 30, 2024

Comments • 99

  • @pixaroma
    @pixaroma  4 months ago +3

    Useful Resources
    How to install Stable Diffusion Forge UI on Windows (Nvidia GPU)
    ruclips.net/video/zqgKj9yexMY/видео.html
    Settings and Tips and Tricks for Forge UI
    ruclips.net/video/zqgKj9yexMY/видео.html
    How to get 260+ Free Art Styles for Stable Diffusion A1111 and Forge UI (The styles.csv download link is on the pinned comment of that video)
    ruclips.net/video/UyBnkojQdtU/видео.html
    In this video I am using the model: Juggernaut X RunDiffusion (version 10) from CivitAI
    civitai.com/models/133005?modelVersionId=456194
    Download it and place it in the folder webui\models\Stable-diffusion
    Outpaint Tutorial for Forge UI
    ruclips.net/video/5_dOevJRzEI/видео.html
    Inpaint Tutorial for Forge UI
    ruclips.net/video/srvek4ucH-A/видео.html
    If you have any questions you can post them in Pixaroma Community Group facebook.com/groups/pixaromacrafts/
    or Pixaroma Discord Server discord.gg/a8ZM7Qtsqq

  • @federico68
    @federico68 4 months ago +38

    Finally a pro. No bs, no intro, straight to the point. Subscribed

    • @Showbiz_CH
      @Showbiz_CH 4 months ago +3

      I love that. Finally a YouTuber that respects my time. Instantly subscribed

  • @RedRojo210
    @RedRojo210 4 months ago +3

    Love it, learned a lot of new tricks. What specs are you running (GPU, processor, RAM)? Yours generates pretty fast.

    • @pixaroma
      @pixaroma  4 months ago +3

      I speed things up in the video, but it still goes pretty fast usually. I have this:
      - CPU Intel Core i9-13900KF (3.0GHz, 36MB, LGA1700) box
      - GPU GIGABYTE AORUS GeForce RTX 4090 MASTER 24GB GDDR6X 384-bit
      - Motherboard GIGABYTE Z790 UD LGA 1700 Intel Socket LGA 1700
      - 128 GB RAM Corsair Vengeance, DIMM, DDR5, 64GB (4x32gb), CL40, 5200Mhz
      - SSD Samsung 980 PRO, 2TB, M.2
      - SSD WD Blue, 2TB, M2 2280
      - Case ASUS TUF Gaming GT501 White Edition, Mid-Tower, White
      - Cooler Procesor Corsair iCUE H150i ELITE CAPELLIX Liquid
      - PSU Gigabyte AORUS P1200W 80+ PLATINUM MODULAR, 1200W
      - Microsoft Windows 11 Pro 32-bit/64-bit English USB P2, Retail

  • @TheCynicalNihilist
    @TheCynicalNihilist 4 months ago +3

    What would you say is the best model for SD, and its settings? I've downloaded 1000 over the last year and tried merging a few; I'm always on the lookout for the "perfect" model that has Midjourney quality for both nsfw/sfw photos, mostly portraits but also creative mockups. While I have a few go-tos, it can still be frustrating going back and forth just to get one that can handle what you want it to do. I just want to get to a point where I turn it on, have all the settings saved just the way I want, and prompt away without the back and forth.

    • @pixaroma
      @pixaroma  4 months ago +2

      In the last months I have been using only the Juggernaut XL models. Right now I use the latest version, Juggernaut_X_Rundiffusion10 (civitai.com/models/133005?modelVersionId=456194), but older versions from 7 to 9 also work ok; the latest usually has more training. I like that they always give the settings you can use in the description, and it is good as a general model because it can do anything. It is also the highest-rated SDXL model on CivitAI in the last month.
      Recommended settings:
      Res: 832*1216 (for portrait, but any SDXL resolution will work fine) - I usually use something between 1024 and 1216, whatever fits the ratio I need.
      Sampler: DPM++ 2M Karras
      Steps: 30-40
      CFG: 3-7 (less is a bit more realistic)
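For readers scripting SDXL outside the UI, these recommended settings map onto code fairly directly. A minimal sketch, assuming the diffusers library and a hypothetical local checkpoint file name; the small validator just encodes the "any SDXL resolution" rule of thumb:

```python
# Sketch: the recommended Juggernaut XL settings as a Python dict, plus how
# they would map onto a diffusers text-to-image call. The pipeline lines are
# commented out because they require downloading the checkpoint.

RECOMMENDED = {
    "width": 832,                # portrait; any valid SDXL resolution works
    "height": 1216,
    "num_inference_steps": 35,   # middle of the 30-40 range
    "guidance_scale": 5.0,       # CFG 3-7; lower looks a bit more realistic
    "sampler": "DPM++ 2M Karras",
}

def is_sdxl_resolution(w: int, h: int) -> bool:
    """SDXL checkpoints expect dimensions divisible by 64
    and roughly one megapixel of total area."""
    return w % 64 == 0 and h % 64 == 0 and 0.8e6 <= w * h <= 1.3e6

assert is_sdxl_resolution(RECOMMENDED["width"], RECOMMENDED["height"])

# With diffusers (checkpoint file name is illustrative):
# from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler
# pipe = StableDiffusionXLPipeline.from_single_file("juggernautX.safetensors")
# pipe.scheduler = DPMSolverMultistepScheduler.from_config(
#     pipe.scheduler.config, use_karras_sigmas=True)  # "DPM++ 2M Karras"
# image = pipe("portrait of a woman", width=832, height=1216,
#              num_inference_steps=35, guidance_scale=5.0).images[0]
```

CFG corresponds to `guidance_scale`, and the "DPM++ 2M Karras" sampler corresponds to diffusers' multistep DPM-Solver scheduler with Karras sigmas enabled.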

    • @alecubudulecu
    @alecubudulecu 4 months ago

    Unfortunately there's no such thing as a perfect do-it-all model. Midjourney actively juggles multiple models as it renders.
    If you want to replicate Midjourney, the closest you can get is using ComfyUI with Python scripts, dynamically choosing models based on image context and CLIP, along with IPAdapter.
    There are tons of models, but each does specific things well. The best you can hope for is "decent at everything" or AMAZING at specific things.
    Juggernaut is good for realistic fantasy images.
    Pony is versatile for fantasy art.

  • @AnudeepKolluri
    @AnudeepKolluri 4 months ago +1

    Create a Discord server; no one uses Facebook these days (at least I don't).
    With Midjourney and games seeing a rise, I'm confident the upcoming generation has Discord accounts.

    • @pixaroma
      @pixaroma  4 months ago

      I created one today, but I don't have much experience with it; I'll work on it over the next few days
      discord.gg/a8ZM7Qtsqq

  • @tapirko1
    @tapirko1 4 months ago +5

    Great video guide with clear explanations and usability.

    • @pixaroma
      @pixaroma  4 months ago +1

      Thank you so much for your support, I really appreciate it ☺️

  • @ScytheSalinas
    @ScytheSalinas 6 hours ago

    Good video, found this all out the hard way lol
    Subscribed.

  • @n3tw0rk_n3k0
    @n3tw0rk_n3k0 19 days ago

    Example of one of my prompts:
    Cinematic photo of a Celtic woman, with pale skin, fiery red hair cascading over her shoulders, and bright blue eyes. She wears a woolen cloak fastened with a bronze brooch and is adorned with silver bracelets. Behind her, misty forests and ancient standing stones rise in the background, ultra realistic, Shot with a Nikon F3 and a 35mm ƒ2 lens, using Kodak Portra 400 film stock
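Prompts like this one follow a repeatable skeleton: subject, physical features, wardrobe, background, style, camera. A small helper (the field names are my own convention, not anything Stable Diffusion requires) keeps the pieces independently swappable:

```python
# Sketch: assemble a cinematic-photo prompt from labeled parts so each piece
# (subject, wardrobe, background, camera) can be varied on its own.

def build_prompt(subject, features, wardrobe, background, style, camera):
    parts = [
        f"Cinematic photo of {subject}",
        features,
        wardrobe,
        background,
        style,
        camera,
    ]
    # Skip empty sections so optional parts can be left out.
    return ", ".join(p for p in parts if p)

prompt = build_prompt(
    subject="a Celtic woman",
    features="pale skin, fiery red hair, bright blue eyes",
    wardrobe="woolen cloak fastened with a bronze brooch",
    background="misty forests and ancient standing stones",
    style="ultra realistic",
    camera="shot on a Nikon F3, 35mm f2 lens, Kodak Portra 400",
)
print(prompt)
```

Swapping only the `background` or `camera` argument then gives controlled variations of the same character.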

  • @nomorejustice
    @nomorejustice 4 months ago +1

    Hi man, I'm your new subscriber, may I ask something? I just bought a laptop with an RTX 3070 with 8 GB VRAM. I want to install Stable Diffusion Forge, but I'm still afraid and doubtful that there will be a virus, and it seems like using the GPU for SD can really make it heat up. I'm asking for your opinion on this as I'm still new to it, thanks in advance! Success always for you!

    • @pixaroma
      @pixaroma  4 months ago +1

      Hmm, I never heard of a problem like that. As for safety, when you download models from the internet make sure it is a safetensors file instead of ckpt. I have an older computer with 6gb of VRAM and it still works; I never had a problem. If you are worried you can test it: run a game and run Stable Diffusion and see what temperature you get, but the video card should handle this kind of load.
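The safetensors-over-ckpt advice has a concrete reason: .ckpt files are Python pickles, which can execute arbitrary code when loaded, while .safetensors files are a plain tensor container with no code path. A minimal pre-flight check sketch (the extension lists are a common convention, not exhaustive):

```python
# Sketch: classify a downloaded model file by load risk.
# Pickle-based formats (.ckpt and friends) can run arbitrary code on load;
# .safetensors stores only raw tensors and metadata.

from pathlib import Path

RISKY = {".ckpt", ".pt", ".pth", ".bin"}   # pickle-based formats
SAFE = {".safetensors"}

def model_load_risk(path: str) -> str:
    ext = Path(path).suffix.lower()
    if ext in SAFE:
        return "safe"        # tensors only, no executable payload
    if ext in RISKY:
        return "risky"       # pickle; only load from trusted sources
    return "unknown"

print(model_load_risk("webui/models/Stable-diffusion/juggernautX.safetensors"))  # safe
```

This only checks the container format; it says nothing about the model's content, so downloading from a reputable source still matters.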

    • @nomorejustice
      @nomorejustice 4 months ago +1

      @@pixaroma Thanks for your opinion man, really appreciate it! This really helped me in making a decision 🙏🙏🙏

  • @youcefamarache4801
    @youcefamarache4801 4 months ago +1

    Informative, as always. Thank you.
    Can you tell me what the minimum hardware requirements are to run Forge WebUI, please?

    • @pixaroma
      @pixaroma  4 months ago +1

      Windows operating system and an Nvidia card with at least 4gb of VRAM to run older models like 1.5; you need more, like 6-8gb, to run the latest SDXL models. I got it to work on 6gb of VRAM but didn't test it on 4gb.

    • @youcefamarache4801
      @youcefamarache4801 4 months ago

      @@pixaroma Thank you for your time

  • @officially_s
    @officially_s 5 days ago

    After a long search, finally an amazing video.

  • @theconstantgardene
    @theconstantgardene 1 day ago

    Nice video! Why is it so difficult to find any tutorial that shows how to use Stable Diffusion to add text to an existing image? Can you help?

    • @pixaroma
      @pixaroma  1 day ago

      I don't have one for Forge, but I will do one for ComfyUI next week. Usually people just generate with AI and use other tools like Photoshop for adding text.

  • @davidclode3601
    @davidclode3601 18 days ago

    Great, helpful video, thank you.

  • @Jojo2
    @Jojo2 4 months ago +1

    Would you be able to make a longer video going over how to use all the built-in stuff Forge comes with? (The whole area with LayerDiffuse, ControlNet, dynamic thresholding, etc.)

    • @pixaroma
      @pixaroma  4 months ago

      It is too much information for one video, but it is split across multiple videos. For most of the stuff check ruclips.net/video/zqgKj9yexMY/видео.html ruclips.net/video/q5MgWzZdq9s/видео.html ruclips.net/video/c03vp7JsCI8/видео.html ruclips.net/video/5_dOevJRzEI/видео.html ruclips.net/video/srvek4ucH-A/видео.html As for dynamic thresholding, I didn't find it so useful because it kind of changes the colors. For ControlNet, SDXL models seem not as good as the v1.5 models, so I mostly use the canny model; you can see it in my sketch video or cartoon videos.

  • @-AiViX-
    @-AiViX- 18 days ago

    Thank you for the video, quick question: when I try to create an x/y/z script to generate multiple photos like you, for example with different STEPs, I do get several photos generated, but I don’t have the captions to identify which photo contains which setting. Also, at the end, I don’t see all the photos lined up for comparison at a glance. I only see one photo, and I have to go into my folder to see the others. However, I have enabled "draw legend." Is there something I need to adjust in the settings? Thank you very much for your help.

    • @pixaroma
      @pixaroma  18 days ago +1

      I just tested it in the latest version: I selected the xyz plot, for X type I put steps, for X values I put 20,21,22,23, and I enabled draw legends. When I generated, on the interface I get a single image, but in the output folder I get 4 different images without the legend text on them. On the interface I can open that big image with the legend and save it from there; not sure why it is not saved with the rest. If you click on that long image with the legend to open it, in the top left corner there is a save button that will save the big image to the folder, or you can just right-click and "save image as" and put it where you want.
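If the legend image ever refuses to save, the same labeled comparison strip can be rebuilt from the individual output files. A sketch assuming Pillow is installed; the solid-color images below are stand-ins for the files Forge writes to the output folder:

```python
# Sketch: stitch per-step output images into one labeled comparison strip,
# similar to the xyz-plot legend image. Uses Pillow (PIL).

from PIL import Image, ImageDraw

LABEL_H = 24  # vertical space above each image for its caption

def labeled_strip(images, labels):
    # Assumes all images share the size of the first one.
    w, h = images[0].size
    strip = Image.new("RGB", (w * len(images), h + LABEL_H), "white")
    draw = ImageDraw.Draw(strip)
    for i, (img, label) in enumerate(zip(images, labels)):
        draw.text((i * w + 4, 4), label, fill="black")  # e.g. "steps: 20"
        strip.paste(img, (i * w, LABEL_H))
    return strip

steps = [20, 21, 22, 23]
# Stand-ins for Image.open(...) on the real output files:
fakes = [Image.new("RGB", (64, 64), (40 * i, 80, 120)) for i in range(4)]
grid = labeled_strip(fakes, [f"steps: {s}" for s in steps])
grid.save("xyz_steps_comparison.png")
```

Swapping `fakes` for `Image.open()` calls on the real generated files gives the same at-a-glance comparison the draw-legend option produces.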

  • @ZeroCool22
    @ZeroCool22 4 months ago +1

    Could you make a complete guide/tutorial about "Regional Prompter" extension for AUTO and how to get 2 characters interacting? Thx in advance.

    • @pixaroma
      @pixaroma  4 months ago

      I didn't play with it too much yet, I am still waiting for SD3; maybe it can do things better

    • @KAVaviation
      @KAVaviation 18 days ago

      @@pixaroma Can you make a video about making short animations? Like the SVD thing?

    • @pixaroma
      @pixaroma  17 days ago

      @@KAVaviation I have an SVD video but it is for the older version of Forge. Right now there are not many good local video models; I am waiting for a good one, maybe the guys who did Flux will make a nice one for video. Until then I am using online generators like KlingAI and others

  • @cash5627
    @cash5627 25 days ago

    I'm having difficulty getting two subjects to interact. As an example, I want two characters to simply "shake hands"; what ensues instead is a disembodied horror show. Advice?

    • @pixaroma
      @pixaroma  25 days ago

      It can be hard sometimes; I either do inpainting or I combine them in Photoshop and do an image-to-image pass to blend it better. The Flux model is definitely better at that, but it depends on how you prompt. Sometimes you get lucky if you add a lot of detail, so just saying "shake hands" might not be enough; you can ask ChatGPT to describe it in better detail. With Flux you can do something like: Two characters, one a tall, broad-shouldered man in a formal black suit with neatly combed hair and sharp features, and the other a slender woman in a stylish business outfit with her hair in a neat bun, stand facing each other in a softly lit office space, their hands extended in a firm yet respectful handshake, the man's confident grip meeting the woman's graceful, slightly forward-leaning posture, as both exchange subtle expressions of calm professionalism, signaling agreement or partnership in this formal yet amicable interaction.

  • @kdzvocalcovers3516
    @kdzvocalcovers3516 4 months ago

    Great vid... what causes anatomical mutations, and how do you address this formidable non-intelligent conundrum?

    • @pixaroma
      @pixaroma  4 months ago

      In my opinion the models need more training. They have problems with anything that can have a lot of combinations: the fingers on a hand can be in so many positions, and depending on the angle a hand can look like it has 4 fingers or 3; hands hold objects, and each finger bends at multiple joints, and I think that is what confuses the model. Plus, when the model is trained they try to censor it, so it misses some things about how anatomy actually looks. It tries to find patterns, and the more training it has, the better images it will create, with fewer mutations. It also doesn't know how to count very well. So I think they need to train it in more ways, like making it understand how things look in 3D from different angles, and probably some physics: how gravity affects things, how objects interact, collisions, etc. Probably in the future they will figure out how to do that.

  • @sobreaver
    @sobreaver 2 months ago

    ooo k ay ! Next step, making Weird Science with this thumbnail o0

    • @pixaroma
      @pixaroma  2 months ago

      😂🧪🧬👩‍🔬

  • @JarppaGuru
    @JarppaGuru 4 months ago

    0:30 Yes, that is the trained image. Nothing was "generated"; it is copy-paste from other trained data. It will not do a spaceship if it was not trained on one; it will make a spaceship-looking toaster because it was trained on both and can combine them. Nothing is generated from 0%; it is not intelligence.
    This is an image, this is a caption: if you ask for something similar to a caption it was trained on, you get that image, or one combined with others that share it. It knows nothing beyond what it was trained on, and even then it doesn't "know" anything; it is programmed to do this.
    And it is sold as the AI we imagine from movies, when really it is just "this is this, and the answer is this". We already knew the answer; we trained it xD
    Same as good old Jarvis

    • @pixaroma
      @pixaroma  4 months ago

      In my opinion it can only do what it was trained on, but the billions of combinations for each different prompt are what make it interesting; it's like having unlimited variations of something. I could not do a job I was not trained for either: I learn different things, then I make a mix of what I learned. AI can just do those millions of combinations that we don't have enough years in our life to do :) We will see; with more training it will get more advanced

  • @lonewolf-vw9wf
    @lonewolf-vw9wf 4 months ago

    How come your Stable Diffusion behaves like a trained dog? I have everything the same but never get what I want

    • @pixaroma
      @pixaroma  4 months ago

      😂 I don't always get what I want, but with enough tries and the right prompts I get close enough. It depends on the image; there are still things it can't do right no matter what you try

  • @yss7557
    @yss7557 16 days ago

    Hey mate, what specs does your PC have? e.g. GPU

    • @pixaroma
      @pixaroma  16 days ago

      My PC:
      - CPU Intel Core i9-13900KF (3.0GHz, 36MB, LGA1700) box
      - GPU GIGABYTE AORUS GeForce RTX 4090 MASTER 24GB GDDR6X 384-bit
      - Motherboard GIGABYTE Z790 UD LGA 1700 Intel Socket LGA 1700
      - 128 GB RAM Corsair Vengeance, DIMM, DDR5, 64GB (4x32gb), CL40, 5200Mhz
      - SSD Samsung 980 PRO, 2TB, M.2
      - SSD WD Blue, 2TB, M2 2280
      - Case ASUS TUF Gaming GT501 White Edition, Mid-Tower, White
      - Cooler Procesor Corsair iCUE H150i ELITE CAPELLIX Liquid
      - PSU Gigabyte AORUS P1200W 80+ PLATINUM MODULAR, 1200W
      - Microsoft Windows 11 Pro 32-bit/64-bit English USB P2, Retail
      - Wacom Intuos Pro M

  • @BrettArt-Channel
    @BrettArt-Channel 2 months ago

    This is a Goody 💪💪

  • @ALNUDN
    @ALNUDN 2 months ago

    To generate images quicker, do I need a better GPU, CPU, or RAM?

    • @pixaroma
      @pixaroma  2 months ago +2

      A better GPU with more VRAM, preferably an Nvidia RTX series card with more video RAM

  • @jorgennorstrom
    @jorgennorstrom 2 months ago

    How is your generation so fast lol; if I dare go over 612x612 it just stops and dies, even with lower-end models.

    • @pixaroma
      @pixaroma  2 months ago

      You need an Nvidia RTX card with a lot of VRAM; I have an RTX 4090 with 24gb of VRAM. I do a 1024px image in 4-5 seconds, and with a hyper model it can be done in about one second. I do speed up the video so you don't wait, but it is quite fast anyway

    • @jorgennorstrom
      @jorgennorstrom 2 months ago

      @@pixaroma I tried using Forge UI and it is what I needed, fr. I've got an RTX 3070 8gb, and Forge sped it up to 7s per image, so thanks for the tutorial :D

  • @fr4nz51
    @fr4nz51 3 months ago

    What voice software did you use in making this video?

  • @balajikanakasabapathy6998
    @balajikanakasabapathy6998 2 months ago

    Great video. Wish I had found you months ago; it would have saved me a lot of time. Liked and subscribed.

  • @megal0maniac
    @megal0maniac 1 month ago

    Wow wow wow. Fantastic video that doesn't have a goofy voice and those quickly paced captions. Thanks!!

  • @Knightstrikes
    @Knightstrikes 4 months ago

    @pixaroma
    Once again, you knocked it out of the park. You are in the major leagues. :)

  • @easyace4620
    @easyace4620 1 month ago

    This might just be one of the best videos I've learned from, thank you.

  • @brodull1142
    @brodull1142 4 months ago

    Any tips to make SDXL models load faster? SD 1.5 is faster because it still uses the 512 base model, but SDXL takes like 3 times longer. I'm using an RTX 4060 8gb.

    • @pixaroma
      @pixaroma  4 months ago

      SDXL models are usually also larger than 1.5 models, like 3 times larger; maybe that is the cause. I don't have any tips for that; I have an RTX 4090 and didn't notice any difference :) Plus I don't use 1.5 since SDXL appeared; for me 512px is too small an image size

  • @jasonstetsonofficial
    @jasonstetsonofficial 4 months ago +1

    Love it !!

  • @tacji1284
    @tacji1284 4 months ago

    Nice video and great description
    Thanks for your efforts

  • @Maeve472
    @Maeve472 4 months ago

    • @pixaroma
      @pixaroma  4 months ago

      Did you try Juggernaut? I think not all models support inpainting; for Juggernaut XL I saw in the description on CivitAI that they added inpainting, so maybe that is the cause

    • @Maeve472
      @Maeve472 4 months ago

      @@pixaroma I mean, is there a way to use normal models to inpaint? I don't know why, but in Krita AI Diffusion (made by Acly) normal models can inpaint, while in normal Stable Diffusion Forge it is impossible

    • @pixaroma
      @pixaroma  4 months ago

      Sorry, I don't know all the technical details. In Forge UI I used Juggernaut but didn't try others, and I think there are dedicated inpainting models. So if it didn't work, either the model is not compatible or it is a bug in the interface

  • @cstar666
    @cstar666 3 months ago

    Hands and feet *smdh*, hands and feet.

    • @pixaroma
      @pixaroma  3 months ago

      😂 yeah, it has more problems with hands than a finetuned SDXL. We will see what happens; I saw there is a problem with finetuning because of the license, and I'm not sure what features it brings. If not, we stick with SDXL

  • @lokitsar5799
    @lokitsar5799 4 months ago

    I like Forge, but the creator of it has jumped ship. I know there are a couple of branches being worked on, but I just don't see it sticking around long term. I switched back to my combo of Auto, Comfy and Fooocus

    • @pixaroma
      @pixaroma  4 months ago +1

      Yeah, it is missing some updates; we will see what happens long term :)

  • @funsterkeyven
    @funsterkeyven 3 months ago

    Very informative and no nonsense. Subbed and liked!

  • @dreamzdziner8484
    @dreamzdziner8484 4 months ago +1

    Awesome mate!

  • @SumoBundle
    @SumoBundle 4 months ago

    Thank you for the video. Really nice.

  • @TheBlackBaku14
    @TheBlackBaku14 3 months ago

    Very good video, thanks a lot, this is a gold mine

  • @jankvis
    @jankvis 4 months ago

    THX, much appreciated :)

  • @svetlanaLisova666
    @svetlanaLisova666 4 months ago

    Tell me how to make a character but in different poses, for example (plant, put, delight) and so on

    • @pixaroma
      @pixaroma  4 months ago

      Without training a LoRA model it is not so easy, and even with a LoRA it is not perfect. You can also try extensions like ReActor, but those work better with photorealistic images. There are options with ControlNet and IP-Adapter, but I didn't manage to get consistent results with SDXL models; I saw others using them with SD 1.5 models. From the prompt alone it is hard to get right. You can also try inpainting to keep the face or head and change everything else. You can get similar results if you describe accurately how the hair looks, how the character is dressed, and so on; try a few generations and find one that is similar

    • @svetlanaLisova666
      @svetlanaLisova666 4 months ago

      @@pixaroma And did you use ControlNet??

    • @pixaroma
      @pixaroma  4 months ago

      I don't use ControlNet for faces; I tried but didn't get consistent results 😃 There are ComfyUI workflows that work, I saw them online, but I mostly use Forge, and I use ControlNet to get contours and poses and to convert sketches, so mostly the canny model

    • @svetlanaLisova666
      @svetlanaLisova666 4 months ago

      @@pixaroma Which VAE do you use??

    • @pixaroma
      @pixaroma  4 months ago

      I use the automatic VAE :)

  • @KalponicGames
    @KalponicGames 4 months ago

    Hey, I was wondering: is it possible to automate the prompting process in the text field in SD, and if so, how? My best guess is that you use wildcards here

    • @pixaroma
      @pixaroma  4 months ago

      I didn't try any methods; I usually just copy and paste from ChatGPT, because I sometimes use images to get prompts. But I saw there is an extension that lets you add the ChatGPT API, so you would have something like ChatGPT inside Stable Diffusion. You can read more about it, but I didn't test it. github.com/hallatore/stable-diffusion-webui-chatgpt-utilities
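The wildcard idea from the question above can also be sketched in a few lines without any extension: a template with __name__ slots filled randomly from word lists (the __slot__ syntax is borrowed from the popular dynamic-prompts convention; the word lists here are made up):

```python
# Sketch: minimal wildcard expansion for automated prompting.
# Each __slot__ marker is replaced with a random pick from WILDCARDS;
# seeding the RNG makes a batch reproducible.

import random
import re

WILDCARDS = {
    "color": ["crimson", "teal", "golden"],
    "setting": ["misty forest", "neon city street", "desert at dusk"],
}

def expand(template: str, rng: random.Random) -> str:
    # Replace each __name__ with a random entry from its word list.
    return re.sub(
        r"__(\w+)__",
        lambda m: rng.choice(WILDCARDS[m.group(1)]),
        template,
    )

rng = random.Random(42)
template = "portrait of a knight in __color__ armor, __setting__, ultra realistic"
for _ in range(3):
    print(expand(template, rng))
```

Each loop iteration yields a fresh variation of the same base prompt, which is essentially what the wildcard extensions do before handing the text to the generator.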

  • @farhang-n
    @farhang-n 4 months ago

    Thanks a lot 💚💚💚

  • @sobreaver
    @sobreaver 4 months ago

    hmmmm uniforms...

    • @pixaroma
      @pixaroma  4 months ago +1

      Didn't find the word when I did the tutorial 😂

  • @videosfeoscomotucara9038
    @videosfeoscomotucara9038 4 months ago

    Good video, thanks for the information

  • @rafref
    @rafref 2 months ago

    Awesome video, liked

  • @tacoturtle8708
    @tacoturtle8708 1 month ago

    Love these videos

  • @dayspasttv2
    @dayspasttv2 4 months ago

    Thanks for this

  • @datman6266
    @datman6266 2 months ago

    Very good!

  • @aimademerich
    @aimademerich 4 months ago

    Phenomenal

  • @sb6934
    @sb6934 4 months ago

    Thanks!!

  • @XinCool
    @XinCool 3 months ago

    Thank you so much for sharing. Your tutorial series is greatly helpful for starters.