Stable Diffusion Prompt Guide

  • Published: 25 Dec 2024

Comments • 122

  • @pixaroma
    @pixaroma  7 months ago +6

    Useful Resources
    How to install Stable Diffusion Forge UI on Windows (Nvidia GPU)
    ruclips.net/video/zqgKj9yexMY/видео.html
    Settings and Tips and Tricks for Forge UI
    ruclips.net/video/zqgKj9yexMY/видео.html
    How to get 260+ Free Art Styles for Stable Diffusion A1111 and Forge UI (The styles.csv download link is on the pinned comment of that video)
    ruclips.net/video/UyBnkojQdtU/видео.html
    In this video I am using the model Juggernaut X RunDiffusion (version 10) from CivitAI:
    civitai.com/models/133005?modelVersionId=456194
    Download it and place it in the folder webui\models\Stable-diffusion.
    Outpaint Tutorial for Forge UI
    ruclips.net/video/5_dOevJRzEI/видео.html
    Inpaint Tutorial for Forge UI
    ruclips.net/video/srvek4ucH-A/видео.html
    If you have any questions you can post them in Pixaroma Community Group facebook.com/groups/pixaromacrafts/
    or Pixaroma Discord Server discord.gg/a8ZM7Qtsqq
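
The download-and-place step above can be scripted; a minimal sketch, assuming only the webui\models\Stable-diffusion layout named in the comment (file names here are illustrative):

```python
import shutil
from pathlib import Path

def install_checkpoint(downloaded_file: str, webui_root: str) -> Path:
    """Move a downloaded .safetensors checkpoint into the folder Forge scans."""
    src = Path(downloaded_file)
    dest_dir = Path(webui_root) / "models" / "Stable-diffusion"
    dest_dir.mkdir(parents=True, exist_ok=True)  # create the folder if missing
    dest = dest_dir / src.name
    shutil.move(str(src), dest)
    return dest
```

After this, the model appears in the checkpoint dropdown once the UI refreshes.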

  • @federico68
    @federico68 7 months ago +50

    Finally a pro. No bs, no intro, straight to the point. Subscribed

    • @Showbiz_Stuff
      @Showbiz_Stuff 7 months ago +4

      I love that. Finally a YouTuber that respects my time. Instantly subscribed

  • @bryanpoulter4482
    @bryanpoulter4482 17 days ago

    I don't subscribe very often, but for you, oh yea! Fantastic information! Thank you.

  • @WolferAlpha
    @WolferAlpha 22 days ago

    Thank you very much for this immense help... the part where you talk about the order in the prompt, I tested it by changing things around to match this structure and it made a big difference...

  • @tapirko1
    @tapirko1 7 months ago +6

    Great video guide with clear explanations and usability.

    • @pixaroma
      @pixaroma  7 months ago +2

      Thank you so much for your support, I really appreciate it ☺️

  • @stableArtAI
    @stableArtAI 5 days ago

    Great stuff, we discovered these techniques several months ago as we started learning SD.

    • @pixaroma
      @pixaroma  5 days ago

      Yeah, the video is 7 months old, but I used similar stuff even for Midjourney back in the day, or the first DALL-E :)

  • @janjanusek4383
    @janjanusek4383 2 months ago +1

    Just wow, I cannot stop watching 💣

  • @buddypapaluck
    @buddypapaluck 2 months ago

    great explanation and good tips, thank you so much. I can just copy what is said above: no bullshit, no ads, no intro, just straight to the point

  • @megal0maniac
    @megal0maniac 4 months ago

    Wow wow wow. Fantastic video that doesn't have a goofy voice and those quickly paced captions. Thanks!!

  • @easyace4620
    @easyace4620 3 months ago

    this might just be one of the best videos I've learned from, thank you.

  • @johndoe-dj3fj
    @johndoe-dj3fj 2 months ago

    Great video I’m new to stable diffusion and never used a lot of those options!

  • @officially_s
    @officially_s 3 months ago

    After a long search finally an amazing video.

  • @arsletirott
    @arsletirott 2 months ago

    I just wanna say that you're amazing, man

  • @bikesfan
    @bikesfan 2 months ago

    Great video mate! Super informative, and straight to the point!

  • @balajikanakasabapathy6998
    @balajikanakasabapathy6998 5 months ago

    great video. Wish I had found you months ago, it would have saved me a lot of time. Liked and subscribed.

  • @Knightstrikes
    @Knightstrikes 7 months ago

    @pixaroma
    Once again, you knocked it out of the park. You are in the major leagues. :)

  • @tacoturtle8708
    @tacoturtle8708 4 months ago

    Love these videos

  • @funsterkeyven
    @funsterkeyven 6 months ago

    Very informative and no nonsense. Subbed and liked!

  • @XinCool
    @XinCool 6 months ago

    Thank you so much for sharing. Your tutorial series is greatly helpful for beginners.

  • @davidclode3601
    @davidclode3601 3 months ago

    Great, helpful video, thank you.

  • @streetphone4619
    @streetphone4619 2 months ago

    Excellent video. Glad I watched it. Liked and Subbed.

    • @pixaroma
      @pixaroma  2 months ago

      Thank you ☺️

  • @jasonstetsonofficial
    @jasonstetsonofficial 7 months ago +1

    Love it !!

  • @n3tw0rk_n3k0
    @n3tw0rk_n3k0 3 months ago

    Example of one of my prompts:
    Cinematic photo of a Celtic woman, with pale skin, fiery red hair cascading over her shoulders, and bright blue eyes. She wears a woolen cloak fastened with a bronze brooch and is adorned with silver bracelets. Behind her, misty forests and ancient standing stones rise in the background, ultra realistic, Shot with a Nikon F3 and a 35mm ƒ2 lens, using Kodak Portra 400 film stock
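
A prompt like the one above follows the subject-then-details-then-setting-then-style-then-camera order the video recommends; a small helper can keep that order consistent (the function and parameter names are purely illustrative, not part of any tool):

```python
# Assemble a prompt in the order that tends to matter most to SDXL models:
# subject first, then appearance details, setting, style, and camera notes.
def build_prompt(subject, details=(), setting="", style="", camera=""):
    parts = [subject, *details]
    parts += [field for field in (setting, style, camera) if field]
    return ", ".join(parts)

prompt = build_prompt(
    "Cinematic photo of a Celtic woman",
    details=("pale skin", "fiery red hair", "bright blue eyes"),
    setting="misty forests and ancient standing stones in the background",
    style="ultra realistic",
    camera="shot with a Nikon F3, 35mm f2 lens, Kodak Portra 400",
)
```

Changing only one field at a time makes it easy to see which part of the prompt moved the result.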

  • @theconstantgardene
    @theconstantgardene 2 months ago +1

    Nice video! Why is it so difficult to find any tutorial that shows how to use Stable Diffusion to add text to an existing image? Can you help?

    • @pixaroma
      @pixaroma  2 months ago +1

      I don't have one for Forge, but for ComfyUI I will do one next week. Usually people just generate with AI and use other tools like Photoshop for adding text

  • @rafref
    @rafref 5 months ago

    Awesome video, liked

  • @TheBlackBaku14
    @TheBlackBaku14 6 months ago

    very good video, thanks a lot this is a gold mine

  • @RedRojo210
    @RedRojo210 7 months ago +3

    love it, learned a lot of new tricks. What specs are you running (GPU, processor, RAM)? Yours generates pretty fast.

    • @pixaroma
      @pixaroma  7 months ago +4

      I speed things up in the video, but it still goes pretty fast usually. I have this:
      - CPU Intel Core i9-13900KF (3.0GHz, 36MB, LGA1700) box
      - GPU GIGABYTE AORUS GeForce RTX 4090 MASTER 24GB GDDR6X 384-bit
      - Motherboard GIGABYTE Z790 UD LGA 1700 Intel Socket LGA 1700
      - 128 GB RAM Corsair Vengeance, DIMM, DDR5, 64GB (4x32gb), CL40, 5200Mhz
      - SSD Samsung 980 PRO, 2TB, M.2
      - SSD WD Blue, 2TB, M2 2280
      - Case ASUS TUF Gaming GT501 White Edition, Mid-Tower, White
      - CPU Cooler Corsair iCUE H150i ELITE CAPELLIX Liquid
      - PSU Gigabyte AORUS P1200W 80+ PLATINUM MODULAR, 1200W
      - Microsoft Windows 11 Pro 32-bit/64-bit English USB P2, Retail

  • @datman6266
    @datman6266 5 months ago

    Very good!

  • @ScytheSalinas
    @ScytheSalinas 2 months ago

    Good video, found this all out the hard way lol
    subscribed.

  • @tacji1284
    @tacji1284 7 months ago

    Nice video and great description
    Thanks for your efforts

  • @Jojo2
    @Jojo2 7 months ago +1

    Would you be able to make a longer video going over how to use all the built in stuff forge comes with? (The whole area with LayerDiffuse, controlnet, dynamic thresholding, etc)

    • @pixaroma
      @pixaroma  7 months ago

      It's too much information for one video, but it is split across multiple videos for most of the stuff; check ruclips.net/video/zqgKj9yexMY/видео.html ruclips.net/video/q5MgWzZdq9s/видео.html ruclips.net/video/c03vp7JsCI8/видео.html ruclips.net/video/5_dOevJRzEI/видео.html ruclips.net/video/srvek4ucH-A/видео.html As for dynamic thresholding, I didn't find it so useful because it kind of changes the colors. For ControlNet, SDXL models don't seem as good as the v1.5 models, so I mostly use the canny model; you can see it in my sketch video or cartoon videos.

  • @dreamzdziner8484
    @dreamzdziner8484 7 months ago +1

    Awesome mate!

  • @TheCynicalNihilist
    @TheCynicalNihilist 7 months ago +3

    what would you say is the best model for SD and its settings? I've downloaded 1000 over the last year and tried merging a few; I'm always on the lookout for the "perfect" model that has Midjourney quality for both nsfw/sfw photos, mostly portraits, but also creative mockups as well. While I have a few go-tos, it can still be frustrating going back and forth just to get one that can handle what you want it to do. I just want to get to a point where I turn it on, have all the settings saved just the way I want, and prompt away without the back and forth.

    • @pixaroma
      @pixaroma  7 months ago +2

      In the last months I have been using only the Juggernaut XL models. Right now I am using the latest version, Juggernaut_X_Rundiffusion10 civitai.com/models/133005?modelVersionId=456194 but older versions from 7 to 9 also work ok; the latest usually has more training. I like that they always give the settings you can use in the description, and it is good as a general model because it can do anything. It is also the highest-rated SDXL model of the last month on CivitAI.
      Recommended settings:
      Res: 832*1216 (for portrait, but any SDXL resolution will work fine) - I usually just use between 1024 and 1216, whatever fits the ratio I need.
      Sampler: DPM++ 2M Karras
      Steps: 30-40
      CFG: 3-7 (less is a bit more realistic)
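
Those recommended settings can also be captured as a txt2img payload; a sketch using the field names of the public A1111/Forge web API (treat the endpoint path and payload shape as assumptions and verify against your local /docs page):

```python
# Recommended Juggernaut X settings from the reply above, shaped as a
# payload for the txt2img endpoint of the A1111/Forge web API.
JUGGERNAUT_X = {
    "prompt": "cinematic photo of a lighthouse at dusk",  # example only
    "width": 832,
    "height": 1216,                 # portrait; any SDXL resolution works
    "sampler_name": "DPM++ 2M Karras",
    "steps": 35,                    # recommended range: 30-40
    "cfg_scale": 5,                 # 3-7; lower looks a bit more realistic
}

def validate(settings: dict) -> dict:
    """Sanity-check a payload against the ranges quoted in the comment."""
    assert 30 <= settings["steps"] <= 40, "steps outside recommended range"
    assert 3 <= settings["cfg_scale"] <= 7, "CFG outside recommended range"
    return settings

# Sending it would look roughly like:
# requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=validate(JUGGERNAUT_X))
```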

    • @alecubudulecu
      @alecubudulecu 7 months ago

      Unfortunately there's no such thing as a perfect do-it-all model. Midjourney actively juggles multiple models as it renders.
      If you want to replicate Midjourney, the closest possible approach is ComfyUI with Python scripts, dynamically choosing models based on image context and CLIP, along with IPAdapter.
      There are tons of models but each does specific things well. The best you can hope for is "decent in everything" or AMAZING in specific things.
      Juggernaut is good for realistic fantasy images.
      Pony is versatile for fantasy art.

  • @build.aiagents
    @build.aiagents 7 months ago

    Phenomenal

  • @youcefamarache4801
    @youcefamarache4801 7 months ago +1

    Informative, as always. Thank you.
    Can you tell me the minimum hardware requirements to run Forge WebUI, please?

    • @pixaroma
      @pixaroma  7 months ago +1

      Windows operating system, an Nvidia card with at least 4gb of VRAM to run older models like 1.5; you need more VRAM, like 6-8gb, to run the latest SDXL models. I got it to work on 6gb of VRAM but didn't test it on 4gb

    • @youcefamarache4801
      @youcefamarache4801 7 months ago

      @@pixaroma Thank you for your time

  • @-AiViX-
    @-AiViX- 3 months ago

    Thank you for the video, quick question: when I try to create an x/y/z script to generate multiple photos like you, for example with different STEPs, I do get several photos generated, but I don’t have the captions to identify which photo contains which setting. Also, at the end, I don’t see all the photos lined up for comparison at a glance. I only see one photo, and I have to go into my folder to see the others. However, I have enabled "draw legend." Is there something I need to adjust in the settings? Thank you very much for your help.

    • @pixaroma
      @pixaroma  3 months ago +1

      I just tested in the latest version: I selected the XYZ plot, for X type I put steps, for X values I put 20,21,22,23, and I enabled draw legends. When I generate, on the interface I get a single image, but in the output folder I get 4 different images without the legend text on them. On the interface I can open that big image with the legend and save it from there; I am not sure why it is not saved with the rest, but if you click on that long image with the legend to open it, in the top left corner there is a save button that will save the big image to the folder, or you can just right-click and use "save image as" and put it where you want
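
The axis values described above ("20,21,22,23") are expanded by the X/Y/Z plot script into one image per value; a simplified re-implementation of that expansion for illustration (the real script supports more syntax, e.g. bracketed step counts like "1-10 [5]"):

```python
# Expand an X/Y/Z-plot axis string into the per-image values it produces.
# Handles comma-separated lists and simple inclusive ranges only.
def expand_axis(spec: str) -> list[int]:
    values = []
    for token in spec.split(","):
        token = token.strip()
        if "-" in token:                       # range like "20-23"
            lo, hi = (int(part) for part in token.split("-"))
            values.extend(range(lo, hi + 1))
        else:                                  # single value like "20"
            values.append(int(token))
    return values
```

So "20,21,22,23" yields four generations, and the grid image the UI shows is a fifth composite on top of them.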

  • @farhang-n
    @farhang-n 7 months ago

    Thanks a lot 💚💚💚

  • @videosfeoscomotucara9038
    @videosfeoscomotucara9038 6 months ago

    Good video thanks for the information

  • @SumoBundle
    @SumoBundle 7 months ago

    Thank you for the video. Really nice.

  • @dayspasttv2
    @dayspasttv2 7 months ago

    Thanks for this

  • @jankvis
    @jankvis 7 months ago

    THX, much appreciated :)

  • @sb6934
    @sb6934 7 months ago

    Thanks!!

  • @ZeroCool22
    @ZeroCool22 7 months ago +1

    Could you make a complete guide/tutorial about "Regional Prompter" extension for AUTO and how to get 2 characters interacting? Thx in advance.

    • @pixaroma
      @pixaroma  7 months ago

      I didn't play too much with it yet; I am still waiting for SD3, maybe it can do things better

    • @KAVaviation
      @KAVaviation 3 months ago

      @@pixaroma Can you make a video about making short animations? Like the SVD thing?

    • @pixaroma
      @pixaroma  3 months ago

      @@KAVaviation I have an SVD video but it is for the older version of Forge. Right now there are not many good local video models; I am waiting for a good one, maybe the guys who did Flux will make a nice one for video. Until then I am using online generators like KlingAI and others

  • @BrettArt-Channel
    @BrettArt-Channel 5 months ago

    This is a Goody 💪💪

  • @nomorejustice
    @nomorejustice 7 months ago +1

    Hi man, I'm your new subscriber, may I ask something? I just bought a laptop with an RTX 3070 with 8 GB of VRAM and I want to install Stable Diffusion Forge, but I'm still afraid and doubtful that there will be a virus, and it seems like using the GPU for SD can really make the GPU heat up. I'm asking for your opinion on this as I'm still new to this, thanks in advance! Success always for you!

    • @pixaroma
      @pixaroma  7 months ago +1

      Hmm, I never heard of a problem like that. As for safety, when you download models from the internet make sure it is the safetensors extension instead of ckpt. I have one on an older computer with 6gb of VRAM and it still works, and I never had a problem. I guess you can test it if you are afraid: test with a game and test with Stable Diffusion to see how high the temperature gets, but the video card should handle this kind of thing

    • @nomorejustice
      @nomorejustice 7 months ago +1

      @@pixaroma thanks for your opinion man, really appreciate it! This really helped me in making a decision 🙏🙏🙏

  • @fr4nz51
    @fr4nz51 6 months ago

    What voice software did you use in making this video?

    • @pixaroma
      @pixaroma  6 months ago

      VoiceAir Ai

  • @yss7557
    @yss7557 3 months ago

    hey mate, what specs does your PC have? e.g. GPU

    • @pixaroma
      @pixaroma  3 months ago

      My PC:
      - CPU Intel Core i9-13900KF (3.0GHz, 36MB, LGA1700) box
      - GPU GIGABYTE AORUS GeForce RTX 4090 MASTER 24GB GDDR6X 384-bit
      - Motherboard GIGABYTE Z790 UD LGA 1700 Intel Socket LGA 1700
      - 128 GB RAM Corsair Vengeance, DIMM, DDR5, 64GB (4x32gb), CL40, 5200Mhz
      - SSD Samsung 980 PRO, 2TB, M.2
      - SSD WD Blue, 2TB, M2 2280
      - Case ASUS TUF Gaming GT501 White Edition, Mid-Tower, White
      - CPU Cooler Corsair iCUE H150i ELITE CAPELLIX Liquid
      - PSU Gigabyte AORUS P1200W 80+ PLATINUM MODULAR, 1200W
      - Microsoft Windows 11 Pro 32-bit/64-bit English USB P2, Retail
      - Wacom Intuos Pro M

  • @ALNUDN
    @ALNUDN 5 months ago

    to generate the images quicker do I need a better GPU, CPU, RAM?

    • @pixaroma
      @pixaroma  5 months ago +2

      A better GPU with more VRAM, preferably an Nvidia RTX series with more video ram

  • @cash5627
    @cash5627 3 months ago

    I'm having difficulty getting two subjects to interact. As an example I want two characters to simply "shake hands" well what ensues instead is a disembodied horror show. Advice?

    • @pixaroma
      @pixaroma  3 months ago +1

      It can be hard sometimes; I either do inpainting or I combine them in Photoshop and just do an image-to-image pass to blend it better. The Flux model is definitely better at that, but it depends on how you prompt; sometimes you can get lucky if you add a lot of details, so just saying "shake hands" might not be enough. You can ask ChatGPT to describe it in better detail, so with Flux you can do something like: Two characters, one a tall, broad-shouldered man in a formal black suit with neatly combed hair and sharp features, and the other a slender woman in a stylish business outfit with her hair in a neat bun, stand facing each other in a softly lit office space, their hands extended in a firm yet respectful handshake, the man's confident grip meeting the woman's graceful, slightly forward-leaning posture, as both exchange subtle expressions of calm professionalism, signaling agreement or partnership in this formal yet amicable interaction.

  • @KalponicGames
    @KalponicGames 7 months ago

    Hey, I was wondering: is it possible to automate the process of prompting in the text field in SD, and if so, how? My best guess is that you use wildcards here

    • @pixaroma
      @pixaroma  7 months ago

      I didn't try any methods; I usually just copy and paste from ChatGPT because I sometimes use images to get prompts. But I saw there is an extension that lets you add the ChatGPT API to it, so you will have something like ChatGPT inside Stable Diffusion; you can read more about it but I didn't test it. github.com/hallatore/stable-diffusion-webui-chatgpt-utilities
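
The wildcard idea from the question can be sketched in a few lines: tokens like __hair__ in a prompt are swapped for a random entry from a named list. The lists and token syntax below are illustrative (modeled on the popular wildcards extensions), not from any specific tool:

```python
import random
import re

# Illustrative wildcard lists; real wildcard extensions read these from
# text files, one entry per line.
WILDCARDS = {
    "hair": ["fiery red hair", "jet black hair", "silver hair"],
    "place": ["misty forest", "ruined castle", "windswept cliff"],
}

def expand(prompt: str, rng: random.Random) -> str:
    """Replace each __name__ token with a random entry from its list."""
    return re.sub(
        r"__(\w+)__",
        lambda match: rng.choice(WILDCARDS[match.group(1)]),
        prompt,
    )

# Each call with a different seed yields a different concrete prompt:
print(expand("portrait of a woman with __hair__ in a __place__",
             random.Random(42)))
```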

  • @PetrusiliusZwack
    @PetrusiliusZwack 2 months ago +1

    Flashbang at 5:18

    • @pixaroma
      @pixaroma  2 months ago +1

      Sorry

    • @PetrusiliusZwack
      @PetrusiliusZwack 2 months ago

      @@pixaroma The video was great and made sense at the same time 👍. I tried it, not 1:1 but similar. What should I tell Stable Diffusion if I want a sketch style, or an anime style or cartoon style?

    • @pixaroma
      @pixaroma  2 months ago

      @@PetrusiliusZwack you can actually use styles, ready-made prompts that you add to your own prompts; I did a few videos about that on my channel, for both Forge and a new one for ComfyUI

  • @kdzvocalcovers3516
    @kdzvocalcovers3516 7 months ago

    great vid... what causes anatomical mutations, and how do you address this formidable non-intelligent conundrum?

    • @pixaroma
      @pixaroma  7 months ago

      In my opinion the models need more training. They have problems with anything that can have a lot of combinations: fingers on a hand can be in so many positions, and if you look at the hand from different angles sometimes it looks like you have 4 fingers, sometimes 3, depending on position; you can hold objects, and each finger bends at multiple points, and that is what I think confuses it. Plus when the model is done they try to censor it, and it will miss some things about how it actually looks. It tries to find patterns, and the more training it has, the better the images it will create, with fewer mutations. It also doesn't know how to count very well. So I think they need to train it in more ways, like making it understand how things look in 3D from different angles, and probably some physics: gravity affects things, objects interact, collisions, etc. But in the future they will probably figure out how to do that

  • @Maeve472
    @Maeve472 7 months ago

    • @pixaroma
      @pixaroma  7 months ago

      Did you try Juggernaut? I think not all models support inpaint; for Juggernaut XL I saw in the description on CivitAI that they added inpaint, so maybe that can be the cause

    • @Maeve472
      @Maeve472 7 months ago

      @@pixaroma I mean, is there a way to use normal models to inpaint? I don't know why, but when I'm using Krita AI Diffusion made by Acly, normal models can inpaint, while in normal Stable Diffusion Forge it's impossible to do

    • @pixaroma
      @pixaroma  7 months ago

      Sorry, I don't know all the technical details. In Forge UI I used Juggernaut and didn't try others, and I think there are other inpainting models. So if it didn't work, either it is not compatible or it is a bug in the interface

  • @brodull1142
    @brodull1142 7 months ago

    Any tips to make the SDXL model load faster? SD 1.5 is faster because it still uses a 512 base model, but SDXL takes longer, like 3 times longer. I'm using an RTX 4060 8gb.

    • @pixaroma
      @pixaroma  7 months ago

      Usually SDXL models are also larger than 1.5, like 3 times larger, maybe that is the cause; I don't have any tips for that. I have an RTX 4090 and I didn't notice any difference :) plus I haven't used 1.5 since SDXL appeared; for me 512px is too small an image size

  • @lokitsar5799
    @lokitsar5799 7 months ago

    I like Forge but the creator of it has jumped ship. I know there's a couple of branches that are working but I just don't see it sticking around long term. I switched back to my combo of Auto, Comfy and fooocus

    • @pixaroma
      @pixaroma  7 months ago +1

      yeah, it is missing some updates; we will see what happens in the long term :)

  • @Uday_अK
    @Uday_अK 2 months ago

    ❤👌🏻

  • @svetlanaLisova666
    @svetlanaLisova666 7 months ago

    tell me how to make a character but in different poses.. for example (plant, put, delight) and so on

    • @pixaroma
      @pixaroma  7 months ago

      Without training a LoRA model it is not so easy, and even with a LoRA it is not perfect. You can also try extensions like ReActor, but those work more with photorealistic images. There are options with ControlNet and IPAdapter, but I didn't manage to get consistent results with SDXL models; I saw others using it with SD 1.5 models. From the prompt alone it is hard to get right. You can also try inpainting to keep the face or head and change everything else. You can get similar results if you describe accurately how the hair looks, how the character is dressed, and so on; try a few generations and find a similar one

    • @svetlanaLisova666
      @svetlanaLisova666 7 months ago

      @@pixaroma and did you use ControlNet??

    • @pixaroma
      @pixaroma  7 months ago

      I don't use ControlNet for faces; I tried but didn't get consistent results 😃 There are ComfyUI workflows that work, I saw them online, but I mostly use Forge, and I use ControlNet to get contours and poses and to convert sketches, so I mostly use the canny model

    • @svetlanaLisova666
      @svetlanaLisova666 7 months ago

      @@pixaroma which VAE do you like??

    • @pixaroma
      @pixaroma  7 months ago

      I use automatic vae :)

  • @petertremblay3725
    @petertremblay3725 15 days ago

    There is not a single model that can generate a correct flintlock rifle with a realistic hand pose, so I guess I will have to make a LoRA for it.

    • @pixaroma
      @pixaroma  15 days ago

      yes, a LoRA is usually the most accurate; not perfect, but it does the job. I use Tensor Art to train LoRAs; the LoRAs for Flux are quite ok

    • @petertremblay3725
      @petertremblay3725 15 days ago

      @@pixaroma I will use ComfyUI to train the LoRA, thanks!

  • @sobreaver
    @sobreaver 5 months ago

    ooo k ay ! Next step, making Weird Science with this thumbnail o0

    • @pixaroma
      @pixaroma  5 months ago

      😂🧪🧬👩‍🔬

  • @lonewolf-vw9wf
    @lonewolf-vw9wf 7 months ago

    how come your Stable Diffusion behaves like a trained dog? I have everything the same but never get what I want

    • @pixaroma
      @pixaroma  7 months ago

      😂 I don't always get what I want, but with enough tries and the right prompts I get close enough; it depends on the image, there are still things it can't do right no matter what you try

  • @raymondandreaswilke8176
    @raymondandreaswilke8176 6 days ago

    this program follows maybe 10% of the prompt text and does 90% whatever it wants, no matter how much detail you describe

    • @pixaroma
      @pixaroma  6 days ago +1

      depends on the model; for me, for example, Flux is quite accurate, SDXL doesn't have such good prompt understanding. Sometimes I get better results with Flux than I get with Midjourney or DALL-E

  • @JarppaGuru
    @JarppaGuru 7 months ago

    0:30 yes, that's a trained image. Nothing was generated; it's copy-paste using other trained data. It won't do a spaceship if it wasn't trained on one. It will make a spaceship-looking toaster, because it was trained on both and can combine them. Nothing is generated from 0%; it's not intelligence.
    This is an image, this is a caption: if you ask for something similar to a caption it was trained on, you get that image, or combined with another that has the same. It still knows nothing beyond what it was trained on; even then it doesn't know anything, it's programmed to do it.
    And it's sold as the AI we imagine in movies, when it's just "this is this, and the answer is this". We already know the answer, we trained it xD
    same as good old Jarvis

    • @pixaroma
      @pixaroma  7 months ago

      In my opinion it can do what it was trained on, but the billions of combinations for each different prompt is what makes it more interesting; it's like having unlimited variations of something. I could not do a job I wasn't trained for; I learn different things and then make a mix of what I learned. AI can just do those millions of combinations that we don't have enough years in our lives to do :) and we will see, in the future with more training it will get more advanced

  • @NixxioMusic
    @NixxioMusic 5 months ago

    how is your generation so fast lol; if I dare go over 612x612 it just stops and dies, even with lower-end models.

    • @pixaroma
      @pixaroma  5 months ago +1

      You need an Nvidia RTX card with a lot of VRAM; I have an RTX 4090 with 24gb of VRAM. I do a 1024px image in 4-5 seconds, and with a hyper model it can take about one second. I do speed up the video so you don't wait, but it is quite fast anyway

    • @NixxioMusic
      @NixxioMusic 5 months ago

      @@pixaroma I tried using Forge UI and it is what I needed fr. I got an RTX 3070 8gb, but Forge sped it up to 7s per image, so thanks for the tutorial :D

  • @ozgoodphotos
    @ozgoodphotos 4 days ago

    Oh if it only worked that easily…..

    • @pixaroma
      @pixaroma  4 days ago

      For SDXL it works; for Flux you need longer prompts, I use ChatGPT to get a long detailed prompt. Flux understands prompts better, but if you ask for something it was not trained to do, it will not do it no matter how hard you try

  • @AnudeepKolluri
    @AnudeepKolluri 7 months ago +1

    Create a Discord server; no one uses Facebook these days (at least I don't).
    With Midjourney and games seeing a rise, I am confident the upcoming generation has a Discord account.

    • @pixaroma
      @pixaroma  7 months ago

      I created one today, but I don't have much experience with it; I will work on it over the next days
      discord.gg/a8ZM7Qtsqq

  • @cstar666
    @cstar666 6 months ago

    Hands and feet *smdh*, hands and feet.

    • @pixaroma
      @pixaroma  6 months ago

      😂 yeah, it has more problems with hands than a finetuned SDXL; we will see what happens. I saw there is a problem with finetuning because of the license; not sure what features it brings, and if not, we stick with SDXL

  • @sobreaver
    @sobreaver 7 months ago

    hmmmm uniforms...

    • @pixaroma
      @pixaroma  7 months ago +1

      Didn't find the word when I did the tutorial 😂