AMAZING A1111 Stable Diffusion Extensions You Might Have Missed!
- Published: 6 Jul 2023
- Absolute Reality is a great new model, but you know what can make it even better? Loads of extensions, of course! Have a go with these Nerdy Rodent suggested extensions to add a little something extra to your AI art generation workflow.
Power your inner nerd :)
== Links! ==
* Automatic1111 Web UI - github.com/AUTOMATIC1111/stab...
* ControlNet Extension - github.com/Mikubill/sd-webui-...
* Model - civitai.com/models/81458/abso...
* Embedding - civitai.com/models/72437?mode...
* Embedding - civitai.com/models/72437?mode...
* Upscaler - nmkd.de/?esrgan
* Adetailer - github.com/Bing-su/adetailer
* Paper - arxiv.org/pdf/2305.08891.pdf
* Neutral Prompt - github.com/ljleb/sd-webui-neu...
* Dynamic CFG - github.com/ashen-sensored/sd-...
* Composable Diffusion - • Stable Diffusion AND C...
* How do I create an animated SD avatar? - • Create your own animat...
* Installing Anaconda for MS Windows Beginners - • Anaconda - Python Inst...
== Stable Diffusion Playlists! ==
* Stable Diffusion Mega Playlist! - ruclips.net/p/PLj...
* Dreambooth Playlist - • Stable Diffusion Dream...
* Textual Inversion Playlist - • Stable Diffusion Textu...
I knew the thumbnail looked familiar :)
Thanks for the cool model! 😉
@@NerdyRodent you're welcome :D
FYI, you can add VAE, ENSD, and CLIP Skip to the top of the webpage, next to the checkpoint selector.
1. Go to the settings page.
2. Navigate to the "all pages" section.
3. Look for the quicksettings list, and add the following entries next to the "sd_model_checkpoint":
sd_vae
eta_noise_seed_delta
CLIP_stop_at_last_layers
This should help reduce the amount of time going in and out of the settings page.
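Once those three entries are added, the quicksettings list would read something like this (assuming the default "sd_model_checkpoint" entry is already there):

```
sd_model_checkpoint, sd_vae, eta_noise_seed_delta, CLIP_stop_at_last_layers
```

After saving and reloading the UI, all four controls appear at the top of the page.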
Quick settings is indeed a cool feature!
whenever i see your new video pop up, i know there is something interesting to try and learn 👍
😀
Outstanding results from your addons! Thank you so much for the incredibly helpful information! You and your channel are very much appreciated!
Glad you like them!
Thank you so much for sharing and diving in!
You are so welcome!
I'd love to use some of your aliens for illustrations in my anthology of short stories, "Tales of the Galactic Alliance".
Amazing, amazing, amazing! Haven't tried Neutral Prompt yet, but thank you for pointing out the modified Dynamic CFG. Finally free of those noise-correcting LoRAs that slightly interfere with the aesthetic of your checkpoint model.
Happy to help!
Thanks, loved the video and guides; easy to understand and follow, cheers!
Glad you enjoyed it!
Neutral Prompt and the Dynamic thingy had the same function IIRC. I'm using Neutral Prompt at the moment when the prompt is a bit too long (I don't use it that much atm), and one more extension that's good but broken right now is NPW, for adjusting the strength of the negative prompt.
Update: I think I'll try Dynamic CFG with the other settings, looks fun to try out.
Ah. I saw the Dynamic Thresholding custom nodes in ComfyUI. Now I know what they are, and I just installed them. Thanks.
It’s fun 😉
Cool! Will try a few
Really love this funny way of presenting information! Thanks!
Glad you enjoyed it! 😀
You were right, I missed it, but now thanks to you I have found the kingdom of heaven. 🙇🏽♂️
👐
Excellent guide thanks
Remember to download the checkpoint into your Stable-Diffusion models directory!
@@NerdyRodent Sorry my bad your guide works pretty well. Problem was other extension that I installed after for testing :) Thanks for your great work.
Hey Nerdy, do you have any advice for managing extensions when their numbers reach army sizes? Mostly for boot-up time reasons :P Or do you strictly stick to only what you need rather than what "could be handy" in the future, or simply disable them in the extensions section?
Yup, I tend to keep the ones I use the most
The images look sooo good 😮
I did! I did indeed miss it... until now.
Thanks again!! 🎉😊🎉
Any time!
THX man, very interesting
No problem!
Could I ask why you put dots instead of commas in some places in the prompt, and why there are no commas between the tokens "boring nsfw blurry lowres unrealistic monochrome game screenshot"?
Can you make a more in-depth video on the rescaled CFG? I've been having issues with it washing out all my generations. After changing models/prompts/clip/extensions/etc., I finally realized the issue was the sampler method: it only really "worked" well with DDIM (what the researchers used) and washed out images when using any other sampler, especially the Karras ones. I'm not sure, but I read online it has something to do with the noise schedulers these models were trained on, whatever that means 😅. Please help! My images are so much better looking and more accurate to my prompt using the dynamic RCFG fix extension; it would produce absolute bangers every time I click generate, if it weren't for the fact that every image is washed out with a strong sepia filter.
A small workaround was setting the phi to around .25-.30. It definitely helped, but image quality is still far superior with this disabled entirely. It's a choice between having it off to generate superior-looking but less accurate images, or using RCFG to generate extremely accurate images at the cost of color and contrast.
@@reverse7737 It has to do with the way the schedulers and models are trained, but fear not, I found a really great fix. Using the DPM++ samplers, I found a phi between 20 and 35 works great. I usually use a phi of 25 or 30: 30 for a more balanced image and 25 for deep, rich colors. You can of course go lower for darker/higher-contrast scenes. Alternatively, you can raise the CFG (leave the rescale at 1) to get higher-contrast images. Another thing you can do is lower the clamp from 100 to 95; this changes the details of the image and can produce great or terrible results depending on the CFG weight and phi score. My current setup is DPM++ 2M SDE Karras with a CFG of around 10, phi at either 25 or 30, and clamp at either 100 or 95. You can go up to a CFG weight of 20 or 30, but you'll have to raise the phi score to accommodate it. You can also raise the rescale a bit (between 1 and 7) to get interesting results, but I prefer to leave it at 1. I'm not sure if this will help, but I lowered eta noise for ancestral samplers to 0. You'll have to go into your settings and add it to your UI; a quick Google search should tell you how. Honestly, I'm not sure what this does, but it seems to give more consistent results.
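For anyone wondering what the rescale actually does under the hood: the linked paper (arxiv.org/pdf/2305.08891.pdf) computes plain CFG, rescales it so its standard deviation matches the positive prediction's, then blends the two with phi. Here is a minimal numpy sketch of that idea (function and variable names are mine, not the extension's, and I use a single global std where the paper works per channel):

```python
import numpy as np

def rescaled_cfg(pos, neg, guidance_scale=7.5, phi=0.7):
    """Classifier-free guidance with std-rescaling, per the paper's recipe.

    pos/neg: model noise predictions for the positive and negative prompt.
    phi blends the rescaled result with plain CFG (phi=0 gives plain CFG).
    """
    cfg = neg + guidance_scale * (pos - neg)   # plain classifier-free guidance
    rescaled = cfg * (pos.std() / cfg.std())   # shrink std back to the positive branch's
    return phi * rescaled + (1.0 - phi) * cfg  # blend rescaled and plain CFG

# example call on dummy latents
out = rescaled_cfg(np.random.randn(4, 64, 64), np.random.randn(4, 64, 64))
```

Since phi = 0 recovers plain CFG exactly, the blend is what lets you trade contrast against prompt accuracy, which matches the phi-tuning workaround described in this thread.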
Is it a good idea to leave the settings they suggest changing for other models too?
Clip skip is only good for some, but the others should be fine
If you liked a picture on CivitAI and you wanted to recreate it in AUTOMATIC1111, instead of copying the data, which will take some time to populate, why don't you just click and drag the image from CivitAI and drop it in the PNG Info tab, then send it to txt2img or img2img? It's faster and more accurate that way.
The last 2, disable face restore and set CLIP, were not present on my Automatic1111.
'CLIP_stop_at_last_layers' in quicksettings
Comparing the before and after where the first set of images are not also headshots is apples to oranges, wouldn't you say?
I wouldn’t say that personally, no
...a tasty bit of cheddar for any A.I. lab-born rodent such as I...
Hello Nerdy Rodent, did you know that a silly CivitAI user is uncensoring Stable Diffusion 2.1? I've seen something he posted and I'm honestly baffled; it's mostly for NSFW use, but he managed to get some decent portraits of celebrities.
I haven't updated A1111 in months because every damn time an update breaks the install.
Not had an update issue myself as yet!
This isn’t even SDXL bro
Kidding. Thanks for the video 🎉
😉 July 18th
so confused, much information 🥵
Just start with the model, you can build up to installing an extension once you’ve got the basics 👍
🥰
🥰
woooo! 👋👋👋👋👋👋👋👋 🐀🐭
👋
Why does ur voice sound familiar….
Because I’ve been doing these for years? 😉
What exactly do Bad Dream and Unrealistic Dream do? All I hear is "use these, they're pretty good" with no explanation of what they do.
As shown, they are embeddings used as negative prompts. See ruclips.net/video/7Lxdk89W2K0/видео.html for more info!
I don't know... It feels like A LOT of stuff to reach results that are possible without all these things... We'll see if I can stomach all these settings...
Wish y'all on A1111 could use the HAT-L upscaler. Y'all do some amazing work on that webUI.
Where are you using that one? :)
Whaat the rodent is on a pc 😂
Ikr! Been using Linux PCs for years now 😀
That's what you get for using fake Japanese. Next time try 可愛い (kawaii, "cute") and see if you get a better result.
I was hoping for 化け狸 (a bake-danuki, shapeshifting raccoon dog) 🙁
@@NerdyRodent With SD, the best you can hope for is a 川太郎 (kawatarō, a kappa); maybe when SDXL 1.0 comes out.
Weebs... 出て行け! ("Get out!")
@@monkey4102 You got it wrong, man; true weebs don't know any Japanese. Have you seen the "oh bro, I've been doing Duolingo for the past 2 years" videos, or even worse, the ones written in romaji?
Actually, all those "extras" are not really needed in order to get a decent-looking image...
Having a "realistic" enough model combined with good, descriptive prompting should normally do the trick... along with good upscaling methods.
At the end of the day, it's all about high-resolution images with good, creative compositions that stand out from the rest 😎
Actually, they can help!
If you have to do ctrl-f to find something on a settings page, then the software has a poor UI.
When there are nearly a thousand parameters in the settings, searching is the fastest way.
The settings page of the most popular recent Discourse forums uses the same approach; it is very flexible, but there are so many parameters that searching is the only way to find a setting quickly.
"If you have to...." You don't HAVE to, it's one way of doing it but not the only way.
You need more nuance in your statements; more accuracy. That take of yours is just wrong.
Does anyone know how Clipdrop's XL Reimagine tool works?
WHO. BEAT. ME?!? 👈😒🧐🤔🫣🤬🤫😶 ✌️