4 Models - NO Refiner needed!!!! - A1111 / SDXL / Stable Diffusion XL
- Published: 23 Sep 2024
- These 4 Models need NO Refiner to create perfect SDXL images. Check out NightVision XL, DynaVision XL, ProtoVision XL and BrightProtoNuke. Create highly detailed images in 3D, 2D, Photorealistic, Hyperreal, Portrait, SFW, NSFW and more
#### Links from my Video ####
civitai.com/mo...
civitai.com/mo...
civitai.com/mo...
civitai.com/mo...
#### Join and Support me ####
Buy me a Coffee: www.buymeacoff...
Join my Facebook Group: / theairevolution
Join my Discord Group: / discord
AI Newsletter: oliviotutorial...
Support me on Patreon: / sarikas
The purpose of a refiner is to reduce the VRAM load and allow people to generate larger images (through two stages) than they could generate at once. I know for some reason people really dislike the refiner in SDXL, but that was a conscious decision on their part to make it more accessible. I really hate that people are so eager to go without a refiner; instead they should be trying to make better refiners. Although nothing is stopping you from using these models in the same way as you would a refiner in the first place. For the record, this doesn't affect me, my card is capable; I'm saying this because I feel it's important to look out for the community.
I don’t think reducing VRAM was ever the primary intent of the refiner, but it was likely kept in mind when making that decision.
It has to load the base model, unload it, load the refiner model, and then unload the refiner model for every single image, which takes many times longer than just generating maybe 20 images without a refiner on my 1080 Ti GPU (I didn't measure the actual time).
The main problem with the refiner is that many additional features don't work well with it, as many of those techniques only support the base model. But I agree that the refiner is not bad and helps with some content.
Where did you even get such an idea, brother? It doesn't work like that.
The refiner makes my GPU run out of memory.
I haven't been using a refiner since the beginning. When I saw people use them starting out, I noticed the images with the refiner weren't really better, just different. So I figured learning how to get what I want from the models would be the best way to go, and I've been very happy. I actually use BASE SDXL the most because that's what I trained my LoRAs on, and it's really working great.
Same for me, and I've been saying for a while that you don't need it. All it really does is add to generation time and resource usage. I stopped once the first custom model dropped and I haven't used it since. I don't even use the SDXL base anymore unless someone else's prompt asks for it and I want that exact image. I can understand why you use it, though.
@@Elwaves2925 It took me a while to notice! (SDXL Base) But I was getting the best results from base. I'm sure it's because that's what my LoRAs were trained with. I may train LoRAs with a favorite model in the future, but SDXL isn't the dog SD 1.5 was, and definitely not what 2.0 was. If you're using your favorite LoRA from CivitAI and see it was trained on the base model, try it there once in a while. You may be shocked.
I personally use refiners like this
Get an idea
Get the best model to create the idea
Then use the refiner early on to use its style
@@0AThijs Yeah! If it works for you that's great.
@@thanksfernuthin Oh I already do use Loras with the model they were trained on, that's what I meant by when the "prompt asks for it." Some Loras don't work at all with different models. Sometimes though, you do get better, or at least different results with other models, so I like to switch it up. Same goes for switching SD 1.5 and SDXL prompts around.
I'm all for using whatever model works best for you and what you want from it. I've never bought into the "this is the best model" rhetoric that some have. If you like the base model that much then fair play, go for it. I use LoRAs but I also like to see what I can get without them, and I like having a model tailored to each style I want. 🙂
There is something off about the XL images and it's hard to describe, but I've noticed it from the beginning. To me it looks like blurred shading with sharper details on top. It kinda makes everything look a bit like clay.
I feel the same way, but I think it's still early days and we'll see what comes of it.
I've been using DynaVision for a couple of weeks now and it indeed gives amazing results, even with LoRAs of my own face 😊
Sebastian "Dad Jokes Master" Kamph is going places
Sebastian's dad jokes are really contagious now.
Happy bday Oli!! thanks for all you do for us! love u
I really recommend Realities Edge XL - it is truly amazing! And doesn't need refiner of course! ;) (I might be slightly biased....just sayin)
Haven't heard of that one, so cheers, I'll check it out. I'm liking the new RealVisXL 2.0 for photoreal, although it's so new I haven't really pushed it yet.
@@Elwaves2925 RealVis is good, but lacks detail in hair, it does look more realistic though! I find Realities Edge to be just the right amount of real but much more crisp and sharp and VERY easy to prompt! But again, I might have a slight bias! ;)
How fun and creative. You are always inspiring. Thanks OS. 🖖👍
Am I mistaken, or do these larger SDXL examples tend to be so generic, with the CFG incredibly low (3-4), that really anything being rendered will come out clean and quick (under 2 minutes)? Why not actually try SDXL with a prompt that has more unique characterization and control, and then see what can be done and how much time it takes? My 12GB RTX is really picky and tends to only load 6GB SDXL checkpoints (anything more and an error occurs). And I'm sticking with AUTOMATIC1111 rather than ComfyUI. There is just too much happening in Comfy, and my video card is already picky with loading SDXL checkpoints, so I wouldn't be able to do much without getting errors.
I used the refiner maybe two times since SDXL launched. The XL base model is good enough, and I never had the VRAM to load both, so I went without it. NightVision goes hard; I use it in ComfyUI and I've easily run off 500+ images. The NightVision, ProtoVision, and DynaVisionXL models can easily take CFGs up to eight, maybe ten, but you have an even higher chance of tanking your image quality. I like 8 for NightVision, but I may go a little lower once I'm in the mood to run more gens.
Olívio, can you tell me in your opinion which are the best models for making bas-relief sculptures?
Captain's log ha ha good one 🤣
Wow. It turns out I didn’t understand the purpose of the refiner.
I used it when a model I liked visually couldn't realize my idea from the prompt. In that case, I chose the model that draws what I need well, but used a refiner to shift it into the visual style that I like.
How well does that work for you? I'm always running into issues with getting particular styles to be applied to prompts that are accurate but not interesting.
@@DejayClayton This is a kind of process of finding the ideal. Sometimes I even use anime and the refiner copes with making the face look realistic with 28 steps and turning the refiner on at 0.5-0.6.
In other cases, it is convenient to use two realistic models, but which react differently to the prompt, for example, “soft focus” or one draws grass better. Then for easy adjustments, 22 steps and turning on the refiner at 0.9 are enough.
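As a rough sketch of the arithmetic behind that switch fraction (assuming, as a simplification, that A1111's "refiner switch at" value simply hands the remaining sampling steps to the refiner; `refiner_step_split` is a hypothetical helper, not an actual A1111 function):

```python
def refiner_step_split(total_steps: int, switch_at: float) -> tuple[int, int]:
    """Return (base_steps, refiner_steps) for a given switch fraction.

    switch_at is the fraction of the schedule the base model runs for;
    the refiner takes over for the remaining steps.
    """
    base_steps = round(total_steps * switch_at)
    return base_steps, total_steps - base_steps

# 28 steps, switching at 0.5: base and refiner each run 14 steps
print(refiner_step_split(28, 0.5))  # → (14, 14)
# 22 steps, switching at 0.9: the refiner only does a light final pass
print(refiner_step_split(22, 0.9))  # → (20, 2)
```

The later the switch, the more the refiner acts like a gentle polish rather than a co-author of the image.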
@@aggressiveaegyo7679 I've been using an approach in ComfyUI to start with a few steps using a specific prompt and model, and then do a latent masked merge with a different prompt and model, continuing the render with different CFG and denoising strengths. I've been getting some good results that were hard for me to achieve otherwise.
Olivio, try Fooocus, or better, the fork Fooocus-MRE.
How do you turn off "Refiner" once you've selected it in the same session? It's always on.
I really like the BrightProtoNuke images. But I'm more into artistic rendering than photorealism.
Curious, all of my SDXL models are taking forever to create anything. 3080 Ti. Am I missing something?
Very impressive, Thanks for
What was the captain’s log doing in its toilet though?
Damn, so A.I. can do 1024x1024 now 😂 amazing
Exciting! Thank you 👏
You got NightVision downloaded! ;)
These models are excellent for photorealism. UnstableDiffusers YamerMix is another good one with no "refiner" needed. The SDXL refiner is suspect. More than a few times it's decided for example a character I'm describing with long hair can't be a guy and turned him female. Extremely female.
I see a very small difference between with and without refiner. 0:50
With the refiner there is slightly more detail in the beard and hair and a little bit more detailed texture to the skin. But it's really not a lot.
I think JuggernautXL doesn't need a refiner either. Or have you had other experiences?
Refiner ? I never even heard about that lol :)
Thanks olivio
👍
Does anyone know how to train SDXL 1.0 model with your own photos with Dreambooth?
The joke alone got a like from me!
Can someone help me? When I tried to generate an image I got the following error:
TypeError: expected Tensor as element 0 in argument 0, but got DictWithShape
In the last video I asked you: do you know Fooocus?
Fooocus is great for making insanely good pictures even with stock SDXL, and it's very easy to use. I love Fooocus the most ... Fooocus-MRE is even better ;P
Great job as usual. I like your accent; you said CFG and it sounded like you said "Sea of Cheese", so that's what I'm going to call it now.
Thanks!
Great video as always ;). NightVisionXL doesn't work with ControlNet and makes Python crash. ControlNet works with all the other models I have. Has anyone encountered the same issue?
nothing new for me. my models don't need a refiner since 29th July 😊. (Hephaistos). But great video again. Those models are definitely worth a look.
What about the hands? Show hands Olivio, don't be naughty.
The refiner is auto-enabled in my A1111 1.6. How do I turn it off? It auto-selects the SDXL refiner.
You should be able to click in the box and select the 'None' option. If that doesn't work you can always move the refiner out of your models folder.
Genius!!!!!
An instant like, straight to the head ))
I can't get SDXL to work at all. In A1111 it just gives errors and can't load the checkpoint. It also didn't work at all with Easy Diff, which I much prefer to A1111, where it just gives a weird collage of colors in the generated images. So far, all the times I've posted on different videos asking for help, no one has ever replied. I'm close to giving up on all this.
In A1111, assuming you have at least 8GB of VRAM, put --medvram in the command-line arguments of the _webui-user.bat_ file and see how that works; many people have encountered that exact same problem with SDXL on “low” VRAM without it in their arguments.
Another thing you could do if that doesn’t work is update your Python and CUDA versions; having outdated versions can make it all harder. Having Python 3.10.10 and PyTorch 2.0.1+cu118 (for faster performance) can help SDXL run a good deal better.
If all of that fails, unfortunately, you will have to install ComfyUI to get SDXL to work, but an advantage of that is that the VRAM requirements go down, and even a 6GB VRAM GPU can run it. Thankfully, there are already pre-made workflows such as _SDXL ComfyUI ULTIMATE Workflow_ on Civitai, so you don’t have to mess with any of the complex stuff and can start generating images straight away.
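For reference, adding --medvram as suggested above is a one-line edit to _webui-user.bat_; a minimal sketch (keep any other flags you already use on that line):

```bat
rem webui-user.bat (AUTOMATIC1111 launcher on Windows)
rem --medvram lowers peak VRAM usage by keeping only parts of the model on the GPU at once
set COMMANDLINE_ARGS=--medvram
```

There is also a more aggressive --lowvram flag for cards with very little memory, at the cost of slower generation.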
Can we use them in comfy UI?
Yes, I've experimented with all of them in Comfy.
Why not? You should try to find more in-depth ComfyUI videos if you really have to ask about using different models.
a busy guy
It doesn't seem that you understand the refiner. Depending on the subject and style, you don't need a refiner with the SDXL base model either. There's nothing special about those random models.
The last joke was brilliant 😆
Is there something similar in SD 1.5?
nope .... and not even close
Not SoCal "gitorist", but SoCal gui-TAR-ist, as in the musical instrument 😏
wait. Someone uses refiner with custom XL models? 0_0
😂😂😂😂😂😂 good dad joke my dude!
is this the start of a dad joke battle? LOL
OK, cool, so "we've" kinda mastered the fidelity of images, but SD, when do we get complex scenes and flawless multiple characters? My GPU is kinda sick and tired of spitting out pictures of women already.
Uhm... ControlNet and inpainting? You can do complex scenes with as many characters as you want, but it takes a bit more skill.
@@OlivioSarikas For sure. There's Regional Prompter as well, but I'm talking about out-of-the-box, less hair-pulling attempts.
Sure but can it do bobs?
NLight is better 😂 and um, 1.5 is better than all XL models
Star-date . . .
What was with the pointless joke in the beginning?
Ask your Mom
Please no more dad jokes
Lol... SoCalGuitarist. Southern California Guitarist, i.e. plays guitar and lives in Southern California. Keep on being you, Olivia Circus! 😂😂
Thanks!