4 Models - NO Refiner needed!!!! - A1111 / SDXL / Stable Diffusion XL

  • Published: 23 Sep 2024
  • These 4 Models need NO Refiner to create perfect SDXL images. Check out NightVision XL, DynaVision XL, ProtoVision XL and BrightProtoNuke. Create highly detailed images in 3D, 2D, Photorealistic, Hyperreal, Portrait, SFW, NSFW and more
    #### Links from my Video ####
    civitai.com/mo...
    civitai.com/mo...
    civitai.com/mo...
    civitai.com/mo...
    #### Join and Support me ####
    Buy me a Coffee: www.buymeacoff...
    Join my Facebook Group: / theairevolution
    Join my Discord Group: / discord
    AI Newsletter: oliviotutorial...
    Support me on Patreon: / sarikas

Comments • 95

  • @siegekeebs
    @siegekeebs 1 year ago +40

    The purpose of a refiner is to reduce the VRAM load and let people generate larger images (through 2 stages) than they could generate at once. I know for some reason people really dislike the refiner in SDXL, but it was a conscious decision on their part to make it more accessible. I really hate that people are so eager to drop the refiner; instead they should be trying to make better refiners. Although nothing is stopping you from using these models the same way you would a refiner in the first place. For the record, this doesn't affect me, my card is capable; I'm saying it because I feel it's important to look out for the community. (A minimal code sketch of the two-stage hand-off follows this thread.)

    • @thedeliverus
      @thedeliverus 1 year ago +2

      I don't think reducing VRAM was ever the primary intent of the refiner, but it was likely kept in mind when making that decision.

    • @FusionDeveloper
      @FusionDeveloper 1 year ago +3

      It has to load the base model, unload it, load the refiner model and then unload the refiner model for every single image, which takes many times longer than just generating probably 20 images without a refiner on my 1080 Ti GPU. (I didn't measure the actual time.)

    • @gameplayfirst-ger
      @gameplayfirst-ger 1 year ago +1

      The main problem with the refiner is that many additional features don't work well with it, as many of those techniques only support the base model. But I agree that the Refiner is not bad and helps for some content.

    • @musicandhappinessbyjo795
      @musicandhappinessbyjo795 1 year ago +4

      Where did you even get such an idea, brother? It doesn't work like that.

    • @orokchimoraN99
      @orokchimoraN99 1 year ago

      The refiner makes my GPU run out of memory.
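
    The two-stage hand-off described in this thread - base model for most of the schedule, refiner for the final steps, with only one sub-model active at a time - can be sketched with the diffusers library. This is a minimal illustration, assuming the stock SDXL 1.0 checkpoints; the prompt, step count and 0.8 split point are arbitrary examples, not settings from the video.

        # Minimal two-stage SDXL sketch (diffusers), assuming the stock 1.0 checkpoints.
        import torch
        from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

        base = StableDiffusionXLPipeline.from_pretrained(
            "stabilityai/stable-diffusion-xl-base-1.0",
            torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
        )
        refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
            "stabilityai/stable-diffusion-xl-refiner-1.0",
            torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
        )
        # CPU offload keeps only the active sub-model on the GPU, which is the
        # VRAM argument made above.
        base.enable_model_cpu_offload()
        refiner.enable_model_cpu_offload()

        prompt = "portrait photo of a sailor, dramatic light"  # placeholder prompt

        # Stage 1: the base model denoises the first 80% of the schedule and returns latents.
        latents = base(prompt=prompt, num_inference_steps=30,
                       denoising_end=0.8, output_type="latent").images

        # Stage 2: the refiner picks up at the same point and finishes the last 20%.
        image = refiner(prompt=prompt, num_inference_steps=30,
                        denoising_start=0.8, image=latents).images[0]
        image.save("refined.png")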

  • @thanksfernuthin
    @thanksfernuthin 1 year ago +18

    I haven't been using a refiner since the beginning. When I saw people use them starting out, I noticed the images with the refiner weren't really better, just different. So I figured learning how to get what I want from the models would be the best way to go, and I've been very happy. I actually use BASE SDXL the most because that's what I trained my LoRAs on, and it's really working great.

    • @Elwaves2925
      @Elwaves2925 1 year ago +1

      Same for me, and I've been saying for a while that you don't need it. All it really does is add to generation time and resource usage. I stopped once the first custom model dropped and I haven't used it since. I don't even use the SDXL base anymore unless someone else's prompt asks for it and I want that exact image. I can understand why you use it though.

    • @thanksfernuthin
      @thanksfernuthin 1 year ago +2

      @@Elwaves2925 It took me a while to notice! (SDXL Base) But I was getting the best results from base - I'm sure because that's what my LoRAs were trained with. I may train LoRAs with a favorite model in the future, but SDXL isn't the dog SD 1.5 was, and definitely not what 2.0 was. If you're using your favorite LoRA from CivitAI and see it was trained on the base model, try it there once in a while. You may be shocked.

    • @0AThijs
      @0AThijs 1 year ago +1

      I personally use refiners like this:
      Get an idea.
      Get the best model to create the idea.
      Then use the refiner early on to use its style.

    • @thanksfernuthin
      @thanksfernuthin 1 year ago

      @@0AThijs Yeah! If it works for you that's great.

    • @Elwaves2925
      @Elwaves2925 1 year ago +1

      @@thanksfernuthin Oh, I already use LoRAs with the model they were trained on; that's what I meant by when the "prompt asks for it." Some LoRAs don't work at all with different models. Sometimes, though, you get better, or at least different, results with other models, so I like to switch it up. The same goes for switching SD 1.5 and SDXL prompts around.
      I'm all for using whatever model works best for you and what you want from it. I've never bought into the "this is the best model" rhetoric that some have. If you like the base model that much, then fair play, go for it. I use LoRAs, but I also like to see what I can get without them, and I like having a model tailored to each style I want. 🙂

  • @Thomahawk1234
    @Thomahawk1234 1 year ago +11

    There is something off about the XL images and it's hard to describe, but I've noticed it from the beginning. To me it looks like blurred shading with sharper details on top. It kind of makes everything look a bit like clay.

    • @Nick-do5ii
      @Nick-do5ii 1 year ago +1

      I feel the same way, but I think it's still early days and we'll see what comes of it.

  • @MikevomMars
    @MikevomMars 1 year ago +3

    I have been using DynaVision for a couple of weeks and it indeed gives amazing results - even with LoRAs of my own face 😊

  • @Hearcharted
    @Hearcharted 1 year ago +4

    Sebastian "Dad Jokes Master" Kamph is going places

  • @musicandhappinessbyjo795
    @musicandhappinessbyjo795 1 year ago +2

    Sebastian's dad jokes are really contagious now.

  • @xd-vf1kx
    @xd-vf1kx 11 months ago

    Happy bday Oli!! thanks for all you do for us! love u

  • @chillsoft
    @chillsoft 1 year ago +2

    I really recommend Realities Edge XL - it is truly amazing! And it doesn't need a refiner, of course! ;) (I might be slightly biased... just sayin')

    • @Elwaves2925
      @Elwaves2925 1 year ago

      Haven't heard of that one, so cheers, I'll check it out. I'm liking the new RealVisXL 2.0 for photoreal, although it's so new I haven't really pushed it yet.

    • @chillsoft
      @chillsoft 1 year ago

      @@Elwaves2925 RealVis is good, but lacks detail in hair, it does look more realistic though! I find Realities Edge to be just the right amount of real but much more crisp and sharp and VERY easy to prompt! But again, I might have a slight bias! ;)

  • @calvingrondahl1011
    @calvingrondahl1011 1 year ago

    How fun and creative. You are always inspiring. Thanks OS. 🖖👍

  • @numbernine5044
    @numbernine5044 1 year ago +6

    Am I mistaken, or do these larger SDXL examples tend to be so generic, and the CFG so incredibly low (3-4), that anything being rendered will come out really clean and quick (under 2 minutes)? Why not actually try SDXL with a prompt that has more unique characterization and control, then see what can be done and how much time it takes? My 12 GB RTX is really picky and tends to only load 6 GB SDXL checkpoints (anything more and an error occurs), and I'm sticking with AUTOMATIC1111 rather than ComfyUI. There is just too much happening with Comfy, and since my video card is already picky about loading SDXL checkpoints, I wouldn't be able to do much without getting errors.

  • @blacksage81
    @blacksage81 1 year ago

    I've used the refiner maybe two times since SDXL launched. The XL base model is good enough, and I never had the VRAM to load both, so I went without it. Nightvision goes hard; I use it in ComfyUI and I've easily run off 500+ images. The Nightvision, Protovision and DynavisionXL models can easily take CFGs up to eight, maybe ten, but you have an even higher chance of tanking your image quality. I like 8 for Nightvision, but I may go a little lower once I'm in the mood to run more gens.
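
    A quick way to compare CFG values like the ones mentioned above is to loop over guidance_scale with a community SDXL checkpoint. This is only a sketch: the .safetensors filename is hypothetical (point it at whichever model you downloaded from Civitai), and the prompt and step count are placeholders.

        # Sketch: comparing CFG (guidance_scale) settings on a community SDXL checkpoint.
        import torch
        from diffusers import StableDiffusionXLPipeline

        # Hypothetical local path to a downloaded checkpoint.
        pipe = StableDiffusionXLPipeline.from_single_file(
            "models/nightvisionxl.safetensors", torch_dtype=torch.float16
        ).to("cuda")

        prompt = "portrait photo, natural light"  # placeholder prompt
        for cfg in (4, 8, 10):  # low CFG looks softer; very high CFG can tank quality
            image = pipe(prompt=prompt, num_inference_steps=30, guidance_scale=cfg).images[0]
            image.save(f"cfg_{cfg}.png")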

  • @alanfox9721
    @alanfox9721 1 year ago +1

    Olívio, can you tell me which models are, in your opinion, the best for making bas-relief sculptures?

  • @peacetoall1858
    @peacetoall1858 1 year ago

    Captain's log ha ha good one 🤣

  • @aggressiveaegyo7679
    @aggressiveaegyo7679 1 year ago +2

    Wow. It turns out I didn't understand the purpose of the refiner.
    I used it when a model I liked visually couldn't take my idea from the prompt. In that case I chose a model that draws what I need well, and then used a refiner to bring it into the visual style that I like.

    • @DejayClayton
      @DejayClayton 1 year ago

      How well does that work for you? I'm always running into issues with getting particular styles to be applied to prompts that are accurate but not interesting.

    • @aggressiveaegyo7679
      @aggressiveaegyo7679 1 year ago

      @@DejayClayton It's a kind of process of finding the ideal. Sometimes I even use an anime model, and the refiner manages to make the face look realistic with 28 steps and the refiner switched on at 0.5-0.6.
      In other cases it's convenient to use two realistic models that react differently to the prompt - for example to "soft focus", or one draws grass better. Then, for light adjustments, 22 steps and switching the refiner on at 0.9 are enough. (A rough code sketch of this hand-off follows this thread.)

    • @DejayClayton
      @DejayClayton 1 year ago

      @@aggressiveaegyo7679 I've been using an approach in ComfyUI where I start with a few steps using a specific prompt and model, then do a latent masked merge with a different prompt and model, continuing the render with different CFG and denoising strengths. I've been getting some good results that were hard for me to achieve otherwise.
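
    The "switch the refiner on at 0.5-0.9" workflow from this thread - one checkpoint for the content, a second one finishing the render in the style you want - can be approximated outside A1111 with the same split mechanism the official base/refiner pair uses. A rough sketch, assuming diffusers and two hypothetical local checkpoints; the prompt, the 28 steps and the 0.6 switch point simply mirror the numbers mentioned above.

        # Sketch: a second SDXL checkpoint used as a "style refiner",
        # roughly what A1111's "Refiner / switch at" setting does.
        import torch
        from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

        content_model = StableDiffusionXLPipeline.from_single_file(
            "models/anime_checkpoint.safetensors", torch_dtype=torch.float16)      # hypothetical path
        style_model = StableDiffusionXLImg2ImgPipeline.from_single_file(
            "models/realistic_checkpoint.safetensors", torch_dtype=torch.float16)  # hypothetical path
        content_model.enable_model_cpu_offload()  # keeps VRAM use manageable with two checkpoints
        style_model.enable_model_cpu_offload()

        prompt = "a woman standing in tall grass, soft focus"  # placeholder prompt
        steps, switch_at = 28, 0.6  # switching at 0.6 of 28 steps hands over around step 17

        latents = content_model(prompt=prompt, num_inference_steps=steps,
                                denoising_end=switch_at, output_type="latent").images
        image = style_model(prompt=prompt, num_inference_steps=steps,
                            denoising_start=switch_at, image=latents).images[0]
        image.save("styled.png")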

  • @mirek190
    @mirek190 1 year ago

    Olivio, try Fooocus - or better, the Fooocus-MRE fork.

  • @activemotionpictures
    @activemotionpictures 10 months ago

    How do you turn off "Refiner" once you've selected it in the same session? It's always on.

  • @CCoburn3
    @CCoburn3 1 year ago

    I really like the BrightProtoNuke images. But I'm more into artistic rendering than photorealism.

  • @Vectorr66
    @Vectorr66 11 months ago

    Curious: all of my SDXL models are taking forever to create anything. 3080 Ti. Am I missing something?

  • @Xinkgs
    @Xinkgs 1 year ago

    Very impressive, Thanks for

  • @Steponlyone
    @Steponlyone 1 year ago

    What was the captain's log doing in its toilet though?

  • @azrieljale
    @azrieljale 11 months ago

    Damn, so AI can do 1024x1024 now 😂 amazing

  • @Adante.
    @Adante. 1 year ago

    Exciting! Thank you 👏

  • @matthallett4126
    @matthallett4126 1 year ago

    You got NightVision downloaded! ;)

  • @qrkyboy
    @qrkyboy 1 year ago

    These models are excellent for photorealism. UnstableDiffusers YamerMix is another good one with no "refiner" needed. The SDXL refiner is suspect: more than a few times it has decided, for example, that a character I'm describing with long hair can't be a guy and turned him female. Extremely female.

  • @Flexsan
    @Flexsan 1 year ago

    I see a very small difference between with and without the refiner. 0:50
    With the refiner there is slightly more detail in the beard and hair and a little more detailed texture in the skin, but it's really not a lot.

  • @ZephyrusY91
    @ZephyrusY91 1 year ago +1

    I think JuggernautXL doesn't need a refiner either. Or have you had other experiences?

  • @a64738
    @a64738 9 months ago

    Refiner? I never even heard about that lol :)

  • @BackooWorld
    @BackooWorld 1 year ago

    Thanks Olivio

  • @thefablestudio
    @thefablestudio 1 year ago

    👍

  • @Соня_Огурцова
    @Соня_Огурцова 1 year ago

    Does anyone know how to train an SDXL 1.0 model on your own photos with Dreambooth?

  • @James009D
    @James009D 11 months ago

    The joke alone got a like from me!

  • @Deffcolony
    @Deffcolony 11 months ago

    Can someone help me? When I tried to generate an image I got the following error:
    TypeError: expected Tensor as element 0 in argument 0, but got DictWithShape

  • @SteveM45
    @SteveM45 1 year ago +1

    In the last video I asked you: do you know Fooocus?

    • @mirek190
      @mirek190 1 year ago

      Fooocus is great for making insanely good pictures even with stock SDXL, and it's very easy to use. I love Fooocus the most... Fooocus-MRE is even better ;P

  • @RodgerE2472
    @RodgerE2472 1 year ago

    Great job as usual. I like your accent - you said CFG and it sounded like you said "Sea of Cheese", so that's what I'm going to call it now.

  • @sb6934
    @sb6934 1 year ago

    Thanks!

  • @Tanaste
    @Tanaste 1 year ago

    Great video as always ;). NightVisionXL doesn't work with ControlNet and makes Python crash. ControlNet works with all the other models I have. Has anyone encountered the same issue?

  • @Afr0man4peace
    @Afr0man4peace 1 year ago

    Nothing new for me - my models haven't needed a refiner since the 29th of July 😊 (Hephaistos). But great video again. Those models are definitely worth a look.

  • @petitemasque5784
    @petitemasque5784 1 year ago +2

    What about the hands? Show hands Olivio, don't be naughty.

  • @saberkz
    @saberkz 1 year ago

    The refiner is auto-enabled in my A1111 1.6 - how do I turn it off? It auto-selects the SDXL refiner.

    • @Elwaves2925
      @Elwaves2925 1 year ago

      You should be able to click in the box and select the 'None' option. If that doesn't work you can always move the refiner out of your models folder.

  • @Ferreyrajp
    @Ferreyrajp 1 year ago

    Genius!!!!!

  • @AP-rj2ls
    @AP-rj2ls 1 year ago

    A like straight to the head, with a running start ))

  • @EvolvedSungod
    @EvolvedSungod 1 year ago

    I can't get SDXL to work at all. In A1111 it just gives errors and can't load the checkpoint. It also didn't work at all with Easy Diff, which I much prefer to A1111 - there it just gives a weird collage of colors in the generated images. So far, all the times I've posted on different videos asking for help, no one has ever replied. I'm close to giving up on all this.

    • @ayaneagano6059
      @ayaneagano6059 1 year ago

      In A1111, assuming you have at least 8GB of VRAM, put --medvram in the commandline arguments of the _webui-user.bat_ file and see how that works; many people have encountered that exact same problem with SDXL on "low" VRAM without that argument.
      Another thing you could do if that doesn't work is update your Python and CUDA versions; having outdated versions can make it all harder. Python 3.10.10 and PyTorch 2.0.1+cu118 (for faster performance) can help SDXL run a good deal better.
      If all of that fails, unfortunately, you will have to install ComfyUI to get SDXL to work, but an advantage of that is that the VRAM requirements go down, and even a 6GB VRAM GPU can run it. Thankfully, there are already pre-made workflows such as _SDXL ComfyUI ULTIMATE Workflow_ on Civitai, so you don't have to mess with any of the complex stuff and can start generating images straight away.
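
      For reference, the --medvram suggestion above usually amounts to a single line in a stock Windows A1111 install's webui-user.bat. This is a sketch of a typical file; only the COMMANDLINE_ARGS line is the actual change:

          @echo off

          set PYTHON=
          set GIT=
          set VENV_DIR=
          set COMMANDLINE_ARGS=--medvram

          call webui.bat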

  • @NaviRetlav
    @NaviRetlav 1 year ago

    Can we use them in ComfyUI?

    • @nokodemusic
      @nokodemusic 1 year ago +2

      Yes, I've experimented with all of them in Comfy.

    • @gameplayfirst-ger
      @gameplayfirst-ger 1 year ago +1

      Why not? You should try to find more in-depth ComfyUI videos if you really have to ask about using different models.

  • @sherpya
    @sherpya 1 year ago +1

    a busy guy

  • @gameplayfirst-ger
    @gameplayfirst-ger 1 year ago +1

    It doesn't seem that you understand the Refiner. Depending on the subject and style, you don't need a Refiner with the SDXL base model either. There's nothing special about those random models.

  • @Dragonytez
    @Dragonytez 1 year ago

    The last joke was awesome 😆

  • @adrianoprimero3590
    @adrianoprimero3590 1 year ago

    Is there something similar in SD 1.5?

    • @mirek190
      @mirek190 1 year ago

      nope .... and not even close

  • @ScottVanKirk
    @ScottVanKirk 1 year ago

    Not socal gitorist, but SoCal gi-TAR-ist, as in the musical instrument 😏

  • @Airbender131090
    @Airbender131090 1 year ago

    Wait. Someone uses the refiner with custom XL models? 0_0

  • @pogiman
    @pogiman 1 year ago

    😂😂😂😂😂😂 good dad joke my dude!

  • @lyonstyle
    @lyonstyle 1 year ago

    Is this the start of a dad joke battle? LOL

  • @Mimeniia
    @Mimeniia 1 year ago +1

    OK, cool, so "we've" kind of mastered the fidelity of images, but SD, when do we get complex scenes and flawless multiple characters? My GPU is kind of sick and tired of spitting out pictures of women already.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago

      Uhm... ControlNet and inpainting? You can do complex scenes with as many characters as you want, but it takes a bit more skill.

    • @Mimeniia
      @Mimeniia 1 year ago

      @@OlivioSarikas For sure. There's Regional Prompter as well, but I'm talking about out-of-the-box, less hair-pulling attempts.

  • @wboumans
    @wboumans 1 year ago

    Sure but can it do bobs?

  • @tsahello
    @tsahello 10 months ago

    Nlight is better 😂 and um, 1.5 is better than all the XL models

  • @magenta6
    @magenta6 1 year ago

    Star-date . . .

  • @Eisenbison
    @Eisenbison 6 months ago

    What was with the pointless joke in the beginning?

  • @nova8585
    @nova8585 1 year ago

    Please no more dad jokes

  • @Theexplorographer
    @Theexplorographer 1 year ago +1

    Lol... SoCalGuitarist. Southern California Guitarist, i.e. plays guitar and lives in Southern California. Keep on being you, Olivia Circus! 😂😂

  • @blackvx
    @blackvx 1 year ago

    Thanks!