thanks for watching! might be live on twitch for debugging, questions and chat: www.twitch.tv/coderx
huggingface sdxl1.0 base model: huggingface.co/stabilityai/stable-diffusion-xl-base-1.0
huggingface sdxl1.0 refiner model: huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0
automatic1111: github.com/AUTOMATIC1111/stable-diffusion-webui
comfyui: github.com/comfyanonymous/ComfyUI
refiner.json (now updated to refiner_v1.0.json) by camenduru: github.com/camenduru/sdxl-colab/blob/main/refiner_v1.0.json
you should probably pin this
@@dynoko3295 I thought this was always pinned, this explains a lot of comments :(
Looks like they took down the SDXL model...
looks like they updated the base+refiner models, there were some issues with the VAE so they are probably (hopefully) better now
@@CoderXAI I do not see the tensor model in that link. Is it somewhere else?
Clipdrop note: Stable Diffusion XL (watermarked) is 400 images per day, not per month, for free users.
Hey CoderX,
I have been trying to generate some, umm, spicy images but I can't seem to. I'm using ComfyUI because I can't run Automatic1111 or Vladmandic.
Is it ComfyUI's problem?
Also, I used Absolute Reality and it generates what I want, but it censors my images again if I try to send them to the refiner.
Can you please help me?
Thanking you,
Yours faithfully,
Harold
Appreciate a guide that is neither over-explained nor under-explained. I was curious about Comfy after finding that it seems to avoid out-of-memory errors, while A1111 crashes with this model. I guess I should have gotten more VRAM.
Just came from another video trying to sell a one-click download LOL. Man, thanks for the quick and concise tutorial!!
yep SEcourse's video was suggested right after this lmaoo
Where do you put the Refiner Model in Auto1111?
Same place/path as the base model.
I have 6 GB VRAM, but there is an error while running update.bat. How can I fix it? It says:
"return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 6.00 GiB total capacity; 4.06 GiB already allocated; 14.71 MiB free; 4.14 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Stable diffusion model failed to load
Applying attention optimization: Doggettx... done."
I have 8 GB and the same problem
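For what it's worth, both hints in that error message map to things you can set in webui-user.bat before launching. A minimal sketch assuming a 6-8 GB card; the flag choice and the 128 MB split size are starting-point guesses to tune, not official recommendations:

```shell
rem webui-user.bat -- tentative low-VRAM setup for a 6-8 GB card
rem --medvram trades speed for memory (try --lowvram if it still OOMs);
rem --xformers enables the memory-efficient attention optimization
set COMMANDLINE_ARGS=--medvram --xformers

rem The "max_split_size_mb" hint from the error: cap the CUDA allocator's
rem split size to reduce fragmentation of reserved-but-unused memory
set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128

call webui.bat
```

If it still runs out of memory, dropping to --lowvram or closing other GPU-using apps usually helps more than any allocator tweak.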
Help! When I make an image I get an error: *Folder "outputs\txt2img-images" does not exist. After you save an image, the folder will be created.* I don't know how to fix this.
Where did you get the refiner.json from?
Yeah, this was definitely unclear.
Does Deforum support it?
I see outputs are saved in the webui folder. Is there any chance that the prompts used to create them are saved somewhere too?
XL models won't load in my A1111 UI. It's not the GPU, and I've tried reinstalling, updating, etc.
I can't get the refiner to work... I keep getting "ERR reconnecting..." with both 0.9 and 1.0. I have tried updating, but that didn't fix the issue.
Anyone getting this error? "RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check"
could be some sort of wrong torch error bug, here's a relevant link: github.com/AUTOMATIC1111/stable-diffusion-webui/issues/9402
Been getting that on 1.4 update and even after a clean install of 1.5
I run into this problem when using the refiner.
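In case it helps anyone hitting the "Torch is not able to use GPU" error above: the flag literally goes into COMMANDLINE_ARGS in webui-user.bat, but note it only disables the check, so you'd render on CPU. The error usually means a CPU-only PyTorch build got installed, so reinstalling a CUDA build into the venv is often the real fix. A sketch; the cu118 index URL is an assumption, match it to your driver's CUDA version:

```shell
rem Option 1 (webui-user.bat): bypass the check and run on CPU
set COMMANDLINE_ARGS=--skip-torch-cuda-test

rem Option 2 (run from the webui folder): replace the CPU-only torch
rem with a CUDA-enabled build inside A1111's virtual environment
venv\Scripts\activate
pip install --force-reinstall torch torchvision --index-url https://download.pytorch.org/whl/cu118
```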
Any advice for getting this working with AMD GPU?
I’d like to know as well
When using the refiner, do both models occupy VRAM simultaneously, or does the base unload to offer more space to the refiner?
Base unloads and then the refiner loads in ;)
I have this error: Stable diffusion model failed to load
Loading weights [31e35c80fc] from C:\Users\Documents\Stable Diffusion\Webui2\webui\models\Stable-diffusion\sd_xl_base_1.0.safetensors
what folder does the refiner go into?
where do you put the refiner file in the 1111 webui folder?
Excited about 1.0. Going to try today.
Btw the background music is wonderful
And your explanation is clean, clear and to the point.
thank you, you're too kind! I had been planning this video for over 2 weeks now since SDXL 1.0 was supposed to launch on 18 so it feels good that it has been helpful to others :D
@@CoderXAI I agree, you have done a great job making this accessible and easy to understand. Thank you so much. May I know what the music is please?
@@CoderXAI hello, can I batch-modify 10 frames with img2img like we used to do in the old Stable Diffusion?
I keep getting this message: "Creating model from config: C:\Users\Admin\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Failed to create model quickly; will retry using slow method." Do you know why?
When installing via run.bat I ran out of disk space and had to remove folders to make room. I've now finished the installation, but when I launch SD and press the generate button it says "error" :( Solutions??
Is it impossible to use the refiner in Automatic1111?
I click generate and nothing happens
How do I do the things that were in Automatic1111 inside ComfyUI, like inpainting, img2img, etc.?
Was able to install and run with both interfaces, thank you
I am trying to use Automatic1111 and sdxl-refiner-1.0 and have a memory issue. Is there a way to set it up to use the CPU, since most of my GPU memory is reserved by PyTorch? This is the error I get; it loads up but can't run a prompt: "Tried to allocate 64.00 MiB (GPU 0; 8.00 GiB total capacity; 7.20 GiB already allocated; 0 bytes free; 7.33 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation."
I use A1111 with CPU only, and prompts that take 4-5 minutes with v1.5 now take 5 hours with SDXL (and just at 512x512; I haven't tried higher resolutions yet!)
You can increase the memory as much as you want (physical is better, but swap/virtual should be OK), but memory *is not* your real problem if you use only the CPU for SDXL.
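To answer the CPU question above concretely: A1111 does have flags for this. A hypothetical webui-user.bat forcing everything onto the CPU (expect the multi-hour SDXL times described in this thread):

```shell
rem Run every model component on the CPU instead of CUDA;
rem --no-half avoids half-precision ops that CPUs handle poorly,
rem --skip-torch-cuda-test stops the launcher from requiring a GPU
set COMMANDLINE_ARGS=--use-cpu all --no-half --skip-torch-cuda-test
call webui.bat
```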
Don't know why/how, but my ComfyUI works way slower than Auto1111. Does Comfy need more VRAM to generate images or something like that?
I'm using Vlad Diffusion; the SDXL model was loading, then stopped at 70 percent.
I don't really get why ComfyUI generates images with the refiner in like a minute on an 8 GB card, but it takes like 6 minutes in A1111.
I won't be using this. It takes like 5 minutes on my card to generate one 512x512 image; what the hell, I can make like 12 images like that with Rev Animated in 1 minute.
When I try to load the refiner JSON, nothing happens. I downloaded the updated version, btw.
Can you install it on the existing SD?
Was your image gen sped up? I have a 3060 with 12 GB VRAM and it takes about a minute for me with base+refiner.
same graphics card, around similar speed (~40-50s)
What do you think about Happy Diffusion?
What do YOU think about Happy Diffusion?
Can we use all Automatic1111 ckpts in ComfyUI??
You ever find out?
I have an RTX 2070 and 16 GB of RAM, but I keep getting OutOfMemoryError: CUDA out of memory. I have xformers installed and I turned the Token Merging ratio up, but I still get the error. Any idea how I can resolve this? Using Automatic1111.
2080 super here, same
I have a 2070 too; XL and A1111 don't work, but Comfy works fine.
Hi, when I upload the model, give a prompt and generate, the web UI doesn't even move. Am I doing something wrong?
does it say anything on the terminal? it'll either throw an error or show what it's loading/doing
@@CoderXAI That is the thing, no error message. It just stays with the text that comes right before the image starts rendering, and stays there forever.
Great explanation and guide. Thank you.
What's the difference between the refiner and the base model, please?
The base model is what generates the image/the main model.
The refiner is an additional model that takes the generated image and adds more details to it (so it's kind of optional).
@@CoderXAI Thank you for your explanation 🐬
Noob question: why do you make a new installation of Automatic1111 instead of simply adding the SDXL model to the Automatic1111 build that you were already using?
If you already have an existing Automatic1111 install, you can update it using the update.bat script or manually; you don't need to do a fresh install. You do need to update, though, since A1111 only recently added SDXL support and older versions won't work with it.
I wish there were a full explanation of how to install it from the very beginning, with the other app that you had before.
(EDIT: I can confirm this issue is fixed with the new update of Automatic1111.)
I followed your previous tutorial, and every time I launched Automatic1111 it would re-download all the pytorch.bin files and take like 10 minutes to launch the web UI. I really hope that doesn't happen this time, but is there a way to prevent it?
I'm still downloading the files, so this issue might be fixed, but I've yet to see. I'll update you.
Pfff, bye Midjourney? I don't think so. Everyone is calling it an MJ killer. It's not. Both have pros and cons. MJ still usually looks better. SDXL is an improvement, sure, and you have more control and can do NSFW, but it still can't compare to how MJ images look.
Something's definitely amiss with the Clipdrop version, which you would think would set a good example of how good SDXL is supposed to be. At present it can't even render a spoon on a white background; in fact, nothing with a white background. MJ can do objects on white backgrounds in its sleep. According to SDXL, a spoon is a DSLR camera, and a dessert spoon is a dessert with a camera in the middle of it. I also added 'vector style' yesterday and SDXL wouldn't render anything. MJ does all that with no problems.
Try to generate Xi Jinping in Midjourney (you can't)
@CoderXAI Great video, well done! And can we use ComfyUI and Automatic1111 on the same PC without problems?
Yep, if one works the other should run as well. You'll run out of graphics card memory if you run both at the same time, though! Also, ComfyUI is currently faster for most people.
@@CoderXAI tks for the infos
much helpful. Ty CoderX
So it's basically a checkpoint, I suppose?
ComfyUI works much better for me. With the same prompts it took Automatic1111 almost 30 minutes to generate a 1024x1024 image, but took only 8 seconds in ComfyUI !
No links in the description. (looks for another tutorial)
it's the pinned comment, have some restrictions on adding links to the description for now
@@CoderXAI Oh ok, thank you for letting me know.
can i use my SD1.5 lora on SDXL ?
Nope
Very helpful. Subbed.
refiner by camenduru > 404 - page not found
thanks for letting me know. they've updated the file to refiner_v1.0.json and I've updated the link in my comment as well
@@CoderXAI 🫡
Can SDXL 1.0 make NSFW models or images, just like SD 1.5?
I do not think so. Probably it needs a lot of fine-tuning, like 1.5 did.
It isn't censored, like 2.1 is, so yes you can do NSFW but I don't know how well they'll compare to 1.5. Loras etc will help with that too once they start releasing.
@@Elwaves2925 Uncensored doesn't mean trained. You can't do the same NSFW stuff as with 1.5. Some nudes, and that's it.
@@danielhernanalonso7219 Obviously they don't mean the same thing but I also don't see the OP asking anything about training. Their comment is vague and can be read multiple ways, we both read it differently. Seeing as 2.1 was heavily censored, it made sense they were asking about that. 🙂
@@Elwaves2925 That's true. I guess the answer is "yes, it can," but at the same time it can mislead him, because he can't do the same NSFW stuff right now.
It was very useful, thank you.
This video was very helpful. Thank you so much.
Thanks to your instructions I got it running, but so far the results are disappointing (haven't tried the refiner yet). It feels like starting all over again...
Right, 0.9 was better at generating quality images.
I only downloaded the refiner and not the base; should I be downloading both? Is there documentation or a tutorial on how to use ComfyUI? Thanks.
Does it run offline?
Is it gonna work on an RTX 3050 laptop?
No, too little VRAM. Or maybe you can run it CPU-only and still generate pictures, even though it would be slow.
Thanks a lot, get well soon!
For those who have trouble with Python dependencies, use this as a last resort to send all those dependencies to the void of darkness (fix my torch/cuda/xformers etc., lol). Note: this is the interactive cmd form; double the percent signs (%%P) if you run it from inside a .bat file:
for /F %P in ('py -3.10 -m pip freeze') do py -3.10 -m pip uninstall -y %P
Thanks! :)
Thank you!
"Load this refiner.json file"?
i've updated the link in the pinned comment to refiner_v1.0.json, please load that
All good, except Keanu has only 4 fingers...
👋
ComfyUI is faster 😀
I'll never buy anything with AMD on it...
LULW, I think it might work on chonky AMD cards. Comfy has instructions for AMD+linux and auto1111 has some unofficial support but not sure if that works with SDXL as well.
github.com/comfyanonymous/ComfyUI#amd-gpus-linux-only
github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs
Indeed
I am using an RX 6800 on Linux; it works pretty well actually.
@Cutieplus Of course not, xformers uses CUDA.
@@kano326 ........linux