I noticed that you keep changing from 512 to 1024 manually every time. This can be made the default setting. Just update both width and height to 1024; then, under the Settings tab, there is a section called 'Defaults' towards the bottom of the left menu. All you need to do is press View Changes to verify you see 1024, then press Apply and Reload UI. Now it always defaults to 1024, and there's no need to change it manually over and over. This Defaults section works for pretty much any setting you want to keep as a default: make the change, view the change, apply the change, then reload the UI. Done.
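If you prefer editing files directly, the same defaults live in ui-config.json inside the web UI folder. A minimal sketch, assuming a stock Automatic1111 install and the usual key names (check your own file, since keys can differ between versions):

```python
import json

# Assumed default install location - adjust to wherever your web UI lives.
CONFIG = "stable-diffusion-webui/ui-config.json"

with open(CONFIG, encoding="utf-8") as f:
    cfg = json.load(f)

# Key names are assumptions based on common A1111 builds; verify in your file.
cfg["txt2img/Width/value"] = 1024
cfg["txt2img/Height/value"] = 1024

with open(CONFIG, "w", encoding="utf-8") as f:
    json.dump(cfg, f, indent=4)
```

Restart the web UI afterwards so the file is re-read.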
Thanks for the info Law & Order guy!
The video you shared is great; it contains almost all the information. Thank you very much~~
Thanks for the feedback!
1:28 😂 Sounds like he's ready to kill a man over 8 gigabytes...
Great tutorial. Thank you
Great, thanks. Does this work on Macs?
Love the video. Do you plan on making a tutorial to train models as well?
Excellent and straight-to-the-point tutorial; now I know what I was missing to get good quality images. Thanks. It would be great if you could do a tutorial on styles.
Thanks for the feedback, appreciate it. Regarding the styles tutorial - I'll think about it. For now, you can try adding styles.csv from
civitai.com/models/119246?modelVersionId=141384
or using this plugin:
github.com/eideehi/sd-webui-better-styles
or just browsing sites like weirdwonderfulai.art/resources/stable-diffusion-xl-sdxl-artist-study/
to see what styles are available and manually adding them to your prompts.
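For reference, styles.csv is just a plain CSV with name, prompt, and negative_prompt columns, where {prompt} marks the spot your own prompt text is dropped into. A minimal sketch of adding a style programmatically - the 'cinematic' style here is a made-up example, and the path assumes a stock install:

```python
import csv

# Assumed stock location, next to webui-user.bat; adjust for your install.
STYLES = "stable-diffusion-webui/styles.csv"

# Hypothetical example style - {prompt} is replaced with your own prompt text.
row = {
    "name": "cinematic",
    "prompt": "{prompt}, cinematic lighting, film grain, dramatic composition",
    "negative_prompt": "blurry, low quality, deformed",
}

# Append to the existing file (the downloaded styles.csv already has a header row).
with open(STYLES, "a", newline="", encoding="utf-8") as f:
    csv.DictWriter(f, fieldnames=["name", "prompt", "negative_prompt"]).writerow(row)
```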
First of all, great tutorial, and straight to the point. As a newbie, I would love it if you could make a tutorial on img2img and inpainting for SDXL, and how we can modify our own photos to look different. Also, after watching your other tutorial, I was wondering: can you use a trained model in img2img to change your own photo with the same style?
Thanks for the feedback. Yes, I have plans to make a tutorial on img2img and inpainting.
Without training, SDXL knows nothing about your face. Therefore, if you want a stylized photo that retains your original face, you have two options:
1. Without any additional training, you can use your photos in img2img and mask out the face, instructing Stable Diffusion to leave the original facial features untouched. This can be accomplished by using masks in inpainting (see the sketch below).
2. Train your own model using your photos. Once trained, you can use it in txt2img or img2img with standard prompts like 'photo of [me] riding a horse'.
If you wish to swap your face with someone else's, you can achieve this without custom models. Simply mask your face using inpainting and request a concept that Stable Diffusion is already familiar with - for example, 'a photo of Elon Musk'.
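For anyone who prefers scripting over the web UI, option 1 above can also be done with the diffusers library. A minimal sketch, assuming a CUDA GPU with enough VRAM - the file names are placeholders, and the mask should be white wherever you want Stable Diffusion to repaint:

```python
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image

# Load the SDXL base model in inpainting mode.
pipe = AutoPipelineForInpainting.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Placeholder file names: your photo plus a black-and-white mask where
# black = keep (your face) and white = repaint (everything else).
image = load_image("me.png").resize((1024, 1024))
mask = load_image("mask_all_but_face.png").resize((1024, 1024))

result = pipe(
    prompt="photo of a person in a sci-fi spacesuit, studio lighting",
    image=image,
    mask_image=mask,
    strength=0.85,  # how strongly the masked region is repainted
).images[0]
result.save("stylized.png")
```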
I'm running SDXL through ComfyUI on a rig with a GTX 1070 Ti video card and 16 GB of RAM. It's not super-fast, but it's fast enough that I don't quite feel like I'm wasting my time by running it locally.
Great video. Much appreciated.
Thanks for the feedback, glad you enjoyed it!
Good video, keep it up 👌
How do you enable Styles?
Excellent Video !!!
Now here is my problem: after installing sdxl_vae.safetensors, I can't see it in the SD VAE dropdown list. I tried Firefox and Chrome, but nothing. Can you help me, please?
It's so easy! I just pasted the sdxl_vae.safetensors file inside the VAE directory.
Sadly for me, on a 3060 @ 12 GB it takes a lot of time to generate one 1024x1024 pic (and just that, no refining or hi-res stuff). Great quality, but time is a major problem.
I got better results from the base model; trying to improve them through the refiner model makes them more contrasty and less realistic.
ComfyUI works very well with SDXL - I couldn't get any other UI to work on my 8 GB VRAM laptop.
Can you make a tutorial on how to enhance personal portraits for professional appearances - LinkedIn, for example? Great work, by the way.
I was following along until after the first test prompt (your Bengal cat), but then, for me, clicking Generate does absolutely nothing. :(
Please post your system configuration. How much RAM do you have, and what is your GPU? Check for error messages in the console (click on the launcher window) or in the web UI itself (on the right side of the screen, below all the buttons); otherwise, it's hard to help. Are you able to generate 512x512 images with the regular Stable Diffusion 1.5 model? If you see something like OutOfMemoryError, then you don't have enough memory.
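If you're not sure what PyTorch actually sees, a quick check from the web UI's Python environment can rule out hardware detection problems (a minimal sketch):

```python
import torch

# Report what PyTorch can see; if no CUDA GPU shows up here,
# the web UI will fall back to the CPU and be extremely slow.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, VRAM: {props.total_memory / 1024**3:.1f} GB")
else:
    print("No CUDA GPU detected.")
```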
I had checked for error messages, but didn't see any. I have enough RAM and VRAM. But I now think that the problem is with the browser I was using. My main browser is Firefox, because of the extra security it allows. I think the extra security was causing the web interface some problems whenever I clicked on generate (either javascript restrictions or perhaps cookies). By using an alternative browser with lower security settings (Waterfox in this case, allowing session cookies and javascript) the system works. I had to turn off the autolaunch setting (so that it didn't open SD in Firefox) and connect manually to the local IP address from Waterfox, but that's all that was needed.
@@vanarunedottir I'm glad you found the solution. The Automatic1111 web UI is heavily dependent on JavaScript. If you're using extensions like NoScript and have JavaScript disabled by default, it won't work until you add an exception
I have an RTX 3060 with 12 GB VRAM. Should I use it for fine-tuning models, or use RunPod?
" RTX 3060 with 12 gb vram"? My RTX 4070 only has 8 gb vram. I am so jealous!
I want to install SDXL on SageMaker Studio. Is there any how-to guide?
Sorry, I'm not familiar with SageMaker. I will be releasing a guide for cloud installation and usage soon.
SDXL just doesn't work for me - it doesn't load. I'm using Vlad Diffusion. Can anyone help?
Can you provide more details - errors or a console log?
keep getting errors when trying this - NansException: A tensor with all NaNs was produced in VAE. This could be because there's not enough precision to represent the picture. Try adding --no-half-vae commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
Have you tried selecting "sdxl_vae.safetensors" instead of Automatic? (ruclips.net/video/2AKSJTYpBfU/видео.html)
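If selecting the VAE doesn't fix it, the error message itself names the usual workaround: add --no-half-vae to the launch arguments. On a stock Windows install, that means editing webui-user.bat (a sketch, assuming the standard launcher script):

```bat
rem In webui-user.bat - keeps the VAE in full precision to avoid NaNs.
set COMMANDLINE_ARGS=--no-half-vae
```

Then restart the web UI.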
@@IT_explainer Please reply - will it work on a GTX 1650 4 GB (Nvidia GPU) with 8 GB RAM and a Ryzen 5 processor?
@@itzmranonymous I haven't tried this configuration, but I think you need more RAM. You can try increasing the pagefile setting: www.tomshardware.com/news/how-to-manage-virtual-memory-pagefile-windows-10,36929.html
@@IT_explainer thanks for the info 👍
How about ControlNet?
Show us how to fine-tune?
My SDXL model is visible in the list but won't load
Check for error messages in the console (launcher window). Are you certain you're using the latest versions of all components?
@@IT_explainer Yes, everything is up to date, on a very high-end machine - RTX 4090 and so forth.
Thanks for your video! Very informative! But I would recommend using Biden for your example instead.
Thanks for the feedback. The video featuring Biden has been released:
ruclips.net/video/TCr2U8n95zU/видео.html
PLEASE SHOW ME HOW TO GO BACK TO 1.5
Just choose "SD15NewVAEpruned.ckpt" from the Stable Diffusion checkpoint dropdown list and switch SD VAE to Automatic.
Yeah the Refiner Model definitely doesn't work on AMD :(
I don't know why, but after installing the SDXL model and applying the settings changes, my web UI stopped working, and now I can't even load the v1.5 model. Before, it was working just fine. Can you please help?
You can either remove the SDXL files from the model and VAE directories, or reinstall the launcher and the entire 1111 web UI to a new folder.
......Funny, I get 24 GB VRAM CUDA errors with A1111 after using my own trained models.
As for SDXL? Its realism isn't as perfect as they say it is - you get long necks and squished shoulders.
Illustration and anime content is a pain to train, and it's taking the community blood, sweat, and tears with barely anyone interested lol..
"SD 1.5 IS BAD COMPARED TO SDXL" is a bad statement on Stability AI's part - most of my SDXL models are barely used, but I'm still getting downloads on my 1.5 models ;)
THAT BEING SAID: SDXL is way better than 2 and 2.1, and I think giving it time is a big deal. One of the downsides is that the MODELS ARE 3x the size of 1.5, which can cause issues if you have low SYSTEM RAM. Not joking - I was on a 24 GB GPU on Vast a while back trying to do LoRA testing for SDXL, as TensorArt has had its SDXL bounty on for the last month. It would either give me two-GPU errors (and I was only on one) or run CUDA out of memory, and no matter what I did to reset it, it wouldn't recover..
So it could just be that A1111 has a classic sad memory leak that came back to bite it, because an RTX 3090 shouldn't die so hard XD
I don't get the hype. You can generate photorealistic images that are indeed indistinguishable from actual photos. But as soon as you start prompting for more sophisticated things, they're just ignored, or you end up with weirdness in your generated images, because SDXL just doesn't recognize as many tokens as community models of 1.5 do. I get that it takes the community time to train custom models off the XL base model to get to that stage. But currently (for me) SDXL is an absolute joke - unless you're satisfied with an image of a cat with grass.
👋
Won't need AI to show Trump in a prison jumpsuit.
And what about Joe Biden? He's closer to wearing that.
@@Alberto-d3z7z naaaaah, Biden is closer to a straitjacket
"Depict correct anatomy" - shows images with worst kind of anatomical issues :D
Would be nice, if not for the fact that I have an AMD GPU 🙄
I'm currently working on a tutorial for this scenario. Stay tuned!
Me too. AMD A9 7th gen 😅
@@footmaniax RX 5700 XT, so it has the power to run this kinda smoothly 😅 but no, like always, Nvidia hype everywhere 🤣
This AI voice is irritating; although it's artificial, it feels very artificial. If it were softer and a little faster, it would be better.