I’ve watched DOZENS of videos on this subject, and this one finally made the topics of models, loras, and how to use them make sense. Thanks!
Straight to the point, no talking around the topic for minutes. Subbed 👍🏼
I just wanted to say thank you for your simple explanation. This video is extremely helpful for someone who doesn't know anything about stable diffusion and it's just getting into it. Thanks for making this video!
Great, I needed to know exactly where to place that downloaded file, and how to trigger it, thank you so much!!
Thank you so much for explaining and demonstrating this in very straightforward and easy way. Been so confused and lost on where to start because there is so much content to sort through to find the tutorial specific for my questions. 👍
Play around with your LoRA weight and with mixing model checkpoints to get better results.
Prompts need to be inserted carefully; some prompts will give noisy images with unnecessary blur and spots. E.g. ((Soft Light, Sharp:1.3)): the weight can be adjusted from 1.1 to 1.5, and more will make noisy images.
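For context, the weighting the comment above describes is AUTOMATIC1111's attention/emphasis prompt syntax. A rough cheat sheet (styleName is a placeholder, not a real LoRA):

```
(soft light:1.3)       explicit attention weight of 1.3 on "soft light"
((soft light))         each extra pair of parens multiplies attention by 1.1
<lora:styleName:0.7>   apply the LoRA "styleName" at strength 0.7
```

Weights much above ~1.5 tend to produce the noise and artifacts mentioned above.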
Thank you for your video!!! Really appreciated it. I just noticed that the checkpoint (Realistic Vision) also lists a base model, SD 1.5, just like a LoRA does. So for a checkpoint, can we ignore the base model and directly use the model that we download? Is that correct?
Clear, straight to the point, kudos my man
fantastic tutorial man, absolute class
Nice that I found this one. The other ones teaching how to add a LoRA involved a slew of PowerShell commands and the use of DreamBooth, in addition to the Kohya GUI. I just wanted to know whether I could just drop the files from the one site into a folder, or whether I had to go through some convoluted set of hair-tearing efforts.
If you want to use a LoRA at 1.0 weight, the prompt's "action" must match the LoRA's "action", or a random image will appear.
For most LoRAs I use 0.5, which generates nice results :)
0.5 doesn't work if you want to make, let's say, a wallpaper/photo of yourself or someone else, since even 1.0 has deformities: eye color not matching, head shape being off, the head/face build being more masculine or feminine than it really is, etc.
How do you change the weights of LoRAs or checkpoints? Help, I can't find it.
How do you ACCESS Stable Diffusion??
I've been confused out of my mind trying to understand this. Is it a website? Is it software? What *IS* it?! The UI you're showing for Stable Diffusion, I've never seen it anywhere before.
How do I access Stable Diffusion? Please help.
I'm confused too because mine looks totally different
This shit's confusing for no reason 😂
The red button isn’t there in my stable diffusion 1.5, what do I do?
I'm new. I don't see this either. Have you found it? How?
How do you make the "Show/hide extra networks" button visible? On my Pinokio Stable Diffusion 1111 this button doesn't appear.
Great tutorial! Would like to see you tackle training your own model locally. There are several guides out there, but they're all from several months ago and not up to date. Not to mention a lot of people have been having issues with Automatic1111 and DreamBooth, with xformers not working correctly.
Thanks, I can't give a timeframe on when I'll do one, but I think that would make for an interesting tutorial. Most likely it would be a LoRA training one as opposed to a full DreamBooth model.
@@neoProfessor Please help me, I need some serious help. Just like you did for Automatic1111, could you do a tutorial on the Flying Dog Gyre standalone? It's another amazing tool, though not as amazing as this one. Please make a tutorial on it and on how to load models and LoRAs into it 😢
I've just realized there are 2 checkpoint types: some are trained, the others are merged. Can you tell me what the difference is?
Thank you for the video!
A better analogy would be having custom parts that you could choose to install on your car temporarily. Because when you use a LoRA you don't actually modify the original checkpoint model.
So these "mods" are made by stable diffusion or by people outside the company? And how does it work together with the generator program? I just like to know how things work.
People outside the company. There are guides online if you want to make your own and upload it to civitai.
How it works (simple): it takes the original models used by the program and either replaces them (checkpoint, also known as DreamBooth) or modifies them (LoRA).
How it works (technical): ruclips.net/video/dVjMiJsuR5o/видео.html
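To make the "modifies them" part concrete: a LoRA stores a low-rank update that gets added on top of the checkpoint's frozen weights at load time. A toy NumPy sketch of the math (the sizes here are made up; real SD layers are far larger):

```python
import numpy as np

# A LoRA never overwrites a weight matrix W; it adds a low-rank update:
#   W' = W + scale * (B @ A), where A and B are small matrices.
d, rank = 8, 2  # toy dimensions for illustration
rng = np.random.default_rng(0)

W = rng.normal(size=(d, d))     # frozen checkpoint weight
A = rng.normal(size=(rank, d))  # LoRA "down" projection
B = rng.normal(size=(d, rank))  # LoRA "up" projection

scale = 0.5  # the strength you set in the prompt, e.g. <lora:name:0.5>
W_patched = W + scale * (B @ A)

# Subtracting the update restores the original checkpoint exactly,
# which is why a LoRA can be swapped in and out without harm.
assert np.allclose(W_patched - scale * (B @ A), W)
```

This is also why the "temporary car parts" analogy above fits: at scale 0.0 nothing changes, and removing the LoRA leaves the checkpoint untouched.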
I'm trying SD 1.5 cyberealisticv50 and the pictures are always goofy, needing editing at best. I think others don't have this problem. SDXL seems so much better; maybe it's because I don't use embeddings?
Now I gotta figure out what "Add network to prompt" is.
How come in my Stable Diffusion I don't have the refresh options under the "Generate" tab?
Hey bro, I need help.
I did save the checkpoint in the right folder: E:\Stable Diffusion\stable-diffusion-webui\models\Stable-diffusion.
But when I go to Extra Networks, open the Checkpoints tab, and click refresh, nothing shows. I have 3 models currently saved and none are showing.
Any thoughts?
How do you load a checkpoint-merge safetensors file without the UI? All tutorials and YouTube videos lead to AUTOMATIC1111 or diffusion web UI instructions...
Best in painting tutorial I have seen bro. 🙏🏾🙏🏾
Great, short, and substantial tutorial. Straight to the point, with examples. I subscribe with pleasure!
Excellent tutorial. Keep up the good work. Subscribed.
Please use the WebUI in dark mode so the settings are more visible in your video. Thanks. I didn't know that I needed trigger words for some models.
Thank you for a clear explanation. Very good. 🙂
Sometimes when I try to follow example photos to a T, they still come out inferior.
great explanations. keep it up mate
Thank you
Sometimes, increasing or lowering the LoRA weights is needed to get a better result.
Very much so. Experiment with weight and keywords.
It's like Midjourney.
Then how do I download SD 1.5?
Typically LoRAs are not trained on base SD 1.5 but on a model derived from SD 1.5 (this is what "SD 1.5" on Civitai tells you). You will generally get the best results using the same model the LoRA was trained on (unfortunately, most people don't specifically mention it on Civitai) and varying results with other models in the same generation (SD 1.5 or SD 2.0). Using an SD 1.5 LoRA with an SD 2.0 checkpoint will simply break the LoRA; that's why Civitai differentiates it so explicitly on the LoRA page.
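When the Civitai page doesn't say, the base model can often be read out of the LoRA file itself: kohya-trained LoRAs embed training metadata in the safetensors JSON header. A stdlib-only sketch (the `ss_*` key names are kohya conventions and may be missing from files made by other trainers):

```python
import json
import struct

def lora_metadata(path: str) -> dict:
    """Return the __metadata__ dict from a .safetensors file's JSON header.

    The format starts with an 8-byte little-endian length, followed by
    that many bytes of JSON describing tensors plus optional metadata.
    """
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]
        header = json.loads(f.read(header_len))
    return header.get("__metadata__", {})

# Hypothetical usage; the key names below are a kohya convention, not guaranteed:
# meta = lora_metadata("my_lora.safetensors")
# print(meta.get("ss_sd_model_name"), meta.get("ss_base_model_version"))
```

If the metadata names a derived model rather than base SD 1.5, pairing the LoRA with that checkpoint usually gives the best results, as the comment above explains.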
I'm using 2.0 and have had good luck with LoRAs.
I was wondering why. Thank you
How can i install Stable Diffusion ?
How to switch from realisticVision 2.0 to stable diffusion 1.5?
Why is the site civitai using 20% of my CPU? I'm not doing anything on this site and my CPU begins to run high.
Whenever I use an anime LoRA with Stable Diffusion 1.5 I get really bizarre results.
Could you please include your negative prompts in the description? I've been suffering with all of these uncanny generations lol
Excellent and informative video, got what I was looking for, ty!
I like your analogy explanations, great job.
I have to stick with the free creations I can make online because my computer is not up to the job of using this locally, but they're still cool looking images I get
This was helpful, thanks!
I always get a ton of extra hands, glitched limbs, floating spare heads, etc. With any models I download it's the same.
Best explanation!!!!!!!!!!
Hi, I am wondering if you have any contact details?
I don't see "Show/hide extra networks". But thanks for this. Subscribed. Hope to see more videos; the last ones were 4 months ago.
I'm new. I don't see this either. Have you found it?
@@charlesovatar It's in the red pencil logo under the generate button.
thanks, very helpful
Thank you soooo much
My characters look weird when I generate full body shots.
Somehow my results are unsatisfying with Stable Diffusion Automatic1111 when generating locally on my PC with my old GPU, a GTX 1080 Ti 11GB. I can't get anything close to even looking good or comparable. If I'm honest, Automatic1111 local generation can't even compare to the simple, basic generation of Leonardo AI. With that said, am I missing anything? People claim they were able to generate stunning art with Automatic1111, but when I tried it, the results were so childishly unsatisfying, worse than a 5-year-old's drawing. I've tried about 300 generated images now, and I can only conclude that it won't take me anywhere without a better guide to more pleasing results.
I got a 4090 just for AI art XD
Uhh, a little correction to the title: "Why everyone else's Stable Diffusion IMAGES are better than yours".
the amount of porn on civitai is crazy lol
good sh*t