Stable diffusion tutorial - How to use Two or Three LoRA models in one image without in-paint
- Published: 23 Sep 2024
- #stablediffusion #stablediffusiontutorial #stablediffusionai
🚨 Attention! 🚨
The background color for the mask is wrong in the video. 🎭🎥
Kindly use black color as the background color for the mask. ⚫️
See this post:
ruclips.net/user/postUgkxtMDWmmpuJ86PuSNjpld-6GxItLtTEypw
📷 Pnginfo used in the video 🔍
bit.ly/3P6Vno7
☕️ Please consider supporting me on Patreon 🍻
/ lifeisboringsoprogramming
☰ Rent a GPU at vast.ai 🔥
cloud.vast.ai/...
👩🦰 LoRA model 👇🏻
bit.ly/40imkqY
🌐 sd-webui-additional-networks ✨
github.com/koh...
Using two or three LoRA models simultaneously in a single image, without inpainting techniques, can be a daunting task for anyone interested in Stable Diffusion and AI art.
But fear not! If you're passionate about Stable Diffusion and advanced LoRA training in the realm of AI art, we have the solution you've been looking for.
In this RUclips video, we'll show how to effectively integrate two or three LoRA models into one image, without relying on inpainting, to unlock the true potential of your AI art.
So, if you're eager to take your Stable Diffusion skills and AI artistry to the next level with multiple LoRA models, stay tuned and let's get started!
Thanks, I kind of gave up on Latent Couple and Composable LoRA; they were just underperforming. This method has ZERO mask bleeding even if the subjects are touching. Works wonders!
Thanks for giving us this method guide.
I've tried two different methods (yours and Regional Prompter), both of which support 2 or more different LoRAs in one picture.
When using Regional Prompter, generation is slow, and I need to keep experimenting to achieve the best results. Moreover, the LoRAs don't consistently adhere to the specified regions when depicting characters. For example, if I use a 1:1:1 ratio to generate an image with an object + LoRA character A + LoRA character B, the result could be object + character A + character A, or object + character B + character B. The composition ratios might not even match the intended 1:1:1.
Here's my test prompt using Regional Prompter:
realistic, photo, 2 persons walking on the street, side by side, ADDCOMM
street stalls, ADDCOL
model_suesy, pink suit, <lora:model_suesy:0.5:1>, ADDCOL
model_dennis, short hair, white suit, <lora:model_dennis:0.5:1>
It seems the TE weight isn't recognized by Regional Prompter (only the Unet weight works well), and the results are inconsistent.
On the other hand, with this method, most manual operations don't appear to have a waiting issue. And that's amazing.
However, I'm unsure why this method usually generates multiple heads and extra figures in the masked area when using my self-trained LoRA, although it works fine with Regional Prompter at about 70% accuracy. My LoRA consistently tends to fill the mask and alter the background, even when I fix the seed. Each time I modify the parameters, the image changes a bit regardless of its location. (Holding the seed does not help here and can even lead to misshapen limbs or fingers.)
And here is the test prompt (I'm using trigger words for my LoRA characters) with this method:
photo,masterpiece,model_suesy,model_dennis, sitting,smile, looking at viewer,european,canteen,
I tested for a whole afternoon and found the accuracy to be nearly 10% using the same self-trained LoRAs.
It "kinda" works, but I think Regional Prompter will work better for me.
It took me 6 MONTHS to finally find you! You are an angel sent from GOD HIMSELF. (New subscriber)
Hi, what about Flux? Can you do this with it?
That tutorial was mind-blowing! Amazing. Please keep making more!
Thanks, will do!
Usually I play with the weights when I use LoRAs, no more than two. But I like this new method. It will help me a lot :). Now I'll be able to use more than two LoRAs. Thanks!
Glad it was helpful!
You are straight to the point , thank you so much
I can’t wait to try this. Have this video saved for watch later. I can’t wait!
Hope you like it!
It’s not working. I’m doing something wrong.
Probably something small, like a checkbox somewhere. I'll restart the PC and try again tonight.
@Santo Valentino
The background color for the mask is wrong in the video. 🎭🎥
Kindly use black color as the background color for the mask. ⚫️ #VideoCorrection #TutorialUpdate #MaskingMatters 🎨🔧
Please see this post:
ruclips.net/user/postUgkxtMDWmmpuJ86PuSNjpld-6GxItLtTEypw
@@life-is-boring-so-programming oh cool I’ll check that out this week!
Really liked the video. I will definitely try this in the near future when starting to create the storyboard for our story 😊
This is what I want. Thank you master 🙏
You got it!
I have to say this is great tutorial!!!!! Loving it.
Thanks for sharing this knowledge, subscribed. 🤗
Thanks for the sub!
In my case, the Additional Networks tab does not send the model to the txt2img window's Additional Networks panel.
The missing tutorial.
Thank you!
Additional Networks seems to crash with a KeyError when I try using the mask in extra args. Is there a specific format the mask is supposed to be in?
Me too. Did you figure it out?
Please make a ComfyUI workflow for Flux where you can load 2 LoRAs of different people and put them into one image.
Thanks a lot for this upload. Was wondering if you could make a tutorial on sd-cn-animation as well? ❤
I can try
You can make a symbolic link to use the existing LoRA models instead of duplicating them in the extension's folder.
Example, in a command prompt:
mklink /J E:\A1111\stable-diffusion-webui\extensions\sd-webui-additional-networks\models\lora E:\A1111\stable-diffusion-webui\models\Lora
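For anyone not on Windows (or who prefers scripting it), here is a rough Python sketch of the same idea. The `link_lora_folder` helper and the paths are illustrative assumptions, not part of the extension:

```python
import os
from pathlib import Path

def link_lora_folder(src: str, dst: str) -> Path:
    """Symlink an existing LoRA folder (src) into the extension's
    models directory (dst) so the model files are not duplicated."""
    src_path = Path(src).resolve()
    dst_path = Path(dst)
    if not src_path.is_dir():
        raise FileNotFoundError(f"source LoRA folder not found: {src_path}")
    # make sure .../models exists before creating the link inside it
    dst_path.parent.mkdir(parents=True, exist_ok=True)
    if not dst_path.exists():
        # target_is_directory is required on Windows, harmless elsewhere
        os.symlink(src_path, dst_path, target_is_directory=True)
    return dst_path
```

Note that on Windows, `os.symlink` may require administrator rights or Developer Mode; the `mklink /J` junction above avoids that.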
Thanks for the tip.
You mean put a link to the files on your personal computer into SD?
I am not going to paint a mask every time I generate. Is there a way to make the LoRA prompts not mix without this?
Wow, that is pretty awesome, and you explained everything so clearly with good examples. Thank you.
Glad you enjoyed it!
What is the difference compared to regional prompter ?
Great Tutorial!!!!
I have a problem: when I choose a LoRA in the "Additional Networks" tab and send it to (1), it does not show up in txt2img -> Additional Networks. I can choose the LoRA manually, but when I generate the image it will not be used. Any idea why?
Same problem
Thank you so much
You need to use this
ruclips.net/video/q-KGRRFARk4/видео.html
Thanks, it works, and I learned a lot.
Now I need more time to make the result look good.
Great!
We need to use this for Flux. Any updates?
I decided to try out Latent Couple and Composable LoRA, and with a bit of tweaking it works well, much simpler than this mess. So I guess it was just a skill issue / not learning how to actually use the extensions.
Never mind, it turns out the extension is just broken, but you can still get multiple LoRAs to play well together by just reducing their weights.
@@sevret313 Which extensions? Reduce weights to what?
The result wasn't great, but it can definitely be improved with inpainting. Thanks!
Thanks for watching!
Not sure why mine doesn't separate the LoRAs. What could be causing this?
That AI-generated voice is still not quite there yet...
I watch everything at 1.5x 😂
It's like UK... American... with a bit of Irish?!
It's not too bad 😊😅
Does this only work with SD 1.5? I am trying with SDXL but it is not picking up the LoRAs.
How can I add my AI influencer and an influencer from another account in the same picture? And can this be done on Colab Fooocus?
Does it work well with SDXL? Latent Couple doesn't work with SDXL: it seems it doesn't support SDXL models, and moreover it doesn't support text-encoder masking, so if a LoRA has trigger words it won't work :(.
Nice tutorial. And how do you put more than 5 character LoRAs on one canvas? Thanks a lot.
Learned something new 😀
which software did you use to generate the voice narration?
I'm using coqui_tts, you can check it out in this video
ruclips.net/video/aQgob9wLZdE/видео.html
ControlNet? I don't see that anywhere.
My characters end up looking the same no matter what I do. Any advice?
And Latent Couple is currently broken... I need a workaround.
top notch content
It's nice, but with inpainting you are faster, and maybe get better control and results. But nice video, thanks! :)
How did you assign the colours to the masks? Is it random? Did the LoRAs specify the colours needed?
No, the LoRA model does not specify the colour.
It is you who assigns the mask to the LoRA model:
- the red channel in an RGB image for LoRA 1
- the green channel in an RGB image for LoRA 2
- the blue channel in an RGB image for LoRA 3
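To make that channel convention concrete, here is a stdlib-only sketch. The image size, region coordinates, and PPM output are my own illustration (not what the video uses): the mask is one RGB image with a black background, and each LoRA's region is a pure single-channel colour.

```python
# A toy 90x60 RGB mask built as rows of (R, G, B) tuples.
# Background stays black (0, 0, 0); each figure's region is a pure
# single-channel colour: red -> LoRA 1, green -> LoRA 2, blue -> LoRA 3.
WIDTH, HEIGHT = 90, 60

def pixel(x: int, y: int) -> tuple:
    if 10 <= y < 50:
        if 5 <= x < 25:
            return (255, 0, 0)    # region for LoRA 1 (red channel)
        if 35 <= x < 55:
            return (0, 255, 0)    # region for LoRA 2 (green channel)
        if 65 <= x < 85:
            return (0, 0, 255)    # region for LoRA 3 (blue channel)
    return (0, 0, 0)              # everything else: black background

def build_mask(width: int = WIDTH, height: int = HEIGHT):
    return [[pixel(x, y) for x in range(width)] for y in range(height)]

def save_ppm(mask, path: str) -> None:
    # Write a binary PPM (P6); convert it to PNG with any image tool
    # before feeding it to the extension, which expects a normal image.
    h, w = len(mask), len(mask[0])
    with open(path, "wb") as f:
        f.write(f"P6\n{w} {h}\n255\n".encode("ascii"))
        for row in mask:
            for r, g, b in row:
                f.write(bytes((r, g, b)))
```

In practice you would draw the same flat-colour regions in any image editor; the point is only that each region must be a pure red, green, or blue on a black background, so each colour channel cleanly selects one LoRA.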
I have a lot of LoRAs but they don't appear in the Additional Networks model list. What's the problem?
Put the LoRA models (*.pt, *.ckpt or *.safetensors) inside the sd-webui-additional-networks/models/LoRA folder.
@@life-is-boring-so-programming ohh, thanks!
@@life-is-boring-so-programming I found a way to save some space: I installed Hardlink Shell Extension and created a symbolic link for the LoRA folder from Stable Diffusion, linking it with sd-webui-additional-networks/models/LoRA.
yes Symbolic Links worked
Hmmm. It "kinda" works. If you set your LoRA strength lower than 1, then the likeness isn't quite right and some features from person 1 are copied to person 2, although they still look like two distinct people, just like they might be related. The lower the strength, the more "blended" your people will turn out. Furthermore, if you do have the strength set to 1 for two or more characters, then the overall image quality is poorer, even with high-res fix enabled.
That's what's happening to me. Everything gets overtrained, aka wonky.
@@SantoValentino Me too, and I think the Regional Prompter extension may be better for me.
I'm new to SD. How can you drag the OpenPose stick figure to the left image? I tried but nothing happened. I can only save the stick figure and drag the saved image into it.
1) drag an image to the image input
2) check the Enable box
3) check the Pixel Perfect box
4) check the Allow Preview box
5) choose the preprocessor to be openpose
6) click the 💥 button
7) drag the stick figure to the image input on the left
8) uncheck the Allow Preview box
9) reset the preprocessor to None
10) choose the model to be openpose
11) generate
@@life-is-boring-so-programming Thank you for the reply. My problem is step 7: nothing happens when I drag the stick figure to the left image; it doesn't replace the original image. I can only save the stick figure first and then drag the saved image in. I can do everything except that step.
@@life-is-boring-so-programming After checking everything, it turns out it was a browser problem. I was using Firefox; after switching to Chrome, it works fine. Sigh.
So you can just stick with the method that already exists:
- Generate 2 Jedi women with the Kristen and Jedi LoRAs
- Add the second face with the Jenni LoRA via inpaint
- Done
No unnecessary new extension, no painting masks in other programs, no jumping around between extension tabs...
Doesn't work in Forge, sadly.
I am searching for Flux-based LoRAs too.
Can anyone suggest how can we automate the masking process?
Maybe you can try Segment Anything.
@@life-is-boring-so-programming Segment Anything? How?
Pretty cool, but very tedious! Also, it's Kirsten, not Kristen!
👎
AI narrated ;-P
Your language is verbose
AI literally cannot create art.
I just love these comments 😂 It's like everyone making fun of crossfitters; still, they are fit as hell. It's like I put a pen on paper, leave it, and state to the world: the pen cannot make art 🤣🤣🤣
You do realize there's a human using the program, right? People need to use their brains more often...
@@Dante02d12 And yet that human is not creating anything. They are telling the AI what to steal from other artists. The dictionary definition of art does not allow for the byproduct of automated machines. It's just a fact: by the very definition of the word, art cannot be created by AI.
If people used their brains more, they wouldn't be scraping the internet with AI and claiming they created something they didn't.
Cope harder.
@@Satsujinki1973
Hey, let's do a test. Draw me a shklabaloum.
Hm? You can't? Weird. Does this mean you'll have to look up what a shklabaloum is online?
But then you're stealing! Wait, no, that's stupid. You're just looking up what it is so you know what you have to draw.
*That's all an AI does.* It has a database where visual concepts are tied to words, so we can use those words to describe what we want. *An AI doesn't steal. It uses a database to understand what is asked.*
Nothing is stolen, just like a human artist doesn't steal when they look up pictures of what they want to draw themselves.
Next """argument""" :"that human is not creating anything".
Yes he is. The AI is a tool, and prompting has gotten complex. Also, there's now a shitton of tools for AI art, to the point that the excuse "it just takes typing a line of text" is objectively wrong.
The "dictionary definition", lmao. Do I really need to explain to you that definitions don't mean shit? They vary from one dictionary to another.
For something as abstract as art, no one has an objective definition of it, so don't pretend there is one.
Also, there is literally no difference between an image made with an AI or an image made by a human. *You would fail in a blind test.*
So whatever the definition of art is, it applies to both AI and non-AI works, as they are the same.
Finally: with your shallow perspective of "there are AI images and non-AI images", where do you classify an image that I'd draw myself but enhance with AI? What about the opposite: images created by the AI but enhanced in Photoshop by humans?
The very idea that it is split between AI work and non-AI work is objectively wrong and limited.
For fuck's sake, think with your own brain. You're literally just spouting arguments you read elsewhere. Get informed, talk to people, and don't just blindly follow like a sheep. I've had that exact same conversation with 10 other people online just this week.
Maybe maybe not but it can create some excellent pornography
I use LoRA 1 at 0.6 and LoRA 2 at 0.7. A lot of the time I want characteristics from both LoRA models in the same image; I give 0.7 to the one whose likeness I want to dominate.
well done~