Different Region = Different LoRA! New ComfyUI Nodes for Area Composition
- Published: 7 Feb 2025
- Do you have multiple LoRAs and wish you could use them all in different areas? Well, the recent ComfyUI blog post on masking and scheduling LoRA and model weights introduces some new nodes that make it super easy! With hooks you can now attach LoRAs to conditioning, rather than going through the typical model pipeline. New combine nodes also make it easier to work with conditioning pairs. Together they help to build simplified workflows for all sorts of regional prompting tasks, and the hooks can be easily added to any existing workflow :)
Links:
blog.comfy.org...
github.com/com...
github.com/log...
Workflow Basics - • ComfyUI Workflow Creat...
Flux Redux - • Black Forest Labs Drop...
RF Edit - • Edit Images with Flux ...
Want to help support the channel?
/ regional-loras-117960010
== Beginners Guides! ==
1. Installing Anaconda for MS Windows Beginners - • Anaconda - Python Inst...
2. Installing ComfyUI for Beginners - • Install Stable Diffusi...
3. ComfyUI Workflows for Beginners - • ComfyUI Workflow Creat...
== More Flux.1 ==
Flux.1 in ComfyUI - • Flux.1 vs AuraFlow 0.2...
AI Enhanced Prompting in Flux - • Flux.1 IMG2IMG + Using...
Train your own Flux LoRA - • How to Train a Flux.1 ...
Really made my day!
This is amazing
Your video helped me a lot, thank you very much
And thank you!
Thank you, sir.
And thank you!
I kinda like working with the Multi Area Conditioning and Multi Latent Composite nodes. You can start with an empty latent at the original size, resize it to match your conditioning afterwards, then feed that scaled latent into the Multi Latent Composite and combine multiple latents that way.
Can you do img2img as well?
For the multi-area latent you can encode an image to latent and use that. For the multi-area conditioning I think a vision encode should work, but I haven't tested that yet. The way I usually do it is to create full-size images of the objects and scale them down to combine in the Multi Latent Composite; the prompts for the individual objects can be combined in the MultiAreaConditioning node, and the final step is to use the conditioning and latents together.
You can also create the individual objects at size by setting the latent size before processing, which avoids the scaling.
It is also possible to use individual LoRAs for each conditioning layer.
@@tuurblaffe I see. I will test that also. What about using multiple models instead of loras? Is there a way to do that?
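The latent-compositing idea described above can be sketched in plain NumPy. This is illustrative only: the function name and shapes are my own, and the actual ComfyUI nodes operate on torch tensors inside the node graph rather than code like this.

```python
import numpy as np

def composite_latent(base, patch, x, y):
    """Paste `patch` into `base` at offset (x, y), like a latent composite.
    Illustrative helper, not actual ComfyUI node code."""
    out = base.copy()
    h, w = patch.shape[-2:]
    out[..., y:y + h, x:x + w] = patch
    return out

# A 512x512 image's latent is 4 channels at 1/8 resolution -> (4, 64, 64).
base = np.zeros((4, 64, 64))
# A smaller "object" latent, e.g. an object rendered full size then scaled down.
obj = np.ones((4, 16, 16))
combined = composite_latent(base, obj, 8, 8)
print(combined[0, 8, 8], combined[0, 0, 0])  # 1.0 0.0
```

The region conditioning then just needs to cover the same area the patch was pasted into.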
Just a guess, but I imagine the author of Fluxtapoz nodes wanted them to be pronounced "Flux-ta-pose", you know, like "Juxtapose", which is what you're doing with the images...
I’m just going by the official google pronunciation 😉 The video is also about compositing, not juxtaposing…
thanks mate
thanks
Nice work. One thing: please check the blog post, as the workflow downloads are not working.
Hey! Nice video! Do you know whether the mask can be a colour segment instead of the black/white type?
I picked red, but yes, if you have an image with colours then you can use that
You really missed showing an example of what I think a LOT of people would want to use these new nodes for: two different character loras in the same image. That's always been impossible with Flux up to this point.
05:10 but yes, I didn’t then click generate. You could do more than 2 as well with even more areas!
@Nerdy Rodent An unrelated question: I'm looking for a seg node that can seg a clothed human body (i.e. long sleeve shirt) into parts like arm, forearm, chest etc. Also a node that can seg a face into nose, lips etc. Do such seg nodes even exist yet?
Would it be possible to create an image with two persons with two distinct LoRAs for each person? Let's say I have a LoRA for Jasmine and one for Aladdin and I would like to recreate the carpet flight form "A whole new world" song. Do you have by any chance a workflow for such a thing?
Yup, you can hook in as many loras as you like!
Hello, thanks for the video.
Can I mask the denoiser so that only the black areas of a video get affected?
I have problems with the mask. I am using it with ControlNet, which is very useful for img2img, but if the background is rubbish, a binary mask is not a solution. Do you have some tips? I would be very grateful :)
Try matching the mask size to start with
How is it possible to use two RTX 6000A cards in ComfyUI?
I had it working for a while, and it mentioned both cards on startup, then no more.
I start ComfyUI with CUDA_VISIBLE_DEVICES=0,1
Is it a Python issue?
Dual cards plus TensorRT = faster inference?
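For what it's worth, CUDA_VISIBLE_DEVICES only controls which GPUs PyTorch is allowed to see; stock ComfyUI still runs sampling on a single device, so it isn't a multi-GPU feature by itself. A minimal sketch of how the variable behaves (the launch command in the comment is the one from the question above, not something I have verified enables dual-card inference):

```python
import os

# Must be set before torch/CUDA is first initialised, e.g. in the launch shell:
#   CUDA_VISIBLE_DEVICES=0,1 python main.py
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"

# Anything CUDA-aware started after this point sees two devices,
# renumbered as cuda:0 and cuda:1.
visible = os.environ["CUDA_VISIBLE_DEVICES"].split(",")
print(len(visible))  # 2
```

If the second card stopped appearing at startup, it is worth checking whether the variable is still being exported in the environment ComfyUI is actually launched from.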
I fully intend to learn all of this.
I'm signing up for some intro classes.
Meanwhile, I could really use a recommendation.
Is there software that can generate AI-driven story images, or at least an avatar, to accompany an hours-long voice file or visually blank MP4 monologue?
HeyGen is pretty good, and if you've not got a decent GPU you'll be looking at services
I have updated comfyui but I don't have these nodes!
Generative Photomontage equivalent?
All sorts of applications! Mixing styles, weird animal hybrids, multiple characters, different clothing, etc, etc
Does the original comfy node support Sana? I tried ExtraModels with KSampler, not good
Is Discover AI your channel? The voice is so similar that it's uncanny...
Files on the blog aren't linked up. Let me know where to get them. I'm logged in to the blog.
Click the JSON download
Q. Can you tie together multiple models instead of one with multiple loras?
Yup!
But how? 😅 @@NerdyRodent
It’s farcically simple: just use the model hook instead of the LoRA hook 🤣
@@NerdyRodent Good to know, imma going to try that tonight 👍
Oh, Nerdy Rodent 🐭🎵
You truly brighten my day ☀
Showing us AI 🤖
In a very British way ☕🎶
(Yes, the audio has been auto-translated to Spanish, outro music included XDDD)
That said, I prefer your real voice, so I will be skipping the auto-translation even if it's to my main language ^_^
Hi