Stable Diffusion - Face + Pose + Clothing - NO training required!
- Published: 13 Oct 2023
- Building on my Reposer workflow, Reposer Plus for Stable Diffusion now has a supporting image, allowing you to incorporate items from that image into your AI generations! Find a nice jacket, find a pose, pick a face and in just seconds your character has both a BODY and an OUTFIT!
No training, no roop, no visual studio bloatware - just rock with the images you’ve got!
Note: This video and the workflow both show the original IP Adapter. Newer IP Adapter nodes are different, so see the other Reposer workflow for an example of using the newer nodes. This one will remain unchanged so you get the best of both worlds.
Available for FREE from the AVeryComfyNerd web page -
github.com/nerdyrodent/AVeryC...
Reposer Installation Guide -
• Reposer = Consistent S...
How to install ComfyUI:
• How to Install ComfyUI...
== More Stable Diffusion Stuff! ==
* ComfyUI Zero to Hero! -
• ComfyUI Tutorials and ...
* ControlNet Extension - github.com/Mikubill/sd-webui-...
* How do I create an animated SD avatar? - • Create your own animat...
* Installing Anaconda for MS Windows Beginners - • Anaconda - Python Inst...
* Dreambooth Playlist - • Stable Diffusion Dream...
* Textual Inversion Playlist - • Stable Diffusion Textu...
You are amazing.
You are achieving what many of us are trying to do; "Consistency in character creation."
Thank you for sharing your progress with us.
Yep, this is the Holy Grail of AI design this year-- consistency.
You are so welcome!
This is freaking amazing, what a time to be alive!
This is beyond crazy. I feel like all these tools have just been created recently and they are already THIS powerful. Just crazy.
Subscribed. Great content.
Should see more fun in the future too!
Top tier content! And I didn’t even reach the end of the video! Please keep up the good job!
This is too powerful! You always surprise me with your amazing ideas. Thank you so much for making and sharing these tutorials! :D
Thank you! Cheers!
Backgrounds are the next logical step, yeah? Thanks for the awesome workflow!
omg, this is so extensive and so well made, thank you for sharing this
You're so welcome!
Unbelievable, I've never seen anything like this. You don't even need a supporting image, and if you use face restoration with ReActor it's just perfect. Thank you very much.
Powerful stuff! I liked the variations of the dragon t-shirt. Thanks!
Damn, this is absolutely phenomenal for storytelling. I've been searching for a workflow / method to get consistent characters in consistent clothing in a pose, and this is just perfect. The only thing that would make this better would be the ability to add multiple characters in the same image, each character having their own consistent clothing. This would be revolutionary for using AI image generation for storytelling.
I'm trying to figure that out as well, my current workflow for this is messy but gets the job done. In short, you have to do a lot of messy compositing, and do a final pass using img2img.
You're an absolute magician. Thank you for your effort sir.
You are very welcome
Thanks a lot , your videos are always so helpful.
Glad to hear that! Thanks for watching 😃
Dude, this is Fn incredible. Will be diving in after work!!!!
Glad you like it! 🤓
Ip adapter changed everything for me and you have made Ip adapter even more useful...Thanks so much..
Glad I could help
The New Stable Nerdy+ Diffusion. Genius.
😉
WOW! I watched the Ai-trepreneur video about LoRA clothing before, and that is absolutely complicated! ... Thanks a lot for that information, I will try that :)
Check my twitter for other examples. Food makes a great jacket too 😉
very very amazing video. love it.
I will try to create a piece by following your video.
Thank you !
Please do!
❤❤❤you are simply great ,you deserve a lot of subs
That’ll never happen, but thanks! 😆
Yeaaaa! Thanx! Was waiting for this!
Hope you like it!
Excellent work!
Thank you! Cheers!
This is nothing but amazing! I'm gonna buy you a few cups of coffee, that's for sure! I've been waiting for this since forever... Will there be an XL version for clothes, face and pose?
Looks nice and clean.
This is amazing! Any chance you have an updated version for SDXL?
WOW THIS IS AMAZING!
Thank you very much again !
this is amazing man ! the only thing missing is modifying the facial expression.
This is SOOO GOOOD!
So I finally dug my PC out of storage and got the SDXL Reposer workflow, and after some searching I realized that there isn't a video covering that wonderful workflow. What's better, I got it working on my 12 GB GPU, which was a surprise, as with XL workflows things can easily get out of hand VRAM-wise. I am shocked by that workflow.
This is awesome! Thanks for the amazing workflow.
I have one question, I am trying to have the image generated in comic style. Since we are using SD 1.5 and can't just rely on prompt like "comic book style", what is my option to do so staying on SD 1.5? I tried introducing Lora as well as adding positive prompt with models like DreamShaper etc. but it seems like prompt does not have a lot of weight, OR simply isn't going to work. Any idea?
Hey, love the hard work you put in for this! I'm getting a LONG KSampler error, wondering if you could help?
Error occurred when executing KSampler:
'NoneType' object has no attribute 'shape'
ComfyUI is up to date, and I don't have the Fooocus KSampler installed... any thoughts?
Awesome! Thanks for this, qq, how do you increase the batch number?
You are a bloody legend
Amazing!! But can we apply this if we have different characters with different poses in one image?
Nerdy I bypassed the DW openpose processor and used openpose skeleton for pose reference and it worked beautifully and wasn't influenced because the skeleton has no clothes.
Nice!
So AWESOME! Thank you so much! This combined with a way to have a consistent background so that multiple characters can be in the same scene would be basically all we need to create comic books and maybe even A.I. movies! Is there a way to have a consistent background for multiple characters?
For that I’d probably use cut-out characters tbh, but maybe!
So I got this working, with NO errors! And it does very well looking at the pose and face but it seems to have a hard time with outfit. It will slightly take hints from the outfit image but it will fully change color, add bits, change how it fits, sometimes just change the outfit fully.
Do I need a model? change a weight? How can I tweak this so that it listens to the clothing image a bit more? I have tried using the input also to help but I am not getting much success with outfits. Any tips?
Thank you for the video!
Could you create a tutorial specifically for anime characters? Like using the arm of one character, the face of another, and the other body parts from other characters, then posing them in hard poses, like foreshortening?
Pure Gold
Thank you for sharing your work! I am getting an "Error occurred when executing ArithmeticBlend: The size of tensor a (3) must match the size of tensor b (6) at non-singleton dimension 0". The exact same error was posted in github discussions recently, which makes me think that a recent update to one of custom nodes broke something, maybe?
Thanks for sharing this amazing workflow.
Where is better to add the Reactor node for face swap as I can not get the same exact face for realistic images?
You can add it in before or after the open pose control net. Setting face weight to 1 or more will make the generation more like the face image.
Fantastic :) What a ride!
It really is!
I'm getting an error and can't see the final image. The error is: 'NoneType' object has no attribute 'shape'
Hey Rodent, just dropped you a dm on your patreon earlier today. Looking forward to your assistance getting this workflow running smoothly!
Can we apply this if we have different characters with different poses in one image? please can anyone tell me
I would like to have your instant lora.... take the completed image and place it into this workflow. Is there a way to combine them?
WHAT ABOUT PERSPECTIVE? It'd be nice if it could mimic the camera angle as well?
Hey Nerdy, thanks for helping us to learn more. I'm using this workflow a lot, but unfortunately, I tried to use it this week and have this error: SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5). What could it be? Thanks a lot.
The images I'm generating all have slightly scrunched/shorter legs than they should. It might be because my clothing reference just focuses on the top of the body? I don't want to change my image input--is there a clever alteration that can just make the legs look more normal?
Great job. Could you explore using ipadapter to stylize images? Like turning real photo into rick and Morty style?
Sure thing!
Dear Mr. Rodent. I looked into your workflows for the new reposer plus workflow, but I only see poser, poser 2 with updates from last week. The only other one is reposer plus with bypass image option and that is still from 4 months ago.
I am replacing the IP Adapter Apply with the IPAdapter Advanced nodes as I write this, assuming that will do the trick.
Thanks. Your video brought me back to comfy. One question, I've noticed Preview Image Final will have bits and pieces of the Supporting Image I can't get rid of despite the thresholds I set. Is there a way to use the MaskEditor to remove those bits before it gets sent to the final Reposer image?
Sure, give it a go!
My output looks a bit different from the face image, there is only a bit of similarity in the eyes, and that's it
You sir got my sub!
Can you please make a tutorial on how to apply a style to an image? Something like: grab a photo portrait and make it a 3d cartoon or anime style?
You could do it using this 😉
Just *WOW* :-) But... I am using A1111, and tried to set up a ComfyUI installation like this. Hell, I missed :/ Any how-to out there?
This would all be a lot of manual steps in Automatic1111
It would be cool to see someone creating a demo of it to use it on Hugging Face.
Hi!
I have a question/challenge. I have been trying to recreate this ability, but purely for a face. So, I have used controlnet for generating the faces in the same pose (being a head and shoulders, facing forward pose), but then, I want the ability to add glasses, hats, earrings, etc., and have them be the SAME every time. I would then want to extend this to hair, face shapes etc., so that I could have 10 different faces, all wearing the same pair of glasses, or have the same face, showcasing 10 different types of glasses. Is this possible? Believe me, I have been trying...
Thanks!
Any idea about this error, something stuck at ImageCompositeMask "Error occurred when executing ImageCompositeMasked: tuple index out of range"
How can I make it look exactly the same when the clothes are slightly different?
Thank you so much. My English is limited, but I really want to say "thank u!!!!!!!!!!!!!!"
That's genius! For ip-adapter plus face, fp32 gave me better results. Is there a way to add two ip-adapters, one for the front and another for the style, like a painting style or comic? It would be awesome. Another question: is it better to use transparent png regarding faces? Thank you!
Yup, you can keep chaining IPAdapter like in this one
Not sure where I install these?
The ControlNets I think are in the right spot, just wanted to confirm.
@@KINGLIFERISM you can check the installation video for complete and detailed installation instructions
Awesome!
I'm gonna be that guy: is it at all possible to run IPAdapter on 6 GB VRAM?
I gave your previous Reposer workflow a whirl recently and, after resolving some missing nodes and Torch/pip upgrade errors, immediately ran out of VRAM. Darn it.
Could be pushing it a bit, but maybe with all your low vram settings and such!
Exactly!!!
I get import errors for two required custom node packs: ComfyUI-Allor and comfyui-art-venture
This is really nifty, Rodent, and I certainly appreciate it, but I'm getting very inconsistent results from the poses. I've tried all sorts of reference images but four times out of five it ignores the reference pics and just does what it wants to. Not sure what I might do to improve the output off the top of my head (though I am going to fiddle with values & see where that gets me), so I thought I'd come see if you might have any suggestions. Again, it *does* pose the character correctly on occasion, but only once in awhile.
A couple of ways to ignore the pose would be to use a non-human pose image, which I’ve done to create some interesting creatures, or simply lower the pose controller strength. The reverse is, of course to use clear human images and a pose controller strength of one.
Thanks :) I'm not trying to ignore the pose, I'm trying to get it to conform to it (and I am using a humanoid character & pose references). I'll see if I can set it up with better reference images & see where it gets me. Much appreciated! @@NerdyRodent
Maravilloso
Can you please list which custom nodes you use. When I try the work flow I always have missing nodes. Thanks!
I use ComfyUI manager, making a single click to install all missing nodes! I’ve also added the full list of used and unused custom nodes.
How would we engineer camera angles?
Is this possible in A1111 or Fooocus? Or should I just bite the bullet and learn ComfyUI?
You don't need to learn ComfyUI to use it, just import the nodes following the tutorial and voilà
wow! Cool!
:)
So many red nodes listed as undefined, and "install missing" says everything is loaded. Unfortunately I just started playing with ComfyUI today, but I have used Auto1111 for months.
Use ComfyUI Manager to install missing custom nodes.
Be sure to keep ComfyUI updated regularly - including all custom nodes.
Just starting with ComfyUI - does anybody have a suggestion for how to clean up such a node situation? I managed to find one custom node that connects selected ones into one, but I was thinking about wire paths as well; I noticed that some people have a setup that uses only 90° bends and straight lines.
It's entirely up to your own personal settings, whether you want to show the wires and how they bend
thanks for the reposer plus workflow, I had trouble getting it working as I'm stuck at the Segment Anything nodes being all red (checked from the manager that it has been installed, tried reinstalling but to no avail), is there something that I am missing?
Use ComfyUI Manager to install missing custom nodes.
Be sure to keep ComfyUI updated regularly - including all custom nodes.
@@NerdyRodent I actually did a clean installation, and installed all custom nodes indicated by ComfyUI manager, and it's the Segment Anything nodes that are red, while the rest are okay, so I was wondering if it was due to a version conflict (which I need to manually install a particular version).
@@vtchiew5937 you can install it via the normal install in manager if somehow install missing fails. I’ve added a full list of both used and unused custom nodes.
Where do I press to generate the image?
Thank you so much for the cool content. Did you perhaps update this workflow to work with the new IPAdapter? The v2 nodes aren't backwards compatible.
There are a bunch of updates available via Patreon 😉
@@NerdyRodent thanks! I'll make sure to check it out. thank you again for your great work!
Error occurred when executing IPAdapter:
'ClipVisionModel' object has no attribute 'processor'
Can someone please help me with this error?
Am I the only one getting this error? "Currently DWPose doesn't support CUDA out-of-the-box". It gives me a grey image T.T
This is a very exciting pipeline! However, I constantly see very basic poses in many AI images. Is it possible to do more dynamic posing?
Probably with the help of controlnet
Yup. Add any custom nodes you like and let me know what you create! ;)
Is there any way to change facial expression like angry, yelling, etc.?
This looks great! I wanted to try it out, but I encountered a problem with the 'segment anything' module. I attempted to install it using the manager, but even after installation it still gave me errors; for some reason ComfyUI doesn't recognise it. I tried to bypass it by removing nodes, but then the output looked bad ( : ... I'm relatively new to ComfyUI... Could you please make one workflow without the 'segment anything' module?
The original Reposer doesn’t have segment anything, so you can use that 😉
@@NerdyRodent Sure, and that works - good job, by the way! But that workflow doesn't have the supporting image window for the outfit ( :
It would help if you told us which ComfyUI workflow from your GitHub you use in the video. I am massively confused :D I can't find the correct workflow - they all look different from your video.
Feel free to drop me a dm on patreon if you need more help! 😀
is this only available in comfy ui?
is it ok to use the OpenPose character rig thing (multi-colored bone structure poseable rig over black background) as the input for the pose here? or does it have to be a photo of a person?
The pose can be anything 😁
Ok this is crazy, I think that's all we need for complete designing of graphic novels with consistent characters. I need to figure out how I can use the API to achieve that
:D
There is a comic book generator on hugging face already you know.
right but there is no consistency or characters selection (yet)@@KINGLIFERISM
I've been working for hours and keep getting nothing but errors... ArithmeticBlend errors and then IP Adapter errors: 'proj.weight'. Running a 4080, so I have plenty of power. Not sure what's up.
I’d start by working through each of the troubleshooting steps. Likely it’s just needed to be updated.
I cannot fix the "Error occurred when executing DWPreprocessor" even after updating everything. Anyone like me who found a solution?
Click “update all” in the manager
Hello, I've been having trouble. Do you know where the NNLatentUpscale node is?
The easiest way is to use ComfyUI manager. You can also dm me on patreon if you need more help!
SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5)
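A note on this error: it means the workflow file itself isn't valid JSON - commonly because the GitHub HTML page was saved instead of the raw .json file. A minimal Python sketch to check a downloaded file (the filename below is hypothetical):

```python
import json

def check_workflow(path):
    """Return True if the file parses as JSON, otherwise print why it failed.

    An "unexpected non-whitespace character" error at an early position
    usually means the file is an HTML page (saved from GitHub's file
    viewer) rather than the raw .json workflow.
    """
    try:
        with open(path, encoding="utf-8") as f:
            json.load(f)
        return True
    except json.JSONDecodeError as e:
        print(f"Not valid JSON: {e}")
        return False

# Hypothetical filename - use the path of the workflow you downloaded:
# check_workflow("reposer_plus.json")
```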
Amazing. I was trying to get it running on my end, but I keep getting "CUDA_PATH is set but CUDA wasn't able to be loaded" error message. Has anyone else encountered this problem?
Sounds like you need to reinstall your Nvidia stuff?
Hello Nerdy Rodent.
Thanks for your work ! I'm trying to create a comic book too. However I'm stuck after trying very hard to make you Reposer_Plus_BG work.
I've got this error and I couldn't fix it :
Error occurred when executing IPAdapter:
'NoneType' object has no attribute 'patcher'
Could you help me please ? That would appreciate it a lot.
NoneType means the node can’t load the file you’re asking it to load
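As a plain-Python illustration of that reply (the loader below is hypothetical - ComfyUI's real loaders differ, but the failure mode is the same):

```python
def load_checkpoint(path):
    # Sketch: loaders often return None instead of raising
    # when the model file can't be found on disk.
    return None

model = load_checkpoint("missing_model.safetensors")
try:
    # The next node then tries to use the result...
    model.patcher
except AttributeError as err:
    # ...and Python reports the attribute access on None:
    print(err)  # 'NoneType' object has no attribute 'patcher'
```

So the fix is usually to check that the file name selected in the loader node actually exists in the models folder.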
Is there any workflow for SDXL? I'm trying with SD 1.5, but it is too stubborn. I gave some prompts to generate a background, but it doesn't respect what I ask, even when I change parameters.
Yup - a basic SDXL version is indeed there too!
@@NerdyRodent Thank you so much. I'll check that to see if it fixes my issues.
I can't seem to get away from the background in the Load Face image. Any suggestions?
Maybe try blurring the background?
For me, through the manager or available within ComfyUI already (perhaps through previous node installations), there is a SAMLoader and a SAM Model Loader. Also, I find an InvertMask and Mask Invert. Unfortunately, I find nothing under a search for grounding or dino. I haven't cloned anything to this install as the manager has provided everything needed up to this point other than missing models for custom nodes. If cloning (DL a custom node outside the manager) is the case here, please provide links to those modules... or if I have this all wrong, perhaps a suggestion as to where to look for an answer to this dilemma. It appears that there are a few people with this same issue. TYIA
BTW, your Reposer is brilliant!!! ... as are you my nerdy compatriot.
Also, as a side note: is it possible that you could also provide a snapshot of the layout, alongside the layout loading image, on your Git page? It would be quite helpful. Without this, for example, as I mentioned above: even though there are now nodes that will load the SAM - either because the node referenced in your workflow was merged, or because a different custom node pack installs a node that performs the same task - the node referenced in your workflow is no longer available (having been merged or deprecated), so it shows up as blank with a red background in ComfyUI. We may be able to see input and output connectors, but we cannot see what would have been the contents of that node, such as parameter values or referenced files. Since this is the case, everyone with this same dilemma is forced to search your entire video to see if they can find those details. I watched this video and could not see what the contents of some items are, as they were never focused on - and even if they were, there may not have been enough clarity to decipher those nodes. Providing those captures would alleviate these issues. This way, when we get a blank red missing-node block, we can quickly and easily determine the parameters within that node when switching in compatible nodes. TA
I would also like a complete screenshot of the whole node layout. Having the exact same problem with Reposer 1.
Use ComfyUI Manager to install missing custom nodes.
Be sure to keep ComfyUI updated regularly - including all custom nodes.
I've found that "ComfyUI Impact Pack" also has to be installed. I think there are some missing SAM components that aren't in storyicon's "segment anything" node pack.
What an amazing tutorial. I got it working with 1.5, and tried with SDXL, but I got an error due to the IPadapter SDXL models. Have you been able to get SDXL working with this workflow?
There is an SDXL version on my website too, yes
@@NerdyRodent Oops, I glossed right over that. Thank you.
@@NerdyRodent I was hoping to get the full clothing workflow working with SDXL, but I couldn't. Also, one issue I'm facing in general is that after the clothing mask is created, the black parts of the image really want to stay black into the final generation. Don't know how to fix that.
@@none76ui Lower the strength ;)
@@NerdyRodent Okay, I got an SDXL workflow including clothing running! And lowering the strength seems to help a bit, but I found that increasing Ksampler (base) steps seems to have a much larger effect. Just curious also, what exactly are the Base and IPA Ksamplers? I can't find any documentation on them. I also can't figure out what the "Step End/Start" that links to them does. Does it override the step count option inside the nodes?
Hi. I'm getting a No module named 'midas.dpt_depth' error from the zoe depth map on the SDXL face and pose version of this. Any idea what's causing this or how to fix?
You may have an old version of controlnet support installed
@@NerdyRodent EDIT: Nevermind. I deleted an old controlnet preprocessors folder in the custom nodes folder called comfy_controlnet_preprocessors and it worked, but now I can't get the DWPose to work for some reason. The whole thing renders but it ignores the pose of the character in the pose jpg and the preview window for DWPose Estimator stays black.
Your version has a ControlNet for tile; the one on your website doesn't. Has there been an update? I am struggling to get the likeness you are and trying to find out why.
Yup, the ipadapter changed so it’s slightly updated.
@@NerdyRodent Some clothing won't get detected no matter what I do. Have you noticed this, or do all pieces of clothing work for you all the time?
Fantastic. Could you give us the .json file? Do you have a membership registration or a paid account?
Gonna have to make a Patreon thing, aren’t I? 😆
@@NerdyRodent Of course. We want a better benefit to be easier to follow.
How do I load the workflow? Where shall I find the .json file in order to load your workflow?
I can't get this to work now after the latest controlnet update. The DWPreprocessor node cannot be found. I've tried uninstalling controlnet and deleting all of the controlnet folders like one reddit post suggested but that didn't help. if anyone knows of a fix please help. thanks
Make sure you are using the standard version of ComfyUI and that everything is up to date
@@NerdyRodent I'm using the portable version, which was working fine with your previous version of this workflow. I updated everything tonight, and that's when this broke. Looking through the Fannovel16/comfyui_controlnet_aux GitHub files, I see that they updated some of the DWPose files as of yesterday and an hour ago. Maybe they broke something?
@@NerdyRodent Looks like it's a controlnet import issue. I'll contact the devs. thanks for the suggestions.
0.0 seconds (IMPORT FAILED): C:\Users\Big Bane\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux
Just checked and yes - as of an hour ago they broke _all_ their preproccesors
They used an integer instead of a string in __init__.py - just put quotes around the 1 ("1") for MPS fallback until they fix it ;)
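A sketch of that class of bug (the exact variable name in comfyui_controlnet_aux is an assumption based on the "MPS fallback" mention; the general point is that os.environ values must be strings):

```python
import os

try:
    # Broken: assigning an int raises a TypeError, which can make
    # the whole custom-node import fail.
    os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = 1
except TypeError as err:
    print(err)

# Fixed: quote the value so it is a string.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
```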
This is awesome, but I can't get it to work. In 'Positive_Prompt' there's a red circle, and in ComfyUI I get an error. I tried to reinstall everything and updated everything, but still haven't figured out what the problem is.
ERROR:root:Failed to validate prompt for output 158:
ERROR:root:* CLIPTextEncode 30:
ERROR:root: - Required input is missing: clip
ERROR:root:Output will be ignored
You’ll need to make sure you’ve installed all the required nodes before you can run the workflow. Update ComfyUI itself as well as all custom nodes. Check the troubleshooting guide at the top for a full set of steps!
I pressed Queue Prompt and it didn't work.