This will be fun to experiment with. Thank you!
This is great, but I'm looking for ways to restore without changing the faces so much. This is a huge issue everywhere.
To stay close to the original, you can lower the sampler's denoise to around 0.2 to avoid changing details like the face too much, or raise the ControlNet strength to preserve the original image/face as much as possible. Another option is to use IPAdapter with a face you like to influence the result (controlling the structure, ethnicity, colors, etc.); for example, you can feed in the exact same image to force the same face as the original.
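In case it helps, here's a rough sketch of what those settings look like in a ComfyUI API-format workflow expressed as a Python dict. The node IDs, filenames, and surrounding values are made up for illustration; your actual workflow will differ.

```python
# Hypothetical fragment of a ComfyUI API-format workflow, as a Python dict.
# Node IDs ("3", "5") and parameter values are placeholders for illustration.
workflow = {
    "3": {  # KSampler: a low denoise keeps the original image largely intact
        "class_type": "KSampler",
        "inputs": {
            "denoise": 0.2,   # ~0.2 preserves details like the face
            "steps": 25,
            "cfg": 7.0,
            "sampler_name": "euler",
            "scheduler": "normal",
            "seed": 42,
        },
    },
    "5": {  # ControlNetApply: higher strength follows the source more closely
        "class_type": "ControlNetApply",
        "inputs": {
            "strength": 0.9,  # raise toward 1.0 to hold the original structure
        },
    },
}

# Sanity-check the values before queuing the workflow.
assert workflow["3"]["inputs"]["denoise"] <= 0.3
assert workflow["5"]["inputs"]["strength"] >= 0.8
```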
Thank you for sharing this workflow with us!
Glad you like it!
Quick tip for Abigail: If you use the middle mouse button (press the scroll wheel) instead of the left mouse button to move around the canvas, you won't move any nodes around. Took me forever to change to this and I still catch myself panning with the left button sometimes.
Greetings, thanks for the video. A few details: the first girl's eyes point elsewhere, and the second girl's face is not as consistent with the original. I also wanted to ask whether so many nodes were necessary. I would love to see more of the workflow. You could make one similar to Kling AI.
@@marcoantonionunezcosinga7828 Yes, you are right, the eyes are always looking somewhere else. Maybe prompting would help with that, not sure. We will try to compact the workflow a bit to cut down on the number of nodes.
Interesting workflow, thanks for sharing. The first time I ran it, it got stuck during VAE decode after the upscale and was showing 99% VRAM on my 4090; I had to stop and restart ComfyUI. I wondered if it was because of the two load checkpoints, so I ended up bypassing the 2nd load checkpoint and just used reroutes to connect the upscale section to the 1st load checkpoint, and it ran through OK that time. Still a pretty lengthy process, but fairly solid results!
Yes! It is a beast on your system.
Looks cool
Thanks! You are cool!
Excellent vid. Could you do a deep dive into Hallo? I find it a beast to set up, with its millions of dependencies and tentacle-like extra nodes.
Will do!
amazing content, subscribed!
Welcome!
Error occurred when executing KSampler:
'NoneType' object has no attribute 'shape'
So the error is happening with one of the nodes going into the KSampler. We would check all the input nodes and see if there are any missing values.
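For what it's worth, that "'NoneType' object has no attribute 'shape'" usually means an upstream node handed the KSampler an empty input (often a latent or conditioning that was never produced). A hedged sketch of the kind of check involved, with made-up names, just to show why the error reads that way:

```python
# Illustrative sketch only: mimics the kind of failure inside KSampler when an
# upstream node returns None instead of a tensor-like value. All names here
# are hypothetical, not actual ComfyUI internals.
def check_ksampler_inputs(latent, model, positive, negative):
    """Raise a readable error instead of "'NoneType' has no attribute 'shape'"."""
    for name, value in [("latent", latent), ("model", model),
                        ("positive", positive), ("negative", negative)]:
        if value is None:
            raise ValueError(f"KSampler input '{name}' is missing (None); "
                             f"check the node wired into that slot")

# Example: a disconnected or failed upstream node surfaces like this.
try:
    check_ksampler_inputs(latent=None, model="sd_xl", positive="p", negative="n")
except ValueError as e:
    print(e)  # names the 'latent' input instead of a cryptic attribute error
```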
Would love to try this workflow, but for the life of me the ReActor node won't load. Getting these errors, any ideas?
When loading the graph, the following node types were not found:
ReActorFaceSwap
LayerColor: Brightness Contrast
Nodes that have failed to load will show as red on the graph.
So we would make sure ReActor face swap is installed and is the latest version from their GitHub. Also update ComfyUI and its dependencies. For the layer brightness and contrast node, you can bypass it for now.
@@AIFuzz59 Tried that and still the same error with the "LayerColor: Brightness Contrast"
Many thanks for the great workflow!! Have you tried comparing it with SUPIR?
No, we haven't. We used to run the SUPIR workflow at the end of all of our workflows, but once we created our own, we just use that.
@@AIFuzz59 Thanks for the reply. What are the best adjustments to make it work with SDXL? I tried, and the image output is noisy and distorted. Thanks in advance for guidance.
Error occurred when executing KSampler:
'ModuleList' object has no attribute '1'
How do you make the ComfyUI background black?
The only issue I'm having is that sdxl.safetensor is missing?
I love your voice, it is soothing
Thank you baby 😊
Make one for photo to painting
Will do!
Lol, I am learning how to upscale and enhance images and this is a great vid, but the narrator's comments crack me up. No, I'm not a Swiftie.
I will try to add LLaVA or some LLM to detect age and use it with IPAdapter and embeddings... This workflow is a great start, thank you!
Thanks! Let us know how it works out!
Awesome workflow as always, the only thing I think is missing is a style and subject selector to make it even simpler.
Great suggestion!
Thanks for this amazing workflow. Do you think it can be adapted for old video? Would it still work by adding AnimateDiff?
It should work! It may take time, as it will process frame by frame.
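For anyone curious, the frame-by-frame idea looks roughly like this in principle. The restore_frame function below is a stand-in for running the restoration workflow on a single image; in practice you'd extract and reassemble frames with ffmpeg or ComfyUI's video nodes.

```python
# Hypothetical frame-by-frame loop. restore_frame stands in for queuing the
# restoration workflow on one frame; it is a placeholder, not a real API.
def restore_frame(frame):
    # In practice this would run the full upscale/enhance workflow per frame,
    # which is why processing a whole video takes a long time.
    return frame  # no-op stand-in

def restore_video(frames):
    """Apply the (stand-in) restoration to every frame, one at a time."""
    return [restore_frame(frame) for frame in frames]

frames = [f"frame_{i:04d}.png" for i in range(3)]
restored = restore_video(frames)
print(f"processed {len(restored)} frames")
```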
Try multiple takes on the voice over.
Congrats on the second example, you added 20 years... for some reason. In the next children example, you added cleavage; questionable move, but OK.
Thanks for your support 😎
Hello, I am a Korean learning ComfyUI.
I tried downloading the JSON file and running it.
However, I'm running into a problem with the DWPose estimator node.
I copied the error message and asked ChatGPT.
I don't understand everything, but roughly, the problem seems to be that the DWPose estimator's bbox_detector model and pose_estimator model are missing.
Where can I get the models for these two widgets?
And please understand if my English is awkward.
I rely on Google Translate.
Bori, let's stay strong! You're good at English, so don't be discouraged!
Your English is very good! Are you missing the models?
@@AIFuzz59 There is a model for the basic OpenPose.
It seems to me that there is no dedicated model for DWPose.
Rather than it being missing, it seems like it was never there to begin with.
@@정보리-k9t Bori, stay strong! Cheer up!!!!!!!!
What if I just want to enhance the overall features (the background, texture, plants, etc.) in the image without affecting the facial structure? Can you create a tutorial on that? Thank you so very much!
You got it!
Hi! I wanted to know if you have any idea how zia fit on I.ns.sta is made?
It seems that the base image is an existing one, but then maybe they use a 3D pose character + OpenPose + LoRA for body + LoRA for face, but something is off.