Hello!! The IPAdapter Apply node isn't available at this time. Can you share your IPAdapter Apply node folder via Google Drive? Thank youuu
FaceID does work with Automatic1111 currently, including the SD1.5, V2, and SDXL versions. It works great, and it can be combined with other ControlNets, other IPAdapters, and LoRAs as well.
You're 100% correct , my mistake!
Comfy still gives much better results, though.
Great video. I had no idea about this Kohya Deep Shrink thing; it works amazingly and seems to counter one of the problems Matteo pointed out about upscaling and losing some of the facial uniqueness, even when using the tile ControlNet.
Do you have any other video on this Kohya Deep Shrink node? Is there any use of changing some of its options?
Unfortunately, I don't. The problem with Kohya Deep Shrink is that it's not compatible with ControlNet. If you're interested in these types of features, you could also look into this Lora : civitai.com/models/238891/hd-horizon-the-resolution-frontier-multi-resolution-high-resolution-native-inferencing , which serves pretty much the same purpose.
@@tech_bytebrain Thank you! I'll try it out.
Nice video! For the next vid it would be nice if you zoomed in slightly on your nodes; it makes the viewing experience way better when the active nodes you are using are closer. Also, what do you think are the benefits of using IPAdapter for faces instead of ReActor or roop?
Noted! ReActor is fine, especially at low res. They both have their advantages and disadvantages, depending on what you're trying to do, I guess.
I like ReActor and have had good success with it. Yes, it works at low resolutions, but you can do some upscaling and looped upscaling to preserve the quality and get a consistent character at higher resolutions with good detail.
Hi. I've tried running your workflow, but it doesn't work; the pink box comes from "Anything Everywhere". I've tried several settings, but I can't fix it. Is there no video of this installation?
Use the ComfyUI Manager to install the "Use Everywhere" nodes. Search for "Everywhere" and install the one made by chrisgoringe.
Once that is done, restart ComfyUI.
Great video! I always get an error in the first Apply IPAdapter node, as below. Do you know what happened?
Error occurred when executing IPAdapterApplyFaceID:
mat1 and mat2 shapes cannot be multiplied (257x1664 and 1280x768)
Hit me up on discord : discord.gg/h4dJq8za
Why does yours look soo cool? Gimme the themes and customizations please, I neeeed it.
Themes?
Any update for ipadapter v2?
Thanks for the walkthrough, but I keep getting the error 'No module named insightface'.
Any thoughts? Looking through the comments now.
Cheers
It seems you haven't installed insightface correctly. Perhaps you can find the solution in the following thread: github.com/cubiq/ComfyUI_IPAdapter_plus/issues/162 . Make sure you're installing it in the ComfyUI environment (python_embeded) if you have the portable version.
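For anyone hitting the "No module named 'insightface'" error: a quick way to check whether insightface is actually visible to the Python interpreter ComfyUI runs is a sketch like this (the package names are the commonly needed ones; adjust for your setup):

```python
import importlib.util
import sys

def check_package(name: str) -> bool:
    """Return True if `name` is importable from the current interpreter."""
    return importlib.util.find_spec(name) is not None

# insightface also needs onnxruntime at runtime
for pkg in ("insightface", "onnxruntime"):
    status = "OK" if check_package(pkg) else "MISSING"
    print(f"{pkg}: {status}")

# If something is MISSING, install it with this SAME interpreter, e.g. for
# the portable build:  python_embeded\python.exe -m pip install insightface
print(f"interpreter: {sys.executable}")
```

Run it with the embedded interpreter from the portable build, not your system Python; installing into the wrong environment is the usual cause of this error.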
There is no "Apply IPAdapter FaceID" node, only "IPAdapter FaceID", which looks a bit different, and I can't get it to work.
Would you give a link to your workflow?
use your hands =)
@@MitrichDX And with that sort of comment, no interest in your channel.
Sure, here you go : comfy.icu/c/xqOptt0
If you want to combine it with ControlNet, don't forget to remove the Deep Shrink node, as it seems to conflict.
@@OddJob001 Np, my channel isn't for everyone; it's mainly a sandbox =))
I think I've installed everything you said, but an error related to "InsightFaceLoader,
IPAdapterApplyFaceID" occurs.
What's the error?
@@tech_bytebrain User
Error occurred when executing IPAdapterApply:
InsightFace must be provided for FaceID models.
File "/content/drive/MyDrive/ComfyUI/execution.py", line 155, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/content/drive/MyDrive/ComfyUI/execution.py", line 85, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/content/drive/MyDrive/ComfyUI/execution.py", line 78, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/IPAdapterPlus.py", line 570, in apply_ipadapter
raise Exception('InsightFace must be provided for FaceID models.')
I keep getting an error when it reaches the InsightFace node. Is there a way to correctly install insightface?
It can be pretty painful to install. What error are you getting? Perhaps you can find the answer here: github.com/cubiq/ComfyUI_IPAdapter_plus/issues/162
Make sure you update ComfyUI.
Make sure the checkpoint you are using is compatible with the model you are loading; both must be SD1.5 or both must be SDXL.
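The "mat1 and mat2 shapes cannot be multiplied" error above is a plain matrix-multiplication mismatch: the CLIP vision encoder's output width has to equal the input width the IPAdapter projection expects, and mixing SD1.5 and SDXL components breaks that. A minimal sketch of the rule (the numbers are taken from the error message above, not read from any model):

```python
def can_matmul(shape_a, shape_b):
    """A @ B is only defined when A's column count equals B's row count."""
    return shape_a[1] == shape_b[0]

# The failing combination from the error: an encoder emitting 1664-wide
# features fed into a projection expecting 1280-wide input.
print(can_matmul((257, 1664), (1280, 768)))  # mismatched pair fails
print(can_matmul((257, 1280), (1280, 768)))  # matching pair works
```

So when the second number of the first shape and the first number of the second shape disagree, you have loaded an encoder/adapter (or checkpoint/model) pair from different SD families.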
I can't get it to work. The VAE doesn't seem to be decoding, and I get a scrambled mess. Otherwise it looks great!
Can you link your workflow?
How can I get a running-light effect along a path? Could you please guide me on how to set it up? Thanks.
Running light effect?
I assume you mean the "long exposure" effect; try googling that term if that's what you're looking for. You can get this effect with a long-exposure LoRA.
Yeah, light running along a path @@tech_bytebrain
How do you add the dynamic icon on the CLIP Text Encode nodes, and the menu at the top left corner?
The menu in top left is the comfyui workflow manager : github.com/11cafe/comfyui-workspace-manager . Not sure what you mean with dynamic icon 😅
@@tech_bytebrain The icons that animate on the CLIP Text Encode modules in the bottom right-hand corner.
@@HTwords Oh, lol, that's Grammarly (Autocorrector) for my potato spelling!
Thanks. Where can I download clipvision 1.5?
Here : huggingface.co/h94/IP-Adapter/resolve/main/models/image_encoder/model.safetensors
@@tech_bytebrain it's just called model.safetensors though?
@@tech_bytebrain the link is for model.safetensors. is it the same as the one named clip-vit-h-14-laion2b-s32B-b79k.safetensors that the IP adapter plus GitHub page tells to download and rename? It seems roughly the same size
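If you want to check whether two downloaded encoder files really are the same model rather than trusting the rough file size, comparing checksums settles it. A small sketch (the file paths are placeholders for wherever your downloads landed):

```python
import hashlib
from pathlib import Path

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so multi-GB checkpoints don't fill RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical paths -- point these at your actual downloads.
a = "model.safetensors"
b = "clip-vit-h-14-laion2b-s32B-b79k.safetensors"
if Path(a).exists() and Path(b).exists():
    print("identical" if sha256_of(a) == sha256_of(b) else "different")
```

If the digests match, the files are byte-identical and the name on disk doesn't matter to ComfyUI beyond how it lists them.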
Hello, can you share your extensions? What kind of extension is that at the positive and negative prompt at the bottom right? And how do I display "#343" in the window? What are "Format", "Clipspace", and "Save as component"? You have a nice workspace 👌👌😍
The extension is Grammarly (to check grammar), and the #ID display can be found in the Manager settings ;).
Can you do a comparison on v1 vs v2?
v1 produces results that are more consistent and more similar to the source image than v2 for me...
You can find a complete deep-dive with all the different combinations on matteo's channel: www.youtube.com/@latentvision/videos
It does. But it's very difficult to change facial expressions; it sticks a little too strongly to the source image. You're right, though, it does produce a better overall result in terms of face similarity. I still use face swap in conjunction with IPAdapter: IPAdapter gives the general shape of the face, and face swap nails the face.
People who do those ComfyUI things, why don't you just go cure cancer or something?! I mean, it looks about the same difficulty level…
Some of the questions people are asking in this comment section just make no sense... lmao
All of this node-based UI nonsense goes away as soon as someone commoditizes this and makes an extremely simple user-friendly interface which you feed with reference photos and text prompts for continuity.
You can as well use automatic1111 for most of these things. Whatever suits your needs is fine. 😀
That's a silly comment. The entire creative industry uses node-based programs for a reason.
FORGE. It's Auto1111 made by the same folks… but runs on a ComfyUI backend.
I've tried everything I can, including what you told me to do. The message says "'cv2' has no attribute 'mat_wrapper'" (InsightFace). And for CLIP Vision, you used "ip_adater_clipvision_sd1.5.safetors" in the video, and I can't find it.
I've set up a Discord to offer a minimum amount of support, as it's very hard to do this over YouTube comments: discord.gg/Q3tH6UgE . Contact me there.