Great workflow. This video really helped me set it up! I've never gotten a custom workflow to work 😅
Awesome! :)
This is awesome mate!! Big congrats on the new workflow 💪🏼🙌🏼🔥
Thank you my friend!
that workflow is super huge wow
How did I miss this!? So dope I gotta try this out!
Huge contribution to the community. Even my "so so" generations look great with this! 😂😎👌🏻
So glad to read that!!! :)
@@midjourneyman Kudos, Brother 🙏🏻😎
@@huwhitememes Brad and I love it too :)
EPIC!
Thanks! :)
I am getting the following error:
Error occurred when executing ImageResize+:
'bool' object has no attribute 'startswith'
File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_essentials\image.py", line 295, in execute
elif method.startswith('fill'):
^^^^^^^^^^^^^^^^^
What could be the reason?
Got the same exact error, won't run without that resolved in the Resolution Calculator node group.
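For anyone hitting this: the traceback says the node's `method` input arrived as a boolean instead of a string, so `method.startswith('fill')` fails. A minimal sketch of what is going wrong and a defensive guard (the function and values here are illustrative, not the node's actual code):

```python
# Illustrates the "'bool' object has no attribute 'startswith'" failure:
# the node expects a string method name, but a mis-wired or outdated
# widget can feed it a boolean instead.
def resize(method):
    if not isinstance(method, str):
        # Defensive guard: fail with a clear message instead of the
        # cryptic AttributeError from calling str methods on a bool.
        raise TypeError(f"method must be a string, got {type(method).__name__}")
    if method.startswith('fill'):
        return "fill mode"
    return "other mode"

print(resize('fill'))  # a proper string value works fine
# resize(True) would raise TypeError rather than AttributeError
```

In practice the fix is usually on the workflow side: re-create or re-select the ImageResize+ node so its `method` widget holds one of the valid string options again.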
Very amazing! Today I will try this method. Also, please make a video on AnimateDiff.
I have the Manager, but every time I go to install the custom nodes it never gets everything >_
If I want to upscale the 4k image to 8k, do I change the upscale settings? It seems to stay at 4k.
I'll test, but the factors for pass 1 and 2 should help you achieve that.
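For reference, the target resolution is just the starting size times the two pass factors. A quick sanity-check sketch (the starting resolution and factor values are assumptions, not the workflow's actual defaults):

```python
# Two-pass upscale arithmetic: final size = start * pass1_factor * pass2_factor.
# The starting width and the factors below are illustrative assumptions.
start = 1024
print(start * 2 * 2)  # 4096: roughly 4K output
print(start * 4 * 2)  # 8192: raise the combined factor to ~8x for roughly 8K
```

So if the output seems stuck at 4K, check that the combined factor across both passes actually multiplies out to ~8x your starting width.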
Hello,
Thanks for the great work. I need help. I'm getting this error: Error occurred when executing ImageResize+:
'bool' object has no attribute 'startswith'
The Image Resize node gets a red corner. But when I change the method to anything but "true", the red corners are gone and I get this error instead: Error occurred when executing CLIPVisionLoader: 'NoneType' object has no attribute 'lower'
I've checked everywhere and I don't see any other red corners.
Any help will be greatly appreciated.
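For context, the CLIPVisionLoader error usually means no CLIP Vision model is selected in the loader node, so the loader receives `None` instead of a filename and `None.lower()` blows up. A conceptual sketch (the function name and filename are illustrative, not ComfyUI's actual code):

```python
# Illustrates the "'NoneType' object has no attribute 'lower'" failure:
# with no model file selected in the node, the filename is None, and any
# string method called on it raises AttributeError.
def load_clip_vision(clip_name):
    if clip_name is None:
        # A friendlier failure than the raw AttributeError:
        raise ValueError("select a CLIP Vision model in the loader node")
    return clip_name.lower().endswith(".safetensors")

print(load_clip_vision("CLIP-ViT-H-14.safetensors"))  # True
```

In the workflow, the usual cure is to click the CLIPVisionLoader node and pick a model file from the dropdown (downloading it first if the list is empty).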
Mmmmh, that is a strange one. Did you try to Google the error to see if it brings up anything useful?
I used a workflow from someone else, and the AnimateDiff Loader node had a name similar to the models I used. You have to make sure that every selected model points to the same path as yours, by selecting the node and choosing the correct path in the node.
Looks kickass, but I'm getting an "insightface model is required for FaceID models" error message. Any ideas on how to fix it?
I noticed that the workflow changed a bit since you made this video.
I've seen this, and I'm not sure which module is calling it, but it doesn't stop the workflow from running.
@@SuperBeastsAI thanks for the quick reply. For me it was stopping the whole process. After swapping the pairing between the IPAdapter model and the LoRA, it's running fine. Thanks!
@@keepitrollin3670 I get the same error, but I can't quite get your explanation to fix it. Could you please elaborate? It's a big workflow! Cheers
Great work, but I keep getting a "Can't import color-matcher" ERROR, even though I installed all the requirements. Any idea why?
That's odd! I am not sure why, but you could try running it without the color match nodes to see if it goes through.
Hey, the tutorial is super helpful! Thank you. I just wanted to ask: I'm a bit confused about where to put the ControlNet models. I downloaded the tile model and all 4 of the other ControlNet models you linked (it seems like you are only using Canny and Depth). Is there a specific place I need to put those models? I currently have a .bin model inside the controlnet folder that is being used for face_adapterv2... But I thought the only files that were supposed to go in the controlnet folder were .bin types, and that .safetensors files were supposed to go somewhere else?
I would appreciate the clarification. Thanks for the amazing work!
Hey @TheGreatJ, yeah, the 3 ControlNet models in the workflow are the Tile, Depth and Canny models.
.safetensors files can definitely be placed in the controlnet folder :)
Because the filenames of some of these files are sometimes pretty bad, I would suggest creating subfolders, e.g. 'SDXL', inside the model-specific directory. ComfyUI has access to any subfolders as well and will show them when you refresh your ComfyUI.
So you can then put your SDXL ControlNet models specifically in /models/controlnet/SDXL.
In our case this includes ttplanetSDXLControlnet_v10F16.safetensors, t2i-adapter_diffusers_xl_depth_zoe.safetensors and t2i-adapter_diffusers_xl_canny.safetensors.
Similarly, I would suggest the same for the primary model: e.g. Juggernaut goes into models/checkpoints/SDXL, and the upscale model goes into models/upscale_models (upscale models aren't specific to SD versions, so they can just sit at the base directory).
Keep in mind the way everyone stores their files is different, so for every new workflow you need to load it up and then reselect each of these models based on the location/filename you actually used. Good luck!
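The layout described above can be sketched as a few shell commands (the base path is an assumption; point it at your actual ComfyUI install):

```shell
# Suggested subfolder layout for SDXL models (base path is illustrative).
BASE="ComfyUI/models"
mkdir -p "$BASE/controlnet/SDXL" "$BASE/checkpoints/SDXL" "$BASE/upscale_models"
# Then move the downloaded files into place, e.g.:
# mv ttplanetSDXLControlnet_v10F16.safetensors "$BASE/controlnet/SDXL/"
# mv t2i-adapter_diffusers_xl_depth_zoe.safetensors "$BASE/controlnet/SDXL/"
# mv t2i-adapter_diffusers_xl_canny.safetensors "$BASE/controlnet/SDXL/"
ls "$BASE/controlnet"  # ComfyUI lists the SDXL subfolder after a refresh
```

After moving files, hit refresh in ComfyUI and reselect each model in its node so the dropdown points at the new subfolder path.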
I see Brad answered. Please let me know if you have more questions.
dude u made a tutorial!!!!
My first! :D