Thank you so much for combining Rob Adam's workflow into a node. I was struggling to recreate it in the same way he did. Appreciate it!!!
Excellent content as always. Thanks for sharing!
You could try to "stabilize" the image of the 3rd method with ControlNets like OpenPose, or a weak Tile ControlNet (strength 0.35) that is only active up to 40-50% of the steps. LineArt works too if you remove the background first, so the line art is only a sketch of the subject. Adding limbs with low weights to the negative prompt (foot:0.5 etc.) might also reduce the chance of body parts lying around. I really have to try this new "manly" padding node myself. :)
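For illustration, here is roughly how those ControlNet settings might look as a node entry in ComfyUI's API (JSON) workflow format, using the stock ControlNetApplyAdvanced node. The node links are placeholders, not a complete workflow:

```python
# Sketch of the settings suggested above: a weak tile ControlNet at
# strength 0.35, active only for the first ~45% of the sampling steps.
# The link references ("...", 0) are placeholders for real node IDs.
controlnet_apply = {
    "class_type": "ControlNetApplyAdvanced",
    "inputs": {
        "positive": ["positive_cond", 0],      # placeholder conditioning link
        "negative": ["negative_cond", 0],
        "control_net": ["controlnet_loader", 0],
        "image": ["subject_image", 0],
        "strength": 0.35,        # weak influence, as suggested
        "start_percent": 0.0,
        "end_percent": 0.45,     # stop guiding at ~45% of the steps
    },
}
```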
Amazing tutorial, thank you! My question is about the Face Detailer: it's fairly easy to detail one person, but if you have four people and want to detail them individually, is there a way to do that? Have you covered that in any previous video? Thanks!
Hi! I haven't covered this topic yet, but I think this may help you: github.com/ltdrdata/ComfyUI-Impact-Pack/issues/148
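As a rough illustration of the idea behind per-person detailing (independent of any specific node pack), a loop like the one below runs a separate detailing pass per detected face, each with its own prompt. detect_faces and inpaint_region are hypothetical placeholders for whatever detector and detailer you actually use (e.g. the Impact Pack's SEGS-based pipeline):

```python
from typing import List, Tuple

BBox = Tuple[int, int, int, int]  # x, y, width, height

# Hypothetical stand-ins that only show the shape of the loop,
# not a real implementation.
def detect_faces(image) -> List[BBox]:
    return []  # placeholder: a face/bbox detector would go here

def inpaint_region(image, bbox: BBox, prompt: str, denoise: float):
    return image  # placeholder: a masked img2img pass over the crop

def detail_people(image, prompts: List[str]):
    # One prompt per detected person, applied only to that person's region,
    # so each face gets its own detailing pass instead of one global pass.
    for bbox, prompt in zip(detect_faces(image), prompts):
        image = inpaint_region(image, bbox, prompt, denoise=0.4)
    return image
```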
Hey, if someone is currently not able to join your Patreon, are they supposed to recreate your workflow manually by watching the tutorial?
Hi! As I wrote in the description, even if you have the option to download the final workflow from Patreon, I would like people to watch the whole video and go step by step, so that you really learn what you are doing! 😋 I put it behind Patreon for those who, for one reason or another, don't want to or can't build it themselves, but that is not the main reason behind the Patreon.
Error occurred when executing GPT Sampler [n-suite]:
The expanded size of the tensor (749) must match the existing size (750) at non-singleton dimension 1. Target sizes: [1, 749]. Tensor sizes: [1, 750]
Any solutions?
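Without seeing the workflow it's hard to say what triggers this, but the error itself is a classic off-by-one between two sequence lengths. A minimal reproduction of the error class, with the usual defensive fix, looks like this (an illustration only, not the actual n-suite code):

```python
import torch

# Trying to expand a [1, 750] tensor to [1, 749] fails, because
# dimension 1 is non-singleton and the sizes differ by one.
tokens = torch.zeros(1, 750)
# tokens.expand(1, 749)  # RuntimeError: expanded size (749) must match (750)

# Mismatches like this usually come from two sequence lengths drifting
# apart (e.g. token count vs. context window); a defensive fix is to
# truncate both to the shorter length before combining them.
target_len = 749
tokens = tokens[:, :target_len]
```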
Really nice video! There is also comfyui-inpaint-nodes, which has outpainting and, I think, uses a different inpaint/outpaint method (Fooocus) that I'd like to see in the comparison. Also, what is the name of the custom node for the progress bar in the top bar?
Hi! I didn't know that node, I'll check it out! The name of the progress bar custom node is rgthree-comfy (github.com/rgthree/rgthree-comfy)
Thank you very much for the exciting experiments. I have tested with an AI image of a narrow, idyllic alley in an Italian village: cobblestones, windows, doors and flowers. Unfortunately, all this creativity is lost in the outpaint. Fooocus handles it a little better, but its images are too dark. The hard test is to enlarge an image with outpainting, reduce it to the original size in a graphics program, and then enlarge it again. Repeating this 5 times (optically we move backwards) exposes all the weaknesses. How can we tap the creativity that is in the AI for outpainting? Even with the original prompt there is no improvement. It is also impossible to infer the enlargement from the original alone; the user has to say (via text prompt) how the world should change, even if only slightly. If light comes in from the right, then the lamp must come into view at some point. If there is a shadow, there must be a person standing there at some point...
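For reference, the stress test described above could be scripted roughly like this. outpaint() is a hypothetical stand-in for the pipeline under test; here it just pads with gray so the loop itself runs:

```python
from PIL import Image

# Placeholder for the outpainting pipeline being evaluated.
def outpaint(img: Image.Image, pad: int) -> Image.Image:
    big = Image.new("RGB", (img.width + 2 * pad, img.height + 2 * pad), "gray")
    big.paste(img, (pad, pad))
    return big

def zoom_out_test(img: Image.Image, rounds: int = 5, pad: int = 128) -> Image.Image:
    w, h = img.size
    for _ in range(rounds):
        grown = outpaint(img, pad)       # enlarge the canvas via outpainting
        img = grown.resize((w, h))       # shrink back: optically step backwards
    return img
```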
The best tutorial would be one using native nodes only.
Hi,
Thanks for the tutorial!
When I use denoise less than 1, I always get gray borders. What is wrong with my workflow?
Hi! Uhm, is it possible that you've attached the wrong latent to the KSampler? Do you have this problem in all three "flows"?
@@DreamingAIChannel No, the third one with this Advanced Outpainting node works OK.
I double-checked, all nodes are connected correctly, but I just noticed that if I increase the denoise to around 0.8-0.85, it starts working.
The only difference I can see is that I am using a different checkpoint (not Anything V3), but I'm not sure if that could be the reason.
@@rbbdzz Well, if all the nodes are connected correctly, it can only be that! PS: In the end I used MeinaMixV11.
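One plausible intuition for the gray borders (a rough simplification, not the actual sampler math): the padded area starts as an empty latent, which decodes to roughly flat gray, and with denoise < 1 the sampler only partially replaces that content, so remnants of the gray survive:

```python
import torch

# Fresh padding is an all-zero latent; the VAE decodes it to near-gray.
pad_latent = torch.zeros(1, 4, 64, 64)

def start_point(latent: torch.Tensor, denoise: float) -> torch.Tensor:
    # Simplified picture: denoise=1.0 starts from pure noise (old content
    # fully discarded); lower values keep a share of the old content.
    noise = torch.randn_like(latent)
    return (1 - denoise) * latent + denoise * noise

x = start_point(pad_latent, denoise=0.5)  # half the "gray" is still there
```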
Joytag is good but when I used moondream, after it finished downloading the model, this error happened:
Error occurred when executing GPT Loader Simple [n-suite]:
Unknown model (vit_so400m_patch14_siglip_384)
File "D:\ComfyUI\ComfyUI\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\ComfyUI\ComfyUI\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\ComfyUI\ComfyUI\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\ComfyUI\ComfyUI\ComfyUI\custom_nodes\ComfyUI-N-Nodes\py\gptcpp_node.py", line 368, in load_gpt_checkpoint
llm = MODEL_LOAD_FUNCTIONS[ckpt_name](ckpt_path,cpu)
File "D:\ComfyUI\ComfyUI\ComfyUI\custom_nodes\ComfyUI-N-Nodes\py\gptcpp_node.py", line 128, in load_moondream
moondream = Moondream.from_pretrained(os.path.join(models_base_path,"moondream")).to(device=device, dtype=dtype)
File "D:\ComfyUI\ComfyUI\python_embeded\lib\site-packages\transformers\modeling_utils.py", line 3594, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
File "D:\ComfyUI\ComfyUI\ComfyUI\custom_nodes\ComfyUI-N-Nodes\libs\moondream_repo\moondream\moondream.py", line 16, in __init__
self.vision_encoder = VisionEncoder()
File "D:\ComfyUI\ComfyUI\ComfyUI\custom_nodes\ComfyUI-N-Nodes\libs\moondream_repo\moondream\vision_encoder.py", line 98, in __init__
VisualHolder(timm.create_model("vit_so400m_patch14_siglip_384"))
File "D:\ComfyUI\ComfyUI\python_embeded\lib\site-packages\timm\models\factory.py", line 67, in create_model
I think it's a "timm"-related problem. If you reboot ComfyUI, do you get any errors regarding the "timm" installation?
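If the problem comes back, a quick way to check whether the installed timm release actually registers the SigLIP ViT that moondream's vision encoder asks for:

```python
import timm

# Older timm releases don't know this model name, which produces the
# "Unknown model (vit_so400m_patch14_siglip_384)" error above.
print(timm.__version__)
print(timm.list_models("*siglip*"))  # should include vit_so400m_patch14_siglip_384

# If the list is empty, upgrading timm (pip install -U timm) and then
# restarting ComfyUI usually makes the model name resolvable.
```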
Thanks, it's working now after rebooting ComfyUI. :D @@DreamingAIChannel
@@Cingku perfect! 👍