Very impressive, I hadn't seen any stable inpaint workflow until this one. Thanks!
Thank you very much for the workflow. As someone else pointed out, I hope you'll do a thorough episode going through the details for newbies like me.
it works AMAZINGLY, masterful work man, and thank you!
Thanks for the explanation and the method. I've always had problems changing small areas cleanly; I hope I can do better now.
You're welcome! I'll be glad to see the results
Very useful and easy to follow. Thank you ^^
thanks!
Very well explained, awesome workflow. 10/10 video. Extra points for sharing the workflow for free, so I don't need to pause a billion times to recreate it myself and can just study the workflow to learn it.
thanks!!! 10/10 comment!
Excellent video, and thanks for sharing the workflow
thanks! excellent comment 😉
Thank you, great video 👌
thanks!
Wow, the best part is that it keeps the original image size. That's the first solution like this I've found. It would be great if you built a similar solution for outpainting. I know I can almost use this one for outpainting: enlarge the image, mask the new part, etc. But a real outpainting solution, where I can set "expand right by x/y pixels" without the need for manual IP Adapter input, would be great. One step further would be to expand incrementally if the area is too big for a certain pixel limit, since system and model limits get exceeded.
Just an idea. Keep going!
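For what it's worth, the incremental part of the idea can be sketched in a few lines of Python. This is only an illustration of the chunking logic, not part of the workflow: the function name, the pixel budget, and the overlap ("context") width are all hypothetical, standing in for whatever window each diffusion pass would actually process.

```python
def plan_expansion(height, expand_right, max_pixels=1024 * 1024, context=256):
    """Split a rightward expansion into steps (hypothetical helper).

    Each pass is assumed to work on a window of at most `max_pixels`
    pixels that keeps `context` columns of already-generated image
    for continuity, so each pass adds (window_width - context) new
    columns at most.
    """
    window_width = max_pixels // height
    step = window_width - context
    if step <= 0:
        raise ValueError("height alone exceeds the pixel budget")
    steps = []
    remaining = expand_right
    while remaining > 0:
        s = min(step, remaining)  # last chunk may be narrower
        steps.append(s)
        remaining -= s
    return steps

# Example: expanding a 1024-px-tall image 2000 px to the right
# under a ~1 MP budget yields three passes.
print(plan_expansion(1024, 2000))  # [768, 768, 464]
```

The same plan would work for any direction; only the crop placement changes.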
Thanks! I'm working on an outpainting workflow... I hope I'll finish it soon
Thanks a lot, very useful.
You're very welcome 🙏
Thank you for the workflow and tutorial; I'm new to Comfy. Since you're going back to the basics, I'd love to learn about masks and segments and how they relate to each other. I'd also really appreciate a way to automatically mask a face and only apply changes there. Thank you again for all the content :)
thanks for commenting! I will cover those topics soon
wonderfully explained. nice
thanks! wonderful comment!
incredible work mate
thanks man!
Thanks bro... subscribed :)
thanks!
They should add an option to select an area just by dragging the cursor, the way we do cropping in Photoshop, so that only the selected part gets changed.
That's the "problem" with open-source programs: there's no such thing as "they".
I was thrilled to finally find someone combining ControlNet with Differential Diffusion, as no one else seemed to have covered it! However, despite spending hours trying to modify your workflow to blend a character into the scenery, I couldn't get it to work due to my lack of knowledge. Is there a way to mask an existing background photo and seamlessly integrate a character into it using OpenPose and Differential Diffusion?
thanks! If I understood correctly, I think you can use this workflow ruclips.net/video/k76f8aVgS4c/видео.htmlsi=QNEtQSwoo5okPtuw to achieve what you are looking for
Great ! 😊
thx!
nice :))
thanks 😊!
Thanks! Have you tried the SEGS nodes from the Impact Pack? You can inpaint only the masked area in a less complicated way (MASK to SEGS node)
thanks! I will check it out...
Thanks for your wonderful work! Can I ask where you downloaded the safetensors of the CLIP Vision model used in your IPAdapter Advanced node?
thanks! From the IP Adapter repo; the link is in the description
Hello! In what way would this workflow change if I wanted to automate the masking part with SAM2?
I have a very important question, please: sometimes the image in the preview is perfect, but it doesn't appear in the final result?
I have a few photos where I just want to put a smile on the face. But for it to look real, the expression should change in a way that doesn't change the face itself. Is it possible to make a tutorial about this?
check this one ruclips.net/video/VwEcGIBwsyw/видео.html
Hello! I've downloaded the workflow and (tried to) install the models. The BLIP image captioning node and the two prompt generators don't seem to be working correctly. First they wouldn't install through git, so I downloaded them manually and uploaded them to my Google Drive (I use Colab). But now it keeps saying it can't detect __init__.py. The Derfuu_ComfyUI_ModdedNodes pack doesn't want to install either. Unfortunately, I'm not well versed in Python and git, so I don't know what to do... Thank you again for your help!
hi. The prompt nodes are just simple text inputs; you can use any text input node instead
Hi Pixel Easel, thank you very much for this inpainting tutorial, but I can't get it to work.
I get errors saying the following node types were not found: "Text" and "ShowTextForGPT".
1. I tried to fix it by first going to "Install Missing Custom Nodes", but it says there are no missing nodes to install.
2. I have also just downloaded "Derfuu_ComfyUI_ModdedNodes" using the Mod Manager, but its nodes are still missing.
Can you help me with this?
same problem here
Can you make a tutorial for ControlNet inpainting on video-to-video?
thanks! Yes, I will do one on VFX
Arnold Schwarzenegger teaching ComfyUI...
Amazing tutorial! However, it is a little unfriendly to beginners who haven't used the nodes you mention in the video. Would it be possible to deconstruct this complex workflow in a short episode?
Thanks again for your video.
thanks! I'll do my best to make it clearer next vid
This is only for SDXL, right? It will not work with Flux?
this one is for Flux: ruclips.net/video/q047DlB04tw/видео.html
The IPAdapter Model Loader has a red line around it even though the node is loaded.
Hi, Derfuu_Comfyui_ModdedNodes cannot be opened at present. It worked before, but it hasn't been working these days. Can you please help resolve the problem? Thanks :)
I think you can replace it with any other text input node
🙈 Everything is red! I give up!
Hey, I loaded your workflow and installed everything, but two nodes are missing: "text" and "ShowTextForGPT". Can you help me find these? Both are part of the first group, called "group" in your workflow
For the text, you can use any text input node, and the same for the show text: you can try pythongosssss
I want to use this workflow so badly, but Derfuu doesn't load, even though it has been installed. I'm using the ComfyUI mobile version. Is that the issue?
Somehow the Derfuu UI modded nodes don't work for me :(
I get:
Error occurred when executing ImageResize+:
'bool' object has no attribute 'startswith'
Try deleting it and reloading.
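For context, this kind of error usually means a node widget that should hold a string ended up holding a boolean (for example, after a node update changed a widget from a checkbox to a dropdown, while the saved workflow still stores the old boolean value). A minimal Python reproduction of the error class, unrelated to the node's actual code:

```python
# A widget value that should be a string (e.g. a resize-mode name)
# has been saved as a boolean in an older workflow file.
value = True

# The node code then calls a string method on it, which raises
# exactly the reported AttributeError.
try:
    value.startswith("keep")
except AttributeError as exc:
    print(exc)  # 'bool' object has no attribute 'startswith'
```

If that's the cause, deleting the node and re-adding it fresh (so the widget gets a valid default) tends to clear it.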
Thanks for the great workflow!
I want to try it as soon as possible, but I get the following error and can't run it. Please help!
"When loading the graph, the following node types were not found:
Derfuu_ComfyUI_ModdedNodes
Nodes that have failed to load will show as red on the graph."
thanks! Try updating Comfy... and if it still doesn't work for you, write to me and we'll think of another solution
@@PixelEasel I'm getting the same thing too. Please help
I'm getting an ugly, low-quality face. I'm using Pony; is there a way to fix the face quality?
You can always use ReActor, but you shouldn't be getting distorted images
do you create workflows on request?
yes. you can send me an email to gophoto101@gmail.com
@@PixelEasel I've sent you an email
Thanks for sharing! But how do I resolve the out-of-memory error? 🙏 My video card is an RTX 4060
Sounds weird. Check that you haven't set the resolution too high
Thanks. If I set the mask area resolution to 768×768 instead of 1024×1024, could that resolve the problem?
Thanks for the tutorial. I am getting this error:
Error occurred when executing KSamplerAdvanced:
'ModuleList' object has no attribute '1'
I tried different settings and even checked with the community.
I bypassed ControlNet and got the result
thanks for sharing!
Has the author (Derfuu) deleted the file?
which one?
@@PixelEasel Thanks! I already solved the problem.
The cause was the 'Derfuu_ComfyUI_ModdedNodes' package in ComfyUI-Windows-11-Portable; I could not update it. But in ComfyUI on MX Linux everything worked. I've been looking for a good SDXL inpaint build for a long time, and I found it. Yours!
@@alexk1072 Derfuu isn't working for me in Portable. Are you saying you have to use a different installation of Comfy?
@@jasondulin7376 Yes. On MX Linux, for example, install ComfyUI following the Git instructions. I just mechanically moved the Derfuu_ComfyUI_ModdedNodes folder from ComfyUI-Linux to ComfyUI-Windows-Portable and everything worked, although mixlab keeps offering to help :)) Perhaps mixlab is to blame. FaceSwap and comfyui-reactor-node also don't work in ComfyUI-Windows-Portable, but they work fine from ComfyUI-Linux :)))
Hello Pixel, I have a problem with my custom workflow and need your help for a job task. Please send me your contact info
I hope I can help... gophoto101@gmail.com
@@PixelEasel Done, I have already emailed you
Big thanks for sharing your knowledge, but something strange happens for me: the masked area stays empty and the main subject doesn't show.
Sorry, my bad... I didn't use the right ControlNet model: control-lora-depth-rank256.safetensors