That is amazing. And I'm glad to see that you enjoyed using my workflow.
This is probably the best explanation of the workflow that I've seen so far. So if I ever forget how my workflow works I can just come back here and learn everything I need to know. You also just gained another subscriber, I need to watch some of the other great videos on your channel.
And many thanks to you for the workflow! Lots of hard work went in there 👍
Yes, it's a bit of work to get everything in place. And thanks to your video I can now just send people with questions to this channel while I can focus all my limited free time on working on the next update before getting back to writing more documentation 😃
@@Searge-DP woohoo! Updates! 😀
Anyone know how you can add ControlNet to this workflow?
@NerdyRodent Anyone know how you can add ControlNet to this workflow?
Thank you, so glad I stumbled across your channel, and in particular this workflow. So neat and easy to use for those of us who lost our meatballs in the spaghetti.
Finally! I was searching and searching for something on inpainting and how to do it with comfy. With this workflow and your explanation, it all seems so much easier now.
Great to hear!
Here's a summary if you'd like to add it to your workflow. I just added a text box:
1. Subject Focus: Prioritize the main details of your input to closely align the result with your vision. For instance, if you describe a vibrant sunset over the ocean, the image will highlight the sun's colors and the water's reflection.
2. Style Focus: Use this option to ensure the image embodies a particular artistic approach or style. If you mention a desire for a Renaissance painting vibe, the generated image will mimic the brushwork and color palette of that era.
3. Weighted: Achieve a balance between subject and style. If you want a balance between a serene forest scene (subject) and impressionist brushstrokes (style), you can fine-tune the settings to get the desired mix.
4. Overlay: This mode allows for unexpected but pleasantly surprising results. If you ask for a fusion of a cityscape and a dreamy cloudscape, the output might creatively merge elements from both.
5. Weighted-Overlay and Overlay-Weighted: Combine the strengths of both methods for even more precise control over the image output. When combining a subject like a mountain with a minimalist style, you can tweak the balance to get a unique interpretation.
6. Style Only: Generate outputs solely focused on the stylistic approach. If you want a picture that captures the essence of impressionism, the image will exhibit the distinct brushwork and color play of that style.
7. Subject - Style and Style - Subject: These modes emphasize one aspect while downplaying the other, leading to intriguing outcomes. For a subject like a bouquet of flowers with a modern art twist, the image could either prioritize the subject's details or infuse them with modernist aesthetics.
Thank you for this tutorial. I've been waiting to invest myself into Comfy but what was holding me back was the in-paint. Thanks to you, I have no more excuses but to dive in.🐀
Great to hear!
As an Auto1111 user I think ComfyUI will completely fill my needs! I will switch ASAP, thank you for the tutorials!
Excellent demonstration. I’ve only been using a small fraction of this workflow because I was afraid to touch any of the settings. Now I’m excited to press all the buttons. Ty
Press all the things!
I love the absolutely bonkers node structure (it’s like the inside of an engine). I can’t wait to see one with multiple LoRAs and ControlNet nodes and adetailer, sdupscale etc 😂
Did you figure out how to add OpenPose?
This is awesome, already got it downloaded and playing around. Bravo! Very well done my friend :)
I've been using this workflow for a little while and had my head around most of it; thankfully I've learned the high-res fix properly now. Thanks heaps for your vids, and to @Searge-DP for the wonderful work.
Great to hear!
I love this. I'm so glad I found this. Thanks for making this video and thanks to Searge for making this node setup. I'm on 4.3 right now and it's even cleaner than the version used in the video.
Glad I could help
Leeet's gooooo! I've been waiting for this kind of workflow for ComfyUI since SDXL 1.0 released :D I can't resist saying it, I freaking love open source
Ikr! 😀
Very excited to see proper inpainting in comfyUI
A couple of hours into using it.... it's genuinely something. At first I thought it was a gimmick, but with all the prompt layers it lets you push the outputs in the direction you need. I'll make an updated comment in a couple of days.
Brilliant stuff, very comprehensive and straight to the point, no fluff as always, nerdyrodent! Oh, and don't forget to use the Save Image node instead of Preview if you want images saved by default! ;)
Brilliant and very compact tutorial! Not spaghetti, right to the point! 🙏🙏
Amazing Tutorial and Amazing Workflow, keep up the great work. Subbed
Fabulous post as always! Thank you so much for this!!!
Glad you enjoyed it!
Cool. SDXL Controlnets are out now. Can you cover installation and use with Comfyui? Thanks
The easiest way to get the workflow is to drag and drop an example image into the browser window. The workflow will load instantly. Then open the manager and click install missing nodes. EASY PEASY
SeargeDP 4.0 doesn't have the disconnected upscale node. Now I'm lost as to why it doesn't generate an upscaled image.
You can still use the hires fix for larger images in 4.0
@Nerdy Rodent Thanks for this tutorial, very useful. I'm just getting into Comfy and I use Searge as a starting point. There's a setup I don't know how to do in Searge. I want to do img2img, but with a depth map as an extra input, along with the main image. Basically: img2img + depth2img. The latest Searge (4.3) has ControlNets, but I don't want to make a depth map in a ControlNet, I want to supply my own. There's a second image input, but it's meant for "masks" for inpainting, not really depth maps. Maybe there's a way to "inject" my depth map into a depth ControlNet, so that the Controlnet uses an external depth map instead of making one, but I don't know how to do this. If you have any ideas (or alternative Comfy setups that can do this), please let me know. Thanks again.
Yup, just bypass the preprocessing when using preprocessed images
Hey Nerdy Rodent, thx for the tutorial, do you plan on doing a new video for the newest version?
Useful as always. Need to get ComfyUI booted up.
You should!
Wow thanks! How can I use two custom LoRAs with this workflow?
I’ve not used multiple loras myself as yet, but you could try connecting your extra Lora to the checkpoint then using ModelMergeSimple
When using image-to-image or inpainting, the finished image has the same pixel size, but the aspect ratio has changed and the saved images have sections cut off. Anyone know what I'm missing?
Thank you for the great video! What is the minimal required VRAM for SDXL with ComfyUI?
Pretty low. Even ancient 6GB cards should be ok
thanks! this is great!! is there a way to see a progress bar?
The two easiest ways are to view the terminal output progress bar, or zoom out and watch the progress through the nodes
I got "fatal: refusing to merge unrelated histories", but I have some other custom node installed as well, maybe they are interferring.
Just pasting the folder to customnodes worked fine though.
How do I add an OpenPose ControlNet? I tried custom and it thinks it’s just an image. Do I need a preprocessor, or?
Thank you so much for this wonderful tutorial. If I might ask, can this be used with other models? If we switch models, are there other considerations? I get errors that I don't yet understand. Thanks!
Yup, it can indeed be used with other models :)
very cool, thank you Nerdy!
Thank you too!
How can we merge/blend two or three pictures to make a completely new one? (like the Midjourney blend function)
Use revisions - ruclips.net/video/uK51kvxFkhc/видео.html
Hi, what does this mean?
Error occurred when executing SeargeImageSave:
'Namespace' object has no attribute 'disable_metadata'
  File "D:\AI\Comfyui\ComfyUI_windows_portable\ComfyUI\execution.py", line 144, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "D:\AI\Comfyui\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "D:\AI\Comfyui\ComfyUI_windows_portable\ComfyUI\execution.py", line 67, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "D:\AI\Comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\SeargeSDXL\searge_sdxl_sampler_node.py", line 361, in save_images
    if args.disable_metadata is None or not args.disable_metadata:
Could be you're using a really old version or something?
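For anyone hitting the same error: the traceback shows the node reading args.disable_metadata, an attribute that newer ComfyUI builds define via the --disable-metadata launch flag, so an old ComfyUI install simply doesn't have it and updating ComfyUI is the real fix. Purely as an illustration (not the actual SeargeSDXL code), a defensive getattr() call is the usual way to survive a missing attribute like this:

```python
from argparse import Namespace

# Minimal sketch, not the actual SeargeSDXL code: an args namespace from an
# old ComfyUI build that never registered --disable-metadata has no such
# attribute, which is exactly the AttributeError in the traceback above.
old_args = Namespace()  # pretend this is ComfyUI's parsed command-line args

# getattr() with a default survives the missing attribute instead of crashing:
disable_metadata = getattr(old_args, "disable_metadata", False)
if not disable_metadata:
    print("prompt metadata would be embedded in the saved image")
```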
How do you work the upscaler? If I connect the upscaler, it creates images that are 1024x1024.
Sorry for the basic question, but... how do you gradually zoom in and out (my mouse scrolling is way too fast)?
I just use my scroll wheel
@NerdyRodent That's what I do, but the smallest zoom step is huge: either very far out or super close. Do you have any idea how to make its steps finer?
imwheel is the best option, as you can configure per-application scroll wheel settings or change the behaviour when a modifier key is pressed.
Any reason why I can't achieve the kind of high accuracy on text and writing claimed by StabilityAI?
Can someone explain the difference between the main and secondary prompts, what to put in each, and the technical reason why they are separate?
Basically, go with natural language in G and the usual word soup in L. Unfortunately the paper says little about why, but you can check it out at arxiv.org/abs/2307.01952
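For the curious, the "G" and "L" boxes map to SDXL's two text encoders (OpenCLIP ViT-bigG and CLIP ViT-L). Outside ComfyUI the same split is exposed by the diffusers library as prompt_2 and prompt; here is a minimal sketch, assuming you have diffusers installed, a CUDA GPU, and the standard SDXL base checkpoint (the prompt texts are just made-up examples):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Sketch only: in diffusers, `prompt` feeds the CLIP ViT-L encoder (keyword
# soup) and `prompt_2` feeds the OpenCLIP ViT-bigG encoder (natural language),
# mirroring the L / G prompt boxes in the SDXL ComfyUI nodes.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="photo, rodent, glasses, detailed fur, bokeh, 8k",         # L: word soup
    prompt_2="a photograph of a nerdy rodent wearing round glasses",  # G: natural language
    num_inference_steps=30,
).images[0]
image.save("nerdy_rodent.png")
```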
I tried to change vae_model and it went "undefined". Same with others. What happened?
nvm, got it
My only question: I run Comfy as an integration to Krita, meaning I never see these workflow windows etc. Can I still make changes to these files/workflows/background things? Or is what this video presents just something for me to forget about?
If you’re just using Krita then you can forget about comfy… unless you want to start making your own workflows, of course!
is there a way to easily add more lora models?
There is a lora stacker node available if you want loads of loras!
Thanks for that, it helped me a lot. Do you have a V4 update?
@NerdyRodent Hi. When you inpainted with a mask, did ComfyUI regenerate the whole picture, or only the masked area? This is a critical point for me to start using ComfyUI, as until recently it regenerated the whole image, and if you work with large images it's impossible to work normally.
I have issues when I don't connect the upscale nodes together
3.4 is working for me 😉
@NerdyRodent Yes, I saw that. I can use it no problem, but I have to deactivate the upscaler nodes to actually run this.
@NerdyRodent OK, now it's working. I had to reinstall ComfyUI to get this working.
Do you get a syntax error in ComfyUI?
Nope 😀
Great! Did you find a way to "reproduce" an image sequence this way? // like a stack? grz
After applying the mask it still recreates the entire image instead of just using my masked areas. Are there any additional steps needed to make sure the mask is used?
Not had that happen myself, so maybe confirm the height & width?
Are there any optimizations to make generations faster? It takes me 90-150 seconds per image
Mostly GPU. It's about 7 seconds for me.
This is fantastic, thank you.
You're very welcome!
Is there any way to change the output directory in ComfyUI?
Yup, that’s below the user interface area though
@NerdyRodent Thanks, but where exactly? I cannot find it anywhere.
In the "Save Image" nodes
@@NerdyRodent Thanks!
Waiting for image to video on ComfyUi
This is amazing!
Make a video on ControlNet in ComfyUI please!
So far as I know there’s only one available at the moment 😕
Weird, image to image always produces an exact copy of the source image for me.
Try with denoise 0.9 😉
@NerdyRodent Thanks. I mean, it works, but it's the last parameter I would have suspected to fix it.
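If it helps anyone else: in img2img the denoise value decides how far back into the noise schedule the source image is pushed before being sampled again, so at very low values the sampler has almost nothing to change and you get back a near-exact copy. A rough intuition sketch, not ComfyUI's exact sampler code:

```python
# Rough intuition only, not ComfyUI's exact implementation: img2img "denoise"
# controls how much of the schedule is actually re-sampled.
# 0.0 = source latent untouched (exact copy), 1.0 = behaves like txt2img.
steps = 30
for denoise in (0.1, 0.5, 0.9, 1.0):
    start_step = round(steps * (1.0 - denoise))  # steps effectively skipped
    print(f"denoise={denoise}: sampling runs steps {start_step}-{steps}, "
          f"so {steps - start_step} of {steps} steps can change the image")
```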
my man
Thanks for the tutorial, it helped me a lot! Now!... let's hit queue picture and...... **GIF OF NUCLEAR EXPLOSION** mah GPU!!!! >_
Very cool!
Ikr!
My thanks 🎉
Amazing video, keep up the amazing work
Any time!
Very easy to work with... 🤣 Or maybe not
"Lots of prompting options high res fix upscaling all in a single compact interface" ..... Uhm... That's Automatic1111 right?
This is so good! Except it gives me an error and I can't use it. Now I have to become a Python programmer first.
Lol, no programming required, thank goodness, as we can just use the app and pre-made workflows! 😃
Holy crap the entire thing looks like a freaking pc motherboard circuit wtf😱😱
Cool, isn’t it! But can we make Minecraft in ComfyUI? 😉
@NerdyRodent Do a crazy video attempting it! 😆 BTW can you please do a quick walkthrough vid on integrating the new ControlNet with the SeargeSDXL ComfyUI workflow? Pretty pleeaasseee 😘
I was in love with this ui (for about 10 seconds) when I first loaded it. But the amount of mouse work and fighting with scrolls and zooms - too gamey. I don’t game.
Hadoken!
👍
🙂
New!
I'm getting some weird and creepy results with denoising at 0.666 like you
I do like creepy :)
sdxl is just SD 2.1 trained on 1024 pixels
If you honestly believe this, you're an idiot...
lol ..no
nope
good luck changing my mind
@Feelix420 nobody cares
Pretty ironic how ComfyUI is called that. Anything but "comfy" these days.
ComfyUI hurts my eyes and SDXL isn't actually that good.
Whatever helps you sleep at night.
@JeradBenge What an odd reply.
I tried to..... but I get this:
Prompt outputs failed validation
ImageUpscaleWithModel:
- Required input is missing: image
ImageUpscaleWithModel:
- Required input is missing: image
SeargeInput4:
- Value not in list: base_model: 'sd_xl_base_1.0.safetensors' not in ['beautifulRealistic_v60.safetensors', 'dream2reality_v10.safetensors', 'epicrealism_pureEvolutionV5.safetensors', 'majicmixRealistic_betterV2V25.safetensors', 'sdxl10ArienmixxlAsian_v10.safetensors']
- Value not in list: refiner_model: 'sd_xl_refiner_1.0.safetensors' not in ['beautifulRealistic_v60.safetensors', 'dream2reality_v10.safetensors', 'epicrealism_pureEvolutionV5.safetensors', 'majicmixRealistic_betterV2V25.safetensors', 'sdxl10ArienmixxlAsian_v10.safetensors']
- Value not in list: vae_model: 'sdxl-vae.safetensors' not in []
- Value not in list: main_upscale_model: '4x_NMKD-Siax_200k.pth' not in []
- Value not in list: support_upscale_model: '4x-UltraSharp.pth' not in []
- Value not in list: lora_model: 'sd_xl_offset_example-lora_1.0.safetensors' not in ['DetailedEyes_xl_V2.safetensors', 'JapaneseDollLikeness_v15.safetensors', 'ThaiMassageDressV2.safetensors', 'asianGirlsFace_v1.safetensors', 'ellewuu-V1.safetensors', 'flat2.safetensors', 'handmix101.safetensors', 'koreanDollLikeness.safetensors', 'mahalaiuniform-000001.safetensors', 'sabai6.safetensors', 'taiwanDollLikeness_v20.safetensors']
Looks like you forgot the bit where you download all the models 😉
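If anyone else hits this wall of "Value not in list" errors: each entry means the workflow expects a file that isn't in the matching ComfyUI models subfolder. Here is a hypothetical sanity check, not part of the workflow itself, using the file names from the error above and assuming the standard ComfyUI folder layout (the root path is just an example install location, adjust it to your own):

```python
from pathlib import Path

# Hypothetical check: list which of the files the Searge workflow asks for are
# actually present. Folder names follow the standard ComfyUI models layout.
comfy_models = Path(r"D:\AI\Comfyui\ComfyUI_windows_portable\ComfyUI\models")
expected = {
    "checkpoints": ["sd_xl_base_1.0.safetensors", "sd_xl_refiner_1.0.safetensors"],
    "vae": ["sdxl-vae.safetensors"],
    "upscale_models": ["4x_NMKD-Siax_200k.pth", "4x-UltraSharp.pth"],
    "loras": ["sd_xl_offset_example-lora_1.0.safetensors"],
}
for folder, files in expected.items():
    for name in files:
        path = comfy_models / folder / name
        print(f"{'OK     ' if path.exists() else 'MISSING'} {path}")
```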