Just a BIG THANKS for this video! I started using Comfy two days ago after using A1111 for almost a year, and I will not go back. The start was a bit rough, but the longer I work with Comfy, the broader my grin gets. So many possibilities; I already love it. Still struggling with some inpaint workflows, but I am getting much better and have a much greater percentage of usable images compared to A1111. The fact that the complete workflow is stored in each image is super handy. Comfy is also much faster and more efficient at loading different models. Keep making your AI videos, well done so far! 💖
The anxiety of finally getting the hang of A1111 and then seeing all of these videos explaining the potential of ComfyUI.
I know I should take the plunge, I just need to find a deep enough breath first.
And a few weeks off work
I decided to try ComfyUI and it's really not as bad as I thought when I first looked at it. I also see how easy it is to do things that I can't see any way at all to do in automatic1111.
One extension I found which I think really helps clear up the mess of connections between nodes is the 'Use Everywhere' extension that can be found in the ComfyUI custom nodes manager panel.
One thing I did with it is create a checkpoint loader node and a Use Everywhere node, then connect the checkpoint node's outputs to the Use Everywhere node's inputs. Then I don't have to drag connections all over the place in my workflow to hook up model, CLIP and VAE. Other nodes that expect these inputs but don't have anything connected hook up transparently to the Use Everywhere node.
If I want to override this 'default' behavior for a node input, I just connect that input to something that can feed it.
I haven't done this, but you can also filter what the Use Everywhere node hooks up to by node color and by specifying a regular expression.
This makes for cleaner workflows, in my opinion.
any chance you could share the change background workflow? looks very interesting
Nice. Great idea, just for ComfyUI. I am finally using it instead of A1111, especially for video!
Awesome! Comfyui can do so much more :)
Amazing! We literally decided to make the move as of yesterday. Can’t come at a better time 🎉
Awesome :)
I have no idea why but the results I get from ComfyUI vs 1111 using the exact same models is absolutely unreal.
I’m interested in your background changer. will you share a video tutorial about that ?
Comfy is great for automating and batch processing.
How can we use ComfyUI with ControlNet to make desired poses of our favourite anime characters, with correct hands, faces and effects? Automatic1111's inpaint feature hasn't been much of an amazing tool for me so far. My goal is to replicate my favourite anime characters in the desired poses with effects. Please make a tutorial on it if it is possible.
It might be a failure on my end (not searching for the right keywords), but I'm still looking for an in-depth tutorial that shows me what the different nodes do and what the different text means and does (by text I mean the fields inside the nodes, like CFG and so on),
like what the bare minimum is to render images and such as well (I'm neurodivergent, so sometimes it's hard for me to understand and do stuff if I don't have the background for it).
For the YAML file, it says you only need to change base_path; there was no need to change all the others.
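For anyone following this thread, here is a minimal sketch of what that YAML file tends to look like, based on the `extra_model_paths.yaml.example` shipped with ComfyUI. The `base_path` below is a hypothetical A1111 install location; the subfolder keys are relative to it, which is why commenters above say only `base_path` usually needs changing:

```yaml
# Illustrative extra_model_paths.yaml sketch - the base_path is a made-up
# example; point it at your own stable-diffusion-webui folder.
a111:
    base_path: C:/AI/stable-diffusion-webui/

    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: |
        models/Lora
        models/LyCORIS
    upscale_models: models/ESRGAN
    embeddings: embeddings
    controlnet: models/ControlNet
```

Note that ComfyUI ships this as `extra_model_paths.yaml.example`; it has to be renamed to `extra_model_paths.yaml` before it takes effect, which may explain some of the "file doesn't exist" confusion below.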
I've been having a lot of trouble with this aspect of it. No matter what I do it won't connect to my folder paths.
Thank you...... Excellent presentation
The best!
I haven't watched it fully yet, but I was definitely looking for more videos on ComfyUI.
It's a great concept, but it's so finicky. There are a thousand improvements that could be made to the interface and user flow, as well as making it more robust and less dependent (CUDA must be the correct version, you must have certain chips, when importing workflows you must have all dependent files in the same version, etc.).
So it could use more attention and a bigger user base to solve those kinks.
ComfyUI is for building a process for advanced users; there's A1111 for you.
Aloha from Hawaii my friend :) Thanks to your video introducing ComfyUI many months ago, it has become my latest addiction. I haven't used A1111 in months now. I've managed to duplicate all of my A1111 workflows in ComfyUI. I am also a photographer and incorporate a lot of my work into my AI creations. The example you show in the video of replacing the clothing and surroundings caught my eye. I paused the vid and managed to recreate it, HOWEVER I think there's something I've missed. I keep getting a black face. I think it has to do with the Set Latent Noise Mask node; I can't tell where the samples (latent) connection is coming from. If it's possible, could you post the workflow? Thanks again (or as we say in Hawaii, Mahalo!!). Additional info: I'm on a Mac Studio M2 Ultra. Hopefully that is not the issue.
Would love to see this particular workflow expounded on as well, Olivio!
So stoked.
The only thing I wonder is: if I invest time in this, learn everything, and dedicate hours of my life to it, how long until an AI does this for me, or at least makes it easier? I just don't know what to commit myself to, thinking about the future and the acceleration of AI development.
I need YOUR workflow. The same face different clothing one
Could you explain in more detail HOW TO sell your comfyui workflows?
Very cool! I finally took the plunge last week and am really glad I did. It was a bit intimidating at first, but I just started off with the default workflow and then slowly added things in as I went, and it all started making sense pretty quickly. Would you happen to have that workflow at 4:50 posted anywhere?
Will save this video for later... I guess in a few days you'll just need a prompt to create ComfyUI workflows.
Fabulous channel and info, grateful.
ComfyUI Manager, and life became easy. Thx man!
Great I installed it - thank you for tips
9:33 For some reason this isn't working for me. I used your example and connected all the folder paths, but when I open ComfyUI it just says "undefined" when I try to bring my models in.
Why did you choose to use the general ComfyUI Manager link rather than the link for the portable version?
Can you make a tutorial on connecting ComfyUI to an application via the API? I work with MEVN stacks and would love that.
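Until such a tutorial exists, here is a minimal Python sketch of the kind of call involved. It assumes a locally running ComfyUI instance on the default port and uses its `/prompt` HTTP endpoint, which accepts an API-format workflow as JSON; the `build_payload` helper and the `"my-app"` client id are illustrative names, not part of ComfyUI itself:

```python
import json
import urllib.request

# Assumption: ComfyUI is running locally on its default port.
COMFY_URL = "http://127.0.0.1:8188"

def build_payload(workflow: dict, client_id: str = "my-app") -> bytes:
    """Wrap an API-format workflow dict (node id -> {class_type, inputs})
    into the JSON body that ComfyUI's /prompt endpoint expects."""
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")

def queue_prompt(workflow: dict) -> dict:
    """POST the workflow to /prompt and return the server's JSON response
    (which includes the queued prompt's id)."""
    req = urllib.request.Request(
        COMFY_URL + "/prompt",
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

From a MEVN stack you would make the equivalent POST from your Node backend; the payload shape is the same either way.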
Thank you for all you do.
Seems like you showed us everything _except for_ the installation process. I'd have liked to see what I'm supposed to do after 6:55. I downloaded the correct file (I think) and installed it, but I can't tell if anything actually happened.
Hello, is there a workflow for changing people's backgrounds and costumes in the video? I can't find the KSampler Face Color Merge from your video; can you tell me the name of the plug-in? Thank you very much ❤
Olivio, please help! I installed Krita, and tried to install the AI, but it kept failing since I had ComfyUI already installed. I uninstalled ComfyUI, and let Krita install it. It worked, but now I can't use ComfyUI by itself. There is no .bat file in the ComfyUI folders to get it started. Thanks.
is this pure AI or we can use other software for cleanup like PS or CSP?
I always get confused about when contests like this take place.
The contest is for the ComfyUI workflow, not for the AI image.
Hello! Something very basic (but important) is still not clear to me... it's probably obvious for many of you here, so I'll ask the question:
I've been an SD 1.5 user for probably more than a year and I'm very happy with it. Then I saw videos and web pages about ComfyUI everywhere, and I'd finally like to try it.
The thing that's still not clear to me is: does ComfyUI totally replace SD 1.5? I first thought it was just an interface (like the WebUI) that helps you use SD 1.5 or SDXL... another way to tell SD which prompts, LoRAs, and settings you want to use for an image generation... but now I'm not so sure.
It seems that for installation (as explained in this video) you just have to install the ComfyUI pack. Does that mean SD is included in it? Or is it a totally different AI image generator... with nothing to do with Stable Diffusion?
Thank you for any help understanding it better!
P.S.:
Oh, and I forgot... if ComfyUI really is an interface for SD 1.5 or SDXL... do I have to install one of these first before I install ComfyUI? (Or is it packed with it?)
No one to answer?
Both A1111 and ComfyUI are interfaces; they both support SD 1.5 and SDXL.
Thank you! And it seems it's all "packed", so we don't have to install Stable Diffusion separately. @@ahadmazhar4154
My biggest issue is with using controlnet in Comfy UI with XL Models.
I'm confused. I followed these steps, but there isn't a button for "Manager", so I can't do a lot of this
I switched to ComfyUI the day SDXL was out - and it's awesome! Actually, I HAD to switch over. A1111 has way too many issues with SDXL and is too slow on low-end GPUs. I just wonder if there is a way yet for *LoRA training* using ComfyUI. This would be ideal, since Comfy has much better memory management than A1111.
Hello Olivio, the contest sounds like a great initiative, but I'm still getting familiar with workflows. Could you also share your OpenArt profile name? I'm interested in the workflow you demonstrated at 4:50 in your video, where you kept the person's face intact while changing the background and clothing. Have you uploaded this workflow to OpenArt?
I still prefer A1111. The UI makes more sense to me and the process feels more natural. I have LCM LoRAs and TensorRT set up in A1111 and I can generate images at lightning speed, so it's not like ComfyUI is faster. The node system is great for automation, but not so much for simple experimenting. I wish ComfyUI came with presets for the most often used workflows, and maybe even a workflow library where you can pick and choose workflows to test out and learn from. It needs a lot more time in the oven in terms of user friendliness before I can recommend it to my friends who are interested in SD.
I have always avoided it because it looks so complicated. Now I thought I'd give it a try.
The extra_model_paths file doesn't exist, so I'll leave it alone again if even the "simplest" things don't work.
For the past few days I haven't found the right way to make a workflow like the one in your video at 4:54. Can I get your workflow as a comparison for my workflow error? Thank you in advance.
I uploaded the workflow as a reward for my youtube members and patreon supporters :)
@@OlivioSarikas Will you send the workflow to me if I become a YouTube member? What type of member? Sorry if I ask a lot of questions, I'm just a beginner student and I have a lot in my head that I want to learn. Once again, I apologize 🙏
@@OlivioSarikas where?
I don't understand how to install (use) SD 1.5 Workflows? Any advice?
Once you have ComfyUI installed, open it, click on Load, and load the JSON file of the workflow.
@@OlivioSarikas Thanks! It works :)
All my attempts at installing things through the Manager fail.
For a low-end GPU, A1111 is still better. I've tried ComfyUI, but for some reason Comfy is slower than A1111 with my lower-end GPU.
Generating at 512x512 takes 40 secs with Comfy, while A1111 takes 17 secs with the same steps, sampler, and checkpoint, and that's on a low-end GPU.
And the culprit turns out to be that ComfyUI won't let me use the whole GPU VRAM, while A1111 can use all of it, even on a low-end GPU. I've tried the medvram option in Comfy, but nothing changed.
How do I install it on something like Paperspace? Any suggestions from anyone regarding services like Paperspace that offer the best bang for your buck?
How does it differ from Invoke AI?
ComfyUI has a much bigger community in the node-based UI landscape.
Copy and learn from other people's workflows, then create a new workflow that works best for you.
I see ComfyUI I leave. I'll watch when you get back to A1111.
Unfortunately it doesn't work with AMD.
It’s mainly AMD that doesn’t work with AI. Really, users should pressure AMD to offer better support
True, they are getting better, but nowhere near Nvidia's offerings. ML on Windows is painful on Automatic1111; I really don't want to have to dual-boot Linux for ROCm for minor improvements. @@Phobos11
Why can't they build A1111 to work with all of these updates? It should look like A1111, but underneath it would work like ComfyUI.
They may do that at some point, but A1111 was around before node-based AI came to this arena, so it's built on the way things were done then. There is another UI that uses nodes plus an A1111-style UI. It may be Invoke, I'm not sure; someone else can come here with the correct answer.
The A1111 code base is a mess. There is a repo that already uses ComfyUI as a backend with a front end made in Gradio, so it looks like A1111. InvokeAI uses their own node implementation, which is not compatible with ComfyUI.
Because A1111 needs to be able to handle the extensions, so if A1111 isn't updated, the extensions won't work. ComfyUI, on the other hand, is kind of a framework that will run everything you put into it.
Deal with it... A1111 is obsolete and messy, and nothing will fix it.
It was OK a year ago, at the beginning of the AI era.
Nowadays we have much better tools.
For something very easy and fun to play with, we have Fooocus.
Still easy, but a much more advanced tool, is ComfyUI.
only 4 months late..
Bro
You say "install". You realize you just say "extract", and that's the extent of your install!!!! You DON'T SAY WHAT TO CLICK ON TO START THE PROGRAM! THERE ARE 29,291 files and 2,930 folders!!! Are you kidding me?
ComfyUI is doomed to fail, not because of its merits, but because it's unnatural, unfriendly and not intuitive at all. It will be another failure in the long list of apps using nodes that ended up dying... Nodes began with Silicon Graphics in the 90s, and they went out of business, plummeting like a meteorite...
Join my Discord Group: discord.gg/XKAk7GUzAW
#### Links from my Video ####
ComfyUI OpenArt Contest: contest.openart.ai/#participate
OpenArt Workflow Database: openart.ai/workflows/dev?sort=latest
SD 1.5 Workflows: civitai.com/models/59806/sd15-template-workflows-for-comfyui
SDXL Workflows: civitai.com/models/118005/sdxl-workflow-templates-for-comfyui-with-controlnet
ComfyUI Install: github.com/comfyanonymous/ComfyUI
ComfyUI Manager: github.com/ltdrdata/ComfyUI-Manager
I've yet to see an image made in Comfy that cannot be replicated to the same degree of quality in Automatic1111. I understand that from an automation point of view it may hold more utility, but for image output, meh.
In that case, please blend 2 images together as latents, add latent noise, then render them with a live blend of 3 models and 4 LoRAs (not talking about merging here), and sharpen and color-adjust the output. Oh wait, A1111 can't do any of that, as far as I know.
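For readers unfamiliar with the first step described above, latent blending is conceptually just a weighted sum of the two latent tensors, optionally with injected noise. The NumPy sketch below is illustrative only (hypothetical function name, simplified scaling), not ComfyUI's actual node code:

```python
import numpy as np

def blend_latents(latent_a, latent_b, alpha=0.5, noise_strength=0.0, seed=0):
    """Linearly blend two latent tensors and optionally inject Gaussian noise.

    Conceptual sketch of what latent-blend / set-noise steps do in a node
    workflow; shapes and scaling are illustrative, not ComfyUI's exact math.
    """
    rng = np.random.default_rng(seed)
    # Weighted sum: alpha=1.0 keeps latent_a, alpha=0.0 keeps latent_b.
    blended = alpha * latent_a + (1.0 - alpha) * latent_b
    if noise_strength > 0.0:
        # Extra noise gives the sampler room to re-render the blended result.
        blended = blended + noise_strength * rng.standard_normal(latent_a.shape)
    return blended
```

In a node UI this whole function is one node with `alpha` exposed as a widget, which is exactly why chaining several such steps is natural there and awkward in a fixed-tab UI.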
@@OlivioSarikas I'm sorry to have hurt your feelings; however, it is unclear to me how adding convoluted steps to the generation process increases your image quality by comparison. You may certainly achieve a varied style, but again, nothing that can't be replicated with regard to image quality. Rather baffling attitude, to be honest.
@OlivioSarikas lol, the number of salty A1111 users in here is mind-blowing! These guys just cannot stand the idea that they are going to have to adapt to the quickly changing times or get left behind. I just hear them making ridiculous excuses for why they just "can't commit" to anything but A1111.. It's so odd, to be honest..
I still haven't joined the ComfyUI wagon... it just looks like a mess, so I don't know...
Give it a try, and if you don't like it, you can still delete it again ;)
@@OlivioSarikas OK, I'm using it, and while it is a bit complicated, I even find it faster.