Hi, this might be a dumb question, but why doesn't it work with other checkpoints? I have a setup to create pixel art, and I get an error in the KSampler: "'ModuleList' object has no attribute '1'". I tried different models, but the only one that works is the base model. Thanks for the tutorials; I am binge-watching your videos.
Hi @Scott, your videos are amazing! I am very new to this space and trying to do a PoC where I need to take a person's photo with a clear face, use that face with a slightly different expression (maybe using ControlNet OpenPose or Canny), and, more importantly, render it in a comic style. I know we can do a really good comic style using SDXL 1.0, but I can't use my custom face in that situation. Is training a model or a LoRA my only way to achieve it? Can you please guide me?
@@sedetweiler, I am currently trying ControlNet Canny with SDXL 1.0 and a positive prompt of "color comic book..." and it seems to work okay-ish. So would it make sense to put roop/ReActor before this flow to swap the person's face into the picture and pose I want, and then do the same process? This eventually has to be hooked up to a website for people to try, so training a LoRA would be a time-consuming process and probably not the best option either?!
Your tutorial videos on Comfy are fantastic! I'm truly enjoying them. How can I support your channel? Also, it would be great if you could add clear subtitles to minimize any language and accent barriers. Thank you!
Thank you! There should be a join button under the video or on my main page to help support the channel. Lots of great goodies as well as a sign of appreciation. Cheers!
How about a quick basic overview of a recommended environment and directory structure for managing things efficiently - storage, updates, images, workflows...
A small suggestion with these tutorials, and particularly these intro tutorials - slow down. Like... a lot. I totally get that you know what you're doing... but most of us have to switch screens and hunt and peck for where things are... we don't even know the menu buttons. We have to pause the video every few seconds... rewind constantly. Especially at the end... you move so quickly. And I say that because your tutorials are excellent. You communicate well and show very good details of what you are doing... just so fast! haha
Yeah, sorry about that. I have found that if I am too slow, people just leave the video, so I tend to get it down to just the actual points you need to know. I often suggest watching it first and then doing it again with pauses as needed. I do the same thing with tutorials on products like ZBrush, as those can be brutal to keep up with, so I totally understand what you mean. Since you are a subscriber, you will also see a recent live stream, and you might really enjoy the speed of those. Thank you for your support!
I'm always getting an error on the ControlNetLoader saying "Error occurred when executing ControlNetLoader: 'input_blocks.0.0.weight'". Any idea what that could be? The setup should be the same. Thanks! Awesome tutorials, by the way!
@@sedetweiler I would like to suggest a topic: could you make a tutorial on how to colorize black-and-white pictures using the new Recolor? Could you teach us how to use the various ControlNet modules? Thanks
In the Load ControlNet Model node I get this error that I can't understand: "Error while deserializing header: HeaderTooLarge". I did everything, and I even reinstalled ComfyUI. :) I'm using the same model and the same ControlNet-LoRA you're using here. Any idea what this could mean?
I tried to download the ControlNet preprocessors, but apparently the list is missing. I have only two models there (maybe from another download I did?): diffusion_pytorch_model and ..model.fp16. Any idea how I get the full list? Thanks
I'm hoping you can help. I'm trying to use the ControlNet repository and OpenPose, but I keep running into errors at the ControlNet Loader step, something about .py lines 151, 81, etc. I've tried updating and reinstalling the ControlNet repository, but I'm still running into the same issue.
Hi there. I used the Manager to download comfyui-extensions to get the custom color nodes and titled reroutes etc., and it isn't working. The extra options didn't even show up. Is there something I did wrong? Is there a fix or a step to solve this? Thanks in advance.
How would you set up multiple ControlNets? I tried to chain 2 Apply ControlNet nodes together, but I think it's not working. For example, let's say the first Apply ControlNet is for a depth map, set to run from 0.0 to 0.2 of the steps. Then I chain that into another Apply ControlNet that has a Canny, set to start at 0.0 and end at 0.8... wouldn't the most recent Apply ControlNet's step settings override the first ControlNet, since I passed the first into the second?
I'm encountering an issue while trying to use the 'INTER_LINEAR' attribute from the 'cv2' module in my Python code. When I attempt to execute my script, I receive the following error message: Error: module 'cv2' has no attribute 'INTER_LINEAR' I'm using OpenCV (cv2) for performing image processing operations, and I need to use the 'INTER_LINEAR' method for image interpolation in my project. I have checked the OpenCV documentation, and I know that this attribute should be available, but I don't understand why I'm receiving this error.
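In case it helps anyone hitting this: in my experience that error usually means the real OpenCV package isn't the one being imported - e.g. a broken or partial install, or a stray local file named cv2.py shadowing it. A small stdlib-only sketch (the function name and messages are my own, not from OpenCV) to check which cv2 Python actually sees:

```python
import importlib.util
from pathlib import Path

def diagnose_cv2():
    """Return a hint about why cv2 attributes might be missing."""
    spec = importlib.util.find_spec("cv2")
    if spec is None:
        return "cv2 not installed; try: pip install opencv-python"
    origin = spec.origin or "<builtin>"
    # A stray cv2.py in your project directory shadows the real package.
    if Path(origin).name == "cv2.py":
        return f"local file {origin} is shadowing OpenCV; rename it"
    return f"cv2 found at {origin}"

print(diagnose_cv2())
```

If it reports a path inside your own project rather than site-packages, renaming that file usually brings INTER_LINEAR back.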
Good tutorial! I have a workflow that mixes SDXL with SD 1.5 (I think as a refiner); does that make sense? I use it like "a giant LoRA": I make the denoise super low so it changes the features of the image with the model I choose without changing the image as a whole (e.g., A-Zovya has a particular style I like, so I create an image with an SDXL model I like, then the refiner brings in some features from the A-Zovya 1.5 model).
I get the following error when it reaches the KSampler node; the KSampler is also outlined in purple at this step. Any idea how to fix it? "Error occurred when executing KSampler: 'ModuleList' object has no attribute '1'" EDIT: I was using the soapmix checkpoint from earlier videos; unsure why that one did not work. I reverted to the SDXL checkpoint and the problem was solved.
Hey, thanks for the tutorial! I have a question: I have no problem using this with SDXL, but the moment I try to pair it with SD 1.5 checkpoints, ComfyUI throws an error. It says: "Error occurred when executing KSampler: 'ModuleList' object has no attribute '1'" What should I do?
I get so lost trying to follow stuff like this. Mine has errors all over the place; both prompts, the KSampler, and the VAE are all red, and I seem to have all the same things you have. Where do I get your workflow files?
Be sure to change the model to things you have on your machine. The graphs for all of the videos are available on the Community tab here on YouTube for channel sponsors. Have you done the earlier videos? This is not a good one to start with, as it assumes you have done the previous episodes.
I'm using SD 1.5 and using ControlNet to give my character a pose as you described (without ReActor). Can I somehow control and define my own background for the subject (my character) to blend into?
Hi, in my controlnet/control-lora directory I only have the rank128 and rank256 of canny, recolor, depth and sketch. any idea what happened? (Windows system here with ComfyUI from Pinokio)
I got it working just fine, and I did a test gen with the Canny model, and I was wondering what was taking my render so long... I realized that I loaded 6gb diffusion model and not the control Lora. I once again was setting my laptop's gpu on fire.
Forgive me, I may be a bit slow, but when I type in git clone and paste the link in the cmd window, it says "git is not recognized as an internal or external command" etc. I also don't have admin privileges in the window; how do I fix this?
I have followed this to the letter but when I connect my ControlNet node to my KSampler and run it, I get an error that I have no clue about, as I'm just a user not a developer... Error occurred when executing KSampler: 'ModuleList' object has no attribute '1' Any ideas at all as to who I can ask, or what I can do?
You are using an SD 1.5 model; Control-LoRAs are for SDXL. You have to download a ControlNet model for 1.5, from lllyasviel/ControlNet-v1-1 on Hugging Face.
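To make the pairing explicit, here is a tiny lookup-table sketch - the family labels are my own shorthand, not anything from ComfyUI itself:

```python
# Hypothetical mapping: which base-model family each ControlNet flavor expects.
# Control-LoRAs (stabilityai/control-lora) were trained against SDXL, while
# the lllyasviel/ControlNet-v1-1 models were trained against SD 1.5.
CONTROLNET_FAMILY = {
    "control-lora": "sdxl",
    "controlnet-v1-1": "sd15",
}

def compatible(controlnet, checkpoint_family):
    """True when the ControlNet's expected base family matches the checkpoint."""
    return CONTROLNET_FAMILY.get(controlnet) == checkpoint_family
```

Mixing the two families (e.g. a Control-LoRA with an SD 1.5 checkpoint) is what produces the "'ModuleList' object has no attribute '1'" error mentioned in several comments above.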
Will these work for Automatic1111? On both of my gaming rigs I've been unsuccessful at installing WarpFusion, which I want to use for better consistency in videos (especially to redo the last realism test I put on my YouTube), and I can't install ComfyUI either. I always run into some sort of issue with the installation on both.
Is it worth switching from A1111 to ComfyUI regarding performance? I am working with SD 1.5 in A1111, but I want to try SDXL. So maybe it's a good point to switch.
Hey, thanks for the video. Unfortunately, the git clone command does not download the models themselves. Also a bit confusing is the fact that the old ControlNet in Automatic1111 is actually installed under extensions > controlnet and not at models > ControlNet.
You mentioned Blender -- here's a fun question: do you happen to know if there's ever going to be plans to port ComfyUI nodes in Blender? I recognize that these two programs both are VRAM hogs and this might limit the utility of it for a lot of people -- myself included; I def need a new card, hah -- but there seems to be a TON of potential for workflows, particularly with both the depth and normal maps, but also more creatively with the other bits, like rigged poses, training it to make textures for UVs, etc. :) Thanks!
We do have a stability API tie in for Blender, so we can get part of the way there. We do have some new goodies coming that I think you are going to enjoy!
Can you make a tutorial on how to solve CLIP Vision not appearing in the preprocessors? It is something about a yaml changing names, but there's no video about it. I'm lost. 😄
Even if I run a workflow from the Control-LoRA folder, I get the error 'CFGDenoiser' object has no attribute 'model_config' on the KSampler, though 1.5 ControlNet models work. Can someone help?
@@sedetweiler Yeah, I got that. The Control-LoRAs are in my A1111 installation/extension/model folder, and ComfyUI has the path to that. The 1.5 ControlNet files work with the XL model and VAE, but the Control-LoRA gives the CFGDenoiser error. Welp, it's something I don't need/want for now. Maybe it will fix itself with an update or something.
UPDATE: I found that the error came from my CUDA 12.1 installation. I reverted to 11.8 and all is good, thanks. Original question: Hello Scott, when I install controlnet_aux I get a DWPose error and am unable to install. Is there a known issue? I tried installing both from the Manager and with git clone.
Hi, I followed all your steps for cloning the models, but even after copying the "clone this model repository" command from Hugging Face (should I paste it somewhere?), I can't find them in the "controlnet" folder; I only have the file "put_controlnets_and_t2i_here". What did I do wrong? Thanks if someone will answer me.
Sorry, but I couldn't find the "control-lora" and "ControlNet-v1-1" files on Hugging Face. Could you please help me with a link to download them? Thanks @@sedetweiler
Yup! It's a much leaner application. It won't be a ton faster, but as you get your workflow in place, you can optimize it. For example, you might find you can cut the steps on your first pass, or between ControlNets.
I know a lot of people dislike the spaghetti mess that Comfy can become, but personally, as a trained aircraft electrician who reads and interprets complex electrical wiring diagrams on the daily, it just makes sense to my brain. Being able to see the inputs and outputs of each component (node) allows you to better understand how the whole process works and what each piece contributes to the final outcome. Thank you for making these tutorials; they have been a great help in learning!
Thank you for spending the time to create such didactic videos - building up the workflow from nothing, explaining the settings, the whys... I have been using MJ for months and shifted to A1111 to have more control, but ComfyUI is exactly the place I craved all along. And with your videos, the learning process is so enjoyable. Thank you. Happy to have subscribed.
My pleasure!
I think you are the only Stable Diffusion video creator who actually pronounces Euler correctly. My poor ears thank you.
Dude, I am with ya! 🤣
do people still prefer Euler over DPM++ 2M Karras?
Thanks for your lessons. Unfortunately, your videos are not popular among ALL Internet users, but know that for those who share your interests, you do a lot and inspire new things
I appreciate that!
@@sedetweiler Hey bro, do you know how I can use ControlNet in ComfyUI to select an image, then use another one to merge only the style of that image, without changing the pose, details, etc.? Basically a style transfer. Thanks
Hey, Scott. Would you consider making us a short video about your Top 5 or Top 10 Stable Diffusion MYTHS (number of steps, schedulers, negative prompts, etc.)? 🙏
Scott, you're a genius. I've had a project in mind and didn't know how to make it work -- I want a cameo of alabaster-on-onyx of a crescent moon with a sleepy, beautiful female face. It's a specific look that I've been searching for, with no idea where to start.
I had one that was the right shape, so after watching your video I pulled a depth map and ran that through and got - on the first try - closer than I've been able to get before.
Of course, the second one was nothing like what I want, but now I've seen the promised land.
Hey Scott and community! First up, thank you so much for the very well and not too quickly explained tutorial!!
I'm having a little problem though and I don't know exactly what to change/do to correct it...
At 8:19 you're showing the long list of ControlNet models and say that we'll see the list if we did it correctly. I followed your video step by step, but I still don't get that list... any idea or hint on what might've gone wrong?
Thanks so much in advance!
You are a wonderful teacher, I not only learn how to use new features but also how they work. Thank you.
Happy to hear that!
it's a pleasure to absorb knowledge from such a mentor.
Thank you!
Might be worth mentioning that the clone will take a while and the command window won't show progress continuously; you just have to be patient and wait for the "Filtering content" line to hit 100%. The first time round, I quit out of the cmd window too early. 🙃
Thank you, that is a good observation!
I really struggled with this part. I was running Comfy via Swarm. When Control-Lora loaded, and Terminal/PowerShell said "done," I thought I was finished. I shut it down and restarted Comfy, hoping it would work. However, I didn't have any models when I restarted Comfy, and I started overthinking things.
Despite having read this thread earlier, I continued to overcomplicate things. After several hours of installs and uninstalls with no success, I took a lunch break, and then this comment began to blossom in the overgrown garden that is my brain. So, thank you for this. Also, big thanks to Scott Detweiler for these fantastic tutorials!
Good shot m8, I totally did this also.
You should highlight this comment; I must have closed too early also, and the ControlNet models were empty.
Thanks for the video! I'm just getting started with ComfyUI and your video has helped me a lot! Thanks again!
1:18 Struggling to understand the very first part, the command-line prompt? Can you slow down? Thanks 🍷
I have the Control-LoRAs, but nothing appears in the "Load ControlNet Model" box. What's gone wrong there?
same
same
Had the same problem: I checked the "Control LoRAs" files (rank 128 and 256) and both were empty. I had to go into those folders on Hugging Face and download them directly into the correct folders. Poof, problem solved. :)
Finally, someone who can explain ComfyUI in a way I can understand. Thank you so much!!
7:54 Late comment, but you actually can override the default CSS font-size values in ComfyUI/web/style.css by adding your preferred settings in the ComfyUI/web/user.css file. I did this once for the multiline textboxes to make them more legible when zoomed out, but it was a while ago, and I've reinstalled ComfyUI once or twice since then, so I no longer have that modified user.css. It was only a matter of copying the relevant style section from style.css into user.css and then modifying the part I needed to change.
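For anyone wanting to try it, a minimal user.css sketch - the selector below is an assumption on my part, so copy the exact rule you want from style.css, since class names can change between ComfyUI versions:

```css
/* ComfyUI/web/user.css — loaded after style.css, so these values win */
.comfy-multiline-input {
    font-size: 14px; /* bump from the default for legibility when zoomed out */
}
```

Because user.css is loaded after style.css, rules of equal specificity here override the defaults without editing the original file.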
Happy trees! 🤣 I love it. I think I know who your inspiration for good video tutorials is.
Yeah, that came out of nowhere! Lol!
You are helping so much with these videos. Joined as channel sponsor (it's way way worth it folks)
Thank you!
Thank you so much for these wonderful tutorials. I'm very new to this and have a newb question: can this same workflow be used with other models? Any other model I try gives me a litany of errors, the first being "Error occurred when executing KSampler: 'ModuleList' object has no attribute '1'". Should I be changing a setting in the KSampler? Thanks!
Yes, the workflow will be the same across models. If one is giving you an issue, figure out whether it is the preprocessor, the ControlNet model, or the inference model.
@@sedetweiler thanks so much!
@@jasondulin7376 Did you ever figure out what the issue was for you? I get the same error, I tried changing everything @sedetweiler mentioned and nothing helped or even changed the error.
Hello! I have the same error as you. How did you fix it?
Hey, can you help me out? I cloned the repository with git clone as mentioned at 3:40, and everything was cloned except for the models. 😓 Can you tell me why this is? I always have to download models manually because of this.
I have been trying to use any depth preprocessor, and for the life of me I can't! I don't know why. I've been searching all the tutorials. I have all the auxiliary preprocessors and the models in the right folders, and it keeps telling me I don't. :(
I get an error at 6:13 that it can't find a file in custom_nodes\comfyui_controlnet_aux\ckpts\lllyasviel/Annotators\cache\models--lllyasviel--Annotators\snapshots\982e7edaec38759d914a963c48c4726685de7d96.
You lost me at the 15:55 mark. How do you place a model in the folder with a code? I'm equally confused as to what you were doing in Notepad. Is there a walkthrough where I can see it being done? Thanks.
Did you ever figure out what he's talking about? He completely skipped that step, and I don't know what to do there either =/
@@WarGamerGirl I never did. I ended up just installing ComfyUI separately through another tutorial that did it a different way.
A touch off-subject: in the SDXL UIs there is a tile button; where can I find the ability to make seamless tileable textures in Comfy? I'm a sculptor seeing if I can maybe train a checkpoint or LoRA or something to create tiling patterns like snake scales, and hopefully train it on alphas or depth maps so we can create interesting new stuff, as opposed to my students always downloading the same old crap. I can imagine snake scales with mouths or eyes on them, but no one is training texture models. Thank you. P.S. Thank you greatly; you are making it possible for someone who can't prompt his way out of a paper bag to explore where these new technologies can take our art. I went from my best tools being an 800-year-old set of chisels to a computer when I saw ZBrush 18 years ago. Your videos are wonderful; thank you again.
Hmm, I know we have some methods for doing that in-house. Let me see if there is a method I can get out here that you can use.
I joined last night; I don't want to take advantage of your generosity. I teach ZBrush, and if I can figure out how to train texture and seamless tileable-texture models or LoRAs or something, lol, it will be an amazing tool. I'm so excited I found your page. Such powerful tools. Thank you greatly @@sedetweiler
Scott, do you know if there are plans to bring the lineart preprocessors to SDXL? I'm a product designer and for me it's one of the most exciting things about SD. Load in a clean sketch and see where SD takes it. I need more detail though for things like bicycles.. SD1.5 does cars really well, but fine details get lost and that's what makes up most of a bicycle :)
I'm trying lineart with SD1.5, then taking the output of that into depth and XL, but it's a bit of a roundabout way of doing things with less accurate results than I'd like.
Thanks for that. There is still one big, big missing piece for narrative work with SD - namely, LoRAs for the refiner. We can get consistent characters with base-model LoRAs via Kohya-ss, but refining screws them up, so the quality of custom characters is barely better than SD 1.5 at this point...
I use Impact to fix that. Generate the image as usual, with the refiner, then run it through the Impact Face Detailer and regenerate the face with just the base model and character LoRA, without the refiner. The nice thing about Impact is that it detects the face automatically, so once you have your workflow set up, you don't have to worry about it anymore. Just let it do its thing.
@@wsippel Thank you for the idea! Will give it a try tonight.
A couple of videos back I commented that you are like Bob Ross. Thank you for saying "happy tree".
Thank you so much for being out there :)
Always!
Hey Scott, great video. I got most of the way there but keep encountering a "header too large" error. I'm running an Apple MacBook Pro M2 with 24 GB of RAM; is this hardware related? I'm searching for a lighter-weight model with little luck. ControlNet works, and I can run Stable Diffusion with ControlNet in the webui, but not ComfyUI. I followed your tut to a T twice. Any direction or resources you can recommend would be great. Thanks for the informative video!
Thanks for these awesome tuts. Why don't you use Link Render Mode: Straight? It fixes the noodle problem. It's in the options (gear button at the top right of the Queue Prompt panel).
I installed a bunch of custom nodes and now SDXL 1.0 won't work anymore. Before it was slow; now I'm getting a not-enough-memory error. Well, I guess XL is not for me.
Yea, I used straight for a bit, but prefer the noodles :-)
Thank you for these videos, but I need a little help! I'm a 3D artist transitioning to learning some Stable Diffusion. I'm an absolute beginner when it comes to Python; I use GPT to write Maya scripts for me because looking at code makes my head spin.
Anyway, when I try to use the two pose recognition nodes installed in this video, one returns an error that I'm missing something called "mediapipe" and the other node says I'm missing "matplotlib". How can I install these? The Manager doesn't show anything missing, and before this video I didn't even understand how to install stuff from Hugging Face or GitHub because I'm used to a big green "download" button. Cheers for the help :)
You might want to ask on the git for the nodes throwing the errors. Issues are so unique it would be hard to troubleshoot.
I would love to have the ability to split the screen, and then have one part that is fixed (on the image generation node), and the second part where we can move freely
Oh, that would be pretty sexy!
@@sedetweiler Btw, I remember you talking about chaiNNer a while back.
The recent versions have the ability to use Automatic1111, and nodes specific to Stable Diffusion.
Interesting! If you check the ComfyUI folders, you will also find chaiNNer is part of its implementation as well.
I like to blend depth and canny or lineart 🙂
But once it is all set up, I can easily choose whether to use all canny or all depth, or a certain ratio.
I want to try the same thing. Probably too large a question, but if I wanted to add another ControlNet, can I just duplicate a few nodes and plug them into the same place?
Use them in a chain. The order isn't important.
Hello Scott, how can you make the reroute nodes get the name of whatever they are rerouting?
Right-click on the tiny dot and choose rename. Mine are that way because of the pythongosssss node set, I believe.
Hi, this might be a dumb question, but why doesn't it work with other checkpoints? I have a setup to create pixel art, and I get an error in the KSampler: "'ModuleList' object has no attribute '1'". I tried different models, but the only one that works is the base model. Thanks for the tutorials; I am binge-watching your videos.
Q. Why don't we need the SDXL-specific CLIP text encoders (like used in the earlier videos) here?
Hi @Scott, your videos are amazing!
I am very new to this space and trying to do a PoC where I need to take a person's photo with a clear face, use the face, give it a slightly different expression (maybe using ControlNet openpose or canny), and, more importantly, render it in a comic style.
I know we can do a really good comic style using SDXL 1.0, but I can't use my custom face in that situation. Is training a model or a LoRA my only way to achieve this? Can you please guide me?
You can try roop or training a custom LoRA. You have a few options there, but roop or reactor would be your best bet.
@@sedetweiler I am currently trying to use ControlNet canny with SDXL 1.0 and a positive prompt of "color comic book..." and it seems to work OK-ish. So would it make sense to put roop/reactor before this flow to swap the person's face into the picture and pose I want, and then do the same process?
This eventually has to be hooked up to a website for people to try, so training a LoRA would be a time-consuming process and probably not the best option either?!
I am sure those would work. However, be careful what you allow people to use as training material!
Very good. I need to learn to keep faces similar in various character images for plot consistency.
Your tutorial videos on Comfy are fantastic! I'm truly enjoying them. How can I support your channel? Also, it would be great if you could add clear subtitles to minimize any language and accent barriers. Thank you!
Thank you! There should be a join button under the video or on my main page to help support the channel. Lots of great goodies as well as a sign of appreciation. Cheers!
Awesome demonstration. Do you have any playlist to start from baby steps?
Yup! ruclips.net/p/PLIF38owJLhR1EGDY4kOnsEnMyolZgza1x
What if I don't see the giant list at 8:19?
How about a quick basic overview of a recommended environment and directory structure, to manage things in an efficient way: storage, update management, image management, workflow...
Sure! I can do that. I use a NAS so not sure how many others will be interested in using what I have.
A small suggestion with these tutorials, and particularly these intro tutorials: slow down. Like... a lot. I totally get that you know what you're doing... but most of us have to switch screens and hunt and peck for where things are... we don't even know the menu buttons. We have to pause the video every few seconds... rewind constantly. Especially at the end... you move so quickly.
And I say that because your tutorials are excellent. You communicate well and show very good details of what you are doing... just so fast! Haha
Yeah, sorry about that. I have found that if I am too slow, people just leave the video, so I tend to get it down to the actual points you need to know. I often suggest watching it first, and then doing it again with pauses as needed. I do the same thing with tutorials on products like ZBrush, as those can be brutal to keep up with, so I totally understand what you mean. Since you are a subscriber, you will also see a recent live stream, and you might really enjoy the speed of those. Thank you for your support!
I'm always getting an error on the ControlNetLoader saying:
Error occurred when executing ControlNetLoader:
'input_blocks.0.0.weight'
Any idea what that could be? The setup should be the same. Thanks!
Awesome tutorials, by the way!
Thanks for the great tutorial, Scott. How do you get the VAE label on the reroute node, btw?
It is automatic with some of the latest updates.
You can right-click on the reroute and set it to 'show type by default'.
this is what i call a reeeeaaally good tut. thank you
Glad you think so!
@@sedetweiler I would like to suggest a topic: could you make a tutorial on how to colorize black and white pictures using the new Recolor model? And could you teach us how to use the various ControlNet modules? Thanks
I'm using the ComfyUI portable version, and when I clone ComfyUI Manager into the custom_nodes folder, the Manager button doesn't appear in the UI.
In the Load ControlNet Model node I get this error, and I can't understand why:
Error while deserializing header: HeaderTooLarge
I did everything; I even reinstalled ComfyUI :) I'm using the same model and the same control-lora you're using here.
Any idea what this could mean?
Same problem here. Did you find anything?
I tried to download the ControlNet preprocessors, but apparently the list is missing. I have only two models there (maybe from another download I did?): diffusion_pytorch_model and ...model.fp16.
Any ideas on how I get the full list? Thanks
Would this work for replacing for example a sofa in a room with another sofa? Or would you suggest some other approach?
Sure! That would be a good example.
@@sedetweiler Alright, but how would it know which part of the image is the sofa, so it replaces that part and not something else?
I'm hoping you can help. I'm trying to use the ControlNet repository and OpenPose, but I keep running into errors at the ControlNet Loader step, something about .py lines 151, 81, etc.
I've tried updating and reinstalling the ControlNet repository, but I'm still running into the same issue.
Hi there. I used the Manager to download comfyui-extensions to get the custom color nodes and titled reroutes etc., and it isn't working. The extra options didn't even show up. Is there something I did wrong? Is there a fix for this, or a step to solve it? Thanks in advance.
Can I ask, since we are using SDXL, why is the standard Clip Text Encode node used instead of the CLIPTextEncodeSDXL node?
I just didn't need both G and L for this demo.
How would you set up multiple ControlNets? I tried to chain two Apply ControlNet nodes together, but I think it's not working. For example, let's say the first Apply ControlNet is for a depth map, set to run from 0.0 to 0.2 of the steps. Then I chain that into another Apply ControlNet that has a canny, set to start at 0.0 and end at 0.8... wouldn't the most recent Apply ControlNet's step settings override the first ControlNet's, since I passed the first into the second?
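From what I understand, each Apply ControlNet node keeps its own strength and start/end window, so the second one shouldn't override the first; chaining just passes the conditioning along. Something like this toy sketch (my mental model, not ComfyUI's actual code):

```python
def control_weight(step_frac: float, start: float, end: float, strength: float) -> float:
    """Toy model: the weight one ControlNet contributes at a point in sampling.

    Each chained Apply ControlNet keeps its own window; chaining passes
    conditioning along, it doesn't overwrite the earlier node's settings.
    """
    return strength if start <= step_frac <= end else 0.0

# Depth active for the first 20% of steps, canny for the first 80%:
print(control_weight(0.1, 0.0, 0.2, 1.0))  # depth still active: 1.0
print(control_weight(0.5, 0.0, 0.2, 1.0))  # depth window over: 0.0
print(control_weight(0.5, 0.0, 0.8, 0.7))  # canny at strength 0.7
```

So at 50% of the steps only the canny net is still contributing, which matches what you'd expect from the two windows.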
I'm encountering an issue while trying to use the 'INTER_LINEAR' attribute from the 'cv2' module in my Python code. When I attempt to execute my script, I receive the following error message:
Error: module 'cv2' has no attribute 'INTER_LINEAR'
I'm using OpenCV (cv2) for performing image processing operations, and I need to use the 'INTER_LINEAR' method for image interpolation in my project. I have checked the OpenCV documentation, and I know that this attribute should be available, but I don't understand why I'm receiving this error.
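In case it helps anyone hitting the same thing: in my experience this error usually means Python is importing the wrong cv2, for example a stray local cv2.py shadowing the real package, or conflicting opencv-python / opencv-python-headless installs. A quick stdlib-only way to check where an import actually resolves (works for any module name, cv2 included):

```python
import importlib.util

def where_is(name: str):
    """Return the file Python would import `name` from, or None if it can't be found."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# If this prints a path to a cv2.py inside your own project instead of
# site-packages, a local file is shadowing the real OpenCV package.
print(where_is("cv2"))
```

If it resolves to the real site-packages install and the attribute is still missing, try reinstalling with pip so only one OpenCV variant is present.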
I get a conflicted-node warning when installing that ControlNet thing. Why?
Is there any stereoscopic/stereogram-creating depth map node in ComfyUI yet?
Good tutorial! I have a workflow that mixes SDXL with SD 1.5 (I think as a refiner); does that make sense? I use it like "a giant LoRA": I make the denoise super low so it changes the features of the image with the model I choose without changing the image as a whole (e.g., A-Zovya has a particular style I like, so I create an image with an SDXL model I like, then the refiner brings in some features from the A-Zovya 1.5 model).
Sounds like fun! That is one of the reasons why I love this node method so much, as you can do whatever you want!
Error occurred when executing KSampler:
'ModuleList' object has no attribute '1'
Why does the section in extra_model_paths.yaml say a111 and not a1111? 🤔
Where can I find help with a Depthmap error?
I get the following error when it reaches the KSampler node. The KSampler is also outlined in purple at this step. Any idea how to fix it?
Error occurred when executing KSampler:
'ModuleList' object has no attribute '1'
EDIT: I was using the soapmix checkpoint from the earlier videos; unsure why that one did not work. Reverted to the SDXL checkpoint, problem solved.
Hey, thanks for the tutorial! I have a question: I have no problem using this with SDXL, but the moment I try to pair it with SD 1.5 checkpoints, ComfyUI throws an error.
It says: Error occurred when executing KSampler:
'ModuleList' object has no attribute '1'
What should I do?
You cannot mix ControlNets or LoRAs across the two model families. 😞
@@sedetweiler Can I use ControlNet with SD 1.5 checkpoints in ComfyUI, and if so, how?
I get so lost trying to follow stuff like this. Mine has errors all over the place; both prompts, the KSampler, and the VAE are all red, and I seem to have all the same things you've got. Where do I get your workflow files?
Be sure to change the model to things you have on your machine. The graphs for all of the videos are available on the Community tab here on YouTube for channel sponsors. Have you done the earlier videos? This is not a good one to start with, as it assumes you have done the previous episodes.
@@sedetweiler I have at this point, thanks man!
Hi, can you help me? Error occurred when executing ControlNetLoader:
I'm using SD 1.5 and using ControlNet to give my character a pose as you described (without Reactor). Can I somehow control and define my own background for the subject (my character) to blend into?
I'm getting errors when running this with Stable Diffusion 3, is it not compatible?
Hi, in my controlnet/control-lora directory I only have the rank128 and rank256 versions of canny, recolor, depth and sketch. Any idea what happened? (Windows system here, with ComfyUI from Pinokio)
Friend, I have
extra_model_paths.yaml.example, but I don't have
extra_model_paths.yaml. What do I do? Where can I get it?
You rename the .example file and edit it so it is pointing to the correct place. The example file is just an example.
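In other words, something like this (the demo below runs in a temp folder so it's safe to paste and try; in your real setup, cd into your actual ComfyUI folder instead, and the base_path value is only a placeholder to edit):

```shell
# Simulate a ComfyUI folder in /tmp so this is safe to paste and run;
# in your real setup, cd into ComfyUI itself and skip these two lines.
mkdir -p /tmp/comfy_demo && cd /tmp/comfy_demo
printf 'a111:\n    base_path: /path/to/stable-diffusion-webui/\n' > extra_model_paths.yaml.example

# The actual fix: copy (or rename) the example file, then edit base_path
# so it points at your install.
cp extra_model_paths.yaml.example extra_model_paths.yaml
cat extra_model_paths.yaml
```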
I got it working just fine, and I did a test gen with the canny model, and I was wondering what was taking my render so long... I realized that I had loaded the 6 GB diffusion model and not the control LoRA. I was once again setting my laptop's GPU on fire.
Don't start a fire! :-)
Hi, I cannot get past the Install Custom Nodes bit. This update sign comes up and then just doesn't disappear. What can I do???
How do I apply this same tutorial but for openpose?
Thanks a lot. Please talk about the pipe in your next video.
Sure thing!
Why is it not possible to use ControlNet with models other than SD?
Now we cookin with FIRE!
Heck yes!
Forgive me, I may be a bit slow, but when I type in git clone and paste the link in the cmd window, it says "git not recognized as an internal or external command" etc. I also don't have admin privileges in the window; how do I fix this?
You will need to install git. Most of us already had it, so I made an assumption here that was apparently incorrect.
Thanks @@sedetweiler
I have followed this to the letter, but when I connect my ControlNet node to my KSampler and run it, I get an error that I have no clue about, as I'm just a user, not a developer... Error occurred when executing KSampler: 'ModuleList' object has no attribute '1'. Any ideas at all as to who I can ask, or what I can do?
What kind of ControlNet are you using?
@sedetweiler The exact same ones shown in the video, for the MiDaS depth map, but I tried the Zoe one as well.
You are using an SD 1.5 model; control LoRAs are for SDXL. You have to download a ControlNet model for 1.5, from the lllyasviel/ControlNet-v1-1 repository on Hugging Face.
Will these work for Automatic1111? On both of my gaming rigs I've been unsuccessful at installing WarpFusion, which I want to use for better consistency in videos, especially to redo the last realism test I put on my YouTube, and I can't install ComfyUI either. I always run into some sort of issue with the installation on both.
I think in the beta, but I have not checked with the most recent pull.
Is it worth switching from A1111 to ComfyUI regarding performance? I am working with SD 1.5 in A1111, but I want to try SDXL, so maybe it's a good point to switch.
I find it much faster, since it is only loading what you need and not that huge interface.
Friend, I want to add realistic lights and shadows to a photo without changing the characters. Is it possible?
Hey, thanks for the video. Unfortunately, the git clone command does not download the models themselves.
Also a bit confusing is the fact that the old ControlNet in Automatic1111 is actually installed under Extensions > ControlNet and not under Models > ControlNet.
The models get pulled in for the preprocessors when you use them the first time. The scripts and the nodes are pulled in with the git.
It did install them for me, but it took a long time, and the cmd window didn't update for like 10 minutes at one point.
Great tuts, but unfortunately mine always gets this error: 'input_blocks.0.0.weight'
Sounds like the preprocessors. I would restart comfy and it should locate the missing model.
You mentioned Blender -- here's a fun question: do you happen to know if there are ever going to be plans to port ComfyUI nodes into Blender? I recognize that these two programs are both VRAM hogs and this might limit the utility for a lot of people -- myself included; I def need a new card, hah -- but there seems to be a TON of potential for workflows, particularly with both the depth and normal maps, but also more creatively with the other bits, like rigged poses, training it to make textures for UVs, etc. :) Thanks!
We do have a stability API tie in for Blender, so we can get part of the way there. We do have some new goodies coming that I think you are going to enjoy!
I just installed comfyui. I'm having difficulty finding a tutorial on how to train my own images for use. Do you have a tutorial for that?
I don't use comfy for training, just for inference.
@@sedetweiler is there a way to drop in a random photograph and have comfy tweak it?
Can you make a tutorial on how to solve CLIP Vision not appearing in the preprocessors? It is something about the yaml changing names, but there's no video about it. I'm lost. 😄
CLIP Vision has its own loader.
Even if I run a workflow from the control-lora folder, I get the error:
'CFGDenoiser' object has no attribute 'model_config'
on the KSampler.
Can someone help?
The 1.5 ControlNet models work, though.
It isn't a LoRA in the traditional sense. In the video I show where to place it.
@@sedetweiler Yeah, I got that. The control LoRAs are in my A1111 installation's extensions/models folder, and ComfyUI has the path to that. The 1.5 ControlNet files work with the XL model and VAE, but the control LoRA gives the CFGDenoiser error. Welp, it's something that I don't need/want for now. Maybe it will fix itself with an update or something.
Hi, can someone tell me how I can add multiple preprocessors?
UPDATE: Found that the error came from my CUDA 12.1 installation. I reverted to 11.8. All good, thanks! Original question: Hello Scott, when I install controlnet_aux I get a DWPose error and am unable to install. Is there a known issue? I tried installing both from the Manager and via git clone.
I think DWPose uses ONNX, so you might want to check their git for install tips.
Hi, I followed all your steps for installing the models, but even after copying "clone this model repository" from Hugging Face (should I paste it somewhere?), I can't find them in the "controlnet" folder; I only have the file "put_controlnets_and_t2i_here". What did I do wrong? Thanks if someone will answer me.
You need to download the models from Hugging Face and put them into the controlnet folder, next to that "put_controlnets_and_t2i_here" file.
Be patient with me, this is a completely new thing for me. Where do I find the files to download? Can you give me the link, please? Thank you @@sedetweiler
Check again; mine took a while to load into the folder. But let me know if they show up in the ControlNet checkpoint loader; mine didn't. @@CALIGARIS-xe1pp
Sorry, but I didn't find the files "control-lora" and "ControlNet-v1-1" on Hugging Face. Please, could you help me with a link to download them? Thanks @@sedetweiler
would be much appreciated @sedetweiler
@@CALIGARIS-xe1pp
Nice job, appreciate it. 😀
Thanks! 😃
Easy to follow and very helpful. Euler gang! Euler gang!
woot! woot!
Thanks - very well explained!
Glad it was helpful!
Can't find your video that made the Manager button appear.
You're watching it..
@@victorhansson3410 Thanks, man... I totally missed it. Rewatched it and saw it.
glad you got it working!
I have A1111 and an Nvidia 4060 graphics card; it takes 45 sec to 1 min per image. Is Comfy faster?
Yup! It's a much leaner application. It won't be a ton faster, but as you get your workflow in place, you can optimize it. For example, you might find you can cut the steps on your first pass, or between ControlNets.
perfect explanation
I think we need a video showing how to add multiple LoRas to a ComfyUI workflow.
Comfy Roll Nodes has a Lora Stacker
It's super easy. Just chain them together
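Conceptually (a toy sketch, not the actual ComfyUI internals), each LoRA in the chain just adds its own scaled delta to the same base weights, which is why stacking them works:

```python
def apply_loras(base_weight: float, loras) -> float:
    """Toy sketch: each chained LoRA adds strength * delta to the same weight."""
    w = base_weight
    for strength, delta in loras:
        w += strength * delta
    return w

# Two LoRAs at strengths 0.8 and 0.5: 1.0 + 0.8*0.5 + 0.5*(-0.2) = 1.3
print(apply_loras(1.0, [(0.8, 0.5), (0.5, -0.2)]))
```

That's also why the order of the chained loader nodes doesn't really matter for the weights themselves.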
Hi Scott! Is it possible to do face swap with ComfyUI? If yes, can you make a video explaining how? Thanks in advance!
Yes, there are quite a few ways. I have a tutorial in the works for that!
@@sedetweiler Thank you very much!
thank you!!!
You're welcome!
wonderful news!
:-)
Wow, thank you so much, this is great!
thanks dude, ur a dude!
Sweet!
amazing.
Thank you! Cheers!