This is by far the best explanation of setting up Flux + ControlNet I have seen so far, since you actually explain everything rather than just "here's my over-complicated workflow!". The node layout is so nice and clean. You did more than enough to earn a sub and a like from me. Keep it up!
I am glad to hear that the step-by-step approach was clear and helpful for you. Your support is encouraging, and I appreciate your sub and the like. Thank you so much for your time and the amazing feedback.
Thanks for not just showing a final workflow but explaining each node. This is what makes your videos so great.
You’re welcome, glad you found the breakdown helpful!
I've been through a few of your tutes so far, and I am just floored by your expertise, your delivery, and, to top it off, your workflows work! Not like some others where it's all just smoke and mirrors, and when you use their workflow you soon find out it was made just for the show. Not you! I am thrilled to have found your channel. Thank you!
Your excitement is motivating, and I am glad that you’ve not only found value in the videos but also had success with the workflows.
You are golden! You answered more questions than those channels with 5x more views. All they do is push their Patreons. Thank you! Subbed and Liked!
Thank you for being here and joining the family. I appreciate your feedback and for bringing that to my attention.
One of the most hardcore useful tuts out there and it's FREE !
Bro, you're amazing.
Liked & Subscribed
💯
Thank you for the encouragement. We welcome you to the family.
@@goshniiAI 🙌
So helpful. Thank you for starting fresh and walking us through each step. Definitely earned a sub.
Thank you so much! I'm honoured to have earned your subscription, and glad you found this helpful.
Amazing, concise, understandable. Congrats man, keep the good work.
Thank you so much! I appreciate it.
I'd also love to see a long video that shows your process of deciding what nodes to use, how you found them, why you chose them, and how you tested them to arrive at this workflow.
I usually go through trial and error, node stacking, and testing multiple outputs before landing on the final setup. I’ll see how I can make it happen!
I usually never comment, but this is a really helpful video, man. You explain everything so perfectly. God bless.
I appreciate you taking the time. I'm glad you found it helpful.
OMG bro, just what I need 🔥🔥 THANK YOU. Clear rhythm, working method.
You are most welcome. I am glad to read your feedback. 💜
Thank you! It's good that you just tell and show what to do and how. Otherwise you could spend your whole life learning ComfyUI)). Learning as you go, in practice, is much easier.
I'm really glad to hear that the straightforward approach is helping you! Just diving in and practicing as you go makes it a lot easier. Thanks again for the feedback!
Just wanted to say, you are amazing!!
Hearing that means so much. Thank you for your support.
Much love from South Africa! Thank you for this video!!! I'm busy making a short horror movie for fun using Flux Dev and KLING to do image-to-video, and this is EXACTLY what I need! Because I need to make consistent characters but I only have 1 input image of the character as reference. Man I didn't know they had a character pose system for flux yet THANK YOU!!! :D this needs to be ranked higher in google!
You are very welcome! I am glad it was helpful for your short horror film project, and I appreciate your feedback. It is always great to connect with local creators, especially since I am currently in South Africa. Happy creating!
Thanks and it is nice to see a cleaner node layout, instead of a jumble of nodes and connections, which too many Comfy tutorial makers seem to love.
I am glad it was helpful! Thank you for the observation and feedback. It means a lot.
Def subscribing. This was excellent.
That means a lot, and Welcome aboard!
Thanks bro... I love the way you detail the whole process... you are a rock star, merci.
You are very welcome, and thank you for your compliment.
Man what a great tutorial. Thank you!
You are very welcome. Thank you for stopping by.
Hey, if I want to make a character sheet for an animal like a bunny, do I need a new reference sheet with different dimensions? How would I go about creating that? When I copy-paste someone's character sheets, they look too humanoid instead of being a bunny, for example.
You are right, a humanoid character sheet won't quite cut it because the proportions and features are so different.
Create or find a reference sheet specifically designed for animal anatomy. For a bunny, this would include front, side, and back views, focusing on its unique features, like ears, body shape, and tail placement. You can even sketch a simple one yourself or use basic AI tools to generate outlines.
Thank you very much for this tutorial... at the right speed and with detailed explanations.
Thank you so much for the kind words!
Also, for anyone experiencing an issue downloading the YOLO model: go into the ComfyUI folder (ComfyUI > custom_nodes > ComfyUI-Manager) and you will find a config file. Open it in a text editor, and where it says bypass_ssl = False, change False to True and save. Restart ComfyUI and you will be able to download the YOLO model, no problem.
Awesome and much appreciated - thank you for the additional information.
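For reference, after the change the relevant part of that config file (ComfyUI > custom_nodes > ComfyUI-Manager > config.ini; the section header may differ between Manager versions) should look something like this:

[default]
bypass_ssl = True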
This is amazing! Thank you so much. Subscribed!
Really good Explanation, Keep up the good work :)
Thank you for the motivation! I'm glad I could help.
Great video as always! Thanks!
Thank you for your encouragement.
I love this, already subscribed
Thank you for being here. i appreciate your support.
Thanks, masta ❤... your tutorial is the best I've ever learned from... 😂
I appreciate your encouragement. Thank you so much 💚
Superb work mate
Thank you so much, Suda! Love
Thank you, you are excellent!
That's very kind of you!
Thanks so much for your hard work, very useful videos.
You are very welcome! I appreciate your encouraging feedback. Thank you!
Hi. Can you please explain to everyone how to create the node called "controlnet apply sd3 and hunyuandit"? Thanks.
Hello there, the ControlNet Apply SD3 and HunyuanDiT node is no longer available; it has been renamed to "Apply ControlNet with VAE" in the latest updates.
It is a core node in ComfyUI, so once you update to the latest version, it needs no installation.
I hope this helps.
Hey! Great video, can you tell me how long approximately it should take to render an image at 5:26? I am using this workflow on my Mac with M3 processor but it takes forever to render, do I have to change my hardware? Can you recommend any good Windows based laptop for it?
Rendering time can vary a lot depending on the complexity of the workflow and your hardware. On a Mac with an M3 processor, ComfyUI tends to run slower, since it's optimized mainly for NVIDIA GPUs, and Flux models are heavy to begin with.
If you're considering switching to a Windows-based system:
1. An RTX 3070 or 4070 GPU with at least 16GB of system RAM will give you a good speed boost.
2. If budget isn't an issue, go for an RTX 3080/4080 or 3090/4090 with as much VRAM as you can get (the desktop 3090/4090 have 24GB).
Also, pairing it with an SSD will improve overall responsiveness. Even after using Flux for a while and dialing in my settings, my generations took at least five minutes.
@@goshniiAI Thank you very much for your response!
Update on the ControlNetApplySD3 node: supposedly it has been renamed to ControlNet Apply with VAE.
Thank you for making us aware. We appreciate you watching out for that.
When doing the first queue prompt for the AIO Aux Preprocessor, I just get a blank black image.
Double-check that your image resolution matches the AIO preprocessor's setup; mismatches can sometimes be the cause. Also, tweaking the strength values for ControlNet can help the preprocessor output get interpreted better. It took me a bit of experimenting with these settings too! I hope this helps.
@@goshniiAI I still get a blank image. Also, the strength comes after the preprocessor's save image, so I don't think it affects it?
I appreciate your tutorials. I am learning ComfyUI, and these have been tremendously helpful. I would be very interested in seeing how you would incorporate PuLID into this workflow so the character's face could be driven by an input image. ...I have tried but have not yet been successful.
Thank you for the feedback and suggestions. I have been working on including PuLID for quite some time now. I hope to overcome the challenges and share the process with everyone.
For the pose reference, can we add our own pics posing as we like? Will it work?
Yep!!! You can use any picture, and then you'll need ControlNet to extract your pose.
Thank you!
You are more than welcome.
Can you update to include a Florence node or similar to automatically describe the existing character, along with an 'append' CLIP Text Encode to include the turnaround info?
Thank you for your suggestion. I'll explore this and see how it integrates into the FLUX workflow.
Also, is there a way to add a face to include with the ControlNet poses? Describing is all well and good, but linking an image would be brilliant. Let us know.
Yes, you can use the PuLID custom node for FLUX for that, though I have not been able to get PuLID to function on my system with ComfyUI yet.
You can view this reference to guide you: ruclips.net/video/Uls_jXy9RuU/видео.htmlsi=qNoYR0xjk_A3COhB.
Dope stuff. You rock!
I appreciate that! Thank you!
How to create multiple consistent cartoon characters interacting with each other on different scenes?
Hopefully soon, in the next post
it shows "(IMPORT FAILED) ComfyUI's ControlNet Auxiliary Preprocessors" when i try to install ControlNet Auxiliary Preprocessors...anyone pls help
Make sure you're running the latest version of ComfyUI. Sometimes, older versions don’t play well with newer add-ons.
I can't use the AIO Aux Preprocessor, how do I fix this? 😢
No need to worry. You can use separate preprocessors for each model, and everything will still work.
How to use the image reference in animation?
I am hoping to share a video on that process in a future upload.
Thank you very much for this priceless video. You say the parameter cfg is chosen to be 1 because we are not using the negative prompt. As far as I know Flux doesn't use negative prompts, so I am a bit confused; could we just remove the negative prompt node from the workflow?
You are welcome, and you are entirely correct. However, the KSampler still requires a negative conditioning input, so the negative prompt node is linked for that.
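For anyone curious why CFG 1 makes the negative prompt irrelevant, here is a minimal sketch of the standard classifier-free guidance mix (a general illustration, not code from this workflow):

# cond / uncond: the model's predictions for the positive and negative prompts
def cfg_mix(cond, uncond, cfg):
    return uncond + cfg * (cond - uncond)

# At cfg = 1.0 this reduces to cond, so the negative conditioning cancels out;
# the node only has to stay connected because KSampler requires the input.
assert cfg_mix(7.0, 2.0, cfg=1.0) == 7.0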
Would you be so kind as to give the workflow for using an existing image or character? Thanks
Yes, hopefully the tutorial that follows will cover and provide that.
@@goshniiAI can't wait
I'd love to see you do this exact same video but with actual realistic human characters.
I'll keep this in mind for a future breakdown. I appreciate the suggestion! In the meantime, the video guide here could be a good starting point if you missed it. ruclips.net/video/kqBhMYeRPE0/видео.htmlsi=v4YzfYPRNn33ttDI
Very helpful, thank you.
I appreciate your feedback.
I need this, but instead of a text prompt I use Redux for style transfer.
You could find the video here on ControlNet and Redux helpful.
I'm getting an error when I try to use the DWPreprocessor (and several others). The message says:
# ComfyUI Error Report
## Error Details
- **Node Type:** AIO_Preprocessor
- **Exception Type:** huggingface_hub.utils._errors.LocalEntryNotFoundError
- **Exception Message:** An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.
## Stack Trace
My internet connection is fine. Any advice?
Sorry to hear that; I would recommend updating your custom nodes as well as running an update for ComfyUI itself.
Great video!
I'm glad you enjoyed it!
IDK if you can help me but I've had problems with this AIO Preprocessor.
AIO_Preprocessor
'NoneType' object has no attribute 'get_provider. Please help
A missing or outdated dependency can cause this, so make sure to update Comfy.
Otherwise, you can continue to use individual preprocessors for each ControlNet model; that will still work fine.
How to add LoRA to this workflow? Please, I need a LoRA for my character's face and ControlNet for my character's pose.
To achieve the LoRA results, place the LoRA node between the Load Checkpoint and the prompt nodes. You can also follow this tutorial on how to use Flux with LoRA. ruclips.net/video/HuDU4DlZid8/видео.htmlsi=-l4wISSzrH0i1wmp
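In other words, the chain looks roughly like this (node names as they appear in a default ComfyUI install; your workflow may vary):

Load Checkpoint -> Load LoRA -> CLIP Text Encode (positive and negative) -> KSampler
(the Load LoRA node passes MODEL and CLIP through, so its MODEL output also feeds the KSampler's model input)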
It all works except the Face Detailer. It just gets stuck in a loop when it gets to that step. Endless loop with no error. Refreshing and Restarting did not help. Everything is fully updated.
Yes, that's correct; the face detailer continuously refines the face details until they are complete. Keep it running until it generates the final image. You got it right!
This "controlnetapply sd3 andhunyuandit" is nowhere :/ I updated everything.
The "Apply SD3" node has been renamed to "Apply Controlnet With VAE" in the latest updates. The process to find it remains the same, but the node has been renamed.
Love your videos. I purchased the pack including the one in this video but I'm having issues. I keep getting the following error. 'CheckpointLoaderSimple
ERROR: Could not detect model type of: flux1-dev-fp8.safetensors' . Where would I download the correct model for this to work?
Thank you for supporting the channel. Make sure you're grabbing the specific FP8 version of the model and placing it in the models/checkpoints folder within your ComfyUI directory.
Double-check that the file name hasn't changed (e.g., flux1-dev-fp8.safetensors) and that it's saved in the right format. If you need further guidance, feel free to view this step-by-step video ruclips.net/video/TWSFej_S_bY/видео.htmlsi=hWosspilbjYj3QWl
@@goshniiAI Thank you! It worked, but is it normally very slow when it hits the first KSampler? It takes forever to get through this point.
@@LaMagra-w4c Yes, FLUX Dev can be a bit sluggish when it hits the first KSampler; it's not just you!
Here are a few tips to speed things up: use quantized models, lower the sampling steps, and make sure your GPU and VRAM aren't being held back by other stuff running in the background.
You always make great content! I have a question. I got an image of a character in a front-view T-pose, and I want to get different views of the character from that one image. Is it possible to load that image and get different views of that character using the OpenPose character sheet? Thanks for all of your hard work!
That is possible, but the process will likely involve a lot of trial and error. I recommend using the OpenPose character sheet as a guide to create the character views, then using those to train a LoRA for the character. This approach will give you more control.
Thank you for your encouraging feedback.
Any idea why I can't get it to work? Strangely, I get your workflow correctly from the link you provide and generate my image with the 3 views like you (before applying the ControlNet). Then I run the workflow again to apply the ControlNet pose (which shows up like in your video with the provided reference image; I can see the pose extracted correctly). But when I run the workflow to apply the ControlNet, instead of the 3-view picture, I don't get the panel view applying the previously generated character to the ControlNet pose, but a single centered character... I'm really not sure what went wrong lol, so if you have any idea, thanks.
Thank you for diving into the workflow! Here are a few tips that might help:
- Before you run the workflow again, just make sure the reference images for ControlNet are lined up right. Take a look at your positive prompt and think about adding multiple views if you haven’t already.
- It's a good idea to double-check the ControlNet settings, especially the resolution and how the preprocessor reads the pose data. Sometimes tweaking those can keep you from getting just a single centered result.
I hope this helps.
Are you going to follow up on this video with how to use this character sheet to put the characters in different scenes/videos?
Thanks for the suggestion! I'll check it out since you mentioned it.
Thanks for the video. This is awesome. Do you use this to create LoRAs? Or what do you use the character sheets for?
I haven't specifically used this workflow to create LoRAs, BUT character sheets can definitely be a foundation for that. They help you capture a character in different poses and perspectives, making it easier to feed consistent images into training processes for LoRAs.
Also, they are super useful for game development, animation, or just keeping a consistent look across different art projects.
FAB!
LOVE!
I was versed in character sheet making for over a year. However... I have yet to succeed at making the single-picture LoRA character that would produce the reference sheet of the original concept decently in one go.
Your take is basically the Mick Mumpitz workflow with Flux. It's good as it is.
I'm really glad you found this workflow helpful and shared your experience! Flux kicks it up a notch, and combining it with a refined approach like Mick Mumpitz's gives it that extra edge.
can this do image to video?
Yes, you can, once you have your character. The video here can guide you: ruclips.net/video/Yv5FuxXACZ4/видео.htmlsi=Slm5LUUHilgD0oIR
THANKS
You're welcome!
Hi bro, thanks for the video. Please, which PC do you recommend for all of this? I am trying to get a laptop, but I don't want to make mistakes, as I want it for traditional video editing and AI video/image generation.
Aim for at least an NVIDIA RTX 3060 or higher with 6GB or more VRAM. This will help with both rendering in video editing software and running AI generation workflows efficiently.
Also, 32GB of RAM is ideal for smooth performance, especially when multitasking or running resource-heavy AI models.
Great stuff, but there is definitely a missed opportunity to crop each pose and redo a KSampler pass on it; you could even crop your ControlNet image to fit the same pose.
You're absolutely right: cropping each pose and running it through the KSampler again could really refine the details and give even more control over the final result. I'll definitely keep that in mind for future tutorials! I appreciate the insight.
How do I know which other models are trained for use with ControlNet? I basically want to create a 2D cartoon character turnaround sheet using your workflow.
Hello, and thank you for watching and engaging. ControlNet only conditions your prompt to take the specific pose you want. So to find models that work smoothly with ControlNet, you can explore Civitai; sometimes the models include detailed tags indicating ControlNet compatibility. That said, the majority of models work with ControlNet.
For that 2D cartoon character turnaround, try searching for models tagged with styles like "cartoon" or "illustration."
I hope this helps.
thanks
You're welcome!
I find that if you add another generation step beforehand to tell the AI to generate a design sheet for a mannequin, you can skip the part where you have to load an image into the ControlNet preprocessor.
Thank you for sharing that approach with everyone! Awesome tip!
My AIO Aux Preprocessor is not working; it says it's not in the folder. What should I be looking for in that folder, and if it's not there, where can I get the preprocessor?
First, double-check that the ControlNet Auxiliary Preprocessors folder is present in your ComfyUI directory (usually custom_nodes/comfyui_controlnet_aux).
If it's missing, you can download the necessary files by using the Manager.
Then make sure you update ComfyUI to the latest version.
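If you want to verify the folder from outside ComfyUI, here is a quick sketch (the folder name below is the usual one for the auxiliary preprocessors pack, but check your own install):

import os

# Typical location of the ControlNet Auxiliary Preprocessors custom node pack
path = os.path.join("ComfyUI", "custom_nodes", "comfyui_controlnet_aux")
print(("installed: " if os.path.isdir(path) else "missing: ") + path)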
It's because the names are too long; try to shorten the names in your path. I had the same issue: once the path hits the Windows path-length limit, it cannot find it.
@@anlar1998 Thank you for the extra information.
Does anyone know how to fix this problem?
Failed to restore node: Ultimate SD Upscale
Please remove and re-add it.
It seems there might be a mismatch in the workflow. Try deleting the node and adding it back from scratch. If that doesn’t work, just make sure you have the latest version of the node installed.
@@goshniiAI Yes, that's it, but I don't know which node to delete... How do I know which node to delete?
But can I use images generated from Flux Dev commercially??
yes
Thank you for your support.
@ralphmccloudvideo
How to add a simple LoRA?
Hello there, you can view my guide about adding a LoRA in my previous video for FLUX. ruclips.net/video/HuDU4DlZid8/видео.htmlsi=FzSSqoe6OV_56l55
Great video. I wonder what system specs you use to run this. I ran out of VRAM with a 20GB card using the GGUF flux-dev-Q5, so I guess I might be doing something wrong.
I've got an NVIDIA RTX 3060 card with 12GB. It's happened to me a few times. Just make sure to close all the apps that might be using your GPU. You could also try using an upscale of 2 instead of 4. And sometimes, saving the workflow and then restarting ComfyUI helps things run smoother.
So would you incorporate a photo of myself or another real person into this workflow to get realistic images?
Yes, you could do that by including the IP-Adapter node, but for now FLUX is inconsistent with the models available. Hopefully soon!
great thanks
You are welcome!
Nice thanks. But what about when we want to use the character in a generation?
Yes, you can, here is a follow-up video that explains the process. ruclips.net/video/OHl9J_Pga-E/видео.html
Bro, this video is great, but some nodes are missing... how should we fix this?
If you see missing nodes in your workflow, it means you have not yet installed the custom nodes. To install the missing nodes, go to Manager > Install Missing Nodes and then install the ones that appear.
That will help to find the missing nodes and fix them.
Perfect, but what if I want to use an image instead of a prompt input?
You can use a prompt, then use any input face you want. You can view the video guide here ruclips.net/video/kqBhMYeRPE0/видео.htmlsi=zYwDFfodmHWPfCmJ
Is there a way with this workflow to use an image of a person that would be part of the output character sheet?
Hello Steven, the answer is sadly no for this workflow. I have explained in the next tutorial how to achieve this with the IP Adapter, but it uses SDXL rather than FLUX due to the IP Adapter's consistency.
To obtain an accurate input image, I recommend creating a character sheet for your character concept and then training a LoRA using your images.
@@goshniiAI Oh ok, that works also. Doooo you happen to have a link to a LoRA training video :D
@@stevenls9781 Not just yet. For now, I do not have a video of Lora training with FLUX, but I am considering making one to share the process.
You can check out this reference video that might assist you ruclips.net/video/Uls_jXy9RuU/видео.htmlsi=EJoLucxVyOFFQKjB
It would be nice if we could upload a 3D file like a glb so the software has every angle of the model. It would make consistent characters a lot easier.
.glb would advance the creation of consistent characters. That might just be a possibility in the future!
This might be a dumb question but what do you do with a character sheet? You have a character in different poses, then what? Do you animate it? Do you use it for something else?
Not a dumb question at all! Character sheets are often used in animation, game development, and concept art to showcase a character in various poses or expressions, making it easier for artists or animators to reference and maintain consistency.
It's mostly a reference tool to visualize how the character moves and looks from different angles. If you're looking to bring these poses to life, you can definitely use them as a foundation for animation or even export them into 3D modeling software.
@@goshniiAI Cool! Maybe you could do a video on that? How to move from a character sheet to a 3D model :)
Is there an automated way in Comfy to split the character sheet into individual images to train LoRAs on the character?
Yes, you can get individual images by using the Image Crop node.
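If you'd rather batch it outside ComfyUI, a minimal Pillow sketch works too, assuming the sheet is a uniform grid of poses (the file name and grid size below are placeholders):

from PIL import Image

sheet = Image.open("character_sheet.png")  # placeholder file name
cols, rows = 3, 2                          # placeholder grid layout
tile_w, tile_h = sheet.width // cols, sheet.height // rows

for i in range(rows * cols):
    r, c = divmod(i, cols)
    box = (c * tile_w, r * tile_h, (c + 1) * tile_w, (r + 1) * tile_h)
    sheet.crop(box).save(f"pose_{i:02d}.png")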
Great tuts! Helped me install Flux1 seamlessly. However, I don't seem to have DWPreprocessor or ControlNet Apply in my drop-down lists? I get this message in Manager - 【ComfyUI's ControlNet Auxiliary Preprocessors】Conflicted Nodes (3)
AnimalPosePreprocessor [ComfyUI-tbox]
DWPreprocessor [ComfyUI-tbox]
DensePosePreprocessor [ComfyUI-tbox]
So I uninstalled ComfyUI-tbox and still no joy? Do you have any suggestions?
Hello, all you have to do is update ComfyUI to the most recent version and confirm that you have installed the ControlNet Auxiliary Preprocessors. This will make the nodes accessible.
Have no idea what I'm missing to get ControlNetApply SD3 and HunyuanDiT. It does not update and does not show in Manager... so can anyone shed light? New to SD and Comfy. Thanks.
The "Apply SD3" node has been renamed to "Apply Controlnet With VAE" in the latest updates. The process to find it remains the same, but the node has been renamed.
@@goshniiAI Thanks! And thank you for an excellent video
@@RxAIWithDrJen You are most welcome. Thank you for being here
I can't lie, this was the best consistent-character video for sure! Is this able to work with SD3.5?
Thank you for coming here, and I appreciate your feedback.
Yes, it is possible! Just keep in mind that SD3.5 might need the right controlnet models and slight adjustments to the ControlNet parameters to achieve the same consistency since it has a few differences in model handling.
If you can tweak those and add the right nodes, you should be able to get great, consistent characters!
@goshniiAI Well, since I'm super new to ComfyUI, I guess I'll just wait for someone to make a video about it. By the way, great video! I would use Flux, but my issue is that I heard Flux has very strict commercial use rules.
Can I use this for SDXL?
Yes, you can; just make sure to use the correct SDXL models for the ControlNet and Checkpoint loaders, plus other SDXL-compatible nodes.
Wow nice
Thank you!
Sir! Which GPU are you using? And please suggest a cloud GPU service site!
I'm using an NVIDIA RTX 3060 for my workflow. For cloud GPU services, I recommend trying out RunPod or Vast.ai; both offer flexible pricing and options for FLUX and ControlNet if your local hardware isn't enough.
Fantastic walk through, thank you so much for making this valuable knowledge available for all to learn. Top G
You are awesome for sharing your thoughts. You are most welcome.
Your UI is very nice; I still have the old look. How do I update to get your UI?
Please see my video here; towards the end, I explained the settings: ruclips.net/video/PPPQ1SANScM/видео.htmlsi=uMK8VUuxhCxyIerW
So it appears the Apply SD3 node has been renamed to Apply with VAE?
It is still SD3, as I checked.
@@goshniiAI still can't get it to come up on mine, but "apply" and "apply with vae" are the exact same nodes it looks like. At least, I can't see a difference
Thank you for pointing that out; it looks like the "Apply SD3" node has been renamed to "Apply ControlNet with VAE" in the latest updates.
@@fungus98 Yeah, you are right, and thank you for sharing your observation
Wow, I really enjoyed this vid.
I am an absolute beginner.
I am confused. In the video you have your character in many poses and improved the details.
How would you take just one of those poses from the character (say Octopus chef) and put it in a new environment?
Do you have a video on that?
I'm really glad you enjoyed the video! It's awesome that even as a beginner, you're already asking great questions. If you want to take one of those poses, like our "Octopus Chef," and put it into a new environment, you can easily combine FLUX and ControlNet to lock in the pose while changing the background.
I haven't made a specific video on that yet, but it's a good idea for a future tutorial, and I'll definitely create a detailed walkthrough soon.
How do you get that new interface?? I updated everything and I still have the old interface.
Nevermind, I found it
Awesome! I'm glad you found it.
@@goshniiAI By the way, Amazing video, Thank you
@@Huguillon I appreciate it. You are welcome.
Why not share the JSON for Comfy? I went to Gumroad and downloaded your files but was surprised there is no JSON, just an image of your setup!?
You sure the image didn't have the Comfy workflow stored in it? Did you try dropping it into ComfyUI?
Yes, you are right; the PNG image still works the same as a JSON file. You only have to import it or drag and drop it into ComfyUI.
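If you're curious, you can even inspect the embedded workflow yourself; ComfyUI stores it as PNG text metadata, readable with a short Pillow sketch (the file name is a placeholder):

from PIL import Image

img = Image.open("workflow.png")  # placeholder: any ComfyUI output PNG
# ComfyUI saves the graph under the "workflow" (and "prompt") text keys
print(img.text.get("workflow", "no workflow embedded")[:200])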
@@goshniiAI I saw that later... sorry, I thought Comfy only accepted JSON... thanks for your work!
@@tmlander you are most welcome, thank you for sharing an update.
I can't find the ControlNetApply SD3 and HunyuanDIT nodes. Where can I install them?
ControlNetApplySD3 is one of the core nodes in ComfyUI. Make sure Comfy is updated before expecting it to be available.
@@goshniiAI I can't find it either. Auxiliary Preprocessors is installed and "ComfyUI is already up to date with the latest version."
@@goshniiAI I already have comfy and packages up to date and still can't find it
@@bluemodize7718 It has changed. It's been renamed to "Apply Controlnet with VAE"
@@bluemodize7718 same here
but what about non-human characters?
Animals?
For animals, you'll need the ControlNet animal pose model, but I'm not sure it is currently available for Flux.
@@goshniiAI How can I customize the skeleton? I have a game character, like a Pokémon.
thank you!!!!
You're welcome!
So I have a question: rather than prompting everything in a single box, can we have a different workflow for each pose? For example, here is the sitting pose, the standing pose, the jumping pose workflow, and we generate them individually rather than generating them in one box.
Also, is there a way to make sure that the character you are prompting remains the same over time? For example, this octopus man you prompted: let's say I want to use him for a children's story book, and I don't want to prompt all the characters at once. I can prompt him sitting today, tomorrow he is standing, next week I want him eating, and this character remains the same all through, at different times?
Thank you
What he showed in the video is called a character sheet. You can then use this character sheet as a reference image to tell Flux what a character looks like and prompt any pose or action you want for this character specifically. What you should now research is how to use character sheets with Flux.
Thanks for explaining and providing the extra information
The face detailer on your example doesn't seem to understand these are all the same character poses and adds more variety to the faces, which is obviously not wanted.
Well observed. When working with different poses of the same character, the face detailer may introduce unwanted variations. Adjusting the denoising strength helps enforce consistency.
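For instance (illustrative values, assuming the Impact Pack FaceDetailer; tune for your checkpoint):

denoise = 0.30   # stays close to the original face, more consistency across poses
denoise = 0.55   # more aggressive refinement, but more drift between the faces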
Hi, it's stuck on the Load Upscale Model node. I believe I don't have the "4x-Ultrasharp.pth". How do I get that, please?
The Upscale models can be downloaded through the Manager, or you can watch the video link here to guide you ruclips.net/video/PPPQ1SANScM/видео.htmlsi=M-fMMvE6-kEzr5u8
Great content in your video! I really enjoyed it. One suggestion I have is to improve the echo in your voice using a tool called Audacity. It can help enhance the audio quality significantly. Feel free to contact me if you need any help with that. Keep up the good work!
Thanks a lot for the awesome suggestion and kind words! I am considering using Audacity; I've heard it's great, so I'll definitely give it a try. If I run into any issues, I might take you up on your offer to help! Thanks again for watching and giving me some really helpful input.