You are the state of the art! Thanks for sharing your brilliance!
Thank you so much. This is exactly what I need and nobody else has anything like what you’re doing. I’m new to ComfyUI but it seems like fun to me. I’ll be supporting your Patreon and joining the Discord community.
Thanks man , really appreciate it, we need more people like you.
Dude! God bless your curious mind and a generous heart!
I'm only three days into Automatic1111 and damn, your video is amazing!!! I wish I could do this soon ♥ Thank you so much!
i dont even remember liking this video, this is mind blowing.
Man, I wish I wasn't so intimidated by that UI. This looks really incredible.
Just go ahead and give it a shot. It's not complicated at all once you try it
Check out Olivio's Comfy Academy and you can go from 0 to competent.
Yeah, try it out. I've only made 2 workflows myself, and it is pretty hard, but the workflows the community shares help you understand more.
Just try it. Download some pre-made workflows and play with the settings. I won't even touch another webui now; none are as versatile, not even close.
Try it! This is my first day exploring and it's actually very intuitive once you start to understand how things work
Probably one of the best videos for consistent characters. Thank you so much for sharing this!
Outstanding! Thanks for sharing these workflows so generously!
I am astonished at how well this works! This was only my second workflow with Comfy and I'm blown away by how powerful it is! Downloading the models was a PITA, but totally worth it 😀
That tip about adding "Pixar character" to the face detailer prompt is pure gold! My characters already look 10 times better. Have you had any luck using this workflow with other character styles, like anime or realistic?
I am new to all of this AI stuff and blown away by the capability, but also by your generosity in sharing/teaching! Thank you. I am inspired.
This is the best and most useful video + workflow I have ever seen; it helped me improve my understanding of ComfyUI.
Incredible tech; it's scary how few real artists the industry will probably have in five years' time. Quality and interesting character design will decline over time, all in the interest of churning out consumption content as fast as possible. 10% will use this for good; the rest will be used to save money and time in an area where money, and more importantly time, should be spent. Awesome tech, horrifying corporate application.
This is an absolutely awesome tutorial! Thank you so much!
Came here for the tutorial - stayed for the voice
Wunderbar! ("Wonderful!")
I am sure many people who like node-type interfaces may think programming is complicated, but nodes are only useful if you know how to use them. I had ComfyUI installed for 5 minutes and just got it.
Wooow! You nailed it with this one, this is really powerful and useful. You have a new follower!
Teşekkürler. ("Thank you.")
The mustache joke got me *hard*, did not expect that at all
I'm commenting so I remember to watch this soon once I have more stamina charged up (just woke up) (been chronically sleepy and cranky for the past like 7 or 8 months 😢)
Why is that also me X) ?
A getting-started-with-AI video would be nice, because all of this is very confusing for a complete beginner, yet it seems like important information to know.
Wow! I really appreciate the time and effort that you've put into this. Thanks!
What if we don't want human poses? How are we supposed to get poses for other animals, or even aquatic animals?
This is amazing. Thanks for everything, brother.
What happened to your VRAM at 2:38?
bro, you a lifesaver!
@02:29 "I'm really happy that it's THIS type of moustache!" BRUH!💀🤣
Awesome tutorial and channel btw❤
Jesus Christ! This is awesome! 🎉
At 02:44 there are no photos in the upscaler. Only the pose sheet. Then at 02:50 there are photos in the upscaler. How do you move from the pose sheet to the upscale? Running it just repeats the pose sheet again and again.
some good stuff right here
Hello my friend, thank you for using my wildcardx-xl turbo in this amazing video. More power to your channel!
One of my absolute favorites! 🙏
I'd find it interesting to dig more into the LoRA side, explaining how to use the saved faces to train a LoRA.
Man, you are simply God! A year ago I tried to do the same thing in A1111, and it didn’t work. What you created is a masterpiece. Thank you!
Very interesting. Not only the workflow, but your quick explanations are great and valuable too. Thanks a lot for that!
Would love to have a step-by-step installation video. I tried to install it and I'm sure I did something wrong. I'm extremely glad you made this video; thanks for everything you've given.
Thank you! I am currently working on the next iteration of the workflow. When I make the video about it, I will go through the full process step by step!
@@mickmumpitz Could you please also show an example of how to generate characters in different styles? Anything I try to generate always comes out in a 3D Pixar/Disney style; I can never create something like a 2D cartoon character or a children's-book-illustration-style character. And can we generate a character using our own face with this?
@@mickmumpitz Hi, I need a helping hand, please. I would like to be able to change the location and size of the subject over the background. I found the compositing mask, but I can only move it to the right and down; I can't set the exact size and location. Do you have a solution?
Incredible! Thanks for your work~
3:28 I'm getting an error at the IPAdapter Unified Loader. I can't find the models for it or where to put them.
Hello, thanks to your videos I am learning a lot about AI, and my photo generation is now much better than on other platforms. Could you show how to make a storybook, but in a 2D style, for children from 3 to 8 years old?
I was thinking you could probably mix this with your Blender workflow and even have multiple characters in your scene. Excellent videos! Thanks.
I will do that soon!
I LOVE this workflow and your content.
I have some questions - could you help?
1. Character sheet - could you try it yourself and then suggest how I could load an image of a face and have your workflow create the character sheet?
2. Pose and background workflow
A. Instead of using the DWPose and hand depth images, how could we use another reference picture and have the workflow nodes figure out the DWPose and required depths? I tried to do it, but the render created a haunting monster mutant.
B. How can the model be automatically sized/scaled relative to the generated background image? Mine looks like a giant.
C. How can I stop the background from changing across the workflow?
I tried lowering the noise.
I would like to input an Image instead of the Positive and Negative prompts connected to the Apply ControlNet (Advanced) node. How can I achieve this?
The intention is to create an Image -> CharacterSheet, rather than a Text -> CharacterSheet.
I would like to use my favorite character, but I'm tired of having to create new characters all the time.
I am using the workflow effectively! Thank you.
any luck yet?
Did you manage it, friend?
@@luismanuell7 Well, you can use SD Prompt Reader to get information from an image (it describes the image and creates a prompt) and connect the information from it to the workflow prompt.
We really need different camera angles, though. The character consistency is great, but we need to be able to take this same premise and, say, take a picture or a rendering of a 3D model and get that angle on the consistent character.
Thank you very much! This is great!
What a fantastic creative workflow 😁
As an artist, I have to admit this is impressive, and if it helps me create stories, so be it. I'll still do art, but this will speed up the pipeline.
Thanks, great tutorial! I only wish Flux had this level of support and options. And Flux video too 😢
Hello, thank you for sharing your work. Is it possible to create a model sheet from a reference character image? If so, what node would you add? In short, how would you proceed?
I have the same question.
@@sethmunich4062 +1
Any news?
@@luismanuell7 no, sorry
Love this guide! keep it up mate :D
Great work, bro, that's what I want; I'm thinking of creating a model for a character.
Can this workflow be altered to begin with an existing image that you've created instead of text?
Ah, my exact question as well!
Any chance?
@@offmybach same question, bro has to read his yt comment section 🙄
@@Roman_R4 my bad
@@offmybach no, I've got the same question, I was refering to the uploader not you
This was an amazing tutorial. Thank you!
Great tutorial, this made me explore more of the comfyui. thank you!
Subscribed! I hope you live happily forever and ever after
Great video and finally a useful tutorial…
This workflow worked great for me! thx
How did you get the workflow he imported into ComfyUI?
@@iamfactsology it's Mickmumpitz_CharacterSheet_v01.json
@@iamfactsology You just import it via the image. You need to read the installation document. The faces section on mine showed a whole body on the center three and the heads on the side were displayed as hands for some reason.
@@iamfactsology You drag and drop the JSONs that he posted on his Patreon.
Genius work.
Just another quick note to say that the second workflow doesn't ship with the right prompt, which confused me: out came a Fräulein, as per the prompt. I know this would be clear to people who follow the video, but it could nonetheless be changed. Once again, incredible work, and very generous of you to share. Many, many thanks!
Thank you for the awesome workflow!
For the folks having the IPAdapter issue, I don't know if it's the right way, but I created an "ipadapter" folder like this:
ComfyUI_windows_portable\ComfyUI\models\ipadapter
And put the 8 files in there, and it worked :)
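In case anyone wants to script that fix, here's a minimal Python sketch. It assumes a portable-style install where the target folder is ComfyUI/models/ipadapter, as described above; the function name and folder arguments are mine, so adapt the paths to your setup:

```python
import shutil
from pathlib import Path

def move_ipadapter_models(downloads: Path, comfy_root: Path) -> list[str]:
    """Move downloaded IPAdapter weights into ComfyUI/models/ipadapter,
    creating the folder if it does not exist yet."""
    target = comfy_root / "models" / "ipadapter"
    target.mkdir(parents=True, exist_ok=True)
    moved = []
    # IPAdapter weights ship as .safetensors or .bin files
    for pattern in ("*.safetensors", "*.bin"):
        for f in downloads.glob(pattern):
            shutil.move(str(f), str(target / f.name))
            moved.append(f.name)
    return sorted(moved)
```

Then something like `move_ipadapter_models(Path.home() / "Downloads", Path("ComfyUI_windows_portable/ComfyUI"))` should drop all 8 files in place.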
Which 8 files?
@@HeyTikal The ones mentioned in the 4th step of the [FREE GUIDE] (....all of them...)
@@timjones9316 thanks 💪
Oho, this was the solution. Thank you.
This is exactly what i was missing, thank you
Thanks a lot, this is certainly helpful for me as I explore character design creation 😅❤
Hey thanks for your great content. Always enjoy your videos !
Can we add an image prompt instead of a text prompt to use a character we already have?
I know it's prolly easy, but I'm new to comfyui.
I just watched the image to 3d model video. I don't see why you couldn't generate multiple views using this method and then generate a more complete 3d model from the output. It would be more like photogrammetry.
I think that for use in photogrammetry the consistency would have to be perfect and that is precisely where AI fails the most.
Hey, I got these workflows installed in Comfy and everything looks great, but I have one little holdup. Where do I get the 4 image files that you're using in the 2nd workflow (the posable character flow) that are briefly shown @6:16? You have them as Face_upscale_00019_.png, Face_upscale_00053_(3).png, pose_2024_04_15_19_23_png, and depth_2024_04_15_19_23_30(11)png, but when you download the workflows they show up as FaceRefine_00087_(1).pgn, FaceRefine_00059_(3).png, Pose_2024_04_26_16_31_12.png, and depth_2024_04_26_16_31_12(1)png. I've searched for these filenames on the web and can't seem to find them on Git or Hugging Face. Not sure where to go from here :/
Can this apply to animals, like spiders, as well?
Is there a chance that you could create such a workflow with Flux?
Yes please, that would be awesome!!! thanks for your great work🤩
I just can't believe this is real life.
This is amazing! I keep getting an error that says the "ClipVision" model is not installed.
Oh boy this is an amazing workflow ! Thank you so much for sharing !
The "IPAdapter" node by your expressions is different from the "Load IPAdapter" node. I'm getting an error in my "Load IPAdapter" node, and there are no models in the dropdown list there, although I have all the IPAdapter models downloaded.
Same here. I have tried downloading the whole model again, but no luck. Still getting the same error.
@@juxxcreative did you solve that? same error
@@ernienosoul unfortunately no
@@juxxcreative @sam5519 For people facing "IPAdapter model not found." while using ComfyUI from StabilityMatrix: ComfyUI Manager downloads the models to StabilityMatrix\Packages\ComfyUI\models\ipadapter, but the workflow tries to load them from StabilityMatrix\Models\IpAdapter. What you need to do is copy all the models to StabilityMatrix\Models\IpAdapter.
it worked for me :)
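A small Python sketch of that copy step, for anyone who prefers a script. The two StabilityMatrix folder names are taken from the comment above and may differ between versions, so treat them as an assumption:

```python
import shutil
from pathlib import Path

def sync_stabilitymatrix_ipadapters(sm_root: Path) -> int:
    """Copy IPAdapter models from the ComfyUI package folder (where
    ComfyUI Manager saves them) into the shared StabilityMatrix models
    folder the workflow actually reads from. Returns the file count."""
    src = sm_root / "Packages" / "ComfyUI" / "models" / "ipadapter"
    dst = sm_root / "Models" / "IpAdapter"
    dst.mkdir(parents=True, exist_ok=True)
    copied = 0
    for f in src.iterdir():
        if f.is_file():
            shutil.copy2(f, dst / f.name)  # copy, so the original stays put
            copied += 1
    return copied
```

Copying (rather than moving) keeps ComfyUI Manager's own folder intact, so future updates through the Manager still land where it expects.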
I installed everything but this is what i get:
When loading the graph, the following node types were not found:
FromBasicPipe_v2
ToBasicPipe
FaceDetailer
IPAdapterUnifiedLoader
PrepImageForClipVision
UltralyticsDetectorProvider
IPAdapter
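One way to see which node types a workflow needs, before hunting down custom node packs, is to read them straight out of the .json file. This is just a sketch based on the two common ComfyUI export layouts (UI exports keep a "nodes" list with a "type" field; API exports map node IDs to objects with a "class_type"); the function name and demo data are mine:

```python
import json

def workflow_node_types(workflow_text: str) -> set[str]:
    """Return every node type a ComfyUI workflow JSON references."""
    data = json.loads(workflow_text)
    if isinstance(data, dict) and "nodes" in data:
        # UI-format export: {"nodes": [{"type": "..."}, ...], ...}
        return {node["type"] for node in data["nodes"]}
    # API-format export: {"1": {"class_type": "...", ...}, ...}
    return {
        node["class_type"]
        for node in data.values()
        if isinstance(node, dict) and "class_type" in node
    }

demo = '{"nodes": [{"id": 1, "type": "FaceDetailer"}, {"id": 2, "type": "IPAdapter"}]}'
print(sorted(workflow_node_types(demo)))  # → ['FaceDetailer', 'IPAdapter']
```

Any type this prints that ComfyUI Manager's "Install Missing Custom Nodes" doesn't recognize is a pack you still need to install by hand.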
ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\models no longer works, since the updated version of IPAdapter_plus labels that models folder "legacy_directory_do_not_use". So where am I supposed to put the files from step 4 now?
If you haven't resolved this issue, here is the right way, I believe.
This is the path you should store your models in:
ComfyUI_windows_portable\ComfyUI\models\IPAdapter\
Let me know if you have any questions, or if this works.
@@nextusp Hi, I am getting an error: "IPAdapterUnifiedLoader: ClipVision model not found". What should I do? I don't know what "download and rename" means in step 5 of his instructions anyway.
In the emotions tab, for the third image, it said "IPA model not found", but I did everything as in the instructions. Can you help me?
amazing stuff, thanks!
Please write a book or make a Udemy tutorial; I will join. You know what you do, and you do it perfectly. Love the work. Thank you!
Excellent video, congrats and ty
The ultimate goal
you're so frikkin' awesome.
"ClipVision model not found" error. What am I missing?
same here
same
I had this error at first too. The guide says to 'rename' when downloading, but not to what. So at first I just named them clip vision 1.safetensors and clip vision 2.safetensors.
When I renamed them to the exact names from the links, CLIP-ViT-H-14-laion2B-s32B-b79K for the first one and CLIP-ViT-bigG-14-laion2B-39B-b160k for the second, the problem went away.
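Here's a tiny Python sketch of that rename. The mapping keys are the placeholder names from this comment and are hypothetical; change them to whatever your downloads are actually called. The target names are the ones the loader reportedly expects:

```python
from pathlib import Path

# Left side: hypothetical names you saved the downloads under.
# Right side: the exact names the IPAdapter unified loader looks for.
RENAMES = {
    "clip vision 1.safetensors": "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors",
    "clip vision 2.safetensors": "CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors",
}

def fix_clip_vision_names(clip_vision_dir: Path) -> list[str]:
    """Rename CLIP Vision weights to the names the loader expects."""
    renamed = []
    for old, new in RENAMES.items():
        src = clip_vision_dir / old
        if src.exists():
            src.rename(clip_vision_dir / new)
            renamed.append(new)
    return renamed
```

Point it at your ComfyUI clip_vision models folder and it only touches files that actually exist under the old names.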
Thank you!!! But what does "Now select a workflow and drag & drop it into your ComfyUI interface" mean? What is a "workflow", and where do I get it?
The workflow is the .json file. It is the link in the very first paragraph of the user guide.
Is it possible to set this up so a second or more characters could be used together?
Thank you very much! This is great! I wish you had a tutorial on training a LoRA from this.
I get "Cannot execute because a node is missing the class_type property: Node ID '#14'". How can I fix it?
Mine freezes on "Load Upscale Model" with a red indicator around that node. Does anyone know how to fix it? This is my first dive into ComfyUI.
Sounds like you need to get a model for upscaling. It usually has at least one pre-installed, so check by clicking on the model in that node to see if any are listed. It's basically a checkpoint for upscaling, kind of like SDXL or Juggernaut are checkpoints for image generation. Every action in Comfy needs a checkpoint or model loaded to be able to perform that action...
Which model and LoRA have you used for this orange jacket guy?
I love your content!!! How can you be so smart when creating these things? BTW, are you German? I thought so because of "Mumpitz".
Definitely gonna try this! Thanks! 🧀
Very nice tutorial! But is consistent character creation and posing possible in Forge with Flux? Thanks.
Hi, where can I get the IPAdapterUnifiedLoader node, and also the IPAdapter node (also missing in Manager)? Thanks in advance :)
Did you find the solution?
@@masihafattahi5006 Nope, you can see it is missing in the video =(
Any idea how to do this with Flux? SDXL seems not to get all the faces to the left, but the Flux ControlNet OpenPose is just bizarre.
Can I use a real picture as reference?
So we can create these sheets and then use an image2image faceswap, right? I'm looking for faceswap / same clothes / different poses workflows ^^ I need consistent characters, for example for a comic book.
Always doing something unique, with helpful content.
I got everything, but I don't understand where I can find your workflow (template). Please help!
Is there a way to integrate InstantID or an IP-Adapter at the start if I already have a reference image of the character?
btw thanks for sharing this amazing workflow♥
Can you tell me what to rename the CLIP Vision files to? I am stuck.
Absolutely fascinating video.
So let's say I generate 2 keyframes, one where Hans grabs the cheese, and a second where he is holding the cheese over his head triumphantly.
Is there a way within comfy to generate the 12 or so frames inbetween, creating a custom animation out of our custom poses?
This is perfect, but I want to use my own character that I draw. I want to create reference sheets like this without prompting the character, because I already have the character I want to use. How can I do that?
very nice
Wow, this is just what I was looking for, but there is a catch: can I use a reference picture to kick things off? Like MJ (or other tools)? I have the characters I want as faces/poses but need more of them, so is there a way to front-load this onto the start of your quite brilliant workflow? Having comped stuff using SD myself, I think the section you have here is really clever :)
Is it possible to insert a reference photo for the face? Thanks!