📢 Last chance to get 40% OFF my AI Digital Model for Beginners COURSE: aiconomist.gumroad.com/l/ai-model-course
the prompt from the beginning of the video:
realistic photograph, closeup, beautiful woman, smiling, blond, symmetrical features, closed mouth, clear skin, wearing a tank top, light makeup, elegant, refined, delicate features, soft lighting, natural look, brown eyes, chiseled cheekbones, slight smile, glossy lips, natural hair color, background blurred
4k, highly detailed, high-quality, masterpiece
Great video... Personally, I get 80% of the way there and finish in Photoshop... I've been retouching for over 20 years, and I don't see AI as a threat; I think it's a fantastic tool.
Yes, it enhances your workflow; it's like creativity on steroids. Today I did a composite that would normally take me 2 days, and I cut it down to 3 hours! It was a collaboration between PS and AI, and the results were fantastic!
Thank you very much for showing at least part of this whole process. It was very difficult for me to build a workflow similar to yours, but I did it.
Thanks for this great tutorial. The explanation is to the point, way better than many others on YouTube.
Hi, I have a problem with "IPAdapterUnifiedLoader" ClipVision model not found. What could be the reason for this error?
thank you so much
My IPAdapter Unified Loader is not working. Even with all models in models/ipadapter, it still shows "IPAdapter model not found".
This is sick 🔥🔥 Can you also add jewelry? Or even better, add specific jewelry? (as if you were creating a photoshoot with an AI model for a jewelry brand with existing products?)
This video was generated with the prompt: Best quality, (masterpiece: 1.2)
IDM-VTON isn't working at all; it failed when I tried to download it.
Can I place my product in her hand realistically?
I’m very interested in enrolling in your course but wanted to ask for your advice on a specific challenge. My business focuses on selling Christian Louboutin shoes for women, and it’s crucial for me to create AI models that showcase these products with consistency and precision. One of my main concerns is achieving detailed and consistent results, particularly for hands and feet, as these are often areas where AI models struggle but are vital for my use case.
I noticed you mentioned tools like ControlNet and IP Adapter in your tutorial. Would these tools, along with the techniques in your course, be effective in addressing this challenge? Most importantly, do you think your course would fit my specific needs for generating consistent, high-quality models where details like hands and feet are a priority?
Thank you for your time, and I look forward to learning from your expertise!
Great video, I've been watching it for the last 2 days now. In the hand part I'm facing this issue:
MeshGraphormer-DepthMapPreprocessor
shape '[1, 9]' is invalid for input of size 0
I couldn't find a solution for it online. Can anyone help? I followed each step exactly, same resolutions and everything.
I'm not able to find the realvisxlV40_v40Bakedvae model in the ComfyUI model list.
How do I generate multiple images of the same AI person for reference?
Got an error: IPAdapterUnifiedLoader
ClipVision model not found. How do I solve it, please?
It means you didn't load a CLIP Vision model, or worse, you don't have a CLIP Vision model installed.
@@ImmortalShiro thank you so much bro , I will try again
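For anyone else hitting the "ClipVision model not found" error: ComfyUI generally scans its models/clip_vision folder for the CLIP Vision weights that IPAdapterUnifiedLoader needs. Here's a small hypothetical helper (not part of ComfyUI itself) that sketches that check:

```python
import os

def find_clip_vision_models(comfyui_root):
    """List CLIP Vision weight files ComfyUI could load.

    ComfyUI typically scans <root>/models/clip_vision for
    .safetensors/.bin files; IPAdapterUnifiedLoader errors out
    when nothing usable is found there.
    """
    clip_dir = os.path.join(comfyui_root, "models", "clip_vision")
    if not os.path.isdir(clip_dir):
        return []
    return sorted(
        f for f in os.listdir(clip_dir)
        if f.endswith((".safetensors", ".bin"))
    )
```

If a check like this comes back empty, downloading a CLIP Vision model through the Manager is usually the fix.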
What about this one:
Directory 'X:\Dataset\Faces' cannot be found.
In the dataset, we can put our own photos to get professional-looking photos, right?
Where is the download link
Question: how come I get 4 different women when I followed your step 1? I don't get a unique face, and I copied all of your detailed settings.
Does this method also keep the face consistent across different posing angles?
It should work if you have good reference images (I'll go deeper into face consistency and positioning in upcoming videos).
What depth model do we use if using SD1.5?
What can I use as an alternative to IPAdapter that works with Flux?
For some reason the face swapping is much lower quality and it doesn't blend as well as yours. Any idea what the issue could be?
Same issue
nice video man, relaxing and deep, nice music very good!
Thank you, great tutorial. What would be the best way to automate the face-masking part and connect it to the image after the pose flow?
Will this work with flux dev? :)
this would be very useful for e-commerce stores.
Definitely!
But the models used are not allowed for commercial use.
@@baryla888 name one for me
@@baryla888 according to who?
@@baryla888 and how are they going to know?
great video, thanks!
Error occurred when executing IPAdapterUnifiedLoader:
ClipVision model not found. I'm getting this error; please tell me what to do.
same
same, all updates and installs not resolving.
You have to go into the Manager and search for it manually under Model Manager.
Can I buy this workflow and run it on platforms like RunDiffusion, or on nodes on RunPod or, for example, Paperspace? I don't have that kind of powerful GPU myself.
I want to buy, but please update it first.
Any tools for shoe modeling?
Thank you very much, Aiconomist, for the tutorial on magic clothing; it's very useful to me, but I still have a question. Does this model not work for replacing pants? If it can, could you tell me how to do it? Thank you so much.
Thank you for a very good video. I'm having issues with both Face Analysis and InstantID failing to import. I've searched and tried to fix for a couple hours now. Anybody able to assist?
same issue, have you found the fix?
@@creative8665 Unfortunately, no. I gave up on it for now.
Hello, is it possible to use this tool for jewelry?
She has two hands. How do you fix both at once so they match?
Great tutorial! But I have some questions:
The "ImageCompositeMasked" node was used to enhance facial details.
The "mask" input is upscaled to 768 resolution, while the "x, y, width, height" attributes are raw image coordinates obtained from the "Face Bounding Box".
Why does this still work correctly?
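Not the author, but a plausible answer: detailer-style workflows usually resize the enhanced 768px crop back to the bounding box's original width and height before pasting it at (x, y), so the raw Face Bounding Box values still line up. A rough sketch of that coordinate mapping (hypothetical helper, not the actual node code):

```python
def detail_to_base(px, py, bbox, detail_res=768):
    """Map a pixel in the upscaled detail crop back to base-image
    coordinates, given the original face bounding box (x, y, w, h).

    The detail pass runs at detail_res x detail_res, but the result is
    scaled back to (w, h) before compositing, so bbox coords stay valid.
    """
    x, y, w, h = bbox
    return x + px * w / detail_res, y + py * h / detail_res
```

The crop's corners map exactly back onto the bbox corners, which is why the composite stays aligned even though the detail pass ran at a higher resolution.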
Would be great if you showed the result of generating the character in different poses and styles... but I really like the clothing consistency; it's very important for storytelling, like the comic book story I'm trying to create with AI.
That sounds really interesting, would love to see your work as you progress.
is the workflow from gumroad same as video?
Yes, it's the same, but I hope he's making a guide on how to use the workflow; this video tutorial isn't enough for me.
I followed your settings exactly, so how come I get 4 different people? I can't get a unique face across the 4 figures.
Try setting the seed to "fixed".
I can't fix the import failure in ComfyUI-Inference-Core-Nodes. Does anyone know a workaround or a fix?
Ignore me, I'm a newbie. But have you tried Update All from the manager in case it's an install issue somewhere? 🤷🤞
Try installing ComfyUI using Pinokio; it will solve node conflicts.
How do I install the "Inspire Pack" in ComfyUI???
I have the same problem
I use Stable Diffusion, but ComfyUI is too hard for me. I clicked this video because it appeared in my recommendations. You made a good video, but the node system is very difficult to understand.
I'm really interested in running this type of AI model locally on my own system. To do that, I would need to purchase a dedicated graphics card. Could you please suggest which one would be the better option for my needs: the NVIDIA RTX 3060, RTX 3060 Ti, or the AMD RX 6700 XT? I'd appreciate your recommendation. Thank you in advance for your help.
4060ti 16gb
Between the cards you listed, the RTX series would be the better option, because the AMD card lacks the AI optimization NVIDIA is known for. That being said, the other reply advising the 4060 is also a good recommendation; the improved core performance and higher VRAM would help with your image generation workflow. Do you have a specific budget, or a workstation/PC spec, so we can advise you better?
@@raininheart9967 I've chosen it
I can't tell what the price of your new course is. I'm interested.
Thank you for your interest! The course is still in progress, and I'll be contacting everyone who subscribed soon. Thanks for your patience and support!
Import failed when trying to install IPAdapter Plus... any thoughts?
same
Great video, and I learned a lot. Thanks. Just wondering if there is a way to adjust facial expressions whilst keeping the model's face consistent?
It's also here on YouTube; search for something about emotion/face ControlNet.
That's been a difficult problem for me and also a disappointment while learning to use IPAdapter. I've had a moderate amount of success by generating an expressive face that is not my character with an expression LoRA, using attention masking with IPAdapter to focus on the expression, and then face-swapping in my character using ReActor. While you do lose a lot of expressiveness after the face swap, it does help prevent the permanent RBF syndrome you typically get!
@@swipesomething Great reply. Thanks!
Play with the start and end steps in the IPAdapter node to give the model a chance to create the expression. Or create around 20 or so images with the IPAdapter and train a LoRA for your character. I sometimes use some of those realistic Pony models and they don't work with IPAdapter, so I just created images with IPAdapter + RealVis and trained a LoRA for Pony. It worked well, honestly; I had no problems letting the character wink, smile, stick their tongue out, etc. You would need to get used to training LoRAs, though. Also make sure to fix your images as well as you can: if you put trash into LoRA training, you will get trash out as well.
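To make the start/end idea concrete: the IPAdapter node only applies its image conditioning inside a fraction window of the sampling schedule. A tiny illustrative sketch (conceptual only, not ComfyUI's actual implementation):

```python
def ipadapter_active(step, total_steps, start_at=0.0, end_at=1.0):
    """True if the adapter's influence applies at this sampling step.

    start_at/end_at are schedule fractions like the node's Start/End
    fields; raising start_at leaves the earliest steps adapter-free so
    the prompt can establish the expression before the face is locked in.
    """
    frac = step / total_steps
    return start_at <= frac < end_at
```

With start_at=0.2 on a 20-step run, the first four steps are free to follow the prompt's "smiling" or "winking" before the reference face takes over.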
@Aiconomist
It's outstanding how you can take something very complex and break it down into simpler terms to make it reasonably understandable.
Fantastic job! I hope to try this in the future; I have some things lined up to implement first. But when I need it, I've got the "Secret Sauce"
in the bag. God bless you, sir. May the love of Jesus Christ always stir your heart back to him. "+"
Glad it was helpful!
First, how do you make sure that the four images generated in the first step are the same person?
By generating the batch of closeup images using the same seed number. Also, you can use IPAdapter Plus Face to get features from a real person, but I can't show that on YouTube.
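If you drive ComfyUI through its API-format JSON, pinning every sampler to one seed can be scripted. A small sketch, assuming the workflow uses KSampler-style nodes and the API layout of node_id -> class_type/inputs (everything else here is hypothetical):

```python
def pin_seeds(workflow, seed):
    """Set every KSampler-style node in a ComfyUI API-format workflow
    (a dict of node_id -> {"class_type": ..., "inputs": {...}}) to one
    fixed seed, so a batch of closeups shares the same identity."""
    for node in workflow.values():
        if node.get("class_type", "").startswith("KSampler"):
            node["inputs"]["seed"] = seed
    return workflow
```

The same effect is what the "fixed" seed setting on the node gives you in the UI; the script form just makes it repeatable across saved workflows.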
great video!
@Aiconomist Hi, this is great, but can you create a Kaggle workflow? That would be a lot easier, please.
LoadImage:
- Custom validation failed for node: image - Invalid image file: outfit 4 (1).jpg
LoadImage:
- Custom validation failed for node: image - Invalid image file: ComfyUI_temp_mecbj_00001_.png
LoadImage:
- Custom validation failed for node: image - Invalid image file: image (13) (4).png
How do I upscale after the dress-up step? Please cover clothes texture upscaling, sir ❤
great intro :)
You'll find all the info in the description box if you PAY FOR A SUBSCRIPTION. I'm all for you guys making money, but being deceptive about it means I don't trust you.
Ok this one is much better
This is a super roundabout way to do this lol
The ComfyUI background really kills YouTube's compression :D
"Photopea" is actually pronounced as "photo-pee" like the tiny green vegetable. If you look at the Photopea logo, it's actually a curling vine from a pea plant. This information has been confirmed by the developer himself.
Ressources only with membership - yeah...
brilliant
what ai voice you use?
Thanks my dude!!!
Can I run this on an RTX 3050 with 6 GB VRAM?
I am using it on a 3050 Ti... yeah, it's slow but working.
6 easy steps???
It is a lightning checkpoint
86 easy steps
69 easy steps
106 😢
It's the easiest way you can do this, with ComfyUI: detailed and understandable. If you want something more automated... go buy a subscription to another AI... and you surely won't get these results anyway 😂
Will this work on cats?
It's very neat what you are doing, and yes, I also have workflows I've worked on for hours, but it's kind of missing the I in AI.
stable diffusion? or flux????
Both
One of the best tutorials. The problem with SD is that all the faces remain lifeless: no facial expression, looking straight at the camera, no smile, no muscle activation, no wrinkles, etc. Midjourney does that well.
not true, skill/prompting issue.
@@gingercholo I disagree. Test both; you will never achieve the realism of Midjourney in Stable Diffusion. I talked only about the face, but it's the same for body position.
I did hours of testing, I can assure you: you will never achieve this level of realism in SD.
If you disagree, show me a link to a realistic image someone made. Even the best in SD can't achieve facial expressions as realistic as Midjourney's; I've personally never seen it.
Do I need a GPU for this?
It's possible to run ComfyUI with cloud processing, but you need to find somewhere to purchase processing power. ComfyUI is just the interface, and it's typically designed to run on your local hardware. A more powerful graphics card would definitely make generations run faster, especially with the modern core design in the 30xx and 40xx series of cards.
Too bad you didn't include a free JSON file at the end of the video; you'll have low completion rates. And when I tried to pay, PayPal wasn't offered as a payment method.
Epic!
could you create a new video for creating your own photos for professional photos or social media using PuLID, InstanID or IPAdapters?
At node 324 my mental elevator stopped and I fell asleep. 😢 Not because the content isn't exciting... but, ya know, that music combined with this weird AI voice makes me go meeeeeeeeaaaaaaouwwwwww 🥱
Add a payment option for people who can't use PayPal, like crypto!
Everything was fine until I saw it was paid. No, thanks.
🤯🤯🤯🤯🤯🤯🤯🤯🤯🤯🤯🤯
Damn, I was too focused on the older video tutorial.
Nice tutorial, but it doesn't cover things like making the girl look less like a game character from the 2010s, or the blurry background, which is the first sign that a photo isn't real but an AI-generated image.
Can you remove the piano track...
Complex and weird; just get a model and take a picture.
So I can see this being used by Amazon, Levi's, Nordstrom, and wherever, BUT: it's a bunch of guys making this. Choosing the ideal model and fitting and background. Coding this. Until a woman is doing this for women's clothing, the BIGGEST revenue stream, this is just wasted genius.
It'll work great for furniture, though.
My point is NOT that a man made this. But the scalable selling will happen when women, like me, can adopt it.
Maybe show that, so people not as immersed as me can see it better.
Hello, I bought your workflow, but I have a very quick question. The work is amazing, btw! Where can I contact you? :)
Hi friend! Thank you for your support, you can find my email in the video description
@@Aiconomist great video! I've also emailed you a quick question with an error on the KSampler. I'd love your help to get past it. Thank you in advance!