Watch TV? Nah. Play games? Nah. Tweak and generate with IPAdapter deep into the night under the guidance of master Matteo: YES!!
Love this quote: "This is not magic and it's definitely not going to change everything. It's just a very powerful tool at your disposal. If you understand how it works, you'll be able to get great images out of it, but don't think that you can send whatever reference and have perfect results with no effort."
however, ipadapter is so magical for me ❤
7:48 for timestamp
I know you’ve lamented people leaving your videos before the end, but this one leaving is just because I wanted to get amped for when I actually have time to watch the whole thing sometime tomorrow. Love the 15-20 min video format, you’re still the GOAT.
eheh don't worry I was just kidding. the videos are here, people can watch them for how long or how short they want :D
the best thing about Matteo is he always starts with a fresh ComfyUI default setup, not some overwhelming pre-made spaghetti of nodes. It makes it easy to understand the process and follow along. Thanks Matteo!!
I love your videos, the amount of useful information you give (although it makes me dizzy to see the nodes), the tranquility of your voice and the charisma you exude.
Thank you very much for the workflows
aaaw thanks
Wow! I've been messing with it all day and you upload a new masterpiece! 😍 I was even learning to draw all day yesterday. This is a great use for it!
You, sir, are an excellent teacher... So easy to understand, step by step... Please do this most of the time... The difficulty levels are so helpful for a noob like me.
I just wanted to say that this is the most helpful and clearest tutorial on style transfer and ipadapter. Thank you very much!
Amazing. Just... amazing.
Thinking about myself now: I spend a lot of time watching videos and trying to mimic those techniques... I wish someday I can reach that kind of mastery.
Amazing. Just amazing.
You are a digital wizard, Matteo. You explained it so well, so simply, and so satisfyingly at the same time, as if your brain is made of pixels. I did a ton of tests to get to 30% of these conclusions (but yeah, I don’t have a solid knowledge foundation about the architecture of Comfy because I didn’t get the time to do it properly). Still, I watched a lot of videos, and none of them were as explanatory as this one. So yeah, congrats, you are an excellent explainer with very optimised logic! I’m looking forward to learning from you. I hope the community is grateful and supports you so that you can keep on going with this. Thank you and keep up the magic! ❤❤❤
God bless you, dear Matteo. You are such a precious mind. Thankful for the time you shared with us, best regards.
Thank you for all the hard work
Thank you so much for the detailed breakdowns of how IPadapter works. We are looking forward to new videos!
You're... simply the best
Better than all the rest?!
@@latentvision Let's just say that your explanations lift the veil on the magical side of generation, and even if we understand that we'll still have to experiment a bit at random, we get the feeling of having more control. The other YouTube channels don't go into as much detail, so you can apply their precepts to give it a try, but since it doesn't seem to be based on anything, you might be tempted to give up as soon as you've had a few failures.
you are a wizard and your generosity is inspiring!
being inspiring is the greatest recognition I can ask for... thanks
Great video and thank you for providing the demo/practice workflows. They are the most useful for me and I learn so much from them. Usually, I do not watch videos that do not include their workflows. :)
Thanks so much, just came back to Comfy and IP Adapters! This is amazing, thanks for taking the time! 😊
I have been using your embeds node to try and go the other way, from a photo to a hatched pen drawing... much harder but I got quite close. Being able to save and load embeds is a great touch.
Thank you Matteo. I will watch this video over and over again to make sure I get it all!
PS: "you are now the master of style transfer..."! 😅😅😅
Best video I've seen so far. Insta like at 4 seconds of playing it.
Amazing, thank you!!
Brother, thank you for your videos. They are particularly useful: with them I went from knowing nothing to thinking clearly, and it only took a little time. Thank you very much for your efforts.
This is such a nice tutorial. Thank you for walking through IPA+Controlnet possibilities.
Thanks!
thank you! cheerio
Subscribed. I love these projects built from scratch instead of downloading a template and spending the weekend on debugging.
Maestro Latente delivers another masterclass and entertaining creation!! May you live forever!!!
I'm just starting. You are very helpful! Big thanks from Poland. Wish you all the best!
thanks man, you too!
Sir, you are my lord. Simple, usable, and even adaptable to the work I want to do. You are the true engineer, my lord.
lol thanks but I'm no lord.
What you talk about flows smoothly, and I gain a lot from it. Thanks.
Thank you for this amazing tutorial. I love to see my own drawings and styles come to life, and how quickly new things are created 🙂
oh is it yours? please tell me more so I can give you proper credit
@@latentvision No worries, these are not my drawings 😅 Sorry for the confusion. I meant my drawings at home, which I'm going to use 😉
Wow you make it look so effortless and I swear this is pretty much MagnificAI haha. Great work!
Simply the best. I hope your channel will soar soon. We've had enough of the AI image generation fake tutors. This is a discipline and it needs a sound teaching method. And thank you for the freebies for the unemployed; not everyone can afford a subscription. God bless you Matteo.
you are most welcome! Have fun! and thanks
Thx for the work, you're awesome
I always enjoy watching your videos. You are the master!
the way of teaching is very simple and effective. easy to understand 😍😍
thank you for explaining how to use the negative image input. i added different images and was never sure what to put there.
You make your work available for everyone! Thank you! You have a good ❤
Thanks!
This is a fantastic video that seems to teach legendary magic. Thank you always.
Thank you for another great tutorial!
The models and many modules are mostly black boxes for the community, and any insight into their internal workings is very helpful. Clues such as "SDXL prefers CN strength and end_percent lower than SD1.5" or "bleeding of undesired elements can be counterbalanced with a noisy negative image" are invaluable. Any insight into the behavior of the Unet, CLIP, VAE, or latents saves us hours of trial and error.
Is it possible to control the scale of model application better than with the regular img2img denoise? Namely, is it possible to force a model to preserve large-scale structures and change only the textures, or vice versa? IPAdapter appears to be working along these lines already, but a separate feature-scale control would be of additional help. Any insight into how various types of noise affect the diffusion would also be great. Looking forward to more of your videos.
What you're looking for is likely the start and end step settings of KSampler (Advanced). Pull up one of the refiner example workflows for some inspiration on how to do this in a non-refiner based fashion. The key concept here is keeping and reusing noise but sampling it differently towards the end. Along with that consider creative use of masks and differential diffusion - since the entire point of DD is using the true power of masks for variable denoising (masks are no longer binary).
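For anyone who wants to see the same split-sampling idea in code rather than nodes, here is a minimal sketch using the diffusers SDXL base+refiner pattern, where denoising_end / denoising_start play the role of the end/start steps in KSampler (Advanced). The model IDs and the 80/20 split are just example choices, not anything from the video:

```python
# Rough sketch of split sampling (not the ComfyUI nodes themselves): the first
# pass shapes the large structures, the second pass finishes the leftover noise.
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
second = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
).to("cuda")

prompt = "an ice tiger, intricate frost details"
steps, split = 40, 0.8  # hand over the last 20% of the schedule

# Pass 1: sample only up to 80% of the steps and return the still-noisy latents.
latents = base(
    prompt, num_inference_steps=steps, denoising_end=split, output_type="latent"
).images

# Pass 2: resume denoising from the same point with the second model.
image = second(
    prompt, image=latents, num_inference_steps=steps, denoising_start=split
).images[0]
image.save("split_sampling.png")
```

In ComfyUI the equivalent knobs are start_at_step / end_at_step together with add_noise and return_with_leftover_noise on KSampler (Advanced).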
Mateo, you are amazing. Thank you so much!
thank you
Very good lecture video, easy to understand
You made it
I watch and learn well.
I will always support you
Incredible work as usual. Love it!!!
"This is not magic."
But it sure helluva feels like it, boss!
I can see this being very useful in some kinds of work... very impressive
IPAdapter is real magic
If I had money, I would be throwing it at you, but sadly I'm broke. Great Video!!!
i really love your content, very informative thanks!!
Excellent tools, thanks for sharing
thanks again for sharing.
Amazing vid. Thank you.
great video as always!💪
Thank you kind sir!
Thank you, Matteo, your videos are always helpful. One question: what is the use of "prep image for clipvision"? Just to make the output image sharper?
it tries to use the best scaling algorithm possible to keep as much detail as possible. On top of that you can add sharpening
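If it helps to picture what that means, here is a rough Pillow sketch of the idea (not the node's actual code): CLIP vision encoders take a small square input, roughly 224×224, so the reference is cropped/scaled with a high-quality filter and optionally sharpened. The filenames and sharpen amount are placeholders:

```python
# Rough sketch of "prep image for clipvision": center-crop to a square,
# downscale with a high-quality filter, then optionally sharpen.
from PIL import Image, ImageEnhance, ImageOps

def prep_for_clipvision(path, size=224, sharpen=0.0):
    img = Image.open(path).convert("RGB")
    img = ImageOps.fit(img, (size, size), Image.Resampling.LANCZOS)  # crop + high-quality resize
    if sharpen > 0:
        img = ImageEnhance.Sharpness(img).enhance(1.0 + sharpen)
    return img

prep_for_clipvision("reference.jpg", sharpen=0.5).save("reference_prepped.png")
```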
@@latentvision Thank you so much!
Thank you for these amazing tools Matteo! I was wondering if maybe you have some tips on how to best transfer an art-style to a subject that the checkpoint has no knowledge of.
I have some 3D renders of creatures that I would like to turn into an illustration. So far sending the 3D render as the latent image and a style reference through ipadapter along with some style descriptions in the prompt was "ok". However unless I keep the denoise extremely low the features of the creatures (especially the faces) change drastically. I already tried turning the 3D render into lineart/depth, testing several controlnets...similar to what you did with the castle. Unfortunately nothing really did the trick. Either the design of the creatures changes or I get hardly any of the style into the picture.
the checkpoint is actually very important, try many of them, it makes a huge difference. Regarding your specific question it's hard to say without checking the actual material
Matteo, I don't know if IPAdapter can already do this, but it would be cool if CR Overlay Text could somehow accept an IPAdapter, or if there were another text node able to receive an IPAdapter as an input.
Amazing tutorial, but I can't find the working set of models though - not available in the Model Manager, and the ones from search seem incompatible. Might be a good idea to store them with the photos/workflows. Presently I have to be creative =) Otherwise - great stuff, thanks so much.
damn, i wish i had money to support you lol. Thank you so much for the wonderful tutorial
I'd love to give this a shot but I can't seem to find a way to install the t2i-adapter-sdxl for comfyui, I'd greatly appreciate any help I could get. Thanks!
I really like the flow of the video. The example at the end with one IPAdapter and two ControlNets; would using InstantID be better for portraits?
face models don't generally like other conditioning on top, but yeah it is possible
Hello, Matteo
I was wondering what Lineart controlnet you used for SDXL with the sketch images.
Keep up the great work! It's super helpful across all the community!
it's the controlnet lora by stability ai, but you can check other models if they are available
These videos are amazing!
What kind of hardware are you using? I'm considering to build a machine for SD and Small LLMs, but my budget is low.
Would a 3060 12gb be good enough to start?
I have a 4090. I had a 3060 before... to start, yeah should be enough.
@@latentvision thanks for the reply man!
16:02
When loading the graph, the following node types were not found:
DepthAnythingPreprocessor
Nodes that have failed to load will show as red on the graph.
what should i do?
install ComfyUI's ControlNet Auxiliary Preprocessors from the manager
Great tutorial! You're really doing a fantastic job! Thanks a lot! Just tell me, please, where can I find xl-lineart-fp.16 that you're using as a controlnet model?
I found it 😉
linked in the description!
@@latentvision Thanks 🙂
Hey, what's the style reference image you used for the first one? It's adorable and I would love to use it myself.
Thumbs up! Do you have a tutorial on how to install the IPAdapter?
I don't do installation tutorials, sorry ;)
I'm learning from you, God.
goat sacrifices only on Friday
Grande Matteo ❤
Hello Matteo, thank you very much for your videos, they are really good. I only have a little problem with this tutorial: it gives me an error in the sampler. When I delete the adapter it creates the image; at the beginning it generated 5 images, but now this error appears. I wanted to know if you know any solution for this error.
Error occurred when executing KSampler:
Expected query, key, and value to have the same dtype, but got query.dtype: struct c10::Half key.dtype: float and value.dtype: float instead.
File "C:\artificial intelligence\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
try to execute Comfy with the --force-fp16 option
@latentvision, why did you change the name of the weight type selection from "Style transfer (SDXL)" to just "Style transfer"?! Can we now use style transfer for both SDXL and SD1.5?!... 🤔
yes, you can transfer style (and composition) in SD1.5 too, even though it's not as effective. The style+composition node is only for SDXL, but I'm working on it.
@@latentvision thanks...👍
Epic video
You are a star....
It's not about the sketch, it's about colour control... many people want to make comics but can't draw... with ControlNet, IPAdapter and SD anyone can draw anything... But colour control, for example dress colour, house colour, the overall colour of a multi-panel scene, that's the problem.
Error occurred when executing IPAdapterUnifiedLoader:
ClipVision model not found.
I have the same problem and can't find a solution ;(
Phenomenal
All awesome stuff, I'm trying to learn from you... but in this tiger example the network breaks at the KSampler step and I have no clue why. Is there any conflict between nodes due to ComfyUI updates? Please help.
Thank you master !
First of all, you are the best, your tutorial videos are great. I tried to download "t2i-adapter-lineart-sdxl-1.0", but in the download area there are two PyTorch models; where can I find it?
Edit: I found it under "Install Models".
WOW!!😍😍
This is really cool, thank you. I am able to duplicate similar results with SD1.5, but when I try XL models I get a tiger with the iceberg pattern rather than an ice tiger. What am I doing wrong? Thanks.
1) convert the tiger to grayscale before inpainting, that helps 2) try a different sdxl model. you need a generic one
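For reference, the grayscale step is a one-liner with Pillow (filenames are placeholders):

```python
# Desaturate the source before inpainting so no color information leaks through.
from PIL import Image

img = Image.open("tiger.png").convert("L").convert("RGB")  # grayscale, kept as 3-channel RGB
img.save("tiger_grayscale.png")
```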
GOAT
incredible as always. ring the subscribe bell, people!
Error occurred when executing KSampler (Efficient):
'NoneType' object has no attribute 'shape'
@matteo, I am following this, and while doing the inpainting part I get an "AttributeError: 'NoneType' object has no attribute 'shape'" error coming from the KSampler node. I can't figure out why it's happening. Can you please help?
you are probably using the wrong controlnet
Will you release a ComfyUI course in the future? I love your workflows but I find the software daunting
Hello, do you have any process for going from an image to a hand drawing? Thanks
the best!
The T2i lineart-fp16 safetensor does not appear in the "LoadControlnet" list. All the rest of the T2i models are listed except the lineart safetensor. I tried the sketch and style safetensors, which worked fairly well. I am a newbie and need your help. What am I doing wrong? ComfyUI is fully updated.
Error occurred when executing Canny:
shape '[1, 1, 1836, 960]' is invalid for input of size 1759040
This is so cool, but I've got a question: what if I tried to do the reverse of the coloring book, from a normal image to line art / coloring book? Do I just swap the images?.. Thank you
yes, works very well the other way around too. be very aggressive with the text prompt in saying exactly what you want. Also you might NOT want to send the original image into the ksampler latent to avoid getting colors.
I'm missing the Plus high strength model. If I use the manager, what model am I looking for??
Thanks for the video! But I can't make it work... I do everything like you but I'm getting "Error(s) in loading state_dict for Resampler:
size mismatch for proj_in.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1664])." Could you please help me?
I can't get anything except the error "mat1 and mat2 shapes cannot be multiplied". Even though I downloaded the models, put them in the correct directories and have everything named properly, the Load ControlNet Model nodes will not recognize them / allow me to choose them.
you are probably using the wrong control net
For the life of me I can't find the "depth anything vit l14" in the preprocessors; could you tell me where you got it, please?
Where to download the "ipadapter-xl-lineart-fp16.safetensors" used in the setup? EDIT: Got it - used "Install Models" in the ComfyUI Manager.
Good workflow!!! But the inpaint model causes a KSampler error; maybe you know how to fix this?
I'm having an error with IPAdapterUnifiedLoader:
IPAdapter model not found.
Now, if you reverse the process, could we make a useful Coloring Book drawing, with thick(er) lines?
you can very easily make coloring books, but calibrating the thickness of the line would not be trivial
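One way to thicken the lines outside the diffusion itself is a simple morphological dilation of the generated line art as a post-processing step. This is only a sketch with OpenCV; the kernel size and filenames are assumptions:

```python
# Post-process generated line art: dilate the dark lines to make them thicker.
import cv2
import numpy as np

lines = cv2.imread("coloring_page.png", cv2.IMREAD_GRAYSCALE)
ink = 255 - lines                    # treat the dark lines as the foreground
kernel = np.ones((3, 3), np.uint8)   # bigger kernel = thicker lines
ink = cv2.dilate(ink, kernel, iterations=1)
cv2.imwrite("coloring_page_thick.png", 255 - ink)
```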
I have seen lots of these demos; why don't you add an automatic description of the base image, so you don't even need to write a prompt?
What kind of hardware is this running on... or is it edited for time?