How to use IPAdapter models in ComfyUI
- Published: 9 Jun 2024
- Everything you need to know about using the IPAdapter models in ComfyUI directly from the developer of the IPAdapter ComfyUI extension.
👉 You can find the extension "ComfyUI_IPAdapter_plus" on github here: github.com/cubiq/ComfyUI_IPAd...
00:00 Introduction
01:33 Basic Workflow
05:55 IPAdapter Plus Model
07:32 Prepping Images
09:16 Sending Multiple Images
14:07 Face Model
15:18 SDXL
17:42 Img2Img
19:10 Inpainting
20:00 ControlNet
21:40 Upscaling
23:17 Saving embeds
26:33 Conclusions
🎵 Background Music
-- "Part A" by Alexander Nakarada (www.serpentsoundstudios.com)
Licensed under Creative Commons BY Attribution 4.0 License
-- Last Stop Synthwave by Karl Casey @ White Bat Audio (whitebataudio.com/)
-- CyberPunk City by Peritune (peritune.com/blog/2020/05/22/...)
Just want to say Thank You, Thank You, Thank You, from the bottom of my heart. There are very few developers that take the time to actually explain their tools, let alone include additional options such as saving embeddings, which offer huge potential for sharing and for extending the workflow in terms of resource management. You are a huge asset and very much appreciated.
This is so absolutely bonkers, I can't believe how much things have changed in 3 months. I just found your channel and binged your last month's content and, holy what the heck, this is brilliant.
Just want to say thank you. For 2 days I had been searching for a way to inpaint using an image, and in the video you explained it in a very easy-to-understand way. Thank you very much.
Your comfy node (and this video) are invaluable resources! Thanks so much for helping me wrap my head around IP!
Thank you so much for this AMAZING feature and also the detailed readme plus VIDEOS! We need more people like you! You are an asset to the AI / open-source community.
Doubling down on that. Thank you so much for this amazing piece of work.
I was sure I'd never understand how all these things work, especially inside ComfyUI. You're just the best at explanations, they're so clear! Thanks for the knowledge you're sharing with us.
Oh man, this is changing my world. So much we can do with this. And... you explained your tools. Thank you so much!
Since you have so much expertise and knowledge in this topic, I really look forward to the training model tutorial 😊
I'll work on that, but it's really for kind of edge scenarios and optimization (I guess it can be useful for some art styles).
One of the best nodes I've seen for COMFY. Using it to lead renders with my current workflows and results show increased accuracy and detail. SUBBED!!
It's worth the 30 minutes without hesitation.
Excellent video! You have such a pleasant style of communicating that it really was a pleasure absorbing all of this information. Well done and thanks!
Incredibly valuable tutorial. Keep up the good work.
Thank you for creating this implementation! Very clever solutions for handling workflows!
I saw you added weighting options for the images as someone requested. I was doing it by repeating the same images a few times to increase their weight, which was very messy 😅
Awesome. Adding different image ratio inputs and outputs and the ability to give custom weights to batched input images would be a blessing!!! Thanks for your work!
I think Scott Detweiler made a video on weighted inputs in one of his ComfyUI episodes. Unsure if it had to do with IPAdapter.
So much content in a single video, this is amazing... thanks so much!
What a useful tutorial, absolutely fantastic, thanks a lot!
This is a game changer and continues to be a game changer. Not to mention you are kind enough to provide not just a video, but A GREAT video on how to use your tool! Thank you a million times.
New to ComfyUI. Thanks for this. It was very helpful for someone like me who has heard about IPAdapter, but had no clue what it really does.
Wonderful tutorial...very clear and easy to follow...👍🏻
thank you, this is fantastic -- very well explained. The saving embeds is brilliant!
I'd classify this as a top-5 informational Comfy/SD video to watch. Thank you, Mato sir! Also looking forward to the training tutorial.
Great video! So informative and straight to the point 👍
I would love to see your video on the training
Thank you for your time and energy on this. This was a great introduction to comfyui.
Just came across your amazing tool. Congratulations and thank you! Amazing applications for this in the future, I think.
Wow! Fantastic video, I learned so much, thank you!
Brilliant, thanks so much. Your system and explanation is awesome, I've learnt so much!
Thanks a million for sharing this tool and explaining it so clearly!
Man, I can't thank you enough for this, Bravo. 👏👏👏
Thanks so much for IP Adapter, it's been working nicely in Automatic 1111. I still have to learn how to use it thoroughly. More tutorials would be appreciated!
Amazing. Thank you for all your hard work.
Amazing tutorial!! Much respect ❤️🇲🇽❤️
Thank you. This is huge to the community
Great Job explaining everything , Thank You!!!
A fantastic presentation, thanks so much.
This is AMAZING!
Your explanation and trick, omg! I learned a lot!
Great extension and great tutorial! Awesome! Thanks for this. It was posted on my Discord yesterday, after I released a video about a more basic noise injection technique just using nodes. I will definitely try this out and, if it's OK with you, introduce it to my German-speaking audience.
hey thanks! I checked your videos, I don't speak German but they are really well done and easy to follow.
Absolutely take whatever you want from my video and the content on my repository.
I love this tutorial. But at 14:21, you have a Load CLIP Vision with ipAdapter_image_encoder.sd15.safetensors. I have been looking everywhere for this image encoder but cannot find it. I can only find the clipvision G or the ViT-H CLIP tensors. Any tips?
Bravo! Fantastic work with IPAdapterPlus.
I would also be very interested to see a video on the training process you mentioned. I'm trying to train a style that is quite unique, so I can't just use one image. I'm getting poor results with standard LoRA network training and standard DreamBooth training.
thanks for creating this! Game changer on comfyui!
Thank you very much for this tutorial! This tool is very powerful, and it is going to make my workflows so much easier to construct.
Great explanation, and very good features you've added to IPAdapter.
Thank you so much
Amazing work bud!
Amazing! so many useful tools in one video
This was very useful and very well explained! thanks you a lot!
thank you so much yo! you are, literally, incredible.
Great explanation, simple and crystal clear, thank you so much.
spectacular presentation, thank you
WOWWW! This looks amazing... I'm going to try this out tonight (I may not get any sleep this weekend :) )
The best! 🤝👏
Just beginning with it and already seeing these great nodes.
thanks a lot for your very clear explanations and this awesome tool.
Amazing work! Very inspirational
Thanks a lot for this one!! Great tutorial!!
Marvelous, keep up the good work.
Very good tutorial, thanks
Incredibly detailed! Thank you!
Thank you very much for this video and nodes !
Your voice (I don't know if it is your own) sounds a lot like "my name is Giovanni Giorgio" 😊 Thanks so much for your very calm way of explaining and naturally also for your time & energy invested into the development of IPAdapter!!!
Excellent, you are awesome, and thanks very much for the explanation and video.
Thanks for the great guide!
thank you soo much, clear and to the point.
Thank you. You do a much better job at explaining compared to people paywalling content behind Patreon. Thanks for your work on IPAdapter, it's another indispensable tool. Hopefully others will help and work toward further amazing improvements to the whole Stable Diffusion scene, with less paywalled content on YouTube, more open source, and less stagnation.
Thanks for your message, I feel the same way.
Mind you, I don't think there's anything wrong with asking for compensation for quality content, but since I developed an open-source tool I find it's only fair that I also share the know-how on how to use it. I guess it's the only way we can actually evolve.
@@latentvision Thank you for your response. Correct, people wouldn't be able to learn otherwise; just watching a couple of your videos has made understanding Comfy in general a lot easier. Support channels and donations are fine, but hiding what is essentially open and free information, found in videos like yours, is very concerning in this community and only invites stagnation in the AI/Stable Diffusion space.
Wow, very nice, I wish you'd make more videos and tutorials. Thanks, thanks!
This is magic! Thank you very much...
Great Work please keep going!
Great video, thank you!
So useful! Thanks!!
More videos like this please !!!
Many thanks for Your work!
really really nice, many thanks!
nice one, i'm going to use it. thank you.
Nice work, thanks
This tutorial is the best.
Great Job
Thank you!
great stuff
TY me and my ComfyUI loves you
Thank You
Great tutorial! How were you able to get such high denoise on the upscale? Anything over .2 for me starts to change the look.
Bro, you're great.
Hi Matteo, thanks.
thank u man
Congrats!!!
where do you get the clipvision models that you use?
I've found you out, Matteo, you're one of us :)
A note.
I installed IPAdapter just yesterday; just so you know, I had to create an ipadapter folder inside the models folder.
Placing the model in the custom nodes folder didn't work for me.
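For reference, a minimal sketch of the layout described above, assuming a default ComfyUI install where the extension looks under ComfyUI/models/ipadapter; the paths and filename are placeholders, adjust them to your setup:

```python
import os
import shutil

# ComfyUI_IPAdapter_plus loads IPAdapter weights from ComfyUI/models/ipadapter
# (CLIP vision encoders go in ComfyUI/models/clip_vision), not from the
# custom_nodes folder. On older installs the ipadapter folder may not exist yet.
comfy_root = "/path/to/ComfyUI"  # placeholder: your ComfyUI install
ipadapter_dir = os.path.join(comfy_root, "models", "ipadapter")
os.makedirs(ipadapter_dir, exist_ok=True)  # create the folder if it's missing

downloaded = "/path/to/downloads/ip-adapter_sd15.safetensors"  # placeholder
if os.path.isfile(downloaded):
    shutil.move(downloaded, os.path.join(ipadapter_dir, os.path.basename(downloaded)))
```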
Very helpful. Is IPadapter similar to A4’s “reference” controlnet?
@latentvision Is there a way to know how the image is being described internally?
I mean, can that text be extracted somehow?
Thanks for the tools and tutorials! One question though: where can I find the CLIP encoders to download? I am looking everywhere and I can't seem to find the IPAdapter SD15 clip encoder anywhere. Also, once I download it, where do I put it so it shows up correctly? Additionally, is there a way to find out the underlying path each node is looking in? (It would make pathing so much easier when manually adding files.)
Go, Matteo!
😅
Ladies and gentlemen, we have found THE GUY !
Thank you so much for everything you are doing for the AI community, this is great work and a great explanation.
May I ask what the limitation of the IP adapter is in terms of the fed images, like on what basis it determines the tokens? For example, if I used a line art image with a realistic checkpoint trained for producing mostly photographic images, will the IP adapter get the tokens from the trained checkpoint or from the IP adapter model?
thanks for the kind words.
IPAdapter is a very strong conditioning, but the main checkpoint will always show its character. It's better to pick the right model for the image you wish to generate.
Man, you make ComfyUI seem so easy to use. Thanks for the amazing video. I want to try it but ComfyUI still scares me a bit.
Thanks! Well, maybe I'll make an intro video to ComfyUI next.
@@latentvision Your video helped a lot, I managed to get it working perfectly. The only thing not working for me is upscaling; it worked using ESRGAN, but I'd like something like hires fix in A1111. Thanks for everything.
@@latentvision Nvm, I was afraid to try ComfyUI and now I'm amazed. It's not easy, but after you get used to it, it's not that hard either.
Hey, great video! Really curious how you got your generations to generate so fast! I run ComfyUI through Google Colab from a MacBook Pro M3 Pro with 18GB RAM, using a T4 GPU in Colab, and my generations take up to 20 minutes. I've tried using a V100 GPU to speed things up and I feel like they take roughly the same time for me.
that's a 4090!
Hi, thanks for making and sharing this extension. I am looking for the node you mention, Prepare Image for ClipVision. Would you mind telling us where to find it? Thanks.
I found it :)
How much does the model affect the outcome?
Also, how do you resize the latent image in the img2img section? I'm on a 4GB GPU so I can't run high-resolution images, it would take me ages. I need to resize the images while they are still latents.
Also, the latent image does not maintain the lines like yours does. Yours is almost like using ControlNet canny: it keeps everything the same and just changes the style. Mine doesn't; it changes the entire look of the image, pose, etc.
Where can I get the ip-adapter_sd15.bin file for the IP adapter model loader node and in which folder should I put it?
Truly amazing. I would give you a Nobel Prize if I could. Thank you Matteo.
For upscaling with IPAdapter, why not send it through a ControlNet Tile along with an Ultimate Upscaler? You could create an image with an SDXL checkpoint, then img2img with ControlNet tile in an SD1.5 checkpoint. The denoise of the tile model should be 0.3 or less to give the closest results to the original image.
Hey, thanks.
The point of that segment is to show you the strength of the IPAdapter model. Of course you'll have to mix it with other nodes in a real-life scenario. If you already use the tile ControlNet, though, you probably don't need to add IPAdapter.
Thank you!
this is great! I'm going to play with it. Thanks. By the way, the link in the description is broken :)
You are right, thanks for the heads-up!
Can you include more of the model links for the vae and unrelated ipadapter models
Hi, nice work on the IPAdapter. What about the add_weight widget on the "Prepare Image for Clip Vision" node? How much weight is added to the image if it is activated?
The image gets twice as much weight; it's just a patch until I find a way to add fine-grained weighting.
@@latentvision Thanks for the quick reply. That should come in handy.
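As an aside, a minimal sketch of what weighting by duplication amounts to, assuming ComfyUI's convention of image batches as [N, H, W, C] tensors in 0..1; this is only an illustration of the idea mentioned in the replies above, not the extension's actual code:

```python
import torch

# Three dummy reference images as a ComfyUI-style batch: [N, H, W, C] in 0..1.
images = torch.rand(3, 224, 224, 3)

# "add_weight" roughly means the selected image appears twice in the batch,
# so it counts double when the batch is encoded and averaged downstream.
boosted = torch.cat([images, images[0:1]], dim=0)
print(boosted.shape)  # torch.Size([4, 224, 224, 3]): image 0 is now duplicated
```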