Style and Composition with IPAdapter and ComfyUI
- Published: 27 May 2024
- IPAdapter Extension: github.com/cubiq/ComfyUI_IPAd...
Github sponsorship: github.com/sponsors/cubiq
Paypal: www.paypal.me/matt3o
Discord server: / discord
00:00 Intro
00:26 Style Transfer
03:05 Composition Transfer
04:56 Style and Composition
07:42 Improve the composition
08:40 Outro
No hype. No BS gimmicks.
You are the golden god of AI generation tutorials. Plus you make a damned fine node or two.
thanks! The difference is that I'm not a youtuber, youtube for me is a hosting platform not a job... that and the fact that I hate "this will change everything..." kind of videos
@@latentvision the fact is this is GAME CHANGING. I've been struggling with art direction for images for client artwork requests. This is game changing, making it very easy to get desired outputs with styles and compositions. Looking forward to seeing how this works out with image posing references for composition.
Thanks much appreciated
In case anyone is wondering... the artwork used (the first crayon skull) is by Jean-Michel Basquiat. An amazing artist.
thanks for saying it 👌. it's true it should be credited.
we don't want to harm people, despite how it might look. we just like to make stuff like everyone else.
You added value to our lives. Thank you..
I'm doing my part!
Wow, this is really the last missing piece I needed to use Stable Diffusion in proper client work. I already tested it and am super impressed by the performance of style transfer!! Thank you so much for making all of this possible open source! ❤ These new features will change entire industries!
I remember when I was a research assistant my professor told me to look into this "GAN-style transfer" thing and report back to him. For the time it was super impressive but it's cool to see what ~7 years can do, this is astounding compared to back then
God bless you, Matteo! This makes me fall in love with generative AI again😍AMAZING!
IKR?! me too!
Literally grabbing pop corn and smiling at all your videos. The goat. Thank you for all!!
What you do here is incredibly helpful on all ends. Helpful is a bad word. Game changing is better but that is two words. Honestly whatever you release is gold. I look forward to every video and release man. Thank you so very, very much for your hard work!
Thank you once again for doing all of this!
This is massive, something I was hoping to achieve when I started using sd and you made it possible
if I understand SD3 architecture, it should be totally possible
Elegant and powerful workflow - this is absolutely fantastic stuff
That's wonderful! That combined composition and style node is a game changer! I added a visual style prompt to some generations and it seems to give that final touch to style transfer. Thank you!
OMG, just this small short vid gives a lot of important information and update to my knowledge.... THANK YOU MATEO!!
just doing my part
Thank you for your hard work, I have a workflow im constantly updating with your ipadapter tech, these new capabilties are awesome!
I am so grateful that you take the time to make these videos. Your nodes are so powerful and efficient, and I am able to use them with confidence right from the start given these wonderful expositions. :)
Just when you think it can't get any better, you always prove us wrong and surprise us with new tools and smarter as well as more efficient ways to get to certain results. Thank you so much.
Great work Matteo 👏👏✨❤️
Absolutely brilliant! Already having so much fun with this.
you are a genius, sir!
only by chance
fantastic work Mateo, Thank You!
Matt3o this is amazing! I can tell you I'm going to have a blast experimenting with the nodes, keep up the amazing work!
IKR?! Stable Diffusion is fun again 😄
This is incredible, great job.
you are the best, number 1, the greatest channel on YouTube. I love you, Matteo.
thanks! I'm not worth it :)
amazing, thank you very much for making these and your easy to understand explanation and workflow!
Incredible stuff, thank you Matteo!
Lovely work, Matteo! I can’t wait to play with this.
This is amazing. Thank you for great work.
I love it! for me, it works like a charm. thanks for your engagement and time
I haven't had so much fun with SD in a while, thanks so much for all you do!
IKR?! same here!
Amaaaaazing video! Your work is fantastic :D
This information is gold. Thank you
This is what we imagined YouTube would be used for when it launched. More practical than the forums and newsgroups of the time.
Excellent work. It's simple and effective, and it opens the way to a lot of exploration and testing (I added a LoRA to see what it gives and... at 5 a.m. I looked up from the screen: oops 😅)
I’ve been missing your live stream and you gave us this? Impressive
Thank you Matteo and the unnamed heroes (sponsors) who are responsible for this incredible thing🎉
This is exactly what I needed! Thanks so much
ok wow amazing update. Thank you for your hard work
you did wonders for the community! thank you so much!
You always make my day ❤
Just finished updating my IPAdapter and it wasn't too painful. I did have to use ComfyUI Manager to download the new models but other than that I didn't hit any walls. Thanks for the tutorial!
ooh - I want to update too! - did you use a tutorial to do that? -
So you just update the IP Adapter nodes (using manager) and download some new models? - I am a bit confused 😬
Managed! - Using the 'IPAdapter Style & Composition SDXL' now - it's awesome! Thanks so much - @Matteo! 🤗
I have been away from all AI stuff due to my work, just wanted to thank you for this. It means a lot
you are welcome! have fun (and profit) with it
That's just fantastic, thank you!
Very good video, thank you, it helped me a lot
I am going to try this as soon as i get home 😮❤
Great work as always.
Fantastic work! Thank you, Matteo!
Mamma mia, this is incredible! Thank you for making this an open-source project
Amazing again. Really nice work
This is absolutely amazing!
Great update as usual
Thank you for this excellent tutorial :)
👍Excellent work
OMG!!! 👏👏👏
Thank you for everything. I supported you in paypal :)
thanks!
Mamma mia! Thank you Matteo ❤
Amazing. Ipadapter and comfyui = 😍
Amazing work!
you really unlocked the power of image generation
Interesting behaviour: I've just discovered that if I mask the image leaving the center out of the mask, any generation will be influenced by the image everywhere but the center
Interesting results.
(workflow at 9:21)
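The masking behaviour described above can be sketched numerically. Below is a minimal, hypothetical illustration in plain NumPy (not the actual IPAdapter mask code; the white-applies/black-excludes convention and function name are assumptions) of building an attention mask with the center zeroed out, so the reference image would influence everything except the center:

```python
import numpy as np

def center_hole_mask(h: int, w: int, hole_frac: float = 0.5) -> np.ndarray:
    """Build a binary attention mask: 1.0 = adapter applies, 0.0 = excluded.

    The central hole_frac portion of the image is zeroed out, so the
    reference image influences everything *except* the center.
    """
    mask = np.ones((h, w), dtype=np.float32)
    hh, hw = int(h * hole_frac), int(w * hole_frac)
    top, left = (h - hh) // 2, (w - hw) // 2
    mask[top:top + hh, left:left + hw] = 0.0
    return mask
```

In ComfyUI you would feed an image like this into the node's `attn_mask` input; only the pixel values matter, not how the mask was produced.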
Insane, you're the goat!
Very interesting surely gonna try it. Thx!
Can't seem to find the StyleAndComposition node... am I missing something?
amazing!! thank you Matteo!
Fantastic!! As always 👏
Thank you!
Thank you for the time you dedicate to the community. My thanks unfortunately are not much, I was only able to offer you a coffee, but I hope that others will do the same to support your time. 😊
I gave you something using the youtube thank you button.
thank you, it would be great if companies that are actually making a lot of money out of this technology would chime in
@@latentvision Can you mix multiple styles? And does it also work with image to image?
@@alessandrorusso583 yes and yes-ish. img2img works but the denoise needs to be pretty high. depends on the result you are after
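The "denoise needs to be pretty high" advice above refers to img2img strength. As a rough illustration (a toy sketch, not ComfyUI's actual sampler code; the function name is hypothetical), common img2img implementations run only a fraction of the scheduled steps proportional to the denoise value, so a low denoise leaves the style transfer little room to act:

```python
def effective_steps(num_inference_steps: int, denoise: float) -> int:
    """Roughly how many scheduled steps actually run in img2img.

    With low denoise most steps are skipped and the output stays close
    to the input image; a high denoise (e.g. 0.8+) lets the style and
    composition adapters meaningfully reshape the result.
    """
    denoise = max(0.0, min(1.0, denoise))  # clamp to the valid range
    return round(num_inference_steps * denoise)
```

For example, 30 scheduled steps at denoise 0.3 run only about 9 actual steps, which is usually too few for a strong style transfer.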
nice one... I was looking for this feature in Stable Diffusion and just found it in your video. nice one...
IPAdapter is just mesmerizing...
You are the best !
I love this guy so much
Great video and thank you for your service
When’s the next comfy to hero video coming out?
I'm prepping it... not sure when but it's in the pipeline
So good!
this is crazy good..
Thank you! I think this will be way better than controlnet for me. Give the AI a base to follow but have way more freedom than controlnet gives. I could never get the SDXL controlnets to work too well.
👌👌👌❤❤❤ Very nice & high quality video. Your new nodes really give lots more options 😀. I am sorry for my comments on your last video about the change to your node structure. Salute 🫡 to your thoughts about open source in the last part of this video.
woow thats cool
Since you build these, you're the perfect person to ask. Would it ever be possible to combine your previous workflow "character stability and repeatability" with something like composition adapter or multi area composition, where you'd feed separately stable and posed characters into one image to generate a final singular artwork where they interact?
great work
that is crazy xD I love it
Dhanyavad (thank you)
awesome!!!
Thanks as always, Matteo. Should I update via Manager or Git?
if you know how... git is always better, but the manager works too
thanks so much
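For anyone following the "git is always better" advice above, a minimal command recipe looks like this (assuming the default ComfyUI layout and that the extension folder kept the name it was cloned with; adjust paths to your install):

```shell
# Update the IPAdapter extension in place.
cd ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus
git pull
# Restart ComfyUI afterwards so the updated nodes are registered,
# then download any newly required models (e.g. via ComfyUI Manager).
```

The Manager's "Update" button does essentially the same thing, but a plain `git pull` makes it easier to see exactly what changed.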
awesome work!!! btw, can you suggest a workflow for using the new composition transfer with subjects from other IPAdapters (like we achieved before with attention masks)?
I only just released the feature, I still need to play with it. I'll post more in the coming weeks
@@latentvision you’re the best
These updates are great! How do you recommend composition for SD1.5 models currently?
Since 1.5 models don't have a composition unet, tools like attention/latent couple and regional prompter might be your best bet.
🤯amazing
great nodes! 🍒🍒🍒
amazing!
Thanks Matteo!!
I'm guessing this only works with SDXL and not 1.5?
GREAT!
bravo Matteo 💪💪💪💪 now I know the LeoneMedusa also exists, I'll make a documentary 🤣
awesome man
where would you insert a lora loader (character), before or after the ipadapter in the model chain? I tried to do some testing but the results were inconclusive so I wanted to hear your thoughts on this. thanks.. oh and huge thanks for this marvelous node 😍
Thanks Dear Mateo. Well done. God bless you man ❤
I'm just wondering, could we use the IPAdapter Style and Composition node for putting specific sunglasses on a face? A certain face or a random person?
that feels like inpainting
@@latentvision thanks.
V2 is so powerful and easy to use! I'm really having fun with it! Thank you Matteo! Style and Composition Transfer for SDXL is amazing! I'm wondering what this kind of approach could mean for SD3, where the different modalities are even more closely related?
Hello, thank you very much for this! I'm kind of a noob at this and it took a long time to make ComfyUI work without errors. Your tutorial was very helpful and wonderful to explore. I have a noob question: is it possible to use Canny or other ControlNet nodes with the adapter to enforce adherence to the original image? If yes, are there any guides for that?
Thank you!
Hey, so please ignore my question, I figured out how to add ControlNet Canny & Depth components and plug them into conditioning. The thing is that it works, but very, very slowly: ~25 sec becomes ~800 secs. Is there something I may be doing wrong?
Any plans for SD 1.5 version? XL is just too slow to be used with AnimateDiff.
sd15 has a completely different architecture, I'll give it a go but it might not be possible without a dedicated model.
Holy magical heaven
Grazie (thank you).
Thank you for sharing, can you add seed control to the zone control?
no, unfortunately the diffusion process is the same, one seed takes care of everything (if I understand your question)
This is super cool! You never cease to amaze.
What about creating a Style+Face node? Putting them in sequence increases the time 6x.
Or maybe even all of them... Style+Face+Composition. That would simply be a dream.
if I understand the question, unfortunately the face models are different and you would need to load 2 IPAdapters anyway
Amazing, great work. But in my case, when running, it asks for a CLIP vision model in the IPAdapter Unified Loader node. Any feedback? I added a CLIP vision model in the next node but it didn't work.