Style and Composition with IPAdapter and ComfyUI

  • Published: 27 May 2024
  • IPAdapter Extension: github.com/cubiq/ComfyUI_IPAd...
    Github sponsorship: github.com/sponsors/cubiq
    Paypal: www.paypal.me/matt3o
    Discord server: / discord
    00:00 Intro
    00:26 Style Transfer
    03:05 Composition Transfer
    04:56 Style and Composition
    07:42 Improve the composition
    08:40 Outro
  • Science

Comments • 241

  • @Billybuckets · A month ago +52

    No hype. No BS gimmicks.
    You are the golden god of AI generation tutorials. Plus you make a damned fine node or two.

    • @latentvision · A month ago +15

      Thanks! The difference is that I'm not a YouTuber; YouTube for me is a hosting platform, not a job... that, and the fact that I hate the "this will change everything..." kind of videos

    • @HiProfileAI · A month ago

      @@latentvision The fact is, this is GAME CHANGING. I've been struggling with art direction on images for client artwork requests; this makes it very easy to get the desired outputs with styles and compositions. Looking forward to seeing how this works out with image posing references for composition.
      Thanks, much appreciated

  • @zGenMedia · A month ago +26

    In case anyone is wondering: the artwork used (the first crayon skull) is by Jean-Michel Basquiat. An amazing artist.

    • @lefourbe5596 · A month ago +3

      Thanks for saying it 👌. It's true, it should be credited.
      We don't want to hurt anyone, despite how it may look. We just like to make stuff like everyone else.

  • A month ago +43

    You added value to our lives. Thank you.

  • @GabrielRosenthal · A month ago +11

    Wow, this is really the last missing piece I needed to use Stable Diffusion in proper client work. I already tested it and am super impressed by the performance of the style transfer!! Thank you so much for making all of this available open source! ❤ These new features will change entire industries!

  • @GoblinWar · A month ago +3

    I remember when I was a research assistant my professor told me to look into this "GAN-style transfer" thing and report back to him. For the time it was super impressive but it's cool to see what ~7 years can do, this is astounding compared to back then

  • @TheGalacticIndian · A month ago +5

    God bless you, Matteo! This makes me fall in love with generative AI again 😍 AMAZING!

  • @urbanthem · A month ago +5

    Literally grabbing popcorn and smiling at all your videos. The GOAT. Thank you for everything!!

  • @Mr.Sinister_666 · A month ago +5

    What you do here is incredibly helpful on all ends. Helpful is a bad word. Game changing is better but that is two words. Honestly whatever you release is gold. I look forward to every video and release man. Thank you so very, very much for your hard work!

  • @razvanmatt · A month ago +11

    Thank you once again for doing all of this!

  • @denisquarte7177 · A month ago +3

    This is massive: something I was hoping to achieve when I started using SD, and you made it possible

    • @latentvision · A month ago +1

      if I understand SD3 architecture, it should be totally possible

  • @AdamDesrosiers · A month ago +2

    Elegant and powerful workflow - this is absolutely fantastic stuff

  • @Paulo-ut1li · A month ago +3

    That's wonderful! The combined composition and style node is a game changer! I added a visual style prompt to some generations; it seems to give that final touch to the style transfer. Thank you!

  • @AnotherPlace · A month ago +4

    OMG, even this short vid gives a lot of important information and updates my knowledge.... THANK YOU MATTEO!!

  • @sorijin · A month ago +4

    Thank you for your hard work. I have a workflow I'm constantly updating with your IPAdapter tech; these new capabilities are awesome!

  • @reapicus557 · A month ago +1

    I am so grateful that you take the time to make these videos. Your nodes are so powerful and efficient, and I am able to use them with confidence right from the start given these wonderful expositions. :)

  • @DanielPartzsch · A month ago +2

    Just when you think it can't get any better, you prove us wrong again and surprise us with new tools and smarter, more efficient ways to get to certain results. Thank you so much.

  • @flisbonwlove · A month ago +9

    Great work Matteo 👏👏✨❤️

  • @Skydam33hoezee · A month ago

    Absolutely brilliant! Already having so much fun with this.

  • @andykoala3010 · A month ago +6

    you are a genius, sir!

  • @petertjie4128 · A month ago +2

    Fantastic work Matteo, thank you!

  • @DarkGrayFantasy · A month ago

    Matt3o this is amazing! I can tell you I'm going to have a blast experimenting with the nodes, keep up the amazing work!

    • @latentvision · A month ago

      IKR?! Stable Diffusion is fun again 😄

  • @jibcot8541 · 12 days ago +1

    This is incredible, great job.

  • @PulpoPaul28 · A month ago +3

    You are the best, number 1, the greatest channel on YouTube. I love you, Matteo.

  • @marjolein_pas · A month ago +1

    Amazing, thank you very much for making these, and for your easy-to-understand explanations and workflow!

  • @aamir3d · A month ago +1

    Incredible stuff, thank you Matteo!

  • @ttul · A month ago

    Lovely work, Matteo! I can’t wait to play with this.

  • @remmo123 · A month ago +1

    This is amazing. Thank you for the great work.

  • @3dpixelhouse · A month ago +1

    I love it! For me, it works like a charm. Thanks for your engagement and time

  • @mmxyt · A month ago

    I haven't had so much fun with SD in a while, thanks so much for all you do!

  • @kallamamran · A month ago +1

    Amaaaaazing video! Your work is fantastic :D

  • @Some1uNo · A month ago +1

    This information is gold. Thank you

  • @erikdias9604 · A month ago

    This is what we imagined YouTube would be used for when it appeared: more practical than the forums and newsgroups of the time.
    Excellent work. It's simple and effective, and it opens the way to a lot of exploration and testing (I added a LoRA to see what it gives and... at 5 a.m. I looked up from the screen: oops 😅)

  • @piemoul · A month ago

    I've been missing your live streams, and you gave us this? Impressive

  • @prasanthchowhan · A month ago +1

    Thank you, Matteo, and the unnamed heroes (sponsors) who made this incredible thing possible 🎉

  • @latent-broadcasting · A month ago +1

    This is exactly what I needed! Thanks so much

  • @mhfx · A month ago +1

    ok wow amazing update. Thank you for your hard work

  • @ivanyang2022 · A month ago +2

    You did wonders for the community! Thank you so much!

  • @ranks6670 · A month ago +2

    You always make my day ❤

  • @Injaznito1 · A month ago +1

    Just finished updating my IPAdapter and it wasn't too painful. I did have to use ComfyUI Manager to download the new models, but other than that I didn't hit any walls. Thanks for the tutorial!

    • @electrolab2624 · A month ago

      Ooh, I want to update too! Did you use a tutorial for that?
      So you just update the IPAdapter nodes (using the Manager) and download some new models? I'm a bit confused 😬

    • @electrolab2624 · A month ago +2

      Managed! Using the 'IPAdapter Style & Composition SDXL' node now - it's awesome! Thanks so much, @Matteo! 🤗

  • @optimbro · A month ago

    I have been away from all the AI stuff because of work; I just wanted to thank you for this. It means a lot.

    • @latentvision · A month ago

      you are welcome! have fun (and profit) with it

    • @optimbro · A month ago

      @@latentvision

  • @Showdonttell-hq1dk · A month ago +1

    That's just fantastic, thank you!

  • @ZhuYuxiang · A month ago +1

    Very good video, thank you, it helped me a lot

  • @amineroula · A month ago +3

    I am going to try this as soon as i get home 😮❤

  • @kittikajorns1811 · A month ago +1

    Great work as always.

  • @faxuancai · A month ago

    Fantastic work! Thank you, Matteo!

  • @DataMysterium · A month ago

    Mamma mia, this is incredible! Thank you for making this an open-source project

  • @jccluaviz · A month ago

    Amazing again. Really nice work

  • @renegat552 · A month ago

    This is absolutely amazing!

  • @Rammahkhalid · A month ago +1

    Great update as usual

  • @autonomousreviews2521 · A month ago

    Thank you for this excellent tutorial :)

  • @orion4d727 · A month ago +2

    👍Excellent work

  • @NotThatOlivia · A month ago +6

    OMG!!! 👏👏👏

  • @eddiemauro.design · A month ago

    Thank you for everything. I supported you on PayPal :)

  • @caseyj789456 · A month ago +1

    Mamma mia! Thank you, Matteo ❤

  • @AnthonyDev · A month ago

    Amazing. IPAdapter and ComfyUI = 😍

  • @no-handles · A month ago

    Amazing work!

  • @hamidmohamadzade1920 · A month ago

    you really unlocked the power of image generation

  • @xellostube · A month ago

    Interesting behaviour: I've just discovered that if I mask the image, leaving the center of the image out of the mask, any generation will be influenced by the image everywhere but the center.
    Interesting results.
    (workflow at 9:21)

  • @paulotarso4483 · A month ago +1

    Insane, you're the goat!

  • @RodrigoNishino · A month ago

    Very interesting surely gonna try it. Thx!

    • @RodrigoNishino · A month ago

      Can't seem to find the StyleAndComposition node... am I missing something?

  • @sachacarletti6533 · A month ago

    amazing!! thank you Matteo!

  • @banzai316 · A month ago

    Fantastic!! As always 👏

  • @user-cp8vm5ef2l · A month ago +1

    Thank you!

  • @alessandrorusso583 · A month ago

    Thank you for the time you dedicate to the community. My thanks unfortunately aren't much; I was only able to offer you a coffee, but I hope others will do the same to support your time. 😊
    I also gave you something using the YouTube thank-you button.

    • @latentvision · A month ago +1

      thank you, it would be great if companies that are actually making a lot of money out of this technology would chime in

    • @alessandrorusso583 · A month ago

      @@latentvision Can you mix multiple styles? And does it also work with image to image?

    • @latentvision · A month ago +1

      @@alessandrorusso583 yes and yes-ish. img2img works but the denoise needs to be pretty high. depends on the result you are after

  • @AdwinWijaya · A month ago

    Nice one... I was looking for this feature in Stable Diffusion and just found it in your video.

  • @MrMartinBoo · A month ago

    IPAdapter is just mesmerizing...

  • @alanhk147 · A month ago +1

    You are the best !

  • @david_ce · A month ago

    I love this guy so much
    Great video and thank you for your service
    When’s the next comfy to hero video coming out?

    • @latentvision · A month ago

      I'm prepping it... not sure when but it's in the pipeline

  • @pandalayreal · A month ago +1

    So good!

  • @ashokp9260 · A month ago

    this is crazy good..

  • @dashx3465 · A month ago

    Thank you! I think this will be way better than ControlNet for me: it gives the AI a base to follow but with way more freedom than ControlNet allows. I could never get the SDXL ControlNets to work very well.

  • @pfbeast · A month ago

    👌👌👌❤❤❤ Very nice, high-quality video. Your new nodes really give a lot more options 😀. I'm sorry for my comments on your last video about the change to your node structure. I salute 🫡 your thoughts on open source in the last part of this video.

  • @hmmrm · A month ago +2

    Woow, that's cool

  • @haljordan1575 · A month ago +1

    Since you build these, you're the perfect person to ask: would it ever be possible to combine your previous "character stability and repeatability" workflow with something like the composition adapter or multi-area composition, where you'd feed separately stabilized, posed characters into one image to generate a final artwork in which they interact?

  • @pk.9436 · A month ago

    great work

  • @hayateltelbany · 6 days ago

    that is crazy xD I love it

  • @sanchitwadehra · A month ago +1

    Dhanyavad (thank you)

  • @35wangfeng · A month ago +1

    awesome!!!

  • @ysy69 · A month ago

    Thanks, always and again, Matteo. Should I update via the Manager or Git?

    • @latentvision · A month ago +2

      if you know how... git is always better, but the manager works too
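
For anyone unsure what the git route looks like, here is a minimal shell sketch. The `~/ComfyUI` install path and the `ComfyUI_IPAdapter_plus` clone location are assumptions; adjust them to your own setup.

```shell
# Sketch: update the IPAdapter extension in place via git.
# COMFY_DIR is an assumed default install path; point it at your ComfyUI.
COMFY_DIR="$HOME/ComfyUI"
EXT_DIR="$COMFY_DIR/custom_nodes/ComfyUI_IPAdapter_plus"

if [ -d "$EXT_DIR/.git" ]; then
    # Pull the latest commits for the extension.
    git -C "$EXT_DIR" pull
else
    echo "Extension not found at $EXT_DIR; clone it first or adjust EXT_DIR."
fi

# Restart ComfyUI afterwards so the updated nodes are re-registered.
```

If `git pull` complains about local changes, stash them first with `git stash`; the Manager's "Update" button does essentially the same fetch for you.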

  • @yvann.mp4 · A month ago

    thanks so much

  • @burdenedbyhope · A month ago

    Awesome work!!! By the way, can you suggest a workflow for using the new composition transfer with subjects from another IPAdapter (like we previously achieved with attention masks)?

    • @latentvision · A month ago +1

      I only just released the feature; I still need to play with it. I'll post more in the coming weeks.

    • @burdenedbyhope · A month ago

      @@latentvision you’re the best

  • @flamingwoodz · A month ago

    These updates are great! What do you currently recommend for composition transfer with SD1.5 models?

    • @Darkwing8707 · A month ago +1

      Since 1.5 models don't have a composition unet, tools like attention/latent couple and regional prompter might be your best bet.

  • @afaridoon1104 · A month ago +1

    🤯amazing

  • @liialuuna · A month ago

    great nodes! 🍒🍒🍒

  • @ceegeevibes1335 · A month ago

    amazing!

  • @juanchogarzonmiranda · A month ago

    Thanks Matteo!!

  • @Injaznito1 · A month ago +3

    I'm guessing this only works with SDXL and not 1.5?

  • @BobDoyleMedia · A month ago +1

    GREAT!

  • @the_neural_network · A month ago +1

    Bravo Matteo 💪💪💪💪 now I know that the LeoneMedusa exists too; I'll make a documentary 🤣

  • @aviator4922 · A month ago

    awesome man

  • @user-sy9eq2vp9h · A month ago

    Where would you insert a LoRA loader (character): before or after the IPAdapter in the model chain? I tried some testing but the results were inconclusive, so I wanted to hear your thoughts on this. Thanks... oh, and huge thanks for this marvelous node 😍

  • @WiLDeveD · A month ago +1

    Thanks, dear Matteo. Well done. God bless you, man ❤
    I'm just wondering: could we use the IPAdapter Style and Composition node to put specific sunglasses on a face? A certain face, or a random person?

    • @latentvision · A month ago +1

      that feels like inpainting

    • @WiLDeveD · A month ago

      @@latentvision thanks.

  • @TheCcamera · A month ago

    V2 is so powerful and easy to use! I'm really having fun with it! Thank you Matteo! Style and composition transfer for SDXL is amazing! I wonder what this type of approach could mean for SD3, where the different modalities are even more closely related.

  • @flyashy8397 · 11 days ago

    Hello, thank you very much for this! I'm kind of a noob at this, and it took a long time to make ComfyUI work without errors. Your tutorial was very helpful and wonderful to explore. A noob question: is it possible to use Canny or other ControlNet nodes together with IPAdapter to enforce adherence to the original image? If yes, is there a guide for that?
    Thank you!

    • @flyashy8397 · 10 days ago

      Hey, please ignore my question; I figured out how to add ControlNet Canny & Depth components and plug them into the conditioning. The thing is, it works, but very slowly: ~25 sec becomes ~800 sec. Is there something I may be doing wrong?

  • @digidope · A month ago +2

    Any plans for an SD 1.5 version? XL is just too slow to be used with AnimateDiff.

    • @latentvision · A month ago +3

      SD1.5 has a completely different architecture; I'll give it a go, but it might not be possible without a dedicated model.

  • @rsunghun · A month ago +1

    Holy magical heaven

  • @alessandrorusso583 · A month ago

    Grazie (thank you).

  • @deepwaterbetta2420 · A month ago

    Thank you for sharing. Can you add seed control to the zone control?

    • @latentvision · A month ago

      No, unfortunately; the diffusion process is the same, and one seed takes care of everything (if I understand your question)

  • @ponponych2 · A month ago

    This is super cool! You never cease to amaze.
    How about creating a Style+Face node? Chaining them in sequence increases the time sixfold.
    Or maybe even all of it... Style+Face+Composition. That would simply be a dream.

    • @latentvision · A month ago +1

      if I understand the question, unfortunately the face models are different and you would need to load 2 IPAdapters anyway

  • @Kamerosoul · A month ago

    Amazing, great work. But in my case, when running, it asks for a CLIP Vision model in the IPAdapter Unified Loader node. Any feedback? I added a CLIP Vision model in the next node but it didn't work