Become a Style Transfer Master with ComfyUI and IPAdapter

  • Published: 1 Feb 2025

Comments • 195

  • @goodie2shoes
    @goodie2shoes 9 months ago +34

    Watch TV? Nah. Play games? Nah. Tweak and generate with IP-adapter deep into the night, under the guidance of master Matteo: YES!!

  • @citizenplain
    @citizenplain 9 months ago +52

    Love this quote: "This is not magic and it's definitely not going to change everything. It's just a very powerful tool at your disposal. If you understand how it works, you'll be able to get great images out of it, but don't think that you can send whatever reference and have perfect results with no effort."

    • @AIAngelGallery
      @AIAngelGallery 9 months ago +3

      however, ipadapter is so magical for me ❤

    • @ericbarr734
      @ericbarr734 9 months ago +1

      7:48 for timestamp

  • @Billybuckets
    @Billybuckets 9 months ago +19

    I know you've lamented people leaving your videos before the end, but me leaving this one is just because I wanted to get amped for when I actually have time to watch the whole thing sometime tomorrow. Love the 15-20 min video format, you're still the GOAT.

    • @latentvision
      @latentvision 9 months ago +1

      eheh don't worry I was just kidding. the videos are here, people can watch them for how long or how short they want :D

  • @konstabelpiksel
    @konstabelpiksel 9 months ago +31

    the best thing about matteo is he always starts with a fresh comfyui default setup, not some overwhelming pre-made spaghetti of nodes. makes it easy to understand the process and follow along. thanks matteo!!

  • @nicolasmarnic399
    @nicolasmarnic399 9 months ago +11

    I love your videos, the amount of useful information you give (although it makes me dizzy to see the nodes), the tranquility of your voice and the charisma you exude.
    Thank you very much for the workflows

  • @voxyloids8723
    @voxyloids8723 9 months ago +6

    Wow! I've been messing with it all day and you upload a new masterpiece! 😍 I even spent all day yesterday learning to draw. This is a great usage!

  • @AnotherPlace
    @AnotherPlace 9 months ago +3

    You, sir, are an excellent teacher... So easy to understand, step by step... Please do this most of the time... The difficulty levels are so helpful for a noob like me.

  • @3X3Beastsu
    @3X3Beastsu 1 month ago

    I just wanted to say that this is the most helpful and clearest tutorial on style transfer and ipadapter. Thank you very much!

  • @jccluaviz
    @jccluaviz 9 months ago +1

    Amazing. Just... amazing.
    Thinking about myself now: I spend a lot of time watching videos and trying to mimic those techniques... I wish someday I can reach that kind of mastery.
    Amazing. Just amazing.

  • @bubuububu
    @bubuububu 2 months ago

    You are a digital wizard, Matteo. You explained it so well, so simply, and so satisfyingly at the same time, as if your brain is made of pixels. I did a ton of tests to get to 30% of these conclusions (but yeah, I don't have a solid knowledge foundation in the architecture of Comfy because I didn't get the time to study it properly). Still, I watched a lot of videos, and none of them were as explanatory as this one. So yeah, congrats, you are an excellent explainer with very optimised logic! I'm looking forward to learning from you. I hope the community is grateful and supports you so that you can keep going with this. Thank you and keep up the magic! ❤❤❤

  • @MertEmre-f6z
    @MertEmre-f6z 9 months ago +4

    God bless you, dear Matteo. You are such a precious mind. Thankful for the time you shared with us. Best regards.

  • @piorewrzece
    @piorewrzece 7 months ago

    Thank you for all the hard work

  • @FeyaElena
    @FeyaElena 9 months ago +2

    Thank you so much for the detailed breakdowns of how IPadapter works. We are looking forward to new videos!

  • @tofu1687
    @tofu1687 9 months ago +6

    You're... simply the best

    • @latentvision
      @latentvision 9 months ago +1

      Better than all the rest?!

    • @tofu1687
      @tofu1687 9 months ago

      @@latentvision Let's just say that your explanations lift the veil on the magical side of generation, and that even if we understand we'll have to experiment a bit at random, we still get the feeling of having more control. The other YouTube channels don't go into as much detail, so you can apply their precepts to give it a try, but as it doesn't seem to be based on anything, you might be tempted to give up as soon as you've had a few failures.

  • @alexgilseg
    @alexgilseg 9 months ago +2

    you are a wizard and your generosity is inspiring!

    • @latentvision
      @latentvision 9 months ago

      being inspiring is the greatest recognition I can ask for... thanks

  • @TestMyHomeChannel
    @TestMyHomeChannel 7 months ago

    Great video and thank you for providing the demo/practice workflows. They are the most useful for me and I learn so much from them. Usually, I do not watch videos that do not include their workflows. :)

  • @swannschilling474
    @swannschilling474 7 months ago +1

    Thanks so much, just came back to Comfy and IP Adapters! This is amazing, thanks for taking the time! 😊

  • @robadams2451
    @robadams2451 9 months ago +1

    I have been using your embeds node to try and go the other way, from a photo to a hatched pen drawing... much harder but I got quite close. Being able to save and load embeds is a great touch.

  • @caseyj789456
    @caseyj789456 9 months ago +1

    Thank you Matteo. I will watch this video over and over again to make sure I get it all!
    PS: "you are now the master of style transfer..."! 😅😅😅

  • @Desleiden
    @Desleiden 7 months ago

    Best video I've seen so far. Insta-like at 4 seconds in.
    Amazing, thank you!!

  • @xiaojunwen-nw4xd
    @xiaojunwen-nw4xd 9 months ago

    Brother, thank you for your videos. They are particularly useful: with them I went from knowing nothing to thinking clearly, and it only took a little time. Thank you very much for your efforts.

  • @aamir3d
    @aamir3d 9 months ago +1

    This is such a nice tutorial. Thank you for walking through IPA+Controlnet possibilities.

  • @kallamamran
    @kallamamran 9 months ago

    Thanks!

  • @KristijanKL
    @KristijanKL 9 months ago

    Subscribed. I love these projects built from scratch instead of downloading a template and spending the weekend debugging.

  • @WhySoBroke
    @WhySoBroke 9 months ago

    Maestro Latente delivers another masterclass and entertaining creation!! You may live forever!!!

  • @jakbaustudio
    @jakbaustudio 6 months ago

    I'm just starting. You are very helpful! Big thanks from Poland. Wish you all the best!

  • @Sedtiny
    @Sedtiny 9 months ago

    Sir. You are my lord. Simple and usable and even changeable for the work I want to apply. You are the true engineer my lord

    • @latentvision
      @latentvision 9 months ago

      lol thanks but I'm no lord.

  • @anylatent
    @anylatent 9 months ago +1

    What you talk about flows smoothly, and I gain a lot from it. Thanks.

  • @Grunacho
    @Grunacho 8 months ago

    Thank you for this amazing tutorial. I love to see my own drawings and styles come to life, and how quickly new things are created 🙂

    • @latentvision
      @latentvision 8 months ago +1

      oh is it yours? please tell me more so I can give you proper credit

    • @Grunacho
      @Grunacho 8 months ago

      @@latentvision No worries, these are not my drawings 😅 Sorry for the confusion. I meant my drawings at home, which I'm going to use 😉

  • @alxleiva
    @alxleiva 9 months ago

    Wow you make it look so effortless and I swear this is pretty much MagnificAI haha. Great work!

  • @Lahouel
    @Lahouel 9 months ago +1

    simply the best. I hope your channel soars soon. We've had enough of the fake AI image generation tutors. This is a discipline and it needs a sound teaching method. And thank you for the freebies for the unemployed; not everyone can afford a subscription. God bless you, Matteo.

    • @latentvision
      @latentvision 9 months ago

      you are most welcome! Have fun! and thanks

  • @kademo1594
    @kademo1594 9 months ago +4

    Thx for the work, you're awesome

  • @vivigomez5960
    @vivigomez5960 9 months ago

    I always enjoy watching your videos. You are the master!

  • @Umermehmood-jo6gv
    @Umermehmood-jo6gv 9 months ago

    the way of teaching is very simple and effective. easy to understand 😍😍

  • @gamingthunder6305
    @gamingthunder6305 9 months ago +1

    thank you for explaining how to use the negative image input. i added different images and was never sure what to put there.

  • @mcselcik
    @mcselcik 9 months ago

    You make your work available for everyone! Thank you! You have a good ❤

  • @wascopitch
    @wascopitch 9 months ago

    Thanks!

  • @moviecartoonworld4459
    @moviecartoonworld4459 9 months ago

    This is a fantastic video that seems to teach legendary magic. Thank you always.

  • @andrewostrovsky4804
    @andrewostrovsky4804 9 months ago +1

    Thank you for another great tutorial!
    The models and many modules are mostly black boxes for the community and any insight on their internal workings is very helpful. Such clues as "SDXL prefers CN strength and end_percent lower than SD1.5" or "bleeding of undesired elements can be counterbalanced with noisy negative image" are invaluable. Any insights on behavior of Unet, Clip, Vae, latents, save us hours of trials and errors.
    Is it possible to control the scale of model application better than with the regular img2img denoise? Namely, is it possible to force a model to preserve large scale structures and change the textures only or vice versa? IPAdapter appears to be working along these lines already but separate feature scale control would be of additional help. Any insights on how various types of noise affect the diffusion would be great. Looking forward to more of your videos.

    • @CL-rm6sb
      @CL-rm6sb 7 months ago

      What you're looking for is likely the start and end step settings of KSampler (Advanced). Pull up one of the refiner example workflows for some inspiration on how to do this in a non-refiner based fashion. The key concept here is keeping and reusing noise but sampling it differently towards the end. Along with that consider creative use of masks and differential diffusion - since the entire point of DD is using the true power of masks for variable denoising (masks are no longer binary).
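
      To make the start/end-step idea concrete, here is a minimal sketch in ComfyUI's API (JSON) format: two chained KSamplerAdvanced nodes split one 30-step schedule so the late steps can be sampled differently. The node ids and the upstream connections ("model", "pos", "neg", "latent") are placeholders, not anything from the video.

      ```python
      # Two-stage sampling: stage 1 stops early and keeps its leftover noise,
      # stage 2 resumes from that noisy latent without adding fresh noise.
      split_sampling = {
          "sampler_early": {
              "class_type": "KSamplerAdvanced",
              "inputs": {
                  "model": ["model", 0], "positive": ["pos", 0],
                  "negative": ["neg", 0], "latent_image": ["latent", 0],
                  "add_noise": "enable", "noise_seed": 42, "steps": 30,
                  "cfg": 7.0, "sampler_name": "dpmpp_2m", "scheduler": "karras",
                  "start_at_step": 0, "end_at_step": 20,
                  "return_with_leftover_noise": "enable",
              },
          },
          "sampler_late": {
              "class_type": "KSamplerAdvanced",
              "inputs": {
                  "model": ["model", 0], "positive": ["pos", 0],
                  "negative": ["neg", 0],
                  "latent_image": ["sampler_early", 0],  # reuse the noisy latent
                  "add_noise": "disable", "noise_seed": 42, "steps": 30,
                  "cfg": 7.0, "sampler_name": "dpmpp_2m", "scheduler": "karras",
                  "start_at_step": 20, "end_at_step": 30,
                  "return_with_leftover_noise": "disable",
              },
          },
      }
      ```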

  • @jorgeluismontoyasolis9800
    @jorgeluismontoyasolis9800 5 months ago

    Matteo, you are amazing. Thank you so much!

  • @janudece-music
    @janudece-music 6 months ago

    Thank you.
    A very good lecture video, easy to understand.
    You made it well.
    I watch and learn a lot.
    I will always support you.

  • @StudioOCOMA2D3D
    @StudioOCOMA2D3D 9 months ago

    Incredible work as usual. Love it!!!

  • @divye.ruhela
    @divye.ruhela 7 months ago +3

    "This is not magic."
    But it sure helluva feels like it, boss!

  • @joonienyc
    @joonienyc 9 months ago +1

    I can see this being very useful in some kinds of work... very impressive

  • @hamidmohamadzade1920
    @hamidmohamadzade1920 9 months ago +1

    ip adapter is real magic

  • @AI-Rogue
    @AI-Rogue 9 months ago

    If I had money, I would be throwing it at you, but sadly I'm broke. Great Video!!!

  • @styrke9272
    @styrke9272 9 months ago

    i really love your content, very informative thanks!!

  • @barcob5558
    @barcob5558 7 months ago

    Excellent tools, thanks for sharing

  • @Kentel_AI
    @Kentel_AI 9 months ago +1

    thanks again for sharing.

  • @pixelpaws-ai
    @pixelpaws-ai 3 months ago

    Amazing vid. Thank you.

  • @Mika43344
    @Mika43344 9 months ago

    great video as always!💪

  • @sekkemann
    @sekkemann 9 months ago +2

    Thank you kind sir!

  • @DDBM2023
    @DDBM2023 9 months ago

    Thank you, Matteo, your videos are always helpful. One question: what is the use of "Prep Image For ClipVision"? Just to make the output image sharper?

    • @latentvision
      @latentvision 9 months ago

      it tries to use the best scaling algorithm possible to catch as much detail as possible. on top you can add sharpening
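
      As a concrete illustration, a sketch of that node in ComfyUI's API (JSON) format. The input names (interpolation, crop_position, sharpening) follow the PrepImageForClipVision node from ComfyUI_IPAdapter_plus; the node id and the "image" connection are placeholders.

      ```python
      # Prepare a reference image for the CLIP vision encoder: high-quality
      # downscale to the encoder's square input, crop choice, optional sharpening.
      prep_image = {
          "prep": {
              "class_type": "PrepImageForClipVision",
              "inputs": {
                  "image": ["load_image", 0],  # any IMAGE output
                  "interpolation": "LANCZOS",  # best general-purpose scaler
                  "crop_position": "center",   # which region survives the crop
                  "sharpening": 0.1,           # 0.0 disables the extra sharpening
              },
          },
      }
      ```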

    • @DDBM2023
      @DDBM2023 9 months ago

      @@latentvision Thank you so much!

  • @YoMeiOhMei
    @YoMeiOhMei 9 months ago

    Thank you for these amazing tools Matteo! I was wondering if maybe you have some tips on how to best transfer an art-style to a subject that the checkpoint has no knowledge of.
    I have some 3D renders of creatures that I would like to turn into an illustration. So far sending the 3D render as the latent image and a style reference through ipadapter along with some style descriptions in the prompt was "ok". However unless I keep the denoise extremely low the features of the creatures (especially the faces) change drastically. I already tried turning the 3D render into lineart/depth, testing several controlnets...similar to what you did with the castle. Unfortunately nothing really did the trick. Either the design of the creatures changes or I get hardly any of the style into the picture.

    • @latentvision
      @latentvision 9 months ago

      the checkpoint is actually very important, try many of them, it makes a huge difference. Regarding your specific question it's hard to say without checking the actual material

  • @JuanS_DuodecimStudio
    @JuanS_DuodecimStudio 7 months ago

    Matteo, I don't know if IPAdapter can already do this, but it would be cool if CR Overlay Text could take an IPAdapter as an input somehow, or if there were another text node able to receive an IPAdapter input.

  • @gkrizos
    @gkrizos 1 month ago

    Amazing tutorial, though I cannot find the working set of models: they are not available in the Model Manager, and the ones found through search seem incompatible. It might be a good idea to store them alongside the photos/workflows. For now I have to be creative =) Otherwise, great stuff, thanks so much.

  • @hmmyaa7867
    @hmmyaa7867 2 months ago

    damn, i wish i had money to support you lol. Thank you so much for the wonderful tutorial

  • @JDoyleJokes
    @JDoyleJokes 9 months ago +1

    I'd love to give this a shot but I can't seem to find a way to install the t2i-adapter-sdxl for comfyui, I'd greatly appreciate any help I could get. Thanks!

  • @no-handles
    @no-handles 9 months ago

    I really like the flow of the video. The example at the end with one IPAdapter and two ControlNets; would using InstantID be better for portraits?

    • @latentvision
      @latentvision 9 months ago +2

      face models don't generally like other conditioning on top, but yeah it is possible

  • @drframemedia
    @drframemedia 9 months ago

    Hello, Matteo
    I was wondering what lineart ControlNet you used for SDXL with the sketch images.
    Keep up the great work! It's super helpful to the whole community!

    • @latentvision
      @latentvision 9 months ago

      it's the controlnet lora by stability ai, but you can check other models if they are available

  • @zerorusher
    @zerorusher 9 months ago

    These videos are amazing!
    What kind of hardware are you using? I'm considering building a machine for SD and small LLMs, but my budget is low.
    Would a 3060 12GB be good enough to start?

    • @latentvision
      @latentvision 9 months ago +1

      I have a 4090. I had a 3060 before... to start, yeah should be enough.

    • @zerorusher
      @zerorusher 9 months ago

      @@latentvision thanks for the reply man!

  • @elektrik7918
    @elektrik7918 5 months ago

    16:02
    When loading the graph, the following node types were not found:
    DepthAnythingPreprocessor
    Nodes that have failed to load will show as red on the graph.
    What should I do?

    • @van_grimm9007
      @van_grimm9007 3 months ago

      install ComfyUI's ControlNet Auxiliary Preprocessors from the manager
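
      For anyone who prefers a manual install, a hypothetical sketch of what the manager does; the repository is the widely used Fannovel16/comfyui_controlnet_aux pack (which provides DepthAnythingPreprocessor), and the ComfyUI install path is an assumption.

      ```python
      # Clone the ControlNet Auxiliary Preprocessors pack into custom_nodes
      # and install its requirements, then restart ComfyUI.
      import subprocess
      from pathlib import Path

      COMFY = Path.home() / "ComfyUI"  # assumption: adjust to your install
      dest = COMFY / "custom_nodes" / "comfyui_controlnet_aux"

      subprocess.run(
          ["git", "clone",
           "https://github.com/Fannovel16/comfyui_controlnet_aux", str(dest)],
          check=True,
      )
      subprocess.run(
          ["pip", "install", "-r", str(dest / "requirements.txt")],
          check=True,
      )
      ```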

  • @wlaznik
    @wlaznik 9 months ago

    Great tutorial! You're really doing a fantastic job! Thanks a lot! Just tell me, please, where can I find the xl-lineart-fp16 model that you're using as a ControlNet model?

    • @wlaznik
      @wlaznik 9 months ago

      I found it 😉

    • @latentvision
      @latentvision 9 months ago +1

      linked in the description!

    • @wlaznik
      @wlaznik 9 months ago

      @@latentvision Thanks 🙂

  • @IshaTiwari-jm6vj
    @IshaTiwari-jm6vj 2 months ago

    Hey, what's the style reference image you used for the first one? It's adorable and I would love to use it myself.

  • @bcgd5059
    @bcgd5059 2 months ago

    Thumbs up! Do you have a tutorial on how to install the IPAdapter?

    • @latentvision
      @latentvision 2 months ago

      I don't do installation tutorials, sorry ;)

  • @nrpacb
    @nrpacb 9 months ago +1

    I'm learning from you, God.

    • @latentvision
      @latentvision 9 months ago +1

      goat sacrifices only on Friday

  • @AnimAI-Lora
    @AnimAI-Lora 9 months ago

    You're great, Matteo ❤

  • @juliangonzalez5104
    @juliangonzalez5104 9 months ago +1

    Hello Matteo, thank you very much for your videos, they are really good. I only have a little problem with this tutorial: I get an error in the KSampler. When I delete the adapter it creates the image (at the beginning it generated 5 images), but now this error appears. I wanted to know if you know any solution for it.
    Error occurred when executing KSampler:
    Expected query, key, and value to have the same dtype, but got query.dtype: struct c10::Half key.dtype: float and value.dtype: float instead.
    File "C:\artificial intelligence\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)

    • @latentvision
      @latentvision 9 months ago

      try to execute comfy with --force-fp16 option
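
      A minimal PyTorch sketch of what that error means: the attention call requires query, key, and value to share one dtype, which is why forcing everything to fp16 fixes it. The shapes here are arbitrary placeholders.

      ```python
      import torch
      import torch.nn.functional as F

      q = torch.randn(1, 8, 77, 64, dtype=torch.half)   # fp16, as with --force-fp16
      k = torch.randn(1, 8, 77, 64, dtype=torch.float)  # fp32 leaking in
      v = torch.randn(1, 8, 77, 64, dtype=torch.float)

      try:
          F.scaled_dot_product_attention(q, k, v)       # raises: dtypes must match
      except RuntimeError as err:
          print(err)

      out = F.scaled_dot_product_attention(q, k.half(), v.half())  # works
      ```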

  • @fadysaber-b8p
    @fadysaber-b8p 9 months ago

    @latentvision, why did you change the name of the weight type option from "Style transfer (SDXL)" to just "Style transfer"? Can we now use style transfer with both SDXL and SD1.5? 🤔

    • @latentvision
      @latentvision 9 months ago +4

      yes, you can transfer style (and composition) in SD1.5 too, even though it's not as effective. The style+composition node is only for SDXL, but I'm working on it.

    • @fadysaber-b8p
      @fadysaber-b8p 9 months ago

      @@latentvision thanks...👍

  • @AIPixelFusion
    @AIPixelFusion 9 months ago +1

    Epic video

  • @DesignDesigns
    @DesignDesigns 9 months ago +1

    You are a star....

  • @godofdream9112
    @godofdream9112 8 months ago +2

    it's not about the sketch, it's about colour control... many people want to make comics but can't draw... with ControlNet, IPAdapter and SD anyone can draw anything... But colour control, for example dress colour, house colour, overall colour across a multi-shot scene, that's the problem.

  • @xturru
    @xturru 8 months ago +1

    Error occurred when executing IPAdapterUnifiedLoader:
    ClipVision model not found.

    • @content1
      @content1 5 months ago

      I have the same problem and can't find a solution ;(
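
      For this "ClipVision model not found" error, the IPAdapter unified loader expects the CLIP vision encoder under a specific file name in models/clip_vision. A hypothetical download sketch: the repo id and file path follow the h94/IP-Adapter Hugging Face repo, the target file name is the one the ComfyUI_IPAdapter_plus README asks for, and the ComfyUI path is an assumption.

      ```python
      # Fetch the ViT-H image encoder and store it under the name the
      # IPAdapter unified loader looks for.
      from pathlib import Path
      from shutil import copyfile
      from huggingface_hub import hf_hub_download

      clip_vision = Path.home() / "ComfyUI" / "models" / "clip_vision"
      clip_vision.mkdir(parents=True, exist_ok=True)

      src = hf_hub_download("h94/IP-Adapter",
                            "models/image_encoder/model.safetensors")
      copyfile(src, clip_vision / "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors")
      ```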

  • @build.aiagents
    @build.aiagents 9 months ago +1

    Phenomenal

  • @rajibpaul6271
    @rajibpaul6271 8 months ago

    All awesome stuff; I'm trying to learn from you... but with this tiger example the network breaks at the KSampler step and I have no clue why. Is there a conflict between nodes due to ComfyUI updates? Please help.

  • @kachuncheng-s1v
    @kachuncheng-s1v 9 months ago

    Thank you master !

  • @godorox
    @godorox 9 months ago

    First of all, you are the best and your tutorial videos are great. I tried to download "t2i-adapter-lineart-sdxl-1.0", but in the download area there are two pytorch models; where can I find the right one?
    Edit: I found it in "Install Models"

  • @TheGalacticIndian
    @TheGalacticIndian 9 months ago +1

    WOW!!😍😍

  • @derrickpang4304
    @derrickpang4304 4 months ago

    This is really cool, thank you. I am able to duplicate similar results with SD1.5, but when I try XL models I get a tiger with the iceberg pattern rather than an ice tiger. What am I doing wrong? Thanks.

    • @latentvision
      @latentvision 4 months ago

      1) convert the tiger to grayscale before inpainting, that helps 2) try a different sdxl model. you need a generic one
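
      A minimal Pillow sketch of point 1, with placeholder file names:

      ```python
      # Drop the color information from the source image before inpainting.
      from PIL import Image

      img = Image.open("tiger.png").convert("L")   # to grayscale
      img.convert("RGB").save("tiger_gray.png")    # back to 3 channels for ComfyUI
      ```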

  • @ryanontheinside
    @ryanontheinside 8 months ago

    GOAT

  • @dimitrimitropoulos
    @dimitrimitropoulos 9 months ago +1

    incredible as always. ring the subscribe bell, people!

  • @SuperCinema4d
    @SuperCinema4d 6 months ago

    Error occurred when executing KSampler (Efficient):
    'NoneType' object has no attribute 'shape'

  • @ChinaFilm-v6f
    @ChinaFilm-v6f 9 months ago

    @matteo, I am following this and while doing the inpainting part I get a "AttributeError: 'NoneType' object has no attribute 'shape'" error coming from the KSampler node, I can't figure out why it's happening. Can you please help?

    • @latentvision
      @latentvision 9 months ago

      you are probably using the wrong controlnet

  • @superhumandose
    @superhumandose 9 months ago

    Will you release a ComfyUI course in the future? I love your workflows but I find the software daunting

  • @HaiNguyen-qt6vc
    @HaiNguyen-qt6vc 5 months ago

    Hello, do you have any workflow that goes from an image to a hand drawing? Thanks

  • @kirill99
    @kirill99 2 days ago

    the best!

  • @leggettmrt2965
    @leggettmrt2965 8 months ago

    The T2I lineart-fp16 safetensor does not appear in the "Load ControlNet" list; all the rest of the T2I models are listed except the lineart safetensor. I tried the sketch and style safetensors, which worked fairly well. I am a newbie and need your help. What am I doing wrong? ComfyUI is fully updated.

  • @lepontRL
    @lepontRL 7 months ago

    Error occurred when executing Canny:
    shape '[1, 1, 1836, 960]' is invalid for input of size 1759040

  • @roberamitiku5844
    @roberamitiku5844 9 months ago

    This is so cool, but I have a question: what if I want to do the reverse of the coloring book, from a normal image to line art / a coloring book? Do I just swap the images? Thank you.

    • @latentvision
      @latentvision 9 months ago +1

      yes, works very well the other way around too. be very aggressive with the text prompt in saying exactly what you want. Also you might NOT want to send the original image into the ksampler latent to avoid getting colors.
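
      A sketch of that last point in ComfyUI's API (JSON) format: start the sampler from an empty latent instead of the VAE-encoded photo so none of the original colors survive. The node id and resolution are placeholders.

      ```python
      # Feed the KSampler's latent_image input from EmptyLatentImage
      # (["latent", 0]) rather than from a VAEEncode of the source photo.
      empty_latent = {
          "latent": {
              "class_type": "EmptyLatentImage",
              "inputs": {"width": 1024, "height": 1024, "batch_size": 1},
          },
      }
      ```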

  • @Avalon1951
    @Avalon1951 7 months ago

    I'm missing the PLUS (high strength) model; if I use the manager, what model am I looking for??

  • @anton5381
    @anton5381 4 months ago

    Thanks for the video! But I can't make it work... I do everything like you but get "Error(s) in loading state_dict for Resampler:
    size mismatch for proj_in.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1664])." Could you please help me?

  • @johnwhittaker-tx1pd
    @johnwhittaker-tx1pd 8 months ago

    I can't get anything except the error "mat1 and mat2 shapes cannot be multiplied". Even though I downloaded the models, put them in the correct directories and have everything named properly, the Load ControlNet Model nodes will not recognize them / allow me to choose them.

    • @latentvision
      @latentvision 8 months ago

      you are probably using the wrong control net

  • @TheRMartz12
    @TheRMartz12 8 months ago

    I can't for the life of me find the "depth anything vit l14" in the preprocessors; could you tell me where you got it, please?

  • @MikevomMars
    @MikevomMars 9 months ago

    Where to download the "ipadapter-xl-lineart-fp16.safetensors" used in the setup? EDIT: Got it - used "Install Models" in the ComfyUI Manager.

  • @SuperCinema4d
    @SuperCinema4d 6 months ago

    Good workflow!!! But the inpaint model causes a KSampler error; maybe you know how to fix this?

  • @juridjatsenko2013
    @juridjatsenko2013 7 days ago

    I'm getting an error from IPAdapterUnifiedLoader:
    IPAdapter model not found.

  • @Asyouwere
    @Asyouwere 9 months ago

    Now, if you reverse the process, could we make a useful Coloring Book drawing, with thick(er) lines?

    • @latentvision
      @latentvision 9 months ago +1

      you can very easily make coloring books, but calibrating the thickness of the line would not be trivial

  • @jymcaballero5748
    @jymcaballero5748 6 months ago

    I have seen lots of these demos; why don't you add an automatic description of the base image, so you don't even need to write a prompt?

  • @denle637
    @denle637 9 months ago

    Amazing. Just... amazing.

  • @shobley
    @shobley 3 months ago

    What kind of hardware is this running on... or is it edited for time?