Infinite Variations with ComfyUI

  • Published: 20 Dec 2024

Comments • 144

  • @zef3k
    @zef3k 1 year ago +45

    dude unsampler is sick! I love that you're showing how some of these other nodes work and not just ipadapter, thanks!

  • @knabbi
    @knabbi 1 year ago +11

    "...and now she is pissed".
    Never had a better introduction for another useful comfyui node😂 Appreciate your work and your entertaining videos. I like your effective and pragmatic way of explanation.
    Thanks.

  • @BoolitMagnet
    @BoolitMagnet 1 year ago +27

    Wow. Another great video, so much info and all clearly explained. Your mastery of ComfyUI is impressive.

  • @Dunc4n1d4h0
    @Dunc4n1d4h0 1 year ago +5

    Hahaha, "and now she's pissed". I would never miss a lesson with such a teacher 🙂 Every time I watch something from you I have new ideas, thank you.

  • @Paulo-ut1li
    @Paulo-ut1li 11 months ago +3

    Saying this channel is the best ComfyUI resource on YT is an understatement. Thank you Matteo, please keep up the amazing work!

  • @WallyMahar
    @WallyMahar 3 months ago +1

    A WORKFLOW I DOWNLOADED AND IT ACTUALLY WORKS! OMFG!
    You don't understand how rare that is. As a pro artist but a Python noob, I have a list at least a screen long of workflows that just don't work, and I don't understand why. I have spent hours and hours and usually give up. Thank you.

  • @ronnykhalil
    @ronnykhalil 1 year ago +4

    Unsampler is an insane option; I can only begin to imagine its potential. Thanks for shining a light on all these unsung heroes. The channel remains my favorite by a long shot

  • @latent-broadcasting
    @latent-broadcasting 1 year ago +1

    The unsampler blew my mind! It's amazing all the possibilities available with ComfyUI. Thanks for the tutorial!

  • @pedxing
    @pedxing 1 year ago +16

    absolutely love watching these work sessions. ❤‍🔥💡💪

  • @johnmcaleer6917
    @johnmcaleer6917 1 year ago

    'and now she's pissed' cracked me up.....Your vids continue to impress and your knowledge of such a new subject is amazing....Love your explanations and subject choices..Wonderful stuff again..

  • @TheDocPixel
    @TheDocPixel 1 year ago +2

    You have read my mind! I've been searching for more information, usage videos and tutorials for all of these nodes that are bundled in packages that other YTers suggest installing but use only one or two of. Please continue with these easy, to-the-point videos for advanced users. WE NEED THEM!

  • @chadhamlet
    @chadhamlet 1 year ago +3

    Wow! Using the noise in this fashion really makes it so much nicer than image to image. I've done some really great enhancements of some old 1.5 generations that kept the look of the old but dramatically increased the details with the newer SDXL models. I've never had an upscale do something this nice and not change the image. Can't wait to see what you've got planned next. Your videos are amazing! I'd love to see you tackle a workflow geared towards reusing a character (face, clothes) in multiple poses!

  • @ooiirraa
    @ooiirraa 1 year ago +6

    Dear Matteo, I became your absolute fan 🎉 Your videos and projects (IPAdapter) are generous and abundant. Everything you make is valuable yet understandable at the same time. Thank you very much, please keep creating ❤

  • @ProzacgodAI
    @ProzacgodAI 1 year ago +1

    I was playing with the unsampler and went (20 steps total): unsampler (5 steps) -> advanced sampler (steps 5 -> 10) -> advanced sampler + add noise (steps 10 -> 20), and it produces really good variations. I can even supply a new prompt at the last step and it's really, really good at integrating it while keeping consistency

    • @latentvision
      @latentvision  1 year ago +2

      I guess this is a situation like: give a man a fish and you feed him for a day. Teach him how to fish and you feed him for a lifetime 😄

    • @ProzacgodAI
      @ProzacgodAI 1 year ago +1

      ​@@latentvision Give the man the seed for the fish image and he'll have variations for a lifetime...
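
The staged schedule described in this thread boils down to simple step bookkeeping. A minimal sketch, assuming a 20-step run split as in the comment (`plan_stages` is a hypothetical helper, not a ComfyUI node; the real work happens in the Unsampler and KSampler Advanced nodes):

```python
def plan_stages(total_steps: int = 20) -> list[tuple[str, int, int]]:
    """Sketch of the commenter's three-stage variation schedule.

    Stage 1: the Unsampler walks the image part-way back toward noise.
    Stage 2: a KSampler Advanced re-denoises over the middle steps.
    Stage 3: a second KSampler Advanced (add_noise enabled) finishes,
             optionally with a brand-new prompt.
    """
    q = total_steps // 4  # 5 when total_steps == 20
    return [
        ("unsampler", 0, q),                              # unsample 5 steps
        ("ksampler_advanced", q, 2 * q),                  # resample steps 5 -> 10
        ("ksampler_advanced+noise", 2 * q, total_steps),  # steps 10 -> 20
    ]

for name, start, end in plan_stages(20):
    print(f"{name}: steps {start} -> {end}")
```

Chaining samplers over disjoint step ranges, with noise re-added only in the last stage, is presumably what lets a new prompt integrate without destroying consistency.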

  • @antiquechrono
    @antiquechrono 1 year ago +1

    Short, to the point, and absolutely jam packed with information. Great video.

  • @nawrasryhan
    @nawrasryhan 1 year ago

    The best ComfyUI tutorials, hands down. The amount of info, small tips, and real experience you show in these videos is unmatched and highly appreciated. Keep it up, and of course thanks for sharing!

  • @michaelbayes802
    @michaelbayes802 1 year ago +2

    Wow! You could have made 10 videos with this content. Respect

  • @1E9L9I7J1A6
    @1E9L9I7J1A6 2 months ago

    Not much to say other than thank you very much, great videos. I'm about to explore your whole channel; you definitely just won a new regular viewer

  • @eagleeyedjoe0075
    @eagleeyedjoe0075 1 year ago +1

    These videos are fantastic, I'm learning many new techniques and you've introduced me to loads of new nodes. Can't wait to see the new IPAdapter you mentioned.

  • @dck7048
    @dck7048 1 year ago +1

    These videos are so consistently useful, thanks for taking the time! Even on subjects that you'd think are "solved" like image variations, the fine control can be a real asset when you're looking to generate something specific.

  • @Renzsu
    @Renzsu 1 year ago +1

    Love your videos man, they're a joy to watch. And I like how you keep your examples relatively simple and straight to the point, no unnecessary fluff :)

  • @64kernel
    @64kernel 1 year ago +2

    Applying this in my workflow immediately. Very useful. Thanks!

  • @crow-mag4827
    @crow-mag4827 1 year ago

    Found you after the release of ipadapter, your skills in comfy are amazing. Watching all your videos.

  • @uk3dcom
    @uk3dcom 1 year ago +1

    So many useful nuggets of information. Taking control of the generative image is fascinating. Thank you.❤

  • @TheJAM_Sr
    @TheJAM_Sr 1 year ago

    Wow, great demonstration!
    I have been playing around with combining noises for a bit now and I still learned a lot!
    I'm going to take what I've learned here and play around with all the different types of noise.

  • @abdelkaioumbouaicha
    @abdelkaioumbouaicha 1 year ago +1

    📝 Summary of Key Points:
    The speaker discusses various techniques for creating small variations on an image using the SDXL workflow. They suggest adding low-weight tokens or random numbers to slightly change the image.
    The concept of "horror negatives" is introduced, where negative prompts with words like "horror" or "zombie" are used to achieve a clean result.
    Conditioning concat is explained as a way to change the style or details of an image while keeping the same composition. Conditioning combine is also discussed for achieving more mutation in the image.
    The use of IP adapter is explored to guide the composition of the image, using different reference images to achieve different styles.
    The unsampler node from the ComfyUI_Noise extension is shown as a technique to modify an existing image by removing noise until it reaches the original noise at the first step of generation.
    Creating a batch of images with little differences is demonstrated using fixed base noise and the slerp latent node. The strength of the noise can be adjusted, and a new batch of similar images can be generated by changing the seed in the noise generator.
    💡 Additional Insights and Observations:
    💬 "There is no one-size-fits-all solution" - The speaker emphasizes that different techniques may work better for different images and prompts.
    📊 No specific data or statistics were mentioned in the video.
    🌐 The video provides practical examples and demonstrations to support the techniques discussed.
    📣 Concluding Remarks:
    The video provides a comprehensive overview of techniques for creating image variations using the SDXL workflow. From simple tricks like adding tokens or random numbers to more advanced techniques like conditioning concat and using IP adapter, the speaker demonstrates practical examples and offers valuable insights for achieving desired image variations.
    Generated using Talkbud (Browser Extension)
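
The slerp-latent technique the summary mentions (blending a fixed base noise with per-seed variation noise) can be sketched in NumPy. This is an illustration of spherical interpolation between two latent-shaped noise tensors, not the extension's actual code:

```python
import numpy as np

def slerp(base: np.ndarray, variation: np.ndarray, strength: float) -> np.ndarray:
    """Spherical interpolation from base noise toward variation noise.

    strength=0.0 returns the base noise unchanged, strength=1.0 returns
    the variation noise; small values yield subtle image variations.
    """
    a = base.flatten() / np.linalg.norm(base)
    b = variation.flatten() / np.linalg.norm(variation)
    omega = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))  # angle between the noises
    so = np.sin(omega)
    if so < 1e-8:  # nearly parallel: plain lerp is numerically safer
        return (1.0 - strength) * base + strength * variation
    out = (np.sin((1.0 - strength) * omega) / so) * base.flatten() \
        + (np.sin(strength * omega) / so) * variation.flatten()
    return out.reshape(base.shape)

# fixed seed = the shared "base" noise of the batch; per-image seeds vary
base = np.random.default_rng(0).standard_normal((4, 64, 64))
variation = np.random.default_rng(1).standard_normal((4, 64, 64))
subtle = slerp(base, variation, 0.1)  # low strength -> images stay close to the batch
```

Changing the variation seed while keeping the base seed fixed gives a new batch of similar images, which is exactly the behavior the summary describes.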

  • @Enricii
    @Enricii 1 year ago

    CRAZY!
    My favourite one was the unsampler method. I think I need to play with it very soon!
    Thanks again for everything you do!

  • @terrorcuda1832
    @terrorcuda1832 1 year ago +1

    That was a fantastic video. I want to leave work and go home and experiment.

  • @BuckleyandAugustin
    @BuckleyandAugustin 3 months ago

    I agree with everyone here, your content is so valuable. Thank you for all you do, Matteo!

  • @svenhinrichs4072
    @svenhinrichs4072 1 year ago

    Thanks a lot. Your tutorials are great! Perfectly explained, going into details that are really hard to find out without the technical insights. Keep up the great work!

  • @vizsumit
    @vizsumit 1 year ago

    You are making me fall in love with ComfyUI

  • @morphidevtalk
    @morphidevtalk 10 months ago +1

    Mind-blowing! Ty for the workflow! I'll try it for myself

  • @LucasSavelli-e3w
    @LucasSavelli-e3w 1 year ago

    Matteo, YOU are the god! Thank you so much for sharing all your knowledge with us!

  • @world4ai
    @world4ai 1 year ago +2

    I have to say that so far I found all of your videos really useful. I would like some AnimateDiff tutorials.

  • @moviecartoonworld4459
    @moviecartoonworld4459 1 year ago +1

    I am always grateful to hear amazing and moving lectures.

  • @pedxing
    @pedxing 1 year ago

    REALLY looking forward to the seeing your process for the logo animation as well!

  • @HisWorkman
    @HisWorkman 1 year ago +1

    As always this was a fantastic tutorial. Thank you!

  • @Ulayo
    @Ulayo 1 year ago +2

    This video is amazing! I learned so much today! 👍

  • @tonikunec
    @tonikunec 1 year ago

    That's pretty amazing! I am kinda new to all this AI thing and still learning a lot, but this video really opened my eyes on how to get started and make even more amazing stuff. Keep those videos coming as it seems you really know your stuff! Subscribed!

  • @tiporight
    @tiporight 11 months ago

    Excellent. Thank you for sharing these types of tutorials

  • @WhySoBroke
    @WhySoBroke 1 year ago +1

    You have my full attention Maestro Latente!!! Please create a discord community!! ❤️🇲🇽❤️

  • @ChandreshJoshi
    @ChandreshJoshi 1 year ago

    Your approach is very creative and very easy to understand, thanks for the video

  • @hakandurgut
    @hakandurgut 1 year ago +4

    In the last 16 mins, I have learned more than I had in the last months... great video, great knowledge... are you an AI scientist?

  • @MannyGonzalez
    @MannyGonzalez 9 months ago

    Absolute master class. Thanks for these tutorials.

  • @roktecha
    @roktecha 1 year ago +1

    These videos are excellent! Thank you

  • @ysy69
    @ysy69 1 year ago +2

    Incredible. Thank you

  • @christianblinde
    @christianblinde 1 year ago

    Great examples with good explanations

  • @human-error
    @human-error 9 months ago +1

    Amazing as usual, Matteo. Thank you!

  • @MikevomMars
    @MikevomMars 7 months ago

    Just adding a number to the prompt to get a variation is true ZEN - simple but effective 😊
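
The trick this comment refers to is just appending a throwaway token to the prompt so the conditioning shifts slightly while the composition survives. A minimal sketch of the idea (the `vary` helper is hypothetical, not a ComfyUI node):

```python
import random

def vary(prompt: str, seed: int) -> str:
    """Append a random number as a low-impact 'junk' token.

    The extra token barely changes the conditioning, so the composition
    stays put while details shift, giving a cheap per-seed variation.
    """
    rng = random.Random(seed)  # seeded for reproducible variations
    return f"{prompt}, {rng.randint(0, 99999)}"

print(vary("a portrait of an old fisherman, oil painting", 42))
```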

  • @koalanation
    @koalanation 1 year ago +1

    This is a great essentials video! Thanks Matteo. Not sure if everyone thinks inpainting is lame, though 😂😂😂

  •  1 year ago

    You are amazing.. This is the best video I've ever seen...

  • @steveyy3567
    @steveyy3567 7 months ago

    mind blowing, great job!

  • @Bartskol
    @Bartskol 1 year ago +1

    This video is gold.

  • @fgmanfredini
    @fgmanfredini 1 year ago +1

    Very nice, really! Very useful, thank you. If I can give you a suggestion, it would be for a video about dynamic composition using automatic masks. Example: generate a subject, cut it out with automatic masking (SAM?), paste it over a generated background, do a second pass to fix the composition, and then generate variations of the background for the same subject, or vice versa.

  • @TimVerweij
    @TimVerweij 1 year ago

    So much useful information! Thanks!

  • @j_shelby_damnwird
    @j_shelby_damnwird 1 year ago +1

    This and Scott's are the coolest AI art channels. Kudos! Are these workflows available somewhere for reverse engineering? I tried to follow along but it's hard to keep track of everything that's going on.

  • @JoeSim8s
    @JoeSim8s 8 months ago +1

    Pure gold! Thank you!

  • @MicheleBrugiolo
    @MicheleBrugiolo 1 year ago +1

    Thank you, thank you, thank you!

  • @bgtubber
    @bgtubber 5 months ago

    Fascinating! Is this something like the Noise Inversion feature in A1111?

  • @chornsokun
    @chornsokun 5 months ago

    Thank you Matteo for the great content. Could you advise which node/extension is used in the clip to convert noise into an input?

    • @latentvision
      @latentvision  5 months ago

      you mean the unsampler? it's comfyui_noise

    • @chornsokun
      @chornsokun 5 months ago

      @@latentvision the step at ruclips.net/video/Ev44xkbnbeQ/видео.htmlsi=cWiy-uDpeQelusMM&t=58 and the noise_seed node at 1:00; I can't find it in base Comfy

    • @latentvision
      @latentvision  5 months ago

      @@chornsokun that's just a primitive. convert the seed to an input and you can connect a primitive to it

  • @ai-roman-ai
    @ai-roman-ai 1 year ago

    I love your videos, they are the best! I want to generate keyframes and then interpolate them to create a realistic video in the end without any time constraints.
    Can you advise me on how to apply your approaches to create the consistent frames you show in this video and others? For example, a dog plays with a ball in the garden. The dog must run and be in different positions in each frame, and the camera does not move. How do I specify the position of the dog and the ball in each keyframe?

    • @latentvision
      @latentvision  1 year ago

      what you are asking is pretty complicated, it can't really be explained in a YT comment

    • @aliyilmaz852
      @aliyilmaz852 9 months ago

      @@latentvision it would be good if you could teach us in another video. BTW you are amazing Matteo!

  • @EMSSpammer
    @EMSSpammer 1 month ago

    Hello, can anyone help me with this error?
    Given groups=1, weight of size [320, 4, 3, 3], expected input[1, 16, 150, 150] to have 4 channels, but got 16 channels instead
    This part should be the workflow from the Unsampler at 09:30

  • @kakochka1
    @kakochka1 1 year ago

    @latentvision Could you explain how you created the start_at_step primitive (to control both the Unsampler and KSampler inputs) with just one click and the correct naming? Is this some custom-node magic? And as an idea for future videos: could you share how you debug the content of different nodes (MaskPreview and PreviewImage aside) with int/bool/etc. values in them?

    • @latentvision
      @latentvision  1 year ago

      double click on the input little dot 😄

  • @dflfd
    @dflfd 10 months ago

    thank you, this is really great!

  • @PradeepKumar6
    @PradeepKumar6 10 months ago

    Great video. I have a question: what are text_g and text_l in CLIP Text Encode? Thanks

  • @paulofalca0
    @paulofalca0 1 year ago +1

    Great stuff! Thanks!

  • @bwheldale
    @bwheldale 1 year ago

    I'm slowly absorbing these valuable insights; my favourite Comfy channel. At the beginning of 'light conditioning' I wasn't getting subtle changes, they were drastic, until I tried other seeds. Some worked for subtle changes while some did not. Unless I'm mistaken, this light conditioning may be seed dependent. Just wondering if some of the seeds you tried weren't "subtle friendly"?

    • @latentvision
      @latentvision  1 year ago

      sometimes it's hard to see them but there's always a difference. Try to use the "enhance difference" node from the Comfy_Essentials extension. Yes, some seeds will show more difference than others, but it's completely random.

    • @bwheldale
      @bwheldale 1 year ago

      My apologies, I was just about to edit my post to say my wiring to each text box was not from both text_g and text_l. It's now all working fine and looks exactly like yours, with the subtle results achieved. I'll also play with the extension as suggested, thank you for the tips.
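
What an "enhance difference" node like the one suggested in this thread does can be approximated in a few lines: subtract the two images and amplify the residual around mid-gray so near-invisible seed-to-seed changes become visible. An illustrative sketch, not the Comfy_Essentials implementation:

```python
import numpy as np

def enhance_difference(img_a: np.ndarray, img_b: np.ndarray, gain: float = 8.0) -> np.ndarray:
    """Visualize tiny differences between two same-sized float images in [0, 1].

    Identical pixels map to 0.5 (mid-gray); any deviation is multiplied
    by `gain` so barely visible changes stand out.
    """
    diff = img_a.astype(np.float64) - img_b.astype(np.float64)
    return np.clip(0.5 + gain * diff, 0.0, 1.0)

a = np.full((8, 8), 0.50)
b = a.copy()
b[2, 3] = 0.52                  # a barely visible 2% change in one pixel
vis = enhance_difference(a, b)  # that pixel now reads 0.5 - 8*0.02 ≈ 0.34
```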

  • @opposegravity
    @opposegravity 1 year ago

    Can you go over all the Comfy nodes? I've learned more watching your videos than from any other resource! Thanks

    • @latentvision
      @latentvision  1 year ago

      I started doing that, but it's a bit boring...

    • @opposegravity
      @opposegravity 1 year ago

      @@latentvision Maybe they're boring to make, but not to watch. I'm enjoying the content!

  • @thelookerful
    @thelookerful 7 months ago

    These tutorials are great!!

  • @tomolson6169
    @tomolson6169 11 months ago

    I noticed you never re-adjusted the width/height values on the ClipTextEncode nodes after you switched to the Unsampler demo, even though you started working with a different latent size. Was that just an oversight? It didn't seem to make a difference; your images still looked GREAT! I was just curious. I ended up using a node template for SDXL with primitives set up to quickly adjust the values to 4x the latent size as you suggested. Thank you so much for all your teachings! You've helped me GREATLY!

    • @latentvision
      @latentvision  11 months ago +2

      yeah I noticed after I posted the video. the size conditioning doesn't make much difference, it's more of a refinement, so it's not crucial, but yeah in this case it's an oversight

  • @sincdraws
    @sincdraws 11 months ago

    great stuff as always

  • @impactframes
    @impactframes 1 year ago

    Another excellent tutorial. ❤

  • @blisterfingers8169
    @blisterfingers8169 1 year ago

    Would conditioning concat be the same as something like Automatic1111's blend function or is it something different?
    Love these videos, thanks!
    Also: "a hint of Klimt" had me chuckling.

    • @latentvision
      @latentvision  1 year ago

      no, blend is another option. The node is called conditioning average.

  • @___x__x_r___xa__x_____f______
    @___x__x_r___xa__x_____f______ 1 year ago

    Matteo, I wish you would explore latent upscaling and show us some useful possibilities for getting high-frequency details most effectively, through iterative step upscaling and through other more esoteric modes such as block weights etc. And how best to leverage specialised upscale models such as SkinDiff etc.

    • @latentvision
      @latentvision  1 year ago +1

      yeah working with noise to increase details is in the pipeline :)

    • @___x__x_r___xa__x_____f______
      @___x__x_r___xa__x_____f______ 1 year ago

      @@latentvision right, what you just showed us! That is a great idea. I will try it now. Love this community!

  • @81sw0le
    @81sw0le 1 year ago

    I have a unique way of creating characters in Midjourney. I'd like to use it as an IPAdapter reference and pose it, but I never get any good results. (Very detailed, grotesque cartoon style.)
    The goal is to be able to create a character sheet so I can animate it.
    Have you seen a way to do something like this?

    • @latentvision
      @latentvision  1 year ago

      I'd need to see the pictures. Technically it's possible; you probably need a checkpoint or a LoRA with a close style, and it depends on the kind of result and fidelity you are after.

    • @81sw0le
      @81sw0le 1 year ago

      @@latentvision Do you have a Discord so I can send you the images?

  • @alexgilseg
    @alexgilseg 11 months ago

    This is really cool, however I have a question. In the video you set "end at step" to 0 and it keeps the structure of the loaded image. When I set it to 0 it just uses nothing of my loaded image and goes by the prompt alone. And that's what I thought the whole thing was: to go backwards in an image and then load from there, so to say. By setting it to zero, don't you tell the workflow to ignore the loaded image?

  • @luiswebdev8292
    @luiswebdev8292 1 year ago

    can you explain in more detail why you're using CLIPTextEncodeSDXL and not just CLIPTextEncode? Is that important to this workflow?

    • @latentvision
      @latentvision  1 year ago

      no, it's not essential. As I mentioned at the very beginning CLIPTextEncodeSDXL generally gives slightly sharper details

    • @luiswebdev8292
      @luiswebdev8292 1 year ago

      @@latentvision that only works with SDXL models, right? Is there an alternative for other models (e.g. DreamShaper), or for those would you simply use CLIPTextEncode?

  • @iozsoo
    @iozsoo 11 months ago

    Why hasn't my SDXL node got green pins on it? Also, my positive and negative prompts have conditioning, not string :(

  • @kikoking5009
    @kikoking5009 8 months ago

    The Unsampler node is not working.
    It shows (import failed) after downloading

    • @latentvision
      @latentvision  8 months ago +1

      comfy made a breaking upgrade, the nodes need to be updated. I believe the unsampler should be fine now

  • @bgtubber
    @bgtubber 5 months ago

    I tried this with a few images. I'm getting back similar images, but not the same as the originals. What am I doing wrong? Mostly the background is different, while the subject stays more or less the same (some little differences in attire).

    • @latentvision
      @latentvision  5 months ago

      hard to say, it was an "old" workflow so it might be just a matter of updated checkpoints or different version of some library

    • @bgtubber
      @bgtubber 5 months ago

      @@latentvision Ah, I see. No worries. I'll keep trying. Hopefully I'll figure it out. :)

  • @dan323609
    @dan323609 1 year ago

    What is sigma in Comfy (or SD)? What does it mean, and what does it do?

    • @latentvision
      @latentvision  1 year ago +1

      roughly it is the current progress in the generation. you can compare it to a sigma start/end to know where you are in the image generation

    • @dan323609
      @dan323609 1 year ago

      @@latentvision oh i get it, thx
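
The reply above can be made concrete: samplers walk a decreasing schedule of noise levels (sigmas), and comparing the current sigma against the schedule's endpoints tells you how far along the generation is. A sketch using the Karras schedule formula with illustrative SD-style sigma bounds (not ComfyUI's internals):

```python
import numpy as np

def karras_sigmas(n: int, sigma_min: float = 0.0292,
                  sigma_max: float = 14.61, rho: float = 7.0) -> np.ndarray:
    """Karras-style noise schedule: n+1 decreasing sigmas ending at 0."""
    t = np.linspace(0, 1, n)
    inv_rho = 1.0 / rho
    sigmas = (sigma_max ** inv_rho + t * (sigma_min ** inv_rho - sigma_max ** inv_rho)) ** rho
    return np.append(sigmas, 0.0)  # samplers finish at sigma = 0

def progress(sigma: float, sigma_max: float = 14.61) -> float:
    """Rough 'where am I in the generation' estimate from the current sigma."""
    return 1.0 - sigma / sigma_max

sigmas = karras_sigmas(20)
# early steps have high sigma (mostly noise), late steps low sigma (mostly image)
print(f"step 0:  sigma={sigmas[0]:.2f}, progress={progress(sigmas[0]):.0%}")
print(f"step 10: sigma={sigmas[10]:.2f}, progress={progress(sigmas[10]):.0%}")
```

This is also why comparing the current sigma to a start/end sigma, as Matteo says, locates you within the generation.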

  • @P4TCH5S
    @P4TCH5S 1 year ago +1

    so cool! thank you

  • @Kikoking-y9b
    @Kikoking-y9b 8 months ago

    Hello,
    I have 2 issues.
    Repeat Latent Batch gives exactly 2 identical images.
    And working with Get Sigma shows this error:
    Error occurred when executing BNK_GetSigma:
    'SDXL' object has no attribute 'get model_object'

    • @latentvision
      @latentvision  8 months ago

      you probably just need to upgrade comfy

    • @Kikoking-y9b
      @Kikoking-y9b 8 months ago

      @@latentvision unfortunately no. The error is still there, also with the KSampler variation with noise injection.
      I tried with the Juggernaut SDXL checkpoint and the sd_xl_base 1.0 checkpoint. Same issue with 'get _model_object'

    • @xieporter
      @xieporter 8 months ago

      I have the same problem

    • @Kikoking-y9b
      @Kikoking-y9b 8 months ago

      @@latentvision would it help to delete Comfy entirely and install it again, so maybe the error goes away?
      A lot of updates didn't help at all. It's crazy

  • @AntonioRomero-x1e
    @AntonioRomero-x1e 10 months ago

    I've watched this video many times trying to use one of these methods to fake an "unstable" animation. AnimateDiff evolved so quickly that it seems impossible now to make each frame in a different style... Can you make a video on how to make a video with AnimateDiff where IPAdapter keeps the identity of the main subject but the rest of the composition changes style in each frame? Keep in mind that scheduled prompts are not a solution here; it would be very difficult to write a prompt for each frame.

  • @generalawareness101
    @generalawareness101 1 year ago

    For whatever reason, if I set the int to 0 I get nothing, and the closer I get to the sampler steps (30 in this example), the more the image comes in.

  • @pk.9436
    @pk.9436 1 year ago

    great work 👏

  • @danielmatejka1976
    @danielmatejka1976 1 year ago +1

    thank you ❤

  • @cyril1111
    @cyril1111 1 year ago +1

    Thanks for the explanations! Super helpful! Now, I'm a bit confused by the width and height of your TextEncodeSDXL: it is huge! How come it goes so fast in your workflow, when for me it takes more than 5 min with a 4090?

  • @petruschka222
    @petruschka222 1 year ago

    Thank You. Great Job.

  • @gamersgabangest3179
    @gamersgabangest3179 10 months ago

    Hi Matteo, what GPU do you use? Thanks

  • @Homopolitan_ai
    @Homopolitan_ai 10 months ago

    Total ❤

  • @HideousSlots
    @HideousSlots 1 year ago

    Awesome!

  • @petneb
    @petneb 21 days ago

    Wonderful

  • @kakochka1
    @kakochka1 1 year ago

    Am I the only one who can't open the "pastebin" links? Does anyone know what I'm doing wrong?)

    • @latentvision
      @latentvision  1 year ago

      seems to be working for me... I'll find a better location for all the workflows soon

    • @kakochka1
      @kakochka1 1 year ago

      @@latentvision Sorry for the trouble) Previously I avoided this problem by going to your GitHub page, but I couldn't find the workflows there this time(

  • @bobgalka
    @bobgalka 5 months ago

    I just have to laugh... I wanted to use some of the ideas from this workflow, so I started a new flow and began building it, and almost immediately got stuck on the pos and neg nodes. It took me a while to figure out that the nodes are called PrimitiveNode... so I added that, but it looked nothing like yours. I tried different things, then I thought to just copy-paste the node into my new flow... nope, no text area to type in. How did you create those PrimitiveNode nodes to have a string output and a multiline text area? BTW, I am totally enjoying myself watching and learning from your videos. ;O)

  • @salvatorecancilla1605
    @salvatorecancilla1605 5 months ago

    you're the best

  • @kdesign1579
    @kdesign1579 10 months ago

    awesome!

  • @GForcenuwan
    @GForcenuwan 1 year ago +1

    wow💡

  • @yangzhang8964
    @yangzhang8964 1 year ago

    I didn't find "Unsampler"

    • @latentvision
      @latentvision  1 year ago

      it's linked in the video description

    • @yangzhang8964
      @yangzhang8964 1 year ago

      @@latentvision My "UnSampler" module shows "undefined"

  • @whatwherethere
    @whatwherethere 1 year ago

    How are you getting consistently good images? The moment I change anything in my prompts, the image goes crazy. This is nowhere close to my experience.