Style Transfer Using ComfyUI - No Training Required!

  • Published: 16 Mar 2024
  • Visual style prompting aims to produce a diverse range of images while maintaining specific style elements and nuances. During the denoising process, they keep the query from the original features while swapping the key and value with those from the reference features in the late self-attention layers.
    Their approach enables visual style prompting without any fine-tuning, ensuring that generated images maintain a faithful style.
    My personal favourite so far - and yes, it works in ComfyUI too ;)
    Want to help support the channel? Get workflows and more!
    / nerdyrodent
    Links:
    github.com/naver-ai/Visual-St...
    github.com/ExponentialML/Comf... - WIP
    == More Stable Diffusion Stuff! ==
    * Install ComfyUI - • How to Install ComfyUI...
    * ComfyUI Workflow Creation Essentials For Beginners - • ComfyUI Workflow Creat...
    * Make Images QUICKLY with an LCM LoRA! - • LCM LoRA = Speedy Stab...
    * How do I create an animated SD avatar? - • Create your own animat...
    * Video-to-Video AI using AnimateDiff - • How To Use AnimateDiff...
    * Consistent Characters in ANY pose with ONE Image! - • Reposer = Consistent S...
    * Installing Anaconda for MS Windows Beginners - • Anaconda - Python Inst...
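    The key/value swap described above can be sketched in a few lines of PyTorch. This is an illustrative sketch, not the authors' actual implementation: the projection-layer arguments and `ref_hidden_states` name are assumptions, and a real ComfyUI node would hook this into the U-Net's late self-attention blocks rather than call it standalone.

    ```python
    import torch
    import torch.nn.functional as F

    def style_swap_attention(q_proj, k_proj, v_proj,
                             hidden_states, ref_hidden_states):
        """Self-attention step where the query comes from the original
        (content) features while key and value are swapped in from the
        reference (style) features, as in visual style prompting."""
        q = q_proj(hidden_states)      # query: keep the original content
        k = k_proj(ref_hidden_states)  # key:   take from the style reference
        v = v_proj(ref_hidden_states)  # value: take from the style reference
        # Standard scaled dot-product attention over the mixed Q/K/V
        return F.scaled_dot_product_attention(q, k, v)
    ```

    Per the paper, this swap is applied only in the late self-attention layers during denoising; earlier layers attend normally, which is why the content layout survives while the style follows the reference.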
  • Science

Comments • 71

  • @jimdelsol1941
    @jimdelsol1941 1 month ago +13

    That one is fantastic !

    • @NerdyRodent
      @NerdyRodent 1 month ago +1

      Yeah, they did really well!

  • @ultimategolfarchives4746
    @ultimategolfarchives4746 1 month ago +1

    Earlier, I installed the nodes but didn't get around to trying them out. Now, you're making me regret not giving them a go! 😂😂

  • @kariannecrysler640
    @kariannecrysler640 1 month ago +4

    My Nerdy friend 🤘🥰 seed starting this week for my salad garden 😁

  • @Steve.Jobless
    @Steve.Jobless 1 month ago +1

    Dude, this is what I've been waiting for since Style Aligned came out.

    • @AustralienUFO
      @AustralienUFO 1 month ago

      This is what I've been waiting for since DeepDream dropped

  • @Pending22
    @Pending22 1 month ago +1

    Top content as always! 👍 Thx

  • @Main267
    @Main267 1 month ago +3

    5:30 Have you seen Marigold depth yet? It's so super crisp and clean for most of the images I threw at it. Only downside is that, whatever the base image is, it will work best at 768x768, but you can rescale back up to the base image size after Marigold does its magic.

  • @GfcgamerOrgon
    @GfcgamerOrgon 1 month ago +2

    Nerdy Rodent is great!

  • @andyone7616
    @andyone7616 1 month ago +5

    Is there a version for Automatic1111?

  • @mufeedco
    @mufeedco 1 month ago +1

    Great video. Thank you.

  • @GamingDaveUK
    @GamingDaveUK 1 month ago

    A couple of years ago there was a website that let you upload an image and apply its style to another image, so you could upload a plate of spaghetti and then an image of your mate and get a mate made of spaghetti... this reminds me of that. Gonna have to add this to ComfyUI (and fully watch this video) on my day off :)

  • @attashemk8985
    @attashemk8985 1 month ago +1

    Looks better than IPAdapter, cool. Sometimes you don't have a dozen photos of something made from clouds to train a style on.

  • @craizyai
    @craizyai 1 month ago +2

    Hi! Please upload the ControlNet Depth example. The ExponentialML GitHub has taken it down :(

  • @androidgamerxc
    @androidgamerxc 1 month ago +2

    What about Automatic1111?

  • @twilightfilms9436
    @twilightfilms9436 1 month ago

    Would it work with batch sequencing for video? How about consistency?

  • @ronnykhalil
    @ronnykhalil 1 month ago

    good jeebus there goes my evening!

  • @lmlsergiolml
    @lmlsergiolml 1 month ago +1

    Super cool technique!
    Can someone explain to me where to start? There is so much info, and it's a bit overwhelming for me

    • @NerdyRodent
      @NerdyRodent 1 month ago

      Check the links in the video description!

  • @unknownuser3000
    @unknownuser3000 1 month ago +1

    This looks incredible... if I don't have to train for 100s of hours...

  • @contrarian8870
    @contrarian8870 1 month ago +2

    @Nerdy Rodent Great stuff. Request: on Patreon, can you release a version with a Canny Controlnet added to the depth Controlnet? I'm not yet at the stage of being able to do this myself...

    • @NerdyRodent
      @NerdyRodent 1 month ago +1

      Sure, I’ll add a canny one too!

    • @contrarian8870
      @contrarian8870 1 month ago +1

      @@NerdyRodent Thank you!

    • @contrarian8870
      @contrarian8870 1 month ago

      @@NerdyRodent Wait, I didn't mean replace Depth with Canny (I can do that) :) I meant: adding a Canny Controlnet on top of the Depth Controlnet within the same workflow, so that both are active. That's the part I can't do yet: chaining two Controlnets in one workflow.

    • @NerdyRodent
      @NerdyRodent 1 month ago

      @@contrarian8870 Oh, for two (or more) ControlNets you can just chain them together, so the two outputs from the first are the inputs to the second, e.g. ControlNet 1 -> ControlNet 2 -> etc.

    • @contrarian8870
      @contrarian8870 1 month ago

      @@NerdyRodent OK, thanks.

  • @hamtsammich
    @hamtsammich 1 month ago

    I'm having a hard time getting my head around comfyui.
    I'm sure it's not all that hard, but I've grown accustomed to the command line, or automatic1111.

  • @bilalalam1
    @bilalalam1 1 month ago +4

    Automatic1111 Forge?

    • @NerdyRodent
      @NerdyRodent 1 month ago +4

      Give it a few days - it's brand new! XD

    • @havemoney
      @havemoney 1 month ago +1

      @@NerdyRodent Will wait!

  • @DemShion
    @DemShion 1 month ago

    Can't seem to get this to work with SDXL; can anyone confirm that it is still working with the updates?

  • @DanielThiele
    @DanielThiele 1 month ago

    Do you have a workflow tutorial, or are you interested in making one, that also generates orthogonal views / model sheets from the initial sketch? I know there are things like CharTurner, but so far it always works based on text input only. I assume for you it's super easy. I'm still a noob with ComfyUI.

  • @nioki6449
    @nioki6449 1 month ago

    After installation I got "module for custom nodes due to the lack of NODE_CLASS_MAPPINGS." Can somebody help with that?

  • @bladechild2449
    @bladechild2449 1 month ago

    I tried the ComfyUI workflow from the GitHub page and it didn't seem to do much at all, until I realized it mostly seems very reliant on piggybacking off the prompts, and gets very confused with anything beyond the basics. If your reference image is vector art and you put in a person's name, it won't take the style at all and just gives a photo of the person.

  • @edwardwilliams2564
    @edwardwilliams2564 1 month ago

    If I were to guess, I'd say that the workflow not working as well with the 1.5 version was due to the model used for the style transfer not being trained on 512x512 images.

  • @dogvandog
    @dogvandog 1 month ago +1

    I think something got broken with the ComfyUI extension 2 days ago, because this is just not working.

  • @unknownuser3000
    @unknownuser3000 1 month ago +1

    Not for Automatic1111?

  • @MrPrasanna1993
    @MrPrasanna1993 1 month ago

    How much VRAM does it require?

  • @AnnisNaeemOfficial
    @AnnisNaeemOfficial 1 month ago

    Thanks. I just tried it and am not getting the same results as you. Not even close. Images look mutilated... I've double- and triple-checked my work and reviewed the GitHub. Seems to me like this only works in extremely specific scenarios?

  • @pmtrek
    @pmtrek 26 days ago

    What extensions have you used for the BLIP nodes, please? I have installed both comfy_clip_blip_node and ComfyUI_Pic2Story, but neither shows up like yours :/

  • @waurbenyeger
    @waurbenyeger 1 month ago

    I've installed the extension using the URL from Git like I've done for every other extension, but I'm not seeing anything new in the interface. I'm also using Forge... is this only available on the HF website right now, or? I'm lost. Where is this supposed to pop up when you install it?

  • @mr.entezaee
    @mr.entezaee 1 month ago +2

    I could not make this workflow from the video. Please put it up for free if possible.

    • @wv146
      @wv146 1 month ago

      No, it's pay-to-play now

  • @Omfghellokitty
    @Omfghellokitty 1 month ago

    Import keeps failing, and when I try to install the requirements, Triton or whatever fails.

  • @blacksage81
    @blacksage81 1 month ago

    Hm, I can use this to force my vehicle design generations into sketches for Vizcom, which may give me cleaner results to take into TripoSR, which may give me good 3D reference models. My body is ready.

  • @steinscamus8037
    @steinscamus8037 1 month ago

    Cool, is there an a1111 version?

    • @NerdyRodent
      @NerdyRodent 1 month ago

      Hopefully we’ll see something in the coming months!

  • @MushroomFleet
    @MushroomFleet 1 month ago

    1:41 "it's a Gundam" :)

    • @kex0
      @kex0 1 month ago +2

      Which is a robot.

  • @Paulo-ut1li
    @Paulo-ut1li 1 month ago +1

    Not working so well on Comfy yet :(

  • @SasukeGER
    @SasukeGER 29 days ago

    do you have this workflow somewhere :O ?

    • @NerdyRodent
      @NerdyRodent 28 days ago

      Sure! You can grab this one and more at www.patreon.com/NerdyRodent !

  • @mr.entezaee
    @mr.entezaee 1 month ago

    How to install node types?
    ImageFromBatch

    • @mr.entezaee
      @mr.entezaee 1 month ago

      Essential nodes that are weirdly missing from ComfyUI core.

    • @mr.entezaee
      @mr.entezaee 1 month ago

      ImageFromBatch Nodes that have failed

  • @icedzinnia
    @icedzinnia 1 month ago

    👍

  • @DemShion
    @DemShion 1 month ago

    Does this only work with 512x512?

    • @NerdyRodent
      @NerdyRodent 1 month ago

      Nope!

    • @DemShion
      @DemShion 1 month ago

      @@NerdyRodent Then I must be doing something wrong. When I use a reference image with any dimensions other than 512 by 512, I get an image identical to the one I would get without visual style prompting. The idea is extremely cool and the example results in both your video and the paper are amazing, but for some reason it seems to be a very obscure feature; in the communities I'm part of, most people had not even heard of it and are not able to offer assistance troubleshooting.

    • @NerdyRodent
      @NerdyRodent 1 month ago +1

      @@DemShion My guess would be that perhaps you need to update everything?

  • @LouisGedo
    @LouisGedo 1 month ago

    👋

    • @Bond-Bacon
      @Bond-Bacon 1 month ago

      Do facts and logic still destroy carnists?

  • @RahulGupta1981
    @RahulGupta1981 1 month ago

    How are your 3 conditions automatically getting picked in Apply Visual Style Prompting? In my case it's always taking the reference image prompt as the positive condition for the style prompt and renders fire only :). However, it's a pretty good one.

  • @toothpastesushi5664
    @toothpastesushi5664 1 month ago +1

    Doesn't work in most cases

    • @ultimategolfarchives4746
      @ultimategolfarchives4746 1 month ago

      Same for me... We need to prompt it extremely well to get good results.

    • @toothpastesushi5664
      @toothpastesushi5664 1 month ago

      @@ultimategolfarchives4746 I don't think prompting is the problem; it's that it is only seldom able to separate style from subject matter. It works perfectly for origami (as long as you put in one animal and ask for another animal), but in most other cases it won't work. (After all, it seems to be based on a hack in latent space; were it to work correctly, it would be a major breakthrough and it would be big news by now.)

  • @LilShepherdBoy
    @LilShepherdBoy 1 month ago +9

    Jesus Christ loves you 💙

    • @kariannecrysler640
      @kariannecrysler640 1 month ago +4

      You speak for gods? How special you are.

    • @lambgoat2421
      @lambgoat2421 1 month ago +2

      @@kariannecrysler640 I mean isn't that kind of Jesus' whole thing?

    • @LilShepherdBoy
      @LilShepherdBoy 1 month ago +2

      "For God so loved the world that he gave his one and only Son, that whoever believes in him shall not perish but have eternal life."