Style Transfer Using ComfyUI - No Training Required!
- Published: 16 Mar 2024
- Visual style prompting aims to produce a diverse range of images while maintaining specific style elements and nuances. During the denoising process, they keep the query from the original features while swapping the key and value with those from the reference features in the late self-attention layers.
Their approach allows for visual style prompting without any fine-tuning, ensuring that generated images maintain a faithful style.
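To get a feel for the idea, here's a toy sketch (plain Python, not the authors' code): in a late self-attention layer, the queries come from the content (original) pass, while the keys and values are taken from the reference (style) pass. The vector sizes and names here are made up purely for illustration.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]  # subtract max for numerical stability
    s = sum(exps)
    return [e / s for e in exps]

def attention(q, k, v):
    """q: list of query vectors; k, v: lists of key/value vectors."""
    d = len(q[0])
    out = []
    for qi in q:
        # scaled dot-product scores against every key
        scores = [sum(a * b for a, b in zip(qi, kj)) / math.sqrt(d) for kj in k]
        w = softmax(scores)
        # weighted sum of the value vectors
        out.append([sum(wi * vj[t] for wi, vj in zip(w, v)) for t in range(len(v[0]))])
    return out

def style_swapped_attention(q_content, k_ref, v_ref):
    # Visual-style-prompting idea: keep the content queries,
    # but attend over the reference image's key/value features.
    return attention(q_content, k_ref, v_ref)

q = [[1.0, 0.0], [0.0, 1.0]]              # content queries
k = [[1.0, 1.0], [0.0, 2.0], [2.0, 0.0]]  # reference keys
v = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]  # reference values
out = style_swapped_attention(q, k, v)
print(len(out), len(out[0]))  # 2 2
```

In the real method this swap only happens in the late self-attention layers of the denoising U-Net; the earlier layers are left alone, which is why the content survives while the style transfers.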
My personal favourite so far - and yes, it works in ComfyUI too ;)
Want to help support the channel? Get workflows and more!
/ nerdyrodent
Links:
github.com/naver-ai/Visual-St...
github.com/ExponentialML/Comf... - WIP
== More Stable Diffusion Stuff! ==
* Install ComfyUI - • How to Install ComfyUI...
* ComfyUI Workflow Creation Essentials For Beginners - • ComfyUI Workflow Creat...
* Make Images QUICKLY with an LCM LoRA! - • LCM LoRA = Speedy Stab...
* How do I create an animated SD avatar? - • Create your own animat...
* Video-to-Video AI using AnimateDiff - • How To Use AnimateDiff...
* Consistent Characters in ANY pose with ONE Image! - • Reposer = Consistent S...
* Installing Anaconda for MS Windows Beginners - • Anaconda - Python Inst...
That one is fantastic !
Yeah, they did really well!
Earlier, I installed the nodes but didn't get around to trying them out. Now, you're making me regret not giving them a go! 😂😂
My Nerdy friend 🤘🥰 seed starting this week for my salad garden 😁
😀
Dude, this is what I've been waiting for since Style Aligned came out.
This is what I've been waiting for since DeepDream dropped
Top content as always! 👍 Thx
5:30 Have you seen Marigold depth yet? It's so super crisp and clean for most of the images I threw at it. Only downside is that whatever the base image is it will work best at 768x768, but you can rescale it back up to the base image size after Marigold does its magic.
Nerdy Rodent is great!
Is there a version for automatic1111?
Great video. Thank you.
A couple of years ago there was a website that allowed you to upload an image and apply that style to another image, so you could upload a plate of spaghetti and then upload an image of your mate, and you had a mate made of spaghetti... this reminds me of that. Gonna have to add that to ComfyUI (and fully watch this video) on my day off :)
Looks better than IPAdapter, cool. Sometimes you don't have a dozen photos of something made from clouds to train a style on.
Hi! Please upload the ControlNet Depth example. The Exponential ML GitHub has taken it down :(
what about automatic1111?
Would it work with batch sequencing for video? How about consistency?
good jeebus there goes my evening!
Super cool technique!
Can someone explain to me where to start? There is so much info, and it's a bit overwhelming for me
Check the links in the video description!
This looks incredible... if I don't have to train 100s of hours...
@Nerdy Rodent Great stuff. Request: on Patreon, can you release a version with a Canny Controlnet added to the depth Controlnet? I'm not yet at the stage of being able to do this myself...
Sure, I’ll add a canny one too!
@@NerdyRodent Thank you!
@@NerdyRodent Wait, I didn't mean replace Depth with Canny (I can do that) :) I meant: adding a Canny Controlnet on top of the Depth Controlnet within the same workflow, so that both are active. That's the part I can't do yet: chaining two Controlnets in one workflow.
@@contrarian8870 oh, for two (or more) control nets you can just chain them together so the two outputs from c1 are the inputs to c2. E.g control net 1 -> control net 2 -> etc
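The chaining described above is just a data-flow pattern: the conditioning outputs of the first Apply ControlNet node feed the inputs of the second, so both hints end up attached. Here's a toy Python sketch of that wiring (the `apply_controlnet` function and the string tags are hypothetical stand-ins for the real ComfyUI node, purely to show the chaining):

```python
def apply_controlnet(conditioning, controlnet, strength):
    # Hypothetical stand-in for ComfyUI's "Apply ControlNet" node:
    # it returns new positive/negative conditioning with the hint attached.
    positive, negative = conditioning
    tag = f"{controlnet}@{strength}"
    return positive + [tag], negative + [tag]

# Chain depth -> canny, exactly like wiring node 1's outputs into node 2.
cond = (["prompt"], ["negative prompt"])
cond = apply_controlnet(cond, "depth", 0.8)
cond = apply_controlnet(cond, "canny", 0.6)
print(cond[0])  # ['prompt', 'depth@0.8', 'canny@0.6']
```

Because each node passes the accumulated conditioning along, you can extend the chain with as many ControlNets as you like, adjusting each one's strength independently.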
@@NerdyRodent OK, thanks.
I'm having a hard time getting my head around comfyui.
I'm sure it's not all that hard, but I've grown accustomed to the command line, or automatic1111.
Automatic 1111 forge ?
Give it a few days - it's brand new! XD
@@NerdyRodent Will wait!
Can't seem to get this to work with SDXL. Can anyone confirm that it is still working with the updates?
Do you have a workflow tutorial, or are you interested in making one, that also generates orthogonal views / model sheets from the initial sketch? I know there are things like CharTurner, but so far it always works based on text input only. I assume for you it's super easy. I'm still a noob with ComfyUI.
after installation I got a "module for custom nodes failed due to the lack of NODE_CLASS_MAPPINGS" error, can somebody help with that?
I tried the ComfyUI workflow from the GitHub page and it didn't seem to do much at all, until I realized it mostly seems very reliant on piggybacking off of the prompts, and gets very confused with anything beyond basic. If your reference image is vector art and you put in a person's name, it won't take the style at all and just gives a photo of the person.
If I were to guess, I'd say that the workflow not working as well with the 1.5 version was due to the model used for the style transfer not being trained on 512x512 images.
I think something got broken with the ComfyUI extension 2 days ago, because this is just not working.
Not for automatic?
How much vram does it require?
Thanks. I just tried it and am not getting the same results as you. Not even close. Images look mutilated... I've double, triple checked my work and reviewed the github. Seems to me like this is only working in extremely specific scenarios?
what extensions have you used for the BLIP nodes, please? I have installed both comfy_clip_blip_node and ComfyUI_Pic2Story, but neither shows up like yours :/
I've installed the extension using the URL from Git like I've done for every other extension, but I'm not seeing anything new on the interface. I'm also using Forge... is this only available on the HF website right now or? I'm lost. Where is this supposed to pop up when you install it?
I could not make this workflow from the video. Please put it up for free if possible.
No, it's pay to play now.
Import keeps failing, and when I try to install the requirements, Triton (or whatever) fails.
Hm, I can use this to force my vehicle design generation into sketches for Vizcom, which may give me cleaner results to take into TripoSR, which may give me good 3D reference models. My body is ready.
Cool, is there an a1111 version?
Hopefully we’ll see something in the coming months!
1:41 "it's a Gundam" :)
Which is a robot.
Not working so well on Comfy yet :(
do you have this workflow somewhere :O ?
Sure! You can grab this one and more at www.patreon.com/NerdyRodent !
How to install node types?
ImageFromBatch
Essential nodes that are weirdly missing from ComfyUI core.
The ImageFromBatch node failed to load.
👍
Does this only work with 512x512?
Nope!
@@NerdyRodent Then I must be doing something wrong. When I use a reference image with any dimensions other than 512 by 512, I get an image identical to the one I would get without visual style prompting. The idea is extremely cool and the example results in both your video and the paper are amazing, but for some reason it seems to be a very obscure feature: in the communities I'm part of, most people had not even heard of it and are not able to offer assistance troubleshooting.
@@DemShion my guess would be that perhaps you need to update everything?
👋
Do facts and logic still destroy carnists?
How are your 3 conditions automatically getting picked in Apply Visual Style Prompting? In my case it's always taking the reference image prompt as the positive condition for the style prompt and renders only fire :) However, it's a pretty good one.
doesnt work for most cases
Same for me... We need to prompt it extremely well to get good results.
@@ultimategolfarchives4746 I don't think prompting is the problem; it's that it is only seldom able to separate style from subject matter. It works perfectly for origami (as long as you put in one animal and ask for another animal), but in most other cases it won't work. After all, it seems to be based on a hack in latent space; were it to work correctly, it would be a major breakthrough and big news by now.
Jesus Christ loves you 💙
You speak for gods? How special you are.
@@kariannecrysler640 I mean isn't that kind of Jesus' whole thing?
"For God so loved the world that he gave his one and only Son, that whoever believes in him shall not perish but have eternal life."