Style Transfer Adapter for ControlNet (img2img)
- Published: Mar 6, 2023
- Very cool feature for ControlNet that lets you transfer a style.
HOW TO SUPPORT MY CHANNEL
-Support me by joining my Patreon: / enigmatic_e
_________________________________________________________________________
SOCIAL MEDIA
-Join my discord: / discord
-Instagram: / enigmatic_e
-Tik Tok: / enigmatic_e
-Twitter: / 8bit_e
- Business Contact: esolomedia@gmail.com
_________________________________________________________________________
Details about Adapters
TencentARC/T2I-Adapter: T2I-Adapter (github.com)
Models
huggingface.co/TencentARC/T2I...
EbSynth + SD
• Stable Diffusion + EbS...
Install SD
• Installing Stable Diff...
Install ControlNet
• New Stable Diffusion E...
I can't wait to apply this to my AI animations. This is a huge game changer. Using less in the text prompt area is a step forward for us; having two images as the only driving factors should help a ton with cohesion/consistency in animation.
Thanks for sharing this! Very cool that they’ve added this style option. Excited for your next video on connecting it with eb synth. I’ll watch that next and see what I can do as well.
Incredible amount of useful information in this video. Thank you!
Great video :-) Thanks for sharing
Thank you! very interesting!
Thank you very much
You're a legend
Really cool, thanks for sharing! I wonder what would happen if you fed 3D wireframe renders into the ControlNet line inputs instead of the generated ones; that could be very temporally stable.
I need help: when I generate, the result is way different from the actual image I'm using.
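A common cause of img2img output drifting far from the source image is a high denoising strength. As a rough sketch of why (this mirrors how diffusers-style img2img pipelines skip the early part of the noise schedule; the helper name is my own, not from any library):

```python
def img2img_steps_run(num_inference_steps: int, strength: float) -> int:
    """Approximate number of denoising steps actually run in img2img.

    With strength near 1.0 the init image is almost fully re-noised, so
    the output can differ completely from the input; with low strength
    only the tail of the schedule runs and the output stays close.
    """
    return min(int(num_inference_steps * strength), num_inference_steps)

print(img2img_steps_run(30, 1.0))  # 30: full re-noise, input largely ignored
print(img2img_steps_run(30, 0.5))  # 15: moderate edit, closer to the input
```

So if the result looks unrelated to your source image, try lowering Denoising strength (e.g. toward 0.3–0.5) before changing anything else.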
Ahhh yeah
Thanks for another informative video. This style transfer feature already makes Photoshop’s Style Transfer neural filter look sad by comparison. It’s clear that Stable Diffusion’s open-source status, enabling all of these new features, is leaving MidJourney and DALL-E in the dust.
Great video! While trying to update my xformers, it installed a later version of PyTorch that no longer supports CUDA, and the version of xformers you used is no longer available. How do I fix this?
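The usual symptom there is that pip pulled in a CPU-only PyTorch wheel. Official PyTorch wheels tag the compute platform as a local version suffix (e.g. `2.0.1+cu118` for CUDA 11.8 vs `2.0.1+cpu`), so you can tell which build you have from `torch.__version__` alone. A minimal sketch of that check (the helper name is mine):

```python
def is_cuda_build(torch_version: str) -> bool:
    """True unless the PyTorch version string explicitly marks a CPU build.

    Wheels tagged '+cuXXX' are CUDA builds; '+cpu' is CPU-only.
    An untagged version may still bundle CUDA, so only an explicit
    '+cpu' suffix is treated as a definite CPU build here.
    """
    return not torch_version.endswith("+cpu")

print(is_cuda_build("2.0.1+cu118"))  # True
print(is_cuda_build("2.0.1+cpu"))    # False
```

If you find a `+cpu` build, the common fix is to reinstall torch from the CUDA wheel index (e.g. `pip install torch --index-url https://download.pytorch.org/whl/cu118`) and then a matching xformers; the exact index URL and versions depend on your GPU/driver, so check the PyTorch install selector rather than copying these verbatim.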
Hi sir, thanks for making these good videos on style transfer. I have a question: is there a way to change a person's outfit based on an input pattern picture, using style transfer and inpainting? Thanks in advance.
The color adapter works for me, but the style adapter does not. Changing the Guidance Start value does nothing; the result is the same as with ControlNet turned off. Can you tell me how to fix this? Thank you!
I tried this new ControlNet extension and get no style in the generated result. I removed all prompts (using img2img with three ControlNets active: canny + HED + t2iadapter with the clip_vision preprocessor). During generation this error appears: "warning: StyleAdapter and cfg/guess mode may not works due to non-batch-cond inference", and the result comes out with the style not applied. Frustrating. I tried many denoising strengths in img2img and many weights on the ControlNet instances without success; the style never applies to the final result. I also tried enabling "Enable CFG-Based guidance" in the ControlNet settings, and it still doesn't work. Anyone else getting this same issue?
Hey, I was going to tell you how to get After Effects to pull in multiple PNG sequences and auto-crossfade them. Import them and make each EbSynth output folder its own sequence. Then right-click all the sequences and create a new composition; in that menu there is an option to crossfade all the imported sequences. Set the durations to match your EbSynth settings. Voila!
Any tips on how to build this with ComfyUI?
Hello. What about style transfer in images?