Style Transfer Adapter for ControlNet (img2img)

  • Published: Mar 6, 2023
  • Very cool feature for ControlNet that lets you transfer a style.
    HOW TO SUPPORT MY CHANNEL
    -Support me by joining my Patreon: / enigmatic_e
    _________________________________________________________________________
    SOCIAL MEDIA
    -Join my discord: / discord
    -Instagram: / enigmatic_e
    -TikTok: / enigmatic_e
    -Twitter: / 8bit_e
    - Business Contact: esolomedia@gmail.com
    _________________________________________________________________________
    Details about Adapters
    TencentARC/T2I-Adapter: T2I-Adapter (github.com)
    Models
    huggingface.co/TencentARC/T2I...
    EbSynth + SD
    • Stable Diffusion + EbS...
    Install SD
    • Installing Stable Diff...
    Install ControlNet
    • New Stable Diffusion E...
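
    For readers who want the same thing outside the AUTOMATIC1111 ControlNet UI shown in the video, here is a minimal Python sketch of driving a TencentARC T2I-Adapter through the diffusers library. It is a sketch under assumptions, not the video's workflow: it uses the canny adapter as a stand-in (the style adapter is fed a reference image through the clip_vision preprocessor, which this pipeline does not cover), and the checkpoint names, file names, and parameter values are illustrative.

    ```python
    # Sketch: text-to-image with a T2I-Adapter via diffusers (illustrative only).
    import torch
    from diffusers import StableDiffusionAdapterPipeline, T2IAdapter
    from diffusers.utils import load_image

    # One of the TencentARC adapter checkpoints from the Hugging Face repo above.
    adapter = T2IAdapter.from_pretrained(
        "TencentARC/t2iadapter_canny_sd15v2", torch_dtype=torch.float16
    )
    pipe = StableDiffusionAdapterPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", adapter=adapter, torch_dtype=torch.float16
    ).to("cuda")

    control = load_image("edges.png")  # hypothetical pre-computed canny edge map
    result = pipe(
        "a portrait, painterly style",   # short prompt; the map drives the layout
        image=control,
        num_inference_steps=30,
        adapter_conditioning_scale=0.9,  # roughly ControlNet's "weight" slider
    ).images[0]
    result.save("out.png")
    ```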

Comments • 83

  • @ixiTimmyixi · a year ago · +5

    I cannot wait to apply this to my AI animations. This is a huge game changer. Using less in the text prompt area is a step forward for us. Having two images as the only driving factors should help a ton with cohesion/consistency in animation.

  • @clenzen9930 · a year ago · +3

    Guidance start is about

  • @BeatoxYT · a year ago

    Thanks for sharing this! Very cool that they’ve added this style option. Excited for your next video on connecting it with EbSynth. I’ll watch that next and see what I can do as well.

  • @snckyy · a year ago

    incredible amount of useful information in this video. thank YOU!!!!!!

  • @digital_magic · a year ago

    Great video :-) Thanks for sharing

  • @ramilgr7467 · a year ago

    Thank you! Very interesting!

  • @BoringType

    Thank you very much

  • @koto9x · a year ago

    Ur a legend

  • @CompositingAcademy · a year ago

    Really cool, thanks for sharing! I wonder what would happen if you put 3D wireframes into the ControlNet lines instead of the generated ones; it could be very temporally stable.

  • @J.l198 · a year ago · +2

    I need help: when I generate, the result is way different from the actual image I’m using.

  • @HopsinThaGoat · a year ago

    Ahhh yeah

  • @sidewaysdesign · a year ago · +1

    Thanks for another informative video. This style transfer feature already makes Photoshop’s Style Transfer neural filter look sad by comparison. It’s clear that Stable Diffusion’s open-source status, enabling all of these new features, is leaving Midjourney and DALL-E in the dust.

  • @zachkrausnick5030 · a year ago

    Great video! Trying to update my xformers, it installed a later version of PyTorch that no longer supports my CUDA build, and the version of xformers you used is no longer available. How do I fix this?
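
    A likely cause: upgrading xformers pulls in the newest PyTorch wheel, which may be built for a different CUDA than your driver (or with no CUDA at all). One possible fix, with illustrative version pins (the exact matching pair depends on your GPU and driver), is to reinstall torch from the CUDA-specific index and then a compatible xformers:

    ```
    pip uninstall -y torch xformers
    # torch 2.0.1 + xformers 0.0.21 are a matching pair for CUDA 11.8;
    # substitute whatever pair matches your setup.
    pip install torch==2.0.1 --index-url https://download.pytorch.org/whl/cu118
    pip install xformers==0.0.21
    ```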

  • @JeffFengcn · a year ago

    Hi, thanks for making these great videos on style transfer. I have a question: is there a way to change a person’s outfit based on an input pattern image, using style transfer and inpainting? Thanks in advance.
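
    One plausible recipe (a sketch under assumptions, not something shown in the video): mask the outfit and repaint only that region with an inpainting checkpoint, letting the prompt describe the target pattern (a style reference, as in the video, could bias it further). With diffusers it could look roughly like this; the file names and prompt are hypothetical:

    ```python
    # Sketch: repaint a masked outfit region with an inpainting model.
    import torch
    from diffusers import StableDiffusionInpaintPipeline
    from diffusers.utils import load_image

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
    ).to("cuda")

    person = load_image("person.png")     # hypothetical input photo
    mask = load_image("outfit_mask.png")  # white = region to repaint (the outfit)

    result = pipe(
        "a jacket with a blue floral pattern",  # describes the target pattern
        image=person,
        mask_image=mask,
    ).images[0]
    result.save("new_outfit.png")
    ```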

  • @User-pq2yn · a year ago · +3

    The color adapter works for me, but the style adapter does not. The Guidance Start value doesn’t change anything; the result is the same as when ControlNet is turned off. Please tell me how to fix this. Thank you!

  • @theairchitect · a year ago · +1

    I tried this new ControlNet extension and got no style in the generated result. I removed all prompts (using img2img with 3 ControlNet units active: canny + HED + t2iadapter with the clip_vision preprocessor). During generation this warning appears: "warning: StyleAdapter and cfg/guess mode may not works due to non-batch-cond inference", and the generated result comes out with the style not applied. Frustrating... I tried many denoising strengths in img2img and many weights on the ControlNet units without success; the style is never applied to the final result. I also tried enabling "Enable CFG-Based guidance" in the ControlNet settings, and it still doesn't work. Anyone else getting this same issue?

  • @christophervillatoro3253 · a year ago

    Hey, I was going to tell you how to get After Effects to pull in multiple PNG sequences and auto-crossfade them. You have to pull them in and make each EbSynth output folder its own sequence. Then right-click all the sequences and create a new composition; in that menu there is an option to crossfade all the imported sequences. Set it up to match your EbSynth settings. Voila!

  • @tonon_AI · a year ago

    Any tips on how to build this with ComfyUI?

  • @Fravije

    Hello. What about style transfer in images?