New IP Adapter Model for Image Composition in Stable Diffusion!

  • Published: 21 Mar 2024
  • The new IP Composition Adapter model is a great companion to any Stable Diffusion workflow. Just provide a single image, and the power of artificial intelligence will analyse the composition itself, ready for you to use!
    Check out some of the things you can do with it :)
    Want to support the channel?
    / nerdyrodent
    Links:
    huggingface.co/ostris/ip-comp...
    == More Stable Diffusion Stuff! ==
    * Faster Stable Diffusions with LCM LoRA - • LCM LoRA = Speedy Stab...
    * SD Generated Avatar Animation - • Create your own animat...
    * Installing Anaconda for MS Windows Beginners - • Anaconda - Python Inst...
    * ComfyUI Workflow Creation Essentials For Beginners - • ComfyUI Workflow Creat...
    * Video-to-Video AI using AnimateDiff - • How To Use AnimateDiff...
    * One image = A Consistent Character in ANY pose - • Reposer = Consistent S...
  • Science

Comments • 43

  • @ClownCar666
    @ClownCar666 1 month ago +4

    Thanks for sharing! I've been messing with IP Adapter all week; it's so much fun!

  • @LIMBICNATIONARTIST
    @LIMBICNATIONARTIST 1 month ago +2

    Impressive!

  • @Niffelheim
    @Niffelheim 1 month ago

    Hey Nerdy Rodent, thanks for the tutorial. Do you know if this can be used together with a pose ControlNet? I want to design a character from different views (front, back, profile) and maybe transfer a style or a LoRA character for consistency. Any tips?

  • @dudufridak1145
    @dudufridak1145 15 days ago

    I like the thumbnail for this video.
    I wonder if you can create an AI for generating similar images, compositing text (with effects) like that.

  • @Remianr
    @Remianr 1 month ago +4

    6:54 Meme material haha!

  • @godpunisher
    @godpunisher 1 month ago +2

    Nerdy's content is amazing. Are you a mind reader? 😁

  • @farsi_vibes_edit
    @farsi_vibes_edit 1 month ago +3

    I wish I had found your channel earlier😢🤯❤❤🔥

  • @kariannecrysler640
    @kariannecrysler640 1 month ago +2

    I saw the rodent in the sky!!!! I have the witnesses!
    🤘😉

  • @BabylonBaller
    @BabylonBaller 1 month ago +2

    Negative Prompt: "Bad Stuff Such as Evil Kittens" ROFL!

  • @holysabre8499
    @holysabre8499 1 month ago +1

    What I really want to see is a working update of Lucid Sonic Dreams, or something similar that's user friendly. Any idea of anything like that in the works, or how to achieve a similar effect using something else?

    • @NerdyRodent
      @NerdyRodent  1 month ago

      Lucid Dreams is slightly difficult on diffusion models 😞

  • @Jcs187-rr7yt
    @Jcs187-rr7yt 1 month ago

    Are there 1.5 models that this doesn't work with? I keep getting a 'header too large' error, which usually happens with a model mismatch, but I'm using the 1.5 adapter?

    • @NerdyRodent
      @NerdyRodent  1 month ago +2

      Inpainting models may not work, but just your standard ones should all be fine

  • @DemShion
    @DemShion 25 days ago

    Has anyone managed to get this working with a Pony checkpoint? It works with other models derived from SDXL, like Animagine and Jugg/RealVis, but not Pony for some reason. Curious if it's just me.

  • @KDawg5000
    @KDawg5000 1 month ago

    What preprocessor do you use when using this with Automatic1111?

    • @NerdyRodent
      @NerdyRodent  1 month ago +1

      It’s just the same as usual, like when using ip-adapter-plus or light

    • @KDawg5000
      @KDawg5000 1 month ago +1

      @@NerdyRodent Hmm. It was giving me error messages no matter what I tried. Note, regular IPAdapter and the Face ID versions work. I'm not at home currently, but can share the messages later (in case anyone cares or is having the same problem).

    • @reallifecheatcodeaudiobooks
      @reallifecheatcodeaudiobooks 1 month ago

      @@KDawg5000 Please let me know if you find a solution for this. I am also struggling to make it work.

  • @MarcSpctr
    @MarcSpctr 1 month ago +2

    Can you make a video on all your favorite AI tools and ComfyUI workflows?
    Like Google's Film Interpolation, Stable Diffusion, RVC WebUI, MusicGen, etc.

    • @comfyui
      @comfyui 1 month ago +1

      Complete Menu

  • @Sandy5of5
    @Sandy5of5 1 month ago +3

    No hugging cats? *giggles*

    • @NerdyRodent
      @NerdyRodent  1 month ago +6

      Cat should never be used in a prompt! 😱

  • @ramn_
    @ramn_ 1 month ago

    I installed it in Forge and it ruined my installation. Now it generates only deformed, random images. I tried everything and couldn't fix it; I will have to reinstall.

  • @wakegary
    @wakegary 1 month ago

    that tiger needs help and I think we should act on it.

  • @peoplez129
    @peoplez129 1 month ago

    Images come out all garbled on A1111

  • @Hooooodad
    @Hooooodad 1 month ago +1

    Mate, can you show how it's done in Automatic1111 / Forge, please?

    • @NerdyRodent
      @NerdyRodent  1 month ago +1

      Select the model and your composition image (like with ComfyUI). Win!

    • @Hooooodad
      @Hooooodad 1 month ago

      @@NerdyRodent I tried and failed miserably, doesn't work for me on Forge. Do you use a preprocessor?

  • @kallamamran
    @kallamamran 1 month ago

    Just feels like img2img

    • @NerdyRodent
      @NerdyRodent  1 month ago +2

      Or perhaps how you'd LIKE img2img to work, but it doesn't? :)

    • @MyAmazingUsername
      @MyAmazingUsername 1 month ago

      This absolutely isn't like img2img whatsoever.
      Img2img keeps the exact pixels, colors and exact layout.
      This new technique is extremely flexible and can do anything and will be more "inspired by" than "exactly the same as the input".

    • @kallamamran
      @kallamamran 1 month ago

      @@MyAmazingUsername Img2img definitely doesn't keep the exact pixels! If it did, img2img would be useless!

  • @ForeverNot-wv4sz
    @ForeverNot-wv4sz 1 month ago

    I can't seem to get it to work with Auto1111. It runs, but the image comes out very painted/pastel/distorted. The same thing happened to me on ComfyUI, until I downloaded the two encoders, CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors and CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors, and added them to the \ComfyUI\models\clip_vision folder; then it worked. So I thought maybe that's the issue with the Auto1111 version? However, I can't find where to put these two encoder files for Auto1111. I tried the extensions\sd-webui-controlnet\annotator\downloads\clip_vision folder, but that didn't work.
    I've also had issues just getting the IP Composition model to show up in the dropdown in the GUI. When I click on IP-Adapter in the control dropdown, it has ip-adapter-plus etc. but no composition one, unless I click the refresh button next to the model dropdown; then I can select ALL the models (even the ones not for IP-Adapter) and load it. But like I said, it's all foggy/blurry when I make the image. I have ControlNet v1.1.441 and my Auto1111 is v1.6.0. I'm not sure what else to do. EDIT: I just updated my Auto to v1.8.0 and I'm still having issues.
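For ComfyUI users hitting the same blurry-output problem, the fix described above boils down to putting the two CLIP vision encoders where ComfyUI looks for them. A small sketch of that step; the paths are assumptions, so adjust them to your own install and download locations:

```python
# Sketch: copy the two CLIP vision encoders into ComfyUI's clip_vision folder.
# COMFY_DIR and the source location are assumptions -- adjust to your setup.
from pathlib import Path
import shutil

COMFY_DIR = Path("ComfyUI")  # assumed install directory
ENCODERS = [
    "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors",
    "CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors",
]

dest = COMFY_DIR / "models" / "clip_vision"
dest.mkdir(parents=True, exist_ok=True)  # create the folder if missing

for name in ENCODERS:
    src = Path(name)  # wherever you downloaded the file
    if src.exists():
        shutil.copy2(src, dest / name)
        print(f"installed {name}")
    else:
        print(f"missing {name}: download it from Hugging Face first")
```

After restarting ComfyUI, the encoders should appear in the CLIP Vision loader's dropdown.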

    • @NerdyRodent
      @NerdyRodent  1 month ago

      Yes, you do unfortunately need to click refresh to get the full model list if you select the ipadapter filter. As for blurry images, I can’t find any way to replicate that in either Comfy or Forge 🫤

    • @ForeverNot-wv4sz
      @ForeverNot-wv4sz 1 month ago

      @@NerdyRodent Ah, I see... well, at least it's good to know the refresh feature is meant to work that way. Perhaps I need to switch from Auto to Forge; maybe that's the issue here.

    • @KDawg5000
      @KDawg5000 1 month ago

      Are you using a preprocessor? I put the two "composition" models in my ControlNet folder and get them to show up with a refresh, but I don't know which preprocessor to use. None of the ip-adapter ones I try do anything. Meaning, Automatic1111 just skips using ControlNet (like it does when your ControlNet settings don't make sense).

    • @reallifecheatcodeaudiobooks
      @reallifecheatcodeaudiobooks 1 month ago

      @@NerdyRodent Can you please let us know what preprocessor you use in Forge? I can't get this to work without the proper preprocessor, and if I choose ip-adapter_clip_sdxl or ip-adapter_clip_sdxl_plus_vith it gives errors and doesn't work :/

    • @NerdyRodent
      @NerdyRodent  1 month ago

      It's just the same SD1.5 CLIP vision model as normal, like you'd use with IP-Adapter Plus, IP-Adapter Light, IP-Adapter Full Face, etc.