Use Any Face EASY in Stable Diffusion. Ipadapter Tutorial.

  • Published: Feb 8, 2024
  • How to use IP-Adapter Face Plus v2 for Stable Diffusion to get any face without training a model or LoRA.
    Download models huggingface.co/h94/IP-Adapter...
    Patreon text & image guide / use-same-face-ip-98117124
    Prompt styles for Stable diffusion Automatic1111, ComfyUI & Vlad/SD.Next: / sebs-hilis-79649068
    Get early access to videos and help me, support me on Patreon / sebastiankamph
    Chat with me in our community discord: / discord
    Stable Diffusion for Beginners Playlist • Stable Diffusion Begin...
    My Weekly AI Art Challenges • Let's AI Paint - Weekl...
    My Stable diffusion workflow to Perfect Images • Revealing my Workflow ...
    ControlNet tutorial and install guide • NEW ControlNet for Sta...
    Famous Scenes Remade by ControlNet AI • Famous Scenes Remade b...
  • Hobby

Comments • 212

  • @sebastiankamph  3 months ago +2

    Text & image guide for Patreon supporters www.patreon.com/posts/use-same-face-ip-98117124

    • @LouisGedo  3 months ago

      👋

    • @brianckelley  3 months ago

      Dude. Why have you burned subtitles into this video!? You probably have no idea how distracting it is for people with ADHD. There's a big button to enable closed captions inside a YT video (which YT generates automatically) on every single platform, and it's turned off by default for a reason. I can't get anything from this tutorial with those subs burned in. Love the content, Sebastian. Been watching for a long time. But this one, and any others with burned subs, is a no-go for me and possibly others.
      Shame. I couldn't wait to dig into this one.

    • @kayinsho2558  2 months ago

      Great vid. Can you use the new Stable Diffusion WebUI Forge with this? Automatic1111 is a mess.

  • @avmsteve  3 months ago +16

    I had been aware of this extension for a few days, but after looking at some other guides (and being confused) I decided to wait for yours, which as anticipated was clear, concise and comprehensive. Thanks Sebastian.

  • @Shabazza84  1 month ago +1

    Awesome video once again. Here, have my sub.
    Absolutely love this IPAdapter thing. I only roughly knew what it does so far. That's a game changer.

  • @meadow-maker  3 months ago +2

    By the way, as usual this is a great tutorial. As I had to try reinstalling ReActor, I wanted to follow along with a tutorial to make sure I was doing it right. I couldn't find yours at first, and the alternative one was terrible, so I persevered until I found yours, which is great! You are the best!

    • @sebastiankamph  3 months ago +1

      I'm glad you finally found mine and that it helped you get things running! Good to know my guides are preferred :)

    • @meadow-maker  3 months ago

      @@sebastiankamph yeah, even with your dad jokes. 🤣

  • @sn0wbr33z3  3 months ago +4

    Good thing this is available for SD 1.5. Not everyone is able to catch up with SDXL.

  • @nikgrid  3 months ago

    Nice tute Seb! Thanks

  • @NAKOOT  3 months ago

    Thanks Sebastian, I used to do this with other methods, but this way is much cleaner. 🤘

    • @ronbere  3 months ago

      really?

  • @tag_of_frank  1 month ago

    Seems like a good way to test your input images before making a textual inversion or LoRA.

  • @TomiTom1234  3 months ago +1

    Great tutorial as always.
    I laughed at 8:46 😂😂

  • @KDawg5000  3 months ago +9

    One nice thing about using IP-Adapter first, to create the base image, is that it creates a good image to use ReActor on for a face swap. So if you want an image that very closely resembles yourself, just use ReActor to swap your face back on there. :)

    • @sebastiankamph  3 months ago +1

      That's actually... pretty clever 😅🌟

    • @ADZIOO  3 months ago +1

      This is an option, but what's the point if using this adapter with the LoRA significantly degrades the render quality? It's probably not worth it.

    • @Al-Storm  2 months ago

      Great tip. I'm getting better results with ReActor than using ControlNet.

  •  2 months ago +4

    The subtitles are really helpful!

    • @sebastiankamph  2 months ago

      Thank you for the feedback! Happy to hear it

  • @mbfcs2  2 months ago +1

    Wow! Thanks for this very useful video, such a good job! I'm a newbie and your videos are perfect for me.
    I've tried it and it's amazing, even though for now all the characters in the image look the same 😂

  • @notBeggingMattandLissy2PlayRE4  3 months ago +8

    Crazy how fast you've grown your channel! Congrats, it's of high quality.

    • @sebastiankamph  3 months ago

      Thank you! Any feedback on the video?

  • @mada_faka  3 months ago +1

    Hi sir, thank you so much for the video, you always bring the best Stable Diffusion tutorials.

  • @Elwaves2925  3 months ago

    If I met you in a dark alley at night it would be a pun-ishing time for both. 🙂
    I stumbled onto this through trial and error but was missing the settings to get good results. Cheers.

  • @adrianmunevar654  3 months ago

    Your sense of humor is making me subscribe 😅🤣😅🤣😅🤣 Of course, nice explanation as well

    • @sebastiankamph  3 months ago

      Welcome aboard! More dad jokes for the people

  • @raphalopes495  2 months ago

    First, thanks for the video, your channel is really helpful in keeping up to date with all the changes, and you explain everything very well.
    I've been using ReActor for a few months to generate a consistent face for a model, and I really like it. I decided to give this a try, expecting similar results with this method (but somehow better and/or more consistent); however, that's not what happened.
    The face swaps I'm getting with IP-Adapter are very different from the results I get with ReActor; it seems like another person, which makes this method not very useful for me (at least for this particular project, which already has a strongly defined face). Is this normal? Any tips to improve? For now I will keep using ReActor, since it's giving me better results.

  • @stanislavmalyshev5209  2 months ago

    It works well! Thank u :)

  • @qbcle  2 months ago

    I laughed for 3 minutes 🤣🤣🤣
    nice tutorial btw

  • @Oxes  3 months ago

    Finally, a really well explained tutorial that works perfectly!

  • @ShocktorGaming  3 months ago

    Great video!

  • @marathonour  2 months ago

    Hi! Thanks for the tutorial, it worked for me, but as soon as I tried to combine it with a second ControlNet for OpenPose (Unit 1), the quality of the face degraded a lot.
    How can we properly combine IP-Adapter with other ControlNet models?

  • @jombbobbo6291  2 months ago

    How does this compare to ReActor? That's the best one I've found, but of course I'm always looking for something that might be better.

  • @meadow-maker  3 months ago +4

    I've been playing with it and the main issue is that the LoRA can drastically change your image even if you set it to lower values. ReActor isn't dead yet. 😊

    • @Elwaves2925  3 months ago

      Yeah, some LoRAs work great but others effectively destroy the person. That's what you get with community-based content though, it's usually hit or miss in terms of quality.

  • @ziffano8940  3 months ago +1

    Great tutorial, 100% spot on. I'm new to SD. Is there a way to save the "outcome" of the face, so you can use it later?

    • @rwarren58  1 month ago

      We need an answer, good sir.

  • @DaKussh  1 month ago

    Any tips for choosing the most suitable sample images? Resolution? Background? Portrait angles? Number of samples?

  • @simile20  3 months ago

    Nice tutorial Sebastian. But why doesn't it work with batch images? Only with single image input.

  • @mostafamostafa-fi7kr  3 months ago

    very good

  • @gamingoutloud293  3 months ago

    Thanks for the video. I find it quite hard to get into SD. There are so many options; I have the feeling it's overcomplicated. We need a simpler UI.

  • @GodLikesMoe  3 months ago

    Are there any requirements for the quality or type of image used as an input? Also, does the number of input images matter? I can't get any good results. At times, you can't even recognize anything at all.

  • @GeekL30N  3 months ago

    It didn't impress me that much at first, but on the second generation :O it's awesome

  • @Sysshad  2 months ago

    Better than ReActor? Because that one is really good. And also much easier to set up.

  • @Airbag888  2 months ago

    I'm guessing this is not going to help for groups? I'm trying to recreate family pictures of us on the moon and more haha. Does this work with the fork that uses DirectML for AMD GPUs? It does not seem to be taking my pictures into consideration.

  • @niedermeier_online  1 month ago

    Thanks for your tutorial! But what can I do when I see this: AttributeError: 'NoneType' object has no attribute 'mode'?

  • @MauricetePas  18 days ago

    Don't forget to enable your ControlNet module, which happens very subtly here at 4:34.
    And I was just wondering why my images didn't look anything like me. 😂
    Other than that, great tutorial!! Thanks! 👍

  • @Aidenkeagan  2 months ago

    Is there a way we can generate images with the same face and clothing but in different poses?

  • @frankiesomeone  1 month ago +1

    Can ForgeUI do multi-input? I'm trying to use batch folder and batch upload, but I think they only use the first image.

  • @forestlong1  2 months ago +3

    ControlNet is behaving very strangely: the IP-Adapter v2 worked a couple of times and then began to produce completely worthless results, completely different from the sample photos that I uploaded.

  • @artist.zahmed  3 months ago

    Can you help me with training an SDXL model? I have a good PC with an RTX 4090, but I don't know how to train an XL model on my personal data.

  • @TeosAntagonian  2 months ago +9

    Hello Sebastian. Thank you very much for your insightful tutorials. I started playing with SD thanks to you. I just recently installed Forge as instructed in your video, but how do I install IP-Adapter Plus on Forge? I did the same as in this tutorial but I don't get the proper selection for the preprocessor. I can select IP-Adapter Plus for the model but not for the preprocessor.

  • @MrCRFultz  3 months ago +1

    I set it up exactly as he did, and it's not even close to looking like my input images. There seem to be a lot of how-to videos on SD, and none of them give the same results as posted. Moving on to the next one.

  • @eugenekhristo7252  22 days ago

    Why do Starting Control Step and Ending Control Step sometimes not affect the output render at all? And if I set 0.2 - 0.8 as in your example, I get zero resemblance to the reference images 😂😂😂 Also, should I change the extension from .bin to .pth, or does it not matter at the moment? Thanks

  • @Sing00525  28 days ago

    Hello Sebastian. Thank you for sharing. I tried both ReActor and the IP-Adapter on SD1.5 txt2img under the same prompt (with the LoRA when using ControlNet). When I use the IP-Adapter, the photo becomes very blurred, as if covered by glare. What parameters should I adjust?

    • @iceman1125  28 days ago

      Did you get faces that resemble the input? I am trying to use his method to get a resembling face, but mine are not at all close and seem generic.

  • @daxtv6168  6 days ago

    Do you have the link for the app you mentioned?

  • @CoconutPete  2 months ago

    wonder if this would work with SSD-1B?

  • @rd-cv4vm  1 month ago

    Hello, thank you for the tutorial.
    I am using Forge and I am unable to find the proper preprocessor in my list or online; I only have InsightFace+CLIP-H (IPAdapter). It isn't the same as yours.

  • @Warrioroffaith11  14 days ago

    How do you create a consistent body? I know how to do the face swap, but I can't seem to get a consistent body.

  • @forestlong1  2 months ago

    How does this work in img2img or Inpaint?

  • @dqschannel  28 days ago

    Do you know if this works with Fooocus?

  • @olavpettersen9465  3 months ago +4

    The FaceID portrait model is even better.

    • @sebastiankamph  3 months ago

      Hey! I'd love to know in what ways you find it better. Been playing with the latest 1.1 and not really seeing much (and limited to 1.5)

    • @olavpettersen9465  3 months ago

      @@sebastiankamph I get much better likeness, and it's super good for mixing faces. RealisticVision5, DDPM, 50-60 steps, 4-5 CFG, weight at 1.0, start-end at 0-1.0. I think that the input images are very important. I use five, and they're all 768x768, taken from almost the same angle and distance, with a little bit of space around the head. Using Comfy.
      I guess this all might depend on the subject. I haven't tried it with many different faces. I never had much luck with SDXL anyway :\

    • @lucianodaluz5414  3 months ago

      Is this a ControlNet thing? Can you share the link?

    • @olavpettersen9465  3 months ago

      @@lucianodaluz5414 Or just look at Sebastian's link in the description.

    • @benharris144  3 months ago

      May I ask what you mean by this?

  • @yosribengaidhassine9299  16 days ago

    Which is better, ReActor or IP-Adapter?

  • @Sviddenofficial  1 month ago +3

    Hey! Thanks for this great guide! Unfortunately it doesn't work for me, it just results in images of random people.

    • @lanoi3d  18 days ago

      Did you find a solution? I get the same. I also tried with and without the IP-Adapter and the results are the same. I think maybe this has recently broken? I only recently downloaded the latest versions of everything, but this IP-Adapter face thing doesn't seem to work.

  • @ameet21  2 months ago

    Could you please show us how to merge an SDXL trained model with another SDXL model, like we do with the SD 1.5 checkpoint merger, where in A we put our trained model, in B we put Photon or DreamShaper, in C a pruned checkpoint, and as the VAE 56000 or 84000?
    We want to see a tutorial on SDXL.

  • @darkazurr9891  26 days ago +1

    I got it all installed; I'm just trying to get it to look like my photos, but it comes out nothing like them XD. So I've got to play around and see what I did wrong. Great guide though, it looks great even though it's not me XD.

  • @botlifegamer7026  3 months ago

    So what is the best LoRA?

  • @RZ370z  2 months ago +2

    For some reason I am having difficulty adding the upgraded version of ControlNet to ForgeUI. Anyone else having this issue? It may be conflicting with the already built-in ControlNet?

  • @toon4367  2 months ago

    Great tutorial! But I have one problem: my Stable Diffusion generates 2 or more people 50% of the time. Is this a common problem?

  • @cai567890  2 months ago

    Can I use it on Intel graphics?

  • @jonathanedward5288  2 months ago

    How do you use IP-Adapter with Forge? I tried it on Forge but the results are not as good as in the original Auto1111.

  • @Beltramstein  2 months ago

    I think the preprocessors' names changed to InsightFace and CLIP ViT on the newest Forge.

  • @Oryon520  3 months ago +1

    Best settings:
    LoRA: 0.65
    ControlNet guidance (weight): 1.25
    Keep Start at 0 and End at 1.0
    (These settings are illustrated in the sketch after this thread.)

    • @sebastiankamph  3 months ago

      Interesting, thanks!

    • @10mmlover  3 months ago

      You're definitely on to something concerning lowering the weight of the LoRA. The higher the weight, the more it wants to zoom whatever picture you're generating into a portrait. If you lower the weight, it zooms out so there's more going on. (aka me riding a unicorn and wielding a sword riding into battle)
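
For anyone who prefers to script these numbers rather than set them in the UI, here is a minimal sketch of how they could be passed to a local AUTOMATIC1111 install through the sd-webui-controlnet API. The endpoint and top-level fields are standard, but the per-unit field names can differ between extension versions, and the preprocessor, model and LoRA file names below are assumptions based on the IP-Adapter FaceID Plus v2 SD 1.5 release, not something confirmed in the video.

```python
# Hedged sketch: "LoRA 0.65, ControlNet weight 1.25, start 0, end 1.0" sent to
# the AUTOMATIC1111 txt2img API with one ControlNet IP-Adapter unit.
import base64
import requests

A1111_URL = "http://127.0.0.1:7860"  # default local webui address

with open("face_reference.png", "rb") as f:
    face_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    # LoRA weight 0.65 goes straight into the prompt (LoRA filename is assumed)
    "prompt": "photo portrait of a person <lora:ip-adapter-faceid-plusv2_sd15_lora:0.65>",
    "negative_prompt": "blurry, deformed",
    "steps": 30,
    "cfg_scale": 7,
    "width": 512,
    "height": 768,
    "alwayson_scripts": {
        "controlnet": {
            "args": [
                {
                    "enabled": True,
                    "image": face_b64,
                    "module": "ip-adapter_face_id_plus",       # preprocessor shown in the video
                    "model": "ip-adapter-faceid-plusv2_sd15",   # assumed model name
                    "weight": 1.25,                             # "ControlNet guidance: 1.25"
                    "guidance_start": 0.0,                      # "Keep Start at 0"
                    "guidance_end": 1.0,                        # "... and End at 1.0"
                }
            ]
        }
    },
}

r = requests.post(f"{A1111_URL}/sdapi/v1/txt2img", json=payload, timeout=600)
r.raise_for_status()
images = r.json()["images"]  # list of base64-encoded PNGs
```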

  • @davidpuentes  3 months ago

    Maybe FaceFusion or Roop can help to reach that last step of similarity.

  • @levis89  3 months ago +1

    Do we need any trigger word in the prompt for the LoRA we have added?

    • @sebastiankamph  3 months ago +1

      No, it's weighted in when you add it like I did.

    • @levis89  3 months ago

      @@sebastiankamph gotcha, thanks for replying!

  • @donschannel9310  3 months ago +4

    It's not working when I use my model's face, whether with multiple images or even a single one.

    • @osrsdreambot4006  3 months ago +1

      Same here, this method doesn't work with multiple images.

  • @Mootai1  3 months ago

    I've been working in ComfyUI (and SDXL) for weeks. I think I installed the ReActor node just before we started to read everywhere about IP-Adapter... so I was wondering, about face swapping: is IP-Adapter better than ReActor, or is it mostly the same?
    Thanks if someone here has tried them both and could answer me! I'd like to be sure before I decide to uninstall ReActor and choose the other one instead.
    And thank you for your new video Sebastian!

    • @sebastiankamph  3 months ago +1

      One is not better than the other; they're different and have different use cases. ReActor with the InsightFace model is mostly used for realism.

    • @Mootai1  3 months ago

      Ok, good to know.
      I've only just started using it and I had the impression that ReActor had trouble matching the chosen face to the expression of the target face. But I'll have to do more tests to verify this.
      Thanks a lot for your reply! @@sebastiankamph

  • @vurmamivurdu  14 days ago

    Can this somehow work with Fooocus?

  • @musicandhappinessbyjo795  3 months ago

    Was this video re-uploaded? Anyway, could you do a video on this workflow using ComfyUI?

    • @sebastiankamph  3 months ago

      Yes, there was an error, had to reupload :)

  • @mihalisization  3 months ago

    What's the difference between this method and the old one where we use the ReActor extension? Are there any benefits to using IP-Adapter?

    • @sebastiankamph  3 months ago +1

      ReActor with its InsightFace model is best at photorealism, whereas IP-Adapter can do any style.

    • @mihalisization  3 months ago

      Thanks @@sebastiankamph, you are right!

  • @dadbrasil  2 months ago

    What is the full name of the extension? Typing ControlNet gives me a billion results.

  • @blademarketing  2 months ago +1

    This drives me crazy. I keep getting the error "Exception: Insightface: No face found in image." It's like ControlNet doesn't really do anything, and I made sure it's enabled and uploaded 6 photos, 1000x1000, with a very clear face in them... any ideas?

  • @Nutronic  2 months ago

    Where's your baseball cap from?
    I need a new one 😊

  • @blackfollowersshow  1 month ago

    Can someone help me with this please? When I choose the multiple option, this is what appears: "loadsave.cpp:1121: error: (-215:Assertion failed) !image.empty() in function 'cv::imencode'"

  • @Umermehmood-jo6gv  3 months ago

    How do you get all the Fooocus styles in A1111?

  • @Lw24_AI  1 month ago

    How much video memory do you need for these ControlNet models? I have 12 GB of video memory and it still can't handle it.

    • @sebastiankamph  1 month ago

      12GB should be fine. Have you tried lowering your resolution?

    • @Lw24_AI  1 month ago

      @@sebastiankamph 512x720. That is on the 1.5 models; on SDXL there is not enough memory at all with the Juggernaut model.

  • @waurbenyeger  2 months ago +1

    I'm using Forge and I don't have the FaceID Plus preprocessor... where can I find that and where do I put it?

    • @zoro_uchiha777  1 month ago +1

      Did you find a solution?

    • @waurbenyeger  1 month ago

      @@zoro_uchiha777 With Forge, the default IP-Adapter preprocessor called "InsightFace+CLIP-H (IPAdapter)" works just fine. Just follow what he does with everything else and it will work. Also, you might not need to use the LoRA, but if you do, I find that switching the number at the end from 1 to 0.4 gives better results.

  • @0oORealOo0  2 months ago

    Isn't ReActor better?

  • @KINGLIFERISM  3 months ago +1

    Brother, this is old, and I say that with respect. I assumed you were talking about InstantID. I hope you try that and make a video.

  • @VirgilLucaRusan  2 months ago +1

    Man, ControlNet is not working at all for me. I have installed the extension and all of its models, uploaded them and everything, but Stable Diffusion is completely ignoring it. Yes, I pressed enable and did everything from the videos. Please, any advice, or a Discord server, or anyone with whom I can share my screen so they can maybe see where the problem is?

    • @devillmay  17 days ago

      I had the same case, but I resolved it after playing with the parameters. The given parameters for ip-adapter-faceid-plusv2-sdxl didn't work for me, so I went with ip-adapter-faceid-plusv2-sd15. It didn't work with the parameters Sebastian gave in the video either; it started working after I changed the CFG from 1.5 to 5 and the sampling steps to 20. Only then did my SD start recognizing ControlNet. I was able to revert to what Sebastian suggested in the video once it started working. My best results came with: ip-adapter-faceid-plusv2-sd15, sampling steps 30, CFG 8, Control Weight 1.0.
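
Expressed in the same txt2img-payload form as the sketch earlier in the comments, the combination this commenter reports working would look roughly like the following; the module and model names are assumptions, and the field names may vary by ControlNet extension version.

```python
# Hedged sketch of the settings devillmay reports working (SD 1.5 FaceID Plus v2):
# sampling steps 30, CFG 8, ControlNet control weight 1.0.
working_overrides = {
    "steps": 30,
    "cfg_scale": 8,
    "alwayson_scripts": {
        "controlnet": {
            "args": [
                {
                    "enabled": True,
                    "module": "ip-adapter_face_id_plus",       # preprocessor (assumed name)
                    "model": "ip-adapter-faceid-plusv2_sd15",  # SD 1.5 model, not the SDXL one
                    "weight": 1.0,                             # "Control Weight 1.0"
                }
            ]
        }
    },
}
```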

  • @gnome2024  3 months ago

    I have an older computer so I can't run SD on it... is there any way to do this online? Thanks!

    • @sebastiankamph  3 months ago

      I generally recommend ThinkDiffusion. I am biased, however, since they sponsor some of my videos.

    • @aa-nw5mq  3 months ago

      @@sebastiankamph Is that cheaper than RunPod?

    • @sebastiankamph  3 months ago

      @@aa-nw5mq RunPod is probably cheaper, but then you have to set it up yourself. TD comes preinstalled, ready to go.

    • @aa-nw5mq  3 months ago

      @@sebastiankamph thanks a lot

  • @bassieboot2120  2 months ago

    I'm trying to use it with Forge but it won't see the .bin files.

  • @MisterWealth  3 months ago

    My images turn out to be an absolute mess; it isn't working at all. Anyone know why? I have the exact same version of ControlNet, same models, same resolution, same sampling method with the same steps. Same model, but the faces are just glitchy.

  • @aymanekochaina4343  23 days ago

    What happens when you set the preprocessor resolution to 1024 instead of 512 when using an SDXL checkpoint?

  • @YungWH1T3B0Y  2 months ago +1

    I have Forge UI and I don't know how to get this working. Preprocessors for FaceID don't appear.

  • @ijayraj  3 months ago +1

    5:38 Could you please make a video on styles? Your Patreon page is paid, so it would be helpful for free users. Thanks for the tutorial, following you closely 👏😁

  • @Hooooodad  3 months ago

    This is amazing, can you please make a video on the new Automatic1111 Forge?

  • @anotherdimension2915  28 days ago

    Why is my local Automatic1111 setup missing the DPM++ 2M SDE Karras sampling method? I only have DPM++ 2M. First-time fresh install this April 2024, someone please, any ideas...

    • @devillmay  17 days ago

      I have the same issue. Any idea @sebastiankamph ?

  • @filmyentity  3 months ago

    I followed the same steps but it's not working as shown in the video; the resemblance is not matching at all.

  • @josephwestphal292  2 months ago

    I use A1111 and in ControlNet it does not show me the models beside the preprocessor. What could I have done wrong? ControlNet is 1.1.440, I restarted everything multiple times... I put the folder into stablediffusion > models > stable-diffusion and extracted them there.

    • @sebastiankamph  2 months ago +1

      For A1111, the ControlNet models folder is stable-diffusion-webui\extensions\sd-webui-controlnet\models (a quick way to check this is sketched after this thread).

    • @josephwestphal292  2 months ago

      Thx a lot! I will try again @@sebastiankamph
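
As a quick sanity check of that folder layout, here is a small sketch that copies the downloaded IP-Adapter files to where A1111 typically expects them and then lists what the ControlNet dropdown should see. Only the sd-webui-controlnet models path comes from the reply above; the install location, download location, LoRA folder and file names are assumptions, so adjust them to your setup.

```python
# Hedged sketch: place the IP-Adapter files and list what ControlNet will find.
import shutil
from pathlib import Path

webui = Path(r"C:\stable-diffusion-webui")               # adjust to your install location
controlnet_models = webui / "extensions" / "sd-webui-controlnet" / "models"
lora_dir = webui / "models" / "Lora"                     # assumed home for the companion LoRA
downloads = Path.home() / "Downloads"                    # assumed download location

# Assumed file names from the IP-Adapter FaceID Plus v2 SD 1.5 release
files = {
    "ip-adapter-faceid-plusv2_sd15.bin": controlnet_models,
    "ip-adapter-faceid-plusv2_sd15_lora.safetensors": lora_dir,
}

for name, target in files.items():
    target.mkdir(parents=True, exist_ok=True)
    src = downloads / name
    if src.exists():
        shutil.copy2(src, target / name)

# Hit the refresh button next to the ControlNet model dropdown afterwards.
print("ControlNet models:", sorted(p.name for p in controlnet_models.iterdir()))
print("LoRAs:", sorted(p.name for p in lora_dir.iterdir()))
```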

  • @mangashba  2 months ago

    Does this work in Forge? The preprocessor doesn't seem to appear there.

    • @DannySmith-bg1pk  1 month ago

      I have the same issue. Were you able to find a fix for this?

    • @mangashba  1 month ago +1

      @@DannySmith-bg1pk nope, I gave up on it

  • @RedBalloonArtWorks  1 month ago

    When I follow the steps, it starts downloading a 2.35 GB file first, to the following path:
    stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads\clip_vision\clip_h.pth

  • @benharris144  3 months ago

    It's great when it works, but for some reason it just stops working randomly. I don't know what causes this.

  • @christopherniv1755  3 months ago

    Do I need to delete ReActor first?

  • @filterophilicxx5914  2 months ago

    Why did it process only one picture? I added multiple pictures and got: "ControlNet - WARNING - Insightface: More than one face is detected in the image. Only the first one will be used."

  • @NRubric  2 months ago

    For me the ControlNet "Preprocessor" dropdown didn't show "ip-adapter_face_id_plus" or anything with ip-adapter.
    I only get "InsightFace+CLIP-H (IPAdapter)", "CLIP-ViT-bigG (IPAdapter)" and "CLIP-ViT-H (IPAdapter)".

    • @Oryon520  2 months ago

      Do you use Forge?

    • @user-yf4fh6bd7t  16 hours ago

      @@Oryon520 Same for me, and I use Forge.

  • @TheGalacticIndian  3 months ago

    🎖🎖

  • @johncloud998  24 days ago +1

    ReActor is definitely easier and more relevant for beginners.

    • @sebastiankamph  21 days ago

      ReActor is sadly low resolution and only works well on photorealism.

  • @Necksteppa77  2 months ago

    Hey, I don't know what I'm doing wrong. I followed it step by step, but my images turn into messy kaleidoscopes as soon as I enable ControlNet. Any ideas what I'm doing wrong?

  • @gpatil4456  1 month ago

    How do I install it, bro? Please make a tutorial.

  • @Al-Storm  2 months ago

    I don't have any of those preprocessors?

  • @NithinBalakrishnanIsOnline  3 months ago +9

    Speaking of owning ducks - Did you hear about the Dr. Duck who got arrested?
    Apparently he was a Quack