NEW ControlNet for Stable diffusion RELEASED! THIS IS MIND BLOWING!

  • Published: 27 May 2024
  • ControlNet can transfer any pose or composition. In this ControlNet tutorial for Stable diffusion, I'll guide you through installing and using ControlNet. ControlNet is a neural network structure that controls Stable diffusion models by adding extra conditions.
    Open cmd and type: pip install opencv-python (see the quick install check after the links below)
    Extension: github.com/Mikubill/sd-webui-...
    Updated 1.1 models: huggingface.co/lllyasviel/Con...
    1.0 Models from video (old): huggingface.co/lllyasviel/Con...
    FREE Prompt styles here:
    / sebs-hilis-79649068
    How to install Stable diffusion - ULTIMATE guide:
    • Stable diffusion tutor...
    Chat with me in our community discord: / discord
    Support me on Patreon to get access to unique perks!
    / sebastiankamph
    The Rise of AI Art: A Creative Revolution
    • The Rise of AI Art - A...
    7 Secrets to writing with ChatGPT (Don't tell your boss!)
    • 7 Secrets in ChatGPT (...
    Ultimate Animation guide in Stable diffusion
    • Stable diffusion anima...
    Dreambooth tutorial for Stable diffusion
    • Dreambooth tutorial fo...
    5 tricks you're not using
    • Top 5 Stable diffusion...
    Avoid these 7 mistakes
    • Don't make these 7 mis...
    How to ChatGPT. ChatGPT explained:
    • How to ChatGPT? Chat G...
    How to fix live render preview:
    • Stable diffusion gui m...
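
    Quick install check (a minimal sketch; assumes the pip install above ran in the same Python environment the WebUI uses):

        # verify_opencv.py - confirm opencv-python is importable
        import cv2
        print(cv2.__version__)  # prints the installed OpenCV version, e.g. 4.x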

Comments • 505

  • @sebastiankamph
    @sebastiankamph  1 year ago +4

    Download Prompt styles: www.patreon.com/posts/sebs-hilis-79649068
    Please support me on Patreon for early access videos. It will also help me keep creating these guides: www.patreon.com/sebastiankamph

  • @IIStaffyII
    @IIStaffyII 1 year ago +510

    This is the reason that its so important that Stable Diffusion is open source.

    • @losttoothbrush
      @losttoothbrush 1 year ago +20

      I mean its cool yeah, but doesnt it steal art from Artist that way?

    • @JorgetePanete
      @JorgetePanete 1 year ago +1

      it's*

    • @IIStaffyII
      @IIStaffyII 1 year ago +54

      ​@@losttoothbrush
      Open source just means people can access the source code and therefore add to the tool.
      Being open source is not directly contributing to the "stealing" issue. Although indirectly it can make it more accessible.
      In the end it's a tool and I'd argue what you make with it may be transformative work or not.

    • @Mimeniia
      @Mimeniia 1 year ago +12

      People "artists" cling to their prompts like their lives depend on it.
      Asking them to share is like squeezing blood from a stone.

    • @verendale1789
      @verendale1789 1 year ago +30

      @@losttoothbrush Well, y'know, if we're gonna steal art, at least make it public and for everyone instead of one big corpo having the goods, hell yea brotha

  • @user-zv6su5cp6o
    @user-zv6su5cp6o 11 months ago +1

    Man, you are incredible! So good and simple. I installed Stable Diffusion with one of your videos, and now I'm ready to install ControlNet. I am officially your fan!! Thanks for everything!! Greetings from Corfu, Greece

  • @woszkar
    @woszkar 1 year ago +8

    This is probably the most useful thing for SD. Thanks for showing us!

  • @GrandHorseMusic
    @GrandHorseMusic 1 year ago +14

    Thank you, this is really helpful. My "pencil sketch of a ballerina" had three arms and no head, but eventually I generated something usable. It's all absolutely fascinating and it's been fun to learn over the past week or so.

    • @sebastiankamph
      @sebastiankamph  1 year ago

      Glad it was helpful! And we've all struggled with the correct amount of body parts 😅

  • @sharadrbhoir
    @sharadrbhoir 1 year ago +195

    As a Drawing Teacher with 33 years of experience teaching school kids how to draw and paint, one thing is for sure: AI cannot replace human creativity. But I must say this will surely help so many people with poor drawing skills unleash their creative thoughts and imagination! Which, for a teacher like me, gives immense hope of a revolution in the arts!
    Thanks for such an easy and helpful tutorial on this topic!

    • @mike_lupriger
      @mike_lupriger 1 year ago +8

      @@ClanBez Same, I see the possibility to work on multiple projects as a designer. Tedious parts of the process are getting automated. Super excited, keep exploring!! Will get more time for vacation, well I hope! 🤞 PS: In my area, a high school art teacher is referred to as a Drawing teacher and college art teachers are referred to as Art teachers. Yeah, it's a little weird.

    • @rushalias8511
      @rushalias8511 1 year ago +16

      Honestly refreshing to see some people be so open-minded about this. AI art is often viewed as a job killer, but honestly speaking, look at so many incidents from the past. When digital art first started, I'm sure millions of artists who worked hard with paint, pencils, ink and every other form of real-life art felt threatened by it.
      Why pay a guy to paint a logo for you when you can use a paint tool? Among so much other stuff.
      But look what happened: digital art is so common now because it's quicker, cheaper and more flexible. If you made a mistake in a real-life painting, you didn't have an undo button or an eraser.
      Just like digital art gave so many new individuals a chance to make art, so too does AI. It's all in how you use it.

    • @pedrovitor5324
      @pedrovitor5324 1 year ago +7

      People feel threatened because a lot of artists still live off commissions (btw, they aren't wrong for doing that, it's "easy money"). When you're a teacher in an art school, it's easy not to feel threatened by AI art.
      Don't get me wrong, I'm not here to sound mad or anything, I'm just telling the truth. I agree AI art will revolutionize the way we think about creativity, and I also think it won't destroy art (at least not completely); people will still have their communities of non-AI art. But it's undeniable that AI art has tons of legal issues, and the AI is pretty bad right now. Only very rarely was I unable to spot whether an artwork was AI or not.

    • @viquietentakelliebe2561
      @viquietentakelliebe2561 1 year ago

      yeah, but it can sure enhance what skill you have yet to acquire or lack the talent for

    • @lilacbuni
      @lilacbuni 1 year ago +5

      @@viquietentakelliebe2561 How can u enhance a skill ur not practising? drawing a squiggle then letting ai complete the work based off actual artist's work isn't YOUR imagination or skill and u still learn nothing. ur not doing any of the work the ai is

  • @marcelqueiroz8613
    @marcelqueiroz8613 1 year ago +7

    Really cool. Things are evolving pretty fast! Thanks

    • @sebastiankamph
      @sebastiankamph  1 year ago +1

      Right? This is moving extremely fast. I'm hyped for what's more to come! 🌟

  • @1salacious
    @1salacious 1 year ago +1

    Another good, easy-to-follow tutorial, thanks Seb 👍

  • @agusdor1044
    @agusdor1044 1 year ago +7

    This is gold, and I'm talking about your video, dude. Really well explained, very detailed, thanks a lot!

    • @sebastiankamph
      @sebastiankamph  1 year ago +1

      Why thank you for the kind words, that's really thoughtful of you 😊🌟

  • @justinwhite2725
    @justinwhite2725 1 year ago +10

    This looks amazing. My drive is full but I definitely want to play more with this.

    • @sebastiankamph
      @sebastiankamph  1 year ago +3

      Throw away the other models and get this, it's fantastic! If you only have space for one, get the canny model.

    • @justinwhite2725
      @justinwhite2725 1 year ago +2

      @@sebastiankamph I'm going to get a new HD after work today, 2TB or so. My Stable Diffusion folder is 500GB.
      I'm also a little nervous since I have an AMD card; I'm not sure if this will work on the CPU, but I'm working on building a new computer soon.

  • @Jaxs_Time
    @Jaxs_Time 1 year ago

    Brah, your camera is so nice..... Love to see the commitment to your craft. Keep it up fam

  • @artistx8512
    @artistx8512 1 year ago

    I messed with this already... seems like the first step to something amazing!

  • @MONGIE30
    @MONGIE30 1 year ago

    Set this up yesterday, it's pretty amazing

  • @Dante02d12
    @Dante02d12 1 year ago +2

    The pose algorithm is EXACTLY what I've been looking for. Thanks for this video!
    Hopefully I'll manage to install it. Last time I tried to use extensions, Stable Diffusion just refused it and I had to reinstall everything, lol.
    EDIT: Ok, I installed it, and it works! Sadly, the Open Pose model seems... capricious. It often doesn't give me any skull. The Depth Map works wonderfully though.

  • @dommyshan
    @dommyshan 1 year ago +2

    That is really awesome :D Gonna try the scribble! I've been having horribly varied results of deformed humans and I was getting sick of it. Haven't touched SD since. Now this changes things! :D

  • @VIpown3d
    @VIpown3d 1 year ago +22

    This is the second best thing right after Ikea Köttbullar

  • @bongkem2723
    @bongkem2723 10 months ago +1

    Great video on ControlNet, man, thanks a lot!!

  • @blackswann9555
    @blackswann9555 1 year ago

    Installing ControlNet!!!! Eeeeeek, great tutorial, so much fun!

  • @TonyRobertAllen
    @TonyRobertAllen 1 year ago

    Super helpful content man, thank you for making it.

  • @jubb1984
    @jubb1984 1 year ago +7

    Thanks for this well-put-together tutorial on how to get it going!
    This is kinda what I was hoping for, turning my b&w line art into AI-generated images =D, lotsa scribbles, here I come!

  • @BryGuy_TV
    @BryGuy_TV 11 months ago

    Controlnet is insane. Thanks for the examples

  • @MaxWeir
    @MaxWeir 1 year ago +1

    I had Pingu vibes at the end, this is quite an amazing update.

  • @daconl
    @daconl 1 year ago +42

    If you want to use the source image as ControlNet image, you don't have to load the ControlNet image separately (it will automatically pick the source image when no image is selected). Saves some time. 🙂

    • @Naundob
      @Naundob 1 year ago

      I wonder why img2img is used at all since ControlNet is meant to do the job now instead of the old img2img algorithm, right?

    • @superresistant8041
      @superresistant8041 1 year ago +1

      @@Naundob ControlNet can create from something whereas img2img can create from nothing.

    • @Naundob
      @Naundob 1 year ago

      @@superresistant8041 Interesting, isn't img2img meant to create a new image from an image instead from nothing?

    • @daryladhityahenry
      @daryladhityahenry 1 year ago +2

      Please please please finish this argument... I don't understand what you both are talking about hahahaha. And give a conclusion please. Thanksss

    • @ikcikor3670
      @ikcikor3670 1 year ago +4

      @@Naundob img2img gives you way less control: basically you pick a "denoising strength" which at 0.5 essentially tells the AI "this is a 50%-done txt2img image, halfway between random noise and the desired result; continue working on it until the end", so you have to look for the golden middle between your image not changing at all and changing way too much. ControlNet can be used both in txt2img and img2img, and it has many powerful features like drawing very accurate poses, keeping lineart intact and turning simple scribbles into actual art (where with normal img2img you'd end up with either an ugly result or one that doesn't resemble the doodle at all).

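      (In code terms, roughly the same knob appears as the strength argument in a diffusers img2img call. A minimal sketch, assuming a 1.5-era checkpoint and illustrative file names:)

          # Minimal img2img sketch with Hugging Face diffusers.
          import torch
          from diffusers import StableDiffusionImg2ImgPipeline
          from PIL import Image

          pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
              "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
          ).to("cuda")

          init_image = Image.open("doodle.png").convert("RGB").resize((512, 512))

          # strength ~ denoising strength: 0.0 returns the input, 1.0 ignores it
          result = pipe(
              prompt="a photorealistic dog",
              image=init_image,
              strength=0.5,  # the "golden middle" described above
          ).images[0]
          result.save("out.png")
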
  • @Argentuza
    @Argentuza 10 months ago

    You have taught me so much, thank you very much!

  • @cassiosiquara
    @cassiosiquara 1 year ago

    This is absolutely amazing! Thank you so much!! s2

  • @Refused56
    @Refused56 1 year ago +27

    Since I've been playing with ControlNet I am in a constant state of awe and disbelief 😮 Truly game-changing. What I really like is the possibility of rendering higher-resolution images with that much control. Does anyone have a tip on applying a certain color scheme when using ControlNet? Probably something we have to wait for until the next SD revolution hits. So roughly 5 days... (me making sounds of pure excitement and slight fatigue at the same time).

    • @sebastiankamph
      @sebastiankamph  1 year ago +1

      Hah, I totally feel you. I'm hyped for every new update, and then I look at the list of all the videos I want to do.

    • @deadlymarmoset2074
      @deadlymarmoset2074 1 year ago +5

      Try using the base picture in img2img for the colors and tone you want, with a denoising strength of like 0.7+
      (it can be of a completely unrelated subject and a different aspect ratio).
      Then set the text prompt to the subject you want. Additionally, you can set the base ControlNet image to the pose and subject you're looking for.
      This is creating a relatively new image, however, not color grading an existing one. Still, it is an interesting way to control the general vibe and keep consistent colors between renders.
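
      (A rough diffusers sketch of that recipe; the canny ControlNet and all file names here are illustrative assumptions:)

          # img2img supplies color/tone, ControlNet supplies structure.
          import torch
          from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
          from PIL import Image

          controlnet = ControlNetModel.from_pretrained(
              "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
          )
          pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
              "runwayml/stable-diffusion-v1-5",
              controlnet=controlnet,
              torch_dtype=torch.float16,
          ).to("cuda")

          color_ref = Image.open("color_reference.png").convert("RGB").resize((512, 512))
          edges = Image.open("canny_edges.png").convert("RGB").resize((512, 512))

          image = pipe(
              prompt="portrait of a knight, dramatic lighting",
              image=color_ref,      # img2img input: sets palette and tone
              control_image=edges,  # ControlNet input: sets composition/pose
              strength=0.7,         # the high denoising strength suggested above
          ).images[0]
          image.save("graded.png")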

    • @sergiogonzalez2611
      @sergiogonzalez2611 1 year ago

      @@sebastiankamph SEBASTIAN, GREAT CHANNEL AND CONTENT. I have a doubt: does this extension work with Stable Diffusion 1.5 models?

    • @sebastiankamph
      @sebastiankamph  1 year ago

      @@sergiogonzalez2611 Works with all models; the majority of my testing has been on 1.5.

    • @prettyawesomeperson2188
      @prettyawesomeperson2188 1 year ago

      I'm having trouble getting it to work. I'm lost. I tried, for example, scribbling a poorly drawn dog and prompting "A photorealistic dog" (with openpose, canny, depth), and the only time I got a photorealistic dog was when it output a black image; otherwise it just spits out a 3D image of my scribble. Hope that made sense.

  • @dhavalpatel3455
    @dhavalpatel3455 1 year ago

    Thanks for explaining this.

  • @jameshughes3014
    @jameshughes3014 1 year ago +5

    I feel silly, but I hadn't tried this yet because I don't have 50 gigabytes of free drive space. It didn't occur to me that I could just install part of them. This is truly amazing stuff, I'm looking forward to seeing how animations look with this tool.

  • @coloryvr
    @coloryvr 1 year ago +4

    ...WOW! ...the next growth spurt of SD... People say AI makes us stupid, but I haven't learned so much since AI crashed into my life... Big FANX for keeping us up to date!

    • @sebastiankamph
      @sebastiankamph  1 year ago

      So much new information entering our heads 😅 Thanks for the support! 🌟

    • @conorstewart2214
      @conorstewart2214 1 year ago

      AI does and will make people stupid, in the sense that they don't need to learn anything themselves; they just ask an AI to do it for them. You are learning because you are interested in it and it is new. Once it becomes more prevalent it will most likely stop being open source, and people will just be interested in the results, not how it works.

    • @coloryvr
      @coloryvr 1 year ago

      I agree with many things and I think that children should not have access to generative AIs until a certain age (16?). However, I have no idea how you would remove open source software from millions of private PCs (?).
      My biggest concern is that AIs will greatly increase the general smartphone addiction.
      (I don't have one myself and don't want one either.)
      But: I love "painting" and filming in VR... and thanks to the new AIs, I now have the potential of an entire animation studio at my own disposal... BTW:
      The absolute nightmare is AIs that develop weapons, toxins, etc., as well as the AI-based mind-reading technology that is already pushing onto the markets...

  • @nackedgrils9302
    @nackedgrils9302 1 year ago +1

    Thanks for sharing your experience! I'd kind of given up on SD because my computer is way too slow (5-10min to generate a 512x512 Euler a image) but when I came back to the community last week, everyone was creaming their panties over Controlnet and I had no idea why. Thanks to your explanation, now I kind of understand but I guess I'll have to try it myself some day once I can afford a better computer.

  • @dancode9738
    @dancode9738 1 year ago

    got it working, great video.

  • @Dessme
    @Dessme 1 year ago

    The audio is SUPER👌👍

  • @namds3373
    @namds3373 1 year ago

    amazing video, thanks!

  • @robcorrina8897
    @robcorrina8897 1 year ago

    I had difficulty cutting through the jargon. thanks man.

  • @EmanueleDelFio
    @EmanueleDelFio 1 year ago +2

    Thanks Seb! You are my Obi-Wan Kenobi of AI!

    • @sebastiankamph
      @sebastiankamph  1 year ago

      Thank you as always my friend! Your supportive attitude is a national treasure 🌟

  • @matthallett4126
    @matthallett4126 1 year ago

    Very helpful.. Thank you!

  • @jonathaningram8157
    @jonathaningram8157 1 year ago +1

    I'm convinced the future of AI-generated pictures will be a mix with 3D models. Like, you do a precise pose in 3D and apply Stable Diffusion on it, so that it has precise information about depth in the scene, and that will achieve a truly photorealistic render.

    • @martiddy
      @martiddy 1 year ago

      You can do that already with ControlNet

  • @CoconutPete
    @CoconutPete 2 months ago

    controlnet is king from what I can tell.. so far

  • @jzwadlo
    @jzwadlo 1 year ago

    Great video thank you brother!

  • @rayamc607
    @rayamc607 1 year ago

    It'll be so much better when somebody actually puts a proper UI on all of this.

  • @noonelivesforever2302
    @noonelivesforever2302 1 year ago +1

    Ooohhh, someone that explains things like they should be done. Ty

  • @doze3705
    @doze3705 1 year ago +7

    I'm trying to find a way to have SD include character accessories accurately and consistently. Like having a character holding a Gameboy, or some other specific device. Would love to see a video breaking down how to train SD on specific objects, and then how to include those objects in a scene.

  • @ArtbyKurtisEdwards
    @ArtbyKurtisEdwards 1 year ago

    another awesome video. Thanks!

  • @wowclassicplus
    @wowclassicplus 1 year ago

    Thanks a lot. Only works with 1.5 though. But I found out, so all good :)

  • @Agent-Spear
    @Agent-Spear 1 year ago

    This is really a Game Changing feature!!!

  • @paulgomez3318
    @paulgomez3318 1 year ago

    Thank you for this mate

  • @emmettbrown6418
    @emmettbrown6418 1 year ago +1

    For the Openpose, is there a way to get the coordinates of the joints in the pose?
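
    (The extension itself only renders the skeleton image, as far as I can tell. One hedged workaround is to run a pose estimator yourself and read out the landmark coordinates. A sketch using MediaPipe, which is a different detector than OpenPose and used here purely for illustration:)

        # Extract joint (landmark) pixel coordinates with MediaPipe Pose.
        import cv2
        import mediapipe as mp

        image = cv2.imread("pose.png")  # illustrative input file
        h, w = image.shape[:2]

        with mp.solutions.pose.Pose(static_image_mode=True) as pose:
            results = pose.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

        if results.pose_landmarks:
            for i, lm in enumerate(results.pose_landmarks.landmark):
                # landmarks are normalized to [0, 1]; scale to pixels
                print(f"joint {i}: ({int(lm.x * w)}, {int(lm.y * h)})")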

  • @fynnjackson2298
    @fynnjackson2298 1 year ago +2

    For storyboarding this is insane.

  • @dustyday837
    @dustyday837 8 months ago

    another great video!

  • @emmasnow29
    @emmasnow29 1 year ago +21

    This is an AMAZINGLY useful tool. Another big step for A.I art.

    • @sebastiankamph
      @sebastiankamph  1 year ago +3

      Couldn't agree more! Real game changer 🌟🌟🌟

  •  1 year ago +1

    Does the preprocessor always have to match the ControlNet model? I was using it with mostly no preprocessor selected and it seems to still work? I thought it was only an optional thing which allows you to create an additional pass.
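
    (Context for why "none" can still work: the preprocessor just converts an ordinary image into the control map the model expects, so if your input already is such a map, no preprocessor is needed. A sketch of roughly what the canny preprocessor does; file names and thresholds are illustrative:)

        # Turn a photo into the edge map the canny ControlNet model expects.
        import cv2

        img = cv2.imread("photo.png", cv2.IMREAD_GRAYSCALE)
        edges = cv2.Canny(img, 100, 200)  # low/high thresholds, typical defaults
        cv2.imwrite("edge_map.png", edges)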

  • @GrayFates
    @GrayFates 1 year ago

    Does Stable Diffusion rely on metadata created when it generates the sketch, or on the original image, to generate the reposed image? I'm wondering because I think it would be interesting to upload hand-drawn sketches for the pose sketch and have Stable Diffusion redraw an image based on that.

  • @roger7641
    @roger7641 1 year ago

    How challenging would it be to add your own training data (not sure if that's the correct term) for this stack to use?
    Let's say I was getting too much of a certain style, and I'd like to do something totally different.

    @eddybeghennou8682 1 year ago
    @eddybeghennou8682 Год назад

    amazing thanks

  • @Gerard_V
    @Gerard_V 1 year ago

    Fantastic! thanks for the tutorial! let's play!

    • @sebastiankamph
      @sebastiankamph  1 year ago +1

      Have fun! Good to see you again Gerard 💫

  • @thorminator
    @thorminator 1 year ago

    This is nuts! 🤯

  • @JesseCotto
    @JesseCotto 2 months ago

    If you lower the weight to zero it will cost you an arm and a leg. Brilliant! Thanks for your video! Definitely highly valuable content.

  • @royceahr
    @royceahr 1 year ago +2

    Sebastian, I get this error when I tried typing pip install opencv-python: 'pip' is not recognized as an internal or external command, operable program or batch file. Any idea what is wrong?
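
    (That error usually means pip is not on the system PATH. A hedged workaround, assuming Python itself is installed: invoke pip through the interpreter with python -m pip install opencv-python from cmd, or equivalently from Python:)

        # Programmatic equivalent of "python -m pip install opencv-python".
        import subprocess, sys

        subprocess.check_call([sys.executable, "-m", "pip", "install", "opencv-python"])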

  • @chariots8x230
    @chariots8x230 1 year ago +9

    Pretty awesome! 😍 Now I’d like to know if there’s a way to apply these poses to our own custom characters, instead of just random characters. 🤔
    Is it possible to pose two of our original characters together?
    Also, it’s nice that we can copy the pose, but can we also copy facial expressions into our characters?

    • @sebastiankamph
      @sebastiankamph  1 year ago +11

      Yes and yes! 🌟 It might be a little tricky to get exactly what you're looking for though, but it is possible. I would inpaint each character separately to get the original features.
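
      (A bare-bones sketch of that inpainting step with diffusers; the model and file names are illustrative assumptions, not the exact workflow from the video:)

          # Inpaint one character region to restore their original features.
          import torch
          from diffusers import StableDiffusionInpaintPipeline
          from PIL import Image

          pipe = StableDiffusionInpaintPipeline.from_pretrained(
              "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
          ).to("cuda")

          scene = Image.open("posed_scene.png").convert("RGB").resize((512, 512))
          mask = Image.open("character_mask.png").convert("L").resize((512, 512))  # white = repaint

          result = pipe(
              prompt="my original character, red jacket, detailed face",
              image=scene,
              mask_image=mask,
          ).images[0]
          result.save("inpainted.png")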

  • @leilagi1345
    @leilagi1345 1 year ago +1

    Hi! Very useful video, I got intrigued, but how do I do it all in Google Colab, especially the first steps in "Command Prompt" or cmd? Is it possible?

  • @StefanPerriard
    @StefanPerriard 1 year ago

    This is truly mind-blowing. Thank you for sharing. What version of Stable Diffusion are you using, 1.5 or 2?

    • @sebastiankamph
      @sebastiankamph  1 year ago

      Both! Your Stable diffusion program is not version-dependent. It's the actual model .ckpt or .safetensors file that has a version. 1.5 is great for illustrations, while 2.1 does a great job with photorealistic portraits.

  • @LeChaunpre
    @LeChaunpre 1 year ago +1

    Any clue why the ControlNet models take a while to load for me? I've had the same issue with safetensors models.

  • @MatthewEverhart
    @MatthewEverhart 1 year ago +1

    Thank you for the tutorial - I am not getting the two images when I generate from ControlNet - just the one.

  • @Amelia_PC
    @Amelia_PC 1 year ago

    After being so disappointed with Pose, I had much better results with Depth. Thanks!

  • @devnull_
    @devnull_ 1 year ago

    Thanks, another well-done video. One annoyance: are those two dropdowns really needed? It seems like preprocessor type and model go hand in hand? Or is it some UX decision made by the extension author?

    • @sebastiankamph
      @sebastiankamph  1 year ago +1

      Thanks! Honestly, I couldn't say. It's still too early; let's see how it ends up as people explore the tool more.

  • @fenriswolf-always-forward
    @fenriswolf-always-forward 10 months ago

    How did you get the drawing canvas?

  • @messer_sorgivo
    @messer_sorgivo 1 year ago

    Super useful tutorial. I have one question: my Stable Diffusion does not show me Scribble mode next to Enable; I have Invert Input Color, RGB to BGR, Low VRAM and Guess Mode. Why is that?

  • @pdealmada
    @pdealmada 11 months ago

    Is there a way to clone an object or a person with the background with Inpaint? What would be the prompt? Ty

  • @TrashMull
    @TrashMull 2 months ago

    Hello Sebastian Kamph,
    I really like your channel and the way you talk and make these very comprehensive videos. I learn a lot from you and I thank you very much for that. Please never change the style of your videos (calm, stable, precise).
    Of course I have a question. I am concerned about the pickle files from lllyasviel. Does pickle mean that it can harm your PC? If yes, what safetensor files would be the alternative?
    Thank you very much and have a nice day.
    Best regards

    • @sebastiankamph
      @sebastiankamph  2 months ago

      Hey! Thank you! Safetensors are pickle-free and safe, yes. But the official files from lllyasviel are safe too.

  • @MrMikeIppo
    @MrMikeIppo 1 year ago

    What Stable Diffusion checkpoint do you recommend? Does picking a different one change anything apart from the first image generation?
    Amazing video! Got everything up and running.

    • @sebastiankamph
      @sebastiankamph  1 year ago +1

      I've been playing a lot with Dreamshaper and variants of Protogen lately, but there are a lot of good ones out there.

  • @3dzmax
    @3dzmax 9 months ago

    Hi, thank you for this. I'm very interested but I can't download your prompt styles, any help?

  • @user-ri8to4rd5u
    @user-ri8to4rd5u 1 year ago

    When I open the preprocessor tab there is a long list of preprocessors to choose from, including ones I have not installed (manually). For instance, there are 3 scribble preprocessors: scribble_hed, _pidinet and _xdog - which one to choose? It is also hard to invert the sketch from black to white.

  • @e1123581321345589144
    @e1123581321345589144 1 year ago

    How does it handle larger images? I played a bit with version 1.6 and I got a lot of out-of-VRAM exceptions for things like 1000x800 pixels, and I have 12GB of video RAM.

  • @pkay3399
    @pkay3399 1 year ago +1

    Thank you. If we are running it on Colab Notebook with WebUI enabled, can we paste the models in Google Drive's Models folder instead of the WebUI folder and then just paste the path into the Notebook?

    • @SilasGrieves
      @SilasGrieves 1 year ago +1

      Not OP but yes, you can copy/paste the models into your folder on your Google Drive but make sure you paste them to the Models folder in the Extensions parent folder and Stable Diffusion’s base models folder.
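
      (For example, from a Colab cell; all paths here are illustrative assumptions, so adjust them to your setup:)

          # Copy a ControlNet model from Drive into the extension's models folder.
          import shutil

          src = "/content/drive/MyDrive/models/control_sd15_canny.pth"
          dst = ("/content/stable-diffusion-webui/extensions/"
                 "sd-webui-controlnet/models/control_sd15_canny.pth")
          shutil.copy(src, dst)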

    • @pkay3399
      @pkay3399 1 year ago +1

      @@SilasGrieves Thank you

  • @user-pv7fm9ep5e
    @user-pv7fm9ep5e 1 year ago

    Thank U

  • @Name-sl3bm
    @Name-sl3bm 1 year ago +1

    this is cool

  • @ZakZky007
    @ZakZky007 1 year ago +1

    Thanks for the explanation! Just asking, the checkpoint that you've got there, is it self-made? Or can I get it from somewhere? If I use the v2-1_768-ema-pruned.ckpt, I get this error: "RuntimeError: mat1 and mat2 shapes cannot be multiplied (154x1024 and 768x320)". Any idea?

    • @Mrig87
      @Mrig87 1 year ago +1

      I get the same... any ideas?

    • @sebastiankamph
      @sebastiankamph  1 year ago +1

      Check Civitai for models. I recommend finetuned 1.5 models.

    • @Mrig87
      @Mrig87 1 year ago

      @@sebastiankamph Yup, I figured this was because I used 2.1 models; 1.5 works!

  • @newone295
    @newone295 1 year ago

    Thanks 👍

  • @Ghost_Text
    @Ghost_Text 1 year ago

    Is it possible to get multiple poses in one image, like two or more figures interacting?
    Or would one do the figures individually and try to inpaint the others into the same scene?

    • @sebastiankamph
      @sebastiankamph  1 year ago

      Yeah, I think ControlNet is a great way to have multiple people in the image. Take a photo or sketch them. SD is not great at multiple faces though, but you can inpaint those if needed.

    • @Gh0sty.14
      @Gh0sty.14 1 year ago +2

      It can do multiple people. I saw someone show an example where there were four people in the image.

  • @tiesojones9880
    @tiesojones9880 1 year ago

    Could you help me add ControlNet to the Deforum extension? Thank you

  • @adriaanspronk8806
    @adriaanspronk8806 1 year ago

    Awesome!

    • @sebastiankamph
      @sebastiankamph  1 year ago

      Thanks Adriaan! Good to hear from you again 😊🌟

  • @AnimatingDreams
    @AnimatingDreams 1 year ago +2

    My question is: can you give SD a character in the img2img tab and use ControlNet to pose them, thus getting a near-identical character to the img2img one, just in a different pose?

    • @Max_Powers
      @Max_Powers 1 year ago +1

      I would like to know the answer to this too

  • @SurveillanceSystem
    @SurveillanceSystem 11 months ago

    Hej, I am interested in car body design and I need to produce orthogonal views of a vehicle (front, side, rear and top). Do you know if there is any Stable Diffusion extension that allows me to generate these views/images based on a car render I already have? My idea is to use these four views as a blueprint to make the 3D CAD model in Solidworks. Thank you!

  • @mlnj144
    @mlnj144 1 year ago

    ty!!

  • @JeremyFry
    @JeremyFry 1 year ago

    I haven't been able to get the model to deviate like in your thumbnail. How did you manage to lose the skirt in one photo but get a flowing dress in another? Photoshopping the image first?

    • @sebastiankamph
      @sebastiankamph  1 year ago

      These are not shopped at all, just prompt and settings changed inside SD. You can finetune with both denoising strength and ControlNet weight 🌟

    • @JeremyFry
      @JeremyFry 1 year ago

      @@sebastiankamph I thought you might need to tweak the input images. I'm watching your other workflow videos now and it's been very helpful to see how you can tweak things. Thank you for all these videos!

  • @roughlyEnforcing
    @roughlyEnforcing 1 year ago

    Are there any docs on these models so I have an idea what I'm downloading? -- sorry if that's a dumb question, I'm SUPER new to all of this :)

  • @aviator4922
    @aviator4922 1 year ago

    awesome

  • @grillodon
    @grillodon 1 year ago

    How can I use the alpha of an image to create a new, different image? Thx

  • @DanielS-zq2rr
    @DanielS-zq2rr 1 year ago

    What GPU do you have? I noticed you generate stuff way faster than I'm able to.
    Thanks for the tutorial btw

  • @parsons318
    @parsons318 1 year ago

    What kind of specs are you using for your computer? And how long does it take to generate a ControlNet image?

  • @kallamamran
    @kallamamran 1 year ago

    Are you running an old version of A1111? I don't have buttons for the sampling methods. That changed to a dropdown long ago. Didn't it? 🤔🤔

    • @sebastiankamph
      @sebastiankamph  1 year ago

      Yes! I've kept various stable releases and stopped auto-updating since I had it break far too often.

  • @CoconutPete
    @CoconutPete 2 months ago

    controlnet is amazing.. still trying to figure out the HED model

  • @nic-ori
    @nic-ori 1 year ago

    Thanks.

  • @Romazeo
    @Romazeo 1 year ago

    Links to the images from the video preview?? I really like the lighting in the first two.

  • @hazencruz
    @hazencruz 2 months ago

    What do I do if my canvas won't show any marks, even after inverting the preprocessor?

  • @Charblaze89
    @Charblaze89 1 year ago

    This looks so fun 😢😢

  • @The-Inner-Self
    @The-Inner-Self 1 year ago +1

    Can you use it for batch img2img animations? Or just single image generations

    • @sebastiankamph
      @sebastiankamph  1 year ago

      It's possible to use it in batch!

    • @wjm123
      @wjm123 1 year ago

      @@sebastiankamph Could you get it to work in batch? Mine only makes the first image but throws saving errors when generating past the first image.

  • @MarinaArtDesign
    @MarinaArtDesign 1 year ago

    What is interesting is that I need the opposite, I need that coloring page lineart from the beginning :D LOL

    • @sebastiankamph
      @sebastiankamph  1 year ago

      I'll tell you a secret, that was the hardest part when I played around with this! 😅 But you can still use photos as references.

  • @laioliver2299
    @laioliver2299 9 months ago

    Thanks for the tutorial! However, I couldn't find the "Open drawing canvas" tab.

  • @CCHSCriminalJustice
    @CCHSCriminalJustice 1 year ago +1

    Does this have to be on Windows? Can it be installed on a Mac?