WOW! NEW ControlNet feature DESTROYS competition!

  • Published: May 12, 2023
  • With a major new update to ControlNet for Stable Diffusion, the Reference Only preprocessor has literally changed the game, again.
    Prompt styles here:
    / sebs-hilis-79649068
    Support me on Patreon to get access to unique perks! / sebastiankamph
    Chat with me in our community discord: / discord
    My Weekly AI Art Challenges • Let's AI Paint - Weekl...
    My Stable diffusion workflow to Perfect Images • Revealing my Workflow ...
    ControlNet tutorial and install guide • NEW ControlNet for Sta...
    Famous Scenes Remade by ControlNet AI • Famous Scenes Remade b...
    LIVE Pose in Stable Diffusion • LIVE Pose in Stable Di...
    Control Lights in Stable Diffusion • Control Light in AI Im...
    Ultimate Stable diffusion guide • Stable diffusion tutor...
    Inpainting Tutorial - Stable Diffusion • Inpainting Tutorial - ...
    The Rise of AI Art: A Creative Revolution • The Rise of AI Art - A...
    7 Secrets to writing with ChatGPT (Don't tell your boss!) • 7 Secrets in ChatGPT (...
    Ultimate Animation guide in Stable diffusion • Stable diffusion anima...
    Dreambooth tutorial for Stable diffusion • Dreambooth tutorial fo...
    5 tricks you're not using in Stable diffusion • Top 5 Stable diffusion...
    Avoid these 7 mistakes in Stable diffusion • Don't make these 7 mis...
    How to ChatGPT. ChatGPT explained in 1 minute • How to ChatGPT? Chat G...
    This is Adobe Firefly. AI For Professionals • This Is Adobe Firefly....
    Adobe Firefly Tutorial • Adobe Firefly Tutorial...
    ChatGPT Playlist • ChatGPT

Комментарии → Comments • 509

  • @sebastiankamph
    @sebastiankamph  1 year ago +22

    Download Prompt styles: www.patreon.com/posts/sebs-hilis-79649068
    Please support me on Patreon for early access videos. It will also help me keep creating these guides: www.patreon.com/sebastiankamph

    • @mizutofu
      @mizutofu 11 months ago

      how do you get two controlnet unit on your gui?

    • @UnBknT
      @UnBknT 11 months ago

      You have to add the styles to the prompt, btw. In the video you just selected them from the dropdown, but they're not added to the prompt until you click on "add style to prompt"

    • @sebastiankamph
      @sebastiankamph  11 months ago +1

      @@UnBknT No need to click the button. They are still applied.

    • @142vids
      @142vids 11 months ago

      Why pay for your monthly Patreon when I can watch your free YouTube videos with adblock on? I thought we were beating the competition, no?

    • @sebastiankamph
      @sebastiankamph  11 months ago +6

      @@142vids You're free to do whatever you want. The people supporting me do it out of the kindness of their hearts, helping me keep making these videos.

  • @Rustmonger
    @Rustmonger 1 year ago +113

    Started playing with it a few hours ago. It is insane. It's nearly as good as training but without the training. It pulls faces, poses, lighting, art style, everything. I cannot believe this is only the first iteration, it is already so good. I thought Shuffle was dope but this is on a whole new level.

    • @Mocorn
      @Mocorn 1 year ago +19

      Exactly, "almost as good as training" is the scary part. I've been able to get better likeness out of this reference_only model than I've had with pretty much every early training attempt. There's been a bit of cherry picking but in some cases I've gotten 2 extremely good hits from a 4 batch render. It's crazy how good this is already!

    • @derek5634
      @derek5634 1 year ago +8

      @@Mocorn Strange - because I still cannot get it to create a decent copy of the original face. It always makes the new image look younger and very different from the original face.

    • @tamask001
      @tamask001 1 year ago +5

      To me it looks like this method only works for people "coming out of the model". For example, if you take the seed image from this video and try to generate other images from it without Sebastian's "Digital/Oil Painting" and "Easy Negative" styles, the results are very unimpressive. I'm not saying that this new ControlNet is not super cool for some use cases, but I thought he could have been clearer about the limitations.

    • @ggoddkkiller1342
      @ggoddkkiller1342 1 year ago +4

      I couldn't make it work with v1.1.174. txt2img is completely broken - even the hair colour doesn't match. img2img kinda works better, at least matching hair and clothes, but the faces are out of a horror movie - twisted, etc. I'm using exactly the same styles and settings.

    • @kimjohn3877
      @kimjohn3877 11 months ago +1

      how to access free trial?

  • @inkmage4084
    @inkmage4084 8 months ago +4

    This has been extremely helpful in redesigning the characters for a video game I made way back in high school. I've taken my art, run it through AI, and seen it give me different variations of my work. I'd then pick what I liked from each and draw up the final design. It is such a time saver.

  • @hongxu9893
    @hongxu9893 1 year ago +6

    Thank you so much! You've been pretty much the only source I've needed to learn everything I need about control net. Great videos with clear and concise information. Keep it up!

    • @sebastiankamph
      @sebastiankamph  1 year ago

      Thank you very much, glad the videos have been helpful to you 😊

  • @ckblue154
    @ckblue154 1 year ago +74

    this is actually the very definition of game changing

  • @deepfakeboy
    @deepfakeboy 1 year ago +21

    Dude... You're literally faster than me clicking the update button in SD... Have my sub!

  • @kamillorek5087
    @kamillorek5087 1 year ago +6

    The Open Pose 3D extension is great for posing - you can run it in the GUI tab, set the skeleton in three-dimensional space, together with hands and feet and generate 3 images: canny, depth and openpose.

  • @ADMNtek
    @ADMNtek 11 months ago +1

    I only started using Stable Diffusion a bit over a week ago and your videos are such a big help.

  • @bjrnard.d9516
    @bjrnard.d9516 1 year ago

    Waiting for my new computer with beefy vram to arrive, watching your vids to prep, and I'm loving what I'm seeing! Thanks so much for these!

  • @illusivec
    @illusivec 1 year ago +100

    Wow. CN guys are on a roll. They are innovating faster than OpenAI and Google. Hopefully they can keep up the momentum.

    • @rproctor83
      @rproctor83 1 year ago +11

      Ha, every day there are a dozen new breakthroughs!

    • @jan.kowalski
      @jan.kowalski 1 year ago

      @@AG-ur1lj That's why the battle for those brilliant minds is not based on ambition but deprivation. The big ones will acquire what they can, and the rest will be deprived and obscured. As always.

    • @jan.kowalski
      @jan.kowalski 1 year ago

      @@AG-ur1lj Powerful how? Will it scale to millions of users? Will it be safe from lawsuits or flexible enough to attract business users? I doubt it. Microsoft or Google could wait and buy anything viable, and you, even with your brilliance, will have nothing to say. As always in history.

    • @jan.kowalski
      @jan.kowalski 1 year ago

      @@AG-ur1lj You haven't realized that this technology is already paywalled and regulated. You will not profit from it - above a certain level, of course - because you will not have the resources to train those tools or the licenses to use copyrighted source data. As of now, that is not a problem for big corporations, because they just take the best solutions and use them with their data. You will probably be happy, but once more, you will not profit from it. Even if you were able to train a state-of-the-art algorithm, it would be WORSE than theirs, because they have access to all that data and those resources.

    • @jan.kowalski
      @jan.kowalski 1 year ago

      @@AG-ur1lj Have you downloaded terabytes of images and text and all the copyrighted books and proprietary magazines from the internet? I doubt it. Yet Google and Microsoft work at that scale. Since you will NEVER have access to the data, you will just become a giver of ideas to big corporations with your improvements to "open" algorithms. Without data, those algorithms just don't work. I put "open" in quotes because when the open source community produces some breakthrough algorithm, big corporations WILL patent some small improvement and you will be barred from using it. That is the reality, based on history. I'm amazed at your idealistic view of business.

  • @Melodias102
    @Melodias102 1 year ago

    Sebastian big thanks for providing your styles. I mostly use them right at the beginning before even prompting and they provide beautiful results.

  • @MrVirus9898
    @MrVirus9898 1 year ago +3

    If you Hires fix after the init image is generated, you can usually cut through the noise. Go with R-ESRGAN 4x -> denoise at 0.3 or 0.2. Keep that part weak. Or, alternatively, you can drag your CFG up and use Hires fix to add additional noise and burn if you are going for a noisy style.
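    A rough sketch of the settings that tip describes, as they appear in A1111's Hires. fix panel (the values are the commenter's suggestions, not canonical defaults):

    ```text
    Hires. fix (txt2img):
      Upscaler:            R-ESRGAN 4x+
      Denoising strength:  0.2-0.3   # keep this low so the reference likeness survives
    ```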

  • @geort45
    @geort45 9 months ago

    Fantastic info dude, thanks again

  • @blackvx
    @blackvx 1 year ago

    Thank you for this news update!

  • @iambinarymind
    @iambinarymind 1 year ago

    This is fantastic! Thanks so much for the heads up.

  • @moe_joe_man
    @moe_joe_man 1 year ago +1

    Thanks for the video, I love watching how you present it. Keep it up!

  • @leosieczka3724
    @leosieczka3724 6 months ago

    Amazing! Thank you for making these tutorials

  • @jeremycointin1996
    @jeremycointin1996 1 year ago

    Love this!!! I need this. Character consistency is my biggest problem.

  • @cryptolover827
    @cryptolover827 10 months ago

    Many thanks for sharing tutorials, it's a massive time saver ;D

  • @lioncrud9096
    @lioncrud9096 11 months ago +1

    I LOVE THIS FEATURE. Already got some awesome results in the first few minutes of fooling with it.

  • @thebrokenglasskids5196
    @thebrokenglasskids5196 10 months ago +5

    Upon seeing this I upgraded to a 12gb gpu this week so I could finally run ControlNet.
    It is indeed a literal game changer for projects that need character consistency. No more Lora and prompting gymnastics while crossing your fingers that the next batch will render what you want.
    Cuts workflow to a fraction of what it was before and opens all kinds of new creative doors.
    I'm loving this feature!

    • @sebastiankamph
      @sebastiankamph  10 months ago +1

      Happy to hear it's working out for you! ControlNet is life.

  • @ChrisCapel
    @ChrisCapel 1 year ago +2

    Is it possible to use this to get a different angle of a specific environment in the same style? No people or characters, just an environment.

    • @sebastiankamph
      @sebastiankamph  1 year ago +1

      Yeah, but it won't be 100%. It's like a better img2img

  • @FCCEO
    @FCCEO 9 months ago

    Sebastian, thank you so much for doing what you are doing. I found you today and I have been watching your tutorials all day. I immediately signed up for your Patreon! So glad to have found you. I have 2 questions regarding this amazing tutorial. 1. I saw you highlight "woman smiling" in the prompt and then press Ctrl+Up. Could you tell me what it does? Are there any resources about those prompt tricks? 2. Would it be possible to combine 2 photos that I like to create a new one? Thank you so much again. Have a wonderful day!

    • @williamquaresma2529
      @williamquaresma2529 6 months ago

      Ctrl+Up with text selected makes that text have more weight. The default weight is 1 and applies to everything inside the parentheses. The weight is taken into account relative to the rest of the prompt.
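      For context on the Ctrl+Up trick: selecting text in the A1111 prompt box and pressing Ctrl+Up wraps it in attention syntax and raises the number in steps. A few illustrative prompts (hypothetical examples, not from the video):

      ```text
      portrait, (woman smiling:1.2), soft light   # "woman smiling" weighted to 1.2
      portrait, ((woman smiling)), soft light     # each plain () multiplies the weight by 1.1
      portrait, [woman smiling], soft light       # [] divides the weight by 1.1
      ```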

  • @randymonteith1660
    @randymonteith1660 1 year ago +3

    I did get the smile to work, but I had to include my whole prompt so my image didn't change drastically, and I added (woman smiling:1.2) at the beginning of my prompt. The posing part was changing my image too much, but I have to play some more with that. In the time since you made this video they updated ControlNet to v1.1.164. Thanks, love your videos!

    • @sebastiankamph
      @sebastiankamph  1 year ago

      Glad you're enjoying the videos! I had to test a bunch of stuff before I got it working, and some versions barely even worked for me. Hoping new versions will make it easier to use for all.

    • @monsterlair
      @monsterlair 1 year ago

      This is exactly my experience too.
      Also, "ControlNet is more important" brightens up the image for me. I can get more consistent lighting with "My prompt is more important", but that changes the image more.

    • @Maltebyte2
      @Maltebyte2 3 months ago

      I'm getting nowhere fast! Might just give up altogether! I mean, the output looks nothing, nothing like the input image! And I did everything exactly the same as in the video! ;(

  • @michaelleue7594
    @michaelleue7594 1 year ago +4

    I would love to see how they pulled this off. It seems like if they can do this, then a lot of other things we don't have yet ought to be possible, like maintaining outfits or architecture. This is perfect for making comics, though, with character coherence between frames. Maybe they could even fix the coherency issue of tiling a high-res image, depending on what they did, exactly. This is pretty crazy.

    • @wykydytron
      @wykydytron 1 year ago

      You can maintain an outfit with it - just prompt that outfit, or maybe use just the outfit here and the face in a separate ControlNet... you know what, gonna check that today

    • @skittlzboi
      @skittlzboi 1 year ago

      @@wykydytron did you figure out how to do it? I try to use one CN for reference and one for OpenPose but can't seem to figure out how to get good results

  • @Bloxfruitsgamer894
    @Bloxfruitsgamer894 11 months ago

    Thank you Sebastian, as ever your tutorials are informative and straight to the point... and they work!

    • @sebastiankamph
      @sebastiankamph  11 months ago

      Happy to help, thank you for being here! 🌟

  • @hoting666
    @hoting666 1 year ago +13

    This seems like something they could really use to do multi-frame rendering for txt2video

  • @SouthbayCreations
    @SouthbayCreations 1 year ago

    AMAZING!! Fantastic video! Thank you for sharing it!!

    • @sebastiankamph
      @sebastiankamph  1 year ago +1

      Glad you liked it, you superstar, you! 😊🌟

  • @MrSongib
    @MrSongib 1 year ago

    Pog, didn't notice the Update. xd
    ty, Seb. Had a good day.

  • @rproctor83
    @rproctor83 1 year ago

    Can't wait to try it, thx!

  • @sherpya
    @sherpya 1 year ago +2

    Please raise your volume, I almost had a heart attack when the ad kicked in lol

  • @wrayday7149
    @wrayday7149 8 months ago

    Well, this wasn't quite what I was looking for, but holy hell I got something good.
    I accidentally wiped my prompts and didn't know how to get them back... loading the image into PNG Info brought up my prompts/settings.
    So, thank you for that!!!
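    What that comment relies on: A1111 writes the prompt and settings into a PNG text chunk named "parameters", which the PNG Info tab reads back. A minimal sketch of reading it yourself with Pillow (the chunk name is the convention A1111 uses; the demo file here is synthetic, not a real render):

    ```python
    import tempfile

    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def read_sd_parameters(path):
        """Return the generation settings Automatic1111 embeds in its PNGs.

        The web UI stores the prompt, seed, sampler, etc. in a PNG text
        chunk named "parameters"; the PNG Info tab shows the same data.
        """
        with Image.open(path) as img:
            # .text maps the PNG's textual metadata chunks to their values
            return img.text.get("parameters")

    # Demo: write a PNG carrying a fake "parameters" chunk, then read it back.
    meta = PngInfo()
    meta.add_text("parameters", "portrait of a woman smiling, Steps: 20, Seed: 42")
    with tempfile.NamedTemporaryFile(suffix=".png", delete=False) as f:
        Image.new("RGB", (8, 8)).save(f, format="PNG", pnginfo=meta)
    print(read_sd_parameters(f.name))
    ```

    This only works for PNGs, since the metadata lives in PNG text chunks; converting a render to JPEG strips it.
    
    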

  • @betterlifeexe4378
    @betterlifeexe4378 8 months ago +1

    How would you recommend getting the back of a character? I am trying to grab a depth map from both sides and combine them in blender. I guess I could do head on and then 120deg turns in either direction....

  • @MrErick1160
    @MrErick1160 1 year ago +14

    Being able to do my characters in different 3D positions... dang, this is godlike

    • @fernando749845
      @fernando749845 1 year ago

      This has more character consistency than many 'old-fashioned' comic books :-)

    • @MrErick1160
      @MrErick1160 1 year ago

      @@fernando749845 😅😅 this is actually sad to hear

    • @piotrrossa8030
      @piotrrossa8030 1 year ago

      @@MrErick1160 my results are completely different from the reference :D :D

    • @sebastiankamph
      @sebastiankamph  1 year ago

      🌟🌟

    • @TimeMasterOG
      @TimeMasterOG 1 year ago

      @@fernando749845 yes but actually no.. comic books stay very character consistent unless a panel gets drawn by a different artist

  • @Pahiro
    @Pahiro 1 year ago +63

    Wonder if people have started building graphic novels with this. Consistency in character design and style between frames is going to be really useful for something like that.

    • @mhelvens
      @mhelvens 1 year ago +1

      Or video. 😮

    • @HunterIndia
      @HunterIndia 1 year ago +5

      U can already get consistent characters with textual inversion or LORA, u can train one yourself - especially textual inversion, which needs 8 images; any more is just useless to train a TI

    • @Pahiro
      @Pahiro 1 year ago +7

      @@HunterIndia but then you'd need to train a model for each character. Suppose it's not that tall of an order, but still, this'll make things much easier. I should start looking for some webcomics with an AI tag. Would love to see AI being utilized in that space.

    • @ColoNihilism
      @ColoNihilism 1 year ago

      that's the dream --> Video indeed, scary how much GPU power would be required

    • @zoybean
      @zoybean 1 year ago

      @@Pahiro I'm trying but with Blender and img2img (more fine control).

  • @TheWorldAccordingToArf
    @TheWorldAccordingToArf 9 months ago

    "I only have my shelf to blame." What a super fine hack joke. I bow, and thanks for the quality info.

  • @GooseAlarm
    @GooseAlarm 1 year ago +7

    This is insanely useful. I've been trying for the last week to collect images for a Lora. It can be tricky as hell because keeping characters consistent is HARD. Change just a few words and suddenly the whole piece looks like a different style. It will be SOOO easy to make a Lora now thanks to this. What will they come up with next? Google and OpenAI, in my opinion, are doing a pretty "meh" job.

    • @TTarragon
      @TTarragon 1 year ago

      Yeah, this was my first thought too. By itself it's great, but it can be SO useful for training Loras, which, I suppose, are more accurate

    • @scottyfityoga
      @scottyfityoga 11 months ago

      Hey can you please tell me how this makes it easier for training a Lora?

    • @benjaminjako
      @benjaminjako 11 months ago +1

      @@scottyfityoga Easier to source images of a certain person, for example.

  • @gjohgj
    @gjohgj 1 year ago

    This is what I was waiting for! My goodness

  • @timpruitt7270
    @timpruitt7270 1 year ago +14

    Unfortunately, it doesn't work for me. The generated images all look like the same person, but they don't resemble the person in my original image. It's like my image is completely ignored.

    • @PKBO173
      @PKBO173 11 months ago +1

      Totally the same. I'm getting a whole different face..

    • @user-dk9td7kl8c
      @user-dk9td7kl8c 2 months ago

      Do you use Mac M-series processors? Because I do, and there is a bug when it tries to read the uploaded face.

  • @Distty
    @Distty 1 year ago

    Wow, that is amazing, great video as always.

  • @MaxKrovenOfficial
    @MaxKrovenOfficial 1 year ago +1

    Just wow, GAME CHANGER is the right set of words for this... just tried it and am utterly impressed, thanks for reporting on this!!

  • @audiogus2651
    @audiogus2651 1 year ago +2

    Really curious how this could also work with inpainting and img2img at the same time. Exciting!

  • @FantasyArtworkAI
    @FantasyArtworkAI 8 months ago

    Whenever I try this, it works well EXCEPT I keep getting instances where the body of the person looks like it's covered in sand or other patterns. The face area gets cleaned up during the swap and face fix, but the body just gets completely wrecked. The most recent example is it looks like they got wet then laid down in the sand before standing up to take a picture.

  • @johnsmith-vy7es
    @johnsmith-vy7es 9 months ago

    Great video, thank you. I have a question: I can make a pose in img2img. When you use a batch of 4 you get 4 pictures and one pose picture. Can I save this pose? When I click on the pose image and use the save button it doesn't work - I don't get a download button as with a normal picture.

  • @eranfeit
    @eranfeit 10 months ago +3

    Hi,
    very good tutorial.
    I tried my own image as input for ControlNet with reference_only, and a simple prompt like "man is smiling", but the face is totally different. How can I preserve the face?
    Thanks
    Eran

  • @residentzen
    @residentzen 1 year ago

    Thank you for this excellent content!

  • @delyou5704
    @delyou5704 9 months ago

    When I use ControlNet, it only produces an inverted image as a result of the reference, even when I select reference as the control. How would I fix this?

  • @iamspain2174
    @iamspain2174 9 months ago

    This is actually a game changer 🎉🎉🎉

  • @titusfx
    @titusfx 8 months ago

    What I would like to do is inpainting with ControlNet. What I mean is: I have an image with a pose, I remove one arm for inpainting and I pass another arm pose, and the inpainting is done with that new arm pose. Is this possible? What I found is not like that

  • @jlodrawing8769
    @jlodrawing8769 11 months ago +1

    Awesome Tutorial! Thank you soo much! However for some reason SD ignores the second controlnet and doesn't give me the pose I want. Any idea what the issue might be? please keep making more videos!

  • @Jokeoftheday-iq7jv
    @Jokeoftheday-iq7jv 7 months ago

    Hey there. Good content. Learning a lot on this channel! Thank you Sebastian.
    How do I bring such a face (as here) into a generated image of say an assassin? Do I just carry on with my prompt as I would have? And bring a face image to controlnet?

  • @kallamamran
    @kallamamran 1 year ago +5

    I've tried it... I don't get it to render anything even close to the likeness of the input image 😥

  • @ElHongoVerde
    @ElHongoVerde 1 year ago +1

    This is quite amazing. Even better than using LORAs, and the chance to combine LORAs, seeds and ControlNet with reference methods, NICE...
    BTW... I was expecting my "Wonderwall" dad joke. I'm very disappointed, mister Kamph (read it in a beautiful British Sean Connery angry tone).

    • @sebastiankamph
      @sebastiankamph  1 year ago +1

      You posted it after I recorded this. But I did find it very good! 😂😘

  • @jonnygudman1815
    @jonnygudman1815 1 year ago

    Thank you, this is very helpful😉

  • @fred4838
    @fred4838 9 months ago

    Thank you!

  • @waijan
    @waijan 1 year ago

    Thank you for your work.

  • @dusteye1616
    @dusteye1616 11 months ago

    Thanks, great video and straight to the point. Liked, subbed and commented !!!

    • @sebastiankamph
      @sebastiankamph  11 months ago

      The holy trinity! You're the real mvp 🌟

  • @forestlong1
    @forestlong1 1 year ago +8

    Very strange! I updated everything, turned everything on exactly the same way, uploaded a picture but the result is completely random. It does NOT WORK!

    • @TheFarmingAngels
      @TheFarmingAngels 1 year ago +3

      Same here. It only works with some demo pictures (perfect face, no subtle expressions, no background). And OpenPose misses the front/back pose 70% of the time

    • @DedBruzon
      @DedBruzon 11 months ago

      Yes, same thing.

  • @TheAquabears
    @TheAquabears 9 months ago

    How did you get your styles menu subdivided like that? Is there an extension that does that or what?

  • @hardflame6486
    @hardflame6486 9 months ago

    Bro is the best, thank you so much for saving a ton of time

  • @jonathaningram8157
    @jonathaningram8157 1 year ago +2

    That was one of the missing features: the ability to keep the same character. Still not perfect, but we are getting there! I now wonder if it will become possible to generate a few good-looking images and train a Dreambooth on them. That way you can reuse the face only as an inpaint

    • @RoboMagician
      @RoboMagician 11 months ago

      Wondering the same thing. Things like copying over styles - a person's clothes, the patterns on the clothes, etc. - to the generated images. Does Midjourney remix do that?

  • @ColoNihilism
    @ColoNihilism 1 year ago

    Thanks for the update! Reassuring not to have to learn how to train and fine-tune. Wonder if you can just keep using the same reference face with ANY different scenario - then we've got ourselves a character mapped by seed only

  • @Elwaves2925
    @Elwaves2925 1 year ago +1

    If you keep injuring yourself, it's time to book an appointment to learn some shelf improvement.
    This looks amazing, I keep meaning to look into ControlNet more but never seem to get around to it. Cheers.

  • @kritikusi-666
    @kritikusi-666 9 months ago

    Do you have any tutorials on how to create professional self portraits? I want to look pretty on LinkedIn lol

  • @Silver_Nomad
    @Silver_Nomad 1 year ago +2

    There's actually one thing I was thinking about... After version 1.1, ControlNet started to implement something new almost every week - first of all, new preprocessors. So I'm pretty curious when there is going to be an actual counterpart of Midjourney's Remix mode...?

  • @ThaRaiZe
    @ThaRaiZe 1 year ago +4

    I've followed the same steps but my pictures come out nothing like my original. I am enabling it and selecting 'reference only', but the new pictures look nothing like me

  • @Crisisdarkness
    @Crisisdarkness 8 months ago

    I wanted to ask if there is a way to have two "Lora" in the same image. Do you know how to do it? Could you make a tutorial about it? Thanks

  • @AnimalsSlaughterButchery
    @AnimalsSlaughterButchery 1 year ago +1

    Thank you very much for another great video.
    Wanted to ask about the styles - it's the first time I've seen this. Are there any videos where you explain what it is and how to use it, or can you let me know here quickly?
    Thank you

    • @sebastiankamph
      @sebastiankamph  1 year ago

      Check the pinned comment or video description. Install instructions and usage in that link

  • @yeastydynasty
    @yeastydynasty 11 months ago +2

    I wish I could use this on my pc. Just too limited on the GPU front. Been wanting to do a comic book but getting consistent characters in Midjourney is like pulling teeth.

  • @ozstockman
    @ozstockman 9 months ago

    Has anybody figured out why there are no models coming with ControlNet v1.1.234? I tried to use it on this version and nothing worked - ControlNet was just ignored for everything (canny, pose, etc.). I could not select any models for any preprocessor, as the Models dropdown list was empty. I downloaded one model for OpenPose, put it in the models folder in extensions, and now I can select this model for pose and it all started working. I installed ControlNet from Automatic1111, but it only puts the yml files for the models in the required folder, not the actual models themselves.
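    For reference, the layout that comment ends up with on a typical Automatic1111 install (the filename is an example; the weights are downloaded separately, while the extension ships only the config files):

    ```text
    stable-diffusion-webui/
    └── extensions/
        └── sd-webui-controlnet/
            └── models/
                ├── control_v11p_sd15_openpose.pth    # model weights, downloaded manually
                └── control_v11p_sd15_openpose.yaml   # config installed with the extension
    ```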

  • @reallybigname
    @reallybigname 11 months ago

    I'm actually doing even more crazy things with tile. But yeah, reference ones are great too.

  • @redtriangle1950
    @redtriangle1950 1 year ago +2

    Awesome video👍 Your computer is so fast in generating pictures, what are your hardware specs (cpu, gpu, RAM)?

  • @7sensestudios
    @7sensestudios 1 year ago

    Hello, and thanks for your videos, because I'm learning so much! I wanted to ask if there is a way to add real objects into an image, for example a model holding a real bag. Thank you

  • @user-bn7pw9tc4g
    @user-bn7pw9tc4g 11 months ago +2

    Hey Sebastian, loving your videos. I notice that I don't have any ControlNet Units in my UI. Any advice on why/how that is set up?

    • @alexandercato7400
      @alexandercato7400 8 months ago

      Settings - controlnet - multicontrolnet: It is set to 3 by default. If you set it to 2, it should work.

  • @bryan98pa
    @bryan98pa 1 year ago

    Great video, amazing tool!!

    • @sebastiankamph
      @sebastiankamph  1 year ago

      Thank you! ControlNet is so powerful, it blows my mind. And I'm not exaggerating.

  • @Deljron777
    @Deljron777 1 year ago +2

    It's still very clunky, but we can see the future here. I want to be able to adjust it like making an MMO character: dress them however I see fit, then put the character in any scene I want, in any pose I want, talking / singing / dancing / whatever. We are so close to that now, it is so exciting!

    • @wykydytron
      @wykydytron 1 year ago

      There is a ControlNet that allows for easy outfit swaps - my poor memory can't handle its name; it has 3 versions, and the first ends in 20 if that helps. Anyway, it detects what's in the picture and paints it in corresponding colors, then you just say you want the person to have X outfit and it will change the clothes, but the rest will remain unchanged

    • @aguyfromnothere
      @aguyfromnothere 11 months ago

      @@wykydytron Segmentation.

  • @Silver_Nomad
    @Silver_Nomad 1 year ago +1

    There's one thing about this preprocessor... It's more resource consuming. I'm generating a 512x768 image and setting Hires Fix to 2x. As soon as it starts to render the hires image, a "NansException: A tensor with all NaNs was produced in Unet" error occurs. It only starts to render an upscaled image if I lower the upscale factor to 1.6.

  • @sathien9158
    @sathien9158 8 months ago

    what is the difference between reference only and roop? thanks

  • @sphericalearther1461
    @sphericalearther1461 3 months ago

    Thanks again. First try failed, but will attempt again soon. After you get it to draw the character correctly, can you then load a ref pic of the costume only and use inpainting with it to give the character a chosen costume?

  • @ianhmoll
    @ianhmoll 1 year ago +2

    In my case reference_only isn't respecting the style of the model; the results are always too realistic. I would like to use it while respecting the style I want

  • @davidm8966
    @davidm8966 11 months ago

    What a gamechanger...my goodness!

  • @HoppingMadMedia
    @HoppingMadMedia 11 months ago +1

    I don't have the ControlNet Unit 0 and ControlNet Unit 1 tabs. I only have "single image" and "batch" and nothing above that. Have I done something wrong? I've checked that everything is up to date.

    • @alexandercato7400
      @alexandercato7400 8 months ago

      Settings - controlnet - multicontrolnet: It is set to 3 by default. If you set it to 2, it should work.

  • @VIpown3d
    @VIpown3d 1 year ago

    You are on the top of your game Seb! Go king!

  • @user-ey1nf9xl2j
    @user-ey1nf9xl2j 11 months ago

    Hi Sebastian, your videos are amazing!! Thanks very much. I have a question for you: do you think it's possible to make an AI model wear a real dress? For example, if I have a ghost-mannequin photo of a dress, can I generate a photo of an AI model wearing it? Please let me know, I'm new in this field and I think this could be very useful

  • @MrErick1160
    @MrErick1160 1 year ago

    Omg this is incredible

  • @JonatanE
    @JonatanE 9 months ago +1

    I can't get this to work, it just sends a bunch of error messages my way and ends with "TypeError: unhashable type: slice"

  • @metasamsara
    @metasamsara 8 months ago

    Does it work with multiple pictures of reference?

  • @ANGEL-fg4hv
    @ANGEL-fg4hv 11 months ago

    For the posing, I'm thinking we can also extract a pose from an image?

  • @CabrioDriving
    @CabrioDriving 11 months ago

    Best channel. period.

  • @harshvardhansinha5267
    @harshvardhansinha5267 9 months ago

    Great videos. But I always have to crank up the sound to max to listen. 😊

  • @slowstar6224
    @slowstar6224 1 year ago +5

    I have 1.66 - but it will only copy the pose - the person looks nothing like the original photo... any ideas why?

    • @Maltebyte2
      @Maltebyte2 3 months ago

      Same issue here! Doesn't exactly motivate me to continue with this! Have you found out why?

  • @chaerazard
      @chaerazard 10 months ago

    Thanks!

    • @sebastiankamph
      @sebastiankamph  10 months ago

      Wow, thank you once again! Real mvp material. 💫

  • @lakislambrianides7619
    @lakislambrianides7619 10 months ago

    Do I have to follow this procedure if I want to take one image from the img2img window and apply OpenPose to it to get different variations? Or is there a simpler way?

  • @AiliaSyed-of2zm
    @AiliaSyed-of2zm 11 months ago

    Hello there, I have a question regarding ControlNet. I have seen that, using a 3D model, you can make poses and extract them with OpenPose, and in this video I learned that you could use any face as a reference and even combine it with OpenPose. Now my question: I have a whole finished 3D model of my character, e.g. a 3D anime character in Blender, with its own face and clothing. I would pose my 3D model and take a picture of it. How can I use ControlNet so it uses the reference picture and generates an image with the same face and clothing? Is there any way?

  • @Airbender131090
    @Airbender131090 1 year ago +3

    How is this different from img2img? I played with it and don't see a difference

  • @wykydytron
    @wykydytron 1 year ago +2

    With faces, results vary - from very impressive, almost 1:1 copies, to completely messy images - but if you add a proper Lora for that person it has about 90% accuracy. What I love about it most is that you can use it as a style definition: you don't need a Lora, just put in an image in the style you aim for and you're done. It doesn't even matter what's in that image, it will do a great job of matching the style. It's also very easy to achieve dark, low-light images when you use a dark, low-light image as the source. Honestly CN will dominate everything until someone makes a similar AI, but one that understands math, gender, individuals and how the human body can or cannot bend.

    • @leeren_
      @leeren_ 11 months ago

      What do you recommend then for creating new characters that only exist as a single image to start? I was thinking of using CN and cherry-picking the good results to create a LoRa out of.

  • @carlosmachucafotografia5726
    @carlosmachucafotografia5726 1 year ago

    Perfect!

  • @buddy2665
    @buddy2665 1 year ago +1

    My laptop has only 4gb vram, so not a good start already😅 but I was able to generate at 1024×1024 resolution. After updating Automatic1111 I can't generate above 512×512, and I also can't use ControlNet - every time, the vram usage goes through the roof. Then I upgraded to torch 2.0 but it still didn't help.
    Torch 2 definitely decreased my generation time though, ngl.
    What should I do?? I want to use ControlNet.

  • @DedBruzon
    @DedBruzon 11 months ago +1

    Something is wrong - I have the latest version of ControlNet, but images come out absolutely different from my control image.

  • @FnD4212
    @FnD4212 8 months ago

    2:05 How do you make that STYLE list?
    EDIT: Never mind, I checked your PATREON link.