NEXT-GEN MULTI-CONTROLNET INPAINTING! You’ve NEVER SEEN THIS BEFORE!

  • Published: 16 Nov 2024

Comments • 231

  • @Aitrepreneur  1 year ago +15

    HELLO HUMANS! Thank you for watching & do NOT forget to LIKE and SUBSCRIBE For More Ai Updates. Thx

    • @Leto2ndAtreides  1 year ago

      Have you seen Corridor Crew's video "Did We Just Change Animation Forever?"? That seems super worth experimenting with.

    • @dimitrishow_D  1 year ago

      I have some cartoon characters (see my profile). I tried training Stable Diffusion with them, but it can't seem to replicate them; the training only seems to work with things it created itself. Am I doing something wrong, or is it that my character is unique and there isn't enough reference? When I did kind of get it to work (with Astria, I think), it drew it like a 5-year-old. If it is possible to train with your own 2D characters, PLEASE make a video about that.

    • @long20014  1 year ago

      Hi, could you share the Excel file with the color list?

    • @MissAIchan  1 year ago

      How did you enable the guess mode?

  • @richardgreaney  1 year ago +27

    This is getting so mind blowing now. There are just so many different possibilities for how to make a good image now. I almost feel like I need to take a step back for a bit, and think about what I would really like to create, then see what technologies are now available that could help me achieve this.

  • @vi6ddarkking  1 year ago +14

    We are getting so much control so fast.
    I honestly don't think that even six months ago anyone thought we would have advanced this much this quickly.

    • @wakegary  1 year ago +8

      And it's growing exponentially. Video is a month or two away (or less). Then come the apps, the corporate control, the tightening laws on models, the underground trade, the bootleg versions, blah blah. It will be fun to watch; we're in the wild-wild-west phase right now.

    • @cdreid9999  1 year ago +5

      This reminds me of when personal computers became available and a lot of us learned to program

    • @Mocorn  1 year ago +1

      I can see a quality difference in my output folder from just two weeks back! This is crazy.

  • @Real28  1 year ago +8

    I called it. Six months ago I said that while it seems like this kind of tech should take a year, it would take half that time. There are just too many people grinding away at these tools simultaneously.
    Just crazy.

  • @kyoko703  1 year ago +153

    With so much "tuning" and manual work, it's pretty safe to say that these "generative" works are more human than anything else, which demonstrates that these are just tools at the end of the day.

    • @dv6165  1 year ago +6

      In this case, very cumbersome tools for a pretty simple operation.

    • @gara8142  1 year ago +10

      @@dv6165 True, but only if we take it at face value. If you wanted to manually change an image from realistic to, idk, Helltaker style, it would take way longer. As with most tools, it's a matter of use cases.

    • @user-zi6rz4op5l  1 year ago +2

      With generative AI, you never get the image you wanted; you end up with some image that looks pretty after a million random pushes of parameters.
      Can you get a deterministic result on the next attempt? No.
      It would be better if these models churned out 2K+ resolution, as upscaling is again a million guesses on the parameter buttons 😔😔

    • @markusm108  1 year ago

      @@user-zi6rz4op5l You're essentially describing what a customer feels when he hires an artist or designer. What these tools do is put you into a different role: not the artist but the art director.

  • @kinnectar820  1 year ago +4

    Holy shit, so much progress in AI image gen in so little time, my dreams of doing my own graphic novels using ML are pretty much fully within grasp using these techniques. What a time to be alive!

  • @Dex_1M  1 year ago

    Dude, I need to watch this video 10 times with some personal experimentation; this new multi-ControlNet is a masterpiece.

  • @Gxbbzee  1 year ago +3

    I've been looking at separate tutorials for the past few days, and even those combined didn't give me as good and precise information as this single video did... amazing! Very much appreciated!

  • @dreamzdziner8484  1 year ago +5

    For someone who has been playing with ControlNet from day one just to blend a foreground character perfectly into a desired background, this video is a treasure. I tried many combinations but never ever thought of that inpainting trick you showed. You are the absolute best, dear friend. 👌💪👍👏🧡

    • @jurandfantom  1 year ago +2

      Have you managed to make it work? For me it changes the whole picture, and "masked only", on the other hand, makes things incorrect.

  • @johannvandebron986  1 year ago +7

    Nice! You have the best Stable Diffusion / Automatic 1111 content on YT. Thank you so much for letting us know all this!

  • @Mocorn  1 year ago +5

    That trick to crudely draw lines to generate edge lighting worked way better than I thought it would. How on earth does it bleed onto the nose like that? This is some wild shit!

  • @uk3dcom  1 year ago +12

    I had been trying for hours to do the things you have now solved. Thank you for your experimentation and sharing. The fact that you and a couple of other guys on YouTube are feeding off each other and pushing this forward to simplify our efforts is really appreciated. Please keep up the good work. ☺

  • @winkletter  1 year ago +60

    This gives me an idea for an experiment: Comics made with multi ControlNets. One for the frame. Add characters in with OpenPose. Then segmentation for specific objects.

    • @f0kes32  1 year ago +1

      how would you draw the same character in different poses?

    • @anastasiaklyuch2746  1 year ago +8

      @@f0kes32 see previous video ;)

    • @M.I.F..  1 year ago

      CharTurner (on Civitai)

    • @wakegary  1 year ago +2

      I just made a Steve Urkel in the style of EC Comics (Tales from the Crypt), and I'm very weirded out. AKA it's awesome.

    • @Strangepaper  1 year ago

      @@wakegary what model for ec comics?

  • @DESIGNISTASTY  1 year ago

    This is why I love SD: you have total control over what you want to do and how your image will turn out.

  • @schenier  1 year ago +8

    I find that all this shows this tool can be used by real artists to build an image little by little, from the background, to the pose, all the way to the lighting.

  • @fnorgen  1 year ago +1

    Now this is the level of control I was badly missing back in January!

  • @oblivionronin  1 year ago

    15:41 That image is absolutely insane, i love it !

  • @설리-o2w  1 year ago +3

    Can't wait WHAT A TIME TO BE ALIVE AND FIRST!!!

  • @coda514  1 year ago +1

    Controlnet is amazing, and thanks to you I understand it more and more every day. Thank you from the bottom of my heart. Sincerely, your loyal subject

  • @bentp4891  1 year ago +1

    Amazing. Probably the most impressive new features since Dreambooth.

  • @travelwell6049  1 year ago

    Interesting. I have not seen anyone talking about controlnet. So it’s great to see. Thanks.

  • @human-error  1 year ago

    Best controlnet video on the net. Tnx NON-HUMAN !

  • @Artishtic  1 year ago +2

    Thanks for the epic tutorial! If only there were a Photopea version of After Effects too.

  • @MrBlitzpunk  1 year ago +2

    I haven't tried multi-ControlNet, but with a single ControlNet, if you use the HED model it's already good for changing the style of a single image. It works kind of like a depth map but preserves the detail and lines of the image.

  • @PhilippSeven  1 year ago +2

    You can inpaint a character onto the background much more easily: there is a tab for that, "Inpaint upload". Just use the depth pic as a mask. No need to draw anything, and the result is much cleaner.

  • @evylrune  1 year ago

    Damn, the sketch lighting trick is pretty cool.

  • @bolbolzaboon  1 year ago

    This video right here is worth so much that I'm gladly going to join the Patreon.

  • @TrancorWD  1 year ago +2

    Very cool seeing the multi-ControlNet stuff.
    The last few days I've been adding support for interactive visualization of ControlNet outputs as a pre-pre-processor, since it's annoying poking at values without knowing what the outcome will actually be.
    And I figure, if I'm doing that for fun, other people on the sd-webui-controlnet extension team are probably doing it as well.
    The capability for feedback from the ControlNet preprocessor is all there in A1111; it's just a matter of connecting up the hooks for it.

  • @krz9000  1 year ago

    dude...what a treasure chest of a video!!

  • @autonomousreviews2521  1 year ago +1

    You never waste my time :D Thank you for the depth and detail.

  • @lioncrud9096  1 year ago

    damn, this was like 20 tutorials in one! Awesome content Mr. Aitrepreneur

  • @Nickknows00  1 year ago

    Awesome, man!!! So much new stuff I didn't realise you could do!

  • @Apothis1  1 year ago

    Thank you so much for all these helpful vids; makes this super easy to understand. Really appreciate it.

  • @AscendantStoic  1 year ago

    Fantastic collection of tips & tricks, nice work ;)

  • @joshuajaydan  1 year ago

    You sir are insanely talented. Thanks for sharing.

  • @garen591  1 year ago +1

    That's really cool. Could you talk about how to change an object's perspective too? Say you want a front, top, side, or isometric view of an object.

  • @karl-heinzbiederbick87  1 year ago

    Wow! Thank you for sharing.

  • @mariokotlar303  1 year ago

    Thank you so much! This was very helpful!

  • @HolidayAtHome  1 year ago +3

    Remember the time when txt2img with 1.4 was the only thing we could do? =D

    • @estrangeiroemtodaparte  1 year ago

      I remember discovering disco diffusion and thinking it was magical! lol Things are changing fast!

    • @IceMetalPunk  1 year ago

      Yeah, less than a year ago 😂

  • @ashokp9260  1 year ago +2

    Hey, this is so incredible... tools for infinite creativity at zero cost. Also, I feel I am light years behind if I forget to follow developments even for a week.

    • @devnull_  1 year ago

      Like you didn't already have possibility for "infinite creativity" with pen and paper or photoshop.

    • @ashokp9260  1 year ago +1

      @@devnull_ Like everyone on earth is as artistic as da Vinci.

  • @luke.perkin.inventor  1 year ago +1

    Great video, so many useful tips!

  • @maddercat  1 year ago

    Wow, I had no idea this was possible; that's insane... I've got to wrap my head around it. It's difficult even watching you do it, lol.

  • @GerwaldJensRadsma  1 year ago +1

    WooooOOOooooW Good as always!

  • @moneyjuice  1 year ago

    That's insane how quickly the AI tech is evolving.

  • @asciikat2571  1 year ago

    Craaaaaazzzzzzy! Love it, You are the strongest

  • @Kryptonic83  1 year ago

    awesome collection of tips, I've been loving playing around with controlnet lately. keep up the great work!

  • @alexb6969  1 year ago

    You're the best! This stuff is so cool! I can't stop admiring this new tool :)

  • @Tsero0v0  1 year ago +2

    What if we use multiple ControlNets, all of them with guess mode or without it? Is there any difference?

  • @OriBengal  1 year ago

    whoa! Totally stands out from the other guys :) -- Great techniques!

  • @victorwijayakusuma  1 year ago

    Thank you for this! You are wonderful! By the way, how do you do the Excel thing, or is there any other program to see the segmentation list?

  • @fredmcveigh9877  1 year ago

    Very informative and inspiring. Thank you.

  • @gameswithoutfrontears416  1 year ago

    Thanks. Wow. Pretty cool stuff. So many possibilities 👍

  • @aaronhhill  1 year ago

    "Because I'm a madman." Hahahahaha! Love you man, you're great!

  • @6666daf  1 year ago

    Really good techniques.

  • @Actual_CT  1 year ago

    finally 3d texture workflow assistance...with precision

  • @vi6ddarkking  1 year ago +3

    Would it be possible to use multiple hypernetworks with multi-ControlNet?
    To, for example, compose an image with multiple different characters in specific outfits?

  • @cruhstin  1 year ago

    Fantastic tips! It would be awesome if you added chapters/timestamps to this video so I can quickly jump to a spot if I want to review a specific trick you showed off. Keep up the great work 😀

  • @TheAiConqueror  1 year ago

    Troll king 👑 No, seriously, I love your videos and your workflow tricks. I could watch you for hours 😁👍

  • @ColoNihilism  1 year ago

    Awesome!
    Where do we get some lighting exemplars?

  • @leowei771  1 year ago +1

    Good lord, this is moving so fast that I can barely keep up with all this new stuff.

  • @hatonafox5170  1 year ago

    You sir are my hero!

  • @Eagleshadow  1 year ago

    Super useful, thanks!

  • @jimdelsol1941  1 year ago

    Absolutly amazing. Thank you.

  • @hexemeister  1 year ago

    Your channel is awesome, but I am overwhelmed with so much info. Is there a newbie playlist to start from the beginning and catch up?

  • @lucstep  1 year ago +1

    Really cool! But is it possible to combine photos like shown in the video, but not originating from SD (for example, a real photo of a background with a real photo of someone)?

  • @velly027  1 year ago +1

    I don't know if it's possible yet because I've lost track of the newest developments. I have two trained models of two different characters. I want to place both characters in one image with a chosen background. Both characters should be arranged with OpenPose. Then the whole image should be created with diffusion in one step so that the lighting and the shadows are correct across the whole image. Is this possible yet? Maybe I must wait another week :)

  • @benjamininkorea7016  1 year ago +2

    Brilliant. The segmentation index is completely crazy, and I never would have found it. I gave up on segmentation almost immediately as useless, but I was wrong!
    Do you have any workflow yet for altering the appearance of existing human faces, not from prompts? In particular, I want to use my face in a new scene, but not with the same lighting it was shot in in my living room; I want to add god rays etc. to it.

    • @muuuuuud  1 year ago +1

      Unless I'm mistaken, I think LoRA is your answer.

    • @Mocorn  1 year ago

      @@muuuuuud Or Dreambooth, or textual inversion, or a hypernetwork.

    • @sunlightg  1 year ago

      I agree with the previous commenter. Just train your own LoRA model on your own face and then use it as you wish. There is a good video on this channel about training LoRA, so just follow the instructions. Just don't forget to read the comments under the video, because there are some important things to add :D

  • @muuuuuud  1 year ago

    Thanks K for the powerful knowledge, have a wonderful weekend! ^__^

  • @mandapanda5252  1 year ago +2

    I have been going through this one by one, not getting any of the same results, and feeling a bit defeated. I was most excited about transferring a character onto a background, but no matter what I do, it changes the character completely. I have adjusted the denoise and weight so many times. Perhaps it struggles with full-body characters?

    • @cameron7814  1 year ago

      If you figure this out, please lmk; I'm also having trouble transferring characters into a background. The depth and canny models seem to be working fine, but the character always shows up undetailed and almost transparent against the background, no matter which settings I change.

  • @rincondesalva  1 year ago

    Awesome video! Could you please (in any ControlNet video) tell us what requirements (VRAM especially) everyone needs to run this addon properly locally? I guess my mobile RTX 3070 with 8 GB is almost incapable...

  • @doobertoob4266  1 year ago

    Lol finally. Guess mode is pretty much what Midjourney does by default

  • @OndrejL  1 year ago

    wow bruh great discoveries and thx for the share

  • @megumin4625  1 year ago

    The multi controlnets are now tabbed. Niiice

  • @haan3388  1 year ago +1

    Do you have an in-depth vid on the inpaint trick you explained around 5:00?

  • @cienciaemutopia  1 year ago

    Nice tips and video,from brazil

  • @thethiny  1 year ago

    Why did you turn off the preprocessor for OpenPose while making the man dancing in the living room?

  • @zhizui  1 year ago

    OMG!! This is really useful! One watch probably isn't enough to learn everything in it.

  • @goldenknowledge5914  1 year ago

    It's crazy. So many things to learn just for one feature. It is indeed the age of AI.

  • @in2thedark  1 year ago

    You are the man! 🤖 thanks for the tips

  • @justinyuvilla8944  1 year ago

    How can you put a photograph in as the reference with the exact lighting style you want and have the AI match that very same lighting on your original image?

  • @danilo_88  1 year ago

    This is pretty cool. I love your videos. You could use more models other than just anime and cartoon.

  • @jtmcdole  1 year ago +1

    Now we just need a node-based flow plugin to make the rotoscoping easier.

  • @mistercapitale  1 year ago

    Aitrepreneur, is it possible to take a logo and place it on a shirt with any of the ControlNet models?

  • @XavierCliment  1 year ago

    Hello robot!! I have a question about something that may not be possible with ControlNet and Stable Diffusion.
    I am trying to colorize old images, and since ControlNet now allows more than two models, I'm trying to do it with a reference image.
    I have the image in black and white, and I have another one in color; it is not the same image, but the dresses and the people are the same.
    Is there a way for ControlNet to understand that the t-shirt in the black-and-white image has to be colored like the one in the color image? The same for the face, the trousers, the wall...?
    Would you be able to pull that off?

  • @Avalon19511  1 year ago

    Is using controlnet a more exact way of doing img2img?

  • @nganpoiis8961  1 year ago

    Is there a ControlNet rig available for Blender?

  • @Dex_1M  1 year ago

    I think using two ControlNets, one with canny or depth and one with the pose, and changing only the pose while keeping the canny or depth fixed, will let you control the character's movement while keeping the details. I assume this is true, and if you can get it right, you can make a lot of images and thus have an animation.
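A rough sketch of that two-unit setup as it would look in the A1111 ControlNet panel (the model names, weights, and preprocessor choices below are illustrative assumptions, not settings confirmed in the video):

```
ControlNet Unit 0 (composition; keep the same input image for every frame):
    Enable:        yes
    Preprocessor:  depth                   (or canny)
    Model:         control_sd15_depth
    Weight:        1.0

ControlNet Unit 1 (movement; swap in a new pose image for each frame):
    Enable:        yes
    Preprocessor:  none                    (feed a ready-made OpenPose skeleton)
    Model:         control_sd15_openpose
    Weight:        0.8
```

Fixing the seed and the Unit 0 input while stepping only the pose image is what would keep the character and scene consistent from frame to frame.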

  • @fernandoz6329  1 year ago

    Awesome tricks

  • @KonstantinRozumny  1 year ago

    Great video! How to add two different characters to the same background?

  • @cesar4729  1 year ago +1

    You missed the best sketch trick, where you use colors to add things or transform the picture. For that you need the same picture in ControlNet; then play around. It's way more powerful than the segmentation trick.

  • @teebu  1 year ago

    How do you get ControlNet to work with sitting, cross-legged, or floating poses? Mine keeps messing up the limbs, and sometimes arms become feet. It seems to have problems when lines overlap.

  • @theairchitect  1 year ago

    I'm in love with the new ControlNet updates. But after the update I see an error in Deforum =( an error in paths... do you know what I have to do? =(

  • @Krougher  1 year ago

    That's insane!

  • @flonixcorn  1 year ago

    Very nice!

  • @lntcmusik  1 year ago

    This is great! Thanks for sharing 🙂
    Do you have an idea how to create an animated sticker for, let's say, any messenger using Stable Diffusion? That would be interesting.

  • @stinkyjutsu  1 year ago +1

    I've tried using 1.5 inpainting with ControlNet and it just won't work for me; everything else is good though. I've also purchased painters' model figurines to create my own poses for ControlNet to map and create images from.

  • @philipZhang86  1 year ago

    great tutorial~~

  • @earthpond8043  1 year ago

    Bro you keep blowing my mind damn gj

  • @Tcgtrainer  1 year ago

    How can you create the highlight effect without changing the image? Because faces change a little when you do it.

  • @resonantone3284  1 year ago +2

    Installed it yesterday, and boy... I'm still struggling to figure it out. If it's going to do what I think, wow... Why do all the cool toys have to come out when I'm on a deadline?!??!? It's a plot I tell you!