HELLO HUMANS! Thank you for watching & do NOT forget to LIKE and SUBSCRIBE for more AI updates. Thx
Have you seen Corridor Crew's video "Did We Just Change Animation Forever?"? That seems super worth experimenting with.
I have some cartoon characters (see my profile). I tried training Stable Diffusion with them, but it can't seem to replicate them; the training only seems to work with things it created itself. Am I doing something wrong, or is it that my character is unique and there isn't enough reference? When I did kind of get it to work, with Astria I think, it drew it like a five-year-old. If it is possible to train with your own 2D characters, PLEASE make a video about that.
Hi, could you share the Excel file with the colors?
How did you enable the guess mode?
This is getting so mind-blowing now. There are just so many different possibilities for how to make a good image. I almost feel like I need to take a step back for a bit and think about what I would really like to create, then see what technologies are now available that could help me achieve this.
We are getting so much control so fast.
I honestly don't think that even six months ago anyone thought we would have advanced this much this quickly.
And it's growing exponentially. Video is a month or two away (or less). Then come the apps, the corporate control, the tightening laws on models, the underground trade, the bootleg versions, blah blah. It will be fun to watch - we're in the wild wild west phase right now.
This reminds me of when personal computers became available and a lot of us learned to program.
I can see a quality difference in my output folder from just two weeks back! This is crazy.
I called it. Six months ago I said that while it seems like this kind of tech should take a year, it would take half the time. There are just too many people grinding away at these tools simultaneously.
Just crazy.
With so much "tuning" and manual work, it's pretty safe to say that these "generative" works are more human than anything else, which demonstrates that these are just tools at the end of the day.
In this case very cumbersome tools for a pretty simple operation
@@dv6165 True, but only if we take it at face value. If you wanted to change an image from realistic to, idk, Helltaker style manually, it would take way longer. As with most tools, it's a matter of use cases.
With generative AI you never get the image you wanted; you end up with some image that looks pretty after a million random parameter tweaks.
Can you get a deterministic result on the next attempt? No.
It would be better if these models churned out 2K+ resolution, since upscaling is again a million guesses at the parameter buttons 😔😔
@@user-zi6rz4op5l You're essentially describing what a customer feels when they hire an artist or designer. What these tools do is put you into a different role - not the artist but the art director.
Holy shit, so much progress in AI image gen in so little time, my dreams of doing my own graphic novels using ML are pretty much fully within grasp using these techniques. What a time to be alive!
Dude, I need to watch this video 10 times with some personal experimentation; this new multi-ControlNet is a masterpiece.
I've been looking at separate tutorials for the past few days, and even those combined didn't give me as good and precise information as this single video did... amazing! Very much appreciated!
For someone who has been playing with ControlNet from day one just to blend a foreground character perfectly into a desired background, this video is a treasure. I tried many combinations but never thought of that inpainting trick you showed. You are the absolute best, dear friend. 👌💪👍👏🧡
Have you managed to make it work? For me it changes the whole picture, and "masked only", on the other hand, makes things incorrect.
Nice! You have the best Stable Diffusion / Automatic 1111 content on YT. Thank you so much for letting us know all this!
That trick to crudely draw lines to generate edge lighting worked way better than I thought it would. How on earth does it bleed onto the nose like that? This is some wild shit!
I have been trying for hours to do the things you have now solved. Thank you for your experimentation and sharing. The fact that you and a couple of other guys on YouTube are feeding off each other and pushing this forward to simplify our efforts is really appreciated. Please keep up the good work.☺
This gives me an idea for an experiment: Comics made with multi ControlNets. One for the frame. Add characters in with OpenPose. Then segmentation for specific objects.
How would you draw the same character in different poses?
@@f0kes32 see previous video ;)
CharTurner... (Civitai)
I just made a Steve Urkel in the style of EC Comics (Tales from the Crypt) and I'm very weirded out. AKA it's awesome.
@@wakegary What model for EC Comics?
This is why I love SD because you have total control of what you want to do and how your image will be.
I find all this shows that this tool can be used by real artists to build up an image little by little, from the background to the pose all the way to the lighting.
Now this is the level of control I was badly missing back in January!
15:41 That image is absolutely insane, I love it!
Can't wait WHAT A TIME TO BE ALIVE AND FIRST!!!
Controlnet is amazing, and thanks to you I understand it more and more every day. Thank you from the bottom of my heart. Sincerely, your loyal subject
Amazing. Probably the most impressive new features since Dreambooth.
Interesting. I have not seen anyone talking about controlnet. So it’s great to see. Thanks.
Best controlnet video on the net. Tnx NON-HUMAN !
Thanks for the epic tutorial! If there were only a photopea version of After Effects too.
I haven't tried the multi-CN, but with a single CN, if you use the HED model it's already good for changing the style of a single image; it works kind of like the depth map but preserves the detail and lines of the image.
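A rough sketch of that single-CN style change, done outside the webui with the diffusers and controlnet_aux libraries rather than the A1111 UI the commenter is describing; the model IDs, file names, and prompt are placeholder assumptions.

```python
# Hedged sketch: restyle an image while keeping its lines, via HED + ControlNet.
# Assumed model repos and file names; adjust to your setup.
import torch
from controlnet_aux import HEDdetector
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

source = load_image("portrait.png")
hed = HEDdetector.from_pretrained("lllyasviel/Annotators")
hed_map = hed(source)  # soft edge map that preserves the original's lines

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-hed", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# The prompt (and/or a different checkpoint) carries the new style.
restyled = pipe("watercolor illustration, soft pastel colors", image=hed_map).images[0]
restyled.save("restyled.png")
```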
You can inpaint the character onto the background much more easily: there is a tab for that, "Inpaint upload". Just use the depth pic as the mask. No need to draw anything, and the result is much cleaner.
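A minimal sketch of one reading of that tip (an interpretation, not from the video): threshold the depth preprocessor output into a binary mask for the "Inpaint upload" tab, so the character silhouette never has to be hand-painted. File names and the threshold value are assumptions.

```python
# Hypothetical helper: turn a ControlNet depth map into an inpaint mask.
import cv2

depth = cv2.imread("depth_map.png", cv2.IMREAD_GRAYSCALE)  # depth preprocessor output
# Foreground characters are usually brighter (closer) in the depth map,
# so a simple threshold separates them from the background.
_, mask = cv2.threshold(depth, 100, 255, cv2.THRESH_BINARY)
# Close small holes so the mask edge is smooth.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
cv2.imwrite("inpaint_mask.png", mask)  # upload this as the mask in "Inpaint upload"
```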
Damn, the sketch lighting trick is pretty cool.
This video right here is worth so much that I'm gladly going to join the Patreon.
Very cool seeing the multi-control net stuff.
For the last few days I've been adding support for interactive visualization of ControlNet outputs as a pre-pre-processor, since it's annoying poking at values without knowing what the outcome will actually be.
And I figure if I'm doing that for fun, other people on the sd-webui-controlnet extension team are probably doing it as well.
Because the capability is all there for feedback from the ControlNet preprocessor in A1111; it's just a matter of connecting up the hooks for it.
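Not the extension code described above, just a minimal stand-alone way to preview a preprocessor output before committing to a generation; the input file name and Canny thresholds are placeholder assumptions you would tune.

```python
# Sketch: preview a Canny preprocessor map so the thresholds aren't a blind guess.
import cv2

img = cv2.imread("input.png")
low, high = 100, 200  # the values you would otherwise poke at in the UI
edges = cv2.Canny(img, low, high)
cv2.imwrite("canny_preview.png", edges)  # inspect this before running txt2img
```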
dude...what a treasure chest of a video!!
You never waste my time :D Thank you for the depth and detail.
damn, this was like 20 tutorials in one! Awesome content Mr. Aitrepreneur
Awesome man!!! So much new I didn’t realise you could do!
Thank you so much for all these helpful vids, makes this super easy to understand, really appreciate it.
Fantastic collection of tips & tricks, nice work ;)
You sir are insanely talented. Thanks for sharing.
That's really cool. Could you talk about how to change an object's perspective too? Say you want a front, top, side, or isometric view of an object.
Wow! Thank you for sharing.
Thank you so much! This was very helpful!
Remember the time when txt2img with 1.4 was the only thing we could do? =D
I remember discovering disco diffusion and thinking it was magical! lol Things are changing fast!
Yeah, less than a year ago 😂
Hey, this is so incredible... tools for infinite creativity at zero cost. Also, I feel I am light years behind if I forget to follow developments even for a week.
Like you didn't already have the possibility of "infinite creativity" with pen and paper or Photoshop.
@@devnull_ Like everyone on earth is as artistic as da Vinci.
Great video, so many useful tips!
Wow, I had no idea this was possible, that's insane... I gotta wrap my head around it; it's difficult even watching you do it. lol
WooooOOOooooW Good as always!
It's insane how quickly AI tech is evolving.
Craaaaaazzzzzzy! Love it, You are the strongest
Awesome collection of tips; I've been loving playing around with ControlNet lately. Keep up the great work!
You're the best! This stuff is so cool! I can't stop admiring this new tool :)
What if we use multiple ControlNets, all of them with guess mode or without it? Is there any difference?
whoa! Totally stands out from the other guys :) -- Great techniques!
Thank you for this! You are wonderful! By the way, how do you do the Excel thing, or is there any other program to see the segmentation list?
Very informative and inspiring. Thank you.
Thanks. Wow. Pretty cool stuff. So many possibilities 👍
"Because I'm a madman." Hahahahaha! Love you man, you're great!
Really good techniques.
finally 3d texture workflow assistance...with precision
Would it be possible to use multiple hypernetworks with multi-ControlNet?
To, for example, compose an image with multiple different characters with specific outfits?
Fantastic tips! It would be awesome if you add checkpoints/timestamps to this video so I can quickly go to a spot in the video if I want to review a specific trick you showed off. Keep up the great work 😀
Troll king 👑 No seriously, I love your videos and your workflow tricks. I could watch you for hours 😁👍
awesome
Where do we get some lighting exemplars?
Good lord, this is moving so fast that I can barely keep up with all this new stuff.
You sir are my hero!
Super useful, thanks!
Absolutely amazing. Thank you.
Your channel is awesome, but I am overwhelmed with so much info. Is there a newbie playlist to start from the beginning and catch up?
Really cool! But is it possible to combine photos as shown in the video, but not originating from SD (for example a real photo of a background with a real photo of someone)?
I don't know if it's possible yet because I've lost track of the newest developments. I have two trained models of two different characters. I want to place both characters in one image with a chosen background. Both characters should be arranged with OpenPose. Then the whole image should be created with diffusion in one step so that the lighting and the shadows are correct over the whole image. Is this possible yet? Maybe I must wait another week :)
Brilliant. The segmentation index is completely crazy, and I never would have found it. I gave up on segmentation almost immediately as useless, but I was wrong!
Do you have any workflow yet for altering the appearance of existing human faces not from prompts? Specifically, I want to use my face in a new scene, but not with the same lighting it was photographed with in my living room, and add god rays etc. to it.
Unless I'm mistaken, I think LoRA is your answer.
@@muuuuuudor Dreambooth, textual inversion, or a hypernetwork.
I agree with the previous commenter. Just train your own LoRA model with your own face and then use it as you wish. There is a good video on this channel about training LoRA, so just follow the instructions. Just don't forget to read the comments under the video, because there are some important things to add :D
Thanks K for the powerful knowledge, have a wonderful weekend! ^__^
I have been going through this one by one, not getting any of the same results, feeling a bit defeated. I was most excited about transferring a character onto a background, but no matter what I do it changes the character completely. I have adjusted the denoise and weight so many times. Perhaps it struggles with full body characters?
If you figure this out please let me know; I'm also having trouble transferring characters into a background. The depth and canny models seem to be working fine, but the character always shows up undetailed and almost transparent against the background no matter which settings I change.
Awesome video!... Could you please (in any ControlNet video) tell us what requirements (VRAM especially) everyone needs in order to run this addon properly locally?... I guess my RTX 3070 mobile with 8 GB is almost incapable...
Lol finally. Guess mode is pretty much what Midjourney does by default
wow bruh great discoveries and thx for the share
The multi controlnets are now tabbed. Niiice
Do you have an in-depth vid on the inpaint trick you explained around 5:00?
Nice tips and video, from Brazil.
Why did you turn off the preprocessor for OpenPose while making the man dance in the living room?
omg!! This is really so useful! I probably can't learn everything in it from just one viewing.
It's crazy. So many things to learn for just one feature. It is indeed the age of AI.
You are the man! 🤖 thanks for the tips
How can you feed a photograph in as the reference, with the exact lighting style you want, and have the AI match that very same lighting on your original image as well?
This is pretty cool. I love your videos. You could use more models other than just anime and cartoon.
Now we just need a node-based flow plugin to make the rotoscoping easier.
Aitrepreneur, is it possible to use a logo and place it on a shirt with any of the ControlNet models?
Hello robot!! I have a question about something that may not be possible with ControlNet and Stable Diffusion.
I am trying to convert old images to color; now that ControlNet allows more than two models, I'm trying to do it with a reference image.
I have the image in black and white, and I have another image that is not the same picture but is in color; the dresses and the people are the same.
Is there a way for ControlNet to understand that the t-shirt in the black-and-white image has to be colored like the one in the color image? The same for the face, the trousers, the wall...?
Would you be able to get that to work? Do you understand what I'm saying?
Is using ControlNet a more exact way of doing img2img?
Is there a ControlNet rig available for Blender?
I think using two ControlNets, one with canny or depth and one with the pose, and playing around only with the pose while not changing the canny or depth, will let you control the character's movement while keeping the details. I assume this is true, and if you can get it right, you can make a lot of images and thus have an animation.
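A hedged sketch of that two-ControlNet idea using the diffusers library rather than the A1111 UI (the video itself uses the webui): the canny condition and the seed stay fixed while only the pose image changes per frame. Model IDs, file names, prompt, and conditioning weights are placeholder assumptions.

```python
# Sketch: fixed canny + varying OpenPose conditions to get consistent frames.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

canny_cn = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pose_cn = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=[canny_cn, pose_cn], torch_dtype=torch.float16
).to("cuda")

canny_map = load_image("character_canny.png")                       # stays fixed every frame
pose_frames = [load_image(f"pose_{i:03d}.png") for i in range(8)]   # only this varies

for i, pose_map in enumerate(pose_frames):
    # Fixed seed so only the pose condition changes between frames.
    generator = torch.Generator("cuda").manual_seed(1234)
    frame = pipe(
        "a knight dancing in a living room, detailed illustration",
        image=[canny_map, pose_map],
        controlnet_conditioning_scale=[0.6, 1.0],  # keep details, let the pose lead
        generator=generator,
        num_inference_steps=25,
    ).images[0]
    frame.save(f"frame_{i:03d}.png")
```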
Awesome tricks
Great video! How to add two different characters to the same background?
You missed the best sketch trick, where you use colors to add things or transform the picture. For that you need the same picture in ControlNet, and then play around. It's way more powerful than the segmentation trick.
How do you get ControlNet to work with sitting, cross-legged, or floating poses? Mine keeps messing up the limbs, and sometimes arms become feet. It seems to have problems when lines overlap.
I'm in love with the new updates of ControlNet. But after the update I see an error in Deforum =( an error in paths... do you know what I have to do, scale? =(
That's insane!
Very nice!
This is great! Thanks for sharing 🙂
Do you have an idea of how to create an animated sticker for, let's say, any messenger using Stable Diffusion? That would be interesting.
I've tried using 1.5 Inpainting with ControlNet and it just won't work for me; everything else is good though. I've also purchased painters' model figurines to create my own poses for ControlNet to map and create images with.
great tutorial~~
Bro you keep blowing my mind damn gj
How can you create the highlight effect without changing the image? Because faces change a little when you do it.
Installed it yesterday, and boy... I'm still struggling to figure it out. If it's going to do what I think, wow... Why do all the cool toys have to come out when I'm on a deadline?!??!? It's a plot I tell you!