Map Bashing - NEW Technique for PERFECT Composition - ControlNET A1111
- Published: 11 Jun 2023
- Map Bashing is a NEW technique for combining ControlNet maps for full control. It lets you create amazing art and have full artistic control over your AI works: you can define exactly where elements in your image go. At the same time you keep full prompt control, because the ControlNet maps carry no color, daylight, weather or other information, so you can create many variations from the same composition.
#### Links from the Video ####
Make Ads in A1111: ruclips.net/video/LBTAT5WhFko/видео.html
Woman Sitting unsplash.com/photos/b9Z6TOnHtXE
Goose unsplash.com/photos/eObAZAgVAcc
Pillar www.pexels.com/photo/a-brown-concrete-ruined-structure-near-a-city-under-blue-sky-5484812/
explorer: unsplash.com/photos/8tY7wHckcM8
castle: unsplash.com/photos/8tY7wHckcM8
mountains unsplash.com/photos/lSXpV8bDeMA
Ruins unsplash.com/photos/d57A7x85f3w
#### Join and Support me ####
Buy me a Coffee: www.buymeacoffee.com/oliviotu...
Join my Facebook Group: / theairevolution
Join my Discord Group: / discord
Latent couple when?
looks like the explorer image and castle image are the same.
Sir, no need to check 'Restore Faces'. If you use a 2.5D/animated base model, the face will look weird.
You can use an extension named 'After Detailer' instead. It can fix your characters' faces flawlessly (based on your model), and it also works perfectly with character (face) LoRAs. There are also models for it that can fix hands/fingers and bodies.
Give it a try~
How do you put your own face into a picture generated in SD? It needs to match the same style and lighting. Should we use inpainting, or what?
As a birthday gift to my sister three months ago, I made a picture featuring her and one of her favorite characters.
The way it worked was I trained models of both the character and my sister. My sister's models had to be done in two steps: first with IRL pictures, then second with generated animated pictures.
Once that was done, it was a matter of compositing them all together into one picture via OpenPose + Canny + Depth and hours of inpainting, with a little Photopea.
Took me 20 work-hours.
Idk how much of this process has changed since Auto1111 is now at v1.3.2 and ControlNet at 1.1.
What are these “other models” that fix hands? If you can point me in the right direction, I’d be grateful!
@@samc5933 Until AI learns to draw hands and feet, I wouldn't worry so much about AI like Elon is now.
ADetailer is amazing, and comes standard in Vladmandic. It can be set to detect hands and fix those as well if you choose the hand model instead of the face model, but only mildly; it's not as effective on hands as it is on faces, but it can still save a picture from time to time!
Tremendous human artistic control while maintaining the ai creativity as well. Nice!
Exactly. I think a lot of traditional artists, particularly those with at least basic desktop publishing skills (or basic doodling skills) would love how empowering this is. 1111 is such a wonderful art tool, it's a pity that it can be so technically challenging to get set up, I hope this gets solved soon and that the solution becomes more accessible to the unwashed masses.
@@paulodonovanmusic Yeah, it's very close to how a real concept artist for movies and games works.
Main difference is they use a collage of photos to get a rough composition and then paint over it.
ControlNet gets even better with every new update.
It is. But this method is as old as ControlNet itself.
Tried it, and this is probably the most simple, creative, and effort-effective technique I've come across. It's so easy to edit edge maps, even with simple image editing software. Thank you Olivio! :D
This is amazing, as so often... one of the most underrated RUclips accounts for A1111 tutorials!
Olivio, you're a Rockstar! Been following you for a while. Extremely grateful to have found your channel.
You are a lot more amazing than Stable Diffusion XL bro, what good is a tool if we don't have people like you to show us how to use it properly!!!
This is exactly what I was looking for. I still have a few things to piece together but this was huge, thank you so Much for your time.
I have been using similar techniques for a while now; the AI dance animations I make are a lot more complex. Glad you made a tutorial on this, I'll redirect anyone who asks for SD tutorials to your channel. Thanks Olivio❤❤
Hi Olivio! I am amazed about the master level at which you use the tools. Thank you for sharing this with us!
Fantastic, and well detailed video Olivio. Look forward to trying this.
Oh wow, this actually solves the composition and color issues, great find Olivio thanks !
this is brilliant! thanks for sharing. opens up so many possibilities, and also helps me grasp the infinitely vast world of controlnet a little better
You are a legend!!! Thank you sooooo much for this. Game changer. I will check back and let you know how it goes!
This is one of your best videos, and you have a lot of really good videos!
I'm so excited to use this technique. I was getting frustrated with the limitations of openpose not being detailed enough. But this soft edge thing looks really powerful as long as I'm willing to do a little manual photo editing beforehand.
Great video. I'm particularly happy that you used Affinity Photo to create your maps.
Amazing process! Thanks for sharing this!
I've been contemplating how best to bash up source images to create a final composition for SD rendering and this looks like a grand solution! Thanks for sharing.
Watched your live stream over this last night. Highly enjoyed it.
Thank you very much
That was incredible! I love what you do. I don't have ControlNET but if I could get it I would study your methods even more.
This is a great video! Thank you! 😮
Brilliant!! Thanks!
One very similar method I've been exploring is creating depth maps via digital painting.
Additionally, I've experimented with using an inference-based map and then modifying it by hand to get more unusual results.
Mixing 3D based maps (rendered), inference based (preprocessed), and digital painting methods, while utilizing img2img and multi-controlnet highlights the power of this tech.
"Map Bashing" is a great term.
You could also use background removal tool step to preprocess each image, or as others suggested, non destructive masking when cutting them out.
You don't even need to do any sort of masking. When both images have a black background and white strokes, just set the top layers to Linear Dodge blend and they will seamlessly blend together.
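The Linear Dodge trick described above amounts to additive blending: black (0) contributes nothing, so the white strokes from each layer pass straight through. A minimal sketch with Pillow, used here as a hypothetical stand-in for an image editor, with tiny synthetic maps standing in for saved preprocessor output:

```python
from PIL import Image, ImageChops

# Two stand-in soft-edge maps: white strokes on black backgrounds.
map_a = Image.new("L", (64, 64), 0)
map_b = Image.new("L", (64, 64), 0)
map_a.paste(255, (10, 10, 30, 30))  # stroke cut from image A
map_b.paste(255, (40, 40, 60, 60))  # stroke cut from image B

# Linear Dodge is additive blending (clipped at white), so the black
# backgrounds drop out and only the strokes from both layers survive.
combined = ImageChops.add(map_a, map_b)
```

The same holds for any number of layers, since adding 0 never darkens the result.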
Thorough brother.
Peace and love from Cape Town.
Thanks .. smart trick to make machine function as our helper not just our overlord
Love your stuff, learning lots. this is awesome
wow, learned a new thing today. Thank you for sharing.
Who’s da man? You da man!
very cool video - thanks for it
Awesome, please make more videos like this. Thanks!
This will take my non-existent photo bashing skills to the next level. Thanks!
Holy smokes. This changes the flow
I recommend playing around with adding this to your positive prompt: "depth of field, bokeh, (wide angle lens:1.2)"
Without the double quotes of course.
Wide angle lens is a trick that allows the subject's face to take up more of the area on the image while still fitting in enough context of the area around the subject. And the more pixels you allow it to generate the face, the more details you'll get generally. Although, if you already have controlnet dictating the composition of the image, adding wide angle lens to your prompt will likely have no effect and therefore reduce the effectiveness of everything else in your prompt.
The depth of field and bokeh are just some ways to make it feel like it was a photo shot professionally by a photographer than if it was just shot by an average person with automatic camera settings.
This was very useful, thank you. I was considering drawing outlines over photos and 3D renders to do something similar, but using the masks generated by the AI should work as well and save a lot of time.
Incredible video, as always. Grats!
Thank you
You are amazing!
Brilliant stuff
👏👏👏👏👏
Nice, Thanks Olivio
this was really cool, thanks for sharing!
Beautiful
Super helpful as always! Big FAT FANX!
This was a great tutorial on Affinity
wonderful
Brilliant video and thanks for sharing your workflow. I have been doing something similar but using blender & daz studio to build the composition first (although this does take a lot longer I think!).
I love it!♥♥
I've been using this for ages! ❤
NOTE!: RevAnimated is *terrible* at obeying controlnet! (It is my favorite model for composition, but... I wouldn't use it like this.)
I inpaint after the initial render. Same map bash controlnet, +inpaint controlnet (no image), inpaint her face w/ "face" prompt, pillar w/ "pillar" prompt, etc.
No final full-image upscale; SD can't handle more than 3 large-scale concepts.
You can get hires details in a 4k canvas by cropping a section, inpainting more detail, then blending the section back in w/ photoediting software. (This takes some extra lighting-control steps; there are tutorials on how to control lighting in SD.)
Could you clarify the "extra lighting-control steps" you mentioned? Is that the map we painted in black & white and then fed into the img2img tab?
Thank you in advance!
@@foxmp1585 I barely remember my workflow from back then... SDXL is fantastic at figuring out what sketches mean in img2img. Right now, I block out a color paint sketch with a large brush, then run it through img2img with the prompt, then paint over the output, and run it through again and repeat, eventually upscaling and inpainting region by region with the same process. I have just about perfect control over composition, facial expressions, lighting, and style. :-)
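The crop, detail, and blend-back step described at the top of this thread can be sketched roughly like this (Pillow used for illustration; the `point` call is a hypothetical stand-in for the actual SD inpainting pass):

```python
from PIL import Image

# A stand-in 1024px canvas; in practice this is your upscaled render.
canvas = Image.new("RGB", (1024, 1024), (40, 40, 40))
box = (256, 256, 512, 512)  # region that needs more detail

tile = canvas.crop(box)  # cut the section out at full resolution

# Hypothetical stand-in for running the tile through SD inpainting:
detailed = tile.point(lambda v: min(v + 10, 255))

canvas.paste(detailed, box)  # blend the detailed section back in
```

In a real photo-editing pass you would feather the seam rather than hard-pasting the tile.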
Thank you for this technique! It's really useful. As advice from my side, I suggest using alternative methods for fixing faces (ADetailer, inpainting, etc.) instead of "Restore Faces". It uses one model for every face, and as a result the faces turn out too generic.
@14:43 mind blown 🤯😵🎉
WOW I like It!
Hello future me, remember to use IP-Adapter for faces and bodies, and have ADetailer as a backup. Works well x
Amazing video, as usual!!! I'm still not getting where to do it. Is it local on your PC? Do you need a very powerful GPU? Or is it online?
Brilliant results! If a very convoluted workflow, beyond all but the most dedicated; but as the saying goes, no pain, no gain 🍷
Would it not be simpler to create the control maps right in Affinity Photo by using the Filter > Detect Edges command on your source images? Just a thought.
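A generic edge-detect filter can indeed approximate a control map. A hedged sketch with Pillow's FIND_EDGES (an assumption: this is analogous to, not identical to, Affinity's Detect Edges, and it will not match the HED/PiDiNet SoftEdge preprocessors exactly):

```python
from PIL import Image, ImageFilter

# Stand-in source photo: a white square on black.
src = Image.new("RGB", (64, 64), "black")
src.paste((255, 255, 255), (16, 16, 48, 48))

# FIND_EDGES already yields white lines on black, which is the polarity
# ControlNet expects; an editor's Detect Edges may need an extra Invert.
edges = src.convert("L").filter(ImageFilter.FIND_EDGES)
```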
Thanks for the video 👍
hi Olivio, the image of the castle has the same link as the explorer image. Great video!
I installed Automatic1111 last week and now I'm watching one video after another from you, getting ready to become an AI artist😁
Awesome!!!
Hi Olivio, thank you for the super cool video! Curious: if you were using a depth map instead of SoftEdge for the woman, how would you edit it in Affinity to remove the background? It seems trickier for a depth map, since the background might be a shade of gray instead of absolute black. Thanks.
You need to familiarize yourself with masks in your image editor, so that you're using a non-destructive process instead of rasterizing and resizing things, which loses you quality. And if you erase things, you won't have a way to undo other than the undo button.
In a way, I agree with you. But honestly, the whole point of a workflow like this (and AI/SD in general, I think) is that it's as quick/efficient as possible. Going in and using more "proper" methods like masking/mask management, more layers, etc. is nice, but it takes more time and more clicks, and for the purposes of making a quick map for ControlNet like this, it's likely not even worth bothering (in my opinion).
@@theSato I mean, once you learn to use masks, it is so much quicker. For example, he had to resize the girl larger because he wanted to make sure the quality was best. If he used a mask, he could have just made one and erased with a black paintbrush (hit X to switch to a white brush to correct a mistake), or used the free-selection method and, instead of pressing Delete, filled with the foreground color by hitting Option+Delete. It's a super small thing, as you said, but it will make your workflow faster, your mistakes less damaging (resizing a rasterized image over and over decreases its quality), and lastly it will just make your images better.
Sorry for writing a book; once you learn masks, you will never not use them again.
I've found myself saving intermediate steps less and less. Something about AI just changes the way you feel about data. (Also, Infinite Painter doesn't have masks, and I can make great art just fine.)
@@theSato I agree with this. The bashing part of the process isn't so much about precision as giving SD a rough visual guide to what you want.
@@ayaneagano6059 I know how to use masks, don't get me wrong. But it's an unnecessary extra step when you're just trying to spend 30 seconds bashing some maps or elements together for SD/ControlNet. The precision is redundant, and I have no need to sit there and get it all just right.
For purposes other than the one shown in the video, yes, use masks and it'll save time long term. But for the use in the video, it just costs more time when it's meant to be done quickly, and quality losses from resizing are irrelevant.
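The non-destructive masking approach debated in this thread can be sketched like so (Pillow as a hypothetical stand-in for an editor's layer mask: painting white on the mask reveals, painting black hides, and the subject layer is never touched):

```python
from PIL import Image

subject = Image.new("RGB", (64, 64), (200, 150, 100))  # stand-in subject layer
background = Image.new("RGB", (64, 64), (0, 0, 0))     # black map background

mask = Image.new("L", (64, 64), 0)  # start fully hidden
mask.paste(255, (8, 8, 56, 56))     # "paint white" to reveal the subject

# composite() takes `subject` where the mask is white, `background` where black.
cut_out = Image.composite(subject, background, mask)
```

Repainting the mask at any time redoes the cut-out, with no quality loss in the subject layer.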
Super useful!!!!! Thx
This is great! What's the AI program you're using called? It's obviously not Midjourney.
Holy bananas!!!!!!!!!!!!!!!!!
Those fingers, though :D
nice
Hi. Thanks for sharing your smart notes on every new thing; I really appreciate that. I have one question. After checking for updates in SD's extensions tab, the system responds that I have the latest ControlNet (caf54076 (Tue Jun 13 07:39:32 2023)). However, I can't find the SoftEdge control model in that dropdown list, though I do have the SoftEdge ControlNet type and preprocessor. What might be wrong?
It would be interesting to see what your outcome is without the maps, and just using the prompts as a comparison.
To fix faces automatically you can use the adetailer extension.
Cool! Now what if we make an animation using e.g. Blender, but only for the line art, then input each frame into ControlNet and generate the final animation frame by frame? I wonder when it will become so consistent that we can consider it a real animation.
For iOS users on iOS 16 and above, there's an easy way to crop out the image: transfer the image to your phone (Google Photos or something), save the image, then press and hold on the subject you want captured. Tap Share and Save Image, then transfer it back to your PC.
That was my method for creating art 😊
It can be easier to fuse layers with an "additive" blend mode.
You're a legend... Are you available on LinkedIn?
Have you watched the log while running hires fix with upscale by 1? I tried doing so as you noted, but it just ignores the process. On or off, no difference in output. It might just be because I'm using Vlad's fork. Worth double-checking, though.
This is very valuable content, but may I suggest you alter the title a bit? It is not very enticing to users who are not fully in the know about AI and prompts.
Keep up the great work!
How do you control a scene with 2 people in it? Say, fighting. Do a map bash and then a colored version of the map with separate prompts?
Crazy cool! How can I retain the face if I wanted to use my own face? What’s the best prompt to use to ensure the closest resemblance? Thanks!
Make a LoRA of your face, then use ADetailer.
Oh I see, I misunderstood. The name makes more sense now.
What version of SD are you using? Have you upgraded to 2.0+? (If so do you have a video on how to upgrade?)
like always
It would have been much easier with Photoshop's Select Subject. I wonder if edge detection would do the same for SoftEdge.
Great video Olivio. What extension or setting are you using that allows you @ 11:13 to select the vae and clip skip right there in txt2img page?
I would like to know as well
In Auto1111 go to Settings, User Interface, and look down the page for "Quicksettings list". From there go to the arrow on the right, then highlight and check (a tick mark will appear) both 'sd_vae' and 'CLIP_stop_at_last_layers'. Restart the UI and they will be where Olivio has them. Hope that helped.
@@addermoth Thank you!
Could this be done all in Automatic using mini paint?
Hi all, could someone answer a question for me?
How much GPU do I need to run A1111? I'm mostly using Midjourney because I have a really old PC.
I'm happy AI still struggles with hands.
There's a close button at the bottom right of the preview image. I feel a little anxiety that you didn't click it, haha.
Is there any way to render the render elements inside a 3D application, like mask ID, Z-depth, ambient occlusion, material ID and other channels, to add information in Stable Diffusion for making more variations out of it?
Currently SD can properly read Z-depth (depth map), material ID (segmentation map), and normal maps.
And it depends on the app of your choice (Blender, Max, Maya, C4D, ...).
Each of these apps will have its own way of rendering/exporting these maps; you'll need to find that out yourself. It'll take time but it's worth it!
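One wrinkle worth noting: a raw Z-depth pass from a 3D app usually stores larger values for farther points, while ControlNet's depth maps expect white = near and black = far. A minimal normalization sketch (the tiny array is a stand-in for a rendered Z pass):

```python
import numpy as np
from PIL import Image

z = np.array([[1.0, 5.0],
              [10.0, 10.0]])  # stand-in Z-depth pass (far = large)

z_norm = (z - z.min()) / (z.max() - z.min())     # normalize to 0..1
depth = ((1.0 - z_norm) * 255).astype(np.uint8)  # invert: near = white

depth_map = Image.fromarray(depth, mode="L")     # ready to feed ControlNet
```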
It's OK to use A1111 to get the outline, but a Photoshop filter can also do this, and at any resolution. So I think this first step can be done with filters: get the outline picture and bash it. You can even do the rough mix first and then apply the filter; it won't speed up the process, but you'll see what you are doing more easily.
I don't think Photoshop has filters for depth maps, normal maps or OpenPose. And for the soft edge filter there is an option, but there are 4 options in ControlNET; does the PS version look exactly the same as the ControlNET version?
How do you fix finger deformities, extra fingers, and bifurcation?
How do you see the 'quality' from that drop down menu ?
Can you do this with Invoke AI?
How do you get two tabs of ControlNet?
Still searching for this AI tool for comic book and children's book creators: 1. AI draws an actor using prompts. 2. An option to convert the selected character into a simple, clean 3D frame (no background); the character can be rotated. 3. The limbs, head, eyelids, etc. can be repositioned using many pivot points. 4. Then we can ask for the character to be completely regenerated using the face and clothing of the original. Once we are satisfied, we can save and paste the character into a background graphic.
Remove the background first with AI, or right-click on Mac. Then do the depth maps.
The image is not generated from the mask I created, only based on the prompt. I have set all the settings as in the video. What could be the problem?
Have you clicked the "Enable" checkbox in the ControlNet blade? I often miss that!
@@jibcot8541 Thank you. Yes, I clicked on enable. Unfortunately, it keeps generating random results. It feels like I have something not installed.
@@jibcot8541 The problem was solved by removing the segment-anything extension.
Looks like rendering of hands is still the Achilles' heel.
Hands are just really hard to create and understand. Even for actual artists, this is one of the hardest things to create
Everyone wants to give bird wings. I might try using a peacock spider instead.
Why don't I have ControlNet in my Automatic1111?
Because it is an extension you need to install.
0:01 six fingers hahaha
Can this be used for commercial use? The base is someone else's intellectual property 🤔