Map Bashing - NEW Technique for PERFECT Composition - ControlNET A1111

  • Published: 11 Jun 2023
  • Map Bashing is a NEW technique for combining ControlNet maps for full control. It allows you to create amazing art and have full artistic control over your AI works: you can define exactly where elements in your image go. At the same time you keep full prompt control, because the ControlNet maps carry no color, daylight, weather or other information. So you can create many variations from the same composition.
    #### Links from the Video ####
    Make Ads in A1111: • Make AI Ads in Flair.A...
    Woman Sitting unsplash.com/photos/b9Z6TOnHtXE
    Goose unsplash.com/photos/eObAZAgVAcc
    Pillar www.pexels.com/photo/a-brown-...
    explorer: unsplash.com/photos/8tY7wHckcM8
    castle: unsplash.com/photos/8tY7wHckcM8
    mountains unsplash.com/photos/lSXpV8bDeMA
    Ruins unsplash.com/photos/d57A7x85f3w
    #### Join and Support me ####
    Buy me a Coffee: www.buymeacoffee.com/oliviotu...
    Join my Facebook Group: / theairevolution
    Join my Discord Group: / discord
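For readers who script A1111 rather than drive the UI, a finished bashed map can also be sent through the txt2img API via the ControlNet extension. A minimal sketch; the field names follow the sd-webui-controlnet web API and the model name is an assumed SD1.5 softedge checkpoint, so verify both against your installed version:

```python
import base64


def build_txt2img_payload(map_path: str, prompt: str) -> dict:
    """Build a hypothetical /sdapi/v1/txt2img payload that feeds a
    bashed softedge map to ControlNet. Field names follow the
    sd-webui-controlnet API; the schema has changed between releases."""
    with open(map_path, "rb") as f:
        map_b64 = base64.b64encode(f.read()).decode("utf-8")
    return {
        "prompt": prompt,
        "steps": 20,
        "width": 768,
        "height": 512,
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "input_image": map_b64,
                    # The map is already a finished softedge image,
                    # so no preprocessor is applied.
                    "module": "none",
                    "model": "control_v11p_sd15_softedge",
                    "weight": 1.0,
                }]
            }
        },
    }
```

POSTing this dict as JSON to a running A1111 instance with `--api` enabled should render against the map, under the assumptions above.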

Comments • 155

  • @OlivioSarikas
    @OlivioSarikas  1 year ago +8

    #### Links from the Video ####
    Make Ads in A1111: ruclips.net/video/LBTAT5WhFko/видео.html
    Woman Sitting unsplash.com/photos/b9Z6TOnHtXE
    Goose unsplash.com/photos/eObAZAgVAcc
    Pillar www.pexels.com/photo/a-brown-concrete-ruined-structure-near-a-city-under-blue-sky-5484812/
    explorer: unsplash.com/photos/8tY7wHckcM8
    castle: unsplash.com/photos/8tY7wHckcM8
    mountains unsplash.com/photos/lSXpV8bDeMA
    Ruins unsplash.com/photos/d57A7x85f3w

    • @aeit999
      @aeit999 1 year ago

      Latent couple when?

    • @xiawilly8902
      @xiawilly8902 1 year ago

      looks like the explorer image and castle image are the same.

  • @ainosekai
    @ainosekai 1 year ago +74

    Sir, no need to check 'restore face', because if you use a 2.5D/animated base model, the face will look weird.
    Instead, you can use an extension named 'After Detailer'. It can fix your characters' faces flawlessly (based on your model), and it works perfectly with character (face) LoRAs. There are also models for it that fix hands/fingers and the body.
    Give it a try~

    • @hacknslashpro9056
      @hacknslashpro9056 1 year ago +1

      How do you put your own face into a picture generated in SD? Should we use inpaint or what? It needs to be the same style, though, with matching lighting.

    • @ryry9780
      @ryry9780 1 year ago +3

      As a birthday gift to my sister three months ago, I made a picture featuring her and one of her favorite characters.
      The way it worked was I trained models of both the character and my sister. My sister's models had to be done in two steps: first with IRL pictures, then with generated animated pictures.
      Once that was done, it was a matter of compositing them all together in one pic via OpenPose + Canny + Depth and hours of Inpainting, with a little Photopea.
      Took me 20 work-hours.
      Idk how much of this process has changed since Auto1111 is now at v1.3.2 and ControlNet at 1.1.

    • @samc5933
      @samc5933 1 year ago +1

      What are these “other models” that fix hands? If you can point me in the right direction, I’d be grateful!

    • @Feelix420
      @Feelix420 11 months ago

      @@samc5933 until ai learns to draw hands and feet i wouldn't worry so much about ai like Elon is now

    • @cleverestx
      @cleverestx 11 months ago +1

      ADetailer is amazing and comes standard on Vladmandic. It can be set to detect and fix hands as well if you choose the hand model instead of the face model, but only mildly; it's not as effective on hands as it is on faces, but it can still save a picture from time to time!

  • @jason-sk9oi
    @jason-sk9oi 1 year ago +13

    Tremendous human artistic control while maintaining the ai creativity as well. Nice!

    • @paulodonovanmusic
      @paulodonovanmusic 1 year ago

      Exactly. I think a lot of traditional artists, particularly those with at least basic desktop publishing skills (or basic doodling skills) would love how empowering this is. 1111 is such a wonderful art tool, it's a pity that it can be so technically challenging to get set up, I hope this gets solved soon and that the solution becomes more accessible to the unwashed masses.

    • @chickenmadness1732
      @chickenmadness1732 11 months ago

      @@paulodonovanmusic Yeah, it's very close to how a real concept artist for movies and games works.
      The main difference is they use a collage of photos to get a rough composition and then paint over it.

  • @Maria_Nette
    @Maria_Nette 1 year ago +6

    ControlNet gets even better with every new update.

    • @aeit999
      @aeit999 1 year ago +1

      It is. But this method is as old as ControlNet itself.

  • @neeqstock8617
    @neeqstock8617 1 year ago +24

    Tried it, and this is probably the most simple, creative, and effort-effective technique I've come across. It's so easy to edit edge maps, even with simple image editing software. Thank you Olivio! :D

  • @mikerhinos
    @mikerhinos 1 year ago +1

    This is amazing, as so often... one of the most underrated RUclips accounts for A1111 tutorials!

  • @jacque1331
    @jacque1331 11 months ago

    Olivio, you're a Rockstar! Been following you for a while. Extremely grateful to have found your channel.

  • @soothingtunes6780
    @soothingtunes6780 9 months ago

    You are a lot more amazing than Stable Diffusion XL, bro. What good is a tool if we don't have people like you to show us how to use it properly!!!

  • @eddiedixon1356
    @eddiedixon1356 1 year ago +1

    This is exactly what I was looking for. I still have a few things to piece together, but this was huge. Thank you so much for your time.

  • @akanekomi
    @akanekomi 1 year ago +3

    I have been using similar techniques for a while now, and the AI dance animations I make are a lot more complex. Glad you made a tutorial on this; I'll redirect anyone who asks for SD tutorials to your channel. Thanks Olivio❤❤

  • @boyanfg
    @boyanfg 10 months ago

    Hi Olivio! I am amazed at the master level at which you use the tools. Thank you for sharing this with us!

  • @BruceMorgan1979
    @BruceMorgan1979 1 year ago +1

    Fantastic and well-detailed video, Olivio. Looking forward to trying this.

  • @frostreaper1607
    @frostreaper1607 11 months ago

    Oh wow, this actually solves the composition and color issues, great find Olivio thanks !

  • @ronnykhalil
    @ronnykhalil 1 year ago

    this is brilliant! thanks for sharing. opens up so many possibilities, and also helps me grasp the infinitely vast world of controlnet a little better

  • @monteeaglevision5505
    @monteeaglevision5505 10 months ago

    You are a legend!!! Thank you sooooo much for this. Game changer. I will check back and let you know how it goes!

  • @ctrlartdel
    @ctrlartdel 9 months ago

    This is one of your best videos, and you have a lot of really good videos!

  • @ex0stasis72
    @ex0stasis72 11 months ago +3

    I'm so excited to use this technique. I was getting frustrated with the limitations of openpose not being detailed enough. But this soft edge thing looks really powerful as long as I'm willing to do a little manual photo editing beforehand.

  • @CCoburn3
    @CCoburn3 1 year ago +1

    Great video. I'm particularly happy that you used Affinity Photo to create your maps.

  • @travislrogers
    @travislrogers 1 year ago

    Amazing process! Thanks for sharing this!

  • @EllaIsSlayest
    @EllaIsSlayest 11 months ago

    I've been contemplating how best to bash up source images to create a final composition for SD rendering and this looks like a grand solution! Thanks for sharing.

  • @trickydicky8488
    @trickydicky8488 1 year ago +1

    Watched your live stream over this last night. Highly enjoyed it.

  • @mysterious_monolith_
    @mysterious_monolith_ 11 months ago

    That was incredible! I love what you do. I don't have ControlNET but if I could get it I would study your methods even more.

  • @Aisaaax
    @Aisaaax 8 months ago

    This is a great video! Thank you! 😮

  • @bjax2085
    @bjax2085 11 months ago

    Brilliant!! Thanks!

  • @AZTECMAN
    @AZTECMAN 1 year ago +2

    One very similar method I've been exploring is creating depth maps via digital painting.
    Additionally, I've experimented with using an inference-based map and then modifying it by hand to get more unusual results.
    Mixing 3D based maps (rendered), inference based (preprocessed), and digital painting methods, while utilizing img2img and multi-controlnet highlights the power of this tech.
    "Map Bashing" is a great term.

  • @luke2642
    @luke2642 1 year ago +15

    You could also use a background-removal step to preprocess each image, or, as others suggested, non-destructive masking when cutting them out.

    • @TorQueMoD
      @TorQueMoD 11 months ago +3

      You don't even need to do any sort of masking. When both images have a black background and white strokes, just set the top layers to Linear Dodge blend and they will seamlessly blend together.
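The Linear Dodge trick above can also be reproduced in a few lines of NumPy if you prefer to bash maps in code. Since the backgrounds are pure black, a per-pixel maximum (the Lighten blend) gives the same seamless merge as Linear Dodge without any clipping. A sketch; the file names in the usage comment are hypothetical:

```python
import numpy as np


def bash_maps(*maps: np.ndarray) -> np.ndarray:
    """Merge white-on-black ControlNet maps (e.g. softedge or canny).
    Linear Dodge (Add) clips at 255; because the backgrounds are pure
    black, a per-pixel maximum produces the same visual result with
    no risk of overflow."""
    out = maps[0].copy()
    for m in maps[1:]:
        out = np.maximum(out, m)
    return out


# Hypothetical usage with Pillow:
# from PIL import Image
# fg = np.array(Image.open("woman_softedge.png").convert("L"))
# bg = np.array(Image.open("ruins_softedge.png").convert("L"))
# Image.fromarray(bash_maps(fg, bg)).save("combined_map.png")
```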

  • @destructiveeyeofdemi
    @destructiveeyeofdemi 11 months ago

    Thorough brother.
    Peace and love from Cape Town.

  • @morizanova
    @morizanova 11 months ago

    Thanks.. smart trick to make the machine function as our helper, not just our overlord

  • @Braunfeltd
    @Braunfeltd 1 year ago

    Love your stuff, learning lots. this is awesome

  • @yadav-r
    @yadav-r 1 year ago

    wow, learned a new thing today. Thank you for sharing.

  • @aicarpool
    @aicarpool 1 year ago +2

    Who’s da man? You da man!

  • @heikohesse4666
    @heikohesse4666 1 year ago

    very cool video - thanks for it

  • @minhhaipham9527
    @minhhaipham9527 11 months ago +1

    Awesome, please make more videos like this. Thanks!

  • @dm4life579
    @dm4life579 11 months ago

    This will take my non-existent photo bashing skills to the next level. Thanks!

  • @spoonikle
    @spoonikle 1 year ago

    Holy smokes. This changes the flow

  • @ex0stasis72
    @ex0stasis72 11 months ago +1

    I recommend playing around with adding this to your positive prompt: "depth of field, bokeh, (wide angle lens:1.2)"
    Without the double quotes of course.
    Wide angle lens is a trick that allows the subject's face to take up more of the area on the image while still fitting in enough context of the area around the subject. And the more pixels you allow it to generate the face, the more details you'll get generally. Although, if you already have controlnet dictating the composition of the image, adding wide angle lens to your prompt will likely have no effect and therefore reduce the effectiveness of everything else in your prompt.
    Depth of field and bokeh are just some ways to make it feel like a photo shot professionally by a photographer, rather than by an average person with automatic camera settings.

  • @joywritr
    @joywritr 1 year ago +9

    This was very useful, thank you. I was considering drawing outlines over photos and 3D renders to do something similar, but using the masks generated by the AI should work as well and save a lot of time.

  • @ericvictor8113
    @ericvictor8113 1 year ago +1

    Incredible video, as always. Grats!

  • @accy1337
    @accy1337 1 year ago

    You are amazing!

  • @MadazzaMusik
    @MadazzaMusik 11 months ago

    Brilliant stuff

  • @Carolingio
    @Carolingio 1 year ago

    👏👏👏👏👏
    Nice, Thanks Olivio

  • @ZeroIQ2
    @ZeroIQ2 1 year ago

    this was really cool, thanks for sharing!

  • @ysy69
    @ysy69 1 year ago

    Beautiful

  • @coloryvr
    @coloryvr 1 year ago

    Super helpful as always! Big FAT FANX!

  • @WolfCatalyst
    @WolfCatalyst 1 year ago

    This was a great tutorial on affinity

  • @adastra231
    @adastra231 1 year ago

    wonderful

  • @Marcus_Ramour
    @Marcus_Ramour 9 months ago +1

    Brilliant video and thanks for sharing your workflow. I have been doing something similar but using blender & daz studio to build the composition first (although this does take a lot longer I think!).

  • @TheGalacticIndian
    @TheGalacticIndian 1 year ago

    I love it!♥♥

  • @jonmichaelgalindo
    @jonmichaelgalindo 1 year ago +5

    I've been using this for ages! ❤
    NOTE!: RevAnimated is *terrible* at obeying controlnet! (It is my favorite model for composition, but... I wouldn't use it like this.)
    I inpaint after the initial render. Same map bash controlnet, +inpaint controlnet (no image), inpaint her face w/ "face" prompt, pillar w/ "pillar" prompt, etc.
    No final full-image upscale; SD can't handle more than 3 large-scale concepts.
    You can get hires details in a 4k canvas by cropping a section, inpainting more detail, then blending the section back in w/ photoediting software. (This takes some extra lighting-control steps; there are tutorials on how to control lighting in SD.)

    • @foxmp1585
      @foxmp1585 9 months ago

      Could you clarify the "extra lighting-control steps" you mentioned? Is that the map we painted in black & white and then fed into the img2img tab?
      Thank you in advance!

    • @jonmichaelgalindo
      @jonmichaelgalindo 9 months ago

      @@foxmp1585 I barely remember my workflow from back then... SDXL is fantastic at figuring out what sketches mean in img2img. Right now, I block out a color paint sketch with a large brush, then run it through img2img with the prompt, then paint over the output, and run it through again and repeat, eventually upscaling and inpainting region by region with the same process. I have just about perfect control over composition, facial expressions, lighting, and style. :-)
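The "blend the section back in" step from the crop-and-inpaint workflow described above can also be scripted: paste the inpainted crop into the large canvas with a feathered alpha border so the seam disappears. A pure-NumPy sketch for grayscale arrays (function name and feather scheme are my own, not from the video):

```python
import numpy as np


def blend_patch(canvas: np.ndarray, patch: np.ndarray,
                y: int, x: int, feather: int) -> np.ndarray:
    """Paste an inpainted crop back into a larger canvas, linearly
    ramping its opacity over `feather` pixels at the patch border so
    the seam doesn't show. Operates on grayscale (H, W) uint8 arrays."""
    h, w = patch.shape
    # Alpha ramp: 0 at the patch edge, 1 once `feather` pixels inside.
    ry = np.clip((np.minimum(np.arange(h), h - 1 - np.arange(h)) + 1)
                 / (feather + 1), 0.0, 1.0)
    rx = np.clip((np.minimum(np.arange(w), w - 1 - np.arange(w)) + 1)
                 / (feather + 1), 0.0, 1.0)
    alpha = np.outer(ry, rx)
    out = canvas.astype(np.float32).copy()
    region = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = alpha * patch + (1.0 - alpha) * region
    return np.clip(out, 0, 255).astype(np.uint8)
```

For RGB images the same idea applies per channel (broadcast `alpha[..., None]`).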

  • @PhilippSeven
    @PhilippSeven 1 year ago +2

    Thank you for this technique! It's really useful. As advice from my side, I suggest using alternative methods for fixing faces (ADetailer, inpaint, etc.) instead of "restore faces". It uses one model for every face, and as a result the faces turn out too generic.

  • @ddiva1973
    @ddiva1973 10 months ago

    @14:43 mind blown 🤯😵🎉

  • @williamuria4048
    @williamuria4048 1 year ago

    WOW I like It!

  • @starmanmia
    @starmanmia 4 months ago

    Hello future me, remember to use IP-Adapter for faces and body, and keep ADetailer as a backup. Works well x

  • @rodrigoundaa
    @rodrigoundaa 11 months ago

    Amazing video, as usual! I'm still not getting where to do it. Is it local on your PC? Do you need a very powerful GPU? Or is it online?

  • @SergeGolikov
    @SergeGolikov 1 year ago +4

    Brilliant results, if a very convoluted workflow beyond all but the most dedicated; but as the saying goes, no pain, no gain 🍷
    Would it not be simpler to create the control maps right in Affinity Photo by using the FILTER/Detect Edges command on your source images? Just a thought.
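A filter-based Detect Edges is worth trying, but note that it is a classic gradient operator, whereas ControlNet's softedge preprocessors (HED/PiDiNet) are learned detectors that produce softer, cleaner lines, so the maps won't match exactly. A very rough gradient-magnitude stand-in in NumPy for experimenting:

```python
import numpy as np


def detect_edges(gray: np.ndarray) -> np.ndarray:
    """Rough stand-in for an image editor's Detect Edges filter:
    finite-difference gradient magnitude, scaled to 0-255.
    ControlNet's softedge preprocessors are learned models and will
    give noticeably softer results than this."""
    g = gray.astype(np.float32)
    gx = np.abs(np.diff(g, axis=1, prepend=g[:, :1]))
    gy = np.abs(np.diff(g, axis=0, prepend=g[:1, :]))
    mag = np.hypot(gx, gy)
    return np.clip(mag / (mag.max() + 1e-8) * 255, 0, 255).astype(np.uint8)
```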

  • @blood505
    @blood505 10 months ago

    Thanks for the video 👍

  • @mayalarskov
    @mayalarskov 1 year ago +1

    hi Olivio, the image of the castle has the same link as the explorer image. Great video!

  • @Grimmona
    @Grimmona 1 year ago +3

    I installed Automatic1111 last week and now I'm watching one video after another from you, so I can get ready to become an AI artist😁

  • @AlfredLua
    @AlfredLua 11 months ago

    Hi Olivio, thank you for the super cool video! Curious, if you were using a depth map instead of softedge for the woman, how would you edit it in Affinity to remove the background? It seems trickier for depth map since the background might be a shade of gray instead of absolute black. Thanks.

  • @glssjg
    @glssjg 1 year ago +41

    You need to familiarize yourself with masks in your image editor so that you're using a nondestructive process, instead of rasterizing and then resizing things, which loses quality; and if you erase things, you won't have a way to undo other than the undo button.

    • @theSato
      @theSato 1 year ago +20

      In a way, I agree with you. But honestly, the whole point of a workflow like this (and of AI/SD in general, I think) is that it's as quick/efficient as possible. Going in and using more "proper" methods like masking/mask management, more layers, etc. is nice, but it takes more time and more clicks, and for the purposes of making a quick map for ControlNet like this, it's likely not even worth bothering (in my opinion).

    • @glssjg
      @glssjg 1 year ago +18

      @@theSato I mean, once you learn to use masks it is so much quicker. For example, he had to resize the girl larger because he wanted to make sure the quality was best. If he used a mask, he could have just erased with a black paint brush (hit X to switch to a white brush to correct a mistake), or done the free-selection method and, instead of pressing delete, filled with the foreground color by hitting Option+Delete. It's a super small thing, as you said, but it will make your workflow faster, your mistakes less damaging (resizing a rasterized image over and over decreases its quality), and it will just make your images better.
      Sorry for writing a book; once you learn masks you will never not use them again.

    • @jonmichaelgalindo
      @jonmichaelgalindo 1 year ago +2

      I've found myself saving intermediate steps less and less. Something about AI just changes the way you feel about data. (Also, Infinite Painter doesn't have masks, and I can make great art just fine.)

    • @blakecasimir
      @blakecasimir 1 year ago +2

      @@theSato I agree with this. The bashing part of the process isn't so much about precision as giving SD a rough visual guide to what you want.

    • @theSato
      @theSato 1 year ago +8

      @@ayaneagano6059 I know how to use masks, don't get me wrong. But it's an unnecessary extra step when you're just trying to spend 30 seconds bashing some maps or elements together for SD/ControlNet. The precision is redundant, and I have no need to sit there and get it all just right.
      For purposes other than the one shown in the video, yes, use masks and it'll save time long term. But for the use in the video, it just costs more time when it's meant to be done quickly, and quality loss from resizing is irrelevant.

  • @novabk2729
    @novabk2729 11 months ago

    Super useful!!!!! thx

  • @TorQueMoD
    @TorQueMoD 11 months ago

    This is great! What's the AI program you're using called? It's obviously not Midjourney.

  • @kyoko703
    @kyoko703 1 year ago +1

    Holy bananas!!!!!!!!!!!!!!!!!

  • @KryptLynx
    @KryptLynx 11 months ago

    Those fingers, though :D

  • @EmilioNorrmann
    @EmilioNorrmann 1 year ago

    nice

  • @hngjoe
    @hngjoe 1 year ago

    Hi. Thanks for sharing your smart notes on every new thing. I really appreciate it. I have one question: after checking for updates in SD's extensions, the system responds that I have the latest ControlNet (caf54076 (Tue Jun 13 07:39:32 2023)). However, I can't find a softedge control model in the dropdown list, though I do have the softedge ControlNet type and preprocessor. What might be wrong?

  • @Kal-el23
    @Kal-el23 1 year ago

    It would be interesting to see what your outcome is without the maps, and just using the prompts as a comparison.

  • @yoavco99
    @yoavco99 1 year ago

    To fix faces automatically you can use the adetailer extension.

  • @merion297
    @merion297 1 year ago +1

    Cool! Now what if we make an animation using e.g. Blender, but only for the line art, then input each frame to ControlNet and generate the final animation frame by frame? I wonder when it will become so consistent that we can consider it a real animation.

  • @Pianist7137Gaming
    @Pianist7137Gaming 11 months ago

    For iOS users on iOS 16 and above, there's an easy way to crop out the image: transfer the image to your phone (Google Photos or something), save the image, then press and hold on the area you want captured. Tap share, save the image, then transfer it back to your PC.

  • @DJHUNTERELDEBASTADOR
    @DJHUNTERELDEBASTADOR 1 year ago

    That was my method for creating art 😊

  • @nspc69
    @nspc69 1 year ago +4

    It can be easier to fuse layers with an "additive" blend mode

  • @nsrakin
    @nsrakin 1 year ago

    You're a legend... Are you available on LinkedIn?

  • @NERvshrd
    @NERvshrd 11 months ago

    Have you watched the log while running hires fix with upscale by 1? I tried doing so as you noted, but it just ignores the process. On or off, no difference in output. Might just be because I'm using Vlad's fork. Worth double-checking, though.

  • @d1m18
    @d1m18 11 months ago

    This is very valuable content, but may I suggest you alter the title a bit? It is not very enticing to users who are not fully in the know about AI and prompts.
    Keep up the great work!

  • @gwcstudio
    @gwcstudio 1 year ago +1

    How do you control a scene with 2 people in it? Say, fighting. Do a map bash and then a colored version of the map with separate prompts?

  • @cryptobullish
    @cryptobullish 1 year ago

    Crazy cool! How can I retain the face if I wanted to use my own face? What’s the best prompt to use to ensure the closest resemblance? Thanks!

    • @wykydytron
      @wykydytron 1 year ago +2

      Make a LoRA of your face, then use ADetailer

  • @hugoruix_yt995
    @hugoruix_yt995 1 year ago

    Oh I see, I misunderstood. The name makes more sense now

  • @anim8or
    @anim8or 11 months ago

    What version of SD are you using? Have you upgraded to 2.0+? (If so do you have a video on how to upgrade?)

  • @shipudlink
    @shipudlink 1 year ago

    like always

  • @hakandurgut
    @hakandurgut 1 year ago +1

    It would have been much easier with Photoshop's Select Subject. I wonder if edge detection would do the same for soft edge

  • @honestgoat
    @honestgoat 1 year ago

    Great video Olivio. What extension or setting are you using that allows you @ 11:13 to select the VAE and clip skip right there on the txt2img page?

    • @forifdeflais2051
      @forifdeflais2051 1 year ago

      I would like to know as well

    • @addermoth
      @addermoth 1 year ago +1

      In Auto1111 go to settings, user interface, look down the page for "[info] Quicksettings list ". From there go to the arrow on the right and then highlight and check (A tick mark will appear) both 'sd_vae' and "CLIP_stop_at_last_layers". Restart the UI and they will be where Olivio has them. Hope that helped.

    • @forifdeflais2051
      @forifdeflais2051 1 year ago

      @@addermoth Thank you!

  • @springheeledjackofthegurdi2117

    could this be done all in automatic using mini paint?

  • @lsd250
    @lsd250 11 months ago

    Hi all, can someone answer a question for me?
    How much GPU do I need to run A1111? I'm mostly using Midjourney because I have a really old PC

  • @Shoopps
    @Shoopps 1 year ago

    I'm happy AI still struggles with hands.

  • @Shandypur
    @Shandypur 1 year ago

    There's a close button at the bottom right of the preview image. I feel a little anxiety that you didn't click it. haha

  • @rajendrameena150
    @rajendrameena150 11 months ago

    Is there any way to render the render elements inside a 3D application, like masking ID, Z-depth, ambient occlusion, material ID and different channels, to add information to Stable Diffusion for making more variations out of it?

    • @foxmp1585
      @foxmp1585 9 months ago

      Currently SD can properly read Z-depth (depth map), material ID (segmentation map) and normal maps.
      And it depends on the app of your choice (Blender, Max, Maya, C4D, ...).
      Each of these apps has its own way of rendering/exporting these maps; you'll need to find out yourself. It'll take time, but it's worth it!
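As a concrete example of one such pass, a raw Z-depth render (linear distance from the camera, e.g. exported as an EXR) can be converted to the 8-bit near-is-white convention the ControlNet depth model expects. A sketch that assumes you know the near/far clip range you rendered with:

```python
import numpy as np


def zdepth_to_controlnet(z: np.ndarray, z_near: float, z_far: float) -> np.ndarray:
    """Convert a raw Z-depth render pass (distance from camera) to the
    convention ControlNet's depth model expects: near = white,
    far = black, 8-bit. z_near/z_far are the clip range used for the
    render (an assumption; check your renderer's pass settings)."""
    z = np.clip(z.astype(np.float32), z_near, z_far)
    norm = (z - z_near) / (z_far - z_near)  # 0 at near plane, 1 at far plane
    return ((1.0 - norm) * 255).astype(np.uint8)
```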

  • @ValicsLehel
    @ValicsLehel 1 year ago

    It's OK to use A1111 to get the outline, but a Photoshop filter can do this too, and at any resolution. So I think this first step can be done with filters to get the outline picture and bash it. You could even do the mix roughly first and then apply the filter; it may not speed up the process, but you can see what you are doing more easily.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +1

      I don't think Photoshop has filters for depth maps, normal maps or OpenPose. And for the soft edge filter there is an option, but there are 4 options in ControlNet, and does the PS version look exactly the same as the ControlNet version?

  • @moomoodad
    @moomoodad 1 year ago

    How do you fix finger deformities, multiple fingers and bifurcation?

  • @MONTY-YTNOM
    @MONTY-YTNOM 1 year ago

    How do you see the 'quality' from that drop-down menu?

  • @electricdreamer
    @electricdreamer 1 year ago

    Can you do this with Invoke AI?

  • @TheElement2k7
    @TheElement2k7 1 year ago

    How did you get two tabs of ControlNet?

  • @bjax2085
    @bjax2085 11 months ago

    Still searching for this AI tool for comic book and children's book creators: 1. AI draws an actor using prompts. 2. Option to convert the selected character to a simple, clean 3D frame (no background); the character can be rotated. 3. The limbs, head, eyelids, etc. can be repositioned using many pivot points. 4. Then we can ask for the character to be completely regenerated using the face and clothing of the original. Once we are satisfied, we can save and paste the character into a background graphic.

  • @andu896
    @andu896 1 year ago

    Remove background first with AI or right click on Mac. Then do the depth maps.

  • @maxeremenko
    @maxeremenko 1 year ago +1

    The image is not generated from the mask I created, only based on the prompt. I have set all the settings as in the video. What could be the problem?

    • @jibcot8541
      @jibcot8541 1 year ago

      Have you clicked the "Enable" checkbox in the ControlNet panel? I often miss that!

    • @maxeremenko
      @maxeremenko 1 year ago

      @@jibcot8541 Thank you. Yes, I clicked on enable. Unfortunately, it keeps generating random results. It feels like I have something not installed.

    • @maxeremenko
      @maxeremenko 11 months ago

      @@jibcot8541 problem was solved by removing the segment-anything extension

  • @serena-yu
    @serena-yu 11 months ago

    Looks like rendering of hands is still the Achilles' heel.

    • @OlivioSarikas
    @OlivioSarikas  11 months ago +1

      Hands are just really hard to create and understand. Even for actual artists, this is one of the hardest things to create

  • @itchykami
    @itchykami 1 year ago

    Everyone wants to give bird wings. I might try using a peacock spider instead.

  • @emmanuele1986
    @emmanuele1986 11 months ago

    Why don't I have ControlNet on my Automatic1111?

    • @OlivioSarikas
      @OlivioSarikas  11 months ago

      Because that is an extension you need to install

  • @serizawa3844
    @serizawa3844 11 months ago

    0:01 six fingers ahushauhsuahsua

  • @robbasgaming7044
    @robbasgaming7044 1 year ago

    Can this be used for commercial use? The base is someone else's intellectual property 🤔