MULTIPLE CHARACTERS In ONE IMAGE WITH CONTROLNET & LATENT COUPLE! THIS IS SO FUN!

  • Published: 28 Sep 2024
  • Recently a brand-new extension for Stable Diffusion called Latent Couple was released, which lets you define specific zones of an image and assign each zone its own prompt, making it possible to generate multiple characters in different styles in a single generation, no inpainting required at all! And when you combine the extension with ControlNet and the Composable LoRA extension, you can generate multiple different characters in different styles and in different positions in one single image generation! This is so cool and so powerful! So in this video, I will show you how to install and use the Latent Couple extension and how to use it in combination with ControlNet and LatentCoupleHelper to get the best results possible! So let's go!
    Did you manage to generate multiple characters in 1 image? Let me know in the comments!
    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    SOCIAL MEDIA LINKS!
    ✨ Support my work on Patreon: / aitrepreneur
    ⚔️ Join the Discord server: bit.ly/aitdiscord
    🧠 My Second Channel THE MAKER LAIR: bit.ly/themake...
    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    Runpod: bit.ly/runpodAi
    Latent Couple Extension: github.com/opp...
    Composable Lora: github.com/opp...
    Latent Couple Helper : github.com/Zun...
    divisions=1:1,1:3,1:3,1:3 positions=0:0,0:0,0:1,0:2 weights=0.2,0.8,0.8,0.8 end at step=50
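    As a rough illustration (a hypothetical helper, not the extension's actual source code), the divisions/positions/weights strings above can be read as fractional image regions: each `rows:cols` entry in `divisions` sets a region's size, and the matching `row:col` entry in `positions` places it on the canvas.

    ```python
    # Hypothetical sketch of how Latent Couple's divisions/positions/weights
    # strings map to fractional image regions (not the extension's real code).

    def parse_regions(divisions: str, positions: str, weights: str):
        regions = []
        divs = divisions.split(",")
        poss = positions.split(",")
        wts = [float(w) for w in weights.split(",")]
        for div, pos, w in zip(divs, poss, wts):
            rows, cols = (float(v) for v in div.split(":"))
            row, col = (float(v) for v in pos.split(":"))
            # Each region as fractions of the full canvas: (x0, y0, w, h) in [0, 1].
            regions.append({
                "x0": col / cols, "y0": row / rows,
                "w": 1 / cols, "h": 1 / rows,
                "weight": w,
            })
        return regions

    # The settings from the description: one full-canvas region (the background)
    # plus three vertical thirds at columns 0, 1, and 2.
    regions = parse_regions(
        "1:1,1:3,1:3,1:3",
        "0:0,0:0,0:1,0:2",
        "0.2,0.8,0.8,0.8",
    )
    for r in regions:
        print(r)
    ```

    With these settings the first (whole-canvas) region gets a low weight so the three subject regions dominate their thirds of the image.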
    All ControlNet Videos: • ControlNet
    My previous ControlNet video: • GET PERFECT HANDS With...
    NEXT-GEN MULTI-CONTROLNET INPAINTING: • NEXT-GEN MULTI-CONTROL...
    CHARACTER TURNAROUND In Stable Diffusion: • CHARACTER TURNAROUND I...
    EASY POSING FOR CONTROLNET : • EASY POSING FOR CONTRO...
    3D Posing With ControlNet: • 3D POSING For PERFECT ...
    My first ControlNet video: • NEXT-GEN NEW IMG2IMG I...
    Special thanks to Royal Emperor:
    - Merlin Kauffman
    - Totoro
    Thank you so much for your support on Patreon! You are truly a glory to behold! Your generosity is immense, and it means the world to me. Thank you for helping me keep the lights on and the content flowing. Thank you very much!
    #stablediffusion #controlnet #aiart #stablediffusiontutorial
    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    WATCH MY MOST POPULAR VIDEOS:
    RECOMMENDED WATCHING - My "Stable Diffusion" Playlist:
    ►► bit.ly/stabled...
    RECOMMENDED WATCHING - My "Tutorial" Playlist:
    ►► bit.ly/TuTPlay...
    Disclosure: Bear in mind that some of the links in this post are affiliate links and if you go through them to make a purchase I will earn a commission. Keep in mind that I link these companies and their products because of their quality and not because of the commission I receive from your purchases. The decision is yours, and whether or not you decide to buy something is completely up to you.

Comments • 380

  • @depthbyvfx9604 · 1 year ago +28

    Literally every day I master one aspect of ControlNet and another appears. This space never ceases to amaze me with its daily advances.

    • @roybatty2268 · 1 year ago +3

      Master?? That's a bit of an exaggeration, no?

    • @depthbyvfx9604 · 1 year ago +6

      I get paid to do both prompt engineering and model training for bigger companies for concept art, so I have to learn each extension and maximize its usage. It can only be considered an exaggeration given how fast the technology updates. For example, this video hasn't been out for 24 hours, and there is already something called MultiDiffusion Region Control, an extra add-on to this that lets you sketch a mask in each section rather than using rectangular boxes/ratios. I spent the past 6 hours learning how it works and what its limitations are, and I'm in a lot of communities getting updates about this stuff. Anyway, if that was your focus in my comment, you missed the point of my awe at how fast all of this stuff keeps updating.

    • @robertgarcia4627 · 1 year ago

      @@depthbyvfx9604 which communities if you don’t mind? I’d like to master as much as I can and maybe in the future get a job in this

  • @thanksfernuthin · 1 year ago +3

    HOLY CRAP!!! Another HUGE advance. Now I can say a character has blue eyes without everything else being blue. Plus ALL the other things I can do. Fantastic!!!

  • @mkhernandez6181 · 1 year ago +21

    I think the tools are advancing but getting quite complex, so I hope some people can streamline ControlNet and Latent Couple soon. But this is barely year 1, so there is a lot of progress ahead!

    • @M.I.F.. · 1 year ago +1

      ...wait for Multidiffusion. Veeeery promising!

    • @ShawnFumo · 1 year ago +2

      I agree (this new tool makes sense to me, but only because I did table-based layouts for websites back in the day lol). But I bet it'll get there pretty soon. Leonardo and Mage (and probably other sites) seem like they're trying to wrap up functionality in a more user-friendly package. This is a perfect case where a good UI would help. Like you drag over a region of the image and it pops open a prompt and strength slider. You even could build in a full 3d posing tool with the hands/feet without having to go into another app and exporting/importing stuff. And of course would have a big library of predefined poses, etc.

  • @someguycalledcerberus9805 · 1 year ago +60

    OK, so I just spent like two hours getting this to work only to produce horrible abominations, and here are two very important tips:
    1) *If you are generating monsters fusing together:* The first subprompt (before the first AND) will be applied to the whole image (if you use divisions like in this video). This means that if you leave the settings like they are in this video and enter "a man AND a woman", you will smear the man over the whole image and then denoise the woman into the left side of the image, and do nothing for the right side (you entered only 2 subprompts). _You need as many subprompts as there are divisions. And you need to pay attention to the ratios of the divisions._
    2) *If you are generating a person who is cut in half and generating the other person on their other half:* Increase the width of the image. None of the models I tested were able to properly generate two full persons like this with the default 512 width. I assume that's because the models were trained on 512 images, meaning they try to adjust the size of a person to 512. If you halve this 512 width, you are not letting the model complete a full human.
    +1: You can leverage latent coupling with img2img and controlnets to better guide generation to what you are trying to achieve.
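    Tip 1 above boils down to a simple count check: the number of "AND"-separated subprompts has to match the number of division entries, with the first subprompt covering the whole canvas. A minimal sketch (hypothetical helper, not part of the extension):

    ```python
    # Sketch of tip 1: validate that the "AND"-separated subprompts match the
    # number of Latent Couple division entries (hypothetical helper function).

    def check_prompt(prompt: str, divisions: str) -> bool:
        subprompts = [p.strip() for p in prompt.split(" AND ")]
        n_regions = len(divisions.split(","))
        return len(subprompts) == n_regions

    # "a man AND a woman" against three regions (whole canvas + left + right)
    # has too few subprompts, which is what produces the smearing described above:
    assert not check_prompt("a man AND a woman", "1:1,1:2,1:2")
    # Adding a background subprompt for the full-canvas region fixes the count:
    assert check_prompt("forest background AND a man AND a woman", "1:1,1:2,1:2")
    ```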

    • @professorpenne9962 · 1 year ago +2

      but the best part is making horrible abominations, depending on how you look at it 😅

    • @__-fi6xg · 1 year ago +2

      yeah, it doesn't work for me either; it just mashes the LoRAs into one...

    • @hairlover8689 · 1 year ago +5

      I had problem 2); I solved it by increasing the width of my image to 1024 while keeping the height at 512. Now it produces two different characters as prompted. Thank you!

    • @Gins. · 1 year ago +3

      I just keep generating one person who is a mix of both characteristics... I have Latent Couple enabled, but not sure what's going on. Any advice?

    • @someguycalledcerberus9805 · 1 year ago +1

      @@Gins. If you divide the picture into one 100% region, one 50% left side, and one 50% right side, then you need to enter something like this:
      forest background AND black man AND white man
      You also need to set the aspect ratio to be wider. This will create a wide picture of a black man on the left and a white man on the right, with a forest in the background.
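      The layout described in this thread can be written out explicitly. Here is a hedged sketch (the dict is purely illustrative, not an extension API) of the full-canvas-plus-two-halves setup:

      ```python
      # Illustration of the layout from this thread: a full-canvas background
      # region plus left and right halves (hypothetical structure, not an API).
      layout = {
          "divisions": "1:1,1:2,1:2",   # whole canvas, then two half-width columns
          "positions": "0:0,0:0,0:1",   # background at origin; halves at columns 0 and 1
          "weights":   "0.2,0.8,0.8",   # background low, subjects high
          "prompt": " AND ".join([
              "forest background",      # first subprompt covers the whole image
              "black man",              # left half
              "white man",              # right half
          ]),
      }
      print(layout["prompt"])
      # → forest background AND black man AND white man
      ```

      Note the subprompt count (three) matches the division count, and the image should be generated wider than 512 so each half can fit a full person.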

  • @richardgreaney · 1 year ago +1

    This was such a good explanation of how this works. I've seen other tutorials on this before but none that actually explained it like yours did. I am going to have a lot of fun with this now.

  • @bigal1093 · 1 year ago +1

    Wow... my mind continues to be blown by how fast powerful tools are being created. Makes me really curious about where we will be by the end of the year!

  • @lilshadow3441 · 1 year ago +2

    Hello Ai Overlord K, has this extension been replaced by the "Regional Prompter manipulations"? Thanks!

  • @ImNotQualifiedToSayThisBut · 1 year ago

    Tried around with this a few days ago and was surprised by how well it worked. Did not know about the Latent Couple Helper though. Makes things a lot easier.

  • @NaThattitude · 1 year ago +1

    The possibilities are insane ! Thx for the tutorial.

  • @RolandT · 1 year ago

    Thank you for the video! Exactly what I have been looking for a long time! 😊

  • @morizanova · 1 year ago

    Thanks!!! I can see those extensions will be helpful for doing comic panels, more precise t-shirt merch, and even book cover mockups directly inside SD and A1111. Awesome!

  • @Ai_staBlitz · 1 year ago

    What a time to be alive!!!! Thanks for this 😍

  • @SirBSpecial · 9 months ago

    Would be super cool if the LoRA part came earlier, or if there were chapter divisions in the video time bar.

  • @sazarod · 1 year ago +4

    Dang, can't get it to work. It always gives me one subject ... even with the same settings and prompt as in the video. any ideas?

    • @Aitrepreneur · 1 year ago

      are you sure that you enabled the extension? Any errors?

    • @Gh0sty.14 · 1 year ago +4

      I'm getting the same results. Either one person or a mutated blend of the two. Once in a while I get what I prompted for.

    • @sazarod · 1 year ago +4

      @@Aitrepreneur it's enabled, no errors, just keeps merging everything together... so weird

    • @davidwootton7355 · 1 year ago +2

      Same for me, I have a bunch of other extensions installed, maybe there's a weird interaction between them.

    • @geffridamer9798 · 1 year ago +2

      same for me, maybe because of a low-end GPU? I have a 1060 6 GB

  • @Nathan-ib2rs · 1 year ago

    This is cool... that said, I feel like this at the back end of a painted rgb mask would be the next step.

  • @coda514 · 1 year ago

    Really cool. I'm gonna need a minute to digest this information. It blows my mind how far this technology has come in such a short time.

  • @skymaster__ · 1 year ago

    You are awesome! Thats exactly what I needed! Thank you so much!

  • @gameredan · 1 year ago +1

    Did you enable Composable LoRA? It didn't work on my setup. I already followed your steps all the way, but there's a frame skip where you didn't explain anything before generating the image.

  • @AltimaNEO · 1 year ago +1

    Question, since you can use AND for the positive prompt, can you also use AND in the negative prompt to give discrete negative details to the individual zones?

    • @ShawnFumo · 1 year ago

      I believe someone else mentioned the negative prompt is shared across zones, unfortunately

  • @dhanang · 1 year ago

    Man, your videos are incredible. Thank you!!

  • @vi6ddarkking · 1 year ago +6

    We are almost there.
    The ultimate version Of Stable Diffusion is almost Here.
    It will be a Blender Addon that will combine the recently released Blender Skeleton for MULTI-CONTROLNET that our AI overlord talked about.
    Combined with the next version of this which will allow us to assign a Prompt, Hypernetworks and Multi-Controlnets to each Skeleton and or "Control Meshs" and the Background.
    And once Text To 3D, AI Animation and Images to 3D are also inevitably implemented as Blender Addons The fusion of the 2D and 3D Workflows will be Complete.
    And with it The full democratization of animation.
    It Will be Glorious and at the rate we are going It will be here Sooner than we realize.

    • @thanksfernuthin · 1 year ago

      Blender is a program the vast majority of people won't be using. Those same types of tools will be included with WebUi etc. -- In my opinion.

  • @devouringon · 1 year ago +2

    Please help! I'm using SD 1.5 and I installed the Latent Couple extension from the list, yet no UI shows up. I tried wiping it out and reinstalling through the URL install option, but it still doesn't work. Any idea how to fix this?

  • @gkp2696 · 8 months ago +2

    Sadly this does not work for me in the slightest; I just keep getting half a body on one side and one girl on the other. Not sure why, as I followed everything to a T.

    • @dreamyrhodes · 8 months ago

      I have the same issues. There is a thread on Reddit explaining how to supposedly get it to work, but even after copying the settings it gives me the same merged person.

  • @81HM · 9 months ago

    I shouldn't have watched this before lunch. The pizza made me hungry.

  • @phaedon_Imperius · 7 months ago

    Is there any way to use this or any similar tool like Latent Couple Helper in Mac OS?? Thanks in advance for the good quality content

  • @thays182 · 10 months ago

    My Characters are blended together in the center of the image, split down the middle, half one type, half the other... How do i get two different figures in the image?

  • @ArielTavori · 1 year ago +1

    Thanks, been trying to figure this out!..
    Now if someone can integrate this with the segmentation ControlNet, and manual segmentation definition/coupling (basically just a "prompt brush"), I think we'll start seeing what the future of working with this tech is going be like...

    • @ShawnFumo · 1 year ago +1

      I think I saw on reddit that someone was working on it but code not released yet.

    • @Gh0sty.14 · 1 year ago +1

      @@ShawnFumo Check again, it just got released.

    • @Gh0sty.14 · 1 year ago +2

      Look for MultiDiffusion Region Control extension. It's what you're asking for I believe.

    • @ArielTavori · 1 year ago +1

      @@Gh0sty.14 wow, thanks, ill check it out, the pace this is moving is just wild!..

    • @Gh0sty.14 · 1 year ago

      @@ArielTavori Yeah it's incredible but so hard to keep up. This extension just released a few hours ago.

  • @RedSnow3567 · 1 year ago

    Latent Couple is not working for me.. I am enabling it but still only 1 character is generated. Please help

  • @RandomUser311 · 1 year ago +2

    Would be nice if you could just paint the zones in the web UI, e.g. in red, green and blue and then just reference them from the prompt via something like "{red:a man} {green:a woman} on some background...".

    • @Aitrepreneur · 1 year ago

      you can already do that with the segmentation map in controlnet :)

    • @ShawnFumo · 1 year ago +1

      @@Aitrepreneur but segmentation is limited to the classifications they predefined, isn't it? Or is there a way to customize each color to the extent that Latent Couple allows?

    • @Gh0sty.14 · 1 year ago +1

      You can now. MultiDiffusion Region Control extension does exactly that.

  • @Bookedtuyo · 1 year ago +1

    What if I don't have consistent results? Is that normal? Referring to Latent Couple, it is difficult for me to get 2 characters out. Sometimes it works, but most of the time it doesn't 🤧

    • @ShawnFumo · 1 year ago +1

      It might be easier using the pose controlnet at the same time

  • @KiritoxNemesis · 8 months ago

    Does it work with Stable Diffusion AUTOMATIC1111? I tried Composable LoRA but it didn't work; the LoRAs are still mixed together, not separated. Not sure where I went wrong.

  • @Chip10591 · 9 months ago

    just got into all this AI art generation stuff and this was very useful, have subscribed

  • @OGUNite · 1 year ago

    Can you add different seeds to go with the prompts? That would produce ultimate consistency

  • @dremwav1652 · 1 year ago

    Just discovered Stable Diffusion a few days ago which I thought was insane but this is fucking crazy

  • @krozareq · 1 year ago

    Really cool. Don't have to do a huge inpaint area on a completed image.

  • @gregkun1 · 11 months ago

    I can't seem to get this to work for me. I followed step by step and work on it for 3 days straight.

  • @thays182 · 2 months ago

    Has Regional Prompter replaced this?

  • @ranobe7518 · 1 year ago +1

    I installed the Latent Couple extension but it's not visible in my WebUI; CMD shows an error: ImportError: cannot import name 'CFGDenoisedParams' from 'modules.script_callbacks'.
    I read that some patch needs to be applied, but I don't know how to do it, since a Git patch can change some settings and break the normal setup...

    • @ShawnFumo · 1 year ago

      I got this at first and had to remove the encodings that were copied by default in the colab I was using. Didn't get a chance to figure out yet which encoding was causing the problem specifically

    • @ranobe7518 · 1 year ago

      @@ShawnFumo I see thanks for answer. I don't know what to remove since I use Windows 10. And all I see in the extension folder is this stable-diffusion-webui-two-shot and some git patch in it.

  • @RunoffRhythm · 1 year ago

    Been trying to make a half human face with half venom face without any luck, I hope this will finally make it work!

  • @fasyrunner7070 · 1 year ago +1

    Can someone help? I followed the tutorial step by step, but my Latent Couple doesn't show up 😥😥

  • @RicardoCampos-bs6fi · 2 months ago

    Does it work with SDXL and Forge? Thanks!

  • @xerxer9251 · 1 year ago

    Question please. what if I want them to be kissing for example? That division would make them separately and ruin the composition

  • @ixiTimmyixi · 1 year ago

    This is huge. Thanks for all your hard work. I never regret having the bell on.

  • @Solizeus · 2 months ago

    It isn't working here; it is still merging the characters =(. I will try it together with ControlNet later to see if that helps (it didn't; maybe it just doesn't work with AMD)

  • @milestrombley1466 · 1 year ago

    Now I can make a harem book cover!

  • @DrHojo123 · 10 months ago +1

    I seriously can never get latent couple working

  • @sadoshi · 1 year ago

    5:05 "weird position but pretty cool" 🤣DEAD

  • @mckachun · 1 year ago

    thank you~! many inspiration !

  • @kallamamran · 1 year ago

    Can the negative prompt be used the same way?
    Embeddings seem to be global and it's not possible to restrict them to specific image zones

  • @USBEN. · 1 year ago

    Holy diffusion

  • @geneoverride3725 · 1 year ago

    after following a lot of channels, my settings is a bit messed up. could you give us a screenshots of all the correct settings out of millions of settings in the web UI please? I tried your method, but my images are not as perfect as the ones you showed in the video.

  • @Hugh_Mungus · 1 year ago

    Thanks for the tutorial
    I installed it, enabled it, tried it but it won't work. Has anyone gotten this issue and figured how to solve it? It has no effect for me

  • @23dsin · 1 year ago

    Well, couldn't you already pose two or more people (anywhere you want) and replace them via inpainting? Anyway, it's a great new tool. Thanks for the video.

  • @Drone2222 · 1 year ago

    Is it possible to have characters/objects that intersect the different zones? Like people hugging, for example. I'm away from my SD computer for 3 weeks so I can't try anything!

  • @Rainbowsaur · 1 year ago

    Oh boy, does he know?

  • @flonnefallenangel · 1 year ago

    i am going to try this.... definitely xD

  • @JohnWick-dn6ty · 1 year ago

    Got stuck when trying to visualize in Latent Couple.

  • @오오와아아앙 · 1 year ago

    suuuuuuuuper cool again!!!!

  • @damird9635 · 1 year ago

    for what program is that extension? stupid question, i know, but.....

  • @yeastydynasty · 1 year ago

    Does this run off your PC?

  • @ramimelki4084 · 1 year ago

    All of this is cool and all, but there are some scripts or tabs that don't seem to work in Stable Diffusion, including these 2 new features; I get an error. Is it like that for anyone else?

  • @SageMinimalist · 1 year ago +1

    For some reason it's not working for me. (Error loading script: two_shot.py) Installed it 3 times but it doesn't work

    • @Aitrepreneur · 1 year ago +2

      check this out: github.com/opparco/stable-diffusion-webui-two-shot/issues/19

    • @SageMinimalist · 1 year ago +1

      applying this fixed the issue (thanks again K)
      git apply --ignore-whitespace extensions/stable-diffusion-webui-two-shot/cfg_denoised_callback-ea9bd9fc.patch

    • @Gwenyria · 1 year ago +1

      Hey, after nearly destroying my Stable Diffusion install I found a solution! For me the issue was not having the latest version of AUTOMATIC1111. To update, open webui-user.bat in Notepad and add "git pull" (without quotes) on a separate line above the line "call webui.bat". The update also broke my OpenPose editor (AUTOMATIC1111 would only display errors after pressing the "send to txt2img" button), so I reverted it to a previous version: go into the AUTOMATIC1111 extensions folder, then into the openpose editor folder, click in the file-path bar and type cmd to open a console there, then run: git checkout 7b8b58390c49bf26d20dbd04fd678955221541dc to revert to an older version that doesn't cause this error. Maybe this solution also works for others :D

  • @pongtrometer · 1 year ago

    so the regions are like layers ?

  • @unlimitedespair7634 · 1 year ago

    don't forget to do a "git pull" first if the option doesn't appear.

  • @danieljfdez · 1 year ago

    Really well explained! Congratulations! You are always able to make hard things as simple as possible! Thanks a lot

  • @thays182 · 10 months ago

    What's this extension called today?

  • @Darkbolt83 · 1 year ago

    I don't have the 3 tabs in controlnet do you know why? :)

    • @Portoli · 1 year ago

      you need to change your settings: in Settings > ControlNet there is an option "Multi ControlNet: Max models amount (requires restart)"; by changing it to 3 you will have more tabs

  • @therookiesplaybook · 1 year ago

    How do I get it to stop blending them together?

  • @Dr.R. · 1 year ago

    Thx!

  • @swagabrownie · 1 year ago

    someone please help, I've got the prompt and all settings just like in the video and I'm only getting one person.

    • @Braulio_Cyberyisus · 1 year ago

      Try adding 2girls in each prompt

    • @swagabrownie · 1 year ago

      @@Braulio_Cyberyisus strange, even with 2girls in the prompt it’s still merging into one character

  • @__-fi6xg · 1 year ago +1

    doesnt seem to work anymore

  • @roseydeep4896 · 1 year ago

    What about Colab? Can I install this via code? Just clone the repo?

  • @paulsheriff · 1 year ago

    Not sure what happened; it crashed my SD... time for a reinstall.

  • @DJHUNTERELDEBASTADOR · 1 year ago +1

    05:04 😂😅😂 Saludos Aitrepreneur!!! desde Bolivia

  • @speedeespeedboi9527 · 1 year ago

    All ive been doing with ai is shipping characters! But im staying with inpainting

  • @EdWingfield · 5 months ago +1

    AND does not work anymore. Too bad, it looked promising.

  • @Placid_Falcon · 1 year ago

    Nevermind. Like Bapt Iste said in the comments, YOU NEED COMPOSABLE LORA TO BE ENABLED.
    IT IS NOT ENABLED BY DEFAULT.
    If Aitrepreneur actually reads comments, please include a note in the video description. That's why this is not working for everyone.

  • @haidargzYT · 1 year ago

    Still no mention of the ControlNet problem where models reload every time you generate an image.
    After they added using 2 models at the same time, ControlNet is bugged.

    • @Aitrepreneur · 1 year ago

      I mean it's up to the devs to fix this not me :)

    • @ShawnFumo · 1 year ago

      Do you mean with the multiple LoRAs, or is there some way to use more than one model at the same time in the ui?

  • @SussyBacca · 1 year ago

    I didn't find it under "Latent Couple"; it appeared to me as "webui-two-shot"

    • @SussyBacca · 1 year ago

      FYI this was because my extensions folder was corrupted. I deleted it, restarted automatic1111, and it appeared correctly. It got corrupted because I upgraded python since installing it, I also needed to delete my venv folder as well. 😃

  • @TanvirsTechTalk · 1 year ago

    I don't get images like this; I get one half girl and one half guy. Very weird.

  • @brunnomenezes3346 · 1 year ago

    I started to get blurry images since yesterday. Nothing to do with the topic of this video, but can you help? Using VAE 84000, AbyssOrangeMix NSFW

  • @SparkofGeniusKTP · 1 year ago

    this just does not work for me at all

  • @metanulski · 1 year ago

    Can you tell us how to get the dark theme on auto1111?

    • @Bebunio007 · 1 year ago +1

      add COMMANDLINE_ARGS= --theme dark in webui-user.bat

    • @ElHongoVerde · 1 year ago

      Just put your browser on dark mode...

  • @androidgamerxc · 1 year ago

    i dont have that option of clip skip

    • @IceMetalPunk · 1 year ago

      It's in the Settings normally. You can add any settings you change often to the top bar.

  • @SoCalGuitarist · 1 year ago +45

    Wow, this is really fantastic! You could essentially create comic book panels with ease this way, with a separate prompt for each panel. Thanks for yet another great video!

    • @Aitrepreneur · 1 year ago +2

      Maybe yeah, would be interesting to try!

    • @ShawnFumo · 1 year ago +1

      That's an interesting idea in general. Like I wonder if anyone has tried the ControlNet with straight lines, but instead of using it for a room/building, lay out panels for a comic book prompt? Separate from Latent Couple, I wonder if that would work? Or I guess canny probably would if not.

    • @pladselsker8340 · 1 year ago +2

      The only problem with this is that you maybe get up to 300 dpi with this technique, which is absolutely horrible resolution for a manga panel. You can probably get around it with upscaling and inpainting, but then you hit a wall if your story contains original characters (because of consistency).
      This could be solved with loras, but I feel like you'd have to train a lot of them.
      I really can't wait for elite to come out as an extension, as it might be able to solve the consistency problem.

    • @juanjesusligero391 · 1 year ago

      @@pladselsker8340 Hey, what is that elite extension?

    • @ramlama9893 · 1 year ago +3

      You're probably still best served by generating each panel separately for now. Say you do six generations of each panel- you get to mix and match the best for each panel. If you try to generate them all at once, each generation will take significantly longer and the odds of all the panels being exactly what you want are honestly pretty low. There's definitely interesting potential, though- and definitely worth experimenting. It seems particularly promising with a style where one panel blends seamlessly into the next instead of having gutters.

  • @sinceredeku · 1 year ago +27

    It's crazy how fast Stable Diffusion outpaces all the paid services. That's the power of the people

  • @JohnDoe-hd9de · 1 year ago +7

    Unfortunately the installation doesn't work for me. The Composable LoRA tab is there; Latent Couple is not. Latent Couple is also no longer selectable in the extensions list, and via URL installation I get an error message that it already exists. Everything is up to date, and the folder is in the extensions folder as it should be. I restarted the web UI and the browser. Any ideas?

  • @Y0y0Jester · 1 year ago +5

    14:33 I have two remaining questions after watching the video. What about LoRAs trained on concepts or characters? Can I put one specific character on one side of the image and another one on the other? I ask because I've tried numerous times without any success; I'm getting the worst imaginable results. Secondly, I suppose this doesn't support textual inversion at all? I have some very clean, very well trained character embeddings, but none of the cool new stuff seems to spend time on textual inversion anymore. I wonder why? They are still pretty damn powerful, nothing has really changed in that regard. And they are like 1/250th the size. Is there maybe a way to convert an embedding to a LoRA so I can make the utmost use of what I already have? I'm begging you, if you have any information for me, please share. You would not believe how much I've dug for an answer.
    I will sum up for ease:
    - What about character LoRAs? Can we group together two, three, five specific characters in one prompt/image? I ask because my attempts failed.
    - Are text embeddings out of fashion? Why is no new tech supporting them? I see them on the same level as LoRAs.
    - Can I port my good textual inversions to a LoRA somehow without going through the whole training process all over again, lol?

    • @Gamerguy826 · 4 months ago

      I tried to use Controlnet and Latent couple and it kept fusing my two LORA characters together into a badly rendered hybrid one. Still trying to figure that out myself.
      If anyone knows how to use Latent couple in combination with separate LORAs any help would be appreciated.

  • @Snafu2346 · 1 year ago +10

    The Stable Diffusion space is evolving faster than I can keep up with it. Or so fast that I can't learn the previous new features and get good at them before something else comes out.
    Thing is once something comes out, and I watch an older video of it to catch up, there's been another video that has updated the previous feature. I kind of wish it would slow down a little bit, I still got to go to work in a few hours. 😆

    • @Aitrepreneur · 1 year ago

      Yeah I feel you :)

    • @laceycharizard2546 · 1 year ago

      Heck I'm still learning about merging checkpoints.

    • @SkyGeekWave · 1 year ago +1

      Oh man, I really understand you. I don't have time to learn in practice one function, 3 weeks later there's another, better one coming out. Or even UI elements can become slightly different or move somewhere else in some cases 😁

    • @Snafu2346
      @Snafu2346 1 year ago +1

      @@SkyGeekWave Yeah, at this rate, by the time I catch up to where it is now, Stable Diffusion may have already replaced the president.

    • @professorpenne9962
      @professorpenne9962 1 year ago +1

      It's growing very fast. I remember not even understanding how to generate multiple characters and thought it was impossible.

  • @SpikyRoss
    @SpikyRoss 1 year ago +13

    This is just insane, each day there is something new; when is this even gonna stop? 😳 Thanks as always for the tutorials!

    • @professorpenne9962
      @professorpenne9962 1 year ago +1

      Try taking personal photography and throwing it into a program like this with img2img; it's mind-blowing what can be generated. Dude, I took some personal photography that took days to shoot along the Erie Canal, threw it into img2img, and was blown away by what it came up with given the right prompts and checkpoints.

  • @shaman_ns
    @shaman_ns 1 year ago +112

    It’s scary how fast this entire space is improving

    • @F5alconsHouse
      @F5alconsHouse 1 year ago +15

      I was still working on learning Blender posing

    • @IceMetalPunk
      @IceMetalPunk 1 year ago

      It's amazing!

    • @sinceredeku
      @sinceredeku 1 year ago

      @@F5alconsHouse I think I'll just skip this and download models from other users xD That will save a lot of time, and I've never used Blender anyway.

    • @DicklessHipster
      @DicklessHipster 1 year ago +5

      The word you're looking for is "exciting."

    • @Skullivon
      @Skullivon 1 year ago +11

      This video is literally ALREADY out of date, now you can draw colored masks in whatever shapes you want instead of being stuck with rectangles.

  • @backster4744
    @backster4744 1 year ago +6

    Would you ever do a video on the Merge Block Weighted GUI extension? It allows more in-depth control over merging models than the stock 'Checkpoint Merger' UI in base Automatic1111, and it has pretty great potential.

    • @Aitrepreneur
      @Aitrepreneur  1 year ago +4

      I saw it, I need to try it out first

  • @ItsmeCoringa
    @ItsmeCoringa 1 year ago +4

    I installed it following this tutorial step by step, and even so my Latent Couple doesn't work: it's enabled, and I've even tried with this exact model and these settings, but it looks like it's not doing anything to the images. Anyone else have this problem?

  • @dreamyrhodes
    @dreamyrhodes 8 months ago +2

    This doesn't work at all. I can't understand how you managed to get the picture. I typed your settings and prompt 1:1 into my automatic1111 installation, and all I get is one merged person. I don't know what I'm missing, but it must be hidden so well that this is completely useless to me.

  • @GameUpOG
    @GameUpOG 8 months ago +2

    seems to be broken

  • @MrStatistx
    @MrStatistx 1 year ago +2

    Doesn't work for me. E.g. I write "beach AND man AND woman" (simplified example), and I get a beach background and then a horrible mishmash of a man-woman monster, merged in the middle.
    Same settings as in the video, and taking into account what the comment with the two tips said.
    Edit: Turns out the regional prompt extension (which I installed but haven't used and don't know how to set up properly yet) was enabled.
    Looks like it works now (with varying success at least).
    Edit 2: Nah, that seems to have been a coincidence. I still get horrible results 90% of the time; the other 10% at least show SOMETHING on both sides, but most often it does a closeup of one subject or ignores the split completely.
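
The "beach AND man AND woman" prompt in the comment above uses the separator the Latent Couple (two-shot) extension relies on. As a minimal sketch (not the extension's actual code), and assuming, as in the video's settings where the first division is 1:1 with a low weight, that the first sub-prompt covers the whole image while the rest map to the remaining regions:

```python
# Hedged sketch: Latent Couple splits the prompt on "AND" into one
# sub-prompt per configured region (first one = whole-image background).
prompt = "beach AND man AND woman"  # simplified example from the comment
subprompts = [p.strip() for p in prompt.split(" AND ")]
print(subprompts)  # → ['beach', 'man', 'woman']
```

Each sub-prompt must line up with one entry in the divisions/positions lists, so a prompt with three AND-separated parts needs three division entries.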

    • @MrStatistx
      @MrStatistx 1 year ago

      Now it's switched to basically ignoring the man and just making the beach and the woman (on the correct side; it just ignores the first sub-prompt).

  • @_Merchant_
    @_Merchant_ 1 year ago +2

    I can't seem to get this to work; enabling the extension and generating an image just results in a single character with aspects of both prompts merged together.

    • @Deejayronin
      @Deejayronin 1 year ago

      The problem is the image size: if you want 2 subjects, use a width of 1024; if you want 3, a width of 1536, and so on, because each character takes 512 pixels.
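
The width advice above can be made concrete with a hypothetical helper (the function name and parsing are mine, not the extension's API), assuming the "rows:cols" divisions and "row:col" positions convention shown in the video's settings (divisions=1:1,1:3,1:3,1:3, positions=0:0,0:0,0:1,0:2):

```python
# Hedged sketch (hypothetical helper, not the extension's code): map
# Latent Couple "rows:cols" divisions and "row:col" positions to pixel
# rectangles so you can see how wide each character's region ends up.
def region_rects(divisions, positions, width, height):
    rects = []
    for div, pos in zip(divisions.split(","), positions.split(",")):
        rows, cols = (float(v) for v in div.split(":"))
        row, col = (float(v) for v in pos.split(":"))
        w, h = width / cols, height / rows          # size of one grid cell
        rects.append((int(col * w), int(row * h), int(w), int(h)))  # x, y, w, h
    return rects

# Settings from the video, rendered at 1536x512:
print(region_rects("1:1,1:3,1:3,1:3", "0:0,0:0,0:1,0:2", 1536, 512))
# → [(0, 0, 1536, 512), (0, 0, 512, 512), (512, 0, 512, 512), (1024, 0, 512, 512)]
```

At a width of 1536, each of the three 1:3 columns is 512 px wide, which matches the 512-pixels-per-character rule of thumb in the comment above.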

  • @Nyxo1000
    @Nyxo1000 1 year ago +2

    Doesn't work for me.

  • @renegat552
    @renegat552 1 year ago +2

    Doesn't work for me.

  • @Aitrepreneur
    @Aitrepreneur  1 year ago +15

    HELLO HUMANS! Thank you for watching & do NOT forget to LIKE and SUBSCRIBE For More Ai Updates. Thx

    • @timduck8506
      @timduck8506 1 year ago

      Please can you post which versions of Stable Diffusion, the web UI, LoRA, etc. you're using? I just get errors on LoRA and DreamBooth. Or could you give us an install order with the versions used so we can replicate your install?

    • @anastasiaklyuch2746
      @anastasiaklyuch2746 1 year ago +2

      What if I installed the latest Latent Couple with Composable LoRA, and no Latent Couple section appeared in txt2img? Only the Composable LoRA one did.

    • @zeeshanzaffar1435
      @zeeshanzaffar1435 1 year ago

      @@anastasiaklyuch2746 same here, any solutions to this problem yet?

    • @zeeshanzaffar1435
      @zeeshanzaffar1435 1 year ago +1

      @@anastasiaklyuch2746 Never mind, I got it. Open cmd in the SD root dir, then paste the line below:
      git apply --ignore-whitespace extensions/stable-diffusion-webui-two-shot/cfg_denoised_callback-ea9bd9fc.patch
      Then restart SD, not just the UI.

    • @anastasiaklyuch2746
      @anastasiaklyuch2746 1 year ago

      @@zeeshanzaffar1435 It worked! Thank you, my heroic technomancer!