Stable Diffusion tutorial - How to use unlimited LoRA models in one image without inpainting

  • Published: 11 Nov 2024

Comments • 64

  • @ComplexTagret
    @ComplexTagret 1 year ago +6

    I usually use Regional Prompter for several LoRAs.

    • @SHKShark
      @SHKShark 1 year ago +1

      Can you explain to me how, please?

    • @GamerDadKain
      @GamerDadKain 7 months ago

      @@SHKShark Having issues with this as well, even using LoRA masks, ControlNet, regional prompting, and masking the regions.

    • @mehmetalirende
      @mehmetalirende 2 months ago

      And what about now, with Flux?

  • @Dante02d12
    @Dante02d12 1 year ago +2

    Thank you! It works brilliantly. The results are much better than Composable LoRA, and the extension is easy to use.
    But I couldn't get it to work with 4 or 5 characters. I'll try with longer images.
    EDIT: OK, I'm getting there. I realized my masks had white parts in them; there must not be any, because white counts as all channels at once. Everything must be black, red, green, or blue.
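
    A minimal Pillow sketch of checking the constraint just described (this is not part of the extension, and "mask.png" is a placeholder filename):

    # Verify a LoRA mask only contains pure black/red/green/blue pixels,
    # since white (255, 255, 255) would activate all channels at once.
    from PIL import Image

    ALLOWED = {(0, 0, 0), (255, 0, 0), (0, 255, 0), (0, 0, 255)}

    mask = Image.open("mask.png").convert("RGB")  # placeholder filename
    bad = {px for px in mask.getdata() if px not in ALLOWED}
    if bad:
        print(f"Found {len(bad)} disallowed colors, e.g. {sorted(bad)[:5]}")
    else:
        print("Mask only uses pure black/red/green/blue - OK")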

  • @SantoValentino
    @SantoValentino 1 year ago +1

    Dude, I'm so hyped to watch this

    • @SantoValentino
      @SantoValentino 1 year ago +2

      Your previous video did not work for me. I don't know why, but I will try this one.
      I think it's really cool that you are coding for us, and I appreciate it, but
      I may stick to inpainting since it is a quick and easy process.

  • @Ultimum
    @Ultimum 1 year ago +1

    I will need to play with more LoRAs soon!! Thank you again for your awesome video!

  • @ddrguy3008
    @ddrguy3008 1 year ago +2

    So, here's a question: one of the reasons you want a face LoRA to give a good image generation up front (without inpainting) is so that a) the facial structure is correct and minor inpainting can give little touch-ups, and b) the body type the LoRA learned is also correct. With clothing-style LoRAs, or any other LoRAs that would affect the body type, how would you set it up so that the body type and structure of the person LoRA is maintained, but the style is applied to that body type? Would you go, for example, mask 1 just the person LoRA and mask 2 just the clothing-style LoRA, with no mask color for the face? Because you'd need a second mask to effectively have two masks occupying the same space.

    • @life-is-boring-so-programming
      @life-is-boring-so-programming  1 year ago +1

      I think the problem is that if more than one LoRA model affects the same area of an image, that area gets too much influence and would appear to be broken.

  • @lygiaquan937
    @lygiaquan937 1 year ago +2

    Thanks for the amazing extension. However, on paper the extension should apply the exact LoRA to a given mask. But in the end we need to use Latent Couple for that to work. 🤔

    • @life-is-boring-so-programming
      @life-is-boring-so-programming  1 year ago +1

      Yes, I didn't need to use the Latent Couple extension in the last video. I guess that's because, without the Latent Couple extension, every pixel is affected by the text prompts. Therefore, to have more control over the area affected by the text prompts, we need to use the Latent Couple extension. Latent Couple is actually a mask in the denoised latent space: it basically masks out the areas we want and combines all of the masks together as the result at every sampling step.
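
      For illustration only, a minimal sketch of that idea (per-region denoising combined by downscaled masks at each step); the denoise_per_prompt callables are hypothetical stand-ins for the per-prompt sampling calls, not the extension's actual API:

      # Combine per-prompt denoised latents by their region masks at each step.
      import torch
      import torch.nn.functional as F

      def combine_regional_latents(latent, masks, denoise_per_prompt):
          # latent: (1, 4, H/8, W/8); masks: list of (H, W) float tensors in [0, 1]
          _, _, h, w = latent.shape
          combined = torch.zeros_like(latent)
          weight = torch.zeros_like(latent[:, :1])
          for mask, denoise in zip(masks, denoise_per_prompt):
              # Downscale the pixel-space mask to the latent resolution.
              m = F.interpolate(mask[None, None].to(latent), size=(h, w), mode="nearest")
              combined += denoise(latent) * m   # each prompt/LoRA handles its own region
              weight += m
          return combined / weight.clamp(min=1e-6)  # normalize any overlapping regions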

  • @fernandomasotto
    @fernandomasotto 1 year ago +1

    Great tutorial!! Thank you for sharing.

  • @Shirakawa2007
    @Shirakawa2007 11 months ago

    Has anybody managed to get this extension working when the LoRAs are inside subfolders? I keep all mine organized that way, but the extension can't find them, no matter what name or path I put in the text field.
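
    A hypothetical workaround, assuming the extension only scans the top level of the models/Lora folder (which this report suggests but does not confirm): expose nested LoRA files via symlinks at the top level.

    # Symlink LoRAs stored in subfolders to the top of models/Lora.
    from pathlib import Path

    lora_root = Path("models/Lora")  # adjust to your WebUI install
    for f in lora_root.rglob("*.safetensors"):
        link = lora_root / f.name
        if f.parent != lora_root and not link.exists():
            link.symlink_to(f.resolve())  # may need admin/developer mode on Windows
            print(f"linked {f} -> {link}")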

  • @wiama
    @wiama 1 year ago +2

    Can you please create a video on how to create an animation incorporating the Thin-Plate-Spline-Motion model + Depthmap script extension + EbSynth + Blender?

  • @2mnyshp
    @2mnyshp 3 months ago

    It kinda works, but then it also doesn't. I've created two masks for the two LoRAs I want to use: the left half of the screen is red (LoRA 1) and the right side is green (LoRA 2). I also enabled Latent Couple to again split the area into two.
    In the prompt I specified the keyword for the first person and then the second person. No matter what I try, the LoRAs still bleed into each other...
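
    A minimal Pillow sketch of generating the kind of two-region mask described here (left half pure red for LoRA 1, right half pure green for LoRA 2); the resolution and output filename are placeholders:

    # Build a two-region R/G LoRA mask with no white or blended pixels.
    from PIL import Image, ImageDraw

    width, height = 512, 512  # match your generation resolution
    mask = Image.new("RGB", (width, height), (0, 0, 0))
    draw = ImageDraw.Draw(mask)
    draw.rectangle([0, 0, width // 2 - 1, height - 1], fill=(255, 0, 0))      # left: R channel
    draw.rectangle([width // 2, 0, width - 1, height - 1], fill=(0, 255, 0))  # right: G channel
    mask.save("lora_mask_two_regions.png")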

  • @SabinaKaminska-oc5ed
    @SabinaKaminska-oc5ed 7 months ago

    AttributeError: 'StableDiffusionProcessingImg2Img' object has no attribute 'hr_upscale_to_y'
    How do I fix it?

  • @andranikmuradkhanyan
    @andranikmuradkhanyan 1 year ago +1

    Unfortunately, it seems not to work with scripts like Ultimate SD Upscale. The LoRA separation works fine in the initial generation and in a normal img2img upscale, but the LoRAs get garbled together in the tiles when upscaling with Ultimate SD Upscale. =(

  • @lorenzoruggiero9266
    @lorenzoruggiero9266 1 year ago +1

    Hello! Does the LoRA have to be put in a particular directory, or is it taken from the general Stable Diffusion LoRA directory? Also, do I have to put the name of the .safetensors file without the extension? Thanks 😀

    • @life-is-boring-so-programming
      @life-is-boring-so-programming  1 year ago

      LoRA models are put in the standard Stable Diffusion models/Lora folder instead of a separate extension folder.
      The LoRA dropdown box is replaced with an input text field; you can just copy the filename of the LoRA model into the field (i.e. you input libspgc for the file libspgc.safetensors).
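
      For reference, a small sketch (not part of the extension) that prints the names you would type into that field, assuming the default AUTOMATIC1111 folder layout:

      # List available LoRA names (file stems) under models/Lora.
      from pathlib import Path

      lora_dir = Path("models/Lora")  # assumption: default WebUI location
      for f in sorted(lora_dir.glob("*.safetensors")):
          print(f.stem)  # e.g. prints "libspgc" for libspgc.safetensors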

    • @lorenzoruggiero9266
      @lorenzoruggiero9266 1 year ago +1

      @@life-is-boring-so-programming Thanks a lot, I'll try this now!

  • @dan00-tf4um
    @dan00-tf4um 10 months ago

    Hello, may I ask if you have any plans to migrate and adapt this to ComfyUI?

  • @ddrguy3008
    @ddrguy3008 1 year ago +1

    This is an incredibly useful extension, so thanks so much for that. But I've also got a question/obstacle that could use some clarification or a potential solution: this is really great for getting multiple actors or styles into a single generation. However, it is very obvious that the closer in proximity/pixels different LoRAs are to one another, the more likely they are to blend somewhat. Is there a solution to this? For example, if I have two people/LoRAs standing very close together, they're going to look much more similar than if they were standing further apart. And this isn't even with overlapping gradients, meaning each LoRA is given its own space and not sharing pixels with any other LoRA. This is also with gradient mapping done correctly, so that's not the issue either. So, to recap: how can I make it so that (for example) two people standing close together look just as different as if they were standing far apart in the generation process? What is causing one LoRA to bleed into another despite not sharing the specific space/gradient map?

  • @BlueScorpioZA
    @BlueScorpioZA 1 year ago +2

    This seems similar to your previous video, and I have the same observations I had there - even looking at your thumbnail image, I can see the subjects have very similar facial features, particularly the eyes. So while there is a little bit of flexibility in not having the exact same character rendered twice, in your thumbnail it looks like the same person, and then that same person gender-swapped in a different costume.

    • @relaxation_ambience
      @relaxation_ambience 1 year ago

      Indeed, it looks like the same character spread across 4 people. But why is that? The author used 4 different LoRAs, but it looks like 1 LoRA...

    • @BlueScorpioZA
      @BlueScorpioZA 1 year ago +1

      @@relaxation_ambience In a nutshell, the technique doesn't really work. There seems to be some bleeding between the LoRAs, and the only way to have multiple, different-looking people in one image is either to inpaint the faces after the initial image has rendered (swapping the LoRA for each person), or to composite the various characters into the image with Photoshop, Affinity Photo, GIMP, or something similar. Failing that, you're going to end up with renders whose characters all look the same.

  • @ddrguy3008
    @ddrguy3008 1 year ago

    Hello again. Hope you're doing well. A bit of a long-shot question regarding this extension, which I'm still using (and probably will forever): is there any way to get the maps used to also be saved during the generation process? ControlNet lets you do this under settings by choosing a directory and then selecting the "Allow detectmap auto saving" checkbox, and it's incredibly useful for having a relatively small directory that you can reference in the future if you want to create something similar again. I'm guessing that for that to work with this extension you'd need to program those options in?

  • @EmilBogomolov-w3n
    @EmilBogomolov-w3n 1 year ago +1

    What tool do you use to draw PNG masks?

  • @lilillllii246
    @lilillllii246 11 months ago

    If I try to make a clothing LoRA, should I do the same as in the video? Is there any difference?

  • @liangliang2268
    @liangliang2268 9 months ago

    Hi buddy, are they available in ComfyUI? I mean both LoRA masks and Latent Couple.

  • @lijiang-g2s
    @lijiang-g2s 1 year ago +1

    I found a problem: when I try a three-person photo, the first LoRA is always reported as not found and doesn't load as a model. I switched positions and replaced the LoRA, with the same results.

    • @life-is-boring-so-programming
      @life-is-boring-so-programming  1 year ago

      I created something like that in a post before; you can find how I use three LoRA masks with Latent Couple here:
      github.com/lifeisboringsoprogramming/sd-webui-lora-masks/blob/main/images/example.02.png

    • @lijiang-g2s
      @lijiang-g2s 1 year ago +1

      @@life-is-boring-so-programming You misunderstand; I like your multiple LoRA masks very much and I followed your example, but the first LoRA is always reported as not found. What is wrong with it?

    • @life-is-boring-so-programming
      @life-is-boring-so-programming  1 year ago

      Did you get something like the output below in the console terminal? If the LoRA is found, it will be loaded; if it cannot be found, it will not be loaded. If it is loaded, does it need any trigger word?
      LoRA weight: 1, model: ErenAd
      dimension: {16}, alpha: {8.0}, multiplier_unet: 1, multiplier_tenc: 1
      create LoRA for Text Encoder: 72 modules.
      create LoRA for U-Net: 192 modules.
      original forward/weights is backed up.
      enable LoRA for text encoder
      enable LoRA for U-Net
      shapes for 0 weights are converted.
      LoRA model ErenAd loaded:
      LoRA weight: 1.1, model: mikasaAckermanLora_offset
      dimension: {128}, alpha: {128.0}, multiplier_unet: 1.1, multiplier_tenc: 1.1
      create LoRA for Text Encoder: 72 modules.
      create LoRA for U-Net: 192 modules.
      enable LoRA for text encoder
      enable LoRA for U-Net
      shapes for 0 weights are converted.
      LoRA model mikasaAckermanLora_offset loaded:
      LoRA weight: 0.9, model: Reiner
      dimension: {16}, alpha: {8.0}, multiplier_unet: 0.9, multiplier_tenc: 0.9
      create LoRA for Text Encoder: 72 modules.
      create LoRA for U-Net: 192 modules.
      enable LoRA for text encoder
      enable LoRA for U-Net
      shapes for 0 weights are converted.
      LoRA model Reiner loaded:
      setting (or sd model) changed. new networks created.
      use mask 1 image to control LoRA regions.
      apply mask 1. channel: R, model: ErenAd
      apply mask 1. channel: G, model: mikasaAckermanLora_offset
      apply mask 1. channel: B, model: Reiner

    • @lijiang-g2s
      @lijiang-g2s 1 year ago

      @@life-is-boring-so-programming What appears is this: Error running process_batch: E:\sd\sd-webui-aki-v4\extensions\sd-webui-lora-masks\scripts\lora_masks.py
      Traceback (most recent call last):
      File "E:\sd\sd-webui-aki-v4\modules\scripts.py", line 469, in process_batch
      script.process_batch(p, *script_args, **kwargs)
      File "E:\sd\sd-webui-aki-v4\extensions\sd-webui-lora-masks\scripts\lora_masks.py", line 179, in process_batch
      raise RuntimeError(f"model not found: {model}")
      RuntimeError: model not found:
      (Hint: the Python runtime threw an exception. Please check the troubleshooting page.)

    • @lijiang-g2s
      @lijiang-g2s 1 year ago

      I use the LoRA elsewhere, and it works.

  • @hinata_4044
    @hinata_4044 1 year ago +1

    Hii, great video. Can you please make a tutorial on how to merge LoRAs of different network dims, but now in Google Colab? Pleasee

  • @pengwenxuan8157
    @pengwenxuan8157 1 year ago

    Hi, thanks for this amazing extension! I am very interested in this method; may I ask how you actually use multiple LoRA models to generate a single harmonious image? I find it quite hard to read the UI code directly... can you maybe explain a bit~~~

  • @MrSongib
    @MrSongib 1 year ago +1

    I think your AI speech setting is a bit too fast here; it's a bit jumpy as well, idk how to explain it. xd (nvm, it slows down at the end) xdd
    Nice content. Ty

  • @dervlex4500
    @dervlex4500 9 months ago

    Doesn't work anymore... use mask 1 image to control LoRA regions.
    *** Error running process_batch: C:\Users\dervl\stable-diffusion-webui NEU NEU\extensions\sd-webui-lora-masks\scripts\lora_masks.py
    Traceback (most recent call last):
    File "C:\Users\dervl\stable-diffusion-webui NEU NEU\modules\scripts.py", line 742, in process_batch
    script.process_batch(p, *script_args, **kwargs)
    File "C:\Users\dervl\stable-diffusion-webui NEU NEU\extensions\sd-webui-lora-masks\scripts\lora_masks.py", line 235, in process_batch
    hr_height=p.hr_upscale_to_y, hr_width=p.hr_upscale_to_x)
    AttributeError: 'StableDiffusionProcessingImg2Img' object has no attribute 'hr_upscale_to_y'

  • @odakyuodakyu6650
    @odakyuodakyu6650 1 year ago

    How much VRAM does the LoRA mask feature use up?

  • @ianfan4224
    @ianfan4224 1 year ago

    It seems to have a little problem... it keeps saying "AttributeError: module 'modules.shared' has no attribute 'walk_files'". I don't know what I did wrong...

    • @life-is-boring-so-programming
      @life-is-boring-so-programming  1 year ago

      I will have a look, thanks

    • @life-is-boring-so-programming
      @life-is-boring-so-programming  1 year ago

      I just tested on the latest github.com/AUTOMATIC1111/stable-diffusion-webui and it worked.
      Perhaps you can upgrade your WebUI and try again.
      version: v1.4.0  •  python: 3.10.9  •  torch: 2.0.1+cu118  •  xformers: 0.0.20  •  gradio: 3.32.0  •  checkpoint: 6ce0161689

    • @legionarioromano4436
      @legionarioromano4436 3 months ago

      @@life-is-boring-so-programming Hi bro, can you do this extension for Forge too?

  • @omegablast2002
    @omegablast2002 1 year ago +2

    Wasn't this already possible with just Latent Couple alone?

    • @life-is-boring-so-programming
      @life-is-boring-so-programming  1 year ago +3

      Without the LoRA masks, putting every LoRA model in the text prompt gets a result like this:
      github.com/lifeisboringsoprogramming/sd-webui-lora-masks#result-without-lora-masks
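
      For context, that "no masks" baseline corresponds to stacking every LoRA directly in one AUTOMATIC1111 prompt; a hypothetical example (names and weights are placeholders):

      # Every LoRA in one prompt, no region masks - this is what tends to blend the characters.
      prompt_without_masks = (
          "2 people standing together, "
          "<lora:characterA:1.0> <lora:characterB:1.0>"
      )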

  • @fadoo999
    @fadoo999 1 year ago +2

    The result is bad lmao

    • @brutiaut79
      @brutiaut79 1 year ago +1

      Because he is using the base model ^^