Relight anything with IC-Light in Stable Diffusion - SD Experimental

  • Published: 1 Jan 2025

Comments •

  • @UnclePapi_2024 · 7 months ago

    Andrea, I really enjoyed your live stream and your interaction with those of us who were with you. However, this follow-up on the node, the technical aspects, and your insight as a photographer is outstanding. Excellent work!

    • @risunobushi_ai · 7 months ago

      Thank you! I’m glad to be of help!

  • @xxab-yg5zs · 7 months ago

    Those videos are great, please keep them coming. I'm totally new to SD and Comfy; you actually make me believe it can be used in a professional, productive way.

    • @risunobushi_ai · 7 months ago

      It can definitely be used as a professional tool, it all depends on the how!

  • @aysenkocakabak7703 · 12 days ago

    Great, I learned a lot. I feel so good about it :)

  • @JohanAlfort · 7 months ago

    Nice insight into this new workflow, super helpful as usual :) This opens up a whole lot of possibilities! Thanks and keep it up.

    • @risunobushi_ai · 7 months ago

      Yeah it does! I honestly believe this is insane for product photography.

  • @aynrandom3004 · 7 months ago +1

    Thank you for explaining the actual workflow and the function of every node. I also like the mask editor trick. Just wondering why some of my images also change after the lighting is applied? Sometimes there are minimal changes to the eyes, face, etc.

    • @risunobushi_ai · 7 months ago +2

      Thanks for the kind words. To make it easier to understand: the main issue with prompt adherence lies in the CFG value. Usually you'd want a higher CFG value in order to get better prompt adherence. Here, instead of words in the prompt, we have an image being "transposed" via what I think is an instruct pix2pix process on top of the light latent.
      Now, I'm not an expert on instruct pix2pix workflows, since they came out at a time when I was tinkering with other AI stuff, but from my (limited) testing, it seems like the lower the CFG, the more the resulting image adheres to the starting image. In some cases, as we'll see today on my livestream, a CFG around 1.2-1.5 is needed to preserve the original colors and details.
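
      To make the CFG behavior concrete outside ComfyUI, here is a minimal
      Python sketch using the public instruct pix2pix pipeline from diffusers
      (an assumption for illustration: IC-Light itself patches an SD 1.5 UNet,
      but guidance_scale plays the same role as the CFG value discussed above;
      file names are hypothetical):

        import torch
        from diffusers import StableDiffusionInstructPix2PixPipeline
        from diffusers.utils import load_image

        pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
            "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
        ).to("cuda")

        source = load_image("product.png")  # hypothetical input image

        # A low guidance_scale (CFG ~1.2-1.5) keeps the result close to the
        # source image; higher values follow the text prompt more and drift.
        result = pipe(
            "soft window light from the left",
            image=source,
            guidance_scale=1.3,        # the CFG value discussed above
            image_guidance_scale=1.5,  # how strongly to stick to the input
        ).images[0]
        result.save("relit.png")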

    • @aynrandom3004 · 7 months ago

      @@risunobushi_ai thank you! Lowering the CFG value worked. :D

  • @pranavahuja1796 · 7 months ago +1

    Things are getting so exciting🔥

  • @uzouzoigwe · 7 months ago

    Well explained and super useful for image composition. I expect that a small hurdle might come up with reflective/shiny objects...

    • @risunobushi_ai · 7 months ago

      I'll be honest, I haven't tested it yet with transparent and reflective surfaces; now I'm curious about it. But I expect it to have some issues with them for sure.

  • @KeenHendrikse · 5 months ago

    Thank you for this video, it was really helpful. There are a few undefined nodes in the workflow; do you have any advice as to how I can fix this?

    • @risunobushi_ai · 5 months ago

      Hi! Did you try installing the missing custom nodes via the manager?

  • @zeeyannosse · 7 months ago

    BRAVO! Thanks for sharing! Super interesting development!

  • @houseofcontent3020 · 7 months ago

    This is a great video! Thanks for sharing the info.

  • @daryladhityahenry · 4 months ago

    Hi! Can you tell me how you keep the product the same? I mean, I see the bag in the last couple of minutes, and you didn't use anything like ControlNet etc., but the product is the same before and after lighting... How? @_@... Thank you

    • @risunobushi_ai · 4 months ago

      This is how IC-Light works. At its core, it's an instruct pix2pix pipeline, so the subject is always going to stay the same, although in more recent videos I solve issues like color shifting, detail preservation, etc. by using things like ControlNets, color matching nodes, etc.
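
      As a reference for what a color matching node does, here is a minimal
      sketch (an assumption for illustration: a simple mean/std transfer in
      LAB space with OpenCV, one common way such nodes work; file names are
      hypothetical):

        import cv2
        import numpy as np

        def match_color(relit_path, source_path, out_path):
            relit = cv2.cvtColor(cv2.imread(relit_path), cv2.COLOR_BGR2LAB).astype(np.float32)
            source = cv2.cvtColor(cv2.imread(source_path), cv2.COLOR_BGR2LAB).astype(np.float32)
            # Shift each LAB channel of the relit image to the source's
            # mean/std, pulling colors back toward the original product.
            for c in range(3):
                r_mean, r_std = relit[..., c].mean(), relit[..., c].std()
                s_mean, s_std = source[..., c].mean(), source[..., c].std()
                relit[..., c] = (relit[..., c] - r_mean) / max(r_std, 1e-6) * s_std + s_mean
            out = np.clip(relit, 0, 255).astype(np.uint8)
            cv2.imwrite(out_path, cv2.cvtColor(out, cv2.COLOR_LAB2BGR))

        match_color("relit.png", "original.png", "color_matched.png")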

    • @daryladhityahenry · 4 months ago

      @@risunobushi_ai That's what makes me confused. When I do that, the product still changes... Does it depend on the checkpoint model too?

  • @dreaminspirer · 7 months ago

    I would SEG her out of the close-up, then draft composite her onto the BG. This probably reduces the color cast :)

    • @risunobushi_ai · 7 months ago

      Yup, that's what I would do too. And maybe use a BW light map based on the background, remapped to low-ish white values, as a light source.
      I've been testing a few different ways to solve the background-as-a-light-source issue, and what I've found so far is that the base, non-background solution is so good that the background option is almost not needed at all.
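
      A minimal sketch of that light map idea (an assumption for illustration:
      PIL/numpy, reading "low-ish white values" as capping the grayscale map
      well below pure white; file names are hypothetical):

        import numpy as np
        from PIL import Image

        bg = Image.open("background.png").convert("L")  # grayscale background
        gray = np.asarray(bg).astype(np.float32) / 255.0

        # Remap 0..1 grayscale into a dim 0..0.35 range so the background
        # acts as a soft light source instead of blowing out the subject.
        light_map = (gray * 0.35 * 255).astype(np.uint8)
        Image.fromarray(light_map, mode="L").save("light_map.png")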

  • @PierreGrenet-ty4tc · 7 months ago

    This is a great tutorial, thank you! ...but how do I use IC-Light with the SD web UI? I have just installed it but it doesn't appear anywhere 😒😒 Could you help?

    • @risunobushi_ai · 7 months ago

      Uh, I was sure there was an automatic1111 plugin already released; I must have misread the documentation here: github.com/lllyasviel/IC-Light
      Have you tried the gradio implementation?

  • @JavierCamacho · 7 months ago

    Sorry to bother you, I'm stuck in ComfyUI. I need to add AI people to my real images. I have a place where I need to add people, to make it look like there's someone there and not an empty space. I've looked around but came up short. Can you point me in the right direction?

    • @risunobushi_ai · 7 months ago +1

      Hey! You might be interested in something like this: www.reddit.com/r/comfyui/comments/1bxos86/genfill_generative_fill_in_comfy_updated/

    • @JavierCamacho · 7 months ago

      @@risunobushi_ai I'll give it a try. Thanks

    • @JavierCamacho · 7 months ago

      @@risunobushi_ai so I tried running it but I have no idea what I'm supposed to do. Thanks anyway.

  • @Architectureg · 7 months ago

    How do I make sure the input picture doesn't change in the output? It seems to change. How can I keep it exactly the same and just manipulate the light instead?

    • @risunobushi_ai · 7 months ago

      My latest video is about exactly that: I added both a way to preserve details through frequency separation and three ways to color match.
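
      For anyone curious, the underlying idea of that frequency separation
      step looks roughly like this minimal sketch (an assumption for
      illustration: a Gaussian-blur split with OpenCV rather than the actual
      ComfyUI nodes; both images must be the same size, file names are
      hypothetical):

        import cv2
        import numpy as np

        original = cv2.imread("original.png").astype(np.float32)
        relit = cv2.imread("relit.png").astype(np.float32)

        # Low frequencies (overall light and color) come from the relit
        # image; high frequencies (texture, fine detail) from the original.
        low = cv2.GaussianBlur(relit, (0, 0), 5)
        high = original - cv2.GaussianBlur(original, (0, 0), 5)

        merged = np.clip(low + high, 0, 255).astype(np.uint8)
        cv2.imwrite("detail_preserved.png", merged)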

  • @QuickQuizQQ · 1 month ago

    Super useful in the product industry. Quick question please:
    got prompt
    Failed to validate prompt for output 243:
    * CheckpointLoaderSimple 82:
    - Value not in list: ckpt_name: 'epicrealism_naturalSinRC1VAE.safetensors' not in ['epicrealism_naturalSin.safetensors']
    * ControlNetLoader 215:
    - Value not in list: control_net_name: 'control_sd15_depth.pth' not in []
    * ControlNetLoader 316:
    - Value not in list: control_net_name: 'control_v11p_sd15_lineart.pth' not in []
    * ImageResize+ 53:
    - Value not in list: method: 'False' not in ['stretch', 'keep proportion', 'fill / crop', 'pad']
    Output will be ignored
    Failed to validate prompt for output 269:
    * CheckpointLoaderSimple 2:
    - Value not in list: ckpt_name: 'epicrealism_naturalSinRC1VAE.safetensors' not in ['epicrealism_naturalSin.safetensors']
    * LoadAndApplyICLightUnet 37:
    - Value not in list: model_path: 'iclight_sd15_fc.safetensors' not in []
    Output will be ignored
    Failed to validate prompt for output 291:
    Output will be ignored
    Failed to validate prompt for output 220:
    Output will be ignored
    Failed to validate prompt for output 76:
    Output will be ignored
    Failed to validate prompt for output 270:
    Output will be ignored
    Failed to validate prompt for output 306:
    Output will be ignored
    Failed to validate prompt for output 212:
    Output will be ignored
    Failed to validate prompt for output 225:
    Output will be ignored
    Failed to validate prompt for output 230:
    Output will be ignored
    I'm getting this error, and I think the reason is that I didn't install the IC-Light model. I've installed it now; should I install the ControlNet models too?
    Thank you

  • @mohammednasr7422 · 7 months ago

    Hi dear Andrea Baioni,
    I am very interested in mastering ComfyUI and was wondering if you could recommend any courses or resources for learning it. I would be very grateful for your advice.

    • @risunobushi_ai · 7 months ago +1

      Hey there! I'm not aware of paid ComfyUI courses (and I honestly wouldn't pay for them, since most, if not all, of the information needed is freely available either here or on GitHub).
      If you want to start from the basics, you can start either here (my first video, about installing ComfyUI and running your first generations): ruclips.net/video/CD1YLMInFdc/видео.html
      or look up a multi-video basics course, like this playlist from Olivio: ruclips.net/video/LNOlk8oz1nY/видео.html

  • @caseyj789456 · 2 months ago

    Yeah, we know... just waiting for this for SDXL... 😅

  • @cycoboodah · 7 months ago

    The product I'm relighting changes drastically. It basically keeps the shape but introduces too much latent noise. I'm using your workflow without touching anything, but I'm getting very different results.

    • @risunobushi_ai · 7 months ago +1

      That's weird; in my testing I sometimes get some color shift, but most of the time the product remains the same. Do you mind sending me the product shot via email at andrea@andreabaioni.com? I can run some tests on it and check what's wrong.
      If you don't want to or can't share the product, you could give me a description and I could try generating something similar, or look on the web for something similar that already exists.

    • @risunobushi_ai · 7 months ago

      Leaving this comment in case anyone else has issues: I tested their images and it works on my end. It just needed some work on the input values, mainly CFG and multiplier. In their setup, for example, a lower CFG (1.2-ish) was needed in order to preserve the colors of the source product.

  • @user-de8nc3hx4u · 3 months ago

    Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 90, 160] to have 4 channels, but got 8 channels instead
    What's going on? It worked normally before.

    • @risunobushi_ai · 3 months ago +1

      Update kijai's IC-Light repo; it should solve the issue (it's most probably because you updated Comfy).
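
      For context on the shapes in that error: IC-Light concatenates a
      conditioning latent with the noise latent, so the patched UNet's first
      conv takes 8 input channels instead of SD 1.5's usual 4. A minimal
      torch sketch of that patch (an illustration of the idea, not kijai's
      actual code):

        import torch
        import torch.nn as nn

        old_conv = nn.Conv2d(4, 320, kernel_size=3, padding=1)  # stock SD 1.5 conv_in
        new_conv = nn.Conv2d(8, 320, kernel_size=3, padding=1)  # IC-Light conv_in

        with torch.no_grad():
            new_conv.weight.zero_()
            new_conv.weight[:, :4] = old_conv.weight  # reuse original weights
            new_conv.bias.copy_(old_conv.bias)

        # The error above means an 8-channel input hit a conv still shaped
        # [320, 4, 3, 3], i.e. the IC-Light weights were never merged in.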

  • @StringerBell · 7 months ago +4

    Dude, I love your videos but this ultra-closeup shot is super uncomfortable to watch. It's like you're entering my personal space :D It's weird and uncomfortable but not in the good way. Don't you have a wider lens than 50mm?

    • @risunobushi_ai · 7 months ago +2

      The issue is that I don't have any more space behind the camera to compose a different shot, and if I use a wider angle, some parts of the room I don't want to share come into view. I'll think of something for the next ones!

  • @yangchen-zd9zl · 7 months ago

    Hello, I am a ComfyUI beginner. When I used your workflow, I found that the light and shadow cannot be previewed in real time, and when the light and shadow are regenerated onto a previously generated photo, generation is very slow and the system reports an error: WARNING SHAPE MISMATCH diffusion_model.input_blocks.0.0.weight WEIGHT NOT MERGED torch.Size([320, 8, 3, 3]) != torch.Size([320, 4, 3, 3])

    • @risunobushi_ai · 7 months ago

      Sorry, but I'll have to ask a few questions. What OS are you on? Are you using an SD 1.5 model or an SDXL model? Are you using the right IC-Light model for the scene you're trying to replicate (fbc for background relight, fc for mask-based relight)?

    • @yangchen-zd9zl · 7 months ago

      @@risunobushi_ai Sorry, I found the key to the problem. First, I did not watch the video tutorial carefully and missed downloading fbc. Second, it was an image size problem. After downloading fbc and adjusting the image size (512 × 512 pixels), the generation efficiency is much higher. Thank you very much for this video. In addition, I would like to ask: if I want to add other products to this workflow, that is, product + background with light-source fusion, what should I do?

    • @risunobushi_ai · 7 months ago +1

      I cover exactly that (and more) in my latest live stream from yesterday!
      I demonstrate how to generate an object (but you can just use a load image node with an already existing picture), use Segment Anything to isolate it, generate a new background, merge the two together (see the sketch after this reply), and relight with a mask so that it looks both more consistent and better lit than just using the optional background option in the original workflow.
      For now, you'd need to follow the process in the livestream to achieve it. In a couple of hours I will update the video description with the new workflow, so you can just import it.
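
      The "merge the two together" step from that process, as a minimal
      sketch (an assumption for illustration: PIL, with a subject cutout
      whose alpha channel comes from the segmentation step; file names are
      hypothetical):

        from PIL import Image

        background = Image.open("new_background.png").convert("RGBA")
        subject = Image.open("subject_cutout.png").convert("RGBA")  # alpha from SAM

        # Paste the subject over the background using its own alpha as the
        # mask; the composite (and the mask) then go into the relight pass.
        composite = background.copy()
        composite.paste(subject, (0, 0), mask=subject)
        composite.convert("RGB").save("composite.png")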

    • @yangchen-zd9zl · 7 months ago

      @@risunobushi_ai Thank you very much for your reply. I watched the live broadcast and learned how to blend existing images with the background. By the way, in the video I saw that the pictures you generated were very high-definition and close to reality, but when I generate, the characters have some deformities and the faces become weird. I used the Photon model.

  • @twilightfilms9436 · 7 months ago

    Does it work with batch sequencing?

    • @risunobushi_ai · 7 months ago

      I haven't tested it with batch sequencing, but I don't see why it wouldn't work in the version that doesn't require custom masks applied on the preview bridge nodes and instead relies on custom maps from load image nodes.
      I've got a new version coming on Monday that preserves details as well, and that can use automated masks from the SAM group; you can find the updated workflow on my openart profile in the meantime.

  • @syducchannel9451 · 7 months ago

    Can you guide me on how to use IC-Light in Google Colab?

    • @risunobushi_ai · 7 months ago

      I'm sorry, I'm not well versed in Google Colab.

  • @antronero5970 · 7 months ago

    Number one

  • @houseofcontent3020 · 7 months ago

    I'm trying to work with the background and foreground image mix workflow you shared, and I keep getting errors, even though I carefully followed your video step by step. Is there a way to chat with you and ask a few questions? Would really appreciate it :) Are you on Discord?

    • @risunobushi_ai · 7 months ago +1

      I'm sorry, but I don't usually do one-on-ones. The only error screens I've seen in testing are due to mismatched models. Are you using a 1.5 model with the correct IC-Light model? I.e., FC for no background, FBC for background?

    • @houseofcontent3020 · 7 months ago +1

      That was the problem. Wrong model~
      Thank you :) @@risunobushi_ai