Andrea, I really enjoyed your live stream and your interaction with those of us who were with you. However, this follow-up on the node, the technical aspects, and your insight as a photographer is outstanding. Excellent work!
Thank you! I’m glad to be of help!
Those videos are great, please keep them coming. I'm totally new to SD and Comfy; you actually make me believe it can be used in a professional, productive way.
It can definitely be used as a professional tool, it all depends on the how!
great, I learned a lot. I feel so good about it :)
Nice insight into this new workflow, super helpful as usual :) This opens up a whole lot of possibilities! Thanks and keep it up.
Yea it does! I honestly believe that this is insane for product photography
Thank you for explaining the actual workflow and the function of every node. I also like the mask editor trick. Just wondering why some of my images also change after the lighting is applied? Sometimes there are minimal changes to the eyes, face, etc.
Thanks for the kind words. To put it simply, the main issue with prompt adherence lies in the CFG value. Usually, you'd want a higher CFG value in order to have better prompt adherence. Here, instead of words in the prompt, we have an image being "transposed" via what I think is an instruct pix2pix process on top of the light latent.
Now, I'm not an expert on instruct pix2pix workflows, since it came out at a time when I was tinkering with other AI stuff, but from my (limited) testing, it seems like the lower the CFG, the more the resulting image adheres to the starting image. In some cases, as we'll see today on my livestream, a CFG around 1.2-1.5 is needed to preserve the original colors and details.
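For anyone who wants to see that trade-off outside of ComfyUI, here's a minimal sketch using the diffusers instruct pix2pix pipeline - not the IC-Light setup itself, just an illustration of how a lower CFG keeps the result closer to the source image (the prompt, filenames, and values here are my assumptions):

```python
# Minimal sketch: CFG vs. source-image adherence in an instruct pix2pix pipeline.
# This is NOT the IC-Light workflow, just an illustration of the same trade-off.
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

source = load_image("product_shot.png")  # hypothetical input image

# Lower guidance_scale ("CFG") -> result stays closer to the source image;
# higher guidance_scale -> stronger edit, weaker adherence to the source.
for cfg in (1.2, 1.5, 3.0, 7.0):
    out = pipe(
        "soft window light coming from the left",
        image=source,
        guidance_scale=cfg,          # the CFG value discussed above
        image_guidance_scale=1.5,    # how strongly to condition on the source image
        num_inference_steps=25,
    ).images[0]
    out.save(f"relight_cfg_{cfg}.png")
```

Comparing the four outputs side by side should make it clear why the 1.2-1.5 range preserves the original so much better.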
@risunobushi_ai thank you! Lowering the CFG value worked. :D
Things are getting so exciting🔥
Indeed they are!
Well explained and super useful for image composition. I expect that a small hurdle might come up with reflective/shiny objects...
I’ll be honest, I haven’t tested it yet with transparent and reflective surfaces, now I’m curious about it. But I expect it to have some issues with them for sure
Thank you for this video, it was really helpful. There are a few undefined nodes in the workflow, do you have any advice as to how I can fix this?
Hi! Did you try installing the missing custom nodes via the manager?
Bravo! Thanks for sharing! Super interesting development!
Thanks, glad you liked it!
This is a great video! Thanks for sharing the info.
Hi! Can you tell me how you keep the product the same? I mean, I see the bag in the last couple of minutes, and you didn't use anything like a controlnet etc., but the product is the same before and after lighting... How? @_@... Thank you
This is how IC-Light works. At its core, it's an instruct pix2pix pipeline, so the subject is always going to stay the same - although in more recent videos I solve issues like color shifting, detail preservation, etc. by using stuff like controlnets, color matching nodes, etc.
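If anyone is curious what a color matching step boils down to, here's a rough sketch using scikit-image's histogram matching - just one way to do it outside of Comfy, not the exact color match node I use in the videos, and the filenames are placeholders:

```python
# Rough sketch of color matching: push the relit image's color distribution
# back towards the original product shot. Filenames are placeholders.
import numpy as np
from PIL import Image
from skimage.exposure import match_histograms

source = np.asarray(Image.open("original_product.png").convert("RGB"))
relit = np.asarray(Image.open("relit_product.png").convert("RGB"))

# Match the relit image's per-channel histograms to the original's.
matched = match_histograms(relit, source, channel_axis=-1)
Image.fromarray(matched.astype(np.uint8)).save("relit_color_matched.png")
```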
@risunobushi_ai That's what confuses me... I did that, and the product still changed... Does it depend on the checkpoint model too?
I would SEG her out from the close-up, then draft-composite her on the BG. This probably reduces the color cast :)
Yup, that's what I would do too. And maybe use a BW light map based on the background, remapped to low-ish white values, as a light source (something like the sketch below).
I've been testing a few different ways to solve the background-as-a-light-source issue, and what I've found so far is that the base, non-background solution is so good that the background option is almost not needed at all.
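For reference, this is roughly what I mean by a BW light map remapped to low-ish white values - a quick sketch, with the 0.6 ceiling being an arbitrary starting point:

```python
# Quick sketch: turn a background image into a BW light map with capped whites,
# to be used as the light source. The 0.6 ceiling is an arbitrary starting point.
import numpy as np
from PIL import Image

bg = Image.open("background.png").convert("L")         # grayscale background
light = np.asarray(bg).astype(np.float32) / 255.0      # normalize to 0-1

light = np.clip(light, 0.0, 1.0) * 0.6                 # remap whites down to ~60%
Image.fromarray((light * 255).astype(np.uint8)).save("light_map.png")
```

You'd then bring light_map.png in with a load image node and feed it in as the light source.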
This is a great tutorial, thank you! ...but how do I use IC-Light with the SD web UI? I have just installed it but it doesn't appear anywhere 😒😒 Could you help?
Uh, I was sure there was an automatic1111 plugin already released, I must have misread the documentation here: github.com/lllyasviel/IC-Light
Have you tried the gradio implementation?
Sorry to bother you, I'm stuck in ComfyUI. I need to add AI people to my real images. I have a place that I need to add people to, to make it look like there's someone there and not an empty place. I've looked around but came up short. Can you point me in the right direction?
Hey! You might be interested in something like this: www.reddit.com/r/comfyui/comments/1bxos86/genfill_generative_fill_in_comfy_updated/
@risunobushi_ai I'll give it a try. Thanks
@risunobushi_ai So I tried running it but I have no idea what I'm supposed to do. Thanks anyway.
How do I make sure the input picture doesn't change in the output? It seems to change. How can I keep it exactly the same and just manipulate the light instead?
My latest video is about that: I added both a way to preserve details through frequency separation and three ways to color match.
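For the curious, the frequency separation pass boils down to something like this - a simplified sketch with OpenCV rather than the exact node graph from the video, and the blur radius is an assumption you'd tweak per image:

```python
# Simplified frequency separation sketch: keep the high frequencies (texture,
# fine details) from the original image and the low frequencies (color,
# lighting) from the relit image, then recombine. Assumes both images share
# the same resolution; filenames are placeholders.
import cv2
import numpy as np

original = cv2.imread("original.png").astype(np.float32)
relit = cv2.imread("relit.png").astype(np.float32)

radius = 15  # what counts as "detail" vs. "lighting"; tweak to taste
high = original - cv2.GaussianBlur(original, (0, 0), radius)  # detail layer
low = cv2.GaussianBlur(relit, (0, 0), radius)                 # lighting/color layer

merged = np.clip(low + high, 0, 255).astype(np.uint8)
cv2.imwrite("relit_with_details.png", merged)
```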
Super useful in the product industry. Quick question please:
got prompt
Failed to validate prompt for output 243:
* CheckpointLoaderSimple 82:
- Value not in list: ckpt_name: 'epicrealism_naturalSinRC1VAE.safetensors' not in ['epicrealism_naturalSin.safetensors']
* ControlNetLoader 215:
- Value not in list: control_net_name: 'control_sd15_depth.pth' not in []
* ControlNetLoader 316:
- Value not in list: control_net_name: 'control_v11p_sd15_lineart.pth' not in []
* ImageResize+ 53:
- Value not in list: method: 'False' not in ['stretch', 'keep proportion', 'fill / crop', 'pad']
Output will be ignored
Failed to validate prompt for output 269:
* CheckpointLoaderSimple 2:
- Value not in list: ckpt_name: 'epicrealism_naturalSinRC1VAE.safetensors' not in ['epicrealism_naturalSin.safetensors']
* LoadAndApplyICLightUnet 37:
- Value not in list: model_path: 'iclight_sd15_fc.safetensors' not in []
Output will be ignored
Failed to validate prompt for output 291:
Output will be ignored
Failed to validate prompt for output 220:
Output will be ignored
Failed to validate prompt for output 76:
Output will be ignored
Failed to validate prompt for output 270:
Output will be ignored
Failed to validate prompt for output 306:
Output will be ignored
Failed to validate prompt for output 212:
Output will be ignored
Failed to validate prompt for output 225:
Output will be ignored
Failed to validate prompt for output 230:
Output will be ignored
I'm getting this error and I think the reason is that I didn't install the IC-Light model... I already installed it; should I install the ControlNet models too?
Thank you
Hi dear Andrea Baioni,
I am very interested in mastering ComfyUI and was wondering if you could recommend any courses or resources for learning it. I would be very grateful for your advice.
Hey there! I'm not aware of paid comfyUI courses (and I honestly wouldn't pay for them, since most, if not all of the information needed is freely available either here or on github).
If you want to start from the basics, you can start either here (my first video, about installing comfyUI and running your first generations): ruclips.net/video/CD1YLMInFdc/видео.html
or look up a multi-video basic course, like this playlist from Olivio: ruclips.net/video/LNOlk8oz1nY/видео.html
Yeah we know... just waiting for this for SDXL..... 😅
The product I'm relighting changes drastically. It basically keeps the shape but introduces too much latent noise. I'm using your workflow without touching anything but I'm getting very different results.
That's weird, in my testing I sometimes get some color shift but most of the time the product remains the same. Do you mind sending me the product shot via email at andrea@andreabaioni.com? I can run some tests on it and check what's wrong.
If you don't want to or can't share the product, you could give me a description and I could try generating something similar, or look up something similar on the web that already exists.
Leaving this comment in case anyone else has issues, I tested their images and it works on my end. It just needed some work on the input values, mainly CFG and multiplier. In their setup, for example, a lower CFG (1.2-ish) was needed in order to preserve the colors of the source product.
Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 90, 160] to have 4 channels, but got 8 channels instead
What's going on? It worked normally before.
Update kijai's IC-Light repo, it should solve the issue (it's most probably because you updated Comfy).
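For anyone hitting the same error and wondering what it actually means, here's my understanding as a toy sketch (an illustration, not the actual ComfyUI code): IC-Light concatenates a conditioning latent to the usual 4-channel noise latent, so the UNet's first conv needs 8 input channels. If the patch isn't applied - for example because the node pack is outdated after a Comfy update - the stock 4-channel conv receives 8 channels and throws exactly this error:

```python
# Toy illustration of the channel mismatch, not actual ComfyUI/IC-Light code.
import torch
import torch.nn as nn

# A stock SD 1.5 UNet input conv takes 4 latent channels (weight [320, 4, 3, 3]):
stock_conv = nn.Conv2d(in_channels=4, out_channels=320, kernel_size=3, padding=1)

# IC-Light concatenates the conditioning latent to the noise latent -> 8 channels:
noise_latent = torch.randn(2, 4, 90, 160)
condition_latent = torch.randn(2, 4, 90, 160)
x = torch.cat([noise_latent, condition_latent], dim=1)   # shape [2, 8, 90, 160]

# Feeding 8 channels into the unpatched 4-channel conv reproduces the error:
try:
    stock_conv(x)
except RuntimeError as e:
    print(e)  # "... expected input[2, 8, 90, 160] to have 4 channels, but got 8 ..."

# The IC-Light loader patches the conv to 8 input channels (weight [320, 8, 3, 3]),
# which is why updating the node pack fixes it:
patched_conv = nn.Conv2d(in_channels=8, out_channels=320, kernel_size=3, padding=1)
print(patched_conv(x).shape)  # torch.Size([2, 320, 90, 160])
```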
Dude, I love your videos but this ultra-closeup shot is super uncomfortable to watch. It's like you're entering my personal space :D It's weird and uncomfortable but not in the good way. Don't you have a wider lens than 50mm?
The issue is that I don't have any more space behind the camera to compose a different shot, and if I use a wider angle some parts of the room I don't want to share come into view. I'll think of something for the next ones!
Hello, I am a ComfyUI beginner. When I used your workflow, I found that the light and shadow cannot be previewed in real time, and when I regenerate the light and shadow on a previously generated photo, generation is very slow and the system reports an error: WARNING SHAPE MISMATCH diffusion_model.input_blocks.0.0.weight WEIGHT NOT MERGED torch.Size([320, 8, 3, 3]) != torch.Size([320, 4, 3, 3])
Sorry, but I'll have to ask a few questions. What OS are you on? Are you using an SD 1.5 model or an SDXL model? Are you using the right IC-Light model for the scene you're trying to replicate (fbc for background relight, fc for mask-based relight)?
@risunobushi_ai Sorry, I found the key to the problem. The first issue was that I did not watch the video tutorial carefully and skipped downloading fbc. The second was an image size problem. After downloading fbc and adjusting the image size (512 × 512 pixels), generation is much more efficient. Thank you very much for this video. In addition, I would like to ask: if I want to add some other products to this workflow, that is, product + background for light-source fusion, what should I do?
I cover exactly that (and more) in my latest live stream from yesterday!
I demonstrate how to generate an object (but you can just use a load image node with an already existing picture), use segment anything to isolate it, generate a new background, merge the two together, and relight with a mask so that it looks both more consistent and better lit than just using the optional background option in the original workflow (the isolate-and-composite part is sketched below).
For now, you’d need to follow the process in the livestream to achieve it. In a couple of hours I will update the video description with the new workflow, so you can just import it.
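If it helps, outside of Comfy the isolate-and-composite part of that process looks roughly like this - a sketch using rembg for the cutout instead of the segment anything group from the workflow, with placeholder filenames; the relighting itself still happens in the workflow:

```python
# Rough sketch of "isolate the product, then composite it on a new background".
# rembg stands in for the segment anything group; filenames are placeholders.
from PIL import Image
from rembg import remove

product = Image.open("product.png").convert("RGBA")
background = Image.open("new_background.png").convert("RGBA")

cutout = remove(product)                         # RGBA cutout with transparency
background = background.resize(cutout.size)      # naive fit; compose properly in practice

composite = Image.alpha_composite(background, cutout)
composite.convert("RGB").save("composited.png")
# The composite (plus the cutout's alpha as a mask) then goes into the relight pass.
```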
@risunobushi_ai Thank you very much for your reply. I watched most of the live broadcast and learned how to blend existing images with the background. By the way, in the video I saw that the pictures you generated were very high-definition and close to reality, but when I generate them, I find that the characters have some deformities and the faces become weird. I used the Photon model.
Does it work with batch sequencing?
I haven't tested it with batch sequencing, but I don't see why it wouldn't work in the version that doesn't require custom masks applied on the preview bridge nodes, and instead relies on custom maps from load image nodes.
I've got a new version coming on Monday that preserves details as well, and that can use automated masks from the SAM group; you can find the updated workflow on my openart profile in the meantime.
Can you guide me on how to use IC-Light in Google Colab?
I'm sorry, I'm not well versed in Google Colab
Number one
I'm trying to work with the background and foreground images mix workflow you shared and I keep getting errors, even though I carefully followed your video step by step. Wondering if there's a way to chat with you and ask you a few questions. Would really appreciate it :) Are you on Discord?
I'm sorry, but I don't usually do one-on-ones. The only error screens I've seen in testing are due to mismatched models. Are you using a 1.5 model with the correct IC-Light model? i.e.: FC for no background, FBC for background?
That was the problem. Wrong model~
Thank you :) @risunobushi_ai