go break it and report back how it works for you, chatterinos: openart.ai/workflows/risunobushi/product-photography-relight-v3---with-internal-frequency-separation-for-keeping-details/YrTJ0JTwCX2S0btjFeEN
Since my computer can't run the segment anything models, how can I remove their nodes without affecting the rest? I want to manually upload the masked object instead of relying on SAM.
That way I can run the entire workflow without any problems (I apologize for this one since I'm still new to all of this and have no idea how to do it)
@@astrophilecynic9990 no worries! if you don't want to break anything, you can right click on all the segment anything nodes and select bypass. this will let the image that comes into the segment anything node just pass through it. now, a mask won't be generated, but you can look at the color coded links that come out of the segment anything node - those are where the mask is used.
in order to upload and use a custom mask, you need to:
- create a load image node, and put your mask in there
- from the image output, drag and drop it, and search for "convert image to mask"
- select red as the channel
- now connect the green mask output from the convert image to mask node to all the nodes that the segment anything mask output was connected to
it's a bit of a chore, but it's simple (there's a small sketch below of what that image-to-mask conversion does under the hood)
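If it helps to see what that "convert image to mask" step does under the hood, here's a minimal Python sketch assuming a PIL/numpy pipeline - the function name and the tensor layout (images as [B, H, W, C], masks as [B, H, W] floats in 0..1, which is how comfyUI handles them as far as I know) are just illustrative, not the actual node code:

import numpy as np
from PIL import Image

def load_mask_from_image(path: str) -> np.ndarray:
    img = Image.open(path).convert("RGB")
    arr = np.asarray(img).astype(np.float32) / 255.0  # [H, W, 3] floats in 0..1
    mask = arr[..., 0]                                 # red channel only -> [H, W]
    return mask[None, ...]                             # add a batch dim -> [1, H, W]

# paint the subject white and the background black in your mask image:
# white (255) becomes 1.0 (fully selected), black becomes 0.0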
@@risunobushi_ai I apologize for any inconvenience. I've already tried the method above, but the resulting images are way off compared to their reference
it seems that the outline is the only thing being preserved and the actual objects are being changed
Thanks!
Thanks you for the donation!
you chewed through all the details, grats! I love when people make such dedicated tutorials! Much appreciated!
thank you!
"I hope you break things bc I would like to hear some feedback on it" - this got me. *Subscribed*
Ahah thank you! I really appreciate when people give me well-thought-out feedback. Outside testing is key to delivering good results for everyone out there!
Same here. I think this statement here is the power of a community.
Hey man! Thank you for that. I have a question. I am pretty new to Stable Diffusion. I use 3D software for product photography, but Midjourney is more comfortable for me for generating backgrounds and I like the results. I want to add my product to my Midjourney scene. I can do it in Photoshop, but for relighting the product I think this is just an amazing shortcut. So the question is: is it possible to add my own background source with the copy-pasted product photo and just use the workflow for relighting? If it is possible, can you please explain how I can do that? Thanks a lot!
Incredible, you deserve to have more subscribers, I was looking for this for a long time
100%, I don't know why I didn't find this until now
Wow- That's of massive value. Thank you for solving this and sharing and explaining. This is one of the most practical things I've seen so far.
Thanks! Honestly I’m astonished at how useful it ended up being.
Awesome! As a photographer I think this is the best Ai processing so far.
yeah, it feels like IC-Light really takes the whole space a lot closer to being a sort of "exact science" rather than being way too random
OMG Andrea, this is amazing! Thanks a ton for sharing. Can't wait to give this workflow a go. Keep being awesome!
Thank you! I'd love some feedback on it, have fun!
Thank you for such an advanced and powerful workflow. I'm encountering problems where all images generated have a yellowish tint even when selecting a white light for example. Am i doing something wrong?
Hi! No, you're not doing anything wrong, we solve the color shifting issue here: ruclips.net/video/_1YfjczBuxQ/видео.html and here: ruclips.net/video/AKNzuHnhObk/видео.html
Amazing work!!! Very into the IC-Light stuff recently, I was just trying to upscale the image from the IC-light workflow. Will try your workflow and let you know the outcome soon. Thanks again Andrea.
Thanks! If you add an upscaler pass, remember to upscale the high frequency mask you're using as well, be it the one from SAM or the one you're drawing yourself, otherwise it won't work anymore because of a size mismatch between mask and high frequency layers.
As I say in the video, a good spot to place an upscale group would be in between the relight group and the preserve details group.
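for anyone wondering why the sizes have to match: frequency separation splits the image into a blurred low frequency base and a high frequency detail layer, and the details get composited back onto the relit image through the mask. here's a rough numpy sketch of that recombination - the function names are made up and this is a simplification, not the actual nodes in the workflow:

import numpy as np
from scipy.ndimage import gaussian_filter

def split_frequencies(img: np.ndarray, sigma: float = 5.0):
    # img: [H, W, 3] floats in 0..1
    low = gaussian_filter(img, sigma=(sigma, sigma, 0))  # blurred base (low frequency)
    high = img - low                                     # fine detail residual (high frequency)
    return low, high

def restore_details(relit: np.ndarray, high: np.ndarray, mask: np.ndarray) -> np.ndarray:
    # relit and high must share the same H and W, and so must the mask ([H, W], 0..1)
    return np.clip(relit + high * mask[..., None], 0.0, 1.0)

if you upscale the relit image but not the high frequency layer and its mask, the element-wise math above stops lining up - that's the size mismatch I mentioned.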
Hey Andrea, I am facing an error at the final node of the workflow and I can't find a fix
Error occurred when executing Image Levels Adjustment:
math domain error
can you please provide a fix as I really want to use your workflow
Hi! I've heard about this error a few times now, it's possible that the level adjustment node got updated and my values don't match anymore.
Try using values between 0 and 255, I'll update the json when I have the time
@@risunobushi_ai Wow, you are the real one! and you made me SUBSCRIBE.
But I have the same error.
Image Levels Adjustment:
math domain error
Can you please provide a fix and give me reply please!
@@risunobushi_ai Hey so I did try to change the values and get rid of the issue but the error isn't going away, can you please help and provide us the new value🙇♂
For everyone having this issue:
- the level node probably was updated and the new values are clamped between 0 and 255
- you can try changing the values to reflect the new absolute values (0= black, 255 = white)
- if it doesn’t work, you can swap the level node for any other level nodes (there’s a few)
- if you are not comfortable doing it yourself, you'll have to wait for me to update the json, but because of work I won't be able to until late next week (there's a short sketch below of why the node errors out).
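for the curious, the math domain error comes from the gamma calculation inside the node (the traceback quoted further down the thread shows the exact line). here's a minimal reproduction in Python, assuming the node's min/max levels map to the black/white levels - the function name is just for illustration:

import math

def levels_gamma(black_level: float, mid_level: float, white_level: float) -> float:
    # log() needs a strictly positive argument, so mid_level must sit strictly
    # between black_level and white_level for this to work
    return math.log(0.5) / math.log((mid_level - black_level) / (white_level - black_level))

levels_gamma(80.0, 130.0, 180.0)   # fine: mid sits between black and white on the 0..255 scale
# levels_gamma(127.0, 1.0, 255.0)  # raises ValueError: math domain error (mid below black)

so as long as black < mid < white on the new 0..255 scale, the log argument stays positive and the node runs.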
@@risunobushi_ai please let us know when you do, I am just getting a crash, or the whitewashed image if I change the settings.
Amazing work thank you !! Can you upload a background picture instead of using a prompt to create it ?
Love all of your content. Thank you.
Would it be possible to integrate a function where you can give the workflow a reference image to guide the background generator in the direction you want it to go? :)
Thank you for sharing, this is great. But I have a question, why are all the processed photos in a dark style, and does it need to be adjusted anywhere?
Great content!! Thanks for sharing this!
It's incredible work!! I'm encountering problems with the "308 image level adjustment" node, how do I fix it?
Try this. Someone posted it on the openart feed, and it worked for me.
change values
black_level = 80.0
mid_level = 130.0
white_level = 180.0
You are the best!
Don't stop😊
Thanks for the video! Please tell me, is it possible to preserve both the foreground and the background, and change the lighting only? I need to keep the initial image
Hello, why does the color of an object change after I turn on the lights? For example, the bottle was originally green, but it turned yellow after the lights were turned on. Which parameter should I adjust to maintain the original color?
we solve that issue in this update: ruclips.net/video/_1YfjczBuxQ/видео.html
Thank you so much and I am your ❤❤❤big fans 🎉
Mind-blowing! As a product photographer, I'm more excited than terrified. AI is just another tool, like any other. You still need to learn how to use it, and so far, it is complicated enough to require a lot of effort to create quality product images.
I wonder, is there a way to generate 16-bit TIFF files that can be edited in Photoshop without introducing image quality degradation? Frequency separation sometimes makes banding, probably because it is done in 8-bit.
That’s the way I see it too, and why I started getting interested in it a long while ago.
Unfortunately there’s no way to generate TIFF files (as far as I know, but I’m 99% sure). Jpegs and PNGs are all we can work with as of now. The only way to alleviate banding issues (to a degree, and it’s more of a bandaid than a solution) or outlines is to generate files at a higher res, this way the affected pixels are, in percentage, less relative to the total amount of pixels in the image.
Amazing results. The beauty of open source is finding solutions together.
Can the detail preserving part be used on the workflows for clothing? It might be a challenge with the posing but I just thought about it.
I've tested it on underwear only right now (I'm working with a client who produces underwear, so that's what I had laying around) and it works well, even with harsh relights, such as neon strips. I haven't tested it with other types of clothing, but I might do that tomorrow when I have more time.
The only thing it struggles with, right now, is faces in full body shots, because the high frequency layer catches a ton of data there, but I think it just might need some tinkering, nothing major.
I have tried full body shots, or in fact half body for a t-shirt; my experience was not that good (yet)
yeah, it needs to be fine tuned for people, that's why I released it for product shots only
WOW! This is amazing! Would it be possible to use image templates as inputs for the background generation?
already did! ruclips.net/video/GsJaqesboTo/видео.html
@@risunobushi_ai Amazing bro, and that workflow will preserve label details and text on products aswell?
Yep, it’s just a more up to date version (this was the first iteration of it, my latest video is my latest version out of 4? 5? I think)
Nice workflow! As other people have said, for certain objects it's a bit tough to keep the original color of the object.
I added a perturbed attention guidance between the first model loader and ksampler, which helps create more coherent backgrounds.
Thank you for making the tutorial video as well!
Thanks! Yeah, I understand now that some people prefer having a complete workflow rather than a barebones one, I’ll create two versions going forward, one barebones for further customization and one with more stuff, like PAG, IPAdapters, color match or whichever group might be useful
This is so good! Makes me wanna download the video to keep it forever
@@robertdouble559 thx mate, but i only use laserdiscs.
Great work... great explanation, thank you very much, Andrea!
great video thank you!! my product image is so stretched because of my phone format, how do I put it in as a square format?
I am definitely testing this tomorrow. Just one question. Do you think this will work on the intricate details and designs on a jewellery? That is something I am looking forward to as i have a jewellery business as well
there's a more recent version that should work with jewelry, as long as you don't want refraction to go through the jewel itself: ruclips.net/video/GsJaqesboTo/видео.html
Awesome video. Thanks for sharing! Also looking forward to the people work flow
I’ll try to get it working soon, but I’m currently swamped with deadlines from my day job, so I might get it done for next week’s video
@andrea this is so fantastic, thank you for the breakdown! Do you think there's a way to BRING a background plate in instead of generating one???
as long as it has the same dimensions as the relighting mask and subject, and has the same perspective as the subject, you can use custom backgrounds, sure!
I'm new to this and have issues with installing ControlNet, maybe there were some hints in your past videos? Which .pth do you use, why, etc.?
This is amazing, thank you!
Amazing work!!! quick question please:
i got this error Image Levels Adjustment
math domain error
then i found that - ''Be sure your black level isn't higher than mid level, and vice versa. Black, must be lower than mid, and mid lower than high.''
but when i do it the colors are incorrect. any advice?
first of all thank you for letting us participate on this mindblowing journey!
I've managed to get the whole comfyUI setup with the manager running. took me a while since i've no experience in this field.
My only question is you've mentioned that to do an upscale you'd need to include the mask and upscale it too?
Would there be a way to include this upscaling process within the workflow, or has this already been done and I don't see it?
ah, and before I forget: THANK YOU!
Ciao! Thank you, I've ended up setting up an upscaler (by no means the best upscaler out there, it was just something I had laying around from a previous test) here: ruclips.net/video/_1YfjczBuxQ/видео.html
you can check it out and figure out how it works in terms of upscaling the masks, and link up any other upscaler you like as long as it upscales the same things.
also I ended up going through more iterations of this workflow (the one I linked was version 3 I think?) doing color matching and detail preservation in more recent ones, so you can check those out as well!
@@risunobushi_ai alright! Thank you very much for your reply. I'll mess around and try to figure out stuff step by step :)! Cheers & thank you!
This is great for a product photographer like myself. I got v3 going, however v4 keeps breaking Comfy, so I want to concentrate on v3 to see how it performs. I am using a bottle of wine, but the text on the label is not preserved enough, is there a way to give it more importance?
You can try using my set of Frequency Separation nodes here, by changing the nodes that are responsible for it in either V3 or V4 with them. You can find them in this video: ruclips.net/video/AKNzuHnhObk/видео.html
This is really cool, but it still changes my colors.... It seems to work better (not perfect) pulling the blended image into the second frequency separation. At least the scene gets re-lit. Is there a way to use the IC-Light and then just pull the colors over with some transparency value so they don't get washed out?
Yep, we solved the color matching here: ruclips.net/video/_1YfjczBuxQ/видео.html
and on monday I'll release a workflow for relighting people while preserving details and colors too.
I also developed custom nodes for frequency separation, but I haven't had the chance to update the workflow yet. They'll be in Monday's video tho.
What I usually do is use IC-light and the luminosity masks in Krita
I would do it outside of comfyUI too, but the viewers wanted a all-in-one workflow
@@risunobushi_ai wow!! Thanks!
Nice tutorial but i got an error on the node GroundingDinoSAMSegment, and I don't know how to install it with docker. Could you help with that?
What's the issue you're encountering? Do you get anything in the logs? If SAM doesn't work for you, you could use a remBG node instead
@@risunobushi_ai I got this error: Failed to validate prompt for output 269:
* GroundingDinoSAMSegment (segment anything) 204:. And I don't know how to fix it
Great workflow! Also I can imagine hooking up an IPAdapter for the BG generation to keep consistency between different angled product shots!
Yeah, this is a “barebones” workflow, it can be expanded with anything one might need. I usually publish barebones rather than fully customized ones because it’s easier to make it your own (or at least it is for me, I don’t like when I have useless stuff in other people’s workflows)
@@risunobushi_ai Agreed! Cheers
The 'frequency' part actually sounds a lot like how focus peaking works.
Thanks a million! Great work
Thank you!
Thank you very much!
but I found that some products have great results through this workflow, and some niche products have not been very good after re-lighting. Is this because of the basic model training? Because some products have very small training volume
Hi! No, the way IC-Light works is through an instruct pix2pix process, so there shouldn't be any issues with object permanence at very low CFG (between 1.1 and 2), as it forces the original image on top of the light mask.
Btw this workflow is one of my first attempts, these are my latest ones:
Colors and details preservation:
ruclips.net/video/_1YfjczBuxQ/видео.html
People (and products) relighting:
ruclips.net/video/AKNzuHnhObk/видео.htmlsi=gfYJmWLIFK7HrhL7
Thank you! Incredible work!
wow! this is a great solution!!
how should i get started making something like this? do you have any tutorials for beginners?
hi! my first videos are basic tutorials, and they get harder and more in depth the more recent they are. for the product relighting series in particular, I'd suggest watching them all in order of publishing, since they're small incremental improvements over the course of a month of development. you'll probably understand more about how they change and what's going on if you watch them in that order.
thank you again for posting workflow
This is amazing... thank you so much for putting these videos together!!
Question: For some reason, the image I'm getting out of the KSampler after the IC-Light Conditioning node is always coming out darker/orange/brown. I've tried it with a bunch of different images but the image and color are always significantly different than what's being fed into it. I've also tried a few different prompts in the text encoded that's being fed into the IC-Lighting node but everything still comes out quite dark. Thanks again!
Thanks! Please refer to the comment by AbsolutelyForward, where we talk about this and about the use of a color match node. You can also increase the amount of light by remapping the light mask (right now it should be set to 0.7, 1 is full white)
@@risunobushi_ai Thank you!! I tried to see if anyone else had the same issue and must have missed it.
Color Blend definitely helped at the end when connecting it to the original image. I also found increasing the min value of the Remap Mask Range node to 0.4 helped brighten up the initial input image. I also increased the IC-Lighting Conditioning to 0.5.
Thanks again for this amazing workflow!!
It's incredible, again! 😱
One thing, just a minorly-minor improvement idea: you enter a prompt, then copy it into another prompt field after a lighting prompt part. You could separate these two and then synthesize them using the product prompt.
Turning it into sample code:
ProductPrompt = 'a photograph of a product standing on a banana peel'
LightingPrompt = 'white light'
SynthesizedPrompt = ProductPrompt + LightingPrompt # Here's the point where we no longer Ctrl-C/Ctrl-V 😁
Plus the prompt nodes could be rearranged into a Prompts group. (Of course I could do this myself after downloading the workflow, for which you deserve a Praying Blanket 🙏, but I'm here just for admiring, my machine is far below the minimal requirements of all this.)
thanks, I didn't know about the product prompt node! I knew about other prompt concatenate nodes, and I thought about using them, but again, not knowing the knowledge level of the end user I usually end up using the least complicated setup. sometimes this ends up producing minor inconveniences like copy pasting text, or having to link outputs and inputs manually where I could have used a logic switch, but it's a tradeoff I accept for the sake of clarity
@@risunobushi_ai Nonono, I've just called it Prompt Node. 😁 It's what it is, you're 100-fold more educated in this than I.
This is an awesome workflow. It was working fine. Sadly the latest updates to either comfyui (ComfyUI Version:** v0.2.2-22-g81778a7) or the WAS node suite gives a "ValueError: math domain error" at the "Image Levels Adjustment" node. Any solution?
Fixed. Gamma is a bit offset
@@AITransformers Did you find right values for black_level, mid_level and white_level?
Yeah, the level adjustment node was updated. I received a ton of complaints and requests for help but I'm currently unable to update the json because of work :/ if you have found the correct values I'll make sure to post a pinned comment
Try this. Someone posted it on the openart feed, and it worked for me.
change values
black_level = 80.0
mid_level = 130.0
white_level = 180.0
Hi guys, which folder should the BiRefNet-DIS_ep580 file be placed in?
Keeping details in upscaling is a common problem. Could that technique be applied to upscaling as well?
I haven’t tested it with upscaling, I guess that as long as you don’t need to upscale the original image you won’t have to resize the frequency layers, so the details would be as they are in the original image. If you need to upscale the original image and the frequency layers as well, you might have some troubles with preserving details depending on how much you’re upscaling.
i also tried it on RunComfy but it never works, do you give private assistance?
hi! this workflow is not on runcomfy, my latest one is - which error are you getting?
Love your content.
thank you!
I always run out of VRAM with GroundingDINO. Any alternatives?
remBG, or a smaller SAM model, there are some as small as less than 100MB!
Great video sir, thank you very much!
thank you for watching!
Ok, since no one has asked it yet, can I use an SDXL model with this workflow? Thanks for this work, and I'm also a photographer 😅😊 can't wait for v4 with that IP-Adapter for consistent backgrounds (and SDXL for higher res? ;) )
Subbed
Thanks! Unfortunately there’s no support for SDXL, it’s for 1.5 only, but you can definitely upscale a ton with SUPIR or other upscalers
This looks like a game changer. Maybe only for mockups, ideas iterations, or even real productions !
Everything starts well on my side, but the segment anything does nothing so the process is useless. I am on an M2 Pro, any ideas?
Did you install all the necessary dependencies for SAM to work on M chips? As far as I know you’ve got some hoops to jump through in order to get tensorflow and other dependencies running on M chips
Absolutely fantastic workflow and a well explained tutorial :)
I tried to relight some package designs, but somehow it always gets "tinted" in a warmish-yellow tone, no matter what text prompt I use for the lighting. I noticed that the epicrealism checkpoint tends to do so if I use a very generic prompt for the background (no description apart from "advertising photography"). I'm lost.
you could either try different checkpoints, and / or you could try to specify which kind of light you want. I notice that I get a very warm tint with "natural light", but specifying "white light" or some kind of studio light (softbox, spotlight, strip light) produces more neutral results. You could also try influencing with a negative prompt (warm tones, warm colors, etc).
@@risunobushi_ai thx for the hints :)
The package image (input) is colored half-green + half-grey. What is your experience (so far) with retaining the original colors and transferring them in a realistic way with your workflow?
Would an additional color matching node perhaps be of some help?
I have never particularly cared for the color matching node (at least the one I used), as it was almost never working well for me, but you could try and blend it at a lower percentage for better results. I guess it all depends on how important it is to color match to an exact degree the final relit image to the source one. This is my own preference, but the way I'm used to working I'd rather fix the colors in PS for a better degree of control. If one would want to do everything inside of comfyUI, which to be fair is in the spirit of this video and workflow, a color matching node could be a good enough solution, although less "directable" than proper post work in PS.
adding here, since I just thought about it: you could even try color matching only specific parts of the subject, such as the non-lit ones, or only the lit ones, by using the same node I'm using to extract a light mask from the blended image, or a RGB/CMYK/BW to mask node, based on the color / light you need to correct.
@@risunobushi_ai So far I haven't had any success by changing the checkpoints or modifying the lighting prompt - the original colours of the packaging are lost.
But: at the end of the workflow, I used the input image again to readjust the colours. To do this, I combined the "ImageBlend" (settings: 1.00, soft_light) node with the "Image Blend by Mask" (for masking the packaging) node - this has worked very well so far :)
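in case anyone wants to replicate that outside comfyUI: the combination boils down to a soft light blend of the relit output with the original image, composited back only where the packaging mask is. a rough Python approximation (my own guess at what those two nodes compute, not their actual source - arrays are floats in 0..1):

import numpy as np

def soft_light(base: np.ndarray, blend: np.ndarray) -> np.ndarray:
    # Photoshop-style soft light approximation
    return np.where(
        blend < 0.5,
        2 * base * blend + base ** 2 * (1 - 2 * blend),
        2 * base * (1 - blend) + np.sqrt(base) * (2 * blend - 1),
    )

def restore_colors(relit: np.ndarray, original: np.ndarray, mask: np.ndarray) -> np.ndarray:
    blended = soft_light(relit, original)   # push the original colors back in
    m = mask[..., None]                     # [H, W] -> [H, W, 1] so it broadcasts over RGB
    return blended * m + relit * (1 - m)    # only where the packaging mask is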
Andrea, I'm building an SDXL workflow for product photography. If I add IC-Light as an option inside the SDXL workflow so users can turn it on or off from the webapp based on the input, is that possible or should IC-Light be a standalone workflow?
You can just encode a resulting image from SDXL and then use it as a base for an IC-Light pipeline, no need to have two different workflows if keeping two checkpoints loaded at the same time is not an issue
Could you update it to run with Flux? I know IC-Light doesn't work with Flux, but the other parts of the generation could benefit from Flux
Hello blogger, I am a novice, I saw your work on the Internet. (Product Photography Relight v3 - With internal Frequency Separation for keeping details) I really, really want to be able to use this workflow, but I'm having so many problems, I don't know how to install the relevant model and where to put the model in the folder, can you show me a tutorial video to install this workflow? It really means a lot to me. Thank you very, very much. I liked your video and subscribed to your channel.
Using ComfyUI, I get the error: Error occurred when executing MaskFromColor+: The size of tensor a (1024) must match the size of tensor b (3) at non-singleton dimension 3. I fixed it by bypassing the node, but then ran into a problem with the Image Resize, apparently due to an update in ComfyUI, so I switched it to "select keep_proportion for the method of the Image Resize nodes". That solved it and I could get past this concern.
Thanks man, I was searching for this. I was stuck and not getting anywhere after searching everywhere, but here you gave the solution. Thanks again
How do I mix in an existing background? Is it possible, instead of having the workflow create my background?
Yep, but you need to have the same perspective between the subject and the background. Simply add a load image node and blend the background with the segmented subject, bypassing the background generator group.
There’s no perspective correction in comfyUI that I know of, but if someone knows about it it’d be great.
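under the hood that blend is just an alpha composite of the segmented subject over your own background through the SAM mask - something like this, assuming same-size float images in 0..1 and illustrative names:

import numpy as np

def composite_over_background(subject: np.ndarray, background: np.ndarray, mask: np.ndarray) -> np.ndarray:
    # subject, background: [H, W, 3]; mask: [H, W], 1.0 where the subject is
    # both images need to share the same resolution (and ideally the same perspective)
    m = mask[..., None]
    return subject * m + background * (1 - m)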
Thank you so much for the detailed answer. I'll look for a tutorial that explains how to connect the nodes you talked about. As for the perspective, that's fine, since I'll be editing it beforehand in Photoshop, so it will only need to mix the light and color
Wow! This is fantastic!
I was faced with the problem that the Load And Apply IC-Light node does not find loaded models. Does anyone know how to solve this?
* LoadAndApplyICLightUnet 37:
- Value not in list: model_path: 'iclight_sd15_fc.safetensors' not in []
Did you place the model in the Unet folder?
@@risunobushi_ai It works! Thank you!
Chapeau!
i've got this error
Input channels 8 does not match model in_channels 12, 'opt_background' latent input should be used with the IC-Light 'fbc' model, and only with it
You're most probably using the FBC IC-Light model instead of the FC model, or using the FC model while plugging in an optional background in the IC-Light node (opt backgrounds are for the FBC model only)
So cool! I'm doing basically the same for cars and people! But at the moment I still prefer to do the frequency separation part in Nuke - I can only dream of a 32bit workflow in Comfy
Wait, if you generate a normal map from IC-Light do you get to work with 32bit images in Nuke?
i'm having this error:
RuntimeError: Given groups=1, weight of size [320, 12, 3, 3], expected input[2, 8, 128, 128] to have 12 channels, but got 8 channels instead
Are you using the IC-Light FBC model instead of the FC? Are you trying to use SDXL instead of SD 1.5?
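for context on the channel counts: as far as I understand IC-Light, the FC model expects the noisy latent concatenated with the encoded foreground (4 + 4 = 8 input channels), while FBC also expects an encoded background (4 + 4 + 4 = 12). a toy torch sketch of the mismatch, with illustrative shapes:

import torch

latent = torch.randn(1, 4, 64, 64)   # noisy latent
fg     = torch.randn(1, 4, 64, 64)   # encoded foreground
bg     = torch.randn(1, 4, 64, 64)   # encoded background (FBC only)

fc_input  = torch.cat([latent, fg], dim=1)       # 8 channels -> what the FC model wants
fbc_input = torch.cat([latent, fg, bg], dim=1)   # 12 channels -> what the FBC model wants
# feeding the 8-channel input to the FBC UNet is what produces the
# "expected input ... to have 12 channels, but got 8" error above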
Great stuff!
Brother, take the text node and use that as the input for the CLIP positives, it helps. This workflow is awesome btw.
Thanks! Yeah, I know there's better ways to bypass a double prompt field, more so if the two prompts are similar, but I usually construct my workflows so that there's as little complications as possible for new users. In this case, this means using two different prompt fields for what is essentially the same prompt, but to new users having the usual Load Checkpoint -> CLIP Text Encode -> KSampler pipeline makes more sense than having a Text node somewhere, conditioning two different KSamplers in two different groups.
I got the error during ImageResize+: not enough values to unpack (expected 4, got 3). Any ideas what went wrong and how to fix it?
What is the image extension you’re using? You can sub in another resize node if that one doesn’t work for you
Facing the same issue. Are we passing the mask or the image to resizer. Debugging shows resizer is getting a tensor with no channels. If you can confirm, I will patch the resizer to bypass this shape mismatch. Thank you. Btw I am working in api mode. Never used comfy in ui mode.
We're passing an image, but it's not the first time I hear about someone having issues with this resize node. Swapping it for another resize node usually solves it.
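for anyone debugging this in API mode: the "expected 4, got 3" unpack error is what you'd see if a mask-shaped tensor reaches an input that expects an image. as far as I know comfyUI images are [B, H, W, C] tensors and masks are [B, H, W], so a resize node doing something like the following chokes on a mask (sketch only, not the node's actual code):

import torch

def resize_expects_image(image: torch.Tensor):
    b, h, w, c = image.shape   # works for [B, H, W, C] images
    return b, h, w, c

img  = torch.zeros(1, 512, 512, 3)   # image: 4 dims -> fine
mask = torch.zeros(1, 512, 512)      # mask: 3 dims
resize_expects_image(img)
# resize_expects_image(mask)         # ValueError: not enough values to unpack (expected 4, got 3)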
@@risunobushi_ai thanks really much. I will write a comment in openart.
@@Arminas211 I encountered the same issue, but I eventually discovered that I hadn't changed the prompt of the segment anything node, which caused the problem. Perhaps you could try doing that as well?
Man, how come your Image Levels Adjustment works with those settings?! Mine only works if the value of mid_level is between the black and white. If I use the numbers you have (ex: mid_level=1), I get a "math domain error"....
another thing I noticed is that it is changing the color of some of the product's parts..
Were you able to solve this? I'm getting the same error.
thank you very much, sir
Love you 3000 ❤😂
Image Level Adjustment is brokennn aaaa, this is the only step I need to fix, can you help me please?
Try this. Someone posted it on the openart feed, and it worked for me.
change values
black_level = 80.0
mid_level = 130.0
white_level = 180.0
very cool, but for some reason the control nets just crash my computer... i have a 3080ti, so it must be something else.
That's weird, I haven't had any reports of crashes yet. I have a 3080ti too, so maybe try subbing in another controlnet node / controlnet model?
@@risunobushi_ai yeah, going to try that... thanks for the reply.
@@risunobushi_ai turns out it was the depth anything model.. i can use depth_anything_vits14.pth - thanks. insane workflow... powerful stuff.
where is the 2 hour live video?
it should be this one if you want to dive into it: ruclips.net/user/livexjy3JyaPfHQ
but my latest video showcases a workflow that solves most of the stuff I was talking about two months ago!
workflow plz, sir!
The workflow is in the description *and* in the pinned comment, and I even say "the workflow is in the description below" as soon as 00:40
Amazing!
Instead of changing the subject name in the Grounding Dino prompt, you can try using just "subject" or "main subject", it should work ;-)
In this case, and when you only have one subject yes, but if you have more subjects (like in my update on this video, when I have the bottle sitting on a branch) it might not work. But I agree, here you can just use subject instead!
thanks. the first time, 5%. hehe
Hi.. Thanks a Lot For this tutorial and workflow.
I am getting this error , can you please help me how can I fix this :
C:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\modeling_utils.py:1051: FutureWarning: The `device` argument is deprecated and will be removed in v5 of Transformers.
warnings.warn(
Prompt executed in 8.58 seconds
This is not an error per se, it’s a warning about a transformers argument being deprecated. As you can see, the prompt gets executed.
What issues are you facing during the prompt? Where does it stop?
Subscribed!!!!
Image Levels Adjustment
math domain error :(
gamma = math.log(0.5) / math.log((self.mid_level - self.min_level) / (self.max_level - self.min_level))
ValueError: math domain error
hi! this is a known issue, the Image Level Adjustment was updated and it broke the range. I haven't had the time to fix this yet because of my job, I'll try to do it as soon as I have the time to. Unfortunately I can't maintain all my old workflows on a daily schedule.
i can hear brands screaming you can't fuck around with a product's original colour.
I can hear brands screaming we solved color matching in the latest videos and workflows ;)