#### Links from the Video ####
Join my Map Bashing Live Stream: ruclips.net/user/livemq8Z0CNJjP0
Join my Gaming Channel: www.youtube.com/@multi4g
Tweet: twitter.com/mickeyxfriedman/status/1666518047203401736
app.flair.ai/
Bottle Image: www.pexels.com/photo/close-up-shot-of-a-bottle-of-whiskey-11271794/
civitai.com/models/4201/realistic-vision-v20
This is really good. I was trying different ways to create some ad pictures, but your workflow is spot on.
I use inpainting with the full picture so that the AI uses the image as a reference for lighting, and I use an inpainting model.
Glad it was helpful!
Inpainting + ControlNet for great results! Nice work Olivio :) All the best with your gaming channel :D
Tremendous workflow!! Thank you for sharing 🎉
wow. Thanks for this wonderful video.
Great idea. I just tried it and combined it with your latest video on creating SoftEdge maps for composition purposes, and placed a bottle on a table (composition-wise it's not optimal, since it is placed in the middle, but it works):
- created the bottle image / mask image as described
- created a table via txt2img
- created a SoftEdge map with the table
- set up a Photoshop file for the map / positioning at 1024×768
- put the table in there along with the bottle (just for positioning it right on the table) - unfortunately the table was too small, so I did some Generative Fill magic to enlarge it and fill the whole space (works even with the black/white SoftEdge map)
- exported the map file and used it as a second ControlNet with SoftEdge for positioning
A few more steps, but with just two images and some AI magic, you get really cool results.
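For anyone who wants to script that multi-ControlNet step rather than clicking through the UI, here is a minimal sketch against the A1111 web API. It assumes the webui is running locally with --api and the ControlNet extension with two units enabled; the model names, file paths, and prompt are placeholders, not the exact ones from the video.

```python
import base64
import requests

def b64(path):
    # Read an image file and base64-encode it for the API.
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "prompt": "bottle on a wooden table, product photography, soft light",
    "width": 1024,
    "height": 768,
    "steps": 25,
    "alwayson_scripts": {
        "ControlNet": {
            "args": [
                {   # Unit 1: Canny on the product to lock its outline.
                    "input_image": b64("bottle.png"),  # newer extension builds call this key "image"
                    "module": "canny",
                    "model": "control_v11p_sd15_canny",  # placeholder; list yours via GET /controlnet/model_list
                    "weight": 1.0,
                },
                {   # Unit 2: the exported black/white map for positioning.
                    "input_image": b64("softedge_map.png"),
                    "module": "none",  # the map is already an edge image, so skip preprocessing
                    "model": "control_v11p_sd15_softedge",  # placeholder
                    "weight": 0.8,
                },
            ]
        }
    },
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
r.raise_for_status()
with open("result.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```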
I used to wonder how to do exactly this. Great guide! Thanks
I love your videos bro. Thanks for being awesome 🤝
I am having some trouble rendering the final image and matching the perspective of my object with the environment around it. This is the part that you are not showing (unfortunately), and I'd love to see how you manage to end up with such a nice pairing of the object with the background at the end. Thank you!!!! 🤠
Olivio thank you very much for your content. You are my Stable Diffusion guru.
The technique in another video to use img2img to upscale and add more details to the same picture is gold.
Awesome vid Olivio, thanks!
you are welcome :)
I would like to ask how to make the product integrate better into the environment, such as the light and shadow on the product, caustics and so on. This has troubled me for a long time.
Hi Olivio!
Do you have a technique (or have you searched for one) to also change the base object's reflections and speculars to fit the background? Maybe a 2nd pass with img2img or inpainting...
Imagine the product is a water bottle, for example: it would have some of the background showing through the transparency, ideally deformed, to be perfectly realistic...
A bit like the way the bottle's reflection on the floor in your examples integrates the object nicely, but this time inside the masked area... The challenge here is of course to not change the product much (low denoising etc...)
Can you make a video on product photography with the talent interacting with the product, such as a bike or a bottle?
It may be a challenge, but a worthy one xD
This is exactly what I am looking for.
Latest cool stuff as always!
thank you
very informative, thanks
After you've generated a background that works, how do you then get the masked object to take on the colours/lighting of the background? Would I have to put it into img2img and set a denoise of 0.4+? Maybe restrict the object from changing too much with a ControlNet?
Thank you for that video! I was really looking for something like that.
Great tutorial! But how do I influence the lighting? For example, if the lighting of my product does not match the lighting of the background, I need to change the product's lighting without destroying the picture.
I'm also looking for a way to handle this. I've watched several videos where they replace the environment by inpainting, but don't address the fact that the lighting is now inconsistent between the subject and environment. In the example with the bottle on the beach, I was pleased to see that it rendered the bottle's shadow on the sand, but the lighting on the bottle itself never changes to reflect the new environment. I've heard people suggest running the final composite through img2img with a low denoise value to "blend" the image together, but I haven't experimented with that yet so I don't know if that can remedy serious lighting inconsistencies.
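That low-denoise blending pass is quick to experiment with through the A1111 API. A minimal sketch, assuming a local webui started with --api; the file names and the 0.25 denoise value are illustrative starting points, not settings from the video.

```python
import base64
import requests

with open("final_composite.png", "rb") as f:
    composite = base64.b64encode(f.read()).decode()

payload = {
    "init_images": [composite],
    "prompt": "bottle on a beach at sunset, product photography",
    "denoising_strength": 0.25,  # low: nudges colour/light without redrawing the product
    "steps": 20,
    "width": 1024,
    "height": 768,
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
r.raise_for_status()
with open("blended.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```

In practice, values around 0.2-0.3 tend to shift tone and lighting while keeping shapes, and label text usually starts to degrade somewhere above 0.4, so it is worth sweeping the value.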
Dear Olivio, could you tell me where and how you got the noise level settings in your Stable Diffusion? I installed it according to your guide, but it does not have such settings. Maybe it's some kind of extension? I hope answering won't take too much of your time.
Thank you :)
Ever since the inpainting model came out, I've been wondering when it will be possible to just click on an object and create a mask of it inside A1111, instead of having to use a brush or go to a third-party app like you did. Is that really that hard to implement? Can the ControlNet Segment model also be used for that?
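Click-to-mask does exist outside A1111: Meta's Segment Anything takes a single point and returns a mask, and the sd-webui-segment-anything extension wires it into the UI. Below is a minimal sketch with the standalone library; the checkpoint path and click coordinates are placeholders.

```python
# pip install segment-anything opencv-python
# Checkpoint: sam_vit_b_01ec64.pth from the Segment Anything repo.
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("product.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# One positive "click" on the object at pixel (x, y).
masks, scores, _ = predictor.predict(
    point_coords=np.array([[512, 384]]),
    point_labels=np.array([1]),  # 1 = foreground point
    multimask_output=False,
)
cv2.imwrite("mask.png", masks[0].astype(np.uint8) * 255)
```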
How do I get rid of the white outline around the products?
Classic Stable Diffusion - so much work, and in the end you get a goofy low-res image you could have created in 2 minutes in Photoshop.
Do you have a ComfyUI workflow to do something similar?
Your workflow gives better results than Flair AI, since their backgrounds are almost completely unrelated to the subject in terms of lighting. I tried Canny for labels with typography and I got terrible glitchy letters - what were your settings on the whisky bottle? This looks like a holy grail scenario for AI product photography, as most labels contain at least some writing (unlike the Starbucks example).
The whiskey bottle isn't rendered by the AI at all. That's why I use the mask, so it renders everything but the bottle.
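For reference, the same keep-the-product trick can be driven through the A1111 API. A rough sketch, assuming a local webui with --api; the fill mode and denoise values here are illustrative assumptions, not necessarily the exact settings used in the video.

```python
import base64
import requests

def b64(path):
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "init_images": [b64("bottle_on_canvas.png")],
    "mask": b64("bottle_mask.png"),  # white painted over the bottle
    "inpainting_mask_invert": 1,     # "Inpaint not masked": repaint everything except the bottle
    "inpainting_fill": 2,            # "latent noise", so the new background is generated fresh
    "denoising_strength": 1.0,
    "prompt": "bottle on a beach at sunset, golden hour, product photography",
    "width": 1024,
    "height": 768,
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
r.raise_for_status()
with open("inpainted.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```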
Any tips on getting the item to match the scene better? Grabbing a random item off the web and bringing it into Photoshop with the mask, it looks very stuck onto the scene as opposed to being part of it.
I have the same problem... Maybe it's the different model I am using, but his products look like they are part of the picture. Mine look like I just cropped them into the picture haha
When are you going to talk about Roop deepfakes? BRAVO
Thanks.
You're welcome
I've been trying this out, but it will only give me a result of the image I uploaded to inpaint. It completely ignores my prompt. I even tried turning off the ControlNet and it still just gives me the object. All my settings are the same as yours. I don't get it.
Great tutorials as always. Do you have a tutorial on how to install AUTOMATIC1111? Thanks
Can you do the same in ComfyUI, please?
🎉
🔥
Let's play some games Olivio! 🎮🕹🙌
If Reddit goes the way of Digg later this month, where's everyone going for their SD stuff?
Can this run on an i7 with 32 GB RAM and 4 GB VRAM???
Why do you use Canny instead of Reference Only?
Good point. I need to cover that too.
I got a "no image match" error, please help me.
It kicks me off the page ☹
I know how to implement these, including the lighting and reflected light.
Great stuff. I tried just giving backgrounds to characters I created that I liked, and it would morph bits of them outside of themselves. Now I know how to stop that.
Any time I try to use ADetailer and ControlNets, it errors out. I think it's memory, but even if I drop the requested image size, no love. Anyone else have this problem?
did you get a solution to this?
@@mikelaing8001 I did not. But I haven't played around with it either. Let me know if you figure it out. (Although finding an old comment is basically impossible. 🙂)
@@thanksfernuthin will do!
@@thanksfernuthin just needed a new checkpoint. I'd not downloaded a photorealistic one and was just using what was preloaed. It's fine for me now
@@mikelaing8001 Which checkpoint are you using? Oh, and how much VRAM do you have? I have 12GB.
CC off?
seems to work now :)
This is great but can you explain how to make 'masked' versions of images quickly?
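One quick option (an aside, not what the video uses) is the rembg library, which runs a background-removal model and can hand back the black/white matte directly. A minimal sketch:

```python
# pip install rembg pillow  (rembg downloads its model on first run)
from PIL import Image
from rembg import remove

src = Image.open("product.jpg")

# only_mask=True returns the black/white matte instead of the cut-out subject.
mask = remove(src, only_mask=True)
mask.save("product_mask.png")

# The cut-out itself, if you also want the product on a transparent background.
cutout = remove(src)
cutout.save("product_cutout.png")
```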
How about we just use Generative Fill in Photoshop? It seems we'd save more time.
Please, I really want to hear your opinion. I don't have any intention to offend you. I admire this video, but I just want to know whether we can find a better way.
It looks bad with lighting that doesn't match
My Playground AI account was deactivated today... I'm sad... I feel as if I pushed Stable Diffusion 1.5 further than it was expected to go (LoRA-free)... Completely photorealistic, impossible, apocalyptic ballerina zombie bikini photo shoots... I still haven't seen anything as graphic or realistic as what I was able to generate. It was teetering on obscene, but I made it a mission to have an insanely graphic image produced from a very PG, family-friendly prompt and seed. I do know the difference between my prompts and the majority of others'. I want to share it, but the AI art higher-ups obviously don't want the masses to know the hack a couple of artists and I have been playing with for the last week and a half. I think I was the first of us to be kicked off. I hope they'll let me back on, just to document my prompt evolution... Does anyone know if they reactivate your account? I'm on Mac and have to figure out how to run A1111 and do this locally without a GPU... Any suggestions?
No idea about them. Why would they deactivate? You can’t run it locally?
@@alecubudulecu The email from Playground AI said my work "teetered on obscenity". I began running it locally. I have a Mac Studio M1 Max, and I am more than pleased with its performance with A1111. Since then, Playground AI has reinstated my secondary account, as long as I keep it family-friendly. I don't have access to the hundred or so awesome apocalyptic zombie ballerina bikini photos... but hey, I'm still in the game.
Is LyCORIS the same as LoRA?