Make AI Ads in Flair.AI and also A1111 - Consistent Objects

  • Published: 28 Nov 2024

Comments • 67

  • @OlivioSarikas
    @OlivioSarikas  a year ago +3

    #### Links from the Video ####
    Join my Map Bashing Live Stream: ruclips.net/user/livemq8Z0CNJjP0
    Join my Gaming Channel: www.youtube.com/@multi4g
    Tweet: twitter.com/mickeyxfriedman/status/1666518047203401736
    app.flair.ai/
    Bottle Image: www.pexels.com/photo/close-up-shot-of-a-bottle-of-whiskey-11271794/
    civitai.com/models/4201/realistic-vision-v20

  • @lithium534
    @lithium534 a year ago +7

    This is really good. I was trying different ways to create ad pictures, but your workflow is spot on.
    I use inpainting on the full picture so that the AI uses the image as a reference for lighting, and I use an inpaint model.

  • @joeyc666
    @joeyc666 a year ago

    Inpainting + ControlNet for great results! Nice work Olivio :) All the best with your gaming channel :D

  • @jason-sk9oi
    @jason-sk9oi a year ago +1

    Tremendous workflow!! Thank you for sharing 🎉

  • @vrynstudios
    @vrynstudios 5 months ago

    Wow. Thanks for this wonderful video.

  • @steffen_r
    @steffen_r a year ago +1

    Great idea. I just tried it and combined it with your latest video on creating SoftEdge maps for composition purposes, and placed a bottle on a table (composition-wise it's not optimal, since it's placed in the middle, but it works):
    - created the bottle image/mask image as described
    - created a table via text2image
    - created a SoftEdge map from the table
    - set up a Photoshop file for the map/positioning, 1024x768
    - put the table in there, plus the bottle (just for positioning it right on the table) - unfortunately the table was too small, so I did some Generative Fill magic to enlarge it and fill the whole space (it works even with the black/white SoftEdge map)
    - exported the map file and used it as a second ControlNet with SoftEdge for positioning
    A few more steps, but with just two images and some AI magic, you get really cool results. (A rough script of this two-ControlNet setup is sketched below.)
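
For readers who want to script this two-ControlNet inpainting setup rather than click through the UI, here is a minimal sketch against the A1111 web UI API (launched with --api). The file names are hypothetical, the ControlNet model names are placeholders for whatever you have installed, and payload keys can vary slightly between ControlNet extension versions.

```python
import base64
import requests

def b64(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "prompt": "bottle of whiskey on a wooden table, product photo, studio lighting",
    "init_images": [b64("bottle_positioned.png")],  # bottle composited in place
    "mask": b64("bottle_mask.png"),                 # white where the bottle is
    "inpainting_mask_invert": 1,                    # render everything BUT the bottle
    "denoising_strength": 0.75,
    "width": 1024,
    "height": 768,
    "alwayson_scripts": {
        "controlnet": {
            "args": [
                {   # unit 1: Canny on the bottle keeps its silhouette crisp
                    "input_image": b64("bottle_positioned.png"),
                    "module": "canny",
                    "model": "control_v11p_sd15_canny",     # adjust to installed name
                    "weight": 1.0,
                },
                {   # unit 2: the hand-built SoftEdge map drives the composition
                    "input_image": b64("softedge_map.png"),
                    "module": "none",   # the map is already a SoftEdge-style image
                    "model": "control_v11p_sd15_softedge",  # adjust to installed name
                    "weight": 0.8,
                },
            ]
        }
    },
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
r.raise_for_status()
with open("result.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```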

  • @protasbox
    @protasbox a year ago +1

    I'd been thinking about how to do the same thing. Great guide! Thanks

  • @jragon355
    @jragon355 a year ago

    I love your videos bro. Thanks for being awesome 🤝

  • @tsmakrakis32
    @tsmakrakis32 a year ago +1

    I'm having some trouble rendering the final image and matching the perspective of my object with the environment around it. This is the part that you don't show (unfortunately), and I'd love to see how you manage to end up with such a nice pairing of the object with the background at the end. Thank you!!!!🤠

  • @Romanetics
    @Romanetics a year ago

    Olivio thank you very much for your content. You are my Stable Diffusion guru.
    The technique in another video to use img2img to upscale and add more details to the same picture is gold.

  • @ddadassadsadasdasdsd
    @ddadassadsadasdasdsd a year ago +1

    Awesome vid Olivio, thanks!

  • @追光者-j4o
    @追光者-j4o a year ago +3

    I would like to ask how to make the product integrate better into the environment - things like the product's light and shadow, caustics, and so on. This has troubled me for a long time.

  • @zephilde
    @zephilde a year ago +2

    Hi Olivio!
    Do you have a technique (or have you searched for one) to also change the base object's reflections and speculars to fit the background? Maybe a 2nd pass with img2img or inpainting, though...
    Imagine the product is a water bottle, for example: it would have some of the background showing through its transparency, ideally deformed, to be perfectly realistic...
    A bit like the way, in your examples, the bottle's reflection on the floor integrates the object nicely - but this time inside the masked area... The challenge here is not to change the product much, of course (low denoising, etc.)...

  • @jhPampoo
    @jhPampoo a year ago +1

    Can you make a video on product photography with talent interacting with the product, such as a bike or a bottle...
    It may be a challenge, but a worthy one xD

  • @sabotage3d
    @sabotage3d a year ago

    Latest cool stuff as always!

  • @Hexrun
    @Hexrun a year ago

    Very informative, thanks

  • @gohan2091
    @gohan2091 7 months ago

    After you've generated a background that works, how do you then get the masked object to take on the colours/lighting of the background? Would I have to put it into img2img and set a denoise of 0.4+? Maybe restrict the object from changing too much with a ControlNet?
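
One deterministic way to nudge the product toward the background's colours before (or instead of) that low-denoise img2img pass is a masked histogram match. This is a sketch, not the video's method; the file names are hypothetical, and it assumes a white-on-black product mask.

```python
import numpy as np
from PIL import Image
from skimage.exposure import match_histograms  # pip install scikit-image

# Hypothetical inputs: the pasted-together ad and the product mask.
img  = np.asarray(Image.open("composite.png").convert("RGB"), dtype=np.float32)
mask = np.asarray(Image.open("product_mask.png").convert("L")) > 127  # True = product

# Match the whole frame's colour statistics to the background-only pixels.
bg_pixels = img[~mask]                                    # (N, 3) array
matched = match_histograms(img, bg_pixels[None, ...], channel_axis=-1)

# Blend the matched version back in, but only inside the product mask.
strength = 0.4                        # how hard to push the product's tones
blend = img * (1 - strength) + matched * strength
out = np.where(mask[..., None], blend, img)
Image.fromarray(out.clip(0, 255).astype(np.uint8)).save("composite_toned.png")
```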

  • @gerritfellisch4309
    @gerritfellisch4309 a year ago

    Thank you for that video! I was really looking for something like that.

  • @demischi3018
    @demischi3018 a year ago +2

    Great tutorial! But how do I influence the lighting? For example, if the lighting on my product doesn't match the lighting of the background, I need to change the product's lighting without destroying the picture.

    • @alecvallintine5435
      @alecvallintine5435 a year ago +2

      I'm also looking for a way to handle this. I've watched several videos where they replace the environment by inpainting, but don't address the fact that the lighting is now inconsistent between the subject and environment. In the example with the bottle on the beach, I was pleased to see that it rendered the bottle's shadow on the sand, but the lighting on the bottle itself never changes to reflect the new environment. I've heard people suggest running the final composite through img2img with a low denoise value to "blend" the image together, but I haven't experimented with that yet so I don't know if that can remedy serious lighting inconsistencies.

  • @ИльяДедов-и4ъ
    @ИльяДедов-и4ъ a year ago

    Dear Olivio, could you tell me where and how you got the noise level settings in your Stable Diffusion? I installed it according to your guide, but it doesn't have such settings. Maybe it's some kind of extension? I hope answering won't take too much of your time.

  • @요즘생활취미생활
    @요즘생활취미생활 a year ago

    Thank you :)

  • @diskomiks
    @diskomiks a year ago

    Ever since the inpainting model came out, I've been wondering when it will be possible to just click on an object and create a mask of it inside A1111, instead of having to use a brush or go to a third-party app like you did. Is that really that hard to implement? Could the ControlNet segmentation model also be used for that?
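
Click-to-mask is essentially what Meta's Segment Anything model does, and A1111 extensions such as sd-webui-segment-anything wrap it. A minimal standalone sketch, assuming the released ViT-B checkpoint and a hypothetical click position:

```python
import numpy as np
from PIL import Image
from segment_anything import SamPredictor, sam_model_registry  # pip install segment-anything

# Load the released ViT-B checkpoint (downloaded separately from Meta's repo).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

image = np.asarray(Image.open("product.png").convert("RGB"))
predictor.set_image(image)

# One positive "click" roughly on the object; the coordinates are hypothetical.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[512, 384]]),  # (x, y) of the click
    point_labels=np.array([1]),           # 1 = foreground point
    multimask_output=True,
)
best = masks[np.argmax(scores)]           # boolean (H, W) mask
Image.fromarray(best.astype(np.uint8) * 255).save("mask.png")
```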

  • @chuckynorris616
    @chuckynorris616 11 months ago

    How do you get rid of the white outline around the products?

  • @f4ust85
    @f4ust85 a year ago +1

    Classic Stable Diffusion - so much work, and in the end you get a goofy low-res image you could have created in 2 minutes in Photoshop.

  • @NickDrake-f1t
    @NickDrake-f1t a year ago

    Do you have a ComfyUI workflow to do something similar?

  • @diskomiks
    @diskomiks a year ago

    Your workflow gives better results than Flair AI, since their backgrounds are almost completely unrelated to the subject in terms of lighting. I tried Canny for labels with typography and got terrible glitchy letters - what were your settings on the whisky bottle? This looks like a holy-grail scenario for AI product photography, as most labels contain at least some writing (unlike the Starbucks example).

    • @OlivioSarikas
      @OlivioSarikas  a year ago +1

      The whiskey bottle isn't rendered by the AI at all. That's why I use the mask - so it renders everything but the bottle.
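
In other words, the product pixels survive untouched. If inpainting still bleeds a little at the edges, one option is to re-composite the original product over the generated background using the same mask - a minimal sketch with hypothetical file names:

```python
from PIL import Image

# Hypothetical inputs: the untouched product shot, the AI-generated
# background, and the same white-on-black mask used for inpainting.
product   = Image.open("bottle_original.png").convert("RGB")
generated = Image.open("generated.png").convert("RGB").resize(product.size)
mask      = Image.open("bottle_mask.png").convert("L").resize(product.size)

# Image.composite takes pixels from the first image wherever the mask is
# white, so the label and its typography stay pixel-perfect.
final = Image.composite(product, generated, mask)
final.save("final_ad.png")
```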

  • @therookiesplaybook
    @therookiesplaybook a year ago

    Any tips on getting the item to match the scene better? Grabbing a random item off the web and bringing it into Photoshop with the mask, it looks very stuck onto the scene as opposed to being part of it.

    • @mahamadkader488
      @mahamadkader488 a year ago

      I have the same problem... maybe it's the different model I'm using, but his products look like they are part of the picture; mine look like I just cropped them into the picture hahah

  • @Archalternative
    @Archalternative a year ago +4

    When are you going to talk about Roop deepfakes? BRAVO

  • @nic-ori
    @nic-ori a year ago

    Thanks.

  • @Gh0sty.14
    @Gh0sty.14 a year ago

    I've been trying this out, but it only gives me back the image I uploaded to inpaint. It completely ignores my prompt. I even tried turning off the ControlNet, and it still just gives me the object. All my settings are the same as yours. I don't get it.

  • @brandhuetv
    @brandhuetv a year ago

    Great tutorials as always. Do you have a tutorial on how to install Automatic1111? Thanks

  • @jamit63
    @jamit63 2 months ago

    Can you do the same in ComfyUI, please?

  • @MarkDemarest
    @MarkDemarest a year ago

    🎉

  • @RealityCheckVR
    @RealityCheckVR a year ago

    Let's play some games Olivio! 🎮🕹🙌

  • @brockpenner1
    @brockpenner1 a year ago

    If Reddit goes the way of Digg later this month, where's everyone going for their SD stuff?

  • @CerebricTech
    @CerebricTech 2 months ago

    Can this run on an i7 with 32 GB RAM and 4 GB VRAM???

  • @김기선-j5t
    @김기선-j5t a year ago +1

    Why do you use Canny instead of Reference Only?

  • @fahabibfahabib2312
    @fahabibfahabib2312 a year ago

    I got an error ("no image match") - please help me

  • @TheMellado
    @TheMellado a year ago

    It kicks me off the page ☹

  • @jeffreyhao1343
    @jeffreyhao1343 a year ago

    I know how to implement these, including the light and reflected light.

  • @thanksfernuthin
    @thanksfernuthin a year ago

    Great stuff. I tried just giving backgrounds to characters I created that I liked, and it would morph bits of them outside of themselves. Now I know how to stop that.
    Any time I try to use ADetailer and ControlNets, it errors out. I think it's memory, but even if I drop the requested image size, no love. Anyone else have this problem?

    • @mikelaing8001
      @mikelaing8001 a year ago

      did you get a solution to this?

    • @thanksfernuthin
      @thanksfernuthin a year ago +1

      @@mikelaing8001 I did not. But I haven't played around with it either. Let me know if you figure it out. (Although finding an old comment is basically impossible. 🙂)

    • @mikelaing8001
      @mikelaing8001 a year ago

      @@thanksfernuthin will do!

    • @mikelaing8001
      @mikelaing8001 a year ago

      @@thanksfernuthin Just needed a new checkpoint. I hadn't downloaded a photorealistic one and was just using what was preloaded. It's fine for me now.

    • @thanksfernuthin
      @thanksfernuthin a year ago

      @@mikelaing8001 Which checkpoint are you using? Oh, and how much VRAM do you have? I have 12GB.

  • @mr.entezaee
    @mr.entezaee a year ago

    CC off?

  • @MADVILLAIN669
    @MADVILLAIN669 a year ago

    This is great, but can you explain how to make "masked" versions of images quickly?
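
One quick route (not the one shown in the video): rembg runs a background-removal model and can return the matte directly, which works well for product shots. A sketch, assuming rembg's only_mask flag and hypothetical file names:

```python
from PIL import Image
from rembg import remove  # pip install rembg

# Load a product shot and ask rembg for the alpha matte instead of a cutout.
product = Image.open("bottle.png")
mask = remove(product, only_mask=True)  # white object on black background
mask.save("bottle_mask.png")            # check the edges; results vary by subject
```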

  • @김기선-j5t
    @김기선-j5t a year ago

    How about just using Generative Fill in Photoshop? It seems that would save more time.

    • @김기선-j5t
      @김기선-j5t a year ago

      Please, I really want to hear your opinion. I don't have any intention to offend you. I admire this video of yours, but I just want to know whether we can find a better way.

  • @piotr.czechowski
    @piotr.czechowski 24 days ago

    It looks bad when the lighting doesn't match.

  • @templeofleila
    @templeofleila a year ago

    My Playground AI account was deactivated today... I'm sad... I feel as if I pushed Stable Diffusion 1.5 further than it was expected to go (LoRA-free)... completely photorealistic, impossible, apocalyptic ballerina zombie bikini photo shoots. I still haven't seen anything as graphic or realistic as I was able to generate. It was teetering on obscene, but I made it a mission to have an insanely graphic image produced from a very PG, family-friendly prompt and seed. I do know the difference between my prompts and the majority of others'. I want to share it, but the AI art higher-ups obviously don't want the masses to know the hack a couple of artists and I have been playing with for the last week and a half. I think I was the first of us to be kicked off. I hope they'll let me back on, just to document my prompt evolution... Does anyone know if they reactivate your account? I'm on a Mac and have to figure out how to run A1111 and do this locally without a GPU... Any suggestions?

    • @alecubudulecu
      @alecubudulecu a year ago

      No idea about them. Why would they deactivate it? Can't you run it locally?

    • @templeofleila
      @templeofleila a year ago

      @@alecubudulecu The email from Playground AI said my work "teetered on obscenity". I began running it locally - I have a Mac Studio M1 Max and am more than pleased with its performance in A1111. Since then, Playground AI has reinstated my secondary account, as long as I keep it family-friendly. I don't have access to the hundred or so awesome apocalyptic zombie ballerina bikini photos... but hey, I'm still in the game.

  • @srij0n316
    @srij0n316 a year ago

    Is LyCORIS the same as LoRA?