Transform Your Images: Adding Exciting Characters with Stable Diffusion and Inpainting

  • Published: 28 Jun 2023
  • In this captivating video, join me as I reveal the secrets of adding more people to your images using the power of Stable Diffusion and simple inpainting techniques. Watch as boring images come to life with the addition of exciting characters, giving your creations a whole new level of creativity and storytelling. Don't miss out on this incredible opportunity to elevate your artwork! 🎨✨
    🛒 Shop Arcane Shadows: shop.xerophayze.com
    🔔 Subscribe to our YouTube channel: video.xerophayze.com
    🌐 Explore our portfolio: portfolio.xerophayze.com
    📱 Follow us on Facebook: www.xerophayze.com
  • Hobbies

Comments • 61

  • @Shabazza84
    @Shabazza84 6 months ago +1

    Just a little tip for people:
    When you have such a clean 1-point perspective shot, it's super easy to get the size of the main person in the foreground right.
    Take the head/eye level of the other people in the background, put an (imagined) horizontal line at their eye level,
    and then paint the mask for the foreground character so that character's head/eyes roughly end up on that imagined horizontal line.
    Then the character will have the exact height of the other people in perspective.
    You can of course make that person smaller or taller from there, but with this you have the 1:1 height and can work from there.
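
    For anyone who wants to turn that eye-level trick into numbers: with the camera at standing eye height, the horizon passes through every standing figure's eyes regardless of distance. A minimal Python sketch (the function name and the 0.93 eye-height ratio are illustrative assumptions, not from the comment):

      def figure_height_px(horizon_y, feet_y, eye_ratio=0.93):
          # horizon_y: y of the background people's eye level (the horizon)
          # feet_y:    y where the new figure's feet touch the ground
          # eye_ratio: eye height as a fraction of total body height (~0.93 for adults)
          # Image y grows downward, so feet_y > horizon_y for a standing figure.
          return (feet_y - horizon_y) / eye_ratio

      # Example: horizon at y=210, new figure's feet at y=660
      print(figure_height_px(210, 660))  # ~484 px tall, eyes landing on y=210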

  • @HypnotizeInstantly
    @HypnotizeInstantly 11 months ago +2

    Thank you for listening to my previous comment! Releasing this on June 30th is a birthday gift from you!

  • @MarvelSanya
    @MarvelSanya 2 months ago +1

    Why is it that every time I try to do everything as in the video, instead of the whole character I describe in the prompt, I only get pieces of it? It's as if it doesn't fit into the area I selected in inpaint. On top of that, the background around the character doesn't match the original image...

    • @AIchemywithXerophayze-jt1gg
      @AIchemywithXerophayze-jt1gg  2 months ago

      My guess is that you're not using an inpainting model. A regular model sometimes has a very hard time matching the surrounding environment, as well as centering the subject and blending it in.

    • @MarvelSanya
      @MarvelSanya 2 months ago

      @@AIchemywithXerophayze-jt1gg you are right, I tried using regular models.

  • @jean-baptisteclerc1586
    @jean-baptisteclerc1586 9 months ago +3

    Any tutorial on adding realistic AI people to an existing 3D-rendered image?

    • @AIchemywithXerophayze-jt1gg
      @AIchemywithXerophayze-jt1gg  9 months ago +1

      This could probably be done just through the prompt, but you would most likely need inpainting. Render the scene as a whole, including the 3D-rendered people. Then go into inpainting, mask out the people, switch to a checkpoint/model geared toward realism, and change the prompt to something photographic or realistic.
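
      For anyone who prefers to script that workflow instead of using the A1111 UI, here is a minimal sketch with the Hugging Face diffusers inpainting pipeline (the model id, file names, and prompt are placeholder assumptions; the video itself works in A1111):

        import torch
        from PIL import Image
        from diffusers import StableDiffusionInpaintPipeline

        # Any dedicated inpainting checkpoint works here; this id is one example.
        pipe = StableDiffusionInpaintPipeline.from_pretrained(
            "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
        ).to("cuda")

        render = Image.open("render.png").convert("RGB")     # the full 3D render
        mask = Image.open("people_mask.png").convert("RGB")  # white = regions to repaint

        result = pipe(
            prompt="photograph of people, photorealistic, natural skin, detailed",
            image=render,
            mask_image=mask,
            num_inference_steps=40,
        ).images[0]
        result.save("render_with_realistic_people.png")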

    • @1986xuan
      @1986xuan 8 months ago +2

      @@AIchemywithXerophayze-jt1gg Do you think you could make a tutorial on that? Dealing with realistic people in an existing scene/setup? That would make an amazing project for photo content creation for local businesses like coffee shops, gyms, restaurants...

  • @cce7087
    @cce7087 8 months ago +1

    If I'm interested in creating a scene where I add multiple additional characters but want them to be specific (i.e. from a seed), is this possible, and how? I want to create a number of images with multiple characters across various scene changes. I would prefer not to learn Latent Couple, and is it called ControlNet? You mentioned it, I can't recall off the top of my head. Hoping there's an easier way to do what I want!

    • @AIchemywithXerophayze-jt1gg
      @AIchemywithXerophayze-jt1gg  8 months ago

      Working with multiple characters that you want to keep consistent between images is very difficult, even when using ControlNet. It is possible, but honestly you should just join our Discord and ask some of the users there, because I think some of them have found ways of combining Roop and ControlNet.
      discord.gg/HWksPdT6

  • @HypnotizeInstantly
    @HypnotizeInstantly 11 months ago +3

    What is the difference between using an inpainting model vs a regular model? I'm getting no difference between them. Also, whenever I do inpainting, I enable ControlNet with inpaint_global_harmonious as the preprocessor. Doing so blends the whole masked generation to fit the whole picture more harmoniously. You can also crank up the denoise strength and it won't produce any weird distortions in the image.
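
    For reference, that ControlNet inpaint setup can also be driven through the A1111 API once the webui is started with --api. A rough sketch (the module and model names below are assumptions and must match whatever your sd-webui-controlnet install actually lists):

      import base64
      import requests

      def b64(path):
          # The img2img endpoint expects base64-encoded images
          with open(path, "rb") as f:
              return base64.b64encode(f.read()).decode()

      payload = {
          "init_images": [b64("scene.png")],
          "mask": b64("mask.png"),
          "prompt": "a street musician playing guitar",
          "denoising_strength": 0.9,  # global_harmonious tolerates high denoise
          "alwayson_scripts": {
              "controlnet": {
                  "args": [{
                      "module": "inpaint_global_harmonious",
                      "model": "control_v11p_sd15_inpaint",
                  }]
              }
          },
      }
      r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
      r.raise_for_status()  # response JSON carries base64 images under "images"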

    • @AIchemywithXerophayze-jt1gg
      @AIchemywithXerophayze-jt1gg  11 months ago +1

      Not using an inpainting model can cause some weird effects. The most noticeable is that it doesn't blend the edges correctly; it often renders a different image in front of the original while keeping the original visible behind the new one.

    • @Shabazza84
      @Shabazza84 6 months ago

      @@AIchemywithXerophayze-jt1gg You can partially mitigate that by using inpaint "whole picture" instead of "only masked". The blending will be better because more context is used.
      But this can of course lead to issues when trying to inpaint detailed/small sections.

  • @lilillllii246
    @lilillllii246 2 months ago +1

    Thanks. A bit of a different question: is there a way to naturally composite character image files I want into an existing background image file, rather than describing them with text?

    • @AIchemywithXerophayze-jt1gg
      @AIchemywithXerophayze-jt1gg  2 months ago +1

      If I understand correctly, you want to use an existing image as the background and add characters to it. Yes, this is absolutely possible. Join my Discord and we can help.
      discord.com/invite/EQMyYbtw

  • @Aristocle
    @Aristocle 8 months ago +1

    I wanted to use Latent Couple, but in the latest version of automatic1111 it doesn't seem to work (using inpainting-based SD 1.5 models). Is it possible to add entire settings with inpainting, such as fully furnished rooms or natural landscape scenery?

  • @377omkar
    @377omkar 1 month ago +1

    Can you share the prompt that you gave to ChatGPT for generating prompts?

    • @AIchemywithXerophayze-jt1gg
      @AIchemywithXerophayze-jt1gg  1 month ago

      I've changed it to an online prompt generator. It's a subscription-based service now, but I offer a free version here: shop.xerophayze.com/xerogenlite

  • @ricardoborgesba
    @ricardoborgesba 5 months ago +1

    I would like to place an exact PNG inside the scene, or copy it as closely as possible while blending it in. Is that possible?

    • @AIchemywithXerophayze-jt1gg
      @AIchemywithXerophayze-jt1gg  5 months ago

      In a way, yes, I think you could do that: use Photoshop to copy and paste the image into the scene, then use inpainting to blend the edges better.
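
      If you'd rather skip Photoshop for the paste step, the same compositing can be done with Pillow before handing the result to inpainting. A minimal sketch (file names and the paste position are made up for illustration):

        from PIL import Image

        scene = Image.open("scene.png").convert("RGBA")
        person = Image.open("person.png").convert("RGBA")  # PNG with transparent background

        # Paste the cutout at a chosen position, respecting its alpha channel
        scene.alpha_composite(person, dest=(420, 310))
        scene.convert("RGB").save("composite.png")

        # Then inpaint a thin mask along the pasted edges at low denoise
        # (roughly 0.3-0.5) so SD blends only the seam, not the subject.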

  • @bobtahar
    @bobtahar 11 months ago +1

    May I know what the extension for zooming the inpaint image is called?

    • @AIchemywithXerophayze-jt1gg
      @AIchemywithXerophayze-jt1gg  11 months ago

      It's built into A1111; it's called Canvas Zoom and Pan. Check the top-left corner of the inpaint window: you should see a little "i" that, if you hover over it, gives you some info.

  • @AIMusicExperiment
    @AIMusicExperiment 11 months ago +2

    As usual, your tutorial is helpful! Thanks for all you do. My guess about the problem you were having is that it comes down to how the AI interprets the prompt. Rather than hearing you say that there is a local artist in the picture, it thinks the image was drawn by a local artist, as if you had written "Masterpiece by Rembrandt." That is my thought.

    • @AIchemywithXerophayze-jt1gg
      @AIchemywithXerophayze-jt1gg  11 months ago

      It's possible. I think in this instance it was a combination of the description of the artist in the prompt and what the fruit stand looked like. The fruit stand just looked too much like the artist description, so the AI didn't see a need to change much.

  • @tonisins
    @tonisins 8 months ago +1

    Hey, would you mind sharing the ChatGPT prompt for creating SD prompts?

    • @AIchemywithXerophayze-jt1gg
      @AIchemywithXerophayze-jt1gg  8 months ago

      It's something I sell in my store: https://shop.xerophayze.com
      Join our Discord. I'm working on an extension for automatic1111 that will interface with GPT better than others. It will be free, and it works great with my prompt generator.
      discord.gg/mTEGXWMw

  • @DigitalAscensionArt
    @DigitalAscensionArt 11 months ago

    Thank you for always uploading great content on advanced techniques. How do I get into your Discord?

  • @Yeeeeeehaw
    @Yeeeeeehaw 8 months ago +2

    Great video

  • @dthSinthoras
    @dthSinthoras 11 months ago

    Do you also have a workflow for getting something unnatural onto people? Like blue skin, without making half the image blue; or giving someone cat eyes without transforming them into something with cat ears and all that; or striped hair in 2 specific colors, etc. To my mind, these kinds of things without color bleeding are the hardest to achieve if you have something fairly specific in mind.

    • @AIchemywithXerophayze-jt1gg
      @AIchemywithXerophayze-jt1gg  11 months ago +1

      So for tiny details like eyes, definitely use inpainting; for whole-image changes without bleeding, you would want to use regional prompting. I was going to do a video on that a while ago and completely forgot. I think I'll do that along with the micro detailing.

    • @dthSinthoras
      @dthSinthoras 11 months ago

      @@AIchemywithXerophayze-jt1gg Looking forward to seeing that then :)

    • @AIchemywithXerophayze-jt1gg
      @AIchemywithXerophayze-jt1gg  10 months ago +1

      Using the BREAK command might actually help with this. I've been messing around with it and have it built into my prompt generator now. I'll try what you talked about.
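
      For context: BREAK splits an A1111 prompt into separate 75-token chunks, and paired with the Regional Prompter extension each chunk can be pinned to its own region so colors don't bleed. A hypothetical two-region example (the region setup and prompt text are illustrative, not from the video):

        portrait of a woman, studio lighting BREAK
        blue skin, glossy texture BREAK
        striped black and crimson hair

      With Regional Prompter in Columns mode, divide ratio 1,1, and "Use base prompt" enabled, the first chunk would apply to the whole image and the other two would map to the left and right regions.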

  • @MrSongib
    @MrSongib 11 months ago +2

    13:55 If you want to introduce a new concept into the scene, always go for high sampling steps, or just tick the box "With img2img, do exactly the amount of steps the slider specifies (normally you'd do less with less denoising)" so it runs the exact step count. In img2img the effective step count is "sampling steps * denoising strength", so 39 * 0.95 = 37 sampling steps in this case.
    ruclips.net/video/V1aaB7UgP7M/видео.html
    Also consider masking the shadow area as well, and use "Fill" if "original" is being a bit stubborn (26:00).
    Note that the "only masked padding" pixels are taken inside the generation width*height, not outside of it, so they actually leave less resolution for the subject; the effect is similar to placing a dot outside the main mask area to make it read the surrounding context.
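
    A quick check of that arithmetic as described (a minimal Python sketch; A1111's exact rounding may differ):

      # img2img normally runs only a fraction of the requested steps
      steps = 39
      denoising_strength = 0.95
      effective_steps = int(steps * denoising_strength)
      print(effective_steps)  # 37 of the 39 requested steps actually run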

    • @AIchemywithXerophayze-jt1gg
      @AIchemywithXerophayze-jt1gg  11 months ago

      I didn't know the formula; I just knew that it would take more steps the higher the denoise strength. I'll try the exact number of steps. I use the technique of putting a mask dot somewhere else in the image when I want to remove something like a watermark or other object. Thanks for the tips.

    • @Yeeeeeehaw
      @Yeeeeeehaw 8 months ago

      Can you kindly explain the difference between using "fill" and "latent noise"?

  • @Rasukix
    @Rasukix 11 months ago +2

    I feel like it would be more efficient to swap into Photopea, draw a stick man, and inpaint.

    • @AIchemywithXerophayze-jt1gg
      @AIchemywithXerophayze-jt1gg  11 months ago +2

      It very well may be, especially if the area where you're trying to put someone doesn't have a lot of pixel variance, like a solid color.

  • @BabylonBaller
    @BabylonBaller 11 months ago +1

    Great tutorials as always. It's just very difficult to hear you on a mobile phone; it seems you're recording with a built-in laptop mic, so the volume is super low.

    • @AIchemywithXerophayze-jt1gg
      @AIchemywithXerophayze-jt1gg  11 months ago

      Thanks for pointing this out. I have a desktop computer and a nice mic, but I also have a webcam, and I think my recording audio somehow got switched to the webcam instead of my mic.

    • @BabylonBaller
      @BabylonBaller 11 months ago

      @@AIchemywithXerophayze-jt1gg Ah yes, it does sound like you're far away from the mic, which has happened to me when the C920 mic switches back to default.

    • @davidchi501
      @davidchi501 7 months ago

      @@AIchemywithXerophayze-jt1gg I'd recommend using Adobe Enhance to make your mic audio sound clearer.

  • @baraka99
    @baraka99 10 months ago +1

    Wish you'd streamline your videos to 22 minutes or so, reaching the same final result and still explaining the process.

  • @xehanort3623
    @xehanort3623 11 months ago

    I can't add people; it just spits out the same image.

    • @AIchemywithXerophayze-jt1gg
      @AIchemywithXerophayze-jt1gg  11 months ago

      It's not the easiest of processes. It takes a lot of practice to get it right. A lot depends on the pixels you're masking out: if they're extremely uniform, like a solid color, that's the most difficult to change. You may need to switch to "fill" instead of "original" to get it to put something there, anything, then switch back to "original" and try again.

  • @eugeniakenne2865
    @eugeniakenne2865 10 months ago

    "PromoSM" 😱

  • @Officemeds
    @Officemeds 11 months ago

    The pain ohvof the ain? Helpe no ono! Why gof go why!!!! Blond is everywhere someone call pp1!!! Shlock is the sound his head make stabing

  • @octopuss3893
    @octopuss3893 9 months ago

    bla bla bla bla...................

  • @relaxation_ambience
    @relaxation_ambience 11 months ago

    And again: your whole tutorial would easily fit in 10 minutes. As it is, we have to watch all your imperfections and unsuccessful experiments and wait for pictures to render. Of course it's possible to skip ahead manually, but that's annoying. Everything you did would be totally acceptable in a LIVE stream, but not in what you provide now. Maybe this is a reason why your subscriber list grows so slowly.

    • @AIchemywithXerophayze-jt1gg
      @AIchemywithXerophayze-jt1gg  11 months ago

      That's interesting that you think that. The only things I cut out of my videos are when I'm clearing my throat or have to cough, or if I run into something seriously wrong that will take me a while to fix. In this video, the only things cut out are the coughs and throat-clearing. I understand your concern, and yeah, a lot of my tutorials are going to be based around simple concepts, but a lot of people out there are still trying to figure out the simple concepts, and I'm more than happy to provide that.

    • @relaxation_ambience
      @relaxation_ambience 11 months ago

      @@AIchemywithXerophayze-jt1gg Thank you for the answer. My only point was what I noticed: it could be shortened a lot. I like a slow pace (for example, the YouTuber Olivio Sarikas), but I found myself skipping 5-10 seconds forward a lot and not missing any information.

    • @dashx3465
      @dashx3465 10 months ago +2

      I seriously disagree with this take. I think showing the problems he runs into and his thought process in fixing them adds more value to the tutorial and imparts more knowledge. If you want to skip through the troubleshooting, that's fine, but I think showcasing it is better.

    • @relaxation_ambience
      @relaxation_ambience 10 months ago

      @@dashx3465 What you describe usually happens in a live stream, where you experiment and search for ways to fix problems. In a tutorial you get a polished, finished product that only mentions possible problems and how to overcome them. His tutorial seems like a raw recorded live stream, so he could put it in the "live streams" category, and then it would be clear that we'll see lots of experimentation and problem-solving.

    • @Yeeeeeehaw
      @Yeeeeeehaw 8 months ago

      @@dashx3465 I second this.
      I learned a lot from those mistakes in the video.