Stable Diffusion - FaceSwap and Consistent Character Tips - Part 1

  • Published: Jan 15, 2025

Comments • 111

  • @KLEEBZTECH
    @KLEEBZTECH  9 months ago +1

    If you find the video useful and would like to tip, you can buy me some electricity for all these image generations. It is greatly appreciated! ko-fi.com/kleebztech

  • @aussieexpertenglish5885
    @aussieexpertenglish5885 5 months ago +4

    There is a "character sheet" style (or something like that) among the styles. Select it and ask in the prompt for the same character from multiple perspectives, a turnaround, etc. You need to have a few goes at it, but it generally produces something okay in the end.

    • @Coeurebene1
      @Coeurebene1 3 months ago +1

      Thanks! This ended up being the best solution for me.

  • @mohanish35
    @mohanish35 10 months ago +2

    Amazing!! Your videos are a godsend.

    • @KLEEBZTECH
      @KLEEBZTECH  10 months ago

      Thank you! Always love to hear that they help someone with ideas.

  • @nikosmanolopoulos3711
    @nikosmanolopoulos3711 11 months ago +1

    Thank you a lot for your help. Could you please make a video on body consistency?

    • @KLEEBZTECH
      @KLEEBZTECH  11 months ago +1

      If I find a good way I will, but I really haven't yet. I do find that using celebrity names, like I mention for the face, also helps with the body, although not as well.

    • @eugenmalatov5470
      @eugenmalatov5470 10 months ago

      Exactly, this is important as well.

    • @eugenmalatov5470
      @eugenmalatov5470 10 months ago

      @@KLEEBZTECH that would be great! :)

  • @LoveisHell85
    @LoveisHell85 11 months ago +1

    Very nice. Thank you! I would love to see a full workflow to create consistent characters where the person is not looking at the camera.

  • @asheshsrivastava8921
    @asheshsrivastava8921 11 months ago +3

    Hey, great video. Please also make a video about swapping products / consistent products.

  • @Bamigaru
    @Bamigaru 11 months ago +1

    Super good video as always! Thanks!

  • @zoezerbrasilio2419
    @zoezerbrasilio2419 11 months ago +3

    Weak variation (Subtle) can give you the smile: keep the image reference, mix it with variation, and it will make the replacement. Use the prompt to direct the change to the mouth and eyes. I could get other expressions doing the same trick.

    • @KLEEBZTECH
      @KLEEBZTECH  11 months ago +1

      Yes, in further testing I have been able to get decent results. You do have to lower the weight and stop-at values a little, and it may not look exactly the same, but it comes very close.

  • @marcus-b4x3h
    @marcus-b4x3h 11 months ago +1

    Very well done video.

    • @KLEEBZTECH
      @KLEEBZTECH  11 months ago

      Thank you very much!

  • @MeMine-zu1mg
    @MeMine-zu1mg 11 months ago +1

    I used a 2x2 grid. I had a reference pack I bought with photos of people from all different angles. You could just as easily use a video or movie to get several angles of a person. Then I used ComfyUI to create a depth pass for the angles I wanted to use.

  • @Coeurebene1
    @Coeurebene1 3 months ago +1

    The problem I had with eye color is that if you mention eyes in the prompt, it tends to put more emphasis on them, even making them bigger/rounder (I'm using Cheyenne). Using 3 portraits as face swap inputs + 1 as an image prompt kept the eye colors without mentioning them in the prompt, which can then focus on emotions, context, or activities.

  • @dussio22
    @dussio22 11 months ago +1

    Pretty good video! Very useful tips.

  • @onlineispections
    @onlineispections 11 months ago

    Hi. Can you make a full-body texture video, to use the same characters but with a full body, like in Artflow? Thank you.

  • @StoryFBMizostory
    @StoryFBMizostory 11 months ago

    I have an issue with high RAM usage with Fooocus. I have 16GB RAM and an RTX 3060. As soon as I run the web UI my RAM usage goes to 80%; while generating an image it uses almost 100% of my RAM, but only about 18% of my VRAM.

    • @KLEEBZTECH
      @KLEEBZTECH  11 months ago +1

      Have you looked over this? Check out the section on System Swap. I have 32GB, and when I run Fooocus it seems to use about 9-10 GB of RAM, since my total usage goes up to 16GB. github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md

    • @StoryFBMizostory
      @StoryFBMizostory 11 months ago +1

      @@KLEEBZTECH I already checked it, but it doesn't seem to work on my system. How high does your VRAM usage go? For me it looks like my RAM does all the work and my VRAM does very little, since VRAM usage is only about 20-30%.

    • @KLEEBZTECH
      @KLEEBZTECH  11 months ago

      Yeah, mine uses 100% of my VRAM when generating. I assume you have looked at the CMD window to see if there is anything in there that might give you a clue? Any errors?

    • @KLEEBZTECH
      @KLEEBZTECH  11 months ago +1

      Is this a fresh download of Fooocus, or have you made changes to the settings? Another thing to try is a fresh download, to see if it still does it. I have multiple instances of Fooocus downloaded myself, since I do tend to mess around with things.

    • @StoryFBMizostory
      @StoryFBMizostory 11 months ago +1

      @@KLEEBZTECH I didn't change anything; I just didn't install it on my C drive. I will check the CMD window to see if there is an error or not.

  • @zraieee
    @zraieee 11 months ago +1

    Wonderful, thanks!

  • @onlineispections
    @onlineispections 11 months ago +2

    Hi. How do you get a grid with the same face but different angles (frontal, left profile, right profile, three-quarter) to use for creating storyboards with other characters via the mixing image prompt and inpaint option? Because when I use them, they always look into the camera; they're all frontal.

    • @KLEEBZTECH
      @KLEEBZTECH  11 months ago

      I have not found a great way yet. You can also try terms like "character sheet".

    • @onlineispections
      @onlineispections 11 months ago

      @@KLEEBZTECH OK, can you tell me which prompt I need to write to get a grid with four photographs for four different face angles? Thanks.

    • @zoezerbrasilio2419
      @zoezerbrasilio2419 11 months ago

      Using a grid reference with Canny can do it, but you will have to inpaint each face separately for consistency after the first grid generation.

    • @onlineispections
      @onlineispections 11 months ago

      @@zoezerbrasilio2419 We know that. With the mix, the question was different: what is the prompt to create a grid with different angles?

    • @KLEEBZTECH
      @KLEEBZTECH  11 months ago +1

      @@onlineispections I recorded this, which should help you get different angles. There is no set prompt, but the first part of this video shows how to get them. ruclips.net/video/MntZa4qLwn8/видео.html

  • @jowisels
    @jowisels 11 months ago +1

    I ran into the problem that the face swap messes up my hair. To be specific, the original has long hair, but it always crops in some weird, rather short hairstyle around the face. Any tips?

    • @KLEEBZTECH
      @KLEEBZTECH  11 months ago

      Have you tried masking more or less of the area?

  • @kmc8522
    @kmc8522 10 months ago +1

    👍👍😍😍

  • @user87546
    @user87546 10 months ago

    1:36 It's still not clear to me what the seed really does... I guess if you keep it the same it goes to the log file and takes the same config as the other one? Is it like an ID? But what happens if you also provide an input image? My main issue, e.g., is when I try to keep my face on images with other people and then upscale (I already tried mixing upscale with face swap in the debug options). I had one result that looked a little like my face, but I can't keep the same face :(. Now I'm downloading Fooocus MRE to try image-to-image and more...

    • @KLEEBZTECH
      @KLEEBZTECH  10 months ago +2

      The "seed" in AI image generation acts as an initial starting point for the algorithm to generate images. Think of it as a unique key that determines the randomness of the output. It is just a number used to create the randomness. The same seed is useful for testing, but otherwise random is usually what you want.
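
      The reply above can be sketched in plain Python (a toy illustration of seeding, not Fooocus's actual code; the real generator uses the seed to initialize the latent noise that becomes the image):

```python
import random

def fake_generate(seed: int, n: int = 4) -> list[float]:
    # The seed fixes the starting state of the pseudo-random number
    # generator, so the "noise" it produces is fully repeatable.
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

# Same seed + same parameters -> identical output every time.
assert fake_generate(42) == fake_generate(42)
# A different seed gives different starting noise, hence a different image.
assert fake_generate(42) != fake_generate(43)
```

      This is why a fixed seed is handy for A/B-testing prompt changes: anything that stays different between two runs must come from the parameters you changed, not from the randomness.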

    • @KLEEBZTECH
      @KLEEBZTECH  10 months ago +1

      And for FaceSwap I find a stop-at of 0.9 or above and a weight of 0.9 or above work better.

    • @user87546
      @user87546 10 months ago

      @@KLEEBZTECH I just tried Upscale (Fast 2x) and it worked! Using the same seed. But there are some errors in the eyes (they look anime-styled). I will try fixing that with inpaint...

    • @KLEEBZTECH
      @KLEEBZTECH  10 months ago +1

      Yes, the fast upscale will not change the image, since it is more of a traditional upscale.

  • @eugenmalatov5470
    @eugenmalatov5470 10 months ago

    To be honest, when I do the grid the faces actually end up significantly different. Also, in the video I think the size of the lips is rather different from one picture to the next.

  • @wernerblahota6055
    @wernerblahota6055 11 months ago +1

    👍👍👍👍

  • @prashantkanwat2839
    @prashantkanwat2839 4 months ago

    I already have the face of my model; it's not from an image prompt, nor is she a real human. How can I make a grid of that model from different angles and with different emotions? I tried it, but the face completely changes. Need help with this one.

    • @KLEEBZTECH
      @KLEEBZTECH  4 months ago

      There is no easy way. You could try FaceSwap, but I'm not sure you will get the results you want.

  • @fufudece
    @fufudece 10 months ago +1

    Amazing video. Just a question: when I use the image prompt FaceSwap together with inpaint, Fooocus gives me an error. I use the fooocus_colab ipynb with Google, free version. If I pay 100, will I be able to do it without the error? My PC has a Ryzen 5 5600 and an RX 6650 XT, 16 GB RAM at 3200, a solid-state drive, etc.

    • @KLEEBZTECH
      @KLEEBZTECH  10 months ago

      I would suggest looking in the GitHub discussions for Fooocus. I haven't used it on Colab yet; I've just never had great luck with anything there when I've used it in the past. I think I saw a discussion there or on Reddit about that very subject recently.

    • @fufudece
      @fufudece 10 months ago

      @@KLEEBZTECH Is there another way to use Fooocus without Google Colab? Thanks for the answer.

    • @fufudece
      @fufudece 10 months ago

      @@KLEEBZTECH Another question: my graphics card isn't Nvidia. Can I use it anyway, or does it have to be Nvidia?

    • @KLEEBZTECH
      @KLEEBZTECH  10 months ago

      rundiffusion.com and diffusionhub.io are places where you can run Fooocus online, but I am not familiar with them. I do know some people use the paid Colab with good results, but I really don't know a ton about that. You can run it without Nvidia; I have not tested how big the performance difference is, but from reading the GitHub page it will likely be about 3x slower than with Nvidia. There are instructions on the main GitHub page, when you scroll down, explaining what to do for AMD GPUs. It looks like you just need to edit the run.bat file. I actually just got access to an AMD card but have not been motivated to swap it in and compare yet.

    • @fufudece
      @fufudece 10 months ago +1

      @@KLEEBZTECH Thanks!

  • @Subliminal-b5s
    @Subliminal-b5s 11 months ago

    What if I already have the face made and want to do a grid?

    • @KLEEBZTECH
      @KLEEBZTECH  11 months ago

      You could try FaceSwap to see how it does.

  • @aswinrathan7898
    @aswinrathan7898 4 months ago

    I understand what weight is, but what does stop-at mean? Why would we reduce it if we want to reduce its impact?

    • @KLEEBZTECH
      @KLEEBZTECH  4 months ago

      That determines how long it has an impact on the generation. A stop-at of 0.5, for example, would stop having influence 50% of the way through the generation process. So for the Quality setting of 60 steps, it would stop having any influence at 30 steps.
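
      The stop-at arithmetic in the reply above works out like this (a minimal sketch; `influence_steps` is a hypothetical helper for illustration, not a Fooocus function):

```python
def influence_steps(stop_at: float, total_steps: int) -> int:
    # stop_at is the fraction of the diffusion process during which
    # the image prompt still influences the generation.
    return round(stop_at * total_steps)

# Quality preset runs 60 steps; a stop-at of 0.5 means the image
# prompt stops having any influence at step 30.
assert influence_steps(0.5, 60) == 30
# The 0.9-or-above value suggested for FaceSwap keeps influence
# through step 54 of 60.
assert influence_steps(0.9, 60) == 54
```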

    • @aswinrathan7898
      @aswinrathan7898 4 months ago

      @@KLEEBZTECH Thanks, mate!

  • @Craftedcuts.
    @Craftedcuts. 6 months ago

    What specs does your computer or laptop have?

    • @KLEEBZTECH
      @KLEEBZTECH  6 months ago +1

      For this video it was an i5 with 32GB RAM and a 3070 with 8GB VRAM. I am currently using a 4070 with 12GB VRAM.

  • @alexa.1995
    @alexa.1995 7 months ago

    Hi, is it possible that something has changed since the last update? I've noticed that "Mixing Image Prompt and Inpaint" no longer works; the result is largely the same as the original image. Does anyone know more?

    • @KLEEBZTECH
      @KLEEBZTECH  7 months ago

      I just did a video yesterday using that option without issue: ruclips.net/video/BbKeDEQ7uik/видео.html

    • @KLEEBZTECH
      @KLEEBZTECH  7 months ago

      Have you accidentally adjusted the denoising strength down to a low number?

    • @alexa.1995
      @alexa.1995 7 months ago

      @@KLEEBZTECH I'm working with Colab Pro and Fooocus version 2.4.3, and I've just tested it again. I left all the settings on standard, added a face to "Image Prompt", and switched to "FaceSwap". In the "Inpaint or Outpaint" tab I added another image, clicked "Improve Detail", and clicked "detailed face". Then I drew a mask over the face. In the Advanced tab I activated "Developer Debug Mode" and checked "Mixing Image Prompt and Inpaint". No other settings were changed. I feel like the result has been worse for about 3 weeks than it was before. Maybe someone knows something about a change in the function. The "Mixing Image Prompt and Vary/Upscale" function still works very well.
      Your videos are really well done!

    • @KLEEBZTECH
      @KLEEBZTECH  7 months ago

      So it sounds like you are saying the FaceSwap doesn't seem to work well lately, and not that the whole "use inpaint with image prompts" function doesn't work. I don't think anything has changed there, but I don't read all the code changes.

    • @alexa.1995
      @alexa.1995 7 months ago

      @@KLEEBZTECH Yes, exactly, I'm talking about the FaceSwap function. That's why I left a comment under this video. Sorry if I expressed myself in a confusing way.

  • @hikmetozanustundag8036
    @hikmetozanustundag8036 10 months ago

    Which is the best base model to use for face swap? Can we achieve better results by combining two base models at the same time?

    • @KLEEBZTECH
      @KLEEBZTECH  10 months ago

      I have not done enough testing to determine whether one checkpoint is better than another for this.

  • @Poppinthepagne
    @Poppinthepagne 11 months ago

    What does the seed actually mean? Why would you want to keep it stable, and when?

    • @KLEEBZTECH
      @KLEEBZTECH  11 months ago

      The "seed" in AI image generation acts as an initial starting point for the algorithm to generate images. Think of it as a unique key that determines the randomness of the output. Using the same seed with the same generation parameters will produce the exact same image every time.

    • @KLEEBZTECH
      @KLEEBZTECH  11 months ago

      I find the same seed gives a more similar face in this case.

  • @Craftedcuts.
    @Craftedcuts. 6 months ago

    How fast could you generate an image with the 3070 8GB VRAM?

    • @KLEEBZTECH
      @KLEEBZTECH  6 months ago +1

      With the 3070 I could do 60 steps at Quality in about 35 seconds or so. I am currently using a 4070 with 12GB VRAM and can do it in about 20 seconds.

    • @Craftedcuts.
      @Craftedcuts. 6 months ago

      @@KLEEBZTECH What brand or model of laptop or computer are you using for this?

    • @KLEEBZTECH
      @KLEEBZTECH  6 months ago +1

      It is a custom built rig. MSI motherboard and I don't recall most of the other parts at the moment.

    • @Craftedcuts.
      @Craftedcuts. 6 months ago

      @@KLEEBZTECH Is the video card the most important component?

    • @KLEEBZTECH
      @KLEEBZTECH  6 months ago +1

      For AI, yes it is.

  • @zinheader6882
    @zinheader6882 10 months ago

    Does FaceSwap not work if the source image is external, i.e., not an image generated by Fooocus?

    • @KLEEBZTECH
      @KLEEBZTECH  10 months ago

      It can, but it might not work as well. It depends on the source image.

  • @IanLally
    @IanLally 11 months ago

    I can't seem to get an actual grid. I gave the same prompt.

    • @KLEEBZTECH
      @KLEEBZTECH  11 months ago

      Check out the second video. I have more tips. ruclips.net/video/MntZa4qLwn8/видео.html

  • @zastenchivo
    @zastenchivo 10 months ago

    How do I ask the AI to remove something? Sometimes it generates a bunch of objects that were not in the prompt. How do I remove them? 🥺

    • @KLEEBZTECH
      @KLEEBZTECH  10 months ago

      Inpainting can be used for that. Depending on what you are trying to remove, it can be a little hit or miss. You can't really just tell it to remove something, though; you need it to generate the area again. You can alter the prompt when doing it to help remove the items. I might make a separate video on that sort of thing soon. I do have videos that cover different aspects of inpainting, but not specifically that.

    • @KLEEBZTECH
      @KLEEBZTECH  10 months ago

      To give an example, I created an image of a woman having coffee and it put two cups in front of her. I masked out one with inpainting to regenerate that area. Of course, it usually added another cup in the same place. So I changed the prompt to say "empty table", and within a few attempts it removed the item and generated an empty area in that spot.

    • @zastenchivo
      @zastenchivo 10 months ago +1

      @@KLEEBZTECH got it, thanks a lot!

  • @anyoneanything803
    @anyoneanything803 7 months ago

    Bro, I'm having a RAM issue. Is 16GB not enough??

    • @KLEEBZTECH
      @KLEEBZTECH  7 months ago

      It depends on what you have for a GPU.

    • @KLEEBZTECH
      @KLEEBZTECH  7 months ago

      Have you checked here: github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md

    • @anyoneanything803
      @anyoneanything803 7 months ago

      @@KLEEBZTECH Bro, I've got an RTX 3050 4GB.

    • @KLEEBZTECH
      @KLEEBZTECH  7 months ago

      With only 4GB VRAM you will want to make sure you have things like the system swap set up correctly. Check the troubleshooting guide I linked to.

    • @anyoneanything803
      @anyoneanything803 7 months ago

      @@KLEEBZTECH Sure, thanks.

  • @smuki196
    @smuki196 11 months ago +1

    I believe I've seen that if you use the inpainting FaceSwap you need to select "Improve Detail", not "Inpaint/Outpaint". To me it seems to create more similar results; "Inpaint/Outpaint" even produced garbage.

    • @KLEEBZTECH
      @KLEEBZTECH  11 months ago +1

      I got much worse results that way in all the testing I did, but I am always looking for a better way. When I did it that way it did not blend things very well, and it was always obvious that the face was swapped.

    • @KLEEBZTECH
      @KLEEBZTECH  11 months ago +1

      But you got me looking into better ways of doing it. I did find that if you use Vary (Subtle) afterward, it blends things in decently and maintains the look for the most part...

    • @smuki196
      @smuki196 11 months ago

      @@KLEEBZTECH Good point, I'll try that too.

  • @GES1985
    @GES1985 8 months ago +1

    I might have figured it out. It's still somewhat time-consuming, but with great results: go create a MetaHuman for Unreal Engine 5. Problem solved.

  • @constantingata5964
    @constantingata5964 6 months ago

    I generated a girl's face with Tensor Art that I liked. I uploaded that image as an image prompt and face swapped. Sadly, any image generated from a highly detailed prompt only gives me the same expression and pose.

  • @slaznum1
    @slaznum1 11 months ago +2

    ANOTHER WAY... is to train a LoRA. But more work, for sure...

    • @KLEEBZTECH
      @KLEEBZTECH  11 months ago

      That is for sure a good way if you can. Although I have one trained and don't get the best results; of course, the way it was trained plays a big part in that. I have found a LoRA plus FaceSwap can be a good mix.

    • @constantingata5964
      @constantingata5964 6 months ago

      @@KLEEBZTECH Is it expensive to train a LoRA suited for SDXL / Juggernaut v8 RunDiffusion?