If you find the video useful and would like to tip you can buy me some electricity for all these image generations. It is greatly appreciated! ko-fi.com/kleebztech
There is a "character sheet style" (or something like that) amongst the styes. By selecting that and asking in the prompt for the same character from multiple perspectives, a turnaround, etc., you need to have a few goes at it, but it generally produces something okay in the end.
thanks! this ended up being the best solution for me
Amazing!! Your videos are a godsend
Thank you! Always love to hear that they help someone with ideas.
Thank you a lot for your help. Could you please make a video for body consistency?
If I find a good way I will, but I really have not. I do find that using celebrity names, like I mention for the face, also helps with the body, although not as well.
Exactly this is important as well
@@KLEEBZTECH that would be great! :)
Very nice, thank you! I would love to see a full workflow to create consistent characters where the person is not looking at the camera.
Hey, great video. Please also make a video about swapping products / consistent products.
super good video as always! thanks!
Thanks again!
Weak variation (subtle) can give you the smile: keep the image reference, mix it with variation, and it will replace the mouth and eyes; use the prompt to direct the change. I could get other expressions doing the same trick.
Yes, in further testing I have been able to get decent results. You do have to lower the weight and stop-at a little, and it may not look exactly the same, but very close.
Very well done video.
Thank you very much!
I used a 2x2 grid. I had a reference pack I bought of photos of people from all different angles. You could just as easily use a video or movie to get several angles of a person. Then I used ComfyUI to create a depth pass for the angles I wanted to use.
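As a rough sketch of that grid-assembly step, here is how you might paste four reference photos into one 2x2 image with Pillow (the filenames and sizes are placeholders, not anything from the video):

```python
# Sketch: build a 2x2 reference grid from four photos of the same person
# at different angles, using Pillow. Paths and cell size are placeholders.
from PIL import Image

def make_grid(tiles, cell=(512, 512)):
    cols, rows = 2, 2
    grid = Image.new("RGB", (cell[0] * cols, cell[1] * rows))
    for i, tile in enumerate(tiles):
        x = (i % cols) * cell[0]   # column offset
        y = (i // cols) * cell[1]  # row offset
        grid.paste(tile.resize(cell), (x, y))
    return grid

# Usage (hypothetical filenames):
# tiles = [Image.open(p) for p in ("front.jpg", "left.jpg", "right.jpg", "back.jpg")]
# make_grid(tiles).save("grid.png")
```

The resulting grid.png can then be fed to ComfyUI (or Fooocus image prompts) as a single reference image.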
The problem I had with eye color is that if you mention eyes in the prompt, it tends to put more emphasis on them, even make them bigger/rounder (I'm using Cheyenne). Using 3 portraits as face swap + 1 as image prompt kept the eye colors without mentioning them in the prompt, which can then focus on emotions, context, or activities.
pretty good video! very useful tips
Hi. Can you make a full-body texture video to use the same characters but with full body, like in Artflow? Thank you
I have an issue with high RAM usage with Fooocus. I have 16GB RAM and an RTX 3060. As soon as I run the web UI my RAM goes to 80%, and while generating an image it uses almost 100% of RAM but only about 18% of VRAM.
Have you looked over this? Check out the section on system swap. I have 32GB, and when I run Fooocus it seems to use about 9-10GB of RAM, since my total goes up to about 16GB used. github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md
@@KLEEBZTECH Already checked it, but it doesn't seem to work on my system. How much does your VRAM usage go up? For me it looks like my RAM does all the work and the VRAM does very little, since VRAM usage is only about 20-30%.
Yeah mine uses 100% of my VRAM when generating. I assume you have looked at the CMD window to see if anything in there that might give you a clue? Any errors?
Is this a new download of Fooocus or something you may have made some changes to settings? Because another thing to try is a fresh download and see if it still does it. I have multiple instances of Fooocus downloaded myself since I do tend to mess around with things.
@@KLEEBZTECH I didn't change anything; I just didn't install it on my C drive. I will check the CMD window for any errors.
wonderful, thanks
Welcome!
Hi. How do you get a grid with the same face but different angles (frontal, left profile, right profile, three quarters) to use for creating storyboards with other characters via mixing image prompt and inpaint? Because when I use them, they always look into the camera; they're all frontal.
I have not found a great way yet. You can also try terms like character sheet.
@@KLEEBZTECH Ok, can you tell me which prompt I need to write to get a grid with four photographs of four different face angles? Thanks
Using a grid reference with Canny can do it, but you will have to inpaint each face separately for consistency after the first grid generation.
@@zoezerbrasilio2419 We know that. With the mix, the question was different: what is the prompt to create a grid with different angles?
@@onlineispections I recorded this, which should help you get different angles. There is no set prompt, but the first part of this video will show how to get them: ruclips.net/video/MntZa4qLwn8/видео.html
I ran into a problem where the face swap messes up my hair. To be specific, the original has long hair, but it always crops some weird, rather short hairstyle around the face. Any tips?
Have you tried masking more or less of the area?
👍👍😍😍
1:36 It's still not clear to me what the seed really does... I guess if you keep the same one, it goes to the log file and takes the same config as the other? Is it like an ID? But what happens if you also use an input image? My main issue, for example, is when I try to keep my face on images with other people and then upscale (I already tried mixing upscale with face swap in the debug options). I had one result that looked a little like my face, but I can't keep the same face :(. Now I'm downloading Fooocus MRE to try image-to-image and more...
The "seed" in AI image generation acts as an initial starting point for the algorithm to generate images. Think of it as a unique key that determines the randomness of the output. It is just a number used to create the randomness. The same seed is useful for testing but otherwise random is usually what you want.
And for FaceSwap I find a stop-at of .9 or above and a weight of .9 or above work better.
@@KLEEBZTECH I just tried upscale fast 2x and it worked, using the same seed. But there are some errors in the eyes (like anime-styled). I will try fixing it with inpaint...
Yes, fast upscale will not change the image, since it is more of a traditional upscale.
To be honest, when I do the grid the faces actually end up significantly different. Also, in the video I think the size of the lips is rather different from one picture to the next.
👍👍👍👍
I already have the face of my model; it's not from the image prompt, nor is she a real human. How can I make a grid of that model from different angles and with different emotions? I tried it, but the face completely changes. Need help with this one.
No easy way. You could try FaceSwap, but I'm not sure you will get the results you want.
Amazing video. Just a question: when I use image prompt FaceSwap together with inpaint, Fooocus gives me an error. I use the fooocus_colab ipynb with Google, free version. If I pay 100, will I be able to do it without the error? My PC has a Ryzen 5 5600 and an RX 6650 XT, 16GB RAM at 3200, an SSD, etc.
I would suggest looking in the GitHub discussions for Fooocus. I haven't used it on Colab yet; I just never had great luck with anything there when I've used it in the past. I think I saw a discussion there or on Reddit about that very subject recently.
@@KLEEBZTECH Is there another way to use Fooocus without Google Colab? Thanks for the answer.
@@KLEEBZTECH Another question: my graphics card isn't Nvidia. Can I use it anyway, or does it have to be Nvidia?
rundiffusion.com and diffusionhub.io are places where you can run Fooocus online, but I am not familiar with them. I do know some people use the paid Colab with good results, but I really don't know a ton about that. You can run it without Nvidia; I have not tested how big the performance difference is, but from reading the GitHub page it will likely be about 3x slower than with Nvidia. There are instructions on the main GitHub page, when you scroll down, explaining what to do for AMD GPUs. It looks like you just need to edit the run.bat file. I actually just got access to an AMD card but have not been motivated to swap it in and compare yet.
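For reference, the AMD-on-Windows section of the Fooocus README (as I recall it; check the repo for the exact current lines) has you replace the contents of run.bat with something along these lines, which swaps the CUDA build of PyTorch for DirectML:

```shell
.\python_embeded\python.exe -m pip uninstall torch torchvision torchaudio torchtext functorch xformers -y
.\python_embeded\python.exe -m pip install torch-directml
.\python_embeded\python.exe -s Fooocus\entry_with_update.py --directml
pause
```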
@@KLEEBZTECH Thanks!
What if I already have the face made and want to do a grid?
Could try Faceswap to see how it does.
I understand what weight is, but what does stop-at mean? Why do we reduce it if we want to reduce its impact?
That determines how long it has an impact on the generation. A stop-at of .5, for example, would stop having influence 50% of the way through the generation process. So for the Quality setting of 60 steps, it would stop having any influence at step 30.
@@KLEEBZTECH thanks mate
What specs of computer or laptop do you have?
For this video it was an i5, 32GB RAM, and a 3070 with 8GB VRAM. I am currently using a 4070 with 12GB VRAM.
Hi, is it possible that something has changed since the last update? I've noticed that "Mixing Image Prompt and Inpaint" no longer works. The result is largely the same as the original image. Does anyone know more?
I just did a video yesterday using that option without issue: ruclips.net/video/BbKeDEQ7uik/видео.html
Have you accidentally adjusted the denoising strength down to a low number?
@@KLEEBZTECH I'm working with Colab Pro and Fooocus version 2.4.3, and I've just tested it again. I left all the settings at standard, added a face to "Image Prompt" and switched to "FaceSwap". In the "Inpaint or Outpaint" tab I added another image, clicked on "Improve Detail", and clicked on "detailed face". Then I drew a mask over the face. In the Advanced tab I activated "Developer Debug Mode" and checked "Mixing Image Prompt and Inpaint". No other settings were changed. I feel like the result has been worse for about 3 weeks than it was before. Maybe someone knows something about a change in the function. The "Mixing Image Prompt and Vary/Upscale" function still works very well.
Your videos are really well done!
So it sounds like you are saying the FaceSwap doesn't seem to work well lately, not that the whole mixing-image-prompt-and-inpaint function doesn't work. I don't think anything has changed when it comes to that, but I don't read all the code changes.
@@KLEEBZTECH Yes, exactly, I'm talking about the FaceSwap function. That's why I left a comment under this video. Sorry if I expressed myself in a confusing way.
Which is the best base model we can use for face swap? Can we achieve better results by combining two base models at the same time?
I have not done enough testing to determine if one is better than another when it comes to the checkpoints.
What does the seed actually mean? Why would you want to keep it stable, and when?
The "seed" in AI image generation acts as an initial starting point for the algorithm to generate images. Think of it as a unique key that determines the randomness of the output. Using the same seed with the same generation parameters will produce the exact same image every time.
The same seed, I find, gives a more similar face in this case.
How fast did you generate this image with the 3070 8GB VRAM?
With the 3070 I could do 60-step Quality in about 35 seconds or so. I am currently using a 4070 with 12GB VRAM and can do it in about 20 seconds.
@@KLEEBZTECH "What brand or model of laptop or computer are you using for this?"
It is a custom built rig. MSI motherboard and I don't recall most of the other parts at the moment.
@@KLEEBZTECH Is the video card the most important component?
For AI yes it is.
Does FaceSwap not work if the source image is external, i.e. not an image generated with Fooocus?
It can but it might not work as well. Depends on the source image.
Can't seem to get an actual grid. I gave it the same prompt.
Check out the second video. I have more tips. ruclips.net/video/MntZa4qLwn8/видео.html
How do I ask the AI to remove something? Sometimes it generates a bunch of objects that were not in the prompt. How do I remove them? 🥺
Inpainting can be used for that. Depending on what you are trying to remove, it can be a little hit or miss. You can't really just tell it to remove something, though; you need it to regenerate the area. You can alter the prompts when doing it to help remove the items. I might make a separate video on that sort of thing soon. I do have videos that cover different aspects of inpainting, but not specifically that.
To give an example, I created an image of a woman having coffee and it put two cups in front of her. I masked out one with inpainting to regenerate that area. Of course, it usually added another cup in the same place. So I changed the prompt to say "empty table," and with a few attempts it removed the item and generated an empty area in that spot.
@@KLEEBZTECH got it, thanks a lot!
Bro, I'm having a RAM issue. Is 16GB not enough?
Depends on what you have for GPU.
Have you checked here: github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md
@@KLEEBZTECH Bro, I got an RTX 3050 4GB.
With only 4GB VRAM you will want to make sure you have things like the system swap set up correctly. Check the troubleshooting guide I linked to.
@@KLEEBZTECH sure thanks
I believe I've seen that if you use the inpainting FaceSwap, you need to select "Improve Detail," not "Inpaint or Outpaint." To me it seems to create more similar results. The "Inpaint or Outpaint" option even produced garbage.
I get much worse results that way in all the testing I did. But I am always looking for a better way. When I would do it that way it did not blend things very well and was always obvious that the face was swapped.
But you got me looking into better ways of doing it. I did find that if you use the vary subtle after, it blends things in decently and maintains the look for the most part...
@@KLEEBZTECH good point, I'll try that, too
I might have figured it out. Still semi time-consuming, but with great results: go create a MetaHuman for Unreal Engine 5. Problem solved.
Generated a girl's face with Tensor Art that I like. I uploaded that image as an image prompt and face swapped. Sadly, any image generated with a highly detailed prompt will only give me the same expression and pose.
Another way... is to train a LoRA. But more work, for sure...
That is for sure a good way if you can. Although I have one trained and don't get the best results. Of course the way it was trained has a big part of that. I have found a LoRA and FaceSwap can be a good mix.
@@KLEEBZTECH Is it expensive to train a LoRA suited for SDXL / Juggernaut v8 RunDiffusion?