Want to get a quick start? Here is the 4x4 Face Grid!: www.mediafire.com/file/uyvyzi4yfk2fgiv/4-face.zip/file
**EDIT** If you're getting a single image and not a grid of faces, make sure to set the 4x4 Grid to the PyraCanny setting.
Since I've previously explained this part a bunch, I went past it and some people are confused. Sorry!
ComfyUI Please Please 🙏🙏
That link isn't working... neither one (in the description or above)
@DesignerVIsuals hmm still works for me, and it's still getting downloads according to the stats. ❓🤔
Just dropping in to share something exciting with my fellow content creators! As a newcomer to RUclips, I've been all about storytelling and creative video-making. Recently stumbled upon VideoGPT, and it's been a game-changer, giving my videos that polished, professional touch.
Can’t find the initial tutorial you talked about.. :/
This is the only true Character Consistency tutorial out there. Others are click bait.
You'd better start learning how to create the AI yourself, it takes as much time as it takes to learn how to do these lol
To the point, I wish all YouTubers followed your method of honest tutoring
You really get in depth but keep it easy. Really good at what you do.
Thanks! I appreciate the support 😉. 👍
It only generates one front-facing image for me, not the grid. SOLUTION: switch to PyraCanny, which was not mentioned in the video
thanks for the much needed help.
Thank You Man
By far the most in depth explanation I've seen
Nice to hear and watch you again! Each time I get a notification of a new film on this channel I drop everything and run to RUclips! :)
Thanks! I'm glad the videos are something to look forward to. Much appreciated. 😊
@@JumpIntoAI You sound very smart buddy... I bet you are, but your voice is consistent with what you're doing!
Finally an actual great tutorial from a content creator! Nice job!!
Wow, this is an excellent tutorial that shows your perfect mastery of fooocus and image generation. Thank you so much
No problem! Glad to help show people other ways to get results.
Thank you very much for this detailed and very well explained tutorial. Your video has actually made me want to try it myself.
I wonder if you could use similar techniques to change the weather and lighting of a scene.
That's a lot for me to process now, but you did an excellent job here.
Your video is the best I've ever seen on generating consistent characters. I hope you will continue to share more technologies in this area in the future, and I also hope you will share more advanced usage of Fooocus. 🎉🎉🎉
Subscribed! The only channel that explained in detail what does what, simply amazing. Been wanting to change the face expression and this video teaches just that. Thank you, this video alone makes a huge difference for me, thanks bro 👍
Liked and subscribed! Thank you!
very good guide
PS: how you made this great fox, the settings etc. - it would be great if you could describe it briefly or, even better, make a tutorial on it!
finally someone who knows how to do it. No bs!
Hands down one of the best videos I’ve found on the topic
I have never commented on any YouTube video, but you made me comment. Thank you so much for teaching me about Fooocus. I hope you make more videos about model consistency, thank you
Great video! But how'd you get the images at 8:20 to have no background?
I used a background removal tool after I split them. If you are on Windows, even MS Paint has one; it's not as good as others, but a pure white background like that is easy to remove with it.
thanks for clarifying @@JumpIntoAI
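For anyone who would rather script that step than do it by hand, here is a minimal sketch (not from the video): it splits the 4-face grid into its four quadrants with Pillow and strips the plain background with the rembg library. The filenames are placeholders, and rembg is just one possible background-removal tool.

```python
# Minimal sketch: split the 4-face grid into quadrants and remove the background.
# "4-face-grid.png" and the output names are placeholder filenames.
from PIL import Image
from rembg import remove  # pip install rembg

grid = Image.open("4-face-grid.png")
w, h = grid.size
quadrants = [
    (0, 0, w // 2, h // 2),      # top-left
    (w // 2, 0, w, h // 2),      # top-right
    (0, h // 2, w // 2, h),      # bottom-left
    (w // 2, h // 2, w, h),      # bottom-right
]
for i, box in enumerate(quadrants, start=1):
    face = grid.crop(box)
    face = remove(face)          # returns an RGBA image with the background cut out
    face.save(f"face_{i}.png")
```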
Hello, thanks for the video. Is it possible to do a consistent body type with this method?
This is the video that I'm looking for. Thanks, and keep posting new learnings
Would you be able to share details on the audio track by any chance at all? 🙏 It is insanely good!
Good stuff
Cheers from Poland!
When you first put the character grid in the image prompt, did you leave the setting to image prompt or did you select the PyraCanny box?
I followed all of your steps and settings exactly, but my faces don't follow the character grid.
If I put my Weight too high, then I get the character grid but not the face I want.
Any suggestions? Thanks in advance
facing the same issue
Same @@ravichauhan4802
same here
First of all I want to congratulate you for the extraordinary way you explain the procedure. Your explanation is very precise and, at the same time, didactic. What was a great addition is the "Disable Seed Increment" option to create multiple images with the same SEED and PROMPT.
In the procedure you start with a detailed PROMPT (at 3 min 32 sec). The question is: how do you start the process with a reference image (instead of a PROMPT) from which you want to create different emotions?
When doing it this way, with the prompt, the PyraCanny grid, and the same seed, we can get emotion without compromising the original image, because we are only adding one or two words to all of that information.
When you put in just an image and try to get emotion without anything holding the face to its original form, you will get what you want, but not without the character details changing too much.
What you can try is putting an image into "Upscale or Variation", using "Vary (Subtle)", and simply putting the emotion in the text prompt. It works better if the face takes up the majority of the image, as then it won't change as much as when the face is far away.
You can also go into the debug menu on the debug tools tab, find "Forced Overwrite of Denoising Strength of 'Vary'", and change the denoise level there. Vary (Subtle) starts at 0.5, so try going down to 0.4 or lower if the image is changing too much.
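For anyone curious what that denoise slider does under the hood, here is a rough conceptual sketch using the diffusers library. This is not Fooocus's internal code, just an assumption-level illustration of an image-to-image pass, where a lower strength keeps more of the original picture; the model name, prompt, and filenames are placeholders.

```python
# Conceptual sketch only, using diffusers -- NOT Fooocus's code.
# "Vary (Subtle)" behaves roughly like an img2img pass; lowering the
# denoising strength (e.g. 0.5 -> 0.4) keeps more of the original face.
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("portrait.png").convert("RGB")  # placeholder filename

result = pipe(
    prompt="portrait photo of a woman, laughing",  # add the emotion here
    image=init_image,
    strength=0.4,  # lower = closer to the original image
).images[0]
result.save("portrait_laughing.png")
```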
@@JumpIntoAI Hi, also thanks from my side for your excellent tutorial! It helps a lot.
I was first creating multiple models so that I could later choose the one I like the most. When going back to create the emotion grids, would you approach it in the same way as pointed out above, or do the seed + details from the log help me in any way to start from the "selecting your model" point?
Thank you in advance!
Edit: Nevermind - the log file usually helps to get back into the editing
@@JumpIntoAI Hi, I have the same problem: I generated the model I wanted, but when I tried your process, it changed the model even when I lowered the Forced Overwrite of Denoising Strength of "Vary".
I get stuck on this step
Hi,
Please make a video where we can learn to create the face grid from a pre-generated face image.
This is really good. Question: why would you want to "redo" the front-facing one when you can just use the original image? I.e., why not do a grid of 4 different angles, so you can swap around when generating different poses? Meaning, use the original as the front-facing angle + 3 other angles for face swap.
Hi Jump, thank you so much for this video, you are giving away very valuable content and I appreciate it. You have earned my subscription, my like, and the bell. My question is: how do you install the models into local Fooocus? I was able to use the web version but not the local one. Do you have any video that explains it?
Hi, this is a great tutorial, thanks! I have a question: how do you get Fooocus to respect your prompts? If I tell it to use a white background, it doesn't do it and even generates a rainbow background instead. The same happens with clothing colors. Can you help me?
Thank you! Great technique, thanks for sharing
No Problem!
Hi, your teaching is excellent. Requesting you to please make a video on full portrait images (meaning head to toe) in any pose and any expression. Thanks
That's great! Thank you so much for a perfect and needed lesson!
Nothing changes the pose for me. My model always has the same pose on the park bench. Can someone please help me?
I did all the steps you did until minute 3:47 (where you hit generate), but it only gives me 1 face, not 4 versions of the same face. Why is that?
bump
I watched the videos and created my virtual model, thank you very much. What if I have rings, necklaces and earrings, and I want to create photos in which the virtual model is wearing these pieces?
Can you make a video on how to change the weather in photos or how to make photos at night of the same scene?
Hey, thanks so much for this super-clear tutorial!
but I can't seem to make it work for me :(
The PyraCanny just takes over and allows no room for changing the expression. I tried testing with different parameters, but nothing works; the image always keeps the same expression, no matter what parameters I use with PyraCanny.
If I understand correctly, I need to use only PyraCanny, with the same seed (no increments), and just change the prompt for different expressions? It just won't change it...
Cool tutorial, thanks. Where can I download the models shown at 3:08?
CivitAI, just set the filter on there to SDXL 1.0>Checkpoints. Those are all the base models that can be used in Fooocus.
Oooo I was looking for such videos a lot thank you 💕💕❤
You're welcome 😊
Hi friend, a suggestion: could you explain how to generate three people in the same image who are all totally different, so that none of them looks anything like the others? Please help me.
What if I don't have "Disable seed increment"? My list only goes up to "Forced Overwrite of Denoising Strength of 'Upscale'" and then "Disable Preview".
Yes, I was struggling with nonhuman characters, such as a minotaur.
Great video! What is the workflow when you already have a beautiful portrait, but only from one angle? I'm trying, but with face swap I always get the same result and no different angles...
I don't see the invert mask button >.< I'm using Fooocus 2.55 on my PC
If I made up more grids using the remaining head positions I'm unsure how I could maintain the character consistency. Is it possible to do this?
Awesome video and awesome channel man!
Is there a good way to keep landscapes and environments consistent too?
Awesome video! How would you advise changing the hair style and color?
You can prompt for it in the generation; if that doesn't work well enough, you can change it with inpainting after.
If you want to change the grid of 4, you can mask all 4 subjects' hair and try to change it at once with inpainting.
I'm not sure how you got your character to do emotions, starting at 4:30. Is it because you generated everything from the seed that gave you the original 4 faces and then kept that seed and prompt?
I personally built my 4-face grid some other way (using Stable Diffusion) because I didn't want to rely on luck to get a face I like. But now that I have a face I like and want to give it emotions, I can't seem to do it using your technique. I have followed every step of your video multiple times but end up very short of the expected results. I have tried different models like Juggernaut and stockphoto, but the results are way, way off. Help please?
This method really only works when you generate the image first; all the factors, like the prompt, seed, grid faces, and Stop At & Weight settings, hold the image close to the original, and changing just a word or two for emotion is why it works.
@JumpIntoAI bro, please make a video about how to fix fingers, because it's the hardest thing to fix with inpaint. Is there any good setting that guarantees, or at least improves, fixing deformed fingers and hands?
Yes, I also need it
Fingers and hands are still a pain, especially if they are closer to the camera; holding hands or interlocking fingers are nightmares. I usually end up hiding them if I can. But I have fixed them in the past with some basic Photoshop (just separating the fingers) and then running it again through Fooocus inpaint.
If I can come up with something that isn't too complicated and works, I will make a vid.
@@JumpIntoAI thanks a lot bro, nice idea, cut them off and run it again, should be easier for the AI to regenerate whole new thing.
@@AxelWXChannel Yep, I even found a 3D hand that I spent hours getting to the right angle and then low-denoising in inpaint over and over to blend in naturally. But it was way more work than it was worth.
I already have the character I want, how do I apply it to the template?
Thanks for the grid !
Just wondering why I am not getting the four sections/squares after generating. I only get the top-left one, in one single image of 1024x1024. Thoughts?
How would you do to maintain the same background of an original photo and just switch the person?
Or not even switch but place your AI character inside your original pic without damaging background?
I can't get the image to generate a grid with the 4 faces as the image prompt. Can anyone help?
Thanks for the tutorial buddy 😀 When I'm swapping a face in Colab with mixing image prompt and inpaint, I always get an error... Is there any way to fix it?
Google Colab has limited GPU credits in the free tier, which Fooocus quickly maxes out. The only solution is to either switch to a paid tier or run it on your own system.
@@dhruvarora2995 So is there any way to generate a high-quality background, or blend things perfectly with face swap and then type the prompt manually?
Dude can you help me?
I don't know why my interface is different from yours. I've noticed you don't have the checkpoint selection at the top of your screen like I have on my version. I also don't have the "Image Prompt" tab and that "Advanced" option. Can you share your installed version?
Have you had much luck at all maintaining character consistency across couples? I can't seem to manage it straight out of generation. Trying to avoid the inpainting routes, but it seems like at the moment that's the only way to get it done without resorting to training up a LORA or using the famous person blend prompt.
can I make different angles using a face, (face swap)?
Hello! Could you help me please? I have an issue running Fooocus. It says "RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver." I have an AMD GPU.
AMD GPUs aren't officially supported (it's a Stable Diffusion problem, not a Fooocus problem), but there are ways to get it to work as long as it is a beefy card.
Fooocus install page has a few things you can try to get it to work. github.com/lllyasviel/Fooocus?tab=readme-ov-file#linux-amd-gpus
Or this person created a little tutorial to get his amd gpu to work
github.com/lllyasviel/Fooocus/discussions/2552
@@JumpIntoAI Thank you!)
Awesome videos, I'm binge watching your channel! Is there a reason why you chose only 4 head positions? And How does the Weight and Stop At parameters actually work?
And I really liked the "[[happy, laughing, angry, crying]]" prompt, I didn't know it was possible to do that. Is it possible to do something like that, but to generate sequential images, like a person standing up? Do you know a reliable way for making this kind of sequential images?
I used 9 head positions before, which worked, but since the heads were smaller, the quality of each was lower. So I switched to 4.
Stop At and Weight can be difficult to understand at first.
ruclips.net/video/0vzunoCYiMI/видео.html
That explains it somewhat, but it might still be confusing.
As for the array function and adding actions, I haven't tried it. It all depends on the image really; asking a closeup face to smile is different than asking a character to do an action. But it's always worth a shot!
@@JumpIntoAI Ah ok, I'll check that video.
Do you know situations other than facial expressions, where this type of array function can be used? Or other possible array functions?
For example, I was wondering if it's possible to vary other parameters, like adding an array for the function to generate 4 images, each one with values equal to 0.5, 0.6, 0.7 and 0.8, in the same prompt, like the facial expression array, in order to generate each image with a different set of variables.
And another question, if I'm not already being an annoyance... haha. Is there a good way to alter specific characteristics of a character while prompting? For example, if I just want the right arm to be up, or the left leg to be sideways... I was struggling with that a lot yesterday. @@JumpIntoAI
It seems that this array function works for clothes colors, interesting... But I couldn't find a reference online for the term "array function" used in this context @@JumpIntoAI
@@BiancaMatsuo The array function was only added in 2.2 a few weeks ago, so I haven't tested it extensively myself. But right now, generating different weights with a LoRA automatically isn't possible; actually, a LoRA text prompt command like that doesn't work in Fooocus (but I think they are looking to add it), so possibly in the future.
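To make the expansion concrete: with the image count set to 4, an array prompt like the one from the video hands each image in the batch one entry from the array. The tiny snippet below only mimics that behaviour for illustration; it is not Fooocus's implementation, and the base prompt is made up.

```python
# Illustration only -- mimics how an array prompt such as
# "portrait of a woman, [[happy, laughing, angry, crying]] expression"
# expands across a batch of 4 images. Not Fooocus code.
expressions = ["happy", "laughing", "angry", "crying"]
for i, expression in enumerate(expressions, start=1):
    print(f"image {i}: portrait of a woman, {expression} expression")
```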
Nice!!! Thanks... I'd like to see videos on Sora AI once it comes out.
What if I already have a model but I want to create more emotions for example?
Make a somewhat similar one to the one in the video.
Put your original model in the Image Prompt with Stop At about 1 and Weight 0.75 (you may have to fiddle with those) and FaceSwap mode.
In Advanced > Debug > Control, enable "Mixing Image Prompt and Inpaint".
Put the model you made, like in the video, in the Inpaint tab and paint over the face of one of the 4.
Generate... Do it for all 4 faces.
It's the only way I've found to do it.
Someone help me. I have an i7-12700KF, 32 GB RAM,
and a GTX 1660S GPU, but when I generate an image in Fooocus it takes around 1-2 minutes per image. Any help?
Have you tried using a Lightning Model?
@@JumpIntoAI I got the one from CivitAI named Realities Edge XL Lightning + Turbo, is that the one, brother?
I'm installing Fooocus on an HDD; if I switch to installing it on an SSD, will that change anything for me or not, brother?
And is this 1660S 6GB too slow for Fooocus?
@@zeezee4760 Yes, an SSD will help. And yes, any of the Lightning and/or Turbo models will help since they use fewer steps (3-8). They do need specific settings and changes made in Advanced/Debug mode to get them to work correctly.
ruclips.net/video/DRcxsqnhjws/видео.html
That will show you what settings need to be changed.
That GPU is on the low end, but I would try out the Lightning models and see how much it improves.
@@JumpIntoAI I followed your steps, brother, but it's still too slow to generate an image even with only 4-6 steps. The quality is quite good though, love it. Maybe I need to upgrade my GPU.
Great Video my friend...
best tutorial video
Hello, good video. I followed all the recommendations and they worked for me, except "FaceSwap". I cropped the background of all the profiles and deactivated the seed increment, but when generating the photo the face appears scratched, like crayon. Do you know why that happens to me? Thank you
Hi there,
I'm trying to replicate this with Fooocus in Colab. I did install the same model, but I clearly don't get the same result (by far). Could it be the Fooocus version? Any other thoughts? Thanks in advance
With the grid you created that I uploaded, I can't understand why, on the first pass, the 2 images on the bottom are merged into a single one while the two on top are done correctly. On the second pass all 4 images come out correctly, but why does the first pass give me that error?
Where did you learn these?
Amazing work dude!
If I already have the character beforehand, how do I swap the faces into the 4x4 face grid instead of creating a new character from scratch like you did?
How do you get 4x4 image using a face swap for low lighting images? Anyone help?
Worked great for realistic images but none of the anime checkpoints seem to play nicely with this, not sure if it's just a case that the AI doesn't recognise cartoons as people, perhaps.
Hi, I did all the steps you did, but when it comes time to generate the faces on the 4 different 3D heads, it only gives me a weird colored square. Would anyone know why?
Just to check: you have used Fooocus fine before trying this, yes? Are you on Google Colab or using it on your own computer?
What model/checkpoint are you using?
What is your GPU, if you're using your own computer?
Hello, how are you? I wanted to ask you: I was testing with Fooocus, and I can't get it to generate full-body images. Do you have any idea if there is a specific checkpoint/model for creating those images? Or do I just have to write the prompt another way? Thank you very much, I'm learning a lot from your videos.
Yes, there are a few ways to achieve this; the best is to describe the subject from top to bottom. Like, "A woman wearing a t-shirt, jeans and cowboy boots." Adding in their shoes helps, as the model will want to include what you describe, so the shot will be more zoomed out.
Also describe their surroundings, especially items that are bigger than them, like: "A woman wearing a t-shirt, jeans and cowboy boots. Standing next to a lamppost."
If you're still having trouble, you might try removing the 3 default Fooocus styles in the "Styles" tab. They tend to lean more towards portraits.
@@JumpIntoAI Ok, I'll try that. And does this work with any model? Or are some models trained to create portraits only?
I have already created an AI character through Tensor.Art; how can I achieve consistent face angles?
Great bro..👍
Please continue your tutorial by making complete 1 Video Music Clip tutorial (from beginning to end) with the theme:
1. Image to video,
2. Video to lip sync.
3. Choose the Best & Natural Lipsync AI for singers.
4. The singer's face remains consistent (unchanged), without any defects in the scene.
5. Realistic, Photography
6. Photo material of this Italian woman.
7. Choose Easy, Best & Free AI.
8. Thanks bro..🙏
I tried all your steps and it still gave me different photos, and sometimes it even generates them distorted or naked.
I'm actually struggling to get even or 'flat' lighting in my images. Every image fooocus generates feels like it has studio lighting or a rendered quality to it. Any way to fix that?
The first thing is to uncheck the 3 default "styles". They lean towards that studio look.
Any ideas on how to keep consistent clothes and faces with FaceSwap?
If you describe the clothing in detail, you can get similar clothes in generations. But it's hard to get exactly the same; there will always be slight variations in shape and color, especially if the clothes you want are specific-looking.
You can also go back and try to inpaint the clothing to change it, especially if it's close to what you want and just needs a color change or a missing button, etc.
Where can I get the model you use in this video RealismEngineSDXL ? Thanks!
all models from CivitAI
civitai.com/models/152525/realism-engine-sdxl
Wow, really great video, I didn't know that Fooocus had so many features. Did you try training a LoRA with the faces you generated, or do you have any experience with that?
I have done LoRAs from generations with mixed results. I haven't put enough work into training since it can be frustrating, lol.
Bro, where you select RealismEngineSDXL as the base model, I only have the one stockphoto model and nothing more. How do I get yours?
How do I force it to generate 4 faces in one picture? I've added that sheet to the image prompt but it keeps generating just one face per picture. I increased the weight to 1.0 but it's still ignoring it. I'm using the JuggernautXL 9 model, with all filters and negative prompts turned off.
Is the 4x4 grid set to PyraCanny?
@@JumpIntoAI Thanks a lot! I've missed that part in the video..
I already have a face that I've been using, but I want to use this tutorial to get better images. How could I do that? I have the image of my face.
What if I already have a face that I created but I want to use this method? Would I need to do face swaps with the grid and then move on from there with the emotion arrays???
With this specific method, no. It will be a different face. The reason I can create different emotions is because I am using the exact settings/weights/prompts/seed/model that created the original and tweaking just one or two words. So it won't change much.
@@JumpIntoAI ok. Thanks. Makes sense
Tell me a bit about your computer specs and render time per image... thank you.
AMD Ryzen 7 3800X
32gb Ram
4070 Ti 12gb Vram
Images take around 12 seconds to generate each
Cool! I looked for it!
You are making amazing videos 💯💥💥💥💥💥💥
We request you to please make one dedicated video about "how to maintain the body" (without using other models' photos).
How can we generate the same body every time? If we use another character's or influencer's photos for body consistency, it copies the pose as well, and yes, sometimes even the clothes. 🥲
Is there any specific prompt we should use?
A seed?
This might be challenging for you, but we hope you will.....
Please help us, it would mean a lot to us.
thank you thank you thank you🙏🙏
Hi, I need advice. Why am I only seeing one Base model?
You need to download additional models. CivitAI is the main site for this. Just use the filter in the model tab and look for SDXL 1.0 Checkpoints.
How hard is it to get a model and train it to learn a new persona? E.g., you can use meganfox in the prompt to generate stuff with her face.
Maybe you can train the model to learn a new entity and, if you use that in the prompt, generate everything with that entity's face/hair/body?
Could you do a training on that?
You are really awesome btw
Thanks a lot for your videos! I've been following your videos about consistency, and more or less you can get good face consistency, but what about the body? Famous AI influencers have perfect consistency, and it would be great to know how to do it. Thank you!!
One thing to remember is those people probably have some resources in post processing and perhaps some well trained LoRAs. And I bet they still generate hundreds of images for every one good photo.
But doing this yourself isn't impossible if the clothing you want isn't very specific, doesn't have lots of minor details, and has no words or logos.
With a lot of patience and skill with inpainting, it could work.
If I come up with a workflow that makes sense I will make a video about it.
@@JumpIntoAI I've thought that, using a body created in 3D (made with Daz3D, Blender or similar), you could render it and then use AI to do a face swap on the render. That way you'd always have the same body (size of chest, hips, etc.). You could even change clothes, add physics to the clothes, etc. I don't know if it'd be possible, I'm only thinking out loud :)
Hey there,
I sent you a PM to your email. I love your videos and would like to use your 4-grid in one of my videos.
Thanks
So helpful, thanks
Ah this is great. thanks so much
Wow! Amazing video Thank You very Much!
Welcome! Glad you liked it!
I could not get 4 different photos like you did.
Make sure the 4-face grid is set to PyraCanny.
Thanks, also had this problem.
@@JumpIntoAI yes but still not working