Face fusion will stack up the process base on what you selected first and so on. For me, I deselect everything, the selected in this order... Face enhancer Face swapping This will make facefusion run one after the other.
Yes, the order of activating the processor defines which one is calculated first. More sense in my experience: 1. Face Swapper 2. Expression Restorer 3. Face-enhancer If you wanna use lipsyncer or agemodifier, put them after or before Expression Restorer, but before the Face-Enhancer. Definetly makes a difference on closeup shots, as well as using the 512x512 or even 1024x1024 FaceSwapper Pixel Boost.
Difference key with some erode/dilate and blur should give you a reasonable matte to play with, not as good as the real thing, but at least a start. Thanks for making these. Interesting stuff.
That is a way around it. I actually tested it but the reason why I didn't mention it was because the matte is barely usable because of the different compression artifacts.
@@alexvillabon Ah, very good point, with compressed footage, which so few of us normally work with! You still couldn't choke out the artifacts or clip them with a black or white clip? Perhaps frame blending or noise removal first would help. But it looks like the developer is on to this given the earlier comment below. (That face exporter proof of concept video)
I only got FaceFusion 3.0 a few days ago and your video explains how to use it. It's a great tutorial, and you have a new SUB.👍😁
Happy to hear it helped! Thanks for the sub :)
Really cool, thanks for creating and sharing!
I don't know if you tried it already, but if you select the source file from the source selector inside FaceFusion, you can select a whole faceset. For example, you can load the source video into DeepFaceLab, extract all the frames into individual files, and then extract the faces so you have your faceset. This faceset can be fed into FaceFusion as individual files (png, jpg, etc.) and improve the consistency of the results. The video is great man ;)
Wow, I'm not sure I understand, but it really sounds interesting. Could you elaborate?
@@alexvillabon Sure. My idea came from my earlier use of DeepFaceLab to create these videos. The idea is that you can get more consistency in the result on your target video and make the AI guess less by giving it more points of reference: if, instead of feeding FaceFusion one image, you feed it many, it has thousands of references to draw from when creating the faces instead of just one.
The process would be roughly as follows:
1. Let's say you have two videos, src (source) and dst (destination).
2. You want the face from src on the face in dst.
3. Put the src video into the workspace folder of DeepFaceLab.
4. Run extract images from video data_src.bat (this separates every frame into an individual file).
5. Then run 4) data_src faceset extract.bat (this extracts the detected face from every file, at the size you specify when you start the process).
6. Then you can clean out bad or repeated pictures to avoid redundant faces.
7. Then open FaceFusion and click to select the source file (or files); it will let you feed it many of the pictures you extracted. I don't know the exact limit, but I tried with approximately 1800 and it works fine, with the downside of the time it takes to load all the images.
After that, select the target video as you showed and do everything you did.
At least I noticed an increase in the overall quality.
Sorry if I didn't explain myself well enough; my native language is Spanish.
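Step 6 above (cleaning out bad or repeated pictures) can be partially scripted. Here is a minimal stdlib-only sketch that flags byte-identical duplicate frames; the folder name is hypothetical, and catching *near*-duplicate frames would need a perceptual hash (e.g. the third-party `imagehash` package) rather than an exact one:

```python
import hashlib
from pathlib import Path

def dedupe_exact(folder: str) -> list:
    """Return paths of byte-identical duplicate .png frames in `folder`.

    Keeps the first copy of each image, flags the rest. Only catches
    exact copies; visually similar frames hash differently.
    """
    seen = {}    # sha256 digest -> first path seen with that content
    dupes = []
    for path in sorted(Path(folder).glob("*.png")):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest in seen:
            dupes.append(path)  # duplicate: same bytes as an earlier frame
        else:
            seen[digest] = path
    return dupes
```

Usage would be something like `for p in dedupe_exact("workspace/data_src/aligned"): p.unlink()` (the path is an assumption, adjust to wherever your extracted faces land).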
Woah, that is amazing! I had no idea I could import more than one image for the person's face! How does it decide what to use from each picture, though? This is so interesting and helpful, thank you for sharing.
@@alexvillabon I'm "guessing" it works like a deepfake, i.e. it tries to reach the output using all the inputs. In DeepFaceLab we'd shoot the source face in various lighting conditions. Just a guess. Also, would Comfy allow us to run Expression Restorer before Face Enhancer?
Cheers
@@behrampatel4872 Turns out the processors run in the order you select them. As for the multi-image input, I spoke with the developer and he let me know 5-6 images is the sweet spot; after that it doesn't really make a difference.
Another great video, thanks 👍
FaceFusion will stack up the processors based on what you selected first, and so on. For me, I deselect everything, then select in this order...
Face Enhancer
Face Swapper
This will make FaceFusion run one after the other.
Is the output better than what is shown?
Oh! Thank you, I didn't think of trying that.
Yes, the order in which you activate the processors defines which one is calculated first. This order makes more sense in my experience:
1. Face Swapper
2. Expression Restorer
3. Face Enhancer
If you want to use Lip Syncer or Age Modifier, put them before or after Expression Restorer, but before the Face Enhancer.
It definitely makes a difference on closeup shots, as does using the 512x512 or even 1024x1024 Face Swapper Pixel Boost.
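The ordering behavior described in these comments (each selected processor applied one after another, in selection order) can be modeled with a trivial pipeline sketch. This is a toy illustration of the semantics only; the names are labels, not FaceFusion's internal API:

```python
from typing import Callable, List

Frame = List[str]  # toy stand-in for an image: a log of the steps applied

def make_processor(name: str) -> Callable[[Frame], Frame]:
    """Stand-in processor: a real one would transform pixels, this one logs."""
    return lambda frame: frame + [name]

def run_pipeline(frame: Frame, selection_order: List[str]) -> Frame:
    """Apply processors strictly in the order they were selected."""
    for name in selection_order:
        frame = make_processor(name)(frame)
    return frame

print(run_pipeline([], ["face_swapper", "expression_restorer", "face_enhancer"]))
# prints ['face_swapper', 'expression_restorer', 'face_enhancer']
```

The point of the model: swapping two entries in `selection_order` changes which transform sees the other's output, which is why swapping before enhancing gives a different result than enhancing before swapping.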
Warning: the prerequisites for Pinokio are a ton of SDK files and dev tools, scattered across your machine, that you might not want.
Difference key with some erode/dilate and blur should give you a reasonable matte to play with, not as good as the real thing, but at least a start. Thanks for making these. Interesting stuff.
That is a way around it. I actually tested it, but the reason I didn't mention it is that the matte ends up barely usable because of the different compression artifacts.
@@alexvillabon Ah, very good point, with compressed footage, which so few of us normally work with! You still couldn't choke out the artifacts or clip them with a black or white clip? Perhaps frame blending or noise removal first would help. But it looks like the developer is on to this given the earlier comment below. (That face exporter proof of concept video)
Yeah I saw that! Having a face matte option would be amazing.
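The difference-key recipe from this thread (difference, threshold, then erode/dilate and blur) can be sketched on plain grayscale arrays. This is a toy model under assumed inputs, not a compositing-app workflow; real footage would go through OpenCV or a node-based compositor, with a final blur to soften the edge:

```python
def difference_matte(orig, swapped, thresh=10, radius=1):
    """Toy difference key on 2-D grayscale lists (values 0-255).

    1. Absolute difference between original and face-swapped frame.
    2. Threshold to a binary matte; compression noise below `thresh` drops out.
    3. Dilate (neighborhood max) to choke the matte outward.
    """
    h, w = len(orig), len(orig[0])
    diff = [[abs(orig[y][x] - swapped[y][x]) for x in range(w)] for y in range(h)]
    matte = [[255 if d > thresh else 0 for d in row] for row in diff]
    dilated = [
        [
            max(
                matte[ny][nx]
                for ny in range(max(0, y - radius), min(h, y + radius + 1))
                for nx in range(max(0, x - radius), min(w, x + radius + 1))
            )
            for x in range(w)
        ]
        for y in range(h)
    ]
    return dilated
```

The threshold step is where compressed footage bites, as noted above: if the compression artifacts exceed `thresh`, they survive into the matte along with the actual face difference.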
Any way to get this to render out at higher bit depths? It seems to do 8-bit only.
Please do something 🙏🙏😢
How do I use this on Colab?
But these tools really seem to be illegal.
😂 It's like saying Photoshop is illegal because we can do the same thing (just with a lot more time).
FaceFusion is heavily censored, so it does not need to be regulated.
FaceFusion is heavily censored, by the way.
@@aegisgfx In what way? It seems to have no issues with celebrity faces.
@@alexvillabon Any nudity, even cleavage, and it won't work at all.
Creator of FF here... so you are asking for this? watch?v=qwDqm1wpEEs
Hey Henry. Yes, that is exactly it! How do you manage to render the face by itself with an alpha channel?
Module not found error: no module named 'numpy'. Please solve this problem. One video on this topic please 🙏🙏🙏🙏
Not sure I can help. I'd refer you to the github page.