I don't know if you've tried this already, but if you select the source file from the source selector inside FaceFusion, you can select a whole faceset. For example, you can take the source video into DeepFaceLab, extract all the frames into individual files, and later extract the faces so you have your faceset. This faceset can be fed into FaceFusion as individual files (PNG, JPG, etc.) and improves the consistency of the results. The video is great man ;)
Wow, I'm not sure I understand, but it really sounds interesting. Could you elaborate?
@@alexvillabon Sure. My idea comes from my former use of DeepFaceLab to create these videos. The idea is to get more consistency in the result on your target video and make the AI imagine less by giving it more points of reference: if instead of feeding FaceFusion one image you feed it many, it can draw on thousands of references to create the faces instead of just one.
The process would be as follows:
1. Let's say you have two videos, src (source) and dst (destination).
2. You want the face of src on the face of dst.
3. Take the src video and put it into DeepFaceLab's workspace folder.
4. Run "extract images from video data_src.bat" (this separates every frame into an individual file; see the ffmpeg sketch below this comment).
5. Then run "4) data_src faceset extract.bat" (this extracts the detected face from every file, at the size you specify when you start the process).
6. Then clean out bad or repeated pictures to avoid redundant faces.
7. Then open FaceFusion, click to select the source file (or files), and it will let you feed it many of the pictures you extracted. I don't know the exact limit, but I tried with approximately 1,800 and it worked fine, with the downside of the time it takes to move all the images.
Then select the target video, as you say, and do everything you did before.
I did notice an increase in the overall quality, at least.
Sorry if I didn't explain myself well enough; my language is Spanish.
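For reference, the frame extraction in step 4 doesn't strictly need DeepFaceLab's .bat script; plain ffmpeg does the same job. A minimal sketch, assuming ffmpeg is installed and on your PATH (the file and folder names are placeholders):

```python
# Minimal sketch of the frame-extraction step, assuming ffmpeg is installed
# and on PATH. "src.mp4" and "workspace/data_src" are placeholder paths.
import pathlib
import subprocess

src_video = "src.mp4"                          # the source-face video
out_dir = pathlib.Path("workspace/data_src")   # where the frames will land
out_dir.mkdir(parents=True, exist_ok=True)

# Dump every frame as an individual PNG, numbered 00001.png, 00002.png, ...
subprocess.run(
    ["ffmpeg", "-i", src_video, str(out_dir / "%05d.png")],
    check=True,
)
```

The face detection itself (step 5) still needs DeepFaceLab or a similar extractor; this only produces the per-frame images to feed it.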
Woah, that is amazing! I had no idea I could import more than one image for the person's face! How does it decide what to use from each picture, though? This is so interesting and helpful, thank you for sharing.
@@alexvillabon I'm "guessing" it works like a deepfake, i.e. it tries to reach the output using all the inputs. In DeepFaceLab we'd shoot the source face in various lighting conditions. Just a guess. Also, would Comfy allow us to run Expression Restorer before Face Enhancer?
cheers
@@behrampatel4872 Turns out the processes happen in the order you select them. As for the multi-image input, I spoke with the developer and he let me know 5-6 images is the sweet spot. After that amount it doesn't really make a difference.
I only got FaceFusion 3.0 a few days ago and your video explains how to use it. It's a great tutorial, and you have a new SUB.👍😁
Happy to hear it helped! Thanks for the sub :)
Really cool, thanks for creating and sharing!
Fantastic tutorial!
Great video man, thanks
FaceFusion will stack up the processes based on what you selected first, and so on. For me, I deselect everything, then select in this order...
Face Enhancer
Face Swapper
This will make FaceFusion run them one after the other.
Is the output better than what is shown?
Oh! Thank you, I didn't think of trying that.
Yes, the order in which you activate the processors defines which one is calculated first. What makes more sense in my experience:
1. Face Swapper
2. Expression Restorer
3. Face Enhancer
If you want to use Lip Syncer or Age Modifier, put them before or after Expression Restorer, but before the Face Enhancer.
It definitely makes a difference on closeup shots, as does using the 512x512 or even 1024x1024 Face Swapper Pixel Boost.
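To illustrate why the activation order matters, here is a minimal sketch of the general idea; it is not FaceFusion's actual code, and the function names are hypothetical, but the principle is the same: processors form a sequential pipeline, each receiving the previous one's output frame.

```python
# Illustrative only -- not FaceFusion's internals. Shows the idea that
# processors run one after another in activation order, each transforming
# the frame left behind by the previous processor.
from typing import Callable, List

Frame = bytes  # stand-in type for a decoded video frame

def run_pipeline(frame: Frame, processors: List[Callable[[Frame], Frame]]) -> Frame:
    for process in processors:  # activation order == execution order
        frame = process(frame)
    return frame

# The order recommended above (hypothetical callables):
# result = run_pipeline(frame, [face_swapper, expression_restorer, face_enhancer])
```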
Another great video, thanks 👍
Difference key with some erode/dilate and blur should give you a reasonable matte to play with, not as good as the real thing, but at least a start. Thanks for making these. Interesting stuff.
That is a way around it. I actually tested it, but the reason I didn't mention it is that the matte is barely usable because of the differing compression artifacts.
@@alexvillabon Ah, very good point, with compressed footage, which so few of us normally work with! You still couldn't choke out the artifacts or clip them with a black or white clip? Perhaps frame blending or noise removal first would help. But it looks like the developer is on to this given the earlier comment below. (That face exporter proof of concept video)
Yeah I saw that! Having a face matte option would be amazing.
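For anyone who still wants to try the difference-key route described above, here is a rough OpenCV sketch; it assumes opencv-python and numpy are installed, both frames are the same resolution, and the threshold and kernel sizes are guesses to tune per shot (compression artifacts, as noted, will add noise).

```python
# Rough sketch of a difference-key matte: absolute difference between the
# original and swapped frames, thresholded, eroded/dilated to choke out
# stray artifact pixels, then blurred for a soft edge.
import cv2
import numpy as np

original = cv2.imread("frame_original.png")   # placeholder paths
swapped = cv2.imread("frame_swapped.png")     # must match original's size

# Per-pixel difference, collapsed to a single channel
diff = cv2.absdiff(original, swapped).max(axis=2)

# Hard matte; the threshold of 12 is a per-shot guess
matte = (diff > 12).astype(np.uint8) * 255
kernel = np.ones((5, 5), np.uint8)
matte = cv2.erode(matte, kernel)               # choke out noise specks
matte = cv2.dilate(matte, kernel, iterations=2)  # grow back the face region

# Soft edge
matte = cv2.GaussianBlur(matte, (21, 21), 0)
cv2.imwrite("matte.png", matte)
```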
Such a good video, thank you!
For the processors, it is the order in which you check them that determines the execution order. You can then decide for yourself which order you want for your method.
Yes, thank you. I had no idea. I will do a followup video where I cover some new things I have learned.
@@alexvillabon I'm curious whether Face Editor can animate things like blinking eyes or a moving head, because if you set it, the head and expression just stay in that position.
@@VesuviusAntaria I'm actively looking into that. Otherwise it seems like a feature for images only.
@@alexvillabon Maybe it works with the live webcam to express extra facial expressions? Ideal for CGI animation for movies.
I have run into a major issue: I have an RTX 3070 Ti but it doesn't show the CUDA option, only CPU. Any fix for this?
ABME might also be a good alternative to Topaz or RIFE for the frame interpolation.
Yes. I'm a big fan of RIFE. In my follow-up FaceFusion video I do the interpolation with RIFE.
@@alexvillabon have you tried ABME in Cattery? It only does 2X but since you are dropping 50% of the frames it might be worth a try if you haven't already.
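If neither RIFE nor ABME is at hand, ffmpeg's built-in minterpolate filter is a free way to test the idea, though generally below RIFE or Topaz in quality. A sketch, assuming ffmpeg is on PATH, with placeholder file names and an example target frame rate:

```python
# Motion-compensated 2x interpolation with ffmpeg's minterpolate filter,
# e.g. taking a 25 fps swap result back up to 50 fps.
import subprocess

subprocess.run(
    [
        "ffmpeg", "-i", "swapped_25fps.mp4",
        "-vf", "minterpolate=fps=50:mi_mode=mci",
        "interpolated_50fps.mp4",
    ],
    check=True,
)
```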
Thanks for the video, man. A question: what about the face swap model? Which one do you prefer?
Thanks. Unfortunately there is no one perfect model. You have to try many for your footage and see which one gives the best results.
I've been running FaceFusion 3 in Pinokio for about a week now. I love it. The ONLY thing is I don't have the CUDA option at all. I have CPU and DIRECTML.
That's strange. Might be a Pinokio thing. Consider installing the non-Pinokio version and see if that fixes it? Only thing I can think of.
@@alexvillabon I'm just going to leave it as is. It works for me, it just takes a while, but I'm retired and I have all the time in the world.
@@dalecorne-new-mtv Love to see that even older folks are using my software. 🙏
AMD graphics card?
@@Sombralhom Radeon RX550...cheap and crappy
ModuleNotFoundError: No module named 'numpy'. Please solve this problem. One video on this topic, please 🙏🙏🙏🙏
Remove the conda environment and start over with a fresh one based on Python 3.10 or higher.
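Before rebuilding the environment, it can be worth confirming which interpreter is actually being used and whether numpy is visible to it; a quick diagnostic sketch:

```python
# Quick diagnostic for "No module named 'numpy'": check which Python is
# running, then try the import. If it fails, install numpy into this same
# environment (pip install numpy), not system-wide.
import sys

print(sys.executable)   # path of the interpreter being used
print(sys.version)      # should be Python 3.10+ per the advice above

try:
    import numpy
    print("numpy", numpy.__version__, "found")
except ModuleNotFoundError:
    print("numpy is missing in this environment")
```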
Warning: the prerequisite for Pinokio is a ton of SDK files and dev tools, tons of tools scattered across your machine that you might not want.
Any alternative?
@@maxvorobyev9596 Sure. DIY install. Forge runs way better if you just install it from git.
Oh, and sorry, FaceFusion 3.0 in this case. Yeah, just install it from Git. Make sure you have those prerequisites in a local directory. Ask ChatGPT to explain it to you. You don't need hard installs to run most C++ stuff; you can just have the part you need.
Not true. Everything is in the Pinokio folder.
Have you done any experiments swapping 2 faces in the same shot, or swapping the same face over multiple shots (same face from different angles in the same clip)?
Please do something 🙏🙏😢
Hi! Thanks for the video. I'm facing a problem with side views and don't know how to fix it. It always flickers and tries to put a front face into the side view. What can I do about it?
Any way to get this to render out at higher bit depths? It seems to do 8-bit only.
What does "Execution thread count" mean under Execution Provider, and what is the best value to select for CUDA? Can anybody help?
Tried installing on a new MacBook Air and the install went perfectly until the last required cloudflared module. It won't finish installing and I can't find a fix.
@@stevecalabro I’d recommend you go on their discord and ask for help there.
@stevecalabro To install the cloudflared module on your Mac, you will need to install the Xcode developer tools.
I just get errors when saving the frames during the process. Any fix?
Thanks. I am not finding 'cuda' next to 'cpu' in the execution providers! What could be the reason for not having the 'cuda' option? Is it because of an installation issue, or because I am using FaceFusion 3.0.1? Please let me know.
@@sajib333 My guess is you don't have a compatible GPU. Post on the GitHub or Discord for more help.
@@alexvillabon My computer has 'Intel(R) Iris(R) Xe Graphics'. Do you think not having NVIDIA in my laptop could be the reason for not seeing the 'cuda' option?
@ Yes, CUDA is an NVIDIA feature.
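FaceFusion runs its models through ONNX Runtime, so a quick way to see why 'cuda' is missing is to ask ONNX Runtime which execution providers it has. CUDAExecutionProvider only shows up with an NVIDIA GPU plus the onnxruntime-gpu package and a matching CUDA/cuDNN install:

```python
# List the execution providers ONNX Runtime can actually use on this machine.
# On Intel Iris Xe (no NVIDIA GPU) this will not include CUDAExecutionProvider.
import onnxruntime

print(onnxruntime.get_available_providers())
# e.g. ['CPUExecutionProvider'] -- no CUDA option without an NVIDIA GPU
```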
Is the Pinokio UI still local by default? Or do I need to click on the local URL?
Yes, still local.
Code for ffmpeg, please?
One big issue with FaceFusion is that it can't handle very sharp angles, like when somebody is turning around. The face reverts back to the original for a second, which ruins the whole video. I'm still waiting for a better tool that's as easy to use.
@@slow-moe3518 Absolutely. It's early days for this tech.
How do I use this on Colab?
They banned FF for a while; try RunDiffusion.
FaceFusion 3.0 is not very AMD friendly... it constantly crashes.
You forgot to mention that it doesn't work (theoretically) with a naked woman 🙂
I’m glad you told us that. I would have wasted time downloading it, only to be disappointed.
Bummer
@@tim9778 I said theoretically, but practically it works for a naked woman. Dig a little on YouTube.
FaceFusion is heavily censored, by the way.
@@aegisgfx In what way? It seems to have no issues with celebrity faces.
@@alexvillabon Any nudity, even cleavage, and it won't work at all.
@@alexvillabon try nudity
You mean NSFW? You can change 1 line with the editor and it's not censored anymore. Trust me 😉
@@roy5181 which line?
Wow, it never gets to the render process. Waste of time.
One thing I'll say: it's not better than Roop Unleashed. See the results; nothing great. Anyway, it's OK for basic face swaps.
Good to know. Is Roop compatible with video as well?
@@alexvillabon Yes, it's compatible with both pics and videos.
@@kalayan705 Nice. I'll check it out.
But these tools really seem to be illegal.
😂 It's like saying Photoshop is illegal because we can do the same thing (just with a lot more time).
FaceFusion is heavily censored, so it does not need to be regulated.
Creator of FF here... so you are asking for this? watch?v=qwDqm1wpEEs
Hey Henry. Yes, that is exactly it! How do you manage to render the face by itself with an alpha channel?
Now in our buymeacoffee shop - free for members.
ModuleNotFoundError: No module named 'numpy'. Please solve this problem. One video on this topic, please 🙏🙏🙏🙏
Not sure I can help. I'd refer you to the GitHub page.
Missing Python module. I'd suggest manually installing numpy into your Python environment and it should work.
Stands for "Numerical Python".