vote for my short film here! myrodereel.com/watch/8674
🗳✔
Hello, just took a look at your channel, some impressive stuff. Congrats, especially on the Fast & Furious aging video. And that's why I'm writing to you: I have solid knowledge of DFL 2, but none of FakeApp and almost none of EBSynth. So, could you make a tutorial on how you did the aging in this video, and do you think it's doable with DFL + EBSynth? If so, you would only need to explain the EBSynth part. Or link me a good tutorial for aging if one already exists. Also, whatever steps you do in post-production to get this clean and crisp look, I'd be interested to know those too (I know Premiere well, AE not so much).
SPT1 hey bro, sorry for the late response, YouTube is terrible at notifying. I will make a tutorial for sure
VOTE done brother
You got DFL hooked up
[13:40] Hit Space to switch the preview between src and dst, then P to change frames, to spot any XSeg issues.
You can use a pretrained XSeg model too for faster learning; there's one with 1M iterations available.
There is still a lot of manual work, but for those who are brave enough it seems rewarding.
18:55 and I will see you guys in a week. Lmao 😂 Great tutorial dude!
Previously, on older versions of DeepFaceLab, I never had any issues with obstructions over the face. I never had to do any custom rotoscoping, but now with DFL 2.0 it seems like anything covering the face, a hand waving, long hair, whatever it is, causes the mask to break down or go blurry. What settings have changed? The mask behind the obstruction used to stay intact. Previously it was just learned dst, but that doesn't work the way it used to.
Without any obstructions I have no problem, and the deepfake mask looks fine, but as soon as something gets in the way, what didn't use to be a problem suddenly is.
Face is OK, all you need is a new hairstyle. BTW, I'm very new to this DFL thing. Can I use a pretrained model and a pretrained XSeg mask together for SAEHD training?
Is there a faster method or program/software with better results today?
derm derm derm, u really helped me, thumbs up bro
The best tutorial I've seen on RUclips 😉
Hey, do you recommend using a pretrained model or training from scratch?
i train once then save it just in case i have to use that same face again
SAEHD is by far more time-consuming than Quick96... it takes forever to finish anything. The question is, can we use XSeg to train the masking and then use Quick96 (which is much faster), or does XSeg training only work with SAEHD?
did you ever find out?
@@Arewethereyet69 unfortunately I quit... it is so complicated and time-consuming that it isn't worth the time
Mine doesn't merge them all with the same settings. Any reason, or can you think of one, why it doesn't?
great tutorial. I'm a noob to deepfakes and have done a couple but I think xseg might be the step I was missing!
good luck man!
Your work is so good, dude, that I decided to get into this myself. However, upon starting up, I realized there's so little info out there about pretraining, so I have a couple of questions:
1. How long did you pretrain your src model for (when you did)?
2. Did you place your src model into the folder "/_internal/pretrain_faces"? If not, how did you pretrain your face model?
Did you figure this out? I'm still confused on how to use pretrained models. Do I train the celeb I'm swapping with the pretrain?? I don't understand lol
Very noob question indeed, but what are, in a nutshell, the differences between XSeg train, Quick96 train, and SAEHD train?
xseg is manual, the others are automatic
Question: can I use only images for the data source?
Good afternoon. How do you remove the blur? The deepfake creation is blurry; is there any technology for this?
JSFILMZ - once I edit the dst or src with XSeg, do I run the XSeg) data_dst/data_src trained mask - apply.bat files as opposed to the XSeg) train.bat? It appears you went directly to XSeg) train.bat instead of applying the XSeg mask first. It may not make a difference. Thanks for any info you can provide.
hey bro gotta train first before u apply
you should mask out the obstructions during the xseg part
bro i tried man but the face changes, it makes funny faces when i exclude something like a knife going across his face hahahaha. when, let's say, a hand goes across the face, i'm still trying to figure out how to fix that
Nice tutorial! What's your pc spec? Tks
Kun Hasan Hadi Ryzen Threadripper 1950, GTX 1080, it's almost 4 years old. I tried to snag a 3080 but u know how that went lol
Anastazy Staziński of course man, i'll make more as i learn
What if my laptop can't train SAEHD, only Quick96? Will the XSeg training not be used then?
I have this issue too. My laptop only has an NVIDIA GeForce RTX 40 series card with 6GB VRAM, so it can't train SAEHD.
@ JSFILMZ
Hey there. Thanks for the great tutorial.
Is there any way to make the nose of the dst model have the same shape as the src model when he looks to the side?
When the character looks to the side, his nose keeps the dst shape; I want his nose to take the src model's shape in side/profile view.
How am I supposed to do that?
yea you can increase the face style power, but be very careful, save your model first
how do you eliminate the blurriness?
iteration time, let it train longer, i think that was 800k
@@Jsfilmz Hi, mine was 976,873 iterations and still blurry. Any advice?
================== Model Summary ===================
== ==
== Model name: new_SAEHD ==
== ==
== Current iteration: 976873 ==
== ==
==---------------- Model Options -----------------==
== ==
== resolution: 128 ==
== face_type: wf ==
== models_opt_on_gpu: True ==
== archi: liae-ud ==
== ae_dims: 256 ==
== e_dims: 64 ==
== d_dims: 64 ==
== d_mask_dims: 22 ==
== masked_training: True ==
== eyes_mouth_prio: False ==
== uniform_yaw: False ==
== blur_out_mask: False ==
== adabelief: True ==
== lr_dropout: n ==
== random_warp: False ==
== random_hsv_power: 0.0 ==
== true_face_power: 0.0 ==
== face_style_power: 0.0 ==
== bg_style_power: 0.0 ==
== ct_mode: none ==
== clipgrad: False ==
== pretrain: False ==
== autobackup_hour: 0 ==
== write_preview_history: False ==
== target_iter: 0 ==
== random_src_flip: False ==
== random_dst_flip: True ==
== batch_size: 8 ==
== gan_power: 0.0 ==
== gan_patch_size: 16 ==
== gan_dims: 16 ==
== ==
==------------------ Running On ------------------==
== ==
== Device index: 0 ==
== Name: NVIDIA GeForce GTX 1660 ==
== VRAM: 4.80GB ==
== ==
====================================================
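(Note for anyone else chasing this blur: the summary above shows resolution: 128, so the swapped face is generated at 128×128 pixels and then scaled up to cover the face in the destination frame. Past a certain upscale factor that reads as softness no amount of extra iterations will fix. A quick back-of-the-envelope check in Python, where the dst face size is an assumed example:)

```python
# Rough upscale-factor check: model output resolution vs. the size the face
# occupies in the destination frame (dst_face_px is a hypothetical example).
model_resolution = 128   # "resolution: 128" from the Model Summary above
dst_face_px = 512        # assumed on-screen face height in the dst video

upscale = dst_face_px / model_resolution
print(f"The {model_resolution}px face gets stretched {upscale:.1f}x")  # 4.0x
# At 4x upscale softness is expected; raising the model resolution
# (VRAM permitting) helps more than training additional iterations.
```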
Thank you for this tutorial... please tell us something. If we make a good model of ourselves (source video), then we can reuse it every time, right? I mean, we can skip all the image-extraction steps for the source video. Is that correct?
yes u can save the model and iterations
@@Jsfilmz thanks, but how? Where is the trained model? (I use my own video as the source)
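(For later readers: DFL keeps the trained model in the workspace\model folder as a set of files prefixed with the model name, e.g. the new_SAEHD from the Model Summary pasted above. A minimal backup sketch, assuming the stock DFL 2.0 folder layout:)

```python
# Minimal sketch: copy the trained model files out of workspace/model so the
# same source face can be reused later without retraining from scratch.
# Paths assume the stock DFL 2.0 layout; "new_SAEHD" matches the Model
# Summary above but is otherwise just an example model name.
import shutil
from pathlib import Path

model_dir = Path("DeepFaceLab_NVIDIA/workspace/model")
backup = Path("model_backups/my_face_SAEHD")
backup.mkdir(parents=True, exist_ok=True)

for f in model_dir.glob("new_SAEHD*"):
    shutil.copy2(f, backup / f.name)  # copy2 keeps timestamps
print(f"Backed up {len(list(backup.iterdir()))} model files")
```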
Very good video, but I have a question about the end... Did you really spend a full week with 800k+ iterations to make a 2-3 second destination video? If you wanted to make a 5-minute video, how much longer would it take? Is it the same, or are we talking a few months?
You know, I seriously wrote out a long answer and then when re-reading the question again, I suddenly realized that you're actually/probably just teasing him so it was rhetorical and you probably already know all the wrong turns this video made without me telling you, huh?
Hi... it says Exception: Unable to start subprocesses ...
Press any key to continue ...
Then after I press Enter it shuts down ...
u need an nvidia gpu probably
Hi, after using the dst sort, I deleted some of the aligned faces, then sorted the faces again by original filename. However, the face numbers changed and no longer match the photos I extracted. Is that fine or what? Thank you.
no that sounds weird bro
Hi, in the aligned results there are 2 faces. How will I know which one belongs to the subject and not the extra person?
Gupta Flavio for your destination folder im guessing?
delete the one you dont need
can I do it on mac?
never owned a mac sorry
How do you fix the eyes looking in a different direction from the original dst?
LightWolf25 try setting eyes priority to y, then turn it off when it's fixed
@@Jsfilmz when does that option come up? Thnx
LightWolf25 during training, it's called eyes priority, it's in this video
Hi! I have an Nvidia GeForce MX110 with 2GB VRAM, can I use DeepFaceLab? Please let me know.
Please reply
prolly not bro, u can try cpu but it wont be as good
Fantastic tutorial, good sir! If you're looking for brown actors or darker skin toned people, you'll find heaps in Australian movies/trailers :)
My process -
2) extract images from video data_src
3) extract images from video data_dst FULL FPS
4) data_src faceset extract
5) data_dst faceset extract
5.XSeg) data_dst mask - edit
5.XSeg) data_src mask - edit
5.XSeg) train
5.XSeg) data_dst trained mask - apply
5.XSeg) data_src trained mask - apply
7) merge SAEHD
This is exactly what I did. Now when I start to merge, the model seems to apply the destination face itself to data_dst. What am I doing wrong?? It's not applying the source face for some reason.
you did not train the SAEHD model, you just started merging
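(For anyone hitting the same wall: the list above jumps from the XSeg apply steps straight to 7) merge SAEHD, so the face-swap model itself never gets trained and the merger has nothing to put over the dst face. Going by the stock DFL 2.0 bat names, the corrected order would be:
2) extract images from video data_src
3) extract images from video data_dst FULL FPS
4) data_src faceset extract
5) data_dst faceset extract
5.XSeg) data_dst mask - edit
5.XSeg) data_src mask - edit
5.XSeg) train
5.XSeg) data_dst trained mask - apply
5.XSeg) data_src trained mask - apply
6) train SAEHD   <- the missing step
7) merge SAEHD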
when i'm at train SAEHD and it starts, i see all the random included faces instead of my own. i dont get it, i copied the entire workflow. please help!
did u clear ur workspace?
Did it ask to use the CelebA pretrained model upon initial model creation? Say no. New to DeepFaceLab, so not sure if you can just override it on the next startup.
Where is the repo for these bat files?
Hello, why am I getting an error when I go for the XSeg training?
what gpu
JSFILMZ I am using a GTX 1070, which has 8GB VRAM. However, I still get a memory error, or it just gets stuck after I enter the batch size
Benjamin Blacher oh, is there any difference between putting the folder on the C drive versus another drive?
@@tszlokchane5505 yeap, it can't run xD. it didn't work for me at first, then i moved it to the c drive and also did other things, then it worked
@@tszlokchane5505 reduce the batch size
How to create a source face that is an average of two persons? Can I extract faces separately from two characters (each producing its own data_src), and put all the aligned face jpg files together into one aligned folder to train? Can this method create an average source model that looks somewhat like both of the characters? Is there a better method? I appreciate your answer. Thanks.
Hello, I'm building a head model. How do I find the faceset.pak for head? Thanks.
Hi Js, do you know why I get an error when I train the files at step #6? I tried all the trainers, they all give errors. Any idea?
Zimbabwe did u change any parameters? what error r u gettin
@@Jsfilmz Thank you for your reply. There are a bunch of errors; the top ones say Map memory - CL_MEM_OBJECT_ALLOCATION_FAILURE, and all the other errors point at different lines in the .py files, e.g. library.py line 131, backend.py line 1668 in variable. There are almost 12 error lines across different .py files.
@@Jsfilmz Thank you again, your channel is so informative, I shared it with a bunch of my friends.
Zimbabwe what graphics card do you have
@@Jsfilmz i have an AMD Radeon HD 8490, it's 24gb combined, 8gb internal and the rest is added from the extra card
Hey! Thank you very much for this tutorial. My question is, what happens when you have a video with two people in the scene? How do I choose which one I want to deepfake?
Ale Camps two faces as the destination or source?
@@Jsfilmz For my destination. Say, for example, I have a Harry Potter scene with Hermione and Ron, and I want my face applied only to Ron's face.
Thank you for your time and response!
Ale Camps you can do a manual extract for the dst or just crop out the other faces or blur them. If you can wait im planning on doing more tuts
@@Jsfilmz thank you very much! Of course I'll wait, definitely subscribing
@@GarageRockk When you extract faces from your destination source, you will have an extra set of photos in your data_dst/aligned folder. Delete all the ones that were detected and focused on Hermione's face. To be more precise, and to avoid accidentally deleting what you need, use data_dst view aligned debug results to see the landmarks of each image in data_dst/aligned/; any that show landmarks on Hermione can be deleted from the aligned folder. (Make sure you do NOT delete them from the data_dst folder, only the aligned folder within data_dst.) Then you can proceed with the rest of the instructions if you want, though there are a lot of steps here that are not ideal or recommended. I mention this more for anyone reading this answer later, as 4 months have passed and you may have already learned how to accomplish this yourself.
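(A scripted version of the clean-up described above, for anyone who would rather not delete by hand. It moves, rather than deletes, the aligned frames you flag as the wrong person, and it only touches data_dst/aligned, exactly as the comment says; the filenames listed are hypothetical examples you would pick yourself from the aligned debug view:)

```python
# Sketch: move aligned face crops of the wrong person out of data_dst/aligned
# so only the target face is used for training/merging. Run
# "data_dst view aligned debug results" first to decide which files to flag.
from pathlib import Path

aligned = Path("workspace/data_dst/aligned")
rejected = aligned.parent / "aligned_rejected"  # holding pen, not a delete
rejected.mkdir(exist_ok=True)

# Hypothetical examples: frames whose landmarks sit on the other actor.
wrong_person = ["00012_0.jpg", "00012_1.jpg", "00047_1.jpg"]

for name in wrong_person:
    src = aligned / name
    if src.exists():
        src.rename(rejected / src.name)
print("Done - data_dst itself is untouched, only aligned/ was filtered.")
```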
Hey man, not sure why but the XSeg training isn't working for my 3080 :( it does 1 iteration and then just stops doing iterations. No errors or anything, it just seems to be infinitely working on a single iteration. Any ideas? No support for 3000 series cards yet? It seems to work fine off my 5900X, but that obviously isn't optimized for CPUs, so it takes 2 seconds an iteration if I do that
far as i know rtx 3000 is still not supported with dfl
How much time does it take to deepfake a 15-second video on a normal i7 64-bit computer? Please...
VK TV it's a gpu u will need for it to be faster.
@@Jsfilmz Thank you. But what would be the estimated time to create 15 seconds of deepfake video on my normal i7 64-bit computer? I am curious. Can I start, or should I leave it for weeks? And what if it is only a 5 or 6 second video?
@@VKTVCHANNEL the length of the video doesn't matter.
@@PlatinNr1 I'm surprised to hear that the duration of the video won't affect the time to create the deepfake. If I use longer videos and want to create a longer deepfake, doesn't it take longer? That's why I asked about the time to create a minimum-length video with minimum resources, as you understood.
@@VKTVCHANNEL the time-intensive process is training the model (replacing the face); the length of the output video doesn't matter for that.
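(To put rough numbers on that: training cost is iterations × seconds per iteration and is independent of clip length; only the merge scales with frame count. A worked example with purely illustrative speeds:)

```python
# Illustrative numbers only - iteration and merge speeds depend on hardware.
train_iters = 800_000   # ballpark iteration count mentioned in the video
sec_per_iter = 0.75     # assumed GPU training speed
print(f"Training: ~{train_iters * sec_per_iter / 3600:.0f} h, "
      "no matter how long the clip is")           # ~167 h, about a week

fps, sec_per_frame = 30, 0.5                      # assumed merge speed
for clip_sec in (15, 5 * 60):
    merge_min = clip_sec * fps * sec_per_frame / 60
    print(f"Merging a {clip_sec}s clip: ~{merge_min:.0f} min")
```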
Sir, can we make a deepfake with an Intel® Core™ i3-9100F processor (6M cache, up to 4.20 GHz) along with an Nvidia GTX 1650 Super graphics card?
yes im sure you can, if not try a cpu one
@@Jsfilmz I am very new to this deepfake stuff. How do I run it on CPU, sir?
You haven't given a clue what program to get to start this. Where do we get the program?
its in the thumbnail brosky, its called DFL 2.0
you MASSIVELY overfit your model. 800k iterations is way too many. Always look at the graph, and once the yellow curve starts moving up instead of down, you have trained for too long. The more data you have, the longer you can train before this happens, though.
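(That stopping rule - quit once the loss curve turns back upward - can be automated if you log the loss values yourself. This is not a built-in DFL feature, just a minimal sketch of the idea:)

```python
# Minimal early-stopping idea: compare the average loss over the most recent
# window with the window before it; a sustained rise suggests overfitting.
def should_stop(losses, window=1000, tolerance=1.02):
    if len(losses) < 2 * window:
        return False
    recent = sum(losses[-window:]) / window
    earlier = sum(losses[-2 * window:-window]) / window
    return recent > earlier * tolerance  # >2% rise = curve moving up again

# Usage sketch with a fake loss history that bottoms out and then climbs:
history = [1.0 / (i + 1) + max(0, i - 3000) * 1e-4 for i in range(5000)]
print(should_stop(history))  # True - time to stop training
```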
Looks like an Indonesian person. 🤔
Please make a deepfake video (one that doesn't require a graphics card)
Followed all the steps from different videos. Nothing works. Errors are all I get. And my pc is pretty, pretty fast! ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[20,128,320,320]
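(That OOM shape is readable: [20,128,320,320] is [batch, channels, height, width], so the first number says the trainer was run with batch size 20. The quickest fix is the one given earlier in the comments, lowering the batch size. Rough size of just that single tensor, assuming float32:)

```python
# One float32 tensor of shape [batch, channels, H, W]:
batch, ch, h, w = 20, 128, 320, 320
gib = batch * ch * h * w * 4 / 2**30
print(f"{gib:.2f} GiB for this single allocation")  # ~0.98 GiB

# Memory scales linearly with batch size, so halving the batch halves it
# (training also keeps many other tensors alive at the same time).
print(f"batch=10 -> {gib / 2:.2f} GiB")
```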