Boss, seriously, you are taking teaching to a different level. Can't believe we are getting learning like this for free.
Glad to hear that.
Is this course available for free!?😱😱😱
@@athiamaanr Did you pay to view this video? /s
@@tunesoop7653 well, he actually paid with his "view" and Data. :-P
@@murtazasworkshop At 33:20, I leave it blank as you say for the interactive merger and press Enter; for number of workers I also leave it blank and press Enter. It then says collecting alignments and computing motion vectors and hits 100%, but the grey merger box with all of the buttons to manipulate does not pop up anywhere for me. Do you know what may be causing this? Thank you.
This is epic. For a person to learn all of this by themselves, it would take anywhere from a couple of weeks to months. You distilled all the information into just a 45-minute video! You are really a generous genius. Hats off to you.
Very well explained, Murtaza sir... I really appreciate getting your knowledge... Thanks...
Thanks so much for your tutorials! Just started understanding this field of programming and your tutorials are making it so much easier!!!
Great Work Bhai...
Keep it Up..
Day by day you are becoming an inspiration for me to join the AI field.....
God Bless You.
Love from 🇵🇰
Very nice tutorial - lots of ins 'n' outs to DeepFaceLab, and your tutorial was one of the more helpful ones, thanks!
Bruh, thank you very much... But add a disclaimer that no one should use this to do wrong to someone.
Thanks will do.
@@murtazasworkshop Yeah
Brother,
Every day I wake up and check the notifications to see whether you have uploaded any videos!!😍❤️
At 33:20, I leave it blank as you say for the interactive merger and press Enter; for number of workers I also leave it blank and press Enter. It then says collecting alignments and computing motion vectors and hits 100%, but the grey merger box with all of the buttons to manipulate does not pop up anywhere for me. Do you know what may be causing this? Thank you.
For anyone else having this issue: increase your pagefile to 4x your RAM, e.g. for 32 GB of RAM, make your pagefile 128,000 MB. This fixed the issue for me.
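For anyone who wants to sanity-check that sizing rule, here's a tiny Python sketch (the 128,000 MB figure above is the same 4x rule rounded to decimal megabytes; Windows measures the pagefile in binary MB):

```python
def recommended_pagefile_mb(ram_gb: int, multiplier: int = 4) -> int:
    """Suggested pagefile size in MB: `multiplier` times installed RAM."""
    return ram_gb * multiplier * 1024

# 32 GB of RAM -> a 4x pagefile of 131072 MB (~128,000 MB as in the tip)
print(recommended_pagefile_mb(32))
```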
Well explained. The best I have seen so far on this topic. Thanks so much.
This is gold! Many thanks from Nairobi, Kenya
Your accent sounds very pleasant and clear, didn't expect that ;)
My dear sir, you are a savior. Thanks for this tutorial.
Thank you sir for such a great tutorial love from Delhi
Thanks so much for this in depth tutorial. I was finally able to complete the example, and I am ready to try my own!
This is the best tut ever. Even the original source doesn't have this kind of tut.
This is a very straightforward and detailed video, very well done and straight to the point. Thank you for the help!
Not sure if somebody mentioned this before... but the elephant/lion metaphor for understanding the encoder/decoder relationship needs one more step to make sense.
After the student learns to draw an elephant,
missing step --> "we teach him how to draw a lion".
Now, when we draw an elephant in a certain pose, the student should be able to draw a lion in a matching pose.
Great video overall. Thanks a lot and keep it up!
Love the videos, interesting and cool. Love the sense of humor too.
Excellent, sir.. I am following and practicing all your teachings. A wonderful collection, sir.
Thanks! This video was very helpful. There was a bit of confusion with some things, like the "up arrow" key on the keyboard in the diagram: it is actually the right-hand Shift key, which has the little up arrow on the American English keyboard.
When he says to switch screens during the merging process, it sounds like he is saying "press on top", but he is actually saying "press on Tab". The Tab key typically just says Tab in English and we never look at the little arrows on it, so I was thrown off. Haha! I'll share more little quirks as I go through it to help other English-speaking Americans.
A little tip: the software doesn't work for me when I use #7, "merge", and select GPU; I have had to use CPU. I also had to update my video card driver to get it to actually use the GPU during the initial process where it scans the faces. It was worth it to get the GPU working instead of the CPU; it is about 20 times quicker. I let it run on GPU for 20 hours and it reached 28,000 iterations. I am getting about 5,000 iterations per hour with my Nvidia GTX 1660 and an Intel i7-4770.
@@markfothebeast great advice! Thank you for sharing!
Crazy level content. AND Free. Great Job Man 🙌
This is super helpful and you are so generous to teach us for free!! Kudos to you!!❤
Nice videos, love it! Gonna use this for my final year project.
Good luck
Amazing Mr. Murtaza. Perfecto
When I did the step at 23:21 as the video said, I chose my GPU (GTX 1060), but it said "caching GPU kernels" and then after 3 seconds nothing happened; at the end it said "exception" or something. What should I do to solve this problem?
caching CPU kernels...
Error:cudaGetDevice()failed.
...
help me XD
I have successfully run 1 lakh (100,000) iterations, but when I run merge Quick96, it does not open anything. The command prompt shows RUNNING CPU36 and it's stuck there for hours. What might be the problem?
I'm having the same issue. Did you ever figure this out?
Wow am super pumped to learn this
Thank you very much for this incredible tutorial
When I opened the faceset extract... it is not showing any options for CPU or GPU... it's taking default values and saying it's unable to process 😞
Does it let you choose or not? Can you write your specs?
@@ab1577 There's no option to choose between CPU and GPU in the command prompt... it's asking me for FACE TYPE directly...
@@ab1577 HP EliteBook 840 G1, Intel Core i5 (4th Gen), 8 GB RAM, 1 TB
@@charan1969 Hi mate, this laptop contains an AMD Radeon GPU, but this lab uses NVIDIA; that's why it lets you use only the CPU for the moment.
@@ab1577 Any alternative solution ?
Was deeply waiting for this tutorial, thanks man..............
Best tutorial about DFL. I learned about the new functions; I did not know that I need to press Shift + / to apply the config to all the other frames.
Thank you so much for this tutorial..... this is so amazing...
Hello Murtaza, thanks for the nice, understandable explanation. I would like to ask you: I am working on a short documentary about my grandfather, who passed away a long time ago. Is there any chance to use a static photograph of his face and put it on the face of the actor (me)? Or is it possible to pick up different angles of his face from several old photographs, make an image sequence or video sequence out of them, and use that? Thank you.
Already the second time I'm watching this and it's so amazing! One question: what would happen if I choose "head" in the wf settings? Can you do a tutorial on that, since this function seems pretty underrated? Thank you so much. :)
Thanks a lot!! Does the Reface app use the same model??
Hi, does the lighting of the person's face in the source video have to be similar to the lighting in the merging video for best results, or would a brightly lit source face work just as well?
This is the best video I've watched so far on the topic. Learning this process, I have a question: how can a mobile app take just a single photo and achieve a really good deepfake in just a few seconds? I know all the cloud processing stuff, but there must be something else, right?
I thought about this as well!
Great, bro!!! Can you work on a deepfake detection one?
Murtaza, I can't train on the GPU. It always trains with the CPU and this takes very long. I have the GeForce GTX 650 4 GB. Any idea how I can train with the GPU?
Same problem; everything is fine for CPU but not working for GPU.
@@gigachadkartik I think you need to upgrade. GPUs below a certain level aren't compatible. I had a 750 and it wouldn't work either. I installed a 1660 Ti and... boom.
Great tutorial.. have been waiting for it for a very long time.
Very good video, it covers the basic aspects so the result turns out very decent!
Do a video explanation of xseg markings and training
With pretraining in SAEHD, you know where your model files are, you can keep them for later, people share pretrained models, etc. What does pretraining in XSeg do? What's the difference from SAEHD's pretraining? Where are the model files for an XSeg pretrained model? And how come people don't seem to exchange these models like they do SAEHD models? Finally, SAEHD trains the source model; does XSeg also pretrain just the source model?
Hey! Thanks for the tutorial! One thing I'm wondering is: can you pause the encoding and generating of the faces? Let's say I have to shut down the computer. Does this work only with Quick96?
Incredible great video. Thank you for sharing.
I checked - everything is clean
Great man, great video tutorials. Thanks a lot sir
Hi! I always get "ValueError: cannot convert float NaN to integer" when doing Quick96. Is there a solution?
What is the error? Have you tried using the CPU instead of the GPU?
@@ab1577 yes
@@TREXHUNTERX What version of the lab are you using? And what is the spec?
@@ab1577 I tried it on the latest version and on 1.0. Both resulted in a "ValueError: cannot convert float NaN to integer". GTX 1060 6GB, Ryzen 3 3100.
@@TREXHUNTERX I've seen another guy have that; it went away after he enabled auto backup. Try that and let us know. Also, make sure the driver is up to date. Anyway, something in the code of the bat file doesn't run correctly; I will check it out. Cheers.
Very clear and understandable. Thank you
Thank you very much. You explain the algorithms very well. You are a great man.
Congratulations Bro!!
Please make a tutorial on lip sync from audio. It will be very helpful to us. BTW, great video; it helped me a lot.
Help, for some reason the source face is not showing in the merging tab. Can someone please help me? Thanks in advance.
Wow, you are awesome, and you should also put your website link in the description, bro. Thanks :)
Very nice video and well explained, thank you... BUT please tell us: what if the destination video or our source video has multiple faces (I mean, there are faces of other people inside the video footage)? What do we do?
yes same question
@@robomanthan6122 I think I found a solution. If you have a video with more people in it, just open it in Premiere or other video editing software and "hide" their faces with a mask or whatever, leaving only the target face. I tried it and it worked. Then you have to do some work on the result.mp4 in a video editing tool to "bring back" the other faces...
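If you'd rather not use Premiere, the same masking trick can be scripted. A rough sketch with Python and ffmpeg (assumes ffmpeg is on your PATH; the function names and coordinates are hypothetical, and you'd need to find the right region for your own clip):

```python
import subprocess

def drawbox_filter(x: int, y: int, w: int, h: int) -> str:
    """Build an ffmpeg drawbox filter that fills a rectangle with black."""
    return f"drawbox=x={x}:y={y}:w={w}:h={h}:color=black:t=fill"

def mask_region(src: str, dst: str, x: int, y: int, w: int, h: int) -> None:
    """Black out one region (e.g. a second person's face) for the whole clip."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-vf", drawbox_filter(x, y, w, h), dst],
        check=True,
    )

# Hypothetical usage: hide a face near the top-left corner of data_dst.mp4
# mask_region("data_dst.mp4", "data_dst_masked.mp4", 50, 40, 200, 220)
```

Note this only works if the unwanted face stays in roughly the same spot; for a moving face you'd still want a tracked mask in an editor.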
@@georges8408 Good, thanks, I will try it.
Do all the source images have to come from the same video, or could we merge a few videos for a better variety of angles and lighting to get a more realistic output?
About the aligned_debug folder: do we delete the non-face images from this folder? Thank you for the video!
I would love to see an explanation of the different training algorithms
Really good content! Can you make a tutorial for Aliaksandr Siarohin's First Order Motion Model?
Nice work man, thank you so much!
I have a question about training the models: how can I make it run automatically? Because after an hour of holding the P button, it's only at 3,500 iterations.
Also, do you know why sometimes during the merge part the program just gets stuck and stops responding?
And a last question: what do I do if I have a different camera angle and the face I want to change is really small? In that case the program doesn't work and the face stays original.
I'd be really glad if you have time to answer. Thanks for the great work. Peace!
So it's cuda _only?_
Thanks for this learning video. Can I use existing images directly instead of extracting them from a video file?
I'm stuck at step no. 5. I can extract faces from src but can't do that from dst. I'm using the CPU; I have Intel UHD, and none of them work. The system error message says "CPU doesn't respond, terminating it"... can you help me?
28:02 MINE KEEPS CRASHING HERE! HELP!!! "Python has stopped working" 😭
Hey Murtaza. Thanks for sharing. Wanted to know how it is possible to extract the "hair" features of the source - in addition to the face's - and then merge it to the destination. As you can see in the final result, the original hair - belong to Musk - was retained. Thanks
I think (I'm not sure) that's what "head" is for, but again, I'm not sure.
If you want to extract hair, you need to use "head" and XSeg. You outline the person including the hair (even long hair), then train in XSeg and it will include the hair. The more you outline, the more it will include, like earrings. It's a more detailed outline that you need to do manually, making polygons in the XSeg editor: double-click "XSeg) data_src mask - edit" and the editor will open up. Click the arrows to go forward and back, and outline as few or as many frames as you want. Then do some XSeg training with "5.XSeg) train" (it trains on both the src and dst aligned pictures; you can only do hair in aligned pictures). Keep repeating, then run SAEHD training. Check other videos for using SAEHD.
Can someone tell me: I was training SAEHD and my laptop turned off because it ran out of charge. Will my progress be saved or not?
Thank You So Much My Brother😪🥺🥺
Is it possible to change the words of the speaker or does this only work for the original audio?
Nice tutorial. Is Quick96 faster or better than SAEHD? I've not had great success so far. I downloaded some footage off the web and did the whole procedure, except with SAEHD. I let it run overnight and had 200k iterations, but it was still quite poor.
Also, I just heard recently that running DeepFaceLab wears out HDDs quite fast due to all the extra use, which is quite worrying. One person commented that they had gone through two Samsung Pro SSDs in a year!
I'm wondering what happens if you were to put multiple videos in the src folder. Say (Sylvester Stallone) Rocky 1, 2, 3, Rambo, etc. I know you have to block out other faces...
Also, what about images? Dropping a load of high-res Google images of Stallone's face?
thank you so much! Your tutorial is so perfect.
Can I use the created model for other video swaps as well? I mean the same faces but different videos; can we do that?
Does it work on a MacBook?
I couldn't find this project on your new website :(
Thank you. What about applying the cfg to the previous frames?
You are my best teacher.
Does anyone have a further developed version or know of a more advanced alternative to the DFL?
I assume it's best to use videos with matching frame rates? Or does it not really matter as long as your destination frame rate is your preferred frame rate?
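In case it helps, you can check a clip's frame rate before extracting. A small sketch using Python plus ffprobe (ffprobe ships with ffmpeg and reports the rate as a ratio string; the `video_fps` helper is illustrative and assumes ffprobe is on your PATH):

```python
import json
import subprocess
from fractions import Fraction

def parse_fps(rate: str) -> float:
    """ffprobe reports frame rate as a ratio, e.g. '30000/1001' for 29.97."""
    return float(Fraction(rate))

def video_fps(path: str) -> float:
    """Ask ffprobe for the first video stream's frame rate."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_streams", "-select_streams", "v:0", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_fps(json.loads(out)["streams"][0]["r_frame_rate"])
```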
What happens if I don't have videos for data_src, just pictures? Can I place them in the folder and then extract the faces, or is there a better way?
I'd appreciate it if someone could explain what to do if I only have pictures for data_src.
Also, about data_src: if I add a video, does it matter if the video duration is less than data_dst's? Does it only matter to get the faces from _src right?
Ty.
Sir, this is amazing, I can't believe it.
Wait, so if you use Quick96 instead of 5.XSeg, you don't need to manually do any of the data_src/dst mask editing?!
Please can you help me? I didn't see all those files when I opened it up; I only saw the internal, userdata and "run DeepFaceLab" folders.
Hello, can I use DeepFaceLab with my 10th-gen i3 CPU with UHD Graphics 650? Please reply.
What GPU are you using? Also, tell me the recommended PC configuration for this kind of course. What PC configuration are you using?
He is using dual GeForce GTX 1080s [SLI], 16 GB GDDR5X (8 GB each).
It says so on his site.
Hi, I couldn't get the course on your website. How can I get it?
I have 6974 frames in dst. Do I have to MANUALLY go through them and select every frame that is blurry or isn't a face, or is there a way that lets me sort them?
No, just make sure the first frame looks good after you apply the settings. Then press Shift + ? to apply these settings to all the frames, and then press Shift + >. Let it process and then continue to the next step of the merging.
Hey bro, I just wanted to ask: are you going to make an intermediate tutorial about deepfakes?
Like, for example, how the manual option works.
Anyway, new subscriber here; hope you have a pleasant morrow!
My thanks to thee.
Hi there. Where are the course notes? The link redirected to another site and I can't find them.
Thank you! I have an odd problem. I renamed the original source video file data_src.mp4 to data_src.mp4_BAK. Then I brought my new source video into the folder and renamed it data_src.mp4. When I ran "2) extract images from video data_src.bat", it extracted the frames from data_src.mp4_BAK (the one with Iron Man). When I deleted the data_src.mp4_BAK file and then ran the .bat again, the command prompt said input_file not found. This is really strange. Has anyone had this issue? P.S. I was able to successfully extract the frames from my new data_dst.mp4.
I re-extracted the exe into a new folder and tried again and now it works.
What if you have 2 faces in the destination video? How do I apply the source face to only 1 of the destination faces?
So nice👏👏
Is it possible to change my training settings for SAEHD after I have already saved the model? If so, how?
I've uncovered a bug and a workaround, if anyone else is having the same issue with the current build as of 2/1/21.
For some reason, I need to keep pressing S or P in the training window every few minutes. If I don't, the program will stop training and become unresponsive to the S, P, and Enter commands (however, the space bar still works). I don't know for sure, but it appears to affect any command sent to the Training Preview that gets passed on to the command prompt. If that happens, all the new training information is lost because I can't save it and need to force quit.
The workaround I've come up with is using Pulover's Macro Creator (a free automation app) to just send P (wait a few seconds) then S to the Training Preview window every 1-3 minutes, so that I can go AFK and let it run while I sleep, eat dinner, work out or whatever. It's annoying, but I can't complain too much since the software is free and producing good results.
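If you don't want to install Pulover's Macro Creator, the same keep-alive idea can be sketched in a few lines of Python with the pyautogui package (a hypothetical stand-in, not an official fix; note pyautogui sends keys to whichever window has focus, so the Training Preview must stay focused):

```python
import time

KEEPALIVE_KEYS = ("p", "s")  # refresh the preview, then save the model

def keepalive_loop(interval_s=120, gap_s=3, send=None, max_cycles=None):
    """Send P, pause, send S, then wait `interval_s` seconds and repeat.

    `send` defaults to pyautogui.press; pass any callable for testing.
    """
    if send is None:
        import pyautogui  # pip install pyautogui
        send = pyautogui.press
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        for key in KEEPALIVE_KEYS:
            send(key)
            time.sleep(gap_s)  # brief pause between the two keystrokes
        time.sleep(interval_s)
        cycles += 1

# keepalive_loop()  # run until you stop it with Ctrl+C
```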
With a medium thin mustache Elon kind of looks like Robert Downey jr
I am using Windows 11 and pressing Enter does not stop and save the model! I had to close the window to stop. Also wondering: if you have multiple faces in a destination file, can you select one?
I don't get the point of using a common encoder. Does it mean we have to use the same encoder architecture, or extract the actual encoder from the trained autoencoder_1 to use in autoencoder_2?
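For what it's worth, my understanding is that it is the actual trained encoder that is shared, not merely the same architecture: both autoencoders train against one encoder, so its latent code ends up capturing pose and expression, while each decoder learns one identity. A toy numeric sketch of the idea (plain Python, random untrained weights, purely illustrative and not DFL code):

```python
import math
import random

random.seed(0)

def rand_matrix(rows, cols):
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

def matvec(m, v):
    return [sum(w * x for w, x in zip(row, v)) for row in m]

LATENT, PIXELS = 8, 32
W_enc = rand_matrix(LATENT, PIXELS)      # ONE encoder, shared by both faces
W_dec_a = rand_matrix(PIXELS, LATENT)    # decoder that renders face A
W_dec_b = rand_matrix(PIXELS, LATENT)    # decoder that renders face B

def encode(x):
    """Shared encoder: the same weights no matter whose face goes in."""
    return [math.tanh(v) for v in matvec(W_enc, x)]

x_a = [random.gauss(0, 1) for _ in range(PIXELS)]  # toy "face A" frame
z = encode(x_a)                 # identity-agnostic pose/expression code
swapped = matvec(W_dec_b, z)    # decode the SAME code with B's decoder
```

The swap at the last line is the whole trick: because the code `z` was produced by the shared encoder, decoder B can render face B in face A's pose.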
Where's the software that has a UI where you can select source data and destination data, click run, and get what you want? DeepFaceLab looks like rocket engineering, man! I know it can be learned, but it should be much easier! After Effects can also do face replacement, if the user knows how to operate After Effects.
Hey bro, thanks for the tutorial firstly, brill job, but I can't seem to get past step 2. When I extract images from video data_src, I get lots of errors saying "Failed to load the native TensorFlow runtime"?
The training keeps hitting an error. It says I need to turn on the hardware-accelerated GPU setting in my graphics settings. I already did that and restarted my PC, but it keeps saying that I need to turn it on. I tried SAEHD and it gave me the same result. Please help.
Sir, please tell us how to learn the basics of computer vision. Please, sir 🙏🙏