Easy Deepfake tutorial for beginners Xseg

  • Published: 23 Dec 2024

Comments • 113

  • @Jsfilmz
    @Jsfilmz  4 years ago +8

    Vote for my short film here! myrodereel.com/watch/8674

    • @SP95
      @SP95 4 years ago +1

      🗳✔

    • @SPT1
      @SPT1 4 years ago

      Hello, just took a look at your channel, some impressive stuff. Congrats. Especially the Fast & Furious aging video. And that's why I'm writing to you: I have solid knowledge of DFL 2, but none about Fakeapp and almost none about EBSynth. So, could you make a tutorial on how you did the aging in this video, and do you think it's doable with DFL + EBSynth? If so, you would only need to explain the EBSynth part. Or link me to a good tutorial for aging if one already exists. Also, whatever steps you do in post-production to get this clean and crisp look, I'm also interested to know (I know Premiere well, AE not so much).

    • @Jsfilmz
      @Jsfilmz  4 years ago +1

      SPT1 hey bro, sorry for the late response, YouTube is terrible at notifying. I will make a tutorial for sure

    • @oliverwatson6689
      @oliverwatson6689 4 years ago

      VOTE done brother

  • @zabique
    @zabique 4 years ago +4

    You got DFL hooked up.
    [13:40] Hit space to switch the preview to src or dst, then P to change frames to spot any XSeg issues.
    You can use a pretrained XSeg model too for faster training; models with 1M iterations are available.

  • @SP95
    @SP95 4 years ago +6

    There is still a lot of manual work, but for those who are brave enough it seems rewarding.

  • @kaaaiyu
    @kaaaiyu 2 years ago +12

    18:55 and I will see you guys in a week. Lmao 😂 Great tutorial dude!

  • @Snafu2346
    @Snafu2346 3 years ago +1

    Previously, on older versions of DeepFaceLab, I never had any issues with obstructions over the face. I never had to do any custom rotoscoping techniques, but now with DFL 2.0 it seems like anything covering the face, like a hand waving or long hair, causes the mask to break down or go blurry. What settings have changed that allowed the area behind the mask to stay intact? Previously it was just learned DST, but that doesn't work the way it used to.

    • @Snafu2346
      @Snafu2346 3 years ago

      Without any obstructions I have no problem, the DeepFaceLab mask looks fine, but as soon as something gets in the way, what didn't used to be a problem is now suddenly a problem.

  • @Slay0r815
    @Slay0r815 1 month ago

    The face is OK, all you need is a new hairstyle. BTW, I'm very new to this DFL thing. Can I use a pretrained model and a pretrained XSeg mask together for SAEHD training?

  • @drgclangamez
    @drgclangamez 8 months ago

    Is there a faster method or program/software with better results today?

  • @enochAyim
    @enochAyim 2 years ago +2

    derm derm derm, u really helped me, thumbs up bro

  • @tgtutorials
    @tgtutorials 1 year ago

    The best tutorial I've seen on RUclips 😉

  • @deviljoe7171
    @deviljoe7171 3 years ago +1

    Hey, do you recommend using a pretrained model or training one from scratch?

    • @Jsfilmz
      @Jsfilmz  3 years ago

      i train once then save it just in case i have to use that same face again

  • @georges8408
    @georges8408 4 years ago +1

    SAEHD is by far more time consuming than Quick96... it takes forever to finish anything. The question is: can we use XSeg to train masking and then use Quick96 (which is much faster), or does XSeg training work only with SAEHD?

    • @Arewethereyet69
      @Arewethereyet69 1 year ago

      did you ever find out?

    • @georges8408
      @georges8408 1 year ago

      @@Arewethereyet69 unfortunately I quit... it is so complicated and time consuming that it isn't worth the time

  • @botlifegamer7026
    @botlifegamer7026 1 year ago

    Mine doesn't merge them all with the same settings. Any reason, or can you think of one, why it doesn't?

  • @KillaCyst
    @KillaCyst 4 years ago

    Great tutorial. I'm a noob to deepfakes and have done a couple, but I think XSeg might be the step I was missing!

    • @Jsfilmz
      @Jsfilmz  4 years ago

      good luck man!

  • @BenjiJames
    @BenjiJames 2 years ago +2

    Your work is so good, dude, that I decided to get into this myself. However, upon starting up, I realized that there's so little info out there about pretraining, so I have a couple of questions:
    1. How long did you pretrain your src model for (when you did)?
    2. Did you place your src model into the folder "/_internal/pretrain_faces"? If not, how did you pretrain your face model?

    • @Vacated204
      @Vacated204 1 year ago

      Did you figure this out? I'm still confused on how to use pretrained models. Do I train the celeb I'm swapping with the pretrain?? I don't understand lol

  • @clydefrog6961
    @clydefrog6961 3 years ago

    Very noob question indeed, but what are, in a nutshell, the differences between XSeg train, Quick train and SAEHD train?

    • @Jsfilmz
      @Jsfilmz  3 years ago

      xseg is manual, the others are automatic

  • @LukmanHakim-np2fk
    @LukmanHakim-np2fk 1 year ago

    Question: can I use only images for the data source?

  • @danielreso4405
    @danielreso4405 2 years ago

    Good afternoon. How do you remove the blur? The deepfake creation is blurry, is there any technology for this?

  • @bakermclendon
    @bakermclendon 4 years ago +1

    JSFILMZ, once I edit the dst or src with XSeg, do I select XSeg data/src_dst/src trained mask-apply.bat as opposed to the XSeg) train.bat? It appears you went directly to the XSeg train.bat as opposed to applying the XSeg mask first. It may not make a difference. Thanks for any info you can provide.

    • @Jsfilmz
      @Jsfilmz  4 years ago +2

      hey bro, gotta train first before u apply

  • @sheeeple2069
    @sheeeple2069 4 years ago +1

    You should mask out the obstructions during the XSeg part.

    • @Jsfilmz
      @Jsfilmz  4 years ago +3

      bro i tried man, but the face changes, it makes funny faces when i exclude something like a knife goin across his face hahahaha. when, let's say, a hand goes across the face, im still trying to figure out how to fix that

  • @kunhasanhadi3711
    @kunhasanhadi3711 4 years ago +2

    Nice tutorial! What are your PC specs? Thanks

    • @Jsfilmz
      @Jsfilmz  4 years ago +2

      Kun Hasan Hadi Ryzen Threadripper 1950, GTX 1080, it's almost 4 years old. I tried to snag a 3080 but u know how that went lol

    • @Jsfilmz
      @Jsfilmz  4 years ago

      Anastazy Staziński of course man, ill make more as i learn

  • @Arewethereyet69
    @Arewethereyet69 1 year ago

    What if my laptop can't train SAEHD, only Quick96? Will the XSeg training not be used?

    • @doziekizito3399
      @doziekizito3399 1 year ago

      I have this issue too. My laptop only has a 6GB VRAM NVIDIA GeForce RTX 40-series card, so it can't train SAEHD.

  • @silverhawk661
    @silverhawk661 4 years ago +1

    @JSFILMZ
    Hey there. Thanks for the great tutorial.
    Is there any way to make the nose of the dst model have the same shape as the src model when he looks to the side?
    When the character looks to the side, his nose keeps the dst shape; I want his nose to look like the src model's nose in side view/profile view.
    How am I supposed to do that?

    • @Jsfilmz
      @Jsfilmz  4 years ago

      yea you can increase the face style power, but be very careful, save your model first

  • @rgrimoldi
    @rgrimoldi 3 years ago +1

    how do you eliminate the blurriness?

    • @Jsfilmz
      @Jsfilmz  3 years ago

      iteration time, leave it longer, i think that was 800k

    • @francisdeleon5647
      @francisdeleon5647 1 year ago

      @@Jsfilmz Hi, mine was 976,873 iterations and still blurry. Any advice?
      ================== Model Summary ===================
      == ==
      == Model name: new_SAEHD ==
      == ==
      == Current iteration: 976873 ==
      == ==
      ==---------------- Model Options -----------------==
      == ==
      == resolution: 128 ==
      == face_type: wf ==
      == models_opt_on_gpu: True ==
      == archi: liae-ud ==
      == ae_dims: 256 ==
      == e_dims: 64 ==
      == d_dims: 64 ==
      == d_mask_dims: 22 ==
      == masked_training: True ==
      == eyes_mouth_prio: False ==
      == uniform_yaw: False ==
      == blur_out_mask: False ==
      == adabelief: True ==
      == lr_dropout: n ==
      == random_warp: False ==
      == random_hsv_power: 0.0 ==
      == true_face_power: 0.0 ==
      == face_style_power: 0.0 ==
      == bg_style_power: 0.0 ==
      == ct_mode: none ==
      == clipgrad: False ==
      == pretrain: False ==
      == autobackup_hour: 0 ==
      == write_preview_history: False ==
      == target_iter: 0 ==
      == random_src_flip: False ==
      == random_dst_flip: True ==
      == batch_size: 8 ==
      == gan_power: 0.0 ==
      == gan_patch_size: 16 ==
      == gan_dims: 16 ==
      == ==
      ==------------------ Running On ------------------==
      == ==
      == Device index: 0 ==
      == Name: NVIDIA GeForce GTX 1660 ==
      == VRAM: 4.80GB ==
      == ==
      ====================================================

  • @georges8408
    @georges8408 4 years ago

    Thank you for this tutorial... please let us know something. If we make a good model of ourselves (source video), then we can reuse it any time, right? I mean, we can skip all the image extraction steps for the source video. Is that correct?

    • @Jsfilmz
      @Jsfilmz  4 years ago

      yes u can save the model and iterations

    • @georges8408
      @georges8408 4 years ago +1

      @@Jsfilmz thanks, but how? Where is the trained model? (I use a video of myself as the source)

  • @freehaven-junprince2376
    @freehaven-junprince2376 3 years ago

    Very good video, but I have a question about the end... Did you really spend a full week with 800k+ iterations to make a 2-3 second destination video? If you wanted to make a 5 minute video, how much longer would we need? Is it the same, or are we talking a few months?

    • @chi11estpanda
      @chi11estpanda 3 years ago

      You know, I seriously wrote out a long answer and then when re-reading the question again, I suddenly realized that you're actually/probably just teasing him so it was rhetorical and you probably already know all the wrong turns this video made without me telling you, huh?

  • @romania3dart
    @romania3dart 4 years ago

    Hi... it says "Exception: Unable to start subprocesses"
    Press any key to continue ...
    Then after I press Enter it shuts down...

    • @Jsfilmz
      @Jsfilmz  4 years ago

      u need an nvidia gpu probably

  • @tszlokchan5343
    @tszlokchan5343 3 years ago

    Hi, after using the dst sort I deleted some of the aligned faces, then I sorted the faces again using original filename. However, the number of faces changed and doesn't match the photos after extraction. Is that fine or what? Thank you

    • @Jsfilmz
      @Jsfilmz  3 years ago

      no that sounds weird bro

  • @guptaflavio5383
    @guptaflavio5383 4 years ago

    *Hi, in the aligned results there are 2 faces. How will I know which one belongs to the subject and not the extra person?*

    • @Jsfilmz
      @Jsfilmz  4 years ago

      Gupta Flavio for your destination folder im guessing?

    • @idigogideon4339
      @idigogideon4339 2 months ago

      delete the one you don't need

  • @KhalilCh
    @KhalilCh 2 years ago +1

    Can I do it on a Mac?

    • @Jsfilmz
      @Jsfilmz  2 years ago

      never owned a mac sorry

  • @LightWolf25
    @LightWolf25 4 years ago

    How do you fix the eyes looking in a different direction than the original dst?

    • @Jsfilmz
      @Jsfilmz  4 years ago

      LightWolf25 try doing eyes priority y, then turn it off when it's fixed

    • @LightWolf25
      @LightWolf25 4 years ago

      @@Jsfilmz when does that option come up? Thnx

    • @Jsfilmz
      @Jsfilmz  4 years ago

      LightWolf25 during training, it's called eyes priority, it's in this video

  • @bharatiratan5209
    @bharatiratan5209 4 years ago

    Hi! I have an Nvidia GeForce MX110 with 2GB VRAM, can I use DeepFaceLab? Please let me know.

    • @bharatiratan5209
      @bharatiratan5209 4 years ago

      Please reply

    • @Jsfilmz
      @Jsfilmz  4 years ago

      prolly not bro, u can try cpu but it wont be as good

  • @BenjiJames
    @BenjiJames 2 years ago

    Fantastic tutorial, good sir! If you're looking for brown actors or darker skin toned people, you'll find heaps in Australian movies/trailers :)

  • @gauravlokha8787
    @gauravlokha8787 1 year ago

    My process -
    2) extract images from video data_src
    3) extract images from video data_dst FULL FPS
    4) data_src faceset extract
    5) data_dst faceset extract
    5.XSeg) data_dst mask - edit
    5.XSeg) data_src mask - edit
    5.XSeg) train
    5.XSeg) data_dst trained mask - apply
    5.XSeg) data_src trained mask - apply
    7) merge SAEHD
    This is exactly what I did. Now when I start to merge, the model seems to apply the destination face itself to data_dst. What am I doing wrong?? It's not applying the source face for some reason.

    • @idigogideon4339
      @idigogideon4339 2 months ago

      you did not train SAEHD, you just started merging
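
The reply above pins down the bug: a "6) train SAEHD" step is missing between applying the XSeg masks and merging, so merging has no trained face model to draw on. A minimal sketch of that ordering check (the script names are copied from the comment above and treated as assumptions, not verified against any particular DFL build):

```python
# The DFL 2.0 script order described in the thread. The commenter's run skipped
# "6) train SAEHD", which is why merging produced the destination face unchanged.
PIPELINE = [
    "2) extract images from video data_src",
    "3) extract images from video data_dst FULL FPS",
    "4) data_src faceset extract",
    "5) data_dst faceset extract",
    "5.XSeg) data_dst mask - edit",
    "5.XSeg) data_src mask - edit",
    "5.XSeg) train",
    "5.XSeg) data_dst trained mask - apply",
    "5.XSeg) data_src trained mask - apply",
    "6) train SAEHD",          # the step the commenter skipped
    "7) merge SAEHD",
]

def validate(steps):
    """Merging only makes sense after SAEHD training has produced a model."""
    train = next((i for i, s in enumerate(steps) if "train SAEHD" in s), None)
    merge = next((i for i, s in enumerate(steps) if "merge SAEHD" in s), None)
    return train is not None and merge is not None and train < merge

print(validate(PIPELINE))                                 # True: train precedes merge
print(validate([s for s in PIPELINE if "6)" not in s]))   # False: the commenter's run
```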

  • @boratsagdiyev1586
    @boratsagdiyev1586 4 years ago

    When I'm at train SAEHD and it starts, I see all the random included faces instead of my own. I don't get it, I copied the entire workflow. Please help!

    • @Jsfilmz
      @Jsfilmz  4 years ago

      did u clear ur workspace?

    • @Vlfkfnejisjejrjtjrie
      @Vlfkfnejisjejrjtjrie 4 years ago

      Did it ask to use the CelebA pretrained model upon initial model creation? Say no. I'm new to DeepFaceLab, so not sure if you can just override it on the next startup.

  • @slafajsldasdf7592
    @slafajsldasdf7592 2 years ago

    Where is the repo for these bat files?

  • @tszlokchane5505
    @tszlokchane5505 3 years ago

    Hello, why do I get an error when I go for the XSeg training?

    • @Jsfilmz
      @Jsfilmz  3 years ago

      what gpu

    • @tszlokchane5505
      @tszlokchane5505 3 years ago

      JSFILMZ I am using a GTX 1070 which has 8GB VRAM. However, I still get a memory error, or it just gets stuck after I enter the batch size.

    • @tszlokchane5505
      @tszlokchane5505 3 years ago

      Benjamin Blacher oh, is there any difference between putting the folder on the C drive versus another drive?

    • @paulgeorge9228
      @paulgeorge9228 2 years ago

      @@tszlokchane5505 yeap, it can't run xD. It didn't work for me at first, then I moved it to the C drive and also did other things, then it worked.

    • @idigogideon4339
      @idigogideon4339 2 months ago

      @@tszlokchane5505 reduce the batch size

  • @impcharts
    @impcharts 2 years ago

    How do I create a source face that is an average of two people? Can I extract faces separately from two characters (each producing its own data_src) and put all the aligned face jpg files together into one aligned folder to train? Can this method create an average source model that looks somewhat like both characters? Is there a better method? I appreciate your answer. Thanks.
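
The folder-merge idea in the question above can be sketched as plain file operations. The function name and the prefixing scheme are assumptions (the prefix just avoids collisions when both extractions produce files like 00001.jpg); training on the combined set is what would push the model toward a blend of both identities:

```python
import shutil
from pathlib import Path

def merge_aligned(src_a: Path, src_b: Path, out: Path) -> int:
    """Copy aligned face jpgs from two data_src extractions into one folder,
    prefixing each file with its origin so identical names don't collide.
    Returns the number of files copied."""
    out.mkdir(parents=True, exist_ok=True)
    copied = 0
    for tag, folder in (("a", src_a), ("b", src_b)):
        for f in sorted(folder.glob("*.jpg")):
            shutil.copy2(f, out / f"{tag}_{f.name}")
            copied += 1
    return copied
```

One practical note: if one character contributes far more frames than the other, the learned "average" will lean toward the larger set, so roughly balancing the two counts is worth considering.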

  • @GiselaMarten-d4c
    @GiselaMarten-d4c 4 months ago

    Hello, I'm building a head model. How do I find the faceset.pak for head? Thanks

  • @Zimbabwe.
    @Zimbabwe. 4 years ago

    Hi Js, do you know why I get an error when I train the files at step #6? I tried all the trainers, they all give errors, any idea?

    • @Jsfilmz
      @Jsfilmz  4 years ago +1

      Zimbabwe did u change any parameters, what error r u gettin

    • @Zimbabwe.
      @Zimbabwe. 4 years ago

      @@Jsfilmz Thank you for your reply. There are a bunch of errors; the top ones say "Map memory - CL_MEM_OBJECT_ALLOCATION_FAILURE", then all the other errors are on different lines of the .py files, e.g. library.py line 131, backend.py line 1668 in variable. There are almost 12 error lines across different .py files.

    • @Zimbabwe.
      @Zimbabwe. 4 years ago

      @@Jsfilmz Thank you again, your channel is so informative, I shared it with a bunch of my friends.

    • @Jsfilmz
      @Jsfilmz  4 years ago

      Zimbabwe what graphics card do u have

    • @Zimbabwe.
      @Zimbabwe. 4 years ago

      @@Jsfilmz I have an AMD Radeon HD 8490, it's 24GB combined: 8GB internal and the rest added from the extra card

  • @GarageRockk
    @GarageRockk 4 years ago

    Hey! Thank you very much for this tutorial. My question is, what happens when you have a video with two people in the scene? How do I choose which one I want to deepfake?

    • @Jsfilmz
      @Jsfilmz  4 years ago +1

      Ale Camps two faces as the destination or source?

    • @GarageRockk
      @GarageRockk 4 years ago

      @@Jsfilmz For my destination. Say for example I have a Harry Potter scene with Hermione and Ron, and I want my face to be applied only to Ron's face.
      Thank you for your time and response!

    • @Jsfilmz
      @Jsfilmz  4 years ago +4

      Ale Camps you can do a manual extract for the dst, or just crop out the other faces or blur them. If you can wait, im planning on doing more tuts

    • @GarageRockk
      @GarageRockk 4 years ago

      @@Jsfilmz thank you very much! Of course I'll wait, definitely subscribing

    • @chi11estpanda
      @chi11estpanda 3 years ago

      @@GarageRockk When you extract faces from your destination source, you will have an extra set of photos in your data_dst/aligned folder. Delete all the ones that detected and focused on Hermione's face. To be more precise and avoid accidentally deleting what you need, use "data_dst view aligned debug results" to see the landmarks of each image in data_dst/aligned/; any that show landmarks on Hermione can be deleted from the aligned folder. (Make sure you do NOT delete them from the data_dst folder, only from the aligned folder within data_dst.) Then you can proceed with the rest of the instructions if you want, though there are a lot of steps here that are not ideal and not recommended. I mention this mostly for anyone reading this answer in the future, as 4 months have passed and you may have already learned how to accomplish this yourself.
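
The delete-from-aligned-only advice above can be sketched as a small file operation. The function name and the idea of passing a set of "wrong person" file stems are assumptions for illustration; the key point mirrored in the code is that only the aligned crops are removed, never the original frames:

```python
from pathlib import Path

def purge_wrong_faces(aligned_dir: Path, bad_stems: set) -> list:
    """Remove aligned face crops that belong to the wrong person.
    Only touches the aligned folder (e.g. data_dst/aligned) -- the
    original extracted frames stay intact, as the comment above warns."""
    removed = []
    for img in sorted(aligned_dir.glob("*.jpg")):
        if img.stem in bad_stems:
            img.unlink()
            removed.append(img.name)
    return removed
```

In practice the `bad_stems` set would come from reviewing the aligned_debug landmark overlays and noting which crops landed on the other actor.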

  • @grantpeterson2524
    @grantpeterson2524 4 years ago

    Hey man, not sure why, but the XSeg training isn't working on my 3080 :( It does 1 iteration and then just stops. No errors or anything, it just seems to work on a single iteration forever. Any ideas? No support for 3000-series cards yet? It works fine on my 5900X, but DFL obviously isn't optimized for CPUs, so it takes 2 seconds per iteration that way.

    • @Jsfilmz
      @Jsfilmz  4 years ago

      as far as i know rtx 3000 is still not supported by dfl

  • @VKTVCHANNEL
    @VKTVCHANNEL 4 years ago

    How much time does it take to create a deepfake of a 15-second video on a normal i7 64-bit computer? Please...

    • @Jsfilmz
      @Jsfilmz  4 years ago

      VK TV it's a gpu u will need for it to be faster.

    • @VKTVCHANNEL
      @VKTVCHANNEL 4 years ago

      @@Jsfilmz Thank you. But what would be the estimated time to create 15 seconds of deepfake video on my normal i7 64-bit computer? I am curious. Can I start? Or should I leave it for weeks? What if it is only a 5 or 6 second video?

    • @PlatinNr1
      @PlatinNr1 4 years ago +1

      @@VKTVCHANNEL the length of the video doesn't matter.

    • @VKTVCHANNEL
      @VKTVCHANNEL 4 years ago

      @@PlatinNr1 I'm surprised to hear that the duration of the video won't affect the time to create the deepfake. If I use longer videos and want to create a longer deepfake, it takes a long time. That's why I asked about the time to create a minimum-length video with minimum resources. Hope you understand.

    • @PlatinNr1
      @PlatinNr1 4 years ago +1

      @@VKTVCHANNEL the time-intensive process is training the model (replacing the face). The length of the output video doesn't matter for that.
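
The point in this thread is that training cost is fixed by iteration count, while only the final merge pass scales with the clip's frame count. A back-of-the-envelope sketch, with every speed number an assumption chosen only to show the shape of the arithmetic:

```python
# Assumed speeds (not measurements): training per iteration and merging per frame.
SEC_PER_ITER = 0.5    # assumed SAEHD training speed on a midrange GPU
SEC_PER_FRAME = 1.0   # assumed merge speed per output frame
FPS = 30

def total_hours(iterations: int, clip_seconds: float) -> float:
    train = iterations * SEC_PER_ITER          # independent of clip length
    merge = clip_seconds * FPS * SEC_PER_FRAME # scales with clip length
    return (train + merge) / 3600

# A 15 s clip vs a 5 min clip with 800k iterations: the training term
# dominates, so the output length barely moves the total.
print(round(total_hours(800_000, 15), 1))   # 111.2
print(round(total_hours(800_000, 300), 1))  # 113.6
```

Under these assumed numbers, going from 15 seconds to 5 minutes of output adds only a couple of hours on top of roughly 111 hours of training, which is exactly the "length doesn't matter" claim above.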

  • @oliverwatson6689
    @oliverwatson6689 4 years ago

    Sir, can we make a deepfake with an Intel® Core™ i3-9100F processor (6M cache, up to 4.20 GHz) along with an Nvidia GTX 1650 Super graphics card?

    • @Jsfilmz
      @Jsfilmz  4 years ago +1

      yes im sure you can, if not try a cpu one

    • @oliverwatson6689
      @oliverwatson6689 4 years ago

      @@Jsfilmz I'm very new to this deepfake stuff. How do I work on CPU, sir?

  • @DAVIDTATLITUG
    @DAVIDTATLITUG 4 years ago

    You haven't given a clue what program to get to start this. Where do we get the program?

    • @Jsfilmz
      @Jsfilmz  4 years ago +2

      it's in the thumbnail brosky, it's called dfl 2.0

  • @I77AGIC
    @I77AGIC 1 year ago

    You MASSIVELY overfit your model. 800k iterations is way too many. Always watch the graph: once the yellow curve starts moving up instead of down, you have trained for too long. The more data you have, though, the longer you can train before this happens.
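
The rule of thumb above (stop once the loss curve starts climbing) can be expressed as a simple early-stopping heuristic. The function name, window size, and patience value are assumptions for illustration, not anything DFL itself exposes:

```python
def should_stop(loss_history, window=3, patience=2):
    """Early-stopping heuristic in the spirit of the comment above: smooth the
    loss curve with a moving average and stop once it has risen for `patience`
    consecutive steps (the 'yellow line moving up' on the training graph)."""
    if len(loss_history) < window * (patience + 1):
        return False  # not enough history to judge a trend
    averages = [sum(loss_history[i:i + window]) / window
                for i in range(len(loss_history) - window + 1)]
    recent = averages[-(patience + 1):]
    return all(later > earlier for earlier, later in zip(recent, recent[1:]))

falling = [1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
rising  = [0.5, 0.4, 0.3, 0.3, 0.35, 0.4, 0.5, 0.6, 0.7]
print(should_stop(falling))  # False: loss still improving, keep training
print(should_stop(rising))   # True: smoothed loss trending up, overfitting
```

The smoothing matters because raw per-iteration loss is noisy; a single uptick is normal, while a sustained rise in the moving average is the overfitting signal the comment describes.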

  • @thewakandapost
    @thewakandapost 1 year ago

    Looks like an Indonesian person. 🤔

  • @oliverwatson6689
    @oliverwatson6689 4 years ago

    Please make a deepfake video (no graphics card required)

  • @PascalQNH2992
    @PascalQNH2992 2 years ago +1

    Followed all the steps from different videos. Nothing works. Errors are all I get. And my PC is pretty, pretty fast! ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape [20,128,320,320]
