DeepFaceLab 2.0 Pretraining Tutorial

Published 21 Dec 2024

Comments • 90

  • @joshjakkrit6085 • 1 year ago +5

    Can you make a tutorial on how to use a pretrained model that we downloaded from deepfakeVFX?

  • @khanameen4692 • several months ago

    Hey, I'm confused: which architecture is better? Which one do you use the most?

  • @hytalegermany1095 • 1 year ago +1

    But how do you use the pretrained model after it is trained enough? Let's say I have a video of person A and a video of person B. How can I now use the pretrained model, with all those thousands of faces, to make training person A faster?
    Like if I train SAEHD with those images: how do I make use of the pretrained stuff?

    • @Deepfakery • 1 year ago

      You need to follow all of the previous steps to get both the source and destination facesets. Then run the pretrained model and hit Enter when prompted to change the settings. Turn on Random Warp and turn off Pretrain Mode, as sketched below. That will get you started.
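
      A quick illustrative summary of those two settings (a minimal sketch, not DeepFaceLab's actual API; the option labels mirror the SAEHD console prompts):

      ```python
      # Hedged sketch: the two SAEHD trainer options to flip when moving
      # from pretraining to normal training, written as a plain dict for
      # clarity. The labels mirror DFL's console prompts, not a real API.
      settings_after_pretrain = {
          "random_warp": True,   # Random Warp ON for the first training phase
          "pretrain": False,     # Pretrain Mode OFF to start learning src/dst
      }

      for name, value in settings_after_pretrain.items():
          print(f"{name}: {'y' if value else 'n'}")
      ```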

    • @LR-vw8yu • 1 year ago

      @Deepfakery why do you say this in the comments though, isn't that what this video is about?

    • @LR-vw8yu • 1 year ago +2

      @Deepfakery like what's the point of pretraining a model if you're not going to use it

    • @huhkgerry • 25 days ago

      @LR-vw8yu you don't get it though? It will make it quicker when you want to deepfake other videos, if you have already pretrained the source

  • @johnjohnsin3762 • 1 year ago +2

    I'm pretraining a model using the default faceset, trimmed for gender, with my own images added. Can I isolate my images in a separate faceset, train on that without losing model progress on the default images, and add the default images back later?
    Or is it better in the long/short run to keep every different expression/lighting/etc. you can?

  • @TomaJerome • 1 year ago +1

    If I just want to DeepFaceLive my own face, would it be better to just use my own face as the faceset for pretraining?

    • @Deepfakery • 1 year ago +1

      Normally you would use a large variety faceset so that the model can be applied to any face. So you might want to try mixing in photos of yourself. Otherwise you could go all the way and just train on your own face, but you still want some variety to account for differences in the video feed (e.g. your lighting changing slightly from video to video). If you have a very controlled shooting environment then you can get away with less variety.

  • @ironmanYT-n9o • 3 months ago

    I understand all the steps, and after this I have a pretrained model. But how can I then use it to apply the deepfake to a certain image, e.g. my father's picture, so that it fits his face as well as possible?

    • @Deepfakery • 3 months ago

      There are 2 ways:
      1 - Extract the pics as dst, continue to normal training for a short time, then merge.
      2 - Export as DFM and use DeepFaceLive. It can quickly apply the model to images.

  • @FlexinVR • 1 year ago

    Not sure what to do with the folders in the workspace after I've created my first project. I want to create another video clip using the same source and the same person as destination, just a different scene. Not sure which folders to rename and which to keep the same. Can't find ANY info anywhere on the next steps to continue creating; I can only find tutorials on how to create for the first time.

    • @Deepfakery • 1 year ago

      For this you can just remove all the dst material, keep the src and model, and extract the new dst. For the dst mask you might be able to simply apply the trained XSeg from the previous project. To train the new deepfake you want to go back to the first phase where Random Warp is turned on. If it's an LIAE model you should delete the inter_B model file before re-training; see the sketch below.
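
      A minimal sketch of that cleanup, assuming the default workspace layout and the usual DFL model file naming (verify the inter_B file name against your own model folder):

      ```python
      # Hedged sketch: reset a DeepFaceLab workspace to reuse the same src
      # and model with a new destination video. Paths and file names follow
      # the default DFL layout but may differ per build.
      import shutil
      from pathlib import Path

      workspace = Path("workspace")  # adjust to your DFL workspace path

      # Remove all destination material (extracted frames + aligned faces).
      dst = workspace / "data_dst"
      if dst.exists():
          shutil.rmtree(dst)

      # For an LIAE model, delete inter_B before retraining on the new dst.
      for inter_b in (workspace / "model").glob("*_inter_B.npy"):
          inter_b.unlink()
          print(f"deleted {inter_b}")
      ```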

    • @NakedMatrix • 1 year ago

      @Deepfakery Gotcha. I did that, worked like a charm the second time. Thanks.

  • @dancespoilers6203 • 1 year ago

    Please, how do I train my own model? Like, not face merging, but training my own model. How do I set up data_src and data_dst?

  • @AGVenge • 4 months ago

    Any version of DFL that works well with a new AMD GPU like the 7900 XTX?

  • @khanameen4692 • 4 months ago

    Hey, I'm confused about when to stop training. I'm currently at 80k iterations with a batch size of 14. How do I know when to stop?

    • @Deepfakery • 4 months ago

      Just stop and do a merge. If it looks good, good! If not, you can keep training.

    • @khanameen4692 • 4 months ago

      @Deepfakery can I merge in pretraining?

  • @ColorFusion97 • 1 year ago

    Is it possible to train only one face? I tried to run SAEHD but I don't have any dst; it crashes with some kernel error in the Python window. My idea is to train my face to use in different videos. Thanks

    • @Deepfakery • 1 year ago +2

      You will do this after pretraining. Take your face as src and use the pretrain faceset (or some other large random faceset) as dst. Continue training the model with Random Warp on and pretrain off. At this point you will have a model of your face that is generalized to fit many different dst faces. To use it, first delete the inter_B model file, add your dst faceset to the project, then either try merging directly or let it train for a little while. It should adapt very quickly. For a more detailed description, look into the process for training DFLive or RTT/RTM models.

  • @deeber35 • 1 year ago

    When I manually mask images that weren't solved by DFL, if I accidentally end the process with a few I meant to go back to, then when I re-run Manual Extract it says there are no images to work on. How can I edit the images I missed?

    • @Deepfakery • 1 year ago

      TBH I've never had luck restarting an extraction, even though it's supposed to be possible

  • @kai_harm942 • 4 months ago

    I don't understand. The video doesn't show where to put the face pack with the pretrained faces

    • @Deepfakery • 4 months ago

      The .pak file is just a single-file archive of the faceset, so it should be placed in the appropriate aligned folder (for pretraining, the _internal/pretrain_faces folder); see the sketch below.
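
      A quick illustration of those locations, assuming a default DeepFaceLab install (the root folder name here is hypothetical; check your own build):

      ```python
      # Hedged sketch: the two places a faceset.pak typically lives in a
      # default DeepFaceLab install. Adjust dfl_root to your own folder.
      from pathlib import Path

      dfl_root = Path("DeepFaceLab_NVIDIA")  # hypothetical install folder

      # A pak used for pretraining replaces the default pretrain faceset:
      pretrain_pak = dfl_root / "_internal" / "pretrain_faces" / "faceset.pak"

      # A pak you want to train on normally sits in an aligned folder:
      src_pak = dfl_root / "workspace" / "data_src" / "aligned" / "faceset.pak"

      for p in (pretrain_pak, src_pak):
          print(p, "exists" if p.exists() else "missing")
      ```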

    • @kai_harm942 • 4 months ago

      @Deepfakery I got it in the end, I hadn't enabled pretraining

  • @brianpfft • 1 year ago

    ImportError: DLL load failed: The paging file is too small for this operation to complete. No matter what I do.

    • @Deepfakery • 1 year ago

      You need to increase the Windows paging file size: www.deepfakevfx.com/guides/deepfacelab-2-0-guide/#system-optimization

  • @deeber35 • 1 year ago

    I have a source video with someone using a microphone. I want to keep that microphone when it obstructs the face. Can DFL handle that, or do I need to track the mic with other software and put it on the top layer before rendering?

    • @Deepfakery • 1 year ago

      Why do you want to keep the microphone from the source images? Does the dst person have a mic? I'm confused about what you're trying to accomplish here.

    • @deeber35 • 1 year ago

      @Deepfakery I want to replace the face of a performer on stage who's using a mic. So I want to keep the microphone so the source face looks like he is singing.

    • @Deepfakery • 1 year ago

      Ok, got it. The normal way to do this is to have similar images in the src faceset if possible. Then you want to XSeg-label faces with and without the mic in both facesets. If you don't have src images with a mic, try including other obstructions like hands. The idea is to match the mask shape and obstruction color in both facesets.
      The problem comes during training if there's too much difference between the faces. It can be difficult if the src faceset doesn't have the obstructions. I'd recommend using a pretrained model at the very least. During training, do a lot of Random Warp. There's a trick of deleting the model inter file(s) periodically, which might help if the face gets locked into a weird shape. There's also the possibility of using different masks for training and merging.
      Having said that, when I do obstructions I always add them back in post anyway.

  • @krishnapushpak8101 • 1 year ago

    What are the prerequisites for using DeepFaceLab? Do I need to install Python and TensorFlow? Any other prerequisites?

    • @Deepfakery • 1 year ago

      For Windows, everything is included in the download. If you want to use the GitHub repo, then the dependencies should be listed there.

  • @dancespoilers6203 • 1 year ago

    Bruh, please, how do I create a custom model that I can actually use in DFL LIVE? How do I set up the workspace, and how do I save the model?

  • @주왕-y3u • 8 months ago

    It still looks awkward even after more than a million iterations, and a lot of the video quality is blurry. Do I have to train on more videos, or for longer?
    The loss stopped at 0.1730.
    Someone told me to get it down to between 0.02 and 0.08, is that right?

  • @deeber35 • 1 year ago

    To train a model in SAEHD, are the data_src and data_dst image folders all that are used? I changed the source file, but for some reason the model still includes images from the prior source, which I don't want.

    • @Deepfakery • 1 year ago

      Specifically, data_src/aligned and data_dst/aligned are used. If it's an LIAE model, after changing the source you should delete the inter_AB file and train with Random Warp on; see the sketch below.
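
      A minimal sketch of that reset step, assuming the default model folder and the usual DFL file naming (verify the exact inter_AB file name in your own model folder):

      ```python
      # Hedged sketch: after swapping the source faceset on an LIAE model,
      # remove the inter_AB weights so the model relearns the new src.
      # The file naming follows the usual DFL pattern but may differ.
      from pathlib import Path

      model_dir = Path("workspace/model")  # adjust to your workspace
      for inter_ab in model_dir.glob("*_inter_AB.npy"):
          inter_ab.unlink()
          print(f"deleted {inter_ab}")
      ```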

  • @unknownexposer8399 • 7 months ago

    Does SAEHD make the process much faster and more accurate?

    • @Deepfakery • 5 months ago

      It's "slower" than Quick96, but much higher quality and accuracy. Pretraining speeds up the process by creating a sort of generic base model instead of starting from scratch.

  • @mikkepalvanen • 10 months ago

    So as I've understood it, it's best to use a pretrained model trained on many different faces to get the best results? For that I downloaded a 1.5M-iteration pretrained model from your website, but it has 14 different files and no .pak. There's not really any information out there, so how do I install/use this pretrained model?

    • @J95h • 7 months ago

      @deepfakery I'm facing the same problem

    • @khanameen4692 • 4 months ago

      Just extract the file, copy everything, and paste it into workspace > model, and you are good to go.
      P.S. While training SAEHD, turn off pretraining.

  • @deeber35 • 1 year ago

    What if you have 2 faces to replace in a video? If you run it normally with the first face source, then take that result and run it again with the second face source, how do you handle frames that have both faces in them? Would I need to do those frames manually?

    • @Deepfakery • 1 year ago

      The best way is to make 2 projects (1 for each face) and combine them in post.
      If you're not able to edit the videos together, you can run project 1 to the end, then take the merged frames and use them as the project 2 dst frames, then run that to the end and merge the final video from there.

    • @deeber35 • 1 year ago

      @Deepfakery [re-read your reply, I think I got it]

  • @loreak128 • 1 year ago

    If I have already trained a model, can I switch to pretrain, let it iterate up, and then continue with my src/dst afterward? Or would that make the work I've already done worse?

    • @Deepfakery • 1 year ago

      Not really, it will basically undo the training on your src/dst.

  • @SongStudios • 1 year ago

    The problem I encounter is that when I try to use the model to actually create a deepfake of 2 people, I have to turn off pretraining. This results in the model being completely wiped and starting at iteration 0?

    • @Deepfakery • 1 year ago

      Yeah, the iteration count will reset to 0 and the model will begin to learn your src/dst faces instead of the pretrain faceset.

  • @icepeh1443 • 1 year ago

    Hi, can I ask for your help to train the model for us? What should I do?

    • @Deepfakery • 1 year ago

      I'm working on the next tutorial, but after pretraining you can proceed to enable Random Warp and disable pretraining. That will get you started...

  • @gcinfamous • 1 year ago

    Hi, I wanted to ask: before I updated my software and GPU, the training preview window showed more different frames with my src and dst together, but now it's just like 4 big frames and I have to press space to see different stuff. Is this just a new-version-of-DFL thing, or is there something I can do to get back the old preview window style where I see a lot more frames?

    • @Deepfakery • 1 year ago +1

      The size of the preview window depends on the image resolution. At a certain point it splits into separate pages so that the window doesn't get too big. Basically it's meant to be fully visible on most monitors without having to mess with output settings. You can modify the code to override it, but I'm not sure offhand where to do that.

    • @gcinfamous • 1 year ago

      @Deepfakery I wanted to ask: if I've been training my model to a dst for a good amount of time and the preview screen is actually looking good, is this model immediately usable to train for another dst? Will it be faster? Or do I have to train a fresh model for the same src to a new dst?

  • @ywueeee • 1 year ago

    Can you add a section for GCP, AWS, and Azure to your website guide, please?

    • @Deepfakery • 1 year ago

      Are you using one of these platforms? I've only tried DFL on Colab because there were already some notebooks available. In my experience it was easier to do most of the stuff on desktop and only do the training on Colab. Then it was just down to getting a good allocated GPU.

  • @bestcake2076 • 1 year ago

    Would it be better to use my own 6500 src faces for pretraining rather than the default random ones? Also, would you train the dst faces? If so, could I just pak the src and dst faces together? Also, should I use a head-pretrained model if I'm doing head?

    • @Deepfakery • 1 year ago

      You could add some of your src and dst images to the pretrain faceset, but I wouldn't pretrain on just those images alone. The point of pretraining is to create a generic model that can be used across multiple deepfakes.
      For your model you should pretrain as head. You can't change the face type of the model afterward.
      BTW the default pretrain faceset is WF, so you'll want to find or build a faceset for pretraining the head model.

    • @bestcake2076 • 1 year ago

      @Deepfakery Thanks, I made my own 256-res pretrained model from scratch since I could not find many head models to fit my 10 GB of VRAM. Is it OK to use the default faces to train head? I haven't found any good head facesets, so I'm just training on the defaults anyway. Also, DFL only says I have 7.25 GB of VRAM when I actually have 10 GB.

    • @Deepfakery • 1 year ago

      You're going to need a head faceset in order to pretrain, because there's no information outside the square image; the WF faceset doesn't even include parts of the head. Also, the head extraction is different from the others; it uses a 3D space instead of 2D for alignment.
      As for the VRAM, DFL doesn't seem to report it accurately. On Windows, use the Task Manager > Performance tab and you can see the GPU's dedicated, shared, and total usage.
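
      For a quick sanity check outside DFL, a minimal sketch (assumes an NVIDIA GPU with nvidia-smi on the PATH):

      ```python
      # Hedged sketch: query actual GPU memory via nvidia-smi, since DFL's
      # own VRAM report can be off. Assumes nvidia-smi is on the PATH.
      import subprocess

      out = subprocess.run(
          ["nvidia-smi", "--query-gpu=name,memory.total,memory.used",
           "--format=csv,noheader"],
          capture_output=True, text=True, check=True,
      )
      print(out.stdout.strip())
      ```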

  • @irislegions • 1 year ago

    First of all, thanks a lot for all you are doing. I really appreciate it.
    Please, I am at 500k+ iterations on SAEHD. I've observed that whenever I export the DFM, it is always 414MB.
    Now I'm at 650k iterations, and the size of my DFM after exporting is still exactly the same. I even deleted the old DFM from the workspace folder and re-exported; it gave me the exact same 414,282 bytes.
    Is anything wrong?

    • @Deepfakery • 1 year ago

      Don't worry about the size of the DFM. I think it depends mostly on the model settings and the amount of data, so if those haven't changed it will probably be roughly the same file size, as you've observed. You can keep training if you want, and the quality of the live swap will increase.

  • @jeffwads • 1 year ago

    Great video. Thanks for this in-depth procedure.

  • @scorpi1756 • 1 year ago

    Hi, thanks for the great guide.
    I notice this method places the RTM facesets (or ones you create) into the _internal/pretrain_faces folder. However, most methods I have researched say to place faceset.pak into the data_dst/aligned folder, train for 500k iterations, and delete the inter_AB.npy every 100,000-500,000 iterations, with your own src model aligned and XSeg'd. Then you add your own dst images (for the final merge) at the end, and the SAEHD training is done in a very short time. So what is the difference?

    • @Deepfakery • 1 year ago +4

      The process you described is for creating a Ready-to-Merge model for a single celeb, which can create an almost instant deepfake because it has trained 1 src celeb against many random faces. Basically you create a model of that celeb face with all sorts of different lighting and expressions that might occur in a dst video. So while the RTM model can accept many different dst videos for merging, it is limited to making deepfakes of that specific celeb. A pretrained model on the other hand can be used to start training any deepfake since both src and dst have been trained on random faces. In fact you can pretrain a model then use it to train various RTM models (also DFLive models) of different celebs.

  • @mastergracious2127 • 1 year ago

    Please, what build can I use on my PC? I've got a Dell 7510:
    Core i5 6th gen
    8 GB RAM
    256 GB SSD
    4 GB Nvidia Quadro M2000M
    And how many hours do you think it would take to achieve a good result?

    • @Deepfakery • 1 year ago

      I have an installation tutorial which should help: th-cam.com/video/8W9uu-pVOIE/w-d-xo.html
      Your GPU has a CUDA compute capability of 5.0, which is above the recommended 3.5, so you should be able to use the "up to RTX 2080 TI" build. If that doesn't work you can try the DX12 version, which your GPU should also support.
      As for the time, it all depends on the model settings. With your 4 GB card I would suggest using a low-res model; maybe just start with the defaults. I gave some tips on how to make the model slimmer, such as disabling AdaBelief, using LIAE instead of LIAE-UD, etc. Honestly, with that setup you're better off downloading a pretrained model once you figure out what the system will handle.

    • @mastergracious2127 • 1 year ago

      @Deepfakery thank you 🙏

  • @ElonMusksThoughts • 1 year ago +1

    Hi Deepfakery. Does the RTX A6000 work with an AMD Ryzen 9 5950X?

    • @Deepfakery • 1 year ago +1

      Yes, I have the same GPU. Use the RTX 3000 build. The CPU shouldn't matter...

    • @ElonMusksThoughts • 1 year ago

      @Deepfakery thank you

  • @charankumar7124 • 1 year ago

    0:25 FFHQ dataset for different people's faces

    • @Deepfakery • 1 year ago

      You can find it inside _internal/pretrain_faces/

    • @charankumar7124 • 1 year ago

      @Deepfakery can you make a complete, detailed video on how to replace our face with someone else's?

  • @ywueeee • 1 year ago +2

    I would watch a new video where you provide data_dst and data_src that I can download and then go from start to finish with the best settings, covering each step concisely, so that I can just download, start, pause, follow along, and get a fantastic output at the end, and then later adjust the settings based on my machine and my own video files.

    • @Deepfakery • 1 year ago +1

      Yeah, like a mini course with resource downloads for each step/section. I have considered this, maybe adding it to my website. There are only a few more major topics to cover, so maybe I'll circle back to the idea. I need to do a full run through my own videos and see what can be trimmed up or expanded upon.

  • @miniyata • 1 year ago

    Just wanted to say you're a GOAT

  • @paulgauguin7730 • 1 year ago

    What graphics card do you have?

  • @stanleyaikhomon2804 • 1 year ago

    Can't I just place my video and use it in DeepFaceLive without needing to train? ... I don't want to swap the face

    • @Deepfakery • 1 year ago

      I'm not sure what you mean by "I don't want to swap the face"; that's kinda the whole point of this. Anyway, for DFLive you still need a model. You can't just input a video and instantly pull a deepfake from it.

    • @stanleyaikhomon2804 • 1 year ago

      @Deepfakery
      I want to use a model for DFLive which I already have.
      But I just want to convert it to a DFM file without needing to XSeg and SAEHD-train.
      Will that be possible?

    • @Deepfakery • 1 year ago

      Well, you can just put it in the model folder, try exporting it as DFM, and see if it works in DFLive. However, the model needs to be trained on a specific person. If it's just a general pretrain model it won't work.

  • @viraldigitalman • 1 year ago +2

    Please show us noobs a DeepFaceLive version for making custom models.

  • @Sanguen666 • 1 year ago

    casually shows off 2x A6000 :D

  • @tomatoway • 1 year ago +3

    Too many steps. Why don't the developers make a program that combines all these functions? Launch the program, select item 1, 2, 3, etc. Why is everything so complicated?

    • @Deepfakery • 1 year ago +4

      It was pretty much designed to be the exact opposite. They focused more on creating this sort of extendable pipeline codebase, definitely geared toward people who already know about machine learning. TBH, once you know what your system can handle it gets easier. You can use one good pretrained model for all of your deepfakes. Or just download one!