DeepFaceLab Explained and Usage Tutorial
(03-14-2019, 12:08 AM)dpfks Wrote:
(03-13-2019, 01:53 PM)MrE Wrote: When I start training it says "WARNING: You are using 2GB GPU. Result quality may be significantly decreased." I have a 1080 Ti. What can I do to address this?

Do you have all your drivers updated?

Yes
@limpy1 It's going to vary a little depending on the application you choose, but for the most part you need:
  • A modern 64-bit CPU.
  • A graphics card with at least 4 GB of VRAM. Yes, there are ways to use a lesser card, but for decent results, 4 GB of VRAM is the minimum; 6 GB or more is much better.
  • 8 GB of system RAM. This is not a hard requirement, but it helps.
  • Ample hard drive space. The exact amount depends on a few things, but personally I suggest at least 50 to 100 GB of free usable space.
Further clarification: if you choose to use Faceswap, you will need a graphics card that supports CUDA Compute Capability 3.5 and has at least 2 GB of VRAM. A 2 GB card will not deliver good results, even though Faceswap will run on it; you still need at least 4 GB of VRAM for a decent deepfake. If you choose to use DeepFaceLab, you will need a graphics card with at least 4 GB of VRAM, but the CUDA requirement does not apply, which means you can use a non-nVidia card for DeepFaceLab. Currently, you must have an nVidia card for Faceswap.

If you choose to use an older application, such as FakeApp or OpenFaceSwap, you need the requirements listed above, plus an nVidia graphics card with at least 4 GB of VRAM that supports CUDA Compute Capability 3.5.
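If you want to sanity-check your own card against these minimums, here is a minimal sketch, assuming an nVidia card with nvidia-smi on the PATH (compute capability still has to be looked up manually on nVidia's CUDA GPUs list):

Code:
# Print each GPU's VRAM and whether it meets the 4 GB minimum above.
import subprocess

out = subprocess.check_output(
    ["nvidia-smi", "--query-gpu=name,memory.total",
     "--format=csv,noheader,nounits"],
    text=True,
)
for line in out.strip().splitlines():
    name, vram_mib = (f.strip() for f in line.rsplit(",", 1))
    gb = int(vram_mib) / 1024
    status = "meets" if gb >= 4 else "is below"
    print(f"{name}: {gb:.1f} GB VRAM ({status} the 4 GB minimum)")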

You can find more information here: [link] and here: [link].

MrDeepFakes.com highly suggests that you use either DeepFaceLab (you will have the most support for DFL on this site) or Faceswap. Older applications are essentially dead.
(03-14-2019, 06:58 PM)MrE Wrote: When I start training it says "WARNING: You are using 2GB GPU. Result quality may be significantly decreased." I have a 1080 Ti and all my drivers are updated. What can I do to address this?
I had this problem before. You're missing the NVSMI folder in the NVIDIA Corporation program folder. Try uninstalling and reinstalling the GeForce driver package to get it back; it may take several attempts.
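Before going through repeated reinstalls, it may be worth confirming the folder is actually missing. A minimal check, assuming the default driver install path (adjust if yours differs):

Code:
# Check for the missing-NVSMI symptom described above.
from pathlib import Path

nvsmi = Path(r"C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe")
if nvsmi.exists():
    print("NVSMI is present; the 2GB warning likely has another cause.")
else:
    print("NVSMI folder is missing; reinstall the GeForce driver package.")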
(03-14-2019, 10:03 PM)Pocketspeed Wrote: If you choose to use DeepFaceLab, you will need a graphics card with at least 4 GB of VRAM.

You are writing a lot of misinformation about DFL.
[link]: this was trained on a 2 GB GTX 850M.
I'm now training on a 1080 SC (8 GB) at 256 resolution (SAE), batch size 4, optimizer mode 2 (it crashes without it due to memory).
It's taking about 1,500-1,600 ms per iteration. I'm going to train for a good 20-30 hours this weekend and will update with results. Currently I'm at 20.5k iterations with loss rates still around 1.0 to 3.0, but the 256 model looks like it could be nice once I get to around 200k iterations.
I'll add to the spreadsheet.
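For a sense of scale, a rough back-of-envelope on those numbers, assuming the per-iteration time holds steady for the whole run:

Code:
# ~1,550 ms per iteration, going from 20.5k iterations to a 200k target.
ms_per_iter = 1550
remaining = 200_000 - 20_500
hours = remaining * ms_per_iter / 1000 / 3600
print(f"~{hours:.0f} hours of training left")  # roughly 77 hours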
(03-14-2019, 01:42 AM)dpfks Wrote:
(03-14-2019, 12:22 AM)gecisalex Wrote:
(03-14-2019, 12:07 AM)dpfks Wrote:
(03-13-2019, 10:12 AM)gecisalex Wrote: Dear dpfks,

I continued training with batch size 30 overnight, and it normally takes 3-4 seconds per iteration. However, I am really unsatisfied: after 1.5 days of training I am still at 35k iterations and my loss values are not that low. Should I decrease the batch size further?

I attached a picture: [Image: 9Fdyt7Gh.png]

Thank you!

3.2 s is a lot better, but it's still high. I'm not sure if it's the settings you chose for your model. How many images are in data_src and data_dst? I see you have feed-by-yaw on; on your next model, don't use it unless the number of data_dst images is smaller than the number in data_src.

Also, the H128 model is a bit outdated now; SAE supersedes H128 and adds better features. Don't worry about loss values so much; just look at your preview and ask whether you're happy with the results. Anything after 100k iterations is decent, in my opinion.

I have the following number of images in each data folder:
data_src: 1700 images
data_dst: 327 images

According to what you said, feed-by-yaw is good in this case, because I have more src images.

Also another question:

If I have another dst video of the same person, will I be able to use this model to swap in the same src face?

I really appreciate your help!

Oh, then yes, use feed-by-yaw.

Yes, you can re-use this model on the new dst video, but you will need to retrain the model with the new data_dst faceset because it may have different angles. An extra few hours usually does the trick.

@dpfks
When re-using a model to learn new angles, should you reset styles and pixel loss to default if they've been applied already? Or just use the model as is?
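To make the reuse workflow dpfks describes concrete, here is a hypothetical sketch of the file shuffle involved, assuming DFL's default workspace layout ("new_target.mp4" is a placeholder name for your new dst clip):

Code:
# Keep the trained model folder; swap in the new destination video.
import shutil
from pathlib import Path

ws = Path("workspace")
shutil.move(str(ws / "data_dst"), str(ws / "data_dst_old"))  # archive old faceset
shutil.copy("new_target.mp4", str(ws / "data_dst.mp4"))      # drop in new clip
# From here, run the usual DFL batch files: extract frames from
# data_dst, extract faces, then start training again. The existing
# model folder is picked up and training resumes where it left off.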
(03-15-2019, 02:00 PM)iperov Wrote: You are writing a lot of misinformation about DFL. [link]: this was trained on a 2 GB GTX 850M.

@iperov I apologize. I will stop giving advice for DFL.

Thank you for all your work! Sincerely!
I collected 6000 src images of my src from 20+ different videos. A third of the src images have extreme lighting, i.e., heavy shadows, or no shadows but strong blue or yellow hues, or brightness causing detail loss (not blur). Is the extreme difference in lighting and hues causing my src loss to stagnate? I've already purged blurry and low-res images. Thanks again!
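One hypothetical way to triage a set like that in bulk, assuming Pillow is installed and the aligned faces sit in data_src/aligned:

Code:
# Mean brightness per aligned face; large deviations from the set
# average are candidates for the extreme-lighting frames described above.
from pathlib import Path
from PIL import Image, ImageStat

means = {}
for f in sorted(Path("data_src/aligned").glob("*.jpg")):
    means[f] = ImageStat.Stat(Image.open(f).convert("L")).mean[0]

avg = sum(means.values()) / len(means)
worst = sorted(means.items(), key=lambda kv: abs(kv[1] - avg), reverse=True)
for f, m in worst[:20]:
    print(f"{f.name}: mean brightness {m:.0f} (set average {avg:.0f})")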
(03-15-2019, 10:35 PM)Endalus Wrote: @dpfks
When re-using a model to learn new angles, should you reset styles and pixel loss to default if they've been applied already? Or just use the model as is?

I just use the model as is. The model should already have a decent match when being reused.

(03-16-2019, 02:59 AM)chortlemortle Wrote: Is the extreme difference in lighting and hues causing my src loss to stagnate?

Yes, this could be a reason. Generally we recommend even lighting for data_src.
(03-15-2019, 10:56 PM)Pocketspeed Wrote: @iperov I apologize. I will stop giving advice for DFL.

A man who does not use DFL should not give advice about DFL.