MrDeepFakes Forums

Fappenberg
(Fake Kingpin)

Registration Date: 08-30-2019
Date of Birth: Not Specified
Local Time: 02-21-2020 at 05:46 AM

Fappenberg's Most Liked Post
Post Subject: (outdated) DFL 1.0 SAE+ MOD. | Likes: 10
Forum: Guides and Tutorials
Post Message
With the new DeepFaceLab 2.0, this mod is now outdated and will only work with version 1.0 of DeepFaceLab.

This is a modified SAE that adds features introduced in SAEHD but not in SAE. It gives SAE these nice features without the extra layers that make SAEHD so intensive, especially on older cards.

Things added to this modified SAE:
  • Added lr_dropout toggle on/off (when the face is trained enough, enable this option to get extra sharpness in fewer iterations)
  • Added TrueFace training with options (0 - Disabled, 1 - Low, 2 - Medium, 3 - High, 4 - Ultra ... higher = more aggressive)
  • Added new option called 'network' that can be switched between 'adam' and 'rmsprop' (See below for difference)
  • Added option to disable random warp (can be disabled around 30K iterations; should increase sharpness in fewer iterations)
  • Pixel loss and dssim loss are merged together to achieve both training speed and pixel trueness
  • CA weights are automatically loaded after the first successful iteration, so you can test the batch size for OOM before the weights are initialized
  • Added mid-face, which covers 30% more area than half face
  • Changed learn mask default to n
  • Updated DF and LIAE loss values
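The merged pixel + DSSIM loss mentioned above can be sketched like this (a minimal NumPy illustration with a simplified global, non-windowed SSIM and illustrative 50/50 weights; DeepFaceLab's actual implementation uses a windowed DSSIM and its own weighting):

```python
import numpy as np

def ssim(a, b, c1=0.01**2, c2=0.03**2):
    # Simplified global SSIM (no sliding window), for illustration only
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a**2 + mu_b**2 + c1) * (var_a + var_b + c2))

def merged_loss(pred, target, w_dssim=0.5, w_pixel=0.5):
    # DSSIM term rewards structural similarity (training speed),
    # pixel MSE term rewards per-pixel trueness; the mod trains on both
    dssim = (1.0 - ssim(pred, target)) / 2.0
    pixel = np.mean((pred - target) ** 2)
    return w_dssim * dssim + w_pixel * pixel
```

Identical images give a loss of zero; any structural or per-pixel difference pushes the loss up.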

Neural Network Optimization Algorithms:
With the new option 'network', you can switch between 'adam' and 'rmsprop'
  • Root Mean Square Prop (RMSProp) works by keeping an exponentially weighted average of the squares of past gradients
  • Adaptive Moment Estimation (Adam) combines ideas from both RMSProp and Momentum, so it stores more state per parameter; as a result, RMSProp uses less VRAM than Adam
SAE default network optimizer is Adam, while SAEHD is RMSProp
Since RMSProp uses less VRAM than Adam, you can run a higher batch size/resolution/dims

Which is better? Adam might be better since it combines two different algorithms, but RMSProp is already good by itself and uses less VRAM, so you can run a higher batch/dims/resolution. It really comes down to testing and preference.
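The VRAM difference can be sketched in NumPy (an illustrative textbook version of each update rule, not DeepFaceLab's actual optimizer code; hyperparameter defaults are the usual ones):

```python
import numpy as np

def rmsprop_step(w, grad, cache, lr=0.001, decay=0.9, eps=1e-8):
    # RMSProp keeps ONE extra array per parameter: the running
    # average of squared gradients
    cache = decay * cache + (1 - decay) * grad**2
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache

def adam_step(w, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    # Adam keeps TWO extra arrays per parameter (momentum m and
    # squared-gradient average v), hence roughly twice the optimizer
    # memory of RMSProp
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    m_hat = m / (1 - b1**t)          # bias correction
    v_hat = v / (1 - b2**t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```

With millions of model parameters, that one extra state array per parameter is what frees up room for a higher batch size or resolution under RMSProp.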

GTX 1070 8GB w/ 256res, batch 4, 256/21/21 dims LIAE adam example:
[Image: requires login to view]

How to install on computer:

1. Download the file
2. Go to your DeepFaceLab folder and navigate to _internal\DeepFaceLab\models\Model_SAE
3. Backup the existing file
4. Copy the downloaded file into that folder
5. Profit???
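The steps above can be sketched in Python (the paths and the downloaded filename here are assumptions; adjust them to your own install):

```python
import shutil
from pathlib import Path

def install_mod(downloaded: Path, model_dir: Path) -> Path:
    """Back up any existing copy of the model file, then drop the mod in.

    `downloaded` is the file from step 1; `model_dir` is your
    _internal/DeepFaceLab/models/Model_SAE folder from step 2.
    """
    target = model_dir / downloaded.name
    if target.exists():
        # Step 3: keep a .bak copy of the original file
        shutil.copy2(target, target.with_name(target.name + ".bak"))
    # Step 4: copy the downloaded file into the model folder
    shutil.copy2(downloaded, target)
    return target

# Example usage (assumed paths -- point these at your real folders):
# install_mod(Path.home() / "Downloads" / "Model.py",
#             Path(r"C:\DeepFaceLab\_internal\DeepFaceLab\models\Model_SAE"))
```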

How to install on colab:
1. Download the file
2. Clone Github repository on colab
3. Go to Files tab on left side panel
4. Navigate to DeepFaceLab\models\Model_SAE
5. Right-click the existing model file and delete it
6. Right click on Model_SAE folder and click upload
7. Upload the file that you downloaded from step 1
8. Profit???
(Note: you will have to repeat this after every Colab reset)

It should also be compatible with existing models trained on those versions.
Just make sure to back up your model before you do this!

Download: [link requires login to view]

Credits to [link requires login to view] for his work on DeepFaceLab

Fappenberg's Forum Info
Last Visit:
4 hours ago
Total Posts:
81 (0.46 posts per day | 0.65 percent of total posts)
Total Threads:
10 (0.06 threads per day | 0.41 percent of total threads)
Time Spent Online:
4 Days, 15 Hours, 26 Minutes
Likes:
Given: 11 | Received: 47
Members Referred: