MrDeepFakes Forums

dpfks (Administrator)

dpfks's Most Liked Post
Post: RE: DeepFaceLab Explained and Usage Tutorial (14 likes)
Thread: [GUIDE] - DeepFaceLab EXPLAINED AND TUTORIALS (in Guides and Tutorials)
My Personal DeepFake Workflow Using DeepFaceLab

The following walk-through describes my process and workflow. This is what works for me, but it may not be the best or most efficient way to create deepfakes. I am still learning how to perfect these.

Creating Celebrity Faceset - Collecting data_src (celebrity) videos

Sources:

  1. YouTube - 90% of the time I try to find interview videos on YouTube in 720p or 1080p. These videos should show your target celebrity's face clearly, moving in different directions with multiple facial expressions. Different angles are also very important. I then use a downloader tool to grab the YouTube video (any will work).
  2. Movies/TV shows - similarly, if the celebrity is in movies or TV shows, you can download them and use a video editor to collect clips where the celebrity appears. This source is also good for finding those hard-to-get angles (like looking from above or below).
  3. Images - the last source I use, if needed, is images from photoshoots, image boards, and wallpapers. These images should all be HD.
If I find a single long interview video that has consistent lighting with different facial expressions and angles, I download it, then rename the video "data_src.mp4" to extract the celebrity face. If I need to use multiple videos from different sources, I put them all into a video editor (Adobe Premiere) and combine them into one long video before renaming it "data_src.mp4".
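If you don't have a video editor handy, the clips can also be joined from a script. Below is a minimal sketch using Python to drive ffmpeg's concat demuxer; the clip file names are placeholders, it assumes ffmpeg is on your PATH, and stream copy only works when all clips share the same codec and resolution (re-encode otherwise).

Code:
import pathlib
import subprocess

# Hypothetical clip list - replace with your own downloaded/edited clips.
clips = ["interview1.mp4", "interview2.mp4", "movie_scene.mp4"]

# The concat demuxer reads its inputs from a text file, one per line.
list_file = pathlib.Path("clips.txt")
list_file.write_text("".join(f"file '{c}'\n" for c in clips))

# -c copy avoids re-encoding; only valid when all clips match in format.
subprocess.run(
    ["ffmpeg", "-f", "concat", "-safe", "0", "-i", str(list_file),
     "-c", "copy", "data_src.mp4"],
    check=True,
)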

Extracting Faces from data_src (celebrity) video:
  1. Make sure the celebrity video you just made is named "data_src" and placed in the appropriate directory "\workspace".
  2. Next, run "2) extract PNG from video data_src 5 FPS" - I usually use 5 FPS so I can ensure I have enough images for a decent faceset. I can always remove and delete images later if I want to reduce the size of the faceset. Usually 1 FPS is too little and 10 FPS is too much for me.
  3. Next, run "4) data_src extract faces MT best GPU" - This will extract and align your faceset. The images will be in "\workspace\data_src\aligned" in sequence. The faceset needs to stay in this directory, but you can now clean it up.
  4. Next, run "4.2.2) data_src sort by similar histogram" - This will sort all the images by histogram, which often groups different faces together. You should then manually go through this folder and delete any images that are not of the target celebrity, are blurry, or are duplicates. Sometimes I use a third-party duplicate-image finder to help remove similar images if I have a lot extracted.
  5. (Optional) You can run "4.1) data_src check result" to use the included program XNViewMP to quickly view and delete unwanted images.
  6. (Optional) Sometimes I also run "4.2.4) data_src sort by dissimilar histogram" - This will sort the images that are most different from each other first. I then view the images at the end of the folder, and if they mostly look the same, I delete 1/4 of them to reduce my faceset size.
  7. Next, to make sure ALL my data_src images are aligned, I run "4.2.other) data_src util add landmarks debug images", which will generate jpg images showing the facial landmarks detected when you extracted the celebrity face.
Since this essentially duplicates your data_src folder, mixing the debug and regular images together, you can use Windows' search feature to view only the debug images. Use the search bar at the top right and search for "_debug".

[Image]

You can now quickly scroll through and look for images where the landmarks are misaligned and delete them (remember to delete the original images, not just the _debug versions). Once you've cleaned your entire faceset, you can delete the "_debug" images, since they are just duplicates with landmarks drawn on.
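If your faceset is large, this bookkeeping can be scripted. Below is a minimal sketch (assuming the debug copies live in the same folder and carry a "_debug" suffix, as described above - verify against your own build) that inverts the routine slightly: while reviewing, delete only the bad _debug images, then let the script remove the matching originals and purge the leftover _debug duplicates.

Code:
from pathlib import Path

aligned = Path(r"workspace\data_src\aligned")

# Any face whose _debug twin was deleted during review is treated as bad.
for img in list(aligned.glob("*.jpg")):
    if img.stem.endswith("_debug"):
        continue
    debug_twin = img.with_name(img.stem + "_debug" + img.suffix)
    if not debug_twin.exists():
        print("removing", img.name)
        img.unlink()

# Finally, drop the remaining _debug duplicates.
for dbg in list(aligned.glob("*_debug.jpg")):
    dbg.unlink()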

What images should be removed?

Images that are blurry should be removed during the training process. I usually move these somewhere else during training; just make a new folder somewhere. If the images I removed are properly aligned and merely blurry, I place them back into the aligned folder after training is complete and before converting. See examples below where I would remove images.

[Image]

Another example of blurry images:

[Image]

Previously I recommended removing partial faces during training, but I have found that training on them is better, as the model will then still convert partial faces. So as long as the images are properly aligned, you can leave them in.

[Image]

Faces with bad lighting (blown-out whites) or that are too dark, and transparent faces (e.g. during scene transitions), should also be removed during training. In the example below, all images would be removed during training, and some deleted outright because they are not aligned properly. I generally remove images from training if the eyebrows are cut off.

[Image]
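The blur screening above can also be partially automated. This is only a rough sketch, not part of DeepFaceLab: it uses OpenCV's Laplacian variance as a sharpness proxy, and the threshold and folder names are assumptions you will need to tune by eye.

Code:
import cv2
from pathlib import Path

aligned = Path(r"workspace\data_src\aligned")
holding = Path(r"workspace\removed_src")   # park blurry faces here during training
holding.mkdir(exist_ok=True)

THRESHOLD = 60.0  # assumed starting point; inspect flagged images before trusting it

for img_path in list(aligned.glob("*.jpg")):
    gray = cv2.imread(str(img_path), cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue
    # Low Laplacian variance = few sharp edges = probably blurry.
    if cv2.Laplacian(gray, cv2.CV_64F).var() < THRESHOLD:
        img_path.rename(holding / img_path.name)  # move back before converting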

Extracting Faces from data_dst (Pornstar) video

After finding a porn video where the actress (or actor) looks like the celebrity, I edit the video and cut out any scenes that aren't required (e.g. intros), or scenes with odd angles I know will not convert well. Lately I have been cutting out kissing scenes as well, because extraction in these scenes is often wrong, and it's a real pain to manually extract hundreds of images. After you have your full clip, rename it "data_dst.mp4" and make sure it's in the "\workspace" directory.
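If you prefer the command line over a video editor for this rough cut, each keeper scene can be clipped with ffmpeg. Here is a sketch with assumed in/out timestamps; note that stream copy snaps to keyframes, so re-encode if you need frame-exact cuts. Join the clips afterwards as shown earlier.

Code:
import subprocess

# Hypothetical example: keep one scene from the source video.
subprocess.run(
    ["ffmpeg",
     "-ss", "00:01:30",        # assumed in-point (seek before decoding)
     "-i", "source_video.mp4",
     "-t", "00:11:15",         # assumed clip duration
     "-c", "copy",             # stream copy: fast, but keyframe-aligned
     "clip01.mp4"],
    check=True,
)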
  1. Run "3.2) extract PNG from video data_dst FULL FPS" to cut the video into each frame.
  2. Run "5) data_dst extract faces MT (or DLIB) best GPU" to extract the Pornstar face and align it using MTCNN or DLIB based on your selection.
  3. Next, run "5.2) data_dst sort by similar histogram" to sort the Pornstar face by histogram. You should then go through the images to clean this faceset.
  4. I then run "5.1) data_dst check results" which will use XNViewMP to review my data_dst faceset. I then make a folder called "removed" where I will move obstructed, blurry, or partial faces (see examples above). I also delete ALL images that are not of my Pornstar I want to swap, and also images that are not faces. After going through my data_src folder once, I keep a mental note of what scenes have misaligned faces.
  5. Next, will run "5.1) data_dst check results debug" which will bring up XNViewMP again, but shows me all the data_dst images with the facial landmarks. I quickly scroll through the images, or skip to known scenes where images are not aligned properly. Delete the images are are not correctly aligned, or have completely been missed.
[Image]

In the example above, you can see that the landmarks don't match the face exactly. A small mistake like this can drastically reduce the quality of your deepfake, making the side of the face blurry/fuzzy. Sometimes the extractor will also miss the face in the frame entirely; go ahead and delete that frame too (in the aligned_debug folder only).

Here is another example:

[Image] [Image]

Once you have done that, it's time to run "5) data_dst extract faces MANUAL RE-EXTRACT DELETED RESULTS DEBUG", which will re-extract the images you just deleted, but in manual mode.

Manual Extraction

The landmarks are auto-generated on the image, and you use your cursor to move them into place so they match the target face. Here are the keys you can use while manually extracting:

Mouse wheel - this changes the size of the red and blue boxes. For images where the face is far from the camera, you will need to use the mouse wheel to make the boxes smaller, which zooms into your target face so you can place the landmarks properly.

Mouse left click - this locks in the landmarks, which turn a slightly different color.

Enter - pressing Enter will bring you to the next frame you need to manually extract. To save time, you can just hover your cursor over the target face until you are satisfied and press Enter (instead of left clicking first).

Continue doing this until you have finished all your manual extraction, and the app will re-extract those images. You should now have a pretty accurate faceset that's ready for training.

Training - It is now recommended that you always train an SAE model

Why train the SAE model? It is the most complex model, and it lets you fully utilize DeepFaceLab's features and your PC's resources. It also subsumes ALL the other models (H64, H128, DF, VG); you just need to select the right settings when prompted.

Training SAE Models

When first starting the "6) train SAE" .bat file, you will be prompted with different configuration options. Below are the settings in the order they appear. Users have been sharing their hardware specs along with the settings they've tried; this is a good reference if you don't know where to start:

SUMMARY - First ~8 hours of training (30-80k iterations)

Code:
== Model options:
== |== batch_size : 8
== |== sort_by_yaw : False
== |== random_flip : False
== |== resolution : 128
== |== face_type : f
== |== learn_mask : True
== |== optimizer_mode : 1
== |== archi : df
== |== ae_dims : 512
== |== e_ch_dims : 42
== |== d_ch_dims : 21
== |== remove_gray_border : False
== |== multiscale_decoder : True
== |== pixel_loss : False
== |== face_style_power : 10
== |== bg_style_power : 10
== Running on:
== |== [0 : GeForce GTX 1080 Ti]

Rest of training:


Code:
== Model options:
== |== batch_size : 12
== |== sort_by_yaw : False
== |== random_flip : False
== |== resolution : 128
== |== face_type : f
== |== learn_mask : True
== |== optimizer_mode : 1
== |== archi : df
== |== ae_dims : 512
== |== e_ch_dims : 42
== |== d_ch_dims : 21
== |== remove_gray_border : False
== |== multiscale_decoder : True
== |== pixel_loss : False
== |== face_style_power : 0.1
== |== bg_style_power : 4.0
== Running on:
== |== [0 : GeForce GTX 1080 Ti]
** Note: I no longer recommend using pixel loss due to the high model collapse rate. Only use it if your model is not improving and the loss is not decreasing. Make sure you back up your model in case it collapses.
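A backup can be as simple as copying the model folder before you change risky settings. Here is a minimal sketch, assuming the default "workspace\model" layout:

Code:
import shutil
import time
from pathlib import Path

model_dir = Path(r"workspace\model")

# Timestamped snapshot next to the model folder; restore by copying back.
backup = model_dir.parent / f"model_backup_{time.strftime('%Y%m%d_%H%M%S')}"
shutil.copytree(model_dir, backup)
print("backed up to", backup)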

Converting your deepfake:

Now that you have trained your model, and your preview looks good, the next step is to convert or "Swap" your faces.

Run 7) convert SAE

If you have multiple GPUs it will ask you which one to use:

Code:
Running converter.

You have multi GPUs in a system:
[0] : GeForce GTX 1080 Ti
[1] : GeForce GTX 1070
Which GPU idx to choose? ( skip: best GPU ) :

The GPU you select should match the one used to train your model. If you're not sure, check the files in your model folder:

SAE_data = choose "best GPU"

0_SAE_data = choose "0"
1_SAE_data = choose "1"

Once selected it will show your model summary, and prompt you with some settings:

Code:
Loading model...
Using TensorFlow backend.
===== Model summary =====
== Model name: SAE
==
== Current iteration: 221936
==
== Model options:
== |== batch_size : 16
== |== sort_by_yaw : False
== |== random_flip : False
== |== resolution : 128
== |== face_type : f
== |== learn_mask : True
== |== optimizer_mode : 1
== |== archi : df
== |== ae_dims : 512
== |== e_ch_dims : 42
== |== d_ch_dims : 21
== |== remove_gray_border : False
== |== multiscale_decoder : True
== |== pixel_loss : False
== |== face_style_power : 0.1
== |== bg_style_power : 0.1
== |== ca_weights : False
== |== apply_random_ct : True
== Running on:
== |== [0 : GeForce GTX 1080 Ti]
=========================
Choose mode: (1) overlay, (2) hist match, (3) hist match bw, (4) seamless, (5) raw. Default - 1 :

See post 1 about what these modes mean. I use (1) overlay.

Code:
Mask mode: (1) learned, (2) dst, (3) FAN-prd, (4) FAN-dst , (5) FAN-prd*FAN-dst (6) learned*FAN-prd*FAN-dst (?) help. Default - 1 :

See posts 1 and 2 regarding mask mode. I use either (1) learned (if there are no obstructions of the face in the video), (4) FAN-dst if there are obstructions and I want the conversion to be faster, or (6) learned*FAN-prd*FAN-dst if I don't care how long the conversion takes.


Code:
Choose erode mask modifier [-200..200] (skip:0) : 0
Choose blur mask modifier [-200..200] (skip:100) : 0
Choose output face scale modifier [-50..50] (skip:0) : 0

Most of the time I don't need to change these settings. See posts 1 and 2 to understand them.



Code:
Apply color transfer to predicted face? Choose mode ( rct/lct skip:None ) : rct

I like using rct for color transfer to match skin tones.


Code:
Apply super resolution? (y/n ?:help skip:n) : n

I only apply super resolution if the data_dst video is low quality.


Code:
Degrade color power of final image [0..100] (skip:0) : 0
Export png with alpha channel? (y/n skip:n) : n

I then skip the rest.

After inputting all the settings it should run the conversion. This process is slow so just sit tight and be patient. You can preview the images in your "data_dst\merged" folder.

Preview these images and if you're not happy you can stop the conversion process early and restart it with different settings.

Next, just run "8) converted to mp4" and a "result.mp4" file should be created in your "workspace" folder.
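For reference, that final .bat is roughly equivalent to an ffmpeg call like the sketch below: mux the merged frames back into a video and copy the audio track from data_dst.mp4. The frame rate, file name pattern, and encoder settings here are assumptions - the bundled script handles them for you.

Code:
import subprocess

subprocess.run(
    ["ffmpeg",
     "-r", "30",                                     # assumed source frame rate
     "-i", r"workspace\data_dst\merged\%05d.png",    # assumed frame name pattern
     "-i", r"workspace\data_dst.mp4",
     "-map", "0:v", "-map", "1:a?",                  # video from frames, audio if present
     "-c:v", "libx264", "-crf", "18", "-pix_fmt", "yuv420p",
     "result.mp4"],
    check=True,
)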
Good luck, and happy deepfaking!
