Mr DeepFakes Forums
DeepFaceLab Explained and Usage Tutorial (posted by dpfks)
#1
DeepFaceLab - Tutorial on making deepfakes

[Image: CkMdATah.jpg]

What is DeepFaceLab?
DeepFaceLab is considered a "fakeapp" that uses machine learning to perform faceswaps in videos.

NOTE: For better results, an NVIDIA or AMD GPU with 2GB+ VRAM is recommended. The more memory available, the better the quality of the results.

DeepFaceLab is now compatible with AMD, NVIDIA, and IntelHD graphics, and with any OpenCL 1.2 compatible video card with at least 256MB of video memory.

I have moved from Faceswap to DeepFaceLab due to ease of use, better results, and a lot of time saved. This tutorial is a mix of the creator's instructions and how I use and understand the program. The project is hosted on GitHub (link available to registered members).

Download the latest build (link available to registered members; choose the latest version based on date):

DeepFaceLabCUDA9.2SSE - for NVIDIA video cards up to GTX 1080 Ti
DeepFaceLabCUDA10.1AVX - for RTX NVIDIA video cards with a CPU that supports AVX
DeepFaceLabOpenCLSSE - for AMD/IntelHD cards plus any 64-bit CPU

Features:
  • Available as a standalone package with zero dependencies, ready to use with prebuilt binaries (CUDA, OpenCL, ffmpeg, etc.) for all Windows versions.
  • New models (H64, H128, DF, LIAEF128, SAE, Villain) expanding on the original faceswap model.
  • New architecture that is easy to experiment with.
  • Works on old 2GB cards such as the GT 730. A deepfake has been trained on a notebook with a 2GB GTX 850M in 18 hours (example video available to registered members).
  • Face data embedded in PNG files (no separate aligned file required).
  • Automatically manages GPUs by choosing the best GPU(s).
  • New preview window.
  • Extractor and Converter run in parallel.
  • Debug option added for all stages.
  • Multiple face extraction modes, including S3FD, MTCNN, dlib, and manual extraction.
  • Train at any resolution in increments of 16. Training at 256 is easy on NVIDIA cards thanks to the optimization settings.
Extraction Modes:

S3FD Extraction: Best extractor to date. More accurate with fewer false positives compared to MTCNN extraction. Possibly slightly smoother.
[Image: aH2KbUR.gif][Image: XEdAWZh.gif]
Left = S3FD
Right = MTCNN


MTCNN Extraction: this mode predicts faces more uniformly compared to dlib which creates a less jittered aligned output. The disadvantage of MTCNN extraction is that it will produce a much greater number of false positives, which will mean you have to spend more time cleaning up the facesets generated.
[Image: 68747470733a2f2f692e696d6775722e636f6d2f...562e676966]
Left = dlib
Right = MTCNN

Manual Extractor: This uses a preview GUI that allows users to properly align detected faces by adjusting the landmarks on the image itself. This is very useful when faces are obstructed and can significantly improve the quality of your faceset, which improves training and the final deepfake.

[Image: manual_extractor_0.jpg]
[Image: 38454756-0fa7a86c-3a7e-11e8-9065-182b4a8a7a43.gif]

Advanced Mask Editor:

[Image: Tmy5tACh.jpg]

Results of edited mask training + merging:

[Image: wNewLwjh.jpg]

FANseg conversion - Obstructions no longer an issue!

[Image: M1gQaZfh.jpg]

Model Types:

H64 (2GB+): 64x64 face resolution, the same resolution used in the original FakeApp and FaceSwap apps, but the DeepFaceLab model uses the TensorFlow 1.8 DSSIM loss function, a separate mask decoder, and an improved ConverterMasked. On 2GB and 3GB VRAM cards the model runs in reduced mode. This is also a good option for straight face-on scenes.

H64 example: Robert Downey Jr

[Image: bKyKyBUh.jpg]
[Image: M8AFAEBh.jpg]

H128 (3GB+): Same as above, but at an improved 128x128 resolution, which preserves more facial detail and performs better with higher-resolution videos and close-up shots. On 3GB and 4GB VRAM cards the model runs in reduced mode. Also great for direct face-on scenes, giving the highest resolution and detail. Best option for Asian faces because of their relatively flat features and even lighting on clear skin.

H128 example: Nicholas Cage

[Image: FqyhUyqh.jpg]

H128 example: Asian face on blurry target

[Image: bd7pwnOh.jpg]
[Image: kleK9m8h.jpg]

DF (5GB+): The dfaker model: 128x128 resolution, full face. When using this model, it is recommended not to mix src faces with different lighting conditions. Great for side faces, but provides lower resolution and detail. This model covers a more "full" face, often extending further over the cheeks. It keeps the face unmorphed, giving a convincing face swap; however, the dst face must have a similar shape.

DF example: Nicholas Cage

[Image: r9XOvJMh.jpg]

LIAEF (5GB+): A newer model that combines DF, IAE, and experiments. It tries to morph the src face into dst while keeping the facial features of the src face, with less aggressive morphing. This model has problems recognizing closed eyes. It can partially fix dissimilar face shapes, but will result in a less recognizable face.

LIAEF128 example: Nicholas Cage

[Image: o2FUziZh.jpg]
[Image: xWrxrSqh.jpg]

LIAEF128: Trump to Nicholas Cage example video



LIAEF128YAW (5GB+): Currently in testing, but useful when your src has too many side faces relative to the dst faces. It feeds the neural network samples sorted by yaw.

MIAEF128 (5GB+): Same as the model above, but it also tries to match brightness and color features.
This model has been discontinued by the developer

AVATAR (4GB+): non GAN, 256x256 face controlling model.
This model has been discontinued by the developer


AVATAR video example:



SAE (2GB+): Styled AutoEncoder, similar to LIAEF but with a new face style loss. The SAE model is more of a face morpher/stylizer than a direct swapper. Because it morphs the face, the result can often be less recognizable as the src face. The model can collapse on some scenes.

SAE example: Nicholas Cage on Trump

[Image: RHSdT86h.jpg]
[Image: GRKbdhbh.jpg]

SAE example: Asian kpop star

[Image: 8j5lEnCh.jpg]


SAE example: Alexei Navalny
[Image: EFPjGu1h.jpg]

SAE example: Nicholas Cage in obstructed magneto helmet

[Image: uQXYFFHh.jpg]
[Image: YJMf8Ubh.jpg]

SAE model example of Cage-Trump:


General Overview of how DeepFaceLab Works:

Main Concept:

Take the original dst face, align the predicted src face to it, and create a masked area where the src face is swapped or overlaid.

[Image: piZBwhL.jpg]
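To make that concept concrete, here is a minimal Python/OpenCV sketch of the final compositing step, assuming the predicted src face has already been warped back into the dst frame's coordinates and a single-channel mask image exists (filenames are placeholders for the example, not DFL's actual converter code):

Code:
import cv2
import numpy as np

# Assumed inputs: the original dst frame, the predicted src face already warped
# into the dst frame's coordinates, and a grayscale mask of the face area.
dst_frame = cv2.imread("dst_frame.png").astype(np.float32)
predicted_src = cv2.imread("predicted_src_warped.png").astype(np.float32)
mask = cv2.imread("face_mask.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

# Feather the mask edge so the transition is less visible, then overlay the
# src face inside the masked area and keep the dst frame everywhere else.
mask = cv2.GaussianBlur(mask, (21, 21), 0)[..., None]
merged = predicted_src * mask + dst_frame * (1.0 - mask)

cv2.imwrite("merged_frame.png", merged.astype(np.uint8))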

Convert Modes:

[Image: 9Y9g9th.jpg]

[Image: Z6vb5o2h.png]

Convert Options:

Use predicted mask? (yes/no): (default = yes)

[Image: v8wcOSz.jpg]

Erosion (-100 to +100): (default = 0)

A negative erosion number will essentially increase the area of the src face when converting onto the dst face. A positive number "erodes" the src face, which reduces the area of the src face when converting onto the dst face.

[Image: 0rkatdy.jpg]

Seamless Erosion (0 to 40): (default = 0)

Similar to the description above for erosion, but in seamless mode.

[Image: bpsd772.jpg]

Blur (-200 to +200): (default = 0)

A negative blur will make the border of the cropped faceset more defined (a sharper line). This will make it look like you literally cut and pasted your src face onto your dst face. Adding a positive blur will blur or smooth the transition of the src face onto the dst face, making the border less noticeable.

[Image: XKkZ9n2.jpg]
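For reference, this is roughly what erosion and blur do to the mask, shown with plain OpenCV operations (the exact kernel sizes that DFL maps to each slider value are an assumption here, not its real code):

Code:
import cv2
import numpy as np

mask = cv2.imread("face_mask.png", cv2.IMREAD_GRAYSCALE)
kernel = np.ones((5, 5), np.uint8)

# Positive erosion shrinks the area taken from the src face;
# negative erosion (dilation) grows it.
mask_eroded = cv2.erode(mask, kernel, iterations=3)
mask_dilated = cv2.dilate(mask, kernel, iterations=3)

# Positive blur feathers the mask border so the src/dst transition is smoother;
# with no blur the border is a hard, cut-and-paste looking line.
mask_blurred = cv2.GaussianBlur(mask, (31, 31), 0)

for name, m in [("eroded", mask_eroded), ("dilated", mask_dilated), ("blurred", mask_blurred)]:
    cv2.imwrite(f"mask_{name}.png", m)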

Hist-match threshold (0 to 255): (default = 255)


This option will only be available if you select the hist-match mode. The default threshold is 255, which can cause some highlights to be blown out. Modifying the histogram is essentially adjusting the darks and the lights. A higher threshold allows a wider dynamic range, often causing highlights to be blown out (bright white). A lower threshold will crush the whites, dulling brightness.

[Image: I8k3MYN.jpg]
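If you want intuition for what the threshold is doing, below is a generic histogram-matching sketch with a brightness cap. This only illustrates the matching and clamping ideas described above; it is not DFL's internal hist-match code, and the filenames are placeholders.

Code:
import cv2
import numpy as np

def match_histogram(src, ref):
    # Generic per-channel histogram matching (map src's CDF onto ref's CDF).
    out = np.zeros_like(src, dtype=np.float64)
    for c in range(3):
        s_vals, s_counts = np.unique(src[..., c], return_counts=True)
        r_vals, r_counts = np.unique(ref[..., c], return_counts=True)
        s_cdf = np.cumsum(s_counts) / src[..., c].size
        r_cdf = np.cumsum(r_counts) / ref[..., c].size
        mapped = np.interp(s_cdf, r_cdf, r_vals)
        out[..., c] = mapped[np.searchsorted(s_vals, src[..., c])]
    return out

predicted = cv2.imread("predicted_src_face.png")
dst_face = cv2.imread("dst_face.png")
matched = match_histogram(predicted, dst_face)

# A threshold of 255 allows the full range (highlights can blow out);
# a lower threshold clamps bright values, crushing the whites.
threshold = 255
matched = np.clip(matched, 0, threshold).astype(np.uint8)
cv2.imwrite("hist_matched.png", matched)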

Face Scale (-50 to +50): (default = 0)

A negative face scale number will shrink your src face proportionally towards the center of the dst face. Adding a positive face scale will enlarge your src face.

[Image: 3vPcxI2.jpg]

Transfer Color from predicted face? (LCT/RCT/no): (default = no)

Selecting no will keep the original color of your src faceset. Depending on where you got the src videos and images used to create your faceset, it may have different skin tones compared to the rest of the dst face. Choosing the LCT or RCT method to transfer color may make skin tones more similar and realistic.

[Image: YaH1lCAh.jpg]
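RCT-style color transfer is commonly implemented as Reinhard's statistics matching in LAB color space. The sketch below shows that general technique as an illustration (assumed filenames; not necessarily DFL's exact implementation):

Code:
import cv2
import numpy as np

def reinhard_color_transfer(src_face, dst_face):
    # Shift the LAB mean/std of src_face toward dst_face (Reinhard-style).
    src_lab = cv2.cvtColor(src_face, cv2.COLOR_BGR2LAB).astype(np.float32)
    dst_lab = cv2.cvtColor(dst_face, cv2.COLOR_BGR2LAB).astype(np.float32)

    src_mean, src_std = src_lab.mean(axis=(0, 1)), src_lab.std(axis=(0, 1)) + 1e-6
    dst_mean, dst_std = dst_lab.mean(axis=(0, 1)), dst_lab.std(axis=(0, 1))

    out = (src_lab - src_mean) / src_std * dst_std + dst_mean
    out = np.clip(out, 0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_LAB2BGR)

src = cv2.imread("predicted_src_face.png")
dst = cv2.imread("dst_face.png")
cv2.imwrite("src_after_rct.png", reinhard_color_transfer(src, dst))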

Degrade Color Power of Final Image: (default = 0)

Adding a positive number will cause the colors of the final converted image to be less "intense", usually making the video look more vintage and less vibrant in color.

[Image: pJVqc2B.jpg]

Tips to creating a good DeepFake
  • A narrow src face is better for deepfakes compared to a wide face.
  • Choose the correct model for your scene - Each model has its advantages and disadvantages depending on the particular scene you're trying to deepfake. Some models may work better with less powerful GPUs. See each model description above for recommended scenes and VRAM suggestions.
  • Quality over quantity - Using a src faceset with quality images will give you better results. In general, try to keep only clear images with no obstructions of the src face unless it is a very odd angle that you do not have a lot of images of. Try to delete duplicates and blurry images.
  • Use fewer images if possible - The fewer images you use for both src and dst while training, the faster the process is. Be sure to have enough images and facial expressions to cover the entire scene. Typically a src faceset of 1000-5000 images is enough. Any more and you're likely adding training time without benefit. You can also work with <1000 photos, but results may vary.
  • Generally, the longer the trained model, the better the results.
DeepFaceLab Versions Changelog:

Code:
== 20.06.2019 ==

Trainer: added option for all models
Enable autobackup? (y/n ?:help skip:%s) : 
Autobackup model files with preview every hour for last 15 hours. Latest backup located in model/<>_autobackups/01

SAE: added option only for CUDA builds:
Enable gradient clipping? (y/n, ?:help skip:%s) : 
Gradient clipping reduces chance of model collapse, sacrificing speed of training.

== 02.06.2019 ==

fix error on typing uppercase values

== 24.05.2019 ==

OpenCL : fix FAN-x converter

== 20.05.2019 ==

OpenCL : fixed bug when analysing ops was repeated after each save of the model

== 10.05.2019 ==

fixed work of model pretraining

== 08.05.2019 ==

SAE: added new option 
Apply random color transfer to src faceset? (y/n, ?:help skip:%s) : 
Increase variativity of src samples by apply LCT color transfer from random dst samples.
It is like 'face_style' learning, but more precise color transfer and without risk of model collapse, 
also it does not require additional GPU resources, but the training time may be longer, due to the src faceset is becoming more diverse.

== 05.05.2019 ==

OpenCL: SAE model now works properly

== 05.03.2019 ==

fixes

SAE: additional info in help for options:

Use pixel loss - Enabling this option too early increases the chance of model collapse.
Face style power - Enabling this option increases the chance of model collapse.
Background style power - Enabling this option increases the chance of model collapse.


== 05.01.2019 == 

SAE: added option 'Pretrain the model?'

Pretrain the model with large amount of various faces. 
This technique may help to train the fake with overly different face shapes and light conditions of src/dst data. 
Face will be look more like a morphed. To reduce the morph effect, 
some model files will be initialized but not be updated after pretrain: LIAE: inter_AB.h5 DF: encoder.h5. 
The longer you pretrain the model the more morphed face will look. After that, save and run the training again.


== 04.28.2019 ==

fix 3rd pass extractor hang on AMD 8+ core processors

Converter: fixed error with degrade color after applying 'lct' color transfer

added option at first run for all models: Choose image for the preview history? (y/n skip:n)
Controls: [p] - next, [enter] - confirm.

fixed error with option sort by yaw. Remember, do not use sort by yaw if the dst face has hair that covers the jaw.

== 04.24.2019 ==

SAE: finally the collapses were fixed

added option 'Use CA weights? (y/n, ?:help skip: %s ) : 
Initialize network with 'Convolution Aware' weights from paper https://arxiv.org/abs/1702.06295.
This may help to achieve a higher accuracy model, but consumes a time at first run.

== 04.23.2019 ==

SAE: training should be restarted
remove option 'Remove gray border' because it makes the model very resource intensive.

== 04.21.2019 ==

SAE: 
fix multiscale decoder.
training with liae archi should be restarted

changed help for 'sort by yaw' option:
NN will not learn src face directions that don't match dst face directions. Do not use if the dst face has hair that covers the jaw.


== 04.20.2019 ==

fixed work with NVIDIA cards in TCC mode

Converter: improved FAN-x masking mode.
Now it excludes face obstructions such as hair, fingers, glasses, microphones, etc.
example https://i.imgur.com/x4qroPp.gifv
It works only for full face models, because there were glitches in half face version.

Fanseg is trained by using manually refined by MaskEditor >3000 various faces with obstructions.
Accuracy of fanseg to handle complex obstructions can be improved by adding more samples to dataset, but I have no time for that :(
Dataset is located in the official mega.nz folder.
If your fake has some complex obstructions that incorrectly recognized by fanseg,
you can add manually masked samples from your fake to the dataset
and retrain it by using --model DEV_FANSEG argument in bat file. Read more info in dataset archive.
Minimum recommended VRAM is 6GB and batch size 24 to train fanseg.
Result model\FANSeg_256_full_face.h5 should be placed to DeepFacelab\facelib\ folder

Google Colab now works on Tesla T4 16GB.
With Google Colaboratory you can freely train your model for 12 hours per session, then reset session and continue with last save.
more info how to work with Colab: https://github.com/chervonij/DFL-Colab

== 04.07.2019 == 

Extractor: added warning if aligned folder contains files that will be deleted.

Converter subprocesses limited to maximum 6

== 04.06.2019 ==

added experimental mask editor. 
It is created to improve FANSeg model, but you can try to use it in fakes.
But remember: it does not guarantee quality improvement.
usage:
run 5.4) data_dst mask editor.bat
edit the mask of dst faces with obstructions
train SAE either with 'learn mask' or with 'style values'
Screenshot of mask editor: https://i.imgur.com/SaVpxVn.jpg
result of training and merging using edited mask: https://i.imgur.com/QJi9Myd.jpg
Complex masks are harder to train.

SAE: 
previous SAE model will not work with this update.
Greatly decreased chance of model collapse. 
Increased model accuracy.
Residual blocks now default and this option has been removed.
Improved 'learn mask'.
Added masked preview (switch by space key)

Converter: 
fixed rct/lct in seamless mode
added mask mode (6) learned*FAN-prd*FAN-dst

changed help message for pixel loss:
Pixel loss may help to enhance fine details and stabilize face color. Use it only if quality does not improve over time.

fixed ctrl-c exit in no-preview mode

== 03.31.2019 ==

Converter: fix blur region of seamless.

== 03.30.2019 == 

fixed seamless face jitter
removed options Suppress seamless jitter, seamless erode mask modifier.
seamlessed face now properly uses blur modifier
added option 'FAN-prd&dst' - using multiplied FAN prd and dst mask,

== 03.29.2019 ==

Converter: refactorings and optimizations
added new option
Apply super resolution? (y/n skip:n) : Enhance details by applying DCSCN network.
before/after gif - https://i.imgur.com/jJA71Vy.gif

== 03.26.2019 ==

SAE: removed lightweight encoder.
optimizer mode now can be overriden each run

Trainer: the loss line now shows the average loss values after saving

Converter: fixed bug with copying files without faces.

XNViewMP : updated version

fixed cut video.bat for paths with spaces

== 03.24.2019 ==

old SAE model will not work with this update.

Fixed bug when SAE can be collapsed during a time. 

SAE: removed CA weights and encoder/decoder dims.

added new options:

Encoder dims per channel (21-85 ?:help skip:%d) 
More encoder dims help to recognize more facial features, but require more VRAM. You can fine-tune model size to fit your GPU.

Decoder dims per channel (11-85 ?:help skip:%d) 
More decoder dims help to get better details, but require more VRAM. You can fine-tune model size to fit your GPU.

Add residual blocks to decoder? (y/n, ?:help skip:n) : 
These blocks help to get better details, but require more computing time.

Remove gray border? (y/n, ?:help skip:n) : 
Removes gray border of predicted face, but requires more computing resources.


Extract images from video: added option
Output image format? ( jpg png ?:help skip:png ) : 
PNG is lossless, but produces images with size x10 larger than JPG.
JPG extraction is faster, especially on HDD instead of SSD.

== 03.21.2019 ==

OpenCL build: fixed, now works on most video cards again.

old SAE model will not work with this update.
Fixed bug when SAE can be collapsed during a time

Added option
Use CA weights? (y/n, ?:help skip: n ) :
Initialize network with 'Convolution Aware' weights. 
This may help to achieve a higher accuracy model, but consumes time at first run.

Extractor:
removed DLIB extractor
greatly increased accuracy of landmarks extraction, especially with S3FD detector, but speed of 2nd pass now slower.
From this point on, it is recommended to use only the S3FD detector.
before https://i.imgur.com/SPGeJCm.gif
after https://i.imgur.com/VmmAm8p.gif

Converter: added new option to choose type of mask for full-face models.

Mask mode: (1) learned, (2) dst, (3) FAN-prd, (4) FAN-dst (?) help. Default - 1 : 
Learned – Learned mask, if you choose option 'Learn mask' in model. The contours are fairly smooth, but can be wobbly.
Dst – raw mask from dst face, wobbly contours.
FAN-prd – mask from pretrained FAN model from predicted face. Very smooth not shaky countours.
FAN-dst – mask from pretrained FAN model from dst face. Very smooth not shaky countours.
Advantages of FAN mask: you can get a not wobbly shaky without learning it by model.
Disadvantage of FAN mask: may produce artifacts on the contours if the face is obstructed.

== 03.13.2019 ==

SAE: added new option

Optimizer mode? ( 1,2,3 ?:help skip:1) : 
this option only for NVIDIA cards. Optimizer mode of neural network.
1 - default.
2 - allows you to train x2 bigger network, uses a lot of RAM.
3 - allows you to train x3 bigger network, uses huge amount of RAM and 30% slower.

Epoch term renamed to iteration term.

added showing timestamp in string of training in console

== 03.11.2019 ==

CUDA10.1AVX users - update your video drivers from geforce.com site

face extractor:

added new extractor S3FD - more precise, produces less false-positive faces, accelerated by AMD/IntelHD GPU (while MT is not)

speed of 1st pass with DLIB significantly increased

decreased amount of false-positive faces for all extractors

manual extractor: added 'h' button to hide the help information

fix DFL conflict with system python installation

removed unwanted tensorflow info from console log

updated manual_ru

== 03.07.2019 ==

fixes

upgrade to python 3.6.8

Reorganized structure of DFL folder. Removed unnecessary files and other trash.

Current available builds now:

DeepFaceLabCUDA9.2SSE - for NVIDIA cards up to GTX10x0 series and any 64-bit CPU
DeepFaceLabCUDA10.1AVX - for NVIDIA cards up to RTX and CPU with AVX instructions support
DeepFaceLabOpenCLSSE - for AMD/IntelHD cards and any 64-bit CPU

== 03.04.2019 == 

added
4.2.other) data_src util recover original filename.bat
5.3.other) data_dst util recover original filename.bat

== 03.03.2019 ==

Convertor: fix seamless

== for older changelog see github page ==
#2
How to Use DeepFaceLab - Explanation of Functions, Features and Inputs

This tutorial is based on how I use DeepFaceLab, which is now what I use to create my deepfakes. I don't know everything about this app yet, but this is a quick explanation of how I use it. I try to use an application that will produce decent results with the least amount of work or time. For me, the easy version of DeepFaceLab works the best. This version has easy-to-use .bat files that run common commands so that we don't have to manually edit Python commands/code to create our deepfakes. The developer is also really active in developing the app to make deepfakes easier to create and to produce quality faceswaps.

Download the latest build (link available to registered members; choose the latest version based on date):

Installing DeepFaceLab

[Image: 4PlpB42h.jpg]
  • Use the download link above. This is directly from the developer, and will always be up-to-date.
  • Download the appropriate DeepFaceLab build depending on your system. CUDA for NVIDIA GPU, and OpenCL for AMD and Intel HD cards.
  • The application in the download link above is already compiled with everything it needs to make deepfakes. You just need to extract the folder anywhere you want. In this example I have extracted it to my "D:\" drive on a Windows 10 PC.
Components and Files of DeepFaceLab

[Image: aPSBPZwh.jpg]
  • Once you extract the folder to your PC, you should have something that looks like the image above.
  • DeepFaceLab is packed with features, which is why there are so many .bat files, but not all of them have to be used.
  • .bat files execute certain commands to perform a particular function in DeepFaceLab. This makes it easier for users who do not know how to run commands manually.
Folders and Files of DeepFaceLab:

"_internal" - this folder contains all the dependencies, and python apps that are used in the background. You will not likely need to modify anything in this folder (unless you need to update something to the latest version).

"workspace" - this folder is where you will go to add stuff for the app to work. For example, you will add videos, and images to this folder which will be used to create your deepfake (see explanation below). This will also be where your final video will be generated. This is what your "workspace" folder should look like:

[Image: x21WTuF.jpg]
For the application to work, the names of the above files must remain this way. Please be sure not to change them unless you are backing up your data.

"data_dst" folder - this is the folder where all your "destination" files will be contained. The "destination" is usually dealing with the model that you would like to change the face of (Pornstar).

"data_src" folder - this is the folder where all your "source" files will be contained. The "source" is usually the model face you would like to place on the "destination" model (Celebrity).

"data_dst" video - this is a video file that contains the video with the body you want to use (Porn video). It is called the destination video because this is the destination of where the desired face will be placed on.

"data_src" video - this is a video file that contains the face (celebrity) you want to take and place onto the destination body. 

Explanation of .bat files

The .bat files execute certain code automatically. For new users, this saves us headaches by hiding the complicated code; all we have to do is run one of the .bat files. I will be referring to the image above to explain only the ones I use. They are all titled appropriately and are pretty self-explanatory.

"1) clear workspace" - this will literally delete everything in the "workspace" folder and restore everything to the default folder/file structure. This is meant to be used when you want to start a new project. But if you want to save your model for later use, please remember to back it up! I don't use this .bat, I just manually delete the contents in "data_dst" and "data_src" folders. I actually delete this file to prevent my from accidentally starting it (it's missing from the image).

"2) extract PNG from video data_src (1 FPS/5 FPS/10 FPS/ FULL FPS)" - this will take the video file that you named "data_src" in the workspace folder, and cut the video into frames so that you can extract the faces. You can choose which FPS (frame per second) to use. The higher the FPS, the more frames will be generated, meaning more faces and folder size.

"3.1) cut video (edit me, drop video on me)" - This is used to cut your video, from a certain time - another time that you specify. You need to right click on this .bat file and edit the "SET from_time=" and SET "time_length=", save, then drag and drop your video file only this .bat for it to cut your video. It's easier to just use any random video editor that has a GUI to do this though.

"3.2) extract PNG from video data_dst FULL FPS" - this will take your destination video (porn video) and cut every frame into a PNG image. You can only use full FPS for this option because when you try to generate the video with the swapped faces, we want to FPS to be the same as the original.

"3.other) denoise extracted data_dst x1/x5/x10" - Running this file will add sort of a "filter" over your frames to denoise (smooth out) the images, making your entire video less noisy (grain in video for example). Sometimes this is used to smooth out the skin of actors in the video to make the deepfake more seamless. Most of the time this is not needed.

Example of denoise effect:
[Image: 0vW81lP.gif]

"4) data_src extract faces DLIB all GPU debug" - This will extract your faces from your already extracted frames from data_src video using DLIB. The debug window will be used everytime DLIB has an issue with a face detection so you can manually select and place the proper landmarks on the face it's having issues with.

"4) data_src extract faces MT best GPU" - this runs the module to extract faces from the images generated from the frames of video "data_src" using the MTCNN face detection. This may take longer than DLIB, but catches more angles then DLIB. It also produces more false positives (extracts faces that aren't really faces).

"4) data_src extract faces DLIB best GPU" - this runs the module to extract faces from the images generated from the frames of video "data_src" but using DLIB detection instead. This is is quicker than MTCNN but does not capture all angles. It also produces less false positives compared to MTCNN.

"4) data_src extract faces MANUAL" - this will allow you to manually extract and place landmarks on your faces in the spliced frames for your data_src video.

"4.1_ data_src check results" - this is useful for viewing and going through your data_src faceset using a program called XnView MP. This program is lightweight and is able to scroll through thousands of images without causing frustrating lag, unlike using window's default explorer. You can use this to delete bad frames.

"4.2.1) data_src sort by blur" - Useful to sort your data_src faceset by how blurry it is. It sorts from most clear to blurry. Often you can run this and just delete really blurry images at the end of the folder to make sure your faceset provides the best training data.

"4.2.2) data_src sort by similar histogram" - Most useful sorting tool. This will sort your data_src faceset by histogram, which means it will group similar lighting/color spread of images. This will ALSO group different faces together as well. If you have multiple faces in your frame that got extracted, you can run this and then manually go through your faceset to delete unwanted faces.

"4.2.4) data_src sort by dissimilar face" - This will sort your faceset by the differences in images. The less similar images will be at the top, whereas the most similar images will be at the bottom. With large facesets, you can run this and delete a lot of similar faces that aren't useful, and will cause your training time to be longer.

"4.2.4) data_src sort by dissimilar histogram" - placeholder

"4.2.5) data_src sort by face pitch" - placeholder 

"4.2.5) data_src sort by face yaw" - This will sort your fasceset by the position of the face (eg: side profile to front face to other side profile). You can use this to make sure you have sufficient images for different angles.

"4.2.6) data_src sort by final" - placeholder

"4.2.other) data_src sort by black" - This sorts your faceset by how much black space there is in the image. This is usually due to the face getting cut off the video screen. We only want full faces in out data_src faceset, so go ahead and run this to delete any images where the faces are cut off.

"4.2.other) data_src sort by brightness" - Sorts by how bright your images in your data_src faceset are. Goes from Brightest to darkest.

"4.2.other) data_src sort by hue" - Sorts by the hue of the images (color tones). This is not often used.

"4.2.other) data_src sort by one face in image" - Sorts by the number of faces it detects in an image. This is not often used.

"4.2.other) data_src sort by original filename" - Resorts the images in your data_src faceset by the original filename.

"4.2.other) data_src util add landmarks debug images" - Takes your data_src faceset images and adds landmarks to the images. You can use this to identify images that are not correctly aligned so you can remove them from your faceset.

"5) data_dst extract faces MT all GPU" - once again, extracts all the faces from your "data_dst" video using the MTCNN face detection module.

"5) data_dst extract faces DLIB all GPU" - again, extracts all the faces from your "data_dst" video but uses the DLIB module.

"5) data_dst extract faces DLIB all GPU +manual fix" - similarly to step 4, this will extract faces from your "data_dst" video and prompt you with the debug window to manually fix problematic images.

"5) data_dst extract faces MANUAL RE-EXTRACT DELETED RESULTS DEBUG" - Running this will allow you to manually extract faces from images that you deleted from your "\workspace\data_dst\aligned_debug" folder. See my workflow post below for more details.

"5) data_dst extract faces MANUAL" - This will allow you to extract the faces from your data_dst video frames manually.

"5.1) data_dst check results debug" - This will open your "\workspace\data_dst\aligned_debug" folder in the XnView MP image viewer. You can use this program to view the landmarks on your problematic images, and delete them.

"5.1) data_dst check results" - this will open your data_dst folder in the XnView MP program for your to review your data_dst faceset.

"5.2) data_dst sort by similar histogram" - this will sort your data_dst faceset by histogram (lighting and color spread) and group similar faces together. If you have more than one person's face in your faceset, this will group them together to make it easier for your to remove unwanted faces.

"5.3) data_dst sort by original filename" - Sorts your dataset based on the original name of the image when it was first extracted.

"6) train (DF/H64/H128/LIAEF128/SAE)" - begins training your model based on the one you selected. See post 1 above for examples and descriptions of each model type.

"7) convert (DF/H64/H128)" - this takes your trained model, and "converts" your dst face into your targeted src face. This will create a folder called "merged" in your data_dst folder where the converted images are kept.

"8) convert to (avi/mp4/mov(lossless)/mp4(lossless))" - this takes the converted images from the merged folder and compiles them into a watchable video format (.avi or .mp4) depending on what you choose. The audio will also be automatically transferred as well.

"9) util convert aligned PNG to JPG (drop folder on me)" - this utility will convert your PNG facesets (from previous DFL versions) into JPG for use with the newer version.

The author of DeepFaceLab recently released a tutorial video:


SAE Training Input Settings:

A shared spreadsheet with SAE training settings can be found below. This is a good place to start to find out what settings you should test based on your GPU.

(Spreadsheet link available to registered members.)

If you have a system with multiple GPUs, you will be prompted to select which one you'd like to use for training.

Code:
Running trainer.

You have multi GPUs in a system:
[0] : GeForce GTX 1080 Ti
[1] : GeForce GTX 1070
Which GPU idx to choose? ( skip: best GPU ) : 1
Loading model...

In the above case, 0 = GTX 1080 Ti, 1 = GTX 1070, and no selection = best GPU.

Code:
Model first run. Enter model options as default for each run.
Write preview history? (y/n ?:help skip:n) :

Selecting yes will make DFL save snapshots of your preview window every so often so you can see the progress whenever you'd like. This is useful for looking back to see what your setting changes did.


Code:
Target iteration (skip:unlimited/default) :

The iteration at which DFL will stop. If you want it to keep training indefinitely, skip this step by pressing Enter.

Code:
Batch_size (?:help skip:0) :

The batch size is the number of samples used for each iteration, in which the model's parameters are updated. The lower the number, the faster training is, but the less accurate your model will be. The larger the number, the slower it will be, but the better your model will generalize.
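As a toy illustration of what batch size means in a generic training loop (not DFL's trainer, just the general idea):

Code:
import numpy as np

faces = np.random.rand(1000, 128, 128, 3)  # stand-in for an aligned faceset
batch_size = 8
rng = np.random.default_rng(0)

for iteration in range(3):
    # Each iteration draws batch_size random samples; one parameter update is
    # computed from the average loss over this batch. Bigger batches mean
    # smoother updates but more VRAM and slower iterations.
    idx = rng.choice(len(faces), size=batch_size, replace=False)
    batch = faces[idx]
    print(f"iteration {iteration}: training on batch of shape {batch.shape}")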

Code:
Feed faces to network sorted by yaw? (y/n ?:help skip:n) :

This will feed your faceset through the network based on yaw. Use YES if you have a smaller, or similar number of data_src images compared to data_dst.

Code:
Flip faces randomly? (y/n ?:help skip:y) :

This will randomly flip faces on the vertical axis. This often results in an "abnormal" conversion because no one's face is exactly symmetrical. It is generally recommended to use NO, unless you do not have enough images in your faceset to cover all sides/angles.

Code:
Src face scale modifier % ( -30...30, ?:help skip:0) :

Scales the image smaller, or larger. See post 1 for example.

Code:
Resolution ( 64-256 ?:help skip:128) :

Resolution of images that you'll train in. The higher the resolution, the longer the training time to get to the desired results. No evidence that higher resolutions produce better results.

Code:
Half or Full face? (h/f, ?:help skip:f) :

Full face models = DF or LIAE
Half face model = H128

See post one for images and examples of each model.

Code:
Learn mask? (y/n, ?:help skip:y)

The program will learn how to mask your faceswap to make the swap look seamless. You need to turn this on if you are using the mask editing feature.

Code:
Optimizer mode? ( 1,2,3 ?:help skip:1) :

Optimizer modes can be changed each time you start training. If you have a powerful GPU and can run your desired settings, then you can leave this setting at 1. If you are constantly getting OOM or memory errors, then try mode 2 or 3. Modes 2 and 3 will utilize system RAM and CPU.

Code:
AE architecture (df, liae ?:help skip:df) :

If you chose full face option above, you will see these options. Check post 1 for model examples.

Code:
AutoEncoder dims (32-1024 ?:help skip:512) :

The higher the dimensions, the more detail the trained model will hold, but at the cost of training time and GPU resources. It is generally recommended to keep the default unless you have a higher-end GPU.

Code:
Encoder dims per channel (21-85 ?:help skip:42) :

Same as above.

Code:
Decoder dims per channel (10-85 ?:help skip:21) :

Same as above.

Code:
Remove gray border? (y/n, ?:help skip:n) :

Removes the gray border around the mask. Turning this on will use more GPU resources. This feature was removed in the 04.21 version because it is resource intensive and has little impact.

Code:
Use multiscale decoder? (y/n, ?:help skip:n) :

Uses a multiscale decoder, which gives superior results.

Code:
Use pixel loss? (y/n, ?:help skip: n ) :

Turning on pixel loss is no longer recommended as it may increase the risk of your model collapsing/corrupting. Turning this on will also consume more GPU resources. It may fix skin tone differences and reduce jitter in the conversion. If your loss is not getting any better over time, you may try your luck with pixel loss on.

Code:
Face style power ( 0.0 .. 100.0 ?:help skip:0.00) :

Setting a style power of 0 is essentially using the base model (DF, H128, LIAEF). The higher the style power, the more the model tries to morph the data_src face toward data_dst. Higher style power may fix skin tones, but the end result may look less like data_src.

Code:
Background style power ( 0.0 .. 100.0 ?:help skip:0.00) :

Same as above. Setting a style power of 0 is essentially using the base model (DF, H128, LIAEF). The higher the style power, the more the model tries to morph things outside the mask toward data_dst. Higher values may change the face so it looks different from data_src.

Once you have input your training settings, you'll see a preview window and a command window. The preview window shows the iter number, which is how many iterations the training has completed. There is no right answer regarding when to stop training, but results usually become clear at around 120k iterations for me (freshly trained model). Always use the preview display to estimate when to stop training.

If you wish to save training, press the ENTER key while the preview window is focused.

Converting Settings: The process of "swapping" the faces

The next step after getting a well trained model is to convert your images. This is done by running the "7) Convert SAE" .bat file (or whatever model you trained). It is recommended to use the SAE model as it includes all previous models within it, and it is being actively developed with amazing features. The example below will only pertain to SAE settings.

Code:
Choose mode: (1) overlay, (2) hist match, (3) hist match bw, (4) seamless, (5) raw. Default - 1 : 1

See post 1 for examples of the conversion modes. Overlay seems to give the best results for SAE. Seamless may also work for beginners.

Code:
Mask mode: (1) learned, (2) dst, (3) FAN-prd, (4) FAN-dst , (5) FAN-prd*FAN-dst (6) learned*FAN-prd*FAN-dst (?) help. Default - 1 : 6

(1) Learned - If you selected "learn mask" during the training process, you can use this, especially if you used the "edit mask" feature to manually mask obstructions or anything else in your data_src faceset.

(2) dst - Convert using the mask from the data_dst faceset (I think this is how it works).

(3) FAN-prd, (4) FAN-dst, (5) FAN-prd*FAN-dst, (6) learned*FAN-prd*FAN-dst - all these options use pretrained models. Any FAN-x mode will handle obstructions based on a pre-trained FANSeg model. Honestly, I don't know the exact details of each mode. Play with each setting to see what works best for you.

Once converting is finished, you will have a folder of images with the swapped face in "\workspace\data_dst\merged". You can have a look at the images in this folder as a sneak peek of your deepfake.

Compiling merged images to video file

Next run "8) converted to mp4/avi/etc"

This will take your original audio from data_dst.mp4 and merge it with the images in "\workspace\data_dst\merged". It will ask you what bitrate you want to compile at:

Code:
Bitrate of output file in MB/s ? (default:16) :

If you don't know what bitrate is, just google it.
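This step is essentially an ffmpeg job that muxes the merged frames with the original audio at the bitrate you choose. Below is a hedged sketch of an equivalent call via Python; the frame-name pattern, FPS, and paths are assumptions for illustration, not the exact flags the .bat uses:

Code:
import subprocess

subprocess.run([
    "ffmpeg", "-y",
    "-r", "30",                                    # should match the original video's FPS
    "-i", r"workspace\data_dst\merged\%05d.png",   # assumed frame-numbering pattern
    "-i", r"workspace\data_dst.mp4",               # take the audio from the original
    "-map", "0:v", "-map", "1:a?",
    "-c:v", "libx264", "-b:v", "16M",              # ~16 Mbit/s, like the default prompt
    "-c:a", "copy",
    r"workspace\result.mp4",
], check=True)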

Once complete, it will produce your video file called result.mp4 in the "\workspace" directory.
#3
My Personal DeepFake Workflow Using DeepFaceLab

The following walk-through describes my process and workflow. This is what works for me, but it may not be the best or most efficient way to create deepfakes. I am still learning how to perfect these.

Creating Celebrity Faceset - Collecting data_src (celebrity) videos

Sources:

  1. YouTube - 90% of the time I try to find interview videos on YouTube in 720p or 1080p. These videos should show your target celebrity's face clearly, moving in different directions with multiple facial expressions. Different angles are also very important. I then use a tool to download the YouTube video (any downloader will work).
  2. Movies/TV shows - similarly, if the celebrity is in movies or TV shows, you can download them and use a video editor to collect clips where the celebrity is in the video. This source is also good to find those hard to get angles (like looking from above or below).
  3. Images - the last source I would use if needed are images from photoshoots, image boards, wallpapers. These images should all be HD.
If I find a single long interview video that has consistent lighting with different facial expressions and angles, I download it then rename the video "data_src.mp4" to extract the celebrity face. If I need to use multiple videos from different sources, I put them all into a video editor (Adobe Premiere) and combine them into one long video before renaming it "data_src.mp4".

Extracting Faces from data_src (celebrity) video:
  1. Make sure you rename the celebrity video you just made to "data_src" and place it in the appropriate directory "\workspace".
  2. Next, run "2) extract PNG from video data_src 5 FPS" -  usually use 5 FPS so I can ensure I have enough images for a decent faceset. I can always remove and delete images later if I want to reduce the size of the faceset. Usually 1 FPS is too little and 10 FPS is too much for me.
  3. Next, run "4) data_src extract faces MT best GPU" - This will extract and align your faceset. The images will be in "\workspace\data_src\aligned" in sequence. The faceset needs to stay in this directory, but you can now clean up the faceset.
  4. Next, run "4.2.2) data_src sort by similar histogram" - This will sort all the images by histogram, and often groups different faces together. You should then manually go through this folder and delete any images that are not of the target celebrity, are blurry, or any duplicates. Some times I use the program You are not allowed to view links. Register or Login to view. to help remove similar images if I have a lot extracted.
  5. (Optional) you can run "4.1) data_src check result" to use the included program XNViewMP to quickly view and delete unwanted images.
  6. (Optional) Sometimes I also run "4.2.4) data_src sort by dissimilar histogram" - This will sort images that are really different from each other first. I then view the images at the end of the folder, and if they mostly look the same, I will delete 1/4 of the images to reduce my faceset size.
  7. Next, to make sure ALL my data_src images are aligned, I run "4.2.other) data_src util add landmarks debug images", which will generate jpg images showing the facial landmarks detected previously when you extracted the celebrity face.
Since this essentially duplicates your data_src folder, mixing both the debug and regular images together, you can use Windows' search feature to view only the debug images. Use the search bar at the top right and search for "_debug".

[Image: u8JTAx4h.jpg]

You can now quickly scroll through and look for images where the landmarks are misaligned and delete them (remember to delete the original images and not just the _debug version). Once you clean your entire faceset, you can delete the "_debug" images since they are just duplicates with landmarks.
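If you prefer to script that cleanup, a small sketch like the one below can list (and later remove) the debug copies, assuming they share the original filename plus a "_debug" suffix and live in the usual aligned folder:

Code:
from pathlib import Path

aligned = Path(r"workspace\data_src\aligned")
debug_images = sorted(p for p in aligned.iterdir() if "_debug" in p.stem)
print(f"Found {len(debug_images)} debug images to review")

# After you have reviewed the misaligned faces (deleting their originals too),
# the leftover debug copies can be removed in one go:
# for p in debug_images:
#     p.unlink()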

What images should be removed?

Images that are blurry should be removed during the training process. I usually remove these and place them somewhere else during training. Just make a new folder somewhere. If the images I remove are aligned and just blurry, I will place them back into the aligned folder after training is complete, and before converting. See examples below where I would remove images.

[Image: JPld5qUh.jpg]

Another example of blurry images:

[Image: RkkeKuQh.jpg]

Previously I recommended removing partial faces during training, but I found that keeping them in training is better, since the model will still have to convert partial faces. So as long as the images are properly aligned, you can leave them in.

[Image: Yy6K5qLh.jpg]

Bad lighting (blown-out whites), overly dark faces, and transparent faces (e.g., during scene transitions) should also be removed during training. In the example below, all images would be removed during training, and some even deleted entirely because they are not aligned properly. I generally remove images from training if the eyebrows are cut off.

[Image: Kqw2k20h.jpg]

Extracting Faces from data_dst (Pornstar) video

After finding a porn video where the actress (or actor) looks like the celebrity, I edit the porn video and cut out any scenes that aren't required (e.g., intros), or scenes with odd angles that I know will not convert well. Lately I have been cutting out kissing scenes as well because extraction in these scenes is often wrong, and it's a real pain to manually extract hundreds of images. After you have your full porn video clip, rename it "data_dst.mp4" and make sure it's in the "\workspace" folder.
  1. Run "3.2) extract PNG from video data_dst FULL FPS" to cut the video into each frame.
  2. Run "5) data_dst extract faces MT (or DLIB) best GPU" to extract the Pornstar face and align it using MTCNN or DLIB based on your selection.
  3. Next, run "5.2) data_dst sort by similar histogram" to sort the Pornstar face by histogram. You should then go through the images to clean this faceset.
  4. I then run "5.1) data_dst check results" which will use XNViewMP to review my data_dst faceset. I then make a folder called "removed" where I will move obstructed, blurry, or partial faces (see examples above). I also delete ALL images that are not of my Pornstar I want to swap, and also images that are not faces. After going through my data_src folder once, I keep a mental note of what scenes have misaligned faces.
  5. Next, will run "5.1) data_dst check results debug" which will bring up XNViewMP again, but shows me all the data_dst images with the facial landmarks. I quickly scroll through the images, or skip to known scenes where images are not aligned properly. Delete the images are are not correctly aligned, or have completely been missed.
[Image: 6UaovJuh.jpg]

In the example above, you can see that the alignment is not exact. A small mistake like this can drastically reduce the quality of your deepfake; it will make the side of the face blurry/fuzzy. Sometimes the extractor will also totally miss the face in the frame; go ahead and delete that frame too (in the aligned_debug folder only).

Here is another example:

[Image: S2suSYHh.jpg][Image: eZ3GMYJh.jpg]

Once you have done that, it's time to run "5) data_dst extract faces MANUAL RE-EXTRACT DELETED RESULTS DEBUG" which will re-extract the images you just deleted, but in manual mode.

Manual Extraction

Currently the landmarks are auto-generated on the image, and you just use your cursor to move them into place so they match the target face. Here are the keys you may use while manually extracting:

Mouse wheel - this will change the sizes of the red and blue boxes. For images where the face is far away from the camera (person is farther away) you will need to use the mouse wheel to make the boxes smaller, which will zoom into your target face so you can properly place the landmarks.

Mouse left click - this will lock in the landmarks, which will turn the landmarks a slightly different color.

Enter - pressing Enter will bring you to the next frame you need to manually extract. To save time, you can just hover your cursor over the target face until you are satisfied and press Enter (instead of left-clicking first).

Continue doing this until you have finished all your manual extraction, and the app will re-extract those images. You should now have a pretty accurate faceset that's ready for training.

Training - It is now recommended that you always train the SAE model

Why train on the SAE model? This is the most complex model, and it allows you to fully utilize DeepFaceLab's features and your PC's resources. It also includes ALL other models (H64, H128, DF, VG); you just need to select the right settings when prompted.

Training SAE Models

When first starting the 6) train SAE .bat file, you will be prompted with different configuration options. Below is the order of the settings and their functions. Users have been sharing their hardware specs along with the settings they've tried, which is a good place to start if you don't know what to use:

SUMMARY - First ~8 hours of training (30-80k iterations)

Code:
== Model options:
== |== batch_size : 8
== |== sort_by_yaw : False
== |== random_flip : False
== |== resolution : 128
== |== face_type : f
== |== learn_mask : True
== |== optimizer_mode : 1
== |== archi : df
== |== ae_dims : 512
== |== e_ch_dims : 42
== |== d_ch_dims : 21
== |== remove_gray_border : False
== |== multiscale_decoder : True
== |== pixel_loss : False
== |== face_style_power : 10
== |== bg_style_power : 10
== Running on:
== |== [0 : GeForce GTX 1080 Ti]

Rest of training:


Code:
== Model options:
== |== batch_size : 12
== |== sort_by_yaw : False
== |== random_flip : False
== |== resolution : 128
== |== face_type : f
== |== learn_mask : True
== |== optimizer_mode : 1
== |== archi : df
== |== ae_dims : 512
== |== e_ch_dims : 42
== |== d_ch_dims : 21
== |== remove_gray_border : False
== |== multiscale_decoder : True
== |== pixel_loss : False
== |== face_style_power : 0.1
== |== bg_style_power : 4.0
== Running on:
== |== [0 : GeForce GTX 1080 Ti]
** Note: I no longer recommend using pixel loss due to the high model collapse rate. Only use it if your model does not get better and the loss is not decreasing. Make sure you run backups of your model in case it collapses.

Converting your deepfake:

Now that you have trained your model, and your preview looks good, the next step is to convert or "Swap" your faces.

Run 7) convert SAE

If you have multiple GPUs it will ask you which one to use:

Code:
Running converter.

You have multi GPUs in a system:
[0] : GeForce GTX 1080 Ti
[1] : GeForce GTX 1070
Which GPU idx to choose? ( skip: best GPU ) :

The GPU you select should match the one used to train your model. If you're not sure, check your model folder files:

SAE_data = choose "best GPU"

0_SAE_data = choose "0"
1_SAE_data = choose "1"

Once selected it will show your model summary, and prompt you with some settings:

Code:
Loading model...
Using TensorFlow backend.
===== Model summary =====
== Model name: SAE
==
== Current iteration: 221936
==
== Model options:
== |== batch_size : 16
== |== sort_by_yaw : False
== |== random_flip : False
== |== resolution : 128
== |== face_type : f
== |== learn_mask : True
== |== optimizer_mode : 1
== |== archi : df
== |== ae_dims : 512
== |== e_ch_dims : 42
== |== d_ch_dims : 21
== |== remove_gray_border : False
== |== multiscale_decoder : True
== |== pixel_loss : False
== |== face_style_power : 0.1
== |== bg_style_power : 0.1
== |== ca_weights : False
== |== apply_random_ct : True
== Running on:
== |== [0 : GeForce GTX 1080 Ti]
=========================
Choose mode: (1) overlay, (2) hist match, (3) hist match bw, (4) seamless, (5) raw. Default - 1 :

See post 1 about what these modes mean. I use (1) overlay.

Code:
Mask mode: (1) learned, (2) dst, (3) FAN-prd, (4) FAN-dst , (5) FAN-prd*FAN-dst (6) learned*FAN-prd*FAN-dst (?) help. Default - 1 :

See posts 1 and 2 regarding mask mode. I use (1) learned if there are no obstructions of the face in the video, (4) FAN-dst if there are obstructions and I want the conversion faster, or (6) learned*FAN-prd*FAN-dst if I don't care how long the conversion takes.


Code:
Choose erode mask modifier [-200..200] (skip:0) : 0
Choose blur mask modifier [-200..200] (skip:100) : 0
Choose output face scale modifier [-50..50] (skip:0) : 0

Most of the time I don't have to use these settings. See posts 1 and 2 to understand them.



Code:
Apply color transfer to predicted face? Choose mode ( rct/lct skip:None ) : rct

I like using rct for color transfer to match skin tones.


Code:
Apply super resolution? (y/n ?:help skip:n) : n

I only apply super resolution if the data_dst video is low quality.


Code:
Degrade color power of final image [0..100] (skip:0) : 0
Export png with alpha channel? (y/n skip:n) : n

I then skip the rest.

After inputting all the settings it should run the conversion. This process is slow so just sit tight and be patient. You can preview the images in your "data_dst\merged" folder.

Preview these images and if you're not happy you can stop the conversion process early and restart it with different settings.

Next just run 8) converted to mp4 and a "result.mp4" file should be created in your "workspace" folder.
Good luck, and happy deepfaking!
#4
I'll try to follow this now. Thanks!
#5
When I click the .bat files to extract the destination video nothing happens. There's a command window that pops up and disappears in a blink.

'C:\DeepFaceLab\DeepFaceLab' is not recognized as an internal or external command,
operable program or batch file.
Press any key to continue . . .

ffmpeg version N-89980-ge752da5464 Copyright © 2000-2018 the FFmpeg developers
built with gcc 7.2.0 (GCC)
configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-bzlib --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libmfx --enable-amf --enable-cuda --enable-cuvid --enable-d3d11va --enable-nvenc --enable-dxva2 --enable-avisynth
libavutil 56. 7.100 / 56. 7.100
libavcodec 58. 10.100 / 58. 10.100
libavformat 58. 9.100 / 58. 9.100
libavdevice 58. 1.100 / 58. 1.100
libavfilter 7. 11.101 / 7. 11.101
libswscale 5. 0.101 / 5. 0.101
libswresample 3. 0.101 / 3. 0.101
libpostproc 55. 0.100 / 55. 0.100
C:\DeepFaceLab\DeepFaceLab: Permission denied
Press any key to continue . . .
#6
(10-24-2018, 09:32 AM)miketran Wrote: When I click the .bat files to extract the destination video nothing happens. There's a command window that pops up and disappears in a blink.

'C:\DeepFaceLab\DeepFaceLab' is not recognized as an internal or external command,
operable program or batch file.
[...]
C:\DeepFaceLab\DeepFaceLab: Permission denied
Press any key to continue . . .

hello mike
when you extract the file archive for the install, shorten the folder name.
ex:
C:\DeepFaceLab\DFL\
The error will disappear.




#7
Thanks Marss. But this also came out

'C:\DeepFaceLab\DFL\DeepFaceLab' is not recognized as an internal or external command,
operable program or batch file.
#8
(10-24-2018, 03:16 PM)miketran Wrote: Thanks Marss. But this also came out

'C:\DeepFaceLab\DFL\DeepFaceLab' is not recognized as an internal or external command,
operable program or batch file.

ok, I'll explain better:

rename the original folder "DeepFaceLab Easy" to "DFL" so you have this:
"C:\DeepFaceLab\DFL\_internal" 
"C:\DeepFaceLab\DFL\workspace"
"C:\DeepFaceLab\DFL\*.bat"




#9
OMG! It worked! Thanks a lot Marss! and of course dpfks

Btw, can I use my faceset from FakeApp 1 in DeepFaceLab?
#10
(10-24-2018, 03:33 PM)marss Wrote:
ok, I'll explain better:

rename the original folder "DeepFaceLab Easy" to "DFL" so you have this:
"C:\DeepFaceLab\DFL\_internal"
"C:\DeepFaceLab\DFL\workspace"
"C:\DeepFaceLab\DFL\*.bat"

Thank you for identifying the issue! I'm going to rename the folders so there are no spaces.

(10-24-2018, 05:13 PM)miketran Wrote: OMG! It worked! Thanks a lot Marss! and of course dpfks

Btw, can I use my faceset from FakeApp 1 in DeepFaceLab?

Yes you can. Just copy the image files into the /data_src/aligned folder, then run the extract .bat file.