Hi,
While I find the time to write a more detailed guide, here are the basic steps for training your own FANSEG model and getting good results with obstructions when merging, especially in the final parts of your videos ("if you know what I mean", lol).
First, note that to make the process a little easier I use a modified version of the .bat file that runs training; it points at a separate FANSEG folder, so training this model doesn't interfere with the "workspace" folder.
Create a .bat file called "6)Train FANSEG 2.0" (or modify it if you already have one) and paste this code into it:
(This file must be in the root folder, at the same level as the other .bat files.)
Code:
@echo off
rem Train the FANSeg mask model on the separate FANSEG folder
rem so it doesn't touch the regular "workspace" training data.
call _internal\setenv.bat

"%PYTHON_EXECUTABLE%" "%DFL_ROOT%\main.py" train ^
    --training-data-src-dir "FANSEG\data_masked\aligned" ^
    --training-data-dst-dir "FANSEG\data_masked\aligned" ^
    --model-dir "FANSEG\model" ^
    --model FANSeg

pause
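If you prefer to script the folder setup, here is a short Python helper (my own sketch, not part of DeepFaceLab; run it from the DFL root) that creates the directories the .bat file above points at:

```python
from pathlib import Path

# Create the folders the training .bat expects, relative to the
# DeepFaceLab root (next to "workspace" and "_internal").
for sub in ("data_masked/aligned", "model"):
    Path("FANSEG", sub).mkdir(parents=True, exist_ok=True)
```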
If you don't want to start your FANSEG model from scratch, you can use mine.
It contains the original/default FANSEG plus training on 550 additional masks (with "NSFW material"):
https://drive.google.com/open?id=13dUCKyNA5nPppu1_GWZVgwf-vavc1Sm5
Steps to follow:
- Create a directory called "FANSEG" at the same level as "workspace" (or _internal).
- OPTIONAL: Copy my model there if you want to continue training from it.
- Prepare your "data_dst.mp4" video as usual: extract the frames and align the faces.
- Use the "Mask Editor" on the final frames (or wherever applicable) to modify the masks and "remove" parts such as the inside of the mouth, tongue, co__, hands, cu__, etc. Save the changes with "e".
- Images with modified masks are moved to "aligned_confirmed", so copy them (don't move them) back to "aligned" so that all the images are there again.
- Train your SAEHD model (workspace directory) as you normally would until the result is good enough.
- Copy the images from ".../workspace/data_dst/aligned_confirmed" to ".../FANSEG/data_masked/aligned" (the path the .bat file trains from).
- Run the modified .bat file ("6)Train FANSEG 2.0.bat") to train FANSEG.
- Save the model once you see the masks have been learned (it shouldn't take more than 1000-2000 iterations).
- Copy your FANSEG model into ".../_internal/DeepFaceLab/facelib" and rename it to "FANSeg_256_full_face.npy" (back up the original first, just in case).
- Merge with FAN-X and check the result (if it's not satisfactory, train your FANSEG with some additional masks).
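The copying and renaming in the steps above can also be scripted. This is a minimal Python sketch (my own, not part of DeepFaceLab; the directory layout and the .jpg extension are assumptions based on a default install, so adjust the paths to yours):

```python
import shutil
from pathlib import Path

def copy_confirmed_masks(dst_dir: Path, fanseg_dir: Path) -> None:
    """Copy (not move) mask-edited faces back into 'aligned' and into
    the FANSEG training folder, as described in the steps above."""
    confirmed = dst_dir / "aligned_confirmed"
    targets = (dst_dir / "aligned", fanseg_dir / "data_masked" / "aligned")
    for target in targets:
        target.mkdir(parents=True, exist_ok=True)
        # Assumes the aligned faces are .jpg files (the DFL default).
        for img in confirmed.glob("*.jpg"):
            shutil.copy2(img, target / img.name)

def install_fanseg_model(trained: Path, facelib: Path) -> None:
    """Back up the stock weights, then install the newly trained model
    under the filename the merger looks for."""
    dest = facelib / "FANSeg_256_full_face.npy"
    if dest.exists():
        # Keep the original weights so you can restore them later.
        shutil.copy2(dest, dest.with_name(dest.name + ".bak"))
    shutil.copy2(trained, dest)
```

Run `copy_confirmed_masks(...)` after editing masks and `install_fanseg_model(...)` once training is done; both only copy files, so nothing in your workspace is lost.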
I hope this makes the process easier and helps you understand it better.
PS: If you want to see how the masks were modified for FANSEG training, here are some examples:
https://drive.google.com/open?id=1Hw4Kmbi66DdYF7mS9O6YywFXZGMlaK7A
PS2: As soon as I have some time I'll extend the guide with more detail and some images (though I don't know whether this forum section allows NSFW material).
Enjoy your fakes!