MrDeepFakes Forums


Separately training the most difficult images?

deepfakery

DF Pleb
I'm currently testing out the effect of training only the data_dst/aligned images that need the most work by moving the more "finished" ones to a temporary folder.
Model is res 256 / batch size 6 at 100k iterations before testing.

My worry is that I am wasting iterations on images that I would consider finished, while other frames need much more work.

The idea is that I can spend upfront time training these more difficult images. I would add the others back in once they are closer in quality, then resume training and conversion.
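
In case anyone wants to replicate this, here's a rough sketch of the shuffle step I'm describing, assuming you keep a plain-text list of the frames you consider finished (the list file and the aligned_finished folder name are just my own convention, nothing DeepFaceLab provides):

```python
import shutil
from pathlib import Path

# Standard DeepFaceLab workspace layout; adjust paths to your setup.
ALIGNED = Path("workspace/data_dst/aligned")
PARKED = Path("workspace/data_dst/aligned_finished")  # temporary holding folder (my own naming)

def park_finished(list_file: str) -> None:
    """Move frames listed as 'finished' out of the training set."""
    PARKED.mkdir(parents=True, exist_ok=True)
    with open(list_file) as f:
        finished = {line.strip() for line in f if line.strip()}
    for img in ALIGNED.iterdir():
        if img.name in finished:
            shutil.move(str(img), str(PARKED / img.name))

def restore_finished() -> None:
    """Put the parked frames back before resuming full training."""
    for img in PARKED.iterdir():
        shutil.move(str(img), str(ALIGNED / img.name))

# park_finished("finished_frames.txt")  # before the focused run
# restore_finished()                    # before resuming full training
```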

So I have a few questions:

- Will this work as a shortcut to get all of my merged images closer to a baseline quality?

- Will this cause my model to collapse or any other problems?

- Will this save me iterations?

- Does DeepFaceLab perform a similar operation?

Looking for input from the more experienced fakers out there. TIA
 

dpfks

DF Enthusiast
Staff member
Administrator
Verified Video Creator
This should theoretically save you time. Users used to do this back when we were still using the old faceswap app. Some will intentionally work on a single scene with odd angles and focus training on that subset of angles.

Training with only a small faceset runs the risk of "over-optimizing" your model to the point where it may collapse. I would suggest you run backups.
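
If you do try it, a timestamped copy of the model folder before each experiment means a collapse costs you nothing. A minimal sketch, assuming the standard workspace/model layout (the backup destination is arbitrary):

```python
import shutil
from datetime import datetime
from pathlib import Path

MODEL_DIR = Path("workspace/model")            # standard DFL model folder
BACKUP_ROOT = Path("workspace/model_backups")  # destination is up to you

def backup_model() -> Path:
    """Snapshot all model files before a risky training run."""
    BACKUP_ROOT.mkdir(exist_ok=True)
    dest = BACKUP_ROOT / datetime.now().strftime("%Y%m%d_%H%M%S")
    shutil.copytree(MODEL_DIR, dest)
    return dest

# backup_model()  # run this before starting the focused session
```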

I'd be very interested in seeing your results though.
 

deepfakery

DF Pleb
I ran this for a few thousand iterations and found that the small dst faceset does indeed train quicker; I was able to fix some of the more problematic images (weird eyes, oblique face angles).

However, when I added the rest of the dst faceset back in, the yellow graph line showed a complete reversal, indicating there would be some "recovery" time. I paused the experiment so I can keep working on this video, but I think it's worth going back to later.

[attached image: training loss graph]

Green arrows point to when I removed the dst images and then put them back
 

fakerdaker

DF Vagrant
deepfakery said:
However, when I added the rest of the dst faceset back in, the yellow graph line showed a complete reversal, indicating there would be some "recovery" time.
It should bounce back rather quickly since you only ran this for a few thousand iterations out of the 100k total on the model. It might even improve faster going forward, since the focused run fixed some of the outliers that were bringing up the average loss.
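
To put rough numbers on the outlier point: the reported loss is an average, so a handful of bad frames can dominate it. A toy example with made-up loss values:

```python
# 95 well-trained frames plus 5 problem frames (values are made up)
losses = [0.02] * 95 + [0.15] * 5
print(sum(losses) / len(losses))  # 0.0265 -- mean pulled up by 5 outliers

# After focused training brings the outliers close to the rest
fixed = [0.02] * 95 + [0.03] * 5
print(sum(fixed) / len(fixed))    # 0.0205
```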
 