I'm currently testing the effect of training on only the data_dst/aligned images that need the most work, by moving the more "finished" ones out to a temporary folder.
Model is res 256 / batch size 6 at 100k iterations before testing.
My worry is that I am wasting iterations on images that I would consider finished, while other frames need much more work.
The idea is that I can spend upfront time training these more difficult images. I would add the others back in once they are closer in quality, then resume training and conversion.
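For anyone who'd rather script the shuffle than drag files by hand, here's a rough sketch of the move-and-restore step. The holdout folder name, function names, and example filenames are placeholders I made up, not anything DFL provides; you'd still pick the "finished" frames yourself:

```python
# Minimal sketch of the hold-out workflow described above.
# Paths and the "finished" list are assumptions -- adjust to your workspace.
from pathlib import Path
import shutil

ALIGNED = Path("workspace/data_dst/aligned")          # DFL's dst faceset
HOLDOUT = Path("workspace/data_dst/aligned_holdout")  # hypothetical parking folder

def hold_out(finished):
    """Move frames you consider finished out of the training set."""
    HOLDOUT.mkdir(parents=True, exist_ok=True)
    for name in finished:
        src = ALIGNED / name
        if src.exists():
            shutil.move(str(src), str(HOLDOUT / name))

def restore_all():
    """Put the held-out frames back before the final training pass / merge."""
    for src in HOLDOUT.glob("*"):
        shutil.move(str(src), str(ALIGNED / src.name))

# Example: park two frames, train on the rest, then restore everything.
# hold_out(["00012_0.jpg", "00013_0.jpg"])
# ... run training ...
# restore_all()
```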
So I have a few questions:
- Will this work as a shortcut to get all of my merged images closer to a baseline quality?
- Will this cause my model to collapse or any other problems?
- Will this save me iterations?
- Does DeepFaceLab perform a similar operation?
Looking for input from the more experienced fakers out there. TIA