lackeyproton
DF Vagrant
Hi @iperov, thanks for pushing hard for SAE in your comments.
I initially used H128 and then just kept using it because I didn't want to spend more cycles training a brand new model. When I finally got around to it, I was blown away by how much better SAE was - I totally understand why you considered removing the other models as options. It's been so good that I am thinking of writing an SAE guide in the Guides section when I get a chance.
Had a couple of questions for you:
1) Is there a way to tell the extractor to just use the existing "aligned_debug" images? I was partway through a delete/manual fix of approx. 5000 images when the "aligned" folder got corrupted and I had to delete its contents.
I made a copy of my "aligned_debug" folder, then tried various commands and options on the extractor to get it to use the existing images, but it always seems to delete the "aligned_debug" directory and re-scan the faces on its own every time.
I tried running the full-manual extraction, copying my "aligned_debug" images back in once the program had begun the manual extraction process, and then skipping all the images to see if it would use my manual alignments. That didn't work either.
Would changing the python code be the only way to preserve all the alignment work I've done so far? I write software so if it's a matter of altering a bit of code I can do that.
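For anyone else in this spot: until there's a built-in way, snapshotting the folder before each extractor run at least protects the manual work. A minimal sketch (the workspace layout here is just an assumption based on my setup; adjust the paths to yours):

```python
# Snapshot the aligned_debug folder before running the extractor, so manual
# alignment work survives if the extractor deletes the directory.
# NOTE: the "workspace/data_src/aligned_debug" layout is an assumption --
# point `workspace` at wherever your folders actually live.
import shutil
from datetime import datetime
from pathlib import Path

def backup_aligned_debug(workspace: str = "workspace") -> Path:
    src = Path(workspace) / "data_src" / "aligned_debug"
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    # Timestamped copy next to the original, e.g. aligned_debug_backup_20240101-120000
    dst = src.with_name(f"aligned_debug_backup_{stamp}")
    shutil.copytree(src, dst)
    return dst
```

It's just a timestamped `copytree`, but run before every extraction it means a corrupted or wiped folder only costs you one session instead of 5000 images of work.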
2) In the SAE model, what does the pixel loss flag actually do? I know that you recommend turning it on when the DST face is obstructed, like your Cage / Magneto example. Does the same apply to a blowjob scene, which causes random obstructions around the face? Is it better to train 15-20K epochs without pixel loss and then turn pixel loss on for that situation, or would you recommend it just stay off for the whole training?
Thanks for any questions you answer. I'll be sure to collect the information into a future guide.