Al_Bundy
DF Vagrant
I'm not entirely new to deepfaking, but I still lack understanding of some things here and there. As I understand it, SAE is the best model to use these days (maybe some other models have their benefits in edge cases).
I have a BJ scene with a frontal face, so the sausage is blocking the lower lip and chin area (and below).
When I train with SAE and use the default settings during conversion, I get a result where the src face overlaps the area where the sausage was. So no magic seems to happen there, and the obstruction (I guess this counts as an obstruction) is not preserved in the resulting image.
I'm not sure how to solve this, and reading the forum made me even more confused. Should I use the advanced mask editor and mask the sausage out in each frame? Then, after doing that, train the model again for a long time so it learns both the face and the masked-out area, and then convert using mode "overlay" (or "seamless"?) with mask mode "fan-dst"?
What I tried: I trained for 30k iterations without masking out the sausage, then masked it out in about 30 frames. After that I ran maybe 1k more iterations (only) and tried the SAE debug convert with "FAN-dst". The result is that the sausage doesn't show completely — parts of the resulting (src) face blend into it. Maybe I haven't trained it enough after masking it out, or do I need to mask out a larger area?
Hope someone can push me in the right direction.