MrDeepFakes Forums

Amanda Tapping takes a bath and masturbates

zipperguy

DF Vagrant
Verified Video Creator
I posted my first Deep Fake (made using DeepFaceLab on my home PC with a GTX 1080 GPU) at https://mrdeepfakes.com/video/3879/amanda-tapping-takes-a-bath-and-masturbates and wanted to share some of my experience as a new faker and hopefully get some constructive feedback and tips on how to improve.

I found a suitable porn star by using Pornhub's filters to narrow down the porn stars to those that were generally close to Amanda Tapping. I chose Anna Joy, because her overall look matched the best.

For source images, I used an episode of Stargate SG-1 called "Ripple Effect". In this episode, Amanda Tapping's character meets with other versions of herself from alternate universes, so there were lots of good shots of her, from several angles. I also added some images I found from an image search that I thought would be helpful (e.g. shots of Amanda laughing, and other poses and angles you don't get in TV shows).
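In case it's useful to anyone doing the same thing, here's roughly how you can pull frames from a clip yourself with OpenCV before extraction. The filename, output folder, and skip interval are placeholders; DeepFaceLab's own extraction scripts do this job too.

```python
# Rough sketch: dump every Nth frame of a source clip to JPEGs with OpenCV.
import os

import cv2

SRC_VIDEO = "ripple_effect.mp4"  # hypothetical filename for the episode
OUT_DIR = "src_frames"
EVERY_N = 3                      # keep 1 frame in 3 to cut near-duplicates early

os.makedirs(OUT_DIR, exist_ok=True)
cap = cv2.VideoCapture(SRC_VIDEO)
idx = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if idx % EVERY_N == 0:
        cv2.imwrite(os.path.join(OUT_DIR, f"{idx:06d}.jpg"), frame)
        saved += 1
    idx += 1
cap.release()
print(f"saved {saved} of {idx} frames")
```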

One thing that concerned me was that the TV show was shot with dramatic lighting that is soft and has an orange or sometimes bluish tint. The porn video, on the other hand, used strong, even white lighting to make everything easy to see, and it had a sharper, clearer look than the TV show. That makes sense, since they are shot for completely different purposes, but I was afraid it would make the face matching worse. I was pleased that it didn't turn out to be a big issue, though you can see in some frames that Amanda's face has a slight blue tint that doesn't blend well with the destination footage.

After extracting the images, I used a tool called "Visual Similarity Duplicate Images Finder" to find and delete pictures that were extremely similar to each other. There's also a free tool called VisiPics that does the same thing, but I've used VSDIF for many years and like it. After I removed the duplicates, non-faces, etc., I ended up with a face set of 5387 images for Amanda and 2786 for Anna. DeepFaceLab's features for sorting by histogram and blur were extremely helpful in pruning down the face sets.
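If you'd rather script the duplicate pruning than use a GUI tool, a perceptual-hash pass does roughly the same job. This is just a sketch (needs pillow and imagehash installed; the folder name and threshold are placeholders, nothing DFL-specific):

```python
# Rough sketch of near-duplicate pruning with perceptual hashes,
# as a scriptable alternative to VSDIF/VisiPics.
from pathlib import Path

from PIL import Image
import imagehash

FACE_DIR = Path("extracted_faces")  # hypothetical folder of face images
THRESHOLD = 6                       # max Hamming distance to call two images "the same shot"

kept = []
for path in sorted(FACE_DIR.glob("*.jpg")):
    h = imagehash.phash(Image.open(path))
    # compare against everything already kept; O(n^2), but it
    # short-circuits and is workable for a few thousand faces
    if any(h - kh <= THRESHOLD for kh in kept):
        path.unlink()  # delete the near-duplicate
    else:
        kept.append(h)
print(f"kept {len(kept)} images")
```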

I trained for 70,000 iterations using an SAE model with default settings. I intended to run another few thousand iterations with pixel loss on to try to get a little more detail, but the model quickly collapsed, so I ended up using the 70,000-iteration model.

When I converted the SAE model, I found that some destination frames got skipped because I had removed their aligned faces while pruning the duplicate/unsuitable images. I re-extracted the faces from the original video, and that fixed the problem, but I'm sure there's a better way that I don't know about.
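For anyone hitting the same thing, here's a rough way to list which destination frames lost their aligned face, so you could re-extract just those instead of the whole video. It assumes a DFL-style layout where aligned faces are named "<frame stem>_<face index>.jpg"; adjust the paths and naming to your setup.

```python
# Rough sketch: list destination frames with no surviving aligned face.
from pathlib import Path

DST_FRAMES = Path("data_dst")        # hypothetical extracted destination frames
ALIGNED = Path("data_dst/aligned")   # hypothetical aligned-face folder

aligned_stems = {p.stem.rsplit("_", 1)[0] for p in ALIGNED.glob("*.jpg")}
frames = sorted(list(DST_FRAMES.glob("*.png")) + list(DST_FRAMES.glob("*.jpg")))
missing = [p.name for p in frames if p.stem not in aligned_stems]

print(f"{len(missing)} frames have no aligned face")
for name in missing[:20]:
    print(name)
```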

Overall, I'm pleased with the results I got, but I noticed several shortcomings that could be improved. I welcome any tips for making things better.

- The face is overly smooth, compared to the rest of the video. I'm not sure if this is due to sampling/training issues or because of the quality differences between the source and the destination.

- The eyes are often too white and are not quite looking in the right directions.

- There is a bluish tint in some of the faces, particularly around the edges and sides of the face. I assume this comes from the lighting of the source images.

- Face matching from 1:01 - 1:10 is not good. I'm not sure if this is because of the obstruction of the shoulder (I used FAN-dst), the angle of the shot, or because the face is in shadow. Maybe it had a hard time finding a matching angle in the source face set?

- Speaking of problems with matching angles, some of the worst shots are the "up the nostril" shots, where the model is looking up with her head back. This is a common shot in porn videos, but it is hard to find celebs in this pose, so the matching and interpolation are poor. Also, in this pose the facial features are more obscured and harder to match. What techniques do people use to get around this? (A rough pose-coverage check is sketched after this list.)

- I tried using the same model for a second video, but it didn't turn out well at all. I suspect this is because in the second video the model is wearing a lot of makeup, eye shadow, fake eyelashes, etc., so it was harder to match the faces.
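On the pose-coverage question above: one partial answer is to check whether the source set even contains those "head back" angles before blaming the model. Here's a rough sketch that histograms head pitch using dlib's 68-point landmarks plus OpenCV's solvePnP with a generic 3D head approximation. The landmark model file is a separate download and the folder name is a placeholder; faces the detector misses (often the extreme poses!) get skipped, so treat the result as a rough census, not an exact count.

```python
# Rough sketch: histogram the head pitch of a face set to see whether
# "head back, looking up" poses exist in it at all.
from pathlib import Path

import cv2
import dlib
import numpy as np

FACE_DIR = Path("src_faces")                         # hypothetical source face folder
PREDICTOR = "shape_predictor_68_face_landmarks.dat"  # dlib's standard landmark model

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(PREDICTOR)

# generic 3D positions of: nose tip, chin, eye corners, mouth corners
MODEL_3D = np.array([
    (0.0, 0.0, 0.0), (0.0, -330.0, -65.0),
    (-225.0, 170.0, -135.0), (225.0, 170.0, -135.0),
    (-150.0, -150.0, -125.0), (150.0, -150.0, -125.0),
], dtype=np.float64)
LM_IDX = [30, 8, 36, 45, 48, 54]  # matching dlib 68-point landmark indices

def pitch_deg(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    pts2d = np.array([(shape.part(i).x, shape.part(i).y) for i in LM_IDX],
                     dtype=np.float64)
    h, w = img.shape[:2]
    cam = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=np.float64)
    ok, rvec, _ = cv2.solvePnP(MODEL_3D, pts2d, cam, None)
    if not ok:
        return None
    rmat, _ = cv2.Rodrigues(rvec)
    angles, *_ = cv2.RQDecomp3x3(rmat)  # Euler angles in degrees
    return angles[0]  # pitch; sign convention follows the 3D model above

buckets = {}
for p in FACE_DIR.glob("*.jpg"):
    img = cv2.imread(str(p))
    d = pitch_deg(img) if img is not None else None
    if d is not None:
        b = int(d // 10) * 10
        buckets[b] = buckets.get(b, 0) + 1
for b in sorted(buckets):
    print(f"pitch {b:+4d}..{b + 10:+4d} deg: {buckets[b]}")
```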

All in all, I'm happy with the way this turned out as a first project, and I learned a lot.
 

LCC

DF Pleb
One thing that concerned me was that the TV show was shot with dramatic lighting that is soft and has an orange or sometimes bluish tint.

(Disclaimer, I'm new to this so take what I'm saying with a grain of salt!)

What I've been doing for some sources is putting the clip into a video editor, then colour correcting it to make it more neutral or closer to the dst. You can also balance out the lighting to take some of the edge off (the easiest way is to bring down the contrast). Check out YouTube tutorials for whatever video editor you're using.
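If you'd rather script it than eyeball it in an editor, even something as simple as a gray-world white-balance pass will push an orange/blue cast toward neutral. Rough sketch below; the folder name is a placeholder, and this is just one simple approach, not what any particular editor does internally.

```python
# Rough sketch: gray-world white balance over a folder of extracted frames.
from pathlib import Path

import cv2
import numpy as np

FRAME_DIR = Path("src_frames")  # hypothetical folder of extracted frames

for path in FRAME_DIR.glob("*.jpg"):
    img = cv2.imread(str(path)).astype(np.float64)
    means = img.reshape(-1, 3).mean(axis=0)  # per-channel B, G, R means
    balanced = img * (means.mean() / means)  # scale each channel toward the common mean
    cv2.imwrite(str(path), np.clip(balanced, 0, 255).astype(np.uint8))
```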

Finally, a good mix of source pics from lots of different videos/photos will help balance out lighting and colour too.
 

dpfks

DF Enthusiast
Staff member
Administrator
Verified Video Creator
The overly smooth mask look will always be there. It might improve slightly with higher dims and longer training, but I believe it will always be there.

Eyes are a known issue with deepfakes, especially the gaze direction.

Any skin color differences can be reduced by training with RCT for 30-60 minutes prior to conversion.
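For context, RCT is Reinhard-style color transfer: matching the per-channel mean/std of the source face to the destination's in LAB space. A rough sketch of the idea on a single frame pair (DFL applies this to training samples when the option is enabled; the filenames here are placeholders):

```python
# Rough sketch of Reinhard-style color transfer (RCT) in LAB space.
import cv2
import numpy as np

def reinhard_transfer(src_bgr, dst_bgr):
    src = cv2.cvtColor(src_bgr, cv2.COLOR_BGR2LAB).astype(np.float64)
    dst = cv2.cvtColor(dst_bgr, cv2.COLOR_BGR2LAB).astype(np.float64)
    s_mean, s_std = src.mean(axis=(0, 1)), src.std(axis=(0, 1))
    d_mean, d_std = dst.mean(axis=(0, 1)), dst.std(axis=(0, 1))
    # shift/scale each LAB channel of src to match dst statistics
    out = (src - s_mean) / np.maximum(s_std, 1e-6) * d_std + d_mean
    return cv2.cvtColor(np.clip(out, 0, 255).astype(np.uint8), cv2.COLOR_LAB2BGR)

result = reinhard_transfer(cv2.imread("src_face.jpg"), cv2.imread("dst_face.jpg"))
cv2.imwrite("src_face_rct.jpg", result)
```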

Face matching - some angles are harder than others. You need to make sure the landmarks are matched; if not, you should align them manually.
 