MrDeepFakes Forums


Having face color / matching issues [with pics]

potshot

DF Vagrant
Verified Video Creator
*specs + training settings posted at bottom of post

Hi again. I seem to have a problem that I thought I had resolved previously. Just when I thought I had it figured out, I get thrown for a loop again.

I've been racking my brain over a face color / matching issue that wasn't giving me problems like this before. I know there can be many reasons for it, or just one out of many. I'd like to throw it to you guys and see if there's a way I can get out of this.

I've seen a couple of threads that deal with this issue, but I haven't been able to find a solution, or I'm unsure how to implement the solutions described in those threads.

Thread #1: https://mrdeepfakes.com/forums/thre...ur-mismatch?highlight="random+color+transfer"
Thread #2: https://mrdeepfakes.com/forums/thread-color-transfer?pid=7469&highlight=overlay+bright#pid7469

I know the SRC faceset I'm using works, at least partially, because I used it on an unreleased session and the face came out more or less fine with the same settings. As you can see below, there's nothing about it that would prevent it from being uploaded.

yRf0bFRh.png


However, for the current session it all looks janky. I don't have confidence in converting the entire thing, which'll probably take half a day for only 8 minutes of footage. Wouldn't want to waste my time if it's going to look bad the entire way.

#1 | #2 | #3 | #4 (left to right)

vsP5jebh.png


So I'd like your help, guys. I'd most prefer finding a way to make option #2 work, because Overlay mode has looked the best all around in my previous sessions.


GTX 1080      -        i7-6700K CPU @ 4.00 GHz      -       32 GB RAM        -        Windows 10, 64-bit
SAE Training
Most success on SAE conversion with Overlay mode, Learned*Fan-prd*Fan-dst, 0 erode, ~100 blur, turning RCT on after training ~100K.

== Model options:
== |== batch_size : 4                                      == |== ae_dims : 512                                            == |== apply_random_ct : false
== |== sort_by_yaw : False                              == |== e_ch_dims : 42                                           == |== clipgrad : false
== |== random_flip : False                              == |== d_ch_dims : 21                                           == Running on:
== |== resolution : 128                                   == |== remove_gray_border : False                          == |== [0 : GeForce GTX 1080]
== |== face_type : f = full                               == |== pixel_loss : False                             
== |== learn_mask : True                                == |== face_style_power : 0
== |== optimizer_mode : 2                               == |== bg_style_power : 0
== |== archi : df
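
For anyone unfamiliar with the erode/blur settings mentioned above: at merge time they just shrink and feather the face mask before the predicted face is composited over the dst frame. A rough OpenCV sketch of the idea (not DFL's exact code, and the parameter names here are made up):

import cv2
import numpy as np

def shape_mask(mask, erode_px=0, blur_px=100):
    """Rough idea of what the merger's erode/blur sliders do to the face mask.

    mask: float32 array in [0, 1], 1 inside the learned face region.
    """
    if erode_px > 0:
        kernel = np.ones((erode_px, erode_px), np.uint8)
        mask = cv2.erode(mask, kernel, iterations=1)   # shrink the mask border inward
    if blur_px > 0:
        k = blur_px | 1                                # Gaussian kernel size must be odd
        mask = cv2.GaussianBlur(mask, (k, k), 0)       # feather the edge for blending
    return np.clip(mask, 0.0, 1.0)

# final frame = shaped_mask * predicted_face + (1 - shaped_mask) * dst_frame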
 

tania01

DF Admirer
Verified Video Creator
try these model options:
== Model options:
== |== batch_size : 12 == |== ae_dims : 512 == |== apply_random_ct : true
== |== sort_by_yaw : False == |== e_ch_dims : 42 == |== clipgrad : true
== |== random_flip : False == |== d_ch_dims : 21 == Running on:
== |== resolution : 128 == |== remove_gray_border : False == |== [0 : GeForce GTX 1080]
== |== face_type : f = full == |== pixel_loss : False
== |== learn_mask : True == |== face_style_power : 0
== |== optimizer_mode : 1 == |== bg_style_power : 0
== |== archi : df

and train to around 200K
does the dst video have scenes with different lighting on the face?
 

potshot

DF Vagrant
Verified Video Creator
tania01 said:
try these model options:
== Model options:
== |== batch_size : 12                                      == |== ae_dims : 512                                            == |== apply_random_ct : true
== |== sort_by_yaw : False                              == |== e_ch_dims : 42                                           == |== clipgrad : true
== |== random_flip : False                              == |== d_ch_dims : 21                                           == Running on:
== |== resolution : 128                                   == |== remove_gray_border : False                          == |== [0 : GeForce GTX 1080]
== |== face_type : f = full                               == |== pixel_loss : False                            
== |== learn_mask : True                                == |== face_style_power : 0
== |== optimizer_mode : 1                               == |== bg_style_power : 0
== |== archi : df

and train to around 200K
does the dst video have scenes with different lighting on the face?

Hi there.

I tried to train using those settings and this happened.

Error: OOM when allocating tensor with shape[12,128,128,126] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[{{node gradients/model_2/leaky_re_lu_20/LeakyRelu/mul_grad/Mul_1-0-0-TransposeNCHWToNHWC-LayoutOptimizer}} = Transpose[T=DT_FLOAT, Tperm=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](gradients/model_2/leaky_re_lu_20/LeakyRelu/mul_grad/Mul_1, PermConstNCHWToNHWC-LayoutOptimizer)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

Traceback (most recent call last):
  File "E:\DeepFake Core Folder\DeepFaceLabCUDA10.1SSE\_internal\DeepFaceLab\mainscripts\Trainer.py", line 107, in trainerThread
    iter, iter_time = model.train_one_iter()
  File "E:\DeepFake Core Folder\DeepFaceLabCUDA10.1SSE\_internal\DeepFaceLab\models\ModelBase.py", line 472, in train_one_iter
    losses = self.onTrainOneIter(sample, self.generator_list)
  File "E:\DeepFake Core Folder\DeepFaceLabCUDA10.1SSE\_internal\DeepFaceLab\models\Model_SAE\Model.py", line 430, in onTrainOneIter
    src_loss, dst_loss, = self.src_dst_train (feed)
  File "E:\DeepFake Core Folder\DeepFaceLabCUDA10.1SSE\_internal\python-3.6.8\lib\site-packages\keras\backend\tensorflow_backend.py", line 2715, in __call__
    return self._call(inputs)
  File "E:\DeepFake Core Folder\DeepFaceLabCUDA10.1SSE\_internal\python-3.6.8\lib\site-packages\keras\backend\tensorflow_backend.py", line 2675, in _call
    fetched = self._callable_fn(*array_vals)
  File "E:\DeepFake Core Folder\DeepFaceLabCUDA10.1SSE\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1439, in __call__
    run_metadata_ptr)
  File "E:\DeepFake Core Folder\DeepFaceLabCUDA10.1SSE\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 528, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[12,128,128,126] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[{{node gradients/model_2/leaky_re_lu_20/LeakyRelu/mul_grad/Mul_1-0-0-TransposeNCHWToNHWC-LayoutOptimizer}} = Transpose[T=DT_FLOAT, Tperm=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](gradients/model_2/leaky_re_lu_20/LeakyRelu/mul_grad/Mul_1, PermConstNCHWToNHWC-LayoutOptimizer)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

Seems even my GPU can't handle some things, damn. The dreaded OOM. As for the DST, the lighting is pretty consistent all around. There may be one scene where the face is particularly bright, but not to any super-ghost level. It's primarily these three:

4SYmyd1h.jpg
ohbOj8Fh.jpg
CE18Q4wh.jpg
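
For rough intuition on why batch 12 fails where batch 4 fits: the activation tensor named in the error grows linearly with batch size. Quick back-of-the-envelope numbers in plain Python (shape and dtype taken from the OOM message above; this counts only that one tensor, so real usage is far higher):

# Size of the single activation named in the OOM error, per batch size.
# Shape from the message: [batch, 128, 128, 126], float32 = 4 bytes/element.
def tensor_mb(batch, h=128, w=128, c=126, bytes_per_elem=4):
    return batch * h * w * c * bytes_per_elem / (1024 ** 2)

for batch in (4, 12):
    print(f"batch {batch:>2}: {tensor_mb(batch):5.1f} MB")
# batch  4:  31.5 MB
# batch 12:  94.5 MB
# Every intermediate activation and its gradient scale the same way, which is
# why a smaller batch, or pushing optimizer state to system RAM with
# optimizer_mode 2/3, can squeeze the model into the GTX 1080's 8 GB.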
 

TMBDF

Moderator | Deepfake Creator | Guide maintainer
Staff member
Moderator
Verified Video Creator
potshot said:
I tried to train using those settings and this happened.

Error: OOM when allocating tensor with shape[12,128,128,126] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc

Seems even my GPU can't handle some things, damn. The dreaded OOM. As for the DST, the lighting is pretty consistent all around. There may be one scene where the face is particularly bright, but not to any super-ghost level.

You ran out of VRAM (OOM = Out Of Memory). Use optimizer mode 2/3 and see if it helps.

Also, you may never get a perfect color match, but first of all disable random color transfer during training and train more (until you can see individual teeth on the faked face, the one on the far right in the preview window, with loss values around 0.2-0.3). Then enable random CT, pixel loss and grad clip, and keep training for another couple thousand iters until the preview looks sharp.

During conversion use RCT or LCT; if neither works, try hist match with the hist match mask enabled, and use a value of 250 or less if the merged output has blown-out whites. Check which masking mode you want to use and also try RCT or LCT with it.

Oh, and when using hist match you may not want to use blur, because it will cause an outline around the mask.

You can also try not using random CT at all during training. I've had a few cases where that actually gave me a better color match, go figure. It's a trial and error process.
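
For context on what RCT actually does at convert time: it's essentially Reinhard-style color transfer, nudging the predicted face's per-channel mean and standard deviation towards the dst face's, usually in LAB space and only inside the face mask. A minimal sketch of the idea with OpenCV/NumPy (not DFL's exact implementation, and it ignores the mask):

import cv2
import numpy as np

def reinhard_color_transfer(src_face, dst_face):
    """Pull src_face's colour statistics towards dst_face's (both uint8 BGR)."""
    src = cv2.cvtColor(src_face, cv2.COLOR_BGR2LAB).astype(np.float32)
    dst = cv2.cvtColor(dst_face, cv2.COLOR_BGR2LAB).astype(np.float32)

    src_mean, src_std = src.mean(axis=(0, 1)), src.std(axis=(0, 1)) + 1e-6
    dst_mean, dst_std = dst.mean(axis=(0, 1)), dst.std(axis=(0, 1))

    # Per channel: centre on zero, rescale to dst's spread, re-centre on dst's mean.
    out = (src - src_mean) * (dst_std / src_std) + dst_mean
    out = np.clip(out, 0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_LAB2BGR)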
 

tania01

DF Admirer
Verified Video Creator
how is a 1080 throwing OOM errors? my 1070 Ti can train a batch of 13 at those settings. he's right, train first without color transfer and to at least 160-170K.
 

dpfks

DF Enthusiast
Staff member
Administrator
Verified Video Creator
It will be hard in that scene because I believe you're getting a whiter than normal face while training with RCT on.

This is because RCT while training has been found faulty when there are colors close to the face in the background of data_dst. See my post about it here: https://mrdeepfakes.com/forums/thre...transfer-rct-during-training?highlight=random

Since you've already trained with RCT on, I don't think you can fix it completely, but try training with RCT off and pixel loss on.

I have a few models where I learned the hard way as well, and I have not been able to fix it. I guess I'll have to retrain them and be more choosy with the data_dst.
 

potshot

DF Vagrant
Verified Video Creator
tutsmybarreh said:
You ran out of VRAM (OOM = Out Of Memory). Use optimizer mode 2/3 and see if it helps.

Also, you may never get a perfect color match, but first of all disable random color transfer during training and train more (until you can see individual teeth on the faked face, the one on the far right in the preview window, with loss values around 0.2-0.3). Then enable random CT, pixel loss and grad clip, and keep training for another couple thousand iters until the preview looks sharp.

During conversion use RCT or LCT; if neither works, try hist match with the hist match mask enabled, and use a value of 250 or less if the merged output has blown-out whites. Check which masking mode you want to use and also try RCT or LCT with it.

Oh, and when using hist match you may not want to use blur, because it will cause an outline around the mask.

You can also try not using random CT at all during training. I've had a few cases where that actually gave me a better color match, go figure. It's a trial and error process.

Hey there. Took all day to respond (work), and currently I've loaded a backup and have it trained up to 183K. Will continue until about 200K. I do see the teeth more clearly now. I'll keep experimenting - not looking for anything perfect, just good enough that someone can't tell unless they really start looking at it (good enough to upload).

iQjuT0wh.jpg

tania01 said:
how is a 1080 throwing OOM errors? my 1070 Ti can train a batch of 13 at those settings. he's right, train first without color transfer and to at least 160-170K.

That's what I've been wanting to know. I used to get OOM on just about everything until I updated drivers and added a missing script (when I started DeepFaceLab for the first time it rendered everything with just the CPU). I'm hoping there's just one more thing I'm missing that's causing these OOMs, because I know the card should be able to handle it.

dpfks said:
It will be hard in that scene because I believe you're getting a whiter than normal face while training with RCT on.

This is because RCT while training has been found faulty when there are colors close to the face in the background of data_dst. See my post about it here: https://mrdeepfakes.com/forums/thre...transfer-rct-during-training?highlight=random

Since you've already trained with RCT on, I don't think you can fix it completely, but try training with RCT off and pixel loss on.

I have a few models where I learned the hard way as well, and I have not been able to fix it. I guess I'll have to retrain them and be more choosy with the data_dst.

Sheeeyit. Sucks to learn about stuff like this well into a project. I have a few backup folders so I can at least go back to those to test it out again. I'll admit that there's a white couch in the first scene (you can see it in the preview window above) that was a nightmare to align the face with. Kept bleeding over, and it most likely ended up in some frames here and there.

I'll see if I can make something work. Thanks to you guys so far; hopefully I'll end up with a result I'm OK with. Renders take a long time, even with just an 8 minute DST.
 

potshot

DF Vagrant
Verified Video Creator
Well, after spending my day on trial and error, I think I might have to throw in the towel. It seems that no combination of pixel loss, applied RCT, and the rest was capable of producing the Overlay result I was looking for. Seamless looked better on certain frames, but it ultimately wasn't steady or consistently good when I rendered out a 10 second sample. Kept jittering and stuff.

I thought I had picked a good, 1080p DST that was well-lit and it turns out I kinda screwed myself. That's like a whole 2 weeks of alignments, training, and editing gone down the shitter. You can understand my frustration.

Somehow the test session I did before turned out fine. It actually looks decent. And it was using the same pornstar! I didn't even reuse the trained model, I started it from scratch to make sure I wasn't getting results mixed in.

GH9kPKwh.png
(passable for a 120K test; my 3rd ever deepfake)



Meanwhile, for this session, after I hit 200K, this happened less than 2,000 iterations later while pixel loss / gradient clip was on.

5kzdbfnh.jpg


Is that a model collapse? Did I pop my cherry with my first collapse?

Kinda makes me agitated how I can't even get that right when I see guys popping out deepfakes like it's nothing. Or newbies uploading whole scenes when I can't even scrape 3 minutes of footage.

For the future, is there anything that I can do to avoid hitting walls like this? I'll keep this thread in mind (https://mrdeepfakes.com/forums/thre...transfer-rct-during-training?highlight=random), but it's not like the other scenes I uploaded didn't have background things like that, either.
 

dpfks

DF Enthusiast
Staff member
Administrator
Verified Video Creator
Yes, your model collapsed. I always recommend running the auto-backup feature. You can also turn on the gradient clipping feature to reduce the chance of a model collapse.

It takes time and patience. I made 100s of deepfakes before I got comfortable and happy with my own workflow. The first few deepfakes from new users now are much better than what we used to have a year or two ago.
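
For reference, gradient clipping (the clipgrad option in the model settings) simply caps how large any single weight update can be, which is what makes one bad batch far less likely to blow up the model. In plain Keras the same idea is an optimizer-level clipnorm; the learning rate and betas below are illustrative, not necessarily what DFL uses:

from keras.optimizers import Adam

# Illustrative only: DFL's clipgrad caps gradient magnitude before the weight
# update; in stock Keras the equivalent knob is clipnorm on the optimizer.
opt_plain   = Adam(lr=5e-5, beta_1=0.5, beta_2=0.999)                 # no clipping
opt_clipped = Adam(lr=5e-5, beta_1=0.5, beta_2=0.999, clipnorm=1.0)   # clipped updates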
 

tania01

DF Admirer
Verified Video Creator
yep. unless you backed up the model, it's gone. it's always best to have gradient clipping on and have multiple backups. if you want, upload the whole workspace to your google drive and share the link. i'll see if the scene can be salvaged.
 

potshot

DF Vagrant
Verified Video Creator
dpfks said:
Yes, your model collapsed. I always recommend running the auto-backup feature. You can also turn on the gradient clipping feature to reduce the chance of a model collapse.

It takes time and patience. I made 100s of deepfakes before I got comfortable and happy with my own workflow. The first few deepfakes from new users now are much better than what we used to have a year or two ago.

tania01 said:
yep. unless you backed up the model, it's gone. it's always best to have gradient clipping on and have multiple backups. if you want, upload the whole workspace to your google drive and share the link. i'll see if the scene can be salvaged.

Yeah, not only do I have auto-backup always on, I also back up the actual folders at certain points just in case. So the model collapse above didn't stop me; it just set me back the 10 minutes it took to delete the old files, copy in a fresh set from the latest iteration, and continue.
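
That kind of manual folder snapshot is easy to script, for what it's worth. A minimal sketch with hypothetical paths, copying the model folder under a tagged name before any risky change like enabling pixel loss:

import shutil
import time
from pathlib import Path

WORKSPACE = Path(r"E:\DeepFake Core Folder\workspace")   # hypothetical location
BACKUPS = WORKSPACE / "model_backups"

def snapshot_model(tag):
    """Copy the whole model folder to a timestamped backup directory."""
    dest = BACKUPS / f"model_{tag}_{time.strftime('%Y%m%d_%H%M%S')}"
    shutil.copytree(WORKSPACE / "model", dest)
    return dest

# e.g. snapshot_model("200k_pre_pixel_loss") right before flipping pixel loss / RCT on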

I didn't want to get into deepfakes when they first blew up about 1.5 years ago, specifically because the quality wasn't at the level it is now. This won't stop me from continuing, but it does piss me off. At the end of the day, at least I have a good Dormer faceset out of it.

The entire thing is 68 gigs, which is mostly because of the multiple copied folders. I don't mind uploading what I have, but it's far too large. For specific folders, would just the 200K regular-trained data_dst / data_src + model folders work? Or are you talking about the original folders that have everything complete except that they haven't been trained at all yet?

The data_dst video is in the folder too, it just wasn't cropped into the screenshot. In the pic below, the "data_dst", "data_src" and "model" folders are from after all the 200K training, pixel loss, applied RCT during training, etc.

Whereas the "200K" folder is just trained up to 200K iterations with regular settings. The "Original Copy" has all steps completed before any training.

6DCL4Awh.png
 ​
 

tania01

DF Admirer
Verified Video Creator
potshot said:
The entire thing is 68 gigs, which is mostly because of the multiple copied folders. I don't mind uploading what I have, but it's far too large. For specific folders, would just the 200K regular-trained data_dst / data_src + model folders work? Or are you talking about the original folders that have everything complete except that they haven't been trained at all yet?

6DCL4Awh.png

just the dst and src folders. i'll train the model from scratch. dst folder with aligned, debug and raw frames. how big are those two folders?
looking at the folder list, i think both "original copy" folders would do
 

potshot

DF Vagrant
Verified Video Creator
Ah man, thanks a bunch for giving this a try. The two folders plus the dst video are about 5.5 gigs total. I can upload it to Google Drive and then PM you the link.

As for everyone else, I also want to give my thanks and appreciation for helping me try to see this through. It's not easy on your end to give a response and expect someone to be able to work that magic, but you gave me some insight.
 

tania01

DF Admirer
Verified Video Creator
waiting for the link. it'll take a couple days to train and convert. let's see if we can salvage the hours spent on extraction
 

potshot

DF Vagrant
Verified Video Creator
tania01 said:
waiting for the link. it'll take a couple days to train and convert. let's see if we can salvage the hours spent on extraction

PM sent.
 
I am not an expert, so maybe @dpfks or some of the other guys who know more than I do can weigh in, but I wonder if the length of your video is the reason the color is off. How many dst and src images are you training with? For an 8 minute long video I would think it would have to be over 5K for just the DST. That's a lot of data for the network to handle. In my experience, the longer the video, the lower the quality has been.

To get it exactly right, I think it would take some post-processing in After Effects or something like that. I wish the export with alpha channel worked more consistently, because that would help a lot. I also haven't had any luck training with RCT on, but I have used RCT quite a bit when converting.

Again, I'm not sure if this is the cause, and if it's not I would like to try some longer videos myself, but I haven't been able to get the result I want. Have you tried the degrade color power of final image setting during convert? Sometimes lowering that helps as well. I hope you get it figured out, and if you do, please share what you changed. GL! GL!
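
For scale, the frame count from an 8 minute dst is easy to estimate (simple arithmetic, assuming a typical 30 fps source):

minutes, fps = 8, 30
print(minutes * 60 * fps)   # 14400 dst frames before any cleanup or pruning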
 

tania01

DF Admirer
Verified Video Creator
potshot said:
tania01 said:
waiting for the link. it'll take a couple days to train and convert. let's see if we can salvage the hours spent on extraction

PM sent.

there are 16K dst frames yet you have 26K aligned images.... why? and encoding at 20K bitrate won't increase the quality of the video when the original video is usually at >2K bitrate
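
One quick way to see where the extra ~10K aligned images come from is to count how many aligned faces each dst frame produced. A rough sketch, assuming the usual DFL naming where aligned files carry the source frame name plus a _0/_1 face-index suffix (adjust the path to your workspace):

import collections
import pathlib

aligned_dir = pathlib.Path(r"workspace/data_dst/aligned")  # hypothetical path

# Names look like 01234_0.jpg, 01234_1.jpg ...; the part before the last
# underscore is the source frame, the suffix is the face index in that frame.
faces_per_frame = collections.Counter(
    p.stem.rsplit("_", 1)[0] for p in aligned_dir.glob("*.jpg")
)

multi = {frame: n for frame, n in faces_per_frame.items() if n > 1}
print(f"{len(faces_per_frame)} frames produced faces, "
      f"{len(multi)} of them produced more than one")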
 

potshot

DF Vagrant
Verified Video Creator
tania01 said:
there are 16K dst frames yet you have 26K aligned images.... why? and encoding at 20K bitrate won't increase the quality of the video when the original video is usually at >2K bitrate

Now that I look at it, you're right. I'm not exactly sure how the aligned images ballooned up to that number. The new session I'm on has ~10,700 aligned images and 15,000+ dst frames, so that looks correct.

As for the encoding, that might be because I edited the original video in Adobe After Effects and exported it as a specific MP4 for convenience and file size, which bumped the rate up. The original video's bitrate is 12,416 kbps. I edited the video in the new session the same way and its bitrate didn't change.
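
If you want to keep the re-encode honest, you can read the source's video bitrate and reuse it instead of a blanket 20,000 kbps. A minimal sketch using ffprobe/ffmpeg via subprocess (hypothetical filenames, and assuming both tools are on PATH):

import subprocess

SRC = "data_dst_original.mp4"    # hypothetical input
OUT = "data_dst_reencoded.mp4"   # hypothetical output

# Read the source video stream's bitrate (bits per second) with ffprobe...
bitrate = subprocess.check_output([
    "ffprobe", "-v", "error", "-select_streams", "v:0",
    "-show_entries", "stream=bit_rate",
    "-of", "default=noprint_wrappers=1:nokey=1", SRC,
]).decode().strip()

# ...and re-encode at roughly that rate, since pushing the bitrate above the
# source only adds file size, not quality.
subprocess.run([
    "ffmpeg", "-i", SRC, "-c:v", "libx264", "-b:v", bitrate,
    "-c:a", "copy", OUT,
], check=True)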
 