MrDeepFakes Forums


Face / Background style power?

After more reading around I see others starting with 10 / 10 and then lowering to 0.1 / 4 or something. What is the logic behind doing that? I'm trying to understand more, but testing things takes a VERY long time :p
 

avalentino93

DF Admirer
TheMadFaker said:
After more reading around I see others starting with 10 / 10 and then lowering to 0.1 / 4 or something. What is the logic behind doing that? I'm trying to understand more, but testing things takes a VERY long time :p

The higher your iterations get and the lower your loss values drop, the greater the chance of model collapse.
 

dpfks

DF Enthusiast
Staff member
Administrator
Verified Video Creator
TheMadFaker said:
After more reading around I see others starting with 10 / 10 and then lowering to 0.1 / 4 or something. What is the logic behind doing that? I'm trying to understand more, but testing things takes a VERY long time :p

For me, I use the high values first (when using style power) to make the skin tones match better, then I reduce to 0.1/4 to make the result look more like the data_src celebrity before converting.
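To put some rough intuition on the numbers: the style power values act as weights on extra loss terms that pull the model's output toward the dst frame. A very simplified sketch of the idea (not DeepFaceLab's actual code; the names here are made up):

Code:
import numpy as np

def total_loss(pred_src, target_src, pred_on_dst, dst_face_area, dst_bg_area,
               face_style_power=10.0, bg_style_power=10.0):
    # Main term: reconstruct the src face accurately (this keeps the src identity).
    reconstruction = np.mean((pred_src - target_src) ** 2)

    # Style terms: push the swapped face and the area around it toward the
    # dst frame. The "power" values are just multipliers on these terms.
    face_style = face_style_power * np.mean((pred_on_dst - dst_face_area) ** 2)
    bg_style = bg_style_power * np.mean((pred_on_dst - dst_bg_area) ** 2)

    return reconstruction + face_style + bg_style

At 10/10 the style terms dominate, so the skin tone (and eventually the facial features) drift toward dst; dropping to 0.1/4 lets the reconstruction term pull the identity back toward data_src before converting.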
 

VirginBoI

DF Pleb
TheMadFaker said:
After more reading around I see others starting with 10 / 10 and then lowering to 0.1 / 4 or something. What is the logic behind doing that? I'm trying to understand more, but testing things takes a VERY long time :p

Bruh, don't use it. Face style and background style power not only change skin tones but also morph the face.

If you keep it on for long it will collapse your model: you'll get sudden spikes in loss values or a black output.

Even if that doesn't happen, the src face changes rapidly within just 5-6k iterations, and that's why people reduce it afterwards, to let the source face un-morph.
But remember the tones also change back to the original just as fast.

For skin tones it's better to train your model with varied SRC facesets covering different lighting, and later use Adobe software to do some tricks to adjust the body tone to match the face if it still isn't completely similar.
 

deep88

DF Vagrant
I started with 10 / 10 and have now switched to 0.1 / 4.

Does the face style power have anything to do with the grey area on the face in my preview? It also sometimes seems too "art like", not natural.
See photo attached.

cdnjApGh.jpg


Will it recover with the new 0.1 / 4 settings?
 

dpfks

DF Enthusiast
Staff member
Administrator
Verified Video Creator
deep88 said:
I started with 10 / 10 and have now switched to 0.1 / 4.

Does the face style power have anything to do with the grey area on the face in my preview? It also sometimes seems too "art like", not natural.
See photo attached.

cdnjApGh.jpg


Will it recover with the new 0.1 / 4 settings?

Kinda, the longer you train with higher style power, the more it will look like your destination model (skin tone included). The grey around the face is because the data_dst skin color is different. Background style is at 4 and will likely be adding to the greyness around the mask.

If you want to have the same skin color as your data_dst, just train with random color transfer = y
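For the curious, the rough idea behind random color transfer is that during training each src sample gets its colour statistics shifted toward a randomly picked dst sample, so the model learns to render the src identity under dst-like colouring. A Reinhard-style sketch of that idea (simplified, not the exact DFL implementation):

Code:
import cv2
import numpy as np

def reinhard_color_transfer(src_bgr, dst_bgr):
    # Match the mean/std of src to dst in LAB space (the rough idea behind "rct").
    src = cv2.cvtColor(src_bgr, cv2.COLOR_BGR2Lab).astype(np.float32)
    dst = cv2.cvtColor(dst_bgr, cv2.COLOR_BGR2Lab).astype(np.float32)

    src_mean, src_std = src.mean(axis=(0, 1)), src.std(axis=(0, 1)) + 1e-6
    dst_mean, dst_std = dst.mean(axis=(0, 1)), dst.std(axis=(0, 1))

    out = (src - src_mean) / src_std * dst_std + dst_mean
    out = np.clip(out, 0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_Lab2BGR)

# During training, each src sample would be shifted toward a random dst sample:
# src_aug = reinhard_color_transfer(src_face, random.choice(dst_faces))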
 

deep88

DF Vagrant
dpfks said:

Kinda, the longer you train with higher style power, the more it will look like your destination model (skin tone included). The grey around the face is because the data_dst skin color is different. Background style is at 4 and will likely be adding to the greyness around the mask.

If you want to have the same skin color as your data_dst, just train with random color transfer = y

Ok, I am going to try this.

I have 2 more questions that I haven't found an answer for:

1. What does 'ca_weights' mean? It is not mentioned in the tutorial.
2. There is a slight zoom in the destination video, meaning the person's face is getting closer. In this case the source face becomes a little blurry. Can it be resolved somehow?

Thanks for the help.
 

dpfks

DF Enthusiast
Staff member
Administrator
Verified Video Creator
deep88 said:

Ok, I am going to try this.

I have 2 more questions that I haven't found an answer for:

1. What does 'ca_weights' mean? It is not mentioned in the tutorial.
2. There is a slight zoom in the destination video, meaning the person's face is getting closer. In this case the source face becomes a little blurry. Can it be resolved somehow?

Thanks for the help.

CA weights (convolution-aware weight initialization) will provide a more accurate model, but will take longer.

If the face is close up to the camera there is no fix for the blurriness unless you can train at a higher resolution.
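To illustrate why close-ups go soft: the decoder outputs the face at a fixed training resolution, and the merger has to scale it up to however large the face is in the dst frame, so a close-up spreads the same pixels over a much bigger area. A quick illustration with made-up numbers:

Code:
import cv2
import numpy as np

model_res = 128            # example: resolution the model was trained at
face_size_in_frame = 450   # close-up: the dst face covers roughly 450 px

# Stand-in for the decoder output (a model_res x model_res face image).
predicted_face = np.random.rand(model_res, model_res, 3).astype(np.float32)

# The merger stretches 128 px of detail over 450 px; that upscaling is where
# the softness comes from, and only a higher-resolution model really fixes it.
upscaled = cv2.resize(predicted_face, (face_size_in_frame, face_size_in_frame),
                      interpolation=cv2.INTER_CUBIC)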
 

deep88

DF Vagrant
What is the way to train in high resolution?
I have used YouTube videos of at least 720p for my source data.


deep88

DF Vagrant
deep88 said:
What is the way to train in high resolution?
I have used YouTube videos of at least 720p for my source data.

The merged photos are much better, but somehow the face doesn't seem so natural.
What's the way to improve the result?

opWwXVIh.png
DkffZXah.png
WErqd58h.png
 

deep88

DF Vagrant
TheMadFaker said:
How many iterations in are you?  In your first example you're just over 8000.  You won't get good results until at least 100000

I relaunched the training; it's at 7,270 iterations. I read that if I am satisfied with the preview, training can be stopped.
The preview gives a pretty clear result, so I thought it would be good enough.

GE3WlRdh.jpg


So no matter what the preview shows, it is recommended to go for 100k iterations at least?
 
If you are satisfied with the result then that's great, but I would say train WAY longer on that example. In the preview, the far-right image is your final output, and that's the one you want clear and matching as well as possible.
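Iteration counts are only a rule of thumb; what really matters is whether the loss and that far-right preview column have stopped improving. If you log the loss values yourself, a crude plateau check could look like this (the window and tolerance are arbitrary examples):

Code:
import numpy as np

def has_plateaued(loss_history, window=5000, tolerance=0.002):
    # Compare the mean loss over the last `window` iterations with the
    # window before it; if it barely improved, training has flattened out.
    if len(loss_history) < 2 * window:
        return False
    recent = np.mean(loss_history[-window:])
    previous = np.mean(loss_history[-2 * window:-window])
    return (previous - recent) < tolerance

# Example usage with per-iteration loss values copied from the trainer output:
# losses = [0.95, 0.94, ...]
# if has_plateaued(losses):
#     print("Loss is flat: check the far-right preview column before stopping.")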
 

deep88

DF Vagrant
I am experiencing some improvement, but a question has come up.
How important is the number of dst images?
I mean, is it enough to use only one video for the destination for training, or do I need as many photos for the destination model as possible, just like with the source model?
 

dpfks

DF Enthusiast
Staff member
Administrator
Verified Video Creator
train until 130k+ and it'll be nice.

The number of images in data_dst isn't important.
 

deep88

DF Vagrant
dpfks said:
train until 130k+ and it'll be nice.

The number of images in data_dst isn't important.

Thanks, currently I am at 41k.
I noticed that Colab stops when I am not using my MacBook. I know the tab must stay open, but maybe sleep mode also stops the process?
 

VirginBoI

DF Pleb
deep88 said:
I started with 10 / 10 and have now switched to 0.1 / 4.

Does the face style power have anything to do with the grey area on the face in my preview? It also sometimes seems too "art like", not natural.
See photo attached.

cdnjApGh.jpg


Will it recover with the new 0.1 / 4 settings?

Bruh, set face style power and background style power both to 0.0, as they basically do more harm than good.

'Apply random colour transfer to SRC dataset' should be your choice, because it not only gets you the skin tone but is also less GPU intensive and more stable.

Face style power = colour/skin tone
Background style power = skin morphing to the dst face

Both are constant values that need to be monitored, else the change will be drastic and within 5-6k iterations you will not recognise your src face.
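DFL asks for these values interactively, so in practice you just re-enter lower numbers when the preview starts drifting, but conceptually it amounts to a step-down schedule like this (the cut-offs and values below are only examples):

Code:
def style_power_for(iteration):
    # Example schedule: high at first so skin tones transfer quickly,
    # then stepped down so the src identity stops morphing toward dst.
    if iteration < 10_000:
        return 10.0, 10.0   # (face style power, background style power)
    elif iteration < 30_000:
        return 2.0, 6.0
    else:
        return 0.1, 4.0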


TheMadFaker said:
How many iterations in are you?  In your first example you're just over 8000.  You won't get good results until at least 100000

Who said that? I mean, at 62k iterations with a variety of SRC and limited dst (dst doesn't matter that MUCH) you can get pretty decent results.

If you are targeting something hyper realistic and completely stable then go for 100000 iterations.
 

deep88

DF Vagrant
VirginBoI said:

Bruh, set face style power and background style power both to 0.0, as they basically do more harm than good.

'Apply random colour transfer to SRC dataset' should be your choice, because it not only gets you the skin tone but is also less GPU intensive and more stable.

Face style power = colour/skin tone
Background style power = skin morphing to the dst face

Both are constant values that need to be monitored, else the change will be drastic and within 5-6k iterations you will not recognise your src face.

Random colour transfer seems to be working perfectly with the example I posted.
However, I am experimenting with another example where the face becomes too light in some cases.
dmmrNB0h.jpg

I use colour transfer here too, with the same src.
Is there a way to correct this, or will more iterations fix it?
Thanks.
 