MrDeepFakes Forums
DeepFaceLab Explained and Usage Tutorial
(09-11-2019, 05:50 PM)lkas0012 Wrote:
(09-11-2019, 02:55 PM)abudfv2008 Wrote: How to fix completely white column in preview window (where dst should be recognised)?

Roll back to an iteration of the model from before the white column appeared; hopefully you have autobackups turned on. I don't believe there's any other workaround, as the model has collapsed at that point.
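For reference, that rollback is just a file copy. A minimal sketch, with the assumption that autobackups live under workspace/model/autobackups/&lt;NN&gt; and that the lowest-numbered slot is the newest; check where your DFL build actually writes backups before relying on this:

```python
# Hedged sketch: roll the model back to the newest autobackup.
# The folder layout (workspace/model, workspace/model/autobackups/<NN>)
# and the "lowest slot number = newest" rotation are assumptions.
import shutil
from pathlib import Path

def restore_latest_autobackup(workspace="workspace"):
    model_dir = Path(workspace) / "model"
    backup_root = model_dir / "autobackups"
    slots = sorted(p for p in backup_root.iterdir() if p.is_dir())
    if not slots:
        raise FileNotFoundError("no autobackup slots found; nothing to roll back to")
    newest = slots[0]
    # Keep the collapsed state aside in case the rollback is wrong too.
    broken = Path(workspace) / "model_broken"
    broken.mkdir(exist_ok=True)
    for f in model_dir.glob("*.h5"):
        shutil.copy2(f, broken / f.name)
    # Overwrite the live model files with the backup copies.
    for f in newest.iterdir():
        if f.is_file():
            shutil.copy2(f, model_dir / f.name)
    return newest.name
```

Then resume training from the restored files, and delete model_broken once the preview looks sane again.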

Overlay still gives me a better result than seamless or seamless2 (which seem to be splotchy around the edges). rct still seems like the most stable and reliable color transfer mode. ebs picked up a bit of banding in a test I ran. idt looks promising and has a better "shape" to the face, but has clipped to solid white in areas in every example I've tried.

The problem is that it starts that way from the beginning. I moved an existing model over to this one and it is OK.
But if I start a new one with default settings (only batch size changed), it shows me the white column.
Hi All, loving the info in this forum. I'm currently working on a political music video for a UK rock band which will place Trump, Boris and many others on the band's bodies. The upshot of this is that I'll have a whole load of facesets that I can share with you all.

I do have a couple of questions if someone can help.

1) The model I build between person A (src) and person B from dst footage can only ever be used for that piece of footage, right?

2) The current configuration is 6400 pictures of src against 262 frames of dst, with a batch size of 32 and LIAEF128. It's looking pretty good, but I can see dst loss drops below 0.2 much quicker than src does. Is 6400 too large a dataset?

3) Is a batch size of 32 actually beneficial to the end result? In Task Manager I see that anything less actually under-utilises the GPU's 16 GB of RAM.

4) Can I restart the learning process with different parameters to further refine the model?

5) What's the best way to fix the odd rubbish frame? Is it possible to re-run the model but tag just the frames that need more attention? Or do I just create a new dst video with only the few frames I need visible (the rest of the video would be blank) and re-run it?

Thanks in advance for the help guys.[Image: nWXRBKph.png]
There's no option to use the dst model's mouth in certain frames (e.g. blowjobs), right?
Can this software correctly identify a face with a hijab scarf as well? Like this:

[Image: G6ECJ4mh.jpg]
(09-12-2019, 08:22 PM)tatu Wrote: Can this software correctly identify a face with a hijab scarf as well? Like this:

[Image: G6ECJ4mh.jpg]

If the face has no obstructions, then yes. The hijab is an open-face type of headwear as far as I know. It might actually be better, since there won't be hair to disturb the final result.
(09-12-2019, 01:36 PM)jimjimjim Wrote: [snipped: the five questions quoted above]

Why are you using LIAEF? Start again and switch to SAE.
You can't have too many source/dst images.
You can reuse models (as long as you don't change the topology, e.g. LIAEF vs. SAE).
There are guides around; I'd suggest reading those.
(09-13-2019, 03:37 AM)frosty3907 Wrote: [snipped: the questions and reply quoted above]

I reuse my models all the time, it saves so many hours.

Are there any updated guides on the recently added features to DFL? The new color modes, etc.

The "manual" on GitHub hasn't been updated and there's no changelog or anything, so I really don't know much about what has changed or how to put it to proper use.

I really wish they would document the features as they are added, even if it's just a tiny description. I did see one welcome change, though I haven't tested it yet: saving the session during conversion. I never understood why that wasn't there. I've been writing down my settings each time so I could tweak them if it wasn't doing what I wanted =)

EDIT:

I did find a changelog in the windows download, just not in the git repo.. interesting..
--
https://onedualityfakes.com (official forum)
== 13.09.2019 ==
SAE: removed multiscale decoder, because it's not effective

Does that mean I have to trash my models and start new ones without multiscale?
Or can the existing models be converted for the new version?
@aXu

In theory, you can create a new model and then replace the SAE_decoder.h5 in your trained model with the newly created one.
But I'm not sure if this will work.

Actually, you can just delete SAE_decoder.h5 and continue training.
A new file without the multiscale decoder will be generated automatically by DFL.
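The delete-and-retrain suggestion above amounts to removing one file (renaming is safer). A hedged sketch; the folder and file names are taken from this thread, so verify them against your own model folder:

```python
# Hedged sketch of the suggestion above: remove the multiscale decoder file
# so DFL regenerates it on the next training run. Folder and file names are
# assumptions based on the thread, not confirmed against every DFL build.
from pathlib import Path

def drop_decoder(model_dir="workspace/model", name="SAE_decoder.h5"):
    decoder = Path(model_dir) / name
    if decoder.exists():
        # Rename rather than delete, so the old decoder can be restored.
        decoder.rename(decoder.parent / (decoder.name + ".bak"))
        return True
    return False
```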
(09-13-2019, 03:37 AM)frosty3907 Wrote: [snipped: the questions and reply quoted above]
Thanks for the advice. I've actually read a lot of the tutorials on here, but there were just a few bits that eluded me.
I'm gonna switch to SAE and drop my sources to 2000. I had a feeling I was bombarding the algorithm with too much.

The only bit I don't know about is how to fix specific frames after the fact. I tried just feeding in a few frames and the software gave me an error saying there was too small a sample set to work from. So a bit of guidance on that would be great. Thank you.
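One common workaround (not confirmed anywhere in this thread) is to keep training on the full dst set, since a handful of frames really is too small a sample for training, and only restrict the conversion step to the frames you want to redo. A rough sketch that stages a chosen set of frames, plus their aligned faces, into a scratch folder for re-conversion; all paths and the zero-padded naming scheme are assumptions:

```python
# Hedged sketch: copy selected dst frames and their aligned faces into a
# scratch folder so conversion can be re-run on just those frames.
# Paths (workspace/data_dst, data_dst/aligned) and the zero-padded
# frame-number naming are assumptions; adjust to your workspace.
import shutil
from pathlib import Path

def stage_frames_for_reconvert(workspace, frame_numbers, scratch="data_dst_fix"):
    ws = Path(workspace)
    src_frames = ws / "data_dst"
    src_faces = src_frames / "aligned"
    out_frames = ws / scratch
    out_faces = out_frames / "aligned"
    out_faces.mkdir(parents=True, exist_ok=True)
    staged = 0
    for n in frame_numbers:
        stem = f"{n:05d}"  # assumed zero-padded frame naming
        for frame in src_frames.glob(stem + ".*"):
            shutil.copy2(frame, out_frames / frame.name)
            staged += 1
        # aligned faces are typically named <frame>_0.jpg, <frame>_1.jpg, ...
        for face in src_faces.glob(stem + "_*.*"):
            shutil.copy2(face, out_faces / face.name)
    return staged
```

After converting the scratch folder, splice the redone frames back over the originals before re-encoding the video.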

(09-13-2019, 07:06 AM)oneduality Wrote: [snipped: the reply about reusing models, quoted above]

When you say "reusing the model", in what manner? Does it still have to be the same src/dst, just a different segment of footage? Or can it be a completely different dst, so long as the subjects of src and dst are the same?
