MrDeepFakes Forums
Hi, I'm a newbie to DeepFaceLab, so maybe I'm doing something wrong. I have never been able to produce a successful result video. It's always the same: the "result_mask" video only lasts a few seconds, then the "result" video lasts the same amount of time before the image freezes, although the audio continues for the full duration of the actual "data_dst".

I think I'm doing all the steps correctly:

1) clear workspace
2) extract images from video data_src
3) extract images from video data_dst FULL FPS
4) data_src faceset extract
5) data_dst faceset extract
6) train SAEHD
7) merge SAEHD (using the Quick96 version for both train and merge gives me the same result)
8) merged to mp4 (avi or any other format makes no difference)

Whether I edit the data_dst mask or the data_src mask makes no difference. I use all the default parameters when asked (although I have also tried changing some parameters and still get the same problem). And I'm using the Tony Stark and Elon Musk videos provided with the release download, specifically the 07_04_2020 build.
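
From that symptom (the image freezes while the audio keeps going), a common cause is that the merger wrote fewer frames than data_dst contains, so the encoder holds the last merged frame for the remainder of the audio. As a quick check you can compare the frame counts; a minimal sketch, assuming the default workspace layout (adjust the path to your install):

```python
# Compare how many frames were extracted from data_dst with how many frames
# the merger actually wrote. If "merged" holds fewer images, the final video
# will freeze on the last merged frame while the audio keeps playing.
from pathlib import Path

workspace = Path(r"workspace")  # placeholder: your DFL workspace folder
exts = {".jpg", ".png"}

dst_frames = [p for p in (workspace / "data_dst").iterdir() if p.suffix.lower() in exts]
merged_frames = [p for p in (workspace / "data_dst" / "merged").iterdir() if p.suffix.lower() in exts]

print(f"data_dst frames : {len(dst_frames)}")
print(f"merged frames   : {len(merged_frames)}")
if len(merged_frames) < len(dst_frames):
    print("The merger stopped early -- re-run merge and watch the console for errors.")
```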

Now, I have found other people in other threads complaining about the same (or a similar) problem.

Is this a known issue? Are any of you also getting the same problem? Or maybe I am doing something wrong, I don't know. Thanks in advance!

If trained for long enough, would a model end up with a good color match even without any color transfer option used in training or merging?

Meaning: does basic training, without any color transfer method, also train color matching? It seems so, but I'm not sure.

Nobody has time to do this, but in theory, if you trained your src-dst for 2 million iterations, would you get a perfect color match?

Just wondering about the inner workings of DFL.
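
For context on what the ct_mode options do: they apply a classical color transfer to the src samples so the network doesn't have to learn the color shift on its own; without it, the decoder typically only partially absorbs the difference. Below is a minimal sketch of Reinhard-style statistics matching in LAB space, roughly the idea behind an rct-like mode; it is an illustration assuming OpenCV/NumPy, not DFL's actual code:

```python
# Reinhard-style color transfer: match the per-channel mean/std of a source
# face to a target face in LAB space. Illustrative only; DFL's ct_mode
# implementations differ in detail (e.g. they operate on the masked region).
import cv2
import numpy as np

def reinhard_transfer(src_bgr: np.ndarray, dst_bgr: np.ndarray) -> np.ndarray:
    src = cv2.cvtColor(src_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    dst = cv2.cvtColor(dst_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)

    src_mean, src_std = src.mean(axis=(0, 1)), src.std(axis=(0, 1)) + 1e-6
    dst_mean, dst_std = dst.mean(axis=(0, 1)), dst.std(axis=(0, 1))

    # Shift and scale each LAB channel of src toward dst's statistics.
    out = (src - src_mean) / src_std * dst_std + dst_mean
    out = np.clip(out, 0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_LAB2BGR)

# Usage: matched = reinhard_transfer(src_face_img, dst_face_img)
```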

[Six attached images]

What is fastest?

Example:

I have a model pretrained on generic faces at 400k, then train it for 75k with src-dst.

Then I upgrade the generic pretrain up to 650k.

I want to make a new fake with the same src; which of the two will give the best speed and/or quality?

Hello,

Noob here, trying to understand how I can tweak my SAEHD settings to improve my results.

Any suggestions based on what you see in the results vs. the settings would be appreciated.

I think most of the issues with this result come down to the quality of the dst video (the biggest issue is the mouth), but are there any settings here you would suggest I try that, given the scene, would give better results?

Also, if I make changes to the model settings, do I have to re-train SAEHD from scratch, or can I pick up from the current iteration?

Thanks,

================= Model Summary ==================
==                                              ==
==            Model name: putin on donald_SAEHD ==
==                                              ==
==     Current iteration: 100915                ==
==                                              ==
==--------------- Model Options ----------------==
==                                              ==
==            resolution: 128                   ==
==             face_type: wf                    ==
==     models_opt_on_gpu: True                  ==
==                 archi: df                    ==
==               ae_dims: 256                   ==
==                e_dims: 64                    ==
==                d_dims: 64                    ==
==           d_mask_dims: 22                    ==
==       masked_training: True                  ==
==             eyes_prio: False                 ==
==           uniform_yaw: False                 ==
==            lr_dropout: n                     ==
==           random_warp: True                  ==
==             gan_power: 1.0                   ==
==       true_face_power: 0.0                   ==
==      face_style_power: 0.0                   ==
==        bg_style_power: 0.0                   ==
==               ct_mode: none                  ==
==              clipgrad: False                 ==
==              pretrain: False                 ==
==       autobackup_hour: 0                     ==
== write_preview_history: False                 ==
==           target_iter: 0                     ==
==           random_flip: True                  ==
==            batch_size: 8                     ==
==                                              ==
==----------------- Running On -----------------==
==                                              ==
==          Device index: 0                     ==
==                  Name: GeForce GTX 1080 Ti   ==
==                  VRAM: 11.00GB               ==
==                                              ==
==================================================

I'm currently seeing @tutsmybarreh in yellow in the shoutbox and in purple in the Latest Threads block on the right. Ten minutes ago he was purple in both. I think I saw this happen to @Grrkin a few days ago as well.

Please note, this is unsupported, but it has always worked for me.

This procedure has been detailed by iperov himself a long time ago.

It also used to be the only way to change preview images before that functionality was added into the training questions.



First, move your existing model to another directory.

Then start a new model using the exact same settings as the old one.  If you use the preview function you'll have to choose 'new' preview images again.  This was how we changed preview images previously.  You can also set target iterations to 1, so that it stops training basically as soon as it starts.

Once the model starts training, stop it immediately if you didn't set a target iteration of 1.

Now copy all the old model files back except for the _data.dat, for example mymodel_SAEHD_data.dat. This is the file that contains the iteration history and previews. The _summary.txt file doesn't matter, as it will get overwritten the next time the model saves anyway.

This should give you all your old model files with a new _data.dat. Start training and run through the training options to make sure they look OK. Once training starts, the previews should look the same as they did before, i.e. already trained up, not starting fresh, but the iteration counter will have been reset.
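
If you do this regularly, the file shuffling is easy to script. A minimal sketch, assuming the file naming described above (mymodel_SAEHD_*) and placeholder paths:

```python
# Copy an old model's files back over a freshly created model, keeping only
# the new _data.dat (which holds the iteration counter and previews).
# Folder paths and the model prefix below are placeholders.
import shutil
from pathlib import Path

model_dir = Path(r"workspace/model")          # live model folder (new model)
backup_dir = Path(r"workspace/model_backup")  # where the old model was moved
model_prefix = "mymodel_SAEHD"                # e.g. "putin on donald_SAEHD"

for old_file in backup_dir.glob(f"{model_prefix}_*"):
    if old_file.name.endswith("_data.dat"):
        continue  # keep the new, reset _data.dat
    shutil.copy2(old_file, model_dir / old_file.name)

print("Done -- start training and check that the options still look right.")
```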


This was asked in another thread, but that thread was closed with the response that you can't. You certainly can; it's just a manual process that isn't built into DFL. If this procedure is undesirable to have documented, please just delete this thread.

Hi

Anyone have solid tips for not losing the session in Google Colab?

I have burned through about 20 Google accounts in the last week.

Normally the session disconnects and you are no longer allowed to use the GPU. It doesn't matter whether I wait 1 day or 1 week; the account seems locked out of Google's GPUs.

I have tried the console scripts that click reconnect, but it still happens.

BR

Hey, I'm very new to deepfaking and I have some questions on collecting datasets. 

I'm currently using Faceswap (because it's one of the few programs that supports AMD GPUs) with the Dfl-H128 trainer (since it's a decent trainer that doesn't take a week to train) and mostly default settings (sharpen enabled).

But I'm a bit confused about how the datasets affect how my initial deepfakes will look.
My first deepfake was an attempt to swap Jerma (since he has 6 hours of greenscreen footage) with the newsman who can pronounce Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch perfectly. I trained the model on the first minute of the Jerma greenscreen and the respective news clip (1,700 frames and 500 frames).


They turned out alright, albeit a bit glitchy whenever either face turned to the side, and also a bit blurry. For some reason the newsman swap did better, but that might have been due to the large difference in skin tone.

However, when I try with a larger and, I think, better dataset, the results turn out horrendously blurry. Specifically, I'm swapping Jerma (using around 5,000 frames of the shots where he is close to the camera in the greenscreen video, as opposed to just standing in the background) with Linus Tech Tips (4,000 frames of him sitting and talking from "Red's Overpriced "Mini Mag" Cards - The Real Story").

So I'm just here to ask: what should I look for in datasets? The only thing I think could be skewing this comparison is that the Linus dataset had some issues with extraction thinking some profile pictures were faces. However, extraction seemingly already labeled those as different faces (like "Linus_000275_0" vs "Linus_000275_1"), so I don't know how badly it affected the data.
Sorry if I sound like a dumbass; I'm new to this stuff.
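
On the multiple-faces issue: when the extractor finds more than one face in a frame, it suffixes the outputs with a face index, as in the example above, and the extra detections are usually background faces you'd want to delete. A minimal sketch to list frames that produced more than one face, assuming that naming convention (the folder path and extension are placeholders):

```python
# Group extracted face images by frame ("<frame>_<face index>", e.g.
# Linus_000275_0 / Linus_000275_1) and list frames with more than one face,
# so spurious background detections can be reviewed and removed by hand.
from collections import defaultdict
from pathlib import Path

faces_dir = Path(r"faces/linus")  # placeholder: your extracted-faces folder
groups = defaultdict(list)

for img in faces_dir.glob("*.png"):  # adjust the extension if needed
    frame, _, face_idx = img.stem.rpartition("_")
    if face_idx.isdigit():
        groups[frame].append(img.name)

for frame, names in sorted(groups.items()):
    if len(names) > 1:
        print(f"{frame}: {len(names)} faces -> {', '.join(sorted(names))}")
```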

I'm new to this and not entirely sure I'm doing things with the best workflow, which may be the issue. What I'm noticing is:

1. I have a 2070 S GPU, but I cannot do a batch size greater than 5 at 130 resolution using the df architecture.
2. The final merged dst video is extremely banded, even though the original dst video is sharp and high-res. (Is this related to not being able to train at higher resolution? If so, how can I start to fix that?) The model is trained to about 100k iterations using Whole Face.
(Edit: I found out how to reduce the banding well enough in DaVinci Resolve.)

Any help at all would be greatly appreciated!

Thanks!
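
On the banding in item 2: it usually comes from smooth gradients being quantized to 8 bits and then compressed, not from the training resolution. Since it's already handled in Resolve, purely as an alternative, here is a minimal sketch that adds light dither noise to the merged frames before encoding (the merged folder path is a placeholder):

```python
# Light dithering: add a small amount of noise to each merged frame so smooth
# gradients don't collapse into visible bands after 8-bit encoding.
import cv2
import numpy as np
from pathlib import Path

def dither_frame(frame_bgr: np.ndarray, strength: float = 1.5) -> np.ndarray:
    noise = np.random.normal(0.0, strength, frame_bgr.shape).astype(np.float32)
    return np.clip(frame_bgr.astype(np.float32) + noise, 0, 255).astype(np.uint8)

merged_dir = Path(r"workspace/data_dst/merged")  # placeholder path
for img_path in merged_dir.glob("*.png"):
    frame = cv2.imread(str(img_path))
    cv2.imwrite(str(img_path), dither_frame(frame))
```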
