MrDeepFakes Forums

Is 99999 a limit I was ignorant of when merging SAEHD? It looks like I might've forgotten my merger adjustments before I walked away from my computer, but it didn't occur to me that the maximum number of alignments that could be merged was under 100k. I tried searching for merge limit / mergerconfig limit / 99999 but couldn't pull up any relevant results, and I don't recall reading about this in the guides. I also checked 99999.jpg, and the last interactive merge matches what I have in the data_dst aligned folder.

1. Is there a way to get MergerConfig past 99999.jpg when using Merge SAEHD so I can finish my project?
2. If I need to split and re-stitch the video, can I still reuse the data_dst/aligned data?

I've been on this off and on for a couple of weeks and am just hoping to see some light at the end of the tunnel. Hoping someone can suggest a fix.
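If the 5-digit frame numbering really does cap a single merge run at 99,999 frames (that's an inference from the behaviour above, not something documented in the guides), the split for question 2 is just chunk arithmetic. A minimal sketch; `plan_chunks` and the cap value are assumptions for illustration:

```python
# Sketch of the chunk arithmetic for splitting a long merge, assuming the
# merger's 5-digit frame numbering caps a single run at 99,999 frames.
# This is an inferred limit, not one documented in the DFL guides.
def plan_chunks(total_frames: int, cap: int = 99_999):
    """Return (start, end) frame ranges, each at most `cap` frames long."""
    return [(start, min(start + cap, total_frames))
            for start in range(0, total_frames, cap)]

# The 129,235-frame project above would need two runs:
# plan_chunks(129_235) -> [(0, 99999), (99999, 129235)]
```

Since the aligned faces in data_dst/aligned are matched to their source frames by filename, the aligned data should remain reusable as long as the frame images keep their original names within each chunk.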
----------------------------------------------------------
MergerConfig 99999.jpg:
Mode: overlay
mask_mode: learned-prd*learned-dst
erode_mask_modifier: 0
blur_mask_modifier: 0
motion_blur_power: 0
output_face_scale: 0
color_transfer_mode: rct
sharpen_mode : None
blursharpen_amount : 0
super_resolution_power: 0
image_denoise_power: 0
bicubic_degrade_power: 0
color_degrade_power: 0
================
Merging: 100%|###############################################################| 129235/129235 [7:44:24<00:00,  4.64it/s]

I got an EVGA FTW3 RTX 3090 yesterday, but so far it doesn't seem to work at all with DeepFaceLab. I was previously using an RTX Titan, which was pretty stable and didn't really have many issues (other than ones caused by me choosing high settings it couldn't handle, running out of memory, etc.).

The 3090 doesn't seem to work, period. "Train SAEHD" can't index the GPU most of the time, and when it does, it loads some samples but never actually gets to the point where training starts.

I've done the usual stuff like a clean driver installation, but it hasn't made any difference.

I imagine it's probably either a driver issue or will require a newer version of DFL to be released?
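For what it's worth, cuBLAS failures on an otherwise healthy card fit the Ampere transition: the RTX 3090 reports compute capability 8.6, the RTX Titan (Turing) is 7.5, and TensorFlow builds compiled before CUDA 11 don't ship sm_8x kernels. A tiny sketch of that cutoff; the helper name is made up for illustration:

```python
# Ampere cards (compute capability 8.x, e.g. RTX 3090 at 8.6) need
# CUDA 11-era libraries; a TF 1.x build bundled with an older DFL release
# predates them, which would match the cuBLAS GEMM failure in the log above.
AMPERE_MIN_CC = (8, 0)

def needs_cuda11(compute_capability: tuple) -> bool:
    """True for GPU generations that require CUDA 11+ kernels."""
    return compute_capability >= AMPERE_MIN_CC

# RTX 3090 (8, 6) -> True; RTX Titan (7, 5) -> False
```

If that's the cause, no driver reinstall will help; it would take a DFL build shipping a CUDA 11-compatible TensorFlow.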

=============== Model Summary ===============
==                                         ==
==            Model name: jilamiga_SAEHD   ==
==                                         ==
==     Current iteration: 163395           ==
==                                         ==
==------------- Model Options -------------==
==                                         ==
==            resolution: 224              ==
==             face_type: wf               ==
==     models_opt_on_gpu: True             ==
==                 archi: df               ==
==               ae_dims: 512              ==
==                e_dims: 80               ==
==                d_dims: 80               ==
==           d_mask_dims: 30               ==
==       masked_training: False            ==
==             eyes_prio: False            ==
==           uniform_yaw: False            ==
==            lr_dropout: n                ==
==           random_warp: True             ==
==             gan_power: 0.0              ==
==       true_face_power: 0.0              ==
==      face_style_power: 0.0              ==
==        bg_style_power: 0.0              ==
==               ct_mode: none             ==
==              clipgrad: False            ==
==              pretrain: False            ==
==       autobackup_hour: 0                ==
== write_preview_history: False            ==
==           target_iter: 0                ==
==           random_flip: False            ==
==            batch_size: 4                ==
==                                         ==
==-------------- Running On ---------------==
==                                         ==
==          Device index: 0                ==
==                  Name: GeForce RTX 3090 ==
==                  VRAM: 24.00GB          ==
==                                         ==
=============================================
Starting. Press "Enter" to stop training and save model.
2020-09-26 17:59:38.642223: E tensorflow/stream_executor/cuda/cuda_blas.cc:698] failed to run cuBLAS routine cublasSgemm_v2: CUBLAS_STATUS_EXECUTION_FAILED
Error: Blas GEMM launch failed : a.shape=(4, 512), b.shape=(4, 100352), m=512, n=100352, k=4
         [[node gradients/MatMul_5_grad/MatMul_1 (defined at D:\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\ops\__init__.py:55) ]]

Caused by op 'gradients/MatMul_5_grad/MatMul_1', defined at:
  File "threading.py", line 884, in _bootstrap
  File "threading.py", line 916, in _bootstrap_inner
  File "threading.py", line 864, in run
  File "D:\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\mainscripts\Trainer.py", line 57, in trainerThread
    debug=debug,
  File "D:\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\ModelBase.py", line 189, in __init__
    self.on_initialize()
  File "D:\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 471, in on_initialize
    gpu_G_loss_gvs += [ nn.gradients ( gpu_G_loss, self.src_dst_trainable_weights ) ]
  File "D:\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\ops\__init__.py", line 55, in tf_gradients
    grads = gradients.gradients(loss, vars, colocate_gradients_with_ops=True )
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 664, in gradients
    unconnected_gradients)
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 965, in _GradientsHelper
    lambda: grad_fn(op, *out_grads))
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 420, in _MaybeCompile
    return grad_fn()  # Exit early
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 965, in <lambda>
    lambda: grad_fn(op, *out_grads))
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_grad.py", line 1132, in _MatMulGrad
    grad_b = gen_math_ops.mat_mul(a, grad, transpose_a=True)
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 5333, in mat_mul
    name=name)
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 788, in _apply_op_helper
    op_def=op_def)
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3300, in create_op
    op_def=op_def)
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 1801, in __init__
    self._traceback = tf_stack.extract_stack()

...which was originally created as op 'MatMul_5', defined at:
  File "threading.py", line 884, in _bootstrap
[elided 3 identical lines from previous traceback]
  File "D:\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\ModelBase.py", line 189, in __init__
    self.on_initialize()
  File "D:\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 336, in on_initialize
    gpu_dst_code     = self.inter(self.encoder(gpu_warped_dst))
  File "D:\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
    return self.forward(*args, **kwargs)
  File "D:\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 102, in forward
    x = self.dense2(x)
  File "D:\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\layers\LayerBase.py", line 14, in __call__
    return self.forward(*args, **kwargs)
  File "D:\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\layers\Dense.py", line 66, in forward
    x = tf.matmul(x, weight)
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_ops.py", line 2455, in matmul
    a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 5333, in mat_mul
    name=name)
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 788, in _apply_op_helper
    op_def=op_def)
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3300, in create_op
    op_def=op_def)
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 1801, in __init__
    self._traceback = tf_stack.extract_stack()

InternalError (see above for traceback): Blas GEMM launch failed : a.shape=(4, 512), b.shape=(4, 100352), m=512, n=100352, k=4
         [[node gradients/MatMul_5_grad/MatMul_1 (defined at D:\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\ops\__init__.py:55) ]]

Traceback (most recent call last):
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1334, in _do_call
    return fn(*args)
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1319, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1407, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.InternalError: Blas GEMM launch failed : a.shape=(4, 512), b.shape=(4, 100352), m=512, n=100352, k=4
         [[{{node gradients/MatMul_5_grad/MatMul_1}}]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\mainscripts\Trainer.py", line 123, in trainerThread
    iter, iter_time = model.train_one_iter()
  File "D:\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\ModelBase.py", line 462, in train_one_iter
    losses = self.onTrainOneIter()
  File "D:\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 636, in onTrainOneIter
    src_loss, dst_loss = self.src_dst_train (warped_src, target_src, target_srcm_all, warped_dst, target_dst, target_dstm_all)
  File "D:\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 503, in src_dst_train
    self.target_dstm_all:target_dstm_all,
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 929, in run
    run_metadata_ptr)
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1152, in _run
    feed_dict_tensor, options, run_metadata)
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1328, in _do_run
    run_metadata)
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1348, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InternalError: Blas GEMM launch failed : a.shape=(4, 512), b.shape=(4, 100352), m=512, n=100352, k=4
         [[node gradients/MatMul_5_grad/MatMul_1 (defined at D:\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\ops\__init__.py:55) ]]

Caused by op 'gradients/MatMul_5_grad/MatMul_1', defined at:
  File "threading.py", line 884, in _bootstrap
  File "threading.py", line 916, in _bootstrap_inner
  File "threading.py", line 864, in run
  File "D:\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\mainscripts\Trainer.py", line 57, in trainerThread
    debug=debug,
  File "D:\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\ModelBase.py", line 189, in __init__
    self.on_initialize()
  File "D:\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 471, in on_initialize
    gpu_G_loss_gvs += [ nn.gradients ( gpu_G_loss, self.src_dst_trainable_weights ) ]
  File "D:\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\ops\__init__.py", line 55, in tf_gradients
    grads = gradients.gradients(loss, vars, colocate_gradients_with_ops=True )
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 664, in gradients
    unconnected_gradients)
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 965, in _GradientsHelper
    lambda: grad_fn(op, *out_grads))
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 420, in _MaybeCompile
    return grad_fn()  # Exit early
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 965, in <lambda>
    lambda: grad_fn(op, *out_grads))
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_grad.py", line 1132, in _MatMulGrad
    grad_b = gen_math_ops.mat_mul(a, grad, transpose_a=True)
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 5333, in mat_mul
    name=name)
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 788, in _apply_op_helper
    op_def=op_def)
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3300, in create_op
    op_def=op_def)
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 1801, in __init__
    self._traceback = tf_stack.extract_stack()

...which was originally created as op 'MatMul_5', defined at:
  File "threading.py", line 884, in _bootstrap
[elided 3 identical lines from previous traceback]
  File "D:\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\ModelBase.py", line 189, in __init__
    self.on_initialize()
  File "D:\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 336, in on_initialize
    gpu_dst_code     = self.inter(self.encoder(gpu_warped_dst))
  File "D:\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
    return self.forward(*args, **kwargs)
  File "D:\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 102, in forward
    x = self.dense2(x)
  File "D:\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\layers\LayerBase.py", line 14, in __call__
    return self.forward(*args, **kwargs)
  File "D:\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\layers\Dense.py", line 66, in forward
    x = tf.matmul(x, weight)
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_ops.py", line 2455, in matmul
    a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 5333, in mat_mul
    name=name)
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 788, in _apply_op_helper
    op_def=op_def)
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3300, in create_op
    op_def=op_def)
  File "D:\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 1801, in __init__
    self._traceback = tf_stack.extract_stack()

InternalError (see above for traceback): Blas GEMM launch failed : a.shape=(4, 512), b.shape=(4, 100352), m=512, n=100352, k=4
         [[node gradients/MatMul_5_grad/MatMul_1 (defined at D:\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\ops\__init__.py:55) ]]


Can anyone explain this error to me? I don't know coding, so I can't tell what I'm doing wrong.

I have multiple accounts on Colab, but in each one, if I'm allocated a GPU and run training, it works for 30 or so minutes and then just stops. I don't know why this is happening, and I haven't tried anything because I don't know what to try.
I also noticed some errors while installing DFL on Colab that I hadn't seen before, listed below.

ERROR: fancyimpute 0.4.3 requires tensorflow, which is not installed.
ERROR: datascience 0.10.6 has requirement folium==0.2.1, but you'll have folium 0.8.3 which is incompatible.
ERROR: albumentations 0.1.12 has requirement imgaug<0.2.7,>=0.2.5, but you'll have imgaug 0.2.9 which is incompatible.

I was working in the XSeg editor last night on my video but had to get to bed. I saved, closed out, and turned off my computer. When I woke up the next morning to resume XSeg masking, I got this error.

Running XSeg editor.
Traceback (most recent call last):
  File "C:\Users\zacha\Desktop\Deepfake\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\main.py", line 324, in <module>
    arguments.func(arguments)
  File "C:\Users\zacha\Desktop\Deepfake\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\main.py", line 274, in process_xsegeditor
    exit_code = XSegEditor.start (Path(arguments.input_dir))
  File "C:\Users\zacha\Desktop\Deepfake\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\XSegEditor\XSegEditor.py", line 1460, in start
    win = MainWindow( input_dirpath=input_dirpath, cfg_root_path=cfg_root_path)
  File "C:\Users\zacha\Desktop\Deepfake\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\XSegEditor\XSegEditor.py", line 1170, in __init__
    self.cfg_dict = pickle.loads(self.cfg_path.read_bytes()) if self.cfg_path.exists() else {}
_pickle.UnpicklingError: invalid load key, '\x00'.
Press any key to continue . . .

I get the same error for both the dst and src masks, even though I haven't even touched src yet.

I tried running as admin and got this error instead.

The system cannot find the path specified.
'""' is not recognized as an internal or external command,
operable program or batch file.
Press any key to continue . . .

If someone could please help: I've been working on this deepfake for almost a week now, and I will lose hope in humanity if I have to start over.



This is the error I receive when trying to run either mask - edit script (data_dst or data_src).
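That `invalid load key, '\x00'` usually means the editor's saved config file ended up full of zero bytes (e.g. from a power-off mid-write), so pickle can't parse it. A hedged sketch of the recovery; the actual config filename and location are assumptions, so check before deleting anything:

```python
# If the XSeg editor's pickled config was zero-filled by an unclean
# shutdown, deleting it lets the editor fall back to its `{}` default
# (per the `if self.cfg_path.exists() else {}` branch in the traceback
# above). The caller supplies the path; no real location is assumed here.
import pickle
from pathlib import Path

def remove_if_corrupt(cfg_path: Path) -> bool:
    """Delete cfg_path if it no longer unpickles; return True if removed."""
    if not cfg_path.exists():
        return False
    try:
        pickle.loads(cfg_path.read_bytes())
        return False
    except Exception:  # UnpicklingError, EOFError, etc.
        cfg_path.unlink()
        return True
```

The masks you drew are stored in the aligned face images themselves, not in this config file, so removing a corrupt config should not cost you the week of work.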

With the pretrained model I downloaded,

can I change other options, such as batch_size or the dims?
Also, can I switch options such as learning rate dropout from yes to no?
I'd like to know how far I can push these options.

Fast and the Furious franchise, a couple of years from now.


Hey guys, new guy here. I'd just like to share with you all a tutorial I made a couple of days ago. I only started learning DFL maybe two weeks ago, so if you see me doing something wrong or something I can improve on, please let me know. Thanks, and I hope this helps.


Hi

Is it possible to use the same face set I used for an FF model with a WF model as well? Or do I need to rerun face extraction on the set?

Cheers

Looking for someone to do a top-notch quality video of sexy UK model and TV presenter Maya Jama. Looking forward to your work, people.
