MrDeepFakes Forums
Best Settings For 1050 Ti
#1
I have a 1050 Ti 4 GB. I was using FakeApp last year and it worked fine. I installed DeepFaceLab, but it gives errors, probably allocation (out of memory).

So what options do you recommend for my device?

Code:
===== Model summary =====
== Model name: SAE
==
== Current iteration: 0
==
== Model options:
== |== batch_size : 8
== |== sort_by_yaw : False
== |== random_flip : False
== |== resolution : 128
== |== face_type : f
== |== learn_mask : True
== |== optimizer_mode : 1
== |== archi : df
== |== ae_dims : 512
== |== e_ch_dims : 42
== |== d_ch_dims : 21
== |== multiscale_decoder : False
== |== ca_weights : False
== |== pixel_loss : False
== |== face_style_power : 10.0
== |== bg_style_power : 10.0
== |== apply_random_ct : False
== Running on:
== |== [0 : GeForce GTX 1050 Ti]
=========================
Starting. Press "Enter" to stop training and save model.
Error: OOM when allocating tensor with shape[64512,512] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
        [[{{node mul_67}} = Mul[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Adam/beta_1/read, Variable_8/read)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

        [[{{node add_29/_1117}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_7871_add_29", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

Traceback (most recent call last):
 File "D:\Downloads\DeepFaceLabCUDA9.2SSE\_internal\DeepFaceLab\mainscripts\Trainer.py", line 107, in trainerThread
   iter, iter_time = model.train_one_iter()
 File "D:\Downloads\DeepFaceLabCUDA9.2SSE\_internal\DeepFaceLab\models\ModelBase.py", line 404, in train_one_iter
   losses = self.onTrainOneIter(sample, self.generator_list)
 File "D:\Downloads\DeepFaceLabCUDA9.2SSE\_internal\DeepFaceLab\models\Model_SAE\Model.py", line 423, in onTrainOneIter
   src_loss, dst_loss, = self.src_dst_train (feed)
 File "D:\Downloads\DeepFaceLabCUDA9.2SSE\_internal\python-3.6.8\lib\site-packages\keras\backend\tensorflow_backend.py", line 2715, in __call__
   return self._call(inputs)
 File "D:\Downloads\DeepFaceLabCUDA9.2SSE\_internal\python-3.6.8\lib\site-packages\keras\backend\tensorflow_backend.py", line 2675, in _call
   fetched = self._callable_fn(*array_vals)
 File "D:\Downloads\DeepFaceLabCUDA9.2SSE\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1439, in __call__
   run_metadata_ptr)
 File "D:\Downloads\DeepFaceLabCUDA9.2SSE\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 528, in __exit__
   c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[64512,512] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
        [[{{node mul_67}} = Mul[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Adam/beta_1/read, Variable_8/read)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

        [[{{node add_29/_1117}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_7871_add_29", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

Done.
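For scale, the tensor in the OOM message above is not huge on its own, but Adam keeps two extra accumulator tensors per variable, so a rough back-of-envelope estimate (plain Python, nothing DeepFaceLab-specific) shows why a 4 GB card fills up fast:

```python
# Rough VRAM estimate for the tensor in the OOM message above:
# shape [64512, 512], dtype float32 (4 bytes per element).
def tensor_mb(shape, bytes_per_elem=4):
    n = 1
    for d in shape:
        n *= d
    return n * bytes_per_elem / 2**20  # MiB

weight = tensor_mb([64512, 512])
# Adam keeps two moment accumulators (m and v) per variable,
# so optimizer state roughly triples the memory for this tensor.
with_adam = weight * 3
print(f"weight: {weight:.0f} MiB, with Adam state: {with_adam:.0f} MiB")
# → weight: 126 MiB, with Adam state: 378 MiB
```

And that is just one variable; activations, gradients, and every other layer's weights compete for the same 4 GB.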

It's working OK with H64. What settings do you guys recommend for H64?
#2
Just play around with the settings until it no longer gives OOM errors.

== |== resolution : 128  ← decrease this
Also decrease the batch size.
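As a rough rule of thumb (an approximation, not DeepFaceLab's exact memory model): activation memory grows about linearly with batch size and quadratically with resolution, so the two knobs above trade off like this:

```python
# Back-of-envelope scaling of activation memory relative to the
# OP's settings (batch 8, resolution 128). Linear in batch size,
# quadratic in resolution (pixel count grows with res squared).
def relative_activation_cost(batch, res, base_batch=8, base_res=128):
    return (batch / base_batch) * (res / base_res) ** 2

print(relative_activation_cost(8, 128))  # baseline     → 1.0
print(relative_activation_cost(8, 64))   # half res     → 0.25
print(relative_activation_cost(4, 128))  # half batch   → 0.5
```

Halving the resolution buys far more headroom than halving the batch size, which is why it is usually the first knob to turn on a 4 GB card.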
#3
A bit late, but this might help someone else searching the forums for the same GPU...

I have a 1050 Ti, and the best I can seem to do before getting OOM errors is:

Autobackup: True
Write preview history: True
Sort by yaw: False
Random flip: False
Resolution: 128
Face type: full
Learn mask: True
Optimizer mode: 2
Archi: df
AE dims: 256
E ch dims: 38
D ch dims: 19
CA weights: True
Pixel loss: False
Face style power: 0
Bg style power: 0
Apply random ct: False
Clipgrad: True
Batch size: 8

Optimizer mode 3 will allow me to either bump the e & d ch dims up to 42 & 21 OR increase the batch size to 10... but I have to turn off learn mask, CA weights, and clipgrad, and I can't enable any of the other settings.

I'm a noob, so I can't say whether the increase in dims or batch size is worth using optimizer mode 3 and losing the other settings, so I've just been sticking with the first set of settings listed above... If I'm wrong, I'd appreciate someone letting me know. Thanks.
#4
With a 1050 Ti 4 GB, always use optimizer mode 3 and keep the resolution at 128. You can decrease the dims a bit, but I would leave them at default. Try the new SAEHD; it has lower default dims, so it may run better for you. Style powers use VRAM too; I personally don't use them and get decent results, and that's what I'd recommend.
In order of importance: higher resolution > higher batch size > style powers.
I'd try to get a batch size of at least 8; anything lower at 128 resolution may give too little detail or take too long to train. Very low batch sizes (like 2 or 4) are not recommended at all.
You can also turn off clipgrad, but remember to back up, because leaving it off makes it possible for the model to collapse. If you're too scared of losing progress, keep it enabled.
You can also try H128, but it's a half-face model, so it will always look worse than SAE because it only covers a small portion of the face.
A good alternative between the two would be SAEHD in medium face mode: around 30% bigger face area than half face, so it could potentially provide better detail than full face (more resolution is focused on the face itself) and should also look better than half face because, again, it covers a bigger area.
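A quick sanity check on that ~30% figure (simple geometry, assuming square aligned crops): a 30% gain in covered area needs only about 14% more edge length, so the per-feature resolution cost of medium face over half face is smaller than the area number suggests.

```python
# Area of a square crop scales with the square of its edge length,
# so edge_ratio = sqrt(area_ratio).
half_area = 1.0
medium_area = 1.3 * half_area  # "around 30% bigger face area"
edge_ratio = medium_area ** 0.5
print(f"edge length ratio: {edge_ratio:.2f}")  # → 1.14
```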
