[GUIDE] DeepFaceLab - Google Colab Tutorial
#11
Are the models trained in the DFL Google Colab compatible with the regular DFL, and vice-versa?
#12
(04-28-2019, 08:01 PM)tania01 Wrote: Are the models trained in the DFL Google Colab compatible with the regular DFL, and vice-versa?

Yes, it is the same.
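
For anyone wanting to move a Colab-trained model to a local DFL install, a minimal sketch (not from the original post; the archive name and paths are illustrative assumptions) is to zip the model folder and copy it to Drive:

# Assumes Drive is already mounted at /content/drive and the DFL
# workspace lives at /content/workspace; adjust paths to your setup.
!zip -r -q /content/model_backup.zip /content/workspace/model
!cp /content/model_backup.zip "/content/drive/My Drive/"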
#13
(04-28-2019, 09:28 AM)GhostTears Wrote: So once Colab resets, do we have to remount the automatically backed up workspace archive from the drive and resume training, or does this happen automatically?

Yes, you basically have to do everything over again, but it literally takes about 5 minutes to set up.
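
In case it helps, here is a minimal sketch of the restore step as a Colab cell (the archive name "workspace.zip" and the Drive paths are assumptions; match them to whatever your backup cell actually writes):

# Re-authorize and remount Drive after the Colab reset.
from google.colab import drive
drive.mount('/content/drive')

# Unpack the backed-up workspace archive into /content, then resume training.
!unzip -o -q "/content/drive/My Drive/workspace.zip" -d /content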

(04-28-2019, 09:35 AM)chervonij Wrote: @dpfks

Thank you very much for this guide.
Also, it is worth adding that Google does not like lengthy, heavy computations, so it is recommended to use Colab with two Google accounts and to alternate them every session. At the same time, you can store your data (connected as Drive) from just one of them.

Great, I will add this to the first post.

(04-28-2019, 12:34 PM)GhostTears Wrote: @Endalus have you found that SAE is reasonably better than H128? I've tried running SAE on my Shadow but it just seems so slow and I'm not willing to wait a long ass time for something that isn't arguably better than H128.

I use SAE to future-proof my models. SAE is actively being worked on while the others are not. Handling obstructions with FAN conversion is a huge reason to swap over.
#14
How do I delete/clear the old workspace? In "Files" on the left-side menu it still holds the old workspace, including all the files, and there is no option to delete it.
#15
(04-29-2019, 12:25 AM)tania01 Wrote: How do I delete/clear the old workspace? In "Files" on the left-side menu it still holds the old workspace, including all the files, and there is no option to delete it.

[Image: 95b9aQih.jpg]
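
If you prefer a code cell over the Files panel, a one-liner sketch (assuming the workspace sits at /content/workspace, as in the standard Colab setup) is:

# Delete the old workspace directory and everything in it; this is
# irreversible, so back up anything you still need to Drive first.
!rm -rf /content/workspace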
#16
How often does the preview update? Each save? I will update the first post with this.
#17
(04-29-2019, 01:10 AM)dpfks Wrote: How often does the preview update? Each save? I will update the first post with this.

Every 10 iterations
#18
(04-28-2019, 04:46 PM)Endalus Wrote:
(04-28-2019, 12:34 PM)GhostTears Wrote: @Endalus have you found that SAE is reasonably better than H128? I've tried running SAE on my Shadow but it just seems so slow and I'm not willing to wait a long ass time for something that isn't arguably better than H128.
If the clip I'm working on is pretty strictly the face looking directly at the camera without too much angle variation, I still use H128 because it's quite good at that. For anything with angles where both eyes aren't visible (side profiles and near side profiles), the head tilting back, or tilting forward, SAE really shines. The mask editor for brief obstructions (1-3 seconds) and FAN conversion have been a godsend too. I spend more time training SAE, but I spend a lot less time editing destination videos to remove angles that are bad for H128. That works out for me, because I value reducing the work I have to actively do more than reducing GPU usage, since I'm usually running Shadow and 2 Colab instances simultaneously. I also feel it leads to a higher-quality product, because I can avoid an awkward jump cut where an angle was removed, and blurry transition frames in cases where the head tilts all the way back or forward.

My only real complaint is that the model has been updated fairly frequently lately, so I've had to train a lot from scratch instead of re-using models. But a tip I have for converting an old model to a new one, which I don't really see anyone talk about, is taking a high-quality result from your previous model and extracting it as source faces to add to the training of your new model. It makes the model come together very quickly. I "converted" one of my old H128 models to SAE recently this way, and it was looking very usable, with teeth definition, by 50k iterations.
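
As a sketch of that trick in a Colab cell (the directory names and flags are assumptions based on the 2019-era DFL main.py; adjust to your own layout):

# Extract faces from the high-quality converted frames of the old model
# and drop them into the new model's aligned source-faces folder.
!python main.py extract \
    --input-dir workspace/converted_from_old_model \
    --output-dir workspace/data_src/aligned \
    --detector s3fd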



(04-28-2019, 04:02 PM)StrayNucleus Wrote:
(04-28-2019, 12:34 PM)GhostTears Wrote: @Endalus have you found that SAE is reasonably better than H128? I've tried running SAE on my Shadow but it just seems so slow and I'm not willing to wait a long ass time for something that isn't arguably better than H128.

As long as you're not doing a BJ scene, SAE proves to be superior. But if anything is interacting with the face area it doesn't do well, especially when the face is tilted down.

Have you tried the new FAN mask conversion options with SAE? It's handling face interactions and obstructions more convincingly than anything I've seen before.
Oh wow, no I haven't tried that! I guess I take back what I said. I'm going to begin training an SAE model now.
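
For reference, a rough sketch of launching the converter from Colab (the flags are assumed from the DFL main.py of that era; the FAN mask mode is chosen interactively when the converter prompts for a mask mode, e.g. fan-prd or fan-dst):

# Convert data_dst frames with the trained SAE model; picking a FAN
# mask mode at the interactive prompt handles obstructions over the face.
!python main.py convert \
    --input-dir workspace/data_dst \
    --output-dir workspace/data_dst/merged \
    --aligned-dir workspace/data_dst/aligned \
    --model-dir workspace/model \
    --model SAE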
#19
Why is the model being trained on a Tesla T4 taking longer for every iteration compared to my GTX 1070 Ti?


Tesla T4
===== Model summary =====
== Model name: SAE
==
== Current iteration: 609
==
== Model options:
== |== write_preview_history : True
== |== batch_size : 4
== |== sort_by_yaw : False
== |== random_flip : False
== |== resolution : 128
== |== face_type : f
== |== learn_mask : True
== |== optimizer_mode : 1
== |== archi : df
== |== ae_dims : 600
== |== e_ch_dims : 46
== |== d_ch_dims : 26
== |== multiscale_decoder : True
== |== ca_weights : True
== |== pixel_loss : False
== |== face_style_power : 0.0
== |== bg_style_power : 0.0
== Running on:
== |== [0 : Tesla T4]
=========================
Starting. Press "Enter" to stop training and save model.
[18:34:09][#001288][1179ms][2.5218][2.9308]



GTX 1070 Ti
===== Model summary =====
== Model name: SAE
==
== Current iteration: 471
==
== Model options:
== |== write_preview_history : True
== |== batch_size : 4
== |== sort_by_yaw : False
== |== random_flip : False
== |== resolution : 128
== |== face_type : f
== |== learn_mask : True
== |== optimizer_mode : 1
== |== archi : df
== |== ae_dims : 600
== |== e_ch_dims : 46
== |== d_ch_dims : 26
== |== multiscale_decoder : True
== |== ca_weights : True
== |== pixel_loss : False
== |== face_style_power : 0.0
== |== bg_style_power : 0.0
== Running on:
== |== [0 : GeForce GTX 1070 Ti]
=========================
Starting. Press "Enter" to stop training and save model.
[18:40:23][#000632][0980ms][2.9057][2.9094]


Difference: ~200ms
#20
(04-29-2019, 09:45 PM)tania01 Wrote: Why is the model being trained on a Tesla T4 taking longer for every iteration compared to my GTX 1070 Ti? [...]

I'm not sure, but I think it's because the T4 has a lower core clock speed.
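
One quick way to sanity-check that from a Colab cell (a sketch; these nvidia-smi query fields are standard) is:

# Compare the current/max SM clocks and the power cap; the T4 is a
# low-power card with lower boost clocks than a desktop GTX 1070 Ti,
# which can outweigh its newer architecture at batch size 4.
!nvidia-smi --query-gpu=name,clocks.sm,clocks.max.sm,power.limit --format=csv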