Mr DeepFakes Forums
tania01
Effect of RAM with mode 2 on batch size
#1
Since we know a higher batch size can be trained on mode 2, which uses system RAM in conjunction with VRAM: has anyone experimented with 8GB vs 16GB vs 32GB of RAM on mode 2? With VRAM held constant, how high did your batch size go with different amounts of RAM?

Example: I can train a SAE 128 model on my 1070 Ti (8GB VRAM) with 16GB DDR4 RAM on mode 1 with a batch size of 13.
I can bump the batch size up to 21 on mode 2, which uses the DDR4 RAM.

Sooooo... how much do you figure someone could bump up the batch size using 16GB, 32GB, or 64GB of DDR4 RAM while still on mode 2?
Don't tell me to try mode 3. There are pros and cons to mode 2 vs mode 3, and I'm not using 3 if I don't have to.
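A rough back-of-envelope may help frame the question. If mode 2 works by moving the optimizer state (e.g. Adam's two moment tensors) out of VRAM into system RAM, then the VRAM it frees is fixed by the model's parameter count, while per-sample activation memory stays in VRAM. Here is a minimal sketch of that arithmetic; the parameter count and per-sample activation cost below are illustrative assumptions, not measured DeepFaceLab values:

```python
# Back-of-envelope: batch-size headroom from moving Adam optimizer state
# to system RAM (as mode 2 appears to do). All sizes are assumptions.

BYTES_PER_FLOAT = 4
GIB = 1024 ** 3

def adam_state_bytes(n_params):
    """Adam keeps two fp32 moment tensors (m and v) per parameter."""
    return 2 * n_params * BYTES_PER_FLOAT

def max_batch(vram_bytes, fixed_bytes, per_sample_bytes):
    """Largest batch whose per-sample activations fit in leftover VRAM."""
    return (vram_bytes - fixed_bytes) // per_sample_bytes

vram = 8 * GIB                      # 1070 Ti / RTX 2080 class card
params = 120_000_000                # assumed SAE-128 parameter count
weights = params * BYTES_PER_FLOAT  # weights stay on the GPU either way
state = adam_state_bytes(params)    # in VRAM on mode 1, in RAM on mode 2
per_sample = 300 * 1024 ** 2        # assumed activation cost per sample

b_mode1 = max_batch(vram, weights + state, per_sample)  # state in VRAM
b_mode2 = max_batch(vram, weights, per_sample)          # state in RAM

print(f"mode 1 batch: {b_mode1}, mode 2 batch: {b_mode2}")
```

If this model of mode 2 is right, it also suggests an answer to the RAM question: once system RAM covers the optimizer state (a few GB for a model this size), extra RAM beyond that would not raise the batch further, because per-sample activations still have to fit in VRAM.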
#2
I have an RTX 2080 (8GB), and the maximum batch size is 10 with 2x8GB of 2400MHz RAM. How come a 1070 Ti 8GB can do a batch size of 13? Is it related to the memory clock?

I also want to know how much memory is best for raising the batch size, without wasting money buying too much of it.

On optimizer mode 2, my maximum batch size is 16 with the same 2x8GB of 2400MHz RAM.

Here are my other settings. They are the same in both cases except for batch size.
== Model name: SAE
== Current iteration: 190723
== Model options:
== |== autobackup : True
== |== write_preview_history : True
== |== batch_size : 10
== |== sort_by_yaw : False
== |== random_flip : True
== |== resolution : 128
== |== face_type : f
== |== learn_mask : True
== |== optimizer_mode : 1
== |== archi : df
== |== ae_dims : 512
== |== e_ch_dims : 42
== |== d_ch_dims : 21
== |== multiscale_decoder : True
== |== ca_weights : True
== |== pixel_loss : True
== |== face_style_power : 0.0
== |== bg_style_power : 0.0
== |== apply_random_ct : False
== |== clipgrad : True
== Running on:
== |== [0 : GeForce RTX 2080]
Starting. Press "Enter" to stop training and save model.
[18:39:07][#191221][0824ms][0.0340][0.1864]
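For anyone comparing runs, the status line above can be pulled apart programmatically. A minimal sketch, assuming the bracketed fields are wall-clock time, iteration number, milliseconds per iteration, and the two loss values (that reading of the fields is an assumption inferred from the sample line, not documented behavior):

```python
# Parse a DFL-style training status line into its bracketed fields.
# Field meanings (time, iteration, ms/iter, two losses) are assumed.
import re

LINE = "[18:39:07][#191221][0824ms][0.0340][0.1864]"

m = re.fullmatch(
    r"\[(\d{2}:\d{2}:\d{2})\]\[#(\d+)\]\[(\d+)ms\]\[([\d.]+)\]\[([\d.]+)\]",
    LINE,
)
time_str, iteration, ms_per_iter, loss_a, loss_b = m.groups()
print(iteration, ms_per_iter, loss_a, loss_b)  # → 191221 0824 0.0340 0.1864
```

Logging these per-iteration values across different batch sizes would let you compare ms/iteration directly, which is the number that actually tells you whether a bigger batch on mode 2 is worth it.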
