MrDeepFakes Forums

[SOLVED] Why can't RTX 2080 train by batch size 16 with optimizer_mode : 1?

dkny11354

DF Vagrant
I saw that a 1080 Ti (11 GB) can train at batch size 16. I then tested an RTX 2080 (8 GB) and the maximum batch size is 10 with optimizer_mode: 1, the same as an RTX 2070. From what I've read, in FP16 an RTX card can effectively fit more in VRAM, roughly as if 8 GB behaved like 12 GB. It doesn't improve the hardware itself, but it can do more work per pass. If that's the case, the RTX 2080 should be able to train at batch size 16 as well. Is there any way to increase the batch size with optimizer_mode: 1, or to check whether training is running in FP16?


===== Model summary =====
== Model name: SAE
==
== Current iteration: 0
==
== Model options:
== |== autobackup : True
== |== write_preview_history : True
== |== batch_size : 10
== |== sort_by_yaw : False
== |== random_flip : True
== |== resolution : 128
== |== face_type : f
== |== learn_mask : True
== |== optimizer_mode : 1
== |== archi : df
== |== ae_dims : 512
== |== e_ch_dims : 42
== |== d_ch_dims : 21
== |== multiscale_decoder : True
== |== ca_weights : True
== |== pixel_loss : False
== |== face_style_power : 0.0
== |== bg_style_power : 0.0
== |== apply_random_ct : False
== |== clipgrad : True
== Running on:
== |== [0 : GeForce RTX 2080]
=========================
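A quick way to check what the card actually reports is the generic diagnostic sketch below: it prints the usable VRAM and the device description TensorFlow sees, which includes the compute capability (7.5 on an RTX 2080; 7.x or newer is what enables tensor-core FP16). It is not part of DeepFaceLab, and it assumes TensorFlow is importable from the same environment the trainer runs in.

# Diagnostic sketch (not part of DeepFaceLab): list the GPUs TensorFlow sees,
# their usable VRAM, and the device description, which includes the compute
# capability.
from tensorflow.python.client import device_lib

for dev in device_lib.list_local_devices():
    if dev.device_type == "GPU":
        print(dev.name)
        print("  usable VRAM : %.2f GB" % (dev.memory_limit / 1024.0 ** 3))
        print("  description : %s" % dev.physical_device_desc)

Note that FP16 only reduces memory use if the model is actually built in half precision; having an RTX card by itself does not change how much VRAM the SAE model allocates.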
 

dpfks

DF Enthusiast
Staff member
Administrator
Verified Video Creator
The biggest reason is the extra VRAM: the 1080 Ti has 11 GB versus 8 GB on the 2080.

If you want higher batch sizes, you need more VRAM.

That doesn't mean it's faster, though.
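If you want to see how close a given batch size pushes the 8 GB card, one option is to poll the GPU with pynvml (pip install nvidia-ml-py3) while the trainer is running. This is an illustrative monitoring script, not a DeepFaceLab feature:

# Illustrative monitor (not part of DeepFaceLab): print the GPU's VRAM usage
# every few seconds while training runs in another window.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # GPU 0, the RTX 2080 in this setup

try:
    while True:
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print("VRAM used: %.2f / %.2f GB" % (mem.used / 1024.0 ** 3,
                                             mem.total / 1024.0 ** 3))
        time.sleep(5)
finally:
    pynvml.nvmlShutdown()

If usage is already near the 8 GB ceiling at batch size 10, batch size 16 will not fit without more VRAM or a different optimizer_mode.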
 
Status
Not open for further replies.