I saw that a 1080 Ti (11 GB) can train with batch size 16. I then tested an RTX 2080 (8 GB), and its maximum batch size is 10, the same as an RTX 2070, with optimizer_mode : 1. From what I've read, training in FP16 lets an RTX GPU effectively double its usable VRAM (roughly like going from 8 GB to 12 GB); it doesn't actually add hardware memory, but it lets the card fit more work. If so, the RTX 2080 should also be able to train with batch size 16. Is there any way to increase the batch size with optimizer_mode : 1, or to check whether training is running in FP16?
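For reference on why FP16 is said to "double" VRAM: each value takes 2 bytes instead of 4, so activations for the same batch need roughly half the memory. A back-of-envelope sketch (pure Python; the sizes below are illustrative for a 128x128 RGB input batch and ignore weights, gradients, and optimizer state, which also consume VRAM):

```python
# Rough activation-memory estimate for one input batch.
# This only counts the raw input tensor; real training memory also
# includes model weights, gradients, and optimizer state.
def batch_bytes(batch_size, resolution, channels, bytes_per_value):
    return batch_size * resolution * resolution * channels * bytes_per_value

fp32 = batch_bytes(16, 128, 3, 4)  # FP32: 4 bytes per value
fp16 = batch_bytes(16, 128, 3, 2)  # FP16: 2 bytes per value

print(f"FP32 batch: {fp32 / 1024**2:.1f} MiB")
print(f"FP16 batch: {fp16 / 1024**2:.1f} MiB")
```

So if the trainer actually ran in FP16 end to end, memory per sample would roughly halve, which is where the "8 GB behaves like more" claim comes from; whether a given build of the software uses FP16 depends on the framework configuration, not just on having an RTX card.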
===== Model summary =====
== Model name: SAE
==
== Current iteration: 0
==
== Model options:
== |== autobackup : True
== |== write_preview_history : True
== |== batch_size : 10
== |== sort_by_yaw : False
== |== random_flip : True
== |== resolution : 128
== |== face_type : f
== |== learn_mask : True
== |== optimizer_mode : 1
== |== archi : df
== |== ae_dims : 512
== |== e_ch_dims : 42
== |== d_ch_dims : 21
== |== multiscale_decoder : True
== |== ca_weights : True
== |== pixel_loss : False
== |== face_style_power : 0.0
== |== bg_style_power : 0.0
== |== apply_random_ct : False
== |== clipgrad : True
== Running on:
== |== [0 : GeForce RTX 2080]
=========================