It is well known that, for most datasets, the use of large-size minibatches
for Stochastic Gradient Descent (SGD) typically leads to slow convergence and
poor generalization. On the other hand, large minibatches are of great
practical interest as they allow for a better exploitation of modern GPUs.
Previous literature on the subject concentrated on how to adjust the main SGD
parameters (in particular, the learning rate) when using large minibatches. In
this work we introduce an additional feature, which we call minibatch
persistency, that consists of reusing the same minibatch for K consecutive SGD
iterations. The computational conjecture here is that a large minibatch
contains a significant sample of the training set, so one can afford to
slightly overfit it without worsening generalization too much. The approach
is intended to speed up SGD convergence, and also has the advantage of reducing
the overhead of loading data into GPU memory. We present
computational results on CIFAR-10 with an AlexNet architecture, showing that
even small persistency values (K=2 or 5) already lead to significantly faster
convergence and comparable (or even better) generalization compared with the
standard "disposable minibatch" approach (K=1), in particular when large
minibatches are used. The lesson learned is that minibatch persistency can be a
simple yet effective way to deal with large minibatches.
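The scheme described above can be sketched in a few lines. The following is a minimal illustration (not the authors' implementation) using plain NumPy SGD on a least-squares problem; the function name, hyperparameters, and problem setup are all illustrative assumptions. The only change relative to standard SGD is the inner loop that takes K gradient steps on the same sampled minibatch, with K=1 recovering the usual "disposable minibatch" scheme.

```python
import numpy as np

def sgd_with_persistency(X, y, lr=0.05, K=2, batch_size=32, epochs=20, seed=0):
    """SGD on the least-squares loss ||Xw - y||^2 / n with minibatch
    persistency: each sampled minibatch is reused for K consecutive
    gradient steps before a new one is drawn (K=1 = standard SGD)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    batches_per_epoch = max(1, n // batch_size)
    for _ in range(epochs):
        for _ in range(batches_per_epoch):
            # Draw one minibatch (sampling with replacement across batches,
            # an illustrative simplification).
            idx = rng.choice(n, size=min(batch_size, n), replace=False)
            Xb, yb = X[idx], y[idx]
            # Minibatch persistency: K updates on the *same* minibatch,
            # amortizing the cost of loading it onto the device.
            for _ in range(K):
                grad = 2.0 * Xb.T @ (Xb @ w - yb) / len(yb)
                w -= lr * grad
    return w
```

In a real training loop the saving comes from transferring each minibatch to GPU memory once and stepping on it K times, rather than from the arithmetic itself; this toy version only shows where the extra loop sits.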