While stochastic gradient descent (SGD) and its variants have been surprisingly successful for training deep nets, several aspects of their optimization dynamics and generalization are still not well understood. In this paper, we present new empirical observations and theoretical results on both the optimization dynamics and the generalization behavior of SGD for deep nets, based on the Hessian of the training loss and associated quantities. We consider three specific research questions: (1) What is the relationship between the Hessian of the loss and the second moment of stochastic gradients (SGs)? (2) How can we characterize the stochastic optimization dynamics of SGD with fixed step sizes based on the first and second moments of SGs? (3) How can we characterize a scale-invariant generalization bound for deep nets based on the Hessian of the loss? Throughout the paper, we support our theoretical results with empirical observations, with experiments on synthetic data, MNIST, and CIFAR-10 across different batch sizes and difficulty levels (varied by synthetically adding random labels).
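To make question (1) concrete, the sketch below (an illustrative toy example, not the paper's method or results) shows the simplest setting where the Hessian of the loss and the second moment of per-example stochastic gradients are tightly related: least-squares regression with homoscedastic noise, where at the minimizer the second moment of SGs is approximately the noise variance times the Hessian.

```python
import numpy as np

# Toy least-squares setting: f_i(w) = 0.5 * (x_i^T w - y_i)^2.
# Per-example stochastic gradient: g_i = r_i * x_i with residual r_i = x_i^T w - y_i.
# Hessian of the training loss: H = (1/n) * X^T X.
# Second moment of SGs:         M = (1/n) * sum_i r_i^2 * x_i x_i^T.
# With noise variance sigma^2, at the minimizer M is approximately sigma^2 * H.
rng = np.random.default_rng(0)
n, d, sigma = 20000, 3, 0.5
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + sigma * rng.standard_normal(n)

w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # minimizer of the training loss
r = X @ w_hat - y                              # per-example residuals

H = X.T @ X / n                                # Hessian of the loss
M = (X * r[:, None] ** 2).T @ X / n            # second moment of per-example gradients

rel_err = np.linalg.norm(M - sigma**2 * H) / np.linalg.norm(sigma**2 * H)
print(rel_err)  # small: M is close to sigma^2 * H at the minimizer
```

For deep nets this exact proportionality breaks down, which is precisely why the abstract's first research question is nontrivial; the toy case only fixes intuition for what "relationship between the Hessian and the second moment of SGs" means.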
|Original language||English (US)|
|Title of host publication||Proceedings of the 2020 SIAM International Conference on Data Mining, SDM 2020|
|Editors||Carlotta Demeniconi, Nitesh Chawla|
|Publisher||Society for Industrial and Applied Mathematics Publications|
|Number of pages||9|
|State||Published - 2020|
|Event||2020 SIAM International Conference on Data Mining, SDM 2020 - Cincinnati, United States|
Duration: May 7 2020 → May 9 2020
|Name||Proceedings of the 2020 SIAM International Conference on Data Mining, SDM 2020|
|Conference||2020 SIAM International Conference on Data Mining, SDM 2020|
|Period||5/7/20 → 5/9/20|
Bibliographical note (Funding Information):
Acknowledgement: The research was supported by NSF grants OAC-1934634, IIS-1908104, IIS-1563950, IIS-1447566, IIS-1447574, IIS-1422557, CCF-1451986.